This article discusses recommendations for reform and better practice in relation to ADM use by government. A number of the recommendations made by the Parliament of Australia, the Robodebt Royal Commission, the Commonwealth Ombudsman, the Office of the Australian Information Commissioner and the Attorney-General’s Department require legislative amendments that are yet to be made. This article was first published in Privacy Law Bulletin issue 22.6 (August 2025).

Introduction

Increasingly, automated decision-making (ADM) is being used by both government agencies and private organisations around the world to streamline processes and speed up decisions. But the use of ADM has raised significant public concerns about a lack of transparency and accountability, and about potential bias in decision-making. The Office of the Australian Information Commissioner (OAIC) found in its 2023 Australian Community Attitudes to Privacy Survey that 89% of Australians believe they should have the right to know when their personal information is used in ADM if it could affect them, and that only 21% were comfortable with government agencies using AI to make decisions about them.

In Australia, the Robodebt scandal highlighted the risks associated with ADM.[i] Following the Royal Commission into the Robodebt Scheme (Robodebt Royal Commission), the Australian Government committed to considering opportunities for legislative reform to introduce a consistent framework for ADM in service delivery by public sector agencies.[ii]

There has been extensive discussion and analysis of ADM in relation to AI. This includes, at the Australian Government level, the Department of Industry, Science and Resources’ publication of the Artificial Intelligence (AI) Ethics Principles in 2019 (updated October 2024)[iii] and of a Voluntary AI Safety Standard in September 2024. Further, in March 2025 the Commonwealth Ombudsman, together with the OAIC and the Attorney-General’s Department (AGD), issued an updated version of its 2007 Better Practice Guide: Automated Decision Making (the ADM Better Practice Guide). The guide is ‘focused on practical guidance for agencies aimed to ensure compliance with administrative law and privacy principles, and best practice administration’.[iv]

This article outlines that guidance and the recommendations that have been made, many of which overlap, and considers progress toward their implementation.

What do we mean by ADM?

While the wording of definitions of ADM may vary, for the purposes of this article we have adopted the granular definition of ADM used by the Parliament of Australia in the Final Report of the Select Committee on Adopting Artificial Intelligence (AI) (November 2024), Chapter 5 of which focused on ADM:

ADM describes the use of computer systems to automate all or part of an administrative decision-making process. This can include using ADM to:

  • make a decision;
  • make an interim assessment or decision leading up to the final decision;
  • recommend a decision to a human decision-maker;
  • guide a human decision-maker through relevant facts, legislation or policy; and
  • automate aspects of the fact-finding process which may influence an interim decision or the final decision.[v]

Two of the 13 recommendations made by the Select Committee are directly relevant here:

Recommendation 11

5.127     That the Australian Government implement the recommendations pertaining to automated decision-making in the review of the Privacy Act, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.

Recommendation 12

5.130     That the Australian Government implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission pertaining to the establishment of a consistent legal framework covering ADM in government services and a body to monitor such decisions. This process should be informed by the consultation process currently being led by the Attorney-General’s Department and be harmonious with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.

These recommendations reinforce previous recommendations and calls for action discussed below.

Robodebt Royal Commission recommendations on review of ADM decisions and transparency of ADM usage

Robodebt scheme recap

The Robodebt scheme, which operated through various iterations between July 2015 and November 2019, used a method of automated debt assessment and recovery. It was implemented by Services Australia, an Australian government agency, as part of its Centrelink payment compliance program. The scheme aimed to replace the formerly manual system of calculating overpayments and issuing debt notices to welfare recipients with an automated data-matching system that compared Centrelink records with averaged income data from the Australian Taxation Office (ATO).

The scheme resulted in false or incorrectly calculated debt notices being issued, which caused distress and harm to recipients. The scheme was found by the Federal Court to be both flawed and unlawful.[vi]
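To illustrate why income averaging was apt to produce false debts, the following simplified Python sketch compares a fortnight-by-fortnight assessment with one based on averaged annual income. It is our own illustration only: the payment rate, income-free threshold and taper rule are hypothetical assumptions, not the scheme’s actual business rules or real benefit rates.

```python
# Simplified illustration only: the figures, payment rate and taper rule are
# hypothetical, not the actual Robodebt business rules or real benefit rates.

FORTNIGHTS_PER_YEAR = 26
FULL_RATE = 550.0           # hypothetical full fortnightly benefit
INCOME_FREE_AREA = 300.0    # hypothetical fortnightly income-free threshold
TAPER_RATE = 0.5            # hypothetical reduction per dollar above the threshold

def fortnightly_entitlement(income: float) -> float:
    """Benefit payable for a fortnight, given the income earned in that fortnight."""
    return max(0.0, FULL_RATE - max(0.0, income - INCOME_FREE_AREA) * TAPER_RATE)

# A person who worked for half the year, then claimed benefits (correctly
# reporting nil income) for the 13 fortnights in which they earned nothing.
income_by_fortnight = [1600.0] * 13 + [0.0] * 13
benefit_fortnights = range(13, 26)          # fortnights actually spent on benefits
paid = sum(fortnightly_entitlement(income_by_fortnight[f]) for f in benefit_fortnights)

# Correct assessment: entitlement based on the income actually earned in each
# benefit fortnight (nil), so the amount paid was the amount owed.
correct_entitlement = sum(fortnightly_entitlement(income_by_fortnight[f]) for f in benefit_fortnights)
correct_debt = paid - correct_entitlement                           # $0.00

# Averaging approach: smooth annual ATO income evenly across all 26 fortnights,
# then reassess the benefit fortnights as if that averaged income had been
# earned in each of them.
averaged_income = sum(income_by_fortnight) / FORTNIGHTS_PER_YEAR    # $800 per fortnight
reassessed = sum(fortnightly_entitlement(averaged_income) for _ in benefit_fortnights)
averaged_debt = paid - reassessed                                   # a spurious 'overpayment'

print(f"Debt using actual fortnightly income: ${correct_debt:,.2f}")    # $0.00
print(f"Debt using averaged annual income:    ${averaged_debt:,.2f}")   # $3,250.00
```

On these assumed figures, the averaging approach asserts a debt of $3,250 against a person who was in fact paid correctly, because income earned outside the benefit period is smoothed into fortnights in which nothing was earned.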

The Royal Commission recommendations

The Royal Commission’s report noted that the fallout from the Robodebt Scheme had been described as a ‘massive failure of public administration’.[vii] It warned of the prospect of future programs, using increasingly complex and more sophisticated AI and automation, having even more disastrous effects, magnified by the speed and scale at which AI can be deployed and by the increased difficulty of understanding where and how failures have arisen.

However, the report also acknowledged the benefits of AI and automation in administrative decision-making ‘when done well’.  These benefits included making government service delivery more efficient, cheaper and more accessible, as well as more consistent, more accurate and more transparent.[viii]

With this in mind, the Royal Commission’s report made the following recommendations:

Automated decision making

Recommendation 17.1: Reform of legislation and implementation of regulation

The Commonwealth should consider legislative reform to introduce a consistent legal framework in which automation in government services can operate.

Where automated decision-making is implemented:

  • there should be a clear path for those affected by decisions to seek review
  • departmental websites should contain information advising that automated decision-making is used and explaining in plain language how the process works
  • business rules and algorithms should be made available, to enable independent expert scrutiny.

Recommendation 17.2:  Establishment of a body to monitor and audit automated decision-making

The Commonwealth should consider establishing a body, or expanding an existing body, with the power to monitor and audit automated decision-making processes with regard to their technical aspects and their impact in respect of fairness, the avoiding of bias, and client usability.[ix]

The Government accepted both of these recommendations in its November 2023 response to the Royal Commission, committing, among other things, to consider opportunities for legislative reform to ensure appropriate oversight of the use of automation in service delivery, and to examine existing regulatory frameworks so that automation in government services operates within a consistent legal, ethical and governance framework with appropriate safeguards.[x]

Progress toward implementation

While the legislative reform process to implement these recommendations is not yet complete, significant progress has already been made:

  • The Digital Transformation Agency (DTA) Policy for the responsible use of AI in government came into effect on 1 September 2024. Commonwealth entities bound by the policy were required to publish transparency statements outlining their approach to AI adoption and use within six months of the policy taking effect (that is, by 28 February 2025). This includes AI used in ADM.
  • The functions of the Administrative Review Council, established under the Administrative Review Tribunal Act 2024, include monitoring the integrity and operation of the Commonwealth administrative law system and inquiring into systemic issues relating to the making of administrative decisions.[xi] This will likely involve oversight of circumstances where ADM is used either to make or to support administrative decisions.
  • In November 2024 the AGD commenced a consultation process on the use of ADM by government. Key themes arising from the consultation were the importance of transparency and of standards. The AGD is now leading the development of the legal framework for ADM. Until the proposed framework is released, existing Commonwealth ADM provisions will continue to apply. It is not yet clear when draft legislation implementing the Royal Commission’s Recommendations 17.1 and 17.2 will be publicly available for comment.
  • Recent privacy reforms include updates to the Australian Privacy Principles (APPs), commencing in December 2026, to increase transparency in relation to use of personal information in ADM (see discussion below).

Commonwealth Ombudsman Better Practice Guide on ADM

The Commonwealth Ombudsman’s ADM Better Practice Guide was updated in March 2025. It discusses a range of privacy issues associated with ADM, including the following.

  • Privacy by design and privacy impact assessments (PIAs) should form part of an agency’s regular risk management and planning processes when developing or reviewing a project that uses ADM.
  • From 10 December 2026, the openness and transparency obligations under APP 1 will require privacy policies to include information about the kinds of decisions made through ADM and the kinds of personal information used in those decisions (see fuller discussion below).
  • Providing APP 5 collection notices effectively at or as soon as practicable after collection can be challenging where there is the potential for the personal information to be used or disclosed for the purposes of complex ADM. Privacy notices need to communicate information-handling practices clearly and simply but with enough detail to be meaningful.
  • ADM systems can become, or be integrated with, databases of personal information, including sensitive information. These might already be held by the entity (having originally been collected for a different purpose), might be held by a third party, or might be linked with other datasets. Any uses or disclosures of personal information other than for the primary purpose of collection will need to be assessed for compliance with APP 6.
  • As far as possible, individuals should not be surprised as to how their personal information is handled in connection with an ADM system. Privacy policies and privacy notices could be updated accordingly, to ensure that people are aware of likely secondary uses and disclosures of personal information, including for the purposes of ADM.
  • Administrative law requirements that decisions must be based on reliable and relevant information are consistent with APP 10, which requires an agency to take reasonable steps to ensure that:
    • the personal information it collects is accurate, up-to-date and complete (APP 10.1) and
    • the personal information it uses or discloses is, having regard to the purpose of the use or disclosure, accurate, up-to-date, complete and relevant (APP 10.2).

This requirement may present challenges for large-scale ADM supported and underpinned by data analytics, AI and machine learning, particularly where such systems collect large amounts of data from diverse sources with limited opportunity to verify the relevance or accuracy of the information. Some automated algorithms may also create personal information that carries an inherent bias, that is discriminatory, or that leads to inaccurate or unjustified results. Agencies should take rigorous steps to ensure the quality of the personal information collected, as well as of any additional personal information created by the algorithms that process the data, and should have data validation processes in place before using personal information to inform ADM.
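By way of illustration only, the following simplified Python sketch shows what a basic pre-decision data validation gate might look like. The field names, recency threshold and rules are hypothetical assumptions on our part, not drawn from the ADM Better Practice Guide or from any agency’s actual systems.

```python
from datetime import date, timedelta

# Hypothetical validation gate: field names, recency threshold and rules are
# illustrative assumptions, not drawn from any agency's actual ADM system.
REQUIRED_FIELDS = ("full_name", "date_of_birth", "declared_income", "last_verified")
MAX_AGE = timedelta(days=365)   # treat records not verified within a year as stale

def validation_issues(record: dict) -> list[str]:
    """Return a list of data-quality issues; an empty list means the record
    may proceed to the automated assessment step."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
    last_verified = record.get("last_verified")
    if isinstance(last_verified, date) and date.today() - last_verified > MAX_AGE:
        issues.append("record not verified within the last 12 months")
    income = record.get("declared_income")
    if isinstance(income, (int, float)) and income < 0:
        issues.append("declared income is negative")
    return issues

record = {"full_name": "Jane Citizen", "date_of_birth": date(1990, 5, 1),
          "declared_income": 1250.0, "last_verified": date(2023, 1, 10)}

problems = validation_issues(record)
if problems:
    # Route to a human officer for verification rather than deciding automatically.
    print("Refer for manual review:", "; ".join(problems))
else:
    print("Record passes basic quality checks; proceed to automated assessment.")
```

The design point is simply that records failing basic quality checks are referred to a human officer rather than being fed into the automated decision step.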

Appendix A to the ADM Better Practice Guide contains a better practice checklist, summarising items to be addressed when considering implementing or updating an automated system for ADM, including the suitability of an automated system, administrative law issues, privacy, governance and design, and transparency and accountability. While no checklist can cover every issue that may arise given the rapidly evolving legislative and technological landscape applicable to AI and ADM, the checklist is a good starting point for the PIA that should be undertaken for each ADM project.

We note that the ADM Better Practice Guide was published after completion of the AGD’s consultation process for legislative reform, and with input from AGD, OAIC and other key stakeholders likely to participate in any AGD-led legislative reform initiatives.  We consider it likely that the legislative reform proposals will be informed by, and address key issues raised in, the ADM Better Practice Guide.

APP changes re ADM disclosure and impact on APP entity privacy policies

As indicated above, gradual reform of Australia’s privacy regime has been underway over several years, relevantly including in relation to the existing APP 1.3. This principle provides that an APP entity must have a clearly expressed and up-to-date policy (the APP privacy policy) about the management of personal information by the entity.

The Privacy and Other Legislation Amendment Act 2024 (Cth) (Privacy Amendment Act) commenced on 10 December 2024. However, Schedule 1, Part 15 of the Privacy Amendment Act, which contains the amendments relevant to ADM, does not commence until 10 December 2026, and those amendments will apply only to decisions made after that date. This is a long lead time, during which the sophistication of ADM technology will certainly increase.

Nevertheless, when these amendments come into effect, without limiting APP 1.3, the privacy policy of an entity using ADM processes must also disclose the types of personal information used, the types of decisions made by the program, and the types of actions taken as a result of these decisions. The new provisions specify that decisions can include not only granting or denying benefits, but also affecting rights under contracts, agreements, or access to services. The entity must provide this information if the decision could significantly impact an individual’s rights or interests.
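To illustrate the kinds of matters these disclosures will need to capture, the following simplified Python sketch shows a hypothetical internal register of ADM uses from which privacy policy disclosures could be drawn. The structure, field names and example entry are our own assumptions; they are not a form prescribed by the Privacy Amendment Act.

```python
from dataclasses import dataclass

# Hypothetical internal register of ADM uses: the structure, field names and
# example entry are illustrative assumptions, not prescribed by the
# Privacy Amendment Act.
@dataclass
class ADMUse:
    system_name: str
    personal_information_used: list[str]    # kinds of personal information used
    decisions_made: list[str]               # kinds of decisions made by the program
    actions_taken: list[str]                # kinds of actions taken as a result
    significant_effect: bool                # could it significantly affect rights or interests?

adm_register = [
    ADMUse(
        system_name="Benefit eligibility assessment tool",
        personal_information_used=["identity details", "declared income", "employment history"],
        decisions_made=["whether an applicant qualifies for a payment"],
        actions_taken=["granting or refusing the payment", "referral to a human reviewer"],
        significant_effect=True,
    ),
]

# Entries that could significantly affect an individual's rights or interests
# feed into the privacy policy disclosures required from 10 December 2026.
for use in adm_register:
    if use.significant_effect:
        print(f"{use.system_name}: disclose in privacy policy")
        print("  kinds of personal information used:", ", ".join(use.personal_information_used))
        print("  kinds of decisions made:", ", ".join(use.decisions_made))
        print("  kinds of actions taken:", ", ".join(use.actions_taken))
```

A register of this kind is one way of keeping the privacy policy aligned with the entity’s actual ADM footprint as systems are added or changed.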

Plainly, in order to meet this requirement, those responsible for amending an entity’s privacy policy must understand every aspect of ADM use by the entity.

If we assume that any adoption of ADM by an agency will have been guided by ‘privacy by design’ principles, a good first step toward fuller understanding is analysis of the PIA that should have been conducted in relation to an agency’s relevant ADM procurement or use functionalities involving personal information.

But this step must be followed by a hands-on walk-through of the system. This will enable the original documentation to be compared with both (1) later iterations of the documented architecture and the information flow charts showing decision points, and (2) the entity’s actual practice and its privacy law obligations. Check, for example, that consent was obtained if sensitive information was used to train AI models.

In our experience, whether system builds are undertaken in-house or by third parties, intended pathways, system features that affect privacy law compliance, and procedural steps may change (or be omitted) without that change being clearly flagged to either decision-makers or privacy officers. It is important for both transparency and accountability that assurances set out in privacy policies can be shown to be backed by an entity’s actual practices.

Lastly, we note that the Privacy Amendment Act did not introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made, as was recommended by the Select Committee (see above). Introduction of this right had also formed part of Proposal 19.3 of the AGD’s Privacy Act Review Report (February 2023), which was agreed to in the Government’s Response.[xii]

When such a provision is introduced, it will reinforce the need for privacy units to fully understand ADM use in their entity. Detailed disclosure as part of a privacy policy should aim to reduce the need for, and instances of, such requests for information.

Conclusion

It remains to be seen how effectively the legislative reform to be proposed by the AGD to address the ADM issues identified in the Robodebt Royal Commission report will balance the potential benefits of ADM with ensuring that personal information is appropriately protected. However, given the massive failure of public administration that occurred, and its impact not only on the affected individuals but also on public trust in government, there is every incentive to go above and beyond to avoid it happening again.

We have already seen significant progress in improving transparency and accountability in the use of AI and ADM in delivery of government services.  We eagerly await the AGD’s next move.   

[i] See the Commonwealth Ombudsman’s report Centrelink’s Automated Debt Raising and Recovery System, published April 2017.

[ii] AGD, Use of automated decision-making by government (Consultation Paper, November 2024) p. 6; Australian Government, Government Response: Royal Commission into the Robodebt Scheme (November 2023) p. 21.

[iii] See https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles

Australian AI Ethics Principles

  • Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

[iv] ADM Better Practice Guide, p. 6.

[v] For more discussion of what Commonwealth ADM means, see AGD, Use of automated decision-making by government, op cit, pp. 7 and following: ‘ADM refers to the use of automated systems (which may include AI) to carry out administrative actions and decisions’.

[vi] L Henriques-Gomes, ‘Robodebt: court approves $1.8bn settlement for victims of government’s “shameful” failure’, The Guardian (11 June 2021); Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634 (11 June 2021).

[vii] Prygodicz, ibid, quoted in the Report, p. 488.

[viii] Report of the Royal Commission into the Robodebt Scheme, p. 488.

[ix] Ibid, p. xvi.

[x] Australian Government, Government Response: Royal Commission into the Robodebt Scheme (November 2023) pp. 21–22.

[xi] Administrative Review Tribunal Act 2024, s. 249(1).

[xii] Government Response: Privacy Act Review Report (2023).


This article is for general information purposes only and does not constitute legal or professional advice.  It should not be used as a substitute for legal advice relating to your particular circumstances.  Please also note that the law may have changed since the date of this article.