AI and Automated Decision-Making: Are you just another number?


Justice Melissa Perry[*], co-authored with Sonya Campbell[+]

21 October 2021

Gilbert + Tobin Centre of Public Law, UNSW Law & Justice
NSW Chapter, Australian Institute of Administrative Law
'Kerr’s Vision Splendid for Administrative Law: Still Fit for Purpose? – Online Symposium on the 50th Anniversary of the Kerr Report'

1.  Introduction

The recommendations in the 1971 Kerr Committee Report were designed to “ensure the establishment and encouragement of modern administrative institutions able to reconcile the requirements of efficiency of administration and justice to the citizen”.[1]  As Robin Creyke has noted, the principle encapsulated here distinguishes the form of ‘justice’ which courts seek to render from ‘administrative justice’, where the needs of public administration temper justice to the individual.[2]

Automated and machine learning processes arm government with the capacity to make hundreds of millions of decisions annually affecting the lives of individuals at a speed and a cost unimaginable to the members of the Kerr Committee 50 years ago.  These processes can potentially promote consistency, accuracy, cost-effectiveness and timeliness in the making of government decisions. But they can also be consistently unlawful, unfair, and inaccurate, perpetuating stereotypes and reproducing flawed decisions on an unprecedented scale. 

Centrelink’s online compliance intervention system (also known as ‘robodebt’) is a case in point.  The now discredited and discarded system had the capacity to produce approximately 20,000 debt discrepancy notices per week, in contrast to the previous annual average of around 20,000 income data-match discrepancies when manual investigations were undertaken.[3]  However, the notices were issued on the basis of an erroneous view of the law, leading to unlawful demands being made on a massive scale, with consequential impacts and distress for the many vulnerable individuals to whom such notices were sent.

In the digital era, how then is the balance to be struck between the requirements of efficiency and those of legality and justice to the individual? Have you been reduced to just another number, despite the nature of the decision and its impact on you suggesting that your individual characteristics and circumstances should be taken into account?

Against this context, we will briefly touch on three issues:

  1. the issue of bias in such processes, relevant because biases potentially reduce us to data points input into the system, thereby denuding us of our essentially human characteristics and removing from the equation considerations potentially relevant to the decision in question;
  2. the adequacy of judicial review mechanisms to provide effective oversight of the use of such processes by the executive in decision-making; and
  3. limits on the circumstances in which such technologies should be utilised in administrative decision-making.

2.  Bias

What do we mean by bias when we speak of decisions made by machines?

Biases will in one sense always be embedded in a decision-making system, but they may be perfectly lawful because they accurately reflect the rules embodied in the law pursuant to which the decision in question is made. The concern is with unlawful biases, such as those which discriminate on the basis of protected human characteristics or which give weight to considerations irrelevant to the exercise of power.

Biases in these senses can infect decisions utilising such technologies in at least three principal ways.

First, there is a well-documented risk of bias where a human decision-maker relies upon material generated by computer technologies: an implicit assumption is made that machines reach more accurate conclusions, with consequential deference to the conclusions or recommendations made by the computer.  Computer says “no” – to quote from Little Britain – in the sense of a definite full stop to any further argument.  The risk in such cases is that the human decision-maker fails to bring a properly independent mind to bear on the issues.

Secondly, bias may infect a program at the design stage, reflecting deliberate choices or unconscious biases of the programmer.  It is important in this regard to bear in mind that the process of translating law into code will almost inevitably be an imperfect one, given among other things that laws require interpretation on which views may differ, that the purpose and meaning of a statute may not yet have been elucidated by the courts despite a lack of clarity or an ambiguity in its provisions, and that the language of machines is more limited than our vocabulary.[4]
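
To illustrate the point in a deliberately simplified way, the following sketch (in Python, using an entirely hypothetical eligibility criterion and threshold) shows how an evaluative statutory requirement can be flattened into a fixed numerical proxy when translated into code, with situation-specific considerations falling out of the decision altogether.

# A hypothetical illustration only: the statute is imagined to require that an
# applicant has taken "reasonable steps" to find work; the coded rule substitutes
# a fixed numerical proxy for that evaluative judgement.

from dataclasses import dataclass

@dataclass
class Applicant:
    job_applications_last_month: int   # the only factor the coded rule can "see"
    caring_responsibilities: bool      # context the proxy ignores
    lives_in_remote_area: bool         # context the proxy ignores

def reasonable_steps_taken(applicant: Applicant) -> bool:
    # "Reasonable steps" becomes "at least 20 applications per month"; caring
    # responsibilities, local labour-market conditions and other circumstances
    # never enter the decision.
    return applicant.job_applications_last_month >= 20

carer = Applicant(job_applications_last_month=8,
                  caring_responsibilities=True,
                  lives_in_remote_area=True)
print(reasonable_steps_taken(carer))  # False, whatever the individual circumstances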

Thirdly, machine learning systems derive their rules from training on historical data.  The danger, of course, is that this data may reflect the conscious or unconscious biases of earlier human decision-makers, perpetuating stereotypes and unfair or arbitrary discrimination. For this reason, historical sentencing and arrest data are particularly problematic training data.[5]  Such data may reflect outdated social values or fail to appreciate intersecting social disadvantage or the complexity of criminogenic factors.  This is evident if one reflects, for example, upon the changes in discrimination law postdating the Kerr recommendations in 1971.  Only in 1975, when the Racial Discrimination Act 1975 (Cth) was enacted, was it rendered unlawful under federal law to refuse service or rental accommodation to a person on the basis of the colour of their skin. And only in 1984 was sex discrimination outlawed by federal legislation, prohibiting the kind of discrimination suffered by the many women required to resign from their positions with the Australian Public Service on marriage, or who experienced other penalties in the workplace after marriage.

These are not far-fetched examples.  A hiring tool developed by Amazon in 2014 was trained to vet potential employees using patterns inferred from the analysis of 10 years’ worth of historical resumes submitted to the company.  Most of the resumes in the training data set came from men, reflecting the male-dominated nature of the technology industry and the historical trend at Amazon of hiring more male applicants. As a result, a preference for male candidates was inferred and the tool devalued resumes that included the word “women’s”, capturing phrases such as “women’s team” or “women’s award”.[6]  The tool was ultimately discovered to be biased and was decommissioned.
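
The mechanism can be illustrated with a deliberately toy example. The following sketch (in Python, using invented resumes and outcomes, and bearing no relation to Amazon’s actual system) shows how a crude screening score learned from historically skewed hiring decisions comes to penalise a gender-coded token such as “women’s”.

# A toy illustration only: the "training data" below is invented, consisting of
# past hiring outcomes in which resumes mentioning "women's" were rarely successful.

from collections import defaultdict

historical_resumes = [
    ("captain of men's chess club, software engineer", 1),    # 1 = hired
    ("software engineer, hackathon winner", 1),
    ("systems programmer, men's rowing team", 1),
    ("captain of women's chess club, software engineer", 0),  # 0 = not hired
    ("software engineer, women's coding society mentor", 0),
    ("systems programmer, women's rowing team", 0),
]

# Learn a crude per-token "hire rate" from the historical outcomes.
counts = defaultdict(lambda: [0, 0])  # token -> [times hired, times seen]
for text, hired in historical_resumes:
    for token in set(text.replace(",", "").split()):
        counts[token][0] += hired
        counts[token][1] += 1

def score(resume: str) -> float:
    # Average the historical hire rate of each known token: tokens that co-occur
    # with past rejections (here, "women's") drag the score down.
    tokens = [t for t in resume.replace(",", "").split() if t in counts]
    return sum(counts[t][0] / counts[t][1] for t in tokens) / len(tokens)

print(score("software engineer, men's debating society"))    # higher score
print(score("software engineer, women's debating society"))  # lower score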

In the last of these examples, Amazon was able to detect and respond to the bias in its own system.  However, for an individual who is subject to an adverse decision made using these kinds of technologies, identifying the existence of an impermissible bias in the system can prove a near impossible task.  Yet this may be a key question when a decision affecting the individual is challenged in the administrative law space.

First, biases can be difficult to discern due to the opacity of some automated decisions (particularly those resulting from machine-learning outputs). This complicates the task of human oversight and auditing in particular, which may be the best way to detect bias and discrimination problems.

Secondly, in order to comprehensively identify algorithmic biases, an individual seeking judicial review requires access to the relevant source code, algorithmic specifications and data. This creates an evidentiary challenge where governments resist releasing that material on the basis of proprietary barriers.[7]  For example, in a 2017 case involving a predictive policing algorithm, the petitioner invoked the public interest in the transparency of the impugned system.  The New York Police Department (NYPD) argued that it was required to respect the vendor’s trade secret and non-disclosure agreement.  It was submitted that disclosure of the relevant products’ test results would discourage potential vendors from contracting with the NYPD in the future and limit the pool of technology available to it.  The Supreme Court of the State of New York struck a balance by ordering the release of output data starting from six months before the date of the relevant decision but rejecting the request for disclosure of the input data.[8]

Thirdly, algorithms are incomprehensible to the majority of the population, who lack the technical literacy required to read and write in code. Indeed, the self-learning properties of machine-learning algorithms may result in decisions that even programmers cannot readily explain.[9]

Finally, even if a judicial review challenge to a biased automated decision is successful, the re-making of such a decision by a human decision-maker does not guarantee that the system will be re-programmed or re-trained with new data.[10]  This leads us to the second issue.

3.  Review mechanisms

Judicial review through the courts can form only part of the solution for addressing these sorts of challenges.  First, judicial review is generally reactive and ad hoc. As such, it is likely to be of limited utility in promoting the ideals of administrative law at a systemic level.[11]  Moreover, it may well be the case that a critical volume of automated decisions is required before a pattern can be discerned from which the existence of an impermissible bias can be inferred. This (and the absence of statistical proof in particular) may pose evidentiary issues for even the most diligent and astute of lawyers.  Furthermore, if test cases or class actions settle, the overall scheme may escape judicial scrutiny.
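
To illustrate why volume matters, the following simplified sketch (in Python, using wholly simulated decisions and invented group labels) shows the kind of aggregate comparison of adverse-outcome rates that may reveal a pattern which no individual decision, viewed in isolation, could disclose.

# A simulation only: 10,000 invented automated decisions in which one hypothetical
# group attracts adverse outcomes at a higher underlying rate than the other.

import random

random.seed(1)

decisions = (
    [{"group": "A", "adverse": random.random() < 0.10} for _ in range(5000)]
    + [{"group": "B", "adverse": random.random() < 0.18} for _ in range(5000)]
)

def adverse_rate(group: str) -> float:
    records = [d for d in decisions if d["group"] == group]
    return sum(d["adverse"] for d in records) / len(records)

rate_a, rate_b = adverse_rate("A"), adverse_rate("B")
print(f"Group A adverse rate: {rate_a:.1%}")
print(f"Group B adverse rate: {rate_b:.1%}")
print(f"Disparity ratio (B to A): {rate_b / rate_a:.2f}")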

Access to justice, especially for the vulnerable, is also a major issue. Judicial review applications can be expensive, particularly if expert evidence is required to, for example, interpret source code and data or analyse statistics, and the resources required to contest problematic decision-making mean that less privileged groups may be disproportionately affected.  Furthermore, individual litigants may not even be aware that a computer program has been used in the making of their decision,[12] let alone have the capacity to understand how the program may have led to reviewable error.

It follows that issues pertaining to the legality and fairness of automated and machine learning processes in administrative decision-making need to be addressed from the outset in the design phase, rather than left to the point of judicial review when harm may have multiplied on a vast scale.  As the Administrative Review Council advised back in 2004 in its landmark report on automated systems in government decision-making,[13] lawyers must be involved in the design, development, deployment and ongoing audit of automated systems and machine learning programs.

4.  Discretionary and evaluative decisions

Is automated decision-making appropriate for discretionary and evaluative decisions?

Pre-programmed, rules-based processes rely on a deterministic logic which is better suited to non-discretionary decisions.[14]  This is because decisions involving discretionary elements and situation-specific factors cannot be reduced to this logic.  Where pre-programmed processes are used to produce predetermined outcomes for discretionary decisions in the absence of human oversight, this could, among other things, constitute a constructive failure to exercise the discretion.[15]  The machine learning processes which we have already discussed in the context of bias are not deterministic, but their probabilistic decisions are shaped by human discretionary choices made in the design and training phases.  As earlier mentioned, their opacity and self-learning properties render the task of human oversight more difficult.
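
The distinction may be illustrated in simplified terms. In the following sketch (in Python, with hypothetical criteria not drawn from any particular statute), the first, non-discretionary criterion reduces cleanly to deterministic logic, while the second, evaluative criterion cannot be reduced in the same way without risking a constructive failure to exercise the discretion.

def meets_age_requirement(age: int) -> bool:
    # Non-discretionary: the (hypothetical) statutory threshold exhaustively
    # defines the rule, so deterministic code can apply it faithfully.
    return age >= 67

def exceptional_circumstances_warrant_relief(case_file: dict) -> bool:
    # Discretionary: the (hypothetical) provision asks whether the circumstances,
    # viewed as a whole, justify an exception. Hard-coding a predetermined mapping
    # from inputs to an outcome would pre-empt the evaluative judgement the
    # provision requires, so the matter is referred to a human decision-maker.
    raise NotImplementedError("evaluative judgement required: refer to a human decision-maker")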

It is very easy to fall prey to the allure of the efficiencies offered by new technologies that supplement or even supplant human decision-making. The danger of over-reliance on them is the dehumanisation of decision-making.  Discretion involves looking at individual circumstances within lawful bounds.  Thus, while human decision-making suffers from its own limitations due to our natural frailties (such as susceptibility to cognitive biases), the participation of humans in automated decision-making can help to identify errors and, as one commentator has observed, “humanise automated processes which are incapable of taking into account individual circumstances or other relevant context”.[16]  Human qualities such as empathy, compassion, the weighing of competing values and the availability of mercy cannot be replicated by machines.

In short, as Chief Justice Kourakis of the Supreme Court of South Australia has pertinently observed, writing extra-curially:

Deciding when future opportunities for transformation and redemption [of the individual] should prevail over [their] past failures involves what is an essentially human question.[17]

This is particularly so where the individual’s liberty or livelihood is at stake.



[*] Federal Court of Australia; LLB (Hons, Adel), LLM, PhD (Cantab), FAAL. 

[+] Associate to Justice Perry; BA/BTeach (UNE), JD (UNSW). 

[1] Robin Creyke, ‘Administrative Justice – Towards Integrity in Government’ (2007) 31 Melbourne University Law Review 705, 708.

[2] Ibid.

[3] Louise Macleod, ‘Lessons Learnt about Digital Transformation and Public Administration: Centrelink’s Online Compliance Intervention’ (2017) 89 Australian Institute of Administrative Law Forum 59, 59.

[4] Melissa Perry, ‘iDecide: Administrative Decision-Making in the Digital World’ (2017) 91(1) Australian Law Journal 29, 32.

[5] Céline Castets-Renard, ‘Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making’ (2019) 30(1) Fordham Intellectual Property, Media & Entertainment Law Journal 91, 99.

[6] Sarah Crossman and Rachel Dixon, ‘Government Procurement and Project Management for Automated Decision-Making Systems’ in Janina Boughey and Katie Miller (eds), The Automated State: Implications, Challenges and Opportunities for Public Law (Federation Press, 2021) 154, 160.

[7] Estefania McCarroll, ‘Weapons of Mass Deportation: Big Data and Automated Decision-Making Systems in Immigration Law’ (2020) 34(3) Georgetown Immigration Law Journal 705, 708, 728, 730.

[8] Brennan Center for Justice at New York University School of Law v New York City Police Department (NY Sup Ct, 160541/2016, 22 December 2017) slip op 32716(U) discussed in Castets-Renard (n 5) 104.

[9] Dominique Hogan-Doran, ‘Computer Says “No”: Automation, Algorithms and Artificial Intelligence in Government Decision-Making’ (2017) 13(3) Judicial Review 345, 374–5; Monika Zalnieriute, Lyria Bennett Moses and George Williams, ‘The Rule of Law and Automation of Decision-Making’ (2019) 82(3) Modern Law Review 425, 435, 442.

[10] Anna Huggins, ‘Addressing Disconnection: Automated Decision-Making, Administrative Law and Regulatory Reform’ (2021) 44(3) UNSW Law Journal 1048, 1069.

[11] Mark Aronson, Matthew Groves and Greg Weeks, Judicial Review of Administrative Action and Government Liability (Thomson Reuters, 6th ed, 2017) 5–6.

[12] Yee-Fui Ng, Maria O’Sullivan, Moira Paterson and Norman Witzleb, ‘Revitalising Public Law in the Technological Era: Rights, Transparency and Administrative Justice’ (2020) 43(3) UNSW Law Journal 1041, 1052.

[13] Administrative Review Council, Automated Assistance in Administrative Decision Making (Report No 46, 1 January 2004).

[14] See generally Will Bateman, ‘Algorithmic Decision-Making and Legality: Public Law Dimensions’ (2020) 94(7) Australian Law Journal 520, 522; Zalnieriute, Bennett Moses and Williams (n 9) 440.

[15] Perry (n 4) 33.

[16] Huggins (n 10) 1075; see also Tim Wu, ‘Will Artificial Intelligence Eat the Law?: The Rise of Hybrid Social-Ordering Systems’ (2019) 119(7) Columbia Law Review 2001, 2004–5.

[17] Chris Kourakis, ‘The Intersection of Artificial Intelligence and Other New Technologies with the Judicial Role’ (2019) 31(4) Judicial Officers’ Bulletin 33, 35.
