Emerging technologies and international frameworks

Presented at the Australian Law Librarians’ Association Conference.

Justice Melissa Perry* 9 August 2024

Introduction

My foray into the intersection between machines and the law began in 2004 on my part-time appointment to the Administrative Review Council (ARC). That body reported to the Attorney-General and was responsible for supervising the health of the federal administrative law system. It was the title of their then latest report that piqued my interest. What, I thought, is automated decision-making?

As far as I am aware, the ARC’s report was the first worldwide to consider what legal and regulatory frameworks should govern the use of machines in decision-making processes (relevantly, in the context of that body) in government decisions affecting the rights and interests of legal persons. Automated systems, which were then regarded as “state of the art”, employ coded logic or algorithms and data-matching to make, or assist in making, decisions, and continue to be used today. The foundations for the ARC’s recommendations were the primacy of the rule of law and the fundamental administrative law values of legality, procedural fairness, transparency, and accountability through access to merits and judicial review.

Broadly speaking, systems have evolved since this time to include machine learning, whereby machines detect and learn from patterns and correlations in data, and more recently, AI, whereby machines “exhibit or simulate intelligent behaviour”[1] based on predictive or probabilistic outputs derived from large data sets. This is not to suggest that there has been a succession of technologies in the sense that one has superseded the other. Rather, these technologies constitute broad classifications within a “fast evolving family of technologies”,[2] which may work in combination depending upon the particular system. Further, the development of quantum technologies, a critical emerging technology, will bring with it ever more powerful computers and global communication networks.

It is no surprise that governments should seek to utilise these technologies across the broad spectrum of their activities: hundreds of millions of decisions are made by government in Australia alone every year using such machine technologies. That trend is likely to continue in line with rapid growth in the volume, complexity, and subject-matter of decisions made by governments affecting private and commercial rights and interests, given the capacity of these technologies to promote consistent, accurate, cost-effective, and timely decisions.

This expansion in the use of such machine technologies is mirrored in the private sector. Indeed, it was estimated in 2023 that AI may contribute $22.17 trillion to the global economy by 2030, and $315 billion to the Australian economy.[3]

Yet the astronomical rate of change and scale of developments in machine technologies has far outpaced the capacity of humankind to develop the moral, ethical, legal and philosophical frameworks within which such technologies should be designed, developed, deployed and used. This is so notwithstanding significant recent progress nationally and internationally in developing appropriate regulatory frameworks and policies. It is therefore no overstatement to say that we are at a pivotal point in human history, confronted with the reality of machines of exceptional power which are capable of being harnessed for the betterment of humankind but also capable of great harm.

In addressing the topic of emerging technology today, I will focus on three key substantive aspects, while appreciating that it will be possible only to skim the surface of these complex topics:

  1. Current international collaboration on the creation of legal frameworks for the design, development, deployment and use of these new technologies; and
  2. Two key uses of AI systems classified as high-risk:
    1. first, the appropriate use of technology in decision making in the course of which I will touch on legal research; and
    2. secondly, the risks of misinformation and disinformation posed by new technologies in the context of modern means of communication.

International collaboration on framework principles

The rule of law and fundamental administrative law values recognised by the ARC in its ground-breaking report are reflected in current developments internationally with respect to the creation of legal frameworks for the design, training and development, and the deployment and monitoring of AI. These values, in turn, accord with, and promote compliance with, our international human rights obligations.

International co-operation has led to the development of guidelines specifically in the AI space for both governments and private entities. The first intergovernmental standard on AI was the Recommendation on Artificial Intelligence adopted by the Organisation for Economic Co-operation and Development (OECD) on 22 May 2019, which was updated earlier this year in light of new technological and policy developments (OECD Principles).[4] The OECD Principles make recommendations not only for governments but for “AI actors”, being “those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI”.[5] While accepting that implementation will differ in different jurisdictions in response to their own unique needs and circumstances, the OECD Principles are intended to promote harmonious global AI governance, recognising that these technologies know no boundaries.

Building on the OECD Principles, in November 2023 the UK Government hosted the AI Safety Summit at Bletchley Park, the birthplace of the modern computer: it was at Bletchley Park that the mathematician Alan Turing and other scientists broke the Nazis’ ENIGMA code. This summit brought together government representatives, leading AI companies, civil society groups and experts in research, with the composition of the summit itself recognising that the creation and implementation of legal regulatory frameworks requires collaboration not only between States, but with stakeholders and experts across a raft of disciplines.

The Bletchley Declaration which resulted from the summit was signed by the United Kingdom, the EU, China, the US, Australia and 25 other countries. This non-binding document recognises both the opportunities potentially afforded by AI “to transform and enhance human wellbeing, peace and prosperity”, while also recognising the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capability of these AI models”.

Among other responses to the risks identified in the Bletchley Declaration, the Declaration emphasised the “strong responsibility” borne by those developing AI systems, especially those of great potential power, and the need for increased transparency by those developing frontier AI capabilities.[6] Subsequently, the Seoul Ministerial Statement signed by 27 States, including Australia, and the EU, acknowledged the achievements initiated by the Bletchley Declaration and sought to direct the focus to implementing three inter-related priorities of safety, innovation and inclusivity.[7]

As I have elsewhere observed, “statements such as these highlight [that] Australia is not an island; its response to AI must, of necessity, take account of, and, to the extent appropriate, align with, international responses.”[8] Many of the risks arising from AI are, as the Bletchley Declaration recognised, “inherently international in nature, and so are best addressed through international cooperation”.[9]

Australia has recognised the importance of aligning its legislative and policy responses with these international instruments. The joint State, Territory and Commonwealth National Framework for the Assurance of Artificial Intelligence in Government, released in June 2024, affirmed the commitment to deeper international cooperation and dialogue.

The EU AI Act, which came into effect on 1 August this year, also constitutes a significant legal milestone in AI regulation. It comprises a detailed regulatory regime focused on mitigating the risks of high-risk AI systems through measures such as requiring risk and quality management systems, human oversight, and third-party assessment before such systems may be sold or used in the EU (Art 6). It also prohibits certain AI systems capable of significant harm for malevolent purposes. Its aim is to promote the use of AI which is safe, respects human rights, and protects health, safety and the environment. While not binding on non-member States, the EU AI Act nonetheless has potential relevance well beyond the jurisdictions of member States. This is because of its capacity to contribute to the development of principles of customary international law and its application to developers and deployers of high-risk AI systems where the system or its output is used in the EU, regardless of where the developer or deployer is based.

Judicial and administrative decision-making

(a) Discretions

In the movie, “Eye in the Sky”,[10] starring Helen Mirren, the British and United States authorities were advised that it was lawful, under international humanitarian law, to order an attack by a drone on a cell about to execute a suicide bombing with multiple deaths predicted. Collateral damage, being the likely killing of a little girl selling bread from a makeshift stall proximate to the attack, was considered proportionate to the harm that would be caused if the suicide bombing were permitted to proceed. However, the agent on the ground was moved by the little girl’s plight and, with barely a moment to spare, he acted to save her by bribing a local child to buy the last of the little girl’s loaves in the hope that she would leave before the assault.

This example illustrates how the exercise of discretion by a human decision-maker may mean achieving not merely a lawful result, but one which better accords with the fundamental human values of mercy, fairness, and compassion in the individual’s and the public interest. It also illustrates the ability of humans to problem-solve in creative and imaginative ways. Such fundamental human qualities as mercy, compassion, and fairness draw upon our shared understanding of the frailty of the human condition and lie at the core of our common humanity, resonating through the ages across cultures, and underpinning fundamental human rights.

However, these are qualities utterly lacking in any machine technologies. No robot has yet been created with a conscience, let alone the capacity for sentience, understanding or independent thought. For example, one form of generative AI is a large language model or LLM. An LLM is a complex algorithm which responds to human prompts to generate new text representing the most likely words and word order based on training from massive datasets. Devoid of understanding or concepts of accuracy, the capacity of LLMs to hallucinate – that is, to “make up” information – and to convey it convincingly is well documented.

An example is the well-reported decision in the US last year of Mata v Avianca[11] (June 22, 2023), in which Mr Mata’s lawyers filed submissions containing three fake judgments generated by ChatGPT. When the existence of the decisions was questioned, the lawyers for Mr Mata filed an affidavit attaching copies of the alleged cases after “asking” ChatGPT to confirm that the cases were real and being “reassured” that they did in fact exist and could be found in reputable legal databases. Judge Castel found that there were obvious red flags, including that one of the decisions contained “gibberish”, and that the attorneys had acted in bad faith and in violation of various court rules. This highlights the imperative to approach the use of machine technologies in decisions affecting individual rights and freedoms with very great caution. As the Australian Government recently emphasised in its briefing “How might AI affect the trustworthiness of public service delivery?”, using AI should not come at the expense of empathy.[12] Nor should it come at the cost of accuracy or fairness.
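
By way of illustration only, the following toy word-prediction program captures this mechanic in miniature. It is a deliberately trivial sketch, not the architecture of any commercial LLM: it chooses each next word simply because that word commonly followed the previous one in its small training text, and at no point does it know, or check, whether what it produces is true.

# Illustrative sketch only: a toy next-word predictor, not a real LLM.
import random
from collections import defaultdict, Counter

# A tiny training "corpus"; real systems train on vast swathes of text.
corpus = (
    "the court held that the appeal be dismissed "
    "the court held that the application be refused "
    "the tribunal found that the visa be refused"
).split()

# Record how often each word follows each other word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    # Repeatedly append a word chosen in proportion to how often it followed
    # the previous word in training; no notion of truth is involved.
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# The output reads fluently because it mimics observed word order, but the
# program has no understanding of courts, appeals or facts: fluency without
# grounding is the seed of "hallucination".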

Human qualities of the kind to which I have referred have long informed Courts and administrative decision-makers in the exercise of discretions. While the width of statutory discretions will vary according to context, discretions potentially afford humans the latitude to make judgments and reach decisions which reflect community, administrative and international values, and align with statutory objects, in the face of a wide or almost infinite variety of individual human circumstances.

The exponential increase in the subjects of statutory regulation and in the length and complexity of legislation, especially at the federal level, since the end of the Second World War has been accompanied by the vesting of a multitude of discretions in a wide variety of officials and authorities, both directly and through widespread delegations.[13] Those administrative discretions in turn must be exercised within the bounds of legal reasonableness, in accordance with the applicable requirements of procedural fairness, and oft-times guided by a raft of departmental policies and guidelines.[14]

However, while generative AI can emulate the process of exercising a discretion and may provide reasons which give the appearance that it has exercised a discretion or engaged in an evaluative process, it is no more than smoke and mirrors. Nor are other forms of technology such as machine learning capable of exercising discretions. It follows that, were a machine used to “exercise” a statutory discretion conferred on an officer of the Commonwealth, the purported exercise of power would likely be invalid.

Even the use of such programs to assist in the exercise of discretions should be approached with caution in order to ensure that the human decision-maker brings a properly independent mind to bear upon the issues and interrogates the data provided by such technologies.

(b) Use of AI in provision of reasons and legal research

A related issue is the question of using generative AI in the context of providing reasons.

The provision of reasons is of central importance to the efficacy of the administrative law system.

The benefits of providing reasons to those affected by administrative decision-making include:

  • providing evidence of the reasons for a decision in order to facilitate merits and judicial review;
  • improving the quality and consistency of decision-making; and
  • promoting public confidence in the administrative process through transparency as to the reasons for an outcome, in circumstances where reasons are generally required only for decisions adverse to the individual’s rights or interests.

It follows, for example, as Gummow ACJ and Kiefel J (as her Honour then was) observed in Minister for Immigration and Citizenship v SZMDS,[15] that the obligation to set out material findings of fact “focuses on the thought processes of the decision maker”. In other words, a statutory obligation for an administrative decision-maker to provide reasons imposes a requirement for the decision-maker to provide their reasons for decision, “warts and all”, as it is to those reasons that courts and those subject to the decision must look to discern whether there is error applying established approaches to construing those reasons on judicial review.

However, if a primary decision-maker or Tribunal member were to use AI to write any part of their written reasons, then the question of whether their reasons revealed error would become an artificial one. It may well raise serious questions as to whether the Tribunal member had fulfilled their statutory obligation to provide reasons or, indeed, whether the Tribunal had lawfully undertaken its statutory decision-making task, even if the administrative decision-maker were to give evidence that they had adopted the reasons generated by the machine.

Turning to the judicial branch of government, there are examples abroad of judges using AI, and these have attracted considerable media attention. However, in my view AI has no place in the expression of judicial reasoning and, were that to occur, it would have a very real capacity to undermine public confidence in the judiciary, no matter how limited the use of AI in the particular judgment may have been. It is therefore not surprising that the EU AI Act 2024 has classified the use of AI tools in the administration of justice as high risk.

Indeed, even the use of AI to research the law is classified by it as high risk, reflecting among other things the potential for third-party AI tools to impact upon judicial independence. There is a useful discussion of this in the AIJA’s publication, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (2022) at [4.2]. The authors of that report identified a number of particular risks, bearing in mind that courts and tribunals are accountable for the legality of their processes, including that:

  • the secret nature of many AI systems means that judges and parties are likely to be unaware of the way in which outputs from the system were generated;
  • the data set on which the system has been trained may be out-dated;
  • the AI system’s output may contravene Australian privacy law, Australian copyright law or (as I shortly explain) contain discriminatory material; and
  • there is a risk of potential control, interference or surveillance from foreign states via privately developed AI tools.

Added to this, there is the well-recorded capacity of generative AI to hallucinate which I have already described. Further, an AI system may retain prompts, and human responses to the “answers” it gives, and include that data in the dataset used to train the system going forward, raising the spectre of public disclosure of confidential information. From the operator’s perspective, data of this kind may have significant value in enabling it to further enhance the benefit of the system to other users. The terms and conditions of such systems must therefore be very closely scrutinised. Issues of this nature, in particular, have led to initiatives by courts in Australia, New Zealand and abroad to develop guidelines for judicial officers, tribunal members, lawyers and unrepresented litigants around the risks of using such technologies. These also emphasise the professional and ethical obligations of legal representatives, including to the Court, which squarely place responsibility on the legal representatives and make it clear that those responsibilities cannot be delegated to a machine.

None of this is to say that AI may not have a place in court administration where expert systems and rule-based systems have long had a role to play. Such uses are excluded from the EU AI Act classification of high risk.

(c) Bias

A second major risk in the use of such technologies in decision-making is that of bias and consequential impacts on the legality and fairness of the decision.

Biases will in one sense always be embedded in a decision-making system but they may be perfectly lawful because they accurately reflect the rules embodied in the law pursuant to which the decision in question is made. The concern is with unlawful biases such as those which discriminate arbitrarily on the basis of protected human characteristics or which give weight to considerations which are irrelevant to the exercise of power.

Biases in these senses can infect decisions utilising such technologies in at least three principal ways.

First, there is a well-documented risk of bias where a human decision-maker relies upon material generated by computer technologies: an implicit assumption is made that machines reach more accurate conclusions, with consequential deference to the conclusions or recommendations made by the computer. Computer says “no” – to quote from Little Britain – in the sense of a definite full stop to any further argument.

Secondly, biases may infect a program at the design stage and may reflect deliberate choices or unconscious biases of the programmer.

Thirdly, machine learning systems and AI rely upon historical data. The danger, of course, is that this data may reflect conscious or unconscious biases of the earlier human decision-makers or, indeed, as more text generated by generative AI becomes available, biases derived from prior generative AI material and prompts. By such means, stereotypes and unfair or arbitrary discrimination may be perpetuated and embedded in decision-making processes.

For this reason, historical sentencing and arrest data represent particularly problematic training data,[16] as recognised in the EU AI Act’s prohibition on the use of AI systems to assess the risk of recidivism (Art 5). The data may be the product of outdated social values or fail to appreciate intersecting social disadvantage or the complexity of criminogenic factors. This is evident if one reflects, for example, upon the changes in discrimination law postdating the Kerr recommendations in 1971. For example, only in 1975, when the Racial Discrimination Act 1975 (Cth) was enacted, was it rendered unlawful under federal law to refuse service or rental accommodation to a person on the basis of the colour of their skin. And only in 1984 was sex discrimination outlawed by federal legislation, prohibiting the kind of discrimination suffered by many women required to resign on marriage from their positions with the Australian Public Service or experiencing other penalties in the workplace after marriage.

These are not far-fetched examples. A hiring tool developed by Amazon in 2014 used patterns inferred from the analysis of 10 years’ worth of historical resumes submitted to the company. Its computer systems were trained to vet potential employees on the basis of this analysis. The tool was ultimately discovered to be biased and was decommissioned. Most of the resumes in the training data set came from men due to the male-dominated nature of the technology industry, and the historical trend at Amazon of hiring more male applicants. As a result, a preference for male candidates was inferred and the tool devalued resumes that included the word “women’s”, capturing phrases such as “women’s team” or “women’s award”.[17]
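
The mechanism can be made concrete with a deliberately simplified sketch. What follows is not Amazon’s tool, whose details are not public, but a toy scoring rule learned purely from skewed historical outcomes: nothing in the code mentions gender, yet the word “women’s” ends up penalised because of the data the rule was derived from.

# Illustrative sketch only: a toy scoring rule, not Amazon's actual system.
from collections import Counter

# Synthetic "historical" data: most past hires were men, so words associated
# with women appear mainly among rejected applicants.
hired = [
    "captain chess club software engineer",
    "software engineer rugby team captain",
    "software engineer hackathon winner",
]
rejected = [
    "captain women's chess club software engineer",
    "software engineer women's coding society",
    "retail assistant no programming experience",
]

hired_counts = Counter(word for resume in hired for word in resume.split())
rejected_counts = Counter(word for resume in rejected for word in resume.split())

def word_score(word):
    # Positive if the word was seen more often among hired applicants,
    # negative if seen more often among rejected ones.
    return hired_counts[word] - rejected_counts[word]

def resume_score(resume):
    return sum(word_score(word) for word in resume.split())

print(resume_score("software engineer captain chess club"))          # higher
print(resume_score("software engineer captain women's chess club"))  # lower
# The second resume scores lower solely because "women's" appeared mostly in
# historically rejected applications: the bias lives in the data, not in any
# explicit rule about gender.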

In this example, Amazon was able to detect and respond to the bias in its own system. However, for the individual who is subject to an adverse decision made using these kinds of technologies to identify the existence of an impermissible bias in the system can prove to be a near impossible task. Yet this may be a key question when a decision affecting the individual is challenged in the administrative law space.

Issues such as these underpin a number of the common principles endorsed in recent international instruments – in particular, the need for AI actors, including governments, to respect human rights including non-discrimination, equality and diversity, and to commit to transparency and meaningful information or explanations on machine processes used to make a decision.[18]

Risks of misinformation and disinformation

In 1950 Alan Turing posed the question “can machines think?”[19] In order to answer this question, Turing developed an adaptation of the “imitation game” for AI,[20] now commonly known as the “Turing test”. The game is played with a human and a machine, known as the participants, and another human who is the interrogator. The interrogator may pose written questions to the participants. The object of the interrogator is to discern which participant is the machine. Of course, even if a machine were to pass the Turing test, this would not mean that it could “think” in the way that a human can but rather, theoretically speaking, there would no longer be a meaningful way to differentiate between human responses and answers given by AI.

The development of new generative technologies raises acutely this question of differentiating between output from a machine and that which is the product of the human mind. An example which poses particular risks is the rapid evolution of deepfake technologies now readily available for free on the internet. Deepfakes “utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened”.[21]

The use of doctored images to mislead and spread propaganda or misinformation is not new as is evidenced by the practices of such dictators as Stalin, Mao and Hitler. What is new is the capacity of deepfake software to convincingly manipulate videos and recordings based on training from photographs, recordings and videos readily available on the internet such as on news sites and social media accounts. Furthermore, the challenges posed by such technologies are, one might say, supercharged by the means by which we communicate and share information in a digital world. Multiple terabits of data are transferred every second via approximately 1.2 million km (745,000 miles) of submarine cables and 2,500 telecommunications satellites, with social media, news and other platforms facilitating the mass transmission of information, misinformation and disinformation globally by the roughly 5.3 billion internet users worldwide.[22] Added to this, the ability to disseminate deepfakes en masse through multiple information sources makes them appear to be more credible and therefore more likely to be accepted as true.[23]

As this indicates, the problem is therefore not merely identifying the deepfake; it is also in believing what is real. In other words, as one leading commentator, Professor Lilian Edwards, has observed, “[t]he problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable.” The increasing inability to differentiate between what is real and what is not in the context of modern means of communication is particularly chilling given existing conflicts, expanding foreign influence utilising AI, and the heightened risks to global peace and security which currently exist. Indeed, the Global Risks Report 2024 by the World Economic Forum placed the likely adverse impact of AI-driven misinformation and disinformation over the next two years as the most severe global risk to world stability, above such risks as extreme weather events, interstate armed conflict, and inflation.

No institution is immune from these risks. By way of an example close to home from my perspective, the increased live streaming and other audio-visual broadcasting of court proceedings promotes the fundamental value of open justice in our common law system, but it also creates risks. The fact that the dissemination of deepfakes of court proceedings would almost certainly constitute a contempt of court cannot be assumed to provide an adequate deterrent, for example, to those who seek to conceal their digital identities, such as foreign actors and others with malign intentions.

Deepfakes offer significant opportunities for scammers to exploit individuals and corporations on a global scale, with worldwide losses from scams and bank fraud schemes totalling $US485.6 billion in 2023 according to a report by Nasdaq. They also afford significant opportunities for foreign actors to spread propaganda. A 2021 report from the US Department of Homeland Security on the Increasing Threat of Deep Fake Identities advised that:

Since 2019 malign actors associated with nation-states, including Russia and China, have conducted influence operations leveraging GAN-generated images on their social media profiles. They used these synthetic personas to build credibility and believability to promote a localized or regional issue.

A topical example is the fake documentary called “Olympics has Fallen”, narrated by an AI-generated imitation of Tom Cruise’s voice and promoted by bogus five-star reviews falsely attributed to the New York Times and the BBC. The fake video seeks to disparage the IOC and was apparently produced by a Kremlin-linked group known as Storm-1679 as part of an anti-Olympics campaign which has also included fake news broadcasts to spread fear of terrorist attacks at the Paris Olympics.

It is therefore no surprise that the Bletchley Declaration noted “the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content”, and emphasised the urgent need to address those risks; nor that the EU AI Act prohibits the use of AI systems that deploy purposefully manipulative or deceptive techniques in order to impair the ability of persons to make informed decisions in a manner likely to cause significant harm (Article 5(a)). Significantly, however, the AI Act does not apply to AI systems used for military, defence or national security purposes (Art 2).

Earlier this year, in a potentially significant step forward, major organisations involved in developing and deploying frontier AI models and systems, including Amazon, Google, Meta, Microsoft and OpenAI, agreed to the Frontier AI Safety Commitments.[24] This document not only records voluntary commitments by these organisations to implement best practices relating to frontier AI safety, including the development and deployment of mechanisms that enable users to understand whether audio or visual content is AI-generated, but also commits them to demonstrating how they have met those commitments by publishing a safety framework on severe risks for the upcoming AI Summit in France in February 2025.

Transparency in AI-generated outputs may also be essential if generative AI is to remain of value to humankind. An article published in Nature in July this year by a team of researchers from various universities, including Oxford, Cambridge and Edinburgh, highlighted this risk in the absence of a means of ensuring that data generated by large language models, on the one hand, and human-generated original source data, on the other hand, are clearly differentiated.[25] The research showed that data generated by LLMs polluted the training set for the next generation of models, which then “misperceive reality”, and that the errors accumulate as successive generations are automatically trained on data produced by their predecessors. An example was given of an original input about European architecture in the Middle Ages which, by the ninth generation, produced nonsense about multicoloured jackrabbits.
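
The dynamic described in that research can be illustrated with a toy sketch, which is not the study’s methodology but captures the underlying intuition: if each generation of a model is trained only on material produced by the generation before it, anything that fails to be reproduced in one generation is lost to every later generation, and the model’s picture of the original data steadily narrows.

# Illustrative sketch only: a toy analogue of "model collapse", not the
# methodology of the Nature study.
import random

random.seed(1)

# Generation 0: "human-written" data, here 200 distinct items.
data = [f"document_{i}" for i in range(200)]

for generation in range(10):
    print(f"generation {generation}: {len(set(data))} distinct items survive")
    # Each new generation is "trained" only on what the previous generation
    # produced (modelled crudely here as resampling its output).
    data = [random.choice(data) for _ in range(len(data))]

# The count of distinct surviving items can only fall, never recover: rarer
# material drops out first, which is the intuition behind later generations
# "misperceiving reality" when trained on recursively generated data.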

Conclusion

I have focused on two areas of heightened risk which the global community faces in endeavouring to respond to the complex, multifaceted and ever-changing challenges posed by utilising and regulating AI. At the core of these issues is the need for transparency to protect fundamental human rights and put in place effective safeguards against forms of AI which may distort reality and have significant harmful effects. As neither our means of communication nor emerging technologies are confined by State boundaries, urgent and ongoing international collaboration, information exchange, and the development of standards for the design, development and utilisation of such technologies are necessary. That urgency is reflected in the extent of progress internationally even over the last 12 months, in which Australia has been, and continues to be, at the forefront.


* This paper draws upon my earlier presentations and publications. Since delivery of this paper, in September 2024 the Council of Europe’s AI Framework Convention opened for signature, including to signatories outside the EU. This is the first legally binding treaty to regulate AI and is intended to respond to the urgent need for a globally applicable legal framework.

[1] As defined in the Oxford English Dictionary (online): ‘artificial intelligence’.

[2] As described in the preamble to the European Union Artificial Intelligence Act 2024.

[3] Department of the Prime Minister and Cabinet, ‘How might AI affect the trustworthiness of public service delivery?’ (Long-Term Insights Briefing, 2023) 9 <https://www.pmc.gov.au/resources/long-term-insights-briefings/how-might-ai-affect-trust-public-service-delivery>.

[4] More recently, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems was released by the G7 in November last year, which organisations, including the public sector, were urged to endorse. This “living document” builds on existing OECD AI Principles and is intended to respond, and be responsive, to these rapidly evolving technologies, recognising that governments are still in the process of developing appropriate and effective regulatory systems: G7, Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (Publication) 1.

[5] OECD, ‘Recommendation of the Council on Artificial Intelligence’ OECD Legal Instruments (Web Page, amended on 8 November 2023) <https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449>.

[6] Government of the United Kingdom, ‘The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023’ (Policy Paper, 1 November 2023).

[7] Seoul Ministerial Statement, AI Seoul Summit, 21-22 May 2024 <https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024>. See also Department of Industry, Science and Resources, ‘The Seoul Declaration by countries attending the AI Seoul Summit, 21-22 May 2024’ (24 May 2024) <https://www.industry.gov.au/publications/seoul-declaration-countries-attending-ai-seoul-summit-21-22-may-2024>. Australia was also a party to the Seoul Declaration for safe, innovative and inclusive AI, and the Seoul Statement of Intent toward International Cooperation on AI Safety Science.

[8] Melissa Perry, Benjamin Durkin and Charlotte Breznik, “From Shakespeare to AI: The Law and Evolving Technologies” (2024) 98 ALJ 272, 279.

[9] Ibid.

[10] Eye in the Sky (Entertainment One, 2015).

[11] Mata v Avianca, Inc, No 22-cv-1461 (PKC), 2023 WL 4114965 (SDNY, 22 June 2023) <https://caselaw.findlaw.com/court/us-dis-crt-sd-new-yor/2335142.html>.

[12] Department of the Prime Minister and Cabinet, ‘How might AI affect the trustworthiness of public service delivery?’ (Long-Term Insights Briefing, 2023) 9 <https://www.pmc.gov.au/resources/long-term-insights-briefings/how-might-ai-affect-trust-public-service-delivery>.

[13] DJ Galligan, Discretionary Powers: A Legal Study of Official Discretion (Clarendon Press, Oxford, 1986) has observed: “… a notable characteristic of the modern legal system is the prevalence of discretionary powers vested in a wide variety of officials and authorities. A glance through the statute book shows how wide-ranging are the activities of the state in matters of social welfare, public order, land use and resources planning, economic affairs, and licensing. It is not just that the state has increased its regulation of these matters, but also that the method of doing so involves heavy reliance on delegating powers to officials to be exercised at their discretion” (at [72]).

[14] Minister for Immigration and Citizenship v Li (2013) 249 CLR 332, 348 [29] (French CJ).

[15] (2010) 240 CLR 611.

[16] Céline Castets-Renard, ‘Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making’ (2019) 30(1) Fordham Intellectual Property, Media & Entertainment Law Journal 91, 99.

[17] Sarah Crossman and Rachel Dixon, ‘Government Procurement and Project Management for Automated Decision-Making Systems’ in Janina Boughey and Katie Miller (eds), The Automated State: Implications, Challenges and Opportunities for Public Law (Federation Press, 2021) 154, 160.

[18] See, eg, principles 1.2 and 1.3, OECD Recommendation of the Council on Artificial Intelligence (as updated in 2024).

[19] AM Turing, ‘Computing Machinery and Intelligence’ (1950) LIX(236) Mind 433, 433.

[20] The “imitation game” was previously developed to see if the interrogator could discern which participant was a man and which was a woman: see ibid at 433.

[21] As explained, for example, by the US Department of Homeland Security in Increasing Threat of Deepfake Identities at 3.

[22] R Limon, “Annual global data traffic equals 43 billion HD movies. How does it all flow?”, El Pais (4 May 2023) <https://english.elpais.com/science-tech/2023-05-04/annual-global-data-traffic-equals-43-billion-hd-movies-how-does-it-all-flow.html>.

[23] Ibid.

[24] Department for Science, Innovation & Technology, Frontier AI Safety Commitments, AI Seoul Summit 2024 (21 May 2024) <https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024>.

[25] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson & Yarin Gal, “AI models collapse when trained on recursively generated data” (2024) 631 Nature 755 <https://www.nature.com/articles/s41586-024-07566-y>.
