AI Transparency Statement
1 April 2025
The Federal Court of Australia Listed Entity[1] (the Entity) is committed to safety and transparency in its adoption of technological innovations, including the use of AI. The Entity proactively monitors advancements in AI, including in relation to potential applications, ethics, regulation and risk. In recognition of the utmost importance of ensuring the security of public sector data and responsible handling of personal information, we are taking a cautious approach to the adoption of AI while recognising its potential to deliver benefits for Australians.
This Statement applies to the Federal Court of Australia Listed Entity, the corporate entity responsible for delivering corporate services to the Federal Court of Australia, the Federal Circuit and Family Court of Australia (Division 1), the Federal Circuit and Family Court of Australia (Division 2), and the National Native Title Tribunal (the Courts and Tribunal). The Statement does not extend to AI use or adoption by the Courts or Tribunal in discharging their respective judicial or tribunal decision making functions.
Compliance and commitment
The Entity is committed to the safe, ethical and responsible use of AI in accordance with the Policy for the responsible use of AI in government (the Policy). Our AI use is planned, undertaken and monitored in accordance with relevant legislation and policies including the Privacy Act 1988 (Cth), the Protective Security Policy Framework, Australia's AI Ethics Principles, and the Entity's own policies relating to privacy, data and information technology.
Two accountable officials are nominated under the Policy and have responsibility for:
- implementation of the Policy within the Entity
- participating in whole-of-government AI activity as representatives of the Entity
- proactively maintaining awareness of and responding to developments in the AI space, including changes to regulatory requirements
- ensuring that Entity staff are appropriately trained in relation to the intersection of AI with their duties, particularly those with responsibility for developing, using and monitoring AI systems, and
- ensuring Entity-wide awareness of our approach to adopting AI, including relevant policies, controls and supports.
Use and classification
The Entity permits the use of generative AI tools within defined policy restrictions that prioritise data security and privacy. In the context of the Entity's corporate operations, these tools can increase productivity and efficiency.
Separately, an AI-based test automation tool is under consideration for use within our Information Technology section. If adopted, the tool will improve the reliability, maintainability and scalability of web application testing.
Under the classification system for AI use, the Entity's adoption of AI falls entirely within the 'workplace productivity' usage pattern and the 'corporate and enabling' domain.
The Entity does not use AI in any way where the public may directly interact with it or be significantly impacted by it. Specifically, the Entity does not use AI for decision-making, data analytics, prediction, service delivery, or policy and legal activity.
Governance and risk management
The Entity has issued policy guidance with the aim of addressing inherent risks associated with the deployment and use of generative AI tools, while recognising the opportunities that such tools present. That policy clearly controls and restricts the use of generative AI tools, anticipates logging and monitoring of their use, articulates technical and ethical risks that must be considered when using such tools, and requires that any use of permitted AI tools be consistent with all other Entity policies.
Separately, the Entity's Information Technology policy defines controls for the acceptable use, management and maintenance of software, including to protect the confidentiality of data.
Under the guidance of the nominated accountable officials, regular Entity-wide communication and training will ensure awareness of, and foster compliance with, defined policy and best practice in relation to the use of AI and data protection more broadly. Ongoing monitoring and control of the use of AI tools will ensure that their use continues to be effective and appropriate.
In combination, these governance controls are designed to protect the public against any negative impact of the Entity's use of AI.
This statement was prepared on 1 April 2025. In accordance with the Policy, it will be updated at least annually and as otherwise required to ensure currency as our approach to the adoption of AI evolves.
Enquiries relating to this statement should be sent to data@fedcourt.gov.au.
Footnote
[1] The 'Federal Court of Australia Listed Entity' refers to a group of persons who hold non-judicial positions in three separate courts and one tribunal pursuant to s18ZB(a), (b) and (d) of the Federal Court of Australia Act 1976 (Cth).