PRESS RELEASE: Government AI transparency statements hard to find, new report finds
A new report has found that many Commonwealth government departments and agencies are failing to make their artificial intelligence (AI) transparency statements easy to find or meaningful in content, despite the requirement becoming mandatory in February 2025.
The report assesses compliance with the Australian Government’s Policy for the responsible use of artificial intelligence in government [https://apo.org.au/node/328039], which requires in-scope entities to publish an AI transparency statement outlining their use of AI systems.
The analysis found that AI transparency statements are often difficult to locate and vary significantly in quality and detail. Very few were accessible via a clear, direct link, as recommended by the Digital Transformation Agency (DTA).
Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) identified 30 government entities that potentially fall within the scope of the Policy but for which no AI transparency statement could be found, although the DTA considers these entities out of scope.
While some published statements were detailed and informative, others did not comply with the requirements set out in the Standard for AI transparency statements.
The report concludes that without clearer publication practices and stronger compliance mechanisms, the policy risks falling short of its intended transparency and accountability goals.
Recommendations:
• AI transparency statements should be published in one central location.
• The DTA should reconsider which entities are subject to the Policy and publish an explicit list of the entities strictly bound by it.
• The DTA should explore mechanisms to ensure compliance with the Policy and its requirements, including by considering what consequences should flow from non-compliance.
• The Standard for AI transparency statements should be revised so that it cannot be complied with merely ‘formally’, without providing meaningful information.
This report was authored by Prof Kimberlee Weatherall, José-Miguel Bello y Villarino and Alexandra Sinclair, with research assistance provided by Shuxan (Annie) Luo. It aligns with the Regulatory Project at ADM+S.
Read the full report, AI Transparency in Practice: An evaluation of Commonwealth entities’ compliance with their obligations regarding AI transparency statements [https://apo.org.au/node/333419].


The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is a cross-disciplinary, national research centre, which aims to create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making (ADM). Funded by the Australian Research Council from 2020 to 2027, ADM+S is hosted at RMIT in Melbourne, Australia, with nodes located at seven other Australian universities, and partners around the world. The Centre brings together leading researchers in the humanities, social and technological sciences in an international industry, research and civil society network.
Our Centre aims to contribute to the mitigation of the social and economic risks in the development and implementation of ADM and artificial intelligence (AI), and to improve outcomes and efficiencies in four key focus areas where these technologies are already well advanced: news and media, transport and mobility, health care, and social services.
