Responsible use of AI: New Australian guardrails released

05.09.2024 | Sonja Read, Shane Evans, Chelsea Gordon, Sam Burrett

We explore the Australian Government's two newly released publications to guide the development and deployment of AI in Australia.

Key takeouts

  • The Australian Government has proposed 10 mandatory guardrails for high-risk AI as part of the ongoing consultations on safe and responsible AI.
  • The Voluntary AI Safety Standard provides practical guidance on responsible AI implementation, broadly aligning with the proposed guardrails and international standards.
  • Australian organisations should familiarise themselves with the proposed guardrails and start aligning their practices with the voluntary guidelines to prepare for forthcoming regulation.

The Australian Government has released two publications to guide the development and deployment of artificial intelligence (AI) in Australia: the proposals paper on introducing mandatory guardrails for AI in high-risk settings (Proposals Paper) and the "Voluntary AI Safety Standard" (Standard). These publications clarify the Government's intentions for AI regulation in Australia and offer guidance for organisations seeking to implement responsible AI practices.

The proposed guardrails and voluntary standard mark a significant step on the journey to AI regulation in Australia. These measures are designed to complement existing legal frameworks, including privacy, consumer protection, and corporate governance laws. In addition, both the Standard and the Proposals Paper align the Australian Government with international developments in AI regulation, particularly in Canada, the US, and the EU.

For organisations developing or deploying AI, it is essential to stay across these and future regulatory developments, and to proactively adopt responsible AI practices. This will enable organisations to effectively mitigate regulatory and operational risk, enhance stakeholder trust, and navigate the evolving regulatory landscape with confidence.

In this article, we outline the key features of these publications, examine the differences between them, and situate the announcement within Australia's evolving landscape of AI regulation and governance.

Proposed Guardrails for High-Risk AI

The Proposals Paper outlines 10 proposed mandatory guardrails for developers and deployers of AI systems in high-risk settings. The guardrails focus on testing, transparency, and accountability to manage the potential risks associated with AI systems.

Key aspects of the proposed guardrails include:

  1. Establishing clear accountability processes, governance, and strategies for regulatory compliance
  2. Implementing risk management processes to identify and mitigate risks
  3. Protecting AI systems and data quality through governance measures
  4. Testing AI models and systems before deployment, and monitoring them once deployed
  5. Enabling meaningful human oversight and intervention in AI systems
  6. Informing end-users about AI-enabled decisions, interactions, and AI-generated content
  7. Establishing processes for people impacted by AI systems to challenge outcomes
  8. Ensuring transparency across the AI supply chain to effectively address risks
  9. Maintaining records to allow third-party compliance assessments
  10. Conducting conformity assessments to demonstrate compliance with the guardrails

The Proposals Paper also sets out principles for determining high-risk AI settings and a definition of General-Purpose AI (GPAI) models. Feedback is sought on whether the mandatory guardrails should apply to all GPAI models, or only to a subset based on risk indicators.

The Proposals Paper defines a GPAI model as:

"An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems."

This definition focuses on the versatility and adaptability of GPAI models, which can be applied to a wide range of use cases and integrated into various systems, unlike narrow AI models designed for specific tasks.

The Proposals Paper outlines the following principles for determining high-risk AI settings:

  1. Risk of adverse impacts on individual rights recognised under Australian and international human rights law,
  2. Risk of adverse impacts on an individual's physical or mental health or safety,
  3. Risk of adverse legal effects, defamation, or similarly significant effects on an individual,
  4. Risk of adverse impacts on groups of individuals or collective rights of cultural groups,
  5. Risk of adverse impacts on the broader Australian economy, society, environment, and rule of law,
  6. Severity and extent of the adverse impacts outlined in principles 1 to 5 above.

These principles consider the potential for AI systems to cause harm to individuals, groups, and society as a whole, taking into account factors such as human rights, health and safety, legal implications, and the severity and extent of adverse impacts.

These proposed guardrails are part of an ongoing consultation process, with submissions closing in October 2024. If adopted, the regulations are not anticipated to come into effect until 2025 at the earliest, allowing time for refinement based on consultation feedback, and for Australian organisations to prepare.

Complementing existing requirements under legislation

The proposed mandatory guardrails for AI are designed to work in conjunction with existing legal frameworks that impact the development and use of AI in Australia. While the guardrails introduce new preventative measures, they do not replace or exempt Australian organisations from their obligations under current legislation. We highlight below some key areas where the guardrails complement existing laws.

Guardrail 2: Establish and implement a risk management process to identify and mitigate risks

Existing Laws / Regulation: This guardrail aligns with directors' duties under the Corporations Act 2001, which require directors to exercise powers and discharge duties with due care and diligence, and to assess and govern risks to the organisation, including non-financial risks such as those arising from AI and data.

Guardrail 3: Protect AI systems, and implement data governance measures to manage data quality and provenance

Existing Laws / Regulation: This guardrail is intended to complement requirements under other legislation, such as:

  • the Privacy Act 1988, which places obligations on organisations handling personal information,
  • the Copyright Act 1968, which gives owners of certain material exclusive economic rights that include the right to copy and the right to communicate the material to the public, and
  • the Security of Critical Infrastructure Act 2018, which imposes security obligations on data storage and processing assets.

Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content

Existing Laws / Regulation: This guardrail intersects with proposed reforms to the Privacy Act 1988 to enhance transparency about the use of personal information in automated decisions which have a legal or similarly significant effect on individuals' rights. It also complements prohibitions against misleading and deceptive conduct under the Australian Consumer Law.

Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes

Existing Laws / Regulation: Obligations under this guardrail will need to work alongside existing avenues for complaints handling, including rights and obligations under the Australian Consumer Law and administrative law.

Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks

Existing Laws / Regulation: This guardrail aligns with requirements under privacy laws, intellectual property laws, duties of confidence and contract law, which restrict the use, reproduction and/or disclosure of data and AI models or systems without the requisite consents or rights.

Voluntary AI Safety Standard

Alongside the Proposals Paper, the Government released the Standard to provide guidance on responsible AI implementation. The Standard will operate while the mandatory regime is being developed.

The Standard provides practical guidelines for organisations aiming to implement responsible AI practices. Its guardrails align closely with those proposed in the Proposals Paper, with one key difference: the tenth voluntary guardrail focuses on stakeholder engagement rather than conformity assessments. The voluntary guardrails nonetheless lay the groundwork for conformity assessments through several steps, including around record keeping, transparency and testing.

The Standard also provides guidance on AI procurement processes, to help organisations align their supplier and developer contracts with the Standard.

Notably, the Standard aligns with international standards, particularly AS ISO/IEC 42001:2023 (AI management systems) and the US NIST AI Risk Management Framework (AI RMF 1.0), promoting consistency and interoperability across jurisdictions.

Implications for Australian organisations

These publications clarify the Government's intent for AI regulation in Australia. Their broad alignment with international approaches should create a largely standardised basis from which organisations invested in the use or development of AI can operate with confidence.

While the mandatory guardrails are still subject to consultation and refinement, organisations can start preparing by familiarising themselves with the proposed requirements and assessing their current AI practices. This should include foundational measures such as appropriate data governance, privacy, and cyber security controls, to ensure a responsible and secure technology environment for the integration of AI.

In the interim, the Standard offers a practical and applicable framework for organisations to follow. By aligning their AI development and deployment practices with the voluntary guidelines, organisations can demonstrate their commitment to responsible AI and position themselves for a smoother transition once the mandatory regulations come into effect.


To learn more about our expertise in navigating the evolving AI regulatory landscape and how we can help you elevate your AI strategies and policies with confidence, please contact us below.
