The Department of Industry, Science and Resources released a proposals paper titled Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings (the Proposal). The Proposal outlines the government's approach to regulating artificial intelligence (AI) in high-risk settings.
Consultation for the Proposal opened on 5 September 2024 and closed on 4 October 2024. As of 17 October 2024, the Government has published 275 submissions from a wide variety of stakeholders.
Proposed AI guardrails
The Proposal would require organisations that develop or deploy high-risk AI to:
- Establish clear accountability processes, governance, and strategies for regulatory compliance
- Implement risk management processes to identify and mitigate risks
- Protect AI systems and data quality through governance measures
- Test AI models and systems before deployment, and monitor them on an ongoing basis once deployed
- Enable meaningful human oversight and intervention in AI systems
- Inform end-users about AI-enabled decisions, interactions, and AI-generated content
- Establish processes for people impacted by AI systems to challenge outcomes
- Ensure transparency across the AI supply chain to effectively address risks
- Maintain records to allow third-party compliance assessments
- Conduct conformity assessments to demonstrate compliance with the guardrails
For a more comprehensive analysis of the individual guardrails, please see our earlier publication, AI: New Australian guardrails released.
Adaptability of the guardrails
The proposed guardrails are designed with the future of AI in mind.
- Evolving technology: The guardrails are crafted to remain fit for purpose as AI technology continues to evolve, a forward-looking approach designed to ensure that regulation does not quickly become obsolete as AI capabilities advance.
- Flexibility in application: While the guardrails provide a structured framework, they are intended to be flexible enough to apply to a wide range of AI systems and use cases, from current applications to future developments.
- Ongoing engagement: The framework emphasises the critical need for AI developers and deployers to engage openly with stakeholders across the AI supply chain and throughout the AI lifecycle. This continuous engagement allows for real-time feedback and adjustments as technology and its impacts evolve.
- Regular reassessment: Organisations deploying high-risk AI systems would adhere to the guardrails through initial and ongoing assessments, documentation of changes, and potential third-party verification, ensuring continued compliance as systems evolve over time.
- Adaptable risk management: The risk management processes required by the guardrails are designed to be ongoing and adaptable, allowing organisations to identify and mitigate new risks as they emerge with advancing AI capabilities.
- Balance between specificity and generality: The guardrails aim to strike a balance between being specific enough to provide clear guidance and general enough to apply to future AI developments that may not yet be foreseen.
What is high-risk: Context and capability
The proposed regulatory framework aims to categorise AI systems based on their context of use, capabilities and potential to cause harm to individuals, groups and society. This categorisation serves as the foundation for implementing appropriate risk mitigation strategies, ensuring responsible AI development and deployment across various sectors in Australia.
- Context: The framework will assess an AI system's risk level primarily by evaluating its intended use or application context.
- Capability: The framework will also consider an AI system's capabilities, in particular whether a system is sufficiently advanced to be classified as general-purpose AI (GPAI) and therefore warrant compliance with the guardrails.
Organisations should assess whether their AI applications might fall under the high-risk category and prepare for the potential increase in regulatory oversight and compliance requirements.
How will the Australian Government implement AI regulations?
The jury is still out on this, but responses to date indicate the potential for a whole-of-economy approach. The models consulted on are set out below.
- Domain-specific approach: Existing regulations would incorporate AI-specific safeguards, tailoring rules to address risks in each sector. This approach uses current industry expertise and regulatory frameworks to manage AI.
- Framework approach: New legislation would establish general AI definitions and measures across all sectors. Existing laws would then be amended to align with this framework, ensuring a level of consistency while allowing for sector-specific adaptations.
- Whole-of-economy approach: A broad AI law would apply across all economic sectors, defining high-risk AI applications and mandatory safeguards. An independent regulator would be established to monitor and enforce AI compliance throughout the economy.
Who will be affected?
The guardrails have broad potential applicability to both developers and deployers of AI systems and could impose significant requirements on Australian organisations in the near future.
The proposed mandatory guardrails are preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. These measures focus on testing, to ensure systems perform as intended and meet appropriate performance metrics both before and during deployment, and on transparency with end-users about how products are developed and used.
Consultation
The government's consultation process on AI guardrails aimed to gather diverse perspectives, identify potential issues, refine the approach, build consensus and inform implementation. Input was sought to ensure Australia develops a fit-for-purpose regulatory framework.
The consultation garnered submissions from a diverse cross-section of organisations, drawing input from tech giants, government agencies, leading universities, major banks, industry bodies and not-for-profits.
Potential challenges
As Australia prepares to implement AI regulation, organisations must navigate a complex landscape of technical, commercial and legal challenges. These include:
- Regulatory compliance in a rapidly evolving field: The speed of AI advancement poses challenges for regulatory compliance. Organisations must stay well-informed of developments to ensure their AI systems remain compliant with the guardrails. Legal teams will play a crucial role in interpreting and applying these regulations as they evolve, potentially requiring frequent policy updates and compliance reviews.
- Regulatory compliance costs: The proposed mandatory guardrails (and the current Voluntary AI Safety Standard) will require significant investment in governance processes to achieve compliance. For this reason, organisations may wish to stage their implementation, first applying guardrails that address their most significant identified risks.
- Balancing innovation and risk mitigation: The guardrails aim to advance innovation while mitigating risks, a balance that has legal implications. Organisations may need to reassess their R&D practices and product development strategies to ensure compliance without stifling innovation. Legal teams will be instrumental in guiding organisations through this delicate balancing act, helping to identify potential legal risks while maintaining a competitive edge.
- Financial implications and duty of care: The implementation of AI guardrails may impose significant compliance costs, particularly on SMEs. This raises questions about the duty of care owed by company directors in allocating resources for compliance. Legal advisors will need to guide boards on their obligations to balance financial prudence with regulatory compliance.
- Enforcement and liability: The diverse nature of AI applications presents challenges in monitoring and enforcing compliance. Legal teams need to anticipate potential areas of liability and advise on robust compliance frameworks. This may include refining internal audit processes and documenting AI decision-making to demonstrate due diligence in the event of scrutiny.
- Cross-border considerations: With AI development and deployment often occurring across borders, legal teams must grapple with complex jurisdictional issues. Organisations operating internationally will need to navigate potentially conflicting regulatory regimes, requiring careful legal analysis to ensure compliance across all relevant jurisdictions.
- Workforce and employment: The implementation of AI guardrails may necessitate new roles and responsibilities within organisations. This could have implications for employment contracts, job descriptions, and potentially, workforce restructuring. Legal teams will need to provide support on these changes, ensuring compliance with AI regulations and existing labour laws.
- Data protection and privacy: AI systems often rely on vast amounts of data, raising significant privacy concerns. Legal teams will need to ensure that AI implementations comply not only with the new guardrails but also with existing data protection regulations such as the Privacy Act 1988 (Cth) and the Security of Critical Infrastructure Act 2018 (Cth). This may require reviewing and updating data handling practices and privacy policies.
Next steps: What can we do in the meantime?
The Australian Government has released a new Voluntary AI Safety Standard as an interim measure for responsible AI development. The voluntary standard aligns with international best practice and prepares organisations for future mandatory requirements, focusing on stakeholder engagement, safety considerations, diversity in AI development teams and ethical considerations in AI systems.
By implementing the standard, organisations can position themselves well for the upcoming mandatory guardrails while following a practical guide to responsible AI development in the absence of formal regulation.
Final words: What should we expect and what else can we do?
AI guardrails will impact organisations economy-wide, requiring new risk management protocols and increased transparency in AI processes. Developers and deployers of high-risk AI must implement comprehensive testing regimes, rigorous oversight mechanisms and regular audits of their AI systems.
Start-ups and SMEs may grapple with resource allocation challenges for compliance, potentially impacting their growth trajectories and market competitiveness.
Large enterprises will likely face heightened scrutiny from regulators and stakeholders, requiring substantial revisions to existing AI governance structures and ethical frameworks.
To prepare for compliance, organisations should audit their current AI use and governance arrangements, bolster governance policies, invest in staff training on AI ethics and proactively engage with industry bodies and regulators.
For today and tomorrow, whether you need protection against AI risks (including in relation to compliance with changing legal frameworks), or are shaping your organisation with responsible and ethical AI to enhance and elevate capability, our nationwide AI Client Advisory team will guide you through your AI adoption journey, from insight to strategy and implementation.
Our AI expertise includes legal and policy, risk, workforce, privacy, data protection and cyber, procurement, strategy, and a co-creation model to develop tailored solutions for your organisation (ME AI). Operating with the highest standards of independence and trust as a firm for almost 200 years, our nationwide AI experts have the know-how and experience to help you make the best decisions, faster.