Background to the National Framework
After nearly a year of collaboration between federal, state and territory governments, the National Framework for the Assurance of AI in Government was released in June 2024. It builds on existing policy and complementary initiatives including:
- Australia's AI Ethics Principles
- Consultation on Safe and Responsible AI in Australia
- The NSW Artificial Intelligence Assurance Framework
- OECD Principles for responsible stewardship of trustworthy AI
- The Bletchley Declaration on AI Safety
- The Australian Government's endorsement of 3 outcomes from the AI Seoul Summit in South Korea.
The Framework aims to enable governments to embrace AI opportunities, while maintaining public trust, through safe and responsible AI use. It provides guidance and foundations for AI assurance, but leaves room for jurisdictions to develop specific policies and guidelines 'considerate of their own legislative, policy, and operational context.'
MinterEllison Commentary
- This Framework comes at a time when the international regulatory landscape is changing rapidly, as is the underlying technology. Australia's approach has so far emphasised principles-based guidance rather than blanket regulation: it is less prescriptive than the EU's AI Act and aligned with other principles-based approaches, including Singapore's.
- By incorporating elements from international initiatives like the OECD Principles and the Bletchley Declaration, Australia is aligning its domestic AI governance with global best practice. This approach not only enhances the robustness of the Framework, but also positions Australia to participate more effectively in shaping international AI governance norms.
- Given the cross-jurisdictional nature of many AI applications and their impacts, establishing a nationally consistent framework is a valuable step towards greater collaboration and innovation across Australia.
- The Framework may potentially set standards and expectations for AI governance that extend beyond government, and influence practices in the private sector and broader society.
The five pillars of AI Assurance
The Framework outlines five key "cornerstones" for AI assurance, which aim to help lift community trust to enable government adoption of AI technology. They are:
- Governance: Comprises the organisational structure, policies, processes, regulation, roles, responsibilities and risk management frameworks that ensure the safe and responsible use of AI in a way that is fit for the future. It emphasises the need for cross-functional expertise, leadership commitment, fostering a positive AI risk culture, and staff training and resources to understand and implement AI governance effectively.
- Data Governance: Practices to ensure high-quality, reliable data for AI systems, underpinned by robust data governance that complies with relevant legislation. This pillar emphasises the link between data quality and AI model output quality, as well as risk management of data assets, including clear roles and responsibilities and an understanding of legislative and administrative obligations related to data.
- Risk-Based Approach: Assessing and managing AI risks on a case-by-case basis, with heightened requirements for high-risk settings. These risks should be managed throughout the AI system lifecycle, including across the 4 phases of an AI system defined by the OECD. In practice, this pillar may mandate self-assessment models such as the NSW Artificial Intelligence Assurance Framework.
- Standards: This pillar recommends alignment of AI governance practices with relevant international standards where practical, including current AI governance and management standards such as AS ISO/IEC 42001:2023, AS ISO/IEC 23894:2023, and AS ISO/IEC 38507:2022.
- Procurement: The consideration of AI ethics and assurance requirements in procurement and contracts. This includes specific AI-related clauses, clearly established accountabilities in vendor relationships, and a balancing of risks and opportunities.
MinterEllison Commentary
- The five cornerstones provide the foundations for ethical AI implementation, and align with Australia's 8 AI ethics principles. This holistic approach, however, may pose coordination challenges across large government departments and agencies, especially given the pillars are interconnected (e.g. data governance impacts risk management).
- The Framework emphasises a case-by-case, risk-based approach, particularly for high-risk AI applications, building on the government's interim consultation response. Notably, it recommends that AI implementation be driven by business or policy areas and supported by technologists, which may represent a shift in some current practices.
Implementation Guidelines
The National Framework builds on Australia's 8 AI Ethics Principles. The Framework provides practical guidelines for implementing these principles in government AI projects, which we have outlined in the table below.
| Principle | Description | Implementation guidelines |
| --- | --- | --- |
| Human, societal and environmental wellbeing | Throughout their lifecycle, AI systems should benefit individuals, society, and the environment. | 1.1 Document intentions and expected outcomes; 1.2 Consult with stakeholders; 1.3 Assess impact on people, communities, and environment |
| Human-centred values | AI systems should respect human rights, diversity, and the autonomy of individuals. | 2.1 Comply with rights protections; 2.2 Incorporate diverse perspectives; 2.3 Ensure digital inclusion |
| Fairness | AI systems should be inclusive and accessible and should not involve or result in unfair discrimination against individuals, communities, or groups. | 3.1 Define fairness in context; 3.2 Comply with anti-discrimination obligations; 3.3 Ensure quality of data and design |
| Privacy protection and security | AI systems should respect and uphold privacy rights of individuals and ensure the protection of data. | 4.1 Comply with privacy obligations; 4.2 Minimise and protect personal information; 4.3 Secure systems and data |
| Reliability and safety | Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose. | 5.1 Use appropriate datasets; 5.2 Conduct pilot studies; 5.3 Test and verify; 5.4 Monitor and evaluate; 5.5 Be prepared to disengage |
| Transparency and explainability | There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them. | 6.1 Disclose the use of AI; 6.2 Maintain reliable data and information assets; 6.3 Provide clear explanations; 6.4 Support and enable frontline staff |
| Contestability | When an AI system significantly impacts a person, community, group, or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system. | 7.1 Understand legal obligations; 7.2 Communicate rights and protections clearly |
| Accountability | Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. | 8.1 Establish clear roles and responsibilities; 8.2 Train staff and embed capability; 8.3 Embed a positive risk culture; 8.4 Avoid overreliance |
Conclusion
The Framework represents a significant advancement in Australia's approach to ethical AI adoption in the public sector. The five cornerstones, together with practical guidelines aligned with Australia's AI Ethics Principles, provide a foundational yet flexible roadmap for responsible AI implementation.
In practice, the success of this Framework will depend on implementation, which may vary across government contexts. This implementation may require substantial investment in processes, training, skillsets, and cultural approaches to responsible AI.
Of note is the potential for these standards to indirectly shape private sector practices, though this will of course depend on the willingness of private entities to adopt similar approaches to responsible AI.
Ultimately, this Framework marks an important step in Australia's journey towards responsible AI adoption and leadership in AI governance, and sets the tone for future developments in AI standards, policy and regulation.
Please reach out at any time if you would like to discuss the National Framework and its broader implications for your organisation.