Many believe the development and use of powerful Artificial Intelligence (AI) is outpacing regulation. On 17 January 2024, the Australian Government (Government) published its Safe and responsible AI in Australia consultation interim response (Report). The Report responds to the Safe and responsible AI in Australia discussion paper, published in June 2023, in which we previously detailed the potential regulatory measures under consideration by the Government.
A risk-based approach to AI
The Government proposes to adopt a risk-based regulatory approach to AI. Mandatory guardrails would apply to high-risk applications of AI, such as those in critical infrastructure, medical devices and biometric identification, with a particular focus on preventative interventions applied early in the AI lifecycle. This policy approach broadly aligns with that of Canada and the European Union.
Low-risk AI would proceed largely unimpeded, noting many applications of AI, such as monitoring biodiversity or automating routine internal business processes, do not present risks that require a regulatory response. The Government considers AI should continue to develop and be able to 'flourish' in low-risk contexts, relatively 'unhindered' by unnecessary regulatory intervention.
Is the current regulatory framework sufficient to address the known risks of AI?
The Report acknowledges public concern that the current regulatory framework is insufficient to address the known risks of AI. The Government has also acknowledged that the sheer speed and scale of AI development potentially increases the risk of harm posed by the technology, and that such harm may be irreversible once it occurs.
Although recent regulatory changes have strengthened Australia's regulatory framework in broad terms (such as through privacy reform and online safety laws), the Report acknowledges those reforms do not go far enough to mitigate the risks posed by generative AI.
The Government has not confirmed whether reform would take the form of further amendments to existing laws, the introduction of new dedicated legislation, or both. Its immediate focus, however, will be to determine what mandatory guardrails are required to safeguard the Australian public from the risks of AI in high-risk settings.
The Report identifies at least 10 legislative frameworks that may require amendment to respond to applications of AI and to ensure appropriate guardrails are in place for AI development and use. These include competition and consumer law, health and privacy laws, and copyright law.
The Government has indicated it will work with industry to develop voluntary Australian AI Safety Standards. It also proposes to develop options for voluntary labelling of AI-generated materials and to establish a temporary expert advisory body to support the development of options for AI guardrails.
The Government remains committed to ensuring that Australia can maximise the benefits of AI and will continue to engage internationally to help shape global AI governance.
The team at MinterEllison can assist you in understanding the legal issues and risks that AI poses to your organisation, and in monitoring the Government's approach to the responsible use of AI.
If your organisation is planning to use or interact with AI and you need more detailed advice, contact us.