An AI balancing act – Australia's road to enhanced AI regulation

9 minute read | 14.06.2023 | Vanessa Mellis, Michael Thomas

Following the release of the 'Safe and Responsible AI in Australia' Discussion Paper, the Department of Industry, Science and Resources is requesting submissions from the public on appropriate artificial intelligence policy settings and regulation. We set out the current regulatory frameworks in Australia and internationally considered by the Discussion Paper, as well as the various regulatory measures which are under consideration.


Key takeouts


  • The Discussion Paper provides an overview of the regulatory framework in Australia and internationally in relation to AI technologies.
  • The focus is on identifying governance mechanisms to ensure AI is developed and used safely and responsibly in Australia. The Discussion Paper provides a draft approach to managing the risks associated with AI technologies.
  • The Department wants feedback on the Discussion Paper, including the draft risk management approach, to identify whether there is a need to strengthen governance settings in response to rapid digital development.

    Submissions can be made until 26 July 2023.

On 1 June 2023, the Department of Industry, Science and Resources commenced consultation in relation to the Safe and Responsible AI in Australia Discussion Paper (Discussion Paper). The purpose of this consultation is to seek views on how the Australian Government can 'mitigate any potential risks of AI and support safe and responsible AI practices'.

The Discussion Paper sets out the current state of regulation and safeguards in place in Australia which already encompass, or could be adapted to meet, the risks posed by the increase in global investment in and adoption of AI. It acknowledges that, as a starting point, the regulation and safeguards currently in place need to be understood so that they can be appropriately adapted to meet the challenges and risks associated with AI.

The Discussion Paper also recognises that the response to regulating AI is at an early stage globally, and considers the state of regulation in a wide range of jurisdictions.

To facilitate consultation, the Discussion Paper sets out 20 questions canvassing the range of issues it raises. Submissions responding to these questions, or to other matters raised in the Discussion Paper, are to be lodged online. Consultation is open until 26 July 2023.

Managing AI risks, encouraging innovation and building trust

The Discussion Paper considers that effective regulation will be necessary to ensure that potential risks are managed, while still being flexible enough to enable innovation to flourish and opportunities to be realised.

Key to this is an increase in public trust and confidence in AI, without which AI solutions cannot be broadly implemented and adopted. The Discussion Paper explains that while investment in AI is growing, public trust and confidence in AI solutions in Australia is low, and this has led to a low rate of adoption. A key component of building public trust and confidence in AI systems will be ensuring that regulatory frameworks are appropriately updated and implemented so that they suitably respond to the risks posed by AI.

The consultation aims to ensure that the regulatory framework operating in Australia is fit for purpose to:

  • control risks associated with AI
  • not stifle investment, and
  • ensure public trust and confidence in AI.

With these aims in mind, we provide an overview of the current framework in Australia and internationally, before exploring the potential regulatory measures under consideration for use in Australia.

Current framework in Australia

The Discussion Paper identifies that the Australian regulatory landscape around AI is complex, due to the broad range of contexts in which it can be used. The following types of regulation are currently in place in Australia.

Legislation

There is no specific legislation in force in Australia designed to regulate AI. Rather, the Discussion Paper highlights two types of legislation which may have the effect of regulating AI. The first is 'general regulations', which govern AI depending on its application: for example, the use of personal information in the development of an AI system would be regulated by the Privacy Act 1988 (Cth), and AI as a consumer product would be regulated in part by the Australian Consumer Law. The second is 'sector-specific regulations', which govern AI when it is used in a particular sector: for example, AI used as a medical device would be regulated under the software as a medical device provisions in the Therapeutic Goods Act 1989 (Cth).

A potential pathway forward in regulating AI is to update existing legislation to address contexts in which AI is used. Some of this work is already underway, for example, the ongoing review of the Privacy Act 1988 (Cth) to address, amongst other things, the way that personal information is used in automated decision making (ADM) and to provide for transparency in the way automated systems use personal information in direct marketing.

Read our article about the review of the Privacy Act 1988 (Cth).

The AI Ethics Framework

In addition to general and sector-specific regulations, the Federal Government has implemented some light-touch regulation of AI in Australia in the form of voluntary guiding principles, which are designed to guide businesses and government to responsibly 'design, develop and implement' AI solutions. Australia's Artificial Intelligence Ethics Framework was introduced by the Federal Government on 7 November 2019 and is consistent with the OECD's Recommendation of the Council on Artificial Intelligence, which includes principles for the responsible stewardship of trustworthy AI. This voluntary code is intended to complement legislation and set out a 'best practice' approach for the design, development and implementation of AI. It is a principles-based document, setting out eight principles to be applied in the design, development and implementation of AI. Read more about the AI Ethics Framework.

Private and public sector response

In considering the current framework of regulation in Australia, the Discussion Paper acknowledges that a number of public and private sector organisations have voluntarily put in place policy and governance arrangements to ensure the ethical development and use of AI systems within their organisations. The Discussion Paper highlights that the Federal Government is taking a number of steps to 'lead by example' and model best practice, in the form of guidance for public sector adoption of AI published by the Digital Transformation Agency and guidance published by the Office of the Commonwealth Ombudsman on the use of ADM.

This Federal guidance is complemented by State-level guidance, such as the New South Wales Government's AI Assurance Framework, which is designed to assist government agencies to design, build and use AI-enabled products and solutions. This includes the identification and management of AI-specific risks through clear governance and accountability.

This is in addition to a number of academic initiatives, both in conjunction with Government and otherwise, intended to support the responsible and safe development and use of AI.

Technical standards for AI are also being progressed, including standards enabling more transparent, explainable and ethical design of AI systems.

International frameworks

The development and adoption of AI regulation internationally is in its early stages; however, some jurisdictions are more advanced than others. The Discussion Paper identifies a number of approaches taken in different international jurisdictions. We touch on some of the high-level themes appearing across a range of these jurisdictions.

Risk-based regulation

There appears to be a general tendency towards a risk-based approach to the regulation of AI, with AI classified on the basis of the risk it poses and the strictness of regulation increased commensurately. Generally this involves at least three levels of classification, with the lowest level of risk attracting limited or minimal regulation, and the highest level of risk being deemed very high risk or unacceptable, in which case either significant levels of regulation are imposed or the proposed use of AI is banned.
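To illustrate the shape of such a tiered scheme, below is a minimal Python sketch mapping risk tiers to increasingly strict regulatory responses. The tier names and responses are our own illustrative assumptions, loosely echoing the tiered structure proposed in the EU; they are not drawn from any enacted framework.

    from enum import Enum

    class RiskTier(Enum):
        # Tier names are illustrative assumptions, not drawn from any enacted law
        MINIMAL = 1
        LIMITED = 2
        HIGH = 3
        UNACCEPTABLE = 4

    # Assumed mapping from tier to regulatory response, for illustration only
    RESPONSES = {
        RiskTier.MINIMAL: "limited or minimal regulation",
        RiskTier.LIMITED: "transparency and disclosure obligations",
        RiskTier.HIGH: "strict controls, assessment and ongoing monitoring",
        RiskTier.UNACCEPTABLE: "proposed use banned",
    }

    def regulatory_response(tier: RiskTier) -> str:
        """Return the illustrative regulatory response for a given risk tier."""
        return RESPONSES[tier]

    for tier in RiskTier:
        print(f"{tier.name}: {regulatory_response(tier)}")

The point of the sketch is simply that regulatory strictness scales with assessed risk, and that the top tier results in prohibition rather than stricter controls.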

A risk-based approach is proposed to be introduced in the European Union, has been implemented on a voluntary basis in the United States through the National Institute of Standards and Technology (NIST) AI Risk Management Framework, has been implemented in Canada for most Government agencies, and has been implemented for AI solutions deployed by the New Zealand Government.

Transparency

A key focus of regulation is the transparency of AI, particularly when AI is used for ADM. This ties into the need for public trust and confidence in AI in order to support widespread acceptance. Measures implemented internationally appear to focus on ensuring that where AI is used for ADM the AI is free of bias, and that individuals subject to an ADM decision understand that AI was used in the decision, how the decision was made, and how it can be challenged.

In the EU, where an ADM system produces a 'legal or similarly significant effect', the use of that system is regulated through the General Data Protection Regulation. The EU has also implemented the Digital Services Act which, once fully operational, will place transparency obligations on digital platforms, particularly around allowing access to algorithms for scrutiny under a transparency and accountability framework.

In April 2023, a proposed regulatory framework was introduced to the United States Senate to provide for transparent and responsible use of AI. This would primarily be achieved by requiring AI technologies to undergo independent expert assessment prior to release, and by making the results of these assessments publicly available to users. A number of transparency measures are also being introduced in the United States at the State level.

The United Kingdom has developed an Algorithmic Transparency Standard, which is designed to help public sector bodies provide clear information about the AI systems they are using and the purposes for which they are used.

In Italy, ChatGPT was briefly banned by the Italian Data Protection Authority, seemingly due to a number of concerns regarding transparency. ChatGPT was reinstated once OpenAI implemented a number of changes, including increased transparency in how ChatGPT processes user data to train its algorithm, and allowing users to opt out of having their data used as 'training data' for the system.

Transparency is also a key part of the risk-based system which has been implemented in Canada.


Regulating government

In international settings, it is not uncommon at this stage for AI regulation to apply only to government. Many jurisdictions, including Canada, the United Kingdom and New Zealand, have implemented different levels of regulation for government and other public sector bodies than those applying to private sector organisations. This is of course not to say that private sector regulation will not become more common in the future. It seems reasonable to assume that, given the need for public trust in government operations, it is appropriate to regulate the use of AI in government settings in the first instance, before expanding regulation to the private sector as needed and appropriate.

The importance of considering international regulatory settings when developing an Australian approach to regulating AI cannot be overstated. The Discussion Paper acknowledges that a key consideration in developing a regulatory framework will be making sure that it achieves what is best for Australia's economy.

The Discussion Paper identifies that, as Australia is a 'relatively small open economy', harmonising its governance framework with international policy settings will be important, as this could impact both Australia's domestic tech sector and its ability to take advantage of AI-enabled systems and the economies of scale associated with the global growth of AI solutions.

Accordingly, understanding the international settings in relation to the regulation of AI is key when considering what alternatives for the regulation of AI in Australia may be viable, or ultimately preferred by the Government.

The Discussion Paper sets out a range of potential regulatory measures that could be utilised to regulate AI in Australia. What are the measures being considered? Find out more in our detailed overview of Australia's potential regulatory measures under consideration by government.

The Department of Industry, Science and Resources seeks feedback on the 20 questions set out in the Discussion Paper, as well as on any other matters arising from it. The consultation is open until 26 July 2023.


The team at MinterEllison can assist you in understanding the legal issues and risks associated with AI for your organisation, monitor the approach government is taking to the responsible use of AI, and help you make a submission on the Discussion Paper.

View the Discussion Paper
