An AI balancing act – Australia's potential regulatory measures under consideration by government

9 minute read | 15.06.2023 | Vanessa Mellis, Michael Thomas

Following the release of the Safe and Responsible AI in Australia Discussion Paper by the Department of Industry, Science and Resources, we detail the various potential regulatory measures that are under consideration by the government.


Key takeouts


  • The Discussion Paper provides an overview of the regulatory framework in Australia and internationally in relation to AI technologies.
  • The focus of the Discussion Paper is to identify regulatory measures to ensure AI is developed and used safely and responsibly in Australia. The Discussion Paper sets out a potential risk management framework which may be used to manage risks associated with AI technologies.
  • The Department sought feedback on the Discussion Paper, including the draft risk management approach, to identify whether there is a need to strengthen governance settings in response to rapid digital development. Submissions closed 26 July 2023.

The Department of Industry, Science and Resources commenced consultation on the Safe and Responsible AI in Australia Discussion Paper (Discussion Paper). To facilitate consultation, the Discussion Paper sets out 20 questions, which canvass the range of issues it raises. Submissions responding to these questions, or to other matters raised in the Discussion Paper, were to be made via the Australian Government Department of Industry, Science and Resources Consultation Hub. Submissions closed 26 July 2023.

Here we explore the range of potential AI regulatory measures set out in the Discussion Paper, which are currently under consideration. The Discussion Paper proposes that regulatory measures adopted in Australia must:

  • ensure appropriate safeguards are in place, particularly in relation to high-risk AI applications; and
  • provide greater certainty, in order to facilitate confident investment in AI solutions for businesses and confidence that these businesses are engaging in AI activities responsibly.

Drawing on the regulation being implemented internationally, and the current framework in Australia, the Discussion Paper sets out a range of potential measures the Government may implement to regulate AI. In the Discussion Paper, these responses are mapped on a 'voluntary' to 'regulatory' scale, indicating whether a measure would be imposed as a legislated obligation or complied with on a voluntary basis. Many of the measures discussed could transition from voluntary guidelines to legislated obligations over time. This is the approach being taken by the United Kingdom in relation to the principles set out in its 'A pro-innovation approach to AI regulation' white paper, which are proposed to be issued and implemented on a 'non-statutory' basis before being legislated following an initial implementation period.

We note that there will likely be no 'one size fits all' response, and that the position arrived at by the Government will likely be a mix of the measures canvassed in the Discussion Paper.

The following measures are included in the Discussion Paper:

Regulations

A clear option for regulating AI is to develop legislation for this purpose, either through bespoke AI-specific legislation or the amendment of existing legislation. Compliance with the obligations set out by a legislative framework would be mandatory, and the framework could set out penalties and available remedies for non-compliance. This provides organisations seeking to develop or implement AI systems with certainty as to their obligations and the consequences of non-compliance, which in turn supports public trust and confidence in AI through an unambiguous regulatory framework.

It is notable, however, that a legislated framework is not as flexible and responsive as some of the other measures canvassed in the Discussion Paper. This could increase the regulatory burden and may stifle innovation or prevent Australia from accessing advances in AI technologies developed internationally.

Industry self-regulation or co-regulation

Self-regulatory schemes are developed and implemented by industry through codes of conduct or other voluntary schemes (such as standardised self-testing tools similar to AI Verify, which is being implemented in Singapore). These self-regulatory schemes are often faster to develop and implement than a legislative response, and can be changed easily, providing greater flexibility.

Industry self-regulation can be combined with legislation in a 'co-regulatory' fashion, allowing the code to be developed by industry as part of a legislative framework, which enables the code to have legislated remedies and penalties, rather than being complied with on a solely voluntary basis.

Regulatory principles

Regulatory principles can be created by Government to guide regulators as to when and how AI should be regulated when it is used in the context of the regulator's specific sector / industry. This promotes greater regulatory coherence between regulators when addressing the use of AI within existing regulatory frameworks.

Regulatory principles are often outcomes based and can be applied with a great deal of flexibility to a wide range of contexts. This can make regulatory principles more challenging for organisations from a compliance perspective (depending on an organisation's resources and capacity to interpret and comply with the principles) but more supportive of innovation than other less flexible alternatives.

Regulator engagement

Greater collaboration between regulators would help ensure that, where AI is regulated across multiple sectors and industries, the compliance burden created by having multiple regulators responsible for the regulation of AI in different contexts is minimised. This is likely to be necessary for any regulation of AI to be effective, considering the broad range of applications for AI systems (and could be facilitated through, among other things, the use of regulatory principles). A number of regulators in Australia are already collaborating to develop guidance and share information in relation to the use of AI in particular contexts.

Governance and advisory bodies and platforms

This would involve the introduction of AI-specific bodies to support the implementation of governance requirements and provide advice to both AI providers and consumers. An example of this is the National AI Centre's Responsible AI Network, which is being run by the CSIRO to focus on responsible AI solutions in Australian industry.

Enabling regulatory levers

Enabling regulatory levers are designed to enable and facilitate, rather than hinder, innovation. This would involve clear frameworks and guidance for experimentation with AI technologies, which can be achieved, for example, through the creation of 'regulatory sandboxes': controlled environments within particular industries for experimentation and innovation.

Technical standards

Technical standards are designed through consensus by technical experts within industry-led organisations, with a view to developing universal standards that are adopted to improve the interoperability of systems. This can improve consumer outcomes and facilitate international trade. These standards can be either voluntary or made mandatory through legislation.

Assurance infrastructure and conformity processes or practices

These are measures implemented to test and verify AI systems, to ensure that they meet standards or quality requirements. For example, these measures could relate to assessing the quality of the data on which an AI system is developed, or requiring that AI systems achieve a certain level of transparency, so that consumers and other stakeholders understand the algorithms and decision making underpinning the system. These measures largely go to transparency in AI solutions and would be useful in building public trust and confidence in AI.

These assurance processes can be voluntary or mandated by legislation, developed by industry or Government (or in collaboration between the two), and administered internally by organisations or by third parties (either private or public sector).

Policies, principles or statements guiding the operations of Government

The purpose of publishing and making available policies, principles or statements is to increase awareness of the Government's policy position, and its expectations, in relation to the use of AI. The intent is to increase transparency and public trust and confidence in how the Government uses AI solutions; such measures can also seek to influence the private sector by modelling best practice within Government.

Transparency and consumer information requirements

These measures are designed to increase transparency through the use of initiatives such as the preparation and publication of AI impact assessments, and notifying individuals when AI applications are in use.

Bans, prohibitions and moratoriums

This is the process of banning certain AI applications, either generally or in specific contexts. For example, a number of jurisdictions are moving to ban the use of ChatGPT in education settings. The EU is also proposing to ban the use of 'social scoring' and 'real-time biometric identification' in specific circumstances.


Public education and other supporting central functions

This refers to 'non-regulatory' options (i.e. not legislation, codes or guidance designed to influence the behaviour of corporations) that encourage certain behaviours by increasing availability of information and awareness.

By-design considerations

These are measures intended to be implemented in the design of AI systems, to ensure that AI systems are high quality, transparent and safe from the outset. This approach to design can be either voluntary or required by legislation.

Risk management approach

A risk management approach could be used to guide the implementation of any number of the measures outlined above (either in isolation, or multiple measures in conjunction). As we have discussed, a risk-based approach to AI is being implemented in a number of international jurisdictions, and it seems likely at this stage that Australia will follow this trend. The Discussion Paper includes a draft risk management approach for AI, on which the Government sought feedback and comment.

The draft risk management approach involves a three-tier classification system, utilising the following three categories:

  1. Low risk: minor impacts that are limited, reversible or brief. For example, algorithm-based spam filters, general chatbots and AI-enabled business processes.
  2. Medium risk: high impacts that are ongoing and difficult to reverse. For example, AI-enabled loan worthiness evaluations, emergency service chatbots, AI-enabled applications in hiring and evaluating employees.
  3. High risk: very high impacts that are systemic, irreversible or perpetual. For example, AI-enabled robots for medical procedures and use of AI in safety-related car components or in self-driving cars for the purpose of making real-time decisions.

Under the draft risk management approach, an organisation would assess the risk level of a proposed AI application and then determine the requirements that apply to it, dependent on the risk level identified.

Requirements placed on AI systems under an eventual framework could combine any of the measures discussed above; however, examples of the measures identified in the draft framework include the following (illustrated in the sketch after this list):

  • the conducting of AI impact assessments;
  • providing notices to users that an AI solution is being implemented and used;
  • the use of 'human in the loop' models, which require the AI solution to interact with a human who can intervene if required;
  • publicly available explanations of how the AI solution works;
  • specific requirements regarding user training; and
  • internal and external monitoring and documentation of how the AI solution is working.
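
To illustrate how the tiered approach might operate within an organisation's own governance tooling, the sketch below encodes the three risk tiers and maps each tier to a set of candidate measures. The tier names, the example measures and the example application are drawn from the Discussion Paper; the pairing of measures with tiers, and every name in the code, are illustrative assumptions only, as the draft framework does not prescribe any particular mapping.

```python
from enum import Enum


class RiskTier(Enum):
    """The three tiers in the draft risk management approach."""
    LOW = "low"        # minor impacts: limited, reversible or brief
    MEDIUM = "medium"  # high impacts: ongoing and difficult to reverse
    HIGH = "high"      # very high impacts: systemic, irreversible or perpetual


# Hypothetical mapping of tiers to measures. The Discussion Paper lists
# these measures as examples but does not specify which apply at each tier.
REQUIREMENTS_BY_TIER: dict[RiskTier, list[str]] = {
    RiskTier.LOW: [
        "internal monitoring and documentation",
    ],
    RiskTier.MEDIUM: [
        "AI impact assessment",
        "notice to users that an AI solution is in use",
        "internal monitoring and documentation",
    ],
    RiskTier.HIGH: [
        "AI impact assessment",
        "notice to users that an AI solution is in use",
        "'human in the loop' oversight",
        "publicly available explanation of how the solution works",
        "specific user training requirements",
        "internal and external monitoring and documentation",
    ],
}


def requirements_for(application: str, tier: RiskTier) -> list[str]:
    """Return the measures that would apply to an AI application
    assessed at the given risk tier."""
    return [f"{application}: {measure}" for measure in REQUIREMENTS_BY_TIER[tier]]


# The Discussion Paper gives loan worthiness evaluation as a medium-risk example.
for requirement in requirements_for("AI-enabled loan worthiness evaluation",
                                    RiskTier.MEDIUM):
    print(requirement)
```

In practice, the requirements attaching to each tier would be set by the eventual framework rather than by the organisation itself; the value of encoding them in this way is simply that every proposed AI application is assessed consistently against the same checklist.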

What comes next?

The Department of Industry, Science and Resources sought feedback on the 20 questions set out in the Discussion Paper, as well as any other matters arising from it. Submissions closed 26 July 2023.


The team at MinterEllison can assist you in understanding the legal issues and risks that AI presents to your organisation, monitor the approach government is taking to the responsible use of AI, and help you make a submission on the Discussion Paper.

View the Discussion Paper

