The European Union's AI Act – a model Australia will follow?

8 minute read  23.06.2023 Vanessa Mellis, Michael Thomas

The European Parliament recently passed its version of the Artificial Intelligence Act, a world-leading piece of legislation in the regulation of AI.


Key takeouts


  • If passed into law, the AI Act will be a first-of-its-kind piece of legislation that seeks to regulate the use of AI generally, rather than in particular contexts or for specific applications.
  • The AI Act will have an extraterritorial impact on the development and use of AI systems internationally, by limiting access to the European Union's single market and by influencing the development of AI systems within the European Union.
  • As Australia embarks on its own journey to regulate AI, it is important to consider the measures being implemented internationally, as this will likely influence Australia's own eventual regulatory position.

On 14 June 2023, the European Parliament passed its version of the Artificial Intelligence Act (AI Act), a key step in the AI Act becoming law. The AI Act will now be debated between the European Commission, Council and Parliament (the "Trilogue") through an expedited process, intended to pass the AI Act into law before the end of 2023. The text passed by the European Parliament (available here) represents the position of the European Parliament for the purposes of the Trilogue debate.

If this process is successful and the AI Act passes into law, it will be the first regulation in the world designed to regulate AI on a 'horizontal' basis, meaning that it applies across industries and sectors rather than targeting the use of AI in a single context or sector.

In this update, we give a brief overview of how the proposed AI Act will operate if passed into law, and how it may impact the ongoing consideration of AI regulation in Australia, which is being led by the Department of Industry, Science and Resources.

What is the AI Act?

The AI Act is a proposed legislative framework to regulate the use of AI within the European Union (EU). If passed, the AI Act will set harmonised rules for the development, use and sale of AI technologies within the EU. It is intended to avoid 'a patchwork of potentially divergent national rules', which the proposal underpinning the AI Act says would 'hamper the seamless circulation of products and services related to AI systems across the EU and will be ineffective in ensuring the safety and protection of fundamental rights and union values'.

How will the AI Act work?

The AI Act employs a risk-based regulatory model, assigning a risk rating and scaled regulatory obligations to AI systems. This is achieved by categorising AI systems into the following four risk ratings (a simple illustrative sketch of this tiering follows the list):

  1. AI practices which pose an Unacceptable Risk would be banned in the EU. Examples of practices posing an unacceptable risk are systems that deploy subliminal or purposefully manipulative techniques causing harm, exploit people's vulnerabilities, or are used for social scoring (i.e. classifying individuals on the basis of their social behaviour, personal characteristics or socio-economic status).
  2. AI Systems which pose a High Risk would be subject to substantive and strict obligations under the AI Act, which must be complied with before the systems are introduced into the EU. Examples of high-risk AI systems are systems used in critical infrastructure that could put the life and health of citizens at risk, safety components of products, and systems employed in the administration of justice or the democratic process. Notably, the test for classifying an AI system as high risk focuses on the context in which the system is used, rather than on how the system works or its application, which is the test for categorising AI systems into the other risk ratings.
  3. AI Systems which pose a Limited Risk would be subject to transparency obligations, requiring users to be notified that they are interacting with an AI system (for example, chatbots).
  4. AI Systems which pose Minimal or No Risk could be used freely within the single market without restriction. This includes AI systems such as AI-enabled video games and spam filters. The European Commission's guidance on the proposed AI Act indicates that the vast majority of AI systems currently in use in the European Union would likely fall into this category.
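
At a conceptual level, this tiering can be thought of as a simple mapping from risk category to regulatory consequence. The following Python sketch is purely illustrative: the tier names and consequences are paraphrased from the summary above, and the enum, mapping and function names are our own rather than anything drawn from the legislative text.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers paraphrased from the AI Act's risk-based model."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices - banned in the EU
    HIGH = "high"                   # strict obligations before entering the EU market
    LIMITED = "limited"             # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"             # no additional restrictions


# Hypothetical mapping of tier to regulatory consequence, for illustration only.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "use of the system is prohibited in the EU",
    RiskTier.HIGH: "system must satisfy strict obligations before being introduced into the EU",
    RiskTier.LIMITED: "users must be notified that they are interacting with an AI system",
    RiskTier.MINIMAL: "system may be used freely within the single market",
}


def describe(tier: RiskTier) -> str:
    """Return the paraphrased regulatory consequence for an illustrative risk tier."""
    return CONSEQUENCES[tier]


if __name__ == "__main__":
    print(describe(RiskTier.LIMITED))
    # -> users must be notified that they are interacting with an AI system
```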

The provisions of the AI Act will be enforceable through the levying of potentially significant fines for non-compliance. The most significant penalties in the AI Act are:

  • for engaging in prohibited AI practices, a fine of up to €40M or, for a company, 7% of the company's global annual turnover in the previous financial year, whichever is higher; and
  • for a failure to meet data governance and high-risk AI transparency obligations, penalties of up to €20M or, for a company, 4% of the company's global annual turnover in the previous financial year, whichever is higher. A short, illustrative worked example of how these maximum fines are calculated follows this list.
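
Because the maximum fines are expressed as the higher of a fixed amount and a percentage of global annual turnover, the effective cap scales with company size. The sketch below is a purely illustrative worked example of that arithmetic, using a hypothetical company with €2 billion in global annual turnover; the function and figures are our own paraphrase of the thresholds summarised above, not a statement of how a regulator would calculate a penalty.

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap and a
    percentage of the company's global annual turnover for the previous year."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)


# Hypothetical company with EUR 2 billion global annual turnover.
turnover = 2_000_000_000

# Prohibited AI practices: higher of EUR 40M or 7% of turnover.
print(max_fine_eur(turnover, 40_000_000, 0.07))   # -> 140000000.0, i.e. EUR 140M

# Data governance / high-risk transparency breaches: higher of EUR 20M or 4% of turnover.
print(max_fine_eur(turnover, 20_000_000, 0.04))   # -> 80000000.0, i.e. EUR 80M
```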

The AI Act also includes smaller fines for lower level breaches, in addition to a broad power which allows fines to be imposed 'in addition to or instead of non-monetary measures such as orders or warnings', in an amount calculated on the basis of the seriousness of the breach and the circumstances of the offender.

The AI Act specifically calls out that penalties levied for breaching its requirements should be 'effective, proportionate and dissuasive' and take into account the interests of SMEs and start-ups, and their 'economic viability'. In doing so, the AI Act seeks to disincentivise breaches in a proportionate manner, mitigating the potential harms posed by AI (many of which are currently unknown) without stifling innovation or the viability of smaller companies seeking to develop or implement AI systems.

Who will the AI Act apply to?

The AI Act is intended to regulate AI systems which are developed, used and sold within the EU. Notably, the AI Act in its current form is intended to operate in a 'non-discriminatory' manner. This means it will apply equally to:

  • companies established within the EU which are seeking to develop, use or sell AI systems within the EU, or which develop AI solutions within the EU but export those solutions outside of the EU; and
  • companies established outside of the EU which are seeking to develop, use or sell AI systems within the EU.

The AI Act will also capture companies which are implementing AI systems which are used outside of the EU, but which produce outputs that are used within the EU.

Through this application, the AI Act will impact the development of AI systems globally in two ways:

  1. by denying AI systems which do not comply with the AI Act access to the EU's single market; and
  2. by preventing companies within the EU from developing and marketing AI systems that do not comply with the AI Act both within and outside the EU.

Public trust and confidence is important in influencing the uptake of AI systems

A key component influencing the uptake of AI systems is public trust and confidence in those systems. Public trust and confidence is likely to be positively influenced by the introduction of a strong legislative framework to regulate those systems. The EU's single market is one of the largest global trade markets and has the potential to be a significant market for the development and use of AI systems – particularly if public trust and confidence in AI systems is substantially increased through the implementation of the AI Act.

Since the implementation of the General Data Protection Regulation (GDPR) in 2018, the EU has led the development of privacy practices, by modelling a robust, human-rights focused approach to privacy regulation within the EU and through the extra-territorial impact of the GDPR on international companies that handle the personal information of individuals within the EU. It is foreseeable that the AI Act, once implemented, could have a similar impact on the development of AI systems, by influencing the behaviour of companies seeking to develop, sell and use AI systems within the EU.

Companies seeking to develop AI systems, regardless of where they are established, need to be mindful of the AI Act. Otherwise, a company may find that it is unable to access a significant international market with its product if it cannot comply with the requirements of the AI Act.

In Australia, the recent Supporting Responsible AI: Discussion Paper published by the Department of Industry, Science and Resources (Discussion Paper) discussed the AI Act as part of its assessment of international regulatory frameworks, and acknowledged the need for Australia to harmonise its regulation with the international context as far as possible, so that Australia can unlock the benefits of AI systems developed and used internationally. The Discussion Paper sets out, at a high level, a potential regulatory framework that takes a similar risk-based approach to the regulation of AI systems to that proposed by the EU. It is foreseeable that, as the process to develop AI regulation in Australia progresses, any legislation eventually implemented could be similar to the model used in the EU, to ensure Australia's ability to access AI systems developed globally. This means that Australian companies can gain some insight into what potential legislation in Australia may look like by considering the AI Act and the impact it would have if implemented domestically.

If you would like to read more about the AI regulation being considered by the Department of Industry, Science and Resources, you can read our update, An AI balancing act – Australia's road to enhanced AI regulation.


As part of the ongoing discussion about the regulation of AI in Australia, bespoke AI regulation is being considered. The EU is a leader in this process globally and is likely to implement some of the first prescriptive AI regulation. This is a model that Australia can look to and learn from in its own journey to regulate AI. If you would like to discuss the impact of the AI Act on your organisation, or how you can engage in the ongoing discussion about regulating AI in Australia, please get in touch.
