After extensive deliberation and negotiation, the European Union (EU) recently marked a significant milestone in the regulation of artificial intelligence (AI) with the enactment of the Artificial Intelligence Act, Regulation (EU) 2024/1689 (EU AI Act). The EU AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024.
The EU AI Act aims to establish a harmonised framework for AI governance across EU member states. It does so by introducing a pioneering, risk-based legal framework to ensure that 'AI systems' are developed and utilised in a manner that appropriately reflects the established principles and fundamental rights in place in the EU.
The scope of the EU AI Act
What is an AI system?
Consistent with the OECD's definition of the term, Art. 3(1) of the EU AI Act defines an 'AI system' as:
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Who is affected?
The EU AI Act imposes new obligations on a number of key actors in relation to AI systems, each defined in Art. 3 of the EU AI Act, including:
- 'operators',
- 'providers',
- 'deployers',
- 'importers',
- 'distributors', and
- 'product manufacturers'.
These obligations extend to:
- providers that place AI systems on the EU market or put them into service in the EU, or that place general-purpose AI models (GPAIMs) on the EU market, regardless of whether those providers are established or located within the EU or in a third country,
- deployers of AI systems that have their place of establishment or are located within the EU, and
- providers and deployers of AI systems that are established or are located in a third country, where the output produced by the AI system is used in the EU (Art. 2(1), see also Art. 2 for full scope).
These definitions are notably broad and, in some cases, will apply to actors that are located outside of the EU. This extraterritorial reach means that it is entirely conceivable that Australian organisations may fall within the definition of a 'provider' or 'deployer' of an AI system under the EU AI Act, which will then require them to comply with relevant obligations under the Act.
Despite its broad application, the EU AI Act also provides a number of exceptions, including that its obligations do not apply in the context of:
- open-source AI systems (unless they relate to prohibited AI practices or are classified as 'high-risk' AI systems (defined in greater detail below)) (Art. 2(12)), and
- AI systems used for the sole purpose of scientific research and development (Art. 2(6)).
Failure to comply with the requirements of the EU AI Act may attract significant penalties, as discussed further below.
Risk-based approach
Using a risk-based approach, the EU AI Act delineates between different categories of AI based on the potential harm they may cause. These categories include:
- prohibited AI practices,
- high-risk AI systems, and
- GPAIMs.
At a high level, under the EU AI Act, prohibited AI practices are banned outright, while high-risk AI systems and GPAIMs are each subject to separate sets of obligations depending on the classification into which they fall. These categories are discussed in further detail below.
Prohibited AI practices (Art. 5)
The EU AI Act deems that a number of 'prohibited AI practices' pose an unacceptable risk to important EU public interests and are therefore banned under the new law.
Some of these 'prohibited AI practices' include (non-exhaustively) AI systems that:
- deploy subliminal, purposefully manipulative or deceptive techniques to distort a person's behaviour,
- exploit vulnerabilities,
- classify individuals based on social behaviour,
- assess criminal risk through profiling,
- create unauthorised facial recognition databases,
- infer emotions in specific contexts,
- use biometric categorisation to deduce sensitive traits, and
- apply real-time biometric identification in public spaces.
High-risk AI systems (Chapter III)
The EU AI Act defines 'high-risk' AI systems as systems used as a safety component of a product (or otherwise subject to EU health and safety harmonisation legislation) and AI systems deployed in eight specific domains, which are set out in Annex III of the EU AI Act. The eight domains set out in Annex III are:
- biometrics,
- critical infrastructure,
- education and vocational training,
- employment, workers’ management and access to self-employment,
- access to and enjoyment of essential private services and essential public services and benefits,
- law enforcement, in so far as their use is permitted under relevant EU or national law,
- migration, asylum and border control management, in so far as their use is permitted under relevant EU or national law, and
- administration of justice and democratic processes.
However, an AI system will not be considered high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making (e.g., where the AI system is intended to perform a narrow procedural task or to improve the result of a previously completed human activity).
Irrespective of the above, an AI system referred to in Annex III will always be considered high-risk where it performs profiling of natural persons.
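To make this classification logic concrete, the following minimal Python sketch (illustrative only, and no substitute for legal analysis) models the Annex III test as a decision function. The domain labels, function name and boolean inputs are simplifications introduced here; in practice, 'significant risk' and the separate Art. 6(1) safety-component limb are legal assessments that cannot be reduced to flags.

```python
# Illustrative sketch only: a simplified reading of the Annex III high-risk
# test described above. All names and inputs are assumptions made for
# illustration; the underlying assessments are legal judgements, not flags.

ANNEX_III_DOMAINS = {
    "biometrics",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_workers_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_and_border_control",
    "administration_of_justice_and_democratic_processes",
}

def is_high_risk_annex_iii(domain: str,
                           performs_profiling: bool,
                           poses_significant_risk: bool) -> bool:
    """Simplified Annex III high-risk test (Art. 6(2)-(3))."""
    if domain not in ANNEX_III_DOMAINS:
        # Outside Annex III; the separate Art. 6(1) safety-component
        # limb is not modelled here.
        return False
    if performs_profiling:
        return True  # profiling of natural persons is always high-risk
    # Otherwise, the carve-out applies where no significant risk is posed,
    # e.g. a narrow procedural task.
    return poses_significant_risk

# A CV-screening tool that profiles candidates is always high-risk:
print(is_high_risk_annex_iii("employment_and_workers_management",
                             performs_profiling=True,
                             poses_significant_risk=False))  # True
```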
If an AI system is identified as high-risk under the EU AI Act, a range of obligations (listed in Chapter III, Section 2) apply throughout the lifecycle of that AI system (Art. 8), including in relation to:
- risk management systems (Art. 9),
- data training and data governance (Art. 10),
- technical documentation (Art. 11),
- record keeping (Art. 12),
- transparency and provision of clear user information (Art. 13),
- human oversight (Art. 14), and
- accuracy, robustness and cybersecurity (Art. 15).
The EU AI Act also provides for the process and criteria by which the European Commission may add to or modify the identification of high-risk AI systems listed in Annex III, where those systems:
- are intended to be used in any of the areas listed in Annex III (Art. 7(1)(a)), and
- pose a risk of harm to health and safety, or an adverse impact on fundamental rights that is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III (Art. 7(1)(b)).
General Purpose AI Models (Chapter V)
The EU AI Act also identifies GPAIMs as a distinct category of AI. A GPAIM is defined as:
an AI model that is 'trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications' (Art. 3(63)).
Despite this broad definition, the EU AI Act specifically excludes from the definition of GPAIMs 'AI models that are used for research, development or prototyping activities before they are placed on the market'.
GPAIMs with systemic risk
The EU AI Act further delineates between GPAIMs and GPAIMs 'with systemic risk'. A GPAIM will fall within the definition of a GPAIM 'with systemic risk' if it has 'high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks', or if it is identified as such by the European Commission (Art. 51(1)). A GPAIM will be presumed to have high impact capabilities when the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), is greater than 10²⁵ (Art. 51(2)).
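For a rough sense of what the 10²⁵ FLOP presumption means in practice, the sketch below checks a hypothetical training run against the threshold. The '6 FLOPs per parameter per training token' estimate is a common rule of thumb from the machine-learning scaling literature, not a method prescribed by the EU AI Act, and the model figures used are hypothetical.

```python
# Back-of-the-envelope check against the Art. 51(2) presumption threshold.
# The "6 FLOPs per parameter per token" estimate is a common rule of thumb
# from the scaling-law literature, not a method prescribed by the Act,
# and the figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2) presumption threshold

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
flops = estimate_training_flops(n_parameters=1e12, n_training_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                  # 6.00e+25
print(f"Presumed high-impact: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")  # True
```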
Obligations differ as between GPAIMs and GPAIMs with systemic risk. While all providers (and representatives of providers) of GPAIMs are subject to a number of obligations in relation to documentation and compliance more generally (Arts. 53 and 54), a provider of a GPAIM with systemic risk is additionally required to notify the European Commission if it becomes aware that its GPAIM qualifies, or will qualify, as one with systemic risk.
This notification must be submitted without delay, and in any event within two weeks of that requirement being met or it becoming aware that it will be met (Art. 52(1)). The European Commission will then publish (and continue to regularly update) a list of GPAIMs with systemic risk, subject to general requirements to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with EU and national law (Art. 52(6)).
Codes of practice
The EU AI Act empowers the EU AI Office to encourage and facilitate the drawing up of codes of practice at the EU level in order to contribute to the proper application of the EU AI Act, taking into account international approaches (Art. 56). Such codes of practice are intended to represent a 'central tool' for proper compliance with the obligations of providers of general-purpose AI models under the EU AI Act (Rec. 117).
Until a harmonised standard is published, providers of GPAIMs may refer to such codes of practice to demonstrate compliance with the obligations imposed on all providers of GPAIMs under the EU AI Act. Alternatively, if codes of practice or harmonised standards are not available, or if providers choose not to rely on them, providers of GPAIMs will need to be able to demonstrate compliance using alternative adequate means (Art. 53(4)).
Deepfakes
The EU AI Act directly addresses the use of AI to create deepfakes. The EU AI Act defines a deepfake as 'AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful' (Art. 3(60)).
Deployers of an AI system that generates deepfakes are required to disclose that the content has been artificially generated or manipulated, except where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, these transparency obligations are limited to disclosing the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work (Art. 50(4)).
Penalties and enforcement
The EU AI Act sets out an extensive penalties regime for non-compliance, including:
- for breaches of rules related to prohibited AI practices, maximum fines of up to 7% of total worldwide annual turnover for the preceding financial year or EUR35 million, whichever is higher (Art. 99(3)),
- for breaches of a number of other provisions, maximum fines of up to 3% of total worldwide annual turnover for the preceding financial year or EUR15 million, whichever is higher (Art. 99(4)),
- for the supply of incorrect, incomplete or misleading information to the relevant authorities, maximum fines of up to 1% of total worldwide annual turnover for the preceding financial year or EUR7.5 million, whichever is higher (Art. 99(5)), and
- for providers of GPAIMs that have intentionally or negligently infringed the EU AI Act or failed to comply with requests from regulators for documentation or information, maximum fines of up to 3% of total worldwide annual turnover for the preceding financial year or EUR15 million, whichever is higher (Art. 101).
In determining the means of enforcing the EU AI Act (and the extent of applicable penalties), EU member states are required to take into account the interests of SMEs, including start-ups, and their economic viability (Art. 99(1)). In a further attempt to ensure proportionality and avoid stifling innovation, for each of the first three penalty categories above, fines imposed on start-ups or SMEs are subject to the same maximum percentages or amounts, but capped at whichever of the two is lower, rather than higher (Art. 99(6)), as illustrated in the sketch below.
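As a worked illustration of the caps above, the hedged Python sketch below encodes the three Art. 99 fine tiers and the SME 'lower of the two' rule. The tier figures come from the Act as summarised above; the function and its example inputs are illustrative assumptions, not a compliance tool.

```python
# Illustrative sketch of the Art. 99 fine caps summarised above. The tier
# figures are from the Act; the function and its inputs are illustrative
# assumptions only.

FINE_TIERS = {
    "prohibited_practices": (0.07, 35_000_000),   # Art. 99(3)
    "other_obligations": (0.03, 15_000_000),      # Art. 99(4)
    "misleading_information": (0.01, 7_500_000),  # Art. 99(5)
}

def maximum_fine_eur(tier: str,
                     worldwide_annual_turnover_eur: float,
                     is_sme: bool = False) -> float:
    """Maximum fine: the higher of the two caps, or the lower of the two
    for SMEs and start-ups (Art. 99(6))."""
    pct_cap, fixed_cap = FINE_TIERS[tier]
    turnover_cap = pct_cap * worldwide_annual_turnover_eur
    return min(turnover_cap, fixed_cap) if is_sme else max(turnover_cap, fixed_cap)

# A provider with EUR 2 billion turnover breaching a prohibited-practice rule:
print(maximum_fine_eur("prohibited_practices", 2_000_000_000))            # 140000000.0
# The same breach by an SME with EUR 10 million turnover:
print(maximum_fine_eur("prohibited_practices", 10_000_000, is_sme=True))  # 700000.0
```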
Relevant implementation timelines
The EU AI Act officially entered into force on 1 August 2024; however, a number of its obligations apply on a staggered timeline. The EU AI Act will generally apply from 2 August 2026 (Art. 113), except for the following:
- Chapters I and II (i.e., general provisions, definitions, and rules regarding prohibited uses of AI) will apply from 2 February 2025 (Art. 113(a)),
- certain requirements (including notification obligations, governance, rules on GPAIMs, confidentiality, and penalties (other than penalties for providers of GPAIMs)) will apply from 2 August 2025 (Art. 113(b)). However, for GPAIMs placed on the EU market before 2 August 2025, providers have until 2 August 2027 to comply (Art. 111(3)), and
- Art. 6(1) (including obligations regarding high-risk AI systems) applies from 2 August 2027 (Art. 113(c)).
Until the relevant provisions of the EU AI Act come into operation, the EU AI Act encourages providers of high-risk AI systems to comply with obligations on a voluntary basis (Rec. 178).
How the EU AI Act relates to Australian organisations
As referred to above, if Australian organisations are considered to be 'providers' or 'deployers' of AI systems as defined under the EU AI Act, they may be subject to obligations under the EU AI Act. Moreover, in addition to its direct application to some Australian organisations, the EU AI Act also represents a landmark approach to the comprehensive regulation of AI systems, setting a precedent that is likely to influence or spur regulation in other jurisdictions.
Australia is no exception, and in recent months, there have been a number of developments in relation to the potential regulation of AI under Australian law:
- in November 2023, at the AI Safety Summit hosted in the UK, Australia joined the EU and 27 other countries in signing the Bletchley Declaration, committing to international collaboration on AI safety testing and the building of risk-based frameworks across countries to ensure AI safety and transparency,
- in January this year, the Australian Government provided its interim response to the Department of Industry, Science and Resources' Safe and Responsible AI in Australia consultation (Government Interim Response on AI), which resulted in the formation of an AI expert group,
- in February this year, Standards Australia announced the adoption of international standard ISO/IEC 42001, Information technology - Artificial Intelligence - Management System,
- in June this year, the Department of Finance released the National framework for the assurance of Artificial Intelligence in Government,
- in September this year, following the formation of the AI expert group under the Government Interim Response on AI, the Australian Government opened consultation on its Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Mandatory Guardrails for AI Proposal) and simultaneously introduced its new Voluntary AI Safety Standard (for more information about these developments, see our previous article Responsible use of AI: New Australian guardrails released),
- on 12 September this year, the Privacy and Other Legislation Amendment Bill 2024 (Privacy Act Reform Bill) was introduced into the lower house. The Privacy Act Reform Bill contains proposals to require APP entities to update their privacy policies to expressly outline where personal information will be used by a computer program (such as AI) to make a decision that ‘could reasonably be expected to significantly affect the rights or interests of an individual’ (for more information about this development, see our previous article On the road: Australia’s privacy law overhaul begins),
- on 19 September this year, the Digital Platform Regulators Forum released a working paper titled Examination of Technology: Multimodal Foundation Models (MFM Paper). The MFM Paper examines multimodal foundation models (MFMs), a type of generative AI that can process and output multiple data types (e.g., text, images and audio), and how various risks posed by MFMs are addressed under existing Australian laws (and the need for such laws to evolve as the technology itself changes over time), and
- on 15 October this year, the Australian Treasury Department released a consultation paper titled Review of AI and the Australian Consumer Law (AI ACL Consultation Paper). The AI ACL Consultation Paper aims to build on previous consultation in the Government Interim Response on AI and complement the Mandatory Guardrails for AI Proposal, with a specific focus on whether the consumer protections and remedies currently available under existing consumer law are appropriate for the evolving landscape of AI-enabled goods and services. The consultation process is currently open and scheduled to close on Tuesday, 12 November.
A prominent feature of the Government Interim Response on AI, the Mandatory Guardrails for AI Proposal, the MFM Paper and the AI ACL Consultation Paper has been the question of the appropriate mechanism for regulating AI in Australia. Proposed options include adapting existing regulatory frameworks to introduce additional guardrails on AI, or creating a new framework by introducing an Australian AI Act (similar to the EU AI Act). It remains to be seen which path the Federal Government will take.
Organisations required to comply with obligations under the EU AI Act should consider integrating the requirements of the EU AI Act into their planning now, including by amending data management policies or terms in contracts with suppliers and/or customers as necessary.
For today and tomorrow, whether you need protection against AI risks (including in relation to compliance with changing legal frameworks), or are shaping your organisation with responsible and ethical AI to enhance and elevate capability, our nationwide AI Client Advisory team will guide you through your AI adoption journey – from insight, to strategy and implementation.
Our AI expertise includes legal and policy, risk, workforce, privacy, data protection and cyber, procurement, strategy, and a co-creation model to develop tailored solutions for your organisation (ME AI). Operating with the highest standards of independence and trust as a firm for almost 200 years, our nationwide AI experts have the know-how and experience to help you make the best decisions, faster.