By now, you will have used (or at least heard a lot about) generative artificial intelligence (GenAI). In short, GenAI is a form of artificial intelligence that can generate seemingly novel content, derived from its training data. You may also be familiar with the ongoing headache it is causing from a copyright perspective for creative industries, the developers of GenAI tools, and its users. The main reason for this headache? Legal uncertainty. Copyright laws in most jurisdictions around the world are not equipped to handle the rapid adoption of GenAI. While policy makers continue to grapple with how to support innovation and user demands, as well as the legal interests of copyright owners, battles are being fought based on existing laws in courtrooms around the world. In this update, we have outlined the notable legal developments that have transpired in this space in the past 12 months, with some very interesting developments in just the last week.
Copyright and GenAI – The three critical issues
Copyright is an intellectual property right that protects the expression of ideas that have been reduced to some tangible form. This includes original works of authorship (like literature, paintings, scripts or software) as well as certain films, sound recordings and published editions of works. Copyright gives its owner several exclusive rights to do certain acts with respect to that material. These rights depend on the type of material protected, but generally include the right to reproduce the material and communicate the material to the public. In Australia, copyright arises automatically upon creation of an original work.
GenAI refers to AI systems (such as OpenAI’s GPT models, DALL-E and Stable Diffusion) that produce new content, often in response to user prompts. These models “learn” from their training data to identify patterns and features, which they later use to generate seemingly novel outputs.
Three critical (and complex) copyright issues arise in connection with the use of GenAI tools:
- Ingesting data: does the initial training of a GenAI model involve infringing a copyright owner's exclusive rights?
- Outputs: do the outputs generated by a GenAI model infringe the copyright owner's exclusive rights?
- Copyright subsistence: can the GenAI outputs themselves be protected by copyright? For example, if someone in your marketing team uses GenAI to produce a new jingle or graphic, can your company claim copyright in that creation, and prevent a competitor from using it?
We explored each of these issues (and more) in detail in our previous articles: Unleashing AI: your copyright playbook and ChatGPT and the legalities of language generation.
So, what has transpired in the last 12 months?
Australia
Despite the establishment of a "Copyright and AI Reference Group" (CAIRG) in late 2023, Ministerial Roundtable sessions, Senate Committee reports and Productivity Commission inquiries, we are yet to see any proposed amendments to the Copyright Act 1968 (Cth) (Copyright Act) or specific legislation that governs GenAI and intellectual property rights. This means the Australian copyright position that was in place prior to the recent widespread use of GenAI has not meaningfully changed.
Notably, in November 2024, the Australian Senate Select Committee on AI recommended that the Australian Government ‘require the developers of AI products to be transparent about the use of copyright works in their training datasets, and require that the use of such works is appropriately licenced and paid for’, and that ‘the Australian Government urgently undertake further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems’. The Committee also recommended that the Government introduce a whole-of-economy piece of legislation, which aims to regulate high-risk uses of AI, and that there be a non-exhaustive list of defined high-risk AI uses, which includes the use of general purpose AI models. Despite these recommendations, we have still not seen any action.
The Australian courts are also yet to hear or determine any dispute that squarely deals with the intersection of GenAI and copyright. The lack of proceedings in this jurisdiction may well be because Australian copyright owners face a territorial barrier: under the Copyright Act, infringing conduct is limited to conduct that occurs in Australia. Given most GenAI models have been developed overseas, the preliminary question as to whether ingesting data to train a model constitutes infringement relates to conduct that has occurred offshore.
This is where overseas developments are relevant: decisions of foreign courts and/or the legislation introduced in other countries will likely influence the direction in which Australia is headed, particularly given the role of international treaties that govern intellectual property rights.
With that in mind, here are some of the most recent developments in the UK, the US, and the EU.
Foreign updates
United Kingdom
The Getty Images v. Stability AI trial commenced in early June in the UK High Court. The case was set down for a three-week hearing and originally concerned allegations that Stability AI unlawfully reproduced millions of Getty’s photos (and metadata) to train its image-generating AI tool, without a licence. Stability AI denies these claims. However, in a major development in June 2025, on the first morning of the parties' closing submissions, Getty Images dropped its primary copyright infringement claims. It will no longer allege that (1) copyright infringement occurred when Stable Diffusion was being trained in the UK, or that (2) copyright infringement occurs where substantial parts of copyright material are reproduced in Stable Diffusion outputs in the UK. While it will continue to claim trade mark infringement, passing off, and secondary infringement, this move is considered a major setback for creators. In any case, the dispute is being closely watched, as it still has the potential to shape copyright licensing in the AI age and influence potential legislative reforms in the UK.
Separately, the UK Government's public consultation on copyright and AI closed in recent months. As part of the consultation, the Government proposed four pathways that the country could take to address the legal uncertainties surrounding AI model training and copyright. In very brief terms, the options were to (1) "do nothing"; (2) mandate express licensing for the training of AI models; (3) broaden the existing (and UK-specific) text and data mining exception so that copyright works can be used for AI training without permission; or (4) broaden the same exception, except that copyright owners would be able to withhold their consent (i.e., "opt out") from such use of their works. The Government is currently reviewing the (overwhelming number of) submissions, before it decides what steps to take next.
The Data (Use and Access) Act 2025 (UK) was passed by UK Parliament on 19 June 2025. Amongst other things, the Act reforms the powers afforded to the Information Commissioner's Office, introduces digital verification services, and reforms the UK data protection framework. Notably, the Act also contains a number of copyright-specific provisions (which were added to the bill in the course of lengthy and heated deliberations between the Upper and Lower Houses of Parliament).
The copyright-specific provisions impose obligations on the Secretary of State, including to:
- publish an assessment of the economic impact of each of the four policy options described in the UK Government’s recent Copyright and AI consultation paper, including the impact on copyright owners, developers and users; and
- publish a report on the use of copyright works in the development of AI systems, considering and making proposals in relation to:
  - technical measures and standards that may be used to control the use of works to develop AI systems and the accessing of works for that purpose;
  - the effect of copyright on access to and use of data;
  - the disclosure of information by developers about their use of copyright works to develop systems;
  - the granting of licences to developers to do acts restricted by copyright; and
  - ways of enforcing requirements and restrictions relating to the use of copyright works (including enforcement by a regulator).
Pressure on the UK Government to engage in further legislative reform relating to copyright is likely to escalate in the coming months, including as a result of protests from prominent musicians including the likes of Sir Elton John and Sir Paul McCartney, who are concerned that their life's work is being misappropriated and used by the creators of GenAI models for free.
United States
Unsurprisingly, the United States has quickly become the preferred battleground for AI and copyright issues. The U.S. Copyright Office has been proactively releasing guidance for practitioners, and multiple lawsuits have been filed by groups of creators against GenAI companies.
The U.S. Copyright Office launched an initiative in early 2023 to examine the issues raised by the overlap between copyright law and AI. The office published a notice of inquiry in the Federal Register in 2023, and received more than 10,000 comments by the end of the year. It then commenced preparing a report on Copyright and AI, which is split into three parts. Part 1 relates to digital replicas, Part 2 focuses on 'Copyrightability' and Part 3 focuses on Generative AI Training. While Copyright Office reports do not carry the force of law, they can be persuasive and may suggest the direction in which U.S. policy is headed.
Curiously, Part 3 has only been made available by the Office in 'pre-publication' format, noting that a final version will be published 'in future'. Perhaps more curiously, the Director of the Office, Shira Perlmutter, was dismissed by President Trump the day after Part 3 (albeit in 'pre-publication' form) was made available. This is notwithstanding that Part 3 of the report provides a relatively balanced view on whether the fair use defence could cover the training of AI models (this defence is US-specific and permits the use of copyright material without a licence, where that use would be 'fair', taking into account four factors). Part 3 concludes that "various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what sources, for what purpose, and with what controls on the outputs – all of which can affect the market".
The dismissal of the Director of the Copyright Office is suggestive of the enormous stakes in this battle between big tech and creators.
Turning to the U.S. courts, various lawsuits have been filed and we have grouped the most relevant below according to the subject-matter to which they relate.
Journalists and authors:
Kadrey v. Meta Platforms, Inc. (U.S. District Court for the Northern District of California):
- In 2023, a group of 13 prominent authors commenced proceedings against Meta Platforms for copyright infringement, alleging that Meta used copies of their books to train its GenAI model, LLaMA.
- Meta applied for summary judgment primarily on the basis of 'fair use', arguing that its use of the copyrighted works was transformative and that there was no evidence of actual or potential harm to the market for the works.
- On 25 June 2025, Judge Vince Chhabria granted summary judgment in favour of Meta on the basis that the plaintiffs had made flawed legal arguments and had failed to develop a sufficient evidentiary case. In particular, his Honour held that the issue of market dilution was important in this context, but the plaintiffs presented no evidence of market dilution at all.
- His Honour noted that the ruling does not validate Meta’s conduct but rather reflects deficiencies in the plaintiffs’ legal approach.
- Given this case is not a class action, the ruling only affects the rights of the plaintiffs. As a result, the consequences of this ruling are likely to be limited. The Judge also emphasised that the ruling was based on a specific set of facts, stating that "in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission”. This leaves open the possibility for future cases to reach a different outcome.
- The plaintiffs had also made separate allegations that Meta had used pirated copies of their books to train its GenAI. The piracy allegations were not dealt with in the summary judgment. Instead, the Court has scheduled a case management conference for 11 July 2025 to discuss how these claims are to proceed.
The New York Times Company v. Microsoft Corporation (U.S. District Court for the Southern District of New York):
- The New York Times commenced proceedings against OpenAI and one of its leading financial backers, Microsoft, for copyright infringement and related claims in 2023, alleging that the defendants infringed its copyright by using its articles to train their large language models, including those behind the GenAI products ChatGPT and Copilot. The defendants argue that the Times' content was used under the fair use defence.
- On 26 March 2025, U.S. District Judge Stein denied parts of the defendants' motions to dismiss and allowed the copyright infringement claims to proceed, having regard to examples provided of ChatGPT reproducing material from New York Times' articles.
- On 3 April 2025, eleven lawsuits that were separately filed against OpenAI and Microsoft by:
- the Centre for Investigative Reporting, Inc.;
- various regional newspapers such as the Daily News; and
- multiple groups of authors, such as Sarah Silverman, Paul Tremblay, and Michael Chabon,
were consolidated by the U.S. Judicial Panel on Multidistrict Litigation under a transfer order. This was because the lawsuits involved common questions of fact arising from allegations against OpenAI and Microsoft in relation to the use of copyright works without consent or compensation.
This case will test the scope of the fair use defence in U.S. copyright law, with a focus on whether the use of the copyright content is transformative or causes market harm.
Bartz v. Anthropic PBC (U.S. District Court for the Northern District of California):
- In August 2024, author Andrea Bartz and others commenced proceedings against Anthropic (the company responsible for developing large language models by the name of "Claude"), alleging that it used their copyright works without permission to train the models.
- The case centres on the use of the open-source 'Books3' dataset, a collection of approximately 197,000 books used to train the Claude models. The plaintiffs allege that the dataset contained unauthorised copies of copyright books.
- On 23 June 2025, the Court granted Anthropic's motion for a summary judgment that the training of Claude was a fair use, but only in relation to the copyright books that Anthropic had purchased. The Court likened Claude to a human who learns from copyright material and found that the uses in question were adequately transformative, stating in the judgment, "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different".
- The case will now only proceed to trial in relation to Anthropic's use of pirated copies for the purposes of developing its AI model.
Nazemian v. NVIDIA Corporation (U.S. District Court, for the Northern District of California):
- Authors Abdi Nazemian, Stewart O’Nan, Brian Keene, Andre Dubus and Susan Orlean filed a class action lawsuit alleging that their copyright works were included in the Books3 dataset without authorisation and used to train NVIDIA's large language model 'NeMo'.
- NVIDIA has now filed its answer to the plaintiffs' complaints, denying that any copyright infringement has taken place on various grounds, including that there was a failure to state a claim and, in any case, that the fair use defence applies.
- Discovery closes in November 2025. This case is significant as it targets a hardware and infrastructure provider, as opposed to a model developer. Depending on the Court's decision, this case has the potential to broaden the scope of liability in AI copyright litigation.
Huckabee v. Bloomberg (U.S. District Court, for the Southern District of New York):
- Originally filed against Meta, Microsoft, EleutherAI, and Bloomberg, this case now proceeds solely against Bloomberg. Plaintiffs including Mike Huckabee (the former Governor of Arkansas) and others allege that Bloomberg misused thousands of books to train its large language model, BloombergGPT – a GenAI system for financial analysis.
- The cases concerning Meta and Microsoft were transferred to California, and EleutherAI was dismissed.
- This case is pending a ruling on Bloomberg’s request for oral argument on a motion to dismiss, which argues that the use of the copyright works falls within the fair use defence on the basis that any uses were transformative and for research purposes.
Images:
Andersen v. Stability AI (U.S. District Court, for the Northern District of California):
- A group of visual artists brought a class action against Stability AI, alleging that Stability infringed their copyright by using datasets of images scraped from the internet, which include the plaintiffs' works, to train its text-to-image GenAI model, Stable Diffusion.
- The plaintiffs allege that the production of new images based on the trained images creates infringing derivative works.
- A Discovery Status Conference was held in April 2025. The discovery period is expected to close in March 2026.
Getty Images v. Stability AI (U.S. District Court, for the District of Delaware):
- Getty Images commenced proceedings against Stability AI, an open-source AI company known for developing Stable Diffusion – a generative model that creates images from text and image prompts – for copyright infringement and related claims. Getty alleges that Stability AI has infringed its copyright by using Getty's image database without permission to train the Stable Diffusion model.
- It will be interesting to see if Getty Images also drops its primary copyright infringement claims in these proceedings, as it has done in the UK.
Music or audio:
Concord Music Group, Inc. v. Anthropic PBC (U.S. District Court, for the Northern District of California):
- This dispute arose when eight major music publishers alleged that Anthropic used song lyrics to train its AI model, Claude, and sought a preliminary injunction to prevent further use of their works. The publishers claimed both reputational and market-related harms stemming from the GenAI's training and output.
- The Court found that the requested injunction was overly broad, vague, and unenforceable, especially given the undefined and potentially massive scope of the copyright works involved. The Court denied the motion for a preliminary injunction without prejudice. However, it left open the possibility of future proceedings on the merits of the copyright infringement claims.
The decisions made by these creative companies to commence legal action against the tech industry stand in contrast to the approach taken by the likes of News Corp (owner of the Wall Street Journal and New York Post), Shutterstock (owner of extensive video libraries) and Condé Nast (owner of Vogue, Vanity Fair and Wired), which have opted to negotiate and execute licensing deals for the use of their content.
Finally, as a separate but related point, judgment was recently delivered by the U.S. District Court for Delaware in RossAI v Westlaw. This matter concerns an AI analytical tool (rather than a pure GenAI tool), but it casts doubt on the extent to which AI developers can rely on the fair use defence. In this case, the Court found that RossAI had indirectly obtained Westlaw’s curated case summaries (“headnotes”) to train its algorithm after Westlaw’s owner (Thomson Reuters) denied it a licence, and found that RossAI's use of Westlaw’s content to train the AI legal research tool was not fair use. This was because, among other things, RossAI's use was commercial, was not sufficiently transformative, aimed to create a competing legal research service, and posed a market substitute for Westlaw (thereby threatening Thomson Reuters’ market). Notwithstanding the legal differences between US and Australian copyright law, this judgment is of particular interest because the US-specific defence in question is generally considered more flexible than the Australian 'fair dealing' defences (which require that a use be for one of the purposes prescribed in an exhaustive list).
European Union
The EU has taken a more regulatory-focused approach, attempting to clarify the rules around using GenAI within its existing copyright framework and through new regulations.
The AI Act (Regulation (EU) 2024/1689) is a sweeping regulation for AI systems, which entered into force in August 2024 (AI Act). The copyright rules within the AI Act are designed to interface with text and data-mining exceptions that were introduced by the 2019 EU Directive, Copyright in the Digital Single Market (CDSM). For context, the CDSM contains a relatively flexible text and data-mining (TDM) exception under Article 4, which essentially allows data miners to use copyright-protected works or other subject-matter, including for commercial purposes, provided the rightsholder has not expressly reserved their rights. Unlike in the UK, this exception is not limited to TDM for the purposes of scientific research.
The AI Act obliges general purpose AI (GPAI) model providers to put in place a policy to identify and give effect to (including through state-of-the-art technologies) any rightsholders' reservations (i.e. “opt-outs”) expressed pursuant to Article 4 of the CDSM.
The AI Act also introduces a second obligation, providing that GPAI model providers must prepare and make publicly available a sufficiently detailed summary about the content they use for the training of their GPAI model(s), according to a template provided by the AI Office.
One of the controversial aspects of the AI Act, from a copyright perspective, is that one of the recitals requires GPAI model providers to, in effect, put in place a policy to comply with EU law, and in particular to identify and comply with the reservation of rights expressed by rightsholders pursuant to Article 4(3) of the CDSM, "regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those GPAI models take place". Critically, this approach does not reconcile with a foundational principle of copyright law, namely that copyright protection (and any exclusive right conferred) is granted on a territory-by-territory basis, so that if copyright infringement occurs in a particular country, the copyright laws of that country apply. While recitals are not binding in and of themselves, they nonetheless serve to explain the objective(s) of a regulation. There is clearly an attempt by the EU to level the playing field among providers of GPAI models, such that no single provider gains a competitive advantage in the EU market by applying lower copyright standards than those provided for in the EU.
From a judicial perspective, the Budapest District Court has very recently referred four major questions to the Court of Justice of the European Union (CJEU), which arose in Like Company v. Google Ireland (Case C-250/25). These include:
- Communication to the public: Does the display, in the responses of a large language model (LLM)-based chatbot, of a text partially identical to press articles that exceeds the “use of individual words or very short extracts,” constitute an act of communication to the public under Article 15(1) of the Directive on Copyright in the Digital Single Market (DSM Directive) and Article 3(2) of the Copyright and Information Society Directive 2001 (InfoSoc Directive)?
- Reproduction right: Does the training of an LLM-based chatbot, by learning and modelling linguistic patterns from protected works, constitute an act of reproduction under Article 2 of the InfoSoc Directive and Article 15(1) of the DSM Directive?
- Scope of the text and data mining exception: If such training is deemed reproduction, is it covered by the TDM exception provided in Article 4 of the DSM Directive?
- Liability for outputs: Can the reproduction of protected press content by a chatbot in response to a user prompt, which either quotes or refers to the original publication, be attributed to the chatbot provider under Article 2 of the InfoSoc Directive and Article 15(1) of the DSM Directive?
The CJEU's ruling will be hotly anticipated.
What this means for Australian businesses
If you are leveraging GenAI in Australia, either by developing your own AI models, customising them, or heavily relying on GenAI content from commercial models, there are a few key issues you should consider:
1. Do not assume that copyright is a non-issue
Because no broad "fair use" defence exists in Australia, using copyright material without permission, or without the benefit of an exception to infringement or statutory licence, is a high-risk practice. If your business is developing AI or relying on third-party AI services, make sure you know where the data comes from.
2. Address intellectual property ownership in contracts and policy
When engaging employees, contractors, or vendors to create content and they use GenAI, make sure your contracts clarify intellectual property ownership and usage rights.
3. Monitor legal developments and be ready to adapt
The next 1-2 years could bring significant legal changes in Australia. Organisations should develop fit-for-purpose AI governance, which includes appropriate assessment and monitoring, to ensure GenAI use aligns with intellectual property rights and obligations.
4. Leverage AI carefully and add human value
From a practical perspective, one way to address the copyright ownership concerns relating to GenAI content is to ensure human oversight and creativity remains in the loop.
5. Prepare for enforcement and licensing demands
International developments suggest that content owners are mobilising to either litigate or license.
Effective and responsible use of GenAI
GenAI offers incredible opportunities for innovation and efficiency, but it raises significant questions regarding legal accountability and the rights of copyright owners. Copyright law is trying to catch up, in Australia and abroad, to ensure that AI is not a lawless wild west.
The landscape in 2025 is one of rapid change – but if your business knows the rules, is alive to the global reforms, and mitigates the risks, it can harness GenAI effectively and responsibly.
Appropriate AI governance is vital to ensure legal rights and obligations in connection with GenAI are appropriately monitored and managed.
To learn more about our expertise in navigating the evolving intellectual property landscape and how we can assist your business, please contact us at any time.