eSafety Commissioner issues notices to 4 AI companies

29 October 2025 | Chelsea Gordon, Dean Levitan

Australia's eSafety Commissioner has issued legal notices to four AI companion companies: Character Technologies, Inc., Glimpse.AI, Chai Research Corp, and Chub AI Inc.


Key takeouts


    The AI Companies have been asked to demonstrate compliance with the Australian Government's Basic Online Safety Expectations Determination under the Online Safety Act.

    The obligation for the AI Companies to respond is enforceable. Failure to respond can result in enforcement action, including civil penalties of up to $49.5 million.

    The four AI Companies use generative AI to create companion bots that simulate human-like conversations.

In an Australian first, on 23 October 2025 the Commonwealth eSafety Commissioner issued four legal notices to Artificial Intelligence (AI) companies that operate companion chatbots. These bots use generative AI large language models to simulate close personal relationships, and are marketed for emotional support, friendship and, in some cases, romantic companionship.

The eSafety Commissioner requires each AI Company to answer questions demonstrating compliance with the Australian Government's Basic Online Safety Expectations Determination (Expectations), and to report the steps they are taking to keep Australians safe.

In a public statement, the eSafety Commissioner said the AI Companies must explain how they are protecting children from exposure to a range of harms, including “sexually explicit conversations and images, and suicidal ideation and self-harm”. This follows proceedings commenced in 2024 in the Florida Federal Court, in which a United States mother brought a negligence and wrongful death claim alleging that use of an AI chatbot led her son to die by suicide.

What the Online Safety Act requires

Under the Online Safety Act 2021 (Cth) (the Act), online service providers are obliged to take reasonable steps to design systems that keep Australians safe, including by protecting children from exposure to age-inappropriate content. Online service providers must demonstrate how they are designing their services to prevent harm, not just respond to it.

Key obligations are set out in the Basic Online Safety Expectations (Online Safety (Basic Online Safety Expectations) Determination 2022 (Cth)) (Determination), and centre on the principle that service providers will take 'reasonable steps to ensure that end-users are able to use the service in a safe manner'. Service providers are expected to:

  • proactively minimise the extent to which material or activity on the service is unlawful or harmful; and
  • take reasonable steps to ensure that the best interests of the child are a primary consideration in the design and operation of any service that is likely to be accessed by children.

The Determination sets out examples of reasonable steps that could be taken by AI companies, such as:

  • developing processes to detect, moderate, report and remove harmful material;
  • for children's services, ensuring default privacy and safety settings;
  • ensuring assessments of safety risks and impacts are undertaken, and identified risks are appropriately mitigated; and
  • assessing whether business decisions will have a significant adverse impact on the ability of end-users to use the service in a safe manner and appropriately mitigating that impact.

Specific obligations for service providers with generative AI capabilities

Social media services, electronic services and designated internet service providers have additional obligations regarding generative AI capabilities. These providers must take 'reasonable steps' to consider end-user safety and incorporate safety measures in the design, implementation and maintenance of generative AI capabilities. They must also 'proactively minimise' the extent to which generative AI capabilities may be used to produce material or facilitate activity that is unlawful or harmful.

Examples of reasonable steps include:

  • ensuring assessments of safety risks and impacts are undertaken, and safety review processes are implemented through the design, development, deployment and post-deployment stages of generative AI capabilities;
  • providing education or tools to end-users to promote understanding of those capabilities and their risks;
  • ensuring, 'to the extent reasonably practicable', that training material for generative AI capabilities and models does not contain unlawful or harmful material; and
  • ensuring, to the extent 'reasonably practicable', that generative AI capabilities can detect and prevent the execution of prompts that generate unlawful and harmful materials (a minimal, illustrative engineering sketch follows below).
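
For illustration only, the sketch below shows one way the last of those steps might translate into engineering practice: a safety gate that screens prompts before generation and outputs after it. The Determination does not prescribe any particular implementation, and every name in this Python sketch (BLOCKED_CATEGORIES, moderate, safe_generate) is a hypothetical placeholder rather than any vendor's actual API.

# Hypothetical sketch of a generation safety gate. All identifiers are
# illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field

# Assumed policy categories a provider might screen for; a real system
# would use a trained moderation classifier, not substring matching.
BLOCKED_CATEGORIES = {"sexual content involving a minor", "self-harm instructions"}

@dataclass
class ModerationResult:
    flagged: bool
    categories: set = field(default_factory=set)

def moderate(text: str) -> ModerationResult:
    # Placeholder check: flags text containing a blocked category label.
    hits = {c for c in BLOCKED_CATEGORIES if c in text.lower()}
    return ModerationResult(flagged=bool(hits), categories=hits)

def safe_generate(prompt: str, generate) -> str:
    # Screen the user's prompt before it reaches the model ...
    if moderate(prompt).flagged:
        return "I can't help with that. Support services are available if you need them."
    reply = generate(prompt)  # 'generate' is the underlying LLM call
    # ... and screen the model's reply before it reaches the user.
    if moderate(reply).flagged:
        return "I can't continue this conversation."
    return reply

The point is not the mechanics of any one filter, but that the 'detect and prevent' expectation maps to concrete, auditable checkpoints in the generation pipeline, which providers can document when responding to a notice.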

The Determination also includes a core expectation that the provider will take reasonable steps to prevent children from accessing class 2 material (such as pornography and other high-impact material, including depictions of violence or high-impact drug use), including by implementing age assurance mechanisms, conducting child safety risk assessments, and implementing improved technologies to prevent access by children to these materials (Determination, Division 2, section 12).

Next steps

The obligation for the AI Companies to respond to the notices from the eSafety Commissioner is enforceable, and failure to respond can result in enforcement action, including civil penalties of up to $49.5 million. The eSafety Commissioner may then publish statements about the extent to which services are meeting the Expectations.

Actions for AI developers and online service providers in Australia

Online service providers have a positive duty to demonstrate that they are compliant with the Expectations.

Accordingly, social media services, electronic services and designated internet service providers should proactively review their practices and procedures to ensure appropriate processes are in place that satisfy the requirements of the Online Safety Act, including the Expectations. 

Where a service is generative AI enabled, providers should ensure they meet the additional expectations set out in the Determination, Division 2, section 8A, including by:

  • taking reasonable steps to consider end-user safety, and incorporating safety measures in the design, implementation and maintenance of generative AI capabilities; and
  • taking reasonable steps to proactively minimise the extent to which generative AI capabilities may be used to produce material or facilitate activity that is unlawful or harmful.

Clients are increasingly seeking legal and AI governance advice to ensure that the unique risks of generative AI products are properly managed. This is vital at a time when:

  • technological development is rapid;
  • generative AI scales risk; and
  • generative AI is probabilistic by nature, which heightens the risk of harm from inappropriate use or from exposure of children who do not properly understand the nature and risks of the technology.

MinterEllison’s leading AI Advisory team, together with our Media and Communications litigation team, is available to support providers to develop and implement AI responsibly and in accordance with legal requirements. We offer strategic guidance to help ensure your practices, products, and decisions remain safe, compliant and aligned with both federal and state or territory privacy obligations.
