Minimising misleading and deceptive exposure for Australian directors

8 minute read | 05.08.2024 | Dominick Seccombe and Adam Karras

Companies that oversell, mischaracterise or poorly define their AI capabilities risk exposing their directors and officers to liability under consumer law and to claims of breach of directors' duties.


Key takeouts


  • Public misstatements about a company's use of AI risk exposing directors and officers to liability under the Australian Consumer Law and to breaches of their duties under the Corporations Act.
  • In the United States, we have seen the first examples of AI-related securities class actions arising from a company's misrepresentation of AI capabilities.
  • We provide five key risk mitigation strategies for companies to help protect their directors and officers from liability arising from misleading and deceptive statements about their company's AI use.

False or misleading statements

The widespread adoption of AI across many Australian industries has generated new risks and exposures for directors and officers. In addition to the widely reported AI risks to cybersecurity, privacy and quality control, an emerging exposure for Australian directors and officers arises from public statements made regarding their company's use of AI.

In Australia, company directors who make false or misleading statements may be liable under the misleading and deceptive conduct provisions of Schedule 2 to the Competition and Consumer Act 2010 (Cth), being the Australian Consumer Law (ACL), and may also be in breach of their duties under the Corporations Act 2001 (Cth) (Corporations Act).

Earlier in 2024, ASIC successfully pursued a company for making false or misleading statements about certain environmental, social and governance matters (Australian Securities and Investments Commission v Vanguard Investments Australia Ltd [2024] FCA 308). Although this article focusses on directors' and officers' liabilities, there is a risk that companies could also face civil penalty actions brought by ASIC for false and misleading statements regarding AI, under ss 12DB and 12DF of the Australian Securities and Investments Commission Act 2001 (Cth).

Public statements about use of AI

As AI continues to be integrated into modern workplaces, companies are increasingly making public disclosures about how they have adopted AI to increase efficiency, improve work product and reduce costs. Such disclosures have created a new category of potential exposure for companies and directors, known as "AI washing". In short, AI washing is the practice of misstating a company's AI capabilities for the purpose of gaining a competitive advantage or improving the company's reputation in the market.

In the United States, the first AI-related securities class actions are underway, with shareholders alleging that they were misled or deceived by public disclosures about a company's use of AI. For example, on 21 February 2024, shareholders brought a securities class action against Innodata (an American technology consulting company), its CEO and other corporate officers. The shareholders allege that Innodata falsely represented to investors that it used proprietary AI-powered operations for data preparation, when it instead relied on offshore manual labour to digitise medical records and insurance data, and that it underfunded its AI research and development.

Similarly, Zillow (an American real estate company) is facing a securities class action for allegedly misleading shareholders with overly optimistic claims about Zillow Offers, an AI-powered tool touted as being able to accurately estimate the market prices of real estate. Zillow held such confidence in the tool that it began presenting its 'Zestimate' valuations as initial cash offers to purchase eligible properties. However, Zillow Offers' estimates were allegedly unreliable, partly because of changes in market dynamics during the COVID-19 pandemic. The class action claims this resulted in significant losses for the company, the wind-down of the Zillow Offers business, and a decline in the company's share price.

The US Federal Trade Commission has provided some instructive guidance on the types of misleading or deceptive AI-related claims that could be susceptible to enforcement actions. These include claims arising from:

  • exaggerating what a company's AI products can do;
  • making promises that a company's AI product does something better than non-AI products without adequate proof;
  • failing to identify known or likely risks associated with a company's AI systems; or
  • falsely asserting that a company's products or services utilise AI.

Liability of Australian directors for misleading AI-related claims

Whilst the ACL does not expressly contemplate AI, there is no exclusion for AI-powered products or services. Businesses that adopt and promote the use of AI in their provision of goods or services must therefore have regard to the ACL. Moreover, directors have specific duties under the Corporations Act, which oblige them to exercise their powers and discharge their duties:

  • in good faith, in the best interests of the company and for a proper purpose (s 181);
  • with reasonable care and diligence (s 180); and
  • without using their position or information to gain personal advantage (ss 182, 183).

As AI becomes a common tool of trade, there is a growing expectation for directors to be both technologically and AI literate in order to properly exercise their powers and discharge their obligations. This includes ensuring that their company does not breach any ACL provisions with misleading statements about AI.

In ASIC v RI Advice Group [2022] FCA 496, the Federal Court considered the liability of a company for failing to have adequate risk management systems in place to address cybersecurity threats, and held that RI Advice had contravened its obligations as an Australian financial services licensee under s 912A of the Corporations Act. Although that case concerned cybersecurity rather than AI, it signals that directors and officers who fail to implement practices that minimise foreseeable harm to the company, including harm caused by AI, risk contravening their statutory duty of care and diligence under s 180(1) of the Corporations Act by exposing the company to a risk of harm.

The personal liability of directors and officers for misleading and deceptive statements about the use of AI has not been specifically tested in Australian courts. However, it remains open, and indeed likely, that personal liability could arise where directors fail to exercise reasonable care and diligence in relation to the publication of such statements.

The ACL requires that all claims made about a product or service be true, accurate and capable of substantiation. Businesses must therefore both fully comprehend the mechanics of their AI tools and processes, and accurately articulate how they are used. Any company promoting AI, or products and services that utilise AI, must be careful not to make false or overreaching claims about the capability, accuracy or functionality of a product or service. The misleading and deceptive conduct provision of the ACL (s 18) renders it unlawful to engage in conduct, in trade or commerce, that is misleading or deceptive, or likely to mislead or deceive. If a company chooses to rely on the functionality of AI or an AI tool, it must genuinely understand the model in order to accurately report on, or advertise, that tool to the public. Otherwise, the company (and its directors) risk breaching the ACL or the directors' respective duties.

Risk and exposure mitigation

To mitigate exposure under the ACL and the Corporations Act for misleading and deceptive statements about a company's use of AI, we recommend the following:

  • Accurately define 'AI': To prevent allegations of falsely portraying AI and its applications, companies (and their directors) should establish clear definitions of their AI tools that reflect the true extent of the company's AI capabilities and are consistent across internal and external communication channels. Limiting inconsistencies between internal and external descriptions of AI tools reduces the risk of public statements being perceived as deceptive.
  • Conduct legal and technical review of public AI statements for accuracy and compliance: All public statements about a company's use of AI should be reviewed by relevant technical specialists and legal experts. The focus of any review should be the accurate description of the AI's capabilities, the nature of its use in the company, and consistency with internal descriptions of the AI. The legal review should consider whether the statements misrepresent or mischaracterise the tool's capabilities in a way that could mislead or deceive the audience.
  • Robust disclosure of AI-related risks: Implementing thorough risk disclosures about AI and its use may mitigate the potential for securities class actions or accusations of deceptive practices. This includes acknowledging that AI systems can occasionally produce erroneous outputs or malfunction. For example, any public disclosure about a company's implementation of a large language model (such as ChatGPT) should note that use of the model carries inherent risks of hallucination or malfunction.
  • Evaluate AI risks: For AI systems that present a high level of risk, it is advisable to carry out comprehensive evaluations to identify potential hazards, the consequences of those hazards, and the most effective mitigation strategies, and then to communicate the identified risks in any external disclosures relating to the AI systems. Such risks include breaches of privacy, cybersecurity incidents, and AI malfunction or hallucination. Companies need to appraise those risks accurately and adequately to avoid misleading representations about the accuracy or safety of their use of AI.
  • Reassess insurance: Directors should evaluate their current Directors and Officers (D&O) insurance coverage in light of their organisation's use of AI. As new risks emerge, existing D&O policies may fall short in covering AI-related activities, leaving the company exposed to risks it would typically expect to be insured against. For example, exclusions for breach of contract, dishonesty, professional services or regulatory matters, together with the conditions of the D&O policy, may effectively remove cover that would otherwise benefit the directors.

Please reach out at any time to discuss protecting your organisation's directors and officers from liability arising from misleading and deceptive statements about its use of AI.
