ASIC urges stronger AI governance for AFS and credit licensees

8 minute read  20.11.2024 Paul Kallenbach, Sam Burrett, Eliza Campain

We outline the eight key findings from ASIC's Report 798, Beware the Gap: Governance Arrangements in the Face of AI Innovation.


Key takeouts


  • Governance gap: ASIC's Report highlights a governance gap, with AI adoption outpacing risk management and governance frameworks, posing risks to consumers and market confidence.
  • Call for action: ASIC urges licensees to swiftly update governance and risk management frameworks to address AI-specific risks like algorithmic bias, transparency, data quality and ethics.
  • Regulatory compliance: Licensees must ensure AI usage complies with regulatory obligations, including consumer protection laws and directors' duties.

On 29 October 2024, the Australian Securities and Investments Commission (ASIC) released Report 798, 'Beware the Gap: Governance Arrangements in the Face of AI Innovation' (Report). The Report presents ASIC's findings following a review of AI usage among Australian financial services (AFS) and credit licensees. The review aimed to understand how licensees are currently using AI, their future plans for AI implementation, and the measures they are taking to mitigate the associated risks.

The Report highlights a major 'governance gap': the swift uptake of AI technologies is outpacing the establishment of risk management and governance frameworks. ASIC expressed concern that some licensees are not adequately prepared to address the challenges of their growing AI usage, noting that some licensees are only updating their governance arrangements concurrently with their increasing adoption of AI, rather than ahead of it.

ASIC's chair, Joe Longo, highlighted the need to tackle this issue, stating, "it is clear that work needs to be done - and quickly - to ensure governance is adequate for the potential surge in consumer-facing AI". Mr Longo also warned that "without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment and data security and privacy failures, all of which has the potential to cause consumer harm and damage to market confidence".

With the growing adoption of AI, ASIC urges licensees to update their governance and risk management frameworks now, to prevent the gap between AI usage and governance structures from widening further.

We outline ASIC's eight key findings below.

Scope and methodology of the review

The review analysed 624 AI use cases across 23 AFS and credit licensees who operate in the banking, credit, insurance and financial advisory sectors. It focused on use cases where AI directly or indirectly impacted consumers (excluding back-office functions and investing, markets and trading activities). The review covered various AI applications, such as advanced data analytics and generative AI.

Key statistics from the Report

  • AI adoption is increasing rapidly: 57% of all use cases reported were less than two years old or in development.
  • The adoption of generative AI is a recent development: 92% of generative AI use cases were deployed in 2022 or 2023, or were in development as at December 2023.
  • The pace of change is expected to continue: 61% of licensees told ASIC they planned to increase their use of AI in the next 12 months.
  • Use of more complex and opaque techniques is increasing: techniques for processing and analysing large volumes of image, audio and text data (including neural networks used in deep learning and generative AI) represent 32% of use cases under development.
  • Disclosure of AI use to consumers: Only 43% of licensees had policies that referenced disclosure of AI use to affected consumers.
  • Updating risk management policies or procedures to address AI risks: Around 50% of licensees had specifically updated their risk management policies or procedures to address AI risks. Other licensees relied on their existing policies or procedures without making changes.

Key findings from the Report

Use of AI findings

Finding 1: The extent to which licensees used AI varies significantly. Some licensees have been using forms of AI for several years, whilst others are early in their journey. Overall, AI adoption is accelerating rapidly.

Finding 2: While most current use cases leverage long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is increasing rapidly. This can present new challenges for risk management.

Finding 3: Existing AI deployment strategies are mostly cautious, including for generative AI. AI use cases focus on augmenting human decisions or increasing efficiency; AI is not being used to make autonomous decisions. Most use cases do not feature direct interaction with consumers.

Risk management and governance findings

Finding 4: Not all licensees have adequate arrangements in place for managing AI risks.

Finding 5: Some licensees assess risks through the lens of the business rather than the consumer. There are also gaps in how licensees assess risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias.

Finding 6: AI governance arrangements vary widely. There are weaknesses that create the potential for gaps as AI use accelerates.

Finding 7: The maturity of governance and risk management does not always align with the nature and scale of licensees’ AI use – in some cases, governance and risk management lags the adoption of AI, creating the greatest risk of consumer harm.

Finding 8: Many licensees rely heavily on third parties for their AI models, but not all have appropriate governance arrangements in place to manage the associated risks.

ASIC's focus on potential risks to consumers

The Report highlights that one of ASIC's main goals is to ensure AFS and credit licensees that are using AI have strong governance frameworks in place to reduce risks and protect consumers. The regulator is particularly focused on how licensees detect and handle AI-related risks, the quality of data used in AI systems, and how ethical aspects of AI adoption and use are integrated into AI governance.

The Report identifies several potential risks to consumers, including biases in AI decision-making, a lack of transparency, and concerns about data privacy. To manage these risks, ASIC suggests that licensees develop thorough risk management plans, ensure transparency in AI processes, and uphold high standards of data quality and ethical conduct.

Ensuring compliance with regulatory obligations

In addition to managing consumer risks, the Report emphasises that licensees must ensure their AI usage complies with current regulatory obligations, including general obligations for licensees, consumer protection laws, and directors' duties. The Report cites examples of the following obligations for licensees:

  • providing financial or credit services efficiently, honestly, and fairly;
  • avoiding AI practices that exploit consumer vulnerabilities or result in unconscionable conduct;
  • ensuring representations about AI usage, performance, and outputs are accurate and not misleading;
  • documenting, implementing, monitoring, and regularly reviewing compliance measures, especially when AI introduces new risks;
  • maintaining adequate technological and human resources for data integrity, confidentiality, and operational needs;
  • updating risk management frameworks to reflect changes due to AI;
  • remaining accountable for outsourced functions and ensuring appropriate service provider selection; and
  • ensuring that directors and officers exercise care and diligence in AI adoption and use, being mindful of AI-generated information and related risks.

MinterEllison's perspective – strengthening AI governance and risk management

The Report offers valuable insights into the current state of AI governance in the financial services and credit industry. In this section, we expand our lens beyond the Report to consider: additional AI governance challenges; the multifaceted AI-related legal risks that organisations must consider and address; and the actionable steps required to build a sustainable and responsible AI framework.

A widening governance gap

It is possible that ASIC's findings materially understate the current scale of the AI governance challenge facing the financial services industry. The Report is based on use cases that licensees were "using, or developing, as at December 2023". Since then, the governance gap has most likely widened significantly, due to three critical factors:

  • the increasing adoption of and investment in AI across financial services, in part driven by pressure on leaders to adopt AI to remain competitive and create efficiency;
  • the continuing advancement of AI capabilities, and the expanding complexity of AI use cases, with a trend towards more sophisticated, autonomous applications such as AI agents; and
  • an evolving regulatory landscape, including the Australian Government's proposed mandatory guardrails for high-risk AI and the accompanying Voluntary AI Safety Standard.

The widening gap creates significant risks for boards and executives. As AI use and capabilities expand beyond existing governance frameworks, organisations face increased exposure to commercial, regulatory and reputational risk. This heightens the need for urgent action from organisations to manage this widening gap and embed AI governance in their innovation processes.

A range of relevant risks

While the Report appropriately focuses on consumer risks, organisations need to consider a broader range of legal risks and issues. Our experience suggests three particular areas that warrant prompt attention:

  • Directors' duties and personal liability: the implications of AI development and deployment for directors are significant. Australian directors must increasingly consider the prospect of 'stepping stones' liability, whereby governance failures (including those related to AI systems) could lead to personal liability. Directors must exercise reasonable care and diligence, which may include understanding key AI risks; ensuring appropriate governance frameworks are in place; and ensuring appropriate oversight, reporting, and monitoring mechanisms for AI use cases. As AI systems continue to operate at scale and potentially affect multiple stakeholders simultaneously, the stakes for directors increase.
  • Privacy and data protection: Many organisations are grappling with the complexities of data protection and privacy in the age of rapidly scaling AI – particularly when using third-party AI models, as is the case in most Australian organisations. Key risks include protecting proprietary and client information (including personal information), ensuring compliance with data sovereignty regulations, and managing cross-border data flows. Given the scale of these risks, as well as the importance of data in extracting value from AI use cases, it is critical that leaders take proactive measures to protect privacy and data integrity.
  • Operational risks and CPS 230: Complex operational risks require particular attention for Australian organisations, particularly in financial services, where misinformation, business continuity, and systemic dependencies are a focus of APRA's forthcoming Prudential Standard CPS 230 Operational Risk Management. Organisations must ensure AI systems are supported by robust risk management practices, and that they have the capability to mitigate systemic risks across interconnected systems. Aligning AI governance with CPS 230 will also enhance operational stability and improve safeguards.

Recommendations and suggested actions

Taking the lead from the Report, and having regard to the expanded governance, compliance and risk lens discussed above, organisations that are continuing (or embarking on) an AI journey should consider doing the following:

  • Developing and implementing comprehensive AI risk management plans, including by conducting a detailed assessment of algorithmic bias, data quality and privacy issues, and other AI-related risks (one simple bias metric that such an assessment might include is sketched after this list);
  • Enhancing AI governance frameworks, including by establishing clear governance structures (such as oversight committees or roles dedicated to monitoring AI usage and associated risks); integrating AI-specific risks into broader enterprise risk management frameworks; regularly reviewing and updating these arrangements to keep pace with advancements in AI technology; and clearly defining accountability for AI usage across all levels of the organisation;
  • Improving transparency and consumer communication, including by creating and implementing policies for disclosing AI use to consumers in a clear and understandable manner (which also pre-empts upcoming requirements for AI-related disclosures as part of the first tranche of the Privacy Act reforms - see our detailed article, On the road: Australia’s privacy law overhaul begins, for more information on the reforms);
  • Strengthening data quality and ethical standards, by adopting rigorous data quality controls; developing and enforcing ethical guidelines for AI usage, covering areas such as fairness, accountability and non-discrimination; and continuously monitoring AI models for unexpected behaviours or outcomes and promptly implementing corrective measures;
  • Establishing robust third party management protocols, to ensure that third party AI providers meet risk, governance and compliance standards, and that these requirements are appropriately reflected in the organisation's contract with each such provider;
  • Investing in AI training and education for staff (including directors and officers) on AI technologies, governance practices and regulatory obligations; and
  • Engaging with ASIC, other regulators (such as APRA and the OAIC) and industry bodies, by actively participating in industry forums or initiatives focused on responsible AI usage, and by staying informed of evolving regulatory expectations and industry best practice.
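Picking up the first recommendation above, the sketch below illustrates one simple metric that an algorithmic bias assessment for a consumer-facing credit use case might include: comparing approval rates across demographic groups and flagging large disparities for human review. It is illustrative only; the group names, sample data and the 0.8 threshold (the common 'four-fifths rule' heuristic) are our assumptions for the example, and are not drawn from ASIC's Report.

    # Illustrative sketch only: one component of an algorithmic bias
    # assessment. The groups, data and 0.8 threshold ("four-fifths rule"
    # heuristic) are assumptions for this example, not ASIC requirements.
    from collections import defaultdict

    def approval_rates(decisions):
        """Approval rate per group, from (group, approved) pairs."""
        counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
        for group, approved in decisions:
            counts[group][0] += int(approved)
            counts[group][1] += 1
        return {g: a / t for g, (a, t) in counts.items()}

    # Hypothetical credit decisions for two demographic groups.
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
              + [("group_b", True)] * 55 + [("group_b", False)] * 45)

    rates = approval_rates(sample)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)                                   # group_a: 0.80, group_b: 0.55
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.69
    if ratio < 0.8:  # four-fifths rule heuristic
        print("Potential algorithmic bias: escalate for human review.")

A real assessment would of course go further (multiple fairness metrics, intersectional groups, tests of statistical significance), but even a simple parity check of this kind gives a governance or oversight committee something concrete to monitor and report against.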

By proactively addressing ASIC's key findings and implementing robust governance and risk management practices, AFS and credit licensees can not only bridge the governance gap, but can position themselves as leaders in ethical and responsible AI adoption. This can, in turn, enable them to increase consumer and market confidence in their offerings, and facilitate long-term resilience in an increasingly AI-driven financial landscape.

For further related insight, you can read ASIC's media release or listen to the Inside ASIC podcast – Episode 4: Tech regulation.


For today and tomorrow, whether you need protection against AI risks (including in relation to compliance with changing legal frameworks), or are shaping your organisation with responsible and ethical AI to enhance and elevate capability, our nationwide AI Client Advisory team will guide you through your AI adoption journey – from insight, to strategy and implementation.

Our AI expertise includes legal and policy, risk, workforce, privacy, data protection and cyber, procurement, strategy, and a co-creation model to develop tailored solutions for your organisation (ME AI). Operating with the highest standards of independence and trust for almost 200 years, our nationwide AI experts have the know-how and experience to help you make the best decisions, faster.
