Financial Accountability Regime: Implications for accountable entities' AI use

8 minute read | 29.04.2025 | Siobhan Doherty, Paul Kallenbach, Sam Burrett, Chelsea Gordon and Jennifer Dornan

Practical steps institutions and accountable persons (APs) should take and questions they should ask to support FAR compliance in relation to AI systems.


Key takeouts


  • As artificial intelligence (AI) transforms the financial services landscape, regulators are sharpening their focus on its responsible deployment.
  • ASIC and APRA have both placed AI governance firmly on their agendas, expecting organisations to strengthen their AI governance frameworks to appropriately manage risks that could harm consumers or undermine market integrity.
  • FAR’s principles-based approach requires accountable entities and APs to implement appropriate governance, control and risk management systems – which would extend to the management of AI-related risks.

Drawing on global insights, this article explores the implications of the Financial Accountability Regime (FAR) for AI use and offers practical guidance, while recognising that the pace of change will continue to present ongoing challenges for boards and senior executives.

FAR: A brief overview

Following the Global Financial Crisis and various financial scandals, there was a sense that senior executives could evade accountability by claiming collective responsibility (i.e. ‘it wasn't me’, ‘it could have been anyone’ …).

In response, the UK's Senior Managers and Certification Regime (SM&CR) was introduced in 2016 with a view to clarifying the individual accountability of senior executives of financial institutions. The aim was to achieve more robust governance and risk management, and improved customer outcomes.

Building on a similar rationale, FAR applies accountability obligations to all Australian Prudential Regulation Authority (APRA) regulated institutions and their APs. FAR applied from March 2024 for authorised deposit-taking institutions (ADIs) (replacing the Banking Executive Accountability Regime, which had been in force since 2018), and from March 2025 for insurers and superannuation trustees.

Under FAR, the obligations of accountable entities and their APs include, broadly, acting with honesty and integrity, dealing openly with regulators, and taking reasonable steps to prevent matters that could harm the entity's prudential standing or reputation. FAR makes clear that 'reasonable steps' include:

  • appropriate governance, control and risk management systems;
  • safeguards against inappropriate delegation;
  • procedures for identifying and remediating problems; and
  • processes for responding to non-compliance.

FAR is administered jointly by the Australian Securities and Investments Commission (ASIC) and APRA.

AI and FAR

The growing use of AI in financial services presents new opportunities, but also brings increased risks and complexity. From algorithmic trading to risk identification, customer profiling and fraud detection, AI systems can improve efficiency, lower operational costs, and deliver better customer experiences and outcomes. However, AI also introduces challenges around transparency, fairness and operational resilience, and can just as easily undermine customer trust and outcomes if not properly managed. Beyond the risk of regulatory breaches, AI raises concerns about bias, lack of explainability, cyber vulnerabilities, intellectual property protection, and the management of personal information. These risks are further heightened by the rapid pace of AI development.

While APRA and ASIC have not yet released specific guidance on the application of FAR to AI, both regulators are actively considering the rise of AI within their respective mandates. ASIC has taken a more proactive approach, reviewing the use of AI among Australian financial services and credit licensees and identifying major 'governance gaps' where AI developments have outpaced the establishment of appropriate governance frameworks. APRA has been more measured, but has indicated that it will rely on existing regulations to ensure institutions maintain strong oversight of AI systems and effective risk management practices.

Regulators globally are also considering these issues. In the UK, the Financial Conduct Authority has published its views on how the UK's SM&CR (discussed above) applies to the use of AI by financial institutions. Its view is clear: the principle of individual accountability extends to the use of AI.

Regulators and institutions alike are now grappling with what this means in practice.

From a FAR perspective, the following principles are a likely starting point:

1. Existing regulation is principles-based and technology agnostic

The existing financial services regulation – such as FAR, CPS 234, CPS 230, financial services and credit licensing requirements and directors' duties – already provides a framework for governing AI, notwithstanding the absence of specific references to AI.

2. FAR requires clear allocation of responsibility for AI

As with any technology, FAR requires that there is clear allocation among APs of responsibilities for AI systems and their use, including the management of risks that are unique to AI.

3. FAR requires robust AI governance frameworks

FAR’s obligations would encompass taking reasonable steps, and using due care, skill and diligence, to ensure the safe and responsible use of AI so as to avoid harm to the institution's prudential standing and reputation. In practice, AI deployment is rarely a single point-in-time event. Most financial services organisations have already implemented AI systems across a range of functions, with these systems continually refined over time. As a result, effective AI governance must be both retrospective and ongoing.

Reasonable steps would include having (and implementing, testing and updating, as required) governance frameworks to manage AI risks, support clear disclosure about AI use, manage data (including personal information) and deploy AI in a manner that protects market integrity and is aligned with the culture and values of the institution. AI use and practice must be regularly tested against the expectations in the framework.

What this looks like in practice will depend on the relevant part of the business, the particular AI systems and their uses, the nature of the data underpinning the AI model, and the nature of the risks involved and their potential impact. While there will be common elements to robust AI governance frameworks, a tailored approach is required.

Defining AI: why it matters under FAR

Financial services regulators in Australia and around the world are adopting broad definitions of ‘AI systems’, encompassing everything from generative AI models like ChatGPT to traditional predictive algorithms and automated decision-making processes.

From a FAR perspective, this broad coverage has important implications. Entities must carefully define what constitutes an AI system within their business and ensure their FAR framework is applied consistently across all relevant systems – not only to new or high-profile AI tools, but also to legacy systems where appropriate.

Different types of AI may demand different ‘reasonable steps’ to manage risks. Traditional predictive models embedded in core banking systems, for example, may require ongoing monitoring for stability and fairness, while GenAI systems may present novel risks related to hallucination, data security, bias, or intellectual property.

A tailored, risk-based approach to AI governance, supported by clear internal definitions, is essential to meeting financial services legal and regulatory obligations, including those under FAR.

Key actions for accountable entities and their APs

To seek to align AI use with FAR obligations, accountable entities and APs should adopt a proactive, adaptive approach. There are a number of questions boards of accountable entities and APs more generally should be asking to test the extent to which AI use aligns with FAR and their other regulatory obligations. Many of these questions are challenging and evolving in real time.

Key actions include:

1. Define accountability for AI

As with any technology, clear lines of accountability should be established across the AI lifecycle. There should be clarity as to each AP's responsibilities in relation to AI systems (whether in relation to the development of the AI system, or the use of the technology). In practice, this may include assigning executive-level accountability for AI risk management across the organisation, as well as delineating responsibilities at a functional or system level.
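
By way of illustration only, the sketch below shows one way such an allocation might be recorded. The lifecycle stages, role titles and structure are hypothetical examples for this article; FAR does not prescribe any particular mapping, and a real allocation would reflect the entity's own accountability statements.

```python
# Illustrative only: a minimal map linking AI lifecycle stages to the
# accountable person (AP) responsible for each. Stages and role titles
# are hypothetical examples, not prescribed by FAR.
AI_ACCOUNTABILITY_MAP = {
    "development": {"accountable_person": "Chief Technology Officer",
                    "scope": "model design and training data selection"},
    "deployment":  {"accountable_person": "Chief Operating Officer",
                    "scope": "release approval and change management"},
    "monitoring":  {"accountable_person": "Chief Risk Officer",
                    "scope": "ongoing performance, drift and bias testing"},
    "remediation": {"accountable_person": "Chief Risk Officer",
                    "scope": "incident response and model rollback"},
}

def accountable_person_for(stage: str) -> str:
    """Look up the AP responsible for a given AI lifecycle stage."""
    return AI_ACCOUNTABILITY_MAP[stage]["accountable_person"]

print(accountable_person_for("monitoring"))  # Chief Risk Officer
```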

2. Map reasonable steps

Complex AI models can be difficult to interpret, raising questions about how APs can demonstrate reasonable steps to manage risks. This challenge is exacerbated by the speed at which AI is developing and its increasing capability and complexity. APs should be aware of the use of AI within their function or business area, for example via a well-maintained 'AI system register'. There should be clarity as to the 'reasonable steps' taken to manage risks at the relevant stages of the AI lifecycle.
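
A minimal sketch of what one register entry might capture follows. All field names and the example entry are hypothetical illustrations; a real register would be shaped by the entity's governance framework and its accountability map.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system register.

    Field names are illustrative only; a real register would reflect
    the entity's own governance framework and FAR accountability map.
    """
    system_name: str
    business_function: str            # e.g. "retail credit decisioning"
    accountable_person: str           # the AP responsible under FAR
    lifecycle_stage: str              # "development" / "production" / "retired"
    risk_rating: str                  # e.g. "low" / "medium" / "high"
    uses_personal_information: bool
    third_party_provider: str | None = None
    last_risk_review: date | None = None
    reasonable_steps: list[str] = field(default_factory=list)

# Example entry (fictitious)
register = [
    AISystemRecord(
        system_name="Fraud detection model v3",
        business_function="payments fraud monitoring",
        accountable_person="Chief Risk Officer",
        lifecycle_stage="production",
        risk_rating="high",
        uses_personal_information=True,
        last_risk_review=date(2025, 3, 1),
        reasonable_steps=["quarterly bias testing", "monthly drift monitoring"],
    ),
]
```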

3. Embed AI in risk management frameworks

FAR requires the identification and management of risks that could affect the entity's prudential standing or reputation. Given the rapidly evolving nature of many AI technologies, there is a need for regular testing of AI systems to detect biases, errors, or instability. Entities should integrate AI risks – such as model drift or data breaches – into their existing risk frameworks.
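
As one concrete example of such testing, the sketch below computes the Population Stability Index (PSI), a simple and widely used measure of drift between a model input's baseline and current distributions. The thresholds noted in the comments are industry rules of thumb, not regulatory requirements.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Population Stability Index (PSI): a simple, widely used drift
    measure comparing an input's baseline and current distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) in sparsely populated bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative rule of thumb (not a regulatory threshold):
# PSI < 0.1 stable; 0.1-0.25 monitor; > 0.25 investigate.
baseline = np.random.default_rng(0).normal(0.0, 1.0, 10_000)
current = np.random.default_rng(1).normal(0.3, 1.0, 10_000)
print(f"PSI = {population_stability_index(baseline, current):.3f}")
```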

4. Ensure transparency and fairness

Transparency is key to responsible AI use. A robust AI governance framework will require that AI-driven decisions, such as credit scoring or pricing, are explainable and do not unfairly discriminate against customers. Bias and discrimination should be carefully assessed in AI use case analysis, and appropriate guardrails implemented to mitigate risks, both prior to implementation and on an ongoing basis as AI systems evolve or are updated. A rigorous risk-assessment tool is vital to ensure non-obvious risks are not missed.
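
One common (though by no means sufficient on its own) fairness check is to compare approval rates across customer groups. The sketch below computes a disparate impact ratio on fictitious data; the 0.8 'four-fifths' threshold it references is a US-derived heuristic shown purely for illustration, not an Australian legal test.

```python
import numpy as np

def approval_rate(approved: np.ndarray, group: np.ndarray) -> float:
    """Approval rate within one customer group."""
    return float(approved[group].mean())

def disparate_impact_ratio(approved: np.ndarray,
                           group_a: np.ndarray,
                           group_b: np.ndarray) -> float:
    """Ratio of approval rates between two groups. A ratio well below 1
    suggests one group is approved materially less often."""
    return approval_rate(approved, group_a) / approval_rate(approved, group_b)

# Fictitious outcomes: 1 = approved; boolean masks mark group membership
approved = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
group_a = np.array([True] * 5 + [False] * 5)   # approval rate: A = 2/5
group_b = ~group_a                             #                B = 4/5

ratio = disparate_impact_ratio(approved, group_a, group_b)
print(f"Approval ratio (A / B): {ratio:.2f}")
if ratio < 0.8:  # 'four-fifths' heuristic; illustrative, not a legal test
    print("Potential disparate impact - investigate before relying on the model")
```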

5. Manage third-party AI providers

Many entities rely on third-party AI solutions, from cloud-based models to vendor-developed algorithms. Outsourcing does not absolve institutions of responsibility. Due diligence must be conducted on third-party providers, and outsourcing contracts should include robust protective mechanisms to support risk management, data and personal information security and auditability aligned with the entity's AI governance framework. This also reflects core obligations imposed on APRA-regulated entities under CPS 230.

6. Enhance and maintain AI literacy

Entities should train APs and staff on AI risks, fostering a culture where AI is used responsibly. Training can enhance awareness of the regulatory and operational challenges arising from how AI is used in the business. Maintaining AI literacy throughout the institution and among APs is vital to managing AI risk. Key questions include:

  • is there sufficient AI literacy within the business to properly monitor AI use and outcomes (i.e. to take the requisite 'reasonable steps')? This is likely to become an increasing challenge as AI systems become more advanced, and will need to be monitored.
  • what level of AI literacy is required for APs to appropriately manage AI risk and oversee AI governance? Is a focus on governance and outcomes enough, is a baseline understanding of AI sufficient, or is more required?

7. Prepare for regulatory scrutiny

APRA and ASIC are likely to review AI use as part of compliance reviews, particularly where systems impact prudential risks or consumer outcomes. Entities should maintain detailed records of AI governance processes, including risk assessments, testing protocols and accountability assignments, to demonstrate FAR compliance. They should be prepared to effectively engage with regulators in relation to AI use, AI risks and the organisation’s AI governance framework. To this end, written AI governance framework documents will be vital, not just to manage AI risk, but to demonstrate compliance to regulators.

8. Prepare to adapt quickly

The rapid rate of AI innovation may outpace existing frameworks, requiring entities and APs to adapt quickly. Moving to modular, principles-based frameworks may assist, enabling organisations to promptly update specific modules without needing to return to the drawing board. Maintaining AI literacy within the organisation will also assist. Ongoing proactive monitoring is required to identify gaps or weaknesses in existing governance approaches and specific existing AI use cases.

With AI’s increasing integration into operations, customer interactions, risk management, and even decision-making, accountable entities and APs must carefully consider how best to demonstrate compliance with their regulatory obligations in relation to AI. FAR (as part of the broader regulatory framework) places an onus on accountable entities and APs to govern AI use responsibly. To be able to demonstrate compliance, optimise customer outcomes, and maintain trust, accountable entities and APs should take steps to ensure that their AI governance frameworks are robust and adaptable as part of their broader governance and risk management arrangements.


Our specialist FAR Team has deep experience advising clients on BEAR and FAR implementation, accountability mapping, framework design, breach investigation and assurance. In conjunction with our nationwide AI Client Advisory Team, who assist clients in designing and implementing safe and responsible AI solutions, we are uniquely placed to assist you to ensure that your AI governance framework supports compliance with FAR and your other regulatory obligations.
