In a speech at the UK Financial Conduct Authority's (FCA's) conference on Governance in Banking, Bank of England (BoE) Executive Director James Proudman discussed the governance challenges associated with the automation of tasks — and more particularly the implementation of artificial intelligence (AI) and machine learning (ML) — in the financial services sector. In addition, Mr Proudman identified three 'principles for governance' derived from these challenges.
A high-level overview of his comments is set out below.
Why uptake of AI is of concern to the prudential regulator
Mr Proudman prefaced his comments by stating that 'The art of managing technology is an increasingly important strategic issue facing boards, financial services companies included. And since it is a mantra amongst banking regulators that governance failings are the root cause of almost all prudential failures, this is also a topic of increased concern to prudential regulators'.
[Note: In his Final Report on the Financial Services Royal Commission, Commissioner Hayne identified lack of sufficient oversight of technological systems/processes as a cause of some of the misconduct (including fee for no service conduct) and as unacceptable. See: Financial Services Royal Commission Final Report, Vol 1 at 112-116, 138-139. See also: FSRC Final Report: technology and data implications]
Scale of uptake: 'Indicative' results of a systematic survey into AI adoption in the UK financial services sector
Mr Proudman said that in March 2019, the BoE and the FCA sent a survey to more than 200 financial firms (including the most significant banks, building societies, insurance companies and financial market infrastructure firms) to gather evidence about the extent of adoption of AI/ML in the UK financial services sector.
The focus of the survey was to gain insight into the following.
- the extent to which firms have adopted (or are intending to adopt) artificial intelligence (AI) and machine learning (ML) within their businesses
- the extent to which firms have clearly articulated strategies towards the adoption of AI/ML
- the extent of barriers to adoption and what techniques and tools could enable safer use of the technology
- an assessment of firms' perceptions of the risks, to both their own safety and soundness as well as to their conduct towards customers and clients, arising from AI/ML
- the extent to which the appreciation of these risks has given rise to changes in risk management, governance and compliance frameworks
Some indicative results
Though the full survey results will not be released until Q3 2019, Mr Proudman outlined some 'indicative' results including the following.
- Most firms are using AI in some form (but expect use to ramp up over the next three years): Mr Proudman said that AI implementation amongst firms appears to be 'strategic but cautious' at this stage, with many firms reporting that they are currently building the infrastructure necessary for larger-scale AI deployment. He said that 80% of respondents are using ML applications in some form. The 'median firm' reported deploying six distinct such applications currently, and expected three further applications to go live over the next year, with ten more over the following three years.
- Large established firms seem to be most advanced in deployment.
- There is some reliance on external providers at various levels, ranging from providing infrastructure, the programming environment, or specific solutions.
- Barriers to AI deployment? These appear to be predominantly internal to firms (eg legacy systems and unsuitable infrastructure) rather than stemming from regulation.
- Approaches to testing and explaining AI are under development and it appears that there is currently a range of approaches in use. Firms said that ML applications are embedded in their existing risk frameworks but many said that new approaches to model validation (which include AI explainability techniques) are needed in the future.
- AI is mostly being deployed in risk management/compliance areas: 57% of firms regulated by the BoE reported that they are using AI applications in risk management and compliance areas, including anti-fraud and anti-money laundering applications. 39% of firms said that they are using AI applications in customer engagement, 25% in sales and trading, 23% in investment banking, and 20% in non-life insurance.
- Self-assessment of the impact of AI? Mr Proudman said that 'by and large', firms were of the view that, properly used, AI and ML would lower risks, for example in anti-money laundering, Know Your Customer (KYC) and retail credit risk assessment. However, he added that some firms said that, incorrectly used, AI and ML techniques could give rise to new, complex risk types, which could imply new challenges for boards and management.
Three governance challenges: data, accountability, pace of change
1. Data governance: risks associated with data quality and data use
The introduction of AI/ML poses significant 'ethical, legal, conduct and reputational' challenges around the collection and proper use of data, Mr Proudman said. For example, questions arise as to the accuracy of data and the accuracy of the models being used to analyse it, as well as questions of bias within the models (eg is data being used unfairly to exclude individuals or groups, or to promote unjustifiably privileged access for others).
Implication for boards? The governance principle to emerge from these challenges is that boards should attach priority to the governance of data, and more particularly to: a) what data should be used; b) how it should be modelled and tested; and c) whether the outcomes derived from the data are correct.
2. Accountability challenges (including attributing individual accountability under the SMR)
Relying on algorithms and thereby removing human judgement could make identifying the root cause of problems more difficult, Mr Proudman said. 'How would you know which issues are a function of poor design [inherent bias] — the manufacturer's fault if you have bought an "off the shelf" technology product — or poor implementation — which could demonstrate incompetence or a lack of clear understanding from the firm's management'. He added that, in the context of decisions made by machines which themselves learn and change over time, defining what it means for the humans in the firm to act with "reasonable steps"/"due skill, care and diligence" could become more challenging.
Mr Proudman said that these are questions that boards, not just regulators, 'will need to consider and be on top of'. More particularly, he said that 'firms will need to consider how to allocate individual responsibilities, including under the Senior Managers Regime'.
Implication for boards? The governance principle to emerge from these challenges is that the introduction of AI/ML 'does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them', implying that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.
3. Rate of change
As the rate of introduction of AI/ML in financial services looks set to increase, so too does the extent of execution risk that boards will need to oversee and mitigate, Mr Proudman said. More particularly, he said that the transition to greater AI/ML-centric ways of working carries 'major risks and costs arising from changes in processes, systems, technology, data handling/management, third-party outsourcing and skills', which will create demand for new skill sets on boards/senior management and necessitate changes in control functions/risk structures.
In terms of oversight, he said that given the complex interdependencies across firm functions entailed in the shift towards greater automation, there will be a need for a shift in approach — 'many of these interdependencies can only be brought together at, or near, the top of the organisation' he said.
Implication for boards? The governance principle to emerge from this is that boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.
[Source: Speech at the FCA Conference on Governance in Banking, James Proudman: Managing machines: the governance of artificial intelligence]