A&O Shearman | FinReg | Bank of England and UK Financial Conduct Authority Findings on Third Survey of Artificial Intelligence and Machine Learning in UK Financial Services
Financial Regulatory Developments Focus

    November 21, 2024
    The Bank of England (BoE) published the findings of its third joint survey with the U.K. Financial Conduct Authority (FCA) on the use of artificial intelligence (AI) and machine learning in financial services. The survey builds on existing work to deepen the BoE's and FCA's understanding of AI in financial services, in particular by providing ongoing insight and analysis into AI use by BoE- and/or FCA-regulated firms.

    Points of interest include:

    • Use and adoption: 75% of firms are already using AI, with a further 10% planning to use AI over the next three years.

    • Third-party exposure: a third of all AI use cases are third-party implementations.

    • Automated decision-making: 55% of all AI use cases have some degree of automated decision-making, with 24% of those being semi-autonomous.

    • Understanding of AI systems: 46% of respondent firms reported having only 'partial understanding' of the AI technologies they use, versus 34% of firms that said they have 'complete understanding'.

    • Benefits and risks of AI: the highest perceived current benefits are in data and analytical insights, anti-money laundering and combating fraud, and cybersecurity. The areas with the largest expected increase in benefits over the next three years are operational efficiency, productivity, and cost base. These findings are broadly in line with those of the 2022 survey. Of the top five perceived current risks, four are related to data: data privacy and protection, data quality, data security, and data bias and representativeness. The risks expected to increase the most over the next three years are third-party dependencies, model complexity, and embedded or 'hidden' models. Cybersecurity is rated as the highest perceived systemic risk both currently and in three years, while the largest increase in systemic risk over that period is expected to come from critical third-party dependencies.

    • Constraints: the largest perceived regulatory constraint on the use of AI is data protection and privacy, followed by resilience, cybersecurity, third-party rules, and the FCA's Consumer Duty.

    • Governance and accountability: 84% of firms reported having an accountable person for their AI framework. Firms use a combination of different governance frameworks, controls, and/or processes specific to AI use cases.
