Financial Regulatory Developments Focus

The following posts provide a snapshot of selected UK, EU and global financial regulatory developments of interest to banks, investment firms, broker-dealers, market infrastructures, asset managers and corporates.

  • Financial Stability Board Report on Financial Stability Implications of Artificial Intelligence
    November 14, 2024

    The Financial Stability Board has published a report outlining recent developments in the adoption of AI in finance and their potential implications for financial stability. The report notes that AI offers benefits such as improved operational efficiency, regulatory compliance, personalised financial products, and advanced data analytics. However, AI may also amplify certain financial sector vulnerabilities and thereby pose risks to financial stability. According to the FSB, AI-related vulnerabilities with the potential to increase systemic risk include: (i) third-party dependencies and service provider concentration; (ii) market correlations; (iii) cyber risks; and (iv) model risk, data quality, and governance. In addition, generative AI (GenAI) could increase financial fraud and disinformation in financial markets. Misaligned AI systems that are not calibrated to operate within legal, regulatory, and ethical boundaries can also engage in behaviour that harms financial stability. From a longer-term perspective, AI uptake could drive changes in market structure, macroeconomic conditions, and energy use that may have implications for financial markets and institutions.

    The report notes that existing regulatory and supervisory frameworks address many of the vulnerabilities associated with AI adoption. However, more work may be needed to ensure that these frameworks are sufficiently comprehensive.

    Read more.
  • UK Financial Conduct Authority Seeks Views on Use of AI in UK Financial Services
    November 4, 2024

    The Financial Conduct Authority has launched a questionnaire on the current and future uses of AI in U.K. financial services and the financial services regulatory framework. The initiative is part of the FCA's AI Input Zone, which will help shape its future regulatory approach. Views are sought on: (i) what AI use cases firms are considering and what barriers are preventing any current or future adoption; (ii) whether current regulation is sufficient to support firms in embracing the benefits of AI in a safe and responsible way; and (iii) whether there are any specific changes to the regulatory regime or additional guidance that would be useful. The deadline for responses is January 31, 2025.

    The FCA has also opened applications for the first AI Sprint, which will be taking place in January 2025.
  • Bank of England Speech on Artificial Intelligence and Financial Stability
    October 31, 2024

    The Bank of England has published a speech by Sarah Breeden, BoE Deputy Governor, Financial Stability, on AI and financial stability. In the speech, Ms. Breeden explores the novel features of Generative AI, and how financial stability can be upheld whilst harnessing its potential benefits for economic growth.

    Read more.
  • UK Financial Conduct Authority Announces Launch of AI Lab
    October 17, 2024

    The Financial Conduct Authority has published a speech by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, on ten years of FCA innovation. In the speech, Ms Rusu announced the launch of the AI Lab, which will support the FCA's mission of helping firms overcome challenges in building and implementing AI solutions, as well as supporting the U.K. Government's work on safe and responsible AI development. Ms Rusu explains that the AI Lab will play a critical role by providing AI-related insights, discussions, and case studies, helping the FCA to deepen its understanding of potential AI risks and opportunities.

    Read more.
  • UK Technology Working Group and Investment Association Report on Artificial Intelligence's Current and Future Uses in Investment Management
    October 10, 2024

    The U.K. Technology Working Group, supported by the Investment Association, published a report on the current and future usage of artificial intelligence in investment management. The U.K. Financial Conduct Authority and HM Treasury are observers on the Group and supportive of the agenda. The report outlines common use cases of AI, examines enablers and barriers for longer-term AI adoption, and makes recommendations for future AI integration in the investment management industry. Key recommendations include:
    • establishing regulatory clarity and consistency to enable developers and users of AI to plan and invest with confidence. This would include closer coordination between regulators and the further development of AI standards;
    • building a U.K. fintech ecosystem with strong international connections that investment management firms can leverage to gain access to innovative solutions, specialized knowledge, and valuable insights;
    • joint public and private sector action on AI-enabled fraud, to combat malicious actors and fight cybercrime and misinformation; and
    • managing systemic risk through collective understanding and identifying best practices in risk management. The changing profile of systemic risk in the financial sector should not be a reason to hold back from innovating.
  • Bank of England Establishes Artificial Intelligence Consortium
    September 25, 2024

    The Bank of England has announced the establishment of an Artificial Intelligence consortium. Its purpose is to provide a platform for public-private engagement to gather input from stakeholders on the capabilities, development, deployment and use of AI in U.K. financial services. Its specific aims are:
    • to identify how AI is or could be used in financial services, for example, by considering new capabilities, deployments and use cases as well as technical developments where relevant;
    • to discuss the benefits, risks and challenges arising from the use of AI. Such benefits, risks and challenges may be with respect to financial services firms or with respect to the wider financial system; and
    • to inform the BoE's approach to addressing risks and challenges, and promoting the safe adoption of AI.
    Membership of the consortium is at the BoE's invitation following a selection process. A call for interest explains that applications to join the consortium can be submitted until November 8, 2024.
  • EU Artificial Intelligence Act Published
    July 12, 2024

    Regulation (EU) 2024/1689 laying down harmonized rules on AI (known as the AI Act) has been published in the Official Journal of the European Union. It aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The AI Act defines four main players in the AI sector—deployers, providers, importers, and distributors—and establishes obligations for AI systems based on their potential risks and level of impact. The AI Act will enter into force on August 1, 2024. Most provisions will apply from August 2, 2026, but some rules will apply earlier: (i) prohibited AI systems will be banned from February 2, 2025; and (ii) penalties and the rules on general-purpose AI models will apply from August 2, 2025.
  • European Commission Consults on Artificial Intelligence in the Financial Sector
    June 18, 2024

    The European Commission has published a consultation on the use of artificial intelligence in the financial sector. The consultation aims to aid the Commission in developing guidance for the financial sector across all market areas in preparation for the expected adoption of the AI Act in July 2024. The survey covers three aspects: (i) general questions on the development of AI; (ii) specific use cases in finance; and (iii) the AI Act in relation to the financial sector, focusing on what the industry needs to implement the upcoming AI framework. The Commission has specifically requested input from those financial services companies that are actively providing or developing AI technology. The deadline for comments is September 13, 2024. Alongside the consultation, the European Supervisory Authorities will run a series of workshops providing a platform for stakeholders to exchange knowledge about the latest industry developments and present their progress on ongoing projects. The workshops will take place during the autumn, with registration closing on July 26, 2024. If sufficient progress is made, the Commission intends to publish a report on the findings and analysis of the main trends and issues arising from the use of AI applications in financial services.
  • EU Statement on the Use of AI in the Provision of Retail Investment Services
    May 30, 2024

    The European Securities and Markets Authority has published a public statement on the use of AI in the provision of retail investment services. When using AI, ESMA expects firms to comply with relevant Markets in Financial Instruments package requirements, particularly when it comes to organizational aspects, conduct of business, and their regulatory obligation to act in the best interest of the client.

    ESMA reminds firms that although AI technologies offer potential benefits to firms and clients, they also pose inherent risks, such as: (i) algorithmic biases and data quality issues; (ii) decision-making that is opaque to a firm's staff members; (iii) overreliance on AI by both firms and clients for decision-making; and (iv) privacy and security concerns linked to the collection, storage, and processing of the large amounts of data needed by AI systems.

    Read more.
  • International Organization of Securities Commissions Proposes Artificial Intelligence Requirements for Market Intermediaries and Asset Managers
    June 25, 2020

    The International Organization of Securities Commissions has issued a consultation on proposed guidance on the use of artificial intelligence and machine learning by market intermediaries and asset managers. The draft guidance is intended to assist IOSCO member jurisdictions in developing appropriate regulatory frameworks to mitigate the risks arising from the increased use of AI and ML by financial institutions. Comments on the draft guidance can be submitted until October 26, 2020.

    Read more.
  • European Commission Launches Strategy for Data and Artificial Intelligence
    February 19, 2020

    The European Commission has published a set of documents presenting its strategies for data and artificial intelligence. The main document is a Communication to the European Parliament, the European Council and relevant committees, entitled "A European strategy for data." The Communication describes the policy measures put forward by the European Commission for an EU data economy that aims to increase the use of, and demand for, data and data-enabled products and services in the EU over the next five years. The Commission argues for an attractive policy environment that provides for access to data, the flow of data across the EU, protection of personal data rights, and an open yet assertive approach to international data flows based on European values.

    Read more.
  • European Commission Publishes Report on Liability for Artificial Intelligence
    November 21, 2019

    The New Technologies formation of the European Commission’s Expert Group on Liability and New Technologies has published a report on liability regimes for artificial intelligence. The report discusses existing laws concerning liability for emerging digital technologies and describes how those laws could be improved to cater for the new risks and challenges associated with new technologies. The New Technologies formation is a panel that was established by the European Commission in March 2018 and was asked to examine existing EU liability regimes and make recommendations for amendments to take account of emerging digital technologies where necessary.

    Read more.
  • Financial Stability Board Considers Financial Stability Implications of Artificial Intelligence and Machine Learning in Financial Services
    November 1, 2017

    The Financial Stability Board has published a report prepared by experts from its Financial Innovation Network, which examines the potential financial stability implications of the growing use of artificial intelligence and machine learning by financial institutions. Data on the extent of adoption of this technology is still relatively limited, but the FSB considers that, overall, AI and machine learning applications show great promise for the financial services industry, provided that their specific risks are managed properly. Potential risks for financial stability should be monitored over the coming years as this technology is adopted and more usage data becomes available. These potential risks include the possibility of third-party dependencies, which may introduce new systemically important players outside the regulated sector. There is also a risk of new and unexpected forms of interconnectedness developing between financial markets and institutions. The FSB is also concerned that not all aspects of AI and machine learning methods can be interpreted, and it stresses the importance of adequately testing and training AI and machine learning applications with unbiased data and feedback mechanisms so that they behave as intended.

    The Report sets out a number of selected use cases and considers the possible effects of AI and machine learning on financial markets, financial institutions, consumers and investors along with a macro-financial analysis of issues such as possible market concentration and interconnectedness. The Report also considers the legal and ethical issues around AI and machine learning and sets out some preliminary thoughts on governance and the development and auditability of models.

    View FSB Report.

    View Press Release.