Financial Regulatory Developments Focus

The following posts provide a snapshot of selected UK, EU and global financial regulatory developments of interest to banks, investment firms, broker-dealers, market infrastructures, asset managers and corporates.

  • Bank of England Establishes Artificial Intelligence Consortium
    September 25, 2024

    The Bank of England (BoE) has announced the establishment of an Artificial Intelligence Consortium. Its purpose is to provide a platform for public-private engagement to gather input from stakeholders on the capabilities, development, deployment and use of AI in U.K. financial services. Its specific aims are:
    • to identify how AI is or could be used in financial services, for example, by considering new capabilities, deployments and use cases as well as technical developments where relevant;
    • to discuss the benefits, risks and challenges arising from the use of AI. Such benefits, risks and challenges may be with respect to financial services firms or with respect to the wider financial system; and
    • to inform the BoE's approach to addressing risks and challenges, and promoting the safe adoption of AI.
    Membership of the consortium is by invitation from the BoE following a selection process. A call for interest explains that applications to join the consortium can be submitted until November 8, 2024.
  • EU Artificial Intelligence Act Published
    July 12, 2024

    Regulation (EU) 2024/1689 laying down harmonized rules on AI (known as the AI Act) has been published in the Official Journal of the European Union. It aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The AI Act defines four main players in the AI sector (deployers, providers, importers, and distributors) and establishes obligations for AI systems based on their potential risks and level of impact. The AI Act will enter into force on August 1, 2024. Most provisions will apply from August 2, 2026, but some rules will apply earlier: (i) prohibited AI systems will be banned from February 2, 2025; and (ii) penalties and the rules on general-purpose AI models will apply from August 2, 2025.
  • European Commission Consults on Artificial Intelligence in the Financial Sector
    June 18, 2024

    The European Commission has published a consultation on the use of artificial intelligence in the financial sector. The consultation aims to help the Commission develop guidance for the financial sector across all market areas in preparation for the expected adoption of the AI Act in July 2024. The survey covers three aspects: (i) general questions on the development of AI; (ii) specific use cases in finance; and (iii) the AI Act in relation to the financial sector, focusing on what the industry needs in order to implement the upcoming AI framework. The Commission has specifically requested input from financial services companies that are actively providing or developing AI technology. The deadline for comments is September 13, 2024.

    Alongside the consultation, the European Supervisory Authorities will run a series of workshops providing a platform for stakeholders to exchange knowledge about the latest industry developments and present their progress on ongoing projects. The workshops will take place during the autumn, with registration closing on July 26, 2024. If sufficient progress is made, the Commission intends to publish a report on the findings, with an analysis of the main trends and issues arising from the use of AI applications in financial services.
  • EU Statement on the Use of AI in the Provision of Retail Investment Services
    May 30, 2024

    The European Securities and Markets Authority has published a public statement on the use of AI in the provision of retail investment services. When using AI, ESMA expects firms to comply with the relevant requirements of the Markets in Financial Instruments package, particularly with respect to organizational aspects, conduct of business, and their regulatory obligation to act in the best interest of the client.

    ESMA reminds firms that although AI technologies offer potential benefits to firms and clients, they also pose inherent risks, such as: (i) algorithmic biases and data quality issues; (ii) opaque decision-making by a firm's staff members; (iii) overreliance on AI by both firms and clients for decision-making; and (iv) privacy and security concerns linked to the collection, storage, and processing of the large amount of data needed by AI systems.

  • International Organization of Securities Commissions Proposes Artificial Intelligence Requirements for Market Intermediaries and Asset Managers
    June 25, 2020

    The International Organization of Securities Commissions has issued a consultation on proposed guidance on the use of artificial intelligence and machine learning by market intermediaries and asset managers. The draft guidance is intended to assist IOSCO member jurisdictions in developing appropriate regulatory frameworks to mitigate the risks arising from the increased use of AI and ML by financial institutions. Comments on the draft guidance can be submitted until October 26, 2020.

  • European Commission Launches Strategy for Data and Artificial Intelligence
    February 19, 2020

    The European Commission has published a set of documents presenting its strategies for data and artificial intelligence. The main document is a Communication to the European Parliament, the European Council and relevant committees, entitled "A European strategy for data." The Communication describes the policy measures put forward by the European Commission for an EU data economy that aims to increase the use of, and demand for, data and data-enabled products and services in the EU over the next five years. The Commission argues for an attractive policy environment that provides for access to data, the flow of data across the EU, protection of personal data rights and an open yet assertive approach to international data flows that is based on European values.

  • European Commission Publishes Report on Liability for Artificial Intelligence
    November 21, 2019

    The New Technologies formation of the European Commission’s Expert Group on Liability and New Technologies has published a report on liability regimes for artificial intelligence. The report discusses existing laws concerning liability for emerging digital technologies and describes how those laws could be improved to cater for the new risks and challenges associated with new technologies. The New Technologies formation is a panel established by the European Commission in March 2018 to examine existing EU liability regimes and recommend amendments, where necessary, to take account of emerging digital technologies.

  • Financial Stability Board Considers Financial Stability Implications of Artificial Intelligence and Machine Learning in Financial Services
    November 1, 2017

    The Financial Stability Board has published a report prepared by experts from its Financial Innovation Network, which examines the potential financial stability implications of the growing use of artificial intelligence and machine learning by financial institutions. Data on the extent of adoption of this technology is still relatively limited, but the FSB considers that, overall, AI and machine learning applications show great promise for the financial services industry, provided that their specific risks are managed properly. Potential risks for financial stability should be monitored over the coming years as this technology is adopted and more usage data becomes available. These potential risks include the possibility of third-party dependencies, which may introduce new systemically important players outside the regulated sector. There is also a risk of new and unexpected forms of interconnectedness developing between financial markets and institutions. The FSB is also concerned that not all aspects of AI and machine learning methods can be interpreted, and it stresses the importance of adequate testing and training of AI and machine learning applications with unbiased data and feedback mechanisms, so that applications behave as intended.

    The report sets out a number of selected use cases and considers the possible effects of AI and machine learning on financial markets, financial institutions, consumers and investors, along with a macro-financial analysis of issues such as possible market concentration and interconnectedness. The report also considers the legal and ethical issues around AI and machine learning and sets out some preliminary thoughts on governance and the development and auditability of models.
