Over the past few years, we’ve seen a huge expansion in the use of digital technologies across society and across the economy. And this has affected all of us in some way: whether we’re shopping online or listening to music, streaming a film or engaging with friends and colleagues on social-media platforms, our lives have been transformed by remarkable innovation and technological advances. And Artificial Intelligence is increasingly at the heart of those innovative technologies.
Artificial Intelligence and Machine Learning are growing very rapidly in complexity, sophistication, and importance for society, the economy as a whole, and financial services in particular.
AI is already embedded in many of the things we use every day, from mobile phones and tablets, to online shopping platforms. It forms the core of many of the technologies that are becoming increasingly familiar, such as biometric devices or autonomous vehicles. And this is likely to accelerate as smaller and smarter devices become more widely used and embedded in homes and workplaces, in retail and consumer products, across the so-called Internet-of-Things and as the global context changes rapidly due to the COVID crisis.
And those same AI systems and processes are being used today across the financial sector to bring benefits to consumers, to firms, and to the UK financial system as a whole. AI-driven online platforms may, for example, help customers manage their money and savings more effectively, as well as make the processing of loan applications or insurance claims easier, quicker, and more transparent. Financial firms may, in turn, benefit from AI in making efficiency and productivity gains, in tailoring products to customers’ needs, and in more effective ways of combating financial crime. AI may also boost the overall efficient functioning and resilience of financial markets and the wider economy.
Clearly, the world is a very different place today from the one we started this year with; and while the COVID crisis has had, and will continue to have, a profound impact on the economy, and on the households and businesses that drive it, it has also increased and focussed interest in the potential uses of AI in tackling some of the many immediate problems and challenges precipitated by the crisis. For example, many businesses, including financial firms, are looking to use AI, sometimes combined with alternative data sources, for enhancing customer engagement, for driving more automation of internal processes, and for improving virtual working environments.
In a recent survey of the banks and insurers we regulate, conducted as part of our ongoing monitoring of the impact of COVID-19 by the Bank’s Fintech Hub along with colleagues from the PRA, around 45% of those firms that participated reported that the crisis has led to an increase in the importance of AI and data science applications for their future operations, with around 55% reporting no change, and none noting a decrease. The survey also looked at changes in firms’ investment plans and resource commitments for AI projects as well as areas where the crisis has had an impact on existing applications across different business lines.
While the use of AI has clear benefits in an increasingly data-driven economy, there are risks and challenges. And the impact of those benefits and risks will be felt at different paces and different depths at technological, social, corporate, and systemic levels. It’s useful to think of the risks and challenges in a hierarchical way, starting with those that may become apparent at the data level and building up to the level of models trained on those data; then, widening to the level of the firm, and finally, up to the level of the financial system as a whole. Overlaying this are the regulatory and legal challenges presented by AI.
Taking these in turn, data form the core of all AI models and good data management and data governance are essential in controlling issues with, for example, biases that may be embedded in the data. At the model level, we need to think about issues such as ensuring that models continue to perform under a wide range of different conditions and being able to explain the outputs of complex AI systems. Moving to the firm level, some of the key issues will revolve around risk management and governance structures; around accountability and appropriate controls. And finally, for the financial system as a whole, AI may amplify network effects such as unexpected changes in the scale and direction of market moves.
In terms of the regulatory challenges, it’s clear that policy thinking in this arena is also evolving rapidly. But the existing regulatory landscape is somewhat fragmented when it comes to AI, with different pieces of regulation applying to different aspects of the AI pipeline, from data through model risk to governance. Policy must strike a balance between high-level principles and a more rules-based approach. We also need to future-proof our policy initiatives in a fast-changing field.
With all that in mind, and in order to further our objective of promoting the safe adoption of AI in financial services, the Bank of England, along with the Financial Conduct Authority, is today launching the Artificial Intelligence Public-Private Forum. [Plans to set up this Forum were announced by the Bank in June 2019 as part of our response to the Future of Finance report.]
The purpose of the Forum is to further dialogue between the public and private sectors in order to better understand the use and impact of AI in financial services. The Forum, which is expected to run for one year, will consist of a series of quarterly meetings and workshops structured around each of three topic areas: Data, Model Risk Management, and Governance.
In terms of membership, the Forum will draw on the knowledge and experience of 21 leading AI experts from across the financial and technology sectors as well as academia; the Forum also has observers from the Information Commissioner’s Office and the Centre for Data Ethics and Innovation and more may be invited. The goal is to ensure that the Forum’s design reflects a wide variety of views. The full list of members can be found on our website.
The specific aims of the Forum are: firstly, to share information and understand the practical challenges of using AI in financial services, identify existing or potential barriers to deployment, and consider any potential risks or trade-offs; secondly, to gather views on areas where principles, guidance, or regulation could support safe adoption of these technologies; and finally, to consider whether, once the Forum has completed its work, ongoing industry input could be useful and, if so, what form this could take.
Those aims underline the fact that AI is a deep and multi-faceted field requiring a coordinated and collaborative approach from regulators. The knowledge, experience, and expertise of the Forum’s members and observers will be invaluable in helping us to contextualise and frame the Bank’s thinking on AI, its benefits, its risks and challenges, and any possible future policy initiatives.
The AI Public-Private Forum is committed to being transparent and will publish summary reports of all Forum and workshop discussions. It will also aim to deliver a final report of its findings and conclusions. However, the outputs of the Forum should not be considered as an indication of future policy by the Bank or the FCA. By being as open and transparent in our approach as possible, and by sharing the information of the Forum publicly with all its stakeholders, including other regulators, we hope to further the collective dialogue on how best to support safe adoption of AI in financial services.
Finally, I would like to welcome the members of the Forum and its observers, and thank them in advance for committing to and participating in this exciting and important project.