Date of meeting: 9 February 2026
Item 1: Welcome
Co-chair Sarah Breeden opened the session by welcoming attendees to the AI Consortium's (AIC) third quarterly meeting, held virtually. Sarah welcomed new Co-chair David Geale, the FCA's Executive Director for Payments and Digital Finance and Managing Director for the PSR, and congratulated member Harriet Rees on her recent appointment by the Government as an AI Champion for Financial Services.
David introduced himself, thanked departing members, and welcomed new members and an observer from Ofcom. He emphasised the importance of ensuring that regulatory and industry approaches to AI are safe and responsible.
Contributions during the session were made under the Chatham House Rule.
Item 2: Workshop presentations
Each workshop group provided an update on its progress, followed by discussion.
Workshop 1: Concentration risk
Workshop leads provided an interim update on their work on concentration risks arising from reliance on a small number of AI providers, models, and infrastructure. The workshop is exploring how such reliance can create vulnerabilities where firms have limited visibility over model design, performance changes and update schedules, constraining their ability to assess risk and maintain control. Members also discussed how the concentrated provision of compute capacity and specialist expertise could limit firms’ ability to respond to stress events or disruptions affecting widely used AI services.
Members discussed the limited control firms may have over updates and changes introduced by third‑party vendors, often at short notice. These dependencies were identified by members as a potential source of correlated risk, raising questions about resilience, assurance and effective oversight.
The workshop is also working with the Cross Market Operational Resilience Group (CMORG) AI Taskforce to consider how to improve transparency in the AI supply chain, that is, the components and services that AI systems depend on, such as hardware, cloud infrastructure, data, and foundation models.
Workshop 2: Evolution of AI edge cases
In their substantive update, the Evolution of AI Edge Cases workshop presented practical methods for identifying and controlling ‘AI edge use cases’ – novel, high impact AI applications. The members plan to explore the monitoring and governance systems needed to manage risks from applications that introduce greater autonomy for AI models in executing decisions.
Workshop members acknowledged that while advanced AI use cases can deliver important benefits, including efficiency gains and cost reductions, they can also introduce novel risks. Members suggested that agentic workflows may be used in material decision making and may act autonomously across multiple systems, potentially posing operational risks.
The workshop members were asked whether there is a shared understanding of what failure looks like in advanced AI use cases, in order to help identify higher risk “edge” applications. Members noted that failures can arise from a combination of speed, increased autonomy, and dependencies that create shared attack surfaces and correlated failure modes. These characteristics can create similar vulnerabilities and lead to correlated operational failures within and between firms, even where the underlying AI models differ.
Members also queried whether their approach of tailoring governance and controls to specific AI edge cases is compatible with the principle of technology neutrality. Some members noted that longstanding principles such as financial stability, consumer protection and data protection continue to apply regardless of technology. The workshop members noted, however, that for certain higher-risk edge cases, it may be appropriate for firms to implement system-specific control measures, such as predefined circuit breakers, to manage risks effectively.
Workshop 3: Explainability and transparency in generative AI
Workshop leads provided an interim update on their work which aims to clarify what meaningful explainability and transparency could look like for AI systems used in financial services.
One member highlighted that large language models (LLMs) can offer traceability, and that the ability to track an LLM's reasoning could be helpful in assessing how decisions have evolved when developing AI models. Members discussed how existing model risk management expectations, such as SS1/23, apply to AI systems and their components, including generative AI models, prompts, and retrieval layers. One member commended the user-research approach taken by the Government in developing the Algorithmic Transparency Recording Standard (Complete transparency, complete simplicity) as a potential model for thinking about how to communicate AI system behaviour clearly.
Some members explored how terms such as 'human in the loop' (HiTL) should be interpreted as systems become more autonomous. Members noted that maintaining a human in the loop may become increasingly strained as firms adopt agentic AI and move from back-office to market-facing applications. The workshop members were asked about the importance of consistent definitions of HiTL where use cases span different sectors and may require different forms of explainability, meaning a single approach may not be appropriate. One member cautioned against sector-specific definitions, since clarity is required for AI providers delivering tools and services across sectors. Another member noted the importance of distinguishing between explaining a system's overall behaviour (global explainability) and explaining its individual decisions or outputs (local explainability). The workshop leads confirmed that these concepts have been part of their discussions.
Workshop 4: AI-accelerated contagion
Workshop leads provided an interim update on their work exploring how AI adoption may alter contagion pathways across the financial system, with potential impacts such as price volatility, changes in participant behaviour, and system-wide disruption during periods of stress. The workshop is examining how increased automation, speed, and shared technical dependencies may affect transmission channels during financial market stress.
Members discussed how AI – especially agentic AI systems – may compress decision-making latency in ways that challenge traditional escalation and oversight mechanisms such as kill switches and circuit breakers. Some members questioned whether controls such as kill switches could unintentionally disrupt critical functions, providing the example of a kill switch that shuts off a system but simultaneously impedes payments across the financial system.
Members of this workshop also highlighted how reliance on shared infrastructure, cloud providers, and energy resources could interact with stress scenarios. Members noted that scenario analysis and wargaming are among the approaches available to explore how AI-driven systems might behave under stress.
Some members encouraged the workshop to consider whether AI-driven market scenarios genuinely pose novel risks compared with algorithmic trading more generally, though members also noted the important distinction that AI introduces non-determinism and scale effects that differ from traditional deterministic systems.
Item 3: Consortium discussion on key trends
The Co-chairs invited members to discuss potential implications for the financial system arising from firms' efforts to generate returns on AI investments (ROI). The discussion was framed around agent-to-agent commerce, agentic trading tools, reliance on third parties, and regulatory barriers or uncertainty.
One member noted that some commentary on ROI may over‑emphasise downside risks, observing that some financial firms are already seeing returns from their investments in AI adoption. It was suggested that capital is being deployed rapidly due to perceived first‑mover advantages, although there may currently be relatively limited areas in which AI can deliver value at scale. Other members questioned whether first‑mover advantage is as relevant in financial services as in the technology sector, suggesting that for financial firms, access to and governance of underlying data is a more significant differentiator for ROI than the speed of AI deployment.
Members discussed the relationship between risk and return in financial services use cases. It was suggested that current deployments tend to focus on lower‑risk, lower‑return applications, but that firms may, over time, move towards higher‑risk use cases with greater potential returns. One member proposed that attention should focus on new actors entering traditional systems, particularly in areas such as payments and commerce.
Members observed differences in adoption dynamics across firm sizes. Members perceived that smaller firms may be more willing to adopt a quicker, 'fail fast' approach and, as a result, may be less willing to build in-house models and more comfortable relying on third-party solutions. It was further noted that pressure to adopt AI quickly may make it challenging for smaller firms to develop governance arrangements and build the necessary skills and expertise at pace. By contrast, members noted that ROI may materialise more slowly in larger firms, which may face higher internal barriers to adoption.
Members debated the sufficiency of HiTL as AI systems become more complex. One member opined that as AI adoption increases, the number and types of errors may become difficult to detect, and real-time monitoring of individual components of a system (rather than only its outputs) may become necessary. Members acknowledged these governance challenges and noted that a principles-based approach could accommodate rapid technological change, including scenarios where agentic systems act autonomously or learn dynamically.
Several members discussed the importance of internationally harmonised regulatory approaches in encouraging responsible innovation.
Wrap up
The Co‑chairs thanked members for a constructive discussion and for their contributions outside formal meetings. They noted the importance of workshops continuing to develop practical, tangible outputs and of building a shared understanding of how AI investment, adoption and risk are evolving across the financial system.
The next AIC quarterly meeting was expected to take place in June 2026.
Attendees
Co-chairs & Moderators
Breeden, Sarah – Bank of England
Geale, David – Financial Conduct Authority
Members
Ahmed, Ratul – Commerzbank AG
Beliossi, Giovanni – Axyon AI SRL
Bhatti, Tanveer – Independent
Brink, Suzanne – Lloyds Banking Group
Buchanan, Bonnie Gai – University of Surrey
Daley, Sue – techUK
Dunmur, Alan – Allica Bank
Heffron, Sarah – JP Morgan
Hughes, Clara – Pension Insurance Corp
Jefferson, Michael – Amazon Web Services
Jones, Matthew – Nationwide Building Society
Kazim, Emre – Holistic AI
Li, Feng – Bayes Business School
Patel, Parimal – Independent
Pearce, Luke – Santander
Rees, Harriet – Starling Bank Limited
Rosenshine, Kate – Microsoft
Szpruch, Lukasz – The Alan Turing Institute
Taylor, Neil – Mastercard
Valane, Jeffrey – HSBC
Wade, David – Goldman Sachs
Xu, Justin – MillTech
Apologies
Kazantsev, Gary – Bloomberg LP
Mullins, Inga – Fluency
Pearce, Christopher – Ageas UK
Prince, Emily – LSEG
Observers
Fairburn, James – HMT
Ignatidou, Sophia – ICO
Underhill, Michael – Ofcom
Bank of England
Gharbawi, Mohammed
Graham, Georgette
Lee, Amy
Mutton, Tom
Financial Conduct Authority
Jordan, Vicki
Levett, Freddie
Simon, Christopher
Thorman, Libby