Artificial Intelligence Consortium minutes – October 2025

Published on 18 December 2025

The Artificial Intelligence Consortium (AIC) provides a platform for public-private engagement to further dialogue on the capabilities, development, deployment, use, and potential risks of artificial intelligence (AI) in UK financial services. As stated in the AIC’s Terms of reference, the views expressed by the members in these minutes and any subsequent outputs do not reflect the views of their institutions, the Bank of England (Bank) or the Financial Conduct Authority (FCA). The activities, discussions, and outputs of the members should not be taken as an indication of future policy by the Bank or FCA.

Item 1: Welcome

Co-chair Sarah Breeden welcomed the members and observers to the AIC's second quarterly meeting, the first to be held in person. She noted that Sarah Pritchard, now Deputy CEO of the FCA, would hand over her role as AIC Co-chair to David Geale, the FCA's Executive Director for Payments and Digital Finance.

Sarah reminded attendees that all contributions during the session would be made under the Chatham House Rule.

Sarah outlined the four workshop groups established to address the topics discussed at the AIC launch on 2 May 2025:

  • Concentration risk in AI infrastructure providers
  • Evolution of AI edge cases
  • Explainability and transparency in generative AI
  • AI-accelerated contagion in financial markets

Item 2: Workshop presentations

Each group outlined its progress, including the workshop's problem statement, a summary of technical discussions to date, and proposed next steps.

Workshop 1: Concentration risk

This workshop had discussed how AI's increasing integration into UK financial services introduced a new category of risks, arising not only from the AI technology itself but also from the structure of the AI ecosystem.

The workshop had identified five key risk areas:

  • Concentration risk in third-party AI providers
  • Contagion and disruption from model updates
  • Capacity constraints and scalability challenges
  • The need for third-party assurance and minimum standards
  • Talent concentration and domestic capability gaps

Members noted an increasing reliance on a small number of third-party AI providers and the risks that could emerge as a result. The discussion also touched on the rise of AI agents and their potential implications for concentration risk, as these agents typically depend on models and infrastructure from a few dominant providers.

Workshop 2: Evolution of AI edge cases

This workshop had explored approaches to support the confident and responsible adoption of advanced AI in UK financial services, focusing on AI edge use cases – high-value applications where complexity introduced novel risks. Members noted that scenarios considered edge cases today could become business as usual in the future.

The workshop had highlighted several key perspectives, including the growing autonomy of AI models operating with limited human oversight. Members also observed that firms were facing pressure to demonstrate returns on AI investment, which could accelerate development timelines. Members noted that this reinforced the need to strike a careful balance between innovation and safety.

Workshop 3: Explainability and transparency in generative AI

This workshop had acknowledged challenges in defining explainability, particularly as explainability, interpretability, and transparency were often used interchangeably. While members agreed that explainability and transparency were essential for the trustworthy adoption of AI in financial services, the absence of consistent definitions could impede effective risk management.

To frame its approach, the workshop had adopted an outcomes-focused definition. The workshop intended to review existing domestic and international guidance on AI explainability and transparency to assess its relevance for financial services, and to identify the key characteristics of both concepts for AI use in financial services.

Some members cautioned against focusing too heavily on definitions, suggesting that industry should instead prioritise building models with inherent explainability, as this had not always been embedded in practice.

Workshop 4: AI-accelerated contagion

This workshop had discussed how AI-driven automation and interconnected decision-making could rapidly amplify shocks to the financial system. The workshop had emphasised the need to reflect on the long-term implications of AI without hindering competitiveness and innovation.

The workshop had identified three drivers of contagion risk:

  • Market dynamics: increasing use of similar vendors, models, strategies, and common data sources, which could lead to synchronised market moves and amplify volatility.
  • Operational resilience: dependency on a few critical vendors and infrastructure could create single points of failure.
  • Model concentration and homogeneity: widespread use of the same or similar models – even across different vendors – could result in correlated errors and the rapid propagation of flaws across institutions.

Members illustrated contagion risk with the example of multiple firms using the same AI models for coding support, producing very similar programs and thereby creating operational risk. Members queried how agentic AI might further exacerbate these risks, noting that the autonomy of agentic workflows and still-evolving interoperability protocols could accelerate the spread of flawed updates or misaligned actions across interconnected systems.

Item 3: Consortium discussion on key trends

Members discussed how emerging technical developments, such as Model Context Protocol (MCP), which provides an open-source standard for connecting AI systems, and small language models (SLMs), which are AI models capable of generating natural language content but smaller in scale and scope than large language models (LLMs), were influencing real-world use cases.

Members acknowledged that recent protocol developments were at an early stage but noted that, as they matured, they could significantly expand the range of accessible data, including historical and proprietary data sources. However, other members cautioned that this could also introduce data quality issues and, over time, a scarcity of training data. Synthetic data was highlighted by members as a potential means of closing these gaps in data adequacy.

Members then discussed agentic AI. Model drift, which describes the degradation of model performance due to changes in the data or in the relationships between input and output variables, was identified by members as a risk posed by the pace of agentic AI's development. Some members emphasised the complexity of identifying where drift was occurring within multi-modal agentic chains. Although members recognised that large-scale agentic use cases were currently limited, they agreed that governance and oversight would need to evolve to keep pace with these systems.

Members also explored model architecture, considering the benefits and limitations of SLMs compared to LLMs. Members noted that SLMs could offer benefits over LLMs, such as greater data privacy, as well as simpler implementation of guardrails and model performance evaluations. However, some members acknowledged that the models' inherently smaller size meant SLMs could be use-case specific and lack the flexibility of open-source frontier models.

Finally, concerns were raised about overreliance on LLMs to summarise documents or research without human accountability, which could pose risks to accuracy and integrity. Members acknowledged that extensive work on AI testing and evaluation was underway in the UK, which would be key to scaling proof-of-concept models.

Wrap up

Sarah Breeden closed the session, thanking participants for a constructive discussion.

The next AIC quarterly meeting was expected to take place virtually in February 2026.

Attendees

Co-chair & Moderator

Breeden, Sarah – Bank of England

Members

Ahmed, Ratul – Commerzbank AG 

Beliossi, Giovanni – Axyon AI SRL 

Bhatti, Tanveer – Independent

Buchanan, Bonnie Gai – University of Surrey 

Daley, Sue – techUK 

Dunmur, Alan – Allica Bank 

Hughes, Clara – Pension Insurance Corp 

Jefferson, Michael – Amazon Web Services 

Jones, Matthew – Nationwide Building Society

Kazantsev, Gary – Bloomberg LP 

Kazim, Emre – Holistic AI 

Li, Feng – Bayes Business School 

Mullins, Inga – Fluency

Patel, Parimal – Independent 

Pearce, Christopher – esure Group 

Pearce, Luke – Santander 

Prince, Emily – LSEG

Rees, Harriet – Starling Bank Limited 

Rosenshine, Kate – Microsoft 

Szpruch, Lukasz – The Alan Turing Institute 

Xu, Justin – MillTech 

Observers

Fairburn, James – HMT

Ignatidou, Sophia – ICO

Seiler, Chia – Ofcom

Bank of England

Gharbawi, Mohammed

Graham, Georgette

Lee, Amy

Mutton, Tom

Hall, Jonathan (External member of the Financial Policy Committee)

FCA

Bagri, Jasmine Kaur 

Jordan, Vicki

Simon, Christopher

Thorman, Libby

Apologies

Pritchard, Sarah (Co-chair) – FCA

Azid, Dominique – ICO

Croxson, Karen – CMA

Levett, Freddie – FCA

Valane, Jeffrey – HSBC 

Wade, David – Goldman Sachs