AI and the UK Financial Conduct Authority

Client Alert

Introduction

The UK Financial Conduct Authority (FCA) has elected not to introduce a bespoke AI rulebook and has instead applied its existing, outcomes‑focused framework to firms’ design, deployment and oversight of AI systems.1

For FCA-regulated firms, that means AI risk management in the UK is primarily a question of mapping AI use cases onto familiar regulatory building blocks: consumer outcomes (Consumer Duty), accountability (the Senior Managers and Certification Regime (SM&CR) and governance), systems and controls, outsourcing/third‑party risk, and operational resilience.

Significantly, the FCA is trialling its own use of AI in its review of live enforcement data, including suspicious activity reports, customer complaints and case files. This has the potential to transform how the regulator detects and disrupts financial crime and regulatory breaches, and its associated supervision and enforcement capabilities.

FCA: “Rules Apply to AI” Rather Than “AI‑Specific Rules”

The FCA’s approach to the regulation of AI is principles‑based, technology‑agnostic and grounded in its existing frameworks. It aims to give firms flexibility rather than prescriptive requirements, and the regulator has consistently stated that it wants to support the safe and responsible adoption of AI in UK financial markets, balancing innovation against risk while preserving benefits to consumers and markets. While this is certainly positive, a flexible, principles‑based approach cuts both ways: it also gives the FCA latitude to argue that a firm’s approach is non‑compliant.

In its 2024 AI Update,2 the FCA stated that it takes an evidence‑based view of AI’s benefits and risks, and emphasised that it will closely scrutinise firms’ systems and processes to ensure regulatory expectations are met.

Key FCA Conduct Expectations

The Consumer Duty

The FCA has identified the Consumer Duty as part of the framework relevant to using AI safely. Operationally, where AI influences product design, distribution, pricing, eligibility/creditworthiness, servicing or customer communications, the Consumer Duty’s outcome requirements mean that firms should ensure AI outputs are tested for fairness, suitability and foreseeable harm.
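
To make this concrete, testing AI outputs for fairness can include simple statistical checks on model decisions. The sketch below is illustrative only: the customer groups, the data and the four‑fifths ratio threshold are assumptions made for the example, not FCA‑mandated tests or figures.

    # Minimal outcomes-testing sketch: compare approval rates across
    # customer groups produced by an AI credit decisioning model.
    # Groups, data and the 0.8 ratio threshold are illustrative
    # assumptions, not FCA-prescribed tests or figures.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    rates = {}
    for d in decisions:
        grp = rates.setdefault(d["group"], [0, 0])  # [approved, total]
        grp[0] += d["approved"]
        grp[1] += 1
    approval = {g: a / t for g, (a, t) in rates.items()}

    # Flag any group whose approval rate falls below 80% of the best
    # group's rate (the "four-fifths" heuristic, used here as an example).
    best = max(approval.values())
    for g, r in approval.items():
        if r < 0.8 * best:
            print(f"group {g}: approval rate {r:.0%} vs best {best:.0%};"
                  " review for unfair outcomes")

A check of this kind does not establish compliance by itself, but it produces the kind of documented, repeatable evidence of outcomes testing that the Consumer Duty framework contemplates.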

The FCA’s emphasis on evidence‑based supervision and close scrutiny of systems and processes means that firms should document and monitor their use of AI, and their testing and remediation of any issues identified, in order to demonstrate compliance.

Accountability: SM&CR and governance ownership for AI

The FCA/Bank of England joint survey in 2024 reported that a large proportion of firms have a person accountable for AI, which is in line with FCA expectations.3 Firms should ensure their AI governance framework aligns with the SM&CR and assigns clear responsibility for (i) approving AI use cases, (ii) managing model/data risk, (iii) overseeing third‑party AI services and (iv) ensuring operational resilience.

Outsourcing and third‑party risk

The FCA has highlighted that firms are likely to depend on outsourcing and third‑party providers for AI, such that they need to manage those relationships to mitigate the risks of consumer harm (and of operational disruption if issues arise with the technology).

Operational resilience: AI as a component of important business services (IBS)

The FCA expects firms to be operationally resilient through comprehensive understanding and mapping of the people, processes, technology, facilities and information needed to deliver IBS.

Given that AI solutions are often embedded in customer‑facing processes (e.g., onboarding, payments controls, fraud detection) and back‑office processing, firms should treat AI components as part of IBS mapping, scenario testing and vulnerability remediation where relevant.

Enforcement Risks

The FCA has been clear that failures of governance, controls, accountability or consumer outcomes arising from AI use can and will give rise to enforcement action. We set out the key enforcement risks below.

Consumer harm from opaque or biased AI decision‑making

Enforcement risk arises where AI systems used in credit, pricing, insurance underwriting, complaints handling or customer interaction produce outcomes that:

  1. Unfairly disadvantage certain customer groups;
  2. Cannot be explained to customers or supervisors; or
  3. Lead to systematically poor consumer outcomes.

The FCA’s view is that:

  1. Lack of explainability does not excuse non‑compliance;
  2. Firms remain accountable even where models are complex or supplied by third parties; and
  3. Discriminatory outcomes can breach Principle 6 of the FCA’s Principles for Businesses or the Consumer Duty (Principle 12) regardless of intent.

Failure of SM&CR accountability for AI systems

Enforcement risk arises where:

  1. There is no clearly identified senior manager responsible for AI deployment;
  2. The board or senior management cannot explain how key AI tools operate or are controlled; or
  3. AI decisions are not treated as governed business decisions.

The FCA’s view is that:

  1. AI does not dilute senior management accountability; and
  2. Responsibility must sit clearly within the SM&CR, even where systems are outsourced.

Poor data governance leading to unfair or misleading outcomes

Enforcement risk arises where AI models are trained or operated using:

  1. Poor‑quality, outdated or unrepresentative data;
  2. Data not suitable for the intended purpose; or
  3. Data whose limitations are not understood by decision‑makers.

Such data can lead to unfair treatment, incorrect decisions or misleading customer communications, with resultant enforcement risk.

The FCA’s view is that:

  1. Data quality and suitability are core conduct risks, not technical issues; and
  2. Firms should ensure they understand training data provenance, limitations and drift, conduct ongoing monitoring and enhance AI models accordingly (one form of such monitoring is sketched below).
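
For illustration only, part of the ongoing monitoring described above can be automated. The sketch below computes a population stability index (PSI), a common drift metric, comparing a model’s training data against live inputs; the 0.2 escalation threshold is a conventional rule of thumb we have assumed, not an FCA figure.

    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        """PSI between a training baseline ('expected') and live data
        ('actual'); higher values indicate greater distributional drift."""
        expected, actual = np.asarray(expected), np.asarray(actual)
        # Bin edges taken from the training distribution's percentiles.
        edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
        edges[0] = min(edges[0], actual.min())   # widen to cover live values
        edges[-1] = max(edges[-1], actual.max())
        exp_pct = np.histogram(expected, bins=edges)[0] / expected.size
        act_pct = np.histogram(actual, bins=edges)[0] / actual.size
        exp_pct = np.clip(exp_pct, 1e-6, None)   # guard against log(0)
        act_pct = np.clip(act_pct, 1e-6, None)
        return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

    # Hypothetical credit-score distributions: training baseline vs. a
    # recent cohort that has shifted downwards and widened.
    rng = np.random.default_rng(0)
    psi = population_stability_index(rng.normal(600, 50, 10_000),
                                     rng.normal(570, 65, 2_000))
    status = "escalate for model review" if psi > 0.2 else "within tolerance"
    print(f"PSI {psi:.3f}: {status}")  # 0.2 is an assumed, firm-set threshold

Logging the metric, the threshold and the resulting escalation decision creates the documented evidence of monitoring and remediation that the FCA’s evidence‑based supervision contemplates.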

Over‑reliance on third‑party or “black box” AI providers

Enforcement risk arises where firms cannot evidence testing, validation or oversight of the deployment of third‑party AI models, particularly where:

  1. Contracts or model restrictions prevent meaningful audit or explanation; or
  2. Firms rely on vendors’ assurances without independent challenge.

The FCA’s view is that:

  1. Outsourcing does not transfer regulatory responsibility; and
  2. Lack of access to explainability or controls is not a defence.

AI‑driven market abuse or disruption to the markets

Enforcement risk arises where AI in trading, pricing or execution contributes to:

  1. Abusive trading strategies;
  2. Collusive or correlated behaviour; or
  3. Rapid amplification of market stress.

Market abuse and financial crime more broadly remain FCA areas of focus, so firms are likely to face regulatory scrutiny for any potential AI-related deficiencies in market abuse controls.

The FCA’s view is that:

  1. Automation heightens rather than reduces expectations on monitoring; and
  2. Firms must understand and control emerging AI‑enabled behaviours and act to mitigate the risks they pose.

Misleading disclosures about AI capability or safeguards

Enforcement risk arises where firms overstate AI accuracy, neutrality or fairness, or where customer‑facing disclosures imply that outcomes are more reliable or objective than they are.

AI‑related representations are subject to the normal FCA financial promotions and disclosure standards. The regulator’s view is that “AI‑washing” can be misleading, in the same way as “greenwashing”. Firms must ensure that their disclosures around AI use and capabilities are accurate and comprehensible.

The FCA’s Own Use of AI

The FCA has made clear that AI is an increasingly important tool both in the discharge of its own statutory functions and within the firms it regulates. As part of its five‑year strategy, the FCA has committed to becoming a more data‑driven and technologically enabled regulator, explicitly identifying AI as a means of improving risk detection, supervisory focus and regulatory efficiency.4

In March this year, it was reported that the FCA had engaged a third‑party technology provider to trial the deployment of AI tools to analyse the vast quantity of sensitive regulatory data that the FCA holds, including suspicious activity reports, customer complaints and live case files. This is a potentially significant development that could materially improve the regulator’s ability to detect and disrupt financial crime and regulatory breaches.

Best Practice for Compliance with the FCA’s AI Expectations

The FCA’s own use of AI, and the expectations it has set for regulated firms’ use of AI, mark it out as one of the most technologically advanced UK regulators. Firms should therefore map their AI use cases to their consumer, governance and resilience obligations, identifying where AI touches customer journeys, market functions or critical operations, and apply the relevant FCA obligations in those areas.
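
By way of illustration only, that mapping exercise can be recorded in a structured AI use‑case register. The sketch below is a minimal Python example; the field names, the SMF allocation and the automated checks are our assumptions, not an FCA‑prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        """One entry in a firm's AI use-case register (illustrative only)."""
        name: str
        accountable_smf: str                  # SM&CR owner of the use case
        consumer_duty_touchpoints: list[str] = field(default_factory=list)
        third_party_provider: str = ""        # non-empty triggers outsourcing controls
        supports_ibs: bool = False            # include in IBS mapping / scenario testing
        last_validated: str = ""              # evidence of testing for supervisors

    register = [
        AIUseCase(
            name="retail credit scoring model",
            accountable_smf="SMF4 (Chief Risk Officer)",  # hypothetical allocation
            consumer_duty_touchpoints=["eligibility", "pricing"],
            third_party_provider="VendorCo",              # hypothetical vendor
            supports_ibs=True,
            last_validated="",                            # gap to be flagged below
        ),
    ]

    # Simple checks a compliance function might automate over the register.
    for uc in register:
        if not uc.accountable_smf:
            print(f"{uc.name}: no SM&CR owner assigned")
        if uc.third_party_provider and not uc.last_validated:
            print(f"{uc.name}: third-party model lacks documented validation")
        if uc.supports_ibs and not uc.last_validated:
            print(f"{uc.name}: IBS-relevant AI component untested")

A register of this kind gives each AI deployment a named SM&CR owner and a traceable link to the Consumer Duty, outsourcing and operational resilience obligations discussed above.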

Governance, accountability and documentation

Firms should establish clear ownership, governance and accountability for AI risk and outcomes, aligned to the senior manager responsibilities highlighted by the FCA. They should also maintain documentation that supports an evidence‑based view of AI benefits and risks and demonstrates that systems and processes meet regulatory expectations, reflecting the FCA’s stated intention to scrutinise those controls.

Third‑party AI services: contracting, access, exit and resilience

Where AI is procured “as a service”, firms should assess whether arrangements constitute outsourcing (per the FCA’s definition) and apply proportionate third‑party risk controls and resilience planning. The FCA expects firms to ensure that third‑party dependencies are included in IBS mapping and resilience assessments.

Operational resilience integration

Firms should ensure AI components are integrated into their operational resilience frameworks, including IBS identification, impact tolerances, mapping, scenario testing and remediation. 
