How AI use can increase failure to prevent fraud risk

WilmerHale W.I.R.E. UK Blog

This article was originally published on May 11, 2026 in the FT Adviser.

Companies increasingly rely on artificial intelligence (AI) to streamline workflows, improve efficiency, and perform complex tasks. To date, much commentary on AI risk has focused on external threats to companies (for example, cyber intrusion and third-party misuse). Less attention has been paid to how a company’s own deployment of AI may increase its exposure to UK corporate criminal enforcement.

This article considers how the use of AI might increase companies’ exposure to criminal liability for the failure to prevent fraud offence under the Economic Crime and Corporate Transparency Act 2023 (ECCTA).

Failure to Prevent Fraud

As a brief reminder, a large organisation[1], wherever situated, is itself criminally liable for failure to prevent fraud under ECCTA where an associated person[2] commits a specified fraud offence[3] in the UK, or one affecting UK customers, intending to benefit the organisation. The offence makes it easier for UK law enforcement agencies to investigate and prosecute companies for fraud committed for their benefit.

The offence is a strict liability offence. Unlike traditional corporate criminal attribution, the prosecution does not need to establish that any senior individuals within the company knew about or were party to the fraud. Liability arises solely from the fact that an associated person committed a specified fraud offence to benefit the organisation.

The definition of associated person is broad and not limited by seniority. It includes employees, agents, subsidiaries and anyone else performing services for or on behalf of the organisation.[4]

There is no requirement for the associated person to be prosecuted for the underlying fraud. The prosecution is required, however, to establish to the criminal standard that the associated person committed fraud to benefit the organisation. Analogous failure to prevent bribery cases suggest that, in practice, there is a meaningful difference between a company accepting, as part of a negotiated outcome, that the underlying bribery (or, here, fraud) is made out on the facts, and each element of the underlying fraud offence being stress tested before a jury in a contested trial.

It is a defence for the organisation to establish that it had reasonable procedures in place to prevent associated persons from committing fraud. To be considered reasonable, fraud-prevention measures must be tailored to the organisation’s specific risks, including any fraud risks arising from the use of AI.

The list of specified fraud offences includes fraud by false representation. This offence is committed where a person dishonestly makes a false or misleading representation, knowing that it is or might be untrue or misleading, intending to make a gain for themselves or another, or to cause loss to another or expose another to a risk of loss.[5]

How AI use can increase failure to prevent fraud risk

The use of AI alters the risk that an organisation is investigated or prosecuted for failure to prevent fraud in several key ways.

Generation Risk

The use of AI materially increases the risk that false or misleading statements are generated. AI may introduce hallucinated facts or oversimplify information in statements which, if used externally, may give rise to allegations of fraud by false representation.

Process Risk

The insertion of AI into workflows may reduce scrutiny over the accuracy of statements. Rather than drafting a statement from scratch, an employee may be asked to approve AI-generated content that already appears complete and credible. Verification risks becoming a passive formality rather than an active challenge, increasing the likelihood that inaccurate statements remain unverified in externally facing materials.

Evidential Risk

AI marginally increases the risk that an associated person may be found to have acted dishonestly. Where an employee understands the limitations of AI but approves an AI-generated statement without verification, that approval may be characterised as a choice to proceed despite recognising a real risk that the representation is misleading. The AI workflow itself may preserve evidence that the employee appreciated that risk and decided not to check the statement, which may indicate dishonesty.

Behavioural research helps to explain why employees may act in this way in practice. Delegating drafting to AI reduces the perceived moral cost of dishonesty by creating psychological distance from the dishonest act.[6] That distance is strongest where individuals give AI outcome-oriented prompts, such as asking it to “produce a positive investor update” or “highlight commercial progress.” Employees find it easier to approve AI-generated statements that they know might be misleading than to issue explicit instructions to misstate facts. A company’s retention of the contemporaneous AI prompts used by associated persons in this context offers law enforcement agencies a new and potentially probative source of evidence when seeking to establish dishonesty.

Scenario: AI-Generated Statements and Failure to Prevent Fraud

Consider the following scenario.

Facts

A listed company uses generative AI to assist in preparing a regulatory announcement and accompanying investor presentation ahead of its half-year results. AI systems are used to draft the announcement by combining prior market announcements, management guidance, and internal data. The announcement states that a significant project has converted into a profitable contract and includes an estimate of the expected profit.

In fact, the contract has not yet been executed. The finance director responsible for approving the announcement is aware that negotiations are ongoing and that completion is highly likely but not confirmed. They approve the AI-generated announcement for release without qualification, as it reflects the company’s genuine expectation. They note that the announcement was drafted by the company’s AI system and assume that, if its accuracy is later questioned, the misstatement could be characterised as a system failure. They intend the company to benefit from characterising the project as a profitable contract in the announcement, by increasing the share price ahead of the results.

Shortly after the release, the negotiations fall through and the company issues a correction. The circumstances of the announcement are investigated by the UK Serious Fraud Office (SFO).

Dishonesty

In assessing dishonesty, the SFO would apply the objective test set out by the Supreme Court in Ivey v Genting Casinos[7]: whether the conduct would be regarded as dishonest by the standards of ordinary decent people, taking account of the individual’s knowledge or belief as to the facts. In this scenario, the focus would be on the director’s knowledge or belief as at the point of approving the announcement, and whether their conduct, based on the available evidence of their and others’ intentions, and of their use and knowledge of the AI systems involved, should properly be regarded as dishonest by the standards of ordinary decent people. The fact that the error originated in the AI system would be largely irrelevant: once the company released the statement externally, it became the company’s own.

Investigators would explore, through interviews and the compulsory production and review of contemporaneous documents, what the director understood about the status of the negotiations and about the limitations of the AI system used to generate the announcement.

If they have been preserved, the SFO would seek to examine the instructions given to the AI tool, including whether the director used outcome-oriented prompts that were likely to generate optimistic language. Evidence of training or prior guidance on AI limitations would be relevant in assessing what risks the director should have appreciated.

The SFO would also consider what verification steps were taken. The prosecution would focus on whether the director had opportunities to check the statement against underlying contract documentation, or to qualify it, but instead chose to rely on an AI-generated draft they knew was or might be false or misleading.

Knowledge

In assessing the director’s knowledge, evidence that the organisation had trained management on AI limitations, or that management were otherwise aware of these limitations, would undermine claims by the director that they thought they could rely on the AI system without verification. The director may be treated as knowing that the representation might be untrue if they understood these limitations, appreciated the real risk the statement was misleading, and chose not to verify or qualify it.

Procedures

In assessing the company’s fraud-prevention procedures, the SFO may ask:

  • Was AI identified as presenting an outward-facing fraud risk, and was that risk reflected in the company’s anti-fraud procedures?
  • Who approved AI‑generated statements, and what training did they receive?
  • What verifications were required before AI‑generated statements were released externally?

Building Defensible Fraud Prevention Procedures

In the context of a failure to prevent offence, companies seeking to rely on having reasonable fraud-prevention procedures in place will need to demonstrate, with evidence, that they took fraud risk seriously.

Governance

Companies should treat the use of AI as an evolving risk. This means maintaining and documenting a clear picture of where AI is used across the business, with particular attention to uses that affect external statements, investor communications, customer representations, or regulatory disclosures. For higher‑risk uses, there should be a senior manager who is accountable for how AI content is used.

Training

Senior managers who approve AI-generated content should receive targeted training. That training does not need to be overly technical, but it should ensure that managers understand AI limitations and why certain uses require particular caution.

Verification

AI used to draft external statements should be treated as a drafting aid, not as an authoritative source. Companies should require human verification before release, with the reviewer expected to check both underlying source information and the final statement.

Documentation

Failure-to-prevent cases frequently turn on what a company can evidence after the event. In an AI context, this means retaining sufficient records to reconstruct how a statement was produced and approved. Relevant materials may include prompts, system drafts, material edits, and approval records.

Testing

AI‑related controls should be tested against realistic failure scenarios, not only at deployment but on an ongoing basis. Scenario‑based testing can be particularly effective. Material changes to models or uses should trigger a review of associated risk.

Third‑party tools

Where AI capability is supplied by third-party vendors, companies should ensure that vendor arrangements allow sufficient visibility into system limitations and provide the ability to audit content.

Incident response

Finally, companies should treat AI errors as a distinct incident category. Where AI content may have resulted in a false or misleading external statement, clear escalation steps should apply, and legal teams should be involved at an early stage. Swift investigation, correction, and remediation will not prevent scrutiny but may be relevant to how investigators assess the seriousness of any failure in procedures.

  1. An organisation is a large organisation if it satisfies at least two of the following three criteria in the financial year preceding the year in which the fraud offence is committed: a turnover of more than £36 million; a balance sheet total of more than £18 million; and/or more than 250 employees. See sections 201-202 ECCTA.
  2. A person is an associated person if it is an employee, agent or subsidiary undertaking of the relevant body, or it otherwise performs services for or on behalf of the body. See section 199(7) ECCTA.
  3. A specified fraud offence is an offence listed in Schedule 13 ECCTA, or aiding, abetting, counselling or procuring the commission of such an offence.
  4. See section 199(7) ECCTA.
  5. Section 2 Fraud Act 2006.
  6. See for instance, Nils Köbis, Zoe Rahwan, Raluca Rilla, Bramantyo Ibrahim Supriyatno, Clara Bersch, Tamer Ajaj, Jean‑François Bonnefon and Iyad Rahwan, “Delegation to artificial intelligence can increase dishonest behaviour” (2025) 646 Nature 126–134.
  7. Ivey v Genting Casinos [2017] UKSC 67.
