The Other Side Says Your Evidence Is A Deepfake. Now What?

Partner Brent Gurney and Counsel Matthew Ferraro discuss the two central concerns about deepfakes in the courtroom in an expert analysis article published by Law360.

Excerpt: In several recent high-profile trials, defendants have sought to cast doubt on the reliability of video evidence by suggesting that artificial intelligence may have surreptitiously altered the videos.

These challenges are the most notable examples yet of defendants leveraging the growing societal prevalence of AI-manipulated media — often called deepfakes — to question evidence that, until recently, many thought was nearly unassailable.

There are two central concerns about deepfakes in the courtroom. First, as manipulated media becomes more realistic and harder to detect, the risk increases that falsified evidence will find its way into the record and cause an unjust result.

Second, the mere existence of deepfakes makes it more likely that the opposing party will challenge the integrity of evidence, even on a questionable basis. This phenomenon, in which individuals exploit the existence of deepfakes to dismiss genuine media as forged, has become known as the "liar's dividend," a term coined by law professors Bobby Chesney and Danielle Citron.[1]

Read the full article.

For more information on these topics, listen to Ferraro, Partner Jason Chipman and author Nina Schick discuss deepfakes and disinformation in an episode of the firm’s podcast, In the Public Interest.

View WilmerHale’s thought leadership related to disinformation and deepfakes.