AI-Generated Deepfake Adverts Are Exploding And Platforms Can’t Keep Up


Oliver Kampmeier

Cybersecurity Content Specialist


AI is rapidly reshaping the advertising industry, and not always for the better. Scam adverts featuring AI-generated deepfake videos are now flooding social media feeds and even appearing inside popular games. According to Financial Times reporting, consumer advocate Martin Lewis encounters hundreds of these scams every day, many using fake videos of him to promote fraudulent investment schemes.

These campaigns are convincing, fast-moving and designed to bypass traditional review processes. The sheer volume highlights a bigger issue: current platform controls are not enough to keep harmful content out of the advertising ecosystem.

The Need for Real Consequences

The continued presence of deepfake adverts shows that the incentives in digital advertising are misaligned. When ads go live instantly and generate revenue until they are removed, there is little reason for platforms to slow down or invest heavily in human review. Without serious financial consequences, scam ads remain a profitable problem.

Slow Progress on Regulation

Although scam adverts were brought into scope by the UK’s Online Safety Act in 2023, enforcement remains a long way off. Consultations will only begin next year, meaning stricter measures may not take effect until 2027. For brands and consumers, that is far too long to wait.


The Human Cost

Behind every deepfake scam ad is a victim. Recent cases include elderly consumers losing life-changing sums to fake crypto schemes. Some victims even struggle to believe that the adverts are fake, which shows just how realistic AI-generated content has become and how easily trust can be exploited.

The risk extends beyond consumer scams. In a case covered by the World Economic Forum, an employee at the engineering firm Arup was deceived during a video conference with what appeared to be a senior manager but was in fact a deepfake. The fraudsters used the impersonation to authorize large financial transfers.


What This Means for Advertising

This is not just a consumer protection issue. Every scam ad that runs flows through the same systems that deliver legitimate campaigns, wasting advertiser budgets, corrupting performance data and putting brand safety at risk. With generative AI making scams more scalable and convincing, advertisers need to treat fraud prevention as a critical part of their media strategy.

Real-time monitoring, exclusion of low-quality inventory and proactive blocking of invalid traffic are no longer optional. They are essential to protecting budgets, preserving data quality and maintaining consumer trust in advertising.
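To make the idea of proactive blocking concrete, here is a minimal sketch of a pre-serve invalid-traffic check. The bot signatures and IP prefixes below are illustrative placeholders, not a real threat-intelligence feed, and the function names are hypothetical; a production system would rely on continuously updated feeds and far richer behavioral signals.

```python
# Illustrative placeholder lists; real deployments use maintained
# threat-intelligence feeds, not hardcoded values.
KNOWN_BOT_UA_TOKENS = ("headlesschrome", "phantomjs", "python-requests", "curl")
DATACENTER_IP_PREFIXES = ("34.", "35.")  # example prefixes for illustration only

def is_invalid_traffic(user_agent: str, ip: str) -> bool:
    """Flag a request whose user agent matches a known automation
    signature or whose IP falls in a listed datacenter range."""
    ua = user_agent.lower()
    if any(token in ua for token in KNOWN_BOT_UA_TOKENS):
        return True
    return ip.startswith(DATACENTER_IP_PREFIXES)

def should_serve_ad(user_agent: str, ip: str) -> bool:
    # Block the impression before budget is spent, rather than
    # reconciling fraud after the campaign has already run.
    return not is_invalid_traffic(user_agent, ip)
```

The design point is the placement of the check: filtering before the ad is served protects budgets and keeps performance data clean, whereas post-campaign fraud reports only quantify money already lost.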
