The Use of AI in Ad Fraud and Disinformation

Ad fraudsters closely monitor innovations in AdTech, and they are now actively exploiting one of the biggest: artificial intelligence.

According to a new report by DoubleVerify, AI is increasingly used in sophisticated fraudulent advertising schemes across platforms, including CTV and audio streaming. The report states that AI was involved in 23% of new fraud schemes in 2023, contributing to a 58% increase in ad fraud overall. The analysis draws on data from 1 trillion impressions for 2,000 advertisers in 100 countries, spanning desktop, mobile, and television. Researchers also note a 269% increase in bot fraud.

DoubleVerify (DV) is a company specializing in advertising verification solutions. It helps advertisers ensure that their ads are displayed in a safe and appropriate environment. DV provides tools for fraud prevention, improving ad visibility, and ensuring compliance with advertisers' requirements.

AI simplifies data spoofing, making fraudulent traffic appear human. It also helps fraudsters build fake company websites and malicious apps, and generate fake reviews in app stores.

Jack Smith, Director of Innovation at DV, said that AI-driven ad fraud in CTV grew three times faster in 2023 than in 2022. He expects this to continue in 2024 as AI models become more powerful, more widely available, and easier to use.

"This is a problem for both advertisers and publishers," Smith told Digiday. "It takes money away from good traffic. If you fake an app, you drain money from the ecosystem. It's bad for everyone."

Two examples of AI-driven fraud schemes are CycloneBot and FM Scam. CycloneBot is a CTV scheme that is extremely difficult to detect: it uses AI to create long viewing sessions on non-existent devices, quadrupling traffic. FM Scam targets audio streaming, employing AI to generate fake traffic that looks like normal user activity. According to DV, in March 2024 it was used to simulate activity on 500,000 devices, making it the first known instance of ad fraud targeting smart speakers.

All this is costly for advertisers. DV reports that FM Scam is part of a global fraud operation called BeatSting, responsible for over $1 million in monthly traffic losses. Meanwhile, CycloneBot has been used to fake up to 250 million ad requests, simulating about 1.5 million devices daily and costing advertisers $7.5 million per month.

But that's not all. DV's report states that AI usage increased the number of made-for-advertising (MFA) sites by 20%. In a survey of 1,000 advertisers conducted by DV and Sapio Research, 57% said they consider AI-generated content a problem, and 54% believe AI-generated content degrades media quality.

Reviews and feedback generated by AI can create the illusion of user interest in mobile apps and help mask fraudulent ones. As a result, DV has had to double the number of investigations into potentially fraudulent apps over the past year. As examples, Smith pointed to wallpaper apps, as well as odder cases such as an app that lets people leave the TV on for their pets.

Disinformation and AI Ad Fraud

Experts say they are not surprised by the rise in AI-driven ad fraud. Researchers from NewsGuard have analyzed how AI and advertising can help spread fake news.

NewsGuard is a company that evaluates the reliability and transparency of news websites, providing users with information about the quality of news sources. NewsGuard's system uses a team of experienced journalists to assess news sites based on various criteria, such as accuracy, responsibility, and transparency of ownership.

"More than a year ago, we said that AI would become a powerful tool for spreading disinformation," said Steven Brill, co-founder of NewsGuard. "DV's research confirms this threat. In one of our reports, we describe a case where a single fraudster could be responsible for hundreds of fake sites posing as real news and promoting content aimed at influencing the outcome of US elections."

About 75% of fake news sites also show ads to users, according to a joint study by NewsGuard, Stanford University, and Carnegie Mellon. The report, published in Nature, states that between 46% and 82% of advertisers have inadvertently placed their ads on such sites. The researchers recommend addressing this through platforms that make advertising more transparent, though advertisers have been trying to solve this problem for many years.

DoubleVerify claims that its AI-based systems help detect ad fraud schemes, but some doubt the reliability of this approach. Measurement firms often face conflicts of interest, as their services are paid for by the same companies they are supposed to measure and evaluate.