New Adalytics Report on Brand Safety

The latest Adalytics report focuses on publishers that host user-generated content (UGC) and place ads on pages with "unusual" material.

UGC (user-generated content) is content created by internet users rather than by professional authors or companies. It can be anything: social media posts, product reviews, photos, videos, comments, and more. Brands and platforms often use UGC to increase audience engagement, because such content is generally perceived as more authentic and trustworthy than professionally produced material. Examples include product reviews on e-commerce sites, hashtagged photos on Instagram, and video reviews on YouTube. Examples from Russia: Otzovik, Yandex Maps, VK, Wildberries, and Ozon.

Report Summary

The Adalytics report is a detailed study of the weaknesses and vulnerabilities of the modern AI systems used to ensure brand safety. It primarily examines how effective these technologies are at protecting advertisers from having their ads placed next to inappropriate content.

Brand safety refers to the measures companies take to ensure their ads do not appear alongside inappropriate or harmful content online. This includes setting up filters and blocklists to prevent ads from showing next to content that could damage the brand's reputation, such as violence, fake news, or extremism. The primary goal of brand safety is to protect the brand's image and maintain consumer trust by avoiding associations with negative contexts. Examples of companies working on brand safety in Russia: Brand Analytics, Weborama, MediaSniper, AdRiver.

Brand Safety

The study revealed that despite the use of tools like DoubleVerify and Integral Ad Science (IAS), ads from many well-known brands, including Procter & Gamble, Microsoft, IKEA, Mercedes-Benz, and others, appeared on pages with blatantly inappropriate content.

The report does not aim to accuse these publishers of wrongdoing. It is no surprise that UrbanDictionary.com contains inappropriate pages—it's a slang dictionary. The same goes for Genius, which reproduces song lyrics, or Tumblr, a free-flowing platform with diverse images. On Fandom, for example, ads appeared on pages containing offensive words or descriptions of unusual sexual practices. These words could even be in the URL. However, these pages were rated as brand-safe by IAS, as seen in the source code, despite containing adult content.

IAS uses its proprietary JavaScript library to analyze and classify content on web pages in real-time to determine its suitability for brand safety. When a page loads, the library collects data about the page's content and sends it to IAS servers for analysis. Based on this analysis, the page is assigned certain risk levels, such as "high," "medium," or "low," across various content categories (e.g., adult content, violence, hate). This data is then used to decide whether to display ads on that page, ensuring advertisers are protected from placement on inappropriate content.
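To make that flow concrete, here is a minimal Python sketch of how per-category risk scoring could work in principle. Everything in it (the category lists, the keyword matching, the thresholds, the function names) is invented for illustration and says nothing about IAS's actual proprietary system, which relies on far more sophisticated models.

```python
# Hypothetical sketch of a real-time page classification flow.
# Category keywords, thresholds, and names are invented for
# illustration; they do not reflect IAS's proprietary system.

RISK_CATEGORIES = {
    "adult": {"explicit", "nsfw"},
    "violence": {"gore", "assault"},
    "hate": {"slur"},
}

def classify_page(page_text: str) -> dict[str, str]:
    """Assign a per-category risk level based on naive keyword matches."""
    words = set(page_text.lower().split())
    levels = {}
    for category, keywords in RISK_CATEGORIES.items():
        hits = len(words & keywords)
        if hits == 0:
            levels[category] = "low"
        elif hits == 1:
            levels[category] = "medium"
        else:
            levels[category] = "high"
    return levels

def is_brand_safe(levels: dict[str, str]) -> bool:
    """An ad server could use this to decide whether to serve an ad."""
    return all(level == "low" for level in levels.values())

print(classify_page("a page mentioning gore and assault"))
# {'adult': 'low', 'violence': 'high', 'hate': 'low'}
```

The central claim of the Adalytics report is precisely that this kind of classification, however it is implemented in practice, is returning "low risk" on pages where it plainly should not.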

Separate concerns are raised about opaque Retail Media Networks operating on a walled-garden principle, such as Amazon Advertising, Walmart Connect, Target Roundel, ShopRite, Petco, and Best Buy. These companies are now actively expanding in the advertising market but do not offer the brand safety services that have already become standard for advertisers on the open web.

The term "Walled Garden" has multiple meanings. In the context of this article, it refers to closed ecosystems of digital advertising where major platforms, such as Google and Facebook, control the entire advertising process, including data, inventory, and audience. These platforms limit third-party access to their data and technologies, offering only limited opportunities for external analysts and verification companies. The largest walled gardens in Russia are: Yandex, VK, Sber, Ozon.

Scale Issue

Adalytics founder Krzysztof Franaszek writes in the report's introduction that he began researching these media sellers after the head of global media at a major advertiser asked him to review its brand safety standards. The advertiser had received full assurances from its DSP and verification vendors that its placements were appearing on appropriate pages.

In response to the Adalytics report, DoubleVerify (DV) stated that the study was misinterpreted and required corrections. According to DV, the report ignores important details such as advertisers' campaign settings and strategies, which may include exceptions for certain publishers, and some of its examples rest on a misreading of DV code that relates to services provided to publishers, not advertisers. The company insists that its content classification system works accurately and that the Adalytics report is an example of analysis that distorts the real state of affairs.

"The results are manipulated, artificial, or not widely spread," says DoubleVerify in response to the report.

DoubleVerify Objections

DoubleVerify offers technologies similar to those of IAS: it filters content at both the pre-bid stage (before an impression is bought) and the post-bid stage (after the ad has been served), and actively applies AI and ML for content classification.
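As a rough illustration of the difference between those two stages, here is a hedged Python sketch; the data structures, the toy page-rating cache, and the function names are invented for this example and do not represent DoubleVerify's actual API.

```python
# Hypothetical sketch of pre-bid vs. post-bid filtering; names and
# data structures are invented, not DoubleVerify's actual API.

from dataclasses import dataclass

@dataclass
class BidRequest:
    page_url: str
    advertiser_blocklist: set[str]  # categories the advertiser excludes

# Toy cache of precomputed page ratings (real systems crawl at scale).
PAGE_RATINGS = {"example.com/article": {"violence"}}

def pre_bid_filter(req: BidRequest) -> bool:
    """Decide BEFORE the auction whether it is safe to bid at all."""
    flagged = PAGE_RATINGS.get(req.page_url, set())
    return not (flagged & req.advertiser_blocklist)

def post_bid_check(page_url: str, blocklist: set[str]) -> str:
    """Verify AFTER the ad was served, e.g. from measurement logs."""
    flagged = PAGE_RATINGS.get(page_url, set())
    return "violation" if flagged & blocklist else "ok"

req = BidRequest("example.com/article", {"violence", "adult"})
print(pre_bid_filter(req))   # False: do not bid on this impression
print(post_bid_check("example.com/other", req.advertiser_blocklist))  # ok
```

The practical point is that pre-bid filtering can only be as good as the ratings available at auction time, while post-bid verification reports a problem only after the impression has already run.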

In other words, DoubleVerify claims that Adalytics deliberately selected pages with inappropriate content and engineered the conditions under which ads would be displayed on them, in order to cast the vendors in a negative light.

Fandom is one of the largest sites on the internet. An advertiser buying ads on the open internet will get a significant share of Fandom in any campaign. In most cases, the advertiser will not encounter what Adalytics documents in its report, say DV and Fandom executives.

A Fandom representative contacted by Digiday journalists said that the company had not yet reviewed the full Adalytics report. They noted, however, that the few screenshots they had seen "highlight an industry-wide problem." The representative also emphasized that the examples involve content from "old, extremely low-traffic wikis, so it was not flagged by our current moderation systems or Google Ad Manager, which monitors our active wikis."

Fandom, one of the largest sites on the internet with 50 million pages of user-generated content, reported that less than 0.08% of that content (at most roughly 40,000 pages) was deemed inappropriate. The site also employs multiple safety measures, including three industry vendors and an internal team that promptly reviews and removes flagged content.

"We do not endorse the placement of dangerous and harmful materials on our platform—such content is prohibited by our rules, and we will not allow it," says Fandom. "Ensuring brand safety on our platform is of the utmost importance to us, and we take these issues very seriously. Although these individual cases were not a widespread problem, we have added additional safety measures to proactively disable ads on low-traffic wikis."

On the other hand, DoubleVerify's response demonstrates the core of the problem. Both DV and IAS claim in their marketing materials and other sources that they provide 100% brand safety. However, it's clear that this is not the case.

"No client has expressed concern about the accuracy of our content categories," states DoubleVerify in their blog.

IAS declined to comment until they had fully reviewed the report.

Artificial Intelligence and the Speed Issue

Major brand safety companies are increasingly using AI to detect and prevent violations. However, advertisers, agencies, and other brand safety experts note that the report raises new questions about the effectiveness of these technologies.

Advertisers surveyed by Digiday and AdExchanger already suspected that AI was not as effective as vendors portrayed it. Even so, they were surprised that the systems missed harmful content that was easy to identify.

They also noted inconsistencies in how AI classifies web pages by risk level. The Adalytics report showed that pages with harmful content on Wikipedia were flagged as low-risk, while pages on The Washington Post and Reuters were classified as medium or high risk, despite lacking such content.

"Brand safety is an issue, and those who truly face this issue are the brands paying for it," said one source. "We are an industry that errs by believing we can develop tools quickly enough to solve all problems."

Jay Friedman, CEO of Goodway Group, believes that the number and severity of examples indicate that brand safety technologies are not effective enough to justify the time and money invested in them. Like other agencies, advertisers, and tech experts, he notes the need for greater transparency to better understand the problem and find an optimal solution. This includes more detailed reporting on every aspect of an advertising campaign so that everyone can make decisions based on the same data.

"The argument 'We can't tell you how it works because then the bad guys will know too' is no longer valid," says Friedman. "These vendors make billions of dollars a year from advertisers and should provide working technology with a transparent mechanism."

Constantly scanning web pages is a complex process, notes Joseph Turow, a professor of media systems and industries at the Annenberg School for Communication at the University of Pennsylvania. Turow and other scholars wonder how often bots crawl a site to detect issues, and whether pages are checked before ads are served. If companies can use AI for contextual targeting, they should be able to ensure reliable brand safety in real time as well. Yet while blocking swear words is not in itself difficult, analyzing every page within the few milliseconds available before an ad is served is a much harder problem.
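A small Python sketch of that trade-off, with an invented cache, invented function names, and timings chosen purely for illustration:

```python
# Hypothetical sketch of the latency constraint on pre-serve checks;
# the cache, timings, and names are invented for illustration only.

import time

# Ratings precomputed offline by periodic crawls.
RATING_CACHE = {"example.com/known-page": "low"}

def slow_full_scan(url: str) -> str:
    """Stand-in for fetching and classifying a page from scratch."""
    time.sleep(0.2)  # ~200 ms: far too slow for a live ad auction
    return "low"

def rate_before_serving(url: str) -> str:
    """Look up a precomputed rating; a fresh scan would blow the budget."""
    if url in RATING_CACHE:
        return RATING_CACHE[url]  # fast path: a dictionary lookup
    # Calling slow_full_scan(url) here would take orders of magnitude
    # longer than a typical pre-bid window, so the system must return
    # "unknown" and decide whether to serve, block, or rate the page later.
    return "unknown"

print(rate_before_serving("example.com/known-page"))  # low
print(rate_before_serving("example.com/fresh-page"))  # unknown
```

This is one plausible reading of the report's findings: pages that were never rated, or were rated long ago, slip through because a genuine real-time analysis does not fit in the serving window.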

Hidden Threat

The report also addresses advertisers placing ads on Retail Media Networks (RMN).

Retail Media Network (RMN) is an advertising platform based on the data of online and offline retailers. Such platforms allow advertisers to place ads on these retailers' websites and mobile apps. The primary goal of RMN is to provide advertisers with more targeted interaction with consumers at their points of purchase. Retailers use their data on consumer behavior to improve targeting and ad effectiveness, leading to increased sales for both advertisers and retailers themselves. Examples of such platforms globally include Kroger Precision Marketing, Carrefour Links, and Tesco Media and Insight. In Russia: SberMarketing, X5 Retail Group, Yandex.Market, Ozon Advertising, Wildberries Advertising.

The screenshots of ads in the report show ads leading to marketplaces displayed alongside harmful content: Fruity Pebbles from Target, for example, or Greenies dog treats from Petco. In an AdExchanger experiment, journalists launched a browser in incognito mode and, on pages with undesirable content, received ads for Horizon organic milk from ShopRite, Lunchables from Walmart, and brand-name electronics from Best Buy.

At first glance, there may be nothing wrong with such examples, as these are ordinary products that can be expected to be seen in ads. However, the problem arises in the context of how these ads are displayed. If ads for these products appear on pages with inappropriate or unsuitable content, it can damage the brands' reputations. For example, if ads for organic milk or children's toys appear alongside content containing profanity, violence, or other undesirable content, it can trigger a negative reaction from consumers and reduce trust in the brand.

Note that the latest Adalytics report does not mention Reddit, despite its extensive brand-unsafe content and its larger scale compared to Fandom. This is because Reddit is a closed system that prohibits third-party JavaScript: the measurement tags and log files Adalytics relies on are simply inaccessible there.

Reddit controls the ad placement process on its platform. The platform collects data on user behavior and interests. Its own system, Reddit Ads, is used for ad placement, with third-party tools being limited.

Meanwhile, DV and IAS are actively partnering with Reddit, Snapchat, TikTok, Meta, YouTube, and Amazon, offering brand safety solutions. In their presentations, they claim that these solutions guarantee 100% safe placements for advertisers.

This brings us to the main conclusion: you should not fully rely on ad tech vendors' claims of 100% safety, especially if you do not have the means to verify it yourself.
