A brand may have the best creative, but if it’s placed next to the wrong content, it can lose its persuasiveness and the message can get muddied. That’s why, in an effort to prove to advertisers that they’re committed to brand safety, both Snap and Google recently announced the work they’re doing to find solutions.
First, Snapchat announced that it has teamed up with global media measurement and optimization platform IAS (Integral Ad Science) to develop a new brand safety reporting solution. The goal is to give advertisers transparency into the percentage of safe and suitable content their Snapchat ads are appearing against, using IAS’s AI-driven Total Media Quality (TMQ) Brand Safety and Suitability Measurement product. Aligned with the Global Alliance for Responsible Media (GARM) framework for brand suitability, the new product will be available to all advertisers in the coming months.
Snapchat also worked with IAS to conduct a measurement sample study on the brand suitability of its public content, specifically Spotlight and Creator Stories. IAS found that both Spotlight and Creator content on Snapchat are 99% brand safe, and the platform says it is exploring broader brand safety measurement solutions later this year.
The platform is also providing advertisers with brand safety controls at the campaign level when launching new campaigns. This first-party tool is designed to ensure brand ads are shown alongside premium content on the platform.
IAS is a member of the Snapchat Brand Safety Coalition and has partnered with Snap to drive greater transparency and media quality measurability of in-app photos and videos. Since 2018, IAS has provided viewability and invalid traffic measurement for global advertisers across their in-app video buys.
Over at Google, the 2023 Ads Safety Report was released to provide a look at what it did last year to keep its platforms safe for users, advertisers and publishers. The report shows that 5.5 billion bad ads were stopped in 2023 for abuses such as misinformation, inappropriate content, adult content, enabling dishonest behaviour, counterfeit goods and dangerous products or services.
In addition, 12.7 million advertiser accounts, nearly double the previous year’s total, were blocked or removed. Similarly, Google says it protects advertisers and people by removing ads from publisher pages and sites that violate its policies, such as those featuring sexually explicit content or dangerous products. Last year, the platform blocked or restricted ads from serving on more than 2.1 billion publisher pages, up slightly from 2022.
Last year, scams and fraud across all online platforms were on the rise. To counter these threats, Google updated its policies, deployed rapid-response enforcement teams and sharpened detection techniques. In November, Google launched its Limited Ads Serving policy, which, according to the company, is designed to protect users by limiting impressions of ads that have a higher potential of causing abuse or a poor user experience.