The brand safety scandal seems to grow by the day, with more and more brands boycotting YouTube and the Google Display Network. The outrage has even affected Alphabet’s share price.
Interestingly, the concern about safe brand placements is hardly new and is not limited to digital channels. There are plenty of examples of unfortunate outdoor or print ad placements.
Yet advertisers seem to concentrate their anger on Google and YouTube. Whilst there are good reasons to criticise Google (keyword: walled gardens – we will come back to this later), it is fair to say that the present witch hunt may also be driven by political motives. Some may have been waiting for any chance to get back at Google, which has been hugely successful and is accused of growing at the expense of other media companies.
However, it is not only Google that should be assessing and updating its brand safety policies. Tabloids frequently serve ads next to clickbait stories dealing with sexual violence, animal cruelty or other disturbing and emotionally upsetting topics.
One must wonder: why should there be an ad next to an article that is flagged with warnings such as ‘graphic content’? Shouldn’t such a warning be a signal that the publisher knows the content is not a suitable environment for most advertising?
Likewise, a racist comment posted on Facebook may have a sponsored ad appear right next to it. In fact, Facebook faced a protest wave similar to Google’s (albeit much smaller in scale) in 2013, when ads appeared on pages that incited gender-based hate and Nissan and Nationwide suspended ad spend in the UK.
So why has the topic gained so much traction recently?
And why does brand safety matter?
Two key factors should be noted to understand the present debate.
1. Negative brand consequences due to unfortunate ad placements
Clearly, the idea of branding is to create positive associations for a brand, which is difficult if the brand is linked to negative emotions. Uncomfortable feelings and upsetting content rarely work for advertisers. For example, research has shown that TV spots in programs containing sex or violence achieve lower ad recall, purchase intention and coupon redemption.
We know the problem of a mismatch between page content and the desired brand message is not novel. But the increase in reported mismatches shows that brand safety poses a significant challenge for the business model of social media platforms, which rely on user-generated content (UGC).
How do you moderate and monitor the quality of millions of free daily content contributions without jeopardising the attractive feature that anyone can share their views and stories?
2. Which content do online ads fund and support?
In addition to PR risks and unwanted emotional triggers among consumers, brand safety represents a big concern in terms of content monetisation. Remember that many websites serve as an income source for the organisations behind them, which are paid for providing ad spots. Clearly, no brand wants to see its ad money flowing into the wrong hands, such as hate preachers and terrorists.
Moreover, with the rise of terrorism on the one hand and right-wing movements on the other hand, the entire brand safety discussion has become very political. Consumer groups, such as Sleeping Giants or Stop Funding Hate, closely monitor misplaced ads and publicly shame brands on social media. This degree of involvement and the scale of public outrage are new phenomena fuelled by political ambitions.
How to restore brand safety
Google has stated that the challenge in monitoring brand safety is the amount of content created and shared through its pipelines: about 400 hours of video are uploaded to YouTube every minute, and thousands of websites are added every day.
But is high production output really an excuse to neglect quality control?
Why do externally developed tools seem to do a better job in flagging suspicious YouTube content than Google’s own monitoring efforts?
Google certainly has the resources to improve brand safety, even if this means paying armies of human raters. Fortunately, Google has now announced it will take more action in this regard.
The critical question, then, is why this did not happen earlier. Wasn’t there enough motivation before the public outburst to make use of all available tools and measures?
We can only speculate about the underlying rationale. One reason may be the typical payment structure for platforms powered by ad monetisation. The existing financial incentives often put media owners – as well as agencies, DSPs and resellers – in a dilemma. For PR and ethical reasons, they need to clean their inventory and address brand safety (and ad fraud). At the same time, they also earn revenue through misplaced (and fraudulent) ads, because traditional payment methods in media are based on volume metrics, such as spend percentages, clicks or CPMs.
In other words, there is an incentive to push as much traffic and as many ad placements as possible through their own platforms, and not to be too strict in blocking pages or ads.
What to do?
Probably the most effective solution would be a fee structure that is not driven by quantity but fosters quality. For instance, a combination of fixed service fees and pass-through charges for indirect costs, such as ad serving (without markups), would ensure there is no gain in serving unnecessary ads.
However, in reality it will not always be possible to offer a fixed fee. Many publishers would find it tough to set fixed prices for their services. Such a strategy could also backfire for an advertiser: a partner paid independently of the number of ads served has less incentive to serve enough of them.
Since changing the payment structure for publishers is more difficult than for middlemen, the most feasible solution in the short run is to demand independent measurements and checks from a third party, to avoid media owners ‘grading their own homework’.
To achieve this goal, walled gardens need to change their protectionist attitude. It is high time that YouTube and other companies (e.g. Facebook) opened up more to third-party verification. Even big and affluent walled gardens require independent checks – this was best illustrated by the Facebook metric disaster not too long ago.
Let’s hope that the advertising ecosystem has learnt its lessons from the metric and brand safety issues. No need to hide if you play by the rules.
By University of South Australia senior research analyst Nico Neumann