Google is currently training its ad bots to become more easily offended. Is this political correctness gone mad? Or is there something more sinister at hand?...
Google knows better than me. That’s my default, and probably why it’s my homepage. Any time I’m in the dark, curious or confused, I’ll Google it and all will be revealed. So if I want to know, for example, how big the specific [sic] ocean is, I won’t guess, I’ll Google. (Apparently I’m not the only one: believe it or not, that ridiculous search term returned over 7,500,000 results.) However, my faith in the all-knowing, all-seeing digital deity has recently been shaken...
So what’s gone wrong and will it be fixed?
Over the last couple of weeks, more and more major UK and US brands have pulled their ads from the Google Display Network over fears that they were appearing next to inappropriate content, and the collateral damage and financial repercussions are mounting.
The problem lies in the very same digitalisation and automation that made Google such a cost-effective way to promote brands in the first place. Previously, ad agencies negotiated rates over the phone with their counterparts in media sales departments, but these days people bid directly for digital spots in an automated auction called programmatic advertising.
When programmatic becomes problematic
This programmatic system is running into problems because the algorithms favour audience demographics over page context when deciding which ads are relevant to you. When a bot with no moral compass or conscience controls what goes where, ads can pop up next to questionable content.
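To make that blind spot concrete, here is a purely illustrative Python sketch of a toy programmatic auction in which bids are scored on demographic match alone and the surrounding content never enters the decision. Every name and number here is invented for the example; this is not Google’s actual auction logic.

```python
# Illustrative only: a toy programmatic auction that scores bids purely on
# audience-demographic match and never inspects the page content.
# All names and figures are invented; this is not Google's real system.
from dataclasses import dataclass


@dataclass
class Bid:
    advertiser: str
    max_cpm: float           # most the advertiser will pay per 1,000 impressions
    target_demographics: set  # e.g. {"uk", "18-34"}


@dataclass
class AdSlot:
    viewer_demographics: set  # what the exchange knows about the viewer
    page_content: str         # the surrounding content -- ignored below


def demographic_match(bid, slot):
    """Fraction of the advertiser's target segments this viewer matches."""
    if not bid.target_demographics:
        return 0.0
    return len(bid.target_demographics & slot.viewer_demographics) / len(bid.target_demographics)


def run_auction(slot, bids):
    """Second-price auction: the highest effective bid wins and pays the runner-up's price.
    Note that slot.page_content is never checked -- there is no brand-safety step."""
    scored = sorted(bids, key=lambda b: b.max_cpm * demographic_match(b, slot), reverse=True)
    winner, runner_up = scored[0], scored[1]
    price = runner_up.max_cpm * demographic_match(runner_up, slot)
    print(f"{winner.advertiser} wins the slot at ~£{price:.2f} CPM")
    return winner


# A household brand can "win" a slot on an extremist video purely because the
# viewer's demographics match -- the content itself never enters the decision.
slot = AdSlot(viewer_demographics={"uk", "18-34"}, page_content="extremist recruitment video")
bids = [
    Bid("Household brand", max_cpm=4.50, target_demographics={"uk", "18-34"}),
    Bid("Niche retailer", max_cpm=6.00, target_demographics={"uk", "55+"}),
]
run_auction(slot, bids)
```

The point of the sketch is simply that nothing in the bidding logic ever looks at page_content, which is exactly the gap the brands are now complaining about.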
UK Government inadvertently funding terrorism?
Even the UK Government has been affected. Google representatives have been called into the Cabinet Office after it emerged that government advertising was inadvertently being placed next to extremist material. Worse still, YouTube shares a portion of the ad sales with the creators of the content those ads appear against. The implication is that the UK Government, like many global brands, has in effect been inadvertently funding everything from hate crime to extremist terrorism.
So how big is the problem?
Let’s just say it’s big. Bigger than the ‘specific’ ocean even. Just to put some figures on this, Google and Facebook together control almost 60% of the £11bn UK digital ad market, according to eMarketer. The Guardian suggests that programmatic advertising has gone from zero to accounting for almost 80% of the £3.3bn spent on the display advertising part of the market.
Google is also responsible for much of the infrastructure that delivers digital advertising across its various channels, YouTube being the prime example, and this is where it’s running into difficulty. With over 400 hours of new video uploaded to YouTube every minute, that’s roughly 576,000 hours, or about 65 years of footage, arriving every single day, so it’s impossible for humans to vet it all.
Everybody wants to find a solution. This isn’t just affecting Google and the brands in question; it has far-reaching implications for the advertising industry itself. Some agencies have already taken steps of their own. GroupM, the world’s largest media buying firm (whose clients include L'Oréal, HSBC Bank, Lloyds Banking Group, Tesco and Marks and Spencer), has announced that it has signed up a company to help ensure that its clients’ ads run against appropriate content.
Google’s response
In a bid to counter this blind spot, and win back precious ad revenue, Google is exploring ways in which its omnipresent ad network can become more aware of its contextual surroundings and improve brand safety controls, without having to hire the population of a small republic to do it.
As the New York Times puts it, Google is training its bots to become more easily offended by content on the internet. To train the bots, Google is applying machine learning, the underlying technology behind many of its biggest breakthroughs, such as the self-driving car. In an effort to make everything more transparent and accountable, Google is also promoting its so-called Preferred Advertising Program, which lets advertisers see which videos their ads could run on.
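As a rough illustration of what “training the bots to be more easily offended” means in practice, here is a minimal text-classification sketch in Python using scikit-learn. The tiny training set, the labels and the 0.5 threshold are all invented for this example; real brand-safety models work on video, audio and image signals at vastly larger scale.

```python
# A deliberately tiny stand-in for the machine-learning approach described above:
# a classifier that learns to flag content as unsafe to advertise against.
# The training data and threshold are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_texts = [
    "cute puppies learn to swim",             # brand-safe
    "top ten travel destinations this year",  # brand-safe
    "extremist group recruitment speech",     # unsafe
    "hate filled rant against minorities",    # unsafe
]
labels = [0, 0, 1, 1]  # 0 = safe to run ads against, 1 = unsafe

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, labels)

def withhold_ads(description, threshold=0.5):
    """Return True if ads should be kept away from this content."""
    p_unsafe = model.predict_proba([description])[0][1]
    return p_unsafe >= threshold

# With this toy data, the first line should be cleared and the second flagged.
for text in ["puppies learn new tricks", "extremist recruitment video"]:
    print(text, "->", "withhold ads" if withhold_ads(text) else "ok to serve ads")
```

The real engineering challenge, of course, is doing this reliably across millions of hours of video rather than four sentences of text.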
Moral of the story
All this does beg the question, though: should Google have been more proactive? The New York Times suggests that it failed to address the issue adequately before because it did not have to; the instances in which ads appeared next to objectionable content happened infrequently and out of view of the broader public. Google said that for many of its top advertisers, the objectionable videos accounted for fewer than one one-thousandth of a percent of their total ad impressions.
Historical precedent suggests that when Google is forced to act, it does. The Black Hat SEO hacks and keyword stuffing that plagued earlier iterations of its search engine are now punished in search results, and websites not optimised for mobile are penalised too. Make no mistake, this latest problem is contagious and hurts everybody: brands, ad agencies and, of course, Google itself.
It’s not just Google’s credibility at stake this time. Now its morality is also being questioned.