Recently, Google has come under fire for inadvertently giving extremist and terrorist groups access to advertising revenue. A plethora of companies have withdrawn advertising in the wake of these revelations, including Marks & Spencer, McDonald’s and L’Oréal. More worryingly, publicly, or partially publicly, owned corporations such as the BBC and the bailed-out banks HSBC, RBS and Lloyds have pulled out; the implication being that taxpayers’ money is getting through to extremists via ads on their videos.
The search engine giant has offered an apology, with Matt Brittin, head of Google in Europe, saying: “We take our responsibilities to these industry issues very seriously.” Google is looking to change its controls, making it easier for companies or individuals to choose where their adverts appear. Brittin added: “We have a review underway […] we are accelerating that review”.
Current Google policy is to follow up 98 per cent of flagged content within 24 hours. Some have suggested that, instead of relying on users to report extremist videos, the onus should be on the search engine’s staff to actively seek out this material, rather than its arguably more democratic mode of submission. They suggest that leaving something like this to the off-chance that a decent citizen will stumble across one of these videos and do their public duty by reporting it is morally indefensible. However, given that 400 hours of footage are uploaded to YouTube every minute, this proposal seems unworkable.
Once an advert shown on a video garners a certain amount of engagement, the creator of that content receives money from the company paying for the ad. In figures, this means that for every 1,000 clicks on a YouTube ad, roughly six pounds is paid to the creator of the clip. This amounts to approximately £250,000 gifted to extremist or terrorist groups to date: a negligible sum when compared to Google’s net worth.
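To make the scale of that arithmetic concrete, the figures above can be sketched as follows. This is a minimal illustration only: the function name and rounding are my own assumptions, and the £6-per-1,000-clicks rate is the article’s approximation, not an official payout formula.

```python
# Illustrative payout arithmetic based on the figures quoted above.
# RATE_PER_1000_CLICKS reflects the article's rough £6 per 1,000 clicks;
# creator_payout is a hypothetical helper, not a real YouTube/AdSense API.

RATE_PER_1000_CLICKS = 6.0  # pounds paid to the creator per 1,000 ad clicks


def creator_payout(clicks: int) -> float:
    """Return the approximate payout in pounds for a given number of ad clicks."""
    return clicks / 1000 * RATE_PER_1000_CLICKS


# At that rate, the quoted £250,000 total corresponds to roughly
# 41.7 million ad clicks across extremist videos.
clicks_for_total = int(250_000 / RATE_PER_1000_CLICKS * 1000)

print(creator_payout(10_000))  # → 60.0
print(clicks_for_total)
```

On these assumptions, the headline £250,000 figure implies tens of millions of ad clicks, which gives a sense of why manual review struggles to keep pace.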
This leads to the question of how much responsibility search engines bear. Content vetting is very hard to build into an algorithm because such judgements are inherently subjective. Yet it defeats the purpose of advertising if a company’s reputation is tarnished by appearing next to a recruitment video for Islamic State.
Looking at the bigger picture, it seems difficult to pinpoint why it is only now that tensions have come to a head. Surely both Google and the companies had known, or could have conjectured, that this had been going on long before the mass boycott. For Google, there is money to be made in fashionable fascism – this latest scandal is only a symptom of its money-first, moderate-after approach. In the companies’ case, we have to look to the current trend of ‘politically conscious’ advertising, through which lens the boycott appears a paradoxical publicity stunt.
Advertisements rarely put forward an overtly political agenda, unless they are officially for a campaign, i.e. a party political broadcast. This is because, traditionally, although they may have a target demographic in mind, advertisers want to appeal to the broadest range of customers and avoid divisiveness. Yet in recent months, following Brexit and the election of Trump, major companies have sought to lash out against mainstream anti-globalisation with increased vigour by creating provocative pro-immigration ads and, in the case of Lego, pulling advertisements from the right-wing newspaper The Daily Mail. This appears cynical, especially when ad misplacement is not a novel concept.
Intervention could open a can of worms when it comes to content vetting. The more hysteria grows around the prevalence of misinformation and extremism, the more likely it is that we will end up with mass censorship. An alternative could be to encourage critical thinking, but this seems a weak proposal in a climate in which Germany can pass a law fining companies €50 million for failing to remove fake news.
Image: 422737 via pixabay