AT&T, Verizon Feign Ethical Outrage, Pile On Google's 'Extremist' Ad Woes

from the manufactured-outrage dept

So you may have noticed that Google has been caught up in a bit of a stink in the UK over the company’s YouTube ads being presented near “extremist” content. The fracas began after a report by the Times pointed out that advertisements for a rotating crop of brands were appearing next to videos uploaded to YouTube by a variety of hateful extremists. It didn’t take long for the UK government — and a number of companies including McDonald’s, BBC, Channel 4, and Lloyd’s — to engage in some extended pearl-clutching, proclaiming they’d be suspending their ad buys until Google resolved the issue.

Of course, much like the conversation surrounding “fake news,” most of the news coverage was bizarrely superficial and frequently teetered toward the naive. Most outlets were quick to malign Google for purposely letting extremist content get posted, ignoring the fact that the sheer volume of video content uploaded to YouTube on a daily basis makes hateful-idiot policing a Sisyphean task. Most of the reports also severely understated the complexity of modern internet advertising, where real-time bidding and programmatic placement mean companies may not always know what brand ads show up where, or when.
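To make that disconnect concrete, here is a toy sketch of how programmatic placement severs the link between a brand and the venue its ad lands on. Everything below is a hypothetical illustration (real exchanges speak protocols such as OpenRTB and carry far more signal); the point is simply that the buyer submits a price and an audience, and the exchange, not the brand, picks the slot.

```python
# Toy model of a programmatic ad auction. All names are hypothetical;
# this illustrates the concept, not any real exchange's implementation.
from dataclasses import dataclass

@dataclass
class Bid:
    brand: str
    max_cpm: float   # the most the brand will pay per 1,000 impressions
    audience: str    # targeting criteria, e.g. "18-34, UK"

def run_auction(slot_audience: str, bids: list[Bid]) -> str:
    """The exchange matches bids on audience, not on the video the ad
    will sit next to, so the winner never chose this placement."""
    eligible = [b for b in bids if b.audience == slot_audience]
    winner = max(eligible, key=lambda b: b.max_cpm)
    return winner.brand

bids = [Bid("BurgerBrand", 4.50, "18-34, UK"),
        Bid("TelcoBrand", 3.75, "18-34, UK")]
print(run_auction("18-34, UK", bids))  # BurgerBrand wins, whatever the video is
```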

Regardless, Google wound up issuing a mea culpa stating they’d try to do a better job at keeping ads for the McRib sandwich far away from hateful idiocy:

“We know advertisers don’t want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.”

As we’ve noted countless times, policing hate speech is a complicated subject, where the well-intentioned often stumble down the rabbit hole into hysteria and overreach. Amusingly though, AT&T and Verizon — two U.S. brands not exactly synonymous with ethical behavior — were quick to take advantage of the situation, issuing statements that they too were simply outraged — and would be pulling their advertising from some Google properties post haste. This resulted in a large number of websites regurgitating said outrage with a decidedly straight face:

“We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate,” an AT&T spokesperson told Business Insider in a written statement. “Until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms.”

“Once we were notified that our ads were appearing on non-sanctioned websites, we took immediate action to suspend this type of ad placement and launched an investigation,” a Verizon spokesperson told Business Insider. “We are working with all of our digital advertising partners to understand the weak links so we can prevent this from happening in the future.”

Of course, if you know the history of either company, you should find this pearl-clutching a little entertaining. In just the last few years, AT&T has been busted for turning a blind eye to drug dealer directory assistance scams, ripping off programs for the hearing impaired, defrauding government programs designed to shore up low-income connectivity, and actively helping “crammers” by making scam charges on consumer bills harder to detect. Verizon, recently busted for covertly spying on internet users and defrauding cities via bogus broadband deployment promises, isn’t a whole lot better.

That’s not to say that all of the companies involved in the Google fracas are engaged in superficial posturing for competitive advantage. Nor is it to say that Google can’t do more to police the global hatred brigades. But speaking as somebody who has spent twenty years writing about these two companies specifically, the idea that either gives much of a shit about their ads showing up next to hateful ignoramuses is laughable. And it was bizarre to see an ocean of news outlets just skip over the fact that both companies are pushing hard into advertising themselves, with completed or looming acquisitions of Time Warner, AOL and Yahoo.

Again, policing hateful idiocy is absolutely important. But overreach historically doesn’t serve anybody. And neither does pretentious face fanning by companies looking to use the resulting hysteria to competitive advantage.

Companies: at&t, google, verizon, youtube


Comments on “AT&T, Verizon Feign Ethical Outrage, Pile On Google's 'Extremist' Ad Woes”

36 Comments
Ninja (profile) says:

Not to mention Google at the very least issued a mea culpa. The other two tried to spin things as awesome and positive, or simply pretended the problem didn’t exist when caught with their pants down. Having ads appear beside extremist content will be the least problematic issue at companies that are okay with destroying your privacy via stealth supercookies and the recording and selling of your browsing habits, all without any way to opt out.

btr1701 (profile) says:

Re: Extremist

The other problem, not talked about in any of these mainstream articles on the topic, is that “extremist” is being defined as everything from jihadi beheading videos to conservative political commentary. Basically anything that challenges politically-correct orthodoxy is now labeled “extremist” by YouTube and demonetized.

Richard (profile) says:

McRib

Regardless, Google wound up issuing a mea culpa stating they’d try to do a better job at keeping ads for the McRib sandwich far away from hateful idiocy:

I am sure that McD is quite happy to sell (halal) burgers to jihadis – just like Lenin’s capitalist who would source his own noose.

Of course, they are presumably bothered about being labelled "Islamophobic" for advertising the McRib (a pork product, in case anyone hadn’t noticed) to Muslims.

OGquaker says:

Re: McRib

The decades-long assault against Americans who shun beef and pork products (Oprah Winfrey?) continues; yesterday Carl’s Jr. refused to sell us their ‘Meatless Burger’, which we have bought from them for years.
Pushing cloven-hoofed animal meat in your face has been as hate-filled as the usual profit over prophet.
As a religious Green, I find the meat industry in this country disrespectful and dangerous to our collective future.

‘It is difficult to get a man to understand something, when his salary depends on his not understanding it.’ -Upton Sinclair

Anonymous Coward says:

Missing the problem

Of course the real problem isn’t Islam-related “hate” videos, because both the jihadi ones and the anti-Islam ones have the virtue of broadly telling the truth about Islam and jihad. It is the likes of Bush, Obama, Cameron, Khan and May that lie about it. Those lies are the problem, because they spread complacency about the issue.

Don’t listen to May and her crowd. Listen to ex-Muslims and non-Muslims who live in Muslim-majority countries – they know the truth.

Anonymous Coward says:

Perhaps the biggest problem is that the establishment treats left-wing extremists and right-wing extremists very differently for doing exactly the same thing.

Twitter is notorious for enforcing this double standard, with Facebook not far behind, and now there is increasing pressure on YouTube to enact similar left-leaning censorship.

The DMCA has been the most potent weapon for censoring YouTube content. Right-wing polemicists (YouTube has an awful lot of them) have learned to be very careful about using short video clips as “fair use” discussion material, as YouTube will reflexively remove such videos instantly (and slap the uploader with a copyright “strike”) upon receiving a DMCA claim, putting the burden on the uploader to embark on the long, slow process of getting the video restored by arguing fair use.

That One Guy (profile) says:

Re: Everything is easy and cheap when you don't have to do it

Along those lines, the recording and movie industries also make billions in profits, which means they too can certainly afford to ‘hire a room full of people’ to review DMCA claims before sending them out to make sure that they don’t flag something erroneously.

If they can’t manage that, then perhaps their business models are broken.

Tell you what: if you think that it’s really that easy to review (superficially or otherwise) at least 400 hours’ worth of video uploaded per minute, why don’t you give Google a call; I’m sure they’d love to offer you the job.

Anonymous Coward says:

Re: Re: Everything is easy and cheap when you don't have to do it

It’s actually pretty simple. First thing is to scan the title of the posting. They could ding a bunch right away.

Second, run the video past the Google voice speech capture system. Scan the results for keywords and kick out any video that fails for manual review.

Third, check the first 30 seconds and the description text. Timeline-jump to a few places to spot check.

Already they will have gone a long way toward spotting trouble before videos get posted.
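For illustration, here is a rough sketch of the kind of keyword pre-filter being proposed; the watchlist and function names are made up, and nothing below reflects YouTube’s actual pipeline.

```python
# Hypothetical sketch of the title/description/transcript keyword screen
# proposed above. Watchlist and names are invented for illustration.

FLAG_TERMS = {"beheading", "martyrdom operation", "ethnic cleansing"}

def needs_manual_review(title: str, description: str, transcript: str) -> bool:
    """Flag an upload for human review if any watchlist term appears in
    its title, description, or machine-generated transcript."""
    haystack = " ".join((title, description, transcript)).lower()
    return any(term in haystack for term in FLAG_TERMS)

# This upload would be kicked out for manual review...
print(needs_manual_review("Glory of the martyrdom operation", "", ""))  # True
# ...while this one sails through untouched.
print(needs_manual_review("My cat chases a laser pointer", "", ""))     # False
```

As the Scunthorpe discussion further down the thread points out, blind substring matching like this also flags plenty of innocent uploads.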

That One Guy (profile) says:

Re: Re: Re: Everything is easy and cheap when you don't have to do it

Once again: 400+ hours worth of video per minute.

What may be ‘simple’ for a few videos is anything but when you scale it up to that level, so you’re not talking about ‘a room full of people’ but a massive system requiring various levels of review of enormous amounts of content.

There’s also the problem of false positives, something that already plagues ContentID, a black or white ‘Does X match Y?’ system. Make the question a subjective one, ‘Does X count as ‘extremist’ content?’ and things would be even more insane.

That One Guy (profile) says:

Re: Re: Re:3 Everything is easy and cheap when you don't have to do it

Only if what’s being scaled up is part of the business model being used, and not something they’re being slapped with after the fact.

Were Google/YT in the ‘pre-screening video content’ business then yes, they would be to blame if they set things up such that they couldn’t handle the increased load of what they had to go through, but since they’re not, it’s not a ‘business model problem’ at all. YouTube hosts videos; that’s its business model. Saying they should have to pre-screen everything first isn’t a matter of scaling up something they’ve always had to do, it’s adding something new on top of what they already do, something that the scale of the problem would make insanely expensive and that would bring the service to a crawling halt if they were required to do it, contrary to your claims otherwise.

On a semi-related tangent, your mention of how Google is big so it’s not a problem has me again wondering: do you hold others to that same standard? Do you think that the movie and recording industries should likewise hire ‘a room full of people’ to personally vet every DMCA claim they send out to avoid false positives? They make billions too, after all; surely it would be just as easy, if not easier, for them to pre-screen DMCA claims as it would be for Google/YT to pre-screen videos. So does that standard of yours apply to everyone, or just Google?

Anonymous Coward says:

Re: Re: Re:5 Everything is easy and cheap when you don't have to do it

Indeed, the difference is huge: the MPAA and RIAA members combined publish fewer hours of content in a year than is posted to YouTube in a few minutes. Add a requirement for pre-screening, which those organizations want Google to do, and they eliminate most of the content that is competing with theirs for customer attention, because Google could not keep up with what is being posted.

As for the practicality of pre-screening, that would require at least 24,000 people actively screening content all the time, so keeping that up 24/7 would require at least 100,000 people (allowing for holidays, sickness, meal breaks, etc., along with the necessary managers and HR personnel). Then you run into the problem that those people do not know every existing work, who owns the copyright, what licenses have been granted, or whether the poster works for the company they claim and has the authority to post the work.
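Spelled out, the arithmetic behind those figures, using the 400-hours-per-minute number cited elsewhere in this thread and assuming a standard 40-hour work week:

```python
# Back-of-the-envelope staffing math for real-time pre-screening.
upload_hours_per_minute = 400                   # figure cited in this thread
watchers_needed = upload_hours_per_minute * 60  # 24,000 people watching
                                                # simultaneously, 1 min/min
hours_in_a_week = 24 * 7    # screening runs around the clock: 168 hours
hours_per_worker = 40       # one full-time worker, before breaks/holidays
headcount = watchers_needed * hours_in_a_week / hours_per_worker
print(round(headcount))     # ~100,800, i.e. "at least 100,000 people"
```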

The only people who can reliably identify a work as belonging to them or their organization are the producers (not necessarily the creators) of the work. And only they know, or have access to, the information needed to determine whether or not it has been licensed to the poster.

There is no magic crystal ball that will identify infringing works. Indeed, because there is no database of all works, there is no way of identifying the copyright holders of any particular work, or of verifying that the claimant is actually the copyright holder or is licensed to use the work.

My_Name_Here says:

Re: Re: Re:6 Everything is easy and cheap when you don't have to do it

“As for the practicality of pre-screening, that would require at least 24,000 people actively screening content all the time, so keeping that up 24/7 would require at least 100,000 people (allowing for holidays, sickness, meal breaks, etc., along with the necessary managers and HR personnel). Then you run into the problem that those people do not know every existing work, who owns the copyright, what licenses have been granted, or whether the poster works for the company they claim and has the authority to post the work.”

You are assuming that every minute of every video would have to be pre-screened. That is stupid. Nobody needs to watch a whole cat video to know it’s a cat video.

Nobody would have to watch the videos in real time either. Even with your example, run the videos at double speed and the needed headcount drops by half. Only watch 10% of the video time at double speed, and suddenly the need is down to 5% of the people you suggested. So 5% of 100,000 people would be… 5,000 workers. Suddenly it’s falling into the realm of possible. Apply a little automation to filter out the half of the videos that aren’t harmful at all, and boom, you need 2,500 people. Getting easier, isn’t it?
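Taking those discounts at face value, the arithmetic runs like this (whether a 10% spot check at double speed would actually catch anything is the premise disputed elsewhere in the thread):

```python
# The proposed discounts, applied to the 100,000-person estimate above.
baseline = 100_000             # real-time, full-coverage staffing estimate
double_speed = baseline / 2    # watch everything at 2x speed -> 50,000
sampled = double_speed * 0.10  # watch only 10% of each video -> 5,000
automated = sampled / 2        # automation clears half up front -> 2,500
print(double_speed, sampled, automated)  # 50000.0 5000.0 2500.0
```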

It’s easy to blow it off as impossible. It’s not. It’s pretty simple stuff. The anonymous coward has it closer to right than anyone would like.

That One Guy (profile) says:

Re: Re: Re:5 Everything is easy and cheap when you don't have to do it

Hosting user-submitted videos, not posting. The distinction is significant, as it means that, thanks to Section 230/common-sense protections in the US at least, they aren’t held responsible for what their users post and as such have no requirement to pre-screen for CYOA reasons. With no requirement to pre-screen, more videos aren’t a scaling problem getting out of control, because it was never a problem in the first place.

(Imagine if you will a donation-based library, where all the books are donated by others. Their ‘business model’ is to make sure that they have enough shelves to hold all the books and that people can find what they want. So long as they can manage those two, no matter how many books are donated or how fast, they’re doing fine and their ‘business model’ is fine. Now imagine someone comes in and demands that they check every book for ‘offensive’ content before people can check it out. Now the volume of donations is a problem, but that problem has nothing to do with their ‘business model’, and everything to do with the new requirement that’s been dumped in their laps.)

The entertainment industry does make money from the content that they’re filing DMCA claims for (assuming a valid notice, anyway), and unlike the user-submitted content that YT makes money from hosting, the DMCA contains an (effectively theoretical at this point) requirement to swear ‘under penalty of perjury’, which would require manual review.

As I noted above, a DMCA claim is also easier to check, as the only subjective part involved is a consideration of fair use, which has a quick and easy ‘checklist’ attached, quite unlike the subjective ‘is this offensive/extremist?’, which, barring extreme cases (and sometimes not even then), can be much harder to decide on. Ask enough people and anything can be seen as ‘offensive’, so the question becomes ‘how many people can we safely offend according to the requirements?’

There’s also the difference in consequence: miss a ‘guilty’ copyright infringement case and the harm isn’t likely to be very bad, whereas if a site is liable for user-submitted content and they let an ‘offensive/extremist’ post through, they’re likely to be facing a serious penalty, which means they’re much more likely to block even fringe stuff ‘just in case’, leading to large amounts of content and/or speech being blocked.

In both cases a faulty claim means legitimate/legal content and/or speech being removed, and while services like YT don’t have a requirement to screen content, those sending out copyright claims (theoretically) do, so why is it you think that only the former group should be required to pre-screen?

Anonymous Coward says:

Re: Re: Re:4 Everything is easy and cheap when you don't have to do it

For what it’s worth, I am not suggesting that Google have people watch every second of every video. It’s about coming up with ways to pick out uploads that are potential problems and review them.

Google has incredibly powerful tools to index content online and to extract semantic information. You don’t think that they could apply this to videos and their comments to determine which videos need review?

I also don’t think they should delete videos; it would be good enough to flag them for adults and remove ads from them. Deleting a video should be saved for the most egregious situations.

That would go a very long way towards resolving the issues at hand.

That One Guy (profile) says:

Re: Re: Re:5 Everything is easy and cheap when you don't have to do it

Even having to pre-screen ‘problematic’ videos would be a huge problem due to how many they’d have to deal with, and the massive number of false positives they’d be wading through.

ContentID, something that’s based upon a simple ‘Does it contain content X or doesn’t it?’ check, already has problems aplenty, flagging things for reasons ranging from absurd to downright broken. Now imagine a similar system but for ‘offensive’ content, and the nightmare that would be.

If Google wants to manually review videos flagged by users as ‘offensive/extremist’, which I believe they already do, then I’ve no problem with that. What I have a problem with is requiring them to do so ahead of time, as it would be insanely expensive, cause significant collateral damage, and make the service vastly less useful as a hosting platform (all of which would be bad enough for a huge company like them, but would be even worse for smaller services trying to break into the market, who wouldn’t have the same resources that YT/Google does).

Anonymous Coward says:

Re: Re: Re:6 Everything is easy and cheap when you don't have to do it

As soon as YouTube agreed to block copyrighted content and enforce DMCA notices, they tipped their hand. They are not a mere host but rather a publishing company. Hosting alone would not include a YouTube-mandated web page or related-video links and such. Your uploaded video likely does not have ads embedded; those are added by the publishing company.

That One Guy (profile) says:

Re: Re: Re:7 Everything is easy and cheap when you don't have to do it

Uh, no. That was the entire point of codifying the idea of safe harbors into law: that sites can take steps to moderate content without suddenly becoming liable for content posted by users. Were what you are saying true, then YouTube would have been better off not implementing ContentID and ignoring DMCA claims, and I rather doubt that’s what you meant to imply.

‘Voluntarily’ implementing a (lousy) filter for one type of content and complying with the law does not magically change their status such that they are responsible for what’s posted by others using their service, whether that content be copyright related or otherwise.

Anonymous Coward says:

Re: Re: Re: Everything is easy and cheap when you don't have to do it

That keyword-based filtering model would be an incredibly bad idea. I thought we had all learned about the “Scunthorpe” problem by now. Blind word-list filtering results in stupidity like requiring people’s real location and then banning them for giving it as “Fort Gay”. It was a bad idea then and it is a bad idea now.

Not to mention that meanings aren’t always clear just from the words used. Under the type of filtering logic you want, we’d see British content flagged as hardcore gay child pornography because someone asked if they could “bum a couple of fags for the boys”.
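For anyone who hasn’t run into it, the Scunthorpe problem is easy to demonstrate. The toy filter below (word list purely illustrative) shows both the blind-substring failure and why even exact word matching can’t recover intent:

```python
import re

# Toy word list; real filters are larger but share the same failure modes.
BANNED = {"cunt", "gay"}

def substring_filter(text: str) -> bool:
    """Blind substring match: flags 'Scunthorpe' because of the letters
    hiding inside it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED)

def wordwise_filter(text: str) -> bool:
    """Word-boundary match avoids Scunthorpe, but still flags the town of
    Fort Gay, because words alone don't carry intent."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(BANNED & words)

print(substring_filter("Welcome to Scunthorpe"))  # True  (false positive)
print(wordwise_filter("Welcome to Scunthorpe"))   # False (fixed)
print(wordwise_filter("I grew up in Fort Gay"))   # True  (still wrong)
```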

Anonymous Coward says:

Re: Re:

Your concept of scale is badly broken. YouTube has hundreds of hours of video uploaded to it every minute, so your room full of people would in reality be a large building full, staffed 24/7.

YouTube’s business model is not broken; it’s just that they are not gatekeepers, but rather facilitators that allow anybody to publish without seeking any form of permission or review.
