Culture

by Glyn Moody


Filed Under:
algorithms, gaming, kids, youtube

Companies:
google, youtube



Algorithmic Videos Are Making YouTube Unsuitable For Young Children, And Google's 'Revenue Architecture' Is To Blame

from the so-how-do-we-fix-it? dept

There's an interesting article on Medium by James Bridle that's generating plenty of discussion at the moment. It's titled "Something is wrong on the internet", which is certainly true. Specifically, the article is concerned with the following:

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

I recommend reading the article so that you can decide whether it is a perspicacious analysis of what's wrong with the Internet today, or merely another of the hyperbolic "the Internet is corrupting innocent children" screeds that come along from time to time. As an alternative -- or in addition -- you might want to read this somewhat more measured piece from the New York Times, which raises many similar points:

the [YouTube Kids] app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.

In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes.

The piece on Medium explores a particular class of YouTube Kids videos that share certain characteristics. They have bizarre, keyword-strewn titles like "Bad Baby with Tantrum and Crying for Lollipops Little Babies Learn Colors Finger Family Song 2" or "Angry Baby vs Spiderman vs Frozen Elsa BABY DROWNING w/ Maleficent Car Pink Spidergirl Superhero IRL". They have massive numbers of views: 110 million for "Bad Baby" and 75 million for "Angry Baby". In total, there seem to be thousands of them with similar, strange titles, and similar, disturbing content, which collectively are racking up billions of views.

As Bridle rightly notes, the sheer scale and downright oddness of the videos suggests that some are being generated, at least in part, by automated algorithms that churn out increasingly deranged variations on themes that are already popular on the YouTube Kids channel. The aim is to garner as many views as possible, and to get children to watch yet more of the many similar videos. More views mean more revenue from advertising: alongside the video, before it, or even in it -- some feature blatant product placement. Young children are the perfect audience for this kind of material: they are inexperienced, and therefore less likely to dismiss episodes as poor quality; they are curious, and so will probably watch closely to see what happens, no matter how absurd and vacuous the storyline; and they probably don't use ad blockers. As Bridle says in his Medium post:

right now, right here, YouTube and Google are complicit in that system [of psychological abuse]. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale.

That may be overstating it, but it is certainly true that YouTube's "revenue architecture", based on how many views videos achieve, tends to produce a race to the bottom in terms of quality, and a shift to automated production of endless variations on popular themes -- both with the aim of maximizing the audience.
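The kind of automated recombination described above can be sketched in a few lines. The keyword pools below are illustrative, seeded from the real titles quoted earlier; the generator itself is a guess at the general technique, not the actual code anyone is running.

```python
import random

# Illustrative keyword pools, drawn from the titles quoted above.
# The real generators (human or scripted) are unknown; this is a sketch.
CHARACTERS = ["Bad Baby", "Angry Baby", "Spiderman", "Frozen Elsa", "Maleficent"]
ACTIONS = ["Crying for Lollipops", "Learn Colors", "Finger Family Song", "Tantrum"]
TAGS = ["IRL", "w/ Pink Spidergirl", "Superhero", "2"]

def make_title(rng: random.Random) -> str:
    """Recombine already-popular keywords into a new, search-friendly title."""
    parts = [
        rng.choice(CHARACTERS),
        "vs " + rng.choice(CHARACTERS),
        rng.choice(ACTIONS),
        rng.choice(TAGS),
    ]
    return " ".join(parts)

rng = random.Random(42)
for _ in range(3):
    print(make_title(rng))
```

The point of the sketch is how cheap this is: a handful of lists and a random choice produce an effectively endless stream of titles, each stuffed with terms the recommendation algorithm already rewards.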

YouTube has just announced that it will try to restrict access by young children to this type of video, a move that it rather improbably claims has nothing to do with the recent articles. But given the potential harm that inappropriate material could produce when viewed by young children, there's a strong argument that Google should apply other criteria in order to de-emphasize such offerings. A possible approach would be to allow adults to rate the material their children see, using a mechanism separate from the current "like" and "dislike". Google could then use adverse parental ratings to scale back payments it makes to channels, while good ratings from adults would cause income to be boosted. Parents would need to sign up before rating material, but that's unlikely to be a significant barrier to participation for those who care about what their children watch.
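One way the suggested rating mechanism could feed into payments is a simple multiplier on ad revenue. Everything here -- the function name, the 0.5 to 1.5 scaling range -- is a hypothetical illustration of the idea, not any real YouTube system:

```python
def payment_multiplier(good_ratings: int, bad_ratings: int,
                       floor: float = 0.5, ceiling: float = 1.5) -> float:
    """Scale a channel's ad revenue by verified parental ratings.

    With no ratings the multiplier is neutral (1.0); heavily
    down-rated material trends toward `floor`, while well-rated
    material trends toward `ceiling`. The bounds are arbitrary
    placeholders for illustration.
    """
    total = good_ratings + bad_ratings
    if total == 0:
        return 1.0
    score = good_ratings / total  # fraction of positive parental ratings
    return floor + (ceiling - floor) * score

# A video that parents overwhelmingly flag earns well below its usual revenue:
print(payment_multiplier(good_ratings=100, bad_ratings=900))
```

The design choice worth noting is that the multiplier only bites once parents actually rate: unrated material is paid as it is today, so the system punishes nothing by default and rewards channels that earn parental approval.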

Although there is always a risk of such systems being gamed, the sheer scale of the audience involved -- millions of views for a video -- makes gaming much harder than it is for material with smaller reach, where bogus votes skew results more easily. In any case, Google would need to develop systems that can detect attempts to use large-scale bots to boost ratings. The fact that the company has become quite adept at spotting and blocking spam at scale on Gmail suggests it could create such a system if there were enough pressure from parents to do so.

If Google adopted such a reward system, Darwinian dynamics would likely lead to better-quality content for children, where "better" is defined by the broad consensus of what adults want their children to see. Another way Google could encourage such content would be to let parents further boost what they regard as valuable with one-off donations or regular subscriptions. Techdirt readers can doubtless come up with other ways of providing incentives for YouTube channels to move away from the automated and often disturbing material many are increasingly filled with.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Reader Comments

The First Word

Anonymous Coward, 11 Nov 2017 @ 7:18am

    Pretty much this. I was watching James Bond, South Park, pre-politically-correct Disney and Merrie Melodies (like, blackface and WWII-propaganda era), playing violent video games and more at age 9. With the exception of South Park, I was enjoying it too. I was a mild-mannered, relatively mature, well-behaved, if perhaps hyperactive, kind child.

    Know why? 'Cause if I managed to watch something my mother wished I hadn't (South Park and James Bond in particular), she sat down and actually talked to me about it. Explained that South Park was just a way for grownups to make fun of things they thought were wrong or important or done wrong. Explained that James Bond is just a movie character and it isn't right to be a pseudo-rapist egotist with a gun. Told me that some of the cartoons I was watching were from a different time. Frig, at age 10 I was researching WWII because I was curious from the cartoons, not because school told me I had to.

    The real psychopaths I knew were kids whose parents either neglected them entirely or demanded to micromanage and censor every single experience in their kids' lives.
