AI Won't Save Us From Fake News: YouTube's Fact Checking Tool Thinks Notre Dame Fire Is About 9/11

from the content-moderation-doesn't-work-this-way dept

In the ongoing moral panic about social media algorithms and what they recommend, there are various suggestions on how the companies might “improve” what they do, and many of them suggest relying on newer, better algorithms. It’s the “nerd harder” approach. Mark Zuckerberg himself, last year, repeatedly suggested that investing heavily in AI would be a big part of dealing with content moderation questions. This has always been a bit silly, but as if to demonstrate just how silly, yesterday, during the tragic fire at Notre Dame Cathedral in Paris, YouTube’s fancy new “fact checking AI” seemed to think multiple videos of the fire were actually about the September 11th, 2001 attacks on the US, and linked them to a page on Encyclopedia Britannica with more info about those attacks.

These links didn’t last very long, but they should at least serve as a reminder that expecting AI to magically fact check breaking news in real time is, at best, a long, long way off, and at worst, a nearly impossible request.

This puts YouTube and others in an impossible position of their own. Just a few weeks ago, people were freaking out that YouTube and Facebook (briefly) allowed videos from the attack in Christchurch to be on their platforms — and have been demanding that the platforms “do something” in response. Having a tool that provides at least some sort of context, or even counterpoint to nonsense (when people start posting nonsense) certainly seems like a good idea. But it requires a level of sophistication and accuracy that is currently severely lacking.

One response to all of this would be to admit that human beings are not perfect, that social media sometimes reflects all aspects of humanity, and that sometimes bad stuff is going to make it online, but that doesn’t seem acceptable to a large number of people. Given that, they’re going to have to accept that sometimes AI is going to get this kind of stuff laughably wrong.

Companies: youtube


Comments on “AI Won't Save Us From Fake News: YouTube's Fact Checking Tool Thinks Notre Dame Fire Is About 9/11”

41 Comments
Anonymous Coward says:

This is reaching a bit. People know AI (or autocorrect, for that matter) will make errors. It’s not a big deal.

Content moderation is necessary because of those who abuse content, or who allow piracy. "Good" people standing down while bad actors perform malicious acts are why these regulations were necessary. Section 230 is on the way out because platforms converted it from a shield into a sword.

The internet is not some magical place where laws cease to exist, just because they can easily be broken online. The first priority will be to stop the lawbreaking, and only then whatever is left over can be devoted to freedom of expression.

Anonymous Coward says:

Re: Re:

We already have laws against all of the illegal activity you allude to. We really don’t need more that say "oh, it’s also still illegal when done on the internet" and we certainly don’t need to deputize (and burden) every web site on Earth to police the public.

These regulations were not necessary, and Section 230 is only being fought against because, without it, people could sue the platforms for the actions of the public, since the platforms have all the money. It’s not profitable to sue individuals, the actual lawbreakers.

Anonymous Coward says:

but that doesn’t seem acceptable to a large number of people.

More like it is not acceptable to a very vocal minority, and to the politicians who listen to them. Doing nothing has not been acceptable to politicians for a long time, and that gives vocal minorities a big lever, since the alternative to their demands is to do nothing and tell them they need not follow links, and can click away from anything they do not like.

Anonymous Coward says:

One response to all of this would be to admit that human beings are not perfect, that social media sometimes reflects all aspects of humanity, and that sometimes bad stuff is going to make it online, but that doesn’t seem acceptable to a large number of people. Given that, they’re going to have to accept that sometimes AI is going to get this kind of stuff laughably wrong.

That’s fine. You have 1 hour to fix your or your AI’s mistake. That should be plenty.

/s (just in case)

Gary (profile) says:

Re: Re: Re:

Twenty years of intransigence by internet platforms has caused this.

You mean 20 years of our ISPs not monitoring our every activity?

You are demanding – in no uncertain terms – to have your every private email opened and read just in case it contains forbidden material. And encrypted material blocked by default, just to be safe.

Every post you make, every email, every picture you take with your cellphone.

Some will shout, "Aha! I don’t have a cellphone." But you are still posting here, and DEMANDING that you be censored.

Good job AC.

Gary (profile) says:

Re: Re: Re:2 Re:

E-mail is private. These measures deal with what is posted publicly.

I have seen many comments that calmly explain that everything should be checked, because it can be checked. Because – pirates bad.

Are you really so naive that you think ISP-level filtering wouldn’t check email too? Or are you just fine with it because only the pirates will be censored, eh?

Angus Hamish 'Anguish' McGillicuddy says:

Actually "Gary" we only want YOU to be decent and not steal.

But you are still posting here, and DEMANDING that you be censored.

No. We just want you acting same as in real life, WITHOUT Section 230 essentially legalizing piracy and vile attacks instead of reasonable discourse.

Why does The Internet require an exception to civil society?

Do you think that shouting over every little irritation is progress?

(Of course, it’s a given that you DO think piracy is okay up to justified, and that links to infringed content is "free speech" that must not be suppressed. Skip that.)

Now, what’s the purpose of Section 230? — To make it easy for The Public to Publish without corporate or gov’t interference.

But Rule by corporations over 1A Rights is what YOU advocate with Masnick’s view of Section 230. — And NO, "moderation" is not same as the absolute arbitrary control that masnicks want so can simply suppress all opposition to corporatism. — And there is NO "separate but equal" comparing giant platforms to tiny ones that Google doesn’t have to index, so will never be discovered.

Section 230 makes individual Publishers. The corporations are NOT the Publishers, not liable, but they intend to keep Editorial Control too. — That’s the real de facto censorship, kid, NOT the coming attempts to limit piracy and have a snippet tax.

In other words, you, Techdirt, and especially Masnick have it all backwards.


Oh, and especially for you, "Gary", who’s actually Geigner: we want you to quit astro-turfing.

Killercool (profile) says:

Re: Actually "Gary" we only want YOU to be decent and not steal.

Are you kidding me.

If a person gets mugged in Walmart, you go after the mugger, not Walmart.
If someone steals a TV from a car at Best Buy, you go after the thief, not Best Buy.
If someone is selling cocaine near the candy aisle at Walgreens, you arrest the dealer, not Walgreens.
Hell, if the pharmacist is selling painkillers illegally at Walgreens, YOU ARREST THE ACTUAL CRIMINAL.

Why is the concept of personal responsibility so antithetical to your beliefs, especially since you have long spouted your SovCit ideologies?

Gary (profile) says:

Re: Re: Actually "Gary"

Why is the concept of personal responsibility so antithetical to your beliefs, especially since you have long spouted your SovCit ideologies?

The core belief of the Sov-Cit is that they aren’t responsible for anything. Submitting to others’ laws is weakness. Therefore Blue-Balls hates getting downvoted, because we show power over him.

Gary (profile) says:

Re: Actually "Gary" we only want World Peace

Actually Blue Balls copying is not Theft. You’d know that if you were paying attention.

Blocking suspicious content is censoring. Or is it only censorship if you get blocked?

You are still demanding global, automatic censorship on a mass scale. Nothing less. Every Communication to be checked in case it might be owned by a corporation.

Corporate censorship backed by common law!

Anonymous Coward says:

AI will continue to result in embarrassing failures, such as Google’s labeling of a notorious Nazi concentration camp as a "jungle gym" and identifying people of black African descent as "gorillas."

And in what could be the key to the future of AI "improvements": Google’s solution to accidental algorithmic racism was to ban gorillas.

https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people

Ninja (profile) says:

I believe there may come a point where we manage to create AI smart enough to read context, nuance and the like, and do filtering right. None of us reading this will probably be around to see it happen.

Even then, we know AI will be as biased as its training data sets and maintainers are. We’ll probably develop fully autonomous and sentient androids before we can do filtering right.

norahc (profile) says:

Re: Re:

I believe there may come a point where we manage to create AI smart enough to read context, nuance and the like, and do filtering right. None of us reading this will probably be around to see it happen.

Even then, we know AI will be as biased as its training data sets and maintainers are. We’ll probably develop fully autonomous and sentient androids before we can do filtering right.

If people can’t figure out how to read context and nuances correctly, there’s absolutely no way an AI will be able to do it.

Anonymous Coward says:

Are we so sure the AI got it so wrong?

What is the primary defining feature of the 9/11 attack? They were Islamic in origin.

What are the primary features of most church desecrations and attacks in Europe? They’re Islamic in origin.

What is the primary feature of the Notre Dame Fire? Well, we’re not yet sure that it was of Islamist origin, but given the mountains upon mountains upon mountains of previous evidence of Islam destroying any monuments that don’t match their culture/religion, it’s pretty reasonable to guess that this might be another case of Jihadis Gone Wild.

Just look at the Bamiyan Buddhas. Not Islamic, so destroyed by Islamists. Priceless ancient monuments and knowledge of history, gone forever.
