AI Won't Save Us From Fake News: YouTube's Fact Checking Tool Thinks Notre Dame Fire Is About 9/11
from the content-moderation-doesn't-work-this-way dept
In the ongoing moral panic about social media algorithms and what they recommend, there are various suggestions on how the companies might “improve” what they do — and many of them amount to relying on newer, better algorithms. It’s the “nerd harder” approach. Mark Zuckerberg himself, last year, repeatedly suggested that investing heavily in AI would be a big part of dealing with content moderation questions. This has always been a bit silly. And as if to demonstrate just how silly, yesterday, during the tragic fire at Notre Dame Cathedral in Paris, YouTube’s fancy new “fact checking AI” seemed to think multiple videos of the fire were actually about the September 11th, 2001 attacks on the US, and linked to a page on Encyclopedia Britannica with more info about those attacks:
These links didn’t last very long, but they should serve as a reminder that expecting AI to magically fact check breaking news in real time is, at best, a long, long way off — and, at worst, a nearly impossible request.
This puts YouTube and others in an impossible position of their own. Just a few weeks ago, people were freaking out that YouTube and Facebook (briefly) allowed videos of the attack in Christchurch to remain on their platforms — and have been demanding that the platforms “do something” in response. Having a tool that provides at least some sort of context, or even a counterpoint when people start posting nonsense, certainly seems like a good idea. But it requires a level of sophistication and accuracy that is currently severely lacking.
One response to all of this would be to admit that human beings are not perfect, that social media sometimes reflects all aspects of humanity, and that sometimes bad stuff is going to make it online. But that doesn’t seem acceptable to a large number of people. Given that, they’re going to have to accept that sometimes AI is going to get this kind of stuff laughably wrong.