YouTube Briefly Nukes Video Of Nazi Symbol Destruction For Violating Hate Speech Rules
from the everyone-go-crazy-all-at-once dept
Things have gone slightly crazy in the wake of the Charlottesville protests. What started as speech and ended in violence has prompted a number of reactions, many of them terrible. The president took three swings at addressing the situation: one bad, one a bit better, and one that erased the “better” statement completely when Trump decided to go off-script and engage in a bunch of whataboutism.
Other reactions haven’t been much better. After defending the white nationalists’ right to protest the removal of Confederacy-related statues, the ACLU decided it would no longer protect the First Amendment rights of those exercising their Second Amendment rights. It didn’t state it quite that bluntly, but it essentially said that if it detected some “intent” to harm counter-protesters, the ACLU wasn’t interested in defending gun-owning citizens’ right to assemble.
Over on the internet, things got weird. Third-party service providers suddenly began dumping white nationalist/Nazi-related websites and forums, setting a rather dangerous precedent for themselves. While some may view the moves as long overdue, the moment a platform starts engaging in arbitrary determinations about speech is the same moment government officials and entities start seeing wiggle room for further speech-policing demands.
Meanwhile, platforms’ decisions about acceptable speech are still being made as badly as ever. Rob Beschizza of Boing Boing points out YouTube (temporarily) took down a video of the US military destroying Nazi symbols for “violating” its policy on “hate speech.”
The video has since been restored, but it’s just another example of how this sort of moderation tends to be more of a threat to free speech than an effective deterrent of “hate speech.” To begin with, “hate speech” in the US is a term left to the eye of the beholder. It’s not a legal term of art, and there’s nothing in our laws or Constitution that forbids hateful speech. Attempts to police “hate speech” with algorithms result in spectacularly stupid “decisions.” Attempts to police it with human moderators seldom fare better, resulting in innocuous content being removed while truly vile speech remains where everyone can see it.
It’s understandable that so many different entities are doing everything they can to combat hate in the wake of the Charlottesville protests, but the rush to do something means a lot of it will be done badly and will only target the current Villains of the Week. It’s something that should be done cautiously, carefully, and with an eye on restricting as little speech as possible. Instead, we’re getting overcorrection from both artificial and human intelligence as everyone suddenly pitches in simultaneously. Maybe things will calm down in a few weeks, but the tensions brought to the surface by the Charlottesville protests suggest it’s going to be a long time before the nation returns to anything resembling “normal.”