Eric Schmidt Suggests Building A 'Spell Checker' For Online Harassment And Other Bad Things Online
from the good-luck-with-that dept
Google’s executive chairman Eric Schmidt has an opinion piece in the NY Times, in which he advocates for an internet that is more widely available and enables greater freedom of expression… but also one with “spell-checker”-like tools to identify bad stuff online, such as harassment and ISIS videos:
Authoritarian governments tell their citizens that censorship is necessary for stability. It’s our responsibility to demonstrate that stability and free expression go hand in hand. We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice. Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people, and the wrong voices.
This is one of those “sounds good when you say it, but what does it really mean” kind of statements. In some ways, you could argue that his statement is almost self-contradictory. We should make it easier to see news and information from another country’s point of view… unless that point of view is one we’ve declared to be terrorists. As easy as it is to agree with that general sentiment — I’d prefer a world without ISIS and other terrorist groups, certainly — it leaves open the possibility for widespread abuse. Don’t like a particular group or country? Just declare them terrorists, and shooop down they go through the internet memory hole.
And this is the problem with these kinds of feel-good suggestions. So much depends on the idea that there is some objective standard separating “good” content, which we should all share, from “bad” content, which is evil and should be taken down. Where you draw that line can be quite different for nearly everyone, and if you’re in a position of serious power, drawing that line can result in dangerous abuses of power.
That’s why I still think a much better solution is to separate out the layers a bit. A few months ago, I talked about the importance of protocols instead of platforms. Separate out the content from the platforms, and then let many others create tools that can filter that content in different ways. Then each individual can decide for themselves which tools they want to use to create their own internet experience. Someone could create that kind of “anti-harassment/anti-terrorist” filter tool, and people who want to use it would be free to do so, without it impacting the experience of others. Things get tricky when, as the internet gets more centralized, the platforms are also in charge of the filters. When that happens there are inevitable mistakes and abuses, leading to censorship and voices being silenced.
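One way to picture the protocols-not-platforms idea is a client that pulls posts from an open content layer and applies only the filter tools the user has opted into. This is a minimal hypothetical sketch, not any real protocol's API: the `Post` type, the `no_harassment` filter, and the blocked-term list are all invented for illustration, and a real filter would use something far more sophisticated than keyword matching.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str


# A filter is just a predicate: True means "show this post".
# Each user picks which filters their own client applies.
Filter = Callable[[Post], bool]


def no_harassment(post: Post) -> bool:
    # Hypothetical third-party filter; a real one would use a trained
    # classifier, not a hard-coded term list.
    blocked_terms = {"harassing-slur"}
    return not any(term in post.text.lower() for term in blocked_terms)


def render_feed(posts: List[Post], filters: List[Filter]) -> List[Post]:
    # The client, not the platform, decides which filters apply,
    # so one user's choices never affect another user's feed.
    return [p for p in posts if all(f(p) for f in filters)]


feed = [Post("alice", "hello world"), Post("troll", "some harassing-slur")]
print([p.author for p in render_feed(feed, [no_harassment])])  # → ['alice']
```

The point of the design is that the filtering happens at the edge: the same `feed` reaches everyone, and a user who wants no filters at all simply passes an empty list.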