from the a-rare-thing dept
At this point, the category of "fake news" has become all but meaningless — a trajectory many of us saw coming the moment we first heard the words or saw the hashtag. That doesn't mean the underlying problems aren't real: many people who invoke "fake news" are trying to express genuine concern about troubling trends, but the nebulous label isn't doing them any favors, and is in fact diverting attention from the heart of the issue. Thousands of words a day are being expended on the subject with little visible progress in understanding it, and companies like Facebook are unveiling fact-checking features that may prove to be interesting experiments but are unlikely to make much difference in the long run. Against that backdrop, it's rare and refreshing to see someone actually get things right. If you're interested in the "fake news" phenomenon, you should read Danah Boyd's new post about the real problems that we can't expect internet platforms to magically address:
I don’t want to let companies off the hook, because they do have a responsibility in this ecosystem. But they’re not going to produce the silver bullet that they’re being asked to produce. And I think that most critics of these companies are really naive if they think that this is an easy problem for them to fix.
Too many people seem to think that you can build a robust program to cleanly define who and what is problematic, implement it, and then presto — problem solved. Yet anyone who has combatted hate and intolerance knows that Band-Aid solutions don’t work. They may make things invisible for a while, but hate will continue to breed unless you address the issues at the source. We need everyone — including companies — to be focused on grappling with the underlying dynamics that are mirrored and magnified by technology.
There’s been a lot of smart writing, both by scholars and journalists, about the intersection of intolerance and fear, inequality, instability, et cetera. The short version of it all is that we have a cultural problem, one that is shaped by disconnects in values, relationships, and social fabric. Our media, our tools, and our politics are being leveraged to help breed polarization by countless actors who can leverage these systems for personal, economic, and ideological gain. Sometimes, it’s for the lulz. Sometimes, the goals are much more disturbing.
That's just one small portion of a piece that is well worth reading in full. Boyd brings highly relevant experience to the discussion: in the early days of Blogger, she handled customer complaints and did all manner of content moderation for the platform, dealing with online harassment and content policies when those issues were just emerging in a blogging world that was still taking shape. She knows firsthand that it's essentially impossible to draft and enforce a consistent content policy that can't be abused and isn't itself abusive, and it's worrying but not surprising to hear her say that even the experts working on these issues inside social media companies can't stay consistent in describing the problem they want to fix.
Of course, Boyd doesn't claim to have a silver-bullet solution of her own, but her proposed approach — designing platforms and mechanisms that encourage the bridging of ideological gaps and world views — is certainly a much smarter and more useful way of thinking about the problem. It calls for creative innovation to encourage better speech, rather than a never-ending battle to suppress "bad" speech, even if it's not yet clear how to put it into practice. In any case, we need much more discussion like this in place of people crying "fake news" and assuming everyone else is on board with their own personal, arbitrary definition of those words.