We Should Probably Stop Blaming Technology For The Failings Of Human Beings
from the technology-is-a-mirror dept
I’ve been thinking a lot lately about how so many of the “problems” that people bring up with regard to the internet these days — many of which have to do with disinformation, misinformation, propaganda, etc. — are really manifestations of the problems of people in general, perhaps magnified by technology. At some point I’ll likely write more on this concept, but it’s difficult to see how blaming the technology solves the underlying problems of humanity. It’s treating the symptoms, not the disease. Or, worse, in many cases it seems like an attempt to brush the disease under the rug and hope it disappears (to mix metaphors a bit). Alicia Wanless has written a long and interesting post that makes a similar, though slightly different, point.
She also notes that blaming technology seems unlikely to solve the underlying issues.
And yet, in the fear that is spreading about the future of democracy and threats of undue influence (whatever that might mean), there is no shortage of blame and simplified calls to action: “the governments should regulate the tech companies!”; “the tech companies should police their users!”; “we should do back whatever is done against us — so, more propaganda!” In all of this, I cannot help but think we might be missing something; are we fundamentally approaching the problem in the wrong way? …
Technology might have brought us to this point where we now must look at ourselves, our society in this digital mirror. But it is not to blame. These issues are not new. Persuasion, manipulation, deception, abuse — none of these are new. Humans have been doing and suffering from these things forever. What is new is the scale, speed and reach of such things. If anything, ICTs have only amplified our pre-existing issues — and as such, no technical solution can truly fix it.
She further notes that, since technology is changing so rapidly, any attempt to solve the “problems” discussed above by targeting tech platforms is unlikely to have much long-term impact, as the purveyors of disinformation will just move elsewhere. Now that people know how to leverage technology in this manner, they’re not just going to stop. Furthermore, she notes that merely censoring content and brushing it under the rug can often have the opposite of the intended effect (something we’ve talked about quite a bit here):
Filtering and blocking people into protection is not just impractical in a digital age, it is dangerous. This is the informational equivalent of obsessive disinfectant use — what will happen to that bubbled audience when “bad” information inevitably comes through? To say nothing of the consequences for democracy in such a managed information environment. After all, blocking access to information only makes it more coveted through the scarcity effect. And given that the people exposed to misinformation are seldom those who see the corrective content, questions remain about the utility of efforts to dispel and counter such campaigns. If we want to protect democracy, we must make citizens more resilient and able to discern good information from bad.
That last line is important, but I’ve seen a number of people pushing for more tech regulation mock this idea, rolling their eyes and saying “media literacy is not the answer.” I’m not convinced of that, in general, but Wanless has a slightly different approach. She’s not just talking about media literacy, but about the fact that the western world seems blind to how we have spent decades pushing our own propaganda on the rest of the world — and that the rest of the world finds it laughable when we complain about others doing the same, using the technology that we developed.
Of course, to do this, we in liberal democracies, will have to come to terms with something we have avoided for a while: how much persuasion is acceptable and where are the lines? What is good versus bad information and who decides that? Who can legitimately use persuasion and when? … This is the elephant in the room — persuasion. And it is not enough to say that when we use persuasion it is acceptable, but when an adversary does so it is not.
Until we in the West come to terms with our awkward relationship with propaganda and persuasion, we cannot effectively tackle issues associated with the manipulation of the information environment. For far too long our aversion to persuasion has made us hypocrites, trying to disguise attempts at influencing other populations with various euphemisms (which also might explain why words are failing us now in describing the situation).
As she notes, we in the West can argue that US and western influence campaigns around the world were different from, say, Russian or Chinese influence campaigns these days, but it’s a distinction that doesn’t much matter to those pushing disinformation campaigns today. They see it all as the same thing.
She ends her piece with some suggestions on what to do — and I recommend going there to read them — but I’m still thinking a lot about how the internet has really held up a mirror to society, and we don’t really like what we see. But rather than recognizing that we need to fix society — or some of our political corruptions — we find it easier to blame the messenger. We find it easier to blame the tool that held up this mirror to society.
We can’t fix the underlying problems of society — including over-aggressive tribalism — by forcing tech companies to be arbiters of truth. We can’t fix underlying problems of society by saying “well, this dumb view should never be allowed to be shared” no matter how dumb. We fix the underlying problems in society by actually understanding what went wrong, and what is still going wrong. But fixing society is hard. Blaming the big new(ish) companies and their technology is easy. But it’s not going to fix anything at all if we keep denying the larger fundamental flaws and problems in society that created the conditions that resulted in the tech being used this way.