from the questions-worth-researching dept
Over the last few months, I’ve been asking a general question which I don’t know the answer to, but which I think needs a lot more research. It gets back to the issue of how much of the “bad” that many people insist is caused by social media (and Facebook in particular) is actually caused by social media, and how much of it is just social media shining a light on what was always there. I’ve suggested that it would be useful to have a more nuanced accounting of this, because it’s become all too common for people to insist that anything bad they see talked about on social media was magically caused by social media (oddly, traditional media, including cable news, rarely gets this kind of treatment). The reality, of course, is likely that there’s a mix of things happening, and they’re not easily teased apart, unfortunately. So, what I’d like to see is a more nuanced accounting of how much of the “bad stuff” we see online is (1) just social media reflecting back bad things that have always been there, but which we were less aware of, as opposed to (2) enabled by social media connecting and amplifying the people spreading the bad stuff. On top of that, I think we should similarly be looking at how social media has also connected tons of people for good purposes — and see how much of that happens as compared to the bad.
I’m not holding my breath for anyone to actually produce this research, but I did find a recent Charlie Warzel piece very interesting, and worth reading, in which he suggests (with some interesting citations) that social media disproportionately encourages the miserable to connect with each other and egg each other on. It’s a very nuanced piece that does a good job highlighting the competing incentives at play, and notes that part of the reason there’s so much garbage online is that there’s tremendous demand for it:
But online garbage (whether political and scientific misinformation or racist memes) is also created because there’s an audience for it. The internet, after all, is populated by people—billions of them. Their thoughts and impulses and diatribes are grist for the algorithmic content mills. When we talk about engagement, we are talking about them. They—or rather, we—are the ones clicking. We are often the ones telling the platforms, “More of this, please.”
This is a disquieting realization. As the author Richard Seymour writes in his book The Twittering Machine, if social media “confronts us with a string of calamities—addiction, depression, ‘fake news,’ trolls, online mobs, alt-right subcultures—it is only exploiting and magnifying problems that are already socially pervasive.” He goes on, “If we’ve found ourselves addicted to social media, in spite or because of its frequent nastiness … then there is something in us that’s waiting to be addicted.”
In other words, at least some of this shouldn’t be laid at the feet of the technology, but rather at our own, as humanity, for what we want out of the technology. It’s potentially a sad statement on human psychology that we’d rather seek out the garbage than the other stuff, but it also suggests that the “solution” lies not so much in attacking the technology as in figuring out solutions that have more to do with our own societal and psychological outlook on the world.
However, as Warzel notes, if social media is preternaturally good at linking up the miserable and encouraging them to be more miserable together, then you could argue that it does deserve some of the blame:
Misery is a powerful grouping force. In a famous 1950s study, the social psychologist Stanley Schachter found that when research subjects were told that an upcoming electrical-shock test would be painful, most wished to wait for their test in groups, but most of those who thought the shock would be painless wanted to wait alone. “Misery doesn’t just love any kind of company,” Schachter memorably argued. “It loves only miserable company.”
The internet gives groups the ability not just to express and bond over misery but to inflict it on others—in effect, to transfer their own misery onto those they resent. The most extreme examples come in the form of racist or misogynist harassment campaigns—many led by young white men—such as Gamergate or the hashtag campaigns against Black feminists.
Misery trickles down in subtler ways too. Though the field is still young, studies on social media suggest that emotions are highly contagious on the web. In a review of the science, Harvard’s Amit Goldenberg and Stanford’s James J. Gross note that people “share their personal emotions online in a way that affects not only their own well-being, but also the well-being of others who are connected to them.” Some studies found that positive posts could drive engagement as much as, if not more than, negative ones, but of all the emotions expressed, anger seems to spread furthest and fastest. It tends to “cascade to more users by shares and retweets, enabling quicker distribution to a larger audience.”
This part is fascinating to me in that it actually does try to tease out some of the differences between what anger does to us at an emotional level as compared to happiness. It also reminds me of the (misleadingly reported) Washington Post story about how Facebook kept adjusting the “weighting” of the various emoji reactions it added, with a particular focus on how to weight the “anger” emoji.
Anger certainly feels like the kind of emotion that will lead something to spread quickly — we’ve all had that moment of anger, when spreading the news feels like at least some kind of outlet for feeling powerless over something awful that has happened. But I’m still not clear on how to break down how much of this is social media interacting with our emotions, as compared to how much is social media shining a light on deeper, underlying societal problems that need solving at their core.
Warzel argues that the connecting of the miserable is something different, and perhaps leads to a more combustible world:
But it also means that miserable people, who were previously alienated and isolated, can find one another, says Kevin Munger, an assistant professor at Penn State who studies how platforms shape political and cultural opinions. This may offer them some short-term succor, but it’s not at all clear that weak online connections provide much meaningful emotional support. At the same time, those miserable people can reach the rest of us too. As a result, the average internet user, Munger told me in a recent interview, has more exposure than previous generations to people who, for any number of reasons, are hurting. Are they bringing all of us down?
Some of the other research he highlights suggests something similar:
“Our data show that social-media platforms do not merely reflect what is happening in society,” Molly Crockett said recently. She is one of the authors of a Yale study of almost 13 million tweets that found that users who expressed outrage were rewarded with engagement, which made them express yet more outrage. Surprisingly, the study found that politically moderate users were the most susceptible to this feedback loop. “Platforms create incentives that change how users react to political events over time,” Crockett said.
But in the end, he notes that, well, this is all interconnected and way more complicated than most people proposing solutions would like to admit. Destroying Facebook doesn’t solve this. Removing Section 230 doesn’t solve this (and would almost certainly make things much, much worse).
But the technology is only part of the battle. Think of it in terms of supply and demand. The platforms provide the supply (of fighting, trolling, conspiracies, and junk news), but the people—the lost and the miserable and the left-behind—provide the demand. We can reform Facebook and Twitter while also reckoning with what they reveal about the nation’s mental health. We should examine more urgently the deeper forces—inequality, a weak social safety net, a lack of accountability for unchecked corporate power—that have led us here. And we should interrogate how our broken politics drive people to seek out easy, conspiratorial answers. This is a bigger ask than merely regulating technology platforms, because it implicates our entire country.
I think his suggestion is correct. We need to be looking across the board at how we build a better society — and in doing so, we’re doing everyone a disservice if we just assume that “regulating tech” will somehow solve the underlying societal problems. But, as the article makes clear, there are so many different factors at play that it’s not easy to tease them apart.