from the sounds-good,-but-what-do-we-do-about-it dept
On Thursday, former President Barack Obama gave a speech at Stanford University talking about “Challenges to Democracy in the Digital Information Realm.” It’s worth watching, even if I have some issues with it. My very short summary is that he’s much more willing than most of the pundit class to grapple with the nuances and tradeoffs (which is good, and honestly, slightly refreshing to hear!) but that doesn’t make the speech necessarily good. It still overly simplifies things, somewhat misdiagnoses the issues, and comes up with weak platitudes, rather than actual solutions.
On the whole, it feels like he’s actually done the reading, but hasn’t fully grasped how all of these issues play together. So he can hit on some high points in a more reasonable way than most newbies to the tech policy space, but he fails to understand them at the deeper level necessary to recognize the actual tradeoffs and challenges with his ideas.
His section starts at around 33 minutes in.
The day before the speech, he pushed out a Disinformation and Democracy Reading List which is worth looking at, but also gives you kind of a preview of where his speech would go: highlighting lots of the problems and dangers of disinformation, but pretty weak on solutions. Notably the first item on the reading list was the high-profile Aspen Institute’s “Commission on Information Disorder” report that we panned a few months back as being wishy-washy nonsense. It was self-contradictory at times, highlighting problems but noting that most any solution they could come up with likely wouldn’t work.
And… that’s kind of how Obama’s speech went as well. He kicked it off by laying out some fairly obvious concerns about the state of democracy today around the globe, and followed that up by pointing a finger at the state of information flows, while insisting that (unlike some others) he’s not demonizing technology or the internet, but feels like something needs to be done. Here’s the key bit at the beginning:
A lot of factors have contributed to the weakening of democratic institutions around the world. One of those factors is globalization. Which has helped lift hundreds of millions out of poverty, most notably in China and India. But which, along with automation, has also upended entire economies, accelerated global inequality, and left millions of others feeling betrayed and angry at existing political institutions.
There’s the increased mobility and urbanization of modern life, which further shakes up societies, including existing family structures and gender roles. Here at home, we’ve seen a steady decline in the number of people participating in unions, civic organizations, and houses of worship. Mediating institutions that once served as a kind of glue.
Internationally, the rise of China, as well as chronic political dysfunction here in the US and in Europe, not to mention the near collapse of the global financial system in 2008, has made it easier for leaders in other countries to discount democracy’s appeal. And, as once marginalized groups demand a seat at the table, politicians have found a new audience for old fashioned appeals to racial and ethnic, religious or national, solidarity.
And in the rush to protect “us” from “them,” virtues like tolerance and respect for democratic processes start to look not just expendable, but like a threat to “our” way of life.
So if we’re going to strengthen democracy, we’re going to have to address all of these trends. We’ll have to come up with new models for a more inclusive, equitable capitalism. We’ll have to reform our political institutions in ways that allow people to feel heard and give them real agency. We’ll have to tell better stories about ourselves, and how we can live together, despite our differences.
And that’s why I’m here. Today. On Stanford’s campus, in the heart of Silicon Valley. Where so much of the digital revolution began.
Because I’m convinced right, one of the biggest impediments to doing all of this. Indeed, one of the biggest reasons for democracy’s weakening, is the profound changes that’s taken place in how we communicate and consume information.
Now, let me start off by saying, I am not a Luddite. Although, it is true, that sometimes I have to ask my daughters how to work basic functions on my phone. I am amazed by the internet. It’s connected billions of people around the world. Put the collected knowledge of centuries at our fingertips. It’s made our economies vastly more efficient, accelerated medical advances, opened up new opportunities, allowed people with shared interests to find each other.
I might never have been elected President if it hadn’t been for websites like MySpace, MeetUp, and Facebook, that allowed an army of young volunteers to organize, raise money, spread our message. That’s what elected me. And since then, we’ve all witnessed how activists have used social media platforms to register dissent, shine a light on injustice, and mobilize people on issues like climate change and racial justice.
So the internet, and the accompanying information revolution has been transformative and there’s no turning back.
But, like all advances in technology, this progress has had unintended consequences that sometimes come at a price. And, in this case, we see that our new information ecosystem is turbocharging some of humanity’s worst impulses.
To be honest, I’m a little surprised at some of the restraint he uses in not just fully blaming “the internet” or individual companies. That’s the standard thing these days, which is facile and not very helpful. He even acknowledges (correctly) that much of this is really about human nature and society, and not technology.
Not all of these effects are intentional or even avoidable. They’re simply the consequences of billions of humans suddenly plugged in to an instant 24/7 global information stream. 40 years ago, if you were a conservative in rural Texas, you weren’t necessarily offended by what was going on in San Francisco’s Castro district, because you didn’t know what was going on.
If you lived in an impoverished Yemeni village. You had no insight into the spending habits of the Kardashians. For some, such exposure may be eye-opening. Maybe even liberating. But others may experience that exposure as a direct affront to their traditions, their belief systems, their place in society.
He goes on to note that the vast amount of information that’s out there, while exposing people to more such content, also means that we often appear to have less of a “shared” culture which different people can all relate to.
Then you have the sheer proliferation of content, and the splintering of information and audiences. That’s made democracy more complicated. I’ll date myself again. If you were watching TV, here in the United States, between about 1960 and 1990, you know, “I Dream of Jeannie,” “The Jeffersons.” Chances are you were watching one of the Big Three Networks. And this had its own problems. Particularly the ways in which programming often excluded voices and perspectives of women and people of color and other folks outside of the mainstream.
But it did fortify a sense of shared culture and when it came to the news, at least, citizens across the political spectrum, tended to use a shared set of facts. What they saw, what they heard, from Walter Cronkite or David Brinkley or others.
What I appreciate about this as a lead in to the talk is that it does not do the usual thing which is just camp out in one corner and say “this is good” and “this is bad,” but rather (rightly) acknowledges that it’s a lot more nuanced and complicated than all that, and that there are serious tradeoffs to all of this.
However, he then jumps to claims about confirmation bias, which gets me a bit concerned, because it’s not actually clear that it’s true, or if it is, that it’s a useful framing. A few months back, we highlighted some really fascinating research that made me rethink my own beliefs regarding confirmation bias and so-called “filter bubbles.” That research showed that most people’s actual “echo chambers” were (as Obama alluded to just a bit earlier in his talk) actually around the people they spent time with in real life, and the internet exposed them to much more. Indeed, the idea that we’re dealing solely with confirmation bias is belied by Obama’s own words, where he talks about the people in rural Texas (for example) being challenged by learning what goes on in San Francisco. In many ways that’s the opposite of a filter bubble.
The real issue tends to come down to how we interpret these things or react to them. And in that he may be correct that there’s confirmation bias in how we process these things, but even there, I think he goes a bit too far.
Today, of course, we occupy entirely different media realities, fed directly into our phones. Don’t even need to look up. And it’s made all of us more subject to what psychologists call “confirmation bias.” The tendency to select facts and opinions that reinforce those that support our pre-existing worldviews and filter out those that don’t. So inside our personal information bubbles, our assumptions, our blindspots, our prejudices, aren’t challenged. They’re reinforced. And, naturally, we’re more likely to react negatively to those consuming different facts and opinions. All of which deepens existing racial and cultural and religious divides.
Again, this… actually strikes me as a bit of confirmation bias in itself. And, yes, it’s ironic that I’m saying those who believe that confirmation bias is the problem may be engaging in it a bit themselves. But the setup here doesn’t seem entirely accurate. He notes that people are inside their own “information bubbles,” but if that were the case, how would they even know about the people consuming other types of content?
Like that study we reported on showed, people are actually exposed to other ideas and content. The issue is not the bubble, or the lack of exposure. It’s rather the way in which people respond to those things. You can argue that there’s no real difference there, but I think there is, and it really makes a difference as you start looking for ways to push back against these ideas.
Because if the problem is just that people are not exposed to other ideas, then you would push for more exposure. But if the problem is not the lack of exposure, but rather how the exposure is processed, then suddenly it’s a much more challenging problem. And it may be a problem of who people look to in order to pre-process the ideas for them. Often the issue is not the bubble, or the confirmation bias, but rather the prism through which we interpret those ideas. Many people take the shortcut of not actually exploring the ideas carefully themselves, but rather turning to other people to do the initial chewing and to spit out their interpretation, which is then taken as gospel.
But, again, that’s a very different problem than information bubbles or echo chambers. It’s who we choose to trust in examining the information that is out there.
From there, I think he gets back on point somewhat, briefly.
It’s fair to say then, that some of the current challenges we face are inherent to a fully connected world. Our brains aren’t accustomed to taking in this much information this fast. And a lot of us are experiencing overload.
That gets back to my argument that the problem isn’t the lack of exposure, but rather how we deal with it, and whose filter we rely on to pre-process the information for us.
From there, though, Obama does move on to the “but, hey, some of this really is just bad choices by various players in the space” part of the lecture. And he focuses on the internet/social media companies, which bothers me a bit, since multiple studies keep showing that, while the internet companies are part of this issue, it’s equally an issue with traditional media companies like News Corporation, which are engaged in the kind of pre-processing I’ve talked about, whipping people into a frenzy that is often then reflected on social media, rather than the other way around. I always worry about complaints that try to single out the internet without recognizing how it fits into the wider media ecosystem.
To be clear, that’s not to say that the internet companies are blameless, because of course, they’re often terrible, and make awful, dangerous, stupid decisions. But the ecosystem matters.
I’m not going to transcribe this whole section regarding the internet companies, because it’s the usual thing — about how they focused on engagement, and how nonsense and outrage often lead to more engagement (which is only partially true — not entirely accurate, but certainly not entirely inaccurate either). He does eventually admit that the internet is not the only source of “toxic” information, and even notes that “some of the most outrageous content on the web originates from traditional media.” But like so many others, he sort of brushes that aside and focuses on how the internet is still a part of the problem — and argues that it has “accelerated” the decline of traditional media.
That’s an argument that I think is somewhat unfair, and at the very least requires a lot more nuance and context. Certainly some media organizations are doing much better, and many are doing much worse. But blaming the internet for the decline of news organizations feels somewhat unfair, because you could easily argue that it was actually those news organizations’ unwillingness to embrace what the internet allowed (and the fact that they often fought against it) that actually led to their decline. As I’ve noted countless times, the news organizations forgot that they were really in the community business, and pretended they were in the “news” business. But the internet provided many more options for communities, and so the news organizations lost their main benefit to users by not adapting and providing a better community offering (indeed, they often were actively antagonistic to trying to build community).
That is to say, this is all crazy complicated, and I think it’s often unfortunate and simplistic to focus on just the internet component of it. As I’ve pointed out over and over and over again (I know, I know…) it seems pretty clear that there are a number of factors at play. Some of the “bad stuff” we see online is just the internet shining a light on bad stuff that always existed, but was hidden. And sometimes shining a light helps us deal with that as a society, but sometimes it might alert others to those bad things and attract them to it. So the internet can in some cases more widely (or more quickly) spread ideas that lead to bad or dangerous behavior. But it can also do the reverse. And frequently does.
And distinguishing one from the other is a massive challenge. And just focusing on one part, yet again, doesn’t really solve anything, because of the unfortunately high likelihood of doing damage to all of the good stuff the internet has enabled as well. This is a massive and inherently complex challenge, that doesn’t boil down to easy solutions.
And that’s where Obama’s speech again kind of flutters out to nothing. This isn’t a surprise, because there is no easy solution, and if Obama had figured out “the answer” he would be the first to do so. But it does feel that his ideas, somewhat like that Aspen Institute report, are not really answers at all. He doesn’t give much direction beyond “well, something needs to be done.” And, unfortunately, when you’re at the point where you’re saying “something should be done,” there’s a much bigger risk that those in charge choose very bad ideas to bank on, simply because they’re “something.”
He notes, correctly, that we can (and should) embrace the “spirit of innovation” to try to fix the “bugs in the software” that may have contributed to the state of the world today. And that I agree with, but so do tons of people at all of these companies, and no one’s quite figured out the best approach yet. And, certainly, whenever they do, they get screamed at by those who have embraced the spewing of disinformation, claiming that any attempt to fix disinformation — including by adding more information — is (ridiculously) an attack on free speech.
Obama does give a hearty endorsement to the “marketplace of ideas” and the 1st Amendment, claiming he considers himself to be nearly a free speech absolutist, and saying that he believes in most cases the response to bad speech should be good speech. That’s good to hear. He also notes (correctly, and as we’ve tried to remind people dozens of times) that the 1st Amendment applies to governments, not private companies, who have their own rights to moderate as they see fit.
And (hallelujah!) he’s one of the first politicians I’ve seen comment on this stuff to publicly note the impossibility of content moderation.
Any rules we come up with to govern the distribution of content on the internet, will involve value judgments. None of us are perfectly objective. What we consider an unshakeable truth today, may prove to be totally wrong tomorrow.
But… then his “guiding principles” for content moderation leave a lot to be desired:
- Does it strengthen or weaken the prospects for a healthy, inclusive democracy?
- Does it encourage robust debate and respect for our differences?
- Does it reinforce rule of law and self-governance?
- Does it make decisions on the best available information?
- Does it recognize the rights and dignities of all our citizens?
And, well, sure. That all sounds great. Fantastic. But, you know, the devil’s in the details, and pretty much everyone involved in any of these debates, even those with terrible, awful policies, will claim that their proposals meet those criteria. So my problem with that list is that… it doesn’t help us at all. It doesn’t move us forward. It just sounds nice.
Also, those principles… aren’t really good principles for content moderation writ large. They’re interesting, if not very practical, guideposts for one small aspect of content moderation: how you handle debates about politics. But that’s a frighteningly tiny part of content moderation.
From there, it falls back on a lot of tropes mixed with accurate statements, but again it’s not very helpful. He rightly acknowledges that malicious actors will always play right up to the line of what’s allowed, to push the boundaries as far as they can go, but then almost immediately complains that tech platforms are not transparent enough about their moderation practices. Those two things are in conflict. The more transparent you are, the more the malicious actors learn to game the system. It’s complicated.
He then says that the companies need “some level” of public oversight and regulation, but fails to suggest what would actually work, or what would be constitutional (which is a challenge if you’re talking about regulating people’s speech). He then brings up Section 230, saying that while he’s not convinced “wholesale repeal of 230” is the answer, he still thinks it should be reformed because tech companies have changed so much over the past twenty years.
But he doesn’t say how, other than suggesting that companies should be subject to “a higher standard of care for advertisements on their site.” That phrase must have been wordsmithed to death, because it’s nowhere near as specific as he makes it out to be. It kind of hints at a “duty of care” or a “reasonableness” standard (which have serious problems and potentially dangerous consequences), but then kind of shifts that to say… “for advertisements.” Which… what does that even mean? None of the rest of the talk about disinformation was about advertisements, so it feels sort of shoved in here in a way that doesn’t quite make sense.
Either way, regulating some sort of “duty of care” would still create huge headaches — not so much for the biggest companies (again, Facebook and Google would be able to survive, and even thrive, in such a world) but such a change would be a complete disaster for smaller platforms. As I wrote in my piece about how those who don’t understand Section 230 are doomed to repeal it, when you think that changing Section 230 will somehow limit the biggest companies, you are effectively repealing Section 230 and giving those companies more power.
Because the biggest companies can handle the legal liability. They have buildings full of lawyers on staff. It’s the smaller companies who can’t, and who will be forced to litigate over and over again as to whether or not they’ve lived up to that “higher standard of care.” And at that point it becomes not just easier, but the only legally responsible move… to just outsource the control of such things on your site to the giants like Google and Facebook. Because they’ll deal with the legal liability.
And no one should want that. No one should be advocating for a position that effectively gives Google and Facebook more power and more control.
From there, his suggestions are, again, incredibly vague. He talks about “effective” regulation, put together in consultation with lots of experts, but provides no suggestions on what that would actually look like, how you’d avoid regulatory capture or the other kinds of permission-blocking that have doomed lots of innovation, let alone the issues described above that would effectively stomp out smaller competitors.
He moves on to transparency, using what he admits may sound like a strange analogy: comparing it to the processes meat packing companies use to package meat, and how they have to reveal them to inspectors. Which sounds like a nice analogy until you realize two things: (1) speech isn’t meat, and meat doesn’t have a Constitutional amendment protecting it, and (2) “harm” in these contexts is extremely different. It’s not difficult to establish the factors that can lead to foodborne diseases. It’s much, much, much, much more difficult to make a judgment about speech (and Constitutionally questionable).
These are ideas that sound good at a surface level, but when you actually explore what they mean, they get ridiculously tricky.
And, whenever he gets into specifics, you begin to get a sense of how little the speech actually engages with these complexities. For example, towards the end, he invokes the Fairness Doctrine as an example of how we’ve tackled these kinds of challenges before. While he then immediately notes that we can’t “go back to the way things were,” he still seems to suggest that things like the Fairness Doctrine were sensible.
After World War II, after witnessing how mass media and propaganda could fan the flames of hate, we put a framework in place that would ensure our broadcast system was compatible with democracy. We required a certain amount of children’s educational programming. Instituted the Fairness Doctrine. Newsrooms changed practices to maximize accuracy.
And the task before us is harder now. We can’t go back to the way things were, with three TV stations and newspapers in every major city. Not just because of the proliferation of content, but because that content can now move around the world in an instant.
Except, that actually helps demonstrate the problems here. The Fairness Doctrine didn’t work the way its backers hoped. It resulted in the suppression of important stories, because to air them might require finding opposing viewpoints in some form or another. It also led to certain voices being suppressed entirely — something you could do back when there were only three major TV stations. And we won’t get into the fact that most of these ideas, including the Fairness Doctrine, were only Constitutional because they involved publicly owned and licensed spectrum.
But, as he notes, we can’t go back to that world, and the lesson we should take from things like the Fairness Doctrine is not that if we just nerd harder we’ll figure out the solution, but rather that we should be careful in cooking up ideas that sound good in a laboratory but don’t actually work in practice, especially when they set up a plan that will suppress speech.
So, in the end, the speech is not bad. He makes many good points about all of the problems the world is facing. He certainly speaks carefully about the different tradeoffs in a way that is rare, and which I appreciate. But, like the Aspen Institute report he recommends, there’s a lot of “you gotta do something,” but when we start to dig into what that “something” is, we’re left holding an empty bag.
That’s not to say there was much else that he could have said. The simple fact is that no one has figured out yet exactly how to solve these problems. And maybe speeches like Obama’s galvanize people to actually explore these issues and tradeoffs and nuances more deeply. And that would be great. But parts of the speech also demonstrate the risks inherent in all of this as well — suggesting that perhaps there are some simple solutions that will magically fix things, when at this point it’s clear that’s just not true.
Filed Under: barack obama, democracy, disinformation, internet, section 230, society, speech, transparency