Our Bipolar Free-Speech Disorder And How To Fix It (Part 1)
from the free-speech-and-social-media dept
When we debate how to respond to complaints about social media and internet companies, the discussion tends to break down into two sides. On one side, typically, are those who argue that it ought to be straightforward for companies to monitor (or censor) more problematic content. On the other are people who insist that the internet and its forums and platforms (including the large, dominant ones like Facebook and Twitter) have become central channels for exercising freedom of expression in the 21st century, and that we shouldn’t risk that freedom by forcing the companies to be monitors or censors, not least because they’re guaranteed to make as many lousy decisions as good ones.
By reflex and inclination, I have usually fallen into the latter group. But after a couple of years of watching various slow-motion train wrecks centering on social media, I think it’s time to break out of the bipolar disorder that afflicts our free-speech talk. Thanks primarily to a series of law-review articles by Yale law professor Jack Balkin, I now believe free-speech debates can no longer be reduced to government versus people, companies versus people, or government versus companies. No “bipolar” view of free speech on the internet is going to give us complete answers, and it’s more likely than not to give us wrong answers, because today speech on the internet isn’t really bipolar at all; it’s an “ecosystem.”
Sometimes this is hard for civil libertarians, particularly Americans, to grasp. The First Amendment (like analogous free-speech guarantees in other democracies) tends to reduce every free-speech or free-press issue to people versus government. The people spoke, and the government sought to regulate that speech. By its terms, the First Amendment is directed solely at restraining government impulses to censor, whether aimed at (a) publishers’ right to publish controversial content or (b) individual speakers’ right to speak controversial content. This is why First Amendment cases most commonly are named either with the government as a listed party (e.g., Chaplinsky v. New Hampshire) or with a government official, acting in his or her official role, as a named party (e.g., Attorney General Janet Reno in Reno v. ACLU).
But in some sense we’ve always known that this model is oversimplified. Even cases in which the complainant was nominally a private party still involved government action in the form of enactment of the speech-restrictive laws that gave rise to the complaint. In New York Times Inc. v. Sullivan, the plaintiff, Sullivan, was a public official, but his defamation case against the New York Times was grounded in his reputational interest as an ordinary citizen. In Miami Herald Publishing Company v. Tornillo, plaintiff Tornillo was a citizen running for a state-government office who invoked a state-mandated “right of reply” because he wanted to compel the Herald to print his responses to editorials that were critical of his candidacy. In each of these cases, the plaintiff’s demand did not itself represent a direct exercise of government power; the private plaintiffs’ complaints were personal to them. Nevertheless, in each case the role of government (in protecting reputation as a valid legal interest, and in providing a political candidate a right of reply) was deemed by the Supreme Court to represent an exercise of governmental power. For this reason, the Court concluded that these cases, despite their superficial focus on a private plaintiff’s cause of action, nonetheless fell within the scope of the First Amendment. Both newspaper defendants won their Supreme Court appeals.
By contrast, private speech-related disputes between private entities, such as companies or individuals, normally are not judged as directly raising First Amendment issues. In the internet era, if a platform like Facebook or Twitter chooses to censor content or deny service to a subscriber because of (an asserted) violation of its Terms of Service, or if a platform like Google chooses to delist a website that offers pharmaceutical drugs in violation of U.S. law or the law of other nations, any subsequent dispute is typically understood, at least initially, as a disagreement that does not raise First Amendment questions.
But the intersection between governmental action and private platforms and publishers has become both broader and blurrier in the course of the last decade. Partly this is because some platforms have become primary channels of communication for many individuals and businesses, and some of these platforms have become dominant in their markets. It is also due in part to concern about various ways the platforms have been employed with the goal of abusing individuals or groups, perpetrating fraud or other crimes, generating political unrest, or causing or increasing the probability of other socially harmful phenomena (including disinformation such as “fake news”).
To some extent, the increasing role of internet platforms, including but not limited to social media such as Facebook and Twitter in Western developed countries, as one of the primary media for free expression was predictable. (For example, in Cyber Rights: Defending Free Speech in the Digital Age (Times Books, 1998), I wrote this: “Increasingly, citizens of the world will be getting their news from computer-based communications – electronic bulletin boards, conferencing services, and networks – which differ institutionally from traditional print media and broadcast journalism.” See also “Net Backlash = Fear of Freedom,” Wired, August 1995: “For many journalists, ‘freedom of the press’ is a privilege that can’t be entrusted to just anybody. And yet the Net does just that. At least potentially, pretty much anybody can say anything online – and it is almost impossible to shut them up.”)
What was perhaps less predictable, prior to the rise of market-dominant social-media platforms, is that government demands regarding content may result in “private governance,” in which market-dominant companies become the agents of government demands but implement those demands less transparently than enacted legislation or recorded court cases do. This has meant that individual citizens concerned about exercising their freedom of expression in the internet era may find that exercising the option to “exit” (in the Albert O. Hirschman sense) imposes great costs.
At the same time, lack of transparency about platform policy (and private governance) may make it difficult for individual speakers to determine what laws or policies lie behind the censorship of their content (or the exclusion of themselves or others) in ways that would enable them to give effective “voice” to their complaints. For example, they may infer that the censorship or “deplatforming” they experience reflects a political preference that has the effect of “silencing” their dissident views, which in a traditional public forum would clearly be understood as protected by First Amendment-grounded free-speech principles.
These perplexities, and the current public debates about freedom of speech on the internet, create the need to reconsider internet free speech not as a simplistic dyad, or as a set of simplistic, self-contained dyads, but instead as an ecosystem in which decisions in one part may well lead to unexpected, undesired effects in other parts. A better approach would be to consider internet freedom of expression “ecologically,” treating expression on the internet as an “ecosystem,” and to weigh the various legal, regulatory, policy, and economic choices as “free-speech environmentalists,” with the underlying goal of protecting the internet free-speech ecosystem in ways that protect individuals’ fundamental rights.
Of course, individuals have fundamental rights beyond freedom of expression. Notably, there is an international consensus that individuals deserve, inter alia, some kind of right to privacy, although, as with expression, there is some disagreement about what the scope of privacy rights should be. But changing the consensus paradigm of freedom of expression so that it is understood as an ecosystem will not only improve law, regulation, and policy regarding free speech, but also provide a model that may prove fruitful in other areas, such as privacy.
In short, we need a theory of free speech that takes into account complexity. We need to build consensus around that theory so that stakeholders with a wide range of political beliefs nevertheless share a commitment to the complexity-accommodating paradigm. In order to do this, we need to begin with a taxonomy of stakeholders. Once we have the taxonomy, we need to identify how the players interact with one another. And ultimately we need some initiatives that suggest how we may address free-speech issues in ways that are not shortsighted, reactive, and reductive, but forward-looking, prospective, and inclusive.
The internet ecosystem: a taxonomy.
Fortunately, Jack Balkin’s recent series of law-review articles has given us a head start on building that theory, outlining the complex relationships that now exist among citizens, government actors, and companies that function as intermediaries. These paradigm-challenging articles culminate in a synthesis reflected in his 2018 law-review article “Free Speech is a Triangle.”
Balkin rejects simple dyadic models of free speech. Because an infographic is sometimes worth 1000 words, it may be most convenient to reproduce Balkin’s diagram of what he refers to as a “pluralistic” (rather than “dyadic”) model of free speech. Here it is:
Balkin recognizes that the triangle may be taken as oversimplifying the character of particular entities within any set of parties at a “corner.” For example, social-media platforms are not the same as payment systems, which aren’t the same as search engines or standard-setting organizations. Nevertheless, entities in any given corner may have roughly the same interests and play roughly the same roles. End-users are not the same as “Legacy Media” (e.g., the Wall Street Journal or the Guardian), yet both may be subject to “private governance” from internet platforms or subject to “old-school speech regulation” (laws and regulation) imposed by nation-states or treaties. (“New-school speech regulation” may arise when governments compel or pressure companies to exercise speech-suppressing “private governance.”)
Certainly some entities within this triangularized model may be “flattened” in the diagram in ways that don’t reveal the depth of their relationships to other parties. For example, a social-media company like Facebook may collect vastly more data (and use it in far more unregulated ways) than a payment system (and certainly far more than a standard-setting organization). Balkin addresses the problem of Big Data collection by social-media companies and others (including the issue of how Big Data may be used in ways that inhibit or distort free speech) by suggesting that such data-collecting companies be considered “information fiduciaries” with obligations that may parallel or be similar to those of more traditional fiduciaries such as doctors and lawyers. (He has developed this idea further in separate articles both sole-authored and co-authored with Jonathan Zittrain.)
Properly speaking, the information-fiduciary paradigm maps more clearly to privacy interests than to free-expression interests, and it may not seem directly relevant to content issues. But the collection, maintenance, and use of large amounts of user data can bear on free speech as well: the paradigm becomes indirectly relevant if the information fiduciary (possibly, but not always, at the behest of government) uses that data to try to manipulate users through content, or discloses users’ content choices to government, for example.
In addition, information fiduciaries functioning as social-media platforms have a different relationship with their users, who create the content that makes these platforms attractive. In the traditional world of newspapers and radio, publishers had a close, voluntary relationship with the speakers and writers who created their content, which meant that traditional-media entities had strong incentives to protect their creators generally. To a large degree, publisher and creator interests were aligned, although there were predictable frictions, as when a newspaper’s or broadcaster’s advertisers threatened to withdraw financial support over controversial speakers and writers.
With online platforms, that alignment is much weaker, if it exists at all: platforms lack incentives to fight for their users’ content, and indeed may have incentives to censor it themselves for private profit (e.g., advertising dollars). In the same way that the traditional legal, financial, or medical fiduciary relationship is necessary to correct possible misalignment of incentives, the “information fiduciary” relationship ought to be imposed on platforms to correct their misaligned incentives toward private censorship. In a strong sense, the information-fiduciary concept is key to understanding why a new speech framework is arguably necessary, and how it might work.
I’ve written elsewhere about how Balkin’s concept of social-media companies (and others) as information fiduciaries might actually position the companies to be stronger and better advocates of free expression and privacy than they are now. But that’s only one piece of the puzzle when it comes to thinking ecologically about today’s internet free-speech issues. The other pieces require us to think about the other ways in which “bipolar thinking” about internet free speech not only causes us to misunderstand our problems but also tricks us into coming up with bad solutions. And that’s the subject I’ll take up in Part 2.
Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.