Mike Godwin's Techdirt Profile


Posted on Techdirt - 29 July 2020 @ 01:49pm

Former Rep. Chris Cox Used His Testimony At Tuesday's Senate Hearing On The Internet's Foundational Law To Do Some Myth-Busting

Whenever internet-law experts see a new Congressional hearing scheduled whose purpose is to explore whether Section 230—a federal statute that’s widely regarded as a foundational law of the internet—needs to be amended or repealed, we shudder. That’s because we know from experience that even some of the most thoughtful and conscientious lawmakers have internalized broken notions about Section 230, including the idea that this statute is responsible for everything that bothers us about today’s internet.

That’s why Tuesday’s Senate hearing about Section 230 was, in its own way, much more calming than earlier hearings on the law have been. Each of the four witnesses had substantive knowledge to share, and even if some witnesses were wrong (at least in my view) on this or that fine point, none of them was grandstanding or (as has often been the case in the past) unwittingly or intentionally deceptive about what might be wrong with 230. Each more or less acknowledged that the law really does contain what cyberlaw professor Jeff Kosseff, in his book of the same name, has aptly characterized as “The Twenty-Six Words That Created the Internet.” Even Professor Olivier Sylvain of Fordham Law School, who believes Section 230’s protections to be “ripe for narrowing,” focused on the courts’ role in interpreting the statute rather than Congress’s role in possibly amending it. Unlike at some earlier hearings, none of the witnesses called for repeal.

Kosseff, a faculty member at the U.S. Naval Academy in Annapolis, was himself one of the witnesses on Tuesday’s panel, which was convened by the Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet. But even though the hearing’s title was inspired by Kosseff’s book, it was former Representative Chris Cox, now a partner at the Morgan, Lewis & Bockius law firm and a board member at the tech lobbying group NetChoice, who was the star. In the 1990s, Representative Cox was an author and co-sponsor (with then-Representative, now Senator, Ron Wyden) of the bill that became Section 230. Having him as a witness on Tuesday’s panel was a bit like having James Madison show up to testify about what he was thinking when he wrote the Bill of Rights.

Cox’s testimony spotlighted the ways in which the legal immunities built into Section 230 in 1995—immunities that generally shield internet companies from liability for content created by users and subscribers—had given rise to the transformational effect those companies have had on the world of 2020. Just as important, Cox pointed out in his written testimony that the law does not shield service providers who created illegal or tortious content—“in whole or in part”—from legal liability:

Section 230 was written, therefore, with a clear fact-based test:

  • Did the person create the content? If so, that person is liable for any illegality.
  • Did someone else create the content? Then that someone else is liable.
  • Did the person do anything to develop the content created by another, even if only in part? If so, the person is liable along with the content creator.

Cox explained that this approach was aimed to accommodate the realities of being an online service provider but not to allow service providers that are clearly responsible for a crime or civil wrong to be immunized by the statute:

“Rep. Wyden and I knew that, in light of the volume of content that even in 1995 was crossing most internet platforms, it would be unreasonable for the law to presume that the platform will screen all material. We also well understood the corollary of this principle: if in a specific case a platform actually did review material and edit it, then there would be no basis for assuming otherwise. As a result, the plain language of Section 230 deprives such a platform of immunity.”

Cox used this portion of his written testimony to debunk what he called certain “myths” about Section 230—the first and most obvious of which is that Section 230 immunizes “websites that knowingly engage in, solicit, or support illegal activity.” Wrote Cox: “It bears repeating that Section 230 provides no protection for any website, user, or other person or business involved even in part in the creation or development of content that is tortious or criminal.”
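For readers who think in code, here is a minimal, purely illustrative sketch of that fact-based test in Python. The names and data structures are hypothetical simplifications, and nothing here is legal advice or a restatement of the statute’s text; it simply traces the three questions Cox lays out.

```python
# Purely illustrative sketch of the three-prong test above; not legal advice.
# The Content fields and party names are hypothetical simplifications.

from dataclasses import dataclass


@dataclass(frozen=True)
class Content:
    creator: str                  # the person who created the material
    co_developers: frozenset      # anyone who helped develop it, even in part


def liable_parties(content: Content) -> set:
    """Liability follows creation: the creator is liable, and so is anyone
    who developed the content 'in whole or in part'."""
    return {content.creator} | set(content.co_developers)


def platform_is_immune(platform: str, content: Content) -> bool:
    """The platform keeps its immunity only if it had no hand in creating
    or developing the content."""
    return platform not in liable_parties(content)


# A user posts unlawful content the platform did not help write:
user_post = Content(creator="user123", co_developers=frozenset())
assert platform_is_immune("example-platform", user_post)          # immune

# The platform helped develop the content, even if only in part:
joint_post = Content(creator="user123",
                     co_developers=frozenset({"example-platform"}))
assert not platform_is_immune("example-platform", joint_post)     # liable
```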

Another of these myths had to do with the idea that 230’s purpose was to set up separate legal rules for internet services that don’t apply in the outside world. Cox insists, however, that Section 230 simply extended to the online world the protections brick-and-mortar enterprises already had, in terms of not being liable for content they didn’t create in whole or in part. (For example, if I slander someone in a restaurant, the restaurant’s proprietor shouldn’t be held liable for my using his premises to defame someone. I look forward to testing this principle when we’re all going out to restaurants again.)

Other creation myths included the idea that Section 230 was designed just to protect “an infant industry” (so is no longer necessary now that the industry is old enough to vote), or the idea that it was a favor to the tech industry (Cox says the tech companies in the 1990s mostly didn’t know enough to lobby for the provision—or else didn’t even exist then), or the idea that it was part of a “grand bargain” to help then-Senator James Exon pass his anti-porn legislation, then mostly known as the Communications Decency Act. With regard to that last theory, Cox explains that his and Wyden’s draft was “deliberately crafted as a rebuke” to Senator Exon’s approach to online porn. If service providers were going to make the world’s information available to users, Cox and Wyden reasoned, there was no way that any of the services could effectively be responsible for the “indecent” content in libraries and elsewhere that might show up on users’ screens.

The real reason Section 230 was included with Senator Exon’s Communications Decency Act language had to do with the politics of the conference committee that had to work out differences between the House and Senate versions of the Telecommunications Act of 1996. The Cox-Wyden provision was in the House version, but an overwhelming majority of senators had voted for the CDA in the Senate version. Harmonizing the two opposing provisions had some interesting consequences, as Cox’s testimony points out:

When the House and Senate met in conference on the Telecommunications Act, the House conferees sought to include Cox-Wyden and strike Exon. But political realities as well as policy details had to be dealt with. There was the sticky problem of 84 senators having already voted in favor of the Exon amendment. Once on record with a vote one way—particularly a highly visible vote on the politically charged issue of pornography—it would be very difficult for a politician to explain walking it back. The Senate negotiators, anxious to protect their colleagues from being accused of taking both sides of the question, stood firm. They were willing to accept Cox-Wyden, but Exon would have to be included, too. The House negotiators, all politicians themselves, understood. This was a Senate-only issue, which could be easily resolved by including both amendments in the final product. It was logrolling at its best.

“Perhaps part of the enduring confusion about the relationship of Section 230 to Senator Exon’s legislation has arisen from the fact that when legislative staff prepared the House-Senate conference report on the final Telecommunications Act, they grouped both Exon’s Communications Decency Act and the Internet Freedom and Family Empowerment Act into the same legislative title. So the Cox-Wyden amendment became Section 230 of the Communications Decency Act—the very piece of legislation it was designed to counter. Ironically, now that the original CDA has been invalidated, it is Ron’s and my legislative handiwork that forever bears Senator Exon’s label.”

Cox’s explanation should put to rest forever the myth that the Supreme Court’s decision in Reno v. ACLU (1997), when it struck down all other provisions of the Communications Decency Act as unconstitutional, left Section 230 behind as an incomplete fragment, meaningless or dysfunctional when standing on its own. As Cox’s written testimony makes clear, Section 230 was originally crafted as a standalone statute whose purpose was to negate the effect of Stratton Oakmont v. Prodigy (1995)—a case whose judge drastically misread both prior caselaw and the facts of the case he decided—and to restore something like the state of online-services law as it was understood after a federal court’s influential 1991 decision in Cubby v. CompuServe.

One of the unfortunate aspects of Tuesday’s hearing is that Cox’s lengthy first-person account and massive debunking of common myths about Section 230 weren’t heard by most of the senators or by the viewers who only watched the hearing online. In “person” (Cox, like the other witnesses, was beamed in via a teleconferencing system that I presume was Zoom), the former congressman departed from his written remarks to remind his audience that, among other things, Section 230 gave us Wikipedia, a free resource hosted by the Wikimedia Foundation that most of us in the developed world turn to every day. This is something I wish more legislators would remember—that Wikipedia depends on Section 230 to exist in its current form and usefulness. Full disclosure: I spent a few years as general counsel and later outside counsel doing work for the Wikimedia Foundation. And, just like any other lawyer who has worked to protect a highly valued online service, I can testify that we depended on Section 230 a lot.

Still another unfortunate aspect is that Kosseff’s and Sylvain’s contributions, as well as those of the Internet Association’s deputy general counsel, Elizabeth Banker, were somewhat eclipsed both by Cox’s written testimony and by his live testimony as one of the two fathers of “the twenty-six words that created the internet.” But these tradeoffs were a small price to pay in order to spend so much of Tuesday morning getting myths busted and truths told. Even as someone who’s been dealing with Section 230 for almost as long as Cox has, I can say truthfully that I learned a lot.

Posted on Techdirt - 27 May 2020 @ 01:00pm

In Search Of A Grand Unified Theory Of Free Expression And Privacy

Every time I ask anyone associated with Facebook’s new Oversight Board whether the nominally independent, separately endowed tribunal is going to address misuse of private information, I get the same answer—that’s not the Board’s job. This means that the Oversight Board, in addition to having such an on-the-nose proper name, falls short in a more important way—its architects imagined that content issues can be tackled substantively without addressing privacy issues. Yet surely the scandals that have plagued Facebook and some other tech companies in recent years have shown us that private-information issues and harmful-content problems have become intimately connected.

We can’t turn a blind eye to this connection anymore. We need the companies, and the governments of the world, and the communities of users, and the technologists, and the advocates, to unite behind a framework that emphasizes the deeper-than-ever connection between privacy problems and free-speech problems.

What we need most now, as we grapple more fiercely with the public-policy questions arising from digital tools and internet platforms, is a unified field theory—or, more properly, a “Grand Unified Theory” (a.k.a. “GUT”)—of free expression and privacy.

But the road to that theory is going to be hard. From the beginning three decades ago, when digital civil liberties emerged as a distinct set of issues that needed public-policy attention, the relationship between freedom of expression and personal privacy in the digital world has been a bit strained. Even the name of the first big conference to bring together all the policy people, technologists, government officials, hackers, and computer cops reflected the tension. The first Computers, Freedom and Privacy conference, held in Burlingame, California, in 1991, made sure that attendees knew that “Privacy” was not just a kind of “Freedom” but its own thing that deserved its own special attention.

The tensions emerged early on. It seemed self-evident to most of us back then that freedom of expression (along with freedom of assembly and freedom of inquiry) had to have some limits—including limits on what any of us could do with private information about other people. But while it’s conceptually easy to define in fairly clear terms what counts as “freedom of expression,” the consensus about what counts as a privacy interest is murkier. Because I started out as a free-speech guy, I liked the law-school-endorsed framework of “privacy torts,” which carved out some fairly narrow privacy exceptions to the broad guarantees of expressive freedom. That “privacy torts” setup meant that, at least when we talked about “invasion of privacy,” I could say what counted as such an invasion and what didn’t. Privacy in the American system was narrow and easy to grasp.

But this wasn’t the universal view in the 1990s, and it’s certainly not the universal view in 2020. In the developed world, including the developed democracies of the European Union, the balance between privacy and free expression has been struck in a different way. The presumptions in the EU favor greater protection of personal information (and related interests like reputation) and somewhat less protection of freedom of expression. Sure, the international human-rights source texts like the Universal Declaration of Human Rights (in Article 19) may protect “freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media regardless of frontiers.” But ranked above those informational rights (in both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights) is the protection of private information, correspondence, “honor,” and reputation. This different balance is reflected in European rules like the General Data Protection Regulation.

The emerging international balance, driven by the GDPR, has created new tensions between freedom of expression and what we loosely call “privacy.” (I use quotation marks because the GDPR regulates not just the use of private information but also the use of “personal” information that may not be private—like old newspaper reports of government actions to recover social-security debts, which was the issue in the leading “right to be forgotten” case prior to the GDPR.) Standing by itself, the emerging international consensus doesn’t provide clear rules for resolving those tensions.

Don’t get me wrong: I think the idea of using international human-rights instruments as guidance for content approaches on social-media platforms has its virtues. The advantage is that it gives the companies as strong a defense as one might wish, in international forums and tribunals, for allowing some (presumptively protected) speech to stay up in the face of criticism and for removing some (arguably illegal) speech. The disadvantages are harder to grapple with. Countries will differ on what kind of speech is protected, but the internet does not quite honor borders the way some governments would like. (Thailand’s lèse-majesté law is a good example.) In addition, some social-media platforms may want to create environments that are more civil, or child-friendly, or whatever, which will entail more content-moderation choices and policies than human-rights frameworks would normally allow. Do we want to say that Facebook or Google *can’t* do this? That Twitter should simply be forbidden to tag a presidential tweet as “unsubstantiated”? Some governments and other stakeholders would disapprove.

If a human-rights framework doesn’t resolve the free-speech/privacy tensions, what could? Ultimately, I believe that the best remedial frameworks will involve multistakeholderism, but I think they also need to begin with a shared (consensus) ethical framework. I present the argument in condensed form here: "It’s Time to Reframe Our Relationship With Facebook.” (I also published a book last year that presents this argument in greater depth.)

Can a code of ethics be a GUT of free speech and privacy? I don’t think it can, but I do think it can be the seed of one. But it has to be bigger than a single company’s initiative—which, more or less, is the best we can reasonably hope Facebook’s Oversight Board (assuming it sets out ethical principles as a product of its work on content cases) will ever be. I try not to be cynical about Facebook, which has plenty of people working on these issues who genuinely mean well, and who are willing to forgo short-term profits to put better rules in place. While it’s true at some sufficiently high level that the companies privilege profits over public interest, the fact is that once a company is market-dominant (as Facebook is), it may well trade off short-term profits as part of a grand bargain with governments and regulators. Facebook is rich enough to absorb the costs of compliance with whatever regimes the democratic governments come up with. (A more cynical reading of Zuckerberg’s public writings in the wake of the company’s various controversies is that he wants the governments to get the rules in place, and then FB will comply, as it can afford to do better than most other companies, and then FB’s compliance will be a defense against subsequent criticism.)

But the main reason I think reform has to come in part at the industry level rather than at the company level, is that company-level reforms, even if well-intended, tend to instantiate a public-policy version of Wittgenstein’s "private language" problem. Put simply, if the ethical rules are internal to a company, the company can always change them. If they’re external to a company, then there’s a shared ethical framework we can use to criticize a company that transgresses the standards.

But we can’t stop at the industry level either—we need governments and users and other stakeholders to be able to step in and say to the tech industries that, hey, your industry-wide standards are still insufficient. You know that industry standards are more likely to be adequate and comprehensive when they’re buttressed both by public approval and by law. That’s what happened with medical ethics and legal ethics—the frameworks were crafted by the professions but then recognized as codes that deserve to be integrated into our legal system. There’s an international consensus that doctors have duties to patients (“First, do no harm”) and that lawyers and other professionals have “fiduciary duties” to their clients. I outline how fiduciary approaches might address Big Tech’s consumer-trust problems in a series of Techdirt articles that begins here.

The “fiduciary” code-of-ethics approach to free-speech and privacy problems for Big Tech is the only way I see of harmonizing digital privacy and free-speech interests in a way that will leave most stakeholders satisfied (as most stakeholders are now satisfied with medical-ethics frameworks and with lawyers’ obligations to protect and serve their clients). Because lawyers and doctors are generally obligated to tell their clients the truth (or, if for some reason they can’t, to end the relationship and refer the clients to other practitioners), and because they’re also obligated to “do no harm” (e.g., by not using personal information in a manipulative way and by not violating clients’ privacy or autonomy), these professions already have a Grand Unified Theory that protects both speech and privacy in the context of clients’ relationships with practitioners.

Big Tech has a better shot at resolving the contradictory demands on its speech and privacy practices if it aspires to do the same, and if it embraces an industry-wide code of ethics that is acceptable to users (who deserve client protections even if they’re not paying for the services in question). Ultimately, if the ethics code is backed by legislators and written into the law, you have something much closer to a Grand Unified Theory that harmonizes privacy, autonomy, and freedom of expression.

I’m a big booster of this GUT, and I’ve been making versions of this argument for a while now. (Please don’t call it the “Godwin Unified Theory”—having one “law” named after me is enough.) But here in 2020 we need to do more than argue about this approach—we need to convene and begin to hammer out a consensus about a systematic, harmonized approach that protects human needs for freedom of expression, for privacy, and for autonomy that’s reasonably free of psychological-warfare tactics of informational manipulation. The issue is not just false content, and it’s not just personal information—open societies have to incorporate a fairly high degree of tolerance for unintentionally false expression and for non-malicious or non-manipulative disclosure or use of personal information. But an open society also needs to promote an ecosystem—a public sphere of discourse—in which neither the manipulative crafting of deceptive and destructive content nor the manipulative targeting of it based on our personal data is the norm. That’s an ecosystem that will require commitment from all stakeholders to build—a GUT based not on gut instincts but on critical rationalism, colloquy, and consensus.

Posted on Techdirt - 7 March 2019 @ 01:37pm

A Book Review Of Code And Other Laws Of Cyberspace

Twenty years ago, Larry Lessig published the original version of his book Code and Other Laws of Cyberspace. A few years later, he put out a substantially updated version called Code 2.0. Both versions are classics and important pieces of the history of the internet — and are especially interesting to look at now that issues of how much “code” is substituting for “law” have become central to so many debates. When the original book was published, in 1999, Mike Godwin wrote a review for a long-defunct journal called E-Commerce Law Weekly. Given the importance of these issues today, we’re republishing a moderately updated version of Godwin’s original 1999 review. It’s interesting to view this review through the lens of the past 20 years of history that we now have lived through.

Imagine that you could somehow assemble the pioneers of the Internet and the first political theorists of cyberspace in a room and poll them as to what beliefs they have in common. Although there would be lots of heated discussion and no unanimity on any single belief, you might find a majority could get behind something like the following four premises:

  1. The Internet does not lend itself to regulation by governments.
  2. The proper way to guarantee liberty is to limit the role of government and to prevent government from acting foolishly with regard to the Internet.
  3. The structure of the Internet—the “architecture” of cyberspace, if you will—is politically neutral and cannot easily be manipulated by government or special interests.
  4. The expansion of e-commerce and the movement of much of our public discourse to the online world will increase our freedom both as citizens and as consumers.

But what if each of these premises is at best incomplete and at worst false or misleading? (Leave aside the likelihood that they’re not entirely consistent with one another.) What if the architecture of the Net can be changed by government and the dynamism of e-commerce? What if the very developments that enhance electronic commerce also undermine political freedom and privacy? The result might be that engineers and activists who are concerned about preserving democratic values in cyberspace have been focusing their efforts in the wrong direction. By viewing governmental power as the primary threat to liberty, autonomy, and dignity, they’d blind themselves to the real threats—threats that it may require government to block or remedy.

It is precisely this situation in which Harvard law professor Lawrence Lessig believes we find ourselves. In his new book Code and Other Laws of Cyberspace (Basic Books, 1999), Lessig explores at length his thesis that the existing accounts of the political and legal framework of cyberspace are incomplete and that their very incompleteness may prevent us from preserving the aspects of the Internet we value most. Code is a direct assault on the libertarian perspective that informs much Internet policy debate these days. What’s more, Lessig knows that he’s swimming against the tide here, but he nevertheless takes on in Code a project that, although focused on cyberspace, amounts to nothing less than the relegitimization of the liberal (in the American sense) philosophy of government.

It is a measure of Lessig’s thoroughness and commitment to this project that he mostly succeeds in raising new questions about the proper role of government with regard to the Net in an era in which, with the exception of a few carveouts like Internet gambling and cybersquatting, Congress and the White House have largely thrown up their hands when it comes to Internet policy. While this do-nothingism is arguably an improvement over the kind of panicky, ill-informed interventionism of 1996’s Communications Decency Act (which Lessig terms “[a] law of extraordinary stupidity” that “practically impaled itself on the First Amendment”), it also falls far short, he says, of preserving fundamental civil values in a landscape reshaped by technological change.

Architecture Is Not Static

To follow Lessig’s reasoning in Code, you need to follow his terminology. This is not always easy to do, since the language by which he describes the Internet as it is today and as it might someday become is deeply metaphorical. Perhaps the least problematic of his terms is “architecture,” which Lessig borrows from Mitchell Kapor’s Internet aphorism that “architecture is politics.” Although his use of the term is a little slippery, Lessig mostly means for us to understand the term “architecture” to refer to both (a) the underlying software and protocols on which the Internet is based and (b) the kinds of applications that may run “on top of” that Internet software infrastructure. And while the first kind of architecture is not by itself easily regulable, Lessig says, the second kind might make it so—for example, by incorporating the various monitoring and identification functions that already exist on proprietary systems and corporate intranets.

More difficult to get a handle on is his use of the word “code,” which seems to expand and contract from chapter to chapter. At some bedrock level, Lessig means “code” to signify the software and hardware that make up the Internet environment—akin to the sense of “code” that programmers use. But he is also fond of metaphoric uses of “code” that muddy the waters. “Code is law,” Lessig writes at several points, by which we may take him to mean that the Internet’s software constrains and shapes our behavior with as much force as law does. And of course the book’s title equates code and law.

Elsewhere, however, he writes that code is something qualitatively different from law in that it does not derive from legislative or juridical action or community norms, yet may affect us more than laws or norms do, while providing us less opportunity for amendment or democratic feedback. It does not help matters when he refers to things like bicycle locks as “real-world code.” But if you can suspend your lexical disbelief for a while, the thrust of Lessig’s argument survives any superficial confusions wrought by his terminology.

That argument depends heavily on the first point Lessig makes about Internet architecture, which is simply that it’s malleable—shapeable by human beings who may wish to implement an agenda. The initial architecture of the Internet, he says correctly, emphasized openness and flexibility but provided little support for identifying or authenticating actual individuals or monitoring them or gathering data about them. “On the Internet it is both easy to hide that you are a dog and hard to prove that you are not,” Lessig writes. But this is a version of the Internet, he says, that is already being reshaped by e-commerce, which has reasons for wanting to identify buyers, share financial data about them, and authenticate the participants in transactions. At the center of e-commerce-wrought changes is the technology of encryption, which, while it has the ability to render communications and transactions secure in transit, also enables an architecture of identification (through, e.g., encryption-based certification of identity and digital signatures).

The key to the creation of such an architecture, Lessig writes, is not that a government will require people to hold and use certified IDs. Instead, he writes, “The key is incentives: systems that build the incentives for individuals voluntarily to hold IDs.” Lessig adds, “When architectures accommodate users who come with an ID installed and make life difficult for users who refuse to bear an ID, certification will spread quickly.”

But even if you don’t believe that e-commerce alone will establish an architecture of identification, he writes, there are reasons to believe that government will want to help such an architecture along. After all, a technology that enables e-commerce merchants to identify you and authorize your transactions may also have an important secondary usefulness to a government that wants to know where you’ve been and what you’ve been up to on the Internet.

And if the government wants to change the technological architecture of the Internet, there is no reason to believe it would not succeed, at least to some extent. After all, Lessig says, the government is already involved in mandating changes in existing architectures in order to effectuate policy. Among the examples of this kind of architectural intervention, he says, are (a) the Communications Assistance for Law Enforcement Act of 1994, in which Congress compelled telephone companies to make their infrastructure more conducive to successful wiretaps, (b) Congress’s requiring the manufacturers of digital recording devices to incorporate technologies that limit the extent to which perfect copies can be made, and (c) the requirement in the Telecommunications Act of 1996 that the television industry design and manufacture a V-chip to facilitate individuals’ ability to automatically block certain kinds of televised content.

With an identification architecture in place, Lessig argues, what previously might seem to be an intractable Internet-regulation problem, like the prohibition of Internet gambling, might become quite manageable.

The Government and Code

An account of social activity on the Internet that deals solely with the legal framework is inadequate, Lessig argues. In Lessig’s view, the actual “regulators” of social behavior come from four sources, each of which has its own dynamic. Those sources of social constraints are the market, the law, social norms, and architecture (here “architecture” means the constructed environment in which human beings conduct their activities). “But these separate constraints obviously do not simply exist as givens in a social life,” Lessig writes. “They are neither found in nature nor fixed by God,” he writes, adding that each constraint “can be changed, although the mechanism of changing each is complex.” The legal system, he says, “can have a significant role in this mechanics.”

So can the open-source movement, which Lessig refers to as “open code.” The problem with “architectural” constraints, and the thing that distinguishes them from any other kind, is that they do not depend on human awareness or judgment to function. You may choose whether or not to obey a law or a social norm, for example, and you may choose whether or not to buy or sell something in the market, but (to use the metaphor) you cannot enter a building through a door if there is no door there, and you cannot open a window if there is no window. Open code—software that is part of a code “commons,” that is not owned by any individual or business, and that can be inspected and modified—can provide “a check on state power,” Lessig writes, insofar as it makes any government-mandated component of the architecture of the Net both visible to, and (potentially) alterable by, citizens. Open code, which still makes up a large part of the Internet infrastructure, is thus a way of making architecture accountable and subject to democratic feedback, he argues. “I certainly believe that government must be constrained, and I endorse the constraints that open code imposes, but it is not my objective to disable government generally,” Lessig writes. But, he adds, “some values can be achieved only if government intervenes.”

A Jurisprudence of Cyberspace?

One way that government intervenes, of course, is through the court system. And as Lessig notes, it may be the courts that are first called upon to interpret and preserve our social values when technology shifts the effective balance of rights for individuals. A court faced with such a shift often must engage in “translation” of longstanding individual rights into a new context, he says.

Take wiretapping, for example. Once upon a time, it was not so easy for law-enforcement agents to get access to private conversations. But once telephones had become commonplace and, as Lessig puts it, “life had just begun to move onto the wires,” the government began to tap phones in order to gather evidence in criminal investigations. Does wiretapping raise Fourth Amendment concerns? The Supreme Court first answered this question in Olmstead v. United States (1928)—the answer for the majority was that wiretapping, at least when the tap was placed somewhere other than on a tappee’s property, did not raise Fourth Amendment issues, since the precise language of the Fourth Amendment does not address the non-trespassory overhearing of conversations. That is one mode of translation, Lessig writes—the court preserved the precise language of the Fourth Amendment in a way that contracted the scope of the zone of privacy protected by the Fourth Amendment.

Another, and arguably preferable, approach, Lessig says, would be to follow Justice Louis Brandeis’s approach in his dissent in Olmstead—an approach that preserves the scope of the privacy zone while departing from a strict adherence to the literal language of the Amendment. Brandeis’s dissent, arguing that the capture of private conversations does implicate the Fourth Amendment, was adopted by the Supreme Court forty years after Olmstead.

But what if technology raises a question for a court for which it is not clear which interpretative choice comes closer to preserving or “translating” the values inherent in the Bill of Rights? Borrowing from contract law, Lessig calls such a circumstance a “latent ambiguity.” He further suggests—this is perhaps the most unfashionable of his arguments—that, instead of simply refusing to act and referring the policy question to the legislature, a court might simply attempt to make the choice that best preserves constitutional values, in the hope that its choice will at minimum “spur a conversation about these fundamental values…to focus a debate that may ultimately be resolved elsewhere.”

Internet Alters Copyright and Privacy

All this begins to seem far afield from the law of cyberspace, but Lessig’s larger point is that the changes wrought by the Internet and related technologies are likely to raise significant “latent ambiguity” problems. He focuses on three areas in which technologies raise important questions about values but for which a passive or overliteral “translation” approach would not be sufficient. Those areas are intellectual property, privacy, and freedom of speech. In each case, the problem Lessig sees is one that is based on “private substitutes for public law”—private, non-governmental decision making that undercuts the values the Constitution and Bill of Rights were meant to preserve.

With intellectual property, and with copyright in particular, technological changes raise new problems that the nuanced legal balances long established in the law do not address. Lessig challenges the long-standing assertion, in Internet circles at least, that the very edifice of copyright law is likely to crumble in the era of the Internet, which enables millions of perfect copies of a creative work to be duplicated and disseminated for free, regardless of whether the copyright holder has granted anyone a license. In response to that perceived threat, Lessig observes, the copyright holders have moved to force changes in technology and changes in the law.

As a result, technologically implemented copyright-protection and copyright-management schemes are coming online, and the government has already taken steps to prohibit the circumvention of such schemes. This has created a landscape in which the traditional exercise of one’s rights to “fair use” of another’s work under the Copyright Act may become meaningless. The fact that one technically has a right to engage in fair use is of no help when one cannot engage in any unauthorized copying. Complicating this development, Lessig believes, is the oncoming implementation of an ID infrastructure on the Internet, which may make it impossible for individuals to engage in anonymous reading.

This bears some explaining. Consider that if you buy a book in a bookstore with cash, or if you read it in the library, nobody knows what you’re buying and reading. By contrast, a code-based licensing scheme in which you identify yourself online in order to obtain or view a copy of a copyrighted work may undercut your anonymity, especially if there’s an Internet ID infrastructure already in place. The technology changes are “private” ones—they do not involve anything we’d call “state action” and thus do not raise what we normally would call a constitutional problem—but they affect public values just as deeply as traditional constitutional problems do.

A similar argument can be made about how the Internet alters our privacy rights and expectations. Because the Internet makes both our backgrounds more “searchable” and our current behavior more monitorable, Lessig reasons, the privacy protections in our Bill of Rights may become meaningless. Once again, when the searching and monitoring is done by someone other than the government, the “state action” trigger for invoking the Bill of Rights is wholly absent.

What’s more, such searching and monitoring, whether done by the government or otherwise, may be invisible to the person being investigated. You will have lost your right to any meaningful privacy, and you will not even know it is gone until it is too late. Lessig’s analysis of the problem here is convincing, even though his proposed solution, a “property regime” for personal data that would replace today’s “liability regime,” is deeply problematic. This is partly because it would transmute invasions of privacy into property crimes (aren’t the jails full enough without adding gossips to the inmates?) and partly because the distinction he draws between property regimes and liability regimes as to which benefits the individual more is (in my view) illusory in practical terms.

Perhaps Lessig’s most controversial position with regard to the threat of private action to public values is the one he has explored previously in a number of articles for law reviews and popular publications—the argument that some version of the Communications Decency Act—perhaps one that required minors to identify themselves as such so as to be blocked from certain kinds of content—is less dangerous to freedom of speech than is the private use of technologies that filter content. It is important to understand that Lessig is not actually calling for a new CDA here, although that nuance might escape some legislators.

Lessig interprets such a version of the CDA, and the architecture that might be created by it, as a kind of “zoning,” which he sees as preferable to private, non-legislated filtering because, he says, zoning “builds into itself a system for its limitation. A site cannot block someone from the site without that individual knowing it.” By contrast, he says, a filtering regime such as the (now widely regarded as moribund) Platform for Internet Content Selection enables all sorts of censorship schemes, not just nominally child-protecting ones. PICS, because it can scale to function at the server or even network level, can be used by a government to block, say, troubling political content. And because PICS support can be integrated into the architecture of the Internet, it could be used to create compelling private incentives for people to label their Internet content. Worse, he says, such blocking would be invisible to individuals.

Lessig’s Arguments Hard to Harmonize

There are many problems with Lessig’s analysis here, and while it would take more space than I have here to discuss them in depth, I can at least indicate what some of the problems are. First of all, it’s not at all clear that one could not create a “zoning” solution that kept the zoning-excluded users from knowing—directly at least—that they have been excluded. Second, if a zoning scheme works to exclude users identified as kids, is there any reason to think it would not work equally well in excluding users identified as Iranians or Japanese or Americans? Don’t forget that incipient I.D. architecture, after all.

Third, a PICS-like scheme, implemented at the server level or higher, is actually less threatening to freedom of speech than keyword or other content filtering at the server level or higher. PICS, in order to function, requires that some high percentage of the content producers in the world buy into the self-labeling scheme before a repressive government could use it to block its citizens from disapproved content. Brute-force keyword filtering, by contrast, does not require anyone else’s cooperation—a repressive government could choose its own PICS-independent criteria and implement them at the server level or elsewhere.
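To make that contrast concrete, here is a minimal sketch in Python. It is not the actual PICS specification; the label vocabulary, page records, and blocked terms are hypothetical, but it shows why a label-based scheme depends on publishers’ cooperation while keyword blocking does not.

```python
# A minimal sketch (not the actual PICS specification) contrasting the two
# filtering models discussed above. The label vocabulary, document fields,
# and blocked terms are hypothetical.

def label_based_block(doc: dict, blocked_labels: set) -> bool:
    """PICS-style blocking keys off self-applied labels. If publishers never
    label their pages, there is nothing to match, so the scheme gives a
    censor no leverage without widespread cooperation."""
    return bool(doc.get("labels", set()) & blocked_labels)


def keyword_block(doc: dict, blocked_terms: set) -> bool:
    """Brute-force blocking needs no one's cooperation: the censor picks its
    own terms and scans the raw text unilaterally."""
    text = doc.get("text", "").lower()
    return any(term in text for term in blocked_terms)


pages = [
    {"text": "An unlabeled essay about political reform.", "labels": set()},
    {"text": "A self-labeled satire page.", "labels": {"satire"}},
]

print([label_based_block(p, {"political"}) for p in pages])  # [False, False]
print([keyword_block(p, {"political"}) for p in pages])      # [True, False]
```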

Fourth, there’s nothing inherent in the architecture of a PICS-style scheme—in the unlikely event that such a scheme were implemented—or any other server-level filtering scheme that requires that users not be notified that blocking took place. In short, you could design that architecture so that its operation is visible.

Lessig is right to oppose the implementation of anything that might be called an architecture of filtering. But one wonders why he is so intent on saying that zoning is better than filtering when both models can operate as tools of repression. Lessig answers that question by letting us know what his real worry is, which is that individuals with filtering tools will block out those who need to be heard. Says Lessig: “[F]rom the standpoint of society, it would be terrible if citizens could simply tune out problems that were not theirs…. We must confront the problems of others and think about problems that affect our society. This exposure makes us better citizens.” His concern is that we will use filtering tools to bar ourselves from that salutary exposure.

Leaving aside the question of whether his value here is one we should embrace—it is hard to harmonize it with what Brandeis in his Olmstead dissent termed “the right to be let alone”—it seems worth noting that the Internet does not really stand as evidence for Lessig’s assumption that people will use their new tools to avoid confrontation with those holding different opinions. Indeed, much of the evidence seems to point the other way, as anyone who has ever viewed a long-running Internet flame war or inspected dueling Web sites can attest. Nothing forces combatants on the Internet to stay engaged, but they do anyway. The fact is, we like to argue with each other—as Deborah Tannen has pointed out, we have embraced an “argument culture.” Whether that culture is healthy is another question, of course.

But even if one disagrees with Lessig’s analysis of certain particular issues, this does not detract from his main argument, which is that private decision making, enhanced by new technologies and implemented as part of the “architecture” of the Internet, may undercut the democratic values—freedom of speech, privacy, autonomy, access to information—at the core of our society. Implicit in his argument is that the traditional focus of civil libertarians, which is to challenge government interventions in speech and privacy arenas, may be counterproductive in this new context. If I read him right, Lessig is calling for a new constitutional philosophy, one rooted perhaps in Mill’s essay On Liberty, in which government can function as a positive public tool to preserve the liberty values we articulated in the Constitution from private encroachments. Such a philosophy would require, however, a very imaginative “translation” of constitutional values indeed to get past the objection that the Bill of Rights is only about limiting “state action.”

What Code is really about is (the author’s perception of) the need for political liberals to put a positive face on the role of government without embracing statism or seeming to. Although this is clearly Lessig’s project, he’s pessimistic about its success—in the public debate about Internet policy, he complains, the libertarians have essentially won the field. What he would like to see, perhaps, is a constitutional structure in which something like the Bill of Rights could be invoked against challenges to personal liberty or autonomy, regardless of whether the challenges come from public or private sources. The ideology of libertarianism, he believes, will interpret the changes wrought by e-commerce and other private action as a given, like the weather. “We will watch as important aspects of privacy and free speech are erased by the emerging architecture of the panopticon, and we will speak, like modern Jeffersons, about nature making it so—forgetting that here, we are nature,” he writes in a somewhat forlorn final chapter.

Lessig may be right in his gloomy predictions, but let us suppose that his worst fears are not realized and a new debate does begin about the proper role of government in cyberspace and about appropriate limitations on private crafting of the online architecture. If that happens, it may be that at least some of the thanks for that development will have to go to Lessig’s Code.

In 1999, Mike Godwin (@sfmnemonic) was senior legal editor of E-Commerce Law Weekly and had just recently published Cyber Rights: Defending Free Speech in the Digital Age. Currently he is a senior fellow at R Street Institute.

Posted on Techdirt - 18 January 2019 @ 10:43am

The Splinters Of Our Discontent: A Review Of Network Propaganda

Years before most of us thought Donald Trump would have a shot at the presidency, the Cato Institute’s Julian Sanchez put a name on a problem he saw in American conservative intellectual culture. Sanchez called it “epistemic closure,” and he framed the problem this way:

“One of the more striking features of the contemporary conservative movement is the extent to which it has been moving toward epistemic closure. Reality is defined by a multimedia array of interconnected and cross-promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they’re liberal? Well, they disagree with the conservative media!) This epistemic closure can be a source of solidarity and energy, but it also renders the conservative media ecosystem fragile.”

Sanchez’s comments didn’t trigger any kind of real schism in conservative or libertarian circles. Sure, there was some heated debate among conservatives, and a few conservative commentators, like David Frum, Bruce Bartlett, and the National Review’s Jim Manzi, acknowledged that there might be some merit to Sanchez’s critique. But for most people, this argument among conservatives about epistemic closure hardly counted as serious news.

But the publication last fall of Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics by Yochai Benkler, Robert Faris, and Hal Roberts—more than eight years after the original “epistemic closure” debate erupted—ought to make the issue hot again. This long, complex, yet readable study of the American media ecosystem in the run-up to the 2016 election (as well as the year afterwards) demonstrates that the epistemic-closure problem has generated what the authors call an “epistemic crisis” for Americans in general. The book also shows that our efforts to understand current political division and disruptions simplistically—either in terms of negligent and arrogant platforms like Facebook, or in terms of Bond-villain malefactors like Cambridge Analytica or Russia’s Internet Research Agency—are missing the forest for the trees. It’s not that the social media platforms are wholly innocent, and it’s not that the would-be warpers of voter behavior did nothing wrong (or had no effect). But the seeds of the unexpected outcomes in the 2016 U.S. elections, Network Propaganda argues, were planted decades earlier, with the rise of a right-wing media ecosystem that valued loyalty and confirmation of conservative (or “conservative”) values and narratives over truth.

Now, if you’re a conservative, you may be reading this broad characterization of Network Propaganda as an attack on conservatism itself. Here are four reasons you shouldn’t fall into that trap! First, nothing in this book challenges what might be called core conservative values (at least as they have been understood for most of the last 100 years or so). Those values typically have included favoring limited government over expansive government, preferring economic growth and rights to property over promoting equity and equality for their own sake, supporting business flexibility over labor and governmental demands, committing to certain approaches to tax policy, and so forth. Nothing in Network Propaganda is a criticism of substantive conservative values like these, or even of what may increasingly be taken as “conservative” stances in the Trump era (nationalism or protectionism or opposition to immigration, say). The book doesn’t take a position on traditional liberal or progressive political stances either.

Second, nothing in the book discounts the indisputable fact that individuals and media entities on the left, and even in the center, have their own sins and excesses to account for. In fact, the more damning media criticisms in the book are aimed squarely at the more traditional journalistic institutions that made themselves more vulnerable to disinformation and distorted narratives in the name of “objectivity.” Where right-wing media set out to reinforce conservative identity and narratives—doing, in fact, what they more or less always promised they were going to do—the institutional press of the left and the center frequently let their superficial commitment to objectivity result in the amplification of disinformation and distortions.

Third, there are philosophical currents on the left as well as the right that call the whole notion of objective facts and truth into question—that consider all questions of fact to represent political judgments rather than anything that might be called “factual” or “truthful.” As the authors put it, reform of our media ecosystems “will have to overcome not only right-wing propaganda, but also decades of left-wing criticism of objectivity and truth-seeking institutions.” Dedication to truth-seeking is, or ought to be, a transpartisan value.

Which leads us to the fourth reason conservatives should pay attention to Network Propaganda, which is the biggest one. The progress of knowledge, and of problem-solving in the real world, requires us, regardless of political preferences and philosophical approaches, to come together in recognizing the value of facts. Consider: if progressives had cocooned themselves in a media ecosystem that had cut itself off from the facts—one that valued tribal loyalty and shared identity over mere factual accuracy—conservatives and centrists would be justified in pointing out not merely that the left’s media were unmoored but also that its insistence on doctrinal purity in the face of factual disproof was positively destructive.

But the massive dataset and analyses offered by Benkler, Faris, and Roberts in Network Propaganda demonstrate persuasively that the converse distortion has happened. Specifically, the authors took about four million online stories regarding the 2016 election or national politics generally and analyzed them through Media Cloud, a joint technological project developed by Harvard’s Berkman Klein Center and MIT’s Center for Civic Media over the course of the last decade. Media Cloud enabled the authors to study not only where the stories originated but also how they were linked and propagated, and how the various entities in our larger media ecosystem link to one another. The Media Cloud analytical system made it possible to study news sites, including the website versions of newspapers like the New York Times and the Wall Street Journal, along with the more politically focused websites on the left and right, like Daily Kos and Breitbart. The system also enabled the authors to study how the stories were retweeted and shared on Facebook, Twitter, and other social media, as well as how, in particular instances, television coverage supplemented or amplified online stories.
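To give a concrete sense of the kind of link analysis this enables, here is a rough Python sketch. It is not the Media Cloud codebase or its API; the story records and outlet names are hypothetical, but it illustrates how hyperlinks between stories can be turned into an outlet-to-outlet map of the ecosystem.

```python
# A rough sketch of the kind of link analysis described above; it is not the
# Media Cloud codebase or its API, and the story records are hypothetical.

from collections import Counter
from urllib.parse import urlparse

stories = [
    {"url": "https://outlet-a.example/story1",
     "links": ["https://outlet-b.example/x", "https://outlet-c.example/y"]},
    {"url": "https://outlet-b.example/story2",
     "links": ["https://outlet-a.example/z"]},
]


def outlet(url: str) -> str:
    """Reduce a story URL to its publishing outlet (the hostname)."""
    return urlparse(url).netloc


# Each hyperlink in a story is treated as an edge from the publishing outlet
# to the outlet it links to.
edges = Counter()
for story in stories:
    source = outlet(story["url"])
    for link in story["links"]:
        edges[(source, outlet(link))] += 1

# In-link counts per outlet give a crude measure of how central each outlet
# is in the ecosystem -- the sort of structural question the book examines.
inlinks = Counter()
for (_, target), count in edges.items():
    inlinks[target] += count

print(inlinks.most_common())
```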

You might expect that any study of such a large dataset would show symmetrical patterns of polarization during the pre-election to post-election period the authors studied (basically, 2015 through 2017). It was, after all, an election period, which is typically a time of increased partisanship. You might also expect, given the increasing presence of social-media platforms like Facebook, Twitter, and Instagram in American public life, that the new platforms themselves, just by their very existence and popularity, shaped public opinion in new ways. And you might expect, given the now-indisputable fact that Russian “active measures” were trying to influence the American electorate in certain ways, to see clear proof either that the Russians succeeded in their disinformation/propaganda efforts or that they failed.

Yet Network Propaganda, instantly a necessary text for those of us who study media ecologies, shows that the data point to different conclusions altogether. The authors’ Media Cloud analyses are represented frequently in colorful graphs as well as verbally in tables and in the text of the book itself. As Benkler characterizes the team’s findings in the Boston Review:

“The data was not what we expected. There were periods during the research when we were just working on identifying—as opposed to assessing—the impact of Russians, and during those times, I thought it might really have been the Russians. But as we analyzed these millions of stories, looking both at producers and consumers, a pattern repeated again and again that had more to do with the traditional media than the Internet.”

That traditional media institutions are seriously culpable for the spread of disinformation is counterintuitive. The authors begin Network Propaganda by observing what most of us also observed—the rise of what briefly was called “fake news” before that term was transmuted by President Trump into shorthand for his critics. But Benkler et al. also note that in the latter half of the 20th century, mainstream journalistic institutions, informed by a wave of professionalization that dates back approximately to the founding of the Columbia University journalism school, historically had been able to overcome most of the fact-free calumnies and conspiracy theories through their commitment to objectivity and fact-checking. Yet mainstream journalism failed the culture in 2016, and it’s important for the journals and the journalists to come to terms with why. But doing so means investigating how stories from the fringes interacted with the mainstream.

The fringe stories had weird staying power; in the period centering on the 2016 election, a lot of the stories that were just plain crazy—from the absurd narrative that was “Pizzagate” to claims that Jeb Bush had “close Nazi ties” (Alex Jones played a role in both of these narratives)—persistently resurfaced in the way citizens talked about the election. To the Network Propaganda authors, it became clear that in recent years something new had emerged—namely, a variety of disinformation that seems, weedlike, to survive the most assiduous fact-checkers and to persist in resurfacing in the public mind.

How did this emergence happen, and should we blame the internet? Certainly this phenomenon didn’t manifest in any way predicted by either the more optimistic pundits at the internet’s beginnings or the backlash pessimists who followed. The optimists had believed that increased democratic access to mass media might give rise to a wave of citizen journalists who supplemented and ultimately complemented institutional journalism, leading both to more accuracy in reporting and more citizen engagement. The pessimists predicted “information cocoons” (Cass Sunstein’s term) and “filter bubbles” (Eli Pariser’s term), punctuated to some extent by quarrelsomeness, because online media can disinhibit bad behavior.

Yes, to some extent, the optimists and the pessimists both found confirmation of their predictions, but what they didn’t expect, and what few if any seem to have predicted, was the marked asymmetry in how those predictions played out in the 2015-2017 period with regard to the 2016 election processes and their outcome. As the authors put it, “[t]he consistent pattern that emerges from our data is that, both during the highly divisive election campaign and even more so during the first year of the Trump presidency, there is no left-right division, but rather a division between the right and the rest of the media ecosystem. The right wing of the media ecosystem behaves precisely as the echo-chamber models predict—exhibiting high insularity, susceptibility to information cascades, rumor and conspiracy theory, and drift toward more extreme versions of itself. The rest of the media ecosystem, however, operates as an interconnected network anchored by organizations, both for profit and nonprofit, that adhere to professional journalistic norms.”

As a result, this period saw the appearance of disinformation narratives that targeted Trump and his primary opponents as well as Hillary Clinton, but the narratives that got more play, not just in right-wing outlets but ultimately in the traditional journalistic outlets as well, were the ones that centered on Clinton. This happened even when there were fewer available facts supporting the anti-Clinton narratives and (occasionally) more facts supporting the anti-Trump narratives. The explanation for the anti-Clinton narratives’ longevity in the news cycle, the data show, is the focus of the right-wing media ecology on the two focal media nodes of Fox News and Breitbart. At times during this period, Breitbart took the lead from Fox News as an influencer; Fox eventually responded by repositioning itself after Trump’s nomination as a solid Trump booster.

In contrast, left-wing media had no single outlet that defined orthodoxy for progressives. Instead, left-of-center outlets worked within the larger sphere of traditional media, and, because they were competing for the rest of the audience that had not committed itself to the Fox/Breitbart ecosystem, were constrained to adhere, mostly, to facts that were confirmable by traditional media institutions associated with the center-left (the New York Times and the Washington Post, say) as well as with the center-right (e.g., the Wall Street Journal). Basically, even if you were an agenda-driven left-oriented publication or online outlet, your dependence on reaching the mainstream for your audience meant that you couldn’t get away with just making stuff up, or with laundering far-left conspiracy theories from more marginal sources.

Network Propaganda‘s data regarding the right-wing media ecosystem—that it’s insular, prefers confirmation of identity and loyalty over self-correction, demonizes perceived opponents, and resists disconfirmation of its favored narratives—map well to political-communication scholars Kathleen Hall Jamieson and Joseph Cappella’s 2008 book, Echo Chamber: Rush Limbaugh and the Rise of Conservative Media. In that book, Jamieson and Cappella outlined how, as they put it, “these conservative media create a self-protective enclave hospitable to conservative beliefs.” As a consequence, they write:

“[t]his safe haven reinforces conservative values and dispositions, holds Republican candidates and leaders accountable to conservative ideals, tightens their audience’s ties to the Republican Party, and distances listeners, readers, and viewers from ‘liberals,’ in general, and Democrats, in particular. It also enwraps them in a world in which facts supportive of Democratic claims are contested and those consistent with conservative ones championed.”

The data analyzed by Benkler et al. in Network Propaganda support Jamieson and Cappella’s conclusions from more than a decade ago. Moreover, Benkler et al. argue that the key factors in the promotion of disinformation were not “clickbait fabricators” (who craft eye-grabbing headlines to generate revenue), or Russian “active measures,” or the corrosive effects of the (relatively) new social-media platforms Facebook and Twitter. The authors are aware that in making this argument they’re swimming against the tide:

“Fake news entrepreneurs, Russians, the Facebook algorithm, and online echo chambers provide normatively unproblematic, nonpartisan explanations to the current epistemic crisis. For all of these actors, the strong emphasis on technology suggests a novel challenge that our normal systems do not know how to handle but that can be addressed in a nonpartisan manner. Moreover, focusing on ‘fake news’ from foreign sources and on Russian efforts to intervene places the blame onto foreigners with no legitimate stake in our democracy. Both liberal political theory and professional journalism consistently seek neutral justifications for democratic institutions, so visibly nonpartisan explanations such as these have enormous attraction.”

Nevertheless, Network Propaganda argues, the nonpartisan explanations are inconsistent with what the data show, which the authors characterize as “a radicalization of roughly a third of the American media system.” (It isn’t “polarization,” since the data don’t show any symmetry between left and right “poles.”) The authors argue that “[n]o fact emerges more clearly from our analysis of how four million political stories were linked, tweeted, and shared over a three-year period than that there is no symmetry in the architecture and dynamics of communications within the right-wing media ecosystem and outside of it.” In addition, they write, “we have observed repeated public humiliation and vicious disinformation campaigns mounted by the leading sites in this sphere against individuals who were the core pillars of Republican identity a mere decade earlier.” Those campaigns against Republican stalwarts came from the radicalized right-wing media sources, not from the left.

The authors acknowledge that they “do not expect our findings to persuade anyone who is already committed to the right-wing media ecosystem. [The data] could be interpreted differently. They could be viewed as a media system overwhelmed by liberal bias and opposed only by a tightly-clustered set of right-wing sites courageously telling the truth in the teeth of what Sean Hannity calls the ‘corrupt, lying media,’ rather than our interpretation of a radicalized right set apart from a media system anchored in century-old norms of professional journalism.” But that interpretation of the data flies in the face of Network Propaganda’s extensive demonstration that the traditional mainstream media—in what the authors call “the performance of objectivity”—actually had the effect of amplifying right-wing narratives rather than successfully challenging the false or distorted narratives. (The authors explore this paradox in Chapter 6.)

Democrats and progressives won’t have any trouble accepting the idea that radicalized right-wing media are the primary cause of what the authors call today’s “epistemic crisis.” But Benkler and his co-authors want Republicans to recognize what they lost in 2016:

“The critical thing to understand as you read this book is that the epochal change reflected by the 2016 election and the first year of the Trump presidency was not that Republicans beat Democrats [but instead] that in 2016 the party of Ronald Reagan and the two presidents Bush was defeated by the party of Donald Trump, Breitbart, and billionaire Robert Mercer. As our data show, in 2017 Fox News joined the victors in launching sustained attacks on core pillars of the Party of Reagan—free trade and a relatively open immigration policy, and, most directly, the national security establishment and law enforcement when these threatened President Trump himself.”

It’s possible that many or even most Republicans don’t yet want to hear this message—the recent shuttering of The Weekly Standard underscores one of the consequences of radicalization of right-wing media, which is that center-right outlets, more integrated with the mainstream media in terms of journalistic professionalism and factuality, have lost influence in the right-wing media sphere. (It remains to be seen whether The Bulwark helps fill the gap.)

But the larger message from Network Propaganda’s analyses is that we’re fooling ourselves if we blame our current culture’s vulnerability to disinformation on the internet in general or on social media (or search engines, or smartphones)—or even on Russian propaganda campaigns. Blaming the Russians is trendy these days, and even Kathleen Hall Jamieson, whose 2008 book on right-wing media, Echo Chamber, informs the authors’ work in Network Propaganda, has adopted the thesis that the Russians probably made the difference for Trump in 2016. Her recent book Cyberwar—published a month after Network Propaganda—spells out a theory of Russian influence in the 2016 election that also, predictably, raises concerns about social media, as well as focusing on the role of the WikiLeaks releases of hacked DNC emails and how the mainstream media responded to those releases.

Popular accounts of Jamieson’s book have interpreted Cyberwar as proof that the Russians are the central culprits in any American 2016 electoral dysfunction, even though Jamieson carefully qualifies her reasoning and conclusions in all the ways you would want a responsible social scientist to do. (She doesn’t claim to have proved her thesis conclusively.) Taken together with the trend of seeing social media as inherently socially corrosive, the Russians-did-it narrative suggests that if Twitter and Facebook (and Facebook-integrated platforms like Instagram and WhatsApp) clean up their acts and find ways to purge their products of foreign actors as well as homegrown misleading advertising and “fake news,” the political divisiveness we’ve seen in recent years will subside. But Network Propaganda provides strong reason to believe that reforming or regulating or censoring the internet companies won’t solve the problems they’re being blamed for. True, the book expressly endorses public-policy responses to the disinformation campaigns of malicious foreign actors as well as reforms of how the platforms handle political advertising. But, the authors insist, the problem isn’t primarily the Russians, or technology—it’s in our political and media cultures.

Possibly Jamieson is right to think that the Russians’ “active measures,” by amplifying pre-existing political divisions through social media, were the final straw that ultimately changed the outcome of the 2016 election. Nevertheless, at its best Jamieson’s book has taken a snapshot of how vulnerable our political culture was in 2016. Plus, her theory of Russian influence requires some suspension of disbelief, notably in her account of how then-FBI-director James Comey’s interventions—departures from DOJ/FBI norms—were somehow caused by the fact of the Russian campaign. Even if you accept her account, it’s an account of our vulnerability that doesn’t explain where the vulnerability came from.

In contrast, Network Propaganda has a fully developed theory of where that vulnerability came from, and traces it—in ways aligned with Jamieson’s previous scholarship—to sources that predate the modern internet and social media. In addition, in what may be a surprise given the book’s focus on what might mistakenly be taken as a problem unique to American political culture, Network Propaganda expressly places the American problems in the context of a larger worldwide tendency to blame internet platforms in particular for social ills:

“For those not focused purely on the American public sphere, our study suggests that we should focus on the structural, not the novel; on the long-term dynamic between institutions, culture, and technology, not only the disruptive technological moment; and on the interaction between the different media and technologies that make up a society’s media ecosystem, not on a single medium, like the internet, much less a single platform like Facebook or Twitter. The stark differences we observe between the insular right-wing media ecosystem and the majority of the American media environment, and the ways in which open web publications, social media, television, and radio all interacted to produce these differences, suggest that the narrower focus will lead to systematically erroneous predictions and diagnoses. It is critical not to confound what is easy to measure (Twitter) with what is significantly effective in shaping beliefs and politically actionable knowledge in society…. Different countries, with different histories, institutional structures, and cultural practices of collective sense-making need not fear the internet’s effects. There is no echo chamber or filter-bubble effect that will inexorably take a society with a well-functioning public sphere and turn it into a shambles simply because the internet comes to town.”

Benkler, Faris, and Roberts expressly acknowledge, however, that it’s appropriate for governments and companies to consider how they regulate political advertising and targeted messaging going forward—even if this online content can’t be shown to have played a significant corrosive role in past elections, there’s no guarantee that refined versions won’t be more effective in the future. But even more important, they insist, is the need to address larger institutional issues affecting our public sphere. The book’s Chapter 13 addresses a full range of possible reforms. These include “reconstructing center-right media” (to address what the authors think Julian Sanchez correctly characterized as an “epistemic closure” problem) as well as insisting that professional journalists recognize that they’re operating in a propaganda-rich media culture, which ethically requires them to do something more than “performance of objectivity.”

The recommendations also include promoting what they call a “public health approach to the media ecosystem,” which essentially means obligating the tech companies and platforms to disclose “under appropriate legal constraints [such as protecting individual privacy]” the kind of data we need to assess media patterns, dysfunctions, and outcomes. They write, correctly, that we “can no more trust Facebook to be the sole source of information about the effects of its platform on our media ecosystem than we could trust a pharmaceutical company to be the sole source of research on the outcome of its drugs, or an oil company to be the sole source of measurements of particulate emissions or CO2 in the atmosphere.”

The fact is that the problems in our political and media culture can’t be delegated to Facebook or Twitter to solve on their own. Any comprehensive, holistic solutions to our epistemic crises require not only transparency and accountability but also fully engaged democracy with full access to the data. Yes, that means you and me. It’s time for our epistemic opening.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at R Street Institute.

Posted on Techdirt - 30 November 2018 @ 12:03pm

Our Bipolar Free-Speech Disorder And How To Fix It (Part 3)

Part 1 and Part 2 of this series have emphasized that treating today’s free-speech ecosystem in “dyadic” ways—that is, treating each issue as fundamentally a tension between two parties or two sets of stakeholders—doesn’t lead to stable or predictable outcomes that adequately protect free speech and related interests.

As policymakers consider laws that affect platforms or other online content, it is critical that they consider Balkin’s framework and the implications of this “new-school speech regulation” that the framework identifies. Failure to apply it could lead—indeed, has led in the recent past—to laws or regulations that indirectly undermine basic free expression interests.

A critical perspective on how to think about free speech in the twenty-first century requires that we recognize the extent to which free speech is facilitated by the internet and its infrastructure. We also must recognize that free speech is in some new ways made vulnerable by the internet and its infrastructure. In particular, free speech is enhanced by the lowered barriers to entry for speakers that the internet creates. At the same time, free speech is made vulnerable insofar as the internet and the infrastructure it provides for freedom of speech are subject to legal and regulatory action that may not be transparent to users. For example, a government may seek to block the administration of a dissident website’s domain name, or may seek to block the use by dissident speakers of certain payment systems.

There are of course non-governmental forces that may undermine or inhibit free speech—for example, the lowered barriers to entry make it easier for harassers or stalkers to discourage individuals from participation. This is in some sense an old problem in free-speech doctrine; the so-called “heckler’s veto” is a subset of it. The problem of harassment may give rise to users’ complaints directly to the platform provider, or to demands that government regulate the platforms (and other speakers) more.

Balkin explores the ways in which government can exercise both hard and soft power to censor or regulate speech at the infrastructure level. This can include direct changes in the law aimed at compelling internet platforms to censor or otherwise limit speech. It can include pressure that doesn’t rise to the level of law or regulation, as when a lawmaker warns a platform that it must figure out how to regulate certain kinds of troubling expression because “[i]f you don’t control your platform, we’re going to have to do something about it.” It can include changes in law or regulation aimed at increasing incentives for platforms to self-police with a heavier hand. Balkin characterizes the ways in which government can regulate the speech of citizens and press indirectly, through pressure on or regulation of platforms and other intermediaries like payment systems, as “New School Speech Regulation.”

The important thing to remember is that government itself, although often asked to arbitrate issues that arise between internet platforms and users, is not always a disinterested party. For example, a government may have its own reasons for incentivizing platforms to collect more data (and to disclose the data they have collected), such as with National Security Letters. Because the government may regulate speech indirectly and non-transparently, there is a sense in which government cannot position itself on all issues as a neutral referee of competing interests between platforms and users. In a strong sense, the government has interests of its own that may be in opposition to user interests, platform interests, or both.

Toward a New Framework

It is important to recognize that entities at each corner of Balkin’s “triangular” model may have valid interests. For example, governmental entities may have valid interests in capturing data about users, or in suppressing or censoring certain (narrow) classes of speech, although only within a larger human-rights context in which speech is presumptively protected. End-users and traditional media companies share a presumptive right to free speech, but also other rights consistent with Article 19 of the Universal Declaration of Human Rights:

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

The companies, including but not limited to the internet infrastructure companies in the top right corner of the triangle, may not have the same kind of legal status that end users or traditional media have. By the same token, they may not have the same kind of presumptively necessary role in democratic governance as governments have. But we may pragmatically recognize that they have a presumptive right to exist, pursue profit, and innovate, on the theory that their doing so ultimately redounds to the benefit of end users and even traditional media, largely by expanding the scope of voice and access.

Properly, we should recognize all these players in the “triangular” paradigm as “stakeholders.” With the exception of the manifestly illegal or malicious entities in the paradigm (e.g., “hackers” and “trolls”), entities at all three corners each have their respective interests that may be in some tension with actors at other corners of the triangle. Further, the bilateral processes between any two sets of entities may obscure or ignore the involvement of the third set in shaping goals and outcomes.

What this strongly suggests is the need for all (lawful, non-malicious) entities to work non-antagonistically towards shared goals in a way that heightens transparency and that improves holistic understanding of the complexity of internet free speech as an ecosystem.

Balkin suggests that his free-speech-triangle model highlights three problems: (1) “new school” speech regulation that uses the companies as indirect controllers and even censors of content, (2) “private governance” by companies that lacks transparency and accountability, and (3) the incentivized collection of big data that makes surveillance and manipulation of end users (and implicitly the traditional media) easier. He offers three suggested reforms: (a) “structural” regulation that promotes competition and prevents discrimination among “payment systems and basic internet services,” (b) guarantees of “curatorial due process,” and (c) recognition of “a new class of information fiduciaries.”

Of the reforms, the first may be taken as a straightforward call for “network neutrality” regulation, a particular type of regulation of internet services that Balkin has expressly and publicly favored (e.g., his co-authored brief in the net neutrality litigation). But it actually articulates a broader pro-competition principle that has implications for our current internet free-speech ecosystem.

Specifically, the imposition of content-moderation obligations by law and regulation actually inhibits competition and discriminates in favor of incumbent platform companies. That is, because content moderation requires a high degree both of capital investment (developing software and hardware infrastructure to respond to and anticipate problems) and of human intervention (because AI filters make stupid decisions, including false positives, that have free-speech impacts), highly capitalized internet incumbent “success stories” are ready to be responsive to law and regulation in ways that startups and market entrants generally are not. The second and third suggestions—that the platforms provide guarantees of “due process” in their systems of private governance, and that the companies that collect and hold Big Data meet fiduciary obligations—need less explanation. But I would add to the “information fiduciary” proposal that we would properly want such a fiduciary to be able to invoke some kind of privilege against routine disclosure of user information, just as traditional fiduciaries like doctors and lawyers are able to do.

Balkin’s “triangle” paradigm, which gives us three sets of discrete stakeholders, three problems relating to the stakeholders’ relationships with one another, and three reforms, is a good first step toward framing internet free-speech issues non-dyadically. But while the taxonomy is useful, it shouldn’t be limiting or necessarily reducible to three. There are arguably some additional reforms that ought to be considered at a “meta” level (or, if you will, above and outside the corners of the free-speech triangle). With this in mind, let us add the following “meta” recommendations to Balkin’s three specific programmatic ones.

Multistakeholderism. The multipolar model that Balkin suggests, or any non-dyadic model, actually has been anticipated in different ways by institutionalized precursors in the world of internet law and policy; the model they embody is multistakeholderism. Those precursors, ranging from hands-on regulators and norm setters like ICANN to broader and more inclusive policy discussion forums like the Internet Governance Forum, are by no means perfect and so must be subjected to ongoing critical review and refinement. But they’re better at providing a comprehensive, holistic perspective than lawmaking and court cases. Governments should be able to participate, but should be recognized as stakeholders and not just referees.

Commitment to democratic values, including free speech, on the internet. Everyone agrees that some kinds of expression on the internet are disturbing and disruptive—yet, naturally enough, not everybody agrees about what should be banned or controlled. We need to work actively to uncouple the commitment to free speech on the internet—which we should embrace as a function of both the First Amendment and international human-rights instruments—from debates about particular free-speech problems. The road to doing this lies in bipartisan (or multipartisan, or transpartisan) commitment to free-speech values. The road away from the commitment lies in the presumption that “free speech” is a value that is more “right” than “left” (or vice versa). To save free speech for any of us, we must commit in the establishment of our internet policies to what Holmes called “freedom for the thought that we hate.”

Commitment to “open society” models of internet norms and internet governance institutions. We should recognize, following Karl Popper’s The Open Society and Its Enemies (Chapter 7), that our framework for internet law and regulation can’t be “who has the right to govern,” because all stakeholders have some claim of right to govern. And it can’t be “who is best suited to govern,” because that model leads to disputed notions of who’s best. Instead, as Popper frames it,

“For even those who share this assumption of Plato’s admit that political rulers are not always sufficiently ‘good’ or ‘wise’ (we need not worry about the precise meaning of these terms), and that it is not at all easy to get a government on whose goodness and wisdom one can implicitly rely. If that is granted, then we must ask whether political thought should not face from the beginning the possibility of bad government; whether we should not prepare for the worst leaders, and hope for the best. But this leads to a new approach to the problem of politics, for it forces us to replace the question: Who should rule? by the new question: How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?”

Popper’s focus on institutions that prevent “too much damage” when “the worst leaders” are in charge is the right one. Protecting freedom of speech in today’s internet ecosystem requires protecting against the excesses or imbalances that necessarily result from merely dyadic conceptions of where the problems are or where the responsibilities for correcting the problems lie. If, for example, government or the public wants more content moderation by platforms, there need to be institutions that facilitate education and improved awareness about the tradeoffs. If, as a technical and human matter, it’s difficult (maybe impossible) to come up with a solution that (a) scales and (b) doesn’t lead to a parade of objectionable instances of censorship/non-censorship/inequity/bias, then we need to create institutions in which that insight is fully shared among stakeholders. (Facebook has promised more than once to throw money at AI-based solutions, or partial solutions, to content problems, but the company is in the unhappy position of having a full wallet with nothing that’s worth buying, at least for that purpose. See “Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?”) The alternative will be increasing insistence that platforms engage in “private governance” that’s both inconsistent and less accountable. In the absence of an “ecosystem” perspective, different stakeholders will insist on short-term solutions that ignore the potential for “vicious cycle” effects.

The older models for mass-media free-speech regulation were entities like newspapers and publishers, with high degrees of editorial control, and common carriers like the telephone and telegraph, which mostly did not make content-filtering determinations. There is likely no version of these older models that would work for Twitter or Facebook (or similar platforms) while maintaining the great increase in freedom of expression that those platforms have enabled. Dyadic conceptions of responsibility may lead to “vicious cycles,” as when Facebook is pressured to censor some content in response to demands for content moderation, and the company’s response creates further unhappiness with the platform (because the human beings who are the ultimate arbiters of individual content-moderation decisions are fallible, inconsistent, etc.). At that point, the criticism of the platform may frame itself as a demand for less “censorship,” for more “moderation,” or for the end of all unfair censorship/moderation. There may also be the inference that platforms have deliberately been socially irresponsible. Although that inference may be correct in some specific cases, the general truth is that the platforms have more typically been wrestling with a range of different, competing responsibilities.

It is safe to assume that today’s mass-media platforms, including but not limited to social media, as well as tomorrow’s platforms, will generate new models aimed at ensuring that freedom of speech is protected. But the only way to increase the chances that the new models will be the best possible models is to create a framework of shared free-speech and open-society values, and to ensure that each set of stakeholders has its seat at the table when the model-building starts.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.

Posted on Techdirt - 29 November 2018 @ 11:58am

Our Bipolar Free-Speech Disorder And How To Fix It (Part 2)

In Part 1 of this series, I gave attention to law professor Jack Balkin’s model of “free speech as a triangle,” where each vertex of the triangle represents a group of stakeholders. The first vertex is government and intergovernmental actors. The second is internet platform and infrastructure providers, and the third is users themselves. This “triangle” model of speech actors is useful because it enables us to characterize the relationships among each set of actors, thereby illuminating how the nature of regulation of speech has changed and become more complicated than it used to be.

Take a look again at Balkin’s Figure 1.

Although visualizing all the players in the free-speech regulation landscape makes it clearer that a “free-speech triangle” at least captures more complexity than the usual speakers-against-the-government, speakers-against-the-companies, or companies-against-the-government models, the fact is that our constitutional law and legal traditions predispose us to think of these questions in binary rather than, uh, “trinary” terms. We’ve been thinking this way for centuries, and it’s a hard habit to shake. But shaking the binary habit is a necessity if we’re going to get the free-speech ecosystem right in this century.

To do this we first have to look at how we typically reduce these “trinary” models to the binary models we’re more used to dealing with. With three classes of actors, there are three possible “dyads” of relationships: user–platform, government–platform, and user–government.

(a) Dyad 1: User complaints against platforms (censorship and data gathering)

Users’ complaints about platforms may ignore or obscure the effects of government demands on platforms and their content-moderation policies.

Typically, public controversies around internet freedom of expression are framed, by news coverage and analysis as well as by stakeholders themselves, as binary oppositions. If there is a conflict over content between (for example) Facebook and a user, especially if it occurs more than once, that user may conclude that her content was removed for fundamentally political reasons. This perception may be exacerbated if the removal was justified as a violation of the platform’s terms of service. A user subject to such censorship may believe that her content is no more objectionable than that of users who weren’t censored, or that her content is being censored while content that is just as heated, but representing a different political point of view, isn’t being censored. Naturally enough, this outcome seems unfair, and a user may infer that the platform as a whole is politically biased against those of her political beliefs. It should be noted that complaints about politically motivated censorship apparently come from most and perhaps all sectors.

A second complaint from users may derive from data collection by a platform. This may not directly affect the content of a user’s speech, but it may affect the kind of content she encounters, which, when driven by algorithms aimed at increasing her engagement on the platform, may serve not only to urge her participation in more and more commercial transactions, but also to “radicalize” her, anger her, or otherwise disturb her. Even if an individual may judge herself more or less immune from algorithmically driven urges to view more and more radical and radicalizing content, she may be disturbed by the radicalizing effects that such content may be having on her culture generally. (See, e.g., Tufekci, Zeynep, “YouTube, the Great Radicalizer.”) And she may be disturbed at how an apparently more radicalized culture around her interacts with her in more disturbing ways.

Users may be concerned both about censorship of their own content (censorship that may seem unjustified) and about platforms’ use of data, which may seem to be designed to manipulate them or else manipulate other people. In response, users (and others) may demand that platforms track bad speakers or retain data about who bad speakers are (e.g., to prevent bad speakers from abandoning “burned” user accounts and returning with new accounts to create the same problems) as well as about what speakers say (so as to police bad speech more). But the short-term pursuit of pressuring platforms to censor more or differently, or to gather less data (about users themselves) or more data (about how users’ data are being used), has a predictable outcome: to the extent the companies respond to these pressures, governments may leverage platforms’ responses to user complaints in ways that make it easier for government to pressure platforms for more user content control (not always with the same concerns that individual users have) or to provide user data (because governments like to invoke the “third-party” doctrine to get access to data that users have “voluntarily” left behind on internet companies’ and platform providers’ services).

(b) Dyad 2: Governments’ demands on platforms (content and data)

Government efforts to impose new moderation obligations on platforms, even in response to user complaints, may result in versions of the platforms that users value less, as well as more pressure on government to intervene further.

In the United States, internet platform companies (like many other entities, including ordinary blog-hosting servers and arguably bloggers themselves) will find that their First Amendment rights are buttressed and extended by Section 230 of the Communications Decency Act, which generally prohibits content-based liability for those who reproduce on the internet content that is originated by others. Although a full discussion of the breadth of and the exceptions to Section 230—which was enacted as part of the omnibus federal Telecommunications Act reform in 1996—is beyond the scope of this particular paper, it is important to underscore that Section 230 extends the scope of protection for “intermediaries” more broadly than First Amendment case law alone, if we are to judge by relevant digital-platform cases prior to 1996, might have done. But the embryonic case law in those early years of the digital revolution seemed to be moving in a direction that would have resulted in at least some First Amendment protections for platforms consistent with principles that protect traditional bookstores from legal liability for the content of particular books. One of the earliest prominent cases concerning online computer services, Cubby v. CompuServe (1991), drew heavily on a 1959 Supreme Court case, Smith v. California, which established that bookstores and newsstands were properly understood to deserve First Amendment protections based on their importance to the distribution of First Amendment-protected content.

Section 230’s broad, bright-line protections (taken together with the copyright-specific protections for internet platforms created by the Digital Millennium Copyright Act in 1998) are widely interpreted by legal analysts and commentators as having created the legal framework that gave rise to internet-company success stories like Google, Facebook, and Twitter. These companies, as well as a raft of smaller, successful enterprises like Wikipedia and Reddit, originated in the United States and were protected in their infancy by Section 230. Even critics of the platforms—and there are many—typically attribute the success of these enterprises to the scope of Section 230. So it’s no great surprise to discover that many and perhaps most critics of these companies (who may be government actors or private individuals) have become critics of Section 230 and want to repeal or amend it.

In particular, government entities in the United States, both at the federal level and at the state level, have sought to impose greater obligations on internet platforms not merely to remove content that is purportedly illegal, but also to prevent that content from being broadcast by a platform in the first place. The notice-and-takedown model of the Digital Millennium Copyright Act of 1998, which lends itself to automated enforcement and remedies to a higher degree than non-copyright-related content complaints, is frequently suggested by government stakeholders as a model for how platforms ought to respond to complaints about other types of purportedly illegal content, including user-generated content. That copyright enforcement, as distinct from enforcement of other communications-related crimes or private causes of action, is comparatively much simpler than most other remedies in communications law is a fact typically passed over by those who are unsympathetic to today’s social-media landscape.

Although I’m focusing here primarily on U.S. government entities, this tendency is also evident among the governments of many other countries, including many countries that rank as “free” or “partly free” in Freedom House’s annual world freedom report. It may be reasonably asserted that the impulse of governments to offload the work of screening for illegal (or legal but disturbing) content is international. The European Union, for example, is actively exploring regulatory schemes that implicitly or explicitly impose content-policing norms on platform companies and that impose quick and large penalties if the platforms fail to comply. American platforms, which operate internationally, must abide by these systems at least with regard to their content delivery within EU jurisdictions as well as (some European regulators have argued) anywhere else in the world.

Added to governments’ impulse to impose content restrictions and policing obligations on platforms is governments’ hunger for the data that platforms collect. Not every aspect of the data that platforms like Google and Facebook and Twitter collect on users is publicly known, nor have the algorithms (decision-making processes and criteria implemented by computers) that the platforms use to decide what content may need monitoring, or what content users might prefer, been generally published. The reasons some aspects of the platforms’ algorithmic decision-making remain undisclosed may be generally reduced to two primary arguments. First, the platforms’ particular choices about algorithmically selecting and serving content, based on user data, may reasonably be classed as trade secrets, so that if they were made utterly public a competitor could free-ride on the platforms’ (former) trade secrets to develop competing products. Second, if platform algorithms are made wholly public, it becomes easier for anyone—ranging from commercial interests to mischievous hackers and state actors—to “game” content so that it is served to more users by the platform algorithms.

Governments, recognizing that protections for platforms have made it easier for the platforms to survive and thrive, may wish to modify the protections they have granted, or to impose further content-moderation obligations on platforms as a condition of statutory protections. But even AI-assisted moderation measures will necessarily be either post-hoc (which means that lots of objectionable content will be public before the platform curates it) or pre-hoc (which means that platforms will become gatekeepers of public participation, shoehorning users into a traditional publishing model or an online-forum model as constrained by top editors as the early version of the joint Sears-IBM service Prodigy was).

(c) Dyad 3: People (and traditional press) versus government

New, frequently market-dominant internet platforms for speakers create new government temptations and capabilities to (i) surveil online speech, (ii) leverage platforms to suppress dissident or unpopular speech or deplatform speakers, and/or (iii) employ or compel platforms to manipulate public opinion (or to regulate or suppress manipulation).

It’s trivially demonstrable that some great percentage of complaints about censorship in open societies is grounded in individual speakers’ or traditional publishers’ complaints that government is acting to suppress certain kinds of speech. Frequently the speech in question is political speech, but sometimes it is speech of other kinds (e.g., allegedly defamatory, threatening, fraudulent, or obscene). This dyad is, for the most part, the primary subject matter of traditional First Amendment law. It is also a primary focus of international free-expression law, where freedom of expression is understood to be guaranteed by national or international human-rights instruments (notably Article 19 of the International Covenant on Civil and Political Rights).

But this dyad has been distorted in the twenty-first century, in which troubling political speech or other kinds of troubling public speech are, more often than not, mediated by internet platforms. It is easier on some platforms, but by no means all platforms, for speakers to be anonymous or pseudonymous. Anonymous or pseudonymous speech is not universally regarded by governments as a boon to public discourse, and frequently governments will want to track or even prosecute certain kinds of speakers. Tracking such speakers was difficult (although not necessarily impossible) in the pre-internet era of unsigned postcards and ubiquitous public telephones. But internet platforms have created new opportunities to discover, track, and suppress speech as a result of the platforms’ collection of user data for their own purposes.

Every successful internet platform that allows users to express themselves has been a target of government demands for disclosure of information about users. In addition, internet platforms are increasingly the target of government efforts to mandate assistance (including the building of more surveillance-supportive technologies) in criminal-law or national-security investigations. In most ways this is analogous to the 1994 passage of CALEA in the United States, which obligated telephone companies (that is, providers of voice telephony) to build technologies that facilitated wiretapping. But a major difference is that the internet platforms more often than not capture far more information about users than telephone companies traditionally had done. (This generalization to some extent oversimplifies the difference, given that there is frequently convergence between the suites of services that internet platforms and telephone companies—or cable companies—now offer their users.)

Governmental monitoring may suppress dissenting (or otherwise troubling) speech, but governments (and other political actors, such as political parties) may also use internet platforms to create or potentiate certain kinds of political speech in opposition to the interests of users. Siva Vaidhyanathan documents particular uses of Facebook advertising in the 2016 election that aimed to achieve political results, including not just voting for an approved candidate but also dissuading some voters from voting at all.

As Vaidhyanathan writes: “Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue.” Plus this: “Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, ‘they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,’ said Professor David Carroll of the Parsons School of Design.”

There are legitimate differences of opinion regarding the proper regime for regulation of political advertising, as well as regarding the extent to which regulation of political advertising can be implemented consistent with existing First Amendment precedent. It should be noted, however, that advertising of the sort that Vaidhyanathan discusses raises issues not only of campaign spending (although in 2016, at least, the spending on targeted Facebook political advertising of the “Custom Audiences” variety seems to have been comparatively small) but also of transparency and accountability. Advertising that’s micro-targeted and ephemeral is arguably not accountable to the degree that an open society should require. There will be temptations for government actors to use mechanisms like “Custom Audiences” to suppress opponents’ speech—and there also will be temptations for government to limit or even abolish such micro-targeted instances of political speech.

What is most relevant here is that the government may act on temptations either to employ features like “Custom Audiences” or to suppress the use of those features by other political actors in non-transparent or less formal ways (e.g., through the “jawboning” that Jack Balkin describes in his “New School Speech Regulation” paper). Platforms—especially market-dominant platforms that, as a function of their success and dominance, may be particularly targeted on speech issues—may feel pressured to remove dissident speech in response to government “jawboning” or other threats of regulation. And, given the limitations of both automated and human-based filtering, a platform that feels compelled to respond to such governmental pressure is almost certain to generate results that are inconsistent and that give rise to further dissatisfaction, complaints, and suspicions on the part of users—not just the users subject to censorship or deplatforming, but also users who witness such actions and disapprove of them.

Considered both separately and together, the traditional “dyadic” models of how to regulate free speech each tend to focus on two vertices of the free-speech triangle while overlooking a third vertex, whose stakeholders may intervene in, distort, exploit, or be exploited by the outcomes of conflicts between the other two stakeholder groups. What this suggests is that no “dyadic” conception of the free-speech ecosystem is sufficiently complex and stable to protect freedom of expression or, for that matter, citizens’ autonomy interests in privacy and self-determination. This leaves us with the question of whether it is possible to direct our law and policy in a direction that takes into account today’s “triangular” free-speech ecosystem in ways that provide stable, durable, expansive protections of freedom of speech and other valid interests of all three stakeholder groups. That question is the subject of Part 3 of this series.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.

Posted on Techdirt - 28 November 2018 @ 11:56am

Our Bipolar Free-Speech Disorder And How To Fix It (Part 1)

When we argue about how to respond to complaints about social media and internet companies, the resulting debate seems to break down into two sides. On one side, typically, are those who argue that it ought to be straightforward for companies to monitor (or censor) more problematic content. On the other are people who insist that the internet and its forums and platforms—including the large dominant ones like Facebook and Twitter—have become central channels for exercising freedom of expression in the 21st century, and we don’t want to risk that freedom by forcing the companies to be monitors or censors, not least because they’re guaranteed to make as many lousy decisions as good ones.

By reflex and inclination, I usually have fallen into the latter group. But after a couple of years of watching various slow-motion train wrecks centering on social media, I think it’s time to break out of the bipolar disorder that afflicts our free-speech talk. Thanks primarily to a series of law-review articles by Yale law professor Jack Balkin, I now believe free-speech debates no longer can be simplified in terms of government-versus-people, companies-versus-people, or government-versus-companies. No “bipolar” view of free speech on the internet is going to give us the complete answers, and it’s more likely than not to give us wrong answers, because today speech on the internet isn’t really bipolar at all—it’s an “ecosystem.”

Sometimes this is hard for civil libertarians, particularly Americans, to grasp. The First Amendment (like analogous free-speech guarantees in other democracies) tends to reduce every free-speech or free-press issue to people-versus-government. The people spoke, and the government sought to regulate that speech. By its terms, the First Amendment is directed solely at averting government impulses to censor, whether aimed at (a) publishers’ right to publish controversial content or (b) individual speakers’ right to speak controversial content. This is why First Amendment cases most commonly are named either with the government as a listed party (e.g., Chaplinsky v. New Hampshire) or with a representative of the government, acting in his or her official role, as a named party (e.g., Attorney General Janet Reno in Reno v. ACLU).

But in some sense we’ve always known that this model is oversimplified. Even cases in which the complainant was nominally a private party still involved government action in the form of enactment of speech-restrictive laws that gave rise to the complaint. In New York Times Co. v. Sullivan, the plaintiff, Sullivan, was a public official, but his defamation case against the New York Times was grounded in his reputational interest as an ordinary citizen. In Miami Herald Publishing Company v. Tornillo, plaintiff Tornillo was a citizen running for a state-government office who invoked a state-mandated “right of reply” because he had wanted to compel the Herald to print his responses to editorials that were critical of his candidacy. In each of these cases, the plaintiff’s demand did not itself represent a direct exercise of government power. The private plaintiffs’ complaints were personal to them. Nevertheless, in each of these cases, the role of government (in protecting reputation as a valid legal interest, and in providing a political candidate a right of reply) was deemed by the Supreme Court to represent an exercise of governmental power. For this reason, the Court concluded that these cases, despite their superficial focus on a private plaintiff’s cause of action, nonetheless fall under the scope of the First Amendment. Both newspaper defendants won their Supreme Court appeals.

By contrast, private speech-related disputes between private entities, such as companies or individuals, normally are not judged as directly raising First Amendment issues. In the internet era, if a platform like Facebook or Twitter chooses to censor content or deny service to a subscriber because of (an asserted) violation of its Terms of Service, or if a platform like Google chooses to delist a website that offers pharmaceutical drugs in violation of U.S. law or the law of other nations, any subsequent dispute is typically understood, at least initially, as a disagreement that does not raise First Amendment questions.

But the intersection between governmental action and private platforms and publishers has become both broader and blurrier in the course of the last decade. Partly this is because some platforms have become primary channels of communication for many individuals and businesses, and some of these platforms have become dominant in their markets. It is also due in part to concern about various ways the platforms have been employed with the goal of abusing individuals or groups, perpetrating fraud or other crimes, generating political unrest, or causing or increasing the probability of other socially harmful phenomena (including disinformation such as “fake news”).

To some extent, the increasing role of internet platforms, including but not limited to social media such as Facebook and Twitter in Western developed countries, as one of the primary media for free expression was predictable. (For example, in Cyber Rights: Defending Free Speech in the Digital Age (Times Books, 1998), I wrote this: “Increasingly, citizens of the world will be getting their news from computer-based communications—electronic bulletin boards, conferencing services, and networks—which differ institutionally from traditional print media and broadcast journalism.” See also “Net Backlash = Fear of Freedom,” Wired, August 1995: “For many journalists, ‘freedom of the press’ is a privilege that can’t be entrusted to just anybody. And yet the Net does just that. At least potentially, pretty much anybody can say anything online – and it is almost impossible to shut them up.”)

What was perhaps less predictable, prior to the rise of market-dominant social-media platforms, is that government demands regarding content may result in “private governance” (where market-dominant companies become the agents of government demands but implement those demands less transparently than enacted legislation or recorded court cases do). What this has meant is that individual citizens concerned about exercising their freedom of expression in the internet era may find that exercising their option to “exit” (in the Albert O. Hirschman sense) may impose great costs.

At the same time, lack of transparency about platform policy (and private governance) may make it difficult for individual speakers to interpret what laws or policies lie behind the censorship of their content (or the exclusion of themselves or others) in ways that would enable them to give effective “voice” to their complaints. For example, they may infer that their censorship or “deplatforming” represents a political preference that has the effect of “silencing” their dissident views, which in a traditional public forum might be clearly understood as protected by First Amendment-grounded free-speech principles.

These perplexities, and the current public debates about freedom of speech on the internet, create the need to reconsider internet free speech not as a simplistic dyad, or as a set of simplistic, self-contained dyads, but instead as an ecosystem in which decisions in one part may well lead to unexpected, undesired effects in other parts. A better approach would be to consider internet freedom of expression “ecologically”: to treat expression on the internet as an “ecosystem,” and to approach the various legal, regulatory, policy, and economic choices as “free-speech environmentalists,” with the underlying goal of protecting the internet free-speech ecosystem in ways that protect individuals’ fundamental rights.

Of course, individuals have more fundamental rights than freedom of expression. Notably, there is an international consensus that individuals deserve, inter alia, some kind of right to privacy, although, as with expression, there is some disagreement about what the scope of privacy rights should be. But changing the consensus paradigm of freedom of expression so that it is understood as an ecosystem not only will improve law, regulation, and policy regarding free speech, but also will provide a model that may prove fruitful in other areas, like privacy.

In short, we need a theory of free speech that takes into account complexity. We need to build consensus around that theory so that stakeholders with a wide range of political beliefs nevertheless share a commitment to the complexity-accommodating paradigm. In order to do this, we need to begin with a taxonomy of stakeholders. Once we have the taxonomy, we need to identify how the players interact with one another. And ultimately we need some initiatives that suggest how we may address free-speech issues in ways that are not shortsighted, reactive, and reductive, but forward-looking, prospective, and inclusive.

The internet ecosystem: a taxonomy.

Fortunately, Jack Balkin’s recent series of law-review articles has given us a head start on building that theory, outlining the complex relationships that now exist among citizens, government actors, and companies that function as intermediaries. These paradigm-challenging articles culminate in a synthesis reflected in his 2018 law-review article “Free Speech is a Triangle.”

Balkin rejects simple dyadic models of free speech. Because an infographic is sometimes worth 1000 words, it may be most convenient to reproduce Balkin’s diagram of what he refers to as a “pluralistic” (rather than “dyadic”) model of free speech. Here it is:

Balkin recognizes that the triangle may be taken as oversimplifying the character of particular entities within any set of parties at a “corner.” For example, social-media platforms are not the same things as payment systems, which aren’t the same things as search engines or standard-setting organizations. Nevertheless, entities in any given corner may have roughly the same interests and play roughly the same roles. End-users are not the same things as “Legacy Media” (e.g., the Wall Street Journal or the Guardian), yet both may be subject to “private governance” from internet platforms or subject to “old-school speech regulation” (laws and regulation) imposed by nation-states or treaties. (“New-school speech regulation” may arise when governments compel or pressure companies to exercise speech-suppressing “private governance.”)

Certainly some entities within this triangularized model may be “flattened” in the diagram in ways that don’t reveal the depth of their relationships to other parties. For example, a social-media company like Facebook may collect vastly more data (and use it in far more unregulated ways) than a payment system (and certainly far more than a standard-setting organization). Balkin addresses the problem of Big Data collection by social-media companies and others, including the issue of how Big Data may be used in ways that inhibit or distort free speech, by suggesting that such data-collecting companies be considered “information fiduciaries” with obligations that parallel those of more traditional fiduciaries such as doctors and lawyers. (He has developed this idea further in separate articles, both sole-authored and co-authored with Jonathan Zittrain.)

Properly understood, the information-fiduciary paradigm maps more clearly to privacy interests than to free-expression interests, but the collection, maintenance, and use of large amounts of user data can also matter in free-speech contexts. The information-fiduciary concept may not seem directly relevant to content issues. But it’s indirectly relevant if the information fiduciary (possibly but not always at the behest of government) uses user data to try to manipulate users through content, or discloses users’ content choices to government (for example).

In addition, information fiduciaries functioning as social-media platforms have a different relationship with their users, who create the content that makes these platforms attractive. In the traditional world of newspapers and radio, publishers had a close, voluntary relationship with the speakers and writers who created their content, which meant that traditional-media entities had strong incentives to protect their creators generally. To a large degree, publisher and creator interests were aligned, although there were predictable frictions, as when a newspaper’s or broadcaster’s advertisers threatened to remove financial support for controversial speakers and writers.

With online platforms, that alignment is much weaker, if it exists at all: Platforms lack incentives to fight for their users’ content, and indeed may have incentives to censor it themselves for private profit (e.g., advertising dollars). In the same way that the traditional legal or financial or medical fiduciary relationship is necessary to correct possible misalignment of incentives, the “information fiduciary” relationship ought to be imposed on platforms to correct their misaligned incentives toward private censorship. In a strong sense, this concept of information fiduciary is a key to understanding how a new speech framework is arguably necessary, and how it might work.

I’ve written elsewhere about how Balkin’s concept of social-media companies (and others) as information fiduciaries might actually position the companies to be stronger and better advocates of free expression and privacy than they are now. But that’s only one piece of the puzzle when it comes to thinking ecologically about today’s internet free-speech issues. The other pieces require us to think about the other ways in which “bipolar thinking” about internet free speech not only causes us to misunderstand our problems but also tricks us into coming up with bad solutions. And that’s the subject I’ll take up in Part 2.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.

Posted on Techdirt - 25 October 2018 @ 01:26pm

Last Chance To Opt Out Of #MyHealthRecord, Australians!

Australia’s controversial and clumsy rollout of its “My Health Record” program this summer didn’t cause the “spill” (what Australians call an abrupt turnover of party leadership in Parliament) that gave the country a new Prime Minister in August. But it didn’t improve public trust in the government either. The program, which aims to create a massive nationally administered database of more or less every Australian’s health care records, will pose massive privacy and security risks for the citizens it covers, with less-than-obvious benefits for patients, the medical establishment, and the government.

Citizen participation in the new program isn’t quite mandatory, but it’s nearly so, thanks to the government’s recent shift of the program from purely voluntary to “opt-out.” Months before the planned rollout, which began June 16, at least one poll suggested that a sizable minority of Australians don’t want the government to keep their health information in a centralized health-records database.

In response to ongoing concern about the privacy impact of the program (check out #MyHealthRecord on Facebook and Twitter), the new government is pushing for legislative changes aimed at addressing the growing public criticism of the program. But many privacy advocates and health-policy experts say the proposed fixes, while representing some improvements on particular privacy issues, don’t address the fundamental problem. Specifically, the My Health Record program, which originally was designed as a voluntary program, is becoming an all-but-mandatory health-record database for Australian citizens, held (and potentially exploited) by the government.

Australia’s shifting of its electronic-health-records program to “opt-out” (which means citizens are automatically included in the program unless they take advantage of a short-term “window” to halt automatic creation of their government-held health records) is a textbook example of how to further undermine trust in a government that already has trust issues when it comes to privacy. Every government that imposes record-keeping requirements that impact citizen privacy should view Australia’s abrupt shift to “opt-out” health-care records as an example of What Not To Do.

And yet: supporters of My Health Record have persisted in their commitment to “opt out” during the shift from Malcolm Turnbull’s administration to that of his successor, Scott Morrison. This means that if an Australian doesn’t invest time and energy into invoking her right not to be included in the database, within the less-than-one-month window that citizens currently have to make this choice, she will be included by default.

In other words, any citizen’s health-care records in the program will be held by the government throughout that citizen’s lifetime and will persist for 30 years after that citizen’s death. Even if an Australian chose later to opt out of the program, the record might still (theoretically) be accessible to health-care providers and government officials. Health Minister Greg Hunt introduced legislation last summer that would address some of these complaints about the program, but it’s unclear whether the Australian Parliament, which has weathered several leadership shifts over the past decade, has the focus or will to implement the changes.

The fact is, the automatic creation of your My Health Record could still result in a permanent health-care record that’s outside of any individual Australian’s control because the government can always repeal any law or regulation requiring deletion or limiting access. In effect, “My Health Record” is a misnomer: a more accurate name for the program would be “The Government’s Health Records About You.”

A great deal of Australian media coverage of the rollout has been critical of the “full steam ahead” approach of the Turnbull government and, later, the Morrison government. The pushback against My Health Record has been immense. Worse, citizens who have rushed to opt out of the program have found the system less than easy to navigate, whether on the Web or through a government call center. The flood of Australians who attempted to opt out of the program on the first day they were allowed to do so found that they were unwitting beta testers, stress-testing the opt-out system. Since those first-day numbers, the government has either declined or been unable to disclose how many Australians are opting out. But a Sydney Morning Herald report in July said the number of opt-outs might “run into the millions.”

In kind of a weird mirror-universe adventure, Australia has managed to reproduce the same kind of public concern that sank a similar health-care effort in the United Kingdom just a few years ago. Phil Booth of the UK’s Medconfidential privacy-advocacy group told the Guardian that “[t]he parallels are incredible” and that “this system seems to be the 2018 replica of the 2014 care.data.” After a government-appointed commission underscored privacy and security concerns, the UK’s “care.data” program was abandoned in 2016. Unfortunately for Australians, in the Australian version of the UK’s “care.data” scheme, Spock has a beard.

The UK’s experience suggests that the policy problem signaled by the opposition to the My Health Record initiative is bigger than Australia. That shouldn’t be a surprise. After all, a developed country may provide a “universal health care” program like the United Kingdom’s National Health Service, or a more “mixed” system (a public health care program supplemented by private insurers like that of Australia) or even an insurance-centric public-health program like Obamacare. But whatever the system, the appeal of “big data” approaches to create efficiencies in health care is broad, in the abstract.

But despite the theoretical appeal of #MyHealthRecord, there’s a paucity of actual economic research showing that centralized health-care databases will provide benefits that recoup the costs of investment. (Australia’s program has been estimated to cost more than $2 billion AUD so far, and it’s not yet fully implemented.) No one, in or out of government, has made a business case for My Health Record that uses actual numbers. Instead, the chief argument in favor of MHR is that it will enable health-care providers to share patient data more easily, which supposedly will save money. But health-care workers, much as they hate the paperwork associated with it, mostly know that there’s no substitute for taking a fresh patient history at the point of intake.

The push for a national database of personal health information is a fairly recent development, even though the country’s health-care system has been in place in more or less its current form since 1984. The Australian Department of Health announced in 2010 that the government would be spending nearly half a billion Australian dollars to build a system of what were then called Personally Controlled Electronic Health Records. The primary idea was to make it more efficient to share critical patient information among health-care providers treating the same person.

Another purported benefit would be standardization. Like the United States (where proposals for a national health-records system have sometimes been promoted), Australia is a federal system of states and territories, each of which has its own government. The concern was that a failure to set national standards for digital health records would lead to the states and territories developing their own, possibly mutually incompatible systems. Geography compounds the problem: the states and territories sit mostly on the coasts surrounding Australia’s dry, unpopulated Outback, and the distances separating different pockets of its population (now 25 million) make integration harder.

The 2010 announcement of the Personally Controlled Electronic Health Records program stated expressly that “[a] personally controlled electronic health record will not be mandatory to receive health care.” The basic model was opt-in: starting in 2012, Australians had to actively choose to create their shared digital health records. If you didn’t register for the program, you didn’t create a PCEHR. If you did register, you had the assurance that, under the government-promulgated Australian Privacy Principles, your personal health information would be strongly protected.

In practice, the PCEHR program, eventually rebranded as My Health Record, has never had much appeal to most citizens. The government burned somewhere near or past $2 billion AUD, and yet, years into the program, the total number of citizens who had volunteered to “opt in” to have their health records shared and available in the program was only about 6 million. According to a March report in Australia’s medical-news journal, the Medical Republic, Australia’s physicians also seem to be less than sold on the value of the program.

Prior to the latest push for a shift to “opt-out,” only a few citizens saw much benefit (much less any fun or personal return) in investing the time it takes to master producing a complete and useful health record, and even those who did only rarely ended up using its key features. (Some health-fashion-forward citizens who do want to share their health-care records easily have opted to invest in more private solutions rather than rely on a centralized database that may be less controllable and less complete.)

By 2014 it was clear that the Australian government (control of which had shifted to the more conservative of the two major parties) wanted to move in a closer-to-mandatory direction. It did so by announcing a wholesale conversion of the My Health Record database from opt-in to opt-out. This meant that, if you were an Australian citizen, a health record would be created automatically for you unless you explicitly said you didn’t want one. But the possibility of opting out hasn’t quelled these ongoing complaints from the general public:

  1. The still-too-short, too-limited opt-out window. Australians were originally given a three-month window, starting July 16, to opt out of My Health Record. (It was later extended to November 15. Of course, critics regard the one-month extension as something less than stellar.) If you don’t opt out in the approved window, an electronic health record will be created for you. By default, the program provides that the government will keep the record for 30 years after your death. And the government will have the right to access the record, whether you’ve died or not, “for maintenance, audit and other purposes required or authorised[sic] by law.”
  2. This goes on your permanent record. The law already authorizes a lot of government access (for law-enforcement agencies, court proceedings, and other non-health-related purposes). And of course the laws can be amended to authorize even more access. Were you ever treated for alcohol poisoning? Did you ever have an abortion? You may be able to limit access somewhat by tweaking the privacy controls of “My Health Record,” but (unless you take strong, affirmative steps otherwise) it’s never erased. And it may be demanded by a range of government authorities for all sorts of reasons under current or future laws or regulations.
  3. The disputed warrant requirement. The Australian Digital Health Agency, the relatively new government agency in charge of the program, said a warrant would be required, but that claim was contradicted by Australia’s Parliamentary Library, whose analysis found that non-health government agencies could gain access with few if any procedural or privacy safeguards. Disturbingly, the Parliamentary Library’s report was abruptly removed and revised after pushback from the Turnbull government. (The removed report has been reproduced here.) A subsequent Senate inquiry, with a report issued October 12, shows growing consensus behind adding a warrant requirement before law enforcement gets access to health records, but the Australian Labor Party and the Australian Greens have dissented on the question of whether a warrant requirement fixes the problems: Per the Greens, the warrant requirement is “an improvement on the status quo, but it is an insufficient and disappointing one.”
  4. And none of these criticisms even touches on the fact that a centralized health-care record database will give 900,000 health-care workers (not just doctors) comparatively unrestricted, untracked access to patient health records. By comparison, the average Australian under the pre-My Health Record system likely had to worry only about dozens of people having access to her health records, not hundreds of thousands.

Then-Prime Minister Malcolm Turnbull was dismissive of privacy concerns early on, arguing that “there have been no privacy complaints or breaches with My Health Record in six years and there are over 6 million people with My Health Records.” But many prominent health-care and privacy experts argue that the government’s new promises to patch the system are inadequate. For example, requiring government agencies to get a warrant does nothing to protect patients from unauthorized access to their records by health-care workers with access to the My Health Record system. And the Labor members have argued that the new system needs a statutory provision that prevents health-care insurers from accessing My Health Record’s data.

Typical of the external critics is former Australian Medical Association President Kerryn Phelps, who views the promises as “minor concessions” that are “woefully inadequate.” Phelps, who cites a survey showing that 75 percent of doctors are themselves planning to opt out, called for “full parliamentary review” of the My Health Record program. Other critics have argued the government has painted itself into a corner because of the “sunk costs” of $2 billion AUD. Bernard Robertson-Dunn of the Australian Privacy Foundation argues that, despite the billions the government has already spent, Australia needs to reboot its digital-health initiative entirely.

But many of the critics of My Health Record in Parliament seem to be maneuvering to lessen the privacy harms likely to ensue from the shift to near-mandatory participation in My Health Record. In this, they may be driven by the fear that writing off the Australian health-care-records program would look too much like the abject failure that was the UK’s “care.data” program. But Robertson-Dunn views the unwillingness of some members of Parliament to cut their losses as short-sighted, given the likely long-term harms the system poses to citizens’ health privacy. Better to scrap My Health Record and write off the costs so far, he argues. Once that’s done, he says, Australia can “[s]tart with a problem patients and doctors have and go from there.”

Mike Godwin (mnemonic@gmail.com) is a distinguished senior fellow at R Street Institute.

Posted on Techdirt - 16 July 2018 @ 10:40am

Everything That's Wrong With Social Media Companies and Big Tech Platforms, Part 3

I’ve written two installments in this series (part 1 is here and part 2 is here). And while I could probably turn itemizing complaints about social-media companies into a perpetual gig somewhere — because there’s always going to be new material — I think it’s best to list just a few more for now. After that, we ought to step back and weigh what reforms or other social responses we really need. The first six classes of complaints are detailed in Parts 1 and 2, so we begin here in Part 3 with Complaint Number 7.

(7) Social media are bad for us because they’re so addictive to us that they add up to a kind of deliberate mind control.

As a source of that generalization we can do no better than to begin with Tristan Harris’s July 28, 2017 TED talk, titled “How a handful of tech companies control billions of minds every day.”

Harris, a former Google employee, left Google in 2015 to start a nonprofit organization called Time Well Spent. That effort has now been renamed the Center for Humane Technology ( http://www.timewellspent.io now resolves to https://humanetech.com). Harris says his new effort — which also has the support of former Mozilla interface designer Aza Raskin and early Facebook funder Roger McNamee — represents a social movement aimed at making us more aware of the ways in which technology, including social media and other internet offerings as well as our personal devices, is continually designed and redesigned to be more addictive.

Yes, there’s that notion of addictiveness again — we looked in Part 2 at claims that smartphones are addictive and talked about how to address that problem. But regarding the “mind control” variation of this criticism, it’s worth examining Harris’s specific claims and arguments to see how they compare to other complaints about social media and big tech generally. In his TED talk, Harris begins with the observation that social-media notifications on your smart devices may lead you to have thoughts you otherwise wouldn’t think:

“If you see a notification it schedules you to have thoughts that maybe you didn’t intend to have. If you swipe over that notification, it schedules you into spending a little bit of time getting sucked into something that maybe you didn’t intend to get sucked into.”

But, as I’ve suggested earlier in this series, this feature of continually tweaking content to attract your attention isn’t unique to internet content or to our digital devices. This is something every communications company has always done — it’s why ratings services for traditional broadcast radio and TV exist. Market research, together with attempts to deploy that research and to persuade or manipulate audiences, has been at the heart of the advertising industry for far longer than the internet has existed, as Vance Packard’s 1957 book THE HIDDEN PERSUADERS suggested decades ago.

One major theme of Packard’s THE HIDDEN PERSUADERS is that advertisers increasingly relied less on consumer surveys (derisively labeled “nose-counting”) and more on “motivational research” — often abbreviated by 1950s practitioners as “MR” — to look past what consumers say they want. Instead, the goal was to learn how consumers actually behave, and then gear advertising content to shape or leverage their unconscious desires. Packard’s narratives in THE HIDDEN PERSUADERS are driven by revelations of the disturbing and even scandalous agendas of MR entrepreneurs and the advertising companies that hire them. Even so, Packard is careful in his book, in its penultimate chapter, to address what he calls “the question of validity” — that is, the question of whether the “hidden persuaders'” strategies and tactics for manipulating consumers and voters are actually scientifically grounded. Quite properly, Packard acknowledges that the claims of the MR companies may have been oversold, or may have been adopted by companies that simply lacked any other strategy for figuring out how to reach and engage consumers.

In spite of Packard’s scrupulous efforts to make sure that no claims of advertising’s superpowers to sway our thinking are accepted uncritically, our culture nevertheless has accepted, at least provisionally, the idea that advertising (and its political cousin, propaganda) affects human beings at pre-rational levels. It is this acceptance of the idea that content somehow takes us over that Tristan Harris invokes consistently in his writings and presentations about how social media, the Facebook newsfeed, and internet advertising work on us.

Harris prefers to describe how these online phenomena affect us in deterministic ways:

“Now, if this is making you feel a little bit of outrage, notice that that thought just comes over you. Outrage is a really good way also of getting your attention. Because we don’t choose outrage — it happens to us.”

“The race for attention [is] the race to the bottom of the brainstem.”

Nothing Harris says about the Facebook newsfeed would have seemed foreign to a Madison Avenue advertising executive in, say, 1957. (Vance Packard includes commercial advertising as well as political advertising as centerpieces of what he calls “the large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.”) Harris describes Facebook and other social media in ways that reflect time-honored criticisms of advertising generally, and mass media generally.

But remember that what Harris says about internet advertising or Facebook notifications or the Facebook news feed is true of all communications. It is the very nature of communications among human beings that they give us thoughts we would not otherwise have. It is the very nature of hearing things or reading things or watching things that we can’t unhear them, or unread them, or unwatch them. This is not something uniquely terrible about internet services. Instead it is something inherent in language and art and all communications. (You can find a good working definition of “communications” in Article 19 of the United Nations’ Universal Declaration of Human Rights, which states that individuals have the right “to seek, receive, or impart information.”) That some people study and attempt to perfect the effectiveness of internet offerings — advertising or Facebook content or anything else — is not proof that they’re up to no good. (They arguably are exercising their human rights!) Similarly, the fact that writers and editors, including me, try to study how words can be more effective when it comes to sticking in your brain is not an assault on your agency.

It should give us pause that so many complaints about Facebook, about social media generally, about internet information services, and about digital devices actively (if maybe also unconsciously) echo complaints that have been made about any new mass medium (or mass-media product). What’s lacking in modern efforts to criticize social media in particular — and especially when it comes to big questions like whether social media are damaging to democracy — is any willingness by most critics to look at their own hypotheses skeptically, seeking falsification (which philosopher Karl Popper rightly notes is a better test of the robustness of a theory) rather than verification.

As for all the addictive harms that are caused by combining Facebook and Twitter and Instagram and other internet services with smartphones, isn’t it worth asking critics whether they’ve considered turning notifications off for the social-media apps?

(8) Social media are bad for us because they get their money from advertising, and advertising — especially effective advertising — is inherently bad for us.

Harris’s co-conspirator Roger McNamee, whose authority to make pronouncements on what Facebook and other services are doing wrong derives primarily from his having gotten richer from them, is blunter in his assessment of Facebook as a public-health menace:

“Relative to FB, the combination of an advertising model with 2.1 billion personalized Truman Shows on the ubiquitous smartphone is wildly more engaging than any previous platform … and the ads have unprecedented effectiveness.”

There’s a lot to make fun of here–the presumption that 2.1 billion Facebook users are just creating “personalized Truman Shows,” for example. Only someone who fancies himself part of an elite that’s immune to what Harris calls “persuasion” would presume to draw that conclusion about the hoi polloi. But let me focus instead on the second part–the bit about the ads with “unprecedented effectiveness.” Here the idea is, obviously, that advertising may be better for us when it’s less effective.

Let’s allow for a moment that maybe that claim is true! Even if that’s so, advertising has played a central role in Western commerce for at least a couple of centuries, and in world commerce for at least a century, and the idea that we need to make advertising less effective is, I think fairly clearly, a criticism of capitalism generally. Now, capitalism may very well deserve that sort of criticism, but it seems like an odd critique coming from someone who’s already profited immensely from that capitalism.

And it also seems odd that it’s focused particularly on social media when, as we have the helpful example of THE HIDDEN PERSUADERS to remind us, we’ve been theoretically aware of the manipulations of advertising for all of this century and at least half of the previous one. If you’re going to go after commercialism and capitalism and advertising, you need to go big–you can’t just say that advertising suddenly became a threat to us because it’s more clearly targeted to us based on our actual interests. (Arguably that’s a feature rather than a bug.)

In responding to these criticisms, McNamee says “I have no interest in telling people how to live or what products to use.” (I think the meat of his and Harris’s criticisms suggests otherwise.) He explains his concerns this way:

“My focus is on two things: protecting the innocent (e.g., children) from technology that harms their emotion development and protecting democracy from interference. I do not believe that tech companies should have the right to undermine public health and democracy in the pursuit of profits.”

As is so often the case with entrepreneurial moral panics, the issue ultimately devolves to “protecting the innocent” — some of whom surely are children but some other proportion of whom constitute the rest of us. In an earlier part of his exploration of these issues on the venerable online conferencing system The WELL, McNamee makes clear, in fact, that he really is talking about the rest of us (adults as well as children):

“Facebook has 2.1 billion Truman Shows … each person lives in a bubble tuned to their emotions … and FB pushes emotional buttons as needed. Once it identifies an issue that provokes your emotions, it works to get you into groups of like-minded people. Such filter bubbles intensify pre-existing beliefs, making them more rigid and extreme. In many cases, FB helps people get to a state where they are resistant to ideas that conflict with the pre-existing ones, even if the new ideas are demonstrably true.”

These generalizations wouldn’t need much editing to fit 20th-century criticisms of TV or advertising or comic books or 19th-century criticisms of dime novels or 17th-century criticisms of the theater. What’s left unanswered is the question of why this new mass medium is going to doom us when none of the other ones managed to do it.

(9) Social media need to be reformed so they aren’t trying to make us do anything or get anything out of us.

It’s possible we ultimately may reach some consensus on how social media and big internet platforms generally need to be reformed. But it’s important to look closely at each reform proposal to make sure we understand what we’re asking for and also that we’re clear on what the reforms might take away from us. Once Harris’s TED talk gets past the let-me-scare-you-about-Facebook phase, it gets better — Harris has a program for reform in mind. Specifically, he proposes what he calls “three radical changes to our society,” which I will paraphrase and summarize here.

First, Harris says, “we need to acknowledge that we are persuadable.” Here, unfortunately, he elides the distinction between being persuaded (which involves evaluation and crediting of arguments or points of view) and being influenced or manipulated (which may happen at an unconscious level). (In fairness, Vance Packard’s THE HIDDEN PERSUADERS is guilty of the same elision.) But this first proposition isn’t radical at all — even if we’re sticks-in-the-mud, we normally believe we are persuadable. It may be harder to believe that we are unconsciously swayed by how social media interact with us, but I don’t think it’s exactly a radical leap. We can take it as a given, I think, that internet advertising and Facebook’s and Google’s algorithms try to influence us in various ways, and that they sometimes succeed. The next question then becomes whether this influence is necessarily pernicious, but Harris passes quickly over this question, assuming the answer is yes.

Second, Harris argues, we need new models and systems guaranteeing accountability and transparency for the ways in which our internet services and digital devices try to influence us. Here there’s very little to argue with. Transparency about user-experience design that makes us more self-aware is all to the good. So that doesn’t seem like a particularly radical goal either.

It’s in Harris’s third proposal — “We need a design renaissance” — that you actually do find something radical. As Harris explains it, we need to redesign our interactions with services and devices so that we’re never persuaded to do something that we may not initially want to do. He states, baldly, that “the only form of ethical persuasion that exists is when the goals of the persuader are aligned with the goals of the persuadee.” This is a fascinating proposition that, so far as I know, is not particularly well-grounded in fact or in the history of rhetoric or in the history of ethics. It seems clear that sometimes it’s necessary to persuade people of ideas that they may be predisposed not to believe, and that, in fact, they may be more comfortable not believing.

Given that fact, if we are worried about whether Facebook’s algorithms lead to “filter bubbles,” we should hesitate before calling for (or designing) a system built around the idea of never persuading anyone whose goals aren’t already aligned with yours. Arguably, such a social-media platform might be more prone to filter bubbles rather than less so. One doesn’t get the sense, reviewing Harris’s presentations or other public writings and statements from his allies like Roger McNamee, either that they’ve compared current internet communications with previous revolutions driven by new mass-communications platforms, or that they’ve analyzed their theories in light of the centuries of philosophical inquiry regarding human autonomy, agency, and ethics.

Moving past Harris’s TED talk, we next must consider McNamee’s recent suggestion that Facebook move from an advertising-supported model to a for-pay model. In a February 21 Washington Post op-ed, McNamee wrote the following:

“The indictments brought by special counsel Robert S. Mueller III against 13 individuals and three organizations accused of interfering with the U.S. election offer perhaps the most powerful evidence yet that Facebook and its Instagram subsidiary are harming public health and democracy. The best option for the company — and for democracy — is for Facebook to change its business model from one based on advertising to a subscription service.”

In a nutshell, the idea here is that the incentives of advertisers, who want to compete for your attention, will necessarily skew how even the most well-meaning version of advertising-supported Facebook interacts with you, and not for the better. So the fix, he argues, is for Facebook to get rid of advertising altogether. “Facebook’s advertising business model is hugely profitable,” he writes, “but the incentives are perverse.”

It’s hard to escape the conclusion that McNamee believes either (a) advertising is inherently bad, or (b) advertising made more effective by automated internet platforms is particularly bad. Or both. And maybe advertising is, in fact, bad for us. (That’s certainly a theme of Vance Packard’s THE HIDDEN PERSUADERS, as well as of more recent work such as Tim Wu’s 2016 book THE ATTENTION MERCHANTS.) But it’s hard to escape the conclusion that McNamee, troubled by Brexit and by President Trump’s election, wants to kick the economic legs out from under Facebook’s (and, incidentally, Google’s and Bing’s and Yahoo’s) economic success. Algorithm-driven serving of ads is bad for you! It creates perverse incentives! And so on.

It’s true, of course, that some advertising algorithms have created perverse incentives (so that Candidate Trump’s provocative ads were seen as more “engaging” and therefore were sold cheaper — or, alternatively, more expensively — than Candidate Clinton’s). I think the criticism of that particular algorithmic approach to pricing advertising is valid. But there are other ways to design algorithmic ad service, and it seems to me that the companies that have been subject to the criticisms are being responsive to them, even in the absence of regulation. This, I think, is the proper way to interpret Mark Zuckerberg’s newfound reflection (and maybe contrition) over Facebook’s previous approach to its users’ experience, and his resolve — honoring without mentioning Tristan Harris’s longstanding critique — that “[o]ne of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent.”

Some Alternative Suggestions for Reform and/or Investigation

It’s not too difficult, upon reflection, to wonder whether the problem of “information cocoons” or “filter bubbles” is really as terrible as some critics have maintained. If hyper-addictive filter bubbles have historically unprecedented power to overcome our free will, surely they would have this effect even on the most assertive, independently thinking, strong-minded individuals — like Tristan Harris or Roger McNamee. Even six-sigma-degree individualists might not escape! But the evidence that this is, in fact, the case is less than overwhelming. What seems more likely (especially in the United States and in the EU) is that people who are dismayed by the outcome of the Brexit referendum or the U.S. election are trying to find a Grand Unifying Theory to explain why things didn’t work out the way they’d expected. And social media are new, and they seem to have been used by mischievous actors who want to skew political processes, so it follows that the problem is rooted in technology generally or in social media or in smartphones in particular.

But nothing I write here should be taken as arguing that social media definitely aren’t causing or magnifying harms. I can’t claim to know for certain. And it may well be the case, in fact, that some large subset of human beings create “filter bubbles” for themselves regardless of what media technologies they’re using. That’s not a good thing, and it’s certainly worth figuring out how to fix that problem if it’s happening, but treating it as a phenomenon specific to social media focuses on a symptom of the human condition rather than on a disease grounded in technology.

In this context, then, the question is, what’s the fix? There are some good suggestions for short-term fixes, such as the platforms’ adopting transparency measures regarding political ads. That’s an idea worth exploring. Earlier in this series I’ve written about other ideas as well (e.g., using grayscale on our iPhones).

There are, of course, more general reforms that aren’t specific to any particular platform. To start with, we certainly need to address more fundamental problems — meta-platform problems, if you will — of democratic politics, such as the failure to teach critical thinking. We actually do know how to teach critical thinking — thanks to the ancient Greeks we’ve got a few thousand years of work done already on that project — but we’ve lacked the social will to teach it universally. It seems to me that this is the only way by which a cranky individualist minority that’s not easily manipulated by social media, or by traditional media, can become the majority. Approaching all media (including radio, TV, newspapers, and other traditional media — not just internet media, or social media) with appropriate skepticism has to be part of any reform policy that will lead to lasting results.

It’s easy, however, to believe that education — even the rigorous kind of education that includes both traditional critical-thinking skills and awareness of the techniques that may be used in swaying our opinions — will not be enough. One may reasonably believe that education can never be enough, or that, even when education is sufficient to change behavior (consider the education campaigns that reduced smoking or led to increased use of seatbelts), education all by itself simply takes too long. So, in addition to education reforms, there probably are more specific reforms — or at least a consensus as to best practices — that Facebook, other platforms, advertisers, government, and citizens ought to consider. (It seems likely that, to the extent private companies don’t strongly embrace public-spirited best-practices reforms, governments will be willing to impose such reforms in the absence of self-policing.)

One of the major issues that deserve more study is the control and aggregation of user information by social-media platforms and search services. It’s indisputable that online platforms have potentiated a major advance in market research — it’s trivially easy nowadays for the platforms to aggregate data as to which ads are effective (e.g., by inspiring users to click through to the advertisers’ websites). Surely we should be able to opt out, right?

But there’s an unsettled public-policy question about what opting out of Facebook means or could mean. In his testimony earlier this year at Senate and House hearings on Facebook, Mark Zuckerberg has consistently stressed that individual users do have some high degree of control over the data (pictures, words, videos, and so on) that they’ve contributed to Facebook, and that users can choose to remove the data they’ve contributed. Recent updates in Facebook’s privacy policy seem to underscore users’ rights in this regard.

It seems clear that Facebook is committing itself at least to what I call Level 1 Privacy: you can erase your contributions from Facebook altogether and “disappear,” at least when it comes to information you have personally contributed to the platform. But does that also mean that other people who’ve shared my stuff no longer can share it (in effect, allowing me to punch holes in other people’s sharing of my stuff when I depart)?

If Level 1 Privacy relates to the information (text, pictures, video, etc.) that I’ve posted, that’s not the end of the inquiry. There’s also what I have called Level 2 Privacy, centering on what Facebook knows about me, or can infer from my having been on the service, even after I’ve gone. Facebook has had a proprietary interest in drawing inferences from how we interact with its service and in using those inferences to inform what content (including but not limited to ads) Facebook serves to us. That’s Facebook’s data, not mine, because FB generated it, not me. If I leave Facebook, surely Facebook retains some data about me based on my interactions on the platform. (We also know, in the aftermath of Zuckerberg’s testimony before Congress, that Facebook manages to collect data about people who themselves are not users of the service.)

And then there’s Level 3 Privacy, which is the question of what Facebook can and should do with this inferential data that it has generated. Should Facebook share it with third parties? What about sharing it with governments? If I depart and leave a resulting hole in Facebook content, are there still ways to connect the dots so that not just Facebook itself, but also third-party actors, including governments, can draw reliable inferences about the now-absent me? In the United States, there arguably may be Fourth Amendment issues involved, as I’ve pointed out in a different context elsewhere. We may reasonably conclude that there should be limits on how such data can be used and on what inferences can be drawn. This is a public-policy discussion that needs to happen sooner rather than later.

Apart from privacy and personal-data concerns, we ought to consider what we really think about targeted advertising. If the criticism of targeted advertising, “motivational research,” and the like historically has been that the ads are pushing us, then the criticism of internet advertising seems to be that internet-based ads are pulling us or even seducing us, based on what can be inferred about our inclinations and preferences. Here I think the immediate task has to be to assess whether the claims made by marketers and advertisers regarding the manipulative effects ads have on us are scientifically rigorous and testable. If the claims stand up to testing, then we have some hard public-policy questions we need to ask about whether and how advertising should be regulated. But if they aren’t — if, in fact, our individual intuitions are right that we retain freedom and autonomy even in the face of internet advertising and all the data that can be gathered about us — then we need to assert that freedom and autonomy and acknowledge that, just maybe, there’s nothing categorically oppressive about being invited to engage in commercial transactions or urged to vote for a particular candidate.

Both the privacy questions and the advertising questions are big, complex questions that don’t easily devolve to traditional privacy talk. If in fact we need to tackle these questions pro-actively, I think we must begin by defining what the problems are in ways that all of us (or at least most of us) agree on. Singling out Facebook is the kind of single-root-cause theory of what’s wrong with our culture today that may appeal to us as human beings — we all like straightforward storylines — but that doesn’t mean it’s correct. Other internet services harvest our data too. And non-internet companies have done so (albeit in more primitive ways) for generations. It is difficult to say they never should do so, and it’s difficult to frame the contours of what best practices should be.

But if we’re going to grapple with the question of regulating social-media platforms and other internet services, thinking seriously about what best practices should be, generally speaking, is the task that lies before us now. Offloading the public-policy questions to the platforms themselves — by calling on Facebook or Twitter or Google to censor antisocial content, for example — is the wrong approach, because it dodges the big questions that we need to answer. Plus, it would likely entrench today’s well-moneyed internet incumbents.

Nobody elected Mark Zuckerberg or Jack Dorsey (or Tim Cook or Sundar Pichai) to do that for us. The theory of democracy is that we decide the public-policy questions ourselves, or we elect policymakers to do that for us. But that means we each have to do the heavy lifting of figuring out what kinds of reforms we think we want, and what kind of commitments we’re willing to make to get the policies right.

Mike Godwin (mnemonic@gmail.com) is a Distinguished Senior Fellow at R Street Institute.

Posted on Techdirt - 5 June 2018 @ 12:07pm

Has Facebook Merely Been Exploited By Our Enemies? Or Is Facebook Itself The Real Enemy?

Imagine that you’re a new-media entrepreneur in Europe a few centuries back, and you come up with the idea of using moveable type in your printing press to make it easier and cheaper to produce more copies of books. If there are any would-be media critics in Europe taking note of your technological innovation, some will be optimists. The optimists will predict that cheap books will hasten the spread of knowledge and maybe even fuel a Renaissance of intellectual inquiry. They’ll predict the rise of newspapers, perhaps, and anticipate increased solidarity of the citizenry thanks to shared information and shared culture.

Others will be pessimists: they’ll foresee that the cheap spread of printed information will undermine institutions, will lead to doubts about the expertise of secular and religious leaders (who are, after all, better educated and better trained to handle the information that’s now finding its way into ordinary people’s hands). The pessimists will guess, quite reasonably, that cheap printing will lead to more publication of false information, heretical theories, and disruptive doctrines, which in turn may lead, ultimately, to destructive revolutions and religious schisms. The gloomiest pessimists will see, in cheap printing and later in the cheapness of paper itself (which makes it possible for all sorts of “fake news” to be spread), the sources of centuries of strife and division. And because the pain of the bad outcomes of cheap books is sharper and more attention-grabbing than contemplation of the long-term benefits of having most of the population know how to read, the gloomiest pessimists will seem to many to possess the more clear-eyed vision of the present and of the future. (Spoiler alert: both the optimists and the pessimists were right.)

Fast-forward to the 21st century, and this is just where we’re finding ourselves when we look at public discussion and public policy centering on the internet, digital technologies, and social media. Two recent books written in the aftermath of revelations about mischievous and malicious exploitation of social-media platforms (especially Facebook and Twitter) exemplify this zeitgeist in different ways. And although both of these books are filled with valuable information and insights, they also yield (in different ways) to the temptation to see social media as the source of more harm than good. Which leaves me wanting very much both to praise what’s great in these two books (which I read back-to-back) and to criticize them where I think they’ve gone too far over to the Dark Side.

The first book is Clint Watts’s MESSING WITH THE ENEMY: SURVIVING IN A SOCIAL MEDIA WORLD OF HACKERS, TERRORISTS, RUSSIANS, AND FAKE NEWS. Watts is a West Point graduate and former FBI agent who’s an expert on today’s information warfare, including efforts by state actors (notably Russia) and non-state actors (notably Al Qaeda and ISIS) to exploit social media both to confound enemies and to recruit and inspire allies. I first heard of the book when I attended a conference at Stanford this spring where Watts, who has testified several times on these issues, was a presenter. His presentation was an eye-opener, erasing whatever lingering doubt I might have had about the scope and organization of those who want to use today’s social media for malicious or destructive ends.

In MESSING WITH THE ENEMY Watts relates in a bracing yet matter-of-fact tone not only his substantive knowledge as a researcher and expert in social-media information warfare but also his first-person experiences in engaging with foreign terrorists active on social-media platforms and in being harassed by terrorists (mostly virtually) for challenging them in public exchanges. “The internet brought people together,” Watts writes, “but today social media is tearing everyone apart.” He notes the irony of social media’s receiving premature and overgenerous credit for democratic movements against various dictatorships but later being exploited as platforms for anti-democratic and terrorist initiatives:

“Not long after many across the world applauded Facebook for toppling dictators during the Arab Spring revolutions of 2010 and 2011, it proved to be a propaganda platform and operational communications network for the largest terrorist mobilization in world history, bringing tens of thousands of foreign fighters under the Islamic State’s banner in Syria and Iraq.”

And it wasn’t just non-state terrorists who learned quickly how to leverage social-media platforms; an increasingly activist and ambitious Russia, under the direction of Russian President Vladimir Putin, did so as well. Watts argues persuasively that Russia not only assisted and sponsored relatively inexpensive disinformation and propaganda campaigns using the social-media platforms to encourage divisiveness and lack of faith in government institutions (most successfully with the Brexit vote and the 2016 American elections) but also actively supported the hacking of the Democratic National Committee computer network, which led to email dumps (using Wikileaks as a cutout). The security breaches, together with “computational propaganda” (social-media “bots” that mimicked real users in spreading disinformation and dissension), played an important role in the U.S. election, Watts writes, helping “the race remain close at times when Trump might have fallen completely out of the running.” Even so, Watts doesn’t believe Russian propaganda efforts alone would have tilted the outcome of the election; what they did instead was hobble support for Clinton so much that when FBI Director James Comey announced, one week before the election, that the Clinton email-server investigation had reopened, the Clinton campaign couldn’t recover. “Without the Comey letter,” he writes, “I believe Clinton would have won the election.” Later in the book he connects the dots more explicitly: “Without the Russian influence effort, I believe Trump would not have been within striking distance of Clinton on Election Day. Russian influence, the Clinton email investigation, and luck brought Trump a victory—all of these forces combined.”

Where Watts’s book focuses on bad actors who exploit the openness of social-media platforms for various malicious ends, Siva Vaidhyanathan’s ANTISOCIAL MEDIA: HOW FACEBOOK DISCONNECTS US AND UNDERMINES DEMOCRACY argues that the platforms, and especially Facebook, are inherently corrosive to democracy. (Full disclosure: I went to school with Vaidhyanathan, worked on our student newspaper with him, and consider him a friend.) Acknowledging his intellectual debt to his mentor, the late social critic Neil Postman, Vaidhyanathan blames the negative impacts of various exploitations of Facebook and other platforms on the platforms themselves. Postman was a committed technopessimist, and Vaidhyanathan takes time in ANTISOCIAL MEDIA to chart how Postman’s general skepticism about new information technologies ultimately led his younger colleague to temper his originally optimistic view of the internet and digital technologies generally. If you read Vaidhyanathan’s work over time, you find a progressively darker view of the internet and its ongoing evolution, one that takes a significantly more pessimistic turn around the time of his 2011 book, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY). In that earlier book, Vaidhyanathan took pains to be as fair-minded as he could in raising questions about Google and whether it can or should be trusted to play such an outsized role in our culture as the mediator of so much of our informational resources. He was skeptical (not unreasonably) about whether Google’s confidence in both its own good intentions and its own expertise is sufficient reason to trust the company, not least because a powerful company can stay around as a gatekeeper for the internet long past the time its well-intentioned founders depart or retire.

With ANTISOCIAL MEDIA, Vaidhyanathan gives Mark Zuckerberg (and his COO, Sheryl Sandberg) rather less of a break. Facebook’s leadership, as I read Vaidhyanathan’s take, is both more arrogant than Google’s and more heedless of the consequences of its commitment to connect everyone in the world through the platform. Synthesizing a full range of recent critiques of Facebook’s design as a platform, he relentlessly characterizes Facebook as driving us toward shallow, reactive responses to one another rather than promoting reflective discourse that might improve or advance our shared values. Facebook, in his view, distracts us instead of inspiring us to think. It’s addictive in something like the way gambling or potato chips can be. Facebook privileges the visual (photographs, images, GIFs, and the like), he insists, over the verbal and discursive.

And of course even the verbal content is either filter-bubbly (as when we convene in private Facebook groups to share, say, our unhappiness about current politics) or divisive (so that we share and intensify our outrage about other people’s bad behavior, maybe including screenshots of something awful someone has said elsewhere on Facebook or on Twitter). Vaidhyanathan suggests that our political discourse as ordinary citizens was at one point more rational and reflective but has now become more emotion- and rage-driven and divisive. Me, I think the emotionalism and rage were always there.

Even when Vaidhyanathan allows that there may be something positive about one’s interactions on Facebook, he can’t quite keep himself from being reductive and dismissive about it:

“Nor is Facebook bad for everyone all the time. In fact, it’s benefited millions individually. Facebook has also allowed people to find support and community despite being shunned by friends and family or being geographically isolated. Facebook is still our chief source of cute baby and puppy photos. Babies and puppies are among the things that make life worth living. We could all use more images of cuteness and sweetness to get us through our days. On Facebook babies and puppies run in the same column as serious personal appeals for financial help with medical care, advertisements for and against political candidates, bogus claims against science, and appeals to racism and violence.”

In other words, Facebook may occasionally make us feel good for the right reasons (babies and puppies), but that’s about the best most people can hope for from the platform. Vaidhyanathan has a particular antipathy toward Candy Crush, which you can connect to your Facebook account: a video game that certainly seems vacuous, but that also seems innocuous to me. (I’ve never played it myself.)

Given his antipathy toward Facebook, you might think that Vaidhyanathan’s book is just another reworking of the moral-panic tomes we’ve seen so many of in the last year or two, which decry the internet and social media in much the same way previous generations of would-be social critics complained about television, or the movies, or rock music, or comic books. (Hi, Jonathan Taplin! Hi, Franklin Foer!) But that would be a mistake, primarily because Vaidhyanathan digs deep into the choices (some technical and some policy-driven) that Facebook has made that facilitated bad actors’ using the platform maliciously and destructively. To his credit, Vaidhyanathan also gives attention to how oppressive governments have learned to use the platform to stifle dissent and mute political opposition. (Watts notes this as well.) I was particularly pleased to see him call out how Facebook is used in India, the Philippines, and Cambodia, all countries where I’ve been privileged to work directly with pro-democracy NGOs.

What I find particularly valuable is Vaidhyanathan’s exploration of Facebook’s advertising policies and their effect on political ads. I learned plenty from ANTISOCIAL MEDIA about the company’s “Custom Audiences from Customer Lists,” including this disturbing bit:

“Facebook’s Custom Audiences from Customer Lists also gives campaigns an additional power. By entering email addresses of those unlikely to support a candidate or those likely to support an opponent, a campaign can narrowly target groups as small as twenty people and dissuade them from voting at all. ‘We have three major voter suppression operations under way,’ a campaign official told Bloomberg News just weeks before the election. The campaign was working to convince white leftists and liberals who had supported socialist Bernie Sanders in his primary bid against Clinton, young women, and African American voters not to go to the polls on election day. The campaign carefully targeted messages on Facebook to each of these groups. Clinton’s former support for international trade agreements would raise doubts among leftists. Her husband’s documented affairs with other women might soften support for Clinton among young women….”

What one saw in Facebook’s deployment of the Custom Audiences feature was something fundamentally new and disturbing:

“Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue. Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, ‘they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,’ said Professor David Carroll of the Parsons School of Design. Such ads are created on a massive scale, targeted at groups as small as twenty, and disappear, so they are never examined or debated.”

Vaidhyanathan quite properly criticizes Mark Zuckerberg’s late-to-the-party recognition that Facebook may be much more of a home to divisiveness and political mischief (and general unhappiness) than he previously had been willing to admit. And he’s right to say that some of Zuckerberg’s framing of new design directions for Facebook may be as likely to cause harm (e.g., more self-isolation in filter bubbles) as good. “The existence of hundreds of Facebook groups devoted to convincing others that the earth is flat should have raised some doubt among Facebook’s leaders that empowering groups might not enhance the information ecosystem of Facebook,” he writes. “Groups are as likely to divide us and make us dumber as any other aspect of Facebook.”

But here I have to take issue with my friend Siva, because he overlooks or dismisses the possibility that Facebook’s increasing support for “groups” of like-minded users may ultimately add up to a net social positive. For example, the #metoo groups seem to have enabled more women (and men) to come forward and talk frankly about their experiences with sexual assault and to begin to hold perpetrators of sexual assault and sexual harassment accountable. The fact that some folks also use Facebook groups for more frivolous or wrongheaded reasons (like promoting flat-earthism) strikes me as comparatively inconsequential.

Vaidhyanathan is also too quick, it seems to me, to dismiss the potential for Facebook and other platforms to facilitate political and social reform in transitional democracies and developing countries. Yes, bad governments can use social media to promote support for their regimes, and I don’t think it’s particularly remarkable that oppressive governments (or non-state actors like ISIS) learn to use new communications media maliciously. Governments may frequently be slow, but they’re not invariably stupid, so it’s no big surprise, for example, that Cambodian prime minister Hun Sen has figured out how to use his Facebook page to drum up support for his one-party rule, which has driven out the opposition press and the opposition Cambodia National Rescue Party.

But Vaidhyanathan overlooks how some activists are using Facebook’s private groups to organize reform or opposition activities. In researching this review, I reached out to friends and colleagues in Cambodia, the Philippines, and elsewhere to confirm whether the platform is useful to them. They’re certainly cautious about what they say in public on Facebook, but they definitely use private groups for some organizational purposes. What makes the platform useful to activists is that it’s accessible, easy to use, and amenable to posting multimedia sources (like pictures and videos of police and soldiers acting brutally toward protestors). And it’s not just images: when I worked with activists in Cambodia on developing a citizen-rights framework as a response to their government’s abrupt initiation of “cybercrime” legislation (really an effort to suppress dissenting speech), I suggested they work collaboratively in the MediaWiki software that Wikipedia’s editors use. But the Cambodian activists quickly discovered that Facebook was an easier platform for technically less proficient users to learn quickly and to use to review draft texts together. I was surprised by this, but also encouraged. Even though I had my own doubts about whether Facebook was the right tool for the job, I figured they didn’t need yet another American trying to tell them how to manage their own collaborations.

Like Watts’s book, Vaidhyanathan’s is strongest where it’s built on independent research that doesn’t merely echo what other critics have said. And both books are weakest when they uncritically import notions like Eli Pariser’s “filter bubble” hypothesis or the social-media-makes-us-depressed hypothesis. (Both notions echo previous moral panics about earlier new media, including broadcasting in the 20th century and cheap paper in the 19th, and both have been challenged by researchers.) Vaidhyanathan is so certain of the meme that Facebook’s Free Basics program is an assault on network neutrality that he mostly doesn’t investigate the program itself in any detail. The result is that his book (to this reader, anyway) seems to conflate Free Basics (a collection of low-bandwidth resources for which Facebook provided a zero-rated platform) with Facebook Zero (a zero-rated, low-bandwidth version of Facebook by itself). In contrast, the Wikipedia articles on Free Basics and Facebook Zero lead off with warnings not to confuse the two.

In addition to the strengths and weaknesses the two books share, they also have a certain rhetorical approach in common, largely, in my view, because both authors want to push for reform, and because they want to challenge the sunny-yet-unwarranted optimism with which Zuckerberg, Sandberg, and other boosters have characterized social media. In effect, both authors take the approach that, as we learn to be much more critical of social-media platforms, we don’t need to worry about throwing out the baby with the bathwater, because, really, there is no baby. (If we bailed on Facebook altogether, it’s only the frequent baby pictures that we’d lose.)

Even so, both books also share an unwillingness to call for simple opposition to Facebook and other social-media platforms merely because they’re misused. Watts argues persuasively instead for more coherent and effective positive messaging about American politics and culture, of the sort that used to be the province of the United States Information Agency. (I think he’d be happy if the USIA were revived; I would be too.) He also calls for an “equivalent of Consumer Reports” to “be created for social media feeds,” which also strikes me as a fine idea.

Vaidhyanathan’s reform agenda is less optimistic. For one thing, he’s dismissive of “media literacy” as a solution because he doubts “we could even agree on what that term means and that there would be some way to train nearly two billion people to distinguish good from bad content.” He has some near-term suggestions; for example, he’d like to see an antitrust-type initiative to break up Facebook, although it’s unclear to me whether multiple competing Facebooks or a disassembled Facebook would be less hospitable to the kind of shallowness and abuses he sees in the platform’s current incarnation. But mostly he calls for a kind of cultural shift driven by social critics and researchers like himself:

“This will be a long process. Those concerned about the degradation of public discourse and the erosion of trust in experts and institutions will have to mount a campaign to challenge the dominant techno-fundamentalist myth. The long, slow process of changing minds, cultures, and ideologies never yields results in the short term. It sometimes yields results over decades or centuries.”

I agree that it frequently takes decades or even longer to truly assess how new media affect our culture for good or for ill. But as long as we’re contemplating all those years of effort, I see no reason not to put media literacy on the agenda as well. I think there’s plenty of evidence that people can learn to read what they see on the internet critically and do better than simply cherry-picking sources that agree with them, a vice that, it must be said, predates social media and the internet itself. Increasing skepticism about media platforms and the information we find on them may also lead (as Watts warns us) to more distrust of “experts” and “expertise,” with the result that true expertise is more likely to be unfairly and unwisely devalued. But my own view is that skepticism and critical thinking, even about experts and expertise, are generally positive. For example, it may be annoying to today’s physicians that patients increasingly consult the internet about their real or imagined health problems, but engaged patients, even if they have to be walked back from foolish ideas again and again, are probably better off than the more passive health-care consumers of previous generations.

I think Vaidhyanathan is right, ultimately, to urge that we continue to think about social media critically and skeptically, over decades and, you know, forever. But I think Watts offers the best near-term tactical solution:

“On social media, the most effective way to challenge a troll comes from a method that’s taught in intelligence analysis. To sharpen an analyst’s skills and judgment, a supervisor or instructor will ask the subordinate two questions when he or she provides an assessment: ‘What do those who disagree with your assessment think, and why?’ The analyst must articulate a competing viewpoint. The second question is even more important: ‘Under what conditions, specifically, would your assessment be wrong?’ […] When I get a troll on Facebook, I’ll inquire, ‘Under what circumstance would you admit you were wrong?’ or ‘What evidence would convince you otherwise?’ If they don’t answer or can’t articulate their answer, then I disregard them on that topic indefinitely.”

Watts’s heuristic strikes me as the perfect first entry in the syllabus for media literacy in particular and for criticism of social media in general.

In sum, I think both MESSING WITH THE ENEMY and ANTISOCIAL MEDIA deserve to be on every internet-focused policymaker’s must-read list this season. I also think it’s best that readers honor these books by reading them with the same clear-eyed skepticism that their authors preach.

Mike Godwin (@sfmnemonic) is a Distinguished Senior Fellow at R Street Institute.