from the just-lay-out-the-truth dept
The CEOs of Facebook, Google, and Twitter will once again testify before Congress this Thursday, this time on disinformation. Here's what I hope they will say:
Thank you, Mister Chairman and Madam Ranking Member.
While no honest CEO would ever say that he or she enjoys testifying before Congress, I recognize that hearings like this play an important role — in holding us accountable, illuminating our blind spots, and increasing public understanding of our work.
Some policymakers accuse us of asserting too much editorial control and removing too much content. Others say that we don't remove enough incendiary content. Our platforms see millions of user-generated posts every day — on a global scale — but questions at these hearings often focus on how one of our thousands of employees handled a single individual post.
As a company we could surely do a better job of explaining — privately and publicly — our calls in controversial cases. Because it's sometimes difficult to explain in time-limited hearing answers the reasons behind individual content decisions, we will soon launch a new public website that will explain in detail our decisions on cases in which there is considerable public interest. Today, I'll focus my remarks on how we view content moderation generally.
In past hearings, I and my CEO counterparts have adopted an approach of highlighting our companies' economic and social impact, answering questions deferentially, and promising to answer detailed follow-up questions in writing. While this approach maximizes comity, I've come to believe that it can sometimes leave a false impression of how we operate.
So today I'd like to take a new approach: leveling with you.
In particular, in the past I have told you that our service is "neutral." My intent was to convey that we don't pick political sides, or allow commercial influence over our editorial content.
But I've come to believe that characterizing our service as "neutral" was a mistake. We are not a purely neutral speech platform, and virtually no user-generated-content service is.
In general, we start with a Western, small-d democratic approach of allowing a broad range of human expression and views. From there, our products reflect our subjective — but scientifically informed — judgments about what information and speech our users will find most relevant, most delightful, most topical, or of the highest quality.
We aspire for our services to be utilized by billions of people around the globe, and we don't ever relish limiting anyone's speech. And while we generally reflect an American free speech norm, we recognize that norm is not shared by much of the world — so we must abide by more restrictive speech laws in many countries where we operate.
Even within the United States, however, we forbid certain types of speech that are legal but that we have chosen to keep off our service: incitements to violence, hate speech, Holocaust denial, and adult pornography, just to name a few.
We make these decisions based not on the law, but on what kind of service we want to be for our users.
While some people claim to want "neutral" online speech platforms, we have seen that services with little or no content moderation whatsoever — such as Gab and Parler — become dominated by trolling, obscenities, and conspiracy theories. Most consumers reject this chaotic, noisy mess.
In contrast, we believe that millions of people use our service because they value our approach of airing a variety of views, but avoiding an "anything goes" cesspool.
We realize that some people won't like our rules, and will go elsewhere. I'm glad that consumers have choices like Gab and Parler, and that the open Internet makes them possible. But we want our service to be something different: a pleasant experience for the widest possible audience.
Complicated info landscape means tough calls
When we first started our service decades ago, content moderation was a much less fractious topic. Today, we face a more complicated speech and information landscape including foreign propaganda, bots, disinformation, misinformation, conspiracy theories, deepfakes, distrust of institutions, and a fractured media landscape. It challenges all of us who are in the information business.
All user-generated content services are grappling with new challenges to our default of allowing most speech. For example, we have recently chosen to take a more aggressive posture toward election- and vaccine-related disinformation because those of us who run our company ultimately don't feel comfortable with our platform being an instrument to undermine democracy or public health.
As much as we aim to create consistent rules and policies, many of the most difficult content questions we face are ones we've never seen before, or involve elected officials — so the questions often end up on my desk as CEO.
Despite the popularity of our services, I recognize that I'm not a democratically elected policymaker. I'm the leader of a private enterprise. None of us company leaders takes pleasure in making speech decisions that inevitably upset some portion of our user base — or world leaders. We may make the wrong call.
But our desire to make our platform a positive experience for millions of people sometimes demands that we make difficult decisions to limit or block certain types of controversial (but legal) content. The First Amendment prevents the government from making those extra-legal speech decisions for us. So it's appropriate that I make these tough calls, because each decision reflects and shapes what kind of service we want to be for our users.
Long-term experience over short-term traffic
Some of our critics assert that we are driven solely by "engagement metrics" or "monetizing outrage" like heated political speech.
While we use our editorial judgment to deliver what we hope are joyful experiences to our users, it would be foolish for us to be ruled by weekly engagement metrics. If platforms like ours prioritized quick-hit, sugar-high content that polarizes our users, it might drive short-term usage, but it would destroy people's long-term trust and desire to return to our service. People would give up on our service if it's not making them happy.
We believe that most consumers want user-generated-content services like ours to maintain some degree of editorial control. But we also believe that as you move further down the Internet "stack" — from applications like ours to app stores, then cloud hosting, then DNS providers, and finally ISPs — most people support a norm of progressively less content moderation at each layer.
In other words, our users may not want to see controversial speech on our service — but they don't necessarily support disappearing it from the Internet altogether.
I fully understand that not everyone will agree with our content policies, and that some people feel disrespected by our decisions. I empathize with those who feel overlooked or discriminated against, and I am glad that the open Internet allows people to seek out alternatives to our service. But that doesn't mean that the US government can or should deny our company's freedom to moderate our own services.
First Amendment and CDA 230
Some have suggested that social media sites are the "new public square" and that services should be forbidden by the government to block anyone's speech. But such a rule would violate our company's own First Amendment rights of editorial judgment within our services. Our legal freedom to prioritize certain content is no different than that of the New York Times or Breitbart.
Some critics attack Section 230 of the Communications Decency Act as a "giveaway" to tech companies, but their real beef is with the First Amendment.
Others allege that Section 230's liability protections are conditioned on our service following a false standard of political "neutrality." But Section 230 doesn't require this, and in fact it incentivizes platforms like ours to moderate inappropriate content.
Section 230 is primarily a legal routing mechanism for defamation claims — making the speaker responsible, not the platform. Holding speakers directly accountable for their own defamatory speech ultimately helps encourage their own personal responsibility for a healthier Internet.
For example, if car rental companies always paid for their renters' red light tickets instead of making the renters pay, all renters would keep running red lights. Direct consequences improve behavior.
If Section 230 were revoked, our defamation liability exposure would likely require us to be much more conservative about who could post, and what types of content they could post, on our services. This would inhibit a much broader range of potentially "controversial" speech, but more importantly it would impose disproportionate legal and compliance burdens on much smaller platforms.
Operating responsibly — and humbly
We're aware of the privileged position our service occupies. We aim to use our influence for good, and to act responsibly in the best interests of society and our users. But we screw up sometimes, we have blind spots, and our services, like all tools, get misused by a very small slice of our users. Our service is run by human beings, and we ask for grace as we remedy our mistakes.
We value the public's feedback on our content policies, especially from those whose life experiences differ from those of our employees. We listen. Some people call this "working the refs," but if done respectfully I think it can be healthy, constructive, and enlightening.
By the same token, we have a responsibility to our millions of users to make our service the kind of positive experience they want to return to again and again. That means utilizing our own constitutional freedom to make editorial judgments. I respect that some will disagree with our judgments, just as I hope you will respect our goal of creating a service that millions of people enjoy.
Thank you for the opportunity to appear here today.
Adam Kovacevich is a former public policy executive for Google and Lime, former Democratic congressional and campaign aide, and a longtime tech policy strategist based in Washington, DC.
Filed Under: 1st amendment, bias, big tech, congressional hearings, content moderation, disinformation, jack dorsey, mark zuckerberg, neutral, section 230, sundar pichai
Companies: facebook, google, twitter