Zuckerberg's Grand Illusion: Understanding The Oversight Board Experiment

from the tinker-bell dept

Everyone, it seems, has an opinion about The Oversight Board -- which everyone refers to as the Facebook Oversight Board, because despite its plans to work with other social media companies, it was created by Facebook, and feels inevitably connected to Facebook by way of its umbilical cord. As we noted earlier this month, after the Oversight Board's first decisions came down, everyone who had a strong opinion about the Oversight Board seemed to use the results to confirm their existing beliefs about it.

To some, the Oversight Board is just an attempt by Facebook and Mark Zuckerberg to avoid taking responsibility for the societal-level impacts of the platform. For others, it's a cynical ploy/PR campaign to make it look like the company is giving up some of its power. To still others, it's a weak attempt to stave off regulation. To many, it's a combination of all three. And, then, to some, it's an interesting experiment in content moderation that attempts to actually separate some final decision-making authority from the website itself. And, again, it could still be some combination of all of those. As I've said since it launched, I find it to be an interesting experiment, and even if the cynical reasons are a driving force behind it, that may not matter if the Board actually turns into a sort of authority that creates change. As I recently talked about in a podcast episode, the norms may become important.

That is, even if the whole thing is a cynical marketing ploy by a finger-waving Mark Zuckerberg, that might not matter if the Board itself is actually able to create meaningful change within the company, in both its moderation decisions and its content moderation policy. And it's reasonable to point out that this has a high chance of failure, and that there are a variety of structural problems in how the Board is set up, but that doesn't mean failure is guaranteed. And there's enough of a chance that the Board could make a difference that I think it's worth paying close attention to what happens with it.

And, if you believe that it's important to understand, then you owe it to yourself to read Prof. Kate Klonick's brilliant, thorough and detailed account of the making of the Board with lots of behind-the-scenes tidbits. I can think of no one better to do this kind of reporting. Klonick, a former reporter who has a JD and a PhD and is now a law professor, wrote the seminal paper on social media content moderation, The New Governors, which is also required reading for anyone seeking to understand content moderation online.

Buried deep within the article is an important point that gets to what I say above, about how the norms around the Board might make it powerful, even if the Board is not initially imbued with actual power:

Harris likened the board to Tinkerbell: “At the end of the day you can build all the things, but you just have to have enough people that believe in order to make it real.”

Some have been arguing that Klonick's New Yorker article is, itself, Facebook PR or a puff piece, but claiming such is only evidence that either (a) you haven't read it or (b) that your reading comprehension is so bad as to call into question pretty much anything else you write. Facebook does not come out of the article looking very good. Nor does the Oversight Board. But what the article does reveal is... just how fucking messy all of this is. It's a keen distillation of my impossibility theorem as applied to the world's largest collection of speech.

Indeed, the article includes this line that I'm sure will be repeated by critics of Facebook and the Board until the end of time itself:

At one point, a director of policy joked about ways to make the board seem independent, asking, “How many decisions do we have to let the Oversight Board win to make it legit?”

The article also makes clear the competing interests (some of which are seen in the opinions people hold about the Board) regarding how much power to give the Board, and the eventual decision to... well... give it very, very little. Facebook employees were worried that giving it too much power would give the Board the ability to literally kill Facebook. Though, on the flip side, the lack of power... makes the Board seem fairly hapless.

There's also a fun section on how people wanted Board members to be selected:

One attendee suggested letting the first board members choose the rest, to preserve their independence from the company. Lauren Rivera, a professor at Northwestern’s business school, cautioned against this approach: “It’s empirically proven that when you have a group self-select, in the absence of any kind of guidance, they just pick more people that look like them.” The experts then began giving their own ideas. Journalists said that the board should be mostly journalists. International human-rights lawyers said that it should be all international human-rights lawyers. Information scientists said that it should be “anyone but lawyers.” A white man at a think tank said that it should be populated with “regular people.”

Even picking the judges, it seems, presents something of a content moderation challenge. And, as you might predict in a moment of heightened political polarization, that polarization was felt in the process of setting up the Board:

People familiar with the process told me that some Republicans were upset about what they perceived to be the board’s liberal slant. In the months leading up to the appointments, conservative groups pushed the company to make the board more sympathetic to Trump. They suggested their own lists of candidates, which sometimes included members of the President’s family, most notably Ivanka and the President’s sons. “The idea was, either fill this board with Trump-supporting conservatives or kill it,” one person familiar with the process said. In early May, shortly after the board members were announced, Trump personally called Zuckerberg to say that he was unhappy with the makeup of the board. He was especially angry about the selection of Pamela Karlan, a Stanford Law professor who had testified against him during his first impeachment. “He used Pam as an example of how the board was this deeply offensive thing to him,” the person familiar with the process said. Zuckerberg listened, and then told Trump that the members had been chosen based on their qualifications. Despite the pressure from Trump, Facebook did not change the composition of the board. (Trump declined to comment.)

That seems quite notable, of course, now that the Board is in the middle of deciding if Trump can get his account back.

But perhaps the most notable part of the article is Klonick's interview with Mark Zuckerberg himself. As Klonick notes, Zuckerberg has more or less accidentally put himself in the position of being the arbiter of what is and what is not allowed on the biggest platform for communication ever created. And he seems to be struggling with that, because figuring out how to deal with the societal-level problems that it creates is not nearly as interesting as just building products, which appears to be much more of his passion.

He looked tired. He seemed more at ease talking about “product” or “building tools” than he did discussing ethics or politics. It struck me that he was essentially a coder who had found himself managing the world’s marketplace of ideas. “The core job of what we do is building products that help people connect and communicate,” he said. “It’s actually quite different from the work of governing a community.” He hoped to separate these jobs: there would be groups of people who built apps and products, and others—including Facebook’s policy team and now the board—who deliberated the thorny questions that came along with them. I brought up a speech he gave at Georgetown, in 2019, in which he noted that the board was personally important to him, because it helped him feel that, when he eventually left, he would be leaving the company in safe hands. “One day, I’m not going to be running the company,” he told me. “I would like to not be in the position, long term, of choosing between someone who either is more aligned with my moral view and values, or actually is more aligned with being able to build high-quality products.”

I asked what kinds of cases he hopes the board will take. “If I was them, I’d be wary of choosing something that was so charged right off the bat that it was immediately going to polarize the whole board, and people’s perception of the board, and society,” he told me. He knew that critics wished the board had more power: “This is certainly a big experiment. It’s certainly not as broad as everyone would like it to be, upfront, but I think there’s a path for getting there.” But he rejected the notion that it was a fig leaf. “I’m not setting this up to take pressure off me or the company in the near term,” he said. “The reason that I’m doing this is that I think, over the long term, if we build up a structure that people can trust, then that can help create legitimacy and create real oversight. But I think there is a real risk, if it gets too polarized too quickly, that it will never be able to blossom into that.”

These statements are, at the same time, obviously true, obviously predictable... and just not very insightful. I don't envy anyone being in a position where much of the world's ills are blamed on what was your college coding side hustle, but Zuckerberg has now had a good 15 years to consider that all of these things are connected. You can't separate the policy and the product building. You can't have "the builders" and "the thinkers" as if they're two separate camps. They kind of all need to be combined.

In some sense, this reminds me (on a much different scale) of the rush in the early aughts to have all sorts of old-school companies appoint "Chief Digital Officers." As we discussed at the time, thinking digitally isn't a separate job function. For most companies, digital thinking has to be a part of everything -- and that applies to understanding the policy and ethical implications of product design as well.

It's disappointing -- though not surprising -- that Zuckerberg hasn't quite grasped this yet.

But that doesn't make the Board itself any less of an interesting experiment. Indeed, if enough people actually believe in this Tinker Bell, it might not even matter that Zuckerberg himself misunderstands how the Board itself could be useful. It might just make itself useful by accident.



Filed Under: content moderation, kate klonick, mark zuckerberg, morality, oversight, regulation
Companies: facebook, oversight board


Reader Comments



Upstream (profile), 17 Feb 2021 @ 11:45am

    Starting places

    Just because you can do it doesn't mean it is a good idea.

    Similarly: Just because you have the right to do a thing doesn't mean it is the right thing to do.

    These ideas can be applied to a very broad range of situations. In the context of this article they could be applied to creating Facebook itself, to participating in Facebook, to deciding what to post on Facebook, to deciding whether or not to collect and sell user information (or how much to collect, or who to sell it to), to moderating content, to not moderating content, and so on.

These two short sound bites provide no answers themselves, but they do provide a couple of easily remembered starting places in the search for your own answers.


Anonymous Coward, 18 Feb 2021 @ 2:00am

    I go with it

I would not want to "oversight" the majority of the world's population, whose self-awareness ends at "it's your fault I said this," either, lol


