We’ve got another cross-post episode for you this week, on a subject near and dear to our hearts: protocols over platforms, and restoring decentralization online. Mike recently joined Danny O’Brien on the DWeb Decoded podcast to talk all about these topics, as well as tell a little story about Danny’s role in the founding of Techdirt, and you can listen to the whole conversation here on this week’s episode.
When talking about content moderation, it’s easy to focus entirely on centralized platforms. But now, with the rise of more federated and decentralized systems like ActivityPub and Bluesky (and many others), it’s becoming more and more important to talk about how content moderation works in a decentralized space. This week we’re joined by Yoel Roth, the former head of Trust & Safety at Twitter and now a Tech Policy Fellow at UC Berkeley, to discuss the new and different content moderation challenges that decentralized platforms face.
A few weeks ago I wrote about an interview that Substack CEO Chris Best did about his company’s new offering, Substack Notes, and his unwillingness to answer questions about specific content moderation hypotheticals. As I said at the time, the worst part was Best’s unwillingness to own up to the content moderation plan he was describing: that the site would be quite open to hosting the speech of almost anyone, no matter how terrible. That’s a decision you can make (in the US at least), but if you’re going to make it, you have to be willing to own it and be clear about it, which Best was not.
I compared it to the “Nazi bar” problem that has been widely discussed on social media in the past: if you own a bar and don’t kick the Nazis out up front, you get a reputation as a “Nazi bar” that is difficult to get rid of.
It was interesting to see the response to this piece. Some people got mad, claiming it was unfair to call Best a Nazi, even though I was not doing that. As in the story of the Nazi bar, no one is claiming that the bar owner is a Nazi, just that the public reputation of his bar would be that it’s a Nazi bar. That was the larger point. Your reputation is what you allow, and if you’re taking a stance that you don’t want to get involved at all, and you want to allow such things, that’s the reputation that’s going to stick.
I wasn’t calling Best a Nazi or a Nazi sympathizer. I was saying that if he can’t answer a straightforward question like the one that Nilay Patel asked him, Nazis are going to interpret that as he’s welcoming them in, and they will act accordingly. So too will people who don’t want to be seen hanging out at the Nazi bar. The vaunted “marketplace of ideas” includes the ability for a large group of people to say “we don’t want to be associated with that at all…” and to find somewhere else to go.
And this brings us to Bluesky. I’ve written a bunch about Bluesky, going back to Jack Dorsey’s initial announcement, which cited my paper, among others, as part of the inspiration for betting on protocols.
As Bluesky has gained a lot of attention over the past week or so, there have been a lot of questions raised about its content moderation plans. A lot of people, in particular, seem confused by its plans for composable moderation, which we spoke about a few weeks ago. I’ve even had a few people suggest to me that Bluesky’s plans represented a similar kind of “Nazi bar” problem as Best’s interview did, in particular because their initial reference implementation shows “hate speech” as a toggle.
I’ve also seen some people claim (falsely) that Bluesky would refuse to remove Nazis based on this. I think there is some confusion here, and it’s important to go deeper on how this might work. I have no direct insight into Bluesky’s plans. And they will likely make big mistakes, because everyone in this space makes mistakes. It’s impossible not to. And, who knows, perhaps they will run into their own Nazi bar problem, but I think there are some differences that are worth exploring here. And those differences suggest that Bluesky is better positioned not to be the Nazi bar.
The first is that, as I noted in the original piece about Best, there’s a big difference between a centralized service and its moderation choices, and a decentralized protocol. Bluesky is a bit confusing to some because it’s trying to do both things. Its larger goal is to build, promote, and support the open AT Protocol as an open social media protocol for a decentralized social media system with portable identity. Bluesky itself is a reference app for the protocol, showing how things can be done — and, as such, it has to handle content moderation to avoid Bluesky itself running into the Nazi bar problem. And, at least so far, it seems to be doing that.
The team at Bluesky seems to recognize this. Unlike Best, they’re not refusing to answer the question; they’re talking openly about the challenges here, and so far have been willing to remove truly disruptive participants, as CEO Jay Graber notes here:
But, they definitely also recognize that content moderation at scale is impossible to do well, and believe that they need a different approach. And, again, the team at Bluesky recognizes at least some of the challenges facing them:
But this is where things get potentially more interesting. Under a traditional centralized social media setup, there is one single decision maker who has to make the calls. And then you’re in a benevolent dictator setup (or at least you hope the dictator stays benevolent, because the threat of a malicious one is real).
And this is where we go on a little tangent about content moderation: again, it’s not just difficult. It’s not just “hard” to do. It’s impossible to do well. The people who are moderated, with rare exceptions, will disagree with your moderation decisions. And, while many people think that there are a whole bunch of obvious cases and just a few that are a little fuzzy, the reality (this is where scale comes in) is that there are a ton of borderline cases that all come down to very subjective calls over what does or does not violate a policy.
To some extent, going straight to the “Nazi” example is unfair, because there’s a huge spectrum between the user who is a hateful bigot, deliberately trying to cause trouble, and the good, helpful user who is trying to do right. There’s a very wide range in the middle, and where people draw their own lines will differ massively. Some of it may be inadvertent or ignorant assholery. Some of it may just be trolling. Sometimes there are jokes that some people find funny and others find threatening. Sometimes people are just scared and lash out from fear or confusion. Some people feel cornered, and get defensive when they should be looking inward.
Humans are fucking messy.
And this is where the protocol approach with composable moderation becomes a lot more interesting. The most extreme calls, the ones where there are legal requirements (child sexual abuse material and copyright infringement, for example), can be handled at the protocol level. But as you start moving up into the murkier areas, where many of the calls are subjective (not so much “is this person a Nazi” but more along the lines of “is this person deliberately trolling, or just uninformed…”), the composable moderation system lets (1) end users make their own rules and (2) any number of third parties build tools to work with those rules.
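To make the layering concrete, here is a minimal sketch of how composable moderation might work. All of the type and function names are hypothetical illustrations, not the actual AT Protocol API: the idea is simply that posts carry labels applied by any number of labeling services, and each user’s preferences determine whether a given label hides the post, adds a warning, or does nothing.

```typescript
// Hypothetical sketch of composable moderation; illustrative names only.

type LabelAction = "hide" | "warn" | "show";

interface Label {
  value: string;   // e.g. "spam", "graphic-media"
  labeler: string; // the third-party labeling service that applied it
}

interface Post {
  author: string;
  text: string;
  labels: Label[];
}

interface UserPreferences {
  // Labeling services this user has opted into.
  subscribedLabelers: Set<string>;
  // What to do when a subscribed labeler applies a given label value.
  actions: Map<string, LabelAction>;
}

function moderate(post: Post, prefs: UserPreferences): LabelAction {
  let result: LabelAction = "show";
  for (const label of post.labels) {
    // Labels from services the user never subscribed to are ignored.
    if (!prefs.subscribedLabelers.has(label.labeler)) continue;
    const action = prefs.actions.get(label.value) ?? "show";
    // Apply the most restrictive action any subscribed labeler triggers.
    if (action === "hide") return "hide";
    if (action === "warn") result = "warn";
  }
  return result;
}
```

The key property is that labels and decisions live in different places: anyone can run a labeler, and every user (or client) chooses which labelers to trust and how to act on their labels, while the protocol level only handles the genuinely illegal material.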
Some people may (for perfectly good reasons, bad reasons, or no reasons at all) just not have any tolerance for any kind of ignorance. Others may be more open to it, perhaps hoping to guide ignorance to knowledge. Just as an example, outside of the “hateful” space, we’ve talked before about things like “eating disorder” communities. One of the notable things there was that when those communities were on more mainstream services, people who had gotten over an eating disorder would often go back to those communities and provide help and support to those who needed it. When those communities were booted from the mainstream services, that actually became much more difficult, and the communities became angrier and more insulated, and there was less ability for people to help those in need.
That is, there will still need to be some decision making at the protocol level (this is something that people who insist on “totally censorship proof” systems seem to miss: if you do this, eventually the government is going to shut you down for hosting CSAM), but the more of the decision making that can be pushed to a different level and the more control put in the hands of the user, the better.
This allows, first of all, for competition around better moderation, but also for variance in preferences, which is what you see in the simple version that Bluesky implemented. The biggest decisions can be made at the protocol level, but above that, let there be competitive approaches and more user control. It’s unclear exactly where Bluesky the service will come down in the end, but early indications are that the service level “Bluesky” will be more aggressive in moderating, while the protocol level “AT Protocol” will be more open.
And… that’s probably how it should be. Even the worst people should be able to use a telephone or email. But enabling competition at the service level AND at the moderation level creates more of the vaunted “marketplace of ideas,” where (unlike what some people think the marketplace of ideas is about), if you’re regularly a disruptive, disingenuous, or malicious asshole, you are much more likely to get less (or possibly no) attention from the popular moderation services and algorithms. Those are the consequences of your own actions. But you don’t get banned from the protocol.
To some extent, we’ve already seen this play out (in a slightly different form) with Mastodon. Truly awful sites like Gab, and ridiculously pathetic sites like Truth Social, both use the underlying ActivityPub and open source Mastodon code, but they have been defederated from the rest of the fediverse. They still get to use the underlying technology, but they don’t get to use it to be obnoxiously disruptive to the main userbase who wants nothing to do with them.
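Mechanically, defederation is a blunt but simple instrument: each instance keeps a list of domains it refuses to exchange activities with. A rough sketch of the idea (illustrative only; Mastodon’s actual domain blocks are richer, distinguishing silencing from suspending, rejecting media, and so on):

```typescript
// Rough sketch of server-level defederation; illustrative only.

const blockedDomains = new Set(["gab.example", "truth.example"]); // hypothetical

interface InboundActivity {
  actor: string; // e.g. "https://gab.example/users/someone"
}

// Drop any activity whose author lives on a blocked instance.
function acceptActivity(activity: InboundActivity): boolean {
  const domain = new URL(activity.actor).hostname;
  return !blockedDomains.has(domain);
}
```

The blocked sites keep running their own servers and software; they simply stop being carried by everyone else.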
With AT Protocol, and the concept of composable moderation, this can get taken even further. Rather than just having to choose your server, and be at the whims of that server admin’s moderation choices (or the pressure from other instances which keeps many instances in check and aligned), the AT Protocol setup allows for a more granular and fluid system, where there can be a lot more user empowerment, without having to resort to banning certain users from using the technology entirely.
This will never satisfy some people, who will continue to insist that the only way to stop a “bad” person is to ban them from basically any opportunity to use communications infrastructure. However, I disagree for multiple reasons. First, as noted above, outside of the worst of the worst, deciding who is “good” and who is “bad” is way more complicated and fraught and subjective than people like to note, and where and how you draw those lines will differ for almost everyone. And people who are quick to draw those lines should realize that… some other day, someone who dislikes you might be drawing those lines too. And, as the eating disorder case study demonstrated, there’s a lot more complexity and nuance than many people believe.
That’s why a decentralized solution is so much better than a centralized one. With a decentralized system, you don’t have to worry about getting cut out yourself either. Everyone gets to set their own rules and their own conditions and their own preferences. And, if you’re correct that the truly awful people are truly awful, then it’s likely that most moderation tools and most servers will treat them as such, and you can rely on that, rather than having them cut off at the underlying protocol level.
It’s also interesting to see how the decentralized social media protocol nostr is handling this. While it appears that some of the initial thinking behind it was the idea that nothing should ever be taken down, many are recognizing how impossible that is, and they’re now having really thoughtful discussions on “bottom up content moderation” specifically to avoid the “Nazi bar” problem.
Eventually, thoughtful people recognize that a community needs some level of norms and rules. The question is how those are created, how they are implemented, and how (and by whom) they are enforced. A decentralized system gives end users much greater control to have the systems and communities that more closely match their own preferences, rather than requiring a centralized authority to handle everything and live up to everyone’s expectations.
As such, you may end up with results like Mastodon/ActivityPub, where “Nazi bar” areas still form, but they are wholly separated from other users. Or you may end up with a result where the worst users are still there, shouting into the wind with no one bothering to listen, because no one wants to hear them. Or, possibly, it will be something else entirely as people experiment with new approaches enabled by a composable moderation system.
I’ll add one other note on that, because when I’ve discussed this, people sometimes highlight that there are risks beyond direct harassment: just blocking a user does not stop them from harassing, or from encouraging or directing harassment against others. This is absolutely true. But this kind of setup also allows for better tooling to monitor such things without the target having to be exposed to it directly. This could take the form of Block Party’s “lockout folder,” where a trusted third party reviews the harassing messages you’ve been receiving rather than you having to go through them yourself. Or, conceivably, other monitoring and warning services could pop up that track people doing awful things, try to keep them from succeeding, and alert the proper people if things require escalation.
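As a sketch of what such tooling could look like (hypothetical, loosely inspired by the Block Party model described above): messages caught by a user’s filters are diverted into a queue that only a trusted delegate can review, so the target never has to read them while still keeping a record for escalation.

```typescript
// Hypothetical sketch of delegated harassment review; not Block Party's code.

interface Message {
  from: string;
  text: string;
}

class LockoutFolder {
  private queue: Message[] = [];

  constructor(private readonly reviewerId: string) {}

  // Called by the user's filters instead of delivering to the inbox.
  divert(msg: Message): void {
    this.queue.push(msg);
  }

  // Only the designated trusted reviewer may read the queue and decide
  // what to escalate to the user, a moderation service, or elsewhere.
  review(requesterId: string): Message[] {
    if (requesterId !== this.reviewerId) {
      throw new Error("only the trusted reviewer may read this folder");
    }
    const pending = this.queue;
    this.queue = [];
    return pending;
  }
}
```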
In short, decentralizing things, allowing many different approaches, and building open systems and tooling doesn’t solve all problems. But it presents some creative ways to handle the Nazi bar problem that seem likely to be a lot more effective than living in denial and staring blankly into the Zoom screen as a reporter asks you a fairly basic question about how you’ll handle racist assholes on your platform.
As advocates of decentralization and a protocols-not-platforms approach to the web, there’s a lot about the concept of Web3 that sounds appealing to us at Techdirt — but the details usually leave a lot to be desired. A new project called TBD from Block aims to move beyond all that, and while its invocation of “Web5” understandably invites skepticism, it’s actually a lot more interesting. This week, we’re joined by project lead Mike Brock to discuss how TBD and the concept of Web5 aim to grapple with the true potential of decentralization.
Jack Dorsey has left Twitter, which he co-founded and ran for more than a decade. Many on the American political right frequently accused Dorsey and other prominent social media CEOs of censoring conservative content. Yet Dorsey doesn’t easily fit within partisan molds. Although Twitter is often lumped together with Facebook and YouTube, its founder’s approach to free speech and interest in decentralized initiatives such as Bluesky make Dorsey one of the more interesting online speech leaders of recent years. If you want to know what the future of social media might be, keep an eye on Dorsey.
Twitter has much in common with other prominent “Big Tech” social media firms such as Facebook and Google-owned YouTube. Like these firms, Twitter is centralized, with one set of rules and policies. Twitter is nonetheless different from other social media sites in important ways. Although often discussed in the context of “Big Tech” debates, Twitter is much smaller than Facebook and YouTube. Only about a fifth of Americans use Twitter and most are not active on the platform, with 10 percent of users being responsible for 80 percent of tweets. Despite its relatively small size, Twitter is often discussed by lawmakers because of its outsized influence among cultural and political elites.
Republican lawmakers’ focus on Twitter arose out of concerns over its content moderation policies. Over the last few years it has become common for members of Congress to decry the content moderation decisions of “Big Tech” companies. Twitter is often lumped together with Facebook and YouTube in such conversations, which is a shame given Dorsey’s views on free speech.
Dorsey has been more supportive of free speech than many on the American political right might think. Did Twitter, under Dorsey’s leadership, adhere to a policy of allowing all legal speech? Of course not. Did Twitter sometimes inconsistently apply its policies? Yes.
But no social media site could allow all legal speech. The wide range of awful but lawful speech aside, spam and other intrusive legal speech would ruin the online experience. And any social media site with millions or billions of users will experience false positives and false negatives while implementing a content moderation policy: at the scale of a billion posts a day, even 99.9 percent accuracy means a million mistakes every day.
It became clear in the last few years that Dorsey is open to new ideas that may eventually be considered mainstream. We are still in the early years of the internet and social media, and users are accustomed to centralized platforms such as Facebook, Twitter, and YouTube. But, increasingly, there are decentralized alternatives, and a few years ago Dorsey announced the decentralized social media project Bluesky, with the goal of eventually moving Twitter over to such a system.
Dorsey has not been shy about his passion for decentralization, citing the cryptocurrency bitcoin as a particular influence, “largely because of the model it demonstrates: a foundational internet technology that is not controlled or influenced by any single individual or entity. This is what the internet wants to be, and over time, more of it will be.”
I predict that in the coming years decentralized social media will gradually become more popular than current centralized platforms. As I wrote earlier this year:
“Americans across the political spectrum may look to decentralized social media and cryptocurrencies if their political allies continue to criticize household name firms. Those involved in protest movements as varied as Black Lives Matter and #StopTheSteal are especially likely to embrace such alternatives given their experiences with surveillance.
But Americans fed up with what they perceive to be politically motivated content moderation and Big Tech’s irresponsible approach to harassment and misinformation may also join an exit from popular platforms and use decentralized alternatives. If they do, members of Congress upset over the spread of specific political content, COVID-19 misinformation, and election conspiracy theories will have to reach beyond Big Tech and grapple with decentralized systems where there is no CEO to subpoena or financial institution to investigate.”
Such platforms can embrace a Twitter-like aesthetic. Mastodon, a decentralized and open source social media service, looks very similar to Twitter, allowing users to send “toots.” Gab, a right-wing social media network that also mimics Twitter, became a Mastodon fork in 2019 after adopting Mastodon’s software. As policy fights over “Big Tech” and online speech continue, we should not be surprised if more people across the political spectrum adopt decentralized social media.
Dorsey clearly believes in a future where decentralized social media replaces centralized online speech platforms. If he is vindicated in that prediction, it is likely that his legacy will be bound more to his work on decentralization than to his career at Twitter.
Matthew Feeney is the director of Cato’s Project on Emerging Technologies, where he works on issues concerning the intersection of new technologies and civil liberties.
Earlier this year we were excited to see the Filecoin Foundation give the Internet Archive its largest donation ever, to help make sure that the Internet Archive is both more sustainable as an organization, and that the works it makes available will be more permanently available on a more distributed, decentralized system. The Internet Archive is a perfect example of the type of organization that can benefit from a more distributed internet.
Another such organization is the Freedom of the Press Foundation, which, among its many, many projects, maintains and develops SecureDrop, the incredibly important tool for journalists and whistleblowers that was initially developed in part by Aaron Swartz (as DeadDrop). So it’s great to see that the Freedom of the Press Foundation has now announced the largest donation it has ever received, coming from the Filecoin Foundation for the Decentralized Web (the sister organization of the Filecoin Foundation):
Today, for the first time, that calculus has changed. We’re thrilled to announce the largest grant in the history of Freedom of the Press Foundation that will ensure SecureDrop survives — and thrives — for years to come. The Filecoin Foundation for the Decentralized Web — a new grantmaking organization whose mission is to permanently preserve humanity’s most important information — is funding FPF at over $1.7 million for each of the next three years, for a total of $5.8 million.
The funding will largely go towards sustaining and expanding our SecureDrop team, funding the development of the next generation of the system, including exploring a new zero-trust architecture for the decentralized servers. This grant will ensure that SecureDrop will not only be sustainable over the long term, but will be easier to use and hopefully safer than ever. In short, it will have a game-changing impact on how we can build and improve SecureDrop for journalists around the world. You can read about some of our technical plans for the future here.
This is great to see, as SecureDrop is another of those tools that is so key, but that, as an open source project, is often in a precarious position without the financial support needed to ensure active development.
As you know by now, much of the tech news cycle yesterday was dominated by the fact that Facebook appeared to erase itself from the internet via a botched BGP configuration. Hilarity ensued — including my favorite bit about how Facebook’s office badges weren’t working because they relied on connecting to a Facebook server that could no longer be found (also, by borking its own BGP, Facebook knocked out its own ability to fix the problem until the right people who knew what to do could get physical access to the routers).
But in talking to people who were upset about being cut off from Facebook, Instagram, WhatsApp, or Facebook Messenger, it was a good moment to remind people that another benefit of a protocols, not platforms approach is that it’s far more resilient. If you’re using Messenger and it’s down, but you can easily swap in a different tool and continue to communicate, that’s a much better, more resilient solution than relying on Facebook not to mess up. And that’s on top of all the other benefits I laid out in my paper.
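A minimal sketch of what that client-side resilience could look like, assuming hypothetical, interchangeable servers speaking the same protocol:

```typescript
// Minimal sketch of client-side failover in a protocol-based system.
// The endpoints are hypothetical; the point is that any compatible
// server can carry the same message for the same identity.

const servers = [
  "https://primary.example",
  "https://fallback-one.example",
  "https://fallback-two.example",
];

async function send(message: string): Promise<void> {
  for (const server of servers) {
    try {
      const res = await fetch(`${server}/send`, { method: "POST", body: message });
      if (res.ok) return; // delivered; no single point of failure
    } catch {
      // Server unreachable (say, its BGP routes were withdrawn); try the next.
    }
  }
  throw new Error("all servers unreachable");
}
```

With a centralized platform, there is nothing to put in that list but the platform itself.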
In fact, a protocols approach also creates stronger incentives for better uptime from services, since continually screwing up for extended periods of time doesn’t just mean losing ad revenue for a few hours; it is much more likely to lead people to permanently switch to an alternative provider.
Indeed, a key part of the internet’s original value was its resiliency: being highly distributed rather than centralized, it could continue to work well even if one part fell off the network. The increasing centralization and silo-ization of the internet has taken away much of that benefit. So, if anything, yesterday’s mess should be seen as another reason to look more closely at a protocols-based approach to building new internet services.
Last Friday, Twitter made the decision to permanently ban Donald Trump from its platform, which I wrote about at the time, explaining that it’s not an easy decision, but neither is it an unreasonable one. On Wednesday, Jack Dorsey put out an interesting Twitter thread in which he discusses some of the difficulty in making such a decision. This is good to see. So much of the content moderation debate is told in black-and-white terms, in which many people act as if one answer is “obvious” and anything else is crazy. And part of the reason for that is that many of these decisions are made behind closed doors, where no one outside gets to see the debates, or how much the people within the company explore the trade-offs and nuances inherent in such a decision.
Jack doesn’t go into that much detail, but enough to explain that the company felt that, given the wider context of everything that happened last week, it absolutely made sense to put in place the ban now, even as the company’s general stance and philosophy has always pushed back on such an approach. In short, context matters:
I believe this was the right decision for Twitter. We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety. Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.
I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here. After a clear warning we’d take this action, we made a decision with the best information we had based on threats to physical safety both on and off Twitter. Was this correct?
That said, having to ban an account has real and significant ramifications. While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation. And a time for us to reflect on our operations and the environment around us. Having to take these actions fragment the public conversation. They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.
The check and accountability on this power has always been the fact that a service like Twitter is one small part of the larger public conversation happening across the internet. If folks do not agree with our rules and enforcement, they can simply go to another internet service. This concept was challenged last week when a number of foundational internet tool providers also decided not to host what they found dangerous. I do not believe this was coordinated. More likely: companies came to their own conclusions or were emboldened by the actions of others. This moment in time might call for this dynamic, but over the long term it will be destructive to the noble purpose and ideals of the open internet. A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same.
Yes, we all need to look critically at inconsistencies of our policy and enforcement. Yes, we need to look at how our service might incentivize distraction and harm. Yes, we need more transparency in our moderation operations. All this can’t erode a free and open global internet.
I fear that many will miss the important nuances that Jack is explaining here, but there are a few overlapping points that matter. The context and the situation dictated that this was the right move for Twitter — and I think there’s clear support for that argument. However, it does raise some questions about how the open internet itself functions. If anything, this tweet thread reminds me of when Cloudflare removed the Daily Stormer from its service, and the company’s CEO, Matthew Prince, highlighted that, while the move was justified for a wide variety of reasons, he felt uncomfortable that he had that kind of power.
At the time, Prince called for a wider discussion on these kinds of issues — and unfortunately those discussions didn’t really happen. And so, we’re back in a spot where we need to have them again.
The second part of Jack’s thread highlights how Twitter is actually working to remove that power from its own hands. As he announced at the end of 2019, he is exploring a protocol-based approach that would make the Twitter system an open protocol standard, with Twitter itself just one implementation. This was based, in part, on my paper on this topic. Here’s what Jack is saying now:
The reason I have so much passion for #Bitcoin is largely because of the model it demonstrates: a foundational internet technology that is not controlled or influenced by any single individual or entity. This is what the internet wants to be, and over time, more of it will be. We are trying to do our part by funding an initiative around an open decentralized standard for social media. Our goal is to be a client of that standard for the public conversation layer of the internet. We call it @bluesky.
This will take time to build. We are in the process of interviewing and hiring folks, looking at both starting a standard from scratch or contributing to something that already exists. No matter the ultimate direction, we will do this work completely through public transparency. I believe the internet and global public conversation is our best and most relevant method of achieving this. I also recognize it does not feel that way today. Everything we learn in this moment will better our effort, and push us to be what we are: one humanity working together.
There had been some concern recently that, since nothing was said about the Bluesky project in 2020, Twitter had abandoned it. That is not at all true. There have been discussions (disclaimer: I’ve been involved in some of those discussions) about how best to approach it and who would work on it. In the fall, a variety of different proposals were submitted for Twitter to review and choose a direction to head in. I’ve seen the proposals — and a few have been mentioned publicly. I’ve been waiting for Twitter to release all of the proposals publicly to talk about them, which I hope will happen soon.
Still, it’s interesting to see how the latest debates may lead to finally having this larger discussion about how the internet works, and how it should be managed. While I’m sure Jack will be getting some criticism (because that’s the nature of the internet), I appreciate that his approach to this, like Matthew’s at Cloudflare, is to recognize his own discomfort with his own power, and to explore better ways of going about things. I wish I could say the same for all internet CEOs.
After lots and lots of speculation, Facebook finally officially announced its cryptocurrency project last week, with a big event and a white paper that loosely describes the plans for the cryptocurrency called Libra. There was a lot to discuss, so in the spirit of slow news, I wanted to take some time to actually digest the plans before opining on it more thoroughly. Nearly all of the immediate reaction to the plan that I saw was not just negative, but mockingly so. Lots of jokes about “ZuckBucks” and the most common line of all: “who would actually trust Facebook with your money.”
Having spent time actually reading the white paper, as well as much of the commentary around it, and talking to a bunch of different people — some who are supportive of the program, some who are not at all supportive, and one very knowledgeable friend who basically rated the whole program as a big “meh” — my initial take is that the effort is in many ways a lot more interesting than I expected, but a lot less interesting than I hoped, and I don’t think anyone can really have much of a sense of what will become of it until we learn more.
More Interesting Than Expected
So, let’s start with why it’s a lot more interesting than I expected. And I’ll note that, in addition to reading the white paper, I also highly recommend John Constine’s writeup about Libra at TechCrunch, which is by far the most thorough and detailed analysis of the program. What made Libra more interesting than I expected is that you can tell that a massive amount of effort and thought went into dealing with a single giant question: no one’s going to trust this, because no one trusts Facebook. The people designing this clearly knew that their biggest challenge was the massive global distrust of Facebook, and really bent over backwards to respond to that. I had kind of expected — like many big companies — that the koolaid inside would lead them to pretend that the distrust and hatred directed towards the company wasn’t that big of a deal. But, no, it’s clear that from the start, this was designed to answer the many questions raised by the “but why would anyone trust Facebook?” question.
Indeed, in a big Wired “behind the scenes” profile of the Libra project, Libra creator Dave Marcus more or less says exactly that:
Marcus, then head of Facebook Messenger, thought he had an answer. He texted his boss and told him it was time to talk about Facebook creating a cryptocurrency, saying that he had a clear view of how to do it, in a way that would earn trust even from those skeptical of Facebook. Marcus spent the next few days writing a memo that laid out his ideas.
Later, the article notes:
Libra’s greatest hurdle may well be overcoming the tarnished reputation of its creator. Marcus knows this, and thinks that the key is making sure that Libra is not synonymous with Facebook.
And you can see that in many, many elements of the design. Over and over again you see decisions that effectively take Facebook’s vision out of the equation. It’s set up an outside non-profit, the Libra Association, which will oversee the project. It’s recruiting “founding members” who will (mostly) pay $10 million to become a validator node, and Facebook itself will just be a single member with a single vote. In effect, while Facebook is building the initial scaffolding, it’s setting the project free from its own direct control. The code is open sourced and can be audited. Facebook has also set up a separate subsidiary, Calibra, which has built a wallet product that will integrate with various Facebook messaging products, but Calibra keeps Libra information isolated from the rest of the data Facebook collects on you, so this isn’t going into the big data bank of info that you already don’t trust Facebook to keep private.
If you’re actually concerned about trusting Facebook, Facebook has made a rather Herculean effort to make it clear that to use Libra you don’t actually have to trust Facebook. But that’s only true with two big caveats. First, you have to care enough to read the details, and not many people are going to do that. Second, you do still need to trust all the other validator node operators, which right now make up a rather motley mix of big organizations.
Because unlike most cryptocurrencies, where anyone can effectively become a node in the open ledger, Libra chose to, at least initially, limit that role to a bunch of founding members who will act as nodes. The reasoning behind this design choice is pretty obvious. For one, you can set it up in a way that doesn’t have the scaling and energy concerns of Bitcoin. Doing it this way can also be seen as less chaotic and less subject to questionable behavior than the fully distributed, trustless setup of Bitcoin or Ethereum, while you still have a bunch of large, mostly “respected” organizations acting as validator nodes. That means for something to go horribly wrong, a bunch of those big organizations would have to all agree to do something really, really bad. And that seems… unlikely. Not impossible. But unlikely.
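For a sense of the trust model, here is a toy version of a permissioned quorum check. LibraBFT (a HotStuff-derived consensus protocol) is far more involved, but the core safety idea in this family of Byzantine fault tolerant systems is that a block commits only when more than two-thirds of the known validators sign off, so nothing bad happens unless a third or more of those big organizations misbehave together. The member IDs below are placeholders:

```typescript
// Toy sketch of a permissioned BFT quorum check; illustrative only.
// With n known validators tolerating f faults (n = 3f + 1), a block
// commits once strictly more than two-thirds of validators sign it.

const validators = new Set([
  "member-a", "member-b", "member-c", "member-d",
  "member-e", "member-f", "member-g",
]); // placeholder IDs for the founding members

function isCommitted(signers: Set<string>): boolean {
  let valid = 0;
  for (const signer of signers) {
    if (validators.has(signer)) valid++; // ignore non-members
  }
  // 7 validators -> at least 5 signatures required (2f + 1 with f = 2).
  return valid * 3 > validators.size * 2;
}
```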
The initial members are an interesting, and slightly eclectic, mix:
Payments: Mastercard, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, Mercado Pago, Spotify AB, Uber Technologies, Inc.
Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking
The non-profits don’t have to pay the $10 million entry fee. The others do, if they decide to buy in, though the Wired piece notes in passing that this initial list has not actually committed to buying in yet:
Actually, all of those are provisional partners. At this point, their participation in the association doesn’t mean they’ve committed to paying $10 million to become a Libra node. The partners seem motivated by curiosity, FOMO, and a shared dream with Facebook that the effort could be both a boon to their ambitions in underserved economies and a milestone in the evolution of digital currency. But they have varying degrees of enthusiasm.
As Joshua Gans of the University of Toronto’s Creative Destruction Lab, one of the launch partners, puts it, the members have thus far been invited to a kind of constitutional convention. “It’s entirely possible not everyone stays part of the union after that,” he says.
Wired also notes that many of these organizations agreed to this in just the last few weeks. And Facebook has made it clear that its goal is to have 100 members by the actual launch in 2020. In short, the partner list could change a lot. And, as many people have pointed out, even if they recognize that you don’t need to trust Facebook to make use of Libra, they don’t trust a lot of those other participants either. There also aren’t any banks, or any of the other really big tech companies (most of the tech partners are a tier down). How this all shakes out in the long run is going to be very, very important.
The other interesting tidbit is that the Libra team clearly wanted to deal with the other big issue with many cryptocurrencies: the volatility that makes them mostly useless as currencies, since everyone is using them as speculative vehicles. Libra is set up so that the currency will be fully backed by a reserve of real assets to keep it stable. There are a few other “stablecoins” out there, but they’re not all that popular and, in some cases, such as with Tether, have long been accused of being a scam and not actually backed by dollars as promised (accusations that gained more credibility recently). However, given the players backing Libra, and the setup of the program, many of those kinds of concerns can be dealt with.
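The full-reserve mechanics are simple in principle: new coins are only minted against assets deposited into the reserve, and coins are burned when assets are redeemed, so circulating supply never exceeds backing. A simplified sketch (hypothetical; the real design involves a basket of bank deposits and short-term government securities, exchange-rate effects, and authorized resellers):

```typescript
// Simplified sketch of a fully reserved stablecoin; illustrative only.

class Reserve {
  private assets = 0; // value of real assets held (one unit of account)
  private supply = 0; // coins in circulation

  // Coins are minted 1:1 against assets deposited into the reserve.
  mint(deposit: number): number {
    if (deposit <= 0) throw new Error("deposit must be positive");
    this.assets += deposit;
    this.supply += deposit;
    return deposit; // coins issued
  }

  // Redeeming burns coins and releases the matching assets.
  redeem(coins: number): number {
    if (coins <= 0 || coins > this.supply) throw new Error("invalid redemption");
    this.supply -= coins;
    this.assets -= coins;
    return coins; // assets paid out
  }

  // The invariant the design is meant to guarantee.
  fullyBacked(): boolean {
    return this.assets >= this.supply;
  }
}
```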
I’ve seen multiple reports saying that rather than thinking of Libra as another cryptocurrency, it’s probably best thought of as another PayPal or Venmo. Indeed, you could say that the only time the cryptocurrency/blockchain part matters is when getting the currency out of Facebook’s ecosystem. That is, if you use Calibra inside of a Facebook service, it’s really little more than an internal currency. What’s partly interesting is that if you want to then move it out of Facebook’s control, the blockchain aspect takes over and you get to extract yourself. That’s good.
But, to me, the better comparison, rather than PayPal or Venmo, is really WeChat in China. The whole Libra setup (as well as Calibra) seems like a fairly obvious attempt to recreate what WeChat has done. If you’re not familiar with it, WeChat is not just the Facebook of China; it’s the everything app of China. People pay for nearly everything directly within WeChat. Without WeChat in China you basically can’t do anything. It seems pretty clear that Libra is an attempt to build that kind of functionality into Facebook, and to do it in a way that the currency aspect flows smoothly, without too much friction. That’s interesting in a lot of ways, because there’s been a lot of innovation coming out of WeChat, showing what can be done when you integrate connectivity with currency.
Less Interesting Than Hoped
However, that last point is also why this whole thing is a lot less interesting than I hoped. Recreating what WeChat has done in China, and doing so in a way that is more open, and with less control by Facebook is interesting — and if it’s actually adopted could lead to some new innovations. But it’s not nearly as far as I had hoped Facebook might go. I’ve been talking about this protocols instead of platforms concept for a while now — and one element that could make that work is a better cryptocurrency or tokenization system that would open up new business models that don’t rely on advertising. And I had hoped that Facebook’s foray into cryptocurrency might include that element. But it clearly does not.
While part of Libra is the new Move programming language that will let people develop on top of the Libra blockchain, this does not seem designed for a world of protocols instead of platforms. It’s very much about integrating money into existing platforms. That’s understandable, but disappointing.
That’s not to say Libra isn’t ambitious and couldn’t lead to more interesting things down the road, but on the Clayton Christensen spectrum of sustaining to disruptive innovations, this hews much more toward a sustaining innovation. It’s a platform to add new features into existing systems, rather than a new way of organizing the world. It could still have a big impact. But it could have done a lot more.
And, frankly, as I’ll discuss in the next section, I think the failure to go that far has the greatest chance of holding Libra back in the long run. Since so much of Libra is about layering payments onto what’s out there already, it just opens Facebook up to lots of criticism and potential regulatory hurdles. If Facebook had, instead, tried to restart its own setup as a cryptocurrency-based protocol, focused much more on building protocol services rather than the currency, the regulatory response likely would have been quite different. Indeed, turning its platform into a protocol would have been, in effect, Facebook breaking itself up. While Facebook took a bold approach with Libra in effectively taking its own power out of the equation to empower all the Libra Association members, it’s only doing that for this payments layer, not for its underlying structure.
Lots of Reasons to Be Skeptical
In the end, while this is an interesting and ambitious project, there are plenty of reasons to be skeptical. While Facebook did bend over backwards to try to pre-answer the “but who would ever trust Facebook with money” question, how many people are actually going to take the time to understand that? At least based on the initial reactions I saw online, the answer is: not many.
And, of course, the bigger threat is regulators. New York Magazine noted that, in the end, this seems to be Facebook trying to compete with the US dollar to become the global reserve currency. Governments sure aren’t going to like that, and they’re already pissed off about Facebook in general. So the idea that grandstanding politicians are going to take the time to understand that Facebook has mostly taken its own trust issues out of the equation is likely a non-starter. European regulators are already flipping out, and the US Congress is already setting up hearings. The whole effort may be crushed by regulators before it even really begins.
And then, of course, there’s the distinct possibility that even if this does get off the ground, no one will actually use it. Yes, Facebook has a massive base to start with. Yes, there’s lots of publicity, and I’m sure Calibra can pull lots of tricks to incentivize usage. But getting people to buy into a new payment system is something of a crapshoot. Sometimes it works, sometimes it doesn’t. And the biggest platform doesn’t always win. People may not remember, but PayPal’s initial success was facilitating payments on eBay. eBay then came up with its own competitor to PayPal, and you might think that, given its dominance and platform, it would take over — but that didn’t happen. eBay ended up having to buy PayPal because its own competitor flopped.
The other big challenge is getting however many members of the Libra Association on the same page. It’s not hard to see how different visions and in-fighting could mess up the entire project as well.
So while there are plenty of interesting things here, there’s plenty that could go wrong. And, frankly, I’m disappointed that the project’s ambition wasn’t nearly as disruptive as it could have been.
Of course, you never know what the future might hold. And at least one person is already trying to set up their own version of the Libra blockchain that doesn’t involve the Libra Association, but is instead fully distributed and “privacy oriented.” Wouldn’t it be fascinating if Libra actually took off via a forked version that does an end-run around all those giant companies?