Checking In On Twitter’s Attempt To Move To Protocols Instead Of Platforms
from the it's-moving-forward dept
With Elon Musk now Twitter’s largest shareholder, and joining the company’s board, there have been some (perhaps reasonable) concerns about the influence he will have on the platform — mainly based on his childlike understanding of free speech, in which speech he likes should obviously be allowed, and speech he dislikes should obviously be punished. That’s not to say he won’t have some good ideas for the platform. Before his infamous poll about free speech on Twitter, he ran another poll asking whether Twitter’s algorithm should be open sourced.
And that’s a lot more interesting, because it’s an idea many people have discussed for a while, including Twitter founder Jack Dorsey, who has talked a lot about creating algorithmic choice for users of the site, based in part on Dorsey and Twitter’s decision to embrace my vision of a world of protocols over platforms.
Of course, it’s not nearly as easy as just “open sourcing” the algorithm. Once again, Musk’s simplification of a complex issue is a bit on the childlike side, even if the underlying idea is valuable. But you can’t just open source the algorithm without a whole bunch of other things being in place. Throwing the doors open (1) wouldn’t mean much on its own, and (2) without other steps taken first, would basically open the system up to gaming by trolls and malicious users.
Either way, I’ve continued to follow what’s been happening with Project Bluesky, the Twitter-created project to try to build a protocol-based system. Last month, the NY Times had a good (if brief) update on the project, noting how Twitter could have gone down that route initially, but chose not to. Reversing course is a tricky move, but one that is doable.
What’s been most interesting to me is how Bluesky has been progressing. Some have complained that it’s basically done nothing, but from watching closely, it appears that the people working on it are being deliberate and careful, rather than rushing in and breaking things in typical Silicon Valley fashion. There are lots of other projects out there that haven’t truly caught on, and whenever I mention things like Bluesky, people quickly rush in to point to projects like Mastodon — which, to me, are only partial steps toward the vision of a protocol-based future, rather than efforts driving it forward in a way that is widely adopted. As the Bluesky team itself has explained:
We’re building on existing protocols and technologies but are not committed to any stack in its entirety. We see use cases for blockchains, but Bluesky is not a blockchain, and we believe the adoption of social web protocols should be independent of any blockchain.
And, after recently announcing its key initial hires, the Bluesky team has revealed some aspects of the plan, in what it’s calling a self-authenticating social protocol. As it notes, of all the existing projects out there, none truly matches the protocol-not-platform vision. But that doesn’t mean they can’t work within that ecosystem, or that there aren’t useful things to build on and connect with:
There are many projects that have created protocols for decentralizing discourse, including ActivityPub and SSB for social, Matrix and IRC for chat, and RSS for blogging. While each of these are successful in their own right, none of them fully met the goals we had for a network that enables global long-term public conversations at scale.
The focus of Bluesky is to fill in the gaps to make a protocol-based system a reality. The team sees the main gaps as portability, scalability, and trust, and it sees the key initial need as that self-authenticating piece:
The conceptual framework we’ve adopted for meeting these objectives is the “self-authenticating protocol.” In law, a “self-authenticating” document requires no extrinsic evidence of authenticity. In computer science, an “authenticated data structure” can have its operations independently verifiable. When resources in a network can attest to their own authenticity, then that data is inherently live – that is, canonical and transactable – no matter where it is located. This is a departure from the connection-centric model of the Web, where information is host-certified and therefore becomes dead when it is no longer hosted by its original service. Self-authenticating data moves authority to the user and therefore preserves the liveness of data across every hosting service.
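To make that concrete, here’s a minimal Python sketch of one half of the idea: a record whose identifier is derived from its own bytes, so anyone, on any host, can re-verify it without trusting the server that delivered it. (The names and JSON shape here are hypothetical; the real protocol uses cryptographically signed, more elaborate data structures.)

```python
import hashlib
import json

def make_record(author: str, text: str) -> dict:
    """Build a post whose ID is derived from its own content."""
    body = {"author": author, "text": text}
    # Canonical serialization, so every host computes the same bytes.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    cid = hashlib.sha256(canonical).hexdigest()
    return {"cid": cid, "body": body}

def verify_record(record: dict) -> bool:
    """Anyone, on any host, can re-derive the ID and check authenticity."""
    canonical = json.dumps(record["body"], sort_keys=True,
                           separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest() == record["cid"]

post = make_record("alice.example", "hello, protocols")
assert verify_record(post)        # valid no matter which server delivered it
post["body"]["text"] = "tampered"
assert not verify_record(post)    # any modification breaks the ID
```

Because the record attests to itself, it stays “live” in the announcement’s sense: it can be copied to any host, and its authenticity travels with it.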
As they note, this self-authenticating protocol can help provide that missing portability, scalability, and trust:
Portability is directly satisfied by self-authenticating protocols. Users who want to switch providers can transfer their dataset at their convenience, including to their own infrastructure. The UX for how to handle key management and username association in a system with cryptographic identifiers has come a long way in recent years, and we plan to build on emerging standards and best practices. Our philosophy is to give users a choice: between self-sovereign solutions where they have more control but also take on more risk, and custodial services where they gain convenience but give up some control.
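A toy illustration of that portability point in Python: the user holds a key, their posts carry proofs made with it, and any new provider can re-check the dataset on import. (This uses an HMAC with a user-held secret as a stand-in for real asymmetric signatures like Ed25519, which let anyone verify without holding the secret; every name here is hypothetical.)

```python
import hashlib
import hmac

# The user's key stays with the user, not with the hosting provider.
# (Stand-in: real systems use asymmetric signatures, so third parties
# can verify posts without ever seeing the secret.)
user_key = b"alice-secret-key"

def sign_post(text: str) -> dict:
    """Attach a proof of authorship that travels with the post."""
    tag = hmac.new(user_key, text.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "sig": tag}

def verify_post(post: dict, key: bytes) -> bool:
    """Re-check a post's proof, e.g. when importing to a new provider."""
    expected = hmac.new(key, post["text"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])

# The "old provider" exports the user's dataset...
dataset = [sign_post("first post"), sign_post("second post")]

# ...and the "new provider" (or the user's own server) re-verifies it.
migrated = [p for p in dataset if verify_post(p, user_key)]
assert len(migrated) == 2   # the data's validity travels with the data
```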
Self-authenticating data provides a scalability advantage by enabling store-and-forward caches. Aggregators in a self-authenticating network can host data on behalf of smaller providers without reducing trust in the data’s authenticity. With verifiable computation, these aggregators will even be able to produce computed views – metrics, follow graphs, search indexes, and more – while still preserving the trustworthiness of the data. This topological flexibility is key for creating global views of activity from many different origins.
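The store-and-forward point can be sketched in a few lines: because records are identified by a hash of their content, a reader can fetch from any aggregator’s cache without extending it any special trust. (A toy sketch with hypothetical names, standing in for real content-addressed storage.)

```python
import hashlib

def cid_of(data: bytes) -> str:
    """Content identifier: a hash of the record's bytes."""
    return hashlib.sha256(data).hexdigest()

# A small provider publishes a record; an aggregator caches the bytes.
original = b'{"author":"bob.example","text":"hi"}'
cache = {cid_of(original): original}   # the store-and-forward cache

def fetch(cid: str) -> bytes:
    """Serve from the cache -- the cache itself needs no special trust."""
    return cache[cid]

# A reader fetches from the aggregator but checks the bytes against
# the CID it asked for, so a misbehaving cache cannot alter content
# without being caught.
cid = cid_of(original)
assert cid_of(fetch(cid)) == cid

cache[cid] = b'{"author":"bob.example","text":"edited"}'  # tampering cache
assert cid_of(fetch(cid)) != cid   # tampering is immediately detectable
```

That is what lets aggregators host data “on behalf of smaller providers without reducing trust”: the verification happens at the edge, not at the host.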
Finally, self-authenticating data provides more mechanisms that can be used to establish trust. Self-authenticated data can retain metadata, like who published something and whether it was changed. Reputation and trust-graphs can be constructed on top of users, content, and services. The transparency provided by verifiable computation provides a new tool for establishing trust by showing precisely how the results were produced. We believe verifiable computation will present huge opportunities for sharing indexes and social algorithms without sacrificing trust, but the cryptographic primitives in this field are still being refined and will require active research before they work their way into any products.
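As a rough illustration of retaining metadata about “whether it was changed,” here’s a toy hash chain over a post’s revisions: each version commits to the previous one, so a silent retroactive edit is detectable. (The structure is hypothetical; real designs use signed, Merkle-style data structures.)

```python
import hashlib
import json

def append_version(history: list, text: str) -> list:
    """Each revision commits to the previous one, like a tiny hash chain."""
    prev = history[-1]["hash"] if history else None
    entry = {"text": text, "prev": prev}
    raw = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(raw).hexdigest()
    return history + [entry]

def chain_is_intact(history: list) -> bool:
    """Recompute every link; any silent rewrite breaks the chain."""
    prev = None
    for entry in history:
        raw = json.dumps({"text": entry["text"], "prev": entry["prev"]},
                         sort_keys=True).encode()
        if entry["prev"] != prev or \
           hashlib.sha256(raw).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

history = []
history = append_version(history, "original post")
history = append_version(history, "edited post")
assert chain_is_intact(history)

history[0]["text"] = "quietly rewritten"   # a retroactive edit...
assert not chain_is_intact(history)        # ...is detectable
```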
There’s some more in the links above, but the project is moving forward, and I’m glad to see that it’s doing so in a thoughtful, deliberate manner, focused on filling in the gaps to build a protocol-based world, rather than trying to reinvent the wheel entirely.
It’s that kind of approach that will move things forward successfully, rather than simplistic concepts like “just open source the algorithm.” The end result may (and hopefully will) be many open-sourced algorithms helping to moderate the Twitter experience, but there’s a way to get there thoughtfully, and the Bluesky team appears to be taking that path.