Checking In On Twitter’s Attempt To Move To Protocols Instead Of Platforms
from the it's-moving-forward dept
With Elon Musk now Twitter’s largest shareholder, and joining the company’s board, there have been some (perhaps reasonable) concerns about the influence he would have on the platform — mainly based on his childlike understanding of free speech, in which speech that he likes should obviously be allowed, and speech that he dislikes should obviously be punished. That’s not to say he won’t have some good ideas for the platform. Before his infamous poll about free speech on Twitter, he had done another poll asking whether or not Twitter’s algorithm should be open sourced.
And that’s a lot more interesting, because it’s an idea many people have discussed for a while, including Twitter founder Jack Dorsey, who has talked a lot about creating algorithmic choice for users of the site, based in part on his and Twitter’s decision to embrace my vision of a world of protocols over platforms.
Of course, it’s not nearly as easy as just “open sourcing” the algorithm. Once again, Musk’s simplification of a complex issue is a bit on the childlike side of things, even if the underlying idea is valuable. But you can’t just open source the algorithm without a whole bunch of other things being in place. Simply throwing the doors open (1) wouldn’t really work, because it wouldn’t mean much on its own, and (2) without taking other steps first, would basically open up the system for gaming by trolls and malicious users.
Either way, I’ve continued to follow what’s been happening with Project Bluesky, the Twitter-created project to try to build a protocol-based system. Last month, the NY Times had a good (if brief) update on the project, noting how Twitter could have gone down that route initially, but chose not to. Reversing course is a tricky move, but one that is doable.
What’s been most interesting to me is how Bluesky has been progressing. Some have complained that it’s basically done nothing, but from watching it closely, it appears that the people working on it are being deliberate and careful, rather than rushing in and breaking things in typical Silicon Valley fashion. There are lots of other projects out there that haven’t truly caught on, and whenever I mention things like Bluesky, people quickly rush in to point to Mastodon or other projects, which, to me, are only partial steps towards the vision of a protocol-based future, rather than something driving the effort forward in a way that is widely adopted.
Bluesky, however, has a plan (and, contrary to what people keep screaming at me whenever I mention Bluesky, no, it’s not designed to be a blockchain project), as the team notes:
We’re building on existing protocols and technologies but are not committed to any stack in its entirety. We see use cases for blockchains, but Bluesky is not a blockchain, and we believe the adoption of social web protocols should be independent of any blockchain.
And, after recently announcing its key initial hires, the Bluesky team has revealed some aspects of its plan, in what it’s calling a self-authenticating social protocol. As it notes, of all the existing projects out there, none truly matches the protocols-not-platforms vision. But that doesn’t mean they can’t work within that ecosystem, or that there aren’t useful things to build on and connect with:
There are many projects that have created protocols for decentralizing discourse, including ActivityPub and SSB for social, Matrix and IRC for chat, and RSS for blogging. While each of these are successful in their own right, none of them fully met the goals we had for a network that enables global long-term public conversations at scale.
The focus of Bluesky is to fill in the gaps, to make a protocol-based system a reality. And the Bluesky team sees the main gaps as portability, scalability, and trust. To build that, they see the key initial need as that self-authenticating piece:
The conceptual framework we’ve adopted for meeting these objectives is the “self-authenticating protocol.” In law, a “self-authenticating” document requires no extrinsic evidence of authenticity. In computer science, an “authenticated data structure” can have its operations independently verifiable. When resources in a network can attest to their own authenticity, then that data is inherently live – that is, canonical and transactable – no matter where it is located. This is a departure from the connection-centric model of the Web, where information is host-certified and therefore becomes dead when it is no longer hosted by its original service. Self-authenticating data moves authority to the user and therefore preserves the liveness of data across every hosting service.
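To make that a bit more concrete, here’s a rough sketch of what a self-authenticating record might look like, assuming the usual building blocks of content hashes and public-key signatures (the record fields, function names, and use of Ed25519 via Python’s cryptography library are my own illustration, not Bluesky’s actual design):

```python
# A minimal, hypothetical "self-authenticating" post record: it carries its
# author's public key, a content hash, and a signature, so its authenticity
# can be checked without trusting whichever host served it. Illustration only;
# this is not Bluesky's actual data format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_record(author_key: Ed25519PrivateKey, text: str) -> dict:
    """Build a signed, content-addressed post record."""
    body = json.dumps({"type": "post", "text": text}, sort_keys=True).encode()
    pubkey = author_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return {
        "body": body.decode(),
        "author_pubkey": pubkey.hex(),                   # host-independent identity
        "content_id": hashlib.sha256(body).hexdigest(),  # content address
        "signature": author_key.sign(body).hex(),
    }


def verify_record(record: dict) -> bool:
    """Check the record against its own embedded key; no host certification needed."""
    body = record["body"].encode()
    if hashlib.sha256(body).hexdigest() != record["content_id"]:
        return False
    pubkey = Ed25519PublicKey.from_public_bytes(bytes.fromhex(record["author_pubkey"]))
    try:
        pubkey.verify(bytes.fromhex(record["signature"]), body)
        return True
    except InvalidSignature:
        return False


author = Ed25519PrivateKey.generate()
post = create_record(author, "hello from any host")
assert verify_record(post)  # verifies no matter where the record was fetched from
```

Because the record carries its author’s public key, its content hash, and a signature, a client can check it no matter which server or cache handed it over, which is the “liveness” point being made above.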
As they note, this self-authenticating protocol can help provide that missing portability, scalability and trust:
Portability is directly satisfied by self-authenticating protocols. Users who want to switch providers can transfer their dataset at their convenience, including to their own infrastructure. The UX for how to handle key management and username association in a system with cryptographic identifiers has come a long way in recent years, and we plan to build on emerging standards and best practices. Our philosophy is to give users a choice: between self-sovereign solutions where they have more control but also take on more risk, and custodial services where they gain convenience but give up some control.
Self-authenticating data provides a scalability advantage by enabling store-and-forward caches. Aggregators in a self-authenticating network can host data on behalf of smaller providers without reducing trust in the data’s authenticity. With verifiable computation, these aggregators will even be able to produce computed views – metrics, follow graphs, search indexes, and more – while still preserving the trustworthiness of the data. This topological flexibility is key for creating global views of activity from many different origins.
Finally, self-authenticating data provides more mechanisms that can be used to establish trust. Self-authenticated data can retain metadata, like who published something and whether it was changed. Reputation and trust-graphs can be constructed on top of users, content, and services. The transparency provided by verifiable computation provides a new tool for establishing trust by showing precisely how the results were produced. We believe verifiable computation will present huge opportunities for sharing indexes and social algorithms without sacrificing trust, but the cryptographic primitives in this field are still being refined and will require active research before they work their way into any products.
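The store-and-forward caching point is where the scalability and trust pieces meet, so it’s worth spelling out. Here’s another hedged sketch, again with made-up record structures and Ed25519 signatures rather than anything Bluesky has specified: an untrusted aggregator merges and re-serves feeds from several small hosts, and clients simply drop anything that fails verification.

```python
# Illustrative "store-and-forward cache": an aggregator re-serves records it
# gathered from small hosts, and the client verifies each record against its
# author's embedded key, so the aggregator never has to be trusted. Names and
# structures here are hypothetical, not Bluesky's actual design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def signed(author: Ed25519PrivateKey, text: str) -> dict:
    """Wrap a post so it carries its author's public key and signature."""
    pubkey = author.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return {"text": text, "pubkey": pubkey, "sig": author.sign(text.encode())}


def is_authentic(record: dict) -> bool:
    """True only if the signature matches the text under the embedded key."""
    try:
        Ed25519PublicKey.from_public_bytes(record["pubkey"]).verify(
            record["sig"], record["text"].encode()
        )
        return True
    except InvalidSignature:
        return False


# Two independent "small hosts", each serving one author's posts.
alice = Ed25519PrivateKey.generate()
bob = Ed25519PrivateKey.generate()
host_a = [signed(alice, "post from alice's self-hosted server")]
host_b = [signed(bob, "post from bob's small community instance")]

# An untrusted aggregator merges both feeds, and, say, tampers with one record.
tampered = dict(host_b[0], text="something bob never actually said")
aggregator_cache = host_a + host_b + [tampered]

# Clients keep only the records that prove their own authenticity.
timeline = [r for r in aggregator_cache if is_authentic(r)]
assert len(timeline) == 2
```

The aggregator never has to become a point of trust; it’s just a convenient place to fetch from, which is what makes the topology flexible.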
There’s some more in the links above, but the project is moving forward, and I’m glad to see that it’s doing so in a thoughtful, deliberate manner, focused on filling in the gaps to build a protocol-based world, rather than trying to reinvent the wheel entirely.
It’s that kind of approach that will move things forward successfully, rather than simplistic concepts like “just open source the algorithm.” The end result of this may (and perhaps hopefully will) be open sourced algorithms (many of them) helping to moderate the Twitter experience, but there’s a way to get there thoughtfully, and the Bluesky team appears to be taking that path.
Filed Under: platforms, portability, protocols, protocols not platforms, scalability, self-authenticating, trust
Companies: bluesky, twitter
Comments on “Checking In On Twitter’s Attempt To Move To Protocols Instead Of Platforms”
This comment has been flagged by the community.
Quick poll of Techdirt readers = who outside of Mike’s 6 ass-kissing idiot friends thinks that Elon Musk is the “childlike” figure, and who thinks Mike and his six musketeers (Stephen Stone, PaulT, Rocky, That One Guy, the mask guy and the other guy) are the childish idiots? Did you read the Techdirt article about there only being “one truth” and it belongs to them? Hahahahaha
Re:
Yes, I think Elon Musk has a childlike understanding of the issues Mike raised.
And the commenters here seem to have a more thorough understanding of content moderation/free speech.
Not so difficult.
Re: Re:
(This is actually a response to the first commenter, whose comment I can’t unhide to reply directly to it.)
Your free speech rights exist only in public spaces. You wanna shout your conspiracy bullshit on the street or flip the bird to a cop? Go right ahead. Private spaces like Techdirt have a different set of rules, however, and you can’t just say whatever you want unless you change the law to allow that to happen. If that does happen, prepare for private spaces like your house to be invaded by others wanting to spout off their own shit in the name of free speech.
Re: Re:
How is that any different than now? Private company, private property. Private speech rules. Is it somehow different in premise when someone else does it or is it that you don’t like his choices?
Re: Re: Re:
Your question means you didn’t read Mike’s prior post referenced in his opening statement. Was this intentional or did you just forget it?
Re: Re: Re:2 Neither
I don’t see how it’s even worth a discussion. It’s a private company. If moderation choices change in a way you dislike, you go elsewhere. That’s how freedom of speech and choice works!
Re: Re: Re:3
There is a slight difference between “popular platform moderates itself” and “billionaire buys his way in to force platform to go a completely different direction”.
I’m sure that most people will go the route of switching platforms (of which most people use more than one, despite the regular whining that they’re somehow monopolies), but I’m also sure that the platforms they switch to won’t be the abject failures that the grifting honeypots set up to attract the people banned from mainstream platforms have been. But there’s no hypocrisy in stating, as you move, that everything was fine for most people until a new manager came in to mess things up.
In the meantime, let’s exercise our freedom in discussing why Musk has declined a place on Twitter’s board and his AMA session, in a move apparently due to certain financial and ethical agreements.
Re: Re: Re:4
True, though I ultimately don’t care as I don’t use the platform.
Then again, I understand your point better than you’d think, since two less deletionist platforms I worked for (both free and paid at different points), AOL and CIS, suffered exactly that situation. The changeovers at AOL brought the discarding of ‘groups’ each time.
And CIS went from generally uncensored (except by law) to a hotbed of ban-and-delete-first, figure-it-out-later, pushing anything slightly controversial or disruptive further and further into tech hell, where only the most dedicated and advanced users could reach open discussion.
AOL was bought to death.
CIS suffered death by a thousand protocol jumps.
Re:
Why do you imagine other people are able to see your hallucinations?
Re:
Hey Chozen we still aren’t gonna fuck you.
Re: Re:
Generally, sissyfuckers aren’t interested in straight white people. Although he’s got some gumption to assume that he’ll be able to compete with the BBCs.
Re:
The editorial and journalistic staff do post here once in a while.
And they are all different people.
It’d be nice to refer to them by their actual names, rather than lumping them in with the regular readers, who are part of the peanut gallery.
OH, THANK GOD.
“The algorithm” should be quoted too, because WTF does that refer to? A codebase like Twitter’s will inevitably use many algorithms. Is he referring to one particular algorithm, or to open-sourcing all of the code?
As long as we’re on the subject, it’s the Tesla self-driving algorithm I’d want to see Musk open-source. It’s safety-critical, despite what the fine print might say, and has already injured and killed people.
They need to do deep-dives on how abusive, hostile, bad-faith-actor users, groups of said users, and services/instances catering toward said users will be dealt with. Assurances that bigots, trolls, and harassers cannot have (or at least cannot easily have) a “home service” they can hop back and forth between, to organize and then strike. I can see harassment & brigading forums such as KiwiFarms wanting such a thing.
BlueSky is going to have to answer a lot of questions on how they plan to do right by marginalized groups that are victims of pervasive online harassment, as well as how they plan to tackle mis- and disinformation. The answers to those questions cannot simply be “With subscriptions to the right algorithms, services, and filters created by other users and groups, you won’t be able to see them and they won’t be able to see you.” Opposition to racism, homophobia, transphobia, misogyny, and other forms of bigotry needs to be baked in as heavily as possible at the start.
Without these kinds of assurances, BlueSky so far just sounds like a huge rewind button, rather than a step forward. Like it’s set to be an experiment in free speech and the marketplace of ideas that no user of Twitter will be able to opt out of.
When people say “the algorithm” in this context, they’re usually referring to the recommendation algorithm.
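For instance (purely illustrative, and nothing to do with Twitter’s actual code), a recommendation algorithm in this sense is roughly a scoring function that ranks candidate posts by engagement, follows, and recency:

```python
# Purely illustrative: a toy of the engagement-weighted ranking people usually
# mean by "the recommendation algorithm". Feature names and weights are made
# up; Twitter's real system is far larger and is not public.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    reposts: int
    author_followed: bool
    age_hours: float


def score(post: Post) -> float:
    """Hypothetical ranking: engagement, boosted for followed authors, decayed by age."""
    engagement = post.likes + 2.0 * post.reposts
    follow_boost = 1.5 if post.author_followed else 1.0
    return engagement * follow_boost / (1.0 + post.age_hours)


def rank_timeline(posts: list[Post]) -> list[Post]:
    """Order candidate posts for display, highest score first."""
    return sorted(posts, key=score, reverse=True)


timeline = rank_timeline([
    Post("old viral post", likes=900, reposts=300, author_followed=False, age_hours=48.0),
    Post("fresh post from a friend", likes=12, reposts=1, author_followed=True, age_hours=1.0),
])
print([p.text for p in timeline])
```

Presumably, “open sourcing the algorithm” would mean publishing that kind of ranking logic and the models feeding it, rather than the entire codebase.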
Re: "The algorithm"
Well judging by how badly defined and/or implemented some algorithms are, I’m wondering if they were leaning more towards the alco rhythm.
Re: Re: Nah
It was dealing more along the lines of Al Gore Rhythms.
Re: Re: Re: While you're here
Speaking of recommendation algorithms https://www.youtube.com/watch?v=EGmXAu8geVg and https://www.youtube.com/watch?v=JdtWE_LbTGw
Re: Re:
Or maybe just alcohol. (✖﹏✖)
Re:
Are they? Moderation algorithms are also frequent topics of discussion.
Right after he open sources the Full Self Driving math.
This comment has been flagged by the community.
Punishment = consequence does it not?
Was it only last week folks around here were advocating for consequences to speech? Punishment is certainly a consequence, isn’t it?
So if Musk’s “childlike” view is what the author opines it is (“speech that he dislikes should obviously be punished”), that matches exactly the editorial view of Techdirt and its +1 commentariat.
Speech that some dislike must be punished with ended careers, lost relationships, banishment, muzzling, or whatever other consequences the mob desires or instigates because, you know, “speech has consequences”.
If Musk wants punishment for disfavored words, realize that the rest of the speech police do too. Obviously.
And if Musk is childlike for believing it then so is Techdirt and the self-righteous self-appointed “Committee for the Prevention of Vice and Promotion of Virtue in Speech” vigilantes. Obviously.
Re:
I don’t think anyone was “advocating” for consequences. People were noting, accurately, that speech can have consequences. And that remains true and I don’t see how anything we’ve said changes that.
Wow. Did you really miss the point of what was being said by that much?
No one was saying that all consequences for any speech are good. We were just noting that there may be consequences. Sometimes those consequences are good, sometimes they’re not.
But, either way, that has nothing to do with the comments regarding Musk, which are more about his incredibly simplistic view of content moderation, in which he seems to believe that HIS determination of what’s good and what’s bad is obviously the same as the “objective” view of what’s good and what’s bad.
We’ve explained in the past, multiple times, how that view — that content moderation is easy because you just get rid of the bad stuff and leave the good stuff — is childlike. No one is saying that means that the decisions such a childlike view leads to are all bad, or all good, or shouldn’t be allowed.
I think you’ve really lost the plot in thinking this is a gotcha.
It’s not. It just looks like you have serious reading comprehension problems.
Re:
“Was it only last week folks around here were advocating for consequences to speech?”
Advocating that there were consequences to free speech from other private actors, as has happened since speech was invented? Yes. Because those consequences are also free speech.
“Speech that some dislike must be punished with ended careers, lost relationships, banishment, muzzling, or whatever other consequences the mob desires or instigates because, you know, “speech has consequences”.”
Like the decades of consequences that right-wingers were advocating whenever they didn’t like speech associated with music, comic books, role playing games, video games, books, and a great many other things that they supported in lockstep until the moment they realised the same rules applied to them?
“If Musk wants punishment for disfavored words, realize that the rest of the speech police do too. Obviously.”
I believe you’ll find that the right wing are the ones trying to outlaw words, the “left” are the ones trying to outlaw the context of the words as used to try to destroy the rights of certain populations. The left aren’t trying to say “don’t say gay” they’re trying to say “don’t engage in speech intended to treat gay people as inhuman”. There’s a difference.
Re:
“Was it only last week folks around here were advocating for consequences to speech? Punishment is certainly a consequence, isn’t it?”
You got a citation for that there statement?
No? Doors that way pal —–>
While I only skimmed the Bluesky self-authenticating data description, I gleaned enough to verify my guess that they basically mean “user data is signed by the user’s asymmetric key(s)” (which is extremely sane).
It will be interesting to see what final solutions they have for key rotation and “key authorities,” since I doubt they are interested in the highly centralized PKI system we have now with the HTTPS (and similar) CAs.
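To make the rotation question concrete, the usual trick is for the old key to sign a statement endorsing the new key, so anyone who trusted the old key can follow the chain without a centralized CA. A minimal sketch (record shape and function names made up; this is the general technique, not whatever Bluesky ends up shipping):

```python
# Hedged sketch of key rotation without a central authority: the old key signs
# a statement endorsing the new public key, so anyone who trusted the old key
# can follow the chain to the new one. Hypothetical record shape and names.
import json

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def raw(pubkey: Ed25519PublicKey) -> bytes:
    """Raw 32-byte encoding of a public key, for comparison and transport."""
    return pubkey.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)


def rotate(old_key: Ed25519PrivateKey, new_key: Ed25519PrivateKey) -> dict:
    """The old key signs an endorsement of the new public key."""
    statement = json.dumps({"rotate_to": raw(new_key.public_key()).hex()}).encode()
    return {"statement": statement, "sig_by_old_key": old_key.sign(statement)}


def follow_rotation(trusted: Ed25519PublicKey, record: dict) -> Ed25519PublicKey:
    """Given a key we already trust, return the new key it endorsed (raises if forged)."""
    trusted.verify(record["sig_by_old_key"], record["statement"])
    new_hex = json.loads(record["statement"])["rotate_to"]
    return Ed25519PublicKey.from_public_bytes(bytes.fromhex(new_hex))


old, new = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
rotation = rotate(old, new)
current = follow_rotation(old.public_key(), rotation)
assert raw(current) == raw(new.public_key())
```

The harder problems, like recovering from a lost or stolen old key, are exactly where some kind of “key authority” or social recovery scheme tends to creep back in.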
Thanks for the update. I’m still struggling to get my head around what they are doing. It seems that there is the data (user generated), which will exist somewhere (anywhere from owner-hosted, to small communities, to large Twitter-like corps), and which can be accessed in a variety of ways (from directly sending and receiving between two users, all the way up to aggregators who will filter and gatekeep as they currently do). The benefit being that the user and the data can always be accessed so long as there’s a way to communicate it through the protocol (moving to another host, or self-hosting)? The ID is more for the data in that case, as the value and trust come from other users, and it may appear in a standalone manner (like a link in a comment section like this one)?

I think that’s what it is, in which case the third-party opportunities are to offer a way to move an account from an existing provider and host it, and/or to set it up so that previous friends/followers can interact in the same way via Bluesky despite remaining on the original platform. Plus hosting options for high-volume independent users, or those in oppressive regions of the world.