Disclosure: I’m on the board of Bluesky, which was inspired by my “Protocols, Not Platforms” paper. But this post isn’t about Bluesky the app. It’s about the underlying protocol and what it enables for anyone who wants to build technology (even competitive to Bluesky) that actually respects users.
Last month, I helped release the Resonant Computing Manifesto, which laid out a vision for technology that empowers users rather than extracting from them. The response was gratifying—people are genuinely hungry for an alternative to the current enshittification trajectory of tech. But the most common piece of feedback we got was some version of: “Okay, this sounds great, but how do I actually build this?”
It’s a fair question. Manifestos are cheap if they don’t connect to reality.
So here’s my answer, at least for anything involving social identity: build on the ATProtocol. It’s the only available system today that actually delivers on the resonant computing principles, and it’s ready to use right now.
The Resonant Computing Manifesto laid out five principles for building technology that works for people:
Private: In the era of AI, whoever controls the context holds the power. While data often involves multiple stakeholders, people must serve as primary stewards of their own context, determining how it’s used.
Dedicated: Software should work exclusively for you, ensuring contextual integrity where data use aligns with your expectations. You must be able to trust there are no hidden agendas or conflicting interests.
Plural: No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
Adaptable: Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it.
Prosocial: Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.
If you’re building anything that involves users having identities, connecting with other users, or creating content that belongs to them—which describes basically every interesting app—you need infrastructure that makes these principles achievable rather than aspirational.
ATproto delivers all five.
Private and Dedicated come down to who controls your data. In the current paradigm, you’re rows in somebody else’s database, and they can do whatever they want with those rows. Dan Abramov, in his excellent explainer on open social systems, describes the problem perfectly:
The web Alice created—who she follows, what she likes, what she has posted—is trapped in a box that’s owned by somebody else. To leave is to leave it behind.
On an individual level, it might not be a huge deal.
Alice can rebuild her social presence connection by connection somewhere else. Eventually she might even have the same reach as on the previous platform.
However, collectively, the net effect is that social platforms—at first, gradually, and then suddenly—turn their backs on their users. If you can’t leave without losing something important, the platform has no incentives to respect you as a user.
With ATproto, your data lives in your own “personal repository” (the PDS)—think of it as your own storage container on the social web. You can host it with a free service (like Bluesky), a paid provider, or on your own server. If your current host turns evil or just annoys you, you pack up and move without losing your identity, your connections, or any of your content. The protocol handles the redirection automatically.
This isn’t theoretical. People are doing it right now. The infrastructure exists. You can literally move your entire social presence from one host to another and nobody who follows you needs to update anything (or even realize that you’ve moved).
You don’t need to figure out ways to extract data from an unwilling billionaire’s silo. It’s already yours.
And that’s beneficial for developers as well. If you’re trying to build a system, setting up the identity and social connections creates all sorts of challenges (and dangerous temptations) regarding how you deal with other people’s data, and what games you might play to try to juice the numbers. But with ATproto, the incentives are aligned. Users control their own data, their own connections, and you can just provide a useful service on top of that.
Plural is baked into the architecture. Because your identity isn’t tied to any single app or platform, you can use multiple apps that all read from and write to your personal repository. Abramov explains this clearly in that same post:
Each open social app is like a CMS (content management system) for a subset of data that lives in its users’ repositories. In that sense, your personal repository serves a role akin to a Google account, a Dropbox folder, or a Git repository, with data from your different open social apps grouped under different “subfolders”.
When you make a post on Bluesky, Bluesky puts that post into your repo:
When you star a project on Tangled, Tangled puts that star into your repo:
When you create a publication on Leaflet, Leaflet puts it into your repo:
You get the idea.
Over time, your repo grows to be a collection of data from different open social apps. This data is open by default—if you wanted to look at my Bluesky posts, or Tangled stars, or Leaflet publications, you wouldn’t need to hit these applications’ APIs. You could just hit my personal repository and enumerate all of its records.
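Enumerating a repo like that is just an HTTP GET against the PDS's XRPC interface. Here is a sketch of building those requests: `com.atproto.repo.listRecords` is the real protocol method, and `app.bsky.feed.post` is the real Bluesky collection name, but the host, the DID, and the Tangled/Leaflet collection names are illustrative placeholders.

```python
from urllib.parse import urlencode

def list_records_url(pds_host: str, repo_did: str, collection: str) -> str:
    """Build the XRPC URL that enumerates one collection in a user's repo."""
    query = urlencode({"repo": repo_did, "collection": collection})
    return f"https://{pds_host}/xrpc/com.atproto.repo.listRecords?{query}"

# One account, several apps' data, all in the same repo. The Bluesky
# collection name is real; the other two are illustrative:
for collection in (
    "app.bsky.feed.post",     # Bluesky posts
    "sh.tangled.feed.star",   # Tangled stars (name illustrative)
    "pub.leaflet.document",   # Leaflet publications (name illustrative)
):
    print(list_records_url("pds.example.com", "did:plc:example123", collection))
```

No API keys, no per-app developer agreements: the same request shape works against anyone's repo for any app's data.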
This is the opposite of how closed platforms work. You’re not locked into any single company’s vision of what social software should be. Different apps can disagree about what a “post” is—different products, different vibes—and that’s a feature, not a bug. Your identity travels with you across all of them.
Indeed, we’re seeing some really cool stuff around this lately, such as the new standard.site lexicon for long-form publishing on ATproto. It’s been adopted by Leaflet, Pckt, and Offprint, with others likely to come on board as well.
Tynan Purdy, writing via the brand new Offprint (itself an ATproto app), captures the mindset shift that I think more developers need to internalize:
I have no more patience for platforms. I’m done.
Products come and go. This is a truism of the internet. Do not expect any particular service to exist forever, or you will be burned. It can be a depressing thought. So much of our lives are lived online. Communities and culture are created online. The play is performed on stages we call “social media”. But then they go away.
We make our homes on these platforms. Set up shop. Scale a business. Connect with our friends. Build a following. Then something changes. A change in corporate strategy. An IPO. A private equity takeover. A merger with AOL. And it’s never the same after that. All that work, all that culture, now painted in a different light. Sometimes locked away entirely.
His solution? Never build on closed platforms again:
I write to you now on a new kind of place on the internet. This place is mine. Or rather, what I create here is mine. This product (a rather fine one by @btrs.co, if I say so myself), belongs to @offprint.app. They might go away. Someday they will. But this, my words, my creation. The human act of creating culture. This is mine. It lives in my personal folder. I keep my personal folder at @selfhosted.social. They will go away someday too, and that’s okay. I’ll move my folder somewhere else. You’ll still be able to read this. Offprint is just an app for reading a certain kind of post I publish to the ATmosphere. When Offprint inevitably dies, hopefully a long time from now, this post will still just be a file in my personal folder. And when that day comes, perhaps even before, there will be other ways to read this file from my personal folder. You can even do so right now.
That’s not idealism. That’s how ATproto actually works today.
Purdy mentions his “personal folder” above, and in another post Abramov digs deeper into what that means:
This might sound very hypothetical, but it’s not. What I’ve described so far is the premise behind the AT protocol. It works in production at scale. Bluesky, Leaflet, Tangled, Semble, and Wisp are some of the new open social apps built this way.
It doesn’t feel different to use those apps. But by lifting user data out of the apps, we force the same separation as we’ve had in personal computing: apps don’t trap what you make with them. Someone can always make a new app for old data:
Like before, app developers evolve their file formats. However, they can’t gatekeep who reads and writes files in those formats. Which apps to use is up to you.
Together, everyone’s folders form something like a distributed social filesystem:
This is a fundamentally different relationship between users and services. And it breaks the economic logic that makes platforms turn against their users.
It’s an enshittification killswitch.
Cory Doctorow’s framing of enshittification notes that the demands (often from investors) for companies to extract more and more push them to enshittify. Once they have you in their silo, they can begin to turn the screws on you. They know that it’s costly for you to leave. You lose your contacts. Your content. Your community. The switching costs are the leverage.
ATproto breaks that leverage.
Because you control your data, your identity, and your connections, whichever services you’re using have strong incentives to never enshittify. Turn the screws and users just… leave. Click a button, move to a different service, take everything with them. The threat that makes enshittification profitable—”where else are you gonna go?”—has no teeth when the answer is “literally anywhere, and I’m taking my stuff.”
Paul Frazee, Bluesky’s CTO, talks about how this works in a recent post on the concept of “Atmospheric Computing”:
Connected clouds solve a lot of problems. You still have the always-on convenience, but you can also store your own data and run your own programs. It’s personal computing, for the cloud.
The main benefit is interoperation.
You signed up to Bluesky. You can just use that account on Leaflet. Both of them are on the Atmosphere.
If Leaflet decides to show Bluesky posts, they just can. If Leaflet decides to create Bluesky posts, they just need to use the right schema. The two apps don’t need to talk to each other directly to do it. They both just talk to the users’ account hosts.
Cooperative computing is possible.
The most popular algorithm on Bluesky is For You. It’s run by Spacecowboy on *squints* his gaming PC.
He ingests the firehose of public posts and likes and follows. Then the Bluesky app asks his server for a list of post URLs to render. The shared dataset means we can do deeply cooperative computing. An entirely third party service presents itself as first-party to Bluesky.
Because Tangled is Atmospheric, your self-hosted instance would see all of the same users and user activity as the first instance would.
The garden is unwalled.
SelfHosted.social is an account hosting service. The self-hosted users show up like any other user. If I had to guess, most of them started on Bluesky hosts, and then used something like PDS Moover to migrate.
It’s an open network.
In the Atmosphere, it does make sense to run a personal cloud, because your personal cloud can interoperate with other people’s personal clouds. It can also interoperate with BobbyCorp’s Big Bob Cloud, and the corner pie shop’s Pie Cloud, and on it goes.
There’s no silo to lock you in, and thus trying to turn the screws on users should backfire. Instead, services built on ATproto have “resonant” incentives: to keep you happy, to keep you feeling good about using the service, because the protocol enables a plurality of other services as well.
In many ways it’s a rethinking of the entire web itself and how it can and should work. The web was supposed to be interoperable and buildable, but all our data and identity pieces got locked away in silos.
ATproto breaks all that down, and just lets people build. And connect. And share.
Adaptable is where the developer ecosystem comes in. Because the protocol is open and the data formats are extensible, anyone can build whatever they want. We’re already seeing this explosion right now: Bluesky for microblogging, Leaflet for long-form publishing, Tangled for code collaboration, Offprint for newsletters, Roomy for community discussions, Skylight for shortform video, Semble for organizing research, teal.fm for music scrobbling, and dozens more. Some of these are mere “copycats” of existing services, but we’re already starting to see others branching out beyond what was even possible before.
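What makes all these apps interoperable is that every record carries a `$type` naming the lexicon (schema) it conforms to, so any app can recognize and reuse another app's data. A minimal sketch, using the real `app.bsky.feed.post` lexicon; the helper function itself is illustrative, not part of any SDK:

```python
from datetime import datetime, timezone

def make_post_record(text: str) -> dict:
    """Assemble a record conforming to the app.bsky.feed.post lexicon."""
    return {
        "$type": "app.bsky.feed.post",  # names the schema this record follows
        "text": text,
        "createdAt": datetime.now(timezone.utc).isoformat(),
    }

record = make_post_record("Hello from any app in the Atmosphere")
print(record["$type"])  # app.bsky.feed.post
```

Any app that writes a record with this `$type` into a user's repo has, in effect, posted to Bluesky; any app that defines a new lexicon has created a format the whole network can read.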
The key: these apps don’t just coexist—they can actively benefit from each other’s data. Abramov again:
Since the data from different apps “lives together”, there’s a much lower barrier for open social apps to piggyback on each other’s data. In a way, it starts to feel like a connected multiverse of apps, with data from one app “bleeding into” other apps.
When I signed up for Tangled, I chose to use my existing @danabra.mov handle. That makes sense since identity can be shared between open social apps. What’s more interesting is that Tangled prefilled my avatar based on my Bluesky profile. It didn’t need to hit the Bluesky API to do that; it just read the Bluesky profile record in my repository. Every app can choose to piggyback on data from other apps.
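That avatar prefill is a single record fetch. A sketch of the request Tangled could make, assuming it already knows the user's DID and PDS host (placeholders below): `com.atproto.repo.getRecord` is the real XRPC method, and the Bluesky profile really is a singleton record in the `app.bsky.actor.profile` collection keyed `self`.

```python
from urllib.parse import urlencode

def profile_record_url(pds_host: str, repo_did: str) -> str:
    """URL for the singleton Bluesky profile record in a user's repo."""
    query = urlencode({
        "repo": repo_did,
        "collection": "app.bsky.actor.profile",
        "rkey": "self",  # the profile is a singleton record keyed "self"
    })
    return f"https://{pds_host}/xrpc/com.atproto.repo.getRecord?{query}"

print(profile_record_url("pds.example.com", "did:plc:example123"))
```

The same pattern works for any record any app has written: one GET, no Bluesky API involved.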
An everything app tries to do everything the way they tell you to do it. An everything protocol-based ecosystem lets everything get done. How you want. Not how some billionaire wants.
It’s becoming part of the motto of the Atmosphere: we can just do things. Anyone can. For years I’ve written about how much learned helplessness people have regarding social systems—thinking their only option is to beg billionaires or the government to fix things. But there’s a third way: just build. And build together. That’s what ATproto enables.
And it’s doable today. Yes, there are reasonable concerns about the hype machine around AI and vibe coding—but the flip side is that in the last couple of months, I, a non-professional coder, have built myself three separate things using ATproto. Including a Google Reader-style app that mixes RSS and ATproto together. That’s what “adaptable” actually means: tools malleable enough that regular people with little to no experience can shape them to their needs. The vibe coding revolution will enable even more people to just build what they want, and they can use ATproto as a foundational layer of that.
This used to be close to impossible. The big centralized platforms learned to lock everything down—sometimes suing those who sought to build better tools. ATproto doesn’t have that problem. We don’t need permission. We can just do things. Today. And with new AI-powered tools, it’s easier than ever for anyone to do so.
Prosocial is where this all comes together. Not “social” in the Zuckerbergian sense of harvesting your social graph to sell ads, but social in the human sense: enabling connection and coordination between people, without a controlling body in the middle looking to exploit those connections. The identity layer handles the hard problems—authentication, verification, portability—so developers (or, really, anyone—see the adaptable section) can focus on building things that actually help people connect.
Remember why people flocked to social media in the early years? They got genuine value out of it. Connecting with friends and family, new and old. But once the centralized systems had you trapped, those social tools became extraction tools.
The open social architecture of the Atmosphere means that trap can’t close. We can engage in prosocial activities without fear of bait-and-switch—without worrying that the useful feature we love is just bait to drag our data and connections into someone’s locked pen.
The protocol itself is politically neutral infrastructure, like email or the web. The point isn’t any particular app—it’s that we finally have a foundation for building social tools that don’t require users to surrender control of their digital lives.
If you’re building an app that needs user identity, or user-generated content, or any kind of social graph, you don’t have to build all that infrastructure yourself. You don’t have to trap your users’ data in your own database (and worry about the associated risks). You don’t have to make them create yet another account and remember yet another password. You can just plug into ATproto’s identity layer and get all of the resonant computing principles essentially for free.
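The first step of plugging into that identity layer is resolving a user's human-readable handle to their stable DID, after which you can find their PDS and (in a full OAuth flow) authenticate them. A sketch of that first step: `com.atproto.identity.resolveHandle` is the real XRPC method and `public.api.bsky.app` is Bluesky's public API host; the handle is a placeholder.

```python
from urllib.parse import urlencode

def resolve_handle_url(handle: str) -> str:
    """URL that resolves a human-readable handle to its stable DID."""
    query = urlencode({"handle": handle})
    return ("https://public.api.bsky.app/xrpc/"
            f"com.atproto.identity.resolveHandle?{query}")

print(resolve_handle_url("alice.example.com"))
```

From the returned DID you can fetch the DID document, locate the user's PDS, and begin the sign-in flow, without ever storing a password yourself.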
Your users keep control of their identity. Their data stays under their control, but available to the wider ecosystem. Your app becomes part of that larger ecosystem rather than just another walled garden, meaning you’ve also solved part of the cold start problem. Over 40 million people already have an account that works on whatever it is that you’ve built. And if your app dies—let’s be honest, most apps die—the data and connections your users created don’t die with it.
The Resonant Computing Manifesto talked about technology that leaves people “feeling nourished, grateful, alive” rather than “depleted, manipulated, or just vaguely dirty.” That kind of technology can’t exist when the fundamental architecture treats users as resources to be extracted. But it can exist when users control their own data, when developers can build without permission, when leaving doesn’t mean losing everything.
That’s not a future we need to wait for. That’s ATproto. Today.
So when people ask “how do I actually build resonant computing?” this is a key part of the answer. Stop building on platforms. Stop begging billionaires to be better. Stop waiting for regulators to save you.
The tools are here. The infrastructure exists. We can just do things.
It’s always funny seeing Bluesky promoters talk about Bluesky like it’s the only game in town, the first to market, when Mastodon/ActivityPub has been around for a decade and already has tens of thousands of instances spun up compared to Bluesky’s handful of instances, including numerous single-user instances which really should be considered the gold standard for having control and ownership of one’s social media.
I’m not saying it’s bad to have competing protocols, but you could at least be honest about what’s around and who came first, especially since ATproto was explicitly inspired by ActivityPub in the first place.
It’s not “the only game in town,” but I don’t think ActivityPub meets the criteria I’m talking about in the post. The only thing AP currently does is allow you to move and keep your social graph. The other features I discuss aren’t really possible with AP right now. That may change, and I hope it does. For example, with ActivityPub I already have an account on Mastodon, but I couldn’t use that same account or its data on Lemmy. I had to create a separate account.
Also, it’s not true that ATproto “was explicitly inspired by ActivityPub.” I wrote the protocols paper not based on AP, and explained at the time why AP wasn’t quite what I was talking about because it was still more focused on mini-fiefdoms, not actual freedoms. And the people who created ATproto said they did so for the same reason. They looked at AP and said it couldn’t work for the use case they were looking to create… which is all of the identity stuff discussed above.
I like AP quite a bit, and am enthusiastic about the ways people are using it. But I don’t think it meets the criteria I discuss the way ATproto does.
In ATproto, it’s not set up as “instances” like in ActivityPub. In ActivityPub, each service sits on its own server, so you’re at the whims of whoever runs the server you’re on. With ATproto, there are different layers to the stack. You have the PDS, the relay, the AppView, and the client. And so different people can take control over different parts of the stack and mix and match.
The article above is really just discussing the PDS layer, but we’re seeing different relays, appviews, and clients all the time.
I take your meaning to be: “if I have a Bluesky account, can I still communicate with people on Bluesky without touching any Bluesky hardware?” which is a bit of a different question. But the answer to that is also yes.
Blacksky is the most obvious one, that now runs its own PDS hosting, its own relay, its own appview, and its own client. So you can absolutely move elsewhere and tons of people have.
Northsky is working on a similar setup for Canadians. EuroSky will soon be online for Europeans. Others are building out their own offerings as well.
But that’s all for microblogging. The whole point of this post is that atproto does way more than that.
Anonymous Coward says:
I find it funny when “teal.fm for music scrobbling” is used as an example, because the first-party version isn’t even available yet but people are already using it with third party integrations.
As soon as it goes live on the web, it’ll already be full of data stretching back months.
Rob Ricci says:
What is meant by Private
I’m having a bit of trouble understanding your definition of Private – today, everything on-protocol in ATproto is public, in the sense that it can be fetched by anyone, and, protocol-wise, anyone can refer to any content in any context.
The context features that I’m aware of are above the protocol. For example, limiting replies and quote-posts in Bluesky: at the ATproto level, anyone can create records in their repos responding to posts or quoting them regardless of the content owner’s wishes. To the best of my knowledge, limits on the context are in the AppView, which can decide not to display replies or quotes based on records in the original author’s repo. While being fully compliant with ATproto, I believe that other AppViews, without any permission from the content creator, can implement completely different rules that allow posts to be used in any context?
This doesn’t seem to me like the protocol meets the definition of privacy here or in the Resonant Computing Manifesto? Or maybe I just misunderstand what it means to control the context of my data. Can you elaborate?
I think part of the issue here is that there are multiple definitions of private, which is one reason I had wanted to not use that term in the manifesto, but got outvoted.
The real issue is who has control over your data, and under ATproto it’s the user, and that’s what’s important.
But, on the issue of thinking of data as “private” in the way you described, you’re correct that the current version treats data as publicly available, and while some folks have hacked around that, the Bluesky team has been quite public about thinking through how to properly and carefully do private data on-protocol, and you should expect to see a lot more on that this year.
Rob says:
Re: Re:
Thanks! In terms of controlling context, specifically:
primary stewards of their own context
do you have any comments on how ATproto actually enables that today?
Kate says:
This is my first real exposure to how ATProto functions, but I’m a little confused.
In terms of privacy, it sounds like all data in your PDS is available to your connected services and to anyone who wants to view it. Is there any control over who can access what? I guess BlueSky DMs must be doing this somehow if those go through ATProto.
I’m also not sure what’s stopping a service from denying access to a PDS if they have to be centrally hosted. If BlueSky decided to lock it down before non-technical users could learn how to migrate away, would there be any recourse or would that data simply be lost? I’m assuming I misunderstood something here.
Anonymous Coward says:
Re:
In terms of privacy, it sounds like all data in your PDS is available to your connected services and to anyone who wants to view it. Is there any control over who can access what?
It’s available to everyone, connected services or no
I guess BlueSky DMs must be doing this somehow if those go through ATProto.
Actually, they are off-protocol.
I’m also not sure what’s stopping a service from denying access to a PDS if they have to be centrally hosted. If BlueSky decided to lock it down before non-technical users could learn how to migrate away, would there be any recourse or would that data simply be lost? I’m assuming I misunderstood something here.
They have designed for adversarial migration, which means migrating your data even if your old PDS becomes inaccessible or hostile.
In order to make use of this, you need to have prepared by getting control of your signing keys and having a recent backup of the data in your PDS. I don’t have a way to get numbers on the latter, but for the former, I did a survey last month re: how many ATproto users are prepared in this way: https://rob.leaflet.pub/3m7isflo7ls23
Sok Puppette says:
Sorry, NO
The AT protocol is built around a single, highly centralized “relay” system. Even if it could be extended to use multiple relays, which nobody has even mapped out, let alone tried to do, each relay would still have to handle all of the traffic, which means that nothing smaller than a medium-large corporation will ever have the resources to run one.
It’s a plain bad, intrinsically unscalable design that guarantees centralization.
It also continues the whole follow-based friends and influencers model that is one of the main root causes of “social media” sucking so much.
The AT protocol is built around a single, highly centralized “relay” system.
That’s false. I don’t know where that info is coming from, but there are multiple relays already, and it has become easier and easier to create your own, which can currently be done for under $30/month.
Even if it could be extended to use multiple relays, which nobody has even mapped out, let alone tried to do, each relay would still have to handle all of the traffic
My goodness. You need to update your priors. That hasn’t been true since early 2024, about two years ago.
It’s a plain bad, intrinsically unscalable design that guarantees centralization.
You heard something two years ago that long ago got fixed and you refused to update your priors.
Maybe don’t do that?
It also continues the whole follow-based friends and influencers model that is one of the main root causes of “social media” sucking so much.
There is nothing intrinsic to atproto that does that.
You seem woefully misinformed.
Rob Ricci says:
Re: Re:
Yeah, I am currently drinking the complete firehose from 4 relays with a machine in my living room.
This is different than running a full-network relay, but I did that (with only partial history) from my living room for a while too.
I do have concerns about the architectural role of the relay, but OP, you are quite misinformed.
Sok Puppette says:
Re: Re:
OK, I stand corrected, there are multiple relays.
But I didn’t “hear something two years ago”. I read whatever white paper was available from the project itself, yes probably around two years ago. Obviously I should have checked again. To be honest, I just wrote the whole protocol off after reading that.
It does sound like there’s still a scaling issue, in that you guys seem to be saying that there’s still such a thing as a “fire hose”, and some class of nodes that have to deal with it. Unless there’s a plan to fix that, there’s still a problem. And there’s really no reason to have it in the first place.
By the way, using phrases like “update your priors” is kind of in the same vein as what I did. That phrase actually doesn’t make sense given what a prior is, nor is what you’re suggesting what “updating” usually means in contexts where you’d use the word “prior”. And I wouldn’t think you’d want to adopt that particular jargon anyway, given what I believe you think of the communities it comes from.
Obviously I should have checked again. To be honest, I just wrote the whole protocol off after reading that.
You also claimed that it hadn’t even been tried, which seems strange since a simple search would have proven that wrong. You acted as if it was impossible to run a relay.
And I don’t see what you mean by a scaling issue. Again, others are already running relays, some are partial, some are full, but people have built tools that mean you rarely actually need to run a full relay, since the times that’s necessary may be limited. It’s possible, but the ecosystem has basically solved all the problems you seemed to think were unsolvable, and which caused you to dismiss the entire ecosystem.
As for your last paragraph, I honestly have no idea what you’re saying. I’ve read it three times, and I still think it’s the proper phrase. It means that you came to a conclusion based on old data, and I am suggesting that you review the new data and update your conclusions.
If there’s some other meaning to the phrase, I am unaware of it.
Sok Puppette says:
Re: Re: Re:2
You also claimed that it hadn’t even been tried,
You’re right. I should not have said that. I had no support for it. Sorry.
As for your last paragraph, I honestly have no idea what you’re saying.
You don’t update a prior. You update from a prior to a posterior (on some evidence). The prior is fixed and it makes no sense to talk about updating it.
Also, that whole set of ideas is about probability, which technically applies here but is a weird way to talk about me being totally wrong from the get-go.
And the people who popularized talking about “priors” are the rationalist/EA crowd, with whom I don’t think you want to be identified.
Rob Ricci says:
Re: Re: Re:
And there’s really no reason to have it in the first place.
There is, in fact, a reason to have relays, the people who designed the protocol were not just throwing stuff in there for the heck of it.
The reason is that everything above the PDS is assumed to need a stream of all events occurring on the network and/or the ability to view every event that has occurred on the network ever.
You can see how this makes sense for a Twitter-like application, and also for other applications that are built on the notion of a stream of an object graph. Blogging applications and source code repositories are two examples that are fairly popular in the ATproto world right now.
It does sound like there’s still a scaling issue, in that you guys seem to be saying that there’s still such a thing as a “fire hose”, and some class of nodes that have to deal with it. Unless there’s a plan to fix that, there’s still a problem.
And here I agree with you, it is a scaling problem for two reasons.
First, the cost to run a relay scales something-like-linearly in all of the following dimensions: (a) the event rate in the network, (b) the number of consumers of the data (typically AppViews but other consumers exist), and (c) the length of time that the network has existed (if you are going to run a relay with full history). The price Mike quoted is for a relay that does not backfill with history, so it’s turning that last term into a constant. But the other two are likely to remain fairly linear: let’s assume the number of users, and uses, continues to grow, so that’s (a) still growing, and that the number of applications continues to grow, so that’s (b) still growing. The long-term effect is that if the protocol is successful, the cost of running a relay goes from being manageable to being something that, yes, only a large organization can really run.
Second, it is a point of centralized control. Sure, the network may operate fine with a relatively small number of relays, assuming that entities like Bluesky continue to pay to run them. But, they become easy targets for censorship and other forms of centralized control. A company like Bluesky can decide not to carry certain accounts or content on their relay, or can be compelled by a government not to do so. Given that the applications that users actually use are a couple steps away from a relay, “just switching relays” is not necessarily a thing users themselves have much control over. So, from the perspective of a threat model, the relay(s) are some of the best targets, and the expense of running one will get higher over time, keeping the number fairly small.
I think there are probably architectural ways out of this. The biggest is to build applications that are okay with just seeing part of the full network. But now we are discussing future evolution of the protocol and its use, not the protocol as it stands today.
Sok Puppettesays:
Re: Re: Re:2
The reason is that everything above the PDS is assumed to need a stream of all events occurring on the network and/or the ability to view every event that has occurred on the network ever.
Well, yeah, but that’s kind of what I mean there’s no reason for.
I guess it depends on what you think qualifies as a reason.
You can see how this makes sense for a Twitter-like application
If you mean an application that applies some kind of competely personalized score to every single thing ever posted by anybody in the world, and displays the top N in a time period, or all the ones that go over some threshold, or the like, then, OK, you have to score every single thing for every single user, which means you have to see them all.
But does either Twitter or Bluesky actually do completely personalized filtering for every single user on every single posting?
Anyway, my first reaction is to utterly fail to understand why would you’d want to do that to begin with. Topic-based forums have a vastly better UX, and you can extend that to other kinds of source tagging where consumers only have to look at things that bear a priori indicators of actually being interesting. Which indicators can be used to partition streams so nothing has to see everything.
Although I still dislike “follows”, if you really want them, you can definitely do them without anything looking at every event.
But even you feel you must potentially expose every user to every single event, you can almost certainly find better approaches.
You could show people what they’ve actually subscribed to, and then handle the rest by adding in a sample of the rest probabilistically filtered on some kind of “absolute” quality score, which you can do in a heirarchical rollup instead of anything having to see everything. There’s probably a fair amount of agreement among users about what’s good content… and perhaps even more agreement among developers and operators about what’s “good for” the users.
Or score each post along a bunch of axes, embedding-wise, and create separate streams for each of those axes, thus synthesizing something vaguely like source tagging. Let the user’s personal filter figure out which axes the user wants to see, and subscribe to them… possibly again in a probabilistic sort of way. You can probably even use those same embeddings to support global search without anything having to process
every single event.
And you can make it all probabilistic if you want to be sure of closing feedback loops.
I’m sure there are probably a bunch of other possible approach. It seems really unlikely that any user is going to notice or feel hurt by a well-crafted method that eliminates any choke point.
Given that there are scalable ways to achieve (what I believe to be) the goals, it seems reasonable to say there’s no reason to put in a choke point that will be hard or impossible to remove later.
Rob Riccisays:
Re: Re: Re:3
I’m with you on the basic idea that this is a pretty limiting way to build applications in general. But if you goal is to build something that looks like Twitter, a giant global view of everything is what you need. If Twitter is not what you want, as you clearly don’t, then yeah, Bluesky and the ATproto design decisions that support it aren’t going to look worth it, because they aren’t.
This is why I’m concerned about scaling to large applications that are not Twitter. Things that don’t need a graph streamed at them in real time are not going to be a very good fit for ATproto, at least not in the way that Bluesky uses it. So long as these applications are small, this is not a big deal. But if they grow, things will have to change. I’m moderately optimistic that they can.
I do want to point out one specific way that it is following the principles laid out in the Resonant Computing Manifesto and this post: everybody is only writing to their own repository on a PDS (regardless of how it is hosted). Every post, every mention, every like, etc. is put it a place where you do, in a meaningful sense, have control over it, even if it is hosted on someone else’s server. That is very nice. The price you pay is that in order to know how many likes a post got, you have to collect like that everybody in the world has done, and cross-reference them with the post you are showing. This is what the AppView does, and this makes it expensive. See this great post: https://unfoldingdiagrams.leaflet.pub/3mdf4b5dnms2p that explains it better than I can.
The downside from the perspective of “context” for your data, as I pointed out in another comment on this post, is that you have no control over the context in which your data is referenced or displayed. eg. at the ATproto level, you cannot stop people from replying to your posts, quoting them, tagging you. You have to rely on the AppView they are using respecting the records you put in your own repo to indicate it’s preferences re: these things. Maybe this is a reasonable tradeoff. But I want to see this actually discussed when people say things like “privacy” and “context”.
Can Bluesky still effectively ban people from the entire supposedly decentralised network even if they aren’t part of what is to all intents and purposes the main server and are they still bending the knee to the kind of censorship requests this site rails against? Are Jay Graber and the development team still being antagonistic towards the userbase while protecting bad actors like Jesse Singal? Are the bans of people from Palestine and those who mock Charlie Kirk still happening.
I understand being proud of a thing you’re a part of but from an outside perspective it just looks like pre-Elon twitter and the decentralisation is a figleaf at most as getting full independance doesn’t seem like an easy task, just ask Blacksky.
Rob Riccisays:
Re:
I’m going to answer this question from two perspectives, to provide the aspirational answer and the on-the-ground-today answer. I also want to preface this by saying that I expect the numbers regarding the deployment to change over time, in a positive way, and that it’s worth watching to see where they go:
From the perspective of the protocol:
No, Bluesky the company cannot ban accounts network-wide, period. For every piece of infrastructure needed, there is some way – that is already running – you can participate in the network without the company. And yeah, ask Blacksky, but not in the negative sense you are implying, but in the positive sense of look what it is possible to do without Bluesky the company. They are trailblazers, and the whole thing is going to be easier for others because of the work they are doing.
Now, from the perspective of what is out there today:
Practically, yes the company can cut you off from most of the network. Here are some numbers based off the measurements I’ve been taking.
PDSes (where user data is stored): As of today about 99% of active accounts are on Bluesky’s servers (source: https://arewedecentralizedyet.online/). They can, and sometimes do, issue takedowns at the PDS level, in which case you cannot get your data off to migrate elsewhere. The protocol is designed for adversarial migration, in which you can move your data even if something like this happens. To do so, you need to be prepared ahead of time with a backup of your data and rotation keys. As of December, the fraction of active accounts on Bluesky PDSes prepared with their own rotation keys was 0.0034% (source: https://rob.leaflet.pub/3m7isflo7ls23 )
PLC, where most identity is stored: This is a centralized service that tracks identity. The company could, but so far as I know, to their credit, does not block accounts there. Last I heard, it is in the process of being spun out into an independent organization – so while it will still be a centralized service, it will not be under control of the company. There is an alternative to PLC accounts, did:web, that puts your identity more firmly in your own hands and relies on neither Bluesky nor a future PLC organization. As of my December survey ( https://rob.leaflet.pub/3m7isflo7ls23 ), I found fewer than 100 active accounts using did:web. It is worth pointing out that, to the best of my knowledge, unlike migrating between PDSes, it is not possible to migrate an account created with did:plc to did:web.
Relay and AppView: these are separate but I’m going to lump them together to say that, today, most of the network is using Bluesky’s services, and in both cases, there is one production alternative, run by Blacksky. (There are more non-production versions of these services, especially on the relay side, but from a perspective of blocking access to the network, the production ones are the important ones.) To my knowledge, most Bluesky blocks are done at the AppView level, so let’s concentrate on that. I don’t believe that there are public numbers available as to the number of people using each AppView. Accounts are not inherently tied to one AppView, and in fact you can, and many people do, use multiple apps with the same account, and there is no reason those apps need to be using the same AppView. So: I cannot provide any hard numbers here on how much of the network an account is blocked from if it is blocked by Bluesky’s AppView. The best educated guess I can provide is that it is probably on the same order of magnitude as the fraction of the active network hosted on Bluesky’s PDSes.
Every one of the numbers here is changing, and it behooves everyone who is interested in this network to follow them, so that they understand the network as is evolves, rather than being stuck in some outdated understanding or be going solely off how it might look in the future.
Comments on “ATproto: The Enshittification Killswitch That Enables Resonant Computing”
"It’s the only..."
It’s always funny seeing Bluesky promoters talk about Bluesky like it’s the only game in town, the first to market, when Mastodon/ActivityPub has been around for a decade and already has tens of thousands of instances spun up compared to Bluesky’s handful of instances, including numerous single-user instances which really should be considered the gold standard for having control and ownership of one’s social media.
I’m not saying it’s bad to have competing protocols, but you could at least be honest about what’s around and who came first, especially since ATproto was explicitly inspired by ActivityPub in the first place.
Re:
It’s not “the only game in town,” but I don’t think ActivityPub meets the criteria I’m talking about in the post. The only thing AP currently does is allow you to move and keep your social graph. The other features I discuss aren’t really possible with AP right now. That may change, and I hope it does. But already with ActivityPub, I have an account on Mastodon, yet I couldn’t use that same account or its data on Lemmy; I had to create a separate account.
Also, it’s not true that ATproto “was explicitly inspired by ActivityPub.” I wrote the protocols paper without basing it on AP, and explained at the time why AP wasn’t quite what I was talking about: it was still focused on mini-fiefdoms, not actual freedoms. And the people who created ATproto said they did so for the same reason. They looked at AP and concluded it couldn’t work for the use case they were looking to create… which is all of the identity stuff discussed above.
I like AP quite a bit, and am enthusiastic about the ways people are using it. But I don’t think it meets the criteria I discuss the way ATproto does.
If I have a Bluesky account, and I want to migrate it to another ATproto instance, can I do that?
Re:
In ATproto, it’s not set up as “instances” like in ActivityPub. In ActivityPub, each service sits on its own server, so you’re at the whims of whoever runs the server you’re on. With ATproto, there are different layers to the stack: the PDS, the relay, the AppView, and the client. So different people can take control over different parts of the stack and mix and match.
The article above is really just discussing the PDS layer, but we’re seeing different relays, AppViews, and clients all the time.
I take your meaning to be: “if I have a Bluesky account, can I still communicate with people on Bluesky without touching any Bluesky hardware?” which is a bit of a different question. But the answer to that is also yes.
Blacksky is the most obvious one, that now runs its own PDS hosting, its own relay, its own appview, and its own client. So you can absolutely move elsewhere and tons of people have.
Northsky is working on a similar setup for Canadians. EuroSky will soon be online for Europeans. Others are building out their own offerings as well.
But that’s all for microblogging. The whole point of this post is that atproto does way more than that.
I find it funny when “teal.fm for music scrobbling” is used as an example, because the first-party version isn’t even available yet but people are already using it with third party integrations.
As soon as it goes live on the web, it’ll already be full of data stretching back months.
What is meant by Private
I’m having a bit of trouble understanding your definition of Private – today, everything on-protocol in ATproto is public, in the sense that it can be fetched by anyone, and, protocol-wise, anyone can refer to any content in any context.
The context features that I’m aware of are above the protocol. For example, limiting replies and quote-posts in Bluesky: at the ATproto level, anyone can create records in their repos responding to posts or quoting them, regardless of the content owner’s wishes. To the best of my knowledge, limits on the context live in the AppView, which can decide not to display replies or quotes based on records in the original author’s repo. And I believe that other AppViews, while remaining fully compliant with ATproto, can implement completely different rules that allow posts to be used in any context, without any permission from the content creator.
This doesn’t seem to me like the protocol meets the definition of privacy here or in the Resonant Computing Manifesto? Or maybe I just misunderstand what it means to control the context of my data. Can you elaborate?
Re:
I think part of the issue here is that there are multiple definitions of private, which is one reason I wanted to avoid using that term in the manifesto, but I got outvoted.
The real issue is who has control over your data, and under ATproto it’s the user, and that’s what’s important.
But, on the issue of thinking of data as “private” in the way you described: you’re correct that the current version treats data as publicly available. While some folks have hacked around that, the Bluesky team has been quite public about thinking through how to properly and carefully do private data on-protocol, and you should expect to see a lot more on that this year.
Re: Re:
Thanks! In terms of controlling context, specifically: do you have any comments on how ATproto actually enables that today?
This is my first real exposure to how ATProto functions, but I’m a little confused.
In terms of privacy, it sounds like all data in your PDS is available to your connected services and to anyone who wants to view it. Is there any control over who can access what? I guess BlueSky DMs must be doing this somehow if those go through ATProto.
I’m also not sure what’s stopping a service from denying access to a PDS if they have to be centrally hosted. If BlueSky decided to lock it down before non-technical users could learn how to migrate away, would there be any recourse or would that data simply be lost? I’m assuming I misunderstood something here.
Re:
It’s available to everyone, connected services or no
Actually, the DMs are off-protocol.
They have designed for adversarial migration, which means migrating your data even if your old PDS becomes inaccessible or hostile.
In order to make use of this, you need to have prepared by getting control of your signing keys and having a recent backup of the data in your PDS. I don’t have a way to get numbers on the latter, but for the former, I did a survey last month re: how many ATproto users are prepared in this way: https://rob.leaflet.pub/3m7isflo7ls23
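For the backup half of that preparation, a repo can be pulled as a single CAR file over the com.atproto.sync.getRepo XRPC endpoint. A minimal sketch (the PDS host and DID below are placeholders, not real accounts):

```python
# Build the URL for a full CAR-file backup of an account's repo.
# com.atproto.sync.getRepo is a standard ATproto XRPC endpoint; the
# PDS host and DID below are placeholders.
def repo_backup_url(pds_host: str, did: str) -> str:
    """URL that returns the account's whole repo as a CAR file."""
    return f"https://{pds_host}/xrpc/com.atproto.sync.getRepo?did={did}"

url = repo_backup_url("pds.example.com", "did:plc:abc123")
# Periodically fetching that URL and keeping the .car file, plus
# holding your own rotation keys, is what "prepared" means here.
```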
Sorry, NO
The AT protocol is built around a single, highly centralized “relay” system. Even if it could be extended to use multiple relays, which nobody has even mapped out, let alone tried to do, each relay would still have to handle all of the traffic, which means that nothing smaller than a medium-large corporation will ever have the resources to run one.
It’s a plain bad, intrinsically unscalable design that guarantees centralization.
It also continues the whole follow-based friends-and-influencers model that is one of the main root causes of “social media” sucking so much.
Re:
That’s false. I don’t know where that info is coming from, but there are multiple relays already and it has become easier and easier to create your own, and can currently be done for under $30/month.
https://bsky.app/profile/did:plc:hdhoaan3xa3jiuq4fg4mefid/post/3lne2wvr5hc2b
https://whtwnd.com/bnewbold.net/3lo7a2a4qxg2l
Blacksky has its own relay: https://atproto.africa/
As do others, such as https://relay3.fr.hose.cam/ and https://relay-ovh.demo.bsky.dev/
And many more are being built.
Why lie?
My goodness. You need to update your priors. That hasn’t been true since early 2024, about two years ago.
You heard something two years ago that long ago got fixed and you refused to update your priors.
Maybe don’t do that?
There is nothing intrinsic to atproto that does that.
You seem woefully misinformed.
Re: Re:
Yeah, I am currently drinking the complete firehose from 4 relays with a machine in my living room.
This is different than running a full-network relay, but I did that (with only partial history) from my living room for a while too.
I do have concerns about the architectural role of the relay, but OP, you are quite misinformed.
Re: Re:
OK, I stand corrected, there are multiple relays.
But I didn’t “hear something two years ago”. I read whatever white paper was available from the project itself, yes probably around two years ago. Obviously I should have checked again. To be honest, I just wrote the whole protocol off after reading that.
It does sound like there’s still a scaling issue, in that you guys seem to be saying that there’s still such a thing as a “fire hose”, and some class of nodes that have to deal with it. Unless there’s a plan to fix that, there’s still a problem. And there’s really no reason to have it in the first place.
By the way, using phrases like “update your priors” is kind of in the same vein as what I did. That phrase actually doesn’t make sense given what a prior is, nor is what you’re suggesting what “updating” usually means in contexts where you’d use the word “prior”. And I wouldn’t think you’d want to adopt that particular jargon anyway, given what I believe you think of the communities it comes from.
Re: Re: Re:
You also claimed that it hadn’t even been tried, which seems strange since a simple search would have proven that wrong. You acted as if it was impossible to run a relay.
And I don’t see what you mean by a scaling issue. Again, others are already running relays – some partial, some full – and people have built tools that mean you rarely actually need to run a full relay, since the times that’s necessary are limited. Running one is possible, and the ecosystem has basically solved all the problems you seemed to think were unsolvable, and which caused you to dismiss the entire ecosystem.
As for your last paragraph, I honestly have no idea what you’re saying. I’ve read it three times, and I still think it’s the proper phrase. It means that you came to a conclusion based on old data, and I am suggesting that you review the new data and update your conclusions.
If there’s some other meaning to the phrase, I am unaware of it.
Re: Re: Re:2
You’re right. I should not have said that. I had no support for it. Sorry.
You don’t update a prior. You update from a prior to a posterior (on some evidence). The prior is fixed and it makes no sense to talk about updating it.
Also, that whole set of ideas is about probability, which technically applies here but is a weird way to talk about me being totally wrong from the get-go.
And the people who popularized talking about “priors” are the rationalist/EA crowd, with whom I don’t think you want to be identified.
Re: Re: Re:
There is, in fact, a reason to have relays; the people who designed the protocol were not just throwing stuff in there for the heck of it.
The reason is that everything above the PDS is assumed to need a stream of all events occurring on the network and/or the ability to view every event that has occurred on the network ever.
You can see how this makes sense for a Twitter-like application, and also other applications that are built on the notion of streaming an object graph. Blogging applications and source code repositories are two examples that are fairly popular in the ATproto world right now.
And here I agree with you: it is a scaling problem, for two reasons.
First, the cost to run a relay scales something-like-linearly in all of the following dimensions: (a) the event rate in the network, (b) the number of consumers of the data (typically AppViews but other consumers exist), and (c) the length of time that the network has existed (if you are going to run a relay with full history). The price Mike quoted is for a relay that does not backfill with history, so it’s turning that last term into a constant. But the other two are likely to remain fairly linear: let’s assume the number of users, and uses, continues to grow, so that’s (a) still growing, and that the number of applications continues to grow, so that’s (b) still growing. The long-term effect is that if the protocol is successful, the cost of running a relay goes from being manageable to being something that, yes, only a large organization can really run.
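To make the shape of that argument concrete, here is a toy cost model. All coefficients are invented for illustration – they are not real Bluesky numbers – and the only point is the structure: skipping backfill turns the history term into a constant, while the event-rate and consumer terms keep growing:

```python
# Toy relay cost model, illustrating the scaling argument above.
# The coefficients are made up; only the shape (linear in each
# dimension) matters.
def relay_cost(events_per_sec, consumers, history_years, backfill=True):
    ingest = 0.001 * events_per_sec               # (a) event rate
    fanout = 0.0005 * events_per_sec * consumers  # (b) downstream consumers
    storage = 25.0 * history_years if backfill else 0.0  # (c) stored history
    return ingest + fanout + storage

# A relay that skips backfill pays nothing extra as the network ages,
# but its cost still grows linearly with event rate and consumer count.
no_backfill_now = relay_cost(5000, 4, 1, backfill=False)
no_backfill_later = relay_cost(5000, 4, 10, backfill=False)
bigger_network = relay_cost(10000, 8, 10, backfill=False)
```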
Second, it is a point of centralized control. Sure, the network may operate fine with a relatively small number of relays, assuming that entities like Bluesky continue to pay to run them. But they become easy targets for censorship and other forms of centralized control. A company like Bluesky can decide not to carry certain accounts or content on their relay, or can be compelled by a government not to do so. Given that the applications that users actually use are a couple steps away from a relay, “just switching relays” is not necessarily something users themselves have much control over. So, from the perspective of a threat model, the relay(s) are some of the best targets, and the expense of running one will get higher over time, keeping the number fairly small.
I think there are probably architectural ways out of this. The biggest is to build applications that are okay with just seeing part of the full network. But now we are discussing future evolution of the protocol and its use, not the protocol as it stands today.
Sok Puppette says:
Re: Re: Re:2
Well, yeah, but that’s kind of what I mean there’s no reason for.
I guess it depends on what you think qualifies as a reason.
You can see how this makes sense for a Twitter-like application

If you mean an application that applies some kind of completely personalized score to every single thing ever posted by anybody in the world, and displays the top N in a time period, or all the ones that go over some threshold, or the like, then, OK, you have to score every single thing for every single user, which means you have to see them all.
But does either Twitter or Bluesky actually do completely personalized filtering for every single user on every single posting?
Anyway, my first reaction is to utterly fail to understand why you’d want to do that to begin with. Topic-based forums have a vastly better UX, and you can extend that to other kinds of source tagging where consumers only have to look at things that bear a priori indicators of actually being interesting. Which indicators can be used to partition streams so nothing has to see everything.
Although I still dislike “follows”, if you really want them, you can definitely do them without anything looking at every event.
But even if you feel you must potentially expose every user to every single event, you can almost certainly find better approaches.
You could show people what they’ve actually subscribed to, and then top that up with a probabilistic sample of everything else, filtered on some kind of “absolute” quality score, which you can compute in a hierarchical rollup instead of anything having to see everything. There’s probably a fair amount of agreement among users about what’s good content… and perhaps even more agreement among developers and operators about what’s “good for” the users.
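One way to sketch that probabilistic top-up (function names and scores are hypothetical; a real version would get quality scores from the hierarchical rollup just described):

```python
import random

# Hypothetical sketch: show what a user subscribed to, plus a small
# quality-weighted sample of everything else, so no single component
# ever needs a full view of the network.
def build_feed(subscribed, extras, k, seed=None):
    """subscribed: post ids shown unconditionally.
    extras: list of (post_id, quality) pairs with quality > 0.
    Returns subscribed posts plus k quality-weighted samples."""
    rng = random.Random(seed)
    if not extras:
        return list(subscribed)
    ids = [p for p, _ in extras]
    weights = [q for _, q in extras]
    return list(subscribed) + rng.choices(ids, weights=weights, k=k)

feed = build_feed(["post-a"], [("post-b", 0.9), ("post-c", 0.1)], k=2, seed=1)
```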
Or score each post along a bunch of axes, embedding-wise, and create separate streams for each of those axes, thus synthesizing something vaguely like source tagging. Let the user’s personal filter figure out which axes the user wants to see, and subscribe to them… possibly again in a probabilistic sort of way. You can probably even use those same embeddings to support global search without anything having to process every single event.
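A minimal sketch of that axis-partitioned idea (the axis names and thresholding rule are assumptions for illustration, not anything in the protocol):

```python
# Hypothetical sketch: score each post along a few embedding-like axes
# and publish it only to the streams for axes where it scores above a
# threshold. Consumers then subscribe to axes instead of a firehose.
def route_post(axis_scores, threshold=0.5):
    """axis_scores: dict mapping axis name -> score in [0, 1].
    Returns the set of per-axis streams to publish this post to."""
    return {axis for axis, s in axis_scores.items() if s >= threshold}

streams = route_post({"cycling": 0.8, "politics": 0.1, "rust": 0.6})
```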
And you can make it all probabilistic if you want to be sure of closing feedback loops.
I’m sure there are a bunch of other possible approaches. It seems really unlikely that any user is going to notice or feel hurt by a well-crafted method that eliminates any choke point.
Given that there are scalable ways to achieve (what I believe to be) the goals, it seems reasonable to say there’s no reason to put in a choke point that will be hard or impossible to remove later.
Rob Ricci says:
Re: Re: Re:3
I’m with you on the basic idea that this is a pretty limiting way to build applications in general. But if your goal is to build something that looks like Twitter, a giant global view of everything is what you need. If Twitter is not what you want, as you clearly don’t, then yeah, Bluesky and the ATproto design decisions that support it aren’t going to look worth it, because they aren’t.
This is why I’m concerned about scaling to large applications that are not Twitter. Things that don’t need a graph streamed at them in real time are not going to be a very good fit for ATproto, at least not in the way that Bluesky uses it. So long as these applications are small, this is not a big deal. But if they grow, things will have to change. I’m moderately optimistic that they can.
I do want to point out one specific way that it is following the principles laid out in the Resonant Computing Manifesto and this post: everybody is only writing to their own repository on a PDS (regardless of how it is hosted). Every post, every mention, every like, etc. is put in a place where you do, in a meaningful sense, have control over it, even if it is hosted on someone else’s server. That is very nice. The price you pay is that in order to know how many likes a post got, you have to collect every like that everybody in the world has made, and cross-reference them with the post you are showing. This is what the AppView does, and this is what makes it expensive. See this great post: https://unfoldingdiagrams.leaflet.pub/3mdf4b5dnms2p that explains it better than I can.
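In other words, the AppView’s job for something as simple as a like count looks roughly like this (record shape heavily simplified; real app.bsky.feed.like records reference the post through a strong ref with a uri and cid):

```python
from collections import Counter

# Sketch of the aggregation an AppView performs: like records live in
# each *liker's* repo, so counting likes for one post means collecting
# likes from everyone's repos and grouping them by the post referenced.
# (Record shape simplified from app.bsky.feed.like.)
def count_likes(like_records):
    """like_records: iterable of {"subject": post_uri} dicts drawn from
    every repo on the network. Returns a post_uri -> like count map."""
    return Counter(rec["subject"] for rec in like_records)

likes = count_likes([
    {"subject": "at://did:plc:x/app.bsky.feed.post/1"},
    {"subject": "at://did:plc:x/app.bsky.feed.post/1"},
    {"subject": "at://did:plc:y/app.bsky.feed.post/9"},
])
```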
The downside from the perspective of “context” for your data, as I pointed out in another comment on this post, is that you have no control over the context in which your data is referenced or displayed. E.g., at the ATproto level, you cannot stop people from replying to your posts, quoting them, or tagging you. You have to rely on the AppView they are using to respect the records you put in your own repo indicating your preferences on these things. Maybe this is a reasonable tradeoff. But I want to see it actually discussed when people say things like “privacy” and “context”.
Any thoughts on this post?
https://notes.nora.codes/atproto-again/
Can Bluesky still effectively ban people from the entire supposedly decentralised network even if they aren’t part of what is to all intents and purposes the main server, and are they still bending the knee to the kind of censorship requests this site rails against? Are Jay Graber and the development team still being antagonistic towards the userbase while protecting bad actors like Jesse Singal? Are the bans of people from Palestine and those who mock Charlie Kirk still happening?
I understand being proud of a thing you’re a part of, but from an outside perspective it just looks like pre-Elon Twitter, and the decentralisation is a fig leaf at most, as getting full independence doesn’t seem like an easy task – just ask Blacksky.
Rob Ricci says:
Re:
I’m going to answer this question from two perspectives, to provide the aspirational answer and the on-the-ground-today answer. I also want to preface this by saying that I expect the numbers regarding the deployment to change over time, in a positive way, and that it’s worth watching to see where they go:
From the perspective of the protocol:
No, Bluesky the company cannot ban accounts network-wide, period. For every piece of infrastructure needed, there is some way – already running – for you to participate in the network without the company. And yeah, ask Blacksky, but not in the negative sense you are implying – in the positive sense of: look at what it is possible to do without Bluesky the company. They are trailblazers, and the whole thing is going to be easier for others because of the work they are doing.
Now, from the perspective of what is out there today:
Practically, yes, the company can cut you off from most of the network. Here are some numbers based on the measurements I’ve been taking.
PDSes (where user data is stored): As of today about 99% of active accounts are on Bluesky’s servers (source: https://arewedecentralizedyet.online/). They can, and sometimes do, issue takedowns at the PDS level, in which case you cannot get your data off to migrate elsewhere. The protocol is designed for adversarial migration, in which you can move your data even if something like this happens. To do so, you need to be prepared ahead of time with a backup of your data and rotation keys. As of December, the fraction of active accounts on Bluesky PDSes prepared with their own rotation keys was 0.0034% (source: https://rob.leaflet.pub/3m7isflo7ls23 )
PLC, where most identity is stored: This is a centralized service that tracks identity. The company could, but so far as I know, to their credit, does not block accounts there. Last I heard, it is in the process of being spun out into an independent organization – so while it will still be a centralized service, it will not be under control of the company. There is an alternative to PLC accounts, did:web, that puts your identity more firmly in your own hands and relies on neither Bluesky nor a future PLC organization. As of my December survey ( https://rob.leaflet.pub/3m7isflo7ls23 ), I found fewer than 100 active accounts using did:web. It is worth pointing out that, to the best of my knowledge, unlike migrating between PDSes, it is not possible to migrate an account created with did:plc to did:web.
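For context on why did:web puts identity “more firmly in your own hands”: resolving it is just an HTTPS fetch from a domain the user controls, per the did:web method spec. A simplified sketch (the domain is a placeholder, and this skips details like percent-encoded ports):

```python
# Resolve where a did:web identity document lives, per the did:web
# method spec: did:web:example.com maps to
# https://example.com/.well-known/did.json. Identity is thus controlled
# by whoever controls the domain, not by a PLC directory.
def did_web_document_url(did):
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    path = did[len(prefix):].replace(":", "/")
    if "/" not in path:  # bare domain -> well-known location
        return f"https://{path}/.well-known/did.json"
    return f"https://{path}/did.json"

url = did_web_document_url("did:web:example.com")
```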
Relay and AppView: these are separate but I’m going to lump them together to say that, today, most of the network is using Bluesky’s services, and in both cases, there is one production alternative, run by Blacksky. (There are more non-production versions of these services, especially on the relay side, but from a perspective of blocking access to the network, the production ones are the important ones.) To my knowledge, most Bluesky blocks are done at the AppView level, so let’s concentrate on that. I don’t believe that there are public numbers available as to the number of people using each AppView. Accounts are not inherently tied to one AppView, and in fact you can, and many people do, use multiple apps with the same account, and there is no reason those apps need to be using the same AppView. So: I cannot provide any hard numbers here on how much of the network an account is blocked from if it is blocked by Bluesky’s AppView. The best educated guess I can provide is that it is probably on the same order of magnitude as the fraction of the active network hosted on Bluesky’s PDSes.
Every one of the numbers here is changing, and it behooves everyone who is interested in this network to follow them, so that they understand the network as it evolves, rather than being stuck in some outdated understanding or going solely off how it might look in the future.