Bluesky Continues To Explore More Creative Moderation Plans Openly
from the the-company-is-a-future-adversary-to-its-current-plans dept
I continue to be fascinated watching how the various decentralized protocol-based social media systems are evolving — in particular, how they’re dealing with the challenges of content moderation. There was an interesting discussion recently on nostr over whether moderation is best handled by relays or clients*.
ActivityPub has, of course, continued to move forward with its systems of moderation handled at each instance level, combined with the threat of “defederation” being used to keep “bad” instances in line (or cut off from parts of the network). That’s worked surprisingly well in some cases, but is also facing a few challenges, as there have been complaints about some of the largest instances, and now that Meta is planning to release an ActivityPub-compatible offering, there’s a weird push to make some instances promise to defederate from any Meta offering immediately.
And then, there was the big “Scaling Trust” report that we talked about last week (disclaimer: I was on the task force that helped with the report), which included an annex all about decentralized trust & safety.
But, again, Bluesky may be where the most interesting discussions on decentralized trust & safety and moderation are happening. A few months ago, we wrote about their plans for decentralized composable moderation, and recently they released some thoughts on how you can handle moderation in a public commons.
The goal of Bluesky is to turn social media into a shared public commons. We don’t want to own people’s social graphs or communities. We want to be a tool that helps communities own and govern themselves.
The reason we focus on communities is that for an open commons to work, there needs to be some sort of structure that protects the people who participate. Safety can’t just be left up to each individual to deal with on their own. The burden this puts on people — especially those who are most vulnerable to online abuse and harassment — is too high. It also doesn’t mirror how things work in the real world: we form groups and communities so that we can help each other. The tooling we’re building for moderation tries to take into consideration how social spaces are formed and shaped through communities.
Somewhat importantly, they make it clear that they don’t have all the answers (no one does!), but it’s really interesting to see them discussing this openly and publicly, and asking for thoughts and feedback as they move forward. To me, the thing that stands out is that the ideas presented clearly involved a lot of thought (to the point that I haven’t fully wrapped my head around some of the different proposals — some seem clever, while others may need a bit more baking before they fully make sense).
Historically, trust & moderation decisions have come in two forms: handed down from on high in a centralized system, in which little is discussed publicly and people are left trying to sort out what’s actually happening, or made in an entirely distributed manner, in which things often spring up ad hoc out of need (see: Usenet killfiles) and often run into problems later on.
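The killfile model is worth spelling out, because it shows both the appeal and the limits of the fully distributed approach: each user maintains their own local blocklist, and the client silently drops anything that matches. A minimal sketch (all names and data shapes here are illustrative, not any real newsreader’s format):

```python
# Sketch of the Usenet killfile idea: purely client-side filtering,
# with no shared infrastructure. Every user carries their own rules,
# which is exactly the "all the weight on the individual" problem.

KILLFILE = [
    {"field": "author", "pattern": "spammer@example.com"},
    {"field": "subject", "pattern": "MAKE MONEY FAST"},
]

def is_killed(post: dict) -> bool:
    """Return True if the post matches any killfile rule (case-insensitive)."""
    return any(
        rule["pattern"].lower() in post.get(rule["field"], "").lower()
        for rule in KILLFILE
    )

posts = [
    {"author": "alice@example.com", "subject": "Meeting notes"},
    {"author": "spammer@example.com", "subject": "hello"},
]
visible = [p for p in posts if not is_killed(p)]  # only alice's post survives
```

The catch, of course, is that every user has to build and maintain this list alone, which is exactly the burden Bluesky says is too high for the most vulnerable users.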
The Bluesky folks are trying to think about a more hybrid approach, in which the system itself is designed to enable communities to better manage things — not one giant opaque centralized control bunker, but also not putting all the weight on individual users, which is unfair to many (especially the targets of abuse and harassment).
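One way to picture that hybrid, "composable" model: independent labeling services annotate content, and each user decides which labelers to trust and what to do with each kind of label. This is a hedged sketch of the concept only — the names, data shapes, and label values below are illustrative, not the actual AT Protocol API:

```python
# Sketch of composable moderation: several independent labelers annotate
# content; the client combines only the labels from services the user has
# opted into, applying that user's own preferences per label value.

from dataclasses import dataclass

@dataclass
class Label:
    labeler: str  # which service issued the label (hypothetical names)
    value: str    # e.g. "spam", "nsfw"

def moderate(post_labels: list[Label], subscriptions: set[str],
             prefs: dict[str, str]) -> str:
    """Return 'hide', 'warn', or 'show' for a post, honoring only
    labels from labelers the user subscribes to."""
    decision = "show"
    for label in post_labels:
        if label.labeler not in subscriptions:
            continue  # ignore labelers the user hasn't opted into
        action = prefs.get(label.value, "show")
        if action == "hide":
            return "hide"  # strongest action wins immediately
        if action == "warn":
            decision = "warn"
    return decision

# A user subscribes to one community labeler and wants spam hidden:
labels = [Label("community.example", "spam"), Label("other.example", "nsfw")]
result = moderate(labels, {"community.example"}, {"spam": "hide", "nsfw": "warn"})
```

The design point is that moderation decisions come from a community layer the user chooses, rather than from a single central authority or from each user alone.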
I think this kind of vision seems exactly the right one for an organization like Bluesky to have:
A company is an efficient structure for building out a cohesive vision of how things should work, but locking users into our systems would be antithetical to our mission. An open commons can’t be governed at the sole discretion of one global company. We offer services like professional moderators so that we can help protect people and provide a good experience, but we shouldn’t exert total control over everyone’s experience, for all time, with no alternative. Users should be able to walk away from us without walking away from their social lives.
The reason we’re building in decentralization is because we observed that business interests and the open web have a habit of coming into conflict. Third-party developers often get locked out. Moderation policies come into conflict with the diverse interests and needs of different groups of users. Ads push towards algorithms that optimize for engagement. It’s a systemic problem that keeps playing out as centralized social media companies rise and fall.
On Bluesky itself, the lead developer, Paul Frazee, noted that they view the future company as a potential adversary, and are designing accordingly. That, alone, is a fascinating perspective to have, and one that certainly makes sense in the age of enshittification. And, unlike the many companies that start on the open web and later come into conflict with it as they seek to pull up the ladder behind them to protect a moat, Bluesky is trying to design its systems in a way that protects them from the company’s own future attempts at enshittification:
Even when things are working correctly on social platforms, there are weird dynamics caused by people’s relationships being mediated by a single company. The Internet is pretty obviously real life in the sense that its management has real-world consequences. When these places control our identities and our ability to connect and to make money, having no way out from the founding company is a precarious situation. The power difference is daunting.
The goal of Bluesky is to rebuild social networking so that there’s not a lock-in to the founding company, which is us. We can try to provide a cohesive, enjoyable experience, but there’s always an exit. Users can move their accounts to other providers. Developers can run their own connected infrastructure. Creators can keep access to their audiences. We hope this helps break the cycle of social media companies coming into conflict with the open web.
Now, some users point to the complex onboarding of Mastodon, or the “WTF how does any of this work?” nature of nostr, and worry that any decentralized/federated system has to be confusing. And that user unfriendliness, in some weird way, acts as a moderation tool in its own right, by keeping communities somewhat smaller. But it also keeps communities… smaller. So Bluesky has a different vision. A surprisingly refreshing and honest one:
A great experience should be simple to use. It shouldn’t be overly complex, and there should be sensible defaults and well-run entry points. If things are going well, the average user shouldn’t have to notice what parts are decentralized, or how many layers have come together to determine what they see. However, if conflict arises, there should be easy levers for individuals and communities to pull so that they can reconfigure their experience.
A great experience should recognize that toxicity is not driven only by bad actors. Good intentions can create runaway social behaviors that then create needless conflict. The network should include ways to downregulate behaviors – not just amplify them.
A great experience should respect the burden that community management can place on people. Someone who sets out to help protect others can quickly find themselves responsible for a number of difficult choices. The tooling that’s provided should take into account ways to help avoid burnout.
A great experience should find a balance between creating friendly spaces and over-policing each other. The impulse to protect can sometimes degrade into nitpicking. We should drive towards norms that feel natural and easy to observe.
A great experience should reflect the diversity of views within the network. Decisions that are subjective should be configurable. Moderation should not force the network into a monoculture.
Finally, a great experience should remember that social networking can be pleasant one day and harsh the next. There should be ways to react to sudden events or shifts in your mood. Sometimes you need a way to be online but not be 100% available.
There is no perfect content moderation solution out there. There is no whiz-bang, simple technical solution to the messiness that is human beings. As I’ve said many times, so many trust & safety dilemmas are really societal problems that we only think are new, or think need to be solved by internet companies, because they’re now appearing through screens over the internet.
And, of course, none of what Bluesky is working on may turn out to work, or to matter. It’s still a small operation, and some of these ideas are completely untested. But, at the very least, it is presenting some pretty thoughtful ideas in the open, and trying to think through the real consequences of what it’s creating. And that, alone, is incredibly refreshing.
* The creator of nostr apparently does not believe moderation should happen at the relay level, but when I asked him how relay operators could express their moderation rules, he suggested it didn’t matter since relays weren’t moderating anyway. Of course, since then I’ve noticed that nostr is being overrun with cryptocurrency spam, so at some point people there are going to realize that something needs to be done.
Filed Under: activitypub, at protocol, content moderation, decentralized social media, nostr, public commons, trust & safety
Companies: bluesky