Bluesky Continues To Explore More Creative Moderation Plans Openly

from the the-company-is-a-future-adversary-to-its-current-plans dept

I continue to be fascinated watching how the various decentralized protocol-based social media systems are evolving — in particular, how they’re dealing with the challenges of content moderation. There was an interesting discussion recently on nostr over whether moderation is best handled by relays or by clients*.

ActivityPub has, of course, continued to move forward with its systems of moderation handled at each instance level, combined with the threat of “defederation” being used to keep “bad” instances in line (or cut off from parts of the network). That’s worked surprisingly well in some cases, but is also facing a few challenges, as there have been complaints about some of the largest instances, and now that Meta is planning to release an ActivityPub-compatible offering, there’s a weird push to make some instances promise to defederate from any Meta offering immediately.

And then, there was the big “Scaling Trust” report that we talked about last week (disclaimer: I was on the task force that helped with the report), which included an annex all about decentralized trust & safety.

But, again, Bluesky may be where the most interesting discussions on decentralized trust & safety and moderation are happening. A few months ago, we wrote about their plans for decentralized composable moderation, and recently they released some thoughts on how you can handle moderation in a public commons.

The goal of Bluesky is to turn social media into a shared public commons. We don’t want to own people’s social graphs or communities. We want to be a tool that helps communities own and govern themselves.

The reason we focus on communities is that for an open commons to work, there needs to be some sort of structure that protects the people who participate. Safety can’t just be left up to each individual to deal with on their own. The burden this puts on people — especially those who are most vulnerable to online abuse and harassment — is too high. It also doesn’t mirror how things work in the real world: we form groups and communities so that we can help each other. The tooling we’re building for moderation tries to take into consideration how social spaces are formed and shaped through communities.

Somewhat importantly, they make it clear that they don’t have all the answers (no one does!), but it’s really interesting to see them discussing this openly and publicly, and asking for thoughts and feedback as they move forward. To me, what stands out is that the ideas presented obviously involved a lot of thought (to the point that I haven’t fully wrapped my head around some of the proposals; some seem clever, while others may need a bit more baking before they fully make sense).

Historically, trust & safety and moderation decisions have come in two forms: handed down from on high in a centralized system, where little is discussed publicly and people are left trying to sort out what’s actually happening; or made in an entirely distributed manner, where things often spring up ad hoc out of need (see: Usenet killfiles) and frequently run into problems later on.

The Bluesky folks are trying to think through a more hybrid approach, in which the system itself is designed to enable communities to better manage things themselves: neither one giant opaque centralized control bunker, nor all the weight dumped on individual users, which is unfair to many (especially the targets of abuse and harassment).
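To make that hybrid idea concrete, here is a minimal, purely illustrative sketch of “composable moderation”: independent labeling services attach labels to posts, and each user chooses which labelers to subscribe to and what each label should do. The function names, label values, and data shapes below are my own assumptions for illustration, not Bluesky’s actual API.

```python
# A hypothetical sketch of composable moderation: posts may carry labels from
# several independent labeling services; each user opts into the labelers they
# trust and picks a per-label action. None of these names are Bluesky's real API.

HIDE, WARN, SHOW = "hide", "warn", "show"

def moderate(post, subscriptions, preferences):
    """Return the strictest action among the user's subscribed labelers.

    post          -- dict with a "labels" map: {labeler_id: [label, ...]}
    subscriptions -- set of labeler ids this user has opted into
    preferences   -- per-label action chosen by the user, e.g. {"spam": HIDE}
    """
    severity = {SHOW: 0, WARN: 1, HIDE: 2}
    action = SHOW
    for labeler, labels in post.get("labels", {}).items():
        if labeler not in subscriptions:
            continue  # labelers the user never opted into have no power here
        for label in labels:
            candidate = preferences.get(label, SHOW)
            if severity[candidate] > severity[action]:
                action = candidate
    return action

post = {"text": "...", "labels": {"community-mods": ["spam"], "other-labeler": ["nsfw"]}}
print(moderate(post, {"community-mods"}, {"spam": HIDE, "nsfw": WARN}))  # hide
```

The point of the design is the middle layer: moderation power sits with communities (the labelers) and with users (the subscriptions and preferences), rather than solely with one central company.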

I think this kind of vision seems exactly the right one for an organization like Bluesky to have:

A company is an efficient structure for building out a cohesive vision of how things should work, but locking users into our systems would be antithetical to our mission. An open commons can’t be governed at the sole discretion of one global company. We offer services like professional moderators so that we can help protect people and provide a good experience, but we shouldn’t exert total control over everyone’s experience, for all time, with no alternative. Users should be able to walk away from us without walking away from their social lives.

The reason we’re building in decentralization is because we observed that business interests and the open web have a habit of coming into conflict. Third-party developers often get locked out. Moderation policies come into conflict with the diverse interests and needs of different groups of users. Ads push towards algorithms that optimize for engagement. It’s a systemic problem that keeps playing out as centralized social media companies rise and fall.

On Bluesky itself, the lead developer, Paul Frazee, noted that they view the future company as a potential adversary, and are designing accordingly. That, alone, is a fascinating perspective to have on things, and one that certainly makes sense in the age of enshittification. And, unlike the many companies that start on the open web and later come into conflict with it, pulling up the ladder behind them to protect a moat, Bluesky is trying to design its systems in a way that protects them from its own future attempts at enshittification:

Even when things are working correctly on social platforms, there are weird dynamics caused by people’s relationships being mediated by a single company. The Internet is pretty obviously real life in the sense that its management has real-world consequences. When these places control our identities and our ability to connect and to make money, having no way out from the founding company is a precarious situation. The power difference is daunting.

The goal of Bluesky is to rebuild social networking so that there’s not a lock-in to the founding company, which is us. We can try to provide a cohesive, enjoyable experience, but there’s always an exit. Users can move their accounts to other providers. Developers can run their own connected infrastructure. Creators can keep access to their audiences. We hope this helps break the cycle of social media companies coming into conflict with the open web.

Now, some users point to the complex onboarding of Mastodon, or the “WTF how does any of this work?” nature of nostr, and worry that any decentralized/federated system has to be confusing. And that user unfriendliness, in some weird way, acts as a moderation tool in its own right, by keeping communities somewhat smaller. But it also keeps communities… smaller. So Bluesky has a different vision. A surprisingly refreshing and honest one:

A great experience should be simple to use. It shouldn’t be overly complex, and there should be sensible defaults and well-run entry points. If things are going well, the average user shouldn’t have to notice what parts are decentralized, or how many layers have come together to determine what they see. However, if conflict arises, there should be easy levers for individuals and communities to pull so that they can reconfigure their experience.

A great experience should recognize that toxicity is not driven only by bad actors. Good intentions can create runaway social behaviors that then create needless conflict. The network should include ways to downregulate behaviors – not just amplify them.

A great experience should respect the burden that community management can place on people. Someone who sets out to help protect others can quickly find themselves responsible for a number of difficult choices. The tooling that’s provided should take into account ways to help avoid burnout.

A great experience should find a balance between creating friendly spaces and over-policing each other. The impulse to protect can sometimes degrade into nitpicking. We should drive towards norms that feel natural and easy to observe.

A great experience should reflect the diversity of views within the network. Decisions that are subjective should be configurable. Moderation should not force the network into a monoculture.

Finally, a great experience should remember that social networking can be pleasant one day and harsh the next. There should be ways to react to sudden events or shifts in your mood. Sometimes you need a way to be online but not be 100% available.

There is no perfect content moderation solution out there. There is no whiz-bang, simple technical solution to the messiness that is human beings. As I’ve said many times, so many trust & safety dilemmas are really societal problems that we think are new, or that we think need to be solved by internet companies, just because they’re appearing through screens over the internet.

And, of course, nothing that Bluesky is working on may turn out to work, or matter. It’s still a small operation, and some of these ideas are completely untested. But, at the very least, it is presenting some pretty thoughtful ideas in an open way, and trying to think through the real consequences of what it’s creating here. And that, alone, is incredibly refreshing.

* The creator of nostr apparently does not believe moderation should happen at the client level, but when I asked him how relay operators could express their moderation rules, he suggested it didn’t matter, since relays weren’t moderating anyway. Of course, since then I’ve noticed that nostr is being overrun with cryptocurrency spam, so at some point people there are going to realize that something needs to be done.

Companies: bluesky


Comments on “Bluesky Continues To Explore More Creative Moderation Plans Openly”

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Seriously, I completely understand why a large chunk of the Fediverse would want nothing to do with Meta. Beyond the potential for explosive growth to the size of the Fediverse userbase (and all the problems that would present), there’s…well, [gestures in the direction of Facebook’s entire history].

This comment has been flagged by the community.

Arianity says:

In the discussion over Bluesky’s moderation, I’m surprised there aren’t more explicit comparisons to Reddit. It feels very similar to subreddits, in a lot of ways.

Reddit history also gives a lot of insight into potential problems/pitfalls (for example, subs like /r/fatpeoplehate or /r/jailbait showing the need for some sort of centralized lower bar; and on the flipside, the current debacle showing the threat of too much central control).

It has a lot of problems, but Reddit feels like maybe the closest thing we’ve seen to the hybrid approach, to date.

This comment has been deemed insightful by the community.
Bloof (profile) says:

There’s nothing weird about not wanting anything to do with Meta given everything they have done to damage the internet, online privacy, smaller competitors, the mental health of their users and democracy in general. The fact they’re forcing the instance admins that are willing to talk with them to sign NDAs is more than enough to set honking great alarm bells ringing about what they want to do. They do not play nice, they will not play nice and the actual users are in the right to view them with suspicion and to tell them to go eff themselves.

They do not want to grow the fediverse, they want to siphon away users before building walls around their garden. They do not deserve the benefit of the doubt.

jarocats (profile) says:

“And that user unfriendliness, in some weird way, acts as a moderation tool in its own right, by keeping communities somewhat smaller.”

Not weird at all, but true — and it also irritates the hell out of those of us capable but unwilling to invest our time and efforts into learning yet another new thing — at least not until we learn if this new thing is going to stick around or suddenly go out like so much flash paper.

I see gatekeeping. And I’m not so sure I trust any platform under the rule of the guy who said he was certain Elon Musk was The One to usher Twitter into a new utopian paradise. Or something.

Anonymous Coward says:

Re:

It is extremely unfortunate, really…

There are a lot of people who simply cannot do business anywhere outside of Twitter because other platforms do not have the tools they need, and it’s just super cruel to dismiss their concerns.

Yes, the struggling freelance artist, writer, etc…

And this is me grimly being reminded of how saturated the creative industries are and that maybe the fucking conservatives had a fucking point
