Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

from the zuckerman-v.-zuckerberg dept

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think that, with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, there would be little about the existing law that could take me by surprise. But the new Zuckerman v. Meta case, filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute, has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for any number of other reasons without ever really exploring the nature of this possible trojan horse. There’s a wide range of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest things standing in the way over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil claim under the Computer Fraud & Abuse Act (CFAA).

The most representative example of where this goes wrong is Facebook’s lawsuit against Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed with Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to block interoperability/middleware with law has been a major (perhaps the biggest) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies that would make the market more competitive and pull down some of the walls of walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work reimagining digital public infrastructure, which I keep meaning to write about but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.
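To make the mechanics concrete, here is a rough sketch of how a tool like this plausibly works: a browser extension content script that automates clicks on the platform’s own unfollow controls. This is purely illustrative and not Barclay’s or Zuckerman’s actual code; the selectors and timing are invented, and Facebook’s real markup is both different and constantly changing.

```typescript
// Hypothetical sketch of an "unfollow everything" content script.
// NOT the real tool: selectors and pacing are invented for illustration.
async function unfollowEverything(): Promise<void> {
  // Collect every control whose visible text looks like "Unfollow".
  const candidates = Array.from(
    document.querySelectorAll<HTMLElement>('[role="menuitem"], [role="button"]')
  ).filter((el) => /\bunfollow\b/i.test(el.textContent ?? ""));

  for (const control of candidates) {
    control.click();
    // Pause between clicks so the page can update and the automation
    // behaves more like a (very patient) human.
    await new Promise((resolve) => setTimeout(resolve, 500));
  }

  console.log(`Attempted to unfollow ${candidates.length} items.`);
}

unfollowEverything();
```

The point of the sketch is that the tool operates entirely through the user’s own logged-in browser session, which is precisely the kind of user-directed automation that CFAA threats have targeted.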

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior.

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A), which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, many of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0.

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. On first reading the case, my reaction was that it felt like one of those slightly wacky academic law journal articles law professors sometimes write, laying out some far-out theory no one has ever really thought about. This one, though, comes in the form of a lawsuit, so at some point we’ll find out how the theory actually holds up.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether you can even make use of 230 in an affirmative way like this. 230 is typically raised as a defense to get cases thrown out, not wielded proactively in a suit for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, when people ask Jeff how 230 should be reformed, he has joked that they should just fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends that the typo isn’t there, quoting the statute as if it said “paragraph (A)” where it actually says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude on one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic about anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly bailed out of a case in which it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that, for all of its AI efforts, it has been scraping everyone else. And maybe someone high up at Meta suddenly realized how it was going to look in court when it got sued for all that AI training scraping, if the plaintiffs pointed out that at the very same time it was suing others for scraping its own properties.

For me, I suspect the decision not to appeal might be more about a shift in philosophy by Meta and perhaps some of the other big platforms than it is about their confidence in their ability to win this case. Today, perhaps more important to Meta than keeping others off their public data is having access to everyone else’s public data. Meta is concerned that their perceived hypocrisy on these issues might just work against them. Just last month, Meta had its success in prior scraping cases thrown back in their face in a trespass to chattels case. Perhaps they were worried here that success on appeal might do them more harm than good.

In short, I think Meta cares more about access to large volumes of data and AI than it does about outsiders scraping their public data now. My hunch is that they know that any success in anti-scraping cases can be thrown back at them in their own attempts to build AI training databases and LLMs. And they care more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeds here. Their concern is that the same theory might immunize bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, in which companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy eventual use of the data that would still (hopefully?) violate certain laws, not the access to the data itself. Still, the question of whether this kind of proactive immunity could end up shielding bad actors is at least worth thinking through.

Either way, this is going to be a case worth following.


Comments on “Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?”

Anonymous Coward says:

Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools

One of the ways web sites are tested before being released to the public is by tools that look at the elements on the screen and insert input events (mouse movement/clicks, keyboard hits) as if the user were entering them.

Some of that involves not simply looking at JavaScript or HTML, but analyzing the display image and/or comparing it against reference values. That is, analogous to how humans do it.

Properly done, the only way the web site being analyzed/tested could determine it is being examined by a program is by asking the OS what is running.

All this is leading up to: in order to block tools such as this, implementers of “technical measures” would need to be so restrictive that it would degrade the user experience. Some providers might be willing to do that, of course. But it is definitely in “shooting yourself in the foot” territory.
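To illustrate the commenter’s point, here is a minimal sketch of that style of testing using Playwright, a real browser-automation library. The URL and the reference-image path are placeholders, and a real visual-regression setup would use a proper image-diffing step rather than an exact byte comparison.

```typescript
// Sketch: drive a page with synthetic input events and compare the
// rendered pixels against a stored reference, analogous to how a human
// checks the screen rather than reading the HTML.
import { chromium } from "playwright";
import { readFileSync } from "fs";

async function main(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/"); // placeholder URL

  // Insert input events as if a user were producing them.
  await page.mouse.move(200, 150);
  await page.mouse.click(200, 150);
  await page.keyboard.type("hello", { delay: 120 });

  // Analyze the display image instead of the markup.
  const screenshot = await page.screenshot();
  const reference = readFileSync("reference.png"); // placeholder path
  console.log(screenshot.equals(reference) ? "matches reference" : "differs");

  await browser.close();
}

main();
```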

Anonymous Coward says:

Re:

Some providers might be willing to do that, of course.

I’ve seen websites disable right clicking by default and then call it a premium feature they want you to pay for. If you’re just in it for the money and you don’t care about your users, it’s definitely an (often easily defeated) option.
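For the curious, “disabling right click” is usually nothing more than cancelling the browser’s contextmenu event, which is also why it’s so easily defeated (turn off JavaScript, or remove the handler in devtools). A one-liner sketch:

```typescript
// The entire "premium feature": swallow the right-click menu.
document.addEventListener("contextmenu", (event) => {
  event.preventDefault();
});
```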


Rico R. (profile) says:

Wait a dang second...

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

Are you saying that, perhaps you, Mike, are, gasp, wrong about a part of Section 230? That other tech industry lawyers you know are also wrong? What topsy-turvy world did I just wake up in?

Nick-B says:

Re:

I was just coming down here to the comments to bring that up. With the utter hostility that websites have against ad-blockers, I can’t help but feel that this paragraph is on my side:

I find ads on a lot of sites EXTREMELY “objectionable”, and am irked at sites’ attempts to prevent me from avoiding them. Mostly, it’s the animation, constantly shifting there on the sidebar, repeatedly drawing my eyes away from the site’s content for YET MORE denture cream (or whatever else these ads seem to think I am suffering from).

But ad blockers also protect us from malicious ads that attempt to hijack browsers or redirect you to hostile sites.
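For context on how simple the crudest layer of this is: cosmetic filtering just removes page elements matching known ad selectors. A minimal sketch follows; the selectors here are invented placeholders, and real blockers like uBlock Origin ship large curated filter lists and also block the ad requests at the network layer, which is what actually stops malicious payloads.

```typescript
// Hypothetical cosmetic ad filter. Selector list is a placeholder;
// real blockers use community-maintained lists with thousands of rules.
const AD_SELECTORS = [".ad-banner", "[data-ad-slot]", "#sidebar-ads"];

function removeAds(): void {
  for (const selector of AD_SELECTORS) {
    document.querySelectorAll(selector).forEach((el) => el.remove());
  }
}

// Re-run as the page mutates, since ads are often injected after load.
new MutationObserver(removeAds).observe(document.body, {
  childList: true,
  subtree: true,
});
removeAds();
```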


Koby (profile) says:

Exactly, (c)(2)(B) was put there to protect external obscenity filters. Apparently, no one considered the possibility that a platform’s own content feed would become so obnoxious that adults might seek an external filter.

Presuming that the typo (correctly, imo) gets read as pointing to (c)(2)(A), and the lawsuit is successful, its use for promoting middleware might be limited. While (c)(2)(B) does allow the restriction of material, it doesn’t seem to allow other actions such as retrieval or modification. Perhaps this would be a boon for ad blockers, but most other middleware clients might not automatically be protected, unfortunately.

Ronald Davenport says:

reining in big tech without censorship - a first step

You know the cliche “if you’re not paying for the product then you are the product.” By this reasoning, consumers are subsidizing businesses by allowing businesses to harvest their data.
Requiring consumers to pay for the content they consume is fair. It’s also not the way the web was intended to work. A compromise: limited liability in exchange for forgoing advertising revenue.
Currently, under section 230 of the Communications Decency Act, platforms aren’t liable for third-party content on their platform. Amending section 230 to provide that a platform is not liable for content on its platform if the platform does not seek, accept, place or facilitate the placement of advertising would go a long way towards re-establishing the web as it used to be. Revenue obtained from subscriptions would still be covered by the limited liability protection.
With this approach, the business reason for harvesting user data — advertising — collapses. If a web business wants to keep advertising revenue, then it would have to accept liability for the content on its website. This approach also has the added effect of supporting local journalism by allowing it to re-capture the local advertising dollars lost to web businesses. Further, this approach doesn’t involve censorship since publishers are free to publish whatever they choose — but it means that the general consumer public isn’t paying for the content unless they choose to do so directly with their subscription dollars.
Your thoughts?

Samuel Abram (profile) says:

Re:

You know the cliche “if you’re not paying for the product then you are the product.” By this reasoning, consumers are subsidizing businesses by allowing businesses to harvest their data.

I pay for Disney, Netflix, Paramount+, etc. and they still harvest my data. Hell, I can see the statistics on the backends of bandcamp and my streaming services with CDBaby. Yet all those cost $$$.

I would say that cliché is wrong.

Misha Hill says:

You've confused declaratory judgment with advisory opinion

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

What you’re describing is called an Advisory Opinion, and yes, courts don’t like those. A Declaratory Judgment is where the controversy does exist, but the defendant files the suit first to get the court to say it’s not liable. The present case is on the line between the two — if there’s no dispute yet, then it’s an advisory opinion and no good; if there is a dispute, then they have standing to bring a DJ. That will be one of the first things the court sorts out.

Anonymous Coward says:

Re: You've confused declaratory judgment with advisory opinion

So, suppose the case does get tossed, and then Zuckerman releases his tool, and gets sued. I guess we now know how he plans to defend himself against such a suit.

He has a very novel interpretation right now, but once it gets tossed and he brings it up a second time, it won’t be nearly as novel anymore. (Or am I just overthinking this?)


Anonymous Coward says:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

These are the 26 words that Stephen T. Stone has previously argued should be ignored when someone retweets defamatory content they don’t know to be untrue.

Richard Reisman (profile) says:

Hidden in plain sight?

This is a very nice analysis of a very interesting and potentially transformative case (with legal details far beyond me)…

At the same time, at a high level, this was hidden in plain sight — as Chris Riley and I wrote in 2022, in “Delegation, Or, The Twenty Nine Words That The Internet Forgot.” The title refers to the preamble to 230:

“It is the policy of the United States… to encourage the development of technologies which maximize user control over what information is received by individuals… who use the Internet…”

We took that as a US “policy” that clearly implies that the platforms should enable middleware. But we assumed that enforcing that policy would require regulators to apply that policy as mandated by Congress, or that Congress pass more specific enabling legislation, like the Senate ACCESS Act.

So, I am very happy to see Ethan’s request for a declaratory judgement as a brilliant and novel gambit – with a basis that has been in plain sight, as we had highlighted.

dbrower (profile) says:

Presents an interesting angle for Bluesky labellers

I’m at the beginning stages of a project that may try to use atproto and bluesky labellers to provide shared moderation to a completely different social platform.

The extension would be able to report things to a labeller, and get things from the labeller and block them on that platform, using the labeller as the database of reports and blocks.

Now, it appears the labeller has to be spun up somewhere, and is therefore providing a service to the extension users.

Questions arise, which seem to apply to Bluesky as well.
Suppose somebody feels improperly cancelled, or libeled/slandered, by being on a block list kept by the labeller. Who do they sue — the operator of the labeller, whoever pushed the button that made the report, or whatever person/thing handled the report and put it on the block list?

I expect there -will- be litigation on this as soon as third-party labellers get used and the aggrieved blossom as a result.

If the S.230 argument made here holds, the labeller is off the hook, and the person who pushed the report button is liable. Since the labelling system requires credentials, that person is probably possible to locate, unlike a certain Cow.

This would have a chilling effect on attempts to report potentially problematic things that people might want blocked. “Joe Blow reported us for being Nazis and we’re not Nazis and don’t want to be on a list of Nazis, Waaah!”. Yes, Streisand would apply, but Joe Blow doesn’t want to have to deal with it, and might choose not to report “Storm88” as a result.
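For readers unfamiliar with atproto, the fetch side of what dbrower describes could plausibly look like the sketch below, which assumes the documented com.atproto.label.queryLabels XRPC endpoint that labellers expose. The labeller host, the URI pattern, and the “spam” label value are all placeholder assumptions; a real integration would also handle auth, pagination, and the mapping of label values to block actions.

```typescript
// Rough sketch: pull labels from a labeller and derive a block set for
// use on another platform. Endpoint is com.atproto.label.queryLabels;
// everything else here (host, pattern, label value) is a placeholder.
interface Label {
  src: string; // DID of the labeller that issued the label
  uri: string; // labeled subject (an account DID or record URI)
  val: string; // label value, e.g. "spam"
}

async function fetchBlockSet(
  labellerHost: string, // e.g. "labeller.example.com" (hypothetical)
  uriPattern: string // pattern of subjects to query, per the lexicon
): Promise<Set<string>> {
  const url =
    `https://${labellerHost}/xrpc/com.atproto.label.queryLabels` +
    `?uriPatterns=${encodeURIComponent(uriPattern)}&limit=250`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`labeller returned HTTP ${res.status}`);
  const body = (await res.json()) as { labels: Label[] };

  // Which label values translate into blocks is pure app policy.
  return new Set(body.labels.filter((l) => l.val === "spam").map((l) => l.uri));
}
```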
