The Imbued Test, Or How To Know Whether Section 230 Applies

from the 230-or-not-230-that-is-the-question dept

The biggest mistake people make about Section 230 is thinking that it is somehow a complicated law. In reality, its operation is not all that complex. Accordingly, determining whether it applies to any particular situation should not be all that difficult, even when we think about hard or edge cases. Ultimately what we care about is who imbued the content at issue with its objectionable quality. Framed this way, the question gets right to the heart of what Section 230’s operation pivots on and keeps us from getting sidetracked by other considerations that might weaken the statute’s critical protection by suddenly making it seem far less applicable than it is.

When people incorrectly accuse Section 230 of being some sort of special development in law, one thing they frequently overlook is how, at its core, Section 230 leaves in place something the law has long recognized: direct liability. If someone has done something wrong, then the law can hold them responsible for it. Section 230 does nothing to overturn that general rule. What has always been more exceptional, however, is the notion of secondary liability, or whether someone can be held responsible for something that someone else has done wrong. The law has at times recognized such liability, although until recently it was largely an exception applied sparingly, and there are many good reasons for such restraint, including that it generally offends our sense of fairness to hold someone liable for something someone else has done.

Furthermore, and perhaps more importantly, holding a party liable for another’s wrong also threatens to chill whatever that party did to put itself in association with the actual wrongdoer, which is a problem if it’s something that by and large we would ordinarily like it to do – like, in the case of Internet platforms, being available to help facilitate all the non-problematic content they can (as well as minimize all the problematic content they can). Section 230 acts as a statutory barrier to finding platforms secondarily liable for how others have used their services, because we want to make sure platforms can be in the position to supply those services.

When it comes to applying Section 230, however, the complication arises from figuring out “who did it,” or, more specifically, who created the content in question at the center of a dispute. If it were the platform itself, then Section 230 would not apply, because of course the platform should have to answer for its own actions. But if it were some other third party who was responsible, then Section 230 should apply to insulate the platforms from any secondary liability arising from these others’ behavior.

Sometimes the answer to the “who did it” question is obvious. But where the relationship between platforms and the content they facilitate is more nuanced or sophisticated, there can be the temptation to overly complicate the answer to “who created the content” to find any assistance provided to those who did create it as somehow sharing in responsibility for that content creation. But the problem with too easily finding that platforms have somehow played an authorial role in the development of others’ content is that it can eat the whole rule and make it so that Section 230 could almost never apply. Simply asking whether others’ content would exist “but for” the platform is not enough because the answer would of course almost always be no. Indeed, that’s why we bother to protect platforms with Section 230 in the first place, because people need them in order to be able to create their online content. But if that important help platforms provide others to create their own content could be found to be the authorial cause of the content those others created, then Section 230 could never apply to platforms to enable them to provide that help.

When we ask “who did it,” or who created the content at issue and should be directly responsible for it, we must actually pose a more careful question if we want an answer that leaves Section 230 the reliably protective law it was intended to be. Reframing the inquiry into “who created the content” as “who imbued the content at issue with the objectionable quality” applies that care: it zeroes in on what we need to know to figure out whether Section 230 applies by keeping us focused on the specific objection at issue, the act of making the content objectionable, and the full range of objections content could excite, valid or otherwise, any of which could prompt a Section 230 defense.

Specificity is important because, as discussed above, if we think about Section 230 in terms of content creation generally it can too easily make it seem like any platform has had a hand in creating the content it facilitates just by virtue of having facilitated it, which would therefore make Section 230 useless. For Section 230 to be meaningful, the inquiry into authorship needs to be tied to the specific content at issue. But more than that, it should also be focused on the specific objectionable quality of that content, because if there is to be any liability arising from that content it will be over that objectionable quality and not any of the content’s non-objectionable aspects. It therefore would make little sense to condition platforms’ Section 230 protection – and potentially risk burying them in litigation – premised on the platforms’ alleged role in creating the content’s non-objectionable aspects when those aspects wouldn’t end up mattering for liability purposes anyway.

Remaining focused on the act of making objected-to content so objectionable also helps us not get distracted by the things platforms do to intermediate others’ content in a useful way. Some platforms, for instance, tend to attract certain types of content that may be particularly contentious and thus prone to objection, but if merely attracting contentious content could be deemed the same as creating it, then platforms would be deterred from being available to facilitate any of it, no matter how lawful or beneficial that content may be. Furthermore, as a practical matter, it is often desirable for platforms to moderate and curate content created by others to better serve their users, including by prioritizing the display of what they might want to see and removing what they don’t.

Section 230 also protects and even encourages platforms to perform these tasks, but engaging in them sometimes necessarily requires platforms to interact heavily with the content they are facilitating. If that interaction could foreclose a Section 230 defense, platforms would be deterred from intermediating others’ content as effectively as we would want, since how they intermediated it could jeopardize the liability protection they depend on to do it. Logically, such interaction also cannot amount to content creation, because the content being interacted with obviously already exists. So if we instead focus the content creation question on who imbued the content with its objectionable quality, that inquiry will be more useful in helping us see who should be responsible for it. After all, it can’t have been the platform interacting with the content if that quality had already been there, and so responsibility should still lie with whoever caused it to be there in the first place.

Meanwhile, one of the keys to this “imbuing” test comes from phrasing it as an inquiry into the particular “objectionable quality” of the content at issue, and not, say, the content’s “wrongfulness” or “illegality,” because people can easily object to expression that is perfectly legal. Section 230 works to spare platforms from having to defend themselves against any attempt to hold them liable for content another has created, regardless of how valid the complaint. It is not just about relieving platforms from ultimate liability but about sparing them the cost of expending resources to defend against any challenge over content created by others, and so evaluating whether Section 230 should apply should not depend on the specific complaint raised, just that there was a complaint raised.

This “imbued” test leaves us with a measure that can still hold platforms responsible when truly warranted, but not so casually that they will no longer be able to perform the needed function of intermediating others’ content. The test is also extremely flexible. As we think about new platforms, their services, and the new technologies that enable them, the test, like Section 230 itself, offers a framework that can readily scale to help determine where liability should lie: directly on the party responsible and not on the platforms whose services Section 230 exists to protect.
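The decision logic the article describes can be reduced to a small sketch. This is purely illustrative – all function and parameter names are hypothetical, and it deliberately omits the statute’s actual definitions – but it captures the article’s core point: the pivotal question is not “but for the platform, would the content exist?” but “who imbued the content with the specific quality being objected to?”

```python
# Illustrative sketch only: the "imbued" test as a decision procedure.
# All names are hypothetical and not drawn from the statutory text.

def section_230_applies(platform_imbued_objectionable_quality: bool,
                        created_by_third_party: bool) -> bool:
    """Return True if Section 230 would shield the platform from the claim."""
    if platform_imbued_objectionable_quality:
        # Direct liability remains: the platform answers for its own acts.
        return False
    if created_by_third_party:
        # Secondary liability is barred: responsibility stays with
        # whoever made the content objectionable in the first place.
        return True
    # Content not attributable to a third party and not imbued by the
    # platform falls outside this sketch's scope; no shield assumed.
    return False


# A platform that merely hosts, ranks, or moderates a user's post did not
# imbue it with, say, its defamatory sting, so the shield applies:
assert section_230_applies(platform_imbued_objectionable_quality=False,
                           created_by_third_party=True)

# A platform that itself authored the objectionable material gets no shield:
assert not section_230_applies(platform_imbued_objectionable_quality=True,
                               created_by_third_party=False)
```

Note that the first parameter asks about the specific objectionable quality, not about content creation generally – which is exactly how the test avoids the “but for” trap the article warns about.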



Comments on “The Imbued Test, Or How To Know Whether Section 230 Applies”

29 Comments

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

making people defenseless against anonymous attacks on their reputation.

If your reputation is so precarious that an anonymous comment can not be countermanded by your own speech, then the anonymous comment and §230 are the least of your problems.

Anathema Device (profile) says:

Re: Re:

“If your reputation is so precarious that an anonymous comment can not be countermanded by your own speech”

Do you know how many people have been targeted by ‘do-gooders’ after being accused of child sex abuse and worse?

I mean, there have been paediatricians targeted because someone confused the word with ‘paedophile’.

There are a lot of nuts out there, and depending on the forum and the audience – and the victim – anonymous accusations can and do ruin ordinary people’s lives.

Anonymous Coward says:

Re: Re: Re:

You might want to be aware of the context surrounding this particular poster.

Actually, I’ll do one better: it’s been explained to you before.

There’s certainly value in a discussion about overzealous loons who actively use the “pedo” claim as a universal “I win” button, but John Smith does not debate Section 230 on a basis of good faith.

Anonymous Coward says:

Re: Re: Re:3

The person you responded to isn’t John. Apologies on that front, I should have specified that the original commentator was John Smith.

The points you raised about a person getting their reputation sullied are good points, to which I have no criticism. My point is, if we’re going to talk about what online commentary can do to a person’s reputation, John Smith’s prompts are not premises made in good faith to spark said discussion. John Smith’s premises are typically based on cases like doctors getting review bombed by throwaway user accounts from Russia, waitresses having their names found online after they turned down advances by patrons, that sort of thing. They’re not things that usually happen, or are protected by Section 230. And yet he’ll still do it.

Anathema Device (profile) says:

Re: Re: Re:4

“The person you responded to isn’t John”

That person had been flagged as a troll before I even saw the post, and is patently a nut even if I didn’t connect the dots. I am trying very hard not to reply to trolls.

(In fact that explanation you linked to, I’m not sure I had seen? There was another one about the same person and I replied to that one complaining about the inability to tell one anon from anon.)

I was simply addressing the idea by yet another anon (I really wish you lot would at least each pick a user name/nom de plume) that anonymous comments are harmless to reputations.

Ironically, the Paediatrician/Paedophile confusion turns out to be a minor incident which wide reporting cemented into an urban myth.

But it is certainly true that people have been attacked and even murdered after being identified incorrectly as child abusers.

Bob Hansen (profile) says:

"Imbuing" is a pretty loaded test, too

In looking at content created by individual users, I think the Imbuing Test is a good one, and in line with the spirit of Section 230 – apportioning blame to the creator.

I immediately become concerned with the Imbuing Test when we apply it to ranking and sorting algorithms, though. One could make the 1st amendment-supported case that my Facebook feed is a thing that is created by Facebook, and they should be allowed to make it what they want. Good. But when the selection and ordering of my Facebook feed becomes Imbued with badness, because Facebook intentionally picked the alcohol stories to send to an alcoholic, or the radicalization stories to send to a vulnerable target, couldn’t the Imbuement Test assign the “objectionable quality” of my feed (distinct from the objectionable qualities of the articles themselves) to Facebook?

That raises the question of “Well, shouldn’t they?” Should Facebook carry the blame for the alcoholic/alcohol story association above? If a Facebook whistleblower has a paper trail showing someone thought it would be funny to send radicalization videos to every male 16-18 years of age, should they be held liable? Would that make the Imbuement Test stronger?

Mike Masnick (profile) says:

Re:

I immediately become concerned with the Imbuing Test when we apply it to ranking and sorting algorithms, though. One could make the 1st amendment-supported case that my Facebook feed is a thing that is created by Facebook, and they should be allowed to make it what they want. Good. But when the selection and ordering of my Facebook feed becomes Imbued with badness, because Facebook intentionally picked the alcohol stories to send to an alcoholic, or the radicalization stories to send to a vulnerable target, couldn’t the Imbuement Test assign the “objectionable quality” of my feed (distinct from the objectionable qualities of the articles themselves) to Facebook?

But from there, the question raised with the imbuing test is where is the illegality? Recommending things is not illegal…

Anonymous Coward says:

Re: Re: If it goes farther than "recommending", recommending isn't the right word

It’s not just “recommending things”. It’s selecting things that people will see and not see. Gatekeeping would be a better word for it. In the Facebook example, they don’t send a bunch of material with some flag saying it’s recommended. What Facebook selects is all the user sees. And for much of the material, they recommend it based on being paid to match content to user predisposition. So, in the example of sending tempting content to alcoholics, they are purveying the tempting content, often, to make money out of the connection being successful. Indeed, Facebook offers the option of placing product ads with a tracking pixel, so they only get paid if the recipient buys the product.

So, if you’re right about sec. 230, it means, for “platforms” it’s okay to do exactly what people in other contexts are not permitted to do. Newspaper publishers are indeed responsible for ads, for example, if they meet the legal standard for libel or incitement, even though, all they did was to run an ad someone else made up for money. It never destroyed the newspaper business. To the contrary, that was done mostly by allowing on-line platforms to sell ads against the newspaper’s content, siphoning off a huge share of the advertising dollars available and leaving newspapers financially unsupported. But I digress.

If this “imbuing” test is how we should treat liability for purveying content, to be consistent, shouldn’t we apply it to all sorts of media enterprises? Why should the doctrine of “no responsibility” apply to taking money for promoting material on an internet platform, but not for newspapers and publishers of all content to have similar protection? Or is this just a way to favor internet communications over other types of content promoters?

Put another way, the “imbuing” theory would allow Internet platforms (but not other alternative communication channels) to externalize costs caused by the extremely profitable business of extremely sophisticated targeting of content to people most susceptible to be incited or harmed by it. That’s the cost of divorcing responsibility for content from efforts to promote that content. Maybe not allowing this would restrict freedom of expression online, as you say. So, ultimately you have to choose or make some compromise, not just evade the question by novel theories like “content imbuer liability”.

Mike Masnick (profile) says:

Re: Re: Re:

It’s not just “recommending things”. It’s selecting things that people will see and not see. Gatekeeping would be a better word for it.

But, again, there’s no law breaking in recommending things. Or gatekeeping. Not sure why that’s even a question. Recommending things is, inherently, protected speech. I can recommend bad, or even dangerous things, and it’s still protected.

Newspaper publishers are indeed responsible for ads, for example, if they meet the legal standard for libel or incitement, even though, all they did was to run an ad someone else made up for money. It never destroyed the newspaper business.

But that’s different in multiple ways. Newspapers run, what, 30 ads a day? Social media runs millions. It’s multiple orders of magnitude difference. It’s not even the same thing.

And, again, you’re wrong about what you claim, because existing law does not automatically make them liable. It’s only true if they have knowledge of the violative nature of the content. That’s why the publisher of an encyclopedia of mushrooms was found not liable when the encyclopedia recommended eating a deadly mushroom. Because the publisher could not have known about the violation, so the 1st Amendment forbade holding them liable (Winter v. Putnam if you’re looking for a cite).

And that ruins your theory of how all this works.

If this “imbuing” test is how we should treat liability for purveying content, to be consistent, shouldn’t we apply it to all sorts of media enterprises? Why should the doctrine of “no responsibility” apply to taking money for promoting material on an internet platform, but not for newspapers and publishers of all content to have similar protection? Or is this just a way to favor internet communications over other types of content promoters?

Again, not sure why this is that difficult to understand. The imbuing test makes sense because of the scale, and because of the knowledge requirement I discussed above.

Put another way, the “imbuing” theory would allow Internet platforms (but not other alternative communication channels) to externalize costs caused by the extremely profitable business of extremely sophisticated targeting of content to people most susceptible to be incited or harmed by it. That’s the cost of divorcing responsibility for content from efforts to promote that content. Maybe not allowing this would restrict freedom of expression online, as you say. So, ultimately you have to choose or make some compromise, not just evade the question by novel theories like “content imbuer liability”.

This is incorrect, though. The imbuer test also ENABLES platforms to be more aggressive in fixing problems, because without it, with the liability regime you want, you STILL RUN INTO THE KNOWLEDGE problem. And if you now hinge liability entirely on knowledge, congrats, you’ve now created incentives for platforms to ALWAYS look away and never gain knowledge.

Anonymous Coward says:

Re: Re: Re:2

When was the last time these platforms have been aggressive in fixing problems? They usually wait till the last minute, then finally act after much of the damage has been done, apologize, and then we have the same song and dance about how it’s important that Section 230 gives them the freedom to wait until the last minute and let the harm occur for as long as it did.

Anathema Device (profile) says:

Re: Re:

Mike, how does that work with paid advertising?

If my YouTube or Facebook is suddenly serving me ads for alcohol, or gambling, or Covid deniers, or whatever, does Section 230 protect them in the US?

[We don’t have an equivalent law in Australia, but our advertising standards authority is weak as piss. On the other hand, our High Court decided Facebook was liable for comments made on its platform!]

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re:

If my YouTube or Facebook is suddenly serving me ads for alcohol, or gambling, or Covid deniers, or whatever, does Section 230 protect them in the US?

Yes. But mostly because none of those things violate the law. But even if they did, the liability is on the advertiser, and that would be true even absent 230, because of the knowledge issue I described in a comment above.

It helps to recognize that there’s a difference between “bad” and “illegal.” And after separating that out, the purpose of 230 is to determine who is actually to blame for the illegal part. Not the “bad” part.

This comment has been deemed insightful by the community.
sumgai (profile) says:

Re: Loaded Imbuement

*One could make the 1st amendment-supported case that my Facebook feed is a thing that is created by Facebook, and they should be allowed to make it what they want.

And one would correctly call that Facebook-made feed a tool, and nothing more. As in all things social media, there are tools, and there are users of those tools. That’s the bottom line.

So how does so-called objectionable content get into one’s feed? Very, very simple – one asks for it. By selecting a suggested feed item, and/or self-selecting other things to read, you literally inform the feed algorithm of your interests. And of course the algorithm is going to “make you happy” by feeding you more content in line with your known interests – Facebook would be silly to do otherwise. Remember, it’s not their job to introduce you to new stuff, it’s their job to bring in money. The applicable business rule would be “feed them the known” because that’s guaranteed to bring in more money than “let’s see what they think of this unknown”. Simple economics.

[W]hen the selection and ordering of my Facebook feed becomes Imbued with badness, because Facebook intentionally picked the alcohol stories to send to an alcoholic, or the radicalization stories to send to a vulnerable target, couldn’t the Imbuement Test assign the “objectionable quality” of my feed (distinct from the objectionable qualities of the articles themselves) to Facebook?

Already answered by Mike, below, but to make it short and sweet: neither Facebook nor its algorithm intentionally picked anything, they simply followed the business rule as I laid out above. Therefore, Facebook doesn’t “know”, ahead of time, that you’re an alcoholic, or otherwise vulnerable in any sense of the word – it only “knows” what you’ve previously viewed.

Anonymous Coward says:

I’m curious if 230 also would likely apply when ads supplying malware are displayed without the express knowledge of the site owner. I suspect it would, but I’m not any sort of expert. The site should not be held liable for damage in the simple case where they farmed it out and just could not know, and the imbued test fingers the actual perp.
