House Democrats Decide To Hand Facebook The Internet By Unconstitutionally Taking Section 230 Away From Algorithms

from the this-is-not-a-good-idea dept

We’ve been pointing out for a while now that mucking with Section 230 as an attempt to “deal” with how much you hate Facebook is a massive mistake. It’s also exactly what Facebook wants, because as it stands right now, Facebook is actually losing users on its core product, and the company has realized that burdening competitors with regulations — regulations that Facebook can easily handle with its massive bank account — is a great way to stop competition and lock in Facebook’s dominant position.

And yet, for reasons that still make no sense, regulators (and much of the media) seem to believe that Section 230 is the only regulation to tweak to get at Facebook. This is both wrong and shortsighted, but alas, we now have a bunch of House Democrats getting behind a new bill that claims to be narrowly targeted to just remove Section 230 from algorithmically promoted content. The full bill, the “Justice Against Malicious Algorithms Act of 2021,” is poorly targeted, poorly drafted, and shows a near-total lack of understanding of how basically anything on the internet works. I believe that it’s well meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities. It’s an embarrassing release from four House members of the Energy & Commerce Committee who should know better (and at least three of the four have done good work in the past on important tech-related bills): Frank Pallone, Mike Doyle, Jan Schakowsky, and Anna Eshoo.

The key part of the bill is that it removes Section 230 for “personalized recommendations.” It would insert the following “exception” into 230.

(f) PERSONALIZED RECOMMENDATION OF INFORMATION PROVIDED BY ANOTHER INFORMATION CONTENT PROVIDER.—

(1) IN GENERAL.—Subsection (c)(1) does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider if—

(A) such provider of such service—
(i) knew or should have known such provider of such service was making a personalized recommendation of such information; or
(ii) recklessly made a personalized recommendation of such information; and
(B) such recommendation materially contributed to a physical or severe emotional injury to any person.

So, let’s start with the basics. I know there’s been a push lately among some — including the whistleblower Frances Haugen — to argue that the real problem with Facebook is “the algorithm” and how it recommends “bad stuff.” The evidence to support this claim is actually incredibly thin, but we’ll leave that aside for now. At its heart, “the algorithm” is simply a set of recommendations, and recommendations are opinions, and opinions are… protected expression under the 1st Amendment.

Exempting algorithms from Section 230 cannot change this underlying fact about the 1st Amendment. All it means is that rather than getting a quick dismissal of the lawsuit, you’ll have a long, drawn-out, expensive lawsuit on your hands, before ultimately finding out that of course algorithmic recommendations are protected by the 1st Amendment. For much more on the problem of regulating “amplification,” I highly, highly recommend reading Daphne Keller’s essay on the challenges of regulating amplification (or listening to the podcast I did with Daphne about this topic). It’s unfortunately clear that none of the drafters of this bill read Daphne’s piece (or if they did, they simply ignored it, which is worse). Supporters of this bill will argue that, in simply removing 230 from amplification/algorithms, this is a “content neutral” approach. Yet as Daphne’s paper detailed, that does not get you away from the serious constitutional problems.

Another way to think about this: this is effectively telling social media companies that they can be sued for their editorial choices of which things to promote. If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal, you might quickly recognize the 1st Amendment problems here. I could easily argue that the NY Times’ constant articles misrepresenting Section 230 subject me to “severe emotional injury.” But of course, any such lawsuit would get tossed out as ridiculous. Does flipping through a magazine and seeing advertisements of products I can’t afford subject me to severe emotional injury? How is that different than looking at Instagram and feeling bad that my life doesn’t seem as cool as some lame influencer?

Furthermore, this focus on “recommendations” is… kinda weird. It ignores all the reasons why recommendations are often quite good. I know that some people have a kneejerk reaction against such recommendations but nearly every recommendation engine I use makes my life much better. Nearly every story I write on Techdirt I find via Twitter recommending tweets to me or Google News recommending stories to me — both based on things I’ve clicked on in the past. And both are (at times surprisingly) good at surfacing stories I would be unlikely to find otherwise, and doing so quickly and efficiently.

Yet, under this plan, all such services would be at significant risk of incredibly expensive litigation over and over and over again. The sensible thing for most companies to do in such a situation is to make sure that only bland, uncontroversial stuff shows up in your feed. This would be a disaster for marginalized communities. Black Lives Matter? That can’t be allowed as it might make people upset. Stories about bigotry, or about civil rights violations? Too “controversial” and might contribute to emotional injury.

The backers of this bill also argue that the bill is narrowly tailored and won’t destroy the underlying Section 230, but that too is incorrect. As Cathy Gellis just pointed out, removing the procedural benefits of Section 230 takes away all the benefits. Section 230 helps get you out of these cases much more quickly. But under this bill, everyone will now add a claim under this clause that the “recommendation” caused “emotional injury,” and you’ll have to litigate whether or not you’re even covered by Section 230. That means no more procedural benefit of 230.

The bill has a “carve out” for smaller companies, but gets that wrong as well. It seems clear that the drafters either did not read, or did not understand, this excellent paper by Eric Goldman and Jess Miers about the important nuances of regulating internet services by size. In this case, the “carve out” is for sites that have 5 million or fewer “unique monthly visitors or users for not fewer than 3 of the preceding 12 months.” Leaving aside the rather important point that there is no agreed-upon definition of what a “unique monthly visitor” actually is (seriously, every stats package will give you different results, and now every site will have an incentive to use a stats package that undercounts, to get beneath the number), that number is horrifically low.
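
To make the measurement problem concrete, here’s a toy sketch (all data and definitions hypothetical, since the bill supplies none) showing how three equally defensible definitions of “unique monthly visitor” count the same month of traffic three different ways:

```python
# Toy illustration: one month of traffic, three defensible "unique
# monthly visitor" counts. The log entries and definitions are all
# hypothetical -- the bill defines none of this.

from collections import defaultdict

# (day_of_month, ip_address, user_agent)
log = [
    (1,  "10.0.0.1", "Firefox"),
    (1,  "10.0.0.1", "Chrome"),   # same IP, different browser
    (2,  "10.0.0.2", "Firefox"),
    (2,  "10.0.0.3", "Safari"),
    (15, "10.0.0.1", "Firefox"),  # returning visitor
    (15, "10.0.0.2", "Firefox"),  # returning visitor
]

# Definition 1: unique IP addresses seen during the month.
by_ip = len({ip for _, ip, _ in log})

# Definition 2: unique (IP, user-agent) pairs -- a common cookie-less proxy.
by_ip_ua = len({(ip, ua) for _, ip, ua in log})

# Definition 3: sum of *daily* uniques (how some dashboards report a
# "monthly" figure), which double-counts returning visitors.
daily = defaultdict(set)
for day, ip, _ in log:
    daily[day].add(ip)
summed_daily = sum(len(ips) for ips in daily.values())

print(by_ip, by_ip_ua, summed_daily)  # -> 3 4 5
```

Whichever definition a court eventually settles on, a site hovering near the threshold has every incentive to adopt the methodology that produces the smallest number.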

Earlier this year, I suggested a test suite of websites that any internet regulation bill should be run against, highlighting that bills like these impact way more than Facebook and Google. And lots and lots of the sites I mention get way beyond 5 million monthly views.

So under this bill, a company like Yelp would face real risk in recommending restaurants to you. If you got food poisoning, that would be an injury you could now sue Yelp over. Did Netflix recommend a movie to you that made you sad? Emotional injury!

As Berin Szoka notes in a Twitter thread about the bill, this bill from Democrats actually gives Republican critics of 230 exactly what they wanted: a tool to launch a million “SLAM” suits — Strategic Lawsuits Against Moderation. And, as such, he notes that this bill would massively help those who use the internet to spread baseless conspiracy theories, because THEY WOULD NOW GET TO SUE WEBSITES for their moderation choices. This is just one example of how badly the drafters of the bill misunderstand Section 230 and how it functionally works. It’s especially embarrassing that Rep. Eshoo would co-sponsor a bill like this, since it would create a lawsuit free-for-all for companies in her district.

Another example of the wacky drafting in the bill is the “scienter” bit. Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful. So in a bill like this, you’d expect the scienter requirement to be that the platform knew the information it was recommending was harmful. That’s the only standard that would even make sense (though it would still be constitutionally problematic). However, that’s not how it is in the bill. Instead, the scienter is… that the platform knows it recommends stuff. That’s it. In the quote above, the line that matters is:

such provider of a service knew or should have known such provider of a service was making a personalized recommendation of such information

In other words, the scienter here… is simply that you knew you were making personalized recommendations. Not that the content was bad. Not that it was dangerous. Just that you were recommending stuff.

Another drafting oddity is the definition of a “personalized recommendation.” It just says it’s a personalized recommendation if it uses a personalized algorithm. And the definition of “personalized algorithm” is this bit of nonsense:

The term ‘personalized algorithm’ means an algorithm that relies on information specific to an individual.

“Information specific to an individual” could include things like… location. I’ve seen some people suggest that Yelp’s recommendations wouldn’t be covered by this law because they’re “generalized” recommendations, not “personalized” ones, but if Yelp is recommending stuff to me based on my location (kinda necessary), then that’s information specific to me, and thus no more 230 for the recommendation.
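
To see just how low that bar is, here’s a hypothetical, bare-bones Yelp-style recommender (names and coordinates made up). The only user-specific input is a pair of coordinates, yet under the bill’s definition that alone appears to be “information specific to an individual,” making every result a “personalized recommendation”:

```python
import math

# Hypothetical, minimal "recommend nearby restaurants" sketch. The only
# user-specific input is the user's location -- yet under the bill's
# definition ("an algorithm that relies on information specific to an
# individual"), that alone seems to make the output "personalized".

restaurants = [
    ("Taco Spot",   37.77, -122.42),
    ("Noodle Bar",  37.80, -122.27),
    ("Pizza Joint", 37.76, -122.43),
]

def recommend(user_lat, user_lon, n=2):
    """Rank restaurants by straight-line distance to the user's location."""
    def dist(r):
        _, lat, lon = r
        return math.hypot(lat - user_lat, lon - user_lon)
    return [name for name, _, _ in sorted(restaurants, key=dist)[:n]]

# Two users in different places get different "personalized" results.
print(recommend(37.77, -122.42))  # -> ['Taco Spot', 'Pizza Joint']
print(recommend(37.80, -122.27))  # -> ['Noodle Bar', 'Taco Spot']
```

There is no profiling here at all, just “show me what’s close,” and the restaurant listings are classic third-party content. Under the bill’s definitions, that combination appears to be exactly what loses 230 protection.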

It also seems like this would be hell for spam filters. I train my spam filter, so the algorithm it uses is specific to me and thus personalized. But I’m pretty sure that under this bill a spammer whose emails are put into a spam filter can now sue, claiming injury. That’ll be fun.
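
The spam filter problem is easy to see in miniature. Here’s a hypothetical, drastically simplified user-trained filter: its entire model is built from one user’s own “spam”/“not spam” clicks, which makes it a “personalized algorithm” under the bill’s definition, while the mail it buries is “information provided by another information content provider”:

```python
from collections import Counter

# A drastically simplified, hypothetical user-trained spam filter.
# Its whole model comes from one user's own "spam"/"ham" labels, so it
# "relies on information specific to an individual" -- i.e., under the
# bill's definition it is a "personalized algorithm".

class TinySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text, is_spam):
        # Each "this is spam" click by *this user* updates the model.
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def is_spam(self, text):
        # Crude score: does the message lean toward words this user flagged?
        # (Counter returns 0 for words it has never seen.)
        score = sum(self.spam_words[w] - self.ham_words[w]
                    for w in text.lower().split())
        return score > 0

f = TinySpamFilter()
f.train("cheap pills buy now", is_spam=True)
f.train("meeting notes attached", is_spam=False)

print(f.is_spam("buy cheap pills"))        # -> True
print(f.is_spam("notes from the meeting"))  # -> False
```

Every “buried” spam email is a personalized recommendation-not-to-read, made with knowledge that the filtering is personalized, which is all the scienter the bill requires.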

Meanwhile, if this passes, Facebook will be laughing. The services that have successfully taken a bite out of Facebook’s userbase over the last few years have tended to be ones with a better algorithm for recommending things: like TikTok. The one Achilles heel that Facebook has — its recommendations aren’t as good as those of the new upstarts — gets protected by this bill.

Almost nothing here makes any sense at all. It misunderstands the problems. It misdiagnoses the solution. It totally misunderstands Section 230. It creates massive downside consequences for competitors to Facebook and to users. It enables those who are upset about moderation choices to sue companies (helping conspiracy theorists and misinformation peddlers). I can’t see a single positive thing that this bill does. Why the hell is any politician supporting this garbage?

Companies: facebook, yelp


Comments on “House Democrats Decide To Hand Facebook The Internet By Unconstitutionally Taking Section 230 Away From Algorithms”

74 Comments
This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

"70% of facebook accounts haven’t been updated or touched in more than 2 years."

That sounds like an interesting piece of information that would be fascinating to look into further, to see what methodology and data were used to confirm it, considering you claim that you’re not using the public data provided by Facebook themselves.

Please, link to the study so that I can research this further!

Scary Devil Monastery (profile) says:

Re: Re: Re:

Well, to be fair I’d have guessed it was far more than 70%. Unless an account – on any platform – is deliberately banned or purged, it’ll stick around forever. Users, meanwhile, move on. Blizzard kept bragging about their online WoW user base of twelve million until it was shown that some 90% of those accounts had been idle for years.

If Facebook only has 70% of its user account base idling, that’s actually a pretty good ratio.

PaulT (profile) says:

Re: Re: Re: Re:

"I’d have guessed"

I don’t care about guesses. If someone’s going to claim a number, I want a source for that number, especially if the person claiming it is saying that he’s arriving at it without Facebook’s publicly released information.

"Blizzard kept bragging about their online WoW user base of twelve million until it was shown that some 90% of those accounts had been idle for years."

Define "idle" – were the people still paying the subscription and just not playing, or were they disabled?

"If Facebook only has 70% of its user account base idling that’s actually a pretty good ratio."

Again, define "idling". Accounts that haven’t been logged into for a week? A month? A year? Does logging in count or do you only count posting? Do people who mainly use Instagram or TikTok but have them set up to share posts on Facebook count, or does it only count if they log in directly? What about people who use their Facebook account purely to log into other sites, is that active or not?

There’s a lot of questions here, which is why I’m asking for a source other than "AC’s anus"…

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"Define "idle" – were the people still paying the subscription and just not playing, or were they disabled?"

As I recall it concerned mainly F2P accounts – but don’t quote me on that because it might very well be subscriptions on hiatus for years.

"Again, define "idling". Accounts that haven’t been logged into for a week? A month? A year?"

As you clearly noted right below that question…it doesn’t get an answer before you supply proper context. I’m not sure I’d call an FB account only used to set up other website accounts "active" for instance.

"There’s a lot of questions here, which is why I’m asking for a source other than "AC’s anus"…"

He expresses himself a bit too certain, sure. But he does have a valid point of assumption – because abandoned accounts have always been the majority of account bases for every online service since the early days of usenet. Unless a service regularly purges accounts who’ve been inactive for X time it’s a given that the dead accounts will heavily outnumber the active ones on any matured online service.

That this is a default state of affairs is pretty well established by now. That Facebook should be the sole exception to this would be odd enough that it’s the assertion I’d demand evidence for.

However, a quick google provided me with a good page to start. Query term: "Ghost Town? Study Says 70 Percent Of Facebook Pages Are Inactive"

Which references recommend.ly’s study, "facebook pages usage patterns".

That One Guy (profile) says:

Re: Re: Re: Re:

The truth is that they can handle way more lawsuits than their competitors and they’ll be in an even better position when those competitors fold under the legal avalanche and they’re the only viable option left and don’t have to worry about any others springing up to challenge them.

Dealing with a bunch of SLAM’s might be a pain but gutting the industry you’re in and ensuring that you’re the only viable option is priceless, and I’m sure Facebook will be happy to make that trade.

Fizzlepop Berrytwist says:

Re: Re: Re:2 Re:

Dealing with a bunch of SLAM’s might be a pain but gutting the industry you’re in and ensuring that you’re the only viable option is priceless, and I’m sure Facebook will be happy to make that trade.

But, once those competitors fold, they’ll end up being the lone target for SLAM’s.

Scary Devil Monastery (profile) says:

Re: Re:

"One person can launch THOUSANDS of simultaneous lawsuits for $500 each."

Nope. I mean, you might be able to if it was about a copyright claim, because the DMCA is funny (read: Broken) that way.

But for any real court case, even under US tort law, that’s just a quick way to hand Facebook all your money in countersuits won by walk-over.


I. Kinnock says:

Re: Another day, another MM panic fearing for corporate tyranny.

Going by that you hold on to only a couple dozen fanboys, you should try other sources. Quit re-writing elitist, Ivy League, NYT/LATimes/WaPo propaganda. — But no, your quote and contradict technique pretending to "analyze" other views isn’t adequate, either. — You need substance and originality, but of course don’t dare stray far from the safety of your silly little neo-liberal clique.

This comment has been flagged by the community.

I. Kinnock says:

Re: Another day, another MM panic fearing for corporate tyranny.

You pretend to offer a discussion forum here, but then disadvantage / discriminate against any other than full-blown corporatist views. (BTW: your fanboys sniping at generic corporations while letting YOU put out explicitly pro-corporate shilling is one of my favorite aspects of Techdirt. That dissonance points up, doesn’t cover, your corporatism.)

This comment has been flagged by the community.

I. Kinnock says:

Re: Another day, another MM panic fearing for corporate tyranny.

You also fear "conspiracy theories" getting notice, because… Well, YOU TELL US: WHY? What reason have you for this free-floating contextless fear of the mere thought of alternate explanations? What concern of yours is affected? What justifies corporations arbitrarily suppressing quite popular views? Do you just dismiss any view which isn’t approved by The Establishment / globalists? ‘SPLAIN why you think "conspiracy theories" are necessarily wrong and bad, you nasty little globalist PUNK.


Koby (profile) says:

Smarter Than I Thought

If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal

That’s okay because these news outlets are publishers, and not platforms. The editors get the choice about what to promote, but they are also liable for what they publish. I have to give the drafters some credit here — it looks as if they understand at least some of the publisher/platform problem. The 1996 CDA was a compromise bill, so perhaps there is room for the two sides to come together.

Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful.

Just like how the cigarette industry knew it was selling a carcinogenic product, even though they went along begrudgingly with the warning labels, and was found liable for causing harm. Social media: the tobacco product of the internet.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Smarter Than I Thought

Social media: the tobacco product of the internet.

The comments being censored are the strongest

If the digital forum becomes large enough it becomes a public forum.

Funny how the talking points your russian handlers give you seem to change regularly.

It’s like each of your previous <gotcha> talking points have been so thoroughly debunked, over and over and over again, that you have to keep changing it up so as to not sound like the complete idiot that most of us see you as.

Anyway, remember that time you thought Facebook could use §230 to dismiss a lawsuit against Facebook’s own speech? That shows everybody here how little you know about Section 230, the 1st Amendment, and internet moderation in general.

This comment has been deemed insightful by the community.
Strawb (profile) says:

Re: Smarter Than I Thought

That’s okay because these news outlets are publishers, and not platforms. The editors get the choice about what to promote, but they are also liable for what they publish. I have to give the drafters some credit here — it looks as if they understand at least some of the publisher/platform problem.

Given that that problem is completely made up and doesn’t exist in section 230, that’s not something they should get credit for.

ECA (profile) says:

Re: Smarter Than I Thought

Add to that.
Plastic corps knowing that MOST of the plastics will never degrade before our great grand children die.
Or PTFE Used as a Nonstick material in Tons of things. But has a half life longer then MOST Radioactive materials.
https://www.youtube.com/watch?v=9W74aeuqsiU&t=3s
How about the right to have an opinion, over reporting the news? That you dont have to tell the truth or just the facts you know, but can add additional idiocy.
When the thought of capitalism is based on the consumer having the ability to NOT use a service they dont like, and HAS A CHOICE, ISNT a fact.
Where the Supply of a service is based on the idea that A’ Service Is being supplied auxiliary to the Main/original one. And that NEW service is a Sat signal with a 3-7 second delay. ITS NOT COMPARABLE.
When the wages of the top exec’s is MORE then paying off the stocks or even giving dividends to those that Bought the stocks.

A good share of the system is broken, and our Gov. is 1/2 the problem. AND we are the Gov., supposedly.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

I believe that it’s well meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities.

Which if anything just makes it worse than if they’d proposed it with malice, because they could have had their intentions match their actions but didn’t.

Whether you punch someone in the face with the best of intentions, because you were in a hurry and didn’t bother to check whether it would actually help, or you do so maliciously, you still punched someone in the face and they still have to deal with that.

They have proposed a bill that ignores the advice and expertise of people with knowledge in the field, one that stands to do enormous damage to smaller platforms while entrenching the current top ones. Until and unless they pull the bill and admit that it’s a terrible idea, they should be treated no differently than if they’d proposed it maliciously, and raked over the coals just the same.

Scary Devil Monastery (profile) says:

Re: Re: JAMAA-king Me Crazy

"say goodbye to personalized(and therefore useful and relevant) recommendations and hello to the equivalent of hitting ‘show random’ on platforms everywhere."

Annoying in itself but not the worst part of this bill. May I perhaps introduce you to the Thin End Of The Wedge?

Because what this means is that any platform without Facebook’s legal team will have to serve their users a horrible bullshit concoction of recommendations which will contain the most outrageous garbage vested interests saw fit to serve – from Viagra ads to Klan and neo-nazi propaganda. I wouldn’t be surprised to see the nigerian prince coming back strong in random clickbaits either.

Meanwhile Facebook will still be able to fend off the worst morons and maintain a sanitized environment, presenting a case for congress that "see? The internetz didn’t break! You should roll this out across the board!". And the US online environment is reduced to dregs.

Future alternatives will be coming from China, no doubt, which won’t mind helping the gwailo out with miraculous social platforms catering to every desire, so long as said gwailo don’t mind uncle Xi listening in on everything they do…

ECA (profile) says:

this is way out there.

So lets ask about Algorithms?
Which ones?
Suggested friends? based on the info you place in your account?
Your school, your work, your home town, where you live now?
I never fill that crap in. and HOPE no one else does.
On FB you can block just about anyone, including Some adverts.
So, most of this is based on the idea that YOU dont know how to use the site and BLOCK OR REPORT someone for picking on you?

Others Algorithms.
Amazon, and Tons of Sale sites use them base on what you Buy and What you have looked at on the site. ALL the major sale sites from amazon to ‘Whats the name of the restaurant?’ are SHARING your data. Even amazon is a FRONT for millions of other sites for a % of the sales. Even walmart does it.
So, reading this

"does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider "

So, info of a sale or where you were looking at a site, isnt protected when its referred to another Service? to show you were shopping, looking around for ? Condoms?
Which would embarrass you All to hell when the next site you goto, pops up all these adverts for Condoms?

Anyone got a definition of being an ADULT? Or is this something thats supposed to protect our kids, but ends up treating us as Idiot adults?

Can we extend this, and carry it to OTHER services? Like Cable TV? Like Roku?
Isnt there a LAW about laws NOT being for an individual group or person?
How Thick Can we spread this butter across the Whole advert system?
How many Conglomerates are a series of business’s interlinked? Even Macy’s is part of this system. 4-5 levels of sales, as something dont sell at 1 set of stores you pass it down and write it off(wow, what a way to kill the tax system). And you are getinig adverts from each of those stores in your area.

"information provided through such service by another information content provider"

It affects the 3rd party, the one that intercepts the information, then uses it. Seems not to affect the primary site that gathered the info.

BE parts?
"knew or should have known."

"recklessly made a personalized recommendation of such information; and
‘‘such recommendation materially contributed to a physical or severe emotional injury to any person."

Algorithms DONT MAKE PERSONALIZED ANYTHING. They look at the data and say
‘ Wow, this person is looking at allot of porn’ Equals ‘I should show them more porn’.
‘Wow this person was born ?" equals ‘I should Post to everyone in that town where this person is’.
‘Wow, you work at blank’ Equals ‘I should contact all those people and let them know you are using this or that service’
‘Wow, you drink allot of alcohol’ equals ‘I should tell everyone what you drink and that you like it allot’

Computers are better at assumptions NOT analytical analysis.

Anonymous Coward says:

a service could base recommendations on towns or cities so it’s not personalised to one user, or on part of a state (east texas, west texas, etc), but it’s a disaster of a bill cos it enables anyone to sue over an item of news, or a video.
say a young person got shown a video about police brutality or someone being attacked, they could sue for trauma.
of course small startups can’t afford even a few legal actions while facebook has almost unlimited resources. this would be really bad for tiktok, as it’s based on showing every user different videos based on what videos they watch. the people who wrote this bill don’t understand how section 230 is absolutely vital to free speech and the survival of smaller websites that have a minority audience, eg asian americans, lgbt groups

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

Makes one wonder why they even bother asking for said experts if they’re just gonna ignore them because they dare say the issue is more nuanced and complex than "This is good and this is bad".

I’m willing to give them the benefit of the doubt that they drafted this bill with good intentions, but when experts are yelling that this is not the right way to go about it and they plow ahead regardless, well, as the saying goes: "The road to hell is paved with good intentions."

That One Guy (profile) says:

Re: Re: Re: Re:

It’s probably a mix of honestly thinking that they’re right and being surprised when the experts tell them that no, they most certainly are not, and dishonestly aiming to exploit the experts: either boasting about how even the experts agree with them, or derogatorily dismissing the ‘know-it-alls who clearly don’t have any real learnin’’ if the experts don’t agree with them.

That Anonymous Coward (profile) says:

Re: Re: Re: Re:

"give them the benefit of the doubt that they drafted this bill with good intentions"

It most likely was ghostwritten by someone who put money in a PAC for them.

I’d really love to bring back horsewhipping the liars…
One we’d need a lot of whips to get caught up, but one has to wonder if them seeing that if you lie we’ll whip you might change some of them…
mental images of some members of Congress fibbing about little things because its cheaper than paying the professional they normally get whipped by

ECA (profile) says:

Re: Re: Re:2 Re:

There is a trick they love.
To debate at nite when no one is there.
If a Quorum, isnt needed, then Just ask for a vote in the middle of the nite, with selected persons there.
Its been done MANY TIMES.

http://westwing.bewarne.com/discontinuity/government.html

https://www.youtube.com/watch?v=CO4JpVBUu80&t=4s
Look at background. Wish there were a clock showing THE current time this was done.

Anonymous Coward says:

But at its heart, "the algorithm" is simply a set of recommendations, and recommendations are opinions and opinions are… protected expression under the 1st Amendment.

This reasoning doesn’t add up for me. "The algorithm" (assuming we are speaking of a news feed "recommendation algorithm") is not the recommendation. The algorithm produces recommendations (in a sense), but also does considerably more than that. Specifically, it makes decisions about who does and doesn’t see a particular piece of content, and also about how many people see it. "The algorithm" determines which ideas get popular currency, and which are ignored.

I can’t speak for anyone else who wants to see news feed algorithms regulated, but for me it has nothing to do with the content being recommended, and everything to do with the power to determine what content is popularized. The power of mass influence is what needs regulation, and the nexus of that power is news feeds (and also advertising algorithms).

Even to the extent that algorithms produce recommendations (which again, is only a part of what algorithms do), saying that recommendations are protected opinions also doesn’t make sense. When a piece of content is recommended on my news feed, whose opinion is it? If the algorithm produced the recommendation, surely the most natural answer is that the opinion belongs to the algorithm. Which is nonsense; algorithms don’t have beliefs or opinions, they have inputs and outputs.

Even granting the nonsensical idea that an algorithm is expressing an opinion, saying that opinion is protected by the first amendment is equally nonsensical. The first amendment applies to speech by natural persons, not algorithms. There is no conceivable reason why an algorithm’s freedom of speech needs to be protected, and certainly no reason why the authors of the first amendment would have intended it that way.

The recommendations in a news feed have similar status to copyright in photos taken by a monkey. A monkey cannot obtain copyright in a photo because copyrights are created for human benefit, and there is no policy benefit to granting ownership to non-humans. Similarly, an algorithm is not entitled to first amendment protection because such protection is intended to protect the freedom of expression of humans, and there is no policy benefit to granting it to algorithms.

I’ll reluctantly concede that corporate speech is protected, but even if the recommendations contained in a news feed can be construed as protected opinion (a stretch for me, but I’ll grant it for the sake of argument), what’s protected is the expression of that opinion, not how those opinions are produced. A natural assumption behind protecting opinion is that opinions are "reasoned beliefs". They are protected because allowing for a variety of beliefs and reasoning is beneficial to society.

A company that chooses to express recommendations wholesale via a newsfeed is not expressing reasoned beliefs. It is making available the results of an algorithm. The algorithm itself is neither recommendation nor opinion, and I see no reason why the first amendment should apply to it.


Beyond all that … why should Section 230 have anything at all to do with algorithms? Excluding algorithmic recommendations from Section 230 seems pointless to me. Surely they are already excluded from Section 230, on the basis that they are generated by the company, not its users.

As I understand it Section 230 protects companies from liability over user content. But, news feeds are not user content. News feed recommendations are produced by companies, not users. And, as per the roommates.com decision, companies are already liable for content that they produce. So when companies "express" the recommendations produced by news feed algorithms, they are already liable for those recommendations. Carving algorithms out from Section 230 doesn’t change a damn thing as far as I can tell.

I suppose it might make companies liable for content generated by bots, which I guess could be a problem (it makes spam filtering problematic), but that’s pretty distant from either the intent behind the legislation or Mike’s handwringing over it.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re:

I can’t speak for anyone else who wants to see news feed algorithms regulated, but for me it has nothing to do with the content being recommended, and everything to do with the power to determine what content is popularized. The power of mass influence is what needs regulation, and the nexus of that power is news feeds (and also advertising algorithms).

Giving your opinion is absolutely protected activity, whether you’re talking to one person or one million, doing it directly or through a system you run. That aside, I would be very leery of opening the can of worms that is regulating the ability to influence others, and I can explain why it’s a terrible idea in two words: ‘fake news’. Open that can and it’s not a question of whether someone you vehemently disagree with will get and use the power it would grant, but how quickly.

Similarly, an algorithm is not entitled to first amendment protection because such protection is intended to protect the freedom of expression of humans, and there is no policy benefit to granting it to algorithms.

Who do you think creates and tweaks algorithms? I’m pretty sure we haven’t quite reached the point of digital sentience where computers are doing things entirely on their own. Humans are the ones coding the algorithms and deciding how they treat the content they are tasked to handle; saying that the algorithm and its output don’t deserve first amendment protection is saying that the humans who run it don’t.

Anonymous Coward says:

Re: Re: Re:

The power of mass influence is what needs regulation, and the nexus of that power is news feeds (and also advertising algorithms).

The nexus of that power is the news media: print, radio, and TV, who are very selective about what stories they publish, and who often put a slant on the stories to suit their political aims.

Anonymous Coward says:

Re: Re: Re: Re:

Plus "you are free to speak but not to be listened too" isn’t free speech. Saying there should be limits to mass communication is saying that the problem is that too many people might listen to what you have to say!

That is the sort of thing only a tyrant would call a problem. But people have become so goddamned stupidly reactionary that even among so-called progressives, complaining about a lack of manufactured consent is a mainstream opinion! That is what "we are too divided" really means.

PaulT (profile) says:

Re: Re: Re:2 Re:

"Plus "you are free to speak but not to be listened too" isn’t free speech"

Yes it is. You still had your freedom to speak, you just didn’t have a guaranteed audience. Which has never been something that was promised to you. The only guarantee you have is that the government is not allowed to shut you down, not that the rest of the public is not allowed to decide not to listen to you.

Anonymous Coward says:

Re: Re: Re: Re:

The nexus of that power is the news media: print, radio, and TV, who are very selective about what stories they publish, and who often put a slant on the stories to suit their political aims.

I agree, but the two nexuses are not mutually exclusive. Newsfeeds and newsrooms both have agenda-setting effects. They are also different, and shouldn’t necessarily be regulated under the same laws. News media is at least arguably driven by human editorial decisions that are clearly protected first amendment activity, because ultimately we want people to be able to express political viewpoints. The same cannot be straightforwardly said of newsfeed algorithms.

Anonymous Coward says:

Re: Re: Re:2 Re:

Also part of the problem is politicians pushing controversial views, such as a Texas school administrator telling teachers to provide an "opposing perspective" to books about the Holocaust. How can you have critical thought when that and intelligent design are pushed onto children? That is teaching them that all viewpoints are equally valid.

That One Guy (profile) says:

Re: Re: Re:3 Re:

That certainly explains why they went out of their way to explicitly make Holocaust denialism a moderation-exempt category in the semi-recent bill; it would be rather awkward if children were taught that there were ‘good people on both sides’ when it came to the Holocaust, only to look online and see that there very much were not.

Anonymous Coward says:

Re: Re: Re:

Giving your opinion is absolutely protected activity, whether you’re talking to one person or one million

Giving your opinion is free speech. Having it heard by millions is not.

I am completely against regulating algorithms in ways that would censor speech. The regulations we need should be content agnostic, meaning we should not be regulating what content gets recommended.

What needs regulating is two things:
a) targeting (i.e. who receives what recommendations, and how is that determined), and
b) virality (i.e. how many people see a given piece of content, and perhaps placing global limits on how much any given piece of content can be recommended.)

Neither of these things implicates freedom of speech if done properly. Popularity of speech is not a right, and neither is a guaranteed listener.

Regulating newsfeed algorithms is (or should be) about regulating how audiences are formed, not about what speech is shared.
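[To make the "virality" half of that proposal concrete, a content-agnostic cap could in principle be as simple as a global impression counter. This is a purely hypothetical sketch; the cap value, names, and mechanism are invented, and it takes no position on the constitutional objections raised elsewhere in this thread.]

```python
# Hypothetical content-agnostic virality cap: once any post has been shown
# GLOBAL_CAP times, it is no longer eligible for algorithmic recommendation,
# regardless of what the post actually says.
from collections import Counter

GLOBAL_CAP = 100_000  # invented number, for illustration only

impressions = Counter()  # post_id -> times recommended so far

def eligible(post_id: str) -> bool:
    # The check never inspects content, only how far it has already spread.
    return impressions[post_id] < GLOBAL_CAP

def record_impression(post_id: str) -> None:
    impressions[post_id] += 1

impressions["viral"] = GLOBAL_CAP  # simulate a post that hit the cap
print(eligible("viral"), eligible("fresh"))  # -> False True
```

The point of the sketch is only that such a rule can be written without any reference to the speech being capped; whether it *should* be written is the argument above.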

Anonymous Coward says:

Re: Re: Re:

That aside, I would be very leery of opening the can of worms that is regulating the ability to influence others, and I can explain why it’s a terrible idea in two words: ‘fake news’. Open that can and it’s not a question of whether someone you vehemently disagree with will get and use the power it would grant, but how quickly.

We are already there. That can is open and the worms are gone. That is why we need regulation. Right now, the decisions about what is and isn’t fake news, and who gets to decide, are being made in ways that are unaccountable to anyone (with the possible exception of corporate shareholders).

Unfortunately, regulation is a job for government. It may be that the US needs a functional, honest government before we get the regulation we need (don’t laugh), but that doesn’t change the fact that regulation is needed. We need the government to create some accountability in a way that is non-partisan, fair, and truthful. However unrealistic that looks, we need the government to be the holder of that power, not corporations.

That One Guy (profile) says:

Re: Re: Re: Re:

Consolidating the replies to both into one comment for ease of reading.

Giving your opinion is free speech. Having it heard by millions is not.

It’s the same bloody thing; the same action does not go from protected speech to unprotected simply because the audience increased.

Neither of these things implicates freedom of speech if done properly. Popularity of speech is not a right, and neither is a guaranteed listener.

Neither popularity nor an audience is a right under the first amendment or free speech in general, but the ability to gather them (within certain restrictions, like not using someone else’s property to do so) very much is.

Both of those are very much out of bounds; the government deciding who you are allowed to say certain things to, and how many people are allowed to be in that group, are both pretty blatant violations of the first amendment in the form of dictates relating to speech.

We are already there. That can is open and the worms are gone.

Oh? I wasn’t aware that the government was already in the business of issuing legal penalties against those that they disagreed with. Strange that, you’d think that would have made a bigger splash especially in the last four years when it was headed by someone who would have loved the ability to go after anyone spreading ‘fake news’.

Right now, the decisions about what is and isn’t fake news, and who gets to decide, are being made in ways that are unaccountable to anyone (with the possible exception of corporate shareholders).

Curse those people making use of their first amendment rights in ways you don’t agree with, those fiends.

There is a big difference between a person and/or a privately owned platform deciding to host or not host certain content (and choosing how to present that content according to their biases and positions), and the government stepping in and dictating what can be said, how it can be said, and how many people are allowed to listen.

We need the government to create some accountability in a way that is non-partisan, fair, and truthful.

Yeah, we already have those; they’re called defamation and liability laws, for when people go a little overboard in their claims. And I’m not sure if you’ve noticed, but they don’t always work out so well currently; they certainly don’t need to be expanded.

However unrealistic that looks, we need the government to be the holder of that power, not corporations.

Yes, what could go wrong with the government being able to dictate how many people you’re allowed to speak to and who is allowed or required to be in that group?

Anonymous Coward says:

Re: Re: Re:

Moreover, a quick Google search tells me that computer code counts as protected speech based on Bernstein v. Justice Department.

Unless companies are publishing the source code for their algorithms, I don’t see why Bernstein would apply. Code may be speech, but the result of running that code is not.

A recommendation algorithm is an automated way for a company to say "We think you’ll like these things based on what you’ve picked/searched for/watched/listened to/etc. in the past."

Yes, but the protected part is the "We think…" part, not the algorithm part. It becomes protected speech when the company endorses and (for lack of a better word) publishes it. It becomes corporate opinion when the company runs the algorithm to produce a recommendation, not when the algorithm is coded.

nasch (profile) says:

Re: Re:

A company that chooses to express recommendations wholesale via a newsfeed is not expressing reasoned beliefs. It is making available the results of an algorithm. The algorithm itself is neither recommendation nor opinion, and I see no reason why the first amendment should apply to it.

You’re either talking about regulating the expression of the results of the algorithm, which would be a first amendment issue because the government is not supposed to regulate speech (whether it’s a political opinion, a dick joke, or a company saying "this is what our algorithm thinks you will be interested in"), or you’re talking about regulating what the algorithm itself does. That would also be a first amendment issue, because a human wrote that algorithm, and the government is not supposed to regulate what people write, whether it’s on a protest poster or typed into a computer to make software.

Anonymous Coward says:

Another Really Inept Moderation Attempt

If they had anyone who could explain actual ARIMA Methods, they would still lack the training (6+ years of university maths) and possibly the wit to grasp the modelling techniques implemented in these algorithms. As usual, legislation this ignorantly designed and deployed will fail its intent and enact any number of strange and undesirable unintended consequences. Yay, Democrats! Just when I had a remote hope of a rescue from GOP insanity, you go full retard.
