Bipartisan Senators Want To Honor Charlie Kirk By Making It Easier To Censor The Internet
from the is-it-too-much-to-ask-you-to-understand-the-law? dept
Democratic Senator Mark Kelly and Republican Senator John Curtis want to gut Section 230 to combat “political radicalization”—in honor of Charlie Kirk, whose entire career was built on political radicalization.
Kirk styled himself as a “free speech warrior” because he would show up on college campuses to “debate” people, but as we’ve covered, the “debate me bro” shtick was just trolling designed to generate polarizing content for social media. He made his living pushing exactly the kind of inflammatory political content that these senators now claim is so dangerous it requires dismantling core legal protections for speech. Their solution to political violence inspired by online rhetoric is to create a legal framework that will massively increase censorship of political speech.
Which they claim they’re doing… in support of free speech.
Almost everything about what they’re saying is backwards.
The two Senators spoke at an event at Utah Valley University, where Charlie Kirk was shot, to talk about how they were hoping to stop political violence. That’s a worthwhile goal, but their proposed solution reveals they don’t understand how Section 230 actually works.
The senators also used their bipartisan panel on Wednesday to announce plans to hold social media companies accountable for the type of harmful content promoted around the assassination of Kirk, which they say leads to political violence.
During their televised discussion, Curtis and Kelly previewed a bill they intend to introduce shortly that would remove liability protection for social media companies that boost content that contributes to political radicalization and violence.
The “Algorithm Accountability Act” would transform one of the pillars of internet governance by reforming a 30-year-old regulation known as Section 230 that gives online platforms legal immunity for content posted by their users.
“What we’re saying is this is creating an environment that is causing all sorts of harm in our society and particularly with our youth, and it needs to be addressed,” Curtis told the Deseret News.
The bill would strip Section 230 protections from companies if it can be proven in court that they used an algorithm to amplify content that caused harm. This change means tech giants would “own” the harmful content they promote, creating a private cause of action for individuals to sue.
Like so many politicians who want to gut Section 230, Kelly and Curtis clearly don’t understand how it actually works. Their “Algorithm Accountability Act” would create exactly the kind of censorship regime they claim to oppose.
It’s kind of incredible how many times I’ve had to say this to US Senators, but repealing 230 doesn’t make companies automatically responsible for speech. That’s literally not how it works. They’re still protected by the First Amendment.
It just makes it much more expensive to defend hosting speech, which means they will take one of two approaches: (1) host way less speech and become much, much more restricted in what people can say or (2) do little to no moderation, because under the First Amendment, they can only be held liable if they have knowledge of legally violative content.
And most of the content that would be covered by this bill (“speech that contributes to political radicalization”) is, um, kinda quintessentially protected by the First Amendment.
Kelly’s comments reveal the stunning cognitive dissonance at the heart of this proposal:
“I did not agree with him on much. But I’ll tell you what, I will go to war to fight for his right to say what he believes,” said Kelly, who is a former Navy pilot. “Even if you disagree with somebody, doesn’t mean you put a wall up between you and them.”
This is breathtaking doublethink. Kelly claims he’ll “go to war” to protect Kirk’s right to speak while literally authoring legislation that will silence the platforms where that speech happens. It’s like saying “I’ll defend your right to assembly” while bulldozing every meeting hall in town.
Curtis manages to be even more confused:
What this bill would do, Curtis explained, is open up these trillion-dollar companies to the same kind of liability that tobacco companies and other industries face.
“If they’re responsible for something going out that caused harm, they are responsible. So think twice before you magnify. Why do these things need to be magnified at all?” Curtis said.
This comparison is absurdly stupid. Tobacco is a physical product that literally destroys your lungs and causes cancer. Speech is expression protected by the First Amendment. Curtis is essentially arguing that if political speech influences someone’s behavior in a way he doesn’t like, the platform should be liable—as if words and ideas are chemically addictive carcinogens.
The entire point of the First Amendment is that we don’t consider speech to be harmful.
What Curtis is proposing is holding companies liable whenever speech “causes harm,” which is fucking terrifying when Trump and his FCC are already threatening platforms for hosting criticism of the administration.
The political implications here are staggering. Kelly, a Democrat, is signing onto a bill that will let Trump and MAGA supporters (the bill has a private right of action that will let anyone sue!) basically sue every internet platform for “promoting” content they deem politically polarizing, which they will say is anything that criticizes Trump or promotes “woke” views.
And why is he pushing such a bill in supposed support of Charlie Kirk, a person whose only job was pushing political polarization, and whose entire “debate me bro” shtick was entirely designed to push political polarization online?
What are we even doing here?
This entire proposal is a monument to confused thinking. Kelly and Curtis claim they want to honor Charlie Kirk by passing legislation that would have silenced the very platforms where he built his career. They claim to support free speech while authoring a bill designed to chill political expression. They worry about political polarization while creating a legal weapon that will be used almost exclusively by the most polarizing political actors to silence their critics.
Rolling back Section 230 will lead to much greater censorship, not less. Claiming it’s necessary to diminish political polarization is disconnected from reality. But at least it will come in handy for whoever challenges this law as unconstitutional—the backers are out there openly admitting they’re introducing legislation designed to violate the First Amendment.
Filed Under: 1st amendment, algorithms, charlie kirk, free speech, john curtis, mark kelly, political polarization, section 230


Comments on “Bipartisan Senators Want To Honor Charlie Kirk By Making It Easier To Censor The Internet”
Have you ever noticed that every politician who calls for S230 changes is always pushing for this or that “small” change to hold the companies accountable? As opposed to just repealing the whole thing and going back to how it was before S230 was passed?
Re:
Nah, a whole bunch have. Off the top of my head: Graham, Hawley, Blackburn. Dick Durbin also signed onto sunsetting it. It’s just bottom of the barrel since even among people who have complaints, basically no one wants to go back to Stratton Oakmont v. Prodigy.
This will last as long as it takes for someone to point out it applies to the Nazi Bars at X and Truth Social.
Re: 'That's no pro-violence that's just locker-room talk!'
Only if it was applied honestly and equally, and if there’s one thing you can be sure of it’s that MAGAts consider a law to only be valid when it’s working in their favor.
Re: Re:
See, that’s the funny thing about creating a law with a private right of action: you can’t really control who’s going to use it anymore…
Re: Re: Re:
Hmm, a very good point. Normally I’m vehemently against such clauses as dodges around the first amendment, but that does seem like it would be the quickest way to get the law torpedoed.
I see Mark Kelly has been hanging out with John Fetta-Cheese a bit too much lately.
It’s not that they don’t understand it. They WILLFULLY choose not to understand it.
It’s also just kind of breathtaking how many liberals think it’s an act of moral courage to fight a war on behalf of fascists.
Re:
It amazes me that people who use liberal as a pejorative are so mortally offended at the very concept of horseshoe theory.
This actually makes sense to me...
It doesn’t seem to be making the platform accountable for the content itself, just for the promotion and surfacing of it. If the content is just in a “firehose” of chronological posts, then S230 should apply just as it does now. But if the platform applies any kind of programmatic repositioning, reordering, promotion, or other control over the presentation of the content, then it becomes more than user content, it becomes the platform’s content!
This makes sense! To reiterate: yes, the platforms don’t control the content; but they DO control the presentation of it. The only thing the users control is when they post. Thus building any presentation of the content that is not simply chronological should be considered speech of the platform, and not of the users. This is a huge part of how the biggest platforms make money (ads near content that is promoted because it’s “hot”, ads near content that is promoted because it will drive interaction, etc), yet they are using S230 to shield what should be considered the speech of the platforms.
To put it another way, Facebook could not pull off the bullshit they are doing with scam ads if they didn’t control the presentation of the content. They literally had policies to let people get away with scams if it would impact their own bottom line, and to alter the presentation of user content to further increase their profit from those scam ads. Everything about this except for the actual user content, is purely in Facebook’s control, and they should be held accountable for everything except the user content!
(I suppose users can add tags and such, so filtering by those additions would also be outside of the platform’s control, as long as it stays chronological (no algorithmic sorting or promotion) inside the filters.)
Re:
And yes, 1A protections still apply no matter who S230 (in any form) deems responsible for the speech. But though citizens can be held responsible for incitement, currently the platforms cannot, despite the almost total control they hold over the actual presentation of user content. Because S230 is lacking and lets them claim “user content” for everything they offer, even if the actual “user content” is a small fraction of the final content.
Re:
Okay, but why, though?
Ordering posts from people whom you follow in a reverse chronological order, where their most recent posts sit at the top of your feed, is itself a programmatic decision even if it’s the default setting. The same goes for positioning/ordering reposts from people whom you follow, even if the option to see those reposts is turned on by default. That you don’t see those decisions as decisions doesn’t change the reality that they are, in fact, decisions made by the service as to how they’ll decide to show you content. Adding an extra algorithm is just an extra decision; it shouldn’t make a service more liable for user-generated content any more than it would be without that extra decision.
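To make that point concrete, here’s a minimal, hypothetical Python sketch (the field names and the engagement score are invented for illustration, not any real platform’s API): a “plain” reverse-chronological feed and an engagement-ranked feed are structurally the same operation, a sort over the same posts with a different key.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int   # e.g. seconds since epoch (illustrative)
    engagement: int  # e.g. likes + reposts (illustrative)

def reverse_chrono_feed(posts):
    # The "default" feed is still a deliberate ordering decision.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts):
    # The "algorithmic" feed is structurally identical: same sort, different key.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [
    Post("a", "old but viral", timestamp=100, engagement=900),
    Post("b", "newest", timestamp=300, engagement=5),
    Post("c", "middle", timestamp=200, engagement=50),
]
assert [p.author for p in reverse_chrono_feed(posts)] == ["b", "c", "a"]
assert [p.author for p in engagement_feed(posts)] == ["a", "c", "b"]
```

Either function is “a decision made by the service as to how they’ll show you content”; the law would have to draw a line between two sort keys, which is exactly the problem.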
I understand the impulse to want to punish Elon Musk for what he’s done to Twitter. A shitload of other people probably feel the same way. But in trying to take down the tree that is Twitter’s promotion (accidental or otherwise) of Nazi and Nazi-adjacent content with a flamethrower, you’d also risk burning down all the other trees in the forest—and by that, I mean you’d risk destroying a large swath of the Internet just so you can stick it to Elon. Is that a risk you want to take?
Re: Re:
It gets even more interesting if you consider how data is stored and indexed, because that index may not be based on a date of posting or anything like that, but on a hash created from various metadata, including a date. I.e., just storing and later displaying a user’s post means algorithms galore, without even presenting anything in chronological order.
What we have here is someone who doesn’t understand even the tiniest thing about how storing and retrieving information works. There’s also an apparent misunderstanding of who is responsible for what.
As always, an argument based on misconceptions and misunderstandings of how computers, Section 230, and the 1A work.
Re: Re:
Any programmatic choice is a choice, it’s true. But using e.g. chronological doesn’t have any intent behind it. You can’t really be held responsible if someone posts Nazi content in a chronological feed, you didn’t really contribute to that at all. If instead you say, decide you want to promote Nazi content, now there is an actual knowing intent to do so. They’re not the same type of choice.
Because then the platform contributed directly to the thing. That’s what causes the liability in the first place when e.g. a paper newspaper publishes an OpEd. That’s the whole point of publisher or distributor liability- you’re liable for the part you contribute to.
Fundamentally, that’s the point of liability. If you knowingly contribute to the thing that originally has liability, you’re partially responsible for some of it. The reason this doesn’t always make sense digitally is because when a service does e.g. chronological, it’s not actually making a choice to contribute.
Depends entirely on how the law is written. Not all laws have equal risk of splashback. Normal publisher liability already works this way, it hasn’t burnt down all the other newspapers (things like SLAPP aside). And vice versa, we don’t immunize them to everything to avoid splashback.
(Although Nazi content isn’t a good example, the underlying speech would be 1A protected upfront under Brandenburg. There’s no liability to begin with.)
Re: Re: Re:
Every choice has an intent behind it, even if it doesn’t seem that way on the surface. The choice to use reverse-chrono as a default sorting method, when made by the service itself, has an intent: “We want people to see new posts as they come in from people they follow instead of when our algorithm says they ‘should’ see those posts.” When an end user makes that decision, it has much the same intent. And…
…both choices still have intent regardless of who makes them.
Being on the service itself contributes to “the thing”, too. Unless you never follow anyone, never repost anything, never view your notifications, and otherwise never do anything to view or promote anyone else’s content, you will see other people’s content as part of your social media experience. Whether the platform itself “contributes” to this by using arcane magicks (i.e., algorithms) is beside the point—by exposing yourself to social media, you’ve made the choice to expose yourself to the content therein.
Section 230 doesn’t say shit about the publisher/platform dichotomy. Can you really say that a platform promoting third-party speech by use of an algorithm makes that platform as much of a publisher as a human who chose to promote that speech with direct knowledge of what that speech said? And if you do want to say that, do you also realize how dangerous that could be for social media, given that reverse-chrono is also an algorithm?
Except it is. You can hit “follow” on a social media service, but that doesn’t mean the service is obligated to show you new posts from the user you followed. (A regular complaint about Twitter these days is that it sometimes hides new posts from users even when those users opt-in to seeing those posts by hitting the follow button.) The choice to present new posts to you with a reverse-chrono algorithm is as much of a choice as using a “for you” algorithm that tosses in “adjacent” posts from people you don’t follow. Refusing to think it’s a choice doesn’t make it any less of a choice.
Every choice has a consequence—and sometimes, the consequence is unintended. The perverse incentives may not be obvious to you, but they can be to others. Consider what a law to control algorithms might incentivize social media services to do—but do it from every possible angle, not just the one where the incentives lack any obvious loopholes. If people can abuse gun buyback programs to make profits by making enough 3D-printed gun parts to meet the minimum legal definition of a firearm, people can and will abuse an anti-algorithm “Twitter will be held liable for third-party speech” law. How much rules lawyering would you like to do to make a mythically “perfect” version of such a law?
Re: Re: Re:2
Not the same type of intent. When you’re choosing e.g. chronological, the intent is not to force more Nazi content. It is content neutral in a way that explicitly designing an algorithm to push Nazis isn’t.
Indirectly, yes. Directly, no. Buying (or reading) a newspaper contributes to the newspaper, it does not make you liable if the newspaper publishes defamation.
No, it is the point. That is fundamentally what liability is built on. That’s why e.g. bookstores don’t have publisher liability (they have distributor liability), but book publishers do have publisher liability.
I didn’t say it did. Don’t twist my words. (If you’re wondering why I’m using those specific words, distributor liability is a lesser form of publisher liability; it’s not related to the “platform” nonsense people come up with. It’s what Stratton Oakmont and Cubby used, pre-230. And it’s still used for print businesses that don’t benefit from 230. A bookstore is under distributor liability, not publisher. A newspaper would be under publisher liability. Cubby found that CompuServe was a distributor because it didn’t moderate, whereas Prodigy was labelled a publisher due to its active moderation, which the court interpreted as editorial control.)
That depends on how directly involved in tweaking that algorithm it is. Do I think say, Youtube directly surfaces defamation/Nazis, etc? No. Do I think Mr. MechaHitler (and Boers, etc), who has gotten caught hand tweaking individual topics, does? Absolutely. And not just algorithmically. I think the vast majority of platforms aren’t, and shouldn’t be responsible. They’re not really doing anything wrong, or acting as any sort of publisher.
I’m not saying it’s not a choice (see: Any programmatic choice is a choice,). I’m saying the choice isn’t directly designed to push e.g. Nazis. They’re fundamentally different.
I agree with that, but if that’s true, I expect the others to be able to articulate it clearly. And if they can’t, I don’t see any reason to take their word for it, especially if it’s “obvious”. I don’t think you should take people’s word for something. If I just handwaved it as saying it’s fine, without addressing specific concerns, I wouldn’t expect you to take my word for anything either. It should never be a “trust me bro” handwave. This is particularly true if there’s an underlying disagreement on how far something like free speech should go in general, because that might be where the difference is.
As much as any other part of the law, really. Our entire legal system is rules lawyered to the gills, including for things like incitement, defamation and anti-SLAPP laws. Stuff like Brandenburg is rules lawyered to hell and back despite the 1A being super simple textually, because it’s kind of necessary to have any sort of nuanced law around it.
I prefer simple laws wherever possible, but I care about the result. Now, if it can’t be done, that’s another story.
My problem with this is twofold. First, it is a slippery slope argument. But more importantly, you can apply that exact argument to existing things. If they can do that, why can’t they do it for print publishers who have anti-SLAPP protection, which still have publisher liability? Why can’t they do it for defamation? Stuff like “actual malice” or NYT v. Sullivan is not simple, and it certainly has been attempted to be abused.
It’s not just a slippery slope, it’s a slippery slope with actual existing examples. Why can one be slippery sloped, but the other can’t? Or do you think publisher liability shouldn’t exist for print?
Re: Re: Re:3
Until nazis decide to post content more frequently than foodbanks, ensuring your vaunted “content neutral” algorithm is explicitly designed to push nazis over foodbanks.
The reality is that there is no algorithm which does not promote nazis. Only algorithms the nazis aren’t currently promoting themselves with.
Actually, that is potentially false. The National Socialist German Workers Party was formed in 1920, so it could be argued that an algorithm which only shows records created prior to 1920 would technically never promote nazis. It’s not a good argument, but it’s at least possible to make.
Re: Re: Re:4
Yes, a content neutral algorithm can end up pushing Nazi content (although I would’ve gone with something more realistic, like higher engagement); no, that isn’t “explicitly designed to push nazis over foodbanks”. There is a level of editorial intent/control that isn’t there.
There’s a reason I used the Youtube example, because it’s gotten flak around alt-right content before.
Re: Re:
“it shouldn’t make a service more liable for user-generated content any more than it would be without that extra decision”
I didn’t say they should be liable for the content. I said they should be liable for the intentional promotion and surfacing of it, and IMO doubly so if the promotion enables profit-taking.
1A and S230 cover the liability for user content itself, but the display, surfacing, and promotion is almost always not user-controlled, and definitively (by your argument) speech of the platform. In your view, it seems, the ordering and display of the content belongs to no one, which doesn’t make sense, especially when the decisions (thanks for that terminology) are specifically being made by the platform.
Re:
This would make platforms, including your friendly local Mastodon instance, liable for having a spam filter. Every forum that allows the downvoting of comments would lose 230 protection, since sorting comments by votes or hiding comments that have been heavily downvoted is prioritization.
Re: Re:
It would make Techdirt liable because TD has the “threaded” and “unread” comment organizing algorithms.
Re:
The site’s “control” is merely an unbiased attempt by the site to deliver to the user that which he/she has shown an interest in, whether by following specific users or liking certain types of content or whatever the “algorithm” has done to try to deliver on the user’s interests. There’s no evaluation by the site as to what the site itself may or may not support. (But even if there were–though, I repeat, there isn’t–the Constitution says that’s just Free Speech. Conservative ilk seem to have a problem with Free Speech itself. Maybe they should self-deport, you know… cuz they can’t handle the truth.)
Re:
You don’t really seem to understand how any of this works, at all. First, with hundreds of millions of users, that would make for an unreadable stream of content. Second, displaying chronologically isn’t easy, straightforward, or somehow the default or lowest common denominator. These systems are distributed around the world, and they certainly don’t bring everything to North America the second it happens. These systems are also not some digital version of a tape where one entry is after the next one, so posting by time is an algorithm in itself.
TL;DR: Your understanding is flawed and your conclusion is unworkable. 1/10.
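For what it’s worth, the “posting by time is an algorithm in itself” point can be sketched concretely. This is a toy example with invented shard data, not how any real platform stores posts: posts sharded across regions don’t arrive pre-interleaved, so even a “simple” chronological timeline requires a merge algorithm.

```python
import heapq

def chronological_timeline(*shards):
    # Each regional shard is assumed to be sorted oldest-first already;
    # interleaving them is itself an algorithm (a k-way heap merge).
    return list(heapq.merge(*shards, key=lambda post: post["ts"]))

# Hypothetical per-region stores; shard names and fields are illustrative.
us_east = [{"ts": 1, "text": "first"}, {"ts": 4, "text": "fourth"}]
eu_west = [{"ts": 2, "text": "second"}, {"ts": 3, "text": "third"}]

timeline = chronological_timeline(us_east, eu_west)
assert [p["ts"] for p in timeline] == [1, 2, 3, 4]
```

A “no algorithms” feed is incoherent as a legal category: there is no way to produce the timeline at all without one.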
Re:
The problem really isn’t the control, or lack thereof, of what the platforms offer to the users; the problem is defining what constitutes “harmful” content, and opening up a path for private lawsuits over it…
That’s opening the courthouse gates to loonies like my town selectwoman, who thought a novel was harmful to minors because one character thought another character was the type who would read Cosmo articles on wedding night BJs (no BJ occurred in the book).
always happening
That THEY get control over speech.
THEY decide what is/isnt Good for the nation?
Who hasnt seen the Fake adverts and forced comments inside Politics or even Corporations.
Both get all the power and the people…Get nothing.
It took 60 Minutes YEARS before they published the infamous tobacco cancer problems from cigarettes. They hate to be sued. The national fresh water failure, the pollution problems ALL over the nation.
The Truth does not set you Free, it gets you Sued.
100% of section 230’s opponents are psychopaths advocating harm to innocents.
One day an honest and fact-based argument may be made, but not yet
And the streak remains unbroken, yet again it is shown that it is impossible to argue against 230 honestly, factually, or based upon what the gorram law says or does.
Like so many politicians who want to gut Section 230, Kelly and Curtis clearly don’t understand how it actually works. Their “Algorithm Accountability Act” would create exactly the kind of censorship regime they claim to oppose.
Both of them are full grown adults who have access to any number of experts on this or any other subject at their fingertips. To the extent that they ‘don’t understand how it works'(rather than just are lying about it) I’ve no doubt whatsoever that that’s entirely deliberate on their part. If they truly don’t understand 230 that’s because they don’t want to understand how it works and what it says.
Re:
I’m genuinely surprised about Kelly’s role in this. If you’re smart enough to become a fighter pilot and an astronaut, you’re smart enough to learn how S230 actually works from people who aren’t financially vested in crippling it.
Re: Re:
Just goes to show that just because someone’s smart enough to learn and know a bunch about one subject it doesn’t mean they can’t be blindingly stupid(or dishonest as I suspect) on other subjects.
Re: Re: Re:
It’s just a variation of the Dunning-Kruger effect, specifically the skill transfer fallacy.
“I’m very good at math, therefore I would excel as an architect!”
Do they claim to be opposing that kind of censorship? By definition, accountability/liability is going to be a form of censorship. That’s kind of the point. The censorship they’re claiming to go after is incitement suppressing speech.
The quote doesn’t say automatically: “if it can be proven in court that they used an algorithm to amplify content that caused harm” is not automatic. (The bigger problem is presumably that “they used an algorithm to amplify content that caused harm” is nebulous, and doesn’t tie to something like an editorial decision.)
If that were true, there would never be any liability for speech, ever. While the exceptions are narrow, there are existing exceptions under e.g. Brandenburg, with some caveats in terms of imminence, etc. That said, yes, ‘most’ (doing a lot of lifting, that) of it will be covered by the 1A, given how narrow stuff like Brandenburg is. But as far as things go, targeting defamation/incitement is actually that narrow window that isn’t 1A protected.
Clamping down on incitement isn’t by itself insane. As you yourself noted, feeling safe is part of people feeling free to speak: “free speech isn’t just about whether you’re allowed to say something. It’s also about whether you feel safe saying it. Whether you feel welcome.” It’s pretty clear how that can be applied to incitement.
(That said, before people jump on this: I expect the actual implementation of this bill to be trash. I just think the article is missing a lot of the genuine criticisms by trying to shoehorn in the usual zingers. And you missed an easy one! Pointing out that “algorithm to amplify content” includes normal content algorithms was an easy slam dunk.)
sidenote:
Can you link to the actual bill text? Having a private right of action isn’t inherently problematic as long as it has anti-SLAPP-like protections. (I’m assuming it doesn’t, because this is a stupid messaging bill.)