For those who run online video game services, there have been plenty of ways to deal with those who cheat in-game. Some, like Blizzard, look to twist copyright law into a pretzel to argue that cheating in an online game somehow constitutes infringement. Other companies have gone for more creative options. Cheaters in Pokémon Go suddenly found themselves unable to find any but the most common Pokémon. Rockstar dumped cheaters in Max Payne 3's multiplayer into a cheater-only server where they could only cheat against one another.
But Activision has gone with a slightly different plan to combat Call of Duty online cheaters: simply cheat them back.
Cheaters who are subject to a cloaking penalty will find that “characters, bullets, even sound from legitimate players will be undetectable,” according to a post on the official Call of Duty development blog. Those cheaters will remain fully visible to non-cheaters, though; Activision quips that “they’ll be the players you see spinning in circles hollering, ‘Who is shooting me?!'”
The latest anti-cheat update will roll out first for Call of Duty: Vanguard, then be applied to the free-to-play Warzone, Activision says, “to minimize any issues players may encounter.” It also comes on top of another cheating mitigation measure, called Damage Shield, that was announced in February and “disables the cheater’s ability to inflict critical damage on other players.”
Now, the caveat to all of this is that it’s really just one more step in an ongoing arms race between the makers and users of cheating software and Activision’s ability to detect it. But, if done well, this is sort of an ingenious option. Cheaters cheat in online games for one of two reasons: to troll the other players or to appear to be a master at the game. This option, when working, eliminates both of those incentives.
Instead, it will be the cheaters who will be trolled, rendered helpless by the game, and at the mercy of the non-cheaters. And far from appearing to be gods of any particular game, “cloaking” will put cheaters at the bottom of the leaderboards.
You might think that Activision would be better served just banning cheaters and booting them from a match as soon as they’re detected, rather than merely messing with their effectiveness. But Activision wrote in February that instant mitigation “leaves the cheater vulnerable to real players and allows [the anti-cheat team] to collect information about a cheater’s system.” Activision also insists that there’s “no possibility” of a false positive punishing non-cheaters with mitigation drawbacks and that it “will never interfere in gunfights between law-abiding community members.”
The point here is that there are better options to combat online gaming cheaters than going with draconian legal routes. There’s no reason this can’t be fun!
We’ve noted a few times now that while Facebook gets most of the heat for its privacy scandals, the stuff going on in the telecom, app, and adtech markets with regard to location data makes many of Facebook’s privacy issues seem like a grade school picnic.
That was well highlighted by the recent Securus, LocationSmart, and numerous T-Mobile scandals, which perfectly showcased how cellular carriers, app makers, and location data brokers routinely buy and sell your daily movement records, with only a fleeting effort to ensure that subsequent buyers and sellers of that data adhere to basic privacy and security standards.
The end result has been just an absolute parade of scandals, but little more than a few pinky swears by impacted companies, wrist slaps by regulators, and apathy by Congress. As a result it just keeps happening over, and over, and over again.
The latest case in point: the Wall Street Journal has discovered that the Grindr dating app has also been collecting highly detailed user location data and selling it to a wide variety of middlemen since around 2017:
The commercial availability of the personal information, which hasn’t been previously reported, illustrates the thriving market for at-times intimate details about users that can be harvested from mobile devices. A U.S. Catholic official last year was outed as a Grindr user in a high-profile incident that involved analysis of similar data.
The data in question was made available through the online ad network MoPub (previously owned by Twitter) and then sold through its partner company UberMedia (recently renamed UM). Researchers had been warning about this problem for a while, and were largely ignored.
As usual, Grindr executives claim this wasn’t a big deal because the data was “anonymized” and didn’t include personal names. But as we’ve noted countless times, “anonymization” doesn’t actually mean anything, given that you can identify these users with just a few additional datasets. The data was also “detailed enough to infer things like romantic encounters between specific users based on their device’s proximity to one another,” the Journal notes.
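The re-identification point is easy to demonstrate. Here's a toy sketch (all device IDs, timestamps, and coordinates below are invented for illustration, not real Grindr data) of how joining an "anonymized" location trace against any outside dataset that ties a known person to a couple of places and times can single out one device ID:

```python
# Toy illustration of location re-identification.
# "Anonymized" pings are keyed only by a device ID -- no names.
# All data here is invented for illustration.
anonymized_pings = [
    ("device_a", "2021-06-01T08:05", (38.8895, -77.0353)),  # near a home
    ("device_a", "2021-06-01T12:30", (38.9072, -77.0369)),  # near a workplace
    ("device_b", "2021-06-01T08:05", (40.7128, -74.0060)),
    ("device_b", "2021-06-01T12:30", (40.7580, -73.9855)),
]

# Auxiliary knowledge about a specific person: two places and times
# they are known to have been (e.g. from public records or social media).
known_points = [
    ("2021-06-01T08:05", (38.8895, -77.0353)),
    ("2021-06-01T12:30", (38.9072, -77.0369)),
]

def close_enough(a, b, tol=0.001):
    """Crude proximity check (~100m at these latitudes)."""
    return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

# Keep only devices whose trace matches ALL of the known points.
candidates = {d for d, _, _ in anonymized_pings}
for ts, loc in known_points:
    candidates = {
        d for d, t, l in anonymized_pings
        if d in candidates and t == ts and close_enough(l, loc)
    }

print(candidates)  # -> {'device_a'}: two points sufficed to single out one ID
```

Two spatio-temporal points were enough here; research on real mobility datasets has found that a handful of such points uniquely identifies the vast majority of people, which is why "no names included" is not a meaningful privacy protection.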
Grindr states that the company cut off sales of this data two years ago, but, as usual with this kind of stuff, there’s no independent way to test or confirm that claim without the aid of whistleblowers and competent privacy regulators. An insider tells the Journal that Grindr didn’t do anything about this until 2020 because it didn’t see the harm (and it continues to downplay the harm in the story).
Grindr’s problems due to rampant over-collection and sale of location (and other) data were bad enough that the report notes the app was used as an example in a presentation to multiple government agencies about the intelligence risks posed by over-abundant data collection and sale:
National-security officials have also indicated concern about the issue: The Grindr data were used as part of a demonstration for various U.S. government agencies about the intelligence risks from commercially available information, according to a person who was involved in the presentation.
Grindr’s Chinese owner Beijing Kunlun was forced to sell the app in 2020 due to national security concerns. But while DC loves to superficially hyperventilate about Chinese-owned companies and data collection specifically (see: the whole TikTok fracas), rampant data collection remains a problem with American-owned companies too, in part because that data still winds up widely available.
The reality is that the wild west approach to data collection and monetization causes an incalculable level of potential harm. Yet we don’t meaningfully address it because the sale of such data is simply too profitable for too many different industries (marketing, telecom, healthcare, insurance, banks, app makers), all simultaneously lobbying Congress to do either nothing, or the wrong thing.
As a result, we keep stumbling through the same stories week after week as if stuck in a bizarre Groundhog-Day-esque purgatory, with the key difference being that nobody in the U.S. seems to be learning anything from the experience.
Back in December we wrote about just how absolutely, pathetically ridiculous Alex Berenson’s lawsuit against Twitter was. As you’ll recall, Berenson, who has accurately been described as the “pandemic’s wrongest man”, got kicked off Twitter after posting a non-stop stream of utter nonsense, completely misinterpreting vaccine data in ways that weren’t just embarrassing but that likely were causing people to die. The lawsuit against Twitter trotted out a number of laughable theories, including that it violated the 1st Amendment to kick him off, and that it was “unfair competition” and a “breach of contract,” among other things. We went through how laughable all of these were, but didn’t spend that much time on it because, really, there’s only so much time one should waste on such things.
There have been a bunch of filings back and forth in the lawsuit, with each of Berenson’s more ridiculous than the previous one, but we didn’t write about them because we were waiting for the judge to rule. Of course, last Wednesday night Berenson went on Fox News, natch, to tell the heir of a frozen food fortune that “our lawsuit, I think, is stronger than a lot of other lawsuits that have not survived the motion-to-dismiss stage.”
Two days later, Judge William Alsup (not known for putting up with very much bullshit) dismissed nearly all of the lawsuit. Of course, one tiny bit of it has survived, just barely (and not for long), so in this way I guess Berenson actually got one thing right. His lawsuit is just ever so slightly “stronger” than a lot of other lawsuits. But not really.
Alsup has dismissed all of the speech arguments: the 1st Amendment claims, the unfair competition claims, etc., and did so incredibly easily by pointing to Section 230 and noting that Berenson has no claim here, despite his earlier confidence that his lawsuit was somehow “different.” It wasn’t. It’s worth noting that Alsup dismisses under 230(c)(2), which is a bit surprising, since most of these kinds of cases just point to (c)(1) and are done with it. As a refresher, (c)(1) is the part that says you can’t hold a website liable for someone else’s content, while (c)(2) is the more awkwardly worded part about no liability for “good faith” moderation actions. Many, many courts have realized, correctly, that (c)(2) barely matters in the face of (c)(1), because even if you had bad faith moderation, the website would still be immune, since any liability would be based on the user’s content.
But here, Alsup notes that even under (c)(2) Berenson has no argument at all:
For an internet platform like Twitter, Section 230 precludes liability for removing content and preventing content from being posted that the platform finds would cause its users harm, such as misinformation regarding COVID-19. Plaintiff’s allegations regarding the leadup to his account suspension do not provide a sufficient factual underpinning for his conclusion Twitter lacked good faith. Twitter constructed a robust five-strike COVID-19 misinformation policy and, even if it applied those strikes in error, that alone would not show bad faith. Rather, the allegations are consistent with Twitter’s good faith effort to respond to clearly objectionable content posted by users on its platform
That’s it. That forecloses the core of the lawsuit. There isn’t that much discussion about it, because there doesn’t need to be. Alsup also completely trashes the specific 1st Amendment claim:
Aside from Section 230, plaintiff fails to even state a First Amendment claim. The free speech clause only prohibits government abridgement of speech — plaintiff concedes Twitter is a private company (Compl. ¶15). Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019). Twitter’s actions here, moreover, do not constitute state action under the joint action test because the combination of (1) the shift in Twitter’s enforcement position, and (2) general cajoling from various federal officials regarding misinformation on social media platforms do not plausibly assert Twitter conspired or was otherwise a willful participant in government action. See Heineke v. Santa Clara Univ., 965 F.3d 1009, 1014 (9th Cir. 2020). For the same reasons, plaintiff has not alleged state action under the governmental nexus test either, which is generally subsumed by the joint action test. Naoko Ohno v. Yuko Yasuma, 723 F.3d 984, 995 n.13 (9th Cir. 2013). Twitter “may be a paradigmatic public square on the Internet, but it is not transformed into a state actor solely by providing a forum for speech.” Prager Univ. v. Google LLC, 951 F.3d 991, 997 (9th Cir. 2020) (cleaned up, quotation omitted).
The Lanham Act claims? Also dismissed in a single paragraph:
Aside from Section 230, the Lanham Act claim also fails anyway. The Lanham Act “prohibits any person from misrepresenting her or another person’s goods or services in ‘commercial advertising or promotion.’” Ariix, LLC v. NutriSearch Corp., 985 F.3d 1107, 1114–15 (9th Cir. 2021) (quoting 15 U.S.C. § 1125(a)(1)(B)). Neither Twitter’s labelling of plaintiff’s tweets, nor its statement regarding the suspension of his account plausibly propose a commercial transaction. See United States v. United Foods, Inc., 533 U.S. 405, 409 (2001). They are not advertisements, nor do they refer to a particular product, and the theory that Twitter’s statements were made in the context in which plaintiff offers his services is too attenuated. See Hunt v. City of L.A., 638 F.3d 703, 715 (9th Cir. 2011) (citation omitted). Applying common sense, this order concludes Twitter’s warning labels and suspension notice constitute non-commercial speech aimed instead at promoting the veracity of tweets regarding COVID-19
Judge Alsup notes it’s not even worth going into Berenson’s laughable claims that Twitter is a common carrier under the California Constitution (it’s not) because Section 230 takes care of that anyway.
There are two claims that live on, though it’s unlikely they’ll last for very long: part of his “breach of contract” claims and the “promissory estoppel” claim. So let’s dig in on those. These are really based on a case that we’ve talked about before, one from about a decade ago, Barnes v. Yahoo, in which someone was able to get around Section 230 because a Yahoo employee had promised that they would “take care of” the content that was being complained about. Based on that, the courts ruled that a direct promise had been made, and breaking that was effectively breaking a contract.
When the original lawsuit came out, I had initially written up an analysis of why Berenson’s situation was so different from Barnes that the similar claims in the lawsuit were unlikely to fly, but dropped it because it seemed like a lot of words to explain something that was unlikely to amount to much. However, here it lives on, although there’s every indication that Alsup will do away with it shortly.
The issue is that the claims lean very, very heavily on some email exchanges Berenson had with a Twitter comms exec, in which the exec told Berenson he didn’t believe that his tweets were likely the target of policy changes, and that if he heard anything else he’d try to let him know, and said he’d try to make sure Berenson was “given a heads up” before anything happened. That’s not anything even remotely in the vicinity of a promise that the company would never take action on Berenson, so not at all like the Barnes scenario.
Either way Judge Alsup, in his standard methodical manner, allows for very, very limited discovery to establish whether or not there was any actual contract here that was breached, and any actual promises made that would trip the promissory estoppel flag. Twitter will have to cough up some details of how it flagged Berenson’s account, and how it determined he had hit the “five strikes” threshold to have the account suspended. It also needs to share some of its communications about Berenson and the termination of his account.
Two separate lawyers I spoke to with litigation experience said this appears to be Alsup being extremely thorough and just making sure there really isn’t some secret thing going on in Twitter, and that the company followed the necessary steps in banning Berenson.
Of course, many foolish people are celebrating. This includes Berenson, who is claiming an extremely premature victory on his Substack, saying that Twitter “is going to have a hella time slithering out of” these discovery demands, which he (incorrectly) calls “broad.” In another post he still appears to be claiming that this is a “fight for free speech,” even though all the free speech parts were dismissed without leave to amend. It doesn’t help that an editor at Politico incorrectly titled its piece on this ruling “Twitter loses bid to toss Alex Berenson lawsuit,” with a subhead claiming that “the free speech complaint against Twitter will be allowed to proceed.” (The article itself is actually good, but the headline and subhead are wrong.)
The free speech parts of the lawsuit were all tossed. There’s only the issue of breach of contract and promissory estoppel here, and the judge is allowing narrow discovery on those issues just to see if there’s any smoking gun. If there isn’t, those two claims will get tossed as well. It is, of course, always possible that discovery will turn up some internal nonsense at Twitter, but this case is very much on the rocks.
Late last year, more than 165 Hertz customers sued the company over false allegations of theft. Multiple plaintiffs claimed they had been stopped by law enforcement for supposedly having stolen a rental car. In some cases, customers were jailed for months before criminal charges were dropped. One former Hertz employee claims this is just how Hertz does business. Rather than go through the normal collection process, Hertz appears to prefer to let law enforcement handle it.
Lawyers are preparing to file 100 new claims over the coming weeks on behalf of Hertz customers who say they were victims of false arrest incidents.
Among the new cases are those of Brittany Morgan and Jeremy Benjamin, a couple who were pulled over and arrested at gunpoint in Houston, Texas, after renting a Ford Mustang from Hertz at George Bush Intercontinental Airport.
“We showed the police the paperwork and told them we had obviously not stolen the car. We were stunned when the police told us that the license plate on our car was from another car that was reported stolen and that it was not even the license plate listed on our rental paperwork,” the couple wrote in a legal declaration. “We are infuriated that something like this could happen, and dumbfounded to learn that Hertz has previously rented “stolen” cars to customers.”
Hertz, of course, is still offering the same statement — one that claims this only affects a very small percentage of its millions of rentals. How small that percentage actually is remains to be seen. Hertz has been ordered to provide exact numbers of its bogus theft claims to the court handling the class action suit.
But even if it’s a small percentage, it’s still hundreds or thousands more than are generated by other car rental companies. And if those companies can accurately handle vehicle rentals without getting people arrested on bogus theft charges, there’s no reason Hertz can’t either… other than possibly a lack of desire.
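To get a sense of the scale involved, here's a back-of-the-envelope calculation (the rental volume and error rate below are invented assumptions for illustration, not Hertz's actual figures):

```python
# Hypothetical numbers for scale only -- not Hertz's actual figures,
# which it has been ordered to provide to the court.
annual_rentals = 25_000_000   # assumed order of magnitude for a major agency
false_report_rate = 0.0001    # an assumed "very small" rate: 0.01%

false_reports = annual_rentals * false_report_rate
print(int(false_reports))  # -> 2500
```

Even a rate a company could honestly describe as "a very small percentage" would, at these volumes, mean thousands of customers reported as car thieves every year.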
And while Hertz continues to pretend the problem only Hertz has isn’t actually a problem, reports of bogus theft accusations continue to roll in. This one comes via travel blog View from the Wing, sent in by another regular Hertz customer who has just been screwed in a uniquely Hertz way:
I just got a letter today from Hertz claiming I still have a vehicle I returned last week and threatening me with arrest. No previous communication prior to the letter. President’s Club, no rental extension, picked up and returned at the airport. Thankfully, I took a photo of the dash when I returned it, for mileage and fuel record, and that’s geolocated to Hertz airport rental office.
Offshore call center agent said they’d “look into it.”
I’m done with these guys. They can keep my 10k points.
Just another anomaly for Hertz. Just another data point the company feels is too insignificant to address honestly, much less get sued over. But this certainly isn’t the last bit of anecdotal evidence that will likely soon become actual sworn evidence. Hertz wants us to look at the iceberg. But the tip is where all the action is.
As everyone’s trying to read the tea leaves of what an Elon Musk-owned Twitter will actually look like, it’s been reported that in his presentation to Wall St. banks to get the financing he needs to complete the deal, he suggested the deal would be profitable because of some of his new business model ideas. Now, obviously, these are entirely speculative, and my guess is that he hasn’t thought through any of this that deeply (just like he hasn’t thought through content moderation’s challenges, even though he’s sure he can fix it). But, at least some of the banks are buying into the deal based on Musk promising a stronger Twitter business, so we need to pay attention to his ideas. Like this one, that, um, would be effectively impossible under the 1st Amendment.
Musk told the banks he also plans to develop features to grow business revenue, including new ways to make money out of tweets that contain important information or go viral, the sources said.
Ideas he brought up included charging a fee when a third-party website wants to quote or embed a tweet from verified individuals or organizations.
So, like, I don’t want to throw any cold water on the business model ideas of the guy people keep telling me is the most brilliant innovative business mind of our generation, but… it… um… seems at least a little ironic that he’s spent the past month screaming about “free speech” and enabling whatever the law allows… and now he wants to charge companies for quoting a tweet.
Yeah, so, thanks to the 1st Amendment (that he claims to support so much) he’s unlikely to be able to do that successfully. Quoting a tweet (we’ll deal with embedding shortly) in almost every damn case is going to be fair use under copyright law. And, a key reason we have fair use in copyright law… is that the 1st Amendment requires it, or else copyright law would stifle the very free speech that Musk claims to love so much.
In Eldred v. Ashcroft, the important (if wrongly decided) case on the constitutionality of copyright term extension, Justice Ruth Bader Ginsburg repeatedly talked about how fair use was a “safeguard” in copyright law to make sure that copyright law could exist under the 1st Amendment, even as it could be used to suppress speech. The crux of the argument is that, because there’s fair use that allows people to do things like quote a 240 character outburst, there’s no serious concern about copyright silencing speech. In other words, fair use is the necessary safety valve that makes copyright compatible with the 1st Amendment.
Given that Musk has claimed (incorrectly, but really, whatever) that free speech laws represent “the will of the people,” and his apparent big business model innovation is to demand that media organizations pay to quote tweets, which violates our fair use rights, which are necessary under the 1st Amendment… well, it appears that his biggest business model idea so far is to try to ignore the 1st Amendment rights of people wishing to quote tweets.
Good luck with that.
Also, under Twitter’s current terms of service, users retain any copyright interest in their own tweets. Twitter holds a license to them, but that wouldn’t allow Twitter, as an entity, to file copyright claims against any media organization that was quoting tweets in the first place. The only way it could do that is if it changed the terms entirely and required all its users to actually assign their copyrights to Twitter and, well, good luck with that as well.
Now, of course, the report claimed that the fee could be charged if someone “wants to quote or embed a tweet from verified individuals,” and the company certainly could set up some convoluted system to try to make people pay to embed, but that would (a) be fucking annoying for most everyone else and (b) just lead to everyone screenshotting instead of embedding, which is a lot less useful in the long run for Twitter, since it would drive fewer people to interact with Twitter. And, again, fair use and (I feel I must remind you) the 1st Amendment would protect all that screenshotting and quoting. Free speech, ftw!
And that’s not even getting into the idea that Twitter might now be effectively selling its popular tweets to websites. I mean, if this plan were to go forward (and somehow got over all the other hurdles), I’d imagine the company would literally need to cut its users in on the deal and set up some sort of “every time the NY Times embeds your tweet, they pay us $5 and we pass $3 of that on to you” arrangement, or some sort of nonsense like that. And, sure, maybe it’ll excite some Twitter users that they could get paid for their tweets (again, assuming any third party website out there ignores its fair use/1st Amendment rights to simply quote or screenshot and chooses to pay instead).
But, this would also likely create a whole world of complications. First, Twitter would need to set up an entirely new kind of operation to manage all of this. Musk also promised in these documents that he’s planning on reducing headcount at Twitter, but he’d need to staff up at least on managing the payments and payouts to tweeters. But, again, this is Elon Musk, so I’m guessing the system will work on the blockchain in Dogecoin and payments will flow automagically. And sure, maybe you could see how that could actually kinda work, if you’re into that sort of thing?
But, now, we get into the next issue: when you add money (even cute dog-meme based money) to a platform where people normally did shit for free, the incentives change. Oh, boy do they ever change. Suddenly you’re going to get scammers galore, looking to abuse the system, and get filthy stinkin’ Doge rich. I guess maybe this needs to be expressed in meme form?
And Elon should understand this better than anyone, given how frequently crypto scammers follow him around and try to scam his fans. Introducing actual money, even of the meme variety, into the mix is going to lead to a lot of scam behavior. And it would probably be helpful if the company had a… what’s it called… oh yeah, trust & safety staff to help think these issues through.
I’m never going to knock anyone for experimenting with creative business model ideas. And I’m all for Twitter trying out non-advertising based business models, as Elon has suggested is part of his focus. That actually seems like a good idea. But, it’s kinda weird when this whole deal is premised on the idea of bringing more “free speech” to the site… and his first business model suggestion when trying to convince banks to back him is to ignore the free speech rights of others and try to force them to pay up.
Free up space on your phone, tablet, laptop, or home computer. Prism Drive is a lightning fast hot storage solution that allows you to store all of your files in one place and access them from any device. Easily share large files, like video, graphics, images, and audio. Access files from your computer, your phone, or your tablet. Preview popular file types, like Microsoft Office documents, MP4s, or JPEGs, in your browser or app without needing to download the file. You can get 2 TB of storage for $49, 5 TB for $69, or 10 TB for $89.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
NSO Group’s reputation continues to decline, tracked inversely by the rise of Citizen Lab, a team of Canadian security researchers working out of the University of Toronto. Citizen Lab has exposed plenty of abuse by NSO’s customers, and saved plenty of malware targets from remaining compromised by NSO-crafted spyware.
This obviously hasn’t made NSO Group happy. And it appears to have perturbed Novalpina Capital, a private equity group that acquired a majority share of NSO in 2019. Thanks to personal data requests enabled by UK data protection laws, Citizen Lab director Ron Deibert and senior researcher John Scott-Railton were able to obtain internal communications from Novalpina that show the equity firm took steps to limit the damage Citizen Lab’s research was doing to its new spyware acquisition.
The released data, combined with information from other sources, sheds light on an apparent attempt by Novalpina partner Stephen Peel to gather information on and undermine Citizen Lab. In one case, he even reached out to George Soros, whose foundation is an important Citizen Lab donor, and complained about the researchers.
That apparently had no effect, other than to have Soros suggest Novalpina divest itself of NSO. Neither did other efforts, which included hiring US lawyer Vivek Krishnamurthy as a “specialist external adviser” to allegedly align NSO Group with the UN’s guidance on business and human rights.
What Krishnamurthy actually did was something different. Krishnamurthy is a University of Toronto alumnus and had worked as a research assistant for Citizen Lab’s director, Ron Deibert.
A February 2019 proposal by Foley Hoag to provide legal services to NSO said Krishnamurthy’s prior relationship with Deibert meant he was in a “unique position to conduct outreach to Citizen Lab should the NSO Group find it desirable to do so”. The proposal acknowledged that NSO had “reputational challenges” and said: “Our goal is to help the NSO Group become seen as the world’s most ethical company in the surveillance space by establishing systems, policies, and procedures to ensure that it operates in a rights-respecting manner.”
In a 1 March 2019 exchange, Peel emailed Krishnamurthy telling the lawyer it was time to “reach out to Deibert to find out what is going on”. The lawyer promptly replied that he would, adding: “He can be prickly, and he’s clearly worked up about NSO.”
Deibert declined an invitation to meet with Krishnamurthy, citing NSO Group’s lack of good faith when responding to inquiries and investigations by Citizen Lab. Krishnamurthy was rejected again three months later when he attempted to meet with Deibert during a family trip to Toronto.
With this information now out in the open, the denials are rolling in.
Krishnamurthy claims his actions were undertaken in good faith and that he was not trying to persuade Deibert and Citizen Lab to ease up on their criticism of NSO Group. He also claims he now “regrets” that his work with Novalpina harmed his relationship with his former colleague.
Mark Stephens, a UK lawyer who’s a mutual friend of both Krishnamurthy and Novalpina’s Stephen Peel (and who encouraged Krishnamurthy to work for Novalpina), offered up this ridiculous and completely laughable assessment of Citizen Lab in response to the Guardian article:
Stephens praised Peel and criticised Citizen Lab for disproportionately focusing on NSO.
“The practical result of what they [Citizen Lab] have done is to ignore and effectively divert attention from the other players in this marketplace and they have given them a completely free pass and I think that’s reprehensible,” Stephens said.
Citizen Lab has performed years of research into malware deployment, state-sponsored hacking, and government surveillance activities. If there’s been a spike in recent months in NSO Group-related research, it’s because new information recently surfaced showing how prevalent its malware is and how often it targets people who shouldn’t be targeted by extremely powerful malware.
What’s shown here isn’t particularly surprising. But it is disheartening. The information obtained by Citizen Lab shows little more than Novalpina belatedly realizing it had acquired an extremely toxic asset. But rather than search for a way to unload NSO, it chose to target Citizen Lab’s funding (via the conversation with George Soros) and leverage a lawyer’s personal and professional relationship with Citizen Lab’s director in hopes of gathering information on the Lab’s research or persuading it to shift its focus elsewhere.
You may have heard that Republican politicians have been celebrating Elon Musk’s announced plans to purchase Twitter, in the belief that his extraordinarily confused understanding of free speech and content moderation will allow them to ramp up the kinds of nonsense, abuse, and harassment they can spread on Twitter. I’m still not convinced that will actually be the result, but, in the meantime, it does seem weird that Republicans are now trying to burden their new friend with an avalanche of frivolous lawsuits. But, that’s exactly what they’re doing.
Republican Representative Marjorie Taylor Greene — not exactly known for understanding, well, anything — has introduced a bill to completely abolish Section 230. Also not known for being much of an original thinker, Greene’s bill is simply the House companion to Senator Bill Hagerty’s bill that was mocked almost exactly a year ago.
Of course, stripping Section 230 still doesn’t actually accomplish what most Republicans seem to think it would. Since it would massively increase liability on websites, it would actually make them much more interested in removing content to avoid those lawsuits. Indeed, Greene’s own press release seems to tout increased lawsuits as a feature of the bill:
Creating a Private Right of Action:
Consumers can address violations of the previous two provisions via civil action.
So, it seems that Greene’s excited move to abolish Section 230… is also a plan to burden Elon Musk with a ton of frivolous lawsuits. Also, Trump and his Truth Social.
It’s almost as if none of them have thought through any of this.