Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/mmasnick.bsky.social, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 3 January 2025 @ 01:32pm

Food Tracking App To Delete Users’ Previously Recorded Meals After Being Sued By Food Data Company

For the past few years, I’ve used Cronometer as a nutrition app. It’s pretty good for tracking food, exercise, and much more. I appreciate that it has very detailed nutritional data, and even when it doesn’t, I’m able to upload nutritional labels with just a quick photograph on my phone. It’s a nice app.

So I was quite surprised a month ago to receive an email from the company saying that they no longer had a contract with a former data provider, a company called Trustwell, and because of that, the company had to remove previously recorded data from past meals that I had logged.

We write to let you know that a former food data provider, Trustwell (Formerly ESHA), is requiring us to remove their foods from our database. We believe their positions are improper and violate U.S. laws. Nevertheless, we are in the process of removing Trustwell foods from our database.

The removal will happen no later than February 15, 2025, and may need to happen earlier based exclusively on the actions of Trustwell.

It then informs me that it needs to remove data from two meals I had logged, both of them from last April. The company suggests that: “If this older data is of importance to you, we recommend locating the items in your diary, and replacing them with alternatives from our database prior to their removal.”

It doesn’t much matter to me anymore, since I don’t really care what I consumed back in April, but the whole thing seems preposterous. There are no intellectual property rights over nutritional data. Under the Supreme Court’s ruling in Feist, I can’t see how anyone could possibly claim copyright in such data. Feist, which was about phone numbers in a phone book, held that purely factual data can’t be subject to copyright, even if someone goes around and collects it all into a single database. The same principle applies to nutritional data: these are uncopyrightable facts, not creative expression.

It is possible, as some suggested to me on Bluesky, that there was a contractual agreement between Cronometer and ESHA to delete any data if the contract concluded. Even if such an agreement exists, though, it should be limited to the database itself, not to people recording such data into their personal journals. It’s not “the database” that is being copied into people’s tracking journals, but merely the factual data about particular foods. In addition, the users are not bound by whatever terms Cronometer has in its contracts with others.

However, after digging into this story, some more details showed up, including that ESHA sued Cronometer back in September over this. The lawsuit claims that the agreement between the two companies was that Cronometer would only use ESHA data for “its own internal analyses” and not build the data into its product. Cronometer’s answer to the lawsuit says all of this is complete bullshit. It notes that ESHA directly helped Cronometer implement the databases into the publicly available software, that the two companies were regularly in contact about it, that ESHA employees told Cronometer how excited they were to check out the app, and that ESHA even asked Cronometer if it could put out a press release about the integration.

Even more telling:

ESHA’s co-founder later downloaded a publicly-available copy of Cronometer’s software and praised Cronometer for its use of the ESHA database within that software, thanking Cronometer for “playing by the rules.”

Either way, it’s yet another unfortunate example of the world we live in where digital services mean that things you think you control can get ripped out from under you.

The reality is that this appears to be yet another thing ruined by private equity. The email mentions that Cronometer had a previous deal with ESHA, which is now Trustwell. It appears that new name was a result of a merger between ESHA and FoodLogiq, which was done in combination with an investment from the private equity firm The Riverside Company. It seems entirely likely that post-merger, Trustwell has made it more difficult/expensive for apps like Cronometer to use its nutritional data, and is now either demanding more cash or the removal of already logged foods from users.

Indeed, Cronometer says in its legal filing that the private equity goons who put together Trustwell really just want to break the deal they had with Cronometer in order to build a competing app:

What actually underlies this lawsuit is not any “secret” “scheme” by Cronometer, whose use of the databases has always been wide open and publicly recognized by ESHA. Instead, this lawsuit is part of a scheme devised by Trustwell and its investors to create a monopoly for one of its other products: Food Processor®, a dietary tracking app that competes with Cronometer’s products. This lawsuit is fueled by an infusion of private equity money that purchased ESHA and formed Trustwell. Those private equity investors are seeking to monetize their investment by illegally propping up Food Processor®’s market through the pursuit of baseless litigations against smaller companies like Cronometer to scare them out of the market with the threat of substantial legal expenditures.

Once again, this is why we can’t have nice things.

Posted on Techdirt - 2 January 2025 @ 11:17am

Elon Musk Doesn’t Like Some Headlines. But That Doesn’t Make Them Defamatory

Elon Musk is once again threatening to sue over speech he dislikes — this time, over factual headlines about a deadly explosion involving a Tesla Cybertruck. But not liking how a story is framed doesn’t make it defamatory. For a statement about a public figure to be defamatory, it must be a false statement of fact, it must be damaging, and it must be published with “actual malice,” meaning the publisher knew it was false or acted with reckless disregard for whether it was. None of that applies here.

Merely unflattering portrayals, or factual framing that some feel is “misleading,” are not defamation.

Musk’s legal threats over these headlines are not just baseless, but dangerous. They show a disregard for free speech and an attempt to intimidate the press. And unfortunately, he’s not alone in pushing this censorial theory.

Back in 2020, you may recall that we criticized Larry Lessig for trying to make what he called “Clickbait Defamation” into a thing. His argument was that a fully truthful headline that is framed to imply something he felt was unfair should be considered defamatory. That, of course, is not how defamation actually works. Lessig eventually dropped his lawsuit after the NY Times changed the headline he disliked, but it appears that others are now picking up on this theory, with Elon Musk leading the charge.

As you have likely heard, yesterday a US Army special forces operations sergeant allegedly drove a rented Tesla Cybertruck with its bed full of explosives and fireworks, and parked it in front of the Trump Hotel in Las Vegas, where those explosives were then detonated, killing the driver and injuring a few people nearby.

As with many breaking news stories, the details of the story were not known at first, with the salient facts at the beginning being (1) Trump Hotel in Vegas, (2) Tesla Cybertruck, and (3) explosion.

Given that there have been multiple stories in the past year of Cybertrucks catching fire, including one from just a few days ago, many people initially wondered if this was just another case of that happening. Others, noting the close relationship between Donald Trump and Elon Musk, suggested that the imagery of a burning Cybertruck in front of the Trump Hotel worked as a metaphor for the state of world news, but might also point to something more deliberate. Investigators are still working out the details.

But, either way, including Cybertruck and explosion in a headline is totally factual. Yet, Elon Musk is suggesting that he might sue over such headlines:

But, for there to be actual defamation, there needs to be a false statement of fact (and, likely, published knowing or deeply suspecting it was false). Nothing in the headline: “Tesla Cybertruck explosion in front of Trump hotel in Las Vegas leaves 1 dead, 7 injured” is false. It’s all factual.

Senator Mike Lee, who once presented himself as a supporter of free speech and the First Amendment, also jumped into the fray, suggesting that NYT v. Sullivan’s “actual malice” standard should fall, allowing Musk to sue over similar headlines:

I mean, first of all, Elon Musk isn’t even mentioned, so it’s difficult to say that this would be defamation against Elon. Second, that was the original AP headline, right after the event occurred, when that was basically all that was known: a Cybertruck did, indeed, catch fire outside of the Trump Hotel. At that moment it wasn’t even known that the bed was full of explosive materials.

But also, everything in there is factual.

And, yes, you can argue that the eventual framing is misleading or even unfair. But that’s how free speech works. There are tons of headlines that people feel are misleading or unfair. I call them out, and I get accused of writing misleading headlines myself. People sometimes don’t like the way other people frame things or title things.

But none of that is defamatory.

Indeed, if Mike Lee is so concerned about the use of the passive voice in headlines, when will we see him claiming that the traditional passive voice of “police-involved shooting” is defamatory as well?

Some could argue (and a few people did yell at me on Bluesky about this!) that coverage of other incidents involving vehicles, including the attack in New Orleans the same day, didn’t focus on the make and model of the vehicle involved (a Ford F-150 Lightning, if you’re wondering).

But that’s understandable. Again, before anyone knew the details of what happened in Vegas, the only things known were the three simple facts reported in those headlines. Furthermore, the make and model of the car actually was perfectly newsworthy in this story because of Musk’s close association with Trump, which certainly suggested there may have been a connection worth mentioning.

That wasn’t true in the New Orleans case (though certainly some news stories talked about the Ford truck and how heavy it was, likely contributing to the damage caused).

Either way, this is yet another case where the self-described “free speech absolutist” Elon Musk seems to be threatening legal action over speech he dislikes, which isn’t even in the same zip code as defamation.

Whether or not he actually sues, the threat itself is a form of intimidation: if you don’t cover stories in a way that makes me look good, I may sue you and drag you into a costly and resource-intensive lawsuit, no matter how preposterous the claims may be.

Actual free speech means that public figures, like Elon Musk, need to have a thicker skin. They need to recognize that not everyone will publish things that are flattering, and sometimes you just have to suck it up and take it. Or use the fact that you have one of the world’s largest megaphones to… use your own voice to respond. Rather than threatening legal recourse. That’s how free speech works.

This is also why we need stronger anti-SLAPP laws in every state and a federal anti-SLAPP law. Because we know that the rich and powerful have no problem abusing the judicial system to burden the media with vexatious SLAPP suits as a method of intimidation.

Posted on Techdirt - 31 December 2024 @ 09:00am

The Biggest Challenges Create The Biggest Opportunities

As is the tradition on Techdirt, my final post of the year is about optimism and how I continue to be optimistic about innovation and online community, even in the midst of whatever other nonsense may be happening around the globe. This is now my 16th such post. The trend kicked off in 2008 when I had multiple conversations noting that I seemed weirdly optimistic about innovation, even as I was constantly writing angry screeds about stupid stuff that was going on.

As I told Ed Zitron during a podcast earlier this year, I’ve always been a fundamental believer in innovation making the world a better place for all, and whatever anger you see from me comes out of frustration with those who seek to delay or limit the benefits of such useful innovation. However, as last year’s final message pointed out, this is not the same thing as saying “acceleration at all costs,” because if you build without taking into account the potential harms, your advancements will be short-lived, and the backlash will be even worse.

I believe in supporting innovation, but with an eye towards building it thoughtfully, such that the gains can be sustainable.

If you want to look at the past posts, here’s a list:

For this year, I’ve heard a few people say they were curious how I’d pull off an optimistic post, given all the nonsense the larger world is facing and the stupidly dangerous path the US (in particular) appears to be treading down. I have seen some suggest that the capture of the White House by at least some Silicon Valley interests could be good for innovation, but I think that’s only true at the margins.

The success of American dynamism comes from a variety of sources, but our basic institutions are key among them. And the MAGA/Trump world is threatening to rip apart some of those institutions while destroying other important norms that lead to innovative societies. The attacks on free speech, the gleeful vindictiveness against a list of perceived enemies, and the general openness to corruption certainly don’t bode well for building sustainable, useful innovations.

On top of that, the rush of big tech and media companies to fawn over and fund Donald Trump in his return to the White House similarly suggests that they’re not looking to push innovation forward, but rather to become toadies hoping for handouts.

So why am I still optimistic? Because this moment actually offers an alternative (and potentially better) path towards innovation. That is not to deny all of the terrible shit that is going to happen over the next few years, nor the harm that many will have to deal with, especially the most vulnerable among us.

But it can also kickstart alternatives. Rather than relying on the slow and messy process of antitrust or questionable regulations, the mess that is the government can open up opportunities to build better systems from the ground up. Obviously, I am very biased, but I think the rapid adoption and growth of Bluesky gives you just a tiny glimpse of the kinds of things that may be coming. And Bluesky is just one example – there are many areas ripe for building user-empowering protocols and systems.

When other stuff goes bad, it opens up an opportunity to build something fundamentally better, something that starts from a different conception: not just “the next [fill in the blank enshittified service],” but a technology that is much more resistant to enshittification in the first place.

As we’re seeing with Bluesky, some of that change is difficult. Part of what makes Bluesky important is that it’s built to give more power to the users, and yet we often see users demanding that the company reject that principle in favor of a preferred outcome, even when users have the ability to build that outcome for themselves.

We’ve spent much of the past two decades fighting over who was going to better protect people online: the big evil companies or the big evil government. And hopefully what we’re learning is that neither is the best solution. Providing users the tools (whether on their own or via third parties) is going to lead to better (and more competitive) long-term outcomes.

There will be growing pains because we’re all learning (or relearning) much of this on the fly. But the opportunity is now. People are reasonably upset with both the way the government handles things and the way the biggest companies handle things. Rooting for one or the other to get better seems futile. Let’s focus on making sure that neither matters as much.

We’re already seeing it happen in certain corners of the internet, and there’s plenty more opportunity where that comes from. Just today, I was having a conversation (on Bluesky) about possibilities for a more “protocol” approach to e-commerce, not just social media. There are all sorts of creative ways in which we can rethink the internet and bring it back to its original fundamental promise.

I don’t like the fact that we are in a position where the biggest companies and our elected officials are equally untrustworthy, but if that’s where we are, we might as well use it as an opportunity to route around both and build better systems that aren’t focused on extraction from the public, but empowering the public.

The concerning actions of both government and big tech companies, rather than being cause for despair, can actually spur people to build better, more decentralized systems that empower users rather than institutions.

Yes, many things are terrible, but history has shown us that the greatest innovations often seem to take hold at these kinds of moments. The need is there, as is the public’s distrust in the way things were done before. That has led some to embrace the wrecking ball approach to government, which seems quite likely to fail in disastrous ways.

A more positive approach is to build the systems that route around all of that, at a moment when many people are tired of the old systems and are much more open to adopting something that is both new and empowering. If we have to be dealing with lots of terrible things, I’m going to dedicate my efforts towards getting better systems online that help empower individuals, and I hope that others will join in the process.

As always, my final thoughts on these posts are thanking all of you, the community around Techdirt, for making all of this worthwhile. The community remains an amazing thing to me. I’ve said in the past that I write as if I’m going to share my thoughts into an empty void, not expecting anyone to ever pay attention, and I’m always amazed when anyone does, whether it’s to disagree with me, add some additional insights, challenge my thinking, or even reach out to talk about how to actually move some ideas forward.

I know this community is full of creators, thinkers and advocates who care deeply about using technology to make the world better. Let’s use this opportunity to prove that innovation, thoughtfully applied, can route around institutional failure and corruption. Once again, thank you to those who are reading this for making Techdirt such a wonderful and special place, and let’s focus on being truly optimistic about the opportunities in front of us.

Posted on Techdirt - 30 December 2024 @ 01:07pm

‘Free Speech Absolutist’ Elon Musk Suspends Critics On ExTwitter, Asks People To Be Nicer

The inevitable has happened and Elon has started banning and suppressing the speech of folks who were “on his team,” leading to many suddenly realizing that maybe he wasn’t such a free speech supporter after all.

Look, we’ve spent the better part of the last three years pointing out that Elon Musk does not understand free speech and has often worked directly against basic principles of free speech. He has filed numerous lawsuits that seek to suppress speech. And even if you want to claim he somehow took a more “free speech” approach to running ExTwitter than his predecessors, you’d still be wrong.

He has regularly banned journalists who anger him or shut down reporting that challenges his political allies. He has repeatedly throttled links to sites he views as competitive and recently admitted to suppressing posts with links to news sources.

And, of course, when it matters most for free speech, in pushing back against government attempts at suppression, Musk has shown that he’s a pushover for authoritarian demands, so long as he is supportive of the government in question. While he has occasionally stood up to governments he ideologically disagrees with, those cases seem to be the exceptions that prove the rule.

Even Elon’s own ExTwitter transparency report admits that under his watch, account suspensions have tripled compared to what they were pre-Musk.

There is no measure under which you can say that Elon is a bigger supporter of free speech than the previous management of Twitter, except in the very, very narrow category of “allowing bigoted Elon Musk fans to be loudly disruptive on the platform.”

And now, even that is coming back to bite him a bit.

In the last week, a bunch of MAGA folks called out Elon for his support for H1B visas and other attempts to bring in high-skilled tech workers to the US. Given that many of the MAGA supporters have spent much of the last two years falsely claiming that Elon was “bringing free speech back,” it was almost amusing to watch them slowly realize that he’s willing to suspend them or to take away their premium features on the site when he gets angry with them.

The most prominent account was Laura Loomer, whose biggest claim to fame seems to be her ability to get banned from platforms.

Musk then reached for his favorite trick for justifying account suppression as something other than an attack on free speech: redefining spam to mean something… totally unrelated to spam.

Musk’s explanation raises more questions than it answers. This is Elon retconning a justification for the suppression of certain accounts. First, he claims that the algorithm is set to “maximize unregretted user-seconds,” a made-up, impossible-to-calculate stat that he’s talked about for a while now. He then claims that the way the algorithm does this is by rating certain accounts based on how frequently other paying accounts mute or block them. But then he adds a caveat: if he discovers a brigading campaign by accounts to mute/block other accounts in an attempt to suppress their reach, ExTwitter can magically parse out the real mutes/blocks from the fake brigaded ones, and declare some accounts to be “spam.”
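
To make concrete how much discretion that caveat builds in, here is a minimal, purely hypothetical sketch of the scheme as Musk describes it. Nothing about ExTwitter’s actual system is public, and every name and value below is invented for illustration: only mutes and blocks from paying accounts count against an account’s reach, unless someone internally flags them as “brigading.”

from dataclasses import dataclass

@dataclass
class MuteOrBlock:
    actor_is_paying: bool       # per Musk's description, only paying accounts factor in
    flagged_as_brigading: bool  # decided by some unspecified internal process

def reach_penalty(events: list[MuteOrBlock]) -> int:
    """Count the mutes/blocks that are allowed to hurt an account's reach."""
    return sum(1 for e in events
               if e.actor_is_paying and not e.flagged_as_brigading)

print(reach_penalty([MuteOrBlock(True, False),     # counts against the account
                     MuteOrBlock(True, True),      # ignored as "brigading"
                     MuteOrBlock(False, False)]))  # ignored: not a paying account
# prints 1

The entire question of who gets labeled “spam” turns on who sets that brigading flag, and on what basis, which is exactly the problem.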

This is all a lot of nonsense that lets Elon suppress any speech he wants and justify it as spam, just as he’s done in the past by redefining “doxxing.” And, as with that ever-changing definition of doxxing, I imagine that his legion of fans will continue to buy into his nonsense definition of spam.

Well, except for those MAGA faithful who are now furious that their faces are being eaten by the Leopards Eating Faces Party they supported.

In other words, Musk reserves the right to unilaterally decide which blocks and mutes are “legitimate” and which are not, based on criteria known only to him. This arbitrary and opaque process is a far cry from a principled commitment to free speech.

(Also, I won’t even get into how his tweet misunderstands the whole “live by the sword/die by the sword” line, but will leave that as an exercise for readers).

The end result of this, though, came down to Musk pleading with people to stop being such assholes on the site he took over specifically to unban people for being assholes.

I mean, it’s not like we didn’t warn Elon exactly how this would go. And, it’s not like we haven’t written about how content moderation teams aren’t about ideology. They just wish everyone would stop being jerks, and getting people to stop being jerks is the central challenge for any site that allows user-generated content.

I know that I’m banging the drum on this over and over again, but it’s because there are still a ton of people insisting, falsely, that Elon Musk has some sort of principled take on free speech, when it’s been made clear over and over and over and over again that his take is based entirely on his own whims, and not on any coherent conception of free speech.

No matter how many times Musk is caught red-handed suppressing speech he doesn’t like, a vocal contingent will likely continue to buy into the myth of him as a “free speech absolutist.” But for anyone willing to look objectively at his actions rather than his words, the reality is undeniable. Elon Musk’s “free speech” posture is nothing more than a flimsy rhetorical cover for his own desire to control the discourse.

Yes, he has every right to do this on his own platform, but so too did the operators of Twitter before him. Musk may draw the lines of content moderation slightly differently than the previous team, but he certainly seems to draw them much more arbitrarily according to his personal whims.

Posted on Techdirt - 27 December 2024 @ 01:04pm

It’s Doubtful United Healthcare Is Abusing The DMCA To Takedown Luigi Mangione Apparel, But Someone Is

I had seen this story before Christmas making the rounds on Bluesky, claiming that United Healthcare had sent DMCA takedowns to Teepublic to remove artist Rachel Kenaston’s illustration of Luigi Mangione, the guy arrested for shooting and killing United Healthcare CEO Brian Thompson.

While it does appear that Teepublic did, in fact, remove the image and claims to have received a DMCA notice from United Healthcare, I find it extremely unlikely that UHC actually sent a DMCA notice, given that they have no legitimate copyright claim over the image and enough lawyers who would know that. But we’ll get to that.

This case highlights a fundamental problem with the DMCA — it enables censorship by creating a system (backed by law) which allows anyone to demand content be removed from the internet with no real due process, putting heavy legal (governmental) pressure on companies to comply even if the claims are dubious. This arguably violates First Amendment rights by allowing the government to silence lawful speech.

While we can’t say definitively that UHC is abusing copyright law here, the fact that someone is able to do so in this case demonstrates why copyright, and the DMCA’s notice-and-takedown procedure in particular, should be viewed as a problematic tool for censorship.

404 Media got its hands on the actual email Kenaston received from Teepublic:

For reference, here is the removed image that Kenaston made:

It’s clearly an illustration based on the photo the NYPD released when attempting to identify Mangione, of him apparently smiling for someone working at the hostel he stayed at.

So, first off, obviously, the underlying image that the NYPD released would have extremely limited copyright protections, which, if anything, would belong to the operator of the surveillance camera that took it. Kenaston’s illustration might also receive some fairly limited copyright protection for her artistic input.

But, obviously, none of that means that UHC would have any copyright interest at all. I doubt that UHC actually filed anything here, even if it doesn’t like the fact that a very large group of people appear to be supportive of Mangione. UHC has enough lawyers who understand IP law to know that this would be a totally bogus request. Of course, there are many cases of companies sending such bogus requests, but those typically involve media operations or other IP-based companies, where unrelated content gets swept up in indiscriminate waves of takedowns (often through a third-party brand monitoring service). It seems similarly unlikely that UHC operates that kind of large-scale DMCA takedown regime.

Also, TeePublic is misrepresenting the DMCA when it says it has no say in what stays on the site, or that it is “required” to remove the content. That’s simply false. The law does not require it, though it does create strong incentives for removal, by offering up a liability safe harbor for those that do remove. But companies are free to reject takedown notices if they don’t believe they are legit. It’s just that they might have to later defend that decision in court.

For what it’s worth, Teepublic is owned by RedBubble, and RedBubble has been taken to court many times over bogus claims of infringement. Indeed, I was an expert witness for them in past cases, so I know that the company has lawyers on staff who know full well that they can push back against bogus takedown claims. But also, I recognize that having fought out some expensive cases in court, they may take a much more “just pull it down so we don’t have to pay more lawyers” approach.

Going through the Lumen Database for takedowns using the Luigi Mangione name, I see that there are a bunch. Though, many of them seem to be people who made other stylized designs of Mangione and are mad that others have put them on t-shirts and hoodies. I question how many of the senders have significant copyright claims in designs like the following:

As we’ve been pointing out for decades, copyright is one of the very few tools in the toolbox that allows anyone to legally demand content be removed from the internet, and companies feel strongly compelled to do so.

Whether or not UHC is actually abusing copyright law this way, it’s clear that someone out there is, and that’s a very problematic feature of copyright law. The assumption that anything listed in a takedown notice is infringing, and the corresponding heavy-handed pressure to remove the content or face huge potential penalties, again reminds us why the DMCA is very questionable on First Amendment grounds.

The fact that someone is abusing it in this particular case is just a reminder of that, even if it’s not actually UHC doing the abusing.

Posted on Techdirt - 23 December 2024 @ 11:03am

Ohio Steps Up To Defend Free Speech As Congress Dithers On Anti-SLAPP Law

While Congress still can’t get its act together to pass an anti-SLAPP law, the Ohio legislature has stepped up and done so in the Buckeye state. Most of the reporting on this has noted that it passed unanimously, and the expectation was that Governor Mike DeWine would sign it, though his big list of bills signed late last week did not include this one.

Strategic Lawsuits Against Public Participation (SLAPPs) are frivolous lawsuits intended to silence critics by burdening them with legal costs. Anti-SLAPP laws allow such suits to be quickly dismissed, often with “fee shifting” such that the bringer of such a vexatious lawsuit also has to pay the legal bills of the wrongly sued defendant.

The bill itself, SB 237, looks decent enough. It appears to be modeled on the Uniform Law Commission’s anti-SLAPP model law, which is very strong. Other anti-SLAPP laws based on that model law have been passed in a handful of states and it’s becoming more of a standard for these types of laws. Just this year alone, Pennsylvania, Maine, and Minnesota have enacted similar anti-SLAPP laws based on the ULC model.

But still, without a federal anti-SLAPP law, there is always the risk that the law won’t apply in federal court. Federal Circuits have been all over the map in deciding if state anti-SLAPP laws can apply. The First and Ninth Circuits have said that you can use state anti-SLAPP laws in federal court, while at last check, I believe the Second, Fifth, Tenth, Eleventh, and DC Circuits have all said you cannot (though this has changed over time, as I believe the Second and Fifth Circuits flipped positions from “allow” to “don’t allow” at some point).

Ohio is in the Sixth Circuit and in poking around, I don’t see that the Sixth Circuit has weighed in on this yet, so it may remain an open question.

Ohio’s passage of SB 237 is an important victory for free speech at a time when some politicians and public figures are eagerly seeking to expand defamation laws to silence critics. While questions remain about its applicability in federal court and the lack of a federal anti-SLAPP standard, this law should provide crucial protections for Ohioans against frivolous lawsuits intended to chill public participation (assuming DeWine gets around to signing it). Hopefully more states, and eventually Congress, will follow suit in defending the First Amendment rights of all Americans.

Indeed, with a bunch of new states passing state anti-SLAPP laws, this could actually present a real bipartisan win for free speech to pass a federal anti-SLAPP law. We just need the folks leading the Republican party to stop wanting to sue every minor critic.

Posted on Techdirt - 20 December 2024 @ 03:36pm

Ctrl-Alt-Speech: How The Online Regulators Stole Christmas

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. While Online Regulators may have stolen Christmas, Ctrl-Alt-Speech is going to try to take a short holiday break and will return in early January.

Posted on Techdirt - 20 December 2024 @ 01:57pm

Success! One Billion Users Will Go Into Production (Late Backers Welcome)

At some point, I’ll have time to write more thoroughly about how the wonderful and supportive Bluesky community effectively willed One Billion Users over the funding threshold, but it’s quite amazing. We were around 50% of the funding threshold just a few days before the campaign was set to close (which generally means the campaign won’t reach its goal), but then a bunch of users on Bluesky really embraced the project, urging others to support it, and it quickly became a community celebration.

Around 2pm on Thursday, with just 10 hours to spare, we passed the $50k funding threshold and have since gone on to raise another $10k in support beyond that.

We had always set the bar exceptionally high for a card game, knowing that we didn’t want to go into production unless there really was significant demand for the game. Unlike many projects, we really do treat Kickstarter as valuable not just as a crowdfunding tool, but as market research. It’s a tool that tells you whether or not there’s a big enough market for the product you want to make, which is tremendously useful compared to discovering the market isn’t there after you’ve already manufactured a product.

For a little while, it appeared that maybe our game might have been more of a “lessons learned” situation, but the final rush and the amazing, vocal support from so many on Bluesky was really heartening and inspiring (and a reminder of the good and fun side of social media that often feels lacking these days).

If you backed the campaign, we sent an update earlier today with details. If you didn’t back the campaign and are now regretting it, we’re testing out a new feature on Kickstarter that lets you make “late pledges” after the official campaign ends. We’ll keep that open for a little while longer to catch any procrastinators and stragglers, before shutting it down when it’s time to finalize our production order.

Don’t miss your chance to get in on the fun!

Posted on Techdirt - 20 December 2024 @ 12:11pm

Fifth Circuit: Salesforce Can’t Use Section 230 To Get Out Of Sex Trafficking Case, Because It Only Provided CRM Software, Not Content Moderation

A second appeals court has now said that Section 230 doesn’t protect Salesforce, the online software giant, from being held liable for sex trafficking, because Backpage… used Salesforce’s software.

If all of this sounds a bit crazy, buckle up. First, you need to understand the background here, before we can get into the details of this case. It first starts with the inaccurate narrative about Backpage.

As many folks here now recognize, the story about Backpage was grossly misleading. While the narrative pushed by politicians and the media was that the company was engaged in sex trafficking, the DOJ had actually commended the company for being a strong partner in the fight against sex trafficking. Indeed, the eventual takedown of Backpage days before FOSTA went into effect (a law we were told was necessary to shut down Backpage) has been shown to have made life more difficult for law enforcement trying to stop sex trafficking.

However, when law enforcement started demanding that Backpage help them stop non-trafficking, consensual sex work, the owners felt that went too far. Also, despite many years and many trials, prosecutors have never been able to pin sex trafficking on Backpage, though they did get guilty verdicts on relatively minor charges, including a very trumped-up money laundering charge based on how founder Michael Lacey moved some money around.

Either way, the exaggerated panic about Backpage, combined with the passage of FOSTA, kicked off a bunch of vexatious lawsuits, including a few against online software giant Salesforce, arguing that because Salesforce provided CRM software to Backpage, it should also be liable for any trafficking that occurred on the platform.

Salesforce tried using Section 230 to get out of these cases, noting that it can’t be held liable for content on Backpage (a site it didn’t run or host or anything, really). This was slightly ironic because the original arguments happened at the same time that Salesforce’s founder and CEO Marc Benioff was saying we needed to “abolish Section 230” because he was mad at disinformation on Facebook.

I guess maybe he doesn’t want 230 around because the courts keep rejecting his argument. Two years ago, the Seventh Circuit rejected this argument in a very poorly argued decision which appeared to go against multiple other circuits. But now it has company. In a nearly identical case, the Fifth Circuit has again rejected Salesforce’s 230 arguments.

Though it does so in a manner that basically highlights the value and importance of Section 230 in dismissing stupid, vexatious cases early. Salesforce argued that because the only conduct on Backpage at issue here was third-party ads, 230 should protect it from liability. But the Fifth Circuit points out that 230 only protects intermediaries for publishing-related activities involving third-party content, and says the plaintiffs aren’t seeking to hold Salesforce liable for that. Instead, they’re seeking to hold it liable for providing back-office software:

Plaintiffs’ claims do not seek to hold Salesforce liable for failing to moderate content or any other functions traditionally associated with a publisher’s role. See id. at 419–20. Rather, Plaintiffs seek to hold Salesforce liable for allegedly providing back-office business services to a company it knew (or should have known) was engaged in sex trafficking. These claims would not inherently require Salesforce, if found liable, to exercise any functions associated with publication.

This is pretty nonsensical, because if taken to its logical conclusion it would mean (1) Backpage would be protected under Section 230, since it performed traditional publishing functions, but (2) the even further removed Salesforce is not protected, because it was providing back-office software rather than handling things like content moderation. If Salesforce’s tools had been used for content moderation, then it would be protected. Really.

The court’s reasoning turns Section 230 on its head by suggesting that companies more directly involved in publishing user content have greater immunity than those providing backend infrastructure. This arbitrary line-drawing ignores the reality that the modern internet depends on a complex ecosystem of service providers, all of whom could now face increased legal exposure.

The summary judgment evidence confirms this account, demonstrating that Plaintiffs do not seek liability for any publication-related functions. The evidence shows that Salesforce did not have any role in:

• screening, monitoring, or filtering content;

• reviewing or analyzing third-party content;

• transmitting or hosting third-party content;

• editing or altering third-party content;

• developing or enforcing content-moderation policies; or

• deciding how third-party content was organized or displayed.

In short, the argument here is that Section 230 protects secondary liability but not tertiary liability.

Which is very, very stupid.

The court’s reasoning opens the door to a ton of vexatious litigation against a wide range of service providers for the misdeeds of their customers’ users, as long as those providers are not directly involved in publishing activities. Under this flawed standard, it’s not hard to imagine a wide range of service providers facing lawsuits for how their customers use their offerings. Infrastructure companies like Cloudflare, payment processors like Stripe, cybersecurity firms, and more could all be dragged into court, forcing them to police their customers’ activity or face ruinous legal costs.

The result would be a chilling effect on innovation and free speech online, as companies become increasingly risk-averse in the services they provide and who they provide it to.

Now, you could argue (and I’m sure some people will) that the liability chain between the trafficking and Salesforce is so attenuated that even without Section 230, Salesforce is almost certain to win the case in the long run. The Court does note that Salesforce may still win in the end:

In deciding the section-230-immunity question, we say nothing about the underlying merits of this dispute. Although section 230 does not immunize Salesforce, that does not necessarily mean that Salesforce is liable. Immunity and liability are distinct. The question of whether Salesforce is liable to Plaintiffs because it knowingly benefitted from participation in a sex-trafficking venture is not before our court and remains to be answered.

And it would be crazy if Salesforce were actually found liable. Under the Supreme Court’s ruling in Twitter v. Taamneh (the companion case to Gonzalez), you sorta have to show that the company knowingly and substantially participated in the law-violating activity, not just incidentally allowed someone to violate the law.

And there’s no indication at all of that here. The plaintiffs insist that Salesforce knew or should have known that Backpage was engaged in sex trafficking, but given that prosecutors couldn’t even convict Backpage’s founders of that, it’s pretty crazy to think that Salesforce should be liable for something the company itself wasn’t found liable for… just because it provided back-office software.

But, to get to that point, the case will have to go on much longer, requiring a lot more resources, time, and expense.

And that’s what Section 230 was designed to prevent. It’s designed to kick out cases quickly when someone is trying to pin liability on the wrong party.

Except, according to the judges of the Fifth Circuit, apparently that only applies to the first party in the chain. If you go further down the list, it no longer applies. This misguided ruling sets a troubling precedent that could have far-reaching consequences for online service providers and the internet as a whole. It’s a blow to Section 230 and the critical protections it provides.

And, it’s yet another reason that the Supreme Court is, unfortunately, finally going to have to confront Section 230 at some point soon. It’s been avoiding taking 230 head on for a few years now, and I fear what will come out of the court when it actually has to interpret the law. But this interpretation seems nutty.

Posted on Techdirt - 20 December 2024 @ 09:30am

Death Of A Forum: How The UK’s Online Safety Act Is Killing Communities

We’ve been warning for years that the UK’s Online Safety Act would be a disaster for the open internet. Its supporters accused us of exaggerating, or “shilling” for Big Tech. But as we’ve long argued, while tech giants like Facebook and Google might be able to shoulder the law’s immense regulatory burdens, smaller sites would crumble.

Well, it’s already happening.

On Monday, the London Fixed Gear and Single-Speed (LFGSS) online forum announced that it would be shutting down the day before the Online Safety Act goes into effect. It noted that it is effectively impossible to comply with the law. This was in response to UK regulator Ofcom telling online businesses that they need to start complying.

This includes registering a “senior person” with Ofcom who will be held accountable should Ofcom decide your site isn’t safe enough. It also means that moderation teams need to be fully staffed with quick response times if bad (loosely defined) content is found on the site. On top of that, sites need to take proactive measures to protect children.

While all of this may make sense for larger sites, it’s impossible for a small one-person passion project forum for bikers in London. For a small, community-driven forum, these requirements are not just burdensome, but existential.

LFGSS points out that the rules are designed for big companies, not small forums, even as it’s likely covered by the law:

we’re done… we fall firmly into scope, and I have no way to dodge it. The act is too broad, and it doesn’t matter that there’s never been an instance of any of the proclaimed things that this act protects adults, children and vulnerable people from… the very broad language and the fact that I’m based in the UK means we’re covered.

The act simply does not care that this site and platform is run by an individual, and that I do so philanthropically without any profit motive (typically losing money), nor that the site exists to reduce social loneliness, reduce suicide rates, help build meaningful communities that enrich life.

The act only cares that is it “linked to the UK” (by me being involved as a UK native and resident, by you being a UK based user), and that users can talk to other users… that’s it, that’s the scope.

I can’t afford what is likely tens of thousand to go through all the legal hoops here over a prolonged period of time, the site itself barely gets a few hundred in donations each month and costs a little more to run… this is not a venture that can afford compliance costs… and if we did, what remains is a disproportionately high personal liability for me, and one that could easily be weaponised by disgruntled people who are banned for their egregious behaviour (in the years running fora I’ve been signed up to porn sites, stalked IRL and online, subject to death threats, had fake copyright takedown notices, an attempt to delete the domain name with ICANN… all from those whom I’ve moderated to protect community members)… I do not see an alternative to shuttering it.

The conclusion I have to make is that we’re done… Microcosm, LFGSS, the many other communities running on this platform… the risk to me personally is too high, and so I will need to shutter them all.

But it’s not just LFGSS that’s shutting down. So is Microcosm, the open-source forum platform underlying LFGSS, which was apparently created by the same individual and offered similar local community forums to others well beyond the London biking community.

Apparently, Microcosm is hosting approximately 300 small communities, all of which will either shut down or have to migrate within three months. The developer behind all of this seems understandably devastated:

It’s been a good run, I’ve administered internet forums since 1996 having first written my own in Perl to help fans of music bands to connect with each other, and I then contributed to PHP forum software like vBulletin, Vanilla, and phpBB, before finally writing a platform in Go that made it cost efficient enough to bring interest based communities to so many others, and expand the social good that comes from people being connected to people.

Approximately 28 years and 9 months of providing almost 500 forums in total to what is likely a half a million people in that time frame… the impact that these forums have had on the lives of so many cannot be understated.

The peak of the forums has been the last 5 years, we’ve plateaued around 275k monthly users across the almost 300 websites on multiple instances of the platform that is Microcosm, though LFGSS as a single community probably peaked in the 2013-2018 time period when it alone was hitting numbers in excess of 50k monthly users.

The forums have delivered marriages, births, support for those who have passed (cancer being the biggest reason), people reunited with stolen bikes, travel support, work support, so much joy and happiness and memorable experiences… but it’s also been directly cited by many as being the reason that they are here today, the reason they didn’t commit suicide or self-harm. It’s help people get through awful relationship breakups, and helped people overcome incredible challenges with their health.

It’s devastating to just… turn it off… but this is what the Act forces a sole individual running so many social websites for a public good to do.

This is why we’ve spent years warning people. When you regulate the internet as if it’s all just Facebook, all that will be left is Facebook.

Policymakers have repeatedly brushed off warnings about these consequences, insisting that concerns are overblown or merely fear-mongering from big tech companies looking to avoid regulation. But it’s not. And we’re seeing the impact already.

The promise of the internet was supposed to be that it allowed anyone to set up whatever they wanted online, whether it’s a blog or a small forum. The UK has decided that the only forums that should remain online are those run by the largest companies in the world.

Some might still argue that this law is “making the internet safer,” but it sure seems to be destroying smaller online communities that many people relied on.

It may be too late for the UK, but one would hope that other countries (and states) realize this and step back from the ledge of passing similar legislation.
