Mike Masnick's Techdirt Profile

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at www.twitter.com/mmasnick

Posted on Techdirt - 12 August 2022 @ 01:45pm

California Legislature Kills Ridiculous ‘Social Media Addiction’ Bill, But Allows Other Bad Bills To Move Forward

On Thursday, the California Senate’s appropriations committee was set to review a collection of anti-internet bills to see which ones should move forward. It decided to drop one that was particularly terrible: AB 2408, which would have allowed basically any California prosecutor (local or state-level) to sue companies for “addicting” kids (with addiction being extremely loosely defined). This was a very silly bill that would have resulted in all sorts of frivolous litigation over basically any feature of social media, based on the extremely faulty belief that social media is designed to be “addictive.” The decision not to move forward with the bill is about the only good news here. Even as some foolishly whine about that decision, had the bill moved forward, it would have been a fundamental disaster for the internet.

Unfortunately, the committee did decide to move forward with three other questionable and dangerous bills. This includes AB 2273, the “Age Appropriate Design Code,” which has all sorts of problems, as we’ve detailed. It also approved a censorship bill (wrapped up in a “transparency” bill), AB 587. As we explained just recently, in the name of trying to stop disinformation online, this bill will actually make it much, much more difficult for websites to respond and react to bad actors seeking to exploit the system. You’d think that would concern legislators, but from what we’ve heard, they simply don’t care. They want to punish internet companies for being bad, and this is the hammer they have. Finally, it also approved AB 2879, yet another bill that tries to effectively outlaw “bad people” online. With so many bad bills happening at once (not just in California), we didn’t even get the chance to dig into the problems with this one, but suffice it to say, it’s another of these “for the children” bills written by people who clearly have no understanding of how either technology or human nature works.

As NetChoice (which sued and won over Texas’ and Florida’s content moderation laws) notes, California seems to be setting itself up for a similar lawsuit should Governor Gavin Newsom sign these bills into law:

“California has been a leader in technology development, but today’s actions would give innovators yet another reason to leave the Golden State to avoid overly burdensome regulation that harms families and violates the First Amendment.”

“It’s surprising that California chose to copy Florida and Texas’ laws which courts have already found unconstitutional. California families, not technology companies, are the ones who will truly bear the burden of today’s proposals.”

As we’ve been noting for a while now, moral-panic-style hatred of the internet, combined with ridiculously unconstitutional laws meant to punish those companies, is not a partisan issue. Both parties seem willing to pass these kinds of laws, though with different focuses. In both cases, though, the laws are unconstitutional. They attack companies for 1st Amendment-protected speech, not to mention having clueless bureaucrats insert themselves into the product design process of something they simply don’t understand.

It’s not good for competition, it’s not good for innovation, it’s not good for freedom of expression, and (contrary to the silly claims of the bills’ supporters) it’s not good for the children.

Posted on Techdirt - 12 August 2022 @ 12:09pm

Data Privacy Matters: Facebook Expands Encryption Just After Facebook Messages (Obtained Via Search Warrant) Used To Charge Teen For Abortion

In the wake of the Dobbs decision overturning Roe v. Wade, there has been plenty of attention paid to the kinds of data that companies keep on us, and how they could be exposed, including to law enforcement. Many internet companies seemed somewhat taken by surprise regarding all of this, which is a bit ridiculous, given that (1) they had plenty of time to prepare for this sort of thing, and (2) it’s not like plenty of us haven’t been warning companies about the privacy problems of having too much data.

Anyway, this week, a story broke that is re-raising many of these concerns, as it’s come out that a teenager in Nebraska has been charged with an illegal abortion, after Meta turned over messages on Facebook Messenger pursuant to a search warrant, which was approved following an affidavit from Norfolk Police Detective Ben McBride.

This is raising all sorts of alarms, for all sorts of good reasons. While many are blaming Meta, that’s somewhat misplaced. As the company notes (and as you can confirm by looking at the linked documents above), the search warrant that was sent to the company said it was an investigation into the illegal burning and burial of a stillborn infant, not something to do with abortion. Given that, it’s not difficult to see why Meta provided the information requested.

Of course, there’s a bigger question here: why should Meta even have access to that information in the first place? And, it appears that Meta agrees. Just days after this all came out, the company announced that it is (finally) testing a more fully encrypted version of Messenger (something the company has been talking about for a while, but which has proven more complicated to implement). The new features include encrypted backups of messages, as well as making end-to-end encrypted chats the default for some users.

While the timing is almost certainly a coincidence, many observers are making the obvious connection to this story.

While the Nebraska story is horrifying in many ways, it’s also a reminder of why full end-to-end encryption is so incredibly important, and how leaving unencrypted data with third parties means your data is always, inherently at risk.

Arguably, Facebook should have encrypted its messaging years ago, but it’s been a struggle for a variety of reasons, as Casey Newton laid out in a fascinating piece. Facebook has certainly faced technical challenges and (perhaps more importantly) significant political pushback from governments and law enforcement who like the ability to snoop on everyone.

But part of the problem is also the end users themselves:

The first is that end-to-end encryption can be a pain to use. This is often the tradeoff we make in exchange for more security, of course. But average people may be less inclined to use a messaging app that requires them to set a PIN to restore old messages, or displays information about the security of their messages that they find confusing or off-putting.

The second, related challenge is that most people don’t know what end-to-end encryption is. Or, if they’ve heard of it, they might not be able to distinguish it from other, less secure forms of encryption. Gmail, among many other platforms, encrypts messages only when a message is in transit between Google’s servers and your device. This is known as transport layer security, and it offers most users good protection, but Google — or law enforcement — can still read the contents of your messages.

Meta’s user research has shown that people grow concerned when you tell them you’re adding end-to-end encryption, one employee told me, because it scares them that the company might have been reading their messages before now. Users also sometimes assume new features are added for Meta’s benefit, rather than their own — that’s one reason the company labeled the stored-message feature “secure storage,” rather than “automatic backups,” so as to emphasize security in the branding.
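To make the distinction Newton describes concrete, here’s a minimal, illustrative sketch — assuming the PyNaCl library (“pip install pynacl”), and in no way Meta’s actual implementation. Under the transport-layer model, the provider ends up holding readable plaintext, which is exactly what a search warrant can reach; under the end-to-end model, the provider only ever relays ciphertext it cannot decrypt.

# A minimal sketch of transport-layer vs. end-to-end encryption.
# Assumes PyNaCl ("pip install pynacl"); purely illustrative, not Meta's code.
from nacl.public import PrivateKey, Box

# Transport-layer model (Gmail-style): TLS protects the message on the wire,
# but the provider decrypts it on arrival and stores readable plaintext --
# which is what can be produced in response to a warrant.
provider_copy_under_tls_model = b"meet me at 8"

# End-to-end model: only the sender and recipient hold private keys.
alice = PrivateKey.generate()   # sender's keypair
bob = PrivateKey.generate()     # recipient's keypair

ciphertext = Box(alice, bob.public_key).encrypt(b"meet me at 8")

# The provider only ever sees (and can only ever hand over) `ciphertext`.
# Reading it requires one of the endpoints' private keys:
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet me at 8"

The Nebraska case sits entirely in the first model: the messages lived on Meta’s servers in readable form, so they were there to be handed over.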

It’s also interesting to note that Casey’s piece says that Meta’s user survey found that most of their users don’t think encrypting their own data is that much of a priority, as they’re just not that concerned. This does not surprise me at all — as we’ve now had decades of revealed preferences that show that, contrary to what many in the media suggest, most people don’t actually care that much about their privacy.

And, yet, as this story shows, they really should. But if we’ve learned anything over the past couple of decades, it’s that no amount of horror stories about revealed data will convince the majority of people to take proactive steps to better secure their data. So, on that front, it’s actually a positive move that Meta is pushing forward with effectively moving people over to fully encrypted messaging — hopefully in a user-friendly manner.

Data privacy does matter, and the answer has to come from making encryption widely available in a consumer-friendly form — even when that’s a really difficult challenge. Laws are not going to protect privacy. Remember, governments seem more interested in banning or breaking end-to-end encryption than in encouraging it. And while perhaps the rise of data abuse post-Dobbs will expand the number of people who proactively seek out encryption and take their own data privacy more seriously, history has shown that most people will still take the most convenient way forward.

And that means it’s actually good news that Facebook is finally moving forward with efforts to make that most convenient path… still end-to-end encrypted.

Posted on Techdirt - 12 August 2022 @ 09:27am

Federal Election Commission Makes The Right Call Allowing A Dumb Program By Google To Whitelist Political Spam Into Your Inbox

Over the last few months, Republican politicians have been working on a nonsense plan to force their spam into your inboxes. This kicked off following some Republican operatives misunderstanding (whether through their own cluelessness, or on purpose) a study about political spam and how different email providers deal with it. Since then, Republicans have been screaming about how Google is trying to silence their campaign emails — even though their emails tend to be a lot more spammy. And then you have GOP digital marketing people being so clueless that they misconfigure their email settings, and blame Google for it, rather than realizing it was their own fault (the party of personal responsibility is no longer, it seems).

Anyway, faced with so much misplaced anger over all this, Google caved, and introduced a pilot program to whitelist political spam. It requested that the Federal Election Commission bless the program to make sure that it was not deemed an unauthorized in-kind political contribution. Like any such request, this was opened to public comment, and the public absolutely hated the idea. I mean, really, really, really hated it.

It turned into one of the most active items ever on the FEC’s docket, with over 2,600 comments on the initial proposal and another 100 on the draft opinions the FEC released (one in favor of the program, and one rejecting it). Almost all of them spoke out against political spam and asked the FEC to reject the program.

Except, of course, the petition was not about whether or not political spam is good or bad, or whether or not Google’s whitelisting plan was good or bad. It was just about whether or not it constituted an in-kind contribution that would trigger campaign finance laws. And there’s really no reasonable way to argue that it should trigger such laws. And, so, the FEC has (quite reluctantly) given its blessing to the program.

This is the right call, legally speaking. This kind of service shouldn’t be seen as an in-kind contribution, and it sounds like all but one of the FEC commissioners realized that. Ellen Weintraub disagreed, calling it an in-kind contribution, and put forward the draft opinion rejecting the program. Weintraub argued that this kind of thing — avoiding spam filters — seems like something of value that lots of others would want, and that offering it only to campaigns suggests political actors are getting something special, which (to her) is an in-kind contribution. Google responded by noting that the program is offered equally to any campaign, and that the in-kind contribution question tends to focus on attempts to influence an election one way or the other — which this is not designed to do.

Other commissioners noted, correctly, that even as they disliked the very idea of the program, there was no legal basis to block it. You can see the discussion in the video below, starting at 5 minutes and 40 seconds.

It’s actually pretty interesting to watch the discussion. Commissioners repeatedly try to dig down on just why Google is doing this, and even ask Google’s lawyer directly if it’s in response to Republicans whining about this. The Google lawyer diplomatically tap dances around that, even though that’s obviously what’s going on here.

Since Google keeps insisting it’s trying this pilot program for commercial, not political, reasons, one commissioner asks Google’s lawyer if she’s aware of the universal anger in the comments on the program — leading her to note that they’re paying attention to all sorts of feedback, and that’s why this is a “pilot” program: to see how users actually like it.

Of course, that all feels like a smokescreen. Google is doing this to try to calm down technically ignorant, but very angry, Republicans. Of course, saying that then makes this feel more like a political move — not necessarily to benefit one party, but to stop it from attacking the company so much (not that that will actually work).

It would be nice if people could just admit that the Republicans pushing for all this are a bunch of tech-clueless children, but apparently that’s not allowed to be part of the discussion.

One commissioner, Dara Lindenbaum, who only recently joined the Commission, noted that she was supporting the approval of the pilot program even though “I don’t want to, and it is for the same reason all the commenters don’t want to.” But, as she notes, the precedents all support the program, and she (rightly) fears that rejecting it could hinder future innovations and pilot programs for politicians that would be useful. So, even though this program is about spam — and people rightly have a negative feeling about spam — that’s not really the issue for the FEC to decide. Rather, the question is whether or not this program is a problematic in-kind contribution, and no one but Weintraub could see how it could be viewed as one.

Again, this is annoying, but it’s right from a legal standpoint. One hopes that Google quickly discovers just how much users loathe this program (and political spam generally), and decides not to expand the program, but to shutter it.

But, congrats to the technically clueless Republicans out there who are forcing more spam into everyone’s inboxes. I hope that Democrats start campaigning on just how much you want to seize people’s inboxes for your annoying spam.

Posted on Techdirt - 11 August 2022 @ 09:38am

Elon Musk’s Legal Filings Against Twitter Show How Little He Actually Cares About Free Speech

I can’t say for certain how Elon Musk’s thought process works, but his progression in how he talks about free speech over the last few months through this Twitter ordeal certainly provides some hints. When he first announced his intention to buy Twitter, he talked about how important free speech was, and how that was a key reason for why he was looking to take over the company. Here he was talking to TED’s Chris Anderson:

Well, I think it’s really important for there to be an inclusive arena for free speech. Twitter has become the de facto town square, so, it’s really important that people have both the reality and the perception that they’re able to speak freely….

Later in that same conversation, however, as Anderson pushed him a bit on the limits to free speech (including the terrible “fire in a crowded theater” example), Musk suggested that countries’ laws should define free speech — or at least the U.S.’s should:

Well, I think, obviously Twitter or any forum is bound by the laws of the country it operates in. So, obviously there are some limitations on free speech in the US. And of course, Twitter would have to abide by those rules.

Which, fair enough, is an accurate statement. But it raises questions about when you’re willing to push back against a government for trying to strip free speech rights. And there, Musk got pretty wishy-washy, and basically said he’d follow the law anywhere.

Like I said, my preference is to hew close to the laws of countries in which Twitter operates. If the citizens want something banned, then pass a law to do so, otherwise it should be allowed.

Except, that’s not supporting free speech — for a wide variety of reasons, including that many countries on earth are not democracies in the first place. And even those that are have long established histories of passing laws that suppress free speech. Standing up for free speech means standing up to the government in support of free speech.

And, as we’ve noted, Twitter actually has a very long history of standing up against governments when they seek to suppress free speech, while Musk has… what?… a history of supporting censorial laws.

This has become even clearer with the recent counterclaims Musk filed against Twitter in their ongoing legal fight. We already covered a bunch of things in the filing, and how disconnected from reality they seem to be, but for this post I want to focus on one aspect of Musk’s narrative and counterclaims.

In the filing, he seems particularly mad that Twitter is suing the Indian government over its Information Technology Rules 2021, which India passed, and then used aggressively, to try to force Twitter to silence critics of the Modi government. The law is blatantly anti-free speech. Twitter, upholding its historical efforts to fight in favor of free speech, has sued to block the law.

Musk is mad about that.

However, on or around July 6, 2022, Twitter launched a legal challenge against India’s government in Court, challenging certain demands made by the Indian Government—suggesting that Twitter was under investigation between the signing of the Merger Agreement and the filing of its legal challenge.

Indeed, just a few paragraphs earlier, Musk admits that his commitment to free speech is literally just to follow the laws of a country — including India’s, which are not free-speech supportive at all.

In 2021, India’s information technology ministry imposed certain rules allowing the government to probe social media posts, demand identifying information, and prosecute companies that refused to comply. While Musk is a proponent of free speech, he believes that moderation on Twitter should “hew close to the laws of countries in which Twitter operates.”

But Musk is mad because he thinks that Twitter actually fighting for free speech in India threatens a very large market.

India is Twitter’s third largest market, and thus any investigation into Twitter that could lead to suspensions or interruptions of service in that market may constitute an MAE.

That doesn’t sound like someone who is supportive of free speech. It also doesn’t sound like someone who (as he claimed) isn’t buying Twitter for the revenue. It sounds like the opposite of that.

Indeed, later in the filing, Musk basically says “why can’t Twitter just suck it up and block people in India, like it’s done in other countries.”

Additionally, in July 2022, Twitter determined to challenge the Indian government in a lawsuit rather than follow its instructions pursuant to 2021 Information Technology rules. In the past, Twitter has followed obligations imposed by governments, including going as far as blocking pro-Ukrainian accounts for the Russian government. Accordingly, its decision to challenge the Indian government’s decisions is a departure from the ordinary course. And while the Musk Parties support free speech, they believe Twitter should follow the laws of the countries in which they operate. Regardless of how the Musk Parties would have decided to proceed, they bargained for the opportunity to understand the issues in the case, perform their own risk assessment, and have a say on strategy.

This is also bullshit. Musk is cherry-picking examples. Twitter has a long history of fighting various government attempts to stifle free speech, and had already pushed back against India’s rules over the past year, prior to the Musk purchase agreement.

Given all of this, once again, it is ridiculous — and completely contradicted by the facts — to argue that Elon Musk “supports free speech.” He says he does, but then he embraces authoritarian governments trying to stifle speech, and even sues Twitter for its actions to actually try to defend free speech. Musk is nowhere near a free speech supporter, and seems actively engaged in trying to help governments suppress speech.

Posted on Techdirt - 10 August 2022 @ 03:43pm

Trump Campaign Releases Everyone Who Signed An NDA About 2016 Campaign, Saying It Will Not Try To Enforce Them

Two years ago we wrote about how a former Trump campaign staffer, Jessica Denson, had sued the Trump campaign, claiming that the non-disclosure agreement she was pressured into signing by the campaign was not enforceable. As we know, Trump loves his non-disclosure agreements. He seems to use them frequently. When you’re a private citizen, or a private corporation, that’s one thing, but when you’re the President of the United States — or the campaign vehicle to elect you to that office — NDAs take on a slightly different feel.

That case has bounced around over the last two years, though back in March of 2021 the court ruled in favor of Denson and said that the NDA was “invalid and unenforceable.” The case still went on, however, in an effort to turn it into a class action to release everyone else who worked on the campaign from their own NDAs. Other former employees sought to intervene and join the case as well.

There was some more back and forth, but this week (perhaps realizing there are some other big legal issues on the horizon), the Trump campaign officially announced that it was releasing everyone from their NDAs. That means, anyone who worked for the Trump campaign no longer needs to worry about “breaking their NDA” for talking about what went on.

Trump’s campaign organization (now the “Make America Great Again PAC”) issued a declaration by the PAC’s treasurer that it will not enforce any NDAs.

The Campaign hereby avows that it shall not ever enforce or attempt to enforce any confidentiality or non-disparagement provisions contained in any written agreements signed by any employees, independent contractors, or volunteers who worked for the Campaign on the 2016 Presidential Election.

Of course, in theory, that leaves it open to trying to enforce such agreements against those who worked on the 2020 election (or 2024 election if it gets to that, or just for the PAC these days), but still, it’s a start.

The PAC also filed a “sample letter” releasing former staffers from their NDA.

We understand that you signed an NDA in connection with your work for the Campaign during the 2016 Presidential Election.

We are writing to advise you that you are no longer bound by the confidentiality and non-disparagement provisions in your NDA. The Campaign has determined that it will not enforce these provisions.

Chalk another one up for actual free speech.

Of course, I wonder if we’ll now see a flood of news stories about the 2016 campaign, as staffers and volunteers finally feel comfortable revealing what else went on.

Posted on Techdirt - 10 August 2022 @ 12:01pm

The ‘Institute For Free Speech’ Seems Confused About Free Speech Online

There’s a very strange opinion piece over at The Hill by the chair of something called The Institute for Free Speech, Bradley Smith, basically arguing that because courts are finding that websites are protected by Section 230 while moderating in ways that some people (read: him and his friends) don’t like… Congress may take away Section 230, and the way to avoid that is for sites to stop moderating content that some people (again: him and his friends) don’t like… even though they have a 1st Amendment right to do so.

The piece starts out by talking about the very good 11th Circuit decision calling Florida’s social media bill unconstitutional, along with the Supreme Court’s decision to reinstate a lower court ruling blocking Texas’ similar law from going into effect. But he uses these rulings as a jumping off point to argue that they will cause Congress to remove Section 230.

Within these victories, however, lie the seeds of disaster for the platforms — the possible repeal, or substantial alteration, of Section 230 of the Communications Decency Act of 1996.

I mean, it’s possible, though I’m not sure it would be because of those two rulings. There is bipartisan hatred of Section 230, but generally for opposite reasons, so rulings in any direction these days may cause an eager Congress to try to do something. But given that the 11th Circuit decision was based around the 1st Amendment, and barely touched on Section 230, it’s weird to call out Section 230 as the issue.

The key provision of Section 230, which has been dubbed “the twenty-six words that created the internet” by cybersecurity law professor Jeff Kosseff, shields companies from liability for what others post on online platforms. Traditional publishers such as newspapers, by contrast, can be sued for what they allow in their pages.

It’s always weird when people cite Jeff’s book when it’s clear they haven’t read any of it. So, at the very least, I’d recommend that Smith actually take the time to read Jeff’s book, because it would debunk some other nonsense he has in his piece.

Section 230 was never meant as a gift to Big Tech, which could hardly be said to exist in 1996. Rather, it protected the nascent internet from being crushed by lawsuits or swamped with “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” speech. Congress wanted companies to be able to exercise editorial control over that sort of content without becoming liable for everything else users post on their platforms.

First off, Section 230 was passed in response to two cases: one involving CompuServe (at the time owned by H&R Block, which was a pretty big company at the time), and one involving Prodigy (at the time owned by IBM and Sears Roebuck, also pretty large companies). So this idea that it was to protect “nascent” industries has always struck me as ahistorical.

Second, that summary of what Congress “wanted” also seems to only get a part of the story, and not the full picture. As the authors of Section 230 have stated repeatedly, the point of Section 230 wasn’t just to keep websites from being crushed in this manner, but rather to let them create the kinds of communities they wanted, without fear of having to face litigation over every editorial decision. That is, it is designed as a procedural booster to the 1st Amendment — a kind of anti-SLAPP law to get rid of frivolous litigation quickly.

And, of course it was never meant to be “a gift to big tech” because it was never about the tech at all. It was meant to be a gift to free speech. That’s why it is focused on (1) protecting sites that host user content and (2) protecting those users as well (something that most critics of 230 ignore).

Smith then does correctly note that if websites had to carefully screen all content, it would basically be impossible and would create a mess, and he notes how much 230 has helped build the modern internet and enable people to communicate… but then the piece goes off the rails quickly. He suggests that “big tech” is somehow abusing Section 230.

The question now is: What happens when Big Tech decides it doesn’t want to let everyone speak freely?

Except, no, we already answered that question with Section 230 and the 1st Amendment much earlier: nothing happens. Companies are free to set their own editorial rules and people are free to use or not use the service based on those, and if you break the rules, the services are free to respond to that rule breaking. That’s exactly what 230 intended. There’s no further question that really needs to be asked or answered. But Smith thinks otherwise.

The major platforms censor users for purposes that Congress never considered or intended in 1996. Section 230 identifies only speech belonging to the categories above as appropriate for removal.

Except… this is not true. Congress absolutely considered and intended this in 1996, again, according to the very authors of Section 230. The entire intent was to allow websites to determine for themselves what kind of community they wanted, what to allow, and what not to allow, without fear of having to litigate every decision. As the authors of 230 have noted, a forum discussing Republican politics shouldn’t be forced to also host Democratic talking points, and vice versa.

The line about 230 identifying “only speech belonging to the categories above as appropriate for removal” is hogwash. It’s a myth promoted by people who do not understand the law, or any of the jurisprudence in the last two and a half decades around the law.

More specifically, he’s misreading how the two key subsections of 230, (c)(1) and (c)(2), work. (c)(1) is the key part of Section 230, and it is “the 26 words.” It makes no mention of categories of content. It flat out says that a website cannot be held liable for 3rd party speech. Full stop. And courts have interpreted that (correctly, according to the authors of the law) to mean that there is no liability at all that can be based on third party speech — including around removals of content or other moderation choices.

The categories come in with (c)(2) — which, notably, is not part of the 26 words. There are actually very few cases exploring (c)(2), because (c)(1) covers almost all of content moderation. But in the rare cases where courts actually do consider (c)(2), they make it clear that the list of items mentioned in (c)(2) should be read broadly and with great discretion towards the right and ability of the website itself to determine what content it wants to allow (or not allow) on its site — because otherwise it would nuke the entire purpose of Section 230 and implicate the 1st Amendment, leading to vexatious litigation over every editorial and moderation decision.

So, Smith, here, seems confused about how Section 230 works, how (c)(1) and (c)(2) work together, and how the list of content that sites can moderate is illustrative, and not comprehensive — and that it needs to be to avoid running afoul of the 1st Amendment.

Smith, however, is sure that the law wasn’t intended to allow websites to take down content he doesn’t like. He’s wrong. It was. He also seems to have been taken in by misleading stories pushed by bad faith actors pretending that the big social media sites are biased against conservatives. He lists out a bunch of out of context examples (I’m not going to go through them now, we’ve debunked them all in the past) without noting how each of those examples actually involved breaking rules the platforms set forth, and how there were examples of those same rules being applied to left-leaning content as well. All of that disproves his theory, but he’s pushing an agenda, not reality.

If the law had intended to bless the removal of any speech that platforms wish to take down, it would say so. It does not.

Except it does. First, it says a platform can’t be held liable for 3rd party content, and courts have correctly (according to the bill’s own authors) interpreted that to mean the removal of their content as well. And, even if you have to rely on (c)(2), the courts say to construe that broadly, and that includes the “otherwise objectionable” part, which courts have correctly said must be based on what the website itself deems objectionable, not some other standard. Because if it wasn’t based on the platform’s opinion of what’s objectionable, it would interfere with the website’s own 1st Amendment editorial rights.

Nevertheless, the platforms now argue that they can block anything they want, at any time, for any reason, and there is nothing any person or state can do about it.

Because that’s correct. Bizarrely, Smith then admits that websites do in fact have a 1st Amendment right to moderate as they see fit. This paragraph is the most confusing one in the piece:

When courts review a platform’s curation of content, they claim a publisher’s First Amendment rights. But when legislatures review their liability for user speech, they suddenly transform into mere conduits deserving of special immunity. However comfortable that arrangement may be for the platforms, it is likely intolerable to Washington.

Yes. Websites have a 1st Amendment right to moderate how they wish to. The Section 230 liability provisions work in concert with the 1st Amendment as a procedural benefit, because having to fully litigate the 1st Amendment issue is long and costly. Section 230’s entire main feature is to say “this is already decided. A website cannot be liable for its moderation decisions, since that would interfere with the 1st Amendment, so therefore, the websites are not liable, kick this lawsuit out of court now.”

Big Tech’s arguments are so extreme as to close the door on virtually any effort to combat its influence over our politics, or to secure fairer treatment for Americans online. If the only option left for Congress is to amend or repeal Section 230, the result could be disastrous for the companies — and dangerous for free speech.

Except… you just admitted that the 1st Amendment already protects these decisions. So why are you now saying that these arguments are “extreme” and put 230 at risk? Are Republican politicians mad that the 1st Amendment allows sites to remove their propaganda and misleading content? Sure. But at the same time, Democrats are mad that websites don’t remove enough of that stuff. So, the entire crux of this article being “stop removing so much content or Congress may remove 230” doesn’t make any sense, because Democrats keep threatening to remove 230 because sites aren’t taking down enough content. Both sides are wrong, but it doesn’t make Smith’s argument make any more sense. He seems to live in such a deep bubble that he doesn’t realize what’s going on. That’s kind of embarrassing for a guy who used to run the Federal Election Commission.

The debate in the courts often plays out by analogy, as the two sides argue over whether social media is more like a newspaper or phone company, parade organizer or shopping mall. The reality, of course, is that they are none of these things exactly. A middle-ground solution might be best for all in the end, but its prospects are rapidly fading. Big Tech can celebrate for now, but they may look back and rue the day.

I mean, there’s a reason why those analogies are used: because people are citing back to relevant cases about newspapers, phone companies, parades, and shopping malls. It’s not like it just came out of the blue. They’re citing precedent.

And what exactly is the “middle ground” you’re suggesting here? Because it sure sounds like you mean that the tech companies shouldn’t be free to exercise their 1st Amendment editorial rights. And that seems like a dumb position to take for “The Institute for Free Speech.”

After I complained about this article on Twitter, Smith responded to me claiming I had misread the article, and presenting a further clarification via a Twitter thread. It does not help.

He’s correct about the role of the 1st Amendment here, as we already noted above, but seems oblivious to the fact that this completely undermines his argument that social media sites cannot or should not moderate content that he personally thinks they should not moderate.

He complains about the “otherwise objectionable” bit, claiming that it “renders the first six categories meaningless.” Except, it does not. Again, he’s already deep in the (c)(2) weeds here, ignoring the much more important (c)(1), but even if we accept his framing that (c)(2) is what matters, he leaves out an important part that comes BEFORE the categories — the phrase “that the provider or user considers to be”:

any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

It’s up to the provider to decide. That’s it. End of story. (Just for clarity’s sake, the “user” part is for moderation decisions made by end users, which 230 also protects — and it’s why 230 protects retweets, for example).

Section 230 is not saying that it only protects those things. It’s saying that the provider gets to decide.

Smith concludes his clarification… with an outright fabrication: the claim that the big tech platforms “claim to be open to all.” That has never been the case. They all have terms of service and they all have rules. And that’s been true since the beginning.

And his final line is bizarre. If platforms have a 1st Amendment right to curate speech on their own sites (and they do), then the only way to make that right real is to get them out of lawsuits early. Which is what Section 230 does. It protects that right by making it procedurally possible to avoid having to go through a full 1st Amendment defense (which is involved and expensive).

Again, this is something you would think that the Institute for Free Speech would understand. And support.

Yet the basic argument here is that, by exercising their own free speech rights in a way that some people, including Brad Smith, don’t like, websites may provoke Congress into removing those rights. That strikes me as a counterproductive position for someone heading a free speech organization to take, but these days very little makes sense any more. Indeed, arguing that “if you don’t make editorial choices the government may like, the government may punish you” amounts to a kind of mafioso threat: “hey, big tech, if you don’t stop taking down the content I like, my friends in Congress may decide to punish you.” What a deeply cynical and ridiculous take for a free speech organization to make.

Of course, as I was finishing this piece, a friend pointed out to me, helpfully, that “The Institute for Free Speech” actually filed an amicus brief in support of Texas’s laughably unconstitutional anti-1st Amendment content moderation bill. So, as with so many of these organizations, the name appears to be the opposite of what they actually do. They’re just your garden variety anti-free speech, anti-1st Amendment authoritarians with a misleading name to cover up their authoritarian thuggish instincts.

Posted on Techdirt - 10 August 2022 @ 09:28am

EU Commissioner Pens Barely Coherent Defense Of Spying On Everyone, For The Children

You may recall back in May we wrote about a batshit crazy proposal out of the EU Commission to “protect the children” by mandating no encryption and full surveillance of all communications. Those behind the proposal would argue it’s not technically surveilling all messages, but all messages have to be surveillable, should the government decide that a company is not doing enough to “mitigate” child sexual abuse material (CSAM).

These are the kinds of “solutions” that very silly politicians come up with when they don’t understand the nature of the problem, nor anything about technology, but feel the need to “do something.” No one denies that CSAM is an issue, but it’s an issue that many experts have been working on for ages, and they recognize that the “easy” solutions that foolish people come up with often have tradeoffs that are actually much, much worse. And that’s absolutely true in this case with this proposal.

It would actually put children at much greater risk by removing their ability to communicate privately — including to alert others that they need help. And the proposal wasn’t just about actual CSAM, but it talked about surveilling messages to detect the more broadly defined “grooming.” Again, there are real problems with grooming that have been widely studied and reported upon, but “grooming” has now become a slur used by Trumpists and others to attack anyone who believes that LGBTQ+ people have the right to exist. In other words, it’s the kind of term that can be abused — and the EU wants to wipe out encryption and enable governments to force internet services to spy on anyone who might be “grooming” despite the fact that these days, that term is almost meaningless.

Thankfully, cooler heads in the EU, namely the EU Data Protection Board and the European Data Protection Supervisor, released a report a few weeks ago absolutely trashing the proposed rule. Well, at least as far as one can “trash” a proposal using typical EU bureaucratic speech:

The EDPB and EDPS stress that the Proposal raises serious concerns regarding the proportionality of the envisaged interference and limitations to the protection of the fundamental rights to privacy and the protection of personal data. In that regard, the EDPB and EDPS point out that procedural safeguards can never fully replace substantive safeguards. A complex system of escalation from risk assessment and mitigation measures to a detection order cannot replace the required clarity of the substantive obligations.

The EDPB and EDPS consider that the Proposal lacks clarity on key elements, such as the notions of “significant risk”. Furthermore, the entities in charge of applying those safeguards, starting with private operators and ending with administrative and/or judicial authorities, enjoy a very broad margin of appreciation, which leads to legal uncertainty on how to balance the rights at stake in each individual case. The EDPB and EDPS stress that the legislator must, when allowing for particularly serious interferences with fundamental rights, provide legal clarity on when and where interferences are allowed. While acknowledging that the legislation cannot be too prescriptive and must leave some flexibility in its practical application, the EDPB and EDPS consider that the Proposal leaves too much room for potential abuse due to the absence of clear substantive norms.

As regards the necessity and proportionality of the envisaged detection measures, the EDPB and EDPS are particularly concerned when it comes to measures envisaged for the detection of unknown child sexual abuse material (‘CSAM’) and solicitation of children (‘grooming’) in interpersonal communication services. Due to their intrusiveness, their probabilistic nature and the error rates associated with such technologies, the EDPB and EDPS consider that the interference created by these measures goes beyond what is necessary and proportionate.

Trust me, in EU bureaucratese, that’s pretty harsh. It’s basically saying this proposal is a dumpster fire that attacks the privacy rights of tons of people, without an understanding of how poorly the scanning technology proposed actually works, and without a real understanding of the nature of the problem — combined with broad and vague terminology that is wide open to abuse.

Now, one of the main backers of the proposal, the European Commissioner for Home Affairs, Ylva Johansson, has responded to the report in an almost incomprehensible blog post. It does not actually address the many detailed technical and legal issues raised by the EDPB report. Instead, it is just a performative “but think of the children” screed.

Sexual abuse of a child is a horrific act. It can destroy people’s lives, their sense of self. When images circulate online for years after, the psychological effects on the person can be catastrophic. This right to not have images circulating, this right to privacy, is entirely absent in the opinion.

Again, no one denies that CSAM is a horrible problem. But what the actual experts are trying to explain is that you don’t solve it by spying on everyone, taking away their privacy rights, and breaking the technology that protects all of us. Johansson insists that there’s no technological issue here because tech platforms will have their choice of which way they wish to destroy encryption and spy on all communications:

The legislation leaves to the provider concerned, the choice of the technologies to be operated to comply effectively with detection orders, provided that the technologies meet the requirements of the Regulation.

Yes, but all of those options are bad. That’s what the EDPB is trying to explain. All of those options involve fundamentally breaking the technology that keeps us secure and keeps our data private. And you’re just saying “eh, it’s no problem to destroy that.” You’re also insisting that destroying the technology that keeps all of us safe will magically keep kids safer, when all of the historical evidence from the people who actually study this stuff says the exact opposite — that it will put them at greater risk and greater danger, because those who are looking to control and abuse them will have even greater control over their lives.

Johansson waves away the privacy concerns again, by noting tech platforms should choose the “least privacy-intrusive” method of destroying privacy. That’s like saying, “our plan to blow up the sun won’t consume the Earth because we’re asking the sun to be blown up with the least planet destroying method available.” If all of those methods destroy the Earth, the problem is with the plan, and providing “options” doesn’t fix anything.

Over and over again, Johansson brushes off the actual detailed concerns about how this proposal will be abused to destroy people’s privacy, by doing the equivalent of saying “but don’t do that part.” Just for example, here’s her response to the dangers of the client-side scanning that would be mandated under this regulation — which creates all sorts of privacy concerns, plus concerns about how that data can and will be misused. Johansson basically says “well, we won’t allow the data to be misused”:

Detection systems are only to be used for the sole purpose of detecting and reporting child sexual abuse, and strict safeguards prevent use for other purposes.

The proposal provides for judicial redress, with both providers and users having a right to challenge any measure affecting them in Court. Users have a right of compensation for any damage that might result from processing under the proposal.

Sounds great in the theoretical world of Perfectistan, that does not exist in reality. Once you enable things like client-side scanning, it becomes way too tempting for everyone, from law enforcement on down, to gradually (and not so gradually) seek to expand access to that data. The idea that you can just say “well, don’t misuse it and put in place strict safeguards” ignores basically all of human and technological history.
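For anyone who hasn’t followed the client-side scanning debate, here’s a minimal, purely illustrative sketch of the architecture being proposed — the match list and the reporting step are hypothetical stand-ins, not any real system. It shows the structural point above: the scan runs on the plaintext, on your device, before encryption ever happens, and whoever controls the match list controls what gets flagged.

# A minimal, illustrative sketch of client-side scanning -- not any vendor's
# real implementation. The match list and the "report" step are hypothetical.
import hashlib

# A set of hashes pushed to the client by the provider or an authority.
# In a real deployment, the user has no way to audit what is on this list.
FLAGGED_HASHES = {hashlib.sha256(b"example-of-flagged-content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    # The check runs on the unencrypted message, on the user's device,
    # BEFORE any end-to-end encryption is applied.
    return hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES

def send(plaintext: bytes) -> str:
    if client_side_scan(plaintext):
        # In the proposal's model, a match gets reported out -- which is
        # exactly the hole this punches in the end-to-end guarantee.
        return "flagged and reported"
    return "encrypted and relayed"   # encryption elided; see earlier sketch

print(send(b"hello"))                        # -> encrypted and relayed
print(send(b"example-of-flagged-content"))   # -> flagged and reported

Nothing in that architecture limits the list to CSAM: adding any other category of content a government dislikes is a one-line change, which is exactly the expansion risk described above.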

Actual “strict safeguards” are that you keep encryption and you don’t allow client-side scanning. The fact that there is judicial redress isn’t that useful for most people. Especially if your data has been leaked or otherwise abused thanks to this process, to then have to go through years of expensive litigation to get “redress” is no answer at all.

Johansson, like way too many EU bureaucrats, seems to think that law enforcement and governments would never abuse their authority to snoop on private messages. This is naïve in the extreme, especially at a time of creeping authoritarianism, including across the EU. Opening up everyone’s private messages is madness. But Johansson isn’t concerned, because the government will have to approve the snooping.

Only where authorities are of the opinion that there is evidence of a significant risk of misuse of a service, and that the reasons for issuing the detection order outweigh negative consequences for the rights and legitimate interests of all parties affected, would they announce their intent to consider a detection order on child sexual abuse material or grooming targeted to the specific risks identified by the service provider.

Look, we’ve been here before. EVERY SINGLE TIME that some sort of mandated access to communications is granted to law enforcement, we’re told it’s only going to be used in special circumstances. And, every single time, it is widely abused.

And, not surprisingly, Johansson cites the ridiculous paper that recently came out from two Government Communications Headquarters (GCHQ) employees pushing for client-side scanning. It’s the same paper that we ripped apart for all its many flaws, but to Johansson, it’s proof that you can do client-side scanning on an encrypted system without violating privacy. That’s wrong. It’s simply untrue.

Incredibly, Johansson concludes her blog post with pure, unadulterated projection:

What frustrates me most is that there is a risk of advocates creating opposition based only on abstract concepts. Abstract notions based on misunderstandings.

That’s the summary of this entire nonsense proposal: it’s all abstract concepts based on misunderstandings about technology, security, privacy, and even how CSAM works and how best to stop it.

It’s great that the EDPB carefully detailed the problems with this proposal. It’s laughable that Johansson’s hand-waving is considered an acceptable response to this ridiculously dangerous proposal.

Posted on Techdirt - 9 August 2022 @ 12:31pm

Teaching Content Moderators About How To Moderate Is Tough, But TikTok’s Partner Using Actual Child Sexual Abuse Material Is Likely Criminal

WTF, TikTok? Time and time again we see TikTok doing weird things regarding content moderation. More than most firms in the space, TikTok often does things that suggest it hasn’t bothered to speak to other experts in trust and safety, and would rather reinvent the wheel… but with terrible, terrible instincts. Apparently that applies to the company’s third-party moderators as well.

Forbes has an astonishing piece claiming that a key third-party company that TikTok uses for moderation, Teleperformance, showed trainees actual images of child sexual abuse material (CSAM) as part of its moderator training. This is so unbelievably stupid that I still almost don’t believe it could possibly be true. Possession of CSAM is a strict liability situation. There are rules for how online service providers handle any CSAM they come across, involving notifying the National Center for Missing & Exploited Children (NCMEC) via its CyberTipline. 18 U.S. Code § 2258A has the details of how a provider that discovers CSAM must handle that content — sending a report to NCMEC, and then preserving the content as evidence for law enforcement.

But also, making damn sure that the content is kept very, very locked up:

A provider preserving materials under this section shall maintain the materials in a secure location and take appropriate steps to limit access by agents or employees of the service to the materials to that access necessary to comply with the requirements of this subsection.

Nowhere in the law do I see anything even approximately suggesting that a company can not only hang onto this material in a non-secure manner, but then show the content to employees as part of training. I mean… I just can’t. How did anyone think this made sense?

I mean, sure, you can concoct a thought process chain that gets you there: we need to train employees, and the best way to train employees is to show them examples on which to train them. But, holy shit, how does no one realize way earlier that YOU DON’T DO THAT with CSAM?! I don’t see how it’s even possible that people didn’t realize how problematic this was. I mean, this paragraph just has me screaming out loud, because how does this happen?

Whitney Turner, who worked for Teleperformance’s TikTok program in El Paso for over a year and departed in 2021, also recalled being shown sexually exploitative imagery of kids as part of her training. Whitney was given access to a shared spreadsheet that she and other former employees told Forbes is filled with material determined to be violative of TikTok’s community guidelines, including hundreds of images of children who were naked or being abused. Former moderators said the document, called the “DRR,” short for Daily Required Reading, was widely accessible to employees at Teleperformance and TikTok as recently as this summer. While some moderators working in unrelated functions were restricted from viewing this material, sources told Forbes that hundreds of people across both companies had free access to the document. The DRR and other training materials were stored in Lark, internal workplace software developed by TikTok’s China-based parent company, ByteDance.

The excuses given are equally unbelievable.

Teleperformance’s Global President of Trust & Safety Akash Pugalia told Forbes the company does not use videos featuring explicit content of child abuse in training, and said it does not store such material in its “calibration tools,” but would not clarify what those tools are or what they do. He declined to answer a detailed list of other questions regarding how many people have access to child sexual abuse material through the DRR and how Teleperformance safeguards this imagery.

The Forbes piece has lots of crazy details, including the fact that tons of people had access to this content, and other things: like one moderator who says her job didn’t even involve CSAM content, and that she never encountered any on the job other than when it was shown to her as part of her training.

Honestly, this feels like the kind of thing that could, and perhaps should, lead to criminal charges against someone.

Posted on Techdirt - 9 August 2022 @ 09:25am

Rep. Cathy McMorris Rodgers And Deeply Unfunny ‘Satirist’ Seek To Remove Website 1st Amendment Rights To ‘Protect Free Speech’

Rep. Cathy McMorris Rodgers, who heads something called the “House Republican Big Tech Task Force” has teamed up with Seth Dillon, the CEO of the deeply unfunny “conservative” Onion wannabe, The Babylon Bee, to whine in the NY Post about “how to end big tech censorship of free speech.” The answer, apparently, is to remove the 1st Amendment. I only wish I were joking, but that’s the crux of their very, very confused suggestion.

Let’s start with the basics: Dillon’s site regularly posts culture-war promoting satire. Because Republican culture wars these days are about shitting on anyone they dislike, or who dares to suggest that merely respecting others is a virtue, many of those stories are not just deeply unfunny, but often pretty fucked up. None of this is surprising, of course. But, the thing about the modern GOP and its culture wars is that it’s entirely based around pretending to be the victim. It’s about never, not once, being willing to take responsibility for your own actions.

So, when the Babylon Bee publishes something dumb that breaks a rule, and they get a minor slap on the wrist for it, they immediately flop down on the ground like a terrible soccer player and roll around wailing about how their free speech has been all censored. It hasn’t. You’re relying on someone else’s private property. They get to make the rules. And if they decide that you broke their rules, they get to show you the door (or apply whatever other on-site punishment they feel is appropriate). This is pretty basic stuff, and it actually used to be conservative dogma: private property rights, the right to freely associate — or not — with whoever you want under the 1st Amendment, and accepting personal responsibility when you fuck around were all things we were told were core to being a conservative.

No longer (it’s arguable, of course, if they were ever actually serious about any of that).

There is no free speech issue here. The Babylon Bee has 1st Amendment rights to publish whatever silly nonsense it wants on its own site. It has no right to demand that others host its speech for it. Just as the Babylon Bee does not need to post my hysterically funny satire about Seth Dillon plagiarizing his “best” jokes by running Onion articles three times through GPT3 AI with the phrase “this, but for dumb rubes.” That’s freedom of association, Seth. That’s how it works.

Perhaps it’s no surprise that the CEO of a “what if satire were shitty” site doesn’t understand the 1st Amendment, but you’d think that a sitting member of Congress, who actually swore to protect and uphold the Constitution, might have a better idea. Not so for Rep. McMorris Rodgers, who once was actually decent on tech, before apparently realizing that her constituents don’t want elected officials grounded in reality, and prefer them to be culture warriors as well.

Anyway, after whining about facing a tiny bit of personal responsibility — including, I shit you not, having to be fact checked by Facebook (note to the two of you: fact checking is more speech, it’s not censorship, you hypocritical oafs) — they trot out their “solutions.”

Big Tech must be held accountable. First, we propose narrowing Section 230 liability protections for Big Tech companies by removing ambiguity in the law — which they exploit to suppress and penalize constitutionally protected speech. Our proposal ensures Big Tech is no longer protected if it censors individuals or media outlets or removes factually correct content simply because it doesn’t fit its woke narrative.

I mean, holy fuck. There is no excuse in the year 2022 to still be so fucking ignorant of how Section 230 works. Especially if you’re in Congress. Narrowing Section 230’s liability protections won’t lead to less moderation. It will lead to more. The liability protections are what allow websites to feel comfortable hosting 3rd party content. The case that led to Section 230 in the first place involved Prodigy being held liable for comments in a forum. If you make sites more liable, they are less likely to host whatever nonsense content you want to share on their website.

Second, removing “factually correct content” whether or not it “fits its woke narrative” (and, um, no big tech company has a “woke narrative”) is… protected by the 1st Amendment. Content moderation is protected by the 1st Amendment. Dillon doesn’t have to publish my unfunny piece. Twitter doesn’t need to publish his unfunny piece. Facebook can fact check all it wants — even if it gets the facts wrong. It’s all thanks to the 1st Amendment.

Taking away 230 protections doesn’t change that — it just makes websites even LESS likely to host this culture war nonsense.

But McMorris Rodgers and Dillon aren’t done yet.

Second, we propose requiring quarterly filings to the Federal Trade Commission to keep Big Tech transparent about content moderation. This will allow Congress, the FTC and Americans to know when and why these companies censor content to determine whether it’s justified. We’d also sunset Section 230 protections after five years, so Congress can reevaluate them if necessary and incentivize Big Tech to treat all content fairly or have their protections revoked.

Again, this is almost certainly unconstitutional. I know some people struggle with the idea of why transparency requirements are an affront to the 1st Amendment, but it’s pretty straightforward. If Congress ordered Seth Dillon to file his site’s editorial policies, including details about what stories they reject and which they promote, “to determine whether it’s justified” for the site to make those editorial decisions, pretty much everyone would recognize the 1st Amendment concerns.

Demanding anyone justify editorial decisions by filing reports with the government to “determine whether [those editorial decisions are] justified” is just a blatant attack on free speech and the 1st Amendment.

Sunsetting Section 230 just takes us back to the issue we noted above. Without liability protections, websites are MORE likely to remove content to avoid liability, not less.

This isn’t like some big secret. Perhaps Dillon and McMorris Rodgers only get their news from sites like the Babylon Bee, and that helps them not understand how anything works. But, really, that’s no excuse.

Third, our proposal requires Big Tech to improve appeals processes for users to challenge moderation decisions and enables people to petition their state’s attorney general to bring legal action against Big Tech, enhancing users’ power to challenge censorship. Twitter would be required to notify a user, like the Babylon Bee, through direct communication before taking any censorship action. Big Tech would also be required to give users the option to challenge any censorship decisions with a real person — not a bot — to disincentivize Big Tech from completely automating its censorship process.

Right, so again, all of that is an affront to the 1st Amendment. Should I be able to petition my state’s attorney general to bring legal action against the Babylon Bee for failing to publish my truly hilarious article about how Cathy McMorris Rodgers hates the internet so much that she pushed legislation banning communities from building their own broadband networks (really funny stuff, because it’s true)?

Of course not. The 1st Amendment protects websites and their editorial decisions. There is no constitutional cause of action any attorney general could take against a website for their moderation decisions.

As for the appeals process — most websites have one. But mandating one would, again, raise serious constitutional issues, as it’s the government interfering with the editorial process.

And, note, of course, that none of these complaints address the fact that the social media sites that people like Dillon like, including Parler, Gettr, and Truth Social, have far more arbitrary and aggressive content moderation policies (even as they pretend otherwise).

It’ll be hilarious — even Babylon Bee worthy, if I do say so myself — if this bill passes, and woke liberals use it to sue Truth Social for taking down truthful content about the January 6th hearings. C’mon, Seth, let me publish that as an article on your site! Or you hate freedom of speech!

Free speech must be cherished and preserved. It’s time Big Tech companies uphold American values and become fair stewards of the speech they host.

But the Babylon Bee remains free to be as shitty as before? How is that fair?

Posted on Techdirt - 8 August 2022 @ 12:00pm

Project Veritas Not Only Loses Its Vexatious SLAPP Suit Against Stanford, It Has To Pay The University’s Legal Fees

Project Veritas, the faux conservative group of pranksters pretending to be journalists, likes to pretend that they’re “free speech” supporters. But they’re not. They appear to really only support their own free speech, and have a much more flexible view of free speech when it includes speech critical of themselves. Over the past few years, Project Veritas (PV) has gotten fairly aggressive in suing organizations that are critical of PV. That’s… not very free speechy. PV has tried to silence the NY Times, has sued CNN, and last year it sued Stanford and the University of Washington over a blog post debunking some of the usual nonsense from PV.

A few months back, we reported that CNN won its case against PV. But we missed that, back in May, a judge also dismissed PV’s case against Stanford. Basically, saying mean things about PV is not defamation, because opinions aren’t defamation, tough guys:

Viewing the totality of the circumstances, the Court concludes that the phrases in the Blog Post that Project Veritas challenges as defamatory are nonactionable opinions. In considering the medium and context, “statements of opinion are expected to be found more often in certain contexts, such as editorial pages or political debates.” Dunlap v. Wayne, 105 Wn.2d 529, 539, 716 P.2d 842 (1986). Here, the statements regard whether claims of election fraud were based on misleading or inaccurate information. Throughout the 2020 presidential election, statements regarding election fraud often resulted in heated and emotional discussions. See Camer, 45 Wn. App. at 41 (determining that an article about issues resulting in heated and often emotional discussions constituted nonactionable opinion). This context suggests that the Blog Post is providing opinions.

Additionally, “[t]he court should consider the entire communication and note whether the speaker qualified the defamatory statement with cautionary ‘terms of apparency.’” Life Designs Ranch, 191 Wn. App. at 331 (quoting Dunlap, 105 Wn.2d at 539). Project Veritas challenges only a couple phrases of the Blog Post as defamatory and agrees that the majority of the Blog Post “purported to be a technical study of whether and how prominent conservatives had worked to promote and ‘aggressively spread’ the [Video Report].” Compl. at ¶ 82. Indeed, the Blog Post focuses on describing when posts about the Video Report were made on social media, who made them, and how influencers strategically worked to gain visibility for the Video Report. See EIP Blog Post. Thus, not only were the allegedly defamatory portions of the Blog Post an exceedingly small piece of the Blog Post, they also did not relate to the main subject of the Blog Post. That Project Veritas fails to take issue with the Blog Post as a whole, and instead cherry picks just a couple phrases as defamatory, does not weigh in its favor. Furthermore, EIP qualified one of the challenged statements by saying that it had determined that the Video Report was part of a disinformation campaign. This language constituted a “term of apparency” and signaled to the reader that the statement was one of opinion rather than fact….

The specific words used in the Blog Post were also indicative of them being opinions because they are incapable of defamatory meaning. Words that have imprecise meaning are incapable of being defamatory because they are not provably false. Paterson, 502 F. Supp. 2d at 1134–35. Courts have found phrases like “rip-off,” “fraud,” and “unethical” are nonactionable because of their imprecise meaning and because they are susceptible to many interpretations. See id. at 1135 & n.2. In this case, one cannot determine the truth or falsity for the phrases that Project Veritas alleges to be defamatory. For example, the statement that the Video Report is “misleading” or constitutes “disinformation” is capable of many interpretations and thus cannot be proven true or false. See Phantom Touring, Inc v. Affiliated Publ’ns, 953 F.2d 724, 728 n.7 (1st Cir. 1992) (“Even the less figurative assertion that appellants are ‘blatantly misleading the public,’ . . . is subjective and imprecise, and therefore not capable of verification or refutation by means of objective proof.”). The statement that the Video Report had been “debunked” is similarly incapable of being proven true or false.

Anyway, that ruling actually came down in May, but we get to revisit it now, because last week the judge took the next step. Because the original lawsuit by PV was determined to be a SLAPP under Washington’s anti-SLAPP law, PV could be on the hook for Stanford’s legal fees… and that portion of the case has concluded with… PV being told to pay up to the tune of $149,596.90.

For what it’s worth, PV tried to get around having to pay by arguing that the fee-shifting provisions of Washington’s anti-SLAPP law can’t be applied in federal court. The court dismisses this argument in a footnote, says that the fees requested by Stanford are reasonable under the law, and makes no adjustment to Stanford’s requested amount.

Anyway, it’s pretty incredible that an organization that holds itself out as supporting free speech would ever try to argue that an anti-SLAPP law can’t apply in federal court. That’s just an undeniably anti-free speech position to take. Again, this is just a reminder that PV, for all its lofty talk about free speech, turns out to be the same kind of anti-speech, censorial organization as so many others when the speech is about itself.

Of course, this story is yet another reminder that strong anti-SLAPP laws are one of this country’s best protections for free speech, and against censorial thuggery. This is also why we need strong anti-SLAPP laws in every state AND a strong federal anti-SLAPP law. If PV were an actual free speech organization, it would be supporting such laws — not trying to tear them down and filing SLAPP suits.
