Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/mmasnick.bsky.social, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 3 July 2024 @ 11:58am

GOP Really Committed To The Bit That Speech They Don’t Like Is Censorship

The House Oversight Committee is investigating NewsGuard, a private company, for supposed “censorship” for the crime of… offering its own opinions on the quality of news sites. The old marketplace of ideas seems to keep getting rejected whenever Republicans find that their ideas aren’t selling quite as well as they’d hoped.

Up is down, left is right, black is white, day is night. The modern GOP, which has left any semblance of its historical roots in the trash, continues to make sure that “every accusation, a confession” is the basic party line. And now, it’s claiming that free speech is censorship.

Apparently Rep. James Comer was getting kinda jealous that his good buddy Rep. Jim Jordan was out there weaponizing the government to suppress speech, all while pretending it was in an attempt to stop the weaponization of the government to suppress speech.

Comer heads the House Committee on Oversight and Accountability. He has apparently decided that it’s his job to investigate companies for the kind of speech he dislikes. In this case, it’s NewsGuard he’s investigating.

Today, House Committee on Oversight and Accountability Chairman James Comer (R-Ky.) launched an investigation into the impact of NewsGuard on protected First Amendment speech and its potential to serve as a non-transparent agent of censorship campaigns. In a letter to NewsGuard Chief Executive Officers Steven Brill and Gordon Crovitz, Chairman Comer raises concerns over reports highlighting NewsGuard’s contracts with federal agencies and possible actions being taken to suppress accurate information. Chairman Comer’s letter includes requests seeking documents and information on NewsGuard’s business relationships with federal agencies and its adherence to its own policies in light of highly political social media activity by NewsGuard employees.

First off, it helps to understand what NewsGuard is. The organization was set up in 2018 by two journalism moguls, Steven Brill and Gordon Crovitz, in an effort to combat the rise of disinformation and nonsense peddling online. Its basic product rates journalism websites, scoring how credible and reliable they are.

And, let me be upfront: I’m not a fan of NewsGuard’s methodology, which I think isn’t particularly useful for doing what they’re trying to do. It’s formulaic in a somewhat arcane way, which enables terrible news sites to get rated well, while dinging (especially smaller, newer) publications that don’t check off all the boxes NewsGuard demands.

But, they’re allowed to do whatever they want. They are expressing their own First Amendment-protected opinion. And that’s a good thing. People don’t have to believe NewsGuard’s rankings (and my personal opinion is that everyone should take them with a large grain of salt). But it’s still their opinion. It’s their speech.

However, NewsGuard has been singled out as one of the enemies of free speech, like so much of the fantasy industrial complex that is making the rounds these days. This is because some of the nuttier nonsense-peddling grifters out there have been rated poorly by NewsGuard, and that’s resulted in some advertisers deciding to pull advertising.

Somehow that is a form of censorship. Of course, it’s not: it’s speech by a private party, which other private parties may listen to and act on, exercising their own rights of association.

But, as the Comer “investigation” calls out, some US government agencies have worked with NewsGuard, most notably the Defense Department. A few years back, the DoD signed a contract with NewsGuard, under which NewsGuard would flag content it found online that it believed to be foreign influence campaigns. Basically, it’s the Defense Department contracting with some internet watchers to see if they spot anything the DoD should be aware of.

I have no idea if NewsGuard is any good at this, and frankly, I’d be surprised if the DoD actually got any value out of the deal. But, it’s got nothing to do with “censorship” of any kind. It’s still just more speech.

To date, Crovitz (who was formerly the publisher of the Wall Street Journal, so you’d think the GOP grifter class would realize he’s much closer to them, politically speaking) has tried defending NewsGuard by (1) inserting some facts into a discussion that will reject such facts and (2) stupidly insisting that his is the only “non-partisan” rating service, and the rest are all leftists.

“We look forward to clarifying the misunderstanding by the committee about our work for the Defense Department,” Crovitz said in a statement to The Hill. “Our work for the Pentagon has been solely related to hostile disinformation efforts by Russian, Chinese and Iranian government-linked operations targeting Americans and our allies.”

Crovitz, a former publisher of The Wall Street Journal, also touted NewsGuard as “the only apolitical service” that rates news outlets, saying, “the others are either digital platforms with their secret ratings or left-wing partisan advocacy groups.”

In some ways, this strategy of responding to the investigation serves to explain why NewsGuard has always been kinda useless. They bring fact-checking to a vibes fight. That doesn’t work.

If we’ve learned anything from the failures of media over the past decade, it is not that we had a lack of fact-checking or other “objective” ways of measuring news. It’s that people don’t want that. What we’ve discovered is that tons of people are in the market for the Confirmation Bias Times, and they’re going to lap up anything that confirms their priors and outright reject anything that challenges what they believe.

We’ve seen things like Stanford’s Internet Observatory try to respond to similar attacks by coming back with facts, only to have those facts distorted, twisted, and turned right back around to accuse them of even worse things. Crovitz and NewsGuard seem likely to go through the same nonsense wringer.

Because the whole point of this is that facts no longer matter to the modern GOP. If you bring facts that conflict with their feelings, they’re going to blame you for it and attack you.

Here, all that NewsGuard has done is add their opinions about news sources. Some people trust them. Others don’t. That’s the marketplace of ideas in action.

And that’s what Comer is trying to suppress.

Posted on Techdirt - 3 July 2024 @ 09:29am

Tim Wu Is Out Of Control

I guess I should start this out by noting that I like Tim Wu quite a bit, and in the past I always felt like I learned something when I spoke with him. He was even one of the people who reviewed and provided feedback on my big “protocols, not platforms” article months before it was published.

Tim has been an important voice in thinking through tech policy issues over the last two decades. He coined the term “net neutrality” and has been an advocate for breaking up “big tech,” which he views as too powerful. As noted, I’ve learned a lot from him and agree with him that the world would be better if big tech companies weren’t so big (that’s one of the reasons why I wrote that protocols paper in the first place).

But just because you hate big tech doesn’t mean you should throw out basic, fundamental rights in pursuit of that goal. Yet Wu seems so focused on “big tech bad” that he’s literally willing to toss out the First Amendment in an effort to achieve his desires, even if all that will do is hand authoritarians and fascists even more power (which is not exactly a recipe for more competition in any industry, if history is any indication).

I’m confused about where Tim’s mind is at lately, as he seems to have embraced multiple ridiculous, dangerous, authoritarian policy ideas that would be incredibly damaging to the public, almost all of which involve suppressing speech in pursuit of policy goals that Wu supports, without even the slightest concern about the damage it will do to people.

We saw it last year, when he publicly supported having Congress move forward with KOSA, despite dozens of civil liberties and LGBTQ groups noting how its “duty of care” would be used to harm LGBTQ youth, blocking them from accessing information. Then, earlier this year, he signed onto an absolutely ridiculous amicus brief in support of Texas’s social media content moderation law. That brief was full of confused or misleading statements. He’s also been strongly supportive of banning TikTok, which is another attack on the First Amendment.

With all of these instances over the past few years, in each of which he dismisses basic First Amendment principles, you might have been tempted to think that Wu hates the First Amendment. But even I had thought that would have been a bridge too far for Wu.

That is, until he published his latest op-ed in the NY Times: a full frontal attack on the First Amendment, entitled “The First Amendment is Out of Control.”

Even if he didn’t write that headline (at major publications, editors often write the headlines, rather than the authors themselves), the article is yet another horribly confused, badly argued, fundamentally ridiculous attack on the First Amendment.

The First Amendment is not out of control. Tim Wu is out of control.

He starts out his piece by arguing that the First Amendment used to be about protecting “political dissenters” but more recently has been twisted by judges to “an all-purpose tool of legislative nullification that now mostly protects corporate interests.”

As for the idea that the First Amendment used to protect political dissent, I think folks like Charles Schenck and Eugene Debs might question that claim.

But, more importantly, the idea that the First Amendment is now some sort of awful corporate protection tool is, well, also wrong. The problem is that lawmakers keep trying to pass laws to suppress speech, including the speech of corporations. Don’t do that, and the First Amendment doesn’t get in the way of regulations. It’s pretty straightforward.

The real problem here is that Tim Wu wants to suppress the speech of companies he doesn’t like. And he’s mad that the First Amendment doesn’t allow this. And, even if I don’t like the dominance of those companies either, there are ways to change that which DO NOT involve attacking their First Amendment rights. If only Wu were willing to explore those options, rather than stomping out rights in pursuit of punishing speech he dislikes.

Also, it’s either ironic or stupid that he’s arguing this in the NY Times, which is also a corporation, and which has established some of the most important First Amendment precedents in the last century in both NY Times v. Sullivan and NY Times v. US. Those two cases, from 1964 and 1971, did not establish the idea that corporations are protected by the First Amendment. Courts had ruled on that point much earlier. But the two NY Times cases certainly helped explain why it’s so important for companies to have First Amendment protections as well.

Otherwise, authoritarian leaders could suppress the speech of companies they dislike. They could sue publications into oblivion. They could block them from publishing important news.

Wu is doing some rewriting of history here, mainly because he’s mad that the majority of the Supreme Court in the NetChoice cases rejected his nonsense pro-authoritarian theories in support of Texas’ social media law that would have stripped companies of editorial discretion rights and enabled authoritarian politicians to force companies to host messages they had no interest in associating with.

In the process, he attacks a series of recent First Amendment decisions:

Over the past decade or two, however, liberal as well as conservative judges and justices have extended the First Amendment to protect nearly anything that can be called “speech,” regardless of its value or whether the speaker is a human or a corporation. It has come to protect corporate donations to political campaigns (Citizens United v. Federal Election Commission in 2010), the buying and tracking of data (Sorrell v. IMS Health in 2011), even outright lies (United States v. Alvarez in 2012). As a result, it has become harder for the government to protect its citizens.

Of course, it’s easy to toss this list out without actually digging deep into the details of each case and why they were decided the way they were. They were not designed to make it “harder for the government to protect its citizens,” but rather they were decided (correctly) in an effort to make it harder for the government to suppress the rights of the public.

Each of those cases has supporters and detractors, but if any of those cases had gone differently, it would have led to the suppression of speech in dangerous ways, enabling authoritarians to silence important political dissent (the thing Wu insists the First Amendment has moved away from).

Indeed, Wu’s own argument is undermined by history (which he seems unfamiliar with). Later in the piece, he cites Justice Robert Jackson, who wrote a dissent in Terminiello v. Chicago. That was a First Amendment case in which a horrible racist priest was arrested after giving an inflammatory racist speech. The Supreme Court noted that the ordinance he was arrested under violated the First Amendment. In Robert Jackson’s dissent, he made the now-famous quote about how this type of ruling could “convert the constitutional Bill of Rights into a suicide pact.”

But, as free speech expert (he literally wrote the book on it) Jacob Mchangama noted, the ruling in Terminiello has been important, such as when it was cited in Edwards v. South Carolina to protect black students in South Carolina who had been arrested for protesting segregation. There, the Supreme Court cited the majority opinion in Terminiello to point out that the protestors had the First Amendment right to protest segregation.

Similarly, Mchangama points out that Justice Jackson, who Wu holds up as his north star on the First Amendment, wrote a concurrence in Dennis v. US supporting the conviction of members of the Communist Party. The concurrence is a broadside against anarchy and Communism, suggesting that the “clear and present danger” test is too lenient. Basically, he says Communists should be arrested before they actually do anything, because if you wait for a “clear and present danger” (the standard that was used to jail Schenck decades earlier, and which was made obsolete soon after this case), it would be “too late.”

So, Wu is endorsing the views of someone who wished to jail people as national security threats based on their political leanings, while citing his dissent in a case whose majority opinion later proved key to protecting the civil rights of those protesting segregation in the 1960s.

This is important: the nature of the First Amendment is that sometimes it protects the speech of people you dislike or distrust. Because if you don’t protect them, then those same authorities will suppress, stifle, and silence the speech of people you do support and do agree with.

Tim Wu is advocating a full frontal attack on the First Amendment because he greatly dislikes some companies — companies who have done amazing things to enable more speech in the world. Wu doesn’t like some of that speech and therefore looks to take down the First Amendment, even if it would give those like Donald Trump or Texas AG Ken Paxton that much more power to silence and suppress the speech of those they dislike.

I admit that I am perplexed by this side of Tim Wu. He’s a thoughtful explainer of various forces, but seems wholly incapable of thinking through how his desired outcomes would be used to silence the marginalized and the oppressed.

His anger about “big tech” seems to cloud his thinking about nearly everything.

The reasoning in the decision in the NetChoice cases marks a new threat to a core function of the state. By presuming that free speech protections apply to a tech company’s “curation” of content, even when that curation involves no human judgment, the Supreme Court weakens the ability of the government to regulate so-called common carriers like railroads and airlines — a traditional state function since medieval times.

This isn’t just wrong, it’s embarrassingly confused. The government can still regulate common carriers, that is, businesses that “carry” commodity items (people, products, data) from point A to point B and have no ongoing relationship beyond that.

Social media is not that. It’s nothing like a common carrier. A common carrier like an airline need not let a customer keep flying on its planes in perpetuity. But a social media website hosts content in perpetuity. It is not delivering from point A to point B. Nor is it providing a commodity service.

And, more to the point, the reason the Supreme Court ruled the way it did is that without websites having editorial discretion, the internet cannot function. Perhaps Tim needs to get out of his academic ivory tower and spend some time actually working in trust & safety, rather than coming up with pie-in-the-sky nonsense about how the internet works.

It might help bring him back to reality. If he wants, I’ll let him moderate the Techdirt comments for a week, and we’ll see how he feels about allowing states to force websites to keep content online after that.

Posted on Techdirt - 2 July 2024 @ 01:06pm

Justice Alito’s Views On Social Media And The First Amendment Seem To Shift Depending On Who He Wants To Win

The Supreme Court’s opinions in the NetChoice/CCIA cases have been leading to some bizarre interpretations, as many people try to read into it things they wanted to see but just aren’t there. Cathy already covered some of the oddities of Justice Alito’s concurrence (which Justices Thomas and Gorsuch signed onto), but I wanted to dig in a little more to his concurrence, pointing out a few things that show just how much Alito is willing to decide on an ideological basis, rather than one based on principles.

First up is a point raised by Daphne Keller at Stanford. She notes that Alito cites to the Packingham ruling:

As the Court has recognized, social-media platforms have become the “modern public square.” Packingham v. North Carolina, 582 U. S. 98, 107 (2017). In just a few years, they have transformed the way in which millions of Americans communicate with family and friends, perform daily chores, conduct business, and learn about and comment on current events.

But, as Keller points out, in the Packingham case, Alito wrote a concurrence whining incessantly about the “dicta” in the Packingham ruling (not unlike what he did in this case) and specifically whined about the whole “public square” line, claiming it was “undisciplined” and would be interpreted dangerously by future courts. Here he is in Packingham:

I cannot join the opinion of the Court, however, because of its undisciplined dicta. The Court is unable to resist musings that seem to equate the entirety of the internet with public streets and parks.

He later notes:

I am troubled by the implications of the Court’s unnecessary rhetoric.

So it’s pretty rich for him to be now leaning on that “public square” dicta that he ridiculed in that very case. He is now arguing that states should absolutely be able to force websites to host content.

But we don’t even need to go back to that 2017 decision to see Alito seemingly changing his tune. (We still believe Packingham was correctly decided, and that people misunderstand the “public square” line, though for different reasons than Alito.)

Just last week in the Murthy v. Missouri ruling, Alito’s dissent explained why social media websites have the right to moderate as they see fit. He noted that websites are like newspapers and can publish or “decline to publish whatever they wish.”

Of course, purely private entities like newspapers are not subject to the First Amendment, and as a result, they may publish or decline to publish whatever they wish.

Yet, in the NetChoice ruling, he more or less argues that the states can block that right that he just admitted last week is protected by the First Amendment. He claims that perhaps the sites could be considered common carriers (which only makes sense if you don’t understand what a common carrier is).

Most notable is the majority’s conspicuous failure to address the States’ contention that platforms like YouTube and Facebook—which constitute the 21st century equivalent of the old “public square”—should be viewed as common carriers

The majority didn’t address it because (1) it’s stupid and (2) both the Fifth and Eleventh Circuits effectively rejected that argument. (Judge Oldham’s decision does talk about it, but neither of the two other Judges on the panel signed onto it, so it doesn’t count as binding in any way.)

Alito tries to get around this distinction by arguing that websites are somehow different than newspapers:

Instead of seriously engaging with this and other arguments, the majority rests on NetChoice’s dubious assertion that there is no constitutionally significant difference between what newspaper editors did more than a half-century ago at the time of Tornillo and what Facebook and YouTube do today.

Maybe that is right—but maybe it is not. Before mechanically accepting this analogy, perhaps we should take a closer look.

He later argues that there is some sort of distinction between algorithms making editorial decisions and humans (though, it’s not clear what constitutional relevance that has):

Now consider how newspapers and social-media platforms edit content. Newspaper editors are real human beings, and when the Court decided Tornillo (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the “curation” and “content moderation” carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know. After all, many of the biggest platforms are beginning to use AI algorithms to help them moderate content. And when AI algorithms make a decision, “even the researchers and programmers creating them don’t really understand why the models they have built make the decisions they make.” Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?

But, if it were actually true that algorithmic decisions were not protected under the First Amendment (and, again, he’s wrong, and we have precedent to say he’s wrong), then why would he even bring up their rights to moderate in the Murthy decision a week ago?

It seems that Alito, like so many others, has a very flexible view of the First Amendment based on whether his political allies or enemies are making the argument. There is no consistency beyond “the Republicans should get what they want.”

Posted on Techdirt - 2 July 2024 @ 09:30am

Florida’s Attorney General Declares Victory In Social Media Case, Even Though The Supreme Court Makes It Clear She Lost Big Time

BREAKING NEWS: Florida’s Attorney General says the Supreme Court unanimously sided with her in a case where they unanimously ruled against her arguments.

Perhaps there’s a reason that Florida Attorney General Ashley Moody is so vigorously defending a Florida law that would block social media companies from diminishing the reach of disinformation: she loves spreading it herself.

Here’s Moody declaring victory based on a Supreme Court ruling she very clearly lost:

[Image: screenshot of AG Moody’s social media post declaring victory]

“BREAKING NEWS,” she says, “SCOTUS Unanimously Sides with Florida in Social Media Case.” Followed by: “We are pleased that SCOTUS agreed with Florida and rejected the lower court’s flawed reasoning—invalidating our social media law. While there are aspects of the decision we disagree with, we look forward to continuing to defend state law.”

Except that’s not even close to an accurate summary of what happened. As we’ve already described, the majority ruling is a pretty clear rebuke to the laws in both Texas and Florida. While it spends a lot more time focusing on the Fifth Circuit’s monstrosity, its only real fault with the Eleventh Circuit (which rejected Florida’s law) was that they didn’t jump through all the procedural hurdles for a facial challenge.

The clear point of the ruling is not that Florida’s law is fine, it’s just that the lower courts should have taken a few more procedural steps to explain why the laws were not fine. Even in Justice Barrett’s concurrence, she notes that the Eleventh Circuit got the First Amendment analysis correct:

I join the Court’s opinion, which correctly articulates and applies our First Amendment precedent. In this respect, the Eleventh Circuit’s understanding of the First Amendment’s protection of editorial discretion was generally correct; the Fifth Circuit’s was not.

So, at this point, six out of the nine Justices (including three from the “conservative” wing) agree with basic First Amendment analysis by the Eleventh Circuit (which was written by a “conservative” Federalist Society-recommended judge).

The idea that this was a victory for Florida in any sense defies not just basic logic, but basic reading comprehension.

My inbox is filled with press releases from all sides claiming victory in this case. But most of them are wrong. The victory here was for fundamental First Amendment principles applying to the internet, meaning that states cannot pass laws that prevent editorial discretion. As the majority noted:

To the extent that social-media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment’s requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world. In the latter, as in the former, government efforts to alter an edited compilation of third-party expression are subject to judicial review for compliance with the First Amendment.

And that means what it says: any law that seeks to curtail the editorial discretion of private websites must be reviewed under the First Amendment. That means Florida’s law isn’t going to survive. Many other state laws won’t survive either.

Posted on Techdirt - 1 July 2024 @ 03:25pm

Ctrl-Alt-Speech Minisode: The Supreme Court’s NetChoice Ruling

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

Although our hosts are both on vacation this week, we didn’t want to leave our listeners waiting too long for an update on today’s big news about online speech: the Supreme Court’s ruling in the NetChoice cases, which sends the Texas and Florida laws that would limit the ability of online platforms to moderate political speech back to the lower courts. So Mike Masnick has stepped briefly back to the microphone to join our producer, Leigh Beadon, for a quick mini episode of Ctrl-Alt-Speech, which we’re also posting to the Techdirt podcast feed. In this short discussion, Mike explains the immediate implications of the ruling, the way it separates procedural questions from its broader guidance on the First Amendment, and what it signals about how the court will evaluate issues like this in the future.

Read more about the NetChoice ruling in our coverage on Techdirt:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Posted on Techdirt - 1 July 2024 @ 12:17pm

Court To Indiana: Age Verification Laws Don’t Override The First Amendment

We keep pointing out that, contrary to the uninformed opinion of lawmakers across both major parties, laws that require age verification are clearly unconstitutional*.

* Offer not valid in the 5th Circuit.

Such laws have been tossed out everywhere as unconstitutional, except in Texas (and even then, the district court got it right, and only the 5th Circuit is confused). And yet, we hear about another state passing an age verification law basically every week. And this isn’t a partisan/culture war thing, either. Red states, blue states, purple states: doesn’t matter. All seem to be exploring unconstitutional age verification laws.

Indiana came up with one last year, which targeted adult content sites specifically. And, yes, there are perfectly good arguments that kids should not have access to pornographic content. However, the Constitution does not allow for any such restriction to be done in a sloppy manner that is both ineffective at stopping kids and likely to block protected speech. And yet, that’s what every age-gating law does. The key point is that there are other ways to restrict kids’ access to porn, rather than age-gating everything. But they often involve this thing called parenting.

Thus, it’s little surprise that, following a legal challenge by the Free Speech Coalition, Indiana’s law has been put on hold by a court that recognizes the law is very likely unconstitutional.

The court starts out by highlighting that geolocating is an extraordinarily inexact science, which is a problem, given that the law requires adult content sites to determine when visitors are from Indiana and to age verify them.

But there is a problem: a computer’s IP address is not like a return address on an envelope because an IP address is not inherently tied to any location in the real world but consists of a unique string of numbers written by the Internet Service Provider for a large geographic area. (See id. ¶¶ 12–13). This means that when a user connects to a website, the website will only know the user is in a circle with a radius of 60 miles. (Id. ¶ 14). Thus, if a user near Springfield, Massachusetts, were to connect to a website, the user might be appearing to connect from neighboring New York, Connecticut, Rhode Island, New Hampshire, or Vermont. (Id.). And a user from Evansville, Indiana, may appear to be connecting from Illinois or Kentucky. The ability to determine where a user is connecting from is even weaker when using a phone with a large phone carrier such as Verizon with error margins up to 1,420 miles. (Id. ¶¶ 16, 19). Companies specializing in IP address geolocation explain the accuracy of determining someone’s state from their IP address is between 55% and 80%. (Id. ¶ 17). Internet Service Providers also continually change a user’s IP address over the course of the day, which can make a user appear from different states at random.

Also, users can hide their real IP address in various ways:

Even when the tracking of an IP address is accurate, however, internet users have myriad ways to disguise their IP address to appear as if they are located in another state. (Id. ¶ B (“Website users can appear to be anywhere in the world they would like to be.”)). For example, when a user connects to a proxy server, they can use the proxy server’s IP address instead of their own (somewhat like having a PO box in another state). (Id. ¶ 22). ProxyScrape, a free service, allows users to pretend to be in 129 different countries for no charge. (Id.). Virtual Private Network (“VPN”) technology allows something similar by hiding the user’s IP address to replace it with a fake one from somewhere else.

All these methods are free or cheap and easy to use. (Id. ¶¶ 21–28). Some even allow users to access the dark web with just a download. (Id. ¶ 21). One program, TOR, is specifically designed to be as easy to use as possible to ensure as many people can be as anonymous as possible. (Id.). It is so powerful that it can circumvent Chinese censors.

The reference to “Chinese censors” is a bit weird, but okay, point made: anyone who doesn’t want to appear to be connecting from Indiana can easily avoid it.
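The court’s border-circle point is easy to see with a little arithmetic. The sketch below is purely illustrative, using the error margins cited in the opinion; the function is hypothetical, not any real geolocation API:

```python
def state_is_certain(miles_to_nearest_border: float,
                     error_radius_miles: float) -> bool:
    """A geolocated user can be pinned to a state only if the error
    circle around the location estimate fits entirely inside that
    state, i.e. the distance to the nearest border exceeds the
    error radius."""
    return miles_to_nearest_border > error_radius_miles

# The court's 60-mile error circle: a user 30 miles from the Indiana
# line may actually be in Illinois or Kentucky.
print(state_is_certain(30, 60))     # False
# A user deep in the state's interior can be placed reliably.
print(state_is_certain(100, 60))    # True
# With mobile-carrier error margins of up to 1,420 miles, essentially
# no Indiana user can be placed in-state with confidence.
print(state_is_certain(500, 1420))  # False
```

Combine that with the 55–80% state-level accuracy figures the court cites, and the law is asking websites to enforce a state boundary with a tool that cannot reliably see it.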

The court also realizes that just blocking adult content websites won’t block access to other sources of porn. The ruling probably violates a bunch of proposed laws against content that is “harmful to minors” by telling kids how to find porn:

Other workarounds include torrents, where someone can connect directly to another computer—rather than interacting with a website—to download pornography. (Id. ¶ 29). As before, this is free. (Id.). Minors could also just search terms like “hot sex” on search engines like Bing or Google without verifying their age. (Id. ¶ 32–33). While these engines automatically blur content to start, (Glogoza Decl. ¶¶ 5–6), users can simply click a button turning off “safe search” to reveal pornographic images, (Sonnier Decl. ¶ 32). Or a minor could make use of mixed content websites below the 1/3 mark like Reddit and Facebook

And thus, problem number one with age verification: it’s not going to be even remotely effective for achieving the policy goals being sought here.

With this background, it is easy to see why age verification requirements are ineffective at preventing minors from viewing obscene content. (See id. ¶¶ 14–34 (discussing all the ways minors could bypass age verification requirements)). The Attorney General submits no evidence suggesting that age verification is effective at preventing minors from accessing obscene content; one source submitted by the Attorney General suggests there must be an “investigation” into the effectiveness of preventive methods, “such as age verification tools.”

And that matters. Again, even if you agree with the policy goals, you should recognize that putting in place an ineffective regulatory regime that is easily bypassed is not at all helpful, especially given that it might also restrict speech for non-minors.

Unlike the 5th Circuit, this district court in Indiana understands the precedents related to this issue and knows that Ashcroft v. ACLU already dealt with the main issue at play in this case:

In the case most like the one here, the Supreme Court affirmed the preliminary enjoinment of the Child Online Protection Act. See Ashcroft II, 542 U.S. at 660–61. That statute imposed penalties on websites that posted content that was “harmful to minors” for “commercial purposes” unless those websites “requir[ed the] use of a credit card” or “any other reasonable measures that are feasible under available technology” to restrict the prohibited materials to adults. 47 U.S.C. § 231(a)(1). The Supreme Court noted that such a scheme failed to clear the applicable strict scrutiny bar. Ashcroft II, 542 U.S. at 665–66 (applying strict scrutiny test). That was because the regulations were not particularly effective as it was easy for minors to get around the requirements, id. at 667– 68, and failed to consider less restrictive alternatives that would have been equally effective such as filtering and blocking software, id. at 668–69 (discussing filtering and blocking software). All of that is equally true here, which is sufficient to resolve this case against the Attorney General.

Indiana’s Attorney General points to the 5th Circuit ruling that tries to ignore Ashcroft, but the judge here is too smart for that. He knows he’s bound by the Supreme Court, not whatever version of Calvinball the 5th Circuit is playing:

Instead of applying strict scrutiny as directed by the Supreme Court, the Fifth Circuit applied rational basis scrutiny under Ginsberg v. New York, 390 U.S. 629 (1968), even though the Supreme Court explained how Ginsberg was inapplicable to these types of cases in Reno, 521 U.S. at 865–66. The Attorney General argues this court should follow that analysis and apply rational basis scrutiny under Ginsberg.

However, this court is bound by Ashcroft II. See Agostini v. Felton, 521 U.S. 203, 237–38 (1997) (explaining lower courts “should follow the case which directly controls”). To be sure, Ashcroft II involved using credit cards, and Indiana’s statute requires using a driver’s license or third-party identification software.10 But as discussed below, this is not sufficient to take the Act beyond the strictures of strict scrutiny, nor enough to materially advance Indiana’s compelling interest, nor adequate to tailor the Act to the least restrictive means.

And thus, strict scrutiny must apply, unlike in the 5th Circuit, and this law can’t pass that bar.

Among other things, the age verification in this law doesn’t just apply to material that is obscene to minors:

The age verification requirements do not just apply to obscene content and also burden a significant amount of protected speech for two reasons. First, Indiana’s statute slips from the constitutional definition of obscenity and covers more material than considered by the Miller test. This issue occurs with the third prong of Indiana’s “material harmful to minors” definition, where it describes the harmful material as “patently offensive” based on “what is suitable matter for . . . minors.” Ind. Code § 35- 49-2-2. It is well established that what may be acceptable for adults may still be deleterious (and subject to restriction) to minors. Ginsberg, 390 U.S. at 637 (holding that minors “have a more restricted right than that assured to adults to judge and determine for themselves what sex material they may read or see”); cf. ACLU v. Ashcroft, 322 F.3d 240, 268 (3d Cir. 2003) (explaining the offensiveness of materials to minors changes based on their age such that “sex education materials may have ‘serious value’ for . . . sixteen-year-olds” but be “without ‘serious value’ for children aged, say, ten to thirteen”), aff’d sub nom. in relevant part, 542 U.S. 656 (2004). Put differently, materials unsuitable for minors may not be obscene under the strictures of Miller, meaning the statute places burdens on speech that is constitutionally protected but not appropriate for children

Also, even if the government has a compelling interest in protecting kids from adult content, this law doesn’t actually do a good job of that:

To be sure, protecting minors from viewing obscene material is a compelling interest; the Act just fails to further that interest in the constitutionally required way because it is wildly underinclusive when judged against that interest. “[A] law cannot be regarded as protecting an interest ‘of the highest order’ . . . when it leaves appreciable damage to that supposedly vital interest unprohibited.” …

The court makes it clear how feeble this law is:

To Indiana’s legislature, the materials harmful to minors are not so rugged that the State believes they should be unavailable to adults, nor so mentally debilitating to a child’s mind that they should be completely inaccessible to children. The Act does not function as a blanket ban of these materials, nor ban minors from accessing these materials, nor impose identification requirements on everybody displaying obscene content. Instead, it only circumscribes the conduct of websites who have a critical mass of adult material, whether they are currently displaying that content to a minor or not. Indeed, minors can freely access obscene material simply by searching that material in a search engine and turning off the blur feature. (Id. ¶¶ 31–33). Indiana’s legislature is perfectly willing “to leave this dangerous, mind-altering material in the hands of children” so long as the children receive that content from Google, Bing, any newspaper, Facebook, Reddit, or the multitude of other websites not covered.

The court also points out how silly it is that the law only applies to sites that cross a threshold (33%) of adult content. If the goal is to block kids’ access to porn, that’s a stupid way to go about it. Indeed, the court effectively notes that a website could get around the ban just by adding a bunch of non-adult images.

The Attorney General has not even attempted to meet its burden to explain why this speaker discrimination is necessary to or supportive of to its compelling interest; why is it that a website that contains 32% pornographic material is not as deleterious to a minor as a website that contains 33% pornographic material? And why does publishing news allow a website to display as many adult-images as it desires without needing to verify the user is an adult? Indeed, the Attorney General has not submitted any evidence suggesting age verification would prohibit a single minor from viewing harmful materials, even though he bears the burden of demonstrating the effectiveness of the statute. Ultimately, the Act favors certain speakers over others by selectively imposing the age verification burdens. “This the State cannot do.” Sorrell v. IMS Health Inc., 564 U.S. 552, 580 (2011). The Act is likely unconstitutional.

In a footnote, the judge highlights an even dumber part of the law: that the 33% is based on the percentage of imagery, and gives a hypothetical of a site that would be required to age gate:

Consider a blog that discusses new legislation the author would like to see passed. It contains hundreds of posts discussing these proposals. The blog does not include images save one exception: attached to a proposal suggesting the legislature should provide better sexual health resources to adult-entertainment performers is a picture of an adult-entertainer striking a raunchy pose. Even though 99% of the blog is core political speech, adults would be unable to access the website unless they provide identification because the age verification provisions do not trigger based on the amount of total adult content on the website, but rather based on the percentage of images (no matter how much text content there is) that contain material harmful to minors.
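The footnote’s point can be reduced to a toy calculation, treating the statute’s trigger as a simple fraction of images (this is a deliberately simplified reading of the 1/3 threshold, not the statute’s full counting rules):

```python
def must_age_gate(total_images: int, harmful_images: int) -> bool:
    """The trigger as the footnote describes it: only the share of
    *images* that are harmful to minors counts; the amount of text
    on the site, political or otherwise, is irrelevant."""
    if total_images == 0:
        return False
    return harmful_images / total_images >= 1 / 3

# The judge's hypothetical blog: hundreds of text-only political
# posts, but its single image is harmful to minors. 1 of 1 images
# crosses the 1/3 mark, so the whole site gets age-gated.
print(must_age_gate(total_images=1, harmful_images=1))  # True
# Meanwhile, padding the same site with a few innocuous images
# drops it below the threshold and out of the law entirely.
print(must_age_gate(total_images=5, harmful_images=1))  # False
```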

The court suggests some alternatives to this law, from requiring age verification for accessing any adult content (though, it notes, that’s also probably unconstitutional, even if it’s less restrictive) to having the state offer up free filtering and blocking tech for parents to use for their kids:

Indiana could make freely available and/or require the use of filtering and blocking technology on minors’ devices. This is a superior alternative. (Sonnier Decl. ¶ 47 (“Internet content filtering is a superior alternative to Internet age verification.”); see also Allen Decl. ¶¶ 38–39 (not disputing that content filtering is superior to age verification as “[t]he Plaintiff’s claim makes a number of correct positive assertions about content filtering technology” but noting “[t]here is no reason why both content filtering and age verification could not be deployed either consecutively or concurrently”)). That is true for the reasons discussed in the background section: filtering and blocking software is more accurate in identifying and blocking adult content, more difficult to circumvent, allows parents a place to participate in the rearing of their children, and imposes fewer costs on third-party websites.
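To see the structural difference the court is pointing at, here is a minimal sketch of a device-side filter: it evaluates every request the minor’s device makes, regardless of which site serves the content, rather than gating only sites above an arbitrary adult-content threshold. The domains and keywords are invented placeholders, not any real blocklist:

```python
from urllib.parse import urlsplit

# Hypothetical blocklist entries, maintained by a parent or vendor.
BLOCKED_DOMAINS = {"example-adult-site.test"}
BLOCKED_KEYWORDS = {"hot sex"}  # from the court's example searches

def allowed(url: str, query: str = "") -> bool:
    """Return False if the device should block this request."""
    host = urlsplit(url).hostname or ""
    if host in BLOCKED_DOMAINS or any(
            host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return False
    # Search queries are filtered too, covering the "turn off safe
    # search" workaround that the Indiana law ignores.
    return not any(k in query.lower() for k in BLOCKED_KEYWORDS)

print(allowed("https://news.example/article"))                     # True
print(allowed("https://example-adult-site.test/video"))            # False
print(allowed("https://search.example/", query="Hot Sex videos"))  # False
```

Because the check runs on the device itself, it applies equally to search engines, torrents, and mixed-content sites like Reddit, which is exactly why the court calls filtering the superior, less restrictive alternative.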

And thus, because the law is pretty obviously unconstitutional, the judge grants the injunction, blocking the law from going into effect. Indiana will almost certainly appeal, and we’ll have to keep going through this nonsense over and over again.

Thankfully, Indiana is in the 7th Circuit, not the 5th, so there’s at least somewhat less of a chance for pure nuttery on appeal.

Posted on Techdirt - 1 July 2024 @ 08:49am

In Content Moderation Cases, Supreme Court Says ‘Try Again’ – But Makes It Clear Moderation Deserves First Amendment Protections

Today, the Supreme Court made it pretty clear that websites have First Amendment rights to do content moderation as they see fit, but decided to send the cases challenging laws in Florida and Texas back to the lower courts to be litigated properly, effectively criticizing the litigation posture of the trade groups, NetChoice and CCIA, which brought the challenges in the first place. However, in doing so, the majority was also pretty explicit that the Fifth Circuit got everything wrong all over again.

The Supreme Court waited until the very last day of the term to finally release its decisions in the cases regarding Florida and Texas’s social media moderation laws. I’m not going to go through a full history of either, as we’ve covered them in detail in the past, but both laws sought to place restrictions on how social media companies could moderate content in certain circumstances (generally political). The question at the heart of both cases was whether governments could compel private websites to host speech that those websites didn’t wish to host (i.e., speech that violated their terms of service).

Both district courts rejected that premise as obviously unconstitutional. The appeals courts split, however. The 11th Circuit agreed that the law was mostly unconstitutional (though it allowed one problematic provision on transparency to continue). The 5th Circuit went rogue, upending a century’s worth of First Amendment law to say of course Texas has a right to compel websites to host speech that violates their rules.

The Supreme Court took its sweet time in dealing with this case, and now sends both cases back to the lower courts, saying that everyone did the analysis wrong: specifically by assuming the laws only applied to social media sites like Facebook and YouTube, when the reality is that they also probably apply to lots of other sites as well, and need to be analyzed on that basis.

The overall opinion on that point was 9-0, but there’s a bit of messiness in the rest: some Justices concurred only in parts, and Alito, Thomas, and Gorsuch concurred only in the bottom line that the cases were decided on the wrong basis, insisting that the rest of the majority opinion, written by Justice Kagan, is unnecessary dicta that has no impact.

And while that may technically be true, that dicta makes some pretty strong and important points regarding the First Amendment rights of private platforms to moderate as they see fit, while the concurrence by Alito seems to disagree with Alito’s own dissent in the Murthy case from just last week.

Here’s a relatively quick analysis of the decision, and I’m sure we’ll have deeper, more nuanced analyses going forward.

Kagan starts off the majority opinion by citing back to the Reno v. ACLU case, which tossed out the Communications Decency Act (but not Section 230) as unconstitutional, and established some basic principles regarding how the First Amendment applies to the internet. And while the opinion notes that the internet has changed a lot, the First Amendment still applies:

But courts still have a necessary role in protecting those entities’ rights of speech, as courts have historically protected traditional media’s rights. To the extent that social media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment’s requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world. In the latter, as in the former, government efforts to alter an edited compilation of third-party expression are subject to judicial review for compliance with the First Amendment.

But, in the end, the cases are sent back on somewhat technical grounds, because the courts should have reviewed the “facial nature” of the challenge. This was the issue that came up a lot during oral arguments. In short: was the challenge to the law itself (facial), or to how it was applied (as applied)? And, the majority basically says rather than spending so much time talking about what it would mean if the law were applied to social media sites specifically, the courts should have taken a step back to look at the entire law and whether or not it was constitutional at all.

Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of NetChoice’s challenge. The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms—as if, say, each case presented an as-applied challenge brought by Facebook protesting its loss of control over the content of its News Feed. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment. As explained below, the question in such a case is whether a law’s unconstitutional applications are substantial compared to its constitutional ones. To make that judgment, a court must determine a law’s full set of applications, evaluate which are constitutional and which are not, and compare the one to the other. Neither court performed that necessary inquiry.

In effect, this means that the underlying issues in this case are almost certainly going to come right back to the Supreme Court in another year or two. But still, Kagan makes it pretty clear that there are lots of elements in these laws that appear to attack the First Amendment rights of websites. In setting forth “the relevant constitutional principles” it becomes pretty clear that the Fifth Circuit’s total nuttiness concerns the court.

Contrary to what the Fifth Circuit thought, the current record indicates that the Texas law does regulate speech when applied in the way the parties focused on below—when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts in its News Feed (or homepage). The law then prevents exactly the kind of editorial judgments this Court has previously held to receive First Amendment protection. It prevents a platform from compiling the third-party speech it wants in the way it wants, and thus from offering the expressive product that most reflects its own views and priorities. Still more, the law—again, in that specific application—is unlikely to withstand First Amendment scrutiny. Texas has thus far justified the law as necessary to balance the mix of speech on Facebook’s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to “un-bias” what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.

The majority’s concern, then, is really just with how the case was litigated: it was brought as a facial challenge to the law itself, but litigated as if it were an as-applied challenge. And that meant the record was incomplete for a full facial challenge.

The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases. That will enable the lower courts to consider the scope of the laws’ applications, and weigh the unconstitutional as against the constitutional ones.

But, again and again, the decision still makes it pretty clear that six out of the nine Justices appear to recognize just how crazy these laws are, and just how wrong the Fifth Circuit was in deciding that the law in Texas was just peachy.

But it is necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit. Recall that it held that the content choices the major platforms make for their main feeds are “not speech” at all, so States may regulate them free of the First Amendment’s restraints. 49 F. 4th, at 494; see supra, at 8. And even if those activities were expressive, the court held, Texas’s interest in better balancing the marketplace of ideas would satisfy First Amendment scrutiny. See 49 F. 4th, at 482. If we said nothing about those views, the court presumably would repeat them when it next considers NetChoice’s challenge. It would thus find that significant applications of the Texas law—and so significant inputs into the appropriate facial analysis—raise no First Amendment difficulties. But that conclusion would rest on a serious misunderstanding of First Amendment precedent and principle. The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. Explaining why that is so will prevent the Fifth Circuit from repeating its errors as to Facebook’s and YouTube’s main feeds. (And our analysis of Texas’s law may also aid the Eleventh Circuit, which saw the First Amendment issues much as we do, when next considering NetChoice’s facial challenge.) But a caveat: Nothing said here addresses any of the laws’ other applications, which may or may not share the First Amendment problems described below

The majority opinion, rightly, points to the important Miami Herald v. Tornillo case, which held that newspapers have the right to refuse to publish someone’s political views. Much of the debate in all of the cases around these laws was over whether websites were more like newspapers, in which case the Miami Herald ruling would apply, or more like telephone lines, in which case common carrier rules could apply. The majority pointing to Miami Herald suggests they realize (correctly) how the First Amendment works here.

The seminal case is Miami Herald Publishing Co. v. Tornillo, 418 U. S. 241 (1974). There, a Florida law required a newspaper to give a political candidate a right to reply when it published “criticism and attacks on his record.” Id., at 243. The Court held the law to violate the First Amendment because it interfered with the newspaper’s “exercise of editorial control and judgment.” Id., at 258. Forcing the paper to print what “it would not otherwise print,” the Court explained, “intru[ded] into the function of editors.” Id., at 256, 258. For that function was, first and foremost, to make decisions about the “content of the paper” and “[t]he choice of material to go into” it. Id., at 258. In protecting that right of editorial control, the Court recognized a possible downside. It noted the access advocates’ view (similar to the States’ view here) that “modern media empires” had gained ever greater capacity to “shape” and even “manipulate popular opinion.” Id., at 249–250. And the Court expressed some sympathy with that diagnosis. See id., at 254. But the cure proposed, it concluded, collided with the First Amendment’s antipathy to state manipulation of the speech market. Florida, the Court explained, could not substitute “governmental regulation” for the “crucial process” of editorial choice.

The fact that social media shows most content and only limits a very small amount doesn’t change the First Amendment analysis from the Miami Herald case (despite what some nonsense peddlers insisted):

That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference. Contra, 49 F. 4th, at 459–461 (arguing otherwise). To begin with, Facebook and YouTube exclude (not to mention, label or demote) lots of content from their News Feed and homepage. The Community Standards and Community Guidelines set out in copious detail the varied kinds of speech the platforms want no truck with. And both platforms appear to put those manuals to work. In a single quarter of 2021, Facebook removed from its News Feed more than 25 million pieces of “hate speech content” and almost 9 million pieces of “bullying and harassment content.” App. in No. 22–555, at 80a. Similarly, YouTube deleted in one quarter more than 6 million videos violating its Guidelines. See id., at 116a. And among those are the removals the Texas law targets. What is more, this Court has already rightly declined to focus on the ratio of rejected to accepted content.

And, yes, the decision notes, users may attribute views to the platforms themselves based on what they allow or disallow:

Similarly, the major social-media platforms do not lose their First Amendment protection just because no one will wrongly attribute to them the views in an individual post. Contra, 49 F. 4th, at 462 (arguing otherwise). For starters, users may well attribute to the platforms the messages that the posts convey in toto. Those messages—communicated by the feeds as a whole—derive largely from the platforms’ editorial decisions about which posts to remove, label, or demote. And because that is so, the platforms may indeed “own” the overall speech environment. In any event, this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution. The Court did not think in Turner—and could not have thought in Tornillo or PG&E—that anyone would view the entity conveying the third-party speech at issue as endorsing its content.

As for the two favorite cases of those pushing these laws, Pruneyard (about a shopping mall) and FAIR (about allowing military recruiters on campus), the Court notes that the organizations involved in both were not expressive by nature, as opposed to social media, which is.

To be sure, the Court noted in PruneYard and FAIR, when denying such protection, that there was little prospect of misattribution. See 447 U. S., at 87; 547 U. S., at 65. But the key fact in those cases, as noted above, was that the host of the third party speech was not itself engaged in expression. See supra, at 16–17. The current record suggests the opposite as to Facebook’s News Feed and YouTube’s homepage. When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.

Even more interesting: the Court notes that the Texas law almost certainly couldn’t even survive lower levels of First Amendment scrutiny because the entire point of the law is to suppress free speech.

In the usual First Amendment case, we must decide whether to apply strict or intermediate scrutiny. But here we need not. Even assuming that the less stringent form of First Amendment review applies, Texas’s law does not pass. Under that standard, a law must further a “substantial governmental interest” that is “unrelated to the suppression of free expression.” United States v. O’Brien, 391 U. S. 367, 377 (1968). Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects. But the interest Texas has asserted cannot carry the day: It is very much related to the suppression of free expression, and it is not valid, let alone substantial.

Indeed, the statements from Texas politicians pushing the law undermine the law pretty clearly:

Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of speech that the major social-media platforms present. In this Court, Texas described its law as “respond[ing]” to the platforms’ practice of “favoring certain viewpoints.” Brief for Texas 7; see id., at 27 (explaining that the platforms’ “discrimination” among messages “led to [the law’s] enactment”). The large social-media platforms throw out (or encumber) certain messages; Texas wants them kept in (and free from encumbrances), because it thinks that would create a better speech balance. The current amalgam, the State explained in earlier briefing, was “skewed” to one side. 573 F. Supp. 3d, at 1116. And that assessment mirrored the stated views of those who enacted the law, save that the latter had a bit more color. The law’s main sponsor explained that the “West Coast oligarchs” who ran social media companies were “silenc[ing] conservative viewpoints and ideas.” Ibid. The Governor, in signing the legislation, echoed the point: The companies were fomenting a “dangerous movement” to “silence” conservatives. Id., at 1108; see id., at 1099 (“[S]ilencing conservative views is unAmerican, it’s un-Texan and it’s about to be illegal in Texas”).

But a State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction.” Sorrell v. IMS Health Inc., 564 U. S. 552, 578–579 (2011). It is not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. And that is so even when those actors possess “enviable vehicle[s]” for expression. Hurley, 515 U. S., at 577. In a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer. But it cannot prohibit speech to improve or better balance the speech market.

And Texas can’t do that:

They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.

And thus, while the Court is sending the case back to the lower courts to review correctly under the necessary standards for a facial challenge, it makes clear that the Fifth Circuit really fucked up its analysis, even if just in how it understood the way social media functions:

But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook’s News Feed and YouTube’s homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit’s view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two First Amendment precepts. First, presenting a curated and “edited compilation of [third party] speech” is itself protected speech. Hurley, 515 U. S., at 570. And second, a State “cannot advance some points of view by burdening the expression of others.” PG&E, 475 U. S., at 20. To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.

Of the concurrences, Justice Barrett leans harder on the idea that NetChoice should have brought an “as applied” challenge, rather than a facial challenge. Justice Jackson also seems to feel that the litigation and the lower courts went too far in their analysis, beyond what was actually being challenged.

Justice Thomas wrote a concurrence with the underlying decision, but then whines for many pages about the rest of the majority’s analysis regarding the Fifth Circuit, saying that it is a waste of time and that it’s too early to be deciding these issues. He also slams other Supreme Court decisions for being too broad. And, just to show how wrong he is, he starts talking about “common carriers,” something even the final Fifth Circuit ruling wouldn’t fully endorse.

Justice Alito wrote a similar concurrence (which Thomas and Gorsuch sign onto) basically saying “we only agree that the cases should be sent back to the courts below to be evaluated as a facial challenge, and everything else in the majority decision is useless nonsense”:

The holding in these cases is narrow: NetChoice failed to prove that the Florida and Texas laws they challenged are facially unconstitutional. Everything else in the opinion of the Court is nonbinding dicta.

I agree with the bottom line of the majority’s central holding. But its description of the Florida and Texas laws, as well as the litigation that shaped the question before us, leaves much to be desired. Its summary of our legal precedents is incomplete. And its broader ambition of providing guidance on whether one part of the Texas law is unconstitutional as applied to two features of two of the many platforms that it reaches—namely, Facebook’s News Feed and YouTube’s homepage—is unnecessary and unjustified.

In the end, these cases are not over. They’ll go back to the courts below, we’ll get more decisions, and there’s a decent enough chance that the cases will end up back before the Supreme Court again. But there is a lot in the majority opinion that makes it clear the Fifth Circuit’s decision was absolutely as nutty and ridiculous as I described when it came out. And that part of the decision is supported by Kagan, Sotomayor, Roberts, Kavanaugh, and Barrett (in other words, five of the nine Justices). And it’s mostly supported by Jackson (she just didn’t sign on to the full analysis of the Texas law’s many Constitutional problems, suggesting it was too early to do so).

This is a good sign for the overall internet and the First Amendment rights of websites to have editorial discretion in how they moderate.

Posted on Techdirt - 28 June 2024 @ 03:40pm

Ctrl-Alt-Speech: A Lack Of (Under)Standing

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Posted on Techdirt - 28 June 2024 @ 12:18pm

Yet Another ID Verification Service Breached, Exposing Private Info Collected On Behalf Of Uber, TikTok & More

As more and more governments try to pass more and more laws requiring age verification, some of us keep pointing out that age verification will cause a ton of harm. For all the talk of how it’s necessary to “protect the children,” the only way to verify ages is to collect a ton of private information on people, which then makes that information a target.

People like Jonathan Haidt in his new book like to pretend that there’s some magical way of doing privacy-protective age verification by outsourcing it to a third party, but that just passes the buck and makes that third party a target. Just a few weeks ago, we talked about this a bit in the context of Australia, where a third-party age ID verification vendor used by bars had a breach, leaking more than 1 million customer records.

Of course, some people would say, “but that’s a bar, that’s different than a website.”

Well, then, this new story should catch your attention. First reported by 404 Media, AU10TIX, an Israel-based online identity verification company used by TikTok, ExTwitter, Uber, LinkedIn, PayPal, Fiverr and others, has been leaking drivers’ licenses. For over a year.

The set of credentials provided access to a logging platform, which in turn contained links to data related to specific people who had uploaded their identity documents, Hussein showed. The accessible information includes the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license. A subsequent link then includes an image of the identity document itself; some of those are American drivers’ licenses.

The data also appears to include results from AU10TIX’s verification process, with a field for “liveness” reading “true”; the “probability” of that conclusion on a scale of 0 to 1, with a potential result being 0.9486029; and other fields called “DocumentAuthenticity” and “OverallQuality.” More results appear to relate to AU10TIX’s comparison of a photo of the person’s face to their uploaded document, with another section referencing a photo called “PhotoForFaceComparison.jpg.” 

Another screenshot from the tool shows a line chart with one axis labeled “clientOrganizationName.” That axis includes “TikTok_Shop_Creator,” “Impersonation_XCorp,” and “uber-carshare-passport,” apparent references to the three tech giants. 

Cool, cool. Nothing to be concerned about there at all.

Just last year, when Elon first hired this company to provide identification services for ExTwitter, we warned that these systems are not at all reliable and can be a threat to privacy. Turns out we were right.

As always, collecting unnecessary data makes you a target. And this data became a target and was exposed. The way we minimize that is not by forcing more companies to collect more such data. It’s to not need to collect such data in the first place.

This isn’t a case where someone just discovered this breach and no harm was done. Indeed, it appears that significant harm was done here:

The credentials appear to have been harvested by malware in December 2022, and first posted to a Telegram channel in March 2023, according to timestamps and messages from the Telegram channel that posted the credentials online. 404 Media downloaded these credentials and found the name matched that of someone who lists their role on LinkedIn as a Network Operations Center Manager at AU10TIX. The file contained a wealth of passwords and authentication tokens for various services used by the employee, including tools from Salesforce and Okta, as well as the logging service itself.

So this data has been out there for over a year. And shared. Widely. For over a year.

Can lawmakers please stop requiring more companies to harm everyone’s privacy this way? These breaches are only going to keep happening, and they’re only going to get worse the more and more ignorant policymakers keep forcing more companies to collect more such data, based on a myth that age verification will magically make the internet safe and wholesome. It won’t.

It’ll just expose private data to scammers.

Posted on Techdirt - 28 June 2024 @ 09:28am

Adults Are Losing Their Shit About Teen Mental Health On Social Media, While Desperate Teens Are Using AI For Mental Help

It’s just like adults to be constantly diagnosing the wrong thing in trying to “save the children.” Over the last couple of years there’s been a mostly nonsense moral panic claiming that the teen mental health crisis must be due to social media. Of course, as we’ve detailed repeatedly, the actual research on this does not support that claim at all.

Instead, the evidence suggests that there is a ton of complexity happening here and no one factor. That said, two potentially big factors contributing to the teen mental health crisis are (1) the mental health challenges that their parents are facing, and (2) the lack of available help and resources for both kids and parents to deal with mental health issues.

When you combine those factors, it should come as little surprise that desperate teens are turning to AI for mental health support. That’s discussed in an excellent new article in The Mercury News’ Mosaic Journalism Program, which helps high school students learn how to do professional-level journalism.

For many teenagers, digital tools such as programs that use artificial intelligence, or AI, have become a go-to option for emotional support. As they learn to navigate and cope in a world where mental health care demands are high, AI is an easy and inexpensive choice.

Now, I know that some people’s immediate response is to be horrified by this, and it’s right to be concerned. But, given the situation teens find themselves in, this is not all that surprising.

Teens don’t have access to real mental health help. On our most recent podcast, we spoke to an expert in raising kids in a digital age, Devorah Heitner, who mentioned that making real, professional mental health support available in every high school would be so much more helpful than something silly like a “Surgeon General’s Warning” on social media.

Indeed, as another recent podcast guest, Candice Odgers, has noted, the evidence actually suggests that the reason kids with mental health issues spend so much time on social media might be because they are already having mental health issues, and the lack of resources to actually help them makes them turn to social media instead.

And now, it appears it may also make them turn to AI systems.

The details in the article aren’t as horrifying as they might otherwise be. It does note that there are ways that using AI can be helpful to some kids, which I’m sure is true:

Some students, like Brooke Joly, who will be a junior at Moreau Catholic High School in Hayward in the fall, say they value the bluntness of AI when seeking advice or mental health tips.

“I’ve asked AI for advice a few times because I just wanted an accurate answer rather than someone I know sugar-coating,” she said by text in an interview.

The privacy and consistency that AI promises its young users does make a compelling case for choosing mental health care delivered via app.

Venkatesh, who said she has struggled with depression, said she appreciates that ChatGPT has no judgmental bias. “I think the symptoms of depression are very stigmatized, and if you were to tell people what the reality of depression is like — skipping meals or skipping showers, for instance — people would judge you for that. I think in those instances, it’s easier to talk to someone who is not human because AI would never judge you for that.”

AI can provide a safe space for teens to be vulnerable at a point when the adults in their lives may not be supportive of mental health care.

That said, this is another area that is simply not well-studied at all (unlike social media and mental health, which now have tons of studies).

Hopefully, we can see some actual studies on whether or not AI can actually be helpful here. The article does note that there are some specialized apps focused on this market, but one would hope those would have some data to back up their approach. Relying on a general LLM like ChatGPT seems like… a much riskier proposition.

As one youth director in the article notes, one thing that using AI does for kids is that it puts them in control, at a time when they often feel they have control over so little. This brings us to yet another study that we’ve talked about in the past: one that suggests that another leading factor in mental health struggles for kids has been the lack of spaces where parents aren’t hovering over them and making all the decisions.

Given that, you can understand why kids might seek their own solutions. The lack of viable options that don’t involve, once again, having parents or other authority figures hovering over them certainly makes such tools more appealing.

None of this is great, and (again) it would appear that any real solution should involve making mental health professionals more accessible to teens, such as in schools. But absent that, it’s understandable why they might turn to other types of tools. So, hopefully, there’s going to be a lot more research on how helpful (or unhelpful!) those tools actually are, or at least on how to properly integrate them into a larger, more comprehensive approach to improving mental health.
