Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/mmasnick.bsky.social, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 26 April 2024 @ 03:09pm

Ctrl-Alt-Speech: The Bell Tolls For TikTok

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Posted on Techdirt - 26 April 2024 @ 12:06pm

Court Dismisses Mark Zuckerberg Personally From Massive ‘Social Media Addicts Children’ Lawsuit

Over the last few years, there have been a ton of lawsuits, pretty much all of them dubious, arguing that social media is inherently harmful to children (something the research does not show) and that therefore there is some sort of magic product liability claim that will hold social media companies responsible. A bunch of those lawsuits have been rolled up into a massive single multidistrict litigation case in California, under the catchy name: “In re: social media adolescent addiction/personal injury products liability litigation.”

The docket is massive, currently with well over 750 documents, and I’m sure many more are to come. At least some of the cases tried to pin personal liability on Mark Zuckerberg himself, as if he were somehow gleefully looking to harm children with Facebook and Instagram.

The court has now dismissed those claims (though with leave to amend). Eric Goldman brought my attention to this latest ruling in the case on his blog (honestly there are so many documents on the docket I had completely missed this one).

As you might expect in a case this massive, with a bunch of personal injury attorneys jumping in with the hope of scoring some massive multi-billion dollar settlement, they’re willing to throw every stupid theory they can come up with against the wall to see if any one gets by an inattentive judge. Goldman’s summary is a bit more diplomatic: “the plaintiff lawyers are in total war mode, throwing seemingly inexhaustible resources at the case to explore every possible issue and angle of liability, no matter how arcane or tangential.”

Anyway, some of the plaintiffs argued that Zuck should be personally liable based on a wacky theory that he negligently concealed and misrepresented how safe Meta’s various platforms were. The judge takes the various claims against Zuck and uses this example from one of the complaints to summarize them:

In Zuckerberg’s testimony before Congress and in other public statements alleged in paragraphs 364 through 391 of the Master Complaint, Defendants Meta and Zuckerberg disclosed some facts but intentionally failed to disclose other facts, making their disclosures deceptive. In addition, Meta and Zuckerberg intentionally failed to disclose certain facts that were known only to them, which Plaintiff and their parents could not have discovered. Had the omitted information been disclosed, the injuries that Plaintiff suffered would have been avoidable and avoided. Plaintiff reasonably would have been on alert to avoid an ultimately dangerous activity. Plaintiff asserts that she has always valued her health, and makes conscious choices to avoid other common dangerous activities teenagers and pre-teens often fall victim to, such as drinking and vaping. Because Plaintiff was unaware of the dangers of Instagram, she could not take those same healthy steps to avoid a dangerous situation. Plaintiff repeats and realleges against Zuckerberg each and every allegation against Meta contained in Count 8 (paragraphs 976 through 987) and Count 9 (paragraphs 988 through 999) in the Master Complaint.

In short, the argument is that Zuck knew Meta was inherently dangerous for kids (which is nonsense, not supported by the data). If only he had said so publicly, they claim, the plaintiff kids in this case would have been good little kids and stopped using Instagram, because Zuck told them it was dangerous.

If this seems absolutely preposterous, that’s because it is.

The plaintiffs also argue that Zuckerberg is personally liable for this due to reports of how much input he has into the design of the various products:

Plaintiffs build out their theory of liability as to Zuckerberg in their opposition to defendant’s motion to dismiss. … They focus primarily on two aspects of Zuckerberg’s role in Meta. First, plaintiffs allege that, from Meta’s inception to the present, Zuckerberg has maintained tight control over design decisions, including those relating to developing user engagement that are at issue in this litigation. Second, emphasizing Zuckerberg’s role as a public figure and given his alleged knowledge of Meta’s platforms’ dangers, plaintiffs allege that his statements about Meta’s platforms’ safety—including some of those excerpted above—form a pattern of concealment that is actionable under theories of fraudulent and negligent misrepresentation and concealment.

The court, impressively, has to look at this question under various different state laws, given that the case rolls up cases from different states, with different state laws applying to different aspects (multidistrict litigation can be nuts). And, thus, it notes that in many of the states where this claim was brought, there’s a problem: a bunch of them don’t even recognize the tort of “negligent misrepresentation by omission.” So, it’s easy to dismiss such claims against him in those states.

But, even in the states where they do have such a tort, it doesn’t go well. The court notes that various states have different standards for a “duty to disclose” but basically finds all of them wanting.

Plaintiffs propose three bases for this Court to find Zuckerberg owed a duty to disclose the information he purportedly withheld: (i) Zuckerberg’s “exclusive and superior knowledge” of how Meta’s products harm minors; (ii) Zuckerberg’s “public, partial representations concerning the safety of Meta’s products”; and (iii) Zuckerberg’s fame and public notoriety. (Dkt. No. 538 at 7– 11.) None of these approaches is supported by any state’s law. In short, plaintiffs cannot rely on Zuckerberg’s comparative knowledge alone to establish the kind of “confidential” or otherwise “special” relationship with each plaintiff that these states’ laws require. The Court sets forth the analysis supporting this conclusion as to each of plaintiffs’ three theories below.

The “exclusive and superior knowledge” theory is laughable, as the court points out. It only applies to duties between transacting parties, like if you’re selling someone a car and fail to disclose that the engine fell out or whatever. That’s clearly not the case here:

No plaintiff here pleads they were transacting or were otherwise engaged with Zuckerberg personally. Thus, plaintiffs fail to establish a duty to disclose based on “superior knowledge.”

Again, no luck for the supposed “public partial representations.” As the court notes, in the states that have such a concept, it involves transactions (again) between parties with a “special relationship,” where such a disclosure would make sense. That does not exist:

Again, plaintiffs have not pled any relationship—let alone a “special” one—between themselves and Zuckerberg. This theory fails.

And, finally, the “fame and public notoriety” theory doesn’t cut it either. Indeed, the court notes that if the plaintiffs’ theory made sense here, we’d see an absolute flood of lawsuits any time a public figure didn’t “disclose” random information.

Plaintiffs use this broad language to extrapolate a claim here. They argue, on the one hand, that Zuckerberg “was the trusted voice on all things Meta” and “remained an approachable resource to the public,” and, on the other hand, that he accepted Meta’s duty to its customers “[b]y cultivating his roles in public life as both the embodiment of Meta and Silicon Valley’s approximation of a philosopher king.” (Dkt. No. 538 at 9–10.) Specious allusions to Plato aside, plaintiffs have not provided case law to support this interpretation of the Berger standard, nor have they meaningfully grappled with the expansion of state tort law that would result were the Court to recognize the duty they identify. To that end, plaintiffs’ theory would invert the states’ “confidential” or “special” relationship requirements by creating a duty to disclose for any individual recognizable to the public. The Court will not countenance such a novel approach here.

And thus, the claims are dismissed. The court gives the plaintiffs leave to amend based on a theory they apparently tossed in at the last minute about corporate officer liability. The court notes that the issue wasn’t fully briefed, and thus allows the plaintiffs to file an amended complaint addressing it. Normally, if you throw in a claim super late like that, a court will say “too late, too bad,” but here it admits that because the case is so complicated, with so many moving parts, it will let it slide.

Given the aggressive nonsense of the lawyers here, it seems likely that they’ll push forward with their theory and file an amended complaint, but it seems unlikely to survive.

Unfortunately, though, this is the dumb world we live in today. Product liability claims are being used against internet companies (and their executives) because any time anything bad happens, people want to find someone to blame. And, of course, there are sketchy bottom-feeder lawyers willing to bring such cases to court, in hopes of cashing in.

Posted on Techdirt - 26 April 2024 @ 09:20am

Biden Bans The App His Campaign Insists Is An Important Place To Talk To Voters

Apparently TikTok is so evil and pernicious that it must be banned from the United States… and so useful that the man who signed the ban, President Joe Biden, made sure to post a few new videos to the platform. It feels like maybe his concerns are a bit overblown?

It seems that TikTok users noticed.


Makena Kelly from Wired reached out to the White House to ask the journalistic question of “wtf are you doing?” and basically got no answer.

It doesn’t seem like the administration or campaign has plans for Biden to address it either. After the Wednesday speech, I reached out to the White House to ask whether we could expect anything from the president in the near future. I was redirected to the National Security Council, which directed me to statements made by national security adviser Jake Sullivan during Wednesday’s White House press briefing. I also pinged the Biden campaign, and didn’t get a clear answer there either.

The White House did give a pretty weak answer to NBC though.

“A fragmented media environment requires us to show up and meet voters where they are — and that includes online,” a Biden campaign official told NBC News. “TikTok is one of many places we’re making sure our content is being seen by voters.”  

The Biden campaign says it plans to use “every tool we have to reach young voters where they are” and has pledged to keep using “enhanced security measures.”

What a load of nonsense. If TikTok is one of the many places where you know you need to “meet voters,” then you know that it’s a media property, and an important one. The claim that you’re using “enhanced security measures” also just means the threat can’t be that serious, because any measures the administration can take, users could take as well, if given the opportunity.

Again, all this really does is highlight the problematic hypocrisy in this decision. It remains an attack on the open internet, which the Biden administration used to pretend to support, and now it has lost any moral authority to claim that it does.

If there are legitimate concerns about privacy and security, then pass an actual privacy law. If there are concerns about propaganda, then suck it up and recognize that in a free country that believes in free speech, we have always had to deal with propaganda, and the best way to deal with it is by educating the public. Perhaps by using TikTok.

But, instead, the President vocally supported, and has now signed, the bill to ban TikTok. And yet, he still wants to use it. The whole situation is ridiculous and makes the administration look silly and hypocritical.

Posted on Techdirt - 25 April 2024 @ 08:05pm

Flynn Family’s SLAPP Suit Against CNN Slapped Down By Judge

MAGA SLAPP suits apparently aren’t going out of style, but yet another one has been tossed out of court.

Remember lawyer Steven Biss? He was the grand filer of tons of SLAPP suits for the MAGA crowd against media outlets. He had quite a losing streak, with nearly all of those cases failing. Last fall, I heard some rumors that Biss had either died or was facing serious health problems. In September, reporter Josh Gerstein broke the news that he’d had a stroke. In January, Biss’ law license was suspended, not for all of his frivolous cases, but “on impairment grounds.”

However, many of his cases were handed off to another MAGA lawyer, Jesse Binnall, who, at one time, was “Trump’s top election fraud lawyer,” to give you some sense of his worldview and credibility.

Anyway, handing off the cases to Binnall hasn’t made them work out any better. We had covered how Biss had filed a lawsuit on behalf of Jack & Leslie Flynn, the brother and sister-in-law of disgraced former (briefly) National Security Advisor Michael Flynn. The lawsuit was against CNN, claiming that a segment it had aired falsely associated the two of them with the QAnon wackjob conspiracy theory.

CNN had aired the segment, which was mostly focused on a gathering of QAnon adherents. During the segment, CNN briefly shows a video that was taken at a barbecue, where Michael Flynn is standing alongside his brother Jack and sister-in-law Leslie, with their right hands raised, and where Michael Flynn says “where we go one, we go all,” a saying that has been associated with QAnon followers.

As that clip played, the voiceover said, “‘Where we go one, we go all’: an infamous QAnon slogan promoted by Trump’s first National Security Advisor, Michael Flynn.”

According to the lawsuit, this was defamatory to Jack and Leslie. This was laughable, as we pointed out at the time. The CNN report doesn’t even talk about Jack or Leslie, and they did stand there while Michael Flynn said the slogan. Hilariously, Biss tried to argue that “where we go one, we go all” was not a QAnon slogan, pointing out that John F. Kennedy had said it. But, it’s not about who said it first, it’s about what it’s associated with.

CNN pushed back hard on the lawsuit, also noting that Jack Flynn himself had retweeted the same phrase. Unfortunately, while the court dismissed parts of it at the motion to dismiss stage, it allowed part of the case to move on to summary judgment. The defamation claims were dismissed, but the “false light” claims (basically defamation claims in disguise) were allowed to go through the lengthy summary judgment process. We pointed out at the time (as did Eric Goldman) that there were plenty of reasons to toss this case at the earlier stage, but the judge wasted everyone’s time and money by letting it go one more round.

At some point, the case got reassigned to a new judge, and that judge has tossed the remaining false light claim at summary judgment. It appears that the Flynns’ new lawyer didn’t make the case any better.

Here, the Flynns’ claim is that CNN called them “QAnon followers.” See Dkt. 197 at 18–21; Dkt. 221 at 1. Although CNN never overtly said that, a false fact may be implied. See McCann v. Shell Oil Co., 551 A.2d 696, 697–98 (R.I. 1988). The Court assumes without deciding that the video was capable of implying that the Flynns were QAnon followers. That implication, “once defined, is treated like a claim for direct defamation.” Cheng, 51 F.4th at 444; see also Biro v. Conde Nast, 883 F. Supp. 2d 441, 468–69 (S.D.N.Y. 2012). In other words, the Court will analyze the issue as if CNN called the Flynns “QAnon followers” explicitly. But determining “whether a communication is capable of bearing a particular meaning” is only the first step. Restatement (Second) of Torts § 614(1)(b). It is still a matter for the Court to decide “whether that meaning is defamatory.”

It was not. Calling the Flynns “QAnon followers” was, in defamation law-speak, an opinion….

Here, the statement neither stated nor implied defamatory facts, so it is a nonactionable opinion. This conclusion is based on two independent—but mutually reinforcing—grounds. First, the statement is unverifiable. And second, it was a comment on disclosed, nondefamatory facts. Both characteristics ensure that the reasonable viewer understands that the statement is the speaker’s opinion (rather than stating facts) and that the speaker is not harboring additional, undisclosed facts to justify the statement. So Rhode Island law and the First Amendment demand its protection.

The court goes on to note that this was clearly a statement on matters of public concern. The Flynns’ attempt to get around that by claiming there was no “legitimate public interest” in the story fails easily:

The speech here plainly fits the bill. QAnon itself is a topic of public concern, and the segment also reported on the connections between QAnon, January 6, and former president Trump. The Flynns acknowledge that the report as a whole was on matters of public concern. Dkt. 197 at 25– 26. They argue that including them in the report did not “further[]” any “legitimate public interest” because (1) they are not public figures and (2) “the clip does not relate to the public concern that is the subject of the Report.” Id.

The first argument misunderstands the law. The public-figure and public-concern tests have little to do with each other. Compare Lerman v. Flynt Distrib. Co., 745 F.2d 123, 137 (2d Cir. 1984), with Snyder, 562 U.S. at 453. And the second argument fails because it presumes the Flynns’ favored conclusion on the merits. Connections between QAnon and those in power were the core public concern addressed by the report. The clip of Michael Flynn—President Trump’s first National Security Advisor—saying a phrase associated with QAnon certainly addresses that concern, even if the Flynns think it was totally innocent.

It also appears (unsurprisingly) that the Flynns’ lawyers (unclear whether this part was Biss or Binnall) were, well, not good. In particular, the Flynns relied heavily on statements they mischaracterized to argue that believing in QAnon means believing in a very specific set of things. Then, if they could show they didn’t hold all of those beliefs, the statement about them would supposedly be provably false and defamatory.

Yet, as the court notes, the Flynns’ insistence on a long list of beliefs necessary to count as a QAnon supporter is based on their lawyers mischaracterizing testimony:

Yet they have mischaracterized that testimony. The quoted material in the Flynns’ filings is almost entirely from the statements of the attorney conducting the deposition, which the witness does not endorse. From the outset, the witness makes clear that QAnon is a “fluid” set of beliefs, and he rejects that there are any unifying features other than some “memes” and “slogans.” Dkt. 198-1 at 20:2–24:22. Later, the witness says that “parts” of a statement about QAnon’s origins and effects are accurate, but he still resists that there are unifying beliefs or behaviors. Id. at 32:10–36:25. Later still, the witness again rejects that QAnon has a stable core, instead noting that its “beliefs can be broad and evolving.” Id. at 87:12–89:2, 90:3–91:18. Finally, the witness notes that even the nature and identity of Q—surely what one would think of as forming the core of QAnon—are unsettled. Id. at 53:22–54:4. Even read in the light most favorable to the Flynns, the deposition (in context) clearly supports the idea that QAnon is an amorphous, undefined concept.

Yeah. It’s not a good idea to totally misrepresent testimony. Judges don’t like that. In fact, at the end of the ruling, Judge Arun Subramanian even included an appendix with nearly five pages of the deposition to show the actual context that the Flynns misrepresented in their filings.

It seems that the Flynns’ argument was about as solid as QAnon’s own grasp on reality.

Hell, even the Flynns’ own expert witness seemed to undermine the crux of their argument:

And CNN points to other record evidence to shore up this point. CNN’s expert testified that QAnon is “elastic and difficult to define,” lacks a “coherent belief system,” and that there “is no definition [of] what a QAnon follower is, or what ‘following’ QAnon actually entails.” Dkt. 184-6 at 4–5, 7. Similarly, the Flynns’ expert agreed that QAnon is an “a la carte belief system,” “not an [o]rthodoxy,” and there’s no “formula for how you indicate QAnon belief.”

And, again, the Flynns’ legal team did the pair no favors:

The Flynns’ filings themselves reinforce this theme. The very first paragraph of the amended complaint describes QAnon as “a far right-wing, loosely organized network and community of believers who embrace a range of unsubstantiated beliefs.” Am. Compl. ¶ 1. And rather than grounding the meaning of “QAnon” in something concrete, their other descriptions just add more value judgments to the mix. Id. ¶¶ 2–3, 15, 19, 23(a), 26 (describing QAnon as “right-wing,” a “deranged conspiracy cult,” “based on age-old racist and anti-Semitic beliefs,” promoting “ancient and dark biases and bigotry,” “detached from reality,” having an “utter disregard for the facts,” “mentally ill and crazy,” “dangerous,” “violent,” “racist,” “extremist,” “insurrectionist,” a “domestic terrorist organization,” and stating that “trusting the plan [is] an important part of QAnon belief” (internal quotation marks omitted)).

Perhaps one could argue (though the Flynns don’t) that the report itself gives “QAnon follower” some fixed meaning. But it doesn’t. At one point in the video, a commentator says QAnon is about “community”: “One of my big takeaways from attending the Q conference is that the QAnon movement is about so much more than just the predictions … it’s about the community. The people there felt like they were part of something big and revolutionary and that they were opposing absolute evil.”

The court notes that even the term “follower” is ambiguous and not something capable of being true or false (and thus, an opinion):

At its root, whether someone is a “follower” is deep in the political thicket: “When used in political discourse, terms of relation and association often have meanings that are debatable, loose, and varying, rendering the relationships they describe insusceptible of proof of truth or falsity.” Egiazaryan v. Zalmayev, 880 F. Supp. 2d 494, 512 (S.D.N.Y. 2012) (internal quotation marks omitted) (applying Buckley to the statement that someone was a “leader” of a political party). Similarly, the Flynns tried to show that QAnon has a belief system by quoting the reporter’s testimony that “QAnon ha[s] become[] like a religion.” Dkt. 197 at 4 (citation omitted). But that comparison precisely illustrates the problem. All the difficulties discussed above show why courts are loath to decide who is a true believer. Cf. Hernandez v. Comm’r, 490 U.S. 680, 699 (1989) (“It is not within the judicial ken to question the centrality of particular beliefs or practices to a faith, or the validity of particular litigants’ interpretations of those creeds.”).

Also, to make this even crazier, the court notes that adherents to QAnon are told to deny that they follow QAnon:

Finally, there is also a unique twist to QAnon followership. It is undisputed that “Q instructed his followers to deny being QAnon followers.” Dkt. 212 ¶ 7. If a QAnon follower is asked under oath whether she is a QAnon follower, what is the honest response? And how should the jury interpret it? This problem feels a bit like trying to hold a trial on opposite day: Saying yes violates a supposed tenet of followership. Does that mean she’s not a true believer, making her answer untrue? If she answers no, is she really lying? After all, Q told her that “[t]here is no ‘Q[A]non.’” Id. Exactly how one untangles this brain teaser isn’t dispositive; it’s just another point of ambiguity.

Then the court notes that even if one could “verify” whether or not someone was a QAnon follower, it still wouldn’t be defamatory. This is because it’s a conclusion based on disclosed facts.

Calling the Flynns “QAnon followers” was a conclusion based on the following disclosed, nondefamatory facts: (1) the Flynns stood with Michael Flynn, their right hands raised, as he recited the phrase “where we go one, we go all,” and (2) the phrase was a QAnon slogan. The Flynns don’t fight these facts. On the first part, they haven’t challenged the clip’s authenticity. As to the second, they say they didn’t know that the phrase was a QAnon slogan. Dkt. 212 ¶¶ 9–12. But that’s irrelevant. They don’t contest that the phrase was in fact a QAnon slogan, and true statements are nondefamatory. See id.; see also Dkt. 221 at 3 (the Flynns’ submission referring to “the now-infamous QAnon slogan”); Gross v. Pare, 185 A.3d 1242, 1247 (R.I. 2018) (“[T]he events upon which [the plaintiff’s false-light] claim is premised actually occurred; therefore we cannot logically conclude that any publication regarding the dispute at issue was false or fictitious.”).

The Flynns disagree that the video included a factual basis for their being QAnon followers. Dkt. 221 at 3. Yet this argument is in tension with the most basic part of their case: that a reasonable viewer would infer from the video that they were QAnon followers. The reasonable viewer must have some factual basis to draw the inference. It is not enough that they merely appeared in a video that also included QAnon followers. Several reporters and news anchors appear in the video, but it’s obvious from context that the video isn’t calling them “QAnon followers.” And as noted above, the Flynns admit that they were “friendly” and partly “aligned with QAnon,” often posting or reposting QAnon-related content…..

And thus, the motion for summary judgment by CNN is granted, and the case is dismissed. Another SLAPP case tossed. It’s just too bad it didn’t come much earlier in the process.

Posted on Techdirt - 25 April 2024 @ 10:46am

The Problems Of The NCMEC CyberTipline Apply To All Stakeholders

The NCMEC CyberTipline’s failure to combat child sexual abuse material (CSAM) as well as it could is extremely frustrating. But as you look at the details, you realize there just aren’t any particularly easy fixes. While there are a few areas that could improve things at the margin, the deeper you look, the more challenging the whole setup becomes. There aren’t any easy answers.

And that sucks, because Congress and the media often expect easy answers to complex problems. And that might not be possible.

This is the second post about the Stanford Internet Observatory’s report on the NCMEC CyberTipline, the somewhat useful, but tragically limited, main way that investigations of CSAM online are initiated. In the first post, we discussed the structure of the system, and how the incentive structure around law enforcement is a big part of what’s making the system less impactful than it otherwise might be.

In this post, I want to dig in a little more about the specific challenges in making the CyberTipline work better.

The Constitution

I’m not saying that the Constitution is a problem, but it represents a challenge here. In the first post, I briefly mentioned Jeff Kosseff’s important article about how the Fourth Amendment and the structure of NCMEC make things tricky, but it’s worth digging in a bit here to understand the details.

The US government set up NCMEC as a private non-profit in part because, if a government agency were doing this work, there would be significant Fourth Amendment concerns about whether the evidence it receives was collected without a warrant. If it’s a government agency, then the law cannot require companies to hand over the info without a warrant.

So, Congress did a kind of two-step dance here: they set up this “private” non-profit, and then created a law that requires companies that come across CSAM online to report it to the organization. And all of this seems to rely on a kind of fiction that if we pretend NCMEC isn’t a government agent, then there’s no 4th Amendment issue.

From the Stanford report:

The government agent doctrine explains why Section 2258A allows, but does not require, online platforms to search for CSAM. Indeed, the statute includes an express disclaimer that it does not require any affirmative searching or monitoring. Many U.S. platforms nevertheless proactively monitor their services for CSAM, yielding millions of CyberTipline reports per year. Those searches’ legality hinges on their voluntariness. The Fourth Amendment prohibits unreasonable searches and seizures by the government; warrantless searches are typically considered unreasonable. The Fourth Amendment doesn’t generally bind private parties, however the government may not sidestep the Fourth Amendment by making a private entity conduct a search that it could not constitutionally do itself. If a private party acts as the government’s “instrument or agent” rather than “on his own initiative” in conducting a search, then the Fourth Amendment does apply to the search. That’s the case where a statute either mandates a private party to search or “so strongly encourages a private party to conduct a search that the search is not primarily the result of private initiative.” And it’s also true in situations where, with the government’s knowledge or acquiescence, a private actor carries out a search primarily to assist the government rather than to further its own purposes, though this is a case-by-case analysis for which the factors evaluated vary by court.

Without a warrant, searches by government agents are generally unconstitutional. The usual remedy for an unconstitutional search is for a court to throw out all evidence obtained as a result of it (the so-called “exclusionary rule”). If a platform acts as a government agent when searching a user’s account for CSAM, there is a risk that the resulting evidence could not be introduced against the user in court, making a conviction (or plea bargain) harder for the prosecution to obtain. This is why Section 2258A does not and could not require online platforms to search for CSAM: it would be unconstitutional and self-defeating.

In CSAM cases involving CyberTipline reports, defendants have tried unsuccessfully to characterize platforms as government agents whose searches were compelled by Section 2258A and/or by particular government agencies or investigators. But courts, pointing to the statute’s express disclaimer language (and, often, the testimony of investigators and platform employees), have repeatedly held that platforms are not government agents and their CSAM searches were voluntary choices motivated mainly by their own business interests in keeping such repellent material off their services.

So, it’s quite important that the service providers finding and reporting CSAM are not seen as agents of the government; otherwise, the ability to use that evidence in prosecuting cases would be destroyed. And, as the report notes, it’s also why it would be a terrible idea to require social media to proactively hunt down CSAM. If the government required it, it would effectively light all that evidence on fire and prevent it from being used in prosecutions.

That said, the courts (including in a ruling by Neil Gorsuch while he was on the appeals court) have made it pretty damn clear that, while platforms may not be government agents, NCMEC and the CyberTipline are. And that creates some difficulties.

In a landmark case called Ackerman, one federal appeals court held that NCMEC is a “governmental entity or agent.” Writing for the Tenth Circuit panel, then-judge Neil Gorsuch concluded that NCMEC counts as a government entity in light of NCMEC’s authorizing statutes and the functions Congress gave it to perform, particularly its CyberTipline functions. Even if NCMEC isn’t itself a governmental entity, the court continued, it acted as an agent of the government in opening and viewing the defendant’s email and four attached images that the online platform had (as required) reported to NCMEC. The court ruled that those actions by NCMEC were a warrantless search that rendered the images inadmissible as evidence. Ackerman followed a trial court-level decision, Keith, which had also deemed NCMEC a government agent: its review of reported images served law enforcement interests, it operated the CyberTipline for public not private interests, and the government exerts control over NCMEC including its funding and legal obligations. As an appellate-level decision, Ackerman carries more weight than Keith, but both have proved influential.

The private search doctrine is the other Fourth Amendment doctrine commonly raised in CSAM cases. It determines what the government or its agents may view without a warrant upon receiving a CyberTipline report from a platform. As said, the Fourth Amendment generally does not apply to searches by private parties. “If a private party conducted an initial search independent of any agency relationship with the government,” the private search doctrine allows law enforcement (or NCMEC) to repeat the same search so long as they do not exceed the original private search’s scope. Thus, if a platform reports CSAM that its searches had flagged, NCMEC and law enforcement may open and view the files without a warrant so long as someone at the platform had done so already. The CyberTipline form lets the reporting platform indicate which attached files it has reviewed, if any, and which files were publicly available.

For files that were not opened by the platform (such as where a CyberTipline submission is automated without any human review), Ackerman and a 2021 Ninth Circuit case called Wilson hold that the private search exception does not apply, meaning the government or its agents (i.e., NCMEC) may not open the unopened files without a warrant. Wilson disagreed with the position, adopted by two other appeals-court decisions, that investigators’ warrantless opening of unopened files is permissible if the files are hash matches for files that had previously been viewed and confirmed as CSAM by platform personnel. Ackerman concluded by predicting that law enforcement “will struggle not at all to obtain warrants to open emails when the facts in hand suggest, as they surely did here, that a crime against a child has taken place.”

To sum up: Online platforms’ compliance with their CyberTipline reporting obligations does not convert them into government agents so long as they act voluntarily in searching their platforms for CSAM. That voluntariness is crucial to maintaining the legal viability of the millions of reports platforms make to the CyberTipline each year. This imperative shapes the interactions between platforms and U.S.-based legislatures, law enforcement, and NCMEC. Government authorities must avoid crossing the line into telling or impermissibly pressuring platforms to search for CSAM or what to search for and report. Similarly, platforms have an incentive to maintain their CSAM searches’ independence from government influence and to justify those searches on rationales “separate from assisting law enforcement.” When platforms (voluntarily) report suspected CSAM to the CyberTipline, Ackerman and Wilson interpret the private search doctrine to let law enforcement and NCMEC warrantlessly open and view only user files that had first been opened by platform personnel before submitting the tip or were publicly available.

This is all pretty important in making sure that the whole system stays on the right side of the 4th Amendment. As much as some people really want to force social media companies to proactively search for and report CSAM, mandating that creates real problems under the 4th Amendment.

As for the NCMEC and law enforcement side of things, the requirement to get a warrant for unopened communications remains important. But, as noted below, sometimes law enforcement doesn’t want to get a warrant. If you’ve been reading Techdirt for any length of time, this shouldn’t surprise you. We see all sorts of areas where law enforcement refuses to take that basic step of getting a warrant.

Understanding that framing is important to understanding the rest of this, including where each of the stakeholders falls down. Let’s start with the biggest problem of all: where law enforcement fails.

Law Enforcement

In the first article on this report, we noted that the incentive structure has made it such that law enforcement often tries to evade this entire process. Some of the time, it doesn’t want to go through the process of getting warrants. It doesn’t want to associate with the ICAC task forces, because that feels like too much of a burden, and if it doesn’t take care of a report, someone else on the task force will. And sometimes it doesn’t want to deal with CyberTipline reports at all, out of fear that being too slow to act on a report might create liability.

Most of these issues seem to boil down to law enforcement not wanting to do its job.

But the report details some of the other challenges for law enforcement. And it starts with just how many reports are coming in:

Almost across the board law enforcement expressed stress over their inability to fully investigate all CyberTipline reports due to constraints in time and resources. An ICAC Task Force officer said “You have a stack [of CyberTipline reports] on your desk and you have to be ok with not getting to it all today. There is a kid in there, it’s really quite horrible.” A single Task Force detective focused on internet crimes against children may be personally responsible for 2,000 CyberTipline reports each year. That detective is responsible for working through all of their tips and either sending them out to affiliates or investigating them personally. This process involves reading the tip, assessing whether a crime was committed, and determining jurisdiction; just determining jurisdiction might necessitate multiple subpoenas. Some reports are sent out to affiliates and some are fully investigated by detectives at the Task Force.

An officer at a Task Force with a relatively high CyberTipline report arrest rate said “we are stretched incredibly thin like everyone.” An officer in a local police department said they were personally responsible for 240 reports a year, and that all of them were actionable. When asked if they felt overwhelmed by this volume, they said yes. While some tips involve self-generated content requiring only outreach to the child, many necessitate numerous search warrants. Another officer, operating in a city with a population of 100,000, reported receiving 18–50 CyberTipline reports annually, actively investigating around 12 at any given time. “You have to manage that between other egregious crimes like homicides,” they said. This report will not extensively cover the issue of volume and law enforcement capacity, as this challenge is already well-documented and detailed in the 2021 U.S. Department of Homeland Security commissioned report, in Cullen et al., and in a 2020 Government Accountability Office report. “People think this is a one-in-a-million thing,” a Task Force officer said. “What they don’t know is that this is a crime of secrecy, and could be happening at four of your neighbors’ houses.”

And of course, making social media platforms more liable doesn’t fix much here. If anything, it makes things worse, because it encourages even more reporting by the platforms, which only further overloads law enforcement.

Given all those reports the cops are receiving, you’d hope they had a good system for managing them. But your hope would not be fulfilled:

Law enforcement pick a certain percentage of reports to investigate. The selection is not done in a very scientific way—one respondent described it as “They hold their finger up in the air to feel the wind.” An ICAC Task Force officer said triage is more of an art than a science. They said that with experience you get a feel for whether a case will have legs, but that you can never be certain, and yet you still have to prioritize something.

That seems less than ideal.

Another problem, though, is that a lot of the reports are not prosecutable at all. Because of the incentives discussed in the first post, apparently certain known memes get reported to the CyberTipline quite frequently, and police feel they just clog up the system. But because the platforms fear significant liability if they don’t report those memes, they keep reporting them.

U.S. law requires that platforms report this content if they find it, and that NCMEC send every report to law enforcement. When NCMEC knows a report contains viral content or memes they will label it “informational,” a category that U.S. law enforcement typically interpret as meaning the report can be ignored, but not all such reports get labeled “informational.” Additionally there are an abundance of “age difficult” reports that are unlikely to lead to prosecution. Law enforcement may have policies requiring some level of investigation or at least processing into all noninformational reports. Consequently, officers often feel inundated with reports unlikely to result in prosecution. In this scenario, neither the platforms, NCMEC, nor law enforcement agencies feel comfortable explicitly ignoring certain types of reports. An employee from a platform that is relatively new to NCMEC reporting expressed the belief that “It’s best to over-report, that’s what we think.”

At best, this seems to annoy law enforcement, but it’s a function of how the system works:

An officer expressed frustration over platforms submitting CyberTipline reports that, in their view, obviously involve adults: “Tech companies have the ability to […] determine with a high level of certainty if it’s an adult, and they need to stop sending [tips of adults].” This respondent also expressed a desire that NCMEC do more filtering in this regard. While NCMEC could probably do this to some extent, they are again limited by the fact that they cannot view an image if the platform did not check the “reviewed” box (Figure 5.3 on page 26). NCMEC’s inability to use cloud services also makes it difficult for them to use machine learning age classifiers. When we asked NCMEC about the hurdles they face, they raised the “firehose of I’ll just report everything” problem.

Again, this all seems pretty messy. Of course you want companies to report anything they find that might be CSAM. And, of course, you want NCMEC to pass them on to law enforcement. But the end result is overwhelmed law enforcement with no clear process for triage and dealing with a lot of reports that were sent in an abundance of caution but which are not at all useful to law enforcement.

And, of course, there are other challenges that policymakers probably don’t think about. For example: how do you deal with hacked accounts? How much information is it right for the company to share with law enforcement?

One law enforcement officer provided an interesting example of a type of report he found frustrating: he said he frequently gets reports from one platform where an account was hacked and then used to share CSAM. This platform provided the dates of multiple password changes in the report, which the officer interpreted as indicating the account had been hacked. Despite this, they felt obligated to investigate the original account holder. In a recent incident they described, they were correct that the account had been hacked. They expressed that if the platform explicitly stated their suspicion in the narrative section of the report, such as by saying something like “we think this account may have been hacked,” they would then feel comfortable de-prioritizing these tips. We subsequently learned from another respondent that this platform provides time stamps for password changes for all of their reports, putting the burden on law enforcement to assess whether the password changes were of normal frequency, or whether they reflected suspicious activity.

With that said, the officer raised a valid issue: whether platforms should include their interpretation of the information they are reporting. One platform employee we interviewed who had previously worked in law enforcement acknowledged that they would have found the platform’s unwillingness to explicitly state their hunch frustrating as well. However, in their current role they also would not have been comfortable sharing a hunch in a tip: “I have preached to the team that anything they report to NCMEC, including contextual information, needs to be 100% accurate and devoid of personal interpretation as much as possible, in part because it may be quoted in legal process and case reports down the line.” They said if a platform states one thing in a tip, but law enforcement discovers that is not the case, that could make it more difficult for law enforcement to prosecute, and could even ruin their case. Relatedly, a former platform employee said some platforms believe if they provide detailed information in their reports courts may find the reports inadmissible. Another platform employee said they avoid sharing such hunches for fear of it creating “some degree of liability [even if ] not legal liability” if they get it wrong

The report details how local prosecutors are also loath to bring cases, because it’s tricky to find a jury that can handle a CSAM case:

It is not just police chiefs who may shy away from CSAM cases. An assistant U.S. attorney said that potential jurors will disqualify themselves from jury duty to avoid having to think about and potentially view CSAM. As a result, it can take longer than normal to find a sufficient number of jurors, deterring prosecutors from taking such cases to trial. There is a tricky balance to strike in how much content to show jurors, but viewing content may be necessary. While there are many tools to mitigate the effect of viewing CSAM for law enforcement and platform moderators, in this case the goal is to ensure that those viewing the content understand the horror. The assistant U.S. attorney said that they receive victim consent before showing the content in the context of a trial. Judges may also not want to view content, and may not need to if the content is not contested, but seeing it can be important as it may shape sentencing decisions.

There are also issues outside the US with law enforcement. As noted in the first article, NCMEC has become the de facto global reporting center, because so many companies are based in the US and report there. And the CyberTipline tries to share out to foreign law enforcement too, but that’s difficult:

For example, in the European Union, companies’ legal ability to voluntarily scan for CSAM required the passage of a special exception to the EU’s so-called “ePrivacy Directive”. Plus, against a background where companies are supposed to retain personal data no longer than reasonably necessary, EU member states’ data retention laws have repeatedly been struck down on privacy grounds by the courts for retention periods as short as four or ten weeks (as in Germany) and as long as a year (as in France). As a result, even if a CyberTipline report had an IP address that was linked to a specific individual and their physical address at the time of the report, it may not be possible to retrieve that information after some amount of time.

Law enforcement agencies abroad have varying approaches to CyberTipline reports and triage. Some law enforcement agencies will say if they get 500 CyberTipline reports a year, that will be 500 cases. Another country might receive 40,000 CyberTipline reports that led to just 150 search warrants. In some countries the rate of tips leading to arrests is lower than in the U.S. Some countries may find that many of their CyberTipline reports are not violations of domestic law. The age of consent may be lower than in the U.S., for example. In 2021 Belgium received about 15,000 CyberTipline reports, but only 40% contained content that violated Belgium law

And in lower income countries, the problems can be even worse, including confusion about how the entire CyberTipline process works.

We interviewed two individuals in Mexico who outlined a litany of obstacles to investigating CyberTipline reports even where a child is known to be in imminent danger. Mexican federal law enforcement have a small team of people who work to process the reports (in 2023 Mexico received 717,468 tips), and there is little rotation. There are people on this team who have been viewing CyberTipline reports day in and day out for a decade. One respondent suggested that recent laws in Mexico have resulted in most CyberTipline reports needing to be investigated at the state level, but many states lack the know-how to investigate these tips. Mexico also has rules that require only specific professionals to assess the age of individuals in media, and it can take months to receive assessments from these individuals, which is required even if the image is of a toddler

The investigator also noted that judges often will not admit CyberTipline reports as evidence because they were provided proactively and not via a court order as part of an investigation. They may not understand that legally U.S. platforms must report content to NCMEC and that the tips are not an extrajudicial invasion of privacy. As a result, officers may need a court order to obtain information that they already have in the CyberTipline report, confusing platforms who receive requests for data they put in a report a year ago. This issue is not unique to Mexico; NCMEC staff told us that they see “jaws drop” in other countries during trainings when they inform participants about U.S. federal law that requires platforms to report CSAM.

NCMEC Itself

The report also details some of the limitations of NCMEC and the CyberTipline itself, some of which are legally required (and where it seems like the law should be updated).

There appears to be a big issue with repeat reports, where NCMEC needs to “deconflict” them, but has limited technology to do so:

Improvements to the entity matching process would improve CyberTipline report prioritization processes and detection, but implementation is not always as straightforward as it might appear. The current automated entity matching process is based solely on exact matches. Introducing fuzzy matching, which would catch similarity between, for example, bobsmithlovescats1 and bobsmithlovescats2, could be useful in identifying situations where a user, after suspension, creates a new account with an only slightly altered username. With a more expansive entity matching system, a law enforcement officer proposed that tips could gain higher priority if certain identifiers are found across multiple tips. This process, however, may also require an analyst in the loop to assess whether a fuzzy match is meaningful.

It is common to hear of instances where detectives received dozens of separate tips for the same offender. For instance, the Belgium Federal Police noted receiving over 500 distinct CyberTipline reports about a single offender within a span of five months. This situation can arise when a platform automatically submits a tip each time a user attempts to upload CSAM; if the same individual tries to upload the same CSAM 60 times, it could result in 60 separate tips. Complications also arise if the offender uses a Virtual Private Network (VPN); the tips may be distributed across different law enforcement agencies. One respondent told us that a major challenge is ensuring that all tips concerning the same offender are directed to the same agency and that the detective handling them is aware that these numerous tips pertain to a single individual.
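To make the “fuzzy matching” idea from the report a little more concrete, here’s a minimal sketch of the kind of near-duplicate check it describes, using only Python’s standard library. The usernames and the 0.9 threshold are hypothetical, and a real system would, as the report notes, likely keep an analyst in the loop to judge whether a match is meaningful.

```python
# Sketch of fuzzy entity matching for CyberTipline-style identifiers.
# The usernames and the threshold are made up for illustration.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two identifiers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

known_identifiers = ["bobsmithlovescats1"]   # identifiers seen in earlier tips
incoming = "bobsmithlovescats2"              # identifier on a newly received tip

for prior in known_identifiers:
    if incoming == prior:
        print(f"exact match with {prior}")   # all an exact-match system catches
    elif (score := similarity(prior, incoming)) >= 0.9:
        # An exact-match system would miss this entirely.
        print(f"possible match with {prior} (similarity {score:.2f}); flag for analyst review")
```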

As the report notes, there are a variety of challenges, both economic and legal, in enabling NCMEC to upgrade its technology:

First, NCMEC operates with a limited budget and as a nonprofit they may not be able to compete with industry salaries for qualified technical staff. The status quo may be “understandable given resource constraints, but the pace at which industry moves is a mismatch with NCMEC’s pace.” Additionally, NCMEC must also balance prioritizing improving the CyberTipline’s technical infrastructure with the need to maintain the existing infrastructure, review tips, or execute other non-Tipline projects at the organization. Finally, NCMEC is feeding information to law enforcement, which work within bureaucracies that are also slow to update their technology. A change in how NCMEC reports CyberTipline information may also require law enforcement agencies to change or adjust their systems for receiving that information.

NCMEC also faces another technical constraint not shared with most technology companies: because the CyberTipline processes harmful and illegal content, it cannot be housed on commercially available cloud services. While NCMEC has limited legal liability for hosting CSAM, other entities currently do not, which constrains NCMEC’s ability to work with outside vendors. Inability to transfer data to cloud services makes some of NCMEC’s work more resource intensive and therefore stymies some technical developments. Cloud services provide access to proprietary machine learning models, hardware-accelerated machine learning training and inference, on-demand resource availability and easier to use services. For example, with CyberTipline files in the cloud, NCMEC could more easily conduct facial recognition at scale and match photos from the missing children side of their work with CyberTipline files. Access to cloud services could potentially allow for scaled detection of AI-generated images and more generally make it easier for NCMEC to take advantage of existing machine learning classifiers. Moving millions of CSAM files to cloud services is not without risks, and reasonable people disagree about whether the benefits outweigh the risks. For example, using a cloud facial recognition service would mean that a third party service likely has access to the image. There are a number of pending bills in Congress that, if passed, would enable NCMEC to use cloud services for the CyberTipline while providing the necessary legal protections to the cloud hosting providers.

Platforms

And, yes, there are some concerns about the platforms. But while public discussion seems to focus almost exclusively on where people think that platforms have failed to take this issue seriously, the report suggests the failures of platforms are much more limited.

The report notes that it’s a bit tricky to get platforms up and running with CyberTipline reporting, and that while NCMEC does some onboarding, it’s very limited, in order to avoid the 4th Amendment concerns discussed above.

And, again, some of the problem with onboarding is due to outdated tech on NCMEC’s side. I mean… XML? Really?

Once NCMEC provides a platform with an API key and the corresponding manual, integrating their workflow with the reporting API can still present challenges. The API is XML-based, which requires considerably more code to integrate with than simpler JSON-based APIs and may be unfamiliar to younger developers. NCMEC is aware that this is an issue. “Surprisingly large companies are using the manual form,” one respondent said. One respondent at a small platform had a more moderate view; he thought the API was fine and the documentation “good.” But another respondent called the API “crap.”
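To give a rough sense of why an XML-based API means “considerably more code” than a JSON-based one, here’s a tiny, purely illustrative comparison in Python. The field names are invented and are not the actual CyberTipline schema; the point is only the extra ceremony XML tends to demand.

```python
# Hypothetical comparison: serializing the same made-up report as JSON vs. XML.
# These field names are NOT NCMEC's schema; they exist only for illustration.
import json
import xml.etree.ElementTree as ET

report = {"incidentType": "csam", "fileViewedByCompany": True, "ipAddress": "203.0.113.7"}

# JSON: one call; booleans and nesting are handled for you.
json_payload = json.dumps(report)

# XML: build the tree element by element and convert values to text yourself.
root = ET.Element("report")
for key, value in report.items():
    child = ET.SubElement(root, key)
    child.text = str(value).lower() if isinstance(value, bool) else str(value)
xml_payload = ET.tostring(root, encoding="unicode")

print(json_payload)   # {"incidentType": "csam", "fileViewedByCompany": true, ...}
print(xml_payload)    # <report><incidentType>csam</incidentType>...</report>
```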

There are also challenges under the law about what needs to be reported. As noted above and in the first article, that can often lead to over-reporting. But it can also make things difficult for companies trying to make determinations.

Platforms will additionally face policy decisions. While prohibiting illegal content is a standard approach, platforms often lack specific guidelines for moderators on how to interpret nuanced legal terms such as “lascivious exhibition.” This term is crucial for differentiating between, for example, an innocent photo of a baby in a bathtub, and a similar photo that appears designed to show the baby in a way that would be sexually arousing to a certain type of viewer. Trust and safety employees will need to develop these policies and train moderators.

And, of course, as has been widely discussed elsewhere, it’s not great that platforms have to hire human beings and expose them to this kind of content.

However, the biggest issue on reporting seems to not be a company’s unwillingness to do so, but how much information they pass along. And again, here, the issue is not so much unwillingness of the companies to be cooperative, but the incentives.

Memes and viral content pose a huge challenge for CyberTipline stakeholders. In the best case scenario, a platform checks the “Potential Meme” box and NCMEC automatically sends the report to an ICAC Task Force as “informational,” which appears to mean that no one at the Task Force needs to look at the report.

In practice, a platform may not check the “Potential Meme” box (possibly due to fixable process issues or minor changes in the image that change the hash value) and also not check the “File Viewed by Company” box. In this case NCMEC is unable to view the file, due to the Ackerman and Wilson decisions as discussed in Chapter 3. A Task Force could view the file without a search warrant and realize it is a meme, but even in that scenario it takes several minutes to close out the report. At many Task Forces there are multiple fields that have to be entered to close the report, and if Task Forces are receiving hundreds of reports of memes this becomes hugely time consuming. Sometimes, however, law enforcement may not realize the report is a meme until they have invested valuable time into getting a search warrant to view the report.
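Part of why the “Potential Meme” box goes unchecked is that a slightly modified copy of a known meme no longer matches the hash a platform has on file. Here’s a toy sketch of that failure mode using an exact cryptographic hash; real systems often rely on perceptual hashes (PhotoDNA-style), which tolerate some changes but can still miss crops, re-encodes, or overlays. The byte strings simply stand in for image data.

```python
import hashlib

# Stand-ins for image bytes: a "known meme" and a re-encoded or cropped copy.
known_meme = bytes([0, 10, 20, 30] * 1000)
modified_copy = bytes([0, 10, 20, 31] * 1000)  # a single byte differs

known_hash = hashlib.sha256(known_meme).hexdigest()
copy_hash = hashlib.sha256(modified_copy).hexdigest()

# The exact-match lookup fails, so nothing flags the file as a known meme,
# and the report lands at a Task Force as if it were novel content.
print(known_hash == copy_hash)  # False
```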

NCMEC recently introduced the ability for platforms to “batch report” memes after receiving confirmation from NCMEC that the meme is not actionable. This lets NCMEC label the whole batch as informational, which reduces the burden on law enforcement.

We heard about an example where a platform classified a meme as CSAM, but NCMEC (and at least one law enforcement officer we spoke to about this meme) did not classify it as CSAM. NCMEC told the platform they did not classify the meme as CSAM, but according to NCMEC the platform said because they do consider it CSAM they were going to continue to report it. Because the platform is not consistently checking the “Potential Meme” box, law enforcement are still receiving it at scale and spending substantial time closing out these reports.

There is a related challenge when a platform neglects to mark content as “viral”. Most viral images are shared in outrage, not with an intent to harm. However, these viral images can be very graphic. The omission of the “viral” label can lead law enforcement to mistakenly prioritize these cases, unaware that the surge in reports stems from multiple individuals sharing the same image in dismay.

We spoke to one platform employee about the general challenge of a platform deeming a meme CSAM while NCMEC or law enforcement agencies disagree. They noted that everyone is doing their best to apply the Dost test. Additionally, there is no mechanism to get an assurance that a file is not CSAM: “No one blesses you and says you’ve done what you need to do. It’s a very unsettling place to be.” They added that different juries might come to different conclusions about what counts as CSAM, and if a platform fails to report a file that is later deemed CSAM the platform could be fined $300,000 and face significant public backlash: “The incentive is to make smart, conservative decisions.”

This is all pretty fascinating, and suggests that while there may be ways to improve things, it’s difficult to structure things right and make the incentives align properly.

And, again, the same incentives pressure the platforms to just overreport, no matter what:

Once a platform integrates with NCMEC’s CyberTipline reporting API, they are incentivized to overreport. Consider an explicit image of a 22-year-old who looks like they could be 17: if a platform identified the content internally but did not file a report and it turned out to be a 17-year-old, they may have broken the law. In such cases, they will err on the side of caution and report the image. Platform incentives are to report any content that they think is violative of the law, even if it has a low probability of prosecution. This conservative approach will also lead to reports from what Meta describes as “non-malicious users”—for instance, individuals sharing CSAM in outrage. Although such reports could theoretically yield new findings, such as uncovering previously unknown content, it is more likely that they overload the system with extraneous reports.

All in all, the real lesson to be taken from this report is that this shit is super complicated, like all of trust & safety, and tradeoffs abound. But here it’s way more fraught than in most cases, in terms of the seriousness of the issue, the potential for real harm, and the potentially destructive criminal penalties involved.

The report has some recommendations, though they mostly seem to deal with things at the margins: increase funding for NCMEC, allow it to update its technology (and hire the staff to do so), and have some more information to help platforms get set up.

Of course, what’s notable is that this does not include things like “make platforms liable for any mistake they make.” This is because, as the report shows, most platforms already seem to take this stuff seriously, and the liability is already very clear, to the point that they are often over-reporting to avoid it. That over-reporting is actually making the results worse, because it overwhelms both NCMEC and law enforcement.

All in all, this report is a hugely important contribution to this discussion, and it provides a ton of real-world information about the CyberTipline that was previously known mostly to the people working on it, leaving many observers, media, and policymakers in the dark.

It would be nice if Congress read this report and understood the issues. However, when it comes to things like CSAM, expecting anyone to bother reading a big report and grappling with the tradeoffs and nuances is probably asking too much.

Posted on Techdirt - 25 April 2024 @ 09:29am

Our Online Child Abuse Reporting System Is Overwhelmed, Because The Incentives Are Screwed Up & No One Seems To Be Able To Fix Them

The system meant to stop online child exploitation is failing — and misaligned incentives are to blame. Unfortunately, today’s political solutions, like KOSA and STOP CSAM, don’t even begin to grapple with any of this. Instead, they prefer to put in place solutions that could make the incentives even worse.

The Stanford Internet Observatory has spent the last few months doing a very deep dive on how the CyberTipline works (and where it struggles). It has released a big and important report detailing its findings. In writing up this post about it, I kept adding more and more, to the point that I finally decided it made sense to split it up into two separate posts to keep things manageable.

This first post covers the higher-level issues: what the system is, why it works the way it does, and how the incentive structure of the system, even if created with good intentions, is completely messed up and has contributed to the problem. A follow-up post will cover the more specific challenges facing NCMEC itself, law enforcement, and the internet platforms themselves (who often take the blame for CSAM, when that seems extremely misguided).

There is a lot of misinformation out there about the best way to fight and stop the creation and spread of child sexual abuse material (CSAM). It’s unfortunate because it’s a very real and very serious problem. Yet the discussion about it is often so disconnected from reality as to be not just unhelpful, but potentially harmful.

In the US, the system that was set up is the CyberTipline, which is run by NCMEC, the National Center for Missing & Exploited Children. It’s a private non-profit, but it has a close connection with the US government, which helped create it. At times, there has been some confusion about whether or not NCMEC is a government agent. The entire setup was designed to keep it non-governmental, to avoid any 4th Amendment issues with the information it collects, but courts haven’t always seen it that way, which makes things tricky (even as the 4th Amendment is important).

And while the system was designed for the US, it has become a de facto global system, since so many of the companies are US-based, and NCMEC will, when it can, send relevant details to foreign law enforcement as well (though, as the report details, that doesn’t always work well).

The main role CyberTipline has taken on is coordination. It takes in reports of CSAM (mostly, but not entirely, from internet platforms) and then, when relevant, hands off the necessary details to the (hopefully) correct law enforcement agency to handle things.

Companies that host user-generated content have certain legal requirements to report CSAM to the CyberTipline. As we discussed in a recent podcast, this role as a “mandatory reporter” is important in providing useful information to allow law enforcement to step in and actually stop abusive behavior. Because of the “government agent” issue, it would be unconstitutional to require social media platforms to proactively search for and identify CSAM (though many do use tools to do this). However, if they do find some, they must report it.

Unfortunately, the mandatory reporting has also allowed the media and politicians to use the number of reports sent in by social media companies in a misleading manner, suggesting that the mere fact that these companies find and report to NCMEC means that they’re not doing enough to stop CSAM on their platforms.

This is problematic because it creates a dangerous incentive, suggesting that internet services should actually not report CSAM they found, as politicians and the media will falsely portray a lot of reports as being a sign of a failure by the platforms to take this seriously. The reality is that the failure to take things seriously comes from the small number of platforms (Hi Telegram!) who don’t report CSAM at all.

Some of us on the outside have thought that the real issue was on the receiving end: that NCMEC and law enforcement had not been able to do enough productive work with those reports. It seemed convenient for the media and politicians to just blame social media companies for doing what they’re supposed to do (reporting CSAM), ignoring that what happens on the back end of the system might be the real problem. That’s why things like Senator Ron Wyden’s Invest in Child Safety Act seemed like a better approach than things like KOSA or the STOP CSAM Act.

That’s because the approach of KOSA/STOP CSAM and some other bills is basically to add liability to social media companies. (These companies already do a ton to prevent CSAM from appearing on the platform and alert law enforcement via the CyberTipline when they do find stuff.) But that’s useless if those receiving the reports aren’t able to do much with them.

What becomes clear from this report is that while there are absolutely failures on the law enforcement side, some of that is effectively baked into the incentive structure of the system.

In short, the report shows that the CyberTipline is very helpful in engaging law enforcement to stop some child sexual abuse, but it’s not as helpful as it might otherwise be:

Estimates of how many CyberTipline reports lead to arrests in the U.S. range from 5% to 7.6%

This number may sound low, but I’ve been told it’s not as bad as it sounds. First of all, when a large number of the reports are for content that is overseas and not in the US, it’s more difficult for law enforcement here to do much about it (though, again, the report details some suggestions on how to improve this). Second, some of the content may be very old, where the victim was identified years (or even decades) ago, and where there’s less that law enforcement can do today. Third, there is a question of prioritization, with it being a higher priority to target those directly abusing children. But, still, as the report notes, almost everyone thinks that the arrest number could go higher if there were more resources in place:

Empirically, it is unknown what percent of reports, if fully investigated, would lead to the discovery of a person conducting hands-on abuse of a child. On the one hand, as an employee of a U.S. federal department said, “Not all tips need to lead to prosecution […] it’s like a 911 system.” On the other hand, there is a sense from our respondents—who hold a wide array of beliefs about law enforcement—that this number should be higher. There is a perception that more than 5% of reports, if fully investigated, would lead to the discovery of hands-on abuse.

The report definitely suggests that if NCMEC had more resources dedicated to the CyberTipline, it could be more effective:

NCMEC has faced challenges in rapidly implementing technological improvements that would aid law enforcement in triage. NCMEC faces resource constraints that impact salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams.

There appear to be opportunities to enrich CyberTipline reports with external data that could help law enforcement more accurately triage tips, but NCMEC lacks sufficient technical staff to implement these infrastructure improvements in a timely manner. Data privacy concerns also affect the speed of this work.

But, before we get into the specific areas where things can be improved in the follow-up post, I thought it was important to highlight how the incentives of this system contribute to the problem, where there isn’t necessarily an easy solution.

While companies (Meta, mainly, since it represents, by a very wide margin, the largest number of reports to the CyberTipline) keep getting blamed for failing to stop CSAM because of their large number of reports, most companies have very strong incentives to report anything they find. This is because the cost of not reporting something they should have reported is massive (criminal penalties), whereas the cost of over-reporting is nothing to the companies. That means there’s a built-in problem of over-reporting.

Of course, there is a real cost here. CyberTipline employees get overwhelmed, and that can mean that reports that should get prioritized and passed on to law enforcement don’t. So you can argue that while the cost of over-reporting is “nothing” to the companies, the cost to victims and society at large can be quite large.

That’s an important mismatch.

But the broken incentives go further as well. When NCMEC hands off reports to law enforcement, they often go through a local ICAC (Internet Crimes Against Children) Task Force, which helps triage them and find the right state or local law enforcement agency to handle each report. Law enforcement agencies that are “affiliated” with ICACs receive special training on how to handle reports from the CyberTipline. But, apparently, at least some of them feel that it’s just too much work or, in some cases, too burdensome to investigate. As a result, some law enforcement agencies are choosing not to affiliate with their local ICACs to avoid the added work, and some have even “unaffiliated” themselves from the local ICAC because they just don’t want to deal with it.

In some cases, there are even reports of law enforcement unaffiliating with an ICAC out of a fear of facing liability for not investigating an abused child quickly enough.

A former Task Force officer described the barriers to training more local Task Force affiliates. In some cases local law enforcement perceive that becoming a Task Force affiliate is expensive, but in fact the training is free. In other cases local law enforcement are hesitant to become a Task Force affiliate because they will be sent CyberTipline reports to investigate, and they may already feel like they have enough on their plate. Still other Task Force affiliates may choose to unaffiliate, perceiving that the CyberTipline reports they were previously investigating will still get investigated at the Task Force, which further burdens the Task Force. Unaffiliating may also reduce fear of liability for failing to promptly investigate a report that would have led to the discovery of a child actively being abused, but the alternative is that the report may never be investigated at all.

[….]

This liability fear stems from a case where six months lapsed between the regional Task Force receiving NCMEC’s report and the city’s police department arresting a suspect (the abused children’s foster parent). In the interim, neither of the law enforcement agencies notified child protective services about the abuse as required by state law. The resulting lawsuit against the two police departments and the state was settled for $10.5 million. Rather than face expensive liability for failing to prioritize CyberTipline reports ahead of all other open cases, even homicide or missing children, the agency might instead opt to unaffiliate from the ICAC Task Force.

This is… infuriating. Cops choosing not to affiliate (i.e., not to get the training that would let them help) or removing themselves from an ICAC Task Force because they’re afraid they might get sued if they fail to save abused kids quickly enough is ridiculous. It’s yet another example of cops running away, rather than doing the job they’re supposed to be doing, but which they claim they have no obligation to do.

That’s just one problem of many in the report, which we’ll get into in the second post. But, on the whole, it seems pretty clear that with the incentives this far out of whack, bills like KOSA or STOP CSAM aren’t going to be of much help. Actually tackling the underlying issues, the funding, the technology, and (most of all) the incentive structures, is what’s necessary.

Posted on Techdirt - 24 April 2024 @ 03:39pm

Universal Music’s Copyright Claim: 99 Problems And Fair Use Ain’t One

Welp, sometimes you gotta read Techdirt fast, or you just might miss something. And sometimes it’s because Universal Music is acting to silence creativity yet again. Yesterday, we posted about how Dustin Ballard, the creative genius behind There I Ruined It, who makes very funny parody songs, had posted a lengthy disclaimer on his latest YouTube upload.

The video was the Beach Boys covering Jay-Z’s “99 Problems,” with every bit of it (minus the lyrics) sounding like a classic Beach Boys song. What made it interesting to us at Techdirt was the long and convoluted disclaimer included in the video explaining that the song is parody fair use, which is supposed to be allowed under copyright law.


But, sometime after that story got posted, Universal Music stepped in and decided to ruin the fun, in the only way Universal Music knows how to act: by being a copyright bully where it has no need or right to be.


Now, this is likely an automated copyright claim using ContentID or something similar, rather than a full DMCA takedown. But, it’s bullshit either way. Universal Music knows full well that it’s supposed to take fair use into account before issuing a copyright claim. Remember, Universal Music lost a lawsuit over its bogus copyright claims where it was told that it had to take fair use into account before sending such claims.

But, alas, none of that matters the way the system works today. It’s more important for YouTube to keep Universal Music happy than to serve the content creators on YouTube or the people who want to enjoy this music.

And, thus, as was discussed in the podcast we just uploaded, copyright remains a powerful tool of censorship.

I’m almost hesitant to point this out, for fear that some asshole at Universal Music will read this and continue on their warpath of culture destruction, but you can still hear versions of the Beach Boys doing “99 Problems” on both Instagram and TikTok (at least until TikTok is banned). The versions on those two sites are a bit shorter than the full YouTube version, and they also cut the copyright disclaimer short.

But, really, this is yet another example of how totally broken the copyright system is. There is no conceivable reason for removing this. It’s not taking anything. It’s not making the Beach Boys or Jay-Z lose any money (and, ditto for Universal Music). If anything, it’s making people more interested in the underlying songs and artists (no one is interested in fucking Universal Music, though).

Fair use is supposed to be the valve by which the copyright system doesn’t violate the First Amendment. But when we see copyright wielded as a censorial weapon like this, with no real recourse for the artist, it should raise serious questions about why we allow copyright to act this way in the first place.

Posted on Techdirt - 24 April 2024 @ 12:05pm

Biden Signs TikTok Ban Bill; Expect A Lawsuit By The Time You Finish Reading This Article

Get your dance moves on now, as TikTok may be going away. Okay, it’s not going away that quickly and quite possibly isn’t going away at all, but President Biden signed the bill this morning that requires ByteDance to divest itself from TikTok, or have the app banned from the Apple and Google app stores.

The law gives ByteDance 12 months to divest, but in all likelihood sometime today or tomorrow, TikTok will file a well-prepared lawsuit with high-priced lawyers challenging the law on a variety of different grounds, including the First Amendment.

As you’ve probably heard, the bill was tacked onto a foreign aid funding bill, and there was no way the President wasn’t going to sign that bill. But as ridiculous as it is to tack a TikTok ban onto foreign spending support, Biden had made it clear he supported the TikTok ban anyway. Still, it does seem notable that, when signing the bill, Biden didn’t even mention the TikTok ban in his remarks.

We’ve discussed this a few times before, but the move to ban TikTok is particularly stupid. It demonstrates American hypocrisy regarding its advocacy for an open internet. It goes against basic First Amendment principles. It overreacts to a basic moral panic. And it does fuck all to stop the actual threats that people justifying the ban talk about (surveillance and manipulation/propaganda).

It’s particularly stupid to do this now, just as Congress was finally willing to explore a comprehensive privacy bill.

The NY Times has a big article about the “behind the scenes negotiations” that resulted in this bill that (bizarrely) makes it sound like the TikTok bill is an example of Congress working well:

For nearly a year, lawmakers and some of their aides worked to write a version of the bill, concealing their efforts to avoid setting off TikTok’s lobbying might. To bulletproof the bill from expected legal challenges and persuade uncertain lawmakers, the group worked with the Justice Department and White House.

And the last stage — a race to the president’s desk that led some aides to nickname the bill the “Thunder Run” — played out in seven weeks from when it was publicly introduced, remarkably fast for Washington.

This leaves out some fairly important elements, including powerful lobbying by companies like Meta (who were clearly threatened by TikTok) to spread a moral panic about the app. It also leaves out the massive financial conflicts of many of the lawmakers who pushed for this bill.

Either way, the bill is going to get challenged and quickly. Previous attempts to ban TikTok (one by former President Trump and one by Montana) were both rejected as violations of the First Amendment.

While this bill is written more carefully to try to avoid that fate, it’s all a smokescreen, as the underlying concerns still very much implicate the First Amendment. The only real question is whether or not the outrage and moral panic about “CHINA CONTROLS THIS APP!!!!” will lead judges to make exceptions in this case.

The bill still has fundamental free speech problems. First of all, banning users from accessing content raises serious First Amendment questions. Second, requiring an app store to stop offering an app raises different First Amendment questions. Yes, there are cases when the US can force divestiture, but the remedies in this bill raise serious concerns and would create a very problematic precedent allowing future Presidents to effectively ban apps they dislike or possibly force their sale to “friends.” And that’s not even getting into what it does in terms of justifying censorship and app banning elsewhere.

Posted on Techdirt - 24 April 2024 @ 09:31am

FTC Bans Non-Competes, Sparks Instant Lawsuit: The War For Worker Freedom

This is a frustrating article to write. The FTC has come out with a very good and important policy ruling, but I’m not sure it has the authority to do so. The legal challenge (that was filed basically seconds after the rule came out) could do way more damage not just to some fundamental parts of the administrative state, but to the very underlying policy that the FTC is trying to enact: protecting the rights of workers to switch jobs and not be effectively tied to an employer in modern-day indentured servitude with no realistic ability to leave.

All the way back in 2007, I wrote about how non-competes were the DRM of human capital. They were an artificial manner of restricting a basic freedom, and one that served no real purpose other than to make everything worse. As I discussed in that post, multiple studies done over the previous couple of decades had more or less shown that non-competes are a tremendous drag on innovation, to the point that some argue (strongly, with data) that Silicon Valley would not be Silicon Valley if not for the fact that California has deemed non-competes unenforceable.

The evidence of non-competes being harmful to the market, to consumers, and to innovation is overwhelming. It’s not difficult to understand why. Studies have shown that locking up information tends to be harmful to innovation. The big, important, innovative breakthroughs happen when information flows freely throughout an industry, allowing different perspectives to be brought into the process. Over and over again, it’s been shown that those big breakthroughs come when information is shared and multiple companies are trying to tackle the underlying problem.

But you don’t want companies colluding. Instead, it’s much better to simply ban non-competes, as it allows workers to switch jobs. This allows for more of a free flow of information between companies, which contributes to important innovations, rather than stagnant groupthink. The non-competes act as a barrier to the free flow of information, which holds back innovation.

They’re really bad. It’s why I’ve long supported states following California’s lead in making them unenforceable.

And, of course, once more companies realized the DRM-ish nature of non-competes, they started using them for more and more evil purposes. This included, somewhat infamously, fast food workers being forced to sign non-competes. Whatever (weak) justification there might be for higher-end knowledge workers to sign non-competes, the idea of using them for low-end jobs is pure nonsense.

Non-competes should be banned.

But, when the FTC proposed banning non-competes last year, I saw it as a mixed bag. I 100% support the policy goal. Non-competes are actively harmful and should not be allowed. But (1) I’m not convinced the FTC actually has the authority to ban them across the board. That should be Congress’ job. And, (2) with the courts the way they are today, there’s a very high likelihood that any case challenging such an FTC rule would not just get tossed, but that the FTC may have its existing authority trimmed back even further.

Yesterday, the FTC issued its final rule on non-competes. The rule bans all new non-competes and voids most existing non-competes, with the one exception being existing non-competes for senior executives (those making over $151,164 and who are in “policy-making positions”).

The rule is 570 pages long, with much of it trying to make the argument for why the FTC actually has this authority. And all those arguments are going to be put to the test. Very shortly after the new rule dropped (long before anyone could have possibly read the 570 pages), a Texas-based tax services company, Ryan LLC, filed a lawsuit.

The timing, the location, and the lawyers all suggest this was clearly planned out. The case was filed in the Northern District of Texas. It was not, as many people assumed, assigned to Judge Matthew Kacsmaryk, the judge-shopping favorite for national injunctions. Instead, it went to Judge Ada Brown. The law firm filing the case is Gibson Dunn, which is one of the law firms you choose when you’re planning to go to the Supreme Court. One of the lawyers is Gene Scalia, son of the late Supreme Court Justice Antonin Scalia.

Also notable, as pointed out by a lawyer on Bluesky, is that the General Counsel of Ryan LLC clerked for Samuel Alito (before Alito went to the Supreme Court) and is married to someone who clerked for both Justices Alito and Thomas. She also testified before the Senate in support of Justice Gorsuch during his nomination.

The actual lawsuit doesn’t just seek to block the rule. It is basically looking to destroy what limited authority the FTC has. The main crux of the argument is on more firm legal footing, claiming that this rule goes beyond the FTC’s rulemaking authority:

The Non-Compete Rule far exceeds the Commission’s authority under the FTC Act. The Commission’s claimed statutory authority—a provision allowing it “[f]rom time to time” to “classify corporations and . . . make rules and regulations,” 15 U.S.C. § 46(g)—authorizes only procedural rules, as the Commission itself recognized for decades. This is confirmed by, among other statutory features, Congress’s decision to adopt special procedures for the substantive rulemaking authority it did grant the Commission, for rules on “unfair or deceptive acts or practices.”

I wish this weren’t the case, because I do think non-competes should be banned, but this argument may be correct. Congress should make this decision, not the FTC.

However, the rest of the complaint is pretty far out there. It’s making a “major questions doctrine” argument here, which has become a recent favorite among the folks looking to tear down the administrative state. It’s not worth going deep on this, other than to say that this doctrine suggests that if an agency is claiming authority over “major questions,” it has to show that it has clear (and clearly articulated) authority to do so from Congress.

Is stopping the local Subway from banning sandwich makers from working at the McDonald’s down the street a “major question”? Well, the lawsuit insists that it is.

Moreover, even if Congress did grant the Commission authority to promulgate some substantive unfair-competition rules, it did not invest the Commission with authority to decide the major question of whether non-compete agreements are categorically unfair and anticompetitive, a matter affecting tens of millions of workers, millions of employers, and billions of dollars in economic productivity.

And then the complaint takes its big swing: the whole FTC is unconstitutionally structured.

Compounding the constitutional problems, the Commission itself is unconstitutionally structured because it is insulated from presidential oversight. The Constitution vests the Executive Power in the President, not the Commission or its Commissioners. Yet the FTC Act insulates the Commissioners from presidential control by restricting the President’s ability to remove them, shielding their actions from appropriate political accountability.

This is taking a direct shot at multiple parts of the administrative state, where Congress (for very good reasons!!!) set up some agencies to be independent agencies. They were set up to be independent to distance them from political pressure (and culture war nonsense). While the President can nominate commissioners or directors, they have limited power over how those independent agencies operate.

This lawsuit is basically attempting to say that all independent agencies are unconstitutional. This is one hell of a claim, and would do some pretty serious damage to the ability of the US government to function. Things that weren’t that political before would become political, and it would be a pretty big mess.

But that’s what Ryan LLC (or, really, the lawyers planning this all out) are gunning for.

The announcement that Ryan LLC put out is also… just ridiculous.

“For more than three decades, Ryan has served as a champion for empowering business leaders to reinvest the tax savings our firm has recovered to transform their businesses,” the firm’s Chairman and CEO, G. Brint Ryan, said in a statement. “Just as Ryan ensures companies pay only the tax they owe, we stand firm in our commitment to serve the rightful interest of every company to retain its proprietary formulas for success taught in good faith to its own employees.”

Um. That makes no sense. The FTC ruling does not outlaw NDAs or trade secret laws. Those are what protect “proprietary formulas.” So, the concern that Mr. Ryan is talking about here is wholly unrelated to the rule.

Last spring, Ryan “sought to dissuade” the FTC from imposing the new rule by submitting a 54-page public comment against it. In the comment, Ryan called non-compete agreements “an important tool for firms to protect their IP and foster innovation,” saying that without them, firms could hire away a competitor’s employees just to gain insights into their competitor’s intellectual property. Ryan added that the rule would inhibit firms from investing in that IP in the first place, “resulting in a less innovative economy.”

Again, almost everything said here is bullshit. They can still use NDAs (and IP laws) to protect their “IP.” That’s got nothing to do with non-competes.

As for the claim that it will result in “a less innovative economy,” I’ll just point to the fact that California remains the most innovative economy in the US and hasn’t allowed non-competes. Every single study on non-competes has shown that they hinder innovation. So Ryan LLC and its CEO are full of shit, but that shouldn’t be much of a surprise.

Anyway, this whole thing is a stupid mess. Non-competes should be banned because they’re awful for innovation and employee freedom. But it should be Congress banning them, not the FTC. Now that the FTC has moved forward with this rule anyway, it’s facing an obviously planned-out lawsuit, filed in the Northern District of Texas with friendly judges and with the 5th Circuit appeals court ready to bless any nonsense you can think of.

And, of course, it’s happening at a time when the Supreme Court majority has made it pretty clear that dismantling the entire administrative state is something it looks forward to doing. This means there’s a pretty clear path in the courts for the FTC to lose here, and lose big time. One hopes that if the courts are leaning in this direction, they would simply strike down this rule, rather than effectively striking down the FTC itself. But these days, who the fuck knows how these cases will go.

And even just on the issue of non-competes, my fear is that this effort sets back the entire momentum behind banning them. If the courts strike down the FTC rule, many will see it as open season to expand the use of non-competes, and the FTC would likely be stripped of any power to challenge even the most egregious, anti-competitive ones.

Non-competes should be banned. But the end result of this rule could be that they end up being used more widely. And that would really suck.

Posted on Techdirt - 23 April 2024 @ 01:38pm

When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken

In a world where copyright law has run amok, even creating a silly parody song now requires a massive legal disclaimer to avoid getting sued. That’s the absurd reality we live in, as highlighted by the brilliant musical parody project “There I Ruined It.”

Musician Dustin Ballard creates hilarious videos, some of which reimagine popular songs in the style of wildly different artists, like Simon & Garfunkel singing “Baby Got Back” or the Beach Boys covering Jay-Z’s “99 Problems.” He appears to create the music himself, including singing the vocals, but uses an AI tool to adjust the vocal styles to match the artist he’s trying to parody. The results are comedic gold. However, Ballard felt the need to plaster his latest video with paragraphs of dense legalese just to avoid frivolous copyright strikes.

When our intellectual property system is so broken that it stifles obvious works of parody and creative expression, something has gone very wrong. Comedy and commentary are core parts of free speech, but overzealous copyright law is allowing corporations to censor first and ask questions later. And that’s no laughing matter.

If you haven’t yet watched the video above (and I promise you, it is totally worth it to watch), the last 15 seconds involve this long scrolling copyright disclaimer. It is apparently targeted at the likely mythical YouTube employee who might read it in assessing whether or not the song is protected speech under fair use.


And here’s a transcript:

The preceding was a work of parody which comments on the perceived misogynistic lyrical similarities between artists of two different eras: the Beach Boys and Jay-Z (Shawn Corey Carter). In the United States, parody is protected by the First Amendment under the Fair Use exception, which is governed by the factors enumerated in section 107 of the Copyright Act. This doctrine provides an affirmative defense for unauthorized uses that would otherwise amount to copyright infringement. Parody aside, copyrights generally expire 95 years after publication, so if you are reading this in the 22nd century, please disregard.

Anyhoo, in the unlikely event that an actual YouTube employee sees this, I’d be happy to sit down over coffee and talk about parody law. In Campell v. Acuff-Rose Music Inc, for example, the U.S. Supreme Court allowed for 2 Live Crew to borrow from Roy Orbison’s “Pretty Woman” on grounds of parody. I would have loved to be a fly on the wall when the justices reviewed those filthy lyrics! All this to say, please spare me the trouble of attempting to dispute yet another frivolous copyright claim from my old pals at Universal Music Group, who continue to collect the majority of this channel’s revenue. You’re ruining parody for everyone.

In 2024, you shouldn’t need to have a law degree to post a humorous parody song.

But, that is the way of the world today. The combination of the DMCA’s “take this down or else” and YouTube’s willingness to cater to big entertainment companies with the way ContentID works allows bogus copyright claims to have a real impact in all sorts of awful ways.

We’ve said it before: copyright remains the one tool that allows for the censorship of content, but it’s supposed to only be applied to situations of actual infringement. But because Congress and the courts have decided that copyright is in some sort of weird First Amendment free zone, it allows for the removal of content before there is any adjudication of whether or not the content is actually infringing.

And that has been a real loss to culture. There’s a reason we have fair use. There’s a reason we allow people to create parodies. It’s because it adds to and improves our cultural heritage. The video above (assuming it’s still available) is an astoundingly wonderful cultural artifact. But it’s one that is greatly at risk due to abusive copyright claims.

Let’s also take this one step further. Tennessee just recently passed a new law, the ELVIS Act (Ensuring Likeness Voice and Image Security Act). This law expands the already problematic space of publicity rights based on a nonsense moral panic about AI and deepfakes. Because there’s an irrational (and mostly silly) fear of people taking the voice and likeness of musicians, this law broadly outlaws that.

While the ELVIS Act has an exemption for works deemed to be “fair use,” as with the rest of the discussion above, copyright law today seems to (incorrectly, in my opinion) take a “guilty until proven innocent” approach to copyright and fair use. That is, everything is set up to assume it’s infringing unless you can convince a court that it’s fair use, and that leads to all sorts of censorship.

So even if I think the video above is obviously fair use, if the Beach Boys decided to try to make use of the ELVIS Act to go after “There I Ruined It,” would it actually even be worth it for them to defend the case? Most likely not.

And thus, another important avenue and marker of culture gets shut down. All in the name of what? Some weird, overly censorial belief in “control” over cultural works that are supposed to be spread far and wide, because that’s how culture becomes culture.

I hope that Ballard is able to continue making these lovely parodies and that they are able to be shared freely and widely. But just the fact that he felt it necessary to add that long disclaimer at the end really highlights just how stupid copyright has become and how much it is limiting and distorting culture.

You shouldn’t need a legal disclaimer just to create culture.
