Cathy Gellis's Techdirt Profile


Posted on Techdirt - 1 July 2022 @ 12:08pm

Because Vulnerable People Need Section 230, The Copia Institute Filed This Amicus Brief At The Eleventh Circuit

It is utterly and completely irrational for people who defend the vulnerable to call for the destruction of Section 230. Section 230 helps protect vulnerable people by making it possible to speak out against those who would hurt them. Weakening the critically important protection it provides online platforms would only weaken these platforms’ ability to provide an outlet for vulnerable people’s critically important expression, and thus in turn weaken them.

Which is why we filed an amicus brief in the case of M.H. v. Omegle. As with most of these cases challenging the application of Section 230, something terrible happened to someone online – in this case, the sexual abuse of a minor. But this litigation does not seek to hold the abuser responsible; instead it targets the online platform that was used – a platform used, of course, by plenty of people not abusing each other.
The district court correctly found that Section 230 barred these claims. After all, the abuse in question was content created by a user, not the platform. And Section 230 exists to make sure that only the users who create wrongful content are held responsible for it, and not the platforms they used, because there is simply no way for platforms to answer for the almost infinite amount of user content that could be wrongful in an almost infinite number of ways. If they had to answer for any of it, they would likely have to refuse all of it (or at least plenty of perfectly legal and beneficial, or even necessary, expression).

But the plaintiffs didn’t like the district court’s answer, so they appealed to the Eleventh Circuit. The Copia Institute then filed its amicus brief to explain to the court what is at stake if, to try to help this very sympathetic plaintiff, it reverses the decision and finds that Section 230 does not bar these claims. The upshot: much more trouble for future sympathetic plaintiffs, who will lose their ability to speak online safely, if not entirely, as platforms go out of business, refuse more user expression, or stop moderating any of it, which would leave their communities cesspools of even more abuse. And we know this dire prediction is true because we’ve already seen it happen where Section 230 has been weakened or otherwise made unavailable. As we’ve seen play out in the wake of FOSTA in particular, Section 230 does critically important work staving off the sort of dire future where vulnerable people lose all ability to safely use online systems to strengthen their position or even just call for help.

Again, it is a very odd thing for advocates of the vulnerable to call for more of this. And so we hope the court will take heed. Not only would allowing these claims to go forward upset the policy balance Congress carefully struck when it passed Section 230, for good reasons that remain as valid today as they were back then, but it would outright hurt the very people these advocates claim to help. To protect the vulnerable, we need to protect Section 230.

Posted on Techdirt - 28 June 2022 @ 12:12pm

Wherein The Copia Institute Tells The Supreme Court Not To Let Copyright Law Destroy Free Expression, A Rare Right We Ostensibly Have Left

I had to rewrite this post before it got published. I originally began it with some whimsy in response to the absurdity that copyright cases like these always engender. The idea that people could ever use their rights in previous expression to forbid someone else’s subsequent expression is almost too absurd to take seriously as an articulation of law. And, according to the Supreme Court, at least in the past, it wasn’t the law. Fair use is supposed to allow people to use pre-existing expression to say new things. In fact, if the new expression does say new things, then it absolutely should be found fair use.

In other words, the Second Circuit got things very wrong in the Andy Warhol/Prince prints case, and also the Ninth Circuit in the ComicMix/Dr. Seuss case. And so the Copia Institute filed an amicus brief at the Supreme Court, which agreed to review the Second Circuit’s decision, to say so.

But in light of the Supreme Court’s most recent decisions, I had to take out the whimsy. Assuming that Constitutional rights can survive this Court’s review has become an iffy proposition and not one where any lightheartedness can be tolerated. Our brief was all about pointing out how free speech is chilled when fair uses are prohibited, and how, if the Court would like not to see that constitutional right extinguished too, it needs to overturn the decision from the Second Circuit.

In that decision, issued last year, the Second Circuit found that Andy Warhol’s Prince prints did not constitute a fair use of Lynn Goldsmith’s photograph of the musician Prince. But the problem with that decision isn’t just what it means for Warhol, or for the Andy Warhol Foundation for the Visual Arts (AWF) that now controls the rights in his works, but what it means for everyone, because finding his work wasn’t fair use would mean that many fewer works could ever be fair uses in the future.

And such a reality would conflict with what the Supreme Court has previously said about fair use. Sadly, even when it comes to copyright, the Supreme Court has had a few absolute clunkers of decisions, like Aereo (“smells like cable!”), Golan (snatching works back from the public domain), and Eldred (okaying the extension of copyright terms beyond all plausible usefulness). But even in those last two cases the Court still managed to reaffirm that copyright law was always supposed to comport with the First Amendment, and that fair use was a mechanism baked into copyright to ensure copyright vindicated those values. And the Court has since reiterated how expansive fair use must be to vindicate them, most notably in last year’s Google v. Oracle case, which reaffirmed its earlier fair use-protecting decision in Campbell v. Acuff-Rose (involving the 2 Live Crew parody of “Pretty Woman”).

Unfortunately, however, the Second Circuit’s decision was out of step with both of those fair use decisions, which is why AWF petitioned for Supreme Court review, probably a big reason why review was granted, and why the Copia Institute has now weighed in to support AWF’s position with our own amicus brief.

In our brief we made the point that copyright law has to be consistent with two constitutional provisions: the Progress Clause, which gives Congress the authority to pass law that “promotes the progress of science and the useful arts,” and the First Amendment, which prohibits Congress from passing a law that impinges on free expression. As long as copyright law promotes expression, it is potentially constitutional, but if it impinges on expression, then it cannot be constitutional under either provision. (We also pointed to Justice Breyer’s dissents in Golan and Eldred, which cogently and persuasively made these points, because with him leaving the Court this month those dissents are the only way he can continue to speak to the Court’s future consideration of such an important question of free expression.) The issue in this case, however, is not that Congress tried to make an unconstitutional copyright-related law, but that the Second Circuit interpreted the law Congress made in a way that rendered it unconstitutional: its limiting read of the fair use provision now stands to chill myriad future expression, which even the majority decision in Eldred cast aspersions on courts doing.

We also pointed out how chilling to new expression such a rule would be by citing the Ninth Circuit’s even more terrible decision in the ComicMix case, which, like the Second Circuit’s, found the fair use provision to be much more narrowly applicable to new expression than the Supreme Court had, and we used that case to help illustrate why the Second Circuit’s reasoning was so untenable. In particular, both decisions discounted the degree to which the original works were transformed to convey new meanings not present in the originals, extended the exclusive powers of a copyright holder far beyond what the statute itself authorizes, and threatened to choke off new expression building on previous works for generations, given the extraordinary length of copyright terms. As the ComicMix case illustrated so saliently, if this be the rule, then the dead have the power to gag the living, and that reality cannot possibly be consistent with a law designed to foster the creation of new expression.

Then we concluded by noting that it’s a fallacy to presume that giving more and more power to a copyright holder translates into more expression. Not only is there plenty of evidence to show that more copyright power is unnecessary for stimulating more expression, but, as these cases illustrate, more power will ultimately result in even less.

Other amicus briefs are available on the Supreme Court’s docket page. We now await the response from Goldsmith and her amici, and oral argument, currently scheduled for October 12. And then, assuming precedent and actual constitutional text still matter at all, hopefully a decision reversing the Second Circuit and reaffirming the right to free expression that the fair use doctrine is supposed to protect.

Posted on Techdirt - 13 June 2022 @ 01:04pm

With The INFORM Act, Congress Plans To Empower Ken Paxton To Go After Amazon If It Doesn’t Tell Him Who Sold The Books He Doesn’t Like

Don’t think this headline is hyperbole; as this post will explain, it is not.

But what follows here isn’t just about books, Amazon, or even Paxton himself. What the headline captures is but one example of the catastrophic upshot of the long-concerning INFORM Act bill, should it pass, as may now happen now that it has been shoved into the politically popular (and ridiculously enormous) United States Innovation and Competition Act that awaits passage [skip to Section 20213 in the linked document to find the bits on INFORM] – despite the INFORM Act having nothing to do with helping America compete in the global economy (except insofar as a law like it tends to make that more difficult).

In short, it is a law with minimal merit but great potential for mischief given how it is currently drafted.  And in an age where government officials and others are openly eager to go after people for things they have said, it seems a certainty that, if enacted with this language, these concerns will be realized and result in serious expressive harm.

To understand why, it is important to recognize what the INFORM Act is for: to identify marketplace sellers.  To an extent, such a policy might seem to make sense because it helps sellers be held accountable.  As we regularly argue, it is better to hold sellers liable for things that go wrong with their sales than hold marketplaces liable for their sellers.  When law tries to do things the other way around and make marketplaces liable for their sellers, it creates an unmanageable risk of liability, which makes it hard for them to offer their marketplace services to any sellers (including those of perfectly safe products).  And that’s bad for smaller, independent sellers, who need marketplaces to get their products to consumers, as well as consumers who benefit from having more choices of sellers to buy from, which online marketplaces allow them to have.

So if you are going to hold sellers liable, having some way to know whom to hold accountable in the event that liability needs to be pursued could make some logical sense. On the other hand, it is not clear that a rule mandating the identification of sellers is necessary, because consumers could use their ability to identify a seller as a factor in their purchasing decisions. Consumers could choose to buy from a seller who voluntarily provided identifying information over one who didn’t but who may be selling the product more cheaply, and they could make that purchasing decision based on whether it is worth it to them to pay a little more for more accountability, or to pay less and take the chance that there may be no recourse if something goes wrong.

It is a paternalistic Congress that would insist on taking away that choice entirely. And a regulation that puts legal pressure on marketplaces to force sellers to identify themselves, if marketplace platforms are going to be able to support any sellers at all, effectively reintroduces marketplace liability. Even if requiring seller identification might sometimes be a best practice for marketplaces to choose to adopt (and a basis upon which consumers could choose which marketplaces to shop from), it is something else entirely for law to demand it. There are often chilling consequences when platforms are forced to make their users do something – in general, but especially here, as this bill is currently drafted.

The fundamental problem with a rule that requires all sellers to identify themselves is that it takes away the ability to have anonymous sellers at all. And even if you think removing seller anonymity is a good outcome when it comes to selling potentially dangerous products, destroying the right to sell things anonymously is an absolutely terrible outcome for the myriad products where product safety is never an issue. Especially when those products are expressive. Think books. T-shirts. CDs. Is Congress worried that consumers will have no one to sue if they get a papercut, a rash, or a headache? This bill requires even sellers of those sorts of expressive goods to identify themselves, and such a law is simply not constitutional.

As we’ve discussed many, many times before, there is a right to speak anonymously baked into the First Amendment. And that right isn’t constrained by the medium used. People speak through physical media all the time, which is why they produce expressive things like books, t-shirts, and CDs, which consumers like to buy in order to enjoy that expression. But this law inherently requires anyone who would want to monetize their expression – again, a perfectly legal thing to do, and something that other law, like copyright, even exists to encourage – to identify themselves. And that requirement will be chilling to any of the many speakers eager to spread their message, who simply can’t pay that sort of price to do it.

There is some language in the bill that does sort of narrow the law’s intended applicability, but not adequately. (Or clearly: while it limits it to “high volume sellers,” one provision defines “high volume” as $5,000 in annual sales or 200 transactions [§ (f)(3)(A)] and another defines it as $20,000 [§ (b)(1)(A)(i)], and neither threshold is very high if you are in the business of selling expressive products to make your living, or have any expressive product that happens to achieve significant popularity.) There is also a tiny bit of mitigation for sellers who sell out of the home or via personal phone numbers [§ (b)(2)(A)], but it still puts an onus on them to regularly “certify” to the platform that these criteria apply, and, still, information about them, including name and general location, will be disclosed to the world. In other words, these sellers will have to be identified, and whenever they sell any sort of good, because the law’s definition of applicable goods is so broad [§ (f)(2)]. It reaches even expressive goods, for which there is no valid consumer safety interest for a law like this to vindicate that could survive the constitutional scrutiny needed to overcome the harm the law will cause to the right of anonymous speech.
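To make the inconsistency concrete, here is a minimal sketch (function names invented for illustration) of the bill’s two conflicting “high volume seller” tests, using the $5,000/200-transaction and $20,000 thresholds discussed above:

```python
# Hypothetical sketch of the bill's two inconsistent "high volume seller" tests.
# Function names are invented; the thresholds follow the provisions cited above.

def high_volume_under_f3A(annual_sales: float, transactions: int) -> bool:
    """Per the cited § (f)(3)(A): $5,000 in annual sales or 200 transactions."""
    return annual_sales >= 5_000 or transactions >= 200

def high_volume_under_b1Ai(annual_sales: float) -> bool:
    """Per the cited § (b)(1)(A)(i): $20,000 in annual sales."""
    return annual_sales >= 20_000

# A seller of a modestly popular t-shirt: 250 sales at $15 each ($3,750).
sales, txns = 250 * 15, 250
print(high_volume_under_f3A(sales, txns))   # triggered by transaction count alone
print(high_volume_under_b1Ai(sales))        # not triggered under this definition
```

Even this toy example shows the same small seller qualifying under one definition but not the other, which is exactly the kind of drafting ambiguity, and low bar, that should worry sellers of expressive goods.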

And the concern is hardly hypothetical, which returns us to the headline. The INFORM Act opens the door to state attorney general enforcement against marketplace platforms, with the ability to impose significant sanctions, potentially even if only a few of a marketplace platform’s users fail to identify themselves properly, because it will be easy for a state to claim that an online marketplace is out of compliance with this law (there’s no real limiting language in it that might describe what non-compliance would look like) in a way that “affects one or more residents of that State,” as every online marketplace inevitably does [§ (d)(1)]. Of course, even as applied to non-expressive products this provision is a problem in how it gives states undue power over interstate commerce, which should be the exclusive domain of Congress. In fact, it’s a significant problem that individual states have already tried to impose their own versions of INFORM. Those efforts provide the one legitimate reason for Congress to try to regulate here at all: to pre-empt the resulting mess. Yet this bill, as drafted, manages only to double down on it.

But the threat to expressive freedom becomes especially palpable when you think about who can enforce this law against whom, and for what, and Texas attorney general Ken Paxton serves as a salient Exhibit A for the nightmare it would unleash. Would you like to write a book about any of the subjects states like Texas have tried to ban? If so, good luck with self-publishing it anonymously. How about selling a t-shirt expressing your outrage at any of the policies states like Texas have tried to promote? Better hope your shirt isn’t so popular that you have to identify yourself! Same with CDs: your ability to make money from your music is conditioned on identifying yourself to the world, so you’d better be completely ok with that. Of course, the problem is not just that certain state attorneys general with a tendency to use their powers against people they don’t like can find you, but that, thanks to this law, anyone else who doesn’t like what you’ve said will be able to as well.

Again, even at best this law remains of dubious value as an enforceable policy and unduly burdensome on sellers and marketplaces in a way that is likely to be costly. But if supporting it is the Faustian bargain Congress wants to basically blackmail affected constituencies into making in order to avoid something even worse (like SHOP SAFE, which has also been shoved into the same enormous competition bill and which would wreck e-commerce for everyone except maybe Amazon), then so be it. But not as currently drafted. Especially not with the attorney-general provision (which, even with less of a hair trigger and less super-charged enforcement powers, is still a bad idea in how it invites any and every state to mess with online interstate commerce as its own whims dictate), and certainly not with such broad applicability to essentially every seller of every sort of good.

To be constitutional this bill absolutely must, at minimum, exempt any seller of any expressive good from having to identify themselves, and no platform should be forced by this law to require otherwise. When the First Amendment says that Congress shall “make no law” abridging free expression, it means any law, including Internet marketplace law. Congress needs to abide by that prohibition and not so carelessly do such abridging here.

Posted on Techdirt - 3 June 2022 @ 03:38pm

Yet Again We Remind Policymakers That “Standard Technical Measures” Are No Miracle Solution For Anything

I’m starting to lose count of how many regulatory proceedings there have been in the last 6 months or so to discuss “standard technical measures” in the copyright context. Doing policy work in this space is like living in a zombie movie version of “Groundhog Day” as we keep having to marshal resources to deal with this terrible idea that just won’t die.

The terrible idea? That there is some miracle technological solution that can magically address online copyright infringement (or any policy problem, really, but for now we’ll focus on how this idea keeps coming up in the copyright context). Because when policymakers talk about “standard technical measures” that’s what they mean: that there must be some sort of technical wizardry that can be imposed on online platforms to miraculously eliminate any somehow wrongful content that happens to be on their systems and services.

It’s a delusion that has its roots going back at least to the 1990s, when Congress wrote into the DMCA the requirement that platforms “accommodate and […] not interfere with standard technical measures” if they wanted to be eligible for its safe harbor protections against any potential liability for user infringements. Even back then Congress had no idea what such technologies would look like, and so it defined them in a vague way, as technologies of some sort “used by copyright owners to identify or protect copyrighted works [that] (A) have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process; (B) are available to any person on reasonable and nondiscriminatory terms; and (C) do not impose substantial costs on service providers or substantial burdens on their systems or networks.” Which is a description that even today, a quarter-century later, correlates to precisely zero technologies.

Because, as we pointed out in our previous filing in the previous policy study, there is no technology that could possibly meet all these requirements, even just on the fingerprinting front. And, as we pointed out in this filing, in this policy study, even if you could accurately identify copyrighted works online, no tool can possibly identify infringement. Infringement is an inherently contextual question, and there is no way to load up any sort of technical tool with enough information needed to be able to correctly infer whether a work appearing online is infringing or not. As we explained, it is simply not going to know:

(a) whether there’s a valid copyright in the work at all (because even if such a tool could be fed information directly from Copyright Office records, registration is often granted presumptively, without necessarily testing whether the work is in fact eligible for a copyright at all, or that the party doing the registering is the party entitled to do it);

(b) whether, even if there is a valid copyright, it is one validly claimed by the party on whose behalf the tool is being used to identify the work(s);

(c) whether a copyrighted work appearing online is appearing online pursuant to a valid license (which the programmer of the tool may have no ability to even know about); or

(d) whether the work appearing online appears online as a fair use, which is the most contextual analysis of all and therefore the most impossible to pre-program with any accuracy – unless, of course, the tool is programmed to presume that it is.
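The four points above can be summarized in code. This is a deliberately toy sketch (all names invented; no real fingerprinting library is implied): even a perfect match establishes only that a work appears, while every question that actually decides infringement depends on legal context the tool cannot be given.

```python
# Toy sketch: a matcher can detect that a work appears, but infringement
# turns on context no technical tool has access to. All names are invented.

def fingerprint_match(upload: bytes, reference: bytes) -> bool:
    """Stand-in for a real fingerprinting algorithm: does the work appear?"""
    return reference in upload

def is_infringing(upload: bytes, reference: bytes) -> bool:
    if not fingerprint_match(upload, reference):
        return False  # the only conclusion a tool can safely reach on its own
    # Everything needed from here on is legal context, not signal processing:
    #   (a) is there a valid copyright in the work at all?
    #   (b) is the claimant actually the party entitled to assert it?
    #   (c) is the appearance licensed (perhaps invisibly to the tool)?
    #   (d) is it a fair use, the most contextual question of all?
    raise NotImplementedError("infringement is a legal conclusion, not a match result")
```

Unless, of course, the tool is simply programmed to presume infringement whenever a match is found, which, as discussed next, is effectively what its proponents want.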

The problem with presuming that a fair use is not a fair use, or that a non-infringing work is infringing, is that proponents of these tools don’t just want to be able to deploy them to say “oh look, here’s some content that may be infringing.” They want those tools’ alerts to be taken as definitive discoveries of infringement that will force a response from the platforms. And the only response that will satisfy these proponents is (at minimum) removal of the content (if not also removal of the user, or more) if the platforms want to have any hope of retaining their safe harbor protection. Furthermore, proponents want this removal to happen irrespective of whether the material is actually infringing, because they also want it to happen without any proper adjudication of that question at all.

We already see the problem of platforms being forced to treat every allegation of infringement as presumptively valid, as an uncheckable flood of takedown notices keeps driving offline all sorts of expression that is actually lawful. What these inherently flawed technologies would do is turn that flood into an even greater tsunami, as platforms are forced to credit every allegation the tools automatically spew forth every time they find any instance of a work, no matter how inaccurate the resulting infringement conclusion actually is.

And that sort of law-caused censorship, forcing expression to be removed without there ever being any adjudication of whether the expression is indeed unlawful, deeply offends the First Amendment, as well as copyright law itself. After all, copyright is all about encouraging new creative expression (as well as the public’s access to it). But forcing platforms to respond to systems like these would be all about suppressing that expression, and an absolutely pointless thing for copyright law to command, whether in its current form as part of the DMCA or any of the new, equally dangerous updates proposed. And it’s a problem that will only get worse as long as anyone thinks that these technologies are any sort of miracle solution to any sort of problem.

Posted on Techdirt - 23 May 2022 @ 12:21pm

The Problem With The Otherwise Very Good And Very Important Eleventh Circuit Decision On The Florida Social Media Law

There are many good things to say about the Eleventh Circuit’s decision on the Florida SB 7072 social media law, including that it’s a very well-reasoned, coherent, logical, sustainable, precedent-consistent, and precedent-supporting First Amendment analysis explaining why platforms moderating user-generated speech still implicates their own protected rights. And not a moment too soon, while we wait for the Supreme Court to hopefully grant relief from the unconstitutional Texas HB20 social media bill.

But there’s also a significant issue with it, which is that it only found most of the provisions of SB 7072 presumptively unconstitutional, so some of the law’s less-obviously-yet-still-pernicious provisions have been allowed to go into effect.

These provisions include the requirements to disclose moderation standards (§ 501.2041(2)(a)) (the court only took issue with needing to post an explanation for every moderation decision), disclose when the moderation rules change (§ 501.2041(2)(c)), disclose to users the view counts on their posts (§ 501.2041(2)(e)), disclose that the platform has given candidates free advertising (§ 106.1072(4)), and give deplatformed users access to their data (§ 501.2041(2)(i)). The analysis gave short shrift to the provisions it allowed to go into effect, despite their burdens on the same editorial discretion the court overall recognized was First Amendment-protected, despite the extent to which they violate the First Amendment as a form of compelled speech, and despite how they should be pre-empted by Section 230.

Of course, the court did acknowledge that these provisions might yet be shown to violate the First Amendment. For instance, in the context of the data-access provision the court wrote:

It is theoretically possible that this provision could impose such an inordinate burden on the platforms’ First Amendment rights that some scrutiny would apply. But at this stage of the proceedings, the plaintiffs haven’t shown a substantial likelihood of success on the merits of their claim that it implicates the First Amendment. [FN 18]

And it made a somewhat similar acknowledgment for the campaign advertising provision:

While there is some uncertainty in the interest this provision serves and the meaning of “free advertising,” we conclude that at this stage of the proceedings, NetChoice hasn’t shown that it is substantially likely to be unconstitutional. [FN 24]

And for the other disclosure provisions as well:

Of course, NetChoice still might establish during the course of litigation that these provisions are unduly burdensome and therefore unconstitutional. [FN 25]

Yet because the court could not already recognize how these rules chill editorial discretion, the rules will now get the chance to do just that. For example, it is unclear how a platform could even comply with them – especially a platform like Techdirt (or Reddit, or Wikimedia) that uses community-based moderation, whose moderating whims are impossible to know, let alone disclose, in advance of implementing them. Such a provision would seem to chill editorial discretion by making it impossible to choose such a moderation system, even when doing so aligns with the expressive values of the platform. (True, SB 7072 may not yet reach the aforementioned platforms, but that is little consolation if it means the platforms it does reach could still be chilled from making such editorial choices.)

The analysis was also scant with respect to the First Amendment prohibition against compelled speech, which these provisions implicate by forcing platforms to say certain things. Although this prohibition against compelled speech supported the court’s willingness to enjoin the other provisions, its analysis glossed over how this constitutional rule should have applied to these disclosure provisions:

These are content-neutral regulations requiring social-media platforms to disclose “purely factual and uncontroversial information” about their conduct toward their users and the “terms under which [their] services will be available,” which are assessed under the standard announced in Zauderer. 471 U.S. at 651. While “restrictions on non-misleading commercial speech regarding lawful activity must withstand intermediate scrutiny,” when “the challenged provisions impose a disclosure requirement rather than an affirmative limitation on speech . . . the less exacting scrutiny described in Zauderer governs our review.” Milavetz, Gallop & Milavetz, P.A. v. United States, 559 U.S. 229, 249 (2010). Although this standard is typically applied in the context of advertising and to the government’s interest in preventing consumer deception, we think it is broad enough to cover S.B. 7072’s disclosure requirements—which, as the State contends, provide users with helpful information that prevents them from being misled about platforms’ policies. [p. 57-8]

And because these provisions were not enjoined, the law will now compel platforms to publish information they weren’t already publishing, or even to significantly re-engineer their systems (such as to give users view count data).

In addition, the decision gave short shrift to how Section 230 pre-empts such requirements. To an extent, this oversight may be due in part to how the court found it unnecessary to reach Section 230 in finding that most of the law’s provisions should be enjoined (“Because we conclude that the Act’s content-moderation restrictions are substantially likely to violate the First Amendment, and because that conclusion fully disposes of the appeal, we needn’t reach the merits of the plaintiffs’ preemption challenge.” [p.18]).

But for the provisions where it couldn’t find the First Amendment to be enough of a reason to enjoin them, the court ideally should have moved on to this alternative basis before allowing them to go into effect. Unfortunately, it’s also possible that the court really didn’t recognize how Section 230 is a bar to them:

Nor are these provisions substantially likely to be preempted by 47 U.S.C. § 230. Neither NetChoice nor the district court asserted that § 230 would preempt the disclosure, candidate-advertising, or user-data-access provisions. It is not substantially likely that any of these provisions treat social-media platforms “as the publisher or speaker of any information provided by” their users, 47 U.S.C. § 230(c)(1), or hold platforms “liable on account of” an “action voluntarily taken in good faith to restrict access to or availability of material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” id. § 230(c)(2)(A). [FN 26]

Fortunately, however, there will likely be opportunities to brief that issue more clearly in the future, as the case has now been remanded to the district court for further proceedings – this appeal concerned only whether the law was likely to be so legally dubious as to warrant being enjoined while it was challenged, but the challenge itself can continue. And it will happen in the shadow of this otherwise full-throated defense of the First Amendment in the context of platform content moderation.

Posted on Techdirt - 18 May 2022 @ 12:11pm

And Now The Copia Institute Tells The US Supreme Court There’s A Big Problem With Texas’s Social Media Law

Last week a bizarre one-line order from the Fifth Circuit lifted the injunction on Texas’s social media law, allowing it to go into effect, despite all the massive problems with it – including the extent to which it violates the First Amendment and Section 230.

So NetChoice and CCIA filed an emergency application with the U.S. Supreme Court to try to have it at least reinstate the injunction while the case worked its way through the appellate courts. And yesterday Copia Institute filed an amicus brief supporting their application.

The brief is, in many ways, an encore performance of the brief we’d submitted to the Fifth Circuit, using ourselves and Techdirt as an example of how the terms of HB20 violate our constitutional and statutory rights, but this time around there are a few additional arguments that may be worth their own posts (in fact, one is a throwback to an old post). One key argument we added applies less to the problems with HB20 itself and more to the problems with the Fifth Circuit lifting the injunction, and especially the way it did. We pointed out that lifting the injunction without any explanation for why looked an awful lot like the sort of prior restraint that has long been considered verboten under the First Amendment. State actors (including courts) are not supposed to chill the exercise of expression unless and until there’s been the adjudication needed to find that the First Amendment permits that sanction. Here the Fifth Circuit technically heard the case, but it issued a sanction stymying the exercise of speech (lifting the injunction) without ever actually having ruled that HB20’s chilling terms were ok under the First Amendment. Perhaps the court truly thinks HB20 is perfectly sound under the First Amendment; we don’t really know. And we can’t know, because it didn’t say anything. Which also means there’s nothing to appeal: if the Fifth Circuit made an error in thinking HB20 is ok (which seems likely, because that law conflicts with so much established First Amendment precedent, as well as common sense), no one can say where that error was, or what of its judgment should be reversed.

Nevertheless, the law is out there, in effect now, doing harm to platforms’ expression. HB20 still needs to be thrown out on the merits, but for the moment we all just need the Supreme Court to stop the bleeding.

Posted on Techdirt - 9 May 2022 @ 12:18pm

Wherein The Copia Institute Reminds California’s New Privacy Agency That Its Regulations Need To Comport With The First Amendment

Last week the recently formed California Privacy Protection Agency held “pre-rulemaking stakeholder sessions” to solicit input on the regulations it intends to promulgate. I provided the following testimony on behalf of the Copia Institute.

Thank you for the opportunity to speak at these hearings. My name is Cathy Gellis, and I’m here representing myself and the Copia Institute, a think tank that regularly researches and comments on matters of tech policy, including as they relate to privacy and free speech.

I’m here today to talk about how privacy regulation and free speech converge, and to urge this board to carefully address any collision between a proposed regulation and the First Amendment, particularly with respect to the protection of speech and innovation. To do so I want to make three interrelated points.

First, as a general matter, it is important that any proposed regulation be carefully analyzed from a First Amendment perspective to make sure it comports with both its letter and spirit. When the First Amendment says “make no law” that abridges freedom of speech, that admonition applies to California privacy regulation. The enabling California legislation involved here itself acknowledges that it is only “intended to supplement federal and state law, where permissible, but shall not apply where such application is preempted by, or in conflict with, federal law, or the California Constitution,” and violating the First Amendment would run afoul of this clause.

It’s also important that any such regulation comport with the spirit of the First Amendment as well. The First Amendment exists to make sure we can communicate with each other, which is a necessary requirement of a healthy democracy and society. It would be an intolerable situation if these regulations were to chill our exchange of information and expression, or to unduly chill innovation. While wanting online services to be careful with how they handle the digital footprints the public leaves behind is admirable, the public would not be well served if new and better technologies couldn’t be invented, or new businesses or competitors couldn’t be established, because California privacy regulation was unduly burdensome or simply an obstacle to new and better ideas.

Along these lines a second point to make is that California is not Europe. Free speech concerns do not get “balanced” here and cannot be “balanced” without violating the First Amendment. The experience of the GDPR in Europe is instructive in warning what happens when regulators try to make such a balance, because inevitably free expression suffers.

For instance, privacy regulation in Europe has been used as a basis for powerful people to go after journalists and sue their critics, which makes criticizing them difficult if not impossible, even where necessary and even where perfectly legal under the First Amendment, and thus chills such important discourse.

The GDPR has also been used to force journalists to divulge their sources, which is also anathema to the First Amendment and California law, along with itself violating the privacy values wrapped up in journalist source protection. It also chills the necessary journalism a democratic society depends on. (As an aside, the journalistic arm of the Copia Institute has had its own reporting suppressed via GDPR pressure on search engines, so this is hardly a hypothetical concern.)

And it was the GDPR that opened the door to the entire notion of “right to be forgotten,” which, despite platitudes to the contrary, has had a corrosive effect on discourse and the public’s First Amendment-recognized right to learn about the world around them, while also giving bad actors the ability to whitewash history so they can have cover for more bad acts.

Meanwhile we have seen, in Europe and even in the U.S., how regulatory demands that cause services to take down content invariably lead to too much content being taken down. Because these regulatory schemes create too great a danger for a service if it does not do enough to avoid sanction, services rationally choose to do too much in order to be safe rather than sorry. But when content has been taken down, it’s the world that needs it that’s sorry now.

As is the person who created the content, whose own expression has now been effectively punished by an extrajudicial sanction. The First Amendment forbids prior restraint, which means that it is impermissible for speech to be punished before it has been adjudicated to be wrongful. But we see such prior restraint happen time and time again thanks to regulatory pressure on the intermediary services online speakers need to use to speak, which forces them to do the government’s censorial dirty work for it by causing expressive content to be deleted, without the necessary due process for the speaker.

Then there is this next example, which brings up my third point. Privacy regulation does not stay well-cabined so that it only affects large, commercial entities. It inevitably affects smaller ones, directly or indirectly. In the case of the GDPR, it affected the people who used Facebook to run fan pages, imposing upon these individuals, who simply wanted to have a place where they could talk with others about their favorite subject, cripplingly burdensome regulatory liability. But who will want to run these pages and foster such discourse when the cost can be so high? Care needs to be taken so that regulatory pressure does not lead to the loss of speech or community, as the GDPR has done.

And that means recognizing that there are a lot of online services and platforms that are not large companies. Which is good; we want there to be a lot of online services and platforms so that we have places for communities to form and converse with each other. But if people are deterred from setting up, say, their own fan sites, independent even of Facebook, then that’s a huge problem, because we won’t get those communities, or that conversation.

Society wants that discourse. It needs that discourse. And if California privacy regulation does anything to smother it with its regulatory criteria, then it will have caused damage, which this agency, and the public that empowered it, should not suborn.

Thank you again for this opportunity to address you. A version of this testimony with hyperlinks to the aforementioned examples will be published shortly.

Posted on Techdirt - 5 May 2022 @ 12:06pm

No, Software-Bricked Tractors Thwarting Russian Looters Is Not A Sign That Either John Deere Or Copyright Is Good

There are not enough words to describe the horrors of what Russian troops have been doing to their Ukrainian neighbors. But it should go without saying that stealing their stuff is, on its own, not ok.

But it turns out that some of what they’ve stolen is farming equipment. And modern farming equipment at that, which is encumbered with software. But because it is encumbered with software, that means that the ability to control the machine does not remain physically with the machine. Were it an old-school, software-less tractor it would just need to be filled up with fuel to start operating. But not so with modern tractors, whose onboard software systems require licensed users to operate them – whom looters definitely are not.

And so CNN is reporting that Russian looters are finding themselves unable to use the tractors they stole, because people elsewhere have been able to control the software to make it so the tractors can’t run. The tractors just sit in their yards as piles of useless metal, instead of valuable agricultural assets.

The sophistication of the machinery, which are equipped with GPS, meant that its travel could be tracked. It was last tracked to the village of Zakhan Yurt in Chechnya. The equipment ferried to Chechnya, which included combine harvesters — can also be controlled remotely. “When the invaders drove the stolen harvesters to Chechnya, they realized that they could not even turn them on, because the harvesters were locked remotely,” the contact said. The equipment now appears to be languishing at a farm near Grozny.

But we should not get carried away celebrating the apparent schadenfreude, because what’s stymying these looters is itself dystopian. Even if you can argue that embedding software logic in a physical piece of equipment, like a tractor, makes for a better tractor, the idea that someone somewhere else can have dominion over that piece of equipment does not make anything better. Even if that embedded software only serves as a form of lo-jack to deter thieves from taking equipment they know they won’t be able to use, it doesn’t follow that any such software is necessarily better than how things were before. Because while, sure, in this case this arrangement has helped prevent thieves from benefiting from their ill-gotten gains, in all too many situations it is instead bona fide owners who have been unable to benefit from their own properly purchased property. Which is a huge problem that this one example of apparent karmic justice does not and cannot redeem.
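To make concrete why that dominion is the troubling part, here is a deliberately simplified, entirely hypothetical sketch of how an onboard controller could gate a machine’s operation on a remotely maintained revocation list. None of the names here (the unit IDs, the `REVOKED` set, the functions) reflect John Deere’s actual systems, which are not public; this is only an illustration of the general pattern:

```python
# Hypothetical sketch of a remote-lock gate on farm equipment.
# The key point: the decision to run lives on a server somewhere
# else, not in the machine that the person in the cab possesses.

REVOKED = {"unit-4821"}  # units flagged (e.g., reported stolen) by the remote operator


def remote_activation_status(unit_id: str) -> bool:
    """Stand-in for the onboard system phoning home to check its status."""
    return unit_id not in REVOKED


def try_start(unit_id: str) -> str:
    """The ignition path: the engine only starts if the server says so."""
    if remote_activation_status(unit_id):
        return "engine started"
    return "unit locked: contact an authorized dealer"


print(try_start("unit-1077"))  # a unit in good standing
print(try_start("unit-4821"))  # a remotely revoked unit
```

Whoever controls the server behind `REVOKED` controls the tractor. That is the whole point of the example: here the switch thwarts looters, but the exact same switch works just as well against a lawful owner.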

The irony is that teams of hackers have long been hard at work figuring out how to modify the embedded software so that tractor owners everywhere (including in the US) can do what they need to operate and maintain their own equipment. Including teams of Ukrainian hackers.

To avoid the draconian locks that John Deere puts on the tractors they buy, farmers throughout America’s heartland have started hacking their equipment with firmware that’s cracked in Eastern Europe and traded on invite-only, paid online forums. Tractor hacking is growing increasingly popular because John Deere and other manufacturers have made it impossible to perform “unauthorized” repair on farm equipment, which farmers see as an attack on their sovereignty and quite possibly an existential threat to their livelihood if their tractor breaks at an inopportune time. […] The nightmare scenario, and a fear I heard expressed over and over again in talking with farmers, is that John Deere could remotely shut down a tractor and there wouldn’t be anything a farmer could do about it. “What you’ve got is technicians running around here with cracked Ukrainian John Deere software that they bought off the black market.”

Of course, these hackers are unlikely to want to help their looting neighbors. Nor likely is John Deere. (Especially here, since many of the tractors appear to have been stolen from authorized dealers.) But even so, it’s not like the Russians will be returning any of these tractors to their rightful owners now that they’ve found they can’t use them. Simply depriving Ukrainians of their own property is a huge blow to them, and the looters may still profit from their thievery by scavenging the tractors for parts and raw materials. So it’s not like John Deere and its embedded software, or the copyright in that software that gives it such control over its sold machines, have managed to right a serious wrong.

But while we can easily recognize how wrong it is for looters to deprive people of the use of their own equipment, we somehow often miss how wrong it is for anyone to so deprive them. The reality is that if you’ve made it so that a tractor owner can’t use their own equipment, you might be a looter. But you also might be John Deere. The only difference is that the looter’s behavior is more clearly lawless, whereas John Deere’s is currently backed up by law. But the effect is just as wrong.

Posted on Techdirt - 29 April 2022 @ 12:31pm

White House Sets Up Monumentally Stupidly Named ‘Disinformation Governance Board’

The Biden Administration just announced the creation of a DHS subagency apparently intended to confront “disinformation.” The biggest problem with it is that it is impossible, right now, to even know whether it’s a good idea or not, because it is so unclear what this board is intended to do.

Further, its name does not inspire confidence. It is very easy to read “Disinformation Governance Board” and think it is some Orwellian government program designed to qualitatively analyze information in order to deem it either suitable to be expressed, or forbidden. And if that’s what is planned, then such a program should be loudly and immediately condemned.

Indeed, the decision to announce this in a weird, furtive way, without details, focus, or explicit limitations, only served to create a firestorm of rage among the Fox News set. By not explaining what the agency is actually going to do and calling it a “governance” (?!?) board, it allowed provocateurs and nonsense peddlers to jump in and fill up the void — perhaps somewhat ironically with disinformation insisting that this board was going to be “giving law enforcement power to punish people who think the wrong things.”

Of course, that’s almost certainly not what’s in store (beyond the Constitutional problems with such a thing, it wouldn’t make any sense at all). But without knowing what is instead planned, it’s hard to know what to think about it. Some reports suggest that it’s an agency effort designed to counter specific disinformation about the US government, particularly circulating rumors about US immigration policy that, when believed, make vulnerable immigrants even more vulnerable. From the AP article about the board’s launching:

A newly formed Disinformation Governance Board announced Wednesday will immediately begin focusing on misinformation aimed at migrants, a problem that has helped to fuel sudden surges at the U.S. southern border in recent years. Human smugglers often spread misinformation around border policies to drum up business.

There isn’t really anything objectionable about the government wanting to make sure people are not hurt by misunderstanding policies actually intended to help them, and it makes sense for it to want some capacity to correct the record when it needs to be corrected.

But, as usual, the details matter, and HOW the government responds to specific disinformation will dictate whether the effort is something helpful, or instead something liable only to make a bigger mess (or, worse, unconstitutional). Much care will need to be taken to avoid the latter outcome, and it would be helpful if there were more initial transparency about what is planned so that the public can help make sure that such care is taken.

And the milquetoast statement from the (already Orwellian-named) Homeland Security that the board will “protect privacy, civil rights, and civil liberties” is basically worse than useless. It provides no concrete explanation of what the board will do to accomplish that, and again it allows political opponents to just make up whatever they want.

Meanwhile, the other thing that seems like it could be an interesting idea for a government “disinformation board” is simply to do more research into how and why disinformation gains traction. It isn’t clear, though, that this will be one of its tasks, although appointing Nina Jankowicz, herself a social science researcher, to lead the board does spark hope that such projects may be in store. The sociology of mass communications is a deep and rich subject, and one that bears very heavily on the policy challenges of the day. If we care about disinformation at all, then we should be doing more to study it, if not directly by the government then via grants to social scientists with the methodological ability to do effective research.

We should be doing that anyway: more social science research at the intersection of information technology and people, so that we can build more effective policy in response to the insights we glean, instead of relying on the constant guesswork that currently informs our political reactions to the challenges we face.

But everything about the way this Disinformation Governance Board has been rolled out has been a disaster. The lack of clear information about what it is and what it does, the naming of it, and the fact that the White House simply left this giant open void to be filled by the misinformation peddlers themselves all suggest that the White House itself is not at all comprehending how any of this works. And that, alone, does not bode well for this terribly named board.

Posted on Techdirt - 13 April 2022 @ 08:11pm

The Constitutional Challenge To FOSTA Hits A Roadblock As District Court Again Ignores Its Chilling Effects

The constitutional challenge to FOSTA suffered a significant setback late last month when the district court granted the government’s motion for summary judgment, effectively dismissing the challenge. Unless appealed (though it sounds like it may be), the decision is the end of the road for the case.

What is most dismaying about the decision – other than its ultimate holding – is the court’s failure to recognize the chilling effect on expression FOSTA has already had, which the DC Circuit had acknowledged when it found the standing needed for the plaintiffs’ challenge to continue, after the district court had previously tried to dismiss it for lack of standing. In this latest decision, the district court again turned a blind eye to the expressive harm FOSTA causes and rooted its ruling not in the language of the appellate holding suggesting there might actually be a problem here but instead in the dicta of Judge Katsas’s concurrence, even though none of this more equivocal language was a binding observation by the appeals court.

For instance, at one point in the decision the district court wrote:

Plaintiffs also contend that the prior decision of our Court of Appeals as to plaintiffs’ standing in this case precludes my holding that § 2421A is susceptible to the narrowing construction I endorse above. I disagree. Indeed, I find that plaintiffs’ argument not only overreads the majority’s opinion, but also ignores Judge Katsas’s concurrence. More specifically, while the majority did point out that FOSTA’s language, including the “promote or facilitate” elements discussed above, could be read to sweep broadly “when considered in isolation,” Woodhull I, 948 F.3d at 372, the panel did so in the context of its analysis of plaintiffs’ standing to bring a pre-enforcement challenge. That standing analysis merely requires considering whether plaintiffs have established that they engage in activities “arguably” within the scope of the challenged statute, see SBA, 573 U.S. at 164, not that the statute does in fact prohibit the alleged activities. As such, the majority was not determining the precise scope of what FOSTA proscribes, but rather whether plaintiffs’ broad reading of FOSTA was “arguably” a valid one. In short, the majority did not decide how FOSTA should be construed, only how it could be construed. To that end, the narrowing construction of the law discussed above was neither endorsed, nor rejected, by the majority’s opinion. Indeed, in his concurrence, Judge Katsas expressly stated that the majority did not purport to construe the statute for anything other than the standing analysis, noting instead that the plaintiffs’ preferred reading was only “identif[ied] … as at least one possible reading of FOSTA.” Woodhull II, 948 F.3d at 375 (Katsas, J., concurring in part and concurring in the judgment). Judge Katsas wrote separately specifically to indicate that he viewed the plaintiffs’ reading ultimately as untenable, even if he did also agree that the plaintiffs had standing under his narrower reading. 
I therefore find that plaintiffs are incorrect in arguing that I am precluded from reading FOSTA so narrowly: our Court of Appeals did not take any position on that reading of FOSTA, and indeed Judge Katsas expressly adopted it. [p. 17 (emphasis in the original)]

The problem is, at no point did the district court actually consider how FOSTA’s language would be construed, let alone how it had already been construed.

In conjunction with their motion for summary judgment, plaintiffs did submit a statement of facts accompanied with a number of supporting affidavits. However, these facts and affidavits are material only to establishing plaintiffs’ ongoing standing—which defendants do not challenge—and the entitlement of plaintiffs to injunctive relief should they prevail on the merits. Because, as explained below, I find that plaintiffs’ facial constitutional claims are without merit, there is no need to address the facts underpinning plaintiffs’ request for injunctive relief. [fn 5]

Nor, for that matter, had the DC Circuit, which had not been called upon to fully inquire into the expressive effects of FOSTA and therefore could not officially indicate one way or another whether there were any for any or all of the plaintiffs. As it was, once the court found standing for just two of them it ended its inquiry, because finding standing for just two plaintiffs was enough to revive the challenge. And even the doubt about expressive harm Judge Katsas articulated in his concurrence was still nothing more than idle musing, not a definitive finding of any sort.

Nevertheless, the appeals court, and even Judge Katsas, had observed that there very easily could be some impermissible expressive harms resulting from FOSTA arising from its vague language. Yet the district court chose to largely ignore that observation, or the factual record documenting the ways these plaintiffs had already been chilled. Instead it treated the Katsas concurrence as an official finding that FOSTA’s language could cause no expressive harms at all.


As such, a proper construal of FOSTA leads to the conclusion that it is narrowly tailored toward prohibiting activity that effectively aids or abets specific instances of prostitution. I therefore have no trouble finding that its legitimate sweep, encompassing only conduct or unprotected speech integral to criminal activity, predominates any sweep into protected speech—indeed, under the narrow construal above, I do not read FOSTA to possibly prohibit any such protected speech, much less a sufficient amount so as to render the Act overbroad. [p. 18]

A grant of summary judgment assumes that there are no issues of material fact, so the only legal question before the court would have been how the law should treat the agreed-upon facts. But here the district court’s own reasoning indicates that there is indeed a question of fact: is FOSTA’s language one that can chill lawful expressive activity, or one that cannot? That there was a plausible reading where it might not does not seem dispositive, especially in light of the fact that chilling readings had already occurred (particularly with respect to the massage therapist, who lost his ability to advertise on Craigslist, where he had advertised successfully for years, once FOSTA passed and Craigslist found the legal risk of allowing such ads too great). It is thus preposterous to find, as this court did, that FOSTA could not have a chilling effect when there is already plenty of evidence of one.

If this decision were to stand, Congress would only be further emboldened to make more laws like this one that chill speech, even though the First Amendment unequivocally tells it not to. Because per this court it only matters whether Congress intended to harm speech, not whether it actually did.

Though FOSTA may well implicate speech in achieving its separate purpose, such an indirect effect does not provide a basis for strict scrutiny: “even if [a law] has an incidental effect on some speakers or messages but not others,” it is to be treated as content neutral. [p. 23]

But FOSTA’s chilling effect has been far from incidental, and hopefully on appeal this harm will be recognized and remedied.
