Cathy Gellis's Techdirt Profile

Cathy Gellis

About Cathy Gellis

Posted on Techdirt - 21 September 2022 @ 10:43am

No, The Solution For Criminal Defendants Is Not More Clearview AI

The problems with Clearview AI’s facial recognition system, particularly in the hands of police, are myriad and serious. That the technology exists as it does at all raises significant ethical concerns, and how it has been used to feed people into the criminal justice system raises significant due process ones as well. But an article in the New York Times the other day might seem to suggest that it perhaps also has a cuddly side, one that might actually help criminal defendants, instead of just hurting them.

But don’t be fooled – there is nothing benign about the facial recognition technology pushed by Clearview AI, and even this story ultimately provides no defense for it. It was not the hero here, because the problem it supposedly “solved” was not the problem that actually needed solving.

In the article, Kashmir Hill told the story of how Clearview AI’s facial recognition system apparently helped exonerate someone criminally charged with causing the single-car accident he had been in, which had killed the person he was with. The surviving defendant insisted he hadn’t been the one driving, but he was charged anyway. Proving his innocence was going to require finding the Good Samaritan witness who had pulled him from the burning car and could verify which seat he had been pulled from.

In November 2019, almost three years after the car accident, Mr. Conlyn was charged with vehicular homicide. Prosecutors said that Mr. Conlyn had been the one recklessly driving Mr. Hassut’s Mustang that night and that he was responsible for his friend’s death.

Mr. Conlyn vehemently denied this, but his version of events was hard to corroborate without the man who had pulled him from the passenger seat of the burning car. The police had talked to the good Samaritan that night, and recorded the conversation on their body cameras.

“The driver got ejected out of one of the windows. He’s in the bushes,” said the man, who had tattoos on his left arm and wore an orange tank top with “Event Security” emblazoned on it. “I just pulled out the passenger. He’s over there. His name is Andrew.”

The police did not ask for the man’s name or contact information. The good Samaritan and his girlfriend, who was with him that night, drove off in a black pickup truck.

Yet how could the defendant track down the witness? He had no idea himself who had helped him, and the police had not bothered to fully document what they observed at the scene. The only identifying data was the footage the police bodycams had captured while they were talking to the witness. Which led the defense to wonder: what if there were some way to identify who was pictured in the footage? So defense counsel wrote to Clearview AI and asked for access to its facial recognition system to see if it could identify the person in the picture. And Clearview said yes, apparently smelling a PR opportunity by asking that the defense “talk to the news media about it if the search worked.” And it turns out that it did work: the identity of the witness was readily found, the witness then located, and, with their testimony, the charges were consequently dropped.

The company would like the takeaway from this particular happy ending to be that Clearview AI’s facial recognition system might at least be a double-edged sword, offering some good and important benefits to the accused that might somehow counterbalance the tremendous threat it poses to all putative defendants (aka everyone), especially insofar as facial recognition technology in the hands of police tends to lead to people finding themselves in the crosshairs of the criminal justice system in the first place, often unaware and even by mistake.

Civil liberty advocates believe Clearview’s expansive database of photos violates privacy, because the images, though public on the web, were collected without people’s consent. The tool can unearth photos that people did not post themselves and may not even realize are online. Critics say it puts millions of law-abiding people in a perpetual lineup for law enforcement, which is particularly concerning given broader concerns about the accuracy of automated facial recognition.

But the story actually supports no such conclusion: Clearview AI was in no way the solution to the problem presented here, because the problem here was not that the defense couldn’t find its witness. The problem was that there was obviously reasonable doubt as to his guilt, which prosecutors chose to ignore in deciding to charge him anyway. Which then meant that instead of the prosecution having the burden to prove his guilt, the defendant now had the burden to prove his innocence, which is how he found himself needing Clearview at all.

But he never should have had that need because that is not how things are supposed to work in our criminal justice system, where the accused are supposed to be presumed innocent and it is the prosecution’s job to prove beyond a reasonable doubt that they are not. True, to simply bring charges the prosecution may have needed to meet a lesser standard than reasonable doubt, but the problem here was that the prosecution gladly chose to pick this fight even though it knew that it should ultimately not be able to win the war.

Because think about how much evidence the prosecution already knew about that cast doubt on the suspicion that the defendant had been driving. As the article lists, there was the defendant’s own denial, plus forensic evidence that at best only inconclusively supported the theory that he had been the driver (and then there was also the fact that the passenger side door had been blocked by a tree, which would have meant that anyone in the car would have needed to leave by the driver’s side, even if they hadn’t been driving).

The body-camera footage did not seem to hold much weight with the prosecution.

“There was contradicting evidence,” said Samantha Syoen, the communications director for the state attorney’s office.

Witnesses who had arrived late to the scene saw Mr. Conlyn pulled out of the driver’s side of the car. Mr. Conlyn said, and police body camera footage appeared to confirm, that the passenger’s side door had been against a tree, which was why he’d had to be rescued from the other side. The police found his blood on the passenger’s side of the car but also on the driver’s side airbag.

An accident reconstruction expert hired by the prosecution said the injuries to the right side of Mr. Conlyn’s body could have come from the center console, not the passenger door. After Mr. Hassut’s father sued Mr. Conlyn in civil court in 2019, for the wrongful death of his son, Mr. Conlyn’s insurance agency settled the suit, unable to prove that Mr. Conlyn had not been driving the car.

But even if none of that contrary evidence had been compelling, THERE WAS ALSO A WITNESS! That the prosecution knew about! Because the police had spoken to him! AND THAT’S WHY THERE WAS EVEN A PICTURE TAKEN ON THE BODY CAM!

There should have been no need for the defense to ID the witness, because the mere fact that there was a witness, whose contemporaneous statement had cast doubt on the prosecution’s theory, should have been enough reasonable doubt to put an end to the prosecution. That the prosecution nevertheless continued, in spite of this contrary evidence, is the true problem that this story reveals. And it is a problem that Clearview AI in no way solves. The failure here is much more systemic: overzealous prosecutors can go after defendants with such weak hands, forcing defendants to do what may be impossible to prove their innocence (especially thanks to inexcusably poor documentation practices by investigating police).

That something like Clearview AI may make a defendant’s task slightly less impossible does not solve the ultimate problem, nor does it redeem technology as troubling as Clearview AI just because in this particular case it may have helped even the odds.  The issue is that the odds were so uneven in the first place. And Clearview AI ultimately just helps make sure they will stay uneven by so cavalierly dismissing the significant privacy rights that should be protecting citizens from exactly this sort of overzealous policing.

The article quotes the NACDL’s Jumana Musa accurately noting that offering defense counsel access to Clearview isn’t going to solve the problems with it:

Jumana Musa is the director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers, where she works to keep defense lawyers informed about the newest surveillance tools used by law enforcement.

“[Giving defendants access to the system] is not going to wipe away the ethical concerns about the way in which they went about building this tool. [What] it’s not going to do is make us feel comfortable with the secrecy around how this tool works.”

As she continued, “You don’t address issues in a broken criminal legal system by layering technology on them.”  Which is exactly the issue.  Technology is not the solution to all our failings. Sometimes the thing we need to do is just fail less, but that requires recognizing what has actually gone wrong and what therefore needs addressing.

So, no, Clearview AI is not the solution here, because the problem is not that not enough people have access to facial recognition systems. Rather, the problem is that our system of justice railroads people even in the face of reasonable doubt. That it does so is the problem we should be fixing, but it is not one that something as invasive as Clearview AI could ever fix for us, because it is not as if its own existential problems can somehow cure or cancel out the current constitutional infirmities of our criminal justice system. Instead each will only make the other worse.

Posted on Techdirt - 15 September 2022 @ 06:15am

Even The Copyright Office Doesn’t Want What The JCPA Is Selling

It should not be this hard to stamp out a bad idea, but here we are, with the JCPA continuing to haunt the country like a zombie that simply refuses to die. The JCPA, for those just tuning in, is a bill designed to create a link tax. Its supporters sometimes blanch at that description, but it is an apt one, rooted in the perversely censorial notion that no one should be able to link to material available on the Internet (or facilitate others linking to material available on the Internet) without paying for the privilege.

There was a glimmer of hope last week that Senator Cruz may have accidentally driven a stake through its heart with his surprise amendment that upended Senator Klobuchar’s cursed legislative apple cart, but her steadfast refusal to acknowledge any legitimate concerns about her bills has led her to keep trying to ram this one down America’s throat.

But, notably, she is doing it without the support of the Copyright Office, which earlier this year considered whether the sort of monopoly power the JCPA creates was the sort of monopoly power copyright law either does, or should, create. Sensibly, it decided that it was not.

As it laid out in the Executive Summary of its report, the Copyright Office noted that US copyright law already grants media outlets substantial protection.

Press publishers have significant protections under U.S. copyright law. They generally own a copyright in the compilation of materials that they publish. In addition, they often own the copyright in individual articles through the work-made-for-hire doctrine and may also own rights in accompanying photographs.

At the same time, copyright law also currently contains limits on the reach of its exclusionary power, sometimes for constitutional reasons and often for the benefit of the public, which a link tax scheme would conflict with, to the inevitable detriment of the public.

Copyright law does, however, permit certain unlicensed uses of news content, by news aggregators or others. Facts and ideas are not protectable by copyright. The merger doctrine allows the use of original expression where there are limited ways of expressing a particular fact or idea, and individual words, titles, and short phrases are generally not protectable. Even where an aggregator reuses protectable expression, the fair use doctrine may apply. As a result, press publishers’ ability to rely on copyright to prevent third-party aggregators from using their content depends on the specific circumstances, including the nature and amount of the content used.

To nevertheless constrain public use of links via the use of this ancillary, or copyright-like, scheme, would require advancing new legal theories that are “untested.” And thus the Copyright Office could not recommend the exercise.

Given all of these variables, the Copyright Office does not recommend adopting new copyright protections for press publishers. Any change to U.S. copyright law that would meaningfully improve press publishers’ ability to block or seek remuneration for news aggregators’ use of their works would necessarily avoid or narrow limitations on copyright that have critical policy and Constitutional dimensions.

It further noted that the record simply didn’t support the extreme regulatory approach of expanding copyright law to create this new monopoly power to forbid links. To the extent that the funding models for journalism stood to be improved, it was not copyright law, or something so much akin to it, that stood to appropriately improve them.

The Office recognizes that adequate funding for journalism may currently be at risk, and that there are implications for the press’s essential role in our system of government. But the challenges for press publishers do not appear to be copyright-specific. It has not been established that any shortcomings in copyright law pose an obstacle to incentivizing journalism or that new copyright-like protections would solve the problems that press publishers face.

Indeed, even the Copyright Office’s report referenced how self-defeating this sort of proposal seemed to be for journalism by making it harder for media outlets to connect with the audiences that are their lifeblood. (See, for instance, footnote 57 of the report.)

As we (and many others) have said many times, both on these pages and in comments for these regulatory studies, link tax proposals like those now pushed by the JCPA are no solution for journalism. Indeed, they will HURT journalism, especially the journalism of smaller media outlets, which could no longer count on people being able to freely share links to their material, and thus could no longer count on having untaxed connections to audiences.

It is therefore an odd thing for any regulator to want, especially if they are genuinely sincere about making journalism a more economically viable endeavor, and not simply pushing laws like these in an effort to punish those who foster Internet use they simply don’t like.

Posted on Techdirt - 1 July 2022 @ 12:08pm

Because Vulnerable People Need Section 230, The Copia Institute Filed This Amicus Brief At The Eleventh Circuit

It is utterly and completely irrational for people who defend the vulnerable to call for the destruction of Section 230. Section 230 helps protect vulnerable people by making it possible to speak out against those who would hurt them. Weakening the critically important protection it provides online platforms would only weaken these platforms’ ability to provide an outlet for vulnerable people’s critically important expression, and thus in turn weaken them.

Which is why we filed an amicus brief in the case of M.H. v. Omegle. As with most of these cases challenging the application of Section 230, something terrible happened to someone online. In this case, it was the sexual abuse of a minor. But this litigation is not about holding the abuser responsible but instead the online platform that was used – and by plenty of people not abusing each other, of course.
The district court correctly found that Section 230 barred these claims. After all, the abuse in question was content created by a user, not the platform. And Section 230 exists to make sure that only the users who create wrongful content are held responsible for it, and not the platforms that were used, because there is simply no way for platforms to answer for the almost infinite amount of content their users make that could be wrongful in an almost infinite number of ways. If they had to answer for any of it, they would likely have to refuse all of it (or at least plenty of perfectly legal and beneficial, or even necessary, expression).

But the plaintiffs didn’t like the district court’s answer, and so they appealed to the Eleventh Circuit. The Copia Institute then filed its amicus brief to explain to the court what is at stake if it reverses the decision and finds that Section 230 doesn’t bar these claims in order to try to help this very sympathetic plaintiff. The upshot: much more trouble for future sympathetic plaintiffs, who will lose their ability to speak online safely, if not entirely, as platforms go out of business, refuse more user expression, or stop moderating any of it, which would leave their communities cesspools of even more abuse. And we know this dire prediction is true because we’ve already seen it happen where Section 230 has been weakened or otherwise unavailable before. As we’ve seen play out in the wake of FOSTA in particular, Section 230 does critically important work staving off this sort of dire future, where vulnerable people lose all ability to safely use online systems to strengthen their position or even just call for help.

Again, it is a very odd thing for advocates of the vulnerable to call for more of this. And so we hope the court will take heed. Not only would allowing these claims to go forward violate the policy balance Congress carefully struck when it passed Section 230, and for good reasons that remain as valid today as they were back then, but it would outright hurt the very same people these advocates claim to help. To protect the vulnerable, we need to protect Section 230.

Posted on Techdirt - 28 June 2022 @ 12:12pm

Wherein The Copia Institute Tells The Supreme Court Not To Let Copyright Law Destroy Free Expression, A Rare Right We Ostensibly Have Left

I had to rewrite this post before it got published. I originally began it with some whimsy in response to the absurdity that copyright cases like these always engender. The idea that people could ever use their rights in previous expression to forbid someone else’s subsequent expression is almost too absurd to take seriously as an articulation of law. And, according to the Supreme Court, at least in the past, it wasn’t the law. Fair use is supposed to allow people to use pre-existing expression to say new things. In fact, if the new expression does say new things, then it absolutely should be found fair use.

In other words, the Second Circuit got things very wrong in the Andy Warhol/Prince prints case, and also the Ninth Circuit in the ComicMix/Dr. Seuss case. And so the Copia Institute filed an amicus brief at the Supreme Court, which agreed to review the Second Circuit’s decision, to say so.

But in light of the Supreme Court’s most recent decisions, I had to take out the whimsy. Assuming that Constitutional rights can survive this Court’s review has become an iffy proposition and not one where any lightheartedness can be tolerated. Our brief was all about pointing out how free speech is chilled when fair uses are prohibited, and how, if the Court would like not to see that constitutional right extinguished too, it needs to overturn the decision from the Second Circuit.

In that decision the Second Circuit last year had found that Andy Warhol’s Prince prints did not constitute a fair use of Lynn Goldsmith’s photograph of the musician Prince. But the problem with that decision isn’t just what it means for Warhol, or the Andy Warhol Foundation for the Visual Arts (AWF) that now controls the rights in his works, but what it means for everyone, because to find his work wasn’t fair use would mean that many fewer works ever could be fair uses in the future.

And such a reality would conflict with what the Supreme Court has previously said about fair use. Sadly, even when it comes to copyright, the Supreme Court has had a few absolute clunkers of decisions, like Aereo (“smells like cable!”), Golan (snatching works back from the public domain), and Eldred (okaying the extension of copyright terms beyond all plausible usefulness). But even in those last two cases the Court still managed to reaffirm how copyright law was always supposed to comport with the First Amendment, and how fair use was a mechanism baked into copyright to ensure copyright vindicated those values. And the Court has since reiterated how expansive fair use must be to vindicate them, most notably in the Google v. Oracle case last year, which reaffirmed its earlier fair use-protecting decision in Campbell v. Acuff-Rose (involving the 2 Live Crew parody of “Pretty Woman”).

Unfortunately, however, the Second Circuit’s decision was out of step with both of those fair use decisions, which is why AWF petitioned for Supreme Court review, probably a big reason why review was granted, and why the Copia Institute has now weighed in to support its position with our own amicus brief.

In our brief we made the point that copyright law has to be consistent with two constitutional provisions: the Progress Clause, which gives Congress the authority to pass law that “promotes the progress of science and the useful arts,” and the First Amendment, which prohibits Congress from passing a law that impinges on free expression. As long as copyright law promotes expression, it is potentially constitutional, but if it impinges on expression, then it cannot be constitutional, under either provision. (We also pointed to the dissents by Justice Breyer in Golan and Eldred, which cogently and persuasively made these points, because with him leaving the Court this month those dissents are the only way he can continue to speak to the Court’s future consideration of such an important question of free expression.)  The issue here in this case, however, is not that Congress tried to make an unconstitutional copyright law, but that the Second Circuit interpreted copyright law in a way that rendered it unconstitutional, with a limiting read of the fair use provision that now stands to chill myriad future expression, something even the majority decision in Eldred cast aspersions on courts doing.

We also pointed out how chilling it would be to new expression by citing the Ninth Circuit’s even more terrible decision in the ComicMix case, where, like the Second Circuit, it had similarly found the fair use provision to be much more narrowly applicable to new expression than the Supreme Court had, and we used that case to help illustrate why the Second Circuit’s reasoning was so untenable. In particular, both of these decisions discounted the degree to which the original works were transformed to convey new meanings not present in the original, extended the exclusive powers of a copyright holder far beyond what the statute itself authorized, and threatened to choke off new expression building on previous works for generations, given the extraordinary length of copyright terms. As the ComicMix case illustrated so saliently, if this be the rule, then the dead have the power to gag the living, and that reality cannot possibly be consistent with a law designed to foster the creation of new expression.

Then we concluded by noting that it’s a fallacy to presume that giving more and more power to a copyright holder translates into more expression. Not only is there plenty of evidence showing that more copyright power is unnecessary for stimulating more expression, but what these cases illustrate is that more power will ultimately result in even less.

Other amicus briefs are available on the Supreme Court’s docket page. We now await the response from Goldsmith and her amici, oral argument, currently scheduled for October 12, and, assuming precedent and actual Constitutional text still matter at all, a decision hopefully reversing the Second Circuit and reaffirming the right to free expression that the fair use doctrine is supposed to protect.

Posted on Techdirt - 13 June 2022 @ 01:04pm

With The INFORM Act, Congress Plans To Empower Ken Paxton To Go After Amazon If It Doesn’t Tell Him Who Sold The Books He Doesn’t Like

Don’t think this headline is hyperbole; as this post will explain, it is not.

But what follows here isn’t just about books, Amazon, or even Paxton himself. What the headline captures is but one example of the catastrophic upshot of the long-concerning INFORM Act bill, should it get passed, as may now happen, what with it having been shoved into the politically popular (and ridiculously enormous) United States Innovation and Competition Act that awaits passage [skip to Section 20213 in the linked document to find the bits on INFORM], despite the INFORM Act having nothing to do with helping America compete in the global economy (except insofar as a law like it tends to make that more difficult).

In short, it is a law with minimal merit but great potential for mischief given how it is currently drafted.  And in an age where government officials and others are openly eager to go after people for things they have said, it seems a certainty that, if enacted with this language, these concerns will be realized and result in serious expressive harm.

To understand why, it is important to recognize what the INFORM Act is for: to identify marketplace sellers.  To an extent, such a policy might seem to make sense because it helps sellers be held accountable.  As we regularly argue, it is better to hold sellers liable for things that go wrong with their sales than hold marketplaces liable for their sellers.  When law tries to do things the other way around and make marketplaces liable for their sellers, it creates an unmanageable risk of liability, which makes it hard for them to offer their marketplace services to any sellers (including those of perfectly safe products).  And that’s bad for smaller, independent sellers, who need marketplaces to get their products to consumers, as well as consumers who benefit from having more choices of sellers to buy from, which online marketplaces allow them to have.

So if you are going to hold sellers liable, having some way to know whom to hold accountable in the event that liability needs to be pursued could make some logical sense. On the other hand, it is not clear that such a rule mandating the identification of sellers is necessary, because consumers could use their ability to identify a seller as a factor in their purchasing decisions.  Consumers could choose to buy from a seller who voluntarily provided their identifying information over one who didn’t but who may be selling the product more cheaply, and consumers could make that purchasing decision based on whether it is worth it to them to pay a little more for more accountability, or to pay less and take the chance that there may be no recourse if something goes wrong.

It is a paternalistic Congress that would insist on taking away that choice entirely, and a regulation that puts legal pressure on marketplaces to force sellers to identify themselves, if marketplace platforms are going to be able to support any sellers at all, effectively reintroduces marketplace liability. Even if requiring seller identification might sometimes be a best practice for marketplaces to choose to require (and a basis upon which consumers could choose which marketplaces to shop from), it is something else entirely for law to demand it. There are often chilling consequences when platforms are forced to make their users do something – in general, but especially here, as this bill is currently drafted.

The fundamental problem with a rule that requires all sellers to identify themselves is that it will take away the ability to have anonymous sellers at all. And even if you think removing seller anonymity is a good outcome for when it comes to selling potentially dangerous products, destroying the right to sell things anonymously is an absolutely terrible outcome for myriad products where product safety is never an issue. Especially when these products are expressive. Think books. T-shirts. CDs. Is Congress worried that consumers will have no one to sue if they get a papercut, a rash, or a headache? This bill requires even sellers of those sorts of expressive goods to identify themselves, and such a law is simply not constitutional.

As we’ve discussed many, many times before, there is a right to speak anonymously baked into the First Amendment. And that right isn’t constrained by the medium used. People speak through physical media all the time, which is why they produce expressive things like books, t-shirts, and CDs, which consumers like to buy in order to enjoy that expression. But this law inherently requires anyone who would want to monetize their expression – again, a perfectly legal thing to do, and something that other law, like copyright, even exists to encourage – to identify themselves. And that requirement will be chilling to any of the many speakers eager to spread their message, who simply can’t pay that sort of price to do it.

There is some language in the bill that does sort of narrow the intended law’s applicability, but not adequately. (Or clearly: while it limits it to “high volume sellers,” there is one provision that defines “high volume” as $5,000 in annual sales or 200 transactions [§ (f)(3)(A)] and another that defines it as $20,000 [§ (b)(1)(A)(i)], but neither is very high if you are in the business of selling expressive products to make your living, or have any expressive product that happens to achieve significant popularity). There is also a tiny bit of mitigation for sellers that sell out of the home or via personal phone numbers [§ (b)(2)(A)], but it still puts an onus on them to regularly “certify” to the platform that these criteria apply and, still, information about them, including name and general location, will be disclosed to the world. In other words, these sellers will have to be identified, whenever they sell any sort of good, because the law’s definition of applicable goods is so broad [§ (f)(2)] and reaches even expressive goods for which there is no valid consumer safety interest for a law like this to vindicate that could survive the constitutional scrutiny needed to overcome the harm to the right of anonymous speech it will cause.

And the concern is hardly hypothetical, which returns us to the headline. The INFORM Act opens the door to state attorney general enforcement against marketplace platforms, with the ability to impose significant sanctions, potentially even if only a few of a marketplace platform’s users fail to identify themselves properly, because it will be easy for them to claim that an online marketplace is out of compliance with this law (there’s no real limiting language in it that might describe what non-compliance would look like) and in a way that “affects one or more residents of that State,” as every online marketplace inevitably does. [§ (d)(1)]. Of course, even as applied to non-expressive products this provision is a problem in how it gives states undue power over interstate commerce, which should be the exclusive domain of Congress. In fact, it’s a significant problem that individual states have already tried to impose their own versions of INFORM. These efforts provide the one legitimate reason for Congress to try to regulate here at all, in order to pre-empt that resulting mess. Yet this bill, as drafted, manages to only double-down on it.

But the concern for the threat to expressive freedom becomes especially palpable when you think about who can enforce it against whom, and for what. Texas state attorney general Ken Paxton serves as a salient Exhibit A for the nightmare this law would unleash. Would you like to write a book about any of the subjects states like Texas have tried to ban? If so, good luck self-publishing it anonymously. How about selling a t-shirt expressing your outrage at any of the policies states like Texas have tried to promote? Better hope your shirt isn’t so popular that you have to identify yourself! Same with CDs: your ability to make money from your music is conditioned on identifying yourself to the world, so you’d better be completely ok with that. Of course, the problem is not just that certain state attorneys general with a tendency to use their powers against people they don’t like can find you, but that, thanks to this law, anyone else who doesn’t like what you’ve said will be able to as well.

Again, even at best this law remains of dubious value as an enforceable policy, and unduly burdensome on sellers and marketplaces in a way that is likely to be costly. But if supporting it is the Faustian bargain Congress wants to basically blackmail affected constituencies into making in order to avoid something even worse (like SHOP SAFE, which has also been shoved into the same enormous competition bill and which would wreck e-commerce for everyone except maybe Amazon), then so be it. But not as currently drafted. Especially not with the attorney-general provision (which, even with less of a hair trigger and less super-charged enforcement powers, is still a bad idea in how it invites any and every state to mess with online interstate commerce as its own whims dictate), and certainly not with such broad applicability to essentially every seller of every sort of good.

To be constitutional this bill absolutely must, at minimum, exempt any seller of any expressive good from having to identify themselves, and no platform should be forced by this law to require otherwise. When the First Amendment says that Congress shall “make no law” abridging free expression, it means any law, including Internet marketplace law. Congress needs to abide by that prohibition and not so carelessly do such abridging here.

Posted on Techdirt - 3 June 2022 @ 03:38pm

Yet Again We Remind Policymakers That “Standard Technical Measures” Are No Miracle Solution For Anything

I’m starting to lose count of how many regulatory proceedings there have been in the last 6 months or so to discuss “standard technical measures” in the copyright context. Doing policy work in this space is like living in a zombie movie version of “Groundhog Day” as we keep having to marshal resources to deal with this terrible idea that just won’t die.

The terrible idea? That there is some miracle technological solution that can magically address online copyright infringement (or any policy problem, really, but for now we’ll focus on how this idea keeps coming up in the copyright context). Because when policymakers talk about “standard technical measures” that’s what they mean: that there must be some sort of technical wizardry that can be imposed on online platforms to miraculously eliminate any somehow wrongful content that happens to be on their systems and services.

It’s a delusion that has its roots going back at least to the 1990s, when Congress wrote into the DMCA the requirement that platforms “accommodate and […] not interfere with standard technical measures” if they wanted to be eligible for its safe harbor protections against any potential liability for user infringements. Even back then Congress had no idea what such technologies would look like, and so it defined them in a vague way, as technologies of some sort “used by copyright owners to identify or protect copyrighted works [that] (A) have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process; (B) are available to any person on reasonable and nondiscriminatory terms; and (C) do not impose substantial costs on service providers or substantial burdens on their systems or networks.” Which is a description that even today, a quarter-century later, correlates to precisely zero technologies.

Because, as we pointed out in our filing in the previous policy study, there is no technology that could possibly meet all these requirements, even just on the fingerprinting front. And, as we pointed out in this filing, in this policy study, even if you could accurately identify copyrighted works online, no tool can possibly identify infringement. Infringement is an inherently contextual question, and there is no way to load up any technical tool with enough of the information needed to correctly infer whether a work appearing online is infringing. As we explained, it is simply not going to know:

(a) whether there’s a valid copyright in the work at all (because even if such a tool could be fed information directly from Copyright Office records, registration is often granted presumptively, without necessarily testing whether the work is in fact eligible for copyright at all, or whether the party doing the registering is the party entitled to do it);

(b) whether, even if there is a valid copyright, it is one validly claimed by the party on whose behalf the tool is being used to identify the work(s);

(c) whether a copyrighted work appearing online is appearing online pursuant to a valid license (which the programmer of the tool may have no ability to even know about); or

(d) whether the work appearing online appears online as a fair use, which is the most contextual analysis of all and therefore the most impossible to pre-program with any accuracy – unless, of course, the tool is programmed to presume that it is.

Because the problem with presuming that a fair use is not a fair use, or that a non-infringing work is infringing at all, is that proponents of these tools don’t just want to be able to deploy these tools to say “oh look, here’s some content that may be infringing.” They want those tools’ alerts to be taken as definitive discoveries of infringement that will force a response from the platforms to do something about them. And the only response that will satisfy these proponents is (at minimum) removal of this content (if not also removal of the user, or more) if the platforms want to have any hope of retaining their safe harbor protection. Furthermore, proponents want this removal to happen irrespective of whether the material is actually infringing or not, because they also want to have this happen without any proper adjudication of that question at all.

We already see the problem of platforms being forced to treat every allegation of infringement as presumptively valid, as an uncheckable flood of takedown notices keeps driving offline all sorts of expression that is actually lawful. What these inherently flawed technologies would do is turn that flood into an even greater tsunami, as platforms are forced to credit every allegation the tools automatically spew forth every time they find any instance of a work, no matter how inaccurate the infringement conclusion actually is.

And that sort of law-caused censorship, forcing expression to be removed without there ever being any adjudication of whether the expression is indeed unlawful, deeply offends the First Amendment, as well as copyright law itself. After all, copyright is all about encouraging new creative expression (as well as the public’s access to it). But forcing platforms to respond to systems like these would be all about suppressing that expression, and an absolutely pointless thing for copyright law to command, whether in its current form as part of the DMCA or any of the new, equally dangerous updates proposed. And it’s a problem that will only get worse as long as anyone thinks that these technologies are any sort of miracle solution to any sort of problem.
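To make concrete the gap between “matching” and “infringing,” here is a toy sketch. Everything in it is hypothetical and vastly simplified (a real fingerprinting system uses perceptual matching, not an exact hash), but the limitation it illustrates is the same: the tool can only compare content, while every factor that actually determines infringement lives outside the content being compared.

```python
# Toy sketch, for illustration only: a "fingerprinting" tool reduced to its
# essence. It can tell whether two files contain the same work, but nothing
# it sees can answer the legal question of infringement.

import hashlib


def fingerprint(data: bytes) -> str:
    """Stand-in for a perceptual fingerprint: here, just a content hash."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical registry of works the tool has been told to look for.
REGISTERED_WORKS = {fingerprint(b"some copyrighted song"): "Song A"}


def flag_if_match(upload: bytes):
    """Return the matched work, or None if no match.

    Note everything this function CANNOT know:
      - whether the registration is actually valid
      - whether the claimant actually owns the rights
      - whether the uploader holds a license
      - whether the use is a fair use
    All of that context lives outside the bytes being compared.
    """
    return REGISTERED_WORKS.get(fingerprint(upload))


# A licensed copy and a pirated copy are byte-identical, so the tool flags
# both equally -- the match alone says nothing about lawfulness.
licensed_copy = b"some copyrighted song"
assert flag_if_match(licensed_copy) == "Song A"
assert flag_if_match(b"an unrelated home video") is None
```

The design point is that no amount of cleverness in `fingerprint()` changes the outcome: the licensed copy and the unlicensed one produce identical inputs, so any system that treats a match as a finding of infringement has simply presumed the answer to the question it was supposed to be deciding.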

Posted on Techdirt - 23 May 2022 @ 12:21pm

The Problem With The Otherwise Very Good And Very Important Eleventh Circuit Decision On The Florida Social Media Law

There are many good things to say about the Eleventh Circuit’s decision on the Florida SB 7072 social media law, including that it’s a very well-reasoned, coherent, logical, sustainable, precedent-consistent, and precedent-supporting First Amendment analysis explaining why platforms moderating user-generated speech still implicates their own protected rights. And not a moment too soon, while we wait for the Supreme Court to hopefully grant relief from the unconstitutional Texas HB20 social media bill.

But there’s also a significant issue with it: the court only found most of the provisions of SB 7072 presumptively unconstitutional, so some of the law’s less-obviously-yet-still-pernicious provisions have been allowed to go into effect.

These provisions include the requirements to disclose moderation standards (§ 501.2041(2)(a)) (the court only took issue with needing to post an explanation for every moderation decision), disclose when the moderation rules change (§ 501.2041(2)(c)), disclose to users the view counts on their posts (§ 501.2041(2)(e)), disclose that the platform has given candidates free advertising (§ 106.072(4)), and give deplatformed users access to their data (§ 501.2041(2)(i)). The analysis gave short shrift to these provisions it allowed to go into effect, despite their burdens on the same editorial discretion the court overall recognized was First Amendment-protected, despite the extent to which they violate the First Amendment as a form of compelled speech, and despite how they should be pre-empted by Section 230.

Of course, the court did acknowledge that these provisions might yet be shown to violate the First Amendment. For instance, in the context of the data-access provision the court wrote:

It is theoretically possible that this provision could impose such an inordinate burden on the platforms’ First Amendment rights that some scrutiny would apply. But at this stage of the proceedings, the plaintiffs haven’t shown a substantial likelihood of success on the merits of their claim that it implicates the First Amendment. [FN 18]

And it made a somewhat similar acknowledgment for the campaign advertising provision:

While there is some uncertainty in the interest this provision serves and the meaning of “free advertising,” we conclude that at this stage of the proceedings, NetChoice hasn’t shown that it is substantially likely to be unconstitutional. [FN 24]

And for the other disclosure provisions as well:

Of course, NetChoice still might establish during the course of litigation that these provisions are unduly burdensome and therefore unconstitutional. [FN 25]

Yet because the court did not already recognize how these rules chill editorial discretion, the rules will now get the chance to do that chilling. For example, it is unclear how a platform could even comply with them, especially a platform like Techdirt (or Reddit, or Wikimedia) that uses community-based moderation, whose moderating whims are impossible to know, let alone disclose, in advance of implementing them. Such a provision would seem to chill editorial discretion by making it impossible to choose such a moderation system, even when doing so aligns with the expressive values of the platform. (True, SB 7072 may not yet reach the aforementioned platforms, but that is little consolation if the platforms it does reach could still be chilled from making such editorial choices.)

The analysis was also scant with respect to the First Amendment prohibition against compelled speech, which these provisions implicate by forcing platforms to say certain things. Although this prohibition against compelled speech supported the court’s willingness to enjoin the other provisions, its analysis glossed over how this constitutional rule should have applied to these disclosure provisions:

These are content-neutral regulations requiring social-media platforms to disclose “purely factual and uncontroversial information” about their conduct toward their users and the “terms under which [their] services will be available,” which are assessed under the standard announced in Zauderer. 471 U.S. at 651. While “restrictions on non-misleading commercial speech regarding lawful activity must withstand intermediate scrutiny,” when “the challenged provisions impose a disclosure requirement rather than an affirmative limitation on speech . . . the less exacting scrutiny described in Zauderer governs our review.” Milavetz, Gallop & Milavetz, P.A. v. United States, 559 U.S. 229, 249 (2010). Although this standard is typically applied in the context of advertising and to the government’s interest in preventing consumer deception, we think it is broad enough to cover S.B. 7072’s disclosure requirements—which, as the State contends, provide users with helpful information that prevents them from being misled about platforms’ policies. [p. 57-8]

And because these provisions were not enjoined, the law will now compel platforms to publish information they weren’t already publishing, or even to significantly re-engineer their systems (such as to give users view count data).

In addition, the decision gave short shrift to how Section 230 pre-empts such requirements. This oversight may in part be due to the court’s finding that it was not necessary to reach Section 230 once it concluded that most of the law’s provisions should be enjoined (“Because we conclude that the Act’s content-moderation restrictions are substantially likely to violate the First Amendment, and because that conclusion fully disposes of the appeal, we needn’t reach the merits of the plaintiffs’ preemption challenge.” [p.18]).

But for the provisions where it didn’t find the First Amendment reason enough to enjoin them, the court ideally should have moved on to this alternative basis before allowing them to go into effect. Unfortunately, it’s also possible that the court really didn’t recognize how Section 230 was a bar to them:

Nor are these provisions substantially likely to be preempted by 47 U.S.C. § 230. Neither NetChoice nor the district court asserted that § 230 would preempt the disclosure, candidate-advertising, or user-data-access provisions. It is not substantially likely that any of these provisions treat social-media platforms “as the publisher or speaker of any information provided by” their users, 47 U.S.C. § 230(c)(1), or hold platforms “liable on account of” an “action voluntarily taken in good faith to restrict access to or availability of material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” id. § 230(c)(2)(A). [FN 26]

Fortunately, however, there will likely be opportunities to brief that issue more fully, as the case has now been remanded to the district court for further proceedings – this appeal concerned only whether the law was likely to be so legally dubious as to warrant being enjoined while it was challenged, but the challenge itself can continue. And it will proceed in the shadow of this otherwise full-throated defense of the First Amendment in the context of platform content moderation.

Posted on Techdirt - 18 May 2022 @ 12:11pm

And Now The Copia Institute Tells The US Supreme Court There’s A Big Problem With Texas’s Social Media Law

Last week a bizarre one-line order from the Fifth Circuit lifted the injunction on Texas’s social media law, allowing it to go into effect, despite all the massive problems with it – including the extent to which it violates the First Amendment and Section 230.

So NetChoice and CCIA filed an emergency application with the U.S. Supreme Court to try to have it at least reinstate the injunction while the case worked its way through the appellate courts. And yesterday Copia Institute filed an amicus brief supporting their application.

The brief is, in many ways, an encore performance of the brief we’d submitted to the Fifth Circuit, using ourselves and Techdirt as an example of how the terms of HB20 violate our constitutional and statutory rights, but this time around there are a few additional arguments that may be worth their own posts (in fact, one is a throwback to an old post). Also, one key argument we added applies less to the problems with HB20 itself and more to the problems with the Fifth Circuit lifting the injunction, especially in the way that it did. We pointed out that lifting the injunction, without any explanation for why, looked an awful lot like the sort of prior restraint that has long been considered verboten under the First Amendment. State actors (including courts) are not supposed to chill the exercise of expression unless and until there’s been the adjudication needed to find that the First Amendment permits that sanction. Here the Fifth Circuit technically heard the case, but it issued a sanction stymying the exercise of speech (lifting the injunction) without ever actually having ruled that HB20’s chilling terms were ok under the First Amendment. Perhaps the court truly thinks HB20 is perfectly sound under the First Amendment; we don’t really know. And we can’t know, because it didn’t say anything. Which also means there’s nothing to appeal, because if the Fifth Circuit made an error in thinking HB20 is ok (which seems likely, because that law conflicts with so much established First Amendment precedent, as well as common sense), no one can say where that error was, or what part of its judgment should be reversed.

Nevertheless, the law is out there, in effect now, doing harm to platforms’ expression. HB20 still needs to be thrown out on the merits, but for the moment we just all need the Supreme Court to get the bleeding to stop.

Posted on Techdirt - 9 May 2022 @ 12:18pm

Wherein The Copia Institute Reminds California’s New Privacy Agency That Its Regulations Need To Comport With The First Amendment

Last week the recently formed California Privacy Protection Agency held “pre-rulemaking stakeholder sessions” to solicit input on the regulations it intends to promulgate. I provided the following testimony on behalf of the Copia Institute.

Thank you for the opportunity to speak at these hearings. My name is Cathy Gellis, and I’m here representing myself and the Copia Institute, a think tank that regularly researches and comments on matters of tech policy, including as they relate to privacy and free speech.

I’m here today to talk about how privacy regulation and free speech converge in order to urge this board to carefully address the collision of any proposed regulation and the First Amendment, particularly with respect to the protection of speech and innovation. To do so I want to make three interrelated points.

First, as a general matter, it is important that any proposed regulation be carefully analyzed from a First Amendment perspective to make sure it comports with both its letter and spirit. When the First Amendment says “make no law” that abridges freedom of speech, that admonition applies to California privacy regulation. The enabling California legislation involved here itself acknowledges that it is only “intended to supplement federal and state law, where permissible, but shall not apply where such application is preempted by, or in conflict with, federal law, or the California Constitution,” and violating the First Amendment would run afoul of this clause.

It’s also important that any such regulation comport with the spirit of the First Amendment as well. The First Amendment exists to make sure we can communicate with each other, which is a necessary requirement of a healthy democracy and society. It would be an intolerable situation if these regulations were to chill our exchange of information and expression, or to unduly chill innovation. While wanting online services to be careful with how they handle the digital footprints the public leaves behind is admirable, the public would not be well served if new and better technologies couldn’t be invented, or new businesses or competitors couldn’t be established, because California privacy regulation was unduly burdensome or simply an obstacle to new and better ideas.

Along these lines a second point to make is that California is not Europe. Free speech concerns do not get “balanced” here and cannot be “balanced” without violating the First Amendment. The experience of the GDPR in Europe is instructive in warning what happens when regulators try to make such a balance, because inevitably free expression suffers.

For instance, privacy regulation in Europe has been used as a basis for powerful people to go after journalists and sue their critics, which makes criticizing them, even where necessary and even where perfectly legal under the First Amendment, difficult if not impossible, and thus chills such important discourse.

The GDPR has also been used to force journalists to divulge their sources, which is likewise anathema to the First Amendment and California law, while itself violating the privacy values wrapped up in journalist source protection. It also chills the necessary journalism a democratic society depends on. (As an aside, the journalistic arm of the Copia Institute has had its own reporting suppressed via GDPR pressure on search engines, so this is hardly a hypothetical concern.)

And it was the GDPR that opened the door to the entire notion of “right to be forgotten,” which, despite platitudes to the contrary, has had a corrosive effect on discourse and the public’s First Amendment-recognized right to learn about the world around them, while also giving bad actors the ability to whitewash history so they can have cover for more bad acts.

Meanwhile we have seen, in Europe and even the U.S., how regulatory demands that have the effect of causing services to take down content invariably lead to too much content being taken down. Because these regulatory schemes create too great a danger for any service that does not do enough to avoid sanction, services rationally choose to do too much, preferring to be safe rather than sorry. But when content has been taken down, it’s the world that needs it that is left sorry.

As well as the person who created the content, whose own expression has now been effectively harmed by an extrajudicial sanction. The First Amendment forbids prior restraint, which means that it’s impermissible for speech to be punished before having been adjudicated to be wrongful. But we see time and time again such prior restraint happen thanks to regulatory pressure on the intermediary services online speakers need to use to speak, which force them to do the government’s censorial dirty work for it by causing expressive content to be deleted, and without the necessary due process for the speaker.

Then there is this next example, which brings up my third point. Privacy regulation does not stay well-cabined so that it only affects large, commercial entities. It inevitably affects smaller ones, directly or indirectly. In the case of the GDPR, it affected the people who used Facebook to run fan pages, imposing upon these individuals, who simply wanted to have a place where they could talk with others about their favorite subject, cripplingly burdensome regulatory liability. But who will want to run these pages and foster such discourse when the cost can be so high? Care needs to be taken so that regulatory pressure does not lead to the loss of speech or community, as the GDPR has done.

And that means recognizing that there are a lot of online services and platforms that are not large companies. Which is good; we want there to be a lot of online services and platforms so that we have places for communities to form and converse with each other. But if people are deterred from setting up, say, their own fan sites, independent even of Facebook, then that’s a huge problem, because we won’t get those communities, or that conversation.

Society wants that discourse. It needs that discourse. And if California privacy regulation does anything to smother it with its regulatory criteria, then it will have caused damage, which this agency, and the public that empowered it, should not suborn.

Thank you again for this opportunity to address you. A version of this testimony with hyperlinks to the aforementioned examples will be published on Techdirt shortly.

Posted on Techdirt - 5 May 2022 @ 12:06pm

No, Software-Bricked Tractors Thwarting Russian Looters Is Not A Sign That Either John Deere Or Copyright Is Good

There are not enough words to describe the horrors of what Russian troops have been doing to their Ukrainian neighbors. But it should go without saying that stealing their stuff is, on its own, not ok.

But it turns out that some of what they’ve stolen is farming equipment. And modern farming equipment at that, which is encumbered with software. But because it is encumbered with software, that means that the ability to control the machine does not remain physically with the machine. Were it an old-school, software-less tractor it would just need to be filled up with fuel to start operating. But not so with modern tractors, whose onboard software systems require licensed users to operate them – whom looters definitely are not.

And so CNN is reporting that Russian looters are finding themselves unable to use the tractors they stole, because people elsewhere have been able to control the software and make it so the tractors can’t run. They just sit in their yards as piles of useless metal, instead of valuable agricultural assets.

The sophistication of the machinery, which are equipped with GPS, meant that its travel could be tracked. It was last tracked to the village of Zakhan Yurt in Chechnya. The equipment ferried to Chechnya, which included combine harvesters — can also be controlled remotely. “When the invaders drove the stolen harvesters to Chechnya, they realized that they could not even turn them on, because the harvesters were locked remotely,” the contact said. The equipment now appears to be languishing at a farm near Grozny.

But we should not get carried away celebrating the apparent schadenfreude, because what’s stymying these looters is itself dystopian. Even if you can argue that embedding software logic in a physical piece of equipment, like a tractor, makes for a better tractor, the idea that someone somewhere else can have dominion over that equipment does not make anything better. Even if the embedded software only functions as a form of LoJack, deterring thieves from taking equipment they know they won’t be able to use, it doesn’t follow that such software improves on how things were before. Sure, in this case the arrangement has kept thieves from benefiting from their ill-gotten gains, but in all too many situations it is instead bona fide owners who have been unable to benefit from their own properly purchased property. That is a huge problem this one example of apparent karmic justice does not and cannot redeem.
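The control inversion described above, a machine whose right to run lives on someone else’s server rather than with whoever holds the keys, can be sketched in a few lines. Everything here is hypothetical; it reflects no real vendor’s implementation, only the basic shape of a remote lock:

```python
# Hypothetical sketch of a remotely-lockable machine. Nothing here reflects
# any actual vendor's system; it only illustrates the control inversion:
# whoever administers the lock table, not whoever holds the physical keys,
# decides whether the engine starts.

LOCK_TABLE = {}  # vendor-side state: serial number -> locked?


def vendor_set_lock(serial: str, locked: bool) -> None:
    """Flip a machine's lock state from afar -- the owner has no say."""
    LOCK_TABLE[serial] = locked


def try_start(serial: str, has_physical_key: bool) -> bool:
    """The machine consults remote state before honoring its own key."""
    if LOCK_TABLE.get(serial, False):
        return False  # bricked from afar, key or no key
    return has_physical_key


# Physical possession of the key is no longer sufficient to use the machine.
vendor_set_lock("TRACTOR-1", True)
assert try_start("TRACTOR-1", has_physical_key=True) is False

# Only the remote party can restore the owner's ability to use it.
vendor_set_lock("TRACTOR-1", False)
assert try_start("TRACTOR-1", has_physical_key=True) is True
```

The sketch makes the post’s point visible in the code itself: `try_start` checks `LOCK_TABLE` before it ever checks the key, which is exactly the arrangement that is convenient when the possessor is a looter and dystopian when the possessor is the rightful owner.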

The irony is that teams of hackers have long been hard at work figuring out how to modify the embedded software so that tractor owners everywhere (including in the US) can do what they need to operate and maintain their own equipment. Including teams of Ukrainian hackers.

To avoid the draconian locks that John Deere puts on the tractors they buy, farmers throughout America’s heartland have started hacking their equipment with firmware that’s cracked in Eastern Europe and traded on invite-only, paid online forums. Tractor hacking is growing increasingly popular because John Deere and other manufacturers have made it impossible to perform “unauthorized” repair on farm equipment, which farmers see as an attack on their sovereignty and quite possibly an existential threat to their livelihood if their tractor breaks at an inopportune time. […] The nightmare scenario, and a fear I heard expressed over and over again in talking with farmers, is that John Deere could remotely shut down a tractor and there wouldn’t be anything a farmer could do about it. “What you’ve got is technicians running around here with cracked Ukrainian John Deere software that they bought off the black market.”

Of course, these hackers are unlikely to want to help their looting neighbors. Nor likely is John Deere. (Especially here, since many of the tractors appear to have been stolen from authorized dealers.) But even so, it’s not like the Russians will be returning any of these tractors to their rightful owners now that they’ve found they can’t use them. Simply depriving Ukrainians of their own property is a huge blow to them, and the looters may still profit from their thievery by scavenging the tractors for parts and raw materials. So it’s not like John Deere and its embedded software, or the copyright in that software that gives it such control over its sold machines, have managed to right a serious wrong.

But while we can easily recognize how wrong it is for looters to deprive people of the use of their own equipment, we somehow often miss how wrong it is for anyone to so deprive them. The reality is that if you’ve made it so that a tractor owner can’t use their own equipment, you might be a looter. But you also might be John Deere. The only difference is that the looter’s behavior is more clearly lawless, whereas John Deere’s is currently backed up by law. But the effect is just as wrong.

More posts from Cathy Gellis >>