Cathy Gellis’s Techdirt Profile

Posted on Techdirt - 21 October 2020 @ 9:31am

Trademark Genericide And One Big Way The DOJ Admits That Its Antitrust Lawsuit Against Google Is Utter Garbage

from the admitting-their-own-bullshit dept

Don't misread the title of this post to think there's only one thing wrong with the DOJ's antitrust complaint against Google. There's plenty. But on the list is this particular self-defeating argument included in the complaint -- the complaint where the DOJ basically has but one job: show that Google is a monopoly.

To understand it, we need to first understand the idea of "trademark genericide." That's what happens when your brand name is, well, just too good and people start using your branding as the default word to describe the product or service in general. Famous examples include "Band-Aid," "Thermos," "Xerox," and plenty of other words we're all used to using in lower-case form to describe things that aren't actually produced by the companies that had those trademarks.

The issue here is not actually whether Google has lost its trademark rights due to genericide, which is a technical question particular to the operation of trademark law and not relevant to the issues raised here. The DOJ isn't actually arguing that Google has, anyway. But what it is arguing is that the same basic dynamic has occurred, where the branded name has become a widely adopted synonym used to describe other people's similar goods and services. In doing so, however, it has blown up its own argument, because that means there are other similar goods and services. Which means that Google is not a monopoly.

Look at what it argued (emphasis added):

Google has thus foreclosed competition for internet search. General search engine competitors are denied vital distribution, scale, and product recognition—ensuring they have no real chance to challenge Google. Google is so dominant that “Google” is not only a noun to identify the company and the Google search engine but also a verb that means to search the internet. [complaint p. 4]

This argument makes no sense. On the one hand it asserts that Google has foreclosed competition for Internet search, and in almost the next breath it asserts (bizarrely, as an attempt at proving the first assertion) that "Google" has now become the generic word for Internet searching offered by everyone. If "Google" is now being used by consumers to describe the use of competing goods and services, it means that there are competing goods and services. Ergo, Google is not a monopoly, and thus the alleged premise for bringing this antitrust action is unsound.

There are, of course, many reasons why this antitrust action against Google is unsound, but it does seem odd that the DOJ would so candidly confess such a notable one in the introduction of its own complaint.

Especially because even the DOJ itself admitted later in the complaint that there are actually competing search engines, namely Bing, Yahoo, and DuckDuckGo.

Google has monopoly power in the United States general search services market. There are currently only four meaningful general search providers in this market: Google, Bing, Yahoo!, and DuckDuckGo. According to public data sources, Google today dominates the market with approximately 88 percent market share, followed far behind by Bing with about seven percent, Yahoo! with less than four percent, and DuckDuckGo with less than two percent. [p. 29]

But the argument it made in this later section to try to wish away the import of these competitors did not do much better than the previous one in the logic department.

There are significant barriers to entry in general search services. The creation, maintenance, and growth of a general search engine requires a significant capital investment, highly complex technology, access to effective distribution, and adequate scale. For that reason, only two U.S. firms—Google and Microsoft—maintain a comprehensive search index, which is just a single, albeit fundamental, component of a general search engine. Scale is also a significant barrier to entry. Scale affects a general search engine’s ability to deliver a quality search experience. The scale needed to successfully compete today is greater than ever. Google’s anticompetitive conduct effectively eliminates rivals’ ability to build the scale necessary to compete. Google’s large and durable market share and the significant barriers to entry in general search services demonstrate Google’s monopoly power in the United States. [p. 31]

Once again, with its rushed and unthoughtful lawyering the DOJ has managed to swing and miss in trying to argue that Google is a monopoly. Google obviously isn't, not with actual competitors, and the DOJ's apparent fallback argument that Google is somehow a monopoly due to monopolistic effect fails just as badly. It whines that scale is important for a search engine's success, and that there are significant barriers to entry to becoming a competitive player in the search engine space. But the DOJ offers nothing more than "it must be antitrust!" to hand-wave away why Google has managed to succeed better than its rivals, including rivals like Yahoo that entered the market long before Google (and for whom barriers to entry should not have been an issue), and rivals like Microsoft (which the DOJ acknowledges is able to achieve the same scale as Google). The market has had choices – choices that even the DOJ cannot ignore, no matter how desperately it might want to, given how their existence undermines its case.

And so with the "la-la-la-I-can't-hear-you" approach to antitrust enforcement the DOJ tries to wish these inconvenient facts away, arguing that Google's size and share of the market somehow magically evinces an antitrust violation, with little more support than "because we said so."

Which is not nearly a good enough basis for this sort of extraordinary action.


Posted on Techdirt - 20 October 2020 @ 9:37am

Section 230 Basics: There Is No Such Thing As A Publisher-Or-Platform Distinction

from the foundational-understanding dept

We've said it before, many times: there is no such thing as a publisher/platform distinction in Section 230. But in those posts we also said other things about how Section 230 works, and perhaps doing so obscured that basic point. So just in case we'll say it again here, simply and clearly: there is no such thing as a publisher/platform distinction in Section 230. The idea that anyone could gain or lose the immunity the statute provides depending on which one they are is completely and utterly wrong.

In fact, the word "platform" does not even show up in the statute. Instead the statute uses the term "interactive computer service provider." The idea of a "service provider" is a meaningful one, because the whole point of Section 230 is to make sure that the people who provide the services that facilitate others' use of the Internet are protected in order for them to be able to continue to provide those services. We give them immunity from the legal consequences of how people use those services because without it they wouldn't be able to – it would simply be too risky.

But saying "interactive computer service provider" is a mouthful, and it also can get a little confusing because we sometimes say "internet service provider" to mean just a certain kind of interactive computer service provider, when Section 230 is not nearly so specific. Section 230 applies to all kinds of service providers, from ISPs to email services, from search engines to social media providers, from the dial-up services we knew in the 1990s back when Section 230 was passed to whatever new services have yet to be invented. There is no limit to the kinds of services Section 230 applies to. It simply applies to anyone and everyone, including individual people, who are somehow providing someone else the ability to use online computing. (See Section 230(f)(2).)

So for shorthand people have started to colloquially refer to protected service providers as "platforms." Because statutes are technical creatures it is not generally a good idea to use shorthand terms in place of the precise ones used by the statutes; often too much important meaning can be lost in the translation. But in this case "platform" is a tolerable synonym for most of our policy discussions because it still captures the essential idea: a Section 230-protected "platform" is the service that enables someone else to use the Internet.

Which brings us to the term "publisher," which does appear in the statute. In particular it appears in the critically important provision at Section 230(c)(1), which does most of the work making Section 230 work:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In this provision the term "publisher" (or "speaker") refers to the creator of the content at issue. Who created it? Was it the provider of the computer service, aka the platform itself? Or was it someone else? Because if it had been someone else, if the information at issue had been "provided by another information content provider," then we don't get to treat the platform as the "publisher or speaker" of that information – and it is therefore immune from liability for it.

Where the confusion has arisen is in the use of the term "publisher" in another context as courts have interpreted Section 230. Sometimes the term "publisher" itself means "facilitator" or "distributor" of someone else's content. When courts first started thinking about Section 230 (see, e.g., Zeran v. AOL) they sometimes used the term because it helped them understand what Section 230 was trying to accomplish. It was trying to protect the facilitator or distributor of others' expression – or, in other words, the platform people used to make that expression – and using the term "publisher" from our pre-Section 230 understanding of media law helped the courts recognize the legal effect of the statute.

Using the term did not, however, change that effect. Or the basic operation of the statute. The core question in any Section 230 analysis has always been: who originated the content at issue? That a platform may have "published" it by facilitating its appearance on the Internet does not make it the publisher for purposes of determining legal responsibility for it, because "publishing" is not the same as "creating." And Section 230 – and all the court cases interpreting it – have made clear that it is only the creator who can be held liable for what was created.

There are plenty of things we can still argue about regarding Section 230, but whether someone is a publisher versus a platform should not be one of them. It is only the creator-versus-facilitator distinction that matters.


Posted on Techdirt - 15 October 2020 @ 3:34pm

We Interrupt This Hellscape With A Bit Of Good News On The Copyright Front

from the carl-malamud-versus-the-world dept

We've written about this case – or rather, these cases – a few times before: Carl Malamud published the entire Code of Federal Regulations at Public.Resource.org, including all the standards that the CFR incorporated and thus gave the force of law. Several organizations that had originally promulgated these various standards then sued Public Resource – in two separate but substantially similar cases later combined – for copyright infringement stemming from his having included them.

In a set of really unfortunate decisions, the district court upheld the infringement claims, finding that the standards were copyrightable (and also actually owned by the standards organizations claiming them, despite reason to doubt those ownership claims), and that Public Resource including them as part of its efforts to post the law online was not a fair use. But then the DC Circuit reversed that decision. While it generally left the overall question of copyrightability for another day, it did direct the district court to re-evaluate whether the publication of the standards was fair use.

Now back at the district court, the cases had proceeded to the summary judgment stage and were awaiting a new ruling from the court. One case still remains pending – ASTM v. Public.Resource.Org – but the other one, American Educational Research Association et al. v. Public.Resource.Org, has now been dismissed by the plaintiffs with prejudice. Effectively that means that Public Resource wins and can continue to host these standards online. Which is good news for Public Resource and its users. But it does still leave anyone else's ability to repost standards incorporated into law up in the air. Hopefully when the court eventually rules in the remaining case it will find such use fair, and in a way that lets others similarly avail themselves of the ability to fully publish the law.


Posted on Techdirt - 14 October 2020 @ 1:42pm

An Update On The Pretty Crummy Supreme Court Term So Far On Issues We Care About

from the bad-beginnings dept

As the Senate hearings continue over what the future United States Supreme Court may do, it's worth taking a moment to talk about what the current Court has already just done. The RBG-less Supreme Court is now back in session, and in view of the actions it's taken in at least four separate cases, it has not been an auspicious beginning.

Even some of the best news still managed to be awful. For instance, cert was denied in the Enigma Software v. Malwarebytes case. Denial is bad news because it leaves a terrible Ninth Circuit Section 230 decision on the books. On the other hand, the denial may have dodged a bullet. Section 230 is already in the cross-hairs of Congress and the Executive Branch; inviting the Supreme Court to go to town on it too seemed like a risky proposition, and Justice Thomas's unprompted statement ripping Section 230 jurisprudence to shreds makes clear how much damage the Court could do to this critically important law if it took on this case.

And the risk of cert being granted here just might not have been worth it. For one thing, the case may continue. The Ninth Circuit had overturned the original granting of defendant Malwarebytes' motion to dismiss, which sent the case back to the district court. Which means there could be another opportunity at some point later in the litigation for Malwarebytes to challenge the lousy reasoning the Ninth Circuit employed to revive the case. Of course, it's possible that the parties might settle and leave the Ninth Circuit decision on the books, unchallenged. Even if that happens, however, it's a precedent already called into question by the more recent Supreme Court decision in Bostock v. Clayton County, Ga. So it's not great that future defendants will have to argue around the Ninth Circuit's ruling, and it's by no means a certainty that the Bostock statutory construction argument would prevail, but at least there is something of substance to enable future defendants to make a good run at it.

Meanwhile, at least two other cert denials left us with even more bad news. One of these cases was Austin v. Illinois. Supreme Court review was sought after the Illinois Supreme Court left in place Illinois's revenge porn law. As we pointed out at the time – and the lower court in Illinois had recognized – the Illinois revenge porn law is not a content neutral law, and as such it's also not one sufficiently narrowly-tailored to meet the strict scrutiny the First Amendment requires. The law also doesn't take the intent of the defendant into account. Unfortunately the Illinois Supreme Court did not seem bothered by these constitutional infirmities and upheld the law. We were hoping the United States Supreme Court would recognize the problems and grant review to address them – but it didn't. The law now remains on the books. And while it might indeed punish some of the deserving people that a constitutional revenge porn law would also catch, the problem with unconstitutional laws is that they also tend to catch other people too, even those whose speech should have been constitutionally protected.

The other cert denial of note is in G&M Realty v. Castillo. This cert petition sought review of the shockingly awful Second Circuit decision doubling down on the terribly troubling EDNY decision awarding a multi-million dollar judgment ostensibly for violating the Visual Artists Rights Act (VARA). Never mind that, despite its apparent policy intent, VARA will actually lead to less public art and thus actually hurt artists, and ignore for the moment the short shrift the law gives to real property rights: these decisions managed to offend the Constitution in several other outrageous ways. As we explained previously, there were multiple due process issues raised by how this particular case was adjudicated and by the extraordinarily punitive penalty awarded against the defendant property owner, who had simply painted over his own building after the district court told him he could.

But the problem isn't just that this particular case was a travesty; what this case also illustrated is how badly VARA offends both the First Amendment and the equal protection clause of the Constitution. It gratuitously awards an extra benefit to only certain expression based in some way on the content of that expression, which is not supposed to happen. (Put another way: it also denies a benefit to certain expression based on its content.) It is an utterly irredeemable law, and it is a great shame that the Supreme Court refused to grant review, not just to overturn the Second Circuit's galling miscarriage of justice but to free us all from this law's unconstitutional reach. Assuming Congress will refuse to repeal it, we will have to await a new victim of the law with the means and ability to challenge their injury before we will have any chance of being rid of it.

The reality is that Supreme Court jurisprudence is always at best a mixed bag when it comes to copyright. Earlier this year it did produce a good decision in Georgia v. Public.Resource.org, but it missed a rare opportunity to restore sanity when it comes to the VARA amendment to the copyright statute. Now the question is whether it will restore sanity when it comes to how copyright in software works.

Oral argument at the Supreme Court was finally held last week in Google v. Oracle, after having been postponed from its original March hearing date due to the pandemic. It's impossible to read the tea leaves and know how the Court will rule, but it was hard to come away with much optimism. What was concerning about the hearing is the undercurrent reflected in the justices' questions suggesting that if the Court rules in Google's favor it is somehow doing Google a favor and diminishing Oracle's copyright, when in actuality it is Oracle's copyright claim that is much broader than the law has ever allowed.

Copyrights have always (or at least until recently) been understood as limited monopolies granting their owners a limited set of exclusive rights for limited periods. Over the years these periods have become less limited, and interpretations of what these exclusive rights cover have tended to get broader. But the basic monopoly has still always been curtailed by the subject-matter limitation of Section 102(b) of the statute, which limits what can actually be subject to copyright in the first place, and fair use, which limits what uses of the work the copyright owner can exclude.

Both of these limitations are at issue before the Court: whether Oracle could even claim a copyright monopoly over the API in the first place, and, even if it could, whether that copyright could allow it to prevent other people from freely using the software's API to make their own interoperable software, or whether that would be fair use. The complication in this case is that there's a special section in the copyright statute – Section 117 – that enunciates other exceptions to the reach of copyright in software, owing to software's unique nature that makes it different from other sorts of copyrightable works. Oracle argued that this section exhaustively articulated the limits on its software copyright, but if this view were correct it would mean that software copyright would not be subject to any of the other limitations that have always applied to every other form of copyright.

Worse, not only would such a conclusion be bad policy that would deter future software development – the sort of authorship a software copyright is supposed to incentivize – but it would also constitute a significant change from the status quo.

The one bit of good news to report is that at least Justice Sotomayor recognized this issue. In particular she observed how out of step the Federal Circuit's decisions had been with those of most other courts that had considered whether APIs could be subject to copyright. Their consensus had been no, and the freedom this view afforded software developers to make their software interoperable has enabled an entire industry to take root. In her questioning Justice Sotomayor appeared to recognize how badly it would threaten that industry if the Supreme Court adopted the Federal Circuit's decisions in favor of Oracle's copyright claims, because doing so would represent a significant change in the previously understood reach of a software copyright.

JUSTICE SOTOMAYOR: Counsel, at the --in your beginning statement, you had the sky falling if we ruled in favor of Google. The problem with that argument for me is that it seems that since 1992, and Justice Kagan mentioned the case, the Second Circuit case, a Ninth Circuit case, an Eleventh Circuit case, a First Circuit case, that a basic principle has developed in the case law, up until the Federal Circuit's decision. I know there was a Third Circuit decision earlier on in the 1980s. But the other circuits moved away from that. They and the entire computer world have not tried to analogize computer codes to other methods of expression because it's sui generis. They've looked at its functions, and they've said the API, the Application Programming Interface, of which the declaring code is a part, is not copyrightable. Implementing codes are. And on that understanding, industries have built up around applications that know they can -- they can copy only what's necessary to run on the application, but they have to change everything else. That's what Google did here. That's why it took less than 1 percent of the Java code. So I guess that's the way the world has run in every other system. Whether it's Apple's desktop or Amazon's web services, everybody knows that APIs are not -- declaring codes are not copyrightable. Implementing codes are. So please explain to me why we should now upend what the industry has viewed as the copyrightable elements and has declared that some are methods of operation and some are expressions. Why should we change that understanding? [transcript p. 52-53.]
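
To make the declaring-code/implementing-code distinction Justice Sotomayor describes a bit more concrete, here is a minimal, hypothetical Java sketch. It is not drawn from the litigation record; it simply mimics the familiar Math.max signature, and the class name is invented for illustration. The declaring code is the method's name, parameters, and return type – the part that must be reproduced exactly for existing programs calling it to keep working – while the implementing code is the logic underneath, which a reimplementer is free to write its own way (which, as the transcript above describes, is essentially what Google did).

// A simplified, hypothetical illustration of the distinction discussed above.
// The signature mirrors the familiar java.lang.Math.max method; the body is
// just one possible implementation, not a quote of any particular codebase.
public final class MathLike {

    // "Declaring code": the method's name, parameter types, and return type.
    // Existing callers depend on this exact shape staying the same.
    public static int max(int a, int b) {
        // "Implementing code": the logic that actually does the work.
        // A reimplementer can write this part however they like.
        return (a >= b) ? a : b;
    }
}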

The question is whether her skepticism about Oracle's copyright claim is one that will be adopted by the rest of the justices, or whether sometime later this term we'll be writing even more posts about how the Supreme Court has let everyone down on this front too.


Posted on Techdirt - 23 September 2020 @ 1:37pm

Busting Still More Myths About Section 230, For You And The FCC

from the human-readable-advocacy dept

The biggest challenge we face in advocating for Section 230 is how misunderstood it is. Instead of getting to argue about its merits, we usually have to spend our time disabusing people of their mistaken impressions about what the statute does and how. If people don't get that part right then we'll never be able to have a meaningful conversation about the appropriate role it should have in tech policy.

It's particularly a problem when it's a federal agency getting these things wrong. In our last comment to the FCC we therefore took issue with some of the worst falsehoods the NTIA had asserted in its petition demanding the FCC somehow seize imaginary authority it doesn't actually have to change Section 230. But reading a number of the public comments filed in support of that petition made clear that there was more to say to address these misapprehensions about the law.

The record developed in the opening round of comments in the [FCC's] rulemaking reflects many opinions about Section 230. But opinions are not facts, and many of these opinions reflect a fundamental misunderstanding of how Section 230 works, why we have it, and what is at risk if it is changed.

These misapprehensions should not become the basis of policy because they cannot possibly be the basis of *good* policy. To help ensure they will not be the predicate for any changes to Section 230, the Copia Institute submits this reply comment to address some of the recurrent myths surrounding Section 230, which should not drive policy, and reaffirm some fundamental truths, which should.

Our exact reply comment is attached below. But because it isn't just these agencies we want to make sure understand how this important law works, instead of merely summarizing it here we're also including a version of it in full:

As we told the FCC, there are several recurring complaints that frequently appear in the criticism leveled at Section 230. Unfortunately, most of these complaints are predicated on fundamental misunderstandings of why we have Section 230, or how it works. What follows is an attempt to dispel many of these myths and to explain what is at risk by making changes to Section 230 – especially any changes born out of these misunderstandings.

To begin with, one type of flawed argument against Section 230 tends to be premised on the incorrect notion that Section 230 was intended to be some sort of Congressional handout designed to subsidize a nascent Internet. The thrust of the argument is that now that the Internet has become more established, Section 230 is no longer necessary and thus should be repealed. But there are several problems with this view.

For one thing, it is technically incorrect. Prodigy, the platform jeopardized by the Stratton Oakmont decision, which prompted the passage of Section 230, was already more than ten years old by that point and handling large amounts of user-generated content. It was also owned by large corporate entities (Sears and IBM). It is true that Congress was worried that if Prodigy could be held liable for its users' content it would jeopardize the ability for new service providers to come into being. But the reason Congress had that concern was because of how that liability threatened the service providers that already existed. In other words, it is incorrect to frame Section 230 as a law designed to only foster small enterprises; from the very beginning it was intended to protect entrenched corporate incumbents, as well as everything that would follow.

Indeed, the historical evidence bears out the importance of this protection. For instance, in the United States, which, at least until now, has had much more robust platform protection, investment in new technologies and services has vastly outpaced that in Europe. (See the Copia Institute's whitepaper Don't Shoot the Message Board for more information along these lines.) Even within the United States there is a correlation between the success of new technologies and services and the strength of the available platform protection: those that rely upon the much more robust Section 230 immunity do much better than those that depend on the much weaker Digital Millennium Copyright Act safe harbors.

Next, it is also incorrect to say that Section 230 was intended to be a subsidy for any particular enterprise, or even any particular platform. Nothing in the language of Section 230 causes it to apply only to corporate interests. Per Section 230(f)(2) the statute applies to anyone meeting the definition of a service provider, as well as any user of a service provider. Many service providers are also small or non-profit, and, as we've discussed before, can even be individuals. Section 230 applies to them all, and all will be harmed if its language is changed.

Indeed, the point of Section 230 was not to protect platforms for their own sake but to protect the overall health of the Internet itself. Protecting platforms was simply the step Congress needed to take to achieve that end. It is clear from the preamble language of Section 230(a) and (b), as well as the legislative history, that what Congress really wanted to do with Section 230 was simultaneously encourage the most good online expression and the least bad. It accomplished this by creating a two-part immunity that shielded platforms both from liability arising from carrying speech and from liability for removing it.

By pursuing a regulatory approach that was essentially carrot-based, rather than stick-based, Congress left platforms free to do the best they could to vindicate both goals: intermediating the most beneficial speech and allocating their resources most efficiently to minimize the least desirable. As we and others have many times pointed out, including in our earlier FCC comment, even being exonerated from liability in user content can be cripplingly expensive. Congress did not want platforms to be obliterated by the costs of having to defend themselves for liability in their users' content, or to have their resources co-opted by the need to minimize their own liability instead of being able to direct them to running a better service. If platforms had to fear liability for either their hosting or moderation efforts it would force them to do whatever they needed to protect themselves but at the expense of being effective partners in achieving Congress's twin aims.

This basic policy math remains just as true in 2020 as it was in the 1990s, which is why it is so important to resist these efforts to change the statute. Undermining Section 230's strong platform protections will only undermine the overall health of the Internet and do nothing to help there be more good content and less bad online, which even the statute's harshest critics often at least ostensibly claim to want.

While some have argued that platforms who fail to be optimal partners in meeting Congress's desired goals should lose the benefit of Section 230's protection, there are a number of misapprehensions baked into this view. One misapprehension is that Section 230 contains any sort of requirement for how platforms moderate their user content; it does not. Relatedly, it is a common misconception that Section 230 hinges on some sort of "platform v. publisher" distinction, immunizing only "neutral platforms" and not anyone who would qualify as a "publisher." People often mistakenly believe that a "publisher" is the developer of the content, and thus not protected by Section 230. In reality, however, as far as Section 230 is concerned, platforms and publishers are actually one and the same, and therefore all are protected by the statute. The term "publisher" that appears in certain court decisions merely relates to the understanding of the word "publisher" to mean "one that makes public," which is of course the essential function of what a platform does to distribute others' speech. But content distribution is not the same thing as content creation. Section 230 would not apply to the latter, but it absolutely applies to the former, even if the platform has made editorial decisions with respect to that distribution. Those choices still do not amount to content creation.

In addition, the idea that a platform's moderation choices can jeopardize its Section 230 protection misses the fact that it is not Section 230 that gives platforms the right to moderate however they see fit. As we explained in our previous comment and on many other occasions, the editorial discretion behind content moderation decisions is protected by the First Amendment, not Section 230. Eliminating Section 230 will not take away platforms' right to exercise that discretion. What it will do, however, is make it practically impossible for platforms to avail themselves of this right, because it will force them to expend their resources defending themselves. They might eventually win, but, as we explained earlier, even exoneration can be an extinction-level event for a platform.

Furthermore, it would effectively eviscerate the benefit of the statute if its protection were conditional. The point of Section 230 is to protect platforms from the crippling costs of litigation; if they had to litigate to find out whether they were protected or not, there would be no benefit and it would be as if there were no Section 230 at all. Given the harms to the online ecosystem Section 230 was designed to forestall, this outcome should be avoided.

All of this information boils down to this essential truth: the NTIA petition should be rejected, and so should any other effort to change Section 230, especially one that embraces these misunderstandings.


Posted on Techdirt - 15 September 2020 @ 3:34pm

Because Too Many People Still Don't Know Why The EARN IT Bill Is Terrible, Here's A Video

from the AV dept

The biggest problem with all the proposals to reform Section 230 is that way too many people don't understand *why* they are a terrible idea. And the EARN IT bill is one of the worst of the worst, because it breaks not just Section 230 but also so much more, yet too many people remain oblivious to the issues.

Obviously there's more education to be done, and towards that end Stanford's Riana Pfefferkorn and I recently gave this presentation at the Crypto and Privacy Village at Defcon. The first part is a crash course in Section 230 and how it does the important work it does in protecting the online ecosystem. The second part is an articulation of all the reasons the EARN IT bill in particular is terrible and the specific damage it would do to encryption and civil liberties, along with ruining Section 230 and everything important that it advances.

We'll keep explaining in every way we can why Section 230 should be preserved and the EARN IT bill should be repudiated, but if you're the kind of person who prefers AV explanations, then this video is for you.

(Note: there's a glitch in the video at the beginning. Once it goes dark, skip ahead to about 3 minutes 20 seconds and it will continue.)


Posted on Techdirt - 11 September 2020 @ 1:35pm

The First Hard Case: Zeran V. AOL And What It Can Teach Us About Today's Hard Cases

from the congress-and-the-courts-got-it-right dept

A version of this post appeared in The Recorder a few years ago as part of a series of articles looking back at the foundational Section 230 case Zeran v. America Online. Since, to my unwelcome surprise, it is now unfortunately behind a paywall, but still as relevant as ever, I'm re-posting it here.

They say that bad facts make bad law. What makes Zeran v. America Online stand as a seminal case in Section 230 jurisprudence is that its bad facts didn’t. The Fourth Circuit wisely refused to be driven from its principled statutory conclusion, even in the face of a compelling reason to do otherwise, and thus the greater good was served.

Mr. Zeran’s was not the last hard case to pass through the courts. Over the years there have been many worthy victims who have sought redress for legally cognizable injuries caused by others’ use of online services. And many, like Mr. Zeran, have been unlikely to easily obtain it from the party who actually did them the harm. In these cases courts have been left with an apparently stark choice: compel the Internet service provider to compensate for the harm caused to the plaintiff by others’ use of their services, or leave the plaintiff with potentially no remedy at all. It can be tremendously tempting to want to make someone, anyone, pay for harm caused to the person before them. But Zeran provided early guidance that it was possible for courts to resist the temptation to ignore Section 230’s liability limitations – and early evidence that they were right to so resist.

Section 230 is a law that itself counsels a light touch. In order to get the most good content on the Internet and the least bad, Congress codified a policy that is essentially all carrot and no stick. By taking the proverbial gun away from an online service provider’s proverbial head, Congress created the incentive for service providers to be partners in achieving these dual policy goals. It did so in two complementary ways: First, it encouraged the most beneficial content by insulating providers from liability arising from how other people used their services. Second, Congress also sought to ensure there would be the least amount of bad content online by insulating providers from liability if they did indeed act to remove it.

By removing the threat of potentially ruinous liability, or even just the immense cost arising from being on the receiving end of legal threats based on how others have used their services, more and more service providers have been able to come into existence and enable more and more uses of their systems. It's let these providers resist unduly censoring legitimate uses of their systems in order to minimize their legal risk. And by being safe to choose what uses to allow or disallow from their systems, service providers have been free to allocate their resources more effectively to police the most undesirable uses of their systems and services than they would be able to if the threat of liability instead forced them to divert their resources in ways that might not be appropriate for their platforms, optimal, or even useful at all.

Congress could of course have addressed the developing Internet with an alternative policy, one that was more stick than carrot and that threatened penalties instead of offering liability limitations, but such a law would not have met its twin goals of encouraging the most good content and the least bad nearly as well as Section 230 actually has. In fact, it likely would have had the opposite effect, eliminating more good content from the Internet and leaving up more of the bad. The wisdom of Congress, and of the Zeran court, was in realizing that restraint was a better option.

The challenge we are faced with now is keeping courts, and Section 230’s critics, similarly aware. The problem is that the Section 230 policy balance is one that works well in general, but it is not always in ways people readily recognize, especially in specific cases with particularly bad facts. The reality is that people sometimes do use Internet services in bad ways, and these uses can often be extremely visible. What tends to be less obvious, however, is how many good uses of the Internet Section 230 has enabled to be developed, far eclipsing the unfortunate ones. In the 20-plus years since Zeran people have moved on from AOL to countless new Internet services, which now serve nearly 90 percent of all Americans and billions of users worldwide. Internet access has gone from slow modem-driven dial-up to seamless always-on broadband. We email, we tweet, we buy things, we date, we comment, we argue, we read, we research, we share what we know, all thanks to the services made possible by Section 230, but often without awareness of how much we owe to it and the early Zeran decision upholding its tenets. We even complain about Section 230 using services that Section 230 has enabled, and often without any recognition of the irony.

In a sense, Section 230 is potentially in jeopardy of becoming a victim of its own success. It’s easy to see when things go wrong online, but Section 230 has done so well creating a new normalcy that it’s much harder to see just how much it has allowed to go right. Which means that when things do go wrong – as they inevitably will, because, while Section 230 tries to minimize the bad uses of online services, it’s impossible to eliminate them all – we are always at risk of letting our outrage at the specific injustice tempt us to kill the golden goose by upending something that on the whole has enabled so much good.

When bad things happen there is a natural urge to do something, to clamp down, to try to seize control over a situation where it feels like there is none. When bad things happen the hands-off approach of Section 230 can seem like the wrong one, but Zeran has shown how it is still very much the right one.

In many ways the Zeran court was ahead of its time: unlike later courts that have been able to point to the success of the Internet to underpin their decisions upholding Section 230, the Zeran court had to take a leap of faith that the policy goals behind the statute would be borne out as Congress intended. It turned out to be a faith that was not misplaced. Today it is hard to imagine a world without all the benefit that Section 230 has ushered in. But if we fail to heed the lessons of Zeran and exercise the same restraint the court did then, such a world may well be what comes to pass. As we mark more than two decades since the Zeran court affirmed Section 230 we need to continue to carry its lessons forward in order to ensure that we are not also marking its sunset and closing the door on all the other good Section 230 might yet bring.


Posted on Techdirt - 3 September 2020 @ 9:44am

The Copia Institute's Comment To The FCC Regarding The Ridiculous NTIA Petition To Reinterpret Section 230

from the what-utter-crap dept

In his post Mike called the NTIA petition for the FCC to change the enforceable language of Section 230 laughable. Earlier I called it execrable. There is absolutely nothing redeeming about it, or Trump's Executive Order that precipitated it, and it has turned into an enormous waste of time for everyone who cares about preserving speech on the Internet because it meant we all had to file comments to create the public record that might stop this trainwreck from causing even more damage.

Mike's post discusses his comment. He wrote it from the standpoint of a small businessman and owner of a media website that depends on Section 230 to enable its comment section, as well as to help spread its posts around the Internet, and he took on the myth that content moderation is something that should inspire a regulatory reaction.

I also filed one, on behalf of the Copia Institute, consistent with the other advocacy we've done, including on Section 230. It was a challenge to draft; the NTIA petition is 57 pages of ignorance about the purpose and operation of the statute. There was so much to take issue with that it was hard to pick what to focus on. But among the many misstatements the most egregious was its declaration on page 14 that:

"[L]iability shields can deter entrance."

There is so much wrong with this statement. It's the flat earth equivalent of Internet policy, so utterly untethered from reality it's hard to know where to begin. So we made several points in response:

The first is that this whole regulatory exercise is likely unconstitutional. (See this recent post for the basic argument on that front.)

Secondly, we know that liability shields not only don't deter new platforms; they are necessary to get new platforms. Without protection platforms face ruinous legal costs practically from the get-go, and will struggle to even get capitalized in the first place, as what investor wants their cash just to go to legal fees? Even if a platform might ultimately not be found liable for their users' content, simply being on the receiving end of a lawsuit, or even just a demand letter, can be extremely costly. With lots of users, the potential risk exposure is staggering.

And we have seen what can happen to platforms when they get sued over user content in a notable area where Section 230 does not apply: copyright. As we pointed out in this comment, Veoh Networks was ultimately found not to be liable for copyright infringement. But by that point the company had already been bankrupted. If the NTIA is serious about wanting to stimulate competition and ensure Internet users have lots of choices of platforms to use, it should be plugging the still-few holes there already are in Section 230's protection, not deliberately trying to add more.

In addition, we discussed how much of the platform behavior the NTIA takes issue with is actually First Amendment-protected editorial discretion. As a threshold matter, we pointed out that in trying to take jurisdiction over what Section 230 says and how courts may interpret it, the FCC would end up asserting jurisdiction over non-social media platforms too. If Section 230 changes for any platforms, ultimately it will change for them all, including media sites like Techdirt, newspapers, and even individuals' Facebook posts that depend on this statute – even though these are the types of expressive outlets that have never been subject to FCC regulation. The claim for FCC jurisdiction over social media platforms is already extremely flimsy. But giving it power over these other sorts of platforms is constitutionally and statutorily insupportable.

In any case, the NTIA provided no coherent or Constitutional basis for why any platforms should lose their First Amendment-protected editorial discretion. We've discussed this before: if individuals should be able to decide which comments to maintain on their Facebook posts (and, as anyone who posts on Facebook can easily understand, they should), there needs to be a principled reason why individuals grouped together in large enough corporate form should lose that right.

Perhaps there is some distinction that can be found in antitrust law, but the NTIA didn't provide it. It just scapegoated certain commercial platforms that are large and popular. But being large and popular is no basis to lose a Constitutional right. Moreover, the "irony" is that taking away the statutory protection platforms depend on will only take it away from the potential competitors we should hope they have.

In short, the NTIA petition to the FCC is terrible and the FCC should reject it, and we did not hesitate to say so.


Posted on Techdirt - 1 September 2020 @ 12:02pm

Supreme Court To Courts And Federal Agencies Trying To Rewrite Section 230: Knock It Off

from the bottom-line dept

A version of this post appeared on Project Disco: What the Bostock Decision Teaches About Section 230.

Earlier this summer, in Bostock v. Clayton County, Ga. the Supreme Court voted 6-3 in favor of an interpretation of Title VII of the Civil Rights Act that bars discrimination against LGBT people. The result is significant, but what is also significant – and relevant for this discussion here – is the analysis the court used to get there.

What six justices ultimately signed onto was a decision that made clear that when a statute is interpreted, that interpretation needs to be predicated on what the statutory language actually says, not what courts might think it should say.

Ours is a society of written laws. Judges are not free to overlook plain statutory commands on the strength of nothing more than suppositions about intentions or guesswork about expectations. [p. 33]

This rule holds even when it might lead to results that were not necessarily foreseen at the time the legislation was passed:

Those who adopted the Civil Rights Act might not have anticipated their work would lead to this particular result. Likely, they weren’t thinking about many of the Act’s consequences that have become apparent over the years, including its prohibition against discrimination on the basis of motherhood or its ban on the sexual harassment of male employees. But the limits of the drafters’ imagination supply no reason to ignore the law’s demands. When the express terms of a statute give us one answer and extratextual considerations suggest another, it’s no contest. Only the written word is the law, and all persons are entitled to its benefit. [p. 2]

Courts do not get to second guess what Congress might have meant just because it may be applying that statutory text many years later, even after the world has changed. Of course the world changes, and Congress knows it will when it passes its legislation. If later on it thinks that a law hasn’t scaled to changed circumstances it can change the law. But per the Supreme Court, courts don’t get to make that change for Congress. The statute means what it says, and courts are obligated to enforce it the way Congress wrote it, regardless of whether they like the result.

The place to make new legislation, or address unwanted consequences of old legislation, lies in Congress. When it comes to statutory interpretation, our role is limited to applying the law’s demands as faithfully as we can in the cases that come before us. As judges we possess no special expertise or authority to declare for ourselves what a self-governing people should consider just or wise. And the same judicial humility that requires us to refrain from adding to statutes requires us to refrain from diminishing them. [p. 31]

Seth Greenstein already questioned how the Copyright Office’s Section 512 study can have any merit in the wake of the Bostock decision. In light of this Supreme Court ruling, it’s also hard to see how certain recent decisions denying Section 230 protection to platforms can survive. And it further calls into question the Constitutional propriety of the DOJ and NTIA efforts to reinterpret Section 230’s provisions and give these reinterpretations the force of law.

On the litigation front, the Bostock ruling calls into question the Ninth Circuit’s decision in Enigma Software Group v. Malwarebytes. In this case, Malwarebytes had been flagging Enigma’s software as malware based on its users’ complaints. Enigma sued, arguing that this flagging was an impermissible moderation decision because it was motivated by anticompetitive animus, since the two companies at least nominally offer similar sorts of software. This opinion came at the early stage of the lawsuit, so there has not been a finding that there was in fact any anticompetitive animus. But the Ninth Circuit decided that because there could have been an anticompetitive motivation behind Malwarebytes’s moderation decision flagging Enigma’s software as malware, Section 230 was not available to Malwarebytes as a defense that could get the lawsuit dismissed.

The issue raised by this decision is not whether there was anticompetitive motivation or not. Even if there were, the issue is that the Ninth Circuit decided that possible animus would bear on whether Malwarebytes’s moderation decision was protected by Section 230, because it read into Section 230 a limitation that wasn’t there:

We hold that the phrase “otherwise objectionable” [in subsection (c)(2) of the statute] does not include software that the provider finds objectionable for anticompetitive reasons. [p. 1045]

And that’s a problem, because it puts the court in the position of doing the policymaking that is supposed to be the purview of Congress and rewriting the statute around that policy. But as the Supreme Court reminded us in Bostock, the courts don’t get to make these sorts of changes to legislation.

Not even if the court thinks that the statutory language fails to achieve the policy Congress intended. In this case the Ninth Circuit spent some time exploring Congress’s apparent desire when it passed Section 230 for it to help stimulate competition. It then used that analysis to buttress its conclusion that Congress therefore must have meant to have this limitation to Section 230(c)(2) protection built into the statute. But as Bostock made clear, this rationale illustrates why the courts don’t get to fix the statute for Congress. Congress chose the language it wanted to vindicate its policy values. Sure, as with any statute, as things have evolved that language might potentially no longer be effective in achieving those values. But Congress understands that things can change, and if it now feels that the statutory language it previously chose is no longer working, then it can change it. That’s Congress’s job. It is not the job of the courts. Especially not if it turns out that Congress still thinks it is the better language to have.

Nor is it the job of any agency of the Executive Branch. True, the Bostock decision does not explicitly spell out that agencies are prohibited from making changes to legislation. But the Constitution is clear that legislating is the domain of Congress, and if the courts, who are charged with statutory interpretation, don't get to read new language into a statute, there is even less reason to believe that the Executive Branch gets to either.

Which puts the DOJ’s efforts to limit the availability of Section 230’s platform protection on extremely shaky ground, and it should make the NTIA petition for an FCC rulemaking dead on arrival. While the Ninth Circuit in Malwarebytes chose to read into Section 230(c)(2) a limitation that wasn’t there, the NTIA petition calls for the FCC to superimpose multiple, lengthy sections of regulatory code on top of the far more minimal statutory language Congress chose in order to provide Section 230 immunity more broadly.

In other words, NTIA has asked the FCC to usurp Congress’s legislative role and rewrite legislation, which is definitely not the FCC’s job. And for good reason, because the changes NTIA proposes the FCC make would turn Section 230 into an entirely different and much narrower law, one that vindicates entirely different policy goals than the ones Congress intended when it passed Section 230 to ensure that the Internet could continue to grow to be vibrant and competitive – and it would do so at those original goals’ expense.


Posted on Techdirt - 31 August 2020 @ 1:32pm

A Paean To Transparency Reports

from the encouraging-nudges-are-better-than-beatings dept

One of the ideas that comes up a lot in proposals to change Section 230 is that Internet platforms should be required to produce transparency reports. The PACT Act, for instance, includes the requirement that they "[implement] a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized." And the execrable NTIA FCC petition includes the demand that the FCC "[m]andate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers."

Any person providing an interactive computer service in a manner through a mass-market retail offering to the public shall publicly disclose accurate information regarding its content-management mechanisms as well as any other content moderation, promotion, and other curation practices of its interactive computer service sufficient to enable (i) consumers to make informed choices regarding the purchase and use of such service and (ii) entrepreneurs and other small businesses to develop, market, and maintain offerings by means of such service. Such disclosure shall be made via a publicly available, easily accessible website or through transmittal to the Commission.

Make no mistake: mandating transparency reports is a terrible, chilling, and likely unconstitutional regulatory demand. Platforms have the First Amendment right to be completely arbitrary in their content moderation practices, and requiring them to explain their thinking both chills their ability to exercise that discretion and presents issues of compelled speech, which is itself of dubious constitutionality. Furthermore, such a requirement threatens the moderation process on a practical level. As we are constantly reminding everyone, content moderation at scale is really, really hard, if not outright impossible, to get right. If we want platforms to nevertheless do the best they can, then we should leave them to focus on that task and not encumber them with additional, and questionable, regulatory obligations.

All that said, while it is not good to require transparency reports, they are nevertheless a good thing to encourage. With Twitter recently announcing several innovations to their transparency reporting (including now having an entire "Transparency Center" to gather all released data in one place), it's a good time to talk about why.

Transparency reports have been around for a while. The basic idea has remained constant: shed light on the forces affecting how platforms host user expression. What's new is these reports providing more insight on the internal decisions bearing on how platforms do this hosting. For instance, Twitter will now be sharing data about how it has enforced its own rules:

For the first time, we are expanding the scope of this section [on rules enforcement] to better align with the Twitter Rules, and sharing more granular data on violated policies. This is in line with best practices under the Santa Clara Principles on Transparency and Accountability in Content Moderation.

This data joins other data Twitter releases about manipulative bot behavior and also the state-backed information operations it has discovered.

Which bears on one of the most important reasons to have transparency reports: they tell the public how *external* pressures have shaped how platforms can do their job intermediating their users' expression. Historically these reports have been crucial tools in fighting attacks against speech because they highlight where the attacks have come from.

In some instances these censorial pressures have been outright demands for content removal. The Twitter report, for instance, calls out DMCA takedown notices and takedown demands predicated on trademark infringement claims. It also includes other legal requests for content removal. In its latest report, covering 2019, it found that

[i]n this reporting period, Twitter received 27,538 legal demands to remove content specifying 98,595 accounts. This is the largest number of requests and specified accounts that we’ve received since releasing our first Transparency Report in 2012.

But removal demands are not the only way that governments can mess with the business of intermediating user speech. One of the original purposes of these reports was to track the attempts to seek identifying information about platform users. These demands can themselves be silencing, scaring users into pulling down their own speech already made or biting their tongues going forward – even when their speech may be perfectly lawful and the public would benefit from what they have to say.

We've written many times before, quite critically, about how vulnerable speakers are to these sorts of abusive discovery demands. The First Amendment protects the right to speak anonymously, and discovery demands that platforms find themselves having to yield to can jeopardize that right.

As we've discussed previously, there are lots of different discovery instruments that can be propounded on a platform (e.g., civil subpoenas, grand jury subpoenas, search warrants, NSLs) to demand user data. They all have different rules governing them, which affects both their propensity for abuse and the ability of the user or platform to fight off unmeritorious ones.

Transparency reports can be helpful in fighting discovery abuse because they can provide data showing how often these different instruments are used to demand user data from platforms. The problem, however, is that all too often the data in the reports is generalized, with multiple types of discovery instruments all lumped together.

I don't mean they are lumped together the way the volume of NSLs can only be reported in wide bands. (But do note the irony that all of these Section 230 "reform" proposals mandating transparency reports do nothing about aspects of current law that actively *prevent* platforms from being transparent. If any of these proposals actually cared about the ability to speak freely online as much as they profess, their first step should be to remove any legal obstacle currently on the books that compromises speakers' First Amendment rights or platforms' ability to be protective of those rights – and the law regarding NSLs would be a great place to start.)

I mean that, for instance, multiple forms of data requests tend to get combined into one datapoint. In this aggregated form the reports have some informational value, but it obscures certain trends that are shaped by differences in each sort of instrument's rules. If certain instruments are more problematic than others, it would be helpful if we could more easily spot their impact, and then have data to cite in our advocacy against the more troubling ones.

In the case of Twitter, these "information requests" are reported as either government requests or non-government requests. The government requests are further broken into "emergency" and "routine," but not obviously broken out any further. On the other hand, Twitter has flagged CLOUD Act requests as something to keep an eye on when it goes into effect, as it will create a new sort of discovery instrument that may not adequately take into account the user and platform speech rights it implicates. But whether these existing government data requests were federal grand jury subpoenas, search warrants from any particular jurisdiction, NSLs, or something else is not readily apparent. Nor are the non-governmental requests broken out, even though it might be helpful to know whether a subpoena stemmed from federal civil litigation, state civil litigation, or was a DMCA 512(h) subpoena (where there may not be any litigation at all). Again, because the rules surrounding when each of these discovery instruments can be issued, and whether/how/by whom they can be resisted, differ, it would be helpful to know how frequently each is being used. Censorial efforts tend to take the path of least resistance, and this data can help identify which instruments may be most prone to abuse and in need of more procedural friction to stem it.

It may of course not be feasible to report with more granularity, whether for reasons such as the amount of labor required or rules barring more detailed disclosure (see, again, NSLs). And platforms may have other reasons for wanting to keep that information close to the chest. Which, again, is a reason why mandating transparency reports, or any particular informational element that might go into a transparency report, is a bad idea. But platforms are not alone; if one is being bombarded with certain kinds of information requests then they all likely are. Transparency on these details can help us see how no platform is alone and help us all advocate for whatever better rules are needed to keep everyone's First Amendment rights from being too easily trampled by any of these sorts of "requests."


Posted on Free Speech - 18 August 2020 @ 10:44am

Why Keep Section 230? Because People Need To Be Able To Complain About The Police

from the discourse-demands-it dept

The storm has passed and the charges have been dropped. But the fact that someone who tweeted about police behavior, and, worse, people who retweeted that tweet, were ever charged over it is an outrage, and to make sure that it never happens again, we need to talk about it. Because it stands as a cautionary tale about why First Amendment protections are so important – and, as we'll explain here, why Section 230 is as well.

To recap, protester Kevin Alfaro became upset by a police officer's behavior at a recent Black Lives Matter protest in Nutley, NJ. The officer had obscured his identifying information, so Alfaro tweeted a photo asking if anyone could identify the officer "to hold him accountable."

Several people, including Georgana Sziszak, retweeted that tweet. The next thing they knew, Alfaro, Sziszak, and several other retweeters found themselves on the receiving end of a felony summons charging them with "cyber harassment" of the police officer.

As we've already pointed out, the charges were as pointless as they were spurious, because the charging documents themselves directly unmasked the officer's identity, the very thing they maintained it was somehow a crime to ask for. Over at the Volokh Conspiracy, Eugene Volokh took further issue with the prosecution, and in particular its application of the New Jersey cyber harassment statute against the tweet. Particularly in light of an earlier case, State v. Carroll (N.J. Super. Ct. App. Div. 2018), he took a dim view:

N.J. Stat. 2C:33-4.1a(2), under which Sziszak is charged, provides, in relevant part,

A person commits the crime of cyber-harassment if, while making a communication in an online capacity via any electronic device or through a social networking site and with the purpose to harass another, the person … knowingly sends, posts, comments, requests, suggests, or proposes any lewd, indecent, or obscene material to or about a person with the intent to emotionally harm a reasonable person or place a reasonable person in fear of physical or emotional harm to his person.

According to the criminal complaint, the government's theory is that the post "caus[ed] Det. Sandomenico to fear that harm will come to himself, family and property."

But the Tweet (and the retweet) aren't "lewd, indecent, or obscene." ... [And] if the "lewd, indecent, or obscene" element isn't satisfied, N.J.S.A. 2C:33-4.1(a)(2) doesn't apply regardless of whether it was posted with the intent to "caus[e] Det. Sandomenico to fear that harm will come to himself, family and property."

These "cyber harassment" statutes are often problematic, targeting for punishment what should be protected and often socially valuable critical speech. Cases like these, where they get applied to criticism of state power, highlight the Constitutional concern. Being able to speak out against the state is at the heart of why we have the First Amendment, and laws interfering with that ability offend the Constitution. In this case, even if the New Jersey law had been drafted in a sufficiently narrow way to not be unconstitutional on its face by – in theory – only targeting speech beyond the protection of the First Amendment, applying it in this way to speech that should have been protected made it unconstitutional.

But while it's bad enough that the original tweeter had been targeted by the police for his speech, the aspect of the story that is most worrying is that police also targeted for prosecution people who had simply retweeted the original tweet. Section 230 should have barred such prosecutions. And before we so casually chuck out the statute, as so many propose, we need to understand why it should have applied here, and why it is so important to make sure that it still can in the future.

The First Amendment and Section 230 both exist to foster discourse. Discourse is more than just speech; it's the exchange of ideas. The First Amendment protects their expression, and Section 230 their distribution. Especially online, where speaking requires the facilitation of others, we need both: the First Amendment to make it possible to speak, and Section 230 to make it possible to be heard.

This case illustrates why it is so important to have both, and why Section 230 applies, and must apply, to more than just big companies. Here, someone tweeted protected speech to notify the community of concerning police behavior. Section 230 ensured that the Internet platform – in this case, Twitter – could exist to facilitate that speech. And it's good that Section 230 meant that Twitter could be available to play that role. But Alfaro only had 900 followers; Twitter helped him speak, but it was the retweeters who turned that speech into discourse by helping it reach the community. They had just as important a role to play in facilitating his speech as Twitter did, if not even more so.

It's important to remember that the statutory text of Section 230 in no way limits its protection to big Internet companies, or even to companies at all. It simply differentiates between whoever created the expression at issue (and can thus be held to answer for it) and whoever facilitated its distribution online (and therefore can't be). Given how important that facilitation role is to having meaningful public discourse, we need to ensure that everyone who performs it is protected. In fact, it may be even more important to ensure that individual facilitators can maintain this protection than the larger and more resourced corporate platforms that can better weather legal challenges.

Think about it: think about how many of us share content online. Many of us may even share far more content created by others than we create ourselves. But all that sharing would grind to a halt if we could be held liable for anything allegedly wrong with that content. Not just civilly, but, as this case shows, even criminally.

And that chilling is not a good thing. One could certainly argue that people should take more care when they share content online and do the best they can in vetting it before sharing it, to the extent it is possible. Of course, it could also be fairly said that many people should use their right to free speech more productively than they necessarily do. But the reason we protect speech, even low-value speech, is because we need to make sure that the good, socially beneficial speech we depend on to keep our democracy healthy can still get expressed too. Which is also why we have Section 230: it is not possible to police all the third-party created content we intermediate, and if we want to make sure that the good, socially beneficial content can get through, to reach the people who need to hear it, then we need to make sure that we don't have to. When we snip away at Section 230's protection, or limit its application, we obstruct that spread and curtail the discourse society needs. We therefore do so at our peril.

Obviously in this case Section 230 did not prevent the attempted prosecution. Nor did the First Amendment, and that the police went after anyone over the tweet was an unacceptable abuse of authority that imposed an enormous cost. Discourse was damaged, and the targeted Twitter users may now think twice before engaging in online discourse at all, much less discourse intending to keep state power in check. These are costs that we, as a society, cannot afford to bear.

But at least because both of these defenses were available, the terrible toll this attempted prosecution took was soon abated. Think about how much worse it would have been had they not been. And ask why that is a future we should spend any effort trying to invite. Our sole policy goal should be to enhance our speech protections and to impose costs on those who would undermine public discourse through their attempts at abusive process. The last thing we should be doing is taking steps to whittle away at those protections and make it any easier to chill discourse than it already is, and cases like this one, where people were trying to speak out against abuses of power, illustrate why.


Posted on Techdirt - 22 July 2020 @ 1:44pm

Tech Policy In The Time Of Trump: Mid-2020 Edition

from the this-is-not-a-drill dept

We're not partisan here at Techdirt. We have our personal preferences, certainly, but technology policy tends to transcend normal political divisions. We have been just as likely to see good policy proposals from Democrats as Republicans, and bad ones just the same. What we care about here is ensuring that the founding principles of liberty articulated by the Constitution can be meaningfully applied in a modern, technology-driven world. That value is not a partisan one. We don't care who the hero is who makes sure we do not spiral into dystopia; we just want to make sure we don't. And our job is to point out how we may already be.

For the first years of the Trump Administration I took to writing annual summaries of how things might shake out on the tech policy front given the current make-up of government. And then I stopped. By then we had children in cages, and suddenly trying to read the political tea leaves seemed like a remarkably pointless exercise. Also unhelpful, glib, and potentially even harmful. There is no point in acting as though everything is politics as usual when the situation has become anything but. A horrific line had been crossed, and it wasn't even the first. But unless everyone recognized how dangerously abnormal politics had become, it would certainly not be the last.

And yet it sadly appears that politics has chugged on as usual. And as a result more uncrossable lines have, indeed, been crossed. As was inevitable, yesterday's rounding up of immigrants became today's rounding up of American citizens.

So if we're going to talk about tech policy in the time of Trump, we need to be worried about what will happen tomorrow. Our paramount concern therefore needs to be ensuring that tech policy enables us to check further misuse of power. It certainly must not help further entrench it. So let's dig in and see where we are. In my original posts I distilled my comments into four general policy areas that now seem trivially pedestrian. The breakdown implies that we can simply focus on a particular area and its localized political skirmishes and leave the others for another day. Which is silly; when the whole house is on fire, focusing on how an individual room may be decorated is not going to be an effective way of addressing the actual crisis at hand. But for the sake of uniformity, I might as well continue with the same organization.

Free speech/copyright. President Trump is infamous for lying. But there's one thing he said that has been true: that he was going to "open up" our libel laws to make it even easier to sue someone for their expression. In fact he's gone even further than that, undermining every expressive right the First Amendment guarantees, including the right to protest, which he has now co-opted federal forces to physically attack.

But as for making it easier to sue people for their speech, he has done that by example, as he and his confederates have launched specious lawsuit after specious lawsuit against speakers, platforms, and traditional press and publishers to challenge their critical (and generally completely lawful) contributions to needed public discourse. On more than one occasion he didn't even wait for them to make the speech before suing to shut them down. It turns out he didn't need to change a single law to effectively obviate the right to free speech; he just had to drown out the voices speaking against him with a flood of litigation in order to silence them.

The running theme throughout this commentary is that lawmakers should not waste time with the traditional horse-trading that fills the corridors of our capitols as policy normally gets set. We do not have the luxury, here in 2020, of developing policy that would optimize life in America; at the moment our only task is to save it. And that requires recognizing the urgency of the moment, because if you don't vote against totalitarianism when you have the chance, you may never have the chance again. So while there are plenty of areas where ordinarily lawmakers should act to articulate good policy in law, including on the tech policy front, right now there is no policy value more important for lawmakers to express in law than preserving the right to expression.

In particular, they should waste no time getting effective anti-SLAPP laws on the books. Every state needs one (looking at you, Virginia…). As does the entire federal legal system, so that we can ensure that federal courts can no longer be the refuge of the censor eager to chill the speech of their critics. Do not pass go; drop almost everything else to get this done. Because if we cannot ensure the public's right to speak out against oppression, then we all but guarantee that oppression will prevail.

Which brings us to copyright, the deck chairs on this sinking Titanic. Could copyright policy be better attuned to the economics of producing and consuming expression in the 21st century? Perhaps. But at the moment that policy challenge is largely irrelevant. The very ability to create and consume expression is itself under fire, and our sole goal needs to be to preserve it. Copyright law inherently is about controlling expression, and that's the last thing we need to be empowering anyone to do.

Mass surveillance/encryption. We have been warning for years against giving the police the unchecked power to invade people's privacy. The ability people need to have to keep their personal affairs free from the prying eyes of the government is no less essential to preserve now, in the 21st Century, than it ever was in the 18th. If anything it is even more important to hold fast to the constitutional barriers that prevent the government from readily invading our private lives now that so much of those lives – personal choices, associations, ideas, etc. – is so casually captured in digital records that are so easy for the government to track.

We also challenged the excuses law enforcement gave for why they needed this exceptional ability to bypass the basic constitutional tenets normally prohibiting them from helping themselves to this data. They were nearly all predicated on the assumption that the state authority was the good guy and that it needed to save the public from the bad guys hiding among us. We challenged these arguments because these assumptions were inherently unsound – as the news lately has been daily proving.

It is proving us right on a local level – see all the examples of violent police behavior that have inspired weeks and weeks of protest – and increasingly on the federal level, as President Trump unleashes federal forces against those who speak against him. These are not the acts of benevolent protectors we can safely entrust with the awesome power of the state, unchecked. These are the acts of the sorts of bad actors that our civil liberties were designed to protect us from. But when we bless digital surveillance programs that ignore our constitutional protections, and undermine the encryption technology that allows us to make the protections meaningful on a practical level, we make ourselves vulnerable to abuses of power by eliminating our defenses against them. No policymaker committed to the enduring idea of American democracy can possibly advocate in good faith or with intellectual coherence for any policy agenda that continues down such a destructive path. When a powerful state actor has already abused his power against the public, it makes no sense to give him more power to continue that abuse, and it is beyond naïve to believe it wouldn't be so abused. Not when we can already see in painful clarity how much it already has.

Net neutrality/intermediary liability. The political corruption of antitrust enforcement has poisoned this entire policy area. Net neutrality stands for the principle of non-discrimination on the part of service providers enabling the public's online expression. For Internet services where there is no meaningful competition, regulation committed to maintaining that principle is important. It is not, however, useful to enforce that principle in areas where there is competition. In fact, it presents its own harm to expressive liberty when these service providers are denied the freedom to discriminate. Having some sort of principled, meaningful, and consistent way of identifying which service providers are which is therefore crucial. Yet that is not what we've got. Instead we have angry, reactionary, inconsistent, unrealistic, unwise, and often unconstitutional policy demands from both sides of the aisle.

The upshot is that people's ability to speak freely online is at risk. The only way we can protect that ability is by protecting and promoting the existence of the service providers that enable it. Which means not only encouraging the competitive market needed to ensure there are enough avenues for basic Internet access, but also ensuring that there are no barriers limiting our supply of other platforms. Unfortunately, we are currently doing the complete opposite on both fronts, and in the process directly preventing needed lawful discourse.

In some cases it's because people can't get online at all. Either they don't have any service due to a failure of broadband competition policy, or, worse, because we have forced service providers to deny their expression. In those cases we've sometimes used copyright as the rationale to bludgeon service providers into removing speech or even kicking users off their services entirely (and regardless of whether they had actually violated any law). But copyright is also not the only way we have scared providers into pre-emptively kicking off users or their expression with the plausible fear of being held liable for that expression. The inscrutable FOSTA has already directly chilled platforms and the lawful expression they facilitate, and now lawmakers are threatening even more cumbersome regulation that would do even more to terrify platforms into removing user expression, if not into ceasing to exist entirely.

When the United States of America is teetering towards autocracy, it is not the time to impose any policy that would inhibit the public's ability to use the Internet to speak out against it. But that's what most of the proposals being put forth that target service providers threaten to do, from undermining their Section 230 immunity, to further conditioning their DMCA safe harbor, to even encumbering them excessively with ill-tailored regulations on the privacy and security front. Any policy that will have the effect of reducing the supply of online outlets or constraining their ability to enable protected speech – as all these policy proposals do – will only invite disaster when it erodes our ability to use the Internet to speak out against abuses of power, including state power. They all are a mistake.

Internet governance. In his tenure President Trump has accomplished two things: (1) eroding international cooperation and the US's commitment to the public international law that supports it, and (2) empowering autocrats. In the previous posts I lamented how Trump has also undermined the organs enabling international cooperation, but maybe it's just as well. Internationalism inherently wrangles input from around the globe, and that input increasingly includes hostility to freedom. The United States should be standing against this trend. Our tradition of liberty should be our chief export. But so long as all we are busy modeling is our indifference to freedom, if not also our abject surrender of it, then there may be no point in engaging with other national governments that would hasten freedom's demise for everyone, since engagement only gives them the institutional foothold from which to do it.


Posted on Techdirt - 21 July 2020 @ 10:45am

A Case Where The Courts Got Section 230 Right Because It Turns Out Section 230 Is Not Really All That Hard

from the helpful-precedent dept

Having just criticized the Second Circuit for getting Section 230 (among other things) very wrong, we should also point out an occasion where it got it very right. The decision in Force v. Facebook came out last year, but the Supreme Court recently denied any further review, so it's still ripe to talk about how this case could, and should, bear on future Section 230 litigation.

It is a notable decision, not just in terms of its result upholding Section 230 but in how it cut through much of the confusion that tends to plague discussion regarding Section 230. It brought the focus back to the essential question at the heart of the statute: who imbued the content at issue with its allegedly wrongful quality? That question really is the only thing that matters when it comes to figuring out whether Section 230 applies.

This case was one of the many seeking to hold social media platforms liable for terrorists using them. None of them have succeeded, although for varying reasons. For instance, in Fields v. Twitter, in which we wrote an amicus brief, the claims failed but not for Section 230 reasons. In this case, however, the dismissal of the complaint was upheld on Section 230 grounds.

The plaintiffs put forth several theories about why Facebook should not have been protected by Section 230. Most of them tried to construe Facebook as the information content provider of the terrorists' content, and thus not entitled to the immunity. But the Second Circuit rejected them all.

Ultimately the statute is simple: whoever created the wrongful content is responsible for it, not the party who simply enabled its expression. The only question is who created the wrongful content, and per the court, "[A] defendant will not be considered to have developed third-party content unless the defendant directly and 'materially' contributed to what made the content itself 'unlawful.'" [p. 68].

Section 230 really isn't any more complicated than that. And the Second Circuit clearly rejected some of the ways people often try to make it more complicated.

For one thing, it does not matter that the platform exercised editorial judgment over which user content it displayed. After all, even the very decision to host third-party content at all is an editorial one, and Section 230 has obviously always applied in the shadow of that sort of decision.

The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown. Placing certain third-party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third-party content displayed to users based on, among other things, users' geolocation, language of choice, and registration information. And, of course, the services must also decide what type and format of third-party content they will display, whether that be a chat forum for classic car lovers, a platform for blogging, a feed of recent articles from news sources frequently visited by the user, a map or directory of local businesses, or a dating service to find romantic partners. All of these decisions, like the decision to host third-party content in the first place, result in "connections" or "matches" of information and individuals, which would have not occurred but for the internet services' particular editorial choices regarding the display of third-party content. We, again, are unaware of case law denying Section 230(c)(1) immunity because of the "matchmaking" results of such editorial decisions. [p. 66-67]

Nor does it matter that the platforms use algorithms to help automate editorial decisions.

[P]laintiffs argue, in effect, that Facebook's use of algorithms is outside the scope of publishing because the algorithms automate Facebook's editorial decision-making. That argument, too, fails because "so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process." [p. 67]

Even if the platform uses algorithms to decide whether to make certain content more "visible," "available," and "usable," that does not count as developing the content. [p. 70]. Nor does simply letting terrorists use its platform make it a partner in the creation of their content. [p. 65]. The court notes that in cases where courts have found platforms liable as co-creators of problematic content, they had played a much more active role in the development of specific instances of problematic expression than simply enabling it.

Employing this "material contribution" test, we held in FTC v. LeadClick that the defendant LeadClick had "developed" third parties' content by giving specific instructions to those parties on how to edit "fake news" that they were using in their ads to encourage consumers to purchase their weight-loss products. LeadClick's suggestions included adjusting weight-loss claims and providing legitimate-appearing news endorsements, thus "materially contributing to [the content's] alleged unlawfulness." [We] also concluded that a defendant may, in some circumstances, be a developer of its users' content if it encourages or advises users to provide the specific actionable content that forms the basis for the claim. Similarly, in Fair Housing Council v. Roommates.Com, the Ninth Circuit determined that—in the context of the Fair Housing Act, which prohibits discrimination on the basis of sex, family status, sexual orientation, and other protected classes in activities related to housing—the defendant website's practice of requiring users to use pre-populated responses to answer inherently discriminatory questions about membership in those protected classes amounted to developing the actionable information for purposes of the plaintiffs' discrimination claim. [p. 69]

Of course, as the court noted, even in Roommates.com, the platform was not liable for any and all potentially discriminatory content supplied by its users.

[I]t concluded only that the site's conduct in requiring users to select from "a limited set of pre-populated answers" to respond to particular "discriminatory questions" had a content-development effect that was actionable in the context of the Fair Housing Act. [p. 70]

Woven throughout the decision the court also included an extensive discussion, [see, e.g., p. 65-68], about that perpetual red herring: the term "publisher," which keeps creating confusion about the scope of the law. One of the most common misconceptions about Section 230 is that it hinges on some sort of "platform v. publisher" distinction, immunizing only "neutral platforms" and not anyone who would qualify as a "publisher." People often mistakenly believe that a "publisher" is the developer of the content, and thus not protected by Section 230. In reality, however, for purposes of Section 230 platforms and publishers are actually one and the same, and therefore all protected by it. As the court explains, the term "publisher" just stems from the understanding of the word as "one that makes public," [p. 65], which is the essential function of what a platform does to distribute others' speech, and that distribution is not the same thing as creation of the offending content. Not even if the platform has made editorial decisions with respect to that distribution. Being a publisher has always entailed exercising editorial judgment over what content to distribute and how, and, as the court makes clear, it is not suddenly a basis for denying platforms Section 230 protection.


Posted on Free Speech - 17 July 2020 @ 9:40am

Second Circuit Wrecks All Sorts Of First Amendment Protections To Keep Lawsuit Against Joy Reid Alive

from the what-public-discourse dept

The Second Circuit just issued an ugly decision in a defamation lawsuit against Joy Reid. It not only revived the case against her, but it greased the skids for many more defamation cases to be brought in federal court, including plenty even less meritorious.

The case, La Liberte v. Reid, involves two of Reid's social media posts from 2018. The first was from June 29:

At some point during the Council Meeting, La Liberte was photographed interacting with a fourteen-year-old teenager who appears to be (and is) Hispanic (the "Photograph"). The Photograph showed La Liberte with her mouth open and her hand at her throat in a gagging gesture. On June 28th, a social media activist named Alan Vargas tweeted the Photograph along with the following caption: "'You are going to be the first deported' [and] 'dirty Mexican' [w]ere some of the things they yelled they yelled [sic] at this 14 year old boy. He was defending immigrants at a rally and was shouted down. Spread this far and wide this woman needs to be put on blast." The Photograph went viral. The next day, Joy Reid, a personality on the MSNBC cable station, retweeted (i.e., shared) the Vargas tweet to her approximately 1.24 million followers. (La Liberte is not alleging defamation by Reid as to that communication.) Later that same day (June 29), Reid posted the Photograph on her Instagram with the following caption: "He showed up to a rally to defend immigrants . . . . She showed up too, in her MAGA hat, and screamed, 'You are going to be the first deported' . . . 'dirty Mexican!' He is 14 years old. She is an adult. Make the picture black and white and it could be the 1950s and the desegregation of a school. Hate is real, y’all. It hasn’t even really gone away." [p.6-7]

The second was from July 1:

Two days later (July 1), Reid published another post about La Liberte, this time on Instagram and Facebook. This post juxtaposed the Photograph of La Liberte with the 1957 photograph showing one of the Little Rock Nine walking past a screaming white woman. Reid added the following caption: "It was inevitable that this [juxtaposition] would be made. It's also easy to look at old black and white photos and think: I can't believe that person screaming at a child, with their face twisted in rage, is real. By [sic] every one of them were. History sometimes repeats. And it is full of rage. Hat tip to @joseiswriting. #regram #history #chooselove" [p. 7-8]

Subsequently, further media coverage revealed that the plaintiff had not been the source of the cited racist comments. [p. 7] On July 2 the plaintiff contacted Reid to ask that she delete the posts and apologize, which Reid did later that day. [p. 8]. Despite Reid doing so, the plaintiff sued anyway, but the district court in EDNY then dismissed the case.

The Second Circuit has now stepped in to revive the case, and in doing so opened the door not only to this troublingly weak case but plenty of others even weaker.

There are a number of issues with the decision:

  • Its denial of Section 230 protection;
  • Its refusal to find the plaintiff a limited purpose public figure;
  • Its refusal to allow California's anti-SLAPP law apply in federal diversity cases; and
  • Its insistence on rendering destructive precedential decisions on questions that were moot.

Section 230

Section 230 became an issue because Reid had raised it as a defense for her June 29 posting of the picture on Instagram with her caption (although not for her July 1 post on Instagram and Facebook). The district court rejected that defense, and the Second Circuit agreed with that rejection. But whereas the rejection mattered less in the district court, which had found other reasons to dismiss the case against Reid, it matters more now that the Second Circuit has kept the case alive while also rejecting the defense on Section 230 grounds (plus, it is an appeals court, so its decision will reverberate further into the future).

In denying her the statute's protection the court did get the basic rules right: only the party that created the offending expression can be held liable for it. Furthermore, citing earlier Circuit precedent, "a defendant will not be considered to have developed third-party content unless the defendant directly and 'materially' contributed to what made the content itself 'unlawful.'" [p. 22]. But in denying her the protection it applied these rules in a way that may expose myriad other social media posters - and even platforms themselves - to litigation in the future, and in a way that Section 230 should really forestall.

Reid was ostensibly only being sued for the commentary that she added to her re-posts of the original picture, and not for the original tweet itself. Had it been the latter, Section 230 would have more clearly applied. Asserting it for her own speech is an aggressive argument, but not a ridiculous one. It's also not one that the court dismissed out of hand. As that prior precedent made clear, liability for speech hinges on who imbued the speech with its allegedly wrongful quality. Reid argued that it wasn't her: The original post had been of a picture of the plaintiff seemingly shouting threateningly at a Latino boy, and included a caption indicating that this picture was captured at an event where racist invective was shouted at him. Thus it was reasonable to take the original post as the statement that La Liberte was one of the people doing that shouting. Unfortunately that statement turned out to be wrong, but Reid repeating that statement in her own words was not what introduced the wrongfulness. Therefore she was not actually the "information content provider" with respect to this message, and Section 230 should have applied.

The trouble is, in the court's view, she had been the one to imbue the message with its wrongful quality. What might have made this case a close call was that the original post had only included an unspecific "they" in reference to the shouters, whereas Reid had attributed it to the plaintiff by name. However that attribution had already been made in the original post – not by her name, true, but by her picture. Thus Reid did not introduce anything new to the overall expression. Indeed, that she believed, albeit erroneously, that the plaintiff had screamed the invective at the boy was because that was the message the original post had conveyed. It may have been an erroneous message, but she was not the one who originated it.

The problem with now finding her the "information content provider" in this situation is that it reads into Section 230 a duty of care that does not exist in the statutory language, requiring people who share others' expression to make some sort of investigation into the veracity of that expression. While it might be good if people did – we certainly would like for people sharing things on social media to be careful about what they were sharing – Section 230 exists because it is hard to get intermediation of expression right, and we risk choking off speech if we make it legally risky to get wrong. (See what happened to Reid, where even if she had been wrong about the significance of the underlying tweet, it was a reasonable error to make.)

Worse, not only would it chill social media sharing, but this decision is unlikely to stay tightly cabined to that sort of intermediation of others' expression. If the rule were that you had to vet the expression you helped share before you could safely share other people's expression, then Section 230 could almost never apply and *everyone* would be vulnerable to being sued over the expression they intermediate, since no matter how much care they took they'd still have to defend those efforts in court. Such a rule would represent a profound shift in how Section 230 works, which up to now has not been conditional. Twenty-plus years of jurisprudence has made clear that Section 230 protection is not contingent on the intermediary vetting the expression produced by third parties that it helps share, and this decision undermines that clarity. And not just for social media users, but for the platforms they use as well.

Ultimately, if Section 230 can apply to individuals sharing others' social media posts (prior precedent supports that conclusion, and this court accepted it as well [see footnote 8]) and if it can apply to original, summarizing content (as this court also accepted), then there's no principled reason it should not have applied here.

Limited-purpose public figures

Denying Section 230 protection is only the tip of the iceberg. Not only does it make people who share on social media vulnerable to being sued, but other aspects of the decision make it more likely that it is litigation they will lose.

The court's refusal to find that the plaintiff was a limited purpose public figure is one of these aspects. Because open discourse about matters of public concern is a value the First Amendment exists to protect, the Supreme Court has developed the concept of the "public figure" to help ensure that it is protected. A public figure is someone whose fame has so intertwined them in matters of public interest that they must plead "actual malice," a fairly exacting standard, on the part of a speaker in order to prevail on a claim that the speaker defamed them.

Here, no one argued that the plaintiff was a general purpose public figure. But there are also "limited-purpose public figures." These are people who are not inherently intertwined in matters of public interest but who may insert themselves in matters that are and thus become public figures within the context of that matter. In such cases they would also need to plead actual malice in any defamation lawsuit where there had been commentary about them in this context.

Reid argued that the plaintiff was a limited purpose public figure. In particular, La Liberte regularly appeared at council meetings about the immigration issue and had been visibly, and publicly, vocal on the subject. The court rejected the contention:

That is not nearly enough. […T]he district court did not take into account the requirement that a limited purpose public figure maintain "regular and continuing access to the media." One reason for imposing the actual malice burden on public figures and limited purpose public figures is that "[t]hey have media access enabling them to effectively defend their reputations in the public arena." We have therefore made "regular and continuing access to the media" an element in our four-part test for determining whether someone is a limited purpose public figure. [p. 24-25]

Per the court, "La Liberte plainly lacked such media access." [p. 25].

The earlier photograph, which showed her conversing, was in a Washington Post photo spread of attendees at an SB 54 protest. The article did not name La Liberte, let alone mention her views. The single caption described everyone depicted as “[s]upporters and opponents of [SB 54] rally[ing] and debat[ing] outside Los Alamitos City Hall.” Such incidental and anonymous treatment hardly bespeaks “regular and continuing access to the media.” [p. 25]

Furthermore:

Nor does La Liberte’s participation at city council meetings. La Liberte is said to have “testif[ied] eight times around the state” (Appellee’s Br. at 26 (citing App. at 102-05)); but Reid does not identify instances in which the media singled out La Liberte’s participation as newsworthy. Nor does speech, even a lot of it, make a citizen (or non-citizen) fair game for attack. Imposition of the actual malice requirement on people who speak out at government meetings would chill public participation in politics and community dialogue. [p. 26]

The problem with this analysis is that it better applies to why a person engaging in civic affairs does not become a full-fledged public figure, where every aspect of their life can be a matter of public interest. It misses the significance of why we have the limited purpose public figure doctrine in the first place, which is that in the context of a specific matter of public concern a person's behavior can become a matter of public interest. Here the plaintiff had concertedly inserted herself into a matter of public concern – the policymaking surrounding immigration – on a "regular and continuing" and conspicuously public basis. The court's ruling puts that public behavior beyond the reach of effective public comment by treating it as if it were private and thus lowering the standard of what the plaintiff would have to plead to support a defamation claim.

State anti-SLAPP in federal court

The decision also reaches an unfortunate conclusion we've taken issue with before: disallowing state anti-SLAPP laws in cases that end up in federal court via diversity jurisdiction. It's a conclusion that seems to reflect dubious constitutional analysis, is bad policy, and in this case, conflicts with Ninth Circuit precedent.

As we explained before:

Diversity jurisdiction arises when the parties in the litigation are from separate states and the amount in controversy is more than $75,000 and the issue in dispute is solely a question of state law. Federal courts ordinarily can't hear cases that only involve state law, but because of the concern that it could be unfair for an out-of-state litigant to have to be heard in a foreign state court, diversity jurisdiction can allow a case that would have been heard in state court to be heard by the federal one for the area instead.

At the same time, we don't want it to be unfair for the other party to now have to litigate in federal court if being there means it would lose some of the protection of local state law. We also don't want litigants to be too eager to get into federal court if being there could confer an advantage they would not have had if the case were instead being heard in state court. These two policy goals underpin what is commonly known as the "Erie doctrine," named after a 1938 US Supreme Court case that is still followed today.

The first problem with the Second Circuit's decision is that it does not even *mention* the Erie doctrine – instead it just dives right into an analysis of the procedural rules. [p. 13]. The second problem is that its decision directly conflicts with Ninth Circuit precedent, which applied Erie to find that California's anti-SLAPP law does indeed apply in federal diversity cases. In other words, the Second Circuit has just reached across the country and into the Ninth Circuit to snatch away the protection of a law that the Ninth Circuit had already assured Californians they had.

The third problem is that it is bad policy because it would encourage forum-shopping, which is normally discouraged. As the Ninth Circuit articulated in that case, US Ex Rel. Newsham v. Lockheed Missiles & Space Co.:

[I]f the anti-SLAPP provisions are held not to apply in federal court, a litigant interested in bringing meritless SLAPP claims would have a significant incentive to shop for a federal forum. Conversely, a litigant otherwise entitled to the protections of the Anti-SLAPP statute would find considerable disadvantage in a federal proceeding.

The Second Circuit appeared indifferent to these concerns:

Finally, amici warn that refusal to apply the anti-SLAPP statute will “encourage forum shopping” and lead to “an increased burden on federal courts in this Circuit.” (Amici Br. at 11.) That may be so; but our answer to a legal question does not turn on our workload; and in any event, the incentive to forum-shop created by a circuit split can be fixed, though not here. [p. 16]

The concern about forum-shopping is not that it will overburden federal courts; the concern is the manifest unfairness to defendants that will arise when they suddenly lose the benefit of the substantive protections for speech California gave them – and upon which they may have depended to speak – because an out-of-state litigant was able to haul them into federal court.

Mootness

It is also not clear why the Second Circuit even reached the anti-SLAPP question. If its public figure analysis was correct, the defendant would be unlikely to even be able to use the statute, because by that logic the expression at issue would have failed to meet the anti-SLAPP law's requirement that it concern a matter of "public issue." Thus there was no need for this court to ever reach the anti-SLAPP question, and yet it chose to opine on it first, before even reaching the Section 230 and public figure discussions. But since, after those latter two analyses, there was no reason to reach the anti-SLAPP issue at all, it raises the question of whether the issue was even ripe enough for the court to have had appellate jurisdiction over it. But even if it did, doctrines of judicial restraint should have precluded deciding the issue and creating a mess that speakers who thought they were protected will now have to contend with.


Posted on Techdirt - 4 June 2020 @ 1:43pm

Think Of The Kitten: A Crash Course On Section 230

from the presidential-education dept

We are so hip here at Techdirt that we were writing about Section 230 long before it was cool. But even though everyone and their President seems to be talking about it these days, and keen to change it, it does not seem like everyone necessarily knows what it actually says or does. Don't let this happen to you!

The embedded video below is of a presentation I gave earlier this year at ShmooCon, where I explained the magic of Section 230 through the lens of online cat pictures. As we head into more months of lockdown, our need for a steady supply of cat pictures has never been greater. Which means Section 230 has never been more important.

In this presentation I explain why we have Section 230, what it does, why it works, and how badly we jeopardize our supply of online cat pictures (as well as a lot of other good, important stuff) if we mess with it.

Tune in!


Posted on Techdirt - 29 May 2020 @ 3:32pm

District Court Mostly Refuses To Terminate The Litigation Testing The Copyright Termination Provision

from the artists'-rights dept

The decision this post discusses, Waite v. Universal Music Group, came out at the end of March, but, as one of the leading cases litigating the termination provision of the copyright statute, it's still worth attention. Maybe even especially now, as the Copyright Office overtly goes to bat for rightsholders. Because the termination provision speaks to who the rightsholders actually are. Without it, those rightsholders are likely not to be the artists behind the creation of the works.

The decision does a good job at least partially explaining why the termination provision is important:

Aspiring singers, musicians, authors and other artists – sometimes young and inexperienced and often not well known – tend to have little bargaining power in negotiating financial arrangements with recording companies, publishers, and others who promote and commercialize the artists’ work. They often grant copyright in that work as part of the bargain they strike for promotion and commercialization. Accordingly, when an artistic work turns out to be a “hit,” the lion’s share of the economic returns often goes to those who commercialized the works rather than to the artist who created them. Section 203 of the Copyright Act of 1976 established a limited opportunity for artists to terminate the copyright ownership that they had granted to commercializers decades earlier in order to address this issue. The idea was that termination of these rights would more fairly balance the allocation of the benefits derived from the artists’ creativity. [p. 2]

In other words, since no one had a crystal ball, the law purposefully allows for a sort of "undo" button 35 years later to make sure that artists would not have to be forever locked out of the ability to control their own work. People sometimes refer to it as "recovering their masters," although it isn't really about recovering the physical media – termination is just about canceling the copyright assignment artists may have made in their future work when they first signed their record contracts so they can choose what to do with this work from here on out. But it is only in recent years that enough time has passed for artists to try to assert this provision in the copyright statute. And, particularly for musical artists, it has often been difficult to win back their copyrights from record labels reluctant to part with them, which has led to litigation – such as this case.

In this case several musicians, acting as a class, sued UMG. Their claims varied somewhat because the plaintiffs were not all in quite the same position, but they basically were all designed to get judicial acknowledgement that the artists either already had, or soon would, recover the copyrights they had long ago assigned to the record label. UMG filed a motion to dismiss all the claims, and this decision was a ruling on that motion. By and large the decision was a good one for artists desiring to recover their copyrights, but it wasn't an unequivocal win. While some of the artists' claims survived, others were dismissed, and sometimes for reasons that may fairly cause concern.

But first, the good news.

When an artist wants to terminate a copyright assignment, there are a bunch of procedural hoops to jump through, with specific time windows and other required formalities. But if the artist manages to check off all the boxes correctly, the copyright will then automatically revert to them. Which means that if the record company continues to use the work without their permission, those uses are copyright infringement. And that's what Waite and many of the other artists in this case sued UMG for.

In its attempt to dismiss the lawsuit, UMG argued that the statute of limitations barred these infringement claims. The general rule for any sort of lawsuit is that there's a ticking clock on how long you have to sue, and that the clock starts when "a plaintiff 'knows or has reason to know of the injury upon which the claim is premised.'" [p. 8-9]. To sue for an infringement injury you need to show two things: that you own a copyright in the work, and that someone did one of the things that only the copyright owner is allowed to do (such as copy the work) without permission. Most infringement lawsuits focus on the latter part, and the statute of limitations will start once the plaintiff has discovered the infringing activity. But in this case, where ownership of the copyright was the bigger issue, UMG argued that the statute of limitations had started to run when the plaintiff discovered that the question of ownership was in contention. The question for the court, then, was when that moment had occurred. [p. 9-10].

UMG tried to argue that this moment happened in the 1970s or 80s, when the original contracts were signed, because that was the moment when the ability to terminate the assignment in the future became a question. Why was that? Because many of these contracts styled the resulting music as "works for hire," and, per the statute, works for hire are not eligible for termination. As the court explained:

The legal author (creator of the work) and owner of a “work made for hire” is the employer or person who specially ordered it, rather than the artist. Section 203 excludes these works from the termination right precisely for this reason: “The hired [or commissioned] party, although the ‘author’ in the colloquial sense . . . never owned the copyrights to assign. It stands to reason, then, that there are no rights the assignment of which [the artist or] his or her heirs may now terminate.” [p. 7]

It is important to note that the court did not actually decide here whether the works at issue in this case were works-for-hire or not. It saved that question for another day but did acknowledge that there was at least some reason to believe they were not works for hire, in which case the attempts to terminate the copyright assignments could still be successful. But the court rejected UMG's contention that the question of ownership should have been "discovered" almost as soon as the contracts were signed. Instead, it found that the question about ownership was only raised once the termination was attempted and the record labels continued to make what were now potentially infringing uses of the works.

In this way the court differentiated the case from an earlier one involving Meatloaf, who had tried to sue thirty years after signing his contract for a declaratory judgment that his music had never been a work for hire and thus was subject to the termination provision. The court there had rejected his suit as filed too late. Since it only involved the question of whether he owned his original copyright, and that question had been raised upon signing his contract back in 1977, the court decided that he'd only had until 1980 to sue for clarification. [p. 11]. But in this case the lawsuit was pursuing an infringement claim, and the infringement did not become an issue until after the moment termination had theoretically occurred and the record company continued to make copies of the work. Per the court, that was the moment when the clock on the statute of limitations started running, and this lawsuit was filed in time.

And so this case continues, at least for some of the musicians in the plaintiffs' class. But not for all of them. Three sorts of claims were dismissed. One sort involved the musicians who had begun the termination process but for whom the process had not yet been completed. The process requires giving notice in advance of the date when termination is to occur, and then waiting for that date to pass. For these artists that date was still in the future, which meant that the record label's continued use was not actually infringing, since the copyright assignment the label had enjoyed thus far was still valid. These plaintiff artists had sued for declaratory judgment, which basically amounted to asking the court to declare now that their copyright assignments would indeed be terminated once the relevant dates had passed. But the court declined to do what they asked, largely on the grounds that it wouldn't really lend them the clarity they sought. A declaratory judgment now would address the reasons UMG has so far protested that the termination provision wouldn't apply, but it wouldn't actually cause the assignments to be terminated on the spot. The plaintiffs would still have to wait for the process to complete, at which point an infringement lawsuit would be appropriate if UMG came up with some other reason why it did not think the termination was valid.

It is not clear how either objective would be achieved by the declaratory relief sought. The uncertainty here is whether UMG will continue to distribute plaintiffs’ sound recordings after the effective date of termination claimed by a plaintiff has passed. This uncertainty is eliminated if prior to the effective date of termination, the termination notices were declared valid and there were no grounds on which defendant could argue that the grants are not otherwise terminable. Plaintiffs suggest that the declaratory relief they seek, if granted, would achieve this outcome because it would address the various grounds defendant has cited when rejecting the termination notices. This would be so, however, only insofar as defendant never proffers new arguments as to why the termination notices are invalid or why the grants could not be terminated. The declaratory judgment plaintiffs seek now therefore could not guarantee that UMG would accept the termination notices and cease exploitation of the sound recordings after the effective date of termination. In other words, the cloud of uncertainty would not necessarily be lifted fully. [p. 16]

So these artists' claims were dismissed, although once the termination dates have passed there does not seem to be any legal reason why they could not sue again if they needed to, since their claims would then be like the claims in this case that were permitted to go forward. (There may, of course, be practical reasons why it would be hard to sue again, though – it's a huge, time-consuming, expensive ordeal that people are rarely eager to go through.)

Another type of claim that the court dismissed involved the plaintiff Joe Ely. In most instances the court found that the termination notices submitted by the artists had generally met the formality standards the statute required. In fact, the court tended to have a generous read of the notices, generally finding that what errors there were had been made in good faith and did not prejudice the record label, and thus were still valid.

Despite the incorrect dates and omissions, defendant has sufficient notice as to which grants and works plaintiffs seek to terminate. While perhaps in other circumstances an omitted execution date could be fatal to the validity of a termination notice, defendant possesses the relevant agreements and can discern the relevant dates. Defendant cannot use the parties’ agreements to claim that the statue of limitation bars plaintiffs claims and then feign ignorance of which grants plaintiffs purport to terminate. [...] Nor is there any sufficient basis for claiming that the errors were not made in good faith. [...] Indeed, as the Copyright Office noted when promulgating 37 C.F.R. § 201.10, “we . . . must recognize that entirely legitimate reasons may exist for gaps in [grantor’s] knowledge and certainty” of required termination notice contents. Because the notices’ defects were harmless and not made with an intent to deceive, mislead, or conceal information, defendant’s motion is denied as to its claim that plaintiffs’ termination notices are facially invalid. [p. 18-20]

But with Ely it was more complicated. When we switched our copyright law from the 1909 Copyright Act to the 1976 Copyright Act, which then didn't go into effect until 1978, there were certain issues with that statutory handshake. Ely produced music that fell in that gap. Arguably his copyright assignments were still terminable under a different provision of the copyright statute, but because the governing regulations for these "gap grants" are more complicated, the court found that the need to get the formalities right was heightened, and that the deficiencies in his termination notices had rendered them invalid.

Instead of the grant date’s execution, under the 2011 Copyright Office regulation, Section 203 termination notices for gap grants must contain “the date on which the work was created.” “Sound recordings are created for purposes of the Copyright Act on the date they are ‘fixed,’ or recorded.” Ely’s termination notice includes only the publication date and the FAC alleges the “release” date. “Publication” refers to the distribution or transfer of ownership, not to the creation of a work. Nor does “release” indicate when the work was “created.” Ely’s termination notice is thus insufficient. [p. 23]

The court otherwise had no further comment on where that left him in his ability to recover his copyrights.

Meanwhile, although much of this decision portends good news for artists who wish to recover their copyrights, the third sort of dismissed claim is alarming and out of step with the rest of the decision. The issue relates to "loan-out" artists. Per the court, these artists cannot recover their copyrights:

Only grants “executed by the author” (or the statutorily designated successor) may be terminated. Therefore, third parties to a contract and loan-out companies, which “loan” out an artist’s services to employers and enter into contracts on behalf of the artist, do not have a termination right under the statute. It is undisputed that loan-out companies executed the Waite grants and that a third party company, South Coast, executed the grant for Ely’s recordings made under his 1979 agreement. In these instancesg [sic], neither Waite nor Ely was the grantor. The plain language of the statue [sic] precludes either of these plaintiffs from effectuating termination. [p. 20]

This portion of the decision reads as uncharacteristically hostile to the interests of artists, to which the court had otherwise been favorable. Earlier in the decision the court had commented on why it was so important to have the termination provision on the books in the first place:

Defendant’s argument is weakened further by the music industry’s practice of frequently inserting “work made for hire” language into recording contracts. Its position requires that many artists, often early in their careers, would confront a choice when presented with a “works made for hire” provision. They could refuse to sign the contract and jeopardize their chance for the record company to record or distribute the artist’s music. Or the artist could sign the contract and then bring a claim within three years to dispute the effect of the “work made for hire” provision in order to protect the copyright. Either outcome would be inconsistent with Section 203. The first would exemplify the unequal bargaining power Section 203 sought to correct. The second would render Section 203 meaningless, as its very purpose is to provide a mechanism by which artists can reclaim their copyright after the work has had time to become more valuable. Defendant’s argument simply does not withstand scrutiny in light of the unequivocal purpose of the termination provision. [p. 14]

But in that rushed passage on page 20, the court took a suddenly hostile, shallowly-analyzed view towards the artists' position. And that hostility continued:

Plaintiffs argue also that the loan-out company is only a tax-planning device. Even so, people cannot use a corporate structure for some purposes – e.g. taking advantage of tax benefits – and then disavow it for others. While Waite and his loan-out companies, like Heavy Waite, Inc., perhaps are distinct entities only in a formal legal sense, the statutory text is clear: termination rights exist only if the author executed the grant. The Supreme Court recently reaffirmed that courts must adhere to the text of the Copyright Act, even if the Act “has not worked as Congress likely envisioned.” The unambiguous text precludes Waite and Ely from terminating the copyrights granted by third parties. [p. 20-21]

This portion of the decision seems unduly harsh and unthoughtful. It presumes facts and betrays the purpose of the termination provision. It also more heavily favors the position of the record labels than the statute itself supports. A "you should have known what you were doing" attitude fails because no one could have known what they were doing at the time these agreements were made (not even the record labels).

At the time these agreements were made, there was no Internet and no digital rights to exploit. At the time these agreements were made, copyright terms lasted for many fewer years. At the time these agreements were made, the US wasn't even a party to the Berne Convention, and TRIPS and its progeny were just a glimmer in the WTO's eye. There is no way that artists could have foreseen the full dimensions of the deal they were striking and how it would affect their rights decades into the future. Had they known, they might have struck a different deal.

The termination provision exists because no one can see the future. But if certain artists can be automatically penalized for their choice of corporate form (and a worry is that many of these artists might be bands, for whom choosing a corporate form is not just a "tax strategy" but a potentially existential decision necessary to successfully wrangle a diverse group of talented individuals into a coherent creative entity), with so little effort to fully parse what the statute meant by authorship, then the decision turns the provision into a nullity and awards the record labels a windfall Congress never intended to grant them.

The court did, however, suggest that loan-out artists such as Waite might be able to plead their authorship more successfully, and, if they do, then their terminations may be valid. Given the number of artists who would otherwise arbitrarily be denied the benefit of this important balancing provision in copyright law, hopefully doing so will turn out to be a needle that this and other courts will allow them to thread.


Posted on Free Speech - 27 May 2020 @ 6:01pm

When The Problem Isn't Twitter But President Trump

from the solving-the-right-problem dept

President Trump is not happy with Twitter. But a lot of other people were already unhappy with Twitter. As his tweets have grown more abusive by the day, and the non-insane public has naturally grown more outraged by them, there has been an increase in calls for Twitter to delete his tweets, if not his account outright. But what's worse is the increase in calls that sound just like what Trump now demands: that Section 230 must be changed if Twitter is unwilling to take those steps. Both are bad ideas, however, for separate, although related, reasons.

The basic problem is that there is no easy answer for what to do with Trump's tweets, and for many reasons. One fundamental reason is that content moderation is essentially an impossible task. As we've discussed many, many times before, it is extremely difficult for any platform to establish an editorial policy that will accurately catch 100% of the posts that everyone agrees are awful and no posts that are fine. And part of the reason for that difficulty is that there is no editorial policy that everyone will ever be able to agree on. It's unlikely that one could be drawn up that even most people would agree on, yet platforms regularly attempt to give it their best shot anyway. But even then, with some sort of policy in place, it is still extremely difficult, if not impossible, to quickly and accurately ascertain whether any particular social media post, amidst the enormous deluge of social media posts being made every minute, truly runs afoul of it. As we have said umpteen times, content moderation at scale is hard. Plenty is likely to go wrong for even the most well-intentioned and well-resourced platform.

Furthermore, Trump is no ordinary tweeter whose tweets may run afoul of Twitter's moderation policies. Trump happens to be the President of the United States, which is a fact that is going to strain any content moderation policy primarily set up to deal with the tweets by people who are not the President of the United States. It is possible, of course, to decide to treat him like any other tweeter, and many have called for Twitter to do exactly that. But it's not clear that doing so would be a good idea. For better or for worse, his tweets are the tweets of the American Head of State and inherently newsworthy. While one could argue that they should be suppressed because their impact is so prone to being so destructive, it would not be a costless decision. While having the President of the United States tweeting awful things does cause harm, not knowing that the President of the United States is trying to tweet awful things presents its own harm. This is the person we have occupying the highest political office in the land. It would not do the voting public much good if they could not know who he is and what he is trying to do.

The arguments for suppressing his tweets largely are based on the idea that taking away his power to tweet would take away his power to do harm. But the problem is that his power comes from his office, not from Twitter. Taking Twitter away from him doesn't ultimately defang him. It just defangs the public's ability to know what is being done by him in their name.

Twitter's recent decision to add contextualization to his tweets might present a middle ground, although it is unlikely to be a panacea. It puts Twitter in the position of having to make more explicit editorial decisions, which, as discussed above, is an exercise that is difficult to do in a way that will satisfy everyone. It also may not be sustainable: how many tweets will need this treatment? And how many public officials will similarly require it? Still, it certainly seems like a reasonable tack for Twitter to try – one that tries to mitigate the costs of Trump's unfettered tweeting without inflicting the costs that would result from their suppression.

Which leads to why Section 230 is so important, and why it is a bad idea to call for changing it in response to Trump. Because Section 230 is what gives Twitter the freedom to try to figure out the best way to handle the situation. There are no easy answers, just best guesses, but were it not for Section 230 Twitter would not be able to give it the best shot it can to get it right. Instead it would be pressured to take certain actions, regardless of whether those actions were remotely in the public interest. Without Section 230, platforms like Twitter would only be able to make decisions in their own interest, and that wouldn't help them try to meet the public call to do more.

Changing Section 230 also won't solve anything, because the problem isn't with Twitter at all. The problem is that the President of the United States is of such poisoned character that he uses his time in office to spread corrosive garbage. The problem is that the President of the United States is using his power to menace citizens. The problem is that the President of the United States is using his role as the chief executive of the country to dissolve confidence in our laws and democratic norms.

The problem is that the President of the United States is doing all these things, and would be doing all these things, regardless of whether he was on Twitter. But what would change if there were no Twitter is our ability to know that this is what he is doing. It is no idle slogan to say that democracy dies in the darkness; it is an essential truth. And it's why we need to hold fast to our laws that enable the transparency we need to be able to know when our leaders are up to no good if we are to have any hope of keeping them in check.

Because that's the problem we're having right now. Not that Twitter isn't keeping Trump in check, but that nothing else is. That's the problem that we need to fix. And killing Twitter, or the laws that enable it to exist, will not help us get there. It will only make it much, much harder to bring about that needed change.


Posted on Techdirt - 17 April 2020 @ 7:39pm

Book Review: Danny Dunn and the Homework Machine

from the plus-ca-change dept

We don't often do book reviews here on Techdirt, but since we've been talking about reading books scanned by the Internet Archive,* this one seemed good to discuss because of how it touches on many of the issues discussed here.

Of course, it's not actually a new book. Danny Dunn and the Homework Machine, by Jay Williams & Raymond Abrashkin (with illustrations by Ezra Jack Keats), is part of a series of children's novels I read as a kid. I remember liking the books but have no specific memories of any of them, except for this one, which stuck with me for all these years because of a particular point it made. But more on that in a bit.

The protagonist in these stories, Danny Dunn, is an eighth-grade boy who, with his widowed mother, lives with Professor Bullfinch, an inventor (the mother is his housekeeper). As this particular book highlights, the professor's inventions include a special new kind of computer, which he keeps in his home laboratory. While today it hardly seems remarkable to have a computer in one's house, let alone one that can do everything that this one can, an important thing to remember is that this book was written in 1958, before computers were anywhere near as powerful and ubiquitous as they are today. Part of the magic of reading this book is getting a look at that historical snapshot of what the world was like when everything that we today take for granted was brand new.

As an author's note explains, the story was written with the input of IBM computer engineers, so presumably its description of how the machine would have worked was not entirely fanciful.

The authors are deeply grateful to Miss Terry di Senso, who guided us through two of the giant computers of the International Business Machines Corporation, and to Dr. Louis Robinson, Manager of the Mathematics and Applications Department, IBM, for his assistance, information, and painstaking reading of the manuscript.

What made the computer innovative, the book explained, was its new kind of switches and its use of narrow magnetic tape, which allowed the computer to be as small as it was. These switches were temperature-sensitive and climate-controlled by a thermostat. Information was recorded in one part of the computer, and programming involved pointing to the memory address where that information could be found and processed.

Also of note: instead of punch cards it took voice input (although it typed its output).

Of course, the downside of this book being written in the 1950s is the sexism. Throwaway lines about women's roles stand out as gratuitous sour notes. On the other hand, this book introduces a girl character, Irene Miller, who is clearly scientifically gifted and unwilling to be sold short by anyone underestimating her intellect. It's just too bad that the book makes it seem as though she was unique in being a girl with such interests and talents. (It was also not okay that another character in the book was openly called "Fatso.")

Anyway, the story unfolds with the professor heading out of town and trusting Danny to take care of his machine. Danny had been helping the professor for quite some time and was familiar with its operation, and the professor trusted him to use it in his absence. Danny is a bright kid, but one always looking for shortcuts. As the book opened he was trying to make a mechanical device to help him produce two copies of homework he would only have to write out once, so he and a friend could share the load. Naturally, his thoughts soon turned to getting the computer to make short work of his homework too.

***CAUTION: SPOILERS BELOW***

With the professor away, Danny, with his friends Joe and Irene, set about programming the computer to do their homework for them. After all, what kid wants to do homework? They were very excited by the prospect of the computer sparing them this drudgery. After three days of feeding all the material in their schoolbooks into the computer, they were at last ready to program it. What's programming? Joe asks. Danny explains:

"Programming is telling the machine exactly what questions you want answered and how you want them answered. In order to do that right, you have to know just what sequences of operation you want the machine to go through. [...] If we want [the computer] to give us the right answers to an arithmetic problem, or a history question, we first have to analyze the operations the machine has to go through, and the order in which it does them. Then we put this down on a piece of paper together with the addresses of all the information or the parts of the machine that will be used to solve the problems. That's programming."

It's a children's book, so naturally all goes well with the enterprise and the team is quite pleased with the results. But soon the plot thickens: the school bully figures out that they are getting the computer to do their homework for them and reports them to the teacher. The teacher then comes by the house to meet with Danny's mother. Oh dear, Danny is in trouble… But he makes a capable defense.

His argument largely follows three lines of reasoning to challenge the teacher's assertion that his use of the computer was somehow "unfair." One line foreshadowed the issues we are still having today with uneven access to the Internet and computing technology, and even to educational resources generally. It is a significant and pervasive social problem, although from Danny's perspective the concern was that he might be forbidden from using a computer simply because other kids didn't have one. "[W]ould you forbid me to get information out of an encyclopedia, if I had one and the other kids didn't?" he asked. Probably not – although it's a problem unto itself that even now not all kids would have access to even an educational resource like that.

His second argument also questioned the idea that the innovation the computer represented somehow disqualified it from being used.

"Everybody uses tools to make his work easier. Why, we don't use inkwells and quill pens in school any more, Miss Arnold. We use fountain pens. Those are tools to make our work easier."

"But you can't compare a fountain pen to an electronic brain."

"Sure you can. It's just another kind of tool. Lots of kids do their homework on typewriters. In high school and college they teach kids to do some of their homework on slide rules. And scientists use all kinds of computers as tools for their work. So why pick on us? We're just - just going along with the times."

And then he made a third point, and it's this point that stuck in my memory all these years – although it didn't really sink in, with him or with me, until the end of the book. Because, while Danny managed to get himself out of trouble with his fine arguing, the story didn't end there. First, Danny's hopes of coasting through the rest of the school year were dashed the next day when the teacher assigned Danny, Joe, and Irene books from the Ninth Grade as the source of their homework assignments. Thus they were forced to spend lots of time programming the computer so that it could spit out what they needed to turn in.

Meanwhile, the school bully also wasn't done tormenting them. In a brief moment when Danny was out of the lab, the bully snuck in, and, in an early example of a rogue attack on a computer, tampered with the machine. Which made for some dreadful moments the next day. The first was when Irene went to read aloud to the class the report on Peru the computer had spat out for her that morning and discovered that the report it had typed was nothing but gibberish. She was therefore forced to ad-lib the rest of the presentation based on what she could remember from when they'd entered the information into the computer.

And then that night the professor returned, this time with important guests from Washington interested in potentially purchasing the machine for government use – but dubious that it could do all that the professor had promised. So when it started printing out gibberish that evening, it was quite a serious problem. Fortunately, Danny realized that the problem was that the thermostat had been hacked – the bully had removed a bolt so that it got stuck on one setting, which had made the switches too cold to operate properly. Once they warmed up everything worked fine again and everyone was impressed – with the computer, and with Danny.

But the story closes with Danny coming home from school at the end of the term a few weeks later and sulking. Apparently that third prong of his defense had been right all along: using the computer to complete his homework hadn't been cheating, because he had to know how to do the homework in order to program the machine to do it for him. ("But I *know* these subjects. Gosh, I have to know them so I can program the machine to do them," he had earlier pointed out to the teacher.) And he was crushed to discover that it meant that he and his friends had actually been doing homework "all along" when they got the computer to help them work through the Ninth Grade material.

Professor Bullfinch coughed, and said, "I wondered how long it would be before you found that out. Naturally, in order to feed information into the computer you had to know it yourselves. And in order to give the machine the proper instructions for solving problems, you had to know how to solve them yourselves. So, of course, you had to do homework - and plenty of it."

And I think that's the idea that always stuck with me from the story, the message that computers were not some separate, magical entity, but rather just extensions of the human masters who made them.

And that matters, especially as computers become more sophisticated and more capable of replicating what humans can do.

To be fair, the book may have actually sold computers short. For instance, the professor took pains to disabuse Joe of any grand notions of their potential:

"The computer can reason […] It can do sums and give information and draw logical conclusions, but it can't create anything. It could give you all the words that rhyme with the moon, for instance, but it couldn't put them together into a poem."

On the other hand, even the professor came to reconsider his limited expectations of computers' potential. Inspired by Joe's curiosity, he decided to have his computer produce music. Which it did, but of course only after he'd fed the computer "full instructions for the composition of a sonata, plus information on note relationships and a lot of other technical material." And he still had doubts:

Professor Bullfinch shook his head. "No. It never can be Beethoven, Mrs. Dunn. No matter how intelligent the computer is, it is only a machine. It can solve problems in minutes that would take a man months to work out. But behind it there must be a human brain. It can never be a creator of music or of stories, or paintings, or ideas. It cannot even do our homework for us - *we* must do the homework. The machine can only help, as a textbook helps. It can only be a tool, as a typewriter is a tool."

The question we're faced with today, with the extraordinary power of computers no longer in doubt, is whether the professor is still right about the subordinate nature of computers, and I think the answer is yes. And that matters today for purposes of accountability, because no matter how sophisticated computers are, they still are dependent on the human masters who program them. They can produce amazing output, but it is human beings who have given them what they need to produce it – whether it be good or bad, for better or for worse. We can't just shrug and blame any unfortunate results on computers as though they were some separate beings. Sophisticated though they may be, they are still just our tools, and the responsibility for how they are used remains with the people who use them. As the professor said of his computer:

"It's a wonderful, complex tool, but it has no *mind*. It doesn't know it exists."

Perhaps a day may come when artificial intelligence will progress to the point where computers will have attained their own consciousness and stand as equals among their human progenitors. But until that day happens, the lesson of the book still holds.

* Note that if you ever want to read this book, even after the coronavirus health crisis is over, you will have to read it at the Internet Archive, because it appears that many local libraries have already purged it from their collections.


Posted on Techdirt - 10 April 2020 @ 7:39pm

Happy Birthday, Statute of Anne

from the bitter-reminders dept

Early in my legal career I had the opportunity to attend a conference in London organized to celebrate the launch of the Copyright History project. The goal of this project was to translate, annotate, analyze, and even simply make available the original primary source documents that underpin our modern notions of copyright. It is an important enterprise because all too often we forget just how these historical documents actually do underpin them. History is often like playing a giant game of "telephone," where meaning changes over time, and in the case of many of these documents our understanding of what they were telling us has also changed over time -- and often become distorted. Having access to these original primary source documents means that we can recalibrate our understanding of what these policies actually were intended to do in order to ensure that our modern notions of copyright echo them properly.

At its launch the project included primary source documents from five jurisdictions -- Britain, Germany, France, Italy, and the US (with others added later) -- and the collection now includes documents from 1450 through 1900. For the conference, some of those original documents were brought in by an archivist and displayed under glass for us to examine. One of them was the original parchment copy of the Statute of Anne, which attendees of the conference -- including me -- had the privilege of getting to see up close with our own eyes.

The Statute of Anne, whose anniversary of coming into force on April 10, 1710 we celebrate today, is one of the founding pillars of modern US and UK copyright law. At the time of its passage it reflected an enormous change in attitude about how the copy right should be handled. Before it came along, English law (which is not to be confused with Scottish law, whose own system already bore more features of what we would recognize as modern copyright law) granted a monopoly in the copy right to a handful of printers that had the king's permission to publish. (It was fitting, in fact, that the Copyright History conference itself took place in a hall of the Stationers’ Company, one of the most powerful companies of the 17th century, which then held a near-exclusive license to print.) This use of a royal printing license to create a monopoly in publishing limited to just these few printers gave the government the ability to also limit what ideas could be published, which necessarily limited discourse.

However, the political pressure for democratic reform eventually caught up with this system, and by 1695 it finally gave way for good. And that set the stage for the Statute of Anne to be enacted in 1710, which changed the approach to copyright entirely. While the Licensing Act of 1662 was “[a]n act for preventing the frequent abuses in printing seditious treasonable and unlicensed books and pamphlets,” the Statute of Anne was purposefully “[a]n act for the encouragement of learning.” Whereas the former was about government control over ideas, the latter was about spreading them. Instead of using royal printing licenses to administer the copy right as a means of controlling discourse, by its very design the Statute of Anne was meant to stimulate it.

And it did. Right away newspapers proliferated, public houses exploded in popularity (as they had during earlier periods when licensing statutes had lapsed), and democratic ideals flourished as tight government control over ideas yielded. But while the structure of modern copyright law today looks much as it did following the Statute of Anne, its limiting effects on discourse now look more like those of the period that preceded it.

There were a few other key differences between the Statute of Anne and the licensing statutes before it, beyond just their stated policy goals, which bore on the former's ability to stimulate discourse. For instance, the Statute of Anne fundamentally shifted the role of the author. Before the Statute, authors were largely relegated to being subordinate figures, barely mentioned in association with the work. Instead, full authority for the work was usurped by the printer, who, as an agent licensed to act on behalf of the government, had the sole discretion to deem it acceptable to be published. With the Statute of Anne, however, authors became central to the whole system. They retained full authority for the work and as such retained the rights to control its publication.

These rights were of limited duration, however, and the Statute of Anne further enhanced public discourse by creating a public domain. In fact, the only reason the Statute of Anne gave authors any limited rights was simply to address the problem of market failure. The fear was that no ideas would be contributed to public discourse at all if it were economically impossible for authors to contribute them. With the goal of the Statute being to get those ideas out there, these limited author monopolies were intended as a means for achieving that end.

Unfortunately, however, while in the early 18th century the focus on protecting and enhancing the rights of authors was intended to facilitate the growth of public discourse around those ideas, today that same focus on authors' rights does the exact opposite. With so much emphasis now being put on the rights of the author as owner of the work to control it, at the expense of the public benefit the system is supposed to impart, it has had the effect of choking off what discourse these works might spawn. Through needlessly lengthy monopolies and overly-expansive interpretations of the reach of these rights, history seems to be repeating itself, returning us to the discourse-choking limitations of the licensing era and forsaking the promise of the Statute of Anne to promote its spread.

For, just like the 17th Century printers, these authors’ copy rights get their teeth from government. They are government-granted monopolies with government-sanctioned reaches. With those rights, and with the government’s blessing, authors can limit ideas’ consumption and dampen their reach and influence long after any economic necessity would justify -- and just as the licensed printers once did. Back then the Stationers’ Company had powers of search and seizure and could prosecute competing printers; today, particularly as copyrights are so often aggregated in the hands of a few large corporate gatekeepers, modern infringement lawsuits look much the same.

So we find ourselves at the turn of the 21st century at the same crossroads we were at 300 years earlier, faced with a choice in how we use government power. Do we use it to enable public discourse, or to stifle it? For although our modern copyright systems trace their lineage back to the author-focused structure of the Statute of Anne, that basic structure alone does not determine which value is fostered. It's how we implement it that determines which value ultimately survives.

Yet unfortunately, while the original document articulating that policy value of promoting the spread of ideas has been carefully preserved, the historic change it was meant to herald -- thanks to how we've enshrined the notion of copyright in our modern law -- has not.


Posted on Techdirt - 6 March 2020 @ 12:13pm

In New 5Pointz Decision, Second Circuit Concludes That VARA Trumps The Constitution

from the immoral-rights dept

A few weeks ago there was news that a developer in New York City was being forced to dismantle twenty already-built floors of a building he had built too high. If only he had thought to let some graffiti artists paint the walls of these excess floors, because then he could never take them down…

I say that, of course, in response to other recent news from New York: the Second Circuit has upheld the awful decision by EDNY to sanction a building owner millions of dollars for daring to paint the walls of his own building. And, in doing so, the Second Circuit has illuminated, in stark relief, what an unconstitutional disaster the Visual Artists Rights Act of 1990 (VARA) is.

But before explaining why, first here's some background. This decision, in Castillo v. G&M Realty L.P., is the latest in the litigation over "5Pointz." In brief, a developer owned a building in Queens that he wasn't doing anything with, so he let some graffiti artists paint its walls. Eventually he decided that he wanted to do something else with his building, and in response the graffiti artists sued him under VARA, because his plans would cause those paintings that hadn't already been destroyed by the artists [see p. 4] to now be destroyed by him. The district court refused to enjoin the building owner, however, so he went ahead and painted over them. Upon learning of the painting over, the district court then immediately had non-enjoiner's remorse and got so angry at the building owner for doing what it had let him do that it threw the book at him. In fact, it was $6.7 million worth of book it threw, in punitive statutory damages, because how dare that building owner paint the building he owned after the court said he could.

The appeals court decision doubles down on all the problems with the original district court decision we flagged before, including how catastrophic it is for the future availability of public art to subject those who allow it on their property to such expensive consequences. It makes true the saying "no good deed goes unpunished" and will ensure that few will ever be inclined to offer such favors again.

We also highlighted the manifest unfairness of punishing the building owner for doing something that the court had cleared the way for. This unfairness itself presents a constitutional infirmity, particularly in light of the enormous statutory damages award granted, and then upheld, to punish the building owner.

Ultimately, the district court concluded that it could not reliably fix the market value of the destroyed paintings and, for that reason, declined to award actual damages. […] Nonetheless, the court did award statutory damages. It determined that statutory damages would serve to sanction Wolkoff’s conduct and to vindicate the policies behind VARA. [p. 8-9]

Statutory damages are already constitutionally suspect, especially when they are so severely inflated above any actual measure of harm, as was the case here, and especially when they appear to be punitive in nature, as was also the case here, because they function as a quasi-criminal sanction without all the due process protections a finding of criminal liability is supposed to require.

But that's not the only constitutional problem with VARA that the decision highlighted. The decision made clear that it also fails on equal protection and First Amendment grounds.

The major issue that the appeals court considered was whether the district court was right in upholding the VARA claim. The crux of that analysis hinged on whether the destroyed paintings qualified as a work of "recognized stature." If they did, then they were protected from destruction by VARA, whereas if they did not qualify, then they would get no extra protection. [p. 13]

But think about the implications of the law. It means that the right to special protection for one's expression is only available for some expression, and whether it gets that protection pivots on the content of that expression. Laws are not supposed to be able to favor or disfavor expression. Yet, as this decision carefully – if perhaps inadvertently – explains, that's exactly what VARA does.

Whether a work is of recognized stature, and thus entitled to additional protection, hinges on its "high quality, status, or caliber." [p.14]. Contrast this special protection with regular copyright protection, which applies to any original work of authorship of any statutorily-enumerated type (literary work, musical work, etc.) fixed in any tangible medium, regardless of that particular work's quality. In other words, bad paintings are just as eligible for basic copyright protection as good paintings. But in the case of VARA, the moral rights provision inserted into the copyright statute, it is only the good paintings that get this extra protection, because the bad ones will never be able to achieve that stature. ("The most important component of stature will generally be artistic quality." [p. 14])

Of course, whether a work is of good quality or not is a matter of opinion. So who gets to decide? The court recognized that the "personal judgment of the court shouldn't be the determinative factor." [p. 14]. Instead it deferred to "the artistic community, comprising art historians, art critics, museum curators, gallerists, prominent artists, and other experts." [p. 14].

But ratifying the subjective opinion of others as the basis upon which to dole out special legal protections is no better than the court making the determination on its own initiative. First, the decision of what opinions to credit is at best arbitrary, as was the case here, where the court deferred to an opinion of a presumed expert who had not even seen the works:

Nor do we see merit in Wolkoff’s criticism of the court’s decision to credit the artists’ experts. As is almost always the case where competing expert testimony is adduced, the trier of fact accepts one side’s experts over the other’s. Judge Block did so here and gave sound reasons for his choice. Renee Vara, the artists’ expert, testified to the high artistic merit of the 5Pointz art but also testified that she had not seen the works before their destruction and had assessed them on the basis of images. We see nothing wrong and certainly nothing clearly erroneous with this approach, one well within a district court’s broad discretion to accept or reject evidence. [p. 22]

But even to the extent that the opinion the court adopts reflects a true consensus, it still means that popular expression gets more statutory protection than less popular expression, which is not something the First Amendment permits.

Worse, it means that certain people end up with more rights than others. As the court expressly noted, "[A] 'poor' work by an otherwise highly regarded artist nonetheless merits protection from destruction under VARA." [p. 14] In other words, some artists will get this bonus protection for their poor works while others will be denied it, which puts VARA in conflict with the equal protection clause's prohibition on this sort of legal favoritism.

The facts of this case illustrate the problem. The aerosol paintings at 5Pointz were "curated" by Jonathan Cohen. Cohen chose which artists could paint, what they could paint, where they could paint, and how long the paintings could remain. Yet despite his outsized role in the creation of the paintings in question – or, indeed, because of this role ("When the curator is distinguished, his selection of the work is especially probative." [p. 25]) – the court deferred to him as someone whose opinion on the worth of a work could be dispositive in determining whether it deserved these extra legal protections.

Next, Appellants object to the district court’s reliance on Jonathan Cohen’s testimony about his curation of the artwork. The district court reasoned that Cohen’s selection process, which involved review of a portfolio of an artist’s work and a plan for his or her 5Pointz project, screened for works of stature. Appellants, however, contend that this determination was irrelevant because Cohen made his evaluation before the artists painted their 5Pointz works. Nonetheless, the district court cogently reasoned that a respected aerosol artist’s determination that another aerosol artist’s work is worthy of display is appropriate evidence of stature. An artist whose merit has been recognized by another prominent artist, museum curator, or art critic is more likely to create work of recognized stature than an artist who has not been screened. This inference is even stronger where, as here, Cohen reviewed a plan for the subject work before allowing it to be painted. Accepting and crediting such testimony easily falls within a district court’s trial management responsibilities and in this instance involved no abuse of discretion or clear error. [p. 23-24]

There is no constitutional problem with the fact that Cohen played kingmaker with respect to what artists he allowed to exhibit at 5Pointz and which works could attain any sort of visibility. As an individual he is entitled to make these content-based decisions. What is not okay is for the courts to allow his personal opinion to acquire the imprimatur of the state to create extra rights for his favorites. As the Second Circuit set forth, this power appears to be what VARA allows. But it's not what the Constitution permits.



