Cathy Gellis’s Techdirt Profile

Posted on Techdirt - 15 November 2017 @ 10:43am

Ninth Circuit Lets Us See Its Glassdoor Ruling, And It's Terrible

from the making-secret-jurisprudence-public-precedent dept

Well, I was wrong: last week I lamented that we might never know how the Ninth Circuit ruled on Glassdoor's attempt to quash a federal grand jury subpoena served upon it demanding it identify users. Turns out, now we do know: two days after the post ran, the court publicly released its decision refusing to quash the subpoena. It's a decision that doubles down on everything wrong with the original district court decision that also refused to quash it, only now with handy-dandy Ninth Circuit precedential weight.

Like the original ruling, it clings to the Supreme Court's decision in Branzburg v. Hayes, a case where the Supreme Court explored the ability of anyone to resist a grand jury subpoena. But in doing so it manages to ignore other, more recent, Supreme Court precedents that should have led to the opposite result.

Here is the fundamental problem with both the district court and Ninth Circuit decisions: anonymous speakers have the right to speak anonymously. (See, e.g., the post-Branzburg Supreme Court decision McIntyre v. Ohio Elections Commission). Speech rights also carry forth onto the Internet. (See, e.g., another post-Branzburg Supreme Court decision, Reno v. ACLU). But if the platforms hosting that speech can always be forced to unmask their users via grand jury subpoena, then there is no way for that right to ever meaningfully exist in the context of online speech.

Yet neither of these more recent Supreme Court decisions seems to have had any impact on either the district court's or the Ninth Circuit's thinking. Instead both courts seem to feel their hands are tied: that in the 1970s the Supreme Court set forth, once and for all, the rule that no one can ever resist a federal grand jury subpoena except in very limited circumstances, and that this ruling was the final word on their enforceability, no matter what the context. But as I wrote in the previous post, what the Supreme Court said in Branzburg about the enforceability of grand jury subpoenas arose from one specific context, journalists shielding sources, and the only question before the Court then was whether journalists, as journalists, had the ability to refuse them. The Supreme Court never considered whether there might be any other set of circumstances where grand jury subpoenas could be resisted.

In fact, to make Branzburg apply to Glassdoor, the Ninth Circuit had to try to squeeze Internet intermediaries like Glassdoor into the shoes of reporters and make them seem like one and the same, even when they are not:

Although Glassdoor is not in the news business, as part of its business model it does gather and publish information from sources it has agreed not to identify. It argues that “[a]nonymity is an essential feature of the Glassdoor community,” and that “if employees cannot speak anonymously, they often will not speak at all,” which will reduce the availability of “information about what it is like to work at a particular job and how workers are paid.” In other words, forcing Glassdoor to comply with the grand jury’s subpoena duces tecum will chill First Amendment-protected activity. This is fundamentally the same argument the Supreme Court rejected in Branzburg.

With all due respect to the Ninth Circuit panel, this is not fundamentally the same argument the Supreme Court rejected in Branzburg. As I wrote last week, to view the role of an intermediary platform as the same thing as an intermediary journalist is to fundamentally misunderstand the role of the intermediary platform in intermediating information. It also fundamentally misunderstands the First Amendment interests at stake. This case isn't about the press-related First Amendment rights at issue in Branzburg; it's about the speech-related First Amendment rights of online speakers. And it's not the platform's own First Amendment interests that Glassdoor is primarily trying to vindicate; it's the interests of the platform's users. Yet here, too, the Ninth Circuit panel misunderstands those interests when it dismisses out of hand the idea that those users might have any right not to be unmasked:

Furthermore, Branzburg makes it clear that Glassdoor’s users do not have a First Amendment right not to testify before the investigating grand jury about the comments they initially made under the cloak of anticipated anonymity. See id. at 695 (“[I]f the authorities independently identify the informant, neither his own reluctance to testify nor the objection of the newsman would shield him from grand jury inquiry . . . .”). Therefore, Glassdoor cannot refuse to turn over its users’ identifying information on the grounds that it is protecting its users’ underlying rights.

"Anticipated anonymity" is a pretty grotesque way of describing a constitutional right people expected to be protected by when they chose to speak online. And it suggests a misreading of Branzburg, which never considered speech interests that were truly analogous to those of Internet platform users. Even if there's no First Amendment right to speak anonymously with a reporter it does not follow that there is no First Amendment right to speak anonymously online at all.

But that's the upshot of this decision: people who wish to speak anonymously online, in any capacity, won't be able to. They will forever be vulnerable to being unmasked by any federal criminal investigation, just so long as the investigation is not being conducted in bad faith. Nothing else can provide any sort of check on these unmasking demands, regardless of any other interest in play – including those of innocent speakers simply trying to avail themselves of their First Amendment right to speak anonymously, and all those who benefit from that speech.

This is a pretty stark result, and one that stands to affect Internet speakers everywhere. Not only does it threaten speakers anywhere a grand jury within the Ninth Circuit can reach, but it will serve as persuasive authority governing the enforceability of grand jury subpoenas in other circuits. It's also one that stands to have this dramatic effect after having been whipped up in secret, with a hidden docket and an adamant refusal to accept amicus support. (Although two amici are listed in the caption, it does not appear that either brief was ultimately accepted by the court, much less actually read and considered.) As with anyone who insists on going it alone, without the help of friends, the results of that obstinate independence have been predictably disastrous. Friends don't let friends inadvertently undermine the First Amendment, and I wish the court had let those of us able to help it see the full implications of this ruling be that friend.


Posted on Free Speech - 14 November 2017 @ 12:01pm

California Appeals Court Issues A Ruling That Manages To Both Protect And Undermine Online Speech

from the good-news-bad-news dept

Earlier this year I wrote about Yelp's appeal in Montagna v. Nunis. This was a case where a plaintiff had subpoenaed Yelp to unmask one of its users and Yelp tried to resist the subpoena. Not only had the lower court refused to quash the subpoena, but it sanctioned Yelp for having tried to quash it. Per the court, Yelp had no right to assert the First Amendment rights of its users as a basis for resisting a subpoena. As we said in the amicus brief I filed for the Copia Institute in Yelp's appeal of the ruling, if the lower court were right it would be bad news for anonymous speakers, because if platforms could not resist unfounded subpoenas, users would lose an important line of defense against demands seeking to unmask them for no legitimate reason.

Fortunately, a California appeals court just agreed it would be problematic if platforms could not push back against these subpoenas. Not only has this decision avoided creating inconsistent law in California (earlier this year a different California appeals court had reached a similar conclusion), but now there is even more language on the books affirming that platforms are able to stand up for their users' First Amendment rights, including their right to speak anonymously. As we noted, platforms can't always push back against these discovery demands, but it is often in their interests to try to protect the user communities that provide the content that makes their platforms valuable. If they never could, it would seriously undermine those user communities and all the content these platforms enable.

The other bit of good news from the decision is that the appeals court overturned the sanction award against Yelp. It would have significantly chilled platforms if they had to think twice before standing up for their users because of how much it could cost them financially for trying to do so.

But any celebration of this decision needs to be tempered by the fact that the appeals court also decided to uphold the subpoena in question. While the court didn't fault Yelp for having tried to defend its users, and, importantly, found that Yelp had the legal ability to do so, it gave short shrift to that defense.

The test California uses to decide whether to uphold or quash a subpoena comes from a case called Krinsky, and it asks whether the plaintiff has made a "prima facie" case. In other words, we don't know if the plaintiff necessarily would win, but we want to ensure that it's at least possible for the plaintiff to prevail on its claims before we strip speakers of their anonymity for no good reason. That's all well and good, but thanks to an extraordinarily generous read of the statements at issue in this case, one that went out of its way to infer the possibility of falsity in what were at their essence statements of opinion (which are ordinarily protected by the First Amendment), the appeals court decided that the test had been satisfied.

This outcome is unfortunate not only for the user whose identity will now be revealed to the plaintiff but for all future speakers, now that there is an appellate decision on the books running through the "prima facie" balancing test in a way that so casually dismisses the protections speech normally has. It would at least have been better if the question of whether the subpoena should be quashed had been remanded to the lower court, where, even if that court still reached a decision that too easily punctured the First Amendment protection for online speech, it would have posed less of a risk to other speech in the future.


Posted on Techdirt - 10 November 2017 @ 12:16pm

Celebrate The 20th Anniversary Of A Seminal Section 230 Case Upholding It With This Series Of Essays

from the Internet-enabling-cases dept

We have been talking a lot lately about how important Section 230 is for enabling innovation and fostering online speech, and, especially as Congress now flirts with erasing its benefits, how fortuitous it was that Congress ever put it on the books in the first place.

But passing the law was only the first step: for it to have meaningful benefit, courts needed to interpret it in a way that allowed for it to have its protective effect on Internet platforms. Zeran v. America Online was one of the first cases to test the bounds of Section 230's protection, and the first to find that protection robust. Had the court decided otherwise, we likely would not have seen the benefits the statute has since then afforded.

This Sunday the decision in Zeran turns 20 years old, and to mark the occasion Eric Goldman and Jeff Kosseff have gathered together more than 20 essays from Internet lawyers and scholars reflecting on the case, the statute, and all of its effects. I have an essay there, "The First Hard Case: ‘Zeran v. AOL’ and What It Can Teach Us About Today’s Hard Cases," as do many other advocates, including lawyers involved with the original case. Even people who are not fans of Section 230 and its legacy are represented. All of these pieces are worth reading and considering, especially by anyone interested in setting policy around these issues.


Posted on Techdirt - 7 November 2017 @ 9:33am

How The Internet Association's Support For SESTA Just Hurt Facebook And Its Users

from the with-friends-like-these dept

The Internet Association's support for SESTA is truly bizarre. Should its support cause the bill to pass, it will be damaging to every one of its members. Perhaps some members feel otherwise, but it is hopelessly naïve for any of them to believe that they will have the resources to stave off all the potential liability, including criminal liability, SESTA invites for their companies generally and their management teams specifically, or that they will be able to deploy those resources in a way that won't destroy their user communities by over-censoring the creativity and expression they are in the business of providing forums for.

But that's only part of the problem, because what no one seems to be remembering is that Section 230 does not just protect the Internet Association's platform members (and their management teams) from crippling liability; it also protects its platform members' users, and if SESTA passes that protection will be gone.

Naturally, Section 230 does not insulate users from liability for the things they themselves use the platforms to communicate. It never has. That's part of the essential futility of SESTA: it is trying to solve a problem that doesn't exist. People who publish legally wrongful content have always been subject to liability, even federal criminal liability, and SESTA does not change that.

But what everyone seems to forget is that on certain platforms users are not just users; in their use of these systems, they become platforms themselves. Facebook users are a prime example of this dynamic, because when users post status updates that are open for commenting, they become intermediary platforms for all those comments. Just as Facebook provides the space for third-party content in the form of status updates, users who post updates are providing the space for third parties to provide content in the form of comments. And just as Section 230 protects platforms like Facebook from liability for how people use the space they provide, it equally protects those users for the space that they, in turn, provide. Without Section 230 they would all be equally unprotected.

True, in theory, SESTA doesn't get rid of Section 230 altogether. It supposedly only introduces the risk of certain types of liability for any company or person dependent on its statutory protection. But as I've noted, the hole SESTA pokes through Section 230's general protection against liability is enormous. Whether SESTA's supporters want to recognize it or not, it so substantially undermines Section 230's essential protective function as to make the statute a virtual nullity.

And it eviscerates it for everyone, corporate platforms and individual people alike – even those very same individual people whose discussion-hosting activity is what has made platforms like Facebook so popular. While every single platform, whether a current member of the Internet Association, an unaffiliated or smaller platform, or a platform that has yet to be invented, will be harmed by SESTA, the particular character of Facebook, as a platform hosting the platforms of individual users, means it will be hit extra hard. It becomes substantially more difficult to maintain these sorts of dynamic user communities when a key law enabling them is taken away, because in its absence it becomes significantly more risky for any individual user to continue to host conversation on the material they post. Whether that material is political commentary, silly memes, vacation pictures, or anything else people enjoy sharing, without Section 230's critical protection insulating these users from liability for whatever other people happen to say about it, there are no comments they will be able to confidently allow on their posts without fear of an unexpectedly harsh consequence should they let the wrong ones remain.


Posted on Techdirt - 6 November 2017 @ 9:37am

The Case Of Glassdoor And The Grand Jury Subpoena, And How Courts Are Messing With Online Speech In Secret

from the it-ain't-so-grand dept

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. That candor is crucially important: without it, there is no check against abusive unmasking demands that would damage the key constitutional right to speak anonymously.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn't matter which one is used to unmask an anonymous user. The issue raised by all of them is that if their power to unmask an anonymous user is too unfettered, then it will chill all sorts of legitimate speech. And, as noted in the last post, the ability for a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons why we equip grand juries with strong investigatory power: if charges are to be brought against people, due process requires that they be brought by a grand jury rather than by a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It's a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights.

Glassdoor is a platform focused on hosting user-provided information about employers. Much of the speech it hosts is necessarily contributed anonymously so that the speakers can avoid any fallout from their candor. This is the sort of fallout that, if they had to incur it, would discourage them from contributing information others might find valuable. The seriousness of these consequences is why the district court decision denying Glassdoor's attempt to resist the grand jury subpoena seeking to unmask its users reflects such a poor balancing of the relevant interests. Perhaps if the subpoena had been intended to unmask people the government believed were themselves guilty of the crime being investigated, the balance might have tipped more in favor of enforcing it. But the people the subpoena was seeking to unmask were simply suspected of possibly knowing something about a crime that others were apparently committing. It is not unreasonable for the government to want to be able to talk to witnesses, but that desire is not the only interest present here. These are people who were simply availing themselves of their right to speak anonymously, and who, if this subpoena is enforced, are going to be shocked to suddenly find the government on their doorstep wanting to talk to them.

This sort of unmasking is chilling to them and to anyone else who might want to speak anonymously, because it means there is no way they can ever safely speak if their speech might ever somehow relate (however tangentially) to someone else's criminal behavior. It is also inconsistent with the purported goal of fighting crime, because it will prevent criminal behavior from coming to light in the first place: few will want to offer up information if it will only invite trouble for them at some point in the future.

This mis-balancing of interests is almost a peripheral issue in this case, however. The more significant structural concern is why such a weak balancing test was used. As discussed previously, in order to protect the ability to speak anonymously online, it is important for a platform to be able to resist demands to unmask its users in cases where the reason for the unmasking does not substantially outweigh the need to protect people's right to speak anonymously. But the district court denied Glassdoor's attempt to resist the subpoena when it chose to apply the test from Branzburg v. Hayes, a Supreme Court case from the 1970s that has nothing to do with the Internet or Internet platforms; it was solely focused on whether the First Amendment gave journalists the right to resist a grand jury subpoena. Ultimately it decided that they generally had no such right, at least so long as the government was not shown to be acting in bad faith, which, while not nothing, is not a standard that is particularly protective of anonymity. The Court also barely addressed the interests of the confidential sources themselves, dismissing their interest in maintaining anonymity as a mere "preference," and one it presumed was being sought only to shield them from prosecution for their own criminal culpability.

The upshot of Branzburg is that the journalist, as an intermediary for a source's information, had no right to resist a grand jury subpoena. But Branzburg simply can't be extended to the online world, where, for better or worse, essentially all speech must be intermediated by some sort of platform or service in order to happen. The need to let platforms resist grand jury subpoenas therefore has little to do with whether an intermediary itself has a right to resist them and everything to do with the right of their users to speak anonymously, which, far from being a preference, is an affirmative right the Supreme Court subsequently recognized after Branzburg.

A better test, one that respects the need to maintain this critical speech right, is therefore needed, which is why Glassdoor appealed the district court's ruling. Unfortunately, its appeal has raised a third issue: while there is often a lot of secrecy surrounding a grand jury investigation, in part because it makes sense to keep the subject of an investigation in the dark, preserving that level of secrecy does not necessarily require keeping absolutely everything related to the subpoena under seal. Fortunately the district court (and the DOJ, which agreed to this) recognized that some information could safely be released, particularly information related to Glassdoor's challenge of the subpoena's enforcement generally, and thanks to that limited unsealing we can tell that the case involved a misapplication of Branzburg to an Internet platform.

Unfortunately the Ninth Circuit didn't agree to this limited disclosure and sealed the entirety of Glassdoor's appeal, even the parts that had already been made public. One effect of this sealing was that it became impossible for potential amici to weigh in in support of Glassdoor and to argue for a better rule that would allow platforms to better protect the speech rights of their users. While Glassdoor had been ably litigating the case, the point of amicus briefs is to help the court see the full implications of a particular ruling on interests beyond those immediately before it, which is a hard thing for the party directly litigating to do itself. The reality is that Glassdoor is not the first, and will not be the last, platform to get a grand jury subpoena, but unless the rules governing platforms' ability to resist them are stronger than what Branzburg affords, the privacy protection speakers have depended on will continue to evaporate should their speech ever happen to capture the interest of a federal prosecutor with access to a grand jury.

For all we know, of course, the Ninth Circuit might have seen Glassdoor's point and quashed the subpoena. Or maybe it upheld it, and maybe the FBI has now unpleasantly surprised those Glassdoor users. We may never know, just as we may never know whether there are other occasions where courts have used specious reasoning to allow grand jury subpoenas to strip speakers of their anonymity. Even if the Ninth Circuit did fix the problems with this questionable attempt at unmasking, by doing so in secret it has missed an important opportunity to provide guidance to lower courts and help ensure that other questionable attempts don't keep happening to speakers in the future.


Posted on Techdirt - 3 November 2017 @ 1:32pm

Some Thoughts On Gag Rules And Government Unmasking Demands

from the dissent-dies-in-the-dark dept

The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there's no place in a free society for a government to come after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government's simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to know how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and by shining a light on the government's abusive behavior it was able to be stopped.

That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels the same platforms to keep those demands a secret. These secrecy demands are often called different things and are born from separate statutory mechanisms, but they all boil down to being some form of gag over the platform’s ability to speak, with the same equally troubling implications. We've talked before about how important it is that platforms be able to protect their users' right to speak anonymously. That right is part and parcel of the First Amendment because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would then suffer in the absence of their contributions. But it's one thing to say that people have the right to speak anonymously; it's another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked then the right to speak anonymously will only be illusory. For it to be something speakers can depend on to enable them to speak freely there have to be effective barriers preventing that anonymity from too casually being stripped by unjust demands.

One key way to prevent illegitimate unmasking demands is to fight back against them. But no one can fight back against what they are unaware of. Platforms are thus increasingly pushing back against the gags preventing them from disclosing that they have received discovery demands as a way to protect their communities of users.

While each type of demand varies in its particulars (a civil subpoena is different from a grand jury subpoena, which is different from an NSL, which is different from the 19 USC Section 1509 summons used against Twitter in the quest to discover the Trump critic), as well as in the rationale for why the demanding party might have sought to preserve the secrecy around the demand with some sort of gag, all of these unmasking demands ultimately challenge the durability of an online speaker's right to remain anonymous. Which is why rulings that preserve, or, worse, even strengthen, gag rules are so troubling: they make it all the more difficult, if not outright impossible, to protect legitimate speech from illegitimate unmasking demands.

And that matters. Returning to the example about the fishing expedition to unmask a critic, while it's great that in this particular case the government quickly dropped its demand on Twitter, questions remain. Was Twitter the only platform the government went after? Perhaps, but how would we know? How would we know if this was the only speech it had chosen to investigate, or the 1509 summons the only unmasking instrument it had used to try to identify the speaker? If the other platforms it demanded information from were, quite reasonably, cowed by an accompanying demand for secrecy (the sanctions for violating such an order can be serious), we might never know the answers to these questions. The government could be continuing its attacks on its apparently no-longer-anonymous critics unabated, and speakers who depended on anonymity would unknowingly be putting themselves at risk when they continued to speak.

This state of affairs is an affront to the First Amendment. The First Amendment was intended in large part to enable people to speak truth to power, but when we make it too hard for platforms to be partners in protecting that right it entrenches that power. There are a lot of ways that platforms should have the ability to be that partner, but one of them must be the basic ability to tell us when that right is under threat.


Posted on Techdirt - 27 October 2017 @ 1:40pm

Trump Campaign Tries To Defend Itself With Section 230, Manages To Potentially Make Things Worse For Itself

from the just-one-more-wafer-thin-defense dept

It isn't unusual or unwarranted for Section 230 to show up as a defense in situations where some might not expect it. Its basic principles may apply to more situations than are readily apparent. But to appear as a defense in the Cockrum v. Campaign for Donald Trump case is pretty unexpected. From page 37 of the campaign's motion to dismiss the case against it, the following two paragraphs are what the campaign slipped in on the subject:

Plaintiffs likewise cannot establish vicarious liability by alleging that the Campaign conspired with WikiLeaks. Under section 230 of the Communications Decency Act (47 U.S.C. § 230), a website that provides a forum where “third parties can post information” is not liable for the third party’s posted information. Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014). That is so even when the website performs “editorial functions” “such as deciding whether to publish.” Id. at 1359. Since WikiLeaks provided a forum for a third party (the unnamed “Russian actors”) to publish content developed by that third party (the hacked emails), it cannot be held liable for the publication.

That defeats the conspiracy claim. A conspiracy is an agreement to commit “an unlawful act.” Paul v. Howard University, 754 A.2d 297, 310 (D.C. 2000). Since WikiLeaks’ posting of emails was not an unlawful act, an alleged agreement that it should publish those emails could not have been a conspiracy.

This is the case brought against the campaign for allegedly colluding with Wikileaks and the Russians to disclose the plaintiffs’ private information as part of the DNC email trove that ended up on Wikileaks. Like Eric Goldman, who has an excellent post on the subject, I'm not going to go into the relative merits of the lawsuit itself, but I would note that it is worth consideration. Even if it's true that the Trump campaign and Wikileaks were somehow in cahoots to hack the DNC and publish the data taken from it, whether and how the consequences of that disclosure can be recognized by law is a serious issue, as is whether this particular lawsuit by these particular plaintiffs with these particular claims is one that the law can permit to go forward without causing collateral effects to other expressive endeavors, including whistleblower journalism generally. On these points there may or may not be issues with the campaign's motion to dismiss overall. But the shoehorning of a Section 230 argument into its defensive strategy seems sufficiently weird and counterproductive to be worth commenting on in and of itself.

For one thing, it's not a defense that belongs to the campaign. It's a defense that belongs to a platform, if it belongs to anyone, and the campaign was not a platform. Meanwhile the question of whether Wikileaks is a platform able to claim a Section 230 defense with regard to the content at issue is not entirely clear; like most legal questions, the answer is, "It depends," and it can depend on the particular relationship the site had with the hosting of any particular content. True, to the extent that Wikileaks is just a site hosting material others have provided, the answer is more likely to be yes – although even then there is an important caveat: as Eric pointed out, Section 230 doesn't magically make content "legal." It's simply an immunity from liability for certain types of claims. It's not even all claims: there's no limitation, for instance, on liability for claims asserting violations of another's intellectual property, nor is there any limit on liability for claims arising from violations of federal criminal law. While the Cockrum plaintiffs are bringing tort claims, which are the sorts of claims that Section 230 generally insulates platforms from, Section 230 would do nothing to shield the exact same platform from a federal prosecution arising from its hosting of the exact same information.

But the bigger issue is whether Wikileaks is just a platform merely hosting information others have provided, particularly with respect to the DNC emails. If it had too much agency in the creation of the information that ended up hosted on it, it might not be a Section 230-immune "interactive computer service provider" and instead might be found to be a potentially liable "information content provider." The Trump campaign is correct that a platform can exert quite a bit of editorial discretion over the information that appears on it without being considered an information content provider, but at a certain point courts become unwilling to regard the platform's interaction as editorial and instead find it to be authorial. There are reasons to champion drawing the line on what counts as editorial expansively, but it is naïve to pretend that courts will deem all interaction between a platform and the content appearing on it to be so. There is simply far too much caselaw to the contrary.

In fact, a great deal of the caselaw suggests that courts are often particularly unwilling to simply assume that a platform lacked creative agency in the content at issue when the optics surrounding the platform and that content are poor. As Eric has noted in previous posts, this reluctance is problematic, because forcing a platform to go through discovery in order to satisfy the court that there is no evidence of the platform's authorship of the content (authorship that would disqualify the platform from Section 230's protection) raises the costs of being a platform to the sort of crippling level that Section 230 is supposed to forestall. There is reason to worry that the optics surrounding this case may encourage courts to create unpleasant precedent that will make it harder for other platforms to raise Section 230 as a defense in order to quickly end expensive, Section 230-barred lawsuits against them in the future.

But it's the discovery issue that makes the campaign's raising of Section 230 as a defense seem so odd: on page 1 of the motion to dismiss the campaign complains the lawsuit was brought as "a vehicle for discovery of documents and evidence," but by raising Section 230 as a defense it only invites more of it. If any of the plaintiffs' claims were to go forward there would already be plenty of discovery demands exploring the relationship between the campaign and Wikileaks, which the campaign would appear not to want. The objective of the campaign should therefore be nothing more than making the case go away as quickly and quietly as possible. But by gratuitously throwing in a Section 230 defense, one whose applicability turns on Wikileaks' authorship role, which is inherently in question and potentially contingent on its relationship with the campaign, the campaign has not provided a basis for dismissal; it has instead provided the court with a reason why the case should continue to the discovery stage. It seems like a tactical error, and one that reflects a misunderstanding of the jurisprudence surrounding Section 230. It glibly presumes that Section 230 applies to any situation involving a platform hosting content, and that simply isn't correct. While we have encouraged it to be applied liberally to platform situations, it obviously is not always, and sometimes even for good reason.


Posted on Techdirt - 25 October 2017 @ 10:38am

Study On Craigslist Shutting 'Erotic Services' Shows SESTA May Hurt Those It Purports To Help

from the good-intentions-do-not-make-good-policy dept

The last two posts I wrote about SESTA discussed how, if it passes, it will result in collateral damage to the important speech interests Section 230 is intended to protect. This post discusses how it will also result in collateral damage to the important interests that SESTA itself is intended to protect: those of vulnerable sex workers.

Concerns about how SESTA would affect them are not new: several anti-trafficking advocacy groups and experts have already spoken out about how SESTA, far from ameliorating the risk of sexual exploitation, will only exacerbate it, in no small part because it disables one of the best tools for fighting it: the Internet platforms themselves:

[Using the vilified Backpage as an example, in as much as] Backpage acts as a channel for traffickers, it also acts as a point of connection between victims and law enforcement, family, good samaritans, and NGOs. Countless news reports and court documents bear out this connection. A quick perusal of news stories shows that last month, a mother found and recovered her daughter thanks to information in an ad on Backpage; a brother found his sister the same way; and a family alerted police to a missing girl on Backpage, leading to her recovery. As I have written elsewhere, NGOs routinely comb the website to find victims. Nicholas Kristof of the New York Times famously “pulled out [his] laptop, opened up Backpage and quickly found seminude advertisements for [a victim], who turned out to be in a hotel room with an armed pimp,” all from the victim’s family’s living room. He emailed the link to law enforcement, which staged a raid and recovered the victim.

And now there is yet more data confirming what these experts have been saying: when there have been platforms available to host content for erotic services, it has decreased the risk of harm to sex workers.

The September 2017 study, authored by West Virginia University and Baylor University economics and information systems experts, analyzes rates of female homicides in various cities before and after Craigslist opened an erotic services section on its website. The authors found a shocking 17 percent decrease in homicides with female victims after Craigslist erotic services were introduced.

The reasons for these numbers aren't entirely clear, but there does seem to be a direct correlation between sex workers' safety and their ability, thanks to the availability of online platforms, to "move indoors."

Once sex workers move indoors, they are much safer for a number of reasons, Cunningham said. When you’re indoors, “you can screen your clients more efficiently. When you’re soliciting a client on the street, there is no real screening opportunity. The sex worker just has to make the split second decision. She relies on very limited and incomplete information about the client’s identity and purposes. Whereas when a sex worker solicits indoors through digital means, she has Google, she has a lot of correspondence, she can ask a lot of questions. It’s not perfect screening, but it’s better.”

The push for SESTA seems to be predicated on the unrealistic notion that all we need to do to end sex trafficking is end the ability of sex services to use online platforms. But evidence suggests that removing the "indoor" option that the Internet affords doesn't actually end sex work; it simply moves it to the outdoors, where it is vastly less safe.

In 2014, Monroe was a trafficking victim in California. She found her clients by advertising on SFRedbook, the free online erotic services website. One day, she logged into the site and discovered that federal authorities had taken it down. Law enforcement hoped that closing the site would reduce trafficking, but it didn’t help Monroe. When she told her pimp SFRedbook was gone, he shrugged. Then he told her that she would just have to work outdoors from then on.

“When they closed down Redbook, they pushed me to the street,” Monroe told ThinkProgress. “We had a set limit we had to make a day, which was more people, cheaper dates, and if you didn’t bring that home, it was ugly.” Monroe, who asked that her last name be withheld for privacy reasons, had been working through Redbook in hotel rooms almost without incident, but working outdoors was much less safe.

“I got raped and robbed a couple of times,” she said. “You’re in people’s cars, which means nobody can hear you if you get robbed or beaten up.”

A recurrent theme here on Techdirt is that, no matter how well-intentioned a technology policy is, whether it is good policy depends on its unintended consequences. Not only do we need to worry about how a policy affects other worthwhile interests, but we also need to consider how it affects the very interest it seeks to vindicate. And in this case SESTA stands to harm the very people it ostensibly seeks to help.

Does that mean Congress should do nothing to address sex trafficking? Of course not, and it is considering many other options that more directly address the serious harms that arise from sex trafficking. Even Section 230 as it currently exists does not prevent the government from going after platforms if they directly aid trafficking. But all too often regulators like to take shortcuts and target platforms simply because bad people may be using them in bad ways. It's a temptation that needs to be resisted for many reasons, not the least of which is that giving in to it may enable bad people to behave even worse.


Posted on Techdirt - 20 October 2017 @ 10:41am

A Joke Tweet Leads To 'Child Trafficking' Investigation, Providing More Evidence Of Why SESTA Would Be Abused

from the we-wish-we-were-kidding dept

Think we're unduly worried about how "trafficking" charges will get used to punish legitimate online speech? We're not.

A few weeks ago a Mississippi mom posted an obviously joking tweet offering to sell her three-year-old for $12.

I tweeted a funny conversation I had with him about using the potty, followed by an equally-as-funny offer to my followers: 3-year-old for sale. $12 or best offer.

The next thing she knew, Mississippi authorities decided to investigate her for child trafficking.

The saga began when a caseworker and supervisor from Child Protection Services dropped by my office with a Lafayette County sheriff’s deputy. You know, a typical Monday afternoon.

They told me an anonymous male tipster called Mississippi’s child abuse hotline days earlier to report me for attempting to sell my 3-year-old son, citing a history of mental illness that probably drove me to do it.

Beyond notifying me of the charges, they said I’d have to take my son out of school so they could see him and talk to him that day, presumably protocol to ensure children aren’t in immediate danger. So I went to his preschool, pulled my son out of a deep sleep during naptime, and did everything in my power not to cry in front of him on the drive back to my office.

All of this for a joke tweet.

This story is bad enough on its own. As it stands now, the Mississippi authorities' actions will deter other Mississippi parents from blowing off steam with facetious remarks on social media. But at least the chilling harm is contained within Mississippi's borders. If SESTA passes, that chill will spread throughout the country.

If SESTA were on the books, the Mississippi authorities would not have had to stop with the mom. Their next stop could be Twitter itself. No matter how unreasonable their suspicions, they could threaten Twitter with criminal investigation for having facilitated this allegedly trafficking-related speech.

The unlimited legal exposure these potential prosecutions pose will force platforms to pre-emptively remove not just the speech of parents from Mississippi but any speech from any parent anywhere that might inflame the humorless judgment of overzealous Mississippi authorities – or authorities from anywhere else where humor and judicious sense are also impaired. In fact, it won't even be limited to parents. Authorities anywhere could come after anyone who posted anything that they decided to misinterpret as a credible threat.

These warnings might sound like hyperbole, but that's exactly what hangs in the balance: hyperbole. The ability to say ridiculous things, because sometimes we need to say ridiculous things. If anything that gets said can be so willfully misconstrued as evidence of a crime, it will chill a lot of speech, and it will do so far beyond any single authority's jurisdictional boundaries if platforms can be made to fear enabling any speech that might happen to set any of them off.


Posted on Techdirt - 19 October 2017 @ 10:45am

Beyond ICE In Oakland: How SESTA Threatens To Chill Any Online Discussion About Immigration

from the trafficking-is-in-the-ICE-of-the-beholder dept

First, if you are someone who likes stepped-up ICE immigration enforcement and does not like "sanctuary cities," you might cheer the implications of this post, but it isn't otherwise directed at you. It is directed at the center of the political Venn diagram of people who both feel the opposite about these immigration policies and yet are also championing SESTA. Because this news from Oakland raises the specter of a horrific implication for online speech championing immigrant rights if SESTA passes: the criminal prosecution of the platforms that host that discussion.

Much of the discussion surrounding SESTA is based on some truly horrific tales of sex abuse, crimes that more obviously fall under what the human trafficking statutes are clearly intended to address. But with news that ICE is engaging in a very broad reading of the type of behavior the human trafficking laws might cover, and prosecuting anyone who happens to help an immigrant, it's clear that the type of speech SESTA will carve out from Section 230's protection will go far beyond the situations the bill originally contemplated.

Some immigration rights activists are worried that ICE has recently re-defined the crime of human trafficking to include assistance, like housing and employment, that adults provide to juveniles who come to the United States without their parents. In many cases, the adults being investigated and charged are close relatives of the minors who are supposedly being trafficked.

Is ICE simply misreading the trafficking statutes? Perhaps, but it isn't necessarily a far-fetched reading. People in the EU who've merely given rides to Syrian (and other) refugees tired from trekking on foot have been prosecuted for trafficking. Yes, that's Europe, not the US, but it's an example of how well-intentioned trafficking laws can easily be over-applied to the point that they invite absurd results, including results that end up making immigrants even more vulnerable to traffickers than they would have been without the laws.

So what does that have to do with SESTA? SESTA is drafted with language that presumes that sex trafficking laws are clearly and unequivocally good in their results. And what that Oakland example suggests is that this belief is a myth. Anti-immigrant forces within the government, both federal and state, can easily twist them against the very same people they were ostensibly designed to protect.

And that means they are free to come after the platforms hosting any and all speech related to the assistance of immigrants, if any and all assistance can be considered trafficking. The scope of what they could target is enormous: tweets warning about plain-clothed ICE agents at courthouses, search engine results for articles indicating whether evacuation centers will be checking immigration status, online ads for DACA enrollment assistance, or even discussion about sanctuary cities and the protections they afford generally. If SESTA passes, platforms will either have to presumptively censor all such online speech or risk prosecution by any federal or state entity with different views on immigration policy. Far from being the minor carve-out of Section 230 that SESTA's supporters insist it is, it is instead an invitation to drive from the Internet an awful lot of important speech that these same supporters would want to ensure we can continue to have.


Posted on Free Speech - 16 October 2017 @ 9:33am

New York Considers Barring Agreements Barring Victims From Speaking

from the perhaps-there-oughta-be-a-law dept

In the wake of the news about Harvey Weinstein's apparently serial abuse of women, and the news that several of his victims were unable to tell anyone about it due to non-disclosure agreements, the New York legislature is considering a bill to prevent such NDAs from being enforceable in New York state. According to the Buzzfeed article, the bill as currently proposed still allows a settlement agreement to demand that the recipient of a settlement not disclose how much they settled for, but it can't put the recipient in jeopardy of having to compensate their abuser if they choose to talk about what happened to them.

It's not the first time a state has imposed limits on the things that people can contract for. California, for example, has a law that generally makes non-compete agreements invalid. Even Congress has now passed a law banning contracts that limit consumers' ability to complain about merchants. Although, as we learn in law school, there are some Constitutional disputes about how unfettered the freedom to contract should be in the United States, there has also always been the notion that some contractual demands are inherently "void as against public policy." In other words, go ahead and write whatever contractual clause you want, but they aren't all going to be enforceable against the people you want to force to comply with them.

As with the federal Consumer Review Fairness Act mentioned above, the proposed New York bill recognizes that there is a harm to the public interest when people cannot speak freely. When bad things happen, people need to know about them if they are to protect themselves. And it definitely isn't consistent with the public interest if the people doing the bad things can stop others from knowing that they've been doing them. These NDAs have essentially had the effect of letting bad actors pay money for the ability to continue their bad acts, and this proposed law is intended to take away that power.

As with any law the devil will be in the details (for instance, this proposed bill appears to apply only to non-disclosure clauses in the employment context, not more broadly), and it isn't clear whether this one, as written, might cause some unintended consequences. There might theoretically be a concern, for example, that without a gag clause in a settlement agreement it would be harder for victims to reach agreements that would compensate them for their injury. But as long as victims of other people's bad acts can be silenced as a condition of being compensated for those bad acts, and that silence enables there to be yet more victims, then there are already some unfortunate consequences for a law to try to address.


Posted on Techdirt - 18 August 2017 @ 11:55am

Because Of Course There Are Copyright Implications With Confederacy Monuments

from the copyright-makes-a-mess-of-everything dept

There's no issue of public interest that copyright law cannot make worse. So let me ruin your day by pointing out there's a copyright angle to the monument controversy: the Visual Artists Rights Act (VARA), a 1990 addition to the copyright statute that allows certain artists to control what happens to their art long after they've created it and no longer own it. Techdirt has written about it a few times, and it was thrust into the spotlight this year during the controversy over the Fearless Girl statue.

Now, VARA may not be specifically applicable to the current controversy. For instance, it's possible that at least some of the Confederacy monuments in question are too old to be subject to VARA's reach, or, if not, that all the i's were dotted on the paperwork necessary to avoid it. (It’s also possible that neither is the case — VARA may still apply, and artists behind some of the monuments might try to block their removal.) But it would be naïve to believe that we'll never ever have monument controversies again. The one thing VARA gets right is an acknowledgement of the power of public art to be reflective and provocative. But how things are reflective and provocative to a society can change over time as the society evolves. As we see now, figuring out how to handle these changes can be difficult, but at least people in the community can make the choice, hard though it may sometimes be, about what art they want in their midst. VARA, however, takes away that discretion by giving it to someone else who can trump it (so to speak).

Of course, as with any law, the details matter: what art was it, whose art was it, where was it, who paid for it, when was it created, who created it, and is whoever created it dead yet… all these questions matter in any situation dealing with the removal of a public art installation because they affect whether and how VARA actually applies. But to some extent the details don't matter. While in some respects VARA is currently relatively limited, we know from experience that limited monopolies in the copyright space rarely stay so limited. What matters is that we created a law expressly designed to undermine the ability of a community to decide whether it wants to continue to have particular art in its midst, and thought that was a good idea. Given the power of art to be a vehicle of expression, even political expression or outright propaganda, allowing any law to etch that expression in stone (as it were) is something we should really rethink.


Posted on Techdirt - 12 July 2017 @ 9:27am

Copyright Law And The Grenfell Fire - Why We Cannot Let Legal Standards Be Locked Up By Copyright

from the burning-down-the-house dept

It's always hard to write about the policy implications of tragedies – the last thing their victims need is the politicization of what they suffered. At the same time, it's important to learn what lessons we can from these events in order to avoid future ones. Earlier Mike wrote about the chilling effects on Grenfell residents' ability to express their concerns about the safety of the building – chilling effects that may have been deadly – because they lived in a jurisdiction that allowed critical speech to be easily threatened. The policy concern I want to focus on now is how copyright law also interferes with safety and accountability both in the US and elsewhere.

I'm thinking in particular about the litigation Carl Malamud has found himself faced with because he dared to post legally-enforceable standards on his website as a resource for people who wanted ready access to the law that governed them. (Disclosure: I helped file amicus briefs supporting his defense in this litigation.) A lot of the discussion about the litigation has focused on the need for people to know the details of the law that governs them: while ignorance of the law is no excuse, as a practical matter people need a way to actually know what the law is if they are going to be expected to comply with it. Locking it away in a few distant libraries or behind paywalls is not an effective way of disseminating that knowledge.

But there is another reason why the general public needs to have access to this knowledge. Not just because it governs them, but because others' compliance with it obviously affects them. Think for instance about the tenants in these buildings, or any buildings anywhere: how can they be equipped to know if the buildings they live in meet applicable safety standards if they never can see what those standards are? They instead are forced to trust that those with privileged access to that knowledge will have acted on it accordingly. But as the Grenfell tragedy has shown, that trust may be misplaced. "Trust, but verify," it has been famously said. But without access to the knowledge necessary to verify that everything has been done properly, no one can make sure that it has. That makes the people who depend on this compliance vulnerable. And as long as copyright law is what prevents them from knowing if there has been compliance, then it is copyright law that makes them so.

Of course, there are lots of standards at issue in the Public Resource cases, and not all of them necessarily would threaten mortal peril if they were not complied with. But the federal court's decision in these cases, if allowed to stand, means that all sorts of standards, including those bearing on public safety, can be kept from ready public view through a claim of copyright. As the resulting injunctions ordering Carl Malamud to delete accurate and operable law from his website make clear, no matter how accurate or operable the legal standard, no matter how critical compliance with the standard is to ensure the health and safety of the public, people can be prevented from sharing the knowledge of what that standard contains.

And it not only prevents people in one jurisdiction from knowing what that standard is. It prevents people anywhere in the world from knowing. If an American jurisdiction has made innovations in public safety standards, no one else in the world can freely benefit from that knowledge in order to figure out whether their own local standards are sufficient. It's an absurd result – the purpose of copyright law is, after all, to develop and disseminate knowledge – and it's one that hurts people. It is not something we should be encouraging copyright law, or any law, to do.


Posted on Techdirt - 6 July 2017 @ 11:51am

Why Protecting The Free Press Requires Protecting Trump's Tweets

from the protecting-the-speech-you-disagree-with dept

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration... It definitely wasn't this past weekend, because waiting for me in my Twitter stream was Trump's tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that's not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

I don't write any of this to defend the tweet: it was odious, unpresidential, and betrayed an animus towards the press that is terrifying to see in any government official – and especially the Chief Executive of the United States of America. But inappropriate, disgraceful, and disturbing though it was, it was still just speech, and calls to suppress speech are always alarming regardless of who is asking for it to be suppressed or why.

Some have tried to defend these calls by arguing that suppressing speech is ok when it is not the government doing the suppressing. But the reason official censorship is problematic is because it drives away the dissenting voices democracy depends on hearing. Which is not to say that all ideas are worth hearing or critical to self-government; the point is that protecting opposing voices in general is what allows the meritorious ones to be able to speak out against the powerful. There is no way to split the baby so that only some minority expression gets protected: either all of it must be, or none of it will be. If only some of it is, then the person who has the power to decide which will be protected and which will not has the power to decide badly.

Consider how Trump himself would use that power. Given, as we see in his tweet, how much he wants to marginalize voices that speak against him, we need to make sure this protection remains as strong as possible, even if it means that he, too, gets the benefit of it. There simply is no way to punish one man's speech, no matter how troubling it may be, without opening the door to better speech similarly being suppressed.

Naturally, as a private platform, Twitter may choose to delete this or any other Trump tweet (or any tweet or Twitter account at all) for any reason. We've argued before that private platforms have the right to police their services however they choose. But we have also seen how when speech is eliminated from a forum, the forum is often much poorer for it. Deciding to suppress speech is not something we should be too quick to encourage, or demand. Not even when the speech is provocative and threatening, because so much important, valid, and necessary speech can so easily be labeled that way. As Justice Holmes noted, "Every idea is an incitement." In other words, it's easy to justify suppressing all sorts of speech, including valid and important speech, if any viewpoint aggressively at odds with any other can be eliminated because of the challenge it presents. Courts have therefore found that speech, even speech promoting the use of force or lawlessness, may only be censored when "such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." Given that even a KKK rally was found not to meet this description, these requirements for likely imminence of harm are steep hurdles that Trump's tweet is unlikely to clear.

The truth may well be, as many fear, that Trump would actually like people to beat up journalists. It may also be true that he has some bad actors among his followers who are eager to do so. But even if people do assault journalists, it won't be because of this tweet. It will be because Trump, as president, supports the idea. He'll support it whether or not this tweet is deleted. After all, it's not as though deleting the tweet will make him change his view. And it's that view that's the real problem to focus on here.

Because Trump has far more powerful means at his disposal to act upon his antipathy towards the media than his Twitter account affords. In fact, better that he should tweet his drivel rather than act on this malevolence in a way that actually does do direct violence to our free press. Especially because, in an administration so lacking in transparency, his tweets at least help let us know that this animus lurks within. Armed with this knowledge we can now be better positioned to defend those critical interests his presidency so threatens. Painful though it is to see his awful tweets, ignorance on this point would in no way have been bliss.


Posted on Techdirt - 9 June 2017 @ 10:38am

The Importance Of Defending Section 230 Even When It's Hard

from the preventing-tough-cases-from-making-bad-law dept

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by the terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users' use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs' family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has found itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the long view of reversing this trend.

The problem is, like the First Amendment itself, speech protections only work as speech protections when they always work. When one can find exemptions here and there, all of a sudden none of these protections are effective and it chills the speech of those who were counting on them because no one can be sure whether or not the speech will ultimately be protected. In the case of Section 230, that chilling arises because if the platforms cannot be sure whether they will be protected from liability in their users' speech, then they will have to assume they are not. Suddenly they will have to make all the censoring choices with respect to their users' content that Section 230 was designed to prevent, just to avoid the specter of potentially crippling liability.

One of the points we emphasized in our brief was how such an outcome flouts what Congress intended when it passed Section 230. As we said then, and will say again as many times as we need to, the point of Section 230 is to encourage the most beneficial online speech and also minimize the worst speech. To see how this dual-purpose intent plays out we need to look at the statute as a whole, beyond the part of it that usually gets the most attention, at Subsection (c)(1), which is about how platforms are immune from liability arising from their users' speech. There is another, equally important part of the statute, at Subsection (c)(2), that immunizes platforms from liability when they take steps to minimize harmful online content on their systems. This subsection rarely gets attention, but it's important not to overlook, especially as people look at the effect of the first subsection and worry that it might encourage too much "bad" speech. Congress anticipated this problem and built in a remedy as part of a balanced approach to encourage the most good speech and least bad speech. The problem with now holding online services liable for bad uses of their platforms is that it distorts this balance, and in distorting this balance undermines both these goals.

We used the cases of Barnes v. Yahoo and Doe 14 v. Internet Brands to illustrate this point. Both of these are cases where the Ninth Circuit did make exemptions and found Section 230 not to apply to certain negative uses of Internet platforms. For instance, in Barnes Section 230 was actually found to apply to part of the claim directly relating to the speech in question, which was a good result, but the lawsuit also included a promissory estoppel claim, and the Court decided that because it was not directly related to liability arising from content it could go forward. The problem here was that Yahoo had separately promised to take down certain content, and so the Court found it potentially liable for not having lived up to its promise. But as we pointed out, the effect of the Barnes case has been that platforms now never promise to take content down. Even though Congress intended for Section 230 to help Internet platforms perform a hygiene function to help keep the Internet free of the worst content, by discouraging platforms from going the extra mile the Barnes decision has instead had the opposite effect from the one Congress intended. That's why courts should not continue to find reasons to limit Section 230's applicability. Even if they think they have good reason to find one, that very justification itself will be better advanced when Section 230's protection can be most robust.

We also pointed out that in terms of the other policy goal behind Section 230, to encourage more online speech, divining exemptions from Section 230's coverage would undermine that goal as well. In this case the plaintiffs want providers to have to deny terrorists the use of their platforms. As a separate amicus brief by the Internet Association explained, platforms actually want to keep terrorists off and go to great lengths to try to do so. But as the saying goes, "One man's terrorist is another man's freedom fighter." In other words, deciding who to label a terrorist can often be a difficult thing to do, as well as an extremely political decision to make. It's certainly beyond the ken of an "intermediary" to determine – especially a smaller, less capitalized, or potentially even individual one. (Have you ever had people comment on one of your Facebook posts? Congratulations! You are an intermediary, and Section 230 applies to you too.)

Even if the rule were that a platform had to check prospective users' names against a government list, there are significant constitutional concerns, particularly regarding the right to speak anonymously and the prohibition against prior restraint, that arise from having to make these sorts of registration denial decisions this way. There are also often significant constitutional problems with how these lists are made at all. As the amicus brief by EFF and CDT also argued, we can't create a system where the statutory protection platforms depend on to be able to foster online free speech is conditioned on coercing platforms to undermine it.


Posted on Techdirt - 26 May 2017 @ 10:43am

Helping Platforms Protect Speech By Avoiding Bogus Subpoenas

from the it's-important dept

We often talk about how protecting online speech requires protecting platforms, like with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt's think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It's also why sites like Yelp let users post anonymously, because often that's the only way they will feel comfortable posting reviews candid enough to be useful to those who depend on sites like Yelp to help them make informed decisions.

But as we also see, people who don't like the things said about them often try to attack their critics, and one way they do this is by trying to strip these speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. This case in no way prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so that they can sue them for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that's a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills speech more generally, by making the critical anonymity protection that plenty of legal speech depends on suddenly illusory.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can't always fight them – for instance, like DMCA takedown notices, they may simply get too many – but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user's speech interest. Worse, it sanctioned (!) Yelp for even trying, thus making platforms' efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important California State appellate ruling that validated attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and California State Constitution required platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.


Posted on Techdirt - 18 November 2016 @ 1:09pm

More Thoughts On Trump's Technology And Innovation Policies -- It All Goes Back To Freedom Of Speech

from the be-afraid dept

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, those challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Free speech/copyright. Like Mike, I find Trump's expressed views towards free speech deeply troubling. His threats to "open up our libel laws" would do a tremendous disservice to Americans' ability to speak freely, and, unless enough people in Congress see the problem, as Mike noted, there's little hope that the long-needed federal anti-SLAPP law could be brought forth and survive his veto. But there may yet be reason for optimism on this front: the proposed bill already had bi-partisan support, and in addition to Democrats there are several #NeverTrump GOP members who have since been chilled by threats from his supporters and who may also recognize the need for it. There's also still the opportunity to expand anti-SLAPP laws in individual states, and here Trump's bluster might help that process, as well as ultimately help fortify our defenses for free speech overall. As someone with a track record of attacking people he does not like, and who has just accumulated an awful lot of power, he is Exhibit A for why America has a robust tradition of free speech in the first place.

The problem here is that our previous decades of relative political stability have allowed attitudes to become a bit too casual about the importance of free speech as an escape valve against tyranny. But now that the need to speak out is so critical for so many, perhaps it will make us all be a little less glib about it.

One area where we need to be less glib is in copyright. While I would not be surprised to see Trump do something damaging in this space (probably in furtherance of Trump TV), copyright policy has always cut across party lines, and saner policy has in the past had the support of several GOP members of Congress, some of whom may still be in office. The silver lining here is that now that the need to preserve free speech is so apparent, it may become easier to point out how copyright policy interferes with it. For instance, because President Trump, or anyone supporting him in government or otherwise, can so easily cause criticism of him to be disappeared simply by sending a takedown notice, or have people cut off from their online services on the mere allegation of infringement (as they effectively could right now thanks to recent jurisprudence on DMCA Section 512(i)), opposing voices are extremely vulnerable. As the opposition party, Democrats in particular need to start realizing how IP rights in general (copyright and also trademark and other quasi-IP monopolies like publicity rights) have been providing censors with enormous leverage over other people's speech. Now that these levers can be used against them and their constituencies, perhaps they will be more likely to see the problem and finally push back against it (or at least stop actively trying to make the situation even worse).

Mass surveillance/encryption. The problem with the policy debates on mass surveillance to date is that they have tended to get bogged down by the assumption that the government was inherently good, and that all the spying it did was in furtherance of protecting its people. Until now many of those who disagreed with that assumption have largely been marginalized. Now, however, it appears that millions of people will have serious doubts about the motivations of the chief executive. It is therefore going to be much harder for surveillance advocates to push the "trust us" argument when the incoming government has already indicated its strong desire to punish its internal enemies. Libertarians were already alarmed by the power of the surveillance state, and more Democrats may start seeing things their way pretty soon. The opportunity here is that there is now a new framing to help people see what a significant constitutional violation and danger this surveillance represents.

Encryption raises the same issues, and, as with mass surveillance, the public, and even members of Congress, may soon come to the painful realization of how important it is to have robust, workable, non-backdoored encryption available to them too. After all, as we saw with Nixon, it is not unprecedented for a President to spy on his political adversaries. But this time Trump can leverage the NSA to do it.

Net Neutrality/Intermediary immunity. There are (at least) two other policy areas where the importance of continuing to protect free speech principles remains evident. Regarding net neutrality, there's little reason to believe Trump will have anything positive to contribute along these lines, unless he decides it is to his business advantage. But what has also become apparent from this election is the tremendous damage consolidated mass media can cause to democracy. Politics is too important to be left to just a few outlets to tell us about, yet without net neutrality that's the situation we will be left with.

The danger posed by homogeneous media is also why bolstering the protection of internet intermediaries is so important. Their protection is what helps ensure that a diversity of voices can be heard. The unfortunate reality is that there will likely be a lot of calls by people unhappy with this election and its fallout to limit those voices, particularly those whose message is most divisive, and with them also the platforms that facilitate their speech. But it will be important to hold fast to the intermediary-shielding principles that have to date largely protected platforms from liability in their users' content. It's only by leaving them free to operate without fear of liability that they are most able to voluntarily refuse the most awful content and be available for the most good. Neither is the case if the government effectively takes that decision away from them with the threat of punitive law, particularly when that law will inevitably reflect the government's own agenda regarding what it considers to be worthwhile content or not.

Internet governance. With regard to Internet governance, at least the TPP appears to be dead and with it its speech-chilling provisions. Trump claims to detest free trade treaties, and in this regard his presidency may be helpful for innovation policy, which has been poorly served by US trade representatives trying to bind the United States into secretly negotiated international trade agreements that undermine key American liberties by imposing crippling limitations and liability on tech businesses and other platforms. On the other hand, from time to time international accords are helpful and even necessary for technology businesses to continue to thrive, innovate, and employ people worldwide. (See, e.g., the former Safe Harbor rules.) Unfortunately Trump's presidency appears to have precipitated a loss of credibility on the world stage, creating a situation where it seems unlikely that other countries will be as inclined to yield to American leadership on any further issues affecting tech policy (or any policy in general) as they may have been in the past.

The bigger concern with respect to Internet governance, however, is whether tech policy advocates from America will be taken seriously in the future, if we go back on previous promises developed in thorough processes involving all stakeholders. It was already challenging enough to convince other countries that they should do things our way, particularly with respect to free speech principles and the like, but at least when we used to tell the world, "Do it our way, because this is how we've safely preserved our democracy for 200 years," people elsewhere (however reluctantly) used to listen. But now people around the world are starting to have some serious doubts about our commitment to internet freedom and connectivity for all. So we will need to tweak our message to one that has more traction.

Our message to the world now is that recent events have made it all the more important to actively preserve those key American values, particularly with respect to free speech, because it is all that stands between freedom and disaster. Now is no time to start shackling technology, or the speech it enables, with external controls imposed by other nations to limit it. Not only can the potential benevolence of these attempts not be presumed, but we are now facing a situation where it is all the more important to ensure that we have the tools to enable dissenting viewpoints to foment viable political movements sufficient to counter the threat posed by the powerful. This pushback cannot happen if other governments insist on hobbling the Internet's essential ability to broker these connections and ideas. It needs to remain free in order for all of us to be as well.


Posted on Techdirt - 18 March 2016 @ 12:44pm

New Decision In Dancing Baby DMCA Takedown Case -- And Everything Is Still A Mess

from the didn't-really-fix-anything dept

I got very excited yesterday when I saw a court system alert that there was a new decision out in the appeal of Lenz v. Universal. This was the Dancing Baby case where a toddler rocking out to a Prince song was seen as such an affront to Prince's exclusive rights in his songs that his agent Universal Music felt it necessary to send a DMCA takedown notice to YouTube to have the video removed. Heaven forbid people share videos of their babies dancing to unlicensed music.

Of course, they shouldn't need licenses, because videos like this one clearly make fair use of the music at issue. So Stephanie Lenz, whose video this was, through her lawyers at the EFF, sued Universal under Section 512(f) of the DMCA for having wrongfully caused her video to be taken down.

Last year, the Ninth Circuit heard the case on appeal and then in September issued a decision that generally pleased no one. Both Universal and Lenz petitioned for the Ninth Circuit to reconsider the decision en banc. En banc review was particularly important because the decision suggested that the panel felt hamstrung by the Ninth Circuit's earlier decision in Rossi v. MPAA, a decision which had the effect of making it functionally impossible for people whose content had been wrongfully taken down to ever successfully sue the parties who had caused that to happen.

Although the updated language exorcises some unhelpful, under-litigated ideas that suggested automated takedown systems could be a "valid and good faith" way of processing takedowns while considering fair use, the new, amended decision does little to remediate any of the more serious underlying problems from the last version. The one bright spot from before fortunately remains: the Ninth Circuit has now made clear that fair use is something that takedown notice senders must consider before sending them. But as for what happens when they don't, or what happens when they get it wrong, that part is still a confusing mess. The reissued decision doubles-down on the contention from Rossi that a takedown notice sender must have just a subjectively reasonable belief – not an objectively reasonable one – that the content in question is infringing. And, according to the majority of the three-judge panel (there was a dissent), it is for a jury to decide whether that belief was reasonable.

The fear from September remains that there is no real deterrent to people sending wrongful takedown notices that cause legitimate, non-infringing speech to be removed from the Internet. It is expensive and impractical to sue to be compensated for the harm this censorship causes, and having to do it before a jury, with an extremely high subjective standard, makes doing so even more unrealistic.

It's possible that the Ninth Circuit may actually see the plaintiff as having been vindicated here; after all, she may still go to a jury and be awarded damages to compensate her, potentially even for the attorneys' fees expended in fighting this fight. But note that the issue of whether she is due anything, and, if so, how much, has not yet been fully litigated, despite this case having been going on since 2007! Not everyone whose content is removed is as tenacious as Ms. Lenz or her EFF counsel, and not everyone can even begin to fight the fight when their content is unjustly removed.

Furthermore, sometimes the value in having speech posted on the Internet comes from having it posted *then*. No amount of compensation can truly make up for the effect of the censorship on a speaker's right to be heard when he or she wanted to be heard. Consider, as we are in the thick of election season, what happens when election-related speech is taken down shortly before a vote. As was pointed out in several amicus briefs in support of the en banc rehearing, including one I filed on behalf of the Organization of Transformative Works and Public Knowledge, such DMCA-enabled censorship has happened before.

Suing won't solve that problem, but at least the threat of a lawsuit might make someone think twice before sending a wrongful takedown notice. But if a lawsuit isn't a realistic possibility then that deterrence won't happen. What the parties supporting the plaintiff have been worried about is that the DMCA allows for an unprecedented form of censorship we would not normally allow. Think about it: if there were no DMCA then people who wanted content removed from the Internet would have to file well-pleaded and well-substantiated lawsuits articulating why the content in question was so wrongful that an injunction compelling its removal was justified in the face of any defense. In other words, without the DMCA, the question of fair use would get considered, and it would get considered by a judge.

But thanks to the DMCA, would-be censors can save the time, cost, and burden of having to make sure they got the fair use question right before causing content to be removed – and very likely with a complete lack of judicial oversight to hold them to account if they didn't. No judge may ever scrutinize their decision to ensure that they didn't abuse the shortcut to censorship the DMCA affords them. Instead, Thursday's decision only further ensures that this sort of abuse will continue unabated.


Posted on Techdirt - 10 July 2015 @ 6:18pm

Dancing Babies, The DMCA, Fair Use And Whether Companies Should Pay For Bogus Takedowns

from the still-in-court dept

Earlier this week the Ninth Circuit heard oral arguments in the appeal of Lenz v. Universal. This was the case where Stephanie Lenz sued Universal because Universal had sent YouTube a takedown notice demanding it delete the home movie she had posted of her toddler dancing, simply because music by Prince was audible in the background. It's a case whose resolution has been pending since 2007, despite the fact that it involves the interpretation of a fundamental part of the DMCA's operation.

The portion of the DMCA at issue in this case is Section 512 of the copyright statute, which the DMCA added in 1998 along with Section 1201. As with Section 1201, Section 512 reflects a certain naivete by Congress in thinking any part of the DMCA was a good idea, rather than the innovation-choking and speech-chilling mess it has turned out to be. But looking at the statutory language you can kind of see how Congress thought it was all going to work, what with the internal checks and balances they put into the DMCA to prevent it from being abused. Unfortunately, while even as intended there are some severe shortcomings to how this balance was conceptualized, what's worse is how it has not even been working as designed.

One such problem is with the content takedown system incorporated into Section 512. The point of Section 512 is to make it possible for intermediaries to host the rich universe of online content that users depend on. It does this by shifting the burden of having to police users' content for potential copyright infringement from these intermediaries to copyright owners, who are better positioned to do it. Without this shift more online speech would likely be chilled, either because the fear of being held liable for hosting users' infringing content would prompt intermediaries to over-censor legitimate content, or because the possibility of being held liable for user content would make being an Internet intermediary hosting it too crushingly high a risk to attempt at all.

Copyright owners often grumble about having the policing be their responsibility, but these complaints ignore the awesome power they get in return: by merely sending a takedown notice they are able, without any litigation or court order or third-party review, to cause online speech to be removed from the Internet. It is an awesome power, and it is one that Congress required them to use responsibly. That's why the DMCA includes Section 512(f), as a mechanism to hold wayward parties accountable when they wield this power unjustifiably.

Unfortunately this is a section of the statute that has lost much of its bite. A 2004 decision by the Ninth Circuit, Rossi v. MPAA, read into the statute a certain degree of equivocation about what the "good faith" requirement of a takedown notice actually demanded. Nonetheless, the statute on its face still requires that a valid takedown notice include a statement that the party sending it has "a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law." (emphasis added)

The big question in this case is what the "or the law" part means in terms of making a takedown notice legitimate. No one is disputing that the notice that took down the dancing baby video was authorized by the agent in charge of administering the rights to Prince's music (at the hearing we learned that this is no longer Universal Music, but it was back then). But copyright is always contextual. In other words, just because someone uses (e.g., by posting to the Internet) a copyrighted work does not mean they have automatically infringed that work's copyright. There may well be circumstances enabling that use, like a license (including a statutory or compulsory license), or fair use.

Whether the "or the law" part included authorization pursuant to fair use is what a significant part of the hearing addressed. Universal said that it didn't, arguing that fair use was only an affirmative defense. By "affirmative defense" Universal meant that fair use was just something you could argue as a defense to being accused of copyright infringement in a lawsuit but not something that existed more integrally as part of copyright law itself. As such, Universal argued, it was not necessary to consider it when sending a takedown notice claiming that the use in question was not authorized.

EFF, arguing for Lenz, disagreed, contending that the articulation of fair use in the statute, at 17 U.S.C. § 107, made fair use more than just a defense; rather, it is a statutory limitation constraining the scope of the copyright owner's exclusive rights and just as much a part of the law as the parts enumerating those rights. As a result, the EFF argued, a copyright owner sending a takedown notice always has to consider whether the rights the notice is seeking to vindicate are at all constrained by the sort of use being made of the work. If the copyright owner doesn't do that then it could be subject to the sanctions of 512(f).

Although one can never read the tea leaves from an oral argument, the judges did not seem to buy Universal's argument that fair use was just an affirmative defense. They seemed more persuaded by the EFF's position that it was enough of a part of the copyright statute for at least some consideration of it to be required for a takedown notice to be valid. But then the court became concerned with the question of how much consideration was needed. After all, as Universal suggested (and EFF disputed), there may even be some question about whether the use of Prince's music in the dancing baby video was itself fair. Fair use is a very squishy thing always dependent on the particular context of a particular use of a copyrighted work. Often it takes massive amounts of litigation to determine whether a use was fair, so the judges spent a lot of time questioning both parties about what a copyright owner (or its agent), if the statute requires them to consider fair use, must actually do on that front in order to not run afoul of the law's requirements when sending takedown notices.

Universal argued that because it (and other similarly situated copyright holders) needed to send millions of takedown notices it would simply be too burdensome to have to consider fair use for each and every one of them. To this the EFF suggested that tools may be available to help triage the likely contenders needing closer analysis, but something else the EFF said I think drives the point home more aptly.

The DMCA also includes a "put back" process, at Section 512(g), so that Internet users whose content has wrongfully been removed can have it replaced. Universal argued that this process should be enough to deal with any wrongful takedowns, as it allows for wrongfully removed content to be replaced. (Universal also argued that this "put back" notice was necessary to give the copyright holder notice that fair use might be an issue to consider.) But if this were the case then why have a Section 512(f) in the statute at all? There is nothing in the statute that suggests that a "put back" notice needs to happen for Section 512(f) to be able to operate. Furthermore, although the record in this case was unfortunately poor as to what percentage of removed content was ever put back pursuant to 512(g) put back notices, as the EFF noted, even if it were a very small percentage of removed content, a small percentage of millions of instances suggests that quite a bit of non-infringing content is still getting removed.

Moreover, there is no reason to suspect that the content that has been restored in response to these put back notices represents the entire universe of wrongfully removed content. There is little basis to presume that everyone else who had their content removed simply shrugged it off as a fair cop. Because a put back notice can conspicuously put a user in the line of fire of a copyright owner, many users might not have wanted to tempt the trouble. Also, as the EFF observed, the DMCA takedown system is fairly labyrinthine and often needs the assistance of counsel to help navigate it. This form of support is likely not available to most, and even in the case of Ms. Lenz it did not readily result in her home video of her kid dancing being restored.

Ultimately Universal is arguing that this outcome is ok: despite this harm to legitimate speech, copyright owners should nonetheless be entitled to cause millions and millions of instances of user-generated content to disappear from the Internet with very little effort, inconvenience, or oversight on their part. But it's an argument that fails to recognize just what a privilege the takedown system represents. It is a huge shortcut, giving private parties the extraordinary power to be censors over Internet content without the trouble and expense of a lawsuit to first determine whether their rights have truly been infringed. With the DMCA copyright owners become judge, jury, and executioner over other people's speech all on their own, and when they decide to sentence content for disappearance they get to use the takedown notice as the gun to the head of the intermediary to force it to do the deed.

Universal spent a lot of time arguing that the DMCA was intended to be this sort of shortcut in order to be a "rapid response" system to online infringements. But the "rapid response" the DMCA offers is that copyright owners don't first have to go to court. Nothing in the statute suggests copyright owners are entitled to a response so rapid that they are excused from exercising the appropriate care a valid takedown notice requires – or that even a lawsuit would require. As Universal would have it, they get to be censors over other people's speech without any of the risk normally involved if they had to use the courts to vindicate their rights. Note that nothing in the DMCA precludes a copyright owner from suing an Internet user who has infringed its copyright. But with a lawsuit comes the risk that a copyright owner might have to pay the fees and costs of the defendant should their claims of infringement be found unmeritorious (including because the targeted use was fair). According to Universal, however, copyright owners should face no similar consequence should the claims underpinning their takedown notices be similarly specious. Copyright owners should simply be able to cause content to be deleted at will, with no risk of any penalty to them for being wrong.

But that's not what the statute says. As was also argued at the hearing, Section 512(f) creates the penalty necessary to deter wrongful takedowns because without there being one, all the risk of the takedown system would be borne by those whose free speech rights (both to speak freely and to freely consume what others have said) are undermined by copyright owners' glib censorship. As the saying goes, with great power comes great responsibility, and it hardly misconstrues Congress's intent, or the express language of the statute, to demand that copyright owners carefully exercise that responsibility before letting their takedown notices fly, and to sanction them when they don't.


Posted on Techdirt - 2 July 2015 @ 7:39pm

How Section 1201 Of The Copyright Statute Threatens Innovation

from the breaking-computers dept

It would take many, many blog posts to fully articulate all the ways that modern copyright law threatens innovation. But one notable way is through Section 1201 of the copyright statute.

As discussed previously, Section 1201 is ostensibly supposed to minimize copyright infringement by making it its own offense to bypass the technical protective measures (TPMs) controlling access to a particular copy of a copyrighted work. (Sometimes these sorts of TPMs are referred to as DRM, or Digital Rights Management.) It is a fair question whether forbidding the bypass of TPMs is at all an effective approach to minimizing infringement, but it's an even more important question to ask whether the portion of the copyright statute that forbids the bypassing of TPMs should do so at the expense of other sections of the statute that specifically entitle people to make certain uses of copyrighted works.

The answer to this latter question is clearly no, and in fact Congress anticipated that it would be “no,” when it put into Section 1201 the requirement that the Copyright Office consider afresh, every three years, whether certain types of TPM bypassing should be deemed specifically permissible, notwithstanding Section 1201’s general prohibition against it. Unfortunately these triennial rulemakings are an extremely cumbersome, expensive, and ineffective way of protecting the non-infringing uses of copyrighted works the public is entitled to make. But the even bigger problem, and the one that I will focus on here, is that Section 1201’s prohibition against bypassing TPMs is increasingly standing in the way of not just non-infringing uses of copyrighted works but non-infringing uses of computing devices as a whole.

In the triennial rulemaking underway, members of the public petitioned for a number of exemptions to Section 1201's prohibition, which the Copyright Office distilled into 27 classes of exemptions. The first 10 classes generally sought to allow people to interact with copies of copyrighted works in ways they were entitled to but that the TPMs controlling the interaction prevented. But the latter classes, 11 through 27, were notable in that, rather than involving the sort of consumption of copyrighted media content DRM is designed to control, they all were classes designed to allow people to interact with computing logic itself.

Some of these classes, like 23 (“Abandoned software – video games requiring server communication”) and 24 (“Abandoned software – music recording software”), sought to allow people to bypass TPMs so that they could actually run the copies of software they legitimately had access to. But for many of these classes petitioners found themselves needing to ask not for exemptions to use copyrighted works in ways they had the legitimate right to, but for exemptions allowing them to use computers in ways they had the legitimate right to use them.

Particularly for the classes seeking exemptions to modify the functionality of, or perform security research on, devices like phones (Classes 11 and 16), tablets (Class 12), TVs (Class 20), vehicles (Classes 21 and 22), and even computer-chipped medical devices (Class 27), that's what all these devices are: computers. They just happen to be phone, TV, car, and pacemaker-shaped computers. Like a home PC (which Congress had not explicitly sought to regulate access to in 1998 when it codified Section 1201), they are pieces of computing hardware with circuitry that gets controlled by software. And, just as with the home PC, people should be able to use the processing power of their computing devices as they would choose to, regardless of the shapes they come in.

Unfortunately, unless they bypass the TPM they can’t, and unless the Copyright Office grants the exemption they can’t bypass the TPM legally. And that’s a problem, because when people’s exploration of the full contours of their computing devices is limited by the threat of legal sanction, all the innovation and discovery that exploration would have yielded is chilled.

But to the extent that it is copyright law that is causing this chilling, it is a particularly bizarre result. Copyright law is inherently about promoting the progress of the arts and sciences, or, in other words, stimulating innovation and knowledge-sharing. It is completely anathema to copyright law’s constitutional mandate for Section 1201 of the copyright statute to explicitly impose barriers to that discovery.

This contradiction was an important point I made in two sets of comments and testimony submitted as part of this rulemaking process. In them I argued that these exemptions, particularly for classes 11-27, should be granted liberally in order that people’s freedom to tinker with the tools they legitimately possessed not be impinged upon just because those tools happened to contain a TPM. If the Copyright Office were to do nothing and simply let these TPMs continue to block this free exploration with the threat of legal sanction it would be particularly unjust because none of those TPMs were implemented to limit the infringement of copyrighted works. While the software running a device may itself be a copyrighted work, the TPM bypass would not be about violating any of the exclusive rights in that work’s copyright. Rather, the TPM bypass would simply be about getting the device itself to work as its user would choose.

Opponents to these classes argued that, even if the TPMs were not guarding against copyright harms, they prevented other sorts of harms that might result if people could use computing technology with unfettered freedom. With regard to vehicles, for instance, opponents fretted that if people could study or modify the software on their cars then brakes would fail, pollution would increase, and other terrible consequences would befall the world. But something important to remember is that by limiting this sort of discovery we also limit all of its benefits as well. If people cannot legally do security research on their cars, for instance, it doesn't make those cars more secure. It just makes it harder to make them more secure.

Also, it is not the role of copyright to regulate technology use and development (except to the extent that it is designed to stimulate innovation). When the Copyright Office suddenly gets to be the gatekeeper on how people can use their computing technology, while it may forestall some potential negative outcomes of that use, it also forestalls any good ones. Furthermore, it prevents any other, more appropriate authority better equipped to balance the costs and benefits of technology use from crafting more nuanced and effective regulation to address any negative ones. And such authorities would; after all, it's not as though we have been living in the Wild West up until the Copyright Office managed to become inserted into the technology regulation space. For instance, even in the analog world if people modified the physical attributes of their cars – something they never needed the Copyright Office's blessing to do – other regulators could still speak to whether they would be allowed to drive their modified cars on open roads. These other regulators have not become enfeebled just because the modifications people may choose to make to their cars may now be digital, particularly when the consequences of these modifications are not.

But even when the consequences of how people use their machines are digital, regulators can still address those outcomes. The problem has been that regulating computing use is tricky and up to now we haven't done it very well. Instead we've ended up with laws like the Computer Fraud and Abuse Act (CFAA), laws that are very powerful and just as blunt, which punish beneficial computer uses as much as negative ones. But just because we have not perfected laws governing computer use does not mean that the Copyright Office should simply say no to these uses. In fact, it's actually a reason that the Copyright Office should say yes to them.

One of the problems with the CFAA is that it construes the question of wrongfulness of a computer use based on the permissibility of that action. As a result, without the exemptions we are left in a situation where barriers erected under the auspices of copyright could threaten to become the sole basis by which the CFAA gets its teeth to sanction the very sort of inherently non-infringing activity that copyright law was never intended to prevent. And that’s the bitter irony, because while laws like the CFAA sadly lack any adequate mechanism to assess whether a computer use is a beneficial or otherwise fair use, copyright law by design can, and, indeed, pursuant to its Constitutional origins, must.

For these reasons the Copyright Office should grant all the sought-after exemptions, particularly for these latter classes. And it's also for these reasons that it's time to amend the copyright statute to remove the bottleneck to innovation Section 1201 has become, given how it requires the permission of the Copyright Office before any of this computer use can be allowed.

Thanks to Jeffrey Vagle and others for their help preparing these comments and testimony.

Reposted from Digital Age Defense

