Cathy Gellis’s Techdirt Profile

Posted on Techdirt - 22 February 2018 @ 1:47pm

Court Destroys Future Public Art Installations By Holding Building Owner Liable For Destroying This One

from the no-good-deed-goes-unpunished dept

Last week was a big week for dramatically bad copyright rulings from the New York federal courts: the one finding people liable for infringement if they embed others' content in their own webpages, and this one about 5Pointz, where a court has found a building owner liable for substantial monetary damages for having painted his own building. While many have hailed this decision, including those who have mistakenly viewed it as a win for artists, this post explains why it is actually bad for everyone.

The facts in this case are basically this: the owner of a run-down, formerly industrial building in a run-down neighborhood aspired to do something to redevelop his property, but it would be a few years before the time would be right. So in the meantime he let some graffiti artists use the building for their aerosol paintings. The building became known as 5Pointz, and the artwork on it soon began to attract attention. The neighborhood also began to change, and with the improvement the prospects for redeveloping the property into residences became more promising. From the outset everyone knew that redevelopment would happen eventually, and that it would put an end to the arrangement since the redevelopment would likely necessitate tearing down the building, and with it the art on the walls. As the date of demolition grew closer, the artists considered buying the building from the owner in order to prevent it from being torn down and thus preserve the art. However, the owner had received a variance that suddenly made the value of the property skyrocket from $40 million to $200 million, which made the buyout impossible. So the artists instead sued to halt the destruction of their art and asked for a preliminary injunction, which would ensure that nothing happened to the art while the case was litigated. But in late 2013 the court denied the preliminary injunction, and so a few days later the building owner went ahead and painted over the walls. The painting-over didn't end the litigation, which then became focused on whether this painting-over broke the law. In 2017 the court issued a ruling allowing the case to proceed to trial on this question. Then last week came the results of that trial, with the court finding this painting-over a "willfully" "infringing" act and assessing a $6.7 million damages award against the owner for it.

It may be tempting to cheer the news that an apparently wealthy man has been ordered to pay $6.7 million to poorer artists for damaging their art. True -- the building owner, with his valuable property, seems to be someone who potentially could afford to share some of that wealth with artists who are presumably of lesser means. But we can't assume that a defendant building owner, who wants to be able to do with his property what he is normally legally allowed to do, will always be the one with all the money, or that the plaintiff artist will always be the one without those resources. The law applies to all cases, no matter which party is richer, and the judicial reasoning at play in this case could just as easily apply if Banksy happened to paint the side of your house and you no longer wanted what he had painted to remain there. Per this decision, removing it could turn into an expensive proposition.

The decision presents several interrelated reasons for concern. Some arise from the law underpinning it, the Visual Artists Rights Act of 1990, an amendment to copyright law that, as described below, turned the logic of copyright law on its head. But there are also some alarming things about this particular decision, especially surrounding the application of high statutory damages for what the court deemed "willful" "infringement," that accentuate everything that's wrong with VARA and present issues of its own.

With respect to the law itself: prior to VARA, the point of copyright law (at least in the US) was to ensure that as many works as possible could be created, the better to promote the progress of science and the useful arts (as the Constitution prescribes). The copyright statute did this by giving creators economic rights, or rights designed to ensure that if there was money to be made from their works, they would have first crack at making it. The thinking was that with this economic incentive, creators would create more works, and thus the public interest goal of having more works created would be realized.

VARA changed this statutory equation for certain kinds of visual works. Instead of economic rights, it gave their creators certain moral rights, including (as relevant for this case) the right to preserve the integrity of their work. This right of integrity includes the right

(A) to prevent any intentional distortion, mutilation, or other modification of that work which would be prejudicial to his or her honor or reputation, and any intentional distortion, mutilation, or modification of that work is a violation of that right, and
(B) to prevent any destruction of a work of recognized stature, and any intentional or grossly negligent destruction of that work is a violation of that right.

Which may sound well and good, but as we see with the costly way the statute plays out, rather than creating economic incentives stimulating the creation of new works, it has now created economic effects inhibiting them, which in the long run will only hurt the artists VARA was intended to help.

The most obvious way it hurts them is by deterring property owners from allowing any art to be installed on their property, because it means that if they do, they may be forever stuck with it. Allowing art to be installed means they will either stand to lose the control they would have had without it (itself a hit to the property's worth), or potentially be faced with thousands if not millions of dollars in liability if they do what they want with their property anyway. And what property owner would want to chance such dire consequences in order to encourage art?

Granted, some of this risk can be ameliorated with written agreements, which were lacking in this case. But if all public art requires lawyered paperwork, it raises costs and will deter both artist and property owner from pursuing this sort of mutually beneficial arrangement. In this case the property owner had let the artists use his building to create, for free, by unwritten agreement simply because at the time they all agreed that it was good for both of them. It will not be good for creativity if we discourage this sort of symbiotic relationship from taking root.

It also will not be good for future artists whose economic interests might have benefited from opportunities like those 5Pointz offered. Even in this case the court noted all the evidence, presented in "Folios," showing that being able to paint the building had opened up all sorts of doors for the artists to reap further economic rewards for their art. Artists will have fewer opportunities for that sort of career-enhancing exposure if landlords are deterred from giving it to them.

There is an implicit argument in the plaintiffs' case that some of the rise in the value of the building was due to the artwork, and that it would therefore be just to share some of that windfall with them. But by this same logic, the building owner would have been similarly responsible for, and thus entitled to a portion of, the rise in value of the work of the artists whom he had allowed to exhibit. It would not be good for artists in the long run if they should find themselves needing to share their good fortune with their benefactors – or be potentially liable for any loss in their benefactors' property value, should the presence of their work diminish it.

This case also stands to have some directly chilling effects on artists. While this case is not about graffiti artists suing each other for painting over each other's works (as the court noted, up to now graffiti artists have routinely painted over each other's works without any more severe penalty than social opprobrium, if even that), it's not clear why, if the decision stands, the next case couldn't be. The decision found that a VARA claim could be vindicated regardless of whether a work was temporary or permanent, and instead focused on whether a work had achieved the stature needed to be entitled to protection under the statute. It won't be good for artists if they have to fear being tied up in litigation with their peers due to the transient nature of their medium (or locked out of being able to create at all because others have already used all the good spaces first), or caught in a judicial cage match to determine whose work has the stature to be more deserving of protection.

It is possible that the court erred, and transient art falls outside VARA's purview. But the statute is ambiguous enough that it could potentially extend to such works, and in any case its deterrent effects would apply to all sorts of art, not just aerosol-painted art. Unfortunately at no point does the decision contemplate these effects, or its effects on other important policy values such as urban planning and affordable housing, if VARA is able to trump other forms of law, such as property law, that normally speak to what a building owner may do. The decision also largely ignores that the building owner had let the artists paint there in the first place, when he didn't have to. And it ignores that the building owner had done this apparently now wrongful painting-over of the art on his walls after this very same court denied an injunction that would have told him not to. None of these factors mattered to the court.

But all of them should matter to us, as should the extremely troubling way the court found his "infringement" (in other words, the painting-over) "willful," and thus subject to heightened damages. This is where the decision not only encapsulates the policy flaws of VARA, but also threatens to be seriously distorting to copyright doctrine (and other law) generally.

One troubling aspect is the court's punitive attitude towards the building owner for having painted over the art after the very same court had denied an injunction preventing it. In between its order of November 12, 2013 denying the preliminary injunction, and its November 20, 2013 decision explaining that order (embedded below), the building owner had gone ahead and done the painting-over. This act appears to have outraged the court, whose November 20 decision reads more as an explanation for why it probably should have issued the injunction, now that in the intervening time the owner had painted over the art.

As the court correctly observed in this 2013 opinion, preliminary injunctions exist so that courts can prevent, at the outset, irreparable harm that a court is likely to later rule needed to be prevented, when by then it would be too late to unring the bell. In fact, the injunction standard the court cited is so routine that when the naked order denying the preliminary injunction was issued, it was perfectly reasonable for the building owner to presume that either (a) he was likely to win the case and be able to do what he wanted to the building, or (b) it wasn't such a severe harm if he removed the art now and later the court decided he shouldn't have, or (c) some combination of both. So it reads as a serious miscarriage of justice for the court's 2018 decision to punish him for going ahead and removing the art, or for "recklessly disregard[ing] the possibility" that removing it would be wrongful, as the court put it.

Furthermore, if the court is right in its 2018 decision that the painting-over raised a valid VARA claim, then it was wrong to deny the injunction in 2013. Problematic though it is for VARA to introduce non-economic rights into copyright law, the whole point of one of them – the right to maintain the integrity of the work – can only be vindicated with an injunction. If this right were something that could be adequately compensated for by monetary damages, then it would start to look a lot more like an economic right. That's not what VARA was ever intended to create, but it is what the court effectively created back in 2013 when it refused the injunction and deemed monetary damages sufficient to address any harm should the VARA claim later prevail.

As it wrote in a confused passage in its 2013 decision:

Although the works have now been destroyed—and the Court wished it had the power to preserve them—plaintiffs would be hard-pressed to contend that no amount of money would compensate them for their paintings; and VARA—which makes no distinction between temporary and permanent works of visual art—provides that significant monetary damages may be awarded for their wrongful destruction. See 17 U.S.C. §§ 501-505 (providing remedies for VARA violations). In any event, paintings generally are meant to be sold. Their value is invariably reflected in the money they command in the marketplace. Here, the works were painted for free, but surely the plaintiffs would gladly have accepted money from the defendants to acquire their works, albeit on a wall rather than on a canvas.

It continued more bizarrely:

Moreover, plaintiffs’ works can live on in other media. The 24 works have been photographed, and the court, during the hearing, exhorted the plaintiffs to photograph all those which they might wish to preserve. All would be protected under traditional copyright law, see 17 U.S.C. § 106 (giving, inter alia, copyright owners of visual works of art the exclusive rights to reproduce their works, to prepare derivative works, and to sell and publicly display the works), and could be marketed to the general public—even to those who had never been to 5Pointz.

In the court's defense, it is correct that VARA does allow for infringements of moral rights to be compensated by monetary damages. But such damages shouldn't be the primary form of relief, and the court's punitive use of the highest amount of statutory damages to compensate the artists exemplifies why. Statutory damages are normally for when it is hard to measure economic loss, and so we have to instead make some presumptions about how much compensation that loss deserves. There are already plenty of problems with these presumptions tending to allow for the recovery of far more than what actual losses would have been, but this decision magnifies their problematic nature by allowing statutory damages not only to overcompensate economic loss but to overcompensate non-economic loss. In other words, congratulations, we now have "pain and suffering" in copyright cases.

Perhaps in a way we always have – the overuse of statutory damages has always suggested that it is really a retributive, rather than truly compensatory, damages measure. In this case, the court is perfectly frank that this is what it is doing:

If not for Wolkoff’s insolence, these damages would not have been assessed. If he did not destroy 5Pointz until he received his permits and demolished it 10 months later, the Court would not have found that he had acted willfully. Given the degree of difficulty in proving actual damages, a modest amount of statutory damages would probably have been more in order.

But courts have always maintained the facade that compensation for emotional harm is unavailable in copyright cases. The 5Pointz court even acknowledges this limitation in footnote 18 of its 2018 decision:

Plaintiffs contend that they are entitled to damages for emotional distress. Under traditional copyright law, plaintiffs cannot recover such damages. See Garcia v. Google, Inc., 786 F.3d 733, 745 (9th Cir. 2015) (“[A]uthors cannot seek emotional damages under the Copyright Act, because such damages are unrelated to the value and marketability of their works.”); Kelley v. Universal Music Group, 2016 WL 5720766, at *2 (S.D.N.Y. Sept. 29, 2016) (“Because emotional distress damages are not compensable under the Copyright Act, this claim must also be dismissed.”). Since VARA provides damages under “the same standards that the courts presently use” under traditional copyright law, H.R. Rep. No. 101-514, at 21-22 (1990), emotional damages are not recoverable.

But the outright hostility the court repeatedly shows the defendant, in the language in both decisions, makes it clear that statutory damages are being used to compensate for what is otherwise a purely emotional harm. From the 2018 decision:

The whitewash did not end the conflict in one go; the effects lingered for almost a year. The sloppy, half-hearted nature of the whitewashing left the works easily visible under thin layers of cheap, white paint, reminding the plaintiffs on a daily basis what had happened. The mutilated works were visible by millions of people on the passing 7 train. One plaintiff, Miyakami, said that upon seeing her characters mutilated in that manner, it "felt like [she] was raped." Tr. at 1306:24-25.

There are good reasons why we do not allow copyright to remediate hurt feelings, not the least of which is that feelings are likely to run raw on both sides. From an article from last year (before the $6.7 million judgment):

Wolkoff feels betrayed by the artists he thought he was helping by lending them his wall to bomb. He cried when the building came down, he confessed, and said he would bring back more street artists to paint at the location after the renovation—just not those who sued.

Notably, in its 2017 ruling (also embedded below) allowing the VARA claim to go forward, the court dismissed the artists' claims for intentional infliction of emotional distress, despite the strong emotions the case had engendered.

Because the defendants destroyed 5Pointz only after the Court dissolved its temporary restraining order and did no more than raze what they rightfully owned, the defendants simply did not engage in the kind of outrageous and uncivilized conduct for whose punishment this disfavored tort was designed.

And therein lies the rub: the building owner did no more than what other law clearly allowed. But by allowing artists to bring claims for the "intentional distortion, mutilation, or other modification . . . [of works that] would be prejudicial to [the artist’s] honor or reputation," the court has set up a direct conflict between VARA and what traditional copyright law, and traditional property law, have allowed. And it has done this without addressing any of the implications of this new policy collision.


Posted on Techdirt - 12 February 2018 @ 3:35pm

Ninth Circuit Shuts Down 'Terrorists Used Twitter' Case But Not Because Of Section 230

from the this-is-ok-too dept

With the event at Santa Clara earlier this month, and the companion essays published here, we've been talking a lot lately about how platforms moderate content. It can be a challenging task for a platform to figure out how to balance dealing with the sometimes troubling content it can find itself intermediating on the one hand and free speech concerns on the other. But at least, thanks to Section 230, platforms have been free to do the best they could to manage these competing interests. However you may feel about how they make these decisions now, those decisions would not come out any better without that statutory protection insulating platforms from legal consequence if they did not opt to remove absolutely everything that could invite trouble. If they had to contend with the specter of liability in making these decisions it would inevitably cause platforms to play a much more censoring role, at the expense of legitimate user speech.

Fearing such a result is why the Copia Institute filed an amicus brief at the Ninth Circuit last year in Fields v. Twitter, one of the many "how dare you let terrorists use the Internet" cases that keep getting filed against Internet platforms. While it's problematic that they keep getting filed, they have fortunately not tended to get very far. I say "fortunately," because although it is terrible what has happened to the victims of these attacks, if platforms could be liable for what terrorists do it would end up chilling platforms' ability to intermediate any non-terrorist speech. Thus we, along with the EFF and the Internet Association (representing many of the bigger Internet platforms), had all filed briefs urging the Ninth Circuit to find, as the lower courts have tended to, that Section 230 insulates platforms from these types of lawsuits.

A few weeks ago the Ninth Circuit issued its decision. The good news is that this decision affirms that the end has been reached in this particular case and hopefully will deter future ones. However, the court did not base its reasoning on the existence of Section 230. This was somewhat disappointing, because we saw this case as an important opportunity to buttress Section 230's critical statutory protection, but by not speaking to the statute at all the court also didn't undermine it, and the fact that the court ruled this way isn't actually bad. By focusing instead on the language of the Anti-Terrorism Act itself (the statute barring the material support of terrorists), the court was still able to lessen the specter of legal liability that would otherwise chill platforms and force them to censor more speech.

In fact, it may even be better that the court ruled this way. The result is not fundamentally different than what a decision based on Section 230 would have led to: just as the court found that the ATA would require some direct furtherance of the terrorist act by the platform, so too would Section 230 have required the platform's direct involvement in the creation of the user content furthering the act in order for the platform to potentially be liable for its consequences. But the more work Section 230 does to protect platforms legally, the more annoyed people seem to get at it politically. So by not being relevant to the adjudication of these sorts of tragic cases it won't throw more fuel on the political fire seeking to undermine the important speech-protective work Section 230 does, and then it hopefully will remain safely on the books for the next time we need it.

[Side note: the Ninth Circuit originally issued the decision on January 31, but then on February 2 released an updated version correcting a minor typographical error. The version linked here is the latest and greatest.]


Posted on Techdirt - 31 January 2018 @ 9:08am

My Question To Deputy Attorney General Rod Rosenstein On Encryption Backdoors

from the golden-key-and-databreach dept

Never mind all the other reasons Deputy Attorney General Rod Rosenstein's name has been in the news lately... this post is about his comments at the State of the Net conference in DC on Monday. In particular: his comments on encryption backdoors.

As he and so many other government officials have before, he continued to press for encryption backdoors, as if it were possible to have a backdoor and a functioning encryption system. He allowed that the government would not itself need to have the backdoor key; it could simply be a company holding onto it, he said, as if this qualification would lay all concerns to rest.

But it does not, and so near the end of his talk I asked the question, "What is a company to do if it suffers a data breach and the only thing compromised is the encryption key it was holding onto?"

There were several concerns reflected in this question. One relates to what the poor company is to do. It's bad enough when a company experiences a data breach and user information is compromised. Not only does a data breach undermine a company's relationship with its users but, recognizing how serious this problem is, authorities are increasingly developing policy instructing companies on how they are to respond to such a situation, and it can expose the company to significant legal liability if it does not comport with these requirements.

But if an encryption key is taken, what is at risk is so much more than basic user information, financial details, or even the pool of potentially rich and varied data related to the user's interactions with the company. Rather, it is every single bit of information the user has ever depended on the encryption system to secure that stands to be compromised. What is the appropriate response of a company whose data breach has now stripped its users of all the protection they depended on for all this data? How can it even begin to try to mitigate the resulting harm? Just what would government officials, who required the company to keep this backdoor key, now propose it do? Particularly if the government is going to force companies to be in this position of holding onto these keys, these answers are something they are going to need to know if they are going to be able to afford to be in the encryption business at all.
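To make concrete why a breached key is so catastrophic, here is a minimal sketch of the failure mode. It is not any actual escrow design Rosenstein has proposed; it assumes a simple symmetric scheme (the Fernet construction from Python's cryptography package) and hypothetical placeholder data. But the point generalizes: whoever holds the escrowed key can read everything ever encrypted under it, so a breach that leaks only that one key leaks everything.

```python
# A minimal sketch of why a leaked escrow key is catastrophic.
# Assumes a simple symmetric scheme (Fernet, from the "cryptography"
# package); real backdoor proposals differ in detail, but the failure
# mode is the same. All data here is hypothetical.
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()  # the key the company must hold onto
cipher = Fernet(escrow_key)

# Years of user data, all encrypted under the same escrowed key.
stored_ciphertexts = [
    cipher.encrypt(plaintext)
    for plaintext in [b"medical records", b"banking details", b"private messages"]
]

# A breach that compromises nothing but the escrow key...
stolen_key = escrow_key

# ...lets the attacker read every message ever secured with it.
attacker = Fernet(stolen_key)
for ct in stored_ciphertexts:
    print(attacker.decrypt(ct))
```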

Which leads to the other idea I was hoping the question would capture: that encryption policy and cybersecurity policy are not two distinct subjects. They interrelate. So when government officials worry about what bad actors do, as Rosenstein's comments reflected, it can't lead to the reflexive demand that encryption be weakened simply because, as they reason, bad actors use encryption. Not when the same officials are also worried about bad actors breaching systems, because this sort of weakened encryption so significantly raises the cost of these breaches (as well as potentially making them easier).

Unfortunately Rosenstein had no good answer. There was lots of equivocation, punctuated with the assertion that experts had assured him that it was feasible to create backdoors and keep them safe. Time ran out before anyone could ask the follow-up question of exactly who these mysterious experts giving him this assurance were, especially in light of so many other experts agreeing that such a solution is not possible, but perhaps this is something Senator Wyden can find out...


Posted on Free Speech - 24 January 2018 @ 1:34pm

Wherein We Ask The California Supreme Court To Lessen The Damage The Court Of Appeal Caused To Speech

from the nothing-to-see-here dept

A few weeks ago we posted an update on Montagna v. Nunis. This was a case where a plaintiff subpoenaed Yelp for the identity of a user. The trial court originally denied Yelp's attempt to quash the subpoena – and sanctioned it for trying – on the grounds that platforms had no right to stand in for their users to assert their First Amendment rights. We filed an amicus brief in support of Yelp's appeal of that decision, which fortunately the Court of Appeal reversed, joining another Court of Appeal that earlier in the year had also decided that of course it was ok for platforms to try to quash subpoenas seeking to unmask their users.

Unfortunately, that was only part of what this Court of Appeal decided. Even though it agreed that Yelp could TRY to quash a subpoena, it decided that it couldn't quash this particular one. That's unfortunate for the user, who was just unmasked. But what made it unfortunate for everyone is that this decision was fully published, which means it can be cited as precedent by other plaintiffs who want to unmask users. While having the first part of the decision affirming Yelp's right to quash the subpoena is a good thing, the logic the Court used in the second part makes it a lot easier for plaintiffs to unmask users – even when they really shouldn't be entitled to.

So Yelp asked the California Supreme Court to partially depublish the ruling – or, in other words, make the bad parts of it stop being precedent that subsequent litigants can cite in their unmasking attempts (there are rules that prevent California lawyers from citing unpublished cases in their arguments, except under extremely limited circumstances). And this week we filed our own brief at the California Supreme Court in support of Yelp's request, arguing that the Court of Appeal's analysis was inconsistent with other California policy and precedent protecting speech, and that without its depublication it will lead to protected speech being chilled.

None of this will change the outcome of the earlier decision - the user will remain unmasked. But hopefully it will limit the effect of that Court of Appeal's decision with respect to the unmasking to the facts of that particular case.


Posted on Techdirt - 22 January 2018 @ 1:43pm

Tech Policy A Year Into The Trump Administration: Where Are We Now?

from the crystal-ball-testing dept

Shortly after Trump was elected I wrote a post predicting how things might unfold on the tech policy front with the incoming administration. It seems worth taking stock, now almost a year into it, to see how those predictions may have played out.

Most of this post will track the way the issues were broken down last time. But it is first worth commenting on how, in one significant overarching way, last year's post does not hold up: it presumed, even if only naively in the face of evidence already suggesting otherwise, that the Trump administration would function with the competency and coherence presidential administrations have generally needed simply to function at all, let alone effectively enough to drive forth a set of preferred policy positions. There seems to be growing consensus that this presumption was and remains unsound.

Furthermore, the normal sort of political considerations that traditionally have both animated and limited presidential policy advocacy do not seem applicable to this presidency. As a result, conventional political wisdom in other areas of government also now seems to be changing, as the rest of the political order reacts to what Trump actually has done in his year as President and prepares for the next major round of elections in 2018.

Free speech/copyright – For better or for worse, the Trump administration does not seem to be particularly interested in copyright policy, but it has nonetheless had an effect on it. The denial of the cert petition in Lenz, following a strange brief from the Trump Administration's Solicitor General, and the appointment of Justice Gorsuch will leave a mark: without teeth being put back into the DMCA to deter abusive takedown notices, all sorts of speech, including political speech, will remain vulnerable to illegitimate takedown demands. If there's one thing the Trump administration has accomplished it has been to make people much more politically aware, and we've already seen instances of people using the DMCA's notice and takedown system to try to suppress speech they don't like. To be fair, we've seen people of all political persuasions do this, but the concern is heightened when those who already have power use it to suppress the views of those who do not. (Note also: it is not clear that a Clinton Solicitor General would have written any more solicitous a brief in support of Lenz, or that a justice other than Gorsuch would have changed the cert vote. Plenty of Democratic appointees have been disappointing on the copyright front. However, it is a policy result that is directly due to the new administration.)

More interesting, however, is the impact on future copyright policy (and, indeed, lots of other tech policy) caused by the political toll the Trump administration has been taking on the GOP. The impending retirements of Reps. Goodlatte and Issa, for instance, will remove the tempering influence they have sometimes had on some of the worst copyright policy pushes.

On the speech front, however, it looks like all the worry about the Trump administration last year has been borne out. From Trump's frequent and overt diminishment of a free and independent media, to his constant legal threats to sue critics, to his administration's outright abuse of power to try to unmask them – and more – the Trump presidency has become an extremely cogent example of why it is so critically important to protect the right of free speech from government incursion.

Mass surveillance/encryption – This issue is always a mess, but now it's a mess in new ways that have realigned some of the political leanings, which may create opportunity but also creates new reasons for concern.

In litigation challenging digital surveillance the details of the surveillance obviously matter: what government authority is trying to do what, to whom, and under what statutory authority all affects the judicial inquiry. But in some ways none of these details matter: the essential question underpinning all these cases is what a state actor can constitutionally do to invade the privacy of its people. Whether the state actor is wearing an FBI hat, a CIA hat, an NSA one, a local police one, or some other official hat doesn't really matter to the person whose private dealings are now exposed to government review. But President Trump's unpopularity, petulance, and track record of threatened, if not actual, attacks on his political enemies should make it easy to see the problem with giving the government too much surveillance power, since it means giving someone like him that much surveillance power. His attempts to increasingly politicize our various investigatory agencies further drive home this point, because the more government surveillance is politicized, the more people with opposing viewpoints will be hurt by those with the political power to wield this surveillance power against them.

On the other hand, there are serious allegations of wrongdoing by Trump, his family, and his associates, including allegations that raise serious national security concerns, and it is only because of the work of many of these investigatory agencies that these allegations stand any chance of being uncovered and appropriately prosecuted. And as a result, many who should be fearing the power of these investigatory agencies, simply because as state actors their behavior always needs to be subject to check, are now suddenly feeling quite cheerful about enabling these agencies and enhancing their power, even where the Constitution should forbid it.

Figuring out how to empower police in a way that protects our democracy without undermining the civil liberties that also protect our democracy requires a careful, nuanced conversation. Yet it's not one that we are having or seem likely to have under this administration. But if these agencies do become politicized as a result of Trump's presidency, it may then be too late to have it.

Net neutrality/intermediary immunity – Things are bad on both these fronts, although the impact of the Trump administration is different on each.

With regard to the former topic, the elevation of Chairman Pai by Trump opened the door to the most direct and obvious incursion on Net Neutrality protection. There's no point in dwelling on it here; read any of the many other posts here to see why. While it is possible that any Republican president would have made a similar appointment, a Democratic president would likely have made an appointment resulting in a different balance of power among the FCC commissioners. But there is also something rather Trumpian about Pai's move, the choice to govern by brute force rather than consensus, and it is possible that a more politically-attuned Republican administration would have encouraged its appointee to use a lighter hand in setting policy, particularly in light of the significant opposition to this particular move, including from both sides of the aisle.

On the intermediary immunity front Section 230 is under heavy attack. Fortunately the Trump administration itself does not seem to be directly stoking the legislative fires; some of the most significant attacks on Section 230 have largely been instigated by Democrats (although with some bipartisan support). In general the Democrats appear to be a party whose political fortunes are on the rise due to Trump's unpopularity and the resulting GOP incumbent retirements, including (as discussed above) those of some members who have historically been helpful on the tech policy front. Although there are some outstanding Democrats on these sorts of issues (e.g., Wyden, Lofgren), tech policy has not often followed standard red-blue party lines, and a legislative switch back to blue will not necessarily lead to better policy on these issues.

Especially not when the Trump administration has in many ways been inspiring the attacks on platforms. It has become easy, for instance, for people to fault social media for his rise and for some of the worst things about his presidency (e.g., provoking North Korea on Twitter). As with mass surveillance, Trump's unpopularity is tempting many to see as palatable any policy they think might temper him. Unfortunately, as with mass surveillance, the belief that a policy might have this tempering quality is often wrong. For the same reason that Trump is Exhibit A for why we should not do anything to enhance government surveillance power, he is also Exhibit A for why we should not do anything to undermine free speech, including online free speech, which these legislative attacks on platforms only invite.

Internet governance – Trump has been a disaster on the foreign policy front, measurably lowering the esteem of America in the eyes of the world. True, as discussed last year, by abandoning the TPP he spared us the harm the TPP would have imposed on important liberty interests, but in nearly every other way he has undermined those same interests by making it more tempting and politically easier for other countries to try to set policy that will affect how everyone, including Americans, gets to use the Internet.

What I wrote last time remains apt:

Unfortunately Trump's presidency appears to have precipitated a loss of credibility on the world stage, creating a situation where it seems unlikely that other countries will be as inclined to yield to American leadership on any further issues affecting tech policy (or any policy in general) as they may have been in the past. … It was already challenging enough to convince other countries that they should do things our way, particularly with respect to free speech principles and the like, but at least when we used to tell the world, "Do it our way, because this is how we've safely preserved our democracy for 200 years," people elsewhere (however reluctantly) used to listen. But now people around the world are starting to have some serious doubts about our commitment to [...] freedom and connectivity for all.

But last year I noted that his administration also created opportunity to push for those values, and that view still holds today:

So we will need to tweak our message to one that has more traction. Our message to the world now is that recent events have made it all the more important to actively preserve those key American values, particularly with respect to free speech, because it is all that stands between freedom and disaster. Now is no time to start shackling technology, or the speech it enables, with external controls imposed by other nations to limit it. Not only can the potential benevolence of these attempts not be presumed, but we are now facing a situation where it is all the more important to ensure that we have the tools to enable dissenting viewpoints to foment [into] viable political movements sufficient to counter the threat posed by the powerful. This pushback cannot happen if other governments insist on hobbling the Internet's essential ability to broker these connections and ideas. It needs to remain free in order for all of us to [remain free] as well.


Posted on Techdirt - 12 December 2017 @ 3:40pm

It Was Twenty(-odd) Years Ago Today When The Internet Looked Much Different Than It Does Now

from the time-machine dept

Last week, Mike and I were at a conference celebrating the 20th anniversary of the Supreme Court decision in Reno v. ACLU, a seminal case that declared that the First Amendment applied online. What makes the case so worth a conference celebrating it is not just what it meant as a legal matter – it was a significant step forward in First Amendment jurisprudence – but also what it meant as a practical matter. This decision was hugely important in allowing the internet to develop into what it is today, and that evolution may not be something we adequately appreciate. It's easy to pretend the internet we know today was always a ubiquitous presence, but that wasn't so, and it certainly wasn't so back then. Indeed, it's quite striking just how much has changed in just two decades.

So this seemed like a good occasion to look back at how things were then. The attached paper is a re-publication of the honors thesis I wrote in 1996 as a senior at the University of California at Berkeley. As the title indicates, it was designed to study internet adoption among my fellow students, who had not yet all started using it. Even those who had were largely dependent on the University to provide them their access, and that access had only recently started to be offered on any significant campus-wide basis. And not all of the people who had started using the internet found it to be something their lives necessarily needed. (For instance, when asked if they would continue to use the internet after the University no longer provided their access, a notable number of people said no.) This study tried to look at the influences and reasons upon which the decision to use, or not use, the internet pivoted.

I do of course have some pause, now a few decades further into my career, calling attention to work I did as a stressed-out undergraduate. However, I still decided to dig it up and publish it, because there aren't many snapshots documenting internet usage from that time. And that's a problem, because it's important to understand how the internet transitioned from being an esoteric technology used only by some into a much more pervasive one seemingly used by nearly everyone, and why that change happened, especially if we want to understand how it will continue to change, and how we might want to shape that change. All too often it seems tech policy is made with too little serious consideration of the sociology behind how people use the internet – the human decisions internet usage represents – and it really needs to be part of the conversation more. Hopefully studies like this one can help with that.


Posted on Techdirt - 15 November 2017 @ 10:43am

Ninth Circuit Lets Us See Its Glassdoor Ruling, And It's Terrible

from the making-secret-jurisprudence-public-precedent dept

Well, I was wrong: last week I lamented that we might never know how the Ninth Circuit ruled on Glassdoor's attempt to quash a federal grand jury subpoena served upon it demanding it identify users. Turns out, now we do know: two days after the post ran the court publicly released its decision refusing to quash the subpoena. It's a decision that doubles down on everything wrong with the original district court decision that also refused to quash it, only now with handy-dandy Ninth Circuit precedential weight.

Like the original ruling, it clings to the Supreme Court's decision in Branzburg v. Hayes, a case where the Supreme Court explored the ability of anyone to resist a grand jury subpoena. But in doing so it manages to ignore other, more recent, Supreme Court precedents that should have led to the opposite result.

Here is the fundamental problem with both the district court and Ninth Circuit decisions: anonymous speakers have the right to speak anonymously. (See, e.g., the post-Branzburg Supreme Court decision McIntyre v. Ohio Elections Commission). Speech rights also carry forth onto the Internet. (See, e.g., another post-Branzburg Supreme Court decision, Reno v. ACLU). But if the platforms hosting that speech can always be forced to unmask their users via grand jury subpoena, then there is no way for that right to ever meaningfully exist in the context of online speech.

Yet neither of these more recent Supreme Court decisions seems to have had any impact on either the district court or Ninth Circuit's thinking. Instead both courts seem to feel their hands are tied, that in the 1970s the Supreme Court set forth, once and for all, the rule that no one can ever resist federal grand jury subpoenas, except in very limited circumstances, and that this ruling was the final word on their enforceability, no matter what the context. But as I wrote in the previous post, what the Supreme Court said in Branzburg about the enforceability of grand jury subpoenas only related to those that arose from a specific context, journalists shielding sources, and the only question before the court then was whether journalists, as journalists, had the ability to refuse them. The Supreme Court never considered whether there might be any other set of circumstances where grand jury subpoenas could be resisted. In Branzburg the Supreme Court had only considered the question with respect to journalists.

In fact, to make Branzburg apply to Glassdoor, the Ninth Circuit had to try to squeeze Internet intermediaries like Glassdoor into the shoes of reporters and make them seem like one and the same, even when they are not:

Although Glassdoor is not in the news business, as part of its business model it does gather and publish information from sources it has agreed not to identify. It argues that “[a]nonymity is an essential feature of the Glassdoor community,” and that “if employees cannot speak anonymously, they often will not speak at all,” which will reduce the availability of “information about what it is like to work at a particular job and how workers are paid.” In other words, forcing Glassdoor to comply with the grand jury’s subpoena duces tecum will chill First Amendment-protected activity. This is fundamentally the same argument the Supreme Court rejected in Branzburg.

With all due respect to the Ninth Circuit panel, this is not fundamentally the same argument the Supreme Court rejected in Branzburg. As I wrote last week, to view the role of an intermediary platform as the same thing as that of an intermediary journalist is to fundamentally misunderstand the role of the intermediary platform in intermediating information. It also fundamentally misunderstands the First Amendment interests at stake. This case isn't about the press-related First Amendment rights at issue in Branzburg; it is about the speech-related First Amendment rights of online speakers. And it's not the platform's own First Amendment interests that Glassdoor is primarily trying to vindicate; it is the interests of the platform's users. Yet here, too, the Ninth Circuit panel misunderstands those interests when it dismisses out of hand the idea that those users might have any right not to be unmasked:

Furthermore, Branzburg makes it clear that Glassdoor’s users do not have a First Amendment right not to testify before the investigating grand jury about the comments they initially made under the cloak of anticipated anonymity. See id. at 695 (“[I]f the authorities independently identify the informant, neither his own reluctance to testify nor the objection of the newsman would shield him from grand jury inquiry . . . .”). Therefore, Glassdoor cannot refuse to turn over its users’ identifying information on the grounds that it is protecting its users’ underlying rights.

"Anticipated anonymity" is a pretty grotesque way of describing a constitutional right people expected to be protected by when they chose to speak online. And it suggests a misreading of Branzburg, which never considered speech interests that were truly analogous to those of Internet platform users. Even if there's no First Amendment right to speak anonymously with a reporter it does not follow that there is no First Amendment right to speak anonymously online at all.

But that's the upshot of this decision: people who wish to speak anonymously online, in any capacity, won't be able to. They will forever be vulnerable to being unmasked by any federal criminal investigation, just so long as the investigation is not being done in bad faith. Nothing else can provide any sort of check on these unmasking demands, regardless of any other interest in play – including those of innocent speakers simply trying to avail themselves of their First Amendment right to speak anonymously, and all those who benefit from that speech.

This is a pretty stark result, and one that stands to affect Internet speakers everywhere. Not only does it threaten speakers anywhere a grand jury within the Ninth Circuit can reach, but it will serve as persuasive authority governing the enforceability of subpoenas from grand juries in other circuits. It's also one that stands to have this dramatic effect after having been whipped up in secret, with a hidden docket and an adamant refusal to accept amicus support. (Although two amici are listed in the caption, it does not appear that either brief was ultimately accepted by the court, much less actually read and considered.) As with anyone who insists on going it alone, without the help of friends, the results of this obstinate independence have been predictably disastrous. Friends don't let friends inadvertently undermine the First Amendment, and I wish the court had let those of us able to help it see the full implications of this ruling be that friend.


Posted on Free Speech - 14 November 2017 @ 12:01pm

California Appeals Court Issues A Ruling That Manages To Both Protect And Undermine Online Speech

from the good-news-bad-news dept

Earlier this year I wrote about Yelp's appeal in Montagna v. Nunis. This was a case where a plaintiff had subpoenaed Yelp to unmask one of its users and Yelp tried to resist the subpoena. In that case, not only had the lower court refused to quash the subpoena, but it sanctioned Yelp for having tried to quash it. Per the court, Yelp had no right to try to assert the First Amendment rights of its users as a basis for resisting a subpoena. As we said in the amicus brief I filed for the Copia Institute in Yelp's appeal of the ruling, if the lower court were right it would be bad news for anonymous speakers, because if platforms could not resist these subpoenas then users would lose an important line of defense against unfounded subpoenas seeking to unmask them for no legitimate reason.

Fortunately, a California appeals court just agreed it would be problematic if platforms could not push back against these subpoenas. Not only has this decision avoided creating inconsistent law in California (earlier this year a different California appeals court had reached a similar conclusion), but now there is even more language on the books affirming that platforms are able to try to stand up for their users' First Amendment rights, including their right to speak anonymously. As we noted, platforms can't always push back against these discovery demands, but it is often in their interests to try to protect the user communities that provide the content that makes their platforms valuable. If they never could, it would seriously undermine those user communities and all the content these platforms enable.

The other bit of good news from the decision is that the appeals court overturned the sanction award against Yelp. It would have significantly chilled platforms if they had to think twice before standing up for their users because of how much it could cost them financially for trying to do so.

But any celebration of this decision needs to be tempered by the fact that the appeals court also decided to uphold the subpoena in question. While it didn't fault Yelp for having tried to defend its users, and, importantly, found that Yelp had the legal ability to do so, it gave short shrift to that defense.

The test that California uses to decide whether to uphold or quash a subpoena comes from a case called Krinsky, which asks whether the plaintiff has made a "prima facie" case. In other words, we don't know if the plaintiff necessarily would win, but we want to ensure that it's at least possible for plaintiffs to prevail on their claims before we strip speakers of their anonymity for no good reason. That's all well and good, but thanks to the appeals court's extraordinarily generous reading of the statements at issue in this case, one that went out of its way to infer the possibility of falsity in what were in essence statements of opinion (which are ordinarily protected by the First Amendment), the appeals court decided that the test had been satisfied.

This outcome is unfortunate not only for the user whose identity will now be revealed to the plaintiff but for all future speakers, now that there is an appellate decision on the books running through the "prima facie" balancing test in a way that so casually dismisses the protections speech normally has. It at least would have been better if the question of whether the subpoena should be quashed had been remanded to the lower court, where, even if that court still reached a decision that too easily punctured the First Amendment protection for online speech, it would have posed less of a risk to other speech in the future.


Posted on Techdirt - 10 November 2017 @ 12:16pm

Celebrate The 20th Anniversary Of A Seminal Section 230 Case Upholding It With This Series Of Essays

from the Internet-enabling-cases dept

We have been talking a lot lately about how important Section 230 is for enabling innovation and fostering online speech, and, especially as Congress now flirts with erasing its benefits, how fortuitous it was that Congress ever put it on the books in the first place.

But passing the law was only the first step: for it to have meaningful benefit, courts needed to interpret it in a way that allowed it to have its protective effect on Internet platforms. Zeran v. America Online was one of the first cases to test the bounds of Section 230's protection, and the first to find that protection robust. Had the court decided otherwise, we likely would not have seen the benefits the statute has since afforded.

This Sunday the decision in Zeran turns 20 years old, and to mark the occasion Eric Goldman and Jeff Kosseff have gathered together more than 20 essays from Internet lawyers and scholars reflecting on the case, the statute, and all of its effects. I have an essay there, "The First Hard Case: ‘Zeran v. AOL’ and What It Can Teach Us About Today’s Hard Cases," as do many other advocates, including lawyers involved with the original case. Even people who are not fans of Section 230 and its legacy are represented. All of these pieces are worth reading and considering, especially by anyone interested in setting policy around these issues.


Posted on Techdirt - 7 November 2017 @ 9:33am

How The Internet Association's Support For SESTA Just Hurt Facebook And Its Users

from the with-friends-like-these dept

The Internet Association's support for SESTA is truly bizarre. Should its support cause the bill to pass, it will be damaging to every one of its members. Perhaps some members feel otherwise, but it is hopelessly naïve for any of them to believe that they will have the resources to stave off all the potential liability, including criminal liability, SESTA invites to their companies generally and to their management teams specifically, or that they will be able to deploy these resources in a way that won't destroy their user communities by over-censoring the creativity and expression they are in the business of providing forums for.

But that's only part of the problem, because what no one seems to be remembering is that Section 230 does not just protect the Internet Association's platform members (and their management teams) from crippling liability; it also protects its platform members' users, and if SESTA passes that protection will be gone.

Naturally, Section 230 does not insulate users from liability for the things they themselves use the platforms to communicate. It never has. That's part of the essential futility of SESTA: it is trying to solve a problem that was never actually a problem. People who publish legally wrongful content have always been subject to liability, even federal criminal liability, and SESTA does not change that.

But what everyone seems to forget is that on certain platforms users are not just users; in their use of these systems, they actually become platforms themselves. Facebook users are a prime example of this dynamic, because when users post status updates that are open for commenting, they become intermediary platforms for all those comments. Just as Facebook provides the space for third-party content in the form of status updates, users who post updates are now providing the space for third parties to provide content in the form of comments. And just as Section 230 protects platforms like Facebook from liability in how people use the space it provides, it equally protects its users for the space that they provide. Without Section 230 they would all be equally unprotected.

True, in theory, SESTA doesn't get rid of Section 230 altogether. It supposedly only introduces the risk of certain types of liability for any company or person dependent on its statutory protection. But as I've noted, the hole SESTA pokes through Section 230's general protection against liability is enormous. Whether SESTA's supporters want to recognize it or not, it so substantially undermines Section 230's essential protective function as to make the statute a virtual nullity.

And it eviscerates it for everyone, corporate platforms and individual people alike – including those very same individual people whose discussion-hosting activity is what has made platforms like Facebook so popular. Every single platform will be harmed by SESTA, whether it is a current member of the Internet Association, an unaffiliated or smaller platform, or a platform that has yet to be invented. But the particular character of Facebook, as a platform hosting the platforms of individual users, means it will be hit extra hard. It becomes substantially more difficult to maintain these sorts of dynamic user communities once a key law enabling them is taken away, because in its absence it becomes significantly riskier for any individual user to continue hosting conversation on the material they post. Whether that material is political commentary, silly memes, vacation pictures, or anything else people enjoy sharing, without Section 230's critical protection insulating users from liability for whatever other people happen to say about it, there are no comments these users will be able to confidently allow on their posts without fear of an unexpectedly harsh consequence should they let the wrong ones remain.


Posted on Techdirt - 6 November 2017 @ 9:37am

The Case Of Glassdoor And The Grand Jury Subpoena, And How Courts Are Messing With Online Speech In Secret

from the it-ain't-so-grand dept

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. That candor is a crucial check against abuse: without it, unmasking demands can too easily damage the key constitutional right to speak anonymously.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn't matter which one is used to unmask an anonymous user. The issue raised by all of them is that if their power to unmask an anonymous user is too unfettered, then it will chill all sorts of legitimate speech. And, as noted in the last post, the ability for a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons why we equip grand juries with strong investigatory power: if charges are to be brought against people, due process requires that they be brought by a grand jury rather than through a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It's a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights.

Glassdoor is a platform focused on hosting user-provided information about employers. Much of the speech it hosts is necessarily contributed anonymously so that the speakers can avoid any fallout from their candor – the sort of fallout that, if they had to incur it, would discourage them from contributing information others might find valuable. The seriousness of these consequences is why the district court decision denying Glassdoor's attempt to resist the grand jury subpoena seeking to unmask its users reflects such a poor balancing of the relevant interests. Perhaps if the subpoena had been intended to unmask people the government believed were themselves guilty of the crime being investigated, the balance might have tipped more in favor of enforcing it. But the people the subpoena sought to unmask were merely suspected of possibly knowing something about a crime that others were apparently committing. It is not unreasonable for the government to want to talk to witnesses, but that desire is not the only interest present here. These are people who were simply availing themselves of their right to speak anonymously, and who, if this subpoena is enforced, are going to be shocked to suddenly find the government on their doorstep wanting to talk to them.

This sort of unmasking chills them and anyone else who might want to speak anonymously, because it means their anonymity can never be counted on should their speech ever relate, however tangentially, to someone else's criminal behavior. It is also inconsistent with the purported goal of fighting crime, because it will prevent criminal behavior from coming to light in the first place: few will want to offer up information if doing so will only invite trouble for them at some point in the future.

This mis-balancing of interests is almost a peripheral issue in this case, however. The more significant structural concern is why such a weak balancing test was used. As discussed previously, in order to protect the ability to speak anonymously online, it is important for a platform to be able to resist demands to unmask its users in cases where the reason for the unmasking does not substantially outweigh the need to protect people's right to speak anonymously. But the district court denied Glassdoor's attempt to resist the subpoena when it chose to apply the test from Branzburg v. Hayes, a Supreme Court case from the 1970s that was solely focused on whether the First Amendment gave journalists the right to resist a grand jury subpoena; it has nothing to do with the Internet or Internet platforms. Ultimately Branzburg decided that journalists generally had no such right, at least so long as the government was not shown to be acting in bad faith – a standard which, while not nothing, is not particularly protective of anonymity. The decision also barely addressed the interests of the confidential sources themselves, dismissing their interest in maintaining anonymity as a mere "preference," and one the Court presumed was being sought only to shield themselves from prosecution for their own criminal culpability.

The upshot of Branzburg is that the journalist, as an intermediary for a source's information, had no right to resist a grand jury subpoena. Unfortunately, Branzburg simply can't be extended to the online world, where, for better or worse, essentially all speech must be intermediated by some sort of platform or service in order to happen. The need to let platforms resist grand jury subpoenas therefore has less to do with whether an intermediary itself has a right to resist them and everything to do with the right of their users to speak anonymously – which, far from being a mere preference, is an affirmative right the Supreme Court subsequently recognized after Branzburg.

A better test, one that respects the need to maintain this critical speech right, is therefore needed, which is why Glassdoor appealed the district court's ruling. Unfortunately, its appeal has raised a third issue: while there is often a lot of secrecy surrounding a grand jury investigation, in part because it makes sense to keep the subject of an investigation in the dark, preserving that level of secrecy does not necessarily require keeping absolutely everything related to the subpoena under seal. Fortunately the district court (and the DOJ, which agreed to this) recognized that some information could safely be released, particularly information related to Glassdoor's challenge of the subpoena's enforcement generally, and thanks to that limited unsealing we can tell that the case involved a misapplication of Branzburg to an Internet platform.

Unfortunately the Ninth Circuit didn't agree to this limited disclosure and sealed the entirety of Glassdoor's appeal, even the parts that had already been made public. One effect of this sealing was that it became impossible for potential amici to weigh in in support of Glassdoor and to argue for a better rule that would allow platforms to better protect the speech rights of their users. While Glassdoor had been ably litigating the case, the point of amicus briefs is to help the court see the full implications of a particular ruling on interests beyond those immediately before it, which is a hard thing for the party directly litigating to do itself. The reality is that Glassdoor is not the first, and will not be the last, platform to get a grand jury subpoena, but unless the rules governing platforms' ability to resist are stronger than what Branzburg affords, the privacy protection speakers have depended on will continue to evaporate should their speech ever happen to capture the interest of a federal prosecutor with access to a grand jury.

For all we know, of course, the Ninth Circuit might have seen Glassdoor's point and quashed the subpoena. Or maybe it upheld it, and maybe the FBI has now unpleasantly surprised those Glassdoor users. We may never know, just as we may never know if there are other occasions where courts have used specious reasoning to allow grand jury subpoenas to strip speakers of their anonymity. Even if the Ninth Circuit did fix the problems with this questionable attempt at unmasking, by doing so in secret it missed an important opportunity to provide guidance to lower courts to help ensure that similar questionable attempts don't keep happening to speakers in the future.


Posted on Techdirt - 3 November 2017 @ 1:32pm

Some Thoughts On Gag Rules And Government Unmasking Demands

from the dissent-dies-in-the-dark dept

The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there's no place in a free society for a government to come after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government's simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to know how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and by shining a light on the government's abusive behavior it was able to be stopped.

That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels the same platforms to keep those demands a secret. These secrecy demands are often called different things and are born from separate statutory mechanisms, but they all boil down to some form of gag on the platform's ability to speak, with the same troubling implications.

We've talked before about how important it is that platforms be able to protect their users' right to speak anonymously. That right is part and parcel of the First Amendment, because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would suffer in the absence of their contributions. But it's one thing to say that people have the right to speak anonymously; it's another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked, then the right to speak anonymously will be only illusory. For it to be something speakers can depend on to enable them to speak freely, there have to be effective barriers preventing that anonymity from being too casually stripped by unjust demands.

One key way to prevent illegitimate unmasking demands is to fight back against them. But no one can fight back against what they are unaware of. Platforms are thus increasingly pushing back against the gags preventing them from disclosing that they have received discovery demands as a way to protect their communities of users.

Each type of demand varies in its particulars (for instance a civil subpoena is different from a grand jury subpoena, which is different from an NSL, which is different from the 19 USC Section 1509 summons that was used against Twitter in the quest to discover the Trump critic), as does the rationale for why the demanding party might seek to preserve secrecy around the demand with some sort of gag. But all of these unmasking demands ultimately challenge the durability of an online speaker's right to remain anonymous. That is why rulings that preserve, or, worse, strengthen gag rules are so troubling: they make it all the more difficult, if not outright impossible, to protect legitimate speech from illegitimate unmasking demands.

And that matters. Returning to the example about the fishing expedition to unmask a critic, while it's great that in this particular case the government quickly dropped its demand on Twitter, questions remain. Was Twitter the only platform the government went after? Perhaps, but how would we know? How would we know if this was the only speech it had chosen to investigate, or the 1509 summons the only unmasking instrument it had used to try to identify the speaker? If the other platforms it demanded information from were, quite reasonably, cowed by an accompanying demand for secrecy (the sanctions for violating such an order can be serious), we might never know the answers to these questions. The government could be continuing its attacks on its apparently no-longer-anonymous critics unabated, and speakers who depended on anonymity would unknowingly be putting themselves at risk when they continued to speak.

This state of affairs is an affront to the First Amendment. The First Amendment was intended in large part to enable people to speak truth to power, but when we make it too hard for platforms to be partners in protecting that right it entrenches that power. There are a lot of ways that platforms should have the ability to be that partner, but one of them must be the basic ability to tell us when that right is under threat.


Posted on Techdirt - 27 October 2017 @ 1:40pm

Trump Campaign Tries To Defend Itself With Section 230, Manages To Potentially Make Things Worse For Itself

from the just-one-more-wafer-thin-defense dept

It isn't unusual or unwarranted for Section 230 to show up as a defense in situations where some might not expect it; its basic principles apply to more situations than may be readily apparent. But its appearance as a defense in the Cockrum v. Campaign for Donald Trump case is pretty unexpected. On page 37 of the campaign's motion to dismiss the case against it, the campaign slipped in the following two paragraphs on the subject:

Plaintiffs likewise cannot establish vicarious liability by alleging that the Campaign conspired with WikiLeaks. Under section 230 of the Communications Decency Act (47 U.S.C. § 230), a website that provides a forum where “third parties can post information” is not liable for the third party’s posted information. Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014). That is so even when the website performs “editorial functions” “such as deciding whether to publish.” Id. at 1359. Since WikiLeaks provided a forum for a third party (the unnamed “Russian actors”) to publish content developed by that third party (the hacked emails), it cannot be held liable for the publication.

That defeats the conspiracy claim. A conspiracy is an agreement to commit “an unlawful act.” Paul v. Howard University, 754 A.2d 297, 310 (D.C. 2000). Since WikiLeaks’ posting of emails was not an unlawful act, an alleged agreement that it should publish those emails could not have been a conspiracy.

This is the case brought against the campaign for allegedly colluding with Wikileaks and the Russians to disclose the plaintiffs’ private information as part of the DNC email trove that ended up on Wikileaks. Like Eric Goldman, who has an excellent post on the subject, I'm not going to go into the relative merits of the lawsuit itself, but I would note that it is worth consideration. Even if it's true that the Trump campaign and Wikileaks were somehow in cahoots to hack the DNC and publish the data taken from it, whether and how the consequences of that disclosure can be recognized by law is a serious issue, as is whether this particular lawsuit by these particular plaintiffs with these particular claims is one that the law can permit to go forward without causing collateral effects to other expressive endeavors, including whistleblower journalism generally. On these points there may or may not be issues with the campaign's motion to dismiss overall. But the shoehorning of a Section 230 argument into its defensive strategy seems sufficiently weird and counterproductive to be worth commenting on in and of itself.

For one thing, it's not a defense that belongs to the campaign. It's a defense that belongs to a platform, if it belongs to anyone, and the campaign was not a platform. Meanwhile the question of whether Wikileaks is a platform able to claim a Section 230 defense with regard to the content at issue is not entirely clear; like most legal questions, the answer is, "It depends," and it can depend on the particular relationship the site had with the hosting of any particular content. True, to the extent that Wikileaks is just a site hosting material others have provided, the answer is more likely to be yes – although even then there is an important caveat: as Eric pointed out, Section 230 doesn't magically make content "legal." It is simply an immunity from liability for certain types of claims – and not even all claims. There's no limitation, for instance, on liability for claims asserting violations of another's intellectual property, nor any limit on liability for claims arising from violations of federal criminal law. While the Cockrum plaintiffs are bringing tort claims, which are the sorts of claims Section 230 generally insulates platforms from, Section 230 would do nothing to shield the exact same platform from a federal prosecution arising from its hosting of the exact same information.

But the bigger issue is whether Wikileaks is merely a platform hosting information others have provided, particularly with respect to the DNC emails. If it had too much agency in the creation of the information that ended up hosted on it, it might not be a Section 230-immune "interactive computer service provider" and instead might be found to be a potentially liable "information content provider." The Trump campaign is correct that a platform can exert quite a bit of editorial discretion over the information that appears on it without being considered an information content provider, but at a certain point courts become unwilling to regard the platform's interaction as editorial and instead find it to be authorial. There are reasons to champion drawing the line on what counts as editorial expansively, but it is naïve to pretend that courts will deem all interaction between a platform and the content appearing on it to be editorial. There is simply far too much caselaw to the contrary.

In fact, a great deal of the caselaw suggests that courts are particularly unwilling to simply assume a platform lacked creative agency in the content at issue when the optics surrounding the platform and that content are poor. As Eric has noted in previous posts, this reluctance is problematic: forcing a platform to go through discovery just to satisfy the court that there is no evidence of the platform's authorship of the content – authorship that would disqualify the platform from Section 230's protection – raises the costs of being a platform to the sort of crippling level that Section 230 is supposed to forestall. There is reason to worry that the optics surrounding this case may encourage courts to create unpleasant precedent that will make it harder for other platforms to raise Section 230 as a defense in order to quickly end expensive, Section 230-barred lawsuits against them in the future.

But it's the discovery issue that makes the campaign's raising of Section 230 as a defense seem so odd: on page 1 of the motion to dismiss the campaign complains that the lawsuit was brought as "a vehicle for discovery of documents and evidence," yet raising Section 230 as a defense only invites more of it. If any of the plaintiffs' claims were to go forward there would already be plenty of discovery demands to explore the relationship between the campaign and Wikileaks, which the campaign would appear not to want. The campaign's objective should therefore be nothing more than making the case go away as quickly and quietly as possible. But by gratuitously throwing in a Section 230 defense – one in which Wikileaks' authorship role is inherently in question and potentially contingent on its relationship with the campaign – the campaign has, rather than providing a basis for dismissal, given the court a reason to let the case continue to the discovery stage. It seems like a tactical error, and one that reflects a misunderstanding of the jurisprudence surrounding Section 230. It glibly presumes that Section 230 applies to any situation involving a platform hosting content, and that simply isn't correct. While we have encouraged it to be liberally applied to platform situations, it obviously is not always, and sometimes even for good reason.


Posted on Techdirt - 25 October 2017 @ 10:38am

Study On Craigslist Shutting 'Erotic Services' Shows SESTA May Hurt Those It Purports To Help

from the good-intentions-do-not-make-good-policy dept

The last two posts I wrote about SESTA discussed how, if it passes, it will result in collateral damage to the important speech interests Section 230 is intended to protect. This post discusses how it will also result in collateral damage to the important interests that SESTA itself is intended to protect: those of vulnerable sex workers.

Concerns about how SESTA would affect them are not new: several anti-trafficking advocacy groups and experts have already spoken out about how SESTA, far from ameliorating the risk of sexual exploitation, will only exacerbate it, in no small part because it disables one of the best tools for fighting it: the Internet platforms themselves:

[Using the vilified Backpage as an example, in as much as] Backpage acts as a channel for traffickers, it also acts as a point of connection between victims and law enforcement, family, good samaritans, and NGOs. Countless news reports and court documents bear out this connection. A quick perusal of news stories shows that last month, a mother found and recovered her daughter thanks to information in an ad on Backpage; a brother found his sister the same way; and a family alerted police to a missing girl on Backpage, leading to her recovery. As I have written elsewhere, NGOs routinely comb the website to find victims. Nicholas Kristof of the New York Times famously “pulled out [his] laptop, opened up Backpage and quickly found seminude advertisements for [a victim], who turned out to be in a hotel room with an armed pimp,” all from the victim’s family’s living room. He emailed the link to law enforcement, which staged a raid and recovered the victim.

And now there is yet more data confirming what these experts have been saying: when platforms have been available to host erotic services content, the risk of harm to sex workers has decreased.

The September 2017 study, authored by West Virginia University and Baylor University economics and information systems experts, analyzes rates of female homicides in various cities before and after Craigslist opened an erotic services section on its website. The authors found a shocking 17 percent decrease in homicides with female victims after Craigslist erotic services were introduced.

The reasons for these numbers aren't entirely clear, but there does seem to be a direct correlation between sex workers' safety and their ability, thanks to the availability of online platforms, to "move indoors."

Once sex workers move indoors, they are much safer for a number of reasons, Cunningham said. When you’re indoors, “you can screen your clients more efficiently. When you’re soliciting a client on the street, there is no real screening opportunity. The sex worker just has to make the split second decision. She relies on very limited and incomplete information about the client’s identity and purposes. Whereas when a sex worker solicits indoors through digital means, she has Google, she has a lot of correspondence, she can ask a lot of questions. It’s not perfect screening, but it’s better.”

The push for SESTA seems to be predicated on the unrealistic notion that all we need to do to end sex trafficking is end the ability of sex services to use online platforms. But evidence suggests that removing the "indoor" option the Internet affords doesn't actually end sex work; it simply moves it outdoors, where it is vastly less safe.

In 2014, Monroe was a trafficking victim in California. She found her clients by advertising on SFRedbook, the free online erotic services website. One day, she logged into the site and discovered that federal authorities had taken it down. Law enforcement hoped that closing the site would reduce trafficking, but it didn’t help Monroe. When she told her pimp SFRedbook was gone, he shrugged. Then he told her that she would just have to work outdoors from then on.

“When they closed down Redbook, they pushed me to the street,” Monroe told ThinkProgress. “We had a set limit we had to make a day, which was more people, cheaper dates, and if you didn’t bring that home, it was ugly.” Monroe, who asked that her last name be withheld for privacy reasons, had been working through Redbook in hotel rooms almost without incident, but working outdoors was much less safe.

“I got raped and robbed a couple of times,” she said. “You’re in people’s cars, which means nobody can hear you if you get robbed or beaten up.”

A recurrent theme here on Techdirt is that with any technology policy, no matter how well-intentioned, whether it is a good policy depends on its unintended consequences. Not only do we need to worry about how a policy affects other worthwhile interests, but we also need to consider how it affects the very interest it seeks to vindicate. And in this case SESTA stands to harm the very people it ostensibly seeks to help.

Does that mean Congress should do nothing to address sex trafficking? Of course not, and it is considering many other options that more directly address the serious harms that arise from it. Even Section 230 as it currently exists does not prevent the government from going after platforms that directly aid sex trafficking. But all too often regulators like to take shortcuts and target platforms simply because bad people may be using them in bad ways. It's a temptation that needs to be resisted for many reasons, not least because giving in to it may enable bad people to behave even worse.


Posted on Techdirt - 20 October 2017 @ 10:41am

A Joke Tweet Leads To 'Child Trafficking' Investigation, Providing More Evidence Of Why SESTA Would Be Abused

from the we-wish-we-were-kidding dept

Think we're unduly worried about how "trafficking" charges will get used to punish legitimate online speech? We're not.

A few weeks ago a Mississippi mom posted an obviously joking tweet offering to sell her three-year-old for $12.

I tweeted a funny conversation I had with him about using the potty, followed by an equally-as-funny offer to my followers: 3-year-old for sale. $12 or best offer.

The next thing she knew, Mississippi authorities decided to investigate her for child trafficking.

The saga began when a caseworker and supervisor from Child Protection Services dropped by my office with a Lafayette County sheriff’s deputy. You know, a typical Monday afternoon.

They told me an anonymous male tipster called Mississippi’s child abuse hotline days earlier to report me for attempting to sell my 3-year-old son, citing a history of mental illness that probably drove me to do it.

Beyond notifying me of the charges, they said I’d have to take my son out of school so they could see him and talk to him that day, presumably protocol to ensure children aren’t in immediate danger. So I went to his preschool, pulled my son out of a deep sleep during naptime, and did everything in my power not to cry in front of him on the drive back to my office.

All of this for a joke tweet.

This story is bad enough on its own. As it stands now, actions by the Mississippi authorities will chill other Mississippi parents from blowing off steam with facetious remarks on social media. But at least the chilling harm is contained within Mississippi's borders. If SESTA passes, that chill will spread throughout the country.

If SESTA were on the books, the Mississippi authorities would not have had to stop with the mom. Their next stop could be Twitter itself. No matter how unreasonable their suspicions, they could threaten Twitter with a criminal investigation for having facilitated this allegedly trafficking-related speech.

The unlimited legal exposure these potential prosecutions pose will force platforms to pre-emptively remove not just the speech of parents from Mississippi but any speech from any parent anywhere that might inflame the humorless judgment of overzealous Mississippi authorities – or authorities anywhere else where humor and judicious sense are similarly impaired. In fact, it won't even be limited to parents. Authorities anywhere could come after anyone who posted anything they decided to misinterpret as a credible threat.

These warnings might sound like hyperbole, but that's what hangs in the balance: hyperbole. The ability to say ridiculous things, because sometimes we need to say ridiculous things. If anything that gets said can be so willfully misconstrued as evidence of a crime, a lot of speech will be chilled – and chilled far beyond any single authority's jurisdictional boundaries, if platforms can be made to fear enabling any speech that might happen to set any of those authorities off.


Posted on Techdirt - 19 October 2017 @ 10:45am

Beyond ICE In Oakland: How SESTA Threatens To Chill Any Online Discussion About Immigration

from the trafficking-is-in-the-ICE-of-the-beholder dept

First, if you are someone who likes stepped-up ICE immigration enforcement and does not like "sanctuary cities," you might cheer the implications of this post, but it isn't otherwise directed at you. It is directed at the center of the political Venn diagram of people who both feel the opposite about these immigration policies and yet are also championing SESTA. Because this news from Oakland raises the specter of a horrific implication for online speech championing immigrant rights if SESTA passes: the criminal prosecution of the platforms that host that discussion.

Much of the discussion surrounding SESTA is based on some truly horrific tales of sex abuse – crimes that fall squarely within what the human trafficking statutes are clearly intended to address. But with news that ICE is taking a very broad reading of the type of behavior the human trafficking laws might cover, and prosecuting anyone who happens to help an immigrant, it's clear that the type of speech SESTA will carve out from Section 230's protection will go far beyond the situations the bill originally contemplated.

Some immigration rights activists are worried that ICE has recently re-defined the crime of human trafficking to include assistance, like housing and employment, that adults provide to juveniles who come to the United States without their parents. In many cases, the adults being investigated and charged are close relatives of the minors who are supposedly being trafficked.

Is ICE simply misreading the trafficking statutes? Perhaps, but it isn't necessarily a far-fetched reading. People in the EU who've merely given rides to Syrian (and other) refugees tired from trekking on foot have been prosecuted for trafficking. Yes, that's Europe, not the US, but it's an example of how well-intentioned trafficking laws can easily be over-applied to the point that they invite absurd results, including results that end up making immigrants even more vulnerable to traffickers than they would have been without the laws.

So what does that have to do with SESTA? SESTA is drafted with language that presumes that sex trafficking laws are clearly and unequivocally good in their results. What the Oakland example suggests is that this belief is a myth. Anti-immigrant forces within the government, both federal and state, can easily twist these laws against the very people they were ostensibly designed to protect.

And that means they are free to come after the platforms hosting any and all speech related to the assistance of immigrants, if any and all assistance can be considered trafficking. The scope of what they could target is enormous: tweets warning about plain-clothed ICE agents at courthouses, search engine results for articles indicating whether evacuation centers will be checking immigration status, online ads for DACA enrollment assistance, or even discussion about sanctuary cities and the protections they afford generally. If SESTA passes, platforms will either have to presumptively censor all such online speech or risk prosecution by any federal or state authority with different views on immigration policy. Far from being the minor carve-out of Section 230 that SESTA's supporters insist it is, it is instead an invitation to drive from the Internet an awful lot of important speech that these same supporters would want to ensure we can continue to have.


Posted on Free Speech - 16 October 2017 @ 9:33am

New York Considers Barring Agreements Barring Victims From Speaking

from the perhaps-there-oughta-be-a-law dept

In the wake of the news about Harvey Weinstein's apparently serial abuse of women, and the news that several of his victims were unable to tell anyone about it due to non-disclosure agreements, the New York legislature is considering a bill to prevent such NDAs from being enforceable in New York state. According to the Buzzfeed article, the bill as currently proposed still allows a settlement agreement to demand that the recipient not disclose how much they settled for, but it can't put the recipient in jeopardy of having to compensate their abuser if they choose to talk about what happened to them.

It's not the first time a state has imposed limits on the things people can contract for. California, for example, has a law that generally makes non-compete agreements invalid. Even Congress has now passed a law banning contracts that limit consumers' ability to complain about merchants. Although, as we learn in law school, there are some constitutional disputes about how unfettered the freedom to contract should be in the United States, there has also always been the notion that some contractual demands are inherently "void as against public policy." In other words, go ahead and write whatever contractual clause you want, but not all of them are going to be enforceable against the people you want to force to comply with them.

As with the federal Consumer Review Fairness Act mentioned above, the proposed New York bill recognizes that there is a harm to the public interest when people cannot speak freely. When bad things happen, people need to know about them if they are to protect themselves. And it definitely isn't consistent with the public interest if the people doing the bad things can stop others from knowing that they've been doing them. These NDAs have essentially had the effect of letting bad actors pay money for the ability to continue their bad acts, and this proposed law is intended to take away that power.

As with any law the devil will be in the details (for instance, this proposed bill appears to apply only to non-disclosure clauses in the employment context, not more broadly), and it isn't clear whether this one, as written, might cause some unintended consequences. For instance, there might theoretically be the concern that without a gag clause in a settlement agreement it might be harder for victims to reach agreements that would compensate them for their injury. But as long as victims of other people's bad acts can be silenced as a condition of being compensated for those bad acts, and that silence enables there to be yet more victims, there are already some unfortunate consequences for a law to try to address.


Posted on Techdirt - 18 August 2017 @ 11:55am

Because Of Course There Are Copyright Implications With Confederacy Monuments

from the copyright-makes-a-mess-of-everything dept

There's no issue of public interest that copyright law cannot make worse. So let me ruin your day by pointing out there's a copyright angle to the monument controversy: the Visual Artists Rights Act (VARA), a 1990 addition to the copyright statute that allows certain artists to control what happens to their art long after they've created it and no longer own it. Techdirt has written about it a few times, and it was thrust into the spotlight this year during the controversy over the Fearless Girl statue.

Now, VARA may not be specifically applicable to the current controversy. For instance, it's possible that at least some of the Confederacy monuments in question are too old to be subject to VARA's reach, or, if not, that all the i's were dotted on the paperwork necessary to avoid it. (It’s also possible that neither is the case — VARA may still apply, and artists behind some of the monuments might try to block their removal.) But it would be naïve to believe that we'll never ever have monument controversies again. The one thing VARA gets right is an acknowledgement of the power of public art to be reflective and provocative. But how things are reflective and provocative to a society can change over time as the society evolves. As we see now, figuring out how to handle these changes can be difficult, but at least people in the community can make the choice, hard though it may sometimes be, about what art they want in their midst. VARA, however, takes away that discretion by giving it to someone else who can trump it (so to speak).

Of course, as with any law, the details matter: what art was it, whose art was it, where was it, who paid for it, when was it created, who created it, and is whoever created it dead yet… all these questions matter in any situation dealing with the removal of a public art installation because they affect whether and how VARA actually applies. But to some extent the details don't matter. While in some respects VARA is currently relatively limited, we know from experience that limited monopolies in the copyright space rarely stay so limited. What matters is that we created a law expressly designed to undermine the ability of a community to decide whether it wants to continue to have particular art in its midst, and thought that was a good idea. Given the power of art to be a vehicle of expression, even political expression or outright propaganda, allowing any law to etch that expression in stone (as it were) is something we should really rethink.


Posted on Techdirt - 12 July 2017 @ 9:27am

Copyright Law And The Grenfell Fire - Why We Cannot Let Legal Standards Be Locked Up By Copyright

from the burning-down-the-house dept

It's always hard to write about the policy implications of tragedies – the last thing their victims need is the politicization of what they suffered. At the same time, it's important to learn what lessons we can from these events in order to avoid future ones. Earlier Mike wrote about the chilling effects on Grenfell residents' ability to express their concerns about the safety of the building – chilling effects that may have been deadly – because they lived in a jurisdiction that allowed critical speech to be easily threatened. The policy concern I want to focus on now is how copyright law also interferes with safety and accountability both in the US and elsewhere.

I'm thinking in particular about the litigation Carl Malamud has found himself faced with because he dared to post legally-enforceable standards on his website as a resource for people who wanted ready access to the law that governed them. (Disclosure: I helped file amicus briefs supporting his defense in this litigation.) A lot of the discussion about the litigation has focused on the need for people to know the details of the law that governs them: while ignorance of the law is no excuse, as a practical matter people need a way to actually know what the law is if they are going to be expected to comply with it. Locking it away in a few distant libraries or behind paywalls is not an effective way of disseminating that knowledge.

But there is another reason why the general public needs to have access to this knowledge. Not just because it governs them, but because others' compliance with it obviously affects them. Think for instance about the tenants in these buildings, or any buildings anywhere: how can they be equipped to know if the buildings they live in meet applicable safety standards if they never can see what those standards are? They instead are forced to trust that those with privileged access to that knowledge will have acted on it accordingly. But as the Grenfell tragedy has shown, that trust may be misplaced. "Trust, but verify," it has been famously said. But without access to the knowledge necessary to verify that everything has been done properly, no one can make sure that it has. That makes the people who depend on this compliance vulnerable. And as long as copyright law is what prevents them from knowing if there has been compliance, then it is copyright law that makes them so.

Of course, there are lots of standards at issue in the Public Resource cases, and not all of them would necessarily threaten mortal peril if they were not complied with. But the federal court's decision in these cases, if allowed to stand, means that all sorts of standards, including those bearing on public safety, can be kept from ready public view through a claim of copyright. As the resulting injunctions ordering Carl Malamud to delete accurate and operable law from his website make clear, no matter how accurate or operable the legal standard, no matter how critical compliance with the standard is to the health and safety of the public, people can be prevented from sharing the knowledge of what that standard contains.

And it not only prevents people in one jurisdiction from knowing what that standard is. It prevents people anywhere in the world from knowing. If an American jurisdiction has made innovations in public safety standards, no one else in the world can freely benefit from that knowledge in order to figure out whether their own local standards are sufficient. It's an absurd result – the purpose of copyright law is, after all, to develop and disseminate knowledge – and it's one that hurts people. It is not something we should be encouraging copyright law, or any law, to do.


Posted on Techdirt - 6 July 2017 @ 11:51am

Why Protecting The Free Press Requires Protecting Trump's Tweets

from the protecting-the-speech-you-disagree-with dept

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration... It definitely wasn't this past weekend, because waiting for me in my Twitter stream was Trump's tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that's not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

I don't write any of this to defend the tweet: it was odious and unpresidential, and it betrayed an animus towards the press that is terrifying to see in any government official – especially the Chief Executive of the United States of America. But inappropriate, disgraceful, and disturbing though it was, it was still just speech, and calls to suppress speech are always alarming, regardless of who is asking for it to be suppressed or why.

Some have tried to defend these calls by arguing that suppressing speech is ok when it is not the government doing the suppressing. But the reason official censorship is problematic is that it drives away the dissenting voices democracy depends on hearing. Which is not to say that all ideas are worth hearing or critical to self-government; the point is that protecting opposing voices in general is what allows the meritorious ones to be able to speak out against the powerful. There is no way to split the baby so that only some minority expression gets protected: either all of it must be, or none of it will be. If only some of it is, then the person who has the power to decide which will be protected and which will not has the power to decide badly.

Consider how Trump himself would use that power. Given, as we see in his tweet, how much he wants to marginalize voices that speak against him, we need to make sure this protection remains as strong as possible, even if it means that he, too, gets the benefit of it. There simply is no way to punish one man's speech, no matter how troubling it may be, without opening the door to better speech similarly being suppressed.

As a private platform, Twitter may of course choose to delete this or any other Trump tweet (or any tweet or Twitter account at all) for any reason. We've argued before that private platforms have the right to police their services however they choose. But we have also seen how, when speech is eliminated from a forum, the forum is often much poorer for it. Deciding to suppress speech is not something we should be too quick to encourage, or demand – not even when the speech is provocative and threatening, because so much important, valid, and necessary speech can so easily be labeled that way. As Justice Holmes noted, "Every idea is an incitement." In other words, it's easy to justify suppressing all sorts of speech, including valid and important speech, if any viewpoint aggressively at odds with another can be eliminated because of the challenge it presents. Courts have therefore found that speech, even speech promoting the use of force or lawlessness, may only be censored when "such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." Given that even a KKK rally was found not to meet this description, these requirements for likely imminence of harm are steep hurdles that Trump's tweet is unlikely to clear.

The truth may well be, as many fear, that Trump would actually like people to beat up journalists. It may also be true that he has some bad actors among his followers who are eager to do so. But even if people do assault journalists, it won't be because of this tweet. It will be because Trump, as president, supports the idea. He'll support it whether or not this tweet is deleted. After all, it's not as though deleting the tweet will make him change his view. And it's that view that's the real problem to focus on here.

Because Trump has far more powerful means at his disposal to act upon his antipathy towards the media than his Twitter account affords. In fact, better that he should tweet his drivel rather than act on this malevolence in a way that actually does do direct violence to our free press. Especially because, in an administration so lacking in transparency, his tweets at least help let us know that this animus lurks within. Armed with this knowledge we can now be better positioned to defend those critical interests his presidency so threatens. Painful though it is to see his awful tweets, ignorance on this point would in no way have been bliss.

