Cathy Gellis’s Techdirt Profile


About Cathy Gellis

Posted on Techdirt - 6 February 2019 @ 3:43pm

The 3rd Party Doctrine: Or Why Lawyers May Not Ethically Be Able To Use Whatsapp

from the metadata-matters dept

In December I went to install the Flywheel app on my new phone. Flywheel, for those unfamiliar, is a service that applies the app-dispatching and backend payment services typical of Uber and Lyft to the local medallion-based taxi business. I'd used it before on my old phone, but as I was installing it on my new one it asked for two specific permissions I didn't remember seeing before. The first was fine and unmemorable, but the second was a show-stopper: "Allow Flywheel access to your contacts?" Saying no made the app exit with passive-aggressive flourish ("You have forcefully denied some of the required permissions.") but I could not for the life of me figure out why I should say yes. Why on Earth would a taxi summoning app require access to my contacts? Tweets to the company were not answered, so it was impossible to know if Flywheel wanted that permission for some minor, reasonable purpose that in no way actually disclosed my contact data to this company, or if it was trying to slurp information about who I know for some other purpose. Its privacy policy, which on the surface seems both reasonable and readable, was last updated in 2013 and makes no reference to why it would now want access to my contacts.

So I didn't finish installing it, although to Flywheel's credit, a January update to the app seems to have re-architected it so that it no longer demands that permission. (On the other hand, the privacy policy appears to still be from 2013.) But the same cannot be said for other apps that insist on reading all my contacts, including, conspicuously, Whatsapp.

Whatsapp has been in the news a lot lately, particularly in light of Facebook's announcement that it planned to merge it with its Messenger service. But the problem described here is a problem even as the app stands on its own. True, unlike the old Flywheel app, Whatsapp can currently be installed without demanding to see the contact information stored on my phone. But it can't be used effectively. It can receive an inbound message from someone else who already knows my Whatsapp number, but it refuses to send an outbound message to a new contact unless I first let Whatsapp slurp up all my contacts. Whatsapp is candid in its privacy policy (last updated in 2016) that it collects this information (in fact it says you agree to "provide us the phone numbers in your mobile address book on a regular basis, including those of both the users of our Services and your other contacts."), which is good, but it never explains why it needs to, which is not good. Given that Signal, another encrypted communications app, does not require slurping up all contacts in order to run, it does not seem like something Whatsapp should need to do in order to provide its essential communications service. The only hint the privacy policy provides is that Whatsapp "may create a favorites list of your contacts for you" as part of its service, but it still isn't obvious why it would need to slurp up your entire address book, including non-Whatsapp user contact information, even for that.

The irony is that an app like Whatsapp should be exactly the sort of app that lawyers use. We are duty-bound to protect our clients' confidences, and encrypted communications are often necessary tools for maintaining a meaningful attorney-client relationship because they should allow us to protect the communications secrecy upon which the relationship depends. But that's exactly why I can't use it, didn't finish installing the old Flywheel app, and refuse to use any other app that insists on reading all my contacts for no good, disclosed, or proportionally-narrow reason: I am a lawyer, and I can't let this information out. Our responsibility to protect client confidences may very well extend to the actual identity of our clients. There are too many situations where if others can know who we are talking to it will be devastating to our clients' ability to seek the counsel to which they are Constitutionally entitled.

I wrote about this problem a few years ago in an amicus brief on behalf of the National Association of Criminal Defense Lawyers for the appeal of Smith v. Obama. This case brought a constitutional challenge to the US government's practice of collecting bulk metadata from Verizon Wireless without warrants and without the attendant requirements of probable cause and specificity. Unfortunately the constitutional challenge failed at the district court level, but not because the court couldn't see how it offended the Fourth Amendment when so much personal information could be so readily available to the government. Instead the district court dismissed the case because the court believed that it was hamstrung by the previous Supreme Court ruling in Smith v. Maryland. Smith v. Maryland is the 1979 case that gave us the third-party doctrine, the idea that if you've already disclosed certain information (such as who you were dialing) you can no longer have a reasonable expectation of privacy in this information that the Fourth Amendment should continue to protect (and thus require the government to get a warrant to access). Even in its time Smith v. Maryland was rather casual about the constitutionally-protected privacy interests at stake. But as applied to the metadata related to our digital communications, it eviscerates the personal privacy the Fourth Amendment exists to protect.

The reality is that metadata is revealing. And as I wrote in this amicus brief, the way it is revealing for lawyers not only violates the Fourth Amendment but the Sixth Amendment right to counsel relied upon by our clients. True, it is not always a secret who our clients are. But sometimes the entire representation hinges on keeping that information private.

Thus metadata matters because, even though it is not communications "content," it can nevertheless be deeply descriptive of the details of a life. And when it comes to lawyers' lives, it ends up being descriptive of their clients' lives as well. And that's a huge problem.

As the brief explained, lawyers get inquiries from uncharged people all the time. Perhaps they simply need advice on how to comport their behavior. Or perhaps they fear they may be charged with a crime and need to make the responsible choice to speak with counsel as early as possible to ensure they will have the best defense. The Sixth Amendment guarantees them the right to counsel, and this right has been found to be meaningful only when the client can feel assured of enough privacy in their communications to speak candidly with their counsel. Without that candor, the counsel cannot be as effective as the Constitution requires. But if the government can easily find out who lawyers have been talking to by accessing their metadata, then that needed privacy evaporates. Who a lawyer has been communicating with, especially a criminal defense lawyer, starts to look like a handy list of potential suspects for the government to go investigate.

And it's not just criminal defense counsel that is affected by metadata vulnerability. Consider the situation we've talked about many times before, where an anonymous speaker may need to try to quash some sort of discovery instrument (including those issued by the government) seeking to unmask them. We've discussed how important it is to have procedural protections so that an anonymous speaker can find a lawyer to fight the unmasking. Getting counsel of course means that there is going to be communication between the speaker and the lawyer. And even though the contents of those communications may remain private, the metadata related to the communications may not be. Thus even though the representation may be all about protecting a person's identity, there may be no way to accomplish it if it turns out there's no way for the lawyer to protect that metadata evincing this attorney-client relationship from either the government helping itself to it, or from greedy software slurping it up – which will make the app maker yet another third party that the government can look to demand this information from.

Unfortunately there is no easy answer to this problem. First, just as it's not really possible for lawyers to avoid using the phone, it is simply not viable for lawyers to avoid using digital technology. Indeed, much of it actually makes our work more productive and cost effective, which is ultimately good for clients. And especially given how unprotected our call records are, it may even be particularly important to use digital technology as an alternative to standard telephony. To some extent lawyers can refuse to use certain apps or services that don't seem to handle data responsibly (I installed Lyft and use Signal instead), but sometimes it's hard to tell the exact contours of an app's behavior, and sometimes even if we can tell it can still be an extremely costly decision to abstain from using certain technology and services. What we need, what everyone needs, is to be able to use technology secure in the knowledge that information shared with it travels no farther and for no other purpose than we expect it to.

Towards that end, we – lawyers and others – should absolutely pressure technology makers into (a) being more transparent about how and why they access metadata in the first place, (b) enabling more graduated levels of access to it, and use of it, so that no app or service has to ask for, or learn, any more about our lives than it needs in order to run, and (c) being more principled in both their data sharing practices and resistance to government data demands. Market pressure is one way to effect this outcome (there are a lot of lawyers, and few technologies can afford to be off-limits to us), and perhaps it is also appropriate for some of this pressure to come from regulatory sources.

But before we turn to regulators in outrage we need to aim our ire carefully. Things like the GDPR and CCPA deserve criticism because they tend to be like doing pest control with a flame thrower, seeking to ameliorate harm while being indifferent to any new harm they invite. But the general idea of encouraging clear, nuanced disclosures of how software interacts with personal data, as well as discouraging casual data sharing, is a good one, and one that at the very least the market should demand.

The reality of course is that sometimes data sharing does need to happen – certain useful services will not be useful services without data access, and even data sharing among partners who together supply that service. It would be a mistake to ask regulators to prevent it altogether. Also, it is not private actors who necessarily are the biggest threat to the privacy interests we lawyers need to protect. Even the most responsible tech company is still at the mercy of a voracious government that sees itself as entitled to all the data that these private actors have collected. Someday hopefully the courts will recognize what an assault it is on our constitutional rights for metadata access not to be subject to a warrant requirement. But until that day comes, we should not have to remain so vulnerable. When we turn to the government to help ensure our privacy, our top demand needs to be for the government to better protect us from itself.


Posted on Techdirt - 28 January 2019 @ 1:34pm

Dozens Of Privacy Experts Tell The California Legislature That Its New Privacy Law Is Badly Undercooked

from the hard-to-survive-this-turkey dept

Here at Techdirt we've taken issue with the California Consumer Privacy Act (CCPA) – not because there's anything wrong with caring about online privacy, or even with regulating it, but because there's definitely something wrong with regulating it badly. As we've seen with the GDPR, not only does poor regulation struggle to deliver any of the intended benefit, but it also causes all sorts of other harm. Thus it's enormously important to get this sort of regulation right.

But that's not the current iteration of the CCPA. Born out of an attempt at political blackmail, rather than considered and transparent policy making, even with several small attempts at improvements, it suffers from several showstopping infirmities. These were set forth in a letter to the California legislature organized by Eric Goldman, who has been closely tracking the law, and signed by 41 California privacy lawyers, professionals, and professors (including me). As he summarized in a blog post hosting a copy of the letter, these defects include:

  • That the law affects many businesses who never had a chance to explain the law’s problems to the legislature;
  • That compliance with the CCPA imposes excessive costs on small businesses;
  • That its inconsistencies with other privacy laws, including the GDPR, require businesses to waste extra money;
  • The CCPA undermines other consumer privacy laws;
  • There are drafting errors and other problems, including with overbroad definitions; and
  • It claims an extraterritorial reach that may not be Constitutional, and will create substantial confusion for everyone, as well as costs for the state, as the question is litigated.

In other words, we can do better. As the letter concludes:

Everyone has acknowledged that the CCPA remains a work-in-progress, but there may be some misapprehensions about the scope and scale of the required changes still remaining. In our view, the CCPA needs many substantial changes before it becomes a law that truly benefits California. We appreciate your work on these important matters.


Posted on Techdirt - 22 January 2019 @ 3:32pm

Herrick V. Grindr – The Section 230 Case That's Not What You've Heard

from the pleading-matters dept

On the surface Herrick v. Grindr seems the same sort of case as Daniel v. Armslist (which we wrote about last week): it's a case at an appeals court that addresses the applicability of Section 230, meaning there is a reasonable possibility of it having long-lingering effect on platforms once it gets decided. It's also a case full of ugly facts with a sympathetic plaintiff, and, at least nominally, involves the same sort of claim against a platform – in Armslist the claim was for "negligent design," whereas here the claim is for "defective design." In both cases the general theory is that because people were able to use the platform to do bad things, the platforms themselves should be legally liable for the resulting harm.

Of course, if this theory were correct, what platform could exist? People use Internet platforms in bad ways all the time, and they were doing so back in the days of CompuServe and Prodigy. It is recognition of this tendency that caused Congress to pass Section 230 in the first place, because if platforms needed to answer for the terrible things their users used them for, then they could never afford to remain available for all the good things people used them for too. Congress felt it was too high a cost to lose the beneficial potential of the Internet because of the possibility of bad actors, and so Section 230 was drafted to make sure that we wouldn't have to. Bad actors could still be pursued for their bad acts, but not the platforms that they had exploited to commit them.

In this case the bad act in question was the creation and management of a false Grindr profile for Herrick by an ex-boyfriend bitter about their breakup. It led to countless strangers, often with aggressive expectations for sex, showing up at Herrick's home and work. There is no question that the ex-boyfriend's behavior was terrible, frightening, inexcusable, and, if not already illegal under New York law, deserving to be. But only to the extent that such a law would punish just the culprit (in this case the ex-boyfriend who created the fake profile).

The main problem with this case is that Herrick is seeking to have New York law extend to also punish the platform, which had not created the problematic content. But the plain language of Section 230 – both in its immunity provision along with its pre-emption provision – prevents platforms from being held liable for content created by others. Herrick argues that Grindr should be held liable anyway "because it knowingly facilitated criminal and tortious conduct." But that's not the standard. The standard is whether the platform created the wrongful content, or, at minimum, in the wake of Roommates, had a hand in imbuing it with its wrongful quality. But here there is no evidence to suggest that Grindr had anything to do with the creation of the fake profile. It was the awful ex-boyfriend who was doing all the malfeasant content supplying.

But here's where the two cases part company, and where the Grindr one gets especially messy. The good news for Section 230 is that this messiness may make it easy for the Second Circuit to resolve in favor of Grindr and leave Section 230 unscathed. The bad news is that if the Second Circuit decides the other way, it will be very messy indeed.

One of the core questions in most lawsuits involving Section 230 is whether the platform itself is an interactive computer service provider, and thus protected by Section 230 from lawsuits seeking to hold it liable for content created by others, or whether it is instead a non-immune "information content provider." Part of the problem with this case is that when Herrick filed the lawsuit originally, the pleading acknowledged that Grindr was an interactive computer service provider. Later, when he was fighting the motion to dismiss, he changed his mind, but that's a problem. You don't usually get to change your mind about these critical elements of your complaint without repleading it. (Which is one of the reasons Herrick is appealing; the dismissal was "with prejudice," meaning he wouldn't easily be able to re-plead at this point, and Herrick wants another chance to amend his complaint.)

But that's only one of the pleading problems. A plaintiff also has to put forth a plausible theory of liability at the outset, in large part so that the defendant can be on notice of what it is being accused of to defend itself. It's not unusual for theories of liability to evolve as litigation proceeds, but if the theory changes too much too late in the process it raises significant due process problems for the defendant. Which seems to be happening here. The story Herrick told the Second Circuit about why he thought Grindr should be liable for the harm he suffered differed in significant ways from the story he had told at the outset, or to the trial court. This change is one reason why the case is particularly messy, and may be messier still if the Second Circuit allows it to continue anyway.

At issue is what Herrick told the Second Circuit about his harassment. According to him now, strange men were showing up in his life not just constantly but everywhere he went. Yet according to the record at the trial court, they only showed up in two places: his home and his work. Which is not to say, of course, that it's ok for him to have these people harass him at either place (or any place). The issue is that this "everywhere" v. "only in two places" distinction significantly affects his theory of the case and therefore the merits of his appeal.

That's because the argument he pressed at oral argument was that Grindr's geolocation service removed the case from Section 230's purview. According to him there must be some bug in Grindr that allows these strange men to know where he is and seek him out, and so, he thinks, Grindr should be liable for not fixing this defect.

However there are a number of problems with this theory. First, it is highly implausible. For it to be true Grindr would need to not only still be tracking him (even as an ex-user) but then, for some unknown reason, somehow unite the location data of the actual Herrick person with the fake Herrick profile. Herrick tried to argue that the first part was likely, citing for instance Google's location services continuing to track users after they'd thought it had stopped. But even if it were true that Grindr had continued to track him, it would be really random to associate that data with any other account he didn't control. From Grindr's point of view, his real account and the fake account would look like two completely separate users. Sure, Grindr could have a bug that mis-associated location data, but there's no reason for it to pick these two completely different accounts to merge the data from. It would be just as arbitrary as if it mixed up his data with any other Grindr account.

Furthermore, there is zero evidence to suggest that the fake account used the geolocation data of anyone at all, other than perhaps the ex-boyfriend, who was operating the account. There certainly is no evidence to suggest that it was somehow using Herrick's actual data, and that's why the factual distinction about where he was harassed matters. If it truly was everywhere then he might have a point about the app having a vulnerability, and if so then perhaps his defective design claim might start to be colorable. But the only information he's alleged is that he was harassed in those two places, home and work, and no one needed to use any geolocation data to find him at either of these places. The ex-boyfriend knew of these places and could easily send would-be suitors to them directly via private messages. In other words, the reason they turned up at either of these places was because of content supplied by a third party (the ex-boyfriend). This fact puts the case clearly in Section 230-land and makes the case one where someone is trying to hold a platform liable for harm caused by how another communicated through their system.

Finally, an additional problem with this theory is that even if it were correct, and even if there were some evidence that the geolocation was allowing strangers to harass him everywhere, it needed to have come up before the appeal. The purpose of an appeal is to review whether the first court made a mistake. Belatedly supplying more information for the benefit of the appeals court will not help it decide whether the first court made a mistake because that court could only have done the best it could with the information available to it. It isn't a mistake not to have had the benefit of more, and to add more at this late date would be incredibly unfair to the defendant. As it was, pressing this new "he was tracked everywhere" theory at oral argument left Grindr's counsel in the unenviable and risky position of having to field extremely hypothetical questions from the judges about their client's potential liability based on facts nowhere in the underlying record. It was uncomfortable to listen to the judges push Grindr's lawyers on the question of whether some hypothetical software bug that they had never contemplated, and likely doesn't exist, might undermine their Section 230 protection. To their credit they fielded the hypo on the fly pretty well by reminding the judges that Section 230 covers how platforms are used by other people, regardless of whether they are used appropriately or exploitatively. But given the way this case was pleaded from the outset, this hypo should never have come up, especially not at this late juncture.

So one of the overarching concerns about this case is that because this theory did not coalesce until it had reached the appeals court, it left the central legal questions it raised under-litigated, thus inviting poor results if the Second Circuit now gives it any credence. But that's not the only concern. It may still be an ominous harbinger, for even if Herrick loses the appeal, it may not be the last time we see this "software vulnerability makes you lose Section 230 protection" theory put forth. It foreshadows how we may see future privacy litigation wrapped up as defective design cases, and, worse, it may encourage plaintiffs seeking to do an end-run around Section 230 to try to package their claims up as privacy cases.

Also, what Herrick asked for in his appeal was a remand back to the trial court to explore all these under-developed evidentiary issues. Was there a software bug? Was Grindr continuing to track former subscribers in a way they didn't know about? Was there a privacy leak, where the fake profile was somehow united with the geolocation of a real person? Herrick believes the case shouldn't have been dismissed without discovery on these issues, but early dismissal is a big reason why Section 230 provides valuable protection to a platform. It is extremely expensive to go through the discovery stage – in fact, it's often the most expensive stage – and if platforms had to endure it just so plaintiffs could explore paranoid fantasies with no evidence to give them even a veneer of plausibility, it would be extremely destructive to the online ecosystem.

On the upside, however, unlike the Wisconsin Court of Appeals in the Armslist case, after listening to the oral argument I'm relatively confident that the judges will be able to respect prior precedent upholding Section 230, even in these awful cases, and resist reaching an emotional conclusion that strays from it. Also, given the issues with the pleading and such – which at oral argument the judges flagged – there may be enough procedural problems with Herrick's case to make it easy for the court to dispense with it without causing damage to Section 230 jurisprudence in the Second Circuit in the process. But if these predictions turn out to be wrong, and if it turns out that these procedural issues pose no obstacle to the court issuing the remand Herrick seeks, then we might have to contend with something really ugly on the books at a federal appellate circuit level.


Posted on Techdirt - 18 January 2019 @ 1:36pm

In Which We Warn The Wisconsin Supreme Court Not To Destroy Section 230

from the not-just-fosta dept

One of the ideas that we keep trying to drive home is that the Internet works only because Section 230 has allowed it to work. Mess with Section 230, and you mess with the Internet. FOSTA messed with it statutorily, but it isn't just Congress that can undermine all the speech and services that depend on Section 230's protection for the platforms that enable them. Courts can mess with it too.

While it's bad enough when courts get questions of whether Section 230 applies wrong at the trial court level, the higher the court, the more potentially destructive the decision if the court decides to curtail its protection. On the other hand, the higher the court, the more durable Section 230's protective language becomes when the decision gets it right. This post is about one of those cases where the future utility of Section 230 hangs in the balance, and where we hope that the Wisconsin Supreme Court, the highest court in the state, gets it right and finds it applies to the platform being sued -- and therefore all other platforms that depend on its protection.

We've written before about this case, Daniel v. Armslist. As with a lot of the litigation challenging Section 230 it was one of those "bad facts make bad law" sorts of cases. In this case an estranged husband, against whom there was a restraining order, bought a gun from an unlicensed seller who had advertised through the Armslist site. Notably it does not appear that the sale was necessarily illegal – in Wisconsin unlicensed dealers apparently do not have to run background checks – nor was the sale fully transacted on the site (the actual purchase was made in a McDonalds parking lot). Of course, even if the sale had been illegal, or fully brokered via the site, Section 230 should still have insulated the platform, but here the Section 230 inquiry should be much more straightforward: the lawsuit alleging that Armslist negligently designed a site that facilitated a third party's speech – in this case, the speech offering the gun for sale – should have been barred by Section 230.

The trial court actually had gotten this question right and dismissed the case. Unfortunately a state appeals court in Wisconsin opted to ignore twenty-plus years of jurisprudence, as well as the statute's pre-emption provision, which would have directed such a finding, and reversed the trial court's original decision. Armslist then sought review by the Wisconsin Supreme Court, and we filed an amicus brief supporting their petition. One of the main points we made in the brief was how much stood to be affected if the decision was not overturned and Section 230's applicability in Wisconsin was now narrowed in ways Congress hadn't intended. After all, it isn't just Armslist in the crosshairs; it is all platforms everywhere, and all the speech and services they enable, in Wisconsin and beyond, that are threatened if platforms can no longer depend on Section 230's critical protection applying to them as it once had.

Fortunately the Wisconsin Supreme Court agreed to hear the case, and this week we filed yet another amicus brief in support of Armslist on the merits. It is similar to the previous brief, with the added example of how much the Copia Institute itself, and Techdirt in particular, depends on Section 230 remaining robust and effective. It relies on it as a user of other services -- for instance, to have its posts shared through social media -- and as a platform itself. There could not be a comments section on Techdirt -- or all the vibrant and insightful discussion found there -- without Section 230 protecting the site from liability for what commenters say.

It would be easy for the tragedy underpinning this case to cause the court to fixate on Armslist and the type of user content it intermediates. But Internet platforms come in all sorts of shapes and sizes, offering all sorts of services, and enabling all sorts of speech on all sorts of topics. And all of them will be affected by how the court resolves this particular case before it. So we hope our brief helps remind the Wisconsin justices of just how much is at stake.


Posted on Techdirt - 3 December 2018 @ 1:41pm

Tech Policy In Times Of Trouble

from the pep-talk dept

A colleague was lamenting recently that working on tech policy these days feels a lot like rearranging deck chairs on the Titanic. What does something as arcane as copyright law have to do with anything when governments are giving way to fascists, people are being killed because of their race or ethnicity, and children are being wrested from their parents and kept in cages?

Well, a lot. It has to do with why we got involved in these policy debates in the first place. If we want these bad things to stop we can't afford for there to be obstacles preventing us from exchanging the ideas and innovating the solutions needed to make them stop. The more trouble we find ourselves mired in the more we need to be able to think our way out.

Tech policy directly bears on that ability, which is why we work on it, even on aspects as seemingly irrelevant to the state of humanity as copyright. Because they aren't irrelevant. Copyright, for instance, has become a barrier to innovation as well as a vehicle for outright censorship. These are exactly the sorts of chilling effects we need to guard against if we are going to be able to overcome these challenges to our democracy. The worse things are, the more important it is to have the unfettered freedom to do something about it.

It is also why we spend so much energy arguing with others similarly trying to defend democracy when they attempt to do so by blaming technology for society's ills and call for it to be less freely available. While it is of course true that not all technology use yields positive results, there are incalculable benefits that it does bring – benefits that are all too easy to take for granted but would be dearly missed if they were gone. Technology helps give us the power to push back against the forces that would hurt us, enabling us to speak out and organize against them. Think, for instance, about all the marches that have been marched around the world, newly-elected officials who've used new media to reach out to their constituencies, and volunteer efforts organized online to push back against some of the worst the world faces. If we too readily dull these critical weapons against tyranny we will soon find ourselves defenseless against it.

Of course, none of this is to say that we should fiddle while Rome burns. When important pillars of our society are under attack we can't pretend everything is business as usual. We have to step up to face these challenges in whatever way is needed. But the challenges of today don't require us to abandon the areas where we've previously spent so much time working. First, dire though things may look right now, we have not yet forsaken our constitutional order and descended into the primordial ooze of lawlessness. True, the press is under constant attack, disenfranchisement is rife, and law enforcement is strained by unprecedented tensions, but civil institutions like courts and legislatures and the media continue to function, albeit sometimes imperfectly and under severe pressure. And we strengthen these institutions when we hew to the norms that have enabled them to support our society thus far. That some in power may have chosen to abandon and subordinate these norms is no reason that the rest of us should do the same. Rather, it's a reason why we should continue to hold fast to them, to insulate them and buttress them against further attack.

Second, we are all capable of playing multiple roles. And the role we've played as tech policy advocates is no less important now than it was before. Our expertise on these issues is still valuable and needed – perhaps now more than ever. In times of trouble, when fear and confusion reign, the causes we care about are particularly vulnerable to damage, even by the well-meaning. The principles we have fought to protect in better days are the same principles we need to light the way through the dark ones. It is no time to give up that fight.


Posted on Free Speech - 30 November 2018 @ 10:44am

How Civil Subpoenas Are Used To Unmask Online Speakers, And How A Recent Decision Will Help Deter Bogus Ones

from the unexpected-good-news dept

Important cases don't always happen with a lot of fanfare. It may be easy to follow what the US Supreme Court is up to, with its relatively small docket of high-profile matters, but plenty of other important cases get resolved by state and lower courts around the country with much less attention but just as much import.

This decision by a California appeals court, Roe v. Halbig, is one such example, and happily the impact it stands to have is a good one. It isn't a showy decision declaring some new principle of liberty. Rather, it stands to quietly help ensure that codified protections for speech, and anonymous speech in particular, work as intended.

We've written many times before about how important it is that anonymous speech be protected. Indeed, the US Supreme Court has found that the First Amendment includes the right to speak anonymously, because without that right a lot of important speech could not happen. But it's one thing to say that anonymous speech must be protected; it's another to make sure that anonymous online speakers can remain anonymous on a practical level. If it is too easy to unmask speakers, then their right to speak anonymously becomes illusory.

To prevent the right to anonymous speech from becoming meaningless, it's important that discovery instruments intended to unmask speakers, like subpoenas, not be vulnerable to abuse, especially by plaintiffs who don't have a legitimate need to unmask their critics. Because not only is a SLAPP suit chilling to speech, but so is a subpoena arising from a SLAPP suit that strips a speaker of the anonymity they counted on having when they spoke.

This decision will help prevent the latter. To understand how, it helps to understand how these subpoenas get used.

What typically happens is that a SLAPP is filed in another state (or country), likely one that does not have a robust anti-SLAPP law, and names a "John Doe" defendant. The plaintiff then issues a subpoena connected to the case targeted at whatever Internet platform (e.g., Twitter, Google, Facebook, Automattic/WordPress, Yelp, Glassdoor, etc.) or platforms may have information that would help identify who the speaker was. Obviously this information would be needed in order to maintain the lawsuit – you need to know who you are suing in order to actually sue them – but there is nothing requiring a lawsuit to continue once the identification is made. Sometimes SLAPP plaintiffs file lawsuits only as a vehicle to learn who their critic was because that's all they need to be able to make their critic regret speaking against them.

If the platform is exposed to the jurisdiction of this other state, and thus subject to the subpoena, then all of that state's rules about subpoenas will govern what comes next. But if the California-based platform is not subject to the jurisdiction of this other state, then the plaintiff will need to "domesticate" it with the court of the county in California where the platform is located. It is generally easy to domesticate a subpoena; any California-licensed attorney can issue one on a special form provided by the California courts. It contains the same demand to produce information that the out-of-state subpoena had, only now the demand is governed by California law with its various speech-protecting rules.

In general, a platform will try to notify the user that it has received the subpoena to unmask them. (This is an important step, which is why we've also been so critical of discovery rules preventing this notice.) Sometimes the platforms might even try to fight the subpoena themselves, which some recent California appellate cases said they have the right to do. Domestication in California also means that when the courts consider whether to quash a subpoena they will use the Krinsky test, a relatively speaker-protective test used by courts to decide whether there is a sufficient basis to warrant a speaker being unmasked. Courts won't definitively decide the case at this stage, but per the test they will not allow a speaker to be identified if the plaintiff has not made at least a prima facie showing that the claims in the lawsuit may be valid. Speakers shouldn't lose their anonymity if there's no chance that the plaintiff might win.

And then that's where this case comes in. Because if the motion to quash the subpoena is successful, the party who brought the motion gets to recover the fees and costs of doing so.

The rule at issue here is much like the anti-SLAPP statute, which serves both to compensate a wronged speaker who has been forced to defend a lawsuit targeting their protected speech and to deter plaintiffs from bringing these garbage lawsuits in the first place by making the plaintiffs pay the defendant's legal fees. But the anti-SLAPP statute only governs actual lawsuits [p. 13]. It doesn't have any effect on similarly meritless subpoenas arising from out-of-state SLAPPs. To prevent litigants from filing their lawsuits in other states (or countries) beyond the reach of the California anti-SLAPP law, and then using those meritless lawsuits as a basis to issue subpoenas to unmask their critics, in 2008 the California legislature added language to its rules of civil procedure to address this situation. Section 1987.2 of the California Code of Civil Procedure reads:

(c) If a motion is filed under Section 1987.1 for an order to quash or modify a subpoena from a court of this state for personally identifying information, as defined in subdivision (b) of Section 1798.79.8 of the Civil Code, for use in an action pending in another state, territory, or district of the United States, or in a foreign nation, and that subpoena has been served on any Internet service provider, or on the provider of any other interactive computer service, as defined in Section 230(f)(2) of Title 47 of the United States Code, if the moving party prevails, and if the underlying action arises from the moving party's exercise of free speech rights on the Internet and the respondent has failed to make a prima facie showing of a cause of action, the court shall award the amount of the reasonable expenses incurred in making the motion, including reasonable attorney's fees.

In other words, if a subpoena targeting protected online speech is successfully quashed, then, as with the anti-SLAPP statute, the court must award the party who expended the resources to quash the subpoena the fees and costs they had to spend to do it. Like the anti-SLAPP statute, this language both serves to compensate a wronged speaker for the defense of their speech rights and is a deterrent to others who otherwise might be inclined to casually issue subpoenas to harass their anonymous critics.

[T]he legislative history of section 1987.2, subdivision (c) highlights the Legislature’s focus on the burden on free speech posed by subpoenas derived from out-of-state cases targeting anonymous speakers on the Internet; the Legislature’s concern with the costs of litigating cases threatening free speech; and the Legislature’s intent to protect the exercise of free speech on the Internet. The anti-SLAPP statute reflects similar considerations. [p. 15]

Given how many Internet platforms there are in California, this is a particularly powerful piece of legal code. It's a significant reason why we've praised decisions requiring subpoenas to be domesticated in California, where so many platform companies are based, so that their users can benefit from the protection that this legal code affords. Similar protection is not generally available in other states (although it really should be), which means that if a platform is forced to respond to a subpoena governed by another state's law, the users will be on their own to fund their own defense.

But even though this powerful language has been on the books for a decade, there have not been many cases interpreting or affirming it. In fact, the court here suggests it may be the first, which is partly why this case is so important. [p. 28]

For one thing, the decision makes the statute much more usable for Doe defendants. To explain why, we need to return to the story about how these subpoenas play out. After all, the courts don't automatically run the Krinsky test on every issued subpoena. First the Doe defendant has to find counsel to help challenge it, which is much easier to do when there is a more certain promise of fee recovery. Next, the Doe defendant has to get the subpoena challenge in the courthouse door. Unhelpfully, every California county's courts handle these petitions to quash a bit differently (which can present challenges for counsel, who needs to figure out what these procedures are – this is not an area of law well-documented in practice guides).

Also, some counties' courts require these petitions to quash be adjudicated by a rotating batch of pro-tem judges. These are practicing lawyers trusted by the courts to help clear the thicket of day-to-day discovery disputes that regularly arise from California litigation and would otherwise drown the courts without the extra help. Unfortunately they are not necessarily equipped to adjudicate the significant free speech issues that happen to end up before them because they come wrapped as a discovery matter – in this case, a motion to quash a subpoena.

(Note also that because these subpoenas are often channeled through the discovery departments of the county courts that is likely why there has been so little written precedent before now, because this sort of adjudication generally leaves no formal written decision, or even much of a record at all.)

So this case allows Doe defendants to more efficiently educate these judges. Not only does it provide a clear judicial precedent interpreting the procedural language and showing that its protection is real and meaningful, but, in rejecting the fee award as being unduly discounted, it tells these judges that the fee awards at stake in these cases are not only mandatory but potentially substantial. It is an important admonition because it is extremely rare that in a normal discovery dispute a judge would award a prevailing party more than a few thousand dollars. Judges hearing discovery matters may therefore be disinclined to ever award more. But it may realistically require an amount deep into five figures in order to properly quash one of these speech-chilling subpoenas, and here the appellate court says that these cost claims should not be presumptively discounted simply because they may be large.

The total bill came to $42,273 for 192 hours of attorneys’ work plus $308 for the paralegal’s time. Based on these numbers, the trial court stated, “[$]42,000 was a lot, extreme in setting in this case. . . . I think a petition to quash a subpoena should not require that amount of time. So I’m going to award $22,000 in fees.” The trial court made clear that, in its view, Roe’s attorneys had spent an unreasonable amount of time on the case. The trial court is entitled to draw that conclusion, for, “an ‘experienced trial judge is the best judge of the value of professional services rendered in [her] court.’ ” (Walent v. Commission on Professional Competence etc. (2017) 9 Cal.App.5th 745, 748.) Nevertheless, the starting point for the trial court’s calculation must be all of the hours counsel has spent on the case. Roe suggests that perhaps the trial court awarded an amount equal to the time his attorneys spent on the initial motion to quash. If the trial court’s fee award were based on this metric, it would constitute an abuse of discretion. Halbig filed an opposition to the motion to quash and withdrew the subpoena; both of those events reasonably required further briefing from Roe. An award based solely on the attorney hours spent preparing the initial motion to quash would violate the principle that the starting point of the fee calculation must be all of the attorney hours actually spent on the case. (Ketchum, supra, 24 Cal.4th at pp. 1131-1132.) [p. 27]
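To see just how steep the trial court's discount was, it helps to work out the implied hourly rates from the figures quoted above (a back-of-the-envelope calculation of my own, not one the court performed):

```python
# Figures from the quoted decision in Roe v. Halbig
billed_fees = 42_273   # attorneys' fees claimed (excluding the $308 paralegal time)
hours = 192            # attorney hours worked
awarded = 22_000       # fees the trial court actually awarded

billed_rate = billed_fees / hours   # effective hourly rate as billed
awarded_rate = awarded / hours      # effective hourly rate after the discount

print(f"Billed:  ~${billed_rate:.2f}/hour")   # ~$220.17/hour
print(f"Awarded: ~${awarded_rate:.2f}/hour")  # ~$114.58/hour
```

A roughly $220/hour blended rate is modest for First Amendment litigation, which underscores the appellate court's point that the award, not the bill, was the anomaly.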

This decision also cleared up an area of potential ambiguity in the code's applicability by discussing what it means to be a "prevailing party" eligible to recover fees and costs. Here's the problem that this case illustrated: the Doe defendant had to spend a lot of money trying to quash the subpoena, but before the court was able to rule on the original motion to quash, the plaintiff withdrew the subpoena. The question the appeals court had to decide was whether that withdrawal made the Doe defendant ineligible to recover as a "prevailing party" since there was no judicial victory.

The appeals court decided that the Doe defendant could still collect, observing that it was impossible for the defendant to know whether they were out of the woods. The plaintiff, in rescinding the subpoena, did so explicitly "without prejudice," meaning that it could be reissued at any time, and the court found it contrary to the purpose of 1987.2 to leave the defendant uncompensated for the expenditure they had been forced to make.

Halbig argues that, once he had withdrawn the subpoena, Roe had no reason to fear that Halbig’s further efforts would unmask his identity, and he maintains that Roe’s refusal to withdraw the motion to quash needlessly incurred extra costs […] Halbig’s argument, however, ignores the possibility that he could have sought another subpoena for Roe’s identity. Halbig asked for and received a “dismissal” of the Google subpoena “without prejudice.” The record contains no evidence that at the time of the hearing on the motion to quash Halbig had determined the identity of the “Doe” defendants or dismissed the underlying lawsuit in Florida. As noted in the analogous anti-SLAPP context, “[t]he specter of the action being refiled (at least until the statute of limitations had run) would continue to have a significant chilling effect on the defendant’s exercise of its First Amendment rights. At that point, the plaintiff would have accomplished all the wrongdoing that triggers the defendant’s eligibility for attorney’s fees, but the defendant would be cheated of redress.” (Coltrain, supra, 66 Cal.App.4th at pp. 106-107.) [p. 21-22]


Posted on Techdirt - 27 November 2018 @ 12:11pm

We Interrupt All The Hating On Technology To Remind Everyone We Just Landed On Mars

from the inspiring dept

It was hardly more than 100 years ago that human beings figured out powered flight. Barely 80 since flight became jet-powered, 70 since it broke the sound barrier, and 60 since we mastered jet flight sufficiently for ordinary commercial use. It was also not even 60 years ago that we figured out how to send human beings into space, and not even 50 since we put them on the moon. These time periods hardly span geological epochs; they can be measured by a lifetime.

For those whose consciousness developed after these tectonic shifts in the development of human civilization, it can be easy to forget that mankind spent vastly more of its existence not being remotely able to succeed at any of these things than being able to do them all. It can be easy to lose sight of what a triumph each leap is when today they all seem so ordinary. We take it for granted that we can board a metal tube and just a few hours later end up a continent away. We become glib about putting people in space when we have them sitting up there 24/7. And the moon, that celestial body that from the dawn of man has been the object of every dream, has long faded into the rearview mirror. Been there, done that, we think, as the knowledge that it is within our grasp slowly extinguishes the wonder that used to fuel our drive to seek the impossible.

Fortunately space is full of other frontiers to tantalize us. And Mars is one of them. Orbiting the sun between 35 and 250 million miles away from Earth (depending on our respective orbital positions), barely visible to the naked eye, and full of even more mystery than our much more proximate moon (which is less than 240,000 miles away), it passes through the heavens flashing its enticing red glow like a bullfighter's cape to his charges. And so, like moth to flame, we go.
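That distance range can be roughly checked against the two orbits themselves. As a sketch (using approximate orbital figures of my own, not numbers from the original piece):

```latex
% Semi-major axes, with 1 AU ≈ 93 million miles:
%   Earth: a_E ≈ 1.00 AU       Mars: a_M ≈ 1.52 AU
% Treating both orbits as circles:
d_{\min} \approx a_M - a_E \approx 0.52~\text{AU} \approx 48~\text{million miles}
d_{\max} \approx a_M + a_E \approx 2.52~\text{AU} \approx 234~\text{million miles}
% Mars's eccentric orbit (e ≈ 0.093) stretches both extremes: at a close
% ("perihelic") opposition the gap shrinks to roughly 35 million miles, and
% with Mars near aphelion on the far side of the sun it grows to about
% 250 million miles -- the range quoted above.
```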

But it hasn't been easy. We didn't get close to Mars until the 1960s, or get any sort of good look until the 1970s, which is also when we first got to touch it with tools we created as stand-ins for ourselves. But even as some Mars exploration missions have succeeded, most have ended prematurely or failed altogether. And even though in the decades since those first landers we've managed to send several more robotic extensions of ourselves, which in turn have sent back enormous amounts of data teaching us about this place so difficult to know, every time we come up with some new apparatus to help move our still-limited knowledge of Mars forward, we still face the same nearly insurmountable problem: how do we manage to get this highly sophisticated piece of equipment to this incredibly far-off place?

Yesterday, we got it right. Yesterday, we threaded this near-impossible needle and successfully landed the InSight Lander exactly where we wanted it. But the smoothness of this arrival obscures what a tremendous accomplishment it represents. As The Oatmeal illustrated earlier this month, there were a zillion possible points of failure that we had to get perfectly right. We had to pick a spot to land. We had to pick a day to launch to hit that spot. We had to pick a place to launch from. We had to calculate where Mars would be by the time our spacecraft got there. We had to fly it across 300,000,000 miles of space to get there. We had to get it to arrive at the correct 12-degree angle. We had to get it to survive the heat of atmospheric entry. We had to get it to successfully deploy a special "super-sonic parachute" at exactly the right time. We then had to get it to successfully detach from the heat shield, deploy some landing legs, and fall from its protective shell. And then, with the same impeccable timing, we had to get it to fire some retro-rockets to control its continued descent. And we had to perfectly anticipate every instruction for every task in the programming baked into our robotic scout months and months before that program would ever be run. Programming error, mechanical error, or any other human error all could have doomed the mission. And yet none did. It has some more to do to prepare for all of its experiments (deploy instruments, etc.), but InSight now stands ready, on Mars, to continue teaching us about our mysterious planetary neighbor.

It is a moment worth celebrating. We spend so much time lamenting technology, often regarding human innovation as some sort of disease to be cured, that we lose our ability to marvel at just what we've accomplished as a species. To see those first pictures beamed back to our home planet from somewhere else in the solar system, all because we figured out how to get them, is to look upon something of unspeakable beauty. Not just in the view itself but in the momentous human achievement we are privileged to see unfold before our eyes.


Posted on Techdirt - 20 November 2018 @ 6:36am

In A Speech Any Autocrat Would Love, French President Macron Insists The Internet Must Be Regulated

from the hate(d)-speech dept

Props to French President Emmanuel Macron, who had a busy week last week, what with the observance of the World War I armistice centennial, the Paris Peace Forum, the Internet Governance Forum (IGF), and various other related events. All drew attendees and attention from around the world to his capital city, and all required his participation in some significant way, including through the delivery of several speeches that each surely required substantial preparation to deliver so capably. Techdirt has already covered a few minor aspects of the IGF speech: the announcement that France would embed officials with Facebook, and reference to the "Paris Call." But in terms of the major substance of the speech, there are few compliments that can be paid.

At best it was the sort of speech that someone completely new to tech policy might have come up with. Someone who, upon finding an imperfect situation, presumes that they are the first to notice the issue. And then takes it upon themselves to heroically step in to address the problem, despite the fact that their proposed "solution" reflects an incomplete understanding of the matter.

There are a number of ways this incomplete understanding infected his speech and undermined the quality of his recommendation. There was, for instance, his erroneous declaration that the Internet today is too much about content distributors and not enough about content creators. This declaration alone suggests a very poor understanding of all the myriad ways people all over the world use the Internet to create and then disseminate their expressive works themselves. In and of itself it calls into question whether his overall suggestion is capable of being adequately protective of all this expression.

Because it appears not, and not just because of this limited understanding of how the Internet is used. It also ignores the critical countervailing concerns that have long deemed his proposed "fix" to be an unacceptable one. Because the "cure" he proposed — greater regulation of the Internet — is a dangerous one that would destroy all that he purports to want to protect.

We were off to a bad start with his initial skewering of net neutrality, a topic slated to be dealt with head-on by EU regulators next year. To summarize his general view on the subject: sure, we don't want certain ideas to be marginalized. We should defend people's access to the Internet, he said, but not always. He interprets the term "neutrality" to mean that all ideas have to be treated equally, but, in his view, some ideas are more equal than others. And this is what so offends him: net neutrality allows those who do not share "our values" to spread their ideas too.

This appeal to "values" was a recurring reference that underpinned his speech. Thanks to the Internet, Macron said, we saw an upsurge in democracy (i.e. Tahrir Square). Now, however, he complained, the Internet is being deployed by fringe elements to work against those democratic values. As he put it, in the name of liberty we are allowing the enemies of liberty to speak, and this, Macron insisted, needs to end through the imposition of regulation on the Internet and its actors.

Of course it's not that the values he champions are bad: liberal democracy and personal liberty are certainly worth defending. And he's right to recognize that the Internet can be a valuable tool for advancing those values. He's also right to observe that some use the Internet to advance contrary values. But any autocrat can make the very same argument about how regulation of expressive technology is necessary to preserve a society's "values," and nearly all do.

There is nothing magical about any particular set of values that makes regulation designed to enforce them better than regulation designed to suppress them. Regulation that gives someone the power to decide which values are the good ones and which are the bad is regulation that gives someone the power to suppress any values, including the ones you prefer. Indeed, that's the very point of the very values he champions, to ensure that no one gets that power. You simply can't create that power and expect it not to be used badly.

At some level Macron understands this problem. In the same speech he lamented the autocratic approach of "China Cyberspace" as being a poor choice for the Internet's future, and yet that's exactly the future he invites as he calls for the Internet to be as tightly controlled by his preferred regulators as China would want it to be by its own.

But Macron fears that the only other choice to the regulatory solution he proposes is "California Cyberspace," where California-based companies instead are the de facto regulators of the Internet.

Again, though, Macron misapprehends the current situation, in at least two significant ways. First, part of his objection to the Internet being "regulated" by California companies is that he didn't vote for them, and thus he fears that he has no way to ensure that they act in a way that he considers sufficient to protect the values he prefers. But installing governments, even elected EU governments, as regulators of the Internet provides no guarantee that these values will be any better protected. France itself has members of the far right making increased inroads into government, as does Germany. The democratically-elected government in Poland is busy attacking its independent judiciary for not being nationalistic enough, while Hungary's is currently attempting to ban protest. Just the day before, Macron had told the world how poisonous nationalism is, and yet the regulation he prescribes would give nationalists in governments the tools to cement their alternative values.

The other significant misapprehension upon which his proposal is based is that "California Cyberspace" is a lawless zone. But not all law must say no; the laws that have allowed the Internet to thrive in California and beyond have been laws that have said yes to innovation and expression and worked to protect them from interference, including Section 230, the First Amendment, and even, to a degree, the DMCA. All of these sorts of legal structures are what enable the actual protection of all those very same liberty values Macron says he wants to foster.

But that's not the sort of regulatory approach Macron proposes. He wants one that will say no to technology — and, importantly, the expression facilitated by this technology — when he believes technology should say no to expression. In his mind this is a modest proposal, one that simply calls for regulation by international consensus via organizations like the IGF. He said this was to help transcend the "rifts" caused by different nations' regulatory approaches. But next year's IGF has been scheduled over Thanksgiving week, shutting out the many American participants who would prefer to observe one of the most significant holidays on the American calendar with their families, as is traditional, rather than spend it a continent away working to save the Internet. International pluralism, it seems, is not really high on the IGF agenda.

Instead it seems that the goal is to empower his own government with the ability to decide for the world what the Internet can be used for. While his call for this regulatory crackdown may be packaged up in language touting freedom, democracy, equality, and international cooperation, it is still the cry of the censor keen for the power to refuse others' expression.


Posted on Free Speech - 23 October 2018 @ 10:45am

The Little Rock Drug Raid Story Is A Fourth Amendment Story. But It's Also A First Amendment One.

from the all-amendments-matter dept

The Little Rock drug raid story is appalling. The indiscriminate, repeated, and systemic violation of the Fourth Amendment has been enormously destructive to people's lives, as well as an entire community. But if this situation is to be remedied, and hopefully it will be, it will be thanks to the First Amendment.

Most obviously, the First Amendment is what has allowed for Radley Balko's reporting of the story. Speaking truth about power is only possible with strong press protection. When injustice can be discovered and shared, justice becomes possible. With Balko's reporting the public at large can now be aware of the abuse being done in its name, and that revelation is what will allow people to press for change. As it is, publication of the story has already led to charges being dropped against another of its victims.

Victim Roderick Talley's own First Amendment rights also made a difference, and in several ways. One important way is that they gave him the right to film the world around him, and that let him record the police's abuse, which provided him with compelling evidence to use in his pursuit of justice.

The complex where he currently resided had recently put out a notice to residents to be on the alert for break-ins. So Talley bought a security system to monitor both the inside and outside of his apartment. About a week before the raid, the outdoor camera picked up some strange activity outside Talley’s apartment. As he sat handcuffed while police officers rifled through his belongings, he began to make the connection. The outside camera had recorded two odd incidents. First, a man whom Talley didn’t know approached the apartment while Talley wasn’t home. Looking anxious, the man knocked, waited a few moments and then left. A few days later, the camera picked up a police officer outside the door. The officer looked around, snapped a photo of Talley’s door with his cellphone, and left.


After reading the affidavit, Talley went back to check the camera footage of his mysterious visitor from the previous week. “Sure enough,” he says. “The dates matched up. And nobody else came to my apartment that day.” The informant described in the affidavit was the same man Talley’s camera had recorded knocking on his door, waiting and then leaving. Talley wasn’t home at the time. The account given by the detectives and informant was false. And Talley had the video to prove it.

His access to public records was also critical. Through them he was able to discover patterns of abuse affecting not just him but his fellow citizens.

In the months after his own raid, Talley filed open-records requests for every warrant and affidavit involving the detectives who handled his case. He then expanded out and asked for warrants related to other officers on the drug unit.

In those records was also another important piece of information: the informant's mugshot. Remember the story here about mugshots? The one about how people were arrested for having posted these completely public records, simply because they made the editorial decision about which ones to post based on a profit motive? This story shows why it is so important that mugshots be public records the public has ready access to. Because with the mugshot Talley was able to figure out what had happened to him and others. A name on a search warrant application is an abstraction, but with the picture he could compare the affidavit to his security camera footage to spot the lies.

Over the ensuing weeks, Talley scoured Facebook and Instagram. He talked to residents of the apartments and the surrounding neighborhood. He started watching the Arkansas courts website for cases that looked similar to his. He eventually found a mug shot of the informant. The man who falsely claimed to have purchased cocaine from Talley is a nine-time felon whose criminal record includes nine convictions for theft and another five for burglary. He has also been convicted for giving a false name to police officers after an arrest, for filing a false police report, and, while behind bars, for writing a death threat to a police officer, forging another inmate’s signature on the threat, and then reporting the threat in exchange for reducing his own charges.

The mugshot also helped him compare notes with other victims, some of whom remembered seeing the informant lurking around the neighborhood.

Talley found Davis’s case late last year on the Arkansas courts site. After contacting Davis, Talley showed him a photo of the informant. “Oh, that was him,” Davis says. “That was the guy who came to my apartment. He has what you might call a unique look. You don’t forget a guy like that.” The informant told the police that Davis sold him cocaine. The police found only pot, a scale, Davis’s gun, bullets and the registration for his gun.

And then there was social media. Not only did it help him figure out what had happened by letting him find posts and pictures from others affected, but it gave him a forum to speak out about what had happened to him, and to reach out to other affected community members – er, at least until he was censored by Facebook for having posted public information about the state actors who had abused him…

He also continued to use social media to publicize his case and reach out to others who may have been raided. He says he was at one point suspended from Facebook for posting the officers’ identities, photos and contact information, though Talley insists this was all public record.

His First Amendment right to petition the government for redress of his grievances is also what allowed him to sue for the violation of his other rights. The city tried to seal all the records associated with the case, but fortunately a judge refused to let that impingement of his First Amendment rights add to the list of constitutional injuries.

In response to the lawsuit, the city’s first move was to ask a judge to seal the search warrants, affidavits and everything else Talley had found — including Talley’s own security camera videos. Laux and Crump fought the motion and won. Talley had obtained all of that information from his own cameras or from public records. The city couldn’t then bar him from sharing or publishing it.

Because Talley's story, and the records evidencing it, are able to remain in public view, we are able to learn about a renegade police force running around one of America's cities, unfettered by the constitutional limitations put on police power. Fortunately knowledge is power, and thanks to the First Amendment we can start to fight back.


Posted on Techdirt - 22 October 2018 @ 10:54am

Shitty Man Shows How Shitty Men Can Shit On Free Speech By Suing Over The Shitty Media Men List

from the shitty-lawsuits dept

In the wake of the revelations about Harvey Weinstein, writer Stephen Elliott's name ended up on a Google doc called Shitty Media Men, along with the information "Rape accusations, sexual harassment [sic], coercion, unsolicited invitations to his apartment, a dude who snuck into Binders???" listed under the column heading "ALLEGED MISCONDUCT" and the additional note that, "Multiple women allege misconduct." He has now sued Moira Donegan, the owner of the Google doc, and dozens of anonymous third-party contributors to the list for defamation, as well as intentional and negligent infliction of emotional distress. He has also now cemented his reputation as a very shitty man.

First, let me say that I do not call Stephen Elliott a shitty man because of what posters to the Shitty Media Men list wrote about him. He's shitty for filing this lawsuit against the host of and contributors to the list, seeking to chill the speech of those who would speak out against bad behavior. He's shitty for threatening to unmask people who had exercised their right to speak anonymously to warn others of potential harm.

Plaintiff will know, through initial discovery, the names, email addresses, pseudonyms and/or “Internet handles” used by Jane Doe Defendants to create the List, enter information into the List, circulate the List, and otherwise publish information in the List or publicize the List. Through discovery, Plaintiff can obtain the email address information, Google account, Internet Protocol (“IP”) address assigned to the accounts used by the Jane Doe Defendants by the account holders’ Internet Service Provider (“ISP”), email accounts and/or Google accounts, on the date and time at which the Posts were published and/or information was entered into the List. Plaintiff intends to subpoena the shared Google spreadsheet metadata for the List, email accounts, Google accounts and ISPs in order to learn the identity of the account holders for the email addresses and IP addresses.

He's shitty for leaving everyone vulnerable to continued abuse, from any source, since by deterring speech about abuse, abuse will now be so much harder to check. And he's shitty for using disproportionate power to quell those who tried to resist him (which, of course, seems an odd play for someone who wants people to believe that claims he did the same sexually could not possibly have been true).

Because his power here is indeed unequal. The pen may be mightier than the sword, but it is no match for a lawsuit. A lawsuit targeting speech is a giant tax on expression, extracting an immense cost for what should have been free to do. Speech isn't free when one must pay a minimum of five or six figures – if not more – to defend the right to "freely" express it.

This story therefore touches on a number of the issues we often talk about here at Techdirt, highlighting this recurrent power imbalance that keeps threatening to make the right to free speech illusory. There's the chilling effect of the suit itself, both on these defendants and anyone else who might now be prompted to rethink speaking out in the future. There's the attack on anonymous speech, upon which public discourse often depends. And then there's the targeting of the intermediary in order to pressure those who enable others' speech to cease doing so.

That last point is an aspect of the suit particularly worth noting here. One of the points we keep making is that Section 230 isn't just about Facebook (or Twitter, or Google, or Yelp, etc…); it's about regular Internet users. This suit exemplifies why it's so important to preserve its critical protections for intermediaries of all sorts: because someday you, too, may want to facilitate the exchange of important information in a Google doc, and you might not want to be sued for it.

"In the beginning, I only wanted to create a place for women to share their stories of harassment and assault without being needlessly discredited or judged," Donegan wrote in The Cut in January 2018. "The hope was to create an alternate avenue to report this kind of behavior and warn others without fear of retaliation."

In this case, the progenitor of the Google doc was an intermediary enabling other people to express themselves through the online service – in this case, the Google doc – she provided. Section 230 allows that intermediaries can come in all sorts of shapes and sizes, because its immunity is provided broadly, to any provider of an "interactive computer service," which is "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server." That's what Donegan did with her Google doc: provide access to software to multiple users. If anything is somehow wrong with the content they contributed through this service, then they can be held responsible for it. But per Section 230, not Donegan.

(In his complaint Elliott does accuse Donegan of editing the spreadsheet, but not in ways that transcend the typical editing activity of an intermediary protected under Section 230.)

And it shouldn't be the other contributors either. None actually accused him of rape; the statement in question reflected only that there had been accusations of it. Elliott's complaint would have Donegan and any author of this spreadsheet entry bear the burden of proving that he actually raped someone in order not to be liable to him. But that's not how defamation law works, nor is it how it should work. As the Supreme Court observed in New York Times v. Sullivan:

A rule compelling the critic … to guarantee the truth of all his factual assertions—and to do so on pain of libel judgments virtually unlimited in amount— leads to a comparable "self-censorship." Allowance of the defense of truth, with the burden of proving it on the defendant, does not mean that only false speech will be deterred. Even courts accepting this defense as an adequate safeguard have recognized the difficulties of adducing legal proofs that the alleged libel was true in all its factual particulars. Under such a rule, would-be critics … may be deterred from voicing their criticism, even though it is believed to be true and even though it is in fact true, because of doubt whether it can be proved in court or fear of the expense of having to do so. They tend to make only statements which "steer far wider of the unlawful zone." The rule thus dampens the vigor and limits the variety of public debate.

The burden is therefore on the plaintiff to show that the statement was false or was made in reckless disregard of the truth. Even if he were to be considered a private, rather than public, figure the burden would still be on him to show that the defendants at least demonstrated a negligent, rather than reckless, disregard for the truth, but it would be strange for him to argue his own cultural irrelevance in order to be able to prevail on that lower standard. His complaint itself laments a loss of stature that suggests he was at least a limited-purpose public figure, and thus required to prove "actual malice," which, despite some handwringing about Donegan's public feelings about other terrible men, the complaint doesn't seem to do. The lawsuit is also over speech about a matter of public importance, which in New York would also prompt the higher standard, which likely would need to be met before unmasking the speakers.

[U]nder prior case law, Google cannot be compelled to reveal the identity of an anonymous poster unless and until Elliott can prove that the posts were libelous, said Paul Levy, an attorney with Public Citizen who has helped establish precedent for when a court can compel an internet provider to identify an anonymous user. So if Elliott's attorneys want to identify the list contributors, they'll have to prove his case of libel before Google can be compelled to provide the information.

Furthermore, the only "truth" at issue here is whether he had ever been accused of rape. There could have been a false accusation and the statement would still be true. But what Elliott really wants is for the court to grant him a "get out of rape accusation free" card, if these anonymous speakers cannot substantiate his guilt. Which is what renders this lawsuit the piece of crap SLAPP that it is, and illustrates why a strong anti-SLAPP law needs to apply. The complaint was filed in New York, which has an infamously limited anti-SLAPP law, but it's notable that, per the complaint, Stephen Elliott lives in Louisiana, where there is a decently strong anti-SLAPP law. If that law is found to apply, it could lead to Elliott having to pay everyone else's legal bills.

But even so, the essential truth will remain. Even if Elliott had never before victimized women, this lawsuit is an attempt to victimize them now by burdening them with a cripplingly expensive and impossible task. And for this behavior he indeed is a shitty man.


Posted on Techdirt - 25 September 2018 @ 10:44am

District Court Misses The Forest For The Trees In Dismissing Constitutional Challenge To FOSTA

from the stop-hitting-yourself dept

It's like the scene in The Naked Gun, where Leslie Nielsen stands outside the exploding fireworks factory telling everyone, "Nothing to see here. Please disperse." Such is the decision by the district court dismissing the EFF's lawsuit challenging the constitutionality of FOSTA.

Since FOSTA's passage, many have been reacting in terror to its vague, yet broad, language threatening civil and even criminal liability. It has led to the censorship of enormous swathes of legitimate speech as platforms seek to reduce this new risk. But in a decision Monday dismissing the case for lack of standing, the district court basically declared that it couldn't understand what everyone was so worked up over.

Standing has to do with who is entitled to file a lawsuit. Ordinarily you have to have suffered an actual injury, although in certain situations, such as constitutional challenges, parties can have standing if it is likely that they will suffer an injury. After all, we wouldn't want people to have to expend resources needlessly in the effort to comply with an unconstitutional law, or have to risk prosecution in order to have its constitutionality tested before the courts. But the injury risk still needs to be reasonably likely.

Imminence, the element most relevant here, is concededly a somewhat elastic concept. Nevertheless, imminence "cannot be stretched beyond its purpose, which is to ensure that the alleged injury is not too speculative for Article III purposes – that the injury is certainly impending." […] The concept of imminence has been particularly important in the context of pre-enforcement challenges. The Supreme Court has held that plaintiff who challenges a statute must demonstrate a realistic danger of sustaining a direct injury as a result of the statute's operation or enforcement. A credible threat of prosecution exists when the challenged law is aimed directly at plaintiffs, who, if their interpretation of the statute is correct, will have to take significant and costly compliance measures or risk criminal prosecution. Thus, fear of prosecution cannot be "imaginary or wholly speculative," and allegations of a subjective "chill" are not an adequate substitute for a claim of specific present objective harm or a threat of specific future harm. [p. 15-16]

Yet here the court decided there was no such credible threat.

It would be great if it were right, and no one had anything to fear. But while the court essentially declared the fears contorting the availability of online speech to be much ado about nothing, it didn't do so in a way that would effectively allay those fears.

As the court ran through its analysis of each plaintiff's standing, it struggled to see how what they proposed to do, and what they feared would be chilled, was actually targeted by the law.

[P]laintiffs say, FOSTA criminalizes "anything that promotes or facilitates prostitution, and not a specific crime." This is particularly problematic because prostitution is an area where there has been significant advocacy, both by government entities and by private citizens. As plaintiffs see it, that advocacy places them in crosshairs. In pressing this argument, however, plaintiffs ignore key textual indications that make clear that FOSTA targets specific acts of illegal prostitution not the abstract topic of prostitution or sex work. [p. 22]

The above is some of what the court had to say about the lead plaintiff Woodhull Freedom Foundation. It concluded similarly for plaintiff Human Rights Watch. For plaintiff Jesse Maley a/k/a Alex Andrews, the creator and operator of an actual platform, Rate That Rescue, it similarly minimized her concerns.

Under Maley's reasoning, because providing housing or childcare services to sex workers "make[s] sex work easier," Rate That Rescue could be said to promote or facilitate prostitution. For this reason, Maley fears that amendments to Section 230 - which clarify that immunity does not extend to conduct made unlawful by Section 2421A - could expose her to prosecution for the speech of third parties on Rate That Rescue. […] Her concerns, however, are unwarranted. Put simply, Maley has failed to show that Section 230 amendments expose her to a credible threat of prosecution. That is so because Maley, on the current record, lacks the mens rea to violate any of the provisions specified in Section 230(c)(5). […] In managing Rate That Rescue, Maley cannot possibly be said to act "with the intent to promote or facilitate the prostitution of another person" in violation of Section 2421A. Maley's declaration concedes as much, repeatedly expressing concern that law enforcement could determine that "the user-generated content on Rate That Rescue promotes or facilitates prostitution." But those formulations lack the critical mens rea element of the Section 2421A offense. Indeed, Maley herself does not even assert that law enforcement could credibly contend that, in managing Rate That Rescue, she acts "with the intent to promote or facilitate" the prostitution of another person. Of course, the mere promotion or facilitation of prostitution is not enough: Maley must intend that her conduct produce the specific result. [p. 25-26]

It's a statutory parsing that would be a lot more reassuring if it didn't ignore another perfectly plausible read of the statute. Of course it's ridiculous to say that Maley intended to promote prostitution. But that's not what the statute forbids. In a subsequent passage the court dismisses the argument that FOSTA's amendments to 18 U.S.C. Sec. 1591 create any additional legal risk for platforms. But the amendments expand the prohibition against the "participation in a venture" to engage in sex trafficking to include "knowingly assisting, supporting, or facilitating" such a venture. This language suggests that liability does not require knowledge of a specific act of sex trafficking. Instead, merely providing services to sex traffickers – even ones unsuccessful in their sex trafficking venture – would seem to trigger liability. In other words, knowledge seems to hinge not on knowledge of a sex trafficking act but on knowledge of a sex trafficking venture (including one that may even be victimless), yet both the statute and the court are silent as to how much, or how little, a platform would need to actually know in order to have "knowledge" for purposes of the statute. This vagueness is what is so chilling to platforms, because it forces them to guess conservatively. But the court provides little relief, and in dismissing the case denies the opportunity to even attempt to gain any.

Also, while these plaintiffs were suing because they feared prospective injury, plaintiff Eric Koszyk has already experienced a tangible injury directly traceable to the changes in the law wrought by FOSTA. He was a massage therapist who relied on Craigslist to advertise his services. In the wake of FOSTA, Craigslist shut down its Therapeutic Services section, thus limiting his ability to find customers. Without FOSTA (which would be the result if it were declared unconstitutional) it would seem that the shutdown decision could be reversed. But to the court this result would be too speculative:

Unfortunately for Koszyk, he cannot establish redressability under the relevant precedents. That is so because Koszyk has not established that a victory "will likely alleviate the particularized injury alleged." It is well established that a plaintiff lacks standing when the "redress for its injury depends entirely on the occurrence of some other, future event made no more likely by its victory in court." When, as here, a third party can exercise "broad and legitimate discretion the courts cannot presume either to control or to predict," a court is generally unable to redress the alleged injury and, accordingly, standing is found wanting. [p. 27-28]

This is insanity. Of course the court can't force Craigslist to re-open its Therapeutic Services section. But it can eliminate the reason for its closure and at least make the decision to re-open it possible. As long as FOSTA remains on the books it eliminates that possibility, and that's an injury.

It didn't go any better for the Internet Archive's standing as a plaintiff. As a platform that handles a massive amount of third party created content, for which review would be impossible, it worried it could nonetheless be caught in FOSTA's net. Don't worry about it, said the court.

Although the Internet Archive represents that it does not intend to promote sex trafficking or prostitution, it believes that the Section 230 amendments and the ambiguity of their scope may expose it to liability. Once again, however, there are no facts in the record supporting an inference of the mens rea standard necessary to peel back Section 230's protections. The Internet Archive's practice of sweeping up vast amounts of content from the web for indefinite storage, and its attested practical inability to review the legality of that third-party content, mean that that entity simply cannot meet the stringent mens rea standard required for liability under Sections 2421A, 1591, or 1595. [p. 28]

In a way, that sounds great. Don't know what's in all that user content? No problem. But the problem is, inevitably platforms are going to have some knowledge of what's in all the user content. In fact, if Section 230 is going to work as intended to encourage platform moderation of content they are going to have to know. And, thanks to this decision, this knowledge remains a terrifying prospect for all.

It is likely that EFF will continue to press forward with this case, so it is not the final word on FOSTA's constitutionality, but it is an unfortunate start.


Posted on Techdirt - 20 September 2018 @ 1:33pm

Wherein Jean Luc Picard Learns How Not To Moderate Twitter

from the instructive-allegory dept

For those not familiar with the Star Trek: The Next Generation canon, in the episode "Hero Worship" the Enterprise receives a distress call from somewhere deep in space, and in responding discovers a heavily-damaged ship with just one survivor. While the Enterprise crew is investigating what happened to the ship, they soon realize that they are being pounded by energy waves, and eventually it dawns on them that these waves could eventually destroy their ship like they apparently did the other. As the Enterprise tries to channel more and more power to its shields to protect itself from the battering, the waves hitting the ship become more and more violent. Until finally – spoiler alert! (although let's be honest: the episode basically telegraphs that this will be the solution) – Commander Data realizes that the waves are reflecting back the energy the Enterprise is expending, and that the solution is to cut the power or else be destroyed by the slapback.

This is a sci-fi story illustrating a phenomenon with which we're all familiar. It's that basic principle: to every action there is an equal and opposite reaction. And that's what's happening as people demand more censorship from platforms like Twitter, and then get more outraged when platforms have inevitably censored things they like. Of course increased calls to remove content will inevitably result in increased calls not to. And of course platforms' efforts to comply with all these competing demands will just make the platform more unusable until, like the wrecked ship, it will have torn itself apart to the point that it's hardly recognizable.

As the Enterprise crew learned, solutions don't always require figuring out ways to expend more energy. Sometimes they involve disengaging from a struggle that can never be won and finding new ways to view the problem. And when it comes to platform moderation, that same lesson seems relevant here.

Because just as the challenge facing the Enterprise was not actually to overpower the energy rocking it, that is not really the platforms' challenge either. The essential, and much less pugilistic, challenge they face is to figure out how to successfully facilitate the exchange of staggering amounts of expression between an unprecedented number of people. Content moderation is but one tool available, and not necessarily the best one for achieving that ultimate goal. Platforms shouldn't need to completely control the user experience; instead they need to deliver the control users need to optimize it for themselves. Being fixated only on the former at the expense of the latter is doomed to be no more successful than when the Enterprise was focused on doing nothing but feeding more power to the shields. In the end it wouldn't have saved the ship, because ultimately the solution it needed was something far less antagonistic. And the same is just as true for platforms.

Internet platforms of course are not fictional starships. And unlike fictional starships they can't depend on artificial intelligence to set them on the right path. Theirs is a very human exercise that first requires understanding the human beings who use their systems and then ensuring that the interfaces of these systems are built in accordance with how those users expect to use them, and need to.

Which itself is a lesson the story teaches. The survivor of that wrecked ship happened to have been a child, who was worried that it was he who had accidentally destroyed his ship when he stumbled during a wave attack and hit a computer console during his fall. The Enterprise crew assured him there was nothing he could have done to hurt anything. The engineers who had designed those consoles understood what their users needed from their interfaces, including the protection the interfaces needed to afford, and the enormous stakes if users didn't get it. And that's what the people building computer systems always need to do, no matter what the century.


Posted on Free Speech - 18 September 2018 @ 10:44am

How Regulating Platforms' Content Moderation Means Regulating Speech - Even Yours.

from the democratization-of-the-Internet dept

Imagine a scenario:

You have a Facebook page, on which you've posted some sort of status update. Maybe an update from your vacation. Maybe a political idea. Maybe a picture of your kids. And someone comes along and adds a really awful comment on your post. Maybe they insult you. Maybe they insult your politics. Maybe they insult your kids.

Would you want to be legally obligated to keep their ugly comments on your post? Of course not. You'd probably be keen to delete them, and why shouldn't you be able to?

Meanwhile, what if it was the other way around: what if someone had actually posted a great comment, maybe with travel tips, support for your political views, or compliments on how cute your kids are. Would you ever want to be legally obligated to delete these comments? Of course not. If you like these comments, why shouldn't you be able to keep sharing them with readers?

Now let's expand this scenario. Instead of a Facebook page, you've published your own blog. And on your blog you allow comments. One day you get a really awful comment. Would you want to be legally obligated to keep that comment up for all to see? Of course not. Nor would you want to be legally obligated to delete one that was really good. Think about how violated you would feel, though, if the law could force you to make these sorts of expressive decisions you didn't want to make and require you to either host speech you hated or force you to remove speech that you liked.

And now let's say that your website is not just a blog with comments but a larger site with a message board. And let's say the message board is so popular that you've figured out a way to monetize it to pay for the time and resources it takes to maintain it. Maybe you charge users, maybe you run ads, or maybe you take a cut from some of the transactions users are able to make with each other through your site.

And let's say that this website is so popular that you can't possibly run it all by yourself, so you run it with your friend. And now that there are multiple people and money involved, you and your friend decide to form a company to run it, which both gives you some protection and makes it easier to raise money to invest in better equipment and more staff. Soon the site is so popular that you've got dozens, hundreds, or even thousands of people employed to help you run it. And maybe now you've even been able to IPO.

And then someone comes along and posts something really awful on your site.

And someone else comes along and posts something you really like.

Which gets to the point of this post: if it was not OK for the law to be able to force you to maintain the bad comments, or to delete the good ones, when you were small, at what point did it become OK when you got big – if ever?

There is a very strong legal argument that it never became OK, and that the First Amendment interest you had in being able to exercise the expressive choices about what content to keep or delete on your website never went away – it's just that it's easier to see how the First Amendment prevents being forced to make those choices when the choices are so obviously personal (as in the original Facebook post example). But regardless of whether you host a small personal web presence, or are the CEO of a big commercial Internet platform, the principle is the same. There's nothing in the language of the First Amendment that says it only protects editorial discretion of small websites and not big ones. They all are entitled to its protection against compelled speech.

Which is not to say that as small websites grow into big platforms there aren't issues that can arise due to their size. But it does mean that we have to be careful in how we respond to these challenges. Because in addition to the strong legal argument that it's not OK to regulate websites based on their expressive choices, there's also a strong practical argument.

Ultimately large platforms are still just websites out on the Internet, and ordinarily the Internet allows for an unlimited amount of websites to come into being. Which is good, because, regardless of the business, we always want to ensure that it's possible to get new entrants who could provide the same services on terms the market might prefer. In the case of platform businesses, those may be editorial terms. Naturally we wouldn't want larger companies to be able to throw up obstacles that prevent competitors from becoming commercially viable, and to the extent that a large company's general business practices might unfairly prevent competition then targeted regulation of those specific practices may be appropriate. But editorial policies are not what may prevent another web-based platform from taking root. Indeed, the greater the discontent with the incumbent's editorial policies, the more it increases the public's appetite for other choices.

The problem is, if we regulate big platforms by targeting their editorial policies, then all of a sudden that loss of editorial freedom itself becomes a barrier to having those other choices come into being, because there's no way to make rules that would only apply to bigger websites and not also smaller or more personal ones, including all the nascent ones we're trying to encourage. After all, how could we? Even if we believed that only big websites should be regulated, how would we decide at what stage of the growth process website operators should lose their right to exercise editorial discretion over the speech appearing on their sites? Is it when they started running their websites with their friends? Incorporated? Hired? (And, if so, how many people?) Is it when they IPO'd? And what about large websites that are non-profits or remain privately run?

Think also about how chilling it would be if law could make this sort of distinction. Would anyone have the incentive to grow their web presence if its success meant they would lose the right to control it? Who would want to risk building a web-based business, run a blog with comments, or even have a personal Facebook post that might go viral, if, as a consequence of its popularity, it meant that you no longer could control what other expression appeared on it? Far from actually helping level the playing field to foster new websites seeking to be better platforms than the ones that came before, in targeting editorial policies with regulation we would instead only be deterring people from building them.


Posted on Techdirt - 6 September 2018 @ 3:33pm

United Airlines Made Its App Stop Working On My Phone, And What This Says About How Broken The Mobile Tech Space Is

from the garbage-in-garbage-out? dept

This post isn't really about United Airlines, but let's start there because it's still due plenty of criticism.

One day my phone updated the United App. I forget if I had trusted it to auto-update, or if I'd manually accepted the update (which I usually do only after reviewing what's been changed in the new version), but in any case, suddenly I found that it wasn't working. I waited a few days to see if it was a transient problem, but it still wouldn't work. So I decided to uninstall and reinstall, and that's where I ran into a wall: it wouldn't download, because Google Play said the new version wasn't compatible with my phone.

Wait, what? It used to run just fine. So I tweeted at United, which first responded in a surprisingly condescending and unhelpful way.

Sometime later I tweeted again, and this time the rep at least took the inquiry seriously. Apparently United had made the affirmative choice to stop supporting my Android version. And apparently it made this decision without actually telling anyone (like, any of its customers still running that version, who might not have updated if they knew they would have to BUY A NEW PHONE if they wanted to keep running it).

Ranting about this on Twitter then led to an interesting argument about what is actually wrong with this situation.

But let's not let United off the hook too soon. First, even if United were justified in ceasing to support an Android 4.x capable app, it should have clearly communicated this to the customers with 4.x phones. Perhaps we could have refused the update, but even if not, at least we would have known what happened and not wasted time troubleshooting. Plus we would have had some idea of how much United valued our business...

Second, one of the points raised in United's defense is that it is expensive to have to support older versions of software. True, but if United wants to pursue the business strategy of driving its customers to its app as a way of managing that relationship, then it will need to figure out how to budget for maintaining that relationship with all of its customers, or at least those whose business it wants to keep. If providing support for older phones is too expensive, then it should reconsider the business decision of driving everyone to the app in the first place. It shouldn't make customers subsidize this business decision by forcing them to invest in new equipment.

And then there was the third and most troubling point raised in United's defense, which is that Android 4.x is a ticking time bomb of hackable horror, and that any device still running it should be cast out of our lives as soon as possible. According to this argument, for United to continue to allow people to use its app on a 4.x Android device would be akin to malpractice, and possibly not even allowed per its payment provider agreements.

At this point we'll stop talking to United, because the problem is no longer about them. Let's assume that the security researchers making this argument are right about the vulnerability of 4.x and its lack of support.

The reality is, THE PHONES STILL WORK. They dial calls. They surf the web. They show movies. Display ebooks. Give directions. Hold information. Sure, at some point the hardware will fail. But for those wrapped in good cases that have managed to avoid plunging into the bath, there's no reason they couldn't continue to chug on for years. Maybe even decades. In fact, the first thing to go may be the battery – although, thanks to them often not being removable, this failure would doom the rest of the device to becoming e-waste. But why should it be doomed to becoming e-waste a moment before it actually becomes an unusable thing? Today these phones are still usable, and people use them, because it is simply not viable for most people to spend several hundred dollars every few years to get a new one.

And yet, in this mobile ecosystem, they'll need to. Not only to keep running the software they depend on, but to be able to use the devices safely. The mere ability to function is no longer enough to distinguish a working device from a non-working one. The difference between a working device and a piece of trash is whatever the OS manufacturer deems it. Because when the manufacturer says it's done maintaining the OS, then the only proper place for a phone that runs it is a landfill.

It is neither economically nor environmentally sustainable for mobile phones to have such artificially short lifespans. "Your phone was released in 2013!" someone told me, as if I'd somehow excavated it from some ancient ruin and turned it on. It's a perfectly modern device (in fact, this particular phone in my possession came into use far more recently than 2013), still holds a reasonable charge, and is perfectly usable for all the things I use it for (well, except the United app...). So what do you mean that I can't use it? Or that any of the other millions if not billions of people in the world running Android 4.x phones can't use them?

There are lots of fingers to point in this unacceptable state of affairs. At app makers who refuse to support older OSes. At app makers who make us use apps at all, instead of mobile web applications, since one of the whole points of the Web in the first place was to make sure that information sharing would not be device- or OS-dependent. At carriers who bake the OS into their phones in such a way that we become dependent on them to allow us OS updates. At the OS manufacturers who release these systems into the wild with no intention of supporting them beyond just a few years. And at various legal regimes (I'm looking at you, copyright law…) that prevent third parties from stepping in to provide the support the OEM providers no longer will. Obviously there are some tricky issues with having a maintenance aftermarket given concerns with authentication, etc., but we aren't even trying to solve them. We aren't doing anything at all, except damning the public to either throw good money after bad for new devices that will suffer the same premature fate, or to continue to walk around with insecure garbage in their pockets. And neither is ok.


Posted on Techdirt - 4 September 2018 @ 3:38pm

Ninth Circuit Stops Monkeying Around And Denies En Banc Review Of The Monkey Selfie Case

from the it-ain't-over-till-its-over dept

Whatever will we do without the Monkey Selfie case rearing its not-actually-copyrighted head every few months? We might finally get to find out, now that the Ninth Circuit has declined to rehear the appeal en banc. This denial now makes clear that monkeys lack standing to sue for copyright, at least within the Ninth Circuit. Someday (hopefully not soon) we may find out what other Circuits have to say about primate copyrights, but for now we can finally be confident that they lack standing to sue over them here.

Provided that no cert petition is granted, of course. And given that this is a case that has thus far steadfastly refused to end, it is way too soon to be confident that this is truly the last we've heard from Naruto or any of his alleged next friends. We should at least know whether a cert petition's been filed in about three months or so, though (see Rule 13), so stay tuned...


Posted on Techdirt - 31 August 2018 @ 12:09pm

The Scunthorpe Problem, And Why AI Is Not A Silver Bullet For Moderating Platform Content At Scale

from the what's-in-a-name dept

Maybe someday AI will be sophisticated, nuanced, and accurate enough to help us with platform content moderation, but that day isn't today.

Today it prevents an awful lot of perfectly normal and presumably TOS-abiding people from even signing up for platforms. A recent tweet from someone unable to sign up to use an app because it didn't like her name, as well as many, many, MANY replies from people who've had similar experiences, drove this point home:

Facebook, despite its insistence on users using real names, seems particularly bad at letting people actually use their real names.

But of course, Facebook is not the only instance where censorship rules based on bare pattern matching interfere not just with speech but with speakers' ability to even get online to speak.

This dynamic is what's known as the Scunthorpe Problem. Scunthorpe is a town in the UK whose residents have had an appallingly difficult time using the Internet due to a naughty word being contained within the town name.

The Scunthorpe problem is the blocking of e-mails, forum posts or search results by a spam filter or search engine because their text contains a string of letters that are shared with another (usually obscene) word. While computers can easily identify strings of text within a document, broad blocking rules may result in false positives, causing innocent phrases to be blocked.

The problem was named after an incident in 1996 in which AOL's profanity filter prevented residents of the town of Scunthorpe, North Lincolnshire, England from creating accounts with AOL, because the town's name contains the substring cunt. Years later, Google's opt-in SafeSearch filters apparently made the same mistake, preventing residents from searching for local businesses that included Scunthorpe in their names.

(A related dynamic, the Clbuttic Problem, creates issues of its own when, instead of outright blocking, software automatically replaces the allegedly naughty words with ostensibly less-naughty words instead. People attempting to discuss such non-prurient topics as Buttbuttin's Creed and the Lincoln Buttbuttination find this sort of officious editing particularly unhelpful…)
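A minimal Python sketch (using a hypothetical blocklist and replacement table, not any actual vendor's filter) shows how easily bare substring matching produces both failure modes:

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKLIST = ["cunt", "ass"]

def naive_block(text: str) -> bool:
    """Naive substring filter: flags any text containing a blocked string."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

def naive_replace(text: str) -> str:
    """Naive auto-replacement: swaps each blocked substring for a 'cleaner' word."""
    replacements = {"ass": "butt"}
    for bad, clean in replacements.items():
        text = re.sub(bad, clean, text, flags=re.IGNORECASE)
    return text

# False positive: an innocent town name gets blocked outright.
print(naive_block("Scunthorpe, North Lincolnshire"))   # True
# The Clbuttic variant: innocent words get mangled instead of blocked.
print(naive_replace("classic assassination"))          # clbuttic buttbuttination
```

The filter has no notion of word boundaries or context, so a proper name and a slur look identical to it; that is the whole Scunthorpe Problem in two functions.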

While examples of these dynamics can be amusing, each is also quite chilling to speech, and to speakers wishing to speak.

This is not an outcome we should be demanding more of, and yet every time people call for "AI" as a solution to online content challenges, these are the censoring problems that call invites.

A big part of the problem is that calls for "AI" tend to treat it like some magical incantation, as if just adding it will solve all our problems. But in the end, AI is just software. Software can be very good at doing certain things, like finding patterns, including patterns in words (and people's names…). But it's not necessarily good at knowing what to make of those patterns.

More sophisticated software may be better at understanding context, or even sometimes learning context, but there are still limits to what we can expect from these tools. They are at best imperfect reflections of the imperfect humans who created them, and it's a mistake to forget that they have not yet replicated, or replaced, human judgment, which itself is often imperfect.

Which is not to say that there is no role for software to help in content moderation. The things that software is good at can make it an important tool to help support human decision-making about online content, especially at scale. But it is a mistake to expect software to supplant human decision-making. Because, as we see from these accruing examples, when we over-rely on them, it ends up being real humans that we hurt.


Posted on Techdirt - 17 August 2018 @ 9:25am

NJ Courts Impose Ridiculous Password Policy 'To Comply With NIST' That Does Exactly What NIST Says Not To Do

from the the-poor-online-security-guardin'-state dept

As a New Jersey native I know how tempting it is for people to gratuitously bash my home state. But, you know, sometimes it really does have it coming.

In this case it's because of the recent announcement of a new password policy for all of the New Jersey courts' online systems – ranging from e-filing systems for the courts to the online attorney registration system – that will now require passwords to be changed every 90 days.

This notice is to advise that the New Jersey Judiciary is implementing an additional information security measure for those individuals who use Judiciary web-based applications, in particular, attorney registration, eCourts, eCDR, eTRO, eJOC, eVNF, EM, MACS, and DVCR. The new security requirement - password synchronization or p-synch - will require users to electronically reset their passwords every 90 days.

For reasons explained below, this new policy is a terrible idea. But what makes it particularly risible is that the New Jersey judiciary is claiming this change is being implemented in order to comply with NIST.

This requirement is being added to ensure that our systems and data are protected and secure consistent with industry security standards (National Institute of Standards and Technology Cybersecurity Framework (NIST CSF)).

The first problem here, of course, is that this general allusion to NIST is not helpful. If NIST has something specific to say that the courts are relying on, then the courts should specifically say what it is. Courts would never accept these sorts of vague hand-wavy references to authority in matters before them. Assertions always require a citation to the support upon which they are predicated so that they can be reviewed for accuracy and reasonableness. Instead the New Jersey judiciary here expects us to presume this new policy is both, when in fact it is neither.

The reality is that the NIST Cybersecurity Framework does not even mention the word "password," let alone any sort of 90-day expiration requirement. Moreover, what NIST does actually say about passwords is that they should not be made to expire. In particular, the New Jersey judiciary should direct its attention to Special Publication 800-63B, which expressly says:

Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically).

That same section of the Special Publication also says that, "Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets" because, as a NIST study noted, it tends to reduce overall security hygiene. Guess what else the new New Jersey password policy does:

Users must select passwords that are no more than eight (8) characters long and contain at least one capital letter, one lower case letter, one numeral, and one of the enumerated special characters.
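A rough back-of-the-envelope entropy comparison (a sketch with assumed alphabet sizes, not a formal strength metric) illustrates why NIST favors length over composition rules: capping passwords at eight characters caps the search space an attacker faces, while a longer but simpler secret vastly exceeds it.

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Upper-bound entropy (bits) of a uniformly random password."""
    return length * math.log2(alphabet_size)

# NJ-style policy: at most 8 chars drawn from ~94 printable ASCII characters.
# (Composition rules only shrink this further by excluding candidate strings.)
nj_max = entropy_bits(94, 8)        # ~52.4 bits

# NIST SP 800-63B-style alternative: a longer memorized secret, even lowercase-only.
passphrase = entropy_bits(26, 20)   # ~94.0 bits

print(f"8-char complex password:      {nj_max:.1f} bits")
print(f"20-char lowercase passphrase: {passphrase:.1f} bits")
```

Every mandatory composition rule also steers frustrated users toward predictable patterns ("Passw0rd!"), so the real-world gap is even wider than the uniform-random math suggests.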

It also gets worse, because as part of this password protocol it will require security questions in order to recover lost passwords.

Additionally, this policy change will require that each user choose and answer three personal security questions that will later allow the user to reset their own password should their account become disabled, for example, because of an expired password. The answers to the three security questions should be kept confidential in order to reduce the risk of unauthorized access and allow for most password resets to be done electronically.

Security questions are themselves a questionable security practice because they are often built around information that, especially in a world of ubiquitous social media, may not be private.

From their dangerous guessability to the difficulty of changing them after a major breach like Yahoo's, security questions have proven to be deeply inadequate as contingency mechanisms for passwords. They're meant to be a reliable last-ditch recovery feature: Even if you forget a complicated password, the thinking goes, you won't forget your mother's maiden name or the city you were born in. But by relying on factual data that was never meant to be kept secret in the first place—web and social media searches can often reveal where someone grew up or what the make of their first car was—the approach puts accounts at risk. And since your first pet's name never changes, your answers to security questions can be instantly compromised across many digital services if they are revealed through digital snooping or a data breach.

The Wired article this passage came from is already two years old. Far from New Jersey imposing an "industry standard" password protocol, it is instead imposing one that is outdated and discredited, which stands to undermine its systems security, rather than enhance it.

And largely, it appears, because it does not understand the unique needs of its users – who are not all the same. Some may log into these sites daily, while others (like me) only once a year when it's time to pay our bar dues. (What does this 90-day reset requirement mean for an annual-only user?) Furthermore, although things have been improving over the years, lawyers are notoriously non-technical. They are busy and stressed with little time to waste wrangling with the systems they need to use to do their job on behalf of their clients. And they are often dependent on vendors, secretaries, and other third parties to act on their behalf, which frequently results in credential sharing. In short, the New Jersey legal community has some particular (and varied) security needs, which all need to be understood and appropriately responded to, in order to improve systems security overall for everyone.

But that's not what the New Jersey courts have opted to do. Instead they've imposed a sub-market, ill-tailored, laborious, and needlessly demanding policy on their users, and then blamed it on NIST. But as yet another NIST study explains, security is only enhanced when users can respect the policy enforcing it. The more arbitrary and frustrating it is, the more risky the user behavior, and the weaker the security protocol becomes.

The key finding of this study is that employees’ attitudes toward the rationale behind cybersecurity policies are statistically significant with their password behaviors and experiences. Positive attitudes are related to more secure behaviors such as choosing stronger passwords and writing down passwords less often, less frustration with authentication procedures, and better understanding and respecting the significance to protect passwords and system security.

As NIST noted in a summary of the study, "'security fatigue' can cause computer users to feel hopeless and act recklessly." Yet here are the New Jersey courts, expressly implementing, for no good reason, a purposefully cumbersome and frustrating policy, one that could hardly be better calculated to overwhelm users, and which, despite its claims to the contrary, is far from a respected industry norm.


Posted on Techdirt - 6 August 2018 @ 3:40pm

SESTA, FOSTA, And How To Make Sense Of The Acronym Soup

from the You-say-potato,-I-say-we-should-have-called-the-whole-thing-off dept

Here at Techdirt we've been slow to switch: so dug in were we for so long against the legislative scourge known as SESTA that we've been reluctant to call it anything else. Even after its ghastly provisions became law – in some ways, because its ghastly provisions became law – we've been reluctant to change what we called this vehicle of censoring doom. After all, we said for months that SESTA would be awful, and now here it is, being awful. If we called it something else people might be confused about what we had been complaining about.

The problem is, it's not technically correct to continue to call this legislative outrage SESTA, and doing so threatens to create its own confusion. SESTA didn't become law; FOSTA did. When we react to those legislative changes, and cite to their source, we are citing to the bill called FOSTA, not the bill called SESTA. SESTA itself no longer exists in legislative form – FOSTA's enactment mooted it – and it's confusing to complain about a law that isn't actually one, or ever going to be one, because even if you can convince someone that it's terrible, they'll never be able to find in any law book what it is they should be upset about.

It's FOSTA that now haunts us from the U.S. Code. But what's confusing is that while FOSTA is the enacted legislation now hurting us, SESTA was the proposed bill we had warned would. All the legislative history is with SESTA (well, most of it anyway), but all the legislative power is with FOSTA.

So what happened? What's up with the two names? Why the shift? Basically this:

SESTA was a terrible bill proposing to gut Section 230 that had been rumbling around the Senate for a while. There were some hearings and proposed amendments, but by and large it remained a bill full of terrible, Internet-ruining proposals. Eventually, when it looked like it might be picking up enough steam to pass, an alternate bill got floated in the House: FOSTA. It still played SESTA's game, but it did so with different language that presumably would have resulted in something less Internet-ruining.

For what it's worth, not everyone thought this was a great strategy. Some thought that it would be better to do nothing but try to nip the whole idea behind SESTA in the bud, but others thought it might be better to go with a "devil you know" strategy if passage of something seemed inevitable, because then hopefully it could at least be something a little less awful.

FOSTA was still pretty bad, although it had some hearings and amendments to try to make it less so. But then, all of a sudden, the legislative sausage-making machine went berserk and spit out something even worse. The result was a Frankenstein monster of a bill, still called FOSTA, which combined the worst of its own proposals with the worst of the SESTA bill percolating in the Senate. This new FOSTA bill soon passed the House, and shortly thereafter it was the bill that passed the Senate as well. Notably it was not the original SESTA bill that the Senate voted on, because if the Senate had tried to pass anything different from what the House had passed, the reconciliation process between the two bills might have delayed the ultimate passage of either. Perhaps that delay would have spared us this horror, but such a fate was not something the law's Internet-undermining champions wanted to risk.

So here we are, stuck with this garbage on the books, legislation so awful it can't even be labeled coherently. But giving name to something always makes it easier to fight. So from here on out, we'll be calling it FOSTA.


Posted on Techdirt - 20 July 2018 @ 1:30pm

Appeals Court Tells Lower Court To Consider If Standards 'Incorporated Into Law' Are Fair Use; Could Have Done More

from the 102(b)-or-not-102(b),-that-was-the-question dept

Carl Malamud published the law on his website. And for that he got sued. The problem was, in posting the Code of Federal Regulations he also included the various enforceable standards included as part of those Regulations. This displeased the organizations which had developed those standards (SDOs) and who claimed a copyright in them. So they sued Public Resource for infringement, and in a terrible decision last year Public Resource lost. Public Resource then appealed, and this week Malamud's organization won a reversal of the district court decision.

The decision by the D.C. Circuit in American Society for Testing and Materials v. Public.Resource.Org stands as a win for those who would choose to republish the law, even when their doing so may involve republishing standards created by non-governmental SDOs that were then incorporated by reference into controlling law. Although one can never presume to read the tea leaves at oral argument, it did seem as though the court was extremely uncomfortable with the idea that someone could be punished for having published the law. But the particular way the court addressed the copyright and trademark claims brought against Public Resource for it having done so is still worth further discussion. Disclosure: I helped file an amicus brief on behalf of members of Congress supporting Public Resource's defense, and amicus briefs on behalf of law professors at the district court.

On the copyright front, it is important to first note how the court did NOT resolve the question of whether republishing standards incorporated into law constituted copyright infringement. A threshold question in any copyright infringement case is whether there's any copyright that could have been infringed at all, because no copyright = no infringement, and with no infringement the case goes away. One way there might not be a copyright is if employees of the federal government had worked on developing the standards, like the ones at issue in this case, since under § 105 of the copyright statute, works by federal government employees are ineligible for copyright protection. But in its decision the D.C. Circuit dismissed this argument, finding that Public Resource had effectively waived it at the district court below.

As an initial matter, PRO argues that there is a triable question as to whether the standards at issue here were ever validly copyrighted given the Act’s prohibition on copyrighting “work[s] of the United States Government,” 17 U.S.C. § 105, and the fact that government employees may have participated in drafting certain standards. PRO, however, failed to adequately present this claim to the district court and has thus forfeited it. [p. 14]

Another way there might not be copyright in the standards Public Resource published is that, once incorporated into law, they become a factual representation of what the law is, and that factual nature could preclude copyright in what was republished, since, per § 102(b) of the copyright statute, purely factual works are also not eligible for copyright protection. This consideration was kicked around by the judges during oral argument because it's a complicated issue with some interesting implications. First, there's the question of whether the standards themselves are too factual to be copyrighted, but for the sake of this case the court generally assumed they could be. But even if they are copyrightable, the next question is what happens when the standards have become a factual representation of the law governing people's behavior? Does that incorporation cause them to lose their copyright? And what would it mean for SDOs and the development of future standards by third parties if that were the case?

The court, however, chose to avoid these questions. It gave several reasons for this avoidance, including that a ruling on the copyrightability of incorporated standards could have a significant economic effect on those SDOs, [p. 16], and also that it's generally considered better practice for courts to decide cases on grounds other than constitutional ones [p. 15]. (As Public Resource and amici pointed out, not being able to post the law for people governed by it to read raises significant First Amendment and due process concerns, which would mean that the question of if the law could be copyrighted may be a constitutional one.) [p. 14-15].

Avoiding the constitutional question is all the more pressing here given that the record reveals so little about the nature of any given incorporation or what a constitutional ruling would mean for any particular standard. After all, it is one thing to declare that “the law” cannot be copyrighted but wholly another to determine whether any one of these incorporated standards—from the legally binding prerequisite to a labeling requirement, see 42 U.S.C. § 17021(b)(1), to the purely discretionary reference procedure, see 40 C.F.R. § 86.113-04(a)(1)—actually constitutes “the law.” [p. 15-16]

Instead the court chose to find for Public Resource on fair use grounds. [p.17] Or at least put Public Resource in a position to ultimately prevail on those grounds. Although the court lifted the injunctions the district court had placed on it – injunctions that had forced Public Resource to remove from its site actual, operative, mandatory law binding on the public – the case still needs to go back to the district court because the appeals court didn't think it had a sufficiently developed record before it to fully perform the fair use analysis itself. It did, however, give the district court a head start, with enough instruction on how to perform that analysis to make it likely to yield a favorable result for Public Resource on remand.

In this section, we review each of the fair use factors, and, as we shall explain, though there is reason to believe “as a matter of law” that PRO’s reproduction of certain standards “qualif[ies] as a fair use of the copyrighted work,” id. (internal quotations and citations omitted), we ultimately think the better course is to remand the case for the district court to further develop the factual record and weigh the factors as applied to PRO’s use of each standard in the first instance. As we have emphasized, the standards here and the modes of their incorporation vary too widely to conclusively tell goose apart from gander, and the record is just too thin to tell what went into the sauce. On remand, the district court will need to develop a fuller record regarding the nature of each of the standards at issue, the way in which they are incorporated, and the manner and extent to which they were copied by PRO in order to resolve this “mixed question of law and fact.” Id. This is not to say that the district court must analyze each standard individually. Instead, it might consider directing the parties, who poorly served the court by treating the standards interchangeably, to file briefs addressing whether the standards are susceptible to groupings that are relevant to the fair use analysis. [p. 19]

Overall, this is a good result for Public Resource. And far be it from me to rain on Carl Malamud and his legal team's well-deserved parade, but it's still important to point out why, although this D.C. Circuit decision is a good one, it could have been better.

For one thing, the parties have already litigated a lengthy trial. And their prize for finally winning the pie eating contest now is more pie. That litigating fair use is so arduous, even for as well-counseled a defendant as Public Resource, is a significant problem. As Lawrence Lessig has observed, "Fair use is only the right to hire a lawyer." Fair use is of little value for worthy defendants who might ultimately win infringement cases on those grounds if they can get obliterated by the litigation defending themselves along the way. Which is one reason why the D.C. Circuit's refusal to evaluate the core copyrightability grounds is a troubling one, because while Public Resource may ultimately prevail, what about anyone else who similarly decides to publish the law that also incorporates standards?

Furthermore, while the court's interest in ensuring that Public Resource could survive a subsequent fair use inquiry is great for Public Resource (and there is nothing in the decision to suggest that only Public Resource should benefit), it won't be helpful if the way the court framed each of the fair use factors in order to reach Public Resource can't also be of use to other defendants who are not exactly like Public Resource but have their own plausible fair use defenses. Certain language in particular gives some pause, such as the hostility towards some of Public Resource's transformative uses.

On this point, the district court properly rejected some of PRO’s arguments as to its transformative use—for instance, that PRO was converting the works into a format more accessible for the visually impaired or that it was producing a centralized database of all incorporated standards. [p. 21 (citing American Geophysical Union v. Texaco Inc., 60 F.3d 913, 923–24 (2d Cir. 1994))]

On the other hand, much of its reasoning is necessarily flexible enough to reach other defendants so that they, too, can have the four factors balanced in their favor. For the same reasons the court found distasteful the idea that Public Resource should be prevented from sharing the law, it would be distasteful if others were similarly prevented. In addition, should another defendant have difficulty showing its use is fair, the court also left open the possibility that the underlying copyrightability of the standards incorporated into law could still be challenged.

To be sure, it may later turn out that PRO and others use incorporated standards in a manner not encompassed by the fair use doctrine, thereby again raising the question of whether the authors of such works can maintain their copyright at all. [p. 16]

The concurrence by Judge Katsas provides additional reassurance. First, he reiterated that the Section 102(b) and Constitutional questions raised by someone claiming copyright over parts of published law remain unresolved and may yet be resolved in a way that dispels these claims. [Katsas concurrence p. 3]. He also provided some additional framing for the fair use analysis, noting that "it puts a heavy thumb on the scale in favor of an unrestrained ability to say what the law is." [Katsas concurrence p. 2]

Thus, when an incorporated standard sets forth binding legal obligations, and when the defendant does no more and no less than disseminate an exact copy of it, three of the four relevant factors—purpose and character of the use, nature of the copyrighted work, and amount and substantiality of the copying—are said to weigh “heavily” or “strongly” in favor of fair use. […] The Court acknowledges the thinness of the record in this case, and it appropriately flags potentially complicating questions about how particular standards may be incorporated into law, and whether such standards, as so incorporated, actually constitute “the law.” But, where a particular standard is incorporated as a binding legal obligation, and where the defendant has done nothing more than disseminate it, the Court leaves little doubt that the dissemination amounts to fair use. [Katsas concurrence p. 2]

In other words, despite the above concerns, the decision will still make it harder for future plaintiffs to use copyright to try to lock people out of sharing the law, as such attempts, at least in the D.C. Circuit, will not be looked upon with a friendly eye.

Meanwhile, there is also some additional good news from this case on the trademark front. Public Resource had included the trademarks of the SDOs behind the incorporated standards, and the SDOs (and district court) believed this use of the marks to be infringing. The D.C. Circuit disagreed, however, finding that Public Resource's use of the trademarks could qualify as nominative fair use, which "occurs when the defendant uses the plaintiff’s trademark to identify the plaintiff’s own goods and ‘makes it clear to consumers that the plaintiff, not the defendant, is the source of the trademarked product or service.’” [p. 33-34] This issue, too, was remanded to the trial court, although with the admonition that if the trial court should again find Public Resource's use infringing, it should potentially refrain from issuing another injunction barring all use of the trademark and instead consider whether merely modifying the use would be an adequate remedy. [p. 36-37]


Posted on Free Speech - 16 July 2018 @ 3:38pm

On Speech And Subpoenas, New York Giveth And Taketh (Now, The Bad News On Journalist Protection)

from the unappealing-jurisprudence dept

Having just written about a good New York ruling concerning third-party subpoenas and the ability to protect free speech, now we have to write about some less good news: the recent decision by New York's highest court undermining the protection afforded by the state's shield law.

Shield laws are critical to preserving a free and independent press because they enable journalists to resist testifying about the non-public aspects of their reporting, or having to turn over their notes and related work product. This ability to resist is what empowers them to promise anonymity to sources, which often can be the only way for news the public needs to know about to come to light. If journalists couldn't resist, or had to risk going to jail in order to try, it would inhibit their reporting and leave the public less able to learn about matters of public concern. Yet unfortunately this decision by the New York Court of Appeals invites just such a result by interfering with journalists' ability to avail themselves of the protection ostensibly afforded by the state shield law. (Note: New York confusingly labels its lowest court the Supreme Court. The highest court is instead known as the Court of Appeals. The Appellate Division is in the middle.)

As frequently happens with tough cases involving important First Amendment interests, the underlying facts of this case are awful: Conrado Juarez has been charged with the gruesome 1991 murder of his four-year-old niece. The case remained unsolved until DNA evidence made him a suspect. After fourteen hours of interrogation, he purportedly confessed. He now claims that the confession was coerced, and prosecutors want to use the notes and testimony of New York Times reporter Frances Robles, who had interviewed him, to challenge his claims. The trial court originally denied her motion to quash the subpoena demanding she provide the notes and testimony, but the Appellate Division overruled that decision and quashed it. Only now the Court of Appeals has overturned the Appellate Division's ruling, thus making the subpoena once again enforceable.

In overturning the Appellate Division's decision the Court of Appeals found that the reporter had no right to appeal the original denial of her motion to quash the subpoena by the trial court. If she had no right to appeal the trial court's decision, then the Appellate Division had no ability to reverse it. [p. 2] But even if this Court of Appeals finding that she had no right of appeal were truly consistent with chapter and verse of New York appellate procedure (the dissent believes it isn't [Rivera dissent p. 8-9]), it's still a remarkably formalistic conclusion that gives short shrift to the significant substantive rights at stake.

Formalism isn't, of course, inherently bad; careful adherence to procedural rules can sometimes protect substantive rights better than ad hoc shortcuts can. These rules exist in order to further the administration of justice, and the Court of Appeals itself fairly makes this point: limiting the ability to appeal in criminal matters keeps the administration of justice from being unfairly bogged down by appellate gamesmanship. [p. 2]

But justice isn't furthered by slavish adherence to interpretations of procedural rules so at odds with why we have the rules in the first place. Or, as in this case, so indifferent to the rights of those the rules were never intended to govern: namely, the affected third parties here, to whose interests the Court of Appeals seems so hostile [p. 4-5]. Or so arbitrary in their application and effect.

That arbitrariness is well on display here. First, the no-appeal rule the Court cites applies only to criminal cases, not civil ones [p. 2], which means that if this case had not involved a prosecution, the reporter could still have appealed a lower court's refusal to quash a subpoena without problem. Next, the rule limiting appeals does not apply to subpoenas issued as part of investigations of criminal matters. [p. 3] So if prosecutors had not already begun to prosecute the defendant, the reporter likewise could have appealed a refusal to quash.

In addition, if this case had originally broken the other way, with the trial court quashing the subpoena, then per this rule, if applied consistently, it would have been the government that could not appeal the ruling. That particular result would obviously be protective of journalists, but applying the no-appeal rule this way still makes journalists' protection entirely contingent on the judgment of trial courts. And that's a problem, because trial courts are not infallible. If they were, there would be no need for appeals courts at all. We have these courts because lower courts sometimes get things wrong, as this one did here, and there needs to be some way to set things right when they do. But what the Court of Appeals is saying in this case is that when journalists are subpoenaed as part of a criminal trial (something the New York legislature passed the shield law in order to prevent), they will be entirely dependent on the trial court getting the decision whether to quash perfectly correct in the first instance, because that decision will never be reviewable.

For shield law protection to be meaningful it needs to have adequate rights of appeal baked into it, in all situations where journalists may need to assert it. True, in the context of criminal trials journalists might be able to recover the right to appeal as part of their challenge of a contempt order seeking to punish their refusal to comply with a subpoena. But if journalists are forced to risk jail to assert their shield law protection effectively, then the protection the shield law affords is hardly effective.

The Court of Appeals seems to think that a legislative fix is the way to go to make it explicit that there is always a right of appeal. [p. 5] And there may also be the possibility of challenging a subpoena as part of an "Article 78" civil proceeding, although, as the dissent notes, forcing journalists to go this route does nothing to advance the speedy-trial interests the majority's "no appeal" rule is supposed to advance (nor is it clear that an Article 78 proceeding would necessarily be an effective option).

In any case, the alternatives available to a nonparty seeking some type of appellate review of the denial of a motion to quash will likely result in even greater delay of the criminal proceeding than would a direct appeal of a quashal motion. The two avenues left open to a nonparty to contest a denial would be a CPLR Article 78 action in the nature of prohibition or for the nonparty to simply fail to comply with the subpoena and seek appellate review of the subsequent order of contempt. In either case, if the prosecutor or defendant needed the nonparty’s evidence, they would wait until the resolution of the collateral proceedings. [Rivera dissent p. 11]

But the problem is that journalists should not be in a situation where their right and ability to resist the subpoenas the shield law is supposed to protect them from are so uncertain. In order to be consistent with the First Amendment and similar principles enshrined in the New York Constitution, principles that the shield law seeks to vindicate, the right to appeal any trial court denial should be implicit, since the effect of barring these appeals so significantly impinges on the free press the public needs.

Sadly, however, this sort of decision – procedural formalism over the effective preservation of substantive speech rights – may be par for the course for the New York Court of Appeals these days. This case is not the first one where the Court of Appeals has reached a conclusion that puts substantive speech rights at risk because of the way it has limited the appellate rights of third parties. In fact, it justified this shield law decision by citing another case it decided last year where Facebook, as a third party, had tried to quash 381 Stored Communications Act "warrants" seeking information about its speakers. In that case, Facebook had been similarly denied a right to appeal the denial of its motion to quash, and for generally similar reasons as those cited in this case now.

We've written before about troubling effects that arise when shield law jurisprudence collides with attempts by platforms to protect the anonymity of their users. The questions of whether journalists can resist subpoenas and whether platforms also can are separate and distinct, and, as such, are often best resolved according to separate and distinct reasoning. After all, the right to a free press and the right to speak anonymously often affect liberty interests in different ways. Plus, as we saw in the Glassdoor case, when both the district court and the Ninth Circuit unhelpfully conflated the two sets of questions and let the reasoning for journalist subpoenas drive their analysis of platform subpoenas, the weak reasoning in the former context undermined the constitutional protection of anonymous speech in the latter. And in this case we now see further problems with conflating these issues, only this time in reverse, with the earlier Facebook case about platform subpoenas and anonymous speech now negatively shaping this case about journalist subpoenas and the right to a free press.

On the other hand, both anonymous speech and free press cases affect the interests of third parties, and both vindicate important First Amendment rights upon which public discourse depends. Both therefore deserve to have these critical rights treated with more care than the New York high court has lately afforded them.

