Cathy Gellis’s Techdirt Profile

Posted on Techdirt - 10 September 2019 @ 3:57pm

The Internet Remains Broken In The Ninth Circuit And, At Least For Now, The Third

from the third-state dept

Hopes that the Ninth Circuit would correct its earlier awful ruling against HomeAway and Airbnb were dashed recently when the court denied the petition for rehearing. We had supported that petition because the original decision read into Section 230's statutory protection an exception that is not present in the statute, is out of step with prior precedent (including the Ninth Circuit's own), and threatens the Internet economy. Unfortunately, now that rehearing has been denied, any platform that facilitates commercial speech, and whose revenue model depends on facilitating the transactions arising from that speech, will no longer be able to reliably depend on Section 230's protection, at least not in the Ninth Circuit.

The Internet also remains vulnerable in the Third Circuit. The Oberdorf v. Amazon case allowed a products liability claim to proceed against Amazon based on Pennsylvania law. Subsequently, a district court in New Jersey – a state within the Third Circuit, for which Oberdorf would be binding precedent – decided to allow a similar products liability claim to proceed against Amazon based on New Jersey law, finding that, under the relevant statute, Amazon is a "seller" for purposes of that state's products liability law.

All these decisions are troubling, and the New Jersey one pointedly illustrates why. Not only does it incorporate the same analytical defects as the previous decisions, but it also reflects how all the recent ignorance about and hostility toward Section 230 have been infecting the courts.

As we explained before, all these decisions look past these platforms' role as an enabler of other people's speech. In the case of Amazon, it is other people who say they have something to sell. Denying these platforms Section 230 protection for this sort of user speech means that few, if any, platforms will be able to remain available to facilitate similar commercial speech offering something to sell. Before cheering how this state of affairs might hobble Amazon, however, bear in mind that it will hobble ANY platform that offers independent merchants a chance to offer their goods to a wider audience - including platforms that might be able to compete with Amazon. The more distaste we have for large, incumbent market players, either as platforms or even direct merchants, the more this turn of events should alarm us, because it will ensure we remain stuck with the ones who are already well-capitalized enough to endure this liability minefield and prevent us from getting any new ones.

In most of these cases the courts tried to pretend that there is something different about Amazon's relationship with third-party vendors that should put it on the hook for their liability. In this case, the New Jersey court didn't like that Amazon fulfilled orders, or that it otherwise reserved the right to exercise editorial control over the listings it hosted.

It is true that the agreements did not make Amazon the ultimate decisionmaker as to the prices or physical qualities of the product. As to the sale process, however, the level of control was greater. For example, Amazon processed all payments. [The seller] was required to provide information about its product in the manner that Amazon prescribed. Amazon exercised control over the listing itself—in particular, it retained the right to change, suspend, prohibit or remove listings. If notified that a product was defective, Amazon had the power to take it off the shelf, i.e., to remove the website listing and thereby shield innocent consumers. Under the EBA program, Amazon even had the right to dispose of products that were defective. Compare Oberdorf in which the vendor did not use Amazon’s fulfillment services, so Amazon never physically possessed or shipped the product. Not so here. The vendor in our case signed the FBA and used the fulfillment services, so Amazon physically took custody of, packaged, and shipped the scooter which injured the plaintiff. [p. 26]

The above paragraph illustrates a significant problem with this decision: the court seriously overestimates just what sort of "control" Amazon actually has over the products sold through its platform. In reality there is no practical way for Amazon to police all the listings for all the goods that all its users try to sell. The court confused Amazon's contractual reservation of the right to try to police the listings anyway, which is exactly the sort of policing that Section 230 tries to encourage, with the actual ability to police each and every listing, which is functionally impossible. Just as Amazon could not possibly police all of its user reviews, and Section 230 exists to relieve it of that impossible burden by shielding it from liability arising from those reviews, it could not possibly police all of its listings either, and so Section 230 should similarly insulate it from liability for this form of user expression too. Courts have been wrong to deny platforms this statutory protection, and especially so when the denial rests on the unfounded and erroneous assumption that all this policing is something a platform could actually do.

Meanwhile, the fact that these decisions each quibble over the definition of "seller" under each individual state's law, on their way to deciding whether transactional platforms like Amazon should be liable for problems with their users' content, is itself further evidence that this sort of judicial inquiry should have been barred by the statute entirely. One of Section 230's most important provisions is its preemption provision, which forbids any state or locality from mucking about with its local law in a way that interferes with the reliable protection Section 230 is supposed to provide any online service provider, whose services are inherently available across the nation. It's easy to understand that this provision means that states can't change their definition of "defamation" in order to make a platform liable for user content. But courts seem to be struggling to recognize that it should apply equally to any other state law that would make a platform liable for something wrong in its users' content (in this case the offer to sell a defective product). Allowing platforms' liability to hinge on the specific drafting of these state laws turns Section 230's protection into something inconsistent and provincial, instead of predictable and therefore useful, as Congress intended.

The New Jersey decision did not blaze new ground here, however; it ended up being fairly consistent with the Oberdorf decision that preceded it. But it is notable for its candid hostility toward, and, dare I say, ignorance about, Section 230. In particular, in a chilling footnote, it dismissed Professor Jeff Kosseff's well-researched book, "The Twenty-Six Words That Created the Internet," and instead cited one of the completely fictional diatribes recently published in the New York Times as one of its sources underpinning its erroneous belief in the limits of Section 230.

I am not oblivious to the context or the stakes here. It has been said that the “twenty-six words” of Section 230 of the CDA, enacted in 1996, made e-commerce itself economically feasible by permitting platforms such as Amazon.com to match sellers with buyers without taking on the seller’s liabilities. See, e.g., J. KOSSEFF, The Twenty-six Words that Created the Internet, Cornell University Press (2019). It would perhaps be more sober and accurate to say that the twenty-six words of Section 230 promoted or facilitated important aspects of the internet as we now know it. A recent New York Times article, to pick an example almost at random, is a useful backgrounder on Section 230’s evolution as a tool for promotion of e-commerce (whether sly or serendipitous depends on your point of view). https://www.nytimes.com/2019/08/06/technology/section-230-hate-speech.html The article notes that political leaders as ideologically diverse as House Speaker Nancy Pelosi (D-Cal) and Senator Ted Cruz (R-Tex) have publicly criticized Section 230 as a giveaway to the tech industry, and have raised the possibility of reform or abolition. [fn. 18]

The court does go on to say that it was only crediting the animus against Section 230 insofar as it applied to e-commerce.

These e-commerce issues are to be distinguished, however, from others that are driving the current debate, such as Section 230’s grant of immunity for speech-based harms such as hate speech or libel. Id.; see also Reno v. ACLU, 521 U.S. 844 (1997). [id.]

But this clarification is hardly reassuring. Not only does it ignore that commercial speech is inseparable from the other sorts of expression Section 230 reaches, but if the court was in any way relying on this ignorant media coverage, which almost universally misunderstands the purpose, value, and mechanics of the statute, then it is no wonder the court felt comfortable disregarding those things itself in gutting this critical statutory protection.

Fortunately, the one bit of tentative good news is that, unlike the Ninth Circuit, the Third Circuit has now granted rehearing of its Oberdorf decision. And, as a result, the district court in New Jersey has stayed the effect of its own decision, pending that reconsideration. Hopefully on further review the Third Circuit will be able to recognize how Section 230 is supposed to apply to even these transactional platforms, and the importance of not interfering with this operation.


Posted on Techdirt - 12 August 2019 @ 3:41pm

If You Lament The State Of Politics Today, Lament The Loss Of Aereo

from the lessons-for-Locast dept

Disclaimer: I did a teeny bit of legal work on a teeny part of Aereo's defense against the litigation onslaught seeking to obliterate it. But that's not why I think the Supreme Court's decision enabling that obliteration was terrible. On the contrary, it's why I wanted to work on the defense at all, because it was always apparent that trying to use copyright to crush Aereo was a terrible idea that would have terrible consequences. And time has, of course, borne this prediction out.

It never made sense why all these TV stations were suing Aereo in the first place. After all, isn't a larger viewership the thing TV stations always want? With a larger viewership they can charge more for ads and make more money. So a service that helps them get that larger viewership (and at no cost to themselves) seems like something they should actually be glad to have. In any case, it was certainly quite odd to see them resent something that helped connect them with bigger audiences beyond what their broadcast signal could manage.

And it made even less sense for a public television station like WNET to be part of any of these lawsuits. Commercial profit was never supposed to be its goal. Instead, pledge drive after pledge drive has always begged the public for the funds necessary to show its programming. Yet there it was, trying to eradicate a service that helped people actually watch that programming. Which necessarily prompts the question of why anyone should ever bother to give money to WNET again if it was so bound and determined to limit the number of people who could benefit from its programming.

Anyway, while the fight against Aereo made no sense, and the US Supreme Court decision killing it made even less, the result is that today we live in a world without it, where the reach and influence of local TV stations have effectively been damned to the geographical limits of their signal strength. And this pointless and artificial limitation has had a cost.

Because think about what has been happening in recent elections: results end up hyper-localized, with impenetrable divisions between red and blue states, urban and rural regions, large markets and small, etc. At least in the story of the country mouse and city mouse they both got to visit each other and learn what each other's lives were like. But thanks to the Supreme Court, now it is so much harder for Americans everywhere to learn about what life is like outside the areas where they live.

Aereo helped build connections between these places by overcoming the barriers imposed by distance. Instead of people only being able to see the broadcasts they could receive on their own antennas, it gave them a window into other communities by allowing them to essentially rent antennas in these other places and experience the broadcasts aimed for people there. Certainly if they'd rented an entire house in these other places there would have been no issue with them using its antenna to watch these broadcasts. So it hardly follows that it should be illegal if they simply saved the enormous expense of moving to that other place and instead only rented the antenna. (Which, despite the Supreme Court's technical misunderstanding about what Aereo did, is exactly what Aereo – and, for the past year or so, now Locast – actually did.)

Especially not when, as described above, it would have been good for those stations. And especially not when it also would have been good for the nation. It does us no good to remain little regional enclaves unable to find common ground with each other. Sharing in each other's broadcast media would go a long way toward bridging those geographically-enforced cultural gaps. Indeed, it would seem to vindicate the very goals of copyright, to promote the progress of arts and sciences, by ensuring that local insight could be efficiently exchanged among these regions. Instead, however, the Supreme Court, in its decision to contort copyright law to effectively ban Aereo, doubled down on the physical restrictions curtailing that exchange, adding artificial legal barriers that can only reinforce the effects of that distance upon the national electorate. And our democracy has been paying the price for this decision ever since.

Perhaps things can be different with Locast. While it is too new to have had as much impact on national political culture as a mature service would have had by now, since last year it has tried to thread the confusing needle the Supreme Court set out for these sorts of antenna-rental services. As the courts now stand to review the legal questions these services raise again, one can only hope that this time around they better understand the public interest in knowledge exchange that's at stake, which copyright law is supposed to advance, not smother.


Posted on Techdirt - 16 July 2019 @ 12:06pm

The Sixth Circuit Also Makes A Mess Of Section 230 And Good Internet Policy

from the no-good-deed-goes-unpunished dept

Yesterday we wrote about a bad Section 230 decision against Amazon from the Third Circuit. Shortly before it came out, the Sixth Circuit had issued its own decision determining that Section 230 could not protect Amazon from another products liability case – but not for the same reason.

First, the bad facts, which may be even worse here: the plaintiffs had bought a hoverboard via Amazon, and it burned their house down while two of their kids were inside. So they sued Amazon, as well as the vendor who had sold the product.

From a Section 230 perspective, this case isn't quite as bad as the Third Circuit Oberdorf decision. Significantly, unlike the Third Circuit, which found Amazon to be a "seller" under Pennsylvania law, here the Sixth Circuit did not find that Amazon qualified as a "seller" under the applicable Tennessee state law. [p. 12-13] This difference illustrates why the pre-emption provision of Section 230 is so important. Internet platforms offer their services across state lines, but state laws can vary significantly. If their Section 230 protection could end at each state border it would not be useful protection.

But although this case turned out differently than the Third Circuit case and the Ninth Circuit's decision in HomeAway v. City of Santa Monica, it channeled another unfortunate Ninth Circuit decision: Barnes v. Yahoo. In Barnes, Yahoo was protected by Section 230 from liability for a wrongful user post. After all, it was not the party that had created the wrongful content. And because it couldn't be held liable for the post, it also couldn't be forced to take it down. But Yahoo had offered to take the post down anyway. It was a gratuitous offer, one it didn't have to make. Per the Ninth Circuit, however, once Yahoo had made that offer, Section 230 provided no more protection from liability arising from how it fulfilled the promise.

Which may, on the surface, sound reasonable, except consider the result: now platforms don't offer to take posts down. It just doesn't pay to try to be so user-friendly, because if the platform can't get things exactly right on that front it can be sued, since, per the Ninth Circuit, Section 230 ceases to provide any protection. (And even if the platform might not ultimately face liability, it would still have to endure an expensive lawsuit to establish that.) So with this case the Ninth Circuit ended up chilling platform behavior we would have been better off encouraging. It may have won the battle for this plaintiff (whose lawsuit could proceed), but it lost the war for the rest of the public.

This case from the Sixth Circuit presents a similar problem. Amazon did not have to do anything with respect to hoverboard sales, but it created liability problems for itself when it tried to anyway. Eventually it banned them, but more at issue here is the email it sent to purchasers indicating that there had been reports of problems with them:

“There have been news reports of safety issues involving products like the one you purchased that contain rechargeable lithium-ion batteries. As a precaution, we want to share with you some additional information about lithium-ion batteries and safety tips for using products that contain them.” The email included a link for the “information and safety tips,” a link “to initiate a return,” and a request that the recipient “pass along this information” to the proper person if the hoverboard was purchased for someone else. [p. 5]

The plaintiffs argued that the email Amazon sent was not enough of a warning and that it should have been more clear about the fire hazard. [p. 6] The Sixth Circuit did not decide whether it was adequate or not. What it did decide, however, was that Section 230 was no obstacle to the litigation continuing to explore that question.

Tennessee tort law provides that an individual can assume a duty to act, and thereby become subject to the duty of acting reasonably.

[…]

In this case, Plaintiffs allege that Defendant gratuitously undertook to warn Plaintiff Megan Fox of the dangers posed by the hoverboard when it sent her the December 12, 2015 email, that Defendant was negligent in that undertaking, and that Defendant’s negligence caused them harm. The district court held that § 324A was inapplicable to Plaintiffs’ claims because it “contemplate[d] liability to third parties.” (RE 161, PageID # 2221–22.) And the district court also held that Plaintiffs forfeited any § 323 claim. The first holding was erroneous, and the second we need not address.

[…]

Plaintiffs argue that Defendant undertook to warn Plaintiff Megan Fox when it sent her the December 12, 2015 email, and that Defendant’s negligent warning caused physical harm to the other members of her family. Accordingly, while Defendant’s liability to Plaintiff Megan Fox is properly governed by § 323, Defendant’s liability to the other members of her family is properly governed by § 324A.7 See Grogan, 535 S.W.3d at 872–73. Thus, the district court’s holding that § 324A was inapplicable to Plaintiffs’ Tennessee tort law claim was erroneous.

Applying § 324A to the facts of this case, Defendant chose to send the December 12, 2015 email to Plaintiff Megan Fox, and in doing so plainly sought to warn her of the dangers posed by the hoverboard.

[…]

Thus, we hold that Defendant assumed a duty to warn Plaintiff Megan Fox of the dangers posed by the hoverboard when it sent her the December 12, 2015 email. [p. 13-16]

There is nothing striking about the decision's explanation of how tort law works. The problem is that all sorts of state tort law could reach the Internet, and strangle it, if that law could reach platforms. And here is a court saying it can, despite Section 230 generally saying that it can't.

In a way, though, this case is much less dire for the Internet than some of the other cases we've discussed, like Oberdorf, HomeAway, and the Court of Appeals ruling in Armslist. Platforms can still avoid liability. But they will avoid it by curtailing the sort of beneficial activity Section 230 normally wants to encourage. In letting these state law tort claims go forward, the decision reads as a big warning sign telling platforms not to bother trying to help their users in similar ways. Amazon did not have to send an email, but by trying to reach out to users anyway it invited trouble it could have avoided by doing nothing.

But if that fact doesn't pull at the heartstrings, remember that the precedent will apply to any other platform, no matter how small. The moral of this story is that it is much safer for all platforms to do nothing than to try to do something. If trying to be helpful to users causes platforms to pick up duties they otherwise would not have had, and to face liability for not fulfilling those duties well enough, they won't try to be helpful. They will be discouraged from trying, even though the public would be much better off if they were instead encouraged to continue these efforts. Curtailing Section 230 to allow state tort law to reach platforms means that instead of getting more of the user-friendly behavior Section 230 tried to encourage, we will now get less.
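To make the perverse incentive concrete, here is a minimal sketch in Python of the assumed-duty rule as described above (the names and structure are my own illustrative assumptions, not anything drawn from the opinion):

```python
# Illustrative sketch only: a loose model of the assumed-duty rule
# described above, not a statement of Tennessee law. All names here
# are my own invention.

from dataclasses import dataclass

@dataclass
class Platform:
    sent_warning: bool          # did the platform volunteer a safety email?
    warning_was_adequate: bool  # would a court deem the warning reasonable?

def negligence_exposure(p: Platform) -> bool:
    """Return True if the platform faces potential liability under an
    assumed-duty theory like Restatement sections 323 and 324A."""
    if not p.sent_warning:
        # No voluntary undertaking, so no assumed duty and no exposure.
        return False
    # Having undertaken to warn, the platform must have warned reasonably.
    return not p.warning_was_adequate

# The safest strategy under this rule is silence:
assert not negligence_exposure(Platform(False, False))  # did nothing: safe
assert negligence_exposure(Platform(True, False))       # tried to help: exposed
assert not negligence_exposure(Platform(True, True))    # helped flawlessly: safe
```

The only way to be as safe as doing nothing is to warn flawlessly, which, at the scale platforms operate, none can guarantee.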


Posted on Techdirt - 15 July 2019 @ 11:59am

The Third Circuit Joins The Ninth In Excluding E-Commerce Platforms From Section 230's Protection

from the selling-out-the-Internet dept

Remember when there was a terrible decision in the 5Pointz VARA case and I wrote 3000 words to explain just how terrible it was? Well, buckle up, because here's another awful decision, this time in the Section 230 realm. In fact, this one may be even worse, because it was a decision at the federal appellate level, and thus we are more likely to feel the impact of its terribleness. What follows is an explanation of how it so badly missed the mark.

Not long ago we warned that the Ninth Circuit's decision in HomeAway v. City of Santa Monica, if allowed to stand, threatened Internet commerce. This new decision from the Third Circuit in Oberdorf v. Amazon heightens that alarm. As with the Ninth Circuit's decision, it reflects undue focus on the commercial transaction the platform facilitated instead of on the underlying expression the transaction was connected to. Worse, it does so in a way that gives short shrift to the policy interests behind why Section 230 exists in the first place.

As is typical in cases with terrible Section 230 rulings, the underlying facts in this case are terrible too. One of the plaintiffs had bought a retractable dog leash via Amazon. The leash was defective, and when it broke it recoiled in a way that blinded her in one eye. She and her husband then sued Amazon over the injury. The district court dismissed their claims, partially for Section 230 reasons, and also because it could not find a way to deem Amazon a "seller" for purposes of the Pennsylvania consumer protection law the plaintiffs were trying to base their claim upon. But the Third Circuit, looking at the decision afresh, substantially rejected the district court's analysis and largely reversed its holding. It's this decision that joins the Ninth Circuit HomeAway decision in now seriously threatening Internet commerce.

It is worth noting that this was a 2-1 decision, with a majority opinion providing the controlling analysis and a dissent. Much of the majority decision involves pages and pages of discussion about what counts as a "seller" under that Pennsylvania law. While on the surface this discussion may at first seem tangential to our larger Section 230 concerns, in this case it ends up being fairly relevant. For one thing, it's part of the decision, and it shouldn't be. Section 230 includes a pre-emption provision because state and local laws are often messy and, worse, contradictory. An Internet platform's protection from liability should not be contingent on how any given state a platform's services may reach has opted to write its local law. So the mere fact that the decision starts out by reviewing how Pennsylvania's state law might affect the liability of an Internet platform like Amazon is the first sign that the decision is trouble.

Also, the "seller" analysis is itself revealing about how the court got the analysis denying Amazon Section 230 protection so very wrong. Not only does it read like a pre-ordained result – the court seems to really want Amazon to lose this case and stretches its reasoning to make sure this consumer protection law can reach them (in ways the dissent takes significant issue with) – but what's most telling is that the ways that the court decides that Amazon flunks the four-factor test it used to use to decide whether Amazon was a "seller" show why Section 230 should have applied and foreclosed this entire "are they a seller" analytical exercise in the first place.

Things start off poorly. The first factor is whether Amazon “may be the only member of the marketing chain available to the injured plaintiff for redress.” The majority complains:

[…]Amazon fails to account for the fact that under the Agreement, third-party vendors can communicate with the customer only through Amazon. This enables third-party vendors to conceal themselves from the customer, leaving customers injured by defective products with no direct recourse to the third-party vendor. [p. 14]

It is a legitimate policy problem that it can be challenging, if not sometimes impossible, to find the person who used the Internet to cause harm and then hold them responsible. But that difficulty doesn't mean that Section 230 is to blame, nor does it follow that Section 230 should be curtailed, which would only end up inviting all the other significant harms that Section 230 exists to prevent. Courts have been clear on this point for over twenty years: Section 230 applies even if the party behind the content at issue cannot be found.

Furthermore, even if there were some reason why that rule should be different here, the majority presented no meaningful justification for why, even though Section 230 would insulate a platform in cases where, say, a user might have said something defamatory, it would not similarly protect the platform if the user's expression instead had offered the sale of a defective good. In all these situations the problem with the expression originated with the user, not the platform, yet the majority treats these situations as if they were somehow different, when they are not. Section 230 should therefore still apply.

Meanwhile, the dissent points out that the majority's decision would effectively punish marketplace platforms for not having vetted all of their users. The majority unconvincingly dismisses this reality by declaring, without support, that if the Internet user cannot be found the platform must absorb the liability:

The first factor weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether. As a result, Amazon remains “the only member of the marketing chain available to the injured plaintiff for redress.” [p. 15].

In other words, like the Wisconsin Court of Appeals in Armslist had tried to do (before the Wisconsin Supreme Court corrected it), the Third Circuit is seeking to allow for the punishment of a platform for being a platform that people could use in bad ways. But it is because we knew that people would use the Internet in bad ways that Congress passed Section 230 in the first place. It's the very reason why we insulated platforms from liability. It is not a reason to now take away that protection.

Section 230 was also passed so that platforms would not be crippled with the burden of having to vet all their users or all the expression their users used their services to facilitate. The dissent is right that the majority decision creates that obligation, and thus threatens to chill future online commercial activity, not just by Amazon, but by anyone, including any smaller platforms and anyone who might want to compete with Amazon.

Worse, the majority holds it against Amazon that it had tried to police the commercial user expression appearing on its platform anyway, even though it didn't have to. It deems the volitional acts Amazon performed as evidence of sufficient "control" over the content appearing on its platform to justify holding Amazon liable for anything wrong with that user speech.

Although Amazon does not have direct influence over the design and manufacture of third-party products, Amazon exerts substantial control over third-party vendors. Third party vendors have signed on to Amazon’s Agreement, which grants Amazon “the right in [its] sole discretion to . . . suspend[], prohibit[], or remov[e] any [product] listing,” “withhold any payments” to third-party vendors, “impose transaction limits,” and “terminate or suspend . . . any Service [to a third-party-vendor] for any reason at any time.” Therefore, Amazon is fully capable, in its sole discretion, of removing unsafe products from its website. Imposing strict liability upon Amazon would be an incentive to do so. [p. 16]

First, given the sheer volume of content that Amazon intermediates, it is not at all clear that "Amazon is fully capable of removing unsafe products from its website." It requires a giant, unsupported logical leap to read the language in Amazon's vendor agreement, which reserves its rights solely with respect to its vendor-users, as any sort of declaration that Amazon has the practical ability to do the sort of moderation the majority declares it now must do.

The dissent recognizes this problem. As the majority cites in footnote 28:

The dissent contends that holding Amazon strictly liable for defective products will require them to “enter a fundamentally new business model” because “the company does not undertake to curate its selection of products, nor generally to police them for dangerousness.”

But then the majority dismisses this concern in the same footnote:

We do not believe that Pennsylvania law shields a company from strict liability simply because it adheres to a business model that fails to prioritize consumer safety. The dissent’s reasoning would give an incentive to companies to design business models, like that of Amazon, that do nothing to protect consumers from defective products.

Not only does this language return us to the discussion of why Section 230 includes a pre-emption provision to insulate its protection from the vagaries of state law, but it also fails to account for the market pressures that will demand marketplace platforms act in ways that best protect consumers. The majority seems to take the view that "but for" its ruling no one would be looking out for consumers, but it provides no basis to believe that this assumption is true.

Worse, the decision ends up making it that much harder, if not impossible, for platforms to look out for consumers the way the Third Circuit would want them to. By taking issue with all the things Amazon says it may do to police its platform in the language of its vendor agreement, the court has made it impossible for Amazon, or any other marketplace platform, to actually pursue any of them, since the attempt just risks liability. Yet these are exactly the sorts of moderation activities that Section 230 expressly protects in order to encourage platforms to undertake them. Section 230 sets up a situation where platforms can feel able to take what steps they can to police the user content they facilitate because it removes the risk of liability if they do. But here is the Third Circuit chilling that moderation activity by instead using it as a basis for imposing liability.

The reasoning for the third and fourth factors isn't much better. The third factor involves considering whether Amazon is "in a better position than the consumer to prevent the circulation of defective products." [p. 17]. In an earlier Pennsylvania case an auction house had been found not to be liable as a "seller" for a defective product it sold, for several reasons, including that it didn't have sufficient ability to prevent the distribution of defective products. With little more than supposition, however, the majority decided that Amazon, somehow, did.

Moreover, Amazon is uniquely positioned to receive reports of defective products, which in turn can lead to such products being removed from circulation. Amazon’s website, which Amazon in its sole discretion has the right to manage, serves as the public-facing forum for products listed by third party vendors. In its contract with third-party vendors, Amazon already retains the ability to collect customer feedback: “We may use mechanisms that rate, or allow shoppers to rate, Your Products and your performance as a seller and Amazon may make these ratings and feedback publicly available.” Third-party vendors, on the other hand, are ill-equipped to fulfill this function, because Amazon specifically curtails the channels that third-party vendors may use to communicate with customers: “[Y]ou may only use tools and methods that we designate to communicate with Amazon site users regarding Your Transactions . . ..” [p. 18]

The majority never explains why third-party vendors can't also read the public reviews, or why the Amazon-provided communications mechanisms might not be adequate. And the dissent questions the entire conclusion that Amazon is somehow better situated to take defective items out of circulation.

The dissent contends that Amazon is no better-positioned than the consumer to encourage the safety of products sold in the Amazon Marketplace. However, the dissent openly acknowledges at least one aspect of Amazon’s relationship with third-party sellers that demonstrates Amazon’s powerful position relative to the consumer: Amazon “reserves the right to eject sellers.” Imposing strict liability on Amazon will ensure that the company uses this relative position of power to eject sellers who have been determined to be selling defective goods. [fn 35]

It is hardly revelatory that platforms have the power to terminate users; pretty much all platforms can. But per the majority's reasoning, none of them should ever be able to avail themselves of Section 230 protection if they do. Using the threat of liability to mandate censorship of any kind – even censorship that might be valid or beneficial – is also what Section 230 was intended to prevent, since the censorship that results from liability pressures so often isn't valid or beneficial. Yet here is the majority doing just that, using the threat of liability to cause vendor expression to be removed, even though it will inevitably cause the same unwarranted censorship, for fear of liability, that Section 230 was designed to forestall for all types of user expression.

Then the fourth factor considers who can best pay to redress the harm. The majority effectively finds that Amazon has the deepest and most locatable pockets, and so decides that it should pay. If this is to be the rule, note that other large platforms have deep pockets as well, but smaller ones generally do not, and eroding Section 230 protection for the big players erodes it for the small ones too. And even large pockets are not infinitely deep. The majority dismisses this concern by reasoning that Amazon can simply raise its prices:

Moreover, Amazon can adjust the commission-based fees that it charges to third-party vendors based on the risk that the third-party vendor presents. [p. 20]

Of course, those price increases will ultimately be passed on to consumers. Also, if the solution to platform liability is that platforms should just charge more for their services, it bodes poorly for all the free services Internet users have benefited from to date, and it threatens to lock out those users who won't be able to afford to continue.
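Stepping back, here is a rough schematic of how the majority's four-factor "seller" inquiry comes out for Amazon, using my own paraphrases of the factors as the discussion above presents them (the opinion's own formulations differ):

```python
# Rough schematic only: my paraphrase of the four-factor "seller"
# inquiry as discussed above, not the opinion's own language.

amazon_factors = {
    "only member of marketing chain available for redress": True,  # vendors can conceal themselves
    "strict liability would incentivize safety": True,             # Amazon "controls" listings
    "better positioned than consumer to prevent defects": True,    # receives reports, manages site
    "best able to absorb or spread the cost of injuries": True,    # deep pockets, adjustable fees
}

def is_seller(factors: dict) -> bool:
    # The majority weighs the factors rather than mechanically counting
    # them, but here it resolved every one against Amazon.
    return all(factors.values())

print(is_seller(amazon_factors))  # True per the majority; the dissent disagrees throughout
```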

Ultimately, if there is a bright spot in this decision it is that the majority still found that Section 230 knocked out a few of the plaintiffs' claims, claims where the court was able to identify how they involved expressive activity by the platform.

to the extent that Oberdorf is alleging that Amazon failed to provide or to edit adequate warnings regarding the use of the dog collar, we conclude that that activity falls within the publisher’s editorial function. That is, Amazon failed to add necessary information to content of the website. For that reason, these failure to warn claims are barred by the CDA. [p. 32-33]

But in failing to recognize the expressive activity involved with "Amazon's role as an actor in the sales process," the Third Circuit has doubled down on the false dichotomy the Ninth Circuit's earlier HomeAway decision created by deeming the brokering of the financial transaction connected with the facilitation of expression as something somehow separate from that expression. This bifurcation threatens to put all commercial activity beyond the reach of Section 230 and to create an exception to its protection that simply is not in the statute – and for good reason.


Posted on Techdirt - 21 June 2019 @ 6:34am

Explainer: How Letting Platforms Decide What Content To Facilitate Is What Makes Section 230 Work

from the Congress-got-this-right dept

There seems to be some recurrent confusion about Section 230: how can it let a website be immune from liability for its users' content, and yet still get to affect whether and how that content is delivered? Isn't that inconsistent?

The answer is no: platforms don't lose Section 230 protection if they aren't neutral with respect to the content they carry. There are a few reasons, one being constitutional. The First Amendment protects editorial discretion, even for companies.

But another big reason is statutory, which is what this post is about. Platforms have the discretion to choose what content to enable, because making those moderating choices is one of the things that Section 230 explicitly gives them protection to do.

The key here is that Section 230 in fact provides two interrelated forms of protection for Internet platforms as part of one comprehensive policy approach to online content. It does this because Congress actually had two problems that it was trying to solve when it passed it. One was that Congress was worried about there being too much harmful content online. We see this evidenced in the fact that Section 230 was ultimately passed as part of the "Communications Decency Act," a larger bill aimed at minimizing undesirable material online.

Meanwhile Congress was also worried about losing beneficial online content. This latter concern was particularly acute in the wake of the Stratton Oakmont v. Prodigy case, where an online platform was held liable for its user's content. If platforms could be held liable for the user content they facilitated, then they would be unlikely to facilitate it, which would lead to a reduction in beneficial online activity and expression, which, as we can see from the first two subsections of Section 230 itself, was something Congress wanted to encourage.

To address these twin concerns, Congress passed Section 230 with two complementary objectives: encourage the most good content, and the least bad. Section 230 was purposefully designed to achieve both these ends by providing online platforms with what are ultimately two complementary forms of protection.

The first is the one that people are most familiar with, the one that keeps platforms from being held liable for how users use their systems and services. It's at 47 U.S.C. Section 230(c)(1).

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

It's important to remember that all this provision does is say that the platform cannot be held liable for what users do online; it in no way prohibits users themselves from being held liable. It just means that platforms won't have to be afraid of their users' online activity and thus feel pressured to overly restrict it.

Meanwhile, there's also another lesser-known form of protection built into Section 230, at 47 U.S.C. Section 230(c)(2). What this protection does is also make it safe for platforms to moderate their services if they choose to. Because it means they can choose to.

No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Some courts have even read subsection (c)(1) to cover these moderation decisions as well. But ultimately, the wisdom of Section 230 is that it recognizes that to get the best results – the most good content and also the least bad – it needs to ensure platforms can feel safe to do what they can to advance both of these goals. If they had to fear liability for how they chose to be platforms, they would be much less effective partners in achieving either. For instance, if a platform had to fear legal consequences for removing user content, it simply wouldn't remove any. (We know this from FOSTA, which, by severely weakening Section 230, has created disincentives for platforms to try to police user content.) And if platforms had to fear liability for enabling user activity on their systems, they wouldn't do that either. They would instead end up engaging in undue censorship, or cease to exist at all. (We also know this is true from FOSTA, which, by weakening Section 230, has driven platforms to censor wide swaths of content, or even cease to provide platform services to lawful expression.)
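To see how the two protections complement each other, here is a minimal sketch, in Python, modeling them as two independent shields (the function names and parameters are my own gloss on the statutory text quoted above, not legal advice):

```python
# Loose sketch of Section 230's two shields as described above.

def c1_shields(platform: str, content_author: str) -> bool:
    """230(c)(1): no liability for hosting information provided by
    another information content provider."""
    return content_author != platform

def c2_shields(restriction_taken_in_good_faith: bool) -> bool:
    """230(c)(2): no liability for good-faith restriction of material
    the platform considers objectionable."""
    return restriction_taken_in_good_faith

# A platform deciding what to do with a user post is protected either
# way: (c)(1) covers leaving it up, (c)(2) covers taking it down in
# good faith. Neither choice bets the company.
assert c1_shields("platform.example", "some_user")
assert c2_shields(restriction_taken_in_good_faith=True)
```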

But if Section 230 protected platforms from only one of these potential forms of liability, it would not only be far less effective at achieving Congress's overall goal of getting both the most good and the least bad content online; it would be less effective even at achieving just one of those outcomes than the balanced approach Congress chose. The problem is that whenever platforms find themselves needing to act defensively, out of fear of liability, that fear tends to undermine their ability to deliver the best results on either front. It forces platforms to divert their resources away from the things they could be doing to best ensure they facilitate the most good, and least bad, content, and to spend them instead on only what will protect them from whatever threat of legal liability is commanding their outsized attention.

As an example, see what happens under the DMCA, where Section 230 is inapplicable and liability protection for platforms is highly conditional. Platforms are so fearful of copyright liability that this fear regularly causes them to delete lawful, and often even beneficial, content, despite such a result being inconsistent with Congress's legislative intent, or to waste resources weeding out bad takedown demands. It's at least fortunate that the DMCA expressly does not demand that platforms actively police their users' content for infringement, because if they had to spend their resources policing content in this way, it would come at the expense of policing it in ways that would be more valuable to the user community and the public at large. Section 230 works because it ensures that platforms can be free to devote their resources to being the best platforms they can be – enabling the most good and disabling the most bad content – instead of having to spend them on activities focused only on what protects them from liability.

To say, then, that a platform that moderates user content must lose its Section 230 protection is simply wrong, because Congress specifically wanted platforms to do this. Furthermore, even if you think that platforms, with all this protection, still don't do a good enough job meeting Congress's objectives, it would still be a mistake to strip them of what protection they have, since removing it will not help any platform, current or future, ever do any better.

What tends to confuse people is the idea that curating the user content appearing on a platform turns that content into something the platform should now be liable for. When people throw around the imaginary "publisher/platform" distinction as a basis for losing Section 230 protection, they are getting at this same idea: that by exercising editorial discretion over the content appearing on their sites, platforms somehow make that content their own and should now be liable for it.

But that's not how the law works, nor how it could work. And Congress knew that. At minimum, platforms simply facilitate way too much content to be held accountable for any of it. Even when they do moderate content, it is still often at a scale beyond which it could ever be fair or reasonable to hold them accountable for whatever still remains online.

Section 230 never required platform neutrality as a condition for a platform getting to benefit from its protection. Instead, the question of whether a platform can benefit from its protection against liability in user content has always been contingent on who created that content. So long as the "information content provider" (whoever created the content) is not the "interactive computer service provider" (the platform), then Section 230 applies. Curating, moderating, and even editing that user content to some degree doesn't change this basic equation. Under Section 230 it is always appropriate to seek to hold responsible whomever created the objectionable content. But it is never ok to hold liable the platform they used to create it, which did not.
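Put as a decision rule, that basic equation might be sketched like this (illustrative only; the parameter names are mine):

```python
# Illustrative decision rule for the "basic equation" described above.

def section_230_applies(information_content_provider: str,
                        interactive_computer_service: str,
                        platform_moderated_the_content: bool = True) -> bool:
    # Protection turns entirely on who created the content. Whether the
    # platform curated, moderated, or even edited it to some degree does
    # not flip the answer, which is why that flag is deliberately unused.
    return information_content_provider != interactive_computer_service
```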


Posted on Techdirt - 31 May 2019 @ 12:06pm

Federal Court Issues A Very Good Very Bad Decision Where Copyright And Free Speech Meet

from the good-news-bad-news dept

It’s hard to know exactly what to say about this decision involving a subpoena seeking to unmask a Reddit user. There are some notably good things about it, and still plenty bad. The bad: that a subpoena seeking to unmask a critic was upheld. The worse: that their First Amendment right to anonymous speech ultimately took a backseat to a copyright claim. On the other hand, there is some good in the decision, too, particularly in the discussion considering the First Amendment implications of upholding the subpoena, which may be helpful for future anonymous speakers. Also, while the subpoena was upheld, it was upheld with conditions that will somewhat minimize, but certainly not eliminate, the chilling effect of its enforcement.

In this case a user known as "Darkspilver" had criticized the Jehovah’s Witnesses organization on Reddit. He chose to do it on Reddit in significant part because Reddit allowed him to post his criticisms anonymously. [p. 2] In his critical posts he included two items that the Jehovah’s Witnesses organization claims violate its copyrights: an ad the Jehovah’s Witnesses had run to solicit donations, and a chart he made from data found in one of the organization’s Excel files. The organization then propounded a subpoena to find out the identity of the Reddit user it alleged had infringed its copyrights in posting these things.

We’ve written many times before about the concerns raised when discovery demands can cause online speakers to lose the anonymity to which the First Amendment entitles them. These discovery demands can come in many forms – state civil subpoenas, federal grand jury subpoenas, NSLs, etc. – and while the procedural rules governing how each one is balanced against the speaker’s First Amendment right to anonymous speech can vary, that First Amendment right does not. All of these instruments should be equally, and adequately, protective of this constitutional interest. But in practice the protection they afford is not equal. An online speaker whose anonymity might end up protected in the face of certain types of discovery demands might find it trumped by others.

In this case the discovery demand came in the form of a Section 512(h) subpoena – the special species of subpoena that the DMCA invented for copyright holders to use to identify users of online platforms whom they allege had infringed their copyrights through their use of those platforms, and without first having filed an infringement lawsuit. This case addressed how courts should decide whether to uphold these subpoenas in the face of the First Amendment interest in protecting the identity of the speaker.

Which brings us to the good parts of the decision, where it recognized that there was a significant First Amendment interest in protecting anonymous speech. [p. 7-9] Perhaps most importantly, it recognized that the First Amendment protects anonymous speech even when the speaker is outside of the United States.

Based on the involvement of the United States Court’s procedures by and against United States companies and the audience of United States residents, as well as the broad nature of the First Amendment’s protections, the Court finds that the First Amendment is applicable here. [p. 7]

This judicial recognition is important, and this case may represent one of the first occasions when a court has articulated it so specifically. It would be a problem if the First Amendment protection for anonymous speakers could end at the border. For one thing – which the court did not discuss – for the protection to be meaningful for American speakers, it needs to be available to all speakers everywhere (at least when it comes to anonymous speech on US-based platforms). It would effectively eviscerate the right to speak anonymously if you first had to unmask the speaker to find out whether they had the right to resist the unmasking.

The other reason it would be a problem if non-Americans could not count on this protection – which the court did discuss – is that the First Amendment protects the right of the public to read as much as it protects the right to speak. ("[T]he First Amendment protects the audience as well as the speaker." [p. 6]). In other words, when we talk about the effects of discovery instruments on anonymous speech, the inquiry needs to focus not just on how they affect the rights of those whose anonymous speech is at risk, but also on the rights of all the Americans whose ability to consume that speech will be undermined when speakers and their speech are chilled.

But simply saying that the First Amendment applies didn't, and wouldn't, resolve the matter. There are still competing concerns: the anonymous speaker's First Amendment interest, and the interest of the party propounding the subpoena, who may have a legitimate need to learn the identity of whomever they allege had wronged them. So to figure out whether a Section 512(h) subpoena should be upheld in the wake of these competing interests, the court borrowed a test that had been used before in the Highfields Capital Management and Art of Living Foundation cases. If the subpoena is to be upheld,

(1) The [subpoenaing party] must produce competent evidence supporting a finding of each fact that is essential to a given cause of action; and (2) if the [subpoenaing party] makes a sufficient evidentiary showing, the court must compare the magnitude of the harms that would be caused to the competing interests by a ruling in favor of the [subpoenaing party] and by a ruling in favor of the [anonymous speaker]. [p. 9]
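Treated as a decision procedure, the test might be sketched like this (my own gloss on the two steps quoted above, with invented parameter names):

```python
# My gloss on the two-step Highfields-style test quoted above.

def unmasking_subpoena_upheld(
        competent_evidence_for_every_essential_fact: bool,
        harm_to_claimant_if_denied: float,
        harm_to_speaker_if_granted: float) -> bool:
    # Step 1: the subpoenaing party must make a real evidentiary showing
    # on each element of its claim (here, copyright infringement).
    if not competent_evidence_for_every_essential_fact:
        return False
    # Step 2: only then are the magnitudes of the competing harms weighed.
    return harm_to_claimant_if_denied > harm_to_speaker_if_granted
```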

Even though a 512(h) subpoena doesn't require a lawsuit to have first been filed, it still must be predicated on a copyright infringement claim. So in tackling the first part of the test the court considered whether that claim would be valid here, which in this case turned out to be a split decision. The Jehovah's Witnesses organization was actually asserting two claims of infringement, one for the advertisement and one for the chart, and the court rejected the one for the chart.

It is not clear that the chart meets the minimum standards of originality required for copyright protection. Watch Tower has not yet registered the chart. Therefore, the chart is not entitled to a presumption of copyright validity and Watch Tower must submit evidence to make the requisite showing. … Here, Watch Tower summarily argues that “the layout, design, and word choice [of the chart] are all creative in nature,” with no supporting evidence. In the absence of any supporting evidence, the Court finds that Watch Tower has not met its burden to show, with competent evidence, its ownership of a valid copyright in the chart. Therefore, Watch Tower fails to demonstrate a prima facie case of copyright infringement with respect to the chart. [p. 11].

But even for the advertisement, which had a more plausible copyright claim [p. 10-11], the infringement inquiry did not end there, because the speaker’s use of it could have been a fair use and thus still not constitute copyright infringement. And if there were no valid infringement claim, then there also should be no basis for upholding a subpoena seeking to learn the identity of a potential defendant to sue.

Which brought the court to the second part of the test, the "balancing of the harms," where the court analyzed whether the use of the ad was in fact a fair use. This is an important question for courts to ask in order to ensure that anonymous speech remains adequately protected because, as the court acknowledged, fair use is how the First Amendment gets baked into copyright law. [p. 13] It would do no good if the First Amendment protected public discourse that copyright law could then forbid. Public discourse often requires using copyrighted works in order for that discourse to be valuable, especially in cases like these where the critical discourse was about the subject of the work and the copyright holder itself. Accordingly, the court found that Darkspilver's use of the ad was likely fair.

In balancing the harms, while considering the fair use defense, the Court finds that they tip sharply in Darkspilver’s favor. [p. 17]

But even after all that discussion, the court still decided to uphold the subpoena, and that's why this decision is concerning. The issue is that the court seems to have minimized the range and degree of harms that could befall this and future speakers if the subpoena was upheld. Which is not to say that it presumed there to be none:

[T]he Court notes that Darkspilver’s concerns stem largely out of his fear that those in his congregation will discover his identity and shun him. [p. 17]

To address this concern, the court imposed conditions on the disclosure of his identity, ordering it to be released with an "attorney's eyes only" restriction. This restriction means that the plaintiff's lawyer would still have enough information to file a lawsuit against the user, but no one else would be the wiser. Which would mean that no one else, including the plaintiff, could know enough about this critic to impose any other retribution.

Watch Tower’s attorneys of record may not disclose Darkspilver’s identity even to its client, staff, or expert witnesses without approval in a Court Order from this Court. [p. 17]

That's great, but that's not the only thing the organization could do to Darkspilver. It still could sue him, although it would have to do it in a way that protected his identity.

If Watch Tower elects to file a lawsuit against Darkspilver, the Court directs Watch Tower to seek to file the suit under his pseudonym and to keep his actual identity under seal, for attorney’s eyes only. [p. 18].

In one sense, this is a good outcome, and not an unprecedented one. In Signature Management Team v. Doe a copyright case was litigated without the defendant ever being publicly named. Because people unhappy with critical speech often want to know the name of the speaker in order to make them regret speaking up, not actually to sue them, the court's baby-splitting here makes some sense. This protective order should preclude that sort of extra-judicial retribution against the speaker.

But it pointedly doesn't preclude judicial retribution, and that's a problem. Per this decision, the speaker could still be sued. Granted, in the face of the court's copyright analysis it would be a pretty weak case. But even weak cases can be devastating in terms of the time and money defendants must spend to litigate them. A well-funded plaintiff could easily choose to tie a critic up in the courts to make them regret having spoken out. The plaintiff might ultimately lose, but not before extracting an enormous and chilling cost from the speaker.

The reason this decision is so hard to cheer is that, despite all the reasons the court recognized that the critical speech was likely legal, by allowing this subpoena to go forward, even when the case was so weak, the court has given the green light to this sort of abuse of the courts. All of the court's lofty language about how the First Amendment works to protect critical speech on matters of public interest effectively becomes meaningless if anyone who doesn't like that critical speech can still impose a toll on the speaker, so long as they can frame their dislike in terms of a copyright claim.

In this case the speaker in question has already been chilled and stopped posting on Reddit. [p. 12] And he won't be the last speaker cowed into silence if the price of speaking is litigating to defend it.


Posted on Techdirt - 7 May 2019 @ 10:44am

The Ninth Circuit Broke The Internet. So We Asked Them To Unbreak It.

from the please-fix-this dept

It is possible that if the Ninth Circuit panel truly realized how badly it messed up Section 230 it might have thought twice about it. So we've asked the court to give it a second thought. As did Airbnb and Homeaway, who were most immediately affected by the Ninth Circuit's recent decision in their challenge to the Santa Monica ordinance, which, like the San Francisco ordinance and the ordinances increasingly sprouting up around the country, seeks to make them liable for their users' expression.

The problem: that’s exactly what Section 230 is supposed to prevent – holding a platform liable for user generated content that is wrongful in some way. If Santa Monica, San Francisco, and all those other cities want to make it illegal for people to list homes to rent, that’s fine. It may or may not be good local policy, but it won’t break the Internet. What breaks the Internet is when the law doesn’t just make people legally responsible for their own expression but makes the platform they used to express it liable for it too. Section 230 is supposed to prevent that, because if platforms can be held liable for all the myriad things that can be wrong with all the enormous amounts of user expression they intermediate, then they won’t be able to be platforms anymore. It will simply be too expensive to mitigate and manage this risk, at least not in a way that doesn’t result in enormous amounts of censorship of user content that isn’t even legally wrongful at all.

So Airbnb and Homeaway filed a petition for rehearing and rehearing en banc to ask the Ninth Circuit to review their case again, and last week the Copia Institute, along with the R Street Institute, filed an amicus brief in support of their petition. In our brief we reminded the court of what we have discussed here. First, that threatening platforms with liability forces them to monitor all their user expression, which may or may not even be possible, and at the expense of any monitoring that might be more effective. For instance, in this case, all these cities are asking Airbnb and Homeaway to ensure that every listing they allow to be rented complies with the registration requirement, but it might be better if they could instead focus their resources on building a more usable and secure platform, helping to eliminate fraud, or working to satisfy any other priority that would benefit the public more. Threatening platforms with liability for user content inevitably co-opts platforms' resources, diverting them away from the sort of beneficial monitoring Congress tried to incentivize them to do with Section 230 and into monitoring that is solely self-protective.

Secondly, it may not even be possible for platforms to do enough monitoring to protect themselves. Although the Ninth Circuit's decision spoke to the Santa Monica ordinance, there is nothing about the decision that is limited to this specific ordinance in this specific city. A core problem with the decision is the degree to which the court minimized how difficult it will be for Airbnb and Homeaway even just to monitor their user listings to see if they comply with even just this registration requirement in even just this city. But other cities now have ordinances too, thus vastly expanding the task. There is also nothing in the decision that limits what an ordinance can demand for compliance – today it may be registration, but tomorrow it might be habitability concerns, which are even more infeasible for platforms to police, or any other arbitrary policy demand. And there is nothing limiting this tearing open of Section 230's pre-emption provision, which prevents local liability from being imposed on platforms for user content, to just this sort of local regulation of short-term rental platforms. It opens the door to absolutely everything every jurisdiction everywhere can dream up to hold against platforms. There is no way for platforms to successfully monitor every regulatory demand every jurisdiction can make on user expression, so they will either give up and shut down completely or adjust their practices to comply with the most restrictive jurisdiction's demands and ultimately end up censoring an awful lot of perfectly lawful content – or both. Section 230 was supposed to prevent platforms from finding themselves in this impossible position, and our brief reminded the Ninth Circuit of this fact.

Also, as we previously pointed out, the fundamental error of the decision is that it split out the facilitating of the hosting of user expression from the facilitating of a transaction related to that user expression. If this were a legitimate distinction, it would make it impossible to ever monetize one's platform services, because every revenue transaction would always be connected to user content that could be wrongful. It does platforms little good to be insulated from liability for hosting content if they are not also insulated in a way that lets them afford to host it. A decision like this one directly threatens the commercial viability of the Internet, which is definitely not what Congress wanted to have happen when it passed Section 230 expressly to protect that economic vitality.


Posted on Techdirt - 2 May 2019 @ 3:39am

Twenty-one States Inadvertently Tell The DC Circuit That The Plaintiffs Challenging FOSTA Have A Case

from the with-amici-like-these dept

The constitutional challenge to FOSTA chugs on. A few weeks ago the DOJ filed its opposition brief to defend FOSTA, and then last week several amicus briefs were filed intending to support the government's side. But the one filed by twenty-one state attorneys general seems to have done anything but.

The important thing to remember about this appeal is that the question before the appeals court isn't really about the constitutionality of FOSTA itself. What's being appealed is the dismissal of the case for the plaintiffs' lack of standing. The district court never directly ruled on the constitutionality of the law; it only ruled that these plaintiffs had no right to complain about it to the courts. According to the district court these plaintiffs weren't being hurt, or likely to be hurt, by FOSTA, and so it dismissed their case. What the parties are fighting about now is whether this assessment by the district court was right.

For the plaintiffs it makes sense to keep pressing the constitutional issue because shining a light on the unconstitutionality of the law illuminates the injury the unconstitutionality has already caused and will continue to cause. But the defense has a different and much simpler job. All the DOJ has to do to defend FOSTA is say, "The district court was right. These people were not hurt by FOSTA and will not be hurt by FOSTA, so keep this case dismissed." If the appeals court agrees that there has been no injury, and that there is unlikely to be any injury, then the case remains dismissed and this constitutional challenge goes away.

And so that's what the DOJ's brief basically does: parrot the district court's decision that there is nothing to see here. The DOJ spent its pages arguing that there has been no injury, nor is there the likelihood of any injury, because FOSTA could not possibly empower prosecutors to reach the plaintiffs.

As the district court correctly concluded, plaintiffs’ conduct is not “proscribed by [the] statute,” and plaintiffs face no “credible threat of prosecution thereunder.” [DOJ brief p. 10]

It's an unfortunate position for the government to take, but it's not an irrational litigation strategy. The only thing the DOJ needs to do here is assure the court that the plaintiffs have nothing to worry about.

But that's exactly what the amicus brief by the twenty-one state attorneys general does not do. Although it is intended to support the DOJ's defense of the statute, rather than supporting the DOJ's argument that the plaintiffs' complaints are much ado about nothing, the brief instead stands as a bright flashing neon sign warning the court that there is plenty of reason for them to be worried. Because, in contrast to the DOJ's arguments about what FOSTA does not do, this brief reads as a paean to everything FOSTA is going to let the states do, including to people just like the plaintiffs.

First, it reminds the court just how much FOSTA empowers states like theirs.

FOSTA makes explicit that: (1) federal law no longer can be said to provide legal protection for websites that unlawfully facilitate sex trafficking; and (2) States may now pursue state-law prosecutions based on conduct that would also violate FOSTA. 47 U.S.C. § 230(e)(5). [I]f a State criminalizes the same conduct FOSTA criminalizes, the State need not wait for the Department of Justice to prosecute traffickers operating in the State; the State’s prosecutors may do so themselves. FOSTA also authorizes a state attorney general, on behalf of the residents of his or her State, to initiate civil actions against those who violate 18 U.S.C. § 1591 (“Sex trafficking of children or by force, fraud, or coercion”) if there is “reason to believe” that an interest of the State’s residents has been or is threatened or adversely affected by the violators. [state AG brief p. 9-10]

It also tells the court just how keen they are to be so empowered. Although the brief is only 10 pages, more than five of them are devoted to a gushing inventory of all these states' policy agendas against sex trafficking. [state AG brief p. 3-9]

It further implies that the only reason there have not been more prosecutions predicated on FOSTA to date is that the states first need to pass some laws to enable these prosecutions, and that takes time.

Bills to accomplish this are currently pending before the Texas Legislature. Tex. H.B. 15, 86th Leg., R.S. (2019) and Tex. S.B. 20, 86th Leg., R.S. (2019). [state AG brief fn. 3]

In other words, this brief undermines all the arguments that the unconstitutional effects of FOSTA are hypothetical by essentially pointing out to the court that they just haven't accrued yet. FOSTA empowers states to act, they are keen to act, and they just need a little more time before they can act.

Which perhaps wouldn't be so much of a problem if states were carefully focused on actual instances of sex trafficking. But the amici themselves are proof that such restraint is unlikely.

In particular, note that one of the states on the brief is Florida. Now think back to just a few months ago when Florida prosecutors stole attention away from the Super Bowl with their announcement that they had broken up a "sex trafficking" operation in Palm Beach… which then turned out not to be a sex-trafficking operation after all.

It was sex work prosecutors had discovered, sure, but not sex trafficking. At least three of the plaintiffs challenging FOSTA are advocates for sex workers, in no small part because they believe that advocating for sex workers helps keep them safe and out of the clutches of sex traffickers. They sued because they are worried about how the vague language of FOSTA can be used against their advocacy.

A big part of the DOJ's argument is that no one could possibly confuse the plaintiffs' speech about sex work with speech relating to sex trafficking and end up using FOSTA to target them.

[Plaintiffs'] activity is wholly outside of FOSTA’s ambit. It is not proscribed by § 2421A, which prohibits owning, managing, or operating an interactive computer service with the intent to promote or facilitate specific instances of illegal prostitution. Nor is it prohibited by § 1591, the pre-existing federal criminal prohibition on sex trafficking. And because FOSTA amended Section 230 immunity only to permit civil claims under § 1595 “if the conduct underlying the claim constitutes a violation of section 1591,” and State criminal prosecutions “if the conduct underlying the charge would constitute a violation of section 1591” or § 2421A, see 47 U.S.C. § 230(e)(5), plaintiffs do not face a reasonable fear of prosecution as a result of those amendments, either. [DOJ brief p. 15-16]

Yet here before the court is an amicus who not long ago got sex work and sex trafficking very badly mixed up. And here it is, announcing to the court how excited it is that FOSTA has given states the power to get the two mixed up in ways that will affect even more people, including those situated exactly like the plaintiffs.

The DOJ wants the court to believe that any injury the plaintiffs complain about is entirely speculative.

To the extent plaintiffs are concerned that a State or private litigant might attempt to bring a lawsuit against them in the future notwithstanding the text of FOSTA, that concern cannot provide plaintiffs with standing to sue the federal government here. […T]hat fear is entirely conjectural, and “require[s] guesswork as to how independent decisionmakers will exercise their judgment.” [DOJ brief p. 29-30]

But thanks to amici, we know exactly how these independent decision makers will exercise their judgment: badly. Thus, despite the DOJ's best efforts to convince the court of the plaintiffs' lack of standing, the state AGs' amicus brief has done the exact opposite.

In a way the amicus brief is just like FOSTA itself: not understanding the job that needed to be done but rushing in with legal guns blazing anyway. And just like FOSTA, this ill-tailored legal response has caused all sorts of collateral damage that only makes the problem it set out to solve worse.


Posted on Techdirt - 1 May 2019 @ 12:01pm

The Wisconsin Supreme Court Gets Section 230 Right

from the careful,-clear,-cogent-analysis dept

We've written a few times about an unfortunate case out of Wisconsin. Someone used the Armslist platform to find a gun to buy and then killed people with it. This led to a lawsuit against Armslist seeking to hold it liable for this terrible crime, which then led to a ruling by the Wisconsin Court of Appeals that ignored two decades of Section 230 precedent to allow the lawsuit to go forward. Last year the Copia Institute filed an amicus brief urging the Wisconsin Supreme Court to review the Court of Appeals decision, and, after it granted that review, this year we filed another brief urging it to reverse the decision. This week it did.

The court of appeals held that 47 U.S.C. § 230 (2018), the federal Communications Decency Act of 1996, did not bar Daniel's claims against Armslist for facilitating Radcliffe's illegal purchase. We disagree, and conclude that § 230(c)(1) requires us to dismiss Daniel's complaint against Armslist. Section 230(c)(1) prohibits claims that treat Armslist, an interactive computer service provider, as the publisher or speaker of information posted by a third party on its website. Because all of Daniel's claims for relief require Armslist to be treated as the publisher or speaker of information posted by third parties on armslist.com, her claims are barred by § 230(c)(1). Accordingly, we reverse the decision of the court of appeals, and affirm the circuit court's dismissal of Daniel's complaint. [p. 2-3]

The decision was lengthy, and referenced a litany of cases interpreting Section 230, nearly all of which the Court of Appeals had earlier discounted. Section 230 cases like this one are often tough cases. Terrible things have happened, and there can be a tremendous, and completely understandable, temptation for courts to find some way to provide a remedy – even if it means trying to hold an Internet platform liable, and even if Section 230 should prevent them from doing so.

But as we pointed out in our briefs, there is always more at stake than just the case at hand. Whittling away at Section 230's important protection because one plaintiff may be worthy leaves all the other worthy online speech we value vulnerable. That speech is protected only when platforms are protected. When their protection is compromised, so is all the speech they carry. Which is why it is so important for courts to resist the emotion stirred by the facts before them and clinically apply the law as it was written, so that instead of helping just one person it will help everyone.

Which is what the Wisconsin Supreme Court has now done. As we saw recently with the Herrick v. Grindr case, another case with grotesque facts but claims that fell easily within Section 230's intended purview, plaintiffs often try to "artfully plead" around Section 230 to make their complaint seem like something other than an attempt to hold a platform liable for what another has said online. And like the Second Circuit did there, here the Wisconsin Supreme Court also refused to allow Section 230 to be circumvented.

"[W]hat matters is not the name of the cause of action . . . what matters is whether the cause of action inherently requires the court to treat the defendant as the 'publisher or speaker' of content provided by another." Barnes, 570 F.3d at 1101-02. In other words, "courts must ask whether the duty that the plaintiff alleges the defendant violated derives from the defendant's status or conduct as a 'publisher or speaker.'" Id. at 1102. This rule prevents plaintiffs from using "artful pleading" to state their claims only in terms of the interactive computer service provider's own actions, when the underlying basis for liability is unlawful third-party content published by the defendant. Universal Commc'n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 418 (1st Cir. 2007); see also Kimzey, 836 F.3d at 1266 ("[w]e decline to open the door to such artful skirting of the CDA's safe harbor provision."). [p. 24]

Ultimately this decision joins nearly all the other major Section 230 decisions over the years where courts have been able to remain focused on that bottom line and recognize that Section 230 prevents these lawsuits. In fact, as part of its decision the Wisconsin Supreme Court even called out a sister state supreme court that had not.

More importantly, [in J.S. v. Village Voice Media Holdings] the Washington Supreme Court ignored the text of the CDA, and the overwhelming majority of cases interpreting it, by inserting an intent exception into § 230(c)(1). The Washington Supreme Court opined that "[i]t is important to ascertain whether in fact Backpage designed its posting rules to induce sex trafficking . . . because 'a website helps to develop unlawful content, and thus falls within the exception to section 230, if it contributes materially to the alleged illegality of the conduct.'" J.S., 359 P.3d at 718 (citing Roommates.com, 521 F.3d at 1168). Underlying this statement is the implicit assumption that a website operator's subjective knowledge or intent may transform what would otherwise be a neutral tool into a "material contribution" to the unlawfulness of third-party content. As explained in Section II. C., however, this assumption has no basis in the text of § 230(c)(1). The relevant inquiry, regardless of foreseeability or intent, is "whether the cause of action necessarily requires that the defendant be treated as the publisher or speaker of content provided by another." Backpage.com, LLC, 817 F.3d at 19 (citing Barnes, 570 F.3d at 1101-02). [p. 27-28]

Unlike the Supreme Court in Washington, the Supreme Court in Wisconsin could see how the bigger picture required it not to read extra requirements into Section 230 that Congress had not put there. And so it has now joined most other courts that have let Section 230 do its job ensuring online speech can remain protected.


Posted on Techdirt - 30 April 2019 @ 2:33pm

Both Sides Want The Supreme Court To Review Decision Denying Copyright In Georgia's Law. How About You?

from the public-resource dept

Last year the Eleventh Circuit held that the Georgia statutory code, including annotations, was not protected by copyright. It was an important decision, not just for Carl Malamud's PublicResource.org, which had been sued for publishing Georgia's operative statutory law, including the annotations, but for any member of the public who needs to be able to freely access the law that governs them.

Georgia has now petitioned the US Supreme Court to review the Eleventh Circuit's decision. But more significantly, Public Resource is also planning to file a brief encouraging that review. Not because Public Resource wants the decision reversed, of course. But because it wants the decision to be affirmed.

Here's the situation. If the Supreme Court declines to review the decision, it will stand. That's a good thing, because it means there would be no risk of infringing copyright in publishing the Georgia state code. Given the decision's reasoning, it would also be difficult for any other state within the Eleventh Circuit to assert copyright in its statutory code either. But for any other state outside the Eleventh Circuit the question of whether statutory law could be copyrighted would remain unsettled. The Eleventh Circuit's decision is persuasive authority that courts elsewhere may defer to, but it's not binding authority, so they don't have to. What the Eleventh Circuit got right they could still get wrong.

Also, even if other courts were to ultimately follow in the Eleventh Circuit's footsteps, it is arduous and expensive to have to litigate in each state and circuit in order to get to that point. Meanwhile plenty of publicly-beneficial uses will remain chilled by the fear of potential litigation and liability as we wait for all these courts to eventually rule that this public access, unrestrained by copyright, is OK.

It would be much more efficient if the Supreme Court could just cut to the chase now and affirm that the Eleventh Circuit's holding is the law of the land. The case is ready and ripe for review, with especially cogent reasoning, so taking up this one would be much more expedient than having to wait for any other case to finally reach the petition stage. After all, the public's need to access the law that governs it is just as critical now as it will be later.

An amicus brief is being put together on behalf of law students, legal educators, and lawyers who are solo practitioners or in small firms to remind the court of this fact. All of these constituencies need access to the law, and not just superficial access, but meaningful access that will allow for the analysis necessary to teach, learn, and practice the law as clients, current and future, need. Yet none of them is economically in a position to easily afford the subscription fees charged by the commercial databases that are able to monopolize access to the law when states can get away with demanding paid licenses for it. Small law firms and solo practitioners are at a distinct disadvantage to large firms, which, with generally wealthier clients, are better able to absorb these costs. And all are at a disadvantage to their peers in Georgia, who no longer need to pay for access to what the Eleventh Circuit recognized was "intrinsically public domain material, belonging to the People."

If you are a solo or small firm lawyer, or are a law student, and would like to sign on as an amicus to encourage this Supreme Court review, click through the link above to the brief, where there is a form through which you may add your name before midnight on May 2.

Disclosure: I've contributed to the drafting of this brief.


Posted on Techdirt - 16 April 2019 @ 12:00pm

Wherein The Copia Institute Updates The Copyright Office On The First Amendment Problems With The DMCA

from the rights-of-the-roundtable dept

A few years ago the Copyright Office commenced several studies on the DMCA. One, on Section 1201, resulted in a report to Congress and some improvements to the triennial rulemaking process. But for the other study, on Section 512, things had been quiet for a while. Until earlier this year, when the Copyright Office announced it was hosting an additional roundtable hearing to solicit additional input. What the Copyright Office wanted to know in particular was how recent developments in US and international law should inform the recommendations they may issue as a result of this study.

The Copia Institute had already submitted two rounds of comments, and both Mike and I had separately given testimony at the hearing held in San Francisco. This new hearing was a good chance to remind the Copyright Office of the First Amendment concerns with the DMCA we had already warned them about, many of which are just as worrying — if not more so — today.

One significant, overarching problem is the way the DMCA results in such severe consequences for speech, speakers, and platforms themselves based on the mere accusation of infringement. An effect like this is unique in American law: in most instances, sanction cannot follow unless and until a court has found there to be actual liability. In fact, when it comes to affecting speech interests, the First Amendment expressly forbids punishing speakers or speech before a court has found specific instances of speech unlawful. To do otherwise – to punish speech, or, worse, to punish a speaker before they've even had a chance to make wrongful speech – is prior restraint, and not constitutional. Yet in the DMCA context this sort of punishment happens all the time. And since the last roundtable hearing it has only gotten worse.

Several things are making it worse. One is that Section 512(f) remains toothless, thanks to the Supreme Court refusing to review the Ninth Circuit's decision in Lenz v. Universal. Section 512(f) is the provision in the DMCA that is supposed to deter, and punish, those who send invalid takedown notices. Invalid takedown notices force the removal of speech that may be perfectly lawful because they put the platform's safe harbor at risk if it doesn't remove it. Unfortunately, in the wake of Lenz it has been functionally impossible for those whose speech has been removed to hold the sender of these invalid notices liable for the harm they caused. And it's not like there are other options for affected speakers to use to try to remediate their injury.

Also, it is not only the sort of notices at issue in Lenz that have been impacting speakers and speech. An important thing to remember is that the DMCA actually provides for four different kinds of safe harbors. We most often discuss the Section 512(c) safe harbor, which is for platforms that store content "at the direction of users." Section 512(c) describes the "takedown notices" that copyright holders need to send these platforms to get that user-stored content removed. But the service providers that instead use the safe harbor at Section 512(a) aren't required to accept these sorts of takedown notices. Which makes sense, because there's nothing for them to take down. These sorts of providers are generally all-purpose ISPs, including broadband ISPs, of which customers have all too few alternatives if they are cut off from one. All the user expression they handle is inherently transient, because the sole job of these providers is to deliver it to where it's going, not store it.

And yet, these sorts of providers are also required, like any other platform using any of the other safe harbors, to comply with Section 512(i) and have a policy to terminate repeat infringers. The question, of course, is how they are supposed to know whether one of their users actually is a repeat infringer. And that's where recent case law has gotten especially troubling from a First Amendment standpoint.

The issue is that, while there are plenty of problems with Section 512(c) takedown notices, the sorts of notices being sent to 512(a) service providers are even uglier. As was the case with the notices sent by Rightscorp in the BMG v. Cox case – the first in an expanding line of cases pushing 512(a) service providers like Cox to lose their safe harbor for not holding these mere allegations of infringement against their users and terminating them from their services – these notices are often duplicative, voluminous beyond any reasonable measure, extortionate in their demands, and reflective of completely invalid copyright claims. And yet the courts so far have not seemed to care.

As we noted at the roundtable, the court in Cox ultimately threw out all the infringement claims of an entire plaintiff because it wasn't clear that it even owned the relevant copyrights, despite Rightscorp having sent numerous notices to Cox claiming that it did. But instead of finding that these deficiencies justified the ISP's suspicions about the merit of the other notices it had received, the court still held it against the ISP that it hadn't automatically credited all the claims in all those other notices, despite ample reason to be dubious about them. Worse, the court faulted the ISP not just for declining to automatically believe the infringement notices it had received but for not acting upon them to terminate users who had accumulated too many. As we and other participants flagged at the hearing, there are significant problems with this reasoning. One relates to the very idea that termination of a user is ever an appropriate or constitutional reaction, even if the user actually is infringing copyright. Since the last hearing the Supreme Court has announced in Packingham v. North Carolina that being cut off from the Internet in this day and age is unconstitutional. (As someone else at the roundtable this time pointed out, if it isn't OK to kick someone off the Internet for being a sex offender, it is even less likely that it's OK to kick someone off the Internet for merely infringing copyright.)

Secondly, the Cox court ran square into the crux of the First Amendment problem with the DMCA: that it forces ISPs to act against their users based on unadjudicated allegations of infringement. It's bad enough that legitimate speech gets taken down by unadjudicated claims in the 512(c) notice-and-takedown context, but conditioning a platform's safe harbor on preventing a person from ever speaking online again, simply because they've received too many allegations of infringement, presents an even bigger problem. Especially since, as we pointed out, it opens the door for would-be censors to game the system. Simply make as many unfounded accusations of infringement as you want against the speaker you don't like (which no one will ever be able to effectively sanction you for doing) and the platform will have no choice but to kick them off its service in order to protect its safe harbor.

There is also yet another major problem underlying this, and every other, aspect of the DMCA's operation: that there is no way to tell on its face whether user speech is actually infringing. Is there actually a copyright? If so, who owns it? Is there a license that permitted the use? What about fair use? Any provider that gets an infringement notice will have no way to accurately assess the answers to these questions, which is why it's so problematic that they are forced to presume every allegation is meritorious, since so many won't be.

But the roundtable also hit on another line of cases suffering from the same problem of infringement never being facially apparent. In Mavrix v. Livejournal the Ninth Circuit considered the moderation Livejournal was doing – as allowed (and encouraged) by CDA Section 230 – to have potentially waived its safe harbor. The problem with the court's decision was that it construed the way Livejournal screened user-supplied content as converting it from content stored "at the direction of users" into Livejournal's own content, and several roundtable participants pointed out that this reading was not a good one. In fact, it's a terrible one, if you want to ensure that platforms remain motivated – and able – to perform the screening functions Congress wanted them to perform when it passed Section 230. The more general concern: if various provisions of the DMCA suddenly turn out to be gotchas that cost platforms their safe harbor whenever, in the process of screening content, they happen to see some that might be infringing, they won't be able to keep screening at all. Perhaps this is not a full-on First Amendment problem, but it still affects online expression and the ability of platforms to enable it.


Posted on Techdirt - 29 March 2019 @ 9:38am

Section 230 Holds On As Grindr Gets To Use It As A Defense

from the finally-some-good-news-for-Section-230 dept

It's not really possible to predict the outcome of a court case. No matter how convinced you are that things look to be heading one way, there are still a zillion ways things can turn out otherwise.

That said, however, I'm glad to discover that my cautious optimism about the Herrick v. Grindr case was not misplaced. This was a case where a terrible ex-boyfriend set up a phony Grindr profile for Herrick, which led to him being harassed by would-be suitors who thought it was genuine. It was an awful situation, and no one can fault Herrick for wanting to hold someone responsible. The problem was, if he were to succeed in holding the dating app liable, it would represent a serious weakening of Section 230's platform protection, which, as we've discussed many times, would lead to a reduction in online services and to more censorship.

Grindr has now prevailed, however, and, perhaps more importantly, so has Section 230 as a defense in the Second Circuit (albeit in a non-precedential decision).

Herrick’s products liability claims and claims for negligence, intentional infliction of emotional distress, and negligent infliction of emotional distress are barred by CDA § 230, and dismissal on that ground was appropriate because “the statute’s barrier to suit is evident from the face of the complaint.” [p. 7]

To some extent, the decision was fairly easy for the court to reach: first, Herrick had at various points acknowledged that Grindr was an interactive computer service ("ICS"), and the Court of Appeals for the Second Circuit was not inclined to overturn the district court's finding that Grindr so qualified.

Indeed, the Amended Complaint expressly states that Grindr is an ICS, and Herrick conceded as much at a TRO hearing in the district court. Accordingly, we see no error in the district court’s conclusion that Grindr is an ICS. [p. 4]

The court also seemed to have little trouble recognizing that the objectionable behavior that was the subject of the complaint was based on information provided by a third party.

Herrick’s products liability claims arise from the impersonating content that Herrick’s ex‐boyfriend incorporated into profiles he created and direct messages with other users. Although Herrick argues that his claims “do[] not arise from any form of speech,” Appellant’s Br. at 33, his ex‐boyfriend’s online speech is precisely the basis of his claims that Grindr is defective and dangerous. Those claims are based on information provided by another information content provider and therefore satisfy the second element of § 230 immunity. [p. 5]

Perhaps more importantly for future cases, the court extended this reasoning to Herrick's claims relating to the app's geolocation feature.

The claims for negligence, negligent infliction of emotional distress, and intentional infliction of emotional distress relate, in part, to the app’s geolocation function. These claims are likewise based on information provided by another information content provider. Herrick contends Grindr created its own content by way of the app’s “automated geolocation of users,” but that argument is undermined by his admission that the geolocation function is “based on real‐time streaming of [a user’s] mobile phone’s longitude and latitude.” Appellant’s Br. at 32. It is uncontested that Herrick was no longer a user of the app at the time the harassment began; accordingly, any location information was necessarily provided by Herrick’s ex‐boyfriend. [p. 5]

Finally, the court also recognized that Herrick's claims involved treating Grindr as the publisher or speaker of the offensive content, when in fact it had originated with a third party (in this case the terrible ex-boyfriend).

Herrick’s failure to warn claim is inextricably linked to Grindr’s alleged failure to edit, monitor, or remove the offensive content provided by his ex‐boyfriend; accordingly, it is barred by § 230. … To the extent that the claims for negligence, intentional infliction of emotional distress, and negligent infliction of emotional distress are premised on Grindr’s allegedly inadequate response to Herrick’s complaints, they are barred because they seek to hold Grindr liable for its exercise of a publisher’s traditional editorial functions. To the extent that they are premised on Grindr’s matching and geolocation features, they are likewise barred, because under § 230 an ICS “will not be held responsible unless it assisted in the development of what made the content unlawful” and cannot be held liable for providing “neutral assistance” in the form of tools and functionality available equally to bad actors and the app’s intended users. [p. 6-7]

All in all, despite all the press coverage convinced that the terrible facts would make this a close call, the result was instead a pretty straightforward application of Section 230 as a defense working the way it was intended.


Posted on Techdirt - 28 March 2019 @ 12:05pm

9th Circuit's Bad AirBnB Decision Threatens Basic Internet Business Models

from the "but-wait,-there's-more!" dept

I'm not done excoriating the Ninth Circuit's recent decision dismissing Homeaway and Airbnb's challenge to the Santa Monica ordinance that holds them liable if their users illegally list their properties for rent. As I wrote before, that's what the ordinance in fact does, even though Section 230 is supposed to prevent local jurisdictions from enforcing laws on platforms that have this effect. This decision may not be as obviously lethal to the Internet as the EU's passage of the Copyright Directive with Articles 11 and 13, but only because its consequences may, at the moment, be less obvious – not because they stand to be any less harmful.

Which is not to say that the court intended to herald the end of the Internet. Indeed there is a somewhat apologetic tone throughout the decision, as if the court felt it had no choice but to reach the conclusion it did. But a tone of dismissiveness runs through the decision as well. The court largely minimized the platforms' arguments about how the ordinance will affect them, and by ignoring the inevitable consequences thus opened the door to them, now and in the future, far beyond the facts of this particular case.

Ultimately there are (at least) two big problems with the decision. The earlier post highlighted one of them, noting how chilling it is to speech if a law effectively forces platforms to police their users' expression in order to have any hope of avoiding being held liable for it. The problem with the court's decision in this regard is that it kept incorrectly insisting [see pages 13-14, 17, 20...], over the platforms' protest, that the Santa Monica ordinance does not force them to monitor their users' expression when, in actuality, it most certainly does.

The second major problem with the decision is that the court kept trying to create an artificial distinction between imposing liability on platforms for facilitating user expression, which the court acknowledged would be prohibited by Section 230, and imposing liability on platforms for facilitating online transactions — which, per the court, Section 230 would apparently not prevent.

As the Platforms point out, websites like Craigslist "advertise the very same properties," but do not process transactions. Unlike the Platforms, those websites would not be subject to the Ordinance, underscoring that the Ordinance does not target websites that post listings, but rather companies that engage in unlawful booking transactions. [p. 20]

Unfortunately it's a nonsensical distinction, and one that leads to an entirely unprecedented curtailing of Section 230's critical statutory protection.

If the court's reasoning were correct, then no platform that profits from transactions between users could ever have been shielded from liability. It always would have been possible to predicate their liability on the brokering of the transaction, rather than on their intermediation of the user expression behind the transaction. In reality, though, over the past two decades plenty of transactional platforms have been able to avail themselves of Section 230's protection. For instance, eBay and Amazon, like Airbnb and Homeaway, make money from the transactions that result when user expression offering something for sale is answered by users who want to buy. Yet courts – including the Ninth Circuit – have found them just as protected by Section 230 as their non-transactional platform peers. Unlike this particular Ninth Circuit panel, these other courts recognized that the liability these platforms faced was inherently rooted in the user's expression, and thus something Section 230 protected them from. For the Ninth Circuit to now decide that liability for facilitating a transaction is somehow separate from liability for facilitating the user expression behind it strikes at the heart of what Section 230 is supposed to do – protect platforms from liability for their users' activity – and puts all these transactional platforms' continued Section 230 protection in doubt.

It also puts in jeopardy the entire Internet economy by so severely limiting the ability of platforms to monetize their services. At minimum it calls into question any monetization model that derives revenue from any transactional user expression that successfully results in a consummated deal, since platforms can now be held to account for any alleged illegality in that user expression. (Oddly, though, a platform would seem to be just fine if its users only posted poorly-crafted listings for terrible properties at unmarketable rents, even if those listings were illegal under the ordinance, because there would be no danger of a rental transaction actually resulting. But as soon as users manage to successfully articulate their offerings such that they could result in real rentals the platform would suddenly find itself on the hook.)

Instead platforms will have to support themselves in other ways, such as by being ad-supported or charging for listings. But these other revenue models not only raise their own concerns and considerations; given the logic of this decision, it's also no certainty that they won't someday be found to put platforms beyond the reach of Section 230 as well. For as long as facilitating the exchange of money is treated as something separate from facilitating the user expression the exchange is connected to, any method of monetary exchange connected to user content that's illegal in some way could still put the platform facilitating it beyond the reach of the statutory protection. The court cited Craigslist as an example of a platform that can retain its Section 230 immunity even when its users post illegal listings. But, notably, Craigslist does not charge users to post their listings. Which leaves us with a decision where the only platforms that can be sure to benefit from Section 230 are the ones that provide their services for free, a result that isn't consistent with what Congress intended or with the commercial potential upon which, until now, the Internet economy has depended.

Furthermore, it stands to put any platform exclusively devoted to facilitating aspects of these transactions completely beyond the protective reach of Section 230, even though any liability connected to that transaction would still be due to others' expression. In this case the court was fairly indifferent as to what a platform like Homeaway or Airbnb would need to do to cope with liability under the ordinance.

[T]he Platforms argue that the Ordinance "in operation and effect . . . forces [them] to remove third-party content." Although it is clear that the Ordinance does not expressly mandate that they do so, the Platforms claim that "common sense explains" that they cannot "leave in place a website chock-full of un-bookable listings." For purposes of our review, we accept at face value the Platforms’ assertion that they will choose to remove noncompliant third-party listings on their website as a consequence of the Ordinance. Nonetheless, their choice to remove listings is insufficient to implicate the CDA. [p. 14-15]

Or that the platforms' attempt to avoid liability might result in undue censorship.

Moreover, the incidental impacts on speech cited by the Platforms raise minimal concerns. The Platforms argue that the Ordinance chills commercial speech, namely, advertisements for third-party rentals. But even accepting that the Platforms will need to engage in efforts to validate transactions before completing them, incidental burdens like these are not always sufficient to trigger First Amendment scrutiny. [p. 20-21]

These are significant concerns, however. The court sets a functionally impossible task for platforms: to review and remove just the right amount of user expression in order to protect themselves. This sort of case-by-case, listing-by-listing censorship is at least theoretically within the power of a platform facilitating the expression itself. But it's not at all within the power of platforms like payment providers that never touch the original user speech. Instead they will be left with a stark choice: leave open the firehose and process all transactions, and thus risk being liable for any transaction related to any illegal listing that comes through, or turn off their service entirely to any service that cannot guarantee that every transaction arising from every user listing will comply with every possible law. No platform can make that promise, of course, and even trying will inevitably result in the removal of substantial lawful content.

True, the Santa Monica ordinance itself does not invite this full parade of horrors. It is but one ordinance, for one city, with a regulatory ask of the platform that the court somehow seems to think is easy to meet. But this decision and its contorted rationale opens the door to plenty more ordinances, from plenty more jurisdictions, with plenty more regulatory demands. In and of itself it is plenty onerous, and it invites even worse.


Posted on Techdirt - 15 March 2019 @ 3:30pm

Ninth Circuit Tells Online Services: Section 230 Isn't For You

from the practical-effect dept

Last year we wrote about Homeaway and Airbnb's challenge to an ordinance in Santa Monica that would force them to monitor their Santa Monica listings to ensure they were legally compliant. The Santa Monica ordinance, like an increasing number of ordinances around the country, requires landlords wanting to list their properties on these services to register with the city and meet various other requirements. That part of the ordinance is not what causes concern, however. It may or may not be good local policy, but it in no way undermines Section 230's crucial statutory protection for platforms when Santa Monica officials attempt to hold their landlord users liable for going online to advertise a non-compliant rental listing.

The problem with the ordinance is that it does not just impose liability on landlords. It also imposes liability on the platforms hosting their listings. The only way for them to avoid that liability is to engage in the onerous, if not outright impossible, task of scrutinizing whether or not the listings on their platforms are legal. Which is exactly what Section 230 exists to prevent: forcing platforms to monitor their users' speech for legality, because if they had to police it, they would end up facilitating a lot less legitimate speech.

Yet that's what the Ninth Circuit decided to let Santa Monica do – force platforms to monitor their user-generated speech – in a decision earlier this week upholding the district court's refusal to enjoin the ordinance.

Of course, that's not how the court saw it. To the court, platforms weren't being forced to police the speech they hosted. They were merely obligated to police the rental transactions they facilitated.

[T]he Ordinance does not require the Platforms to monitor third-party content and thus falls outside of the CDA’s immunity … [T]he only monitoring that appears necessary in order to comply with the Ordinance relates to incoming requests to complete a booking transaction—content that, while resulting from the third party listings, is distinct, internal, and nonpublic. [p. 13-14]

However, this is a distinction without a difference.

As we pointed out in the amicus brief the Copia Institute filed in support of Homeaway and Airbnb, these listings are indeed user-generated speech. It may be speech that's extremely limited in scope, little more than "I have housing to rent," but it is still user speech that, per the ordinance, may not always be legal to say. The problem is that this ordinance in effect passes liability on to the platform if it allows this speech to be illegally said, which is no different than trying to pass on liability to a platform for any other speech its users may illegally say.

Yet in its decision the court insisted that platform liability attaches to something entirely apart from its role as a platform facilitating user speech:

Similarly, here, the Ordinance is plainly a housing and rental regulation. The “inevitable effect of the [Ordinance] on its face” is to regulate nonexpressive conduct—namely, booking transactions—not speech. [p. 19-20]

It went on to declare that the ordinance in no way forces platforms to monitor user content:

Contrary to the Platforms’ claim, the Ordinance does not “require” that they monitor or screen [listings]. It instead leaves them to decide how best to comply with the prohibition on booking unlawful transactions. [p. 20]

At every step in its reasoning it kept treating the ordinance as something wholly apart from an ordinance impacting speech:

Nor can the Platforms rely on the Ordinance’s “stated purpose” to argue that it intends to regulate speech. The Ordinance itself makes clear that the City’s “central and significant goal . . . is preservation of its housing stock and preserving the quality and nature of residential neighborhoods.” As such, with respect to the Platforms, the only inevitable effect, and the stated purpose, of the Ordinance is to prohibit them from completing booking transactions for unlawful rentals. [p. 20]

But no amount of handwaving by the court to try to focus on the financial transaction between landlord and renter, or insistence that this ordinance doesn't force platforms to monitor user-generated speech, will change the basic reality that it does indeed force platforms to do exactly that: police user speech for legality in order to avoid liability arising from that speech. It is exactly the sort of situation Section 230 was intended to forestall because of the inevitable chilling effect fear-driven platform monitoring obligations have on online speech and innovation.

The court seemed to try to justify its contorted reasoning by noting that because "brick and mortar" businesses have to comply with all sorts of local regulations, Internet businesses also should have to.

We have consistently eschewed an expansive reading of the statute that would render unlawful conduct “magically . . . lawful when [conducted] online,” and therefore “giv[ing] online businesses an unfair advantage over their real-world counterparts.” For the same reasons, while we acknowledge the Platforms’ concerns about the difficulties of complying with numerous state and local regulations, the CDA does not provide internet companies with a one-size-fits-all body of law. Like their brick-and-mortar counterparts, internet companies must also comply with any number of local regulations concerning, for example, employment, tax, or zoning. [p. 16]

But this thinking fails to recognize the unique differences between brick and mortar businesses and Internet businesses, differences that help explain why it is so important to give Internet businesses this vital protection. After all, a brick and mortar store only has to comply with the laws of the jurisdiction where the store is located – as Internet platforms also must in the finite number of places where they have a physical or corporate presence. But when it comes to their online presence, an Internet business is everywhere, and thus theoretically exposed to the laws of every single jurisdiction, no matter how onerous those laws are, or how much they may conflict with one another.

Because while the Santa Monica ordinance may not be too onerous for the platforms to comply with in and of itself, Santa Monica is but one city, and the Ninth Circuit has now given the green light to every other city in every other state to come up with its own ordinances that will similarly force platforms to monitor user content. As Congress feared in 1996 when it passed Section 230, this decision now invites platforms to divert resources better spent elsewhere, overly censor user speech, withdraw from entire markets – even those that might prefer to have these services available – or risk being bankrupted by an infinite number of local jurisdictions pulling them in every possible direction.

This result is chilling not just to these platforms but to any other innovative service, especially if the service has any effect in the offline world, as so many do, or facilitates economic transactions between users, as so many also do. If bearing these indicia is enough to cause a platform to lose its Section 230 protection, then few will be able to retain it.


Posted on Techdirt - 27 February 2019 @ 12:05pm

Wherein The Copia Institute, Engine, And Reddit Tell The DC Circuit That FOSTA Is Unconstitutional

from the chilling-speech-is-not-cool dept

Ever since SESTA was a gleam in the Senate’s eye, we’ve been warning about the harmful effects it stood to have on online speech. The law that was finally birthed, FOSTA, has lived up to its terrifying billing. So, last year, EFF and its partners brought a lawsuit on behalf of several plaintiffs – online speakers, platforms, or both – to challenge its constitutionality. Unfortunately, and strangely, the district court dismissed the Woodhull Freedom Foundation et al v. U.S. case for lack of standing. It reached this decision despite the chilling effects that had already been observed and thanks to a very narrow read of the law that found precision and clarity in FOSTA's language where in reality there is none. The plaintiffs then appealed, and last week I filed an amicus brief on behalf of the Copia Institute, Engine, and Reddit in support of the appeal.

The overarching point we made is that speech is chilled by fear. And FOSTA replaced the statutory protection platforms had relied on to confidently intermediate speech with exactly that fear. Moreover, it didn't instill just a little fear of a little legal risk: thanks to the vague and overbroad terms of the statutory language, it stoked fear of nearly unlimited scope. And not just fear of civil liability, but now also of criminal liability, and of liability subject to the disparate statutory interpretations of every state authority.

We have often praised the statutory wisdom of Section 230 before it was modified by FOSTA. By going with an approach that was "all carrot, no stick," Congress was able to enlist platforms in meeting the basic objectives it listed in the opening sections of the statute: get the most good content online, and the least bad. But FOSTA replaced those carrots with sticks, leaving platforms, instead of incentivized to allow the most good speech and restrict the most bad, now afraid to do either.

As a result of this fear, platforms have become vastly less accommodating towards the speech they allow on their systems, which has led to the removal (if not outright prohibition) of plenty of good (and perfectly lawful) speech. They have also been deterred from fully policing their forums, which has led to more of the worst speech persisting online. Notably, nothing in FOSTA actually modified the stated objectives of Section 230 to get the most good and least bad expression online, yet what it did modify nevertheless made achieving either goal impossible.

And in a way that the Constitution does not allow. What we learned in Reno v. ACLU, where the Supreme Court found much of the non-Section 230 parts of the Communications Decency Act unconstitutional, is that online speech is just as protected as offline speech. Congress does not get to pass laws affecting speech in ways that don't meet the exacting standards the First Amendment requires. In particular, if speech is impacted, it can only be by a law that is narrowly tailored to the problem it is trying to solve. Yet, as fellow amici wrote, speech is indeed being affected, deliberately, and, as we've seen from the harm that has accrued, by a law poorly tailored to the problem it is ostensibly intended to solve. As we explained in our brief, FOSTA has led to this result by creating a real and palpable fear of significant liability for platforms, and it has thus already driven them to make choices that have harmed online speakers and their speech.


Posted on Techdirt - 6 February 2019 @ 3:43pm

The 3rd Party Doctrine: Or Why Lawyers May Not Ethically Be Able To Use Whatsapp

from the metadata-matters dept

In December I went to install the Flywheel app on my new phone. Flywheel, for those unfamiliar, is a service that applies the app-dispatching and backend payment services typical of Uber and Lyft to the local medallion-based taxi business. I'd used it before on my old phone, but as I was installing it on my new one it asked for two specific permissions I didn't remember seeing before. The first was fine and unmemorable, but the second was a show-stopper: "Allow Flywheel access to your contacts?" Saying no made the app exit with a passive-aggressive flourish ("You have forcefully denied some of the required permissions.") but I could not for the life of me figure out why I should say yes. Why on Earth would a taxi summoning app require access to my contacts? Tweets to the company were not answered, so it was impossible to know if Flywheel wanted that permission for some minor, reasonable purpose that in no way actually disclosed my contact data to this company, or if it was trying to slurp information about who I know for some other purpose. Its privacy policy, which on the surface seems both reasonable and readable, was last updated in 2013 and makes no reference to why it would now want access to my contacts.
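For those unfamiliar with how these prompts work under the hood: on Android the dialog comes from the operating system, and the app learns only the user's yes-or-no answer – but once "yes" is given to the READ_CONTACTS permission, the app can read the entire address book. Here is a minimal sketch of that flow; the class and method names are hypothetical stand-ins, not Flywheel's actual code:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class TaxiApp : AppCompatActivity() {

    // The OS renders the "Allow access to your contacts?" prompt;
    // the app only ever learns whether the user agreed.
    private val requestContacts =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) onContactsAvailable()
            else finish() // bail out on denial, as the old Flywheel build did
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.READ_CONTACTS
        ) == PackageManager.PERMISSION_GRANTED
        if (alreadyGranted) onContactsAvailable()
        else requestContacts.launch(Manifest.permission.READ_CONTACTS)
    }

    private fun onContactsAvailable() { /* read ContactsContract here */ }
}
```

The all-or-nothing nature of the grant is exactly the problem: there is no way for the user to say yes to some minor, reasonable purpose without also saying yes to wholesale slurping.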

So I didn't finish installing it, although to Flywheel's credit, a January update to the app seems to have re-architected it so that it no longer demands that permission. (On the other hand, the privacy policy appears to still be from 2013.) But the same cannot be said for other apps that insist on reading all my contacts, including, conspicuously, Whatsapp.

Whatsapp has been in the news a lot lately, particularly in light of Facebook's announcement that it planned to merge it with its Messenger service. But the problem described here is a problem even as the app stands on its own. True, unlike the old Flywheel app, Whatsapp can currently be installed without demanding to see the contact information stored on my phone. But it can't be used effectively. It can receive an inbound message from someone else who already knows my Whatsapp number, but it refuses to send an outbound message to a new contact unless I first let Whatsapp slurp up all my contacts. Whatsapp is candid in its privacy policy (last updated in 2016) that it collects this information (in fact it says you agree to "provide us the phone numbers in your mobile address book on a regular basis, including those of both the users of our Services and your other contacts."), which is good, but it never explains why it needs to, which is not good. Given that Signal, another encrypted communications app, does not require slurping up all contacts in order to run, it does not seem like something Whatsapp should need to do in order to provide its essential communications service. The only hint the privacy policy provides is that Whatsapp "may create a favorites list of your contacts for you" as part of its service, but it still isn't obvious why it would need to slurp up your entire address book, including non-Whatsapp user contact information, even for that.
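For what it's worth, a full upload isn't the only way to build contact discovery. A service can learn which of a user's contacts are fellow users by matching hashed phone numbers server-side, so the raw address book never leaves the device – an approach Signal historically used (imperfect, since phone-number hashes can be brute-forced, which is why later designs moved to stronger private-lookup protocols). A minimal sketch of the idea, with the hypothetical queryRegisteredHashes function standing in for the network call:

```kotlin
import java.security.MessageDigest

// Hash a phone number so the raw digits need never leave the device.
// (A real design needs more than a bare hash, since the space of phone
// numbers is small enough to brute-force.)
fun hashNumber(e164: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(e164.toByteArray())
        .joinToString("") { "%02x".format(it) }

// Hypothetical stand-in for the server call: returns which of the
// submitted hashes belong to registered users.
fun queryRegisteredHashes(hashes: Collection<String>): Set<String> {
    val registered = setOf(hashNumber("+15551230001")) // demo data
    return hashes.filterTo(mutableSetOf()) { it in registered }
}

// The client learns which contacts are users; the server never receives
// names, emails, or cleartext numbers for anyone in the address book.
fun discoverContacts(addressBook: List<String>): List<String> {
    val byHash = addressBook.associateBy(::hashNumber)
    return queryRegisteredHashes(byHash.keys).mapNotNull(byHash::get)
}

fun main() {
    println(discoverContacts(listOf("+15551230001", "+15551239999")))
    // -> [+15551230001]
}
```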

The irony is that an app like Whatsapp should be exactly the sort of app that lawyers use. We are duty-bound to protect our clients' confidences, and encrypted communications are often necessary tools for maintaining a meaningful attorney-client relationship because they should allow us to protect the communications secrecy upon which the relationship depends. But that's exactly why I can't use it, didn't finish installing the old Flywheel app, and refuse to use any other app that insists on reading all my contacts for no good, disclosed, or proportionally-narrow reason: I am a lawyer, and I can't let this information out. Our responsibility to protect client confidences may very well extend to the actual identity of our clients. There are too many situations where, if others can know who we are talking to, it will be devastating to our clients' ability to seek the counsel to which they are constitutionally entitled.

I wrote about this problem a few years ago in an amicus brief on behalf of the National Association of Criminal Defense Lawyers for the appeal of Smith v. Obama. This case brought a constitutional challenge to the US government's practice of collecting bulk metadata from Verizon Wireless without warrants and without their attendant requirements of probable cause and specificity. Unfortunately the constitutional challenge failed at the district court level, but not because the court couldn't see how it offended the Fourth Amendment for so much personal information to be so readily available to the government. Instead the district court dismissed the case because it believed it was hamstrung by the Supreme Court's earlier ruling in Smith v. Maryland. Smith v. Maryland is the 1979 case that gave us the third-party doctrine: the idea that once you've disclosed certain information to a third party (such as the numbers you dialed), you can no longer have a reasonable expectation of privacy in that information that the Fourth Amendment should continue to protect (and that would thus require the government to get a warrant to access it). Even in its time Smith v. Maryland was rather casual about the constitutionally-protected privacy interests at stake. But as applied to the metadata related to our digital communications, it eviscerates the personal privacy the Fourth Amendment exists to protect.

The reality is that metadata is revealing. And as I wrote in this amicus brief, the way it is revealing for lawyers violates not only the Fourth Amendment but also the Sixth Amendment right to counsel relied upon by our clients. True, it is not always a secret who our clients are. But sometimes the entire representation hinges on keeping that information private.

Thus metadata matters because, even though it is not communications "content," it can nevertheless be deeply descriptive of the details of a life. And when it comes to lawyers' lives, it ends up being descriptive of their clients' lives as well. And that's a huge problem.

As the brief explained, lawyers get inquiries from uncharged people all the time. Perhaps they simply need advice on how to conform their behavior to the law. Or perhaps they fear they may be charged with a crime and need to make the responsible choice to speak with counsel as early as possible to ensure they will have the best defense. The Sixth Amendment guarantees them the right to counsel, and this right has been found to be meaningful only when the client can feel assured of enough privacy in their communications to speak candidly with their counsel. Without that candor, counsel cannot be as effective as the Constitution requires. But if the government can easily find out who lawyers have been talking to by accessing their metadata, then that needed privacy evaporates. Who a lawyer has been communicating with, especially a criminal defense lawyer, starts to look like a handy list of potential suspects for the government to go investigate.

And it's not just criminal defense counsel that is affected by metadata vulnerability. Consider the situation we've talked about many times before, where an anonymous speaker may need to try to quash some sort of discovery instrument (including those issued by the government) seeking to unmask them. We've discussed how important it is to have procedural protections so that an anonymous speaker can find a lawyer to fight the unmasking. Getting counsel of course means that there is going to be communication between the speaker and the lawyer. And even though the contents of those communications may remain private, the metadata related to them may not be. Thus, even though the representation may be all about protecting a person's identity, there may be no way to accomplish that if the lawyer cannot protect the metadata evincing the attorney-client relationship from either the government helping itself to it, or from greedy software slurping it up – which makes the app maker yet another third party from which the government can demand this information.

Unfortunately there is no easy answer to this problem. First, just as it's not really possible for lawyers to avoid using the phone, it is simply not viable for lawyers to avoid using digital technology. Indeed, much of it actually makes our work more productive and cost effective, which is ultimately good for clients. And especially given how unprotected our call records are, it may even be particularly important to use digital technology as an alternative to standard telephony. To some extent lawyers can refuse to use certain apps or services that don't seem to handle data responsibly (I installed Lyft and use Signal instead), but sometimes it's hard to tell the exact contours of an app's behavior, and sometimes even if we can tell it can still be an extremely costly decision to abstain from using certain technology and services. What we need, what everyone needs, is to be able to use technology secure in the knowledge that information shared with it travels no farther and for no other purpose than we expect it to.

Towards that end, we – lawyers and others – should absolutely pressure technology makers into (a) being more transparent about how and why they access metadata in the first place, (b) enabling more gradated levels of access to and use of it, so that no app or service ever has to know, or even ask for, more about our lives than it needs in order to run, and (c) being more principled in both their data sharing practices and their resistance to government data demands. Market pressure is one way to effect this outcome (there are a lot of lawyers, and few technologies can afford to be off-limits to us), and perhaps it is also appropriate for some of this pressure to come from regulatory sources.
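To make point (b) concrete: one "gradated" design already exists in Android's contact-picker pattern, where the user hands an app exactly one contact of their choosing and the app never needs blanket read access to the whole address book. A minimal sketch, with hypothetical activity and method names:

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class InviteActivity : AppCompatActivity() {

    // The system contacts app does the browsing; this app receives a URI
    // for just the one contact the user picked, with no READ_CONTACTS
    // permission needed for the rest of the address book.
    private val pickContact =
        registerForActivityResult(ActivityResultContracts.PickContact()) { contact: Uri? ->
            contact?.let { sendInviteTo(it) }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        pickContact.launch(null)
    }

    private fun sendInviteTo(contact: Uri) { /* hypothetical app logic */ }
}
```

A messaging app built this way could still start conversations with new contacts without ever slurping the full address book.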

But before we turn to regulators in outrage we need to aim our ire carefully. Things like the GDPR and CCPA deserve criticism because they tend to be like doing pest control with a flame thrower, seeking to ameliorate harm while being indifferent to any new harm they invite. But the general idea of encouraging clear, nuanced disclosures of how software interacts with personal data, as well as discouraging casual data sharing, is a good one, and one that at the very least the market should demand.

The reality of course is that sometimes data sharing does need to happen – certain useful services will not be useful without data access, and even without data sharing among the partners who together supply the service. It would be a mistake to ask regulators to prevent it altogether. Also, it is not necessarily private actors who are the biggest threat to the privacy interests we lawyers need to protect. Even the most responsible tech company is still at the mercy of a voracious government that sees itself as entitled to all the data these private actors have collected. Someday, hopefully, the courts will recognize what an assault on our constitutional rights it is for metadata access not to be subject to a warrant requirement. But until that day comes, we should not have to remain so vulnerable. When we turn to the government to help ensure our privacy, our top demand needs to be for the government to better protect us from itself.


Posted on Techdirt - 28 January 2019 @ 1:34pm

Dozens Of Privacy Experts Tell The California Legislature That Its New Privacy Law Is Badly Undercooked

from the hard-to-survive-this-turkey dept

Here at Techdirt we've taken issue with the California Consumer Privacy Act (CCPA), not because there's anything wrong with online privacy, or even with online privacy regulation as such. But there is definitely something wrong with regulating it badly. As we've seen with the GDPR, poor regulation not only struggles to deliver any of its intended benefit but also causes all sorts of other harm. Thus it's enormously important to get this sort of regulation right.

But the current iteration of the CCPA is not that. Born of an attempt at political blackmail rather than considered and transparent policymaking, it suffers, even after several small attempts at improvement, from several showstopping infirmities. These were set forth in a letter to the California legislature organized by Eric Goldman, who has been closely tracking the law, and signed by 41 California privacy lawyers, professionals, and professors (including me). As he summarized in a blog post hosting a copy of the letter, these defects include:

  • The law affects many businesses that never had a chance to explain its problems to the legislature;
  • Compliance with the CCPA imposes excessive costs on small businesses;
  • Its inconsistencies with other privacy laws, including the GDPR, require businesses to waste extra money;
  • It undermines other consumer privacy laws;
  • It contains drafting errors and other problems, including overbroad definitions; and
  • It claims an extraterritorial reach that may not be constitutional and that will create substantial confusion for everyone, as well as costs for the state, as the question is litigated.

In other words, we can do better. As the letter concludes:

Everyone has acknowledged that the CCPA remains a work-in-progress, but there may be some misapprehensions about the scope and scale of the required changes still remaining. In our view, the CCPA needs many substantial changes before it becomes a law that truly benefits California. We appreciate your work on these important matters.


Posted on Techdirt - 22 January 2019 @ 3:32pm

Herrick V. Grindr – The Section 230 Case That's Not What You've Heard

from the pleading-matters dept

On the surface Herrick v. Grindr seems the same sort of case as Daniel v. Armslist (which we wrote about last week): it's a case at an appeals court that addresses the applicability of Section 230, meaning there is a reasonable possibility of it having a long-lingering effect on platforms once it gets decided. It's also a case full of ugly facts with a sympathetic plaintiff, and, at least nominally, it involves the same sort of claim against a platform – in Armslist the claim was for "negligent design," whereas here the claim is for "defective design." In both cases the general theory is that because people were able to use the platform to do bad things, the platforms themselves should be legally liable for the resulting harm.

Of course, if this theory were correct, what platform could exist? People use Internet platforms in bad ways all the time, and they were doing so back in the days of CompuServe and Prodigy. It is recognition of this tendency that caused Congress to pass Section 230 in the first place, because if platforms needed to answer for the terrible things their users used them for, then they could never afford to remain available for all the good things people used them for too. Congress felt it was too high a cost to lose the beneficial potential of the Internet because of the possibility of bad actors, and so Section 230 was drafted to make sure that we wouldn't have to. Bad actors could still be pursued for their bad acts, but not the platforms that they had exploited to commit them.

In this case the bad act in question was the creation and management of a false Grindr profile for Herrick by an ex-boyfriend bitter about their breakup. It led to countless strangers, often with aggressive expectations for sex, showing up at Herrick's home and work. There is no question that the ex-boyfriend's behavior was terrible, frightening, inexcusable, and, if not already illegal under New York law, deserving to be. But only to the extent that such a law would punish just the culprit (in this case the ex-boyfriend who created the fake profile).

The main problem with this case is that Herrick is seeking to have New York law extend to also punish the platform, which had not created the problematic content. But the plain language of Section 230 – both in its immunity provision along with its pre-emption provision – prevents platforms from being held liable for content created by others. Herrick argues that Grindr should be held liable anyway "because it knowingly facilitated criminal and tortious conduct." But that's not the standard. The standard is whether the platform created the wrongful content, or, at minimum, in the wake of Roommates, had a hand in imbuing it with its wrongful quality. But here there is no evidence to suggest that Grindr had anything to do with the creation of the fake profile. It was the awful ex-boyfriend who was doing all the malfeasant content supplying.

But here's where the two cases part company, and where the Grindr one gets especially messy. The good news for Section 230 is that this messiness may make it easy for the Second Circuit to resolve the case in favor of Grindr and leave Section 230 unscathed. The bad news is that if the Second Circuit decides the other way, it will be very messy indeed.

One of the core questions in most lawsuits involving Section 230 is whether the platform is an interactive computer service provider, and thus protected by Section 230 from lawsuits seeking to hold it liable for content created by others, or whether it is instead a non-immune "information content provider." Part of the problem with this case is that when Herrick originally filed the lawsuit, the pleading acknowledged that Grindr was an interactive computer service provider. Later, when fighting the motion to dismiss, he changed his mind, but that's a problem. You don't usually get to change your mind about these critical elements of your complaint without repleading it. (Which is one of the reasons Herrick is appealing: the dismissal was "with prejudice," meaning he can't easily re-plead at this point, and he wants another chance to amend his complaint.)

But that's only one of the pleading problems. A plaintiff also has to put forth a plausible theory of liability at the outset, in large part so that the defendant can be on notice of what it is being accused of and can defend itself. It's not unusual for theories of liability to evolve as litigation proceeds, but if the theory changes too much too late in the process it raises significant due process problems for the defendant. Which seems to be happening here. The story Herrick told the Second Circuit about why he thought Grindr should be liable for the harm he suffered differed in significant ways from the story he had told at the outset, or to the trial court. This change is one reason why the case is particularly messy, and it may be messier still if the Second Circuit allows it to continue anyway.

At issue is what Herrick told the Second Circuit about his harassment. According to him now, strange men were showing up in his life not just constantly but everywhere he went. Yet according to the record at the trial court, they only showed up in two places: his home and his work. Which is not to say, of course, that it's ok for these people to harass him at either place (or any place). The issue is that this "everywhere" v. "only in two places" distinction significantly affects his theory of the case and therefore the merits of his appeal.

The argument he pressed at oral argument was that Grindr's geolocation service removed the case from Section 230's purview. According to him, there must be some bug in Grindr that allows these strange men to know where he is and seek him out, and so, he thinks, Grindr should be liable for not fixing this defect.

However there are a number of problems with this theory. First, it is highly implausible. For it to be true, Grindr would need to not only still be tracking him (even as an ex-user) but then, for some unknown reason, somehow unite the location data of the real Herrick with the fake Herrick profile. Herrick tried to argue that the first part was likely, citing for instance Google's location services continuing to track users after they thought tracking had stopped. But even if it were true that Grindr had continued to track him, it would be entirely arbitrary for it to associate that data with another account he didn't control. From Grindr's point of view, his real account and the fake account would look like two completely separate users. Sure, Grindr could have a bug that mis-associated location data, but there's no reason for it to pick these two particular accounts to merge data between. It would be just as arbitrary as if it mixed up his data with any other Grindr account.
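To see why, consider how such data is plausibly stored (a hypothetical model, sketched here purely for illustration – we obviously don't have Grindr's actual code): location reports are keyed to the account that sent them, so two accounts are just two unrelated keys, even if they both describe the same person.

```kotlin
// Hypothetical sketch of an account-keyed location store, illustrating why
// data from one account would not naturally bleed into another.
data class Location(val lat: Double, val lon: Double)

class LocationStore {
    // accountId -> most recent location report from that account's device
    private val lastFix = mutableMapOf<String, Location>()

    fun report(accountId: String, loc: Location) {
        lastFix[accountId] = loc
    }

    fun lookup(accountId: String): Location? = lastFix[accountId]
}

fun main() {
    val store = LocationStore()
    // The real account reports from the real phone...
    store.report("herrick-real", Location(40.7128, -74.0060))
    // ...but the fake profile is a different key entirely. Nothing in the
    // data model connects the two; a bug would have to pick this pairing
    // out of millions of equally unrelated account pairs.
    println(store.lookup("herrick-fake")) // -> null
}
```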

Furthermore, there is zero evidence to suggest that the fake account used the geolocation data of anyone at all, other than perhaps the ex-boyfriend, who was operating the account. There certainly is no evidence to suggest that it was somehow using Herrick's actual data, and that's why the factual distinction about where he was harassed matters. If it truly was everywhere then he might have a point about the app having a vulnerability, and if so then perhaps his defective design claim might start to be colorable. But the only information he's alleged is that he was harassed in those two places, home and work, and no one needed to use any geolocation data to find him at either of these places. The ex-boyfriend knew of these places and could easily send would-be suitors to them directly via private messages. In other words, the reason they turned up at either of these places was because of content supplied by a third party (the ex-boyfriend). This fact puts the case clearly in Section 230-land and makes the case one where someone is trying to hold a platform liable for harm caused by how another communicated through their system.

Finally, an additional problem with this theory is that even if it were correct, and even if there were some evidence that the geolocation feature was allowing strangers to harass him everywhere, it needed to have come up before the appeal. The purpose of the appeal is to review whether the first court made a mistake. Belatedly supplying more information for the benefit of the appeals court will not help it decide whether the first court erred, because that court could only do its best with the information available to it. It isn't a mistake not to have had the benefit of more, and to add more at this late date would be incredibly unfair to the defendant. As it was, pressing this new "he was tracked everywhere" theory at oral argument left Grindr's counsel in the unenviable and risky position of having to field extremely hypothetical questions from the judges about their client's potential liability based on facts nowhere in the underlying record. It was uncomfortable to listen to the judges push Grindr's lawyers on the question of whether some hypothetical software bug that they had never contemplated, and that likely doesn't exist, might undermine their Section 230 protection. To their credit, they fielded the hypothetical on the fly pretty well by reminding the judges that Section 230 covers how platforms are used by other people, regardless of whether they are used appropriately or exploitatively. But given the way this case was pleaded from the outset, this hypothetical should never have come up, especially not at this late juncture.

So one of the overarching concerns about this case is that because this theory did not coalesce until it had reached the appeals court, it left the central legal questions it raised under-litigated, inviting poor results if the Second Circuit now gives them any credence. But that's not the only concern. The case may still be an ominous harbinger, for even if Herrick loses the appeal, it may not be the last time we see this "software vulnerability makes you lose Section 230 protection" theory put forth. It foreshadows how we may see future privacy litigation dressed up as defective design cases, and, worse, it may encourage plaintiffs seeking to do an end-run around Section 230 to package their claims as privacy cases.

Also, what Herrick asked for in his appeal was a remand back to the trial court to explore all these under-developed evidentiary issues. Was there a software bug? Was Grindr continuing to track former subscribers in a way they didn't know about? Was there a privacy leak, where the fake profile was somehow united with the geolocation of a real person? Herrick believes the case shouldn't have been dismissed without discovery on these issues, but early dismissal is a big reason why Section 230 provides valuable protection to a platform. It is extremely expensive to go through the discovery stage – in fact, it's often the most expensive stage – and if platforms had to endure it just so plaintiffs could explore paranoid fantasies with no evidence lending them even a veneer of plausibility, it would be extremely destructive to the online ecosystem.

On the upside, however, after listening to the oral argument I'm relatively confident that these judges, unlike the Wisconsin Court of Appeals in the Armslist case, will be able to respect prior precedent upholding Section 230, even in awful cases like these, and resist reaching an emotional conclusion that strays from it. Also, given the issues with the pleading and such – which the judges flagged at oral argument – there may be enough procedural problems with Herrick's case to make it easy for the court to dispense with it without damaging Section 230 jurisprudence in the Second Circuit in the process. But if these predictions turn out to be wrong, and if it turns out that these procedural issues pose no obstacle to the court issuing the remand Herrick seeks, then we might have to contend with something really ugly on the books at a federal appellate circuit level.


Posted on Techdirt - 18 January 2019 @ 1:36pm

In Which We Warn The Wisconsin Supreme Court Not To Destroy Section 230

from the not-just-fosta dept

One of the ideas that we keep trying to drive home is that the Internet works only because Section 230 has allowed it to work. Mess with Section 230, and you mess with the Internet. FOSTA messed with it statutorily, but it isn't just Congress that can undermine all the speech and services that depend on Section 230's protection for the platforms that enable them. Courts can mess with it too.

It's bad enough when a trial court gets the question of whether Section 230 applies wrong, but the higher the court, the more potentially destructive the decision if the court decides to curtail the statute's protection. On the other hand, the higher the court, the more durable Section 230's protective language becomes when the decision gets it right. This post is about one of those cases where the future utility of Section 230 hangs in the balance, and where we hope that the Wisconsin Supreme Court, the highest court in the state, gets it right and finds that the statute applies to the platform being sued -- and therefore to all the other platforms that depend on its protection.

We've written before about this case, Daniel v. Armslist. As with a lot of the litigation challenging Section 230, it is one of those "bad facts make bad law" sorts of cases. Here an estranged husband, against whom there was a restraining order, bought a gun from an unlicensed seller who had advertised through the Armslist site. Notably, it does not appear that the sale was necessarily illegal – in Wisconsin unlicensed dealers apparently do not have to run background checks – nor was the sale fully transacted on the site (the actual purchase was made in a McDonald's parking lot). Of course, even if the sale had been illegal, or fully brokered via the site, Section 230 should still have insulated the platform. But here the Section 230 inquiry should be even more straightforward: the lawsuit alleges that Armslist negligently designed a site that facilitated a third party's speech – in this case, the speech offering the gun for sale – and such a claim should have been barred by Section 230.

The trial court actually had gotten this question right and dismissed the case. Unfortunately a state appeals court in Wisconsin opted to ignore twenty-plus years of jurisprudence, as well as the statute's pre-emption provision, which would have directed such a finding, and reversed the trial court's original decision. Armslist then sought review by the Wisconsin Supreme Court, and we filed an amicus brief supporting its petition. One of the main points we made in the brief was how much stood to be affected if the decision were not overturned and Section 230's applicability in Wisconsin were narrowed in ways Congress hadn't intended. After all, it isn't just Armslist in the crosshairs; it is all platforms everywhere, and all the speech and services they enable, in Wisconsin and beyond, that are threatened if platforms can no longer depend on Section 230's critical protection applying to them as it once did.

Fortunately the Wisconsin Supreme Court agreed to hear the case, and this week we filed yet another amicus brief in support of Armslist on the merits. It is similar to the previous brief, with the added example of how much the Copia Institute itself, and Techdirt in particular, depends on Section 230 remaining robust and effective. It relies on it as a user of other services -- for instance, to have its posts shared through social media -- and as a platform itself. There could not be a comments section on Techdirt -- or all the vibrant and insightful discussion found there -- without Section 230 protecting the site from liability for what commenters say.

It would be easy for the tragedy underpinning this case to cause the court to fixate on Armslist and the type of user content it intermediates. But Internet platforms come in all sorts of shapes and sizes, offering all sorts of services, and enabling all sorts of speech on all sorts of topics. And all of them will be affected by how the court resolves this particular case before it. So we hope our brief helps remind the Wisconsin justices of just how much is at stake.


Posted on Techdirt - 3 December 2018 @ 1:41pm

Tech Policy In Times Of Trouble

from the pep-talk dept

A colleague was lamenting recently that working on tech policy these days feels a lot like rearranging deck chairs on the Titanic. What does something as arcane as copyright law have to do with anything when governments are giving way to fascists, people are being killed because of their race or ethnicity, and children are being wrested from their parents and kept in cages?

Well, a lot. It has to do with why we got involved in these policy debates in the first place. If we want these bad things to stop we can't afford for there to be obstacles preventing us from exchanging the ideas and innovating the solutions needed to make them stop. The more trouble we find ourselves mired in the more we need to be able to think our way out.

Tech policy directly bears on that ability, which is why we work on it, even on aspects as seemingly irrelevant to the state of humanity as copyright. Because they aren't irrelevant. Copyright, for instance, has become a barrier to innovation as well as a vehicle for outright censorship. These are exactly the sorts of chilling effects we need to guard against if we are going to be able to overcome these challenges to our democracy. The worse things are, the more important it is to have the unfettered freedom to do something about it.

It is also why we spend so much energy arguing with others similarly trying to defend democracy when they attempt to do so by blaming technology for society's ills and call for it to be less freely available. While it is of course true that not all technology use yields positive results, there are incalculable benefits that it does bring – benefits that are all too easy to take for granted but would be dearly missed if they were gone. Technology helps give us the power to push back against the forces that would hurt us, enabling us to speak out and organize against them. Think, for instance, about all the marches that have been marched around the world, newly-elected officials who've used new media to reach out to their constituencies, and volunteer efforts organized online to push back against some of the worst the world faces. If we too readily dull these critical weapons against tyranny we will soon find ourselves defenseless against it.

Of course, none of this is to say that we should fiddle while Rome burns. When important pillars of our society are under attack we can't pretend everything is business as usual. We have to step up to face these challenges in whatever way is needed. But the challenges of today don't require us to abandon the areas where we've previously spent so much time working. First, dire though things may look right now, we have not yet forsaken our constitutional order and descended into the primordial ooze of lawlessness. True, the press is under constant attack, disenfranchisement is rife, and law enforcement is strained by unprecedented tensions, but civil institutions like courts and legislatures and the media continue to function, albeit sometimes imperfectly and under severe pressure. And we strengthen these institutions when we hew to the norms that have enabled them to support our society thus far. That some in power may have chosen to abandon and subordinate these norms is no reason for the rest of us to do the same. Rather, it's a reason why we should continue to hold fast to them, to insulate them and buttress them against further attack.

Second, we are all capable of playing multiple roles. And the role we've played as tech policy advocates is no less important now than it was before. Our expertise on these issues is still valuable and needed – perhaps now more than ever. In times of trouble, when fear and confusion reign, the causes we care about are particularly vulnerable to damage, even by the well-meaning. The principles we have fought to protect in better days are the same principles we need to light the way through the dark ones. It is no time to give up that fight.

