Cathy Gellis’s Techdirt Profile



Posted on Techdirt - 12 July 2021 @ 12:09pm

Remembering Sherwin Siy

from the wishing-this-were-fake-news dept

My friend and colleague Sherwin Siy died suddenly last week, way before his time, way before any of us were ready to lose his friendship, and way before the Internet and the principles he defended could afford to be without his help defending them.

There are some people out there with the very silly idea that the only reason anyone advocates for the sorts of issues we do here at Techdirt is that someone is paying us to. The thinking seems to go that "but for" this monetary carrot none of us would bother.

Nothing could be further from the truth. Those of us who fight for these issues do so because we fervently believe they are the right positions to serve the public interest, the Internet, and ultimately the world. Taking on these battles is in many ways a calling, something we feel personally compelled to do.

In the case of both Sherwin and myself (and many others, I'm sure), it is also what drove us to go to law school. As he explained in this interview about his most recent work at Wikimedia, as Lead Public Policy Manager:

"The reason I went to law school in 2002 was because I was inspired by the ability for the internet to be this collaborative, weird place, built out of collections of individuals."[…]

Deeply inspired by the weirdness of the internet and seeking to protect it, Sherwin made it his mission to support "the unanticipated things that are made by people who aren’t multibillion dollar corporations."

Sherwin was just slightly ahead of me in getting his legal career going but had so quickly established himself that as I started mine I reached out to him. "Dear Mr. Siy..." I apparently wrote in my first email to him. By that point he was working for Public Knowledge, eventually becoming Vice-President of Legal Affairs. To me (as I embarrassed him years later when I told him this) he was practically a celebrity. He was a key player at a key organization actually making a difference on these important issues, and I was in awe.

But over the years, as I worked to establish myself as someone who could make a difference too, he helped. One of my first amicus briefs was for the Ninth Circuit in the Lenz v. Universal (aka the "Dancing Baby") case. Public Knowledge was one of my clients for that brief, and that was thanks to him.

Of course, one of the things that has become apparent in the wake of his death is just how many of us he helped, how generous he was with his mentoring, and how unselfish he was with his own power and influence. Naturally he was embarrassed when I told him I'd regarded him as an Internet hero, because of course he was far too humble a person to see himself that way, regardless of how much he deserved such respect for all the work he was doing.

But his loss isn't just being felt by so many because he was a respected colleague we'd come to depend on as an invaluable ally in our advocacy. Many of us are equally proud to have called him a friend. He was someone I reached out to every time I was in town, and he almost always had time for a good catch-up. Sure, sometimes our conversations might include deep dives into the latest copyright policy, and I remember another occasion when he regaled me with his deep understanding of the historical regulatory underpinnings of modern privacy law—expertise I was fully planning to tap into down the road at some point but now can't.

After all, we were both geeks, and that was our idea of a good time. Of course, our friendship wasn't all work and no play: as I think about my personal memories of Sherwin I remember the time when I was out in his area and suffering from some serious FOMO from seeing all the fun everyone else I knew seemed to be having, per all their pictures that kept showing up on my Facebook feed. So I called up another law geek friend and we planned an outing to go exploring in parkland along a river. And then we called up Sherwin, who hadn't before met my other friend, and we all had a great day hanging out together just like we'd all known each other forever.

I was fully expecting to know Sherwin forever. It was only just a month or so ago that he'd pinged me for advice. He was thinking about the future, as was I. So we talked about what we wanted from it and promised to help each other go get it. Never could I have ever imagined at that moment that when it came to the future my seemingly so vital friend wouldn't have one.

It's impossible to reconcile. In some ways my brain forgets that he's gone; since we didn't live in the same place, he wasn't someone I saw all the time, so not seeing him now does not seem that unusual. The kick in the stomach comes when he pops to mind because it suddenly occurs to me that I want to hear what he has to say, and then just as suddenly I have to remember that I never can again.

And he's ALWAYS going to pop to mind. There is hardly a policy battle – on copyright, net neutrality, privacy, free speech, and more – where we have not been enormously lucky to have had Sherwin on our team. Sherwin was so smart, so erudite, and so needed in our community of advocates that his loss is incalculable. But it all raises the question: if he could leave this sort of mark on the world, and touch this many people's lives, in his first forty years, imagine what he could have done with forty more.

Fate has certainly robbed us all.


Posted on Techdirt - 28 June 2021 @ 4:03pm

Creating State Action Via Antitrust Law And Making The People Who've Been Wrong About The Constitutionality Of Content Moderation Suddenly Right

from the stopped-clocks dept

The challenge of a 24+ hour legislative session covering multiple bills is that it's hard to keep track of everything that happens. In my last post I wrote about a few impressions and examples that I happened to catch. This post is about another.

Plenty of people on both sides of the aisle have been plenty wrong about content moderation on the Internet. Many Democrats get it very wrong, and so do many Republicans. In the case of people like Reps. Jim Jordan and Matt Gaetz, their particular flavor of wrongness has been to rant and rave about the private editorial decisions platforms have made to remove the speech they think they should have the right to make on these services, no matter what. They complain that what these platforms are doing to their posts must somehow be violating their First Amendment rights—and they are completely and utterly wrong on that point. Platforms are private actors with their own First Amendment rights to choose what speech to associate with. Making those decisions, even in ways some people (including these Congressmen) don't like, is entirely legal and THEIR constitutional right to exercise. It in no way impinges on the First Amendment rights of any would-be user of their service to refuse their expression.

But these Congressmen and some of their similarly-minded colleagues have noticed that if these antitrust bills should become law in anything close to their current form their speech will continue to be denied access to these services. And this time that denial may well represent an unconstitutional incursion on their speech rights. Because it's one thing if the platforms make their own independent editorial decisions on whether to facilitate or deny certain user speech, including these Congressmen's speech. But it's another when government pressure forces platforms' hand to make those decisions in any particular way. And that's what these bills threaten to do.

One such way that they flagged is through the bills' demands for interoperability. Interoperability sounds like a nice idea in theory, but in practice there are significant issues with privacy, security, and even potentially content moderation, especially when it is mandated. Because one of the problems with an interoperability mandate is that it's hard to tell whether, in being interoperable, one platform needs to adopt the same moderation policies as another platform it is trying to interoperate with. If the answer is yes, then suddenly platforms are no longer getting to make their own editorial decisions; now they are making editorial decisions the government is forcing them to make. Which means that when they impose them against certain user speech it now is at the behest of the state and therefore likely a violation of those users' speech rights, which are rights that protect their speech against state action.

But even if a platform opts not to conform its moderation policies, the constitutional problem would remain. Because if these bills were to become law in their current form, the decision not to conform moderation policies might still be seen to flout the law's requirement for interoperability. And, at least initially, it would be up to the FTC to decide whether it does and thus warrants taking an enforcement action against the platform. But that means that the FTC could easily be in the position of making content-based decisions in order to decide whether the platform's content moderation decision (in this case not to conform) looks like an antitrust violation or not. This situation deeply concerned these Congressmen, who also happen to be of the belief that the FTC is a captured agency prone to making content decisions that conflict with their own preferred viewpoints. While their concerns generally seem overwrought, bills like these start to give them an air of legitimacy. Because regardless of whether the FTC actually is captured by any particular point of view or not, if it is going to make ANY enforcement decision predicated on any expressive decisions, that's a huge Constitutional problem, irrespective of which point of view may suffer or benefit from such government action.

So while it is very difficult to credit the particular outrage of these Congressmen, their alarm illustrates the fundamental problem with these bills and other similar legislative efforts (including some anti-Section 230 bills that these Congressmen favor): these targeted businesses are not ordinary companies selling ordinary products and services where market forces act in traditional market-driven ways. These are platforms and services handling SPEECH. And when companies are in the speech-handling business we can't treat them like non-speech businesses without impinging on those speech interests themselves in an unconstitutional "make no law" sort of way.

But that is exactly what Congress is deliberately trying to do. It is the government's displeasure with how these companies have been intermediating speech that is at the root of these regulatory efforts. It's not a case of, "These companies are big, maybe that's bad, and oops! Our regulatory efforts have accidentally implicated a speech interest." The whole acknowledged point of these regulatory efforts is to target companies that are "different," and the way they are different is because they are companies in the online speech business. Congress is deliberately trying to make a law that will shape how companies do that business. And the fact that its efforts are running headlong into some of the most provocative political speech interests of the day is Exhibit A for why the whole endeavor is an unconstitutional one.


Posted on Techdirt - 25 June 2021 @ 10:43am

Congressman Nadler Throws The World's Worst Slumber Party In Order To Destroy The Internet

from the not-the-last-sleepless-night-this-nonsense-will-cause dept

House Judiciary Chairman Congressman Nadler really does not like "big tech" companies, and four of them (Apple, Google, Facebook, and Amazon) in particular. His antipathy has led him to bypass any further subcommittee inquiry to identify which issues raised by these companies might be suitable for regulation, or to develop careful language that could remediate them without being an unconstitutional and counter-productive legislative attack on the entire Internet economy.

Instead he called a full committee hearing this past Wednesday to debate and mark up a slate of six bills that are, in their current form, an unconstitutional and counter-productive legislative attack on the entire Internet economy. (Here's where we'd normally include an embed of the hearing, but for reasons that are not at all clear, after the session was live-streamed via YouTube, the recording is currently blocked -- perhaps the session, a debate about how best to break up Google, has literally broken Google by streaming a video too long for YouTube to deal with.)

Although the hearing lasted over 24 hours(!), from midday Wednesday into midday Thursday (with just one three-hour recess and a few other breaks for floor votes), there was little illumination on whether anything these bills target is truly an infirmity at all, an infirmity that Congress hasn't itself created, or an infirmity particular to just these targeted companies. Or whether any of these proposed "remedies" won't hurt the very interests they are ostensibly supposed to help.

Over the course of the hearing he did, of course, get some bipartisan pushback. Some of the most credible pushback seemed to come from Reps. Lofgren and Issa, who tried to alert the bills' proponents to many of the bills' defects, and also from Rep. Spartz, who kept noticing all the due process and doctrinal shortcuts built into the bills. And some of the language did get amended. But no evidence was considered and no experts were consulted. The committee was not interested in building any further record that might challenge (or even potentially support) the foregone conclusions that something must be done and these bills should be the something.

As a result, the fundamental problems with the bills remain because the fundamental problem remains: even after all that effort the Committee still lacks a meaningful understanding of how and why tech companies get big, including any reasons why we either value that bigness or otherwise force it to happen. The kindest read of the situation – as with most tech policy regulation, it seems – is that it's a bit like the story of the blind men and the elephant, where each man has a different perception of what an elephant must look like depending on whether they are holding its trunk, its ear, or its tail. Here the House Judiciary Committee is holding tightly to the tail and refusing to even contemplate that there might be any more elephant to consider. As a result it also can't recognize how some of the problems they are worried about are actually problems of their own making.

One conspicuous example that came up during this marathon bill markup session was the outrage expressed by some members of the committee that Amazon sometimes kicks off independent vendors using its marketplace services. But instead of asking why Amazon might do that, the committee chose to presume that it was due to nothing more than some nefarious anti-competitive instinct. And in making that presumption the committee ignored its own role in forcing Amazon's hand.

For instance, how does it make sense for Congress to think that Amazon should potentially be liable for counterfeit or defective goods vendors use its platform to sell, and yet simultaneously criticize Amazon for denying vendors with potentially problematic products access to that platform? Answer: it doesn't make any sense at all. Congress needs to decide: if it wants Amazon to be more open to more small business users, it has to make it safe for it to be.

Yet instead of fortifying laws that offer platforms protection to make it safe for them to be open to more users, including smaller businesses and potential competitors, Congress is instead hard at work crafting bills to further put the screws to the bigger platforms if they give access to the wrong third party user who does something with their platforms that Congress also doesn't like. It is deliberately creating a no-win situation for platforms that forces them to make only bad choices that no one will like – and that Congress will only want to further punish them for.


Posted on Techdirt - 16 June 2021 @ 12:10pm

Think Tech Companies Are Too Monopolistic? Then Stop Giving Them Patent Monopolies

from the no-one-to-blame-but-yourself dept

There is a lot of Sturm und Drang in the halls of government these days about corporate mergers – or, at least, tech company mergers (oddly, this ire doesn't seem to necessarily extend to all mergers). But despite all the gnashing and wailing there's not a lot of understanding of why they happen. Which is strange, because if you think there's a problem, it would help to understand WHY there is a problem, because that understanding will give clues about how to fix it.

So let's think about why a company "merges" with another. I put "merges" in quotes, because usually it boils down to one company buying another – how much of a "merger" it is depends on how similarly positioned the respective companies are and the details of the deal, but regulators today seem most upset about the A part of M&A (acquisition) so let's focus on that aspect. Why would a company want to acquire another?

One big reason relates to patent law. Let's say you're a company with a product, and you want to make that product do something more, or better, or have some new feature that it doesn't already have. You could develop it on your own, but (a) that will take time you may not have (ex: the market opening may close before you can get it out the door), (b) money you may not have (ex: you may not have the liquidity needed), or (c) other resources you may not have (ex: you may not have the expertise needed or be able to easily hire it). And (d) even if you had everything that was needed, you may still not be able to develop it on your own, because it turns out that someone has already developed the best method, gotten a patent on it, and now they can block anyone else from implementing it with at least the threat of litigation if not also actual litigation.

So the shortest distance between two points for many companies, especially larger tech companies, is often to simply buy the other company that has the missing piece of the technology puzzle they want. This acquisition then does a few things. For one, it gives the purchasing company access to that technology, which means it could potentially produce a better product. Of course, it also gives the company exclusive access to the technology and lets them block anyone else from using it, including their competitors.

In other words, through patent law (and also copyright law, but we'll set that aside for now) we intentionally give companies the power to act like monopolies, even when it's not actually in our interest. So of course we're upset that companies use that power, but the problem is that it's a power we gave them. Splitting them up is not the cure for the problem we created; the only solution is to stop giving them so much monopoly power in the first place.

There are at least two things we should do differently. First, we need to stop giving out so many patents full stop (and we need to stop condemning the people calling for fewer to be issued). Too often these patents are not for actually significant innovations (or innovations at all), and all too often they are on subject matter that should be unpatentable. Every few decades or so the US Supreme Court wakes from its slumber to remind the world that, at least under US law, software is not patentable subject matter. But these decisions haven't stopped people from pursuing, and getting, these sorts of patents. So it's very strange that people wonder why there are so many tech companies with software-driven products that have so much market clout, when it's a power our own USPTO has been purposefully giving them.

But even where patents are issued appropriately, there are still things we can do to mitigate their anti-competitive effects. One of the problems with patents today is how they give patent holders the ability to shut out other users of the technology. That's why patents can have this harmful effect on the marketplace, and it's also what has put modern patent law out-of-step with the authority granted Congress by the Constitution to pass a patent law at all. Congress gets to legislate in this area for the purpose of "promot[ing] the progress" of science and the useful arts, but the reality of today's patent law is that instead of promoting progress it ends up creating huge obstacles to it.

This injunctive power that comes with a patent is also unnecessary to achieve anything that patent law was intended to vindicate. Even accepting as true the idea that innovators need some sort of reward for being the first to innovate something the world would benefit from – beyond, of course, the inherent market advantage that comes from being first – all that means is that if there's some profit to be had from the innovation that the patentholder should get to realize at least some of it. But you don't need the power to shut out all other uses to glean that profit; all you need to do is license it.

Of course, backed with the power to enjoin other uses, license fees today are less about reasonable market rates that provide benefit to everyone: the innovator, the implementer, and the public, which now gets to have more innovation in the marketplace at prices the market can bear. Instead, patent revenue today is more about extortive windfalls. The policy change we need is to switch up that balance. And one way to do that is by replacing the current (and often disproportionate) ability of patentholders to enjoin any uses of their technology with some sort of compulsory license system. A compulsory license system means that patent holders cannot say no to competitors and other innovators who want to use or build on their technologies, either directly, by refusing permission, or indirectly, through excessive license fees. Instead the reward for their patent is the reasonable income returned by the license they must offer.

There are several upsides to changing patent law this way. For one, even if it somehow diminishes the perceived luster of having a patent, that would not be a bad thing: as explained before, the land grab that has been trying to turn every technological improvement, no matter how small, into a powerfully enforceable monopoly has been at the root of much of the anticompetitive behavior regulators now lament, and discouraging it would, on its own, help mitigate those problems. (Constraining the Patent Office so that it also grants fewer patents, especially specious and/or software ones, would help as well.)

Secondly, it also means that more people can use the technology, or even build on it. And that's good for society in general. The point of patents is to stimulate that innovation, and this change would do so by clearing the way to it. Furthermore, it would also have the effect of diminishing the monopolistic effects we don't like. Not only would patents now provide less monopoly power, but they would also lessen the incentive companies currently have to acquire other companies in order to hoard more of it.

Which would also lead to less market consolidation. For instance, smaller companies with a sought-after innovation, instead of being bought out by one company that could then exclusively benefit from it, could remain going concerns and continue to put products in the market. If the innovations were legitimately patentable they could also use those licensing profits to subsidize their own further innovation and product development. And to the extent that fewer innovations may be patentable, the good news is that this reduction in patentability would mean there would be more technologies available for them to help themselves to in order to compete, even against the companies we currently worry are too big.

Of course, there is a catch: compulsory license systems are great in theory but often cumbersome in practice, and, as we see in the copyright space, they can introduce new, unwelcome, and debilitating costs and regulatory impediments. (We'd also want to keep an eye on where non-practicing entities owning patents should be in this ecosystem, if anywhere.) So this isn't a case of "just add water" where tacking any old compulsory license system onto patent law will automatically make everything sunshine and roses. It will take some extremely careful thinking in how to implement.

In the meantime, however, we are seeing some other industry adaptations, like patent pools, emerge to help mitigate the extortive power of patents. And, in general, the idea of minimizing the exclusionary control of a patent, including through compulsory licenses, is a good one we would be better served to be thinking seriously about, rather than the zealous appetite to break up companies that has currently seized all of our attention. Especially when these proposed break-ups are so arbitrary, unprincipled, and ultimately costly in ways regulators do not seem to be contemplating.

In any case it just doesn't make any sense for the government to tell companies, on one hand, to go be monopolies and then immediately complain that they are acting like monopolies. The solution to the problem of companies acting monopolistically is to stop deliberately giving them so much power to do so.


Posted on Techdirt - 8 June 2021 @ 3:37pm

Why The Ninth Circuit's Decision In Lemmon V. Snap Is Wrong On Section 230 And Bad For Online Speech

from the another-hard-case dept

Foes of Section 230 are always happy to see a case where a court denies a platform its protection. What's alarming about Lemmon v. Snap is how comfortable so many of the statute's frequent defenders seem to be with the Ninth Circuit overruling the district court to deny Snapchat this defense. They mistakenly believe that this case raises a form of liability Section 230 was never intended to reach. On the contrary: the entire theory of the case is predicated on the idea that Snapchat let people talk about something they were doing. This expressive conduct is at the heart of what Section 230 was intended to protect, and denying the statute's protection here invites exactly the sort of harm to expression that the law was passed to prevent.

The trouble with this case, like so many other cases with horrible facts, is that it can be hard for courts to see that bigger picture. As we wrote in an amicus brief in the Armslist case, which was another case involving Section 230 with nightmarish facts obscuring the important speech issues in play:

"Tragic events like the one at the heart of this case can often challenge the proper adjudication of litigation brought against Internet platforms. Justice would seem to call for a remedy, and if it appears that some twenty-year old federal statute is all that stands between a worthy plaintiff and a remedy, it can be tempting for courts to ignore it in order to find a way to grant that relief."

Here some teenagers were killed in a horrific high-speed car crash, and of course the tragedy of the situation creates an enormous temptation to find someone to blame. But while we can be sympathetic to the court's instinct, we can't condone the facile reasoning it employed to look past the speech issues in play, because acknowledging them would have interfered with the conclusion the court was determined to reach. Especially because at one point the court even recognized that this was a case about user speech, before continuing on with an analysis that ignored its import:

Shortly before the crash, Landen opened Snapchat, a smartphone application, to document how fast the boys were going. [p.5] (emphasis added)

This sentence, noting that the boys were trying to document how fast they were going, captures the crux of the case: that the users were using the service to express themselves, albeit in a way that was harmful. But that's what Section 230 is built for, to insulate service providers from liability when people use their services to express themselves in harmful ways because, let's face it, people do it all the time. The court here wants us to believe that this case is somehow different from the sort of matter where Section 230 would apply and that this "negligent design" claim involves a sort of harm that Section 230 was never intended to apply to. Unfortunately it's not a view supported by the statutory text or the majority of precedent, and for good reason because, as explained below, it would eviscerate Section 230's critical protection for everyone.

As it had done in the HomeAway case, the court repeatedly tried to split an invisible hair to pretend it wasn't trying to impose liability arising out of the users' own speech. [See, e.g., p. 10, misapplying Barnes v. Yahoo]. Of course, a claim that there was a negligent design of a service for facilitating expression is inherently premised on the idea that there was a problem with the resulting expression. And just because the case was not about a specific form of legal liability manifest in the users' speech did not put it outside of Section 230. Section 230 is a purposefully broadly-stated law ("No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."), and here the court wants the platform to take responsibility for how its users used its services to express themselves. [p. 15, misapplying the case].

Section 230 also covers everything that could be wrong with expression unless the thing wrong with it happens to fall into one of the few exceptions the statute enumerates: it involves an intellectual property right, violates federal criminal law, or otherwise implicates FOSTA. None of those exceptions apply here, and, in fact, in the same section of the law where these few exceptions are set forth there is also a pre-emption provision explicitly barring any state law from becoming the basis of any new exceptions. Which, with this decision giving the go-ahead to a state law-based tort claim of "negligent design," is what the Ninth Circuit has now caused to happen.

It hurts online speech if courts can carve out new exceptions. If judges can ever look post hoc at a situation where expressive activity has led to harm and decide the degree of harm warrants stripping service providers of their Section 230 protection, then there is basically no point in having Section 230 on the books. If platforms have to litigate over whether it protects them, then it doesn't really matter whether it does or not, because they'll already have lost out on so much of the value the protection was supposed to afford them in making it possible to facilitate others' expression in the first place. The inevitable consequence of this functional loss of statutory protection is that there will be fewer service providers available to facilitate as much user expression, if any at all.

But even if there were some limiting principle that could be derived from this case to constrain courts from inventing any other new exceptions, just having this particular "negligent design" one will still harm plenty of speech. To begin with, one troubling aspect of the decision is that it is not particularly coherent, and one area of confusion relates to what it actually thinks is the negligent design. [see, e.g., p. 15]. The court spends time complaining about how Snapchat somehow deliberately encourages users to drive at unsafe speeds, even though the court itself acknowledged that while Snapchat apparently rewards users with "trophies, streaks, and social recognitions" to encourage them to keep using its service [p. 5], it "does not tell its users how to earn these various achievements" [p. 5]. It is a leap to say that Snap is somehow wrongfully encouraging users to do anything when it is not actually saying anything of the kind. [See p. 6 ("Many of Snapchat’s users suspect, if not actually 'believe,' that Snapchat will reward them for 'recording a 100-MPH or faster [s]nap' using the Speed Filter.")]. In fact, as the decision itself cites, Snapchat actually cautioned against reckless posting behavior. [See p. 6 with the screenshot including the text, "Don't snap and drive."] If the case were actually about Snap explicitly encouraging dangerous behavior ("Drive 100 mph and win a prize!") then there might legitimately be a claim predicated on the platform's own harmful speech, for which Section 230 wouldn't apply. But the record does not support this sort of theory, the theory of liability was predicated on a user's apparently harmful speech, and in any case the alleged encouragement wasn't really what the plaintiffs were charging was negligently designed anyway.

Instead, what was at issue was the "speed filter," a tool that helped users document how fast they were traveling. Unlike the district court, the Ninth Circuit could not seem to fathom that a tool that helped document speed could be used for anything other than unsafe purposes. But of course it can. Whether traveling at speed is dangerous depends entirely on context. A user in a plane could easily document traveling at significant speed perfectly safely, while a user on a bike documenting travel at a much slower speed could still be in tremendous peril. One reason we have Section 230 is because it is impossible for the service provider to effectively police all the uses of its platform, and even if it could, it would be unlikely to know whether the speeding was safe or not. But in denying Snapchat Section 230 protection with the presumption that such speech is always unsafe, the court has effectively decided that no one can ever document that they are traveling quickly, even in a safe way, because it is now too legally risky for the platform to give users the tools to do it.

Furthermore, if a platform could lose its Section 230 protection because the design of its services enabled speech that was harmful, it would eviscerate Section 230, because there are few, if any, platforms whose design would not. For example, Twitter's design lets people post harmful expression. Perhaps one might argue it even encourages them to by making it so easy to post such garbage. Of course, Twitter also makes it easy to post things that are not harmful too, but the Ninth Circuit's decision here does not seem to care that a design eliciting user expression might be used for both good and bad ends. Per this decision, which asserts a state law-created "duty to design a reasonably safe product" [see p. 13, misapplying the Doe 14 v. Internet Brands case], even a service that meets the definition of an "interactive computer service" set forth in Section 230 (along with its pre-emption provision) no longer qualifies for Section 230's protection if its design could be used to induce bad expression. But that would effectively mean that everyone could always plead around Section 230, because nearly every Section 230 case arises from someone having used the service in a harmful way the service enabled. It is unfortunate that the Ninth Circuit has now opened the door to such litigation, as the consequences stand to be chilling to all kinds of online speech and services Section 230 was designed to protect.


Posted on Techdirt - 7 May 2021 @ 1:37pm

Thanks To Section 230, I Can Correct Wired's Portrayal Of My Section 230 Advocacy

from the speaking-about-speaking-about-speech dept

I always thought it would be a great honor to be referenced in the hallowed pages of WIRED magazine. Like Mike, I've been reading it since its beginning, back when I was a student studying information technology and watching the Internet take hold in the world.

This week it finally happened, and ugh... My work was referenced in support of a terrible take on Section 230, which not only argued that Section 230 should be repealed (something that I spend a great deal of personal and professional energy trying to push back against) but masqueraded as a factual explanation of how there was no possible reasonable defense of the law and that therefore all its defenders (including me) are, essentially, pulling a fast one on the public by insisting it is important to hold onto. After all, as the title says, "Everything you've heard about Section 230 is wrong," including, it would seem, everything we've been saying about it all along.

Such an assertion is, of course, ridiculous. But this isn't the first bad Section 230 take and unfortunately is unlikely to be the last, so if that were all it was it might be much easier to simply let it fade into history. But that wasn't all it was, because the piece didn't just make that general statement; it used my own work to do it, and in the most disingenuous way.

Ordinarily, of course, my work can speak for itself. The problem was, the author of this piece didn't let it speak for itself. Instead he stripped it of its context, plucking out only bits of the overall argument, citing ideas so incompletely, so orphaned from the overall message in which they were delivered, as to effectively mischaracterize my position. And then he used that mischaracterization of what I had argued as ammunition to underpin his anti-230 argument.

Nor did the author let me speak for my work either, which could have corrected his apparent misapprehensions, if not about Section 230's merit at large, then at least about the bigger picture I was getting at in the particular brief he had homed in on. But despite speaking with several of the law's detractors, he spoke to only one of its defenders, even though he obviously considered several of us expert enough to misleadingly reference our work in support of his dubious argument.

It reads as a hit piece, not just against the law itself but its supporters, and one that he was apparently so determined to make that speaking with us, and affording us the chance to explain our views and what informs them, was not something he could chance. After all, we might have convinced him of the statute's merit, or at least given him some actual factual fodder to include in his supposedly factual accounting of the law, and that was obviously not the piece he wanted to write.

And so it turns out that my first mention in WIRED is a misrepresentation of my advocacy. Which is rather depressing, personally, but it raises another issue, and one that ties back into the advocacy I do defending the statute and why I do it so fervently.

It's because the only way to make sure that my actual views can get widely expressed is to express them directly myself, and for that I need to use outlets that are protected by Section 230, like Twitter. Or maybe sometimes Facebook. Or even (as we always point out in our briefs) a site like Techdirt. If I have things to say, I obviously cannot depend on traditional media gatekeepers like Condé Nast magazines to help me say them. And while I am obviously eager to set the record straight on my free speech advocacy, I don't just speak for myself in expressing this concern. Without this law, we would all be without these outlets and without these opportunities to express ourselves publicly, even when we need to. Which would ultimately foreclose a lot of expression, including plenty of even more necessary expression.

This stark calculus is also why it is so odd to see some of the law's opponents (including those who were actually quoted in the article) praise the article for having included lots of voices in it. Sure, it shared some voices. But only some. And the absence of other voices shows why Section 230 is so needed: because often Section 230-enabled outlets are the only way many voices – including the most marginalized and vulnerable many of these advocates profess to be championing – can be heard.

Of course, to this point the anti-230 people unhappy about being de-platformed may say, "See? We told you it's bad to lose access to an online outlet for expression. So get rid of 230 so we can come back!" But this call to change Section 230 is silly, for a number of reasons. One is that Section 230 is not at the root of their de-platforming; the First Amendment is. Another is that, while I'm unhappy about WIRED's editorial decision to publish this piece of questionable journalism, I remain perfectly happy and committed to defending its right to publish its questionable journalism. Nobody's expression is vindicated by using law to limit anyone else's expressive rights, even if they have used them questionably; if anything, the situation calls for doubling down on speech protection, including with a law like Section 230 that makes First Amendment rights less illusory and more substantively meaningful for everyone.

Furthermore, as the Copia Institute has talked about many times in our Section 230 advocacy, there is an internal balance to Section 230 that allows it to work effectively. In order to get the most beneficial and least deleterious content online that we can overall, platforms need to be legally safe to leave as much as they want up and take as much as they want down. When either protection, currently guaranteed by Section 230, starts to disappear, it starts tying platforms' hands such that they are no longer able to do the best they can on either front. In other words, making it legally impossible for platforms to remove users is not going to lead to more valuable and less problematic content online. It will just put platforms under strain and make it hard for them to be available for anyone to use.

So repealing Section 230 is not going to help anyone speak online. And that includes both the people who have been de-platformed and the vulnerable who always need to have a platform available to be able to speak out against those who would seek to hurt them. Who, ironically, are often the very same people complaining of being de-platformed. It is very strange to see the law's opponents quoted in the article, many of whom claim to oppose the law as a means of protecting the vulnerable, push for a policy change that will only make the vulnerable even more so. Especially when it's the exact same policy change that people who would want to hurt the vulnerable keep calling for themselves. They can't both be right, and the fact that these two fundamentally opposed groups would seem to want the same thing itself suggests that neither of them is.

What this episode shows is that people cannot be dependent on the traditional media gatekeepers to enable their public expression, no matter how much they need to be able to express it. In fact, the less powerful the voice, the more important it is that these voices not be dependent on gatekeepers to speak so that they can always be able to speak against those who might hurt them. And so they need Section 230 to exist to enable other outlets they can use instead (including, potentially, their own, which Section 230 makes it much more practically possible to make). Without that law, and without these outlets, we will be without that expression, and that will be no good for anyone.


Posted on Techdirt - 5 May 2021 @ 11:02am

The Oversight Board's Decision On Facebook's Trump Ban Is Just Not That Important

from the undue-ado dept

Today is Facebook Oversight Board Hysteria Day, because today is the day that the Facebook Oversight Board has rendered its decision about Facebook's suspension of Donald Trump. And it has met the moment with an appropriately dull decision, dripping in pedantic reasonableness, that is largely consistent with our Copia Institute recommendation.

If you remember, we were hesitant about submitting a comment at all. And the reaction to the Board's decision bears out why. People keep reacting as though it is some big, monumental, important decision, when, in actual fact, it isn't at all. In the big scheme of things, it's still just a private company being advised by its private advisory board on how to run its business, nothing more. As it is, Trump himself is still on the Internet – it's not like Facebook actually had the power to silence him. We need to be worried about when there actually is power to silence people, and undue concern about Facebook's moderation practices only distracts us from those real threats. Or, worse, leads people to try to create actual law that will end up having the effect of giving others the legal power to suppress expressive freedom.

So our pride here is necessarily muted, because ultimately this decision just isn't that big a deal. Still, as a purely internal advisory decision, one intended to help the company act more consistently in the interests of its potential user base, it does seem to be a good one given how it hews to our key points.

First, we made the observation that then-President Trump's use of his Facebook account threatened real, imminent harm. We did, however, emphasize the point that it was generally better to try not to delete speech (or speakers). Nevertheless, sometimes it might need to be done, and in those cases it should be done "with reluctance and only limited, specific, identifiable, and objective criteria to justify the exception." There might not ultimately be a single correct decision, we wrote, for whether speech should be left up or taken down. "[I]n the end the best decision may have little to do with the actual choice that results but rather the process used to get there."

And this sort of reasoning is basically at the heart of the Board's decision: Trump's posts were serious enough to justify a sanction, including a suspension, but imposing the indefinite suspension appeared to be unacceptably arbitrary. Per the Board, Facebook needs to make these sorts of decisions consistently and transparently from here on out.

On January 6, Facebook’s decision to impose restrictions on Mr. Trump’s accounts was justified. The posts in question violated the rules of Facebook and Instagram that prohibit support or praise of violating events, including the riot that was then underway at the U.S. Capitol. Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in imposing account-level restrictions and extending those restrictions on January 7. However, it was not appropriate for Facebook to impose an indefinite suspension. Facebook did not follow a clear published procedure in this case. Facebook’s normal account-level penalties for violations of its rules are to impose either a time-limited suspension or to permanently disable the user’s account. The Board finds that it is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

The Board has given Facebook six months to re-evaluate the suspension in accordance with clear rules.

If Facebook determines that Mr. Trump’s accounts should be restored, Facebook should apply its rules to that decision, including any modifications made pursuant to the policy recommendations below. Also, if Facebook determines to return him to the platform, it must address any further violations promptly and in accordance with its established content policies.

As for what those rules should be, the Board also made a few recommendations. First, it noted that "political leader" versus "influential user" is not always a meaningful distinction. Indeed, we had noted that Trump's position cut both ways: as a political leader, there was public benefit to knowing what he had to say. On the other hand, that position also gave his posts greater ability to do harm. The Board for its part noted that context will matter; while the rules should ideally be the same for everyone, since the impact won't be, it is ok for Facebook to take into account the specific probability of imminent harm in making its decisions.

The Board believes that it is not always useful to draw a firm distinction between political leaders and other influential users. It is important to recognize that other users with large audiences can also contribute to serious risks of harm. The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users.

The Board also cited general principles of human rights law, and specifically the Rabat Plan of Action, "to assess the capacity of speech to create a serious risk of inciting discrimination, violence, or other lawless action." As for how long suspensions should generally last, the Board said they should be "long enough to deter misconduct and may, in appropriate cases, include account or page deletion." Facebook is therefore free to re-impose Trump's suspension as it re-evaluates it, if it feels it remains warranted. It just needs to do so in a more transparent way that would be scalable to other similar situations. As it summarized:

Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users. These rules should ensure that when Facebook imposes a time-limited suspension on the account of an influential user to reduce the risk of significant harm, it will assess whether the risk has receded before the suspension ends. If Facebook identifies that the user poses a serious risk of inciting imminent violence, discrimination or other lawless action at that time, another time-bound suspension should be imposed when such measures are necessary to protect public safety and proportionate to the risk. The Board noted that heads of state and other high officials of government can have a greater power to cause harm than other people. If a head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a period sufficient to protect against imminent harm. Suspension periods should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion.

As we suggested in our comment, the right policy choices for Facebook to make boil down to the ones that best make Facebook the community it wants to be. At its core, that's what the Board's decision is intended to help with: point out where it appears Facebook has fallen short of its own espoused ideals, and help it get back on track in the future.

Which is, overall, a good thing. It just isn't, as so many critics keep complaining, everything. The Internet is far more than just Facebook, no matter what Trump or his friends think. And there are far more important things for those of us who care about preserving online expression to give our attention to than this.


Posted on Techdirt - 12 April 2021 @ 12:01pm

Oh Look, Here's Some More Culture Being Canceled, Now Thanks To The Second Circuit

from the someday-my-prints-will-come dept

This decision, Andy Warhol Foundation for the Visual Arts v. Goldsmith, came out only a few weeks ago, shortly before the Supreme Court ruled in Google v. Oracle. In light of that latter decision it's not clear that this one is still good law. Then again, it's not clear it ever was.

The decision is the latest by a federal Court of Appeals eviscerating fair use. I recently wrote about the Ninth Circuit's ruling in Dr. Seuss Enterprises v. ComicMix, which also undermined fair use. To be fair, this latest one is perhaps a little less egregious. In this case, for instance, the copyright holder the court ruled in favor of is still alive, while the defending party (referred to in the decision as AWF) is the successor of someone who is dead. In the Dr. Seuss case it was the other way around, with the court going out of its way to let the successor to a dead person's copyrights stick it to a live creator trying to make new works the dead person was never going to make for any number of reasons, not the least of which being that he's dead.

But to call this decision less egregious is really more a statement of how awful the Dr. Seuss case was than any sort of compliment. The implications of this one are just as dire.

For background, the opening paragraph of the decision sets forth the basic facts (or you can read Mike's writeup about the District Court ruling two years ago):

This case concerns a series of silkscreen prints and pencil illustrations created by the visual artist Andy Warhol based on a 1981 photograph of the musical artist Prince that was taken by Defendant-Appellant Lynn Goldsmith in her studio, and in which she holds copyright. In 1984, Goldsmith’s agency, Defendant-Appellant Lynn Goldsmith, Ltd. (“LGL”), then known as Lynn Goldsmith, Inc., licensed the photograph to Vanity Fair magazine for use as an artist reference. Unbeknownst to Goldsmith, that artist was Warhol. Also unbeknownst to Goldsmith (and remaining unknown to her until 2016), Warhol did not stop with the image that Vanity Fair had commissioned him to create, but created an additional fifteen works, which together became known as the Prince Series. [p. 3]

Prince's death in 2016 seems to have been what precipitated Goldsmith's discovery of the additional prints, because, as the decision later explained, it set in motion a chain of events that led to it.

On April 22, 2016, the day after Prince died, Condé Nast, Vanity Fair’s parent company, contacted AWF. Its initial intent in doing so was to determine whether AWF still had the 1984 image, which Condé Nast hoped to use in connection with a planned magazine commemorating Prince’s life. After learning that AWF had additional images from the Prince Series, Condé Nast ultimately obtained a commercial license, to be exclusive for three months, for a different Prince Series image for the cover of the planned tribute magazine. Condé Nast published the tribute magazine in May 2016 with a Prince Series image on the cover. Goldsmith was not given any credit or attribution for the image, which was instead attributed solely to AWF.

It was at this point that Goldsmith first became aware of the Prince Series. In late July 2016, Goldsmith contacted AWF to advise it of the perceived infringement of her copyright. That November, Goldsmith registered the Goldsmith Photograph with the U.S. Copyright Office as an unpublished work. [p. 11]

There's plenty wrong with this decision. A lot of what's wrong is with the decision itself, but even where the decision didn't get the law wrong, it shows what is wrong with the law.

As far as the decision itself, one of the most prominent problems is its analysis of the four fair use factors, particularly on the question of whether the Warhol prints were transformative. Like the Ninth Circuit, the Second Circuit here tied itself in knots, "clarifying" [p. 18] binding precedent in that circuit (Cariou v. Prince) by focusing more heavily on the five prints in that case that hadn't been cleared as fair use instead of the 25 that were [p. 19-20], in order to decide that the Warhol prints were not transformative. Some of that knot-tying:

Which brings us back to the Prince Series. The district court held that the Prince Series works are transformative because they “can reasonably be perceived to have transformed Prince from a vulnerable, uncomfortable person to an iconic, larger-than-life figure.” That was error.

Though it may well have been Goldsmith’s subjective intent to portray Prince as a “vulnerable human being” and Warhol’s to strip Prince of that humanity and instead display him as a popular icon, whether a work is transformative cannot turn merely on the stated or perceived intent of the artist or the meaning or impression that a critic – or for that matter, a judge – draws from the work. Were it otherwise, the law may well “recogniz[e] any alteration as transformative.” 4 Melville B. Nimmer & David Nimmer, Nimmer on Copyright § 13.05(B)(6); see also Google, 804 F.3d at 216 n.18 (“[T]he word ‘transformative,’ if interpreted too broadly, can also seem to authorize copying that should fall within the scope of an author’s derivative rights.”). Rather, as we have discussed, the court must examine how the works may reasonably be perceived.

In conducting this inquiry, however, the district judge should not assume the role of art critic and seek to ascertain the intent behind or meaning of the works at issue. That is so both because judges are typically unsuited to make aesthetic judgments and because such perceptions are inherently subjective. As Goldsmith argues, her own stated intent notwithstanding, “an audience viewing the [Goldsmith] [P]hotograph today, across the vista of the singer’s long career, might well see him in a different light than Goldsmith saw him that day in 1981.” Appellants’ Br. at 40. We agree; it is easy to imagine that a whole generation of Prince’s fans might have trouble seeing the Goldsmith Photograph as depicting anything other than the iconic songwriter and performer whose musical works they enjoy and admire.

Instead, the judge must examine whether the secondary work’s use of its source material is in service of a “fundamentally different and new” artistic purpose and character, such that the secondary work stands apart from the “raw material” used to create it. Although we do not hold that the primary work must be “barely recognizable” within the secondary work, as was the case with the works held transformative in Cariou, the secondary work’s transformative purpose and character must, at a bare minimum, comprise something more than the imposition of another artist’s style on the primary work such that the secondary work remains both recognizably deriving from, and retaining the essential elements of, its source material. [p. 26-28]

That last bit, requiring that the secondary work comprise "something more than the imposition of another artist's style" on the primary work, is a new statement of law. But it's a "clarification" the court needed to make in order to impose its own judgment on what it perceived to be an inadequate number of changes imposed by Warhol on the original photograph.

With this clarification, viewing the works side-by-side, we conclude that the Prince Series is not “transformative” within the meaning of the first factor. That is not to deny that the Warhol works display the distinct aesthetic sensibility that many would immediately associate with Warhol’s signature style – the elements of which are absent from the Goldsmith photo. […] As in the case of such paradigmatically derivative works, there can be no meaningful dispute that the overarching purpose and function of the two works at issue here is identical, not merely in the broad sense that they are created as works of visual art, but also in the narrow but essential sense that they are portraits of the same person. […] Although this observation does not per se preclude a conclusion that the Prince Series makes fair use of the Goldsmith Photograph, the district court’s conclusion rests significantly on the transformative character of Warhol’s work. But the Prince Series works can’t bear that weight. [p. 28-30]

In the court's view, Warhol used too much of the original.

Warhol created the series chiefly by removing certain elements from the Goldsmith Photograph, such as depth and contrast, and embellishing the flattened images with “loud, unnatural colors.” Nonetheless, although we do not conclude that the Prince Series works are necessarily derivative works as a matter of law, they are much closer to presenting the same work in a different form, that form being a high-contrast screenprint, than they are to being works that make a transformative use of the original. Crucially, the Prince Series retains the essential elements of the Goldsmith Photograph without significantly adding to or altering those elements. [p. 30]

It mattered not at all to the court that Warhol had significantly shifted the message of the original photograph. While Goldsmith had wanted to capture and convey Prince's strength as an artist, she was left with a jittery subject who left the sitting early. [p. 7]. It was Warhol who had, as the district court found, "transformed the image of Prince from a vulnerable, uncomfortable person to an iconic, larger-than-life figure." [footnote 4]. She hadn't communicated that message in her image; he had in his transformation of it.

[T]he Prince Series retains the essential elements of its source material, and Warhol’s modifications serve chiefly to magnify some elements of that material and minimize others. While the cumulative effect of those alterations may change the Goldsmith Photograph in ways that give a different impression of its subject, the Goldsmith Photograph remains the recognizable foundation upon which the Prince Series is built. [p. 31]

But this is just more knot-tying.

We begin with the uncontroversial proposition that copyright does not protect ideas, but only “the original or unique way that an author expresses those ideas, concepts, principles, or processes.” Rogers, 960 F.2d at 308. As applied to photographs, this protection encompasses the photographer’s “posing the subjects, lighting, angle, selection of film and camera, evoking the desired expression, and almost any other variant involved.” Id. at 307. The cumulative manifestation of these artistic choices – and what the law ultimately protects – is the image produced in the interval between the shutter opening and closing, i.e., the photograph itself. This is, as we have previously observed, the photographer’s “particular expression” of the idea underlying her photograph. Leibovitz, 137 F.3d at 115-16.

It is thus easy to understand why AWF’s contention misses the mark. The premise of its argument is that Goldsmith cannot copyright Prince’s face. True enough. Were it otherwise, nobody else could have taken the man’s picture without either seeking Goldsmith’s permission or risking a suit for infringement. But while Goldsmith has no monopoly on Prince’s face, the law grants her a broad monopoly on its image as it appears in her photographs of him, including the Goldsmith Photograph. [The court here cited precedent that concluded that “defendant could freely copy the central facial features of the Barbie dolls” and held that Mattel could not monopolize the idea of a doll with “upturned nose, bow lips, and wide eyes,” though the law protected its specific rendition thereof.] And where, as here, the secondary user has used the photograph itself, rather than, for example, a similar photograph, the photograph’s specific depiction of its subject cannot be neatly reduced to discrete qualities such as contrast, shading, and depth of field that can be stripped away, taking the image’s entitlement to copyright protection along with it.

With that in mind, we readily conclude that the Prince Series borrows significantly from the Goldsmith Photograph, both quantitatively and qualitatively. While Warhol did indeed crop and flatten the Goldsmith Photograph, the end product is not merely a screenprint identifiably based on a photograph of Prince. Rather it is a screenprint readily identifiable as deriving from a specific photograph of Prince, the Goldsmith Photograph. A comparison of the images in the Prince Series makes plain that Warhol did not use the Goldsmith Photograph simply as a reference or aide-mémoire in order to accurately document the physical features of its subject. Instead, the Warhol images are instantly recognizable as depictions or images of the Goldsmith Photograph itself. [p. 38-41]

The court is cynical about why Warhol used this photograph, musing that his choice to use this one must have been to exploit a particular artistic value inherent to it. ("[W]e have little doubt that the Prince Series would be quite different had Warhol used [another picture] instead of the Goldsmith Photograph to create it." [p. 41].) At the same time, it acknowledged that the original photograph was the only one available to him. [p. 42]. The knot-tying continued as the court faulted him both for using the photograph and, at the same time, for changing it.

For example, the fact that Prince’s mustache appears to be lighter on the right side of his face than the left is barely noticeable in the grayscale Goldsmith Photograph but is quite pronounced in the black-and-white Prince Series screenprints. Moreover, this feature of the Goldsmith Photograph is, again, not common to all other photographs of Prince even from that brief session. The similarity is not simply an artefact of what Prince’s facial hair was like on that date, but of the particular effects of light and angle at which Goldsmith captured that aspect of his appearance. [footnote 8]

The absurdity of this decision points to the real issue with it: as with the Dr. Seuss one, it conflates the copyright holder's right to control the making of derivative works with the public's fair use right to transform existing works, for which permission from the copyright holder should not be needed. As these courts have set forth, the latter right has now been all but subsumed by the former. Which is a huge problem, because the whole point of fair use is that of course you should be able to use a previous work in making your new one. But these decisions effectively put previous works still under copyright off-limits by construing potentially any subsequent use as a derivative use the copyright holder has the power to control.

The court's reasoning on the fourth factor drives home this view.

In assessing market harm, we ask not whether the second work would damage the market for the first (by, for example, devaluing it through parody or criticism), but whether it usurps the market for the first by offering a competing substitute. See, e.g., Bill Graham Archives, 448 F.3d at 614. This analysis embraces both the primary market for the work and any derivative markets that exist or that its author might reasonably license others to develop, regardless of whether the particular author claiming infringement has elected to develop such markets. [p. 44]

It did not matter to the court that Goldsmith had not tried to further license the original photograph in the intervening years; the court was worried that she potentially could, and that this Warhol work, different though it is from her original photo, would somehow "usurp" her licensing market.

While Goldsmith does not contend that she has sought to license the Goldsmith Photograph itself, the question under this factor is not solely whether the secondary work harms an existing market for the specific work alleged to have been infringed. Cf. Castle Rock, 150 F.3d at 145-46 (“Although Castle Rock has evidenced little if any interest in exploiting this market for derivative works . . . the copyright law must respect that creative and economic choice.”). Rather, we must also consider whether “unrestricted and widespread conduct of the sort engaged in by [AWF] would result in a substantially adverse impact on the potential market” for the Goldsmith Photograph. Campbell, 510 U.S. at 590 (internal quotation marks omitted) (alterations adopted)); see also Fox News Network, LLC v. TVEyes, Inc., 883 F.3d 169, 179 (2d Cir. 2018). [p. 45-46]

And the court put the burden on the transformative user to prove that the use would not impact her market for licensing her works, [p. 47], while at the same time discounting the evidence AWF provided that there was no such conflict here. Since both AWF and Goldsmith had sought to license their depictions, including for further derivative works, the court thought that was enough of a conflict.

In any case, whatever the scope of Goldsmith’s initial burden, she satisfied it here. Setting aside AWF’s licensing of Prince Series works for use in museum exhibits and publications about Warhol, which is not particularly relevant for the reasons set out in our discussion of the primary market for the works, there is no material dispute that both Goldsmith and AWF have sought to license (and indeed have successfully licensed) their respective depictions of Prince to popular print magazines to accompany articles about him. As Goldsmith succinctly states: “both [works] are illustrations of the same famous musician with the same overlapping customer base.” Appellants’ Br. at 50. Contrary to AWF’s assertions, that is more than enough. See Cariou, 714 F.3d at 709 (“[A]n accused infringer has usurped the market for copyrighted works . . . where the infringer’s target audience and the nature of the infringing content is the same as the original.”). And, since Goldsmith has identified a relevant market, AWF’s failure to put forth any evidence that the availability of the Prince Series works poses no threat to Goldsmith’s actual or potential revenue in that market tilts the scales toward Goldsmith. [p. 47-48]

And not only were both in the licensing business, but they also licensed for derivatives.

Finally, the district court entirely overlooked the potential harm to Goldsmith’s derivative market, which is likewise substantial. Most directly, AWF’s licensing of the Prince Series works to Condé Nast without crediting or paying Goldsmith deprived her of royalty payments to which she would have otherwise been entitled. Although we do not always consider lost royalties from the challenged use itself under the fourth factor (as any fair use necessarily involves the secondary user using the primary work without paying for the right to do so), we do consider them where the secondary use occurs within a traditional or reasonable market for the primary work. See Fox News, 883 F.3d at 180; On Davis v. Gap, Inc., 246 F.3d 152, 176 (2d Cir. 2001). And here, that market is established both by Goldsmith’s uncontroverted expert testimony that photographers generally license others to create stylized derivatives of their work in the vein of the Prince Series, see J. App’x 584-99, and by the genesis of the Prince Series: a licensing agreement between LGL and Vanity Fair to use the Goldsmith Photograph as an artist reference. [p. 48-49]

And that's a problem, thought the court in horror, as it echoed the Ninth Circuit, because what if every fair user could make these sorts of transformative fair uses for free?

Further, we also must consider the impact on this market if the sort of copying in which Warhol engaged were to become a widespread practice. That harm is also self-evident. There currently exists a market to license photographs of musicians, such as the Goldsmith Photograph, to serve as the basis of a stylized derivative image; permitting this use would effectively destroy that broader market, as, if artists “could use such images for free, there would be little or no reason to pay for [them].” Barcroft Media, Ltd. v. Coed Media Grp., LLC, 297 F. Supp. 3d 339, 355 (S.D.N.Y. 2017); see also Seuss, 983 F.3d at 461 (“[T]he unrestricted and widespread conduct of the sort ComicMix is engaged in could result in anyone being able to produce” their own similar derivative works based on Oh, the Places You’ll Go!). This, in turn, risks disincentivizing artists from producing new work by decreasing its value – the precise evil against which copyright law is designed to guard. [p. 49-50]

But as the Supreme Court just reminded us, the real thing to guard against is interfering with the purpose of copyright, to make sure it drives further progress. The reasoning these courts keep employing overlooks that fundamental animating purpose behind copyright law and instead treats it as some sort of expansive power of preclusion awarded to original creators, even when there's little reason to award them this incredibly generous benefit.

Look at the facts of this case: the original photograph was taken nearly 40 years ago of a person who's now been dead for nearly five. The Warhol prints were made nearly 35 years ago, by someone who himself has been dead for more than thirty years, and thus was unable to testify about his artistic choices to defend his work – which now other people uninvolved with making those choices seek to exploit the copyrights of. Meanwhile, the original photographer had no idea that these works had existed all this time. But because copyrights last for so long, we're litigating this now.

As suggested at the outset, it appears that what may have motivated the court to tie fair use doctrine into such a pretzel in rendering this decision is that the original creator is still around, and it intuitively seems consistent with modern copyright to ensure that an original creator can get the benefit of their creativity. But it could just as easily have been her successors or assignees bringing the litigation, not just 30-40 years later than any alleged infringement but even more decades into the future because copyright lasts that long, regardless of whether it is needed to incentivize further creativity. There is nothing about this decision that limits it to just her. Yet every time decisions like this are issued, expanding the power of a copyright holder, it only threatens to chill that very creativity copyright is supposed to encourage.

And less by clear rule and more by minefield. As a case in point, while one panel of the Second Circuit generated this unfortunate decision shrinking fair use, another panel shortly thereafter upheld a fair use defense in another case also involving a photograph. In Marano v. Metropolitan Museum of Art, the Met was found not to be liable for using a cropped version of an existing photo that focused on Eddie Van Halen's guitar, given that the original photo's composition had focused on Eddie Van Halen himself.

We begin with the first factor—often framed as whether the use is “transformative”—which constitutes the “heart of the fair use inquiry.” Blanch v. Koons, 467 F.3d 244, 251 (2d Cir. 2006) (quoting Davis v. The Gap, Inc., 246 F.3d 152, 174 (2d Cir. 2001)). The Met’s exhibition transformed the Photo by foregrounding the instrument rather than the performer. Whereas Marano’s stated purpose in creating the Photo was to show “what Van Halen looks like in performance,” App’x at 29, the Met exhibition highlights the unique design of the Frankenstein guitar and its significance in the development of rock n’ roll instruments. Further, the Photo appears alongside other photographs showing the physical composition of the guitar, which are collectively accompanied by text discussing the guitar’s genesis, specifications, and impact on rock n’ roll music, not Van Halen’s biography or discography. This context “adds something new, with a further purpose or different character, altering the [Photo] with new expression, meaning, or message.” Campbell, 510 U.S. at 579.

More critically, this time the court did not express the same hostility towards "transformative markets" that the Warhol decision did.

This transformative use of the Photo is consistent with the remaining factors under Section 107 tipping in favor of fair use. While the Photo is a “creative work of art,” that determination is of “limited usefulness” given that the Met is using the Photo “for a transformative purpose.” Bill Graham, 448 F.3d at 612. Similarly, the Met’s “copying the entirety of [the Photo] [was] . . . necessary to make a fair use of the image” as one of many “historical artifacts” in the exhibition. Id. at 613. Likewise, a “transformative market” does not qualify as a “traditional, reasonable, or likely to be developed market,” id. at 614 (quoting Am. Geophysical Union v. Texaco Inc., 60 F.3d 913, 930 (2d Cir. 1994)), and therefore Marano cannot “prevent others from entering fair use markets merely ‘by developing or licensing a market for . . . transformative uses of [his] own creative work,’” id. at 615 (quoting Castle Rock Ent., Inc. v. Carol Pub. Grp., Inc., 150 F.3d 132, 146 n.11 (2d Cir. 1998)). There is no indication in the record that the Met’s use of the Photo on a web page describing the Frankenstein guitar could, in any way, impair any other market for commercial use of the Photo, or diminish its value. On balance, these factors indicate that the Met’s display of the Photo qualifies for the fair use exception under Section 107.

There are certainly some differences between these decisions. For instance, the Met is a museum, which helps weigh in favor of fair use but, notably, is not inherently dispositive. Also, as some have noted, recent decisions by the Second Circuit upholding fair use have tended to not be precedential, while decisions limiting the doctrine, such as the Warhol one, have tended to be. But on the whole there is not much daylight between their analytical approaches, only between their ultimate conclusions. Which itself strongly suggests that one of the decisions was wrong. For all the aforementioned reasons, and as buttressed by the Google v. Oracle decision, it is likely to be the Warhol one.

Read More | 15 Comments | Leave a Comment..

Posted on Techdirt - 12 March 2021 @ 7:39pm

What Stevie Ray Vaughan Can Teach Us About Security Design

from the instructive-parable dept

The SolarWinds intrusion, with the revelation that part of the architecture included, at least for a while, a really weak default password, and the hack of the water treatment plant with a similar password reuse problem, reminded me of a story I heard not long ago about another instance of poor security design.

In a recent fan Q&A on Facebook, Bill Gibson, the drummer for Huey Lewis and the News, told a story about his friendship with Stevie Ray Vaughan. Stevie Ray Vaughan and his band Double Trouble had opened for the News for a while in the mid-1980s, and in that time Bill and Stevie had become good friends. Back at the hotel one evening after a show in New York City it came up that Bill had seen Jimi Hendrix perform something like seven times. Stevie, a guitarist who idolized Hendrix, was in awe. He wanted to hear everything about what it was like seeing Hendrix play, so he grabbed some beer and they settled in for an evening of Bill telling Stevie everything he remembered.

By 3:00 AM they were out of beer, so they went down to Stevie's tour bus parked out in front of the hotel to get some more. He opened the bus with his key and started looking for the cooler he kept it in. "That's odd," Bill recalls Stevie musing, "The cooler is usually kept in this spot over here." Eventually he found a cooler elsewhere, removed the needed beer, and they left to go back up to finish their conversation.

The next day they discovered why they'd had trouble finding the cooler. At the time, most bands were touring in buses that all came from the same company, that all looked the same, and that all were opened by the exact same key. The reason Stevie could not find the cooler where he expected it to be was that they were not on the bus they thought they were on. Instead of being on Stevie's bus, they were actually on UB40's bus, which, unbeknownst to them, had pulled up that night while they'd been ensconced in the hotel talking. Which Stevie's key had opened. And on which the UB40 band had apparently been sleeping the whole time Stevie and Bill were there inadvertently pilfering their beer…

So let this story be a lesson to security designers, people who really should be employing security designers, and pretty much everyone else who likes to reuse their passwords: When the security credentials for one resource can be used to gain access elsewhere, especially in a way you did not anticipate, there's really not that much security to be had.
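The tour-bus problem translates directly into code. Below is a toy sketch (the class, names, and scenario are invented for illustration, not any real system) of the difference between a fleet-wide shared credential and per-resource credentials, using Python's standard `secrets` module:

```python
import secrets

class Bus:
    """A toy lockable resource: unlocks only for its own key."""
    def __init__(self, owner, key):
        self.owner = owner
        self._key = key

    def unlock(self, key):
        # Constant-time comparison, as real credential checks should use.
        return secrets.compare_digest(key, self._key)

# Anti-pattern: one vendor default shared across the whole fleet,
# so any key holder can open any bus (including UB40's).
SHARED_KEY = "vendor-default-1985"
fleet = [Bus("Stevie Ray Vaughan", SHARED_KEY), Bus("UB40", SHARED_KEY)]
assert all(bus.unlock(SHARED_KEY) for bus in fleet)

# Better: issue a unique random credential per resource, so a key
# grants access only where it was meant to.
keys = {owner: secrets.token_hex(16) for owner in ("Stevie Ray Vaughan", "UB40")}
fleet = [Bus(owner, key) for owner, key in keys.items()]
stevies_key = keys["Stevie Ray Vaughan"]
assert fleet[0].unlock(stevies_key)      # opens his own bus
assert not fleet[1].unlock(stevies_key)  # but not UB40's
```

The same logic is why reused passwords are dangerous: the blast radius of any one compromise (or, as here, any one honest mistake) becomes every resource that shares the credential.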

And in most such cases it will likely be so much more than UB40's beer that's now been put at risk.

22 Comments | Leave a Comment..

Posted on Techdirt - 10 March 2021 @ 12:00pm

Oh The Culture You'll Cancel, Thanks To The Ninth Circuit And Copyright

from the things-actually-worth-being-upset-about dept

If everyone's going to be talking about Dr. Seuss, then we need to talk about this terrible decision from the Ninth Circuit a few months ago. Not to validate the idea of "cancel culture" in the particular way it's often bandied about as a sort of whining over people not wanting to be associated with certain ideas, but because when law takes away the ability to express them in the first place, that's censorship, it's an affront to the First Amendment, and it's something we all should be outraged about. And, as this case illustrates, the law in question is copyright.

We've written about this case, Dr. Seuss Enters., L.P. v. ComicMix LLC, 983 F.3d 443 (9th Cir. 2020), many, many times before: some people wrote a mash-up using Seussian-type imagery and Star Trek vernacular to express new ideas that neither genre alone had been able to express before. And Dr. Seuss's estate sued them for it.

The little bit of good news: their trademark claim failed. Applying the Rogers test to determine whether the Lanham Act could support such a claim, both the district court and the appeals court agreed: it didn't.

Under the Rogers test, the trademark owner does not have an actionable Lanham Act claim unless the use of the trademark is "either (1) not artistically relevant to the underlying work or (2) explicitly misleads consumers as to the source or content of the work." Neither of these prongs is easy to meet. As to the first prong, any artistic relevance "above zero" means the Lanham Act does not apply unless the use of the trademark is explicitly misleading. Boldly easily surpasses this low bar: as a mash-up of Go! and Star Trek, the allegedly valid trademarks in the title, the typeface, and the style of Go! are relevant to achieving Boldly's artistic purpose. Nor is the use of the claimed Go! trademarks "explicitly misleading," which is a high bar that requires the use to be "an 'explicit indication,' 'overt claim,' or 'explicit misstatement'" about the source of the work. Thus, although titling a book "Nimmer on Copyright," "Jane Fonda's Workout Book," or "an authorized biography" can explicitly misstate who authored or endorsed the book, a title that "include[s] a well-known name" is not explicitly misleading if it only "implicitly suggest[s] endorsement or sponsorship." Boldly is not explicitly misleading as to its source, though it uses the Seussian font in the cover, the Seussian style of illustrations, and even a title that adds just one word—Boldly—to the famous title—Oh, the Places You'll Go!. Seuss's evidence of consumer confusion in its expert survey does not change the result. The Rogers test drew a balance in favor of artistic expression and tolerates "the slight risk that [the use of the trademark] might implicitly suggest endorsement or sponsorship to some people." [p. 31-32]

Note: as you read the quotes from the decision be aware that the court regularly refers to the mash-up as "Boldly" and the original Seuss work it riffed on as "Go!"

But while the Ninth Circuit was accommodating to artistry on the trademark front, it was hostile on the copyright front and overturned the district court's finding that the mash-up was fair use. It walked through the fair use factors with its thumb heavily on the side of the copyright owner, willfully blind to any "countervailing copyright principles [that would] counsel otherwise." [p. 11]. For instance, on the second factor, the nature of the work, it looked at the mash-up with a harsher eye because the original work had been a creative one, rather than one more informational. ("Hence, Boldly's copying of a creative and "expressive work[]" like Go! tilts the second factor against fair use." [p. 19])

But what's most alarming is not just how the court applied the other factors, but how its analysis effectively expanded the power of a copyright holder to shut down others' subsequent expression, far more than the statute allows, the Progress Clause of the Constitution permits, or the First Amendment tolerates.

For instance, on the fourth factor, because the original work, "Oh, the Places You'll Go," targeted the graduation market, the court gave it the power to shut out subsequent works that also might serve the same market by somehow construing the mash-up as a competitor with the original, even though it was a distinctively different creature—after all, there was no Star Trek in the original, and the appeal of the second work was entirely based on consumers wanting both genres combined in one.

The court further hangs this analysis on the fact that one of the exclusive rights a copyright holder has is the ability to license derivative works. But when combined with its flawed analysis on the first factor, transformativeness, and also the third, examining the amount and substantiality of the original used, it lets that right to license derivatives effectively swallow all fair use. The Dr. Seuss estate likes to license its works, the court reasons, including to those who might want to combine them with other genres. But if people could do these sorts of mash-ups for free then the Dr. Seuss estate would have a harder time making money from those licenses.

Crucially, ComicMix does not overcome the fact that Seuss often collaborates with other creators, including in projects that mix different stories and characters. Seuss routinely receives requests for collaborations and licenses, and has entered into various collaborations that apply Seuss's works to new creative contexts, such as the television and book series entitled The Wubbulous World of Dr. Seuss, a collaboration with The Jim Henson Company, famous for its puppetry and the creation of other characters like the Muppets. Other collaborations include a digital game called Grinch Panda Pop, that combines Jam City's Panda character with a Grinch character; figurines that combine Funko Inc.'s toy designs with Seuss characters; and a clothing line that combines Comme des Garçons' heart design with Grinch artwork. [p. 28-29]

Of course, the answer to this concern is "so what"? Because if the court were right, and this were the sort of market harm that would trump fair use, it would mean that the only such combinations we will ever get are the ones that the Dr. Seuss estate deigns to allow—assuming they allow any at all, because, per the court, it's totally ok if they don't ("Seuss certainly has the right to "the artistic decision not to saturate those markets with variations of their original." [p. 29]). If it chooses not to license a mash-up with Star Trek, then the world will never get a Seussian-Star Trek mash-up. Even though that's exactly the sort of making-something-new-there-hasn't-been-before creativity that copyright law is supposed to incentivize. Copyright law exists so that we can get new works, but per this Ninth Circuit decision the function of copyright law is instead to obstruct them.

And it won't just be this particular mash-up that we'll have to do without. Because with this decision the court is giving copyright holders the power to veto not only subsequent uses of a work but an entire expressive vernacular (one that may even transcend any particular copyrighted work).

In fact, this lawsuit manages to not even be about the alleged infringement of a particular work. In some ways it is, such as the way the court takes issue with the fact that the mash-up referenced 14 of the 24 pages of the original Seussian "Places You'll Go" book [p. 20]. Of course, even that view ignores how unfaithful a copy the later work must inherently be given how much got left behind of the original, and how much space the omissions left for something new. But the court was even more put out by the pieces of the work used, objecting strenuously to the detail of the references, even though the use of that detail was so that the reference could be a meaningful enough foundation upon which to convey the new idea of the subsequent work.

Crucially, ComicMix did not merely take a set of unprotectable visual units, a shape here and a color patch there. For each of the highly imaginative illustrations copied by ComicMix, it replicated, as much and as closely as possible from Go!, the exact composition, the particular arrangements of visual components, and the swatches of well-known illustrations. ComicMix's claim that it "judiciously incorporated just enough of the original to be identifiable" as Seussian or that its "modest" taking merely "alludes" to particular Seuss illustrations is flatly contradicted by looking at the books. During his deposition, Boldly illustrator Templeton detailed the fact that he "stud[ied] the page [to] get a sense of what the layout was," and then copied "the layout so that things are in the same place they're supposed to be." The result was, as Templeton admitted, that the illustrations in Boldly were "compositionally similar" to the corresponding ones in Go!. In addition to the overall visual composition, Templeton testified that he also copied the illustrations down to the last detail, even "meticulously try[ing] to reproduce as much of the line work as [he could]." [p. 20-21]

And it wasn't even only pieces of that work that irked the court. In defending its distaste for these verbatim references, the court cited the mash-up's inclusion of the illustration of the machine from Sneetches, which was, not incidentally, an entirely different work from the book the defendants were being accused of copying too much from.

For example, ComicMix's copying of a Sneetches illustration exhibits both the extensive quantitative and qualitative taking by ComicMix. Sneetches is a short Seuss story about two groups of Sneetches: the snooty star-bellied Sneetches and the starless ones. The story's plot, the character, and the moral center on a highly imaginative and intricately drawn machine that can take the star-shaped status-symbol on and off the bellies of the Sneetches. Different iterations of the machine, the heart of Sneetches, appear in ten out of twenty-two pages of the book. ComicMix took this "highly expressive core" of Sneetches. Templeton testified that "the machine in the Star-Bellied Sneetches story" was "repurposed to remind you of the transporter" in Star Trek. Drawing the machine "took. . . about seven hours" because Templeton tried to "match" the drawing down to the "linework" of Seuss. He "painstakingly attempted" to make the machines "identical." In addition to the machine, Boldly took "the poses that the Sneetches are in" so that "[t]he poses of commander Scott and the Enterprise crew getting into the machine are similar." Boldly also captured the particular "crosshatch" in how Dr. Seuss rendered the machine, the "puffs of smoke coming out of the machine," and the "entire layout." [p. 23]

In other words, because the machine was important to a (completely different) story, the Dr. Seuss estate got to say no to anyone who wanted to reference that importance. Yes, the mash-up referenced it in detail, but that's how the reference could be recognizable. The court is clearly offended by any verbatim copying of any aspect of the image, but fair use does not forbid verbatim copying or otherwise require degrading the quality of the original. Yet per the court's reasoning, verbatim references in "overall composition and placement of the shapes, colors and detailed linework" are off-limits, even though using them did not amount to making an infringing copy of the entire work, page, or even full illustration and ultimately became part of something substantially different from the original. Because even if the original work had certain characters in certain poses that the mash-up emulated, it didn't have them posed in the futuristic environment that the mash-up expressed. That overall visual tableau was something new and different and transformative.

Above is a representative sample of what the plaintiffs showed to compare the two works so you can see what was literally referenced by the mash-up, and how much was obviously different about its own expression.

But the court also glossed over that transformative quality in its analysis of the first factor, instead focusing only on what was the same about the first work instead of what was different.

ComicMix copied the exact composition of the famous "waiting place" in Go!, down to the placements of the couch and the fishing spot. To this, ComicMix added Star Trek characters who line up, sit on the couch, and fish exactly like the waiting place visitors they replaced. Go! continues to carry the same expression, meaning, or message: as the Boldly text makes clear, the image conveys the sense of being stuck, with "time moving fast in the wink of an eye."

ComicMix also copied a scene in Sneetches, down to the exact shape of the sandy hills in the background and the placement of footprints that collide in the middle of the page. Seussian characters were replaced with Spocks playing chess, making sure they "ha[d] similar poses" as the original, but all ComicMix really added was "the background of a weird basketball court."

ComicMix likewise repackaged Go!'s text. Instead of using the Go! story as a starting point for a different artistic or aesthetic expression, Hauman created a side-by-side comparison of the Go! and Boldly texts in order "to try to match the structure of Go!." This copying did not result in the Go! story taking on a new expression, meaning, or message. Because Boldly "left the inherent character of the [book] unchanged," it was not a transformative use of Go!. [p. 17-19]

It's bad enough that it supplanted the district court's original fact finding with its own dismissive judgment, and that copying of an image from a separate work was bizarrely used as evidence of infringement of the first. But worse is the cynical determination that the second work was merely a "repackaging" designed to "avoid the drudgery in working up something fresh." Condemning how it used certain elements, including ephemeral elements (composition, posing, story structure), in order to produce something fresh expands what a copyright holder in a work ordinarily can control and puts all sorts of fair reuse out of reach of subsequent creators.

Boldly also does not alter Go! with new expression, meaning, or message. A "'transformative work' is one that alters the original work." While Boldly may have altered Star Trek by sending Captain Kirk and his crew to a strange new world, that world, the world of Go!, remains intact. Go! was merely repackaged into a new format, carrying the story of the Enterprise crew's journey through a strange star in a story shell already intricately illustrated by Dr. Seuss. Unsurprisingly, Boldly does not change Go!; as ComicMix readily admits, it could have used another primer, or even created an entirely original work. Go! was selected "to get attention or to avoid the drudgery in working up something fresh," and not for a transformative purpose. [p. 16-17]

And that's the crux of the matter, because if a mash-up like this, that merged two aesthetics that had never been merged before, even if to convey a similarly inspirational message ("In propounding the same message as Go, Boldly used expression from Go! to "keep to [Go!'s] sentiment." [p. 16]), can violate a copyright, then a copyright holder has enormous veto power over all subsequent expression that might use the cultural vocabulary it ever introduced.

And that's what's truly canceling.

Read More | 28 Comments | Leave a Comment..

Posted on Techdirt - 8 March 2021 @ 12:05pm

The Digital Copyright Act: We Told Senator Tillis Not To Do This, But He Did It Anyway. So We Told Him Again.

from the deaf-ear dept

Back in December, the Copia Institute submitted comments to Senator Tillis, who wanted feedback on making changes to the DMCA. It was a tricky needle to thread, because there's a lot about the DMCA that could be improved and really needs to be improved to be constitutional. At the same time, having protection for platforms is crucial for there to be platforms, and we did not want to encourage anything that might lead to the weakening of the safe harbors, which are already flimsy enough. So our advice was two-fold: address the First Amendment problems already present with the DMCA, and check what assumptions were driving the reform effort in order to make sure that any changes actually made things better and not worse.

None of that happened, however. The draft legislation he proposed earlier this year, called the Digital Copyright Act, or DCA, is so troubling we haven't even had a chance to fully explain how. But at least he invited public comments on it, so last week we submitted some.

In short, we repeated our original two points: (1) as Mike wrote when it was originally unveiled, the DCA, with its "notice and staydown" regime, has an even bigger First Amendment problem than the DMCA already does, and (2) the proposed DCA legislation is predicated on several faulty assumptions.

One such faulty assumption is apparent in how the DCA regards Internet service providers: as little more than parasitic enterprises that must only be barely tolerated, rather than as the intrinsically valuable services that have given artists greater opportunities for monetization and audience reach. Indeed, it was the recognition of their value that prompted Congress to try to protect them with the safe harbor system in the first place, whereas the DCA would all but slam the door on them, crushing them with additional burdens and even weaker liability protections. Sure, the proposed legislation offers to throw them a few bones around the edges, but in major substance it does little more than put them and the expression they facilitate in jeopardy.

And for little reason, because another significant misapprehension underpinning the DCA is that it helps creators at all. The DCA strengthens the power of certain copyright holders, certainly, but it doesn't follow that it necessarily helps creators themselves, who are often not the actual copyright holders. In fact, in certain art forms, like music, it is frequently the case that they are not, and we know this from all the termination litigation in which creators are having to go to great effort to try to recover the copyrights in their own works—and are not always succeeding.

As we pointed out:

Over the years we have evolved a system where creators can get—and in certain art forms like music often have gotten—locked out of being able to exploit their own works for decades. In fact, thanks to term extensions, they may be locked out for longer than they ever would have expected. As a result of getting locked out of their works, not only can they not economically exploit these works directly but they cannot even manage their overall relationship with their market: their fans. Even when it would be in their interests to give their fans a freer hand to interact with their works online, they cannot make that decision when the conglomerates and billionaires who own those rights are the ones sending takedown notices targeting their fans' postings, or, worse, their entire accounts. The DCA only further entrenches the power that these strangers can have over creators, their works, and their audiences and yet somehow presumes it will incentivize further creativity from the very people the system makes powerless.

In sum, the DCA is a mistake that Congress should not further pursue. It does nothing to help creators profit from their work, or to get us any more of it. It just gives certain people more power to say no to innovation and expression, which is exactly the opposite of what copyright law is for.


Posted on Techdirt - 4 March 2021 @ 4:06pm

Washington State Also Spits On Section 230 By Going After Google For Political Ads

from the guys-it's-still-the-law dept

In the post the other day about Utah trying to ignore Section 230 so it could regulate internet platforms, I explained why it was important that Section 230 pre-empted these sorts of state efforts:

Just think about the impossibility of trying to simultaneously satisfy, in today's political climate, what a Red State government might demand from an Internet platform and what a Blue State might. That readily foreseeable political catch-22 is exactly why Congress wrote Section 230 in such a way that no state government gets to demand appeasement when it comes to platform moderation practices.

We don't have to strain our imaginations very hard, because with this lawsuit, brought by King County, Washington prosecutors against Google, we can see a Blue State do the same thing Utah is trying to do and come after a platform for how it handles user-generated content.

Superficially there are of course some differences between the two state efforts. Utah's bill ostensibly targets social media posts whereas Washington's law goes after political ads. What's wrong with Washington's law may also be a little more subtle than the abjectly unconstitutional attempt by Utah to trump internet services' expressive and associative rights. But these are not meaningful distinctions. In both cases it still basically all boils down to the same thing: a state trying to force a platform to handle user-generated content (which online ads generally are) the way the state wants, by imposing requirements on platforms that will inevitably shape how they do so.

In the Washington case, prosecutors are unhappy that Google is apparently not following well enough the prescriptive rules Washington State established to help the public follow the money behind political ads. One need not quibble with the merit of what Washington State is trying to do, which, at least on first glance, seems perfectly reasonable: make campaign finance more transparent to the public. Nor is it necessary to take issue with the specific rules the state came up with to try to vindicate this goal. The rules may or may not be good ones, but whether they are good or not is irrelevant. That there are rules is the problem, and one that Section 230 was purposefully designed to avoid.

As discussed in that other post, Congress went with an all-carrot, no-stick approach in regulating internet content, giving platforms the most leeway possible to do the best they could to help achieve what Congress wanted overall: the most beneficial and least harmful content online. But this approach falls apart once sticks get introduced, which is why Congress included pre-emption in Section 230 so that states couldn't try to introduce any. Yet that's what Washington is trying to do with its disclosure rules surrounding political ads: introduce sticks by imposing regulatory requirements that burden how platforms can facilitate user-generated content, in spite of Congress's efforts to relieve them of these burdens.

The burden is hardly incidental or slight. Remember that if Washington could enforce its own rules, then so could any other state or locality, even with rules far more demanding, or ones that would ultimately compromise this or any other worthy policy goal, whether inadvertently or even deliberately. Furthermore, even if every state had good rules, the differences between them would likely make compliance unfeasible for even the best-intentioned platform. Indeed, even by the state's own admission, Google actually had policies aimed at helping the public learn who had sponsored the ads appearing on its services.

Per Google’s advertising policies, advertisers are required to complete advertiser identity verification. Advertisers seeking to place election advertisements through Google’s advertising networks are required to complete election advertisement verification. Google notifies all verified advertisers, including, but not limited to sponsors of election advertisements, that Google will make public certain information about advertisements placed through Google’s advertising networks. Google notifies verified sponsors of election advertisements that information concerning their advertisements will be made public through Google’s Political Advertising Transparency Report.

Google’s policy states:

With the information you provide during the verification process, Google will verify your identity and eligibility to run election ads. For election ads, Google will [g]enerate, when possible, an in-ad disclosure that identifies who paid for your election ad. This means your name, or the name of the organization you represent, will be displayed in the ad shown to users. [And it will p]ublish a publicly available Political Advertising transparency report and a political ads library with data on funding sources for election ads, the amounts being spent, and more.

Google notifies advertisers that in addition to the company’s online Political Advertising Transparency Report, affected election advertisements "are published as a public data set on Google Cloud BigQuery[,]" and that users "can export a subset of the ads or access them programmatically." Google notifies advertisers that the downloadable election ad "dataset contains information on how much money is spent by verified advertisers on political advertising across Google Ad Services. In addition, insights on demographic targeting used in political advertisement campaigns by these advertisers are also provided. Finally, links to the actual political advertisement in the Google Transparency Report are provided." Google states that public access to "Data for an election expires 7 years after the election." [p. 14-15]

Yet Washington is mad at Google anyway because it didn't handle user-generated content exactly the way it demanded. And that's a problem, because if it can sanction Google for not handling user-generated content exactly the way it wants, then (1) so could any other state or any of the infinite number of local jurisdictions Google inherently reaches, (2) to enforce an unlimited number of rules, and (3) governing any sort of user-generated content that may happen to catch a local regulator's attention. Utah may today be fixated on social media content and Washington State on political ads, but once they've thrown off the pre-emptive shackles of Section 230 they or any other state, county, city or smaller jurisdiction could go after platforms hosting any of the myriad other sorts of expression people use internet services to facilitate.

Which would sabotage the internet Congress was trying to foster with Section 230. Again, Congress deliberately gave platforms a free hand to decide how best to moderate user content so that they could afford to do their best at keeping the most good content up and taking the most bad content down. But with all these jurisdictions threatening to sanction platforms, trying to do either of these things can no longer be platforms' priority. Instead they will be forced to devote all their resources to the impossible task of trying to avoid a potentially infinite amount of liability. While perhaps at times this regulatory pressure might result in nudging platforms to make good choices for certain types of moderation decisions, it would be more out of coincidence than design. Trying to stay out of trouble is not the same thing as trying to do the best for the public—and often can turn out to be in direct conflict.

Which we can see from Washington's law itself. In 2018 prosecutors attempted to enforce an earlier version of this law against Google, which led Google to declare that it would refuse all political ads aimed at Washington voters.

Three days later, on June 7, 2018, Google announced that the company’s advertising networks would no longer accept political advertisements targeting state or local elections in Washington State. Google’s announced policy was not required by any Washington law and it was not requested by the State. [p. 7]

Prosecutors may have been surprised by Google's decision, but no one should have been. Such a decision is an entirely foreseeable consequence, because if a law makes it legally unsafe for platforms to facilitate expression, then they won't.

Even the complaint itself, albeit perhaps inadvertently, makes clear what a loss for discourse and democracy it is when expression is suppressed.

As an example of Washington political advertisements Google accepted or provided after June 4, 2018, Google accepted or provided political advertisements purchased by Strategies 300, Inc. on behalf of the group Moms for Seattle that ran in July 2019, intended to influence city council elections in Seattle. Google also accepted or provided political advertisements purchased by Strategies 300, Inc. on behalf of the Seattle fire fighters that ran in October 2019, intended to influence elections in Seattle. [p. 9]

While prosecutors may frame it as scurrilous that Google accepted ads "intended to influence elections," influencing political opinion is at the very heart of why we have a First Amendment to protect speech in the first place. Democracy depends on discourse, and it is hardly surprising that people would want to communicate in ways designed to persuade on political matters.

Nor is it salient that people may pay for the opportunity to express themselves. Every internet service needs some way of keeping the lights on and the servers running. That it may sometimes charge people to use its systems to convey their messages doesn't alter the fact that it is still a service facilitating user-generated content, which Section 230 exists to protect and needs to protect.

Of course, even in the face of unjust sanction platforms may sometimes try to stick it out anyway, and it appears from the Washington complaint that Google may have started accepting ads again at some point after it had initially stopped. It also agreed to pay $217,000 to settle a 2018 enforcement effort—although, notably, without admitting to any wrongdoing, which is a crucial fact prosecutors omit in their current pleading.

On December 18, 2018, the King County Superior Court entered a stipulated judgment resolving Google’s alleged violations of RCW 42.17A.345 from 2013 through the date of the State’s June 4, 2018, Complaint filing. Under the terms of the stipulated judgment, Google agreed to pay the State $200,000.00 as a civil penalty and an additional $17,000.00 for the State’s reasonable attorneys’ fees, court costs, and costs of investigation. A true and correct copy of the State’s Stipulation and Judgment against Google entered by the King County Superior Court on December 18, 2018, is attached hereto as Exhibit B. [p. 8. See p. 2 of Exhibit B for Google expressly disclaiming any admission of liability.]

Such a settlement is hardly a confession. Google could have opted to settle rather than fight for any number of reasons. Even platforms as well-resourced as Google still need to choose their battles. It's not just a question of being able to afford to hire all the lawyers you may need; you also need to be able to effectively manage them all, across every skirmish on every front that may now be vulnerable if Section 230 no longer effectively preempts those attacks. Being able to afford a fight means being able to afford it in far more ways than just financially, and thus it is hardly unusual for those threatened with legal process to simply try to purchase relief from the onslaught instead of fighting for the just result.

Without Section 230, or its preemption provision, however, that's what we'll see a lot more of: unjust results. We'll also see less effective moderation as platforms redirect their resources from doing better moderation to avoiding liability instead. And we'll see what Google foreshadowed: platforms withdrawing their services from the public entirely as it becomes financially prohibitive to pay off all the local government entities that might like to come after them. It will not get us a better internet or more innovative online services, nor will it solve any of the problems these state regulatory efforts hope to fix. It will only make everything much, much worse.


Posted on Techdirt - 2 March 2021 @ 4:22pm

The Unasked Question In Tech Policy: Where Do We Get The Lawyers?

from the they-don't-grow-on-trees dept

When we criticize Internet regulations like the CCPA and GDPR, or lament the attempts to roll back Section 230, one of the points we almost always raise is how unduly expensive these policy decisions can be for innovators. Any law that increases the risk of legal trouble increases the need for lawyers, whose services rarely come cheap.

But bare cost is only part of the problem. All too often, policymakers seem to assume an infinite supply of capable legal counsel, and it's an assumption that needs to be questioned.

First, there are not an infinite number of lawyers. For better or worse, the practice of law is a heavily regulated profession with significant barriers to entry. The legal industry can be fairly criticized, and often is, for making it more difficult and expensive to become a lawyer than perhaps it should be, but there is at least some basic threshold of training, competence, and moral character we should want all lawyers to have attained given the immense responsibility they are regularly entrusted with. These requirements will inevitably limit the overall lawyer population.

(Of course, there shouldn't be an infinite number of lawyers anyway. As discussed below, lawyers play an important role in society, but theirs is not the only work that is valuable. In the field of technology law, for example, our need for people to build new things should well outpace our need for lawyers to defend what has been built. We should be wary of creating such a need for the latter that the legal profession siphons off too much of the talent able to do the former.)

But even where we have lawyers we still need the right kind of lawyers. Lawyers are not really interchangeable. Different kinds of lawyering need different types of skills and subject-matter expertise, and lawyers will generally specialize, at least to some extent, in what they need to master for their particular practice area. For instance, a lawyer who does estate planning is not generally the one you'd want to defend you against a criminal charge, nor would one who does family law ordinarily be the one you'd want writing your employment manual. There are exceptions, but generally because that particular lawyer went out of their way to develop parallel expertise. The basic fact remains: simply picking any old lawyer out of the yellow pages is rarely likely to lead to good results; you want one experienced with dealing with the sorts of legal issues you actually have, substantively and practically.

True, lawyers can retrain, and it is not uncommon for lawyers to switch their focus and develop new skills and expertise at some point in their careers. But it's a problem if a disproportionate number start to specialize in the same area because, just as we need people available to work professions other than law, even within the law we still need other kinds of lawyers available to work on other areas of law outside these particular specialized areas.

And we also need to be able to afford them. We already have a serious "access to justice" problem, where only the most resourced are able to obtain legal help. A significant cause of this problem is the expense of law school, which makes it difficult for graduates to resist the siren call of more remunerative employment, but it's a situation that will only get worse if lawyer-intensive regulatory schemes end up creating undue demand for certain legal specializations. For example, as we increasingly pass a growing thicket of complex privacy regulations we create the need for more and more privacy lawyers to help innovators deal with these rules. But as the need for privacy lawyers outstrips the ready availability of lawyers with this expertise, it threatens to raise the costs for anyone needing any sort of lawyering at all. It's a basic issue of supply and demand: the more privacy lawyers that are needed, the more expensive it will be to attract them. And the more these lawyers are paid a premium to do this work, the more it will lure lawyers away from other areas that still need serving, thus making it all the more expensive to hire those who are left to help with it.

Then there is the question of where lawyers even get the expertise they need to be effective counsel in the first place. The dirty little secret of legal education is that, at least until recently, it probably wasn't at their law schools. Instead lawyers have generally been trained up on the job, and what newbie lawyers ended up learning has historically depended on what sort of legal job it was (and how good a legal job it was). Recently, however, there has been the growing recognition that it really doesn't make sense to graduate lawyers unable to competently do the job they are about to be fully licensed to do, and one way law schools have responded is by investing in legal clinics.

By and large, clinics are a good thing. They give students practical legal training by letting them basically do the job of a lawyer, with the benefit of supervision, as part of their legal education. In the process they acquire important skills and start to develop subject-matter expertise in the area the clinic focuses on, which can be in almost every practice area, including, as is relevant here, technology law. Meanwhile, clinics generally let students provide these legal services to clients far more affordably than clients would normally be able to obtain them, which partially helps address the access to justice problem.

However, there are still some significant downsides to clinics, including the inescapable fact that it is students who are basically subsidizing the legal services they are providing by having to pay substantial amounts of money in tuition for the privilege of getting to do this work. A recurrent theme here is that law schools are notoriously expensive, often underwritten with loans, which means that students, instead of being paid for their work, are essentially financing the client's representation themselves.

And that arrangement matters as policymakers remain inclined to impose regulations that increase the need for legal services without better considering how that need will be met. It has been too easy for too many to assume that these clinics will simply step in to fill the void, with an endless supply of students willing and able to pay to subsidize this system. Even if this supposition were true, it would still prompt the question of who these students are. The massive expense of law school is already shutting plenty of people out of the profession and robbing it of needed diversity by making it financially out of reach for too many, as well as making it impossible for those who do make it through to turn down more lucrative legal jobs upon graduation and take ones that would be more socially valuable instead. The last thing we need is a regulatory environment dependent on this teetering arrangement to perpetuate it.

Yet that's the upshot of much of the policy lawmakers keep crafting. For instance, in the context of Section 1201 Rulemakings, it has been openly presumed that clinics would always be available to do the massive amount of work necessary to earn back for the public the right to do something it was already supposed to be legally allowed to do. But it's not just these cited examples of copyright or privacy law that are a problem; any time a statute or regulatory scheme establishes an unduly onerous compliance requirement, or reduces any of the immunities and safe harbors innovation has depended on, it puts a new strain on the legal profession, which now has to come up with the help from somewhere.

At the same time, however, good policy doesn't necessarily mean eliminating the need for lawyers entirely, like the CASE Act tries to do. The bottom line is that legal services are not like other professional services. Lawyers play a critical role in upholding due process, and laws like the CASE Act that short-circuit those protections are a problem. But so are any laws that have the effect of interfering with that greater Constitutional purpose of the legal profession.

For a society that claims to be devoted to the "rule of law," ensuring that the public can realistically obtain the legal help it needs should be a policy priority at least on par with anything else driving tech regulation. Lawmakers therefore need to take care in how they make policy to ensure they do not end up distorting the availability and affordability of legal services in the process. Such care requires (1) carefully calibrating the burden of any imposed policy to not unnecessarily drive up the need for lawyers, and (2) specifically asking the question: who will do the work? They cannot continue to simply leave "insert lawyers here" in their policy proposals and expect everything to be fine. If they don't also pointedly address exactly where these lawyers will come from, then it won't be.


Posted on Techdirt - 1 March 2021 @ 3:30pm

Utah Prematurely Tries To Dance On Section 230's Grave And Shows What Unconstitutional Garbage Will Follow If We Kill It

from the not-dead-yet dept

As Mike has explained, just about every provision of the social media moderation bill being proposed in the Utah legislature violates the First Amendment by conditioning platforms' editorial discretion over what appears on their services—discretion that the First Amendment protects—on meeting a bunch of extra requirements Utah has decided to impose. This post is about how everything Utah proposes is also barred by Section 230, and why it matters.

It may seem like a fool's errand to talk about how Section 230 prohibits state efforts to regulate Internet platforms while the statute currently finds itself on life support, with fading vital signs as legislators on both sides of the aisle keep taking aim at it. After all, if it goes away, then it won't matter how it blocks this sort of state legislation. But the fact that it currently does preclude what we're seeing out of Utah is why it would be bad if Section 230 went away and we lost it as a defense against this sort of speech-chilling, Internet-killing regulatory nonsense from state governments. To see why, let's talk about how and why Section 230 currently forbids what Utah is trying to do.

We often point out in our advocacy that Congress wanted to accomplish two things with Section 230: encourage the most good content online, and the least bad. We don't even need to speak to the law's authors to know that's what the law was intended to do; we can see that's what it was for with the preamble text in subsections (a) and (b), as well as the operative language of subsection (c) providing platforms protection for the steps they take to vindicate these goals, making it safe for them to leave content up as well as safe for them to take content down.

It all boils down to Congress basically saying to platforms, "When it comes to moderation, go ahead and do what you need to do; we've got you covered, because giving you the statutory protection to make these Constitutionally-protected choices is what will best lead to the Internet we want." The Utah bill, however, tries to directly mess with that arrangement. While Congress wanted to leave platforms free to do the best they could on the moderation front by making it legally possible, as a practical matter, for them to do it however they chose, Utah does not want platforms to have that freedom. It wants to force platforms to moderate the way Utah has decided they should moderate. None of what the Utah bill demands is incidental or benign; even the requirements for transparency and notice impinge on platforms' ability to exercise editorial and associative discretion over what user expression they facilitate by imposing significant burdens on the exercise of that discretion. Doing so, however, runs headlong into the main substance of Section 230, which specifically sought to relieve platforms of burdens that would affect their ability to moderate content.

It also contravenes the part of the statute that expressly prevented states from interfering with what Congress was trying to accomplish with this law. The pre-emption provision can be found at subsection (e)(3): "No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section." Even where Utah's law does not literally countermand Section 230's statutory language, what Utah proposes to do is nevertheless entirely inconsistent with it. While Congress essentially said with Section 230, "You are free to moderate however you see fit," Utah is trying to say, "No, you're not; you have to do it our way, and we'll punish you if you don't." Utah's demand is incompatible with Congress's policy and thus, per this pre-emption provision, not Constitutionally enforceable on this basis either.

And for good reason. As a practical matter, Congress and Utah can't both speak on this issue and have it yield coherent policy without subordinating Congress's mission to get the best online ecosystem possible by letting platforms feel safe to do what they can to encourage the most good content and the least bad. Every new threat of liability is a new pressure diverting platforms' efforts away from being good partners in meeting Congress's goal and instead towards doing only what is needed to avoid the trouble these new forms of liability threaten. There is no way to satisfy both regulators; Congress's plan to regulate platform moderation via carrots rather than sticks is inherently undermined once sticks start to be introduced. Which is part of the reason why Congress wrote in the pre-emption provision: to make sure that states couldn't introduce any.

Section 230's drafters knew that if states could impose their own policy choices on Internet platforms there would be no limit to what sort of obligations they might try to dream up. They also knew that if states could each try to regulate Internet platforms it would lead to messy, if not completely irreconcilable, conflicts among states. That resulting confusion would smother the Internet Congress was trying to foster with Section 230 by making it impossible for Internet platforms to lawfully exist. Because even if Utah were right, and its policy happened to be Constitutional and not a terrible idea, if any state were free to impose a good policy on content moderation it would still leave any other state free to impose a bad one. Such a situation is untenable for a technology service that inherently crosses state boundaries because it means that any service provider would somehow have to obey both the good state laws and also the bad ones at the same time, even when they might be in opposition. Just think about the impossibility of trying to simultaneously satisfy, in today's political climate, what a Red State government might demand from an Internet platform and what a Blue State might. That readily foreseeable political catch-22 is exactly why Congress wrote Section 230 in such a way that no state government gets to demand appeasement when it comes to platform moderation practices.

The only solution to the regulatory paralysis Congress rightly feared is what it originally devised: writing pre-emption into Section 230 to get the states out of the platform regulation business and leave it all instead to Congress. Thanks to that provision, the Internet should be safe from Utah's attack on platform moderation and any other such state proposals. But only so long as Section 230 remains in effect as-is. What Utah is trying to do should therefore stand as a warning to Congress to think very carefully before doing anything to reverse course and alter Section 230 in any way that would invite the policy gridlock it had the foresight to foreclose twenty-five years ago with this prescient statute.


Posted on Techdirt - 22 February 2021 @ 3:42pm

What Landing On Mars Again Can Teach Us, Again

from the humanity-pep-talk dept

It seems I'm always writing about Section 230 or copyright or some sort of regulatory effort driven by antipathy toward technology. But one of my favorite posts I've ever written here is this one, "We Interrupt All The Hating On Technology To Remind Everyone We Just Landed On Mars." Given that we just landed on Mars again, it seems a good time to revisit it, because it is no less important today than it was in 2018 when I originally wrote it. Just as it is no less important that we just landed on Mars again. In fact, it all may matter even more now.

Today we find ourselves even more mired in a world full of technological nihilism. It has become a well-honed reflex: if it involves technology, it must be bad. And in the wake of this prevailing distrust we've developed a political culture that is, at best, indifferent to innovation if not often downright eager to stamp it out.

It's a poisonous attitude that threatens to trap us in our currently imperfect world, with no way to solve our way out of our problems. But recognizing what an amazing achievement it was to successfully land on Mars can work as an antidote, in at least two important ways:

First, it can remind us of what wonder feels like. To dream the most fantastic dreams, and then to go make those dreams happen. Mankind hasn't gazed at the stars in ambivalence; the heavens have been one of our greatest sources of inspiration throughout the ages. That we have now managed, for the first time in the history of human civilization, to put another planet within our grasp should not extinguish that wonder, with a glib "been there, done that" shrug. Rather, it is a cause for enormous celebration and should do nothing but inspire us to keep dreaming, next time even bigger.

Because if there's one thing this landing teaches us, apart from the tangible fruits of our exploration, it is to believe in ourselves. Our failures and disappointments here on Earth are serious indeed. But what this success demonstrates is that we can overcome what was once thought impossible. It may take diligence, hard work, and faithful adherence to science. And our human imperfections can sometimes make it hard to manage these things.

But landing on Mars reminds us that we can and provides us with an amazing example of how.


Posted on Techdirt - 18 February 2021 @ 10:45am

Is Section 230 Just For Start-ups? History Says Nope

from the original-intentions dept

One of the arguments for changing Section 230 is that even if we needed it a long time ago when the Internet was new, now that the Internet has been around for a while and some of the companies providing Internet services are quite big, we don't need it anymore. This view is simply untrue: Internet service providers of every size still need it, including and perhaps even especially the big ones because they are the ones handling the greatest volume of user expression.

Furthermore, Section 230 was never specifically aimed at start-ups. Indeed, from the outset it was intended to address the legal problems faced by an established incumbent.

The origin story for Section 230 begins with the Stratton Oakmont v. Prodigy case. In this case a New York state court allowed Prodigy to be sued over speech a user had posted. By doing so the court not only hurt Prodigy right then, in that case, but threatened to hurt it in the future by opening the door to more lawsuits against it or any other online service provider—which was also bad news for online expression more broadly. In the shadow of this decision services weren't going to be able to facilitate the greatest amount of valuable user expression, or minimize the most detrimental. Even back then Prodigy was handling far too many posts by users for it to be possible to vet all, or even most, of them. While that volume might today seem like a drop in the bucket compared to how much expression Internet services handle now, the operative point is that use of online services like Prodigy had already surpassed the point where a service provider could possibly review everything that ever appeared on its systems and make decisions about what to leave up or take down perfectly. If that's what they needed to do to avoid being crushed by litigation, then they were looking at a future of soon being crushed by litigation.

And that was the case for Prodigy even though it was an established service. As an *Internet* service provider Prodigy may have been new to the game because the Internet had only just left the realm of academia and become something that commercial service providers could provide access to. But it was hardly new as an "interactive computer service" provider, which is what Section 230 actually applies to. True, Section 230 contemplates that interactive computer service providers may likely provide Internet-based services, but it doesn't condition its statutory protection on being connected to the Internet. (From 47 U.S.C. Section 230(f)(2): "The term 'interactive computer service' means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server…"). To be eligible for Section 230 the service provider simply needs to be in the business of providing some form of interactive computer service, and Prodigy had been doing that for well over a decade as a dial-up service with its own proprietary network—just like CompuServe had long done and eventually America Online did as well.

Furthermore, Prodigy was a service provider started by Sears and IBM (and briefly also CBS). At the time these were some of the largest companies in America. The "big tech" of the era was "Big Blue" (as IBM was known). And while Sears may have managed to bungle itself into irrelevance in the years since, at the time Section 230 was passed there were few companies more expert in remote commerce than it was. Nevertheless it was the needs of these big companies that Congress was addressing with Section 230, even as it recognized that theirs were not the only needs at stake. The same legal rules that kept start-ups from being obliterated by litigation were the ones needed to keep the bigger players from being obliterated as well.

The irony, of course, is that Section 230 may have ultimately ended up hurting the bigger players, because in the long run it opened the door to competition that ultimately ate these companies' lunch. Of course, that's what we would still want Section 230 to do: open the door to service providers that can do a better job than the large existing incumbents. It can hardly be said that Section 230 was or is a subsidy to "big tech," then or now, when building that on-ramp for something better is what it has always done and needs to be allowed to continue to do.


Posted on Techdirt - 17 February 2021 @ 12:03pm

Why We Filed A Comment With Facebook's Oversight Board

from the less-is-more dept

Back when Facebook's Oversight Board was just getting organized, a colleague suggested I represent people before it as part of my legal practice. As a solo lawyer, my entrepreneurial ears perked up at the possibility of future business opportunities. But the rest of me felt extremely uncomfortable with the proposition. I defend free speech, but I am a lawyer and I defend it using law. If Facebook removes you or your content that is an entirely lawful choice for it to make. It may or may not be a good decision, but there is nothing for law to defend you from. So it didn't seem a good use of my legal training to spend my time taking issue with how a private entity made the moderation decisions it was entirely within its legal rights to make.

It also worried me that people were regarding Facebook's Oversight Board as some sort of lawmaking body, and I was hesitant to use my lawyering skills to somehow validate and perpetuate that myth. No matter how successful the Board turns out to be, it is still limited in its authority and reach, and that's a good thing. What is not good is when people expect that this review system should (a) have the weight of actual law or (b) be the system that gets to evaluate all moderation decisions on the Internet.

Yet here I am, having just written a comment for the Copia Institute in one of its cases. Not because I changed my mind about any of my previous concerns, but because that particular high-profile case seemed like a good opportunity to help reset expectations about the significance of the Oversight Board's decisions.

As people who care about the online ecosystem we want those decisions to be as good as they can be because they will have impact, and we want that impact to be as good as it can be. With our comment we therefore tried to provide some guidance on what a good result would look like. But whether the Board gets its decisions right or wrong, it does no good for the public, or even the Board itself, to think its decisions mean more than they do. Nor is it necessary: the Oversight Board already has a valid and even valuable role to play. And it doesn't need to be any more than what it actually is for it to be useful.

It's useful because every platform makes moderation decisions. Many of these decisions are hard to make perfectly, and many are made at incredible scale and speed. Even with the best of intentions it is easy for platforms to make moderation decisions that would have been better decided the other way.

And that is why the basic idea of the Oversight Board is a good one. It's good for it to be able to provide independent review of Facebook's more consequential decisions and recommend how to make them better in the future. Some have alleged that the board isn't sufficiently independent, but even if this were true, it wouldn't really matter, at least insofar as Facebook goes. What is important is that there is any operational way to give Facebook's moderation decisions a second look, especially in a way that can be informed by additional considerations that may not have been included in the original decision. That the Oversight Board is designed to provide such review is an innovation worth cheering.

But all the Oversight Board can do is decide what moderation decision might have been better for Facebook and its user community. It can't articulate, and it certainly can't decree, a moderation rule that could or should apply at all times on every platform anywhere, including platforms that are much different, with different reaches, different purposes, and different user communities than Facebook has. It would be impossible to come up with a universally applicable rule. And it's also not a power this Board, or any similar board, should ever have.

As we said in our comment, and have explained countless times on these pages, platforms have the right to decide what expression to allow on their systems. We obviously hope that platforms will use this right to make these decisions in a principled way that serves the public interest, and we stand ready to criticize them as vociferously as warranted when they don't. But we will always defend their legal right to make their moderation choices however perfectly or imperfectly they may make them.

What's important to remember in thinking about the Oversight Board is that this is still Facebook making moderation decisions. Not because the Board may or may not be independent from Facebook, but because Facebook's decision to defer to the Board's judgment is itself a moderation decision. It is not Facebook waiving its legal right to make moderation choices but rather it exercising that very right to decide how to make those choices, and this is what it has decided. Deferring to the Board's judgment does not obviate real-world law protecting its choice; it's a choice that real world law pointedly allows Facebook to make (and, thanks to Section 230, even encourages Facebook to try).

The confusion about the mandate of the Oversight Board seems to stem in part from the way the Board has been empowered and operates. In many ways it bears the hallmarks of a self-contained system of private law, and in and of itself that's fine. Private law is nothing new: arbitration, for instance, is basically just that, a system of private law. Private law can exist alongside regular, public, democratically-generated law just fine, although sometimes there are tensions, because for it to work all the parties need to agree to abide by it instead of public law, and sometimes that consent isn't sufficiently voluntary.

But consent is not an issue here: before the Oversight Board came along Facebook users had no legal leverage of any kind over Facebook, so this is now a system of private law that Facebook has agreed can give them some. We can and should of course care that this system of private law is a good one, well-balanced and equitable, and thus far we've seen no basis for any significant concern. We instead see a lot of thoughtful people working very hard to try to get it right and open to being nudged to do better if such nudging should happen to be needed. But even if they were getting everything all wrong, in the big picture it doesn't really matter either, because ultimately it is only Facebook's oversight board, inherently limited in its authority and reach to that platform.

The misapprehension that this Board can or should somehow rule over all moderation decisions on the Internet is also not helped by the decision to call it the "Oversight Board," rather than the "Facebook Oversight Board." Perhaps it could become a model for other platforms to use, and maybe, just maybe, if it really does become a fully spun-off independent, sustainable, self-contained private law system it might someday be able to supply review services to other platforms too—provided, of course, that the Board is equipped to address these platforms' own particularities and priorities, which may differ significantly from Facebook's.

But right now it is only a solution for Facebook and only set up to consider the unique nature of the Facebook platform and what Facebook and its user community want from it. It is far from a one-size-fits-all solution for Internet content moderation generally, and our comment said as much, noting that the relative merit of the moderation decision in question ultimately hinged on what Facebook wanted its platform to be.

Nevertheless, it is absolutely fine for it to be so limited in its mission, and far better than if it were more. Just as Facebook had the right to acquiesce to this oversight board, other platforms equally have the right, and need to have the right, to say no to it or any other such board. It won't stop being important for the First Amendment to protect this discretion, regardless of how good a job this or any other board might do. While the Oversight Board can, and likely should, try to incorporate First Amendment values into its decisions to the extent it can, actual First Amendment law operates on a different axis than this system of private law ever would or could, with different interests and concerns to be balanced.

It is a mistake to think we could simply supplant all of those considerations with the judgment of this Oversight Board. No matter how thoughtful its decisions, nor how great the impact of what it decides, the Oversight Board is still not a government body. Neither it (nor even Facebook) has the sort of power the state has, nor any of the Constitutional limitations that would check it. Facebook remains a private actor, a company with a social media platform, and Facebook's Oversight Board simply an organization built to help it make its platform better. We should be extremely wary of expecting it to be anything other than that.

Especially because that's already plenty for it to be in order for it to be able to do some good.


Posted on Techdirt - 12 February 2021 @ 12:01pm

The Copia Institute To The Oversight Board Regarding Facebook's Trump Suspension: There Was No Wrong Decision

from the context-driven-coin-flip dept

The following is the Copia Institute's submission to the Oversight Board as it evaluates Facebook's decision to remove some of Trump's posts and his ability to post. While addressed to the Board, it's written for everyone thinking about how platforms moderate content.

The Copia Institute has advocated for social media platforms to permit the greatest amount of speech possible, even when that speech is unpopular. At the same time, we have also defended the right of social media platforms to exercise editorial and associative discretion over the user expression they permit on their services. This case illustrates why we have done both. We therefore take no position on whether Facebook's decision to remove former-President Trump's posts and disable his ability to make further posts was the right decision for Facebook to make, because either choice is defensible. Instead our goal is to explain why.

Reasons to be wary of taking content down. We have long held the view that the reflex to remove online content, even odious content, is generally not a healthy one. Not only can it backfire and lead to the removal of content undeserving of deletion, but it can have the effect of preserving a false monoculture in online expression. Social media is richer and more valuable when it can reflect the full fabric of humanity, even when that means enabling speech that is provocative or threatening to hegemony. Perhaps especially then, because so much important, valid, and necessary speech can so easily be labeled that way. Preserving different ideas, even when controversial, ensures that there will be space for new and even better ones, whereas policing content for compliance with current norms only distorts those norms' development.

Being too willing to remove content also has the effect of teaching the public that when it encounters speech that provokes, the way to respond is to demand its suppression. Instead of a marketplace of ideas, this burgeoning tendency means that discourse becomes a battlefield, where the view that will prevail is the one that can amass enough censorial pressure to remove its opponent—even if it's the view with the most merit. The more Facebook feeds this unfortunate instinct by removing user speech, the more vulnerable it will be to further pressure demanding still more removals, even of speech society would benefit from. The reality is that there will always be disagreements over the worth of certain speech. As long as Facebook assumes the role of an arbiter, it will always find itself in the middle of an unwinnable tug-of-war between conflicting views. To break this cycle, removals should be made with reluctance and only with limited, specific, identifiable, and objective criteria to justify the exception. It may be hard to employ them consistently at scale, but more restraint will in the long run mean less error.

Reasons to be wary of leaving content up. The unique challenge presented in this case is that the Facebook user at the time of the posts in question was the President of the United States. This fact cuts in multiple ways: as the holder of the highest political office in the country Trump's speech was of particular relevance to the public, and thus particularly worth facilitating. After all, even if Trump's posts were debauched, these were the views of the President, and it would not have served the public for him to be of this character and the public not to know.

On the other hand, as the then-President of the United States his words had greater impact than any other user's. They could do, and did, more harm, thanks to the weight of authority they acquired from the imprimatur of his office. And those real-world effects provided a perfectly legitimate basis for Facebook to take steps to (a) mitigate that damage by removing posts and (b) end the association that had allowed him to leverage Facebook for those destructive ends.

If Facebook concludes that anyone's use of its services is not in its interests, the interests of its user community, or the interests of the wider world Facebook and its users inhabit, it can absolutely decide to refuse that user continued access. And it can reach that conclusion based on wider context, beyond platform use. Facebook could for instance deny a confessed serial killer who only uses Facebook to publish poetry access to its service if it felt that the association ultimately served to enable the bad actor's bad acts. As with speech removals, such decisions should be made with reluctance and based on limited, specific, identifiable, and objective criteria, given the impact of such terminations. Just as continued access to Facebook may be unduly empowering for users, denying it can be equally disempowering. But in the case of Trump, as President he did not need Facebook to communicate to the public. He had access to other channels and Facebook no obligation to be conscripted to enable his mischief. Facebook has no obligation to enable anyone's mischief, whether they are a political leader or otherwise.

Potential middle-grounds. When it comes to deciding whether to continue to provide Facebook's services to users and their expression, there is a certain amount of baby-splitting that can be done in response to the sorts of challenges raised by this case. For instance, Facebook does more than simply host speech that can be read by others; it provides tools for engagement such as comments and sharing and amplification through privileged display, and in some instances allows monetization. Withdrawing any or all of these additional user benefits is a viable option that may go a long way toward minimizing the problems of continuing to host problematic speech or a problematic user without the platform needing to resort to removing either entirely.

Conclusion. Whether removing Trump's posts and further posting ability was the right decision or not depends on what sort of service Facebook wants to be and which choice it believes best serves that purpose. Facebook can make these decisions any way it wants, but if it wants to minimize public criticism and maximize public cooperation, how it makes them is what matters. These decisions should be transparent to the user community, scalable to apply to future situations, and predictable in how they will be applied, at least to the extent they can be, since circumstances and judgment will inevitably evolve. Every choice will have consequences, some good and some bad. The choice for Facebook is really to affirmatively choose which ones it wants to favor. There may not be any one right answer, or even any truly right answer. In fact, in the end the best decision may have little to do with the actual choice that results but rather with the process used to get there.


Posted on Techdirt - 10 February 2021 @ 1:46pm

How To Think About Online Ads And Section 230

from the oversimplification-avoidance dept

There's been a lot of consternation about online ads, sometimes even for good reason. The problem is that not all of the criticism is sound or well-directed. Worse, the antipathy towards ad tech, regardless of whether it is well-founded or not, is coalescing into yet more unwise, and undeserved, attacks on Section 230 and other expressive discretion the First Amendment protects. If these attacks are ultimately successful none of the problems currently lamented will be solved, but they will create lots of new ones.

As always, effectively addressing actual policy challenges first requires a better understanding of what these challenges are. The reality is that there are at least three separate issues that are raised by online ads: those related to ad content itself, those related to audience targeting, and those related to audience tracking. They all require their own policy responses—and, as it happens, none of those policy responses call for doing anything to change Section 230. In fact, to the extent that Section 230 is even relevant, the best policy response will always require keeping it intact.

With regard to ad content, Section 230 applies, and should apply, to the platforms that run advertiser-supplied ads for the same reasons it applies, and should apply, to the platforms hosting the other sorts of content created by users. After all, ad content is, in essence, just another form of user generated content (in fact, sometimes it's exactly like other forms of user content). And, as such, the principles behind having Section 230 apply to platforms hosting user-generated content in general also apply – and need to apply – here.

For one thing, as with ordinary user-generated content, platforms are not going to be able to police all the ad content that may run on their site. One important benefit of online advertising versus offline is that it enables far more entities to advertise to far larger audiences than they would be able to afford in the offline space. Online ads may therefore sometimes be cheesy, low-budget affairs, but it's ultimately good for the consumer if it's not just large, well-resourced, corporate entities who get to compete for public attention. We should be wary of implementing any policy that might choke off this commercial diversity.

Of course, the flip side to making it possible for many more actors to supply many more ads is that the supply of online ads is nearly infinite, and thus the volume is simply too great for platforms to be able to scrutinize all of them (or even most of them). Furthermore, even in cases where platforms might be able to examine an ad, they are still unlikely to have the expertise to review it for all possible legal issues that might arise in every jurisdiction where the ad may appear. Section 230 exists in large part to alleviate these impossible content policing burdens to make it possible for platforms to facilitate the appearance of any content at all.

Nevertheless, Section 230 also exists to make it possible for platforms to try to police content anyway, to the extent that they can, by making it clear that they can't be held liable for any of those moderation efforts. And that's important if we want to encourage them to help eliminate ads of poor quality. We want platforms to be able to do the best they can to get rid of dubious ads, and that means we need to make it legally safe for them to try.

The more we think they should take these steps, the more we need policy to ensure that it's possible for platforms to respond to this market expectation. And that means we need to hold onto Section 230 because it is what affords them this practical ability.

What's more, Section 230 affords platforms all this critical protection regardless of whether they profit from carrying content or not. The statute does not condition its protection on whether a platform facilitates content in exchange for money, nor is there any sort of constitutional obligation for a platform to provide its services on a charitable basis in order to benefit from the editorial discretion the First Amendment grants it. Sure, some platforms do pointedly host user content for free, but every platform needs to have some way of keeping the lights on and servers running. And if the most effective way to keep their services free for some users to post their content is to charge others for theirs, it is an absolutely constitutionally permissible decision for a platform to make.

In fact, it may even be good policy to encourage as well, as it keeps services available for users who can't afford to pay for access. Charging some users to facilitate their content doesn't inherently make the platform complicit in the ad content's creation, or otherwise responsible for imbuing it with whatever quality is objectionable. Even if an advertiser has paid for algorithmic display priority, Section 230 should still apply just as it applies to any other algorithmically driven display decision the platform employs.

But on the off-chance that the platform did take an active role in creating that objectionable content, Section 230 has never stood in the way of holding the platform responsible. What Section 230 simply says is that making it possible to post unlawful content is not the same as creating content; for the platform to be liable as an "information content provider," aka a content creator, it had to have done something significantly more to birth its wrongful essence than simply be a vehicle for someone else to express it.

That remains true even if the platform allows the advertiser to choose its audience. After all, the content has already been created. Audience targeting is something else entirely, but it's also something we should be wary of impinging upon.

There may, of course, be situations where advertisers try to target certain types of ads (ex: jobs, housing offers) in harmful ways. And when they do it may be appropriate to sanction the advertiser for what may amount to illegally discriminatory behavior. But not every such targeting choice is wrongful; sometimes choosing narrow audiences based on protected status may even be beneficial. But if we change the law to allow platforms to be held equally liable with the advertiser for their wrongful targeting choices, we will take away the ability for platforms to offer audience targeting for any reason, even good ones, by making it legally unsafe in case the advertiser does it for bad ones.

Furthermore, doing so will upend all advertising as we've known it, and in a way that's offensive to the First Amendment. There's a reason that certain things are advertised during prime time, or during sports broadcasts, or on late-night TV, just as there's a reason that ads appearing in the New York Times are not necessarily the same ones running in Field & Stream or Ebony magazines. The Internet didn't suddenly make those choices possible; advertisers have always wanted the most bang for their buck, to reach the people most likely to be their ultimate customers as cost effectively as possible. And as a result they have always made choices about where to place their ads based on the demographics those ads likely reach. To now say that it should be illegal to allow advertisers to ever make such choices, simply because they may sometimes make these decisions wrongfully, would disrupt decades upon decades of past practice and likely run afoul of the First Amendment, which generally protects the choice of whom to speak to. In fact, it protects it regardless of the medium in question, and there is no principled reason why an online platform should be any less protected than a broadcaster or some sort of printed periodical (especially not the former).

Even if it would be better if advertisers weren't so selective—and it's a fair argument to make, and a fair policy to pursue—it's not an outcome we should use the weight of legal liability to try to force. It won't work, and it impinges on important constitutional freedoms we've come to count on. Rather, if there is any affirmative policy response to ad tech that is warranted it is likely with the third constituent part: audience tracking. But even so, any policy response will still need to be a careful one.

There is nothing new about marketers wanting to fully understand their audiences; they have always tried to track them as well as the technology of the day would allow. What's new is how much better they now can. And the reality is that some of the tracking ability is intrusive and creepy, especially to the degree it happens without the audience being aware of how much of their behavior is being silently learned by strangers. There is room for policy to at minimum encourage, and potentially even require, such systems to be more transparent in how they learn about their audiences, tell others what they've learned, and give those audiences a chance to say no to much of it.

But in considering the right regulatory response there are some important caveats. First, take Section 230 off the table. It has nothing to do with this regulatory problem, apart from enabling platforms that may use ad tech to exist at all. You don't fix ad tech by killing the entire Internet; any regulatory solution is only a solution when it targets the actual problem.

Which leads to the next caution: the regulatory schemes we've seen attempted so far (GDPR, CCPA, Prop. 24) are, even if well-intentioned, clunky, conflicting, and saddled with overhead that compromises their effectiveness and imposes its own unintended and chilling costs, including on expression itself (and on more expression than just that of advertisers).

Still, when people complain about online ads this is frequently the area they are complaining about and it is worth focused attention to solve. But it is tricky; given how easy it is for all online activity to leave digital footprints, as well as the many reasons we might want to allow those footprints to be measured and then those measurements to be used (even potentially for advertising), care is required to make sure we don't foreclose the good uses while aiming to suppress the bad. But for the right law, one that recognizes and reasonably reacts to the complexity of this policy challenge, there is an opportunity for a constructive regulatory response to this piece of the online ad tech puzzle. There is no quick fix – and ripping apart the Internet by doing anything to Section 230 is certainly not any kind of fix at all – but if something must be done about online advertising, this is the something that's worth the thoughtful policy attention to try to get right.


Posted on Techdirt - 9 February 2021 @ 10:45am

If We're Going To Talk About Discrimination In Online Ads, We Need To Talk About Roommates.com

from the deja-vu-all-over-again dept

It has been strange to see people speak about Section 230 and illegal discrimination as if it were somehow a new issue. In fact, one of the seminal court cases that articulated the parameters of Section 230, the Roommates.com case, did so in the context of housing discrimination. It's worth taking a look at what happened in that litigation and how it bears on the current debate.

Roommates.com was (and apparently remains) a specialized platform that does what it says on the tin: allow people to advertise for roommates. Back when the lawsuit began, it allowed people who were posting for roommates to include racial preferences in their ads, and it did so in two ways: (1) through a text box, where people could write anything about the roommate situation they were looking for, and (2) through answers to mandatory questions about roommate preferences.

Roommates.com got sued by the Fair Housing Councils of the San Fernando Valley and San Diego for violating federal (FHA) and state (FEHA) fair housing law by allowing advertisers to express these discriminatory preferences. It pled a Section 230 defense, because the allegedly offending ads were user ads. But, in a notable Ninth Circuit decision, it both won and lost.

In sum, the court found that Section 230 indeed applied to the user expression supplied through the text box. That expression, for better or worse, was entirely created by the user. If something was wrong with it, it was the user who had made it wrongful and the user, as the information content provider, who could be held responsible—but not, per Section 230, the platform, which was the interactive computer service provider for purposes of the statute and therefore immune from liability for it.

But the mandatory questions were another story. The court was concerned that, if these ads were illegally discriminatory, the platform had been a party to the creation of that illegality by prompting the user to express discriminatory preferences. And so the court found that Section 230 did not provide the platform a defense to any claim predicated on the content elicited by these questions.

Even though it was a split and somewhat messy decision, the case has held up over the years and provided subsequent courts with some guidance for figuring out when Section 230 should apply. There are still fights around the edges, but the analysis has basically boiled down to determining who imbued the content with its allegedly wrongful quality. If the platform did, then it's on the hook as much as the user may be. But its contribution to the wrongful content's creation still has to be more substantive than merely offering the user the opportunity to express something illegal.

The fact that Roommate encourages subscribers to provide something in response to the prompt is not enough to make it a "develop[er]" of the information under the common-sense interpretation of the term we adopt today. It is entirely consistent with Roommate's business model to have subscribers disclose as much about themselves and their preferences as they are willing to provide. But Roommate does not tell subscribers what kind of information they should or must include as "Additional Comments," and certainly does not encourage or enhance any discriminatory content created by users. Its simple, generic prompt does not make it a developer of the information posted. [p. 1174].

The reason it is so important to hold onto that distinction is because the litigation has a punchline. The case didn't end there, with that first Ninth Circuit decision. After several more years of litigation there was another Ninth Circuit decision in the case, this time on the merits of the discrimination claim.

And the claim failed. Per the Ninth Circuit, roommate situations are so intimate that the First Amendment right of free association must be allowed to prevail, and people must be able to choose whom they live with by any criteria they like, even if it's xenophobic prejudice.

Because of a roommate's unfettered access to the home, choosing a roommate implicates significant privacy and safety considerations. The home is the center of our private lives. Roommates note our comings and goings, observe whom we bring back at night, hear what songs we sing in the shower, see us in various stages of undress and learn intimate details most of us prefer to keep private. Roommates also have access to our physical belongings and to our person. As the Supreme Court recognized, "[w]e are at our most vulnerable when we are asleep because we cannot monitor our own safety or the security of our belongings." Minnesota v. Olson, 495 U.S. 91, 99, 110 S.Ct. 1684, 109 L.Ed.2d 85 (1990). Taking on a roommate means giving him full access to the space where we are most vulnerable. [p. 1221]


Government regulation of an individual's ability to pick a roommate thus intrudes into the home, which "is entitled to special protection as the center of the private lives of our people." Minnesota v. Carter, 525 U.S. 83, 99, 119 S.Ct. 469, 142 L.Ed.2d 373 (1998) (Kennedy, J., concurring). "Liberty protects the person from unwarranted government intrusions into a dwelling or other private places. In our tradition the State is not omnipresent in the home." Lawrence v. Texas, 539 U.S. 558, 562, 123 S.Ct. 2472, 156 L.Ed.2d 508 (2003). Holding that the FHA applies inside a home or apartment would allow the government to restrict our ability to choose roommates compatible with our lifestyles. This would be a serious invasion of privacy, autonomy and security. [id.].


Because precluding individuals from selecting roommates based on their sex, sexual orientation and familial status raises substantial constitutional concerns, we interpret the FHA and FEHA as not applying to the sharing of living units. Therefore, we hold that Roommate's prompting, sorting and publishing of information to facilitate roommate selection is not forbidden by the FHA or FEHA. [p. 1223]

This ruling is important on a few fronts. In terms of substance, it means that a law trying to ban discrimination may itself have constitutional problems. It may be just, proper, and even affirmatively constitutional to ban discrimination in many or even most contexts. But, as this decision explains, it isn't necessarily so in all contexts, and ignoring this nuance risks harm to people and the liberty interests that protect them.

Meanwhile, from a Section 230 perspective, the decision meant that a platform got dragged through years and years of expensive litigation only to ultimately be exonerated. It's amazing it even managed to survive, as many platforms needlessly put through the litigation grinder don't. And that's a big reason why we have Section 230, because we want to make sure platforms can't get bled dry before being found not liable. It is not ultimate liability that can crush them; it's the litigation itself that can tear them to pieces and force them to shut down or at least severely restrict even lawful content.

Section 230 is designed to avoid these outcomes, and it's important that we not let our distaste, however justified, for some of the content internet users may create prompt us to make the platforms they use vulnerable to such ruin. Not if we want to make sure internet services can still remain available to facilitate the content that we prefer they carry instead.


