corynne.mcsherry's Techdirt Profile

Posted on Techdirt - 13 January 2026 @ 03:24pm

Online Gaming’s Final Boss: The Copyright Bully

Since the earliest days of computer games, people have tinkered with the software to customize their own experiences or share their vision with others. From the dad who changed the game’s male protagonist to a girl so his daughter could see herself in it, to the developers who got their start in modding, games have been a medium where you don’t just consume a product, you participate in and interact with culture.

For decades, that participatory experience was a key part of one of the longest-running video games still in operation: EverQuest. Players had the official client, acquired lawfully from EverQuest’s developers, and modders figured out how to enable those clients to communicate with their own servers and then modify their play experience – creating new communities along the way.

EverQuest’s copyright owners implicitly blessed all this. But the current owners, a private equity firm called Daybreak, want to end that independent creativity. They are using copyright claims to threaten modders who wanted to customize the EverQuest experience to suit a different playstyle, running their own servers where things worked the way they wanted.

One project in particular is in Daybreak’s crosshairs: “The Hero’s Journey” (THJ). Daybreak claims THJ has infringed its copyrights in EverQuest visuals and characters, cutting into its bottom line.

Ordinarily, when a company wants to remedy some actual harm, its lawyers will start with a cease-and-desist letter and potentially pursue a settlement. But if the goal is intimidation, a rightsholder is free to go directly to federal court and file a complaint. That’s exactly what Daybreak did, using that shock-and-awe approach to cow not only The Hero’s Journey team, but unrelated modders as well.

Daybreak’s complaint seems to have dazzled the judge in the case by presenting side-by-side images of dragons and characters that look identical in the base game and when using the mod, without explaining that these images are the ones provided by EverQuest’s official client, which players have lawfully downloaded from the official source. The judge wound up short-cutting the copyright analysis and issuing a ruling that has proven devastating to the thousands of players who are part of EverQuest modding communities.

Daybreak and the developers of The Hero’s Journey are now in private arbitration, and Daybreak has wasted no time in sending that initial ruling to other modders. The order doesn’t bind anyone who’s unaffiliated with The Hero’s Journey, but it’s understandable that modders who are in it for fun and community would cave to the implied threat that they could be next.

As a result, dozens of fan servers have stopped operating. Daybreak has also persuaded the maintainers of the shared server emulation software that most fan servers rely upon, EQEmulator, to adopt terms of service that essentially ban all but the most negligible modding. The terms also provide that “your operation of an EQEmulator server is subject to Daybreak’s permission, which it may revoke for any reason or no reason at any time, without any liability to you or any other person or entity. You agree to fully and immediately comply with any demand from Daybreak to modify, restrict, or shut down any EQEmulator server.”

Sadly, this is not even an uncommon story in fan spaces—from the dustup over changes to the Dungeons and Dragons open gaming license to the “guidelines” issued by CBS for Star Trek fan films, we see new generations of owners deciding to alienate their most avid fans in exchange for more control over their new property. It often seems counterintuitive—fans are creating new experiences, for free, that encourage others to get interested in the original work.

Daybreak can claim a shameful victory: it has imposed unilateral terms on the modding community that are far more restrictive than what fair use and other user rights would allow. In the process, it is alienating the very people it should want to cultivate as customers: hardcore EverQuest fans. If it wants fans to continue to invest in making its games appeal to broader audiences and serve as testbeds for game development and sources of goodwill, it needs to give the game’s fans room to breathe and to play.

If you’ve been a target of Daybreak’s legal bullying, we’d love to hear from you; email us at info@eff.org.

Republished from EFF’s Deeplinks blog.

Posted on Techdirt - 2 July 2025 @ 03:43pm

The NO FAKES Act Has Changed – And It’s So Much Worse

A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.

The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.

The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.

The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”

This bill would be a disaster for internet speech and innovation.

Targeting Tools

The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, anyone who owns the rights in that individual’s image, or the law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for, or have only limited commercial uses other than, making unauthorized images—but those limits will offer cold comfort to developers, given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rightsholders the veto power over innovation they’ve long sought in the copyright wars, based on the same tech panics.

Takedown Notices and Filter Mandate

The first version of NO FAKES set up a notice-and-takedown system patterned on the DMCA, with even fewer safeguards. The new version expands it to cover more service providers and requires those providers not only to take down targeted materials (or tools) but also to keep them from being uploaded in the future. In other words: adopt broad filters or lose the safe harbor.

Filters are already a huge problem when it comes to copyright, and at least in that context a filter should do no more than flag an upload for human review when it appears to be a whole copy of a work. In reality, these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things as infringing based on mere seconds of a match, and they frequently fail to take into account context that would make the use authorized by law.

But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.

The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.

Threats to Anonymous Speech

As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.

We’ve already seen abuse of a similar system in action. In copyright cases, those unhappy with the criticisms being made against them get such subpoenas to silence critics. Often the criticism includes the complainant’s own words as evidence, an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.

Not only does this chill further speech, but the unmasking itself can harm users, both reputationally and in their personal lives.

Threats to Innovation

Most of us are very unhappy with the state of Big Tech. It seems like not only are we increasingly forced to use the tech giants, but that the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.

Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity.  For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?

This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. But if Congress is really worried about privacy harms, it should at least wait to see the effects of that last piece of internet regulation before enacting a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.

NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.

Originally posted to the EFF’s Deeplinks blog, with a link to EFF’s Take Action page on the NO FAKES bill, which helps you tell your elected officials not to support this bill.

Posted on Techdirt - 21 May 2025 @ 12:46pm

The U.S. Copyright Office’s Draft Report On AI Training Errs On Fair Use

Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.

The Report Bungles Fair Use

Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.

To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.

Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies.

Courts Should Reject the Copyright Office’s Fair Use Analysis

The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.   

Courts need accept the Copyright Office’s draft conclusions, however, only if they are persuasive. They are not.

The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.

The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.

The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may be ultimately used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can’t be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.

The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works.  But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.

Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider overall effects of the use of the models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.

This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love.  This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.

Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.

We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.

The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.

Reposted from the EFF’s Deeplinks blog.

Posted on Techdirt - 19 May 2025 @ 12:03pm

The FCC Must Reject Efforts To Lock Up Public Airwaves

President Trump’s attack on public broadcasting has attracted plenty of deserved attention, but there’s a far more technical, far more insidious policy change in the offing—one that will take away Americans’ right to unencumbered access to our publicly owned airwaves.

The FCC is quietly contemplating a fundamental restructuring of all broadcasting in the United States, via a new DRM-based standard for digital television equipment, enforced by a private “security authority” with control over licensing, encryption, and compliance. This move is confusingly called the “ATSC Transition” (ATSC is the digital TV standard the US switched to in 2009 – the “transition” here is to ATSC 3.0, a new version with built-in DRM).

The “ATSC Transition” is championed by the National Association of Broadcasters, who want to effectively privatize the public airwaves, allowing broadcasters to encrypt over-the-air programming, meaning that you will only be able to receive those encrypted shows if you buy a new TV with built-in DRM keys. It’s a tax on American TV viewers, forcing you to buy a new TV so you can continue to access a public resource you already own. 

This may not strike you as a big deal. Lots of us have given up on broadcast and get all our TV over the internet. But millions of Americans still rely heavily or exclusively on broadcast television for everything from news to education to simple entertainment. Many of these viewers live in rural or tribal areas, and/or are low-income households who can least afford to “upgrade.” Historically, these viewers have been able to rely on access to broadcast because, by law, broadcasters get extremely valuable spectrum licenses in exchange for making their programming available for free to anyone within range of their broadcast antennas.

Adding DRM to over-the-air broadcasts upends this system. The “ATSC Transition” is really a transition from the century-old system of universally accessible programming to a privately controlled web of proprietary technological restrictions. It’s a transition from a system where anyone can come up with innovative new TV hardware to one where a centralized, unaccountable private authority gets a veto right over new devices.

DRM licensing schemes like this are innovation killers. Prime example: DVDs and DVD players, which have been subject to a similar central authority, and haven’t gotten a single new feature since the DVD player was introduced in 1995. 

DRM is also incompatible with fundamental limits on copyright, like fair use.  Those limits let you do things like record a daytime baseball game and then watch it after dinner, skipping the ads. Broadcasters would like to prevent that and DRM helps them do it. Keep in mind that bypassing or breaking a DRM system’s digital keys—even for lawful purposes like time-shifting, ad-skipping, security research, and so on—risks penalties under Section 1201 of the Digital Millennium Copyright Act. That is, unless you have the time and resources to beg the Copyright Office for an exemption (and, if the exemption is granted, to renew your plea every three years). 

Broadcasters say they need this change to offer viewers new interactive features that will serve the public interest. But if broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them. The most reliable indicator that a new feature is cool and desirable is that people voluntarily install it. If the only way to get someone to use a new feature is to lock up the keys so they can’t turn it off, that’s a clear sign that the feature is not in the public interest. 

That’s why EFF joined Public Knowledge, Consumer Reports and others in urging the FCC to reject this terrible, horrible, no good, very bad idea and keep our airwaves free for all of us. We hope the agency listens, and puts the interests of millions of Americans above the private interests of a few powerful media cartels.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 10 September 2024 @ 03:25pm

NO FAKES – A Dream For Lawyers, A Nightmare For Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections—a crucial liability shield for platforms and anyone else that hosts or shares user-generated content—will not apply. And that legal risk begins the moment a person receives a notice claiming the content is unlawful, even if they didn’t create the replica and have no way to confirm whether it was authorized or to verify the claim. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice-and-takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content I.D. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty—which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  

What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusions about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 23 September 2021 @ 01:35pm

Content Moderation Beyond Platforms: A Rubric

For decades, EFF and others have been documenting the monumental failures of content moderation at the platform level—inconsistent policies, inconsistently applied, with dangerous consequences for online expression and access to information. Yet despite mounting evidence that those consequences are inevitable, service providers at other levels are increasingly choosing to follow suit.

The full infrastructure of the internet, or the “full stack,” is made up of a range of entities, from consumer-facing platforms like Facebook or Pinterest, to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos and read blog posts without thinking about all the services that help get content from the original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about intermediaries like AWS at all—until AWS decides to deny service to speech it doesn’t like, as it did with the social media site Parler, and that decision gets press attention.

Invisible or not, these intermediaries are potential speech “chokepoints” and their choices can significantly influence the future of online expression. Simply put, platform-level moderation is broken and infrastructure-level moderation is likely to be worse. That said, the pitfalls and risks for free expression and privacy may play out differently depending on what kind of provider is doing the moderating. To help companies, policymakers and users think through the relative dangers of infrastructure moderation at various levels of the stack, here’s a set of guiding questions.

  1. Is meaningful transparency, notice, and appeal possible? Given the inevitability of mistakes, human rights standards demand that service providers notify users that their speech has been, or will be, taken offline, and offer users an opportunity to seek redress. Unfortunately, many services do not have a direct relationship with either the speaker or the audience for the expression at issue, making all of these steps challenging. But without them, users will be held not only to their host’s terms and conditions but also those of every service in the chain from speaker to audience, even though they may not know what those services are or how to contact them. Given the potential consequences of violations, and the difficulty of navigating the appeals processes of previously invisible services (assuming such a process even exists), many users will simply avoid sharing controversial opinions altogether. Relatedly, where a service provider has no relationship to the speaker or audience, takedowns will be much easier and cheaper than a nuanced analysis of a given user’s speech.
  2. Do viable competitive alternatives exist? One of the reasons net neutrality rules for ISPs are necessary is that users have so few options for high-quality internet access. If your ISP decides to shut down your account based on your expression (or that of someone else using the account), in much of the world, including the U.S., you can’t go to another provider. At other layers of the stack, such as the domain name system, there are multiple providers from which to choose, so a speaker who has their domain name frozen can take their website elsewhere. But the existence of alternatives alone is not enough; answering this question also requires evaluating the costs of switching and whether it calls for technical savvy beyond the skill set of most users.
  3. Is it technologically possible for the service to tailor its moderation practices to target only the specific offensive expression? At the infrastructure level, many services cannot target their response with the precision human rights standards demand. Twitter can block specific tweets; Amazon Web Services can only deny service to an entire site, which means its actions inevitably affect far more than the objectionable speech that motivated them. We can take a lesson here from the copyright context, where we have seen domain name registrars and hosting providers shut down entire sites in response to infringement notices targeting a single document. It may be possible for some services to communicate directly with customers when they are concerned about a specific piece of content, and request that it be taken down. But if that request is rejected, the service has only the blunt instrument of complete removal at its disposal.
  4. Is moderation an effective remedy? The U.S. experience with online sex trafficking teaches that removing distasteful speech may not have the hoped-for impact. In 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. Today, legislation designed to prevent the use of online platforms for sex trafficking has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.
  5. Will collateral damage, such as the stifling of lawful expression, disproportionately affect less powerful groups? Moderation choices may reflect and reinforce bias against marginalized communities. Take, for example, Facebook’s decision, in the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. Or Twitter’s decision to use harassment provisions to shut down the verified account of a prominent Egyptian anti-torture activist. Or the content moderation decisions that have prevented women of color from sharing the harassment they receive with their friends and followers. Or the decision by Twitter to mark tweets containing the word “queer” as offensive, regardless of context. As with the competition inquiry, this analysis should consider whether the impacted speakers and audiences will have the ability to respond and/or find effective alternative venues.
  6. Is there a user- and speech-friendly alternative to central moderation? Could there be? One of the key problems of content moderation at the social media level is that the moderator substitutes its policy preferences for those of its users. When infrastructure providers enter the game, with generally less accountability, users have even less ability to make their own choices about their own internet experience. If there are tools that allow users themselves to express and implement their own preferences, infrastructure providers should return to the business of servicing their customers — and policymakers have a weaker argument for imposing new requirements.
  7. Will governments seek to hijack any moderation pathway? We should be wary of moderation practices that will provide state and state-sponsored actors with additional tools for controlling public dialogue. Once processes and tools to take down expression are developed or expanded, companies can expect a flood of demands to apply them to other speech. At the platform level, state and state-sponsored actors have weaponized flagging tools to silence dissent. In the U.S., the First Amendment and the safe harbor of Section 230 largely prevent moderation requirements. But policymakers have started to chip away at Section 230, and we expect to see more efforts along those lines. In other countries, such as Canada, the U.K., Turkey and Germany, policymakers are contemplating or have adopted draconian takedown rules for platforms and would doubtless like to extend them further. 

Companies should ask all of these questions when they are considering whether to moderate content (in general or as a specific instance). And policymakers should ask them before they either demand or prohibit content moderation at the infrastructure level. If more than two decades of social media content moderation has taught us anything, it is that we cannot “tech” our way out of political and social problems. Social media companies have tried and failed to do so; infrastructure companies should refuse to replicate those failures—beginning with thinking through the consequences in advance, deciding whether they can mitigate them and, if not, whether they should simply stay out of it.

Corynne McSherry is the Legal Director at EFF, specializing in copyright, intermediary liability, open access, and free expression issues.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we’ll have many of this series’ authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we’ll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.

Posted on Techdirt - 8 September 2021 @ 03:41pm

New Texas Abortion Law Likely To Unleash A Torrent Of Lawsuits Against Online Education, Advocacy And Other Speech

In addition to the drastic restrictions it places on a woman’s reproductive and medical care rights, the new Texas abortion law, SB8, will have devastating effects on online speech. 

The law creates a cadre of bounty hunters who can use the courts to punish and silence anyone whose online advocacy, education, and other speech about abortion draws their ire. It will undoubtedly lead to a torrent of private lawsuits against online speakers who publish information about abortion rights and access in Texas, with little regard for the merits of those lawsuits or the First Amendment protections accorded to the speech. Individuals and organizations providing basic educational resources, sharing information, identifying locations of clinics, arranging rides and escorts, fundraising to support reproductive rights, or simply encouraging women to consider all their options now have to consider the risk that they might be sued for merely speaking. The result will be a chilling effect on speech and a litigation cudgel that will be used to silence those who seek to give women truthful information about their reproductive options. 

SB8, also known as the Texas Heartbeat Act, encourages private persons to file lawsuits against anyone who “knowingly engages in conduct that aids or abets the performance or inducement of an abortion.” It doesn’t matter whether that person “knew or should have known that the abortion would be performed or induced in violation of the law,” that is, the law’s new and broadly expansive definition of illegal abortion. And you can be liable even if you simply intend to help, regardless, apparently, of whether an illegal abortion actually resulted from your assistance.

And although you may defend a lawsuit if you believed the doctor performing the abortion complied with the law, it is really hard to do so. You must prove that you conducted a “reasonable investigation,” and as a result “reasonably believed” that the doctor was following the law. That’s a lot to do before you simply post something to the internet, and of course you will probably have to hire a lawyer to help you do it.

SB8 is a “bounty law”: it doesn’t just allow these lawsuits, it provides a significant financial incentive to file them. It guarantees that a person who files and wins such a lawsuit will receive at least $10,000 for each abortion that the speech “aided or abetted,” plus their costs and attorney’s fees. At the same time, SB8 may often shield these bounty hunters from having to pay the defendant’s legal costs should they lose. This removes a key financial disincentive they might have had against bringing meritless lawsuits. 

Moreover, lawsuits may be filed up to six years after the purported “aiding and abetting” occurred. And the law allows for retroactive liability: you can be liable even if your “aiding and abetting” conduct was legal when you did it, if a later court decision changes the rules. Together this creates a ticking time bomb for anyone who dares to say anything that educates the public about, or even discusses, abortion online.

Given this legal structure, and the law’s vast application, there is no doubt that we will quickly see the emergence of anti-choice trolls: lawyers and plaintiffs dedicated to using the courts to extort money from a wide variety of speakers supporting reproductive rights.

And unfortunately, it’s not clear when speech that encourages someone to commit a crime, or instructs them how to, rises to the level of “aiding and abetting” unprotected by the First Amendment. Under the leading case on the issue, it is a fact-intensive analysis, which means that defending the case on First Amendment grounds may be arduous and expensive. 

The result of all of this is the classic chilling effect: many would-be speakers will choose not to speak at all for fear of having to defend even the meritless lawsuits that SB8 encourages. And many speakers will choose to take down their speech if merely threatened with a lawsuit, rather than risk the law’s penalties if they lose or take on the burdens of a fact-intensive case even if they were likely to win it. 

The law does include an empty clause providing that it may not be “construed to impose liability on any speech or conduct protected by the First Amendment of the United States Constitution, as made applicable to the states through the United States Supreme Court’s interpretation of the Fourteenth Amendment of the United States Constitution.” While that sounds nice, it offers no real protection—you can already raise the First Amendment in any case, and you don’t need the Texas legislature to give you permission. Rather, that clause is included to try to insulate the law from a facial First Amendment challenge—a challenge to the mere existence of the law rather than its use against a specific person. In other words, the drafters are hoping to ensure that, even if the law is unconstitutional—which it is—each individual plaintiff will have to raise the First Amendment issues on their own, and bear the exorbitant costs—both financial and otherwise—of having to defend the lawsuit in the first place.

One existing free speech bulwark—47 U.S.C. § 230 (“Section 230”)—will provide some protection here, at least for the online intermediaries upon which many speakers depend. Section 230 immunizes online intermediaries from state law liability arising from the speech of their users, so it provides a way for online platforms and other services to get early dismissals of lawsuits against them based on their hosting of user speech. So although a user will still have to fully defend a lawsuit arising, for example, from posting clinic hours online, the platform they used to share that information will not. That is important, because without that protection, many platforms would preemptively take down abortion-related speech for fear of having to defend these lawsuits themselves. As a result, even a strong-willed abortion advocate willing to risk the burdens of litigation in order to defend their right to speak will find their speech limited if weak-kneed platforms refuse to publish it. This is exactly the way Section 230 is designed to work: to reduce the likelihood that platforms will censor in order to protect themselves from legal liability, and to enable speakers to make their own decisions about what to say and what risks to bear with their speech. 

But a powerful and dangerous chilling effect remains for users. Texas’s anti-abortion law is an attack on many fundamental rights, including the First Amendment rights to advocate for abortion rights, to provide basic educational information, and to counsel those considering reproductive decisions. We will keep a close eye on the lawsuits the law spurs and the chilling effects that accompany them. If you experience such censorship, please contact info@eff.org.

Originally published to the EFF Deeplinks blog.

Posted on Techdirt - 2 May 2019 @ 09:31am

Content Moderation is Broken. Let Us Count the Ways.

Social media platforms regularly engage in “content moderation”: the depublication, downranking, and sometimes outright censorship of information and/or user accounts from social media and other digital platforms, usually based on an alleged violation of a platform’s “community standards” policy. In recent years, this practice has become a matter of intense public interest. Not coincidentally, thanks to growing pressure from governments and some segments of the public to restrict various types of speech, it has also become more pervasive and aggressive, as companies struggle to self-regulate in the hope of avoiding legal mandates.

Many of us view content moderation as a given, an integral component of modern social media. But the specific contours of the system were hardly foregone conclusions. In the early days of social media, decisions about what to allow and what not to were often made by small teams or even individuals, and often on the fly. And those decisions continue to shape our social media experience today.

Roz Bowden, who spoke about her experience at UCLA’s All Things in Moderation conference in 2017, ran the graveyard shift at MySpace from 2005 to 2008, training content moderators and devising rules as they went along. Last year, Bowden told the BBC:

We had to come up with the rules. Watching porn and asking whether wearing a tiny spaghetti-strap bikini was nudity? Asking how much sex is too much sex for MySpace? Making up the rules as we went along. Should we allow someone to cut someone’s head off in a video? No, but what if it is a cartoon? Is it OK for Tom and Jerry to do it?

Similarly, in the early days of Google, then-deputy general counsel Nicole Wong was internally known as “The Decider” as a result of the tough calls she and her team had to make about controversial speech and other expression. In a 2008 New York Times profile of Wong and Google’s policy team, Jeffrey Rosen wrote that as a result of Google’s market share and moderation model, “Wong and her colleagues arguably have more influence over the contours of online expression than anyone else on the planet.”

Built piecemeal over the years by a number of different actors passing through Silicon Valley’s revolving doors, content moderation was never meant to operate at the scale of billions of users. The engineers who designed the platforms we use on a daily basis failed to imagine that one day they would be used by activists to spread word of an uprising…or by state actors to call for genocide. And as pressure from lawmakers and the public to restrict various types of speech (from terrorism to fake news) grows, companies are desperately looking for ways to moderate content at scale.

They won’t succeed, at least not if they care about protecting online expression even half as much as they care about their bottom line.

The Content Moderation System Is Fundamentally Broken. Let Us Count the Ways:

1. Content Moderation Is a Dangerous Job, But We Can’t Look to Robots to Do It Instead

As a practice, content moderation relies on people in far-flung (and almost always economically less well-off) locales to cleanse our online spaces of the worst that humanity has to offer so that we don’t have to see it. Most major platforms outsource the work to companies abroad, where some workers are reportedly paid as little as $6 a day and others report traumatic working conditions. Over the past few years, researchers such as EFF Pioneer Award winner Sarah T. Roberts have exposed just how harmful a job it can be to workers.

Companies have also tried replacing human moderators with AI, thereby solving at least one problem (the psychological impact that comes from viewing gory images all day), but potentially replacing it with another: an even more secretive process in which false positives may never see the light of day.

2. Content Moderation Is Inconsistent and Confusing

For starters, let’s talk about resources. Companies like Facebook and YouTube expend significant resources on content moderation, employing thousands of workers and utilizing sophisticated automation tools to flag or remove undesirable content. But one thing is abundantly clear: The resources allocated to content moderation aren’t distributed evenly. Policing copyright is a top priority, and because automation can detect nipples better than it can recognize hate speech, users often complain that more attention is given to policing women’s bodies than to speech that might actually be harmful.

But the system of moderation is also inherently inconsistent. Because it relies largely on community policing (that is, on people reporting other people for real or perceived violations of community standards), some users are bound to be more heavily impacted than others. A person with a public profile and a lot of followers is mathematically more likely to be reported than a less popular user. And when a public figure is removed by one company, it can create a domino effect whereby other companies follow their lead.

Problematically, companies’ community standards also often feature exceptions for public figures: That’s why the president of the United States can tweet hateful things with impunity, but an ordinary user can’t. While there’s some sense to such policies (people should know what their politicians are saying), certain speech obviously carries more weight when spoken by someone in a position of authority.

Finally, when public pressure forces companies to react quickly to new “threats,” they tend to overreact. For example, after the passing of FOSTA (a law purportedly designed to stop sex trafficking but which, as a result of sweepingly broad language, has resulted in confusion and overbroad censorship by companies), Facebook implemented a policy on sexual solicitation that was essentially a honeypot for trolls. In responding to ongoing violence in Myanmar, the company created an internal manual that contained elements of misinformation. And it’s clear that some actors have greater ability to influence companies than others: A call from Congress or the European Parliament carries a lot more weight in Silicon Valley than one that originates from a country in Africa or Asia. By reacting to the media, governments, or other powerful actors, companies reinforce the power that such groups already have.

3. Content Moderation Decisions Can Cause Real-World Harms to Users as Well as Workers

Companies’ attempts to moderate what they deem undesirable content have all too often had a disproportionate effect on already-marginalized groups. Take, for example, the attempt by companies to eradicate homophobic and transphobic speech. While that sounds like a worthy goal, these policies have resulted in LGBTQ users being censored for engaging in counterspeech or for using reclaimed terms like “dyke”. 

Similarly, Facebook’s efforts to remove hate speech have impacted individuals who have tried to use the platform to call out racism by sharing the content of hateful messages they’ve received. As an article in the Washington Post explained, “Compounding their pain, Facebook will often go from censoring posts to locking users out of their accounts for 24 hours or more, without explanation, a punishment known among activists as ‘Facebook jail.’”

Content moderation can also harm businesses. Small and large businesses alike increasingly rely on social media advertising, but strict content rules disproportionately impact certain types of businesses. Facebook bans ads that it deems “overly suggestive or sexually provocative”, a practice that has had a chilling effect on women’s health startups, bra companies, a book whose title contains the word “uterus”, and even the National Campaign to Prevent Teen and Unwanted Pregnancy.

4. Appeals Are Broken, and Transparency Is Minimal

For many years, users who wished to appeal a moderation decision had no feasible path for doing so…unless of course they had access to someone at a company. As a result, public figures and others with access to digital rights groups or the media were able to get their content reinstated, while others were left in the dark.

In recent years, some companies have made great strides in improving due process: Facebook, for example, expanded its appeals process last year. Still, users of various platforms complain that appeals go unanswered or yield no results, and the introduction of more subtle enforcement mechanisms by some companies has meant that some moderation decisions come with no means of appeal at all.

Last year, we joined several organizations and academics in creating the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of minimum standards that companies should implement to ensure that their users have access to due process and receive notification when their content is restricted, and to provide transparency to the public about what expression is being restricted and how.

In the current system of content moderation, these are necessary measures that every company must take. But they are just a start.  

No More Magical Thinking

We shouldn’t look to Silicon Valley, or anyone else, to be international speech police, for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable. As companies increasingly use artificial intelligence to flag or moderate content (another form of harm reduction, as it protects workers), we’re inevitably going to see more errors. And although the ability to appeal is an important measure of harm reduction, it’s not an adequate remedy.

Advocates, companies, policymakers, and users have a choice: try to prop up and reinforce a broken system, or remake it. If we choose the latter, which we should, here are some preliminary recommendations:

  • Censorship must be rare and well-justified, particularly by tech giants. At a minimum, that means (1) Before banning a category of speech, policymakers and companies must explain what makes that category so exceptional, and the rules to define its boundaries must be clear and predictable. Any restrictions on speech should be both necessary and proportionate. Emergency takedowns, such as those that followed the recent attack in New Zealand, must be well-defined and reserved for true emergencies. And (2) when content is flagged as violating community standards, absent exigent circumstances companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. But (3) smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That’s fine, as long as Internet users have a range of meaningful options with which to engage.
  • Consistency. Companies should align their policies with human rights norms. In a paper published last year, David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, recommends that companies adopt policies that allow users to “develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law.” We agree, and we’re joined in that opinion by a growing coalition of civil liberties and human rights organizations.
  • Tools. Not everyone will be happy with every type of content, so users should be provided with more individualized tools to have control over what they see. For example, rather than banning consensual adult nudity outright, a platform could allow users to turn on or off the option to see it in their settings. Users could also have the option to share their settings with their community to apply to their own feeds.
  • Evidence-based policymaking. Policymakers should tread carefully when operating without facts, and not fall victim to political pressure. For example, while we know that disinformation spreads rapidly on social media, many of the policies created by companies in the wake of pressure appear to have had little effect. Companies should work with researchers and experts to respond more appropriately to issues.
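
The “Tools” recommendation above can be made concrete with a small sketch: instead of a site-wide ban on a category like consensual adult nudity, each post carries labels and each user decides which labels to hide. Everything here (the `Post`, `UserPrefs`, and `build_feed` names, and the label vocabulary) is illustrative, not any platform’s actual API.

```python
# Illustrative sketch of user-controlled filtering: posts are labeled,
# and each user's own preferences decide what their feed hides.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    labels: set = field(default_factory=set)  # e.g. {"adult-nudity"}

@dataclass
class UserPrefs:
    hidden_labels: set = field(default_factory=set)

    def visible(self, post: Post) -> bool:
        # A post is shown unless it carries a label this user opted to hide.
        return not (post.labels & self.hidden_labels)

def build_feed(posts, prefs: UserPrefs):
    return [p for p in posts if prefs.visible(p)]

posts = [
    Post("Gallery opening tonight", {"adult-nudity"}),
    Post("New bike lanes downtown", set()),
]

# Two users, same posts, different feeds: one opts out of a category,
# the other leaves everything visible. The platform bans nothing.
strict = UserPrefs(hidden_labels={"adult-nudity"})
permissive = UserPrefs()

assert [p.text for p in build_feed(posts, strict)] == ["New bike lanes downtown"]
assert len(build_feed(posts, permissive)) == 2
```

The point of the design is that the filtering decision moves from the moderator to the reader; sharing a `UserPrefs` object with friends would implement the “share their settings with their community” idea in the same bullet.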

Recognizing that something needs to be done is easy. Looking to AI to help do that thing is also easy. Actually doing content moderation well is very, very difficult, and you should be suspicious of any claim to the contrary.

Republished from the EFF’s Deeplinks Blog.
