Eric Goldman's Techdirt Profile

Eric Goldman

About Eric Goldman

Posted on Techdirt - 23 June 2022 @ 10:47am

California Seems To Be Taking The Exact Wrong Lessons From Texas And Florida’s Social Media Censorship Laws

This post analyzes California AB 587, self-described as “Content Moderation Requirements for Internet Terms of Service.” I believe the bill will get a legislative hearing later this month.

A note about the draft I’m analyzing, posted here. It’s dated June 6, and it’s different from the version publicly posted on the legislature’s website (dated April 28). I’m not sure what the June 6 draft’s redlines compare to–maybe the bill as introduced? I’m also not sure if the June 6 draft will be the basis of the hearing, or if there will be more iterations between now and then. It’s exceptionally difficult for me to analyze bills that are changing rapidly in secret. When bill drafters secretly solicit feedback, every other constituency cannot follow along or share timely or helpful feedback. It’s especially ironic to see non-public activity for a bill that’s all about mandating transparency. ¯\_(ツ)_/¯

Who’s Covered by the Bill?

The bill applies to “social media platforms” that: “(A) Construct a public or semipublic profile within a bounded system created by the service. (B) Populate a list of other users with whom an individual shares a connection within the system. [and] (C) View and navigate a list of connections made by other individuals within the system.”

This definition of “social media” has been around for about a decade, and it’s awful. Critiques I made 8 years ago:

First, what is a “semi-public” profile, and how does it differ from a public or non-public profile? Is there even such a thing as a “semi-private” or “non-public” profile?…

Second, what does “a bounded system” mean?…The “bounded system” phrase sounds like a walled garden of some sort, but most walled gardens aren’t impervious. So what delimits the boundaries the statute refers to, and what does an “unbounded” system look like?

I also don’t understand what constitutes a “connection,” what a “list of connections” means, or what it means to “populate” the connection list. This definition of social media was never meant to be used as a statutory definition, and every word invites litigation.

Further, the legislature should–but surely has not–run this definition through a test suite to make sure it fits the legislature’s intent. In particular, which, if any, services offering user-generated content (UGC) functionality do NOT satisfy this definition? Though decades of litigation might ultimately answer the question, I expect the language covers essentially all UGC services.

[Note: based on a quick Lexis search, I saw similar statutory language in about 20 laws, but I did not see any caselaw interpreting the language because I believe those laws are largely unused.]

The bill then excludes some UGC services:

  • Companies with less than $100M of gross revenue in the prior calendar year. There are many obvious problems with this standard: the revenue is enterprise-wide (so bigger businesses with small UGC components will be covered if they don’t turn off the UGC functionality), there is no phase-in period, there is no nexus requirement for revenues derived from California, and there is no explanation of why $100M was selected instead of $50M, $500M, or some other number. Every legislator really ought to read this article about how to draft size metrics for Internet services.
  • Email service providers, “direct messaging” services, and “cloud storage or shared document or file collaboration.” All social media services are, in a sense, “cloud storage,” so what does this exclusion mean? ¯\_(ツ)_/¯
  • “A section for user-generated comments on a digital news internet website that otherwise exclusively hosts content published by” entities enumerated in the California Constitution, Article I(2)(b). Entities referenced in the Constitution: a “publisher, editor, reporter, or other person connected with or employed upon a newspaper, magazine, or other periodical publication, or by a press association or wire service” and “a radio or television news reporter or other person connected with or employed by a radio or television station.” I don’t know that any service can take advantage of this exclusion because every traditional publisher publishes content from freelancers and other non-employees, so the “exclusively hosts” requirement creates a null set. Also, this exclusion opts into the confusion about the statutory differences between traditional and new media. See some cases discussing that issue.
  • “Consumer reviews of products or services on an internet website that serves the exclusive purpose of facilitating online commerce.” Ha ha. Should we call this the “Amazon exclusion”? If so, I’m not sure they are getting their money’s worth. Does Amazon.com EXCLUSIVELY facilitate online commerce? 🤔  And if this exclusion doesn’t benefit Yelp and TripAdvisor–because they have reviews on things that don’t support e-commerce (like free-to-visit parks)–I can’t wait to see how the state explains why non-commercial consumer reviews need transparency while commercial ones do not.
  • “An internet-based subscription streaming service that is offered to consumers for the exclusive purpose of transmitting licensed media, including audio or video files, in a continuous flow from the internet-based service to the end user, and does not host user-generated content.” Should we call this the “Netflix exclusion”? I’d be grateful if someone could explain to me the differences between “licensed media” and “UGC.” 🤔

The Law’s Requirements

Publish the “TOS”

The bill requires social media platforms to post their terms of service (TOS), translated into every language they offer product features in. It defines “TOS” as:

a policy or set of policies adopted by a social media company that specifies, at least, the user behavior and activities that are permitted on the internet-based service owned or operated by the social media company, and the user behavior and activities that may subject the user or an item of content to being actioned. This may include, but is not limited to, a terms of service document or agreement, rules or content moderation guidelines, community guidelines, acceptable uses, and other policies and established practices that outline these policies.

To start, I need to address the ambiguity of what constitutes the “TOS,” because it’s the most dangerous and censorial trap of the bill. Every service publishes public-facing “editorial rules,” but the published versions can never capture ALL of the service’s editorial rules. The unpublished remainder includes: private interpretations that are withheld to protect against gaming, private interpretations that are too detailed for public consumption, private interpretations that governments ask or demand the services keep from the public, private interpretations that are made on the fly in response to exigencies, one-off exceptions, and more.

According to the bill’s definition, failing to publish all of these non-public “policies and practices” before taking action based on them could mean noncompliance with the bill’s requirements. Given the inevitability of such undisclosed editorial policies, it seems like every service always will be noncompliant.

Furthermore, to the extent the bill inhibits services from making an editorial decision using a policy/practice that hasn’t been pre-announced, the bill would control and skew the services’ editorial decisions. This pre-announcement requirement would have the same effect as Florida’s restriction barring services from updating their TOSes more than once every 30 days, a restriction the 11th Circuit held unconstitutional.

Finally, imagine trying to impose a similar editorial policy disclosure requirement on a traditional publisher like a newspaper or book publisher. They currently aren’t required to disclose ANY editorial policies, let alone ALL of them, and I believe any such effort to require such disclosures would obviously be struck down as an unconstitutional intrusion into the freedom of speech and press.

In addition to requiring the TOS’s publication, the bill says the TOS must include (1) a way to contact the platform to ask questions about the TOS, (2) descriptions of how users can complain about content and “the social media company’s commitments on response and resolution time” (drafting suggestion for regulated services: “We do not promise to respond ever”), and (3) “A list of potential actions the social media company may take against an item of content or a user, including, but not limited to, removal, demonetization, deprioritization, or banning.” I identified 3 dozen potential actions in my Content Moderation Remedies article, and I’m sure more exist or will be developed, so the remedies list should be long, and I’m not sure how a platform could pre-announce the full universe of possible remedies.

Information Disclosures to the CA AG

Once a quarter, the bill would require platforms to deliver to the CA AG the current TOS, a “complete and detailed description” of changes to the TOS in the prior quarter, and a statement of whether the TOS defines any of the following five terms and what the definitions are: “Hate speech or racism,” “Extremism or radicalization,” “Disinformation or misinformation,” “Harassment,” and “Foreign political interference.” [If the definitions are from the TOS, can’t the AG just read that?]. I’ll call the enumerated five content categories the “Targeted Constitutionally Protected Content.”

In addition, the platforms would need to provide a “detailed description of content moderation practices used by the social media.” This seems to contemplate more disclosures than just the “TOS,” but that definition seemingly already captured all of the service’s content moderation rules. I assume the bill wants to know how the service’s editorial policies are operationalized, but it doesn’t make that clear. Plus, like Texas’ open-ended disclosure requirements,  the unbounded disclosure obligation ensures litigation over (unavoidable) omissions.

Beyond the open-ended requirement, the bill enumerates an overwhelmingly complex list of required disclosures, which are far more invasive and burdensome than Texas’ plenty-burdensome demands:

  • “Any existing policies intended to address” the Targeted Constitutionally Protected Content. Wasn’t this already addressed in the “TOS” definition?
  • “How automated content moderation systems enforce terms of service of the social media platform and when these systems involve human review.” As discussed more below, this is a fine example of a disclosure where any investigation into its accuracy would be overly invasive.
  • “How the social media company responds to user reports of violations of the terms of service.” Does this mean respond to the user or respond to notices through internal processes? At large services, the latter involves a complicated and constantly changing flowchart with lots of exceptions, so this would become another disclosure trap.
  • “How the social media company would remove individual pieces of content, users, or groups that violate the terms of service, or take broader action against individual users or against groups of users that violate the terms of service.” What does “broader action” mean? Does that refer to account-level interventions instead of item-level interventions? As my Content Moderation Remedies paper showed, this topic is way more complicated than a binary remove/leave up dichotomy.
  • “The languages in which the social media platform does not make terms of service available, but does offer product features, including, but not limited to, menus and prompts.” Given the earlier requirement to translate the TOS into these languages, this disclosure would be an admission of legal violations, no?
  • With respect to the Targeted Constitutionally Protected Content, the following data:
    • “The total number of flagged items of content.”
    • Number of items “actioned.”
    • “The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.” I assume this means account-level actions based on the Targeted Constitutionally Protected Content?
    • Number of items “removed, demonetized, or deprioritized.” Is this just a subset of the number reported in the second bullet above?
    • “The number of times actioned items of content were viewed by users.”
    • “The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.” How is the second half of this requirement different from the prior bullet?
    • “The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action.”
    • All of the data disclosed in response to the prior bullet points must be broken down further by:
      • Each of the five categories of the Targeted Constitutionally Protected Content.
      • The type of content (posts vs. profile pages, etc.)
      • The type of media (video vs. text, etc.)
      • How the items were flagged (employees/contractors, “AI software,” “community moderators,” “civil society partners” and “users”–third party non-users aren’t enumerated but they are another obvious source of “flags”)
      • “How the content was actioned” (same list of entities as the prior bullet)

All told, there are 7 categories of disclosures, and the bill indicates that the five breakdown dimensions have, respectively, 5 options, at least 5 options, at least 3 options, at least 5 options, and at least 5 options. So I believe each service’s reports must include no fewer than 161 different categories of disclosures (7×5 + 7×5 + 7×3 + 7×5 + 7×5).
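To make the arithmetic concrete, here is a minimal sketch of the reporting matrix in Python. The dimension names are my shorthand for the bill’s enumerated breakdowns, not statutory text, and the counts use the bill’s stated minimums:

    # Minimum disclosure cells implied by AB 587's reporting scheme.
    # Names are shorthand; option counts are the bill's stated minimums.
    data_points = 7  # flagged, actioned, user/group actions, removed/demonetized/
                     # deprioritized, views, shares + pre-action views, appeals

    breakdown_options = {
        "targeted_content_category": 5,  # hate speech/racism, extremism,
                                         # dis/misinformation, harassment,
                                         # foreign political interference
        "content_type": 5,               # e.g., posts, profile pages (at least 5)
        "media_type": 3,                 # e.g., text, image, video (at least 3)
        "flag_source": 5,                # employees, AI software, community
                                         # moderators, civil society partners, users
        "action_source": 5,              # same list as flag_source
    }

    # Each data point must be reported under every option of every dimension.
    total_cells = sum(data_points * n for n in breakdown_options.values())
    print(total_cells)  # 161

Every added breakdown dimension or option multiplies the reporting burden, which is why the “at least” qualifiers matter: 161 is the floor, not the ceiling.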

Who will benefit from these disclosures? At minimum, unlike the purported justification cited by the 11th Circuit for Florida’s disclosure requirements, the bill’s required statistics cannot help consumers make better marketplace choices. By definition, each service can define each category of Targeted Constitutionally Protected Content differently, so consumers cannot compare the reported numbers across services. Furthermore, because services can change how they define each content category from time to time, it won’t even be possible to compare a service’s new numbers against prior numbers to determine if the service is getting “better” or “worse” at managing the Targeted Constitutionally Protected Content. Services could even change their definitions so they don’t have to report anything. For example, a service could create an omnibus category of “incivil content/activity” that includes some or all of the Targeted Constitutionally Protected Content categories, in which case they wouldn’t have to disclose anything. (Note also that this countermove would represent a change in the service’s editorial practices impelled by the bill, which exacerbates the constitutional problem discussed below). So who is the audience for the statistics, and what, exactly, will they learn from the required disclosures? Without clear and persuasive answers to these questions, it looks like the state is demanding the info purely as a raw exercise of power, not to benefit any constituency.

Remedies

Violations can trigger penalties of up to $15k/violation/day, and the penalties should at minimum be “sufficient to induce compliance with this act” but should be mitigated if the service “made a reasonable, good faith attempt to comply.” The AG can enforce the law, but so can county counsel and city DAs in some circumstances. The bill provides those non-AG enforcers with some financial incentives to chase the penalty money as a bounty.

An earlier draft of the bill expressly authorized private rights of action via B&P 17200. Fortunately, that provision got struck…but, unfortunately, in its place there’s a provision saying that this bill is cumulative with any other law. As a result, I think the 17200 PRA is still available. If so, this bill will be a perpetual litigation machine. I would expect every lawsuit against a regulated service to add AB 587 claims for alleged omissions, misrepresentations, etc. Like the CCPA/CPRA, the bill should clearly eliminate all PRAs–unless the legislature wants Californians suing each other into oblivion.

Some Structural Problems with the Bill

Although the prior section identified some obvious drafting errors, fixing those errors won’t make this a good bill. Here are some structural problems with the bill that can’t be readily fixed.

The overall problem with mandatory editorial transparency. I just wrote a whole paper explaining why mandatory editorial transparency laws like AB 587 are categorically unconstitutional, so you should start with that if you haven’t already read it. To summarize, the disclosure requirements about editorial policies and practices functionally control speech by inducing publishers to make editorial decisions that will placate regulators rather than best serve the publisher’s audience. Furthermore, any investigation of the mandated disclosures puts the government in the position of supervising the editorial process, an “unhealthy entanglement.” I already mentioned one such example where regulators try to validate if the service properly described when it does manual vs. automated content moderation. Such an investigation would necessarily scrutinize and second-guess every aspect of the service’s editorial function.

Because of these inevitable speech restrictions, I believe strict scrutiny should apply to AB 587 without relying on the confused caselaw involving compelled commercial disclosures. In other words, I don’t think Zauderer–a recent darling of the pro-censorship crowd–is the right test (I will have more to say on this topic). Further, Zauderer only applies when the disclosures are “uncontroversial” and “purely factual,” but the AB 587 disclosures are neither. The Targeted Constitutionally Protected Content categories all involve highly political topics, not the pricing terms at issue in Zauderer; and the disclosures require substantial and highly debatable exercises of judgment to make the classifications, so they are not “purely factual.” And even if Zauderer does apply, I think the disclosure requirements impose an undue burden. For example, if 161 different prophylactic “just-in-case” disclosures don’t constitute an undue burden, I don’t know what would.

The TOS definition problem. As I mentioned, what constitutes part of the “TOS” creates a litigation trap easily exploited by plaintiffs. Furthermore, if it requires the publication of policies and practices that justifiably should not be published, the law intrudes into editorial processes.

The favoritism shown to the Targeted Constitutionally Protected Content. The law “privileges” the five categories in the Targeted Constitutionally Protected Content for heightened attention by services, but there are many other categories of lawful-but-awful content that are not given equal treatment. Why?

This distinction between types of lawful-but-awful speech sends the obvious message to services that they need to pay closer attention to these content categories over the others. This implicit message to reprioritize content categories distorts the services’ editorial prerogative, and if services get the message that they should manage the disclosed numbers down, the bill reduces constitutionally protected speech. However, services won’t know if they should be managing the numbers down. The AG is a Democrat, so he’s likely to prefer less lawful-but-awful content. However, many county prosecutors in red counties (yes, California has them) may prefer less content moderation of constitutionally protected speech and would investigate if they see the numbers trending down. Trapped between these competing partisan dynamics, services will be paralyzed in their editorial decision-making. This reiterates why the bill doesn’t satisfy Zauderer’s “uncontroversial” prong.

The problem classifying the Targeted Constitutionally Protected Content. Determining what fits into each category of the Targeted Constitutionally Protected Content is an editorial judgment that always will be subject to substantial debate. Consider, for example, how often the Oversight Board has reversed Facebook on similar topics. The plaintiffs can always disagree with the service’s classifications, and that puts them in the role of second-guessing the service’s editorial decisions.

Social media exceptionalism. As Benkler et al’s book Network Propaganda showed, Fox News injects misinformation into the conversation, which then propagates to social media. So why does the bill target social media and not Fox News? More generally, the bill doesn’t explain why social media needs this intervention compared to traditional publishers or even other types of online publishers (say, Breitbart?). Or is the state’s position that it could impose equally invasive transparency obligations on the editorial decisions of other publishers, like newspapers and book publishers?

The favoritism shown to the excluded services. I think the state will have a difficult time justifying why some UGC services get a free pass from the requirements. It sure looks arbitrary.

The Dormant Commerce Clause. The bill does not restrict its reach to California. This creates several potential DCC problems:

  • The bill reaches extraterritorially.
    • It requires disclosures involving activity outside of California, including countries where the Targeted Constitutionally Protected Content is illegal. This makes it impossible to properly contextualize the numbers because the legislative restrictions may vary by country. It also leaves the services vulnerable to enforcement actions that their numbers are too high/low based on dynamics the services cannot control.
    • If the bill reaches services not located in California, then it is regulating activity between a non-California service and non-California residents.
  • The bill sets up potential conflicts with other states’ laws. For example, a recent NY law defines “hateful conduct” and provides specific requirements for dealing with it. This may or may not coincide with California’s requirements.
  • The cumulative effect of different states’ disclosure requirements will surely become overly burdensome. For example, Texas’ disclosure requirements are structured differently than California’s. A service would have to build different reporting schemes to comply with the different laws. Multiply this times many other states, and the reporting burden becomes overwhelming.

Conclusion

Stepping back from the details, the bill can be roughly divided into two components: (1) the TOS publication and delivery component, and (2) the operational disclosures and statistics component. Abstracting the bill at this level highlights the bill’s pure cynicism.

The TOS publication and delivery component is obviously pointless. Any regulated platform already posts its TOS and likely addresses the specified topics, at least in some level of generality (and an obvious countermove to this bill will be for services to make their public-facing disclosures more general and less specific than they currently are). Consumers can already read those onsite TOSes if they care; and the AG’s office can already access those TOSes any time it wants. (Heck, the AG can even set up bots to download copies quarterly, or even more frequently, and I wonder if the AG’s office has ever used the Wayback Machine?). So if this provision isn’t really generating any new disclosures to consumers, it’s just creating technical traps that platforms might trip over.
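To underscore how little this provision adds, here is a minimal sketch of the kind of bot the AG’s office could run today, with no statute required. The URL list and file layout are hypothetical illustrations, not anything specified by the bill:

    # Minimal sketch: periodically archive platforms' public TOS pages.
    # The URL list and storage layout are hypothetical.
    import datetime
    import pathlib
    import urllib.request

    TOS_URLS = {
        "exampleplatform": "https://www.example.com/terms",  # placeholder URL
    }

    def archive_tos(archive_dir: str = "tos_archive") -> None:
        stamp = datetime.date.today().isoformat()
        root = pathlib.Path(archive_dir)
        root.mkdir(exist_ok=True)
        for name, url in TOS_URLS.items():
            with urllib.request.urlopen(url) as resp:  # fetch the public page
                (root / f"{name}-{stamp}.html").write_bytes(resp.read())

    if __name__ == "__main__":
        archive_tos()  # schedule via cron quarterly, or as often as desired

A few dozen lines of standard-library code replicate the entire TOS-delivery apparatus of the bill.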

The operational disclosures and statistics component would likely create new public data, but as explained above, it’s data that is worthless to consumers. Like the TOS publication and delivery provision, it feels more like a trap for technical enforcements than a provision that benefits California residents. It’s also almost certainly unconstitutional. The emphasis on Targeted Constitutionally Protected Content categories seems designed to change the editorial decision-making of the regulated services, which is a flat-out form of censorship; and even if Zauderer is the applicable test, it seems likely to fail that test as well.

So if this provision gets struck and the TOS publication and delivery provision doesn’t do anything helpful, it leaves the obvious question: why is the California legislature working on this and not the many other social problems in our state? The answer to that question is surely dispiriting to every California resident.

Reposted, with permission, from Eric Goldman’s Technology & Marketing Law Blog.

Posted on Techdirt - 29 September 2021 @ 06:23am

The SHOP SAFE Act Is A Terrible Bill That Will Eliminate Online Marketplaces

We’ve already posted Mike’s post about the problems with the SHOP SAFE Act, which is getting marked up today, as well as Cathy’s post lamenting the lack of Congressional concern for what they’re damaging, but Prof. Eric Goldman wrote such a thorough and complete breakdown of the problems with the bill that we decided it was worth posting too.

[Note: this blog post covers Rep. Nadler’s manager’s amendment for the SHOP SAFE Act, which I think will be the basis of a committee markup hearing today. If Congress were well-functioning, draft bills going into markup would be circulated a reasonable time before the hearing, so that we can properly analyze them on a non-rush basis, and clearly marked as the discussion version so that we’re not confused by which version is actually the current text.]

The SHOP SAFE Act seeks to curb harmful counterfeit items sold through online marketplaces. That’s a laudable goal that I expect everyone supports. However, this bill is itself a giant counterfeit. It claims to focus on “counterfeits” that could harm consumer “health and safety,” but those are both lies designed to make the bill seem narrower and more balanced than it actually is.

Instead of protecting consumers, this bill gives trademark owners absolute control over online marketplaces by overturning Tiffany v. eBay. It creates a new statutory species of contributory trademark liability that applies to online marketplaces (defined more broadly than you think) selling third-party items that bear counterfeit marks and implicate “health and safety” (defined more broadly than you think), unless the online marketplace operator does the impossible and successfully navigates over a dozen onerous and expensive compliance obligations.

Because the bill makes it impossible for online marketplaces to avoid contributory trademark liability, this bill will drive most or all online marketplaces out of the industry. (Another possibility is that Amazon will be the only player able to comply with the law, in which case the law entrenches an insurmountable competitive moat around Amazon’s marketplace). If you want online marketplaces gone, you might view this as a good outcome. For the rest of us, the SHOP SAFE Act will reduce our marketplace choices, and increase our costs, during a pandemic shutdown when online commerce has become even more crucial. In other words, the law will produce outcomes that are the direct opposite of what we want from Congress.

In addition to destroying online marketplaces, this bill provides the template for how rightsowners want to reform the DMCA online safe harbor to make it functionally impossible to qualify for as well. In this respect, the SHOP SAFE Act portends how Congress will accelerate the end of the Web 2.0 era of user-generated content.


[The rest of this post is 4k+ words explaining what the bill does and why it sucks. You might stop reading here if you don’t want the gory/nerdy details.]

Who’s Covered by the Bill

The bill defines an “electronic commerce platform” as “any electronically accessed platform that includes publicly interactive features that allow for arranging the sale or purchase of goods, or that enables a person other than an operator of the platform to sell or offer to sell physical goods to consumers located in the United States.”

Clearly, the second part of that definition targets Amazon and other major marketplaces, such as eBay, Walmart Marketplace, and Etsy. I presume it also includes print-on-demand vendors that enable users to upload images, such as CafePress, Zazzle, and Redbubble (unless those vendors are considered to be retailers, not online marketplaces).

The first part of the definition includes services with “publicly interactive features that allow for arranging the sale or purchase of goods.” This is a bizarre way to describe any online marketplace, and it covers something other than enabling third-party sellers (that’s the second part of the definition), so what services does this describe? Read literally, all advertising “allow[s] for arranging the sale or purchase of goods,” so this law potentially obligates every ad-supported publisher to undertake the content moderation obligations the bill imposes on online marketplaces. That doesn’t make sense, because the bill uses the undefined term “listing” 11 times, and display advertising isn’t normally considered to be a listing. Still, this wording is unusual and broad — and you better believe trademark owners like its breadth. If the bill wasn’t meant to regulate all ads, the bill drafters should make that clear.

Like most Internet regulations nowadays, the bill distinguishes entities based on size. See my article with Jess Miers on how legislatures should do that properly. The bill applies to services that have “sales on the platform in the previous calendar year of not less than $500,000.” Some problems with this distinction:

  • The bill doesn’t define “platform,” so it’s unclear what revenues count. In Amazon’s case, is it only revenues from the marketplace or does it also include the revenues from Amazon’s retailing function? If the latter, then the definition will pick up smallish online retailers that have small marketplace components.
  • The bill also doesn’t distinguish between gross and net revenue. So, for example, assume a site takes a 10% commission on sales. If a service has $500k in merchandise sales (gross revenue), but only keeps $50k in commissions (net revenue), is it covered by the law or not? I think the bill covers gross revenue, which means the bill reaches companies with small net revenues. (See the sketch after this list.)
  • As usual, the bill doesn’t provide a phase-in period. A service may not know its revenues until some time after the calendar year closed, but it would be obligated to comply with the law from the beginning of the calendar year. As usual, then, this forces services below the revenue threshold to comply anticipatorily in case they clear the threshold. How hard is it for bills to include a phase-in period?
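Here is a tiny sketch of that gross-versus-net ambiguity, using the hypothetical 10% commission marketplace from the bullet above (the variable names and interpretation labels are mine, not the bill’s):

    # Gross vs. net ambiguity in the $500k threshold (hypothetical numbers).
    merchandise_sales = 500_000  # gross merchandise volume on the platform
    commission_rate = 0.10
    platform_revenue = merchandise_sales * commission_rate  # $50k kept as commissions

    covered_if_gross = merchandise_sales >= 500_000  # True
    covered_if_net = platform_revenue >= 500_000     # False
    print(covered_if_gross, covered_if_net)  # same facts, opposite coverage results

The same service is either covered or exempt depending on an interpretive choice the bill never makes.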

I’d fret more about the $500k threshold, but it’s likely to be irrelevant anyway. The bill also applies to smaller services once they receive 10 NOCI notices over their lifetimes from all sources. (Unlike the other services, these services get a six-month phase-in period).

To qualify as a NOCI, the notice must (1) refer to the SHOP SAFE Act, (2) “include an explicit notification of the 10-notice limit and the requirement of the platform to publish” the NOCI disclosures below (I have no idea what this element means), and (3) “identify a listing on the platform that reasonably could be determined to have used a counterfeit mark in connection with the sale, offering for sale, distribution, or advertising of goods that implicate health and safety.” (So, a NOCI counts against the 10-notice threshold if it “reasonably could be determined” that the listing was counterfeit, even if the NOCI is actually wrong.)

A month after getting its first NOCI, the service must publicly post an attestation that it has less than $500k in revenue, along with a running tally of the number of NOCIs received… I guess for shits and giggles, so that trademark owners can compete to be the one to put the service over the 10-NOCI threshold? I mean, even tiny services will quickly accrue 10 NOCIs. Indeed, I imagine rightsowners will coordinate their NOCIs to ensure that small services clear this threshold and are obligated to comply with the law. Thus, the 10-lifetime-NOCI threshold is a ruse to mislead people into thinking that smaller services aren’t governed by the law, when of course they will be.
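To illustrate how quickly the lifetime tally converts a “small” service into a regulated one, here is a minimal sketch of the NOCI counting logic as I read the bill. The data model and field names are my shorthand; the bill specifies no implementation:

    # Sketch of the NOCI lifetime-tally logic as I read the bill.
    from dataclasses import dataclass

    NOCI_THRESHOLD = 10  # lifetime notices, counted across all sources

    @dataclass
    class SmallPlatform:
        nocis_received: int = 0
        must_post_tally: bool = False  # kicks in a month after the first NOCI
        covered_by_act: bool = False   # kicks in at the 10th NOCI

        def receive_noci(self) -> None:
            self.nocis_received += 1
            if self.nocis_received == 1:
                self.must_post_tally = True  # plus the sub-$500k attestation
            if self.nocis_received >= NOCI_THRESHOLD:
                self.covered_by_act = True   # six-month phase-in then applies

    # Ten notices, easily coordinated by rightsowners, flip the switch:
    platform = SmallPlatform()
    for _ in range(10):
        platform.receive_noci()
    assert platform.covered_by_act

Because the tally never resets, coverage is a one-way ratchet: every small service is just ten form letters away from full compliance obligations.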

What’s Regulated?

The law applies to counterfeit “goods that implicate health and safety,” defined as “goods the use of which can lead to illness, disease, injury, serious adverse event, allergic reaction, or death if produced without compliance with all applicable Federal, State, and local health and safety regulations and industry-designated testing, safety, quality, certification, manufacturing, packaging, and labeling standards.” I mean, pretty much every physical product meets this definition, right? Virtually any poorly-designed or nonconforming physical item has the capacity to cause personal injury. For example, electronic items that don’t comply with industry standards can cause physical harm from electrical charges, which means every electronic item is categorically within the bill’s scope even if the allegedly counterfeited item actually complies with industry standards. Now, replicate that analysis for other goods and tell me which categories of goods lack the capacity to cause harm. Once again, the “health and safety” framing is another deceptive ruse because the bill functionally applies to all goods, not just especially risky goods.

Overturning Tiffany v. eBay

In 2010, the Second Circuit issued a watershed decision about secondary trademark infringement. Essentially, the court held that eBay wasn’t liable for counterfeit sales of Tiffany items because eBay honored takedown notices and Tiffany’s claims sought to hold eBay accountable for generalized knowledge. That ruling has produced a kind of détente in the online secondary trademark infringement field, where we just don’t see broad counterfeiting lawsuits against online marketplaces any more.

The SHOP SAFE Act ends that détente. First, it creates a new statutory contributory trademark infringement claim for selling the regulated items. Second, the bill says that the new contributory claim doesn’t preempt other plaintiff claims, so trademark owners will still bring the standard statutory direct trademark infringement claim and common law contributory trademark claims (and dilution, false designation of origin, etc.). Third, online marketplaces nominally can try to “earn” a safe harbor from the new statutory contributory liability claim (but not from the other legal claims) by running an onerous gauntlet of responsibilities. Those requirements will impose huge compliance costs, but those investments won’t prevent online marketplaces from being dragged into extraordinarily expensive and high-stakes litigation over eligibility for this defense. Fourth, the law imposes a proactive screening obligation, something that Tiffany v. eBay rejected. Fifth, unlike Tiffany v. eBay, generalized knowledge can create liability, and takedown notices aren’t required as a prerequisite to liability. Sixth, in litigation over direct trademark infringement and common law contributory trademark infringement claims, trademark owners can cite compliance/non-compliance with the defense factors against the online marketplace, putting online marketplaces in a worse legal position than they currently occupy.

All told, the SHOP SAFE Act will functionally repeal the Tiffany v. eBay standard that has fostered the growth of online marketplaces for the last decade-plus, and usher in a new era of online shopping that will likely exclude online marketplaces entirely.

The “Safe Harbor” Preconditions

To earn protection from the newly created contributory trademark infringement doctrine, online marketplaces must perfectly implement all of the following 13 requirements:

1. Determine, and periodically confirm, that third-party sellers have a registered US agent for service or a designated “verified” US address for service. (Just wait until other countries require the equivalent from US-based online sellers on foreign marketplaces. A new frontier for a trade war.)

2. Verify the third-party seller’s identity, principal place of business, and contact information through “reliable documentation, including to the extent possible some form of government-issued identification.” (What is “reliable” documentation, and how much risk will online marketplaces be willing to take?)

3. Require the third-party seller to take reasonable steps to verify the authenticity of its goods and attest to those steps. This requirement doesn’t apply to sellers who sell less than $5k/yr and list no more than five of the same items per year. (Is the online marketplace liable if the seller doesn’t actually take reasonable steps? How can the online marketplace “require” independent sellers to do this?)

4. Impose TOS terms that the third-party seller (1) won’t use counterfeit marks, (2) consents to US jurisdiction, and (3) designates a US agent for service or has a verified US address for service. (Can trademark owners take advantage of the US jurisdiction consent between the online marketplace and its third-party sellers? Normally trademark owners aren’t third-party beneficiaries of that contract. Also, that consent isn’t limited to jurisdiction over counterfeit claims — it’s over everything the TOS might govern.)

5. Conspicuously display on the platform:

  • the third-party seller’s verified principal place of business,
  • contact information,
  • identity of the third-party seller, and
  • the country from which the goods were originally shipped from the third-party seller

But the online marketplace isn’t required to display “the personal identity of an individual, a residential street address, or personal contact information of an individual, and in such cases shall instead provide alternative, verified means of contacting the third-party seller.”

6. Conspicuously display “in each listing the country of origin and manufacture of the goods as identified by the third-party seller, unless such information was not reasonably available to the third-party seller and the third-party seller has identified to the platform the steps it undertook to identify the country of origin and manufacture of the goods and the reasons it was unable to identify the same.” This requirement doesn’t apply to sellers who sell less than $5k/yr and list no more than five of the same items per year.

7. Require third-party sellers to “use images that accurately depict the goods sold, offered for sale, distributed, or advertised on the platform.” (Does this create an affirmative obligation to include images? Though it’s rare, I believe some marketplace sellers currently sell items without including any photo. Also, product shots have been a constant source of copyright litigation. The manufacturer can sue the seller for copying its shots; the manufacturer can sue for false advertising if non-official shots aren’t “accurate”; and freelancers love to sue over product shots they took and ones they think are too similar to the ones they took.)

8. Undertake “reasonable proactive measures for screening goods before displaying the goods to the public to prevent the use by any third-party seller of a counterfeit mark in connection with the sale, offering for sale, distribution, or advertising of goods on the platform. The determination of whether proactive measures are reasonable shall consider the size and resources of a platform, the available technological and non-technological solutions at the time of screening, the information provided by the registrant to the platform, and any other factor considered relevant by a court.” (This is the most coveted payload for trademark owners. Every rightsowner wants UGC services to engage in proactive screening. The screening won’t be limited to harmful counterfeit goods, and consider how courts will punish online marketplaces for undertaking this proactive screening in their analysis of direct and contributory trademark infringement.)

9. Provide “reasonably accessible electronic means by which a registrant and consumer can notify the platform of suspected use of a counterfeit mark.” (What are the odds that the consumer notifications will be made in good faith? Consider, in particular, how a dissatisfied buyer could weaponize this provision for reasons having nothing to do with counterfeiting. Note also how buyer complaints of counterfeiting, when not accurate — and buyers won’t necessarily know — could create scienter on the online marketplace’s part, and the countermoves by the marketplace could work to the detriment of the marketplace, the seller, AND the manufacturer by reducing their online marketplace sales.)

10. Implement “a program to expeditiously disable or remove from the platform any listing for which a platform has reasonable awareness of use of a counterfeit mark in connection with the sale, offering for sale, distribution, or advertising of goods.” The online marketplace’s scienter may be inferred from:

  • information regarding the use of a counterfeit mark on the platform generally,
  • general information about the third-party seller,
  • identifying characteristics of a particular listing, or
  • other circumstances as appropriate.

(This differs from the DMCA online safe harbor in many ways. The most obvious is that online marketplaces can be liable for the new statutory contributory trademark claim even if trademark owners never send them takedown notices. Among other things, this factor also emboldens trademark owners to send notices like “there are counterfeits on your site — find and remove them” without identifying any specific infringing listing. It seems those generalized notices would confer scienter sufficient to impose contributory trademark infringement. This, of course, directly rejects the Tiffany v. eBay precedent, which said such generalized knowledge wasn’t enough.)

An online marketplace can restore a listing “if, after an investigation, the platform reasonably determines that a counterfeit mark was not used in the listing.” (How many services will want to do the investigation, and how confident will the service be that the trademark owner will agree that they “reasonably” determined the listing wasn’t counterfeit? In practice, once a listing is down, it ain’t going back up.)

11. Implement “a publicly available, written policy that requires termination of a third-party seller that reasonably has been determined to have engaged in repeated use of a counterfeit mark.” (Note how this combines several parts of the DMCA online safe harbor, including the obligation to adopt a repeat infringer policy, to publish the repeat infringer policy, and to reasonably implement the repeat infringer policy.)

Apparently online marketplaces are free to create their own repeat termination policy, but the bill says “Use of a counterfeit mark by a third-party seller in 3 separate listings within 1 year typically shall be considered repeated use.” (This sidesteps the obvious question of how services “know” that a seller used the counterfeit mark. Remember, in obligation #10, online marketplaces must terminate listings when the service has a “reasonable awareness,” which isn’t conclusive proof that counterfeiting actually took place. So does each removal based on that lowered scienter count as one of the three strikes?)

Online marketplaces can reinstate terminated sellers in some circumstances, none of which have any realistic chance of happening.

12. Take reasonable measures to ensure terminated sellers don’t reregister on the service. (Another item coveted by rightsowners: a permanent staydown.)

13. Provide “a verified basis to contact a third-party seller upon request by a registrant that has a bona fide belief that the seller has used a counterfeit mark.” (I didn’t understand this provision, because trademark owners should already have, from obligation #5, all of the information they need to blast counterfeiters.)

Whew! Could trademark owners ask for anything more? These obligations are pretty much their dream wishlist.

Liability for Bogus NOCIs

The bill creates a new cause of action for bogus takedown notices sent to online marketplaces. I’m going to dig into this cause of action, but no need to master the details: Congress has learned absolutely nothing from the failure of 17 USC 512(f), so there’s no possible way for any plaintiff to benefit from this provision.

The cause of action: “Any person who knowingly makes any material misrepresentation in a notice to an electronic commerce platform that a counterfeit mark was used in a listing by a third party seller for goods that implicate health and safety shall be liable in a civil action for damages by the third-party seller that is injured by such misrepresentation, as the result of the electronic commerce platform relying upon such misrepresentation to remove or disable access to the listing, including temporary removal or disablement.” If the third-party seller declines to sue the trademark owner, the online marketplace can sue (with the third-party seller’s consent) if the trademark owner sent 10+ bogus notices. The bill provides statutory damages that range between $2,500-$75,000 per notice.

That sounds swell, but it’s useless for two reasons.

First, the bill doesn’t require trademark owners to send takedown notices in the first place. Trademark owners can sue online marketplaces for contributory trademark infringement without ever sending a takedown notice. So if trademark owners face potential liability for sending bogus takedown notices, why send them at all? Or trademark owners will send very generalized notices that don’t trigger liability for them but will trigger liability for the online marketplace.

Second, and more importantly, the cause of action requires a scienter that plaintiffs can’t prove. How can a third-party seller or online marketplace show the trademark owner knowingly made a material misrepresentation in its takedown notices? They can’t, unless they find smoking-gun evidence in discovery, and their complaints won’t survive a motion to dismiss, so they will never get to discovery. So there’s no way to win.

The “knowingly makes any material misrepresentation” standard is virtually identical to the 512(f) standard (“knowingly materially misrepresents”), so I expect courts will interpret the scienter standards the same. The Ninth Circuit killed 512(f) claims when it concluded in the Rossi case that the copyright owner’s subjective belief of infringement was good enough to defeat liability. As a result, over the past 20+ years, there has been only a small handful of 512(f) cases that have led to damages, and those few mostly involve default judgments. If trademark owners similarly can defend against this claim based on their subjective belief that counterfeiting is taking place, plaintiffs cannot win.

This provision is yet another ruse. It’s designed to make people think there’s a disincentive against trademark owner overclaims; but anyone who knows the 512(f) caselaw knows that this cause of action is completely worthless and a waste of everyone’s time.


Selected Problems with the Bill

What is “Counterfeiting”? The bill defines “counterfeit mark” as “a counterfeit of a mark” (I can’t make this up). But there’s actually a lot of confusion about what constitutes counterfeiting. See, e.g., my post about the trademark enforcements involving the “EMOJI” word mark, where they take the position that a marketplace item using the term “emoji” in the product name or description “counterfeits” their mark (seriously, look at the example from their exhibit and tell them that’s not bogus). A similar issue arises with print-on-demand services, where trademark owners take the position that any variation of their mark being manufactured onto a good constitutes counterfeiting, even if it’s parodic or an obvious joke. Thus, the bill’s grammar restricting the “use of counterfeit marks” potentially covers a much wider range of activity than classic piratical counterfeiting. Trademark owners will weaponize that ambiguity.

Lack of State Preemption. The Lanham Act doesn’t preempt state trademark laws, so this law isn’t likely to preempt any state law equivalents. It also would leave in place laws like the Arkansas Online Marketplace Consumer Inform Act, which has requirements that overlap with, but differ from, the SHOP SAFE Act’s. That overlap jacks up compliance costs and risks even more. While the SHOP SAFE Act is terrible and should never pass, it is even more terrible without a preemption provision.

Country of Origin Problems. The mandatory reporting of products’ country of origin is a liability trap. The bill excludes the smallest sellers from making this disclosure, but plenty of small-scale sellers will be obligated nonetheless, and they (and even bigger players) are sure to botch this because the law is confusing and the information won’t always be available to resellers. Any error on country-of-origin disclosures sets up the third-party sellers for false advertising claims. (Per Malwarebytes, the online marketplace should qualify for Section 230 protection for the Lanham Act false advertising claims). This gives trademark owners a second way of targeting third-party sellers: even if those sellers aren’t engaging in counterfeiting or any trademark infringement at all, country-of-origin false advertising claims can still be weaponized to drive them out of the marketplace.

Repudiation of the 512 Deal. The DMCA online safe harbor struck a grand bargain: online copyright enforcement would be a shared responsibility. Copyright owners would identify infringing items, and service providers would then remove those items. There has never been a trademark equivalent of the DMCA, but the Tiffany v. eBay case has de facto created a similar balance. Unsurprisingly, copyright owners hate the DMCA’s shared responsibility, and they have tried to undermine that deal through lawfare in courts. Trademark owners similarly want a different deal.

This bill, as Congress’ first trademark complement to the DMCA, emphatically repudiates the DMCA deal. It gives trademark owners everything they could possibly want: turning online marketplaces into their trademark enforcement deputies, getting them to proactively screen for infringing items, making them wipe out listings without having to send listing-by-listing notices, upfront disclosure of the information needed to sue the sellers (rather than going through the 512(h) subpoena process), and permanent staydown of allegedly recidivist sellers.

Not only does this represent terrible trademark policy, but it’s a preview of how copyright owners will force DMCA safe harbor reform. They will want all of the same things: proactive monitoring of infringement, no need to send item-specific notices, authentication of users before they can upload, and staydown requirements. The SHOP SAFE Act isn’t just about counterfeits; it’s a proxy war for the next round of online copyright reform, and the open Internet doesn’t have a chance of surviving either reform.

“Reasonableness” Isn’t Reasonable to Online Marketplaces. I’ve blogged many times about how a “reasonableness” standard of liability in the online context is a fast-track to the end of UGC. As a legal standard, “reasonableness” often can’t be resolved on motions to dismiss because it’s fact-intensive and defendants can’t tell their side of the story at that procedural stage. As a result, “reasonableness” standards substantially increase the odds that lawsuits survive the motion to dismiss and get into discovery, which raises the defense costs by a factor of 10 or more.

The bill contains 21 instances of the term “reasonable” or variations of it. Each and every one of those is a fight the defendants can’t cost-justify. That means defendants will give up at the earliest opportunity or, more likely, “self-censor” to avoid any potential courtroom battle over their “reasonableness.”

Too Many Defense Factors Makes the Defenses Unwinnable. More generally, to avoid the new cause of action, online marketplaces must win each and every one of the 13 preconditions (many of which have subparts). In other words, they must do everything perfectly AND prove all 13 elements to the court’s satisfaction. Safe harbors with that many prerequisites are extraordinarily costly because the plaintiffs can contest each element and engage in expensive discovery related to them. The DMCA online safe harbor has functionally failed for this reason: it’s too expensive for startups to prove they qualify, and copyright owners can weaponize those costs intentionally to drive entities out of the industry. This has turned the DMCA online safe harbor into a sport of kings, so only larger companies can afford it, which has exacerbated the concerns about “Big Tech” market consolidation. The SHOP SAFE Act replicates the structure that failed in the DMCA online safe harbor, so it’s predictable that the SHOP SAFE defenses also will fail to help out online marketplaces, leaving them highly vulnerable to the new cause of action.
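The conjunctive structure is the core problem: the defense is the logical AND of all 13 elements, so a contested failure of proof on any single one forfeits the safe harbor. Here is a minimal sketch of that structure; the element names are my shorthand for the bill’s requirements:

    # The safe harbor is conjunctive: all 13 elements must be proven.
    # Element names are shorthand for the bill's requirements.
    ELEMENTS = [
        "us_agent_or_address", "seller_identity_verified", "authenticity_steps",
        "tos_terms_imposed", "seller_info_displayed", "country_of_origin_shown",
        "accurate_images_required", "proactive_screening", "notification_means",
        "expeditious_removal_program", "repeat_counterfeiter_policy",
        "reregistration_prevention", "verified_contact_on_request",
    ]

    def safe_harbor(proven: dict) -> bool:
        # One contested element (one fact issue surviving a motion to
        # dismiss) and the defense fails, or gets litigated expensively.
        return all(proven.get(element, False) for element in ELEMENTS)

    print(safe_harbor({e: True for e in ELEMENTS}))       # True: perfection
    print(safe_harbor({e: True for e in ELEMENTS[:-1]}))  # False: 12 of 13 isn't enough

Contrast a disjunctive or factor-balancing defense, where strength on some elements can offset weakness on others; here, going 12-for-13 is worth nothing.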

Goodbye, Scalability. The Internet enables scalable operations in new and important ways. That scalability has created new functionality that never existed in the offline world — like online marketplaces. The SHOP SAFE Act blows scaling apart. Not only do the “reasonableness” requirements require careful attention to the facts, the bill makes it impossible to have true self-service signups of third-party sellers. Instead, there will need to be several levels of human review of new signups to satisfy the various authentication requirements. Furthermore, the proactive screening requirement will also require substantial human monitoring because determining “counterfeits” cannot be delegated solely to the machines. The absence of scalability and the need for substantial human labor will reward services that are really small, like a one-person operation, or really large, like a market-dominant player. Thus, SHOP SAFE’s elimination of scalability will exacerbate competition problems in the online retailing world.

Who Cares About Privacy? Trademark owners demanded the WHOIS system to make it easier for them to sue domain name registrants. The WHOIS system has collapsed due to the GDPR, which exposed how the WHOIS system was highly privacy-invasive. The SHOP SAFE Act doubles down on privacy invasions in two ways.

First, it requires online marketplaces to collect lots of sensitive information they don’t want, such as government-issued IDs. Those databases are honeypots for law enforcement and hackers.

Second, it requires publication of some information that sellers might consider private, especially if they are small operations with close identity between their professional and personal lives. (The bill’s exclusion of some private information incompletely addresses this concern.) For example, that information can be highly sensitive for sellers of controversial items, who can be targeted by trolls and haters for local ostracism or physical attacks like swatting; competitors, too, can use this information to engage in anti-competitive harassment.

Just like WHOIS struck a lopsided balance between trademark owners’ interests and registrant privacy, the SHOP SAFE Act similarly tosses privacy concerns under the trademark owners’ bus.

Why Would Anyone Support This Bill? This bill will kill online marketplaces and make markets less efficient. Where the online marketplace owner has a retailing function, like Amazon and Walmart, they can shut down the marketplace and subsume some items into their standard retailing function. That transition cuts off the long tail of items consumers expect to find online, and it burns hundreds of thousands of independent businesses that currently thrive in the marketplace system but become irrelevant in a retailing model. Meanwhile, standalone online marketplaces, like eBay and Etsy, have to revamp their entire business or exit the industry entirely, which further reduces competition for online retailing. The net competitive effects, then, are that consumers will pay higher prices, lose their ability to find long-tail items, and incur higher search costs to do so, while existing market leaders will consolidate their dominant positions, and hundreds of thousands of people will lose their jobs.

In contrast, who wins in this situation? The only winners are trademark owners, some of whom hate online marketplaces because they are tired of seeing their goods leak out of official distribution channels into more price-discounted online marketplaces, because they hate competing against used items of the goods they sell, and because some counterfeiting does take place there (as it does in the offline world too). To address those concerns, they are willing to burn down the entire online marketplace industry. What I can’t understand is why any members of Congress would be so willing to give trademark owners their wishlist when the results would be so disadvantageous for their constituents. The trademark owner lobby is strong, but our governance systems should be strong enough to resist terrible and selfish legislation like this.

Reposted with permission from Eric Goldman’s Technology & Marketing Law Blog

Posted on Techdirt - 14 April 2021 @ 10:48am

Deconstructing Justice Thomas' Pro-Censorship Statement

Last week, we had a post about Supreme Court Justice Clarence Thomas' very weird statement in a concurrence on mooting an unrelated case, in which he seemed to attack free speech and Section 230. Law professor Eric Goldman has written up an incredibly thorough response to Thomas' statement that we thought the Techdirt community might appreciate, and so we're reposting it here.

Last week, the Supreme Court vacated the Second Circuit's Knight v. Trump ruling. The Second Circuit held that Trump violated the First Amendment when he blocked other Twitter users from engaging with his @realdonaldtrump account. Other courts are holding that government officials can't block social media users from their official accounts, but they can freely block from personal or campaign accounts. Vacating the Second Circuit opinion probably won't materially change that caselaw.

That outcome was overshadowed by a concurring statement from Justice Thomas wherein he again embraced censorship. I blogged a similar statement from Justice Thomas from the October 2020 cert denial of Enigma v. Malwarebytes. That time, Justice Thomas criticized Section 230 by addressing topics he wasn't briefed on and clearly did not understand. This time, his statement is even more unhinged and disconnected from the case at issue. It's clear Justice Thomas feels free to publish whatever thoughts are on his mind. This is what bloggers do. I think he, and all of us, would benefit if he moved his musings to a personal blog, instead of misusing our tax dollars to issue official government statements.

Justice Thomas' statement ends (emphasis added):

As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them.

So Justice Thomas acknowledges he wasn't briefed on any of the interesting topics he wanted to discuss. He's just making stuff up. This isn't what Supreme Court justices do, or should do. I'm a little surprised that his colleagues haven't publicly rebuked him for writing free-association statements. Such statements hurt the court's credibility and abuse the privilege afforded Supreme Court justices.

Justice Thomas starts with an apparent contradiction he positions as a gotcha. The Second Circuit said that Trump created a public forum on Twitter, so Justice Thomas wonders how that could be when Twitter could unilaterally shut down that public forum. He says public forums are "government-controlled spaces," but any "control Mr. Trump exercised over the account greatly paled in comparison to Twitter's authority." Still, Justice Thomas himself acknowledges that if the government rents private real property and uses it to create a public forum, it's still a public forum even when a private landlord has the unilateral right to terminate the lease and evict the government. So…where's the gotcha?

Having failed to define that problem, Justice Thomas manufactures a strawman. He says: "If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude." Notice the conditional grammar to assume a problem without proving it. This is the foundation for a discussion about hypothetical solutions to hypothesized problems.

The two doctrines that "limit the right of a private company to exclude" are common carriage and public accommodations. That leads to this bone-chilling declaration:

Internet platforms of course have their own First Amendment interests, but regulations that might affect speech are valid if they would have been permissible at the time of the founding. See United States v. Stevens, 559 U. S. 460, 468 (2010). The long history in this country and in England of restricting the exclusion right of common carriers and places of public accommodation may save similar regulations today from triggering heightened scrutiny, especially where a restriction would not prohibit the company from speaking or force the company to endorse the speech. See Turner Broadcasting System, Inc. v. FCC, 512 U. S. 622, 684 (1994) (O'Connor, J., concurring in part and dissenting in part); PruneYard Shopping Center v. Robins, 447 U. S. 74, 88 (1980). There is a fair argument that some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner.

[Freeze frame and record scratch…] What did he just say?

First, notice how far Justice Thomas has strayed from the case before him. Somehow he's talking about common carriage and public accommodations when neither doctrine had anything to do with Trump's management of his Twitter account.

Second, did Justice Thomas just favorably cite Pruneyard? Most "conservatives" view Pruneyard skeptically because of its dramatic incursion into private property ownership. It's also on the wane as precedent. Courts have been reluctant to extend it to new facts. The Pruneyard decision may be a low-water mark for private property ownership rights, not the foundation of expanded censorship. (There is also the standard Internet exceptionalism problem with applying an offline analogy like physical-space shopping malls to online media venues).

Third, he is about to make a "fair argument" that "some digital platforms are sufficiently akin to common carriers or places of accommodation." OK, but are there any counterarguments to that "fair" argument? Normally an opposing litigant would be aggressively telling its side of the story, and other Supreme Court justices would be pointing out the weaknesses in Justice Thomas' "fair" arguments. Without these tempering forces, Justice Thomas is engaging in personal advocacy, not judicial analysis.

Regarding common carriers, Justice Thomas claims:

In many ways, digital platforms that hold themselves out to the public resemble traditional common carriers. Though digital instead of physical, they are at bottom communications networks, and they "carry" information from one user to another. Unlike newspapers, digital platforms hold themselves out as organizations that focus on distributing the speech of the broader public.

It should not matter how an editorial publication sources the content it publishes. I remember Zagat, which tried to faithfully mirror the opinions of ordinary restaurant consumers. Did "distributing the speech of the broader public" make Zagat a common carrier? Of course not, because Zagat layered substantial editorial value on top of the consumer comments. But so does Twitter, which enforces its house rules and performs many crucial curatorial functions. Justice Thomas ignores those value-added editorial functions.

Justice Thomas then links common carriage to network effects:

The analogy to common carriers is even clearer for digital platforms that have dominant market share. Similar to utilities, today's dominant digital platforms derive much of their value from network size…. The Facebook suite of apps is valuable largely because 3 billion people use it. Google search, at 90% of the market share, is valuable relative to other search engines because more people use it, creating data that Google's algorithm uses to refine and improve search results. These network effects entrench these companies. Ordinarily, the astronomical profit margins of these platforms (last year, Google brought in $182.5 billion total, $40.3 billion in net income) would induce new entrants into the market. That these companies have no comparable competitors highlights that the industries may have substantial barriers to entry….

It changes nothing that these platforms are not the sole means for distributing speech or information. A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail. But in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable. For many of today's digital platforms, nothing is.

The companies Justice Thomas disparages would hotly contest his assessment. But they weren't in his courtroom to explain themselves.

More generally, normally common carriage redresses natural monopolies, where it would be socially wasteful to build duplicative infrastructure. Assuming Facebook and Google in fact benefit from network effects, they still lack that key attribute of natural monopolists. In particular, competitors can and will successfully compete by providing non-identical orthogonal solutions.

Justice Thomas continues smearing non-litigants:

Much like with a communications utility, this concentration gives some digital platforms enormous control over speech. When a user does not already know exactly where to find something on the Internet (and users rarely do), Google is the gatekeeper between that user and the speech of others 90% of the time. It can suppress content by deindexing or downlisting a search result or by steering users away from certain content by manually altering autocomplete results. Grind, Schechner, McMillan, & West, How Google Interferes With Its Search Algorithms and Changes Your Results, Wall Street Journal, Nov. 15, 2019. Facebook and Twitter can greatly narrow a person's information flow through similar means. And, as the distributor of the clear majority of e-books and about half of all physical books, Amazon can impose cataclysmic consequences on authors by, among other things, blocking a listing.

Is Justice Thomas suggesting all of these services (including Amazon's book retailing) should be treated like common carriers? Where does that stop?

Also, media industry consolidation is ubiquitous in every media niche. For example, there are 3 major record labels, and Disney has eaten a huge chunk of the movie business. Does that make them common carriers? In the 1970s and 1980s, there was a single daily newspaper in each metro area. Should they have been deemed common carriers because of that? Recall Florida tried to do that in Miami Herald v. Tornillo (though it didn't use the term "common carrier"). The Supreme Court held that the Miami Herald newspaper's local market dominance did not reduce the newspaper's constitutional protection.

With respect to public accommodations, Justice Thomas says "a company ordinarily is a place of public accommodation if it provides 'lodging, food, entertainment, or other services to the public . . . in general.' Twitter and other digital platforms bear resemblance to that definition." Every business will bear some "resemblance" to that definition because they offer goods or services to their customers, but not every business is a place of public accommodation. Justice Thomas closes the thought by saying "no party has identified any public accommodation restriction that applies here." That's because IT WASN'T RELEVANT TO THE CASE.

Justice Thomas cheerleads the #MAGA legislators around the country working on censorial bills:

The similarities between some digital platforms and common carriers or places of public accommodation may give legislators strong arguments for similarly regulating digital platforms. "[I]t stands to reason that if Congress may demand that telephone companies operate as common carriers, it can ask the same of" digital platforms. Turner, 512 U. S., at 684 (opinion of O'Connor, J.). That is especially true because the space constraints on digital platforms are practically nonexistent (unlike on cable companies), so a regulation restricting a digital platform's right to exclude might not appreciably impede the platform from speaking.

Justice Thomas somehow overlooked Reno v. ACLU (1997), which came out after Turner and Denver Area. The Supreme Court said that, unlike broadcasting and telecom, there was no basis for qualifying the First Amendment scrutiny applied to Internet content regulations. This is 100% responsive to his invocation of O'Connor's language from Turner.

Justice Thomas then says "plaintiffs might have colorable claims against a digital platform if it took adverse action against them in response to government threats." Not this again. It's a true statement with respect to "government threats," but general censorial exhortations by government officials aren't "threats." In a footnote, he adds:

Threats directed at digital platforms can be especially problematic in the light of 47 U. S. C. §230, which some courts have misconstrued to give digital platforms immunity for bad-faith removal of third-party content. Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 592 U. S. ___, ___–___ (2020) (THOMAS, J., statement respecting denial of certiorari) (slip op., at 7–8). This immunity eliminates the biggest deterrent (a private lawsuit) against caving to an unconstitutional government threat.

Wait, who is the villain in that story? My vote: The government making unconstitutional threats. Section 230 doesn't prevent lawsuits directly against the government for issuing these threats. Nevertheless, Justice Thomas apparently thinks that Internet services, receiving unconstitutional demands from government officials, should be sued by individual users for honoring those demands. Yet, an Internet service's content removal in response to a government threat usually would be considered a "good faith" removal and thus satisfy the statutory requirements of Section 230(c)(2), so I don't understand why Justice Thomas thinks his Enigma statement is relevant. And if Section 230 didn't protect the Internet service's removal, is Justice Thomas saying that the Internet services should be compelled to carry potentially illegal content even if the government executes its threat? Here's a better idea: we should all work together to stop the government from issuing unconstitutional threats. And the first government threat I think we should stop? I nominate Justice Thomas' threat to impose must-carry obligations.

Justice Thomas, citing Prof. Volokh, speculates that maybe Section 230 is itself unconstitutional:

some commentators have suggested that immunity provisions like §230 could potentially violate the First Amendment to the extent those provisions pre-empt state laws that protect speech from private censorship

As I've said before, the phrase "private censorship" is an oxymoron. Only governments censor. Private entities exercise editorial control.

More generally, I do not see how Section 230(c)(1) is unconstitutional. It's a speech-enhancing statute that supplements the First Amendment. Section 230(c)(2) is more colorable because it does make distinctions between different content categories. However, so long as courts read the "otherwise objectionable" exclusion broadly, that phrase basically applies to all content equally. Note that various Section 230(c)(2) reforms propose to remove or modify the "otherwise objectionable" language, and those changes could create a constitutional problem where none currently exists.

Justice Thomas says the threats he's talking about have nothing to do with the case at hand:

But no threat is alleged here; no party has sued Twitter. The question facing the courts below involved only whether a government actor violated the First Amendment by blocking another Twitter user.

I agree. So why is Justice Thomas discussing any of this?

Justice Thomas' statement concludes:

The Second Circuit feared that then-President Trump cut off speech by using the features that Twitter made available to him. But if the aim is to ensure that speech is not smothered, then the more glaring concern must perforce be the dominant digital platforms themselves. As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms.

I strongly disagree about the MOST "glaring concern" here. Twitter lacks the power to order drone killings, separate parents from their children at the border, put a knee on the neck of a suspect for 9 minutes, incarcerate people, impose taxes, garnish people's wages, or engage in the thousands of other ways that governments can deprive people of their assets, liberty, or life. Compared to the government's vast power to squelch speech, the power of the "dominant digital platforms" seems puny. Justice Thomas betrays his extraordinary degree of privilege. Due to that privilege, he doesn't recognize how the truly glaring concern is that the government, fueled by his words, will use its "dominance" to "smother" far more speech than any Internet service ever could.

Implications

I hope Justice Thomas' colleagues do not share his views and this statement is just idle musings. But even if the statement doesn't lead to changes at the Supreme Court, it will nevertheless contribute to three unfortunate dynamics.

First, plaintiffs will improperly cite the statement as if it is binding law (which they did with his prior statement: 1, 2). They will especially like the discussion about government threats.

Second, plaintiffs will appeal more censorial cases to the Supreme Court, knowing that Justice Thomas is a reliable vote to grant the cert petition and vote in their favor.

Third, state legislators will view this opinion as permission to pursue unconstitutional must-carry obligations. There are so many proposals percolating in the state legislatures right now, and odds are good that at least one will get enacted and the battle will shift to the court challenges of those laws. The future of the Internet rests on those coming court battles, and I feel less secure about the Internet's fate knowing that Justice Thomas is one of the final 9 votes.

Finally, remember that Trump's Twitter account was government speech. The thrust of Justice Thomas' statement would require Twitter to carry government speech it doesn't want to carry. That isn't garden-variety censorship. Justice Thomas seemingly wants private media operations to become government mouthpieces. Forcing media outlets to distribute government propaganda is a hallmark of repressive and autocratic countries. I don't know what it means to be a "conservative," but I know it shouldn't include that.

BONUS: Justice Thomas isn't trying to hide his antipathy towards Google. See this passage from his dissent in Google v. Oracle, No. 18–956 (U.S. Sup. Ct. April 5, 2021):

If the majority is going to speculate about what Oracle might do, it at least should consider what Google has done. The majority expresses concern that Oracle might abuse its copyright protection (on outdated Android versions) and "'attempt to monopolize the market.'" Ante, at 34–35. But it is Google that recently was fined a record $5 billion for abusing Android to violate antitrust laws. Case AT.40099, Google Android, July 18, 2018 (Eur. Comm'n-Competition); European Comm'n Press Release, Commission Fines Google €4.34 Billion for Illegal Practices Regarding Android Mobile Devices to Strengthen Dominance of Google's Search Engine, July 18, 2018. Google controls the most widely used mobile operating system in the world. And if companies may now freely copy libraries of declaring code whenever it is more convenient than writing their own, others will likely hesitate to spend the resources Oracle did to create intuitive, well-organized libraries that attract programmers and could compete with Android. If the majority is worried about monopolization, it ought to consider whether Google is the greater threat.

Originally posted to Eric Goldman’s Technology & Marketing Law Blog.

Posted on Techdirt - 2 December 2020 @ 07:00am

New Ebook On Zeran v AOL, The Most Important Section 230 Case

Section 230 has become a mainstream discussion topic, but unfortunately many discussants don't actually understand it well (or at all). To address this knowledge gap, co-editors Profs. Eric Goldman (Santa Clara Law) and Jeff Kosseff (U.S. Naval Academy) have released an ebook, called "Zeran v. America Online," addressing many aspects of Section 230. You can download the ebook for free.

Zeran v. AOL is the most important Section 230 case of all time. The Zeran case was the first federal appellate decision interpreting Section 230, and its breathtakingly broad sweep turbocharged the rise of Web 2.0, with all of its strengths and weaknesses.

In recognition of Zeran's importance, in 2017, Profs. Goldman and Kosseff helped assemble an essay package to honor the case's 20th anniversary. They gathered two dozen essays from some of the most knowledgeable Section 230 experts. The essays address the case's history and policy implications. Initially, Law.com published the essay package but then unexpectedly paywalled the essays after 6 months.

The new Zeran v. America Online ebook restores the 2017 essay package into a new and easy-to-read format. Together, the essays are a great entry point into the debates about Section 230, including how we got here and what's at stake.

To supplement the essay package, the ebook compiles an archive of the key documents in the Zeran v. AOL litigation (with bonus coverage of Zeran's case against radio station KRXO). Many of these materials have not previously been publicly available in electronic format. The case archives should be of interest both to historians and students of precedent-setting litigation tactics.

Section 230 will likely remain hotly debated, but the debates won't be productive until we develop a shared understanding of what the law says and why. Ideally, this ebook will advance those goals.

Posted on Techdirt - 16 October 2019 @ 06:38am

Top Myths About Content Moderation

How Internet companies decide which user-submitted content to keep and which to remove (a process called "content moderation") is getting lots of attention lately, for good reason. Under-moderation can lead to major social problems, like foreign agents manipulating our elections. Over-moderation can suppress socially beneficial content, like negative but true reviews by consumers.

Due to these high stakes, regulators across the globe increasingly seek to tell Internet companies how to moderate content. European regulators are requiring Internet services to remove extremist content within an hour and to install upload filters to prospectively block copyright infringement; and U.S. legislators have proposed to ban Internet services from moderating content at all.

Unfortunately, many of these regulatory efforts are predicated on myths about content moderation, such as:

Myth: Content moderation can be done perfectly.

Reality: Regulators routinely assume Internet services can remove all bad content without suppressing any good content. Unfortunately, they can't. First, mistakes occur when the service lacks key contextual information about the content, such as details about the author's identity, other online and offline activities, and cultural references. Second, any line-drawing exercise creates mistake-prone border cases because users routinely submit "edgy" content. Third, a high-volume service will make many mistakes, even if it's highly accurate: 1 billion submissions a day at 99.9% accuracy still yields a million mistakes a day.
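
That third point is just arithmetic, but it's worth making concrete:

    # Error volume at scale: even 99.9% accuracy leaves a huge absolute
    # number of mistakes when the input is a billion items per day.
    daily_submissions = 1_000_000_000
    accuracy = 0.999

    mistakes_per_day = daily_submissions * (1 - accuracy)
    print(f"{mistakes_per_day:,.0f} mistaken decisions per day")  # 1,000,000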

Myth: Bad content is easy to find and remove.

Reality: Regulators often assume every item of bad content has an impossible-to-miss flashing neon sign saying "REMOVE THIS CONTENT," but that's rare. Content is often obviously bad only in hindsight or with context unavailable to the service. Regulators' cherry-picked anecdotes don't prove otherwise.

Myth: Technologists just need to "nerd harder."

Reality: Filtering and artificial intelligence play important roles in content moderation. However, technology alone cannot magically solve the problem. "Edgy" and contextless content vexes the machines, too.

Myth: Internet services should hire more humans to review content.

Reality: Humans have biases and make mistakes too, so adding human reviewers won't lead to perfection. Furthermore, human reviewers sometimes experience an unrelenting onslaught of horrible content to protect the rest of us.

Myth: Internet companies have no incentive to moderate content.

Reality: In 1996, Congress passed 47 U.S.C. 230, which says Internet services generally aren't liable for third-party content. Due to this legal protection, critics often assume Internet services won't invest in content moderation; and some companies have stoked that perception by publicly positioning themselves as "neutral" technology platforms. Yet, virtually every Internet service moderates content, and major services like Facebook and YouTube employ many thousands of content reviewers. Why? The services have their own reputation to manage, and they care about how content can affect their users (e.g., Pinterest combats content that promotes eating disorders). Furthermore, advertisers won't let their ads appear on bad content, which provides additional financial incentives to moderate.

Myth: Content moderation, if done right, will make everyone happy.

Reality: By definition, content moderation is a zero-sum game. Someone gets their desired outcome, and someone else doesn't, and those folks won't be happy with the result.

Myth: There is a one-size-fits-all approach to content moderation.

Reality: Internet services cater to diverse audiences that have different moderation needs. For example, an online crowdsourced encyclopedia like Wikipedia, an open-source software repository like GitHub, and a payment service for content publishers like Patreon all solve different problems for their communities. These services shouldn't have identical content moderation rules.

Myth: Imposing content moderation requirements will stick it to Google and Facebook.

Reality: Google and Facebook have enough money to handle virtually any requirement imposed by regulators. Startup enterprises do not. Increased content moderation burdens are more likely to block new entrants than to punish Google and Facebook.

Myth: Poor content moderation causes anti-social behavior.

Reality: Poorly executed content moderation can accelerate bad behavior, but often the Internet simply mirrors existing anti-social behavior or tendencies. Better content moderation can't fix problems that are endemic in the human condition.

Regulators are right to identify content moderation as a critically important topic. However, until regulators overcome these myths, regulatory interventions will cause more problems than they solve.

Reposted from Eric Goldman’s Technology & Marketing Law Blog.

Posted on Techdirt - 8 November 2018 @ 02:52pm

CDA 230 Doesn't Support Habeas Petition by 'Revenge Pornographer'

As you may recall, Kevin Bollaert ran UGotPosted, which published third-party submitted nonconsensual pornography, and ChangeMyReputation.com, which offered depicted individuals a “pay-to-remove” option. Bollaert appeared multiple times in my inventory of nonconsensual pornography enforcement actions. Bollaert’s conduct was disgusting, and I have zero sympathy for him. Nevertheless, I also didn’t love the path prosecutors took to bust him. The lower court convicted him of 24 counts of identity theft and 7 counts of extortion and sentenced him to 8 years in jail and 10 years of supervised release. Pay-to-remove sites are not inherently extortive, and identity theft crimes often overreach to cover distantly related activities.

Worse, the appeals court affirmed the convictions despite a significant Section 230 defense. The opinion contorted Section 230 law, relying on outmoded legal theories from Roommates.com. Fortunately, I haven’t seen many citations to the appellate court’s misinterpretation of Section 230, so the doctrinal damage to Section 230 hasn’t spread too much (yet). However, that still leaves open whether Bollaert’s conviction was correct.

Bollaert raised that issue by filing a habeas corpus petition in federal court. Such petitions are commonly filed and almost never granted, so Bollaert's petition had minimal odds of success as a matter of math. Not surprisingly, his petition fails.

The district court says that Section 230’s application to Bollaert’s circumstance does not meet the rigorous standard of “clearly established federal law”:

In this case, the Supreme Court has never recognized that the CDA applies in state criminal actions. The Supreme Court has never indicated circumstances that would qualify a state criminal defendant for CDA immunity. Absence of applicable Supreme Court precedent defeats the contention that Petitioner is entitled to CDA immunity under clearly established federal law…

federal circuits have not applied CDA immunity in state criminal actions or indicated circumstances that would qualify a state criminal defendant for CDA immunity. Petitioner cannot satisfy § 2254(d)(1) with district court opinions applying CDA immunity in state criminal actions.

I’ve routinely blogged about the application of Section 230 to state criminal prosecutions, and I even wrote a lengthy discourse on why that was a good thing. Still, I can’t think of any federal appellate courts that have reached this conclusion, so perhaps the court’s factual claim about the jurisprudential absence is correct.

The court adds that even if Section 230 qualified as “clearly established federal law,” the appellate court ruling didn’t necessarily contravene that law:

the California Court of Appeal performed an exhaustive and comprehensive analysis of the applicable circuit court decisions before concluding Petitioner is an information content provider under Roommates. The state court reasonably interpreted Roommates and Jones, and reasonably concluded that Petitioner “developed, at least in part, the offensive content on his Web site by requiring users to input private and personal information as a condition of posting the victims’ pictures, making him an information content provider within the meaning of the CDA.”

This passage reinforces the deficiencies of the appellate court's Section 230 discussion. "[R]equiring users to input private and personal information as a condition of posting the victims' pictures" is not the encouragement of illegal content, as referenced by Roommates.com, as that information isn't actually illegal; and the Jones case rejected an "encouragement" exclusion to Section 230 while ruling for the defense. Do those deficiencies support the extraordinary relief of habeas corpus? Apparently not.

Reposted from Eric Goldman’s Technology & Marketing Law Blog

Posted on Techdirt - 29 January 2018 @ 01:23pm

It's Time to Talk About Internet Companies' Content Moderation Operations

As discussed in the post below, on February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event — and over the next few weeks we'll be publishing many of those essays. This first one comes from Professor Eric Goldman, who put together the conference, explaining the rationale behind the event and this series of essays.

Many user-generated content (UGC) services aspire to build scalable businesses where usage and revenues grow without increasing headcount. Even with advances in automated filtering and artificial intelligence, this goal is not realistic. Large UGC databases require substantial human intervention to moderate anti-social and otherwise unwanted content and activities. Despite the often-misguided assumptions by policymakers, problematic content usually does not have flashing neon signs saying "FILTER ME!" Instead, humans must find and remove that content, especially with borderline cases, where machines can't make sufficiently nuanced judgments.

At the largest UGC services, the number of people working on content moderation is eye-popping. By 2018, YouTube will have 10,000 people on its “trust & safety teams.” Facebook’s “safety and security team” will grow to 20,000 people in 2018.

Who are these people? What exactly do they do? How are they trained? Who sets the policies about what content the service considers acceptable?

We have surprisingly few answers to these questions. Occasionally, companies have discussed these topics in closed-door events, but very little of this information has been made public.

This silence is unfortunate. A UGC service's decision to publish or remove content can have substantial implications for individuals and the community, yet we lack the information to understand how those decisions are made and by whom. Furthermore, the silence has inhibited the development of industry-wide "best practices." UGC services can learn a lot from each other if they start sharing information publicly.

On Friday, a conference called “Content Moderation and Removal at Scale” will take place at Santa Clara University. (The conference is sold out, but we will post recordings of the proceedings, and we hope to make a live-stream available). Ten UGC services will present “facts and figures” about their content moderation operations, and five panels will discuss cutting-edge content moderation issues. For some services, this conference will be the first time they’ve publicly revealed details about their content moderation operations. Ideally, the conference will end the industry’s norm of silence.

In anticipation of the conference, we assembled ten essays from conference speakers discussing various aspects of content moderation. These essays provide a sample of the conversation we anticipate at the conference. Expect to hear a lot more about content moderation operational issues in the coming months and years.

Eric Goldman is a Professor of Law, and Co-Director of the High Tech Law Institute, at Santa Clara University School of Law. He has researched and taught Internet Law for over 20 years, and he blogs on the topic at the Technology & Marketing Law Blog.

Posted on Techdirt - 12 April 2017 @ 02:58pm

Texas Supreme Court Is Skeptical About Wikipedia As A Dictionary

This is an interesting opinion from the Texas Supreme Court on citing Wikipedia as a dictionary. The underlying case involves an article in D Magazine titled “The Park Cities Welfare Queen.” The article purports to show that the plaintiff, Rosenthal, “has figured out how to get food stamps while living in the lap of luxury.” After publication, evidence emerged that the plaintiff had not committed welfare fraud. She sued the magazine for defamation.

The appeals court denied the magazine’s anti-SLAPP motion in part because it held the term “Welfare Queen,” as informed by the Wikipedia entry, could be defamatory. The Texas Supreme Court affirms the anti-SLAPP denial, but it also criticizes the appeals court for not sufficiently examining the entire article’s gist. Along the way, the court opines on the credibility and validity of Wikipedia as a dictionary. TL;DR = the Supreme Court says don’t treat Wikipedia like a dictionary.

Apologies for the block quoting, but here’s the detail:

Wikipedia is a self-described “online open-content collaborative encyclopedia.” Wikipedia: General Disclaimer, https://en.wikipedia.org/wiki/Wikipedia:General_disclaimer (last visited Mar. 13, 2017). This means that, except in certain cases to prevent disruption or vandalism, anyone can write and make changes to Wikipedia pages. Wikipedia: About, https://en.wikipedia.org/wiki/Wikipedia:About (last visited Mar. 13, 2017). Volunteer editors can submit content as registered members or anonymously. Id. Each time an editor modifies content, the editor’s identity or IP address and a summary of the modification, including a time stamp, become available on the article’s “history” tab. Jason C. Miller & Hannah B. Murray, Wikipedia in Court: When and How Citing Wikipedia and Other Consensus Websites Is Appropriate, 84 ST. JOHN’S L. REV. 633, 637 (2010). Wikipedia is one of the largest reference websites in the world, with over “70,000 active contributors working on more than 41,000,000 articles in 294 languages.” Wikipedia: About, supra.

References to Wikipedia in judicial opinions began in 2004 and have increased each year, although such references are still included in only a small percentage of opinions. Jodi L. Wilson, Proceed with Extreme Caution: Citation to Wikipedia in Light of Contributor Demographics and Content Policies, 16 VAND. J. ENT. & TECH. L. 857, 868 (2014). These cites often relate to nondispositive matters or are included in string citations. But, some courts "have taken judicial notice of Wikipedia content, based their reasoning on Wikipedia entries, and decided dispositive motions on the basis of Wikipedia content." Lee F. Peoples, The Citation of Wikipedia in Judicial Opinions, 12 YALE J. L. & TECH. 1, 3 (2009–2010). While there has been extensive research on Wikipedia's accuracy, "the results are mixed; some studies show it is just as good as the experts, [while] others show Wikipedia is not accurate at all." Michael Blanding, Wikipedia or Encyclopædia Britannica: Which Has More Bias?, FORBES (Jan. 20, 2015), http://www.forbes.com/sites/hbsworkingknowledge/2015/01/20/wikipedia-or-encyclopaediabritannica-which-has-more-bias/#5c254ac51ccf.

Any court reliance on Wikipedia may understandably raise concerns because of "the impermanence of Wikipedia content, which can be edited by anyone at any time, and the dubious quality of the information found on Wikipedia." Peoples, supra at 3. Cass Sunstein, legal scholar and professor at Harvard Law School, also warns that judges' use of Wikipedia "might introduce opportunistic editing." Noam Cohen, Courts Turn to Wikipedia, but Selectively, N.Y. TIMES (Jan. 29, 2007), http://www.nytimes.com/2007/01/29/technology/29wikipedia.html. The Fifth Circuit has similarly warned against using Wikipedia in judicial opinions, agreeing "with those courts that have found Wikipedia to be an unreliable source of information" and advising "against any improper reliance on it or similarly unreliable internet sources in the future." Bing Shun Li v. Holder, 400 F. App'x 854, 857 (5th Cir. 2010); accord Badasa v. Mukasey, 540 F.3d 909, 910–11 (8th Cir. 2008).

For others in the legal community, however, Wikipedia is a valuable resource. Judge Richard Posner has said that "Wikipedia is a terrific resource … because it [is] so convenient, it often has been updated recently and is very accurate." Cohen, supra. However, Judge Posner also noted that it "wouldn't be right to use it in a critical issue." Id. Other scholars agree that Wikipedia is most appropriate for "soft facts," when courts want to provide context to help make their opinions more readable. Id. Moreover, because Wikipedia is constantly updated, some argue that it can be "a good source for definitions of new slang terms, for popular culture references, and for jargon and lingo including computer and technology terms." Peoples, supra at 31. They also argue that open-source tools like Wikipedia may be useful when courts are trying to determine public perception or community norms. Id. at 32. This usefulness is lessened, however, by the recognition that Wikipedia contributors do not necessarily represent a cross-section of society, as research has shown that they are overwhelmingly male, under forty years old, and living outside of the United States. Wilson, supra at 885–89.

Given the arguments both for and against reliance on Wikipedia, as well as the variety of ways in which the source may be utilized, a bright-line rule is untenable. Of the many concerns expressed about Wikipedia use, lack of reliability is paramount and may often preclude its use as a source of authority in opinions. At the least, we find it unlikely Wikipedia could suffice as the sole source of authority on an issue of any significance to a case. That said, Wikipedia can often be useful as a starting point for research purposes. See Peoples, supra at 28 ("Selectively using Wikipedia for … minor points in an opinion is an economical use of judges' and law clerks' time."). In this case, for example, the cited Wikipedia page itself cited past newspaper and magazine articles that had used the term "welfare queen" in various contexts and could help shed light on how a reasonable person could construe the term.

However, the court of appeals utilized Wikipedia as its primary source to ascribe a specific, narrow definition to a single term that the court found significantly influenced the article’s gist. Essentially, the court used the Wikipedia definition as the lynchpin of its analysis on a critical issue. As a result, the court narrowly read the term “welfare queen” to necessarily implicate fraudulent or illegal conduct, while other sources connote a broader common meaning. See, e.g., Oxford Living Dictionaries, https://en.oxforddictionaries.com/definition/welfare_queen (last visited Mar. 13, 2017) (broadly defining “welfare queen” as a “woman perceived to be living in luxury on benefits obtained by exploiting or defrauding the welfare system”); YourDictionary, http://www.yourdictionary.com/welfare-queen (last visited Mar. 13, 2017) (broadly defining “welfare queen” as a “woman collecting welfare, seen as doing so out of laziness, rather than genuine need”). In addition, and independent of the Wikipedia concerns, the court of appeals’ overwhelming emphasis on a single term in determining the article’s gist departed from our jurisprudential mandate to evaluate the publication as a whole rather than focus on individual statements.

A concurring opinion by Justice Guzman amplifies the concerns (FNs omitted):

Wikipedia has many strengths and benefits, but reliance on unverified, crowd-generated information to support judicial rulings is unwise. Mass-edited collaborative resources, like Wikipedia, are malleable by design, raising serious concerns about the accuracy and completeness of the information, the expertise and credentials of the contributors, and the potential for manipulation and bias. In an age when news about "fake news" has become commonplace, long-standing concerns about the validity of information obtained from "consensus websites" like Wikipedia are not merely the antiquated musings of luddites. To the contrary, as current events punctuate with clarity, courts must remain vigilant in guarding against undue reliance on sources of dubious reliability. A collaborative encyclopedia that may be anonymously and continuously edited undoubtedly fits the bill.

Legal commentators may debate whether and to what extent courts could properly rely on online sources like Wikipedia, but the most damning indictment of Wikipedia’s authoritative force comes directly from Wikipedia:

  • “WIKIPEDIA MAKES NO GUARANTEE OF VALIDITY”
  • “Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate or reliable information.”
  • “Wikipedia cannot guarantee the validity of the information found here.”
  • “Wikipedia is not uniformly peer reviewed.”
  • “[A]ll information read here is without any implied warranty of fitness for any purpose or use whatsoever.”
  • “Even articles that have been vetted by informal peer review or featured article processes may later have been edited inappropriately, just before you view them.”
  • Indeed, “Wikipedia’s radical openness means that any given article may be, at any given moment, in a bad state: for example, it could be in the middle of a large edit or it could have been recently vandalized.” Even if expeditiously remediated, transient errors are not always obvious to the casual reader. As Wikipedia states more pointedly, “Wikipedia is a wiki, which means that anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn’t be citing Wikipedia.”

Apart from these candid self-assessments, which no doubt apply with equal force to other online sources and encyclopedias, a more pernicious evil lurks: "opportunistic editing." Because "[a]nyone with Internet access can write and make changes to Wikipedia articles" and "can contribute anonymously, [or] under a pseudonym," reliance on Wikipedia as an authoritative source for judicial decision-making incentivizes self-interested manipulation. Case in point: a Utah court of appeals recently described how the Wikipedia definition of "jet ski" provided "stronger support" for one of the parties in a subsequent appeal than it had when considered by the court in the parties' previous appeal. The court observed the difficulty of discerning whether the change was instigated by the court's prior opinion, perhaps "at the instance of someone with a stake in the debate."

Still, some have argued Wikipedia is "a good source for definitions of new slang terms, for popular culture references, and for jargon and lingo including computer and technology terms." Perhaps, but not necessarily. While Wikipedia's "openly editable" model may be well suited to capturing nuances and subtle shifts in linguistic meaning, there is no assurance that any particular definition actually represents the commonly understood meaning of a term that may be central to a legal inquiry. In truth, Wikipedia's own policies disclaim the notion: "Wikipedia is not a dictionary, phrasebook, or a slang, jargon or usage guide." Whatever merit there may be to crowdsourcing the English language, Wikipedia simply lacks the necessary safeguards to prevent abuse and assure the level of certainty and validity typically required to sustain a judgment in a legal proceeding.

Take, for example, the Wikipedia entry for "welfare queen," which was first created in November 2006 by the user Chalyres. Since the entry was first drafted, 239 edits have been made by 146 users. But there is no reliable way to determine whether these edits (1) deleted or added accurate information, (2) deleted or added false or biased information, (3) were made by individuals with expertise on the term's usage, or (4) were made by individuals actually representative of the community.

As a court, one of our "chief functions" is "to act as an animated and authoritative dictionary." In that vein, we are routinely called upon to determine the common meaning of words and phrases in contracts, statutes, and other legal documents. Though we often consult dictionaries in discharging our duty, rarely, if ever, is one source alone sufficient to fulfill the task. To that end, I acknowledge that Wikipedia may be useful as a "starting point for serious research," but it must never be considered "an endpoint," at least in judicial proceedings.

Wikipedia's valuable role in today's technological society cannot be denied. Our society benefits from the fast, free, and easily-accessible information it provides. A wealth of information is now available at the touch of a few key strokes, and a community of Wikipedia editors serves to increase the accuracy and truth of that information, promoting the public good through those efforts. However, in my view, Wikipedia properly serves the judiciary only as a compendium (a source for sources) and not as authority for any disputed, dispositive, or legally consequential matter.

To punctuate her skepticism, Justice Guzman's concurrence displays this screenshot:

In a footnote, you can almost hear a sneer as she characterizes the screenshot as "Screenshot of unsaved edits to Welfare Queen." NB: Wikipedia is trivially easy to edit, but getting those edits to stick is an entirely different matter.
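
That edit churn is easy to inspect for yourself, because every article's revision history is public. As a quick illustration, this Python sketch pulls the latest revisions of the "Welfare queen" entry through the MediaWiki API (it assumes the third-party requests library is installed):

    import requests

    # Ask the MediaWiki API for the 10 most recent revisions of the article,
    # including each editor's name (or IP), timestamp, and edit summary.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "format": "json",
            "prop": "revisions",
            "titles": "Welfare queen",
            "rvprop": "timestamp|user|comment",
            "rvlimit": 10,
        },
        headers={"User-Agent": "edit-history-demo/0.1"},
    )
    page = next(iter(resp.json()["query"]["pages"].values()))
    for rev in page["revisions"]:
        print(rev["timestamp"], rev.get("user", "(hidden)"), rev.get("comment", ""))

The output is the same information the article's "history" tab exposes: who changed what, and when, with no vetting of the editors' expertise, which is the concurrence's point.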

My Thoughts

It makes sense not to treat Wikipedia as the authoritative citation source. However, I would make the same declaration about many sources, crowd-sourced or not. Often, a range of sources is required to establish a "fact."

We especially see the trickiness of treating a single dictionary as an authoritative source, because there are often subtle but crucial differences in dictionaries' definitions of the same term. Indeed, Wikipedia itself acknowledges its limits as a dictionary. In contrast, sometimes Wikipedia is an OK citation for the zeitgeist about an issue, where the citation is for the range of views rather than for the truth of any one of them.

I was a little surprised that the court didn't discuss the Urban Dictionary as an alternative to Wikipedia as a dictionary (it comes up only in a reference in a footnote in Justice Guzman's opinion). What I like about Urban Dictionary is that it doesn't purport to offer a single definition of any term. Instead, it lists a range of definitions ordered by crowd-sourced voting. In my experience, the Urban Dictionary often fills in the gaps in my "street lingo" much better than any other source, so long as I use it advisedly.

I'm paying closer attention to courts' citations to online dictionaries based on my research for my Emojis and the Law paper. As imperfect as Wikipedia and Urban Dictionary are as online dictionaries, things are much worse with emojis because no credible dictionary is trying to provide definitive definitions of emojis. Eventually, as I'll argue in my paper, we'll need the equivalent of an Urban Dictionary for emojis to capture their disparate meanings across online subcommunities.

Republished from Eric Goldman's Technology & Marketing Law Blog

Posted on Techdirt - 28 September 2016 @ 01:02pm

Does The FTC Get To Ignore Section 230 Of The CDA?

I've often joked that the FTC and state AGs choose to live in a fantasy world where Section 230 doesn't exist. A new ruling from the Second Circuit has turned my joke on its ear, suggesting that my underlying fears — of a Section 230-free zone for consumer protection agencies — may have become our dystopian reality.

The Opinion

The case involves weight loss products, including colon cleanses, vended by LeanSpa. To generate more sales, LeanSpa hired LeadClick to act as an affiliate marketing manager. LeadClick coordinated promotion of LeanSpa's products with LeadClick's network of affiliates. Some affiliates promoted the products using fake news sites, with articles styled to look like legitimate news articles and consumer comments/testimonials that were fake. Apparently, all of this added up to big business. LeanSpa paid LeadClick $35-$45 each time a consumer signed up for LeanSpa's "free" trial (which was a negative billing option). LeadClick shared 80-90% of these sign-up fees with affiliates and kept the remainder for itself. In total, LeadClick billed LeanSpa $22M, of which LeanSpa paid only $12M. Still, LeanSpa turned into LeadClick's top customer, constituting 85% of its eAdvertising division's sales.
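
To make those economics concrete, here is the per-signup arithmetic using midpoints of the ranges the court recites (the $40 fee and 85% affiliate share are my illustrative midpoints, not findings in the opinion):

    # Midpoints of the ranges in the opinion: $35-$45 per signup, 80-90% shared.
    signup_fee = 40.00       # what LeanSpa paid LeadClick per "free" trial signup
    affiliate_share = 0.85   # portion LeadClick passed through to the affiliate

    affiliate_cut = signup_fee * affiliate_share
    leadclick_cut = signup_fee - affiliate_cut
    print(f"Affiliate earns ${affiliate_cut:.2f}; LeadClick keeps ${leadclick_cut:.2f}")
    # Affiliate earns $34.00; LeadClick keeps $6.00 per signup

At that split, the affiliates capture most of each fee, while LeadClick's smaller cut scales with total signup volume.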

The court summarizes the key facts about LeadClick's role in the fake news sites scheme:

While LeadClick did not itself create fake news sites to advertise products, it (1) knew that fake news sites were common in the affiliate marketing industry and that some of its affiliates were using fake news sites, (2) approved of the use of these sites, and, (3) on occasion, provided affiliates with content to use on their fake news pages.

The court also notes that LeadClick occasionally bought ads on legitimate news sites to promote fake news sites in its affiliate network.

The FTC's Prima Facie Case

The FTC alleged that LeadClick engaged in deceptive practices. LeadClick responded that it didn't do any deceptive practices itself; if anyone did, it was its affiliates. Extensively citing the Ninth Circuit's FTC v. Neovi ruling from 2010 (an unfairness case, not a deception case, but this panel ignores the difference) and a subsequent 11th Circuit case (FTC v. IAB Marketing Associates), the Second Circuit concludes that "a defendant may be held liable for engaging in deceptive practices or acts if, with knowledge of the deception, it either directly participates in a deceptive scheme or has the authority to control the deceptive content at issue."

In the Neovi case, the defendant Qchex had an online check-creation tool that fraudsters used to create and send bogus checks. The court held that Qchex engaged in unfair practices when it printed and then delivered the bogus checks to recipients. But here, LeadClick never "delivered" anything. Indeed, LeadClick argued that the legal standard conflates direct liability with aiding/abetting liability. The Second Circuit disagreed, saying a defendant who "allows the deception to proceed" thus "engages, through its own actions, in a deceptive act or practice that causes harm to consumers."

I'm not a philosopher, but to me, "allowing" a third party to commit misconduct is a bizarre and overly expansive way of defining *direct* liability. Once this court makes this doctrinal cheat, LeadClick didn't have a chance. Applying the legal standard to LeadClick:

• knowledge. "LeadClick knew that (1) the use of false news pages was prevalent in affiliate marketing, and (2) its own affiliate marketers were using fake news sites to market LeanSpa's products."
• "direct participation in the deceptive conduct." LeadClick satisfied this standard by "recruiting and paying affiliates who used fake news sites for generating traffic, managing those affiliates, suggesting substantive edits to fake news pages, and purchasing banner space for fake news sites on legitimate news sources."
• "ability to control." LeadClick ran an affiliate network that included fake news sites. "As the manager of the affiliate network, LeadClick had a responsibility to ensure that the advertisements produced by its affiliate network were not deceptive or misleading." I thought the legal standard required "ability," but the court tautologically uses the term "responsibility" to satisfy this element. Also note that the court's legal standard ("has the authority to control the deceptive content at issue") sounds a lot like principal-agency liability, but the court doesn't say or imply that LeadClick had a principal-agency relationship with affiliates. Apparently the court is applying some kind of agency-lite liability.

Finally, the court says that LeadClick's intent to deceive consumers is irrelevant; "it is enough that it orchestrated a scheme that was likely to mislead reasonable consumers."

    Section 230

    Because of the court’s intellectual corner-cutting that LeadClick committed a “direct” violation of the FTCA, the Section 230 immunity was already doomed. This is consistent with the Neovi case, where Section 230 didn’t even come up even though all of the fraudulent content was provided by third parties. Even though Section 230 doesn’t apply to a defendant’s own legal violations, the court unfortunately decides to muck up Section 230 jurisprudence anyway, apparently for kicks.

    I believe this is only the second time that the Second Circuit has discussed Section 230. The prior case was GoDaddy’s undramatic 2015 win in Ricci v. Teamsters, issued per curiam. Oddly, this panel doesn’t cite the Ricci case at all — not even once. The opinion simply says “We have had limited opportunity to interpret Section 230” without referencing the Ricci case by name. I’m baffled why this opinion so deliberately avoided engaging the recent and obviously relevant Ricci precedent?? Could it be that Ricci would have forced the panel to reach a different result or clearly created an intra-circuit split? Is there some kind of behind-the-scenes politics among Second Circuit judges? I welcome your theories.

    The court runs through the standard 3 prong test for Section 230’s immunity:

    1. provider/user of an interactive computer service (ICS). The court correctly says “Courts typically have held that internet service providers, website exchange systems, online message boards, and search engines fall within this definition.” (What is a “website exchange system”?). Then the court goes sideways, saying it is “doubtful” that LeadClick qualifies as an ICS because it acts as an affiliate manager that doesn’t provide access to servers.

      LeadClick argued that it provided affiliate tracking URLs and recorded activity on its server, but the panel responds that LeadClick didn’t cite any cases applying Section 230 in similar contexts. The court continues that LeadClick’s tracking service “is not the type of service that Congress intended to protect in granting immunity” because “routing customers through the HitPath server before reaching LeanSpa’s website[] was invisible to consumers and did not benefit them in any way. Its purpose was not to encourage discourse but to keep track of the business referred from its affiliate network.” (For a concrete sense of how little machinery such a tracking redirect involves, see the sketch following this list.)

      Say what? Affiliate programs are just another form of advertising, so like other advertising programs, they help compensate publishers for creating and disseminating their content. We may not want this particular content (fake news sites touting dubious weight loss products). Even so, affiliate programs do support discourse, and the court’s denigration of affiliate programs’ speech benefits is unfortunate and unsupportable. More generally, the court seems to be marginalizing the speech benefits that third party vendors give to publishers, which is obviously misguided when vendors help publishers conduct their business more efficiently. I hope other courts don’t apply a “discourse promotion” threshold for applying Section 230.

      We rarely see cases turn on the ICS prong, so it’s really shocking to see the court go there — especially when it eventually expressly punts on the issue, making this discussion dicta.

    2. content provided by another information content provider (ICP). The court cites Accusearch for the proposition that ICP “cover[s] even those who are responsible for the development of content only in part,” but then adds that a “defendant, however, will not be held responsible unless it assisted in the development of what made the content unlawful.”

      The court says LeadClick “participated in the development of the deceptive content posted on fake news pages” because it recruited affiliates knowing some had fake news sites, paid them, occasionally advised them to edit content, and bought ads on legitimate news sites. In other words, the court cites the exact same evidence of LeadClick’s prima facie liability as evidence of its lack of qualification for Section 230. This is just another way of saying that once the Second Circuit treated LeadClick as a direct violator of the FTCA, LeadClick had no chance of qualifying for Section 230.

      Notice that none of the cited facts actually involve content “creation” by LeadClick, so the court apparently assumes content “development” covers other activities — but doesn’t say what that term means.

      The court continues: “LeadClick’s role in managing the affiliate network far exceeded that of neutral assistance. Instead, it participated in the development of its affiliates’ deceptive websites, ‘materially contributing to [the content’s] alleged unlawfulness.’” What does “neutral assistance” mean, and how does that relate to Section 230 immunity? I assume all future plaintiffs in the Second Circuit will claim that the defendant provided “assistance” to the content originator that wasn’t “neutral.” That should be fun.

    3. treated as publisher/speaker. The court pulls the same trick with this prong: because LeadClick faced direct liability for its own misconduct, the court again cites evidence from the prima facie case as disqualifying evidence.
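
    Returning to the ICS discussion in prong 1: since the court leaned so hard on what LeadClick’s tracking service was “for,” it’s worth seeing how little machinery such a service involves. Here’s a minimal sketch, in Python, of an affiliate tracking redirect of the general kind the court described. To be clear, the host, the affiliate IDs, and the merchant URL are hypothetical stand-ins of my own invention, not LeadClick’s actual HitPath implementation:

        # Minimal sketch of an affiliate tracking redirect. All names and URLs
        # are hypothetical; this illustrates the general technique, not
        # LeadClick's actual HitPath system.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        MERCHANT_URL = "https://merchant.example/landing"  # hypothetical merchant target
        CLICK_LOG = []  # a real network would store clicks per-affiliate in a database

        class TrackingRedirect(BaseHTTPRequestHandler):
            def do_GET(self):
                # A click on an affiliate link like /track?aff_id=12345 lands here.
                aff_id = parse_qs(urlparse(self.path).query).get("aff_id", ["unknown"])[0]

                # Record which affiliate referred the visitor (the "keeping track
                # of the business referred" that the court described).
                CLICK_LOG.append(aff_id)

                # Bounce the visitor onward to the merchant. The consumer never
                # sees this hop, which is why the court called it "invisible."
                self.send_response(302)
                self.send_header("Location", MERCHANT_URL)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), TrackingRedirect).serve_forever()

    Even this toy version stores and serves data for multiple parties (the affiliates and the merchant), which is why LeadClick’s ICS argument was hardly frivolous, whatever else one thinks of its business.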

    Further Implications

    As we all know, no business wants to litigate against the FTC in court. Not only do the FTC’s litigation resources dwarf those available even to large defendants, but judges give the FTC extra credit as the voice of consumers. This case highlighted how the Second Circuit bent plenty of legal doctrine to get the FTC its win. Future defendants who want to fight the FTC in federal court, take note. This kind of doctrinal distortion happens far too frequently in FTC cases, so it would be a mistake to treat it as an unlikely-to-repeat accident.

    There is so much unnecessary bad stuff here for Section 230 jurisprudence in the Second Circuit. Plaintiffs can find plenty of mischief in the court’s discussion about what qualifies as “interactive computer services,” “neutral assistance” and “development.” Yuck.

    In a footnote, the court says the analysis would be the same under Connecticut’s UTPA. This suggests that state AGs could similarly establish a prima facie “direct” violation against defendants like LeadClick under their state unfair competition laws without running afoul of Section 230. I expect we’ll see this case cited extensively by state AGs in future enforcement actions.

    Section 230’s year-of-woe keeps going. I’m ready for 2016 to be over. Perhaps the Section 230 pendulum will swing back towards defendants in 2017.

    Republished from Eric Goldman’s Technology & Marketing Law Blog

    Posted on Techdirt - 12 August 2016 @ 01:06pm

    FTC Sues 1-800 Contacts For Restricting Competitors From Using Competitive Keyword Advertising

    This is a crosspost from Professor Eric Goldman’s website.

    For over a decade, I’ve blogged about 1-800 Contacts’ campaign to suppress competitive keyword advertising, including its legislative games (e.g., those times when 1-800 Contacts asked the Utah legislature to ban competitive keyword advertising) and at least 15 lawsuits against competitors costing millions of dollars in legal fees. I’ve also marveled at its duplicity; 1-800 Contacts historically employed the same competitive keyword advertising practices it subsequently sought to suppress.

    Things have been quiet on the 1-800 Contacts front for the past several years, after it suffered a major blow in the 10th Circuit’s Lens.com ruling, but sometimes the machinery of justice keeps turning quietly in the background. This week, the FTC sued 1-800 Contacts for antitrust violations. I believe this is the FTC’s first foray into keyword advertising issues, and it’s left some folks scratching their heads.

    The FTC’s Allegations

    Let’s take a closer look at the FTC’s allegations. (As you know, pleadings by government entities are usually a mixture of truth, half-truth and fiction.) The complaint says 1-800 Contacts has a 50% share of the online retail market for contact lenses. Facing emerging competition from lower-priced entrants, starting in 2004, 1-800 Contacts pursued trademark enforcement against advertisers engaged in competitive keyword advertising. Fourteen advertisers agreed to settlement terms; only Lens.com didn’t give in. (The complaint redacts the names of the settling advertisers, but in a supplement to this blog post, I identify many of them.) The settlement agreements barred the competitors from bidding on “1-800 Contacts” or variants, and 1-800 Contacts reciprocally agreed not to bid on the competitors’ trademarks.

    Thirteen of the agreements also required the competitors to put “1-800 Contacts” on their negative keyword lists. Thus, a search for “1-800 Contacts Cheaper Competitors” (a query strongly implying that the consumer sought competitive alternatives to 1-800 Contacts) would not display these competitors’ ads. The FTC provides a screenshot of that search. In the screenshot, 1-800 Contacts is the only keyword advertiser. However, the top organic result is a link to AllAboutVision.com, which claims to be “an unbiased source of trustworthy information on eye health and vision correction options” and possibly would be a helpful aggregator for consumers. The organic search results also link to some deep pages on 1-800 Contacts’ site, none of which consumers would find responsive to the query. While these search results seem a little funky, the search term “1-800 Contacts Cheaper Competitors” is a VERY long-tail query that few, if any, consumers actually tried. The settlement agreements imposed reciprocal negative keyword obligations on 1-800 Contacts.
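
    For readers who haven’t worked with search advertising: a negative keyword tells the ad platform never to show your ad on queries containing that term. Here’s a toy sketch of the filtering logic in Python, with invented advertiser names and keyword lists; real ad platforms use far more sophisticated matching, but the basic effect of the settlement terms is the same:

        # Toy model of negative-keyword filtering. Advertiser names and
        # keyword lists are invented for illustration.
        def eligible_ads(query, advertisers):
            """Return the advertisers whose ads may show for this query."""
            q = query.lower()
            eligible = []
            for name, (keywords, negatives) in advertisers.items():
                matches = any(kw in q for kw in keywords)     # any positive keyword hit?
                blocked = any(neg in q for neg in negatives)  # any negative keyword hit?
                if matches and not blocked:
                    eligible.append(name)
            return eligible

        advertisers = {
            # name: (positive keywords, negative keywords)
            "1-800 Contacts": (["contacts"], []),
            "RivalLens":      (["contacts"], ["1-800 contacts"]),  # per a settlement
        }

        print(eligible_ads("1-800 Contacts Cheaper Competitors", advertisers))
        # -> ['1-800 Contacts']: the rival is silenced even on a query that
        #    signals a consumer hunting for alternatives.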

    The complaint alleges that “1-800 Contacts has aggressively policed the Bidding Agreements, complaining to competitors when the company has suspected a violation, threatening further litigation, and demanding compliance.”

    1-800 Contacts’ campaign to restrict competitive keyword advertising could potentially hurt three different marketplace players: (1) the competitors who are hamstrung in their efforts to reach interested consumers, (2) consumers who suffer from a less competitive market, and (3) search engines whose ad auctions are rendered less efficient (and less profitable) when interested bidders choose not to participate. The complaint recaps some of the harms allegedly caused by 1-800 Contacts’ conduct, including:

    • distorting the price-setting mechanisms of search engine ad auctions
    • degrading the quality of search results pages by keeping them from displaying the most relevant ads to consumers
    • preventing truthful non-misleading information from reaching consumers
    • suppressing price and service competition among online contact lenses retailers, which causes “at least some consumers to pay higher prices for contact lenses than they would pay absent the agreements, acts, and practices of 1-800 Contacts”
    • increasing consumer search costs to purchase contact lenses online

    In other words, the FTC sees competitive keyword advertising as contributing to efficient search advertising auctions and, more importantly, improving consumers’ choices and fostering vendor competition on price and quality. Viewing competitive keyword advertising as pro-competitive isn’t novel, but it’s satisfying to see the FTC embrace the view so enthusiastically.
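
    The auction-distortion point is easy to demonstrate with numbers. Search ad auctions are roughly second-price: the winner pays approximately the runner-up’s bid. Here’s a stylized sketch with bids I invented for illustration; real ad auctions also weight bids by ad quality scores, which this toy model ignores:

        # Stylized second-price auction (bids invented for illustration).
        def second_price(bids):
            """Return (winner, price paid) for a sealed-bid second-price auction."""
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winner = ranked[0][0]
            # Winner pays the runner-up's bid; assume a zero reserve price.
            price = ranked[1][1] if len(ranked) > 1 else 0.0
            return winner, price

        with_rival    = {"1-800 Contacts": 2.00, "RivalLens": 1.50, "OtherAd": 0.40}
        without_rival = {"1-800 Contacts": 2.00, "OtherAd": 0.40}

        print(second_price(with_rival))     # ('1-800 Contacts', 1.5)
        print(second_price(without_rival))  # ('1-800 Contacts', 0.4)

    With the rival bidding, 1-800 Contacts wins the slot at $1.50 per click; with the rival contractually absent, it wins the same slot at $0.40. The gap is revenue the search engine never collects and, on the FTC’s theory, part of the cushion supporting 1-800 Contacts’ above-market prices.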

    Finally, the complaint enumerates the FTC’s wish list of remedies, including:

    • banning 1-800 Contacts from entering into contracts with competitors that restrict participation in search engine ad auctions or suppress the dissemination of truthful non-misleading information
    • banning 1-800 Contacts from “filing or threatening to file a lawsuit against any contact lens retailer alleging trademark infringement, deceptive advertising, or unfair competition that is based on the use of 1-800 Contacts’ trademarks in a search advertising auction. Provided, however, that Respondent shall not be barred from filing or threatening to file a lawsuit challenging any advertising copy where Respondent has a good faith belief that such advertising copy gives rise to a claim of trademark infringement, deceptive advertising, or unfair competition.”

    While the FTC’s focus on keyword advertising is new, its interest in advertising restrictions is not. For example, our Advertising and Marketing Law casebook covers the FTC v. Polygram case from 2003, in which the FTC successfully challenged two competitors’ agreement to restrict advertising of old stock in order to prop up a new product release.

    Questions Raised

    Why Is the FTC Acting Now? The FTC says 1-800 Contacts started its enforcement-and-settlement campaign in 2004. Why is the FTC acting now, a dozen years later?

    Normally, an action like this is instigated by a competitor’s complaint, and it would make sense if Lens.com tipped off the FTC about its situation. (In addition to the trademark battle, Lens.com had a parallel antitrust lawsuit against 1-800 Contacts going back years.) However, I assume Lens.com would have raised this issue with the FTC a long time ago; after all, Lens.com filed its antitrust lawsuit in 2011. Perhaps the FTC waited to see how that lawsuit would play out before deciding whether to intervene. The district court dismissed Lens.com’s antitrust complaint in 2014; I see a notice of appeal to the 10th Circuit, but it’s murky what happened after that.

    Perhaps the FTC is acting now because competitive keyword advertising law has cleared up a lot over the years. As I’ve mentioned many times, lawsuits over merely buying competitors’ trademarks as keywords haven’t succeeded in court for about a half-decade, and even lawsuits over the inclusion of a competitor’s trademark in the ad copy rarely make much progress in court any more. While I doubt the FTC could have confidently taken a strong stand on the legality of competitive keyword advertising in the aftermath of the Second Circuit’s 2009 Rescuecom opinion, the jurisprudential dust has settled a lot since then.

    It’s also possible that the FTC finally appreciated how restrictions on competitive keyword advertising distort ad auctions. Auctions work really well to set market prices when all of the relevant bidders participate. I could see how the FTC Bureau of Competition folks, steeped in economics doctrine, had their interest piqued when they first learned about agreements among potential auction bidders not to participate in the ad auction. Perhaps it took a while for the issue to find its way to the right folks.

    Will 1-800 Contacts Accept a Big Fight Against the FTC? When the FTC investigates a complaint like this and thinks there’s a problem, it inevitably discusses settlement options with the investigated company before suing. Therefore, it seems very likely that 1-800 Contacts already refused a settlement offer from the FTC. I can understand why 1-800 Contacts might do so. Presumably, 1-800 Contacts believes that its actions over the past dozen years are justified, and it’s willing to throw more money at defending that proposition (and, as discussed below, at trying to maintain its above-market prices).

    Still, fighting the FTC is a daunting challenge for any company. The FTC always says it’s a small agency, but it’s still the freaking U.S. government and has more resources than any company it targets. Further, it has an exceptionally strong batting average in litigation, and judges view the FTC as the voice of the consumer, making it a more sympathetic litigant than a competitor trying to defend its profitable investments in competitive keyword advertising. Furthermore, the FTC has picked a friendly litigation venue: it steered this case into its in-house adjudication process, so the matter will be heard by an FTC administrative law judge, with appeals going to the FTC Commissioners, before the case can reach federal court. By keeping the litigation within the FTC, it will take years and lots of money before 1-800 Contacts can tell its story to an adjudicator not employed by the FTC.

    I respect companies that have the fortitude and wealth to stand up to the FTC, but I often question their wisdom and logic.

    Is Competitive Keyword Advertising Legitimate? The FTC’s complaint assumes, but doesn’t prove, that competitive keyword advertising is a legally legitimate practice. For example, the FTC alleges (para. 18) that “1-800 Contacts claimed, inaccurately, that the mere fact that a rival’s advertisement appeared on the results page in response to a query containing a 1-800 Contacts trademark constituted infringement.” I’m sure 1-800 Contacts (and all trademark owners) would love to see the FTC’s citations for the “inaccurately” comment. Later (para. 32), the FTC says, again without any citations, that agreements not to engage in competitive keyword advertising “exceed the scope of any property right that 1-800 Contacts may have in its trademarks, and they are not reasonably necessary to achieve any procompetitive benefit.”

    Now, as you know, I emphatically agree with this proposition. I’ve argued for over a decade that competitive keyword advertising is pro-competitive and should be legal, and I’ve chronicled the systematic failure of trademark owners’ anti-keyword advertising lawsuits over the past half-decade. However, I acknowledge that this issue is still being hotly contested in the courts. Indeed, just last week I blogged about a ruling sending a competitive keyword advertising lawsuit (with the trademark used in the ad copy) to a jury because the defendant couldn’t convince the judge that it was entitled to summary judgment. So while I wish the state of competitive keyword advertising law were definitively resolved, the FTC’s implied factual claim is aggressive.

    For those of us a little tired of the decade-long competitive keyword advertising battles, the FTC’s move offers some tantalizing prospects. Because the FTC stacked the litigation deck in its favor, we could get some clean and powerful judicial pronouncements about the legitimacy and pro-competitive nature of competitive keyword advertising. Combined with developments like the Texas ethics opinion greenlighting competitive keyword advertising by lawyers, this case could help push the pendulum so decisively in favor of competitive keyword advertising that it permanently ends the debates.

    What About Vertical Restrictions on Competitive Keyword Advertising? This case deals with horizontal restrictions between competitors. While those are relatively rare, it’s quite common (at least in certain industries) for trademark owners to restrict keyword ad bidding by vertical channel partners such as affiliates and distributors. What implications does this lawsuit have for those vertical restrictions?

    Usually, distributors can use manufacturers’ trademarks for the goods or services they resell without a trademark license. In contrast, affiliates usually need a trademark license, in which case the trademark owner should be able to put conditions on its trademark license to affiliates. However, trying to impose those same conditions on distributors could be a legal overreach because they didn’t need trademark permission at all.

    The FTC might be signaling that it’s a problem to restrict keyword ads by channel partners who don’t need trademark permission in the first place. However, manufacturers have substantial power to control intra-channel conflicts (see, e.g., the modern deference to resale price maintenance), and restrictions on keyword advertising help the trademark owner manage the trademark and prevent channel partners from driving up the owner’s cost of doing so. So vertical restrictions on keyword advertising bidding may have better competitive justifications than horizontal restrictions. My guess is that the FTC didn’t intend to implicate vertical restrictions; but it probably wouldn’t categorically greenlight them either because some vertical restrictions could indeed have anti-competitive effects.

    What Does This Mean For Trademark Owners? Trademark owners, PAY ATTENTION. Effectively, the FTC is saying that 1-800 Contacts committed antitrust violations by making overreaching trademark demands. I can’t recall the FTC ever before implying that trademark overclaiming could create antitrust problems. Among academics, we’ve frequently discussed how trademark overclaims can hurt competition, so many academics probably think it’s about time the FTC moved in this direction. However, if this lawsuit signals that the FTC plans to pay more attention to trademark owner overreaching, that would be a seismic event for the trademark owner community.

    Two related notes. First, as I’ve said before, I think 1-800 Contacts is an exceptionally weak trademark because it’s more a phone number than a source identifier. Just as trademark law won’t protect [noun].[tld] domain names (when the domain name relates to the noun), “800 [noun]” should also be generic. It creates a lot of friction when we weaponize such highly descriptive terms, and it makes sense for the FTC to pay particular attention to those weapons deployments.

    Second, I’ve observed before that owners of weak descriptive marks tend to be the most litigious and make the most aggressive interpretations of trademark law. They often use litigation to try to overcome the intrinsic shortcomings of the mark; and they are often paranoid about the so-called “policing duty” that makes trademark owners think they will lose the mark if they don’t shut down other users of the term. However, the trademark policing “obligation” is often overstated; and the TTAB has expressly said there’s no policing obligation against competitive keyword advertising.