from the fix-the-damn-bill dept
Over the last few weeks, we’ve written quite a bit about the American Innovation and Choice Online Act (AICOA), which has become the central push by a bunch of folks in Congress to create a special antitrust bill for “big tech.” There are some good ideas in the bill, but, as we’ve been highlighting, a major problem is that the language in the bill is such that it could be abused by politically motivated politicians and law enforcement to go after perfectly reasonable content moderation decisions.
Indeed, Republicans have made it clear that they very much believe this bill will enable them to go after tech companies over content moderation decisions they dislike. Most recently, they’ve said that if the bill is clarified to say it should not impact content moderation, they will walk away from supporting it. That should, at the very least, give pause to everyone who keeps insisting that the bill can’t be abused to go after content moderation decisions.
We recently wrote about four Senators, led by Brian Schatz (with Ron Wyden, Tammy Baldwin, and Ben Ray Lujan), suggesting a very, very slight amendment to the bill, which would just make it explicit that the law shouldn’t be read to impact regular content moderation decisions.
In response to that Schatz letter, Rep. David Cicilline (who is spearheading the House version of the bill, while Senator Amy Klobuchar is handling the Senate side) sent back a letter insisting that Section 230 and the 1st Amendment would already prevent AICOA from being abused this way. Here’s a snippet of his letter:
Moreover, even if a covered platform’s discriminatory application of its terms of service materially harmed competition, the Act preserves platforms’ content-moderation-related defenses under current law. Section 5 of S. 2992 states expressly that “[n]othing in this Act may be construed to limit . . . the application of any law.”

One such law is Section 230(c) of the Communications Decency Act. Under that provision, social-media platforms may not “be treated as the publisher or speaker of any information provided by another information content provider.” They also may not be held civilly liable on account of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Accordingly, as with other liability statutes enacted since the passage of Section 230, Section 230 provides “an affirmative defense to liability under [the Act] for . . . the narrow set of defendants and conduct to which Section 230 applies.” Another still applicable law is the First Amendment to the U.S. Constitution, which the Act does not—and . . .
He then goes on in more detail as to why he believes the bill really cannot be abused. And while he does note that he remains “committed to doing what is necessary to strengthen and improve the bill” and that he is happy to keep working with these Senators on it, the very clear message from his letter is that he’s pretty sure the bill is just fine as is, and that Section 230 and the 1st Amendment already protect against abuse.
Finally, your proposed language for the Act—although well intentioned—is already reflected in the base text of the bill. As detailed above, among other things, section 5 of S. 2992 preserves the continued applicability of current laws, including 47 U.S.C. § 230(c), that protect social-media platforms from liability for good-faith content moderation. Although I agree that legislation is necessary to address concerns with misinformation and content-moderation practices by dominant social-media platforms, I have consistently said that this legislation is not the avenue for doing so. As such, this legislation is narrowly tailored to address specific anticompetitive practices by dominant technology firms online. And as the Department of Justice has noted, it is a complement to and clarification of the antitrust laws as they apply to digital markets. As such, it does not supersede other laws.
Except… Cicilline is wrong. Very wrong. We at the Copia Institute this week signed onto a letter from TechFreedom and Free Press (two organizations that rarely agree with each other on policy issues) along with some expert academics explaining why.
The letter explains why Cicilline’s faith in Section 230 and the 1st Amendment is misplaced. It walks through, step by step, the ways in which motivated state AGs (or even the DOJ) might get around those protections by claiming that moderation decisions were not actually content-based decisions, but rather anti-competitive business conduct.
We don’t have to look far to see how that could play out: the Malwarebytes case was an example of this in action. In that case, a company was able to get around Section 230 by claiming that a moderation decision (calling an app malware) was actually made for anti-competitive reasons. But with AICOA, we could get that on steroids. As the letter notes:
There is a substantial risk that courts will extend the Malwarebytes reasoning to exclude AICOA claims from Section 230 protection—including politically motivated claims aimed at content moderation. Specifically, courts may try to harmonize the two statutes—i.e., “strive to give effect to both”—by accepting some showing of anticompetitive results as sufficient to circumvent Section 230(c)(2)(A) in non-discrimination claims.
Anticompetitive animus is not required by the plain text of AICOA § 3(a)(3). Allowing only AICOA claims that allege (and, ultimately, prove) anticompetitive motivation to bypass Section 230’s protection would infer an intent requirement where Congress chose not to include one. While courts do sometimes infer intent requirements, they may reasonably conclude that doing so here would effectively read Section 3(a)(3) out of the statute. How could a platform with no direct stake in the market where competitive harm is alleged ever have an anticompetitive intent? Thus, how could any plaintiff ever bring a Section 3(a)(3) claim regarding “harm to competition” between downstream business users that would survive Section 230(c)(2)(A)? For Rep. Cicilline’s presumptions about Section 230 to be correct, courts would have to effectively render Section 3(a)(3) a nullity by holding that only claims of self-preferencing—but not discrimination between other business users—are actionable. This is an implausible reading that clearly contradicts what the present draft of AICOA says.
The Malwarebytes court relied heavily on Section 230’s “history and purpose” as evincing Congressional intent to “protect competition.” Here, there is explicit statutory language and legislative history from which a court could conclude that AICOA’s purpose is to prohibit anticompetitive results, regardless of motive—and thus to carve those claims out from Section 230. This result would apparently be statutorily required if another bill co-sponsored by Sen. Klobuchar becomes law: The SAFE TECH Act (S. 299) would amend Section 230 to exempt “any action brought under Federal or State antitrust law.”
There’s a lot more in the letter, but the point is clear: the idea that Section 230 will magically stop abuse of this bill is contradicted both by the way the bill is currently drafted and by actual cases on the books.
Filed Under: 1st amendment, aicoa, amy klobuchar, ben ray lujan, brian schatz, content moderation, david cicilline, ron wyden, section 230, tammy baldwin