Democrats Need To Get Their Heads Out Of The Sand: The Only Reason The GOP Is Supporting Their Antitrust Bills Is To Force Companies To Host Disinfo
from the this-matters dept
We’ve pointed this out a few times over the past year. The main antitrust bills floating around both the House and the Senate only have Republican support because of a Trojan horse hidden inside them: provisions that would make it much more difficult for the biggest websites to moderate Republican culture war propaganda campaigns. The two major bills, the American Innovation and Choice Online Act (AICOA) and the Open App Markets Act, both have clauses against anti-competitive “preferencing.”
However, as we keep pointing out, such a clause would allow Parler to argue that Amazon, Google, and Apple treated it differently than, say, Twitter, when those three companies chose not to do business with Parler. Parler made some of these very arguments in its lawsuit against Amazon, and while that lawsuit flopped, if these bills became law, the issue would be reopened and companies like Parler could sue again.
A number of Democratic supporters of these bills, and various civil society organizations, including many that we’ve worked with and usually support, keep trying to brush this issue aside, insisting that it won’t really matter. Some are even willing to align with outright bigots who are only supporting these bills for this very reason, because these groups think getting something passed on antitrust is the bigger priority.
However, the Washington Post has a great op-ed from two academics who understand this issue better than just about anyone else: Jane Bambauer from the University of Arizona and Anupam Chander from Georgetown. I highly encourage everyone supporting these bills to read their analysis of how the bills could create a real mess for disinformation online.
They too point to the Parler example, but they also, thankfully, take on the main argument I’ve heard back from friends supporting these bills: that courts would throw out such lawsuits. This has always struck me as an odd take, since these same supporters know how damaging even frivolous lawsuits can be, and how much of a chilling effect the mere threat of extensive litigation can create. And as Bambauer and Chander make clear, the chilling effects here could be significant.
But the bills would hand the makers of services and apps that give free rein to hate speech and disinformation a powerful weapon to use in court: If Apple or Google kicked them out of app stores, or downgraded them in search results, these companies could argue that the decisions weren’t about content moderation at all, but rather market domination.
At the least, such claims would have to be litigated — a costly proposition, with no guarantee of victory. Alternatively, Apple, Google and other companies might become less vigilant about screening out hate speech and disinformation. You can be wary of Big Tech’s market power and still think the implication of these bills for the speech that is spread online is extremely bad.
And as the article makes clear, the idea that these cases would quickly be thrown out is hardly a given, especially after seeing how courts around the country are willing to view issues around content moderation through partisan lenses.
Suppose Truth Social — President Donald Trump’s Twitter rival — becomes a hotbed of election disinformation, vaccine misinformation and racist speech, and Apple decides that it is violating its App Store guidelines, which require app-makers to filter objectionable content. Would Truth Social or an ideological ally sue, arguing that Apple was preferencing its own News app, or its business partner Twitter’s app? Some judges, and possibly a Supreme Court majority, would be sympathetic to such claims. After all, this would represent a difference in treatment between similar apps (though Apple could of course argue that all apps that permit disinformation are treated alike). Sen. Ted Cruz (R-Tex.) is among those who have noticed that these bills could lead to results similar to those of the recently eviscerated Texas content-moderation law. The bill targeting app stores would “make some positive improvement on the problem of censorship,” he said during markup for the bill.
Also, the bills’ authors could make it clear that these laws can’t be used to bring lawsuits over content moderation choices, but they have deliberately chosen not to (because they know they’d lose the Republican support if they did).
The Klobuchar-Grassley bill does allow companies to defend against lawsuits by demonstrating that their actions were taken to protect safety, user privacy or the security of the platform, but this defense would likely prove inadequate. Apple or Google would carry the burden of proving that its actions were “reasonably necessary” to protect those specific interests. And even showing that the removed app or speech was sexist, racist, antisemitic or Islamophobic would not be enough. The other bill’s safeguards against abuse are even weaker.
The article also notes that while some supporters of the bills insist that Section 230 would prevent these bills from being abused to punish moderation choices, that also seems unlikely, for two reasons. First, under the Ninth Circuit’s decision in Enigma v. Malwarebytes, plaintiffs can get around Section 230 by claiming that the moderation decision was anticompetitive, rather than a legitimate content moderation call, and then 230 gets taken off the table. Second, that argument depends on no further changes being made to Section 230 itself, or to how the courts interpret it, both of which seem like dubious propositions (unfortunately).
But, really, the 5th Circuit’s decision in that very Texas content-moderation case highlights that it’s not at all likely that courts would toss out these cases. And, importantly, given the size of the penalties under at least one of the bills, it would be risky for companies not to act accordingly.
Note that if the Internet platform loses, the Klobuchar-Grassley bill would subject it to a penalty of up to 15 percent of its U.S. revenue (not just profits), a risk that few companies would be willing to take.
Perhaps some companies are willing to risk 15% of their revenue on judges recognizing bad-faith litigation for what it is, but that’s a huge bet.
And, again, the article notes that the bills’ authors could fix this by making clear that these scenarios fall outside the bills’ scope, but it appears Democratic senators have deliberately chosen not to, because they know they’d lose GOP support for the bills.
The Klobuchar-Grassley bill authors recognize that it could affect moderating activity by platforms. The bill, therefore, explicitly excludes from its definition of unlawful activity any reasonable actions the platforms take to protect the copyrights and trademarks of others. Unfortunately, actions motivated by corporate responsibility and designed to protect against hate speech, harassment or misinformation don’t receive similar protection.
What’s most frustrating to me in all of this is how supporters of these bills refuse to actually engage on this point beyond insisting that the courts will dump these lawsuits. That’s far from certain. And even if it were true, these are the same groups that often point out the chilling effects of even frivolous, vexatious litigation.
If those groups, and the politicians pushing these bills, really believe in the underlying concepts in the bills, there’s a solution: amend the bills to make clear they can’t be used in these kinds of content moderation situations. If they’re unwilling to do that, it just feels like they’re carrying water for disinformation peddlers and trollish bigots who are eagerly looking forward to using these laws to litigate.