On the one hand, content moderation at the scale at which modern social media companies operate is an impossible nightmare. Companies are always going to lack the staff and resources to do it well (raising questions about the dangers of automation at scale), and they’re always going to screw things up for reasons that have been well discussed.
At the same time, there’s Facebook. A company whose executive leadership team often compounds these challenges by making the worst and most idiotic decisions possible at any particular moment.
Again, we’re not just talking about blocking websites that actually mail abortion pills. Reporters at Vice’s Motherboard found that even publicly acknowledging that abortion pills exist and could be mailed resulted in an account ban:
On Friday, a Motherboard reporter attempted to post the phrase “abortion pills can be mailed” on Facebook using a burner account. The post was flagged within seconds as violating the site’s community standards, specifically the rules against buying, selling, or exchanging medical or non-medical drugs. The reporter was given the option to “disagree” with the decision or “agree” with it. After they chose “disagree,” the post was removed.
Other reporters have confirmed the changes. Facebook refuses to reverse the bans or even respond to reporter inquiries about the policy. Those are conscious choices, not content-moderation-at-scale problems.
The company’s systems claim that even mentioning that these pills exist violates its community standards related to “restricted goods and services.” Yet when other reporters made similar posts promising to mail marijuana or guns, there were no restrictions:
The Facebook account was immediately put on a “warning” status for the post, which Facebook said violated its standards on “guns, animals and other regulated goods.”
Yet, when the AP reporter made the same exact post but swapped out the words “abortion pills” for “a gun,” the post remained untouched. A post with the same exact offer to mail “weed” was also left up and not considered a violation. Marijuana is illegal under federal law and it is illegal to send it through the mail.
Activist groups like Fight For the Future were decidedly unimpressed, saying the policy foretold uglier things to come as the far right continues to push its court-enabled advantage:
Facebook’s censorship of critical reproductive healthcare information and advocacy should be a massive, code-red warning to Democrats who want to revise or repeal Section 230. In a post-Roe environment, litigation-fearing platforms will cover their hides by tearing down online access to abortion healthcare and support.
Facebook, no stranger to sucking up to and amplifying the authoritarian right, has also tried to restrict employees from talking about abortion bans at work, triggering a backlash. The company is also finding itself under fire after it classified one prominent pro-choice activist group as a terrorist organization.
Countless tech companies, including Facebook, have failed to even issue basic platitudes on securing women’s location, app usage, or browsing data from state officials (or vigilantes) looking to punish women in the wake of Roe’s reversal.
Again, this initial lack of any meaningful backbone whatsoever in the face of one of the most wide-reaching, transformative, legally dubious, and dangerous political projects in a generation doesn’t exactly instill confidence that Facebook will make sound decisions as U.S. authoritarianism accelerates and a radical court steadily chips away at democratic norms and long-established law.
But within EU policy circles, it has been entirely taboo for the past four years to even suggest that maybe the EU made a mistake with the GDPR. Any time we’ve suggested it, we’ve received howls of indignation from “data protection” folks in the EU, who insist that we’re wrong about the GDPR.
However, sooner or later someone had to realize that the emperor had no clothes. And in a surprising move, the first EU official apparently willing to do so is Wojciech Wiewiórowski, the EU’s Data Protection Supervisor.
So far, officials at the EU level have put up a dogged defense of what has become one of their best-known rulebooks, including by publicly pushing back against calls to punish Ireland for what activists say is a failure to bring Big Tech’s data-hungry practices to heel.
Now, one of the European Union’s key voices on data protection regulation is breaking the Brussels taboo of questioning the bloc’s flagship law’s performance so far.
“I think there are parts of the GDPR that definitely have to be adjusted to the future reality,” European Data Protection Supervisor Wojciech Wiewiórowski told POLITICO in an interview earlier this month.
Wiewiórowski, who leads the EU’s in-house privacy regulator, is gathering data protection decision-makers in Brussels Thursday-Friday to open the debate about the GDPR’s failings and lay the groundwork for an inevitable re-evaluation of the law when the new EU Commission takes office in 2024.
Of course, what’s funny is that when that event actually happened, the complaints were not about how maybe the entire approach of the GDPR was wrong, but that the real problem is that the Irish Data Protection Commission wasn’t willing to fine Google and Facebook enough.
European Data Protection Supervisor Wojciech Wiewiórowski on Friday said there isn’t enough privacy enforcement against tech companies like Meta and Google, hinting at a bigger role for a “pan-European” regulator.
In a speech marking the end of a two-day conference designed to scrutinize the EU’s flagship privacy code, the General Data Protection Regulation or GDPR, Wiewiórowski said enforcers had so far failed to rein in data protection abuses by big companies.
“I also see hopes that certain promises of the GDPR will be better delivered. I myself share views of those who believe we still do not see sufficient enforcement, in particular against Big Tech,” he said.
This is really a “no, it’s the children who are wrong” moment of clarity. The GDPR was sold to the European technocrats as “finally” a way to put Google and Facebook in their place. But, in practice, as multiple studies have shown, the two companies have been mostly just fine, and it’s a bunch of their competitors that have been wiped out by the onerous compliance costs.
Rather than recognizing that maybe the whole concept behind the GDPR is the problem, they’ve decided the problem must be the enforcer in Ireland (where most of the US internet companies have their EU headquarters) so the answer must be to move the enforcement to the EU itself.
Basically, the EU expected the GDPR to be a regular tool for slapping fines on American internet companies, and now that this hasn’t come to pass, the problem must be with the enforcer not doing its job, rather than the structure of the law itself. That means… it’s likely only going to get worse, not better.
In response to the Supreme Court’s recent assault on female bodily autonomy, numerous U.S. corporations have issued statements saying they’ll pay for employee abortion travel. You’re to ignore, apparently, that many of these same companies continue to throw millions of dollars at the politicians responsible for turning the Supreme Court into a dangerous, cruel, legal norm-trampling joke:
Several companies that have announced they will cover travel costs for employees who need an abortion are financially backing a political committee openly devoted to eliminating abortion rights around the country.
With abortion now or soon to be illegal in countless states, there’s newfound concern about the privacy issues we’ve talked about for years, like how user location data, period tracking data, or browsing data can all be used against women seeking abortions and those looking to aid them… by both the state and violent vigilantes (thanks to flimsy U.S. standards on who can buy said data and how it can be used).
Reporters who have tried to ask modern data-hoovering companies if they’ll do a better job securing data to ensure it can’t be used against women, or if they’ll fight efforts from states hunting abortion seekers and aiders in and out of state, have been met with dead silence. Not even rote statements on how the safety of women is important, but dead silence:
Multiple tech companies are saying they'll pay for employees to travel for abortions. (Employees who probably already have resources to do so unlike many Americans.)
I've heard zero about how these companies intend to protect user data from being used to criminalize abortion.
Motherboard asked a long line of companies including Facebook, Amazon, Twitter, TikTok, AT&T, Uber, and Snapchat if they’d hand over user data to law enforcement and not a single one was willing to commit to protecting women’s data:
Motherboard asked if each will provide data in response to requests from law enforcement if the case concerns users seeking or providing abortions, or some other context in which the agency is investigating abortions. Motherboard also asked generally what each company is planning to do to protect user data in a post-Roe America.
None of the companies answered the questions. Representatives from Twitter and Snapchat replied to say they were looking into the request, but they did not provide a statement or other response.
To be fair, company legal departments haven’t finished doing the risk calculations of showing a backbone and upsetting campaign contributors and law enforcement. They’ve also got to weigh the incalculable looming harms awaiting countless women against any potential lost snoopvertising revenues, so there’s that.
As public pressure grows, ham-fisted state enforcement begins, and the dynamics of the Roe repeal become harder for them to ignore, several of these companies may find something vaguely resembling a backbone in time. But the initial lack of any clarity or courage whatsoever in the face of creeping authoritarianism (and a high court gone completely off the rails) doesn’t inspire a whole lot of confidence.
Look: there are very real issues with the state of the internet today, including the amount of power a few companies have. But that doesn’t mean any solution is a good solution. Unfortunately, Senator Amy Klobuchar, whenever given the option, seems to put forth the worst possible plan. It’s mind-boggling.
For a while now, Klobuchar, along with Senator Chuck Grassley, has been pushing the American Innovation and Choice Online Act (AICOA). It’s got a fair bit of support, including from companies and organizations I often agree with on the issues. But this bill has serious problems. Many of us raised concerns about those problems, and even made suggestions on how to fix them. There are ways to create a bill that would target the actual bad practices of internet companies. But this isn’t it.
For a few months, Klobuchar has apparently been working on a new and improved version of the bill, which was revealed last night. Somewhat incredibly, it fixes none of the problems people raised. The major change: making sure it doesn’t apply to telcos and financial companies.
I only wish I were joking. Of course, this is the same Klobuchar who, on a different antitrust bill, made sure to carve out her state’s largest employer, Target. So, we get it. Klobuchar cares more about making the lobbyists and specific industries happy than tackling the real problems of her bill. It’s pathetic.
The main “focus” of the bill is that it’s supposed to bar certain large companies from preferencing their own products. So, for example, Yelp has spent over a decade whining that Google showed people the results of its own Local search, crowding Yelp results out of search. The bill is designed to say that companies can’t do that any more. Of course, there are legitimate concerns that, under this bill, a company pointing people to its own very useful products that people actually like would count as a violation. The quintessential example: when you do a search on a location, Google can point you to Google Maps. But, under this bill, that would be problematic.
discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition;
So, Amazon telling Parler that it violates AWS’ terms of service and booting it off the service? That would not be allowed under this bill. Remember, Parler sued Amazon, and a key part of its initial claims was that Amazon treated Twitter differently than Parler (which wasn’t true at the time, as Twitter had only just signed a deal to use AWS but wasn’t on it yet), supposedly making it anticompetitive for Amazon to remove Parler. The judge in that case was not impressed, but if AICOA becomes law, suddenly we’re going to see a ton of claims like this in response to moderation choices.
Tons of companies already love to claim that moderation decisions are about harm to competition. Hell, for many years, the main company going after Google for antitrust was a really, really spammy tool called Foundem, which was upset that Google had realized that users hated getting sent to Foundem, and downranked the site. Foundem (apparently funded by Microsoft) spent years insisting this was “anticompetitive” rather than “making search work better by not sending users to spammy sites they don’t want.” But, again, under AICOA, arguments like that are going to have to be considered by judges.
Downranking spammy sites and services, or removing sites like Parler that ignore terms of service, now becomes a competition law minefield.
It’s difficult to see how that’s good for anyone, other than the operators of sketchy sites.
As we’ve noted, everyone in the Senate actually knows this. The main reason Klobuchar keeps this nonsense in the bill and doesn’t fix the language is that she knows it’s the only way to keep Republicans on the bill. Republicans see this content moderation trojan horse in the bill, and are thrilled with it. They think it’s going to allow lawsuits to protect Parler, Truth Social, and their other also-ran websites.
Remember, Ted Cruz was so excited about this bill because it would, in his words, “unleash the trial lawyers” to sue Google, Facebook and others for content moderation decisions.
Republicans are supporting this bill because they know it will be used to hit internet companies with all sorts of lawsuits over their moderation decisions.
Of course, it appears that some Republicans worried (or, rather, some telco lobbyists told Republicans) that the law might ALSO result in broadband providers facing the same sorts of nonsense lawsuits. Indeed, part of the original bill could have been read as a kind of net neutrality bill in disguise, because larger ISPs would be barred from similarly “favoring” services over others in a way deemed anticompetitive. And you can bet that some telcos that rely on things like zero rating were worried.
So, that brings us to the major change in this new version of Klobuchar’s bill: she carved out the telcos to make sure the bill doesn’t apply to them. Even though telcos are way more of a competition problem than any online service. Here’s some new language in the bill excluding telcos. It explicitly says that the definition of an “online platform”:
does not include a service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service.
Got it? So, no preferencing. Unless you’re the only broadband player in town. Then, go hog-wild, according to Senator Klobuchar.
Nice work there. That won’t make people cynical at all about the political process.
Of course, once again, this is almost certainly appeasement to Republicans, who, for clear political reasons, want to continue to pretend that telcos are no big deal, and that it’s only the big internet companies who are evil.
It makes no sense at all that Democrats like Amy Klobuchar are playing right into their hands, and giving them everything that they want. But, of course, Klobuchar has decided for political reasons that she wants to be seen as the senator who took on big tech for her next presidential campaign. And, if that means handing Republicans all the tools they need to file a ton of vexatious lawsuits to try to force companies to enable more hate speech and propaganda, so be it.
It’s pure cynical opportunism.
Oh, and also, it looks like financial firms got a little carveout as well. The original bill said the term “online platform” would apply to websites that “facilitates the offering, advertising, sale, purchase, payment, or shipping of products or services…” The new version of the bill covers those that “enables the offering, advertising, sale, purchase, or shipping of products or services…”
So, the same list minus payments.
That’s two giant industries — telcos and banks — that were able to secure their carveouts. But, no effort to fix any of the actual problems of the bill.
With the original bill, NERA Economic Consulting had written up an analysis of companies that would be considered covered platforms in the bill, noting that it would directly hit just six: Google, Apple, Facebook, Amazon, Microsoft, and likely TikTok. However, it also noted that there were 13 other companies that were below the size thresholds in the bill, but close enough that they would likely “take measure to avoid significant risk incumbent upon exceeding the thresholds.” Notably, many of those were broadband companies and financial companies. By my count, the new carveouts in the bill likely cut that list of 13 by at least 7.
It’s possible that some of the others might be excluded as well, though I’m not as sure. Still, it seems pretty clear that these new carveouts were directly because of lobbying by these firms that didn’t want to be included, despite the fact that all are arguably much more problematic, and have much less readily available competition than the companies targeted by the bill.
It’s enough to make one think that senators like Klobuchar don’t really care about doing the right thing at all. They just want to be seen as doing something.
It’s truly amazing how focused people are, in discussions on content moderation, on the claims that “content moderation is censorship” and that it’s primarily “suppressing” political speech. That’s not how it works at all. Honestly, the origins of most content moderation efforts were around two major things: (1) spam prevention and (2) copyright infringement. Over time, that’s expanded, but the major categories of content moderation have little to nothing to do with “viewpoint” discrimination, no matter what Texas seems to think.
An important thing to focus on, whether you’re an average user worried about censorship or someone who recently bought a social network promising to allow almost all legal speech, is what kind of speech Facebook removes. Very little of it is “political,” at least in the sense of “commentary about current events.” Instead, it’s posts related to drugs, guns, self-harm, sex and nudity, spam and fake accounts, and bullying and harassment.
To be sure, some of these categories are deeply enmeshed in politics — terrorism and “dangerous organizations,” for example, or what qualifies as hate speech. But for the most part, this report chronicles stuff that Facebook removes because it’s good for business. Over and over again, social products find that their usage shrinks when even a small percentage of the material they host includes spam, nudity, gore, or people harassing each other.
Usually social companies talk about their rules in terms of what they’re doing “to keep the community safe.” But the more existential purpose is to keep the community returning to the site at all.
I dug into some of the numbers, and if we just look at “content actioned” over the last couple years, it appears that spam is still the major focus. Facebook removed 1.8 billion pieces of content it judged as spam in just the fourth quarter of 2021. It also removed 1.6 billion “fake accounts” (Facebook requires accounts to be associated with real humans). You get to much smaller numbers for other categories, like 31 million pieces of content removed for “sexual activity,” 16.5 million pieces of content dealing with sexual exploitation, and another 2.1 million around “nudity and physical abuse” involving children. 16 million pieces of content dealt with for terrorism (which was way up). 26 million pieces of content were deemed problematic for “violent and graphic content.” 6.8 million were dealt with over “suicide and self-injury.” And 15 million for “hate speech.” Another 9.5 million were around “bullying and harassment.”
Even if you assume that some of the listed categories above were political, the numbers are still dwarfed by the spam and fake accounts issues that are the vast majority of content that Facebook’s moderators need to deal with. Putting this all in graphic form, you realize that content moderation is almost entirely about spam and (for Facebook) dealing with fake accounts. It is not, generally, about being “censors.” (Copyright seems to be part of a separate transparency report).
So, for everyone who insists that there should be no content moderation and that everything should flow, just recognize that most of what you’d be enabling is… spam. Lots and lots and lots of spam. Unfathomable amounts of spam.
To make this more explicit, I put all of the other categories together and made this chart:
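For anyone who wants to recreate that comparison, here’s a minimal sketch, assuming Python with matplotlib, that groups the Q4 2021 “content actioned” figures quoted above into the same two buckets:

```python
# Minimal sketch: chart the Q4 2021 "content actioned" figures quoted
# above, grouping spam + fake accounts against every other category.
import matplotlib.pyplot as plt

# Figures quoted in this post, in millions of pieces of content actioned.
actions_millions = {
    "Spam": 1_800,
    "Fake accounts": 1_600,
    "Sexual activity": 31,
    "Sexual exploitation": 16.5,
    "Child nudity/abuse": 2.1,
    "Terrorism": 16,
    "Violent and graphic content": 26,
    "Suicide and self-injury": 6.8,
    "Hate speech": 15,
    "Bullying and harassment": 9.5,
}

# Collapse the non-spam categories into a single bucket, as in the chart.
spam_bucket = actions_millions["Spam"] + actions_millions["Fake accounts"]
everything_else = sum(
    v for k, v in actions_millions.items() if k not in ("Spam", "Fake accounts")
)

plt.bar(["Spam + fake accounts", "All other categories"],
        [spam_bucket, everything_else])
plt.ylabel("Content actioned, Q4 2021 (millions of pieces)")
plt.title("Most content moderation is spam and fake accounts")
plt.tight_layout()
plt.show()
```

Run the numbers and the spam-and-fake-accounts bucket comes out to roughly 3.4 billion pieces of content, against about 123 million for everything else combined.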
So, yeah. You want content moderation. You need content moderation.
Content moderation is not about censoring political views.
We’ve talked a fair bit about Australia’s ridiculous “News Bargaining Code,” which is literally nothing more than a tax on Facebook and Google for sending traffic to media organizations. Again, the law requires Facebook and Google (and just Facebook and Google) to pay media organizations for sending them web traffic. This is, of course, backwards to any sensible set up. Why should anyone have to pay for sending traffic to a website? Here, the answer appears to be, because Rupert Murdoch wants to get paid and is jealous of the success of Facebook and Google. The best summary of the whole thing comes from the Australian satirical video maker, The Juice Media:
The law does seem widely popular in Australia, mainly because the very same media outlets getting paid under it spent years demonizing Facebook and Google, so many people think the law is good, even though it really just transfers money from some giant companies to other giant companies while making sure that smaller media organizations get screwed in the process.
Anyway, as the law went into effect, Facebook broke out the nuclear option and blocked news sharing in Australia. This move made a lot of people angry, but I still don’t understand why. You don’t create a tax on things you want more of, you create a tax on things you want less of. If you tell a social media company that you are going to make them pay for sending any traffic to news stories, why is it surprising or bad that the company then says “ok, no more links to news stories?” If you don’t like that result, then maybe don’t pass such ridiculous laws?
Eventually, after a slight modification to the law, Facebook caved in anyway, re-enabled links to news stories, and started negotiating to pay money to a small number of big media organizations in Australia.
Anyway, the Wall Street Journal recently had an article revealing some internal Facebook documents from a whistleblower with the fairly provocative title: “Facebook Deliberately Caused Havoc in Australia to Influence New Law, Whistleblowers Say” and I went to read it, expecting some terrible smoking gun, about just how badly Facebook management screwed up — something the company has an uncanny knack for doing. And… I couldn’t really find anything.
Basically, the documents seem to show that when Facebook put in place that Australian news block, it ended up over-blocking sites that shouldn’t have been blocked, including government and charity sites. And, yes, that sounds bad, but again, the way the law was written, if Facebook allowed links to any news, it faced potentially massive fees it would be forced to pay. So, when that’s how the law is structured, the only reasonable response is to over-block. It’s what any lawyer would recommend. It’s what I would recommend too.
The law is written in such a way that if you make a mistake and let news through you’re going to get into trouble, and as we’ve seen from other intermediary liability laws from around the globe, the perfectly natural response to any of this is to over-block. If you don’t, you face massive liability.
So I kept reading the WSJ piece, and waiting for the big reveal of what terrible thing that Facebook had done… and it appears to basically be… exactly what you’d expect. The company rushed to put this in place and over-blocked because they didn’t want to let anything get through that would cause them problems under the law.
Two hours later, the product manager for the team wrote in the internal logs: “Hey everyone—the [proposed Australian law] we are responding to is extremely broad, so guidance from the policy and legal team has been to be overinclusive and refine as we get more information.”
She then outlined the team’s plan to undo the improper blocking, including starting with “the most obvious cases” like government and healthcare pages, and the need to go to outside legal counsel for “more nuanced” cases.
And… yeah? I mean, of course you would want to be overly inclusive. If you weren’t then the whole reason for blocking news links goes away. Again, it seems like the real issue here is the law, and how broadly and stupidly it was written.
The WSJ piece does note that Facebook already had a list of news orgs that it didn’t use, and at first that seemed like it might be significant… until the very next paragraph, when it becomes obvious why that list wasn’t used: it clearly was not comprehensive, and the law applied to a much broader set of news sites:
Instead of using Facebook’s long-established database of existing news publishers, called News Page Index, the newly assembled team developed a crude algorithmic news classifier that ensured more than just news would be caught in the net, according to documents and the people familiar with the matter. “If 60% of [sic] more of a domain’s content shared on Facebook is classified as news, then the entire domain will be considered a news domain,” stated one internal document. The algorithm didn’t distinguish between pages of news producers and pages that shared news.
The Facebook documents in the complaints don’t explain why it didn’t use its News Page Index. A person familiar with the matter said that since news publishers had to opt in to the index, it wouldn’t have necessarily included every publisher.
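To make the over-blocking dynamic concrete, here’s a rough sketch of the kind of crude domain-level threshold rule that internal document describes. This is emphatically not Facebook’s actual code: the keyword-based per-post classifier and the example domain are invented stand-ins. But it shows why a rule like this inevitably sweeps up government and health pages:

```python
# A rough illustration of the crude domain-level threshold rule described
# above. NOT Facebook's actual code: the keyword check is a toy stand-in
# for whatever per-post news classifier the team really used.

NEWS_SHARE_THRESHOLD = 0.60  # the "60% or more" cutoff quoted in the internal document

def looks_like_news(url: str) -> bool:
    """Toy per-post classifier: flag URLs whose path looks news-ish."""
    return any(kw in url.lower() for kw in ("news", "breaking", "politics", "article"))

def is_news_domain(shared_urls: list[str]) -> bool:
    """Domain-level rule: if >= 60% of a domain's shared content classifies
    as news, the entire domain is treated as a news domain."""
    if not shared_urls:
        return False
    news_count = sum(looks_like_news(url) for url in shared_urls)
    return news_count / len(shared_urls) >= NEWS_SHARE_THRESHOLD

# A hypothetical health agency that mostly posts "news updates" gets swept
# in, which is exactly the over-blocking failure mode described here.
agency_posts = [
    "https://health.example.gov.au/news/covid-19-update",
    "https://health.example.gov.au/news/flu-vaccination-drive",
    "https://health.example.gov.au/services/find-a-clinic",
]
print(is_news_domain(agency_posts))  # True: 2 of 3 posts classify as news
```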
Okay, so… no real scandal there. The article does make a big deal of the fact that Facebook blocked government websites, and even implies repeatedly that Facebook deliberately did so to put pressure on the government to change its policy… but then 55 (55!) paragraphs into the article, the reporters admit that the internal documents they saw show that the government website blocking was a legitimate mistake.
On the first day of the action, Facebook executives discussed that the platform had blocked about 17,000 pages as news that shouldn’t have been, of which 2,400 were “high priority” pages such as government agencies and nonprofits that they were working to unblock first, according to emails viewed by the Journal.
I mean, maybe I’m just a small country blogger, but if you’re going to spend all this time hinting at a smoking gun and how the company “deliberately caused havoc… to influence [the] new law” then maybe you shouldn’t wait until paragraph 55 to say, well, actually, the thing we spent many previous paragraphs talking about being really awful, was a legitimate accident that the company spotted almost immediately and moved to fix?
Look, there are many, many reasons why Facebook is a terrible company doing terrible things, and there’s little reason to trust the company to do the right thing when the wrong thing always seems to be the company’s top choice. But this whole article seems bizarrely empty of any actual support of its central premise.
Yes, Facebook broadly blocked news links for a little while until the law was slightly adjusted, but under the law, letting any links to news through would have triggered a provision that effectively would have enabled the government to simply order Facebook to pay huge sums to media companies. It seems like a perfectly logical and reasonable step to respond to such a move by blocking news links, and even more so to block broadly to avoid accidentally tripping the wire to trigger the law. And, yes, the overly broad blocking did include some government websites, but as the article itself admits if you make it all the way down to paragraph 55, the company immediately recognized those issues and set to work on fixing them, though it wanted to proceed cautiously to avoid accidentally triggering the law.
It’s easy to hate on Facebook — again, the company does a lot of terrible things — but this seems like yet another case where the journalists really badly wanted to tell a story of something evil, but the actual documents didn’t support the story… so they just wrote it anyway.
Laura Loomer still thinks she can sue her way back onto Facebook and Twitter. In support of this belief, she brings arguments that already failed in the DC Appeals Court, along with a bill for $124k in legal fees earned by failing to show that having your account reported is some sort of legally actionable conspiracy involving big tech companies.
For this latest failed effort, she has retained the “services” of John Pierce, co-founder of a law firm that saw plenty of lawyers jump ship once it became clear Pierce was willing to turn his litigators into laughingstocks by representing Rudy Giuliani and participating in Tulsi Gabbard’s performative lawsuits.
Laura Loomer has lobbed her latest sueball into the federal court system and her timing could not have been worse. Her lawsuit against Twitter, Facebook, and their founders was filed in the Northern District of California (where most lawsuits against Twitter and Facebook tend to end up) just four days before this same court dismissed Donald Trump’s lawsuit [PDF] alleging his banning by Twitter violated his First Amendment rights.
Trump will get a chance to amend his complaint, but despite all the arguments made in an attempt to bypass both the First Amendment rights of Twitter (as well as its Section 230 immunity), the court’s opinion suggests a rewritten complaint will meet the same demise.
Plaintiffs’ main claim is that defendants have “censor[ed]” plaintiffs’ Twitter accounts in violation of their right to free speech under the First Amendment to the United States Constitution… Plaintiffs are not starting from a position of strength. Twitter is a private company, and “the First Amendment applies only to governmental abridgements of speech, and not to alleged abridgements by private companies.”
Loomer’s lawsuit [PDF] isn’t any better. In fact, it’s probably worse. But it is 133 pages long! And (of course), it claims the banning of her social media accounts is the RICO.
The lawsuit wastes most of its pages saying things that are evidence of nothing. It quotes several news reports about social media moderation efforts, pointing out what’s already been made clear: moderation is imperfect and often causes collateral damage. What the 133 pages fail to show is how sucking at an impossible job amounts to a conspiracy against Loomer in particular, which is what she needs to support her RICO claims.
The lawsuit begins with the stupidest of opening salvos: direct quotes from Florida’s social media law, which was determined to be unconstitutional and blocked by a federal judge last year. It also quotes Justice Clarence Thomas’ idiotic concurrence in which he made some really dumb statements about the First Amendment and Section 230 immunity. To be sure, these are not winning arguments. A blocked law and a concurrence are not exactly the precedent needed to overturn decades of case law to the contrary.
It doesn’t get any better from there. There’s nothing in this lawsuit that supports a conspiracy claim. And what’s in it ranges from direct quotes of news articles to unsourced claims thrown in there just because.
For instance, Loomer’s lawsuit quotes an authoritarian’s George Soros conspiracy theory as though that’s evidence of anything.
On or about May 16, 2020, Hungarian Prime Minister Viktor Orbán and the Hungarian Government called Defendant Facebook’s “oversight board” not some neutral expert body, but a “Soros Oversight Board” intended to placate the billionaire activist because three of its four co-chairs include Catalina Botero Marino, “a board member of the pro-abortion Center for Reproductive Rights, funded by Open Society Foundations” — Soros’s flagship NGO — and Helle Thorning-Schmidt, former Prime Minister of Denmark, who is “unequivocally and vocally anti-Trump” and serves alongside Soros and his son Alexander as trustee of another NGO, and a Columbia University professor Jamal Greene who served as an aide to Senator Kamala Harris (D-CA) during Justice Kavanaugh’s 2018 confirmation Hearings.
Or this claim, which comes with no supporting footnote or citation. Nor does it provide any guesses as to how this information might violate Facebook policy.
Defendant Facebook allows instructions on how to perform back-alley abortions on its platform.
Loomer’s arguments don’t start to coalesce until we’re almost 90 pages into the suit. Even then, there’s nothing to them. According to Loomer, she “relied” on Mark Zuckerberg’s October 2019 statement that he didn’t “think it’s right for tech companies to censor politicians in a democracy.” This statement was delivered five months after Facebook had permanently banned Loomer. Loomer somehow felt this meant she would have no problems with Facebook as long as she presented herself as a “politician in a democracy.”
In reliance upon Defendant Facebook’s promised access to its networks, Plaintiffs Candidate Loomer and Loomer Campaign raised money and committed significant time and effort in preparation for acting on Defendant Facebook’s fraudulent representation of such promised access to its network.
On or about November 11, 2019, Loomer Campaign attempted to set up its official campaign page for Candidate Loomer as a candidate rather than a private citizen.
On November 12, 2019, Defendant Facebook banned the “Laura Loomer for Congress” page, the official campaign page for Candidate Loomer, from its platform, and subsequently deleted all messages and correspondence with the campaign.
On page 94, the RICO predicates begin. At least Loomer and her lawyer have saved the court the trouble of having to ask for these, but there’s still nothing here. The “interference with commerce by threats or violence” is nothing more than noting that Facebook, Google, and Twitter hold a considerable amount of market share and all deploy terms of service that allow them to remove accounts for nearly any imaginable reason. No threats or violence are listed.
The “Interstate and Foreign Transportation in Aid of Racketeering Enterprises” section lists a bunch of content moderation stuff that happened to other people. “Fraud by Wire, Radio, or Television” consists mostly of Loomer reciting the law verbatim before suggesting Facebook and Procter & Gamble “schemed” to deny her use of Facebook or its ad platform. Most of the “fraud” alluded to traces back to Zuckerberg saying Facebook would allow politicians and political candidates to say whatever they wanted before deciding that the platform would actually moderate these entities.
There’s also something in here about providing material support for terrorism (because terrorists use the internet), which has never been a winning argument in court. And there’s some truly hilarious stuff about “Advocating Overthrow of Government” which includes nothing about the use of social media by Trump supporters to coordinate the raid on the US Capitol building, but does contain a whole lot of handwringing about groups like Abolish ICE and other anti-law enforcement groups.
All of this somehow culminates in Loomer demanding [re-reads Prayer for Relief several times] more than $10 billion in damages. To be fair, the ridiculousness of the damage demand is commensurate with the ridiculousness of the lawsuit. It’s litigation word soup that will rally the base but do nothing for Loomer but cost her more money. Whatever’s not covered by the First Amendment will be immunized by Section 230. There’s no RICO here because, well, it’s never RICO. This is stupid, performative bullshit being pushed by a stupid, performative “journalist” and litigated by a stupid, performative lawyer. A dismissal is all but inevitable.
Earlier this year, we covered what appears to be the first of several lawsuits filed on behalf of parents by the Social Media Victims Law Center. In that lawsuit, the mother of an eleven-year-old who committed suicide sued Meta and Snap, claiming Snapchat’s algorithmically enabled feedback loops drove her daughter to her death. The suit recounted the last few years of her daughter’s life, which increasingly revolved around social media use. Despite the mother’s efforts to limit her daughter’s interactions with these services, along with seeking psychiatric intervention, her daughter ultimately took her own life.
Tragedies like this are often followed by a search for closure or justice, but trying to hold social media platforms directly responsible for the actions of users isn’t likely to achieve either of those goals. What isn’t foreclosed by Section 230 immunity is shielded by the First Amendment. Even if the plaintiff somehow manages to get past those arguments, they still have to show how the platform contributed to the user’s death.
Christopher James Dawley, known as CJ to his friends and family, was 14 years old when he signed up for Facebook, Instagram and Snapchat. Like many teenagers, he documented his life on those platforms.
CJ worked as a busboy at Texas Roadhouse in Kenosha, Wisconsin. He loved playing golf, watching “Doctor Who” and was highly sought after by top-tier colleges. “His counselor said he could get a free ride anywhere he wanted to go,” his mother Donna Dawley told CNN Business during a recent interview at the family’s home.
But throughout high school, he developed what his parents felt was an addiction to social media. By his senior year, “he couldn’t stop looking at his phone,” she said. He often stayed up until 3 a.m. on Instagram messaging with others, sometimes swapping nude photos, his mother said. He became sleep deprived and obsessed with his body image.
On January 4, 2015, while his family was taking down their Christmas tree and decorations, CJ retreated into his room. He sent a text message to his best friend – “God’s speed” – and posted an update on his Facebook page: “Who turned out the light?” CJ held a .22-caliber rifle in one hand, his smartphone in the other and fatally shot himself. He was 17. Police found a suicide note written on the envelope of a college acceptance letter. His parents said he never showed outward signs of depression or suicidal ideation.
The wrongful death lawsuit [PDF] (which CNN didn’t include in its report for unknown reasons) presents a bunch of product liability claims, along with references to recent congressional hearings about social media moderation efforts. The biggest problem facing the plaintiffs isn’t Section 230 immunity or First Amendment protections. It’s the fact that these allegations are foreclosed by the statute of limitations. Both wrongful death and product liability suits must be brought within three years. (There is an exemption that extends the product liability statute of limitations but it only applies to latent diseases caused by products or if the manufacturer has explicitly promised the product would last more than 15 years.)
Here’s how the lawsuit hopes to avoid the statute of limitations issues.
Plaintiff did not discover, or in the exercise of reasonable diligence could not have discovered, that CJ’s death by suicide was caused by the Defendant’s unreasonably dangerous products until September or October of 2021.
This refers to the information exposed by Facebook whistleblower Frances Haugen, which provided details on the inner workings of the platform’s algorithms, and how they were skewed to ensure the company made more money even if it meant making the experience worse (and potentially more dangerous) for users.
By claiming they had no idea how much Meta and Snap manipulated users until this date, the plaintiff is apparently hoping the court will consider September 2021 to be the starting point of the injury, rather than the date her son committed suicide, which was more than seven years ago. Whether the court will agree to start the clock more than six years after the tragedy remains to be seen, but the rest of the arguments are similar to those raised in lawsuits brought against social media services by victims of terrorist attacks… and not a single one of those lawsuits has resulted in a win for the plaintiffs.
This lawsuit goes out of its way to ensure it never refers to any content hosted by Snapchat as being a contributing factor, but it does specifically refer to moderation efforts, algorithms, and other newsfeed tweaks that would appear to raise First Amendment and Section 230 questions, even if it’s clear the plaintiff and their reps definitely don’t want those issues raised. Trying to plead around them is unlikely to be successful. Similar cases have been dismissed on Section 230 and First Amendment grounds, and it’s likely this one will face the same fate.
There’s also the problem that, generally speaking, you can’t blame someone’s suicide on a third party. Courts frown upon such things.
This firm’s efforts appear to be in good faith… or at least in better faith than the social media/terrorism lawsuits filed en masse by 1-800-LAW-FIRM and Excolo Law. But that doesn’t mean these better-intentioned efforts are any more likely to succeed.
Feeling the crunch of this economy? Why not leverage government power to create a sustainable revenue stream? That’s the plan in Vietnam, a country not unfamiliar with regular deployments of censorial efforts by the government.
The Vietnamese government keeps the internet — and its citizens — on a short leash. Only so much free expression is allowed, and that “free” expression had better steer clear of criticizing the government. The government literally polices the internet with a 10,000-strong task force that monitors for “wrongful views.” It also leverages social media companies’ built-in tools to silence dissent.
To maintain control over citizens’ speech, the government has demanded foreign platforms maintain a local presence in the form of Vietnam-located data centers. It is also quick to complain when it feels foreign internet services aren’t as responsive to its censorship demands as it would like.
As has been noted multiple times here at Techdirt, moderation at scale is impossible. And every new government demand makes it just that much more impossible. This matters not to the Vietnamese government, which apparently believes it can turn this truism into cashable checks, according to this Reuters exclusive.
Vietnam is preparing new rules requiring social media firms to take down content it deems illegal within 24 hours, three people with direct knowledge of the matter said.
The planned amendments to current law will cement Vietnam, a $1 billion market for Facebook, as one of the world’s most stringent regimes for social media firms and will strengthen the ruling Communist Party’s hand as it cracks down on “anti-state” activity.
To ensure foreign platforms remain solid revenue streams, there will be no grace period granted to those who can’t find and/or eliminate the offending content within 24 hours. The proposed law also makes platforms subject to fines for not removing “illegal livestreams” within three hours.
This law would allow the Vietnamese government to print (foreign) currency. On top of these impossible demands lies another demand that is as vague as it is profitable.
Social media companies have also been told content that harms national security must be taken down immediately, according to two of the people and a third source.
National security is in the eye of the government beholder, which means social media services won’t necessarily know what to take down until they’ve been informed they’re already in violation of the super-vague law. Win-win for cash-strapped autocrats. Lose-lose for citizens unhappy with their representation and foreign companies who have yet to exit the Vietnamese market.
Why is this happening? Well, it looks like further censorship and rent-seeking from the Vietnamese government. It’s not like US companies haven’t done what they can to satiate the censorial regime.
According to data from Vietnam’s communications ministry, during the first quarter of 2022, Facebook complied with 90% of the government’s take-down requests, Alphabet complied with 93% and TikTok complied with 73%.
Not good enough, says a government that has found compliance to be unprofitable. The only solution — at least when you’re looking for sustainable revenue streams — is to create impossible situations that can be turned into fines, fees, threats, and excuses to craft even more legislative impossibilities to mitigate the future loss of income as companies exit the market or refine their algorithms.
This is Vietnam soaking the rich in the most self-serving way possible. It allows the government to dip into platforms’ billions while censoring criticism of the government by its population. Vietnam’s government has never cared what the rest of the world thinks about it, much less how its citizens feel about its overreach. With this proposal, it has the tools to stay funded while deliberately (and lawfully) ignoring criticism.
Here we go again. It’s a plan that almost never works but one that legislators and the special interest groups pushing for it continue to believe will shower them with untold riches from billion dollar tech companies that they blame for the destruction of local content creation.
I mean, they’re not entirely wrong… at least in terms of some uncomfortable facts. Local journalism is dying. Some platforms (looking at you, Facebook) have pushed for adoption of their protocols, which ultimately results in little more than the platform’s consolidation of power.
But large tech companies aren’t killing local journalism. Lots of entities assumed printing news on paper once a day would be all people needed to stay abreast of current news. Once it became apparent people were moving on to other platforms and services, news agencies reacted. By that time, it was too late. Thousands of sources for news replaced “the only news game in town” as well as the assumption that news only needed to be delivered once a day.
Cable news networks broadcasting 24 hours a day started this landslide. The arrival of Google, Facebook, Twitter, and others only cemented the demise of news agencies that believed people would be satisfied with a product that contained an outsized percentage of ads and copy-pasted articles from national news services.
Local news agencies could have opted for a more focused product that engaged directly with readers. Instead, agencies outsourced reader engagement to Facebook and increased their reliance on ads and third-party content generated by national news sources.
In response, many governments, and the vocal special interests they’ve opted to speak for, have decided the news industry’s poor response to this massive shift to internet news sources isn’t its fault. Rather than accept that reality, they’ve decided the road to financial solvency runs through the pockets of tech companies that were never in the news business to begin with.
The Canadian government this week introduced a bill that would force the likes of Google and Facebook to pay Canadian news publishers for using their articles online.
The Online News Act was created to address what Canadian Heritage Minister Pablo Rodriguez described as a crisis in the country’s media sector that has resulted in 451 outlets disappearing between 2008 and 2015. “We want to make sure that news outlets and journalists receive fair compensation for their work. We want to make sure that local independent news thrives in our country,” Rodriguez said in a press statement.
Specifically, the proposed law seeks to ensure journalists and publishers get a fair cut of the revenues Big Tech banks from aggregating, distributing, sharing, or summarizing stories; the exact arrangements have yet to be hammered out.
Let me just “hammer this out” for you. Do you really want to know how this will work, Canadian legislators and news agencies? Let me demonstrate using this imaginary conversation, which is also a fairly recognizable meme:
Canadian gov’t/news agencies: If you want to link to Canadian content, you’ll need to pay us.
US tech companies: Fine, we won’t link to Canadian content.
Gov’t and news agencies: no not like that
This chain of events has occurred repeatedly. And yet, entities like these think it will somehow be different this time.
Tech companies aren’t going to pay for indexing content. Even if you firmly believe companies are morally obligated to pay news sources for sending traffic their way, there’s nothing that actually justifies companies paying to send traffic to others. Even if you firmly believe Google, et al deliberately killed local journalism and desecrated its corpse, simple math shows soaking tech companies won’t return flailing news agencies to solvency, much less to historical levels of profitability predicated on being the only game in town.
The only thing preventing Google, Facebook, etc. from performing a cut-and-run is optics. If these companies determine it’s ultimately more profitable to leak revenue to appease regulators, they will do so. But that determination is largely dependent on how much governments enacting laws like this will demand.
A similar law passed in Australia (one said to directly inspire this Canadian effort) hasn’t resulted in a mass exodus… yet.
When Facebook heard of Australia’s plans, it blocked the ability of users to share any Australian news articles on its social network before agreeing to un-ban the content and enter a peace pact with the government.
Google, meanwhile, said it would pull its search engine out of Australia if forced to pay for news, though it has since invested nearly $1 billion to expand staffing and grow its cloud operations in the country.
Australia’s bill passed and went into effect on March 2, 2021, and neither Facebook nor Google have left Down Under.
This sounds like a battle regulators might win. But look closer at the details. Google remains active in Australia and has actually invested more money (in questionable news purveyors), possibly indicating nothing more than that it believes adhering to these regulations will keep less well-funded companies from cutting into its market share. And Facebook deployed the nuclear option before regulators were forced to address its concerns.
If Canada wants to roll the dice on obligated engagement, it can. But it should probably take a closer look at those demanding to be paid for failing to make the most of increased engagement before deciding the only way forward is to punish other companies for their success.