It turns out that if you fire basically all of the competent trust & safety people at your website, you end up with a site that is neither trustworthy nor safe. We’ve spent months covering ways in which you cannot trust anything from Twitter or Elon Musk, and there have been some indications of real safety problems on the site, but it’s been getting worse lately, with two somewhat terrifying stories that show just how unsafe the site has become, and how risky it is to rely on Twitter for anything.
First: a few weeks after he quit, Yoel Roth, Twitter’s former head of trust & safety, said that “if protected tweets stop working, run.” Basically, when core security features break down, it’s time to get the hell out of there.
Protected tweets do still seem to kinda be working, but a related feature, Twitter’s “Circles,” which lets you tweet to just a smaller audience, broke. Back in February, some people noticed that it was “glitching” in concerning ways, including a few reports that tweets supposedly posted to a Circle were viewable publicly, though there weren’t many details. In early April, however, such reports became widespread, including reports that nude imagery people thought they were sharing privately with a smaller group was publicly viewable.
Twitter said nothing for a while, before finally admitting earlier this month that there was a “security incident” that may have exposed some of those supposed-to-be-private tweets, though it appears to have only sent that admission to some users via email, rather than publicly commenting on it.
The second incident is perhaps a lot more concerning. Last week, some users discovered that Twitter’s search autocomplete was recommending… um… absolutely horrific stuff, including potential child sexual abuse material and animal torture videos. As an NBC report by Ben Collins notes, Twitter used to have tools that stopped search from recommending such awful things, but it looks like someone at Twitter 2.0 just turned off that feature, enabling anyone to get recommended animal torture.
Yoel Roth, Twitter’s former head of trust and safety, told NBC News that he believes the company likely dismantled a series of safeguards meant to stop these kinds of autocomplete problems.
Roth explained that autocompleted search results on Twitter were internally known as “type-ahead search” and that the company had built a system to prevent illegal, illicit and dangerous content from appearing as autocompleting suggestions.
“There is an extensive, well-built and maintained list of things that filtered type-ahead search, and a lot of it was constructed with wildcards and regular expressions,” Roth said.
Roth said there was a several-step process to prevent gore and death videos from appearing in autocompleted search suggestions. The process was a combination of automatic and human moderation, which flagged animal cruelty and violent videos before they began to appear automatically in search results.
“Type-ahead search was really not easy to break. These are longstanding systems with multiple layers of redundancy,” said Roth. “If it just stops working, it almost defies probability.”
In other words, this isn’t something that just “breaks.” It’s something that someone had to go in and actively go through multiple steps to turn off.
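To make the mechanism Roth describes a bit more concrete: a type-ahead filter is, at its simplest, a blocklist of wildcard/regex patterns applied to candidate suggestions before they’re shown. Here’s a minimal, purely illustrative sketch in Python (the patterns and function names are made up; Twitter’s actual system was obviously far more elaborate, with multiple automated and human layers):

```python
import re

# Hypothetical blocklist. In a real system this would be the "extensive,
# well-built and maintained" pattern list Roth describes, not three toy entries.
BLOCKED_PATTERNS = [
    re.compile(r"\banimal\s+(torture|cruelty)\b", re.IGNORECASE),
    re.compile(r"\bgore\b", re.IGNORECASE),
    re.compile(r"\bbeheading\b", re.IGNORECASE),
]

def filter_type_ahead(candidates: list[str]) -> list[str]:
    """Drop any autocomplete candidate that matches a blocked pattern."""
    return [
        s for s in candidates
        if not any(p.search(s) for p in BLOCKED_PATTERNS)
    ]

print(filter_type_ahead(["animal torture videos", "animal shelters near me"]))
# -> ['animal shelters near me']
```

The point being: a filter like this doesn’t silently disappear. Someone has to remove it from the pipeline.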
After news of this started to get attention, Twitter responded by… turning off autocomplete entirely. Which, I guess, is better than leaving up the other version.
But, still, this is why you have a trust & safety team who works through this stuff to keep your site safe. It’s not just content moderation, as there’s a lot more to it than that. But Twitter 2.0 seems to have burned to the ground a ton of institutional knowledge and is just winging it. If that means recommending CSAM and animal torture videos, well, I guess that’s just the kind of site Twitter wants to be.
Sometimes it feels like we need to keep pointing this out, but it’s (1) often forgotten and (2) really, really important. Section 230 doesn’t just protect “big tech.” It also doesn’t just protect “small tech.” It literally protects you and me. Remember, the key part of the law says that no provider or user of an interactive computer service shall be held liable for someone else’s speech:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Professor Janet Monge of the University of Pennsylvania, and a curator of part of the Penn Museum, did not like what this Hyperallergic article said about her, and insisted that it was defamatory. She then sued a whole bunch of people, including the publisher, Hyperallergic, and the two authors of the article, Kinjal Dave and Jake Nussbaum. However, many others were listed as well, including a fellow UPenn faculty member, Dr. Deborah Thomas, who did nothing more than share the article on an email listserv.
The allegations in Dr. Monge’s amended complaint demonstrate that this statement is, in all material respects, substantially true, and thus Hyperallergic, Ms. Dave, and Mr. Nussbaum cannot be held liable.
Other statements are non-defamatory because they’re “pure opinions that convey the subjective belief of the speaker and are based on disclosed facts.”
However, in dealing with the claims against Dr. Thomas, the court was able to use Section 230 to dismiss them even more easily, without having to analyze the content again:
Dr. Monge, by asserting defamation claims against Dr. Thomas, seeks to treat Dr. Thomas as the publisher of the allegedly defamatory articles which Dr. Thomas shared via email. This is precisely the kind of factual scenario where CDA immunity applies. Therefore, Dr. Thomas’s conduct of sharing allegedly defamatory articles via email is immune from liability under the CDA.
Monge tried to get around this by arguing that Thomas “materially contributed” to the defamation by including commentary in the email forward, but the court notes that since she did not contribute any defamatory content, that’s not how this works. You have to imbue the content with its violative nature, and simply summarizing or expressing an opinion about the article in question is not that:
The CDA provides immunity to Dr. Thomas for sharing the allegedly defamatory articles via email and for allegedly suggesting that Dr. Monge mishandled the remains because Dr. Thomas did not materially contribute to the allegedly defamatory articles she forwarded.
As Prof. Goldman notes in his writeup of this case (which he describes as “an easy case” regarding 230), this highlights two key aspects of Section 230:
This is a good example of how Section 230 benefits online users, not just “Big Tech.” Dr. Thomas gets the same legal protection as Google and Facebook, even though she didn’t operate any system at all.
It’s also a reminder of how Section 230 currently protects the promotion of content, in addition to the hosting of it. That aspect remains pending with the US Supreme Court.
These are both important points. In the leadup to the Gonzalez case at the Supreme Court, lots of people kept trying to argue that merely recommending content somehow should not be covered by Section 230, but as this case shows, were that to be the case, it would wipe out 230 in cases like this, where its protections are so important.
I just wrote about Utah’s ridiculously silly plans to sue every social media company for being dangerous to children, in which I pointed out that the actual research doesn’t support the underlying argument at all. But I forgot that a few weeks ago, Seattle’s public school district actually filed just such a lawsuit, suing basically every large social media company for being a “public nuisance.” The 91-page complaint is bad. Seattle taxpayers should be furious that their taxes, which are supposed to be paying for educating their children, are, instead, going to lawyers to file a lawsuit so ridiculous that it’s entirely possible the lawyers get sanctioned.
The lawsuit was filed against a variety of entities and subsidiaries, but basically boils down to suing Meta (over Facebook, Instagram), Google (over YouTube), Snapchat, and TikTok. Most of the actual lawsuit reads like any one of the many, many moral panic articles you read about how “social media is bad for you,” with extremely cherry-picked facts that are not actually supported by the data. Indeed, one might argue that the complaint itself, filed by Seattle Public Schools lawyer Gregory Narver and the local Seattle law firm of Keller Rohrback, is chock full of the very sort of misinformation that they so quickly wish to blame the social media companies for spreading.
First: as we’ve detailed, the actual evidence that social media is harming children basically… does not exist. Over and over again, studies show a near total lack of evidence. Indeed, as recent studies have shown, the vast majority of children get value from social media. There are plenty of moral-panicky pieces from adults freaked out about what “the kids these days” are doing, but little evidence to support any of it. Indeed, the parents often seem to be driven into a moral panic fury by… misinformation they (the adults) encountered on social media.
The school district’s lawsuit reads like one giant aggregation of basically all of these moral panic stories. First, it notes that kids these days use social media a lot. Which, well, duh. But, honestly, when you look at the details, it suggests they’re mostly using it for entertainment, which hearkens back to previous moral panics about every new form of entertainment, from books to TV to movies. And, even then, none of this even looks that bad. The complaint argues that a chart of usage data is “alarming,” but if you had asked kids a couple of decades ago how much TV they watched, I’m guessing the numbers would look a lot like what the chart shows for YouTube and TikTok (and note that others, like Facebook and Instagram, don’t seem to get much use at all according to that chart, but are still being sued).
There’s a whole section claiming to show that “research has confirmed the harmful effects” of social media on youth, but that’s false. It’s literally misinformation. It cherry-picks a few studies, nearly all of which are by a single researcher, and ignores the piles upon piles of research suggesting otherwise. Hell, even the chart the complaint relies on to show the “alarming” addiction to social media comes from Pew Research Center… the organization that just released a massive study about how social media has made life better for teens. Somehow, the Seattle Public Schools forgot to include that one. I wonder why?
Honestly, the best way to think about this lawsuit is that it is the Seattle Public School system publicly admitting that they’re terrible educators. While it’s clear that there are some kids who end up having problems exacerbated by social media, one of the best ways to deal with that is through good education. Teaching kids how to use social media properly, how to be a good digital citizen, how to have better media literacy for things they find on social media… these are all the kinds of things that a good school district builds into its curriculum.
This lawsuit is effectively the Seattle Public School system publicly stating “we’re terrible at our job, we have not prepared your kids for the real world, and therefore, we need to sue the media apps and services they use, because we failed in our job.” It’s not a good look. And, again, if I were a Seattle taxpayer — and especially if I were a Seattle taxpayer with kids in the Seattle public school district — I would be furious.
The complaint repeatedly points out that the various social media platforms have been marketed to kids, which, um, yes? That doesn’t make it against the law. While the lawsuit mentions COPPA, the law designed to protect kids, it’s not making a COPPA claim (which it can’t make anyway). Instead, it’s just a bunch of blind conjectures, leading to a laughably weak “public nuisance” claim.
Pursuant to RCW 7.48.010, an actionable nuisance is defined as, inter alia, “whatever is injurious to health or indecent or offensive to the senses, or an obstruction to the free use of property, so as to essentially interfere with the comfortable enjoyment of the life and property.”

Specifically, a “[n]uisance consists in unlawfully doing an act, or omitting to perform a duty, which act or omission either annoys, injures or endangers the comfort, repose, health or safety of others, offends decency . . . or in any way renders other persons insecure in life, or in the use of property.”

Under Washington law, conduct that substantially and/or unreasonably interferes with the Plaintiff’s use of its property is a nuisance even if it would otherwise be lawful.

Pursuant to RCW 7.48.130, “[a] public nuisance is one which affects equally the rights of an entire community or neighborhood, although the extent of the damage may be unequal.”

Defendants have created a mental health crisis in Seattle Public Schools, injuring the public health and safety in Plaintiff’s community and interfering with the operations, use, and enjoyment of the property of Seattle Public Schools.

Employees and patrons, including students, of Seattle Public Schools have a right to be free from conduct that endangers their health and safety. Yet Defendants have engaged in conduct which endangers or injures the health and safety of the employees and students of Seattle Public Schools by designing, marketing, and operating their respective social media platforms for use by students in Seattle Public Schools and in a manner that substantially interferes with the functions and operations of Seattle Public Schools and impacts the public health, safety, and welfare of the Seattle Public Schools community.
This reads just as any similar moral panic complaint would have read against older technologies. Imagine schools in the 1950s suing television or schools in the 1920s suing radios. Or schools in the 19th century suing book publishers for early pulp novels.
For what it’s worth, the school district also tries (and, frankly, fails) to take on Section 230 head on, claiming that it is “no shield.”
Plaintiff anticipates that Defendants will raise section 230 of the Communications Decency Act, 47 U.S.C. § 230(c)(1), as a shield for their conduct. But section 230 is no shield for Defendants’ own acts in designing, marketing, and operating social media platforms that are harmful to youth.

….

Section 230 does not shield Defendants’ conduct because, among other considerations: (1) Defendants are liable for their own affirmative conduct in recommending and promoting harmful content to youth; (2) Defendants are liable for their own actions designing and marketing their social media platforms in a way that causes harm; (3) Defendants are liable for the content they create that causes harm; and (4) Defendants are liable for distributing, delivering, and/or transmitting material that they know or have reason to know is harmful, unlawful, and/or tortious.
Except that, as we and many others explained in our briefs in the Supreme Court’s Gonzalez case, that’s all nonsense. Each of these theories is still an attempt to hold the companies liable for the speech of their users. None of the actual complaints are about actions by the companies; rather, the district simply doesn’t like the fact that the expression of these sites’ users is (it misleadingly claims) harmful to the kids in its schools.
First, Plaintiff is not alleging Defendants are liable for what third-parties have said on Defendants’ platforms but, rather, for Defendants’ own conduct. As described above, Defendants affirmatively recommend and promote harmful content to youth, such as pro-anorexia and eating disorder content. Recommendation and promotion of damaging material is not a traditional editorial function and seeking to hold Defendants liable for these actions is not seeking to hold them liable as a publisher or speaker of third-party content.
Yes, but recommending and promoting content is 1st Amendment protected speech. They can’t be sued for that. And it’s not really the “recommendation” that the district claims is harmful, but the speech being recommended, which (again) is protected by Section 230.
Second, Plaintiff’s claims arise from Defendants’ status as designers and marketers of dangerous social media platforms that have injured the health, comfort, and repose of its community. The nature of Defendants’ platforms centers around Defendants’ use of algorithms and other design features that encourage users to spend the maximum amount of time on their platforms—not on particular third party content.
One could just as reasonably argue that the harm actually arises from the Seattle Public School system’s apparently total inability to properly prepare the children in their care for modern communications and entertainment systems. This entire lawsuit seems like the school district foisting the blame for their own failings on a convenient scapegoat.
There’s a lot more nonsense in the lawsuit, but hopefully the court quickly recognizes how ridiculous this is and tosses it out. Of course, if the Supreme Court screws up everything with a bad ruling in the Gonzalez case, well, then this lawsuit should give everyone pretty clear warning of what’s to come: a whole slew of utterly vexatious, frivolous lawsuits against internet websites for any perceived “harm.”
The only real takeaways from this lawsuit should be (1) Seattle parents should be furious, (2) the Seattle Public School system seems to be admitting it’s terrible at preparing children for the real world, and (3) Section 230 remains hugely important in protecting websites against these kinds of frivolous SLAPP suits.
So, plenty of Supreme Court watchers and Section 230 experts knew that this term was going to be a big one for Section 230… it’s just that we all expected the main issue to be the NetChoice cases regarding Florida’s and Texas’s social media laws (those cases will likely still get to SCOTUS later in the term). There were also a few other possible Section 230 cases that I thought SCOTUS might take on, but still, the Court surprised me by agreeing to hear two slightly weird Section 230 cases. The cases are Gonzalez v. Google and Twitter v. Taamneh.
There are a bunch of similar cases, many of which were filed by two law firms together, 1-800-LAW-FIRM (really) and Excolo Law. Those two firms have been trying to claim that anyone injured by a terrorist group should be able to sue internet companies because those terrorist groups happened to use those social media sites. Technically, they’re arguing “material support for terrorism,” but the whole concept seems obviously ridiculous. It’s the equivalent of the family of a victim of ISIS suing Toyota after finding out that some ISIS members drove Toyotas.
Anyway, we’ve been writing about a bunch of these cases, including both of the cases at issue here (which were joined at the hip by the 9th Circuit). Most of them get tossed out pretty quickly, as the court recognizes just how disconnected the social media companies are from the underlying harm. But one of the reasons they seem to have filed so many such cases all around the country was to try to set up some kind of circuit split to interest the Supreme Court.
The first case (Gonzalez) dealt with ISIS terrorist attacks in Paris in 2015. The 9th Circuit rejected the claim that Google provided material support to terrorists because ISIS posted some videos to YouTube. To try to get around the obvious 230 issues, Gonzalez argued that YouTube recommended some of those videos via the algorithm, and those recommendations should not be covered by 230. The second case, Taamneh, was… weird. It has a somewhat similar fact pattern, but dealt with the family of someone who was killed by an ISIS attack at a nightclub in Istanbul in 2017.
The 9th Circuit tossed out the Gonzalez case, saying that 230 made the company immune even for recommended content (which is the correct outcome) but allowed the Taamneh case to move forward, for reasons that had nothing to do with Section 230. In Taamneh, the district court initially dismissed the case entirely without even getting to the Section 230 issue by noting that Taamneh didn’t even file a plausible aiding-and-abetting claim. The 9th Circuit disagreed, said that there was enough in the complaint to plead aiding-and-abetting, and sent it back to the district court (which could then, in all likelihood, dismiss under Section 230). Oddly (and unfortunately) some of the judges in that ruling issued concurrences which meandered aimlessly, talking about how Section 230 had gone too far and needed to be trimmed back.
Gonzalez appealed the issue regarding 230 and algorithmic promotion of content, while Twitter appealed the aiding and abetting ruling (noting that every other court to try similar cases found no aiding and abetting).
Either way, the Supreme Court is taking up both cases and… it might get messy. Technically, the question the Supreme Court has been asked to answer in the Gonzalez case is:
Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.
Basically: can we wipe out Section 230’s key liability protections for any content recommended? This would be problematic. The whole point of Section 230 is to put the liability on the proper party: the one actually speaking. Making sites liable for recommendations creates all of the same problems that making them liable for hosting would — specifically, requiring them to take on liability for content they couldn’t possibly thoroughly vet before recommending it. A ruling in favor of Gonzalez would create huge problems for anyone offering search on any website, because a “bad” content recommendation could lead to liability, not for the actual content provider, but for the search engine.
That can’t be the law, because that would make search next to impossible.
For what it’s worth, there were some other dangerously odd parts of the 9th Circuit’s Gonzalez rulings regarding Section 230 that are ripe for problematic future interpretation, but those parts appear not to have been included in the cert petition.
In Taamneh, the question is focused on the aiding and abetting question, but ties into Section 230, because it asks if you can hold a website liable for aiding and abetting if they try to remove terrorist content but a plaintiff argues they could have been more aggressive in weeding out such content. There’s also a second question of whether or not you can hold a website liable for an “act of intentional terrorism” when the actual act of terrorism had nothing whatsoever to do with the website, and was conducted off of the website entirely.
(1) Whether a defendant that provides generic, widely available services to all its numerous users and “regularly” works to detect and prevent terrorists from using those services “knowingly” provided substantial assistance under 18 U.S.C. § 2333 merely because it allegedly could have taken more “meaningful” or “aggressive” action to prevent such use; and (2) whether a defendant whose generic, widely available services were not used in connection with the specific “act of international terrorism” that injured the plaintiff may be liable for aiding and abetting under Section 2333.
These cases should worry everyone, especially if you like things like searching online. My biggest fear, honestly, is that this Supreme Court (as it’s been known to do) tries to split the baby (which, let us remember, kills the baby) and says that Section 230 doesn’t apply to recommended content, but that the websites still win because the things on the website are so far disconnected from the actual terrorist acts.
That really feels like the kind of solution that the Roberts court might like, thinking that it’s super clever when really it’s just dangerously confused. It would open up a huge pandora’s box of problems, leading to all sorts of lawsuits regarding any kind of recommended content, including search, recommendation algorithms, your social media feeds, and more.
A good ruling (if such a thing is possible) would be a clear statement that of course Section 230 protects algorithmically rated content, because Section 230 is about properly putting liability on the creator of the content and not the intermediary. But we know that Justices Thomas and Alito are just itching to destroy 230, so we’re already down two Justices to start.
Of course, given that this court is also likely to take up the NetChoice cases later this term, it is entirely possible that next year the Supreme Court will rule that (1) websites are liable for failing to remove certain content (in these two cases) and (2) websites can be forced to carry all content.
It’ll be a blast figuring out how to make all that work. Though, some of us will probably have to do that figuring out off the internet, since it’s not clear how the internet will actually work at that point.
The various “for the children” moral panic bills about the internet are getting dumber. Over in Minnesota, the legislature has moved forward with a truly stupid bill, which the legislature’s own website says could make the state “a national leader in putting new guardrails on social media platforms.” The bill is pretty simple — it says that any social media platform with more than 1 million account holders (and operating in Minnesota) cannot use an algorithm to recommend content to users under the age of 18.
Prohibitions; social media algorithm. (a) A social media platform with more than 1,000,000 account holders operating in Minnesota is prohibited from using a social media algorithm to target user-created content at an account holder under the age of 18.
(b) The operator of a social media platform is liable to an individual account holder who received user-created content through a social media algorithm while the individual account holder was under the age of 18 if the operator of a social media platform knew or had reason to know that the individual account holder was under the age of 18. A social media operator subject to this paragraph is liable to the account holder for (1) any regular or special damages, (2) a statutory penalty of $1,000 for each violation of this section, and (3) any other penalties available under law.
So, um, why? I mean, I get that for computer-illiterate people the word “algorithm” is scary. And there’s some ridiculous belief among people who don’t know any better that recommendation algorithms are like mind control. But the point of an algorithm is… to recommend content. That is, to make a social media service (or any other kind of service) useful. Without it, you just get an undifferentiated mass of content, and that’s not very useful.
In most cases, algorithms are actually helpful. They point you to the information that actually matters to you and avoid the nonsense that doesn’t. Why, exactly, is that bad?
Also, it seems that under this law, websites would have to create a different kind of service for those under 18 and for those over 18, and carefully track how old those users are, which seems silly. Indeed, it would seem like this bill should raise pretty serious privacy concerns, because now companies are going to have to much more aggressively track age information, meaning they need to be much more intrusive. Age verification is a difficult problem to solve, and with a bill like this, making a mistake (and every website will make mistakes) will be costly.
But, the reality is that the politicians pushing this bill know how ridiculous and silly it is, and how algorithms are actually useful. Want to know how I know? Because the bill has a very, very, very telling exemption:
Exceptions. User-created content that is created by a federal, state, or local government or by a public or private school, college, or university is exempt from this section.
Algorithms recommending content are bad, you see, except if it’s recommending content from us, your loving, well-meaning leaders. For us, keep on recommending our content and only our content.
We’ve been pointing out for a while now that mucking with Section 230 as an attempt to “deal” with how much you hate Facebook is a massive mistake. It’s also exactly what Facebook wants, because as it stands right now, Facebook is actually losing users of its core product, and the company has realized that burdening competitors with regulations — regulations that Facebook can easily handle with its massive bank account — is a great way to stop competition and lock in Facebook’s dominant position.
And yet, for reasons that still make no sense, regulators (and much of the media) seem to believe that Section 230 is the only regulation to tweak to get at Facebook. This is both wrong and shortsighted, but alas, we now have a bunch of House Democrats getting behind a new bill that claims to be narrowly targeted at just removing Section 230 from algorithmically promoted content. The full bill, the “Justice Against Malicious Algorithms Act of 2021,” is poorly targeted, poorly drafted, and shows a near total lack of understanding of how basically anything on the internet works. I believe that it’s well meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities. It’s an embarrassing release from four House members of the Energy & Commerce Committee who should know better (and at least three of the four have done good work in the past on important tech-related bills): Frank Pallone, Mike Doyle, Jan Schakowsky, and Anna Eshoo.
The key part of the bill is that it removes Section 230 for “personalized recommendations.” It would insert the following “exception” into 230.
(f) PERSONALIZED RECOMMENDATION OF INFORMATION PROVIDED BY ANOTHER INFORMATION CONTENT PROVIDER.—

(1) IN GENERAL.—Subsection (c)(1) does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider if—

(A) such provider of such service—

(i) knew or should have known such provider of such service was making a personalized recommendation of such information; or

(ii) recklessly made a personalized recommendation of such information; and

(B) such recommendation materially contributed to a physical or severe emotional injury to any person.
So, let’s start with the basics. I know there’s been a push lately among some — including the whistleblower Frances Haugen — to argue that the real problem with Facebook is “the algorithm” and how it recommends “bad stuff.” The evidence to support this claim is actually incredibly thin, but we’ll leave that aside for now. But at its heart, “the algorithm” is simply a set of recommendations, and recommendations are opinions and opinions are… protected expression under the 1st Amendment.
Carving algorithmic recommendations out of Section 230 cannot change this underlying fact about the 1st Amendment. All it means is that rather than getting a quick dismissal of the lawsuit, you’ll have a long, drawn-out, expensive lawsuit on your hands, before ultimately finding out that, of course, algorithmic recommendations are protected by the 1st Amendment. For much more on the problem of regulating “amplification,” I highly, highly recommend reading Daphne Keller’s essay on the challenges of regulating amplification (or listen to the podcast I did with Daphne about this topic). It’s unfortunately clear that none of the drafters of this bill read Daphne’s piece (or, if they did, they simply ignored it, which is worse). Supporters of this bill will argue that simply removing 230 from amplification/algorithms is a “content neutral” approach. Yet, as Daphne’s paper detailed, that does not get you away from the serious Constitutional problems.
Another way to think about this: this is effectively telling social media companies that they can be sued for their editorial choices of which things to promote. If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal, you might quickly recognize the 1st Amendment problems here. I could easily argue that the NY Times’ constant articles misrepresenting Section 230 subject me to “severe emotional injury.” But of course, any such lawsuit would get tossed out as ridiculous. Does flipping through a magazine and seeing advertisements of products I can’t afford subject me to severe emotional injury? How is that different than looking at Instagram and feeling bad that my life doesn’t seem as cool as some lame influencer?
Furthermore, this focus on “recommendations” is… kinda weird. It ignores all the reasons why recommendations are often quite good. I know that some people have a kneejerk reaction against such recommendations but nearly every recommendation engine I use makes my life much better. Nearly every story I write on Techdirt I find via Twitter recommending tweets to me or Google News recommending stories to me — both based on things I’ve clicked on in the past. And both are (at times surprisingly) good at surfacing stories I would be unlikely to find otherwise, and doing so quickly and efficiently.
Yet, under this plan, all such services would be at significant risk of incredibly expensive litigation over and over and over again. The sensible thing for most companies to do in such a situation is to make sure that only bland, uncontroversial stuff shows up in your feed. This would be a disaster for marginalized communities. Black Lives Matter? That can’t be allowed as it might make people upset. Stories about bigotry, or about civil rights violations? Too “controversial” and might contribute to emotional injury.
The backers of this bill also argue that it is narrowly tailored and won’t destroy the underlying Section 230, but that too is incorrect. As Cathy Gellis just pointed out, removing the procedural benefits of Section 230 takes away all of its benefits. Section 230 helps get you out of these cases much more quickly. But under this bill, everyone will now add a claim that the “recommendation” caused “emotional injury,” and you’ll have to litigate whether or not you’re even covered by Section 230. That means no more procedural benefit of 230.
The bill has a “carve out” for “smaller” companies, but again gets all that wrong. It seems clear that they either did not read, or did not understand, this excellent paper by Eric Goldman and Jess Miers about the important nuances of regulating internet services by size. In this case, the “carve out” is for sites that have 5 million or fewer “unique monthly visitors or users for not fewer than 3 of the preceding 12 months.” Leaving aside the rather important point that there really is no agreed upon notion of what a “unique monthly visitor” actually is (seriously, every stats package will give you different results, and now every site will have incentive to use a stats package that lies and gives you lower results to get beneath the number), that number is horrifically low.
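To see why “unique monthly visitors” is such a slippery threshold, consider that the same traffic log produces different totals depending on what you count as a “visitor.” A toy illustration (made-up log format and numbers, not any real analytics product):

```python
# Made-up visit log: the same month of traffic, counted three different ways.
visits = [
    {"cookie": "c1", "ip": "1.2.3.4", "ua": "Firefox"},   # person A at home
    {"cookie": "c1", "ip": "9.9.9.9", "ua": "Firefox"},   # person A on mobile data
    {"cookie": "c2", "ip": "1.2.3.4", "ua": "Firefox"},   # person B, same household
    {"cookie": None, "ip": "5.6.7.8", "ua": "Safari"},    # cookies blocked
    {"cookie": None, "ip": "5.6.7.8", "ua": "Chrome"},    # same IP, different browser
]

by_cookie = len({v["cookie"] for v in visits if v["cookie"]})   # 2
by_ip     = len({v["ip"] for v in visits})                      # 3
by_ip_ua  = len({(v["ip"], v["ua"]) for v in visits})           # 4

print(by_cookie, by_ip, by_ip_ua)  # three different "unique visitor" counts
```

Scale that ambiguity up to millions of visits and it’s easy to see how two perfectly reasonable analytics setups could land a site on opposite sides of the bill’s 5 million cutoff.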
Earlier this year, I suggested a test suite of websites that any internet regulation bill should be run against, highlighting that bills like these impact way more than Facebook and Google. And lots and lots of the sites I mention get way beyond 5 million monthly views.
So under this bill, a company like Yelp would face real risk in recommending restaurants to you. If you got food poisoning, that would be an injury you could now sue Yelp over. Did Netflix recommend a movie to you that made you sad? Emotional injury!
As Berin Szoka notes in a Twitter thread about the bill, this bill from Democrats, actually gives Republican critics of 230 exactly what they wanted: a tool to launch a million “SLAM” suits — Strategic Lawsuits Against Moderation. And, as such, he notes that this bill would massively help those who use the internet to spread baseless conspiracy theories, because THEY WOULD NOW GET TO SUE WEBSITES for their moderation choices. This is just one example of how badly the drafters of the bill misunderstand Section 230 and how it functionally works. It’s especially embarrassing that Rep. Eshoo would be a co-sponsor of a bill like this, since this bill would be a lawsuit free-for-all for companies in her district.
10/ In short, Republicans have long aimed to amend #Section230 to enable Strategic Lawsuits Against Moderation (SLAMs)
This new Democratic bill would do the same
Who would benefit? Those who use the Internet to spread hate speech and lies about elections, COVID, etc
Another example of the wacky drafting in the bill is the “scienter” bit. Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful. So in a bill like this, you’d expect that the scienter would require the platforms to know that the information they were recommending was harmful. That’s the only standard that would even make sense (though would still be constitutionally problematic). However, that’s not how it is in the bill. Instead, the scienter is… that the platform knows they recommend stuff. That’s it. In the quote above the line that matters is:
such provider of a service knew or should have known such provider of a service was making a personalized recommendation of such information
In other words, the scienter here… is that you knew you were recommending stuff personally. Not that it was bad. Not that it was dangerous. Just that you were recommending stuff.
Another drafting oddity is the definition of a “personalized recommendation.” It just says it’s a personalized recommendation if it uses a personalized algorithm. And the definition of “personalized algorithm” is this bit of nonsense:
The term ‘personalized algorithm’ means an algorithm that relies on information specific to an individual.
“Information specific to an individual” could include things like… location. I’ve seen some people suggest that Yelp’s recommendations wouldn’t be covered by this law because they’re “generalized” recommendations, not “personal” ones, but if Yelp is recommending stuff to me based on my location (kinda necessary), then that’s information specific to me, and thus no more 230 for the recommendation.
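To illustrate how low that bar is, here’s a hypothetical sketch (not anything from the bill or from Yelp): even a trivial “sort by distance” ranking relies on exactly one piece of user-specific information, the user’s location, and so would appear to meet the bill’s definition of a “personalized algorithm.”

```python
from math import dist

# Hypothetical restaurant data: (name, coordinates on some local grid).
RESTAURANTS = [
    ("Pho Place", (2.0, 3.0)),
    ("Taco Spot", (0.5, 0.5)),
    ("Pizza Joint", (5.0, 1.0)),
]

def recommend(user_location: tuple[float, float]) -> list[str]:
    # The only input is the user's location -- "information specific to an
    # individual" -- so under the bill's definition this is "personalized,"
    # even though it does nothing more sinister than sort by distance.
    ranked = sorted(RESTAURANTS, key=lambda r: dist(user_location, r[1]))
    return [name for name, _ in ranked]

print(recommend((1.0, 1.0)))  # ['Taco Spot', 'Pho Place', 'Pizza Joint']
```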
It also seems like this would be hell for spam filters. I train my spam filter, so the algorithm it uses is specific to me and thus personalized. But I’m pretty sure that under this bill a spammer whose emails are put into a spam filter can now sue, claiming injury. That’ll be fun.
Meanwhile, if this passes, Facebook will be laughing. The services that have successfully taken a bite out of Facebook’s userbase over the last few years have tended to be ones with a better algorithm for recommending things: like TikTok. The one Achilles heel that Facebook has — its recommendations aren’t as good as the newer upstarts’ — gets protected by this bill.
Almost nothing here makes any sense at all. It misunderstands the problems. It misdiagnoses the solution. It totally misunderstands Section 230. It creates massive downside consequences for competitors to Facebook and to users. It enables those who are upset about moderation choices to sue companies (helping conspiracy theorists and misinformation peddlers). I can’t see a single positive thing that this bill does. Why the hell is any politician supporting this garbage?
The whole dynamic between Facebook and the Oversight Board has received lots of attention — with many people insisting that the Board’s lack of official power makes it effectively useless. The specifics, again, for most of you not deep in the weeds on this: Facebook has only agreed to be bound by the Oversight Board’s decisions on a very narrow set of issues: if a specific piece of content was taken down and the Oversight Board says it should have been left up. Beyond that, the Oversight Board can make recommendations on policy issues, but the company doesn’t need to follow them. I think this is a legitimate criticism and concern, but it’s also a case where, if Facebook itself actually does follow through on the policy recommendations, and everybody involved acts as if the Board has real power… then the norms around it might mean that it does have that power (at least until there’s a conflict, and you end up in the equivalent of a Constitutional crisis).
And while there’s been a tremendous amount of attention paid to the Oversight Board’s first set of rulings, and to the fact that Facebook asked it to review the Trump suspension, last week something potentially much more important and interesting happened. With those initial rulings on the “up/down” question, the Oversight Board also suggested a pretty long list of policy recommendations for Facebook. Again, under the setup of the Board, Facebook only needed to consider these, but was not bound to enact them.
Last week, Facebook officially responded to those recommendations, saying that it had agreed to take action on 11 of the 17 recommendations, is assessing the feasibility of another five, and is taking no action on just one. The company summarized those decisions in that link above, and put out a much more detailed PDF exploring the recommendations and Facebook’s responses. It’s actually interesting reading (at least for someone like me who likes to dig deep into the nuances of content moderation).
Since I’m sure it’s most people’s first question: the one “no further action” was in response to a policy recommendation regarding COVID-19 misinformation. The Board had recommended that when a user posts information that disagrees with advice from health authorities, but where the “potential for physical harm is identified but is not imminent,” “Facebook should adopt a range of less intrusive measures.” Basically, removing such information may not always make sense, especially when it’s not clear that the information disagreeing with health authorities is actively harmful. As per usual, there’s a lot of nuance here. As we discussed, early in the pandemic, the suggestions from “health authorities” later turned out to be inaccurate (like the WHO and CDC telling people not to wear masks in many cases). That makes relying on those health authorities as the be-all, end-all of content moderation for disinformation inherently difficult.
The Oversight Board’s response on this issue more or less tried to walk that line, recognizing that health authorities’ advice may adapt over time as more information becomes clear, and that automatically silencing those who push back on the official suggestions from health officials may lead to over-blocking. But, obviously, this is a hellishly nuanced and complex topic. Part of the issue is that — especially in a rapidly changing situation, where our knowledge base starts out with little information and is constantly correcting — it’s difficult to tell who is pushing back against official advice for good reasons and who is doing so for conspiracy theory nonsense reasons (and there’s a very wide spectrum between those two things). That creates (yet again) an impossible situation. The Oversight Board was suggesting that Facebook should be at least somewhat more forgiving in such situations, as long as it doesn’t see any “imminent” harm from those disagreeing with health officials.
Facebook’s response doesn’t so much push back against the Board’s recommendation as argue that Facebook already takes a “less intrusive” approach. It also argued that Facebook and the Oversight Board basically disagree on the definition of “imminent danger” from bad medical advice (the specific issue came up in the context of someone in France recommending hydroxychloroquine as a treatment for COVID). Facebook said that, contrary to the Board’s finding, it did think this represented imminent danger:
Our global expert stakeholder consultations have made it clear that, in the context of a health emergency, the harm from certain types of health misinformation does lead to imminent physical harm. That is why we remove this content from the platform. We use a wide variety of proportionate measures to support the distribution of authoritative health information. We also partner with independent third-party fact-checkers and label other kinds of health misinformation.

We know from our work with the World Health Organization (WHO) and other public health authorities that if people think there is a cure for COVID-19 they are less likely to follow safe health practices, like social distancing or mask-wearing. Exponential viral replication rates mean one person’s behavior can transmit the virus to thousands of others within a few days.

We also note that one reason the board decided to allow this content was that the person who posted the content was based in France, and in France, it is not possible to obtain hydroxychloroquine without a prescription. However, readers of French content may be anywhere in the world, and cross-border flows for medication are well established. The fact that a particular pharmaceutical item is only available via prescription in France should not be a determinative element in decision-making.
As a bit of a tangent, I’ll just note the interesting dynamic here: despite “the narrative” which claims that Facebook has no incentive to moderate content due to things like Section 230, here the company is arguing for the ability to be more heavy handed in its moderation to protect the public from danger, and against the Oversight Board which is asking the company to be more permissive.
As for the items that Facebook “took action” on, a lot of them are sort of bland commitments to do “something” rather than concrete changes. For example, at the top of the list were items around the confusion between the Instagram Community Guidelines and the Facebook Community Standards, and being more transparent about how those are enforced. Facebook says that it’s “committed to action” on this, but I’m not sure I can actually tell you what actions it has taken.
We’ll continue to explore how best to provide transparency to people about enforcement actions, within the limits of what is technologically feasible. We’ll start with ensuring consistent communication across Facebook and Instagram to build on our commitment above to clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.
Um… great? But what does that actually mean? I have no idea.
Evelyn Douek, who studies this issue basically more than anyone else, notes that many of these commitments from Facebook are kind of weak:
Some of the “commitments” are likely things that Facebook had in train already; others are broad and vague. And while the dialogue between the FOB and Facebook has shed some light on previously opaque parts of Facebook’s content moderation processes, Facebook can do much better.
As Douek notes, some of the answers do reveal some pretty interesting things that weren’t publicly known before — such as how its AI deals with nudity, and how it tries to distinguish the nudity it doesn’t want from things like nudity around breast cancer awareness:
Facebook explained the error choice calculation it has to make when using automated tools to detect adult nudity while trying to avoid taking down images raising awareness about breast cancer (something at issue in one of the initial FOB cases). Facebook detailed that its tools can recognize the words “breast cancer” but users have used these words to evade nudity detection systems, so Facebook can’t just rely on leaving up every post that says “breast cancer.” Facebook has committed to providing its models with more negative samples to decrease error rates.
Douek also notes that some of Facebook’s claims to be implementing the Board’s recommendations are… misleading. They’re actually rejecting the Board’s full recommendation:
In response to the FOB’s request for a specific transparency report about Community Standard enforcement during the COVID-19 pandemic, Facebook said it was “committed to action.” Great! What “action,” you might ask? It says that it had already been sharing metrics throughout the pandemic and would continue to do so. Oh. This is actually a rejection of the FOB’s recommendation. The FOB knows about Facebook’s ongoing reporting and found it inadequate. It recommended a specific report, with a range of details, about how the pandemic had affected Facebook’s content moderation. The pandemic provided a natural experiment and a learning opportunity: Because of remote work restrictions, Facebook had to rely on automated moderation more than normal. The FOB was not the first to note that Facebook’s current transparency reporting is not sufficient to meaningfully assess the results of this experiment.
Still, what’s amazing to me is that these issues, which might actually change key aspects of Facebook’s moderation, got next to zero public attention last week compared to the simple decisions on specific takedowns (and the massive flood of attention the Trump account suspension decision will inevitably get).
For quite some time now, we’ve pointed out that we should stop blaming technology for problems that are actually societal. Indeed, as you look deeper at nearly every “big tech problem,” you tend to find the problem has to do with people, not technology. And “fixing” technology isn’t really going to fix anything when it’s not the real problem. Indeed, many proposals to “fix” the tech industry seem likely to exacerbate the problems we’re discussing.
Of course, the “techlash” narrative is incredibly powerful, and the media has really run with it of late (as have politicians). So, it’s nice to see at least Wired is starting to push back on the narrative. A new cover story makes it clear that “Bad Algorithms Didn’t Break Democracy.” It’s a great article, by Gideon Lewis-Kraus. It acknowledges the narrative, and even that the techlash narrative is appealing at a surface level:
It’s easy to understand why this narrative is so appealing. The big social media firms enjoy enormous power; their algorithms are inscrutable; they seem to lack a proper understanding of what undergirds the public sphere. Their responses to widespread, serious criticism can be grandiose and smarmy. “I understand the concerns that people have about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands,” said Mark Zuckerberg, in an October speech at Georgetown University. “I’m here today because I believe we must continue to stand for free expression.”
If these corporations spoke openly about their own financial interest in contagious memes, they would at least seem honest; when they defend themselves in the language of free expression, they leave themselves open to the charge of bad faith.
But as the piece goes on to highlight, this doesn’t really make much sense — and despite many attempts to support it with actual evidence, the evidence is completely lacking:
Over the past few years, the idea that Facebook, YouTube, and Twitter somehow created the conditions of our rancor—and, by extension, the proposal that new regulations or algorithmic reforms might restore some arcadian era of “evidential argument”—has not stood up well to scrutiny. Immediately after the 2016 election, the phenomenon of “fake news” spread by Macedonian teenagers and Russia’s Internet Research Agency became shorthand for social media’s wholesale perversion of democracy; a year later, researchers at Harvard University’s Berkman Klein Center concluded that the circulation of abjectly fake news “seems to have played a relatively small role in the overall scheme of things.” A recent study by academics in Canada, France, and the US indicates that online media use actually decreases support for right-wing populism in the US. Another study examined some 330,000 recent YouTube videos, many associated with the far right, and found little evidence for the strong “algorithmic radicalization” theory, which holds YouTube’s recommendation engine responsible for the delivery of increasingly extreme content.
The article has a lot more in it — and you should read the whole thing — but it’s nice to see it recognizes that the real issue is people. If there’s a lot of bad stuff on Facebook, it’s because that’s what its users want. You have to be incredibly paternalistic to assume that the best way to deal with that is to have Facebook deny users what they want.
In the end, as it becomes increasingly untenable to blame the power of a few suppliers for the unfortunate demands of their users, it falls to tech’s critics to take the fact of demand—that people’s desires are real—even more seriously than the companies themselves do. Those desires require a form of redress that goes well beyond “the algorithm.” To worry about whether a particular statement is true or not, as public fact-checkers and media-literacy projects do, is to miss the point. It makes about as much sense as asking whether somebody’s tattoo is true. A thorough demand-side account would allow that it might in fact be tribalism all the way down: that we have our desires and priorities, and they have theirs, and both camps will look for the supply that meets their respective demands.

Just because you accept that preferences are rooted in group identity, however, doesn’t mean you have to believe that all preferences are equal, morally or otherwise. It just means our burden has little to do with limiting or moderating the supply of political messages or convincing those with false beliefs to replace them with true ones. Rather, the challenge is to persuade the other team to change its demands—to convince them that they’d be better off with different aspirations. This is not a technological project but a political one.
Perhaps it’s time for a backlash to the techlash. And, at the very least, it’s time that instead of just blaming the technology, we all take a closer look at ourselves. If it’s a political or societal problem, we’re not going to fix it (at all) by blaming Facebook.
It’s become almost “common knowledge” that various social media recommendation engines “lead to radicalization.” Just recently, while giving a talk to telecom execs, I was told, point blank, that social media was clearly evil and clearly driving people into radicalization because “that’s how you sell more ads,” and that nothing I could say could convince them otherwise. Thankfully, though, there’s a new study that throws some cold water on those claims, by showing that YouTube’s algorithm — at least in late 2019 — appears to be doing the opposite.
To the contrary, these data suggest that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels….
Indeed, as you read through the report, it suggests that if YouTube’s algorithm has any bias at all, it’s one towards bland centrism.
The recommendations algorithm advantages several groups to a significant extent. For example, we can see that when one watches a video that belongs to the Partisan Left category, the algorithm will present an estimated 3.4M impressions to the Center/Left MSM category more than it does the other way. On the contrary, we can see that the channels that suffer the most substantial disadvantages are again channels that fall outside mainstream media. Both right-wing and left-wing YouTuber channels are disadvantaged, with White Identitarian and Conspiracy channels being the least advantaged by the algorithm. For viewers of conspiracy channel videos, there are 5.5 million more recommendations to Partisan Right videos than vice versa.

We should also note that right-wing videos are not the only disadvantaged groups. Channels discussing topics such as social justice or socialist view are disadvantaged by the recommendations algorithm as well. The common feature of disadvantages channels is that their content creators are seldom broadcasting networks or mainstream journals. These channels are independent content creators.
Basically, YouTube is pushing people towards mainstream media sources. Whether or not you think that’s a good thing is up to you. But, at the very least, the algorithm doesn’t appear to default to extremism, as many people claim. Of course, that doesn’t mean it works that way for everyone. Indeed, some people have criticized this study because it only looks at recommendations for logged-out users. Nor does it mean that it wasn’t like that in the past. This study was done recently, and YouTube has reportedly been adjusting its algorithms quite a bit over the past few years in response to some of these criticisms.
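For what it’s worth, the “advantage” metric in the excerpt above is essentially a net flow of recommendation impressions between channel categories. A toy sketch of the calculation (the underlying totals are made up; only the net differences echo the numbers quoted from the study):

```python
# Made-up impression estimates (in millions): impressions[(a, b)] is how many
# recommendations are shown from category a's videos toward category b's videos.
impressions = {
    ("Partisan Left", "Center/Left MSM"): 10.0,
    ("Center/Left MSM", "Partisan Left"): 6.6,
    ("Conspiracy", "Partisan Right"): 8.0,
    ("Partisan Right", "Conspiracy"): 2.5,
}

def net_flow(src: str, dst: str) -> float:
    """Net impressions (millions) flowing from src toward dst; positive means dst is advantaged."""
    return round(impressions.get((src, dst), 0.0) - impressions.get((dst, src), 0.0), 2)

print(net_flow("Partisan Left", "Center/Left MSM"))  # 3.4 -> mainstream media advantaged
print(net_flow("Conspiracy", "Partisan Right"))      # 5.5 -> flow away from conspiracy channels
```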
However, this actually highlights some key points. Given enough public outcry, the big social media platforms have taken claims of “promoting extremism” seriously, and have made efforts to deal with it (though I’ll also make a side prediction that some aggrieved conspiracy theorists will try to use this as evidence of “anti-conservative bias,” despite it not showing that at all). Companies are still figuring much of this stuff out, and insisting that a handful of anecdotes about radicalization means it must always be so is obviously jumping the gun quite a bit.
In a separate Medium blog post by one of the authors of the paper, Mark Ledwich, it’s noted that the “these algorithms are radicalizing everyone” narrative also is grossly insulting to people’s ability to think for themselves:
Penn State political scientists Joseph Philips and Kevin Munger describe this as the “Zombie Bite” model of YouTube radicalization, which treats users who watch radical content as “infected,” and that this infection spreads. As they see it, the only reason this theory has any weight is that “it implies an obvious policy solution, one which is flattering to the journalists and academics studying the phenomenon.” Rather than look for faults in the algorithm, Philips and Munger propose a “supply and demand” model of YouTube radicalization. If there is a demand for radical right-wing or left-wing content, the demand will be met with supply, regardless of what the algorithm suggests. YouTube, with its low barrier to entry and reliance on video, provides radical political communities with the perfect platform to meet a pre-existing demand.
Writers in old media frequently misrepresent YouTube’s algorithm and fail to acknowledge that recommendations are only one of many factors determining what people watch and how they wrestle with the new information they consume.
Is it true that some people may have had their views changed over time by watching a bunch of gradually more extreme videos? Sure. How many people did that actually happen to? We have little evidence to show that it’s a lot. And now there is some real evidence suggesting that YouTube is less and less likely to push people in that direction, even those who might be susceptible to such a thing in the first place.
For what it’s worth, the authors of the study have also created an interesting site, Recfluence.net, where you can explore the recommendation paths of various types of YouTube videos.
There must be some irony in the fact that the well-hyped documentary film about Cambridge Analytica/Facebook, called The Great Hack, was released by Netflix, a company that really is kinda famous for trying to suck up as much data as possible to build a better algorithm to keep you using its service more, potentially violating people’s privacy in the process. I know it’s ancient history in terms of internet years, and everyone has decided that Facebook and Google are the root of all internet/data evils, but back in 2006, Netflix launched a contest, offering $1 million to anyone who could “improve” its recommendation algorithm over a certain threshold. It took a few years, but the company awarded the $1 million to a team that improved its algorithm, though it never actually implemented that algorithm, claiming that the benefits “did not seem to justify the engineering effort.”
But perhaps more interesting was that, while the contest was ongoing, some computer scientists de-anonymized the dataset that Netflix had released, leading some to point out that the whole project almost certainly violated the law. Eventually, Netflix shuttered its plans for a follow-up contest as part of a legal settlement regarding the privacy violations of the original.
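For a sense of how that kind of de-anonymization works in principle, here’s a minimal sketch of a linkage attack: take a handful of ratings you already know about a person (say, from reviews they posted publicly elsewhere) and find the anonymized subscriber record that matches them on movie, score, and approximate date. The record structure, IDs, dates, and scoring below are simplified assumptions for illustration, not the researchers’ actual method or the real dataset.

```python
# Hedged sketch of a linkage-style de-anonymization: match a target's
# publicly known ratings against an anonymized ratings dataset by movie,
# score, and approximate date. All IDs, dates, and scores are invented.

from datetime import date, timedelta

# anonymized_records: subscriber_id -> list of (movie_id, stars, rating_date)
anonymized_records = {
    "sub_001": [("m17", 5, date(2005, 3, 2)), ("m42", 2, date(2005, 6, 9))],
    "sub_002": [("m17", 5, date(2005, 3, 4)), ("m42", 1, date(2005, 6, 8)),
                ("m99", 4, date(2005, 7, 1))],
}

# known_ratings: what an attacker already knows about the target person.
known_ratings = [("m17", 5, date(2005, 3, 3)), ("m42", 1, date(2005, 6, 8))]

def match_score(record, known, window=timedelta(days=3)):
    """Count known ratings that also appear in `record` with the same
    score and a rating date within a small time window."""
    score = 0
    for movie, stars, when in known:
        for r_movie, r_stars, r_when in record:
            if movie == r_movie and stars == r_stars and abs(r_when - when) <= window:
                score += 1
                break
    return score

best_match = max(anonymized_records,
                 key=lambda sid: match_score(anonymized_records[sid], known_ratings))
print(best_match)  # "sub_002": the only record agreeing on both known ratings
```

The striking finding of the actual research was how few data points were needed: a handful of movies and rough dates was enough to single out most subscribers, which is why the “anonymized” dataset wasn’t really anonymous at all.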
So, perhaps feel a bit conflicted when Netflix’s vaunted algorithm recommends “The Great Hack” for you to watch.
This is not to say the documentary is unimportant, but it does highlight our troubling desire to immediately point fingers and describe certain things as “evil.” Even the name, The Great Hack, is ridiculously misleading. Nothing Cambridge Analytica did involved a “hack” in the way most people think of the word. Yes, you could argue that it was a “hack” of the larger system (using Facebook’s platform in a way that was not intended, but easily done), but it didn’t involve any technical proficiency, just a willingness to use the data that way.
But, it’s interesting to me to see the press rush in to use the documentary as the exclamation point to the narrative that’s become popular these days: that Silicon Valley is too obsessed with collecting data as a business model. Janus Rose, at Vice, has a big piece that describes the movie as a condemnation of “surveillance capitalism.”
The real “great hack” isn’t Cambridge’s ill-gotten data or Facebook’s failure to protect it. It’s the entire business model of Silicon Valley, which has incentivized the use of personal data to manipulate human behavior on a massive scale.
In that way, The Great Hack is a modern horror story. The villain is Cambridge Analytica, yes, but also Facebook, and all the systems that let people become manipulated by the digital psychological clues they leave through their lives. It’s terrifying because it’s true.
But in displaying the ruthlessly transactional underpinnings of social platforms where the world’s smartphone users go to kill time, unwittingly trading away their agency in the process, Netflix has really just begun to open up the defining story of our time.
Oddly, none of them mention Netflix’s algorithm and history. Ah, right. Because the narrative these days is Facebook/Google/Silicon Valley. Netflix has mostly migrated south to Hollywood. And, Hollywood and the media industry have no history at all of “manipulating” the public. Nope, no history of that at all.
None of this is to absolve Silicon Valley and the big tech companies, who really have done a piss-poor job of thinking through the consequences of basically anything they’ve done. But forgive me for being marginally skeptical when the same industries that have a long history of pushing propaganda and trying to manipulate audiences in one direction or another suddenly start clutching pearls at the new kids on the block.
And if you want to point fingers, there are lots of directions they could go as well. All the internet haters seem to have glommed onto Shoshana Zuboff’s term “Surveillance Capitalism” as a sort of shibboleth to the savvy, a way to show that you know (you know) those internet companies are truly evil in their hearts. But taken to its logical extreme, one might as well blame Wall Street. When a company like Pinterest tries to avoid social media “growth hacking,” Wall St. punishes it. Witness the ongoing freakout over the past few months from Wall St. as it grapples with Alphabet/Google’s revenue growth slowing.
If companies are constantly being told that they have a “fiduciary duty” to increase the stock, and Wall Street flips out any time they can’t keep growing at insane, unsustainable rates, is it any wonder that all of the incentives lead us to a place where companies focus heavily on growth?
Again, this is not an excuse. It’s all a problem. But we don’t solve large societal problems by picking off one symptom of the disease that’s really just a link in a larger societal chain. Surveillance capitalism is a symptom. Abusive data practices are a symptom. Propaganda and political grandstanding are symptoms. There are big societal problems at the root of all this — but very few seem to be interested in exploring what they are and how to deal with them. Instead, we just get one part of the surveillance capitalist propaganda machine to convince everyone that another part of the surveillance capitalist propaganda machine is the problem. And, because that bit of propaganda is successfully manipulative and compelling, lots of people buy into it.