Last week, we wrote about the positively ridiculous lawsuit filed by the Seattle Public School district against basically all of social media, claiming social media was “a public nuisance.” As we noted, the school district appeared to be wasting taxpayer money that could have gone to educating their kids on a lawsuit that screamed out to the public that the district had totally failed to teach its children how to be good digital citizens, how to use the internet properly, and how to be prepared for living life in the age of the internet.
And, now it appears that the Mesa, Arizona school district has decided to do the same thing. Using the same lawyers. The law firm of Keller Rohrback appears to be trying to carve out this corner of the market as its own: getting public school districts to waste a shitload of time and resources publicly proclaiming that they can’t prepare the children they’re in charge of educating for the modern internet world.
The Mesa complaint is, not surprisingly, similar to the Seattle complaint. It’s suing the same companies (really: Meta, Google, Snap, TikTok). Like the Seattle complaint, it argues that social media is a “public nuisance.” Like the Seattle complaint, it says that Section 230 doesn’t protect the companies (it’s wrong). Like the Seattle complaint, it cites a few cherry-picked studies claiming that social media is bad for kids, and ignores more comprehensive studies that argue the opposite. Like the Seattle complaint, it goes a long way toward proving that Mesa public schools are apparently staffed by administrators and teachers who suck at educating children, and who find themselves powerless against… entertainment.
In short, it’s pathetic.
The one main “difference” between the Seattle complaint and the Mesa one is that in Mesa they’ve added a “negligence” claim, saying that social media companies “owe” the school district “a duty not to expose Plaintiff to an unreasonable risk of harm….”
This is all laughably stupid, and not at all how the law works. I mean, it’s possible that the lawyers at Keller Rohrback figure that if they file enough of these lawsuits, eventually they’ll find a judge who lets the moral panic of “social media is bad for kids” overwhelm the actual legal issues, but it’s difficult to see it standing up to any legitimate judicial scrutiny.
Of course, now that we have these two lawsuits, it’s almost certain that the lawyers are shopping around for more districts to file similar ones. One hopes that other school districts will reject this nonsense. The whole point of these lawsuits is almost certainly to try to shake down the social media companies to get them to settle, but that seems unlikely.
Either way, if you’re a parent of a student in the Mesa public schools, you should be asking why your school’s administrators seem to be publicly admitting that they can’t teach your children how to deal with the modern internet world.
I just wrote about Utah’s ridiculously silly plans to sue every social media company for being dangerous to children, in which I pointed out that the actual research doesn’t support the underlying argument at all. But I forgot that a few weeks ago, Seattle’s public school district actually filed just such a lawsuit, suing basically every large social media company for being a “public nuisance.” The 91-page complaint is bad. Seattle taxpayers should be furious that their taxes, which are supposed to be paying for educating their children, are, instead, going to lawyers to file a lawsuit so ridiculous that it’s entirely possible the lawyers get sanctioned.
The lawsuit was filed against a variety of entities and subsidiaries, but basically boils down to suing Meta (over Facebook and Instagram), Google (over YouTube), Snapchat, and TikTok. Most of the actual lawsuit reads like any one of the many, many moral panic articles you read about how “social media is bad for you,” with extremely cherry-picked facts that are not actually supported by the data. Indeed, one might argue that the complaint itself, filed by Seattle Public Schools lawyer Gregory Narver and the local Seattle law firm of Keller Rohrback, is chock full of the very sort of misinformation that the district is so quick to blame the social media companies for spreading.
First: as we’ve detailed, the actual evidence that social media is harming children basically… does not exist. Over and over again, studies show a near total lack of evidence. Indeed, as recent studies have shown, the vast majority of children get value from social media. There are plenty of moral panicky pieces from adults freaked out about what “the kids these days” are doing, but little evidence to support any of it. Indeed, the parents often seem to be driven into a moral panic fury by… misinformation they (the adults) encountered on social media.
The school’s lawsuit reads like one giant aggregation of basically all of these moral panic stories. First, it notes that the kids these days, they use social media a lot. Which, well, duh. But, honestly, when you look at the details it suggests they’re mostly using them for entertainment, meaning that it hearkens back to previous moral panics about every new form of entertainment from books, to TV, to movies, etc. And, even then, none of this even looks that bad? The complaint argues that this chart is “alarming,” but if you asked kids about how much TV they watched a couple decades ago, I’m guessing it would be similar to what is currently noted about YouTube and TikTok (and note that others like Facebook/Instagram don’t seem to get that much use at all according to this chart, but are still being sued):
There’s a whole section claiming to show that “research has confirmed the harmful effects” of social media on youth, but that’s false. It’s literally misinformation. It cherry-picks a few studies, nearly all of which are by a single researcher, and ignores the piles upon piles of research suggesting otherwise. Hell, even the graphic above that it uses to show the “alarming” addiction to social media is from Pew Research Center… the organization that just released a massive study about how social media has made life better for teens. Somehow, the Seattle Public Schools forgot to include that one. I wonder why?
Honestly, the best way to think about this lawsuit is that it is the Seattle Public School system publicly admitting that they’re terrible educators. While it’s clear that there are some kids who end up having problems exacerbated by social media, one of the best ways to deal with that is through good education. Teaching kids how to use social media properly, how to be a good digital citizen, how to have better media literacy for things they find on social media… these are all the kinds of things that a good school district builds into its curriculum.
This lawsuit is effectively the Seattle Public School system publicly stating “we’re terrible at our job, we have not prepared your kids for the real world, and therefore, we need to sue the media apps and services they use, because we failed in our job.” It’s not a good look. And, again, if I were a Seattle taxpayer — and especially if I were a Seattle taxpayer with kids in the Seattle public school district — I would be furious.
The complaint repeatedly points out that the various social media platforms have been marketed to kids, which, um, yes? That doesn’t make it against the law. While the lawsuit mentions COPPA, the law designed to protect kids, it’s not making a COPPA claim (which it can’t make anyway). Instead, it’s just a bunch of blind conjectures, leading to a laughably weak “public nuisance” claim.
Pursuant to RCW 7.48.010, an actionable nuisance is defined as, inter alia, “whatever is injurious to health or indecent or offensive to the senses, or an obstruction to the free use of property, so as to essentially interfere with the comfortable enjoyment of the life and property.”

Specifically, a “[n]uisance consists in unlawfully doing an act, or omitting to perform a duty, which act or omission either annoys, injures or endangers the comfort, repose, health or safety of others, offends decency . . . or in any way renders other persons insecure in life, or in the use of property.”

Under Washington law, conduct that substantially and/or unreasonably interferes with the Plaintiff’s use of its property is a nuisance even if it would otherwise be lawful.

Pursuant to RCW 7.48.130, “[a] public nuisance is one which affects equally the rights of an entire community or neighborhood, although the extent of the damage may be unequal.”

Defendants have created a mental health crisis in Seattle Public Schools, injuring the public health and safety in Plaintiff’s community and interfering with the operations, use, and enjoyment of the property of Seattle Public Schools.

Employees and patrons, including students, of Seattle Public Schools have a right to be free from conduct that endangers their health and safety. Yet Defendants have engaged in conduct which endangers or injures the health and safety of the employees and students of Seattle Public Schools by designing, marketing, and operating their respective social media platforms for use by students in Seattle Public Schools and in a manner that substantially interferes with the functions and operations of Seattle Public Schools and impacts the public health, safety, and welfare of the Seattle Public Schools community.
This reads just as any similar moral panic complaint would have read against older technologies. Imagine schools in the 1950s suing television or schools in the 1920s suing radios. Or schools in the 19th century suing book publishers for early pulp novels.
For what it’s worth, the school district also tries (and, frankly, fails) to take on Section 230 head on, claiming that it is “no shield.”
Plaintiff anticipates that Defendants will raise section 230 of the Communications Decency Act, 47 U.S.C. § 230(c)(1), as a shield for their conduct. But section 230 is no shield for Defendants’ own acts in designing, marketing, and operating social media platforms that are harmful to youth.

….

Section 230 does not shield Defendants’ conduct because, among other considerations: (1) Defendants are liable for their own affirmative conduct in recommending and promoting harmful content to youth; (2) Defendants are liable for their own actions designing and marketing their social media platforms in a way that causes harm; (3) Defendants are liable for the content they create that causes harm; and (4) Defendants are liable for distributing, delivering, and/or transmitting material that they know or have reason to know is harmful, unlawful, and/or tortious.
Except that, as we and many others explained in our briefs in the Supreme Court’s Gonzalez case, that’s all nonsense. All of these claims are still attempting to hold the companies liable for the speech of users. None of the actual complaints are about actions by the companies; rather, they’re about how the district doesn’t like the fact that the expression of these sites’ users is (the school district misleadingly claims) harmful to the kids in their schools.
First, Plaintiff is not alleging Defendants are liable for what third-parties have said on Defendants’ platforms but, rather, for Defendants’ own conduct. As described above, Defendants affirmatively recommend and promote harmful content to youth, such as pro-anorexia and eating disorder content. Recommendation and promotion of damaging material is not a traditional editorial function and seeking to hold Defendants liable for these actions is not seeking to hold them liable as a publisher or speaker of third-party content.
Yes, but recommending and promoting content is First Amendment-protected speech. They can’t be sued for that. And it’s not the “recommendation” that they’re really claiming is harmful, but the speech that is being recommended, which (again) is protected by Section 230.
Second, Plaintiff’s claims arise from Defendants’ status as designers and marketers of dangerous social media platforms that have injured the health, comfort, and repose of its community. The nature of Defendants’ platforms centers around Defendants’ use of algorithms and other design features that encourage users to spend the maximum amount of time on their platforms—not on particular third party content.
One could just as reasonably argue that the harm actually arises from the Seattle Public School system’s apparently total inability to properly prepare the children in their care for modern communications and entertainment systems. This entire lawsuit seems like the school district foisting the blame for their own failings on a convenient scapegoat.
There’s a lot more nonsense in the lawsuit, but hopefully the court quickly recognizes how ridiculous this is and tosses it out. Of course, if the Supreme Court screws up everything with a bad ruling in the Gonzalez case, well, then this lawsuit should give everyone pretty clear warning of what’s to come: a whole slew of utterly vexatious, frivolous lawsuits against internet websites for any perceived “harm.”
The only real takeaways from this lawsuit should be (1) Seattle parents should be furious, (2) the Seattle Public School system seems to be admitting it’s terrible at preparing children for the real world, and (3) Section 230 remains hugely important in protecting websites against these kinds of frivolous SLAPP suits.
Another Section 230 case has made its way into the federal court system. Of course, the plaintiffs really don’t want this to be a Section 230 case, since their lawsuit is predicated on content created by users of two chat apps.
The lawsuit alleged that the developers of YOLO (an anonymous chat app) and LMK (an add-on app for Snapchat that gives users more customization options) are somehow responsible for the acts of other app users. From the recent federal court decision [PDF]:
Plaintiffs allege they received harassing messages in response to their benign posts on Defendants’ applications and did not receive comparable messages on other platforms in which user identities were revealed. Plaintiffs allege that YOLO had pop-up notifications that stated individuals’ identities would be revealed if they harassed other users and LightSpace [the designer of the LMK app] similarly stated it would take reports of bullying it received seriously and potentially send those reports to law enforcement. Plaintiffs reference several specific explicit messages they received on these platforms and also aver more generally that they received harassing messages on both applications. Plaintiffs allege that YOLO in particular did not respond to reports of harassment and that a decedent of one of the Plaintiffs unsuccessfully attempted to search online for ways to “reveal” the identities of individuals who had previously sent him harassing messages on YOLO the night before his death.
As you can infer from the last sentence of this summary, there’s a tragedy at the center of this case, an apparent suicide the survivors believe was a response to online harassment via these apps. While it’s understandable the survivors are attempting to right a wrong via this litigation, this isn’t the sort of wrong that can be addressed by taking legal action against app developers who did not create the harassing content.
Section 230 immunizes the app developers from lawsuits brought over content created by users. That’s why this lawsuit was framed as alleged violations of consumer laws from all over the United States. It’s a cause of action grab bag.
As stated above, the FAC brings twelve causes of action under state law against Defendants; namely: (1) strict product liability based on a design defect; (2) strict product liability based on a failure to warn; (3) negligence; (4) fraudulent misrepresentation; (5) negligent misrepresentation; (6) unjust enrichment; (7) violation of the Oregon Unlawful Trade Practices Act; (7) violation of the New York General Business Law § 349; (8) violation of the New York General Business Law § 350; (9) violation of the Colorado Consumer Protection Act; (10) violation of the Pennsylvania Unfair Trade Practices Law; (11) violation of the Minnesota False Statement in Advertising Act; and (12) violation of California Business and Professions Code §§ 17200 & 17500.
None of that matters, though.
[T]he court finds that each of these causes of action is predicated on the theory that Defendants violated various state laws by failing to adequately regulate end-users’ abusive messaging, and is therefore barred by Section 230.
That’s the correct finding. And that addresses all of the causes of action, some of which are clearly stretched past the point of credibility… like the plaintiffs’ insistence that allowing users to create anonymous accounts is a “defective design feature.” This presumes two dumb and disingenuous things: that offering anonymity to users is irresponsible, and that knowing who these abusive users were would deter them from being abusive.
So, that’s it for now for this case. It will likely be appealed. If it is, it will be headed to the Ninth Circuit Court of Appeals — a court that has said some rather strange things about Section 230 immunity in recent years. Or the plaintiffs may wait to see what the Supreme Court has to say about Section 230 in the Gonzalez v. Google case it recently granted cert to. Depending on what the justices decide, this case may still have plenty of life left in it, despite being dismissed with prejudice by this court.
In response to the Supreme Court’s recent assault on female bodily autonomy, numerous U.S. corporations have issued statements saying they’ll be paying for employee abortion travel. You’re to ignore, apparently, that many of these same companies continue to throw millions of dollars at the politicians responsible for turning the Supreme Court into a dangerous, cruel, legal norm-trampling joke:
Several companies that have announced they will cover travel costs for employees that need an abortion are financially backing a political committee openly devoted to eliminating abortion rights around the country.
With abortion now or soon to be illegal in countless states, there’s newfound concern about the privacy issues we’ve talked about for years, like how user location data, period tracking data, or browsing data can all be used against women seeking abortions and those looking to aid them… by both the state and violent vigilantes (thanks to flimsy U.S. standards on who can buy said data and how it can be used).
Reporters that have tried to ask modern data-hoovering companies if they’ll do a better job securing data to ensure it can’t be used against women, or if they’ll fight efforts by states hunting abortion seekers and aiders in and out of state, have been met with dead silence. Not even rote statements on how the safety of women is important, but dead silence:
Motherboard asked a long line of companies including Facebook, Amazon, Twitter, TikTok, AT&T, Uber, and Snapchat if they’d hand over user data to law enforcement and not a single one was willing to commit to protecting women’s data:
Motherboard asked if each will provide data in response to requests from law enforcement if the case concerns users seeking or providing abortions, or some other context in which the agency is investigating abortions. Motherboard also asked generally what each company is planning to protect user data in a post-Roe America.
None of the companies answered the questions. Representatives from Twitter and Snapchat replied to say they were looking into the request, but they did not provide a statement or other response.
To be fair, company legal departments haven’t finished doing the risk calculations of showing a backbone and upsetting campaign contributors and law enforcement. They’ve also got to weigh the incalculable looming harms awaiting countless women against any potential lost snoopvertising revenues, so there’s that.
As public pressure grows, ham-fisted state enforcement begins, and the dynamics of the Roe repeal become harder for them to ignore, several of these companies may find something vaguely resembling a backbone in time. But the initial lack of any clarity or courage whatsoever in the face of creeping authoritarianism (and a high court gone completely off the rails) doesn’t inspire a whole lot of confidence.
Can cops pretend to be real people on social media to catfish people into criminal charges? Social media services say no. Facebook in particular has stressed — on more than one occasion — that its “real name” policy applies just as much to cops as it does to regular people.
Law enforcement believes terms of service don’t apply to investigators and actively encourages officers to create fake accounts to go sniffing around for crime. That’s where the Fourth Amendment comes into play. It’s one thing to passively access public posts from public accounts. It’s quite another when investigators decide the only way to obtain evidence to support search or arrest warrants involves “friending” someone whose posts aren’t visible to the general public.
What’s public is public and the third party doctrine definitely applies: users are aware their public posts are visible to anyone using the service. But those who use some privacy settings are asking courts whether it’s ok for cops to engage in warrantless surveillance of their posts just because they made the mistake of allowing a fake account into their inner circle.
Accepting a friend request is an affirmative act. And that plays a big part in court decisions finding in favor of law enforcement agencies. Getting duped isn’t necessarily a constitutional violation. And it’s difficult to claim you’ve been unlawfully surveilled by fake accounts run by cops. You know, due diligence and all that. It apparently makes no difference to courts that cops violated platforms’ terms of service or engaged in subterfuge to conduct fishing expeditions for inculpatory evidence.
Massachusetts’ top court has been asked to settle this. And the state justices seem somewhat skeptical that current law (including the state’s constitution) allows for extended surveillance via fake social media accounts. No decision has been reached yet, but lower courts in the state are adding to case law, providing additional precedent that may influence the final decision from the state’s Supreme Court.
This recent decision [PDF] by a Massachusetts Superior Court indicates the courts are willing to give cops leeway considering the ostensibly-public nature of social media use. But it doesn’t give the Commonwealth quite as much leeway as it would like.
Here’s how it started:
After accepting a “friend” request from the officer, the defendant published a video recording to his social media account that featured an individual seen from the chest down holding what appeared to be a firearm. The undercover officer made his own recording of the posting, which later was used in criminal proceedings against the defendant. A Superior Court judge denied the defendant’s motion to suppress the recording as the fruit of an unconstitutional search, and the defendant appealed. We transferred the matter to this court on our own motion.
Here’s how it’s going:
Among other arguments, the defendant suggests that because his account on this particular social media platform was designated as “private,” he had an objectively reasonable expectation of privacy in its contents. The Commonwealth contends that the act of posting any content to a social media account de facto eliminates any reasonable expectation of privacy in that content.
The competing arguments about expectation are (from the defendant) “some” and (from the Commonwealth) “none.” It’s not that simple, says the court.
Given the rapidly evolving role of social media in society, and the relative novelty of the technology at issue, we decline both the defendant’s and the Commonwealth’s requests that we adopt their proffered brightline rules.
In this case, Boston police officer Joseph Connolly created a fake Snapchat account and sent a friend request to a private account run by “Frio Fresh.” Fresh accepted the friend request, allowing the officer access to all content posted. In May 2017, Officer Connolly saw a “story” posted by “Frio Fresh” that showed him carrying a silver revolver. Connolly recorded this and passed the information on to a BPD strike force after having observed (but not recorded) a second “story” showing “Frio Fresh” in a gym. The strike force began surveilling the gym and soon saw “Frio Fresh” wearing the same clothes observed in the first story (the one the officer was able to record with a second device). Strike force members pursued “Frio Fresh” and searched him, recovering the revolver seen in the Snapchat story.
The court recognizes the damage free-roaming surveillance of social media can do to constitutional rights, as well as people’s generally accepted right to converse freely among friends.
Government surveillance of social media, for instance, implicates conversational and associational privacy because of the increasingly important role that social media plays in human connection and interaction in the Commonwealth and around the world. For many, social media is an indispensable feature of social life through which they develop and nourish deeply personal and meaningful relationships. For better or worse, the momentous joys, profound sorrows, and minutiae of everyday life that previously would have been discussed with friends in the privacy of each others’ homes now generally are shared electronically using social media connections. Government surveillance of this activity therefore risks chilling the conversational and associational privacy rights that the Fourth Amendment and art. 14 seek to protect.
Despite this acknowledgment, the court rules against the defendant, in essence saying it was his own fault for not vetting his “friends” more thoroughly. The defendant seemed unclear as to Snapchat privacy settings and, in this case, willingly accepted a friend request from someone he didn’t know who used a Snapchat-supplied image in his profile. In essence, the court is saying either you care about your privacy or you don’t. And, in this case, the objective expectation of privacy is undercut by the subjective expectation of privacy this user created by being less than thorough in his vetting of friend requests.
Nonetheless, the defendant’s privacy interest in this case was substantially diminished because, despite his asserted policy of restricting such access, he did not adequately “control[] access” to his Snapchat account. Rather, he appears to have permitted unknown individuals to gain access to his content. See id. For instance, Connolly was granted access to the defendant’s content using a nondescript username that the defendant did not recognize and a default image that evidently was not Connolly’s photograph. By accepting Connolly’s friend request in those circumstances, the defendant demonstrated that he did not make “reasonable efforts to corroborate the claims of” those seeking access to his account.
[…]
Indeed, Connolly was able to view the defendant’s stories precisely because the defendant gave him the necessary permissions to do so. That the defendant not only did not exercise control to exclude a user whose name he did not recognize, but also affirmatively gave Connolly the required permissions to view posted content, weighs against a conclusion that the defendant retained a reasonable expectation of privacy in his Snapchat stories.
The final conclusion is that this form of surveillance — apparently without a warrant — is acceptable because the surveilled user didn’t take more steps to protect his posts from government surveillance. There’s no discussion about the “reasonableness” of officers creating fake accounts to gain access to private posts without reasonable suspicion of criminal activity. Instead, the court merely states that “undercover police work” is “legitimate,” and therefore not subjected to the same judicial rigor as the claims of someone who was duped into revealing the details of their life to an undercover cop.
The defendant may get another chance to appeal this decision if the state’s Supreme Court decides creating fake accounts to trawl for criminal activity falls outside the boundaries of the Constitution. Until then, the only bright line is don’t accept friend requests from people you don’t know. But that’s still problematic, considering there’s no corresponding restriction on government activities, which may lead to officers impersonating people from targets’ social circles to gain access to private posts. And when that happens, what recourse will defendants have? The court says it’s on defendants to protect their privacy no matter how many lies law enforcement officers tell. That shifts too much power to the government and places the evidentiary burden solely on people who expect their online conversations to be free of government surveillance.
In the wake of a tragedy, it’s human nature to seek some form of justice or closure. The feeling is that someone should be held accountable for a senseless death, even when there’s no one to blame directly. This tends to result in misguided lawsuits, like the multiple suits filed by (far too opportunistic) law firms that seek to hold social media platforms accountable for the actions of mass shooters and terrorists.
The desire to respond with litigation remains even when there’s a single victim — one who has taken their own life. That’s the case here in this lawsuit, coming to us via Courthouse News Service. Plaintiff Tammy Rodriguez’s eleven-year-old daughter committed suicide. Her daughter was allegedly a heavy user of both Snapchat and Instagram. The connection between the platforms and her daughter’s suicide is alluded to and alleged, but nothing in the lawsuit [PDF] shows how either of the companies are directly responsible for the suicide.
Here’s how the complaint hopes to achieve these questionable ends:
Defendants have designed Instagram and Snapchat to allow minor users to use, become addicted to, and abuse their products without the consent of the users’ parents, like Tammy Rodriguez.
Defendants have specifically designed Instagram and Snapchat to be attractive nuisances to underage users but failed to exercise ordinary care owed to underage business invitees to prevent the rampant solicitation of underage girls by anonymous older users who do not disclose their real identities, and mass message underage users with the goal of grooming and sexually exploiting minors.
Defendants not only failed to warn Tammy and Selena Rodriguez of the dangers of addiction, sleep deprivation, and problematic use of their applications, but misrepresented the safety, utility, and addictive properties of their products. For example, the head of Instagram falsely testified under oath at a December 8, 2021 Senate Committee hearing that Instagram does not addict its users.
As a result of Selena Rodriguez’s addictive and problematic use of Instagram and Snapchat, she developed numerous mental health conditions including multiple inpatient psychiatric admissions, an eating disorder, self-harm, and physically and mentally abusive behaviors toward her mother and siblings.
As a proximate result of her addiction to Instagram and Snapchat, Selena Rodriguez committed suicide on July 21, 2021. She was eleven years old at the time.
There’s the first problem with the lawsuit: “proximate result.” While it’s possible to show an indirect connection between a person’s act and the actions of the services they use, you really can’t argue “proximate cause” while also arguing “strict product liability.” Either this suicide was the direct result of the design flaws or warning failures or it wasn’t. It can’t be both strict and proximate.
Inside those claims are further problems. What’s stated in the product liability arguments is sometimes directly contradicted by the narrative of the lawsuit. For instance, under the claims about violations of California’s Unfair Competition Law, the plaintiff says this:
Defendants engaged in fraudulent and deceptive business practices in violation of the UCL by promoting products to underage users, including Selena Rodriguez, while concealing critical information regarding the addictive nature and risk of harm these products pose. Defendants knew and should have known that their statements and omissions regarding the addictive and harmful nature of their products were misleading and therefore likely to deceive the members of the public who use Defendants’ products and who permit their underage children to use Defendants’ products. Had Plaintiff known of the dangerous nature of Defendants’ products, she would have taken early and aggressive steps to stop or limit her daughter’s use of Defendants’ products.
But the plaintiff did know, as is clearly stated earlier in the lawsuit:
Plaintiff Tammy Rodriguez, Selena’s mother, attempted multiple times to reduce or limit her daughter’s use of social media, which caused a severe reaction by Selena due to her addiction to Defendants’ products. Because Defendants’ products do not permit parental controls, the only way for Tammy Rodriguez to effectively limit access to Defendants’ products would be to physically confiscate Selena’s internet-enabled devices, which simply caused Selena to run away in order to access her social media accounts on other devices.
Plaintiff Tammy Rodriguez attempted to get Selena mental health treatment on multiple occasions. An outpatient therapist who evaluated Selena remarked that she had never seen a patient as addicted to social media as Selena. In the months leading up to Selena’s suicide, she was experiencing severe sleep deprivation that was caused and aggravated by her addiction to Instagram and Snapchat, and the constant 24-hour stream of notifications and alerts Defendants sent to Selena Rodriguez.
So, the plaintiff was aware — not only from what she had personally observed but from what she had been told by her daughter’s therapist. That not only undercuts the arguments made in the state law claims, but also the very next paragraph in the narrative of the lawsuit.
Throughout the period of Selena’s use of social media, Tammy Rodriguez was unaware of the clinically addictive and mentally harmful effects of Instagram and Snapchat.
Now, I’m not saying any of this should excuse the slot machine feedback loops that permeate so many social media services. But it’s a huge stretch to say social media platforms are directly (or proximately) responsible for violent acts by users, whether they’re mass shootings or singular suicides. This lawsuit seems to admit there’s no direct link (even though it definitely doesn’t want to) by undercutting its own claims with factual assertions about the last several months of this child’s life.
While it’s true the average person may not understand the personal and psychological aspects of social media addiction, most people are familiar with the symptoms of addiction and, if they’re a concerned parent like the one filing this suit, will act accordingly. The narrative in this lawsuit attempts to show how dangerous these platforms are, but in doing so, it shows the plaintiff was well aware of the negative side effects. Just because that knowledge failed to prevent a tragedy does not automatically transfer full responsibility to the internet services her daughter used.
A lawsuit like this may prompt the creation of better parental controls or age verification procedures, but it’s unlikely to result in Instagram and Snapchat being ordered to compensate this mother for the tragic death of her child. The connections are too tenuous and the allegations too conclusory to survive a motion to dismiss. The lawsuit’s internal contradictions aren’t going to help. And, despite some concerning comments from the Ninth Circuit about the supposed overbreadth of Section 230 immunity in recent months, this suicide was ultimately the act of someone who used these services, rather than an act encouraged or abetted by the services being sued.
Summary: Snapchat debuted to immediate success a decade ago, drawing in millions of users with its playful take on instant messaging that combined photos and short videos with a large selection of filters and “stickers.” Stickers are graphics that can be applied to messages, allowing users to punch up their presentations (so to speak).
Snapchat’s innovations in the messaging space proved incredibly popular, moving Snapchat from upstart to major player in a few short years. It also created more headaches for moderators as sent messages soared past millions per day to billions.
Continuing its expansion of user options, Snapchat announced its integration with Giphy, a large online repository of GIFs, in February 2018. This gave users access to Giphy’s library of images to use as stickers in messages.
But the addition of thousands of images to billions of messages quickly resulted in an unforeseen problem. In early March of 2018, Snapchat users reported that a search of the GIPHY image database for the word “crime” surfaced a racist sticker, as reported by Josh Constine for TechCrunch:
“We first reported Instagram was building a GIPHY integration back in January before it launched a week later, with Snapchat adding a similar feature in February. But it wasn’t long before things went wrong. First spotted by a user in the U.K. around March 8th, the GIF included a racial slur.” — Josh Constine, TechCrunch
Both platforms immediately pulled the plug on the integration while they sorted things out with GIPHY.
Company Considerations:
What measures can be put in place to prevent moderation problems from moving from one platform to another during cross-platform integration?
What steps should be taken prior to launch to integrate moderation efforts between platforms?
What can “upline” content providers do to ensure content moving from their platforms to others meets the content standards of the “downline” platforms?
Issue Considerations:
What procedures aid in facilitating cross-platform moderation?
Which party should have final say on moderation efforts, the content provider or the content user?
“We’ve been in close contact with GIPHY throughout this process and we’re confident that they have put measures in place to ensure that Instagram users have a good experience,” an Instagram spokesperson told TechCrunch.
After investigation of the incident, this sticker was available due to a bug in our content moderation filters specifically affecting GIF stickers.
We have fixed the bug and have re-moderated all of the GIF stickers in our library.
The GIPHY staff is also further reviewing every GIF sticker by hand and should be finished shortly.
Snapchat was the last to reinstate its connection to GIPHY, stating it was working directly with the site to revamp both moderation systems to ensure offensive content would be prevented from being uploaded to GIPHY and/or making the leap to connected social media services.
A few years ago, the Georgia Court of Appeals kept a lawsuit alive against Snapchat, brought by the parents of a victim of a car crash — one supposedly encouraged by Snapchat’s “speed filter.” No Section 230 immunity was extended to Snapchat, which only made the filter available, but did not actually participate (other than as another passenger) in the reckless driving that resulted in the accident that left another driver permanently brain damaged.
Removing this case to federal court most likely would not have helped. Another lawsuit against Snapchat over its “speed filter” has been allowed to move forward by the Ninth Circuit Court of Appeals. (via Ars Technica)
This case involves another tragic car accident and the use of Snap’s app and “speed filter.” From the decision [PDF]:
According to the Parents’ amended complaint, Jason Davis (age 17), Hunter Morby (age 17), and Landen Brown (age 20) were driving down Cranberry Road in Walworth County, Wisconsin at around 7:00 p.m. on May 28, 2017. Jason sat behind the wheel, Landen occupied the front passenger seat, and Hunter rode in the back seat. At some point during their drive, the boys’ car began to speed as fast as 123 MPH. They sped along at these high speeds for several minutes, before they eventually ran off the road at approximately 113 MPH and crashed into a tree. Tragically, their car burst into flames, and all three boys died.
One of the people in the car opened Snapchat and enabled the “speed filter” minutes before the fatal accident. According to the allegations, a number of Snapchat users believed (incorrectly) Snapchat would reward them (the ruling doesn’t specify with what) for exceeding 100 mph. That’s where the allegations tie Snapchat to reckless driving by end users. It isn’t much, but it’s apparently enough to allow the lawsuit to move forward.
And it’s these specific allegations that allow the lawsuit to avoid the expected Section 230 immunity defense. The plaintiffs aren’t claiming Snapchat is the publisher of third-party content. They don’t even argue this lawsuit centers on the “speed filter” post made shortly before the accident. Instead, they argue the app itself — with its attendant (but now removed) “speed filter” — is negligently designed, leading directly to the tragedies at the center of this suit.
It is thus apparent that the Parents’ amended complaint does not seek to hold Snap liable for its conduct as a publisher or speaker. Their negligent design lawsuit treats Snap as a products manufacturer, accusing it of negligently designing a product (Snapchat) with a defect (the interplay between Snapchat’s reward system and the Speed Filter). Thus, the duty that Snap allegedly violated “springs from” its distinct capacity as a product designer. Barnes, 570 F.3d at 1107. This is further evidenced by the fact that Snap could have satisfied its “alleged obligation”—to take reasonable measures to design a product more useful than it was foreseeably dangerous—without altering the content that Snapchat’s users generate.
Because of that, the only discussion of Section 230 is to explain why it’s not an appropriate defense in this case.
That Snap allows its users to transmit user-generated content to one another does not detract from the fact that the Parents seek to hold Snap liable for its role in violating its distinct duty to design a reasonably safe product.
The Ninth Circuit says the filter is first-party content and it’s Snapchat’s software that’s the problem.
Snap indisputably designed Snapchat’s reward system and Speed Filter and made those aspects of Snapchat available to users through the internet.
The lawsuit heads back down to the district level, where the lower court originally dismissed this case on Section 230 grounds. The plaintiffs will get another chance to amend their lawsuit to fully flesh out arguments — namely, Snapchat’s allegedly-negligent design — that weren’t fully addressed the first time around.
There’s a chance another round of litigation may result in a win for Snapchat. The plaintiffs still need to fully connect Snapchat to the reckless driving committed by the victims of the car crash. It may not be enough to simply say the presence of a “speed filter” is responsible for this tragic outcome. After all, Snapchat has millions of users and presumably a majority of those managed to refrain from driving recklessly even with the supposed incentive of being rewarded before Snap removed the “speed filter” option.
A carefully posed photo of dangerous driving attracted some attention online in early May. The photo was taken from the driver’s seat of a Nissan. The photographer is driving, doing 90 mph as he brandishes a handgun with his finger resting on the trigger. To make matters worse, there’s an alcoholic cider propped against the dash. This extensive set of unsafe behaviors was intended to outrage, offend, and attract attention — all goals it undoubtedly met. And such foolishness is an invitation to a lengthy imprisonment. But it would be a mistake to treat Nissan, Heckler & Koch, Angry Orchard Hard Cider, the driver’s cell phone manufacturer, and whatever platform he used to share the photo as responsible for his misbehavior.
Unfortunately, two ongoing lawsuits against Snapchat apply this logic to the app’s speed filter feature. Alongside other sensor-based filters, like altimeters and location-based geofilters, Snapchat provides a speedometer filter that superimposes the user’s current speed over a photograph. Passengers can use the filter safely in all manner of vehicles, from boats to airplanes. However, it can also be used dangerously by reckless drivers speeding on public roads in pursuit of a high speedometer reading.
In September of 2015, teen driver Crystal McGee was allegedly traveling at over 100 miles per hour while using Snapchat when she struck Wentworth Maynard’s vehicle. McGee was later charged with causing serious injury by vehicle, but Maynard sued both McGee and Snapchat, attempting to hold the company responsible for McGee’s reckless driving.
In most cases, Section 230 of the Communications Decency Act indemnifies the creators of an “interactive computer service” against liability for consumer misuse of their publishing tools. The law prevents social media platforms from being treated as the “publisher or speaker” of user-generated content.
Indeed, the case was initially dismissed on Section 230 grounds, but this decision was reversed by the Georgia Court of Appeals. The court reasoned that because McGee did not actually post the photo, Snapchat was not being treated as the publisher of her speech, but the creator of a dangerous product that had somehow, per Maynard’s complaint, “facilitated McGee’s excessive speeding.” The court allowed the case to go forward because the suit “seek[s] to hold Snapchat liable for its own conduct, principally for the creation of the Speed Filter and its failure to warn users that the Speed Filter could encourage speeding and unsafe driving practices.”
It’s hard to see how the existence of Snapchat’s speedometer encouraged Crystal McGee to drive at 113 miles per hour on a busy road. Snapchat doesn’t reward users for achieving high speedometer ratings, and opening the filter triggers a popup warning reading: “Please, DO NOT Snap and drive.” Snapchat may have made it easier for her to record and share her behavior, but reckless drivers have long taken photos of their speed as displayed on the dash. One might just as easily claim that the existence of dashboard speedometers similarly encourages speeding. Arguably, driving fast might be less alluring without a way to determine how fast you’re actually going. The collection of items in the photo above all contribute to its outrageousness, yet none of the companies represented are responsible for the reckless tableau.
In Lemmon v. Snap, a similar case dismissed with leave to amend in February, the District Court for the Central District of California found that Section 230 protected Snapchat from liability because the filter “is a neutral tool, which can be utilized for both proper and improper purposes. The Speed Filter is essentially a speedometer tool, which allows Defendant’s users to capture and share their speeds with others.” While a user might behave recklessly in pursuit of a high recorded speed, the decision is theirs and theirs alone. The court describes the recorded speed as content submitted by the user. “While a user might use the Speed Filter to Snap a high number, the selection of this content (or number) appears to be entirely left to the user,” the Court reasoned. Snapchat doesn’t play a role in selecting the user’s speed, making it a “neutral tool” protected by Section 230.
While Maynard and Lemmon may seem like instances of overly litigious ambulance-chasing, and Snapchat will likely win its case even in the absence of Section 230, the suit’s sweeping theory of intermediary liability has supporters in Congress.
In a recent Federalist Society teleforum, Josh Divine, Deputy Counsel to Sen. Josh Hawley, argued that Snapchat should be held responsible for users’ misuse of the filter. Divine asserts that “most people recognize that this kind of tool is primarily attractive to reckless drivers and indeed encourages reckless driving,” ignoring both the varied, user-defined applications of the filter, and its inbuilt warning. He contends that plaintiffs in the speed filter lawsuits are “complaining about a reckless platform design decision” rather than anything “specific to speech.” However, Maynard and similar suits hinge on platform design’s facilitation of user speech. Snapchat is being sued upon the belief that it contributed to the plaintiffs’ injuries by providing a tool that allows speakers to easily tell others how fast they’re moving. Any remedy would involve limiting the sorts of speech that Snapchat can host.
Section 230 was intended to protect the creation and operation of communicative tools like Snapchat. In Maynard, litigants attempt to circumvent Section 230 by, in essence, suing over Snapchat’s non-use, alleging that Section 230 should not apply because McGee did not actually publish any photos taken before the crash. If merely creating a tool that can be used illegally or dangerously opens platforms to liability, Section 230 offers little real protection, and such a determination would imperil more than camera-speedometer amalgamations. Responsibility for one’s behavior — be it the dangerous acts pictured above, or the reckless driving at issue in Lemmon and Maynard — should rest with the individual.
Will Duffield is a Policy Analyst at the Cato Institute.
We warned last week that Senator Lindsey Graham was holding a “but think of the children online” moral panic hearing. Indeed, it happened. You can watch the whole 2 hours, but… I wouldn’t recommend it (I did it for you, though). Most of it is the usual moral panic, technologically illiterate nonsense we’ve all come to expect from Congress. Indeed, in a bit of good timing, the Pessimist’s Archive just tweeted out a clip of a 1993 Senate hearing in which then-Senator Joe Lieberman flipped out about evil video games. Think about this, but two hours, and a wider array of nonsense:
— Pessimists Archive Podcast (@PessimistsArc) July 9, 2019
It starts out with a prosecutor from South Carolina, Duffie Stone, moral panicking about basically everything. Encryption is evil. Children are being sex trafficked online. And, um, gangs are recruiting members with (gasp) music videos. Later he complains that some of those kids (gasp!) mock law enforcement in their videos. Something must be done! The second speaker, a law professor, Angela Campbell, claims that we need more laws “for the children!” She also goes further and says that the FTC should go after Google and others for not magically stopping scammy companies from existing. Then there was this guy, Christopher McKenna, from an organization (“Protect Young Eyes!”) dedicated to moral panics, telling all sorts of unbelievable anecdotes about evil predators stalking young people on Instagram and “grooming” them. Remember, the actual data on this kind of activity shows that it’s actually quite rare (not zero, and that’s not excusing it when it does happen, but the speaker makes it sound like every young girl on Instagram is likely to be at risk of sex trafficking). He also asks the government to require an MPAA/ESRB-style “rating” system for apps — apparently unaware that laws attempting to require such ratings have been struck down as unconstitutional, and that the MPAA/ESRB ratings only exist through voluntary agreements.
There’s also… um… this:
It’s the app where every kid, regardless of age, has access to the Discover News section, where they are taught how to engage in risky sexual behavior, such as hookup, group, anal, or torture sex, how to sell drugs, and how to hide internet activity from parents using “incognito mode.”
He’s describing Snapchat. I’ve used Snapchat for years and, uh, I’ve never come across any of that. Also, the complaint about incognito mode is… pretty messed up, considering that’s a tool for protecting privacy. This is all straight from the standard moral panic playbook. Also, he claims that on Twitter “hardcore porn and prostitution was everywhere” — which is also news to me (and I use Twitter a lot). He also whines that VPNs are too easy to get — and then later whines that it’s “too hard” to protect our privacy. Um, making VPNs harder to get will harm our privacy. It’s like a hodgepodge of nonsense.
There was also John Clark from NCMEC — the National Center for Missing and Exploited Children. NCMEC actually does good work in helping platforms screen out and spot child porn. However, Clark contributes to the scare-mongering about just how awful the internet is. He also flat out lies. At one point during the panel, Senator Ted Cruz asks Clark about FOSTA and what it’s done so far. Clark flat out lies and says that FOSTA took down Backpage. This is false. Backpage was taken down and its founders arrested before FOSTA was even signed into law.
The only semi-reasonable panelist was the last one, Stephen Balkam, from the Family Online Safety Institute. While McKenna mocks the idea that “parents have a role” by pointing out that parents can’t watch over their kids every hour of every day (duh), Balkam points out that what we should be doing is not watching over our kids all the time, but rather training and educating them to be good digital citizens online and to avoid trouble. But that kind of message was basically ignored by the Senators, because what fun is actually respecting our kids and teaching them how to be smart internet users? Instead, most of the panel focuses on crazy anecdotes and salacious claims about internet services that make them sound a hell of a lot more insane than any of those platforms actually are.
Later, Senator John Kennedy asks the guy from “Protect Young Eyes” if Apple can build a filter that will magically help parents block kids from ever seeing sexually explicit material. McKenna stumbles and admits he has no idea, leading Balkam to finally have to jump into the conversation (he’s the only panelist that no Senator had called on throughout the entire ordeal) to point out that all platforms have some forms of parental controls. But Kennedy cuts him off and says “but can it be done?” Balkam stutters a “yes,” which is not accurate — since Kennedy is asking for something impossible. But then Kennedy suggests that Congress write a law that requires companies like Apple and Google to install filters (something that’s already been ruled unconstitutional).
Kennedy’s idea is… nutty. He includes the obligatory “I don’t know how any of this is done” comment before suggesting a bunch of impossible ideas.
Could Apple, for example, design a program that a parent could opt into, and the instructions to Apple would be “design a program that will filter all information that my daughter or son may see that would be sexually exploitative”? Maybe “filter all pictures or written references to human genitalia.” Can that be done? … Isn’t that the short way home here?
[….]
So could we write legislation, or promulgate a rule, that says “here’s the thing that a reasonable parent would do to protect his or her child from seeing this stuff.” And we do that in conjunction with somebody that has the obvious expertise. And you filter everything. I don’t know how to do it. I can’t write software. Maybe it’s to prevent any pictures of human genitalia. Or prohibit any reference to sexual activity. I don’t know. The kids aren’t gonna like it, but that’s not who we’re trying to please here. Why couldn’t that be done?
Well, the Constitution is why it can’t be done, Senator. Also, a basic understanding of technology. Or the limits of filter technology. Block all mention of sexual activity? Sure, then kids will use slang. Good luck keeping up with that. Block all pictures of genitalia? Then say goodbye to biology texts online. Or pages about breast cancer. This is all stuff that lots of people have studied for decades, and Kennedy is displaying his ignorance about the Constitution, the law, the internet, the technology, and just about everything else as well. Including kids.
Balkam points out that there are lots of private companies already making such filters, but Kennedy keeps saying “can we write a law” and “can we require every device have these filters,” and Balkam looks panicked, noting he has no idea whether or not they can write such a law (answer: they cannot, at least not if they want it to pass Constitutional muster).
Senator Blackburn… brings up Jeffrey Epstein. Who, as far as we know… didn’t use the internet to prey on girls. But according to Blackburn, Epstein proves the problems of the internet. Because. Senator Hawley then completely makes up a claim that YouTube is deliberately pushing kids to pedophiles and refuses to do anything about it. He claims — incorrectly — that Google admitted that it knows it sends videos of kids to pedophiles (and, he claims, allows the pedophiles to contact the kids) and that it deliberately has decided not to stop this. This misrepresents… basically everything once again.
Senator Thom Tillis then grandstands that it’s all the parents’ fault — and if a kid gets a mobile phone and lies about his age, we should be… blaming the parents for “giving the kids a lethal device.” No hyperbole and grandstanding there, huh? He’s also really focused on “lethality.” He later claims that the internet content itself is “lethal.”
Towards the end, the Senators all gang up on Section 230. Senator Cruz asks his FOSTA question (leading NCMEC’s Clark to falsely state that it was necessary to take down Backpage), and then Blumenthal calls 230 “the elephant in the room” and suggests that there needs to be a “duty of care” to get companies to do anything. It seems like Hawley is already gone by this time, but no one seems to point out that any such duty of care would likely lead to much greater censorship on these platforms, in direct contrast with his demand that the companies censor less.
Nevertheless, Senator Graham closes the hearing by saying that he thinks the companies need to “earn” their CDA 230 protections (which is part of Hawley’s nonsense bill). Graham suggests that Congress needs to come up with “best business practices” and platforms should only get 230 protections if they “meet those best business practices.”
Who knew the Republican Party was all about dictating business standards. What happened to the party of getting government out of business?
Who knows what will actually come out of this hearing, but it was mostly a bunch of ill-informed or misinformed, technologically illiterate grandstanding and moral panic nonsense. In other words, standard operating procedure for most of Congress.