When Does Speech Go From Legal To Lethal?
from the regulation-of-cyberharassment dept
In the Maryland case, William Cassidy had been charged with cyberstalking Alyce Zeoli, a former colleague and a Buddhist religious leader, based on his tweets, such as, "Do the world a favor and go kill yourself. P.S. Have a nice day." Zeoli asserted that the tweets made her so fear for her safety that she had not left her house for a year and a half, except to see her psychiatrist. But the judge dismissed the case. His reasoning will animate discussions in legislatures about how to amend state and federal laws.
Judge Titus indicated that "threats of harm" are punishable, but not communications "intending" emotional distress. He also considered it relevant that the medium used (a tweet) was communicated to the public at large rather than just the victim. The target of the harassment could just choose not to follow the tweets. "This," said the judge, "is in sharp contrast to a telephone call, letter or e-mail specifically addressed to and directed at another person, and that difference ... is fundamental to the First Amendment analysis in this case." Judge Titus also seemed to think that it was unreasonable for Zeoli to have such a dramatic reaction; he said that Cassidy's tweets were not a "true threat."
Words are powerful. They can move listeners or readers to action, sometimes even to harm themselves or someone else. But generally, our society doesn't punish the speaker or writer. Think about Ozzy Osbourne. Thirty years ago, he recorded the song Suicide Solution. The song states that "Suicide is the only way out," and contains the barely-recognizable lyrics, sung at a faster speed, "Get the gun and try it; Shoot, shoot, shoot."
When a 19-year-old shot himself in the head with a .22-caliber handgun after spending five hours listening to Ozzy's music, his grieving parents sued Ozzy and the record distributor. The California Appellate Court rejected their claims (pdf), noting that speech does not lose its First Amendment protection merely because it "may evoke a mood of depression." The court said the lyrics failed to "order or command anyone to concrete action at any specific time."
But courts have held differently when the speech is directly addressed to a particular person. In a case currently on appeal in Minnesota, William Melchert-Dinkel was charged with pressuring two people over the internet to commit suicide. He posed as a young female nurse who pretended to enter into a suicide pact with his victims. The judge in the Minnesota case pointed out that Melchert-Dinkel's "encouragement and advice imminently incited the suicide of Nadia Kajouji and was likely to have that effect." The judge labeled the instant messages "lethal advocacy" and held that Melchert-Dinkel's words were "analogous to the category of unprotected speech known as 'fighting words' and 'imminent incitement of lawlessness.'" The judge distinguished messages sent to the public at large, saying that Melchert-Dinkel had the right to take his pro-suicide message to the public--over the internet, on television, and so forth--but did not have the right to address that message to a single, vulnerable individual. Melchert-Dinkel's attorney is appealing the case on First Amendment grounds.
But how direct does a threat have to be? What if Melchert-Dinkel had just sent Nadia an mp3 file with Ozzy’s Suicide Solution? Courts are already weighing whether people's "likes" on a social network can be used as evidence against them. In a Wisconsin case, a judge admitted into evidence a litigant's MySpace reference to a short story in which a judge was harmed. In contrast, a Mississippi court refused to use a dad's MySpace post of Ronald McDonald being shot in the face to prove that the mom should get custody of the kids.
The "intent" standard is also problematic. As Judge Titus suggested, the standard is too broad, covering speech that is constitutionally protected. But the "intent" standard is also, in some cases, too narrow. It might allow someone to evade legitimate prosecution by claiming they didn't intend harm, they just intended to be funny.
That strategy worked for 40-year-old Elizabeth Thrasher, whose victim was her ex-husband's new girlfriend's daughter. Thrasher posted photos, the phone number, and the email address of the 17-year-old girl in the "Casual Encounters" section of Craigslist, where people solicit casual sex. As a result of the posting, the girl was swamped with sexually explicit cell phone calls, emails, and text messages that included nude pictures and solicitations for sex. One man even came to the Sonic Restaurant where she worked after failing to reach her on her phone, leading her to eventually quit her job out of fear. She testified that the publication of the information made her feel like she "was set up to get killed and raped by somebody." Thrasher's attorney argued that photos of the girl and her work location were already available on the girl's MySpace profile. He said the postings were "tantamount to a practical joke"--and Thrasher was acquitted.
Whether the speech targets a particular victim or the public at large is also at issue. Judge Titus suggests that, to be criminally actionable, tweets and posts need to be sent directly to the victim. But because of the nature of digital communications, Judge Titus' distinction between public tweets and direct communications with the victim may not hold up in future cases. Much of the cyberharassment of women does not involve a direct threat from one person to another. In a Connecticut case, a man posted a YouTube rap video of himself waving a gun while threatening to shoot his baby's mom and "put her face on the dirt until she can't breathe no more." Even though the man was in North Carolina at the time and the woman resided in Connecticut, the court issued a restraining order against him.
Under Judge Titus' standard, such a video might not have been a cause for concern because the woman could just have turned it off. This issue of targeting a private person versus the public at large will be key in cases where the person posts information (such as a Google map to a woman's house with a claim she wants men to act out rape fantasies) and the poster himself does not intend to do violence.
As courts and legislators deal with cyberharassment, they'll be determining what the limits are to punishing people for tweets and posts that threaten violence or cause emotional harm. They'll also have to determine whether the rule that the communication must be sent directly to the victim makes any sense in the age of Twitter, Facebook, and YouTube, where public posts--especially those that urge someone else to harm the victim--might be even more deadly than private ones.
Lori Andrews is the author of the upcoming I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy.