Will Duffield's Techdirt Profile

Posted on Techdirt - 28 December 2021 @ 03:55pm

A Grope In Meta's Space

Horizon Worlds is a VR (virtual reality) social space and world-building game created by Facebook. In early December, a beta tester wrote about being virtually groped by another Horizon Worlds user. A few weeks later, The Verge and other outlets published stories about the incident. However, their coverage omits key details from the victim’s account. As a result, it presents the assault as a failure of user-operated moderation tools rather than a demonstration of the limits of top-down moderation. Nevertheless, this VR groping illustrates the difficulty of moderating VR, and the enduring value of tools that let users solve problems for themselves.

The user explains that they reported and blocked the groper, and that a Facebook “guide” – an experienced user trained and certified by Facebook – failed to intervene. They write, “I think what made it worse, was even after I reported, and eventually blocked the assaulter, the guide in the plaza did and said nothing.” In the interest of transparency, I have republished the beta user’s post in full, sans identifying information, here:

**Trigger Warning** Sexual Harassment. My apologies for the long post: Feel free to move on.

Good morning,

I rarely wake up with a heavy heart and a feeling of anger to start a fresh new day, but that is how I feel this morning. I want to be seen and heard. I reach out to my fellow community members in hopes of understanding and reassurance that they will be proactive in supporting victims and eliminating certain types of behavior in horizon worlds. My expectations as a creator in horizon worlds aren’t unreasonable and I’m sure many will agree.

You see this isn’t the first time, I’m sure it won’t be the last time that someone has sexually harassed me in virtual reality. Sexual harassment is no joke on the regular Internet but being in VR adds another layer that makes the event more intense. Not only was I groped last night, but there were other people there who supported this behavior which made me feel isolated in the Plaza. I think what made it worse, was even after I reported, and eventually blocked the assaulter, the guide in the plaza did and said nothing. He moved himself far across the map as if to say, you’re on your now.

Even though my physical body was far removed from the event, my brain is tricked into thinking it’s real, because…..you know……Virtual REALITY. We can’t tout VR’s realness and then lay claim that it is not a real assault. Mind you, this all happened within one minute of arriving in the plaza, I hadn’t spoken a word yet and could have possibly been a 12-year-old girl.


I would like a personal bubble that will force people away from my avatar and I would like to be able to upload my own recording with my harassment ticket. I would also like that all guides are given sensitivity training on this specific subject, so they will understand what is expected. If META won’t give guides tools that will allow them to remove a player immediately from a situation, at least train them to deal with it and not run away.

Rant over, I’m still mad, but I will sort through and process. I love this community and the thought of leaving it makes me deeply sad. So I am hopeful we can evolve as a community and foster behaviors that support collaboration, understand, and a willingness to speak out against gross behaviors.

Initial coverage in The Verge did not mention the victim’s use of the block feature, even as the user describes using it in the post above. Instead, reporter Alex Heath relayed Facebook’s account of the incident, saying “the company determined that the beta tester didn’t utilize the safety features built into Horizon Worlds, including the ability to block someone from interacting with you.”

These details are important because subsequent writing about the incident builds on the false claim that the blocking feature went unused to argue that offering users tools to control their virtual experience is “unfair and doesn’t work.” In Technology Review, Tanya Basu makes hay of the user’s supposed failure to use the “safe zone” feature, which temporarily removes users from their surroundings. Yet this is a red herring. The user might not have immediately disappeared into her safe zone, but she used the block feature to do away with her assailant.

In reality, contra Basu’s and Facebook’s descriptions of events, it seems that user-directed blocking put a stop to the harm while the platform-provided community guide failed to intervene. VR groping is a serious issue, but it is not one that will be solved from the top down. Inaccurate reporting that casts user-operated moderation tools as ineffective may spur platforms to pursue less effective solutions to sexual harassment in VR.

Implications of the incident’s misreporting aside, it provides a useful case study in the difficulties of moderating VR. One suggestion put forward by the user and backed by University of Washington researcher Katherine Cross warrants discussion. Closer inspection of their proposals illustrates the careful tradeoffs that inform the current safe zone and blocking tools offered to Horizon users.

They request a “personal bubble that will force people away from my avatar” or “automatic personal distance unless two people mutually agreed to be closer.” This might make some groping harder, but it creates other opportunities for abuse.

If players’ avatars can take up physical space and block movement, keeping others at bay, then they can also block doorways and trap other players in corners or against other parts of the world. Player collision could render abuse inescapable or allow players to hold others’ avatars prisoner.

MMOs (Massively Multiplayer Online games) have long struggled with this problem – “holding the door” is only a contextually heroic action. Player collision makes gameplay more realistic, but allows some players to limit everyone else’s access to important buildings by loitering in the doorway.
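To make the tradeoff concrete, here is a minimal sketch – toy Python with hypothetical names and a flat 2D world, not any real engine’s code – of what enforcing a personal bubble entails. The same solidity that pushes an intruding avatar back to the bubble’s edge is exactly what would let a malicious player body-block a doorway.

```python
import math

def enforce_bubble(mover, anchor, radius=1.0):
    """If `mover` steps inside `anchor`'s personal bubble,
    push it back out to the bubble's edge.

    mover, anchor: (x, y) positions; radius: bubble size in meters.
    """
    dx, dy = mover[0] - anchor[0], mover[1] - anchor[1]
    dist = math.hypot(dx, dy)
    if dist >= radius:
        return mover  # outside the bubble: no correction needed
    if dist == 0:
        dx, dy, dist = 1.0, 0.0, 1.0  # exact overlap: pick a direction
    scale = radius / dist
    # This push-back is what makes avatars "solid" -- and solid
    # avatars can be parked in doorways to trap other players.
    return (anchor[0] + dx * scale, anchor[1] + dy * scale)
```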

Currently, players’ avatars in Horizon may pass through one another. Players can retreat into a safe zone, disappearing from the world. They can also “block” other users, preventing the blocked and blocking users from seeing one another. Even through a block, each can still see the other’s nametag – total invisibility created the problems I covered here. As such, the current suite of user moderation tools strikes a good balance between empowering users and minimizing new opportunities for misuse.
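The resulting behavior can be summarized as a simple visibility rule. The following is an illustrative reconstruction in Python – hypothetical names, not Facebook’s actual code: blocked pairs hide each other’s avatars but keep nametags visible, so neither side can lurk unseen.

```python
def visible_parts(viewer, target, blocks):
    """What `viewer` sees of `target`, given a set of (blocker, blocked)
    pairs. Blocking is symmetric: either direction hides the avatar."""
    blocked = (viewer, target) in blocks or (target, viewer) in blocks
    if blocked:
        # Present but muted: the nametag prevents unseen "haunting".
        return {"avatar": False, "nametag": True}
    return {"avatar": True, "nametag": True}
```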

Finally, given the similarity of the transgression, it is worth recalling Julian Dibbell’s “A Rape in Cyberspace”, one of the first serious accounts of community governance online. In this Village Voice article, Dibbell relates how users of role-playing chatroom LambdaMOO (the best virtual reality to be had in 1993) responded to a string of virtual sexual assaults. After fruitless deliberation, a LambdaMOO coder banned the offending user. After the incident, LambdaMOO established democratic procedures for banning abusive users, and created a “boot” command allowing users to temporarily remove troublemakers.

As the internet has developed, content moderation has centralized. Today, users are usually expected to let platforms moderate for them. However, just as in the web’s early days, empowering users remains the best solution to interpersonal abuse. The tools they need to keep themselves safe may be different, but in virtual reality, as in role-playing chat, those closest to abuse are best positioned to address it. Users being harassed should not be expected to wait for the mods.

Will Duffield is a Policy Analyst at the Cato Institute

Posted on Techdirt - 27 September 2021 @ 12:11pm

Bankers As Content Moderators

In August, porn-subscription platform OnlyFans announced that it would no longer permit pornography, blaming pressure from banks. The porn policy was rescinded after a backlash from platform users, but the incident illustrates how a handful of heavily regulated financial service providers can act as meta-moderators by shaping the content policies of platforms that rely on them. 

How did banks acquire such power over OnlyFans? Although people sometimes express themselves for free, they usually demand compensation. Polemicists, scientists, poets, and, yes, pornographers all need a paying audience to put food on their tables. Unless the audience is paying in cash, their money must move through payment processors, banks, and other financial intermediaries. If no payment is processed, no performance will be forthcoming. 

OnlyFans relies on financial intermediaries in several ways. It must be able to accept payments from users, send payments to content creators, and raise capital from investors. Each of these activities requires the services of a bank or payment processor. In an interview with the Financial Times, OnlyFans CEO Tim Stokely pointed to banks’ refusals to process payments to content creators as the pressure behind the proposed policy change.

“We pay over one million creators over $300m every month, and making sure that these funds get to creators involves using the banking sector,” he said, singling out Bank of New York Mellon as having “flagged and rejected” every wire connected to the company, “making it difficult to pay our creators.”

BNY Mellon processes a trillion dollars of transfers a day. At this scale, OnlyFans’ $300 million a month in creator payments could be lost in a rounding error. Like individual users on massive social media platforms, the patronage of any one website or business doesn’t matter to financial intermediaries. Banks often refuse service to the sex industry because of its association with illegal prostitution. In the face of bad press or potential regulatory scrutiny, it is usually easier, and in the long run, cheaper, to simply sever ties with the offending business. 

This leaves an excluded firm like OnlyFans with few options. OnlyFans cannot simply become a payment processor. Financial intermediaries are heavily regulated. OnlyFans is unlikely to clear the regulatory hurdles, and even if it could, compliance with anti-money laundering laws would strip its users of anonymity. 

Financial intermediaries are uniquely positioned to police speech because they are heavily regulated. While Section 230 keeps the costs of starting a speech platform low, banking regulation makes it difficult and expensive to enter the financial services market. There are hundreds of domain registrars, but only a handful of major payment processors. This disparity makes the denial of payment processing one of the most effective levers for controlling speech. 

Banks have the same rights of conscience as other firms, but regulation gives their decisions added weight. Financial intermediaries are in the business of making money, not curating for a particular audience, so they have less incentive to moderate than publishers. However, when financial intermediaries moderate, regulation prevents alternative service providers from entering the market. 

Peer-to-peer payment systems, such as cryptocurrency, offer a solution that circumvents intermediaries entirely. However, cryptocurrency has proven difficult to use as money at scale. OnlyFans was able to grow to its current size through access to the traditional banking system; at this stage, it cannot easily abandon it. OnlyFans would lose many users if it required buyers and sellers to maintain cryptocurrency wallets. The platform’s current investors would likely balk at issuing a token to raise additional capital. Decentralized alternatives are, for the moment, unworkably convoluted.

While financial intermediaries’ power to moderate is not absolute, they can keep unwanted speech at the fringes of society and prevent it from being very profitable. This is not merely a problem for porn. Many sorts of legal but disfavored speech are vulnerable to financial deplatforming. Gab, a social media platform popular with the alt-right, has been barred from PayPal, Venmo, Square, and Stripe. It eventually found a home with Second Amendment Processing, an alternative payment processor originally created to serve gun stores. 

Commercial banks have faced pressure to stop serving gun stores from activists and, through Operation Choke Point, from the government. Operation Choke Point sought to discourage banks from serving porn actors, payday lenders, gun merchants, and a host of other “risky” customers. The FDIC threatened banks with “unsatisfactory Community Reinvestment Act ratings, compliance rating downgrades, restitution to consumers, and the pursuit of civil money penalties” if they failed to follow the government’s risk guidance. Operation Choke Point officially ended in 2017, but it set the tone for banks’ treatment of covered businesses. Because the banking sector is highly regulated, it is very susceptible to informal government pressure – regulators have many ways to interfere with disobedient banks.

In 2011, when Wikileaks published a trove of leaked State Department cables, Senator Joe Lieberman pressured nearly every service Wikileaks used to ban the organization. Wikileaks was deplatformed by its web host, its domain name service, and even its data visualization software provider. Bank of America, VISA, MasterCard, PayPal, and Western Union all prohibited donations to Wikileaks. Wikileaks was able to quickly move to European web hosts and domain name services beyond the reach of Senator Lieberman. But even Swiss bank PostFinance refused Wikileaks’ business. Unlike foreign web hosting and domain registration services, foreign banks are still part of a global financial system for which America largely sets the rules.

Denied access to banking services, Wikileaks became an early adopter of Bitcoin. Simply sending money to a small organization was simple enough that even in 2011, Bitcoin could offer Wikileaks a viable alternative to the traditional financial system. It also probably helped that Wikileaks’ cause was popular with the sort of people already using Bitcoin in 2011.

While cryptocurrency has come a long way in the past decade, adoption is still limited, and alternatives to traditional methods of raising capital are still in their infancy. Bitcoin offered Wikileaks a way out, and some OnlyFans content creators may turn to decentralized alternatives. But as a business, OnlyFans remains at the mercy of the banking industry. Financial intermediaries cannot stamp out disfavored speech, but they can cap the size of platforms that host it. Sitting behind and above the commercial internet, payment processors and banks retain a unique capability to set rules for platforms, and, in turn, platform users.


Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we’ll have many of this series’ authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we’ll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.

Posted on Techdirt - 22 July 2021 @ 08:45pm

Learning About Content Moderation From Ghosts In Virtual Reality

Content moderation in virtual reality comes with its own unique challenges. What works for the moderation of text and video doesn’t neatly translate into VR. In late June, Facebook’s Horizon, a VR social space still in beta testing, released an update to prevent its blocking feature from creating ghosts. That might sound hyperbolic, but it is a perfectly apt description of the feature’s effect in Horizon prior to the update. In the earlier build, both the blocker and the blocked were made invisible to one another, but allowed to continue interacting with the same virtual world. While they couldn’t see one another, they could see each other’s effects on their shared environment. If someone blocked you, your obscene gestures might be invisible to them, but you could still move the furniture about and rattle chains – practically becoming a poltergeist.

Improvements to Blocking in Horizon

We’re beginning to roll out changes to how blocking works in Horizon. These changes are based on people’s feedback, and are designed to improve people’s experience and make Horizon a safer and more welcoming place.

Previously, when you blocked someone in Horizon, both you and the person you blocked became invisible to each other. We heard feedback from people that this was confusing, for example when the other person continued interacting with objects in the same space.

Now, both the person who has been blocked and the person who blocked them will be able to see each other’s username tag, while keeping their avatars invisible to each other. This update allows both people to know they’re present, but blocked and muted.

You’ll also be able to see the people you’ve blocked in your menu (such as in the People Nearby list) instead of them being completely hidden. This means you can see who you’ve blocked without having to interact with them. You can also visit the settings page and see a block list, where you can see people you’ve blocked and choose to unblock them if you want.

As the patch notes explain, when used in VR, the traditional approach to blocking caused unintended problems. Unlike static social media profiles, users embody their avatars. The user’s digital representation mimics their motions and gestures as it moves through a shared virtual world. On traditional social media, blocking another user hides your speech from their view and limits their ability to reply. Hiding your avatar from their view is a logical translation of this policy to virtual reality. However, because the invisible-to-one-another blocker and blocked still shared the same virtual world, a malicious user could potentially block someone to haunt them or spy unobserved. Tagging speech from blocked users would be unnecessary on traditional social media, as their speech is already excluded from the blocker’s conversations. However, in a shared virtual environment, it becomes a necessary component of a useful blocking feature.

While these are far from life-threatening abuses, they illustrate why best practices for traditional content moderation can’t always be easily applied to VR. In many ways, Facebook Horizon’s moderation challenges look more like those of a video game, especially a massively multiplayer online game (MMO), than those of a traditional social network. In both cases, players interact through avatars, and can simultaneously affect the same virtual world. In either a game or a shared VR world, the properties of the digital environment govern player interactions as much as, if not more than, rules about players’ speech. This often introduces tradeoffs between antiharassment measures and realism or interactivity. If a game models fire realistically, a malicious player might kick a campfire into another’s tent and set it ablaze. This can be avoided by limiting either players’ ability to interact with fire (stopping them from kicking it) or the properties of the fire itself (preventing it from burning the tent). Environmental design choices in games or VR somewhat resemble architectural choices faced by traditional platforms – whether to create retweet or quote-tweet functions, or to allow users to control who can reply to their tweets. However, creating an interactive virtual world requires making many more of these decisions.
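The campfire example reduces to a pair of design knobs. A sketch in Python, with hypothetical names rather than any real engine’s API: a world builder can close the griefing vector by disabling either the interaction or the dangerous property, each at some cost to realism or interactivity.

```python
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    kickable: bool      # can players apply force to it?
    spreads_fire: bool  # can it ignite nearby objects?

def fire_griefable(obj):
    """The tent-burning exploit needs both properties at once."""
    return obj.kickable and obj.spreads_fire

# Fully realistic fire: kickable and contagious -- and thus exploitable.
realistic = WorldObject("campfire", kickable=True, spreads_fire=True)
# Two mitigations, each trading away some realism or interactivity.
fixed_in_place = WorldObject("campfire", kickable=False, spreads_fire=True)
harmless_flame = WorldObject("campfire", kickable=True, spreads_fire=False)
```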

MMOs are typified as either “theme park” or “sandbox” games. In the former, designers set fixed goals for players to compete or cooperate towards, justifying referee-like governance. The latter offers players a set of tools and expects them to make their own fun, limiting the need for intercession by designers. Conflict between players with different goals is an expected part of the fun.

While platforms for knitting patterns or neighborhood conversation have purposes that recommend some rules over others, more open-ended platforms have struggled to justify their rules. YouTube is a home for video content. Which video content? Who’s to say? VR is, for the time being, mostly used for gaming. However, as social and commercial applications of the technology become more popular, this question of purpose will become politically relevant, as it has for YouTube.

Horizon’s chief product is a framework for users to create their own virtual worlds. Horizon exists not to provide a Facebook-designed environment, but to offer users the ability to create their own environments. This gives Horizon some guiding purpose, and relieves its designers of pressure to make one-size-fits-all decisions. Because most worlds within Horizon are created by users, those users can set the rules of interactivity. Facebook has neither the time nor the resources to govern the behavior and use of every virtual tennis racket across myriad virtual spaces. However, the creators of these little worlds know whether they’re creating a virtual tennis club or a garden-party fighting game, and can set the rules of the environment accordingly. This will not be the last time Facebook finds that a rule that works for text and video publishing platforms falls flat in virtual reality. However, its response to the unintended effects of the block feature shows a willingness to appreciate the new demands of the medium.


Posted on Techdirt - 28 October 2020 @ 01:45pm

Another Section 230 Reform Bill: Dangerous Algorithms Bill Threatens Speech

Representatives Malinowski and Eshoo have introduced a Section 230 amendment called the “Protecting Americans from Dangerous Algorithms Act” (PADAA). The title is something of a misnomer. The bill does not address any danger inherent to algorithms, but instead seeks to prevent them from being used to share extreme speech.

Section 230 of the Communications Act prevents providers of an interactive computer service, such as social media platforms, from being treated as the publisher or speaker of user-submitted content, while leaving them free to govern their services as they see fit.

The PADAA would modify Section 230 to treat platforms as the speakers of algorithmically selected user speech in suits brought under 42 U.S.C. 1985 and the Anti-Terrorism Act. If platforms use an “algorithm, model, or computational process to rank, order, promote, recommend, [or] amplify” user-provided content, the bill would remove 230’s protection in suits seeking to hold platforms responsible for acts of terrorism or failures to prevent violations of civil rights.
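The quoted trigger sweeps broadly. As a toy illustration in Python – not statutory analysis, and with invented post objects – nearly any feed logic “ranks” or “orders” user content, including a plain engagement sort or even a reverse-chronological timeline:

```python
def engagement_feed(posts):
    """Order posts by likes: plainly an 'algorithm ... to rank,
    order, promote' user-provided content under the bill's text."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def chronological_feed(posts):
    """Even newest-first ordering arguably 'orders' content."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
```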

These are not minor exceptions. A press release published by Rep. Malinowski’s office presents the bill as intended to reverse the US Court of Appeals for the 2nd Circuit’s ruling in Force v. Facebook, and endorses the recently filed McNeal v. Facebook, which seeks to hold Facebook liable for recent shootings in Kenosha, WI. These suits embrace a sweeping theory of liability that treats platforms’ provision of neutral tools as negligent.

Force v. Facebook concerned Facebook’s algorithmic “Suggested Friends” feature and its prioritization of content based on users’ past likes and interests. Victims of a Hamas terror attack sued Facebook under the Anti-Terrorism Act for allegedly providing material support to Hamas by connecting Hamas sympathizers to one another based on their shared interests and surfacing pro-Hamas content in its Newsfeed.

The 2nd Circuit found that Section 230 protected Facebook’s neutral processing of the likes and interests shared by its users. Plaintiffs appealed the ruling to the US Supreme Court, which declined to hear the case. The 2nd Circuit held that, although:

plaintiffs argue, in effect, that Facebook’s use of algorithms is outside the scope of publishing because the algorithms automate Facebook’s editorial decision-making.

Facebook is nevertheless protected by Section 230 because its content-neutral processing of user information doesn’t render it the developer or author of user submissions.

The algorithms take the information provided by Facebook users and “match” it to other users – again, materially unaltered – based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.

The court concludes by noting the radical break from precedent that the plaintiffs’ claims demand. The PADAA would establish this sweeping shift as law.

Plaintiffs’ “matchmaking” argument would also deny immunity for the editorial decisions regarding third-party content that interactive computer services have made since the early days of the Internet. The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown.

Explicitly opening platforms to lawsuits for algorithmically curated content would compel them to remove potentially extreme speech from algorithmically curated feeds. Algorithmic feeds are given center stage on most contemporary platforms – Twitter’s Home timeline, the Facebook Newsfeed, and TikTok’s “For You” page are all algorithmically curated. If social media platforms are exposed to liability for harms stemming from activity potentially inspired by speech in these prominent spaces, they will cleanse them of potentially extreme, though First Amendment-protected, speech. This amounts to censorship by legislative fiat.

Exposing platforms to a general liability for speech implicated in terrorism and civil rights deprivation claims is more insidiously restrictive than specifically prohibiting certain content. In the face of a nebulous liability, contingent on the future actions of readers and listeners, platforms will tend to restrict speech on the margins.

The aspects of the bill specifically intended to address Force v. Facebook’s “Suggested Friends” claims – the imposition of liability for automated procedures that “recommend” any “group, account, or affiliation” – will be even more difficult to implement without inhibiting speech and political organizing in an opaque, idiosyncratic, and ultimately content-based manner.

After I attended a Second Amendment rally in Richmond early this year, Facebook suggested follow-up rallies, local town hall meetings, and militia musters. However, to avoid liability under the PADAA, Facebook wouldn’t have to simply refrain from serving me militia events. Instead, it would have to determine whether the Second Amendment rally itself was likely to foster radicalism. In light of its newfound liability for users’ subsequent actions, would it be wise to connect participants or suggest similar events? Would all political rallies receive the same level of scrutiny? Some conservatives claim Facebook “incites violent war on ICE” by hosting event pages for anti-ICE protests. Should Facebook be held liable for Willem van Spronsen’s firebombing of ICE vehicles? Rep. Malinowski’s bill would require private firms to make far-reaching determinations about diverse political movements under a legal Sword of Damocles.

Spurring platforms to exclude organizations and interests tangentially linked to political violence or terrorism from their recommendation algorithm would have grave consequences for legitimate speech and organization. Extremists have found community and inspiration in everything from pro-life groups to collegiate Islamic societies. Budding eco-terrorists might be connected by shared interests in hiking, veganism, and conservation. Should Facebook avoid introducing people with such shared interests to one another?

The bill also fails to address real centers of radicalization. As I noted in my recent testimony to the House Commerce Committee, most radicalization occurs in small, private forums, such as private Discord and Telegram channels, or the White Nationalist bulletin board Iron March. The risk of radicalization – like rumor, disinformation, or emotional abuse – is inherent to private conversation. We accept these risks because the alternative – an omnipresent corrective authority – would foreclose the sense of privileged access necessary to the development of a self. However, this bill does not address private spaces. It only imposes liability on algorithmic content provision and matching, and wouldn’t apply to sites with fewer than 50 million monthly users. Imageboards such as 4chan and 8chan are too small to be covered, and users join private Discord groups via invite links, not algorithmic suggestion.

The Protecting Americans from Dangerous Algorithms Act attempts to prevent extremism by imposing liability on social media firms for algorithmically curated speech or social connections later implicated in extremist violence. Expecting platforms to predictively police algorithmically selected speech, a la Minority Report, is a fantasy. In practice, this liability would compel platforms to set broad, stringent rules for speech in algorithmically arranged forums. Legislation that would push radical, popularly disfavored, or simply illegible speech out of the public eye via private proxies raises unavoidable First Amendment concerns.

Posted on Techdirt - 16 June 2020 @ 01:39pm

Cars, Guns, Cider, And Snapchat Don't Cause Crime

A carefully posed photo of dangerous driving attracted some attention online in early May. Taken from the driver’s seat of a Nissan, it shows the photographer driving at 90 mph as he brandishes a handgun with his finger resting on the trigger. To make matters worse, there’s an alcoholic cider propped against the dash. This extensive set of unsafe behaviors was intended to outrage, offend, and attract attention — all goals it undoubtedly met. And such foolishness is an invitation to a lengthy imprisonment. But it would be a mistake to treat Nissan, Heckler & Koch, Angry Orchard Hard Cider, the driver’s cell phone manufacturer, and whatever platform he used to share the photo as responsible for his misbehavior.

Unfortunately, two ongoing lawsuits against Snapchat apply this logic to the app’s speed filter feature. Alongside other sensor-based filters, like altimeters and location-based geofilters, Snapchat provides a speedometer filter that superimposes the user’s current speed over a photograph. Passengers can use the filter safely in all manner of vehicles, from boats to airplanes. However, it can also be used dangerously by reckless drivers speeding on public roads in pursuit of a high speedometer reading.
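For context, a sensor-based speed overlay needs nothing more than position samples over time. A minimal sketch in Python – hypothetical names, not Snapchat’s actual implementation – derives a reading from two position fixes, regardless of whether the user is a passenger on a train or a driver on a highway:

```python
import math

def speed_mph(p1, t1, p2, t2):
    """Estimate speed in mph from two (x, y) positions in meters,
    sampled at times t1 and t2 in seconds."""
    meters = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    meters_per_second = meters / (t2 - t1)
    return meters_per_second * 2.23694  # m/s to mph
```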

In September 2015, teen driver Christal McGee was allegedly traveling at over 100 miles per hour while using Snapchat when she struck Wentworth Maynard’s vehicle. McGee was later charged with causing serious injury by vehicle, and Maynard sued both McGee and Snapchat, attempting to hold the company responsible for McGee’s reckless driving.

In most cases, Section 230 of the Communications Decency Act indemnifies the creators of an “interactive computer service” against liability for consumer misuse of their publishing tools. The law prevents social media platforms from being treated as the “publisher or speaker” of user-generated content.

Indeed, the case was initially dismissed on Section 230 grounds, but this decision was reversed by the Georgia Court of Appeals. The court reasoned that because McGee did not actually post the photo, Snapchat was not being treated as the publisher of her speech, but the creator of a dangerous product that had somehow, per Maynard’s complaint, “facilitated McGee’s excessive speeding.” The court allowed the case to go forward because the suit “seek[s] to hold Snapchat liable for its own conduct, principally for the creation of the Speed Filter and its failure to warn users that the Speed Filter could encourage speeding and unsafe driving practices.”

It’s hard to see how the existence of Snapchat’s speedometer encouraged Crystal McGee to drive at 113 miles-per-hour on a busy road. Snapchat doesn’t reward users for achieving high speedometer readings, and opening the filter triggers a popup warning reading: “Please, DO NOT Snap and drive.” Snapchat may have made it easier for her to record and share her behavior, but reckless drivers have long taken photos of their speed as displayed on the dash. One might just as easily claim that the existence of dashboard speedometers similarly encourages speeding. Arguably, driving fast might be less alluring without a way to determine how fast you’re actually going. The collection of items in the photo above all contribute to its outrageousness, yet none of the companies represented are responsible for the reckless tableau.

In Lemmon v. Snap, a similar case dismissed with leave to amend in February, the District Court for the Central District of California found that Section 230 protected Snapchat from liability because the filter “is a neutral tool, which can be utilized for both proper and improper purposes. The Speed Filter is essentially a speedometer tool, which allows Defendant’s users to capture and share their speeds with others.” While a user might behave recklessly in pursuit of a high recorded speed, the decision is theirs and theirs alone. The court describes the recorded speed as content submitted by the user. “While a user might use the Speed Filter to Snap a high number, the selection of this content (or number) appears to be entirely left to the user,” the Court reasoned. Snapchat doesn’t play a role in selecting the user’s speed, making it a “neutral tool” protected by Section 230.

While Maynard and Lemmon may seem like instances of overly litigious ambulance-chasing, and Snapchat will likely win its case even in the absence of Section 230, the suit’s sweeping theory of intermediary liability has supporters in Congress.

In a recent Federalist Society teleforum, Josh Divine, Deputy Counsel to Sen. Josh Hawley, argued that Snapchat should be held responsible for users’ misuse of the filter. Divine asserts that “most people recognize that this kind of tool is primarily attractive to reckless drivers and indeed encourages reckless driving,” ignoring both the varied, user-defined applications of the filter, and its inbuilt warning. He contends that plaintiffs in the speed filter lawsuits are “complaining about a reckless platform design decision” rather than anything “specific to speech.” However, Maynard and similar suits hinge on platform design’s facilitation of user speech. Snapchat is being sued upon the belief that it contributed to the plaintiffs’ injuries by providing a tool that allows speakers to easily tell others how fast they’re moving. Any remedy would involve limiting the sorts of speech that Snapchat can host.

Section 230 was intended to protect the creation and operation of communicative tools like Snapchat. In Maynard, litigants attempt to circumvent Section 230 by, in essence, suing over the unpublished use of Snapchat, alleging that Section 230 should not apply because McGee never actually published the photos taken before the crash. If merely creating a tool that can be used illegally or dangerously opens platforms to liability, Section 230 offers little real protection, and such a determination would imperil more than camera-speedometer amalgamations. Responsibility for one’s behavior — be it the dangerous acts pictured above, or the reckless driving at issue in Lemmon and Maynard — should rest with the individual.

Will Duffield is a Policy Analyst at the Cato Institute

Posted on Techdirt - 29 May 2020 @ 11:07am

Two Cheers For Unfiltered Information

In the early hours of December 31st, 2019, weeks before the coronavirus was recognized as a budding pandemic, Taiwanese Centers for Disease Control Deputy Director Luo Yijun was awake, browsing the PTT Bulletin Board. A relic of 90s-era hacker culture, PTT is an open source internet forum originally created by Taiwanese university students. On the site’s gossip board, hidden behind a warning of adult content, Luo found a discussion about the pneumonia outbreak in Wuhan. However, the screenshots from WeChat posted to PTT described a SARS-like coronavirus, not the flu or pneumonia. The thread identified a wet market as the likely source of the outbreak, indicating that the disease could be passed from one species to another. Alarmed, Luo warned his colleagues and forwarded his findings to China and the World Health Organization (WHO). That evening, Taiwan began screening travelers from Wuhan, acting on the information posted to PTT.

A niche Internet forum, not the WHO or Chinese Communist Party (CCP), notified Taiwan, and the world more broadly, of the seriousness of COVID-19 – the disease caused by the new coronavirus. The same day, Wuhan’s Municipal Health Commission described the disease as pneumonia and cautioned against assumptions of human-to-human transmission. While Chinese health authorities downplayed the seriousness of the outbreak, a lightly governed website helped information about the disease to escape China’s Great Firewall. As viral misinformation inspires skepticism of free speech in the west and conservative legal scholars express admiration for China’s system of information control, this episode illustrates the value of unfiltered speech.

PTT’s gossip board is not fact checked by experts, and while the board has some rules, it is a place for gossip rather than verified information or news. The forum is governed far more liberally than contemporary social media platforms with extensive community standards and tens of thousands of paid moderators. While bulletin boards have largely fallen out of favor with western Internet users, PTT is probably most comparable to 4chan, the Something Awful forums, or Hacker News. In the past, it has hosted leaked government surveillance proposals, and Chinese officials have recently complained about the site as a source of abusive speech about the WHO.

There is a real difference between lightly governed or unmoderated spaces, essentially ruled by the First Amendment (which inevitably play host to the good, the bad, and the ugly) and platforms that are specifically curated to highlight vulgar or illiberal content. 4chan contains image boards dedicated to fashion, travel, umpteen forms of Japanese animation, and /pol/, a board for politically incorrect conversation that receives an outsized amount of attention in mainstream media. The Daily Stormer is a blog for white nationalists. We must resist the urge to condemn ungoverned fora alongside badly governed ones simply because both provide platforms for noxious speech.

Because the Daily Stormer is specifically curated to highlight neo-Nazi speech, we can safely assume that it won’t host valuable information. Its gatekeepers explicitly select fascistic speech for publication before the content goes live and are unlikely to grant a platform to anything else. It certainly isn’t a hangout for anonymous epidemiologists. 4chan, on the other hand, contains its fair share of extremist speech but the platform is not moderated by fascists, nor, for the most part, anyone at all. 4chan hosts almost any sort of speech; despite being unverified, useful information may still be posted there. Due to its lack of formal gatekeeping, users’ comments are not screened for either accuracy or good taste. As a result of 4chan’s norm of anonymous participation, prominence, and popularity with particularly active internet trolling communities in the mid-aughts, the site gained a reputation as an informational free-for-all, rendering it a useful dumping ground for both leaks of authentic nonpublic information and unhinged conspiracy.

Even as its prominence has diminished, 4chan’s reputation ensures that it remains a popular space to share privileged information, often in concert with other essentially unmoderated publication services such as Pastebin. Last year, news of Jeffrey Epstein’s death was first leaked on the site. While it can be difficult to prove the veracity of any one claim, the existence of such a place — an ungoverned information clearinghouse — has undeniable value. Ungoverned fora allow arguments, assertions, and media to be freely shared and considered without granting undue authority to unproven claims.

Because users participate anonymously or pseudonymously, they cannot rely upon, and subsequently do not risk, their permanent personal reputations and credentials. Likewise, it is the very popularity of these message boards as information clearinghouses that makes them attractive to bad actors. If you want to publish a sensitive message, for good or for ill, lightly moderated platforms are good tools for the job.

Although these platforms may spread disinformation, if read with a healthy dose of skepticism the content they carry is not per se dangerous. Crucially, they fail differently than, in this case, Chinese state health authorities, which had political reasons to downplay the seriousness of the outbreak. Rather than providing filtered, authoritative information that can cause widespread harm if incorrect, such as the WHO recommendations against mask use published throughout March, open fora host many unfiltered claims that, without supporting evidence, carry little authority whatsoever. A healthy information ecosystem will contain both trustworthy authorities and bottom-up information distribution networks that can correct institutional failures. In a world in which seemingly authoritative sources are not trustworthy, unfiltered platforms will gain credence, for good and ill.

However, as Luo Yijun’s late night discovery on PTT demonstrates, unverified information can inform and illuminate, especially in the absence of trustworthy authoritative information. Furthermore, if used effectively, open-source information hosted on ungoverned platforms can enhance the capability and legitimacy of traditional institutions, such as the Taiwanese CDC. Liberally governed platforms are often blamed for their role in transmitting falsity and hate but seldom lauded when they facilitate the spread of life-saving information.

Will Duffield is a Policy Analyst at the Cato Institute
