Learning About Content Moderation From Ghosts In Virtual Reality
from the ghosts-in-the-machine dept
Content moderation in virtual reality comes with its own unique challenges. What works for the moderation of text and video doesn't neatly translate into VR. In late June, Facebook's Horizon, a VR social space still in beta testing, released an update to prevent its blocking feature from creating ghosts. That might sound hyperbolic, but it is a perfectly apt description of the feature's effect in Horizon prior to the update. In the earlier build, both the blocker and the blocked were made invisible to one another, but allowed to continue interacting with the same virtual world. While they couldn't see one another, they could see each other's effects on their shared environment. If someone blocked you, your obscene gestures might be invisible to them, but you could still move the furniture about and rattle chains, practically becoming a poltergeist.
Improvements to Blocking in Horizon
We're beginning to roll out changes to how blocking works in Horizon. These changes are based on people's feedback, and are designed to improve people's experience and make Horizon a safer and more welcoming place.
Previously, when you blocked someone in Horizon, both you and the person you blocked became invisible to each other. We heard feedback from people that this was confusing, for example when the other person continued interacting with objects in the same space.
Now, both the person who has been blocked and the person who blocked them will be able to see each other's username tag, while keeping their avatars invisible to each other. This update allows both people to know they're present, but blocked and muted.
You'll also be able to see the people you've blocked in your menu (such as in the People Nearby list) instead of them being completely hidden. This means you can see who you've blocked without having to interact with them. You can also visit the settings page and see a block list, where you can see people you've blocked and choose to unblock them if you want.
As the patch notes explain, when used in VR, the traditional approach to blocking caused unintended problems. Unlike static social media profiles, users embody their avatars. The user's digital representation mimics their motions and gestures as it moves through a shared virtual world. On traditional social media, blocking another user hides your speech from their view and limits their ability to reply. Hiding your avatar from their view is a logical translation of this policy to virtual reality. However, because the invisible-to-one-another blocker and blocked still shared the same virtual world, a malicious user could potentially block someone to haunt them or spy unobserved. Tagging speech from blocked users would be unnecessary on traditional social media, as their speech is already excluded from the blocker's conversations. However, in a shared virtual environment, it becomes a necessary component of a useful blocking feature.
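To make the tradeoff concrete, the updated rule described in the patch notes can be modeled as a simple visibility function. This is a hypothetical sketch, not Horizon's actual implementation: the `User`, `block`, and `visibility` names are illustrative. The key property is that a block in either direction hides both avatars and mutes audio, but leaves the username tag visible, so neither party can lurk unobserved.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    # Usernames this user has blocked (illustrative, not Horizon's data model)
    blocked: set = field(default_factory=set)

def block(blocker: User, target: User) -> None:
    blocker.blocked.add(target.name)

def is_blocked_pair(a: User, b: User) -> bool:
    # A block in either direction affects both parties symmetrically.
    return b.name in a.blocked or a.name in b.blocked

def visibility(viewer: User, other: User) -> dict:
    """What `viewer` perceives of `other` under the updated rules."""
    if is_blocked_pair(viewer, other):
        # Avatars hidden and audio muted, but the name tag stays visible,
        # so both people know the other is present.
        return {"avatar": False, "audio": False, "name_tag": True}
    return {"avatar": True, "audio": True, "name_tag": True}

alice, bob = User("alice"), User("bob")
block(alice, bob)
print(visibility(alice, bob))  # {'avatar': False, 'audio': False, 'name_tag': True}
print(visibility(bob, alice))  # the same: the effect is mutual
```

Under the pre-update rule, the `name_tag` field would also have been `False` for a blocked pair, which is exactly what allowed the "poltergeist" effect: full mutual invisibility inside a still-shared world.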
While these are far from life-threatening abuses, they illustrate why best practices for traditional content moderation can't always be easily applied to VR. In many ways, Facebook Horizon's moderation challenges look more like those of a video game, especially a massively multiplayer online game (MMO), than those of a traditional social network. In both cases, players interact through avatars, and can simultaneously affect the same virtual world. In either a game or a shared VR world, the properties of the digital environment govern player interactions as much, if not more than, rules about players' speech. This often introduces tradeoffs between antiharassment measures and realism or interactivity. If a game models fire realistically, a malicious player might kick a campfire into another's tent and set it ablaze. This can be avoided by limiting either players' ability to interact with fire (stopping them from kicking it), or the properties of the fire itself (preventing it from burning the tent). Environmental design choices in games or VR somewhat resemble architectural choices faced by traditional platforms: whether to create retweet or quote tweet functions, or to allow users to control who can reply to their tweets. However, creating an interactive virtual world requires making many more of these decisions.
MMOs are typified as either "theme park" or "sandbox" games. In the former, designers set fixed goals for players to compete or cooperate towards, justifying referee-like governance. The latter offers players a set of tools, and expects them to make their own fun, limiting the need for intercession by designers. Conflict between players with different goals is an expected part of the fun.
While platforms for knitting patterns or neighborhood conversation have purposes that recommend some rules over others, more open-ended platforms have struggled to justify their rules. YouTube is a home for video content. Which video content? Who's to say? VR is, for the time being, mostly used for gaming. However, as social and commercial applications of the technology become more popular, this question of purpose will become politically relevant, as it has for YouTube.
Horizon's chief product is a framework for users to create their own virtual worlds. Horizon exists not to provide a Facebook-designed environment, but to offer users the ability to create their own environments. This gives Horizon some guiding purpose, and relieves its designers of pressure to make one-size-fits-all decisions. Because most worlds within Horizon are created by users, these users can set the rules of interactivity. Facebook has neither the time nor the resources to govern the behavior and use of every virtual tennis racket across myriad virtual spaces. However, the creators of these little worlds know whether they're creating a virtual tennis club or a garden party fighting game, and can set the rules of the environment accordingly. This will not be the last time Facebook finds that a rule that works for text and video publishing platforms falls flat in virtual reality. However, its response to the unintended effects of the block feature shows a willingness to appreciate the new demands of the medium.
Will Duffield is a Policy Analyst at the Cato Institute