Wherein Jean Luc Picard Learns How Not To Moderate Twitter
from the instructive-allegory dept
For those not familiar with the Star Trek: The Next Generation canon, in the episode “Hero Worship” the Enterprise receives a distress call from somewhere deep in space and, in responding, discovers a heavily-damaged ship with just one survivor. While the Enterprise crew is investigating what happened to that ship, they realize they are being pounded by energy waves, and it eventually dawns on them that these waves could destroy their ship just as they apparently destroyed the other. As the Enterprise channels more and more power to its shields to protect itself from the battering, the waves hitting the ship become more and more violent. Until finally (spoiler alert, although let’s be honest: the episode basically telegraphs that this will be the solution) Commander Data realizes that the waves are reflecting back the energy the Enterprise itself is expending, and that the solution is to cut the power or else be destroyed by the slapback.
This is a sci-fi story illustrating a phenomenon with which we’re all familiar. It’s that basic principle: to every action there is an equal and opposite reaction. And that’s what’s happening as people demand more censorship from platforms like Twitter, and then grow more outraged when platforms inevitably censor things they like. Increased calls to remove content will inevitably produce increased calls not to. And platforms’ efforts to comply with all these competing demands will only make the platform more unusable until, like the wrecked ship, it has torn itself apart to the point that it’s hardly recognizable.
As the Enterprise crew learned, solutions don’t always require figuring out how to expend more energy. Sometimes they involve disengaging from a struggle that can never be won and finding a new way to view the problem. When it comes to platform moderation, that same lesson applies.
Because just as the challenge facing the Enterprise was not actually to overpower the energy rocking it, that is not really the platforms’ challenge either. The essential, and much less pugilistic, challenge they face is to figure out how to successfully facilitate the exchange of staggering amounts of expression among an unprecedented number of people. Content moderation is one tool for doing that, but it’s not the only one available, nor is it the best one for achieving that ultimate goal. Platforms shouldn’t need to completely control the user experience; instead they need to give users the control to optimize it for themselves. Fixating on the former at the expense of the latter is doomed to be no more successful than the Enterprise doing nothing but feeding more power to the shields. In the end that wouldn’t have saved the ship, because the solution it ultimately needed was something far less antagonistic. And the same is just as true for platforms.
Internet platforms, of course, are not fictional starships. And unlike fictional starships they can’t depend on artificial intelligence to set them on the right path. Theirs is a very human exercise, one that first requires understanding the human beings who use their systems and then ensuring that the interfaces of those systems are built in accordance with how those users expect, and need, to use them.
Which is itself a lesson the story teaches. The survivor of that wrecked ship happened to be a child, who worried that he had accidentally destroyed his ship when he stumbled during a wave attack and hit a computer console as he fell. The Enterprise crew assured him there was nothing he could have done to hurt anything. The engineers who designed those consoles understood what their users needed from their interfaces, including the protection those interfaces needed to afford, and the enormous stakes if users didn’t get it. And that’s what the people building computer systems always need to do, no matter the century.