Blaming The Messenger (App): WhatsApp Takes The Blame In India Over Violence
from the this-doesn't-help dept
You may have heard over the past few weeks that there’s been mob violence in India in response to totally false information being spread. But if you’ve heard about it, it’s almost certainly been in conjunction with a lot of finger-pointing, not at the people spreading the misinformation, or those, you know, lynching people based on false information. Instead, the blame is being squarely placed… on the app where the misinformation is being spread: WhatsApp.
A mob in India lynched five people after rumors spread by WhatsApp messages prompted suspicion that they were child abductors, the latest in a spate of violent crimes linked to the messaging service.
The victims were killed in Dhule district of the western state of Maharashtra on Sunday morning after locals accused them of being part of a gang of “child lifters,” police said.
It was the fourth time in recent weeks that WhatsApp messages have inspired deadly attacks in India.
This has resulted in many, many calls for WhatsApp (and its parent company, Facebook) to “do something” about this. Indeed, the Indian government has more or less demanded that WhatsApp stop “false messages” from being spread on its app. Of course, that’s… not easy. It’s not easy for a variety of reasons, both technical and cultural. On the technical side, WhatsApp is (famously, and for very good and helpful reasons) using end-to-end encryption. So no one at WhatsApp/Facebook can see what’s in those messages. That’s a good thing (especially for everyone whining about how Facebook sucks up too much data about us). No one should want WhatsApp to backdoor that encryption in any way, because that just creates even more problems.
And then of course, there’s the cultural side of this. Even if WhatsApp could read the messages, how could it possibly know what was legit and what was not? And how could it determine that fast enough to stop a mob from going nuts?
WhatsApp has tried to explain all of this to the Indian government — and rather than understanding these issues, many people seem to be screaming about how this is Facebook/WhatsApp “ignoring” its responsibility.
That doesn’t mean things can’t be done. Nikhil Pahwa wrote up a thoughtful analysis of how best to tackle the problem, noting (correctly) upfront that “This is a complex problem with no single solution: there is no silver bullet here.” Importantly, Pahwa notes that many of the “solutions” are not dependent on WhatsApp doing anything, but rather on better law enforcement, counter-speech efforts, user education and more. He does have some suggestions for how WhatsApp could make a few changes that would create a level of friction for public messages and publicly sharing content — including tagging public messages with a unique ID tied to the original message creator.
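To make the tagging idea concrete, here is a minimal sketch of how an origin ID could travel with a forwarded message. Everything here is an assumption for illustration (the function names, the hash-plus-nonce scheme, the message shape); it is not Pahwa's spec or anything WhatsApp has implemented. The key property is that forwarding copies the tag unchanged, so a viral message can be traced to its first sender without anyone reading its content in transit:

```python
# Hypothetical sketch of a "unique ID tied to the original message creator".
# All names and the tagging scheme are illustrative assumptions.
import hashlib
import os

def create_origin_tag(creator_id: str) -> str:
    # Opaque tag: hash of the creator ID plus a random per-message nonce,
    # so the tag reveals the origin only to whoever holds the mapping.
    nonce = os.urandom(16).hex()
    return hashlib.sha256(f"{creator_id}:{nonce}".encode()).hexdigest()

def forward(message: dict) -> dict:
    # Forwarding copies the body AND the original tag, unmodified.
    return {"body": message["body"], "origin_tag": message["origin_tag"]}

original = {"body": "public broadcast", "origin_tag": create_origin_tag("user-123")}
twice_forwarded = forward(forward(original))
assert twice_forwarded["origin_tag"] == original["origin_tag"]  # tag survives re-sharing
```

Note that this is exactly the design choice Salil Tripathi's objection (below) targets: a tag that survives forwarding is also a tag that can be used to unmask a dissident who originated a message.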
But… there are also potential unintended consequences with these approaches. And others reasonably point out that activists and dissidents could potentially be seriously hurt by some of the proposed suggestions:
The suggestions @nixxin makes are well-meant and this debate is overdue. But it complicates the lives of dissidents who want anonymity. They’re unsafe even in countries with the rule of law. I’m not sure what the answer is though. 1/2 https://t.co/1da4JmPSsx
— Salil Tripathi (@saliltripathi) July 3, 2018
And WhatsApp does appear to be trying to do something. A new version apparently includes a “suspicious link detector.” If you’re wondering how that’s possible with end-to-end encryption, it works locally on your phone. Of course, that also probably limits its effectiveness. It appears to at least notice “suspicious” characters that are designed to mimic more standard characters and spoof better-known sites. But it’s unclear how much that will actually help.
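For a sense of what an on-device check like that might look like, here is a deliberately simple sketch. It assumes the detector just flags URLs whose hostname contains non-ASCII characters (a common homoglyph trick, e.g. a Cyrillic “а” standing in for the Latin “a”); WhatsApp's actual logic is not public, so treat this as an illustration of the general technique, nothing more:

```python
# Illustrative homoglyph check, NOT WhatsApp's actual implementation.
from urllib.parse import urlparse

def is_suspicious_url(url: str) -> bool:
    """Flag a URL whose hostname contains non-ASCII characters,
    which could be lookalikes for Latin letters in a familiar domain."""
    host = urlparse(url).hostname or ""
    # Any non-ASCII character in the hostname is treated as suspicious here;
    # a real detector would be more careful (legitimate IDNs exist).
    return any(ord(ch) >= 128 for ch in host)

print(is_suspicious_url("https://exаmple.com/login"))  # Cyrillic 'а' -> True
print(is_suspicious_url("https://example.com/login"))  # all ASCII   -> False
```

Even this toy version shows the limitation noted above: because the check runs locally on an already-decrypted message, it can only catch crude spoofing, not judge whether the content itself is false.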
Thankfully, at least some are pointing out that blaming WhatsApp makes no sense, and the country’s own government really has itself to blame.
The fact that such misinformation not only fuels citizens’ paranoia, but also causes them to take matters into their own hands in droves, is indicative of a lack of faith in the machinery meant to maintain law and order in the country, a lack of understanding of the consequences of participating in these activities, and an inability to find truth beyond the realm of their messaging inbox.
That article, at The Next Web, by Abhimanyu Ghoshal points out that rather than the Indian government demanding WhatsApp fix the problem, it might want to consider using WhatsApp to try to counter the narrative:
Instead of blaming WhatsApp, India’s government needs to tackle the larger issues that are making its people paranoid and vulnerable to the viral spread of lies. Hell, it could even use WhatsApp to do that.
Last year, the Bharatiya Janata Party, which is currently in power in the country, was reportedly working to set up roughly 5,000 WhatsApp groups to spread its campaign messaging for the 2018 assembly elections across the southern state of Karnataka, which is home to some 61 million people.
For starters, it should launch a campaign to encourage people to question the veracity of information they receive via social media and messaging platforms. It also needs to remind people about the laws that they must adhere to within the country’s borders.
It’s obviously problematic that misinformation is leading to such violence and death. And, obviously, there’s a lot of interest in how these messages are spreading so rapidly using apps like WhatsApp. But we shouldn’t get so focused on the shiny new thing as the actual point of failure. There are much larger societal and governmental issues at play. Blaming the app may be politically convenient, but it is not accurate, and is unlikely to help in either the short or the long run.