Why Requiring Social Networks To Monitor Posts To Spot Terrorists Will Make It Even Harder To Catch Them
from the false-positives dept
He runs through various ways Facebook, Twitter and the rest might try to spot potential terrorists before they act -- for example, by using keywords, lists of suspicious sites, social graphs, etc. But one feature all such automated systems share is that, to avoid the risk of letting individuals slip through the net, the criteria for flagging people have to be loose. And that, inevitably, means there will be false positives:
However sophisticated these systems are, they always produce false positives, so if you are unlucky enough to type oddly, or to say the wrong thing, you might end up in a dragnet.
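To make the point concrete, here is a minimal sketch of the kind of loose keyword-based flagging described above. The watchlist and messages are invented for illustration, not drawn from any real system; real classifiers are far more sophisticated, but they face the same trade-off: loosen the criteria to avoid misses, and innocent posts get caught.

    # Hypothetical, loose keyword-based flagging. The watchlist and the
    # messages are invented for illustration only.
    SUSPICIOUS_KEYWORDS = {"attack", "bomb", "target"}

    def flag(message: str) -> bool:
        """Flag a message if it contains any watched keyword."""
        return bool(set(message.lower().split()) & SUSPICIOUS_KEYWORDS)

    messages = [
        "the film was a bomb at the box office",          # innocent: flagged anyway
        "we'll attack their defence in the second half",  # innocent: flagged anyway
        "meet me at the usual place",                     # innocent: not flagged
    ]

    for m in messages:
        print(flag(m), "-", m)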
Here's what that would mean in practice:
Data strategist Duncan Ross set out what would happen if someone could create an algorithm that correctly identified a terrorist from their communications 99.9% of the time -- far, far more accurate than any real algorithm -- on the assumption that there were 100 terrorists in the UK.
The algorithm would correctly identify the 100 terrorists. But it would also misidentify 0.1% of the UK's non-terrorists as terrorists: that's a further 60,000 people, leaving the authorities with a still-huge problem on their hands. Given that Facebook is not merely dealing with the UK's 60 million population, but rather a billion users sending 1.4bn messages, that's an Everest-sized haystack for security services to trawl.
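The arithmetic is simple enough to check. Here is a minimal sketch in Python, using only the numbers assumed above; the precision figure at the end is an extra step, showing what fraction of the people flagged would actually be terrorists.

    # Duncan Ross's arithmetic under his stated assumptions:
    # 100 terrorists, a UK population of 60 million, and an algorithm
    # that is right 99.9% of the time (i.e. a 0.1% error rate).
    population = 60_000_000
    terrorists = 100
    error_rate = 0.001  # 99.9% accuracy

    true_positives = terrorists * (1 - error_rate)           # essentially all 100
    false_positives = (population - terrorists) * error_rate

    print(f"false positives: {false_positives:,.0f}")        # ~60,000

    # The share of flagged people who are actually terrorists:
    precision = true_positives / (true_positives + false_positives)
    print(f"precision: {precision:.2%}")                     # roughly 0.17%

Run the same error rate over a billion users and, under those assumptions, the false positives climb toward a million.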
Requiring social networks to bring in any kind of automated monitoring -- the only kind that is feasible given the huge volume of posts involved -- will simply cause the intelligence agencies to be swamped with a huge number of false leads that will make it impossible to pick out the real terrorists from among the data supplied. In other words, the UK government's plans, if implemented, will just make a bad situation much, much worse.