Dozens Of Tech Experts Tell DHS & ICE That Its Social Media Surveillance And Extreme Vetting Should Be Stopped

from the bad-policies dept

Last week dozens of well-known technologists sent a letter to Homeland Security arguing that Immigration & Customs Enforcement’s (ICE) plan to use technology for “extreme vetting” is a really, really dumb idea.

According to its Statement of Objectives, the Extreme Vetting Initiative seeks to make “determinations via automation” about whether an individual will become a “positively contributing member of society” and will “contribute to the national interests.” As far as we are aware, neither the federal government nor anyone else has defined, much less attempted to quantify, these characteristics. Algorithms designed to predict these undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity.

Inevitably, because these characteristics are difficult (if not impossible) to define and measure, any algorithm will depend on “proxies” that are more easily observed and may bear little or no relationship to the characteristics of interest. For example, developers could stipulate that a Facebook post criticizing U.S. foreign policy would identify a visa applicant as a threat to national interests. They could also treat income as a proxy for a person’s contributions to society, despite the fact that financial compensation fails to adequately capture people’s roles in their communities or the economy.

The Extreme Vetting Initiative also aims to make automated determinations about whether an immigrant “intends to commit” terrorism or other crime. However, there is a wealth of literature demonstrating that even the “best” automated decisionmaking models generate an unacceptable number of errors when predicting rare events. On the scale of the American population and immigration rates, criminal acts are relatively rare, and terrorist acts are extremely rare. The frequency of individuals’ “contribut[ing] to national interests” is unknown. As a result, even the most accurate possible model would generate a very large number of false positives – innocent individuals falsely identified as presenting a risk of crime or terrorism.
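The base-rate problem the letter describes can be checked with simple arithmetic. As an illustrative sketch (the numbers below are hypothetical, not from the letter): even a classifier that is right 99% of the time drowns in false positives when the event it predicts is rare.

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not from the
# letter): a highly accurate screening model still flags far more innocent
# people than actual threats when the threat is rare.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screening model."""
    actual_positives = population * prevalence
    actual_negatives = population - actual_positives
    true_positives = actual_positives * sensitivity
    false_positives = actual_negatives * (1 - specificity)
    return true_positives, false_positives

# Suppose 10 million applicants, 1 in 100,000 an actual threat, and a
# model that is 99% sensitive and 99% specific.
tp, fp = screening_outcomes(10_000_000, 1e-5, 0.99, 0.99)
print(f"true positives:  {tp:,.0f}")   # ~99 real threats caught
print(f"false positives: {fp:,.0f}")   # ~99,999 innocent people flagged
```

In this sketch, for every real threat the model catches, it wrongly flags roughly a thousand innocent people, which is exactly the failure mode the letter warns about.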

In short, this is the tech world telling DHS and ICE that their belief that there’s a “nerd harder” solution to using computers and algorithms to sniff out terrorists is a load of pure hooey. It may be true, as Arthur C. Clarke once stated, that “any sufficiently advanced technology is indistinguishable from magic,” but the corollary does not apply: not all magical solutions can be implemented in technology. It’s kind of ridiculous that actual technologists were needed to explain this to DHS, but that’s where things are these days.



Comments on “Dozens Of Tech Experts Tell DHS & ICE That Its Social Media Surveillance And Extreme Vetting Should Be Stopped”

Anonymous Coward says:

It was never going to work

Social media sites contain millions to hundreds of millions of fake profiles. Facebook recently admitted that it has 200 million.

I think that’s low by at least a factor of two, possibly low by a factor of four.

Twitter and the others are no different. A combination of automated and manual strategies makes it trivially easy for anybody to create an arbitrary number of them at a high rate.

Given that, and given the DHS’s history of conflating people with each other based on name similarities, it would be a nearly-trivial task to create, let’s say, 50 million fake profiles carrying the same names as 50 million real people and then stuff those profiles full of data suggesting connections to terrorist organizations. DHS will acquire all this data, feed it into automated analysis, and draw 50 million wrong conclusions.

Rekrul says:

I have no social media accounts, unless you count posting comments on YouTube. Yes, technically I have a YouTube “channel” since you need to have an account to post comments and it assigns you a channel automatically, but there’s nothing on it. There’s also nothing on my obligatory Google+ account, which is under the name Ben Dover.

I don’t use Facebook or Twitter. I post on a couple forums dedicated to retro video game and computer systems, but that’s about it.

Anonymous Coward says:

Re: Re:

Oh, you don’t have any social media accounts? Okay.

You’re the perfect target for the kind of attacker I mentioned above. All they have to do is compare “lists of people with social media accounts” to “lists of people” and note the differences. Sure, some of those in the diff will be there because the algorithm failed (although: it can be refined by using multiple lists) but a lot of them will be people just like you…who didn’t have a social media account…UNTIL NOW.
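The “diff the lists” attack described here is, at bottom, a set difference. A minimal sketch (all names and lists below are hypothetical; real matching would have to handle name variants and misspellings):

```python
# Hypothetical sketch of the attacker's first step: anyone on a public
# roster with no matching social media account is a candidate for an
# impersonation profile.

public_roster = {"alice reyes", "ben dover", "carol wu", "dan ortiz"}
social_accounts = {"alice reyes", "carol wu"}

# Set difference: everyone on the roster who has no account yet.
targets = public_roster - social_accounts
print(sorted(targets))  # ['ben dover', 'dan ortiz']
```

Everything after that (creating the profiles, stuffing them with incriminating data) is just scaling this lookup up, which is the commenter’s point.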

Nothing stops them from setting up one with your name on it. And even IF you notice, and IF you can reach someone at one of those sites, and IF they decide to take action, then it may still be too late to stop ersatz data associated with you from being vacuumed up.

Discuss It (profile) says:

Pig. Leg. Wrong.

Extreme Vetting isn’t about anything more than finding “something” to exclude those who don’t look like you. It has nothing at all to do with “security”, nothing at all to do with accurately assessing a security risk, and everything to do with being able to slap a “DANGEROUS” label on pure xenophobia.

This won’t make us “safe”; all it does is allow us to exclude those we don’t want. Interestingly, when it suited the government to do so, they allowed card-carrying Nazis to enter the US and gave them citizenship. (whispers) Wernher von Braun.

Zgaidin (profile) says:

Re: Pig. Leg. Wrong.

While you’re almost certainly correct in this case, Mike’s point about Clarke’s Law and its reverse is still valid, and something to bear in mind as policy makers try to legislate uses for technology.

As with any field of knowledge both broad and deep (say, medicine and its related sciences), the wider and deeper the field grows, the more impossible it is for even an “expert” to know everything about it. If you’re a professional app developer, you probably know a ton about APIs, app dev languages, and the hardware and firmware in the products you develop for. That doesn’t mean you know much more than any layman on the street about AI development, how to code a robust encryption algorithm, or how to design a new chipset. My ENT is a good doctor, but his knowledge of neurology is limited to whatever he learned in med school and his internship, and even that is at least 10+ years out of date. He’s not really qualified to make a neurological diagnosis or prescribe medicine to treat a neurological disorder.

If, like these legislators, you’re not a tech expert of any kind, it all looks like magic, and the temptation to believe that it can be made to do whatever you want is real. I’m not a tech expert, but I’ve been tinkering with computers as a hobby since 300 baud modems were the order of the day. Beyond setting up a new desktop and doing slightly more advanced troubleshooting than a clueless layman could, my tech knowledge can most adequately be summed up as “there are some things current technology can’t do.” That’s what my 20+ years of fiddling around with these machines has earned me, and I’m fine with admitting it. These guys don’t even have that going for them, which makes even their well-intentioned ideas (which I agree this is probably not) frightening.

That Anonymous Coward (profile) says:

We saw this in a movie/tv show so it has to be totally possible.
They live in a fantasy land detached from reality, where life is like it is in the movies.

Terrorists have that little mask-printing machine from Mission: Impossible, but they don’t replace the iris, so that biometric can save us!

We can’t hold our first line of defense employees to any standards, because that might lower morale among them. We overlook the civil rights abuses, the theft rings, the selling of access to drug dealers, to keep them happy… because someone hired off of a pizza box & trained in the use of the terrorist-detecting rock is going to save us all.

We don’t live in the movies.
The good guys aren’t limited to wearing a white hat all the time & the bad guys don’t have to wear a black hat to make themselves identifiable.
I’m not a terrorist, but I can offer up 4 different clean online personas to be vetted… and I’m not even trying to do terrorist stuff (well, I’ve terrorized some lawyers). I’ve had decades to learn to build fallback identities that hold up to scrutiny; you expect some software is going to unravel it? You’re dumb.

Zgaidin (profile) says:

Re: Re:

I was thinking about something like this while reading the article from yesterday about Google collecting info while an Android device wasn’t networked.

I don’t know enough about the tech (and am willing to admit that) to know if this is possible, but could someone build an app that peer-to-peer swapped random information, like bits of GPS data, small sections of browser history, etc. with every other user of the app on a regular basis? Basically, since we can’t stop the snoopy bastards from digging through our hay piles, can we make our hay piles so big and so full of fake needles that searching them becomes worthless?
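The swap the commenter imagines is at least mechanically simple. A minimal sketch under that assumption (the record format and the `trade_chaff` helper are hypothetical, purely to illustrate the idea, and ignore the hard parts: transport, trust, and making chaff statistically plausible):

```python
# Hypothetical sketch of the "fake needles" idea: each client keeps a
# pool of plausible-looking records and periodically trades a random
# sample with a peer, so every profile accumulates data that isn't its own.
import random

def trade_chaff(my_pool, peer_pool, k=2, rng=random):
    """Swap k randomly chosen records between two pools, in place."""
    mine = rng.sample(sorted(my_pool), k)
    theirs = rng.sample(sorted(peer_pool), k)
    my_pool.difference_update(mine)
    peer_pool.difference_update(theirs)
    my_pool.update(theirs)
    peer_pool.update(mine)

a = {"gps:47.6,-122.3", "url:example.com/news", "gps:40.7,-74.0"}
b = {"url:example.org/recipes", "gps:34.0,-118.2", "url:example.net/forum"}
trade_chaff(a, b)
# Both pools are still the same size, but each now holds records that
# originated with the other user.
```

After enough rounds across enough users, any record attached to a profile says very little about that profile’s owner, which is the poisoned-haystack effect the commenter is after.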

Bergman (profile) says:

There would be no need to nerd harder

If the police would just cop harder. Obviously, they should be able to catch any bad guy ever in the space of an hour or two. And if they can’t do that, we should fire them and hire someone who can. Why spend tax money on lazy cops, when we just need to get them to cop harder?

After all, Hollywood has taught us that an hour (two hours for season finale level bad guys) is more than enough to unravel even the most fiendish plot, just like it has taught us that an hour is enough time to patch any security hole and track down any hacker.
