EU-Funded ‘Automated Deception Detection’ Border Security Project Concludes, But Public Aren’t Allowed To See Research Details

from the pay-up-and-shut-up dept

Note: Since the publication of this article, the iBorderCtrl website has disappeared. We have updated the links in the post to point to an archived version of the site.

Four years ago, Techdirt wrote about iBorderCtrl, a research project funded by the EU under the Horizon 2020 framework. According to the project’s Web site:

iBorderCtrl provides a unified solution with aim to speed up the border crossing at the EU external borders and at the same time enhance the security and confidence regarding border control checks by bringing together many state of the art technologies (hardware and software) ranging from biometric verification, automated deception detection, document authentication and risk assessment.

“Automated deception detection” is just a fancy name for lie detection, but with the twist that it uses AI to analyze “non-verbal micro-gestures”. As the earlier Techdirt article pointed out, there’s no hard evidence this approach works. Even the project’s FAQ admits there are issues:

With regard to iBorderCtrl, it can be concluded that the automatic deception detection system (ADDS), which relies on AI, poses various risks with regard to fundamental rights. As the iBorderCtrl cannot provide 100% accuracy, there is always a risk of false positives (people being falsely identified as deceptive) and false negatives (criminals being falsely identified as truthful). This is true of any decision-making system, including those where classifications are made by humans. This might also lead to a stigmatisation or prejudice against affected persons, for instance when talking to a real border guard afterwards.
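
The false-positive risk the FAQ mentions is worth unpacking, because at a border the base rate dominates: genuine deceivers are a tiny fraction of travellers, so even a mostly-accurate system will flag far more honest people than liars. Here is a rough back-of-the-envelope sketch – every number in it is an illustrative assumption, not a figure from the project:

```python
# Back-of-the-envelope base-rate arithmetic. All numbers here are
# illustrative assumptions, not figures from iBorderCtrl.

def flag_counts(travellers, deceiver_rate, sensitivity, specificity):
    """Return (true positives, false positives) for a screening system."""
    deceivers = travellers * deceiver_rate
    honest = travellers - deceivers
    true_positives = deceivers * sensitivity      # actual deceivers flagged
    false_positives = honest * (1 - specificity)  # honest travellers flagged
    return true_positives, false_positives

# Assume 100,000 crossings, 1 traveller in 1,000 being deceptive, and a
# (generously assumed) 90%-accurate system in both directions.
tp, fp = flag_counts(100_000, 0.001, sensitivity=0.90, specificity=0.90)
print(f"deceivers caught: {tp:.0f}, honest people flagged: {fp:.0f}")
# -> deceivers caught: 90, honest people flagged: 9990
```

On those assumptions, over 99% of the travellers flagged as “deceptive” would in fact be honest – exactly the stigmatisation risk the FAQ alludes to.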

The FAQ also makes clear that the two main tests of the system, at borders in Hungary and Latvia, have concluded, and that there are no plans to roll it out anywhere in the EU, not least because there are unresolved legal and ethical issues with its approach:

How far the system, or parts of it, will be used at the border in the future will need to be defined. It should be also noted that some technologies are not covered by the existing legal framework, meaning that they could not be implemented without a democratic political decision establishing a legal basis. At the time of doing so, appropriate safeguards would also need to be considered, e.g. as proposed by the iBorderCtrl Consortium, to ensure the system operates with full respect for human rights.

The iBorderCtrl project received €4.5 million from the EU, so it would not be unreasonable for EU citizens to be able to see what their money was used for. In 2018, Patrick Breyer, a member of the European Parliament, requested access to documents held by the European Commission regarding the development of iBorderCtrl. As Article 19 recounts:

The REA [EU Research Executive Agency] – the agency at the helm of the iBorderCtrl project – granted Breyer full access to one document and partial access to another. They denied him access to numerous additional documents, citing the protection of the commercial interests of a consortium of companies collaborating with the REA on the project.

Breyer challenged this decision, pointing out that there was a strong public interest in having access to information about projects that used controversial technology, as iBorderCtrl certainly did. The EU’s so-called “General Court” published a ruling in December 2021:

the General Court established that a number of access requests denied to Breyer were not sufficiently justified by the REA. While the Court’s recognition of public interest in the democratic oversight of the development of surveillance and control technologies is a step in the right direction, the decision did not go far enough and Breyer appealed. In its decision, the Court suggested that such democratic oversight should begin only after these types of research and pilot projects were concluded. In other words, the Court failed to acknowledge the importance of ensuring transparency is in place at the outset of taxpayer-funded projects with immense impact on citizens, rather than when research and development has already been completed.

Breyer took his case to the main EU Court of Justice (CJEU), which has just issued its judgement:

While the CJEU did recognise that the fact that the obligation of participants in the iBorderCtrl project to respect fundamental rights is not grounds to assume that they will automatically do so, it maintained that because this was a research project, the public’s right to know about the results – rather than the process of the research – was sufficient.

The CJEU agreed with the General Court that the commercial interests of the REA outweighed the public interest. But as a listing of the documents requested indicates, several were about the ethics of the project, and others were general progress reports. It seems entirely legitimate for that information to be available to the public: claims of commercial interests should not be able to stymie the crucial oversight of new surveillance and control technologies funded by taxpayers.

More generally, the idea that people can only see the results of publicly-funded research is absurd. Today there is a recognition that scientific research needs to be open – not just in its results, but in its entire process. The CJEU ruling that the public can only ask to see the results is retrogressive, and a return to the bad old days of science funding, when the public was expected to pay up and then shut up.

Follow me @glynmoody on Mastodon.



Comments on “EU-Funded ‘Automated Deception Detection’ Border Security Project Concludes, But Public Aren’t Allowed To See Research Details”

13 Comments
Anonymous Coward says:

Bad links

The “iBorderCtrl” link in the first sentence gives the error “This account has been suspended … Either the domain has been overused, or the reseller ran out of resources”; the other links on that domain give “The requested URL was not found on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.”

Rocky says:

It seems entirely legitimate for that information to be available to the public: claims of commercial interests should not be able to stymie the crucial oversight of new surveillance and control technologies funded by taxpayers.

The cynic in me: That depends on whose commercial interests may be negatively affected if the information becomes public knowledge.

Anonymous Coward says:

Re:

I’d argue that it’s in the public interest to know the process and the results so we’re not doomed to repeat this expenditure of taxpayer money.

If we just know the results, there’s nothing stopping the same players (or different ones) from getting a similar grant in the future to repeat the study, including the same known flaws.

That One Guy (profile) says:

'Yes it works, no you can't see the methodology to double-check my work'

They denied him access to numerous additional documents, citing the protection of the commercial interests of a consortium of companies collaborating with the REA on the project.

Nothing screams ‘bogus science/tech’ quite like only letting people see the results but not the methods of testing, and the fact that they are hiding that data to protect company profits does not make that argument weaker.

Anonymous Coward says:

Re:

Nothing screams ‘bogus science/tech’…

Indeed, my thought as well.

It seems that if science and/or research of any kind wants to earn and deserve a good reputation in the minds of most people – becoming able to easily override, dispel and send “alternate facts junk science” to the dust bin – then scientists should not even think of hiding anything, particularly that which was paid for on the public’s dime.

“Oh, but our corporate masters won’t let us reveal anything” doesn’t cut it. Your real masters are those who paid for the job, not a bunch of Harvard MBAs. Wise up and stop acting like musicians in fear of their gatekeepers; that act is getting old.

And BTW, you government types that fall for this ‘profit protection’ scheme (aka a con job), you need to remember who pays your wages too.

PaulT (profile) says:

“it uses AI to analyze ‘non-verbal micro-gestures’”

Cool, so new travellers, people on the spectrum or from a different cultural background get flagged, while those who had some training or experience get through? All dependent on which dataset the AI was trained on, of course, which we know from experience with facial recognition might “accidentally” be geared toward white westerners?

Doesn’t sound much more efficient to me, unless there’s a certain demographic you’re trying to find an excuse to harass, of course.

Anonymous Coward says:

The real deception is claiming you can detect deception

WHOA HOLDUP!

This is very, very bad. Anyone who tells you that they can automatically detect deception is flat out lying. Deception detection is a HIGHLY studied field.

This is my exact bailiwick, as I did my Master’s on counter-deception, so let me briefly tell you what literally all the research says.

1) No method of detecting deception has done better than about 50% (minor error bars of a few percent depending on the study). This includes trained personnel such as police officers. (See the sketch after this list for what chance-level accuracy implies.)
2) What deception “tells” MAY exist are culturally specific. There are no universal “tells”. So this is an example of unconscious bias going into an AI system, at best.
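
To make point 1 concrete, here is a minimal Bayes calculation (the base rate is my own assumed number, not one from the literature) showing that a chance-level detector conveys no information at all:

```python
# P(actually deceptive | flagged), via Bayes' theorem. The base rate is an
# assumed illustrative number, not one taken from the research literature.

def posterior(prior, sensitivity, specificity):
    """Probability a flagged traveller is actually deceptive."""
    flagged = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / flagged

base_rate = 0.001  # assume 1 in 1,000 travellers is deceptive
print(posterior(base_rate, 0.5, 0.5))  # -> 0.001: identical to the base rate
print(posterior(base_rate, 0.9, 0.9))  # -> ~0.009: still under 1%
```

At 50/50 the flag changes nothing: the posterior equals the prior, so the system might as well flip a coin.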

Rocky says:

Re:

Some anecdotal evidence from an experience I had many years ago:

I worked all over Europe in the ’90s, which meant a lot of flying. One time, due to my itinerary, I hadn’t eaten anything during the whole day, so I bought a kebab from a “hole in the wall”. Let’s just say it didn’t sit well with me, which I discovered when I finally arrived at the airport.

You all know the feeling, you have an upset and cramping stomach, you are all clammy with dark saucers under your armpits, looking pale and uncertain.

I missed my flight sitting in an “interview room” talking to a customs officer who asked me questions about where I had been, what I had been doing, why I travelled to so many countries (lots of stamps in the passport) and so on – during which his colleagues were tearing through my luggage.

The interview ended when I finally bazooka-barfed in the waste-basket – I guess he didn’t like the sour smell of a partially digested kebab.

The reason they took me in for an “interview” was that I looked nervous and guilty, which begs the question: if several trained customs officers can’t tell the difference between being sick and being nervous, why do some think an “AI” can?
