EU-Funded ‘Automated Deception Detection’ Border Security Project Concludes, But Public Aren’t Allowed To See Research Details
from the pay-up-and-shut-up dept
Note: Since the publication of this article, the iBorderCtrl website has disappeared. We have updated the links in the post to point to an archived version of the site.
iBorderCtrl provides a unified solution with aim to speed up the border crossing at the EU external borders and at the same time enhance the security and confidence regarding border control checks by bringing together many state of the art technologies (hardware and software) ranging from biometric verification, automated deception detection, document authentication and risk assessment.
“Automated deception detection” is just a fancy name for lie detection, but with the twist that it uses AI to analyze “non-verbal micro-gestures”. As the earlier Techdirt article pointed out, there’s no hard evidence this approach works. Even the project’s FAQ admits there are issues:
With regard to iBorderCtrl, it can be concluded that the automatic deception detection system (ADDS), which relies on AI, poses various risks with regard to fundamental rights. As the iBorderCtrl cannot provide 100% accuracy, there is always a risk of false positives (people being falsely identified as deceptive) and false negatives (criminals being falsely identified as truthful). This is true of any decision-making system, including those where classifications are made by humans. This might also lead to a stigmatisation or prejudice against affected persons, for instance when talking to a real border guard afterwards.
The FAQ also makes clear that the two main tests of the system, at borders in Hungary and Latvia, have concluded, and that there are no plans to roll it out anywhere in the EU, not least because there are unresolved legal and ethical issues of its approach:
How far the system, or parts of it, will be used at the border in the future will need to be defined. It should be also noted that some technologies are not covered by the existing legal framework, meaning that they could not be implemented without a democratic political decision establishing a legal basis. At the time of doing so, appropriate safeguards would also need to be considered, e.g. as proposed by the iBorderCtrl Consortium, to ensure the system operates with full respect for human rights.
The iBorderCtrl project received €4.5 million from the EU, so it would not be unreasonable for EU citizens to be able to see what their money was used for. In 2018, Patrick Breyer, a member of the European Parliament, requested access to documents held by the European Commission regarding the development of iBorderCtrl. As Article 19 recounts:
The REA [EU Research Executive Agency] – the agency at the helm of the iBorderCtrl project – granted Breyer full access to one document and partial access to another. They denied him access to numerous additional documents, citing the protection of the commercial interests of a consortium of companies collaborating with the REA on the project.
Breyer challenged this decision, pointing out that there was a strong public interest in having access to information about projects that used controversial technology, as iBorderCtrl certainly did. The EU’s so-called “General Court” published a ruling in December 2021:
the General Court established that a number of access requests denied to Breyer were not sufficiently justified by the REA. While the Court’s recognition of public interest in the democratic oversight of the development of surveillance and control technologies is a step in the right direction, the decision did not go far enough and Breyer appealed. In its decision, the Court suggested that such democratic oversight should begin only after these types of research and pilot projects were concluded. In other words, the Court failed to acknowledge the importance of ensuring transparency is in place at the outset of taxpayer-funded projects with immense impact on citizens, rather than when research and development has already been completed.
Breyer took his case to the main EU Court of Justice (CJEU), which has just issued its judgement:
While the CJEU did recognise that the fact that the obligation of participants in the iBorderCtrl project to respect fundamental rights is not grounds to assume that they will automatically do so, it maintained that because this was a research project, the public’s right to know about the results – rather than the process of the research – was sufficient.
The CJEU agreed with the General Court that the commercial interests of the consortium outweighed the public interest. But as a listing of the documents requested indicates, several were about the ethics of the project, and others were general progress reports. It seems entirely legitimate for that information to be available to the public: claims of commercial interests should not be able to stymie the crucial oversight of new surveillance and control technologies funded by taxpayers.
More generally, the idea that people can only see the results of publicly-funded research is absurd. Today there is a recognition that scientific research needs to be open – not just in its results, but in its entire process. The CJEU ruling that the public can only ask to see the results is retrogressive, and a return to the bad old days of science funding when the public was expected to pay up and then shut up.
Follow me @glynmoody on Mastodon.