How Refugee Applications Are Being Lost In (Machine) Translation
from the AI-not-I dept
As you may have noticed, headlines are full of the wonders of chatbots and generative AI these days. Although often presented as huge breakthroughs, in many ways they build on machine learning techniques that have been around for years. These older systems have been deployed in real-life situations for some time, which means they provide valuable information about the possible pitfalls of using AI for serious tasks. Here is a typical example of what has been happening in the world of machine translation when applied to refugee applications for asylum, as reported on the Rest of the World site:
A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.
In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.
After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”
That’s a tiny difference, and one that today’s machine translation programs can easily get wrong, especially for languages where training materials are still scarce. And yet the shift from singular “I” to plural “we” can have life-changing consequences: in the case above, it determined whether asylum was granted to a refugee fleeing Afghanistan. There are other problems too:
Based in New York, the Refugee Translation Project works extensively with Afghan refugees, translating police reports, news clippings, and personal testimonies to bolster claims that asylum seekers have a credible fear of persecution. When machine translation is used to draft these documents, cultural blind spots and failures to understand regional colloquialisms can introduce inaccuracies. These errors can compromise claims in the rigorous review so many Afghan refugees experience.
In the future the number of people seeking asylum is likely to increase, not least because of environmental refugees fleeing lands made uninhabitable by climate change. Their applications for asylum elsewhere are likely to involve a wider range of lesser-known languages. Turning to machine translation will be a natural move for the authorities, since recruiting specialist human translators takes time and resources.
The new generation of AI tools, with their high-profile abilities, will encourage this trend, as will their use to evaluate applications and to make recommendations about whether they should be accepted. The Rest of the World article points out that OpenAI, the company behind ChatGPT, updated its user policies in late March to list the following as “Disallowed usage of our models”:
High risk government decision-making, including:
- Law enforcement and criminal justice
- Migration and asylum
Governments trying to save money will doubtless use them anyway. It will be important for courts and others dealing with asylum claims to bear this in mind when there seem to be serious discrepancies in refugees’ applications. They may be all in the (machine’s) mind.
Follow me @glynmoody on Mastodon.
Filed Under: afghanistan, ai, asylum, chatbots, chatgpt, climate crisis, machine learning, openai, pashto, refugees, translation
Companies: openai