Banks, ISPs Increasingly Embrace 'Voice Print' Authentication Despite Growing Security Risk
from the this-probably-won't-go-well dept
While it’s certainly possible to do biometrics well, a long line of companies frequently… don’t. Voice print authentication is particularly shaky, especially given the rise of inexpensive voice deepfake technology. But, much like the continued use of text-message two-factor authentication (which has repeatedly been shown to be insecure), it apparently doesn’t matter to a long list of companies.
Banks and telecom giants alike have started embracing voice authentication tech at significant scale despite the added threat to user privacy and security. And they’re increasingly collecting user “voice print” data without any way to opt out:
“despite multiple high-profile cases of scammers successfully stealing money by impersonating people via deepfake audio, big banks and ISPs are rolling out voice-based authentication at scale. The worst offender that I could find is Chase. There is no ‘opt in’. There doesn’t even appear to be a formal way to ‘opt out’! There is literally no way for me to call my bank without my voice being ‘fingerprinted’ without my consent.”
The U.S. has generally been extremely lax on privacy and security legislation and oversight, typically opting for baseline requirements that companies at least be transparent about their security and privacy practices, and provide users with working opt-out tools. But time and time again, neither is really adhered to. Eventually our lack of any meaningful privacy rules for the internet era will culminate in a privacy scandal that makes past scandals look like a grade school picnic. And with companies increasingly prioritizing convenience and simplicity over security and common sense, that day could arrive sooner than we think.
The rush toward voice authentication tech is particularly problematic given the quick rise of automated deepfake systems and the growing trove of user voice data available online. With parades of online creators, and smart televisions and other gadgets hoovering up voice data (and frequently failing to secure or encrypt it), availability of this data is ballooning. As are examples where faking a user’s voice has been used for significant thefts. What happens when voice print authentication is adopted at scale, and exploitation of that trend becomes automated by robocall scammers already running amok? Nothing good.
Using voice authentication to secure your finances (or much of anything notable) is, at its base, already very much a hit-or-miss proposition:
The problem here isn't "deep voice" tech. This is a laughably poor security protocol that allows money to be moved with a phone call. https://t.co/ClzzxILOz7
— Anthony DeRosa (@Anthony) October 14, 2021
If you figure voice deepfake tech will only get cheaper and better over time, you can also figure replacing passwords and pins with voice authentication isn’t a great idea in a country already drowning in robocall scams. Yet we’re apparently doing it anyway:
“Again, society must adjust to the following reality: It’s become easy for anyone to spoof the voices of others who have public recordings of them talking (very common). Therefore, companies (especially banks) should not be using this as a @#%!ing way to log into accounts! You would think this is SIMPLE-enough for corporate America to understand, but alas, here we are.”
At the very least, informed users should have the ability to opt out of voice data collection, yet in many cases they can’t even do that. It’s yet another example of why the nation needs at least some kind of baseline privacy rules that, at an absolute minimum, mandate transparency around both data collection and security practices, and guarantee that users always retain opt-out control. Baseline privacy legislation should also include meaningful penalties and accountability for the very long line of companies that view consumer privacy and security as an annoying afterthought.
Given this would cost a large number of politically powerful industries money, we’re not going to do any of that. Instead, we’re going to continue to embrace the current paradigm: a few badly crafted state privacy proposals and generalized apathy at the federal level. Surely that will work out well, right?