For years we’ve talked about the growing threat of SIM hijacking, which involves a criminal covertly porting out your phone number from right underneath your nose (quite often with the help of bribed or conned wireless carrier employees).
Once they have your phone identity, they have access to most of your personal accounts secured by two-factor SMS authentication, opening the door to the theft of social media accounts or the draining of your cryptocurrency account. If you’re really unlucky, the hackers will harass the hell out of you in a bid to extort you even further.
“The news provides more context on how hackers may have taken over Sweeney’s Twitter account to boost the value of an obscure cryptocurrency on the same day. The hack also highlights how telecommunications companies continue to be a soft-spot for personal and professional security, even for high profile stars.”
SIM hijacking remains particularly dangerous given how many people and services still rely heavily on text message two-factor authentication (SMS 2FA). If the underlying verification tech isn't secure, neither are the accounts and services tethered to it.
Senators like Ron Wyden have been sending letters to the FCC for years, asking the nation’s top telecom regulator to, you know, do its job. Late last year the FCC voted to craft new rules that were supposed to help fix the problem, but observers noted they were too vague to be of meaningful use.
And they were too vague to be of meaningful use because captured regulators (even the well-intentioned ones) aren't keen to truly stand up to major, politically powerful wireless providers. So what you tend to get is a form of regulatory theater that doesn't accomplish much. With recent Supreme Court rulings eroding regulatory authority even further, it's not a dysfunction set to improve anytime soon.
A lot of people freaked out on Friday after the news came out that Twitter was going to make SMS two-factor authentication (2FA) only available to paid Twitter Blue subscribers. The news was first broken, like so much Twitter news these days, by Platformer reporter Zoe Schiffer.
It's understandable that people were up in arms over this, as one reading is that keeping your account secure had become a luxury item you had to pay extra for. But the details matter here, and I actually think many people are overreacting. There are fundamentally good reasons to move away from SMS-based 2FA: mainly that it's woefully insecure, and it runs the risk of making people think they're far more secure than they actually are.

If you follow cybersecurity news, there are tons of articles explaining why SMS 2FA is not a good idea and why you should ditch it if you can. Some have argued it's actually worse than just having a good password, though I think that very much depends on your threat model: it's probably true for targeted individuals, and probably not true against broader brute-force attacks, which describes most users. Years back, Microsoft told everyone to move away from SMS-based 2FA. Google started transitioning people off of SMS-based 2FA all the way back in 2017, shortly after NIST deprecated it from its recommended multi-factor authentication list. But, at least in those cases, there was a clear transition plan.
Soon after Schiffer’s tweet, Twitter released a blog post explaining the decision (though, bizarrely, despite coming out on Friday afternoon, the blog post was backdated to Wednesday?!?):
While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors. So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers. The availability of text message 2FA for Twitter Blue may vary by country and carrier.
Non-Twitter Blue subscribers that are already enrolled will have 30 days to disable this method and enroll in another. After 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled. Disabling text message 2FA does not automatically disassociate your phone number from your Twitter account. If you would like to do so, instructions to update your account phone number are available on our Help Center.
We encourage non-Twitter Blue subscribers to consider using an authentication app or security key method instead. These methods require you to have physical possession of the authentication method and are a great way to ensure your account is secure.
It also helps to understand a bit of the background here. First, Twitter was (as in so many other areas) somewhat late to the 2FA game. When it added SMS-based 2FA in 2013, there were headlines about how it had “finally” done so. And it was only in 2019 that the company let you turn on non-SMS 2FA without a phone number, again leading to headlines that included the word “finally.” And the lack of security with SMS 2FA was pretty damn clear when someone hacked Jack Dorsey's own Twitter account using SIM swapping, the easiest way to get around SMS 2FA.
On top of that, I’ve spoken with former Twitter employees who say that the blog post above is not wrong when it says that SMS 2FA is often abused by bad actors in a manner that generates a ton of SMS messages, and is actually extremely costly for Twitter. Even if Elon is no longer paying any of Twitter’s bills, there may be legitimate business reasons for ending support for SMS 2FA (also if, hypothetically, Musk had stopped paying the bills for their SMS 2FA provider, it’s possible that vendor was threatening to cut Twitter off entirely, which might also explain the short timeline here).
So, I think that many of the headlines and tweets decrying this as making security a “luxury” for paying subscribers only are neither fair nor accurate. There are lots of things (obviously) that I criticize Musk about, but there are perfectly legitimate reasons to end support for SMS 2FA, and at least some of the freakout was an overreaction.
That said… I do still have many concerns with how this was rolled out, and it wouldn’t surprise me if the FTC has some concerns as well. While it’s a bit out of date, Twitter’s last transparency report on security (covering the second half of 2021) shows that only 2.6% of Twitter users even have 2FA enabled, which is really not great. And of those who have it enabled, nearly 75% are using SMS-based authentication.
So there’s a legitimate fear that, in simply killing off SMS 2FA without providing a clear and straightforward transition to an authenticator app (or security key), the percentage of people using any 2FA at all may drop quite a bit, potentially putting more people at risk. If Twitter and Elon Musk weren’t just cost cutting and were actually looking to make Twitter more secure for its users, they would have created a plan that did a lot more to transition users over to an authenticator app.
I mean, the fact that they’re still leaving SMS 2FA available for Twitter Blue subscribers pretty much gives away the game that this is solely about cost-cutting and not about transitioning users to better security. Indeed, it seemed like only after spending a day talking about the expenses did Musk realize that SMS 2FA also wasn’t good for security and start making those claims as well (a day too late to be convincing that security had anything to do with the decision).
All that said, I am wondering if this might trigger yet another FTC investigation. The last consent decree with the FTC (remember, this was less than a year ago) was mostly about SMS 2FA, and how Twitter had abused the phone numbers it had on file, provided for 2FA, as a tool for marketing. That’s obnoxious and wrong and the FTC was correct to slam Twitter for it. Part of the consent decree was that Twitter had to provide 2FA options “that don’t require people to provide a phone number” (such as an authenticator app or security key, which the company does). But, also, it says that “Twitter must implement an enhanced privacy program and a beefed-up information security program.”
The details of that program include regular security assessments any time the company “modifies” security practices. I’m curious whether Twitter did such an assessment before making this change. The requirements of the program also include things like the following:
Identify and describe any changes in how privacy and security-related options will be presented to Users, and describe the means and results of any testing Respondent performed in considering such changes, including but not limited to A/B testing, engagement optimization, or other testing to evaluate a User’s movement through a privacy or security-related pathway;

Include any other safeguards or other procedures that would mitigate the identified risks to the privacy, security, confidentiality, and integrity of Covered Information that were not implemented, and each reason that such alternatives were not implemented; and
Was any of that done? Or was it just Musk getting upset after seeing a bill for SMS messaging and declaring that they were cutting off SMS 2FA? We may find out eventually…
In the end, I do think Twitter is right to move away from SMS 2FA (and, as users, you should do so yourself wherever you use it). Multi-factor authentication is a very important security practice, and one that more people should use, but the SMS variety is not nearly as safe as other methods. But there is little indication here that Musk is doing it for any reason other than to cut costs, and the haphazard way in which this has been rolled out suggests that it may increase security risks for a noticeable percentage of Twitter users.
While a lot of the scandals surrounding “big tech” have been overblown, one that hasn’t been discussed enough is Silicon Valley companies’ abuse of user two-factor authentication data. If you’ve been napping, two-factor authentication (preferably of the app-based variety) helps protect your accounts from being compromised by hackers.
But when both Facebook and Twitter implemented it, company executives apparently thought it would be a great idea to use the email addresses and phone numbers collected for 2FA for marketing purposes. That’s a massive problem, as it completely undermines trust in the two-factor authentication process (and these companies’ security standards in general), making it less likely that users will protect themselves.
For much of the last decade, Twitter told its users that it was collecting their phone numbers and email addresses for account-security purposes. But it didn’t inform users that it would also use this information to send them targeted ads. When this was revealed, Twitter claimed it didn’t know it was happening, which, if true, is still… sloppy and bad in terms of both privacy and security standards.
In 2020, reports emerged that the company would likely be fined up to $250 million by the FTC for the behavior. This week the long-looming action finally dropped, with the FTC announcing that Twitter had struck a $150 million settlement with the DOJ and FTC:
from May 2013 to September 2019, Twitter told its users that it was collecting their telephone numbers and email addresses for account-security purposes, but failed to disclose that it also would use that information to help companies send targeted advertisements to consumers. The complaint further alleges that Twitter falsely claimed to comply with the European Union-U.S. and Swiss-U.S. Privacy Shield Frameworks, which prohibit companies from processing user information in ways that are not compatible with the purposes authorized by the users.
It was underplayed amid other concerns, but a big portion of the $5 billion fine levied against Facebook by the FTC also involved this same exploitation of information provided specifically for 2FA.
There are obviously numerous other companies that have chosen to undermine user security by monetizing 2FA information, but given the FTC is too underfunded and understaffed to handle them all, these fines have to be large enough to act as a warning shot over the bow in the absence of federal legislation.
Not only are countless systems and services not secure, security itself often isn’t treated with the respect it deserves. And tools that are supposed to protect you from malicious actors are often monetized in self-serving ways. Like that time Facebook advertised a “privacy protecting VPN” that was effectively just spyware used to track Facebook users when they weren’t on Zuckerberg’s platform. Or that time Twitter was hit with a $250 million fine after it chose to use the phone numbers provided by users for two-factor authentication for marketing purposes (something Facebook was also busted for).
SMS verification ads themselves are also now being exploited as a marketing opportunity. Developer Chris Lacy was recently taken aback after an SMS two-factor authentication code from Google was injected with an SMS ad:
I just received a two factor authentication SMS from Google that included an ad. Google's own Messages SMS app flagged it as spam.
Google confirmed to 9to5Google that it didn’t inject the ads, and that this was done by Lacy’s wireless carrier (which he declined to name for privacy reasons). I’ve never seen a wireless carrier attempt this, and my guess is that (assuming he’s in the States) this isn’t one of the major three (AT&T, Verizon, and T-Mobile). It’s most likely a smaller prepaid operator which, even in the era of a more feckless FCC, could face some notable fines should the behavior get widespread attention. Both Google and Lacy say they’re working with the anonymous carrier in question.
Needless to say, security experts like Kenn White weren’t particularly impressed:
While I generally consider myself an eternal optimist, with telco carriers, I'm a fairly jaded SOB. That said, the fact that a mobile carrier would inject ads directly into otherwise authentic SMS content (especially from a major security service endpoint) is shocking to me. https://t.co/Mt6ZXnK7og
Ironically the ad was for VPN services, which themselves promise layers of security and privacy that often don’t exist. Sent over an SMS system that security researchers are increasingly warning isn’t secure enough for two-factor authentication or much of anything else. We live in an era where we prioritize monetization, but pay empty lip service to security and privacy. What could possibly go wrong in a climate like that?
So last year, when everybody was freaking out over TikTok, we noted that TikTok was likely the least of the internet’s security and privacy issues. In part because TikTok wasn’t doing anything that wasn’t being done by thousands of other companies in a country that can’t be bothered to pass even a basic privacy law for the internet. Also, any real security and privacy solutions need to take a much broader view.
For example, while countless people freaked out about TikTok, none of those same folks seem bothered by the parade of nasty vulnerabilities in the nation’s telecom networks, whether we’re talking about the SS7 flaw that lets governments and bad actors spy on wireless users around the planet or the constant drumbeat of location data scandals that keep revealing how your granular location data is being sold to any nitwit with a nickel. Or the largely nonexistent privacy and security standards in the internet of broken things. Or the dodgy security in our satellite communications networks.
Point being, hysteria over the potential threat of a Chinese app packed with dancing tweens trumped any real concerns about widespread, long-standing security vulnerabilities and privacy issues, particularly in telecom. This week this apathy was once again on display after reporters found that a gaping flaw in the SMS standard lets hackers take over phone numbers in minutes by simply paying a company to reroute text messages. All for around $16:
“I didn’t expect it to be that quick. While I was on a Google Hangouts call with a colleague, the hacker sent me screenshots of my Bumble and Postmates accounts, which he had broken into. Then he showed he had received texts that were meant for me that he had intercepted. Later he took over my WhatsApp account, too, and texted a friend pretending to be me.
Looking down at my phone, there was no sign it had been hacked. I still had reception; the phone said I was still connected to the T-Mobile network. Nothing was unusual there. But the hacker had swiftly, stealthily, and largely effortlessly redirected my text messages to themselves. And all for just $16.”
Carriers told the reporter they couldn’t replicate the problem and that they’d done their best to lock it down (not that there’s any level of transparency or regulatory accountability that would let somebody verify that claim). The hackers involved disagree. This wasn’t a SIM hijack, another problem we really haven’t done enough about. In this case, the hacker used a service from a company dubbed Sakari, which sells SMS marketing and mass messaging services, to reroute the reporter’s messages to them. With little in the way of serious screening of more nefarious users, apparently.
That in turn opens the door to having all your online accounts compromised, all without the target being any the wiser. It’s a relatively trivial attack to accomplish, and exposes a general lack of any meaningful authentication process to ensure it isn’t exploited by bad actors. As an aside, there’s a tool you can now use to confirm whether your text messages have been compromised. Meanwhile, security researchers warn that there are so many SMS vulnerabilities now, it’s time to stop using SMS for sensitive security purposes.
Meanwhile, the failure by regulators and industry to police and prevent the flaw also (once again) showcases how Ajit Pai’s decision to turn the FCC into a mindless rubber stamp for industry had a much broader impact than just killing net neutrality, says Senator Ron Wyden:
“It’s not hard to see the enormous threat to safety and security this kind of attack poses. The FCC must use its authority to force phone companies to secure their networks from hackers. Former Chairman Pai’s approach of industry self-regulation clearly failed,” Senator Ron Wyden said in a statement after Motherboard explained the contours of the attack.
While everybody professes to be concerned about internet security and privacy, we’re routinely only paying lip service to the concept. The internet of things is seen more as something funny than a massive security and privacy headache. The Trump TikTok hysteria saw more press and national attention than any of a laundry list of more problematic telecom flaws. Having a basic privacy law for an era in which there are a dozen major hacks, breaches, or data leaks every week is treated as something that’s optional. As is functional, basic regulatory oversight at agencies like the FCC.
Most modern security and privacy problems require holistic, collaborative efforts between government, the media, industry, and activists. Instead, more often than not, knee-jerk clickbait hysteria has us routinely distracted from much broader problems we seem intent on doing too little to address.
There are many things big internet companies do that the media has made out to be scandals that aren’t, but one misuse of data that I think received too little attention was how both Facebook and later Twitter were caught taking the phone numbers people gave them for two-factor authentication and using them for notification/marketing purposes.
In case you’re somehow unaware, two-factor authentication is how you should protect your most important accounts. I know many people are too lazy to set it up, but please do so. It’s not perfect (Twitter’s recent big hack routed around 2FA protections), but it is many times better than just relying on a username and password. In the early days of 2FA, one common way to implement it was to use text messaging as the second factor. That is, when you tried to login on a new machine (or after a certain interval of time), the service would have to text you a code that you would need to enter to prove that you were you.
Over time, people realized that this method was less secure. Many hacks involved “SIM swapping” (using social engineering to get your phone number ported over to the attacker), then having the 2FA code sent to the hacker. These days, good 2FA usually involves an authenticator app, like Google Authenticator or Twilio’s Authy, or, even better, a physical key such as a Yubikey or Google’s Titan Key. However, many services and users have stuck with text messaging for 2FA because it’s the least complex for users, and the issue with any security practice is that if it’s not user-friendly, no one will use it, which doesn’t do any good either.
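To make concrete why authenticator apps hold up better than SMS: the server and the app share a secret exactly once at enrollment (usually via a QR code), and each side then derives short-lived codes locally, so there is nothing for a SIM swapper to intercept in transit. Here's a minimal sketch of that scheme, a standard-library Python implementation of TOTP (RFC 6238); the function name and parameters here are my own illustration, not any particular vendor's API:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238).

    The shared secret is exchanged once at enrollment; afterward both the
    app and the server compute matching codes locally, so no code ever
    travels over the carrier's SMS network.
    """
    # Authenticator secrets are conventionally base32-encoded; pad if needed.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: the last nibble picks 4 digest bytes.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret ("12345678901234567890" in base32):
# at t=59 seconds, the 6-digit SHA-1 code is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> 287082
```

Real authenticator apps and servers add clock-drift tolerance and rate limiting on top, but the core derivation is just this HMAC over a time counter.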
But using phone numbers given for 2FA purposes for notifications or marketing is really bad. First of all, it undermines trust, which is the last thing you want to do when dealing with a security mechanism. People handed over these phone numbers and emails for a very specific and delineated reason: to better protect their accounts. To then share that phone number or email with the marketing team is a massive violation of trust. And it serves to undermine the entire concept of two-factor authentication, in that many users will become less willing to use 2FA, fearing how the numbers might be abused.
As we noted when Facebook received the mammoth $5 billion fine from the FTC a year ago, while the media focused almost entirely on the Cambridge Analytica situation as the reason for the fine, if you actually read the FTC’s settlement documents, it was other things that really caused the FTC to move, including Facebook’s use of 2FA phone numbers for marketing. We were glad that Facebook got punished for that.
And now it’s Twitter’s turn. Twitter has revealed that the FTC is preparing to fine the company $150 million to $250 million for this practice, noting that it violated the terms of an earlier consent decree with the FTC from 2011, under which the company promised not to mislead users about how it handled personal information. Yet, for years, Twitter used the phone numbers and emails provided for 2FA to help target ads (basically using the phone number/email as an identifier for targeting).
There’s no explanation for this other than really bad handling of data at Twitter, and the company should be punished for it. There are many things I think Twitter gets unfairly blamed for, but a practice like this is both bad and dangerous, and I’m all for large fines from the FTC to convince companies to never do this kind of thing again.
In our ongoing discussions about the new platform wars going on between Steam and the Epic Store, perhaps we’ve been unfair to another participant in those wars: EA’s Origin. Except that no we haven’t, since Origin is strictly used for EA published games, and now EA is pushing out games on Steam as well. All of which is to say that Origin, somehow, is still a thing.
Enough of a thing, actually, for EA to have tried to do something beneficial around Cybersecurity Month. For Origin users who enabled two-factor authentication on the platform, EA promised a reward: a free month of Origin Access Basic. That free month would give those who had enabled better security on their accounts access to discounts on new games and downloads of old games. Cool, right?
This morning at around 3am, jolted awake by an antsy newborn, I rolled over to check my email and was alarmed to see a message from EA with the subject: “You’ve redeemed an Origin Access Membership Code.” Goddamnit, I thought. Did someone hack me? Turns out it was just EA starting off everyone’s day with a nice little scare.
The email thanked the user for redeeming the access code without ever reminding them that any of this was tied to enabling 2FA the month before. It looked for all the world like any other purchase confirmation from Origin. This sent a whole bunch of people scrambling, assuming their accounts had been hacked. Then those same people jumped on Twitter, either recognizing that the scare was a result of EA’s crappy communication, or not realizing that and asking all of Twitter what to do now.
That all of this came as a result of a Cybersecurity Month initiative was an irony not lost on the public.
Ironically, this email came as the result of an EA initiative to reward users of its PC platform with more security. Last month, EA quietly announced that Origin users with two-step verification enabled (in honor of “National Cybersecurity Month”) would get a free month of Origin Access Basic, which offers discounts and access to a bunch of old games. This was them making good on that promise.
Now if only “making good” hadn’t also equated to “scaring the hell out of users into thinking they’d been hacked and might have even lost all of their progress in Star Wars Jedi Fallen Order and had to start from scratch just like their buddy Kirk did.” Telling people that they’ve redeemed a code out of the blue is a good way to get them to immediately freak out and change all their passwords, especially in a world where just about every company (EA included) has been the target of a massive security breach.
EA: where even when the company tries to do something nice and good, it just ends up scaring the shit out of everyone.
When you sign up for security services like two-factor authentication (2FA), the phone number you’re providing is supposed to be explicitly used for security. You’re providing that phone number as part of an essential exchange intended to protect yourself and your data, and that information is not supposed to be used for marketing. Since we’ve yet to craft a formal privacy law, there’s nothing really stopping companies from doing that anyway, something Facebook exploited last year when it was caught using consumer phone numbers provided explicitly for 2FA for marketing purposes.
It’s not only a violation of your users’ trust, it incentivizes them to not use two-factor authentication for fear of being spammed, making everybody less secure. As part of Facebook’s recent settlement with the FTC the company was forbidden from using 2FA phone numbers for marketing ever again.
Having just watched Facebook go through this, Twitter has apparently decided to join the fun. In a blog post this week, the company acknowledged that participants in its Tailored Audiences and Partner Audiences advertising system may have had the phone numbers they provided for 2FA used for marketing as well:
“We cannot say with certainty how many people were impacted by this, but in an effort to be transparent, we wanted to make everyone aware. No personal data was ever shared externally with our partners or any other third parties. As of September 17, we have addressed the issue that allowed this to occur and are no longer using phone numbers or email addresses collected for safety or security purposes for advertising.”
Security conscious folks had already grumbled about the way Twitter sets up 2FA, and those same folks weren’t, well, impressed:
In all seriousness: whose idea was it to use a valuable advertising identifier as an input to a security system. This is like using raw meat to secure your tent against bears.
While it’s nice that Twitter came out and admitted the error, you have to think it’s unlikely this would happen were there real federal penalties for being cavalier about user privacy and security.
Last year, the company admitted to storing passwords for 330 million customers unencrypted in plain text, and a bug in the company’s code also exposed subscriber phone number data, something Twitter knew about for two years before doing anything about it. Earlier this year Twitter acknowledged that another bug exposed the location data of its users to an unknown partner. And of course Jack’s own account was hacked thanks to an SMS hijacking problem agencies like the FCC haven’t been doing much (read: anything) about.
While there’s understandable fear about the unintended consequences of poorly crafted privacy legislation, having at least some basic god-damned rules in place (including things like penalties for storing user data in plaintext, or using security-related systems like 2FA as marketing opportunities) would likely go a long way in deterring these kinds of “inadvertent oversights.” Outside of the problematic COPPA (which applies predominantly to kids), there are no real federal guidelines disincentivizing the cavalier treatment of user data, though apparently we’re going to stumble through another 10 years of daily privacy scandals before “conventional wisdom” realizes that’s a problem.
Facebook’s definition of protection isn’t quite up to snuff. Last week, some Facebook users began seeing a new option in their settings simply labeled “Protect.” Clicking on that link in the company’s navigation bar will redirect Facebook users to the “Onavo Protect - VPN Security” app’s listing on the App Store. There, they’re informed that “Onavo Protect helps keep you and your data safe when you browse and share information on the web.” You’re also informed that the “app helps keep your details secure when you login to websites or enter personal information such as bank accounts and credit card numbers.”
What you’re not told is that Facebook acquired the company back in 2013, and is now using it as little more than glorified spyware, allowing Facebook to track and monetize your travels around the internet (especially time spent wandering around competing social media platforms). That is, understandably, upsetting some people who believe that security tools should, well, actually protect you from surveillance, not open up an entirely new avenue for it:
“Facebook, however, purchased Onavo from an Israeli firm in 2013 for an entirely different reason, as described in a Wall Street Journal report last summer. The company is actually collecting and analyzing the data of Onavo users. Doing so allows Facebook to monitor the online habits of people outside their use of the Facebook app itself. For instance, this gave the company insight into Snapchat’s dwindling user base, even before the company announced a period of diminished growth last year.”
Amusingly, as one Facebook team was busy pushing a VPN service that spies on you, other parts of the company have been busy pushing a new two-factor authentication system (good) that the company also thought should be co-opted for marketing purposes (not so good). Ideally, two-factor authentication should use your phone number exclusively to send you authentication codes via SMS. But Facebook apparently got the nifty idea to immediately take that number and spam customers in the hopes this would drive additional engagement at the website:
So I signed up for 2 factor auth on Facebook and they used it as an opportunity to spam me notifications. Then they posted my replies on my wall. pic.twitter.com/Fy44b07wNg
On a positive note, Facebook was quick to acknowledge that the SMS spam wasn’t intentional, and said it would be rolling out a fix shortly (hopefully before too many people get disgusted by 2FA):
“It was not our intention to send non-security-related SMS notifications to these phone numbers, and I am sorry for any inconvenience these messages might have caused. We are working to ensure that people who sign up for two-factor authentication won’t receive non-security-related notifications from us unless they specifically choose to receive them, and the same will be true for those who signed up in the past. We expect to have the fixes in place in the coming days. To reiterate, this was not an intentional decision; this was a bug.”
While Facebook was quick to own its 2FA problem, the company has been somewhat mum regarding the backlash to its “VPN” service offering. That effort likely began with good intentions among Facebook’s security team, then got hijacked by company higher-ups nervous about the fact that Facebook’s engagement and subscriber numbers have begun a precipitous dive. The solution to that problem is making Facebook better and more secure, not pushing security and privacy services whose real agenda is monetization and, apparently, annoyance.
Our government isn’t exactly known for its security chops, but a letter recently sent by Senator Ron Wyden to two of his colleagues who head the Committee on Rules & Administration notes that, incredibly, the ID cards used by Senate staffers only appear to have a smart chip in them. Instead of the real thing, some genius just decided to print a photo of a smart chip on each card. This isn’t security by obscurity, it’s… bad security through cheap Photoshopping. From our Senate.
Moreover, in contrast to the executive branch’s widespread adoption of PIV cards with a smart chip, most Senate staff ID cards have a photo of a chip printed on them, rather than a real chip. Given the significant investment by the executive branch in smart chip based two-factor authentication, we should strongly consider issuing our staff real chip-based ID cards and then using those chips as a second factor.
We asked the Senate if there was any way we could get a (heavily redacted, obviously) image of a Senate ID with the “photo” smart chip but (not at all surprisingly) that request was rejected. So, instead, we’ve got this artist’s rendering of what something like it might look like, more or less.
Most of the letter (as the last sentence suggests) is about how the Senate barely uses two-factor authentication, which is also kind of stunning. These days, two-factor authentication is the absolute baseline for anything you want to keep moderately secure. That the Senate isn’t doing this (and that it’s faking smart chips) is preposterous. It’s great that Senator Wyden is calling out the Senate IT staff for this very basic failing. I don’t know for sure, but a lot about this letter makes me suspect that one Chris Soghoian is behind discovering the lack of a real smart chip and highlighting the lack of true two-factor authentication (it’s possible it’s someone else, but it feels like a very Chris Soghoian thing to notice and call out…).