For years we’ve talked about the growing threat of SIM hijacking, which involves a criminal covertly porting out your phone number from right underneath your nose (quite often with the help of bribed or conned wireless carrier employees).
Once they have your phone identity, they have access to most of your personal accounts secured by two-factor SMS authentication, opening the door to the theft of social media accounts or the draining of your cryptocurrency account. If you’re really unlucky, the hackers will harass the hell out of you in a bid to extort you even further.
“The news provides more context on how hackers may have taken over Sweeney’s Twitter account to boost the value of an obscure cryptocurrency on the same day. The hack also highlights how telecommunications companies continue to be a soft-spot for personal and professional security, even for high profile stars.”
SIM hijacking remains particularly problematic given how many people and services still rely heavily on text message two-factor authentication (SMS 2FA). If the underlying verification technology isn’t secure, neither are the accounts and services tethered to it.
Senators like Ron Wyden have been sending letters to the FCC for years, asking the nation’s top telecom regulator to, you know, do its job. Late last year the FCC voted to craft new rules that were supposed to help fix the problem, but observers noted they were too vague to be of meaningful use.
And they were too vague to be of meaningful use because captured regulators (even the well intentioned ones) aren’t keen to truly stand up to major, politically powerful wireless providers. So what you often tend to get is a form of regulatory theater that doesn’t always accomplish much. With recent Supreme Court rulings that erode regulatory authority further, it’s not a dysfunction set to improve anytime soon.
A lot of people freaked out on Friday after the news came out that Twitter was going to make SMS two-factor authentication (2FA) only available to paid Twitter Blue subscribers. The news was first broken, like so much Twitter news these days, by Platformer reporter Zoe Schiffer.
It’s understandable that people were up in arms over this, since one reading is that keeping your account secure just became a luxury item you have to pay extra for. But the details matter here, and I actually think many people are overreacting. There are fundamentally good reasons to move away from SMS-based 2FA: mainly that it’s woefully insecure, and it risks making people think they’re far more secure than they actually are. If you follow cybersecurity news, there are tons of articles explaining why SMS 2FA is not a good idea and why you should ditch it if you can. Some have argued it’s actually worse than just having a good password, though that very much depends on your threat model, and for most users it’s not true (it probably is true for targeted individuals, and probably not true against more brute-force hacking efforts).

Years back, Microsoft told everyone to move away from SMS-based 2FA. Google started transitioning people off of SMS-based 2FA all the way back in 2017, shortly after NIST deprecated it from its recommended multi-factor authentication list. But, at least in those cases, there was a clear transition plan.
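For context on why authenticator apps are the recommended replacement: they generate time-based one-time passwords (TOTP, per RFC 6238) entirely on the device from a shared secret and the clock, so there is no text message to intercept and no SIM to swap. Here’s a minimal sketch of the algorithm, using the RFC’s published test secret rather than a real one:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password."""
    # Shared secrets are conventionally distributed base32-encoded (e.g. via QR code).
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 "dynamic truncation": pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" base32-encoded); at T=59 seconds
# this produces the documented test value.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

The point is that the whole exchange happens offline; the wireless carrier, the weak link in every story in this piece, is never in the loop.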
Soon after Schiffer’s tweet, Twitter released a blog post explaining the decision (though, bizarrely, despite coming out on Friday afternoon, the blog post was backdated to Wednesday?!?):
While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors. So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers. The availability of text message 2FA for Twitter Blue may vary by country and carrier.
Non-Twitter Blue subscribers that are already enrolled will have 30 days to disable this method and enroll in another. After 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled. Disabling text message 2FA does not automatically disassociate your phone number from your Twitter account. If you would like to do so, instructions to update your account phone number are available on our Help Center.
We encourage non-Twitter Blue subscribers to consider using an authentication app or security key method instead. These methods require you to have physical possession of the authentication method and are a great way to ensure your account is secure.
It also helps to understand a bit of the background here. First, Twitter was (like in so many other areas) somewhat late to the 2FA game. When it added SMS-based 2FA in 2013, there were headlines about how it had “finally” done so. And, it was only in 2019 that the company let you turn on non-SMS 2FA without a phone number, again leading to headlines that included the word “finally.” And, the lack of security with SMS 2FA was pretty damn clear when someone hacked Jack Dorsey‘s own Twitter account using SIM swapping, the easiest way to get around SMS 2FA.
On top of that, I’ve spoken with former Twitter employees who say that the blog post above is not wrong when it says that SMS 2FA is often abused by bad actors in a manner that generates a ton of SMS messages, and is actually extremely costly for Twitter. Even if Elon is no longer paying any of Twitter’s bills, there may be legitimate business reasons for ending support for SMS 2FA (also if, hypothetically, Musk had stopped paying the bills for their SMS 2FA provider, it’s possible that vendor was threatening to cut Twitter off entirely, which might also explain the short timeline here).
So, I think many of the headlines and tweets decrying this as making security a “luxury” for paying subscribers only are neither fair nor accurate. There are (obviously) lots of things I criticize Musk about, but there are perfectly legitimate reasons to end support for SMS 2FA, and at least some of the freakout was an overreaction.
That said… I do still have many concerns about how this was rolled out, and it wouldn’t surprise me if the FTC has some concerns as well. While it’s a bit out of date, Twitter’s last transparency report on security (covering the second half of 2021) shows that only 2.6% of Twitter users even have 2FA enabled, which is really not great. And of those who have it enabled, nearly 75% are using SMS-based authentication:
So, there’s a legitimate fear that simply killing off SMS 2FA, without providing a very clear and straightforward transition to an authenticator app (or security key), will drive down the percentage of people using any 2FA at all, potentially putting more people at risk. If Twitter and Elon Musk weren’t just cost-cutting and were actually looking to make Twitter more secure for its users, they would create a plan that did a lot more to transition users over to an authenticator app.
I mean, the fact that they’re still leaving SMS 2FA for Twitter Blue subscribers pretty much gives away the game that this is solely about cost-cutting and not about transitioning users to better security. Indeed, it seemed like after spending a day talking about the expenses, it was only then that Musk realized that SMS 2FA also wasn’t good for security and started making those claims as well (a day late to be convincing that this has anything to do with the decision).
All that said, I am wondering if this might trigger yet another FTC investigation. The last consent decree with the FTC (remember, this was less than a year ago) was mostly about SMS 2FA, and how Twitter had abused the phone numbers it had on file, provided for 2FA, as a tool for marketing. That’s obnoxious and wrong and the FTC was correct to slam Twitter for it. Part of the consent decree was that Twitter had to provide 2FA options “that don’t require people to provide a phone number” (such as an authenticator app or security key, which the company does). But, also, it says that “Twitter must implement an enhanced privacy program and a beefed-up information security program.”
The details of that program include regular security assessments any time that the company “modifies” security practices. I’m curious whether Twitter did such an assessment before making this change. The requirements of the program also include things like the following:
Identify and describe any changes in how privacy and security-related options will be presented to Users, and describe the means and results of any testing Respondent performed in considering such changes, including but not limited to A/B testing, engagement optimization, or other testing to evaluate a User’s movement through a privacy or security-related pathway;

Include any other safeguards or other procedures that would mitigate the identified risks to the privacy, security, confidentiality, and integrity of Covered Information that were not implemented, and each reason that such alternatives were not implemented; and
Was any of that done? Or was it just Musk getting upset after seeing a bill for SMS messaging and declaring that they were cutting off SMS 2FA? We may find out eventually…
In the end, I do think Twitter is right to move away from SMS 2FA (and, as users, you should do so yourself wherever you use it). Multi-factor authentication is a very important security practice, and one that more people should use, but the SMS variety is not nearly as safe as other methods. But there is little indication here that Musk is doing it for any reason other than to cut costs, and the haphazard way in which this has been rolled out suggests that it may increase security risks for a noticeable percentage of Twitter users.
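To make concrete what “other methods” look like in practice: when a service verifies an authenticator-app code, it recomputes the RFC 6238 TOTP value from a shared secret and the clock, usually checking a small window of adjacent 30-second steps to tolerate clock drift. A minimal server-side sketch (the window size is a common convention, not a standard requirement; the secret below is the RFC test key):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp_at(key, unix_time, step=30, digits=6):
    # Standard RFC 6238 code for the time step containing unix_time.
    digest = hmac.new(key, struct.pack(">Q", int(unix_time // step)), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32, submitted, window=1, now=None):
    """Accept a code from the current 30-second step or up to `window`
    adjacent steps, compared in constant time to avoid timing leaks."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    t = time.time() if now is None else now
    return any(
        hmac.compare_digest(totp_at(key, t + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

Nothing here touches the phone network at all, which is exactly why intercepting or rerouting text messages buys an attacker nothing against this scheme.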
Not only are countless systems and services not secure, security itself often isn’t treated with the respect it deserves. And tools that are supposed to protect you from malicious actors are often monetized in self-serving ways. Like that time Facebook advertised a “privacy protecting VPN” that was effectively just spyware used to track Facebook users when they weren’t on Zuckerberg’s platform. Or that time Twitter was hit with a $250 million fine after it chose to use the phone numbers provided by users for two-factor authentication for marketing purposes (something Facebook was also busted for).
SMS verification messages themselves are also now being exploited as a marketing opportunity. Developer Chris Lacy was recently taken aback after an SMS two-factor authentication code from Google arrived with an ad injected into it:
I just received a two factor authentication SMS from Google that included an ad. Google's own Messages SMS app flagged it as spam.
Google confirmed to 9to5Google that it didn’t inject the ads, and that this was done by Lacy’s wireless carrier (which he declined to name, citing privacy). I’ve never seen a wireless carrier attempt this, and my guess is that (assuming he’s in the States) this isn’t one of the major three (AT&T, Verizon, and T-Mobile). It’s most likely a smaller prepaid operator which, even with a more feckless FCC, faces some notable fines should the behavior get widespread attention. Both Google and Lacy say they’re working with the unnamed carrier in question.
Needless to say, security experts like Kenn White weren’t particularly impressed:
While I generally consider myself an eternal optimist, with telco carriers, I'm a fairly jaded SOB. That said, the fact that a mobile carrier would inject ads directly into otherwise authentic SMS content (especially from a major security service endpoint) is shocking to me. https://t.co/Mt6ZXnK7og
Ironically the ad was for VPN services, which themselves promise layers of security and privacy that often don’t exist. Sent over an SMS system that security researchers are increasingly warning isn’t secure enough for two-factor authentication or much of anything else. We live in an era where we prioritize monetization, but pay empty lip service to security and privacy. What could possibly go wrong in a climate like that?
It’s not clear why journalists keep having to do the wireless industry’s job, yet here we are.
Sometime around mid-March, Motherboard reporter Joseph Cox wrote a story explaining how he managed to pay a hacker $16 to gain access to most of his online accounts. How? The hacker exploited a flaw in the way text messages are routed around the internet, paying a third party (with pretty clearly flimsy standards for determining trust) to reroute all of his text messages, including SMS two-factor authentication codes. From there, it was relatively trivial to break into several of the journalist’s accounts, including Bumble, WhatsApp, and Postmates.
It’s a flaw the industry has apparently known about for some time, but it only decided to take action after the story made the rounds. This week, all major wireless carriers indicated they’d be making significant changes to the way text messages are routed to take aim at the flaw:
“The Number Registry has announced that wireless carriers will no longer be supporting SMS or MMS text enabling on their respective wireless numbers,” the March 25 announcement from Aerialink, reads. The announcement adds that the change is “industry-wide” and “affects all SMS providers in the mobile ecosystem.”
“Be aware that Verizon, T-Mobile and AT&T have reclaimed overwritten text-enabled wireless numbers industry-wide. As a result, any Verizon, T-Mobile or AT&T wireless numbers which had been text-enabled as BYON no longer route messaging traffic through the Aerialink Gateway,” the announcement adds, referring to Bring Your Own Number.
It’s a welcome move, but it’s also part of a trend where journalists making a pittance somehow routinely have to prompt an industry that makes billions of dollars a year to properly secure its networks. It’s not much different from the steady parade of SIM swapping attacks that plagued the industry for years, only resulting in substantive action by the sector after reporters began documenting how common it was (and big name cryptocurrency investors had millions of dollars stolen). It was another example of how two-factor authentication over text messages isn’t genuinely secure.
Or the SS7 flaw, which the industry has known about for years but didn’t take seriously until journalists began documenting how the flaw lets all manner of malicious private and government actors spy on wireless users without them knowing. US consumers pay some of the highest prices in the developed world for mobile data. At that price point, it doesn’t matter how clever these attacks are. Telecom giants should be getting out ahead of security flaws before they become widespread problems, not belatedly acting only after news outlets showcase their apathy and incompetence.
So last year, when everybody was freaking out over TikTok, we noted that TikTok was likely the least of the internet’s security and privacy issues. In part because TikTok wasn’t doing anything that wasn’t being done by thousands of other companies in a country that can’t be bothered to pass even a basic privacy law for the internet. Also, any real security and privacy solutions need to take a much broader view.
For example, while countless people freaked out about TikTok, none of those same folks seem bothered by the parade of nasty vulnerabilities in the nation’s telecom networks, whether we’re talking about the SS7 flaw that lets governments and bad actors spy on wireless users around the planet or the constant drumbeat of location data scandals that keep revealing how your granular location data is being sold to any nitwit with a nickel. Or the largely nonexistent privacy and security standards in the internet of broken things. Or the dodgy security in our satellite communications networks.
Point being, hysteria over the potential threat of a Chinese app packed with dancing tweens trumped any real concerns about widespread, long-standing security vulnerabilities and privacy issues, particularly in telecom. This week this apathy was once again on display after reporters found that a gaping flaw in the SMS standard lets hackers take over phone numbers in minutes by simply paying a company to reroute text messages. All for around $16:
“I didn’t expect it to be that quick. While I was on a Google Hangouts call with a colleague, the hacker sent me screenshots of my Bumble and Postmates accounts, which he had broken into. Then he showed he had received texts that were meant for me that he had intercepted. Later he took over my WhatsApp account, too, and texted a friend pretending to be me.
Looking down at my phone, there was no sign it had been hacked. I still had reception; the phone said I was still connected to the T-Mobile network. Nothing was unusual there. But the hacker had swiftly, stealthily, and largely effortlessly redirected my text messages to themselves. And all for just $16.”
Carriers told the reporter they couldn’t replicate the problem and that they’d done their best to lock it down (not that there’s any level of transparency or regulatory accountability that would let somebody verify that claim). The hackers involved disagree. This wasn’t a SIM hijack, another problem we really haven’t done enough about. In this case, the hacker used a service from a company dubbed Sakari, which sells SMS marketing and mass messaging services, to reroute the reporter’s messages to them. With little in the way of serious screening of more nefarious users, apparently.
That in turn opens the door to having all your online accounts compromised, all without the target being any the wiser. It’s a relatively trivial attack to accomplish, and exposes a general lack of any meaningful authentication process to ensure it isn’t exploited by bad actors. As an aside, there’s a tool you can now use to confirm whether your text messages have been compromised. Meanwhile, security researchers warn that there are so many SMS vulnerabilities now, it’s time to stop using SMS for sensitive security purposes.
Meanwhile, the failure by regulators and industry to police and prevent the flaw also (once again) showcases how Ajit Pai’s decision to turn the FCC into a mindless rubber stamp for industry had a much broader impact than just killing net neutrality, says Senator Ron Wyden:
“It’s not hard to see the enormous threat to safety and security this kind of attack poses. The FCC must use its authority to force phone companies to secure their networks from hackers. Former Chairman Pai’s approach of industry self-regulation clearly failed,” Senator Ron Wyden said in a statement after Motherboard explained the contours of the attack.
While everybody professes to be concerned about internet security and privacy, we’re routinely only paying lip service to the concept. The internet of things is seen more as something funny than a massive security and privacy headache. The Trump TikTok hysteria saw more press and national attention than any of a laundry list of more problematic telecom flaws. Having a basic privacy law for an era in which there are a dozen major hacks, breaches, or data leaks every week is treated as something that’s optional. As is functional, basic regulatory oversight at agencies like the FCC.
Most modern security and privacy problems require holistic, collaborative efforts between government, the media, industry, and activists. Instead, more often than not, knee-jerk clickbait hysteria has us routinely distracted from much broader problems we seem intent on doing too little to address.
Facebook’s definition of protection isn’t quite up to snuff. Last week, some Facebook users began seeing a new option in their settings simply labeled “Protect.” Clicking on that link in the company’s navigation bar will redirect Facebook users to the “Onavo Protect - VPN Security” app’s listing on the App Store. There, they’re informed that “Onavo Protect helps keep you and your data safe when you browse and share information on the web.” You’re also informed that the “app helps keep your details secure when you login to websites or enter personal information such as bank accounts and credit card numbers.”
What you’re not told is that Facebook acquired the company back in 2013, and is now using it as little more than glorified spyware, allowing Facebook to track and monetize your travels around the internet (especially time spent wandering around competing social media platforms). That is, understandably, upsetting some people who believe that security tools should, well, actually protect you from surveillance, not open up an entirely new avenue for it:
“Facebook, however, purchased Onavo from an Israeli firm in 2013 for an entirely different reason, as described in a Wall Street Journal report last summer. The company is actually collecting and analyzing the data of Onavo users. Doing so allows Facebook to monitor the online habits of people outside their use of the Facebook app itself. For instance, this gave the company insight into Snapchat’s dwindling user base, even before the company announced a period of diminished growth last year.”
Amusingly, as one Facebook team was busy pushing a VPN service that spies on you, other parts of the company have been busy pushing a new two-factor authentication system (good) that the company also thought should be co-opted for marketing purposes (not so good). Ideally, a phone number provided for two-factor authentication should be used exclusively to send authentication codes via SMS. But Facebook apparently got the nifty idea to immediately take that number and spam customers in the hopes this would drive additional engagement at the website:
So I signed up for 2 factor auth on Facebook and they used it as an opportunity to spam me notifications. Then they posted my replies on my wall. pic.twitter.com/Fy44b07wNg
On a positive note, Facebook was quick to acknowledge that the SMS spam wasn’t intentional, and that it would be rolling out a fix shortly (hopefully before too many people get disgusted by 2FA):
“It was not our intention to send non-security-related SMS notifications to these phone numbers, and I am sorry for any inconvenience these messages might have caused. We are working to ensure that people who sign up for two-factor authentication won’t receive non-security-related notifications from us unless they specifically choose to receive them, and the same will be true for those who signed up in the past. We expect to have the fixes in place in the coming days. To reiterate, this was not an intentional decision; this was a bug.”
While Facebook was quick to own its 2FA problem, the company has been somewhat mum regarding the backlash to its “VPN” service offering. That effort likely began with good intentions among Facebook’s security team, then got hijacked by company higher-ups nervous about the fact that Facebook’s engagement and subscriber numbers have begun a precipitous dive. The solution to that problem is making Facebook better and more secure, not pushing security and privacy services whose real agenda is monetization and, apparently, annoyance.
The texts you think you’re sending in private can be used against you in court, according to a potentially precedent-setting new ruling from the Ontario Court of Appeal, which critics believe will have implications on privacy throughout the province.
The government’s comment on the decision makes it sound even worse.
“The Crown’s position … is that once a person sends a message into the ether, he or she loses the requisite level of control over that message needed to challenge its subsequent acquisition by authorities from sources outside of that person’s control,” Nick Devlin, senior counsel with the Public Prosecution Service of Canada, told VICE News.
But that’s not what the ruling says. Text messages sent “into the ether” do not lose their expectation of privacy. That would make SMS message content open to interception or seizure without a wiretap order or warrant. The circumstances of the case undercut the claims made in these two soundbites.
In no way does this create some sort of “Third Party Doctrine” governing the content of text messages. Instead, it simply confirms what should be obvious: that once messages are received, the recipient is free to discuss, expose, or otherwise provide the content to whoever asks for it. The sender is no longer in control of the sent message and cannot claim it is still a private communication.
An investigation into the trafficking of illegal firearms resulted in the seizure of phones owned by the two suspects. Police performed forensic searches on both devices and found messages implicating both arrestees. One of the suspects challenged the search and seizure of the devices. For the most part, he won.
1. Mr. Marakah’s s. 8 Charter challenge to exclude from evidence the items seized by the police during the search of his residence on November 6, 2012 is allowed and the evidence is excluded pursuant to s. 24(2) of the Charter;
2. Mr. Marakah’s s. 8 Charter challenge to exclude evidence obtained from his phone that was seized from him by police at the time of his arrest on November 6, 2012 is also allowed and the evidence is excluded pursuant to s. 24(2) of the Charter; and
3. Mr. Marakah’s s. 8 Charter challenge to exclude the evidence of his text messages found by the police on Andrew Winchester’s phone on November 6, 2012, is dismissed.
The last item on the list — a dismissal of an evidence challenge — is related to the messages found on Winchester’s phone, which included Marakah’s end of these conversations. The court ruled there is no expectation of privacy in messages sent to another person’s phone.
This is pretty much analogous to claiming an expectation of privacy in mail sent (and received, opened, read, etc.) by another party. The government can’t intercept and read the mail without the proper authorization, but there’s nothing stopping it from viewing the content if it’s seized from the recipient. The same goes for phone calls, which are ostensibly private conversations, but both conversants are more than welcome to discuss the content of the phone calls with law enforcement without infringing on the other party’s expectation of privacy.
The failure here is operational security, not a lack of protections for Canadian citizens.
The appellant cited a 2013 ruling that said sent messages are “private communications” and can’t be obtained by the government without a wiretap order.
As all parties acknowledged, it is clear that text messages qualify as telecommunications under the definition in the Interpretation Act. They also acknowledged that these messages, like voice communications, are made under circumstances that attract a reasonable expectation of privacy and therefore constitute “private communication” within the meaning of s. 183. Similarly, there is no question that the computer used by Telus would qualify as “any device” under the definitions in s. 183.
The difference between the Telus decision and this one is that in Telus, law enforcement intercepted messages in transit, utilizing the telco’s temporary storage of transmitted messages to obtain “continuous production” of messages sent between two numbers. It’s the interception that’s key, not whether or not the content can be afforded a reasonable expectation of privacy. The appeals court points out that the court in Telus did not actually reach the conclusions the appellant claims it reached.
Abella J. expressly declined to decide the issue that is before the court in this appeal:
[15] We have not been asked to determine whether a general warrant is available to authorize the production of historical text messages, or to consider the operation and validity of the production order provision with respect to private communications. Rather, the focus of this appeal is on whether the general warrant power in s. 487.01 of the Code can authorize the prospective production of future text messages from a service provider’s computer. That means that we need not address whether the seizure of the text messages would constitute an interception if it were authorized after the messages were stored.
The court points out that a reasonable expectation of privacy is not automatically granted to all cases and incidents involving ostensibly private communications. Context factors into the equation — both in determining the “reasonableness” of privacy expectations, as well as standing to challenge searches. Here, it finds the context does not help the appellant’s case.
In this case, the application judge’s analysis was guided by Edwards and, on the objective reasonableness of the expectation of privacy, the factors set out by Binnie J. in Patrick. Having regard to those factors, he found that the factors that weighed most heavily in his assessment of the totality of the circumstances were that: (1) the appellant had no ownership in or control over Winchester’s phone; and (2) there was no obligation of confidentiality between the parties.
[…]
He had no ability to regulate access and no control over what Winchester (or anyone) did with the contents of Winchester’s phone. The appellant’s request to Winchester that he delete the messages is some indication of his awareness of this fact. Further, his choice over his method of communication created a permanent record over which Winchester exercised control.
The long dissent is worth reading as it challenges much of what the official opinion asserts — mainly that a lack of control equals a lack of privacy expectations. Arguably, courts should treat text messages more carefully as they generate permanent records of conversations (phone calls don’t) and are used far, far more often than email or snail mail (which also create permanent records of conversations).
It’s much more on point, however, when noting that the seizure and search of the other party’s phone — resulting in the collection of Marakah’s messages — was also ruled to be unreasonable and a violation of Winchester’s rights. The denial of Marakah’s request to have this evidence excluded means it’s possible for Canadian law enforcement to obtain evidence illegally but still use it in court — just as long as it obtains the incriminating messages it needs from someone other than the sender.
[T]he text messages at issue are essential to the Crown’s case only because of this pattern of Charter infringements. The messages obtained from the appellant’s phone and evidence seized from his apartment are not admissible because the police infringed the appellant’s s. 8 rights when obtaining that evidence. The Crown abandoned reliance on the accused’s inculpatory statements and evidence obtained from them when faced with a challenge to their admissibility. And now the admissibility of the text messages obtained from Winchester’s phone is in issue because they too were obtained in a manner that infringed a Charter-protected right.
Finally, while the search of Winchester’s phone, considered in isolation, may be classified as a less serious breach of the appellant’s Charter-protected interests, I would take into account the fact that the appellant suffered many serious breaches of his Charter rights. In this case the police intruded upon significant privacy interests by conducting a warrantless search of his home and conducting an unnecessary and unrestricted forensic analysis of the appellant’s phone. Refusing to exclude the text messages obtained from Winchester’s phone would, in effect, neutralize any remedy granted for those breaches.
Considering that the court has already quashed the messages obtained from Marakah’s phone due to the illegality of the search, it only makes sense to do the same to the same messages that were obtained from Winchester’s phone. Without evidence suppression, law enforcement will be encouraged to route around presumed privacy expectations (and warrant requirements) by choosing an alternate, “less private” source to obtain the same communications.
In the wake of the tragic events in Paris last week, encryption has continued to be a useful bogeyman for those with a voracious appetite for surveillance expansion. Like clockwork, numerous reports quickly circulated suggesting that the terrorists used incredibly sophisticated encryption techniques, despite no evidence from investigators that this was the case. These reports varied in the amount of hallucination involved, with the New York Times even having to pull one such report offline. Other claims that the attackers had used encrypted Playstation 4 communications also wound up being bunk.
Yet, pushed by their sources in the government, the media quickly became a wall of noise suggesting that encryption was hampering the government’s ability to stop these kinds of attacks. NBC was particularly breathless this week over the idea that ISIS was now running a 24-hour help desk aimed at helping its less technically proficient members understand encryption (even cults help each other use technology, who knew?). All of the reports had one central, underlying implication: Edward Snowden and encryption have made us less safe, and if you disagree the blood is on your hands.
Yet, amazingly enough, as actual investigative details emerge, it appears that most of the communications between the attackers were conducted via unencrypted, vanilla SMS:
“…News emerging from Paris — as well as evidence from a Belgian ISIS raid in January — suggests that the ISIS terror networks involved were communicating in the clear, and that the data on their smartphones was not encrypted.
European media outlets are reporting that the location of a raid conducted on a suspected safe house Wednesday morning was extracted from a cellphone, apparently belonging to one of the attackers, found in the trash outside the Bataclan concert hall massacre. Le Monde reported that investigators were able to access the data on the phone, including a detailed map of the concert hall and an SMS message saying “we’re off; we’re starting.” Police were also able to trace the phone’s movements.
The reports note that Abdelhamid Abaaoud, the “mastermind” of both the Paris attacks and a thwarted Belgium attack ten months ago, failed to use any encryption whatsoever (read: existing capabilities stopped the Belgium attacks and could have stopped the Paris attacks, but didn’t). That’s of course not to say batshit religious cults like ISIS don’t use encryption, and won’t do so going forward. Everybody uses encryption. But the point remains that to use a tragedy to vilify encryption, push for surveillance expansion, and pass backdoor laws that will make everybody less safe — is nearly as gruesome as the attacks themselves.
Paul Ford, once again, has written up something fascinating. He discusses something I had no idea happened: when an iPhone user texts with another iPhone user using iMessage, the outgoing texts appear in calm blue bubbles. When an iPhone user texts with a non-iPhone user (or an iPhone user using something other than iMessage — meaning mainly Android users, obviously), those outgoing texts are in a harsh green. Here are the two examples Paul shows, starting with the iPhone to iPhone:
And then the Android to iPhone:
As noted, I had no idea that this happened, because I don’t own an iPhone. There is one slight functional reason for this: users may have to pay for SMS messages, but not for iMessages, and thus it could have an impact on a bill. But here’s the more interesting tidbit, which is the crux of Ford’s article: lots of people absolutely hate those green bubbles. As he notes, if you do a Twitter search on “green bubbles” you’ll see an awful lot of anti-green-bubble sentiment. Here are just a few examples I quickly found (Paul has others in his article).
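To make the mechanism concrete, here is a minimal sketch of the decision described above: the bubble color follows the transport, not the contact. The names here (`Transport`, `bubble_color`) are my own illustrative inventions, not Apple’s actual API.

```python
from enum import Enum


class Transport(Enum):
    IMESSAGE = "imessage"  # Apple's data-based protocol, free over the network
    SMS = "sms"            # carrier text message, may be billed per message


def bubble_color(transport: Transport) -> str:
    # iMessage traffic rides on data; SMS may cost the sender money,
    # so the distinct color doubles as a cost signal.
    return "blue" if transport is Transport.IMESSAGE else "green"


print(bubble_color(Transport.SMS))  # green
```

The point of the sketch is that the color is a pure function of the delivery channel, which is why Android recipients always land in green regardless of who they are.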
Those are just some of the anti-green-bubble messages from the past 24 hours. There are actually a lot more, and it goes on and on. It’s kind of amazing just how many people are tweeting about their hatred for green bubbles.
Ford then goes into a really interesting discussion on the nature of product management and design choices — the kind of thing that Apple doesn’t do on a whim — to get to the real point: Apple is likely choosing harsh, ugly green bubbles on purpose, as a petty way to put down Android users:
Apple must know by now that the people of the blue bubbles make fun of the people of the green. And I guess if I worked at Apple I’d be pretty psyched with this reaction. After all, what is a more powerful brand amplifier than social pressure? If people who converse in green bubbles start to feel relatively poor, or socially inferior, because they chose to use a less-expensive pocket supercomputer than those made by Apple, that could lead to iPhone sales. Ugly green bubbles = $$$$$ and promotions.
But I think the ugly green bubbles are the result of a mean-spirited, passive-aggressive product decision, marketed in a mean-spirited way. Certainly it’s not a crisis in capitalism. This is not to say that Google is good and Apple is bad; they’re both enormous structures that have so much power that they can manufacture their own realities (except for Google Glass, then not so much).
The bubbles are a subtle, little, silly thing but they are experienced by millions of people. That amplifies that product decision into an unsubtle, large, serious-yet-still-silly thing. The people who are tweeting about green bubbles are following Apple’s lead. It’s not unprecedented; Apple has done stuff like this before, like giving Windows machines on its network a “Blue Screen of Death” icon. But people spend so much time texting that it adds up.
Beyond highlighting Apple’s apparent pettiness (and its refusal to let users customize this for themselves), it also highlights how very minor design decisions matter in a fairly big way. I recognize that some people like to get into tech fanboy wars: iPhone v. Android, Mac v. Windows v. Linux, Playstation v. Xbox, etc. That’s going to happen, even if it mostly seems like a waste of time. But using subtle design choices to highlight and further such fights shows a childish attitude toward competition. Good competitors focus on making their own products better, not on demeaning the competition; it’s when they run out of good ideas that the focus shifts to attacking rivals. Apple has done so many things right with the iPhone in pushing the boundaries of innovation that it would be better off focusing on making the overall customer experience better, rather than offering subtle digs at non-iPhone users.
The latest in the ongoing revelations from the Ed Snowden leaks is that the NSA and GCHQ are collecting what appears to be hundreds of millions of text messages every day. While the NSA does try to “minimize” messages involving Americans, it appears to allow GCHQ in the UK access to the rest of the database, which GCHQ uses to spy on UK phone numbers. GCHQ staff are told to avoid viewing the actual content of UK text messages, but it’s unclear if that’s enforced. As the NSA notes, this database, called DISHFIRE, is “a goldmine to exploit.”
And, yes, they’re exploiting it. And no, not just against “targets.”
The NSA has made extensive use of its vast text message database to extract information on people’s travel plans, contact books, financial transactions and more – including of individuals under no suspicion of illegal activity.
The NSA runs a program called “Prefer” across all the information in the database to turn up “gems” that it admits “are not in current metadata stores and would enhance current analytics.”
The NSA sure does like its smiley faces, doesn’t it? Among the data they’re able to access? Each day this data gives them details of “over 800,000 financial transactions, either through text-to-text payments or linking credit cards to phone users.” It also tracks travel information from things like itinerary texts as well as cancellations and delays. Thought it was nice that your airline informed you of your flight delay? Sounds like the NSA got to know about it too.
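The idea of turning message content into “metadata” is easier to see with a toy example. The sketch below pulls structured facts (a flight number, a payment amount) out of plain SMS text with simple patterns; it is my own illustration of content-derived metadata, not anything resembling DISHFIRE or Prefer’s actual logic.

```python
import re

# Hypothetical patterns for two of the categories described above:
# travel itineraries and text-based payments.
FLIGHT = re.compile(r"\bflight\s+([A-Z]{2}\d{2,4})\b", re.IGNORECASE)
PAYMENT = re.compile(r"\$(\d+(?:\.\d{2})?)\s+(?:paid|sent|charged)", re.IGNORECASE)


def extract_metadata(sms: str) -> dict:
    """Return structured facts gleaned from a single text message."""
    meta = {}
    if m := FLIGHT.search(sms):
        meta["flight"] = m.group(1).upper()
    if m := PAYMENT.search(sms):
        meta["amount_usd"] = float(m.group(1))
    return meta


print(extract_metadata("Your flight BA286 is delayed 40 minutes"))
# {'flight': 'BA286'}
```

Run over hundreds of millions of messages a day, even crude pattern matching like this yields exactly the kind of travel and financial “gems” the slides brag about, without any human reading the texts.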
It’s not entirely clear from the report how the NSA and GCHQ are getting access to these messages. Reporters spoke to Vodafone, which insisted that it was not handing over such data, but these days, you never know who’s being truthful. In response to all of this, the NSA repeated its nearly-impossible-to-believe claim that its “activities are focused and specifically deployed against — and only against — valid foreign intelligence targets in response to intelligence requirements.” Valid foreign intelligence targets send 200 million text messages a day? Yeah…