We’ve noted repeatedly how the hyperventilation about TikTok privacy is largely just a distraction from the U.S.’ ongoing failure to pass even a basic privacy law or meaningfully regulate data brokers.
We haven’t done those things for two reasons. One, the dysfunctional status quo (where companies mindlessly over-collect data and fail to secure it, resulting in endless privacy scandals) is hugely profitable to everybody in the chain. Two, the government long ago realized it can abuse the barely regulated info-hoovering user tracking system we’ve built to avoid having to get warrants.
There’s simply no meaningful incentive for reform.
None of this is helped by the fact that an ad-based, wealth-obsessed tech press is financially incentivized to prioritize engagement clickbait (billionaire cage matches! Poorly-made blockchain-based ape art will change the world!) over nuance and deeper analysis. It’s a media ecosystem owned by billionaires, one with an ever-dwindling interest in meaningfully challenging money, power, or the status quo.
The result of our collective superficiality isn’t hard to find when looking at the tech knowledge of the broader public. A recent Pew survey of 5,101 U.S. adults found that 80 percent of Americans know that Elon Musk now runs Tesla and Twitter, but just 23 percent are aware that the United States lacks a meaningful privacy law addressing how companies can use the data they collect:
52 percent of the public wasn’t sure if we had a privacy law. At the same time, while 77 percent of the public knows that Facebook changed its name to Meta in 2021, less than half (48 percent) of those surveyed know what two-factor authentication is. And while 87 percent know that more complicated passwords are better, just 32 percent have a basic understanding of how “AI” (LLMs) function.
When the press covers consumer privacy, the fact that the U.S. government has proven too corrupt to pass even a basic internet-era privacy law rarely gets mentioned. Nor does mainstream tech coverage note that the government has been lobbied into apathy on this subject for 30 years by a broad coalition of industries opposed to anything but the most toothless oversight.
While I’m sure a superficial, clickbait-obsessed tech press isn’t the only culprit here (our shaky education standards surely play a role), I can’t imagine it helps much. As a tech reporter I’ve watched a long, long line of quality independent tech news outlets get dismantled in favor of superficial clickbait machines, terrified of offending anyone in power, whose output is now being clumsily supercharged by “AI”.
Tech journalism’s failure to accurately portray the sorry state of U.S. privacy was perfectly exemplified by coverage of the TikTok privacy scandals. Endless outlets parroted worries that a single app might share U.S. consumer data with the Chinese government; few if any could be bothered to note that the same Chinese government can buy endless reams of consumer data from barely regulated data brokers.
As a broadband and telecom beat reporter in particular, I’ve similarly seen that when press outlets cover substandard broadband, the real underlying problem (consolidated monopoly power lobbying a corrupt government into apathy) again rarely warrants a mention. It’s systemic, and until we dedicate some serious time toward creatively funding independent journalism, it’s simply not getting better.
A lot of people freaked out on Friday after the news came out that Twitter was going to make SMS two-factor authentication (2FA) only available to paid Twitter Blue subscribers. The news was first broken, like so much Twitter news these days, by Platformer reporter Zoe Schiffer.
It’s understandable that people were up in arms over this, as one read is that keeping your account secure is now a luxury you have to pay extra for. But the details matter here, and I actually think many people are overreacting. There are fundamentally good reasons to move away from SMS-based 2FA: mainly that it’s woefully insecure, and runs the risk of making people think they’re far more secure than they actually are.

If you follow cybersecurity news, there are tons of articles explaining why SMS 2FA is not a good idea and why you should ditch it if you can. Some have argued it’s actually worse than just having a good password, though I think that very much depends on your threat model: that’s probably true for targeted individuals (who can be SIM swapped), and probably not true for most users, who are more likely to face generic brute force attacks.

Years back, Microsoft told everyone to move away from SMS-based 2FA. Google started transitioning people off of SMS-based 2FA all the way back in 2017, shortly after NIST deprecated it from its recommended multi-factor authentication list. But, at least in Google’s case, there was a clear transition plan.
Soon after Schiffer’s tweet, Twitter released a blog post explaining the decision (though, bizarrely, despite coming out on Friday afternoon, the blog post was backdated to Wednesday?!?):
While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors. So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers. The availability of text message 2FA for Twitter Blue may vary by country and carrier.
Non-Twitter Blue subscribers that are already enrolled will have 30 days to disable this method and enroll in another. After 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled. Disabling text message 2FA does not automatically disassociate your phone number from your Twitter account. If you would like to do so, instructions to update your account phone number are available on our Help Center.
We encourage non-Twitter Blue subscribers to consider using an authentication app or security key method instead. These methods require you to have physical possession of the authentication method and are a great way to ensure your account is secure.
It also helps to understand a bit of the background here. First, Twitter was (like in so many other areas) somewhat late to the 2FA game. When it added SMS-based 2FA in 2013, there were headlines about how it had “finally” done so. And, it was only in 2019 that the company let you turn on non-SMS 2FA without a phone number, again leading to headlines that included the word “finally.” And, the lack of security with SMS 2FA was pretty damn clear when someone hacked Jack Dorsey‘s own Twitter account using SIM swapping, the easiest way to get around SMS 2FA.
On top of that, I’ve spoken with former Twitter employees who say that the blog post above is not wrong when it says that SMS 2FA is often abused by bad actors in a manner that generates a ton of SMS messages, and is actually extremely costly for Twitter. Even if Elon is no longer paying any of Twitter’s bills, there may be legitimate business reasons for ending support for SMS 2FA (also if, hypothetically, Musk had stopped paying the bills for their SMS 2FA provider, it’s possible that vendor was threatening to cut Twitter off entirely, which might also explain the short timeline here).
So, I think that many of the headlines and tweets decrying this as making security a “luxury” reserved for paying subscribers are neither fair nor accurate. There are lots of things (obviously) that I criticize Musk about, but there are perfectly legitimate reasons to end support for SMS 2FA, and at least some of the freakout was an overreaction.
That said… I do still have many concerns with how this was rolled out, and it wouldn’t surprise me if the FTC has some concerns as well. While it’s a bit out of date, Twitter’s last transparency report on security (covering the second half of 2021) shows that only 2.6% of Twitter users even have 2FA enabled, which is really not great. And of those who have it enabled, nearly 75% are using SMS-based authentication.
So, there’s a legitimate fear that in simply killing off SMS 2FA without providing a very clear and straightforward transition to an authenticator app (or security key), the percentage of people using any 2FA at all may go down quite a bit, potentially putting more people at risk. If Twitter and Elon Musk weren’t just cost-cutting and were actually looking to make Twitter more secure for its users, they would have created a plan that did a lot more to transition users over to an authenticator app.
I mean, the fact that they’re still leaving SMS 2FA available to Twitter Blue subscribers pretty much gives away the game that this is solely about cost-cutting and not about transitioning users to better security. Indeed, it was only after spending a day talking about the expense that Musk seemed to realize SMS 2FA also wasn’t good for security and started making those claims as well (a day too late to be convincing that security had anything to do with the decision).
All that said, I am wondering if this might trigger yet another FTC investigation. The last consent decree with the FTC (remember, this was less than a year ago) was mostly about SMS 2FA, and how Twitter had abused the phone numbers it had on file, provided for 2FA, as a tool for marketing. That’s obnoxious and wrong and the FTC was correct to slam Twitter for it. Part of the consent decree was that Twitter had to provide 2FA options “that don’t require people to provide a phone number” (such as an authenticator app or security key, which the company does). But, also, it says that “Twitter must implement an enhanced privacy program and a beefed-up information security program.”
The details of that program include regular security assessments any time the company “modifies” security practices. I’m curious: did Twitter do such an assessment before making this change? The requirements of the program also include things like the following:
Identify and describe any changes in how privacy and security-related options will be presented to Users, and describe the means and results of any testing Respondent performed in considering such changes, including but not limited to A/B testing, engagement optimization, or other testing to evaluate a User’s movement through a privacy or security-related pathway;

Include any other safeguards or other procedures that would mitigate the identified risks to the privacy, security, confidentiality, and integrity of Covered Information that were not implemented, and each reason that such alternatives were not implemented; and
Was any of that done? Or was it just Musk getting upset after seeing a bill for SMS messaging and declaring that they were cutting off SMS 2FA? We may find out eventually…
In the end, I do think Twitter is right to move away from SMS 2FA (and, as a user, you should move away from it yourself wherever you can). Multi-factor authentication is a very important security practice, and one that more people should use, but the SMS variety is not nearly as safe as other methods. But there is little indication here that Musk is doing this for any reason other than to cut costs, and the haphazard way in which it has been rolled out suggests it may increase security risks for a noticeable percentage of Twitter users.
The good folks over at Platformer broke the news that Twitter is experimenting with Elon’s desperate attempt to make money: forcing people to “opt-in” to share personal info so they can better target ads. And, yes, there’s a contradiction between “force” and “opt-in.”
As everyone already knows, Elon is desperate for revenue, seeing as he took on $13 billion in debt and has massive interest payments to make. Yet his own actions have caused many advertisers to abandon the platform, likely driving him further into a hole. And, so far, his only big product idea has been the disastrous rollout of the new Twitter Blue program, for which he’s charging $8/month ($11 if you buy via iOS to cover Apple’s vig), whose main selling point is that you get a blue checkmark… causing people to mock you for “paying for Twitter.”
One of the other big selling points is “half the ads.” Which… first of all, it’s not at all clear how they would measure this. Many people are asking for no ads at all, and Musk has said that maybe in the future they’ll offer a higher tier with no ads. But it’s tricky. Because, at least in the US, the company was making in the range of $10 – $14/month per user on advertising. So, if you get them to pay $8 (minus transaction fees) and you take out half the ads, then you might actually be decreasing rather than increasing actual revenue.
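To make that tradeoff concrete, here’s a minimal back-of-the-envelope sketch in Python. It uses the rough figures mentioned above as assumptions (a $10 to $14 monthly ad run-rate per US user, an $8 subscription, an approximate 30% app store cut), not actual Twitter financials:

```python
# Back-of-the-envelope math using assumed figures from the paragraph above,
# not actual Twitter financials.

monthly_ad_revenue_per_user = (10.0, 14.0)  # rough US ad revenue per user, $/month
subscription_price = 8.0                    # Twitter Blue, $/month
app_store_cut = 0.30                        # approximate iOS transaction fee

for ad_revenue in monthly_ad_revenue_per_user:
    ads_lost = ad_revenue / 2                            # "half the ads" ~ half the ad revenue
    subscription_net = subscription_price * (1 - app_store_cut)
    net_change = subscription_net - ads_lost
    print(f"${ad_revenue:.0f}/mo ad user -> net change: ${net_change:+.2f}/mo per subscriber")

# With these assumptions the result lands between roughly +$0.60 and -$1.40
# per subscriber per month, i.e. possibly a net revenue loss.
```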
That takes us to the new report from Platformer, which makes it clear that Elon’s Twitter is trying to figure out how to increase the amount of money they make per ad, and wants to do that with better targeting of those ads. Of course, right now, that’s… trickier than it might have been at any time in the past. Between EU data protection laws, California’s privacy laws, and other crackdowns on tracking (such as Apple completely kneecapping Meta’s access to private info for ad tracking) it seems that traditionally targeted ads are on the way out.
It appears that Musk’s solution to this is to force people to cough up their private info.
Twitter’s solution: require users to opt in to personalized ads and share their location information, or risk losing access to the service. The company is developing plans to prompt existing users to opt in to personalized ads and will make it the default for new users, according to plans shared with Platformer.
Once users have agreed, they won’t ever be able to opt out, sources said.
So… that seems like the kind of thing that EU and California privacy law enforcers are not going to like. It also seems like the kind of thing that users aren’t going to like and may drive even more away from the service. Remember how AT&T once tried to charge users for privacy? That doesn’t fly any more. If you don’t have a clear need for this info, you don’t get to force users to hand over the data. And I don’t think “helping the world’s second richest man have less regret for his stupidly impulsive purchase” qualifies as a need.
And there’s more. According to Platformer, Twitter is also planning to use phone numbers that people provided for two-factor authentication for marketing purposes.
The company is also considering forcing users to share their location, let Twitter share their data with its business partners, and use contact data (phone numbers used in two-factor authentication) for ad targeting purposes.
Except, um, as we’ve been saying, it was just back in May of this year that Twitter was fined $150 million by the FTC for… doing exactly this. And using 2FA phone numbers for marketing was a big part of the FTC’s $5 billion fine against Meta. We already wondered if Twitter folks remembered the consent decree that it had with the FTC, but it’s worth noting that the company signed that new consent decree in May specifically around this kind of thing, and I don’t think the FTC will look too kindly on Twitter turning around and doing the same thing after forcing people to “opt-in.”
According to the new consent decree, Twitter has to provide multi-factor authentication tools that do not require providing a phone number. It sounds like Musk’s plans go against that. There is also a big long list of requirements for launching any new program like this to avoid the data being misused, and I don’t see how Twitter is going to make that work, when the whole point of this seems to be to abuse this data.
And none of that is even getting to how this almost certainly violates California’s new privacy laws, which put limits on the purpose for which an internet company collects private information from users.
Perhaps Musk thinks Twitter can just ignore all the laws and somehow push through, but that seems like a pretty big task. And while I’m sure plenty of users won’t realize that the company is demanding they hand over their private info and will just click “okay” and move forward, it seems that at least a decent segment of the more informed populace will opt out the only way they can: by no longer using Twitter.
While a lot of the scandals surrounding “big tech” have been overblown, one that hasn’t been discussed enough is Silicon Valley companies’ abuse of user two-factor authentication data. If you’ve been napping, two-factor authentication (preferably of the app- or hardware-key-based variety, rather than SMS) helps protect your accounts from being compromised by hackers.
But when both Facebook and Twitter implemented it, company executives apparently thought it would be a great idea to use the email and phone data collected for marketing purposes. That’s a massive problem, as it completely undermines trust in the two-factor authentication process (and these companies’ security standards in general), making it less likely that users would protect themselves.
For much of the last decade, Twitter told its users that it was collecting their phone numbers and email addresses for account-security purposes. But it didn’t inform users that it would also use this information to send targeted ads to customers. When this was revealed, Twitter claimed it didn’t know it was happening, which if true is still… sloppy and bad in terms of both privacy and security standards.
In 2020, reports emerged that the company would likely be fined up to $250 million by the FTC for the behavior. This week the long-looming action finally dropped, with the FTC announcing that Twitter had struck a $150 million settlement with the DOJ and FTC:
from May 2013 to September 2019, Twitter told its users that it was collecting their telephone numbers and email addresses for account-security purposes, but failed to disclose that it also would use that information to help companies send targeted advertisements to consumers. The complaint further alleges that Twitter falsely claimed to comply with the European Union-U.S. and Swiss-U.S. Privacy Shield Frameworks, which prohibit companies from processing user information in ways that are not compatible with the purposes authorized by the users.
It was underplayed amid other concerns, but a big portion of the $5 billion fine levied against Facebook by the FTC also involved this same exploitation of information provided specifically for 2FA.
There are obviously numerous other companies that have chosen to undermine user security by monetizing 2FA information, but given that the FTC is too underfunded and understaffed to handle them all, these fines have to be large enough to act as a warning shot across the bow in the absence of federal legislation.
Not only are countless systems and services not secure, security itself often isn’t treated with the respect it deserves. And tools that are supposed to protect you from malicious actors are often monetized in self-serving ways. Like that time Facebook advertised a “privacy protecting VPN” that was effectively just spyware used to track Facebook users when they weren’t on Zuckerberg’s platform. Or that time Twitter faced an FTC fine of up to $250 million after it chose to use the phone numbers provided by users for two-factor authentication for marketing purposes (something Facebook was also busted for).
SMS verification messages themselves are also now being exploited as a marketing opportunity. Developer Chris Lacy was recently taken aback after an SMS two-factor authentication code from Google arrived with an ad injected into it:
I just received a two factor authentication SMS from Google that included an ad. Google's own Messages SMS app flagged it as spam.
Google confirmed to 9to5Google that it didn’t inject the ads, and that this was done by Lacy’s wireless carrier (which he declined to name for privacy reasons). I’ve never seen a wireless carrier attempt this, and my guess is that (assuming he’s in the States) this isn’t one of the big three (AT&T, Verizon, and T-Mobile). It’s most likely a smaller prepaid operator which, even in the era of a more feckless FCC, faces some notable fines should the behavior get widespread attention. Both Google and Lacy say they’re working with the unnamed carrier in question.
Needless to say, security experts like Kenn White weren’t particularly impressed:
While I generally consider myself an eternal optimist, with telco carriers, I'm a fairly jaded SOB. That said, the fact that a mobile carrier would inject ads directly into otherwise authentic SMS content (especially from a major security service endpoint) is shocking to me. https://t.co/Mt6ZXnK7og
Ironically the ad was for VPN services, which themselves promise layers of security and privacy that often don’t exist. Sent over an SMS system that security researchers are increasingly warning isn’t secure enough for two-factor authentication or much of anything else. We live in an era where we prioritize monetization, but pay empty lip service to security and privacy. What could possibly go wrong in a climate like that?
It’s not clear why journalists keep having to do the wireless industry’s job for it, yet here we are.
Sometime around mid-March, Motherboard reporter Joseph Cox wrote a story explaining how he paid a hacker $16 to gain access to most of his online accounts. How? The hacker exploited a flaw in the way text messages are routed around the internet, paying a third party (with pretty clearly flimsy standards for determining trust) to reroute all of Cox’s text messages, including SMS two-factor authentication codes. From there, it was relatively trivial to break into several of the journalist’s accounts, including Bumble, WhatsApp, and Postmates.
It’s a flaw the industry has apparently known about for some time, but carriers only decided to take action after the story made the rounds. This week, all major wireless carriers indicated they’d be making significant changes to the way text messages are routed in order to take aim at the flaw:
“The Number Registry has announced that wireless carriers will no longer be supporting SMS or MMS text enabling on their respective wireless numbers,” the March 25 announcement from Aerialink, reads. The announcement adds that the change is “industry-wide” and “affects all SMS providers in the mobile ecosystem.”
“Be aware that Verizon, T-Mobile and AT&T have reclaimed overwritten text-enabled wireless numbers industry-wide. As a result, any Verizon, T-Mobile or AT&T wireless numbers which had been text-enabled as BYON no longer route messaging traffic through the Aerialink Gateway,” the announcement adds, referring to Bring Your Own Number.
It’s a welcome move, but it’s also part of a trend in which journalists making a pittance somehow routinely have to prompt an industry that makes billions of dollars a year to properly secure its networks. It’s not much different from the steady parade of SIM swapping attacks that plagued the industry for years, which only resulted in substantive action after reporters began documenting how common they were (and big-name cryptocurrency investors had millions of dollars stolen). It was another example of how two-factor authentication over text messages isn’t genuinely secure.
Or the SS7 flaw, which the industry has known about for years but didn’t take seriously until journalists began documenting how it lets all manner of malicious private and government actors spy on wireless users without them knowing. US consumers pay some of the highest prices in the developed world for mobile data. At those prices, it shouldn’t matter how clever these attacks are: telecom giants should be getting out ahead of security flaws before they become widespread problems, not belatedly acting only after news outlets showcase their apathy and incompetence.
There are many things that big internet companies do that the media have made out to be scandals that aren’t — but one misuse of data that I think received too little attention was how both Facebook and later Twitter were caught taking the phone numbers people gave them for two-factor authentication and using them for notification/marketing purposes.
In case you’re somehow unaware, two-factor authentication is how you should protect your most important accounts. I know many people are too lazy to set it up, but please do so. It’s not perfect (Twitter’s recent big hack routed around 2FA protections), but it is many times better than just relying on a username and password. In the early days of 2FA, one common way to implement it was to use text messaging as the second factor. That is, when you tried to log in on a new machine (or after a certain interval of time), the service would text you a code that you would need to enter to prove that you were you.
Over time, people realized that this method was less secure. Many hacks involved “SIM swapping” (using social engineering to have your phone number ported over to the attacker), and then having the 2FA code sent to the hacker. These days, good 2FA usually involves an authenticator app, like Google Authenticator or Twilio’s Authy, or, even better, a physical key such as a YubiKey or Google’s Titan Key. However, many services and users have stuck with text messaging for 2FA because it’s the least complex for users — and the issue with any security practice is that if it’s not user-friendly, no one will use it, and that doesn’t do any good either.
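For the curious, here’s a minimal sketch of the standard TOTP scheme (RFC 6238) that those authenticator apps implement. The app and the service share a secret once at enrollment, and both derive short-lived codes from that secret and the current time, so no code ever has to travel over SMS. (This is illustrative Python, not any particular service’s implementation; the secret below is a hypothetical example value.)

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HMAC-SHA1, per the RFC
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The app and the server run the same function with the same shared secret,
# so no code (and no phone number) ever needs to change hands at login time.
secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, hypothetical value
print(totp(secret))
```

Hardware keys go a step further by keeping the secret inside tamper-resistant hardware and binding the login to the real site, which is why they also resist phishing in a way one-time codes don’t.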
But using phone numbers given for 2FA purposes for notifications or marketing is really bad. First of all, it undermines trust — which is the last thing you want to do when dealing with a security mechanism. People handed over these phone numbers/emails for a very specific and delineated reason: to better protect their account. To then share that phone number or email with the marketing team is a massive violation of trust. And it serves to undermine the entire concept of two-factor authentication, in that many users will become less willing to make use of 2FA, fearing how the numbers might be abused.
As we noted when Facebook received the mammoth $5 billion fine from the FTC a year ago, while the media focused almost entirely on the Cambridge Analytica situation as the reason for the fine, if you actually read the FTC’s settlement documents, it was other things that really caused the FTC to move, including Facebook’s use of 2FA phone numbers for marketing. We were glad that Facebook got punished for that.
And now it’s Twitter’s turn. Twitter has revealed that the FTC is preparing to fine the company $150 million to $250 million for this practice — noting that it violated the terms of an earlier consent decree with the FTC in 2011, where the company promised not to mislead users about how it handled personal information. Yet, for years, Twitter used the phone numbers and emails provided for 2FA to help target ads (basically using the phone number/email as an identifier for targeting).
There’s no explanation for this other than really bad handling of data at Twitter, and the company should be punished for it. There are many things I think Twitter gets unfairly blamed for, but a practice like this is both bad and dangerous, and I’m all for large fines from the FTC to convince companies to never do this kind of thing again.
In our ongoing discussions about the new platform wars going on between Steam and the Epic Store, perhaps we’ve been unfair to another participant in those wars: EA’s Origin. Except that, no, we haven’t, since Origin is strictly used for EA-published games, and now EA is pushing out games on Steam as well. All of which is to say that Origin, somehow, is still a thing.
Enough of a thing, actually, for EA to have tried to do something beneficial around Cybersecurity Month. EA promised Origin users who enabled two-factor authentication on the platform a free month of Origin Access Basic. That free month would give those who had enabled better security on their accounts access to discounts on new games and downloads of old games. Cool, right?
This morning at around 3am, jolted awake by an antsy newborn, I rolled over to check my email and was alarmed to see a message from EA with the subject: “You’ve redeemed an Origin Access Membership Code.” Goddamnit, I thought. Did someone hack me? Turns out it was just EA starting off everyone’s day with a nice little scare.
The email thanked the user for redeeming the access code without any reminder that this was tied to enabling 2FA the month before. It looked for all the world like any other purchase confirmation from Origin. This sent a whole bunch of people scrambling, assuming their accounts had been hacked. Then those same people jumped on Twitter, either recognizing that the scare was a result of EA’s crappy communication, or else not realizing that and asking all of Twitter what to do now.
That all of this came as a result of a Cybersecurity Month initiative was an irony not lost on the public.
Ironically, this email came as the result of an EA initiative to reward users of its PC platform with more security. Last month, EA quietly announced that Origin users with two-step verification enabled (in honor of “National Cybersecurity Month”) would get a free month of Origin Access Basic, which offers discounts and access to a bunch of old games. This was them making good on that promise.
Now if only “making good” hadn’t also equated to “scaring the hell out of users into thinking they’d been hacked and might have even lost all of their progress in Star Wars Jedi Fallen Order and had to start from scratch just like their buddy Kirk did.” Telling people that they’ve redeemed a code out of the blue is a good way to get them to immediately freak out and change all their passwords, especially in a world where just about every company (EA included) has been the target of a massive security breach.
EA: where even when the company tries to do something nice and good, it just ends up scaring the shit out of everyone.
When you sign up for security services like two-factor authentication (2FA), the phone number you’re providing is supposed to be used explicitly for security. You’re providing that phone number as part of an essential exchange intended to protect yourself and your data, and that information is not supposed to be used for marketing. But since we’ve yet to craft a formal privacy law, there’s nothing really stopping companies from using it that way anyway, something Facebook exploited last year when it was caught using consumer phone numbers provided explicitly for 2FA for marketing purposes.
It’s not only a violation of users’ trust, it incentivizes them not to use two-factor authentication for fear of being spammed, making everybody less secure. As part of Facebook’s recent settlement with the FTC, the company was forbidden from using 2FA phone numbers for marketing ever again.
Having just watched Facebook go through this, Twitter has apparently decided to join the fun. In a blog post this week, the company acknowledged that users included in its Tailored Audiences and Partner Audiences advertising systems may have had the phone numbers they provided for 2FA used for marketing as well:
“We cannot say with certainty how many people were impacted by this, but in an effort to be transparent, we wanted to make everyone aware. No personal data was ever shared externally with our partners or any other third parties. As of September 17, we have addressed the issue that allowed this to occur and are no longer using phone numbers or email addresses collected for safety or security purposes for advertising.”
Security-conscious folks had already grumbled about the way Twitter sets up 2FA, and those same folks weren’t, well, impressed:
In all seriousness: whose idea was it to use a valuable advertising identifier as an input to a security system. This is like using raw meat to secure your tent against bears.
While it’s nice that Twitter came out and admitted the error, you have to think it’s unlikely this would happen were there real federal penalties for being cavalier about user privacy and security.
Last year, the company admitted to storing passwords for 330 million customers unencrypted in plain text, and a bug in the company’s code also exposed subscriber phone number data, something Twitter knew about for two years before doing anything about it. Earlier this year Twitter acknowledged that another bug exposed the location data of its users to an unknown partner. And of course Jack’s own account was hacked thanks to an SMS hijacking problem agencies like the FCC haven’t been doing much (read: anything) about.
While there’s understandable fear about the unintended consequences of poorly crafted privacy legislation, having at least some basic god-damned rules in place (including things like penalties for storing user data in plaintext, or using security-related systems like 2FA as marketing opportunities) would likely go a long way in deterring these kinds of “inadvertent oversights.” Outside of the problematic COPPA (which applies predominantly to kids), there are no real federal guidelines disincentivizing the cavalier treatment of user data, though apparently we’re going to stumble through another 10 years of daily privacy scandals before “conventional wisdom” realizes that’s a problem.
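For what it’s worth, the plaintext password problem mentioned above has had a well-understood fix for decades: store only a salted, deliberately slow hash and compare against it at login. Here’s a minimal sketch using the scrypt function in Python’s standard library, purely as an illustration of the general practice (not a description of how any of these companies actually handle passwords):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only the salted, slow hash is ever stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```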
If the entire Cambridge Analytica scandal didn’t make that clear enough, Facebook keeps doubling down on behaviors that highlight how security and privacy routinely play second fiddle to user data monetization. Like the VPN service Facebook pitches to users as a privacy and security solution, which is actually used to track their online behavior when they wander away from Facebook to other platforms. Or that time Facebook implemented two-factor authentication, only to use the provided (and purportedly private) numbers to spam users (a problem Facebook stated was an inadvertent bug).
This week, a new report highlighted how Facebook lets advertisers target Facebook users using contact information collected in surprising ways that aren’t entirely clear to the end user and that, according to Facebook, aren’t supposed to work. That includes not only two-factor authentication contact info users assume to be private, but also data harvested about you from other users (like secondary email addresses and phone numbers you never directly provided to Facebook). The findings come via a new report (pdf) by Northeastern University’s Giridhari Venkatadri, Alan Mislove, and Piotr Sapiezynski and Princeton University’s Elena Lucherini.
In it, the researchers highlight how much of the personally identifying information (PII) collected by Facebook still isn’t really explained by Facebook outside of painfully generic statements. This data in turn can be used to target you specifically with ads, and there’s virtually no transparency on Facebook’s part in terms of letting users see how this data is being used, or providing fully operational opt-out systems:
“Worse, we found no privacy settings that directly let a user view or control which PII is used for advertising; indeed, we found that Facebook was using the above PII for advertising even if our control account user had set the existing PII-related privacy settings to their most private configurations. Finally, some of these phone numbers that were usable to target users with did not even appear in Facebook’s “Access Your Data” feature that allows users to download a copy of all of their Facebook data as a ZIP file.
Again, this includes the use of two-factor authentication (2FA) credentials that Facebook has previously stated aren’t supposed to be used for marketing purposes. It’s something that Facebook has repeatedly claimed doesn’t happen:
“Facebook is not upfront about this practice. In fact, when I asked its PR team last year whether it was using shadow contact information for ads, they denied it.
User efforts to glean more transparency from Facebook haven’t fared well either, even in the UK where the GDPR was supposed to have put an end to this kind of cavalier treatment of user data:
“I’ve been trying to get Facebook to disclose shadow contact information to users for almost a year now. But it has even refused to disclose these shadow details to users in Europe, where privacy law is stronger and explicitly requires companies to tell users what data it has on them. A UK resident named Rob Blackie has been asking Facebook to hand over his shadow contact information for months, but Facebook told him it’s part of “confidential” algorithms, and “we are not in a position to provide you the precise details of our algorithms.”
And again, this is a company operating in the wake of several major privacy scandals, while trying to stave off heavy-handed privacy regulation at both the state and federal level. It makes you wonder what things look like when Facebook truly doesn’t give a damn.