For years, the cable industry has dreamed of a future where they could use your cable box to actively track your every behavior using cameras and microphones and then monetize the data. At one point way back in 2009, Comcast made it clear they were even interested in using embedded microphones and cameras to monitor the number of people in living rooms and listen in on conversations.
More than a decade later, the cable industry is openly bragging that it has accomplished that vision.
Last month, 404 Media reported that Cox Media Group (an extension of Cox cable) has been happily bragging about its ability to use mics and cameras in smartphones, smart TVs, and other devices to actively monitor users, then use that gathered information for targeted ads.
From the company’s website:
“What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations? No, it’s not a Black Mirror episode—it’s Voice Data, and CMG has the capabilities to use it to your business advantage,” CMG’s website reads.
As for legality, Cox isn’t really worried about it:
“Is this legal? YES- it is totally legal for phones and devices to listen to you. That’s because consumers usually give consent when accepting terms and conditions of software updates or app downloads.”
The company can’t be all that proud of the accomplishment, since it scrubbed the claim from its website very shortly after the news report emerged. And after a delay, it finally issued a statement walking back its previous claims:
“CMG businesses do not listen to any conversations or have access to anything beyond a third-party aggregated, anonymized and fully encrypted data set that can be used for ad placement. We regret any confusion and we are committed to ensuring our marketing is clear and transparent,” the statement added.
That statement isn’t particularly clarifying, especially given the repeated studies showing that the term “anonymization” doesn’t actually mean anything. It’s a term the marketing industry trots out as a get-out-of-jail-free card any time it’s accused of surreptitious surveillance.
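To see why “anonymized” is such an empty promise, consider the classic linkage attack: strip the names off a dataset, and the remaining quasi-identifiers (ZIP code, birth date, sex) can often be joined against a public record that still carries names. The sketch below uses entirely fabricated data and hypothetical field names, but the mechanism is the well-documented one:

```python
# Toy illustration of a linkage attack: an "anonymized" ad-profile dataset
# with names removed is re-identified by joining its quasi-identifiers
# (ZIP code, birth date, sex) against a public voter roll that still has
# names attached. All records here are fabricated.

anonymized_ad_profiles = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "interests": ["medical supplies"]},
    {"zip": "90210", "dob": "1988-01-12", "sex": "M", "interests": ["sports cars"]},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "90210", "dob": "1988-01-12", "sex": "M"},
]

def reidentify(profiles, roll):
    """Join on the quasi-identifier triple; a unique match defeats 'anonymity'."""
    hits = []
    for p in profiles:
        matches = [r for r in roll
                   if (r["zip"], r["dob"], r["sex"]) == (p["zip"], p["dob"], p["sex"])]
        if len(matches) == 1:  # a unique match pins the record to a named person
            hits.append((matches[0]["name"], p["interests"]))
    return hits

print(reidentify(anonymized_ad_profiles, public_voter_roll))
```

Latanya Sweeney's well-known research found that a unique (ZIP, birth date, sex) triple identifies the large majority of Americans, which is why removing names alone buys essentially no anonymity.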
(As an aside, 404 Media was launched only a few months ago by Motherboard editors fleeing the idiotic Vice bankruptcy, highlighting the benefit of having a healthy and functional independent media).
Again, the cable industry has been actively bragging about its interest in using embedded microphones and cameras to listen in and watch living room behaviors in order to sell you things for as long as I’ve been a reporter, so it would be surprising if they hadn’t implemented some flavor of the idea, carefully tailored to tap dance around our flimsy ass existing wiretap and privacy laws.
Security researchers have found it trivial to hack Comcast cable remotes or smart televisions from various vendors to listen in on users without their consent. And for years, marketing companies have used phones to listen in on consumer activity, often via inaudible tones transmitted by TVs and picked up by apps on nearby phones.
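The mechanics of those inaudible tones are simple: the TV emits a near-ultrasonic beacon (roughly 18 to 20 kHz, above what most adults can hear), and an app with microphone access finds the dominant frequency with an FFT. This is a minimal sketch of the principle; the frequency and "campaign ID" mapping are illustrative, not any real vendor's scheme:

```python
import numpy as np

# Sketch of an ultrasonic "audio beacon": the TV side emits a tone just above
# human hearing, and the phone side recovers it with an FFT peak search.
# The specific frequency here is a made-up example.

SAMPLE_RATE = 48_000  # Hz; high enough to represent ~19 kHz tones

def emit_beacon(freq_hz, seconds=0.5):
    """The 'TV side': a pure near-ultrasonic tone at the beacon frequency."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def detect_beacon(audio):
    """The 'phone side': locate the dominant frequency via FFT."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

tone = emit_beacon(18_750)   # hypothetical beacon frequency for one ad campaign
peak = detect_beacon(tone)
print(round(peak))           # prints 18750: the phone recovers the beacon
```

A real deployment modulates data onto several such tones and has to survive room acoustics and compression, but the core trick, which is inaudible signaling between devices that were never supposed to be talking to each other, is exactly this small.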
Why wouldn’t companies pursue such technologies in a country that’s genuinely too corrupt to pass even a baseline privacy law for the Internet era? Our regulators generally lack the staff or resources to even come close to policing the privacy abuses already happening every day at scale, and cable and wireless companies have long been at the front of the parade of companies eager to exploit that.
Thanks to industry consolidation and saturated market growth, the streaming industry has started behaving much like the traditional cable giants they once disrupted.
As with most industries suffering from “enshittification,” that generally means imposing obnoxious new restrictions (see: Netflix password sharing), endless price hikes, and obnoxious and dubious new fees geared toward pleasing Wall Street’s utterly insatiable demand for improved quarterly returns at any cost.
Case in point: Amazon customers already pay $15 per month, or $139 annually for Amazon Prime, which includes a subscription to Amazon’s streaming TV service. In a bid to make Wall Street happy, Amazon recently announced it would start hitting those users with entirely new streaming TV ads, something you can only avoid if you’re willing to shell out an additional $3 a month.
There was ample backlash to Amazon’s plan, but it apparently accomplished nothing. Amazon says it’s moving full steam ahead with the plan, which will begin on January 29th:
“We aim to have meaningfully fewer ads than linear TV and other streaming TV providers. No action is required from you, and there is no change to the current price of your Prime membership,” the company wrote. Customers have the option of paying an additional $2.99 per month to keep avoiding advertisements.
If you recall, it took the cable TV, film, music, and broadcast sectors the better part of two decades before they were willing to give users affordable, online access to their content as part of a broader bid to combat piracy. There was just an endless amount of teeth gnashing by industry executives as they were pulled kicking and screaming into the future.
Despite having just gone through that experience, streaming executives refuse to learn anything from it, and are dead set on nickel and diming their users. This will inevitably drive a non-insignificant amount of those users back to piracy, at which point executives will blame the shift on absolutely everything and anything other than themselves. And the cycle continues in perpetuity…
One reason that “right to repair” reform has such broad, bipartisan public support is because there’s really no aspect of your daily life that isn’t touched by it. The effort to monopolize repair isn’t just the territory of Apple or game console makers like Sony and Microsoft. The problem is present in everything from the agricultural and medical gear sectors, to transportation.
Everywhere you look you have companies attempting to drive independent repair shops out of business, in turn creating numerous headaches while they drive up costs for consumers. Then, whenever absolutely anybody proposes doing anything about their attempt to monopolize repair, these companies will complain critics are putting consumer security, privacy, or safety at risk. It’s clockwork.
The latest case in point: 404 Media noticed that over in Poland, one regional rail company and a train manufacturer named NEWAG have taken to using DRM to lock down trains that are repaired by independent technicians, in a bid to both monopolize repair and drive up its cost.
The intentionally bricked trains disrupted rail travel, so independent technicians took to hiring a white hat hacking group dubbed Dragon Sector to bypass the DRM and get the trains running again:
“These trains were locking up for arbitrary reasons after being serviced at third-party workshops. The manufacturer argued that this was because of malpractice by these workshops, and that they should be serviced by them instead of third parties,” Bazański, who goes by the handle q3k, posted on Mastodon. “After a certain update by NEWAG, the cabin controls would also display scary messages about copyright violations if the human machine interface detected a subset of conditions that should’ve engaged the lock but the train was still operational. The trains also had a GSM telemetry unit that was broadcasting lock conditions, and in some cases appeared to be able to lock the train remotely.”
Again, manufacturers aren’t doing this to genuinely protect hardware or customer security and safety (though executives may have convinced themselves of such). They’re doing it because they’re obsessed with control, and because they want a monopoly on repair.
And, as always, the folks trying to bypass the unnecessary, self-serving restrictions are framed by industry as radical rabble-rousers and a threat to public safety:
“Hacking IT systems is a violation of many legal provisions and a threat to railway traffic safety,” NEWAG added. “We do not know who interfered with the train control software, using what methods and what qualifications. We also notified the Office of Rail Transport about this so that it could decide to withdraw from service the sets subjected to the activities of unknown hackers.”
The problem for companies following this path is that the widespread, bipartisan support for right to repair reform is only growing. The more companies try to fight back, the bigger the opposition gets. That’s a major reason why companies like Apple and Microsoft (at least publicly), have begun softening their rhetoric and started focusing on controlling the contours of potential legislative reforms.
Half a decade ago we documented how the U.S. wireless industry was caught over-collecting sensitive user location data and vast troves of behavioral data, then selling access to that data to pretty much anybody with a couple of nickels to rub together. It resulted in no end of abuse by everyone from stalkers to law enforcement — and even people pretending to be law enforcement.
While the FCC purportedly moved to fine wireless companies for this behavior, the agency still hasn’t followed through, despite the obvious ramifications of this kind of behavior in a post-Roe, increasingly authoritarian era.
Nearly a decade later, it’s still a very obvious problem. The folks over at 404 Media have documented the case of a stalker who managed to game Verizon into handing over sensitive data about his target, including her address, location data, and call logs.
Her stalker posed as a police officer (badly) and, as usual, Verizon did virtually nothing to verify his identity:
“Glauner’s alleged scheme was not sophisticated in the slightest: he used a ProtonMail account, not a government email, to make the request, and used the name of a police officer that didn’t actually work for the police department he impersonated, according to court records. Despite those red flags, Verizon still provided the sensitive data to Glauner.”
In this case, the stalker found it relatively trivial to take advantage of the Verizon Security Assistance and Court Order Compliance Team (VSAT CCT), which verifies law enforcement requests for data. You’d think that after a decade of very ugly scandals on this front Verizon would have more meaningful safeguards in place, but you’d apparently be wrong.
Keep in mind: the FCC tried to impose some fairly basic privacy rules for broadband and wireless in 2016, but the telecom industry, in perfect lockstep with Republicans, killed those efforts before they could take effect, claiming they’d be too harmful for the super competitive and innovative (read: not competitive or innovative at all) U.S. broadband industry.
In fact, any time the FCC proposes doing absolutely anything about lax privacy standards in wireless or broadband, Republicans work in perfect synchronicity with Comcast, Verizon, and AT&T to demonize and crush the effort. They’re currently trying to block an FCC effort requiring that broadband providers do a better, faster job informing customers about hacks and data breaches.
The Republican party never has to truly own this dangerous policy decision in the press; you can often watch as cable news outlets present Republicans like Marsha Blackburn, Ted Cruz, or Brendan Carr as good-faith privacy reformers (see their performative outrage about TikTok).
At the same time, Congress, as a whole, has proven too corrupt to pass even a basic privacy law for the internet era, despite no limit of problematic scandals. In part because there’s a massive coalition of companies across numerous industries lobbying against it, but also because this lax data-hoovering system we’ve constructed helps the government avoid having to get actual warrants.
So what we get is this steady beat of ugly and avoidable privacy scandals we’ve chosen to do nothing about. Those in power have effectively decided that making money is more important than market health, human safety, or pretty much anything else. Eventually, there will be a scandal at a scale so disturbing it finally shakes Congress out of its corrupt slumber, and it’s going to be a doozy.
The Italians are the new Israelis… at least in terms of hawking phone exploits and other spyware.
NSO Group crashed hard following leaks showing its customers (many of which were, shall we say, questionable) were targeting political rivals, dissidents, human rights activists, journalists, lawyers, and religious leaders with powerful exploits that completely exposed the contents of targeted phones, as well as allowing those doing the targeting to eavesdrop on every conversation and communication engaged in by phone users.
NSO not only crashed hard, but its splash damage was enough to get other Israeli competitors hit with US sanctions or forced to hook up with offshore distributors just to be able to sell their products to the few world governments still willing to do business with them. No surprise here, though. The list of willing buyers had pretty much been whittled down to the serial human rights violators that got NSO Group in trouble in the first place.
Israeli news service Haaretz took a trip to Milipol in Paris, a convention hosting a ton of third-party providers from all over the world seeking to sell surveillance tech (as well as good old-fashioned guns) to the world’s cops and soldiers.
Not present at this gathering? Israeli phone exploit hawkers, who apparently decided now is not the time to be making public appearances. NSO, Candiru, and other homegrown Israeli spyware firms took a pass on Milipol, ceding plenty of floor space to numerous rivals, many of which called Italy home.
Though Israeli offensive cyber firms did not attend, their European competitors did: RCS, producer of the Hermit spyware that is considered a competitor of NSO’s Pegasus; Memento Labs, formerly known as Hacking Team; and IPS-Intelligence, all Italian firms, were present.
Memento Labs certainly has more reason now than ever to separate itself from the “Hacking Team” brand. Not only did the company suffer through a truly embarrassing hack of its own (one that exposed its sales to UN-blacklisted governments), but one of its founders was recently arrested for attempted murder (!!).
Not that Italy has the surveillance market cornered. But companies calling it home have been major players for years and the void left by the sudden absence of dominant Israeli firms has opened up the market to companies often considered to be nothing more than also-rans.
In addition to the heavy Italian presence at this wing of Milipol, Haaretz reports other firms offering anything from remote phone exploits to “tactical” communication interception software/hardware pitched their products to buyers. It’s not just one corner of the world producing surveillance tech. Haaretz notes several new players are on the scene, including companies located in Croatia, the Czech Republic, and… France.
Not that there weren’t any Israeli firms in attendance. Those, however, were pushing products for more passive surveillance, as well as regular army stuff, like anti-drone systems and battery packs for military hardware.
If companies like NSO Group made any friends in the surveillance tech field, they sure don’t seem to have many left. While they aren’t exactly distancing themselves from the new pariahs, they sure don’t seem to have much sympathy for a company that saw itself outed by a disastrous target list leak.
“The Israelis fucked up and their clients exposed them,” an employee from one of the EU-based cyber arms firms said on condition of anonymity.
And certainly none of these spyware purveyors are any more honorable than the companies they’re supplanting.
They admitted that their firm and others in Europe sell their spyware to clients in the same states Israeli firms once sold to – including countries in Africa and the Arab world – with which Israeli firms are currently barred from doing business.
This seems surprising, given that a lot of NSO Group’s sales to human rights violators were brokered by the Israeli government, which seemingly considered selling powerful tools to dictators to be a form of diplomacy. According to the companies spoken to by Haaretz, European restrictions are far less restrictive than those imposed by the Israeli government on Israeli tech firms. So, if we were hoping for a better world with fewer powerful tech tools in the hands of autocrats, it appears that dream is as dead as NSO itself.
And it’s not as though Memento Labs — formerly Hacking Team — has turned over a new leaf as a result of its own disastrous public exposure. It has already rebranded its rebrand, referring to itself as M-Labs. All its reformation has accomplished is the creation of a new layer of plausible deniability. It no longer sells its exploits to questionable governments. Instead, it sells its exploits to other developers, allowing them to do the actual dirty work of dealing with human rights violators while it collects licensing fees for its exploit code.
There will always be a market for these products. And there will always be a very healthy autocrat market for the most powerful exploits and surveillance gear. It isn’t that no one should be creating these tools. It’s that they should be far more judicious about who they sell to. That’s what got NSO in trouble. And that’s what’s ultimately going to cause the downfall of other companies that have the same tech talent and the same gaping hole where a moral center should be.
At best, moves like this give the appearance of impropriety. At worst, they look like what they almost always are: government officials moving directly into positions within the industry they just recently regulated, carrying with them family photos, desk decorations, and a file box full of conflicted interests.
Things look mighty conflicted here, even though the commissioner who passed through the revolving door on the way to his private sector office claims there’s nothing wrong with what he did. Mark Townsend has the details for The Guardian.
In a move critics have dubbed an “outrageous conflict of interest”, Professor Fraser Sampson, former biometrics and surveillance camera commissioner, has joined Facewatch as a non-executive director.
Sampson left his watchdog role on 31 October, with Companies House records showing he was registered as a company director at Facewatch the following day, 1 November. Campaigners claim this might mean he was negotiating his Facewatch contract while in post, and have urged the advisory committee on business appointments to investigate if it may have “compromised his work in public office”. It is understood that the committee is currently considering the issue.
Facewatch — like all facial recognition tech — is controversial. Even in a nation inundated with surveillance cameras and facial recognition programs, Facewatch drew more opposition than most. Adding to this controversy is the fact that the UK Home Office was less than subtle in its, shall we say, suggestion that the Information Commissioner’s Office come down on the side of the Home Office and its preferred tech provider.
Correspondence reveals that the Home Office wrote to the Information Commissioner’s Office (ICO) warning that policing minister, Chris Philp, would “write to your commissioner” if the regulator’s investigation into Facewatch – whose facial recognition cameras have provoked huge opposition after being installed in shops – was not positive towards the firm.
An official from the Home Office’s data and identity directorate warned the ICO: “If you are about to do something imminently in Facewatch’s favour then I should be able to head that off [Philp’s intervention], otherwise we will just have to let it take its course.”
The apparent threat came two days after a closed-door meeting on 8 March between Philp, senior Home Office officials and Facewatch.
Those emails were sent in early March. By the end of the month, the ICO had completed its investigation of Facewatch and its tech, declaring it suitable for public deployment in the interest of “detection and prevention of crime.” When confronted about the emails and any role they might have played in its decision, the ICO claimed the implicit threats had not altered the course of its investigation.
But that’s not the only correspondence involving Facewatch. Other emails showed the policing minister assuring Facewatch it had his full support and that he would continue to “push” the facial recognition agenda “forward.” As critics noted then, the policing minister sounded more like Facewatch’s PR rep than a public servant.
Now, there’s this: an actual public servant who pushed for Facewatch deployment moving on from public oversight of this tech to working directly for a (contested) subject of his former regulatory work.
Sampson, for his part, claims he’s done nothing wrong.
Sampson said that after the government proposed abolishing his post, he wrote publicly to the home secretary on 1 August, giving three months’ notice, after which he received a formal approach to join Facewatch. “I notified the Home Office and put in place specific measures to ensure the avoidance of any potential conflict of interest, however limited that potential might be. I am satisfied that no such conflict arose,” said Sampson.
What those “specific measures” were is left to the reader’s imagination. Sampson did not provide any details of the supposed three-month firewall he erected between the tech company he regulated and his remaining work for the UK government. The wall must have been pretty thin, though, seeing as it took only one day to exit the public sector and step into a high-ranking position at Facewatch.
Even if this is all on the up-and-up, as Sampson claims, the optics are still horrible. The public does not approve of this sort of thing. The only people who seem to think it’s acceptable are the company executives and public officials who hop in the revolving door as soon as it starts spinning. Facewatch wants a little regulatory capture. With Sampson, it has reeled in a keeper: a former public employee with connections and knowledge of the sector. And all it had to do was pitch him a job as soon as it knew he might be looking for one.
Apple has spent the past few years pushing the marketing message that it, alone among the big tech companies, is dedicated to your privacy. This has always been something of an exaggeration, but certainly less of Apple’s business is based around making use of your data, and the company has built some useful encryption into its services (both for data at rest and data in transit). But its actions over the past few days call all of that into question, and suggest that Apple’s commitment to privacy is much more a commitment to walled gardens and Apple’s bottom line than to the privacy of Apple’s users.
First, some background:
Back in September, we noted that the EU had designated which services were going to be “gatekeepers” under the Digital Markets Act (DMA), which would impose various obligations on them, including some level of interoperability. Apple had been fighting the EU over whether or not iMessage would qualify, and just a few days ago there were reports that the EU would not designate iMessage as a gatekeeper. But that’s not final yet. This also came a few weeks after Apple revealed that, after years of pushing back on the idea, it might finally support RCS for messaging (though an older version that doesn’t support end-to-end encryption).
Separately, for years, there has been some debate over Apple’s setup in which messaging from Android phones shows up in “green bubbles” vs. iMessage’s “blue bubbles.” The whole green vs. blue argument is kind of silly, but some people reasonably pointed out that by not allowing Android users to actually use iMessage itself, Apple was making communications less secure. That’s because messages within the iMessage ecosystem can be end-to-end encrypted, but messages between iMessage and an Android phone are not. If Apple actually opened up iMessage to other devices, messaging for iPhone users and the people they spoke to would be much better protected.
But, instead of doing that, Apple has generally made snarky “just buy an iPhone” comments when asked about its unwillingness to interoperate securely.
That’s why Apple’s actions over the last week have been so stupidly frustrating.
For the past few years, some entrepreneurs (including some of the folks who built the first great smartwatch, the Pebble), have been building Beeper, a universal messaging app that is amazing. I’ve been using it since May and have sworn by it and gotten many others to use it as well. It creates a very nice, very usable single interface for a long list of messaging apps, reminiscent of earlier such services like Trillian or Pidgin… but better. It’s built on top of Matrix, the open-source decentralized messaging platform.
Over the last few months I’ve been talking up Beeper to lots of folks as the kind of app the world needs more of. It fits with my larger vision of a world in which protocols dominate over siloed platforms. It’s also an example of the kind of adversarial interoperability that used to be standard, and which Cory Doctorow rightfully argues is a necessary component of stopping the enshittification curve of walled garden services.
Of course, as we’ve noted, the big walled gardens are generally not huge fans of things that break down their walls, and have fought back over the years, including with terrible CFAA lawsuits against similar aggregators (the key one being Facebook’s lawsuit against Power.com). And ever since I started using Beeper, I wondered if anyone (and especially Apple) might take the same approach and sue.
There have been some reasonable concerns about how Beeper handled end-to-end encrypted messaging services like Signal, WhatsApp, and iMessage. It originally did this by basically setting up a bunch of servers that it controls, which have access to your messages. In some ways, Beeper is an “approved” man-in-the-middle attack on your messages, with some safeguards, but built in such a way that those messages are no longer truly end-to-end encrypted. Beeper has taken steps to do this as securely as possible, and many users will think those tradeoffs are acceptable for the benefit. But, still, those messages have not been truly end-to-end encrypted. (For what it’s worth, Beeper open sourced this part of its code, so if you were truly concerned, you could host the bridge yourself and basically man-in-the-middle yourself to make Beeper work, but I’m guessing very few people did that.)
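Why does a hosted bridge necessarily break end-to-end encryption? Because to translate between two encrypted services, the bridge has to decrypt with one service's key and re-encrypt with the other's, so the plaintext briefly exists on a machine you don't control. This toy model makes that visible; XOR stands in for real ciphers, and the key names are invented for illustration:

```python
# Toy model of why a cloud "bridge" breaks end-to-end encryption: it must
# decrypt a message with one service's key and re-encrypt with another's,
# so the plaintext exists on the bridge. XOR is a stand-in for real crypto;
# the key names are hypothetical.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR with a repeating key (encrypts and decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

sender_service_key = b"imessage-session-key"   # made-up per-service keys
receiver_service_key = b"matrix-session-key"

def bridge_relay(ciphertext: bytes) -> bytes:
    """What a hosted bridge does: decrypt one leg, re-encrypt for the other."""
    plaintext = xor_crypt(ciphertext, sender_service_key)   # bridge sees this!
    return xor_crypt(plaintext, receiver_service_key)

msg = b"meet at noon"
in_transit = xor_crypt(msg, sender_service_key)             # leg 1: sender -> bridge
delivered = bridge_relay(in_transit)                        # plaintext on the bridge
assert xor_crypt(delivered, receiver_service_key) == msg    # leg 2 decrypts fine
```

True end-to-end encryption means no intermediate hop can do what `bridge_relay` does here, which is exactly why self-hosting the bridge (so the "middle" is your own machine) was the only way to keep that property under Beeper's original design.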
That said, from early on Beeper has made it clear that it would like to move away from this setup to true end-to-end encryption, but that requires interoperable end-to-end encrypted APIs, which (arguably) the DMA may mandate.
Or… maybe it just takes a smart hacking teen.
Over the summer, a 16-year-old named James Gill reached out to Beeper’s Eric Migicovsky and said he’d reimplemented iMessage in a project he’d released called Pypush. Basically, he reverse engineered iMessage and created a system by which you could message securely in a truly end-to-end encrypted manner with iMessage users.
If you want to understand the gory details, and why this setup is actually secure (and not just secure-like), Snazzy Labs has a great video:
Over the last few months, Beeper had upgraded the bridge setup it used for iMessage within its offering to make use of Pypush. Beeper also released a separate new app for Android, called Beeper Mini, which is just for making iMessage available for Android users in an end-to-end encrypted manner. It also allows users (unlike the original Beeper, now known as Beeper Cloud) to communicate with iMessage users just via their phone number, and not via an AppleID (Beeper Cloud requires the Apple ID). Beeper Mini costs $2/month (after a short free trial), and apparently there was demand for it.
I spoke to Migicovsky on Sunday and he told me they had over 100k downloads in the first two days it was available, and that it’s the most successful launch of a paid Android app ever. It was a clear cut example of why interoperability without permission (adversarial interoperability) is so important, and folks like Cory Doctorow rightfully cheered this on.
But all that attention also seems to have finally woken up Apple. On Friday, users of both Beeper Cloud and Beeper Mini found that they could no longer message people via iMessage. If you watch that YouTube video above by Snazzy Labs, he explains why it’s not easy for Apple to block the way Beeper Mini works. But Apple still has more resources at its disposal than just about anyone else, and it devoted some of them to doing exactly what Snazzy Labs (and Beeper) thought it was unlikely to do: blocking Beeper Mini from working.
So… with that all as background, the key thing to understand here is that Beeper Mini was making everyone’s messaging more secure. It certainly better protected Android users in making sure their messages to iPhone users were encrypted. And it similarly better protected Apple users, in making sure their messages to Android users were also encrypted. Which means that Apple’s response to this whole mess underscores the lie that Apple cares about users’ privacy.
Apple’s PR strategy is often to just stay silent, but it actually did respond to David Pierce at the Verge and put out a PR statement that is simply utter nonsense, claiming it did this to “protect” Apple users.
At Apple, we build our products and services with industry-leading privacy and security technologies designed to give users control of their data and keep personal information safe. We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage. These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users.
Almost everything here is wrong. Literally, Beeper Mini’s interoperable setup better protected the privacy of Apple’s customers than Apple itself did. Beeper Mini’s setup absolutely did not “pose significant risks to user security and privacy.” It effectively piggybacked onto Apple’s end-to-end encryption system to make sure that it was extended to messages between iOS users and Android users, better protecting both of them.
When I spoke to Eric on Sunday he pledged that if Apple truly believed that Beeper Mini somehow put Apple users at risk, he was happy to agree to have the software fully audited by an independent third party security auditor that the two organizations agreed upon to see if it created any security vulnerabilities.
For many years people like myself and Cory Doctorow have been talking up the importance of interoperability, open protocols, and an end to locked-down silos. Big companies, including Apple, have often made claims about “security” and “privacy” to argue against such openness. But this seems like a pretty clear case in which that’s obviously bullshit. The security claims here are weak, given that, from the way Beeper Mini is constructed, it seems significantly more secure than Apple’s own implementation, which leaves iOS-to-Android messages unencrypted.
And for Apple to do this just as policymakers are looking for more and more ways to ensure openness and interoperability seems like a very stupid self-own. We’ll see if the EU decides to exempt iMessage from the DMA’s “gatekeeper” classification and its interop requirements, but policymakers elsewhere are certainly noticing.
While I often think that Elizabeth Warren’s tech policy plans are bonkers, she’s correctly calling out this effort by Apple.
She’s correct. Chatting between different platforms should be easy and secure, and Apple choosing to weaken the protections of its users while claiming it’s doing the opposite is absolute nonsense, and should be called out as such.
Looks like everybody who’s anybody has got a set of hacking tools in Canada. Well, at least in terms of the federal government. Documents obtained by the CBC shed some light on the prevalence of phone-cracking tech within the government. And what that light shows isn’t all that flattering.
Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada.
Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive.
Well, that’s pretty much how it goes here in the US, too. Tech is obtained and deployed. Years later — if ever — privacy impact assessments are delivered. Act first. Get into compliance later. And don’t even worry about apologizing. Sure, governments are supposed to serve the public’s interest. But if those interests don’t align with the government’s interests, well… tough shit, I guess.
It’s not surprising law enforcement and national security agencies have access to these tools. What’s a bit more surprising is how many regulatory agencies have (or have had) possession of device-cracking tech. The full list of agencies with these tools is bound to provoke some questions that won’t be all that easy to answer.
Fisheries and Oceans Canada
Environment and Climate Change Canada
Canadian Radio-Television and Telecommunications Commission
Canada Revenue Agency
Shared Services Canada
Competition Bureau Canada
Global Affairs Canada
Transportation Safety Board of Canada
Natural Resources Canada
Correctional Service Canada
National Defence
Royal Canadian Mounted Police
Here are some details on just one of the oddities on this list. Shared Services Canada is the government’s IT wing, providing infrastructure and support for the federal government. This government agency acquired a whole suite of device crackers to crack devices.
According to the documents Light shared with Radio-Canada, Shared Services Canada purchased the equipment and software for the end users from suppliers Cellebrite, Magnet Forensics and Grayshift. (The latter two companies merged earlier this year).
No explanation was given as to why this entity should need this tech. The only explanation given for any of this was this defensive, nonsensical statement from Cellebrite, in which it defended itself from accusations no one was making.
After publication of this story, Cellebrite said in an email that its “technologies are not used to intercept communication or gather intelligence in real time. Rather, our tools are forensic in nature and are used to access private data only in accordance with legal due process or with appropriate consent to aid investigations legally after an event has occurred. The person/suspect does know our technology is obtaining data through court/judicial permission through a search warrant or consent by the individual.”
Um. OK. No one was accusing Cellebrite of engaging in illegal (or even legal) interception of communications or real-time surveillance. That the company chose to lead with that almost suggests that it does provide these services to other governments or government entities, just not the ones being discussed here. (It probably doesn’t. So far, Cellebrite has only been shown to provide phone-cracking devices that require those doing the cracking to have possession of the device being cracked. But still, it’s a weird thing to say when no one’s accusing you of doing those things.)
As for the mandatory privacy impact assessments that have yet to be created, only one agency (Fisheries and Oceans) said it planned to whip one up. The rest of the agencies that bothered to respond to this query suggested no privacy impact assessment was necessary because any deployment of the tech was backed by a court order or warrant. That’s an obviously wrong assumption, but that’s the excuse being given.
As for the legal justifications that supposedly allow these entities to skip publishing PIAs, they’re almost as ridiculous as the excuses offered for ducking their own legal obligations to the general public. Fisheries and Oceans listed “Fisheries Act” as its sole justification for deploying device-cracking tech. The Radio-Television and Telecommunications Commission claimed “Canada’s Anti-Spam Legislation” allowed it to break into devices and computers. The Environment and Climate Change agency was just as vague, citing “enforcement of different laws and regulations.”
The more plausible explanation for the possession of these devices by agencies that aren’t actually in the law enforcement/national security business is this: they’re being used to perform internal investigations.
Some of the departments say they use the tools to conduct internal investigations when employees are suspected of fraud or workplace harassment, for example. They say data is only extracted from government-issued devices in accordance with internal protocols that govern the collection and storage of personal information to ensure its protection.
And the statements provided by the Transportation Safety Board strongly suggest forensic devices are being used to search phones recovered from traffic accidents, most likely to determine whether or not the accidents were caused by distracted driving.
All in all, a pretty eye-opening set of revelations. Agencies one would never suspect had any need for these powerful tools not only have them, but are using them for reasons that are mostly left unexplained. The lack of privacy impact assessments isn’t surprising, though. It’s just disappointing. Obligations to the public are always put on the back burner, especially when agencies possess little-known tech with capabilities that have yet to be fully exposed. They want every chance to exploit these before their oversight catches on and/or public records requesters figure out what questions to ask and who to ask them to.
And despite widespread backlash (BMW had to backtrack on many of its plans), the auto industry shows absolutely no indication they’re going to back away from their plan, with numerous automakers currently working on efforts to “subscriptionize” basic functions and features. And now they’re apparently trying to pretend that this shift is necessary to finance the shift to EVs:
Alistair Weaver, editor-in-chief at Edmunds, says automakers are counting on the new revenue stream to pay for the expensive transition to electric cars.
“So if your car payment is 600 bucks a month, it’s now $675,” Weaver said.
There are several problems here. One, most of the tech they want to charge a recurring fee to use is already embedded in the car you own. And its cost is already rolled into the retail cost you’ve paid. They’re effectively disabling technology you already own, then charging you a recurring additional monthly fee just to re-enable it. It’s a Cory Doctorow nightmare dressed up as innovation.
The other problem: nobody genuinely wants this shit. Surveys have already shown that consumers widely despise paying their car maker a subscription fee for pretty much anything, whether that’s an in-car 5G hotspot or movie rentals via your car’s screen. Other studies indicate that consumers are generally opposed to making functions subscription-based unless they wind up paying less overall:
Alix Partners, a global consulting firm, found that more than 60% of consumers are willing to consider subscribing for enhanced safety and convenience features as long as they don’t feel like they are being charged for something they already paid for.
“A lot of people in the auto industry certainly use Apple as a shining light on the hill,” said Mark Wakefield, Alix Partners CEO.
“The car has to be cheaper, plus this option of subscribing,” Wakefield added.
But there’s zero chance that consumers will ever pay less. I’ve often seen carmakers like BMW try to pretend that turning heated seats and other features into recurring subscriptions lowers the vehicle retail cost, but I’ve not seen any evidence to indicate that’s actually true.
The entire point of integrating subscription systems like these is to please Wall Street’s insatiable, often myopic desire for consistent, upwardly scaling, improved quarterly returns. Once implemented, the subscription costs will inevitably be jacked steadily skyward to please Wall Street. It’s simply how these things work. The end result is higher overall costs, and annoying new subscription systems to manage.
There’s a whole bunch of additional unintended consequences from this kind of shift. Right to repair folks will be keen on breaking down these phony barriers, and automakers (already busy fighting tooth and nail against right to repair reform) will increasingly respond by doing things like making enabling tech you already own and paid for a warranty violation.
The shift toward endless subscriptions for basic functions may not annoy folks with endless piles of disposable income, but for the majority of Americans who already struggle to afford new vehicle costs, it’s hard not to see this impacting new car sales, or driving more users to older, used cars with dumber tech.