from the and-charging-you-more-is-merely-wallet-efficiency-management dept
Oh, Verizon. The company is ramping up its mobile data throttling on its LTE network. Basically, if you're a heavy user, your packets get "de-prioritized" (i.e., throttled). However, Verizon insists it's, like, totally, totally different.
Is this the same as throttling? No, this is not throttling.
How is this different than throttling? The difference between our Network Optimization practices and throttling is network intelligence. With throttling, your wireless data speed is reduced for your entire cycle, 100% of the time, no matter where you are. Network Optimization is based on the theory that all customers should have the best network possible, and if you’re not causing congestion for others, even if you are using a high amount of data, your connection speed should be as good as possible. So, if you’re in the top 5% of data users, your speed is reduced only when you are connected to a cell site experiencing high demand. Once you are no longer connected to a site experiencing high demand, your speed will return to normal. This could mean a matter of seconds or hours, depending on your location and time of day.
In other words... it's throttling. It may be temporary, and it may only impact top users, but it's still throttling. No matter what they say. As Broadband Reports notes, this bit of Orwellian speak probably doesn't work in reverse:
One wonders how Verizon would feel if customers stopped paying them, insisting they were simply "dynamically and intelligently altering payment transit."
Of course, if the FCC actually lived up to its transparency demands, perhaps it would ding Verizon for this. What are the chances of that happening?
On Friday, we reported the surprising fact that Vodafone had not just followed the latest trend in issuing a transparency report, but actually flat out admitted that many governments had direct access to its phone lines, which allowed those governments to listen in on calls without a warrant. That level of transparency is great, because all too often with the "transparency" reports we've seen from some companies, they seem more focused on hiding what's really going on. Too frequently, this is because of requirements from the government, which has (almost certainly illegal and unconstitutional) gag orders on what companies are allowed to say about requests for government information. However, it's almost certainly also because companies are now afraid of admitting the kinds of things they've allowed governments to do in secret -- and are worried about how the public would respond.
However, I'm hopeful that Vodafone's decision to just step up and admit the level of access that governments have had will lead other companies to "come clean" on the sins of their past, and how they've given governments way too much access. Rather than having it leaked by a whistleblower, having the companies step up and admit exactly what's gone on, while at the same time calling for a change in laws and policies (as Vodafone did), might actually help to restore some confidence that these companies aren't just happily handing over access, but are willing to publicize what's happening and fight back against the excesses as well.
In the US, for example, it was a remarkable struggle just to get the big telcos to finally agree to issue transparency reports -- and when those transparency reports were released, they were remarkably opaque, rather than transparent. Such a transparency report does little to build confidence in what's happening, and actually breeds greater distrust. Coming clean, saying what's really going on, and how the telcos plan to move forward, seems like the only real way to rebuild any semblance of trust.
A month or so ago, a PR person sent me a ridiculously misleading (if not outright dishonest) Forbes piece by Ev Ehrlich, former undersecretary of commerce for President Bill Clinton, arguing against net neutrality. The piece was so ridiculous that I asked the PR person whether or not Ehrlich, in his current job as a consultant/think tank person, was working with any broadband providers. The PR person said he didn't know, and I figured I'd just ignore the piece. However, having now listened to a radio debate on KCRW about net neutrality that included Ehrlich making the same basic argument in a discussion with Tim Lee from the Washington Post, Harold Feld from Public Knowledge and Alexis Ohanian of Reddit, it seems worth highlighting just how confused and, well, wrong, Ehrlich is. Here's the crux of Ehrlich's argument:
But what does “open” mean? To some advocates, it means that all the data packets that carry everything over the Internet must all ride at the same speed and cost, a policy dubbed “net neutrality.” But it’s time to question whether that policy really works for all of us.
But this isn't true. And using it as the premise of his argument destroys any credibility Ehrlich might have. No one has argued that net neutrality means that there can't be differentiated speeds and costs. We all see this every day. We can all buy internet access at different price points and speeds already. So can the companies who provide services on the internet. No one considers that to be a violation of net neutrality (except, it appears, folks like Ehrlich who want to pretend net neutrality is about something it's not).
The actual concern is not about that. It's about the broadband providers then turning around and doing a massive double dip. That is, even if you've, say, purchased an access plan with certain speeds and the internet companies have purchased their access and bandwidth at their own speeds, the broadband companies want to also be able to go to those internet services and charge them again to reach you at the level both of you have already paid for. Basically, they're arguing that when you buy internet access, you're merely buying the right to reach from your end of the network to the middle. And that's it. They're saying you haven't bought the right to reach service providers' end points. So, what they want to do is get the internet service providers to pay a second time to "reach you" rather than just the middle. And, if they don't, they may degrade or even block access.
Ehrlich misrepresents nearly all of this, as he argues for a tiered internet:
And that means that the Internet has evolved in a way that makes it practical for different types of uses to travel at different speeds, the way you can buy Sears’ “best” or Sears’ “good,” or travel in express lanes for a fee. In the modern world, “neutrality” means that a heart monitor that connects a patient to an online medical service crosses the Internet no faster than a video of a cat playing the xylophone.
Again, almost nothing written here is accurate. You can already get higher bandwidth and higher speeds and pay more for them. No one has argued against that. You can also do things to increase speeds like using CDNs to cache content and put connections closer to the endpoints. No one is arguing against that, though if you read Ehrlich, you'd think it was so. Furthermore, no one is arguing that a medical service can't connect to a faster line (though, it makes a lot of sense to use a dedicated line for such things anyway). They're just saying that the end ISP shouldn't be degrading certain service providers to make them pay more. The basic concept is one of preventing discrimination -- such that Comcast can't favor (for example) NBC content over ABC content (or YouTube content). But, NBC, ABC and YouTube all need to buy their own high levels of bandwidth already from their own service provider. What they shouldn't have to do is then pay again to your service provider just to reach you at a reasonable rate. They've already paid their own service provider, and you've already paid yours.
I'm at a loss as to whether or not Ehrlich just doesn't understand this or if he's being purposefully misleading. From the article and his statements in the interview, it almost sounds as if he's been misled and is arguing from a position of ignorance. It honestly sounds like he's been given the broadband providers' talking points and is just repeating them, without realizing he's arguing about something entirely different.
If a newspaper wasn’t allowed to take money from its advertisers, the reader would have to pay more. It’s the same with the Internet; if a provider can’t charge the big websites for a premium connection (if they want one) then the consumer has to pay instead, meaning consumers subsidize the companies sending big data packets.
Again, this statement is so inaccurate as to be laughable. No one is arguing that internet services don't have to pay for their broadband connections and speed. The claim that a newspaper is not being "allowed to take money from its advertisers" mistakenly and misleadingly suggests that internet service providers are getting something for free. They're not. The more accurate analogy is to imagine your state refusing to let FedEx drive on its roads without paying extra, and instead selling "exclusive" access to UPS. That's not about someone getting something for free: it's about the infrastructure provider blocking competition. That's the issue being debated, and it's really unclear if Ehrlich even understands this.
The one other laughable argument that Ehrlich lays out in the radio interview, but not the Forbes piece, is the ridiculous claim that most people in the US have four or more choices of broadband providers. He's again "technically" correct that the FCC has data making this claim, but the reality is quite different. First, the FCC's data is notoriously bogus. Pop your own address in at broadbandmap.gov and laugh, laugh, laugh at the results. There are generally two major problems with the data in that database, both of which completely undercut the argument made by Ehrlich and others that there's real competition for broadband, and that if you don't like what one company is doing, you can just switch to another.
Problem number one is that the speeds claimed are absolutely bogus. I live in the heart of Silicon Valley, and it claims that I can get 10 to 25 Mbps from AT&T. I know that's not true, because I had AT&T DSL here for many years, and despite me begging them repeatedly for higher speeds, they never offered me more than about 3 Mbps (and that only relatively recently). The FCC's data is notorious for massively overstating actual speeds. I recently switched off of AT&T to Sonic.net (which is freaking awesome). The FCC's database claims I can get 25 to 50 Mbps from Sonic. I wish! The best package the company actually offers me is a top download speed of 6 Mbps. And, of course, all of these speeds are "up to" anyway, meaning you rarely see them in real life.
Problem number two is much bigger. What Ehrlich is actually quoting includes wireless services, where he pretends those are competitive. They are not. Basically any actual mobile provider that offers internet access imposes incredibly low data caps on broadband access, in the range of 3 to 5 GB per month. Those networks are simply not designed to be your primary internet access, and pretending otherwise is pure folly. Plus, the speeds on those networks tend to be much lower than advertised. The reality is that almost no one actually has four choices. Most people have two: their cable company and their telco (I'm actually one of the few lucky ones who can also get Sonic.net). And, to make matters even worse, the telcos are actively trying to get out of the landline business, even to the point of pushing their customers over to the cable providers as they try to focus just on wireless. That means there may be even less competition before too long.
Now, I've made it clear repeatedly that I'm skeptical that the FCC or Congress can come up with a reasonable solution to protect basic net neutrality -- so I worry about those efforts as well. But I'm constantly amazed at the absolutely bogus arguments that are regularly trotted out by people who claim to be against net neutrality. There's a reasonable argument that legislation or FCC efforts could be a mistake that would make things worse -- but the arguments usually presented by telcos and their supporters don't even pass the basic laugh test, and that's absolutely true with Ehrlich's talking points in both the article and the radio interview. I still don't know if he's working with any broadband providers, but either way, he needs to get past their bogus talking points and argue what's actually being discussed.
Ever since the Snowden leaks began, there's been a clear dichotomy in terms of how different industries have reacted. The various big internet companies, which were named early on as participants in the PRISM program, have been quite vocal (sometimes to profane levels) that they were not willing participants in most of these programs, and are currently involved in an important lawsuit arguing that they have a First Amendment right to reveal how much info they actually share with the government. While those eventual revelations (and they almost certainly will come out, either legally or through leaks) may reveal certain companies were more complicit than others, by all indications, the various internet companies have been very willing to fight the government over this.
On the other side, you've got the telcos -- mainly AT&T and Verizon (but Sprint and some others as well). What do you have there? Total, deafening silence. Seriously. They've said nothing about any of this, despite increasing evidence that they not only are happy and willing participants in the NSA's efforts to spy on everyone, but that they've volunteered to hand over more data than required. Furthermore, it's quite clear now that they've basically let the NSA put taps directly on the internet backbone, by which they can record just about anything, while the internet companies have (from all appearances to date) limited information sharing only to a specific segment of information following a specific court order (which probably doesn't have enough oversight, but that's a different issue).
The contrast here is really striking. From all revealed info to date, the telcos' info sharing with the government is much more massive and has significantly more privacy implications than anything done by the internet companies. And yet it's the internet companies that are both speaking out against this and challenging some of it legally (though some of us still think they should go further -- but, thankfully, we've been hearing significant and credible buzz that much more is on the way). The telcos? Absolutely nothing. Well, except for a Verizon VP mocking the internet companies for pushing for transparency.
When the internet companies reached out to the telcos about co-signing their letter to the government pushing for more surveillance, the telcos refused. Over the last few months, we've seen pretty much every major internet company release a transparency report, including showing at least some data on government requests for info. The telcos have never released such a thing.
Kevin Bankston, who was instrumental in helping to coordinate that original letter, is now calling out the telcos on their shameful silence. It seems that the telcos are hoping this whole thing blows over, in part because a true transparency report from them would likely reveal just how complicit the telcos were in handing over your private info to governments, often with little to no oversight. In fact, Bankston notes that AT&T has been quietly lobbying against legislative efforts to increase transparency.
What are the telcos afraid of? The answer is pretty obvious.
Customer trust is critical for any business, but especially for Internet and telecommunications companies that gather personal data concerning and affecting the lives of hundreds of millions of people in the U.S. and around the world. In an effort to help rebuild consumer trust, major Internet companies including Google, Microsoft, Twitter, LinkedIn, Facebook and Yahoo! have issued transparency reports with information on government requests; AT&T and Verizon have not. Companies, including Google and Microsoft, have also filed in court seeking authorization to disclose further information to the public; AT&T and Verizon have not.
Privacy is fundamental to democracy and free expression -- and transparency is essential if individuals and businesses are to make informed decisions regarding their personal information. AT&T and Verizon must comply with legal obligations imposed by the Patriot Act and other laws. But these companies have no good excuse for staying silent and failing to provide information about how often customer information is being shared with the government. To the contrary, staying silent as other industry leaders release transparency reports and take steps to reinforce a genuine commitment to privacy, makes it appear that these companies have something to hide and presents serious financial and reputational risks. Consumers prefer companies whose information practices they know and can trust. It is already estimated that the risks of surveillance and lack of trust could cost the U.S cloud computing industry $21 billion to $35 billion in foreign business over the next three years. The Chief Privacy Officers at AT&T and Verizon have praised transparency as a goal, but it is time to back up those statements with action by releasing transparency reports.
Of course, thanks to other bad government policies, AT&T and Verizon have market dominant positions, such that there's often no actual customer choice -- which is part of why they can get away with this silence. Hopefully shareholders of both companies will stand up and make those companies reveal some basic transparency, even if it will embarrass the two companies. And, if they're really embarrassed by what the data will show, perhaps that's a sign that they shouldn't be doing it in the first place.
The latest reporting on the Snowden docs by The Guardian shows that the UK's surveillance operation GCHQ was apparently well aware that its activities were almost certainly open to a "legal challenge" and therefore it was committed to keeping them secret to avoid such a challenge. Note that this is quite different than the official excuse always given about being worried about public disclosure putting national security at risk by revealing "sources and methods." Instead, here it seems clear that the secrecy was for the very reason that many of us suspected: they were pretty sure they were breaking the law, or at least coming so close that it was something the courts would eventually have to decide... but only if the info got out. And, it wasn't just them. They realized that the telcos' willingness to pass on info likely opened up other legal challenges as well.
GCHQ lobbied furiously to keep secret the fact that telecoms firms had gone "well beyond" what they were legally required to do to help intelligence agencies' mass interception of communications, both in the UK and overseas.
GCHQ feared a legal challenge under the right to privacy in the Human Rights Act if evidence of its surveillance methods became admissible in court.
GCHQ assisted the Home Office in lining up sympathetic people to help with "press handling", including the Liberal Democrat peer and former intelligence services commissioner Lord Carlile, who this week criticised the Guardian for its coverage of mass surveillance by GCHQ and America's National Security Agency.
Amazingly, they seem to admit that the fear of a public debate/legal challenge was the key reason they fought (and won) a battle to keep such evidence out of trials. That is, even though they could have gone with the old favorite of "national security," instead, they finally admitted reality:
Our main concern is that references to agency practices (ie the scale of interception and deletion) could lead to damaging public debate which might lead to legal challenges against the current regime.
That other point mentioned above, about telcos going "above and beyond" in voluntarily handing over access is also pretty big, considering that the telcos in question had tried the "we're just complying with the law" excuse in the past. But, evidently, they were lying.
The revelations of voluntary co-operation with some telecoms companies appear to contrast markedly with statements made by large telecoms firms in the wake of the first Tempora stories. They stressed that they were simply complying with the law of the countries in which they operated.
In reality, numerous telecoms companies were doing much more than that, as disclosed in a secret document prepared in 2009 by a joint working group of GCHQ, MI5 and MI6.
Later in the report, a GCHQ memo notes that telcos "feared damage to their brands" if the extent of their over-cooperation was revealed. You know how they could have dealt with that? By not going so far above and beyond the law. But, once again, it seems like the telcos have been incredibly willing to screw over their own customers' privacy at every opportunity.
One of the ironies of European outrage over the global surveillance conducted by the NSA and GCHQ is that in the EU, communications metadata must be kept by law anyway, although not many people there realize it. That's a consequence of the Data Retention Directive, passed in 2006, which:
requires operators to retain certain categories of data (for identifying users and details of phone calls made and emails sent, excluding the content of those communications) for a period between six months and two years and to make them available, on request, to law enforcement authorities for the purposes of investigating, detecting and prosecuting serious crime and terrorism.
Notice the standard invocation of terrorism and serious crime as a justification for this kind of intrusive data gathering -- the implication being that such highly-personal information would only ever be used for the most heinous of crimes. In particular, it goes without saying that there is no question of it being accessed for anything more trivial -- like this, say:
Some Dutch telecommunications and Internet providers have exploited European Union laws mandating the retention of communications data to fight crime, using the retained data for unauthorised marketing purposes.
Of course, the news will come as no surprise to the many people who warned that exactly this kind of thing would happen if such stores of high-value data were created. But it does at least act as a useful reminder that whatever the protestations that privacy-destroying databases will only ever be used for the most serious crimes, there is always the risk of function creep or -- as in the Netherlands -- outright abuse. The only effective way to stop it is not to retain such personal information in the first place.
The Washington Post is out with the latest revelations from the Snowden leaks and it shows that the NSA relies on foreign telcos and "allied" intelligence agencies to scoop up data on email contact lists and instant messaging buddy lists to help build its giant database of connections. Remember a few weeks ago how it was reported that the NSA was basically building a secret shadow social network? It seems like this might be one of the ways it's able to tell who your friends are.
There are a variety of important points here. First off, this information is not coming directly from the tech companies (which, again, suggests that earlier claims that the NSA had direct access to all their servers were mistaken). Rather, they're picking this information up off the backbone connections in foreign countries. It also explains why they get so much data from Yahoo -- because, for no good reason at all, Yahoo didn't force encryption on its webmail users until... the news of this started to come out.
And here's the big problem: because all of this information is collected overseas, rather than at home, it's not subject to "oversight" (and I use that term loosely) by the FISA court or Congress. Those two only cover oversight for domestic intelligence. The fact that the NSA can scoop up all this data overseas is just a bonus.
Also, while the program is ostensibly targeted at "metadata" concerning connections between individuals, the fact that it collects "inboxes" and "buddy lists" appears to reveal content at times. With buddy lists, it can often collect content that was sent while one participant was offline (where a server holds the message until the recipient is back online), and with inboxes, they often display the beginning of messages, which the NSA collects.
Separately, because this is allowing them to gather so much data, it apparently overwhelmed the NSA's datacenters. At times, this is because they get inundated with... spam. For example, one of the revealed documents shows that a target they had been following in Iran had his Yahoo email address hacked and used for spamming, and that presented a problem:
In fall 2011, according to an NSA presentation, the Yahoo account of an Iranian target was “hacked by an unknown actor,” who used it to send spam. The Iranian had “a number of Yahoo groups in his/her contact list, some with many hundreds or thousands of members.”
The cascading effects of repeated spam messages, compounded by the automatic addition of the Iranian’s contacts to other people’s address books, led to a massive spike in the volume of traffic collected by the Australian intelligence service on the NSA’s behalf.
After nine days of data-bombing, the Iranian’s contact book and contact books for several people within it were “emergency detasked.”
Because of this mess, the NSA has tried to stop collecting certain types of information, doing "emergency detasks" of certain collections. This, yet again, shows how ridiculous Keith Alexander's "collect it all" mantra is. When you collect it all, you get inundated with a ton of bogus data, and the information presented here seems to support that.
Over the past several months, the Obama Administration has defended the government's far-reaching data collection efforts, arguing that only criminals and terrorists need worry. The nation's leading internet and telecommunications companies have said they are committed to the sanctity of their customers' privacy.
I have some very personal reasons to doubt those assurances.
In 2004, my telephone records, as well as those of another New York Times reporter and two reporters from the Washington Post, were obtained by federal agents assigned to investigate a leak of classified information. What happened next says a lot about what happens when the government's privacy protections collide with the day-to-day realities of global surveillance.
The story begins in 2003 when I wrote an article about the killing of two American teachers in West Papua, a remote region of Indonesia where Freeport-McMoRan operates one of the world's largest copper and gold mines. The Indonesian government and Freeport blamed the killings on a separatist group, the Free Papua Movement, which had been fighting a low-level guerrilla war for several decades.
I opened my article with this sentence: "Bush Administration officials have determined that Indonesian soldiers carried out a deadly ambush that killed two American teachers."
I also reported that two FBI agents had travelled to Indonesia to assist in the inquiry and quoted a "senior administration official" as saying there "was no question there was a military involvement."
The story prompted a leak investigation. The FBI sought to obtain my phone records and those of Jane Perlez, the Times bureau chief in Indonesia and my wife. They also went after the records of the Washington Post reporters in Indonesia who had published the first reports about the Indonesian government's involvement in the killings.
As part of its investigation, the FBI asked for help from what is described in a subsequent government report as an "on-site communications service" provider. The report, by the Department of Justice's Inspector General, offers only the vaguest description of this key player, calling it "Company A."
"We do not identify the specific companies because the identities of the specific providers who were under contract with the FBI for specific services are classified,'' the report explained.
Whoever they were, Company A had some impressive powers. Through some means -- the report is silent on how -- Company A obtained records of calls made on Indonesian cell phones and landlines by the Times and Post reporters. The records showed whom we called, when and for how long -- what has now become famous as "metadata."
Under DOJ rules, the FBI investigators were required to ask the Attorney General to approve a grand jury subpoena before requesting records of reporters' calls. But that's not what happened.
Instead, the bureau sent Company A what is known as an "exigent letter" asking for the metadata.
A heavily redacted version of the DOJ report, released in 2010, noted that exigent letters are supposed to be used in extreme circumstances where there is no time to ask a judge to issue a subpoena. The report found nothing "exigent" in an investigation of several three-year-old newspaper stories.
The need for an exigent letter suggests two things about Company A. First, that it was an American firm subject to American laws. Second, that it had come to possess my records through lawful means and needed legal justification to turn them over to the government.
The report disclosed that the agents' use of the exigent letter was choreographed by the company and the bureau. It said the FBI agent drafting the letter received "guidance" from "a Company A analyst." According to the report, lawyers for Company A and the bureau worked together to develop the approach.
Not surprisingly, "Company A" quickly responded to the letter it helped write. In fact, it was particularly generous, supplying the FBI with records covering a 22-month period, even though the bureau's investigation was limited to a seven-month period. Altogether, "Company A" gave the FBI metadata on 1,627 calls by me and the other reporters.
Only three calls were within the seven-month window of phone conversations investigators had decided to review.
It doesn't end there.
The DOJ report asserts that "the FBI made no investigative use of the reporters' telephone records." But I don't believe that is accurate.
In 2007, I heard rumblings that the leak investigation was focusing on a diplomat named Steve Mull, who was the deputy chief of mission in Indonesia at the time of the killings. I had known Mull when he was a political officer in Poland and I was posted there in the early 1990s. He is a person of great integrity and a dedicated public servant.
The DOJ asked to interview me. Of course, I would not agree to help law enforcement officials identify my anonymous sources. But I was troubled because I felt an honorable public servant had been forced to spend money on lawyers to fend off a charge that was untrue. After considerable internal debate, I decided to talk to the DOJ for the limited purpose of clearing Mull.
It was not a decision I could make unilaterally. The Times also had a stake in this. If I allowed myself to be interviewed, how could the Times say no the next time the government wanted to question a Times reporter about a leak?
The Times lawyer handling this was George Freeman, a journalist's lawyer, a man Times reporters liked having in their corner. George and the DOJ lawyers began to negotiate over my interview. Eventually, we agreed that I would speak on two conditions: one, that they could not ask me for the name of my source; and two, if they asked me if it was "X," and I said no, they could not then start going through other names.
Freeman and I sat across a table from two DOJ lawyers. I'm a lawyer, and prided myself on being able to answer their questions with ease, never having to turn to Freeman for advice.
Until, that is, one of the lawyers picked up a sheaf of papers just off to his right and began asking me about phone calls I had made to Mull. One call lasted 19 minutes, the DOJ lawyer said, giving me the date and time. I asked for a break to consult with Freeman.
We came back, and I answered questions about the phone calls. I said that I couldn't remember what these calls were about (it had been more than four years earlier), but that Mull had not given me any information about the killings. Per our agreement, the DOJ lawyers did not ask further questions about my sources, and the interview ended.
I didn't know how the DOJ had gotten my phone records, but assumed the Indonesian government had provided them. Then, about a year later, I received a letter from the FBI's general counsel, Valerie Caproni, who wrote that my phone records had been taken from "certain databases" under the authority of an "exigent letter" (a term I had never heard).
Caproni sent similar letters to Perlez, to the Washington Post reporters, and to the executive editors of the Post and the Times, Leonard Downie and Bill Keller, respectively. In addition, FBI Director Robert Mueller called Downie and Keller, according to the report.
Caproni wrote that the records had not been seen by anyone other than the agent requesting them and that they had been expunged from all databases.
I'm uneasy because the DOJ report makes clear that the FBI is still concealing some aspect of this incident. After describing Caproni's letters, the report says: "However, the FBI did not disclose to the reporters or their editors that [BLACKED OUT]." The thick black lines obliterate what appear to be several sentences.
If you were to ask senior intelligence officials whether I should wonder about those deletions, they'd probably say no.
I'm not so sure.
The government learned extensive details about my personal and professional life from those records. Most of the calls were about other stories I was writing. Some were undoubtedly to arrange my golf game with the Australian ambassador. Is he now under suspicion? The report says the data has been destroyed and that only two analysts ever looked at it.
But who is this "Company A" that willingly cooperated with the government? Why was it working hand in glove with the FBI? And what did the FBI director not tell the editors of the Times and the Washington Post when he called them to acknowledge that the government had improperly obtained reporters' records?
We already covered the latest Guardian report on the NSA and GCHQ's attempts to compromise Tor. While those attempts have failed to break Tor directly, the agencies have had more success exploiting vulnerabilities in Firefox to target certain Tor users. Bruce Schneier has a more focused article on how those attacks work, and as part of it he details how the NSA and GCHQ are able to run man-in-the-middle attacks on giant websites, something that is really only possible because the major telcos let the NSA put servers directly on the backbone. As we noted last month, buried in one of the earlier Snowden leaks was the news that the GCHQ and NSA were likely running man-in-the-middle attacks on Google. The latest leaks show why those work. As Schneier explains:
To trick targets into visiting a FoxAcid server, the NSA relies on its secret partnerships with US telecoms companies. As part of the Turmoil system, the NSA places secret servers, codenamed Quantum, at key places on the internet backbone. This placement ensures that they can react faster than other websites can. By exploiting that speed difference, these servers can impersonate a visited website to the target before the legitimate website can respond, thereby tricking the target's browser to visit a Foxacid server.
In the academic literature, these are called "man-in-the-middle" attacks, and have long been known to the commercial and academic security communities. More specifically, they are examples of "man-on-the-side" attacks.
They are hard for any organization other than the NSA to reliably execute, because they require the attacker to have a privileged position on the internet backbone, and exploit a "race condition" between the NSA server and the legitimate website. This top-secret NSA diagram, made public last month, shows a Quantum server impersonating Google in this type of attack.
The NSA uses these fast Quantum servers to execute a packet injection attack, which surreptitiously redirects the target to the FoxAcid server. An article in the German magazine Spiegel, based on additional top secret Snowden documents, mentions an NSA-developed attack technology with the name of QuantumInsert that performs redirection attacks. Another top-secret Tor presentation provided by Snowden mentions QuantumCookie to force cookies onto target browsers, and another Quantum program to "degrade/deny/disrupt Tor access".
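Schneier's description boils down to a race: a responder sitting on the backbone can get its answer to the client before the legitimate site can, and the client's network stack simply accepts whichever valid response arrives first. Here is a minimal toy simulation of that race condition; the responder names and latencies are invented for illustration, and real packet injection obviously involves forging TCP responses, not queues:

```python
import threading
import queue
import time

def respond(q, name, latency_s):
    """Simulate a server answering a request after `latency_s` seconds."""
    time.sleep(latency_s)
    q.put(name)

def race(latencies):
    """Return whichever responder's answer arrives first.

    Mirrors how a client's TCP stack accepts the first valid response
    and discards later duplicates, which is what makes the attack work.
    """
    q = queue.Queue()
    threads = [threading.Thread(target=respond, args=(q, name, lat))
               for name, lat in latencies.items()]
    for t in threads:
        t.start()
    winner = q.get()  # the first "packet" to arrive wins the race
    for t in threads:
        t.join()
    return winner

# A backbone-adjacent responder answers faster than the real site,
# so the client sees the forged response first.
print(race({"quantum_server": 0.01, "legitimate_site": 0.05}))
```

The point of the sketch is that nothing is "broken" cryptographically: the attacker simply wins on latency, which is why privileged placement on the backbone matters so much.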
Schneier also notes that this is basically the same technique the Chinese have used for their Great Firewall. In other words, the complicit nature of the telcos in basically giving the NSA and GCHQ incredibly privileged access to the backbone is part of what allows them to conduct those kinds of man-in-the-middle attacks. It still amazes me that there isn't more outrage over the role of the major telcos in all of this.
The other interesting thing about the FoxAcid servers is that it's basically a system that gives the NSA a rotating menu of ways to exploit a visitor who gets hooked on one of their servers. It also notes that the NSA is pretty careful about how it uses various exploits, such that "low-value exploits" are used against more technically sophisticated targets, recognizing that they're more likely to be discovered, and thus burned. They save the "most valuable exploits" for less technically savvy targets, and also the most important targets. This is hardly surprising, but interesting to see the level with which they plan these things out.
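That triage logic (burn cheap exploits on targets likely to notice, reserve the most valuable ones for priority targets and for targets unlikely to detect them) can be sketched as a simple policy function. The tier names and inputs here are my own invention for illustration, not terms from the leaked documents:

```python
def choose_exploit(technically_savvy: bool, high_priority: bool) -> str:
    """Toy model of the reported FoxAcid exploit triage.

    A sophisticated, low-priority target only merits a low-value
    exploit, since discovery there would "burn" something cheap; the
    most valuable exploits go to high-priority targets and to targets
    unlikely to detect them.
    """
    if high_priority or not technically_savvy:
        return "high-value exploit"
    return "low-value exploit"

# A savvy but low-priority target gets something expendable.
print(choose_exploit(technically_savvy=True, high_priority=False))
# -> low-value exploit
```

The design choice being modeled is risk management, not capability: every use of an exploit risks its discovery, so the decision is driven by the expected cost of losing it.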
It's widely known that the NSA has taps connected to the various telco networks, thanks in large part to AT&T employee Mark Klein, who blew the whistle on AT&T's secret NSA room in San Francisco. What was unclear was exactly what kind of access the NSA had. Groups like the EFF and CDT have been asking the administration to finally come clean, in the name of transparency, about whether it is tapping backbone networks to snarf up internet communications like email. So far, the administration has declined to elaborate. Back in August, when the FISA court declassified its ruling about NSA violations, the third footnote, though heavily redacted, did briefly discuss this "upstream" capability:
In short, "upstream" capabilities are tapping the backbone itself, via the willing assistance of the telcos (who still have remained mostly silent on all of this) as opposed to "downstream" collection, which requires going to the internet companies directly. The internet companies have been much more resistant to government attempts to get access to their accounts. And thus, it's a big question as to what exactly the NSA can collect via its taps on the internet backbone, and the NSA and its defenders have tried to remain silent on this point, as you can see from the redactions above.
However, as Kevin Bankston notes, during Thursday's Senate Intelligence Committee hearing, Dianne Feinstein more or less admitted that they get emails via "upstream" collection methods. As you can see in the following clip, Feinstein interrupts a discussion to read a prepared "rebuttal" to a point being made, and in doing so clearly says that the NSA can get emails via upstream collections:
Upstream collection... occurs when NSA obtains internet communications, such as e-mails, from certain US companies that operate the Internet background, i.e., the companies that own and operate the domestic telecommunications lines over which internet traffic flows.
She clearly means "backbone" rather than "background." She's discussing this in an attempt to defend the NSA's "accidental" collection of information it shouldn't have had, but that's beside the point. What matters is that she has now admitted what most people suspected, and what the administration has avoided admitting in the many years since Mark Klein's revelations.
So, despite years of trying to deny that the NSA can collect email and other communications directly from the backbone (rather than from the internet companies themselves), Feinstein appears to have finally let the cat out of the bag, perhaps without realizing it.