Among the questions is how a contract employee at a distant NSA satellite office was able to obtain a copy of an order from the Foreign Intelligence Surveillance Court, a highly classified document that would presumably be sealed from most employees and of little use to someone in his position.
A former senior NSA official said that the number of agency officials with access to such court orders is “maybe 30 or maybe 40. Not large numbers.”
Mr. Snowden has now turned over archives of “thousands” of documents, according to Mr. Greenwald, and “dozens” are newsworthy.
In other words, more leaks are to come. But with people already scrambling to figure out how one fairly junior IT contractor could have access to such things, it makes you wonder just how screwed up the NSA is if information can leak out this way -- and, conversely, why we should trust them with our data.
Edward Snowden sounds like a thoughtful, patriotic young man, and I’m sure glad he blew the whistle on the NSA’s surveillance programs. But the more I learned about him this afternoon, the angrier I became. Wait, him? The NSA trusted its most sensitive documents to this guy? And now, after it has just proven itself so inept at handling its own information, the agency still wants us to believe that it can securely hold on to all of our data? Oy vey!
Or, as Farhad Manjoo notes later in that same article:
The scandal isn’t just that the government is spying on us. It’s also that it’s giving guys like Snowden keys to the spying program. It suggests the worst combination of overreach and amateurishness, of power leveraged by incompetence. The Keystone Cops are listening to us all.
And, on top of that, people are pointing out that if Snowden could walk out with that much supposedly secret information, you have to wonder who else has done so as well, perhaps with much more nefarious intent, such as selling the information to a foreign power or group. Conor Friedersdorf points out that having the NSA collect so much data makes it a key target for the Chinese:
Even assuming the U.S. government never abuses this data -- and there is no reason to assume that! -- why isn't the burgeoning trove more dangerous to keep than it is to foreswear? Can anyone persuasively argue that it's virtually impossible for a foreign power to ever gain access to it? Can anyone persuasively argue that if they did gain access to years of private phone records, email, private files, and other data on millions of Americans, it wouldn't be hugely damaging?
Think of all the things the ruling class never thought we'd find out about the War on Terrorism that we now know. Why isn't the creation of this data trove just the latest shortsighted action by national security officials who constantly overestimate how much of what they do can be kept secret? Suggested rule of thumb: Don't create a dataset of choice that you can't bear to have breached.
And, yet, that's exactly what we've done. If Snowden had access, then it seems only reasonable to assume that he wasn't the only one. Meaning that plenty of others also had access to the same information, and there's a decent chance that it's already leaked to others. The NSA is supposed to be the best of the best, but they don't even seem to know how to keep their secrets secret.
As Techdirt readers well know, one of the problems with measures brought in for "exceptional situations" -- be it fighting terrorism or tackling child pornography -- is that once in place, they have a habit of being applied more generally. A case in point is the blocking of Newzbin2 by BT in the UK. That was possible because BT had already installed its "Cleanfeed" system to block child pornography: once in place, this "specialized" censorship system could easily be deployed to block quite different sites.
The database was originally established in 2000 so EU nations could check whether an asylum seeker had previously applied for asylum in another European country or was receiving social benefits from another EU country. According to EU law, asylum seekers can apply for asylum only in the EU nation where they first entered the bloc.
But the politicians have noticed that this biometric data could be handy in quite different circumstances:
such a rich source of existing data has recently sparked the interest of other parties. If the EU Commission's requests are followed, Eurodac fingerprint data will be accessible to police officers during investigations. The commission's proposal envisions national law enforcement agencies and Europe's supranational criminal police commission, Interpol, being able to access the database.
Of course, allowing the police to check people's fingerprints in this way would have serious implications for privacy. Indeed, European Data Protection Supervisor Peter Hustinx has already weighed in on the subject:
"Just because data is being collected doesn't mean that it should be used for another purpose, especially since that can have a hugely negative effect on the lives of individuals," said Peter Hustinx, head of the European Data Protection Supervisor.
And that really is the nub of the issue: people who agree to provide highly personal data for one purpose may then find it being used for another, without ever being asked. And if the European Commission gets its way, even more data will be shared:
"The Commission would generally like to widen its collection of data and make available any information regarding criminal prosecutions," [Green Party MEP] Keller said. "One example is the so-called 'Smart Borders' package, which actually wasn't proposed this round but has been in the pipeline for a long time. The idea there is that in the future anyone from non-EU countries that would like to travel into the EU will be recorded electronically, which also includes fingerprints."
As more and more biometric data is collected around the world, this kind of function creep is likely to become increasingly common.
The arguments are still ongoing over whether or not Congress will reauthorize the FISA Amendments Act, which has enabled law enforcement -- via a secret interpretation of the law that even many members of Congress are not told about -- to collect huge chunks of private info on Americans with no oversight or warrants. However, in a move that should raise significant concerns about allowing such widespread trawling of private info, a Wall Street Journal report by Julia Angwin revealed that the Obama administration quietly changed the rules back in March concerning the National Counterterrorism Center, allowing it to retain a giant database of information on innocent Americans. This was done over the objections of many in the administration, including the "Chief Privacy Officer" for the Department of Homeland Security.
The National Counterterrorism Center (NCTC) is where all intelligence agencies were told to send their leads, after it was determined that there was an intelligence failure that allowed the so-called underwear bomber, Umar Farouk Abdulmutallab, to get on a plane a few years ago. Then came some interagency squabbling over information:
Unfortunately, NCTC didn't have the resources to "exhaustively" pursue the torrent of leads it began receiving. So it fell behind. Late last year, after Homeland Security had given NCTC a database on condition that it purge the names of all innocent persons within 30 days, things came to a head. Homeland Security eventually revoked NCTC's access to the data and NCTC decided it needed to operate under different rules. In particular, it wanted unlimited access to all government agency information for as long as it needed it, including both suspects and non-suspects alike. In March, after discussion at the White House, Eric Holder granted their request.
In a separate blog post, Angwin breaks down the specific rule changes from the 2008 document to the 2012 document. They detail just how a system that was initially limited to protect privacy has now turned into the exact opposite of that. Among the rule changes: it used to require a focus on terrorism information. No longer. And then there are the following two changes:
Dropping the requirement to remove innocent U.S. person information. The 2008 guidelines required NCTC to remove US person information that is “not reasonably believed to be terrorism information.” In 2012, the guidelines were updated to allow NCTC to keep U.S. person information for “up to five years” and to “continually assess” the information to determine whether it constitutes terrorism information.
Adding the ability to do “pattern-based queries” of entire datasets. The 2008 guidelines explicitly prohibit analysts from conducting “pattern-based” queries that are not based on known terrorism datapoints. The 2012 guidelines explicitly allow pattern-based queries that are not based on known terrorism data points. Pattern-based queries are still prohibited for databases that NCTC has not copied in its entirety.
Or, as the original article noted:
Now, NCTC can copy entire government databases—flight records, casino-employee lists, the names of Americans hosting foreign-exchange students and many others. The agency has new authority to keep data about innocent U.S. citizens for up to five years, and to analyze it for suspicious patterns of behavior. Previously, both were prohibited.
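The difference between the two query styles is worth making concrete. Here's a toy sketch (with entirely hypothetical records and field names, not NCTC's actual schema): a subject-based query starts from a known datapoint about a specific person, while a pattern-based query scans everyone in the dataset looking for "suspicious" behavior, with no prior suspicion attached to any individual.

```python
# Toy records (hypothetical fields, purely illustrative).
records = [
    {"name": "Alice", "flights": 2, "cash_purchases": 0, "watchlisted": False},
    {"name": "Bob",   "flights": 9, "cash_purchases": 7, "watchlisted": False},
    {"name": "Carol", "flights": 1, "cash_purchases": 1, "watchlisted": True},
]

# Subject-based query: start from a known terrorism datapoint
# (here, a watchlisted person) and pull that person's records.
subject_hits = [r["name"] for r in records if r["watchlisted"]]

# Pattern-based query: scan *everyone* for a behavioral pattern,
# sweeping in people never suspected of anything.
pattern_hits = [r["name"] for r in records
                if r["flights"] > 5 and r["cash_purchases"] > 5]

print(subject_hits)  # ['Carol']
print(pattern_hits)  # ['Bob']
```

The 2012 rule change is, in effect, permission to run the second kind of loop over copies of entire government databases.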
This all comes out almost exactly a decade after it was revealed that the feds were planning a "Total Information Awareness" program to troll through all of their databases to try to hunt down evidence of terrorism at work. That resulted in widespread public backlash and an eventual backing down from the program. This time around, since the whole thing was debated outside of the public eye (and they didn't give it an Orwellian name -- or an equally creepy logo), apparently it's just fine.
But, if you actually believe in things like basic civil liberties, the 4th Amendment and a right to privacy, this is all downright scary. It's the type of stuff that we're told over and over again the US government isn't supposed to do. And then it goes and does it anyway.
from the oh-wow...-a-Google-with-even-LESS-privacy dept
Back in August, Mike wrote about some questionable sharing of license plate information between the US Border Patrol and various insurance companies. While the stated aim of tracking stolen vehicles might seem to make this sharing justified, the fact that this is going on with no oversight or accountability is cause for alarm.
Dale Stockton, Program Manager of the “Road Runner” project at the Automated Regional Justice Information System in San Diego spoke on a panel on license readers at the 2010 conference and explained to police and prosecutors in attendance how best to share license plate data. Mind you, he was talking about the location information of people never accused of any crime.
We're probably not going to have any centralized national giant bucket of license plate reader data. It probably wouldn't stand the court of public opinion, and it's probably something that, given where we are in the rollout cycle, wouldn't easily be done, but we can develop regional sharing capability...
But the "court of public opinion" can be routed around, according to Stockton. Despite frankly stating that the public would find a "Google of license plates" odious, he intends to do just that, through a series of back doors.
And so doing, you get those set up and then begin to share between those regions, and as you begin to look beyond your region, utilize a trusted broker like Nlets...
Every law enforcement agency has a connection to Nlets. Nlets would serve not as a storage unit but as a pointer system, something akin to a Google, so that when you check a plate, Nlets would point you in the direction of where that plate can be found, and the result of that would be a query in one state by an investigator could give an indication of plates of interest in other states, and then that information can be pulled back.
If Stockton has his way, a decentralized search system for license plates, routed through a third-party's software, will perform exactly the way he wants it to. Somehow he feels that a distributed system is OK while a centralized system isn't. Or rather, he feels that both systems are OK, but the public will only put up with the illusion that law enforcement isn't running a Google-esque system of harvested plate data.
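What Stockton describes -- a broker that stores no plate data itself, only pointers to which regional system holds it -- is easy to sketch. The following is a simplified illustration with made-up data and names, not Nlets' actual protocol:

```python
# Hypothetical regional LPR databases, each held by a different agency.
regional_dbs = {
    "CA": {"7ABC123": ["2012-05-01 I-5 northbound"]},
    "NV": {"7ABC123": ["2012-05-03 US-95 Las Vegas"]},
}

# The broker's index: it stores no sightings, only which regions
# have records for a given plate -- "a pointer system".
pointer_index = {}
for region, db in regional_dbs.items():
    for plate in db:
        pointer_index.setdefault(plate, []).append(region)

def query_plate(plate):
    """Follow the pointers and pull each region's sightings back."""
    results = {}
    for region in pointer_index.get(plate, []):
        results[region] = regional_dbs[region][plate]
    return results

print(query_plate("7ABC123"))
```

Note that from the querying officer's perspective, the result is indistinguishable from a single national database -- which is precisely the point Stockton is making, and precisely the problem.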
Is the fear of a system like this overblown? Law enforcement officials at the conference ran through anecdote after anecdote about how plate reading systems had aided them in investigating various crimes. But should law enforcement have access to nationwide plate data, much of which pertains to citizens who have never been accused of a crime, much less committed one? Stockton tries to justify the LPR system by comparing it to officers running plates in person.
One of the questions about license plate readers is this kind of hocus pocus, "you are invading my privacy; it's super intrusive." And I come from the opposite end of the spectrum. I truly believe that this technology is only doing what license plate readers have already — I'm sorry — what officers have already been doing in the field for many, many years.
The courts have indicated to us that officers can look at a plate and run it. So let's just go down quickly here through the two sides, an officer doing it and the license plate reader doing it. Officers can run plates anytime they want. The courts have held that's why we put the state plate on there, so an officer can check and make sure it's current and it's not wanted. Well, the license plate reader is doing the exact same thing. It's looking at it; it's checking that plate to see if it's wanted. The officer can pick and choose among the vehicles that he or she looks at.
The biggest difference here is the "always on" aspect and the fact that everyone is tracked. "Reasonable suspicion" and the like are taken completely off the table and replaced with an indiscriminate system that harvests data. It's a handy way to peel back another layer of privacy, as Kade Crockford of the ACLU states:
We've been making a lot of noise about location tracking of late. License plate readers rank high among the technologies that are threatening our privacy with respect to our travel patterns. Where we go says a lot about who we are, and law enforcement agencies nationwide are increasingly obtaining detailed information about where we go without any judicial oversight or reason to believe we are up to no good. Stockton says we have nothing to worry about with respect to license plate reader data and privacy, that that's all "hocus pocus." But he's wrong.
We must ensure license plate readers do not become license plate trackers.
Data from license plate readers in Minnesota was obtained by a St. Paul car dealer using open-records laws, and used to repossess at least one car, according to a recent article in the Minneapolis Star Tribune. The article included this amusing tidbit:
"When the Star Tribune published data tracking Mayor R.T. Rybak's city-owned car over the past year, the mayor asked police Chief Tim Dolan to make a recommendation for a new policy about data retention."
So, the question that needs to be asked of every politician and law enforcement member who feels this system will only be used for catching "bad guys" is whether or not they'd mind having their location tracked via license plate readers. Stockton's pushing for something he knows the public won't stand for and is using successful investigations as the ends to justify the privacy-violating means.
A recent scandal in the UK concerned the country's worst sporting disaster, when 96 football/soccer fans were crushed to death at a stadium in Hillsborough in 1989. Prime Minister David Cameron issued an official apology to the families of the victims for the fact that the safety measures at the ground were known to be inadequate, and that police and emergency services had tried to deflect the blame for the disaster onto fans.
One way the police did this was by falsifying statements made to them after the disaster, to remove negative comments about how they had handled the situation. But another way involved trying to suggest the deaths were caused in part by the drunken behavior of fans. One attempt to bolster this view apparently involved the use of the UK's main Police National Computer system. Here's what the Report of the Hillsborough Independent Panel (pdf) wrote about new evidence that had come to light:
2.5.112 The document indicates that a Police National Computer (PNC) check was conducted on all who died at Hillsborough for whom a blood alcohol reading above zero was recorded. It includes a handwritten list of the names, dates of birth, blood alcohol readings and home addresses of 51 of the deceased and provides screen-prints apparently drawn from the PNC. A summary of the results appears on the front page, establishing the number 'with cons' (convictions).
The idea was clearly to point to previous convictions as evidence that many of those who died were in some way responsible for the deaths of themselves and others because of drunkenness. As TJ McIntyre emphasizes in a blog post on this revelation:
This illustrates an important point that privacy campaigners have been making for a long time: centralised databases of this type can and will be abused, and the power to trawl databases for information on individuals -- in effect, to manufacture a case against them -- is a dangerous one. It's not hard to imagine how data retention records might be abused in a similar way in future.
Although the UK government's proposed "Snooper's Charter" foresees the creation of distributed databases of information about every citizen's online activities, it will be possible to carry out "filters" -- searches -- across them, unifying them into a single, virtual centralized database. As McIntyre notes, it's easy to imagine these hugely-detailed records being trawled for information and then used by the police to cover up their own blunders in the future, or to support a flimsy case against someone, in exactly the same way that those involved in the Hillsborough disaster tried to do with the existing PNC database. The latest revelations of database misuse are another compelling reason not to bring in the intrusive and ineffective approach that lies at the heart of the UK government's plans.
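McIntyre's point about "filters" can be made concrete: once one search can be run across separately held databases and the results merged, the distinction between distributed and centralized storage evaporates. A simplified sketch, with hypothetical stores and records (not the Snooper's Charter's actual design):

```python
# Hypothetical datasets held by different providers.
isp_logs   = [{"user": "alice", "site": "example.org", "ts": "2012-10-01"}]
email_meta = [{"user": "alice", "to": "bob@example.com", "ts": "2012-10-02"}]
phone_meta = [{"user": "carol", "called": "555-0100", "ts": "2012-10-03"}]

def federated_filter(predicate):
    """Run one predicate across every store and merge the hits --
    functionally identical to querying a single centralized database."""
    hits = []
    for store in (isp_logs, email_meta, phone_meta):
        hits.extend(rec for rec in store if predicate(rec))
    return hits

# A single "filter" assembles a cross-dataset profile of one person.
print(federated_filter(lambda r: r["user"] == "alice"))
```

The data never sits in one place, but the answer does -- a single, virtual centralized database in everything but name.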
Apparently the system that tracks sex offenders, paroled prisoners and other convicts via electronic tags was totally unreachable for about 12 hours last week, because no one at BI, the company that runs the system, noticed that its servers had run out of storage space. "In retrospect, we should have been able to catch this," claimed a spokesperson for the company. You think? While the data on offenders' whereabouts was still collected, and the people being tracked were unaware of the lapse while it was happening, it still makes you wonder why so many governments trust such a system to a company that can't even notice when it's running out of storage space.
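What makes this failure so embarrassing is how trivially preventable it is. A few lines in any routine monitoring job -- this is a generic sketch, not BI's actual infrastructure -- are enough to raise an alert well before a disk fills up:

```python
import shutil

def check_disk(path="/", alert_threshold=0.90):
    """Warn when a filesystem is nearly full -- the kind of basic
    monitoring that would have caught an outage like the one above."""
    usage = shutil.disk_usage(path)
    fraction_used = usage.used / usage.total
    if fraction_used >= alert_threshold:
        print(f"ALERT: {path} is {fraction_used:.0%} full")
        return True
    return False

check_disk("/")
```

Run from cron every few minutes with the output wired to a pager, and "we should have been able to catch this" becomes "we did catch this."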
For many years, we've been reporting on stories of e-voting malfunctions, mainly from Diebold/Premier, ES&S and Sequoia. For a sampling of such stories click on any of the following links: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25. And that's just the first 25 I found (there are lots more), covering only stories we actually wrote about -- I'm sure plenty more glitch-infused elections have happened. Given all these glitches and errors, and a seeming lack of followthrough to make sure they don't happen again, a group is asking Congress to authorize a public national database of e-voting election problems.
The really scary part is that the researchers who wrote the report note that many of the problems are repeats -- a problem happens in one location, but another voting district uses the same machines, configured in the same problematic way, in another election, totally unaware of the problems it will cause. It's amazing just how little has been done to fix these machines after nearly a decade of documented e-voting problems.
Nearly two years ago, we wrote about a company called Leader Technologies, which was suing Facebook for infringing an incredibly broad patent (7,139,761) covering the association of a piece of data with multiple categories. Our usual group of patent system defenders rushed to the comments to quickly declare that I was an idiot for daring to question this patent. The case took a weird turn when the court actually ordered Facebook to hand over its source code, which confused us: since the lawsuit was about patents, not copyright, the specific source code shouldn't really matter.
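For a sense of just how broad "associating a piece of data with multiple categories" is, here's roughly what that amounts to in a few lines of entirely ordinary code (a generic illustration of the everyday programming pattern, not the patent's claimed implementation):

```python
# Associating one piece of data with multiple categories: a mapping
# from category to the items tagged with it.
categories = {}

def tag(item, *cats):
    """File a single item under any number of categories."""
    for cat in cats:
        categories.setdefault(cat, []).append(item)

tag("vacation-photo.jpg", "photos", "travel", "2010")
print(categories["travel"])  # ['vacation-photo.jpg']
```

If a patent plausibly reads on a pattern this basic, it's hard to see what software *wouldn't* infringe it.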
Either way, it looks like the jury in the case seemed to agree with me about the quality of the patent. The jury has declared the patent invalid. Clearly, the only explanation is that the jury was also made up of idiots. Next time, Leader Technologies should file the lawsuit in East Texas where they know how to make juries, rather than Delaware.
As part of NY Attorney General Andrew Cuomo's grandstanding against child porn, he's mostly been making silly threats against the wrong parties in ways that don't actually help stop child porn (and could make it worse). However, his latest announcement actually sounds a lot more reasonable. His office is putting together a database of offending photos, and letting social networks compare uploads to the database to try to stop known offending photos from being uploaded. I would imagine that it also records who was trying to upload that content. Some care would need to be taken to make sure the effort really does focus on actually offending images -- one thing that makes such an effort tricky. I also wonder whether it makes sense for a gov't agency to be putting together the database, rather than having the industry do it itself. On top of that, given Cuomo's earlier grandstanding and his usual methods, you have to expect that it wouldn't be long before Cuomo starts threatening any social network that doesn't use his system with some sort of bogus (but very, very public) legal threats. In other words, when the gov't (especially someone like Cuomo) sets up a system like this, how long until he starts acting like it's mandatory rather than optional?
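The matching mechanism such a system relies on can be sketched simply: known offending images are reduced to fingerprints, and each upload's fingerprint is checked against the set. (This is a simplified illustration using exact cryptographic hashes; real deployments use robust perceptual hashes like Microsoft's PhotoDNA, since an exact hash is defeated by changing a single pixel.)

```python
import hashlib

# Fingerprints of known offending images (hypothetical bytes stand in
# for real image data here).
known_hashes = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def screen_upload(image_bytes):
    """Return True if the upload's hash matches the database (block it).
    Note: an exact hash misses any re-encoded or cropped copy, which
    is why production systems use perceptual hashing instead."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_hashes

print(screen_upload(b"known-bad-image-bytes"))  # True
print(screen_upload(b"innocent-cat-picture"))   # False
```

Note that the hard problems Cuomo's plan raises -- who vets what goes into the database, and who audits it -- live entirely outside this loop.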
In the last year, there's been a sudden resurgence in interest in the concept of "hot news," a doctrine that most people thought was dead and buried, which allowed a judicially-created form of intellectual property on factual information that was deemed to be "hot news." There's no statute that covers this. Just a court decision. And that was a century or so ago. But... the concept started showing back up in court recently, and in March a ruling came down, blocking a website from reporting on news for two hours, using this doctrine. With that on the books, other "hot news" lawsuits were quickly filed.
However, one such recent lawsuit seems to stretch the concept of hot news so far that you can only sit back and admire the audacity of including it in the lawsuit, while fearing the results should a court actually buy it. Thomas O'Toole has the details of what is likely to be a very interesting lawsuit for a few different reasons beyond just the hot news claim (but we'll get to those other issues, so read on...).
The case apparently involves an employee at Goldman Sachs (or potentially multiple employees) who got the username and password of another account holder on a database put together by a company called Ipreo Networks, called "Bigdough." Bigdough is apparently a database of contact info on 80,000 financial industry people. The Goldman Sachs employee(s) logged in with someone else's username/password and downloaded a bunch of information.
This sort of thing happens all the time; people share logins all the time. Doing so is basically a terms of service violation, but here the company has broken out the big guns. Yes, it's claiming that the contact info in its database represents "hot news," and that Goldman accessing it is a violation of the "hot news" doctrine. Think about that for a second. Contact information. "Hot news"? And, of course, the whole purpose of the "hot news" doctrine is to stop another publisher from republishing the information -- something that Goldman Sachs didn't do here at all. The "hot news" claim here seems to stretch the (already questionable) concept way past the breaking point. Hopefully that part gets tossed quickly. Otherwise, imagine what else will suddenly be called "hot news."
But that's not all that's interesting in this case. As O'Toole notes in his report, there are two other interesting legal questions, both having to do with the use of someone else's login. First, there's the question of whether or not Goldman Sachs is liable here, even if the actions were just those of a rogue employee (or group of employees). O'Toole points out that the legal standard to get GS on the hook here is pretty damn high. The second question, of course, is whether or not merely using a login that someone shared with you is a violation of the Computer Fraud and Abuse Act (CFAA). We recently discussed how there's also a growing series of cases trying to stretch the CFAA to classify all sorts of activities as "unauthorized access." The CFAA was really designed as an anti-hacking law -- aimed at people actually breaking into a computer system. If someone simply shares their login credentials with you, does that really count as criminal hacking? If so, an awful lot of people may be guilty.
So, this should be a fun one to follow. Three separate interesting legal questions, and in all three cases, Ipreo appears to be trying to stretch the law beyond its intentions, so hopefully the court recognizes this. If you want to see the full filing, it's below: