There are many tales in literature, going back millennia, about people selling their souls to a malevolent power for the right price. But at least it’s usually a good price. Recent research suggests we are willing to compromise our computers for as little as one cent an hour.
The researchers from Carnegie Mellon University’s CyLab who carried out this work tempted users into downloading, and in many cases actually running, a Windows application on their computers. After agreeing to take part, participants were told it was for an academic study but were given very little other information about the application. The application pretended to run a series of computational tasks and paid those who installed it one cent for every hour it was left running.
Even though a participant’s machine would display a pop-up warning when they started the download, telling them that this application wanted higher-level access to essential security services, 22% of them went ahead and downloaded it. And when participants were offered $1 per hour, that figure rose to 43%.
With more than 1,700 downloads, the application was run about 960 times, meaning that just over half of participants fell for the ruse. Alarm bells should have rung, but they were apparently not heeded.
The fact is, this application could easily have contained malware. Participants knew little about what they were installing other than that it would pay them for their processing power, but they didn’t seem to mind.
The ethics of this research are arguably dubious. Individuals were lured into downloading the application for a seemingly good cause, and we know nothing of their financial circumstances. It’s a scenario that many of us can recognise in one way or another, though. We may not get a financial reward for downloading applications, but how often do we click away warnings so we can get an app that offers us some other incentive, such as access to free music or movies?
Crooks will be pleased to learn from this study that it is apparently very easy to trick ordinary computer users into hosting your malware.
It is an old adage, but it is still very important to remember: if it looks too good to be true, it probably is. Do not install any application without checking that the source is reputable. Free is often good, but free on the internet often comes with risks. This is particularly true for sites offering access to illegal movies or adult content.
Whenever you download an application from any source, trusted or otherwise, you should complete a simple mental checklist.
Did I scan for malware just before I clicked to install the application? Is my operating system warning me about the security risks of this application? Did I scan my system for malware after I installed the application? And finally, do I have up-to-date anti-malware software?
This may all seem tedious, but it pays to be cautious. Recent incidents have taught us that there are plenty of people out there who will take advantage of anyone who hasn’t protected themselves properly. Whether this research shows that we just can’t be bothered to read the pop-up warnings our computers send us when we click and install, or that we are even more willing to compromise our security in the name of a quick buck, it should make us think twice about how blindly we click. As many a character in literary history could tell you, selling your soul rarely turns out to be a good deal.
Andrew Smith does not work for, consult to, own shares in or receive funding from any company or organization that would benefit from this article, and has no relevant affiliations.
Adopting a tactic that has been used by officials ranging from Sarah Palin to staffers of New Jersey Gov. Chris Christie, aides to New York Gov. Andrew Cuomo are sending emails from private accounts to conduct official business.
I know because I got one myself. And three other people who interact with the governor’s office on policy or media matters told me they have too. None of the others wanted to be named.
The tactic appears to be another item in the toolbox of an administration that, despite Cuomo’s early vows of unprecedented transparency, has become known for an obsession with secrecy. Emailing from private accounts can help officials hide communications and discussions that are supposed to be available to the public.
“Government business should never be conducted through private email accounts. Not only does it make it difficult to retrieve what is a government record, but it just invites the suspicion that a government employee is attempting to evade accountability by supervisors and the public,” said Christopher Dunn of the New York Civil Liberties Union, a frequent requester of records under the state’s Freedom of Information Law.
Emailing from private accounts also may violate state policy. State employees are not to “use a personal email account to conduct State business unless explicitly authorized,” according to a policy bearing the governor’s name published by the Office of Information Technology Services.
The Cuomo administration declined to comment on whether any employees are authorized to use private accounts.
Back when he was running for governor, Cuomo pledged, “We must use technology to bring more sunlight to the operation of government.”
The governor himself communicates with aides using a BlackBerry messaging system that does not save messages, the Daily News reported in 2012. Under the Freedom of Information Law, those records would typically not have to be released because there is an exemption for internal deliberative material.
But emails with anyone outside of the administration – such as lobbyists, company executives, or reporters – usually have to be made public upon request. It is for those communications, with people outside the administration, that private email accounts have been used.
Last year, I was poking around on a possible story and filed some public records requests that sought emails from Director of State Operations Howard Glaser, a top Cuomo adviser. One day in October, just hours after filing a request with the governor’s office, an email appeared in my inbox from Glaser himself.
The email, inquiring what I was working on, was sent from a @glasergroup.net address rather than a government account. The note had a signature line about not using the email address for official business (even though it appeared to be doing just that). My interest was piqued.
So I filed a request under the state’s Freedom of Information Law, asking for all records sent to and from Glaser’s private account. It is not supposed to matter if an email is sent from an official account or a private one: If it pertains to government business, it typically has to be released.
A couple of months later, the Cuomo administration responded with a terse denial: “Please be advised that the New York State Executive Chamber has conducted a diligent search, but does not possess records responsive to your request.”
I appealed, noting that I had in my possession a record responsive to the request – Glaser’s email to me – and included it as an attachment.
The administration upheld its original denial, now citing a retention issue.
“[T]he fact that this record is in your possession does not mean that the Chamber failed to produce a responsive record in its possession. Emails and certain other correspondence are not required to be preserved indefinitely,” the March letter said.
When I asked about the email this month, Cuomo spokesman Rich Azzopardi took a different tack, now disputing that Glaser was emailing me in his official capacity at all and calling the email “informal.” “It would be inaccurate to characterize Howard’s email as official business – as he noted, your official business was being handled by the FOIL office, not him,” Azzopardi said.
But I have no personal relationship with Glaser, and my Freedom of Information Law requests focused only on his activities as a state official. When I recently asked Glaser about his email practices, he said, “I don’t use personal email to conduct official business.” He would not say how he defines “official business.”
In its letter denying my request for emails from Glaser’s private account, the administration cited the general retention policy of the State Archives. That policy says that “many email communications are not records and are therefore suitable for immediate destruction” but also that those emails which are records must be preserved.
So how does one determine which emails are “records”?
The governor’s office seems to take a particularly narrow view. The governor’s policy says that emails are only “records” if they are formal documents like press releases and nominations. Azzopardi, the Cuomo spokesman, said: “Official email is not required to be retained unless it meets the definition of a particular kind of record (eg – contract), consistent with the State Archives policy.”
But the Archives, which Cuomo’s office itself cited, takes a more expansive view, even as state law gives the governor leeway to determine which records should be kept.
Quoting the official definition of records, Archives spokeswoman Antonia Valentine said an email is a record if it is created “in connection with the transaction of public business (and provides) … evidence of the organization, functions, policies, decisions, procedures, operations, or other activities (of an agency).”
In practice, Glaser seems to be either eschewing his official email account or promptly deleting messages of substance. When I asked for a 10-day sample of emails from Glaser’s official account, I got back little actual communication: 147 pages that are largely filled with newsletters, press releases, and the occasional terse email to set up a phone call.
The use of private accounts can result in even more roadblocks when an official leaves the government. (Glaser is reportedly leaving the administration in June.)
The issue has come up before.
In 2007, executives from the insurance giant AIG filed a public records request with the Office of the Attorney General, seeking, among other things, former Attorney General Eliot Spitzer’s communications with the press from the period when he had sued the company. That request was resisted for years by Spitzer’s successor as attorney general: Andrew Cuomo.
While Cuomo’s office eventually released emails sent from official accounts, it maintained that Spitzer’s use of a private account put any of those emails beyond its reach.
“[T]he reality is that the Office of the Attorney General lacks access to this account and possession of whatever e-mails it may contain, thus rendering them beyond the scope of petitioner’s FOIL request both practically and legally,” Cuomo’s office said in a 2009 court filing.
A judge ruled against the attorney general’s office, which has appealed. Seven years since the original request, the case is still in the courts and Spitzer’s private email account – which he was known to use in his capacity as a state official – has never been searched for records.
Lawyers for Spitzer joined the case this year, arguing in a March filing that because Spitzer is now a former employee and a private citizen, the Freedom of Information Law doesn’t apply.
Beyond the governor’s office, the state is reportedly moving toward an email system that would automatically delete emails after 90 days except for those marked by users to save.
It’s not clear how that process would work or how the state will ensure that records are not destroyed. The Office of Information and Technology Services declined to provide the memo describing the new policy, requiring that I file a formal public records request to get it.
Transparency advocates have criticized 90 days as too short a period because emails may become relevant only months later, after a scandal or other event.
A document on the IT office’s website references the possibility in a state email system for “recovery of deleted mailbox contents for the length of the retention period” – another capability that would not exist for officials using private accounts.
Across the river in New Jersey, private email accounts are at the center of the Bridgegate scandal.
The infamous “Time for some traffic problems in Fort Lee” email was sent from a Christie aide’s Yahoo account to another official’s Gmail account. That tactic held off public access to the email for a time.
In December, the Christie administration claimed it did not have records in response to a request from the Record of Bergen, N.J. The emails became public later, only after the officials were subpoenaed by the state Assembly.
If you have gotten emails from the private account of an official in the governor’s office or other state or city agencies, email me at justin@propublica.org.
Reposted from ProPublica via its Creative Commons (BY-NC-ND) license.
The Heartbleed computer security bug is many things: a catastrophic tech failure, an open invitation to criminal hackers and yet another reason to upgrade our passwords on dozens of websites. But more than anything else, Heartbleed reveals our neglect of Internet security.
The United States spends more than $50 billion a year on spying and intelligence, while the folks who build important defense software — in this case a program called OpenSSL that ensures that your connection to a website is encrypted — are four core programmers, only one of whom calls it a full-time job.
In a typical year, the foundation that supports OpenSSL receives just $2,000 in donations. The programmers have to rely on consulting gigs to pay for their work. “There should be at least a half dozen full time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work,” says Steve Marquess, who raises money for the project.
Is it any wonder that this Heartbleed bug slipped through the cracks?
Dan Kaminsky, a security researcher who saved the Internet from a similarly fundamental flaw back in 2008, says that Heartbleed shows that it’s time to get “serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code.”
The Obama Administration has said it is doing just that with its national cybersecurity initiative, which establishes guidelines for strengthening the defense of our technological infrastructure — but it does not provide funding for the implementation of those guidelines.
Instead, the National Security Agency, which has responsibility to protect U.S. infrastructure, has worked to weaken encryption standards. And so private websites — such as Facebook and Google, which were affected by Heartbleed — often use open-source tools such as OpenSSL, where the code is publicly available and can be verified to be free of NSA backdoors.
The federal government spent at least $65 billion between 2006 and 2012 to secure its own networks, according to a February report from the Senate Homeland Security and Government Affairs Committee. And many critical parts of the private sector — such as nuclear reactors and banking — follow sector-specific cybersecurity regulations.
But private industry has also failed to fund its critical tools. As cryptographer Matthew Green says, “Maybe in the midst of patching their servers, some of the big companies that use OpenSSL will think of tossing them some real no-strings-attached funding so they can keep doing their job.”
In the meantime, the rest of us are left with the unfortunate job of changing all our passwords, which may have been stolen from websites that were using the broken encryption standard. It’s unclear whether the bug was exploited by criminals or intelligence agencies. (The NSA says it didn’t know about it.)
It’s worth noting, however, that the risk of your passwords being stolen through Heartbleed is still lower than the risk of your passwords being taken from a website that failed to protect them properly. Criminals have so many ways to obtain your information these days — by sending you a fake email from your bank or hacking into a retailer’s unguarded database — that it’s unclear how many would have gone to the trouble of exploiting this encryption flaw.
The problem is that if your passwords were hacked by the Heartbleed bug, the hack would leave no trace. And so, unfortunately, it’s still a good idea to assume that your passwords might have been stolen.
So, you need to change them. If you’re like me, you have way too many passwords. So I suggest starting with the most important ones — your email passwords. Anyone who gains control of your email can click “forgot password” on your other accounts and get a new password emailed to them. As a result, email passwords are the key to the rest of your accounts. After email, I’d suggest changing banking and social media account passwords.
But before you change your passwords, you need to check whether the website has patched its site. You can test whether a site has been patched by typing its URL into an online Heartbleed checker and looking for a result indicating the site is now safe.
If the site has been patched, then change your password. If the site has not been patched, wait until it has been before changing your password.
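For the technically inclined, the affected releases are well documented: OpenSSL 1.0.1 through 1.0.1f were vulnerable, while 1.0.1g and the older 0.9.8 and 1.0.0 branches were not. The short Python sketch below is my own illustration, not part of the original advice; it only inspects the OpenSSL library your local Python is linked against, and it cannot tell you whether a remote website has been patched, which is what the online checkers are for.

```python
# Illustrative only: report which OpenSSL this Python links against and
# whether it falls in the Heartbleed-affected range (1.0.1 through 1.0.1f).
# This says nothing about any remote website.
import ssl

major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
print("Linked OpenSSL:", ssl.OPENSSL_VERSION)

# Patch letters map to numbers: 1.0.1 -> 0, 1.0.1a -> 1, ... 1.0.1f -> 6.
if (major, minor, fix) == (1, 0, 1) and patch <= 6:
    print("This build is in the Heartbleed-affected range; upgrade it.")
else:
    print("This build is outside the known-affected range.")
```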
A reminder about how to make passwords: Forget all the password advice you’ve been given about using symbols and not writing down your passwords. There are only two things that matter: Don’t reuse passwords across websites and the longer the password, the better.
I suggest using password management software, such as 1Password or LastPass, to generate the vast majority of your passwords. For email, banking and the password to your password manager itself, I suggest Diceware, a method of picking random words from a dictionary using dice rolls. If that seems too hard, just make your password super long — at least 30 or 40 characters, if possible.
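For readers who want something concrete, here is a minimal sketch of the Diceware idea in Python. It is my own illustration rather than an official tool: it assumes you have saved a Diceware-style word list (one word per line, or dice index plus word) as wordlist.txt, a placeholder file name, and it substitutes the operating system’s cryptographic random source for physical dice.

```python
# Minimal Diceware-style passphrase generator (illustrative sketch).
# Assumes a word list saved locally as "wordlist.txt"; the file name is a
# placeholder, not something referenced in the article.
import secrets

def diceware_passphrase(path="wordlist.txt", num_words=6):
    with open(path) as f:
        # Take the last token on each line so "11111 abacus" style lists work.
        words = [line.split()[-1] for line in f if line.strip()]
    # secrets.choice draws from a cryptographically secure source,
    # standing in for the dice rolls of classic Diceware.
    return " ".join(secrets.choice(words) for _ in range(num_words))

if __name__ == "__main__":
    print(diceware_passphrase())
```

A six-word passphrase from a standard 7,776-word list is both long and memorable, which matters more than any rule about symbols.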
Republished from ProPublica under a Creative Commons license.
The East German secret police, known as the Stasi, were infamously intrusive. They amassed dossiers on about one quarter of the country’s population during the Communist regime.
But their spycraft — while incredibly invasive — was also technologically primitive by today’s standards. While researching my book Dragnet Nation, I obtained the hand-drawn social network graph above and other files from the Stasi Archive in Berlin, where German citizens can see files kept about them and media can access some files, with the names of the people who were monitored removed.
The graphic shows forty-six connections, linking a target to various people (an “aunt,” “Operational Case Jentzsch,” presumably Bernd Jentzsch, an East German poet who defected to the West in 1976), places (“church”), and meetings (“by post, by phone, meeting in Hungary”).
Gary Bruce, an associate professor of history at the University of Waterloo and the author of “The Firm: The Inside Story of the Stasi,” helped me decode the graphic and other files. I was surprised at how crude the surveillance was. “Their main surveillance technology was mail, telephone, and informants,” Bruce said.
Another file revealed a low-level surveillance operation, known as an IM-Vorgang, aimed at recruiting an unnamed target to become an informant. (The names of the targets were redacted; the names of the Stasi agents and informants were not.) In this case, the Stasi watched a rather boring high school student who lived with his mother and sister in a run-of-the-mill apartment. The Stasi obtained a report on him from the principal of his school and from a club where he was a member. But they didn’t have much on him — I’ve seen Facebook profiles with far more information.
A third file documented a surveillance operation known as an OPK, for Operative Personenkontrolle, of a man who was writing oppositional poetry. The Stasi deployed three informants against him but did not steam open his mail or listen to his phone calls. The regime collapsed before the Stasi could do anything further.
I also obtained a file that contained an “observation report,” in which Stasi agents recorded the movements of a forty-year-old man for two days — September 28 and 29, 1979. They watched him as he dropped off his laundry, loaded up his car with rolls of wallpaper, and drove a child in a car “obeying the speed limit,” stopping for gas and delivering the wallpaper to an apartment building. The Stasi continued to follow the car as a woman drove the child back to Berlin.
The Stasi agent appears to have started following the target at 4:15 p.m. on a Friday evening. At 9:38 p.m., the target went into his apartment and turned out the lights. The agent stayed all night and handed over surveillance to another agent at 7:00 a.m. Saturday morning. That agent appears to have followed the target until 10:00 p.m. From today’s perspective, this seems like a lot of work for very little information.
What we have below is actually a ProPublica post by Kara Brandeisky, posted back in August of this year, but republished here under ProPublica’s Creative Commons license. However, given the White House task force’s recommendations, we thought it might be useful to be reminded what Senator Obama fought for concerning surveillance before he was President. Many of these look remarkably similar to what the task force proposes…
When the House of Representatives recently considered an amendment that would have dismantled the NSA’s bulk phone records collection program, the White House swiftly condemned the measure. But only five years ago, Sen. Barack Obama, D-Ill., was part of a group of legislators that supported substantial changes to NSA surveillance programs. Here are some of the proposals the president co-sponsored as a senator.
As a senator, Obama wanted to limit bulk records collection.
The measure Obama supported in 2007 is actually similar to the House amendment that the White House condemned earlier this month. That measure, introduced by Reps. Justin Amash, R-Mich., and John Conyers, D-Mich., would have ended bulk phone records collection but still allowed the NSA to collect records related to individual suspects without a warrant based on probable cause.
The amendment failed 35-63. Obama later reversed his position and supported what became the law now known to authorize the PRISM program. That legislation — the FISA Amendments Act of 2008 — also granted immunity to telecoms that had cooperated with the government on surveillance.
The law ensured the government would not need a court order to collect data from foreigners residing outside the United States. According to the Washington Post, analysts are told that they can compel companies to turn over communications if they are 51 percent certain the data belongs to foreigners.
PowerPoint presentation slides published by the Guardian indicate that when analysts use XKeyscore — the software the NSA uses to sift through huge amounts of raw internet data — they must first justify why they have reason to believe communications are foreign. Analysts can select from rationales available in dropdown menus and then read the communications without court or supervisor approval.
Finally, analysts do not need court approval to look at previously-collected bulk metadata either, even domestic metadata. Instead, the NSA limits access to incidentally collected American data according to its own “minimization” procedures. A leaked 2009 document said that analysts only needed permission from their “shift coordinators” to access previously-collected phone records. Rep. Stephen Lynch, D-Mass., has introduced a bill that would require analysts to get special court approval to search through telephone metadata.
As a senator, Obama wanted the executive branch to report to Congress how many American communications had been swept up during surveillance.
A 2008 amendment from Sen. Russ Feingold, D-Wis., which Obama supported, would have also required the Defense Department and Justice Department to complete a joint audit of all incidentally collected American communications and provide the report to congressional intelligence committees. The amendment failed 35-63.
The Inspector General of the Intelligence Community told Senators Ron Wyden, D-Ore., and Mark Udall, D-Colo., last year that it would be unfeasible to estimate how many American communications have been incidentally collected, and that doing so would violate Americans’ privacy rights.
As a senator, Obama wanted to restrict the use of gag orders related to surveillance court orders.
Obama co-sponsored at least two measures that would have made it harder for the government to issue nondisclosure orders to businesses when compelling them to turn over customer data.
One 2007 bill would have required the government to demonstrate that disclosure could cause one of six specific harms: endangering someone, causing someone to avoid prosecution, encouraging the destruction of evidence, intimidating potential witnesses, interfering with diplomatic relations, or threatening national security. It would have also required the government to show that the gag order was “narrowly tailored” to address those specific dangers. Obama also supported a similar measure in 2005. Neither measure made it out of committee.
The Obama administration has thus far prevented companies from disclosing information about surveillance requests. Verizon’s surveillance court order included a gag order.
Meanwhile, Microsoft and Google have filed motions with the Foreign Intelligence Surveillance Court seeking permission to release aggregate data about directives they’ve received. Microsoft has said the Justice Department and the FBI had previously denied its requests to release more information. The Justice Department has asked for more time to consider lifting the gag orders.
As a senator, Obama wanted to give the accused a chance to challenge government surveillance.
Until recently, federal prosecutors would not tell defendants what kind of surveillance had been used.
The New York Times reported that in two separate bomb plot prosecutions, the government resisted efforts to reveal whether its surveillance relied on a traditional FISA order, or the 2008 law now known to authorize PRISM. As a result, defense attorneys had been unable to contest the legality of the surveillance. Sen. Dianne Feinstein, D-Calif., later said that in both cases, the government had relied on the 2008 law, though prosecutors now dispute that account.
On July 30, the Justice Department reversed its position in one bomb plot prosecution. The government disclosed that it had not gathered any evidence under the 2008 law now known to authorize sweeping surveillance.
But that’s not the only case in which the government has refused to detail its surveillance. When San Diego cab driver Basaaly Saeed Moalin was charged with providing material support to terrorists based on surveillance evidence in Dec. 2010, his attorney, Joshua Dratel, tried to get the government’s wiretap application to the Foreign Intelligence Surveillance Court. The government refused, citing national security.
Dratel only learned that the government had used Moalin’s phone records as the basis for its wiretap application — collected under Section 215 of the Patriot Act — when FBI Deputy Director Sean Joyce cited the Moalin case as a success story for the bulk phone records collection program.
As a senator, Obama wanted the attorney general to submit a public report giving aggregate data about how many people had been targeted for searches.
Under current law, the attorney general gives congressional intelligence committees a semiannual report with aggregate data on how many people have been targeted for surveillance. Obama co-sponsored a 2005 bill that would have made that report public. The bill didn’t make it out of committee.
Despite requests from Microsoft and Google, the Justice Department has not yet given companies approval to disclose aggregate data about surveillance directives.
As a senator, Obama wanted the government to declassify significant surveillance court opinions.
Currently, the attorney general also gives congressional intelligence committees “significant” surveillance court opinions, decisions and orders and summaries of any significant legal interpretations. The 2005 bill that Obama co-sponsored would have released those opinions to the public, allowing redactions for sensitive national security information.
Before Edward Snowden’s disclosures, the Obama Justice Department had fought Freedom of Information Act lawsuits seeking surveillance court opinions. On July 31, the Director of National Intelligence released a heavily redacted version of the FISA court’s “primary order” compelling telecoms to turn over metadata.
In response to a request from Yahoo, the government also says it is going to declassify court documents showing how Yahoo challenged a government directive to turn over user data. The Director of National Intelligence is still reviewing whether there are other surveillance court opinions and other significant documents that may be released. Meanwhile, there are several bills in Congress that would compel the government to release secret surveillance court opinions.
Anyone who took the time to read the UK government’s latest update on its cybersecurity strategy could be forgiven for thinking that a man called Edward Snowden never existed.
Most people who are even slightly plugged in to the world around them would agree, however, that we live in decidedly more interesting times for internet security and privacy than the document would have us believe. Not a day seems to have gone by since the summer without a new revelation of activities by the NSA or GCHQ that have gone just a little further than what most people find acceptable.
In fact, the only place where you won’t see the NSA affair taking centre stage is in communications from the UK government.
This latest update brings us up to speed on the progress made towards the objectives, and the forward plans, relating to the cybersecurity strategy that was published two years ago. Yet neither appears to have been affected by the Snowden crisis. There is not the slightest mention of his name in either document. This may not surprise the cynics, but it is highly inadequate.
Bad for business
The very first objective in the original strategy was to make the UK “one of the most secure places in the world to do business in cyberspace”. The Snowden affair has profoundly affected this goal.
At the heart of cybersecurity, as far as businesses are concerned, is the ability to guarantee the confidentiality of sensitive data. Presumably, international companies which operate in competition with UK rivals do not expect to be sharing their business data with GCHQ. Snowden teaches us that they should.
It has also been alleged that the NSA and GCHQ have been involved in building back doors into commercially available encryption software and standards in order to gain access to encrypted data. Security researchers have pointed out that this undermines the very cyber infrastructure that GCHQ is supposed to be protecting.
If the agency introduces deliberate weaknesses to gain covert access to information, those weaknesses can equally be sniffed out and exploited by cyber criminals and other third parties. This point was also made quite forcefully by Sir Tim Berners-Lee. Obviously, undermining the infrastructure also runs contrary to “making the UK more resilient to cyber attack”, another objective identified in the original strategy.
Above scrutiny?
Another objective originally identified is “protecting our interests in cyberspace”, the execution of which has been mostly delegated to GCHQ. The government thus avoids having to report back on progress in any great detail since the information is classified. Nevertheless, we are assured that a report has been made on the matter to the Intelligence and Security Committee.
Here too, the government appears oblivious to the fact that the public has almost entirely lost confidence in the adequacy of information-sharing and challenge in that particular oversight relationship. It claims to want to “ensure broad understanding within the UK of the government’s approach” but this is hard to defend if the workings of GCHQ are only revealed to and understood by tiny subgroups of government and parliament.
Even a past Cabinet minister on the National Security Council and parliamentarians with relevant responsibilities have already claimed that they had been insufficiently informed of GCHQ’s activities, so what hope for the rest of us?
An open society
However, the government scores its lowest marks for progress made towards objective three in its original strategy. Two years ago, it planned to play a part in creating an “open” and “vibrant” cyberspace “which the UK public can use safely and that supports open societies”. The lack of transparency and accountability of GCHQ’s operations, even to Westminster, runs very much counter to this ideal.
The UK takes pride in its role in promoting democracy and human rights across the world, and yet the Snowden affair has caused so much damage that Amnesty International has felt the need to lodge a complaint with the Investigatory Powers Tribunal because it thinks its sensitive communications have probably been intercepted.
International cyber-waters
As a positive achievement, the progress report mentions agreements to make international law apply in cyberspace. But even this will be fraught with difficulties as a result of the Snowden affair. International law should apply equally to all, and this does not sit easily with the thriving collaboration between GCHQ and the NSA. The NSA is regulated in a way that is strongly biased against non-US citizens, and many other governments seem to be alive to that, even if the UK isn’t.
All in all, Snowden’s revelations have significantly changed many people’s perceptions of the role the UK government actually plays in cyberspace. The government’s progress report does not appear to take this into account at all.
The UK government may choose to believe that none of Snowden’s files will prove to be true, or that all the activities reported in them are fully justifiable. But even if that were the case, public reaction to these stories is a reality that needs to be confronted. The UK government cannot afford to be in denial about the relevance of the Snowden files, and certainly not about the impact that they have had on business and society, at home and abroad.
Eerke Boiten is a senior lecturer in the School of Computing at the University of Kent, and Director of the University’s interdisciplinary Centre for Cyber Security Research. He receives funding from EPSRC for the CryptoForma Network of Excellence on Cryptography and Formal Methods.
Genetic testing is a powerful tool. Two years ago, with the help of my colleagues, I used this tool to identify a new disease. The disease, called Ogden Syndrome, caused the death of a four-month-old child named Max. But the rules and regulations for genetic testing in the US, laid down in the CLIA (Clinical Laboratory Improvement Amendments), meant I could not share the results of the family’s genetic tests with them.
Since that time, I have advocated that all genetic testing involving humans be performed in such a way that results can be returned to research participants. This, I believe, should extend beyond research, and some private companies, like 23andMe, are helping to do just that.
For as little as US$99, people around the world can send a sample of their saliva to 23andMe to get their DNA sequenced. Their Personal Genome Service (PGS) analyses parts of a person’s genome. This data is then compared with related scientific data and 23andMe’s own database of hundreds of thousands of individuals to spot genetic markers; the company claims the service “reports on 240 health conditions and traits”.
This week, however, as I had feared, the US Food and Drug Administration (FDA) ordered 23andMe to stop marketing its service. In a warning letter, the FDA said: “23andMe must immediately discontinue marketing the PGS until such time as it receives FDA marketing authorisation for the device.” The FDA treats the PGS as a medical “device” and fears that people may self-medicate based on results they receive from 23andMe.
Somehow the US and UK governments find it acceptable to store massive amounts of data about their own citizens and those of the rest of the world. They are happy to spend billions on such mass surveillance. But if those same citizens want to spend their own money to advance genomic medicine, and possibly improve their own health in the process, the governments want to stop them.
There are many diseases that appear to occur in the presence of genetic mutations with large effect in certain populations. A case in point is the deltaF508 mutation in the CFTR gene, which is known to predispose people to cystic fibrosis, a disease that causes scarring inside organs.
The expression of cystic fibrosis in carriers of the mutation is highly variable, but the presence of the mutation can certainly raise suspicion of the illness in individuals with any such symptoms. This is particularly the case when there is an already known instance of cystic fibrosis in the immediate family.
This is why carrier screening in families with diagnosed cases of such diseases is advocated. And yet, such screening is not commonly performed, even though it could decrease prevalence of affected infants.
Genetic data (or genotype) on its own is of little use. It is the correlation with how those genes manifest in people, their phenotype, that makes genotypes useful.
I dream of a world in which we have phenotype and genotype data on millions of individuals, so that we can really begin to better understand genotype-phenotype relationships.
Instead, we still live in the medical world described in the Pulitzer Prize-winning novel Arrowsmith, published in 1925, where doctors pretend to know far more than they actually do. The sad fact is that there is no way the FDA can evaluate and regulate each and every genetic variant among the billions of letters that make up the human genome and that are variably expressed in trillions of cells in every human body.
We need to collect billions of data points for analysis by computers. The only company in serious contention to do this soon is 23andMe. With its latest attempt to stop 23andMe, the FDA is really only delaying, or worse, halting, the revolution that today’s medicine desperately needs.
Gholson Lyon is an Assistant Professor in Human Genetics at Cold Spring Harbor Laboratory. He does not work for, consult to, own shares in or receive funding from any company or organization that would benefit from this article, and has no relevant affiliations.
This week the advocate general of the Court of Justice of the EU (CJEU), Yves Bot, publishes an opinion on the extent to which the Data Retention Directive, one of the most controversial security measures introduced by the EU in the past decade, is compatible with human rights law. Although not a binding judgement (this will come later), the CJEU’s opinion is a significant intervention in the ongoing debate over how to balance human rights with states’ perceived surveillance needs.
The security-related retention of communications by telecoms firms was on the European agenda well before 9/11, but privacy concerns had led to a limited approach. Telecoms companies in the EU were obliged to delete communications data as soon as all business needs had been met; the data could not be retained for security or criminal investigation purposes. Some states had attempted to adjust this and introduce a retention system in 2000, but this failed – again, largely because of privacy concerns. All this changed, however, after 9/11.
As early as May 2002, a “data retention amendment” had been made to existing EU privacy laws to allow for security-related data retention, and drafts of a provision that would require retention began to circulate. Those proposals attracted so much rights-based criticism that they were apparently abandoned; however, they quickly reappeared in the wake of the London and Madrid bombings, and in 2006, the Data Retention Directive was adopted.
It obliges all member states to introduce national data retention regimes, even where — as in the UK — there had already been significant resistance to such regimes when they were previously proposed at a national level. The directive requires telecommunications providers to retain data on the source, destination, time, date, duration and type of all communications by fixed and mobile telephone, fax and internet, and on the location and type of equipment used.
The data is to be retained for between six months and two years, with national law deciding on the duration, and can be accessed by state agencies investigating “serious crime” — a term that has different definitions across the member states.
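To make that scope concrete, here is a rough sketch, in Python, of the kind of record a provider would have to keep for every single communication under the directive. The field names and example values are my own, chosen only to mirror the categories of data listed above; the directive specifies categories of data, not a schema.

```python
# Toy sketch of one retained record under the directive. Field names and
# example values are placeholders of my own, not taken from the directive.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RetainedCommunication:
    source: str            # calling number / originating account
    destination: str       # called number / recipient account
    start: datetime        # date and time the communication began
    duration_seconds: int  # how long it lasted
    service_type: str      # e.g. "mobile telephony", "internet e-mail"
    equipment_id: str      # identifier of the equipment used
    location: str          # location of mobile equipment at the start

# One such record per call, text or internet session, kept for between
# six months and two years depending on national law.
example = RetainedCommunication(
    source="+44 20 0000 0000", destination="+44 20 1111 1111",
    start=datetime(2012, 3, 1, 14, 30), duration_seconds=540,
    service_type="mobile telephony", equipment_id="equipment-placeholder",
    location="cell-id-placeholder",
)
```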
Blanket surveillance
The volume and extent of information retained under the directive is stunning; in effect, it has introduced a system of blanket surveillance across the entire EU. Although access to the information is regulated by law, state agencies can nonetheless access an enormous amount of information about our communications patterns and activities. This naturally raises serious human rights concerns, especially about privacy.
Security services insist that data retention is an indispensable tool for investigating serious crimes, such as terrorism and the production and distribution of child pornography. Yet different states make use of the Directive to wildly varying extents: in 2012, for example, Cyprus made 22 requests for access to data, while the UK made 725,467.
The question for the advocate general, the CJEU and the EU more broadly is whether or not the approach taken by the directive privileges perceived security needs over human rights. Data retention unquestionably constitutes a prima facie infringement on privacy; the real issue is whether this infringement is justified because it is necessary, effective, and limited. This question is at the core of all debates about “balance” in the security context: how far are we prepared to allow state power into our individual, family, social and democratic lives in order to “secure” us?
Answering this question requires us to decide what we think “effectiveness” means in the context of security. If the directive helps to resolve a handful of serious crimes per year, or to prevent one terrorist attack, is it effective? Could a more limited approach — such as requiring telecoms companies to collect data related to certain investigations but not to retain all data — achieve the same security objectives while better protecting rights?
These are difficult questions, but they are ones we must resolve if we are to have a balanced security system. The advocate general’s opinion will be an important contribution to the debate, but it will not be the final word. Achieving a balanced approach to security requires critical scrutiny at practical, political, social and legal levels. This is all the more true given that, as the Data Retention Directive illustrates, security measures operate upon and have implications for the rights of all of us, all of the time.
Fiona de Londras is the Project Co-Ordinator of SECILE (Securing Europe through Counter-Terrorism: Impact, Legitimacy and Effectiveness), a project that has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement number 313195.
The federal institute that sets national standards for how government, private citizens and business guard the privacy of their files and communications is reviewing all of its previous recommendations.
The review, announced late Friday afternoon by the National Institute of Standards and Technology, will also include an assessment of how the institute creates encryption standards.
The institute sets national standards for everything from laboratory safety to high-precision timekeeping. NIST’s cryptographic standards are used by software developers around the world to protect confidential data. They are crucial ingredients for privacy on the Internet, and are designed to keep Internet users safe from being eavesdropped on when they make purchases online, pay bills or visit secure websites.
But as the investigation by ProPublica, The Guardian and The New York Times in September revealed, the National Security Agency spends $250 million a year on a project called “SIGINT Enabling” to secretly undermine encryption. One of the key goals, documents said, was to use the agency’s influence to weaken the encryption standards that NIST and other standards bodies publish.
“Trust is crucial to the adoption of strong cryptographic algorithms,” the institute said in a statement on its website. “We will be reviewing our existing body of cryptographic work, looking at both our documented process and the specific procedures used to develop each of these standards and guidelines.”
The NSA is no stranger to NIST’s standards-development process. Under current law, the institute is required to consult with the NSA when drafting standards. NIST also relies on the NSA for help with public standards because the institute doesn’t have as many cryptographers as the agency, which is reported to be the largest employer of mathematicians in the country.
“Unlike NSA, NIST doesn’t have a huge cryptography staff,” said Thomas Ptacek, the founder of Matasano Security. “NIST is not the direct author of many of its most important standards.”
Matthew Scholl, the deputy chief of the institute’s Computer Security Division, echoed that statement: “As NIST Director Pat Gallagher has said in several public settings, NIST is designed to collaborate and the NSA has some of the world’s best minds in cryptography.” He continued: “We also have parallel missions to protect federal IT systems, so we will continue to work with the NSA.”
Some of these standards are products of public competitions among academic cryptography researchers, while others are the result of NSA recommendations. An important standard, known as SHA2, was designed by the NSA and is still trusted by independent cryptographers and software developers worldwide.
NIST withdrew one cryptographic standard, called Dual EC DRBG, after documents provided to news organizations by the former intelligence contractor Edward Snowden raised the possibility that the standard had been covertly weakened by the NSA.
Soon after, a leading cryptography company, RSA, told software writers to stop using the algorithm in a product it sells. The company promised to remove the algorithm in future releases.
Many cryptographers have expressed doubt about NIST standards since the initial revelations were published. One popular encryption library changed its webpage to boast that it did not include NIST-standard cryptography. Silent Circle, a company that makes encryption apps for smartphones, promised to replace the encryption routines in its products with algorithms not published by NIST.
If the NIST review prompts significant changes to existing encryption standards, consumers will not see the benefit immediately. “If the recommendations change, lots of code will need to change,” said Tanja Lange, a cryptographer at Eindhoven University of Technology in the Netherlands. “I think that implementers will embrace such a new challenge, but I can also imagine that vendors will be reluctant to invest the extra time.”
In Friday’s announcement, NIST pointed to its long history of creating standards, including the role it had in creating the first national encryption standard in the 1970s — the Data Encryption Standard, known as DES. “NIST has a proud history in open cryptographic standards, beginning in the 1970s with the Data Encryption Standard,” the bulletin said. But even that early standard was influenced by the NSA.
During the development of DES, the agency insisted that the algorithm use weaker keys than originally intended — keys more susceptible to being broken by supercomputers. At the time, Whitfield Diffie, a digital cryptography pioneer, raised serious concerns about the keys. “The standard will have to be replaced in as few as five years,” he wrote.
The weakened keys in the standard were not changed. DES was formally withdrawn by the institute in 2005.
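The article does not give figures, but the numbers behind that concern are well documented: the final DES standard used 56-bit keys, shorter than the keys in IBM’s original design. A quick back-of-envelope calculation, under an assumed and purely hypothetical trial rate, shows why such a keyspace invites exhaustive search while a modern 128-bit key does not.

```python
# Back-of-envelope comparison of exhaustive key search.
# The 56-bit figure is the documented DES key size; the trial rate below is
# a hypothetical assumption for illustration, not a claim about any machine.
keyspace_des = 2 ** 56        # all possible DES keys (~7.2e16)
keyspace_modern = 2 ** 128    # keyspace of a typical modern 128-bit cipher

rate = 1e9                    # assumed: one billion key trials per second
seconds_per_year = 365 * 24 * 3600

print(f"56-bit search:  ~{keyspace_des / rate / seconds_per_year:.1f} years")
print(f"128-bit search: ~{keyspace_modern / rate / seconds_per_year:.1e} years")
```

At that assumed rate a 56-bit search finishes in a couple of years on a single machine, and far faster on dedicated or distributed hardware, while a 128-bit search remains hopeless.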
The announcement is the latest effort by NIST to restore the confidence of cryptographers. A representative from NIST announced on a public mailing list, also on Friday, that the institute would restore the original version of a new encryption standard, known as SHA3, that had won a recent design competition but had been altered by the institute after the competition ended. Cryptographers charged that NIST’s changes to the algorithm had weakened it.
The SHA3 announcement referred directly to cryptographers’ concerns. “We were and are comfortable with that version on technical grounds, but the feedback we’ve gotten indicates that a lot of the crypto community is not comfortable with it,” wrote John Kelsey, NIST’s representative. There is no evidence the NSA was involved in the decision to change the algorithm.
The reversal took Matthew Green, a cryptographer at Johns Hopkins University, by surprise. “NIST backed down! I’m not sure they would have done that a year ago,” he said.
Over the past several months, the Obama Administration has defended the government’s far-reaching data collection efforts, arguing that only criminals and terrorists need worry. The nation’s leading internet and telecommunications companies have said they are committed to the sanctity of their customers’ privacy.
I have some very personal reasons to doubt those assurances.
In 2004, my telephone records, as well as those of another New York Times reporter and two reporters from the Washington Post, were obtained by federal agents assigned to investigate a leak of classified information. What happened next says a lot about what happens when the government’s privacy protections collide with the day-to-day realities of global surveillance.
The story begins in 2003 when I wrote an article about the killing of two American teachers in West Papua, a remote region of Indonesia where Freeport-McMoRan operates one of the world’s largest copper and gold mines. The Indonesian government and Freeport blamed the killings on a separatist group, the Free Papua Movement, which had been fighting a low-level guerrilla war for several decades.
I opened my article with this sentence: “Bush Administration officials have determined that Indonesian soldiers carried out a deadly ambush that killed two American teachers.”
I also reported that two FBI agents had travelled to Indonesia to assist in the inquiry and quoted a “senior administration official” as saying there “was no question there was a military involvement.”
The story prompted a leak investigation. The FBI sought to obtain my phone records and those of Jane Perlez, the Times bureau chief in Indonesia and my wife. They also went after the records of the Washington Post reporters in Indonesia who had published the first reports about the Indonesian government’s involvement in the killings.
As part of its investigation, the FBI asked for help from what is described in a subsequent government report as an “on-site communications service” provider. The report, by the Department of Justice’s Inspector General, offers only the vaguest description of this key player, calling it “Company A.”
“We do not identify the specific companies because the identities of the specific providers who were under contract with the FBI for specific services are classified,” the report explained.
Whoever they were, Company A had some impressive powers. Through some means – the report is silent on how – Company A obtained records of calls made on Indonesian cell phones and landlines by the Times and Post reporters. The records showed whom we called, when and for how long — what has now become famous as “metadata.”
Under DOJ rules, the FBI investigators were required to ask the Attorney General to approve a grand jury subpoena before requesting records of reporters’ calls. But that’s not what happened.
Instead, the bureau sent Company A what is known as an “exigent letter” asking for the metadata.
A heavily redacted version of the DOJ report, released in 2010, noted that exigent letters are supposed to be used in extreme circumstances where there is no time to ask a judge to issue a subpoena. The report found nothing “exigent” in an investigation of several three-year-old newspaper stories.
The need for an exigent letter suggests two things about Company A. First, that it was an American firm subject to American laws. Second, that it had come to possess my records through lawful means and needed legal justification to turn them over to the government.
The report disclosed that the agents’ use of the exigent letter was choreographed by the company and the bureau. It said the FBI agent drafting the letter received “guidance” from “a Company A analyst.” According to the report, lawyers for Company A and the bureau worked together to develop the approach.
Not surprisingly, “Company A” quickly responded to the letter it helped write. In fact, it was particularly generous, supplying the FBI with records covering a 22-month period, even though the bureau’s investigation was limited to a seven-month period. Altogether, “Company A” gave the FBI metadata on 1,627 calls by me and the other reporters.
Only three calls were within the seven-month window of phone conversations investigators had decided to review.
It doesn’t end there.
The DOJ report asserts that “the FBI made no investigative use of the reporters’ telephone records.” But I don’t believe that is accurate.
In 2007, I heard rumblings that the leak investigation was focusing on a diplomat named Steve Mull, who was the deputy chief of mission in Indonesia at the time of the killings. I had known Mull when he was a political officer in Poland and I was posted there in the early 1990s. He is a person of great integrity and a dedicated public servant.
The DOJ asked to interview me. Of course, I would not agree to help law enforcement officials identify my anonymous sources. But I was troubled because I felt an honorable public servant had been forced to spend money on lawyers to fend off a charge that was untrue. After considerable internal debate, I decided to talk to the DOJ for the limited purpose of clearing Mull.
It was not a decision I could make unilaterally. The Times also had a stake in this. If I allowed myself to be interviewed, how could the Times say no the next time the government wanted to question a Times reporter about a leak?
The Times lawyer handling this was George Freeman, a journalist’s lawyer, a man Times reporters liked having in their corner. George and the DOJ lawyers began to negotiate over my interview. Eventually, we agreed that I would speak on two conditions: one, that they could not ask me for the name of my source; and two, if they asked me if it was ‘X,’ and I said no, they could not then start going through other names.
Freeman and I sat across a table from two DOJ lawyers. I’m a lawyer, and prided myself on being able to answer their questions with ease, never having to turn to Freeman for advice.
Until, that is, one of the lawyers took a sheaf of papers that were just off to his right and began asking me about phone calls I had made to Mull. One call was for 19 minutes, the DOJ lawyer said, giving me the date and time. I asked for a break to consult with Freeman.
We came back, and answered questions about the phone calls. I said that I couldn’t remember what these calls were about – it had been more than four years earlier – but that Mull had not given me any information about the killings. Per our agreement, the DOJ lawyers did not ask further questions about my sources, and the interview ended.
I didn’t know how the DOJ had gotten my phone records, but assumed the Indonesian government had provided them. Then, about a year later, I received a letter from the FBI’s general counsel, Valerie Caproni, who wrote that my phone records had been taken from “certain databases” under the authority of an “exigent letter” (a term I had never heard).
Caproni sent similar letters to Perlez, to the Washington Post reporters, and to the executive editors of the Post and the Times, Leonard Downie and Bill Keller, respectively. In addition, FBI Director Robert Mueller called Downie and Keller, according to the report.
Caproni wrote that the records had not been seen by anyone other than the agent requesting them and that they had been expunged from all databases.
I’m uneasy because the DOJ report makes clear that the FBI is still concealing some aspect of this incident. After describing Caproni’s letters, the report says: “However, the FBI did not disclose to the reporters or their editors that [BLACKED OUT].” The thick black lines obliterate what appear to be several sentences.
If you were to ask senior intelligence officials whether I should wonder about those deletions, they’d probably say no.
I’m not so sure.
The government learned extensive details about my personal and professional life. Most of those calls were about other stories I was writing. Some were undoubtedly to arrange my golf game with the Australian ambassador. Is he now under suspicion? The report says the data has been destroyed and that only two analysts ever looked at it.
But who is this “Company A” that willingly cooperated with the government? Why was it working hand in glove with the FBI? And what did the FBI director not tell the editors of the Times and the Washington Post when he called them to acknowledge that the government had improperly obtained reporters’ records?