from the left-hand,-meet-the-anonymous-right-hand dept
It's somewhat well known that the popular Tor anonymous browsing system gets a significant amount of funding from the US government. In the past, the suggestion had always been that the State Department was a major supporter because of its belief that Tor would help dissidents in other countries communicate via anonymous systems. Now, however, there's a lot of buzz because a bit of malware discovered this weekend targeting Tor users may have come directly from the FBI itself. The implication isn't against the Tor project at all; rather, it appears that whoever pushed out this malware did so by using a vulnerability targeting people using the Tor Browser Bundle -- a Firefox bundle that builds in Tor -- to browse a variety of hidden sites (available only to Tor users) hosted by the somewhat infamous Freedom Hosting. Freedom Hosting's boss, Eric Eoin Marques, was arrested in Ireland last week as the US tries to extradite him. But what was more interesting was what some people discovered on all Freedom Hosting pages:
Shortly after Marques' arrest last week, all of the hidden service sites hosted by Freedom Hosting began displaying a “Down for Maintenance” message. That included websites that had nothing to do with child pornography, such as the secure email provider TorMail.
By midday Sunday, the code was being circulated and dissected all over the net. Mozilla confirmed the code exploits a critical memory management vulnerability in Firefox that was publicly reported on June 25, and is fixed in the latest version of the browser.
Though many older revisions of Firefox are vulnerable to that bug, the malware only targets Firefox 17 ESR, the version of Firefox that forms the basis of the Tor Browser Bundle – the easiest, most user-friendly package for using the Tor anonymity network.
So why do people think the feds are involved? The bit of malware scoops up various identifying information -- the MAC address and Windows hostname -- and then sends it to a server in Virginia to find the real IP address of the computer in question. The Virginia server is controlled by the infamous contractor SAIC, which works with numerous government agencies.
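To make the privacy stakes concrete, here's a rough sketch (the server address is a placeholder, and the real malware's exact wire format is not public) of the kind of fingerprint such code gathers. Hostname and MAC address are local to the machine, so they identify the user even when every network connection is routed through Tor:

```python
import json
import socket
import uuid

def collect_fingerprint():
    """Gather the kind of identifying details the reported malware
    exfiltrated: the machine's hostname and its MAC address. Both
    survive NAT and proxying, so they can unmask a Tor user even
    though the traffic itself is anonymized."""
    mac = uuid.getnode()  # 48-bit MAC address as an integer
    mac_str = ":".join(f"{(mac >> shift) & 0xff:02x}"
                       for shift in range(40, -8, -8))
    return {"hostname": socket.gethostname(), "mac": mac_str}

# In the reported attack, the payload went out over a non-Tor HTTP
# request to a hard-coded address, revealing the victim's real IP.
# COLLECTOR_URL below is a documentation-reserved placeholder, not
# the actual server.
COLLECTOR_URL = "http://192.0.2.1/report"

if __name__ == "__main__":
    print(json.dumps(collect_fingerprint()))
```

The key point is the side channel: it doesn't matter how good Tor's routing is if code running on the endpoint can simply ask the operating system who the machine is.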
It's no secret that law enforcement has wanted to identify folks who are trying to be anonymous. And, as discussed just last week, the FBI has been using malware at an increasing rate. So it wouldn't be a huge surprise to find out that this tricky little bit of malware was designed to provide more info on Tor users who might be up to nefarious activity (or, you know, who might just want to surf anonymously). I imagine that this is not the end of this particular story...
from the burning-a-hole-in-taxpayers'-pockets dept
The cyber-Pearl Harbor is upon us and the only way to defeat it is to sink our own ships at the first sign of invasion. This is the sort of thing that happens when the legislators and advisors with the loudest voices value paranoia over rational strategy. The Department of Commerce, aided by a tragicomic string of errors, managed to almost stamp out its malware problem.
The Commerce Department's Economic Development Administration spent almost half of its IT budget last year to remediate a cyber attack that barely happened.
EDA's drastic steps to limit the damage by shutting down much of the access to the main Herbert Hoover Building network ended up costing the agency more than $2.7 million to clean up and reconfigure its network and computers. The IG said the bureau destroyed more than $170,000 in IT equipment, including desktop computers, printers, keyboards and mice.
Also included in the mass destruction were cameras and TVs. It wasn't just cyber-paranoia that led to this hardware cull. There was plenty of miscommunication too, along with the usual doses of bureaucratic clumsiness. The Inspector General's report breaks down the chain of missteps, which all began with a response team member grabbing the wrong network info.
In an effort to identify infected components, DOC CIRT’s (Dept. of Commerce Computer Incident Response Team) incident handler requested network logging information. However, the incident handler unknowingly requested the wrong network logging information... Instead of providing EDA a list of potentially infected components, the incident handler mistakenly provided EDA a list of 146 components within its network boundary. Accordingly, EDA believed it faced a substantial malware infection.
Yes. "Reply" and "Reply All" will both get the job done, but only one is the correct choice when firing off a devastating critique of your soon-to-be-former coworkers. The same goes for network logs. One shows you the correct info. The other "indicates" that more than half the EDA's computers are suffering from a malware infection.
DOC CIRT did try to get this fixed, pointing out the error to the handling team and re-running the analysis using the correct network log. Turns out, the original estimate was slightly off.
The HCHB network staff member then performed the appropriate analysis identifying only two components exhibiting the malicious behavior in US-CERT’s alert.
This new data in hand, a notification was sent out ostensibly to clear things up, but this too was mishandled so badly someone unfamiliar with bureaucratic ineptitude might be inclined to suspect sabotage.
DOC CIRT’s second incident notification did not clearly explain that the first incident notification was inaccurate. As a result, EDA continued to believe a widespread malware infection was affecting its systems.
Specifically, the second incident notification began by stating the information previously provided about the incident was correct. EDA interpreted the statement as confirmation of the first incident notification, when DOC CIRT’s incident handler simply meant to confirm EDA was the agency identified in US-CERT’s alert. Nowhere in the notification or attachment does the DOC CIRT incident handler identify that there was a mistake or change to the previously provided information.
Although the incident notification’s attachment correctly identified only 2 components exhibiting suspicious behavior—not the 146 components that DOC CIRT initially identified—the name of the second incident notification’s attachment exactly matched the first incident notification’s attachment, obscuring the clarification.
For five weeks, things went from bad to worse to comically tragic to tragically comic to full-scale computercide. Looking at its list (2 components), DOC CIRT asked the EDA to attempt containment by reimaging the infected items. Looking at its list (146 components), the EDA responded that reimaging half its devices would be "unfeasible." Taking a look at the EDA's list (from the first, mistaken network log analysis), DOC CIRT assumed the EDA had received additional analysis indicating the malware had spread, and changed its recommendations accordingly.
Finally, both departments were on the same (but entirely wrong) page and scaled up the response accordingly. A copy went to the DHS, stating that "over 50%" of the EDA's devices were infected. The DHS then accepted this without seeking independent confirmation. The NSA cranked out its own concerned report, quoting heavily from the DHS report (which was still in draft form), both of which were based on DOC CIRT's first erroneous report. This went undetected for over a year, until the OIG informed the involved agencies of its findings in December 2012.
The end result? The EDA and DOC CIRT worked together, attempting to head off a "severe" malware threat before it spread to other connected government computers. Despite gathering more information from outside consultants that indicated the malware was neither "persistent" nor a threat to migrate, the two agencies began destroying devices in May of 2012, finally stopping three months later when the "break stuff" budget had been exhausted.
Fortunately for the agencies, taxpayers and the surviving equipment (valued at over $3 million), the OIG's findings were brought to the agencies' attention before the new fiscal year began and a new "break stuff" budget was approved. All in all, the EDA spent over $2.7 million fighting a malware "infection" confined to two computers.
There's nothing in this report that makes the EDA look good. A chart on page 8 shows the EDA has persistently ignored the OIG's recommendations on agency computer security, with some assessments going back as far as 2006. It's no surprise that it (along with the Dept. of Commerce's response team) managed to transform a 2-computer infection into a nearly $3 million catastrophe.
As we've noted before, when it comes to the Internet, governments around the world have an unfortunate habit of copying each other's worst ideas. Thus the punitive three-strikes approach based on accusations, not proof, was pioneered by France, and then spread to the UK, South Korea, New Zealand and finally the US (where, naturally, it became the bigger and better "six strikes" scheme). France appears to be about to abandon this unworkable and ineffective approach, leaving other countries to deal with all the problems it has since discovered.
According to Article 350 of the proposed draft, prosecutors may ask the judge for "the installation of software that allows the remote examination, without the knowledge of the owner, of the contents of computers, electronic devices, computer systems, instruments of mass storage or databases."
The key concern raised for similar projects of other countries applies here too: intentionally placing malware on computers increases the risk that others will be able to take control of those systems thanks to vulnerabilities in the code. That's no theoretical issue, as evidenced by major flaws discovered in Germany's trojan software. But it turns out that Spain's proposed malware scheme has an additional bad idea:
Furthermore, Article 351 of the text explains that official agents may require cooperation from "anyone who knows the operation of the computer system or the measures applied to protect the data held there". This means that Spanish authorities might require services from experts, "hackers" or computer companies.
Clearly that could be applied to Google or Facebook, say, which might be forced to provide user passwords or maybe even actively cooperate in attempts to infect a user's system. Given the current revelations about Internet companies' complicity in spying on huge numbers of people around the world, there seems little reason to hope that they would refuse to do so, despite protestations to the contrary, even if they -- unlike the Spanish politicians proposing this law -- understood the extreme stupidity of this approach.
We already did a post exploring the ridiculous background and bad assumptions of the so-called IP Commission Report, but we're going to explore some of the "recommendations" of the report as well. In that first post, we noted that the basis, assumptions and methodology of the report were all highly problematic, so it should come as little surprise that the "recommendations" that come out of it are equally ridiculous.
Let's start with the one that has received the most attention: the fact that the report recommends a "hack back" legalization, to allow those who feel their (loosely defined) "intellectual property" has been infringed to "hack back" at those who infringe. As Lauren Weinstein summarizes, this proposal is more or less a plan to legalize malware against infringers. Of course, this kind of idea is not new or unique. It's been around for a while. Almost exactly ten years ago, Senator Orrin Hatch proposed giving copyright holders the right to destroy the computers of anyone infringing. The specifics here are explained over two "suggestions" that, when combined (hell, or even individually), are somewhat insane for anyone even remotely familiar with the nature of malware. First up, legalizing some basic spyware/malware:
Support efforts by American private entities both to identify and to recover or render inoperable intellectual property stolen through cyber means.

Some information or data developed by companies must remain exposed to the Internet and thus may not be physically isolated from it. In these cases, protection must be undertaken for the files themselves and not just the network, which always has the ability to be compromised. Companies should consider marking their electronic files through techniques such as “meta-tagging,” “beaconing,” and “watermarking.” Such tools allow for awareness of whether protected information has left an authorized network and can potentially identify the location of files in the event that they are stolen.

Additionally, software can be written that will allow only authorized users to open files containing valuable information. If an unauthorized person accesses the information, a range of actions might then occur. For example, the file could be rendered inaccessible and the unauthorized user’s computer could be locked down, with instructions on how to contact law enforcement to get the password needed to unlock the account. Such measures do not violate existing laws on the use of the Internet, yet they serve to blunt attacks and stabilize a cyber incident to provide both time and evidence for law enforcement to become involved.
Basically, malware/DRM-on-steroids. As if that will work. Anyone who has even a modicum of experience with DRM or watermarking knows that these things aren't difficult to get around, and that they're basically a huge waste of time and money for those who employ them. The idea that they might then lock down an entire computer if an incorrect file gets onto it seems even more ridiculous. Given how often DRM causes problems for legitimate users of the content, you can imagine the headaches (and potential lawsuits) this kind of thing would lead to. A complete mess for no real benefit.
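To see why this is so easy to defeat, consider what "watermarking" a file actually amounts to. Here's a minimal sketch -- the marker format and tagging scheme are invented for illustration, not taken from any real product -- that embeds and extracts an identifying tag in a file's bytes:

```python
import hashlib

# Hypothetical marker format -- any real scheme would differ, but the
# structure is the same: some recognizable bytes appended to the file.
MARK = b"--TRACKING-ID:"

def watermark(data, owner_id):
    """Append an identifying tag to a file's bytes -- roughly what the
    report's 'meta-tagging' suggestion boils down to."""
    tag = hashlib.sha256(owner_id.encode()).hexdigest()[:16]
    return data + MARK + tag.encode() + b"--"

def extract(data):
    """Recover the tag, if present; None if the file is unmarked."""
    idx = data.rfind(MARK)
    if idx == -1:
        return None
    return data[idx + len(MARK):].rstrip(b"-").decode()

def strip(data):
    """The attacker's one-liner: delete the mark entirely."""
    idx = data.rfind(MARK)
    return data if idx == -1 else data[:idx]
```

The last function is the whole criticism in three lines: anyone who knows (or guesses) the marker format removes it trivially, which is why schemes like this mostly end up inconveniencing legitimate users.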
So, then, they take it up a notch. If bad DRM/watermarking isn't enough, how about legalizing the pro-active hacking of infringers? No, seriously.
Reconcile necessary changes in the law with a changing technical environment.

When theft of valuable information, including intellectual property, occurs at network speed, sometimes merely containing a situation until law enforcement can become involved is not an entirely satisfactory course of action. While not currently permitted under U.S. law, there are increasing calls for creating a more permissive environment for active network defense that allows companies not only to stabilize a situation but to take further steps, including actively retrieving stolen information, altering it within the intruder’s networks, or even destroying the information within an unauthorized network. Additional measures go further, including photographing the hacker using his own system’s camera, implanting malware in the hacker’s network, or even physically disabling or destroying the hacker’s own computer or network.
Notice how that recommendation gets even more insane the further you read. "Retrieving" info? Okay. "Destroying info on an unauthorized network"? Yeah, could kinda see where someone not very knowledgeable about computers and networks thinks that's a good idea. "Photographing the hacker"? Well, that's going a bit far. "Implanting malware in the hacker’s network"? Say what now? "Physically disabling or destroying the hacker's own computer or network"? Are you people out of your minds?
This isn't just a bad idea, it's a monumentally dangerous idea that will have almost no benefit, but tremendously bad and dangerous consequences. Hell, today we already have to deal with a plethora of bogus DMCA takedown notices. Imagine if those morphed into bogus malware attacks or the destruction of computers. It makes you wonder how anyone could take anything in the study seriously when you read something like that.
To be fair, the authors of the report say they don't recommend legalizing this stuff yet, but immediately make it clear that something like this is going to need to happen in the future, because "the current situation is not sustainable." Based on what? Well, as we explained in the first post about this report, that's mostly based on the authors' overactive imaginations, rather than anything fact-based.
A spokesman for the Attorney-General's Department said it was proposing that ASIO be authorised to ''use a third party computer for the specific purpose of gaining access to a target computer''.
The problem seems to be that even suspected terrorists are getting the hang of this security stuff:
The department said technological advances had made it ''increasingly difficult'' for ASIO to execute search warrants directly on target computers, ''particularly where a person of interest is security conscious.''
So the idea seems to be to infect the computer of someone that the alleged terrorists know, and then use that trusted link to pass on malware:
Australians' personal computers might be used to send a malicious email with a virus attached, or to load ''malware'' onto a website frequently visited by the target.
That probably seemed like a really clever ruse to the people who thought it up, but it overlooks some basic flaws.
First, once ASIO has taken control of an intermediary's computer, it can do anything -- including poking around to see what's there. After all, if intermediaries are known to suspected terrorists, it's possible that they too might be terrorists.
The authorities are insisting that the warrant to break into somebody's computer would not authorize ASIO to obtain "intelligence material" from it. But you don't have to be clairvoyant to predict that at some point in the future, "exceptional" circumstances will be invoked to justify doing precisely that: once security services start down a slippery slope, they never seem to be able to stop.
Secondly, as the German experience shows, if a computer has been compromised by malware in this way, it's not just the government agencies that can take control: anyone who has obtained the malware and analyzed it will be able to look for ways to send their own instructions. That could leave innocent members of the public vulnerable to privacy breaches and economic losses that would be directly attributable to the spy agency's digital break-in.
Finally, this approach seems to overlook the fact that presumed terrorists are unlikely to be best pleased with any person that unwittingly sends them government malware. If they notice and really are ruthless terrorists, they might decide to take revenge on that person and his or her immediate circle of family and friends. Either the Australian spy agency hasn't really thought this through, or it is being extremely cavalier with the lives of the members of the public it is supposed to protect.
Transparency is worth having for itself, since governments often tend to behave a little better when they know that someone is watching. But occasionally, requests for data turn up something big and totally unexpected because someone failed to notice quite what the information provided implies.
The German ministry for home affairs and thus the German police clearly state that they are monitoring Skype, Google Mail, MSN Hotmail, Yahoo Mail and Facebook chat if deemed necessary. Money is spent on trojan viruses and we can be quite certain which company produces the IMSI catchers [used for "man-in-the-middle" attacks on mobile phones] used by German police.
It's been known for a year that the German police forces have been using malware to spy on citizens via their computers, but the latest revelations about surveillance activity go far beyond that. They confirm that even in countries where people are very sensitive about privacy, Internet snooping by the police is routine. They also emphasize, once more, the importance of encrypting your communication channels where possible, and avoiding those where it isn't.
It's become something of a cliché that anyone with a mobile phone is carrying a tracking device that provides detailed information about their location. But things are moving on, as researchers (and probably others as well) explore new ways to subvert increasingly common smartphones to gain other revealing data about their users. Here's a rather clever use of malware to turn your smartphone into a system for taking clandestine photos -- something we've seen before, of course, in other contexts -- but which then goes even further by stitching them together to form a pretty accurate 3D model of your world:
This paper introduces a novel visual malware called PlaceRaider, which allows remote attackers to engage in remote reconnaissance and what we call virtual theft. Through completely opportunistic use of the camera on the phone and other sensors, PlaceRaider constructs rich, three dimensional models of indoor environments.
The use of 3D reconstructions overcomes a potential problem with ordinary spyware: there's often too much data whose significance is unclear. That makes finding anything interesting hard. The solution here is to combine all the data into a unified, virtual reconstruction that can then be navigated by snoopers looking for significant items just as they might if they were rooting through your physical space.
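The paper's "on-board analysis" step is essentially a quality filter: most opportunistic frames are useless, so the malware discards them before transmitting anything. A toy version of the idea (the threshold and record format here are invented, not taken from the paper) might keep only frames captured while the phone was near-stationary, since those are the ones unlikely to be motion-blurred:

```python
import math

# Hypothetical threshold: how far (in units of g) the accelerometer
# magnitude may deviate from 1 g before we assume the phone was moving.
STILLNESS_G = 0.15

def is_still(accel_xyz):
    """A frame shot while the phone is near-stationary is likely sharp:
    with no motion, the accelerometer reads gravity alone, so its
    magnitude sits close to 1 g."""
    x, y, z = accel_xyz
    magnitude = math.sqrt(x * x + y * y + z * z)
    return abs(magnitude - 1.0) < STILLNESS_G

def select_frames(frames):
    """Keep only the images from (image_id, accel_xyz) pairs that were
    captured while the phone was still."""
    return [img for img, accel in frames if is_still(accel)]
```

Filtering on the device also shrinks the upload, which matters for malware trying to stay unnoticed on a metered mobile connection.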
The full academic paper "PlaceRaider: Virtual Theft in Physical Spaces with Smartphones" (pdf) makes for fascinating reading, even if it doesn't seem to understand the difference between "theft" and "surveillance". It includes the following rather fanciful description of how this 3D-spying capability might be used. It's rather over the top, but it gives an idea of what's theoretically possible:
Alice does not know that her Android phone is running a service, PlaceRaider, that records photos surreptitiously, along with orientation and acceleration sensor data. After on-board analysis, her phone parses the collected images and extracts those that seem to contain valuable information about her environment. At opportune moments, her phone discretely transmits a package of images to a remote PlaceRaider command and control server.
Upon receiving Alice's images, the PlaceRaider command and control server runs a computer vision algorithm to generate a rich 3D model. This model allows Mallory, the remote attacker, to immerse herself easily in Alice's environment. The fidelity of the model allows Mallory to see Alice's calendar, items on her desk surface and the layout of the room. Knowing that the desktop surface might yield valuable information, Mallory zooms into the images that generated the desktop and quickly finds a check that yields Alice's account and routing numbers along with her identity and home address. This provides immediate value. She also sees the wall calendar, noticing the dates that the family will be out of town, and ponders asking an associate who lives nearby to 'visit' the house while the family is away and 'borrow' the iMac that Mallory sees in Alice's office.
Well, maybe not. But what's more interesting is the way that smartphone malware is able to gather enough information to allow the detailed reconstruction of complex spaces. The paper includes some impressive 3D reconstructions from apparently random images that have been stitched together. These and the research project that produced them are a salutary reminder that useful as they are, smartphones also bring with them new dangers that need to be considered and, ultimately, addressed.
The American Enterprise Institute (AEI) recently held an event about cybersecurity and cybersecurity legislation. The keynote speech was from NSA boss General Keith Alexander. He of course talked about why he supports cybersecurity legislation, such as CISPA and other proposals that will make it easier for the NSA to access private content from service providers -- much of which, reports claim, they're already capturing and storing. Alexander has claimed that the NSA doesn't have "the ability" to spy on American emails and such, and reiterates that claim during the Q&A in this session, insisting that the Utah data center doesn't hold data on Americans' emails (and makes a joke about just how many emails that would be to read). That's nice for him to say, but many people with knowledge of the situation claim the opposite.
In a motion filed today, the three former intelligence analysts confirm that the NSA has, or is in the process of obtaining, the capability to seize and store most electronic communications passing through its U.S. intercept centers, such as the "secret room" at the AT&T facility in San Francisco first disclosed by retired AT&T technician Mark Klein in early 2006.
So it's interesting to pay attention to what Alexander has to say in pushing for cybersecurity legislation. You can watch the full video below, if you'd like:
Much of what he talks about online involves basic malware and hack attacks. These are definitely issues -- but are they issues that we need the military (which the NSA is a part of) to step in on? His big quotable line is that these attacks represent the "greatest transfer of wealth in history." That is a pretty broad statement, and there's almost no evidence to support it. He points to studies from Symantec and McAfee on the "costs" of dealing with security issues -- but remember, those are two of the biggest sellers of security software, and have every incentive in the world to inflate the so-called "costs." Also, seriously? The "greatest transfer of wealth in history"? Has he paid absolutely no attention to what's happened on Wall Street and the financial world over the past decade? Does anyone honestly believe that the amount of money "transferred" due to hack attacks is greater than the amount of money transferred due to dodgy financial deals and the mortgage/CDO mess? That doesn't pass the laugh test.
He does insist that worse attacks are coming, but provides no basis for that (or, again, why the NSA needs your info). In fact, according to a much more believable study, the real risks are not outside threats and hackers, but internal security screwups and disgruntled inside employees. None of that requires NSA help. At all.
But it sure makes for a convenient bogeyman to get new laws that take away privacy rights.
Alexander, recognizing the civil liberties audience he was talking to, admits that the NSA neither needs nor wants most personal info, such as emails, and repeatedly states that they need to protect civil liberties (though, in the section quoted below, you can also interpret his words to actually mean they don't care about civil liberties -- but that's almost certainly a misstatement on his part):
One of the things that we have to have then [in cybersecurity legislation], is if the critical infrastructure community is being attacked by something, we need them to tell us... at network speed. It doesn't require the government to read their mail -- or your mail -- to do that. It requires them -- the internet service provider or that company -- to tell us that that type of event is going on at this time. And it has to be at network speed if you're going to stop it.
It's like a missile, coming in to the United States.... there are two things you can do. We can take the "snail mail" approach and say "I saw a missile going overhead, looks like it's headed your way" and put a letter in the mail and say, "how'd that turn out?" Now, cyber is at the speed of light. I'm just saying that perhaps we ought to go a little faster. We probably don't want to use snail mail. Maybe we could do this in real time. And come up with a construct that you and the American people know that we're not looking at civil liberties and privacy, but we're actually trying to figure out when the nation is under attack and what we need to do about it.
Nice thing about cyber is that everything you do in cyber, you can audit. With 100% reliability. Seems to be there's a great approach there.
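That "100% reliability" claim is worth unpacking, because auditability is a property you have to engineer, not something "cyber" gives you for free. One common building block is a hash-chained, append-only log; here's a minimal sketch (the record format is invented for illustration):

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry
    so that altering any earlier record breaks every later link."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every link; returns True only if nothing was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

Even with a mechanism like this, "with 100% reliability" overstates things: a hash chain only detects tampering after the fact, and only if the verifier's copy of the chain head is itself trustworthy -- which is exactly the part that depends on who controls the auditor.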
Now all that's interesting, because if that's true, then why is he supporting legislation that would override any privacy rules that protect such info? If he really only needs limited information sharing, then why isn't he in favor of more limited legislation that includes specific privacy protections for that kind of information? He goes back to insisting they don't care about this info later on in the talk, but never explains why he doesn't support legislation that continues to protect the privacy of such things:
The key thing in information sharing that gets, I think, misunderstood, is that when we talk about information sharing, we're not talking about taking our personal emails and giving those to the government.
So make that explicit. Rather than supporting cybersecurity legislation that wipes out all privacy protections, why not highlight what kind of information sharing is blocked right now and why it's blocked? Is it because of ECPA regulations? Something else? What's the specific problem? Talking about bogeymen hackers and malicious actors makes for a good Hollywood script, but there's little evidence to support the idea that it's a real threat here -- and in response, Alexander is asking us all to basically wipe out all such privacy protections... because he insists that the NSA doesn't want that kind of info. And, oh yeah, this comes at the same time that three separate whistleblowers -- former NSA employees -- claim that the NSA is getting exactly that info already.
So, this speech is difficult to square up with that reality. If he really believes what he's saying, then why not (1) clearly identify the current regulatory hurdles to information sharing, (2) support legislation that merely amends those regulations and is limited to just those regulations and (3) support much broader privacy protections for the personal info that he insists isn't needed? It seems like a pretty straightforward question... though one I doubt we'll get an answer to. Ever. At least not before cybersecurity legislation gets passed.
Hacker: What are you doing? Why are you researching my Trojan?
Hacker: What do you want from it?
The AVG folks continued to chat with the guy for a little while, which is how they realized just how powerful the trojan was and how much it could do. The guy controlling it demonstrated this to them by remotely shutting down their machine after talking to them for a little while.
We're often told that the big media companies need to be saved because of all the important expensive reporting work they do. And then we see something absolutely ridiculous, such as Fox News linking the infamous Flame malware to Angry Birds... because both use the Lua computing language (found via Slashdot):
This is, of course, a completely pointless linkage, which serves no purpose whatsoever, other than (perhaps) to attract the attention of those who are obsessed with Angry Birds (an admittedly large group of people). Just because two programs are written in the same language, it doesn't mean... well, it doesn't mean anything of importance whatsoever. Instead, it just seems that Fox News and its "Chief Intelligence Correspondent" Catherine Herridge needed to fill some space and came up with something entirely pointless. But, you know, we need those big professional news companies because of deep, hard-hitting stories like this one.