Techdirt has noted the increasing demonization of hackers (not to be confused with crackers, who break into systems for criminal purposes), for example by trying to add an extra layer of punishment to other crimes if they were committed "on a computer." High-profile victims of this approach include Bradley Manning, Aaron Swartz, Jeremy Hammond, Barrett Brown and, of course, Edward Snowden.
But as this Reuters story reports, that crass attempt to intimidate an entire community, lest anyone in it use computers to embarrass the US government or reveal its wrongdoing, is now starting to backfire:
The U.S. government's efforts to recruit talented hackers could suffer from the recent revelations about its vast domestic surveillance programs, as many private researchers express disillusionment with the National Security Agency.
Though hackers tend to be anti-establishment by nature, the NSA and other intelligence agencies had made major inroads in recent years in hiring some of the best and brightest, and paying for information on software flaws that help them gain access to target computers and phones.
Much of that goodwill has been erased after the NSA's classified programs to monitor phone records and Internet activity were exposed by former NSA contractor Edward Snowden, according to prominent hackers and cyber experts.
The article goes on:
Closest to home for many hackers are the government's aggressive prosecutions under the Computer Fraud and Abuse Act, which has been used against Internet activist Aaron Swartz, who committed suicide in January, and U.S. soldier Bradley Manning, who leaked classified files to anti-secrecy website WikiLeaks.
A letter circulating at Def Con and signed by some of the most prominent academics in computer security said the law was chilling research in the public interest by allowing prosecutors and victim companies to argue that violations of electronic "terms of service" constitute unauthorized intrusions.
This latest development also exposes a paradox at the heart of the NSA's spying program. Such total surveillance -- things like GCHQ's "Tempora" that essentially downloads and stores all Internet traffic for a while -- is only possible thanks to advances in digital technology. Much of the most innovative work there is being done by hackers -- it's significant that the NSA's massive XKeyscore program runs on a Linux cluster. But as the NSA is now finding out, those same hackers are increasingly angry with the legal assault on both them and their basic freedoms. That may make it much harder to keep up the pace of technological development within the spying program in the future unless the US government takes steps to address hackers' concerns -- something that seems unlikely.
Remember Jacques Nazaire? He's Prenda's local counsel in a case in Georgia, where he has been trying desperately to get the judge to ignore Judge Wright's order in California, which lays out how Prenda's lawsuits are highly questionable and likely against the law. He was so desperate that he argued the judge should ignore the California ruling because California recognizes gay marriage, among other differences, despite that having nothing to do with the actual case (which concerns federal copyright law rather than state law, and in which Wright's order was filed merely to provide background, not as any sort of binding ruling).
Well, it appears that Nazaire believes that if he just keeps telling the court crazier and crazier things, perhaps it will ignore Judge Wright's ruling. His latest filing tries, once again, to give the judge in Georgia a reason to ignore that ruling, but again it doesn't make much sense. The filing is rambling and somewhat wacky, seemingly arguing that, even though Prenda and AF Holdings are implicated in both cases, the cases are completely and totally unrelated. He also seems to argue that the defense's filings are just designed to rack up higher billing fees. Note, for example, the slightly paranoid use of capital letters:
That motion was NOT written by the undersigned; nevertheless the defense has filed it in THIS docket apparently for two reasons. 1) to bill for the same and 2) to give THIS Court the impression that either the undersigned or a friend of his drafted and filed the same.
But where it gets really wacky is when Nazaire just starts tossing in totally random claims about hackers:
Why would the defendant in this case file a copy of a motion (ECF No. 31, Defendant's Exhibit B) from the California case and into THIS docket when that motion has nothing to do with this case?
The undersigned does not know the answer to that question. However, it must be noted that defendants (not the one herein) in these types of cases, typically employ various crafty and intimidating schemes against prosecutors and plaintiff's attorneys. A newspaper article mentioning other types of intimidation is attached hereto as Plaintiff's Exhibit A.
What is Exhibit A, you ask? Why, it's a random story about hackers claiming to be part of Anonymous hacking into PayPal. What does that have to do with anything? The answer is: nothing.
Here's what I find most incredible about Nazaire's line of reasoning. It is basically "please ignore this other case where the same companies that I'm working for have been called out for fraud on the court, because that's totally unrelated, even though they're the same companies" while at the same time saying "we can't trust anything the defense says because, hackers! And, as proof, here's a random totally unrelated story about hackers."
He goes on to suggest that these hackers are after him, because some moron sent him a stupid email.
Furthermore the undersigned has been personally harassed by these types of defendants (not the defendant in this instant case nor the individuals listed in Exhibit A) because of THIS case alone. (Please see Plaintiff's Exhibit B attached hereto).
Exhibit B is a silly email from someone using the email address "firstname.lastname@example.org" saying:
You are about it get justifiably screwed by the justice system.
It's nice to see.
You aren't very smart, are you?
Of course, this is a stupid email by whoever sent it, but it's hard to see how that's necessarily "harassment," nor does it show that the person who sent that email is one of "these types of defendants." It's just a stupid email from someone mocking Nazaire (the email address should have been a giveaway on that front).
Either way, if I'm the judge in this case, each of these filings only makes me more interested in whatever must be in Judge Wright's order...
from the do-these-people-even-listen-to-themselves? dept
Last week, we talked about concerns that various cybersecurity provisions would allow those hit by malicious hackers to "hack back" or, as some call it, engage in an "active defense." There were significant concerns about this, but as Marvin Ammori briefly mentioned in last week's favorites post, Rep. Louis Gohmert seems to think not only that hacking back is a good idea, but that it should be explicitly allowed under the CFAA (Computer Fraud and Abuse Act). You can see his explicit statements to this effect below, from last week's House Judiciary Committee hearing on the CFAA. It appears he heard a story about someone installing malware on a hacker's computer to get a photograph of them, and decided "that's a good thing, that helps you get at the bad guys," without ever thinking of the very, very long list of dangerous consequences of allowing such things:
Here's the basic transcript. The really crazy part is where Gohmert says he doesn't care as long as the hack back is "destroying that hacker's computer."
Rep. Gohmert: It's my understanding that under 18 USC 1030 that it is a criminal violation of law to do anything that helps take control of another computer, even for a moment. Is that your understanding?
Orin Kerr: It depends exactly what you mean by "taking control." If "taking control" includes gaining access to the computer, accessing a network you're not supposed to take control of, then yes, that would clearly be prohibited by the statute.
Rep. Gohmert: For example, my understanding is that there was a recent example where someone had inserted malware on their own computer, such that when their computer was hacked and the data downloaded, it took the malware into the hacker's computer, such that when it was activated, it allowed the person whose computer was hacked to get a picture of the person looking at the screen. So they had the person who did the hacking, and actually did damage to all the data in the computer. Now, some of us would think 'that's terrific, that helps you get at the bad guys.' But my understanding is that since that allowed the hackee to momentarily take over the computer and destroy information in that computer and to see who was using that computer, then actually that person would have been in violation of 18 USC 1030. So I'm wondering if one of the potential helps or solutions for us would be to amend 18 USC 1030 to make an exception such that if the malware or software that allows someone to take over a computer is taking over a hacker's computer, that it's not a violation. Perhaps it would be like for what we do for assaultive offenses, you have a self-defense. If this is a part of a self-defense protection system, then it would be a defense that you violated 1030. Anybody see any problems with helping people by amending our criminal code to allow such exceptions or have any suggestions along these lines?
Orin Kerr: Mr. Gohmert, that's a great question that is very much debated in computer security circles. Because, from what I hear, there is a lot of this "hacking back," as they refer to it. But at least under current law, it is mostly illegal to do that.... The real difficulty is in the details. In what circumstances do you allow someone to counterhack? How broadly are they allowed to counterhack? How far can they go? The difficulty, I think, is that once you open that door as a matter of law, it's something that can be difficult to cabin. So I think if there is such an exception, it should be quite a narrow one, to keep it from becoming the sort of exception that swallows the rule.
Rep. Gohmert: Well, I'm not sure that I would care if it destroyed a hacker's computer completely. As long as it was confined to that hacker. Are you saying we need to afford the hacker protection so we don't hurt him too bad?
Orin Kerr: (brief confounded look on his face) Uh... no. The difficulty is that you don't know who the hacker is. So it might be that you think the hacker is one person, but their routing communications... Let's say, you think you're being hacked by a French company, or even a company in the United States...
Rep. Gohmert: Oh and it might be the United States Government! And we don't want to hurt them if they're snooping on our people. Is that...?
Orin Kerr: No.
Rep. Gohmert: I don't understand why you're wanting to be protective of the hacker.
Orin Kerr: The difficulty is first, identifying who is the hacker. You don't know when someone's intruding into your network who's behind it. So all you'll know is that there's an IP address that seems to go back to a specific computer. But you won't know who it is who's behind the attack. That's the difficulty.
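Kerr's point about attribution can be sketched in a few lines. This is a purely hypothetical illustration (the IP addresses come from reserved documentation ranges, and the relay chain is invented): when attack traffic is bounced through compromised machines, the victim's logs only ever record the last hop, not the attacker.

```python
# Hypothetical sketch of why an observed source IP rarely identifies
# the actual attacker: traffic relayed through compromised machines
# reaches the victim bearing only the final relay's address.

attacker = "203.0.113.9"                       # real origin (documentation-range IP)
relay_chain = ["198.51.100.4", "192.0.2.77"]   # compromised intermediaries

def observed_source(origin: str, relays: list[str]) -> str:
    """The victim's logs record only the final hop in the chain."""
    path = [origin] + relays
    return path[-1]

seen = observed_source(attacker, relay_chain)
print(seen)               # the last relay's IP, not the attacker's
print(seen == attacker)   # False: the logs alone don't identify the attacker
```

A "hack back" aimed at `seen` would hit the owner of the last compromised machine, which is exactly the problem Kerr is describing.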
First off, kudos to Orin Kerr for keeping a (mostly) straight face through that exchange. There are many amazing things about it, but the fact that Rep. Gohmert is one of the people in charge of how the CFAA gets reformed, yet doesn't understand these very basic concepts, is immensely troubling. Among the headsmackers in that exchange: the idea that hackers are bad -- and not just partially bad, but apparently obviously and totally bad, like out of a movie. Also: that they're somehow easy to identify, and that a freebie on hackbacks wouldn't be abused in amazing ways. Further, even as Kerr pretty clearly points out that you can't automatically trace an attack back (and, without saying so directly, clearly implies that hackers would likely shield their identities or spoof someone else's), Gohmert still doesn't get it and somehow thinks Kerr is saying we don't want to allow hackbacks on US government snooping (which, again, Gohmert seems to have no problem with). Yikes. Please do not let people like this near laws that have anything to do with computers. To me, this level of misunderstanding is worse than Senator Stevens' whole "series of tubes" garbage from a few years back.
I'm sorry, but if you can't understand that there is no magic list that says "these hackers are bad, and therefore we should destroy their computers," you shouldn't have any role in making laws on this topic.
We recently wrote about a trio of situations -- all involving young hackers probing for information, leading to criminal charges or threats of criminal charges against them -- that show what happens when people in power don't understand how technology works. In each case, the individuals involved may have done things some would consider inconsiderate, but that hardly rise to the level of "criminal" behavior -- especially with threats of many years in jail. Presenting the flipside of that argument: the editorialists at the Toronto Globe and Mail, who show why those who don't understand technology have no business writing about it. The editorial is headlined When did it become wrong to punish hackers?, which already suggests problem number one. "Hacker" is a generic term that does not automatically imply malicious attacks, yet the Globe and Mail immediately seems to assume otherwise. That might be news to the US government, which just announced its own National Day for Civic Hacking (despite filing charges against such civic hackers...).
A Montreal school is being widely criticized for expelling a student who hacked into its computer system and helped expose flaws in the system’s security. The student now has been offered jobs by computer security companies, including the one that ran the system he hacked into. In the Internet age, the hacker is celebrated as a hero and the school is pilloried for being an overbearing, defensive holdover from a bygone age. It’s an unfair presumption that needs to be corrected.
That's one version of the story. The hacker is celebrated as a hero because he did something useful: he exposed a security flaw that someone malicious could have exploited for nefarious purposes. We generally want to celebrate those who spot danger and warn people away from it. And the school is being pilloried because it expelled this person. Without Ahmed Al-Khabaz's help, students' data would have remained at risk. Doesn't it seem somewhat overbearing to blame the messenger? What exactly is "unfair" about the presumption? After pointing out that Al-Khabaz "discovered a serious flaw," the editorial still supports his expulsion, apparently based entirely on the fact that the company, Skytech, felt his probing was an attack:
... Mr. Al-Khabaz then went on and carried out what the company considered to be a “cyber-attack” on the school’s production servers. The company notified the school, and Mr. Al-Khabaz was hauled on the carpet. The company accepted the student’s explanation and noted that he “demonstrated great talent in computer science.” They dropped the matter and offered Mr. Al-Khabaz a job, but Dawson’s administrators felt the student had gone too far and expelled him on the grounds he had violated the college’s code of conduct.
What the company considered a "cyber-attack" could also be described as "checking to see if the flaw was fixed." And clearly the company didn't think it was a huge problem, since it offered him a job and noted his "great talent." So why does the school still think he went too far?
Dawson’s officials are right: Rules exist for a reason, and students cannot expect to break them without consequence. Why have them, otherwise?
Ahhhhhh. Rules are rules. Rules exist for reasons, but sometimes those reasons are bad. And punishing people for breaking rules in ways that help people seems like sending the exact wrong message. Sometimes rules should be broken, because the rules are wrong.
The editorial then moves on to Aaron Swartz:
Swartz, who had a history of depression, was facing a slew of charges for allegedly downloading publicly funded academic journals from a large database that charged a fee for access. His family and supporters blame overzealous prosecutors for his death; the prosecutors insist – again, quite rightly – that “stealing is stealing.”
Uh, "stealing is stealing" is a tautology, so of course it's right. But what's "wrong" is arguing that what Swartz did was "stealing." He stole nothing. He downloaded papers from MIT's open network, which was set up with a site license from JSTOR allowing open downloading of those journal articles, all of which remained on the site for anyone else to download.
Go ahead, explain what was "stolen"?
In the age of the Internet, the massive downloading for free of music and movies and other copyrighted material has muddied the waters for many people.
It seems to have "muddied the waters" for the editorial writers of the Toronto Globe and Mail who don't seem to realize that neither case had anything to do with the "massive downloading for free of music and movies."
They seem to have forgotten that privacy rights and copyright laws are among the foundations of our economy. These are things that are not to be shoved aside by the absolutism of Internet activism.
Oh really? If privacy rights are the foundation of the economy, then, er, isn't it a good thing that Al-Khabaz alerted officials to a hole that exposed students' private info? He did nothing to compromise anyone's privacy rights at all. Similarly, Aaron Swartz did not violate any copyright law, and he was not charged with copyright law violations.
So, seriously, how does a huge mainstream publication like the Globe and Mail get away with writing a piece of garbage this ridiculous? It claims things that simply aren't true, completely flips reality around, and then wraps it all up in some bizarre "rules are rules" argument that makes no sense, since the rules it says people violated... weren't even violated.
And the Globe and Mail thinks people should pay its meter to access this kind of crap?
from the we're-from-the-public-and-we're-here-to-tell-you-to-leave-us-alone dept
We keep hearing US government officials tell us fanciful stories about why we need cybersecurity legislation that paves the way for the government to get access to private information, but the arguments never make much sense. There are vague claims of threats that really seem more like garden variety hackers, and then there are the completely made up threats that are pulled right from Hollywood scripts -- like the claims that an online attack will lead to planes colliding.
A new survey suggests that the public just isn't buying it: 63% of those polled are worried about the impact on privacy and civil liberties of greater information sharing with the government. So for all the talk about "bipartisan" support for doing something here, it's not clear there's really American public support for this kind of thing.
The American Enterprise Institute (AEI) recently held an event about cybersecurity and cybersecurity legislation. The keynote speech was from NSA boss General Keith Alexander. He, of course, talked about why he supports cybersecurity legislation, such as CISPA and other proposals that would make it easier for the NSA to access private content from service providers -- much of which, reports claim, the agency is already capturing and storing. Alexander has claimed that the NSA doesn't have "the ability" to spy on American emails and such, and he reiterated that claim during the Q&A in this session, insisting that the Utah data center doesn't hold data on Americans' emails (and joking about just how many emails that would be to read). That's nice for him to say, but many people with knowledge of the situation claim the opposite.
In a motion filed today, the three former intelligence analysts confirm that the NSA has, or is in the process of obtaining, the capability to seize and store most electronic communications passing through its U.S. intercept centers, such as the "secret room" at the AT&T facility in San Francisco first disclosed by retired AT&T technician Mark Klein in early 2006.
So it's interesting to pay attention to what Alexander has to say in pushing for cybersecurity legislation. You can watch the full video below, if you'd like:
Much of what he talks about involves basic malware and hack attacks. These are definitely issues -- but are they issues that require the military (of which the NSA is a part) to step in? His quotable line is that these attacks represent the "greatest transfer of wealth in history." That is a pretty broad statement, and there's almost no evidence to support it. He points to studies from Symantec and McAfee on the "costs" of dealing with security issues -- but remember, those are two of the biggest sellers of security software, with every incentive in the world to inflate the so-called "costs." Also, seriously? The "greatest transfer of wealth in history"? Has he paid absolutely no attention to what's happened on Wall Street and in the financial world over the past decade? Does anyone honestly believe that the amount of money "transferred" due to hack attacks is greater than the amount transferred due to dodgy financial deals and the mortgage/CDO mess? That doesn't pass the laugh test.
He does insist that worse attacks are coming, but provides no basis for that (or, again, for why the NSA needs your info). In fact, according to a much more believable study, the real risks are not outside threats and hackers, but internal security screwups and disgruntled insiders. None of that requires NSA help. At all.
But it sure makes for a convenient bogeyman to get new laws that take away privacy rights.
Alexander, recognizing the civil liberties audience he was talking to, admits that the NSA neither needs nor wants most personal info, such as emails, and repeatedly states that they need to protect civil liberties (though, in the section quoted below, you can also interpret his words to actually mean they don't care about civil liberties -- but that's almost certainly a misstatement on his part):
One of the things that we have to have then [in cybersecurity legislation], is if the critical infrastructure community is being attacked by something, we need them to tell us... at network speed. It doesn't require the government to read their mail -- or your mail -- to do that. It requires them -- the internet service provider or that company -- to tell us that that type of event is going on at this time. And it has to be at network speed if you're going to stop it.
It's like a missile, coming in to the United States.... there are two things you can do. We can take the "snail mail" approach and say "I saw a missile going overhead, looks like it's headed your way" and put a letter in the mail and say, "how'd that turn out?" Now, cyber is at the speed of light. I'm just saying that perhaps we ought to go a little faster. We probably don't want to use snail mail. Maybe we could do this in real time. And come up with a construct that you and the American people know that we're not looking at civil liberties and privacy, but we're actually trying to figure out when the nation is under attack and what we need to do about it.
Nice thing about cyber is that everything you do in cyber, you can audit. With 100% reliability. Seems to be there's a great approach there.
Now all that's interesting, because if that's true, then why is he supporting legislation that would override any privacy rules that protect such info? If he really only needs limited information sharing, then why isn't he in favor of more limited legislation that includes specific privacy protections for that kind of information? He goes back to insisting they don't care about this info later on in the talk, but never explains why he doesn't support legislation that continues to protect the privacy of such things:
The key thing in information sharing that gets, I think, misunderstood, is that when we talk about information sharing, we're not talking about taking our personal emails and giving those to the government.
So make that explicit. Rather than supporting cybersecurity legislation that wipes out all privacy protections, why not highlight what kind of information sharing is blocked right now, and why it's blocked? Is it because of ECPA regulations? Something else? What's the specific problem? Talking about bogeymen hackers and malicious actors makes for a good Hollywood script, but there's little evidence to support the idea that it's a real threat here -- and in response, Alexander is asking us all to basically wipe out all such privacy protections... because he insists that the NSA doesn't want that kind of info. And, oh yeah, this comes at the same time that three separate whistleblowers -- former NSA employees -- claim that the NSA is getting exactly that info already.
So, this speech is difficult to square up with that reality. If he really believes what he's saying, then why not (1) clearly identify the current regulatory hurdles to information sharing, (2) support legislation that merely amends those regulations and is limited to just those regulations and (3) support much broader privacy protections for the personal info that he insists isn't needed? It seems like a pretty straightforward question... though one I doubt we'll get an answer to. Ever. At least not before cybersecurity legislation gets passed.
Richard Clarke, the former cybersecurity czar in the White House -- and a huge, huge, huge proponent of greater legislation for spying on Americans under the guise of "cybersecurity" (it used to be "cyberwar," but that term was so laughable it's been downgraded to "cybersecurity") -- has written one of the most ridiculous defenses of new internet spying proposals, claiming that Chinese hackers are stealing all our intellectual property by hacking into computers online. He has no evidence of this. He tells apocryphal stories of Chinese hackers somehow getting all the data from a "$1 billion research program copied by hackers in one night." The whole thing is fear-mongering in the extreme, using the specter of evil "Chinese pirates" hacking computers and stealing important US intellectual property. That's wrong for a variety of reasons we've discussed multiple times. But where it gets downright silly is in his assertions that (1) the US could magically "stop" these mythical hackers from "stealing" data, and (2) Homeland Security already has the authority to spy on all internet traffic as it crosses the border:
If given the proper authorization, the United States government could stop files in the process of being stolen from getting to the Chinese hackers. If government agencies were authorized to create a major program to grab stolen data leaving the country, they could drastically reduce today’s wholesale theft of American corporate secrets.
Under Customs authority, the Department of Homeland Security could inspect what enters and exits the United States in cyberspace. Customs already looks online for child pornography crossing our virtual borders. And under the Intelligence Act, the president could issue a finding that would authorize agencies to scan Internet traffic outside the United States and seize sensitive files stolen from within our borders.
And this does not have to endanger citizens’ privacy rights. Indeed, Mr. Obama could build in protections like appointing an empowered privacy advocate who could stop abuses or any activity that went beyond halting the theft of important files.
Almost everything stated above is ridiculous. As law professor James Grimmelman points out, with this article "Richard Clarke disqualifies himself from participating in any serious discussion of cybersecurity."
Indeed. It's scary to think that Clarke was ever seen as an expert in cybersecurity. He seems to be under the assumption that the internet really is a series of tubes, in which customs agents can simply stop all that data at the border and inspect it. And the idea that appointing a single "privacy advocate" would magically stop abuses? You'd think he just stepped off the turnip truck, rather than having spent many years in government where privacy was regularly abused, despite much more significant safeguards in place. Who does he think he's kidding?
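To make the "tubes" problem concrete: data crossing a border link is just bytes, and even trivial encryption defeats the kind of keyword inspection Clarke imagines. Here's a toy sketch (the XOR cipher is a deliberately weak stand-in for the TLS or AES encryption real exfiltration would use, and the "secret" is invented):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- a stand-in for real encryption like TLS/AES.
    XOR is symmetric, so the same function encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"TRADE SECRET: turbine blade alloy spec"
wire_bytes = xor_cipher(secret, b"k3y")   # what a border scanner would see

# Keyword inspection at the "border" finds nothing...
print(b"TRADE SECRET" in wire_bytes)              # False
# ...while the intended recipient recovers the data trivially.
print(xor_cipher(wire_bytes, b"k3y") == secret)   # True
```

If a throwaway XOR loop is enough to blind a keyword scanner, properly encrypted traffic certainly is -- which is why "customs inspection in cyberspace" doesn't survive contact with how networks actually work.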
Will we ever have people driving policy discussions on regulating the internet who actually understand the internet?
from the CIP-#1,831-for-why-the-internet-is-scary dept
Perhaps no single "demographic" is more misunderstood (and feared -- especially post-SOPA debacle) by Hollywood than "The Hacker." In the hands of the movie machine, hackers are portrayed as fast-talking (and fast-typing) young men (and very occasionally, women) with unfortunate hairdos, huddled around multiple screens making use of thoroughly impractical GUIs, all the while spouting a confounding mixture of instantly-outdated slang and acronyms.
Maybe Hollywood uses this creative license to keep its fears at bay. It's got IT departments full of young men (and women) with unfortunate hairdos to handle anyone trying to DDOS its kilobytes, allowing it to breathe easy and sleep the deep sleep of the blissfully unaware. To confront the fact that anyone with half-decent social engineering skills could talk them and their underlings out of sensitive information is probably way too alarming.
from the kind-of-thing-an-idiot-would-have-on-his-luggage dept
Well, this is rather incredible. With the news that Anonymous hacked the offices of the Syrian President and dumped a ton of emails online... comes the news that the hack was insanely easy. Why? Because, apparently, the password was 12345. No joke. Of course, that's considered one of the worst passwords of all time. And, as pointed out by Lauren Weinstein, this is the exact same password that was immortalized by Dark Helmet (the original one, rather than our local Techdirt hero) as being the stupidest password he's ever heard -- and the "kind of thing an idiot would have on his luggage!"
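For what it's worth, even the most basic password hygiene check would have caught this. A minimal sketch (the blacklist here is a tiny illustrative sample; real systems check candidate passwords against corpora of millions of leaked passwords):

```python
# Minimal password sanity check: reject anything on a common-password
# blacklist or anything trivially short. The set below is a small
# illustrative sample, not a real breach corpus.

COMMON_PASSWORDS = {"12345", "123456", "password", "qwerty", "letmein"}

def is_terrible_password(password: str) -> bool:
    """True if the password is blacklisted or shorter than 8 characters."""
    return password.lower() in COMMON_PASSWORDS or len(password) < 8

print(is_terrible_password("12345"))    # True -- the luggage combination
print(is_terrible_password("correct horse battery staple"))  # False
```

A check like this takes minutes to wire into a login or password-change flow, which makes its absence from a head of state's email system all the more remarkable.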
It's been pointed out over and over again that censoring the internet is no way to deal with things like copyright infringement -- and that people will always figure out ways to route around such censorship. That's why it's interesting to hear that some folks at the famed Chaos Communication Congress in Berlin last week outlined some plans to set up their own satellite system for routing around internet censorship around the globe. And... a key reason given for why this is needed? SOPA, of course:
He cited the proposed Stop Online Piracy Act (Sopa) in the United States as an example of the kind of threat facing online freedom. If passed, the act would allow for some sites to be blocked on copyright grounds.
They're obviously a long way from this, but amateurs' ability to build and launch their own satellites has been growing, and that's only going to accelerate. On top of that, efforts like SOPA and other censorship pushes around the globe are giving folks who believe in freedom of speech and civil liberties more urgency to figure out ways to decentralize and move away from systems that governments can control.
We've noted in the past couple of years that a few big events have started to call attention to the parts of the network that are centralized and vulnerable to censorship -- and that's resulted in numerous efforts to decentralize those elements and make them censorship-proof. These projects won't all work (some will certainly fail miserably), but as more and more people realize that censor-proof systems are needed, they will get created.