Rich Kulawiec's Techdirt Profile

Posted on Techdirt - 11 October 2013 @ 09:46am

How The Dream Of Spying More On The Public With Cameras Will Likely Decrease Public Safety

Marcus Ranum wrote “Information security’s response to bitter failure, in any area of endeavour, is to try the same thing that didn’t work — only harder.” It seems that this often applies to the entire security field, not just IT. Here’s a timely example.

There have been calls, in the wake of April’s bombing at the Boston Marathon, for increased surveillance of Americans — already, arguably, the most-surveilled and most spied-on citizens on the planet, to such an extent that ex-Stasi staff are likely envious. In particular, there have been calls for mass (camera) surveillance from police department officials in Boston and New York City.

These recommendations clearly raise serious issues about privacy and the Constitution and the values we hold as a society. Others have written about those issues more eloquently than I can. But let me break from their approach and point out something on a much more pragmatic level:

It didn’t work.

Let me ask you to consider for a moment the Boston Marathon and all the video/still cameras that were focused on it, the ones whose images were in front of the nation nonstop for days. Anyone who’s run in or been to a major distance running event knows that there are cameras everywhere. There are race operation cameras at the start and finish. There are TV news cameras, all over the course — some fixed, some mobile. There are family/friends of runners and other spectators, concentrated at the start and finish, but scattered everywhere along the course, and nearly all of them have cameras. There are official and unofficial race photographers in multiple locations who try to grab still shots of every runner and then offer them for sale afterwards. There are even some runners wearing cameras from time to time. And then of course there are all the now-ubiquitous cameras on stores, banks, parking garages, traffic signs, and on all kinds of other structures along the way.

We don’t know why those responsible for the attack in Boston did it; but what we do know is that the attack required a modicum of planning and intelligence: they weren’t entirely stupid. I submit that there is no possible way that they did not know that the finish area of a major marathon is one of the most heavily-photographed areas of the planet on the day of the event. Yet they not only selected it as their target, they made no attempt at all to evade the massive number of lenses focused on it.

Thousands of cameras equated to zero deterrent value.

Yes, those cameras certainly helped identify and locate the suspects: but that is cold consolation to those who lost life and limb, because they didn’t actually prevent the attack. The upcoming prosecution of Dzhokhar Tsarnaev, while it might yield some answers to troubling questions, is not going to help local runner Carol Downing’s daughters (Nicole Gross suffered two broken legs; Erika Brannock lost part of one of hers) recover and rehab and go on with their lives.

A thousand more, ten thousand more, a hundred thousand more cameras would not help: cameras have no deterrent value to people who are prepared to die and/or don’t care if they’re identified.

There also remains the distinct, disturbing possibility that the attackers chose the location because they knew it was so thoroughly covered with cameras. An attack like this is clearly directed at those present, but if its real purpose is, as Bruce Schneier observes, to attack the minds of hundreds of millions elsewhere, then it can only reach its targets if the event is heavily documented and widely disseminated.

To put that point another way: it’s entirely possible that adding cameras to a particular location will decrease public safety — because it may make that location more attractive to those who want to make certain their attacks are captured on video and of course, dutifully replayed in slow-motion thousands of times by 24×7 news networks with many hours of airtime to fill.

This brings up another disturbing point: how is it possible that senior law enforcement officials don’t recognize such an obvious, major security failure when it’s right in front of them? How can they possibly not grasp the simple concept that if a thousand cameras failed to stop the Boston Marathon attack, then ten thousand cameras will fail to stop the next one, and might even influence the attackers’ choice of location?

The answer is thus not to add still more cameras: the answer is to refuse to give in. Terrorism doesn’t work if its targets — you, me, and everyone else — decline to be terrorized.

Runners have already responded: all over the country, many who have never even thought of trying to qualify for Boston started training for the 2014 Boston Marathon the next morning. (If there weren’t a qualifying standard for the race, it would probably receive a quarter-million entries next year.) Fundraisers for The One Fund are being organized at races all over the country; and there is a common banner that will be at all of them: “Run if you can; walk if you must; but finish for Boston”.

That’s how you fight terrorism: you simply refuse to yield to it. You don’t need more cameras, more wiretaps, more spying, more databases, more secrets, more intrusion. You don’t need to declare the Constitution obsolete, as NYC Mayor Bloomberg would like to do. You don’t need to cower in fear or to give in to paranoia. And you certainly don’t need to redouble your efforts toward an approach that’s already been demonstrated not to work.

You only need courage. What kind of courage? This kind: Erika Brannock is the official starter for this Saturday’s Baltimore Marathon.

Posted on Techdirt - 23 February 2012 @ 08:55am

How New Internet Spying Laws Will Actually ENABLE Stalkers, Spammers, Phishers And, Yes, Pedophiles & Terrorists

There’s proposed legislation in the US (sponsored by Lamar Smith) and in Canada (sponsored by Vic Toews) and in the UK that uses various flimsy justifications for the mass collection of data on telecommunications users. The data covered by these proposals varies, but includes things like URLs, phone calls, text/instant/email messages, and other forms of communication. Some of this proposed legislation deals with communication metadata, e.g., sender, recipient, time, etc.; some of it deals with communication content, e.g., the full text of messages.

I’m going to gloss over the specifics for two reasons: first, they’ve been covered exhaustively elsewhere, and second, I think it’s an absolute certainty that whatever these proposals contain, the next ones will contain more.

The putative reasons given for these proposals are the usual Four Horsemen of the Infocalypse: terrorists, pedophiles, drug dealers, and money launderers. One would think, given the hysteria being whipped up by the proponents of these bills, that one could hardly walk down the street without being offered raw heroin by a grenade-throwing child pornographer carrying currency from 19 different countries.

Of course, everyone who’s actually studied terrorists, pedophiles, drug dealers and money launderers in the context of telecommunications knows full well that nothing in these bills will actually help deal with them. The very bad people who are seriously into these pursuits are not stupid, and they’re not naive: they use firewalls, encryption, and tunneling. They use strong operating systems and robust application software. They use rigorous procedures guided by a strong sense of self-preservation and appropriate paranoia. They’re not very likely to be caught by any of the measures in these bills because they’ll (a) read the text and (b) evade the enumerated measures.
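To make the “they’ll evade it” point concrete, here is a minimal sketch of my own (assuming the freely available Python “cryptography” package; nothing here comes from the bills themselves): a handful of lines is all it takes to turn message content into ciphertext before it ever reaches an ISP, so a mandated content database would faithfully record gibberish.

    # Minimal illustration: symmetric encryption of a message before transmission.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # exchanged out-of-band between the two parties
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at the usual place at noon")
    print(token)                       # the ciphertext is all an intercepting database would hold
    print(cipher.decrypt(token))       # only the key holders recover the plaintext

Tunneling works the same way at the connection level: wrap the traffic, and the metadata that gets logged describes the tunnel endpoint, not the conversation.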

Yes, there are occasional exceptions: every now and then, a clueless newbie or a careless dilettante slips up and gets caught. And of course when that happens, there’s always a press conference announcing the event, and many claims that it’s a “major blow against crime” and a flood of self-congratulatory press releases. But it doesn’t mean anything, except that someone was either stupid…or careless…or was set up.

The unpleasant reality that these bills are trying to avoid is that catching very bad people requires diligence, patience, expertise and intelligence, aka “competent police work.” There’s no substitute and there are no shortcuts. This means that these bills will achieve very few of their stated goals; that is, the benefit to society from them will be minimal, if any.

But what about the cost?

I don’t mean the financial cost, although that will be high — much higher than those proposing such legislation are prepared to admit; I mean the cost to society as a whole.

If such legislation passes, then everyone will know that every ISP is building a database — a highly useful database for very bad people. It’s the sort of thing that some very bad people have been trying to construct for years, often at considerable expense and effort. How very nice of someone else to build it for them, saving them the cost and trouble — because they, and/or their agents, will of course target it for acquisition. And given the parade of security breaches and data-loss incidents we see on a daily basis, it’s certain they’ll get it. (My bet is that they’ll get it before it’s even finished. Any takers?)

There’s an old military saying — a bit of inter-service trash talk: “The Air Force builds weapons; the Navy builds targets”.

Politicians who propose such measures appear to be thinking that they’re building a weapon — a weapon that law enforcement agencies can use to pursue people who’ve committed, or are suspected of committing, crimes. But they’re not. They’re building a target. They’re building the mother lode for stalkers, pedophiles, spammers, identity thieves, child pornographers, blackmailers, extortionists, and yes — terrorists. A Techdirt story just a few days ago gave some rather creepy examples of what Target’s data mining can do…and they’re just trying to sell you stuff. Imagine what very bad people are capable of, given far richer data and the rather obvious inclination to break the law at will.

What’s worse than building a target? Telling everyone you’re building a target. What’s worse than telling everyone you’re building a target? Telling everyone where it is. What’s worse than telling everyone where it is? Telling them what’s in it. Yet that’s exactly what these bills would do: force the construction of a target, inform everyone that it exists, where it is, and what’s in it.

I’m sure that the very bad people these bills allegedly target are delighted. I’ll bet they’re having a hard time not expressing their enthusiastic support. But my guess is that most of them will heed Napoleon’s sage advice: “Never interrupt your enemy when he is making a mistake.”

I’m not the only one who’s observed that these databases are targets, not weapons. So has Ontario Information and Privacy Commissioner Ann Cavoukian:

“This is going to be like the Fort Knox of information that the hackers and the real bad guys will want to go after. This is going to be a gold mine. […] The government will say that they can protect the data, and they can encrypt it. Are you kidding me? The bad guys are always one step ahead.”

But this is not the worst of it — that is, the certainty that very bad people will find ways to acquire these databases and to correlate them with each other and with still more databases isn’t the endgame.

Particularly talented intruders will not only get it, they’ll monitor it in real time. How do you feel about someone knowing where you bank, that you’ve made three phone calls to stores today, and that you have a Visa card with the following number that you just used from a hotel room 300 miles from home? How do you feel about the web browsing of your teenage daughter being observed by someone who’s also reading her instant messages and listening to her VOIP calls, and has the IP address she’s using in her college dorm room?

And even this is STILL not the worst of it. Given the rampant Internet and computer illiteracy that we see every day out of law enforcement, private investigators, journalists, and others around the world — such as the clueless people behind these bills — it’s only going to be a short time until “the logs say X” becomes semantically equivalent in the vernacular to “X is true”. And it is at that point that some of the more talented very bad people won’t just acquire this data: they’ll modify it.

Posted on Techdirt - 20 December 2011 @ 02:53am

The Carrier IQ Saga (So Far) — And Some Questions That Need Answers

The story so far: security researcher Trevor Eckhart exposed some very disturbing information about the “Carrier IQ” application here. This set off a small firestorm, which quickly got much bigger when Carrier IQ responded by attempting to bully and threaten him into silence. This did not go over well. After he refused to back down, they retracted the threats and apologized.

Eckhart followed up by posting part two of his research, demonstrating some of his findings on video. Considerable discussion of that demonstration ensued, for example here and here and here. Some critics of Eckhart’s research have opined that it’s overblown or not rigorous enough. But further analysis and commentary suggest that the problem could well be worse than we currently know. Stephen Wicker of Cornell University has explored some of the implications, and his comments seem especially apropos given that Carrier IQ has publicly admitted holding a treasure trove of data. Dan Rosenberg has done further in-depth research on the detailed workings of Carrier IQ, leading to rather a lot of discussion about Carrier IQ’s capabilities — there’s some disagreement among researchers over what Carrier IQ is doing versus what it could be doing, e.g.: Is Carrier IQ’s Data-Logging Phone Software Helpful or a Hacker’s Goldmine?

Meanwhile, the scandal grew, questions were raised about whether it violated federal wiretap laws, at least one US Senator noticed, and Carrier IQ issued an inept press release. Phone vendors and carriers have begun backing away from Carrier IQ as quickly as possible; there were denials from Verizon and Apple. T-Mobile has posted internal and external quick guides about Carrier IQ.

Some of the denials were more credible than others. There has been some skepticism about Carrier IQ’s statements, given their own marketing claims and the non-answers to some questions. There’s also been discussion about the claims made in Carrier IQ’s patent.

Then the lawsuits started; see Hagens Berman and Sianna & Straite and 8 companies hit with lawsuit for some details on three of them.

Attempts to figure out which phones are infected with Carrier IQ are ongoing. For example, the Google Nexus Android phones and the original Xoom tablet seem not to be infected, nor do phones used on UK-based mobile networks, but traces of it are present in some versions of iOS, although their function isn’t entirely clear. A preliminary/beta application that tries to detect it is now available. Methods for removing it have been discussed.

Meanwhile, the response to a Freedom of Information Act request has indicated (per the FBI) that Carrier IQ files have been used for “law enforcement purposes”, but Carrier IQ has denied this. And there seems to be a growing realization that all of this has somehow become standard practice; as Dennis Fisher astutely observes, With Mobile Devices, Users Are the Product, Not the Buyer.

Those are the details; now what about the implications?

Debate continues about whether Carrier IQ is a rootkit and/or spyware. Some have observed that if it’s a rootkit, it’s a rather poorly-concealed one. But it’s been made unkillable, and it harvests keystrokes — two properties most often associated with malicious software. And there’s no question that Carrier IQ really did attempt to suppress Eckhart’s publication of his findings.

But even if we grant, for the purpose of argument, that it’s not a rootkit and not spyware, it still has an impact on the aggregate system security of the phone: it provides a good deal of pre-existing functionality that any attacker can leverage. In other words, intruding malware doesn’t need to implement the vast array of functions that Carrier IQ already has; it just has to activate and tap into them.

Which brings me to a set of questions that probably should have been publicly debated and answered before software like this was installed on an estimated 150 million phones. I’m not talking about the questions that involve the details of Carrier IQ — because I think we’ll get answers to those from researchers and from legal proceedings. I’m talking about larger questions that apply to all phones — indeed, to all mobile devices — such as:

  • What kind of debugging or performance-monitoring software should be included?

  • Who should be responsible for that software’s installation? Its maintenance?

  • Should the source code for that software be published so that we can all see exactly what it does?

  • Should device owners be allowed to turn it off or uninstall it — or, should they be asked for permission to install it/turn it on?

  • Will carriers or manufacturers pay the bandwidth charges for users whose devices transmit this data?

  • Should carriers or manufacturers pay phone owners for access to the device owners’ data?

  • Where’s the dividing line between performance-measuring data that can be used to assess and improve services, and personal data? Is there such a dividing line?

  • Will data transmission be encrypted? How?

  • Will data be anonymized or stripped or otherwise made less personally-identifiable? Will this be done before or after transmission or both? Will this process be fully documented and available for public review? (One possible first-cut approach is sketched after this list.)

  • What data will be sent — and will device owners be able to exert some fine-grained control over what and when?

  • Who is responsible for the security of the data gathered?

  • Who will have access to that data?

  • When will that data be destroyed?

  • Who will be accountable if/when security on the data repository is breached?

  • What are the privacy implications of such a large collection of diverse data?

  • Will it be available to law enforcement agencies?

    (Actually, I think I can answer that one: “yes”. I think it’s a given that any such collection of data will be targeted for acquisition by every law enforcement agency in every country. Some of them are bound to get it. See “FBI”, above, for a case in point.)
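On the anonymization question above: nothing about it is publicly specified today, but even a first-cut approach would be straightforward to describe and audit. As one hypothetical illustration (the key and identifier below are invented; this is not a description of how Carrier IQ actually works), device identifiers could be run through a keyed hash on the handset so that raw IMEIs and phone numbers never leave the device:

    # Hypothetical sketch: pseudonymize a device identifier with a keyed hash
    # before any metric is transmitted, so the raw identifier is never sent.
    import hashlib
    import hmac

    SECRET_KEY = b"per-carrier secret, rotated on a schedule"   # invented for illustration

    def pseudonymize(device_id: str) -> str:
        """Return a stable but non-reversible token for a device identifier."""
        return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()

    metric = {
        "device": pseudonymize("353912345678901"),   # an example IMEI-like string
        "dropped_calls": 3,
        "signal_dbm": -97,
    }
    print(metric)   # what would be transmitted instead of the raw identifier

Whether anything like this is actually done, and whether the key handling would make it meaningful, is exactly the sort of thing that should be documented and open to public review.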

Lots of questions, I know. Perhaps I could summarize that list by asking these three instead: (1) Who owns your mobile device? (2) Who owns the software installed on your mobile device? and (3) Who owns your data?
