The Politicians Who Cried 'Cyber Pearl Harbor' Wolf

from the tough-to-believe-them-any-more dept

With all the talk lately about cybersecurity legislation, we have yet to see anyone lay out an actual scenario for a real "cyber security" threat (or, at least, one that goes beyond your everyday malware or corporate espionage, which are covered just fine by existing laws). However, we have heard lots of fear mongering about planes falling from the sky and electric grids being shut down — despite no evidence that any such threat exists (and, if it does, the concern should be focused on why those things are hooked up to the internet in the first place). And, of course, in all this fear mongering, there’s one phrase that stands out: “Digital Pearl Harbor,” as in, “we must protect ourselves before there’s a digital Pearl Harbor.”

David Perera, over at FierceGovernmentIT, has done the dirty work of tracing the history of the phrase, suggesting that these Chicken Littles have been warning about the “imminent” digital Pearl Harbor for many years now.

The earliest public reference appears to be in a June 26, 1996 Daily News article in which CIA Director John Deutch warned that hackers “could launch ‘electronic Pearl Harbor’ cyber attacks on vital U.S. information systems.”

The next month, then-Deputy Attorney General Jamie Gorelick told the Senate Governmental Affairs permanent subcommittee on investigations that “we will have a cyber-equivalent of Pearl Harbor at some point, and we do not want to wait for that wake-up call,” according to the Armed Forces Newswire Service.

Thereafter the term appears to have gone into a hiatus, apart from some offhand or derivative references to the original sources cited above. But, not to worry, Sen. Sam Nunn (D-Ga.) used it again in the spring of 1998, being quoted in a March 19 South Bend Tribune article warning that “We have an opportunity to act now before there is a cyber-Pearl Harbor…We must not wait for either the crisis or for the perfect solution to get started.”

There’s a lot more where that came from, so go hit the link, read it, and be amazed.

Of course, as Perera notes, even though every single one of those fearmongering predictions has turned out to be false so far, it’s still possible that the “Digital Pearl Harbor” is right around the corner. But, still, it at least raises significant questions about how important it really is that we rush through the bill without an explicit explanation of the true threat. Of course, that won’t really matter, as everyone’s basically playing a giant game of musical chairs, trying to be positioned to claim they “called it” should these horrible things ever actually happen.



Comments on “The Politicians Who Cried 'Cyber Pearl Harbor' Wolf”

57 Comments
Anonymous Coward says:

Re: Re:

2010 – a year which will live in infamy – the Republic of Iran was suddenly and deliberately attacked by the Empire of the United States of America.

Iran was at peace with that nation and, at the solicitation of the United States, was still in conversation with its Government looking toward the maintenance of peace in the Middle East. Indeed, shortly after performing the Stuxnet attack, United States representatives were quoted as saying there is still time for Iran diplomacy to work. Their messages contained no threat or hint of war or attack. Nor did they claim responsibility for what they had done.

It will be recorded that the time required to develop Stuxnet makes it obvious that the attack was deliberately planned many months before the attack. During the intervening time the United States Government has deliberately sought to deceive the Republic of Iran by false statements and expressions of hope for continued peace.

The Stuxnet attack on Iran has caused severe damage to Iranian energy research. Centrifuges have been damaged. The same day the Iranian President announced the centrifuge damage, two nuclear scientists were assassinated. Later, a United States CIA plane was captured over Iranian air space a month before another Iranian nuclear facility director was killed.

We do not know what other subterfuge or computer attacks the United States are currently engaged in.

The damage caused by the Stuxnet attack was not limited to Iran. The virus infected computers in India, Indonesia and practically every nation on the planet. The United States has, therefore, undertaken a surprise offensive extending throughout the world. The facts of 2010 speak for themselves. The people of Iran have already formed their opinions and well understand the implications to the very life and safety of their nation. …

(taken and modified from FDR’s “A Date That Will Live in Infamy” speech: http://www.law.ou.edu/ushistory/infamy.shtml)

Anonymous Coward says:

Re: Re: Re: Yes, exactly the same...

I think the Siberian pipeline sabotage incident ( http://en.wikipedia.org/wiki/Siberian_pipeline_sabotage ) should count. Through dumb luck (or questionable reporting), there were no known deaths. But when our software purposely sets off an explosion equivalent to a small nuclear bomb (roughly 3 kilotons of TNT) in a foreign nation, I don’t think luck should be allowed as an extenuating factor.

Cory of PC (profile) says:

OK, I know that politicians aren’t going to do crap about this, but something has been picking at me while reading this: if what happened at Pearl Harbor was a surprise attack carried out by the Japanese, and in this day and age these people are worrying about a “cyber” Pearl Harbor…

Then why aren’t they defending themselves if an attack on this “cyber” Pearl Harbor is going to happen?

Seriously, if what happened at the real Pearl Harbor was a surprise attack, then proclaiming “cyber Pearl Harbor” kinda ruins that secrecy (that is, if someone decides to go through with it). Again, I know these people aren’t going to do squat other than the regular “do something!” routine, but that’s what bugs me about the real Pearl Harbor as opposed to this “cyber” Pearl Harbor.

That Anonymous Coward (profile) says:

Re: Re:

Because the former-congresscritter-turned-lobbyist hasn’t brought them the new product designed to answer all of the problems they dreamed up yet.
Fearmongering is still a productive way of getting reelected and passing legislation that erodes citizens’ rights in favor of greater “safety”.

I often enjoy the whole “hackers can destroy X infrastructure” line, and yet no one can explain why the controls for the power grid are on the freaking net to begin with.

Robert (profile) says:

Re: Re: Re: Re:

Did you not get the latest update? The status messages are broadcast over twitter, I think it is @USAPowerGrid.

The really cool part is that replies to the twitter account are processed as commands to the system. Not even in hexadecimal coding, just plain text.

For example: @USAPowerGrid “Shut down all nuclear reactors” is a valid nation-wide command to turn off all US nuclear reactors. From what I’ve heard, that command just pulls the plug to the control system, as the politicians (not nuclear safety experts) wanted a fast shutdown so they could go silent.

Apparently the politicians got the idea from watching Down Periscope.

Chuck Norris' Enemy (deceased) (profile) says:

Re: Re: Re:

yet no one can explain why the controls for the power grid are on the freaking net to begin with

They’re not. Industry has been self-regulating, locking up, isolating, and protecting its grid controls for decades. CISPA will not change the way utilities do business, except to allow them to share users’ private data with the government with impunity.

That Anonymous Coward (profile) says:

Re: Re: Re: Re:

Then I suggest that every time a congresscritter claims some hacker can shut down the power grid, they get pimp slapped.
It is the FUD they like to spread: that critical systems are an IP address away for the bad people to access.

But then you have DHS suggesting that when hackers were detected in a sensitive system to not shut them out but to let them keep poking around, unless they got close to something important. *blink*

:Lobo Santo (profile) says:

And humbug too!

This has nothing to do with ‘State Cyber-Security’ and everything to do with private contractors taking billions of OUR goddamn dollars from the government in order to provide the gov with the latest “emperor’s new firewall” Cyber-Security system.

End of story, nothing else to be said.

Except, the people taking that money are generally friends of the stupid shills pushing for this shyte.

Mason Wheeler (profile) says:

Damned if you do, damned if you don't

Ya know, I really like Techdirt for its focus on the problems of copyright and patent abuse. But the steady stream of denial when it comes to security issues drives me up the wall.

It reminds me of Y2K. That had all the hallmarks of a catastrophe in the making, and a lot of people don’t even know how big it might have been. My uncle is in the nuclear power industry, and some of the things he had to say about the very real Y2K issues they were facing in the late 90s had positively hair-curling implications.

Even assuming that the worst thing that happened was “only” some power plants going offline, that’s still pretty horrific. Our generation has never experienced widespread, extended blackouts in a major city. But just look at NYC in 1977, or any of a number of other examples. They tend to cause riots that make the Rodney King aftermath look like a wrassling match between grade-school children.

But in the end, the Y2K catastrophes didn’t happen, and for people who can only comprehend one degree of cause-and-effect, (which sadly is the majority of Americans,) that was enough. It didn’t happen, the problem wasn’t real, look how silly we all were for getting all worked up about nothing. And that’s still the conventional wisdom about Y2K today.

Problem is, it’s completely wrong. It was a very serious catastrophe in the making, and the only reason it *didn’t* happen is because all the people getting worked up about it caused people to put forth the Herculean efforts required to fix it! It’s a bit of a paradox: the only reason it didn’t happen was because it really was going to happen.

And today we see similar issues with security. And again, the people actually on the front lines are caught in a bad position. If they fail, then something horrible happens and it’s on their heads. But if they succeed, then nothing visible happens and everyone who can’t think past one degree of cause-and-effect blames them for wasting time and resources on a problem that wasn’t real.

I’m proud of the Internet’s response to SOPA, and I consider it a good thing. It finally got my generation to wake up and truly realize what “the price of liberty is eternal vigilance” actually means. Now that we’re becoming aware, can we please stop undermining it by ganging up on those whose job it is to be vigilant?

lavi d (profile) says:

Re: Re: Didn't read it all..... lol

DON’T put critical systems ONLINE.

In other words, the government doesn’t need to have access to all of our communications on the internet in order to protect systems which shouldn’t be on the internet.

If there is a danger to secured SCADA systems or air traffic control communications, then rightfully the government should have some authority to police those systems.

But this is not what they are asking for.

Anonymous Coward With A Unique Writing Style says:

Re: Re: Re: Didn't read it all..... lol

Q: If they don’t put it online, how is an obese Homer Simpson going to be able to prevent it from melting down?

A: By basically going “eenie, meenie, miny, mo” and whatever button he points at at that point is by pure chance the button that prevents the nuclear meltdown. (As has been documented in The Simpsons on at least one occasion.)

Anonymous Coward says:

Re: Re: Didn't read it all..... lol

I know you guys like to think air gap/sneakernet, which in reality does exist to some extent on critical SCADA systems. The fact is, a lot of infrastructure is online, such as smart meters for home/business electric. The management systems themselves are often on a network as well, for authentication with certificates/LDAP/AD, and isolation only occurs at firewall points between systems.

Now, how big the external threat is, though, is very debatable. Most systems would more likely be compromised from an internal source, as with Stuxnet, than from the internet.

That Anonymous Coward (profile) says:

Re: Damned if you do, damned if you don't

No, the people on the front lines of security are placed in a much worse position.
If they don’t scream loud enough and doomsday predict hard enough they do not get allotted enough resources to deal with the real problem.
Then a majority of those resources are spent buying blinkie blue led laden magic boxes that will solve all of the problems, and siphon down all the Facebook updates in a 6 state radius looking for people plotting against the system.

Security of critical systems is important, no one is denying this… however running around screaming how some kid in a former Eastern Bloc country can shutdown the power to a hospital and kill 1000’s of unborn children and 10 kittens unless we can spy on everyone is NOT the proper method.

Demanding to “see” tangible results for hardened code and systems by people woefully incapable of understanding how computers work to begin with is the problem.
Pretending the internet is connected to everything to justify wholesale invasion of privacy on citizens is asinine.
Having critical systems connected to the internet is STUPID.
Calling it a possible “Digital Pearl Harbor” is STUPID.
Trusting lobbyists over actual experts who work for you already is STUPID.

We aren’t attacking the people who need to fix the problem, unless their answer is to spread hysteria and wait for a bigger payday.

Anonymous Coward says:

Re: Damned if you do, damned if you don't

Quote:

Our generation has never experienced widespread, extended blackouts in a major city.

There is your problem: you are afraid of the unknown.

The best protection any country could have against such a “terrible” thing is to have the knowledge to adapt to the parameters set forth by nature or man.

Central generation of power is a problem; central production of food is a problem; central healthcare is a problem. Distributed production, as in peer-to-peer production, is a more robust way of doing things, and still it would decimate the “big players.” We went from everybody knowing what to do to survive, to depending on others for it, and now we are back again to re-learning how to do things for ourselves.

Cyber attacks are only a problem for centralized stuff; they are not a big deal for distributed systems.

Mason Wheeler (profile) says:

Re: Re: Damned if you do, damned if you don't

I’m not “afraid of the unknown.” Quite the opposite, in fact. I’m afraid of the *known*. We know what happens when the lights go out in a big city for an extended period of time, because it’s happened before.

Just not to us, so instead of being afraid of it and channeling that worry toward a positive outcome, we become complacent about the unknown. That’s a serious problem.

And your implication that the true problem is a single point of failure, and that a wider distribution would prevent it, is quite false. Take a look at the New York outage in 1977. If you believe the official explanation, (as opposed to the far more entertaining theory set forth in Men In Black that it was caused by an alien superball,) it involved three lightning strikes taking out three different power systems, followed by three incidents of human error in three different places while trying to deal with the consequences, over the course of about an hour. That looks pretty darn distributed to me…

That Anonymous Coward (profile) says:

Re: Re: Re: Damned if you do, damned if you don't

As it was happening there was lots of “reporting” about how some people heard explosions, and it might be terrorists.

Nope, just trees and a corporation who saved a few bucks skipping maintenance, and a critical system that was unprepared for an event they assumed could never happen.
You can never be prepared for everything, but you can’t sit in the bunker cowering over what-ifs.

The blackout should never have happened, but someone cut corners, blamed everyone but themselves, and proved that no matter how good a system you have humans can always screw it up.

Anonymous Coward says:

Re: Damned if you do, damned if you don't

Also, variety is key: there are too few manufacturers of electronics, and too many of them are going offshore. It is time to bring those manufacturing capabilities to every individual, so that when push comes to shove we will be prepared and able.

Youtube: Home or Hobby Plastic Injection Molding Machine

Do it yourself, or STFU and take another FEMA/Katrina like you want it to happen. The government will never be prepared for all contingencies; they have proved they can’t handle even the ones they already know about and prepared themselves against. What makes you believe they will do any better with any other problem that involves millions of people?

The Japanese couldn’t handle Fukushima, so it is not just the American government that is unable; it is the system that is the problem, and the problem is this BS centralized crap that people keep pushing. That means those laws are useless, except as window dressing.

Rich Kulawiec (profile) says:

Re: Damned if you do, damned if you don't

Two points.

First, we already have cyber-Pearl-Harbors on a regular basis, primarily because people are making the same well-known mistakes over and over and over again. The fix for this is not legislation or technology: it is accountability. But as we see (from watching “dataloss” and similar resources) there’s almost no accountability; the magic phrases “we take the problem seriously” and “no one could have foreseen” take care of everything. Until the next time.

Second, the kind of real-world security improvements that are needed are within reach today. Again: no legislation, no technology. “Don’t plug critical systems into the Internet” would be a good start. “Don’t run Windows” would be another. “Don’t use Adobe Acrobat” still another. None of this is difficult or arcane, none of it requires 8-figure contracts with inept corporations, none of it requires spying on citizens, none of it requires anything except recognition that it must be done.

So yes, there are real security problems everywhere, but the fixes are not at all what these demagogues think/say they are. What they have in mind will simply make things much worse.

Mason Wheeler (profile) says:

Re: Re: Damned if you do, damned if you don't

Oh, I definitely agree that a major part of it is stubbornly persisting in using known-bad software.

The wake-up call was in 1988, when the Morris Worm devastated the fledgling Internet, taking approximately 10% of it offline. (Imagine 10% of the Internet today going down!) It happened because Morris wrote code that exploited a buffer overflow in a utility written in C to let the worm hack into systems where it was not welcome.

The elephant in the room, the thing that no one ever wants to acknowledge, is that the problem is inherent to the C language. Here we are, coming up on a quarter-century later, and people are still exploiting buffer overflows in software written in C, (and C++ and Objective-C, which inherited all of C’s flaws,) and causing billions of dollars worth of damage. The same mistakes over and over, because they’re much easier to make than to get right.

In a sane world, people would have read the writing on the wall. C was created for the specific purpose of building the very operating system that got hacked by the Morris Worm, which was designed as a networked OS. This proves conclusively that the language is unsuitable for its original purpose, but we didn’t listen, and we’re still paying the price today.

In a sane world, it would be considered an act of criminal negligence today to write an operating system, or any other network-facing software (such as web browsers, another common target of buffer overflow attacks) in a C-family language.

But we didn’t listen, we didn’t care. It’s got too much inertia now to change, so we’re stuck dealing with the consequences instead of fixing the problem.
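The bug class the comment keeps pointing at, an unchecked copy into a fixed-size buffer, fits in a few lines. This is a toy sketch (the helper names and buffer size are illustrative, not from any real codebase), showing both the dangerous pattern and a bounds-checked alternative:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>
#include <string>

// The classic C pattern: a fixed-size buffer and an unchecked copy.
// If src is longer than the buffer, strcpy(buf, src) writes past the
// end -- the overflow class the fingerd exploit made famous.
bool unsafe_copy_would_overflow(const char* src, size_t buf_size) {
    // strcpy is only safe when the source (plus its NUL) fits.
    return strlen(src) + 1 > buf_size;
}

// A bounds-checked alternative: snprintf never writes past the buffer,
// and its return value tells the caller whether truncation occurred.
bool checked_copy(char* buf, size_t buf_size, const char* src) {
    int needed = snprintf(buf, buf_size, "%s", src);
    return needed >= 0 && static_cast<size_t>(needed) < buf_size;  // true = it fit
}
```

The point is that the safe version forces the programmer to carry the buffer size alongside the pointer and check the result, which is exactly the discipline C does not enforce on its own.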

Michael (profile) says:

Re: Re: Re: Damned if you do, damned if you don't

I completely agree that the use of non-size-limited memory operations is a problem. C++ addresses this with strings that store their actual length. Trusting user input blindly is another mistake. Those are issues that can appear in any language when combined with poor design choices and unsanitized input.
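The length-tracking point can be illustrated with a toy sketch (the function and its name are made up for illustration): std::string carries its own size and reallocates on append, so there is no fixed bound for input to overrun:

```cpp
#include <cassert>
#include <string>

// std::string tracks its own length and grows on demand, so appending
// user input cannot overrun a fixed buffer the way strcat into a
// char[N] can. The length check is inside the type, not the caller.
std::string greet(const std::string& user_input) {
    std::string out = "hello, ";
    out += user_input;  // reallocates as needed; no fixed bound to exceed
    return out;
}
```

The same call with a 10,000-character input simply produces a 10,007-character string rather than scribbling over adjacent memory.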

Rich Kulawiec (profile) says:

Re: Re: Re: Damned if you do, damned if you don't

I’m well aware of the Morris worm, as I was in the trenches that day; my colleague Kevin Braunsdorf and I were the ones who came up with what we called “the condom” fix, a one-liner that prevented it from spreading. But the problem there is not C, or Unix or sendmail, or finger, but bad programming practice — which is possible in any language, with only the details of how bad varying.

That said, yes, it was a wakeup call, and it’s worth noting that in the decades since, there has been no security incident of similar magnitude involving Unix/Linux systems. However…there’s one of that size going on right now, and there has been for years. Well over a hundred million Windows systems are fully compromised and part of botnets. (100M was a 2006-2007 estimate. I think 200M is probably a better one today.) Of course those systems aren’t down — which would bring the problem into sharp focus — because their new owners have better things to do with them than merely shutting them down.

This problem is now ten years old. It affects issues as diverse as spam, phishing, identity theft, DoS attacks, online polls/voting, malware hosting, clickfraud, and others. Yet because (with rare exceptions) nothing is actually “down,” there has been very little done to address it beyond the usual “we take the problem seriously.” (Yes, I’m aware of announced “botnet takedowns”. These are meaningless theater. All the systems that are part of those are likely part of new botnets before the press conference is over.)

In a sane world, dealing with this problem — which is the largest and longest-running security issue in the history of computing — would be front and center. But it’s not. It doesn’t even get mentioned in most of the OMGOMGCYBERWAR fear-mongering, in favor of hypothetical attacks by the perceived national enemies du jour. (See, by the way, this excellent article: Why the United States Can’t Win a Cyberwar — h/t to Richard Forno and his excellent infowarrior list.)

After thinking about this for ten years, I’ve realized why: it’s an ugly, difficult problem to solve and very unlikely to result in huge profits for contractors. That’s because it’s real, and really fixing it is a measurable outcome — one of the last things that those seeking to land $1.2B contracts want. It’s far more profitable for them to deal with imagined problems, because then claims that they’re “solved” are vastly easier to sustain — and the cash register will keep ringing. Meanwhile, botted systems are everywhere, and if there has been any slowdown in the infection rate, it’s only because their owners have an embarrassment of riches and thus don’t have a practical need for any more.

Mason Wheeler (profile) says:

Re: Re: Re:2 Damned if you do, damned if you don't

Yes, theoretically, bad programming practice is possible in any language, with only the details of how bad varying.

In practice, what you tend to see again and again are two very specific exploits: buffer overflows, (which are theoretically possible in any unmanaged language, but in practice I only ever see in C and its descendants,) and SQL injection, which as the name states is a problem specific to SQL. (Though the meta-problem, failing to properly separate code from data, originates in Lisp–where enthusiasts like to pretend it’s a “feature”–and exists to one degree or another in every language with an Eval function. But the instance of it that actually causes lots of problems is SQL injection.)
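The code/data confusion named here can be sketched in a couple of lines (the table and column names are hypothetical, purely for illustration): splicing user input directly into a SQL statement lets crafted data rewrite the statement itself.

```cpp
#include <cassert>
#include <string>

// Naive query building: data is pasted into code, so the input can
// change the statement's meaning. This is the injection pattern, not
// something to ever ship.
std::string naive_query(const std::string& name) {
    return "SELECT * FROM users WHERE name = '" + name + "';";
}
```

With the input `x' OR '1'='1`, the result is `SELECT * FROM users WHERE name = 'x' OR '1'='1';`, whose WHERE clause is always true. The standard fix is exactly the separation of code from data the comment describes: parameterized statements, where the SQL text contains only placeholders and the input is bound through the database API rather than concatenated.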

And those “hundred million Windows bot systems” aren’t just confined to Windows anymore. They’re starting to incorporate Macs (BSD-based), iPhones and Android phones (also based on Unix flavors) into botnets in increasing numbers as the popularity of each grows. (Not Linux PCs, though, because there’s *still* no one using Linux PCs.) All of it C-based, because C makes it easy to write insecure code and difficult to write correct code.

This simply underscores my point: trying to write a secure OS in C is a fool’s errand. It’s only appeared to succeed with *nix systems so far because they weren’t popular enough to be economically worthwhile as targets. As that changes, we see more and more just how bad their security issues are.

The only appropriate course of action is the same one we’ve refused to take since 1988: abandon C entirely and move to a better language, one where a failure to follow the non-intuitive “good practices” made necessary by the language’s own design flaws does not cause security holes in the resulting software.

TtfnJohn (profile) says:

Re: Re: Re:3 Damned if you do, damned if you don't

Plenty of Linux boxes are out there, just not all that many desktops, comparatively. (Android is a Linux flavour, though.)

It’s nice to blame C for the lack of security in OS installations rather than old code that should have been removed from the OS kernel years ago. In reality it’s bad programming practices and bad testing prior to release that cause most of the problems. If an organization is still using the utility that caused the buffer overflow error, or a descendant of same, 25 years on, they should be slow roasted. C, remember, was designed to be one and a quarter steps removed from assembler, not a high-level language that takes care of a ton of stuff for the coder. I’ve never seen a claim from Kernighan and Ritchie, or anyone else, that it was designed to be secure in and of itself. It was designed to produce operating systems. Unix, actually. Security was left to the programmer or programming team.

Now, if you can name or come up with a mid-level or high-level language that will be secure with an acceptably fast and small executable, then please tell me what it is. One more to your liking, because I can see your contempt for C and its derivatives. And yes, it’s an enormous pain at times.

Someone must think it’s good enough because it’s still widely used.

Windows’ flaws are well known, and cracking it isn’t difficult if someone really wants to, which means botnets are easier to establish on Windows systems than on *nix systems, and not just because Windows is more widespread. Though that helps. They can be put on *nix systems; proof-of-concept botnets have been established and tested on closed networks. We’ll see if Win8 can reduce the flaws in Windows and the number of botnet-infected machines.

None of this has to do with using C or C++ to code but design of the OS itself, as much as C is an enormous pain to use. Or would you rather code an OS to assembler? 😉

Meanwhile, as far as attacks on critical systems are concerned, we’re back to enforcing the best practices possible on users and administrators. That, and accepting that there is no such thing as a 100% secure system, which doesn’t mean we shouldn’t keep trying to get there.

Cranky Old Git says:

Re: Re: Re: Damned if you do, damned if you don't

Yeah, an edit command would be very helpful. Limit its availability to registered users only, giving folks an incentive to sign up, and make it expire five minutes after the comment was first posted.

A good idea would be to also keep track of each edit/resubmission and have a button which allows readers to expand the post (or open a new page) and see what edits were made. Such a setup would allow people to fix typos/grammar while at the same time preventing abuse, like deletion and/or major changes to the original narrative. The “see edits” button could even be colored red so readers know at a glance that edits have been made to the comment they’re reading, while on comments that haven’t been edited the button could simply be grayed out, unclickable, or non-existent entirely.

Yes I know we should all be proofreading carefully before hitting the submit button, and fwiw I always do, but even so I still tend to miss errors, not seeing them until after the submit button has been pressed. My forehead would certainly thank you. I facepalm enough as it is due to the number of dumb comments I read here. Don’t need to be doing it after reading my own posts too lol. 🙂

TtfnJohn (profile) says:

Re: Damned if you do, damned if you don't

As someone deeply involved in that herculean effort to stave off the Y2K bug (in the telecom industry), the biggest disappointment was that it wasn’t going to cause the mess predicted, even on most older systems that hadn’t been updated in a coon’s age. I can’t speak for the nuclear power industry, but I do know that the people involved at the electrical generator and grid builder here were as disappointed as I was. Perhaps anticlimactic is more the word.

At the end of the day, the most affected systems were billing, not operational, systems. A few days prior to the millennium turnover, IBM downloaded and activated the bug fix to our mainframes with nary a hiccup. All after we’d tested the hell out of things to determine that operational systems would have continued on just fine.

The night of Dec 31, 1999/Jan 1, 2000, we watched and waited while all the mainframe apps ran smoothly on. Most of the PC-based systems were fine too, with the single exception of SAP, who hadn’t delivered their database update to widen dates from two characters to four, so SAP crashed impressively. A couple of phone calls to Germany later, using every swear word imaginable, we got SAP to restructure their database, stop and restart the system, and kept them online till we were satisfied. And they got us in the correct time zone. 🙂

By Jan 2nd we had admin passwords to every part of SAP which, till then, they wouldn’t give us. My low opinion of SAP didn’t get any better.

That said, we knew what was coming and had sufficient lead time to be prepared and ensure that all but one, it turned out, of our vendors were prepared and ready.
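The underlying bug here, and the “windowing” workaround shops sometimes used when widening a two-digit field to four (as in the SAP fix) wasn’t feasible, fits in a few lines. A minimal sketch; the pivot value is illustrative, as real systems each chose their own:

```cpp
#include <cassert>

// The heart of Y2K: storing years as two digits ("00".."99") makes
// 2000 sort before 1999. One remediation was a pivot (windowing) rule
// that maps each two-digit year into a chosen 100-year window.
int widen_year(int two_digit, int pivot = 70) {
    // Years below the pivot are treated as 20xx, the rest as 19xx.
    return two_digit < pivot ? 2000 + two_digit : 1900 + two_digit;
}
```

Windowing only defers the problem, of course: a pivot of 70 breaks again at 2069, which is why widening the stored field to four digits was the real fix.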

If there ever is a cyber Pearl Harbour, the problem is that we have no idea what form it will take, so the defenses fall back on best practice as well as anyone can manage. Keeping in mind the human factor, which means that someone will leave a pad device, laptop, smart phone or whatever lying around where anyone can look into it. Insecure password/username combinations and all the rest of it. None of this means lowering vigilance, resting on laurels or anything like it. It does mean planning for the worst possible outcome and how to recover from it. It means that if the power grid HAS to remain online for whatever ersatz reason, it should be hardened significantly enough that it’s extremely difficult to crack.

For all of that, I’d be as or more concerned about an EMP-like event, and not even a terrestrial one. Strong solar flares matched with coronal mass ejections aimed at Earth have the potential to fry the planet’s electrical grid. Less powerful ones have the potential to fry semiconductors, have been known to cause transformers to explode, and have caused mass blackouts such as the one that hit Quebec and parts of eastern Ontario in 1989.

These are associated with "sunspot years" and we're in one of those right now. It's hard to call the Sun a terrorist, but a flare/coronal mass ejection on the order of the one in 1859 would take out satellites, the Internet and almost all of the electrical grid planet-wide. We know this could happen. We know at some point it will happen and that we're powerless when it does. All we can do is mitigate the damage when it does and pray we have enough hardware to begin to do that.

Whatever will constitute a "cyber Pearl Harbour" is more problematic. We have no idea who might launch it, what vector(s) it might use and just who has the motivation and skill to construct it. So all that can be done is broad defense against malware that will make Stuxnet look like a middle-school programming exercise. Should be fun. But for all of that it can be done with existing resources if government departments and the private and public sectors simply apply and enforce "best practices" security. Which they should already be doing. Otherwise it's a matter of throwing good money after bad, and that never, ever works.

Jeffrey Nonken (profile) says:

Re: Damned if you do, damned if you don't

“It reminds me of Y2K. That had all the hallmarks of a catastrophe in the making…”

Your electric bill could have been late. Gasp!

“…and a lot of people don’t even know how big it might have been.”

Your phone bill might have been late, too.

“My uncle is in the nuclear power industry, and some of the things he had to say about the very real Y2K issues they were facing in the late 90s had positively hair-curling implications.”

Nice. Nice way to throw in an unsupported assertion by basing a claim on somebody else’s alleged expertise.

“Problem is, it’s completely wrong. It was a very serious catastrophe in the making, and the only reason it *didn’t* happen is because all the people getting worked up about it caused people to put forth the Herculean efforts required to fix it! It’s a bit of a paradox: the only reason it didn’t happen was because it really was going to happen.”

Whereas you seem to be claiming that the fact that it didn’t happen is proof that it would have.

Name one catastrophe and give us a credible causation chain. Details are important: “the Nuclear plant would have blown up because it had the date wrong” would be a really, really bad example of a credible, detailed causation chain.

By the way, I wasn’t in the Nuclear anything industry, nor any related power industry. I was in the embedded process controls industry and I am led to believe that some of our products were used in some power plants. However, that makes my expertise… zero. Because so far I haven’t even told you what I DID.

I was an embedded systems software engineer (and I still am, though I bill myself as “firmware developer” these days), so actually I have a pretty darned good idea how small, embedded systems work. Things like, for example, controllers for stuff like power systems and manufacturing. I currently work making airplane simulators, so I also have a pretty good idea how airplane systems and flight navigation systems work.

Most of ’em don’t give a rat’s ass what the date is.

Do you actually KNOW what the so-called Y2K bug was? Most people don’t seem to. The media at the time was all hype and no substance.

It’s just that decades ago computer memory was expensive, so people would save space (and therefore money) by programming dates as 2-digit numbers. Why not? The year 2000 was decades away, along with any problems associated with it.

Or maybe they were just lazy. Or some combination. Either way, the result was the same: two-digit dates, based on the assumptions that the date is in the 1900s and that either their programs would be obsolete by 2000 or they wouldn't be working there anymore anyway, and it wouldn't be their problem.

Well, sometimes programmers and engineers do such a good job that their code lives on well past its expected lifetime, and as long as a system is working well, nobody is going to just spend tens or hundreds of thousands of dollars (or millions) to replace it. And if the system doesn’t change, and it works, there’s no reason to change the software.

And then one day you realize that the year 2000 is looming, and if you type 00 into the date field, suddenly Joe Schmoe is negative 30 years old and not eligible for social security. Or Jane Jackson’s pension plan won’t be vested for 95 years.
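The arithmetic behind that failure is simple enough to sketch. Here's an illustrative toy (not code from any real system, and the function name is made up for this example) showing how an age calculation built on two-digit years goes negative when the year rolls over to 00:

```python
def age_from_two_digit_year(birth_yy, current_yy):
    """Compute an age from two-digit years, assuming the century is
    always 1900 -- the shortcut many legacy programs took to save memory."""
    return current_yy - birth_yy

# In 1999 this works fine: someone born in '30 comes out as 69 years old.
print(age_from_two_digit_year(30, 99))  # 69

# In 2000 the "year" field rolls over to 00, and Joe Schmoe is
# suddenly negative 30 years old -- no longer eligible for anything.
print(age_from_two_digit_year(30, 0))   # -30
```

The fix was equally unglamorous: widen the stored year to four digits (or pick a pivot century), then audit every place the old assumption had leaked into.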

Most of the Y2K issues were accounting problems. Why do your traffic lights care whether it’s 1900 or 2000? They can still change every 30 seconds, and they can still make sure I have to stop at every #$^*! intersection without knowing the date. I’d tell you about the guy I know (actually sitting about 15 feet away as I type this) who worked on military aircraft at the time, but then I’d be guilty of making unsupported assertions based on somebody else’s alleged expertise, and I do hate being a hypocrite.

I’d much rather be making my own unsupported assertions based on my own alleged expertise. 😉

If you want anybody to take you seriously, you might consider at least giving an example of a disaster averted, and, as I’ve said, a credible chain of causation. Why would the nuclear plant have blown up, shut down, melted down, or whatever results your uncle described? Or airplanes fall out of the sky? Traffic lights cause massive pile-ups? Automobiles slam into reverse? Microwave ovens irradiate their owners? Cell phones call ex-wives and invite them to join you and your girlfriend for a 3-way? (…OK, I just made that one up.)

Tell me what disasters, and how they could actually happen because somebody used a 2-digit year, or it's not real.

Note: late bills because somebody realized that customers were either being charged or credited huge sums, or some other unrealistic numbers, and had to do everything by hand. Or had to wait for some emergency programming to fix the problem.

Note 2: A lot of stuff WAS averted because in the last 2 or 3 years before 2000, a massive effort was made to prevent the problems, or certify that it wasn’t required. (My group had exactly one product that even knew the date, and I was able to certify that it would be good until sometime in February, 2037.) However, MUCH more work was done than needed to be. Because most of it was hype. Most of the work was either done by people who were panicked, or by companies who knew better but wanted to make their customers happy.

Mason Wheeler (profile) says:

Re: Re: Damned if you do, damned if you don't

Yes, I’m well aware of the nature of the Y2K bug. I’m a computer programmer, and I understand the technical issues and their implications on a pretty low level.

Unfortunately, as I am not also a nuclear engineer, I don’t understand the process whereby it could have caused faults in the control systems for nuclear power plants. But that doesn’t mean that my uncle didn’t.

One of the more interesting things he said was that in some of the power plants, the term “embedded system” was taken quite literally: due to shielding requirements, some of the older computers were sealed away behind layers of lead and concrete, and gaining access to them to fix the Y2K issues required jackhammers!

John Fenderson (profile) says:

Re: Damned if you do, damned if you don't

It reminds me of Y2K.

Me too, but for the opposite reason as you.

Y2K took a real, readily manageable problem and exaggerated it wildly out of proportion. This allowed a small group of people to make a huge amount of money by unnecessarily terrifying people.

“Cyber-terrorism” is the same thing.

At least Y2K didn’t have “cyber” in the name. Nothing screams out technological incompetence like “cyber-“anything.

Jeffrey Nonken (profile) says:

Airplanes will fall out of the sky! OMG!

I always love this claim. They said the same thing in 1999.

– ATC doesn’t need to be connected to the Internet. I presume it isn’t, but I don’t personally know that for sure. If it is, all we have to do is disconnect it.
– ATC doesn’t actually control the airplanes. It just directs them. If you blow up the tower, the airplanes will keep flying. There could be problems at congested airports but the airplanes do not depend on ATC to fly.
– The GPS isn’t connected to the Internet. It just isn’t.
– Most airplanes don’t have fly-by-wire. Your average 737 will continue to obey the laws of aerodynamics even if you fry every electrical system on the airplane. Granted, it will become more difficult to fly. But even if you somehow hacked into their electronics (which ones? How?) the worst you could do is cause temporary confusion until the captain or FO overrode the autopilot.
– The airplanes that do have fly-by-wire aren’t connected to the Internet. Really, I don’t think you can do a web search and find access to the flight going overhead and turn it around with iAirplane running on your iPad.

The airplanes’ electronics are self-contained. They aren’t connected to the Internet any more than the average microwave.

You know that scene in Die Hard II: Even More Unbelievable Than The First Movie where he dials in a 100 foot negative displacement and causes an airplane to crash? Hah hah hah hah! Very funny. The ILS and, more importantly, Glide Slope directional radios are fixed radios on the ground at the ends of the runways. The guy in control of the airport may be able to turn them off, but reconfiguring them would mean physically moving them.

So I’m still waiting to hear how it is airplanes are going to just “fall out of the sky” because somebody in Asia is a hacker.

There’s a broken link in the cause-and-effect chain that you could, say, fly a C-5 through.

TtfnJohn (profile) says:

Re: Airplanes will fall out of the sky! OMG!

Big fellas like 767s do have fly by wire but somehow I don’t think Boeing is stupid enough to connect the control and navigation system(s) to the Internet while in flight.

Crackers are more interested in something easier to break into than a big plane, say the computer systems on Capitol Hill in DC. Thing is, no one may ever notice.

If aircraft start falling out of the sky it’ll be because of flocks of Canada Geese that have been cracked and programmed to fly into the engines of jets en masse. 😉
