NSF Funds Study On Avoiding The Perils Of Monoculture
from the should-be-interesting dept
While some people are getting fired for pointing out the risks of a monoculture computing environment, it appears the National Science Foundation considers it a big enough problem to grant $750,000 to two universities to try to “solve” the monoculture computing problem. The idea is to figure out a way to automate diversity within programs. I have no clue how this might work – but it appears they want to create a system that will take software applications and “generate diversity in key aspects” of the programs. I understand the reasoning for this, but it seems like an odd idea to take an application and then purposely mess it up. I’m assuming there’s a lot more to it than that, so if anyone knows more about this project, please speak up. There’s a little more information on the websites of the two professors (Stephanie Forrest at the University of New Mexico and Michael Reiter at Carnegie Mellon), but not too much about what this particular grant is likely to be used for.
Comments on “NSF Funds Study On Avoiding The Perils Of Monoculture”
I’ve always thought that communications protocols should “evolve” — that is, the two sides should gradually negotiate a more compact protocol appropriate to the transmission actually in process. Sort of how people who talk together a lot evolve a private shorthand (called a “jargon” in the technical literature — a perfect example of the phenomenon!).
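The negotiation described above could be sketched roughly like this. This is a minimal, hypothetical illustration (the class name `Jargon` and the `#N=phrase` wire format are made up for the example): both endpoints keep a table of phrases they've seen, a phrase is sent in full the first time along with a proposed short code, and only the code is sent after that.

```python
class Jargon:
    """One endpoint of an evolving shorthand protocol.

    Both sides run the same logic, so the code tables stay in
    sync: codes are assigned deterministically in first-use order.
    """

    def __init__(self):
        self.enc = {}  # phrase -> short code (sender side)
        self.dec = {}  # short code -> phrase (receiver side)

    def send(self, phrase):
        # Already negotiated: transmit only the compact code.
        if phrase in self.enc:
            return self.enc[phrase]
        # First use: transmit the full phrase and propose a code.
        code = f"#{len(self.enc)}"
        self.enc[phrase] = code
        return f"{code}={phrase}"

    def recv(self, msg):
        # A "#N=phrase" message introduces a new code.
        if msg.startswith("#") and "=" in msg:
            code, phrase = msg.split("=", 1)
            self.dec[code] = phrase
            return phrase
        # A bare code looks up the previously learned phrase.
        return self.dec.get(msg, msg)


alice, bob = Jargon(), Jargon()
first = alice.send("requesting weather report")   # full phrase + code
print(bob.recv(first))                            # "requesting weather report"
second = alice.send("requesting weather report")  # just "#0" now
print(bob.recv(second))                           # "requesting weather report"
```

The “jargon” shrinks traffic only for phrases that actually repeat, which is exactly the behavior the comment describes: the protocol compacts itself around the conversation actually in progress.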
I beg to differ
There’s not a lot on the first layer of their respective web pages, but if you look a little deeper on Dr. Forrest’s site, you’ll find this, which contains a link to this. Note: this is a DVI-formatted document.
It appears to me that on some level they may be trying to either pick up or clean up where the Orange Book left off (depends on how you look at it, I guess). Diversity is looked at not for its own sake, but as a way of protecting computer systems. The tech monoculture gets the blame, but my read on this is that they are trying to address the security problems it creates by fixing the boxes, not the culture itself.
From what I can tell of her writing, Stephanie Forrest has provided a conceptual foundation, while Dr. Reiter appears to be a prolific numbers cruncher of the highest order and would be invaluable in laying down the mathematical foundation required for a project like this. They will need that in particular, since the concept in its current incarnation appears to depend heavily on randomization of programming elements that are “needlessly predictable” – which raises the question of how this is to be done in a way that is both hard to predict and hard to reverse engineer.
Someone must have looked at Professor Forrest’s first paper and thought, hmm, this could be useful, but it’s not all the way there yet.
I am not familiar with this particular effort, but in general the idea is that you can rearrange a program’s instructions and memory layout to mitigate stack-smashing attacks while still allowing the application to function normally. Many exploits use buffer-overrun errors to overwrite the return address on the stack so that it points to a specific region where malicious code has been inserted. Right now, because of the monoculture in desktop operating systems, the exploit will work consistently for all computers running the same applications (oftentimes even under different revs of the OS). By randomly reordering instructions or shifting offsets, you can potentially limit how far a particular virus/worm can spread, because the stack will be slightly different on each machine.
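To see why layout diversity blunts a worm, here is a toy simulation (not the grant’s actual mechanism, just an illustration of the principle). An attacker hardcodes the offset of the return address from the start of a vulnerable buffer; on uniform machines that guess always lands, but if each machine inserts a random amount of padding, the guess only lands on the fraction of machines whose padding happens to match. The buffer size, padding range, and function names here are all made up for the example.

```python
import random

def return_address_offset(randomize, rng):
    # Offset of the saved return address from the start of a
    # vulnerable 64-byte buffer. Without diversity, every machine
    # uses the identical, fixed layout.
    base = 64
    pad = rng.randrange(0, 256, 8) if randomize else 0  # 32 possible pads
    return base + pad

def exploit_lands(ret_offset, hardcoded_guess=64):
    # The worm's payload overwrites the word at a hardcoded offset;
    # it only seizes control when that guess matches the real layout.
    return ret_offset == hardcoded_guess

def infection_rate(n_machines, randomize, seed=0):
    rng = random.Random(seed)
    hits = sum(
        exploit_lands(return_address_offset(randomize, rng))
        for _ in range(n_machines)
    )
    return hits / n_machines

# Monoculture: identical layouts, so the exploit works everywhere.
print(infection_rate(1000, randomize=False))  # 1.0
# Diversified: only machines whose random pad happens to be 0 fall.
print(infection_rate(1000, randomize=True))   # roughly 1/32 of them
```

With 32 possible pad values, the expected hit rate drops from 100% to about 3%, which is the sense in which per-machine diversity limits how far one exploit can propagate.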