by Mike Masnick
Fri, Sep 19th 2008 7:55am
With everyone trying to figure out just what went wrong to cause the rather spectacular financial mess Wall Street finds itself in these days, Saul Hansell over at the NY Times wanted to find out why all the sophisticated risk management algorithms that Wall St. has been so big on lately failed to warn of impending doom. His answer, basically, is that people on Wall St. were lying to the algorithms -- purposely entering data in ways that made the risk look much lower than it actually was, so they could keep pushing the boundaries. From there, it became a situation where people relied on the computers just because the computer said so, even though the underlying data was bad.

This happens time and time again. Even when people know that computers make mistakes, it's just so convenient to have a computer "confirm" your thinking that you start ignoring other warning signs.
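To make the garbage-in-garbage-out point concrete, here's a toy sketch -- not any actual Wall Street model; the value_at_risk function, the numbers, and the "massaging" step are all invented for illustration -- showing how quietly dropping the worst observations from a return history makes a standard-looking risk estimate come back reassuring:

```python
import statistics

def value_at_risk(returns, z=1.645):
    """Toy parametric VaR at ~95% confidence, assuming roughly
    normal returns. The estimate is only as honest as the
    return series fed into it."""
    mean = statistics.mean(returns)
    stdev = statistics.stdev(returns)
    # Worst expected loss at the given confidence level
    return -(mean - z * stdev)

# An honest history includes the ugly days.
honest = [0.01, -0.02, 0.015, -0.08, 0.005, -0.05, 0.02]

# A "massaged" history quietly drops the worst observations
# before they ever reach the model.
massaged = [r for r in honest if r > -0.04]

print(f"honest inputs:   VaR ~ {value_at_risk(honest):.1%}")    # ~7.7% -> model warns
print(f"massaged inputs: VaR ~ {value_at_risk(massaged):.1%}")  # ~2.0% -> model says all clear
```

The model isn't broken in either case -- it faithfully computes a risk number from whatever it's given. That's exactly the problem: a clean-looking output says nothing about whether the inputs were honest.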