DailyDirt: Making Computers More Like Humans And Vice Versa
from the urls-we-dig-up dept
As artificial intelligence projects get more advanced, the question of how to measure general intelligence becomes increasingly important. Tests such as beating humans at chess or conversing naturally with people are somewhat crude ways to judge improvements in silicon-based cognition. And as many point out, when AI projects do succeed in beating humans at chess (or other intelligent tasks), people move the goalposts and say that chess isn't really an intelligent task, or that the computer's approach is fundamentally different from a human mind's. Here are just a few links about humans and computers improving by copying off each other.
- Chess-playing computers can beat over 99% of the human population at the game (though humans are arguably better at recognizing a draw). This hasn't decreased the popularity of chess; in fact, it's made humans play more like computers, as they learn new tricks from chess programs. [url]
- Silicon-based computers are quite energy inefficient as they perform calculations, but how efficient are human brains or other biological computation mechanisms? There are fundamental computational limits for any kind of computer (silicon based or DNA based), but we’ve only started to quantify the biological limitations. [url]
- Silicon chips designed to mimic the known mechanisms of brain neurons and synapses could create an artificial neural network that processes a multitude of parallel instructions simultaneously. IBM is working on this kind of artificial brain that will require a completely different kind of programming, but it may produce a machine that processes information more like biological systems do (and make more mistakes?). [url]
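The "fundamental computational limits" mentioned above can be made concrete with a back-of-the-envelope calculation. One well-known limit is the Landauer bound: the minimum energy any physical computer, silicon or biological, must dissipate to erase one bit of information at a given temperature. A minimal sketch (the room-temperature figure is a standard textbook value, not taken from the linked article):

```python
import math

# Landauer bound: erasing one bit costs at least k_B * T * ln(2) joules.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

landauer_joules_per_bit = k_B * T * math.log(2)
print(f"Landauer bound at 300 K: {landauer_joules_per_bit:.3e} J per bit")
# Real silicon spends many orders of magnitude more energy per bit than
# this floor, which is part of why biological efficiency is interesting.
```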
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: ai, artificial brain, artificial intelligence, calculations, chess, computational limits, neural networks
Comments on “DailyDirt: Making Computers More Like Humans And Vice Versa”
Computers vs. humans
Far more important to discuss is the problem outlined in the Sunday Boston Globe, “MONEY FOR ALL”. Because there will never be jobs for all people, due to technology continually taking so many away, there is a proposal for a “guaranteed basic income”. I say not until we set a very different foundation in education, wherein each person is taught to value something in themselves, especially in the arts and creative ideas – something all human brains can do, but computers cannot!
This is an issue with a very long history, and both sides have a point. In programming, the compiler (specifically the first step of compilation, the parser, which reads source code written in a human-readable language and determines its meaning) was once thought to be an AI task, because it takes intelligence to read language and understand its meaning. Then someone took a look at Chomsky’s research on formal grammar classes and realized that the part of understanding human languages that requires intelligence is handling context and exceptions to the rules of the language. They started working on computer languages with formally specified, context-free grammars, and suddenly it didn’t require intelligence to read them anymore; just a well-defined set of rules that both the compiler and the programmer understand.
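The point about context-free grammars can be seen in miniature: once the grammar is formally specified, each rule maps mechanically onto a function, with no "understanding" involved. A toy recursive-descent parser for arithmetic expressions (the grammar here is my own illustration, not from any particular compiler):

```python
import re

# Grammar (context-free, no ambiguity, no exceptions):
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'

def tokenize(src):
    # Split the input into numbers and operators; reject anything else.
    tokens = re.findall(r"\d+|[-+*/()]|\S", src)
    for t in tokens:
        if not re.fullmatch(r"\d+|[-+*/()]", t):
            raise SyntaxError(f"unexpected token: {t!r}")
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):
        # One function per grammar rule -- pure rule-following.
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        return int(self.eat())

def evaluate(src):
    parser = Parser(tokenize(src))
    result = parser.expr()
    if parser.peek() is not None:
        raise SyntaxError("trailing input")
    return result

print(evaluate("2 + 3 * (4 - 1)"))  # 11
```

Nothing here resembles intelligence: precedence and nesting fall out of which rule calls which, exactly because the grammar was designed to need no context.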
Chess is very similar, and really so is anything that you can reduce to a well-defined set of rules with no exceptions. That doesn’t require intelligence to process; it just requires someone intelligent to hard-code a system for following the rules in a useful manner.
If you want to see real AI research, look at natural language processing. Natural languages (English, Spanish, Japanese, etc.) are highly contextual and often break their own rules, yet people can still understand grammatically incorrect (rule-breaking) sentences because of that context. There’s no way to reduce that to a simple rule set. So the things to keep an eye on are systems like IBM’s Watson and Google’s search engine, which can use context to catch when you’ve likely made a mistake and suggest an alternative.
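The "did you mean?" behavior mentioned above can be sketched at its simplest: compare the user's word against a vocabulary by edit distance and offer the closest match. This is only the mechanical core; real search engines layer statistical language models and query context on top, and the vocabulary below is made up for illustration:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word, vocabulary, max_distance=2):
    # Return the closest vocabulary word within max_distance, if any.
    best = min(vocabulary, key=lambda v: edit_distance(word, v))
    return best if edit_distance(word, best) <= max_distance else None

VOCAB = ["grammar", "context", "language", "intelligence", "compiler"]
print(suggest("gramer", VOCAB))  # grammar
```

The hard part that makes this an AI problem is not the distance metric but deciding, from context, which of several nearby words the user actually meant.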