NTT Working On Subvocalizing
from the cool dept
I recently reread the Ender’s Game series, and I was thinking how cool the “subvocalizing” concept was, where some of the characters in the books could talk to the computer system simply by moving their mouths, rather than actually saying the words. Of course, I didn’t think something like that would become a reality – but it seems that researchers at NTT DoCoMo are working on just such a system. It’s a sensor that would detect your jaw movements and interpret what you want to say just from the motion of your mouth – you wouldn’t have to make any noise at all. They’re still working on it and don’t expect anything to be available for at least another five years, but it’s a pretty cool concept.
Comments on “NTT Working On Subvocalizing”
Jaw movements...
…what about tongue movements? And is the system versatile enough to detect the nasal difference between, say, “L” and “N”? Sounds pretty interesting to me…
In 2001...
HAL could read lips, right? And we’ve touched on this subject here before, Mike.
And if you think it’s weird seeing people walking around talking to themselves when they’re actually on hands-free phones, I’m not sure how weird it will look when those people are just mouthing silently…
heh...
Eliminate the middleman and get one computer to “talk” to another with this (software that turns spoken English into an avatar good enough that the deaf can read its lips).