The Anti-Turing Test
from the captchas dept
A few months ago someone sent me the following, which I found to be very cool: "... randomising letters in the middle of words [has] little or no effect on the ability of skilled readers to understand the text. This is easy to denmtrasote. In a pubiltacion of New Scnieitst you could ramdinose all the letetrs, keipeng the first two and last two the same, and reibadailty would hadrly be aftcfeed. My ansaylis did not come to much beucase the thoery at the time was for shape and senqeuce retigcionon. Saberi's work sugsegts we may have some pofrweul palrlael prsooscers at work. The resaon for this is suerly that idnetiyfing coentnt by paarllel prseocsing speeds up regnicoiton. We only need the first and last two letetrs to spot chganes in meniang." I wish I had a real source for it, but all I get on a Google search is other sites posting the same quote. Anyway, I was reminded of that when reading this NY Times article about the idea of "Captchas", which are tricks to make sure someone filling out a web form or registration page is really a human, and not a bot. In other words, it's a sort of anti-Turing test. I would think that a system using plenty of misspelled words, like the paragraph above, could easily fool a computer while remaining understandable to humans, and could make a good captcha.
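The scrambling rule the quote describes (keep the first two and last two letters fixed, randomize everything in between) is simple to sketch. Here's a minimal, hypothetical Python version; note it naively splits on whitespace and doesn't handle punctuation specially, and the function names are my own:

```python
import random

def scramble_word(word, rng=random):
    """Keep the first two and last two letters; shuffle the middle."""
    if len(word) <= 4:
        # Too short to have a middle; leave unchanged.
        return word
    middle = list(word[2:-2])
    rng.shuffle(middle)
    return word[:2] + "".join(middle) + word[-2:]

def scramble_text(text, seed=None):
    """Scramble each whitespace-separated word in the text."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())
```

For example, `scramble_text("readability would hardly be affected")` might produce something like the quote's "reibadailty would hadrly be aftcfeed" — the anchored outer letters are what keeps it readable.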