This Chatbot Catches Pedophiles

from the technology-on-the-prowl dept

We’ve written about kids teaching FBI agents how to chat online like teens to help catch online predators, and about how chatbots are fooling lots of people – especially when they’re expecting something specific. It sounds like someone has put the two ideas together and created a chatbot that tries to catch pedophiles. It goes into chat rooms and starts talking like an ordinary teen, but looks for classic signs of an adult predator trying to find a child. If the chatbot suspects something is up, it sends an email to the bot’s creator with the relevant transcript. He then reads the transcript to see if the situation looks suspicious, and contacts local police with all the info if it does. He calls the various bots ChatNannies, and claims no one has figured them out and that they’ve helped with police investigations (though there’s no proof of either claim). As for staying relevant, the chatbot apparently tries to learn from the conversations it’s involved in, as well as surfing the web for other pop culture information. The article also includes a “sample chat” that seems fairly sophisticated for a bot – which actually makes me wonder how true it really is – though I haven’t kept up with the state of the art in chatbots lately. Are they really that sophisticated?
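The article never explains how the bot decides a conversation is suspicious, but the workflow it describes – flag suspicious chat, then forward the transcript to a human for review – can be sketched roughly. The phrase list, scoring, and threshold below are invented for illustration; ChatNannies' actual detection method was never published.

```python
# Hypothetical sketch of a flag-and-forward review loop.
# The suspicion heuristics here are illustrative assumptions,
# NOT the real ChatNannies method (which was never disclosed).

SUSPICIOUS_PHRASES = [
    "how old are you", "are you alone", "don't tell your parents",
    "send a picture", "where do you live",
]

def suspicion_score(transcript):
    """Count occurrences of suspicious phrases across a chat transcript."""
    text = " ".join(transcript).lower()
    return sum(text.count(phrase) for phrase in SUSPICIOUS_PHRASES)

def review_transcript(transcript, threshold=2, notify=print):
    """If the transcript crosses the threshold, forward it to a human
    reviewer (here just a callback; the real bot emailed its creator)."""
    if suspicion_score(transcript) >= threshold:
        notify("\n".join(transcript))
        return True
    return False
```

The key design point in the article is that the bot never contacts police itself: it only escalates to a human, who makes the final judgment call.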



Comments on “This Chatbot Catches Pedophiles”

Ed Halley says:

No Subject Given

I don’t know the chat bots discussed here in particular, but I have considered a double-blind approach to bots appearing “human” while sitting in a number of chat channels.

In this technique, the bot joins as person Alice on network A.NET, and as person Betty on a different network, B.NET. It picks a real, unrelated participant on A.NET (we’ll call her Annie) and parrots most of Annie’s speech on B.NET, so Betty says whatever Annie says. It likewise picks a random participant Bernice from B.NET and parrots most of Bernice’s speech as Alice on A.NET.

Once this blind is constructed, either Alice or Betty can add comments to goad conversation in their respective chat groups. If the bot detects a directed question, like “alice, a/s/l?”, it can respond directly according to the bot’s goals, or it can just funnel the question to the alter ego on the other network to fetch a genuinely human reply from the real Bernice.
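The relay described above can be sketched minimally, with in-memory message lists standing in for real chat networks. The class name, the participant names, and the comma-split question heuristic are all illustrative assumptions, not code from any actual bot.

```python
# Sketch of the double-blind parrot technique: each bot persona
# repeats a chosen real user's speech from the *other* network.
# Networks are replaced by plain lists; everything here is illustrative.

class ParrotBot:
    """Appears as `persona` on one network, parroting `source` from the other."""

    def __init__(self, persona, source):
        self.persona = persona   # name the bot uses on this network
        self.source = source     # real user being parroted from the other network
        self.outbox = []         # lines the bot will say here

    def hear_from_other_network(self, speaker, line):
        # Parrot only the chosen participant's speech.
        if speaker == self.source:
            self.outbox.append(line)

    def hear_directed_question(self, line, forward):
        # A question aimed at the bot ("alice, a/s/l?") can be funneled to
        # the alter ego, so the real user on the other side answers it.
        forward(line.split(",", 1)[1].strip())

# Alice on A.NET parrots Bernice (on B.NET); Betty on B.NET parrots Annie (on A.NET).
alice = ParrotBot("Alice", source="Bernice")
betty = ParrotBot("Betty", source="Annie")

alice.hear_from_other_network("Bernice", "anyone see the game last night?")
betty.hear_from_other_network("Annie", "lol that movie was great")
```

The appeal of the design is that most of each persona's output is genuinely human speech, so neither chat group has a purely machine-generated bot to detect.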
