
Chatbot hunts for pedophiles

By Ben Coxworth

July 10, 2013

Negobot at work, pretending to be a gullible young girl


For a number of years now, police forces around the world have enlisted officers to pose as kids in online chat rooms, in an attempt to draw out pedophiles and track them down. Researchers at Spain’s University of Deusto are now hoping to free those cops up for other duties, and to catch more offenders, via a chatbot that they’ve created. Its name is Negobot, and it plays the part of a 14-year-old girl.

“Chatbots tend to be very predictable. Their behavior and interest in a conversation are flat, which is a problem when attempting to detect untrustworthy targets like pedophiles,” says Carlos Laorden, who helped develop the program. “What is new about Negobot is that it employs game theory to maintain a much more realistic conversation.”

Game theory, putting it simply, involves strategic decision-making performed in order to reach a goal. In the case of Negobot, this is achieved through the use of seven different “conversational agents” (or levels) which dictate and change the virtual girl’s behavior in response to the suspected pedophile’s actions.

When Negobot first enters into a chat room conversation, it starts at level 0, in which it’s neutral. If the person doesn’t appear to be interested in talking to the chatbot, it gets more insistent about having a conversation, by introducing topics that will hopefully capture their attention. In doing so, it proceeds through levels -1 to -3.

It’s also possible, of course, that the person could be very interested in chatting Negobot up. Should they start exhibiting “suspicious behavior,” such as not caring about the girl’s age or asking her for personal information, the program enters into levels 1 to 3. Operating at these levels, it attempts to obtain the person’s phone number, email address, social network profile, or anything else that could be used to physically locate them.
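The level mechanism described above can be sketched as a simple state machine. To be clear, this is an illustrative reconstruction, not Negobot's actual code: the cue phrases, the one-step-at-a-time transitions, and the function names are all assumptions made for the example. The real system drives these transitions with game theory across its seven conversational agents.

```python
# Hypothetical sketch of Negobot's level-based dialogue strategy.
# Levels run from -3 (chasing a disinterested user) through 0
# (neutral) to +3 (responding to highly suspicious behavior).
# The cue lists below are invented stand-ins for real classifiers.

SUSPICIOUS_CUES = ("don't care your age", "send me your", "our secret")
DISINTEREST_CUES = ("gtg", "bye", "not interested")

def next_level(level: int, message: str) -> int:
    """Move one step toward +3 on suspicious cues, one step toward
    -3 on disinterest, and stay put otherwise, clamped to [-3, 3]."""
    text = message.lower()
    if any(cue in text for cue in SUSPICIOUS_CUES):
        return min(level + 1, 3)
    if any(cue in text for cue in DISINTEREST_CUES):
        return max(level - 1, -3)
    return level  # neutral message: no change

def strategy(level: int) -> str:
    """Pick the conversational goal for the current level."""
    if level >= 1:
        return "elicit identifying details (phone, email, profile)"
    if level <= -1:
        return "introduce attention-grabbing topics"
    return "chat neutrally"
```

For example, a neutral conversation (level 0) in which the other party says something matching a suspicious cue would move to level 1, where the bot starts steering toward identifying details.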

In order to appear more human-like, Negobot remembers facts about specific people with whom it’s chatted, which it can bring up in subsequent conversations. It also sometimes takes the lead in the conversation (chatbots often only react), varies the amount of time that it takes to respond, and throws in some slang and poor writing. That said, it still doesn’t understand subjective uses of language, such as irony.
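Two of those humanizing tricks, randomized response delay and slang substitution, are easy to sketch. Everything here (the slang table, the substitution probability, the timing range) is an assumption for illustration; the article does not describe Negobot's actual word lists or timing.

```python
# Illustrative sketch of "humanizing" a bot reply: swap some words
# for teen slang and pick a randomized delay before sending.
import random

SLANG = {"you": "u", "are": "r", "see": "c", "okay": "k"}

def humanize(reply: str, rng: random.Random) -> tuple[str, float]:
    """Return the reply with roughly half of the known words
    replaced by slang, plus a delay (seconds) before sending."""
    words = [SLANG.get(w, w) if rng.random() < 0.5 else w
             for w in reply.split()]
    delay = rng.uniform(2.0, 15.0)  # humans don't answer instantly
    return " ".join(words), delay
```

Passing in a `random.Random` instance keeps the behavior reproducible under a fixed seed, which is useful when testing this kind of stochastic output.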

The university has “field tested” Negobot on Google’s chat service, and has entered into a collaborative agreement with the Basque Country police force, which is interested in implementing the technology.

Source: Plataforma SINC (Spanish)

About the Author
Ben Coxworth An experienced freelance writer, videographer and television producer, Ben's interest in all forms of innovation is particularly fanatical when it comes to human-powered transportation, film-making gear, environmentally-friendly technologies and anything that's designed to go underwater. He lives in Edmonton, Alberta, where he spends a lot of time going over the handlebars of his mountain bike, hanging out in off-leash parks, and wishing the Pacific Ocean wasn't so far away.
9 Comments

Any attempt to capture pedophiles is welcome, yet I am uncomfortable with the idea of removing the authorities from the coal face. I am grateful for the difficult work police do in this regard. What they deal with is horrendous, yet they do a vital job of protecting our kids by being proactive. There are untold thousands of children that will avoid abuse and rape as a consequence of their efforts.

No matter how good a bot is, it will never be as good as a trained human when it comes to pursuing these criminals.

A sincere thank you to the good people at Task Force Argos.

Please let your kids know there is no shame in speaking out if anyone is inappropriate with them. Everyone should know if they want to pursue children for illicit purposes they will be caught.

Australian
10th July, 2013 @ 03:30 pm PDT

Just what we need! Robotic entrapment! What will they think of next?

/sarcasm

Anne Ominous
10th July, 2013 @ 03:41 pm PDT

This reminds me a little bit of ELIZA, which was written at MIT in the 60's. It compensated for lack of spatial context by being inquisitive to draw out a response from the other party, parroting a psychologist, sort of. (Questions like "Does talking about this bother you?" etc.)

About 13 years ago a guy hooked it up to AIM and posted the conversations and some of them are kind of amusing. It is 404 now but Waybackmachine has it here: http://web.archive.org/web/20010223222122/http://fury.com/aoliza/

Some of them were kind of amusing. If you can fool some idiots into talking to a hacked together script from the 60's I am sure you can get ASL tards to trade dirty pictures with something you could build with a team of people today.

Daishi
10th July, 2013 @ 10:46 pm PDT

I am in no way defending or condoning pedophiles but when did having an inappropriate online conversation with a computer program become illegal or am I missing something here?

Robbo
10th July, 2013 @ 11:38 pm PDT

Wait till they will start to use this program to locate political dissidents.

Kris Lee
11th July, 2013 @ 03:40 am PDT

Robbo:

Having an inappropriate online conversation with a computer program is not illegal, but it will definitely flag the person and make them the target of a more customary investigation.

kpkpkp
11th July, 2013 @ 08:37 am PDT

It's easy to detect if the person you're talking to is not an actual person, but a script, even if the conversation is somewhat randomized. When you ask a question and the bot can't THINK of a valid response, then it's a dead giveaway.

This is why the cops will always be better than a machine in their place.

Cops have the capability of independent thought. ...not that they actually do, but they have the capability.

mrvillan
11th July, 2013 @ 08:39 am PDT

If peds talk to an adult thinking it's a kid, they'll talk to a robot.

junbug20
11th July, 2013 @ 09:12 am PDT

This is the beginning of the slippery path that is an Orwellian future. It's Big Brother by definition.

A robot is stalking you. Anyone who thinks such a program couldn't be abused by a government (let's take the USA for example, because they want to get all hard on homeland security, and yet a lot of their people still talk a lot about freedoms they believe in under the Declaration of Independence) or by any other sort of organisation (if anyone thinks the police, Interpol or even the FBI haven't been hacked, better go look around), with the program taken and used in other ways, is dreaming.

The manufacture of such a product should be illegal. Criminals should be pursued by human beings and human beings only.

What is the intent of the end of the conversation with the bot? To discourage the behavior and watch for further repetition or leave the person in active suspense of another encounter? What if that is the trigger for the person to take the behavior offline and into the real world? What if the bot is that trigger?

Does it watch every chat room going on everywhere? Does it leave predictive responses that would, in this case, train hebephiles to learn either bot- or even police-derived responses to approaches, and so teach them who to avoid and who to approach? What are they going to do, on an international service, about people who are in or from countries whose age of consent is 14 or even younger? Human rights groups would have a field day.

There are too many questions left open by such a system even in this case of thinking that it may do some good about even the most heinous of crimes let alone the further/future use and abuse of such a program.

Bots should never be involved in behavioral crimes. They can't detect nuance in conversation, sarcasm, or someone throwing any kind of conversation at them just to screw with them; that person will come under investigation or end up on a "list", the program will be challenged, and it will be thrown out by the world court on the basis of its inaccuracy.

I for one am very anti-bot in the area of human behavior, and particularly attempts to replicate it, and I study bots and new developments in them. They aren't ready now, and I don't believe they will ever be ready in the future of known technology.

Anne Yonimus
12th July, 2013 @ 01:45 pm PDT