
Algorithm lets disaster response robots discern between humans and rubble

Thanks to their ability to navigate tight spaces and unstable environments without putting people at risk, robots hold great promise for disaster response. Researchers from Mexico's University of Guadalajara (UDG) have developed an algorithm that could come in handy in such situations by allowing robots to differentiate between people and debris.

The team used a robot with a form factor similar to that of iRobot's 110 FirstLook, but without that robot's self-righting capability. Equipped with motion sensors, cameras, and laser and infrared systems, the robot is able to plot paths through an environment or create a 2D map of it. But it is the inclusion of a flashlight and a stereoscopic HD camera that allows it to obtain images of its environment and recognize whether there are any people within it.

It does this by using the HD cameras to scan the surrounding area. The captured images are then cleaned up, and patterns of interest are isolated from surroundings such as rubble. A descriptor system then segments the 3D points, assigning numerical values that represent the shape, color and density of the objects in the captured images.
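
The article doesn't give the descriptor's exact form, but a minimal sketch of the idea might look like the following, assuming the stereo camera yields segmented clusters of 3D points with per-point colors. The specific features used here (bounding-box extent, mean color, point density) are illustrative stand-ins, not UDG's published method.

```python
# A minimal sketch of the descriptor step, assuming a segmented cluster
# of 3D points with per-point RGB colors from the stereo camera. The
# features below (bounding-box extent, mean color, point density) are
# illustrative assumptions, not UDG's actual descriptor.
import numpy as np

def describe_segment(points: np.ndarray, colors: np.ndarray) -> np.ndarray:
    """Return a numeric descriptor for one segment.

    points: (N, 3) array of 3D coordinates
    colors: (N, 3) array of RGB values for the same points
    """
    extent = points.max(axis=0) - points.min(axis=0)  # rough shape: bounding-box size
    mean_color = colors.mean(axis=0)                  # dominant color of the segment
    volume = max(float(np.prod(extent)), 1e-9)        # guard against zero volume
    density = len(points) / volume                    # points per unit volume
    return np.concatenate([extent, mean_color, [density]])
```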

The segments are then merged to create a new image, which is passed through a filter that determines whether or not it contains a human silhouette. The whole system can be integrated into the robot itself, or the algorithm can be run on a separate laptop, with the robot controlled wirelessly.
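
The filter itself isn't spelled out in the article; as a rough sketch of the merge-and-filter stage, the code below combines segments into a binary mask and applies a simple aspect-ratio test as a placeholder for the real silhouette filter. The pixel-mask representation and thresholds are assumptions.

```python
# A sketch of the merge-and-filter stage, assuming each segment carries
# the (row, col) pixel coordinates it covers in the source image. The
# aspect-ratio test is a placeholder for the article's silhouette filter,
# whose exact form isn't described.
import numpy as np

def merge_segments(segments, image_shape):
    """Combine segment pixel lists into one binary mask image."""
    merged = np.zeros(image_shape, dtype=bool)
    for seg in segments:                 # seg: (M, 2) int array of (row, col) pixels
        merged[seg[:, 0], seg[:, 1]] = True
    return merged

def looks_like_human(mask, min_ratio=1.5, max_ratio=4.0):
    """Crude stand-in filter: human silhouettes are taller than they are wide."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return False
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return min_ratio <= height / width <= max_ratio
```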

"Pattern recognition allows the descriptors to automatically distinguish objects containing information about the features that represent a human figure," says Arana Daniel, researcher at the University Center of Exact and Engineering Sciences (CUCEI) at the UDG. "This involves developing algorithms to solve problems of the descriptor and assign features to an object."

The silhouettes will also be used to train a neural network to recognize patterns. This network, called CSVM, was developed by Arana Daniel and can be used to recognize not only human silhouettes, but also fingerprints, handwriting, faces, voice frequencies and DNA sequences.
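
CSVM's internals aren't described in the article, so the sketch below substitutes a generic scikit-learn neural network (MLPClassifier) to illustrate the training idea: descriptor vectors labeled human or rubble. All data here is synthetic and purely illustrative.

```python
# Stand-in for the CSVM training step, since its architecture isn't
# given: a generic scikit-learn neural network fit on descriptor
# vectors labeled human (1) or rubble (0). Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))        # hypothetical 7-value descriptors (see sketch above)
y = rng.integers(0, 2, size=200)     # hypothetical human/rubble labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf.fit(X, y)

# New segments would be described the same way, then classified:
print(clf.predict(X[:5]))
```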

The team plans to continue developing the robot, with the goal of training it, in a way that mimics the human learning process, to automatically classify human shapes based on previous experience.

Source: University of Guadalajara via Alpha Galileo

1 comment
Bob Flint
Can it clamber over rubble whilst listening for the muffled cries or heartbeat of an unconscious victim, as a trained dog could? Can it also sniff, and alert the human counterparts of the rescue team?
If it was me, I would opt for the four-legged canine, with his keen sense of smell, sight, hearing, and speed... while the geeks sit in front of their monitors directing the robot, trying to reach areas that only a four-footed animal can, then awaiting the images being transferred and analyzing the few scans they could get...