
Cornell develops beer-pouring robot that anticipates people's actions

Gallery (4 images): the anticipatory robot pouring beer; robot's view and anticipation of a person approaching a fridge; the anticipatory robot helping to open a fridge door; robot's view and anticipation of a person picking up a cup

What’s better than a robot bartender that can pour you a beer? How about a robot waiter that sees you need a refill and comes over to pour you another one? Hema S. Koppula, a Cornell graduate student in computer science, and Ashutosh Saxena, an assistant professor of computer science, are working on just such a robot at Cornell’s Personal Robotics Lab. They've programmed a PR-2 robot not only to carry out everyday tasks, but to anticipate human behavior and adjust its actions accordingly.

Robots are the neat freaks of the technology world. They like things to be tidy, orderly and predictable, meaning they work best in places like laboratories and factories where everything can be controlled and where it’s easy to predict what’s going to happen next. When a robot moves out of its comfort zone into our imperfect world, it can run into difficulties. Even something as seemingly simple as noticing that someone’s glass is empty and topping it up requires a lot of observation and planning on the robot’s part.

Unfortunately, people may unintentionally hinder the robot by moving their glass as the robot goes to top it up. This could get very messy, so the robot needs to anticipate possible human actions and adjust accordingly. If it sees someone reaching for the cup, the robot has to know when to stop trying to pour.

Robot's view and anticipation of a person approaching a fridge

The Cornell anticipatory robot avoids embarrassing spills and other accidents by using its Microsoft Kinect scanner to build a 3D map of the objects present, then calculating how those objects might be used given the action the person is currently performing.
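The article doesn't say how those object-action associations are represented, but the idea can be sketched as a simple affordance lookup. In the hypothetical Python below, every label and function name is an illustrative assumption rather than the Cornell system's actual code:

```python
# Hypothetical affordance lookup: which detected objects are consistent
# with the action the person is currently performing. All labels here
# are illustrative assumptions.
AFFORDANCES = {
    "cup": {"reachable", "pourable", "movable"},
    "bottle": {"reachable", "pourable", "movable"},
    "fridge": {"reachable", "openable"},
}

ACTION_TO_AFFORDANCE = {
    "reaching": "reachable",
    "pouring": "pourable",
    "opening": "openable",
    "carrying": "movable",
}

def candidate_objects(detected: list[str], current_action: str) -> list[str]:
    """Return the detected objects whose affordances fit the observed action."""
    needed = ACTION_TO_AFFORDANCE.get(current_action)
    return [obj for obj in detected if needed in AFFORDANCES.get(obj, set())]

# candidate_objects(["cup", "fridge"], "pouring") -> ["cup"]
```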

The robot manages this by means of a database of 120 3D videos of people performing everyday household tasks, from which it reduces each person's movements to a symbolic skeleton. It then classifies these skeletons into subactivities, such as reaching, pouring and carrying, while associating different objects with different actions.
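As a rough sketch of that skeleton-classification step, the toy Python below reduces a frame of Kinect joint positions to torso-relative features and picks the nearest subactivity centroid learned from training videos. The nearest-centroid classifier and the feature choice are assumptions for illustration; the article doesn't specify the model the researchers actually use:

```python
import numpy as np

def frame_features(joints: np.ndarray) -> np.ndarray:
    """Flatten 3D joint positions relative to the torso into a feature
    vector; `joints` is an (n_joints, 3) array from the skeleton tracker,
    with the torso assumed to be joint 0."""
    return (joints - joints[0]).ravel()

def classify_subactivity(joints: np.ndarray,
                         centroids: dict[str, np.ndarray]) -> str:
    """Label a frame with the subactivity whose training centroid
    (mean feature vector over the video database) is nearest."""
    feats = frame_features(joints)
    return min(centroids, key=lambda s: np.linalg.norm(feats - centroids[s]))
```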

The robot is also able to put various subactivities together in different combinations to form models of larger activities that it can use to anticipate the movements of people in different situations. The models it builds are general enough to take into account the fact that different people will perform the same activity slightly differently.
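A minimal stand-in for such composition is a transition table over subactivities, learned by counting which subactivity tends to follow which in labeled sequences. The Markov-chain structure below is an illustrative assumption, not a description of the team's actual model:

```python
from collections import Counter, defaultdict

def learn_transitions(sequences: list[list[str]]) -> dict[str, dict[str, float]]:
    """Count which subactivity follows which in labeled training
    sequences, then normalize the counts into probabilities."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

# learn_transitions([["reaching", "carrying", "pouring"],
#                    ["reaching", "pouring", "placing"]])
# -> {"reaching": {"carrying": 0.5, "pouring": 0.5},
#     "carrying": {"pouring": 1.0}, "pouring": {"placing": 1.0}}
```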

When it observes a scene of objects and a person performing an activity, the robot generates a number of possible continuations, then calculates which is the most likely. These predictions are continually updated and refined as the action unfolds.
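In spirit, that is a running Bayesian update over candidate continuations: each new observation re-weights the candidates by how well it fits them. The sketch below assumes made-up continuation labels and a stand-in likelihood table; the real system derives these scores from its learned activity models:

```python
def update_beliefs(beliefs: dict[str, float],
                   likelihoods: dict[str, float]) -> dict[str, float]:
    """One update step: multiply the prior belief in each continuation
    by the likelihood of the newest observation under it, then renormalize."""
    posterior = {c: beliefs[c] * likelihoods.get(c, 1e-9) for c in beliefs}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

beliefs = {"pour_refill": 0.5, "move_cup_away": 0.3, "drink": 0.2}
# A hand reaching toward the cup fits moving or drinking far better
# than a hands-off refill, so the ranking flips:
beliefs = update_beliefs(beliefs, {"pour_refill": 0.1,
                                   "move_cup_away": 0.6,
                                   "drink": 0.5})
print(max(beliefs, key=beliefs.get))  # -> "move_cup_away"
```

Under this view, the stop-pouring behavior described earlier falls out naturally: once a continuation such as moving the cup dominates, the robot knows to abort the pour.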

However, its predictive capabilities are still limited and depend on how far into the future it has to anticipate. In tests carried out by the researchers, the anticipatory robot correctly predicted actions 82 percent of the time when looking one second ahead, 71 percent at three seconds, and 57 percent at 10 seconds. So there's still some time before robot waiters show up at your local restaurant.

The anticipatory robot helping to open a fridge door

“Even though humans are predictable, they are only predictable part of the time,” Saxena said. “The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond.”

The anticipatory robot project’s findings will be presented at the International Conference on Machine Learning, June 18 to 21 in Atlanta, Georgia, and at the Robotics: Science and Systems conference, June 24 to 28 in Berlin.

The video below shows the anticipatory robot in action.

Source: Cornell University

Human Activity Anticipation

5 comments
steelnerves
Oh wow so that bar scene from I, Robot might actually be a common occurrence in the future
thk
I see the progress of a robot bartender worthy of the looks, feel and movement to tend to patrons in a VIP room is still steep.
Jim Sadler
Actually we were doing this in 1985. A weight sensor signaled that the container needed refilling. The container and its contents were both replaced if the weight dropped below a certain point. Filling the container in place would have been easy enough but a situation would have to have a sequence of centering the target which the robot could do and then adding liquids or solids. Back then the great issue was making sure a human did not intrude on the working sweep of the robot as the arms had great mass and moved as fast as the head of a golf club. Those beasts could splatter a human head all over the place and sometimes did.
warren52nz
I think this sort of thing is great. I wish people wouldn't come off so negative on research. It's obviously VERY tricky to emulate human behaviour and it's only going to be achieved with small steps ironing out all the intricate detail. The developers of such technology will have the last laugh on those that scoff at the incremental progress!
Stephen N Russell
Can adapt to liquor, wines too??