
Disney develops "face cloning" technique for animatronics

By David Szondy

August 15, 2012


Face cloning: steps in modeling the digital face and the final result in silicone (Image: Disney)


The “uncanny valley” is one of the frustrating paradoxes of robotics. Every year, roboticists build humanoid robots that imitate human beings more accurately, yet it turns out that the better the imitation, the creepier the result. It’s that strange, hair-raising sensation one gets when visiting Disney's Hall of Presidents. True, George Washington and Abraham Lincoln look very lifelike, but there’s always something subtly wrong that you can’t quite put your finger on. In the hope of bridging this valley, a Disney Research team in Zurich, Switzerland, has invented a new robot-making technique dubbed “face cloning.” This technique combines 3D digital scanning and advanced silicone skins to give animatronic robots more realistic facial expressions.

Facial cloning sounds rather alarming, but its purpose is very straightforward. Basically, it’s a way of scanning a person’s face in 3D and then using that information to design and fabricate an artificial skin that will move much more realistically - not just in general, but as a close imitation of the original person right down to the wrinkles made while laughing.

The process uses scanning and digital processing techniques already employed in the creation of CGI characters. This isn’t surprising, since animatronics and digital animation share the same goal of creating realistic characters. Animatronics differs, however, in that it uses soft materials fixed to a rigid skeleton that moves the “skin” around it. Over the past fifty years there has been considerable success in creating humanoid robots, but the results have not been as realistic as desired. Worse, the methods for designing and constructing animatronic characters are expensive and labor intensive. The Disney team’s goal was to produce a single skin that can reproduce the vast range of human expressions, and to automate that process as much as possible.

The process they developed starts with using motion capture technology to scan a person’s head while the subject runs through a “performance” of various facial expressions. The resulting 3D scan is then used to produce a digital “mesh” - a sort of map of the face. This is used to design what the team called an optimized model of the robot head. This model defines the robot’s range of movements and locates the optimum points to attach the artificial skin.
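
The idea behind locating those optimum points can be sketched with a toy example. The following is a minimal, hypothetical Python sketch, not Disney's actual pipeline: given a neutral scan and a few captured expression frames (each a set of corresponding mesh vertices), it measures how far each vertex moves over the performance and flags the most-moved vertices as candidate attachment sites.

```python
import numpy as np

# Toy stand-in for a face scan: each "mesh" is an (N, 3) array of vertex
# positions; the same vertex index refers to the same point on the face
# across all expressions.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(100, 3))

# Simulate three captured expressions: small deformations of the neutral
# pose, with vertices 10 and 42 deliberately moved the most.
expressions = []
for _ in range(3):
    frame = neutral + rng.normal(scale=0.01, size=neutral.shape)
    frame[10] += 0.5   # e.g. a mouth corner
    frame[42] += 0.4   # e.g. an eyebrow
    expressions.append(frame)

# Per-vertex motion range: the maximum displacement from neutral over the
# whole performance. High-motion vertices are natural attachment candidates.
displacements = np.stack([np.linalg.norm(f - neutral, axis=1) for f in expressions])
motion_range = displacements.max(axis=0)

# Pick the k most-moved vertices.
k = 2
candidates = np.argsort(motion_range)[-k:]
print(sorted(candidates.tolist()))  # prints [10, 42]
```

A real system would work on dense scan data and account for the robot's mechanical constraints, but the principle is the same: the captured performance tells you where the skin has to move most.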

The model also allows the researchers to select the best composition for the skin. Disney isn't trying to reproduce every physical property of real skin, because its goals are output-based: the key objective is simply that the robot looks right. This means Disney can use silicone rubber instead of more exotic materials that might behave more like the real thing.

That said, the skin isn’t just a flat piece of silicone. It varies in thickness from place to place to help it move and deform realistically. The digital model guides how that thickness is distributed and how the skin should attach to the robot head. It also determines how the head should move to minimize stretching and to ensure the skin only stretches along its thickest parts.
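
As a rough illustration of the thickness idea, here is a toy 1-D model (my own simplification, not the team's actual optimization): treat the skin as a strip of segments whose stretch under a uniform pull is inversely proportional to thickness, then allocate a fixed material budget so the stretch profile matches a desired deformation.

```python
import numpy as np

# Toy 1-D model of a silicone strip divided into segments. Under a uniform
# pull, each segment's stretch is inversely proportional to its thickness
# (thin regions deform more, thick regions resist) - the intuition behind
# a spatially varying skin thickness map.
target_stretch = np.array([0.30, 0.05, 0.10, 0.05, 0.30])  # desired per-segment stretch
budget = 10.0  # total material available (arbitrary units)

# With stretch_i = c / t_i for some constant c, matching the target exactly
# means thickness t_i proportional to 1 / target_stretch_i; scale to budget.
raw = 1.0 / target_stretch
thickness = raw * (budget / raw.sum())

# Verify: the resulting stretch profile is proportional to the target, so
# the skin deforms most exactly where the expression demands it.
stretch = 1.0 / thickness
print(np.allclose(stretch / stretch.sum(), target_stretch / target_stretch.sum()))  # prints True
```

Disney's published method optimizes a full 3D deformable model rather than a strip, but the design variable is the same: where the skin is thick and where it is thin.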

Once the head and skin have been designed, a 3D mold is made into which liquid silicone is injected. Once the skin cures and is attached to the motorized metal-and-plastic skull, the result is a realistic animatronic robot head. The only difference obvious to the casual eye is that the head is slightly larger than the original person’s, to make up for the limitations of the robot’s movements.

In the future, the team hopes to give the skin more flexibility and introduce a multi-layered skin to provide more control over its movements.

The video below outlines Disney's face cloning process.

Source: Disney Research

About the Author
David Szondy is a freelance writer based in Monroe, Washington. An award-winning playwright, he has contributed to Charged and iQ magazine and is the author of the website Tales of Future Past.
4 Comments

Nice proof of concept. It'll be nice to see this with more actuator points and more nuance.

John Hagen-Brenner
16th August, 2012 @ 09:55 am PDT

Seems like a good way to rid the film industry of overpaid prima donna actors.

Nelson
16th August, 2012 @ 12:14 pm PDT

Nelson, someone has to perform or animate the role. I think this technology is aimed at "live" attractions where the performance takes place repeatedly, not recorded movies or TV.

In the future, you might have an autonomous robot with such a head interacting with people. (Think of Data from Star Trek or the androids in the Alien movies.)

Les LaZar
16th August, 2012 @ 06:08 pm PDT

Even more fun... how about watching a movie with a different actor playing the role?

Bernard Howard
17th August, 2012 @ 01:04 pm PDT