One of the most popular humanoid robots, the Einstein-mimicking Albert Hubo, can now boast of a new trick: teaching itself how to make realistic expressions!
“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said researcher Tingfan Wu, a computer science Ph.D. student at the University of California San Diego Jacobs School of Engineering.
Setting up a realistic robot face is an arduous task, which is why the team wants to automate it. Currently, Hubo has about 30 facial ‘muscles’, each moved by a tiny servo motor connected to the muscle by a string. A highly trained person must manually configure these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions.
To simplify this process, Wu’s team looked to the most basic of learners: babies. Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, much as they babble to learn to speak. Initially these movements appear random, but through them infants learn to control their bodies and reach for objects.
“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said co-researcher Javier Movellan.
Albert Hubo, the "Einstein robot", tries out a few facial expressions
To begin the learning process, the researchers directed the robot to twist and turn its face in all directions, a process called ‘body babbling’. During this period, Albert Hubo could see itself in a mirror and analyse its own expression using facial expression detection software created at UC San Diego called CERT (Computer Expression Recognition Toolbox). This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.
Once the robot had learned the relationship between facial expressions and the muscle movements required to make them, it could produce facial expressions it had never encountered.
For example, Hubo learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.
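The pipeline described above — random ‘body babbling’, learning a mapping from motor commands to observed expression features, then inverting that mapping to hit a novel target expression — can be sketched roughly as follows. This is a minimal illustration, not the researchers’ actual method: the linear model, the simulated `observe_expression` stand-in for the mirror-plus-CERT feedback loop, and all dimensions and names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_motors servo commands produce n_features
# expression measurements (as CERT-style software might report).
# The "true" face mechanics are unknown to the learner; here we
# simulate them with a fixed random linear map plus sensor noise.
n_motors, n_features, n_samples = 30, 6, 500
true_map = rng.normal(size=(n_motors, n_features))

def observe_expression(motor_cmds):
    """Simulated stand-in for watching the face in a mirror with CERT."""
    return motor_cmds @ true_map + rng.normal(scale=0.01, size=n_features)

# 1. "Body babbling": issue random motor commands, record what
#    expression each command produced.
babble_cmds = rng.uniform(-1.0, 1.0, size=(n_samples, n_motors))
babble_obs = np.array([observe_expression(c) for c in babble_cmds])

# 2. Learn the motor -> expression mapping by least squares.
learned_map, *_ = np.linalg.lstsq(babble_cmds, babble_obs, rcond=None)

# 3. Generalize: invert the learned mapping to find motor commands
#    for a target expression never seen during babbling.
target = rng.normal(size=n_features)
cmds, *_ = np.linalg.lstsq(learned_map.T, target, rcond=None)
achieved = observe_expression(cmds)
print(np.abs(achieved - target).max())  # small residual error
```

Even this toy version shows why the approach generalizes: once a mapping is learned from babbled data, any reachable target expression can be solved for, not just the ones seen during exploration. A real face, of course, has coupled nonlinear muscle–skin interactions that a linear map cannot capture, which is exactly the limitation the researchers note.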
The success aside, Wu and team acknowledge there is still a lot of work to be done, especially since some of the more complex facial expressions still look awkward. The researchers suspect their ‘body babbling’ model may be too simple to capture the coupled interactions between facial muscles and skin.
While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.
The research paper by the scientists can be found here.