Robot Detects and Reacts to Human Facial Expressions


Emo Robot, Screen Capture/YouTube/Columbia Engineering

Columbia University engineers recently unveiled Emo, a silicone-clad, human-like robotic head that makes eye contact, a crucial element of nonverbal communication, and uses two AI models to detect and replicate a person’s smile before the person actually smiles. Emo is equipped with 26 actuators that enable a broad range of nuanced facial expressions.

Emo is considered “a major advance in robots predicting human facial expressions accurately, improving interactions, and building trust between humans and robots.” The Creative Machines Lab at Columbia Engineering has been working on the project for five years:

In a new study published today in Science Robotics, the group unveils Emo, a robot that anticipates facial expressions and executes them simultaneously with a human. It has even learned to predict a forthcoming smile about 840 milliseconds before the person smiles, and to co-express the smile simultaneously with the person.

The team was led by Hod Lipson, a leading researcher in the fields of artificial intelligence (AI) and robotics. Lipson is James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering at Columbia Engineering, co-director of the Makerspace at Columbia, and a member of the Data Science Institute.

Human-Robot Facial Co-expression

There were two challenges facing the team:

  • How to mechanically design an expressively versatile robotic face, which involves complex hardware and actuation mechanisms
  • How to know which expressions to generate so that they appear natural, timely, and genuine

The team developed two AI models to address these challenges:

To train the robot to make facial expressions, the researchers put Emo in front of a camera and let it make random movements. After a few hours, the robot had learned the relationship between its facial expressions and the motor commands that produced them, much the way humans practice facial expressions by looking in the mirror. The team calls this “self modeling,” similar to our human ability to imagine what we look like when we make certain expressions.
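The self-modeling step can be pictured as a motor-babbling loop: issue random actuator commands, watch the resulting face through the camera, and learn the mapping back from observed expressions to the commands that caused them. The sketch below is a minimal illustration of that idea in Python, not the team’s actual code; `send_motor_command`, `capture_landmarks`, the landmark dimensions, and the choice of regressor are all assumptions.

```python
# Minimal sketch of "self modeling" via motor babbling: issue random motor
# commands, observe the robot's own face, and fit an inverse model that maps
# observed facial landmarks back to the motor commands that produced them.
# send_motor_command and capture_landmarks are hypothetical stand-ins for
# the robot's real hardware API.

import numpy as np
from sklearn.neural_network import MLPRegressor

N_ACTUATORS = 26          # Emo's reported actuator count
N_LANDMARKS = 2 * 68      # assumed: 68 (x, y) facial landmarks from a detector

def send_motor_command(cmd: np.ndarray) -> None:
    """Hypothetical: drive the face actuators with values in [0, 1]."""

def capture_landmarks() -> np.ndarray:
    """Hypothetical: detect facial landmarks on the robot's own face."""
    return np.random.rand(N_LANDMARKS)  # placeholder observation

# 1) Motor babbling: collect (command, observed-landmarks) pairs.
commands, observations = [], []
for _ in range(5000):
    cmd = np.random.rand(N_ACTUATORS)
    send_motor_command(cmd)
    commands.append(cmd)
    observations.append(capture_landmarks())

# 2) Fit an inverse model: landmarks -> motor commands.
inverse_model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
inverse_model.fit(np.array(observations), np.array(commands))

# 3) To mimic a target expression, predict the motor commands that would
#    reproduce the observed landmark configuration and actuate them.
target_landmarks = capture_landmarks()
mimic_cmd = inverse_model.predict(target_landmarks.reshape(1, -1))[0]
send_motor_command(mimic_cmd)
```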

Then the team ran videos of human facial expressions for Emo to observe frame by frame. After training, which lasted a few hours, Emo could predict people’s facial expressions by observing the tiny changes in their faces as they begin to form an intent to smile.
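The second model’s job can be framed as anticipation: given a short window of recent facial-landmark frames, predict the expression the person is about to make (roughly 840 milliseconds ahead, per the study) so the robot can begin co-expressing it. The snippet below is an illustrative stand-in under assumed parameters (window length, frame rate, a simple logistic-regression classifier), not the published architecture.

```python
# Minimal sketch of anticipatory expression prediction: a classifier that
# takes a short window of facial-landmark frames and predicts the expression
# that will appear ~840 ms later. Frame rate, window size, labels, and the
# classifier choice are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

FPS = 30
LEAD_FRAMES = int(0.84 * FPS)   # ~840 ms look-ahead
WINDOW = 10                     # frames of context fed to the predictor
N_LANDMARKS = 2 * 68

def landmarks_of(video: np.ndarray) -> np.ndarray:
    """Hypothetical: per-frame landmarks, shape (n_frames, N_LANDMARKS)."""
    return video

def expression_label(frame_landmarks: np.ndarray) -> int:
    """Hypothetical label: 0 = neutral, 1 = smile (placeholder rule)."""
    return int(frame_landmarks.mean() > 0.5)

# Build training pairs: a window of frames -> the expression LEAD_FRAMES later.
frames = landmarks_of(np.random.rand(2000, N_LANDMARKS))  # placeholder video
X, y = [], []
for t in range(WINDOW, len(frames) - LEAD_FRAMES):
    X.append(frames[t - WINDOW:t].ravel())           # recent micro-movements
    y.append(expression_label(frames[t + LEAD_FRAMES]))

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# At run time: feed the latest window and start actuating the predicted
# expression before the person has fully formed it.
latest_window = frames[-WINDOW:].ravel().reshape(1, -1)
predicted_expression = clf.predict(latest_window)[0]
```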

Looking ahead, the researchers are working to integrate verbal communication by incorporating a large language model such as ChatGPT into Emo, enabling richer human-robot communication and collaboration.
