Duchenne smiles do not indicate whether a person is happy

A smile that lifts the lips and crinkles the eyes is widely thought to be a sign of genuine feeling. But new research from Carnegie Mellon University casts doubt on whether this joyful expression necessarily tells others how a person feels inside. In fact, these “smiling eye” expressions, known as Duchenne smiles, appear to be related to smile intensity rather than indicating whether a person is happy, said Jeffrey Girard, a former postdoctoral researcher in CMU’s Language Technologies Institute.

“I do think it’s possible that we might be able to detect how strongly somebody feels positive emotions based on their smile,” said Girard, who joined the psychology faculty at the University of Kansas this past fall. “But it’s going to be a bit more complicated than just asking, ‘Did their eyes move?’”

Whether it is possible to gauge an individual’s emotions from their behavior is a subject of ongoing debate within psychology and computer science, especially as researchers develop automated systems for monitoring facial movements, gestures, voice inflections and word choice.

Duchenne smiles may not be as famous as Mona Lisa smiles or Bette Davis eyes, but there is a camp within psychology that considers them a practical guide for gauging happiness. Another camp is skeptical. Girard, who studies facial behavior and worked with CMU’s Louis-Philippe Morency to develop a multimodal approach for monitoring behavior, said that some research seems to support the Duchenne smile hypothesis, while other studies reveal where it fails.

So Girard and Morency, along with Jeffrey Cohn at the University of Pittsburgh and Lijun Yin of Binghamton University, set out to better understand the phenomenon. They enlisted 136 volunteers who agreed to have their facial expressions recorded as they completed laboratory tasks designed to make them feel amusement, embarrassment, fear or physical pain. After each task, the volunteers rated how strongly they felt different emotions.

The team then compiled videos of the smiles that occurred during these tasks and showed them to a new set of participants, or judges, who tried to estimate how much positive emotion the volunteers felt while smiling.

A report on their findings was published online by the journal Affective Science.

Unlike many previous studies of Duchenne smiles, this work examined spontaneous expressions rather than posed smiles, and the researchers recorded videos of the facial expressions from start to finish instead of taking still photographs. They also took detailed measurements of smile intensity and other facial behaviors.

Although Duchenne smiles made up 90 percent of the smiles that occurred when positive emotion was reported, they also made up 80 percent of the smiles that occurred when no positive emotion was reported. Concluding that a Duchenne smile must signal positive emotion would therefore often be an error. Even so, the human judges found smiling eyes convincing and tended to guess that volunteers displaying Duchenne smiles felt more positive emotion.
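Those two percentages make the problem concrete: because Duchenne smiles were nearly as common without positive emotion as with it, seeing one shifts the odds only slightly. The sketch below applies Bayes’ rule to the 90 and 80 percent figures; the base rates of positive emotion it sweeps over are assumed values for illustration, not numbers from the study.

```python
# Toy Bayes calculation showing why a Duchenne smile is weak evidence of
# positive emotion. The 90%/80% conditional figures come from the study as
# summarized above; the base rate p_pos is an assumption for illustration.

def p_positive_given_duchenne(p_pos, p_d_given_pos=0.90, p_d_given_neg=0.80):
    """Posterior probability of positive emotion given a Duchenne smile."""
    p_duchenne = p_d_given_pos * p_pos + p_d_given_neg * (1 - p_pos)
    return p_d_given_pos * p_pos / p_duchenne

for p_pos in (0.25, 0.50, 0.75):
    posterior = p_positive_given_duchenne(p_pos)
    print(f"base rate {p_pos:.2f} -> posterior {posterior:.2f}")
```

Even with a 50/50 base rate, the posterior lands near 53 percent, barely better than a coin flip, which is why treating a Duchenne smile as proof of happiness would often be an error.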

“It is really important to look at how people actually move their faces in addition to how people rate images and videos of faces, because sometimes our intuitions are wrong,” Girard said.

“These results emphasize the need to model the subtleties of human emotions and facial expressions,” said Morency, associate professor in the LTI and director of the MultiComp Lab. “We need to go beyond prototypical expression and take into account the context in which the expression happened.”

It is possible, for example, for someone to display precisely the same behavior at a wedding as at a funeral, yet the person’s emotions would be quite different.

Automated methods for tracking facial expressions make it possible to examine behavior in much finer detail. Duchenne smiles involve just two facial muscles, but new systems make it possible to track 30 different muscle movements simultaneously.
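To give a sense of what such per-frame output looks like downstream, here is a minimal rule-based sketch. The action-unit codes are standard FACS labels for the two muscles involved in a Duchenne smile (AU12, the lip corner puller, for the zygomatic major; AU6, the cheek raiser, for the orbicularis oculi), but the data layout, thresholds and helper names are hypothetical, not the CMU team’s actual pipeline.

```python
# Illustrative rule-based Duchenne-smile check over FACS action-unit (AU)
# intensities, the kind of per-frame output automated facial-coding systems
# produce. The thresholds and Frame structure are assumptions for
# illustration only.

from dataclasses import dataclass

@dataclass
class Frame:
    aus: dict[int, float]  # AU number -> intensity on a 0-5 scale

def is_smile(frame: Frame, threshold: float = 1.0) -> bool:
    """A smile is present when the lip corner puller (AU12) is active."""
    return frame.aus.get(12, 0.0) >= threshold

def is_duchenne(frame: Frame, threshold: float = 1.0) -> bool:
    """A Duchenne smile adds the cheek raiser (AU6) to AU12."""
    return is_smile(frame, threshold) and frame.aus.get(6, 0.0) >= threshold

def smile_intensity(frame: Frame) -> float:
    """One simple intensity summary: the mean activation of the two
    smile-related AUs, consistent with the study's suggestion that eye
    crinkling tracks smile intensity rather than felt happiness."""
    return (frame.aus.get(12, 0.0) + frame.aus.get(6, 0.0)) / 2

frame = Frame(aus={6: 2.5, 12: 3.0})
print(is_duchenne(frame), smile_intensity(frame))  # True 2.75
```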

Multimodal systems such as the ones being developed in Morency’s laboratory hold the promise of giving doctors a new tool for analyzing mental disorders, and for monitoring and quantifying the outcomes of emotional therapy over time.

“Could we ever have an algorithm or a computer that is as good as humans at gauging emotions? I think so,” Girard said. “I don’t think people have any extrasensory stuff that a computer couldn’t be given somewhere down the road. We’re just not there yet. It’s also important to remember that humans aren’t always so good at this either!”

Related Journal Article: https://link.springer.com/article/10.1007/s42761-020-00030-w
