Negative facial expressions interfere with the perception of cause and effect

New research suggests that the emotional content of a facial expression influences how well observers can predict social outcomes. A series of experiments indicates that people have a harder time recognizing causal links between social cues when the faces involved display negative emotions, such as sadness, anger, or fear. The findings were published in the Quarterly Journal of Experimental Psychology.

Human interaction relies heavily on the ability to predict how one person will react to another. When a speaker smiles, an observer might expect the listener to smile in return. This predictive ability allows people to navigate complex social environments. Psychologists refer to this as contingency learning. It involves calculating the likelihood that a specific outcome will occur given a specific cue.
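
The article does not spell out the exact statistical rule the participants were asked to track, but in the contingency-learning literature this relationship is commonly formalized as delta-P: the probability of the outcome when the cue is present minus its probability when the cue is absent.

ΔP = P(outcome | cue) − P(outcome | no cue)

A ΔP near 1 means the cue strongly predicts the outcome; a ΔP near 0 means there is no real relationship, even if the cue and outcome sometimes appear together.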

Researchers have debated how emotional faces fit into this learning process. Some theories propose that threatening or negative faces are evolutionarily important and should grab attention quickly. Other theories suggest that happy faces are easier to process because they are distinct and rewarding. To resolve this, a team of researchers led by Rahmi Saylik from Mus Alparslan University investigated whether specific emotional expressions help or hinder the ability to learn these statistical connections. The research team included Andre J. Szameitat and Adrian L. Williams from Brunel University London, and Robin A. Murphy from the University of Oxford.

The researchers aimed to understand whether the “valence” of an emotion, its positive or negative quality, affects the computation of cause and effect. They asked whether people are better at learning patterns when the faces are happy than when they are sad. They also sought to determine whether this learning rests on genuine statistical evidence or on a simple tally of how often two things occur together.

To test this, the investigators designed a computer-based task using a “streaming” procedure. Participants watched a rapid series of images flash on a screen. In the emotional conditions, they saw two faces. One face represented a “sender” and the other a “receiver.” The participants’ goal was to determine if the expression on the first face caused the expression on the second face.

In Experiment 1, the researchers recruited 107 participants. The participants viewed streams of images involving happy faces, sad faces, or geometric shapes. The shapes served as a control condition to measure learning without social or emotional content. The researchers manipulated the statistical strength of the relationships. In some blocks, the cue perfectly predicted the outcome. In others, there was no relationship at all.

The participants provided ratings on a scale from negative to positive to indicate how strong they believed the causal link to be. The results showed that participants could generally distinguish between strong and weak relationships. However, the type of stimulus altered their judgments. Participants perceived a weaker causal connection when viewing sad faces than when viewing happy faces or geometric shapes, and their ratings for sad faces tracked the actual statistical evidence less accurately.

The researchers suspected that the visual differences between the photos and the simple shapes might have influenced the results. To address this, they conducted Experiment 2 with 82 new participants. They modified the stimuli to make the shapes and faces more visually comparable. They used black-and-white images and presented the faces through oval windows. They also created patterned shapes that mimicked the presence or absence of a feature, similar to how a face shows an emotion or remains neutral.

Despite these changes, the pattern of results remained the same. Participants consistently rated the causal relationships involving sad faces as weaker than those involving happy faces or the patterned shapes. There was no statistical difference between the ratings for happy faces and the neutral shapes. This suggested that happy faces did not necessarily boost performance, but rather that sad faces actively impaired the perception of causality.

A potential criticism of these findings is that participants might not be calculating complex statistics. They might simply be counting how often they see two emotional faces appear together. This is known as the “pairing hypothesis.” In Experiment 3, the researchers tested 90 participants to rule this out. They created specific conditions where the number of pairings was high, but the statistical predictive power was low. Conversely, they created conditions with few pairings but high predictive power.
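
To see how pairings and predictive power can come apart, consider a hypothetical block (these numbers are illustrative, not the study’s actual trial counts). Suppose the outcome face appears on 16 of 20 trials that follow the cue, but also on 16 of 20 trials without the cue:

ΔP = 16/20 − 16/20 = 0.80 − 0.80 = 0

An observer who merely counts pairings would register 16 co-occurrences and judge the relationship as strong, while the contingency itself is zero. Experiment 3 built this kind of dissociation into its conditions to test which quantity participants were actually tracking.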

The results from Experiment 3 confirmed that participants were indeed tracking the statistical contingency, not just the frequency of pairings. Even when the number of pairings was held constant, participants rated the stronger statistical connections higher. However, the emotional interference persisted. Sad faces continued to elicit lower ratings of causal strength compared to happy faces and shapes, regardless of how the statistics were presented.

In the final study, Experiment 4, the researchers expanded the scope to include other negative emotions. They wanted to see if the effect was specific to sadness or if it applied to aversive emotions in general. They recruited 51 participants and tested them using happy, angry, and fearful faces. The procedure mirrored the earlier experiments, asking participants to judge the strength of the relationship between the cues and outcomes.

The findings revealed that the interference effect was not unique to sadness. Participants perceived weaker causality when observing angry or fearful faces compared to happy ones. The ratings for the angry and fearful conditions were lower than those for the happy condition in scenarios where a positive relationship existed. This suggests that stimuli with negative valence generally disrupt the processing of contingency information.

The researchers interpreted these results through the lens of attention and cognitive resources. While threatening or negative faces are highly salient and grab attention quickly, they may also trigger task-irrelevant processing. For example, a sad or angry face might induce a state of worry or physiological arousal in the observer. This internal reaction could consume cognitive resources that would otherwise be used to track the statistical patterns in the environment.

Consequently, while the observer notices the face, they may have less mental bandwidth available to calculate the relationship between that face and the subsequent outcome. Happy faces, being pleasant and signaling safety, do not impose this cognitive tax. This allows the observer to focus on the structural relationship between the social cues. The study challenges the idea that “threat” enhances all forms of learning. It suggests that while threats are noticed quickly, they may hinder the analysis of the broader context.

There are limitations to the study that warrant mention. The experiments relied on static images presented on a computer screen, which is different from dynamic, real-world interactions. Additionally, while the researchers attempted to match the visual properties of the control shapes, non-emotional objects are inherently different from human faces. The study focused on neurotypical university students, so the results may not generalize to clinical populations with anxiety or depression.

Future research could investigate the speed of these judgments to understand the processing time required for different emotions. It would also be beneficial to use physiological measures to track arousal levels during the task. Understanding how negative emotions disrupt causal learning could shed light on everyday social misunderstandings. If negative expressions make social patterns harder to read, that could help explain some difficulties in maintaining relationships during times of conflict or distress.

The study, “Sad, Angry and Fearful Facial Expressions Interfere With Perception of Causal Outcomes,” was authored by Rahmi Saylik, Andre J. Szameitat, Adrian L. Williams and Robin A. Murphy.
