New research published in the journal Cognitive Science provides evidence that the fluency of a person’s speech influences how knowledgeable they appear to others. The findings indicate that speakers who use “ums,” “ahs,” and corrections are consistently rated as less knowledgeable than those who speak fluently. But the presence of hand gestures, regardless of their type or frequency, does not appear to mitigate this negative perception.
Communication is a complex process that involves much more than just the words spoken. When humans interact, they rely on a multimodal system that includes speech, hand movements, eye gaze, and facial expressions. Previous studies suggest that hand gestures play a significant role in this dynamic.
Gestures often help listeners understand information and can even make a speaker seem more persuasive or likeable. At the same time, speech is rarely perfect. It often contains disfluencies, which are temporary pauses, errors, or filler sounds that interrupt the flow of language. These verbal stumbles can signal that a speaker is hesitant or experiencing difficulty planning their next words.
“Two main factors motivated this study,” explained study author Can Avcı, a PhD student at Koç University and a member of the Language and Cognition Lab. “First, there was a gap in the literature on how hand gestures affect the listeners’ knowledge assessments of the speaker. Most studies (there are not many) focused on speech disfluencies as signals for ignorance, but no investigation of gestures. The second motivation came from the possibility of understanding if gestures are beneficial in terms of knowledgeability perception and may even change how disfluencies are perceived by the listeners.”
The researchers utilized a concept known as the “feeling-of-another’s-knowing.” This concept refers to the judgment a listener makes about how well a speaker knows the topic they are discussing. The research team conducted two separate experiments to test whether seeing a speaker gesture would lead listeners to rate a disfluent speaker as more knowledgeable.
The first study focused on naturalistic, spontaneous speech. The researchers recruited 42 native Turkish-speaking young adults to participate. The participants watched a series of video clips featuring various speakers providing navigational instructions.
These videos were selected from a previous dataset and contained natural variations in speech and movement. Some speakers used gestures while others kept their hands still. Some speakers spoke fluently while others included disfluencies such as repetitions, repairs, or filled pauses.
A repair occurs when a speaker corrects a word, such as saying “left” and immediately changing it to “right.” A filled pause involves sounds like “um” or “uh” that bridge gaps in speech. In this naturalistic study, the researchers did not manipulate the videos.
Factors such as background noise, the speaker’s clothing, and facial expressions were allowed to vary naturally. After viewing each clip, participants rated the speaker’s knowledge level. They answered questions regarding how certain the speaker seemed and how well the speaker appeared to know the answer.
The results showed a strong link between speech fluency and perceived knowledge. Participants consistently rated speakers who spoke without hesitation as more knowledgeable than those who stumbled. The presence of gestures did not produce a statistically significant change in these ratings.
To address the potential messiness of natural stimuli, the researchers designed a second study with a more controlled environment. They recruited a new group of 43 participants. For this experiment, the team hired an actress to record the stimuli.
This approach allowed the researchers to control extraneous variables that might have influenced the first study. The background, camera angle, lighting, and the speaker’s appearance remained identical across all trials. The actress recorded specific sentences containing navigational information.
The researchers manipulated two key variables: the type of gesture and the presence of disfluencies. The study included three gesture conditions. The first condition involved no gestures at all.
The second condition involved iconic gestures. These are hand movements that visually represent the object or action being discussed, such as drawing a circle in the air to represent a round object. The third condition involved beat gestures.
Beat gestures are rhythmic hand movements that align with the cadence of speech but do not carry specific semantic meaning. The researchers also manipulated the speech to be either fluent or disfluent. The disfluent versions contained specific, scripted errors and pauses at consistent points in the sentences.
In addition to rating the speaker’s knowledge, participants in the second study completed the Gesture Awareness Scale. This measure assessed how much individuals typically notice and attend to hand movements in daily life. This allowed the researchers to see if people who are more attuned to gestures might be more influenced by them.
The findings from the second study mirrored those of the first. Once again, speech fluency emerged as the dominant factor. When the actress spoke with disfluencies, participants rated her as significantly less knowledgeable than when she spoke fluently.
The type of gesture used made no difference to the ratings. Whether the actress used descriptive iconic gestures, rhythmic beat gestures, or no gestures at all, the knowledge ratings remained largely the same. This held true even for participants who scored high on the Gesture Awareness Scale.
“We were surprised that although participants were aware of the presence of gestures, they did not consider them as signals of knowledge or ignorance,” Avcı told PsyPost.
These results suggest that when listeners are trying to judge a speaker’s competence, they prioritize verbal cues over visual ones. The hesitation signaled by an “um” or a self-correction appears to be a powerful indicator of uncertainty. It seems to overshadow any potential competence signaled by confident hand movements.
“People who have disfluencies in their speech are perceived as less knowledgeable than fluent speakers,” Avcı said. “The presence or absence of gestures does not affect others’ knowledge assessments.”
The researchers propose several explanations for why gestures failed to impact knowledge judgments. One possibility is the redundant nature of the gestures used. In many communicative contexts, gestures are most helpful when they provide information that is missing from the speech.
In these experiments, the speech conveyed the navigational information clearly enough on its own. Listeners may have felt they did not need to rely on the gestures to gauge the speaker’s understanding. Consequently, the gestures may have been processed as background noise rather than vital clues.
Another possibility involves the timing of the gestures relative to the disfluencies. In the controlled study, the gestures often occurred simultaneously with the verbal stumbles. It is possible that the obvious auditory signal of difficulty simply drowned out the visual signal from the hands.
The study has some limitations. The speech samples used in the experiments were relatively short, consisting of only two or three sentences. This brief exposure might not allow enough time for gestures to build an impression of competence.
Additionally, the topic of the speech was limited to spatial directions. While gestures are commonly used in spatial descriptions, the effect might differ in other contexts. For example, gestures might play a larger role in persuasive speeches or emotional storytelling.
“For my future studies, I plan to include additional modalities, such as facial movements, in knowledgeability assessment and other contexts,” Avcı said.
The study, “Assessing Others’ Knowledge Through Their Speech Disfluencies and Gestures,” was authored by Can Avcı, Demet Özer, Terry Eskenazi, and Tilbe Göksun.