Study finds little evidence of the Dunning-Kruger effect in political knowledge

A new study suggests that the average person may be far more aware of their own lack of political knowledge than previously thought. Contrary to the popular idea that people consistently overestimate their competence, this research indicates that individuals with low political information generally admit they do not know much. These findings were published in Political Research Quarterly.

Political scientists have spent years investigating the gap between what citizens know and what they think they know. This gap is often attributed to the Dunning-Kruger effect, a psychological phenomenon in which people with low ability in a specific area overestimate their competence.

In their new study, Alexander G. Hall and Kevin B. Smith of the University of Nebraska sought to answer several unresolved questions regarding this phenomenon. They wanted to determine if receiving objective feedback could reduce overconfidence. The researchers also intended to see if the Dunning-Kruger effect remains stable over time or changes due to major events. The study utilized a natural experiment to test these ideas in a real-world educational setting.

“Kevin and I have had an ongoing interest in this question: if you make someone’s substantive knowledge salient, will they do a more accurate job of reporting it?” explained Hall, who is now a staff statistician for Creighton University’s School of Medicine and adjunct instructor for the University of Nebraska-Omaha.

“I noticed that in his intro political science course he had been consistently collecting information that could speak to this, and that we had the makings of a neat natural experiment where participants had either taken this knowledge assessment before (presumably increasing that salience) or after being asked about their self-rated political knowledge.”

This data collection spanned eleven consecutive semesters between the fall of 2018 and the fall of 2023. The total sample included 1,985 students. The mean sample size per semester was approximately 180 participants.

The course required students to complete two specific assignments during the first week of the semester. One assignment was a forty-two-question assessment test designed to measure objective knowledge of American government and politics. The questions included items from textbook test banks and the United States citizenship test. The second assignment was a class survey that asked students to rate their own knowledge.

The researchers measured confidence using a specific question on the survey. Students rated their knowledge of American politics on a scale from zero to ten. A score of zero represented no knowledge, while a score of ten indicated the student felt capable of running a presidential campaign.

The study design took advantage of the order in which students completed these assignments. The course did not require students to finish the tasks in a specific sequence. Approximately one-third of the students chose to take the objective assessment test before completing the survey. The remaining two-thirds completed the survey before taking the test.

This natural variation allowed the researchers to treat the situation as a quasi-experiment. The students who took the test first effectively received feedback on their knowledge levels before rating their confidence. This group served as the experimental group. The students who rated their confidence before taking the test served as the control group.

The results provided a consistent pattern across the five-year period. The researchers found that students objectively knew very little about American politics. The average score on the assessment test was roughly 60 percent. This grade corresponds to a D-minus or F in academic terms.

Despite these low scores, the students did not demonstrate the expected overconfidence. When asked to rate their general political knowledge, the students gave answers that aligned with their low performance. The average response on the zero-to-ten confidence scale was modest.

The researchers compared the confidence levels of the group that took the test first against the group that took the survey first. They hypothesized that taking the test would provide a “reality check” and lower confidence scores. The analysis showed no statistically significant difference between the two groups. Providing objective feedback did not reduce confidence because the students’ self-assessments were already low.

The study also examined the stability of these findings over time. The data collection period covered significant events, including the COVID-19 pandemic and the 2020 presidential election. The researchers looked for any shifts in knowledge or confidence that might have resulted from these environmental shocks.

The analysis revealed that levels of political knowledge and confidence remained remarkably stable. The pandemic and the election cycle did not lead to meaningful changes in how much students knew or how much they thought they knew. The gap between actual knowledge and perceived knowledge remained substantively close to zero throughout the study.

“More than anything, I thought we’d see an impact of the natural experiment,” Hall told PsyPost. “I was also somewhat surprised by how flat the results appeared around 2020, when external factors like COVID-19 and the presidential election may have been impacting actual and perceived student knowledge.”

The authors used two distinct statistical methods to verify their findings regarding overconfidence. They calculated overconfidence using quintiles, which divide the sample into five equal groups based on performance. They also used Z-scores, which measure how far a data point falls from the average in units of standard deviation. Both methods yielded similar conclusions.

Using the quintile method, the researchers subtracted the quintile of the student’s actual score from the quintile of their self-assessment. The resulting overconfidence estimates were not statistically different from zero across all eleven semesters. This finding persisted regardless of whether the students took the assessment before or after the survey.
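The quintile calculation described above can be sketched in a few lines of Python. The data here are hypothetical, invented for illustration; ranking before binning is one common way to keep the five groups equal-sized when scores contain ties.

```python
# Minimal sketch of a quintile-based overconfidence measure (hypothetical data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "test_score": rng.integers(10, 43, size=200),   # out of 42 questions
    "self_rating": rng.integers(0, 11, size=200),   # 0-10 confidence scale
})

def quintile(series):
    # Rank first so ties don't produce duplicate bin edges; labels 1..5
    return pd.qcut(series.rank(method="first"), 5, labels=False) + 1

df["score_quintile"] = quintile(df["test_score"])
df["rating_quintile"] = quintile(df["self_rating"])

# Positive values = self-assessment quintile above actual-score quintile
df["overconfidence"] = df["rating_quintile"] - df["score_quintile"]
print(df["overconfidence"].mean())
```

With real data, a mean overconfidence near zero, as the study reports, would mean students' self-placements tracked their actual placements.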

The Z-score analysis showed minor fluctuations but supported the main conclusion. There was a slight decrease in overconfidence in the control group between 2020 and 2023. However, the magnitude of this change was so small that it had little practical meaning. The overarching trend showed that students consistently recognized their own lack of expertise.
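The Z-score version of the measure works the same way but on a continuous scale: standardize both measures and take the difference. Again a minimal sketch with made-up numbers, not the authors' data.

```python
# Z-score overconfidence: standardize each measure, then subtract (toy data).
import numpy as np

test_scores = np.array([20.0, 25.0, 30.0, 18.0, 35.0, 22.0, 28.0])  # out of 42
self_ratings = np.array([4.0, 5.0, 6.0, 3.0, 7.0, 4.0, 5.0])        # 0-10 scale

def zscore(x):
    return (x - x.mean()) / x.std()

# Positive values: rated themselves higher, relative to peers, than they scored
overconfidence_z = zscore(self_ratings) - zscore(test_scores)
print(overconfidence_z.round(2))
```

Note that by construction the within-group mean of this difference is zero; the study's comparisons of interest are between groups (test-first versus survey-first) and across semesters.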

These results challenge the prevailing narrative in political science regarding the Dunning-Kruger effect. Hall and Smith suggest that the difference in findings may stem from how confidence is measured. Many previous studies ask participants to estimate their performance on a specific test they just took. This prompt often triggers a psychological bias where people assume they performed better than average.

In contrast, this study asked students to rate their general knowledge of a broad domain. When faced with a general question about how much they know about politics, individuals appear to be more humble. They do not default to assuming they are above average. Instead, they provide a rating that accurately reflects their limited understanding.

“The gap between what people know and what they think they know (over-or-under-confidence) may be less of a problem than we think, at least in the realm of political knowledge,” Hall said. “What we found is that if you ask someone what they know about politics they are likely to respond with ‘not much.’ You don’t have to provide them with evidence of that lack of information to get that response, they seem to be well-aware of the limitations of their knowledge regardless.”

“The short version here is that we did not find the Dunning-Kruger effect we expected to find. People with low information about politics did not overestimate their political knowledge, they seemed well-aware of its limitations.”

The authors argue that the Dunning-Kruger effect in politics might be an artifact of measurement choices. If researchers ask people how they did on a test, they find overconfidence. If researchers ask people how much they generally know, the overconfidence disappears. This distinction implies that the gap between actual and perceived knowledge may be less problematic than previously feared.

The study does have limitations that the authors acknowledge. The sample consisted entirely of undergraduate students. While the sample was diverse in terms of gender and political orientation, students are not perfectly representative of the general voting population. It is possible that being in an educational setting influences how students rate their own knowledge.

Another limitation involves the nature of the questions. The assessment relied on factual knowledge about civics and government structure. It is possible that overconfidence manifests differently when discussing controversial policy issues or specific political events. Future research could investigate whether different types of political knowledge elicit different levels of confidence.

The study also relied on a natural experiment rather than a randomized controlled trial. The researchers did not control which assignment students completed first, so self-selection could have influenced the results, although no significant initial differences between the groups were found. However, the large sample size and repeated data collection add weight to the findings.

“We should certainly be mindful of the principle that ‘absence of evidence isn’t evidence of absence,’ given the frequentist nature of null hypothesis significance testing,” Hall noted. “It’s also critical to understand the limitations of a natural experiment. There’s a lot of work on the Dunning-Kruger effect, and this is just one study, but I think it challenges us to think closely about the construct and how it generalizes.”

Future research could explore these measurement discrepancies further. The authors suggest that scholars should investigate how different ways of asking about confidence affect the results. Understanding whether overconfidence is a stable trait or a response to specific questions is vital for political psychology.

“Whether or not the Dunning-Kruger effect applies to broad domain knowledge is an important question for addressing political engagement – continuing down this line to broaden the domain coverage (something like civic reasoning, or real-world policy scenarios), and trying to move from a knowledge-based test scenario towards some closer indicator of manifest political behavior may give us a better sense of what’s likely to succeed in addressing political informedness,” Hall said.

The study, “They Know What They Know and It Ain’t Much: Revisiting the Dunning–Kruger Effect and Overconfidence in Political Knowledge,” was authored by Alexander G. Hall and Kevin B. Smith.
