A new study published in the Journal of Experimental Political Science provides evidence that the partisan gap in Americans’ perceptions of the economy following Donald Trump’s victory in the 2024 presidential election may reflect sincere differences in judgment rather than partisan exaggeration.
The study was conducted by Matthew H. Graham, an assistant professor at Temple University, and focused on a key concept in political science known as expressive responding. This idea refers to the tendency of survey respondents to provide politically motivated answers that do not necessarily reflect their true beliefs. In polarized environments, expressive responding can inflate the appearance of partisan bias, making it appear as though people are more divided than they really are.
After Trump won the 2024 U.S. presidential election, Democrats and Republicans switched positions in how they rated the economy. This kind of flip is common after presidential elections, but the 2024–2025 shift was unusually sharp. In November 2024, data from the University of Michigan showed Democrats viewed the economy much more positively than Republicans. By April 2025, that trend had reversed, with Republicans expressing much greater confidence. Both groups shifted their perceptions by large margins.
Some researchers argue that these post-election shifts reflect expressive responding. In other words, people may adjust their answers to signal support for or opposition to the president, regardless of their actual experience. For example, a Republican might claim the economy is doing better simply because a Republican is in office, not because they personally feel more financially secure. However, other scholars have questioned how common or powerful this effect really is.
“Current events created a perfect opportunity to advance a longer line of research. There was a huge partisan flip in economic perceptions in the months after the election. Trump was implementing policies that, depending on who you listened to, were either ushering in a golden age or crashing the economy,” Graham told PsyPost.
“On survey measures of economic perceptions, Democrats and Republicans flipped, with Democrats becoming much more negative and Republicans much more positive. In past years, observers interpreted smaller ‘post-election flips’ as evidence that surveys exaggerate partisan bias: it can’t really be that people’s perceptions change this conveniently. Existing research on ‘partisan expressive responding’ seems to support this claim. For example, if you pay people for correct answers to quiz questions about economic statistics, partisan bias shrinks.”
“These events created a nice opportunity to advance understanding of expressive responding. Although it is clear that surveys sometimes exaggerate partisan bias, I argue in a forthcoming review article that we don’t really understand why. It could be that people are outright lying but it could also be that biased reasoning is warping the process of aggregating one’s underlying perceptions into a survey response. Depending on which it is, the implications for politics are different. To try to tease these apart, I included several supplemental outcome measures that have different implications for one theory or the other.”
For his study, Graham designed a panel survey experiment conducted in April and May of 2025. He recruited over 2,800 U.S. adults from the survey platform Prolific. Participants were asked to predict upcoming economic statistics that had not yet been released, including gross domestic product (GDP) growth, the unemployment rate, and the inflation rate. This approach required respondents to rely on their general understanding of the economy, since the official numbers were not yet public.
Critically, some participants were randomly selected to receive a $2 bonus if they guessed the correct number. The expectation was that if expressive responding were common, this financial incentive would reduce partisan bias. That is, if people were exaggerating their views to make a political point, the chance to earn money should prompt more accurate, less biased responses.
“Another nice feature of this case was the fact that the correct answers weren’t yet known,” Graham said. “When you pay for correct answers to questions with known correct answers, people can easily look them up. When they’re not, people have to just take their best guess based on their general economic perceptions, which is how existing research assumes people approach questions like this. I call this the ‘betting on the future’ design. I am not the first to use it but I hope this article helps popularize it.”
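The article does not spell out the exact payout rule, but the core mechanic of the “betting on the future” design can be sketched in a few lines. In the sketch below, the 0.1-point tolerance, the per-question bonus, and all field names are illustrative assumptions, not details from the study:

```python
# Sketch of "betting on the future" scoring: respondents guess a statistic
# before its release, and bonuses are paid once the official number is out.
# The 0.1-point tolerance and field names are assumptions; the article
# does not describe the exact matching rule.
BONUS = 2.00
TOLERANCE = 0.1  # how close a guess must be to count as "correct"

def score_guesses(guesses: dict[str, float], released: dict[str, float]) -> float:
    """Return the total bonus earned across statistics (e.g. GDP, inflation)."""
    payout = 0.0
    for stat, guess in guesses.items():
        if stat in released and abs(guess - released[stat]) <= TOLERANCE:
            payout += BONUS
    return payout

# Example: a respondent's pre-release guesses vs. the later official numbers.
print(score_guesses(
    {"gdp_growth": 1.8, "unemployment": 4.2, "inflation": 3.0},
    {"gdp_growth": 1.7, "unemployment": 4.2, "inflation": 2.4},
))  # 4.0 -> two of the three guesses land within tolerance
```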
The findings did not provide evidence of expressive responding. In the group without incentives, Republican respondents rated the economy about 0.44 standard deviations more positively than Democrats. In the group that received the $2 incentive, the gap was only slightly smaller, at 0.38 standard deviations. This difference was not statistically significant. When the data were analyzed using an alternate method that categorized responses by whether people expected conditions to improve, stay the same, or worsen, the gap did not change at all.
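As a rough illustration of how such a difference-in-gaps comparison is typically estimated, here is a minimal sketch on simulated data. The variable names, the regression specification, and the simulated effect sizes (chosen to mirror the 0.44 and 0.38 figures above) are assumptions for illustration, not the study’s actual replication code:

```python
# Minimal sketch of the difference-in-gaps test, using statsmodels.
# Column names (republican, incentive, econ_rating) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2800  # roughly mirrors the reported sample size
df = pd.DataFrame({
    "republican": rng.integers(0, 2, n),  # 1 = Republican, 0 = Democrat
    "incentive": rng.integers(0, 2, n),   # 1 = offered the $2 bonus
})
# Simulated outcome: a ~0.44 SD partisan gap that barely shrinks (to ~0.38 SD)
# under the incentive, mirroring the pattern reported in the article.
df["econ_rating"] = (
    0.44 * df["republican"]
    - 0.06 * df["republican"] * df["incentive"]
    + rng.normal(0, 1, n)
)
# Standardize so coefficients read as standard-deviation units.
df["econ_rating"] = (df["econ_rating"] - df["econ_rating"].mean()) / df["econ_rating"].std()

# The republican:incentive interaction is the expressive-responding test:
# a significantly negative coefficient would mean money shrinks the gap.
model = smf.ols("econ_rating ~ republican * incentive", data=df).fit()
print(model.summary().tables[1])
```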
“The partisan gap in economic perceptions that opened up after Trump took office appears to be mostly genuine,” Graham told PsyPost. “This suggests that post-election flips are not face-value evidence that surveys exaggerate partisan bias.”
“I am not surprised to find a null result here. About 40 percent of previously published estimates in this literature are null, and there are probably more we don’t know about. A common theme in my work is that measurement properties vary from question to question and topic to topic, which makes it easy to go into studies with an open mind.”
“I am also not surprised to find that, at baseline, partisan differences are only 0.3 to 0.6 standard deviations (in psychology, “Cohen’s d”). It’s really common to see bold generalizations of the form ‘Democrats think this, Republicans think that’ based on differences that are not actually all that large. In academia we often make this worse by jumping straight to regression tables with a billion controls, which obscures the basic descriptives.”
“What does surprise me is that the post-election flip in economic perceptions was so large — or perhaps that earlier post-election flips were so small in comparison,” Graham continued. “In years past observers have questioned the validity of economic perception measures based on flips that are much smaller than what we saw this time around.”
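For readers unfamiliar with the metric Graham mentions, Cohen’s d is simply the difference between two group means divided by their pooled standard deviation. A quick sketch with invented numbers:

```python
# Cohen's d: standardized mean difference between two groups.
# The group means and SDs below are illustrative, not from the study.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    # Pooled standard deviation (ddof=1 gives the sample SD).
    pooled_sd = np.sqrt(
        ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    )
    return (np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(1)
republicans = rng.normal(0.44, 1.0, 1000)  # ratings ~0.44 SD more positive
democrats = rng.normal(0.0, 1.0, 1000)
print(f"d = {cohens_d(republicans, democrats):.2f}")  # roughly 0.4
```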
The study included several additional tests to better understand what was driving responses. Two indicators—response time and whether a respondent switched browser windows during the survey—were used to measure how much effort participants were putting into their answers. The results showed that those in the incentive group spent more time on the questions and were more likely to look up information, suggesting they were trying harder to get the answers right.
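A stylized version of that effort check might look like the following; the column names and simulated paradata are hypothetical, and the study’s actual measures may be coded differently:

```python
# Sketch of the effort check: compare response time and window switching
# across incentive conditions. Column names and effect sizes are invented.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
n = 2800
df = pd.DataFrame({"incentive": rng.integers(0, 2, n)})
# Simulated paradata: incentivized respondents take longer and switch
# browser windows (e.g., to look things up) more often.
df["seconds_on_question"] = rng.normal(30, 10, n) + 8 * df["incentive"]
df["switched_window"] = rng.random(n) < (0.10 + 0.10 * df["incentive"])

for col in ["seconds_on_question", "switched_window"]:
    treated = df.loc[df["incentive"] == 1, col].astype(float)
    control = df.loc[df["incentive"] == 0, col].astype(float)
    t, p = stats.ttest_ind(treated, control)
    print(f"{col}: treated mean {treated.mean():.2f}, "
          f"control mean {control.mean():.2f}, p = {p:.3g}")
```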
However, these efforts did not change the partisan gap in responses. This implies that the differences between Democrats and Republicans were not due to people lying or misrepresenting their beliefs. Instead, they may have simply reached different conclusions based on the same economic information.
To probe this further, Graham asked respondents to write down their thoughts before making their guesses. These open-ended responses were analyzed for sentiment and content. If the financial incentive had led people to think more neutrally or to include more balanced reasoning, this would have supported the idea that money encourages even-handed thinking. But the content of the reasoning did not differ much between the groups, which again suggests that the perceptions themselves may have been sincere.
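The article does not name the sentiment tool the study used. As one common off-the-shelf approach, a check like this could be run with NLTK’s VADER analyzer; the example responses below are invented:

```python
# Sketch of a sentiment check on open-ended reasoning, using NLTK's VADER.
# VADER is just one common off-the-shelf option, not necessarily what the
# study used, and the example responses are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

responses = {
    "incentive": ["Tariffs may slow growth, but hiring still looks steady."],
    "control": ["The economy is booming, best numbers in years."],
}
for condition, texts in responses.items():
    # compound runs from -1 (most negative) to +1 (most positive)
    scores = [sia.polarity_scores(t)["compound"] for t in texts]
    print(condition, sum(scores) / len(scores))
```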
Still, some findings pointed to the complexity of measuring economic beliefs. Responses to subjective questions, such as “How is the economy doing?”, were more stable across time than guesses about specific statistics, and the subjective measures were more consistent with one another than the numeric guesses were with each other. This suggests that people may find it easier to express their general feelings about the economy than to estimate specific numbers.
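One way to see what “more stable” means in practice is a test-retest correlation across survey waves. The sketch below simulates the pattern described, with subjective ratings holding steadier than numeric guesses; none of it is the study’s data:

```python
# Sketch of the stability comparison: test-retest correlations across
# two survey waves for subjective ratings vs. numeric guesses.
# All data and column names are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
underlying = rng.normal(0, 1, n)  # a respondent's latent economic perception
df = pd.DataFrame({
    # Subjective rating: closely tracks the latent view in both waves.
    "subjective_w1": underlying + rng.normal(0, 0.4, n),
    "subjective_w2": underlying + rng.normal(0, 0.4, n),
    # Numeric guess (e.g., inflation): noisier from wave to wave.
    "guess_w1": underlying + rng.normal(0, 1.2, n),
    "guess_w2": underlying + rng.normal(0, 1.2, n),
})
print("subjective r:", df["subjective_w1"].corr(df["subjective_w2"]).round(2))
print("guess r:     ", df["guess_w1"].corr(df["guess_w2"]).round(2))
```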
Among the three statistics tested, guesses about GDP growth appeared to reflect people’s general economic perceptions more reliably than guesses about inflation or unemployment. Responses about inflation, in particular, were less stable. This may have been partly due to a change in the question wording between survey waves, as well as the longer time gap between the initial and follow-up questions.
“I was also surprised to see that out of the three economic statistics I asked about, GDP growth was the most highly correlated with subjective measures of economic perceptions,” Graham said.
“At least in political science, past research has been more likely to focus on unemployment or inflation. I think that is based on the idea that an individual’s experience with losing a job (unemployment), or paying higher prices (inflation), is more immediate than the size of the whole economy (GDP). My findings suggest that perhaps people are taking a bigger-picture view when they answer the subjective questions. To the extent that researchers use economic statistics to proxy general economic perceptions, GDP growth might be the best choice.”
Graham notes that this is only one study, and that small effects cannot be completely ruled out. Although the experiment was large by social science standards, with more than 2,500 participants, it cannot settle all questions about the nature of partisan bias in surveys. He cautions against over-generalizing the results to all topics or assuming expressive responding no longer matters.
Future research could build on this work by testing similar designs in other policy areas or during different political moments. Understanding when and why people offer partisan answers remains an important challenge, especially as political divisions shape how Americans process information.
“This paper is the culmination of a process I started five years ago with my coauthor Omer Yair,” Graham explained. “First I got my feet wet by adding some new cases to the literature. Then I figured out what I think this literature needs by conducting a systematic review. This is the first paper I’ve written on the topic with that systematic view in mind. Researchers have developed a number of innovative techniques for studying expressive responding and applied them to a wide range of cases, but we don’t have a good enough understanding of how exactly expressive responding works, which limits our understanding of the substantive implications.”
“My goal is to move the field toward developing that understanding. I have a role to play in that, but what I really hope is that other researchers with different predispositions will take my lessons to heart and field some innovative designs that challenge my view of things.”
“For my part, I am going to keep my ear to the ground for opportunities to apply social science to current events in a way that also advances theory. I think it’s really important that we constantly use social science to probe prevailing narratives about public opinion. I want to move observers of politics away from one-size-fits-all interpretations of polling data, while at the same time moving scholars toward a better understanding of what’s going on under the hood.”
“The editors and staff at the Journal of Experimental Political Science (JEPS) deserve tremendous credit for their handling of this article,” Graham added. “It took less than six months to go from the end of data collection to online publication, including one of the most rigorous reviews of replication materials in the field. Being able to publish something semi-timely is rare in my field and I sent this to JEPS because I knew that if I held up my end of the bargain, they would hold up theirs.
“I’d also encourage anyone who is interested in this topic to take a look at my review of the expressive responding literature, which is forthcoming in the American Political Science Review. This article was an attempt to apply what I recommend in that piece.”
The study, “Expressive Responding and the Economy: The Case of Trump’s Return to Office,” was authored by Matthew H. Graham.