Asking people to exert more effort on a test of mental ability does not actually improve their scores. A recent study published in Intelligence & Cognitive Abilities found that while financial rewards successfully motivate people to try harder, this increased motivation fails to produce higher scores on cognitive tests. The results challenge the popular idea that intelligence metrics heavily reflect a person's willingness to engage with the test rather than their actual cognitive limits.
For decades, researchers have debated the exact relationship between personal motivation and measured cognitive performance. Some prominent social theories have proposed that a considerable portion of the differences seen in intelligence scores can be attributed to how hard individuals try during the examination. Under this framework, two people with identical baseline intelligence might receive wildly different scores simply because one cared more about the outcome and focused harder.
A well-known analysis from over a decade ago supported this perspective, claiming that offering even small monetary rewards could boost test performance by a substantial margin. This suggested that basic intelligence tests might be measuring motivation just as much as mental capacity. That early analysis eventually fell apart under scrutiny, because some of the research papers included in the review contained fraudulent data and were retracted by their publishers.
Other observational studies on effort relied on asking participants how hard they tried only after the test was complete. This approach introduces a serious flaw known as reverse causation. When people feel they are doing well on a task, they tend to report trying harder, so the good performance produces the report of high effort rather than the effort producing the good performance.
Timothy Bates, a psychology researcher at the University of Edinburgh, designed a series of new experiments to solve this measurement problem. Bates wanted to isolate the true directional effect of effort on mental performance. To do this, he needed to manipulate exactly how much effort participants were willing to exert. He also needed to measure that intention before the volunteers actually started the test.
This experimental strategy relies on introducing an outside influence, in this case a financial reward, to randomly adjust the participants’ motivation levels. By tying a monetary bonus to a specific goal, a researcher can push one group of people to try harder than another group. If effort truly causes an increase in intelligence scores, the group offered the money should exhibit a clear spike in their performance. This model allows scientists to rule out unseen variables and focus entirely on the direct path from increased effort to the final test score.
In the first phase of the research, Bates developed a survey to measure effort ahead of time. He asked nearly 400 adult volunteers to rate their intended effort before taking a timed reasoning and grammar test. In this test, participants had ninety seconds to evaluate simple sentences and determine if they were logically true or false. Because he captured their intentions early, participants could not modify their answers based on how easy or difficult they found the questions to be.
Bates confirmed that his new prospective measure aligned with established behavior metrics. He found that people who promised to try hard on his survey also had strong track records of reliably completing other online tasks. When he looked at the test results, he noticed an early hint of his eventual conclusion. The amount of effort participants promised to give had no real link to the scores they ended up achieving on the logic test.
The second phase scaled up the experiment to test for a direct causal relationship. Bates recruited 500 adults to take a visual-spatial test, an assessment that required participants to mentally simulate folding a piece of paper. The volunteers first completed a baseline version of the paper-folding test before filling out the new survey asking them to state their intended effort for a second round of questions.
At this stage, half of the participants were randomly selected to receive an additional offer: a financial bonus of two British pounds if they improved their score by at least one point compared to their first attempt. Everyone else proceeded in the standard testing group without a bonus offer.
The financial incentive worked exactly as intended. Participants in the reward group reported a clear increase in their willingness to work hard on the second test. Despite this boosted motivation, their actual performance on the spatial reasoning questions remained unchanged. The causal effect of the increased effort on their cognitive scores was near zero, and the minor variations in performance between the groups were not statistically significant.
To verify these results, Bates conducted a third experiment with more than 1,200 adult participants. This final test used a completely different set of survey questions to measure intended effort. Bates borrowed an effort scale originally developed for a large international mathematics and science study. Using a secondary tool ensured the results were not just a quirk of his own survey design.
The volunteers again responded to the promise of a financial reward by increasing their planned effort. Just as in the second experiment, this surge in motivation failed to translate into better test scores on the paper-folding task. Across multiple independent samples and different measurement surveys, the results rejected the idea that trying harder leads to a higher cognitive score.
The combined outcomes of these experiments suggest that basic mental abilities are fairly insulated from short-term acts of willpower. While a person can choose to direct their attention to a specific task, they cannot force their underlying cognitive processing to operate beyond its established limits. By analogy, a person might focus their eyes intensely on a distant object, but that concentration cannot alter the fundamental sensitivity of their visual system if the object is too far away to see.
This research reinforces the general validity of standard cognitive testing. Because the tests are not easily skewed by shifting levels of motivation, they remain an accurate reflection of a person’s baseline reasoning skills. Educators and psychologists who rely on these metrics can be fairly confident that the scores represent actual ability rather than mere compliance or enthusiasm on the day of the test.
The findings do not imply that hard work and perseverance are useless traits in general. Diligence and goal setting remain highly effective strategies for long-term success, especially when mastering new skills, learning information over time, or completing lengthy projects. Effort remains incredibly helpful for overcoming frustration and maintaining focus. The current findings specifically address short-term attempts to temporarily elevate brain power during a standalone assessment.
Future investigations should continue exploring the exact properties of these newly validated pre-test surveys to ensure they accurately capture participant intentions. In the meantime, educators seeking to improve student outcomes might shift their focus toward proven instructional techniques. Methods like systematic time spent on a task and spaced repetition over weeks and months reliably help students learn. These strategies offer a more realistic path to academic improvement than expecting a sudden burst of effort to raise a student’s basic cognitive ability.
The study, “Is Trying Harder Enough? Causal Analysis of the Effort-IQ Relationship Suggests Not,” was authored by Timothy Bates.