A new study suggests that online misinformation is not limited to fabricated stories from unreliable websites but also includes factual reports from mainstream media that are repurposed to support false claims. Researchers found that social media users who frequently share fake news also share specific articles from reputable news outlets that contain narratives common in misinformation. The findings indicate that bad actors may strategically select true information to lend credibility to misleading arguments. The study was published in the journal Nature Human Behaviour.
Social scientists and media researchers have traditionally struggled to quantify the spread of falsehoods online. The standard approach involves identifying specific websites or domains known for publishing fabricated content and tracking how often links to those sites appear on social networks. This method assumes a clear division where “fake” news comes from bad sources and “real” news comes from good sources.
However, this source-based binary fails to capture the nuance of how information actually circulates. A factual story from a reliable outlet can be taken out of context to imply something untrue. Pranav Goel, a researcher at the Network Science Institute at Northeastern University, led a team to investigate this gray area of the information ecosystem. Goel worked alongside Jon Green from Duke University, David Lazer from Northeastern and Harvard, and Philip S. Resnik from the University of Maryland.
The research team operated under the theory that information does not exist independently from how people use it. Users do not merely share individual facts. They share stories that advance their broader interests and political worldviews. The authors hypothesized that people seeking to promote misleading narratives would use factually true information to do so if mainstream sources provided useful material.
To test this hypothesis, the researchers analyzed a massive dataset of activity on Twitter, now known as X. The data spanned from May 2018 to November 2021. The team matched the Twitter accounts in the dataset to a United States voter file to confirm the users were real people and to gather demographic information.
The investigators began by identifying a set of users who frequently shared content from unreliable domains. These domains were classified as “fake news” based on ratings from NewsGuard, an independent organization that vets news sources. The researchers then observed what other links these specific users shared.
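To make the filtering step concrete, here is a minimal sketch in Python. The share log, the set of unreliable domains, and the 50 percent threshold are all illustrative assumptions; the study's actual criteria rest on NewsGuard's ratings and may differ.

```python
import pandas as pd

# Hypothetical share log: one row per (user, shared link).
shares = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u2", "u2", "u3"],
    "domain":  ["reliable.com", "fakesite.net", "fakesite.net",
                "hoax.info", "reliable.com", "reliable.com"],
})

# Stand-in for NewsGuard-style "fake news" domain ratings.
unreliable_domains = {"fakesite.net", "hoax.info"}

# Fraction of each user's shares pointing to unreliable domains.
shares["unreliable"] = shares["domain"].isin(unreliable_domains)
frac_unreliable = shares.groupby("user_id")["unreliable"].mean()

# Flag users whose sharing is dominated by unreliable sources.
THRESHOLD = 0.5  # assumed cutoff, not the paper's exact criterion
fake_news_sharers = set(frac_unreliable[frac_unreliable >= THRESHOLD].index)
print(fake_news_sharers)  # {'u1', 'u2'} in this toy example
```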
This process allowed the team to identify articles from mainstream, reliable sources that were frequently “co-shared” with fake news. The researchers constructed a network graph where connections were drawn between reliable articles and fake news articles based on how often they were posted by the same people. They assigned a “co-sharing score” to mainstream articles.
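One simple way to operationalize such a score is the fraction of an article's sharers who are also flagged fake-news sharers. The sketch below assumes that definition and uses made-up data; the paper's network construction and normalization may be more elaborate.

```python
# Hypothetical mapping from mainstream articles to the users who shared them.
article_shares = {
    "wapo/vaccinated-covid-deaths": {"u1", "u2", "u4"},
    "nyt/2012-mail-ballots":        {"u2", "u5"},
    "mainstream/control-story":     {"u3", "u4", "u5"},
}
fake_news_sharers = {"u1", "u2"}  # flagged in the previous step

def co_sharing_score(sharers: set, flagged: set) -> float:
    """Fraction of an article's sharers who also share fake news."""
    return len(sharers & flagged) / len(sharers)

scores = {article: co_sharing_score(users, fake_news_sharers)
          for article, users in article_shares.items()}
for article, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {article}")
```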
Articles with a high co-sharing score were those disproportionately popular among people who also trafficked in misinformation. The researchers then created a control group of articles. These were stories published by the same mainstream outlets but which were not frequently shared by the misinformation group.
To analyze the actual content of these articles, the team used an automated computational tool designed to extract narrative structures from text. This software breaks sentences down into semantic relationships, specifically looking for an agent, a verb, and a patient. For instance, in the sentence “The vaccine causes shingles,” the tool would identify the relationship between the vaccine and the medical condition.
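The study does not name the specific software, but the general technique can be illustrated with a few lines of dependency parsing in spaCy. In the sketch below, the model choice and the decision to match only nominal subjects and direct objects are simplifying assumptions.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_triples(text: str):
    """Return (agent, verb, patient) triples via dependency parsing."""
    triples = []
    for token in nlp(text):
        if token.pos_ != "VERB":
            continue
        agents = [c for c in token.children if c.dep_ == "nsubj"]
        patients = [c for c in token.children if c.dep_ == "dobj"]
        for agent in agents:
            for patient in patients:
                triples.append((agent.text.lower(),
                                token.lemma_.lower(),
                                patient.text.lower()))
    return triples

print(extract_triples("The vaccine causes shingles."))
# [('vaccine', 'cause', 'shingles')]
```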
The researchers compiled a library of “potentially misleading narratives.” They did this by running their extraction tool on thousands of known fake news articles and claims that had been fact-checked as false. This created a database of narrative structures that are prevalent in the world of online misinformation.
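Conceptually, the resulting library is a set of narrative triples that mainstream articles can then be scanned against. A minimal sketch, reusing the extract_triples function above and a two-sentence stand-in corpus:

```python
# Stand-in corpus of known-false claims (the real library was built from
# thousands of fact-checked articles and claims).
fake_corpus = [
    "The vaccine causes shingles.",
    "Mail-in ballots enable widespread fraud.",
]

# Assumes extract_triples() from the previous sketch.
misleading_narratives = set()
for doc_text in fake_corpus:
    misleading_narratives.update(extract_triples(doc_text))

def contains_misleading_narrative(article_text: str) -> bool:
    """True if any triple in the article appears in the library."""
    return any(t in misleading_narratives
               for t in extract_triples(article_text))

print(contains_misleading_narrative(
    "Officials claim the vaccine causes shingles."))  # True
```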
The study then compared the co-shared mainstream articles against the control group. The researchers looked to see if the co-shared articles contained these potentially misleading narratives more often than the control articles did. The analysis revealed a distinct pattern in the data.
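As a simple illustration of that comparison, one could run a two-proportion z-test on narrative prevalence in the two groups. The counts below are invented, and the paper's actual statistical modeling may be more involved.

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: articles containing at least one misleading narrative.
hits = [420, 180]    # co-shared group, control group
nobs = [1000, 1000]  # number of articles in each group

stat, pval = proportions_ztest(count=hits, nobs=nobs, alternative="larger")
print(f"z = {stat:.2f}, p = {pval:.4g}")
```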
Mainstream articles that were co-shared with fake news were significantly more likely to contain narratives found in misinformation content. This relationship held true even when the researchers accounted for the partisan leanings of the news outlets. It was not simply a case of right-wing users sharing right-wing news. The correlation suggested a more strategic selection of content.
The authors provided several qualitative examples to illustrate how this dynamic works. One prominent case involved a Washington Post article with the headline, “Vaccinated people now make up a majority of covid deaths.” The headline was factually accurate at the time of publication.
However, the headline lacked important context. Because the vast majority of the population was vaccinated, the raw number of deaths would naturally be higher in that group even if the vaccine remained highly effective. Misinformation spreaders seized on this headline. It allowed them to promote the false narrative that vaccines are ineffective or harmful while citing a reputable source.
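A quick back-of-the-envelope calculation, with invented numbers, shows why the headline can be true even while the vaccine works well:

```python
# Invented numbers purely to illustrate the base-rate effect.
population = 1_000_000
vax_share = 0.95                        # 95% of people vaccinated
unvax_death_rate = 0.001                # death risk if unvaccinated
vax_death_rate = unvax_death_rate / 10  # vaccine cuts the risk tenfold

vax_deaths = population * vax_share * vax_death_rate            # 95
unvax_deaths = population * (1 - vax_share) * unvax_death_rate  # 50

print(vax_deaths / (vax_deaths + unvax_deaths))  # ~0.66
```

Even with a tenfold reduction in individual risk, vaccinated people account for roughly two-thirds of deaths in this toy scenario, simply because nearly everyone is vaccinated.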
Another strategy identified by the researchers was the repurposing of old news. Users would find archival stories from mainstream outlets that could be recontextualized to support current conspiracy theories. For example, the study found that a 2012 New York Times article was widely shared in 2020.
The 2012 article carried the headline, “As More Vote by Mail, Faulty Ballots Could Impact Elections.” In its original context, the story was a nuanced report on election administration. In 2020, however, it became ammunition for those claiming that the presidential election was being stolen via mail-in ballot fraud. By sharing the old link, users could point to the Times to legitimize unfounded allegations of widespread fraud.
The study found that co-shared articles often employed “clickbait” style headlines. These headlines sometimes simplified complex issues in ways that made them easy to weaponize. The body of the article might contain the necessary nuance and corrections, but the headline alone served the misleading narrative.
The researchers also noted that the audience for these co-shared mainstream articles is potentially much larger than the audience for fake news sites. Users who spread the co-shared mainstream content had, on average, nearly twice as many followers as those who exclusively shared fake news. This suggests that repurposed mainstream news acts as a bridge, carrying misleading narratives into the broader public conversation.
The authors acknowledge some limitations of the study. The research focused on the text of the news articles themselves rather than the text of the tweets sharing them. It is therefore theoretically possible that users were sharing these articles to debunk or criticize them.
To address this, the team performed a manual check on a random sample of tweets. They found that instances of users sharing an article to criticize it were rare. The vast majority appeared to share the articles to endorse the content or the implied narrative.
Another caveat is that the narrative extraction tool works at the sentence level. It might miss broader context if a claim is raised in one sentence and refuted in the next. The tool identifies the presence of the narrative structure but cannot fully comprehend the rhetorical intent of the full article.
The researchers suggest that future work should examine the text of the social media posts sharing these articles. Understanding how users frame the links they share would provide further insight into the repurposing process. It would also be beneficial to study whether these misleading narratives appear mostly in direct quotes within the news stories or in the journalist’s own reporting.
The findings have implications for journalistic practice. The authors argue that fact-checking the content of a story is not enough. Editors and reporters may need to consider how a story, particularly its headline, could be used to support broader, misleading claims. The study highlights that in a networked information environment, strictly true information can still result in a misinformed public.
The study, “Using co-sharing to identify use of mainstream news for promoting potentially misleading narratives,” was authored by Pranav Goel, Jon Green, David Lazer, and Philip S. Resnik.