Women who hate men: Study finds similarities in gendered hate speech on Reddit

A new study reveals that online communities dedicated to hating men share strikingly similar behaviors and language patterns with communities dedicated to hating women. The research suggests that gender-driven hate speech is a broad phenomenon characteristic of toxic digital groups, regardless of the victim’s gender. These findings were published in the journal Scientific Reports.

Social media networks allow people around the world to share ideas and perspectives at an unprecedented scale. While these platforms can foster community building, they also create environments where discrimination and extreme ideologies can spread. One consequence is the formation of echo chambers: closed environments where users only encounter information or opinions that mirror and reinforce their own.

Anonymity on the internet often accelerates the formation of these isolated spaces. Within these chambers, hate speech acts as a mechanism of communication that expresses an ideology through offensive stereotypes. This speech targets individuals based on traits like ethnicity, religion, or gender. Gendered hate speech specifically involves harassing or degrading people simply because they are men or women.

Historically, researchers and content moderators have focused heavily on misogyny, which is the hatred of or prejudice against women. A routine search of academic databases reveals hundreds of thousands of papers examining online misogyny over the past two decades. In contrast, academic attention toward misandry, defined as the hatred of or prejudice against men, remains notably scarce. Studies examining misandry only began to appear around 2014, leaving huge gaps in the scientific understanding of digital harassment.

Erica Coppolillo, a researcher at the University of Calabria and the National Research Council of Italy, initiated a project to address this literature gap. Coppolillo sought to determine whether there are systematic differences between communities that target men and communities that target women. The goal was to see if the gender of the perpetrators changes the nature of the hostility. If the behavior proved essentially identical, it would suggest that the core issue is the toxicity of extremist online environments rather than the specific gender dynamics.

To investigate these questions, the study focused on Reddit. This platform is organized into thousands of individual communities, known as subreddits, dedicated to specific topics. Users interact by sharing posts and commenting on threads, creating dense networks of conversation. The researcher selected four subreddits known for extreme views on gender as the basis for the text analysis.

Two of these groups were chosen as examples of misandric communities. The first was a mainstream feminist subreddit discussing women’s issues, and the second was a radical feminist subreddit. The latter was banned by the platform in 2020 for violating hate speech policies. For the misogynistic side, the researcher selected a men’s rights subreddit and a group for involuntary celibates. The involuntary celibate community was also eventually banned for promoting hate and violence.

The primary data consisted of text posts and comments generated between 2016 and 2022. To ensure the analysis focused strictly on gender targeting, a strict keyword filter was applied. In the misandric groups, only texts mentioning terms like man, men, or husband were retained. In the misogynistic groups, the texts had to include terms like woman, women, or wife.
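
The study’s code is not reproduced in the article, but this kind of keyword filter is straightforward to sketch in Python. The column names, example rows, and exact term lists below are illustrative assumptions, not the study’s actual implementation.

```python
import re
import pandas as pd

# Illustrative term lists; the study's full keyword sets may differ.
MISANDRY_TERMS = {"man", "men", "husband"}
MISOGYNY_TERMS = {"woman", "women", "wife"}

def mentions_any(text: str, terms: set) -> bool:
    """Return True if the text contains any target term as a whole word."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in terms for token in tokens)

# Hypothetical dataframe with one Reddit post or comment per row.
posts = pd.DataFrame({
    "community": ["misandric", "misogynistic", "misandric"],
    "body": [
        "My husband never listens to me.",
        "Women always complain about everything.",
        "The weather is terrible today.",
    ],
})

# Keep only texts that mention the gender targeted by their community.
keep = posts.apply(
    lambda row: mentions_any(
        row["body"],
        MISANDRY_TERMS if row["community"] == "misandric" else MISOGYNY_TERMS,
    ),
    axis=1,
)
filtered = posts[keep]  # drops the third row, which names neither gender
```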

The analysis began with a linguistic comparison to identify the vocabulary shaping these conversations. A natural language processing tool cleaned the text by removing punctuation and numbers, and the researcher then examined the twenty most frequent words in each community. The results showed that most common terms occurred with similar frequency across all four groups, with no sharp linguistic boundaries separating the communities targeting men from those targeting women.
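
As a rough illustration of the cleaning and counting step (not the author’s actual pipeline), the whole procedure fits in a few lines using Python’s standard library; the stop-word set here is an assumption.

```python
import re
from collections import Counter

def top_words(texts, k=20, stopwords=frozenset({"the", "a", "and", "to"})):
    """Lowercase, strip punctuation and numbers, and count word frequency."""
    counts = Counter()
    for text in texts:
        # Keeping only alphabetic tokens drops punctuation and numbers.
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in stopwords)
    return counts.most_common(k)

# Usage: compute and compare per-community vocabularies, e.g.
# top_words(misandric_texts) versus top_words(misogynistic_texts)
```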

Next, the study measured the toxicity of the content to see how aggressive these conversations were. Toxicity refers to how rude, disrespectful, or hateful a given comment appears to the reader. The researcher used an artificial intelligence model known as a transformer to evaluate the text.

A transformer is a deep learning model that understands the context of a word based on the surrounding sentence structure. This specific model had been trained on tens of thousands of manually annotated internet posts to learn the nuances of hate speech. It assigned a toxicity score to each post and comment, placing it on a continuous scale from completely harmless to intensely toxic.
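
The article does not name the specific model, but scoring text with a pretrained transformer typically takes only a few lines with the Hugging Face transformers library. The model shown here, unitary/toxic-bert, is one publicly available toxicity classifier and is an assumption, not necessarily the one used in the study.

```python
from transformers import pipeline

# Assumed model: unitary/toxic-bert, a publicly available toxicity classifier.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

comments = [
    "I hope you all have a wonderful day.",
    "You are all worthless idiots.",
]

# Each prediction carries a label and a score in [0, 1], which serves as
# a continuous scale from completely harmless to intensely toxic.
for comment, result in zip(comments, toxicity(comments)):
    print(f"{result['label']} ({result['score']:.2f}): {comment}")
```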

The results of the toxicity analysis showed that the majority of content across all four communities was rated as non-toxic. Almost all the communities showed a bimodal pattern, with a large peak of harmless text and a smaller peak of highly toxic text. The two misogynistic communities showed a slightly higher peak in extreme toxicity than the misandric groups. Even so, the overall distributions of toxicity were remarkably similar.

The third phase of the study evaluated the specific emotions expressed within the texts. The researcher used two different machine learning algorithms capable of detecting emotions like sadness, joy, fear, and anger. For this analysis, the focus was narrowed exclusively to negative emotions. The algorithms evaluated each piece of text to see if sadness, anger, fear, or hate was the dominant sentiment.

When examining the emotions at a broad content level, all four communities expressed hate most frequently. Anger was the second most common emotion across the board. The men’s rights group and the mainstream feminist group displayed strikingly similar emotional patterns. The involuntary celibate group leaned slightly more toward sadness, while the radical feminist group leaned slightly toward fear.

Once again, the findings did not reveal sweeping differences between the two sides. The researcher also evaluated the same emotions at the level of individual users. Instead of treating posts in isolation, the algorithms calculated the dominant emotion expressed by each user across all of their contributions. When viewed this way, the pattern shifted dramatically.
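
Assuming each text has already been labeled with a dominant emotion (for instance by a classifier invoked like the toxicity model above), the user-level shift is essentially a group-by aggregation. This pandas sketch with hypothetical data shows the difference between the two views.

```python
import pandas as pd

# Hypothetical per-text emotion labels produced by the classifiers.
texts = pd.DataFrame({
    "user":    ["alice", "alice", "alice",   "bob",   "bob", "carol"],
    "emotion": ["hate",  "hate",  "sadness", "anger", "anger", "hate"],
})

# Content level: the most frequent emotion across all texts.
content_level = texts["emotion"].value_counts().idxmax()  # -> "hate"

# User level: first find each user's dominant emotion, then look at the
# distribution of those per-user labels across the community.
user_level = (
    texts.groupby("user")["emotion"]
    .agg(lambda s: s.value_counts().idxmax())
    .value_counts()
)  # -> hate: 2 users, anger: 1 user
```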

The mainstream feminist community displayed the highest levels of user-driven hate, followed by the radical feminist group and the men’s rights group. This shifted perspective suggests that misandric communities might harbor more concentrated negative sentiment among actively posting users than misogynistic ones do.

Finally, the study mapped the conversational networks within each subreddit. The researcher built visual graphs in which every user was a point, and an interaction between two users was a connecting line.

This allowed the researcher to measure the structural properties of each community network. One measured property was modularity, which captures how strongly a network divides into smaller, isolated sub-communities. Another was the network diameter, which represents the longest shortest path between any two users in the conversation network.
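
Both measures are standard in graph analysis. A minimal sketch with the networkx library (an assumption, since the article does not specify the tooling) shows how they could be computed on a small toy network.

```python
import networkx as nx
from networkx.algorithms import community

# Toy interaction graph: nodes are users, edges are reply interactions.
G = nx.Graph()
G.add_edges_from([
    ("u1", "u2"), ("u2", "u3"), ("u1", "u3"),  # one tight cluster
    ("u4", "u5"), ("u5", "u6"), ("u4", "u6"),  # a second cluster
    ("u3", "u4"),                              # a single bridge edge
])

# Modularity: how strongly the network splits into sub-communities.
partition = community.greedy_modularity_communities(G)
modularity = community.modularity(G, partition)

# Diameter: the longest shortest path between any pair of users.
diameter = nx.diameter(G)

print(f"modularity={modularity:.2f}, diameter={diameter}")
```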

The network structures did not align with the gender focus of the subreddits. The mainstream feminist group shared more structural features, like high modularity and wide diameter, with the men’s rights group. In contrast, the involuntary celibate community’s conversational network more closely resembled the radical feminist network. The structural analysis confirmed that the intended direction of the hate speech does not dictate how an online community organizes itself.

These findings suggest that content moderation strategies should address hate speech consistently, regardless of which gender it targets. Recognizing misandric hostility as a serious issue could lead to safer digital spaces for everyone. Treating misogyny and misandry with equal seriousness pushes platforms toward universal interventions to curb toxic behavior.

However, the study relies on data scraped from an open internet platform, which inevitably contains noise and formatting errors. Real-world social data is rarely perfectly clean, which can impact automated evaluation. The study also relies heavily on artificial intelligence algorithms to evaluate toxicity and emotions. While these models are highly accurate, they are not flawless.

These models occasionally misclassify internet slang or sarcasm, which could introduce a small degree of uncertainty into the results. The findings are also specific to the analyzed Reddit communities. Content dynamics on other platforms, such as Facebook or a video-sharing site, might yield completely different results.

Future research could investigate whether artificial bot accounts contribute to the spread of negativity in these specific forums. Researchers could also look for heavily radicalized sub-factions hidden within the broader internet communities.

The study, “Women who hate men: a comparative analysis across extremist Reddit communities,” was authored by Erica Coppolillo.
