Gender-Based Violence (GBV) extends far beyond physical and psychological harm, permeating digital spaces with alarming frequency. Online gender-based violence is a growing global issue, encompassing a range of harmful behaviours and forms of exploitation that disproportionately affect women, girls, and gender-diverse individuals. This violence manifests through cyberstalking, doxxing, targeted abuse, harassment, non-consensual sharing of intimate images, trolling, and hate speech, among other harmful practices. Among these, gender-based hate speech is particularly pervasive, targeting women and LGBTQI individuals with demeaning, intimidating, and silencing rhetoric. Directed at gender identity, sexuality, or feminist advocacy, it weaponizes stereotypes and prejudices to silence women and gender-diverse individuals and reinforce societal power imbalances.
The consequences of online hate speech are especially severe for women and LGBTQI people, particularly those from disadvantaged backgrounds and/or facing intersectional discrimination based on race, ethnicity, disability, or socioeconomic status. These groups are often discouraged from expressing themselves online due to fear of harassment, retaliation, or reputational harm. At the same time, everyday digital interactions, such as reading the comment section under an article, can become deeply harmful, as discriminatory “opinions” perpetuate harmful ideologies. The repercussions of online hate speech are far-reaching, extending beyond digital boundaries to affect victims’ emotional well-being, professional lives, and safety. Furthermore, women and LGBTQI individuals in leadership roles, such as politicians, journalists, and activists, are particularly vulnerable, as their public visibility becomes a focal point for misogynistic and discriminatory attacks. These relentless assaults aim to discourage participation in public discourse, stifling diverse perspectives and undermining gender equality. This silencing effect not only harms individuals but also erodes the diversity and inclusivity of public discourse, necessitating urgent collective action to foster safer and more equitable online spaces.
Sexist hate speech takes many forms both online and offline, notably victim blaming and revictimization, slut-shaming, body-shaming, revenge porn, threats of death, rape, and violence, offensive comments on appearance, sexuality, sexual orientation, or gender roles, and false “compliments” or supposed “jokes”. Regardless of the exact form, victims of online gender-based hate speech can suffer long-term consequences that affect their agency, privacy, trust, and integrity and force them through a devastating psychological cycle. In the worst cases, hate speech escalates into hate crime or drives victims to suicide.
Confronting the phenomenon
Online hate speech is a modern and, unfortunately, growing phenomenon that exploits key features of digital spaces, such as privacy, anonymity, mob mentality, and the permanence of data. These characteristics allow users to initiate and spread gender-based hate speech with alarming ease, often evading accountability. Because harmful content can persist and be amplified, addressing this issue is a critical challenge for the online world, requiring systemic efforts to curb its damaging effects on individuals and communities.
Addressing this issue requires both technological and cultural solutions, including algorithmic accountability, improved moderation practices, and public education on the consequences of online harassment.
Despite the growing recognition of online gender-based violence, its prevalence remains poorly quantified. While anecdotal evidence is abundant, systematic research on the issue is still in its infancy. Limited data exists on the scale and nature of online violence against women, girls, and LGBTQI individuals, and when available, it is rarely disaggregated by gender or other intersecting factors. Reports such as the European Women’s Lobby’s #HerNetHerRights (2017) highlight this gap, noting that existing data primarily focuses on social media platforms, leaving other online spaces understudied.
CHASE: Building Tools to Counter Gender-Based Hate Speech Online
The EU-funded CHASE project is an innovative initiative designed to combat online gender-based hate speech. Its primary objective is to support online media platforms in preventing, detecting, and addressing these harmful expressions, thereby aligning with the EU’s priority to prevent and counter hate speech online. The project targets key groups integral to this effort: online media professionals, especially those engaging directly with users, and CSOs/NGOs advocating for human rights and gender equality.
Online media professionals are a focal group because their role is pivotal in identifying and responding to online hate speech, which, if left unchecked, can incite broader societal harm, including hate crimes. It should be noted that the project targets mainstream online media rather than social networks, where several initiatives have already been undertaken. Despite the critical role of mainstream online media, many media professionals struggle to effectively detect and address both overt and implied gender-based hate speech. By equipping them with innovative ICT tools and educational resources, CHASE seeks to empower these professionals to act decisively.
CHASE also empowers CSOs/NGOs to enhance their watchdog role, bridges gaps in the understanding of online hate speech, and raises public awareness of human rights and gender issues, contributing to vital research and improved data on this pervasive issue.
ITML: Empowering CHASE with Advanced AI Solutions
ITML, a leading partner in CHASE, will design and develop the project’s cutting-edge ICT tool, leveraging state-of-the-art text analytics and machine learning algorithms. This advanced solution promises real-time detection of online gender-based hate speech, setting a new standard in precision and efficiency for combating digital hate. The tool is anchored in the legal analysis set forth by the Council of Europe, assessments of hate speech patterns, and needs analyses conducted by our esteemed partners. ITML’s role in CHASE thus extends beyond technological prowess, embodying a commitment to fostering a digital ecosystem where hate speech holds no sway and every individual’s voice finds validation and respect.
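To make the general approach more concrete, the sketch below shows a minimal text-classification baseline of the kind a hate speech detection tool might build on: TF-IDF character n-gram features fed into a logistic regression classifier, with a probability score routed to a human moderator. This is an illustrative assumption, not the actual CHASE/ITML implementation; the example comments, labels, and workflow are invented for demonstration, and a production system would rely on large annotated corpora, multilingual models, and human review.

```python
# Minimal sketch of a hate speech detection baseline (illustrative only;
# not the CHASE tool). Character n-grams give some robustness to
# obfuscated spellings; the training data below is a toy, invented set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labelled comments: 1 = gender-based hate speech, 0 = not.
train_texts = [
    "Women have no place in politics, go back to the kitchen",
    "She only got that job because of her looks",
    "Great analysis in this article, thanks for sharing",
    "I disagree with the author's conclusion, but it is well argued",
]
train_labels = [1, 1, 0, 0]

model = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(train_texts, train_labels)

# A moderation workflow would score incoming comments and flag
# high-probability cases for human review rather than auto-deleting them.
incoming = ["Nobody wants to hear a woman's opinion on this"]
score = model.predict_proba(incoming)[0][1]
print(f"hate-speech probability: {score:.2f}")
```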