Abstract

Excerpted From: Eva Nave, Hate Speech, Historical Oppressions, and European Human Rights, 29 Buffalo Human Rights Law Review 83 (2022-2023) (182 Footnotes) (Full Document)

 

The Internet enables borderless communications for more than half of the world's population. It connects people who are physically apart and it facilitates the spread of ideas and information. While the benefits of the Internet are undeniable, it also presents a dark side: hateful speech, for instance, tends to spread much faster and farther online, often systematically targeting marginalized groups. As noted by a former Secretary-General of the United Nations, the use of the Internet to promote hateful expressions is one of the most significant human rights challenges arising from technological developments.

Online platforms have been linked to the rise of hate speech and violent conduct. For example, Facebook was accused of contributing to anti-Muslim riots in Sri Lanka and of playing a crucial role by hosting commentary inciting violence against the Rohingya minority in Myanmar. Other platforms have been associated with mass shootings, as in the cases of Gab, linked to the Pittsburgh synagogue shooter, and 8kun, linked to the El Paso shooting.

In reaction to these events and due to international pressure, online platforms have started to self-regulate hate speech. However, such self-regulatory efforts often lack a standardized approach to the conceptualization of hate speech that is aligned with human rights. Though some online platforms expressly prohibit hate speech (e.g. Facebook, Twitter, YouTube, LinkedIn, TikTok, Tumblr, Microsoft), they then differ in their definitions. While Facebook defines hate speech as a “direct attack against people - rather than concepts or institutions - on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease,” others do not refer directly to hate speech and focus instead on the prohibition of expressions based on their harmful impact (e.g. Reddit, WhatsApp, LinkedIn).

A specific example of how the platforms' definitions of hate speech may not be aligned with human rights standards is Facebook's definition of “protected categories.” In 2017, controversies arose when Facebook applied a definition of the categories protected from hate speech that disregarded the protections assigned under international and European human rights law to marginalized groups. In this case, that definition led to the removal of a post suggesting that “all white people were racist” but authorized a post calling for the “killing of radicalized Muslims.” This decision was based on the justification that “radicalized Muslims” was a subgroup of a protected marker (i.e. religion), while “all whites” was more generic, thus supposedly more impactful, and therefore deemed more important to protect.

Automated content moderation tools are often criticized as either overbroad or underinclusive: overbroad because they take down content with no legal basis for removal (online hate speech detection tools have come under scrutiny for racial and queer biases), and underinclusive because they often disregard context or content shared by linguistically marginalized groups.

To date, there is no legally binding definition of hate speech in European or international human rights law. States and public bodies have passed legislation regulating online hate speech, but controversies arise over how to conceptualize hate speech and how to design effective legislation compliant with human rights standards.

The main questions that this Article seeks to answer are: (1) what are the main elements of hate speech under European human rights law?, (2) do they align with the original conceptualization of hate speech by critical legal theory?, and (3) to what extent do they require further clarification? By addressing these questions, this Article aims to clarify the main aspects of a legal conceptualization of hate speech, grounded in critical legal theory, laying the foundation for an analysis of advances and shortcomings in the European regulatory framework. The focus is on the European context as there is a need to systematize at the regional level the legal requirements for current and future hate speech policies.

The methodology is composed of doctrinal, normative, and meta-legal research. Doctrinal research focusing on applicable legal frameworks to online hate speech in Europe will contribute to clarifying the existing legal standards. Normative research will identify and address legal loopholes. Meta-legal research will investigate the interplay between European human rights law and critical legal (race) theory and (black) feminist intersectionality theory. These last two theoretical frameworks were selected as the term (racist) “hate speech” was coined and conceptualized within these fields.

Part II explores the legal foundations of what hate speech is, what its consequences are, and how it should be regulated from a critical legal perspective. The original legal conceptualization of racist hate speech by critical race theory is key to understanding that hate speech is used against historically and systematically oppressed groups. The insights of critical legal theory also help to explain the impact and harm of hate speech by highlighting the cumulative effects of continued exposure to hate speech and the intersectionality of systems of oppression (race, gender, sexual orientation, etc.). This Part traces the legal foundations of the regulation of hate speech across three periods: from the Enlightenment, through the 1980s, to the present. The Part highlights how freedom of expression was never understood as an absolute right and how, since the start of the debate about systematic marginalization, exceptions to free speech have always been accounted for. It concludes with an analysis of the current legal challenges related to hate speech. These include, for instance, the need to grant protection to people increasingly targeted by misogynistic and queer-phobic hate speech, as well as hate speech targeting people with disabilities. Another current challenge relates to the digitalization of hate speech and how the legal system now needs to account for the faster and farther dissemination of hate speech through the Internet.

Part III investigates the theoretical underpinnings of hate speech at the Council of Europe. This Part focuses on both treaty and non-treaty initiatives. The primary treaty is the European Convention on Human Rights (ECHR), analyzed together with relevant case law of the European Court of Human Rights (ECtHR). Other treaties are the Additional Protocol to the Convention on Cybercrime, concerning the criminalization of acts of a racist and xenophobic nature committed through computer systems, the Convention on preventing and combating violence against women and domestic violence, the Framework Convention for the protection of national minorities, and the European Convention on Transfrontier Television. Non-treaty initiatives selected for this analysis include: recommendations and guidelines by the Committee of Ministers; general policy recommendations by the European Commission against Racism and Intolerance (ECRI); outcomes of the European Ministerial Conferences on Mass Media Policy; outcomes of the Council of Europe Conferences of Ministers responsible for media and new communication services; and the Venice Commission Report on the relationship between freedom of expression and freedom of religion. The main non-treaty framework is Recommendation CM/Rec(2022)16 of the Committee of Ministers to member States on combating hate speech, which draws on the main jurisprudence of the ECtHR on hate speech and which is a cornerstone in the clarification of the main elements of hate speech in this Article.

Part IV explores the main elements of hate speech in the substantive regulation at the European Union (EU) level. This Part starts by examining the EU's general principles and primary sources such as the Treaty on the EU and the Charter of Fundamental Rights of the EU. It then explains the main advances in the regulation of hate speech in secondary sources of EU law such as: the Council Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law; the Audiovisual Media Services Directive; resolutions adopted by the European Parliament (EP); the Regulation of the EP and of the Council on a Single Market for Digital Services (Digital Services Act, DSA); the Proposal for a Regulation of the EP and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act, AI Act); and the Proposal for a Directive of the EP and of the Council on combating violence against women and domestic violence. Finally, this Part explores the European Commission's December 2021 communication on its intention to extend the list of EU crimes to include hate speech and hate crime. In doing so, it does not focus on the procedural regulation of online hate speech, as that pertains to the corporate human rights due diligence responsibilities of internet intermediaries moderating illegal content. Rather, the scope of this Article focuses on the substantive conceptualization of hate speech.

Part V concludes with a summary of the main elements in the legal conceptualization of hate speech rooted in European human rights law and supported by notions of critical theory and intersectionality advanced by (black) feminist scholarship. Though the main elements in the conceptualization of hate speech were clarified in CM/Rec(2022)16, this Article presents two main findings. First, it is critical that the European regulatory framework explicitly acknowledge that critical legal scholars conceptualized hate speech as expressions intended to perpetuate historical and systematic oppressions. Second, this Article advocates that the conceptualization of hate speech in the European context can only achieve legal cohesion when all European regulatory instruments expressly account for the intersectionality of systems of oppression.

[. . .]

Scholars, practitioners, and policy-makers have long focused on clarifying the definition and status of hate speech in international and regional human rights law. However, this has proven to be a challenging process, and the absence of a legally binding definition of hate speech in human rights law has had severe negative individual and societal implications. In an era of digital communication marked by the increased prevalence and reach of hate speech, it is imperative to advance a standardized legal conceptualization of hate speech capable of protecting, and providing legal remedies to, people targeted by such hateful expressions.

This Article clarifies the original conceptualization of hate speech advocated by critical race scholars, grounded in the perpetuation of intersectional, historical, and systematic oppression. This Article then analyzes, in light of that original conceptualization of hate speech, a selection of legal initiatives in the European context, covering the treaties and non-treaty initiatives identified here as the most relevant to the regulation of hate speech at the European level. The main treaty instruments analyzed are the European Convention on Human Rights and the Charter of Fundamental Rights of the EU. The most relevant EU legal instruments are the Framework Decision on Combating Racism and Xenophobia, the Audiovisual Media Services Directive, and the Code of Conduct on countering illegal hate speech online. The main non-treaty instrument is the Recommendation of the Committee of Ministers of the Council of Europe on a comprehensive approach to hate speech (CM/Rec(2022)16).

The analysis in this Article explains the interplay between European regulatory instruments and claims that a more standardized conceptualization of hate speech, rooted in the intersectional, historical, and systematic systems of oppression that hate speech perpetuates, can help reconcile the regulation of hate speech in Europe. If the European regulatory framework to counter hate speech is to uphold values of equality, dignity, pluralism, and democracy, then the most effective and legally coherent way to achieve that objective is to emphasize the seminal hate speech elements presented by critical race and legal scholars and to underscore the need to investigate the intersectional, historical, and systematic systems of oppression perpetuated by hate speech.


PhD candidate at eLaw, Center for Law and Digital Technologies, Faculty of Law, Leiden University.