When Hate goes Viral

12.19.2016, by Fabien Trécourt
Young people are the prime target for hate campaigns on the Internet.
How does hate spread online? A large-scale investigation of online violence and propaganda and their effect on young people is currently being led by Catherine Blaya, a professor of educational sciences and the president of the International Observatory of Violence in School (IOVS).

Have there been any studies to measure the impact of hate content on the Internet?
Catherine Blaya¹: Several projects have shown that young people are increasingly exposed to such material. A European report entitled “Net Children Go Mobile” indicates a sharp increase in various types of practices between 2010 and 2014, including online bullying and insults, exposure to violent images, and messages of hate or discrimination. Another recent study in Finland revealed that 67% of all Internet users had been exposed to online hate content concerning body types, sexual identity, religion or skin color. On the other hand, there has been no specific study of its impact on young people: what effect does it have on them? Does it encourage them to sympathize with this type of message, or to support or even adopt violent ideas or behaviors? This is what we are trying to verify, specify or qualify in response to the CNRS Attentats-Recherche (“Research on Terrorist Attacks”) call for proposals. To that end, we are conducting an extensive survey of children aged between 11 and 18, specifically targeting the issues of racism, anti-Semitism, Islamophobia and xenophobia.

But isn’t such a process—what we could call “online radicalization”—already clearly established?
C.B.: Our research doesn’t specifically target radicalization, but rather young people’s involvement in cyberhate and its consequences in terms of their adopting violent, or even extremist, ideas or attitudes. “Radicalization” is a vague, multiform concept with no precise definition and no scientifically proven link to the proliferation of online hate content. Simply put, it is the dilemma of the chicken and the egg: do young people become radicalized because they are exposed to calls for hatred, or is it xenophobia or racism that prompts them to seek out or post this type of message? That’s the kind of question we would like to investigate, without reducing the transition to extremism to one single factor. For example, limiting oneself to a psychological approach, by blaming one sectarian excess or another, means losing sight of the social and economic aspects that might also play a role, such as having been a victim of discrimination, marginalization, severe poverty, etc. Conversely, interpreting everything from a purely sociological point of view fails to explain why other young people in difficult socioeconomic situations do not become radicalized. We can’t make generalizations based on a single approach.

What is the procedure used in your own study?
C.B.: We submitted a questionnaire to about 1,500 young people to start with, and we’re now conducting hour-long individual interviews with some of them. We hope to be able to trace, step by step, the process that exposes them to hate content. For example, we ask them if they have encountered such material by chance or through acquaintances, or if they have actively sought it out. Other studies on cyberviolence have shown that 72% of those who post hate content have been victims of it in the past. We therefore ask our respondents whether they have ever been the victims of online insults or harassment, and whether they have posted this type of material themselves before or after being exposed to it. We also look at the effect of this content: what kind of reaction does it trigger and what feelings does it arouse? Anger, sadness, hatred, or none at all…? Did those surveyed deal with the situation alone, or did they seek help from an adult or an institution? Retracing every stage of the process, taking a multitude of possible situations into account, is painstaking, time-consuming work. We presented an initial overview of our findings on November 28, but the detailed results will not be known before the end of January.

What criteria do you use to define hate speech?
C.B.: We are careful not to impose our adult mindset on young people’s way of thinking. Go to a school playground and you’ll realize that words don’t always have the same meaning for them. What an adult would take as an insult, for example, can actually be a joke between two friends who don’t take it seriously. We ask the youngsters to characterize the content themselves. If they say they have been exposed to violent material, it means that they themselves see it as such. Of course, it’s not a matter of pure relativism either—we then ask them to characterize the violence according to well-established criteria: was it related to skin color, sexual identity, ethnicity, religious beliefs? We also evaluate the degree of support for racist ideas, using questions from the “Implicit Association Test” developed at Harvard University.

What can be done to combat online calls to violence?
C.B.: They cannot be taken lightly. Today’s propaganda campaigns are extremely well organized, as shown in the book Viral Hate: Containing Its Spread on the Internet by Foxman and Wolf, or in L’Internet de la Haine (in French) by Marc Knobel. The propagandists target young people via multiple platforms—blogs, forums, social networks, etc.—and know how to adapt to their norms, imitating the way they speak and relying on their cultural references. For example, we have observed that disreputable militant outfits often create or use music bands to convey a racist ideology. They’re not easy to spot, because the albums are interspersed among thousands of titles online. Most young people come across them by chance or through recommendations. Understanding how such contacts are established will enable us to prevent and reduce them. We believe that deconstructing these techniques and measuring their impact will improve action against them. Our work is not just about making observations: it seeks to help everyone concerned, including parents, teachers, online operators (search engines, social networks, etc.), and political decision-makers.

Footnotes
  • 1. Migrations and Society Research Unit (URMIS - CNRS / Université Paris-Diderot / Université de Nice Sophia Antipolis / IRD).

Author

Fabien Trécourt

Fabien Trécourt graduated from the Lille School of Journalism. He currently works in France for both specialized and mainstream media, including Sciences humaines, Le Monde des religions, Ça m’intéresse, Histoire, and Management.
