Training AI to Detect Cyberbullying


01.23.2019, by Martin Koppe
Artificial intelligence has turned into a surprising, but effective ally in the fight against cyberbullying, a phenomenon that primarily involves teenagers.

While the Internet has always been an ambivalent tool, the rise of social media over the last decade has reinforced its dangers: sharing knowledge and opinions has become easier, but so has the intrusion of elements that can harm teenagers. While Jean-Paul Sartre never had a smartphone or the Internet, his phrase "hell is other people"1 has taken on a new dimension in these times of over-connection.

In France, emergency phone numbers have been set up specifically to combat this phenomenon.2 AI research has engaged with the problem through the European project Creep,3 which targets schools in the Italian province of Trento as a use case. Coordinated by the Italian research center Fondazione Bruno Kessler and funded by the European Institute of Innovation & Technology, Creep brings together members from the University of Trento, the CNRS, and Université Côte d'Azur, as well as the company ExpertSystem and the German start-up Neuronation. It has developed technological tools to combat cyberbullying, with solutions based in particular on artificial intelligence.

UNESCO, the United Nations agency that promotes education and culture, estimates that school harassment overall, not limited to its digital form, affected 246 million minors in 2017. Between 2010 and 2014, the proportion of young people aged 9 to 16 who were exposed to cyberbullying rose from 8% to 12%, with only 10% of them informing their parents. The French Ministry of National Education defines cyberbullying as a situation in which a victim suffers at least one attack per week for a month. In France, 12.5% of teenagers report having already been victims.

Differentiating Arguments from Bullying

"We are developing tools that can explore social media networks and analyze the exchanged messages, in order to identify those containing cyberbullying or hate speech," explains Elena Cabrio, associate professor at Université Côte d'Azur and member of the Laboratoire d'informatique, signaux et systèmes de Sophia Antipolis (I3S).4 The research conducted by the Sparks team at I3S5 focuses on automatic language processing and the study of argumentative text. It draws on network analysis and natural language processing methods. "Each user is represented as a node in a graph, and textual exchanges between users are the edges connecting them to each other. Algorithms help navigate these graphs, while machine learning methods process the messages to determine whether they contain aggressive content."
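The approach Cabrio describes can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual implementation: the messages and toxic-term lexicon below are hypothetical, and the keyword test stands in for a trained machine-learning classifier.

```python
from collections import defaultdict

# Hypothetical sample messages: (sender, recipient, text).
MESSAGES = [
    ("alice", "bob", "great goal yesterday!"),
    ("carol", "bob", "you are pathetic, nobody likes you"),
    ("carol", "bob", "loser"),
]

# Stand-in for a trained classifier: a small keyword lexicon.
TOXIC_TERMS = {"pathetic", "loser", "nobody likes you"}

def is_aggressive(text: str) -> bool:
    """Very rough stand-in for an ML toxicity classifier."""
    lower = text.lower()
    return any(term in lower for term in TOXIC_TERMS)

def build_graph(messages):
    """Users are nodes; each directed edge carries the texts exchanged."""
    edges = defaultdict(list)          # (sender, recipient) -> [texts]
    for sender, recipient, text in messages:
        edges[(sender, recipient)].append(text)
    return edges

def flag_aggressive_edges(edges):
    """Keep only edges where at least one message is classified aggressive."""
    return {pair: [t for t in texts if is_aggressive(t)]
            for pair, texts in edges.items()
            if any(is_aggressive(t) for t in texts)}

print(flag_aggressive_edges(build_graph(MESSAGES)))
```

In a real system the lexicon lookup would be replaced by a supervised model, but the graph structure — and the fact that detection operates on edges between specific pairs of users — stays the same.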

Automatic cyberbullying detection system: each node represents a user and the arcs represent the connections between them

These tools can consequently detect swear words, insults, and other toxic terms. Yet while exchanges on the web are not always courteous, not every argument is an instance of cyberbullying. "People are virulent on social media," Cabrio admits. "But bullying is defined as repeated attacks by a user or set of users towards the same target." Adults are not free from this problem.
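The repetition criterion — combined with the Ministry's working definition of at least one attack per week for a month — can itself be expressed as a simple rule over already-flagged messages. The sketch below is an illustrative assumption, not the Creep project's code; the sample events are hypothetical, and the week bucketing is only approximate (attacks seven days apart can occasionally fall in the same bucket).

```python
from collections import defaultdict
from datetime import date

def bullied_targets(events, weeks=4):
    """events: (attacker, target, day) triples for messages already
    classified as aggressive. Report targets who were attacked in
    `weeks` consecutive calendar weeks."""
    weeks_hit = defaultdict(set)               # target -> set of week indices
    for _attacker, target, day in events:
        weeks_hit[target].add(day.toordinal() // 7)
    victims = set()
    for target, buckets in weeks_hit.items():
        ordered, streak = sorted(buckets), 1
        if streak >= weeks:                    # weeks == 1: any attack counts
            victims.add(target)
            continue
        for prev, cur in zip(ordered, ordered[1:]):
            streak = streak + 1 if cur == prev + 1 else 1
            if streak >= weeks:
                victims.add(target)
                break
    return victims

# Hypothetical example: four weekly attacks on "bob", one isolated attack on "eve".
EVENTS = [
    ("carol", "bob", date(2019, 1, 7)),
    ("carol", "bob", date(2019, 1, 14)),
    ("carol", "bob", date(2019, 1, 21)),
    ("carol", "bob", date(2019, 1, 28)),
    ("dave", "eve", date(2019, 1, 10)),
]
print(bullied_targets(EVENTS))
```

Only "bob" meets the four-week criterion here; "eve" was attacked once, which — however unpleasant — falls under ordinary online virulence rather than bullying in the sense Cabrio describes.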

Serena Villata, CNRS researcher at I3S and a member of the Sparks team, is also contributing to the Creep project. She studies arguments and how to extract argumentative structures from online debates. "Bullying existed before the Internet, but with social media its negative effects can continue even outside the classroom," she laments. "The victims have no respite, so we are trying to help them manage difficult situations."

In Situ Testing in Italy

The I3S methods were applied to social media such as Facebook, WhatsApp, and Instagram, along with networks created specifically for pilot schools in the province of Trento. Some institutions have equipped themselves with internal networks, which are easier to monitor and study: researchers currently have no access to private messages on platforms such as Facebook, which limits the exploration and automatic analysis of online content.
  
Fondazione Bruno Kessler will also program a chatbot (an automated online discussion system) that can support victims and direct them toward a psychologist or another trusted individual. Initially planned to end in late 2018, the Creep project was ultimately extended into 2019.

"Schools in the province of Trento will soon begin using our prototypes," adds Villata. "We will take advantage of this project extension to create a second version of the application, which will also take images into account."

Social media such as Instagram focus on pictures. Aside from insulting photomontages, the risks primarily involve erotic content. The unwanted sharing and circulation of suggestive photos of a young girl is a major trigger of cyberbullying. The term "revenge porn" refers to the unsolicited distribution of erotic or even pornographic photos and videos by former boyfriends, girlfriends, or lovers. While this mainly involves adults, the phenomenon also affects minors through the spread of sexting, which can be turned against the original sender in cases of romantic disappointment.

In the spring, Facebook announced that it would combat revenge porn by proposing that victims, as well as those who fear becoming victims, send their compromising content to the company so that it can be quickly identified if posted on its network. This is a ludicrous solution: it is inconceivable to expect people to provide Facebook with intimate media of themselves, all the more so when they are minors. The grotesqueness of the proposal underscores the lack of adequate tools, as well as the scope of the work that remains to be done. "There is a lot of talk about artificial intelligence," Villata concludes. "Our research is a good example of a use that benefits society and helps people."

Footnotes
  • 1. This quote comes from the play No Exit (Huis Clos, in French) which Jean-Paul Sartre wrote in 1943.
  • 2. In France, a telephone number (3020, Say no to harassment) was created to combat this phenomenon, as was a more specialized 800 number (0800 200 000, Net hotline).
  • 3. Cyberbullying Effects Prevention.
  • 4. Unité CNRS/Université Côte d'Azur.
  • 5. Scalable and Pervasive Software and Knowledge Systems.

Author

Martin Koppe

A graduate of the School of Journalism in Lille, Martin Koppe has worked for a number of publications including Dossiers d'archéologie, Science et Vie Junior and La Recherche, as well as the website Maxisciences.com. He also holds degrees in art history, archaeometry, and epistemology.

 
