
Ubisoft and Riot Games want to create an AI to end bullying in online games

The two video game companies have decided to join forces to launch the “Zero Harm in Comms” research project. Their goal is to build an AI capable of detecting toxic behavior in online games and acting as a watchdog against cyberbullying.

The aim: to give online video games back their friendly, healthy community spirit, in the best sense of the term. For several years, the toxic behavior of some players, who harass others online and abuse streamers as well as ordinary players, has kept the problem in the spotlight. But the existing solutions, beyond basic moderation and player reporting, only temporarily patch wounds that never truly heal.

To solve a problem that afflicts the entire sector, the answer may have to come from the industry’s main players themselves. That is the conclusion reached by Riot Games and Ubisoft. The two video game heavyweights are announcing this Wednesday the launch of the Zero Harm in Comms research project (“zero harm in communications”). As the name suggests, its goal is automatic, AI-driven moderation of comments that are hurtful, outrageous or worse.

An AI capable of detecting inappropriate comments

Drawing on their experience with online games, the publishers of League of Legends and Assassin’s Creed have decided to join forces to design an AI-based solution to clean up in-game comments. These moderation tools will detect and penalize inappropriate behavior.

“Ubisoft approached us for this project because they knew of Riot’s interest and commitment to working with others in the industry to build safe communities and mitigate disruptive behavior,” Wesley Kerr, Riot Games’ director of technology research, told Tech&Co. “It’s a complex and difficult issue to resolve,” adds Yves Jacquier, executive director of Ubisoft La Forge. “But we believe that by bringing the industry together through collective action and knowledge sharing, we will be more effective in delivering positive online experiences and a reassuring community environment.”

Today, comment moderation often relies on a dictionary of insults that is “easily circumvented and does not take the online context into account,” emphasizes Yves Jacquier. “We need an AI that can understand the general meaning of an online game in its context.”
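To illustrate the limitation Jacquier describes, here is a minimal, purely hypothetical sketch of dictionary-based filtering: a fixed word list catches exact matches but is trivially defeated by obfuscated spellings, and it misses messages that are hostile only in context. The blocklist and chat messages below are invented for illustration; neither publisher has published its actual filter.

```python
# Minimal sketch of dictionary-based chat moderation (illustrative only).
# The blocklist and messages are invented examples, not either studio's data.

BLOCKLIST = {"noob", "trash", "idiot"}  # hypothetical dictionary of insults


def dictionary_filter(message: str) -> bool:
    """Return True if the message contains a blocklisted word (exact match)."""
    words = message.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)


# Caught: exact match on a listed word.
print(dictionary_filter("You absolute idiot"))  # True

# Missed: an obfuscated spelling circumvents the list.
print(dictionary_filter("You absolute 1d10t"))  # False

# Missed: hostile in the context of a lost round, yet contains no listed
# word. This is the gap a context-aware model trained on whole
# conversations is meant to close.
print(dictionary_filter("Just go uninstall, you are useless"))  # False
```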

Therefore, the project aims to develop an AI trained to “preemptively” detect harmful behaviors in online chats and make them disappear as quickly as possible. The aim is also to develop tools that will then be shared with other industry players for concerted and complementary action.

Active members of the Fair Play Alliance, Ubisoft and Riot Games explain that they intend to rely on technologies already deployed in their online gaming tools, as well as on their approach to implementing a framework that guarantees ethics and privacy. To do so, they will draw on Riot Games’ experience with competitive titles (Valorant in particular), which can sometimes provoke threatening behavior from players despite the studio’s efforts, but also on the diversity of Ubisoft’s catalog, which stretches all the way to Mario + The Rabbids, and therefore covers a wide range of player profiles.

Ubisoft has often been at the forefront of the fight against toxic behavior in games, constantly strengthening its tools for detecting racist, homophobic, sexist or otherwise hateful comments in game chats such as Rainbow Six Siege’s. Sanctions take the form of a message displayed to the player explaining why the behavior is offensive, a ban from matches for a defined period, an account suspension, or even a permanent ban. Riot, for its part, has multiplied penalties in Valorant, ranging from simply cutting off the microphone to exclusion from the game. This has notably been enabled by voice analysis and reports from other players.
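The graduated sanctions described above (warning message, mute, temporary ban, suspension, permanent ban) amount to an escalation ladder. The sketch below is a hypothetical illustration of such a ladder, keyed to how many times a player has been flagged; the thresholds, names and structure are invented, not taken from Ubisoft’s or Riot’s actual systems.

```python
# Hypothetical escalation ladder for repeated offenses (illustrative only).
# Thresholds and sanction names are invented, not either studio's real rules.

from dataclasses import dataclass

# (max offense count, sanction) pairs, from lightest to heaviest.
SANCTIONS = [
    (1, "warning_message"),      # explain why the behavior is offensive
    (3, "voice_chat_mute"),      # cut off the microphone
    (5, "temporary_match_ban"),  # banned from matches for a defined period
    (8, "account_suspension"),
]


@dataclass
class PlayerRecord:
    player_id: str
    offense_count: int = 0


def apply_sanction(record: PlayerRecord) -> str:
    """Record one more confirmed offense and return the sanction to apply."""
    record.offense_count += 1
    sanction = "permanent_ban"  # default once all thresholds are exceeded
    for threshold, name in SANCTIONS:
        if record.offense_count <= threshold:
            sanction = name
            break
    return sanction


player = PlayerRecord("player_123")
for _ in range(9):
    print(player.offense_count + 1, apply_sanction(player))
```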

Making this an effective moderation tool for the entire industry

It must be said that the two companies have a common interest: they increasingly base their operations on games with strong community potential. And for them to be attractive, their environment must be healthy.

With the Zero Harm in Comms research project, the two companies say they are ready to share their knowledge to solve the problem of toxicity in online comments. By pooling their experience across games, they hope to build a database covering all types of games, players and behaviors, and an AI capable of responding to every situation.

“Harmful behavior is not just the prerogative of games, but of all social platforms,” recalls Wesley Kerr. “To create positive experiences, we all need to come together. This project is an illustration of our commitment and the work we at Riot are doing to develop inclusive, healthy, and safe exchanges in our games.”

This announcement lays the first stone of an “ambitious and cross-cutting” research project. Ubisoft and Riot Games hope to rally other publishers and developers to their cause. The first lessons should be shared with the video game industry next year, they explain, whatever the conclusions. “The only real failure is when it doesn’t work and you can’t explain why,” says Yves Jacquier.

Author: Melinda Davan-Soulas
Source: BFM TV

