Toxic gamers who enjoy harassing others through in-game communication are in for a rude awakening.
As the creators and publishers of some of the best co-op games out there, Ubisoft and Riot Games are no strangers to dealing with toxic players. Fortunately, both companies have joined forces for Zero Harm in Comms (opens in a new tab), a research partnership aimed at combating bad behavior online. The project will use AI technology to track, address, and identify instances of online harassment in order to “promote more rewarding social experiences and avoid harmful interactions.”
Players who misbehave will no longer be able to hide behind anonymous keyboards. The research partnership intends to create a system that stores information about offending players and then shares it with the entire industry through a common database.
You may have heard the term “toxic” used to describe certain types of online gamers. Whether it refers to specific gameplay techniques frowned upon by the broader player base or to genuine abuse and bullying, it’s hard to pin blame on a display name in the age of internet anonymity.
While anti-cheat software has been introduced in many major online PvP games, it tends not to notice when players actively harm others with their words. The narrow gap between profanity and bullying is often occupied by the kind of trolls you’d hope to find only in fairy tales.
In the announcement, Ubisoft La Forge executive director Yves Jacquier showed sympathy (opens in a new tab) for players who find themselves in these awful situations. “Disruptive player behavior is an issue that we take very seriously but also a very difficult one to deal with,” he said, referring to how in-house systems have repeatedly failed to detect and punish users for their bad behavior. “We believe that, by working together as an industry, we will be able to tackle this problem more effectively.”
And come together they will, as Riot Games’ head of technology research Wesley Kerr fully agrees. “We are committed to working with industry partners like Ubisoft who believe in creating safe communities,” he said. The partnership with Ubisoft is just one example of “the broader commitment and work that [Riot Games is] doing… to develop systems that create healthy, safe, and inclusive interactions.”
The AI software will work by taking chat logs from across Ubisoft’s and Riot’s game catalogs and removing any instances of sensitive information before each excerpt is labeled according to the behavior displayed. All of this data will then be used to better train AI models to detect players who violate community guidelines.
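Neither company has published implementation details, but the described pipeline (anonymize chat logs, then label them by behavior) can be sketched in a few lines. Everything below is a hypothetical illustration: the regexes, the toy keyword list, and the label names are assumptions, not anything from the announcement.

```python
import re
from dataclasses import dataclass

# Toy patterns for sensitive info; a real system would cover far more
# categories (usernames, addresses, payment details, etc.).
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

# Stand-in lexicon; real labeling would use trained classifiers
# and human annotators, not a keyword list.
HARASSMENT_TERMS = {"uninstall", "trash player"}

@dataclass
class LabeledMessage:
    text: str   # anonymized chat excerpt
    label: str  # e.g. "harassment" or "ok"

def anonymize(text: str) -> str:
    """Strip sensitive information before the excerpt is labeled."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return IP_RE.sub("[IP]", text)

def label(text: str) -> LabeledMessage:
    """Anonymize first, then attach a behavior label."""
    clean = anonymize(text)
    is_bad = any(term in clean.lower() for term in HARASSMENT_TERMS)
    return LabeledMessage(clean, "harassment" if is_bad else "ok")
```

The key design point the announcement implies is ordering: personal data is removed *before* labeling, so the shared training set never contains identifying information.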
Sure, some games have made strides toward curbing bad behavior on their own, but if having your gear taken away in the middle of a Call of Duty: Modern Warfare 2 run doesn’t sound threatening enough, the knowledge that hundreds of people will read your toxic comments will hopefully make unscrupulous players think twice before sounding off in post-match chat.