Rec Room reduces toxic voice chat incidents by 70% with intelligent voice moderation


Gaming experiences can be undermined, even ruined, by bad behavior in text chat or forums. In voice chat and in VR, that bad experience is magnified and far more visceral, so toxicity is amplified.

But positive interactions can be similarly enhanced. It's vital that developers dig into how their users relate to one another to understand how to mitigate harm, improve safety and trust, and encourage the kinds of experiences that help players build community and stay for the long haul.

To talk about the challenges and opportunities emerging as the game industry begins to address just how bad toxicity can be for business, Imran Khan, senior writer for game dev and tech at GamesBeat, welcomed Yasmin Hussain, chief of staff at Rec Room, and Mark Frumkin, director of account management at Modulate, to the GamesBeat Next stage.

Backing up the code of conduct with voice intelligence

Moderation is one of the most effective tools for detecting and combating bad behavior, but it's a complex undertaking for humans alone. Voice intelligence platforms such as Modulate's ToxMod can monitor every live conversation and file reports directly to the human moderation team for follow-up. That provides the evidence required to make educated decisions to mitigate harm, backed by a code of conduct, while also offering insight into player interactions across the game as a whole.
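As a rough illustration of that flow, the sketch below shows how alerts from a voice moderation system might be routed to a human review queue along with the supporting evidence. The class and function names are hypothetical, not ToxMod's actual API, and the severity threshold is an assumed value.

```python
from dataclasses import dataclass, field

# Hypothetical data shapes -- not ToxMod's actual API -- illustrating how a
# voice intelligence platform might hand evidence to human moderators.
@dataclass
class ToxicityAlert:
    room_id: str
    speaker_id: str
    transcript_snippet: str   # evidence the human team reviews
    severity: float           # assumed scale: 0.0 (benign) to 1.0 (severe)
    policy_tags: list[str] = field(default_factory=list)  # code-of-conduct rules flagged

# A simple in-memory queue standing in for the moderation team's backlog.
review_queue: list[ToxicityAlert] = []

def route_alert(alert: ToxicityAlert, severity_threshold: float = 0.6) -> None:
    """File sufficiently severe alerts for human follow-up."""
    if alert.severity >= severity_threshold:
        review_queue.append(alert)
```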

Rec Room has seen a 70% reduction in toxic voice chat incidents in the 18 months since rolling out ToxMod, alongside experiments with moderation policies and procedures and changes to the product itself, Hussain said. Consistency has been key, she added.

“We had to be consistent. We have a very clear code of conduct on what we expect from our players, then they needed to see that consistency in terms of how we were moderating and detecting,” she said. “ToxMod is on in all public rooms. It runs in real time. Then players were seeing that if they were to violate the code of conduct, we were detecting those instances of toxic speech.”

With the data behind those instances, the team was able to dig into what was driving that behavior and who was behind the toxicity they were seeing. They found that less than 10% of the player base was responsible for the majority of the violations coming through, and understanding who was responsible allowed them to take a more nuanced approach to the solution.
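To make that kind of finding concrete, here is a minimal sketch, assuming you already have a list of violation events keyed by player ID, of how the share of players behind the majority of violations could be computed. The function name and data shape are illustrative, not Rec Room's actual tooling.

```python
from collections import Counter

def violator_concentration(violation_events: list[str], total_players: int) -> float:
    """Return the smallest share of the player base that accounts for
    more than half of all recorded violations."""
    counts = Counter(violation_events)          # violations per player ID
    half = sum(counts.values()) / 2
    running, offenders = 0, 0
    for _, n in counts.most_common():           # heaviest offenders first
        running += n
        offenders += 1
        if running > half:
            break
    return offenders / total_players            # e.g. < 0.10 in Rec Room's case
```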

“Interventions and responses start from the principle of wanting to change player behavior,” she said. “If we just react, if we just ban, if we just stop it in the moment, we’re not changing anything. We’re not reducing toxicity in the long run. We’re using this as a reactive tool rather than a proactive tool.”

Experiments and tests let them home in on the most effective response pattern: responding quickly, then stacking and slowly escalating interventions, starting with a light-touch, friendly warning, moving to a short time-out or mute, then to longer mutes and eventually bans. False positives are reduced dramatically, because each alert helps establish a clear behavior pattern before the nuclear option is chosen.
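A minimal sketch of that stacked escalation ladder might look like the following; the specific steps and the strike-count logic are assumptions for illustration, not Rec Room's actual policy.

```python
# Assumed escalation steps, ordered from lightest to most severe.
ESCALATION_LADDER = [
    "friendly_warning",   # first confirmed violation
    "short_mute",         # a brief time-out
    "long_mute",          # repeated violations
    "ban",                # last resort once a clear pattern exists
]

def next_action(prior_strikes: int) -> str:
    """Pick the next intervention based on how many confirmed strikes a player has."""
    index = min(prior_strikes, len(ESCALATION_LADDER) - 1)
    return ESCALATION_LADDER[index]
```

In practice, each confirmed alert would also be logged against the player, so the pattern of behavior is established before the final step is ever reached.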

Finding the right approach for your platform

Of course, every game, every platform and every community requires a different kind of moderation, not just because of the demographic of the audience, but because of the game itself — social experiences and multiplayer competitive games have very different voice engagement profiles, for instance.

“It’s important to understand that engagement profile when making decisions based on the escalations that you’re getting from trust and safety tools,” Frumkin said. “The studios, the trust and safety teams, the community managers across these various platforms, they’re the experts in who their players are, how they interact with each other, what kind of mitigations are appropriate for the audience itself, what the policies are and should be, and how they evolve. At Modulate we’re the experts in online interactions that are negative or positive. We deeply understand how people talk to each other and what harms look like in voice chat.”

And when implementing a strategy, don't jump right to solutions, Hussain said. Instead, spend more time defining the what, who, how and why behind the problem, because you'll design better solutions when you truly understand what's behind instances of toxicity, code of conduct violations or whatever harm is manifesting. Her second piece of advice is to talk to people outside of trust and safety.

“The best conversations I have across Rec Room are with the designers — I’m not saying, hey, you built this thing that’s probably going to cause harm,” she said. “It’s, hey, you’re building something, and I’d love to talk to you about how we can make that more fun. How we design for positive social interactions in this space. They have great ideas. They’re good at their jobs. They have a wonderful understanding of the affordances of a product and how to drive that, use that in designing for trust and safety solutions.”

