The Evolutionary Step in Combatting In-Game Toxicity
The gaming industry is increasingly leaning on the transformative capabilities of artificial intelligence (AI) to enhance user experience. A pioneer in this space is the Call of Duty franchise, which is setting new benchmarks in its fight against in-game toxicity and disruptive behaviour. Scheduled to launch on 10 November, Call of Duty®: Modern Warfare® III will feature an AI-powered voice chat moderation system on a global scale, thanks to a partnership with Modulate.
The Role of AI: Introducing ToxMod
This next-level voice chat moderation utilises ToxMod, a cutting-edge, AI-driven technology developed by Modulate. ToxMod can detect and moderate instances of toxic language in real time, including hate speech, harassment, and discrimination. This marks a groundbreaking advancement in AI's capability to intervene as abuse occurs, supplementing the moderation mechanisms already put in place by the Call of Duty anti-toxicity team, which include text-based filtering across 14 languages and a comprehensive in-game reporting system.
Beta Testing and Global Rollout
As a prelude to the global launch, a beta version of this voice chat moderation technology will begin trials in North America on 30 August 2023, integrated into existing Call of Duty titles such as Modern Warfare II and Warzone™. A worldwide release (excluding Asia) is then slated to coincide with the launch of Call of Duty: Modern Warfare III. The service will debut in English, with additional languages to follow.
AI’s Impact Beyond Gameplay
AI’s burgeoning versatility extends beyond enhancing gameplay or graphics; it is increasingly pivotal in creating a safer and more inclusive environment. Since the franchise’s initial anti-toxicity efforts, more than a million accounts have been restricted for violating Call of Duty’s Code of Conduct.
These AI-enabled technologies undergo regular updates, facilitating real-time blocking of harmful language. Encouragingly, data indicates that around 20% of players have ceased toxic behaviour after receiving just a single warning.
The Fight Against False Reporting
In addition to combatting toxicity, the anti-toxicity team has instituted measures against false reporting by adding a Malicious Reporting clause to the Call of Duty Security and Enforcement Policy. This highlights the multi-faceted applications of AI, which extend beyond automation to enhancing human experiences and shaping behavioural feedback loops.
Setting a Precedent for AI in Virtual Interactions
As AI discovers new use cases almost daily—from enterprise solutions to customer service—the Call of Duty initiative provides another compelling example of how AI can revolutionise how we interact in virtual spaces.
We commend the Call of Duty community and its development teams for their steadfast commitment to eradicating toxicity and creating a more equitable gaming landscape. With AI taking the lead, the franchise is propelling the gaming industry into a new era and setting a precedent for other online communities.
How We Can Help
Let us be your trusted partner in navigating the complexities of the digital landscape and unlocking the full potential of technology for your organisation.