This is the detail page for the paper
Explainable Multi-view Game Cheating Detection.
The paper is under review, so we do not include many details here for the moment.
Jianrong Tao (NetEase Fuxi AI Lab); Yu Xiong (NetEase Fuxi AI Lab); Shiwei Zhao (NetEase Fuxi AI Lab); Yuhong Xu (NetEase Fuxi AI Lab); Jianshi Lin (NetEase Fuxi AI Lab); Runze Wu (NetEase Fuxi AI Lab); Changjie Fan (NetEase Fuxi AI Lab)
Online gaming is one of the most successful applications with a large number of players interacting in a persistent virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which causes great harm to game health and player enjoyment. The game industry has devoted substantial effort to cheating detection with multi-view data sources and has achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework built with explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local, and global explanations, which contribute to evidence generation, reason generation, model debugging, and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other related tasks in online games, such as explainable recommendation and explainable churn prediction.
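The abstract's core idea, pairing each view's cheating classifier with a per-view explainer and fusing the results, can be illustrated with a minimal sketch. This is purely hypothetical code: the class names, the logistic scorer, the weight-times-feature attribution, and the average-based fusion are illustrative assumptions, not the paper's actual method.

```python
import math

class ViewClassifier:
    """Toy per-view cheating scorer: logistic function over hand-set weights.
    (Illustrative stand-in for whatever model the paper uses per view.)"""
    def __init__(self, weights, bias=0.0):
        self.weights = weights
        self.bias = bias

    def score(self, x):
        z = self.bias + sum(w * v for w, v in zip(self.weights, x))
        return 1.0 / (1.0 + math.exp(-z))  # cheating probability in (0, 1)

class ViewExplainer:
    """Toy local explainer: attributes the logit to each feature as w_i * x_i.
    (A stand-in for a real attribution method such as SHAP-style values.)"""
    def explain(self, clf, x):
        return [w * v for w, v in zip(clf.weights, x)]

def detect_and_explain(views, classifiers, explainer):
    """Score every view, explain every view, then fuse the view scores."""
    scores, explanations = {}, {}
    for name, x in views.items():
        clf = classifiers[name]
        scores[name] = clf.score(x)
        explanations[name] = explainer.explain(clf, x)
    fused = sum(scores.values()) / len(scores)  # simple average fusion (assumption)
    return fused, scores, explanations
```

The per-view explanations here serve the "individual/local" level described in the abstract; aggregating attributions across many players would correspond to the global level.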