Zhipeng, Feng and Gani, Hamdan (2022) Interpretable Models for the Potentially Harmful Content in Video Games Based on Game Rating Predictions. Applied Artificial Intelligence, 36 (1). ISSN 0883-9514
Abstract
Studies have reported that playing video games with harmful content can have adverse effects on players; understanding such content can therefore help reduce these effects. This study is the first to examine the potential of interpretable machine learning (ML) models to explain the content in video games that may cause adverse effects on players, based on game rating predictions. First, the study presents a performance analysis of supervised ML models for game rating prediction. Second, using an interpretability analysis, it explains the potentially harmful content. The results show that the ensemble Random Forest model predicted game ratings robustly. The interpretable ML model then successfully exposed and explained several categories of harmful content, including Blood, Fantasy Violence, Strong Language, and Blood and Gore. This revealed that depictions of blood, depictions of the mutilation of body parts, violent actions by human or non-human characters, and frequent use of profanity may be associated with adverse effects on players. The findings demonstrate the strength of interpretable ML models in explaining harmful content. The knowledge gained can be used to develop effective regulations for controlling the identified video game content and its potential adverse effects.
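The pipeline described in the abstract, a Random Forest rating classifier followed by an interpretability step, can be sketched as below. This is a minimal illustrative sketch only, not the authors' code: the file name `esrb_games.csv`, the column names, and the use of built-in feature importances as the interpretability method are all assumptions.

```python
# Hypothetical sketch: predict ESRB-style game ratings from content
# descriptors with a Random Forest, then rank descriptors by importance.
# Dataset layout and column names are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed layout: one binary column per content descriptor
# (e.g. "Blood", "Fantasy Violence"), plus a rating label column.
df = pd.read_csv("esrb_games.csv")  # hypothetical file
X = df.drop(columns=["title", "rating"])
y = df["rating"]  # e.g. "E", "T", "M"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Step 1: supervised rating prediction.
model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Step 2: interpretability — which content descriptors drive the
# predicted rating? High-importance descriptors flag harmful content.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

Model-agnostic explainers such as SHAP or LIME could replace the impurity-based importances shown here; the paper's exact interpretability technique is not specified in the abstract.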
| Item Type: | Article |
|---|---|
| Subjects: | Research Scholar Guardian > Computer Science |
| Depositing User: | Unnamed user with email support@scholarguardian.com |
| Date Deposited: | 19 Jun 2023 10:53 |
| Last Modified: | 12 Jan 2024 05:04 |
| URI: | http://science.sdpublishers.org/id/eprint/1109 |