Blog
16.12.2022

Hate speech in video games: A social media problem

The presence of hate speech on video game platforms has slid under the radar as pressure increases on social media platforms. This blog post is part 1 in a series about the digital governance problems facing the video game industry.

The detrimental presence of harmful content in online spaces has led to an ongoing public discussion about what exactly should be done to deal with it. Harmful online content (HOC) encompasses hate speech, misinformation, cyberbullying and more, and has been steadily on the rise across the internet. Regulations like Germany’s Network Enforcement Act (NetzDG) and the EU’s Digital Services Act address hate speech on social media platforms, but the fastest-growing entertainment sector has seemingly slid under the radar: video games.

Video games have become one of the world’s most popular forms of entertainment, with more than 3 billion active players in 2022. According to the Interactive Software Federation of Europe (ISFE), the European games industry has grown by 55% over the last 5 years, and 51% of all Europeans play video games regularly. The question of what exactly constitutes a video game has caused ontological problems for regulation. The distinction between a social media platform and a video game platform is not straightforward, as both are places of public speech and dialogue. Online games hosted on Microsoft’s Xbox or Sony’s PlayStation allow players to send messages and talk with each other live. Twitch hosts live gaming streams, while Discord is an instant messaging platform that grew out of the gaming community.

The distinction between social media platforms and video game platforms begins to appear arbitrary when considering how they are used. Gaming platforms can be accessed from computers, gaming consoles and mobile devices and connect people from across the world, allowing them to form communities and speak with each other instantly over microphone or text. At times, they have been avenues for free speech under authoritarian regimes. In 2020, digital protests in solidarity with Hong Kong were staged against the Chinese government in Nintendo’s Animal Crossing: New Horizons, a global bestseller. In Minecraft, another global blockbuster owned by Microsoft, a team of software engineers constructed the Uncensored Library on an open server. It houses material from murdered and censored journalists around the world and is available to anyone with access to the game.

Video game companies have not been held to the same liability rules for harmful online content as social media companies, and the result is online environments rife with hateful content. Vitriolic speech and toxic online environments are so ubiquitous that any gamer could tell you about them. According to one study, over 90% of gamers have witnessed or experienced abuse or bullying while online, and 1 in 3 have experienced hate speech. These environments have a negative impact not only on the mental health of gamers but also on the product itself: nearly 70% of gamers have quit a game, or considered doing so, because of HOC.

Regarding additional stakeholders in the industry, it is useful to distinguish between platforms and online content providers: organisations that produce content (such as video game developers or streamers) and use intermediary platforms (such as Xbox or Twitch) to interact with their customers. Some communities have formed in solidarity against the exclusion felt in online spaces. Melanin Gamers is a non-profit that highlights issues with hate speech and reaches out to developers to address the problem. The group offers tips and safe online spaces while also promoting the work of minority creators. Intel has created Bleep, an AI technology that lets users choose their own settings and filter out hate speech by category, such as racism and xenophobia, misogyny, white nationalism or LGBTQ+ hate.
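To make this kind of user-configurable filtering concrete, here is a minimal sketch of per-category filter settings in the spirit of Bleep. The four categories come from the description above, but the class, method names and filter levels are illustrative assumptions, not Intel’s actual API.

```python
from dataclasses import dataclass, field

LEVELS = ("none", "some", "most", "all")  # how much of each category to filter


@dataclass
class CategoryFilterSettings:
    """Hypothetical per-category filter settings; not Intel's actual API."""

    # Category names follow the post above; everything defaults to unfiltered.
    levels: dict = field(default_factory=lambda: {
        "racism_xenophobia": "none",
        "misogyny": "none",
        "white_nationalism": "none",
        "lgbtq_hate": "none",
    })

    def set_level(self, category: str, level: str) -> None:
        if category not in self.levels:
            raise KeyError(f"unknown category: {category!r}")
        if level not in LEVELS:
            raise ValueError(f"level must be one of {LEVELS}")
        self.levels[category] = level


# A user who tolerates trash talk but wants white-nationalist content
# fully filtered would configure:
settings = CategoryFilterSettings()
settings.set_level("white_nationalism", "all")
```

The notable design choice here, and the one Bleep is built around, is that the user, not the platform, decides where the line sits for each category.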

Platforms have the most power over what kind of language is tolerated. One reason platforms hesitate to implement measures against HOC is the fear of being associated with censorship. The consequences of this foot-dragging can be felt: many of the largest companies are associated with misogyny (an image perpetuated by high-profile workplace sexual harassment lawsuits), and players are leaving games because of toxic environments. The gaming community has known for years that there is an issue, and gaming companies have taken some steps to address the problem of HOC. Let’s explore some of the policies.

Content Moderation

In contrast to many social media platforms, gaming platforms have employed minimal moderation of online interaction. Gamers who want to avoid harmful content while online can turn off chat functions or mute, block and report other individuals. These options are a form of co-operative response between users and platform but may be criticised as reactive rather than proactive. Reported users may be suspended for a period of time or banned outright, with the reporter generally left unaware of the outcome. For many platforms, the only accepted evidence that hate speech has been used is a chat log or username; speech in voice chat is generally not admissible.

Most platforms employ automated moderation that filters out the most offensive language but is easily skirted by changes to punctuation or spelling. There is a dearth of research on the extent of moderation in video games. Microsoft released its first ever transparency report on moderation only on 14 November 2022, the latest of the big gaming conglomerates to do so. Nintendo and Sony remain notable exceptions, having released no information about their moderation practices. Some companies have moved to allow voice recordings of play sessions for moderation purposes, which has raised privacy concerns. Given the continuing lack of transparency about moderation standards, those concerns may be well founded.
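To see why this kind of evasion works, consider the minimal sketch below. It is not any platform’s actual system; the blocklist and substitution table are toy assumptions. An exact-match filter misses trivially obfuscated spellings, while even a simple normalisation pass catches some of them.

```python
import re

# Toy blocklist; real systems use far larger lists plus ML classifiers.
BLOCKLIST = {"loser", "idiot"}

# Common character substitutions used to dodge filters ("leet-speak").
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})


def naive_filter(message: str) -> bool:
    """Exact word match only -- skirted by punctuation or spelling changes."""
    return any(word in BLOCKLIST for word in message.lower().split())


def normalised_filter(message: str) -> bool:
    """Normalise before matching: map substitutions, drop non-letters."""
    text = message.lower().translate(SUBSTITUTIONS)
    text = re.sub(r"[^a-z\s]", "", text)       # strips "l.o.s.e.r" dots
    collapsed = re.sub(r"\s+", " ", text).strip()
    # Check individual words and the de-spaced string ("l o s e r").
    words = set(collapsed.split()) | {collapsed.replace(" ", "")}
    return any(word in BLOCKLIST for word in words)


print(naive_filter("you l0ser"))        # False -- evasion succeeds
print(normalised_filter("you l0ser"))   # True  -- substitution mapped back
print(normalised_filter("l.o.s.e.r"))   # True  -- punctuation stripped
```

Real evasion is far more creative (homoglyphs, spacing tricks, words embedded in usernames), which is part of why keyword lists alone are insufficient and are increasingly paired with machine-learning classifiers and human review.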

Content moderation can function like a hammer that treats every issue as a nail, but companies have more options for addressing HOC. Most platforms have a code of conduct that is explicit about what behaviour is acceptable or unacceptable. Users need sufficient capacity, knowledge and freedom to act in line with the code, which could be supported with examples, scenarios and detailed information about reporting. Furthermore, companies need to create the conditions for users to comply with their responsibilities, which includes enforcing real consequences and publicising infringements to educate users and demonstrate that reports lead to action. Users often don’t use reporting functions because they don’t know what effect a report has, if any. Counterspeech, speech that calls out harmful content as inappropriate or seeks to create empathetic responses, is a further tool that can be developed to create and maintain civil online spaces. Increasing accountability by enforcing community standards and meting out appropriate sanctions would be a move in the right direction on the part of platforms.

Potential Policy Responses

Video game platforms have not experienced the same level of scrutiny from regulators as traditional social media platforms, and this lack of political pressure has resulted in delayed action from the companies. The lack of moderation in online gaming spaces has resulted not only in pervasive hate speech and toxic environments but has also allowed extremist groups to use the platforms for recruitment; the following post in this blog series will explore that topic in more depth. Attempts by governments to enforce stricter content moderation in online spaces, like the German NetzDG, have been widely criticised as infringing on freedom of expression, and it is still the companies hosting spaces of public speech that have the greatest powers of intervention. A co-regulatory framework that includes mandatory safety and quality requirements is a suggestion borrowed from proposed social media regulation, but it needs more nuance regarding the structure of gaming communities.


Further Reading:

Einwiller, S. & Kim, S. (2020). How Online Content Providers Moderate User-Generated Content to Prevent Harmful Online Communication: An Analysis of Policies and Their Implementation. Policy & Internet, 12. https://doi.org/10.1002/poi3.239

Maher, B. (2016). Can a video game company tame toxic behaviour? Nature, 531, 568–571. https://doi.org/10.1038/531568a

Breuer, J. (2017). Hate Speech in Online Games. In Kaspar, K., Gräßer, L. & Riffi, A. (Eds.), Online Hate Speech: Perspektiven auf eine neue Form des Hasses (pp. 107–112). kopaed. https://www.researchgate.net/publication/316741298


Teaser photo by Luis Villasmil on Unsplash.