Blog
21.04.2021

Products of our own psychology

Social media’s new strategies to combat misinformation don’t address the underlying psychology behind why fake news spreads in the first place.

Misinformation online has risen massively throughout the COVID-19 pandemic. Amid the anxiety and confusion of a global crisis, it turns out, well-meaning individuals are just as likely to share fake news as political saboteurs. However well-intentioned, medical misinformation poses the very real threat of “imminent physical harm.” Over the last year, social media platforms have raced to develop response strategies to curb this phenomenon, more urgently than they have responded to political misinformation campaigns.

Anyone who has used Facebook, Instagram, Twitter, or YouTube since the pandemic began has certainly noticed these changes aimed at combatting misinformation. On Twitter, for example, a warning label may appear under a tweet, reading “some or all content shared in this Tweet conflicts with guidance from public health experts regarding COVID-19.” An informational icon then directs users to the COVID-19 website of their national government. When retweeting, users are prompted to add their own comment (a “quote tweet”) and so cannot amplify the article to their followers without adding to it. These three actions (labelling, redirecting, and slowing the spread) are currently the go-to approach of all the largest platforms, which claim that they prompt users to think before sharing misinformation.

The exact effectiveness of this approach is hard to evaluate due to a frustrating lack of transparency. However, research in other fields has shown that humans are psychologically prone to trust misinformation more than warning labels. Our judgement of truth is often clouded by confirmation bias, the long-documented tendency to seek out and trust information that confirms one’s pre-existing worldview. Users may be so convinced of the “truth” of their misinformed content that they attack the platform, its fact-checkers, and any personal connections who call out the falsehood.

Another factor is that, in the absence of definitive proof, our brains use a confidence heuristic to determine which information to trust. In other words, our brains interpret the most confident, definitively phrased statements as the most truthful, regardless of their content. This heuristic is especially relevant during a crisis, when definitive answers from legitimate sources are scarce and heightened emotions intensify the psychological need for confident information. Anxiety, stress, and direct personal involvement in a crisis scenario all increase the tendency to place trust in incomplete or nonfactual information.

More worrisome to content regulators is the fact that knowing information is false is not enough to stop users from believing it. Studies of human cognition show that our memories struggle to retain retractions or corrections of false information. For example, jurors will often unintentionally base decisions on evidence they were told to disregard. Although they remember hearing the retraction, their reliance upon the disregarded information stays the same, a phenomenon known as the continued influence effect.

General warnings to be distrustful of misinformation have been shown to reduce the brain’s reliance upon it, as has pairing a retraction with an alternative, true fact. However, these measures work best when audiences are engaged, rational thinkers, and studies of social media users unfortunately show the opposite. Platform architecture even encourages irrationality: content-recommendation algorithms hold users’ attention with whatever they find interesting, and emotionally charged, false information is unfortunately more engaging than the truth.

Strategies of labelling and redirecting simply do not align with the characteristics of the misinformation they are designed to combat; the responses themselves need to be redesigned. One solution would be to swiftly remove false content from the platforms altogether. Using a list of common misinformation and targeted fact-checks, a post could be removed or replaced with a text bubble explaining the truth and where to learn more. Facebook currently maintains a partnership with the WHO to continually update such a list of COVID-19 misinformation and remove offending posts in a timely manner, and other platforms could implement a similar approach. Should removing misinformation from the platforms be deemed a step too far towards censorship, a more moderate solution would be to invest in co-created fact-checking mechanisms.

Currently, only Facebook allows its users to flag misleading or false information for review. A pilot project on Twitter in the US, called Birdwatch and launched in late January 2021, adds a crowd-sourced mechanism for users to identify misinformation and contribute notes that provide context. This type of community-led fact-checking addresses both confirmation bias and the confidence heuristic, and generally engages users in a more rational manner.

At the very least, existing warning labels should be adapted to catch users’ attention and mitigate their biases. “You may believe this because your close friend shared it. However, independent fact-checkers have proved it untrue” is an example of a warning that directly addresses the user’s confirmation bias while conveying enough confidence to engage the confidence heuristic.

Going forward, we must recognise that individuals engaging with social media are driven not by logic, but by psychological factors which are heightened in a digital setting. Until platforms directly address the psychological reasons why misinformed statements spread on their platforms, efforts to fight them will fall short.
 

Photo by Markus Winkler, Unsplash