Morgue employee cremated by mistake while taking a nap

My new paper on cognitive attraction and online misinformation.

Online misinformation, fake news, false news, hoaxes, you name it, have been blamed for almost everything bad that has happened in recent years, from the success of Trump to Brexit, from the election of Bolsonaro to the (relative) spread of the anti-vax movement. The scientific consensus on the prevalence and the effects of online misinformation, however, is, at best, mixed.

Some highly publicised studies have supported the scaremongers, showing, for example, that falsehood spreads farther, faster, deeper, and more broadly than the truth, or that ‘low-quality’ information is as likely to go ‘viral’ as ‘high-quality’ information because of our ‘limited attention’ (notice, however, that the former study does not compare ‘falsehood’ with ‘truth’, but debunked rumours with confirmed rumours, so that real news that nobody debunked, i.e. the majority of it, is not even considered, and that the latter study has recently been retracted).

Many other works, however, have painted a more nuanced picture, showing that the production and consumption of misinformation are concentrated in a limited portion of social media users, that individuals who already hold extreme political views are more likely to be exposed to political misinformation, and that, in general, misinformation is unlikely to change people’s minds. Other studies, more simply, have shown that the actual amount of online misinformation is not that big.

Yet the narrative that online misinformation is dangerous is itself very persuasive. It is easier to think that the people who voted for Trump (you can replace this with whatever you do not like) were manipulated by evil figures, and that, had they had access to the correct information, they would have done the right thing. This narrative is also quite dangerous, I think, because it creates an easy scapegoat (you can make up any newspaper article with this schema: “How [name of social media] helped [thing that you do not like]”) that often distracts from more relevant social, economic, and cultural problems.

This narrative also does not fit particularly well with an evolutionary account of human behaviour. We are a species that relies heavily on communication, social interactions, and learning from others. While the details vary widely, researchers studying cultural evolution and cognition agree that, for the characteristics allowing communication, social interactions, and learning from others to have evolved in the first place, we cannot be too gullible. Of course, conditions today could be so different that we are more easily tricked, but this is an open question, and, given a ‘presumption of good design’, we should be at least sceptical of answers that are too pessimistic.

Despite this somewhat reassuring perspective, however, online misinformation does exist, and one can ask what its possible advantages could be. In a recent paper, I proposed that one of them could simply be that misinformation can be manufactured by building on features that make it attractive in an almost unconstrained way, whereas ‘true’ news cannot, simply because it needs to correspond to reality. Misinformation can be designed, whether consciously or not, to spread more than real information does. Cognitive anthropologists have identified features that make some content, everything else being equal, more appealing, memorable, or attention-grabbing than other content.

Here is an example: many misinformation articles (I examine a small sample of ‘fake news’ from 2017: all the material, including the links and the texts used, is here) contain threat-related information, even when the threats are not directly relevant to us (as in the article that gives this blog post its title: not many of us usually take naps in a morgue. Still, that false news item was shared/liked/commented on more than one million times on Facebook). Other features (negative content, disgust, etc.) and many more details are explained in the paper.

Misinformation is not low-quality information that succeeds in spreading because of the shortcomings of online communication. Quite the opposite: misinformation, or at least some of it, is high-quality information that spreads because online communication is efficient. The difference is that ‘quality’ here is not about truthfulness, but about how well content fits our cognitive predispositions. Online “fake news” is, from this perspective, not so much a political and propagandistic phenomenon as something closer to the diffusion of memes, urban legends, and the like (of course, political misinformation exists, though, interestingly, it often takes advantage of similar cognitive preferences to spread; think of the notorious Pizzagate, with its mix of threat, sex, and disgust).

The paper is part of a Cultural Evolution collection, edited by Jamie Tehrani, and I invite you to have a look at the other articles too!

Alberto Acerbi

Cultural Evolution / Cognitive Anthropology / Individual-based modelling / Computational Social Science / Digital Media
