Why too much focus on online misinformation is a problem

Together with Sacha Altay and Manon Berriche, I just published a new article in Social Media + Society: Misinformation on Misinformation: Conceptual and Methodological Challenges. The article brings together, in a hopefully coherent framework, a number of criticisms of current research on online misinformation that we discussed and developed more informally over the past several years. If you know our research, it may not come as a surprise that we tend to be wary of a scaremongering approach to online misinformation, proposing instead that there may be nothing particularly surprising in the (limited) current spread of misinformation and that, if anything, misinformation is not much of a problem by itself, but a symptom of deeper, and more important, societal problems.

Now, these ideas are quite controversial today, but they are not what I want to explore in this short post: you can read the article, which is exactly about that and is written for a - relatively - large audience. I want to focus on another aspect. I had the impression that, for some people at least, by minimising the danger of misinformation we are siding with the “bad guys”, these bad guys being either big social media companies or vaguely right-wingish figures who claim that the fight against misinformation is the way the “establishment” suppresses dissenting voices (here is an Italian example of this).

I am unsuccessful enough as a public persona not to be too worried about being associated with the bad guys, but I think the association is quite misguided. First, downplaying the importance of online misinformation is hardly playing into big social media companies’ hands. On the contrary, the continuous attention to online misinformation provides both continuous attention to the activity of social media companies and support for the (I think wrong) idea that what happens on social media has a decisive influence on seemingly any event. If you are Elon Musk, you would certainly prefer people to think that Twitter can change election results, or even the course of a pandemic, rather than to think that it does not matter that much.

One of my all-time favourites here is The Economist’s review of “The Great Hack”, a documentary about Cambridge Analytica: “So credulous is ‘The Great Hack’ that if Cambridge Analytica had not shut down, its bosses would be using the movie as a testimonial”.

Or, let me quote from this excellent article:

Ironically, to the extent that this work creates undue alarm about disinformation, it supports Facebook’s sales pitch. What could be more appealing to an advertiser, after all, than a machine that can persuade anyone of anything? This understanding benefits Facebook, which spreads more bad information, which creates more alarm. Legacy outlets with usefully prestigious brands are taken on board as trusted partners, to determine when the levels of contamination in the information ecosystem (from which they have magically detached themselves) get too high. For the old media institutions, it’s a bid for relevance, a form of self-preservation. For the tech platforms, it’s a superficial strategy to avoid deeper questions. A trusted disinformation field is, in this sense, a very useful thing for Mark Zuckerberg.

This quote also hints at another important point: how the focus on misinformation may be a “superficial strategy to avoid deeper questions”. We do not have good support for the idea that misinformation causes bad outcomes outside of social media. Really, we do not (you can read much more about this in the paper). Assuming that it does, when what we want to assess are exactly the effects of misinformation, is putting the cart before the horse. In a world with unlimited resources this would not be a problem. Maybe (maybe) zero misinformation is better than some misinformation. However, we do not live in a world with unlimited resources. When people claim that, say, to increase vaccinations we need to fight misinformation, they are spending time - and, mostly, a lot of resources - on the assumption that misinformation causes people not to vaccinate themselves. But if that is not the case, we are wasting that time and those resources. Worse, we are not considering the importance of deeper issues, focusing our efforts instead on dubious technological fixes. Perhaps, in the case of vaccination, we should work on the reasons behind the lack of trust in science, institutions, reliable news sources, and so on. And, unfortunately, these reasons may have even deeper social, economic, and cultural causes, but, hey, “fake news”!

Alberto Acerbi

Cultural Evolution / Cognitive Anthropology / Individual-based modelling / Computational Social Science / Digital Media
