
Won't somebody please think of the children!

With unsurprising regularity, new “scandals” or “shocking” revelations about social media appear in the press and, of course, on social media themselves. Lately, Facebook seems to be the main target. (Incidentally, why Facebook? My hunch is that academics and journalists, who are the main producers and consumers of this information, tend not to be Facebook users, so it feels less dissonant to think it is the real baddy. If only the others would understand it! …Anyway, this is not very important for the rest of the post.)

First, around one month ago, the Washington Post reported that “Misinformation on Facebook got six times more clicks than factual news during the 2020 election”. The story was, of course, a hit. However, the “six times” figure seems at odds with much recent literature showing that, when considered alongside the total amount of information present, the misinformation circulating on social media is actually modest, something like 5% of the total (often less). For this reason, it would have been interesting to know how the “clicks” were actually calculated, or what exactly was counted as “misinformation”. Unfortunately, the study the Washington Post referred to was not out at the time and, one month later, it is still not available (not even as a preprint, which is now relatively standard practice).

Later on, in mid-September, another scandal hit Facebook. This time it was documents leaked to the Wall Street Journal, under the headline “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show”. You probably heard about it, and about the outraged editorials and comments all around social media: you have here a bad social media company (i) hiding information and (ii) harming (iii) young girls. You can hardly get anything more attractive. However, it looks like the report was mostly based on a small survey of a few dozen teenagers who were asked whether they thought Instagram was bad for them, and on “focus groups” with even fewer participants, where the reasons they thought it was bad were discussed in more detail. Now, we are far from having in-depth knowledge of the effects of Instagram and social media on well-being and mental health, but this survey does not add much to what we already knew, so it is difficult to justify the reactions. (A good analysis of the whole affair can be found here.)

Move on to today, and we have the whistleblower Frances Haugen, ex Facebook employee, testifying in front of the US Congress. Things get heated, the outrage continues, with public figures pushing the sober hashtag #FacebookKills on Twitter. I admit I did not read the full transcripts (I am already using precious time writing this blog post instead of working or, even better, having a walk, as Bristol is sunny in October!), and I am relying on reports from newspapers, in particular the Guardian. From what I understand, there is not much that is really new in what the whistleblower reports. She mostly repeats the accusation that Facebook knew it was harming children and teenagers (but see the point above), adding various dubious twists. One is the analogy between social media use and addiction (I quote: “It’s just like cigarettes…teenagers don’t have good self-regulation.”), which is considered flawed by practically all researchers in the field. Another is the idea, never really established, that the algorithm pushes users towards more extreme content (the algorithm “led children from very innocuous topics like healthy recipes…all the way to anorexia-promoting content over a very short period of time”). We get gory details (“women would be walking around with brittle bones in 60 years’ time because of the anorexia-related content they found on Facebook platforms”, in the words of the Guardian), and not very surprising observations, such as that Facebook puts profits before people (huge if true for a private company). In sum, nothing really new and, mostly, from what I can tell, nothing that can actually be used to better understand what is going on, apart from the usual accusations.

Of course, I am not in the “everything is good as it is” team (I do not think anybody is), and there are several things that Facebook and other social media should do to make things better. A short and idiosyncratic list could include:

  • Making their data available and easy to access for researchers and for the general public. Twitter seems slightly better than Facebook in this respect. (Adding to what was said above, Facebook was recently caught up in other “scandals” about errors in shared data and about blocking scientists’ access.)

  • No social media platform, as far as I am aware, is open about the algorithm it uses to present information to users. How is our Facebook news feed decided? What about Twitter? I find this lack of transparency particularly worrying, also because this information would finally allow us to give better answers to central questions about the kind of information social media promote.

  • Could we make social media better? We could, and we certainly should experiment with alternatives. If it is true (see the point above) that, say, social media algorithms push content that receives more interactions (I called it optimisation for shallow engagement) and that, say, negative or strongly emotional content receives more interactions, this could create a vicious circle where negative (say) information receives more interactions, is proposed more often, and ends up dominating social media content (I sketch a toy version of this dynamic right after this list). But, of course, things are complicated. Many other factors could be relevant (again, we would need to know more about the algorithm!) and, importantly, preferences for negative (say) information could be linked to more general cognitive preferences and be successful independently of social media, so one would need to weigh algorithmic against psychological forces. This is just an example, but many others could be discussed along similar lines.

  • This goes clearly beyond my expertise, but putting WhatsApp, Instagram, Facebook (and what else?) in the same hands may not be a good idea. I leave this to people who know more, but it feels like a more decentralised system would work better for us.
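
To make the point about the vicious circle a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of the dynamic described above. Everything in it is made up for the example: the parameters ALGO_WEIGHT, PSYCH_BIAS, and so on are hypothetical, and the “feed” is just items ranked by accumulated engagement plus some random exploration, not a model of any real platform. The point is only to show how an engagement-ranked feed combined with a mild psychological preference for negative content can push exposure well above the base rate, and how lowering either the algorithmic weight or the psychological bias weakens the effect.

```python
import random

random.seed(42)

N_ITEMS = 200      # content items in the pool
N_USERS = 100      # users who see the feed each round
N_ROUNDS = 50      # feed update cycles
FEED_SIZE = 10     # items shown per round

ALGO_WEIGHT = 0.8  # hypothetical: how strongly the feed favours past engagement
BASE_ENGAGE = 0.1  # baseline probability of engaging with any shown item
PSYCH_BIAS = 0.3   # hypothetical: extra engagement probability for negative items

# Half the items are "negative" (emotionally charged), half neutral.
items = [{"negative": i < N_ITEMS // 2, "engagements": 0} for i in range(N_ITEMS)]

def build_feed(items):
    """Rank items by a mix of accumulated engagement and random exploration."""
    max_eng = max(item["engagements"] for item in items) or 1
    def score(item):
        popularity = item["engagements"] / max_eng
        return ALGO_WEIGHT * popularity + (1 - ALGO_WEIGHT) * random.random()
    return sorted(items, key=score, reverse=True)[:FEED_SIZE]

for _ in range(N_ROUNDS):
    feed = build_feed(items)
    for _ in range(N_USERS):
        for item in feed:
            p = BASE_ENGAGE + (PSYCH_BIAS if item["negative"] else 0.0)
            if random.random() < p:
                item["engagements"] += 1

final_feed = build_feed(items)
share = sum(item["negative"] for item in final_feed) / FEED_SIZE
print(f"Negative items in the final feed: {share:.0%} (base rate: 50%)")
```

Running it a few times with different values of ALGO_WEIGHT and PSYCH_BIAS gives a feel for how the two forces interact; answering the question for real platforms would, of course, require real data and real algorithm details.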

And on and on. All of these are, I think, very reasonable criticisms of Facebook and other social media, and I would be (and am) very supportive of calls to action in this direction. Now, it could be that the periodic outrages are useful but, call me crazy, I am not sure about that. If one claims that #FacebookKills, it is very easy for Facebook to show that it does not and, well, they are right. The risk is that, by muddling sensible requests with sensationalist claims, we are playing Facebook’s game.

PS: I may be too old, so here is the title reference.

Alberto Acerbi

Cultural Evolution / Cognitive Anthropology / Individual-based modelling / Computational Social Science / Digital Media
