Interesting regularities in human behaviour: older authors write happier books

[Second post of the series “Things that I probably will not develop in a proper paper, but I find interesting enough to write here”. The first was on the twentieth-century decrease of the turnover rate in popular culture]

In the last couple of years, part of my research has been dedicated to exploring the emotional content of published books, using the material present in the Google Books Ngram Corpus. Our analysis produced some interesting results. While analyses like ours need to be carefully weighed and possibly reproduced with various samples (but this should always be the case…), I think that tools like the Google Books Corpus represent an extraordinary opportunity, as my goal is to study human culture in a scientific/quantitative framework.


Books Average Previous Decade of Economic Misery

Almost one year ago, we published a paper in which we described a large-scale analysis of cultural/literary trends, realised using the Google Books Ngram Corpus. In particular, we showed that, through a relatively simple extraction of emotion-related words (words semantically related to “main” emotions like joy, sadness, anger, etc.), it was possible to detect some clear tendencies, such as a general decline in the emotional “tone” of books published in the twentieth century – or at least in the frequencies of emotion words –, a divergence between American and British English – with the former being more emotional –, and, finally, the existence of distinct periods of “literary mood” in the last century.

Related to the last point, PLOS ONE just published a follow-up of this research, in which we correlate this literary mood with the economic trends of the past century. The image below shows the main point of our study.


The red line is what we called the “Literary Misery Index” (how “sad” books are in a certain year, on average), which we extracted from the books in the Google Corpus, while the black line is an 11-year moving average of the economic Misery Index (how “bad” the economy is in a certain year), a well-known economic index, computed by adding the inflation and unemployment rates. The two trends are strongly correlated (you can read more in the Bristol University press release here, and, of course, in the original paper).
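For readers who like to see the arithmetic spelled out, here is a minimal sketch of the two quantities on the black line: the yearly Misery Index (inflation rate plus unemployment rate) and its 11-year moving average. The numbers below are invented for illustration; they are not the actual series used in the paper.

```python
# Sketch of the economic Misery Index and its 11-year moving average.
# All figures here are made up for illustration.

def misery_index(inflation, unemployment):
    """Misery index for one year: inflation rate + unemployment rate (in %)."""
    return inflation + unemployment

def moving_average(series, window=11):
    """Centered moving average; None where the window does not fit."""
    half = window // 2
    out = []
    for i in range(len(series)):
        if i < half or i + half >= len(series):
            out.append(None)
        else:
            chunk = series[i - half : i + half + 1]
            out.append(sum(chunk) / window)
    return out

# Illustrative yearly inflation and unemployment rates (percent)
inflation    = [5.7, 4.4, 3.2, 6.2, 11.0, 9.1, 5.8, 6.5, 7.6, 11.3, 13.5, 10.3, 6.2, 3.2, 4.3]
unemployment = [4.9, 5.9, 5.6, 4.9, 5.6, 8.5, 7.7, 7.1, 6.1, 5.8, 7.1, 7.6, 9.7, 9.6, 7.5]

misery = [misery_index(i, u) for i, u in zip(inflation, unemployment)]
smoothed = moving_average(misery, window=11)
```

The smoothing window is centered, so the first and last five years of the illustrative series carry no smoothed value, which is one reason the real comparison uses a long series of annual data.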

As with the previous work, we are glad we had some media attention (see for example The New York Times and The Guardian), which generated quite a lot of buzz. Not surprisingly, this included some criticism. It is interesting that, while some commenters think that we are “stating the obvious”, others accuse us of applying a “crude” causal determinism, and of defending the implausible claim that the economy “dictates” literature and culture.

I am more sympathetic to the state-the-obvious side of the debate, so I am not going to write about this (but note: we are able to substantiate an “obvious” claim – economic conditions influence cultural mood – with empirical data, as well as provide some refinement, for example a possible estimate of a time lag). Regarding the other side of the debate, I would not say that the economy “dictates” literature, but it is quite plausible that economic conditions have an effect on mood. This is not just common sense: many studies link, for example, financial strain and depressive symptoms (here), or general psychological distress (here). If the Google corpus is a good barometer of a culture's mood, our results are not particularly surprising. This does not mean, of course, that all books published, say, in the 80s were gloomy (I feel like I am underestimating the intelligence of the readers, but some journalists seem to criticise our result on this shaky basis), or that the economy alone has a causal effect on literature or culture.

On a related note, given that I can safely assume that most of the “crude determinism” critics come from literary or, in general, humanistic departments: I like to imagine that a well-known German philosopher, who was once much praised there, would be very supportive of our work!



Bentley R.A., Acerbi A., Ormerod P., Lampos V. (2014), Books Average Previous Decade of Economic Misery, PLoS ONE, 9 (1): e83147.

Meet @CultEvoBot, my first Twitterbot


In the last few days, for independent reasons, (i) I was told the Horse_ebooks story (in short, an “artistic” project in which humans pretended to be a Twitterbot and gained around 200K followers – if you don’t know anything about it, please read the Wikipedia page and the links cited in the References there, it is quite interesting), (ii) I stumbled upon this page with a few examples of Twitterbots worth following (at least according to the page), and, finally, (iii) I was pointed to this NYTimes article (from August 2013) on social bots (claiming, among other things, that only 35% of Twitter users are humans). This seemed enough to me to try and see how difficult it was to set up a Twitterbot.

A Twitterbot is a program that produces automated posts via Twitter (surprise!). In my case, @CultEvoBot is a short Python script that every hour – when my laptop is on – uses Google News search or Google Blogs search (after flipping a coin to decide which) and searches there for “cultural evolution”. It then goes through the links proposed and, if one is not in its log file of past links, posts it on Twitter with the title provided by Google (and adds it to its log file). That’s all (it also follows its followers, which is completely useless at the moment – among other things because I am the only follower – but might be useful in the future).
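The core logic is simple enough to sketch in a few lines. The version below is a toy reconstruction, not the actual script: the Google searches are reduced to plain lists of (title, link) pairs, the tweet is just a returned string, the log file is a Python set, and the function names (`run_once`, `pick_new_link`) are my own placeholders.

```python
# Toy sketch of @CultEvoBot's hourly tick. The real search (Google News or
# Google Blogs, chosen by coin flip) and the real posting to Twitter are
# stubbed out; only the coin flip and the log-based deduplication are shown.

import random

def pick_new_link(results, posted):
    """Return the first (title, link) pair not yet posted, or None."""
    for title, link in results:
        if link not in posted:
            return title, link
    return None

def run_once(news_results, blog_results, posted):
    """One hourly tick: flip a coin between the two result lists, take the
    first unseen link, and record it in the log (here, a set)."""
    results = random.choice([news_results, blog_results])
    choice = pick_new_link(results, posted)
    if choice is None:
        return None  # nothing new to post this hour
    title, link = choice
    posted.add(link)  # the real bot appends the link to a log file instead
    return f"{title} {link}"  # the real bot tweets this string
```

Everything Twitter-specific (authentication, the actual posting call, following back followers) sits outside this sketch, which is exactly why writing such a bot turns out to be mostly plumbing around a few lines of logic.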

So basically, @CultEvoBot does not do much more than provide links to potentially interesting sources, but I am pretty satisfied with the result. Programming a Twitterbot – even with more elaborate functions (like replying to specific users or posts, retweeting, etc.) – seems quite straightforward, and I can imagine that I will be able to use them in the future for scientific (or artsy) projects, even though at the moment I don’t have any specific idea (suggestions welcome).

“Happiness” in 20th Century English books

Just to give an idea of the analysis mentioned in the previous post, the plot below shows the trend of a rough measure of the “happiness” of the books present in the Google Books database. For WordNet-Affect (WNA) this is obtained, simplifying a little, by subtracting the cumulative score of the “Sadness” category from that of “Joy”, while for Linguistic Inquiry and Word Count (LIWC) the two (equivalent) categories are called “Positive emotions” and, again, “Sadness”. Values above zero indicate generally ‘happy’ periods, and values below zero indicate generally ‘sad’ periods.
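As a rough illustration of how such a score can be computed (a sketch of the general idea only: the tiny word lists below are invented stand-ins, not the actual WNA or LIWC categories, and the real analysis works on normalised n-gram frequencies, not raw token counts):

```python
# Toy version of the "happiness" measure: the relative frequency of "Joy"
# words minus the relative frequency of "Sadness" words in a year's text.
# The word lists are illustrative stand-ins for the real WNA/LIWC categories.

JOY_WORDS = {"joy", "happy", "cheerful", "delight"}
SADNESS_WORDS = {"sad", "grief", "sorrow", "gloomy"}

def category_frequency(tokens, category):
    """Fraction of tokens that fall in the given word category."""
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in category) / len(tokens)

def happiness(tokens):
    """Joy frequency minus sadness frequency; >0 means a 'happy' period."""
    return category_frequency(tokens, JOY_WORDS) - category_frequency(tokens, SADNESS_WORDS)
```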


This result is interesting to me not so much because we can discover something new about the last century (even though I wonder why the 80s seem to be so sad), but because if (i) two independent ways to score the emotional content of texts, (ii) through a quite rough analysis of (iii) an enormous database of books, give highly correlated trends, this means that there is a meaningful “signal” that we can extract (which cannot be taken for granted).

We also performed an analogous analysis using a tool called “Hedonometer” (HED – see the plot below). In this case the results are quite different, even though some similarities are present, e.g. the positive peak in the 20s, the negative peak corresponding to the Second World War, and the post-80s increase in happiness. The reason is probably that LIWC and WNA are conceptually quite different from HED. LIWC and WNA are basically “lists” of words related to specific emotions (so, for example, the first five words – alphabetically – in LIWC’s “Sadness” category are: abandon*, ache*, aching, agoniz*, agony), while HED uses a list of generic words not directly related to emotional states, but evaluated by human subjects as particularly happy or sad. So, for example, HED scores in texts the presence of words such as “terrorism” or “Christmas”.
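To make the conceptual difference concrete, here is a Hedonometer-style score under the same toy simplifications as above: each rated word carries a human-assigned happiness value, and a text is scored by the average value of its rated words. The ratings below are invented for illustration; the real Hedonometer word list and values are different.

```python
# Toy Hedonometer-style scoring: no emotion categories, just per-word
# happiness ratings averaged over the rated words of a text.
# These ratings are invented for illustration.

RATINGS = {"christmas": 7.9, "terrorism": 1.6, "book": 7.2, "war": 1.8}

def hedonometer_score(tokens, ratings=RATINGS):
    """Average happiness rating over the tokens that have a rating,
    or None if no token is rated."""
    rated = [ratings[t] for t in tokens if t in ratings]
    return sum(rated) / len(rated) if rated else None
```

Note the structural difference from the category-based measures: here unrated words are simply ignored, and a single very happy or very sad rated word can move the score of a short text a lot.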


One interesting thing to notice regarding HED is that it is the only index that “tracks” the effect of the First World War. Also, comparing the absolute values of our results (the right y-axis in the plot above) with the values obtained for contemporary Twitter messages (see here), it seems that, in general, books tend to be slightly more “sad” than tweets.

If you are interested in more details, and in the other analyses, the preprint of our contribution can be found here.

Robustness of emotion extraction from 20th Century English books

I’ll give today a short talk at the Big Humanities Workshop, held in conjunction with the 2013 IEEE International Conference on Big Data, on our research on the emotional content of English-language books.

In a previous work we analysed the usage of emotion-related words using the Google Books database. We reported there three main findings:

  1. the existence of distinct periods of positive and negative “moods”, detectable through automatic analysis of the texts;
  2. a steady decrease in the usage of emotion-related words throughout the century;
  3. a divergence between American English and British English books, with the former getting more “emotional” starting from the 1960s.

The next step has been to perform additional analyses to check the robustness of these results. In detail: we re-ran the same analysis with the latest (2012) version of the Google Books corpus (which contains approximately 3 million more books than the one we used originally); we compared the results of different, independent ways to score the emotional content of the texts (originally we used WordNet-Affect, which we now compared with Linguistic Inquiry and Word Count and the “Hedonometer”); we ran more detailed statistical analyses (to check for high-frequency mood words that might determine on their own the trends for specific emotions, obscuring the role of the numerous low-frequency terms); and, finally, we compared our original results with trends obtained by considering only terms tagged as adjectives or adverbs, which are considered reliable indicators of emotional content (part-of-speech information was not present in the first version of the Google corpus).

Overall, we were happy to see that the original results proved to be quite robust (especially results #2 and #3). The next step would now be to understand what they mean – to me, the decrease in emotional content is especially interesting – assuming that they do not derive from some idiosyncrasy of the Google database. Apparently the official proceedings of the IEEE Big Data Conference are not around yet, but here you can find a preprint of our contribution (thanks to Bill, coauthor together with Alex Bentley).

Unfortunately I will not be physically in some room in Santa Clara, California, to present my talk. It would have been very interesting for me to get to know more of the “Digital Humanities” world (to me, books are just one kind of artefact useful for studying more general cultural dynamics, and it happens that they are convenient to quantify and have temporal depth – some speak, in this regard, of “long data”); hopefully there will be other occasions. Also, my remote talk will end up being after 11 pm Bristol time, and after Puccini’s La Bohème, so if you, reader, are at the workshop, I apologise in advance…