
Chapter 8: Talking Different Languages of Value

Some reasons why findings may differ

As I mentioned in Sect. 8.2, we began a journey that involved understanding the contexts in which the research in ‘Museums and Happiness’ was undertaken. My colleague also looked at the quantitative work and used the same data, following the methodology section, to try to reproduce the results. The headline finding of the quantitative work in our project is that the monetary estimates of the relationship between participation and subjective well-being do not match across the two pieces of research. There are a number of reasons why this may be the case.

Why the difference? The second study may have recoded variables in different ways from the initial study. As we know from Chap. 3, coding ordinarily requires human decisions about what to code and how, and there is no single objectively correct way to code variables: all approaches have their own pros and cons under different circumstances. What follows from this is that the difference in coding, based on the way it is reported, appears to reverse the finding. By this, I mean that the reproduction found a positive relationship between participation and happiness, but not between attendance and happiness. For example, people who play a musical instrument are happier, but people who go to concerts aren’t. In short, the reports’ key headlines, and their focus on the positive relationship between happiness and attending particular activities, were not the same when reproduced.
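Neither study’s data nor its code is shown here, but the point about recoding can be illustrated with a purely hypothetical sketch in Python. All of the variable names, cut-points and numbers below are invented for illustration: the same simulated survey answer is recoded in two defensible ways, and each recoding produces a different estimate of the ‘attendance gap’ in happiness.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical survey item: how often do you attend arts events?
# 0 = never, 1 = once or twice a year, 2 = monthly, 3 = weekly
attend = rng.integers(0, 4, size=n)

# Simulated life-satisfaction score: in this invented example, only
# frequent attendance (monthly or more) is associated with higher scores.
happiness = 6 + 0.5 * (attend >= 2) + rng.normal(0, 1.5, size=n)

df = pd.DataFrame({"attend": attend, "happiness": happiness})

# Recoding A: an 'attender' is anyone who attends at least once a year.
df["attender_a"] = (df["attend"] >= 1).astype(int)
# Recoding B: an 'attender' is anyone who attends at least monthly.
df["attender_b"] = (df["attend"] >= 2).astype(int)

for col in ["attender_a", "attender_b"]:
    means = df.groupby(col)["happiness"].mean()
    print(f"{col}: attender/non-attender gap = {means[1] - means[0]:.2f}")
```

Both recodings are reasonable readings of the same question, yet they yield different estimated gaps; once an estimate like this is converted into a monetary value, such coding decisions can translate into very different headline figures.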

There are also questions about how ‘participation’ and ‘audience’ were operationalised in the analysis. The ‘Museums and Happiness’ report includes some variables and excludes others in its construction of these terms. This is another example of how models require decisions, and it is difficult to be certain that such decisions are not affected by bias, particularly regarding which variables relate to happiness and which do not. We discovered in Chap. 7 a number of ways in which the operationalisation of culture and well-being matters. If the operationalisations are too narrow, and ‘participation’ and ‘attendance’ do not include all the activities we might want to classify within these categories, then the apparent positive effects of participation could reflect something broader than the publicly subsidised cultural sector alone. It may be that the positive associations of participation in publicly funded culture are similar to those of playing in a darts team or watching Eurovision with friends.

Alternatively, if the operationalisations are too broad, then the positive association between participation and happiness might be driven by one activity, or type of activity, and other activities are then undeservedly classified as being associated with happiness. For example, if dancing is associated with happiness but playing a musical instrument is not, and these two activities, along with several more, are combined into a single variable for whether or not people have participated in the arts, then dancing will be under-credited for its association with happiness, while playing an instrument will be over-credited. We encountered something similar in Chap. 7, where incorporating ‘social activity’ into the category called ‘cultural access’, alongside multiple other variables, made it difficult to establish what the effect of cultural participation might be. We also encountered this in Box 7.6, with the hypothetical situation in which young people don’t like jazz music but older people do: if you looked at everyone together, the two groups would, to a degree, cancel each other out, and you would likely find that people weren’t really bothered by jazz at all.
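Again, this is not the report’s analysis; it is a purely hypothetical Python sketch with invented variables and numbers. It simulates a case where only dancing is associated with happiness, and shows how a combined ‘any arts participation’ indicator spreads that association across both activities.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5_000

dance = rng.integers(0, 2, size=n)       # 1 = dances, 0 = does not
instrument = rng.integers(0, 2, size=n)  # 1 = plays an instrument

# Simulated happiness: in this invented example, only dancing carries
# a positive association.
happiness = 6 + 0.6 * dance + rng.normal(0, 1.5, size=n)

df = pd.DataFrame({"dance": dance, "instrument": instrument,
                   "happiness": happiness})

# Combined indicator: participated in *any* arts activity.
df["any_arts"] = ((df["dance"] == 1) | (df["instrument"] == 1)).astype(int)

for col in ["dance", "instrument", "any_arts"]:
    means = df.groupby(col)["happiness"].mean()
    print(f"{col}: happiness gap = {means[1] - means[0]:.2f}")
```

In this invented example, the gap for the umbrella variable sits between the dancing-only gap and zero, so reporting only the combined measure would over-credit instrument playing and under-credit dancing.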

Most importantly for the context of this book and chapter, data are not neutral, and data modelling requires many human interventions, such as cleaning, coding and experimenting with different ways to derive a relationship from the data. This leaves the processes open to human error, and to numerous biases and disagreements, in a way that is not ordinarily accounted for. The claims made may not reflect the data collected, given the questions asked, and we do not necessarily need to be quantitative social scientists to read carefully and ask questions about where headline findings came from.