
Chapter 6: Relating well-being, values, culture and society

Cultural value and the role of well-being data

As with the terms culture, well-being and social value, you will probably not be surprised to know there is no one definition of cultural value. Like so many of the other terms set out in this book, there are long debates and no clear consensus1. Given the extent of these discussions, this section offers only a brief overview of cultural value, acknowledging how its definition and quantification became a much-discussed problem, safe in the knowledge that the detail of these debates can be found elsewhere.((For example, in: O’Brien (2010); Oakley and O’Brien (2015); Crossick and Kaszynska (2016); Neelands et al. (2015).))

The impact of culture on the economy first became a prominent feature of cultural value in the last quarter of the twentieth century. The emphasis on efficiency in the ‘Thatcherite revolution’2 and the new public management discussed in Chap. 2 saw ‘social value’ emerge as a consideration in public decision-making. In parallel, what was called the ‘economic turn’ instigated new methodologies for measuring culture’s worth as economic returns on investment3. The new possibilities for measurement these methodologies opened up, in turn, resulted in an increasing focus on measuring value, full stop, including areas of life less readily measurable than money.

Ideas of cultural value enabled a continuity from economic value to instrumental approaches to valuing what culture and leisure activities could do for both individuals and society. Under New Labour (1997–2010), this tended to be articulated more prominently as social value (harking back to Victorian values of social and moral improvement). In truth, however, there was a growing abundance of econometrics taken up as proxies for cultural value.

The Department for Culture, Media and Sport (DCMS, formerly the Department of National Heritage) was renamed in 1997 by the then recently elected Tony Blair, and the new department was keen to promote the idea that ‘sport and culture are widely perceived to generate social impacts’4, alongside economic impacts5. All New Labour departments inherited a civil service culture steeped in almost two decades of new public management approaches, a mix of public and private provision, and a commitment to using social science technologies to evaluate what worked and what did not in public administration (as discussed in Chap. 2).

Initially, the ways DCMS was required to assess its performance against social and economic goals were not demanding in terms of data or data expertise. As discussed above, it compared visitors to a range of events with the general population and used these numbers to make arguments about contributions to social aims. If the profile of people at these events grew closer to that of the general population (that is, less dominated by the highly educated and white), then arguments were made for a contribution to social cohesion, as a ‘strategic priority’6.

While not technically challenging, such assessments were hampered by the limits of the data available. It was impossible to identify how the fraction of the population going to a museum had changed in the last 12 months without a figure for the previous 12 months. The data collected on the cultural sector were partial, largely driven by specific targets generated by DCMS and related bodies.((See Selwood (2002) for a comprehensive review of cultural sector data.)) Thus, they reflected the interests and management approaches nationally, as well as the expertise available. Cultural value arguments were increasingly included in the rhetoric of other actors and organisations, such as local authorities. These arguments retained the two key focusses: social impact and economic multipliers. If a local authority could show their local theatres led to economic growth, or to social impact, they could make a case for greater funding. Similarly, bids for new local arts venues ordinarily entailed commitments to an evaluation of economic and social impact.

Here we see the general ‘enthusiasm for numbers’7 discussed in Chaps. 2 and 5 manifest in a need for data expertise in the cultural sector, expertise which was lacking because it had not previously been required. Consequently, there was an increasing reliance on consultancies to satisfy the desire for data and evidence for policy evaluation. This was symptomatic of a shift from collecting and describing data to a more involved analysis of the data gathered, as part of the production of evidence for valuing culture. Whereas researchers once ‘collected and recorded mainly quantitative data on things like the number of creative or cultural businesses in a particular area, the number of people they employed, the amount of revenue they generated and other typically economic “indicators” of cultural and creative activity’8, this work broadened, so that by 2010, consultancies were estimating social and economic impact. This included bespoke data collection: for example, assessing the social impact of events by surveying attendees about changed perceptions9. It also increased the demand for understanding statistical power and significance10. In short, the more research that was brought in, the more sophisticated it became, and the further it fell outside the day-to-day remit of many of those responsible for evaluations.
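To give a flavour of the kind of statistical reasoning this demand implied, the sketch below works through a standard sample-size calculation for detecting a change in the proportion of attendees reporting a changed perception between two survey waves. It is purely illustrative: the proportions, significance level and power are assumed values, not figures from any evaluation discussed here.

```python
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group needed to detect a difference
    between two independent proportions with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the chosen significance level
    z_beta = norm.ppf(power)           # z-value corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2

# Hypothetical figures: 40% of attendees report a changed perception at baseline,
# and the evaluation hopes to detect a rise to 50% at the next survey wave.
n_per_wave = sample_size_two_proportions(0.40, 0.50)
print(f"Approximately {n_per_wave:.0f} respondents per wave would be needed.")
```

For the assumed shift from 40% to 50%, this gives roughly 385 respondents per wave: the sort of figure that pushes an evaluation well beyond a quick exit poll, and towards exactly the kind of expertise the sector was buying in.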

Meanwhile, the need to ensure culture was part of discussions of valuation and appraisal encouraged further attempts to define cultural value. One of the most prominent is John Holden’s (2004, 2006), for whom there are different parts of society with different relations to, and needs for, culture. These different parts of society (the public, the professionals and the politicians) also reflect different perspectives on value. Cultural value also takes three forms for Holden (2006), broadly representative of these groups. For example, ‘intrinsic value’ is the subjective experience of culture: ‘intellectually, emotionally and spiritually’11. ‘Instrumental value’ is how culture can be ‘used to achieve social or economic purpose’12. There is also ‘institutional value’, found in how people relate to cultural organisations. For example, the BBC was very concerned about its ‘public value’ and conducted a consultation so it could articulate its institutional value (in Holden’s terms) to the public and its instrumental value in economic terms.((O’Brien (2013, 122–130) covers particular case studies of public value in greater detail.)) ACE’s 2007 Arts Debate aimed to fulfil a similar objective13. However, public consultation data may not always reinforce the values of institutions and can in fact challenge them. When reanalysing the ONS’ data from the national well-being debate in 2010, I also found that Holden’s three groups formulate the value of culture to well-being differently. The lack of reference to arts and cultural institutions, in general or specific terms, by people in these data14 poses important questions for the cultural value debate.

The problem of cultural value is also inextricably linked to, yet separated from, economic value in the policy context. Cultural economist David Throsby breaks cultural value down into different elements (aesthetic, spiritual, social, historic, symbolic and authenticity value), arguing that each contributes to the overall value of a cultural object, institution or experience15. He maintains that cultural value is separate from economic value and, relatedly, that ‘there are some aspects of cultural value that cannot realistically be rendered in monetary terms’15. However, he also argues that a thorough economic valuation of both the market and non-market benefits of a cultural object can offer a good indication of its cultural value, because generally ‘the more highly people value things for cultural reasons the more they will be willing to pay for’ them16. Some aspects of cultural value lend themselves more readily to being expressed in the language of outputs and outcomes, whilst others do not. Given that the valuation tools we have come predominantly from the field of economics, the aspect of value most readily measurable is perhaps economic value. This is because it is already numerical, in a way that people’s subjective experiences are not.

As we can see, the idea of culture, the policies which contain and promote it, those who work in it, its infrastructure and research, seem to both attract and resist economic analysis.((See Doyle (2010) for a longer discussion on how culture attracts and resists economics.)) The proliferation of data collection and consultancy for policy appraisal included economic impact and valuation methodologies. Some of the economic valuation techniques used to capture the effects of culture are not yet technically sound17, as will be expanded on in greater detail in the subsequent chapters (Chaps. 7 and 8). Yet some argue that the demand for evidence of this kind of value has to be addressed in some way. One particular in-depth project focussed on how to overcome the gulf between what the cultural sector thought it was making culture for, and the demands of Her Majesty’s Treasury (HMT)18. This report argued the need for pragmatism in presenting cultural value to secure public funding19. It argued that ‘the lack of consensus in the literature over the meaning of cultural value and how to best measure and capture cultural value suggests the potential of using established economic valuation tools’20. By encouraging the sector to measure the value of culture in ways more acceptable to the hierarchies of evidence demanded by HMT, the report aimed to reconcile two cultures of evidencing cultural value. Arguably, however, this may have reinforced how very distinct those two cultures are, as well as leading to increased technocracy as arts managers attempted to do cultural economics or deal with more data.

Many in the sector see the value of their work as exceeding its economic value, and feel it cannot be reduced to economic considerations alone. Others argue that instrumentalising culture for social policy ends is not ethical, for various reasons. It has also been pointed out that hierarchies of cultural value (the idea that one thing is more valuable than another for solving social problems) essentially ‘define[] culture as a mechanism for the replication of inequality’21. These contestations have led to various audits of cultural value, such as the Warwick Commission for Cultural Value, which influentially cites Taylor’s finding that the most privileged 8%((Note it was actually 8.7%, but was unconventionally rounded down in error to 8% when the finding was reproduced. See Taylor (2016) for more detail on the actual findings.)) access culture22. Arts Council England has commissioned numerous reviews on the subject, many of which call for further evidence rather than using the evidence we already have. For example, the publication The Value of Arts and Culture to People and Society ((ACE 2014)) lists the key themes of the value of culture as economy, health and well-being, society and education. Positioned as a rapid review of evidence, the report identifies a number of gaps, particularly regarding longitudinal data and the health and well-being evidence on cultural participation. Another example is the Arts and Humanities Research Council (AHRC) Cultural Value Project, a £2.5 million initiative over 3.5 years, which supported over 70 original pieces of research initiated by the call, largely from arts and humanities research disciplines. The programme intended to improve comprehension of the value of arts and culture and the methods used to capture this value23. This programme has finished, but has resulted in a new Centre for Cultural Value which aims to build ‘a shared understanding of the differences that arts, culture, heritage and screen make to people’s lives and to society’.((The Arts and Humanities Research Council (AHRC), Paul Hamlyn Foundation (PHF) and Arts Council England (ACE) jointly funded this call to establish a Centre for Cultural Value (CCV) to the value of up to £2 million (University of Leeds n.d.). The new centre is hosted at the University of Leeds.))

A recent large-scale academic project looked at how we might rearticulate ‘cultural values’ through understanding what people do in their everyday lives as culture, rather than thinking of cultural policy as something inherited to manage an elite idea of culture24. Understanding Everyday Participation: Articulating Cultural Values (UEP) notably used many different types of data, collecting primary data using various methods and analysing secondary data using different approaches. The premise was simple: understand what people were actually doing, and what they valued, rather than what cultural policymakers, the government, economists or the Happiness Tsar thought people should be doing (and then investing in programmes to get them to do it and measuring whether they did, or not). Insights include the dwindling investment in the social infrastructure represented by the local park25, and how charity shops in certain communities have been overlooked despite their specific ‘relations between culture, economy and place which has effects in the social sphere’26.

As noted above, a particularly influential insight from UEP came from a reanalysis of DCMS’ Taking Part Survey data. Taylor found that:

approximately 8.7% of the English population is highly engaged with state-supported forms of culture, and that this fraction is particularly well-off, well-educated, and white. Over half of the population has fairly low levels of engagement with state-supported culture but is nonetheless busy with everyday culture and leisure activities such as pubs, darts, and gardening.

(2016, 169)

Taking Part: The National Survey of Culture, Leisure and Sport had been established in 200527 as part of a programme of evidence generation led by DCMS. This new survey (known as Taking Part, and often shortened to TPS) aimed to collect data that would be useful to the concerns of all the sectors under DCMS’ remit. Notably, the CASE programme cited above was also a part of this project. TPS asks detailed questions about what people do and where. Chapter 8 goes into greater detail about the wording of the questions, demonstrating the level of detail collected about simple pastimes, such as walking. The survey also collects demographic data and, since the 2013–2014 dataset, has also contained ‘the ONS4’ (see Table 4.3). TPS data therefore combine inequality measures, well-being measures and highly detailed data about how people spend their time: the variety of activities they undertake, how frequently and for how long. While DCMS have been criticised for not making enough of the survey data themselves28, others have analysed the data, looking at types of participation and inequality29 and well-being30.
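To illustrate how data of this shape can be brought together, the sketch below tabulates an engagement measure against a demographic variable and a well-being score using pandas. The file name and column names (arts_engagement, life_satisfaction, education) are hypothetical stand-ins, not actual TPS variable names or the survey’s release format.

```python
import pandas as pd

# Hypothetical extract of survey microdata: one row per respondent, with an
# engagement category, an ONS4-style life-satisfaction score (0-10) and an
# education variable. Column and file names are illustrative, not real TPS fields.
df = pd.read_csv("taking_part_extract.csv")

# Share of respondents in each engagement category, broken down by education.
engagement_profile = (
    df.groupby("education")["arts_engagement"]
      .value_counts(normalize=True)
      .rename("share")
      .reset_index()
)

# Mean life satisfaction (and respondent count) by engagement category.
wellbeing_by_engagement = (
    df.groupby("arts_engagement")["life_satisfaction"]
      .agg(["mean", "count"])
)

print(engagement_profile)
print(wellbeing_by_engagement)
```

Analyses such as those cited here29,30 are of course far more involved, weighting the sample and modelling confounders; this is only the first descriptive step such data make possible.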

My PhD research was connected to the UEP project and, as discussed in Chap. 3 and briefly here, one of my approaches used free-text fields from the ONS’ Measuring National Well-being Debate. My research presented a reordering of these data to see how people value different domains of their life, in comparison to the published findings (Table 3.1). I found that when people talk about their well-being, they tend to describe the sorts of activities that Taylor lists, rather than those subsidised by cultural policy or indeed the institutions which house them. Overall, the vast body of research presented across the UEP project indicates the limits of research on, and for, cultural value arguments in asserting the value of particular forms of culture.

Most examples of articulating cultural value are attached to a specific idea of cultural policy (conflated here with arts policy), as you can see above. What is key here is that in deciding what counts as cultural in cultural value, cultural policy practitioners (policy-makers and academics) are also ascribing value to certain activities or practices. Much like the definitions of social value and well-being described in Chap. 2, this is a value system in and of itself: a ranking system which results in certain places, people and practices being invested in, while others are not. What is interesting is that the downfall of the social indicators movement in the 1970s is thought to have been caused by the ‘bewildering array’ of measures, as we discussed in Chap. 2, as well as by the lack of a robust theoretical or ideological analysis and the failure to establish what needed to be achieved, for whom and how31. Despite the breakdown of those earlier measures, and years of contestation around the limitations of metricised cultural value, it remains a resilient idea that is heavily invested in.

1. Oakley and O’Brien 2015
2. Power 1994
3. most notably Myerscough 1988
4. Taylor et al. 2015, 11
5. see e.g. Hesmondhalgh et al. 2015
6. DCMS 2003b
7. Hacking 1991, 186; Hacking 2002
8. Prince 2015, 584
9. Prince 2015
10. Prince 2014, 755
11. Holden 2006, 14
12. Holden 2006, 16
13. Bunting 2007a, b
14. Oman 2020
15. Throsby 2006, 42
16. Throsby 2006, 42; see also 2010
17. Rustin 2012
18. O’Brien 2010
19. O’Brien 2010, 8–9
20. O’Brien 2010, 15
21. Oakley and O’Brien 2015, 5
22. Taylor 2016, in Neelands et al. 2015
23. Crossick and Kaszynska 2016
24. Miles and Gibson 2016
25. Gilmore 2017
26. Edwards and Gibson 2017
27. DCMS 2006
28. Bunting et al. 2019
29. Taylor 2016
30. Fujiwara 2013; Fujiwara et al. 2015
31. Scott 2012, 36