Understanding Well-being Data

Chapter 5: Getting a sense of Big Data and well-being

Social media data mining in social and cultural sectors

Social media data mining is not always a large-scale affair requiring APIs and special software. As a six-month research project with city councils and a city-based museums group in the north of England found [1], many small organisations use quite basic techniques to do this work. Social and cultural policy sectors rely on understanding well-being data, as improving well-being is at the core of what many of them do. Yet, as Chap. 1 of this book acknowledges, the sectors do not always have the skills or confidence to use data. We will look at these sectors as a whole in greater depth in the next three chapters.

The project explored how these smaller social and cultural organisations were already using data mining and wanted to understand how they might use it more effectively. The researchers discovered that although software packages were adopted to analyse institutional impact and engagement on Twitter, this was largely unsystematic [2]. Keen to improve their social media data mining, these organisations signed up for training in new tools that would increase their capability. However, it became clear that less data mining was happening than expected, and the capacity of workshop participants to engage with training in the new tools also fell away [3]. Doing better with data seems a good idea, but it is not always as easily resourced or incorporated into working practices as initially hoped.

Local councils and social and cultural sector organisations all have limited resources. Despite enthusiasm for being, or becoming, data-driven, capacity to invest time and money in new tools at the organisational level is often lacking [4]. In the case of the cultural sector, there is a tendency to invest in grand schemes, new metrics and reports at policy level that claim to investigate the value of new and/or Big Data and the associated technologies required to generate or analyse them [5]. However, when considering the (already ill-defined) cultural sector as a whole (if you are reading this chapter a while after reading the previous ones: the cultural sector is a broad description of cultural institutions like libraries, heritage sites, museums, theatres and so on; crucially, it is not only about the buildings themselves, but all the ways people make and consume culture, and can include Netflix and outdoor festivals; in the UK, the cultural sector includes organisations funded by public subsidy as well as commercial organisations), differences in requirements and capacity for data technologies are obscured, and these differences are multiplied by huge variability in organisation size, type, purpose, mission and cultural offering across and within sectors [6]. These top-down resources and contributions are not always actually used or found useful at an organisational level or across the wider sector [6]. Some organisations recognise that their audiences are full of people whose opinions are less easily captured by Big Data. Some people, for example, still prefer telephone booking lines to web pages and are certainly not tweeting or Instagramming their experience of a show. As such, some who attend a show are less likely to be generating data on their opinions that might then be mined. Advocates for using Big Data in small organisations acknowledge that Big Data can be 'debilitating' in their complexity and challenges, but this is not always explored in a way that offers resolution [6], and, as we have seen [7], when recommendations, even training, are offered, there is not necessarily the capacity to take them up.

Yet, it can be quick and easy to interact with Big Data in the form of social media data, as long as you consider the limitations of the data and their origins, as well as how you might analyse them yourself. Organisations and individuals do not need Big Data analytics know-how or software, although there are excellent resources freely available to help them understand how (a 2019 post from Wasim Ahmed offers a clearly presented overview of the kinds of analyses available using different software: https://blogs.lse.ac.uk/impactofsocialsciences/2019/06/18/using-twitter-as-a-data-source-an-overview-of-social-media-research-tools-2019/), as I found when I wanted to explore Twitter discussions about happiness. In 2013, Mass Observation recreated the Bolton happiness study on Twitter (see Fig. 5.3). This was still fairly experimental for them, as much as for me, when I requested access to the tweets. They captured 25 responses at the time.

The sample of 25 meant that—of course—I did not require data mining or sentiment analysis software, or any knowledge of APIs. In fact, I did not even need to request these tweets from Mass Observation directly, as they are still available on Twitter by searching the hashtag (or were in August 2020, when I last checked). A cursory analysis in this case simply meant reading and noting similarities and themes, which I could have done on a piece of paper.

Fig. 5.3 Mass Observation happiness tweets

So, what did this cursory analysis tell me? Whilst 20% mentioned pets, all of which were cats (it is the internet after all), one person replied with a single word: bacon. Mainly, however, people described informal, everyday participation ('everyday participation' [8] has come to mean the everyday activities we participate in, which tend to fall outside of the formal subsidy that typically funds 'the arts'), including reading, going to gigs and watching films. There were lots of glasses of wine and some chocolate in there too. The textual content of these tweets is reproduced in Box 5.1, without Twitter handles. You might note that a surprising variety of the theories of well-being we have encountered so far in the book are present in just 25 tweets. Some map onto clear areas of social policy; others are definitely in the private domain. Some people used negative language to imply life isn't currently great for them: 'Day off. Smoke in peace.' and 'Ability for women to walk down the street & not be catcalled or threatened. Few happy women here'. Some people were philosophical, others wistful. Some focussed on activities, others on the 'bliss' of doing nothing. The variety of tone and content makes for fascinating reading, but leaves these data wide open to interpretation, whether that is via human or artificial intelligence (a rough sketch of how a machine might attempt such a tally follows Box 5.1).

Box 5.1 Tweets Answering the Question: ‘What Is Happiness?’

  • Beer, maps, chocolate, quizzes, the unending pursuit of knowledge
  • Ability for women to walk down the street & not be catcalled or threatened. Few happy women here
  • Short term happiness is different for everyone. Long term happiness is about fulfilling your potential.
  • Bacon
  • 5 minutes to myself and a good book, with peppermint tea and the cats curled up around me. Absolute bliss!
  • Volunteering, yoga, baking, being with loved ones, reading, warm days paddling in the sea, colourful things, exploring, my cat :D
  • Doing what I love (#history), a safe home by the sea, someone to love & share things with
  • Good company, fireworks, being smiled at, a job well done, ‘sweet pea’ by Manfred Mann, making someone else happy, good health.
  • I am happiest when discovering/learning new things, such as reading books and finding new music.
  • Happiness is cooking for those I love, with a glass of wine and giggles on the side.
  • Day off. Smoke in peace.
  • “What is happiness?” something to do with dopamine levels
  • Making things that muself [sic], and hopefully other people will enjoy
  • Loving and being loved and valued for who I actually am.
  • More precisely: Time, a book, a view, a friend.
  • Choices and control in life not just in shopping.
  • Connecting with other people, being able to make a difference to someone else, a good book and a purring cat on my lap!
  • My kids
  • What is happiness?’—“A warm spot on the bed in the sunshine”
  • Knowing that enough is plenty
  • The scent of roses on a damp morning […] being where you are without wishing to be somewhere else
  • Happiness is seeing my children flourish, Swansea City FC progress & succeed & cooking for husband. In that order! ;)
  • Love, health and a sense of purpose. Oh, and cake.
  • What makes me happy? Cuddling up on the sofa with my partner & animals, a glass of wine, chocolate, a film & crochet bliss
  • Happiness is good relationships, a little more than enough money, satisfaction and contentment
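
To give a sense of how even this cursory analysis could be automated, the sketch below tallies a handful of keyword themes across the tweets in Box 5.1. It is a minimal illustration in Python only; the theme labels and keywords are invented here and are not categories from the original analysis, which was done by reading.

    # A rough, illustrative tally of keyword themes across the Box 5.1 tweets.
    # The themes and keywords are invented examples, not categories from the study.
    from collections import Counter

    tweets = [
        "Beer, maps, chocolate, quizzes, the unending pursuit of knowledge",
        "Bacon",
        "5 minutes to myself and a good book, with peppermint tea and the cats "
        "curled up around me. Absolute bliss!",
        # ...the remaining Box 5.1 tweets would be listed here
    ]

    themes = {
        "pets": ["cat", "cats", "animals"],
        "reading": ["book", "reading"],
        "food and drink": ["wine", "chocolate", "cake", "bacon", "beer", "tea"],
        "relationships": ["love", "partner", "kids", "children", "friend"],
    }

    counts = Counter()
    for tweet in tweets:
        text = tweet.lower()
        for theme, keywords in themes.items():
            # Crude substring matching: note that "cat" would also match "catcalled".
            if any(word in text for word in keywords):
                counts[theme] += 1

    for theme, n in counts.most_common():
        print(f"{theme}: {n} of {len(tweets)} tweets ({100 * n / len(tweets):.0f}%)")

Even this toy example involves choices (which themes, which keywords, whether 'tea' counts as food and drink) that shape the result, which is precisely the point about interpretation made above.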

I used these tweets as a light-hearted example, with my ever so light-touch analysis, in my first ever conference presentation in 2013. In Chap. 3, I explained that my research question at the beginning of my PhD was loosely: 'When people describe well-being, how often do they talk about participating in different kinds of activities—and what might that tell us about aspects of social and cultural policy?' or 'How can qualitative data collected to understand well-being tell us how people feel about what they do?'. I noted in this presentation that state-funded cultural practices (like art galleries and museums) were mentioned less frequently by people as making them happy than what is called everyday participation [9]. The same finding emerged from my reanalysis of the ONS free-text data I used in my PhD [10]. By extension, these data (with their caveats) were another dataset to suggest we should question whether cultural funding was supporting activities that made people happier or increased their well-being.

This was not the only way of analysing these tweets to make an argument about the relationship between culture and well-being. Someone else might have counted how many of these responses included something creative and used their analysis to argue that they had found the value of culture to people, thereby justifying more funding. These are debates about data and their use in politics and policy that we return to in the next chapter. What is important here is that even with (arguably, especially with) such a small dataset, we can see how human bias can interact with data and lead to different arguments.

If it is difficult for humans to make categorical claims from a form of sentiment analysis that is not much more systematic or technical than reading 25 tweets, we must remember these limits when such analyses are made through machine learning. This is especially vital as time-sensitive analyses of large-scale samples of emotional expressions are being used in research on COVID-19, particularly given they are seen to have the potential to inform mental health support and help tailor risk communication to change behaviours [11]. As with all data uses mentioned in this book, it is not that using social media data or automated sentiment analyses is necessarily bad, but rather that their limits should be recognised. As ever, it is an issue of methodology, transparency, context and legibility.
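
To make those limits concrete, here is a deliberately naive, lexicon-based sentiment scorer of the kind that, in spirit, underlies simple automated sentiment analysis. The word lists are invented for illustration and are not taken from any real tool; the sketch simply shows how short posts can be misread.

    # A deliberately naive lexicon-based sentiment scorer (invented word lists,
    # not from any real tool), to show how short posts can be misread.
    POSITIVE = {"happy", "love", "peace", "bliss", "good", "enjoy"}
    NEGATIVE = {"sad", "threatened", "angry", "hate"}

    def naive_sentiment(text: str) -> int:
        """Count positive words minus negative words: >0 reads as 'positive'."""
        words = [w.strip(".,!?&'\"").lower() for w in text.split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    # 'Day off. Smoke in peace.' scores +1 because of 'peace', although the
    # chapter reads it as implying life is not currently great for the tweeter.
    print(naive_sentiment("Day off. Smoke in peace."))

    # The street-harassment tweet scores 0 overall ('happy' +1, 'threatened' -1),
    # flattening its clearly critical tone.
    print(naive_sentiment("Ability for women to walk down the street & not be "
                          "catcalled or threatened. Few happy women here"))

At the scale of millions of posts, misreadings like these do not disappear; they simply become harder to see, which is why methodology, transparency, context and legibility matter.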

  1. Kennedy 2016
  2. Kennedy 2016, 71 & 72
  3. Kennedy 2016, 74
  4. Kennedy 2016; Oman 2019a, b
  5. Gilmore et al. 2018; Oman 2013a
  6. Oman 2013a
  7. Kennedy 2016
  8. Miles and Sullivan 2010
  9. Oman 2013b
  10. Oman 2017, 2020
  11. i.e. Pellert et al. 2020