
Chapter 3 Looking at Well-being Data in Context

Questionnaire data

If the idea of data collection is new to you, perhaps the easiest way to imagine well-being data being collected is by using a questionnaire that asks people how they feel about things related to their well-being. Questionnaires are easily distributed, ask the same questions in the same way, and can be repeated numerous times with the same or different people. This means their data are easily comparable, providing insights into well-being across a group or sub-population. If you ask the same people, you can understand their well-being over time. Online questionnaires distributed by organisations that have some responsibility for our well-being are increasingly familiar, for example, universities surveying their students and employers, their staff.((Elsewhere I have written that universities aren’t necessarily that good at looking after the well-being of staff or students. See Oman and Bull 2021 and Oman et al. 2015, forthcoming.)) These tend to ask us questions about our well-being that are useful to the running of the organisation in some way. The data can be used ‘by management’ to decide whether it is allocating resources well, or whether HR needs to make an intervention, in the same way that policy-makers can use well-being data.

Another way to imagine the context of questionnaire data collection might be the market researchers who used to be on the streets with clipboards (and whom my mum would always desperately avoid at the shops). In our increasingly online world, people’s opinions are still sought using questionnaires in person (although, along with everything else, COVID-19 has compromised this, and we are yet to see how social research will find its new normal). In the case of market research, questionnaires could involve asking whether people would buy a product, but they can also include questions about something they have just seen, an experience they just had, or how they feel about a particular place, like the park they are in. Some people have been given questionnaires after their COVID-19 vaccine, asking about their healthcare experiences. People can fill in the questionnaires themselves, or the researcher can complete the questionnaire on their behalf. If the researcher wanted to understand how people feel about a local, publicly subsidised event, the questions asked could look something like:

Q1 Have you just seen [specific subsidised concert]? Y/N
Q2 Is this your local park? Y/N
Q3 How are you feeling right now—out of 10, with 10 being the best you’ve ever felt, and 0 the worst? 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

These questions would generate binary data (yes/no) that can be aggregated (totalled) alongside numeric data from the scale. These sorts of data are easy to work with quantitatively, as you can categorise easily across the binary questions. Q3 uses a Likert((The scale is named after its inventor, the psychologist Rensis Likert. There can be confusion with Likert scales when it comes to the middle of the scale and moderate or neutral options, as sometimes these will record ‘don’t knows’, rather than ‘my well-being is five’.)) rating scale, which presents a series of answers to choose from, ranging from one extreme attitude to another. It is sometimes referred to as a satisfaction scale, as it is ideal for measuring satisfaction, and it is therefore often used to measure well-being. The numbers from the scale are used to establish trends or averages.

Say a researcher was lucky enough to get 100 people to speak to them on their way home from a concert in a park. They would have a sample of 100 people and would know that those people saw the event (is that the same as attending, you may ask? We will see what to do about that shortly). The researchers could establish what percentage of those spoken with were local residents (although note that what is meant by ‘local’ is not specified, which is not ideal). They could then look for trends in how people felt having attended the concert using the numeric data.
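
To make that concrete, here is a minimal sketch, in Python, of how such a sample might be aggregated. Every response below is invented for illustration, and the variable names are my own; nothing here comes from a real survey.

```python
# A minimal sketch of aggregating questionnaire responses like those above.
# All responses here are invented; a real sample would hold 100 of them.

responses = [
    # (Q1: saw the concert, Q2: local park, Q3: feeling, 0-10)
    ("Y", "Y", 8),
    ("Y", "N", 7),
    ("Y", "Y", 9),
    ("N", "Y", 5),
    ("Y", "Y", 10),
]

sample_size = len(responses)

# Binary (yes/no) answers are aggregated by totalling the "Y" responses.
saw_concert = sum(1 for q1, _, _ in responses if q1 == "Y")
local_residents = sum(1 for _, q2, _ in responses if q2 == "Y")

# The Likert-style scores can be averaged to look for trends.
mean_score = sum(q3 for _, _, q3 in responses) / sample_size

print(f"Sample size: {sample_size}")
print(f"Saw the concert: {100 * saw_concert / sample_size:.0f}%")
print(f"Local residents: {100 * local_residents / sample_size:.0f}%")
print(f"Mean well-being score: {mean_score:.1f} out of 10")
```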

Or, they could ask the question:
Having seen this subsidised concert in your local park, how are you feeling right now?

Yeah, good, ta. There was a great atmosphere.

This box, called a free text field or open text, allows people to answer a question in their own words. Whilst this is less easy to process and compare at scale, it can sometimes provide valuable information. In each case, the majority of the data collected would be subjective, as the numeric or textual answers would reflect the reported experience of the individual. Therefore, the answers collected—the data—may be considered a valuable reflection of how they are feeling.
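
To illustrate why free text is less easy to process at scale, a crude sketch of ‘coding’ answers against themes follows. The technique shown (naive keyword matching) is a simplification chosen for illustration only, not a recommended analysis method, and every answer, theme and keyword in it is invented.

```python
# A sketch of why free-text answers are harder to compare at scale:
# unlike Y/N answers, they must first be "coded" against themes. The
# answers, themes and keywords below are all invented.

answers = [
    "Yeah, good, ta. There was a great atmosphere.",
    "Lovely to have something on our doorstep for once.",
    "Bit too loud for the baby, honestly.",
]

themes = {
    "atmosphere": ["atmosphere", "vibe"],
    "locality": ["doorstep", "close by", "local"],
    "negative": ["too loud", "annoying"],
}

# Naive keyword matching: count how many answers touch each theme.
for theme, keywords in themes.items():
    hits = sum(any(k in answer.lower() for k in keywords) for answer in answers)
    print(f"{theme}: mentioned in {hits} of {len(answers)} answers")

# Even this toy version shows the problem: keyword matching misses
# sarcasm, synonyms and context that a human reader would catch.
```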

However, not all textual or verbal responses are succinct. In fact, when you ask people how they feel in themselves or about something to do with their well-being, their responses can contain much rich detail.((See Oman 2015, 2017, 2019.)) So they might say something like:

I feel great! It was great to have the opportunity to go to a gig close by. Because I only earn £6000 a year, I don’t get to go to concerts any more. I think that because I never get to go, that made this all the more special. Yeah, good. There was a great atmosphere.

These qualitative data contain objective data (the salary disclosure), an example of preference (in that they chose to spend their limited income on the subsidised concert), and an indication of what they think this means for their well-being and concert attendance.

However, there are confounders, too: their limited income means limited concert attendance, which they believe heightened their enjoyment of this concert. How does this compare to other people who attended? How could it compare? How might a valid argument be made for the impact of this concert (as a cultural product, or an arts event), rather than capturing ‘the social value’ (which we covered in the previous chapter) of going to an event in the local park? How could any claims made be generalisable? That is, how could what is learnt from 100 people in one context be used to understand different people who attend different kinds of concerts, with different life circumstances, in different places, at different times? The fact that this person lives locally to the concert is also probably a factor in their decision to go. How might we isolate the relationship between concert attendance and happiness from these confounders? Here we mean: how much of an effect did proximity to the concert have on attendance, versus wanting to go to the concert for another reason? How do you know they weren’t caught in a very brief moment of elation that meant they said they felt great, but which didn’t last? How do you know the people spoken to could possibly represent diverse opinions? Perhaps they were all picked because they were wearing band T-shirts for those on the line-up. It may be that people who are more likely to stop and answer questions also have more time to go to concerts. How do you know if you need to know these things, or indeed, which of them you need to know?
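
One crude first step towards probing a confounder is to stratify the data: split the scores by the suspected confounding characteristic and compare group averages. The sketch below does this for ‘lives locally’, with invented numbers; it is a descriptive check only and cannot establish cause.

```python
# Stratifying invented scores by one suspected confounder: residency.
# This is a descriptive check only; it cannot establish causation.

responses = [
    # (local resident?, feeling 0-10)
    ("Y", 9), ("Y", 8), ("N", 7), ("Y", 10), ("N", 6), ("N", 8),
]

local_scores = [score for local, score in responses if local == "Y"]
visitor_scores = [score for local, score in responses if local == "N"]

print(f"Mean score, locals:   {sum(local_scores) / len(local_scores):.1f}")
print(f"Mean score, visitors: {sum(visitor_scores) / len(visitor_scores):.1f}")

# A gap between the groups suggests proximity may be doing some of the
# work, but with a small, self-selected sample it proves nothing.
```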

Perhaps more importantly, how sure can we be that what people say is an accurate representation of their feelings and opinions more generally? There is evidence that people who are approached will say nice things because, despite popular belief, people are generally nice and do not want to cause offence. This can mean giving the answer they think the interviewer wants, which is called the ‘interviewer effect’.((Matarasso’s (1997) ‘now discredited’ Use or Ornament? report (Belfiore 2002; Merli 2002; Selwood 2002) was highly influential for its ‘impressive sounding numbers’ (Belfiore 2009, 348). It was described by the then Secretary of State as ‘compelling’, despite the ‘paltry evidence’ (Belfiore 2009, 348). Among the key methodological flaws highlighted by Belfiore are those relating to asking participants whether they were happier or healthier as a result of participation (2002, 99). The interviewer effect is an ongoing issue with qualitative research in the cultural sector, in which questions such as Matarasso’s ‘has the project changed your ideas about anything?’ or prompts such as ‘since being involved I have been happier’ lead the interviewee to respond positively—to appease the interviewer in some way. Questions about the degree to which you can trust responses like these are a problem for evidence in a number of fields, particularly the cultural sector, and we will return to them.)) In our case this would mean that people are inclined to say that something has improved their mood or happiness or well-being because they think that is what the person posing the question wants to hear. Asking the question ‘did you see the concert?’ followed by ‘how do you feel right now?’ will suggest to the person asked that the researcher wants to understand whether the concert has positively impacted how they are feeling; this is a leading question.

There are other aspects of situations like this which will affect people’s answers: can they be overheard, for example? Do they want to look like they like the music played, or do they want to suggest they have ‘better’ taste? Sometimes people answer for the benefit of others, rather than truthfully.

It is not only how truthful someone is in the moment, but also a question of how long that moment lasts. If you ask someone directly after the concert how they feel, are you able to argue for a longer-term effect on well-being? We don’t know how long such an effect will last. Can feeling great for five minutes be argued as a positive impact on well-being? These are contextual issues with data: often the context in which data have been collected compromises the claims which can be made through analysing them. These are issues of validity (see Box 3.3). Yet, when you read a local council’s report about an event like a park concert, it will rarely acknowledge the limits to what can be known.

Similarly, how do you account for negative effects on well-being and social impacts that are less positive? What of the park being shut for the concert’s set-up and take-down? What of the noise pollution affecting older people, pets or sleeping babies? All of these confound the claim that what might seem a simple initiative, such as a local council subsidising a concert in a local park, can have a social impact that is simple to express. Negative impacts are not often accommodated in research which asserts social impact, yet it is clearly important to account for them in any claims made for positive effects.

It is not often acknowledged that good questionnaires that collect ‘good data’ are not easy to design or execute well. Questionnaire data may therefore be useful for many purposes and relatively easy to access, but questionnaires need testing. One way of feeling more secure in the quality of questions, even on a small scale such as our concert scenario, is to use the same questions and techniques as questionnaires used in large surveys. Of course, the claims cannot be generalisable, as you are less likely to speak to a full range of people, but you can then compare your data with a representative sample.((A representative sample is quite simply a sample that is representative of the population, in that it holds similar characteristics. It is useful when thinking about how different kinds of people will respond to questions, depending on their age, health, ethnicity, gender, and so on. If the characteristics of the sample are similar to those of the population studied, then findings are more generalisable.)) Researchers should, therefore, think very carefully about the context in which they want to use the questionnaire, who and what they want to know about, and the limits of what can be known from the specific questions asked of the people they are able to speak to. They also need to think about their own impact: will they ruin the experience of the concert? Will they offend people in some way, or indeed, will the simple act of asking whether someone enjoyed something affect their desire to say yes or no, and to communicate how much they enjoyed it? How much can be known from such a short-lived interaction with a hundred people? What use are these ‘snapshot’ data in answering bigger questions?
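
As a sketch of what such a comparison might look like in practice, the characteristics of a small convenience sample are set below against benchmark figures of the kind a large representative survey might publish. Every percentage is invented for illustration.

```python
# Comparing a small sample's characteristics against a (hypothetical)
# representative benchmark. Every figure below is invented.

sample = {"female": 0.62, "under_30": 0.55, "local": 0.80}
benchmark = {"female": 0.51, "under_30": 0.35, "local": 0.40}

print(f"{'characteristic':<16}{'sample':>8}{'benchmark':>11}{'gap':>7}")
for key in sample:
    gap = sample[key] - benchmark[key]
    print(f"{key:<16}{sample[key]:>8.0%}{benchmark[key]:>11.0%}{gap:>+7.0%}")

# Large gaps flag over- or under-represented groups, and so warn
# against generalising from the people who happened to stop and talk.
```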

Box 3.3 Validity

Researchers need to think in terms of validity to understand the limits of what can be known by what they are asking. There are two main types of validity.

Internal validity is concerned with how capable a research tool (say, a survey question) is of enabling a researcher to answer their research question. For example, if you ask someone ‘how are you feeling right now?’ without asking them to connect the feeling to the concert, you are unable to know that the feeling is linked to attending the concert. This will limit the claims you can make with validity.

External validity is concerned with how generalisable the results of a piece of research are outside of the study; by which we mean ‘can the findings of this study (speaking to 100 people outside X park) explain how people that we didn’t speak to feel about concerts?’

Limits to validity are not always bad (it depends on the context), but they should be accounted for.
