Chapter 9 Understanding
Data uses as barriers to understanding
Beyond the arguments I have just made about how a lack of understanding can lead to data practices that are bad for well-being, I also argue that such practices lead to bad data. If people cannot answer the questions in a survey for practical, personal or political reasons, or because they feel uncomfortable that they do not know enough about why the data are important and what is happening with them (as is the case with the proxy questions), the possibilities for good data are jeopardised: you end up with missing or incorrect data instead.
What we have also encountered in this book is how data uses can lead to a lack of understanding more broadly. As in the case of Google Flu Trends, covered in Chap. 5, if you do not consider the variety of contexts in which people type in the symptoms of a pandemic illness, you will not appreciate the limits of your method. This is a barrier to understanding. Similarly, if those modelling the data on COVID-19 'in the community' are not aware that it is more difficult to collect tests from high-rise flats in poorer communities, whose data are missing? How might that hinder understanding of inequalities and the pandemic, if the data are to be analysed to answer those questions? Context is important to understanding. If you do not think about who is missing from your missing data, how can you know how important the missing pieces are? How can you know how limited your understanding is?
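To make this point concrete, here is a minimal, hypothetical sketch in Python. All of the figures, group names and rates are invented for illustration; they are not drawn from any real testing programme. The sketch shows how non-random missingness can quietly skew what the data appear to say: if tests are returned less often from one community, a naive headline figure underrepresents that community.

    import random

    random.seed(42)

    # Invented illustration: two communities with different true infection
    # rates and different chances that a test kit is actually returned.
    # Each entry is (group, infection rate, return rate).
    population = (
        [("high_rise", 0.10, 0.40)] * 5000
        + [("suburban", 0.05, 0.80)] * 5000
    )

    returned_positive = returned_total = 0
    for group, p_infected, p_returned in population:
        infected = random.random() < p_infected
        if random.random() < p_returned:  # a test is only seen if returned
            returned_total += 1
            returned_positive += infected

    true_rate = (0.10 * 5000 + 0.05 * 5000) / 10000
    observed_rate = returned_positive / returned_total
    print(f"true prevalence:     {true_rate:.3f}")
    print(f"observed prevalence: {observed_rate:.3f}")  # biased downwards

In this toy example, the community whose tests are hardest to collect is precisely the one with the higher infection rate, so the observed figure understates the problem. The data are not missing at random, which is exactly why the questions above matter: knowing who is missing is part of knowing how limited your understanding is.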
Search engines offer us access to vastly more information, daily, as we go about our business. We can playfully search to prove a family member wrong at Christmas ('no, that's not the same so-and-so that was in that thing; you're thinking of this one…') or cheat at the local pub quiz. However, the lists of information they present us with are rarely a simple, single answer to a closed question. Searches, of course, enable you to put in a proxy term and see what comes up. But there are often millions of results.
Search engines have been designed to learn what we might be looking for, based on the information they hold about our previous searches (and everyone else's). This means that a search engine tries to anticipate what we might want to know. Yet, as we discovered in Chap. 1, the search engine does not simply gather data on us and show us results back in some neutral process. Instead, it makes decisions about what it will recommend we look at as a result of our search terms. As Noble explained, if you typed in the phrase 'black girls' as recently as 2011, you were shown indecent images. This is not a question-and-answer process, but rather one of selection and assumption.
Search engines try to understand what we might want to find by making associations that may be very different from our own ways of understanding things, or indeed from what we imagine we might find. Returning to an important point from Chap. 1, it is possible that being shown an association subconsciously changes an aspect of our understanding of what people do, or what they look like. Data and data practices can change culture. This is potentially dehumanising and can lead to the opposite of greater understanding, or indeed of the good society. We must design data practices, along with the ways in which we engage with data, more responsibly, to ensure that well-being is improved through this engagement.