The sessions at this Norwegian Media Researcher Conference are organised around particularly constructive feedback on work-in-progress papers – a great format, but one that doesn’t lend itself particularly well to liveblogging. So I’ll skip forward right to the next keynote by Raul Ferrer-Conill, whose focus is on the datafication of everyday life. This is something of a departure from his previous work on the gamification of the news.
He begins by outlining the datafication of the mundane: the way people’s social actions – as well as non-human actions, in fact – are being transformed into quantifiable data, especially online, so that such data become a resource that can be utilised, operationalised, and exploited. Indeed, the sense in the industry is now that ‘everything starts with data’, which reveals a particular, peculiar kind of mindset. Over the past years, the Internet of Things has moved from an idea to a reality, and this has fuelled the “smart” delusion: the belief that more datapoints mean smarter decision-making processes (they usually don’t).
But whatever their quality, such datapoints then also underpin the formulation of key performance indicators (KPIs), and their use in comparing with the past, understanding the present, and planning for or predicting the future. What is emerging here is also the so-called “DataOps” paradigm, where technologies, social life, and data are to be connected in order to optimise processes and maximise returns on investment; this paradigm is central to the implementation of many “smart cities” strategies, too.
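To make this KPI logic a little more concrete, here is a minimal toy sketch – my own illustration, not anything from the keynote – of the kind of workflow implied by comparing with the past, understanding the present, and predicting the future; all metric names and figures are invented.

```python
# Toy illustration of a KPI-style workflow: compare past and present,
# then extrapolate a naive prediction. All figures are invented.

past_week = [1200, 1340, 1280, 1500, 1420, 1100, 980]   # e.g. daily page views
this_week = [1300, 1450, 1390, 1600, 1550, 1200, 1050]

def kpi(datapoints):
    """Here the 'key performance indicator' is simply a weekly average."""
    return sum(datapoints) / len(datapoints)

past_kpi, current_kpi = kpi(past_week), kpi(this_week)
growth = (current_kpi - past_kpi) / past_kpi

print(f"Past KPI:    {past_kpi:.0f}")
print(f"Current KPI: {current_kpi:.0f}")
print(f"Growth:      {growth:+.1%}")

# 'Predicting the future' often amounts to projecting the same trend forward,
# regardless of whether the underlying data actually justify this.
predicted_kpi = current_kpi * (1 + growth)
print(f"Predicted:   {predicted_kpi:.0f}")
```

The point of the sketch is how little of the messy context survives once activity has been reduced to such numbers – which is exactly where the next complication comes in.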
But this is further complicated by the very variable availability and quality of data, by uncertainties about the meaning of particular datapoints, by different patterns of circulation of data around industry workflows, and by the quality of the algorithms used to process these data. And yet data-driven processes (and workflow diagrams) often unquestioningly place data at the centre, whatever their quality.
This may take the idea of audiences and users as active produsers of content as a starting-point, but ultimately it also utilises the data they generate as a way to channel and control their activities, reducing their agency through algorithmic management. The metricated mindset – the desire to datafy and quantify user activities – is increasingly problematic if such metrics are positioned unchallenged as a means of creating user scores and gamifying the engagement experience.
What does ‘engagement’ mean these days, then? Its meaning has shifted from political to civic participation, and from there to participation in social networks; and this has given rise to the creation of metrics of engagement. These metrics are defined differently on each platform, but ultimately always boil down to some ‘shiny number’ that is positioned as the ultimate goal for the platform and/or its users. And such quantified engagement has both social and economic relevance: social as it measures impact on popular opinion; economic as it measures the potential commercial value. And yet there are various critiques of engagement that are important here: from the technical-behavioural (what can be measured?) to the emotional (what mindset does it represent?), from the spatial-temporal (when and where does it start and end?) to the normative (what kind of engagement do we see as positive?).
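As a purely hypothetical illustration of what such a ‘shiny number’ tends to look like in practice (the weights and behaviours below are my own invention, not any platform’s actual formula), a composite engagement score might be sketched like this:

```python
# Toy sketch of a platform-style composite 'engagement score'. The weights are
# arbitrary inventions, which is precisely the point: each platform defines its
# own 'shiny number', and the choice of weights embeds normative judgments
# about which behaviours count as engagement.

WEIGHTS = {"likes": 1.0, "comments": 4.0, "shares": 6.0, "seconds_viewed": 0.05}

def engagement_score(activity: dict) -> float:
    """Collapse heterogeneous user behaviours into a single number."""
    return sum(WEIGHTS.get(metric, 0.0) * count for metric, count in activity.items())

post = {"likes": 320, "comments": 45, "shares": 18, "seconds_viewed": 52000}
print(f"Engagement score: {engagement_score(post):.0f}")

# Note what disappears in the aggregation: who engaged, why, in what spirit,
# and whether the engagement was civically meaningful or merely outraged.
```

What such an aggregation conceals is exactly what the technical-behavioural, emotional, spatial-temporal, and normative critiques listed above are getting at.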
Such data depend on the development of the material infrastructures across which they can flow. And these are themselves developed in response to certain economic and other objectives. The major technology companies, in particular, have been buying and building more and more infrastructure: Google for instance owns more than 8% of all of the world’s submarine cable infrastructure. This enables such platforms to become the centre of the entire data lifecycle; and it makes them indispensable for their various clients, and thus also too big to fail.
But why does this matter? First, this obviously creates commercial imbalances. In turn, it also places these companies in positions of immense power, of course. But it also creates epistemological problems, as these data now describe our social reality. And finally, it produces substantial ethical questions as well. We are already seeing significant abuses of such data power in corporate and government surveillance and policing, as well as in other cases where human judgment is being replaced by automated decision-making, with deeply problematic results.