'Big Data' and Government Decision-Making

The next speaker at "Compromised Data" is Joanna Redden, whose interest is in government uses of 'big data', especially in Canada. There's a great deal of hype surrounding 'big data' in government at the moment, which needs to be explored from a critical perspective; the data rush has been compared to the gold rush, with similarly utopian claims - here especially around the ability for 'big data' to support decision-making and democratic engagement, and the contribution 'big data'-enabled industries can make to the GDP.

But how are 'big data' actually being used in government contexts? New tools and techniques for the analysis of 'big data' are of course being used in government, but how these affect policy decisions remains unclear. Social media analysis is similarly being used to inform public policy and service delivery; sentiment analysis feeds into some decisions around law enforcement and service provision, but adoption to date has been slow.

For public servants in this field, it has proven difficult to influence political leaders who have so far been successful by relying on their gut feelings rather than hard evidence in their decision-making. Even where 'big data' are being adopted as a support for decisions, however, the quality of the data as well as of the analysis must be questioned; the provenance of the data, the models used for 'big data' analytics, and the conceptual frameworks for understanding the findings all remain in their very early stages.

In this new world, data scientists wield considerable power, and their training to exercise this power is limited - they may have the computing and statistics skills, but not necessarily the analytical, social sciences, or diplomatic skills required for their insights to be effectively incorporated into decision-making. Even where 'big data' are being used, the selection of evidence may remain biased and incomplete.

There is also a danger of civil servants dividing the populace into a number of data-circumscribed sub-populations, and of treating these populations as clients or consumers rather than citizens; further, information from sources other than 'big data' may be sidelined in decision-making processes unless it is also reflected in the quantitative data sources themselves.

More generally, there is a 'climate of fear' in the public service in Canada at the moment, and scientists are unable to speak freely for fear of retribution from their political masters and funders. This is wrapped up with neoliberal ideologies in the current Canadian government - government is being reshaped to serve business and industry interests, and long-term measures of societal change (like the census) are being abolished. This makes the weighting of more specific 'big data' sources (e.g. social media data) against underlying demographic patterns all the more difficult.

With the turn to 'big data' there is a concern over a computational turn in public thinking - what is measured and what is ignored here will have very substantial impacts on future public policy. We must be concerned about the congruence of data-driven decision-making and neoliberal rationality, which extends market values into all aspects of our way of life. In this way, the turn to 'big data' rationalises neoliberal thinking, enabling the quantification of life based on calculated reasoning, a strong emphasis on individualism, a focus on measuring consumption, and the identification of 'useless' subgroups of society.

There is also a potential turn away from causality to mere correlation, where the reasons why certain things are happening are ignored by policies that merely pull the levers emerging from correlated patterns. We need to be concerned about the ways in which 'big data' models from market research are being integrated into policy-making, and need to query the market principles of the 'big data' business itself.