Snurblog — Axel Bruns

Fighting the Colonial Extractivism of Artificial Intelligence

Snurb — Friday 24 October 2025 18:21
Politics | Internet Technologies | 'Big Data' | Artificial Intelligence | ZeMKI 2025 | Liveblog |

The second day at the ZeMKI 20th anniversary conference in Bremen starts with a keynote by Nick Couldry, focussing on the corporatisation of media and everything. He notes a number of key changes over the past twenty years: datafication – the transformation of everyday life into data, and its exploitation by business and government, thereby producing the social for capital; social media – shifting the exploitation of social data to produce attention and shape consumer and citizen action; and artificial intelligence – the corporate capture of the human mind itself, which automates cognitive production and transforms what we value.

This predatory AI is not just a technological development, but a social transformation, which needs to be interpreted sociologically. We are changing our societal definitions of expertise and knowledge, which changes the internal authority structures of institutions, their task definitions, and their internal and external legitimacy (especially for knowledge institutions like universities) – and all this is happening with hardly any debate (which is instead focussed on faraway and possibly unattainable visions of general artificial intelligence).

The question here is what happens if – through artificial intelligence – intelligence becomes a mundane and ubiquitous property of things in general, and no longer unique to humans; AI then is a dehumanising technology which treats us as less than what we are. This transformation of social life, social space, and cognitive production expands economic extraction and deepens social reconfiguration: it has colonial attributes, and is willingly supported by the everyday human users of these technologies; it is not a revolution from above. This produces a social order of capture.

AI models do not just take in surveillance assets, but capture all human communication and creativity. This is a form of colonialism, and in using generative AI we take part in data territories, from which data can be continuously extracted; such extraction is a knowledge enterprise that imposes one version of rationality on others, following very colonial and therefore unequal thought processes. Through this, specific elites try to govern the world’s knowledge, but everyday users also engage in its social construction as quasi-human.

Our interactions with AI extend this capture into the cognitive domain: how we ask questions, generate ideas, make content, and redefine our goals iteratively through such interactions also serves as training data. AI use is also being locked in as a core skill in supposedly AI-accelerated work processes, and substandard AI outputs must constantly be corrected by human users.

And AI fundamentally depends on copyrighted content: generative AI systems cannot be trained without relying on a comprehensive dataset of global knowledge, so it cannot exist without content capture.

This is increasingly also leading to a geopolitics of capture: China’s vision is of a social domain governed through data extraction and processing; in the US there are alliances between Big Tech and the Trump administration, as well as between Big Tech and the military; while Global North corporations are fundamentally relying on Global South labour in building and maintaining AI systems. Meanwhile, alternative sovereigns like the EU, India, or Brazil are attempting to regulate AI, but this is constrained by the increasingly belligerent US tariffs regime. We cannot rely solely on such sovereign enforcement; there is a need for much wider consumer, civic, and economic mobilisation to challenge the power of Big Tech.

The growing move towards personalised AI, trained on the entirety of personal life data, further extends this comprehensive corporate capture. Such personalised AI systems would ingest their users' entire personal data, sharing it with AI providers.

Future research into these developments should pay close attention to the corporate discourse around such technologies. We must track the pervasive power of that discourse: AI discourse encourages conviviality with machines while undermining conviviality with humans. This is a new form of symbolic violence that takes over existing forms of communication (for instance through the inescapable insertion of AI features into everyday software) and thus represents a forced change. The struggle over this takeover by force is currently unfolding.

This digital transformation is changing the conditions and forms of everyday life, not necessarily with user consent, and without an effective opportunity to resist. So what is the social contract emerging today through our uses of AI: is the corporately dominated contract for its use appropriate? Do we even know how to draw the boundary between appropriate and inappropriate forms of such use – and who has the power to do so?

Such questions highlight the emerging conflict between business and community or human values. This might be seen today especially in the workplace, in creative work, and in education; it is about everyday and mundane uses of AI, not about promised general artificial intelligence systems. One concern here is for instance the potential of human deskilling as a result of increasing AI use – could this lead to a general deconscientisation of society?

The temptation to adapt to AI is powerful: LLMs are genuinely useful in some contexts, and arguing otherwise can be seen as a desperate attempt to hold on to conservative understandings of ‘human’ attributes – this has been described as a backwards-looking kind of ‘remainder humanism’. Similarly, AI can be seen simply as a new stage of cultural expansion which efficiently represents our culture back to us: a new kind of cultural and social technology. But the way such AI works is very different from previous cultural technologies.

There are now three prominent visions of AI, predominantly representing US perspectives: AI as the greatest engine of progress in recent history; AI as serving humanity rather than corporations; and AI as an extractivist corporate technology. Can we bridge the differences between these visions? How should Europe, in particular, position itself in relation to these views?

One response to this needs to come from teachers, whose struggle mirrors that of other cognitive workers; another must come from researchers, who must define which uses of AI are useful and which are not; a third must be made by citizens, defending a vision of the university as a place of dialogue, listening, and the human management of cognitive resources – defending conscientisation. A final response must involve societies overall: building the conversations from which better understandings of AI must emerge.

In combination, this might push back against AI’s extractivism, and challenge its deepening mediatisation of the construction of reality; this mediated construction is a site of struggle and resistance whose shape is only now beginning to emerge. We must exert the full strength of our imagination to examine where the full use of our new modalities may lead us, as Norbert Wiener said as early as the 1960s.

Except where otherwise noted, this work is licensed under a Creative Commons BY-NC-SA 4.0 Licence.