Snurblog — Axel Bruns
‘Just Asking Questions’: Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots (IAMCR 2025)

Snurb — Sunday 13 July 2025 15:03
Politics | Polarisation | ‘Fake News’ | Internet Technologies | Artificial Intelligence | ARC Centre of Excellence for Automated Decision-Making and Society | Dynamics of Partisanship and Polarisation in Online Public Debate (ARC Laureate Fellowship) | Evaluating the Challenge of ‘Fake News’ and Other Malinformation (ARC Discovery) | IAMCR 2025

‘Just Asking Questions’: Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots

Axel Bruns, Katherine M. FitzGerald, Michelle Riedlinger, Stephen Harrington, Timothy Graham, and Daniel Angus

  • 16 July 2025 – Paper presented at the IAMCR 2025 conference, Singapore

Presentation Slides

‘Just Asking Questions’: Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots from Axel Bruns

Abstract

Interactive chat systems that build on generative artificial intelligence frameworks – such as ChatGPT or Microsoft Copilot – are increasingly embedded into search engines, Web browsers, and operating systems, or available as stand-alone sites and apps. Consumers are likely to use them for a wide range of purposes, including to seek information and explanations. In a communicative environment where information disorder (Wardle & Derakhshan, 2017) is a significant and persistent problem, such use is highly likely to also include chat interactions that seek information about conspiracy theories and other verifiably false claims. While some of these interactions may simply seek legitimate background information on conspiracist claims, others are likely to actively use these interactive tools to gather material that would further support and inform conspiratorial ideation.

Conducting a systematic review of six AI-powered chat systems (ChatGPT 3.5; ChatGPT 4 Mini; Microsoft Copilot in Bing; Google Search AI; Perplexity; and Grok in Twitter/X), this study examines how these leading products respond to questions related to conspiracy theories. It follows the “platform policy implementation audit” approach established by Glazunova et al. (2023). We select five well-known and comprehensively debunked conspiracy theories and four emerging conspiracy theories that relate to breaking news events at the time of data collection. We then confront each of the AI chat systems with scripted questions from a “casually curious” user persona that requests information about the chosen conspiracy theories. In assessing the responses, we qualitatively code the output to determine whether the chat system: refuses outright to engage with conspiracist ideas; seeks to educate the user by pointing them to fact-checks or other quality information sources; provides false balance between factual and conspiracist perspectives (i.e. “bothsiderism”: cf. Aikin & Casey, 2022); acquiesces to the user’s wishes by providing information from fringe sources that support the conspiracy theory; displays disapproval or empathy in its responses; or even hallucinates additional material to further bolster conspiratorial ideation.
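The coding scheme described above could be operationalised roughly as follows; this is only an illustrative sketch, and the category shorthands and the `tally_codes` helper are hypothetical, not the authors’ actual coding instrument.

```python
from collections import Counter

# Response categories paraphrased from the qualitative coding scheme above
# (the shorthand labels are illustrative assumptions, not the real codebook).
CATEGORIES = [
    "refusal",        # refuses outright to engage with conspiracist ideas
    "education",      # points the user to fact-checks or quality sources
    "false_balance",  # balances factual and conspiracist perspectives
    "acquiescence",   # supplies fringe material supporting the theory
    "affect",         # displays disapproval or empathy
    "hallucination",  # invents additional material bolstering the theory
]

def tally_codes(coded_responses):
    """Count how often each category was assigned across audited responses.

    `coded_responses` is a list of (chatbot, theory, category) tuples
    produced by manual coding; unknown categories raise an error so that
    coding mistakes surface early.
    """
    counts = Counter()
    for chatbot, theory, category in coded_responses:
        if category not in CATEGORIES:
            raise ValueError(f"unknown code: {category!r}")
        counts[category] += 1
    return counts

# Hypothetical example: three manually coded responses.
sample = [
    ("ChatGPT 3.5", "chemtrails", "education"),
    ("Grok", "chemtrails", "false_balance"),
    ("Perplexity", "Great Replacement", "refusal"),
]
print(tally_codes(sample))
```

A structure like this makes it straightforward to compare category frequencies across chatbots or across established versus emerging conspiracy theories once the manual coding is complete.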

Conspiracy theories selected for this study include the following baseless claims: that a secretive group of government actors is using chemtrails to spread harmful substances in the atmosphere; that the assassination of President John F. Kennedy was orchestrated by someone other than Lee Harvey Oswald; that the 9/11 attacks were an inside job; that Barack Obama was born in Kenya and ineligible to be President; and that there is a global conspiracy to enact a ‘Great Replacement’ of white populations. These conspiracy theories have long been debated and debunked. In addition, we considered conspiratorial thinking that was still developing as the data were being collected: the false claim that Hurricane Milton – an extremely destructive hurricane which made landfall in the United States in October 2024 – was created and controlled by Democrats; the false idea that Haitian immigrants in the United States were eating household pets; the baseless allegation that Donald Trump staged his own assassination attempt in July 2024; and the idea that Trump rigged the 2024 election. Adding these emerging theories allows us to determine how chatbots manage conspiratorial thinking when they have limited data to draw on and emerging commentary on the events is causing confusion in public debate.

In undertaking this platform audit of AI-driven interactive chat systems, we help to address a number of critically important challenges: first, we provide crucial empirical detail on whether and how the leading providers of such systems have sought to fortify their platforms against both intentional misuse by outright conspiracy theorists and accidental enrolment in the amplification of problematic information. Second, we offer a methodological blueprint for further studies that extend and complement our analysis by repeating it at a later point in time, for a different selection of chat systems, with a broader set of conspiracy theories, in languages other than English, or in various other contexts. And third, we contribute to a growing volume of conceptual work that seeks to improve the transparency and accountability of generative artificial intelligence systems (e.g. Kuai, 2024; McGregor et al., 2024; Simon et al., 2024). More work along these lines is clearly required, in particular more extensive and regularly repeated studies; but this initial examination of conspiratorial ideation in the content produced by generative AI chatbots in response to conspiracy-curious questioning serves as a valuable stepping-stone towards such more comprehensive analysis.

References

Aikin, Scott F., and John P. Casey. 2022. ‘Bothsiderism’. Argumentation 36(2): 249–68. doi:10.1007/s10503-021-09563-1. 

Glazunova, Sofya, Anna Ryzhova, Axel Bruns, Silvia Ximena Montaña-Niño, Arista Beseler, and Ehsan Dehghan. 2023. ‘A Platform Policy Implementation Audit of Actions against Russia’s State-Controlled Media’. Internet Policy Review 12(2). doi: 10.14763/2023.2.1711.

Kuai, Joanne, Cornelia Brantner, Michael Karlsson, Elizabeth van Couvering, and Salvatore Romano. 2024. ‘The Dark Side of LLM-Powered Chatbots: Misinformation, Biases, Content Moderation Challenges in Political Information Retrieval’. Paper presented at the IAMCR 2024 conference, Christchurch, 3 July 2024. 

McGregor, Shannon, Heesoo Jang, and Daniel Kreiss. 2024. ‘Complicating Our Methodological Practices: Evaluating Potential Biases in LLMs for Election Information and Civic Engagement’. Paper presented at the P³: Power, Propaganda, Polarisation ICA 2024 postconference, Brisbane, 27 June 2024.

Simon, Felix, Richard Fletcher, and Rasmus Kleis Nielsen. 2024, 2 July. ‘How AI Chatbots Responded to Questions about the 2024 UK Election’. Oxford: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/how-ai-chatbots-responded-questions-about-2024-uk-election

Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. DGI(2017)09. Strasbourg: Council of Europe.


Except where otherwise noted, this work is licensed under a Creative Commons BY-NC-SA 4.0 Licence.