IAMCR 2025 / AoIR 2025 / AANZCA 2025
‘Just Asking Questions’: Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots
Axel Bruns, Katherine M. FitzGerald, Michelle Riedlinger, Stephen Harrington, Timothy Graham, and Daniel Angus
- 16 July 2025 – Paper presented at the IAMCR 2025 conference, Singapore
- 17 Oct. 2025 – Paper presented at the 2025 Association of Internet Researchers conference, Niterói, Rio de Janeiro
- 28 Nov. 2025 – Paper presented at the AANZCA 2025 conference, Sunshine Coast
Presentation Slides
Abstract
Introduction
Interactive chat systems that build on generative artificial intelligence frameworks – such as ChatGPT or Microsoft Copilot – are increasingly embedded into search engines, Web browsers, and operating systems, or available as stand-alone sites and apps. Here, consumers are likely to use them for a wide range of purposes, including for informational and explanatory purposes. In a communicative environment where information disorder (Wardle & Derakhshan, 2017) is a significant and persistent problem, this is highly likely also to include chat interactions which seek information about conspiracy theories and other verifiably false claims. While some such interactions may simply seek legitimate background information on these conspiracist claims, others are likely to actively use these interactive tools to gather material that would further support and inform conspiratorial ideation.
Conducting a systematic review of six AI-powered chat systems (ChatGPT 3.5; ChatGPT 4 Omni; Microsoft Copilot in Bing; Google Search AI; Perplexity; and Grok in Twitter/X), this study examines how these leading products respond to such problematic questions. This follows the “platform policy implementation audit” approach established by Glazunova et al. (2023): we select six well-known and comprehensively debunked conspiracy theories, confront each of the AI chat systems with scripted questions and follow-ups that ask them to provide information that appears to confirm conspiracist views, and evaluate the responses we receive. In assessing these responses, we examine in particular whether the chat system refuses outright to engage with conspiracist ideas; seeks to educate the user by pointing them to fact-checks or other quality information sources; provides false balance between factual and conspiracist perspectives (i.e. “bothsiderism”: cf. Aikin & Casey, 2022); acquiesces to the user’s wishes by providing information from fringe sources that support the conspiracy theory; or even hallucinates additional material to further bolster conspiratorial ideation.
Conspiracy theories selected for this study include the baseless claims that a secretive group of government actors are using chemtrails to spread harmful substances in the atmosphere; that the 9/11 attacks were an inside job; and that there is a global conspiracy to enact a Great Replacement of white populations. We also examine whether any superficial safeguards against conspiracist queries that may exist in such systems can be easily circumvented with common ‘jailbreaking’ techniques (e.g. phrasing questions as hypotheticals).
In undertaking this platform audit of AI-driven interactive chat systems, we help to address a number of critically important challenges: first, we provide crucial empirical detail on whether and how the leading providers of such systems have sought to fortify their platforms against both intentional misuse by outright conspiracy theorists and accidental enrolment in the amplification of problematic information. Second, we offer a methodological blueprint for further studies that extend and complement our analysis by repeating it again at a later point in time, for a different selection of chat systems, with a broader set of conspiracy theories, in languages other than English, or in various other contexts. And third, we contribute to a growing volume of conceptual work that seeks to improve the transparency and accountability of generative artificial intelligence systems (e.g. Kuai, 2024; McGregor et al., 2024; Simon et al., 2024). More, and in particular, more extensive and regularly repeated work along these lines is clearly required, but this initial study of conspiratorial ideation in the content produced by generative AI chatbots in response to conspiracy-curious questioning serves as a valuable stepping-stone towards such more comprehensive analysis.
References
Aikin, Scott F., and John P. Casey. 2022. ‘Bothsiderism’. Argumentation 36(2): 249–68. doi: 10.1007/s10503-021-09563-1.
Glazunova, Sofya, Anna Ryzhova, Axel Bruns, Silvia Ximena Montaña-Niño, Arista Beseler, and Ehsan Dehghan. 2023. ‘A Platform Policy Implementation Audit of Actions against Russia’s State-Controlled Media’. Internet Policy Review 12(2). doi: 10.14763/2023.2.1711.
Kuai, Joanne, Cornelia Brantner, Michael Karlsson, Elizabeth van Couvering, and Salvatore Romano. 2024. ‘The Dark Side of LLM-Powered Chatbots: Misinformation, Biases, Content Moderation Challenges in Political Information Retrieval’. Paper presented at the IAMCR 2024 conference, Christchurch, 3 July 2024.
McGregor, Shannon, Heesoo Jang, and Daniel Kreiss 2024. ‘Complicating Our Methodological Practices: Evaluating Potential Biases in LLMs for Election Information and Civic Engagement’. Paper presented at the P³: Power, Propaganda, Polarisation ICA 2024 postconference, Brisbane, 27 June 2024.
Simon, Felix, Richard Fletcher, Rasmus Kleis Nielsen. 2024, 2 July. ‘How AI Chatbots Responded to Questions about the 2024 UK Election’. Oxford: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/how-ai-chatbots-responded-questions-about-2024-uk-election
Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. DGI(2017)09. Strasbourg: Council of Europe.











