As I write this I'm just about to head off to Germany for a keynote at the Social Media Access Days at the German National Library, followed by a three-month stay as a Mercator Fellow at the fabulous Centre for Media, Communication, and Information Research (ZeMKI) at the University of Bremen, thanks to the Communicative AI Research Unit. That stay will also enable me to visit a number of other colleagues and collaborators along the way, including at GESIS in Köln, the University of Southern Denmark in Odense, the University of Münster, the Hans-Bredow-Institut in Hamburg, LMU München, and the University of Zürich – as well as a side trip to the ICA conference in Cape Town. (And if you're not on the list yet, let me know and I'll see what I can do.)
I'll have more to say about all this as I go, and will liveblog the Social Media Access Days conference next week where I can, but before I get there I have a handful of other updates, too: with several of my DMRC colleagues I've just published a new article in Media and Communication which – in addition to sporting a fabulous title – offers a very timely analysis of how current AI chatbots respond to questioning that indicates an interest in conspiracy theories. The results are concerning: while some chatbots do seem to have at least some guardrails against conspiracist ideation built in, others are far more willing to actively encourage engagement with dangerous conspiracy theories. Unsurprisingly, Elon Musk's chatbot Grok performs especially badly, and Grok's so-called "Fun Mode" in particular is spectacularly unfunny – much like Musk himself.
Our full article is available in open access, and it's important to note that, given the rapid and unconstrained evolution of AI chatbots, there's a pressing need to conduct audits such as ours far more frequently and regularly if we want to hold these platforms to account – but unfortunately that's a task for which scholarly research has neither the time nor the resources. Here is the article:
Katherine M. FitzGerald, Michelle Riedlinger, Axel Bruns, Stephen Harrington, Timothy Graham, and Daniel Angus. "Just Asking Questions": Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots. Media and Communication 14 (2026). DOI: 10.17645/mac.11337.
But I'll end this update on a preview: in 2016 my dear Norwegian and Swedish colleagues and I edited the first edition of the Routledge Companion to Social Media and Politics – and just in time for its tenth anniversary we're back with a new editorial team, comprising Gunn Enli, Anders Olof Larsson, Jessica Yarin Robinson, Tanja Bosch, Kateryna Kasianenko, and me, and an entirely new second edition. The book will launch in mid-2026, but a first preview and table of contents are now available from the Routledge site, so go check it out.
And that's all for now – hope to see some of you in Europe in the coming weeks...