The workshop of the Bots Building Bridges project in Bielefeld continues with a final session for today, which starts with Christian Grimme. His focus is on the role of AI in creating as well as fighting artificial communication. Artificial agents – bots – are not new, of course: there have been email bots and Twitter bots, and there are many other forms of social bots, which are now also increasingly integrated with and driven by Large Language Models. There are also prosocial bots that are used to counter more problematic bots.
Automation can mean various things, though. Closed-loop systems use feedback mechanisms to achieve self-regulating control; open-loop systems merely execute pre-designed algorithmic programmes without any self-regulation. In a social media context, social bots initially engaged in simple content amplification; they became more sophisticated over time and began to simulate the behaviour of genuine users more accurately (implementing diurnal activity cycles rather than operating around the clock, for instance). By now, they simulate human behaviour quite effectively, which complicates their detection.
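To make that contrast a little more concrete, a minimal Python sketch of the two scheduling approaches might look like the following: a naive open-loop bot posting at fixed intervals around the clock, versus one that imitates a human diurnal activity cycle. The functions and parameters here are purely illustrative assumptions and do not come from the presentation.

```python
import random

# Hypothetical sketch: two posting schedules for an automated account.
# Function names and parameters are illustrative, not taken from the talk.

def open_loop_schedule(posts_per_day: int) -> list[float]:
    """Open-loop: post at fixed intervals around the clock, with no feedback."""
    interval = 24 / posts_per_day
    return [i * interval for i in range(posts_per_day)]

def diurnal_schedule(posts_per_day: int) -> list[float]:
    """Imitate a human diurnal cycle by weighting posts towards waking hours."""
    weights = [0.2] * 8 + [1.0] * 16   # low activity 0-7h, higher 8-23h
    hours = random.choices(range(24), weights=weights, k=posts_per_day)
    return sorted(h + random.random() for h in hours)

print(open_loop_schedule(6))   # evenly spaced: an obvious automation signal
print(diurnal_schedule(6))     # clustered in daytime hours: harder to flag
```

The evenly spaced schedule is trivially detectable; the diurnal one already removes one of the clearest signals of automation.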
Much of this still predates the use of artificial intelligence, however. By 2020, there were some 40,000 GitHub repositories providing social bots, and AI was not yet particularly prevalent among them; in 2025, there are some 145,000 such repositories, and many now use LLMs. Telegram bots were especially prominent at both points in time, and their development has substantially outpaced that of bots for other platforms. Twitter/X bot development in particular has stagnated, for obvious reasons.
Such bots operationalise a range of human behaviours. They build on the mediation of interpersonal engagement and communication by computers, which implies a necessary abstraction and standardisation; they also draw on the humanisation of computers as communicative counterparts; and they seek to avoid the uncanny valley of a clearly artificial and off-putting imitation of human traits.
New forms of automation therefore attempt to create sufficient uncertainty in the communication partner about the actual nature of an actor behind an account or avatar, within generally feature-poor environments, by controlling cues and content. Where this is done for problematic purposes, how can we fight it? Can we use the very tools of automation itself to do so?
In the past, we might have been able to detect clear signals of automated activity; this is no longer the case where such signals are hidden by the use of more sophisticated agents. More subtle signals of automation may still be present in the data, however, and new approaches to their detection are necessary. For instance, a greater emphasis on reducing false positive rates might be useful: the challenge is not to detect all signals of problematic activity, but to detect at least some of them without also producing many false positives.
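One way to read this emphasis on low false positive rates is as a precision-first decision rule: rather than flagging every account whose bot score crosses a default threshold, the threshold is raised until only a tiny fraction of genuine users would be flagged, even if many bots then go undetected. The Python sketch below illustrates the idea on synthetic scores; the distributions, names, and numbers are illustrative assumptions, not material from the talk.

```python
import numpy as np

# Hypothetical sketch of a precision-first decision rule on synthetic bot scores.
# Distributions, thresholds, and names are illustrative assumptions.

rng = np.random.default_rng(42)
human_scores = rng.beta(2, 5, size=1000)   # genuine users: mostly low scores
bot_scores = rng.beta(5, 2, size=100)      # automated accounts: mostly high scores

def threshold_for_fpr(scores_of_humans: np.ndarray, max_fpr: float) -> float:
    """Smallest score threshold at which at most max_fpr of humans are flagged."""
    return float(np.quantile(scores_of_humans, 1 - max_fpr))

t = threshold_for_fpr(human_scores, max_fpr=0.01)
print(f"threshold: {t:.2f}")
print(f"bots flagged: {(bot_scores >= t).mean():.0%}")        # recall we accept losing
print(f"humans misflagged: {(human_scores >= t).mean():.0%}")  # kept near the 1% target
```

The point is not that this particular rule is what detection systems use, but that accepting lower recall in exchange for very few misidentified humans is a deliberate design choice.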
But this is not the whole story: modern social bots no longer simply generate inauthentic content, but also engage with their sociotechnical settings. This needs to be taken into account in any attempts to detect them. Temporal factors are especially valuable in detection approaches, in fact.
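As a rough illustration of the kind of temporal signals such approaches might exploit, the sketch below computes two simple rhythm features from an account's posting timestamps; the specific features and the example data are illustrative assumptions rather than anything presented in the session.

```python
import numpy as np

# Hypothetical sketch of simple temporal features for an account's posting record.
# Feature names and the example data are illustrative assumptions.

def temporal_features(timestamps: np.ndarray) -> dict[str, float]:
    """Summarise posting rhythm from Unix timestamps given in seconds."""
    ts = np.sort(timestamps)
    gaps = np.diff(ts)
    hours = ((ts // 3600) % 24).astype(int)
    hist = np.bincount(hours, minlength=24) / len(ts)
    nonzero = hist[hist > 0]
    return {
        # Highly regular gaps (low coefficient of variation) hint at scheduling.
        "gap_cv": float(np.std(gaps) / np.mean(gaps)),
        # A flat hour-of-day distribution (high entropy) suggests round-the-clock activity.
        "hour_entropy": float(-np.sum(nonzero * np.log2(nonzero))),
    }

# Example: an account that posts every 30 minutes, day and night, for a week.
print(temporal_features(np.arange(0, 7 * 24 * 3600, 1800)))
```

An account with near-zero gap variance and maximal hour-of-day entropy looks like naive round-the-clock automation; more sophisticated agents will deliberately blur both signals, which is exactly why detection can no longer rely on any single cue.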