The final speaker in this Social Media & Society 2024 session is my excellent QUT colleague Dom Carlon, whose focus is on the governance of bots by bots, and on inter-bot communication more broadly, on Reddit. Bots are often understood based on how they communicate with humans, and they are often seen as a problem or nuisance, but bots have always also communicated with other bots; this is sometimes by design and sometimes by chance (as bots have unplanned encounters with other bots online). How are bots governing or moderating the behaviour of other bots, then?
Bots can be seen as natural inhabitants of commercial platforms: they are produced by the logics of the platform environment, and may be officially sanctioned as ‘little helpers’ or dismissed as unwanted platform invaders or ‘pirates’. How this unfolds depends on the specific platform and its rules and values, of course; on Reddit, for example, user-created bots have always been allowed, if not necessarily welcomed.
This is in part because of Reddit’s decentralised environment, where moderators negotiate standards around bot-making on a subreddit-by-subreddit level, and bots can be anything from moderation helpers to automated personas in subreddits. This means that it is difficult to generalise about the roles of bots across all of Reddit.
Under what circumstances are bots contested on Reddit, then? This needs to be answered by examining the evolution of Reddit community standards relating to bots: outlawed bots include those that manipulated Reddit’s voting systems; targeted or harassed specific users or groups; ran scams; restored deleted content; or posted in certain sections of Reddit that deal with serious discussions (e.g. on physical or mental health issues) – all of these represent Reddit norm violations.
Rogue bots, by contrast, include those that engage in nudging behaviour that is contrary to Reddit culture; or that push spam and other low-quality content (duplicate posts, indiscriminate posting, posting without being summoned).
Such bots are policed by other bots: the BotDefense bot, for instance, maintains a blacklist of bots and – when invited by moderators into a subreddit – targets some 145,000 outlawed and rogue bots on the platform. This project is now under threat following Reddit’s 2023 adjustments to its API structures, however – at a time when more and more AI-driven bots are flooding the platform.
In response, some users have now taken matters into their own hands again and created their own bots – such as the CoolDownBot, which warns accounts that swear excessively – but this has also led to the creation of further bots that deliberately trigger such nudging bots, resulting in a bot arms race. With Reddit’s controversial shift towards embracing content deals with OpenAI, and in the process changing and commercialising its API options, a further battle is brewing here.