Theorising Twitter Block Bots

The final speaker in this AoIR 2015 session is Stuart Geiger, whose interest is in the collective block lists on Twitter that are developed by anti-harassment communities. These lists bypass or sit alongside Twitter's own, 'official' anti-harassment (and anti-spam, etc.) efforts.

Stuart's own work began with Wikipedia, which has a set of complex internal governance mechanisms and is increasingly using automated bots to counteract vandalism and other disruptions. Such bots are a form of bespoke code that is separate from but interacts with the sovereign, governmental code that underlies the Wikipedia platform itself.

Much of this became especially visible in the context of the #gamergate harassment campaign, which has played out on Twitter for some time now. Victims of harassment could of course try to block all of the accounts that harass them, but this is a Sisyphean task due to the sheer number and mutability of the harassment accounts; community-curated blocklists therefore emerged that pool knowledge about current harassment accounts. Bots such as @theblockbot and @blocktogether enable the sharing of these blocklists and support the automatic blocking of harassment accounts based on a variety of parameters.
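Purely as an illustration of the underlying mechanics – and not a description of how @theblockbot or @blocktogether are actually implemented – a subscription to a shared blocklist might be applied through the Twitter API along the following lines. This is a minimal sketch assuming tweepy ~3.x, placeholder credentials, and a hypothetical plain-text file of user IDs:

```python
# Minimal sketch: apply a shared blocklist to the subscriber's own account.
# Assumes tweepy ~3.x; "shared_blocklist.txt" and the credential strings are
# placeholders, not part of any real block bot.
import tweepy


def load_blocklist(path):
    """Read a shared blocklist: one numeric Twitter user ID per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def apply_blocklist(api, user_ids):
    """Block every listed account that the subscriber has not blocked yet."""
    already_blocked = {str(uid) for uid in tweepy.Cursor(api.blocks_ids).items()}
    for uid in user_ids - already_blocked:
        try:
            api.create_block(user_id=uid)
        except tweepy.TweepError as err:
            print("could not block {}: {}".format(uid, err))


if __name__ == "__main__":
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)
    apply_blocklist(api, load_blocklist("shared_blocklist.txt"))
```

The key delegation is visible even in this toy version: the judgement about who should be blocked lives in the shared list, while the subscriber's account simply enacts it.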

Blocklists can be generated and shared by a single person; curated by a tightly or loosely organised group; developed through algorithmic processes; or compiled through crowdsourcing – and combinations of these approaches are also possible.
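To make the combination point concrete, here is a hypothetical sketch (not something presented in the talk) of pooling blocklists from several curation sources while retaining provenance, so that a subscriber could, for instance, act only on accounts flagged by more than one source; all source names and IDs below are invented:

```python
# Hypothetical illustration: merge blocklists from different curation models
# (individual, group-curated, algorithmic) while recording which source
# flagged each account.
from collections import defaultdict


def merge_blocklists(sources):
    """sources maps a source name to an iterable of user IDs.

    Returns a mapping of user ID -> set of sources that listed it."""
    merged = defaultdict(set)
    for name, user_ids in sources.items():
        for uid in user_ids:
            merged[uid].add(name)
    return merged


if __name__ == "__main__":
    pooled = merge_blocklists({
        "individual": ["1001", "1002"],
        "curator_team": ["1002", "1003"],
        "algorithmic": ["1003", "1004"],
    })
    # Example policy: only block accounts flagged by at least two sources.
    consensus = [uid for uid, srcs in pooled.items() if len(srcs) >= 2]
    print(consensus)
```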

Block bots involve collectively disapproving of selected sets of accounts, and by creating them, counterpublic groups reconfigure both the code and their own practices on a platform like Twitter. They are a form of collective sensemaking and articulation.

This is in some way a delegation from humans to non-humans, but also a delegation from inside Twitter to outside Twitter – and this exposes the communities and individuals creating such blocking tools to logistical and personal costs, to financial and legal implications, and ultimately also to further harassment.