Third in this AoIR 2023 session is Reed van Schenck, whose interest is in the decline and reconstitution of the US alt-right after 2017 – from the ‘tiki torch’ marches to the 6 January 2021 coup attempt. A particular focus here is on Telegram, but much of the research so far has examined only the public Telegram channels, and not its private and secret channels where potentially even more problematic activities may be taking place.
Telegram is an encrypted instant messaging platform launched by the Russian Durov brothers and now operated from Dubai, with strong take-up especially in Eastern Europe, Iran, Brazil, and Ethiopia. Content moderation is minimal, and Telegram enables private messaging, small-group chats, and large channels. Telegram encourages trust in its encrypted architecture, though it essentially operates as a black box; in this it presents itself as different from the Durovs’ first creation, VKontakte, which has come under far more direct state surveillance in Russia.
In this respect Telegram also provides an ideological safe harbour protecting users against deplatforming and content moderation; this security culture has not yet been tested by legal action, however, and is undermined by operator errors. Reed’s approach in this study was to identify and exploit the mistakes made by extremist channel operators that make such channels visible to ordinary users.
Telegram’s lax content moderation makes it attractive to the US far right, but the platform is not entirely free of moderation: in response to EU pressure it has banned channels and accounts operated by ISIS terrorists, for example, though it has not yet taken similar action against white supremacists. Those networks remain active, and have had to develop specific approaches to working with the affordances of the platform – positioning themselves as enclave publics and countermovements within a broader unitary public.
Part of this activity is to conduct regular ‘operational security’ (op-sec) checks, which remind followers to manipulate the platform’s interface and security settings in order to minimise visibility and maximise privacy – yet Reed could access such op-sec advisory messages because their posters often failed to configure their own settings as they recommended to others. This points to a certain amateurishness in such groups, and suggests that op-sec checks are conducted more in order to feel part of the group than to make any meaningful difference to individual security.
There is also a strange obsession with profile images in these instructions: individualised profile images (which increase the traceability of users, after all) are actively encouraged because default avatars are said to appear ‘suss’, yet there is also advice not to use overt white supremacist symbols in avatars, as these could be used by law enforcement infiltrators to ingratiate themselves with the community. This reveals a certain paranoia within these communities.