For the afternoon session on this first day of IAMCR 2023 I am in a session on propaganda, which starts with Courtney Radsch. Her focus is on the use of artificial intelligence in state-aligned information operations. She notes the rise of populist authoritarianism, the emergence of coordinated inauthentic behaviour, the emergence of reputation management firms, and a number of other problematics we have seen in recent years; some of this directly targets journalists and journalism with state-aligned propaganda and harassment.
But how do such tactics leverage the machine learning and AI systems of online platforms, and use AI in their own content creation? First, high-profile accounts and state media have a multiplier effect on social media platforms: they are highly visible and may enjoy preferential treatment in algorithmic selection; the journalistic beats most at risk from such actors are investigative journalism and the disinformation beat. AI systems are then involved in technological agenda-setting and algorithmic framing – for instance through their selection of trending topics, their auto-complete search suggestions, etc.
There are seven different tactics in this: intrusion, abuse and threats, exposure, smearing and impersonation, exclusion, obfuscation, and (gendered) disinformation. Intrusion includes seeking information about journalistic sources, tracking movements, and using offensive spyware; this also precedes and enables other negative tactics. One such tactic is abuse and threats: overtly or covertly targeting individuals, often using gendered terms and coded and vernacular language that content moderation systems won’t pick up, and with the help of high-profile accounts and private coordination that generates a publicly visible groundswell and viral information cascades.
Next, exposure is also linked to intrusion and may again be gendered, for instance by non-consensually releasing intimate images, smearing, doxxing, extortion, blackmail, and so on. Smearing and impersonation is another key tactic, working with false accusations, impersonation, spoofed photos, generative impersonation, manipulated synthetic media, and other tools to generate contagious discrediting. Exclusion is a further tactic, attempting to block or flag targets and their content, infiltrating accounts, executing DDoS attacks, email and subscription bombing, and other approaches.
Finally, obfuscation attempts to drown out accounts and render their content invisible, by producing information gluts that flood the target’s online space and conceal their original content and reporting, and by engaging in astroturfing, hashtag hijacking, and similar approaches. AI tools can be used in the context of any and all of these tactics, and will almost certainly make the situation worse still.