The post-lunch session at the ICA 2024 conference that I’m attending has been organised by the Global Journalism Innovation Lab (GJIL) project, and focusses on AI-generated content in the news. Elizabeth Dubois starts us off by defining generative AI as a type of artificial intelligence system which is capable of generating text, images, and other media in response to prompts. Such generative AI models learn the patterns and structure of their input training data, and then generate new data that have similar characteristics.
Michelle Bartleman now takes over to present an updated systematic literature review of journalism scholarship on automated content. Such content might be referred to variously as AI authorship, AI journalism, AI news, AI-generated news, or artificial journalism, and the update was necessary because the field has developed so rapidly following the recent explosion in generative AI services; Michelle’s study eventually included 145 genuinely relevant articles from 40 countries of origin, with some 15 to 20 articles per year since 2019, a substantial increase on previous years.
The next speaker in this session is my excellent QUT colleague Michelle Riedlinger, whose interest is in news consumers’ perceptions of AI-generated news content in Australia, Canada, and the UK. Younger consumers were more positive about such content than older users, and a further survey of news consumers has just launched. A recent Reuters report showed that users saw such content as less trustworthy, more up to date, cheaper, and not worth paying for; however, users’ understanding of generative AI in general, and of its uses in journalism in particular, is also still very limited. Preliminary patterns show that users are concerned about stories being written and delivered by AI.
Up next is another QUT colleague, Ned Watt, whose interest is in the intersection of generative AI and the field of independent fact-checking. Unlike journalistic fact-checking practices, such independent fact-checking is post hoc, testing claims that are already circulating in public, and it has emerged as a global practice; Ned’s work focusses on Latin America, Southern Africa, and Australia in particular.
He asked these fact-checking organisations about their emerging imaginaries for the use of AI in their work, and found that AI was seen predominantly as a potential friendly helper: employed for internal uses rather than for the production of external output, for instance to monitor information feeds from media, politicians, social media, and elsewhere; to review and verify potentially inauthentic content; and to organise data over time. Fact-checkers did not see their overall work as replaceable by AI, however.
Finally, the great Alfred Hermida is reflecting on Sophi, an AI-based automation system developed by the elite Canadian newspaper the Globe and Mail that is intended to represent the values of such newspaper journalism. Because of its origins in journalism rather than Silicon Valley, it was covered very positively as a groundbreaking news technology (a form of AI that is acceptable to newsrooms), and especially as a way of improving news organisations’ return on investment by choosing which content should sit inside or outside the paywall and otherwise determining story placement and promotion. Sophi can be understood as a kind of ‘dynamic paywall’; it is now used by nearly 60 news websites, especially to run local and regional news sites more efficiently and at scale.