The Critical Role of Communication and Media Research in Addressing the Emerging Generative AI Paradigm

The next session at the ICA 2024 conference is the annual Steve Jones Lecture, which this year is presented by my QUT colleague Jean Burgess and addresses the impact of the newly emerging generative artificial intelligence technologies. This should not be confused with the substantial hype around artificial general intelligence, a technology which always seems to be just around the corner and has yet to actually eventuate.

Rather, this talk is about the more limited generative AI systems that appear to have invaded all sorts of projects, and seem to be universally indicated now by sparkle (✨) icons and emoji and rainbow gradients in user interface designs in both expert and consumer products. Only Meta has resisted this trend, and uses a ring icon.

Very serious money is now being poured into generative AI, and well beyond conventional venture capital: all of the major tech firms, as well as a range of specialist AI labs and AI ‘community’ developer platforms like Hugging Face, now have highly capitalised AI divisions. This has also led to a vast increase in the amount of computing power and energy resources required to drive such AI activities.

How will we pay for all this sparkle, then? Google has already signalled the potential that AI-powered search may be offered under a for-pay model, and Google AI has also introduced a premium subscription plan. This is a significant shift away from advertising-funded free (or at least freemium) Internet services like online search and online document creation. Another development is the insertion of AI technologies into physical devices, from AI laptops to AI iPhones that incorporate specialised AI chips and on-device Large Language Models.

While some of this is outright hype, or reactionary anti-hype, or even a kind of criti-hype that emphasises oh-so-clever critiques, there’s much more going on here. AI assistants have been built, often poorly, on the experience extracted from years of content and search moderation, and their implications are as yet poorly understood.

Overall, this marks a shift from analytical and determinative to (also) expressive, communicative, and agentive AI systems; an integration of these systems into the digital media environment, where generative AI systems serve as media systems and platforms; and a potential to reconfigure, reinforce, and disrupt established digital media platform economies. Communication and media studies have an important role to play in analysing these developments.

How is generative AI interacting with the political economy of the communication and media environment, then; what governance and regulation does it require; what capability and digital inclusion initiatives do we need to enable its inherent benefits and opportunities to be realised; and how do we address questions about transparency, observability, and explainability around generative AI?

It is important to note in this context that generative AI is still in the making and unsettled, and that methods and approaches from our field can therefore not only study but also shape the outcomes of the transformation that is now unfolding. This also requires continuing changes in our field: the past twenty years or so have already seen increased collaboration, scale, and multi-disciplinarity as we have built out the field of Internet studies and related areas of research, and that trajectory must continue even further as we move deeper into the generative AI paradigm.

We must also anticipate unexpected developments as we do so. At the launch of the iPhone, for instance, there was no App Store, leading some critics to dismiss it as a comparatively dumb device; the App Store opened up substantial opportunities for third-party developers to enhance the device, and turned Apple into a platform company.

Yet many of these developments remain difficult to observe: data access is severely and perhaps increasingly limited, and researchers have therefore had to rapidly innovate their methodological approaches in order to be able to do the important work that needs doing.

Within the ARC Centre of Excellence for Automated Decision-Making and Society, several such key projects are now underway. One addresses the question of generative authenticity: this might include efforts at validating, for instance, the authenticity of images and other content in order to defend against misleading AI-generated content, but also working through the complex question of what authenticity even means in a generative AI context.

Another is the Australian Ad Observatory, which seeks to make visible, through data donations from ordinary users, what Facebook advertising they actually encounter, and in doing so reveals patterns of algorithmic targeting as well as the limitations of Meta’s own ‘Why am I seeing this’ (WAIST) explanations for such targeting. In the age of generative advertising, such explorative and explanatory work is going to be even more crucial, since the potential for problematic targeting practices is now even greater.

Jean’s own focus is now also on leading the QUT Generative AI Lab (which currently has a number of PhD scholarships on offer), which is charged with analysing more of the impacts of such generative artificial intelligence developments.