Artificial Intelligence

Human vs. LLM Coding of Australian Charities’ Civic Activities

The final speaker in this ACSPRI 2024 conference session is Aaron Willcox, presenting work with the Scanlon Research Institute that explores civic opportunities at the local government level. For organisations, such opportunities include hosting events, offering memberships, involving individuals through volunteering, and taking action through advocacy and campaigns.

Exploring Effective Persuasion Using LLMs

The next speaker in this ACSPRI 2024 conference session is Gia Bao Hoang, whose interest is in the use of LLMs for detecting effective persuasion in online discourse. Such an understanding of effective persuasion could then be used for productive and prosocial purposes, or alternatively to identify problematic uses of persuasion by bad actors.

Using LLMs to Assess Bullying in the Australian Parliament?

The next speaker in this ACSPRI 2024 conference session is Sair Buckle, whose interest is in the use of Large Language Models to detect bullying language in organisational contexts. Bullying is of course a major societal problem, including in companies, and presents a psychosocial hazard. Several approaches to addressing it have been proposed: surveys, interviews, and manual linguistic classification (e.g. in federal parliament), which are subjective and labour-intensive; and pulse surveys and self-labelling questionnaires (e.g. …

Using Large Language Models to Code Policy Feedback Submissions

The first session at the ACSPRI 2024 conference is on generative AI, and starts with Lachlan Watson. He is interested in the use of AI assistance to analyse public policy submissions, here in the context of Animal Welfare Victoria’s draft cat management strategy. Feedback could be in the form of written submissions, surveys, or both, and needed to be analysed using quantitative approaches given the substantial volume of submissions.

LLMs in Content Coding: The 'Expertise Paradox' and Other Challenges

And the final speaker in this final AoIR 2024 conference session is the excellent Fabio Giglietto, whose focus is on coding Italian news data using Large Language Models. The project worked with some 85,000 news articles shared on Facebook during the 2018 and 2022 Italian elections: it first classified the article URLs as political or non-political, then produced and clustered text embeddings for these articles, and used GPT-4-turbo to classify the dominant topics in the resulting clusters.
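The exact pipeline is not spelled out in this summary, but the general embed, cluster, and label workflow it describes might look something like the following minimal Python sketch; the embedding model, libraries, cluster count, and prompt here are illustrative assumptions rather than details from the talk.

```python
# A minimal sketch of an embed -> cluster -> LLM-label pipeline.
# Assumptions (not from the talk): sentence-transformers for embeddings,
# k-means for clustering, and the OpenAI chat API for topic labelling.
from collections import defaultdict

from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Placeholder article texts; the actual study used some 85,000 articles.
articles = [
    "Il governo annuncia nuove misure economiche ...",
    "La squadra locale vince il derby ...",
]

# 1. Embed the article texts (model choice is an assumption).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(articles)

# 2. Cluster the embeddings; the number of clusters is a free parameter.
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)
clusters = defaultdict(list)
for text, label in zip(articles, kmeans.labels_):
    clusters[label].append(text)

# 3. Ask GPT-4-turbo to name the dominant topic of each cluster,
#    based on a small sample of its articles.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
for label, texts in clusters.items():
    sample = "\n---\n".join(texts[:10])
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": "In a few words, what is the dominant topic "
                       f"of these news articles?\n\n{sample}",
        }],
    )
    print(f"Cluster {label}: {response.choices[0].message.content}")
```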

LLMs and Transformer Models in News Content Coding

The next speaker in this final AoIR 2024 conference session is the great Hendrik Meyer, whose interest is in detecting stances in climate change coverage. His work focusses especially on climate change debates in German news media, covering climate protests, discussions about speed limits, and discussions about heating and heat pump regulations.

Towards an LLM-Enhanced Pipeline for Better Stance Detection in News Content

The next speaker in this session at the AoIR 2024 conference is my QUT colleague Tariq Choucair, whose focus is especially on the use of LLMs in stance detection in news content. A stance is a public act by a social actor, achieved dialogically through communication, which evaluates objects, positions the self and other subjects, and aligns with other subjects within a sociocultural field.

Using LLMs to Code Problematic Content in the Brazilian Manosphere

The second speaker in this final session at the AoIR 2024 conference is Bruna Silveira de Oliveira, whose focus is on using LLMs to study content in the Brazilian manosphere. Extremist groups in this space seek legitimisation, and the question here is whether LLMs can be used productively to analyse their posts.

Paying Attention to Marginalised Groups in Human and Computational Content Coding

The final (!) session at this wonderful AoIR 2024 conference is on content analysis, and starts with Ahrabhi Kathirgamalingam. Her interest is especially in questions of agreement and disagreement between content codings; the gold standard here has long been intercoder reliability, but this tends to presume a single ground truth which may not exist in all coding contexts.
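For a concrete point of reference: a standard intercoder reliability measure like Cohen’s kappa scores agreement between two coders beyond chance, and in doing so implicitly treats any disagreement as error relative to a single correct coding. A minimal illustration (the coders and labels here are invented, not from the talk):

```python
# Cohen's kappa: chance-corrected agreement between two coders.
# Any disagreement counts as error, presuming a single ground truth.
from sklearn.metrics import cohen_kappa_score

coder_a = ["civic", "civic", "other", "civic", "other", "other"]
coder_b = ["civic", "other", "other", "civic", "other", "civic"]

# Observed agreement is 4/6; expected chance agreement is 0.5,
# so kappa = (0.667 - 0.5) / (1 - 0.5) ≈ 0.33.
print(cohen_kappa_score(coder_a, coder_b))
```

A low kappa here may signal coder error, but it may equally reflect legitimate interpretive differences, which is exactly the assumption this presentation questions.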

How Meta’s Third-Party Fact-Checkers Are Learning to Think Like the Machine

The final presenters in this session at the AoIR 2024 conference are Yarden Skop and Anna Schjøtt Hansen; their interest is in the third-party fact-checking network employed by Meta. The network operates through a Meta-provided online dashboard that highlights potentially problematic content; in practice, the dashboard directs fact-checking away from political content spread by major political figures and towards other forms of content.
