Comprehensive analyses of divergent patterns in the journalistic coverage of major controversial topics are often limited by the volume of content that such analyses can realistically process. In-depth research typically relies on the manual coding of content for a variety of aspects – for instance, which stakeholders are represented in news coverage, how issues are framed and solutions presented, and what terms and language are used to describe the problem. But manual coding is labour- and resource-intensive and therefore does not scale well; as a result, large-scale and longitudinal analyses of news content are rare and tend to restrict themselves to a small number of coded aspects. This presentation outlines possible approaches to the use of Large Language Models (LLMs) to augment and extend such work, with a particular focus on detecting patterns of polarisation in the news media. These approaches place emerging LLMs in the role of human content coders, training and testing them against a manually coded subset of the data and investigating the reliability, replicability, and limitations of this method. Although several obstacles remain, a successful implementation has the potential to substantially enhance and scale up the analysis of large news corpora.
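
As a minimal sketch of what such a validation step might look like (assuming, purely for illustration, a frame-coding task, a hypothetical llm_code_article() wrapper around whichever model is used as the automated coder, and scikit-learn's cohen_kappa_score for agreement), the following Python snippet compares LLM-assigned labels against a manually coded subset:

```python
# Illustrative sketch only: the frame categories, coding logic, and the
# llm_code_article() wrapper are hypothetical placeholders, not the actual
# coding scheme or model used in the work described above.
from sklearn.metrics import cohen_kappa_score

FRAMES = ["economic", "conflict", "human interest", "morality", "responsibility"]

def llm_code_article(article_text: str) -> str:
    """Placeholder for a call to whichever LLM acts as the automated coder.

    In practice this would send the article text plus codebook instructions
    to the model and parse its response into one of the FRAMES categories.
    """
    raise NotImplementedError("Wrap the chosen LLM API here.")

def validate_against_manual_coding(articles: list[str], human_labels: list[str]):
    """Compare LLM-assigned frame labels with the manually coded subset."""
    llm_labels = [llm_code_article(text) for text in articles]
    # Simple percentage agreement between the LLM and the human coders.
    agreement = sum(a == h for a, h in zip(llm_labels, human_labels)) / len(human_labels)
    # Chance-corrected agreement (Cohen's kappa), a standard intercoder
    # reliability measure in content analysis.
    kappa = cohen_kappa_score(human_labels, llm_labels, labels=FRAMES)
    return agreement, kappa
```

In a fuller pipeline, one would typically report a chance-corrected reliability coefficient of this kind (Cohen's kappa, or Krippendorff's alpha for more than two coders) separately for each coded aspect before allowing the model to code the remainder of the corpus.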