Situating Digital Methods

Our Digital Methods pre-conference workshop at AoIR 2016, which combines presenters from the Digital Methods Initiative at the University of Amsterdam and the Digital Media Research Centre at Queensland University of Technology, starts with a presentation by Richard Rogers on the recent history of digital methods. He points out the gradual transition from a conceptualisation of the Internet and the Web as cyberspace or a virtual space to an understanding of the Web as inherently linked with the 'real' world: online rather than offline becomes the baseline, and there is an increasing sense of online groundedness.

In the process, however, the messiness of Web data has also become apparent. The famous Google Flu Trends study, which purported to identify the incidence and spread of the flu around the world, has been critiqued from this perspective, for instance, and a number of digital humanities approaches have emerged in response to these challenges.

One approach is Lev Manovich's idea of cultural analytics, which has focussed especially on visual content; this has enabled the study of the diachronic transformation of visual styles online, for instance. A second, similar approach is culturomics, which has focussed on large-scale textual analysis, especially using the Google Ngram Viewer, and provides a longer-term perspective on trends in popular attention and language use.

Further, Webometrics uses a network-based approach, studying for instance the linkages between Websites to identify key authorities and sources in the network; altmetrics similarly applies a network approach to the analysis of citations, especially in non-standard contexts such as social media.
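
As a rough illustration of the kind of network analysis involved, a minimal sketch in Python (using the networkx library on an invented set of hyperlinks; the site names are placeholders, not real data) might compute hub and authority scores for a small link graph:

```python
# Minimal sketch: hub/authority analysis of a small, invented hyperlink network.
# The sites and links below are hypothetical; a real webometric study would build
# this graph from crawled link data.
import networkx as nx

links = [
    ("blog-a.example", "news-site.example"),
    ("blog-b.example", "news-site.example"),
    ("blog-a.example", "ngo.example"),
    ("news-site.example", "ngo.example"),
    ("ngo.example", "gov-report.example"),
]

graph = nx.DiGraph(links)

# HITS assigns high authority scores to sites that many good hubs link to,
# and high hub scores to sites that link to many good authorities.
hubs, authorities = nx.hits(graph)

for site, score in sorted(authorities.items(), key=lambda kv: -kv[1]):
    print(f"{site}: authority={score:.3f}")
```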

Increasingly, we have now seen the emergence of 'natively digital' methods, which focus on the inherent properties of objects found in the digital medium and are designed specifically for the study of that medium rather than imported from offline contexts. This is also a software project: a question of what objects are available in the digital space; how these objects can be captured; and how the methods that exist in the medium can be repurposed for research.

This includes hyperlink, archive, search engine, blogosphere, Web space, Wikipedia, Facebook, Twitter, and app space studies, for instance. For hyperlink studies, the IssueCrawler software has been available since 2001 and enables research into the interlinkages between different Websites (and the organisations that operate these sites), providing insight into the politics of association between these sites and organisations.
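
IssueCrawler itself is a hosted tool, but the basic step it builds on, harvesting the outbound links of a set of seed pages, can be sketched in a few lines of Python; the sketch below assumes the requests and BeautifulSoup libraries, and the seed URL is a placeholder:

```python
# Toy sketch of the first step of hyperlink analysis: collect the external
# outlinks of a seed page. This is not IssueCrawler, merely an illustration.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

SEED_URL = "https://example.org/"  # placeholder seed site

response = requests.get(SEED_URL, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

seed_host = urlparse(SEED_URL).netloc
outlinks = set()
for anchor in soup.find_all("a", href=True):
    target = urljoin(SEED_URL, anchor["href"])
    host = urlparse(target).netloc
    if host and host != seed_host:  # keep only links that leave the seed site
        outlinks.add(host)

print(sorted(outlinks))
```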

Further, the availability of the sites thus identified can be explored through various nation-specific proxy servers in order to examine the level and focus of censorship applied in different countries. Such work is somewhat double-edged, however: the censors themselves could use the same methods to improve the coverage of their censorship efforts.
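
A very crude version of such an availability check can be sketched with the Python requests library; the proxy addresses below are placeholders, and serious censorship measurement of course requires vetted in-country vantage points and far more careful methods:

```python
# Sketch: request the same URL through different (hypothetical) national proxies
# and compare the responses. Proxy addresses here are placeholders only.
import requests

TEST_URL = "https://example.org/"
proxies_by_country = {
    "country-a": "http://proxy-a.example:8080",  # hypothetical proxy
    "country-b": "http://proxy-b.example:8080",  # hypothetical proxy
}

for country, proxy in proxies_by_country.items():
    try:
        resp = requests.get(
            TEST_URL,
            proxies={"http": proxy, "https": proxy},
            timeout=10,
        )
        print(f"{country}: HTTP {resp.status_code}, {len(resp.content)} bytes")
    except requests.RequestException as exc:
        print(f"{country}: request failed ({exc})")
```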

Archival studies may focus on the evolution of the presentation of specific Websites, creating for instance a screencast of the changing design of a site over time. There is also the potential to conduct archival hyperlink analysis, exploring how the networks of the blogosphere, for example, have evolved over time. Similarly, the underlying HTML code of these sites can be studied – for instance to trace the growing use of user tracking and other emerging features.
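
One programmatic route into this material is the Internet Archive's Wayback Machine CDX API, which lists the stored snapshots of a URL over time; a minimal sketch (for a placeholder site, collapsed to roughly one capture per year) might look like this:

```python
# Sketch: list yearly Wayback Machine captures of a site via the CDX API,
# as a starting point for studying how its design and HTML change over time.
import requests

CDX_API = "http://web.archive.org/cdx/search/cdx"
params = {
    "url": "example.org",        # placeholder site of interest
    "output": "json",
    "collapse": "timestamp:4",   # at most one capture per year
    "filter": "statuscode:200",
    "fl": "timestamp,original",
}

rows = requests.get(CDX_API, params=params, timeout=30).json()
header, captures = rows[0], rows[1:]

for timestamp, original in captures:
    # Each snapshot can be retrieved at https://web.archive.org/web/<timestamp>/<original>
    print(timestamp[:4], f"https://web.archive.org/web/{timestamp}/{original}")
```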

A further area of study focusses especially on Wikipedia: such work shows how articles evolve, and points to the fact that distinct national perspectives gradually emerge in the different language versions of a given Wikipedia article; it can compare the URLs and images used in the different versions, for instance.
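
The MediaWiki API makes this kind of cross-language comparison reasonably straightforward; as a rough sketch, the external links cited in two language versions of an article (the titles here are merely illustrative) could be pulled and compared like this:

```python
# Sketch: compare the external links cited by two language versions of a
# Wikipedia article. Article titles are illustrative examples only.
import requests

def external_links(lang, title):
    """Return the set of external URLs cited in one language version."""
    api = f"https://{lang}.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "prop": "extlinks",
        "titles": title,
        "ellimit": "max",
        "format": "json",
        "formatversion": "2",
    }
    data = requests.get(api, params=params, timeout=30).json()
    page = data["query"]["pages"][0]
    return {link["url"] for link in page.get("extlinks", [])}

en_links = external_links("en", "Climate change")
de_links = external_links("de", "Klimawandel")

print("shared sources:", len(en_links & de_links))
print("only in the English version:", len(en_links - de_links))
print("only in the German version:", len(de_links - en_links))
```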

For Facebook, a similar reconstruction of historical changes is also valuable, but even more difficult because of the lack of historical snapshots in the Internet Archive. Here, the focus may therefore shift especially towards the study of public Facebook pages rather than personal profiles, because these data are more easily available – in essence, as Richard points out, digital methods often follow the medium to see what it can do, and are therefore shaped by the affordances of the platforms they study.

There is also a potential here for networked content analysis, examining the content that is being engaged with the most on Facebook. This is a mixed-methods, quant/qual approach ("first you count, then you interpret"), which examines, for instance, the networks of interlinkages created by likes between different pages, and the engagement with content – especially viral, visual content – across different pages.
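
The counting step of such a quant/qual workflow is easy to sketch; assuming a hypothetical CSV export of public page posts with engagement counts (column names invented for the example), the most-engaged content can be ranked before the qualitative interpretation begins:

```python
# Sketch of the "first you count" step: rank page posts by total engagement.
# posts.csv is a hypothetical export with one row per public page post.
import pandas as pd

posts = pd.read_csv("posts.csv")  # assumed columns: page, post_id, type, likes, comments, shares

posts["engagement"] = posts["likes"] + posts["comments"] + posts["shares"]

# Most-engaged posts overall, and the share of engagement going to visual content.
top_posts = posts.sort_values("engagement", ascending=False).head(20)
visual = posts.loc[posts["type"].isin(["photo", "video"]), "engagement"].sum()
visual_share = visual / posts["engagement"].sum()

print(top_posts[["page", "post_id", "type", "engagement"]])
print(f"share of engagement on visual content: {visual_share:.1%}")
```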

One could think about two eras of Facebook research here: a first phase from 2006 to 2011, with a focus on the presentation of the self and studies of personal interests, and post-2011 research that focusses on social movements and causes and examines Facebook groups and pages. Similar patterns may apply to other platforms, such as Instagram, where there has been a shift from the study of selfie culture and geographically based moods towards the study of antagonistic hashtags, or Twitter, where research has progressed from the study of phatic communication (2006-2009) through hashtag studies as "remote event analysis" (2009-2012) towards more complex and sophisticated research across a wider range of topics (from 2012 onwards). Twitter can still be understood as a storytelling machine for remote event analysis, however. Such work commonly uses hashtag-based tweet collections and takes a diachronic approach that pulls out the key moments in order to generate a timeline of the event.
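
The diachronic step of such remote event analysis can be sketched simply: given a hypothetical hashtag-based tweet collection with one timestamped row per tweet, the hourly volume curve and its peaks provide a first timeline of candidate key moments:

```python
# Sketch: build an hourly tweet-volume timeline from a hashtag collection
# and flag the busiest hours as candidate key moments. tweets.csv is a
# hypothetical export with at least a 'created_at' timestamp per tweet.
import pandas as pd

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

hourly = tweets.set_index("created_at").resample("1h").size()

# Hours with unusually high volume (here: more than two standard deviations
# above the mean) are candidate key moments for closer qualitative reading.
threshold = hourly.mean() + 2 * hourly.std()
peaks = hourly[hourly > threshold]

print(hourly.describe())
print("candidate key moments:")
print(peaks)
```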

We may now be at the end of the Web 2.0 era, however, in which the Web was a scrapeable, accessible environment. In an app-dominated environment, a Web of proprietary platforms, and a space where APIs are being restricted, the large-scale digital methods that have been developed increasingly come up against significant barriers. This has led to a shift back towards other forms of research, combining natively digital and other approaches.