Reverse-Engineering Twitter?

The next speaker at "Compromised Data" is Robert Gehl, whose interest is in critically reverse-engineering social media as a form of critiquing and producing alternatives to current social media platforms. This builds on reverse-engineering approaches in engineering, economics and law, on science and technology studies as well as software studies, and on critical humanism.

Reverse-engineering is a method of producing knowledge by disassembling human-made artefacts. Such knowledge is then used to produce new, associated artefacts that bear some relation to the old. Some reverse-engineering has merely functional and pragmatic reasons, but in other cases reverse-engineering takes a more critical perspective, drawing for example on actor-network theory or conducting an ethnography of the infrastructure by following the actors.

Bottom-Up Measurements of Network Performance

The next session at "Compromised Data" starts with Fenwick McKelvey, who begins with a reference to the emergence of digitised methods for the study of the Web during the mid-2000s. This was the time around which the latest generation of social media emerged, enabling us to begin thinking about society through the study of the Internet, requiring the development of new research methods by repurposing computer science methods for social science research.

In Toronto, Infoscape Labs developed a number of tools for the exploration of political discourse in Web 2.0, including the Blogometer. This is the emergence of platform studies, paying attention to the platform itself - but this also introduces challenges about how to study the platform, as the core object of research itself intervenes in its study, e.g. through the politics of APIs. This work also required compromises around data access and utilisation, and a growing bifurcation between scholarly and commercial research activities emerged.

Archiving Our Personal Digital Milieux

The final presenter in this morning session at "Compromised Data" is Yuk Hui, who will present a social media self-archiving project. He has worked for years on audiovisual archives, but much of the work in this field has focussed on institutional rather than personal archives, with the latter often concerned mainly with privacy issues.

But another set of problems relates to data management instead: we are working with multiple cloud-based systems, but rarely archive our digital objects effectively - archiving is not just about storing, but about preserving the context of digital objects as well: the digital milieu.

Social Media Data and Their Utopian Assumptions

The next speaker at "Compromised Data" is Ingrid Hoofd, whose interest is in how new technologies make certain types of representation possible or impossible. The neoliberalisation of universities, for example, leads to a quantification of research data which generates poor research. This is the violence of numbers: how do we assess the way new media technologies change the face of social sciences research, then?

Social media data mining methodology provides an allegory of the technological apparatuses that use it. This hinges on these technologies' propensity to speed up, and on the associated notion of change. There is a strong emphasis on objectivity, generating coverage of the conditions of the real that is at once more 'true' and more questionable. Social science via datamining tools is implicated in a push towards an idealised data-driven utopia.

Haunted Data in Cross-Media Controversies

The second day of "Compromised Data" starts with Lisa Blackman, who is tracking social media controversies and mapping information contagion. Can we use quantitative methods in non-positivist ways to understand these processes?

Lisa introduces the idea of haunted data, and suggests that we need to think about digital methods as performative: we need to move beyond infographics when thinking about visualising data. Part of this is about priming: creating an experimental apparatus that makes people feel that their actions are self-directed, but actually generates such actions through the interventions of the apparatus. Such research is controversial because of its early ties to research into psychic phenomena, but it is useful for exploring information contagion and virality, especially in the context of social media controversies.

The Push towards Niche Geosocial Data

The final speaker on this first day of "Compromised Data" is Sidneyeve Matrix, who shifts our focus towards geosocial information as generated by smartphones and other mobile devices. Only 12% of US users as surveyed by the Pew Research Center posted Foursquare check-ins in 2013, for example, down from 18% in 2011 - but this may mask a greater take-up of other location-based services, not least the Frequent Locations functionality in iOS7.

There is a continuing trend towards the consumerisation of geodata. Geosocial cultural arrangements are explored through the use of mobile communication patterns, but such analysis is notoriously difficult - not because of a lack of data, but because of the difficulties in assigning meaning to the geolocated information which is available from a variety of platforms.

Towards a More User-Centric Perspective in Utilising 'Big Data'

The next speaker at "Compromised Data" this afternoon is Asta Zelenkauskaite, who notes the increasing interweaving of social and mainstream media; given the properties of 'big data', it therefore becomes important to explore how users engage with mass media and cross-media contexts. How relevant are 'big data' to the mass communication field?

Traditional media outlets have mainly focussed on a quasi-passive engagement with media content, while social media now offer two-way interaction by providing back channel functionality. Mass media content, user-generated content, and the digital imprints of user interactions are coming together to shape this cross-media environment.

'Big Data' and Government Decision-Making

The next speaker at "Compromised Data" is Joanna Redden, whose interest is in government uses of 'big data', especially in Canada. There's a great deal of hype surrounding 'big data' in government at the moment, which needs to be explored from a critical perspective; the data rush has been compared to the gold rush, with similarly utopian claims - here especially around the ability for 'big data' to support decision-making and democratic engagement, and the contribution 'big data'-enabled industries can make to the GDP.

But how are 'big data' actually being used in government contexts? New tools and techniques for the analysis of 'big data' are of course being used in government, but how these affect policy decisions remains unclear. Social media analysis is similarly being used for public policy and service delivery; sentiment analysis is used for some decisions around law enforcement and service delivery, but adoption to date is slow.

Engagement through Social Media: What Do We Mean?

The final presentation in this "Compromised Data" session is by Mary Francoli and Dan Paré, who focus on the question of engagement and mobilisation in a time of rapidly evolving social media use. One initial observation is that these terms lack definitional clarity - there are some very high-level definitions (e.g. building on UN definitions), but these remain vague; political and civic engagement are conflated, and specific forms of engagement are not necessarily defined in detail.

Simply voting is a form of engagement, for example, but is clearly different from other, more complex forms of political engagement. The literature increasingly links these types of activity with social media (and with the Net more broadly) - and the extent to which such forms of engagement occur, and how they interrelate with forms of offline political engagement, need to be studied in greater detail.

Towards Mixed Methods for Analysing Multimodal Communication

The next session at "Compromised Data" starts with Frauke Zeller, who begins by noting the multimodality of communication, including through social media: many texts use more than one semiotic mode, combining text, images, audio and video. How can the existing methods for studying multimodality be transferred to online environments, and to research building on 'big data', however?

Some such work begins with exploring the networks between users, and between texts, but this is not enough - how do we move from the macro to the meso and micro levels of communication? How do we move from the manifest to more latent content, especially where non-textual content is involved?
