Snurb's blog

Remote Sensing in Archaeological Resource Management

Canberra.
This is a very fast-moving DHA 2012 session – the next speaker is Adela Sobotkova, who’ll present the Bulgarian experience in archaeological remote sensing. Remote sensing extracts information from photographic images captured from space; such information has been used for site mapping and for the detection of new archaeological sites.

Federated Data Tools for Archaeology

Canberra.
The next speaker in this session at DHA 2012 is Shawn Ross, presenting the NeCTAR federated archaeological information management systems project. This is a major, multi-partner project which aims to manage digital data from creation through to archiving.

The idea here is to break data out of ‘destination Website’ silos, and to develop a federated rather than centralised system; it utilises existing resources wherever possible, and encourages the use of portable, machine-readable, and reusable data.
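
To give a rough sense of the federated pattern (as opposed to a single central database), here is a minimal, hypothetical TypeScript sketch: several independent repositories are queried in parallel and their machine-readable results merged. The endpoints and record shape are assumptions for illustration, not anything from the NeCTAR project itself.

```typescript
// Hypothetical sketch of a federated query across independent repositories.
// Endpoints and record shape are illustrative assumptions only.

interface SiteRecord {
  id: string;
  name: string;
  source: string; // which repository the record came from
}

async function federatedSearch(endpoints: string[], query: string): Promise<SiteRecord[]> {
  const perRepository = await Promise.all(
    endpoints.map(async url => {
      const res = await fetch(`${url}?q=${encodeURIComponent(query)}`);
      const records: SiteRecord[] = await res.json();
      // Tag each record with its origin so provenance survives aggregation.
      return records.map(r => ({ ...r, source: url }));
    })
  );
  return perRepository.flat();
}
```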

Synthesising Historical and Archaeological Databases

Canberra.
After the DHA 2012 keynote, I’m in a session on archaeology, which starts with Penny Crook. She highlights the task of synthesising history and archaeology in this field, and notes the potential which digital humanities methods have in this context. More needs to be done here: especially, more connection of available datasets, and more collaboration in online environments. Penny points to two archaeological databases which she’s been involved in.

From Network to Patchwork Collections

Canberra.
And we’re starting the final day of the Digital Humanities Australasia 2012 conference. The day begins with a keynote by Julia Flanders, who challenges us to rethink collections. This begins by asking what we mean by a ‘collection’ in the first place: a collection implies agency, a collector who creates a sense of order amongst the entities they collect.

There’s a bounded comprehensiveness implied by the term, too – a sense that completeness has been achieved through the act of collecting. A collection is an aggregation of individual items which are meaningful not just in themselves, but importantly also in their relationships to one another.

New Approaches to In-Browser Data Visualisation

Canberra.
The final speaker at DHA 2012 this afternoon is Mitchell Whitelaw, whose interest is in data visualisation. The work he’s presenting here builds on the Prints and Printmaking Australia Asia Pacific database, and Mitchell’s project has explored the opportunities to present these data (20,000 works, 4,000 artists) in new, visual ways.

Mitchell has done similar work for other national archives collections, as well as for Flickr Commons image datasets. Such work provides exploratory visualisations of cultural collections, so far mainly with applications built in Processing and Java – but those technologies don’t work very well in browser environments, so recently this work has shifted more towards in-browser visualisations (using HTML and JavaScript).
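
As a rough indication of what the in-browser approach involves, here is a minimal, hypothetical TypeScript sketch of a simple canvas density view of works over time. The record shape, endpoint, and element id are assumptions for illustration, not Mitchell’s actual code.

```typescript
// Hypothetical sketch: draw a density timeline of collection records in the
// browser with plain canvas, rather than a Processing/Java application.

interface PrintRecord {
  title: string;
  artist: string;
  year: number;
}

function drawTimeline(records: PrintRecord[], canvas: HTMLCanvasElement): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const years = records.map(r => r.year);
  const min = Math.min(...years);
  const max = Math.max(...years);
  // One translucent tick per work, positioned by year; overlapping ticks
  // build up into a density view of the collection over time.
  ctx.fillStyle = "rgba(30, 30, 30, 0.15)";
  for (const r of records) {
    const x = ((r.year - min) / (max - min || 1)) * canvas.width;
    ctx.fillRect(x, 0, 2, canvas.height);
  }
}

// Usage (hypothetical endpoint and element id):
// fetch("/api/prints.json")
//   .then(res => res.json())
//   .then((records: PrintRecord[]) =>
//     drawTimeline(records, document.getElementById("viz") as HTMLCanvasElement));
```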

Structuring Factoids in the Dictionary of Sydney

Canberra.
OK, I skipped the first afternoon session at Digital Humanities Australasia 2012 for a quick excursion to Parliament House – and thanks to the vagaries of Canberra taxi services, missed half of the first paper in the next session as well. So, we pick up again with Stewart Wallace from the Dictionary of Sydney project. The dictionary contains some 22,000 entities called ‘factoids’, linked together in various ways.

As a project, it is about urban history; it began with the work of historians, and attempts to reflect their journeys, and to connect their knowledge. The underlying architecture comprises a single digital repository, containing the various entities and their interrelations, and exposing them as XML for use in various forms of presentation and visualisation.
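
To make that architecture more concrete, here is an illustrative TypeScript sketch (not the Dictionary of Sydney’s actual schema) of a repository of typed entities and the relationships that connect them, with a small traversal that follows the links outward from a single entity.

```typescript
// Illustrative data model only: typed entities plus connecting relationships,
// which a front end could serialise to XML or render as web pages.

type EntityKind = "person" | "place" | "organisation" | "event" | "artefact";

interface Entity {
  id: string;
  kind: EntityKind;
  name: string;
}

interface Relationship {
  from: string;      // id of the source entity
  to: string;        // id of the target entity
  predicate: string; // e.g. "lived at", "member of" (hypothetical labels)
}

interface Repository {
  entities: Map<string, Entity>;
  relationships: Relationship[];
}

// Follow the links outward from one entity, returning its direct neighbours.
function neighbours(repo: Repository, id: string): Entity[] {
  return repo.relationships
    .filter(r => r.from === id || r.to === id)
    .map(r => (r.from === id ? r.to : r.from))
    .map(otherId => repo.entities.get(otherId))
    .filter((e): e is Entity => e !== undefined);
}
```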

Building Linked Data Archives

Canberra.
The final speaker in this session at DHA 2012 is Antonina Lewis, who begins by highlighting the question of how entities are described in linked data. Along with further issues such as data storage and retention, this raises a range of key questions for the creators, custodians, and curators of linked data.

Importantly, the interpretation of data requires context; this is especially true for collections of coded data, where the coding schemes and the provenance of the various data sources are also crucially important for meaningful interpretation.

Assessing Linked Data Repositories

Canberra.
The second speaker in the linked data panel at DHA 2012 is Steven Hayes, who begins by introducing the network model of representing relationships between entities. This model has been employed by the Heurist database system, which Steven says represents a new ‘linked data’ mindset in humanities research.

From the perspective of that mindset, how linked are our data? Steven presents a number of criteria for ‘proper’ linked data, following the checklist proposed by Tim Berners-Lee: are the data available online, in machine-readable form, in non-proprietary formats, using RDF standards, and linked to other RDF repositories?
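
As a rough illustration of such a checklist, here is a hedged TypeScript sketch which scores a dataset description against criteria paraphrasing Berners-Lee’s scheme. The data structure and cumulative scoring are assumptions for illustration, not Heurist’s implementation.

```typescript
// Hypothetical checklist scoring: each criterion builds on the previous one,
// so the score is the unbroken run of criteria met from the top down.

interface DatasetDescription {
  availableOnline: boolean;
  machineReadable: boolean;
  nonProprietaryFormat: boolean;
  usesRdfStandards: boolean;
  linksToOtherRepositories: boolean;
}

function linkedDataScore(d: DatasetDescription): number {
  const criteria = [
    d.availableOnline,
    d.machineReadable,
    d.nonProprietaryFormat,
    d.usesRdfStandards,
    d.linksToOtherRepositories,
  ];
  let score = 0;
  for (const met of criteria) {
    if (!met) break;
    score += 1;
  }
  return score;
}

// e.g. a CSV file on a website: online, machine-readable, non-proprietary → 3
console.log(linkedDataScore({
  availableOnline: true,
  machineReadable: true,
  nonProprietaryFormat: true,
  usesRdfStandards: false,
  linksToOtherRepositories: false,
}));
```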

The Challenge of Comprehensive Linked Data

Canberra.
Following the plenary panel, I’ve made it to a Digital Humanities Australasia 2012 panel on linked data, which opens with Toby Burrows. He begins by outlining the shape of what we now call e-Research: it ranges from supercomputing, large data visualisations, and other major, expensive projects mainly in the ‘hard’ sciences through to work being done in the humanities (notably excluding mere digitisation initiatives).

In the humanities, why do we bother? We could simply remain within our own niche areas, or leave the computational work to someone else; humanities work also adds to the problem by introducing further major collections of cultural and communicative data. But the digital deluge is here and cannot be ignored. Computational methods alone are not enough, either: they crucially need better input from humanities scholarship, and this must also be translated into better recognition and funding for humanities research.

Understanding Computational Methods in the Digital Humanities

Canberra.
The final panellist on this DHA 2012 panel on ‘Big Digital Humanities’ is John Unsworth. His definition of the digital humanities is narrower than that of the others: he defines it as a form of humanities scholarship that builds centrally on computational methods – for example, research which uses ‘big data’ resources to do work which could not be done in any other way.

John uses the HathiTrust Digital Library as an example: a collection of some 10 million (and growing) digitised publications, which emerged in tandem with the Google Books initiative and is supported by the libraries which contributed to it. The Trust also operates a research centre which enables users to do computational work building on this vast resource.
