Berlin.
The third day at the Berlin Symposium starts with a brief keynote by Damon Horowitz from Google, who outlines some further research challenges for the new Institute for Internet and Society. He begins by considering the auto-complete function of Web forms (as in Google search) – a simple indication of how data about usage patterns is gathered in pursuit of greater system efficiency: this can be beneficial, but it is also a sign of humans losing agency to the system.
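(As a rough illustration of the mechanism Horowitz points to – a sketch only, not Google’s actual implementation; the query log and all names below are invented – an auto-complete feature can be as simple as matching the user’s prefix against previously logged queries and ranking the matches by how often they have been seen:)

    from collections import Counter

    # Toy auto-complete: suggest previously logged queries that match the
    # user's prefix, ranked by frequency of past use.
    # (Hypothetical sketch; the query log and all names are invented.)
    query_log = ["berlin symposium", "berlin weather", "internet and society",
                 "berlin symposium 2011", "berlin weather"]
    counts = Counter(query_log)

    def suggest(prefix, k=3):
        """Return up to k of the most frequent logged queries starting with prefix."""
        matches = [(query, seen) for query, seen in counts.items()
                   if query.startswith(prefix)]
        return [query for query, seen in sorted(matches, key=lambda m: -m[1])[:k]]

    print(suggest("berlin"))  # e.g. ['berlin weather', 'berlin symposium', ...]

Even in this toy form, the suggestions are nothing more than an aggregate of past behaviour – which is precisely where the question of agency arises: the system nudges us towards what has already been typed.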
His second example is the social media status update: a simple way of starting a conversation, of sharing information, of spreading ourselves. But where do such updates go? Who are the intended, or actual, recipients? What are the consequences? Once we’ve tasted the pleasure of communicating more widely this way, it is difficult to restrain ourselves from using this functionality – but do we understand the full implications of doing so?
These features reflect and shape a particular view of the self: the auto-complete suggests an automated self, freed from trivial actions that are delegated to a digital representative; the status message, a public self that overcomes insecurity and constantly participates in society. Both embody a technological view of the self.
Technologists build the products and features that fit their view of the self – and those views are in turn shaped by the technologies they build. A socially awkward undergraduate, for example, is likely to build a social networking platform that is all about in-groups and out-groups, a festival of social ‘liking’. (Now who might this be directed at?) But what is left out by this technological view of the self? The automated self leaves out the deliberative self, for example; the public self leaves out the more private self – these selves are neglected in our current technological view.
At the same time, we could also argue that we’re not automated or public enough: current systems draw on only a limited amount of background and context, and therefore provide only a very limited range of experiences. We must challenge the common conception that everything is moving online (“if I can’t find it on Google, it doesn’t exist”); the dominant medium of our time remains reality, not the Internet. A further problem with current systems is that their underlying representation of the user is not faithful enough.
The key to providing better online experiences, then, is not to try to have technology fully ‘understand’ us, but to help developers of technologies become more fully aware of the aspects of ourselves that remain left out – to understand the inherent limits of technology. This is about better understanding the human being, and the human experience, through the work of the humanities and social sciences. And beyond this more descriptive project, we must also recommend interventions – this prescriptive work draws on the language of rules and control (rules of privacy and transparency are important, for example, to enable users to remain in control of their data), but goes beyond that, too.
What is necessary here is also to understand the feeling of technical life in development hubs such as Silicon Valley: what developers create is not initially shaped by rules of governance and regulation; such elements tend to come later. Perhaps they shouldn’t, and interventions in the development process may be necessary at an earlier stage. This helps push for the development of products we can admire – a move from building systems just because we can, towards a more user-centric design. Perhaps we can go even further than this – where in the development process can we consider human potential: personal development, civic engagement, human values, and so on? The recent ‘circles’ approach to social networking (um, there may be a Google product for that?) stems from this, as does the use of social media in crisis communication, among other new developments. How can developers be inspired to develop products that lead to individual flourishing and civic well-being?