We're continuing in a law and policy vein. The final session for today is on the potential for a democratisation of IT infrastructures. Dan Wielsch is the first presenter, focussing on infrastructure governance. He notes that the governance principles of distribution technology are changing - more people than ever before have access to the means of information production and exchange, drastically reducing entry costs to communication (also known as 'cheap speech'). This is markedly different from the previous industrial information economy, of course. In the new network information economy there is a serious increase in non-market content production, leading to more and more diverse content and content producers.
This happened in part because of the declining price of computation, but especially also because of the changes in governance of the technological means of distribution. While cables are still proprietary and often mono- or duopolistically owned, the Internet, for example, enables data communication across distant physical networks without handling some data preferentially over other data - the lower layers of the network are as general as possible and all application-specific functions are at the higher layers. This is an end-to-end principle where intelligence sits at the ends of the network. Further, all standards are open and cannot be easily manipulated.
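The end-to-end idea described above can be made concrete with a toy sketch (all names here are illustrative, not from any real protocol): the "network" function below simply forwards opaque bytes without inspecting them, while application-specific intelligence, such as an integrity check, lives entirely at the endpoints.

```python
# Toy illustration of the end-to-end principle: the lower layer is
# application-agnostic and just moves bytes; reliability logic sits
# at the endpoints. Function names are hypothetical.
import hashlib


def network_forward(packet: bytes) -> bytes:
    # The network core does not look inside the packet, and does not
    # treat any payload preferentially - it only forwards.
    return packet


def endpoint_send(message: str) -> bytes:
    payload = message.encode()
    digest = hashlib.sha256(payload).digest()
    # The sending endpoint adds its own integrity check.
    return digest + payload


def endpoint_receive(packet: bytes) -> str:
    digest, payload = packet[:32], packet[32:]
    # The receiving endpoint, not the network, verifies integrity.
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("corrupted in transit")
    return payload.decode()


received = endpoint_receive(network_forward(endpoint_send("hello")))
```

Because the forwarding layer knows nothing about the application, any new application can be deployed by changing only the endpoints, which is exactly what makes the lower layers "as general as possible".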
Regulation follows that for the telephone, where phone lines need to carry all content regardless of its format or purpose - in effect, all lines are openly but not freely accessible to users, following a commons principle. Access is not conditional on the permission of someone else, or where it is, such permission is granted in a neutral way (either by technology design or state regulation). This means that the use of the resource becomes more decentralised and more people will use the resource in more diverse ways. This allows for a wide variety of Internet applications, both commercial and non-commercial, even if there is an applications competition at the heart of Internet growth. There is a connection between open access and positive externalities, too: the value of the Net as a resource increases as more people use it - 'the more, the merrier'.
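The 'more, the merrier' point can be quantified under one common (and admittedly stylised) assumption, a Metcalfe-style model in which the network's value scales with the number of possible pairwise connections between users:

```python
# Metcalfe-style toy model (an assumption, not a claim from the talk):
# the value of a network of n users is the number of possible
# pairwise connections, n * (n - 1) / 2, which grows superlinearly.
def network_value(users: int) -> int:
    return users * (users - 1) // 2


for n in (10, 100, 1000):
    print(f"{n} users -> value {network_value(n)}")
```

Doubling the user base roughly quadruples the value under this model, which is why each new user confers a positive externality on everyone already connected.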
From an economic perspective, the case for commons management is particularly strong where user-based research generates large benefits. This follows a demand-side definition of infrastructures (following Frischmann): the resource is (potentially) non-rivalrous in consumption; social demand is driven primarily by downstream activity; and the resource is used as input in the production of a wide range of public/private/meritorious goods. Such definitions apply for tangible (roads/railways) as well as for intangible (Net/human genome database) systems.
The presence of demand-side externalities drives the recognition of this infrastructure resource as a demand-driven one. Its value is ultimately driven by consumers of downstream outputs. Now, if the output is a private good, the market creates accurate demand information; however, if the output is a public good, the market will undervalue the social value. This market failure translates upstream; and as a result there is a need for some form of commons regime. Applied to IT the commons principle leads to a decentralisation of control over the means of communication.
But how does the law respond when faced with such a decentralisation of control? Normative support comes from free speech laws - decentralisation of control translates into a decentralisation of content production, and this greater expressive autonomy means that users have a greater expressive capacity and should be supported by law. However, this is a very broad and general argument, and other, more specific ones can be found. The partial separation of ownership and control which is occurring here leads us to look at other instances where such cases have been addressed by law - big media, for example, have often invoked free speech protections against regulatory interventions (ownership of a communication network, and the consequential control of what the network communicates, is construed as a form of free speech). This might be a somewhat spurious argument, however - content production and content distribution need to be distinguished and should not be rolled into one. (Questions of network neutrality are also often framed in a free speech context, interestingly - by proponents of either side of the debate.)
Antitrust can provide an additional tool to enforce open access - but such law must effectively check the power of the monopolists. The danger of leveraging the power of one market into an adjacent market by refusing to deal needs to be addressed here, too. Finally, liability rules may also be used to enforce open access - there is a need not to block the use of commons technology. There could be a new form of a separation doctrine for the political economy of the network society here - separate the freedom to innovate from the freedom of property.
The next presenter is Stefan Bechtold, focussing more narrowly on peer-to-peer systems. How are these networks regulated, and how does their technological development respond to the regulation? P2P networks are of course self-organising communications systems of equal peers which share distributed resources, while avoiding any centralised services (to a varying degree). As an infrastructure, they are non-rivalrous in consumption, social demand is driven by downstream activity, and they are used as input to the production of a wide range of public, private, and meritorious goods - and indeed there are legitimate uses, there is innovation in distribution technology by creating highly efficient distribution mechanisms, and amongst other things p2p is a promising area of distributed systems research.
P2P systems exist at the logical layer of the Internet communications system (between the physical and the content layer) - and the debate around p2p is one of copyright holders in content wanting to control the logical layer, that is to limit the range of logical layer technologies through which content can be distributed.
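What "avoiding centralised services" looks like at the logical layer can be sketched with a simplified consistent-hashing scheme (a deliberately minimal assumption on my part, not a description of Napster, Grokster, or any specific deployed protocol): every peer can locate the peer responsible for a given file without consulting any central database.

```python
# Toy sketch of fully decentralised lookup in a p2p overlay, using a
# simplified consistent-hashing ring. All names are illustrative.
import hashlib
from bisect import bisect_right


def ring_position(key: str) -> int:
    # Hash peers and file keys onto the same circular identifier space.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2**16


class Peer:
    def __init__(self, name: str) -> None:
        self.name = name
        self.position = ring_position(name)
        self.store: dict[str, bytes] = {}


peers = sorted((Peer(f"peer-{i}") for i in range(5)),
               key=lambda p: p.position)


def responsible_peer(key: str) -> Peer:
    # A key belongs to the first peer clockwise from its position on
    # the ring - any peer can compute this locally, so there is no
    # central index to operate (or to sue).
    pos = ring_position(key)
    idx = bisect_right([p.position for p in peers], pos) % len(peers)
    return peers[idx]


responsible_peer("some-file.ogg").store["some-file.ogg"] = b"data"
```

The design point matters for the litigation story that follows: in Napster's architecture the lookup step above was a central service its operator controlled; once lookup is computed peer-side like this, there is no single operator of the network left to hold liable, only the developers of the software.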
There were a number of waves of case law surrounding p2p applications: in the first wave, cases relating to Napster in the U.S. and in Germany, where Napster was held liable mainly because of its operation of a central network database that the p2p technology relied on. In a second wave, litigation focussed on Grokster and Kazaa, which removed this central database and were therefore less directly liable. In the U.S. and the Netherlands courts held that the developers of such systems did not technically operate the systems and could not be held liable. However, in later cases in the U.S., Australia and Germany courts found that the very act of developing such systems could lead to developers' liability (especially if developers advertised the potential of copyright infringement using their tools). This is a shift in law from a focus on the shape of the network to a focus on the power to control the software, and hence the logical layer of p2p distribution.
Overall, then, there is a trend to decentralisation in these networks. This is the case both in the actual file searching technology, in corporate structures as providers of p2p software have moved increasingly to offshore locations, in client applications, and in advertisements and community interaction resources (which are now frequently maintained using wikis, and involving users). Indeed, the continuing litigation brought by the music industry against such technologies could be seen as the core driver here: the rapid development of p2p systems, and the increasingly intricate community and company structures which support it, would not have come about in this form, or as quickly as they have, had users, developers and companies not had to respond to the threat of legal action. In this view, the music industry is doing its best to prop up the very tools and practices it seeks to destroy.
That's where my battery ran out, so I'm afraid I missed the presentation by Christoph Engemann. He presented a very useful overview of the continuing move towards open source software especially for government applications, and of the threats and opportunities involved.