Towards e-Privacy by Design in European Union Legislation

The second keynote at AoIR 2017 is by Marju Lauristin, who is both a professor at the University of Tartu and the rapporteur on e-privacy at the European Parliament, where she also represents Estonia as an MEP; indeed she has been named one of the most influential Estonian women in the world. This week the Parliament voted on new EU privacy regulations which Marju has been instrumental in developing.

Her focus here is on the impact of algorithms on deliberative democracy, and the short summary of the situation is that algorithms will severely affect democracy if the companies that utilise them remain unchecked, and that they will be prevented from doing so only if effective legislation is enacted to protect democratic processes.

Of course, the whole idea of deliberative democracy is deeply connected to the Habermasian conceptualisation of the public sphere; it depends on rational political deliberation. But this idea is now in question, given the substantive changes to the media industry, media practices, and media audiences. Political communication researchers now understand that not many people are necessarily interested in political deliberation and critical discussion; instead, much public communication is affective and antagonistic rather than oriented towards rational consensus development.

Eastern European scholars took Habermas very seriously during Soviet times, and mythologised freedom and democracy during the Cold War, but since the achievement of independence in 1991 a deep disappointment has also set in: fragmentation of the party system was seen as a loss of unity, and there was a steep learning curve in coming to terms with post-communist politics. The expected critical discussions informing a rational electoral choice were in part overwhelmed by modern, commercially structured forms of political marketing that had not been anticipated.

The shift of some such processes to digital environments has further complicated this. There are digital, technological opportunities for political deliberation, but also a low capacity for creating rational political argumentation and critical dialogue; at the same time, digital traces and sources of 'big data' are being used to understand as well as channel and subvert public opinion. Indeed, they are in part also used to create a mistrust of rational deliberation; the pursuit of consensus is seen by some as a weakness.

Search and social media optimisation, behavioural targeting, and other mechanisms are also powerful here because of an absence of institutional, legal, and regulatory frameworks for their operation. In the European Parliament there are almost unlimited possibilities to draw on scholars and other experts to inform such policies, yet much of the political response still relies on intuitive, knee-jerk reactions rather than on evidence informed by research; additionally, media effects research is now again a critical discipline, but many of its findings remain deeply disputed and lag behind the current technological environment.

Deliberation is also work. We talk to each other, engage in logical argument, contest each other's views, and come to some form of rational consensus. Yet much political argumentation is now performed in visuals and figures, and the politicians charged with such argument in parliamentary processes often lack the domain knowledge to fully comprehend what they are dealing with. Legislation on voter profiling, on data retention, on privacy, on 'big data' is difficult to develop if the legislators do not have a full understanding of what they are legislating on – to enshrine such legislation in meaningful legal text is even more complicated.

To make the meaning of such legislation intelligible to ordinary citizens, potentially with even less domain knowledge, is a further significant challenge. In the new EU legislation on e-privacy, which has been a long time coming, EU citizens are empowered to give informed consent about how algorithms are processing their data – but how are they to do so without a full understanding of what algorithms are and how they work? This is centrally also an educational task.

An Estonian debate about neoliberal, data-driven governance (or 'governance by Excel') resulted in the Charter of the Twelve (Marju was one of the signatories), which called for different ways to engage Estonian citizens in decision-making; it generated a substantial level of citizen engagement and kickstarted a process of crowdsourcing and reviewing proposals for new initiatives. Although this received the support of the Estonian President at the time, the government of the day attacked and undermined the proposals; this demonstrates how difficult it is to make relevant institutional changes in a democratic society once specific governmental systems have ossified and particular interests have emerged.

Estonia also remains at the forefront of the use of digital technologies in government, but here, too, there are substantial digital divides; these may be less about access to technologies now, but more about digital literacies, digital skills, and the time available to engage in digital activities. Indeed, digital activities have not replaced more traditional forms of engagement, so they create additional calls on the time available to citizens. This creates various digitally active, but very differently resourced groups, and can serve to fragment society along these faultlines.

The EU is now pursuing the creation of a single digital market, but this will not work without a digital society. In particular, how may citizens be empowered to exercise full control over their digital data – and indeed, how is 'data' defined in this context? The legislation Marju has been instrumental in developing in this context seeks to protect privacy and individual autonomy in the digital environment: it protects the private, autonomous space of the individual person, and shields them from behavioural targeting for commercial and political purposes. This has raised concerns in the digital data industry, because it challenges its business models – which to date rely on a largely unregulated, 'Wild West' market environment.

One part of the rhetoric from the industry and its supporters has been to claim that modern citizens in fact no longer care as deeply about their human rights, and that in using digital services they are making an informed choice to give up some of their privacy in exchange for otherwise free services. Yet this assumes that citizens are able to understand all of the consequences of doing so, which is highly unlikely given the opacity of the industry. This is therefore also a fight over the ability of citizens to provide informed consent, and about cementing the principle of privacy by design in the digital industry.

The General Data Protection Regulation (GDPR) has already been enacted, and the digital industry has gradually come around to accepting it; the e-Privacy Regulation is currently under debate, but continues to be strongly opposed by the proponents of 'big data'. Lobbyists for European as well as transnational companies, including the 'big five', are still fighting hard against any new regulation in this space – yet privacy by design must ultimately prevail as the leading principle for the development of digital technologies, and there is a substantial need for further scholarly research to underpin the argument for such developments.