Next to the systematic literature review (see Part I), this research comprises an empirical study of 22 cases of digital tools that have been or are still being used as instruments for citizen involvement in democratic processes. A large part of the cases was requested by the Panel for the Future of Science and Technology at the European Parliament, which commissioned this research. The other cases were selected on the basis of four criteria: (1) diversity of tools, (2) diversity of institutional contexts and scales (local, national, European and some international),1 (3) geographical diversity and (4) different types of citizen involvement. Together, these criteria provide a broad perspective on the kinds of tools that could be used to strengthen participatory democracy at the EU level. Of course, we do not claim that this set of case studies is representative of all uses of digital tools discussed in our literature review. It remains a selection that, had there been no space limitations, could have been extended to correspond even more closely with our conceptual framework and the full arsenal of digital practices in political participation.
5.1 Evaluation Framework
The description of the 22 cases is based on an evaluation framework for assessing the digital tools. The key elements of the framework were selected in line with the project’s central aim: to identify and analyse best practices with digital tools for participatory and direct democracy at different political and governmental levels (local, national, European) that could in the future be used at the EU level to encourage citizen engagement and counter the European democratic deficit.
In view of the current crisis of representative democracy, citizens’ disengagement from democratic processes and their distance from EU institutions, the restoration and enhancement of democratic legitimacy at the European level is needed. We therefore put legitimacy and its key dimensions (Schmidt 2013) centre stage in the evaluation framework and use it as the basis for differentiating further, more specific evaluation aspects. In this we follow the Council of Europe in its recommendation on e-democracy, as referred to in the Introduction: “E-democracy, as the support and enhancement of democracy, democratic institutions and democratic processes by means of ICT, is above all about democracy. Its main objective is the electronic support of democracy” (Council of Europe 2009: 1).
Overview of case studies

| Type of involvement | Formality | Case | Level | Country |
|---|---|---|---|---|
| Monitoring | | TheyWorkForYou | National | Great Britain |
| | | Abgeordnetenwatch | National | Germany |
| Agenda setting | Informal | Petities.nl: Dutch e-petitions site | National | Netherlands |
| | | Open Ministry Finland: crowdsourcing for law proposals | National | Finland |
| | Formal | Iceland: crowdsourcing for a new constitution | National | Iceland |
| | | Future Melbourne Wiki: crowdsourcing for city planning vision | Local | Australia |
| | | Predlagam.vladi.si: platform for e-proposals and e-petitions | National | Slovenia |
| | | European Citizens’ Initiative: citizens’ proposals for new EU laws | European | EU |
| | | Participatory budgeting Berlin | Local | Germany |
| | | Internetconsultatie.nl: consultation on draft laws | National | Netherlands |
| | | Futurium: consultation on EU (digital) policy making | European | EU |
| | | Your Voice in Europe: (open) public consultation on EU policy | European | EU |
| | | European Citizens’ Consultation: pan-European consultation on the future of Europe | European | EU |
| Decision-making | Non-binding | Pirate Party Germany | National/district | Germany |
| | | Five Star Movement | National | Italy |
| | | Podemos | National | Spain |
| | | Participatory budgeting Belo Horizonte | Local | Brazil |
| | | Participatory budgeting Paris | Local | France |
| | | Participatory budgeting Reykjavik | Local | Iceland |
| | Binding | Voting for Spitzenkandidaten in the 2014 EP elections within the Green Party | European | EU |
| | | E-voting for elections | National | Estonia |
| | | E-voting for elections/referenda | National | Switzerland |
Fritz W. Scharpf (1999) divided democratic legitimisation into input legitimacy, judged in terms of the EU’s responsiveness to citizen concerns as a result of participation by the people, and output legitimacy, judged in terms of the effectiveness of the EU’s policy outcomes for the people. Vivien Schmidt (2013) has added a third criterion to this theorisation of democratic legitimacy for evaluating EU governance processes: throughput legitimacy, which judges those processes in terms of their inclusiveness and openness to consultation with the people.
The distinction between these three criteria for democratic legitimacy helps to explain the particular relevance of the democratic deficit during the recent and current EU crises. Because of their transnational character, EU institutions can hardly root their legitimisation in strong channels of input from citizens (input legitimacy) and consultation with citizens (throughput legitimacy); they must therefore rely on legitimising their policies through the quality of their output, that is, on their decisions and regulations being in the best interest of, and thus supported by, the citizenry (output legitimacy). That the means of the EU institutions are restricted in this latter respect as well has a special bearing in times of crisis. The missing input legitimacy becomes more problematic the weaker output legitimacy gets, as illustrated by the apparent difficulty of establishing consensus on, for example, a joint European policy to address the refugee problem. In a situation where strong decisions have to be taken at the EU level (beyond national interests), input and also throughput legitimacy are urgently needed.
The three types of legitimacy place different demands on digital tools for citizen involvement. The following paragraphs address these demands in turn.
Regarding input legitimacy, the use of digital tools will be assessed for how it enhances the voice of citizens in the political decision-making process. “Voice” concerns the way in which affected citizens are able to influence the political agenda (Manin 1987). To what extent are citizens enabled to express their wishes and interests in political decision-making? How can citizens put an issue on the political agenda? Do citizens have equal opportunity to voice their concerns? Are citizens sufficiently supported in their efforts to make their voices heard in the process (i.e. interaction support)? Is the tool user-friendly (i.e. tool usability)?
Regarding throughput legitimacy, an evaluation will be made of how digital tools contribute to the quality of the deliberation process, in terms of an inclusive dialogue and a careful consideration of options (Cohen 1989). Relevant questions are: to what extent do the views of the citizens expressed through the digital tool represent the views of the general population (i.e. representation)? How is the diversity of views within the population (including minority views) reflected in the process? Are the different policy options carefully considered in the deliberation process? Do the citizens have access to all the relevant information about the decision-making process to which the results of the digital citizen involvement should contribute?
Concerning output legitimacy, responsiveness to the arguments and proposals of citizens (Cohen 1989) and effectiveness (Scharpf 1999) will be evaluated, along with the accountability of decisions made. To what extent do the tools substantially contribute to the political decisions made (i.e. democratic impact)? How do the digital tools contribute to feedback? Is information provided about the decision-making process and its outcomes (i.e. accountability)?
Evaluation framework for assessing digital tools
Key dimensions | Demands | Specific questions |
---|---|---|
Input legitimacy | • Information/equality of opportunity • Tool usability • Interaction support • Voice | • Has the possibility to participate been effectively communicated to the target group? • Is the tool accessible for every member of the target group to participate? • Are the participation tools considered usable, reliable and secure? • How and to what extent are participants enabled to express their wishes and interests? • How and to what extent are the participants able to set the (political) agenda? • Does the design help to involve citizens beyond the participation elite? |
Throughput legitimacy | • Deliberation quality • Representation • Diversity/inclusion | • To what extent is information provided about the complete decision-making process and how is the citizen participation part of this (during the process)? • How is information provided to the participants about the issues at stake? • Does the tool encourage interactive exchange of arguments between participants? • Does the tool encourage interaction between the views of participants and views of the officials/politicians? • To what extent are the participants representative of the target group? • To what extent is the input of and/or conversation between participants moderated? • How is the diversity of views of the participants managed (aggregated?) in the process; are minority standpoints included? |
Output legitimacy | • (Cost)-effectiveness • Democratic impact • Accountability • Responsiveness | • How does the instrument contribute to the decision-making process and its outcomes? • Does the tool increase the transparency of the issues at stake? • Does the tool help to enhance accountability: informing who is responsible for what action? • How are participants informed about the outcomes and about what has been done with their contributions (afterwards)? • Does the process provide space to the official/politician to make their own judgement, selection or assessment? |
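To make the framework concrete, the dimensions and demands above can be encoded as a simple scoring rubric. The following Python sketch is purely illustrative: the numeric 1–5 ratings and the `score_case` helper are our own assumptions for demonstration, not part of the study’s actual instrument.

```python
# Illustrative encoding of the evaluation framework: each legitimacy
# dimension maps to the evaluative demands listed in the table above.
FRAMEWORK = {
    "input": ["information/equality of opportunity", "tool usability",
              "interaction support", "voice"],
    "throughput": ["deliberation quality", "representation",
                   "diversity/inclusion"],
    "output": ["(cost-)effectiveness", "democratic impact",
               "accountability", "responsiveness"],
}

def score_case(ratings):
    """Average per-demand ratings (e.g. on a 1-5 scale) into one score
    per legitimacy dimension. `ratings` maps demand name -> number.
    Dimensions with missing demand ratings are skipped."""
    return {
        dim: round(sum(ratings[d] for d in demands) / len(demands), 2)
        for dim, demands in FRAMEWORK.items()
        if all(d in ratings for d in demands)
    }
```

Such an encoding makes it easy to compare cases dimension by dimension, which is also how the questionnaire results feed into the cross-case analysis described below.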
5.2 Data Collection
Data on the cases were gathered through three methods:

- (grey) literature research
- a standardised online questionnaire
- semi-structured interviews
Methodological triangulation is thus key to our data-collection strategy: we used more than one method and source to gather data on the 22 cases, in order to cross-check our data and obtain construct validity (an effective methodology and a valid operationalisation) (Fielding and Warnes 2009). The elementary data for the case studies came from the (grey) literature about each case. In addition, two respondents per case were interviewed: (1) a professional involved in the case and (2) an expert who has scientifically studied and/or reflected on the case. The data collection was completed in February 2017.
The interviews took place in two steps. First, the interviewees were asked to complete a standardised online questionnaire evaluating the digital tool. For the e-voting cases a separate questionnaire was created, because not all questions were applicable there. The draft questionnaires were pre-tested in a pilot, and feedback was received from two external experts, which led to several adjustments.
Second, the respondents were interviewed face-to-face, by telephone or via Skype, in semi-structured sessions of no more than one hour. The individual questionnaire responses of the professionals and experts guided these interviews. The open follow-up questions explored, in a more qualitative way, the motivations behind respondents’ evaluation scores, and focused on a better understanding of the success factors, risks, challenges and EU suitability of the specific digital tool. In addition, unresolved issues within the case study (inconsistencies in the data or aspects on which no information could be found in the literature) were discussed with the respondents. The interviewees were able to comment on the transcript of the interview as well as on the draft case study.
The data collection ran from 2016 until February 2017. In a few cases, later developments (2017–2018) are addressed in the case descriptions.
5.3 Qualitative Comparative Analysis (QCA)
To analyse the case descriptions based on the findings of the desk research, the questionnaire and the interviews, we used the technique of Qualitative Comparative Analysis (QCA).2 QCA is a technique for systematically comparing different case studies; it is intended to integrate qualitative case-oriented and quantitative variable-oriented approaches (Ragin 1987). The QCA technique aims at “meeting the need to gather in-depth insight into different cases and to capture their complexity, while still attempting to produce some form of generalization” (Rihoux and Ragin 2009, xvii).
Our research has an intermediate-N design comprising 22 cases. This sample is too large for in-depth analysis alone and too small for conventional regression analysis, but it is well suited to QCA (cf. Gerrits and Verweij 2015). It is precisely in such intermediate-N research designs that QCA acknowledges internal case complexity on the one hand while enabling cross-case comparison on the other (Rihoux and Ragin 2009, xvii).
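The core of (crisp-set) QCA is the truth table: cases are coded 0/1 on a set of conditions, grouped by their configuration of conditions, and the configurations consistently linked to the outcome are identified. The sketch below illustrates this logic with hypothetical cases and condition names (`usability`, `representation`, `responsiveness`, outcome `impact`); it is not the study’s actual coding, and a full QCA would further minimise the resulting configurations (e.g. via Quine–McCluskey reduction).

```python
from collections import defaultdict

# Hypothetical binary-coded cases: each case is scored 1/0 on three
# conditions and one outcome (names and values are illustrative only).
cases = {
    "CaseA": {"usability": 1, "representation": 1, "responsiveness": 1, "impact": 1},
    "CaseB": {"usability": 1, "representation": 0, "responsiveness": 1, "impact": 1},
    "CaseC": {"usability": 0, "representation": 1, "responsiveness": 0, "impact": 0},
    "CaseD": {"usability": 1, "representation": 0, "responsiveness": 1, "impact": 1},
    "CaseE": {"usability": 0, "representation": 0, "responsiveness": 0, "impact": 0},
}

CONDITIONS = ("usability", "representation", "responsiveness")

def truth_table(cases, conditions, outcome):
    """Group cases by their configuration of condition values and record
    which outcome values each configuration exhibits."""
    table = defaultdict(lambda: {"cases": [], "outcomes": set()})
    for name, scores in cases.items():
        config = tuple(scores[c] for c in conditions)
        table[config]["cases"].append(name)
        table[config]["outcomes"].add(scores[outcome])
    return dict(table)

tt = truth_table(cases, CONDITIONS, "impact")

# Configurations consistently linked to the positive outcome
# (no contradictory cases sharing the configuration).
sufficient = [cfg for cfg, row in tt.items() if row["outcomes"] == {1}]
```

In the hypothetical data above, the configurations (1, 1, 1) and (1, 0, 1) are consistently linked to positive democratic impact, which a subsequent minimisation step would reduce to the simpler expression “usability AND responsiveness”. This mirrors how QCA preserves each case’s internal configuration while still allowing cross-case generalisation.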
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.