Wednesday, 19 June 2013

The difference between objective and subjective perceptions


While at our Weatherhead Center workshop, we presented the results of our expert perceptions pilot study. Dawn Brancati kindly served as discussant, and she made a number of very useful and thought-provoking comments. In this post, we are going to focus on one of them.

To provide a bit of background, the core of our survey is a group of forty-nine questions encompassing all stages of the electoral process—as defined by the UN here and explained well by Jørgen Elklit and Andy Reynolds in a 2005 Democratization article.

The comment we are focusing on today is Dawn’s suggestion that we distinguish between objective (factual) measures and subjective (judgmental) measures within our survey. Understanding the difference between objective and subjective measures is potentially important to our project for several reasons, but for today one stands out.

Specifically, we believe that distinguishing objective from subjective measures could provide us a good way of controlling for differences between experts. This ability might be particularly important for electoral contests where no consensus exists about whether they meet standards of electoral integrity.

Consider the following table and figure. Table 1 shows the distribution of responses in the Czech Republic to the question “Did the election trigger violent protests?” Our Czech experts’ responses are very clear. There were no violent post-election protests and, accordingly, all of the experts strongly disagree that the election triggered violent protests. This judgment reflects an objective fact: either there were violent protests or there were not.

Table 1. Did the election trigger violent protests in the 2012 Czech Republic national election?

Response                      Frequency
Strongly disagree             18
Disagree                      0
Neither disagree nor agree    0
Agree                         0
Don't know                    0
Not applicable                0

Now let’s look at Figure 1, which summarizes the responses to the question “Do rich people buy elections?” in the Czech Republic. This question is one of four in our survey worded exactly as in the latest round of the World Values Survey. This overlap will be very useful for comparing elite and mass perceptions—a topic we will be discussing in future posts.

Figure 1. Responses: Do rich people buy elections in the Czech Republic?



As you can see, unlike for the election violence question, there is a divergence of responses. This result is unsurprising, since this question is more open to interpretation and subjective judgment than a question about whether there was violence. What counts as “rich”? What does it take to buy an election? Do the rich need to alter the results directly, or is it enough to buy the candidates? Clearly, this question is more subjective.

We find the differences between potentially objective and subjective questions interesting because the former could be used as a proxy for ‘expert knowledge’ to better understand answers to the latter—we could check whether objective events did or did not occur and then see if individual experts’ answers reflect this fact.

We could then weight experts’ answers to subjective questions according to their answers to objective questions. If an expert ‘fails’ (i.e. answers incorrectly) many of the objective questions, then it may be worth discounting his or her answers. This could also partially address a point raised by Andreas Schedler in his 2012 piece in Perspectives on Politics (available here). Schedler stresses the importance of holding experts accountable for their judgments. Evaluating our experts’ knowledge through an external verification method like the one we propose here could provide a clear means of improving our confidence in the aggregate survey results.
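To make the idea concrete, here is a minimal sketch of one possible weighting scheme, not a scheme we have settled on: each expert receives a weight equal to the share of objective questions he or she answered correctly, and that weight is then used to compute a weighted average of answers to a subjective question. The data structures, the simple proportional weight, and the example values are assumptions made purely for illustration.

```python
# Minimal sketch of accuracy-based expert weighting (illustrative only).
# Assumes each expert answered some objective questions with known ground
# truth, plus subjective questions recorded on a numeric scale.

def expert_weight(objective_answers, ground_truth):
    """Weight = share of objective questions answered correctly."""
    if not objective_answers:
        return 0.0
    correct = sum(1 for q, ans in objective_answers.items()
                  if ground_truth.get(q) == ans)
    return correct / len(objective_answers)

def weighted_subjective_score(experts, ground_truth, question):
    """Weighted mean of experts' answers to one subjective question."""
    weights, scores = [], []
    for expert in experts:
        if question in expert["subjective"]:
            weights.append(expert_weight(expert["objective"], ground_truth))
            scores.append(expert["subjective"][question])
    total = sum(weights)
    if total == 0:
        return None  # no expert with a positive weight answered this question
    return sum(w * s for w, s in zip(weights, scores)) / total

# Hypothetical example: two experts, one objective check, one subjective item.
truth = {"violent_protests": "strongly_disagree"}
experts = [
    {"objective": {"violent_protests": "strongly_disagree"},   # matches the facts
     "subjective": {"rich_buy_elections": 4}},
    {"objective": {"violent_protests": "agree"},                # misses the facts
     "subjective": {"rich_buy_elections": 1}},
]
print(weighted_subjective_score(experts, truth, "rich_buy_elections"))  # -> 4.0
```

One could of course discount less sharply (for example, by flooring weights at some minimum) rather than zeroing out experts who miss every objective question; the point is simply that objective questions give us a verifiable benchmark against which to judge expert knowledge.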

Thanks again to Dawn Brancati for her insights.

Please feel free to leave your own feedback in the comments section below.
-Ferran and Rich

Friday, 14 June 2013

Working towards a common goal: practitioners and academics discuss current challenges to electoral integrity

From left to right: Chad Vickery (IFES), Aleida Ferreyra (UNDP), Betilde Muñoz-Pogossian (OAS), David Carroll (The Carter Center), Eric Bjornlund (Democracy International), Annette Fath-Lihic (International IDEA), Staffan Darnolf (IFES)

Practitioners from a wide range of organizations, including UNDP, the OAS, The Carter Center, IFES, Democracy International, and International IDEA, joined academics in discussing concepts and challenges to electoral integrity at the 2013 Annual Workshop of the Electoral Integrity Project, which took place at the Weatherhead Center for International Affairs at Harvard University. The conference provided a much-needed platform for an exchange of ideas between practitioners and researchers; specifically, a panel of practitioners on the workshop's second day highlighted three issues that are extremely useful to academics working on democracy-related topics.

The practitioners first focused on subjects in need of further research. While an abundance of research centers on election-day shortcomings such as fraud and vote buying, there is still a great need to add to the literature on the factors—structural and circumstantial—that hinder democracy. For example, among the topics suggested for further research by the OAS's Betilde Muñoz-Pogossian were (1) the use of social media to report election irregularities and violence before, during, and after elections; (2) campaign and party finance, which may both enable citizens to make a more informed decision on Election Day and hold politicians accountable for how they receive and spend funds; and (3) a subject unanimously supported by the panel: studies evaluating the impact that international organizations' projects have on the ground. These topics are fast-changing and challenging to study; however, the knowledge they produce is greatly needed by the practitioner world and will most likely be used to inform policy decisions.

The second issue brought up at the panel was the need to focus academic research on prediction (identifying empirical patterns) as well as on explaining consequences; currently we’re seeing more research on the latter than on the former. This may be controversial to some extent, given that each election is the product of unique historical, social, political, and cultural factors; however, it is well worth asking the research community whether recurrent factors can be identified and linked to the occurrence of certain events. For example, are there factors that can predict vote rigging, or when a certain type of social media will be used over another? “Predictions rather than explanations” was the takeaway emphasized by IFES regional director Chad Vickery.

Lastly, a strong emphasis was placed on bridging the gap, to the extent possible, separating researchers and practitioners. Indeed, several academics might be working on a subject, and practitioners might be looking for experts on that same subject, and yet the connection is never made. During the question and answer period, academics agreed. Both Essex’s Sarah Birch and Yale’s Susan Hyde stressed that a common platform matching academics’ research interests with practitioners’ needs is necessary in order to help bridge the gap.
The 2013 “Concepts and Challenges to Electoral Integrity” workshop proved to be a needed and useful step towards that end. We look forward to assessing the progress made at next year's conference (held in conjunction with the Australian Political Science Association annual conference) at the University of Sydney, where the understanding of challenges to electoral integrity will be expanded in scope and strengthened in depth, and where, once more, a rare opportunity for academics and practitioners to meet will yield a productive conversation about the constant challenges to the advancement of democracy.

Wednesday, 12 June 2013

Measuring Experts' Perceptions of Electoral Integrity

(Left to Right: Sarah Birch, Richard Frank, Pippa Norris, and Ferran Martínez i Coma)

On 3 June 2013, Richard Frank and I presented the results of our expert survey at the conference at the Weatherhead Center for International Affairs at Harvard University. We were part of the first panel, shared with Pippa Norris, Jørgen Elklit, Andrew Reynolds, and Sarah Birch. Our paper (available here) introduces and analyzes pilot-stage results of the Electoral Integrity Project's expert survey on Perceptions of Electoral Integrity (PEI). The pilot stage involved surveying experts on elections in twenty countries in the second half of 2012.

We define an expert as a political scientist (or social scientist in a related discipline) who has published on, or who has other demonstrated knowledge of, the electoral process in a particular country. We understand demonstrated knowledge through the following criteria: (1) membership in a relevant research group, professional network, or organized section of such a group; (2) existing publications on electoral or other country-specific topics in books, academic journals, or conference papers; or (3) employment at a university or college as a teacher.

Occasionally we also drew on other social scientists, including scholars of law and sociology and, to a lesser degree, economics, anthropology, mathematics, and statistics. During the pilot phase, we sought at least forty experts per country, including both domestic and international experts. The distinction was drawn based upon the location of institutional affiliations, and monitored in the survey through citizenship and country of residence.

You can watch our presentation here, and the video is embedded below.


Overall, the pilot study's results are encouraging and demonstrate substantial external validity when compared to other datasets and mass opinions.

Regarding the former, there is substantial (although thankfully not total) agreement with existing measures. As a first-cut effort at analyzing our results, we constructed an additive index of our forty-nine measures and rescaled it to a 100-point scale. Our results suggest that 58% of our index's variance is explained by the 2010 Freedom House index (also scaled to 100 for comparability).


PEI pilot results and Freedom House (2010) measures of political freedom
(Source: Pippa Norris, Ferran Martínez i Coma and Richard W. Frank. The expert survey of Perceptions of Electoral Integrity, pilot study April 2013. Available at www.electoralintegrityproject.com)
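To make the index construction concrete, here is a minimal sketch, assuming the pilot data sit in a pandas DataFrame with one row per election and forty-nine item columns on a common numeric scale. The column names, the min-max rescaling, and the simple linear fit are assumptions made for illustration; they are not the project's actual code.

```python
# Illustrative sketch: an additive PEI-style index rescaled to 0-100 and
# compared against an external measure (e.g. Freedom House, also on 0-100).
import numpy as np
import pandas as pd

def additive_index(df: pd.DataFrame, item_columns: list) -> pd.Series:
    """Sum the item scores for each election and rescale the sums to 0-100."""
    raw = df[item_columns].sum(axis=1)
    return 100 * (raw - raw.min()) / (raw.max() - raw.min())

def variance_explained(index: pd.Series, external: pd.Series) -> float:
    """R-squared from a simple linear fit of the index on the external measure."""
    slope, intercept = np.polyfit(external, index, 1)
    residuals = index - (slope * external + intercept)
    return 1 - (residuals ** 2).sum() / ((index - index.mean()) ** 2).sum()

# Hypothetical usage with made-up column names:
# pei = additive_index(elections, [f"q{i}" for i in range(1, 50)])
# r2 = variance_explained(pei, elections["freedom_house_2010"])  # ~0.58 in the pilot
```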



A similar agreement is found with Kelley’s (2012) Quality of Elections data (also scaled to 100), although these data are substantially older (from 2004) than the 2010 Freedom House data.


 PEI pilot results and Judith Kelley's (2012) Quality of Elections database
(Source: Pippa Norris, Ferran Martínez i Coma and Richard W. Frank. The expert survey of Perceptions of Electoral Integrity, pilot study April 2013. Available at www.electoralintegrityproject.com)


Most compelling from a theoretical perspective for us is the relationship between expert and mass perceptions. We include in the PEI survey four questions also included in the sixth wave of the World Values Survey. So far five countries (Ukraine, Romania, Mexico, Ghana, and the Netherlands) have had elections in the pilot time frame and were also included in the WVS sixth wave. See the figure below.


PEI results and the World Values Survey
(Source: Pippa Norris, Ferran Martínez i Coma and Richard W. Frank. The expert survey of Perceptions of Electoral Integrity, pilot study April 2013. Available at www.electoralintegrityproject.com)


In general, many of the problems of electoral integrity highlighted by our expert survey were similar to those highlighted by election observer reports. In addition to our working paper summarizing the pilot stage, the data used for the study are publicly available at: http://www.electoralintegrityproject.com. We encourage comments and feedback.

Monday, 10 June 2013

New problems of electoral integrity

Claims of voter fraud and voter suppression in US elections. Robo-call scandals in Canada. Allegations of vote-buying in Mexican contests. Ballot-stuffing in Armenia. And heavy-handed suppression in Iranian contests.
What do all these problems have in common?
On 3-4 June 2013, over seventy leading international scholars and practitioners met at the Weatherhead Center for International Affairs, Harvard University, to discuss electoral integrity. More than thirty papers presented new research investigating the causes and consequences of problems of electoral integrity.

Organized by the Electoral Integrity Project, the workshop debated issues as diverse as public trust and confidence in elections in the United States, Latin America, and sub-Saharan Africa; the performance of Electoral Management Bodies in the UK, Central America, and Sudan; and new techniques for detecting voter fraud in Ghana, the US, and Afghanistan. All papers are available at www.electoralintegrityproject.com.


The workshop concluded with a panel discussing the next steps in the research agenda that would be most useful for the international community. The panel included representatives from The Carter Center, IFES, the Organization of American States, the United Nations Development Programme, Democracy International, and International IDEA.


Overall, the workshop strengthened the networks of scholars and practitioners focused on these challenges and laid the groundwork for the next generation of research on strengthening electoral integrity. The subsequent workshop to develop this program will meet in Chicago on 28 August 2013, in conjunction with the annual meeting of the American Political Science Association.

Sunday, 2 June 2013

The 2nd Annual Electoral Integrity Workshop meets at Harvard

The Weatherhead Center, Harvard University.
(Credit: Harvard University)
The second annual EIP workshop (June 3-4) on electoral integrity kicks off tomorrow morning at the Weatherhead Center for International Affairs at Harvard University. We are all looking forward to a productive and intellectually engaging few days with over seventy participants, thirty papers being presented, and a roundtable of representatives from the UN Development Program, the Organization of American States, the Carter Center, International IDEA, IFES, and other international organizations.

The workshop schedule and papers can be found on our website. We will be posting updates on our Twitter page @ElectIntegrity, and participants are encouraged to post thoughts, notes, etc. on Twitter using the hashtag #EIP2013. We are also going to record the panels and upload them to YouTube after the workshop.

Even if you are not able to attend the workshop, we encourage you to read the papers, watch the presentations, and provide feedback on our website and on Twitter and Facebook.