Article

International Political Science Review
1–19
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0192512119832924
journals.sagepub.com/home/ips

Evaluating electoral management body capacity

Holly Ann Garnett
Royal Military College of Canada, Canada

Abstract
Electoral management bodies (EMBs) perform many functions crucial to promoting electoral integrity,
from registering voters to resolving post-election disputes. The capacity of an EMB to perform its tasks,
however, is difficult to measure in cross-national perspective. Data on resources and personnel provide only
a partial picture of EMB capacity, and expert surveys are limited in their comparability. This article presents
a new proxy for measuring EMB capacity. It employs a content analysis of EMB websites in 99 countries to
measure the presence of indicators of their major functions. It assesses the measurement validity of this
new measure of capacity and conducts a small-scale test to determine whether EMBs that score highly do
actively communicate with their citizens. An application of this new measure of EMB capacity demonstrates
its importance in predicting overall electoral integrity, indicating its importance for future scholarly and
policy research.

Keywords
Elections, electoral integrity, election management, capacity, e-governance

Introduction
Electoral management bodies (EMBs) perform many crucial tasks throughout the electoral cycle:
from pre-election activities such as boundary delineation and voter registration, through election-
day administration of voting procedures and the counting of ballots, to post-election reporting and
auditing. However, the design and conduct of EMBs around the world vary greatly. In recent years,
variations in the formal structure of EMBs have received increasing scholarly attention, focusing
on issues such as independence (Hartlyn et al., 2008; van Aaken, 2009; van Ham and Lindberg,
2015) and centralization (James, 2016).
However, the capacity of EMBs, or their ability to perform their functions, has received
considerably less study, largely due to a lack of comparative cross-national data. Without a way to
measure EMB capacity across countries, scholars have shied away from its study. Yet EMB capacity
is likely to be a crucial predictor of overall electoral integrity: EMBs are involved in all aspects
of running elections, and their ability to manage elections and perform key tasks such as
identifying voters and counting the ballots is crucial.

Corresponding author:
Holly Ann Garnett, Department of Political Science, Royal Military College of Canada, PO Box 17000,
Station Forces, Kingston, Ontario, K7K 7B4, Canada.
Email: holly-ann.garnett@rmc-cmr.ca
This article therefore considers a variety of ways of measuring EMB capacity. It first examines
the resources, including budgets and staff, at an EMB’s disposal, and secondly explores expert
perceptions of EMB capacity. However, these methods of comparing an EMB’s capacity have
substantial disadvantages, including data incompleteness, lack of comparability and precision, and
a reliance on perceptions or judgements.
Thus, this article contributes a new proxy for EMB capacity in the form of a website content analysis.
Data points were collected through a content analysis of EMB websites in 99 countries that held national
elections between mid-2012 and 2014 and transformed into a scale of EMB capacity using Mokken
scaling analysis. This article assesses the measurement validity of this new method of evaluating EMB
capacity, first through a small-scale test of email responsiveness to demonstrate that EMB websites are
not simply static facades, and second by testing convergent validity, by comparing the scores with
expert perceptions of EMB capacity and performance, and overall government effectiveness.
Finally, this article uses this new measure of EMB capacity to demonstrate that capacity is key
to understanding variations in overall electoral integrity between countries. This article therefore
contributes to our comparative understanding of EMBs themselves and presents a new avenue for
research into the capacity of EMBs to perform the tasks that are crucial to electoral integrity.

EMB capacity
Following the introduction to this special issue, EMBs are defined as the variety of organizations that
are involved in running elections. The capacity of electoral management bodies refers to their ability to
perform functions and achieve their goals. International organizations often employ the term capacity
for the purposes of international assistance programmes. The United Nations Development Programme,
for example, defines capacity as ‘the ability of individuals, institutions and societies to perform func-
tions, solve problems, and set and achieve objectives in a sustainable manner’ (2009). This definition
suggests that capacity refers to overarching abilities of an organization to achieve its goals.
Scholars of public administration likewise suggest that capacity refers to specific abilities or
skills that may be mobilized (Christensen and Gazley, 2008). In an article on non-profit manage-
ment, for example, Eisinger defines capacity as ‘a set of attributes that help or enable an organiza-
tion to fulfill its missions’ (2002: 117). This is particularly useful in terms of defining EMB
capacity, since these overarching attributes or abilities may be mobilized to perform key electoral
functions throughout the electoral cycle, regardless of what specific tasks an EMB may face.
It is important to clarify that capacity is distinct from the other ways EMBs are compared,
against attributes such as impartiality or autonomy. For example, an EMB can be highly impartial
and its actions not influenced by the incumbent government, yet still lack the capacity to register
voters, set up enough polling stations, and accurately tabulate the results. This article seeks to care-
fully distinguish capacity from these other characteristics of EMBs.

Resources
One way to consider whether an EMB has the ability to perform its functions is to examine whether
it has adequate resources to do so. Elections carry a high price tag, including the salaries and ben-
efits of EMB personnel, both permanent and temporary, rent for office space and polling locations,
the purchasing of voting materials, including voting machines or printed ballots, and various non-
material goods such as advertisements and public outreach campaigns. A report from the early
2000s suggested that while established democracies have relatively low costs-per-elector (ranging
from US$1 to US$3), some emerging democracies were spending as much as US$45.50 per elector (Cambodia in
1993) (IFES and UNDP, 2005).1
Some research has considered whether greater funding can translate into better EMB perfor-
mance. Clark takes advantage of the decentralized nature of election administration in the United
Kingdom to test the relationship between local EMB budgets and electoral performance, as evalu-
ated by local returning officers (2014, 2016). He finds that higher budgets can, in fact, predict
better overall electoral integrity (see also Clark, this issue).
To directly measure EMB resources, this article uses data from a survey of EMBs around the
globe, called the Electoral Management Survey (EMS), conducted between June 2016 and
October 2017.2 At the same time, a sister survey was conducted by the Electoral Integrity Project,
called ELECT.3 The results of these surveys give a picture of the variation in EMBs’ budgets per
capita in the last election year (standardized to 2016 US$ and adjusted for purchasing power par-
ity) (Appendix A).4
The budgets of the 50 EMBs that provided data vary greatly, from less than 1 cent per person
for Mozambique’s National Commission of Elections and Afghanistan’s Independent Election
Commission, to almost $40 per person for Zimbabwe’s Electoral Commission.5 It is important to
note that this is not the same as the cost of running an election, since multiple bodies may be
involved in running elections in a given country, and organizations may be supported by interna-
tional aid for specific electoral events.
One of the major expenses of EMBs is the recruitment and retention of high-quality permanent
and temporary staff to run elections. A number of studies that examine the impact of EMB person-
nel on public confidence in elections consider EMB personnel capacity according to the profes-
sional qualifications or experience of EMB members or commissioners (Herron et al., 2017).
Considering just the numbers of permanent staff per 100,000 people, the surveys demonstrate
some surprising variations in staffing levels. In particular, Panama’s Electoral Tribunal reported
more than 70 staff per 100,000 population, which is more than double any other body that responded
to the survey. Most EMBs also reported that large numbers of additional staff are added during
election times, including temporary central or regional employees. In fact, the responses to whether
additional staff were added during election periods ranged from no additional staff to over a mil-
lion additional staff reported in Afghanistan. EMBs also second (or ‘borrow’) staff from other
government departments; 63% of EMBs that responded to this question in the surveys noted that
they do borrow additional government staff during intense periods of the electoral cycle.
While these data provide some insights into the resources expended on elections around the
world, the incompleteness and incomparability of these data reveal some important challenges in
directly measuring EMB capacity through evaluating their resources. First, the surveys reveal that
a variety of organizations are involved in running elections, ranging from independent electoral
commissions to census agencies. This makes collecting data on the cost of elections or the number
of personnel that work on elections especially difficult, as data would need to be collected from
different organizations in each country (Garnett, 2017). Additionally, the decentralization of elec-
tions in some countries means that more than one level of government may contribute to the man-
agement and costs of elections. For example, in Finland, each municipality is involved in the
practical matters of running elections. Collecting all the relevant budgetary and staffing data for
this country would require an impressive exercise of coordination with all municipalities, which is
difficult for any specific country, let alone for all countries worldwide.
Secondly, personnel and resources may move between projects and not be exclusively used for
elections. While this article reports on permanent staff in specific organizations, it is difficult to meas-
ure how many staff are actively working on elections in any election year. Staff are seconded from
other departments and temporary staff are brought in for election time. Furthermore, staff in govern-
ment ministries or offices involved in running elections may not work solely on electoral activities, as
their ministries or offices may have other responsibilities. For example, Identity Malta is responsible
for all identity documents in the country, including voter registration. Likewise, resources (such as
facilities or supplies) may be used by multiple programmes. It is therefore nearly impossible to deline-
ate what is used solely for elections and what is not, for some organizations and countries studied.
Thirdly, budgetary and staffing data may not give a full picture of overall EMB capacity due to
the presence of foreign assistance in a large number of countries (49% of EMBs that responded to
this question reported receiving some form of electoral assistance), involved in a variety of tasks,
from training and advice to directly providing financial support. It is therefore nearly impossible to
adequately account for this assistance in total budgets and staff on a cross-national basis. The
impact of foreign assistance in electoral management capacity building remains understudied, but
without taking this key variable into consideration, scholars cannot attain a clear picture of the
resources expended on elections in many developing countries.
Finally, there are serious challenges in collecting complete and accurate budgetary and staffing
data from EMBs. All EMBs are involved in different functions and use different budgetary lines to
report expenses. Delineating these data would therefore require a great amount of cooperation and
work on the part of organizations that are already very busy. Additionally, access to data is limited
in some contexts. While some countries freely reported their election year budgets, others declined
to report this information, and while it may be possible to glean these data from annual reports or
budgets, these are not always made publicly available. Finally, the types of organization likely to
respond may have increased capacity, including the personnel available to fill out the survey, mak-
ing the responses non-generalizable. Thus, measuring capacity cross-nationally through data on
resources is difficult and often inappropriate.

Expert perceptions
In response to these challenges in directly measuring EMB capacity through budgets and staff
levels, recent research has employed expert surveys to measure perceptions of EMB capacity.
Expert surveys are commonly used by academics and practitioners to capture latent concepts that
may be difficult to directly measure (Maestas, 2016). Expert surveys are relatively easy to conduct
on a cross-national basis, since they rely only on the cooperation of experts (usually academics)
rather than the availability of raw data. They can also be conducted across time periods, and re-
evaluated yearly or at each election time.
There are two major expert surveys that have considered EMB capacity: the Varieties of
Democracy dataset and the Perceptions of Electoral Integrity dataset.6 Both of these datasets com-
pare overall electoral management capacity in a country, in order to avoid identifying the specific
electoral management bodies at work. In the Varieties of Democracy survey, for example, experts
were asked whether the given country has ‘sufficient staff and resources to administer a well-run
national election’ (Coppedge et al., 2016).
There remain a number of challenges with using expert perceptions to measure EMB capacity.
First, EMB capacity is often asked of experts alongside other election-related questions. It is pos-
sible that experts may pay less attention to technical items such as the conduct and capacity of
EMBs. Secondly, experts may have varying standards across countries. What may be perceived as
high capacity in a developing country might be considered very low capacity in a long-established
democracy. Experts may be using the same numbers on a scale to mean two very different things
depending on the country they are evaluating. Finally, measuring a latent concept like EMB capac-
ity requires experts to make evaluative judgements. Martinez i Coma and van Ham (2015) suggest
that expert perceptions of electoral integrity will be less accurate when they involve these types of
judgement, as opposed to factual information. This increases the risk of variance among the scores
of different experts and between countries. Additionally, experts’ perceptions of EMBs may reflect
confidence in government and politics more generally, media, partisan or government accounts of the
election, or personal experiences, rather than the EMB’s actual conduct or even the outcome of the
elections studied (Atkeson et al., 2015; Birch, 2008). Thus, expert judgements likewise complicate
cross-national comparisons.

An alternative measure of capacity: Website content analysis


Because of these challenges of measuring EMB capacity through the analysis of budgets and staffing
resources, or expert perceptions, it is necessary to find another observable way to measure and com-
pare EMB capacity across countries. This article uses a content analysis of EMB websites, which
exist for nearly all EMBs and are openly available on the internet, allowing for cross-national data
collection. The evaluation of government websites has become commonplace in the e-government
literature (Downey et al., 2011). A government department or agency’s online presence can be a use-
ful indicator of its activities, linkage with stakeholders and organizational capacity (Norris, 2001). In
the same way, evaluating EMB websites may be a useful proxy for indicators of EMB capacity.
Before turning to how this content analysis is conducted, it is worth noting that, like other measures
of EMB capacity, this source of data has some drawbacks. First, these data may be biased by levels of
internet penetration in a country, since EMBs will be more likely to devote time and resources to their
website if more citizens have internet access. Although there remains a ‘digital divide,’ it is estimated
that 40% of the world’s population was on the internet in 2014 (International Telecommunication
Union, 2014). Nonetheless, with the proliferation of access via smartphones and other personal com-
puting devices, the internet remains one of the most accessible means of communication between
EMBs and the public. Secondly, an EMB’s website may change frequently. This data source is there-
fore only a snapshot of an EMB at one particular point in time. Finally, this data source relies on online
content, and thus captures only digital evidence of EMB capacity. This proxy for EMB capacity should
therefore be considered in addition to the other measures mentioned earlier.

Indicators of capacity on an EMB’s website


To measure EMB capacity through a website content analysis, it is first worth considering what a
highly capable EMB should be able to achieve. This article considers the six major functions of an
EMB, according to the International IDEA Handbook, Electoral Management Design (Catt et al.,
2014: 75), and develops indicators of each function that may be expected on an EMB’s publicly
facing website (see Appendix B for full listing and coding scheme).7
The first function is ‘determining who is eligible to vote’, which includes the main task of ‘iden-
tifying and registering voters’. Transferring this to an EMB’s online presence, we may expect a
high capacity EMB to provide information on their website about voter registration.
The second task includes ‘receiving and validating the nominations of electoral participation
(for elections, political parties and/or candidates)’. Since this study focuses on an EMB’s publicly
facing website and linkage with voters, this item may not be reasonably expected to be on an
EMB’s public website, since it is useful to only the specialized audience of political parties and
candidates. Thus, there is no indicator collected on this function.
The next function, ‘conducting polling’, involves the entire process of running election day (and
pre-election day in cases where early voting is available). A number of indicators of this function
can therefore be collected from EMB websites, including information on polling procedures for
foreign and disabled voters, voter identification and electoral districts. While these indicators rely
on whether information is provided about these services, some are also direct measures of service
provision, for example, providing forms and instructions for overseas voters.
Evidence of the capacity to perform the next two functions, ‘counting the votes’ and ‘tabulating
the results’, should be found on an EMB website by the publishing of the results of the election.
The most transparent results are provided in units smaller than the total (meaning by district or
polling division), allowing the public to examine the election results in detail.
The final task is perhaps the broadest: ‘running a credible organization’. There are a number of
dimensions of this function. As governmental agencies and departments working on behalf of the pub-
lic, one of the most important qualities we expect from an EMB is accountability. Accountability can be
defined by three key principles: the communication of and justification for decisions made, the ability
for stakeholders to have input, and a clear recognition of where the body’s authority does and does not
lie (O’Loughlin, 1990). Communication and the ability of stakeholders to provide input refers to the
ways that citizens – one of the main stakeholders of EMBs – can connect and engage with their electoral
officials. This may include how voters can get in touch with their EMB for specific inquiries or to lodge
complaints. The ease and availability of different means of communication is also an indicator of
whether the EMB is engaged in assisting voters and other stakeholders with the election process.
Accountability also requires transparency, or the free flow of information, in this case from the
EMB to voters (Hollyer et al., 2014). Transparency can also include the information that citizens
can access about the identity of their EMB members or commissioners (or senior government offi-
cials) and their qualifications. Another indicator of transparency is whether citizens have access to
information about the accountability structure or hierarchy of the EMB. It is also important to
consider whether the EMB regularly reports on its activities, as these reports serve both as a good
delivered by the EMB to the public, as well as an indicator of accountability and transparency.
In sum, these indicators of EMB capacity that may be found on a website, detailed in Appendix
B, provide a useful proxy for overall EMB capacity in a country.

Data collection
These indicators of EMB capacity were collected from EMB websites in 99 countries. These coun-
tries were taken from the possible 107 countries that are included in the Perceptions of Electoral
Integrity Index (PEI3, 2012–2014), all of which have had an election since mid-2012. The primary
EMB in each country was selected as identified in the IDEA handbook (Catt et al., 2014). When two
EMBs were present in a country (for example, in a mixed system), the EMB performing the major
functions defined by the same IDEA handbook was selected. Of these 107 countries, eight did not
have an EMB website at the time of coding. These eight countries were not included in the analysis
for two reasons: first, to ensure that these outliers did not influence the results, and second, to account
for the possibility that these websites were simply inaccessible from outside the country or offline for
maintenance during the coding period.8
Research assistants proficient in one of the 45 languages used on the EMB websites studied were
hired to code the websites between June and October 2015.9 The starting point for coding was the EMB
homepage. The coder answered 20 questions about whether certain elements could be found on the
website (see Appendix B for the full list of questions). Each question asks for a simple ‘yes’ (1) or ‘no’
(0) answer. This dichotomous classification is advised as a useful basic scheme in building measures
(Collier et al., 2012). More practically, this avoids subjective coder judgements about the quality of the
information contained on the website. To be scored ‘yes’, the information must be accessible from the
website, without searching through legal or constitutional documents. It was acceptable to be sent to
other websites, such as a subnational EMB or, in the case of mixed EMBs, another government body.

To ensure the reliability of these data, each research assistant first coded an English- or
French-language website so that the researcher could review their work, confirm that they
understood the coding scheme, and answer any questions about particular elements. Additionally,
two coders were assigned to each website. Any differences between the
two coding results were re-checked by the researcher (sometimes using website translation func-
tions such as Google Translate). When it was not possible to see why the differences arose, both
coders were consulted and the question was discussed until a response was agreed upon.
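As a sketch, the double-coding check described above amounts to flagging any of the 20 binary items on which the two coders disagree; the item names and codings below are illustrative, not the study’s actual data.

```python
def disagreements(coder_a, coder_b):
    """Return the items on which two coders gave different 0/1 answers."""
    assert coder_a.keys() == coder_b.keys()
    return sorted(item for item in coder_a if coder_a[item] != coder_b[item])

# Hypothetical codings of three of the 20 binary website items.
coder_a = {"voter_registration": 1, "election_results": 1, "contact_email": 0}
coder_b = {"voter_registration": 1, "election_results": 0, "contact_email": 0}

flagged = disagreements(coder_a, coder_b)   # items to re-check with both coders
print(flagged)                              # ['election_results']
```

Flagged items would then be resolved by the researcher or discussed by both coders, as in the procedure above.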

Scaling
To build the capacity score, Mokken scaling analysis was used (Hardouin et al., 2011; van Schuur,
2003). This non-parametric technique considers how well the 20 binary variables collected by the cod-
ers form an additive scale. This method suggests that certain elements will be easier for EMBs to imple-
ment than others. For example, presenting the total final election results is easier for an EMB than
presenting the results in smaller units, such as by region or candidate. Likewise, presenting the names
of the EMB staff is easier than providing EMB members’ qualifications for the position. Mokken scal-
ing is particularly appropriate for building a web-based EMB capacity score, since it does not require a
priori assumptions about the relative importance of the elements we expect to find on EMB websites.
The initial analysis, using Mokken scaling, demonstrated that the 20 items cannot simply be
added together to form a scale of EMB capacity, since Loevinger’s H coefficients range from only
0.13 to 0.36. Only seven of the 20 items score above 0.30, indicating a weak, but acceptable, level
of scalability (van Schuur, 2003). Instead, Mokken scaling suggests four subscales, clustered
around four key themes, or dimensions of capacity (see Online Appendix for more details): results,
personnel, information for voters and communication. All four subscales qualify as having high
scalability (van Schuur, 2003). The dimensions are found in Table 1. Three of these subscales
(results, communication and information for voters) can be combined to form a 0 to 3 scale with
an acceptable Loevinger’s H-coefficient of 0.36.
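The logic of Loevinger’s H for dichotomous items can be illustrated as follows: it compares the observed Guttman errors (endorsing a harder item while failing an easier one) with the errors expected if items were statistically independent. This is a minimal sketch with made-up data, not a substitute for a full Mokken analysis (implemented, for example, in the R package `mokken`).

```python
def loevinger_h(rows):
    """Scale-level Loevinger H for a list of respondent rows of 0/1 item scores.

    H = 1 - (observed Guttman errors) / (errors expected under independence).
    H = 1 indicates a perfect cumulative (Guttman) scale; values near 0
    indicate no scalability.
    """
    n, k = len(rows), len(rows[0])
    p = [sum(r[j] for r in rows) / n for j in range(k)]   # item popularities
    order = sorted(range(k), key=lambda j: -p[j])         # easiest item first
    observed = expected = 0.0
    for a in range(k):
        for b in range(a + 1, k):
            i, j = order[a], order[b]
            # Guttman error: endorsing harder item j while failing easier item i
            observed += sum(1 for r in rows if r[i] == 0 and r[j] == 1)
            expected += n * (1 - p[i]) * p[j]
    return 1 - observed / expected

# A perfect cumulative pattern (e.g. total results -> results in smaller units)
# yields the maximum H of 1.
perfect = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(loevinger_h(perfect))   # 1.0
```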

Table 1.  Dimensions of EMB capacity.

Results (Loevinger H = 0.91): election results; election results in smaller units.
Personnel (H = 0.60): specific names to contact; hierarchy; name of EMB member(s); qualifications
of EMB members.
Information for voters (H = 0.60): disabled voters; foreign voters; voter identification; voter
eligibility; voter registration.
Communication (H = 0.56): contact in person; contact by post; contact by telephone.

The final EMB capacity score (0–3) is the sum of the 0–1 scores for transparency of results,
information and communication (created by Mokken scaling of the four dimensions listed above).
See Online Appendix for full details on the Mokken scaling used to create these scores.

The EMB capacity scores for each country are reported in Table 2 (more detailed scores are
presented in the Online Appendix). Twenty countries, from a variety of continents and levels of
economic development, received the top score of 3. This suggests that high capacity electoral man-
agement may be possible in a variety of settings. Seven countries had scores less than one, and the
lowest score of 0.33 was found for Djibouti.

Table 2.  EMB capacity scores.

Capacity score (0–3) Countries


3.00 Australia, Bhutan, Bulgaria, Colombia, Costa Rica, Fiji, Hungary, Japan, South
Korea, Malta, Mexico, Mongolia, Netherlands, Norway, Paraguay, Slovakia, South
Africa, Sweden, Thailand, Tunisia
2.50–2.99 Afghanistan, Albania, Austria, Bahrain, Botswana, Chile, Cyprus, Czech Republic,
Germany, Iceland, India, Indonesia, Jordan, Kenya, Latvia, Mauritius, Moldova,
Namibia, New Zealand, Pakistan, Philippines, Romania, Slovenia, Uruguay
2.00–2.49 Argentina, Belarus, Belgium, Bolivia, Bosnia and Herzegovina, Brazil, Burkina Faso,
Cambodia, Ecuador, Egypt, El Salvador, Georgia, Iraq, Italy, Lithuania, Macedonia,
Malawi, Malaysia, Mauritania, Nepal, Panama, Serbia, Solomon Islands, Togo,
Turkey, Ukraine, Venezuela, Zimbabwe
1.50–1.99 Algeria, Armenia, Azerbaijan, Bangladesh, Guinea-Bissau, Israel, Micronesia, Sierra
Leone, Swaziland, Tonga, United States
1.00–1.49 Angola, Cameroon, Grenada, Honduras, Iran, Kuwait, Maldives, Rwanda, Tajikistan
0.50–0.99 Barbados, Congo, Ghana, Guinea, Madagascar, Montenegro
0.00–0.49 Djibouti

See Online Appendix for full scores, and scores of sub-dimensions.

It is important to mention that there is a correlation between internet penetration and the final
EMB capacity scores (Corr. 0.44, p < 0.01). However, it is worth noting that there are examples of
countries with low EMB capacity scores with high rates of internet penetration (for example,
Kuwait with about 61 internet users per 100 people), and examples of countries with the highest
possible EMB capacity score that have low internet penetration (for example, Mongolia only has
an internet penetration rate of about 10 internet users per 100 people). Nonetheless, internet
penetration is included as a control when these scores are later used.

Assessing measurement validity


There are a number of tests of measurement validity that can be undertaken before using these new
scores of EMB capacity.

Email test
First, to address the concern that these EMB websites are merely static facades, or that an EMB’s
website information may not be backed up by staff who are willing to interact with citizens, a
test was conducted to determine whether EMBs responded to citizen inquiries via the email
address or web form on the EMB’s website. This sort of test of the responsiveness of public
officials is not unprecedented. For example, Loewen and MacKenzie conducted a study that
involved sending emails to members of parliament in Canada from fictitious constituents to test
whether constituency population size influenced the helpfulness of the responses, as measured
by two blind coders (Thomas et al., 2013).

In this article, the research assistants composed two emails, using an English-language guide,
asking the EMB for information. The first email asked how to register to vote after moving to a new
city. The second email was about residency requirements to vote (Appendix C). These emails were
sent approximately three weeks apart from two fictitious gmail.com accounts, and were signed by a
common male name for each language chosen by the coder.10 Emails could not be sent to the 10
EMBs that did not have an email address or web forms on their website, and the two EMBs that
required the sender to input an identification number (e.g. a passport number) in order to send a
question. The responses were coded by the native language speakers according to three categories: 0 was
no response (or only an automatic response) for both emails; 1 was a substantive response for only
one of the emails (including a referral, request for additional information to respond to the query, or
an answer to the question); and 2 was substantive responses for both emails.11
The resulting email test scores correlate with the communication dimension of capacity, as well
as with the overall capacity scores. The mean communication score was 0.92 for countries with two
substantive responses, 0.83 for countries with only one response, and 0.77 for countries with no
response. Encouragingly, a one-way analysis of variance (ANOVA) shows that website scores differ
significantly across email response categories (two responses versus no response: 0.14, std. err. 0.07,
p < 0.1). There is also a relationship between the results of this email test and overall capacity: the
mean capacity score was 3.43 for countries with two substantive responses, 3.15 for countries with
only one response, and 2.55 for countries with no response. Again, in a one-way ANOVA, website
scores differ significantly across email response categories (two responses versus no response: 0.61,
std. err. 0.21, p < 0.05). These findings enhance confidence in the validity of assessing EMB
websites as a proxy for actual EMB capacity.
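A one-way ANOVA of this kind can be sketched as follows; the group data are simulated for illustration (not the study's data), and the 'two responses versus no response' contrast is simply the difference in group means:

```python
from scipy.stats import f_oneway
import statistics

# Illustrative website capacity scores grouped by email-test result
# (0 = no response, 1 = one substantive response, 2 = both substantive).
no_resp  = [2.4, 2.6, 2.7, 2.5]
one_resp = [3.0, 3.2, 3.3, 3.1]
two_resp = [3.3, 3.5, 3.6, 3.4]

# Tests whether mean capacity differs across the three response groups.
f_stat, p_value = f_oneway(no_resp, one_resp, two_resp)

# The "two responses versus no response" contrast is the difference in
# group means (the quantity reported with its standard error in the text).
contrast = statistics.mean(two_resp) - statistics.mean(no_resp)
```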

Convergent validity
It is also important to consider the measurement validity of these scores. In other words, are they meas-
uring the intended concept? Testing measurement validity in this way has proven a useful tool for
many comparative social scientists seeking to better measure key concepts relating to elections and
democracy (Adcock and Collier, 2001; Bollen, 1980; Carmines and Zeller, 1979; Elkins, 2000; Hill et
al., 1997; King et al., 1994). Simple correlations can be used to test convergent validity (without
country-level controls), since this is an exercise in measurement validation rather than explanation.
While expert surveys have limitations, they remain one of the most comprehensive existing
cross-national measures of EMB capacity. There are two major datasets considering expert perceptions
of EMB capacity: the Perceptions of Electoral Integrity (PEI) Index and the Varieties of
Democracy (V-Dem) dataset (see Online Appendix for question wording). These data sources are
used to test convergent validity by assessing whether the capacity scores correlate with another
valid measure of the target concept. There should be a statistically significant association between
expert perceptions and the EMB capacity scores.
As expected, the V-Dem capacity score is positively associated with the website capacity scores
(Corr. 0.43, p < 0.001). This suggests that the expert survey and the website content analysis have
similar capacity scores. There was also a significant positive relationship between the EMB capac-
ity scores and the PEI EMB sub-index (an aggregation of scores for all questions in the index
related to EMBs) (Corr. 0.48, p < 0.001).12
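A convergent-validity check of this kind reduces to a bivariate correlation; a minimal sketch, with made-up scores standing in for the website capacity measure and an expert index:

```python
from scipy.stats import pearsonr

# Stand-in data: website capacity scores and expert-survey scores for the
# same set of countries (the real analysis uses ~99 observations).
capacity = [1.2, 2.0, 2.8, 3.1, 3.5, 1.8, 2.4, 3.0]
expert   = [0.3, 0.5, 0.7, 0.6, 0.9, 0.4, 0.5, 0.8]

# A significant positive r supports convergent validity: the two measures
# of the same concept move together.
r, p = pearsonr(capacity, expert)
```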
Additionally, the EMB capacity scores were compared with a measure of government effectiveness,
drawn from the Quality of Governance dataset. Norris (2015) has demonstrated a relationship
between public administration effectiveness and perceptions of EMBs. Government effectiveness is
likely to influence the quality of electoral management, since electoral management falls within
the realm of public administration. Indeed, there is
a statistically significant relationship between the World Bank government effectiveness measure
and the EMB capacity scores (Corr. 0.45, p < 0.001).

The role of EMB capacity in strengthening electoral integrity


As mentioned earlier, EMBs play an important and active role in all parts of the electoral cycle:
they register and educate voters, manage candidate and party registration and financing, conduct
polling on election day, and count the results. Indeed, they are among the most crucial players at
every step of the cycle. While they are not directly responsible for all the determinants of overall
electoral integrity (such as violence at the polls, or the candidates who run for office), it can
reasonably be expected that a greater capacity to manage elections should improve the integrity of
the election.
This new measure of EMB capacity provides scholars with the opportunity to examine the role
of EMBs in promoting electoral integrity more generally. To test this hypothesis, the quality of
recent elections is measured in the aforementioned Perceptions of Electoral Integrity (PEI) Index.
This expert survey compiles the responses of experts to questions about all stages of the electoral
cycle into a 100-point scale (M = 66.47, SD = 14.13). While EMBs are considered one component
of this index, the questions about EMBs do not refer specifically to capacity but to related concepts
such as performance and impartiality, so there is little risk that the results are driven by the
inclusion of EMB-related items in the PEI Index. As a robustness check, a PEI Index that excludes
the electoral authorities section is also used (Table 3, Model 3).

Table 3.  The impact of EMB capacity scores on overall electoral integrity.

Variables                        (1)         (2)         (3)         (4)
EMB capacity                     2.10**      2.42**      2.04*
                                 (1.03)      (1.04)      (1.13)
(Log of) EMB capacity                                                3.79**
                                                                     (1.66)
Regime durability                0.03        0.02        −0.01       0.03
                                 (0.03)      (0.03)      (0.03)      (0.03)
Freedom House, partially free    5.99***     6.68***     3.31        5.91***
                                 (1.89)      (1.93)      (2.07)      (1.88)
Freedom House, free              15.74***    17.32***    12.60***    15.76***
                                 (2.19)      (2.12)      (2.40)      (2.17)
Internet usage                   0.11***                 0.13***     0.10***
                                 (0.03)                  (0.04)      (0.03)
GDP (truncated)                              0.00**
                                             (0.00)
Constant                         46.82***    46.52***    50.34***    48.79***
                                 (2.31)      (2.37)      (2.53)      (1.67)
Observations                     99          99          99          99
R-squared                        0.689       0.675       0.602       0.692

The dependent variable in all models is the PEI Index of electoral integrity; Model 3 excludes the electoral authorities section.
OLS regression. Standard errors in parentheses, *** p < 0.01, ** p < 0.05, * p < 0.1.

Because this test, unlike the earlier tests of measurement validity, is concerned with causality, it
is important to control for any other variables that may influence both the overall conduct of the
election and the capacity of an EMB. These include democratic development (regime durability),
level of freedom, and economic development. Studies have long suggested that the quality of
democracy and of elections is related to these variables (Lipset, 1959).
shown these variables to be important predictors of PEI scores. Models also include a control for
internet penetration. However, since there is a strong correlation (Corr. 0.87, p < 0.001) between
internet penetration and economic development, they are not included in the same models (see
Model 1 for internet penetration and Model 2 for economic development).
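A specification of this form can be sketched with simulated data; everything below (variable names, coefficient values, the data themselves) is an illustrative stand-in, not the article's dataset, and statsmodels is assumed to be available:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 99

# Simulated country-level data mirroring the controls in Table 3, Model 1.
df = pd.DataFrame({
    "capacity": rng.uniform(0, 3, n),                       # EMB capacity (0-3)
    "durability": rng.integers(0, 60, n),                   # regime durability
    "fh": rng.choice(["not_free", "partly_free", "free"], n),
    "internet": rng.uniform(0, 100, n),                     # internet users (%)
})
df["pei"] = (47 + 2.0 * df["capacity"] + 0.1 * df["internet"]
             + df["fh"].map({"not_free": 0.0, "partly_free": 6.0, "free": 16.0})
             + rng.normal(0, 5, n))

# C(...) expands the Freedom House category into dummy variables, with
# "not free" as the omitted reference group.
model = smf.ols(
    "pei ~ capacity + durability + C(fh, Treatment('not_free')) + internet",
    data=df,
).fit()
```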
The results demonstrate that even when controlling for internet usage (or level of economic
development), the country's level of freedom, and regime durability, EMB capacity, as measured
through the online content analysis, has a significant positive impact on electoral integrity (see
Table 3). Similar results, albeit with a slightly larger regression coefficient, are found when the log
of EMB capacity is used (Model 4), suggesting that the steepest gains in electoral integrity occur
as EMBs with the lowest levels of capacity improve.
Figure 1 presents the predicted PEI scores based on the EMB capacity score developed in this
article. The marginal effect of a one-point increase in capacity (on a scale of 0–3) is about two
points on the country’s PEI score (0–100).13 While this may seem small, this is equivalent to the
impact of more than US$10,000 in GDP.
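The predicted values behind such a figure are simply linear combinations of the fitted coefficients; a minimal sketch using the rounded Model 1 estimates reported in Table 3, with the remaining covariates held at illustrative values (not any particular country's):

```python
# Rounded estimates from Table 3, Model 1: intercept 46.82, EMB capacity
# 2.10, Freedom House "free" dummy 15.74, internet usage 0.11.
b0, b_capacity, b_free, b_internet = 46.82, 2.10, 15.74, 0.11

def predicted_pei(capacity, free=1, internet_pct=50.0):
    """Linear prediction of the PEI Index (0-100) from EMB capacity (0-3)."""
    return b0 + b_capacity * capacity + b_free * free + b_internet * internet_pct

# Each one-point increase in capacity shifts the predicted PEI Index by
# the capacity coefficient, about two points.
predictions = [predicted_pei(c) for c in range(4)]
```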

Figure 1.  The impact of EMB capacity on electoral integrity.

[Figure: predicted Electoral Integrity Index (y-axis, approximately 50–80) plotted against the EMB capacity score (x-axis, 0–3), with a positive slope.]

Predicted probabilities from Table 3, Model 1. 90% confidence intervals shown. PEI Index on a 0–100 scale. EMB capacity on a 0–3 scale.

In sum, EMB capacity, as measured through an online content analysis, is an important deter-
minant of electoral integrity. This demonstrates the importance of having a clear measure of this
key variable for scholarly research on election quality.

Conclusions
As the organizations tasked with the administration of elections, EMBs are crucial to strengthening
electoral integrity. However, scholars of electoral management and electoral integrity have difficulty
measuring EMB capacity, or the ability of these organizations to perform their functions. Direct
measures of budgets and staff, as well as expert perceptions, remain problematic and incomplete.
This article presents an additional data source to measure EMB capacity cross-nationally. It conducts
a content analysis of EMB websites, considering the presence of indicators of their key functions. It then
creates a scale of EMB capacity using Mokken scaling analysis. It addresses the concern that EMB
websites may not be backed by staff and resources adequate to serve voters through a test of the
responsiveness of EMBs to fictitious citizen inquiries. Further, it assesses the convergent validity
of these EMB capacity scores by examining their relationship with expert perceptions.
Using this new method of measuring EMB capacity, this article sheds new light on our under-
standing of the causes of electoral integrity, demonstrating that EMB capacity is a significant predic-
tor of overall electoral integrity, even when accounting for other factors such as economic and
democratic development. Capacity has been a missing variable in our models of the determinants of
electoral integrity.
The website content analysis explored in this article has a number of advantages, particularly
that it is cross-nationally comparable. It can provide data on nearly all countries around the world,
but does not rely on judgements that can be influenced by the availability of experts to respond and
the accuracy of the experts’ information. Furthermore, it is relatively cost-effective, easy, and rep-
licable over time. As such, these data should be considered alongside other measures of EMB
capacity to better understand electoral integrity.

Acknowledgements
I wish to thank Elisabeth Gidengil, Dietlind Stolle, André Blais, Pippa Norris, Toby James, Carolien van
Ham, Leontine Loeber, Ian McAllister, and colleagues at the Electoral Integrity Project and the Centre for the
Study of Democratic Citizenship for their helpful comments and suggestions in the development of this arti-
cle. I also thank the anonymous journal reviewers, whose inputs greatly improved this article. Previous drafts
of this article received valuable feedback at pre-APSA workshops on ‘The Construction and Use of Expert
Indicators in the Social Sciences: Challenges of Validity, Reliability and Legitimacy’ and ‘Strengthening
Electoral Integrity: What Works?’ at the Australian Political Studies Association annual conference, and in
seminars at the Åbo Akademi, University of Helsinki, University of Melbourne, Australian National
University and Victoria University of Wellington.

Funding
Funding for this research was obtained from the Electoral Integrity Project, the Centre for the Study of
Democratic Citizenship and through a doctoral fellowship from the Social Sciences and Humanities Research
Council of Canada. The Electoral Management Survey referenced in this article was funded by the Electoral
Integrity Project, the University of East Anglia, and the University of New South Wales.

ORCID iD
Holly Ann Garnett https://orcid.org/0000-0002-2119-4399

Supplemental material
Supplemental material for this article is available online at www.electoralmanagement.com and at journals.
sagepub.com/home/ips.

Notes
  1. It should be noted that in some cases, international organizations are heavily involved in running (and
funding) elections in these countries.
  2. See the introduction to this special issue for more details.
  3. See Norris et al. (2016).
  4. Two EMBs from the ELECT survey were excluded from this analysis due to their unique legal status
(Guam and Palestine’s electoral commissions), and one since it is only a regional body (Indonesia’s
Election Commission for West Java Province).
  5. Because the responses varied so drastically, the reliability of these data should be approached with a high
degree of caution.
  6. Coppedge, Michael, John Gerring, Staffan I. Lindberg, Svend-Erik Skaaning, Jan Teorell, David Altman,
Frida Andersson, Michael Bernhard, M. Steven Fish, Adam Glynn, Allen Hicken, Carl Henrik Knutsen,
Kelly McMann, Valeriya Mechkova, Farhad Miri, Pamela Paxton, Daniel Pemstein, Rachel Sigman,
Jeffrey Staton, and Brigitte Zimmerman. 2016. ‘V-Dem Codebook v6.’ Varieties of Democracy (V-Dem)
Project; Norris, Pippa; Wynter, Thomas; Grömping, Max, 2017, ‘Perceptions of Electoral Integrity, (PEI-
5.5)’, Harvard Dataverse, V2, UNF:6:orsycndtUZPrE58BWhCdtg=
  7. This study limits its content analysis to the publicly facing website of an EMB, aimed primarily at voters
as the major audience. EMBs may also have specialized ways they interact with candidates and parties.
This relationship should be the subject of further research.
  8. The eight countries without EMB websites were: Cuba, Equatorial Guinea, Mali, Mozambique, North
Korea, Sao Tome and Principe, Syria and Turkmenistan (see Online Appendix for full list).
  9. Most coders were students with no special skills beyond the language they were hired to code. This
reflects ordinary citizens’ interaction with an EMB website. The exception for the coding time frame is
Albania, for which the second coding was not completed until January 2016. There were national elec-
tions in Argentina, Belarus, Egypt and Guinea during the coding period.
10. Coders were instructed not to choose a name that is country- or region-specific, since these emails were
sent to various countries in which the translated language is spoken. They were also instructed to adapt
the greeting and closing line to the language's custom.
11. Email responses are not reported in this paper by country to protect the privacy of the electoral officials
who responded (or did not respond). Some organizations have so few employees that they could be easily
identified.
12. There is also a statistically significant relationship with the EMB performance evaluation question in the
Perceptions of Electoral Integrity Index (Corr. 0.42, p < 0.001).
13. Marginal effects from Model 1 from Table 3.

References
Adcock, Robert and David Collier (2001) Measurement Validity: A shared standard for qualitative and quan-
titative research. The American Political Science Review 95(3): 529–546.
Atkeson, Lonna Rae, R Michael Alvarez and Thad E Hall (2015) Voter Confidence: How to measure it and
how it differs from government support. Election Law Journal 14(3): 207–219.
Birch, Sarah (2008) Electoral Institutions and Popular Confidence in Electoral Processes: A cross-national
analysis. Electoral Studies 27(2): 305–320.
Bollen, Kenneth A (1980) Issues in the Comparative Measurement of Political Democracy. American
Sociological Review 45(3): 370–390.
Carmines, Edward G and Richard A Zeller (1979) Reliability and Validity Assessment, Vol. 17. Beverly Hills:
Sage Publications.
Catt, Helena, Andrew Ellis, Michael Maley, Alan Wall and Peter Wolf (2014) Electoral Management Design.
Stockholm: International Institute for Democracy and Electoral Assistance.
Christensen, Robert K and Beth Gazley (2008) Capacity for Public Administration: Analysis of meaning and
measurement. Public Administration and Development 28(4): 265–279.
Clark, Alistair (2014) Investing in Electoral Management. In Richard W Frank, Pippa Norris and Ferran
Martinez i Coma (eds), Advancing Electoral Integrity. Oxford: Oxford University Press.
Clark, Alistair (2016) Identifying the Determinants of Electoral Integrity and Administration in Advanced
Democracies: The case of Britain. European Political Science Review 9(3): 471–492.
Collier, David, Jason Seawright and Jody LaPorte (2012) Putting Typologies to Work: Concept formation,
measurement, and analytic rigor. Political Research Quarterly 65(1): 217–232.
Coppedge, Michael, John Gerring, Staffan I Lindberg, Svend-Erik Skaaning, Jan Teorell, David Altman,
Frida Andersson, Michael Bernhard, M. Steven Fish, Adam Glynn, Allen Hicken, Carl Henrik Knutsen,
Kelly McMann, Valeriya Mechkova, Farhad Miri, Pamela Paxton, Daniel Pemstein, Rachel Sigman,
Jeffrey Staton and Brigitte Zimmerman (2016) V-Dem Codebook v6. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2951018
Downey, Ed, Carl D Ekstrom and Matthew A Jones (eds) (2011) E-Government Website Development:
Future Trends and Strategic Models. Hershey: IGI Global.
Eisinger, Peter (2002) Organizational Capacity and Organizational Effectiveness among Street-Level Food
Assistance Programs. Nonprofit and Voluntary Sector Quarterly 31(1): 115–130.
Elkins, Zachary (2000) Gradations of Democracy? Empirical Tests of Alternative Conceptualizations.
American Journal of Political Science 44(2): 293–300.
Garnett, Holly Ann (2017) Electoral Management Roles and Responsibilities in Comparative Perspective,
Paper presented at the Australian Political Studies Association Annual Conference, Melbourne, Australia.
Hardouin, Jean-Benoit, Angelique Bonnaud-Antignac and Veronique Sebille (2011) Nonparametric Item
Response Theory Using Stata. The Stata Journal 11(1): 30–51.
Hartlyn, Jonathan, Jennifer McCoy and Thomas M Mustillo (2008) Electoral Governance Matters: Explaining
the Quality of Elections in Contemporary Latin America. Comparative Political Studies 41(1): 73–98.
Herron, Erik S, Nazar Boyko and Michael E Thunberg (2017) Serving Two Masters: Professionalization
versus corruption in Ukraine’s Election Administration. Governance 30(4): 601–619.
Hill, Kim Quaile, Stephen Hanna and Sahar Shafqat (1997) The Liberal–Conservative Ideology of U.S.
Senators: A new measure. American Journal of Political Science 41(4): 1395–1413.
Hollyer, James R, B Peter Rosendorff and James Raymond Vreeland (2014) Measuring Transparency.
Political Analysis 22: 413–434.
IFES and UNDP (2005) Getting to the CORE: A Global Survey on the Cost of Registration and Elections.
Available at: http://aceproject.org/ero-en/misc/undp-ifes-getting-to-the-core-a-global-survey-on/view
International Telecommunication Union (2014) The World in 2014: ICT Facts and Figures. Available at:
http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2014-e.pdf
James, Toby S (2016) The Effects of Centralising Electoral Management Board Design. Policy Studies.
Available at: https://www.tandfonline.com/doi/abs/10.1080/01442872.2016.1213802
King, Gary, Robert O Keohane and Sidney Verba (1994) Designing Social Inquiry: Scientific Inference in
Qualitative Research. Princeton: Princeton University Press.
Lipset, Seymour Martin (1959) Some Social Requisites of Democracy: Economic development and political
legitimacy. American Political Science Review 53(1): 69–105.
Maestas, Cherie (2016) Expert Surveys as a Measurement Tool: Challenges and New Frontiers. In Lonna
Rae Atkeson and R Michael Alvarez (eds) The Oxford Handbook of Polling and Polling Methods.
New York: Oxford University Press.
Martinez i Coma, Ferran and Carolien van Ham (2015) Can Experts Judge Elections? Testing the Validity
of Expert Judgments for Measuring Election Integrity. European Journal of Political Research 52(2):
305–325.
Norris, Pippa (2001) Digital Divide: Civic Engagement, Information Poverty, and the Internet Worldwide.
New York: Cambridge University Press.
Norris, Pippa (2015) Why Elections Fail. New York: Cambridge University Press.
Norris, Pippa, Alessandro Nai and Jeffrey Karp (2016) Electoral Learning and Capacity Building (ELECT)
Data. Available at: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/MQCI3U
Norris, Pippa, Richard W Frank and Ferran Martinez i Coma (2014) Measuring Electoral Integrity around the
World: A new dataset. PS: Political Science and Politics 47(4): 789–798.
O’Loughlin, Michael G (1990) What is Bureaucratic Accountability and How can We Measure It?
Administration and Society 22: 275–302.
Thomas, Paul EJ, Peter John Loewen and Michael K Mackenzie (2013) Fair Isn’t Always Equal: Constituency
population and the quality of representation in Canada. Canadian Journal of Political Science 46(2):
273–293.
United Nations Development Programme (2009) Capacity Development: A UNDP Primer. Available at:
http://www.undp.org/content/dam/aplaws/publication/en/publications/capacity-development/capacity-development-a-undp-primer/CDG_PrimerReport_final_web.pdf
van Aaken, Anne (2009) Independent Electoral Management Bodies and International Election Observer
Missions: Any impact on the observed level of democracy? A conceptual framework. Constitutional
Political Economy 20: 296–322.
van Ham, Carolien and Staffan Lindberg (2015) When Guardians Matter Most: Exploring the conditions
under which electoral management body institutional design affects election integrity. Irish Political
Studies 30(4): 454–481.
van Schuur, WH (2003) Mokken Scale Analysis: Between the Guttman scale and parametric item response
theory. Political Analysis 11(2): 139–163.

Author biography
Holly Ann Garnett is an assistant professor of political science at the Royal Military College of Canada. Her
research examines how electoral integrity can be strengthened throughout the electoral cycle, including the
role of electoral management bodies, electoral assistance, voter registration, convenience voting measures,
election technologies, civic literacy and campaign finance. She is a co-convener of the Electoral Management
Network (www.electoralmanagement.com).

Appendix A.  EMB Budgets and Staff.


Country and EMB name Election year budget Permanent staff per
US$ per capita PPP 100,000 population
Afghanistan – Independent Election Commission < 0.01 1.31
Albania – Central Election Commission 6.17 1.91
Argentina – National Electoral Directorate 9.16 0.18
Bahamas – Parliamentary Registration Department 6.37 4.60
Belarus – Central Commission for Elections and Conduct 2.21 0.11
of Republican Referendums
Belgium – Federal Public Service – Directorate General Not released 0.04
Institutions and Population – Service Elections
Bhutan – Election Commission of Bhutan 11.92 21.43
Bosnia Herzegovina – Central Election Commission 4.94 1.93
Bulgaria – Central Election Commission 2.60 0.59
Burkina Faso – National Independent Electoral Commission Not released 0.62
Cambodia – National Election Committee Not reported 1.90
Canada – Elections Canada Not reported 0.90
Costa Rica – Supreme Court of Elections 20.83 18.53
Cote d’Ivoire – Independent Electoral Commission Not reported 1.27
Croatia – State Election Commission 12.80 0.55
Czech Republic – Statistical Office Not reported 11.01
Denmark – Ministry of Economic Affairs and the Interior Not reported 0.14
Dominica – Electoral Office 0.45 6.80
Ecuador – Electoral Tribunal Not reported 0.26
Estonia – National Electoral Committee Not reported 0.53
Finland – Ministry of Justice 3.22 0.07
Ghana – Electoral Commission Not reported 7.09
Greece – Ministry of the Environment/Election Directorate Not reported 0.19
Guinea – Independent National Electoral Commission Not reported 0.20
Hungary – National Election Commission Not reported 0.18
Hungary – National Election Office 18.57 0.71
Iceland – Department of Housing, Planning, Community and Not reported 2.69
Local Government
Iraq – Independent High Electoral Commission Not reported 10.75
Israel – Central Elections Committee 7.73 0.29
Jordan – Independent Electoral Commission 5.85 1.06
Kenya – Independent Electoral and Boundaries Commission Not reported 1.79
Kyrgyz Republic – Central Commission for Election and Not reported 2.70
Referendums
Kyrgyz Republic – State Registration Service 0.81 0.16
Latvia – Central Election Commission 3.72 0.77
Luxembourg – Government Centralizing Office 3.38 2.23
Malawi – Malawi Electoral Commission 5.63 1.55
Maldives – Elections Commission of Maldives Not reported 14.37
Malta – Electoral Commission 22.97 9.61
Mauritius – Office of the Electoral Commissioner 13.78 7.91
Mexico – National Electoral Institute 15.32 11.76
Moldova – Central Electoral Commission 6.23 1.35
Mongolia – General Election Commission of Mongolia 7.41 0.99
Mozambique – National Commission of Elections < 0.01 1.73
Netherlands – Electoral Council 0.15 0.09
Netherlands – Ministry of the Interior and Kingdom 0.22 0.04
Relations
New Zealand – Electoral Commission 8.31 2.26
Norway – Municipal and Modernization Department Not released 0.08
Norway – Norwegian Directorate of Elections 2.00 0.40
Panama – Tribunal Electoral 12.51 74.37
Peru – National Election Jury 0.18 0.47
Philippines – Commission on Elections 3.17 5.03
Poland – State Electoral Commission; National Electoral 6.00 1.21
Office
Rep. of Korea – National Election Commission 13.74 5.46
Romania – Permanent Electoral Authority 4.06 1.27
Russia – Central Election Commission 4.76 0.32
Rwanda – National Electoral Commission 1.33 0.42
Saint Lucia – Saint Lucia Electoral Department 5.90 18.54
Samoa – Office of the Electoral Commissioner Not reported 23.06
Sao Tome and Principe – TElect – STP 0.32 16.01
Senegal – National Election Commission 1.53 0.09
Sierra Leone – National Electoral Commission Not reported 2.70
Slovakia – State Commission on Election and Control of 2.74 0.22
Funding of Political Parties
Spain – Central Electoral Board Not reported 0.02
Spain – Ministry of Interior – Directorate General of 4.26 0.12
Internal Policy – Deputy Directorate General of Internal
Policy and Electoral Processes
Spain – Office of the Electoral Census Not reported 0.41
Suriname – Independent Electoral Council 1.71 3.40
Sweden – Election Authority Not reported 0.18
Switzerland – Federal Chancellery, Political Rights Section Not reported 0.12
Taiwan – Central Election Commission 5.17 0.23
Tanzania – National Electoral Commission 5.21 0.26
Thailand – Election Commission Not reported 2.90
Timor Leste – National Election Commission Not reported 14.58
Trinidad and Tobago – Elections & Boundaries Commission Not released 25.42
Turkey – Higher Elections Committee Not reported Not reported
Zimbabwe – Electoral Commission 39.78 3.03

Note that multiple organizations are involved in running elections in most countries.
2016 United States dollars and population figures are used.
Four EMBs requested that their budgetary data not be made public; these data are, however, included in all
cross-national analyses in this article.
Budgets were less than US$0.01 per capita for Mozambique's National Commission of Elections and
Afghanistan's Independent Election Commission.
Population, PPP and conversion rates are from the World Bank database, except for Taiwan, which is not
included in the World Bank database, so alternative sources were used:
- Population (2015 data): https://www.ndc.gov.tw/en/News_Content.aspx?n=607ED34345641980&sms=B8A915763E3684AC&s=3CE82CC912356116
- PPP: https://www.quandl.com/data/ODA/TWN_PPPEX-Taiwan-Province-of-China-Implied-PPP-Conversion-Rate-LCU-per-USD
- Conversion rate: https://www.poundsterlinglive.com/best-exchange-rates/us-dollar-to-taiwan-dollar-exchange-rate-on-2016-12-31
Guam, Palestine and Indonesia (Java Province) excluded.

Appendix B: EMB Website Content Analysis Coding Scheme


Coders were instructed to:
•• Access the website in the country’s official language (if applicable), and not to use internet
translating functions.
•• Mark no if there was a space for the material but the area was under construction or not
loading.
•• Mark no if the material was only found in a legal text, such as an election law or constitution
(except for the question that specifically refers to legal texts).
•• Mark yes if the website information only explains that citizens do not have the opportunity
(ex. out of country voting, or alternative voting measures).

Each area of responsibility (drawn from the International IDEA Handbook on Electoral Management
Design) is listed below with the indicators expected to appear on an EMB website and the
corresponding question posed to coders.

Determining who is eligible to vote
-  Identifying and registering voters
-  Identifying and registering voters living in another country who are still eligible to vote
-  Developing and maintaining a national electoral register
Indicators:
••  Voter registration information: Are registration procedures posted online?

Receiving and validating the nominations of electoral participants (for elections, political parties
and/or candidates)
-  Registering political parties
-  Regulating political party financing
-  Overseeing political party pre-selections or primaries
Indicators: none; these tasks are not expected to appear on an EMB's public website.

Conducting polling
-  Planning and implementing electoral logistics
-  Hiring and training temporary electoral staff
-  Training political parties' and candidates' poll watchers
-  Directing the police or other security services to ensure a peaceful election
-  Accrediting and regulating the conduct of election observers
-  Adjudicating electoral disputes
-  Organizing external voting for those not in the country
Indicators:
••  Eligibility to vote: Does the website provide information about the qualifications to vote (ex. age, residency qualifications)?
••  Disabled voters: Is information available that mentions options for additional assistance for disabled voters to cast their ballot?
••  Foreign voters: If a voter is out of the country, is there information about voting options?
••  Electoral district: Can citizens check their electoral district or polling division online? (This does not include specific polling station locations.)
••  Alternative voting measures: Are there alternative options for voting, besides casting a ballot at a polling station on election day? (If so, list types and any restrictions.)
••  Voter identification: Does the website contain information about what documents (ex. voting card, identification) are required to vote?

Counting the votes and tabulating the votes
-  Announcing and certifying election results
Note: These dimensions were originally listed separately in the International IDEA Handbook on
Electoral Management Design; however, the associated tasks are common to both, so they are listed
as one in this analysis.
Indicators:
••  Election results: Can you access the final vote count for the last legislative or presidential election? (If not, record the most recent election for which there are results.)
••  Election results, smaller units: Can you access results in smaller units than the national total (ex. by constituency, region, polling station etc.)?

Running a credible organization
-  Making national or regional electoral policies
-  Planning electoral services
-  Training electoral staff
-  Reviewing and evaluating the adequacy of the electoral framework and the election management
   body's own performance after elections
Indicators:
••  Electoral laws and fraud: Are citizens provided information about election laws or what constitutes electoral fraud?
••  Complaints: Are citizens directed as to how to lodge a complaint about the election procedure? (Specify how if possible.)
••  Hierarchy: Is there information about the election administration hierarchy or accountability structure?
••  EMB member(s) names: Are the names of the EMB members, electoral commissioners, or the civil servant who is in charge of elections printed on the website?
••  EMB member(s) qualifications: Are their qualifications, experience or biography printed on the website?
••  Reporting: Are reports on the activities of the EMB made available online? (Please note the date and type of the most recent report.)
••  Contact methods: Does the website provide information about contacting the EMB via email/email form; in person (this may include regional or local offices, hours of operation, building information, maps etc.); via post (must include all information needed to send mail to the EMB, such as postal codes etc.); or via telephone? (Each contact method was coded as a unique indicator.)
••  Specific contact information: Does the website list names and/or contact information of specific divisions (whether individually or through a contact form), rather than a 'catch all' email address or form?

Appendix C: Email test


Email #1 (English)
Subject line: Voter Registration
Hello,
I have just moved to a different city, and I was wondering how to make sure I am registered to vote
here.
Thank you,
NAME

Email #2 (English)
Subject line: Residency?
Greetings,
I have been living outside of the country for the past year. I was wondering if I am still eligible to
vote?
Thanks,
NAME

Experiment coding
0 – No responses (or automatic reply)
1 – One substantive response (including referral, request for additional information in order to
assist, or answer to the question)
2 – Two substantive responses
