
Impact Assessment and Project Appraisal, 29(1), March 2011, pages 27–36. DOI: 10.3152/146155111X12913679730511; http://www.ingentaconnect.com/content/beech/iapa

A critical review of checklist-based evaluation of environmental impact statements


Tõnis Põder and Tiit Lukki

Most of the research on environmental impact assessment quality has focused on the quality of environmental impact statements (EIS), that is, on whether they supply important information about the components of the assessment process. This paper highlights some topical methodological issues concerning the two most widely used checklists: the Environmental Statement Review Package and the European Commission's EIS Review Checklist. Both were found to neglect several important aspects, such as quality of information, uncertainty and probability of predictions, consideration of alternatives, and public participation. This causes overvaluation of EISs that inadequately address these aspects. An empirical study of inter-individual variations in the judgements of 41 evaluators revealed significant divergence at all stages of the review process; the frequency of two-grade differences in the evaluation outcomes was about 25%. The inadequacies highlighted in these two popular tools, along with variation in application due to user subjectivity, indicate that they should be applied with considerable care and caution, especially for research and monitoring of EIS quality.

Keywords:

environmental impact assessment (EIA), environmental impact statement (EIS), quality evaluation, Review Package, Review Checklist

CHECKLISTS have been a popular tool for the evaluation of environmental impact statements (EIS) and reports for some decades. Several modifications of checklists have been proposed (e.g. Ross, 1987; Elkin and Smith, 1988; Lee and Colley, 1991; CEC, 1994, 2001; Lawrence, 1997; Bojórquez-Tapia and García, 1998; Lee et al, 1999), and used in many studies (Lee and Brown, 1992; Lee and Dancey, 1993; Barker and Wood, 1999; Cashmore et al, 2002; Gray and Edwards-Jones, 2003; Canelas et al, 2005; Sandham and Pretorius, 2008). It is somewhat surprising that such a critical aspect as the reliability of the underlying evaluation method has been largely neglected.
Tõnis Põder and Tiit Lukki are both at the Institute of Mathematics and Natural Sciences, Tallinn University, Narva mnt 25, Tallinn 10120, Estonia; Email: tonispoder@mac.com. We would like to thank Dr Angus Morrison-Saunders and two anonymous referees for their valuable comments on an earlier draft of this paper.

To date, no analysis has been made of these methods' validity, determined by the extent to which the tool used accomplishes its intended purpose (i.e. measures the quality of EIS). Data about evaluators' and/or reviewers' inter-individual divergences of results are also scarce. Implicitly, the existence of a problem of inter-individual differences (subjectivity) in judgements is recognized by the suggestion of group decision (two persons as a minimum group size) instead of a single reviewer (Lee and Brown, 1992; Lee et al, 1999). In practice, the recommended double-reviewer evaluation has often been disregarded (e.g. McGrath and Bond, 1997; Canelas et al, 2005). Indeed, with three to eight hours needed per EIS, getting additional evaluators involved is a considerable burden for both researchers and environmental officers. To avoid group assessment, some researchers have implemented an inter-comparison exercise, whereby at the beginning of the study some EISs are evaluated together to ensure consistency in the subsequent separate reviews (e.g. Canelas et al, 2005); others have simply taken the reliability of the method for granted (e.g. Cashmore et al, 2002).


However, the subjectivity of reviewers might be more influential than usually expected. For instance, Lee and co-authors (Lee et al, 1994) noticed a clear difference between their grades and the planning authorities' perception of the same set of EIS, with the proportions of non-satisfactory EIS marked as a half and a third, respectively. Therefore, the need for a grounded characterization of the individual variability of evaluation outcomes cannot be ignored, at least when outcomes from different evaluators are compared for research or monitoring purposes.

This article attempts to disclose the essence of the potential constraints of a checklist-based EIS evaluation and elucidate the implications these constraints may have on the use of the evaluation outcomes. The article focuses on the following aspects:

- constraints resulting from the features of the checklists; and
- the evaluators' inter-individual variability of results.

The two most generally used checklists, the Environmental Statement Review Package (Lee et al, 1999) and the European Commission's (EC) EIS Review Checklist (CEC, 2001), were selected for scrutiny. The first was selected because, though somewhat modified, it originated in the research conducted by Lee and Colley (1991) and has since been used for almost two decades, also serving as a kind of precursor for subsequent similar tools. The latter was selected because, pursuant to the respective EC suggestion, it is to be used for assessing environmental impact assessment (EIA) reports in the EU member countries. Both the Environmental Statement Review Package and the EIS Review Checklist declare that they are designed for users who wish to review the quality of EIS.
Checklists

Environmental Statement Review Package

The Environmental Statement Review Package (hereinafter: Review Package) is intended for use by the broadest range of EIA-related users, such as environmental officers, developers, consultants, interest groups and researchers, for assessing the quality of EIS. The Review Package has a four-level hierarchical design. The overall assessment is based on the following four Review Areas of environmental assessment activity:

1. Description of the development, the local environment and the baseline condition.
2. Identification and evaluation of the key impacts.
3. Alternatives and mitigation of impacts.
4. Communication of results.

Each of these areas contains several Review Categories of EIA activity, designated by two digits (e.g. 1.1 'Description of the development: the purpose(s) of the development should be described as should the physical characteristics, scale and design. Quantities of materials needed during construction and operation should be included and, where appropriate, a description of the production processes'). Similarly, each Review Category contains several Review Sub-categories (criteria), designated by three digits (e.g. 1.1.1 'The purpose(s) and objectives of the development should be explained'). The list of Review Categories and 52 Review Sub-categories can be found at <http://www.sed.man.ac.uk/planning/research/publications/wp/eia/documents/OP24PARTB.pdf>.

The evaluation process starts from the lowest level, i.e. the Sub-category level, and moves upwards. At each level the assessment results are expressed using a letter scale ranging from A (relevant tasks well performed, no important tasks left incomplete) to F (very unsatisfactory, important tasks poorly done or not attempted), taking into account the completeness and quality of the information and the relevance of the Review Topic. The quality of the information is assessed and interpreted by looking for unsuitable and ad hoc methods, biased or inaccurate supporting data, and the absence of a rationale for conclusions. The assessment at higher levels should not be derived by merely averaging the assessments of lower-level topics. Instead, evaluators should adjust these assessments with their personal judgements about the relative importance of the various sub-topics and the additional knowledge gained from the EIS. No detailed explanation is given of how this should be done. To promote objectivity in reviewing, it is recommended that each EIS is initially reviewed separately by two different reviewers. Finally, any differences in the assessments made by them are identified and re-examined in an attempt to resolve these differences.

EIS Review Checklist

The latest version of the EIS Review Checklist (hereinafter: Review Checklist) was published in 2001. It has been designed for two purposes: (1) to assess the adequacy of a single EIS for decision-making, and (2) to assess the quality of EIS for either research or monitoring purposes. Its intended users are environmental authorities, developers, EIA practitioners, academics and organizations in the EU and around the world (CEC, 2001). The Review Checklist is declared as not being designed to verify the quality of the presented information; it is confined to detecting whether or not certain topics have been addressed, not whether this has been done in a manner that is sound both scientifically and technically.


The Review Checklist is a bulky questionnaire comprising the following seven sections:

1. Description of the project.
2. Consideration of alternatives.
3. Description of the environment likely to be affected by the project.
4. Description of the likely significant effects of the project.
5. Description of mitigation measures.
6. Non-technical summary.
7. Quality of presentation.

Sections 1, 3 and 4 are divided into sub-sections; the total number of Review Questions is 123. The detailed content of the Review Checklist can be found at <http://ec.europa.eu/environment/eia/eia-guidelines/g-review-full-text.pdf>.

First, for each Review Question, the evaluator has to decide whether the question is relevant to the project in hand. If the question is identified as relevant, the completeness of the answer to it is assessed using a letter grade on the five-point scale from A to E. The highest grade is A, denoting full provision of information with no gaps or weaknesses, and the lowest grade is E, denoting very poor provision of information with major gaps or weaknesses, which would prevent decision-making and thus requires major work in order to be complete. Proceeding from the answers to the questions, a grade is assigned to each of the following six Review Topics:

1. Characteristics of the project.
2. Alternatives considered.
3. Location of the project.
4. Mitigation.
5. Characteristics of the potential impacts.
6. Presentational issues.

Based on the grade of each Review Topic, an overall grade for the EIS is assigned. However, the mode of integration of lower-level grades into the Review Topic grades and the overall EIS grade has not been detailed. For illustration, only some rough examples are provided: (1) if one question is graded A and grade B is given to the other nine, the section (Review Topic) grade B is regarded as reasonable; or (2) if one question is graded E and another nine are graded B, then grade D is regarded as probably appropriate.
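Neither the rough examples above nor the Checklist text prescribe an aggregation formula. The short sketch below is our own illustration (in Python, with invented data), not part of the Checklist: it maps letter grades to the numerical equivalents used later in this paper (A = 5 ... E = 1) and takes a plain rounded average. This reproduces the first example (one A and nine Bs give B) but not the second (one E and nine Bs average to 3.7, i.e. B rather than the suggested D), which already indicates that the intended aggregation rule is something other than a simple mean.

```python
# Illustrative sketch only: the Review Checklist prescribes no formula.
# Letter grades are mapped to numbers, averaged, and rounded back to a letter.
GRADE_TO_NUM = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
NUM_TO_GRADE = {v: k for k, v in GRADE_TO_NUM.items()}

def topic_grade(question_grades):
    """Aggregate Review Question grades into a Review Topic grade by rounded mean."""
    nums = [GRADE_TO_NUM[g] for g in question_grades]
    return NUM_TO_GRADE[round(sum(nums) / len(nums))]

print(topic_grade(["A"] + ["B"] * 9))  # B, as in the Checklist's first example
print(topic_grade(["E"] + ["B"] * 9))  # B, although the Checklist suggests D
```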
Methodology

Constraints emerging from checklists' features: design and mode for aggregation

Even though EIS quality is a basic concept in both the Review Package and the Review Checklist, and in many studies based on them, it has not been defined. The possibility exists that an intensional definition (i.e. a definition that enumerates all inherent properties) cannot be given for EIS quality. In this research we proceeded from the idea that a quality EIS is one that gives, primarily to the decision-maker but also to other interested parties, all the information about the EIA that is (supposedly) important for decision-making. Hence, to be adequate, an EIS quality evaluation tool must address all relevant information within the EIS. Gaps in the checklists' coverage of the essential aspects of EIS were identified by systematically reviewing the content of the checklists from the viewpoint of a potential user.

Besides addressing all essential aspects, the evaluation tools should provide an adequate mode for aggregating the lowest-level grades into section/topic grades and those into an overall EIS grade. Both the Review Package and the Review Checklist are arranged hierarchically, starting from the most detailed questions (criteria) and aggregating through all levels into an overall assessment (a bottom-up approach). This is substantially the value-tree approach developed within the framework of multi-criteria decision-making. Therefore, the methods of aggregation integrated into the questionnaires are analysed from the viewpoint of the value-tree assessment model (UK DTLR, 2001; Value Tree Analysis, 2002) to highlight the possible deviations that could influence the credibility of outcomes. We focused on two key issues that influence aggregation: (1) mutual compensation, that is, can a low grade given to one criterion (e.g. Review Sub-category/Review Question, Review Area/Review Topic) be compensated by a high grade given to another? and (2) the relative weights of criteria.
Evaluators' inter-individual variability of judgements

Inter-individual differences in judgements may result from several factors, such as individual cognitive abilities, emotional profile, overall educational background, knowledge of the EIA process, work experience, etc. In this study we have focused on one of them: the cognitive-psychological factor. To measure its influence on evaluation, an empirical study was performed in 2008. A survey group was set up of 41 students who had just completed their EIA course.




Hence, the members of the group had similar background characteristics (e.g. age, education, theoretical knowledge of EIA, lack of EIA-related work experience). They individually evaluated the same reference EIS, using the Review Checklist as the most detailed and up-to-date tool. An EIS compiled by the Stockholm Environment Institute Tallinn Centre was selected as the reference EIS (SEI, 2006). It presents the results of an EIA on the site selection of an oil refinery and has been approved by the responsible authorities. The study addressed three sources of subjectivity: the selection of relevant Review Questions for the project (see the section on the Review Checklist above), the variability of the grades given to the selected Review Questions, and, finally, differences in aggregating grades through the hierarchical levels into the overall grade of the EIS.

The statistical analysis (correlations, probability distribution of grades) of the assessment data was completed using MS Excel VBA modules. For the calculation of correlations, numerical equivalents of the grades were applied (A = 5, B = 4, etc). The probability distribution of the differences between grades given by reviewers was obtained as follows: the grades given by each reviewer to the Review Questions were compared with the grades given by the other evaluators to the same questions. The absolute grade differences obtained were within the range from 0 (two reviewers had given the same grade to the same question) to 4 (one reviewer had given the highest grade A, while the other had given the lowest grade E). To obtain probability estimates for every grade-difference level (i.e. 0, 1, 2, 3, 4), the number of pairs at each difference level was divided by the total number of comparison pairs.
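As a compact illustration of these two computations, the sketch below re-implements them in Python rather than the Excel VBA actually used in the study; the reviewer-by-question grade matrix shown is invented toy data, not the study's data, and questions marked irrelevant are assumed to have been dropped beforehand.

```python
from itertools import combinations
from statistics import correlation  # Pearson r, Python 3.10+

# Rows: reviewers; columns: Review Questions; values: grade equivalents A=5 ... E=1.
grades = [
    [5, 4, 3, 4, 2],
    [4, 4, 2, 5, 3],
    [3, 5, 3, 3, 2],
]

# Probability distribution of absolute grade differences (0..4), taken over
# every pair of reviewers and every commonly graded question (cf. Figure 3).
diffs = [abs(g1 - g2)
         for r1, r2 in combinations(grades, 2)
         for g1, g2 in zip(r1, r2)]
diff_probs = {d: diffs.count(d) / len(diffs) for d in range(5)}

# Pairwise correlations between the grades given by different reviewers (cf. Figure 2).
corrs = [correlation(r1, r2) for r1, r2 in combinations(grades, 2)]

print(diff_probs)
print(corrs)
```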
Design-related constraints

Neglecting quality of information

The Review Package and the Review Checklist address the information quality issue differently. In the Review Package the quality of information is covered, albeit in a rather restricted manner, mostly as the presence of supportive information such as the identification of data sources, the rationale for selecting impact identification methods, and the justification of the assumptions and value systems used. In contrast, the Review Checklist deliberately omits this issue and is declared unsuitable for verifying the quality of information. Instead, it is a method for reviewing the completeness and suitability of information from the decision-making viewpoint.

It is reasonable to assume that for informed decision-making the completeness of information (i.e. all the relevant aspects are provided with sufficient information in the EIS) and the quality of the information contained (i.e. the information provided about those aspects is scientifically and technically sound) are equally important. In fact, complete but incorrect and misleading information can be even worse than clearly recognized and accounted-for information gaps. The idea that quality assessment of an EIS should also include consideration of whether the information presented is correct in line with current scientific and technical knowledge was pointed out some time ago by Scholten (Scholten, 1995, cited in Fuller, 1999). From this point of view, the quality of the presented information should be regarded as a component of EIS quality. Such an approach excludes the possibility that an EIS might be rated highly even though the presented information is technically and scientifically incorrect and therefore misleading: the contradiction that arises when the quality of an EIS is equated only with the quantity of information.

To avoid misunderstanding, a distinction should be made between the evaluation of EIS quality, which among other things includes the evaluation of the reliability of the information, and the evaluation of the completeness of an EIS, which covers only the sufficiency of the relevant information. The first can only be performed by qualified experts, while the second can be undertaken by a wide range of persons involved with EIA. Non-experts can take the quality of the presented information for granted, or consult experts in the respective field, as recommended in the Review Checklist. If the quality of information is not evaluated, the proportion of EIS graded unsatisfactory should be regarded as a minimum estimate, because the set of positively evaluated reports may contain some whose substandard information quality would necessarily have incurred a negative evaluation.

Uncertainty and probability of predictions

Both the Review Package and the Review Checklist focus on the identification, prediction and evaluation of the environmental impacts of a proposed action. Somewhat surprisingly, both of them pay very limited attention to the likelihood of potential impacts. Neither of these tools indicates directly whether an EIS contains information about the probability of the predicted impacts and what kind of probability is involved: one based on the assessor's judgement or one grounded in stochastic models. In the Review Package the topic of uncertainty/probability is only tackled through the gaps in required data (Sub-category 2.4.1: 'Any gaps in the required data should be indicated and the means used to deal with them in the assessment should be explained') and the ranges or confidence limits of predictions (Sub-category 2.4.3: 'Where possible, predictions of impacts should be expressed in measurable quantities with ranges and/or confidence limits as appropriate'). In the Review Checklist, the sub-sections 'Prediction of Direct Effects', 'Predictions of Effects on Human Health and Sustainable Development Issues' and 'Impact Assessment Methods' do not contain any questions about the probability/likelihood of effects. In the sub-section 'Prediction of Secondary, Temporary, Short Term, Permanent, Long Term, Accidental, Indirect, Cumulative Effects', the probability of impacts is mentioned among several other topics in the Review Question 'Are the geographic extent, duration, frequency, reversibility and probability of occurrence of each effect identified as appropriate?'. In the sub-section 'Impact Assessment Methods', one Review Question tackles prediction-method-related uncertainties in the results and another asks about worst-case predictions.

Typically, some uncertainty is inherent in the prediction of future events, and informed decision-making should not ignore it (Skinner, 2001; Hansson, 2005). Uncertainty can be addressed by subjective probability (experts' judgement) or by objective probability based on objective data (e.g. using stochastic prediction models). Experts' judgement expresses their belief in the likelihood of the predicted consequences as well as the role of certain factors (data quality, limited knowledge of interactions, etc) in that judgement; objective probability characterizes the natural randomness of variables in the system under study. The importance of probability in decision-making is clearly articulated in the basic literature on decision-making (e.g. Skinner, 2001; Winkler, 2003). Skinner (2001) defines a good decision as one that is logically consistent with our state of information and incorporates the possible alternatives with their associated probabilities and potential outcomes in accordance with our risk attitude. The relevance of probability is also clearly highlighted in some EIA manuals. For instance, A Handbook on Environmental Impact Assessment (DTA, 2005) states that competent authorities and consultees should try to ensure that, inter alia, environmental statements fairly and consistently describe the likelihood of impacts occurring. The question 'Do they [assessors] indicate the probability of an impact occurring?' was included in an early set of guidance questions for EIA reviewing in New Zealand (Morgan and Memon, 1993, cited in Morgan, 2002).

Keeping in mind the crucial role that the estimation of probability plays in decision-making, the need to assess the completeness of information on this matter is clearly underestimated in both review tools. This in turn means possible overestimation of EISs in which information about the probability of occurrence of the predicted environmental effects is deficient or has been omitted.
Consideration of alternatives

The scarcity of questions about alternatives is a common feature of both tools. In the Review Package only 3 out of 52 Review Sub-categories address alternatives. In the Review Checklist 7 out of 123 Review Questions address alternatives, of which three inquire about the development of the range of alternatives:

1. Is the process by which the Project was developed described, and are the alternatives considered during this process described?
2. Is the baseline in the 'No Project' situation described?
3. Are the alternatives realistic and genuine alternatives to the Project?

There are neither questions for obtaining convincing evidence that all the feasible alternatives were scrutinized, nor questions about how they were screened out (i.e. information about eliminated alternatives and the reasons for their elimination). The section 'Consideration of Alternatives' in the Review Checklist also tackles the issue of the comparison of the alternatives, preceding the section on the assessment of environmental effects. Because the comparison should be based upon the environmental effects, this sequence of questions is somewhat confusing. In fact, in the EIA process the consideration of alternatives consists of two phases: (1) identification of reasonable alternatives, which takes place before their effects are assessed, and (2) comparison of alternatives, which takes place after the effects have been assessed. Logically, the consideration of alternatives in EIS evaluation should be arranged in a similar order; that is, the comparison of alternatives should be placed after the section 'Description of the likely significant effects of the project'. The comparison of alternatives at large is scarcely covered, and is limited to two, largely overlapping, questions in the Review Checklist:

1. Are the main reasons for the choice of the proposed Project explained, including any environmental reasons for the choice?
2. Are the main environmental effects of the alternatives compared with those of the proposed Project?

In the Review Package, the comparison of alternatives is addressed only by the statement that the main environmental advantages and disadvantages of alternative sites should be discussed and the reason for the final choice given.



Making a choice between options (i.e. alternatives) is the core of any decision-making (Skinner, 2001; Hansson, 2005). In numerous textbooks (e.g. Canter, 1996; Carroll and Turpin, 2002; Morgan, 2002) alternatives are considered vitally important for the EIA process. The EU legislation (Council Directive of 27 June 1985) stipulates the same. Nevertheless, several researchers have pointed to deficiencies in the way alternatives are approached in EIA (e.g. Jones, 1999; Steinemann, 2001; Benson, 2003). Steinemann (2001) has shown that shortcomings are notable both in the development of a range of alternatives (some competing alternatives are not identified) and in the screening out of feasible alternatives (some alternatives are eliminated from detailed study without any convincing and documented reason).

The negative influence of the limited attention paid to the identification and comparison of alternatives is revealed in the overvaluation of EISs that have deficiencies in addressing the development of the range of feasible alternatives and the identification of the preferred alternative. At the same time, efforts in approaching these issues that should be encouraged and appreciated may be inadequately reflected in the assessment grade.
Public participation in the assessment of significance of impacts

The Review Package does not explicitly feature public input in the assessment of the significance of impacts. Implicitly, this issue is covered by Sub-category 2.5.3: 'The choice of standards, assumptions and value systems used to assess significance should be justified and any contrary opinions should be summarized.' In the Review Checklist, the public's role in this assessment is omitted. This contrasts with the widely recognized concept of the important role of public participation in environmental decision-making. In EIA the significance of impacts is largely based on value judgements. Bringing public values into the assessment is regarded as one of EIA's functions and a cause of its wide international success (Petts, 1999; Sippe, 1999), and public involvement is acknowledged among the basic principles of EIA best practice (André et al, 2006). Stewart and Sinclair (2007) point out a need for transparency in using public input in decision-making, and Wood (2008) stresses that an EIS needs to be characterized by clarity and transparency with regard to the evaluation of impact significance. Furthermore, owing to the deficiencies in public participation pointed out in numerous studies (e.g. Palerm and Aceves, 2004; Tang et al, 2005; Jou and Liaw, 2006; Nadeem and Hameed, 2008), determining the adequacy of the information provided in the EIS on this issue is regarded as crucial (Doelle and Sinclair, 2006).

Certainly, to make an informed decision, a decision-maker should be aware of whose (e.g. the consultants', the public's) value judgements the conclusions presented in the EIS are based on, and comprehend the discrepancy and split that may occur between the value judgements of the different parties involved. By neglecting the reflection of stakeholders' input in the EIS, the review tools lose a great portion of their adequacy in evaluating the completeness of the presented information in particular and EIS quality in general. This results in overvaluation of EIS (and EIA) with deficient public participation.

Aggregation

The aggregation of various criteria into an overall EIS evaluation is covered in a rather perfunctory manner in both review tools. Neither of them explains the principal difference between mutually compensatory (i.e. a high grade on one criterion can compensate for a low grade on another) and non-compensatory aggregation, and neither refers to any criterion that should be regarded as non-compensable. To get an overall estimate from a multi-criteria assessment, the single-attribute (criterion) assessment results should be transformed (aggregated) into a generalized grade. Usually it is thereby assumed that the attributes are mutually substitutable. If the condition of possible mutual compensation does not apply, other modes of aggregation should be chosen. One possibility is to regard a grade falling below a certain level as non-compensable; that is, a negative grade received for a certain criterion will implicitly lead to a negative aggregate grade, regardless of any other consideration (Beinat, 1997).

We believe that at least some EIS assessment attributes (i.e. some Sub-categories, Categories and Review Areas in the Review Package, or Review Questions and Review Topics in the Review Checklist) should be regarded as non-compensatory, at least when their values fall below a certain threshold. The Review Topic 'Alternatives considered', or even the Review Question 'Are the main environmental effects of the alternatives compared with those of the proposed Project?', serves as an example of such a critical issue. If the EIS fails to provide adequate information about the consideration of alternatives in general, or lacks information about how they were compared, it should be assessed as substandard, regardless of the grades given to other issues. Overlooking the non-compensatory approach may be crucial if the research or monitoring goal is to determine the proportions of satisfactory (i.e. grades equal to or higher than C) and non-satisfactory grades (i.e. grades D and lower). In this case the overestimation due to an unjustified compensatory approach may significantly increase the proportion of positively graded (grade C or higher) EISs.
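To make the distinction concrete, the sketch below is our own illustration (with invented criterion names and weights), not a procedure taken from either tool: a weighted average is fully compensatory, whereas designating certain criteria as critical and capping the result when they fall below the satisfactory threshold makes a single failing grade decisive.

```python
# Illustration of compensatory vs non-compensatory aggregation; grades use the
# numerical equivalents A=5 ... E=1, and a score of 3 (grade C) counts as satisfactory.
SATISFACTORY = 3

def compensatory(grades, weights):
    """Weighted average: a low grade can be offset by high grades elsewhere."""
    return sum(grades[c] * w for c, w in weights.items()) / sum(weights.values())

def non_compensatory(grades, weights, critical):
    """As above, but any critical criterion graded below C caps the overall result."""
    score = compensatory(grades, weights)
    if any(grades[c] < SATISFACTORY for c in critical):
        return min(score, SATISFACTORY - 1)  # at best grade D overall
    return score

grades = {"alternatives": 2, "impacts": 4, "mitigation": 4, "presentation": 5}
weights = {"alternatives": 2, "impacts": 3, "mitigation": 2, "presentation": 1}

print(compensatory(grades, weights))                                 # 3.625 -> counted as satisfactory
print(non_compensatory(grades, weights, critical={"alternatives"}))  # 2 -> unsatisfactory
```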


In multi-criteria assessment, the question of the relative weights of (mutually compensatory) criteria is inevitable (Triantaphyllou, 2000; Value Tree Analysis, 2002). Certainly, the possibility that all criteria are of equal importance cannot be ruled out; nevertheless, it should be regarded as exceptional. In a value-tree approach, evaluators should decide about the weights of the criteria (attributes) at all hierarchical levels. Weight elicitation can be done in two ways: (1) by non-hierarchical weighting, with weights being defined for the lowest-level attributes and upper-level weights calculated as the sum of the lower-level weights; or (2) by hierarchical weighting, with weights being defined for each hierarchical level separately and then multiplied down to obtain the corresponding lower-level weights (Stillwell et al, 1987; Value Tree Analysis, 2002).

The Review Package only mentions the need to adjust the assessment grades with the weights of sub-topics, without giving any explanation of how this should be done. The Review Checklist does not consider weight elicitation, but it implicitly gives equal weights to all Review Questions, sub-sections and sections (Review Topics). Assigning equal weight to some Review Topics, for instance to 'Alternatives considered' and 'Presentational issues', seems rather questionable. Not considering the differences between the weights of criteria may have equivocal effects, whereby either insubstantial reports are overvalued through the relative devaluation of certain, more relevant criteria, or substantial reports are undervalued through the relative revaluation of certain, less relevant criteria. Thus, poor consideration of the differences between weights may cause the evaluation results to be levelled off rather than biased in one direction.
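The following short sketch, again our own illustration with invented topic and question labels, shows the two elicitation routes side by side: hierarchical weights defined per level and multiplied down the tree, and non-hierarchical weights defined directly for the lowest-level questions and summed upwards.

```python
# Hierarchical weighting: weights are defined separately at each level
# (summing to 1 within a level) and multiplied down to the questions.
topic_weights = {"impacts": 0.6, "alternatives": 0.4}
question_weights_within_topic = {
    "impacts": {"q1": 0.5, "q2": 0.5},
    "alternatives": {"q3": 1.0},
}
hierarchical = {q: topic_weights[t] * w
                for t, qs in question_weights_within_topic.items()
                for q, w in qs.items()}
# -> {'q1': 0.3, 'q2': 0.3, 'q3': 0.4}

# Non-hierarchical weighting: weights are defined directly for the lowest-level
# questions, and a topic's weight is the sum of its questions' weights.
question_weights = {"q1": 0.3, "q2": 0.3, "q3": 0.4}
topics = {"impacts": ["q1", "q2"], "alternatives": ["q3"]}
non_hierarchical = {t: sum(question_weights[q] for q in qs) for t, qs in topics.items()}
# -> {'impacts': 0.6, 'alternatives': 0.4}

print(hierarchical)
print(non_hierarchical)
```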
Inter-individual variability of judgements

The first input of individual variability occurs when the reviewer selects the Review Questions relevant to the specific project. Somewhat surprisingly, in our experiment the reviewers' positions diverged substantially in this matter: the number of Review Questions qualified as irrelevant varied from 0 up to 57 out of 143. Despite the fact that the vast majority of reviewers omitted up to 20 questions (Figure 1), the results obtained indicate that differences in understanding the relevance of questions can act as a factor causing overall divergence of grades and thus cannot be ruled out.

The next step of subjective (i.e. evaluator's perspective) input is the main assessment, where grades are given to every Review Question acknowledged as relevant to the project under study. The individual variability of the grades given to the same Review Question is characterized by the correlation coefficients (Figure 2). Though the range was relatively wide, the vast majority of them fell into the interval 0.2–0.4. These rather weak correlations indicate that the responses at large were poorly related and that their divergences had a random rather than systematic character: if the divergences had been due to differences in reviewers' strictness, i.e. if some reviewers had consistently preferred giving lower (or higher) grades, the correlations would have been strong. In contrast to systematic deviations, random variability cannot easily be corrected with the help of a reviewers' inter-comparison exercise.

When comparing the evaluation results, the absolute difference between the grades that reviewers gave in answer to the same Review Question becomes important. According to our results (Figure 3), the probability of a match between the grade given by one reviewer and those given by others was merely 0.3. With a probability of nearly 0.4, the difference was two or more grades, which should be regarded as significant.



Figure 1. Frequency distribution of Review Questions qualified by 41 evaluators as irrelevant for the evaluation of the reference EIS


Figure 2. Frequency distribution of correlation coefficient (R). R reflects the correlation of grades given by 41 evaluators to every relevant question with grades given by every other evaluator to the same question in the Review Checklist

The last step, the aggregation of single assessments into grades at a higher hierarchical level (Review Topics) and, finally, into the overall EIS grade, presents another source of subjectivity. Here, possible differences in approaching the ideas of mutual compensation and of the relative importance of the different criteria (Review Questions, Review Topics) can be expected to result in variability of outcomes. One could expect that, if reviewers aggregated single grades in a similar manner, the grades given to the Review Topics would be tightly correlated with statistics (averages, medians) of the grades given to the questions within those Review Topics. The results obtained showed that the correlations varied synchronously with both statistics, being slightly tighter with the average. Almost all correlations with the average were above 0.5, indicating relatively modest individual variability at this stage (Table 1).

The cumulative effect of this subjectivity is manifested in the divergence of the overall grades for the whole EIS. Unfortunately, many reviewers did not complete this task: only 18 of them presented an overall grade.

None of them assessed the EIS with the extreme grades A or E, while grades B, C and D were given by five, nine and four reviewers, respectively. This is very close to the results of a recent analogous study (Peterson, 2010) of divergences in the overall evaluation grades of environmental professionals (environmental authorities and EIA consultants): eight out of 24 reviewers graded the case EIS B, ten gave C and six gave D. These results clearly indicate that a two-grade divergence in overall assessment is not a rare occurrence, but happens with about a quarter of reviewers.

Figure 3. Probability distribution of absolute differences in grades, given by 41 evaluators to every relevant question in the Review Checklist



Table 1. Correlation between grades given by 41 evaluators to Review Topics and average and median grades given by them to relevant questions within the corresponding Review Topics

Review Topic                             Median    Average
Alternatives considered                   0.37      0.57
Location of the project                   0.49      0.58
Characteristics of potential impacts      0.62      0.66
Mitigation                                0.77      0.75
Presentational issues                     0.33      0.50

Even though the limited amount of data did not allow statistically confident conclusions to be drawn, the similarity of the outcome distributions in these two studies supports the idea that the cognitive differences between evaluators, rather than their backgrounds, are the dominant reason for the diversity of outcomes. A comparison of the final grades with those given to the Review Topics revealed that, in our study, five of the nine reviewers who assigned an overall grade C had given grade D to the Review Question 'Is the significance of each effect clearly explained?'. This indicates that these reviewers made the (questionable) assumption that a shortage of information about environmental effects can be compensated for by good performance on other issues.

Conclusions

To avoid misinterpretation and misrepresentation of results obtained by using EIS evaluation checklists, this research points out the following:

1. Reviews that overlook the quality of the information presented in an EIS do not reflect the quality of the EIS at large, but only one quality component, namely the completeness of relevant information. This component is necessary, yet not sufficient, for making a decision about the report's quality. The evaluation of the completeness of relevant information in an EIS can be implemented by a broad range of involved or interested parties, while a quality review of an EIS presumes that the evaluators are suitably qualified in the issues relevant to the given EIS.

2. The most commonly used EIS evaluation tools, the Review Package and the Review Checklist, neglect several important aspects of the EIA process. This causes overvaluation of reports that inadequately address these aspects (probability of predicted effects, public involvement, consideration of alternatives), and considerable undervaluation of those reports where the approach is adequate. This also means that positive trends in the handling of these aspects would not be adequately mirrored in the monitoring of EIA quality. This shortcoming can be eliminated by upgrading the checklists.

3. Allowing a mutually compensatory approach to criteria where it is not appropriate is a factor leading to overvaluation of reports. It can lead to extremely tendentious conclusions if the reports are grouped into 'at least satisfactory' and 'non-satisfactory'. The approach taken to mutual compensation should be explicitly presented in the methodology section of checklist-based EIS studies; the discrepancy can then be reduced by agreeing in advance on the rules for applying it. The same is valid for assigning relative weights to attributes.

4. Individual cognitive differences clearly form a factor that is conducive to differences in grading results. The differences start when selecting the review criteria relevant to the project, and continue when assessing conformance with a single criterion and, thereafter, when aggregating grades. The usability of these checklists for research and monitoring, where the results from different evaluators are compared, is constrained (besides the factors above) by evaluator-related variability, expressed as a probability of about 0.25 of two-grade differences in the overall outcomes. Though streamlining the aggregation approach can reduce this, the essential discrepancies start at the initial stages of evaluation, in the selection of relevant Review Questions and the answers given to them.

By highlighting the inadequacies in two popular EIS quality evaluation tools, along with the variation in application brought about by user subjectivity, we have demonstrated some significant limitations on the use of these tools. While we do not denounce the future use of these tools by practitioners, we urge that they be applied with considerable care and caution, especially for research and monitoring of EIS quality.

References
André, P, B Enserink, D Connor and P Croal 2006. Public Participation: International Best Practice Principles. Special Publication Series No. 4. Fargo, USA: International Association for Impact Assessment.
Barker, A and C Wood 1999. An evaluation of EIA system performance in eight EU countries. Environmental Impact Assessment Review, 19(4), 387–404.
Beinat, E 1997. Value Functions for Environmental Management. Environment & Management, Volume 7. Dordrecht: Kluwer.
Benson, J F 2003. What is the alternative? Impact assessment tools and sustainable planning. Impact Assessment and Project Appraisal, 21(4), 261–280.
Bojórquez-Tapia, L A and O García 1998. An approach for evaluating EIAs: deficiencies of EIA in Mexico. Environmental Impact Assessment Review, 18(3), 217–240.
Canelas, L, P Almansa, M Merchan and P Cifuentes 2005. Quality of environmental impact statements in Portugal and Spain. Environmental Impact Assessment Review, 25(3), 217–225.
Canter, L W 1996. Environmental Impact Assessment, 2nd edition. New York: McGraw-Hill.
Carroll, B B and T Turpin 2002. Environmental Impact Assessment Handbook: A Practical Guide for Planners, Developers and Communities. London: Thomas Telford.
Cashmore, M, E Christophlopoulos and D Cobb 2002. An evaluation of the quality of environmental impact statements in Thessaloniki, Greece. Journal of Environmental Assessment Policy and Management, 4(4), 371–395.
CEC, Commission of the European Communities 2001. Guidance on EIA: EIS Review. Office for Official Publications of the European Communities. Available at <http://ec.europa.eu/environment/eia/eia-guidelines/g-review-full-text.pdf>, last accessed 7 August 2006.
CEC, Commission of the European Communities 1994. Environmental Impact Review Checklist. Brussels: Directorate General for Environment, Nuclear Safety and Civil Protection.
Council Directive of 27 June 1985 on the assessment of the effects of certain public and private projects on the environment (85/337/EEC). Available at <http://ec.europa.eu/environment/eia/full-legal-text/85337.htm>, last accessed 28 January 2011.
Doelle, M and A J Sinclair 2006. Time for a new approach to public participation in EIA: promoting cooperation and consensus for sustainability. Environmental Impact Assessment Review, 26(2), 185–205.
DTA, David Tyldesley and Associates 2005. A Handbook on Environmental Impact Assessment: Guidance for Competent Authorities, Consultees and Others Involved in the Environmental Impact Assessment Process in Scotland, 2nd edition. Available at <http://www.snh.org.uk/publications/on-line/heritagemanagement/eia/>, last accessed 26 July 2007.
Elkin, T I and P G Smith 1988. What is a good environmental impact statement? Reviewing screening reports from Canada's national parks. Journal of Environmental Management, 26(1), 71–89.
Fuller, K 1999. Quality and quality control in environmental impact assessment. In Handbook of Environmental Impact Assessment, Vol. 2, ed. J Petts, pp. 55–74. Oxford: Blackwell Science.
Gray, I and G Edwards-Jones 2003. A review of environmental statements in the British forest sector. Impact Assessment and Project Appraisal, 21(4), 303–312.
Hansson, S O 2005. Decision Theory: A Brief Introduction. Available at <http://www.infra.kth.se/~soh/decisiontheory.pdf>, last accessed 25 October 2009.
Jones, C 1999. Screening, scoping and consideration of alternatives. In Handbook of Environmental Impact Assessment, Vol. 1, ed. J Petts, pp. 201–228. Oxford: Blackwell Science.
Jou, J-J and S-L Liaw 2006. A study on establishment of EIA system in the Taiwan region. Journal of Environmental Assessment Policy and Management, 8(4), 479–494.
Lawrence, D P 1997. Quality and effectiveness of environmental impact assessments: lessons and insights from ten assessments in Canada. Project Appraisal, 12(4), 219–232.
Lee, N and R Colley 1991. Reviewing the quality of environmental statements. Town Planning Review, 62(2), 239–248.
Lee, N and D Brown 1992. Quality control in environmental assessment. Project Appraisal, 7(1), 41–45.
Lee, N and R Dancey 1993. The quality of environmental impact statements in Ireland and the United Kingdom: a comparative analysis. Project Appraisal, 8(1), 31–36.
Lee, N, F Walsh and G Reeder 1994. Assessing the performance of the EIA process. Project Appraisal, 9(3), 161–172.
Lee, N, R Colley, J Bonde and J Simpson 1999. Reviewing the Quality of Environmental Statements and Environmental Appraisals. Occasional Paper Number 55. University of Manchester. Available at <http://www.sed.man.ac.uk/planning/research/publications/wp/eia/documents/OP24PARTB.pdf>, last accessed 23 March 2010.
McGrath, C and A Bond 1997. The quality of environmental impact statements: a review of those submitted in Cork, Eire from 1988–1993. Project Appraisal, 12(1), 43–52.
Morgan, R K 2002. Environmental Impact Assessment: A Methodological Perspective. Dordrecht: Kluwer.
Nadeem, O and R Hameed 2008. Evaluation of environmental impact assessment system in Pakistan. Environmental Impact Assessment Review, 28(8), 562–571.
Palerm, J and C Aceves 2004. Environmental impact assessment in Mexico: an analysis from a consolidating democracy perspective. Impact Assessment and Project Appraisal, 22(2), 99–108.
Peterson, K 2010. Quality of environmental impact statements and variability of scrutiny by reviewers. Environmental Impact Assessment Review, 30(3), 169–176.
Petts, J 1999. Public participation and environmental impact assessment. In Handbook of Environmental Impact Assessment, Vol. 1, ed. J Petts, pp. 145–177. Oxford: Blackwell Science.
Ross, W A 1987. Evaluating environmental impact statements. Journal of Environmental Management, 25, 137–147.
Sandham, L A and H M Pretorius 2008. A review of EIA report quality in the North West province of South Africa. Environmental Impact Assessment Review, 28(4–5), 229–240.
SEI 2006. VKG Oil AS poolt kavandatava uue õlitehase asukohavaliku keskkonnamõjude hindamine. Lõpparuanne. Available at <http://www.envir.ee/orb.aw/class=file/action=preview/id=235883/UTT_KMH lopparuanne.pdf>, last accessed 22 October 2006 (in Estonian).
Sippe, R 1999. Criteria and standards for assessing significant impact. In Handbook of Environmental Impact Assessment, Vol. 1, ed. J Petts, pp. 74–92. Oxford: Blackwell Science.
Skinner, D C 2001. Introduction to Decision Analysis: A Practitioner's Guide to Improving Decision Quality, 2nd edition. Gainesville: Probabilistic Publishing.
Steinemann, A 2001. Improving alternatives for environmental impact assessment. Environmental Impact Assessment Review, 21(1), 3–21.
Stewart, J M P and J Sinclair 2007. Meaningful public participation in environmental assessment: perspectives from Canadian participants, proponents, and government. Journal of Environmental Assessment Policy and Management, 9(2), 161–183.
Stillwell, W G, D von Winterfeldt and R S John 1987. Comparing hierarchical and nonhierarchical weighting methods for eliciting multiattribute value models. Management Science, 33(4), 442–450.
Tang, Shui-Yan, Ching-Ping Tang and Carlos Wing-Hung Lo 2005. Public participation and environmental impact assessment in mainland China and Taiwan: political foundations of environmental management. The Journal of Development Studies, 41(1), 1–32.
Triantaphyllou, E 2000. Multi-Criteria Decision Making Methods: A Comparative Study. Applied Optimization, Volume 44, ed. P M Pardalos. Dordrecht: Kluwer.
UK DTLR 2001. Multi Criteria Analysis: A Manual. Department for Transport, Local Government and the Regions, UK. Available at <http://www.odpm.gov.uk/stellent/groups/odpm>, last accessed 27 March 2007.
Value Tree Analysis 2002. Helsinki University of Technology. Available at <http://www.mcda.hut.fi/value_tree/theory>, last accessed 24 March 2008.
Winkler, R L 2003. An Introduction to Bayesian Inference and Decision, 2nd edition. Gainesville: Probabilistic Publishing.
Wood, G 2008. Thresholds and criteria for evaluating and communicating impact significance in environmental statements: 'see no evil, hear no evil, speak no evil'? Environmental Impact Assessment Review, 28(1), 22–38.


