
Curator

From Knowing to Not Knowing: Moving Beyond Outcomes

Andrew J. Pekarik
Abstract
The ways that museums measure the success of their exhibitions reveal
their attitudes and values. Are they striving to control visitors so that people will
experience what the museum wants? Or are they working to support visitors, who
seek to find their own path? The type of approach known as outcome-based evaluation weighs in on the side of control. These outcomes are sometimes codified and
limited to some half-dozen or so learning objectives or impact categories. In
essence, those who follow this approach are committed to creating exhibitions that
will tell visitors what they must experience. Yet people come to museums to construct
something new and personally meaningful (and perhaps unexpected or unpredictable) for themselves. They come for their own reasons, see the world through their
own frameworks, and may resist (and even resent) attempts to shape their experience. How can museums design and evaluate exhibitions that seek to support visitors
rather than control them? How can museum professionals cultivate not knowing
as a motivation for improving what they do?

Andrew J. Pekarik (pekarika@si.edu) is Program Analyst in the Office of Policy and Analysis, Smithsonian Institution, Washington, D.C.

Everyone who works in museums wants to create exhibitions that are successful. But
what exactly is success? How can it be identified? Can the degree of success be measured?
Over the past quarter century, as the field of visitor studies has grown, some museums have come to believe that an exhibition is successful when visitors have the experiences that the museum intended them to have; that is, when outcomes match intentions.
Measuring the extent of the match is known as outcome-based evaluation, and it has become increasingly popular in recent years. Adherents even see it as the starting point for creating exhibitions; that is, first identify the desired outcome, then select a topic or approach likely to achieve that outcome.
In this article I will describe why I think outcome-based program management, defined here as planning based on the definition and measurement of objectives, is ill-advised for creating and evaluating museum exhibitions, and I will suggest
what a more suitable alternative might be.
The outcome-based approach to evaluation has a long history in America. It is
often traced back to the work of Ralph W. Tyler, an educational assessment specialist
who was chief evaluator for The Eight-Year Study (1932–1940), a national program
that involved 30 high schools in a curriculum experiment that organized material by
theme (with input from students) rather than by subject.1 Tyler emphasized assessment derived from comparing clearly stated objectives with outcomes. The advantage
of this method was that it was much simpler than the traditional gold standard for
evaluating experiments, namely, comparing treatment groups to control groups.
Instead of studying how students who used the experimental curriculum differed from
those who used the traditional curriculum, he could simply look for his predefined
outcomes. This line of thinking led in the 1950s to Benjamin Bloom's taxonomy, a
set of hierarchical categorizations of learning outcomes. In the cognitive domain, these
outcomes are: Knowledge, Comprehension, Application, Analysis, Synthesis, and
Evaluation; in the affective domain: Receiving, Responding, Valuing, Organizing, and
Characterizing (Bloom 1956).2
Outcome-based evaluation grew rapidly in the 1970s in connection with the
professionalization of evaluation and the need to determine the effectiveness of new,
large-scale government programs (for instance, Project Head Start). In particular, the
work of Joseph Wholey, an influential public administration scholar, emphasized the
use of evaluation for program management. Ultimately, this trend led to the Government Performance and Results Act of 1993 (GPRA), and its emphasis on performance measurement systems and strategic objectives as guides to program decision-making.
In America, within the world of social-service non-profits, the W. K. Kellogg Foundation (1998), the Urban Institute (Hatry 1999), and the United Way (1996) led the call
for outcome-based program management in the 1990s. In the realm of museum exhibitions, in response to the GPRA, the Institute of Museum and Library Services (IMLS) has
been a leader. According to the IMLS website for grant applicants:
IMLS defines outcomes as benefits to people: specifically, achievements or changes in skill, knowledge, attitude, behavior, condition, or life status for program participants ("visitors will know what architecture contributes to their environments"; "participant literacy will improve").3

More recently, the National Science Foundation's (NSF) Informal Science Education program developed a Project Monitoring System, a relational database to facilitate
the recording of project outcomes for exhibitions. Proposers of projects are required to
identify intended outcomes using six impact categories: Awareness, knowledge, or
understanding; Engagement or interest; Attitude; Behavior; Skills; and Other.4


A similar categorization of outcomes for museums, known as the Generic Learning Outcomes (GLO), was developed in Great Britain. The five categories of GLO are:
Knowledge and understanding; Skills; Attitudes and values; Enjoyment, inspiration, and
creativity; Action, behavior, and progression.5
Among museum evaluators, Chan Screven and Harris Shettel, in particular,
have been strong proponents of using objectives and outcomes to determine (and
improve) the didactic efficacy of exhibitions since the 1960s. Objections to this
approach are also long-standing.6 The more recent trend has been to expand this
method to explicitly include non-conceptual goals, such as inspiration and interest;
to codify the range of objectives as described by learning-outcome frameworks such
as the GLO or the NSF Impact Categories; and to more strictly link evaluation to
exhibition planning.
Despite several compelling factors (the size of the literature surrounding this outcome-based approach to program planning, its history, the clout of its supporters, and its inherently logical character7), I am suggesting that this method is not the best direction for planning and evaluating museum exhibitions. My argument against the outcome-based approach to exhibition-making and evaluation in museums has three main
points:
First, as with all evaluation methods and program management systems, outcome-based evaluation and program management has both strengths and weaknesses. Through
familiarity with those strengths and weaknesses we can determine where a method is
appropriate and where it is not.
Second, outcome-based program management and the use of pre-defined outcome frameworks tend to reinforce conventional wisdom about museum mission among
exhibition creators; this mode of thinking ultimately limits innovation.
Third, outcome frameworks are very likely to lead, by the coercive rationality of the logic model, to measurements, and these measurements, necessarily narrower than
the outcomes themselves, further restrict the range of what staff believes exhibitions can
accomplish.
In place of this, I propose the use of a more recent evaluation approach, participant-oriented evaluation, and the adoption of design experimentation as the concomitant
principle to guide exhibition decisions.

Strengths and Weaknesses


The strength of outcome-based evaluation lies in its conceptual simplicity. The advantages include straightforwardness of implementation (provided that objectives are clearly
defined and outcomes clearly discernible), and ease of presentation (because results are
easy to understand). In addition, the approach is well suited to quantitative measurement, since outcomes are precisely definable in terms of the objectives, and thus data
needs are relatively easy to predict.
The weakness of outcome-based evaluation and management systems begins
with the initial establishment of objectives. It assumes at the outset that the defined objectives are the best expressions of exhibition quality within the context of the
museum mission. But who determines objectives for a particular exhibit? On what
basis is one objective (or a few) selected over others? Whom does a particular objective best serve? The selection of an outcome is an implicit decision about value. While
the choice of objectives may reflect the agendas of program managers and developers,
it does not generally take into account the values of all those whom the program
purports to benefit.
This weakness can be mitigated when studies in advance of an exhibition (sometimes called front-end evaluation or front-end analysis) obtain input from potential
visitors and are used to shape objectives. But even so, the need to specify a limited set of
clearly defined objectives tends to limit the range of what is recognized as useful within
this data.
For example, outcome-based exhibition development typically favors cognitive
objectives, such as increasing visitors' information or knowledge. (It's no accident that outcomes are so often called "learning outcomes.") This traditional preference has its
own internal logic: First, the acquisition of a specific idea is easier to define and measure,
in comparison to something as vague as inspiration or creativity; second, the method
makes sense because exhibitions have been classified as communications media in the
minds of many museum people and evaluators. Front-end evaluation that attempts to
uncover what people know about a subject in order to refine the cognitive message
that is to be the exhibition's objective will have trapped itself within the model of outcome-based development, since it is likely to ignore (or not solicit, or not notice)
data that points to the value of other, very different visitor goals, including those that are
unrelated to ideas.
And even if one were to accept the relevance of pre-packaged sets of outcomes
as defined by these external funding entities, how would one decide among competing
systems? Although the learning-outcome frameworks such as the GLO or the NSF
Impact Categories resemble one another, they are not equivalent, either in their content
or in their implicit values. Neither the GLO nor the NSF impacts, for example, give
prominence to synthesis, which is the construction (or derivation) of a whole pattern
out of diverse parts, even though synthesis stood near the top of Bloom's hierarchical list
of cognitive learning outcome categories. In other words, these learning outcome frameworks emphasize the passive acquisition of information and attitudes rather than the
active construction of something new and personally meaningful (and perhaps unexpected by the museum).
A second major weakness of outcome-based evaluation lies in its relative neglect of
unintended outcomes. Museum audiences are very diverse in terms of their intentions
and expectations. The range includes those who are vaguely curious and are visiting
perhaps to please a companion, as well as those who are seeking answers to specific technical questions. Much more research, in fact, needs to be done on the motivations (conscious and unconscious) of museum visitors.
Objectives or outcomes are like an arrangement of funnels meant to neatly channel the unruly flow of visitor experiences into bottles for measurement and labeling. But
visitors in exhibitions are not under the museum's control. They come for their own reasons, see the world through their own frameworks, and may even actively avoid the
attempts of exhibition makers to shape their experience. Objectives-based program
development encourages us to undervalue what flows past the funnels in this dynamic
stream, and to believe in the illusion of control.
We can be reasonably certain that any choice of one or several outcomes is likely
to exclude many in the museum visitor audience who have little or no interest at all in
those particular aspects of the museum visit. And despite the focus and efforts of exhibition developers, it is also reasonably certain that, among those who experience that exhibition, there are those who find significant satisfaction and benefit from it in ways that were not predicted by the objectives/outcomes framework.
If such unintended outcomes are captured at all in the evaluation process (and this
is unlikely if the evaluation activity is efficient), they will be no more than side-effects
with little impact on the comparison of objectives and outcomes that represents the
heart of the analysis effort. Repeated iterations of this cycle of evaluation and planning
would tend toward ever narrower and sharper objectives directed (inadvertently, perhaps) to ever narrower audiences (in particular, those most receptive to those objectives), since, in the end, the museum shapes its audience by the limitations on what it
chooses to offer visitors.

Museum Mission
The valuation and establishment of specific outcomes implies a paternalistic relationship
between the organization and its public. It suggests that the task of the organization is to
change the visitor in ways that the museum has predetermined are useful and valuable.
This attitude is not rare in the history of museums (and it is, of course, an established feature of contemporary schooling), but it is not the only possible way to view the museum's mission.
The deliberate choice to promote museums as primarily educational institutions
(a strategy meant to improve the appeal of museums to government funders) has naturally led museums to be seen within the framework of contemporary education and thus
closely tied to the kinds of outcomes that are associated with schooling.
In my opinion, the word "education" should be used to mean much more than "schooling" or "training." It should be used to describe something that extends far beyond the acquisition of certain predefined knowledge and abilities. It should describe a type of engagement with reality that leads to independent growth and discovery. Since education, in this sense, is currently treated as a kind of unintended outcome of the schooling/training process, its accomplishment is a hit-or-miss affair that is consequently undervalued.
In a comparable way, I see museums as environments within which individuals can
find opportunities for engagement that can lead to personal growth (intellectually, emotionally, spiritually) in whatever way each individual needs and desires. This is a
very different model from the one that sees them as communicators of a limited set of
ideas and values.


Metaphorically speaking, the museum is more like Yosemite National Park than it
is like Hialeah Race Track. Vast, rugged, wild, and boundary-less (except on maps),
Yosemite is a monumental place to wander in. Would you want to see Yosemite restructured to meet just a few specific, predefined uses? Hialeah is designed for people (racers and bettors alike) who keep an eye on the finish line. How long would it take, how
many thousands of studies, to determine all of the ways that individuals of many diverse
types find benefit from spending time in Yosemite? Would those experiences be
improved if these possibilities were narrowed to the few benefits whose outcomes could
be measured? Or should possibilities be continually expanded to meet the needs of an
ever-widening circle of users?
To see the museum as a field of potential for human growth is to see it as a place that serves others, rather than as a place that changes people into versions more acceptable to the museum's staff and sponsors (although such changes may indeed happen in some instances). Its task, from this perspective, is to provide a setting that is as rich with opportunities, as alive and intriguing, as is humanly possible. The museum becomes, in a sense, a hyper-reality (a trackless realm to play in, like Yosemite) that offers opportunity for engagement in multiple ways, with the capacity to be intense and powerful.
Now imagine what it is like to create exhibitions from this viewpoint rather than
from the mechanical, self-referencing system of objectives and outcomes. Because the
field of potential is so vast, one would need to begin by understanding how, where, and
for whom this kind of growth is taking place. The construction of any exhibition, in
other words, would truly begin with visitors, and would proceed through a process of
self-questioning to determine how the museum's resources (contents, facilities, staff, and
so on) could be used to expand visitors experiences, not narrow them down to chosen
outcomes.

Measurement
I am arguing for a view of museums that expands outward, rather than one that narrows.
The setting of objectives represents a funneling of many possibilities into a select few.
Measuring outcomes constricts the funnel even further, since it establishes specific, quantifiable parameters that will then be taken to represent the outcome more broadly.
Outcome-based evaluation requires measurement. It has no meaning if outcomes
cannot be measured, since there is then no other way to compare them with objectives
in order to prove success. The funders who urge outcome-based evaluation assume that
there will be measurement. The NSF impact categories, for example, are each defined as
a measurable demonstration of its particular type. The Kellogg Foundation insists that
outcomes be SMART, that is: specific, measurable, action-oriented, realistic, and timed
(W. K. Kellogg Foundation 1998, 17). IMLS defines outcome-based evaluation as the
measurement of results.8
In the exhibition setting, however, valid, reliable measures of outcomes are difficult to come by for a number of reasons. First, such measurements are likely to be indirect, subjective, and vague: for example, what about the percentage of visitors who agree that they learned something, even if what they learned was incorrect? Second, thorough testing of visitors is not possible in a museum environment: even if you could test them
on a specific idea, how do you account for those who did not learn that idea, but who
did learn many others? Third, sample sizes are generally small, either because program
audiences are small, or because large-scale studies are too expensive for most museums.
Fourth, there is no single ideal time to measure an outcome: should a behavior change
be measured immediately upon exit from an exhibition, a short while later, or much
later? Fifth, since museum visiting is likely to be only one of a series of inter-related activities that might have a role to play in determining the outcome of a visit, how can you
control for these confounding variables?

Participant-based Evaluation
Participant-based evaluation originally arose in opposition to outcome-based evaluation. It begins with qualitative inquiry into the experiences of those involved in a project,
whether as providers or as consumers, or both. Variations on this approach include "naturalistic evaluation" (Wolf and Tymitz 1978), "responsive evaluation" (Stake 1984), "fourth-generation evaluation" (Guba and Lincoln 1989), and "collaborative evaluation"
(Cousins, Donohue, and Bloom 1996). The research aims to discover new dimensions
of the situation that have not previously been noted and considered. It is an open-ended
inquiry into meaning-making that aims to make understanding more complex, rather than to simplify it.
The participant-based approach draws on constructivist thinking and seeks to
understand the exhibition and its effectiveness from the point of view of those who experienced it. In place of the single producer-oriented reality represented by objectives and
outcomes, this approach admits that each of the participants is constructing his or her own
perception of reality. Evaluators seek out the diversity of these perceptions. Because this
method is context-sensitive, and represents a process of discovery, it is very sensitive to
the variety of participant needs as well as to unexpected outcomes.
Because this kind of evaluation begins as a goal-free, blinder-free search for discovery, it provides a view that is more complete and contextualized, more nuanced and
complex, and more accepting of multiple viewpoints. Its results address all the elements
that impact the experience of visitors: not just outcomes, but processes, settings, needs,
issues, values, barriers, and so on. Out of this analysis comes insight that can be used to
change the exhibition or to devise new ones.
Another way of putting it is that participant evaluation, unlike outcome evaluation, is not just about what happened, but also about why it happened. The value of this
kind of evaluation far exceeds the simple question of effectiveness of a program. Because
it provides insight into the way that diverse participants think and act, it gives exhibition
developers a much richer mental model to use in constructing and revising exhibitions.
As those exhibitions are studied, in turn, a large, complex body of knowledge is constructed out of the surprises and insights that come from the iterative process of creating
programs and studying participants.


In any system of evaluation, one hopes that repeated evaluations will inform the
creation and revision of exhibitions. When outcome-based evaluation is employed, the
refinement that is achieved over time is movement ever closer to the ideal of a program
that will perfectly match objective and outcome (usually achieved most effectively by
narrowing the audience). When participant-based evaluation is used, however, the
development is an ever richer and deeper understanding of visitors and the confidence
to experiment with new ways to respond to those new discoveries about how diverse
people engage with the museum.
Of course, there are implications in this alternative model for the way that exhibition development is pursued. While outcome-based evaluation can be handed off to an
evaluation specialist or to an outside contractor once objectives are agreed upon, participant-based evaluation, if it is to be used to actively construct programs, needs to be an
ongoing activity of the exhibition development team more broadly. Since it aims to capture subtle complexities in the experience of specific individuals, it is most valuable when
it is personally experienced. Those who are responsible for making the exhibition need
to see and hear first-hand how visitors respond. In this way, their mental models of visitors are consciously and subconsciously enhanced. In other words, evaluation becomes
everyones business.

Design Experimentation
In situations where one wishes to understand the workings of a complex, interpenetrating, and overlapping interchange of activities, such as learning in a classroom or visiting
an exhibition in a museum, it is very difficult to separate out one strand from the entire
system. A learning outcome, isolated from its context, is like a fish taken from the ocean,
gasping for air.
In order to understand (and judge) what is happening in complex systems like
these, we need to study the system in its totality and examine how the working of that
system shifts in response to changes. Just as wind-tunnel testing is used to refine the
design of rockets or airplanes, alterations in exhibitions can be studied to determine
their impact on participant responses, so that improved versions result. In other
words, the exhibition itself can be viewed not as a product to be constructed in its
entirety and then judged as successful or not, but as an experiment whose components will be altered. In accordance with those alterations, participants will be studied
in an open-ended manner in order to determine what happened, who was affected,
and why.
The idea of design experimentation is generally traced to the work of Ann Brown,
who said that, "As a design scientist in my field [the study of learning], I attempt to engineer innovative educational environments and simultaneously conduct experimental studies of those innovations" (1992, 141).
The difference between design experimentation and seat-of-the-pants fine-tuning
or formative evaluation is that design experiments are done not only to create
improvements, but also to construct theory, and not a grand theory like Special Relativity, but a working theory of how content and design choices affect the emergence of particular responses. As Cobb et al. have stated with respect to design experiments in education:
Design experiments ideally result in greater understanding of a learning ecology (a complex, interacting system involving multiple elements of different types and levels) by designing its elements and by anticipating how these elements function
together to support learning. Design experiments therefore constitute a means of
addressing the complexity that is a hallmark of educational settings (Cobb et al.
2003, 9).

In my opinion, this is exactly the exhibition development approach that would


most benefit museums, because here, too, we are dealing with an extremely complex
system of interacting parts and levels. If we organize exhibitions as design experiments,
we can begin to define, in a clearer, more thoughtful, and accurate way, the ecology
of the environment that gives rise to the museum experience. Design experimentation,
coupled with participant evaluation, thus can become a powerful engine for change
and creativity, one that can respond to changing audiences and media with ease and
skill.

Design Experimentation in Practice


I am not aware of an exhibition project that has been or is being consciously developed
as a design experiment. (If you have an example, please let me know.) I am involved
with several projects that have started to move in this direction, but instead of describing
them here, I would like to suggest how I envision this approach being applied in its
fullest form.
Instead of thinking of the exhibition as a building that is planned in detail
and then built, one would think of it as a living organism. It begins small, perhaps
as a few displays set among others. As the exhibition team studies the ways that visitors engage with this embryonic exhibition, the team starts to invent methods for
expanding it that seem likely to be fruitful, in view of what team members are learning about visitors and their responses. As the embryonic exhibition is revised and
enlarged (perhaps doubled, let's say), it is studied again, and yet again it is changed
and built upon. The exhibition, in other words, evolves as the teams understanding
evolves in regard to what the visitor experiences and what the exhibition facilitates.
Eventually it is declared mature, it stops changing, and, after a decent interval, it
begins to die as new growth (new displays, for instance) begins to take away some of its territory.
A living, changing, organic exhibition like this might have some practical benefits
in today's museum environment. Obviously this pattern of development is most suitable
for the display of permanent collections, not for projects dependent on outside loans.
And at a time when museums are turning away from extravagant loan exhibitions
towards new ways of using the permanent collection, this practice of experimentation might provide a way to make the display of permanent collections dynamic without radically increasing costs; to incorporate new technologies effectively and efficiently; to
increase creativity and experimentation; and to engage new audiences.

Conclusion
If success does not mean that exhibition visitors attain the specific outcomes that the
museum intends them to achieve, what, then, does it mean? I am suggesting here that
for visitors it means that the experience opens up possibilities in ways they feel are personally meaningful. For museum staff it means that the exhibition provides an opportunity to attain a deeper, richer understanding of the museum experience, one that
significantly expands their mental model of visitor response.
In the end, I think, the argument for or against outcome evaluation is a matter of
values. What kind of understanding do museum professionals seek? Is it better to have a
solid, established understanding that one can confidently apply in a consistent, objective
way? Or is it better to have a fluid, dynamic understanding that is constantly seeking
new articulation and is never the same? What is our mission? Is it to disseminate our wisdom? Or is it to help others in their search? What motivates museum professionals? Is it
knowing? Or is it not knowing?

Notes
1. The reforms recommended by the study are still considered relevant today. See,
for example, Kridel, Bullough, and Goodlad (2007). For excerpts of the study
itself, see http://www.8yearstudy.org/projectintro.html.
2. The taxonomy also identified a psychomotor domain, but did not include subcategories within this domain.
3. See http://www.imls.gov/applicants/basics.shtm. The page contains a link to the
IMLS document Perspectives on Outcome-Based Evaluation for Libraries and Museums.
4. See Table 3-1, page 21, in Friedman (2008).
5. See http://www.inspiringlearningforall.gov.uk/toolstemplates/genericlearning/
index.html.
6. See, for example, the objections to Shettel's approach by Michael Alt (Alt 1977) and Shettel's detailed response (Shettel 1978).
7. The approach is alternatively called the "logic model."
8. See http://www.imls.gov/applicants/basics.shtm.

References
Alt, M. 1977. Evaluating didactic exhibits: A critical look at Shettel's work. Curator 20 (3): 241–258.


Bloom, B. S., ed. 1956. Taxonomy of Educational Objectives: The Classification of Educational Goals, by a Committee of College and University Examiners. New York:
Longmans, Green.
Brown, A. L. 1992. Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the
Learning Sciences 2 (2): 141–178.
Cobb, P., J. Confrey, A. diSessa, R. Lehrer, and L. Schauble. 2003. Design experiments in educational research. Educational Researcher 32 (1): 9–13.
Cousins, J. B., J. J. Donohue, and G. A. Bloom. 1996. Collaborative evaluation in
North America: Evaluators' self-reported opinions, practices, and consequences. Evaluation Practice 17 (3): 207–226.
Friedman, A., ed. 2008. Framework for evaluating impacts of informal science
education projects. National Science Foundation. Accessed Oct. 3, 2009 at
http://www.insci.org/resources/Eval_Framework.pdf.
Guba, E., and Y. Lincoln. 1989. Fourth Generation Evaluation. Thousand Oaks, CA:
Sage.
Hatry, H. P. 1999. Performance Measurement: Getting Results. Washington, DC: The
Urban Institute.
Hendricks, M., M. C. Plantz, and K. J. Pritchard. 2008. Measuring outcomes of
United Way-funded programs: Expectations and reality. In Nonprofits and
Evaluation: New Directions for Evaluation, No. 119, J. G. Carman and K. A. Fredericks, eds., 13–35. San Francisco: Jossey-Bass.
Kridel, C. A., R. V. Bullough, and J. I. Goodlad. 2007. Stories of the Eight-Year
Study: Reexamining Secondary Education in America. Albany, NY: State University of New York Press.
Shettel, H. H. 1978. A critical look at a critical look: A response to Alt's critique of Shettel's work. Curator 21 (4): 329–345.
Stake, R. E. 1984. Program evaluation, particularly responsive evaluation. In Evaluation Models, G. F. Madaus, M. Scriven, and D. L. Stufflebeam, eds. Boston:
Kluwer-Nijhoff.
United Way of America. 1996. Measuring Program Outcomes: A Practical Approach.
Alexandria, VA: United Way of America.
W. K. Kellogg Foundation. 1998. W.K. Kellogg Foundation Evaluation Handbook.
Accessed Oct. 3, 2009 at http://www.wkkf.org/Pubs/Tools/Evaluation/
Pub770.pdf.
Weil, S., and P. Rudd. 2000. Perspectives on Outcome-Based Evaluation for Libraries
and Museums. Institute of Museum and Library Services. PDF accessed on
Oct. 3, 2009 via link at http://www.imls.gov/applicants/basics.shtm.
Wolf, R. L., and B. L. Tymitz. 1978. A Preliminary Guide for Conducting Naturalistic
Evaluation in Studying Museum Environments. Washington, DC: Smithsonian
Institution Office of Museum Programs.
