
The Policy Studies Journal, Vol. 32, No. 3, 2004

Evaluating the Effectiveness of Nonprofit Fundraising


Arthur C. Brooks

In this article, I apply and discuss two alternative approaches for evaluating nonprofit fundraising
practices. First, I introduce simple financial ratios, using a sample of New York state social welfare
nonprofits. Then, I construct Adjusted Performance Measures, which attempt to neutralize the uncon-
trollable outside forces that nonprofits face in their fundraising. I compare the two types of measures
statistically, and examine their respective strengths and weaknesses.

Nonprofit activity is receiving increasing policy attention. There are two main
reasons for this. First, many nonprofits receive high levels of government funding:
Governments at all levels provide about a third of all nonprofit revenues, which
amounts to more than $200 billion annually (Brooks, forthcoming). Second, gov-
ernments are increasingly using the nonprofit sector—usually social welfare organ-
izations—to produce or deliver public goods and services, resulting in a huge wave
of public-private contracting (Van Slyke, 2003).
Privately donated support (in terms of both time and money) for nonprofits is
also crucial to their viability. In 2000, 20% of all nonprofit sector revenues—more
than $130 billion—were donated by individuals, and the value of adult volunteer
time to nonprofits in 1998 was over $225 billion (Independent Sector, 2001). Accord-
ing to the Social Capital and Community Benchmark Survey (Roper, 2000), 81% of
U.S. households gave charitably in 2000, with an average annual gift of $1,347. Sixty-
five percent of households contributed to religious organizations (average annual
gift of $858), and 68% gave to nonreligious organizations ($502).
Twelve percent of all individual charitable giving, about $19 billion, flowed to
social welfare service providers (Brooks, 2004). It is apparent, however, that this type
of organization tends to face the most difficult circumstances in raising donated
funds. As a rule, social welfare nonprofits receive smaller average donations than
organizations in other areas, such as education or the arts, and thus must solicit
larger numbers to acquire adequate funds. One way to illustrate this point is to look
at the ratio of the average donation size to the number of donations (in millions) in
different nonprofit subsectors. Nationally, in 1995, this ratio was 103 for arts and
culture nonprofits, 39 for education, but just 14 for social welfare organizations
(Brooks, 2003). This indicates that, even holding constant the number of donors for
each type of nonprofit, the average donation to an arts firm is about seven and a


half times higher than that to a social welfare provider. Potentially, this is a fiscally
disadvantageous state of affairs for these providers, in that they must develop a rel-
atively large number of small donations. Not surprisingly, fundraising effectiveness
is a topic that receives significant attention among social welfare nonprofit practi-
tioners. In fact, for-profit marketing companies have generally made their deepest
inroads into the nonprofit world by bringing their expertise to bear in designing
effective direct mail and media campaigns for these firms.1
This fundraising situation raises an area of policy interest as well, given the
reliance of many states on these organizations for the delivery of crucial services
(Frumkin, 2002). In any contracting relationship between governments and non-
profits, performance evaluation is key to understanding and ensuring due diligence
with tax-supported funds. At present, evaluation of nonprofits tends to focus on
program effectiveness. However, given the importance and difficulty of fundrais-
ing among social welfare providers, fundraising performance is an evaluation focus
that governments and other funders will almost certainly increasingly adopt in their
contracting and granting processes: It reflects the ability of these nonprofits to gen-
erate the resources (on top of government revenues) they need to survive. It also
may show how well organizations use tax-supported income in their fundraising.
With an eye to the interests of both policymakers and nonprofit practitioners,
this article describes and compares common quantitative measures of fundraising
performance. After a brief discussion of the existing literature on simple fundrais-
ing ratios, I introduce two that nonprofits often calculate in assessing or demon-
strating their efficiency or effectiveness: the proportion of total costs devoted to core
services instead of fundraising, and the donations per fundraising dollar. Then, I
describe “adjusted performance measures” (APMs), which try to improve on simple
ratios by netting out variables beyond organizations’ control, such that their true
efforts can be observed. To demonstrate both of these techniques, as well as their
data requirements, I use 2001 IRS Form 990 data for social service providers in
upstate New York. Neither of these methods is without problems, both for techni-
cal reasons and because neither yields much useful information to nonprofit organ-
izations regarding what they should do to achieve higher fundraising effectiveness.
Thus, I close by introducing a method that is slightly more complex but that holds
promise for understanding future fundraising success.

Fundraising Effectiveness, by the Numbers

Background

The use of ratios in assessing financial success, failure, or vulnerability has a
controversial history among nonprofit practitioners and scholars. Beginning with
the work of Tuckman and Chang (1991), financial measures such as the ratio of
administrative overhead to total costs and the ratio of cash on hand to total rev-
enues have been used in predictive modeling and performance evaluation
(Greenlee & Trussel, 2000; Hager, 2001). The advantage of using these financial ratios
centers on their simplicity and transparency. They are easy to calculate, interpret,
and explain to managers and policymakers.
On the other hand, critics object to their use for three reasons. First, they
measure averages, which don’t tell us about effectiveness at the scale of operations
chosen by a nonprofit. For example, a nonprofit might have a high average return
on its fundraising expenses, yet be fundraising too much (if, for the last dollar it
spends, the return is less than a dollar). In other words, gauging true effectiveness
requires information on marginal returns (which is not usually available), not
average returns. Second, even among organizations with similar activities, these
ratios are inherently contaminated by factors beyond the firms’ control. For instance,
the home areas of nonprofits often vary greatly in terms of population, economic
circumstances, and demographics. Thus, we might be uncomfortable saying that a
nonprofit that has a low donation level per fundraising dollar, but which resides in
a poor area, is necessarily performing “worse” than an organization in a wealthy
area that sees a higher yield. Indeed, we would probably be surprised if this were
not the case, despite any underlying effort and skill on the part of the organizations.
Third, internal accounting standards for fundraising are notoriously porous.
Looking at the IRS form 990, which all 501(c)(3) nonprofits with gross annual rev-
enues over $25,000 are required to file, one gets the impression that fundraising
expenditures are a clearly delineated expense, neatly recorded on line 15 of the form.
However, the fact is that fundraising responsibilities are distributed broadly across
organizations and personnel. Most nonprofit executives devote some of their time
(and thus some of their salary) to fundraising, but the actual amount is open to
imprecision and bias.
In sum, financial ratios—especially involving fundraising expenditures—are
problematic. Whether we like them or dislike them, however, they are very commonly used, and their strengths and weaknesses need to be understood.
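The first objection above (averages versus margins) can be made concrete with a small numeric sketch. The figures and the returns function are hypothetical, chosen only to show that an organization can post a healthy average return while its last fundraising dollar loses money:

```python
import math

# Sketch: average vs. marginal fundraising returns under hypothetical
# diminishing returns, D(F) = 400 * sqrt(F) dollars raised for spend F.

def donations(F):
    return 400 * math.sqrt(F)

F = 50_000
avg_return = donations(F) / F                       # D/F: dollars raised per dollar spent
marginal_return = donations(F + 1) - donations(F)   # return on the last dollar

print(f"average return:  ${avg_return:.2f} per fundraising dollar")
print(f"marginal return: ${marginal_return:.2f} on the last dollar")
# The average ratio looks healthy (above $1) even though the last dollar of
# fundraising raises less than it costs, which D/F alone cannot reveal.
```

Here the average return is about $1.79 per fundraising dollar, but the marginal dollar raises only about $0.89, so the organization is fundraising past the efficient point even though its simple ratio looks strong.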

Simple Ratios

Imagine we are interested in judging the fundraising effectiveness of a social
welfare nonprofit. We want to know the level of unearned income Di and fundrais-
ing expenses Fi for an organization i. However, these numbers are not really useful
unless they are known in comparison to one another, relative to other organizations,
and/or given the organization’s total budget. Thus, we might start by constructing
two measures:
1. 1 - Fi/TCi, where Fi is i’s annual fundraising expenditures, and TCi is i’s total
expenses. This represents the proportion of total costs that go to core services
instead of to fundraising.
2. Di/Fi. This ratio represents the amount of unearned revenues generated by each
fundraising dollar, on average.
Obviously, each of these measures captures different phenomena and would be
useful under different circumstances.2 The first might be thought of as measuring
the resources an organization has left over after fundraising, which has implications
for its sustainability and fundraising efficiency. The second measure can be thought
of as measuring an organization’s effectiveness in targeting and retaining donors.
We wouldn’t necessarily assume a high correlation between these measures—indeed,
some might believe that a low value of 1 - Fi/TCi is desirable (e.g., Rose-Ackerman,
1982), although this view is probably unusual. The value of each measure is highest
in a comparative sense. That is, we would most likely want to seek these measures
in the context of a ranking of “peer” organizations, in order to establish benchmarks.
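As a minimal sketch of how these two ratios and a peer ranking might be computed (the organization names and dollar figures here are hypothetical, not drawn from the sample below):

```python
# Sketch: compute the two simple fundraising ratios for a small peer group
# and rank on each, as one would to build a benchmark table.
# Organization names and dollar figures are hypothetical.

orgs = [
    # (name, fundraising expenses F, total costs TC, donations D)
    ("Org A", 5_000, 500_000, 120_000),
    ("Org B", 40_000, 800_000, 300_000),
    ("Org C", 1_000, 60_000, 90_000),
]

rows = []
for name, F, TC, D in orgs:
    service_share = 1 - F / TC   # 1 - F/TC: share of costs going to core services
    donation_yield = D / F       # D/F: average donations per fundraising dollar
    rows.append((name, service_share, donation_yield))

# Rank separately on each measure (1 = best), as in a peer benchmark.
by_service = sorted(rows, key=lambda r: r[1], reverse=True)
by_yield = sorted(rows, key=lambda r: r[2], reverse=True)

for rank, (name, s, y) in enumerate(by_service, start=1):
    print(f"{rank}. {name}: 1-F/TC = {s:.4f}, D/F = {y:.1f}")
```

Note that the two rankings need not agree: an organization with a tiny fundraising budget can score near the top of 1-F/TC while sitting anywhere in the D/F ordering.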
To demonstrate the use of these common measures, consider the following small
sample of New York state social welfare nonprofits. This sample considers data from
fiscal year 2001 on the 47 organizations that fulfill the following criteria.
1. Each organization is from Albany, Syracuse, or Rochester, New York.
2. Each organization filed an Internal Revenue Service (IRS) Form 990 for 2001.
3. Each organization is principally involved in one of the following social welfare
activities: children’s services, family services, financial counseling, transporta-
tion for the needy, emergency assistance, care for institutionalized populations,
or services to promote the independence of vulnerable groups (the elderly,
immigrants, etc.).
4. Each organization has a positive fundraising budget.
The data were assembled from IRS records by the National Center for Charita-
ble Statistics at the Urban Institute.3 Table 1 summarizes F, TC, D, and the con-
structed measures 1-F/TC and D/F for the sample. It also summarizes the years since
inception for organizations (ORGNAGE) and their total revenues (TR). In these data,
D includes not only private contributions but also government subsidies.4 Note this
sample is assembled only as an example to illustrate the measures and techniques.
Its limited size and scope are intentional to facilitate ease in understanding the
results. However, a much larger sample of firms would be necessary to conduct an
actual unbiased evaluation of fundraising performance.
Table 2 lists the individual organizations with respect to 1-F/TC, D/F, and the
rankings of these indexes. Of the 47 organizations, 13 (28%) are faith-based;
however, this designation is not as simple as it seems at first, because different

Table 1. IRS Form 990 data for social welfare nonprofits

Variable Mean Standard deviation Minimum Maximum


F $71,068 $153,904 $100 $830,818
TC $4,269,558 $7,154,622 $43,540 $35,805,623
D $1,790,911 $4,493,451 $446 $30,178,984
1-F/TC 0.94 0.16 0 1
TR $4,486,429 $7,437,557 $0 $36,722,136
D/F 131 300 0.33 1,484
ORGNAGE 32 19 3 70
Note. The minimum level of D in this sample exceeds the minimum level of TR because TR includes
earned revenues, which can be negative.
Table 2. IRS data for ranking organizations by self-sufficiency and fundraising effectiveness

Organization  1-F/TC  D/F  Rank 1-F/TC  Rank D/F

Rochester Presbyterian Home 0.999934 240.71 1 5
Kripalu Yoga Fellowship of the Capital District 0.999730 4.46 2 41
Centro Civico Hispano Americano 0.999457 1,484.37 3 1
Edgerton Child Care Services 0.999456 1,124.09 4 2
Arbor Park Child Care Center 0.999198 153.22 5 9
Trinity Institution 0.999055 911.06 6 3
Women’s Building 0.998788 220.00 7 6
Baden Street Settlement 0.998544 638.83 8 4
Elmcrest Children’s Center 0.998043 33.32 9 19
NYSARC 0.997616 77.52 10 14
Onondaga Community Living 0.997540 104.48 11 12
Spaulding PRAY Residence 0.996762 14.32 12 31
YMCA of The Capital District 0.996193 42.50 13 17
Southwest Area Neighborhood Association 0.994938 174.68 14 7
Syracuse Model Neighborhood Facility 0.994674 169.65 15 8
Twelve Corners Day Care Center 0.994572 31.12 16 20
Parsons Child & Family Center 0.993894 17.17 17 26
Catholic Charities of Syracuse 0.993058 121.42 18 10
YMCA of Greater Rochester 0.993008 29.53 19 21
Lewis Street Center 0.991925 79.48 20 13
International Center of the Capital Region 0.990998 115.12 21 11
Transitional Living Services of Onondaga County 0.989131 1.36 22 45
Rochester Rehabilitation Center 0.988687 6.53 23 36
YMCA of Greater Syracuse 0.987784 12.03 24 32
Holding Our Own 0.985640 2.64 25 44
YWCA of Syracuse & Onondaga County 0.983527 16.43 26 28
Huntington Family Centers 0.982369 19.91 27 25
Lifespan of Greater Rochester 0.976589 55.26 28 15
New York Statewide Senior Action Council 0.974517 37.94 29 18
Center for the Disabled Foundation 0.972849 10.79 30 33
St. Paul’s Day Care Center 0.968687 5.43 31 39
Urban League of Rochester 0.967946 26.23 32 22
Threshold Center for Alternative Youth Services 0.963780 25.06 33 23
Syracuse Jewish Family Service 0.962368 20.34 34 24
Pirate Toy Fund 0.961978 43.43 35 16
Capital District Center for Independence 0.941191 17.15 36 27
Rescue Mission Alliance of Syracuse 0.940780 7.68 37 35
100 Groton Parkway 0.931767 1.23 38 46
Crisis Pregnancy Services 0.925454 15.40 39 30
Jewish Home of Central New York 0.921706 15.81 40 29
Colonie Senior Service Centers 0.912934 5.45 41 38
New York State Coalition for the Aging 0.874191 3.89 42 42
Women’s Foundation of Genesee Valley 0.842301 8.56 43 34
Rochester Children’s Nursery Foundation 0.830422 0.33 44 47
Francis House 0.776930 6.14 45 37
Hillside Children’s Center Foundation 0.631919 5.20 46 40
Kirkhaven Foundation 0.0 2.72 47 43
organizations have very different types of religious purpose and mission (Ebaugh,
Pipes, Chafetz, & Daniels, 2003). Given the nature of IRS Form 990 data, I can only
record the presence of a religious affiliation, data which I use in the next section.
Ideally, greater detail on the role of religion in organizations would be used.
The two ratios measure different phenomena but are fairly highly correlated:
The rank correlation between them is 0.70. The top organization in 1-F/TC, the
Rochester Presbyterian Home, is fifth in D/F; the top organization in D/F, the
Hispanic American Civic Center of Albany (Centro Civico Hispano Americano), is
third in 1-F/TC.5

Adjusted Performance Measures

Recall that there were several common objections to the use of simple ratios.
The first objection is that they ignore marginal effects, which are most relevant in
decision making. Unfortunately, this objection cannot typically be remedied with
data collected by a nonprofit or governing body. Hence, we may have to rely on
averages, albeit with their weaknesses in mind as we interpret them.
The second objection to simple ratios—that they are contaminated by influences
beyond organizations’ control—can be addressed. One method intended to do this
is the use of Adjusted Performance Measures (APMs). These are regression-based
metrics designed to simulate measures free from environmental influences that
might make raw comparisons unfair or unreliable (Stiefel, Rubenstein, & Schwartz,
2002). For the application at hand, the basic idea of APMs is as follows. The best
linear model of effectiveness we can build, given available data, might be R = a +
Xb + e, where R is the measure 1-F/TC or D/F, X represents a vector of measurable
environmental factors (mostly beyond an organization’s control) that affect
fundraising, and e is a random disturbance. The portion of R that owes to an orga-
nization’s efforts is not measured by this model—indeed, it is probably unmeasur-
able—so it is captured in the disturbance term. This disturbance, and hence the
portion of an organization’s performance not described by the environmental
factors, is estimated by e = R - R̂, where R̂ = â + X b̂ . Then, the element of e corre-
sponding to a particular organization i, ei, is treated as an uncontaminated indica-
tor of the true measure sought, relative to the other organizations in the sample.
That is, e is an APM. A positive value of ei indicates that an organization would tend
to receive a high score in terms of R, even if all environmental factors were neu-
tralized; a negative value of ei indicates the opposite. The units in e arguably do not
have an especially meaningful interpretation, so APM information is most useful in
ordinal form.
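The APM construction just described can be sketched in a few lines. The data here are simulated, with "effort" standing in for the unmeasurable organizational contribution captured by the disturbance term in the text:

```python
import numpy as np

# Sketch of the APM construction: regress the performance measure R on
# environmental factors X and treat the OLS residual e = R - R_hat as the
# adjusted measure. All data are simulated; "effort" stands in for the
# unmeasurable organizational contribution described in the text.

rng = np.random.default_rng(0)
n = 47                                   # sample size, as in the example
X = rng.normal(size=(n, 3))              # environmental factors (beyond control)
effort = rng.normal(size=n)              # unobserved true effort
R = 0.5 + X @ np.array([0.2, -0.1, 0.3]) + effort

Z = np.column_stack([np.ones(n), X])     # add intercept column
coef, *_ = np.linalg.lstsq(Z, R, rcond=None)
R_hat = Z @ coef
apm = R - R_hat                          # residual: performance net of environment

# APM units are hard to interpret, so report an ordinal ranking (best first).
ranking = np.argsort(-apm)
print("best-performing organization (by APM):", int(ranking[0]))
```

Because the regression includes an intercept, the residuals sum to zero by construction, which is why an individual APM is meaningful only relative to the other organizations in the sample.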
By way of example, I turn once again to the data used in the last section, and
calculate APMs for each of the ratios. I augment the IRS data (including ORGNAGE
and TR, to neutralize the fact that older and larger organizations may naturally have
an easier time fundraising than younger and smaller firms) with several socioeco-
nomic variables, calculated at population averages for the zip code in which each
nonprofit is located. The idea here is that a nonprofit’s “home” area will tend to
affect its fundraising ability. (Note that the assumption that each nonprofit’s zip code adequately represents its “home” is fairly arbitrary and done for illustrative purposes only. In an actual evaluation, researchers would need to define the appropriate home area with much greater precision.) These variables are the percentage of the adult population without a high school diploma (NOHS), household income (HHINC), the percent of the population living below the poverty line (POV), and population (POP). I include dummy variables denoting whether the organization is faith-based (FAITH) or is located in the cities of Albany (ALB) or Rochester (ROC). (The reference city is Syracuse.) I transform all of the continuous variables, including the ratios, into their natural logarithms, such that a normal distribution of the errors is a plausible assumption, the left-hand side of each equation is not censored, and consequently the models can be estimated using ordinary least squares (OLS). Table 3 summarizes these variables across the organizations in the sample.

Table 3. Socioeconomic data for APM calculations

Variable   Mean      Standard deviation   Minimum   Maximum
NOHS       0.25      0.13                 0.06      0.48
HHINC      $28,466   $15,128              $9,692    $70,744
POV        0.24      0.16                 0.01      0.47
POP        14,963    7,788                1,683     28,094
FAITH      0.28                           0         1
ALB        0.30                           0         1
ROC        0.40                           0         1

Table 4 summarizes the regression equations used to build each APM. The APM for each organization is calculated as exp{Ri} - exp{R̂i}. The mean estimated APM for 1-F/TC is 0.07, with a standard deviation of 0.33. The mean value of the APM for D/F is 82.27, with a standard deviation of 281.53.

Table 4. APM regression estimates

               Dependent variable: ln(1-F/TC)   Dependent variable: ln(D/F)
               Coefficient (standard error)     Coefficient (standard error)
Intercept      -8.869 (14.786)                  -44.827 (17.655)
ln(TR)         0.055 (0.091)                    0.117 (0.109)
NOHS           0.163 (0.298)                    0.066 (0.356)
ln(HHINC)      0.068 (0.058)                    0.078 (0.069)
POV            0.861 (1.43)                     4.566 (1.707)
ln(POP)        -0.021 (0.077)                   0.115 (0.092)
FAITH          -0.266 (0.463)                   -0.538 (0.553)
ALB            0.114 (0.53)                     0.054 (0.633)
ROC            0.351 (0.617)                    1.105 (0.737)
R2             0.13                             0.34
Note. Regressions are estimated using ordinary least squares.
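The log-space estimation and level-scale back-transformation, exp{Ri} - exp{R̂i}, can be sketched as follows; the data are simulated stand-ins for the logged ratio and logged environmental variables:

```python
import numpy as np

# Sketch of the log-space APM: estimate the regression with ln(R) on the
# left-hand side, then form the APM in levels as exp{R_i} - exp{R_hat_i}.
# Data are simulated stand-ins for ln(D/F) and logged covariates.

rng = np.random.default_rng(1)
n = 47
X = rng.normal(size=(n, 2))              # e.g., logged socioeconomic variables
lnR = 1.0 + X @ np.array([0.4, 0.1]) + rng.normal(scale=0.5, size=n)

Z = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Z, lnR, rcond=None)
lnR_hat = Z @ coef

apm_levels = np.exp(lnR) - np.exp(lnR_hat)   # level-scale APM, as in the text
print("mean level-scale APM:", round(float(apm_levels.mean()), 3))
```

One consequence of back-transforming is visible in the summary statistics above: although log-space residuals sum to zero, the level-scale APMs need not, so their sample mean can be nonzero.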
Table 5 summarizes the organizations with respect to the actual ratios, as well
as the calculated APMs, and their rankings. The adjusted ratios diverge from each
other much more than the simple ratios do. The rank correlation between the APMs
of 1-F/TC and D/F is 0.19, which is not statistically significantly different from zero.
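Rank correlations like these are simply Pearson correlations computed on the rank vectors; a minimal numpy sketch, using made-up scores rather than values from the sample:

```python
import numpy as np

# Sketch: a rank (Spearman) correlation is the Pearson correlation of the
# rank vectors. Scores below are made up, not values from the sample.

def ranks(x):
    # Rank from 1 (smallest) to n; assumes no ties, for simplicity.
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

measure_a = np.array([0.99, 0.95, 0.98, 0.60, 0.90])   # e.g., 1-F/TC scores
measure_b = np.array([240.7, 4.5, 1484.4, 1.2, 33.3])  # e.g., D/F scores

rho = np.corrcoef(ranks(measure_a), ranks(measure_b))[0, 1]
print(f"rank correlation: {rho:.2f}")
```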
Table 5. IRS data for ranking organizations by self-sufficiency and fundraising effectiveness

Organization  1-F/TC (APM) [APM rank]  D/F (APM) [APM rank]

Pirate Toy Fund 0.962 (0.739) [1] 43.4 (39.8) [13]
St. Paul’s Day Care Center 0.969 (0.619) [2] 5.4 (0.6) [19]
Women’s Foundation of Genesee Valley 0.842 (0.52) [3] 8.6 (-0.6) [24]
Rochester Children’s Nursery Foundation 0.83 (0.519) [4] 0.3 (-6.6) [34]
Twelve Corners Day Care Center 0.995 (0.482) [5] 31.1 (21.1) [15]
100 Groton Parkway 0.932 (0.474) [6] 1.2 (-8) [35]
Rochester Rehabilitation Center 0.989 (0.473) [7] 6.5 (-6.6) [33]
Lifespan of Greater Rochester 0.977 (0.453) [8] 55.3 (21.1) [16]
Southwest Area Neighborhood Association 0.995 (0.381) [9] 174.7 (134) [7]
Colonie Senior Service Centers 0.913 (0.354) [10] 5.4 (-0.4) [23]
Crisis Pregnancy Services 0.925 (0.313) [11] 15.4 (-3) [28]
Women’s Building 0.999 (0.309) [12] 220 (211.1) [5]
Hillside Children’s Center Foundation 0.632 (0.277) [13] 5.2 (-5.3) [31]
Threshold Center for Alternative Youth Services 0.964 (0.264) [14] 25.1 (0.7) [18]
New York Statewide Senior Action Council 0.975 (0.259) [15] 37.9 (0) [21]
Center for the Disabled Foundation 0.973 (0.256) [16] 10.8 (-0.4) [22]
New York State Coalition for the Aging 0.874 (0.244) [17] 3.9 (-29.9) [42]
Spaulding PRAY Residence 0.997 (0.234) [18] 14.3 (-2.1) [27]
Transitional Living Services of Onondaga County 0.989 (0.228) [19] 1.4 (-16.7) [39]
Arbor Park Child Care Center 0.999 (0.193) [20] 153.2 (111.1) [9]
YMCA of Greater Syracuse 0.988 (0.141) [21] 12 (-6) [32]
Edgerton Child Care Services 0.999 (0.111) [22] 1124.1 (1020.6) [2]
YWCA of Syracuse & Onondaga County 0.984 (0.11) [23] 16.4 (0.3) [20]
Elmcrest Children’s Center 0.998 (0.096) [24] 33.3 (2.5) [17]
Rescue Mission Alliance of Syracuse 0.941 (0.076) [25] 7.7 (-12) [38]
Rochester Presbyterian Home 1 (0.025) [26] 240.7 (197) [6]
Onondaga Community Living 0.998 (-0.029) [27] 104.5 (91.3) [11]
Catholic Charities of Syracuse 0.993 (-0.05) [28] 121.4 (97.4) [10]
Parsons Child & Family Center 0.994 (-0.07) [29] 17.2 (-0.8) [25]
Syracuse Model Neighborhood Facility 0.995 (-0.088) [30] 169.6 (131.5) [8]
Jewish Home Of Central New York 0.922 (-0.1) [31] 15.8 (-11.1) [37]
YMCA of Greater Rochester 0.993 (-0.104) [32] 29.5 (-8.4) [36]
Trinity Institution 0.999 (-0.121) [33] 911.1 (784) [3]
Holding Our Own 0.986 (-0.126) [34] 2.6 (-42.4) [44]
Syracuse Jewish Family Service 0.962 (-0.137) [35] 20.3 (-4.3) [30]
Francis House 0.777 (-0.163) [36] 6.1 (-3) [29]
Huntington Family Centers 0.982 (-0.178) [37] 19.9 (-17.4) [40]
Kirkhaven Foundation 0 (-0.256) [38] 2.7 (-1.1) [26]
NYSARC 0.998 (-0.258) [39] 77.5 (32.3) [14]
Centro Civico Hispano Americano 0.999 (-0.269) [40] 1484.4 (1376.7) [1]
Capital District Center for Independence 0.941 (-0.304) [41] 17.2 (-38.8) [43]
Kripalu Yoga Fellowship of the Capital District 1 (-0.305) [42] 4.5 (-23.5) [41]
International Center of the Capital Region 0.991 (-0.348) [43] 115.1 (61.1) [12]
Urban League of Rochester 0.968 (-0.388) [44] 26.2 (-229) [47]
Lewis Street Center 0.992 (-0.406) [45] 79.5 (-159.6)[45]
Baden Street Settlement-Rochester 0.999 (-0.46) [46] 638.8 (377.3) [4]
YMCA of the Capital District 0.996 (-0.871) [47] 42.5 (-184.1) [46]
Table 6. Rank correlations between ratios and APMs

1-F/TC 1-F/TC (APM) D/F D/F (APM)


1-F/TC 1
1-F/TC (APM) -0.31** 1
D/F 0.70*** -0.28* 1
D/F (APM) 0.51*** 0.19 0.71*** 1
***denotes significance at the 0.001 level, **denotes significance at the
0.05 level, and *denotes significance at the 0.10 level.

The simple ratio D/F is highly correlated with its APM, whereas 1-F/TC is not.
Table 6 illustrates this by way of the rank correlations between the four measures.
The correlation between 1-F/ TC and its APM is -0.31; the correlation between D/F
and its APM is 0.71. Both of these values are statistically significant. In other words,
the simple ratio is fairly accurate in representing D/F in spite of demographic influ-
ences. In contrast, the simple measure of 1-F/TC produces a ranking that is nega-
tively associated with its APM. One explanation might be that nonprofits in
unfavorable fundraising circumstances may select revenue generation mecha-
nisms—such as state contracts—that do not rely on a high level of F, and hence a
neutralization of these circumstances pushes their predicted fundraising proportion
in the other direction. If this is the case, it weakens the utility of APM rankings to
judge true fundraising effectiveness and suggests that samples of nonprofits for
comparison need to be more homogeneous than that used here.
If we trust the underlying methodology of APMs, these measures are important
in showing how much of the organizations’ effectiveness is due to the environment,
as opposed to true effort. In many cases (such as the Hispanic American Civic
Center), the APM doesn’t materially change the picture. However, in other cases,
the difference is dramatic. The Kripalu Yoga Fellowship in Albany provides a good
example of this. In “raw” service spending, 1-F/TC, it ranks near the top of the
sample, at second. However, among the people living in its neighborhood, 14% have
no high school diploma and only 4% live below the poverty level. Given these
advantages, a score near the top half of the sample might be assumed. But when
neutralizing these effects with the APM, the adjusted ratio places the organization
near the bottom, at 42nd.
APMs are not without problems. Most notably, Brooks (2000) shows how relying
on an estimate of a variable that is intentionally—although unavoidably—left out
of a regression model can lead to distortions from omitted variable bias. This would
be especially true in cases in which the sample is small, as in the example here.
However, a number of performance evaluation scholars (e.g., Rubenstein, Schwartz,
& Stiefel, 2003) argue forcefully that the costs in statistical distortion in APMs are
preferable to the costs in measuring a variable contaminated by environmental
forces outside organizations’ control. Further methodological development and
applications are needed to achieve a reliable consensus on the utility of APMs.
Predicting Fundraising Success

The attraction of simple ratios and their more sophisticated APM cousins is that
they allow comparative fundraising evaluation on the basis of existing data. Hence,
government agencies and other potential funders can judge the efforts of social
service nonprofits in an analytic way. Unfortunately, simple ratios and APMs fre-
quently give little guidance about content: Funders and nonprofits themselves
cannot judge which fundraising levels and approaches will be most appropriate,
unless, across a sample, generalizable patterns of successful and unsuccessful
fundraising appear among the organizations ranked.
Looking into fundraising content suggests slightly more complex evaluation
approaches than those using simple ratios or APMs. One possibility is the use of
natural experiments, which have been employed extensively in for-profit market-
ing for years, and which hold the potential to enhance substantially the effective-
ness of actual fundraising strategies—especially in the area of direct mail appeals.
They require a fundraising “control” technique, as well as an enhancement to that
control. Then, groups of organizations can be exposed to each approach, and the
results compared.6
For example, imagine that social welfare nonprofits have a standard practice of
sending out direct mail costing 25 cents per piece to their list of potential and
current contributors. They might benefit from increases in mailer “quality,” such as
using faster delivery; a government body or foundation would like to mandate (or
sponsor) an experiment to investigate such an increase. Imagine that this would
double the cost per unit. The objective is to measure the net benefits and choose an
alternative.
The n nonprofits involved are separated into equal (and randomly selected)
groups of n/2 organizations; the first group sends out 25-cent mailers, while the
second group sends out 50-cent mailers. After one year (or other relevant period to
evaluate success), we can judge the success of the alternative approaches by first
calculating Dij/Fij (or the APM of this ratio) for each organization i in group j = 1,2
(25-cent or 50-cent mailers), and then comparing the appeals with a test of means
across the two groups, under the null hypothesis that

    Σ(i=1 to n/2) Di1/Fi1 = Σ(i=1 to n/2) Di2/Fi2.

The “best” alternative will emerge from the appeal with the (statistically
significant) highest mean value.7
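The group comparison can be sketched as a two-sample test on mean D/F. The yields below are simulated, and the Welch t statistic is one reasonable choice among several; the article does not prescribe a particular test:

```python
import numpy as np

# Sketch: compare mean D/F between the 25-cent ("control") and 50-cent
# ("enhanced") mailer groups. Yields are simulated; the Welch two-sample
# t statistic is one reasonable test choice, not prescribed by the text.

rng = np.random.default_rng(3)
group1 = rng.normal(loc=12.0, scale=4.0, size=24)   # D/F under 25-cent mailers
group2 = rng.normal(loc=15.0, scale=4.0, size=24)   # D/F under 50-cent mailers

def welch_t(a, b):
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t = welch_t(group1, group2)
print(f"mean D/F: {group1.mean():.1f} vs {group2.mean():.1f}, t = {t:.2f}")
```

A large positive (or negative) t relative to the appropriate critical value would lead us to reject the null of equal mean yields and favor the higher-yield mailer.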
The strength of this experimental approach is that, instead of evaluating past
effectiveness, it evaluates alternative approaches. Not only does this give nonprof-
its information on what will be most effective, but it also suggests to funders what
type and level of fundraising they might require of their future grantees.

Conclusion

In an environment in which governments seek out private nonprofit partners
in the production and delivery of public services, financial due diligence by the
partners is extremely important. Frequently, scholars talk about efficiency and effec-
tiveness in the production and delivery processes, but rarely in terms of how the
nonprofits raise their nongovernmental funds. Policymakers may fear that non-
profits that deal with governments will become ineffective in raising contributions,
as their dependence on public sector subsidies pushes their organizational resources
toward the acquisition and maintenance of these subsidies.8
In this article, I have weighed the advantages and disadvantages of two alternative approaches to evaluating fundraising practices and donation levels. First, I
illustrated how simple ratios (of service spending to total expenses and donations
to fundraising expenditures) can be employed, using a sample of upstate New York
social welfare nonprofits as an example. Then, I introduced a modified form of these
ratios, called Adjusted Performance Measures (APMs), which attempted to neu-
tralize the uncontrollable environmental forces a firm faces in its fundraising.
Neither approach is without practical and technical difficulties, as I have dis-
cussed. In addition, neither gives much information about what nonprofits should
do, only what they have done. Thus, I have also briefly outlined the use of natural
experiments in developing fundraising standards.
My objective in this article has been to discuss ratios and APMs methodologi-
cally and provide a clear and understandable example, as opposed to introducing
a comprehensive list of useful measures or executing a full-scale evaluation. In point
of fact, at present, those interested in evaluating nonprofit fundraising performance
and efficiency do not have an established array of ratios, APMs, or other measures
to watch. A productive avenue for future research, therefore, might be to turn the
insights in this article into a fuller set of measures for nonprofit managers and public
policymakers tasked with evaluating various aspects of nonprofit effectiveness.

Arthur C. Brooks is an associate professor of public administration and the director of the Nonprofit Studies Program at Syracuse University’s Maxwell School of
Citizenship and Public Affairs. His research focuses on philanthropy, nonprofit
economics, and cultural policy.

Notes

For helpful suggestions on this paper, I am grateful to George von Furstenberg. This paper was prepared
for the “Evaluation Methods and Practices Appropriate for Faith-Based and Other Providers of Social
Service” conference, October 2003, Indiana University.
1. See, for example, The Domain Group’s website, www.thedomaingroup.com.
2. These measures are not intended to be comprehensive, or necessarily even the best measures for a
given purpose; rather, I am using them to demonstrate the strengths and weaknesses of ratios per se
as measures for performance evaluation.
3. The data are available at http://www.nccs.urban.org/.
4. This may lead to problems of interpretation, if government funding is unconnected to F, as I will show
in the next section.
5. The Hispanic American Civic Center’s high rankings on these measures are driven mostly by a low
fundraising budget. The organization’s relatively high value of D may be a function of government
grants, suggesting that conflating private and public subsidies can lead to results that are hard to inter-
pret. Actual evaluations should try to avoid this problem.
6. Group assignment could be random or follow a system of stratification by demographics or organizational characteristics.
7. More sophisticated regression-based approaches are also possible. For examples, see Meyer (1995).
8. Note that questions about this specific concern require data that, unlike my example here, disaggre-
gate public and private contributions.

References

Brooks, A. C. (2000). The use and misuse of adjusted performance measures. Journal of Policy Analysis and
Management, 19(2), 323–328.
Brooks, A. C. (2003). Do government subsidies to nonprofits crowd out donations or donors? Public
Finance Review, 31(2), 166–179.
Brooks, A. C. (2004). The effects of public policy on private charity. Administration & Society, 36(2),
166–185.
Ebaugh, H. R., Pipes, P. F., Chafetz, J. S., & Daniels, M. (2003). Where’s the religion? Distinguishing faith-
based from secular social service agencies. Journal for the Scientific Study of Religion, 42(3), 411–426.
Frumkin, P. (2002). On being nonprofit: A conceptual and policy primer. Cambridge, MA: Harvard Univer-
sity Press.
Greenlee, J. S., & Trussel, J. (2000). Predicting the financial vulnerability of nonprofit organizations. Non-
profit Management and Leadership, 11, 199–210.
Hager, M. A. (2001). Financial vulnerability among arts organizations: A test of the Tuckman-Chang meas-
ures. Nonprofit and Voluntary Sector Quarterly, 30(2), 376–392.
Independent Sector. (2001). The new nonprofit almanac IN BRIEF: Facts and figures on the independent sector.
Washington DC: Author.
Meyer, B. D. (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statis-
tics, 13(2), 151–160.
Roper Center for Public Opinion Research. (2000). Social capital community benchmark survey. Retrieved October 1, 2003, from http://www.roper.com
Rose-Ackerman, S. (1982). Charitable giving and excessive fundraising. The Quarterly Journal of Econom-
ics, 97(2), 193–212.
Rubenstein, R., Schwartz, A. E., & Stiefel, L. (2003). Better than raw: A guide to measuring organizational
performance with adjusted performance measures. Public Administration Review, 63(5), 607–615.
Stiefel, L., Rubenstein, R., & Schwartz, A. E. (1999). Using adjusted performance measures to evaluate resource use. Public Budgeting and Finance, 19(1), 67–87.
Tuckman, H. P., & Chang, C. F. (1991). A methodology for measuring the vulnerability of charitable
nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 20, 445–460.
Van Slyke, D. M. (2003). The myth of privatization in contracting for social services. Public Administra-
tion Review, 63(3), 296–315.