Journal of Clinical Child Psychology, 2000, Vol. 29, No. 2, 257–265
Copyright © 2000 by Lawrence Erlbaum Associates, Inc.

Child Sexual Abuse Prevention Programs: A Meta-Analysis

M. Katherine Davis and Christine A. Gidycz
Department of Psychology, Ohio University

Requests for reprints should be sent to Christine A. Gidycz, Department of Psychology, Ohio University, 231 Porter Hall, Athens, OH 45701. E-mail: Gidycz@oak.cats.ohiou.edu

Conducted a meta-analytic evaluation of the effectiveness of school-based child abuse prevention programs. Literature searches identified 27 studies meeting inclusion criteria for use in this meta-analysis. The average effect size for all programs studied was 1.07, indicating that children who participated in prevention programs performed 1.07 SD higher than control group children on the outcome measures used in the studies. Analysis of moderator variables revealed significant effects for age, number of sessions, participant involvement, type of outcome measure, and use of behavioral skills training. Most important, programs presented over 4 or more sessions that allowed children to become physically involved produced the highest effect sizes. Although most often used only with younger children, findings suggest that active, long-term programs may be more effective for children of all ages.

Child sexual abuse has received much attention in the past two decades as researchers have begun to document its prevalence. Given a consistent definition of sexual abuse as ranging from fondling to intercourse and perpetrated by an individual at least 5 years older, typical estimates state that 20% of children will be sexually assaulted before the age of 18 (Sandberg, Lynn, & Green, 1994). In response, several programs aimed at teaching children to avoid and report abuse have been implemented. A survey of 400 school districts in the United States found that over 85% had offered a prevention program in the past year, and 64% of these districts mandated programs (Daro, 1994). Telephone interviews of a representative sample of children in the United States revealed that 67% had been exposed to a prevention program (Finkelhor & Dziuba-Leatherman, 1995). The programs to which these children had been exposed were almost exclusively school based and typically took place in late elementary school. Given the pervasiveness of prevention programs and the importance of their mission, it is essential that they be systematically evaluated.

Recent evaluations examining several child abuse prevention programs have suggested that children both enjoy them and show significant improvements in abuse-related knowledge and skills following participation (e.g., Binder & McNiel, 1987). Knowledge content generally involves information on body ownership and the touch continuum, and targeted skills include recognizing abusive situations, saying no, and telling a trusted adult about the abuse (e.g., Saslawsky & Wurtele, 1986; Tutty, 1992). Knowledge and skill gains are typically maintained at follow-up 2 to 12 months later (e.g., Fryer, Kraizer, & Miyoshi, 1987b; Saslawsky & Wurtele, 1986).

Although some basic concepts concerning prevention strategies are common to all school-based programs, the programs vary widely on a number of dimensions, including format, the age of the target audience, the occupation of the program leader, and the length of the program. Although it is hypothesized that many of these variables could influence program effectiveness, researchers are only beginning to investigate the impact of these key components.

One variable that has been examined is program format. Wurtele, Marrs, and Miller-Perrin (1987) found that behavioral skills training, including active skills rehearsal, shaping, and reinforcement, produced significantly higher knowledge and skill gains than did simple modeling of skills by a presenter. Other studies have corroborated that more active programs that include role plays and participant rehearsal are more effective than less active modes of presentation (e.g., films or lectures) in improving prevention knowledge and skills of children (Blumberg, Chadwick, Fogarty, Speth, & Chadwick, 1991; Wurtele, Saslawsky, Miller, Marrs, & Britcher, 1986).

Age is a second component examined by researchers. It has been documented consistently that older children are better able to learn prevention concepts than younger children (e.g., Blumberg et al., 1991; Conte, Rosen, Saperstein, & Shermack, 1985), but there appears to be no difference in the medium that works best; that is, children of all ages appear to learn best from programs that include active behavioral skills components.

Although data are beginning to emerge regarding the important aspects of prevention programs, narrative reviews in the area of child abuse prevention are limited in their ability to integrate the literature.


It remains difficult to determine how much benefit children are receiving from abuse prevention programs or just which program components are critical. Variables such as the occupation of the program leaders, the length of the program, and the type of design utilized in the evaluation study are hypothesized to affect program effectiveness (Finkelhor, Asdigian, & Dziuba-Leatherman, 1995) but have not been assessed in any systematic manner. Previous narrative reviews have examined only a limited number of variables that might affect program outcomes.

In contrast to narrative reviews, meta-analysis allows a reviewer to examine a body of literature quantitatively and to identify existing relations even when the outcomes of single studies differ in magnitude, significance levels, or even the direction of effects. Two previous meta-analyses of child sexual abuse prevention programs have been published. Berrick and Barth (1992) examined overall program effectiveness and assessed whether participant age and program evaluation source (internal vs. external) were related to effectiveness. This analysis found a mean effect size of .90, indicating that knowledge levels of children increased by close to 1 SD following participation in a prevention program. No significant differences were found among children of different ages or between studies conducted by internal versus external evaluators. This meta-analysis provided some encouraging data regarding the effectiveness of prevention programs in increasing children's knowledge. The analysis was limited by its small sample size, however, which may have lacked the power to identify existing moderating effects. Many potentially important moderating variables were not investigated.

Rispens, Aleman, and Goudena (1997) conducted a second meta-analysis and found an overall postintervention effect size of .71. These authors did examine several additional variables for moderating effects, but they identified and analyzed only 60% of the available empirical studies of school-based child abuse prevention programs. Failure to include all available literature in a meta-analysis may result in biased findings; thus it is important to replicate the findings of these researchers using every available study.

This study is a meta-analysis assessing the overall effectiveness of child sexual abuse prevention programs and the relation between program variables and knowledge gains. The goals of this meta-analysis included: (a) to find an overall effect size, (b) to determine what effect methodological problems have on effect size, (c) to establish whether publication bias exists in the literature, and (d) to explore potential moderator variables. Based on the data available from previous narrative reviews, it was hypothesized that older children would learn more prevention concepts than younger children and that programs involving active participation would be most effective for children of all ages.

Method

Sample

Searches of computer databases, article references, and relevant journals were conducted in an attempt to locate all research evaluating child abuse prevention programs. Computer searches included the following databases: PsycInfo, Medline, ERIC, HealthStar, and Dissertation Abstracts. The reference sections of identified articles were then examined for studies missed in the computer databases. Finally, the most recent volumes of relevant journals (e.g., Child Abuse & Neglect) were searched manually.

To be included in the meta-analysis, studies were required to: (a) evaluate a school-based prevention program focusing on sexual abuse, (b) examine children aged 3 to 13, (c) rate effectiveness using a knowledge-based or behavioral outcome measure, (d) include a control group comparison, and (e) be presented in English. All studies that met the inclusion criteria were examined, regardless of methodological quality. With these inclusion criteria, 27 studies with a total of 8,115 participants were located. Twenty-one of the identified studies were published journal articles, three were unpublished doctoral dissertations, two were unpublished internal program evaluations, and one was an unpublished master's thesis.

Data Coding

The studies selected for use in the analysis were coded according to several variables: (a) mean age of children in the study; (b) mode of presentation (e.g., written material, puppets, role plays); (c) level of child participation (physical participation, verbal participation, no participation); (d) qualifications of instructor; (e) sex of instructor; (f) length of program in hours; (g) number of sessions into which the program was divided; (h) time period between program and posttest; and (i) type of outcome measure used. Methodological quality was operationalized by type of control group, random assignment of participants, research design, interviewer status (whether the individual administering outcome measures was informed or uninformed concerning participants' group membership), and number of items on the outcome measure. Interrater reliability was good (correlations and κ values > .85) for all but one of the variables examined. The one variable with an insufficient interrater agreement level (κ = .664) had only three discrepant codings. Those studies were reexamined, and a coding decision was reached.


Examination of Effect Size and Moderator Variables

Effect sizes and their variances were calculated as d values for posttest comparisons of control and treatment groups (Hunter & Schmidt, 1990). This statistic is computed as the difference between the two group means divided by the pooled within-group standard deviation. The population treatment effect (D) was estimated by averaging individual effect sizes (ds) weighted according to the sample size of each study. One effect size for each individual construct represented in each study (verified by tests of homogeneity; Hedges & Olkin, 1985) was used in these calculations.

Moderator variables were examined by breaking the studies into subsets according to the proposed variable and conducting a meta-analysis on each subset separately. Once the subset effect sizes were calculated for each variable, homogeneity tests were run to determine whether the search for moderators should continue. For linear and continuous variables, a significant correlation (two-tailed) with the d statistic signified a moderating effect. Categorical moderators were examined using multiple regression (for a more complete explanation of the statistical methodology, see Davis, 1997).
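For concreteness, the computation just described can be written out as follows. This is a sketch in our own notation, using the standard Cohen's d form and a total-sample-size weight as the text describes; the exact Hunter and Schmidt (1990) formulas the authors applied may differ in minor details:

\[
d_i = \frac{\bar{X}_{T,i} - \bar{X}_{C,i}}{s_{\mathrm{pooled},i}},
\qquad
s_{\mathrm{pooled},i} = \sqrt{\frac{(n_{T,i}-1)\,s_{T,i}^{2} + (n_{C,i}-1)\,s_{C,i}^{2}}{n_{T,i}+n_{C,i}-2}},
\qquad
D = \frac{\sum_i N_i\, d_i}{\sum_i N_i},
\]

where \(\bar{X}_{T,i}\) and \(\bar{X}_{C,i}\) are the treatment and control group means in study i, \(s_{T,i}\) and \(s_{C,i}\) are the corresponding standard deviations, and \(N_i = n_{T,i} + n_{C,i}\) is the study's total sample size.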
Examination of Publication Bias

Results of unpublished studies (e.g., unpublished theses and dissertations) were compared with the results of published articles to determine whether a publication bias existed for this body of literature. In addition, a file-drawer analysis (Hunter & Schmidt, 1990) was conducted. These analyses are based on the hypothesis that studies with nonsignificant findings may not be published; they may instead end up filed or thrown away. The file-drawer analysis measured the number of missing studies averaging null results that would be necessary to lower the mean effect size to .50, a moderate effect size, or to .20, a small effect size (Cohen, 1988).
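The logic behind this count can be illustrated with a simplified, unweighted version (our sketch only; the Hunter and Schmidt procedure the authors used weights studies and may yield different counts): if the k located effect sizes average \(\bar{d}\) and \(k_0\) missing studies average zero, the combined mean falls to a criterion value \(d_c\) when

\[
\frac{k\,\bar{d}}{k + k_0} = d_c
\quad\Longrightarrow\quad
k_0 = \frac{k\,(\bar{d} - d_c)}{d_c}.
\]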
Results

Summary Statistics

From a total of 8,115 participants, 73 separate effect sizes were calculated (a listing of studies, study characteristics, and effect sizes is available in Davis, 1997). One interview outcome measure assessing assertiveness resulted in an effect size of zero (Downer, 1984), but the remaining 72 effect sizes were all positive, ranging from 0.10 to 2.93. The average effect size (D), weighted for sample size, was 1.07, with a variance of 0.76 after adjusting for sampling error. By Cohen's (1988) classification, 1.07 would be considered a large effect.

One study (Nemerofsky, Carran, & Rosenberg, 1994) was extremely influential in the results of this meta-analysis due to its unusually large effect size (d = 2.93) and its very large sample size (N = 1,339). The sample size caused the results of this study to be weighted quite heavily. This, in combination with the magnitude of its effect size, allowed the study to have a large impact on the overall results. With the study by Nemerofsky et al. (1994) removed from the analysis, the average effect size dropped to 0.81 (variance = 0.26), which is still considered a large effect (Cohen, 1988). Because of the undue influence of this one study, analyses are presented both with and without its inclusion where its removal altered the findings substantially.
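To see why this one study dominates, note that under sample-size weighting its share of the total weight is roughly its share of the participants (a rough illustration; the exact weights depend on how multiple effect sizes within a study were combined):

\[
\frac{1{,}339}{8{,}115} \approx 0.165,
\]

so Nemerofsky et al. (1994) contributes about one sixth of the total weight while reporting an effect size (d = 2.93) nearly three times the overall mean.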
unrelated alternate program that would control for
Publication Bias

The observed correlation between effect size and publication status was .342 (p = .003). A subset examination of the publication status variable indicated that studies that were unpublished yielded a higher mean effect size (D = 1.252) than those studies that were published (D = 1.005). To further examine publication bias, a file-drawer analysis was completed. This analysis revealed that 56 studies with results supporting the null hypothesis (no significant program effect) would be necessary to lower the calculated effect size from large to moderate (.50). To lower the effect size to small (.20), 249 studies with null findings would have to exist.

Analysis of Methodological Quality

The relation between methodological quality and effect size was assessed using multiple regression. The results suggested that type of control group, research design, interviewer status, random assignment, and number of items on the outcome measure were together significant predictors of effect size, accounting for 43.5% of the total variance in d, F(5, 37) = 5.708, p < .0005. In the presence of the other variables entered into the regression, interviewer status (informed vs. uninformed concerning participants' group membership) accounted for a significant proportion of the variance in d, F(1, 37) = 11.203, p < .005. Subset analyses (Table 1) found higher mean effect sizes when studies (a) used interviewers who were informed about the status of participants, (b) did not use random assignment of participants, and (c) used a wait-list control instead of an unrelated alternate program that would control for amount of experimenter contact. These findings indicate that studies with poorer methodology found higher mean effect sizes. The exceptions to this trend were the design variable and the number of items on the outcome measure.


Studies that utilized pretests to examine initial control group and experimental group equivalence were considered superior methodologically; these studies found higher effect sizes than those using only posttests. Also, studies with more items on outcome measures found higher effect sizes.

Table 1. Average Effect Sizes (D) for Subgroups

Subgroup                                  D (a)    D (b)
Interviewer Status = Uninformed           0.655    0.655
Interviewer Status = Informed             2.020    0.967
Random = Yes                              0.725    0.725
Random = No                               1.131    0.826
Item Number < 20                          0.862    0.862
Item Number > 20                          1.455    0.619
Type of Control = Alternate Program       0.727    0.727
Type of Control = Wait List               1.198    0.825
Design = Pre- and Posttests               1.249    0.886
Design = Posttest Only                    0.601    0.601
Published                                 1.005    0.632
Not Published                             1.252    1.252
Number of Sessions = 1                    0.598    0.598
Number of Sessions = 2 or 3               0.663    0.663
Number of Sessions > 3                    2.153    1.536
Active Physical Participation             1.202    1.202
Active Verbal Participation               1.226    0.657
No Participation                          0.453    0.453
Mean Age = 3 to 5                         2.143    0.937
Mean Age = 5.01 to 8                      1.243    1.243
Mean Age = 8.01 to 12                     0.770    0.770
Behavioral Outcome Measure                1.191    1.191
Questionnaire Outcome Measure             1.138    0.838
Interview Outcome Measure                 0.617    0.617
Did Not Use Written Material              0.878    0.878
Used Written Material                     1.296    0.709
Did Not Use Role Play or Drama            1.226    0.688
Used Role Play or Drama                   0.910    0.910
Did Not Use Discussion or Lecture         0.718    0.718
Used Discussion or Lecture                1.094    0.821
Did Not Use Video or Film                 1.183    0.704
Used Video or Film                        0.923    0.923
Did Not Use Puppet Show                   1.084    0.805
Used Puppet Show                          0.898    0.898
Did Not Use Behavioral Training           1.024    0.663
Used Behavioral Training                  1.210    1.210
Did Not Use Dolls                         1.081    0.817
Used Dolls                                0.741    0.741

Note: (a) effect sizes with Nemerofsky et al. (1994) included; (b) effect sizes with Nemerofsky et al. (1994) removed.

Analysis of Potential Moderator Variables

Mean age, level of participation, and number of sessions all showed significant correlations with the d value (p < .015). For this reason, the subset effect sizes for all of these variables were examined (see Table 1). Mean age was divided into three groups for the purposes of subset analysis. The weighted mean effect size was highest for the youngest group (M age = 3 to 5 years, D = 2.143), followed by age group two (M age = 5.1 to 8 years, D = 1.243), and age group three (M age > 8 years, D = .770). Removing the study by Nemerofsky et al. (1994) lowered the effect size for the youngest age group to .937, and the other age groups maintained the same effect sizes.

Child participation was analyzed at three different levels: physical participation (e.g., role plays), verbal participation (e.g., discussion), and no participation by the children (e.g., film or lecture). The mean weighted effect size was similar for programs with some type of child participation (physical, D = 1.202; verbal, D = 1.226) but much lower for programs with no participation (D = .453). Without the study by Nemerofsky et al. (1994), the effect size for verbal participation dropped to .657, resulting in a large difference between physical and verbal participation.

The variable for the number of sessions involved in the presentation of a prevention program was divided into three subsets: programs administered in one session, those taught in two to three sessions, and those presented in more than three sessions. A linear trend was found in the mean effect size, with D increasing along with the number of sessions from .598 in the one-session programs to 2.153 in programs with more than three sessions. The same trend remained with the removal of the Nemerofsky et al. (1994) study, although the effect size for programs with more than three sessions lowered to 1.536. To ensure that the linear trend was not due to increased time spent learning prevention, a partial correlation between D and number of sessions, controlling for hours spent in the program, was run. Even with program length controlled for, session number was significantly correlated with D (p = .05).
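The partial correlation used in this check has the standard first-order form (our notation, not the authors': D = effect size, S = number of sessions, H = hours of program time):

\[
r_{DS\cdot H} = \frac{r_{DS} - r_{DH}\, r_{SH}}{\sqrt{\left(1 - r_{DH}^{2}\right)\left(1 - r_{SH}^{2}\right)}},
\]

which removes from the association between number of sessions and effect size the portion that can be accounted for by total program hours.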
To measure the significance of type of outcome measure, mode of presentation, and instructor qualifications, regression analyses were used. The first regression examined the type of outcome measure as a predictor of effect size. The three dichotomous variables representing type of outcome measure were use of a questionnaire, use of a vignette interview, and use of a behavioral measure. These variables were significant predictors of effect size when entered into a regression in one block, accounting for 16.2% of the variance in d, F(3, 69) = 4.458, p = .0064. Use of a behavioral measure was the only variable found to be a significant predictor of effect size with all three variables in the equation, F(1, 69) = 13.337, p < .001. Subset analyses revealed that assessments using behavioral observation (in vivo simulation) or questionnaires detected greater effects than those using vignette interviews. Studies with in vivo simulations had a weighted mean effect size of 1.191, those using a questionnaire found a mean effect size of 1.138, and those with interviews had a mean effect size of .617. Removing the Nemerofsky et al. (1994) study from this regression did not significantly alter the findings. Without this extreme case, the effect size for studies using a questionnaire dropped to .838.


The seven dichotomous variables representing mode of presentation were use of written material, use of role plays or drama, use of discussion or lecture, use of a video or film, use of puppet shows, use of behavioral skills training, and use of dolls or stuffed animals. When all seven dichotomous variables were entered into a regression in one block, they did not significantly predict effect size, F(7, 65) = 1.245, ns. In the presence of the other variables, however, the use of puppet shows was a significant predictor, F(1, 65) = 6.41, p < .05. Studies using puppet shows had a significantly lower mean effect size (D = .898) than those not using puppet shows (D = 1.084). This finding reversed when the Nemerofsky et al. (1994) study was removed. Without this study, mode of presentation did approach significance as a predictor of effect size, F(7, 64) = 2.101, p = .056. The use of puppet shows remained a significant predictor, F(1, 64) = 11.127, p < .01, but in this analysis studies using puppet shows had a significantly higher mean effect size (D = .898) than those not using puppet shows (D = .805). The use of behavioral skills training was also found to be a significant predictor in this analysis, F(1, 64) = 4.133, p < .05; studies with behavioral skills training had a mean effect size twice as high as those without such training (D = 1.210 and D = .663, respectively).

Eight dichotomous variables represented the use of an instructor with one of the following qualifications: teacher, parent, mental health professional, college student, high school student, volunteer, police officer, or researcher. When all eight of these variables were entered into a regression in one block, they did not significantly predict effect size, F(7, 43) = 1.558, ns. Furthermore, no single instructor qualification was significantly correlated with effect size. The same results were found with removal of the Nemerofsky et al. (1994) study. This variable was, therefore, not considered as a moderator.

Correlations Among Moderator Variables

High correlations were found among most of the moderators identified; significant findings are described here. Because of the number of correlations run, a conservative significance level was used (p ≤ .005). As the mean age of the children involved in the study increased, programs used fewer sessions to teach the material (r = –.399) and used lower levels of participation (r = .421). As the number of sessions used in a program increased, the evaluation study was more likely to be unpublished (r = .630), and the program was more likely to use active participation (r = –.442). Unpublished studies were less likely to use random assignment (r = .386) and uninformed interviewers (r = .753) or to have the control group participate in an alternate group activity (r = .323).

To better understand the relations between age and the other predictors, correlations were run using pairwise comparisons of the age groups. Correlations using only the younger two age groups indicated that children aged 5 to 8 were more likely than children under age 5 to be administered a behavioral outcome measure (r = .453). Correlations comparing children under 5 to children over 8 indicated that younger children were more likely to be allowed to actively participate in their programs (r = .456) and had a greater number of sessions (r = –.625). Correlations using the older two age groups showed that children aged 5 to 8 were more likely than children over 8 to have more sessions (r = –.349) and to be evaluated with a behavioral outcome measure (r = –.526).

Discussion

The goal of this study was to use meta-analysis to evaluate school-based sexual abuse prevention programs. The overall weighted mean effect size derived from 27 control-group studies was found to be 1.07. This indicates that on average, after controlling for sample size, children who participated in prevention programs performed 1.07 SD higher than the control group children on outcome measures used in the studies. The calculated effect size dropped to .81 when one outlying study (Nemerofsky et al., 1994) was removed from the data set. Both numbers are considered to be large effects (Cohen, 1988). Importantly, these effect sizes can only be interpreted as improvements in the prevention-related knowledge and skills of children who participated in programs. No identified studies have examined the effects of prevention programs on abuse prevalence, so it cannot be assumed that children participating in abuse prevention programs are at a lower risk for sexual abuse.
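To put this magnitude in more concrete terms (our illustration, assuming roughly normal outcome distributions with equal variances, an assumption the authors do not make explicitly): under that assumption, the average program participant scores above about 86% of control group children, since

\[
\Phi(1.07) \approx .86, \qquad \Phi(0.81) \approx .79,
\]

where \(\Phi\) is the standard normal cumulative distribution function; the corresponding figure for the adjusted estimate of .81 is about 79%.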
In general, studies with poorer methodology found larger effect sizes, implying that the calculated mean effect size may be artificially inflated by studies using poor methods. Specifically, interviewers who were aware of the group status of study participants found larger effect sizes, demonstrating the power of expectancy effects. Interestingly, the one study that was an outlier (Nemerofsky et al., 1994) was rated as having poor methodological quality and, thus, it may be that the high effect size obtained by this study is due at least in part to the methodological problems.

An examination of moderator variables identified several program and participant characteristics related to effect size. Programs that allowed physically active participation and made use of behavioral skills training such as modeling, rehearsal, and reinforcement produced the largest changes in performance level.


Additionally, programs that broke down the presentation of material into more than three sessions were much more effective than those programs that lasted for only one, two, or three sessions. This was not due to increased amount of time spent learning about prevention; rather, learning in smaller increments appeared to be beneficial. Children in preschool and early elementary school learned the most from prevention programs. The oldest children, those in late elementary school or early middle school, improved the least with respect to the control group. The manner in which children's learning was evaluated was also of importance in determining effect size; tests of behavioral change found a larger difference between the treatment and control groups than any other outcome measure. These moderating effects are further explored in what follows.

Child Participation Level

Child participation level was one important variable predicting knowledge gain; programs that allowed the children to be active participants found effect sizes three times as high as those with no participation. Without the Nemerofsky et al. (1994) study, a large difference emerged between physical and verbal participation. Although the difference in effect sizes was not large between studies with passive participants and studies where participants were involved in verbal discussion (change in the mean effect size = .24), the shift to active physical participation in a program doubled the effect size. Similarly, behavioral skills training increased the learning of prevention material; studies utilizing behavioral skills training found effect sizes twice as high as those with no behavioral skills training. Program participation and behavioral skills training may be helpful both because the involvement increases attention level and because the children can practice skills with instructive supervision. Physical participation may have some advantage over verbal participation in teaching prevention skills to children due to their stage of cognitive development. Children have difficulty using symbolic representations in abstract thought; thus they are less able to logically think through hypothetical situations. Physical involvement makes the information and skills being taught more concrete and salient for the children. Additionally, the use of rehearsal and reinforcement in the behavioral skills training may increase the motivation of children to learn the information provided and increase the likelihood that children will use the skills in the future. These results are consistent with meta-analyses in the child treatment literature that have found greater effect sizes for behavioral techniques than for nonbehavioral techniques in treating children and adolescents (Kazdin, 1994).

Number of Program Sessions

Studies of programs with more than three sessions found effect sizes from two to three times those of programs presented in one, two, or three sessions. This effect does not seem to be due to increased time spent learning about prevention, as the number of total hours spent in the program did not correlate significantly with effect size. Furthermore, the number of sessions was significantly correlated with effect size even after controlling for the variance accounted for by number of hours. This result may be explained by the spacing effect, which is the tendency for spaced presentations of information to produce greater learning than massed presentations of the same total length (Dempster & Farris, 1990). Perhaps dividing a program into more segments, thereby shortening the length of each, allows children to maintain their attention for the entire period and increases the amount of repetition of the material, thus leading to greater retention.

Use of Puppet Shows

A final program characteristic that emerged as a significant moderator was the use of puppet shows. Interestingly, the direction of this effect was dependent on the presence of the Nemerofsky et al. (1994) study. With this study, programs with puppet shows had a lower mean effect size than programs without them; without the study, programs with puppet shows had a higher mean effect size. The mutability of this finding may be influenced by the small number of studies that used puppet shows; thus the significance of this variable should be viewed with caution.

Type of Outcome Measure

Along with identifying important program characteristics, the results of this meta-analysis suggested that type of outcome measure is related to effect size. Specifically, the use of behavioral observation to measure outcome significantly predicted higher effect sizes. Only two studies using behavioral outcome measures were included in the analysis, however, so the generalizability of these results is questionable, and conclusions must be tentative.

Higher effect sizes found using a behavioral outcome measure may be the result of several factors related to both the measure itself and the type of program for which it was used. Behavioral outcome measures involved in vivo simulation of a potentially dangerous stranger situation. Some time following program presentation, participants were approached by a stranger and asked to accompany this person to a more isolated location. Typically, this outcome measure was scored as simply a pass if the child did not agree to go with the stranger and a fail if the child did agree to go.


This type of outcome measure had very few items and tested a narrow range of learning for which children were specifically trained. This may be responsible for the increased effect sizes found using this type of measure.

Programs evaluated using a behavioral measure differed from other programs as well. These programs typically focused on strangers as potential perpetrators and taught specific rules for dealing with strangers. This concrete approach to prevention may be easier for a child to learn than abstract concepts concerning known-adult abuse. Wurtele, Kast, Miller-Perrin, and Kondrick (1989) found that children taught specific rules concerning touching were more successful in distinguishing appropriate and inappropriate touches than were children taught the more abstract touch continuum. Additionally, situations with strangers are less ambiguous than those with known adults. The majority of abuse, however, is perpetrated by known adults.

Age of Participants

A final component important in predicting program effect size was child age. The relation was such that studies with older children had the lowest effect sizes. This did not support the hypothesis that older children would learn the most from prevention programs (e.g., Blumberg et al., 1991). Low effect sizes obtained for older children may have resulted from ceiling effects; this older group may have already known some prevention concepts and therefore been able to perform well on the outcome measure. Alternately, the lowered effect sizes found in studies evaluating older children may be due to the types of programs these children received. Studies evaluating children with a mean age higher than 8 found the lowest results but were also less likely to include active participation, behavioral skills training, and more than three sessions of instruction. These results imply that future programs for older children should alter format to match that of programs for the younger children. Importantly, the finding that younger children show more gains following prevention programs is consistent with the results of the Rispens et al. (1997) meta-analysis. In that analysis, however, effect size discrepancies were found to disappear at follow-up evaluation, indicating that younger children do not retain the learned information as well as older children.

Publication Bias

A publication bias was found in this body of literature, but it was in the opposite direction of what is typical. Rosenthal (1991) reported that dissertations and theses yielded effect sizes 40% smaller than those of other studies. In the analysis reported here, however, unpublished studies had a mean effect size 25% larger than that of studies that were published. A likely explanation for the high effect sizes of unpublished studies is the intercorrelation found between publication status and methodological quality. Specifically, unpublished studies had poorer designs, which may explain their status as unpublished as well as their inflated effect sizes.

Future Directions

Although this analysis has focused on the positive outcomes of prevention programs, it is important to note the possibility of negative reactions to the abuse-related material. In surveys, a moderate proportion of children and parents have reported increases in child anxiety and fear of adults (Finkelhor & Dziuba-Leatherman, 1995). Interestingly, parents who rated their children as more worried following prevention programs also rated the programs themselves as more helpful; perhaps increased anxiety is seen as effective in helping children to avoid victimization. Future meta-analyses should examine the evidence for negative consequences of prevention programs.

Possible limitations of this study include small sample size and inclusion of research with poor methodologies. Sample size was a problem in this meta-analysis not only because the area of child abuse prevention is relatively new, but also because many of the evaluation studies that exist were unusable. A primary problem in this literature is the lack of control groups in much of the research; approximately 25 additional studies were located that could not be used in this meta-analysis because they did not have control groups. Additionally, in the analysis of moderators, the sample size decreased even further due to missing information; many studies failed to report information critical to literature integration, such as cell sizes, design characteristics, and specific program content. This lack of information made it even more difficult to compare the already diverse array of existing prevention programs.

This meta-analysis incorporated all studies meeting inclusion criteria regardless of methodological quality so that all available information pertaining to program effectiveness could be utilized. This may have resulted in a somewhat inflated mean effect size, as studies with poorer methodological quality were found to have larger effect sizes. It was difficult to justify removing studies due to methodological quality, however, as the large majority of the studies were rated poorly on at least one methodological variable. Future evaluations should use uninformed interviewers, active control groups, and random assignment of participants (not just classrooms).


A shortcoming of all the evaluation studies utilized in this meta-analysis is that they did not objectively assess whether abuse prevention was actually achieved. The majority of the investigations evaluated program effectiveness using attitude and knowledge scales. Indeed, the effect size calculated in this meta-analysis can only provide a measure of knowledge and skill gains in program participants. Although the effect size is large, it does not indicate that program participants are less likely to be sexually abused. We cannot assume that knowledge gains will translate into behavioral differences. If programs are to achieve their goal of prevention, individuals' behavior when confronted with a possibly abusive situation must change.

To address the issue of behavioral change, some programs assess skill gains by having children respond to touch discrimination vignettes or participate in role plays (e.g., Blumberg et al., 1991; Hazzard, Webb, Kleemeier, Angert, & Pohl, 1991) and, thus, have documented that children can learn several strategies to use when in dangerous situations. Additionally, to determine actual behavior change in dangerous situations, some researchers have used in vivo simulations with strangers and judged treatment effectiveness based on children's responses to the stranger (Fryer, Kraizer, & Miyoshi, 1987b; Poche, Yoder, & Miltenberger, 1988). These studies found that children's risky behavior was significantly diminished following program participation. Survey data have found evidence that material learned in prevention programs is used in response to actual threats in children's lives (Finkelhor et al., 1995). Children who have participated in prevention programs report using primary prevention strategies when confronted by the threat of victimization.

These studies reflect efforts to document behavioral changes resulting from prevention program participation. This work provides a valuable contribution to the literature on program effectiveness, but the crucial question remains unanswered: Do prevention programs actually reduce the prevalence of child sexual abuse? Future studies must further these efforts to examine the practical benefits of prevention programs. Although the high overall effect size found in this meta-analysis is encouraging, it is essential that existing programs begin to document their impact on the actual rates of sexual victimization.

References

References marked with an asterisk indicate studies included in the meta-analysis.

*Anderson-Varney, T. (1988). The evaluation of a personal safety curriculum for preschoolers. Unpublished master's thesis, Michigan State University, East Lansing.
Berrick, J. D., & Barth, R. P. (1992). Child sexual abuse prevention: Research review and recommendations. Social Work Research and Abstracts, 28, 6–15.
Binder, R., & McNiel, D. (1987). Evaluation of a school-based sexual abuse prevention program: Cognitive and emotional effects. Child Abuse & Neglect, 11, 497–506.
Blumberg, E., Chadwick, M., Fogarty, L., Speth, T., & Chadwick, D. (1991). The touch discrimination component of sexual abuse prevention training. Journal of Interpersonal Violence, 6, 12–28.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Conte, J., Rosen, C., Saperstein, L., & Shermack, R. (1985). An evaluation of a program to prevent the sexual victimization of young children. Child Abuse & Neglect, 9, 319–328.
*Crowley, K. (1989). Evaluation of good touches/bad touches: A program to prevent child sexual abuse in school-age children. Unpublished doctoral dissertation, Pace University, New York.
Daro, D. (1994). Prevention of child sexual abuse. The Future of Children, 4, 198–223.
Davis, M. K. (1997). Child sexual assault prevention programs: A meta-analysis. Unpublished master's thesis, Ohio University, Athens.
Dempster, F., & Farris, R. (1990). The spacing effect: Research and practice. Journal of Research and Development in Education, 23, 97–101.
*Dhooper, S., & Schneider, P. (1995). Evaluation of a school-based child abuse prevention program. Research on Social Work Practice, 5, 36–46.
Downer, A. (1984). Evaluation of talking about touching. Seattle, WA: Committee for Children.
Finkelhor, D., Asdigian, N., & Dziuba-Leatherman, J. (1995). The effectiveness of victimization prevention instruction: An evaluation of children's responses to actual threats and assaults. Child Abuse & Neglect, 19, 141–153.
Finkelhor, D., & Dziuba-Leatherman, J. (1995). Victimization prevention programs: A national survey of children's exposure and reactions. Child Abuse & Neglect, 19, 129–139.
*Fryer, G., Kraizer, S., & Miyoshi, T. (1987a). Measuring actual reduction of risk to child abuse: A new approach. Child Abuse & Neglect, 11, 173–179.
Fryer, G., Kraizer, S., & Miyoshi, T. (1987b). Measuring children's retention of skills to resist stranger abduction: Use of the simulation technique. Child Abuse & Neglect, 11, 181–185.
*Harvey, P., Forehand, R., Brown, C., & Holmes, T. (1988). The prevention of sexual abuse: Examination of the effectiveness of a program with kindergarten-age children. Behavior Therapy, 19, 429–435.
Hazzard, A., Webb, C., Kleemeier, C., Angert, L., & Pohl, J. (1991). Child sexual abuse prevention: Evaluation and one-year follow-up. Child Abuse & Neglect, 15, 123–138.
Hedges, L., & Olkin, I. (1985). Statistical methods for meta-analysis. New York: Academic.
*Herman, P. (1987). Evaluation of a sexual abuse curriculum/prevention program. Unpublished doctoral dissertation, Washington State University, Seattle.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis. Newbury Park, CA: Sage.
Kazdin, A. (1994). Psychotherapy for children and adolescents. In A. Bergin & S. Garfield (Eds.), Handbook of psychotherapy and behavior change (pp. 543–594). New York: Wiley.
*Kolko, D., Moser, J., & Hughes, J. (1989). Classroom training in sexual victimization awareness and prevention skills: An extension of the red flag/green flag people program. Journal of Family Violence, 4, 25–45.
*Kolko, D., Moser, J., Litz, J., & Hughes, J. (1987). Promoting awareness and prevention of child sexual victimization using the red flag/green flag program: An evaluation with follow-up. Journal of Family Violence, 2, 11–35.


*Kraizer, S. (1991). The Safe Child Program for the prevention of child abuse: Development and evaluation of a school-based curriculum. Unpublished doctoral dissertation, The Union Institute, Cincinnati, OH.
*Nelson, D. (1985). An evaluation of the student outcomes and instructional characteristics of the "You're in charge" program. Unpublished manuscript, Utah State Office of Education, Salt Lake City.
Nemerofsky, A., Carran, D., & Rosenberg, L. (1994). Age variation in performance among preschool children in a sexual abuse prevention program. Journal of Child Sexual Abuse, 3, 85–102.
*Oldfield, D., Hays, B., & Megel, M. (1996). Evaluation of the effectiveness of Project Trust: An elementary school-based victimization prevention strategy. Child Abuse & Neglect, 20, 821–832.
*Peraino, J. (1990). Evaluation of a preschool antivictimization prevention program. Journal of Interpersonal Violence, 5, 520–528.
*Poche, C., Yoder, P., & Miltenberger, R. (1988). Teaching self-protection to children using television techniques. Journal of Applied Behavior Analysis, 21, 253–261.
*Ratto, R., & Bogat, G. A. (1990). An evaluation of a preschool curriculum to educate children in the prevention of sexual abuse. Journal of Community Psychology, 18, 289–297.
Rispens, J., Aleman, A., & Goudena, P. (1997). Prevention of child sexual abuse victimization: A meta-analysis of school programs. Child Abuse & Neglect, 21, 975–987.
Rosenthal, R. (1991). Meta-analytic procedures for social research (rev. ed.). Newbury Park, CA: Sage.
Sandberg, D., Lynn, S., & Green, J. (1994). Sexual abuse and revictimization: Mastery, dysfunctional learning, and dissociation. In S. Lynn & J. Rhue (Eds.), Dissociation: Clinical and theoretical perspectives (pp. 242–267). New York: Guilford.
Saslawsky, D., & Wurtele, S. (1986). Educating children about sexual abuse: Implications for pediatric intervention and possible prevention. Journal of Pediatric Psychology, 11, 235–245.
Tutty, L. (1992). The ability of elementary school children to learn child sexual abuse prevention concepts. Child Abuse & Neglect, 16, 369–384.
*Wolfe, D., MacPherson, T., Blount, R., & Wolfe, V. (1986). Evaluation of a brief intervention for educating school children in awareness of physical and sexual abuse. Child Abuse & Neglect, 10, 85–92.
*Wurtele, S. (1990). Teaching personal safety skills to four-year-old children: A behavioral approach. Behavior Therapy, 21, 25–32.
*Wurtele, S., Gillispie, E., Currier, L., & Franklin, C. (1992). A comparison of teachers vs. parents as instructors of a personal safety program for preschoolers. Child Abuse & Neglect, 16, 127–137.
*Wurtele, S., Kast, L., & Melzer, A. (1992). Sexual abuse prevention education for young children: A comparison of teachers and parents as instructors. Child Abuse & Neglect, 16, 865–876.
Wurtele, S., Kast, L., Miller-Perrin, C., & Kondrick, P. (1989). A comparison of programs for teaching personal safety skills to preschoolers. Journal of Consulting and Clinical Psychology, 57, 505–511.
Wurtele, S., Marrs, S., & Miller-Perrin, C. (1987). Practice makes perfect? The role of participant modeling in sexual abuse prevention programs. Journal of Consulting and Clinical Psychology, 55, 599–602.
Wurtele, S., Saslawsky, D., Miller, C., Marrs, S., & Britcher, J. (1986). Teaching personal safety skills for potential prevention of sexual abuse: A comparison of treatments. Journal of Consulting and Clinical Psychology, 54, 688–692.

Manuscript received October 23, 1997
Final revision received July 6, 1999

