DE 1.0 Internal Monitoring Report from September 8, 2009


DAB Feedback

Introduction

Overall, our committee gives its strong support to the idea that PSD should be accountable for meeting
the ends set for the district by the BOE, that those ends should be clearly defined, and that our success
against them should be measurable using (primarily) progress against quantifiable objectives. To that
end, we applaud the learning goals as a start in establishing those objectives and the data in this report
as a partial measurement against those objectives.

Not surprisingly, we have some suggestions for improvement both in content and in presentation and will
present them below. Not every member of our committee agrees with every one of the comments and, in
fact, you will see a couple of places where there is some contradiction. In general though, the entire
committee supports the thrust of the comments for better organization and consistency, more clarity, and
more detail. We stand ready as a committee and as individuals to support the district, in whatever ways
we can, to make the needed improvements in the report going forward and, more importantly, to make the
underlying improvements in the district for the benefit of all PSDʼs students.

General

We believe there are multiple target audiences for this report. First, the administration obviously needs to
report to the BOE. However, the report also should serve as a communication vehicle to the parents of
the district and the community at large on how we are doing against our ends and our objectives. The
following changes would help both audiences:

1) Create an executive summary at the beginning of the report. This would summarize the general
progress against the ends and objectives and the challenges that remain - perhaps citing key statistics
to bolster the case. It would also cover, at a high level, new measures and objectives that have been
adopted or will be adopted going forward. The full report would be an attachment. In this way, more
people would have access to a concise summary and therefore have the opportunity to be more
engaged in the process.
2) Paint a picture of PSD over time. Where has it been since the last report? Where is it now? Where
will it be in the next 2-3 years? Where do we aspire to be in the next ten years? This should not only
be done in words but in the statistics and measures. There should be targets for each of those time
windows for every one of the measures we use. That gives the reader context and the organization a
broader planning horizon.
3) As part of the future vision, it would be helpful to have a section about how the report and measures
will evolve over time to adapt to the broad changes going on - the change in state standards, the
changes in CSAP, and the ongoing work on the definition of and interpretation of the district ends.
4) Adopt common content and style expectations for all parts of the report and have a single editor to
bring it all together. Different benchmark groups, different kinds of statistics, different levels of detail,
different time horizons, and different styles of writing are all evident in the current report. That makes
the report harder to read and can give the appearance of a lack of focus or harmony across the
different groups.
5) The separation of the Interpretation section from the Evidence section is confusing and requires going
back and forth to try to tie the whole picture together. Each learning goal should be covered in its entirety in
its own section. If need be, a statement of interpretation can be a subsection therein. This change in
organization will be even more important as the number of learning goals and objectives increases
over time - as it must to cover the full scope of the district ends.
6) We recognize the distinction that the BOE wishes to make between means and ends and that,
according to the strict rules of engagement therein, the report may not need to address anything
having to do with means. Nonetheless, it is our belief that the report as a document and, more
importantly, the confidence in the goals that it sets forth would be strengthened if these concerns were
addressed:
a) Throughout the Evidence section, there are statements that “it is reasonable” that we increase
this goal or that goal and numbers are used to justify that statement. That is one way to look at
what is reasonable. It would be at least as instructive to say, where possible, why we think a
particular measure is low and say why, in light of that understanding, it is reasonable to
increase the goal. The underlying causes may make a new target difficult to attain. And
certainly, knowing why something is happening will help to correct or improve it.
b) Similarly, there are few if any statements in the report about what steps are actually planned to
make the targeted improvements (or what steps were taken to generate the improvements that
are reported). Such identified plans would help inspire confidence that PSD has some reason
to believe it will meet the new targets (and that the prior improvements were not just luck).
7) The timing of the report should be changed.  This report was delivered to the BOE in August.  Most
stakeholders are not engaged during the summer months.  Reporting after school is back in session
would be better timing and would support the BOEʼs desire for transparency.
8) While the use of data to measure learning goals and having appropriate means for the ends are critical,
where is the directive recognizing the need to educate the whole child?  The document does not offer
any policy statements about the role of art, music, and PE educators in assessing academics within PSD
via the RTI process.

What follows are specific comments organized by learning goal, some of which are unique and some of
which represent specific examples of the concerns expressed above:

Learning Goal #1, Reading:

There are breakdowns by gender and ethnicity and SES. Do the cross products of those breakdowns
show anything useful (e.g., Hispanic males vs. Hispanic females vs. white males, etc.)? Relatedly, how much
overlap is there between the SES and the Hispanic populations (or any other, for that matter) so we
understand how much we are seeing the same kids or effects just under different labels?
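
To illustrate the kind of cross-tabulation we have in mind, here is a minimal sketch in Python; the column
names and records are hypothetical and are not drawn from PSD data:

    import pandas as pd

    # Hypothetical student-level records; the columns are illustrative, not PSD's actual schema.
    students = pd.DataFrame({
        "gender":     ["M", "F", "M", "F", "M", "F"],
        "ethnicity":  ["Hispanic", "Hispanic", "White", "White", "Hispanic", "White"],
        "low_ses":    [True, False, False, True, True, False],
        "proficient": [False, True, True, True, False, True],
    })

    # Cross product of the existing breakdowns: proficiency rate for each gender x ethnicity cell.
    print(pd.crosstab([students["ethnicity"], students["gender"]],
                      students["proficient"], normalize="index"))

    # Overlap between the SES and Hispanic populations: how many students carry both labels?
    print(pd.crosstab(students["low_ses"], students["ethnicity"] == "Hispanic"))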

3rd Grade Reading:  If there is little separation in districts ranked 3rd through 5th in reading (80.7% through
82.5%), why is our goal an increase of 2% annually?  Is this a meaningful increase in 3rd grade reading
proficiency?

Learning Goal #2, Annual Growth:

The interpretation of demonstrating high levels of achievement…states the target is one yearʼs growth
(great goal).  Where do we talk about mastery?  How will the new State Standards drive how we look at
catch-up growth and promotion?

If RTI is critical to addressing growth, why is it not discussed in the monitoring report?

The state standards to be measured are listed as follows: "The district interprets children will demonstrate
levels of achievement consistent with high individual expectations and Colorado State Standards to mean a
studentʼs performance is targeted toward the accomplishment of a minimum of one yearʼs growth in one
yearʼs time using the CSAP, the stateʼs measure for state standards in language arts, math, science, and
writing." The District Ends list many more subject areas to be measured/assessed. How and when will PSD
be assessing these other subject areas?

In the annual growth section, it would be instructive to see data also broken down by where the students
started (e.g., unsatisfactory, partially proficient, proficient, or advanced) and what percentage of each group
made annual growth.
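
A minimal sketch of the breakdown we are asking for, again with purely hypothetical records and column
names:

    import pandas as pd

    # Hypothetical records; "start_band" and "made_growth" are illustrative names only.
    df = pd.DataFrame({
        "start_band":  ["unsat", "pp", "p", "a", "pp", "p", "unsat", "a"],
        "made_growth": [True, False, True, True, True, False, False, True],
    })

    # Percent of students in each starting performance band who made a year's growth.
    print(df.groupby("start_band")["made_growth"].mean().mul(100).round(1))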

The normative tracking of annual growth (comparing a student to their peers) doesn't accurately measure
annual academic growth for English Language Learner (ELL) students, nor is the CSAP (which must
be taken in English) an accurate tool to measure annual growth for these students.  How should this be
addressed and measured?

There is discussion of the limitations of the growth model. This discussion should be extended to explain
the limitations in measuring growth for advanced students. There or elsewhere, further explanation
should be given as to how we are ensuring those students achieve growth given these limitations in the
primary measurement tool.

Page 4 - Next to last paragraph refers to “CSAP score history”.  It would be nice for score history to be
more explicitly defined.  How is it calculated?

Page 6 – The 0.15 factor allowance appears at first glance to result in a bias toward a greater
number of students being credited with a yearʼs growth.  If this is an attempt to address known measurement
error, how are we addressing measurement error in the other direction?
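
To make the concern concrete, here is a minimal sketch assuming (this is our reading, not a definition
taken from the report) that the 0.15 allowance credits a yearʼs growth to any student who falls no more than
0.15 below the growth target; applied in only one direction, such a tolerance can only raise the count:

    # Hypothetical growth values relative to a target of 1.0 year's growth; the 0.15
    # allowance here reflects our reading of the report, not a documented formula.
    growth = [0.80, 0.88, 0.92, 1.05, 1.20]

    strict  = sum(g >= 1.0 for g in growth)         # target met exactly: 2 students
    allowed = sum(g >= 1.0 - 0.15 for g in growth)  # with the one-sided allowance: 4 students

    print(strict, allowed)  # a one-sided allowance can only increase the count, never decrease it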

Page 6 – The introduction of 75% for reading and 72% for writing and math is presented without context,
leading this reader to immediately wonder why all three subject areas did not have the same target.  I
believe this is explained adequately later in the document, but this is where the numbers are
introduced, and it would be helpful to have an explanation on this page.

The Growth+ section states "...if a student is currently testing at the proficient or advanced level and
maintained or increased their standing relative to the favorable performance category, then the student
did make a year's progress."  How much of an increase in the scores of proficient or advanced students
counts as a year's progress? Growth+ seems to show only that a student who is proficient or advanced
didnʼt backslide.  Does it really tell us anything about growth?  Does MAPS, or the ACT (at the secondary
level), help to fill in this gap? The yearʼs growth goals (75% in reading, 72% in math and writing) seem low.
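
Read literally, the quoted rule amounts to something like the following sketch (our paraphrase in code, not
the districtʼs actual calculation), which is why it looks to us like a no-backslide check rather than a growth
measure:

    # CSAP performance categories in order.
    LEVELS = ["unsatisfactory", "partially proficient", "proficient", "advanced"]

    def growth_plus(prior_level: str, current_level: str) -> bool:
        """Our paraphrase of the quoted rule: a student currently proficient or advanced
        who maintained or increased their performance category made a year's progress."""
        currently_high = LEVELS.index(current_level) >= LEVELS.index("proficient")
        maintained = LEVELS.index(current_level) >= LEVELS.index(prior_level)
        return currently_high and maintained

    # A proficient student whose scale score fell but who stayed "proficient" still counts.
    print(growth_plus("proficient", "proficient"))  # True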

I am still unclear exactly how the Growth+ interpretation is being calculated.  I am also unsure that I agree
that this deviation from the "standard" growth model is desirable and/or necessary, specifically
considering that later in this same document we compare ourselves against other districts within the
state based upon the standard state growth model. I get a sense, perhaps untrue, that the Growth+ metric
tends to artificially inflate our growth data. How are other districts dealing with this?

It appeared that the various performance metrics and growth metrics were not completely consistent
between performance goals.  Sometimes we compare against the top 25 districts, the top 5 districts, or the
top 3 districts.  Sometimes we cite 3-year rolling averages, 5-year rolling averages, or changes from last year
to this year.  What is the reason for this inconsistency?  Did different authors write different sections? 
Were specific data cited to make the most favorable impression? (Page 16 – An example of my concern
about what we cite and report on is on display on this page.  The third paragraph states that the growth data
in table 2.6 indicate a substantial increase in the percent of PSD students making a yearʼs growth from
2007 to 2008 and from 2008 to 2009.  However, upon closer evaluation of table 2.6, it is evident that over
the five years the performance curve is V-shaped, and even after those two years of "substantial"
increase, we are still performing lower in 2008-09 than we were in either 2004-05 or 2005-06.)
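
The numbers below are purely illustrative (they are not the values in table 2.6), but they show how a
V-shaped series can support a claim of "substantial increase" over the last two years while still sitting below
its level four years earlier, and how year-over-year changes, 3-year rolling averages, and 5-year rolling
averages each tell a different story:

    # Illustrative percentages only; not the actual table 2.6 data.
    values = [74, 73, 68, 70, 72]   # 2004-05 through 2008-09

    year_over_year = values[-1] - values[-2]   # +2: looks like a solid improvement
    three_yr_avg   = sum(values[-3:]) / 3      # 70.0
    five_yr_avg    = sum(values) / 5           # 71.4
    vs_first_year  = values[-1] - values[0]    # -2: still below the 2004-05 level

    print(year_over_year, three_yr_avg, five_yr_avg, vs_first_year)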

Page 13 – Given the historical evidence from 2005-2009, a 2 percent annual increase may not be
realistic.  A 1 percent annual increase might be a more obtainable target.

Table 2.2 seems to show that 7th and 8th grades need "support to improve outcomes".  How is this being
addressed by the district?  Do prior yearsʼ data show the same thing for grades 7 and 8?  10th grade
shows a drop as well.  Is this a transitional drop?  It seems math is doing a relatively "better" job in getting
growth.

How do we find out the whys, whats, and hows and try to replicate them in reading and writing? 
MAPs:  Why shouldnʼt we perform in the 90th percentile? Table 2.6 shows we are far below that level now.
Table 2.7:  What changed between 07/08 and 08/09?  Why did we see so much more growth?  Catch-up
growth:  I didnʼt understand the 1.6% on pg. 18.  It is interesting that math does the best job attaining
annual growth, but not catch-up growth.  Why?  It seems Academy 20 is doing the best job across all
content areas in catch-up growth.  What are they doing?

Page 15 – Why isnʼt the percentage of students making a yearʼs growth in other Colorado districts beyond
2007 available from the CDE?  I believe the report states that “PSD does not have access to ………..” 
Why not?

On page 16, there is a paragraph that says the percentage of students making a yearʼs growth as measured
by NWEA is unlikely to exceed certain percentages. It implies this is because that would put the district in
the 90th percentile or above for districts. Why shouldnʼt we be targeting PSD to be among the top ten
percent of districts in the country?

I also took issue with the conclusion in the Evidence section discussing Goal 2 - annual growth.  There is
the following statement:

Note that PSD has met the current growth targets in specific grades across all
three subjects tested. This fact indicates that attaining our growth targets is
reasonable, and that the growth metric and targets set have utility in illuminating
where we can provide support to improve outcomes.

I do not see that, just because growth has been made in all subjects in specific grades, it necessarily
follows that growth of that magnitude can be made across all subject areas in every grade.
Maybe there is more relevant information, but when I looked at the chart I immediately wondered whether
there are factors inherent in the subject matter for the higher grades that would preclude a greater level of
growth, or whether the age of the students and the challenges they face in the older grades could be a factor.

In a couple of places (e.g., in discussing catch-up), there is a statement that it is difficult to compare
districts because of the differences between them. Why not identify at least a few districts of comparable
size and demographics (or, in this case, with a comparable CSAP score distribution) for these
comparisons? Otherwise, we appear to be avoiding comparison (since we compare everywhere else).

Learning Goal #3, Post-Secondary Ready

If the State Board of Education and the Colorado Commission on Higher Education have jointly adopted a
definition of post-secondary and workforce ready (www.cde.state.co.us/cdegen/downloads/
PWRdescription.pdf), why are we not using it?  It states specifically, "with no remediation."

AP/IB tests lack well-established levels of validity and reliability? That is a very surprising statement
and, though it may be true in some statistical sense, it makes little difference in the real world: these tests
are among our only national comparisons, they are the only "objective" way to ensure that
our students learn the appropriate material in our post-secondary courses, and they can make a huge
difference in whether a student is accepted or denied admission at some colleges.

Page 8 – The third paragraph refers to ACT Benchmarks in conjunction with PLAN and EXPLORE.  Neither
PLAN nor EXPLORE is defined, leaving this reader wanting to know more.  A reference for both of these
would be appreciated, as would a very brief explanation of what they are.

Should PLAN and EXPLORE results be part of objectives to identify issues and trends early before they
show up in the ACT metrics?

In the AP section, what are the median scores and the trend in those? (Presumably, we have similar
data at least for IB.) Also, what percentage of students have a median score of 3 or greater across all of the
AP exams they took? In other words, using "tests taken" obscures the fact that some students take many AP
exams and do well on all of them. Do those students skew the overall results?
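
A minimal sketch, with invented scores, of the difference between the test-level view ("tests taken") and a
per-student median view; the skew we are worried about shows up clearly:

    import statistics

    # Hypothetical AP results; each list is one student's scores across the exams they took.
    scores_by_student = {
        "student_a": [5, 5, 4, 5, 5],   # takes many exams and does well on all of them
        "student_b": [2],
        "student_c": [3, 2],
    }

    # Test-level view: pool every exam taken, so heavy test-takers dominate the rate.
    all_tests = [s for scores in scores_by_student.values() for s in scores]
    pct_tests_3_plus = 100 * sum(s >= 3 for s in all_tests) / len(all_tests)

    # Student-level view: one median per student, then the share of students with a median of 3+.
    medians = [statistics.median(scores) for scores in scores_by_student.values()]
    pct_students_3_plus = 100 * sum(m >= 3 for m in medians) / len(medians)

    print(round(pct_tests_3_plus), round(pct_students_3_plus))  # 75 vs. 33 with this toy data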

On ACT readiness, the science results are terrible. This is a place in particular where more discussion of
the reasons and of the steps being taken to fix them is in order.

There should be district, state, and national comparison charts on the ACT results.

ACT—it is interesting to note that math seems to do the best in growth in the years leading up to taking
the ACT, yet only 50% or fewer students test at the college readiness level!  How can this be?

Page 27 – 2nd paragraph.  Another example of inconsistent language: two of the four goals use the
terminology "per year" whereas the other two do not.  Furthermore, the goals are stated over a three-year
horizon; what is the desired path over those three years?  A straight line?  Something else?  Here (and for
every goal) I would like to see a goal table, formatted like the historical performance table, that is explicit
about the numerical goal for each of the next three years for each performance attribute being measured.

I like the fact that we are tracking post-secondary enrollment.  What comparisons can we make?

Why common assessments of DE 1.1 in 4th, 7th, and 9th grades?  What is the timeline for development
and implementation?  Who is involved?  Who will define Real World Applications, Creative Applications,
Critical Thinking, Decision Making, Advocacy, and Adaptation to Ever Changing Personal and Global
Contexts?

The Course Assessments section seems incomplete. Which actual classes under those subject areas
have assessments (e.g., Spanish 1, 2, 3, etc.)? Which are actually being administered? What are the
results, trends on those results, and targets?

How can we be meeting goals in post-secondary readiness, but not in 3rd grade reading proficiency, if the
latter is a critical factor in predicting future academic success?

Learning Goal #4, Transitions:

Page 32 – Iʼd like a primer on how dropout rate is calculated given the data in the preceding tables.  I
could not figure out the math.
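
For reference, one common definition of an annual (event) dropout rate is sketched below; we are not
certain this matches the CDE formula used in the report, which is exactly why a primer would help:

    # One common annual (event) dropout-rate definition; possibly not the exact CDE formula.
    def annual_dropout_rate(dropouts: int, membership_base: int) -> float:
        """Dropouts during the year divided by all students enrolled at any point that year."""
        return 100.0 * dropouts / membership_base

    # Hypothetical numbers, only to show the shape of the calculation.
    print(round(annual_dropout_rate(dropouts=120, membership_base=9500), 2))  # 1.26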

Dropout rate:  Do we know, or can we find out, how many kids drop out but then come back and finish?

Can we use the percentage of students who need remediation at the next level as another measure of the
success of our transitions?

It would be instructive to use as a measure a breakdown of the number/percentage of students attending
each type of college (e.g., 2-year vs. 4-year, in-state vs. out-of-state, selective vs. highly selective).

Draft from the District Advisory Board Academic Subcommittee - 1/30/2010
