
Kate Frost February 12, 2014

EPS 513 Prof. Salmon & Knauti


Formative Assessment
The data workshops this semester have helped me to discover which forms of data representation best help me support my students, and which areas of formative assessment I would like to focus on improving when I have my own classroom. Because the teaching schedule in my current classroom, shared by two residents and a mentor teacher, changes constantly, I was unable to use formative assessments that showed students' learning progression within a specific mathematical concept. Nonetheless, I learned about effective representations of formative assessment, as well as ways in which I can refine, and areas in which I can improve, my own use of such assessments.
My initial desire was to look at the relationship between the content knowledge and skills that students possess on any given topic and their perception of their own understanding of that topic. That is to say, I hoped to examine both how well students demonstrate mastery of a topic and how confident they are in their answers, and whether there is a correlation between the two. Unfortunately, the inconsistency in teaching, together with my hesitation to incorporate self-assessment into a classroom that is not my own, left me unable to acquire the information necessary for such a study. Thus, because inconsistent teaching kept me from focusing on one particular concept, and because I failed to accumulate data on self-assessment, I instead focused on the general knowledge I gained about how to target and acquire informative data, as well as how best to represent it, so that I can use it effectively to inform my instruction.
My mentor teacher has set up a comprehensive data wall in her classroom, which she populates with data from her routine assessments of students. While she seems to have a routine for engaging students in the assessment process, I feel as though this aspect of the cycle is slowed or complicated by having three teachers in the classroom (especially given that I joined the room three months into the year and did not have a clear introduction to how this aspect of the classroom runs). As for the data that I collected and analyzed for this class, I used two exit tickets and two pre-tests. Exit tickets are given daily for each lesson; we often teach two separate math lessons, or one math and one science lesson, each day. Pre-tests are given at the beginning of each new unit, and post-tests are given at the completion of each unit (we use the same test for our pre- and post-tests). Students who correctly answer at least eighty percent of the questions that require a given skill (e.g., adding fractions with unlike denominators) are said to have mastered that skill and receive a star on the data wall's mastery chart. Each skill that we have taught this year is on the mastery chart. The skills assessed on each exit ticket are either assessed directly on the pre/post-test or are a stepping stone to mastering a skill found on the pre/post-test. The pre/post-tests are aligned with the Common Core Standards as well. Thus,
the routines surrounding assessment and data collection within the classroom are highly
effective. However, the routines around students' review of such assessments appear to be lacking. Students are simply given the opportunity to look briefly over the feedback they receive before either throwing out their work or bringing it home. There is little self-assessment done by students. They are, however, able to view the data wall, and all students have been taught how to read the mastery chart. Students also understand what opportunities they will have to master the skills they currently lack. Thus, the consequential validity of each assessment is high insofar as it allows the teachers in the room to create lessons that target the learning students need most. The accumulated data allows students to have a sense of where they are and where they are going with their learning. The consequential validity of these assessments would be considerably lower, however, if the criteria focused more directly on students' increased understanding of their own knowledge as it relates to a given assessment. There are distinct routines around assessment and data collection within the classroom; however, student involvement in the process remains lower than I would like to see it.
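
To make the eighty-percent mastery rule concrete, the short sketch below shows one way such a criterion could be computed from item counts. This is only an illustration under assumed data: the student numbers and scores are hypothetical, and it is not the classroom's actual data system.

# A minimal sketch (hypothetical data) of the mastery rule described above:
# a student earns a star on the data wall's mastery chart for a skill once
# at least 80% of the questions requiring that skill are answered correctly.

MASTERY_THRESHOLD = 0.80

def has_mastered(correct: int, total: int) -> bool:
    """True if the student met the 80% criterion for a skill."""
    return total > 0 and correct / total >= MASTERY_THRESHOLD

# Hypothetical (correct, total) item counts for one skill,
# e.g., adding fractions with unlike denominators.
scores = {
    "Student 12": (4, 5),  # 80%: mastered, earns a star
    "Student 17": (3, 5),  # 60%: not yet mastered
}

for student, (correct, total) in scores.items():
    mark = "*" if has_mastered(correct, total) else "-"
    print(f"{mark} {student}: {correct}/{total} correct")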
Through the data workshops and my own exploration of data representation formats, I learned how to make increasingly effective representations of my data. I believe there is a balance to strike: data analysis can become so specific that the work grows tedious and burdensome, yet it must be specific enough to accurately and effectively inform your instruction. I feel as though I found that productive balance in three of my four representations. Despite my initial research question and interest, I ended up focusing on students' content knowledge and skills.
Table 1 shows a graphic representation of student scores on two components of an exit ticket. This graph, though it lacks some specificity, makes it clear that students in both sections (1 and 2) have a much better understanding of what a fraction × fraction number sentence means than of how to set up and/or solve the model. This graph, however, tells me nothing about individual students' understanding of the concept at hand. Table 1 is the representation in which I did not find the productive balance of specificity without tedium, as it was not as informative as it should have been. Tables 2.1–2.4 display students' understanding of the various components of solving a mixed number times a fraction. This data shows only one class section, as opposed to Table 1, which shows both sections of students. Table 2.4 corresponds to the same skill as the red column in Table 1, and thus I am able to see that, between these two exit tickets, students' ability to correctly represent and solve a fraction times a fraction greatly improved. My third representation shows the results of a volume pre-test.¹ This representation gives me a strong sense of where my students are with the material prior to beginning the unit. In this
representation, I am able to see the data for the class as a whole, as well as for individual students. The data in Table 3 helped me to form small groups for instruction, so that students do not spend their time relearning something they have already mastered when they could be furthering their skills. Similarly, I used this data to see what areas of volume would be appropriate for whole- (or majority-) group instruction. While I have been unable to see specific progress in these skills yet, as we had not yet taught the corresponding lessons, the exit tickets from our volume lessons, as well as the volume post-test, will allow me to assess student growth and knowledge. From Table 3, it was clear to me that student 11 (I have taken names out to maintain anonymity), as well as a few other students whose data was removed from this table so the chart would fit better, had enough prior knowledge about volume that they could work together, with the assistance of structured worksheets and brief instruction during independent practice, to attempt to master the remaining concepts within the larger umbrella of volume. Table 1 and Tables 2.1–2.4 expose what concepts students were able to grasp after a specific lesson, whereas Table 3 gives me insight into students' prior knowledge and next steps for instruction.
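
As an illustration of how pre-test results like Table 3's could drive that grouping, the hypothetical sketch below routes students who show mastery on every assessed skill to an independent extension group and collects the remaining students by the skills they still need. The skill names and results are invented for the example; this is a sketch of the grouping logic, not the actual planning process.

# Hypothetical sketch: turning pre-test mastery flags (like Table 3's
# color-coded grid) into instructional groups. All data is invented.
from collections import defaultdict

pretest = {
    "Student 11": {"define volume": True,  "count cubes": True,  "use dimensions": True},
    "Student 4":  {"define volume": True,  "count cubes": True,  "use dimensions": False},
    "Student 9":  {"define volume": False, "count cubes": False, "use dimensions": False},
}

extension = []               # mastered every assessed skill: extension work
reteach = defaultdict(list)  # skill -> students still needing instruction

for student, skills in pretest.items():
    if all(skills.values()):
        extension.append(student)
    else:
        for skill, mastered in skills.items():
            if not mastered:
                reteach[skill].append(student)

print("Extension group:", extension)
for skill, group in reteach.items():
    print(f"Reteach '{skill}':", group)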


¹ I also created a representation for a Geometry pre-test, using the same structure as that in Table 3. I chose not to include it within this paper, as it followed the same layout as Table 3 and shared similar implications for instruction.

Table 1 (above): bar graph, "Multiplying Fractions x Fractions Exit Ticket," plotting correct answers (y-axis, 0%–120%) by problem number (x-axis, 1 and 2); the labeled values are 100%, 89%, 37%, and 11%, with the Meaning component far outscoring the Model component.

Tables 2.1–2.4 (above): charts sorting individual students, by number, into Mastered, Needs Help, and Reteaching Required for each component of the skill: Meaning (2.1), Model (2.2), Solving Part 1 (2.3), and Solving Part 2 (2.4).

Table 3 (above): color-coded mastery grid from the volume pre-test for students 1–23. Columns: Calculate Volume (prism shown and dimensions; Volume = cubic units); Volume from dimensions only (no prism or cubes shown); Rectangular prism w/ cubes shown; Irregular shapes w/ cubes shown; How many more units (rectangular prism) w/ cubes shown; and Define Volume. Color key: Not yet mastered, Mastered, Special Education.

While the assessments that I chose to analyze and represent graphically did not give me large amounts of insight into my students' learning progression, they did help me to understand effective ways to use formative assessment, and they highlighted areas of the process that I would like to alter when I have my own classroom. One aspect of the process that I would like to alter, which I have already included in this week's lessons, relates to student self-monitoring and assessment. In this week's exit tickets, I ask students to rate how confident they are in their answers. I hope that this question will cause students to begin to think about their thinking, triggering some degree of metacognition within my current classroom. Table 1 and Tables 2.1–2.4 made it very clear to me where reteaching was needed, and Table 3 let me know what skills
my students were entering the classroom with, and where they, as a class, needed the most
support. Table 3 also, as I mentioned earlier, gives me a good sense of where individual students
are, so that I may plan remediation and more challenging problems accordingly. As Heritage articulates, when using formative assessment it is vital that you "interpret the evidence of or [draw] inferences from" your students' results, and then "[match your] instruction to the gap" (Heritage, p. 144). Heritage explains that, in order to avoid boredom or seemingly insurmountable frustration, it is imperative that teachers appropriately match instruction to the gaps that students expose on their formative assessments. Using the results from the volume pre-test, I isolated those students who I felt would benefit from student-directed instruction, gave them tools, and allowed them to work on furthering their own learning while I instructed the rest of the class on topics of which those students had already proven mastery. Hattie and Timperley's models, both the
three questions they present, as well as the four levels of feedback, have provided me with a
model and structure to utilize as I begin to create my own model for the feedback loop in my
classroom. I feel comfortable with each component of these models, as I am able to see where
my current practice fits into the model, and the next steps that I can take in order to more
significantly impact my students' growth. That is to say, while I tend to focus on the process as well as the product of the task, it would be best for me to focus on the process more than I currently do. I had an inclination that this was the case, but it is encouraging to have research support my instincts, as well as provide me with concrete steps that I can take to increase the consequential validity of assessments as well as class work. Shepard (2005) also touches upon this aspect of feedback and assessment: "positive learning outcomes were more likely when feedback focused on features of the task, such as how the student could improve in relation to the standards, and emphasized learning goals instead of lavishing nonspecific praise or making normative comparisons" (p. 68). Both Shepard and Hattie and Timperley cite the importance of including self-assessment and self-monitoring in the classroom. As I mentioned, these are the areas of the assessment process in which I most hope to grow, and in which I have already begun to make changes. All in all, the readings for this class, in conjunction with my personal data analysis and representation, helped me to understand my students' learning and how best to help them progress. The data workshops were helpful in that they allowed me to see other types of representations and to hear about best practices in other classrooms.
The remaining questions that I have about students' learning growth in relation to content knowledge and skills will, I believe, be answered in part through further research, but mostly through experience within my own classroom. I feel as though I have a solid understanding of how to use formative assessment to aid my students' growth, and that I now need to focus on how to create a climate for self- and peer-assessment within my classroom and help my students become more metacognitive. Additionally, as I am beginning to do next week, I want to use my exit tickets as a way to help students self-monitor and take responsibility for their work. I want them to feel a sense of ownership of their work. Thus, the question that I will begin to look at, now that I feel more comfortable making such changes within my classroom, is: How does a student's mastery of a skill relate to his or her assessment of his or her own understanding of the topic? I will also continue to ask students questions that help them find the errors in their own work, so that they may begin to self-monitor using the model of thinking and questioning that I am providing them. Thus, at this point in time, I will do little to change my assessments themselves (until I decide to begin including student-made questions), but will focus on changing the process surrounding formative assessment. I will alter the process to ensure student engagement within it, and to foster metacognition and self-monitoring. These changes will allow me to better understand my students' abilities to self-assess, and to help my students gain greater ownership over their learning.
