
Implementing and

Analyzing Performance
Assessments
in Teacher Education

A volume in
Contemporary Issues in Accreditation, Assessment, and Program
Evaluation Research in Educator Preparation
Joyce E. Many, Series Editor
Implementing and
Analyzing Performance
Assessments
in Teacher Education

edited by

Joyce E. Many
Georgia State University

Ruchi Bhatnagar
Georgia State University

INFORMATION AGE PUBLISHING, INC.


Charlotte, NC • www.infoagepub.com
Library of Congress Cataloging-in-Publication Data
  A CIP record for this book is available from the Library of Congress
  http://www.loc.gov

ISBN: 978-1-64113-119-3 (Paperback)
978-1-64113-120-9 (Hardcover)
978-1-64113-121-6 (ebook)

Copyright © 2018 Information Age Publishing Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the publisher.

Printed in the United States of America


CONTENTS

Introduction.......................................................................................... vii

1 The Evolution of Teacher Performance Assessments As a Measure of Accountability..................................................................................... 1
   Carla L. Tanguay

2 From Isolation to a Community of Practice: Redefining the Relationship of Faculty and Adjunct University Supervisors During the Implementation of edTPA............................................... 39
   Sharilyn C. Steadman and Ellen E. Dobson

3 Faculty Investment in Student Success: A Four-Year Investigation of edTPA Implementation............................................ 63
   Gaoyin Qian, Harriet Fayne, and Leslie Lieman

4 Mandates Revisited: One Coordinator’s Story of Cultivating Collegiality and Inquiry Through a Professional Learning Community........................................................................................... 85
   Holley M. Roberts

5 The Power of Supports to Improve edTPA Outcomes..................... 105
   Kathleen Fabrikant, Cynthia Bolton, Cindy S. York, and Angie Hodge

6 Cognitively Guided Instruction as a Means of Preparing Elementary Teacher Candidates for edTPA Mathematics Assessment Task 4............................................................................... 121
   Susan Swars Auslander, Stephanie Z. Smith, and Marvin E. Smith

7 Not Just for Preservice Teachers: edTPA as a Tool for Practicing Teachers and Induction Support.................................... 147
   John Seelke and Xiaoyang Gong

8 Forcing Me to Reflect: Preservice and Novice Teachers’ Reflective Thinking in Varied School Contexts............................... 167
   Dianna Gahlsdorf Terrell, Kathryn McCurdy, Megan L. Birch, Thomas H. Schram, and Page Tompkins

9 State Education Agency Use of Teacher Candidate Performance Assessments: A Case Study of the Implementation of a Statewide Portfolio-Based Assessment System in Kansas..................................... 191
   Stephen J. Meyer, Emma V. Espel, and Nikkolas J. Nelson

10 Using the Concerns-Based Adoption Model To Support edTPA Coordinators and Faculty During the Implementation Process...... 217
   Joyce E. Many, Shaneeka Favors-Welch, Karen Kurz, Tamra Ogletree, and Clarice Thomas

About the Editors............................................................................... 247

About the Contributors...................................................................... 249


INTRODUCTION
Joyce E. Many and Ruchi Bhatnagar
Georgia State University

The field of teacher education has long relied on locally-developed assessments to evaluate preservice teachers’ ability to teach and to gather data for
program improvement. Such measures, however, have come under increas-
ing levels of critique from public stakeholders and from some teacher edu-
cators for lacking both reliability and validity (Castle & Shaklee, 2006; Gross-
man, Hammerness, McDonald, & Ronfeldt, 2008). In response to concerns,
rigorous performance-based assessments for preservice teachers have been
advanced as one possible way to ensure that all students receive instruction
from a high-quality teacher (Darling-Hammond, 2010). Towards that end,
both state and national teacher performance assessments, focusing on the
application of knowledge of teaching and learning in a classroom setting,
have been developed (Wei & Pecheone, 2010). As these assessments have
been implemented, important issues have been raised by teacher educa-
tors and others with respect to (a) whether such performance assessments
should be used for high-stakes purposes in relation to certification or for
program approval, (b) the degree to which the voices of teacher educators
have played a role in policy development or implementation decisions, and
(c) perceptions of the impact of the use of such assessments on the learning
and development of preservice teachers and program improvement.


Our book explores factors related to the implementation of teacher performance assessments in varying state and institutional contexts. The
contributors, teacher educators from across the country, focus on what was
learned from inquiries conducted using diverse methodologies (quantita-
tive, qualitative, self-studies, and mixed methods). Their research encom-
passed faculty members’, supervisors’, cooperating teachers’, and students’ perceptions of and concerns about teacher performance assessments, case studies of curricu-
lar reform and/or resistance, analyses of experiences and needs as a result
of the adoption of such assessments, and examinations of the results of
program alignment and reform. The chapters showcase experiences which
occurred during high-stakes situations, in implementation periods prior
to high-stakes adoption, and in contexts where programs adopted perfor-
mance assessments as an institutional policy rather than as a result of a
state-wide mandate.
In Chapter 1, “The Evolution of Teacher Performance Assessments As
a Measure of Accountability,” Carla Tanguay provides a description of the
historical and socio-political context which led to the emergence of teacher
performance assessments. Next, she leads the reader through a delinea-
tion of the development, content, and use of specific portfolio-based assess-
ments in teacher education, including the Performance Assessment of Cali-
fornia Teachers (PACT), the edTPA®, the Praxis Performance Assessment
of Teachers (PPAT), and others. Finally, she presents tensions surrounding
the use of teacher performance assessments including (a) whether pro-
motion of performance assessments leads to a common language for the
teacher education field or a narrowing of the curriculum, (b) alignments
or tensions between assessments with program identities or faculty perspec-
tives on teaching and learning, and (c)  the degree to which cultures of
inquiry, compliance, or resistance develop as faculty are pressured to use
data to drive decision-making.
The notion of how a culture within faculty groups can be positively im-
pacted as individuals work together to implement teacher performance as-
sessments is explored in Chapter 2, “From Isolation to a Community of
Practice: Redefining the Relationship of Faculty and Adjunct University
Supervisors During the Implementation of edTPA.” Using a phenomeno-
logical approach, Sharilyn Steadman and Ellen Dobson examine how the
creation of new structures and communication pathways instituted for the
implementation of edTPA led to a dissolution of the hegemonic relation-
ships that had become the norm between adjunct supervisors and tenured
and tenure-track faculty. Educators from both roles moved from working
in isolation to functioning as a cohesive group as they strove to unpack,
understand, and implement the new performance assessment. In contrast
to the setting in Chapter 2, where the authors’ institution made the deci-
sion to move to a performance-based assessment without a state mandate,
the story told in Chapter 3, “Faculty Investment in Student Success: A Four-Year Investigation of edTPA Implementation,” occurs in a context where
state policy changes resulted in the adoption of edTPA as a high-stakes as-
sessment with a very short timeline for implementation. Grounding their
work in the Concerns-Based Adoption Model (CBAM; Hall & Hord, 2015),
Gaoyin Qian, Harriet Fayne, and Leslie Lieman focus on how college fac-
ulty and their administrative leadership team worked through phases of
(a) preparation, (b) exploration, and (c) acculturation in response to the
externally mandated performance assessment. They use the term covert
leadership (Mintzberg, 1998) to emphasize the importance of adopting an
administrative approach which motivates, supports, and coordinates faculty
activities rather than handling the implementation process through institu-
tional directives. Qian and colleagues draw on the results of three studies
conducted over four years to understand how the dimensions of the profes-
sional learning community they developed aligned with faculty’s efforts to
move from teacher-centered to student-centered concerns and to engage
in implementation activities which supported students’ performance.
The importance of establishing a professional learning community ap-
proach is further delineated by Holley Roberts in Chapter 4, “Mandates
Revisited: One Coordinator’s Story of Cultivating Collegiality and Inquiry
Through a Professional Learning Community.” Having been thrust into
a leadership role as a newly minted edTPA coordinator, Roberts shares a
personal narrative of the processes she and her faculty went through to
respond to her state’s reform efforts encouraged by adoption of edTPA as
a certification requirement. Her chapter details the progression which oc-
curred as faculty moved first from focusing on personal, philosophical, and
political stances toward the mandate, to then becoming concerned about
logistics surrounding implementation, and finally to collectively interpret-
ing data and sharing experiences. She positions her action research as a
way to understand how promoting an intellectual inquiry stance in a profes-
sional learning community can be an effective response to implementing
mandates.
In Chapter 5, “The Power of Supports to Improve edTPA Outcomes,”
Kathleen Fabrikant, Cynthia Bolton, Cindy York, and Angie Hodge turn
the focus to how faculty grappled with developing and sustaining effective
scaffolds that support candidates’ ability to be successful on their state’s
high-stakes performance assessment. Looking back over time, they describe
the range of supports which were developed at their institution and ex-
plain how they used data from faculty, students, and cooperating teachers
to analyze their effectiveness. Their chapter addresses their growing aware-
ness of the value of program-level foundational supports which provide
the content-specific mentoring that college-level supports can lack and the
x   Introduction

difficulty of sustaining scaffolds that require extensive out-of-course sup-


port by content-faculty experts.
The development of candidate expertise in one particular content area,
elementary mathematics, is the emphasis in Chapter 6, “An Exploration of
Elementary Teacher Candidates’ Preparation for edTPA Mathematics As-
sessment Task 4.” The authors, Susan Swars Auslander, Stephanie Z. Smith,
and Marvin E. Smith, work with preservice elementary teachers to develop
high-level teaching practices which use children’s cognition to guide in-
struction. The authors begin by introducing their two-course, elementary
mathematics methods sequence and the elements related to edTPA Math
Task 4 embedded in those courses to help guide candidates’ attention to
children’s conceptual understanding, procedural fluency, and problem
solving and reasoning. Next, they discuss the results from a mixed-methods
study designed to investigate the extent to which course experiences led
to a change in preservice teachers’ mathematical beliefs, and participants’
perspectives on engaging in an edTPA Math Task 4 while in the courses. In
contrast to the authors’ research in years prior to the inclusion of edTPA
Task 4 (Smith, Swars, Smith, Hart, & Haardoerfer, 2012; Swars, Hart, Smith,
Smith, & Tolar, 2007; Swars, Smith, Smith, & Hart, 2009; Swars, Smith,
Smith, Carothers, & Myers, 2016), the participants in this study did not
demonstrate changes in their mathematical beliefs. The authors draw on
their qualitative data to explore how struggles and issues related to comple-
tion of the edTPA may have filtered their learning and expected changes
in beliefs.
Understanding the role performance assessments may play in shaping
candidates’ learning and reflection is the subject of the next two chapters in
this volume as well. In Chapter 7, “Not Just For Preservice Teachers: edTPA
as a Tool for Practicing Teachers and Induction Support,” John Seelke and
Xiaoyang Gong turn their attention to the potential of the performance
assessments to extend beyond being a summative evaluation to serve as a
bridge to supporting novice teachers during induction. At their institution,
which is not mandated to use edTPA, K–12 partners have historically been
involved in local evaluation of their candidates’ portfolios. Research on
alumni, mentoring teachers, and K–12 colleagues involved in local scoring indicates (a) edTPA promotes critical reflection, (b) divergent views on whether respondents see edTPA as connecting to teaching practices in schools, (c) alignment of edTPA with core pedagogical practices, (d) challenges in implementing edTPA due to school environments, and (e) beliefs that even practicing teachers benefit from edTPA.
The potential impact of performance assessments on learning and de-
velopment of educators continues in Chapter 8, “Forcing Me to Reflect:
Preservice and Novice Teachers’ Reflective Thinking in Varied School Con-
texts.” Dianna Terrell, Kathy McCurdy, Megan Birch, Tom Schram, and
Page Tompkins focus their attention on the effect of a different performance assessment, the New Hampshire Teacher Candidate Assessment of Performance (NH-TCAP), on teachers’ pedagogical reflection. The authors
argue that reflective thinking, which is a requirement in performance as-
sessments, needs careful operationalization and research to understand its
transferability and effects on classroom practice. Using Larrivee’s (2008) typology of reflection, Terrell and colleagues analyze the types of reflection
evident both (a) in the teacher-candidates’ NH-TCAP reflective thinking
tasks and (b) in their teaching performance after they were hired in their
initial teaching positions. Case studies of first-year teachers who graduated
from different programs are used to illustrate commonalities and differ-
ences in patterns of reflections and how these may have been shaped by the
nature of reflections called for by the NH-TCAP. Finally, the authors use
document analysis of the NH-TCAP to substantiate their contention that
the assessment promotes both surface and pedagogical reflection but that
there is little explicit prompting for critical reflection on social or systemic
factors that might impact student learning.
Like the performance assessment developed for state-specific use that is
highlighted in the previous chapter, Chapter 9, “State Education Agency Use
of Teacher Candidate Performance Assessments: An Overview and a Case Study of a Statewide Portfolio-Based Assessment System in Kansas,” addresses
the development of a performance-based assessment for Kansas. Authors Ste-
phen Meyer, Emma Espel, and Nikkolas Nelson describe the creation and
use of the Kansas Performance Teaching Portfolio (KPTP) including its his-
tory as an assessment for practicing teachers and the assessment’s evolution
into an evaluation used for initial teacher preparation program completion.
In addition to detailing the content and structure of the assessment, the au-
thors explore a data analysis tool developed to aid KPTP implementation and
improvement. Reflections on lessons learned are offered to other agencies
embarking on development of statewide performance assessments, includ-
ing ways to leverage partnerships after development and implementation as
thoughts turn to systemic improvement and effective data use.
Chapter 10, “Using the Concerns Based Adoption Model (CBAM) As a
Framework to Understand and Support edTPA Coordinators and Faculty
During the Implementation Process,” continues in the vein of understand-
ing and learning from state-wide implementation of teacher performance
assessments. Joyce Many, Shaneeka Favors-Welch, Karen Kurz, Tamra Ogle-
tree, and Clarice Thomas are members of Georgia’s Teacher Education Re-
search Consortium. Drawing on two state-wide studies, the authors illustrate
how the CBAM (Hall & Hord, 2015) can be used to understand faculty and
edTPA coordinators’ concerns about implementing a performance assess-
ment such as edTPA and how these concerns may relate to the degree to
which they integrate edTPA activities. Their chapter underscores the need
xii   Introduction

to recognize the complexity of the changes called for when responding to


implementation of a performance assessment as a high-stakes assessment
and how to support faculty engagement during the change process.
As illustrated by the work of the contributors to this book, in the last
decade the field of teacher education has experienced a surge in the use
of teacher performance assessments. The movement has resulted in both significant curricular reform and professional development and also immense anxiety, angst, and concern over the impact on teacher-education programs and their content, the challenges of implementation, and the perceived loss of academic freedom for teacher educators. As the chap-
ters outline, implementation of such assessments, whether state developed
or available from a national provider, is extremely complex for the stake-
holders involved. The stories shared in these 10 chapters provide the read-
ers an in-depth account of the range of issues teacher educators encounter
while trying to implement preservice teacher performance assessments.
While the content of these performance assessments and the stakes associ-
ated with the implementation varied across the contexts, the issue of choice
and the inclusion of voice emerged as key variables shaping the success of
implementation and the extent of faculty engagement and willingness to
adopt change. These factors were foundational to the following implica-
tions of the research described within these chapters:

• The importance of providing teacher-education faculty opportunities for open discussions, for sharing concerns, and for taking an
active role in decisions about the changes needed in their curricula,
assessments, and program design.
• The benefits of shared leadership in institutions and the creation
of professional learning communities to overcome resistance and
increase collaboration.
• The opportunity to create new collaborative relationships between
faculty and supervisors, challenging the old hegemonic power status.
• The potential of involving P–12 partners and connecting the preser-
vice performance assessments to induction.
• The opportunity to analyze the merit of assessments by reflecting on
preservice teachers’ data to evaluate whether the assessment elicits
reflection in areas such as social justice and equity, and whether the
assessment is educative and impacts beliefs of preservice teachers in
significant ways.

For readers involved in the stages of initial implementation in response to adoption of teacher performance assessments, these issues are worthy
of careful attention. By providing a research-based understanding of how
performance assessments may affect preservice-teacher learning, program
improvement, and faculty motivation, our intent is to support the field in considering how educators may thoughtfully balance state-wide policies,
teacher accountability, and program values as they work to implement assess-
ments that provide meaningful data for program improvement and teacher
development.

REFERENCES

Castle, S., & Shaklee, B.D. (2006). Assessing teacher performance: Performance-based as-
sessment in teacher education. Lanham, MD: Rowman & Littlefield Education.
Darling-Hammond, L. (2010). Evaluating teacher effectiveness: How teacher performance
assessments can measure and improve teaching. Washington, DC: Center for
American Progress.
Grossman, P., Hammerness, K., McDonald, M., & Ronfeldt, M. (2008). Construct-
ingcoherence: Structural predictors of perceptions of coherence in NYC
teacher education programs. Journal of Teacher Education, 59(4), 273–287.
Hall, G. E., & Hord, S. M. (2015). Implementing change: Patterns, principles, and pot-
holes (4th ed.). Upper Saddle River, NJ: Pearson.
Larrivee, B. (2008). Development of a tool to assess teachers’ level of reflective prac-
tice. Reflective Practice, 9(3), 341–360.
Mintzberg, H. (1998, November-December). On managing professionals. Harvard
Business Review, 141–147.
Smith, M. E., Swars, S. L., Smith, S. Z., Hart, L. C., & Haardoerfer, R. (2012). Ef-
fects of an additional mathematics content courses on elementary teachers’
mathematical beliefs and knowledge for teaching. Action in Teacher Education,
4, 336–348.
Swars, S. L., Hart, L., Smith, S. Z., Smith, M, & Tolar, T. (2007). A longitudinal study
of elementary preservice teachers’ mathematics beliefs and content knowl-
edge. School Science and Mathematics, 107(9), 325–335.
Swars, S. L., Smith, S. Z., Smith, M. E., Carothers, J., & Myers, K. (2016). The
preparation experiences of Elementary Mathematics Specialists: Examining
influences on beliefs, content knowledge, and teaching practices. Journal of
Mathematics Teacher Education. Advance online publication. doi: 10.1007/
s10857-016-9354-y
Swars, S. L., Smith, S. Z., Smith, M. E., & Hart, L. C. (2009). A longitudinal study
of effects of a developmental teacher preparation program on elementary
prospective teachers’ mathematics beliefs. Journal of Mathematics Teacher Edu-
cation, 12(1), 47–66.
Wei, R.C., & Pecheone, R.L. (2010). Assessment for learning in preservice teacher
education: Performance-based assessments. In M Kennedy (Ed.), Teacher as-
sessment and the quest for teacher quality: A handbook (pp. 69–132). San Fran-
cisco, CA: Jossey Bass.
CHAPTER 1

THE EVOLUTION
OF TEACHER PERFORMANCE
ASSESSMENTS AS A MEASURE
OF ACCOUNTABILITY
Carla L. Tanguay
Georgia State University

Teacher performance assessments (TPAs) were developed by teacher educators in response to historical, social, economic, and political influ-
ences that have shaped the public’s perception and policy on education
in the United States (Cochran-Smith, Villegas, Abrams, Chavez-Moreno, &
Mills, 2016). Education stakeholders (i.e., the public, policy makers, think
tanks, educational funders, and even teacher educators) are persuaded by
U.S. economic and political imperatives to maintain strength domestically
and internationally, as measured by student achievement scores on the Na-
tional Assessment of Educational Progress (NAEP) and the Programme for
International Student Assessment (PISA; Ravitch, 2013; Wilson & Tamir,
2008). Some stakeholders question the integrity of the U.S. educational
system and blame teachers and the programs who prepared them, as gaps
have persisted among historically marginalized groups of learners despite
other potential systemic factors (Kumashiro, 2012; Ravitch, 2013; Wilson &
Tamir, 2008). Additionally, the public’s attention has been drawn to histori-
cal events, such as Sputnik and the space race in 1957, and to alarming re-
ports, such as A Nation at Risk in 1983 (National Commission on Excellence
in Education, 1983), escalating their concerns about the nation’s competi-
tiveness and its educational health in a global society (Ravitch, 2013). More
recently, politically charged reformers aiming to privatize education have further contributed to public concern by sending messages proposing competition via school choice and school vouchers, thereby jeopardizing public education, a move critics call a “civil rights issue” and a threat to U.S. democracy (Ravitch, 2013, p. 325; Zeichner, Payne, & Brako, 2015; Zeichner & Pena-Sandoval, 2015).
Thus, education reform initiatives have aimed at the improvement of
K–12 schools and their teachers and have emphasized, most recently, standards-based curricula and standardized assessments (Cochran-Smith et
al., 2016; Delandshere & Arens, 2001). Simultaneously, teacher educators
have faced pressure for increased transparency and accountability focused
on teacher preparation in a rapidly changing 21st century global economy
(Cochran-Smith et al., 2016; Darling-Hammond, Wei, & Johnson, 2009; De-
landshere & Petrosky, 2004; Newton, 2010). Proponents advocating for a
professionalized teaching force have developed teacher performance as-
sessments, arguing for the use of a valid and reliable measure of teacher
effectiveness and an authentic yet standardized way to assess teacher can-
didate readiness for teaching that may be beneficial for program renewal
(Darling-Hammond, 2010; Mehta & Doctor, 2013; Peck, Singer-Gabella,
Sloan, & Lin, 2014; Wei & Pecheone, 2010). However, when TPAs have
been used for program completion, certification/approval, accreditation,
and/or graduation, the initiative has been met with responses from some
educators who do not view the efforts as promoting reform but rather as a
threat to their autonomy over their profession (Allington, 2005; Wilson &
Tamir, 2008; Zeichner et al., 2015). Some teacher educators have recom-
mended caution regarding the use of teacher performance assessments in
high-stakes contexts, in non-educative ways (Whittaker & Nelson, 2013),
which lead to consequences for teacher candidates (Bunch, Aguirre, &
Tellez, 2009; Chung, 2008; Lit & Lotan, 2013; Meuwissen & Choppin, 2015;
Meuwissen, Choppin, Shang-Butler, & Cloonan, 2015; Okhremtchouk et
al., 2009) and their teacher preparation programs (Cochran-Smith, Piazza,
& Power, 2013; Wei & Pecheone, 2010). In this chapter, I will discuss stake-
holders’ evolving conceptualizations of teacher quality and teaching effec-
tiveness, their shifting conceptions of teaching and learning, and the sub-
sequent development and use of various teacher performance assessments.

TEACHER QUALITY AND TEACHING EFFECTIVENESS

Teacher education is a historically-situated social practice, and the beliefs, attitudes, and actions of teacher educators are shaped by personal, pro-
grammatic, and institutional contexts (Cochran-Smith et al., 2016). As a
result, it is critical to understand how educators’ and policy-makers’ defini-
tions of “teacher quality” and “teaching quality” and/or teaching effec-
tiveness have been conceptualized (Knight et al., 2015, p. 105) and have
resulted in the popularity of teacher performance assessments to measure
teacher-candidate readiness for teaching in a diverse 21st century, knowl-
edge society (Cochran-Smith et al., 2016; Darling-Hammond, 2010; De-
landshere & Petrosky, 2004; Zeichner, 2014). I will address stakeholders’
concerns for teacher quality, considering teachers’ varying characteristics (i.e., inputs) and their influence on student achievement, as well as teaching quality, or teachers’ behaviors and performance (i.e., outputs), and their effect on student achievement as measured by standardized achievement tests.
Furthermore, I will account for federal legislation aimed at teacher quality
and teaching effectiveness. Additionally, I will address teacher educators’
approaches to preparing teacher candidates in the knowledge society and
global economy. These areas will be considered as they address prior mea-
sures of teacher quality and effectiveness leading up to the current use of
teacher performance assessments as a measure of accountability.

Teacher Quality/Characteristics

With increasing public awareness of student achievement gaps and concern for the educational health of the nation, stakeholders considered the
impact of teacher quality, as indicated by varying characteristics, on student
achievement. Researchers focused on the knowledge of the teacher, privileg-
ing teacher candidates’ content knowledge and verbal ability as measured on
standardized admissions achievement tests (i.e., SAT, ACT, GRE), their edu-
cational backgrounds (i.e., GPA, majors), certification exams used to mea-
sure their preparation, and the reputation of their degree seeking institutions
(Knight et al., 2015; Wilson & Tamir, 2008; Wilson & Youngs, 2005; Zumwalt
& Craig, 2008). Zumwalt and Craig (2008) indicated that some correlational
research shows a relationship between teachers’ verbal ability and student
achievement; however, it does not account for the relationships between
verbal ability and other characteristics of teacher quality. Considering other
teacher characteristics, such as the comparison of content knowledge majors
to education majors, the research is scant, except in mathematics, where
a correlation is noted between teachers’ mathematics content knowledge
and high-school students’ achievement in math (Floden & Meniketti, 2005;
Zumwalt & Craig, 2008). In another study, researchers noted greater gains in student learning in math and English language arts for students taught by graduates of university-based programs than for those taught by Teach for America and New York City Teaching Fellows (NYCTF) teachers; although the university-based programs’ graduates were initially more effective, the latter programs’ teachers caught up (Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2006).
mond, Chung, and Frelow (2002) discussed concerns about policies lowering
standards for entry into the profession and noted that retention and sense
of preparedness/efficacy outcomes were significantly higher for university-
based programs than TFA, the Peace Corps, and Teacher Opportunity Corps;
however, there was variation in the university-based programs. Considering
research on the relationship of teacher characteristics in relation to student
achievement has been an area long seen as pertinent to policy discussions on
teacher preparation and the requirements that states establish to determine
teaching readiness (Wayne & Youngs, 2003).

Teaching Quality/Performance and Effectiveness

In addition to an emphasis on teacher characteristics, as a way of considering teacher inputs, policy-makers have also been interested in raising
student achievement scores, or outputs, on standardized tests as measured
by teachers’ behaviors and/or classroom performance (Brophy & Good,
1986; Knight et al., 2015). In alignment with the rising performance-based
assessment movement beginning in the 1980s, the Carnegie Task Force on Teaching as a Profession and The Holmes Group (1986) were established and emphasized teaching quality as reflected in improved student achievement (Carnegie Forum on Education and the Economy, 1986; Lewis & Young, 2013; Tucker & Mandel, 1986). In 1987, an organization called the National Board for Professional Teaching Standards was established to address concerns about teaching quality; subsequently, the board developed standards in 1994 to measure teacher performance on the job and tied them to national certification for accomplished teaching (Darling-Hammond,
2010; Darling-Hammond et al., 2009; Sato, 2014). Additionally, the National
Research Council insisted on a wider range of authentic assessment mea-
sures to evaluate preservice teachers beyond standard admissions criteria
and licensure content-based exams, which had not been shown to predict
future teacher effectiveness (Darling-Hammond & Snyder, 2000; Mitchell,
Robinson, Plake, & Knowles, 2001). Although the research evidence on ac-
creditation, teacher testing, and certification is inconclusive on improving
student achievement (Wilson & Youngs, 2005), educators recognize the dif-
ficulty with causal research and note that lack of research evidence does not
indicate that relationships do not exist (Zumwalt & Craig, 2008).

Federal Legislation Aimed at Teacher Quality and Teaching Effectiveness

Conceptions of teacher quality and teaching effectiveness are acknowledged in federal legislation. Considering teacher quality, the No Child Left
Behind Act of 2001 required states to assess teacher quality measured by
verbal ability and content knowledge (U.S. Department of Education,
2002). Reflecting changing perspectives, legislation continued in 2009 with Race to the Top state funding, which demanded indicators of teaching effectiveness tied to teacher evaluation systems and value-added measures based on student achievement scores (U.S. Department of Education, 2009). Because states received federal funding (e.g., under the No Child Left Behind Act of 2001 and Race to the Top in 2009), they were required to demonstrate accountability by making transparent to the public their teacher preparation program graduates’ assessed effectiveness on student learning. Additionally, states were required to embed the InTASC
Model Core Teaching Standards and Learning Progressions for Teachers
1.0 (2013), developed by the Council of Chief State School Officers and
adopted by the new accrediting body, the Council for the Accreditation of
Educator Preparation (CAEP). The standards were created to ensure learn-
ers were college and career ready and teacher education programs demon-
strated alignment and accountability. State agencies created their own stan-
dards, aligned to national standards, to maintain state control. Therefore,
teacher preparation programs focused on developing teacher candidates’
knowledge, performance, and dispositions as aligned to professional teach-
ing standards and focused on student learning (Berliner, 2005; Knight et
al., 2015). Differing from the performance-based movement of the 1980s, which focused on a generic set of standards with little consideration for content and context, the new standards specified the criticality of subject-specific pedagogy and emphasized that teachers must know their learners within their contexts for learning (Wayne & Youngs, 2003).

Teaching in the Knowledge Society and Global Economy

In addition to legislation requiring teacher-education program alignment to new state and national standards, teacher educators defined teaching effectiveness as the ability to prepare teacher candidates in the
knowledge society and for work with diverse learners in a global econo-
my (Cochran-Smith et al., 2016). They shifted their conceptions of how people learn, from the transmission methods that dominated in the years prior to the 1980s–1990s, to constructivist approaches to teaching and learning
representative of work in a technologically advancing knowledge society.


Rather than a view of teaching as the transmission of knowledge where
knowledge resides in the teacher and is deposited in the learner as ob-
served in the industrial age, teacher educators acknowledged cognitive and
social constructivist approaches to teaching and learning (Cochran-Smith
& Fries, 2005, 2008; Cochran-Smith et al., 2016). Considering cognitive-
constructivist approaches, teacher educators held views of knowledge
as residing in the learner (Gonzales, Moll, & Amanti, 2006; Richardson,
2003; Vygotsky, 1978) and social-constructivist approaches where learners
co-construct knowledge in social activities with an influential other and in
groups, such as learning communities (Cochran-Smith & Fries, 2005, 2008;
Hargreaves & Fullan, 2013; Lave & Wenger, 1991; Richardson, 2003; Vy-
gotsky, 1978). Recognizing constructivist principles of teaching and learn-
ing, teacher educators prioritized the preparation of candidates with deep
subject-specific content knowledge and pedagogy for engaging learners in
the construction of knowledge through meaningful and technologically-
advanced learning opportunities (Cochran-Smith et al., 2016; Knight et al., 2015). Other educators emphasized teaching effectiveness based upon the teacher’s ability to use “high-leverage practices” (Ball & Forzani, 2009; Ball & Forzani, 2011, p. 21; Lampert et al., 2013), identifying higher order thinking, collaboration, and pedagogical content knowledge (PCK) as critical for student learning and growth rather than as means to an end on a standardized measure (Knight et al., 2015).
Paying attention to teacher attrition in schools serving underrepresented populations, which are most often taught by White, female, and the least experienced teachers (Ingersoll & May, 2011), teacher educators considered ways to prepare teacher candidates for a diverse society (Gay, 2000;
Irvine, 2003; Ladson-Billings, 1999; Sleeter, 2001; Zumwalt & Craig, 2008).
The educators acknowledged that knowledge should not privilege Whites
and recognized the need for more teachers of color in a demographically
changing United States. Additionally, they emphasized the critical need
for teachers to provide equitable and accessible opportunities for all their
learners, while valuing their assets and building on their prior academic
knowledge and skills; personal, family, and community backgrounds; and
linguistic and cultural experiences (Ball & Tyson, 2011; Cochran-Smith et al., 2016; Gonzales et al., 2006).

Responding to Accountability Issues and Public Perception

Teacher educators have been criticized for creating program assessments lacking validity and reliability in measuring teacher-candidate effectiveness associated with student learning and in demonstrating proficiency in providing a shared language of practice regarding what teacher candidates
should know and be able to do (Darling-Hammond, 2010; Darling-Ham-
mond & Bransford, 2005). In a study designed to address that need, Henry
et al. (2013) sought to examine relationships between teacher-candidate ef-
fectiveness and student learning. They analyzed their own program assess-
ments to examine teacher candidates’ progress and performance to see if
these measures predicted student learning in their graduates’ classrooms.
They found that their program assessments (i.e., student teaching evalu-
ations, summative portfolios, and dispositions) did not measure multiple
constructs, as intended to inform the teacher education program, but
instead provided a global rating. Furthermore, none of the instruments
produced measures that predicted candidates’ later effectiveness on stu-
dent achievement in reading or math (Henry et al., 2013). Educators rec-
ognize that strong assessments and research tying the preparation of teacher candidates to the achievement of their future learners are needed in the profession (Grossman, 2008). Teacher educators have suggested that assessments with construct validity, content validity, interrater reliability, and predictive validity, as now required by the new accrediting body, CAEP (2013), may serve as a means for taking back the profession (Darling-Hammond, 2010; Mehta & Doctor, 2013; Peck et al., 2014; Wei & Pecheone, 2010).

THE DEVELOPMENT, DESIGN, AND INTENT OF TEACHER PERFORMANCE ASSESSMENTS

Under tremendous pressure for accountability and more authentic assessment of educators entering the profession, teacher educators across the
country have shifted toward an exploration of standardized teacher per-
formance assessments (Darling-Hammond, 2010; Sato, 2014). Teacher per-
formance assessments are designed to measure teacher candidates’ PCK
and are often used in addition to content-knowledge tests (e.g., Praxis,
GACE; Darling-Hammond, 2010). In some states, institutions are exploring
TPAs, using locally developed evaluation measures (e.g., CalTPA, PACT),
while other states have mandated the incorporation of a standardized TPA
(e.g., edTPA®, PPAT, NH-TCAP, KPTP) in teacher preparation programs
and require external evaluation of the portfolios for teacher credentialing.
While the intent to use TPAs is to professionalize the teaching force and im-
prove teacher preparation, teacher educators have mixed feelings on their
use as standardized measures to evaluate teachers, teacher preparation pro-
grams, and their candidates (Kornfeld, Grady, Marker, & Ruddell, 2007; Lit
& Lotan, 2013; Meuwissen & Choppin, 2015; Peck, Gallucci, & Sloan, 2010;
Peck & McDonald, 2013; Sloan, 2015).
In the following section, I will provide an overview of commonly used
teacher performance assessments: first the Performance Assessment of California Teachers (PACT), the precursor to the first nationally available TPA, the edTPA; then the Praxis Performance Assessment of Teachers (PPAT), another nationally available TPA. Next, I will provide a
brief overview of performance assessments which have been developed in
other states, including the New Hampshire Teacher Candidate Assessment
of Performance (NH-TCAP) and the Kansas Performance Teaching Portfo-
lio (KPTP), which are the subject of two of the chapters in this book. Com-
parisons of the PACT, edTPA, PPAT, NH-TCAP, and KPTP can be found in
the Appendix at the end of this chapter.

The Performance Assessment of California Teachers (PACT)

California universities developed some of the first standardized TPAs, which led to the development of nationally available teacher performance
assessments. The California Teacher Performance Assessment (CalTPA;
Hafner & Maxie, 2006), thePACT (Guaglianone, Payne, Kinsey, & Chiero,
2009; Pecheone & Chung 2006), and the Fresno Assessment of Student
Teachers (FAST; Torgerson, Macy, Beare, & Tanner, 2009) are three op-
tions used by institutions of higher education in California.
Led by faculty and staff at the Stanford Center for Assessment, Learning
and Equity (SCALE), the PACT Consortium, founded in 2001 and comprised of 12 institutions, developed the PACT, which became the precursor to other
state teacher performance assessments and to other TPAs available nation-
ally (e.g., edTPA; Pecheone & Chung Wei, 2007). The PACT was piloted
in 2002 and went through a revision period in 2003–2004. Required by SB
1209 in 2006, it was approved in 2007 by the California Commission for
Teacher Credentialing, and statewide policy for TPAs became consequen-
tial in 2008. The author and owner of the PACT continues to be the PACT Consortium, supported by SCALE, and the assessment has been used by more than 30 institution of higher education programs in California. The
PACT is a subject-specific pedagogy portfolio covering 17 credential areas
and focused on a teaching event (TE; Pecheone & Chung Wei, 2007).
The portfolio is comprised of five components: (a) planning, (b) instruct-
ing, (c) assessment of student learning, (d) reflection, and (e) academic lan-
guage. Evaluators use three rubrics per task for scoring. Data from the TE is used as one source of evidence for program completion and recommendation for the initial state license in California. Elementary candidates
complete the TE in either literacy or mathematics, and three additional con-
tent area tasks (developed by their program) in other core content areas
not assessed by the TE: literacy or mathematics, history-social-science, and
science (Pecheone & Chung Wei, 2007). The PACT creators have worked on an
optional or formative feature called the embedded signature assignments
(ESAs). The ESAs were envisioned as campus-specific tasks with rubrics de-
signed for formalized scoring. Examples of ESAs include community studies,
child case study, observation of classroom management, and a curriculum
unit (Pecheone & Chung Wei, 2007).
Institutionally trained scorers score the PACT. All PACT TEs are locally
scored by trained PACT consortium scorers as part of official scoring and
reporting. Considering the reliability in scoring, scorers spend two days
training in a subject-specific area of expertise. Institutions of higher educa-
tion programs send a lead trainer or collaborate with one another to score.
Scorers who fail to calibrate must be retrained and pass to score. TEs that
do not meet the passing standard or that are just above the passing stan-
dard (i.e., have at least one score of 1) are double-scored. A random sample
of 10% of the remaining TEs are double-scored to check reliability.
Considering the validity of PACT, developers established content validity
by identifying a strong linkage between the TE and the California Teach-
ing Performance Expectations and skills determining readiness to practice.
Construct validity was established in their comparison of the structure of the
guiding question items to the results of a factor analysis. They addressed bias and fairness by examining the differences in the scores of different demographic groups. Criterion-related concurrent validity was established by comparing raters’ holistic ratings of candidates’ performance
to their pass/fail rates (Pecheone & Chung Wei, 2007).

edTPA

edTPA was the first nationally available TPA (SCALE, 2016a). edTPA
was developed by the Stanford Center for Assessment, Learning and Equity (SCALE) in partnership with the American Association of Colleges for Teacher Education (SCALE, 2016a). With the rise of standardization and
reform efforts in teacher education, implementation of the edTPA has be-
come a growing trend for measuring preservice teacher performance and
for various purposes (e.g., program completion/graduation, certification,
program approval, and/or accreditation).
edTPA is a subject-specific, teacher performance-based assessment cre-
ated by educators and owned by Stanford University (SCALE, 2016a).

SCALE formally launched the edTPA in 2013, following two years of field
testing with 12,000 candidates across 450 institutions of higher education
and 29 states (SCALE, 2016a). After the second year of national implemen-
tation, SCALE tested the assessment with 27,000 candidates across 700+
teacher education programs in 38 states (Pecheone, Whittaker, & Klesch,
2016; SCALE, 2016b). edTPA’s structural design incorporates 80% general
pedagogy (i.e., planning, teaching, and assessing) and 20% subject-specific
pedagogy constructs across 27 content areas aligned to national organiza-
tion standards, such as the National Council for Teachers of Mathematics
and InTASC Model Core Teaching Standards and Learning Progressions
(Pecheone et al., 2016).
Teacher educators designed edTPA to include three subject-specific
tasks aimed at student learning and principles from research and theory:
(a) planning for instruction and assessment, (b) instructing and engaging
students in learning, and (c) assessing students’ learning. The edTPA Ele-
mentary Education Handbook includes three tasks focused on literacy learning
and a fourth task, “Assessing Students’ Mathematics Learning” (Stanford
Center for Assessment, Learning and Equity, 2017b, p. 43). edTPA empha-
sizes a 3–5 day cycle of teaching focused on student learning, embedding
academic language components and opportunities for teacher candidates
to justify their planning decisions, analyze their teaching effectiveness, and
use data to inform instruction (Stanford Center for Assessment, Learning
and Equity, 2017a). The teacher candidate engages in analytical thinking
and reflection in a response to all edTPA tasks including authentic artifacts
consisting of lesson plans, student work samples, video-recorded evidence,
and reflective commentaries. Upon completing edTPA, teacher candidates
submit all tasks for external scoring. edTPA candidates are provided a flex-
ible submission schedule by Pearson, edTPA’s operational partner, to sub-
mit their portfolios the first time and during retakes (Stanford Center for
Assessment, Learning, and Equity, 2017d).
Considering edTPA scorer training and interrater reliability, edTPA scorers are required to complete 20+ hours of subject-specific training. Scorers are teacher
educators and P–12 teachers across the country. Scorers must demonstrate
consistent scoring on multiple portfolios before qualifying, and those who
fail to calibrate must be retrained and pass to score. Additionally, scorers are
monitored for scoring consistency. edTPA portfolios that do not meet the passing standard, or that fall within one standard error of measurement of the cut score, must be double scored. A random sample of 10% of the remaining edTPAs are also double
scored to check reliability. A nationally recommended professional passing
standard (i.e., score of 42) was established by a national panel of educators
and policy makers in 2013. The panel recommended a “ramping up” time-
line for states and individual state standard setting using the Haertel and Lorié (2004) method (Pecheone, Whittaker, & Klesch, 2016).
Based upon several statistical tests assessing interrater reliability, edTPA scorers were shown to be in approximately 95% agreement following the
double scoring of 2,617 portfolios (Pecheone et al., 2016). Additionally,
edTPA sources of validity evidence are consistent with the Standards for Edu-
cational and Psychological Testing (2014) in measuring a teacher candidate’s
ability to plan, teach, and assess in subject-specific areas to determine their
readiness for the teaching profession (Pecheone et al., 2016). There is evi-
dence of predictive validity that edTPA constructs significantly predict first-
year teacher performance, as noted in research on a 2013–2014 cohort of
graduates from a University of North Carolina institution who were followed
into their first year of teaching (Bastian & Lys, 2016; Pecheone et al., 2016).
VAM scores, based on North Carolina teaching standards, were associated
with 7 of 15 edTPA rubrics. Specifically, the edTPA’s instruction construct
predicted significantly higher teacher ratings on 4 of 5 North Carolina teach-
ing standards, while the assessment construct predicted significantly higher
ratings on 2 of 5 teaching standards (Bastian & Lys, 2016; Pecheone et al.,
2016). Other supporting evidence of edTPA predictive validity was found by
Goldhaber, Cowan, and Theobald (2016), who indicated that passing scores on the edTPA were predictive of candidate employment and of their students’ achievement in literacy but not mathematics (Pecheone et al., 2016).
The development and design of edTPA were intended for educative purposes, with the assessment embedded in programs where learning is continuous and everyone learns. The assessment is intended to enhance program improvement
and curriculum renewal (Pecheone & Whittaker, 2016). Pecheone and
Whittaker (2016) highlight the importance of incorporating formative op-
portunities for candidates to engage in authentic experiences, using edTPA
materials, as well as teacher educators’ use of data from candidates’ edTPA
score profiles and work samples to identify strengths and needs for pro-
gram improvement. Although educators are provided support from SCALE
in collaboration with the American Association of Colleges for Teacher Education (AACTE) through an online network community, including numerous
shared resources, educators mandated to implement the assessment have
met challenges from the onset. From conceptually-framed reactions in the
field to its consequential use for certification and/or program completion,
edTPA has spurred both positive and negative responses from teacher edu-
cators, including from edTPA coordinators and faculty members who have
been charged with the facilitation and implementation of edTPA.

The Praxis Performance Assessment of Teachers (PPAT)

In comparison to the California PACT and edTPA, a content development team in conjunction with Educational Testing Services developed the PPAT,
which was approved in 2015 as a second nationally available TPA (Educational Testing Services, 2016a). The PPAT has been explored by educators in
17 states and is designed after the InTASC Model Core Teaching Standards.
This TPA requires that the teacher candidate create a standards-based portfo-
lio with embedded content rather than one that emphasizes subject-specific
pedagogy like the PACT and edTPA. The PPAT includes four tasks with the
first one scored locally and the subsequent tasks scored externally: (a) knowl-
edge of students and their learning environment, (b) assessment and data
collection to measure and inform student learning, (c) designing instruction
for student learning, and (d) implementing and analyzing instruction to pro-
mote student learning (Educational Testing Services, 2016b).
Regarding reliability in scoring, PPAT rater preparation includes practice sessions, and qualifying raters must pass a certification test verifying mastery of accurate scoring. The portfolios are double-scored. If there is a discrepancy between
the two scorers, the ETS team requires that a third scorer resolve the issue. In
terms of standard setting, ETS employs a Multistate Standard-Setting Study
process. For the PPAT, ETS convened educator preparation program (EPP) faculty and K–12 teachers to engage
in a panel discussion to establish criteria for standard setting based upon the
assessment quality indicators and levels of proficiency required for teaching
readiness (Reece & Tannebaum, 2015). Developers included an online li-
brary of PPAT examples for student/faculty use (Educational Testing Services,
2016b), while SCALE, in collaboration with AACTE, has provided local evalu-
ation materials and multiple resources for faculty use in an edTPA online re-
source library (Stanford Center for Assessment, Learning and Equity, 2017c).
Considering submissions for scoring and costs of the TPAs, the edTPA and
the PPAT are similar in cost; however, the submission and retake policies differ.
Unlike the process used for edTPA, PPAT candidates submit each task upon
completion using a designated window of dates for submission. PPAT candi-
dates also submit retakes throughout their experience, adhering to dates speci-
fied by the testing company. While PPAT candidates are given opportunities to
revise and resubmit as they complete the portfolio, more restrictive deadlines
may interfere with the experience (Educational Testing Services, 2016b).

Other Teacher Performance Assessments

Teacher educators in other states, such as Washington, Missouri, New Hampshire, and Kansas, worked with interested stakeholders to develop their
own state TPAs, incorporating constructs similar to the ones discussed in this
chapter. Educational Testing Services’ history includes support in the imple-
mentation of CalTPA, the 2009 Washington ProTeach, a large-scale portfo-
lio assessment for second tier teacher licensure as part of an induction pro-
gram, and the 2012 Missouri Performance Assessment for in-service teachers,
leaders, counselors, and librarians (Educational Testing Services, 2016a). In Chapter 8 of this book, Terrell and her colleagues will provide information
on the NH-TCAP which mirrors the design of PACT. In Chapter 9, Stephen
Meyer and his colleagues will describe the development and use of the KPTP.
You will find further details in the chart providing a comparison of all TPAs
discussed in this volume in the appendix of this chapter.

CONTROVERSIAL ISSUES SURROUNDING USE OF TEACHER PERFORMANCE ASSESSMENTS

In the following section, I will address tensions experienced by teacher educators as they come to understand the content and use of teacher performance assessments required as a measure of accountability. The following
issues associated with TPA’s, which have been considered in the literature
by teacher educators, scholars, and researchers, will be discussed: (a) the
promotion of a common language of practice or a narrowed curriculum,
(b) the alignment or misalignment with program identities and perspec-
tives on teaching and learning, and (c) the elevation of data-driven deci-
sion-making and cultures of inquiry or compliance at the expense of other
important initiatives.

Promotion of a Common Language of Practice or a Narrowed Curriculum

Teacher educators face challenges associated with high-stakes teacher performance assessments related to their philosophical and ideological beliefs and their program identities. Many have begun asking questions about the underlying conception of TPAs, such as edTPA, and their effectiveness in measuring teacher candidate readiness to teach (Sato, 2014). What is the underlying conception of good teaching, and what does it mean to be ready to teach? Who decides what will be the common language of practice in the teaching profession? Kornfeld et al. (2007), in a critical discourse analysis, aimed to understand the impact of standardization on their own discourse following new policies regarding use of TPAs in California. While the policy changes increased their awareness of standardization and the new standards provided a common language for talking about practice, their findings revealed that the use of standardized language narrowed faculty thinking about what they do (Kornfeld et al., 2007). While some educators believe that developing a common language for the teaching profession will increase accountability in teacher education and will measure teacher candidates' teaching readiness (Peck et al., 2014; Wei &
Pecheone, 2010; Whittaker & Nelson, 2013), other teacher educators be-
lieve that standardized assessments will discount other perspectives regard-
ing what may count as knowledge and may narrow the teacher education
curriculum (Dover & Schultz, 2016; Lachuk & Koellner, 2015; Lit & Lotan,
2013). Educators have also raised their concerns about increasing standard-
ization and the use of rubrics which may narrow the criteria for evaluating
teacher candidates (Gorlewski & Gorlewski, 2015). A disadvantage, however, of using homegrown or institutionally developed teacher education assessments is that such assessments may provide only a holistic measure or global rating and may lack the specificity of rubric-driven data, which can be more informative for program improvement and predictive of student achievement (Bastian & Lys, 2016; Henry et al., 2013; Pecheone & Whittaker, 2016). Pecheone and Whittaker provided case examples of teacher educators' experiences with teacher performance assessment, sharing how the educators benefited from their local assessments
and the use of edTPA to deepen their candidates’ knowledge of teaching
and learning. By embedding “educative” opportunities for teacher candi-
dates in their programs, “everyone who is engaged in this assessment pro-
cess learns something” (Pecheone & Whittaker, 2016, p. 11). Educators will
continue to negotiate these types of questions and concerns as they con-
sider using teacher performance assessments in teacher education.

Alignment or Misalignment With Program Identities and Perspectives on Teaching and Learning

Additionally, teacher educators experience tensions related to their beliefs on teacher development and their inherent responsibilities for the
growth and learning of their teacher candidates in an educative environ-
ment. They raise questions about the alignment of the edTPA and other
TPAs with their program identities, visions, and commitments to theoreti-
cal perspectives on teaching and learning (Sato, 2014). To resolve tensions,
teacher educators using any TPA must consider the assumptions embedded
in the assessment regarding what teachers and their students are expected
to know and learn to do—critical constructs in understanding its validity
(Sato, 2014). For example, as noted in the structure of edTPA handbooks,
student learning, within content-based instruction, is the central compo-
nent in the assessment of the teacher candidate. Learning in this context
is not based upon standardized student achievement scores but rather on stu-
dent outcomes aligned to the learning objectives established by the teacher
candidate (Sato, 2014). Additionally, edTPA assesses a candidate's ability to use research and theory to justify the development and coordination of learning activities that engage learners in experiences leading to higher levels of learning and equity, a service to the learner and an intended outcome of a profession (Sato, 2014). When teacher educators question
the alignment of the assessment with their commitments to specific theo-
retical perspectives, such as critical pedagogy and/or constructivist theory,
tensions surface (Gorlewski & Gorlewski, 2015). Understanding the under-
lying conception of a TPA, teacher educators may find clarification on how
specific theoretical approaches to teaching and learning may or may not
align to an assessment, such as the edTPA, and may serve to alleviate ten-
sions (Sato, 2014).
Furthermore, teacher educators raise theoretical concerns about the ed-
ucative intent of edTPA and other TPAs, as related to teacher development,
especially when they are used for high-stakes purposes. Does the use of a high-stakes performance assessment create anxiety and tension for teacher
candidates at the most vulnerable time of their development as novices?
Although edTPA was designed to be educative as embedded throughout
a program curriculum (Pecheone & Whittaker, 2016; Whittaker & Nelson,
2013), what happens when it serves, primarily, as a summative assessment
at the program endpoint? Lit and Lotan (2013) express concerns about
the formative nature of the work of educators and the summative nature
of a high-stakes performance assessment. Since time is required to embed teacher performance assessment constructs within a program, supporting its educative value (Pecheone & Whittaker, 2016), teacher educators must
consider curricular revisions and reevaluate their program design or risk
teaching to the test (Lit & Lotan, 2013).
Considering the importance of sociocultural learning theory (Vygotsky,
1978), which suggests the need for teacher candidate support and guid-
ance during the learning process, teacher educators have mixed feelings re-
garding the type of instructional support they may provide their candidates
as they complete a TPA, while maintaining ethical standards (Meuwissen
& Choppin, 2015; Ratner & Kolman, 2016). States utilizing TPAs as high-
stakes assessments must adhere to policies describing the kinds of support
that are acceptable for external evaluation (SCALE, 2014). Based on their
study of teacher-educators’ perspectives on providing candidate support for
the edTPA, per SCALE’s (2014) “Guidelines for Acceptable Candidate Sup-
port,” Ratner and Kolman (2016) categorized teacher educators as breakers, benders, or obeyers: teacher educators either broke the rules because of their philosophical stance; bent the rules, providing some unacceptable supports that they justified as serving the candidate's development; or followed the rules to be compliant (Ratner & Kolman, 2016). Lit and Lotan (2013) also indicate that faculty experience challenges making the learning event authentic and the assessment process educative while considering appropriate supports for their teacher candidates. They explain that their candidates experience a shift in focus
from the real purpose of student teaching and the intended curriculum to
getting PACT done. Thus, it is critical that teacher educators invest in un-
derstanding TPA alignment with their curriculum priorities to avoid falling
into a pattern of employing a test-driven curriculum.
Questioning how to support teacher candidate development when a TPA is used as a high-stakes summative measure rather than a formative measure, additional researchers acknowledged theories of learning, such as sociocultural theory, which suggests that the novice teacher learns in a social context from influential others (Chung, 2008; Margolis & Doring, 2013). Margolis and Doring wondered whether a teacher candidate, still a novice, could demonstrate mastery on a teacher performance assessment originally designed for veterans pursuing National Board certification, even though edTPA expectations are not equivalent to mastery and accomplished teaching. Additionally, Lachuk and Koellner (2015) experienced tensions in helping their teacher candidates feel competent at a time when the candidates were still uncertain as educators. The authors navigated unfamiliar ways of teaching literacy instruction, focusing on inquiry and building scaffolds to ease the transition into implementation. This influenced their curricular choices, course changes, and their interactions with their teacher candidates as they prompted candidates to think and reflect on their own work (Lachuk & Koellner, 2015). Recognizing the affordances of PACT data
to inform programs, Chung (2008) supported the notion that it takes time for new teachers to grow and develop, considering Schon's (1983) reflection-in-action and Shulman's (1987) view of the teacher as a reflective practitioner rather than a technician. Similarly, Pecheone and Whittaker (2016) suggest the importance of embedding the formative measures of a TPA, such as edTPA, throughout a program with an educative intent and in support of novice-teacher development. Miller, Carroll, Jancic, and Markworth (2015) use Wiggins and McTighe's Understanding by Design framework to support candidate development by providing candidates with ongoing professional learning and disciplinary understanding of edTPA constructs throughout their program rather than waiting until the program endpoint.

Elevation of Data-driven Decision Making and Cultures of Inquiry or Compliance at the Expense of Other Important Initiatives

Another tension faced by teacher educators considering the use of TPAs is related to their perception of the TPA as an opportunity to establish cultures of inquiry based on data evidence (Peck & McDonald, 2013) or as a challenge requiring their compliance at the expense of other program initiatives (Kornfeld et al., 2007). How are teacher educators supported in such an endeavor to weave the constructs of a performance assessment throughout their program while maintaining its identity? To increase
the educative value of edTPA and other teacher performance measures,
teacher educators have begun considering the impact on their workloads
and the demand to embed the assessment in their programs. Integration
of innovations into a program is a process that demands faculty time
to engage in professional development, to analyze results, and to reflect
on potential outcomes for program improvement (Cuthrell et al., 2014).
Considering teacher performance assessments for curriculum change and
program improvement, teacher educators share mixed findings. When ex-
amining edTPA as a curriculum change agent, Ledwell and Oyler (2016)
indicated that some faculty were concerned that edTPA was not aligned
to program principles, such as Universal Design for Learning, yet it did
focus on differentiation, and they agreed it emphasized important teacher
preparation practices (i.e., planning, teaching, assessing, and reflecting).
Additionally, some faculty made high levels of curriculum change, creating
new assignments aligned to edTPA constructs, while others made middle-
level changes, revising course assignments, and some did not make any
changes at all (Ledwell & Oyler, 2016). The researchers found that engaging faculty in a decentralized approach to investigating the impact of teacher performance data on curriculum change increases their participation in activities that lead to changes in curriculum and program design.
Decentralizing the process of implementing changes in response to a TPA mandate may help in other ways. By engaging faculty in leadership
committees with representation across programs and by using teacher can-
didate performance assessment data, teacher educators can be motivated to
engage in cultures of inquiry to determine program areas for improvement,
such as course and/or program gaps (Peck & McDonald, 2013). Peck and
McDonald indicated that faculty participation in data retreats revealed dif-
ferences in their perspectives of candidates’ skills and knowledge proficiency.
Teacher educators learned about teaching practices outside of their specialty
area by engaging in frequent cross-department opportunities for collegial and
critical reflection. Using forms of distributed leadership and organized struc-
tures of support, internally and externally, teacher educators engaged in ap-
proaches that supported implementation and teacher-candidate growth and
development (Peck et al., 2010; Peck & McDonald, 2013; Sloan, 2013).
Furthermore, Sloan (2013) contrasts how disturbances, in the form of TPA mandates, can threaten faculty autonomy, whereas distributed leadership, as a planned process, gives more people power in decision-making. Thus, distributing specific practices among faculty across the whole program can support a holistic approach to program improvement. For example, local
scoring of TPA portfolios becomes a collective learning opportunity, creating
conversations grounded in evidence and leading to shared understandings
(Sloan, 2013). Involvement of faculty in this process provides them with an active role in the change process, where they have opportunities to make decisions about program activities that matter to them. Internal and external
distributed leadership practices, where faculty create a common language in
their collaborations with one another, build capacity for leadership within
and across programs (Peck et al., 2010; Sloan, 2013). When provided opportunities to reflect on data and to engage in new forms of collaboration, teacher educators develop a shared language and shared practice.
Sloan (2015) highlights the importance of establishing a culture of inquiry
to promote faculty engagement in curriculum change leading to program
improvement. When faculty find the assessment useful for inquiry, they be-
come the authors of the change process (Sloan, 2015). When organization-
al structures facilitate faculty engagement and learning using data-driven
teacher performance assessments, transformative curriculum change is
owned by the program rather than the course (Sloan, 2015) and supports
large-scale implementation (Lys, L’Esperance, Dobson, & Bullock, 2014).
By creating cultures of inquiry to promote reflections based on data, teacher
educators have opportunities to engage in conversation about how to em-
bed components of policy in their programs (Peck et al., 2010). Reflecting
on data, teacher educators move beyond their personal concerns about a
mandate to evaluating it at the program level (Hall, 2010; Hall, Dirksen, &
George, 2006; Hall, Newlove, George, Rutherford, & Hord, 1991; see also
Many et al., Chapter 10 this volume). Engaging in a collective endeavor,
some teacher educators develop a shared language and practice, reinvent their programs, and find the balance between a state mandate and their program identity (Peck et al., 2010; Peck & McDonald, 2013; Sloan, 2013, 2015; see also Qian & Fayne, Chapter 3 this volume).
Finally, in other TPA-related concerns, teacher educators raise questions about sustainability, the impact on their workload, and TPA compliance, which may come at the expense of other important initiatives, such as multicultural education, social justice, equity, and acknowledgement of personal biases and racism (Dover & Schultz, 2016; Greenblatt & O'Hara, 2015; Tuck & Gorlewski, 2016). Some educators argue that standardized testing narrows the evaluation criteria, increases opportunities for privatization of education, and discounts the local context, especially for candidates who are placed in urban settings where teachers may not have established strong classroom management procedures (Dover & Schultz, 2016; Tuck & Gorlewski, 2016). In contrast, other educators (Stillman et al., 2013) provide an example using PACT data to show how a TPA may instead work to improve teacher preparation for diverse classrooms. They assessed PACT's role in identifying teacher candidates' knowledge and skills to enact culturally responsive instruction, described as contextualizing teaching and learning (Stillman et al., 2013, p. 140). Their findings indicate that teacher candidates had shallow understandings of prior knowledge, emphasizing academic knowledge and discounting or omitting learners' funds of knowledge (González et al., 2006); that is, candidates neglected students' assets based on experiences (Stillman et al., 2013).
Findings from additional studies provide further evidence that TPAs can have some benefits in preparing teacher candidates to teach English language learners; however, these benefits did not encompass the many facets of multicultural education, social justice and equity, and anti-bias education (Bunch et al., 2009; Liu & Milman, 2013; Torgerson et al., 2009; Van Es & Conroy, 2009). We learn from these studies how teacher educators learned from TPA evidence about their candidates' needs for strategies to develop their students' academic language in mathematics (Bunch et al., 2009). Additionally, candidates benefit from opportunities to demonstrate planning, considering not only their student context for learning but also their own personal biases and attitudes (Liu & Milman, 2013). Candidates need strategies to meet the needs of individuals and groups (Torgerson et al., 2009) and to understand their students' conceptions and misconceptions in mathematics (Van Es & Conroy, 2009).
Accounting for the historical, social, economic, and political pressures
faced by teachers and the teacher-education community, educators should
consider the affordances and tensions surrounding the development of
teacher performance assessments and their enactment in some state poli-
cies. Teacher educators may find the following approaches helpful in alleviating these tensions. They may consider embracing a shared language of
a professionalized teaching force, while paying careful attention to the in-
clusiveness of multicultural education and social justice aims. By under-
standing TPA constructs, teacher educators may increase alignment to pro-
gram missions/visions and avoid teaching to the test. Furthermore, teacher educators may increase the educative value of the assessment to ensure teacher candidate growth and development by embedding constructs and formative assessment methods within their programs. They may engage faculty in cultures of inquiry to use valid and reliable TPA data to identify candidates' strengths and needs, and they may employ organized structures of support for program coherence, assisting faculty in making curricular changes aligned to program goals. Considering the
design of teacher performance assessment, intended as a measure used to
professionalize the teaching force and as an opportunity for program im-
provement, teacher educators may benefit from its content and use, while
navigating concerns.
APPENDIX: A Comparison of Teacher Performance Assessments

The following comparison summarizes five teacher performance assessments: the Performance Assessment for California Teachers (PACT), edTPA, the Praxis Performance Assessment of Teachers (PPAT), the New Hampshire Teacher Candidate Assessment of Performance (NH-TCAP), and the Kansas Performance Teaching Portfolio (KPTP).

Development and Approval
• PACT: 2001, PACT Consortium of 12 institutions; piloted 2002; revision period 2003–2004; required by SB 1209 in 2006, consequential in 2008; approved in 2007 by the California Commission on Teacher Credentialing.
• edTPA: Developed by Stanford University faculty and staff at the Stanford Center for Assessment, Learning, and Equity (SCALE) in collaboration with the American Association of Colleges for Teacher Education (AACTE), building on 25 years of development; Pearson is the operational partner; validated for widespread use in 2013 as a nationally available TPA.
• PPAT: Developed by a Content Development Team of educators in conjunction with Educational Testing Services consultants (July 2016, PPAT Assessment Handbook, version 2.0); approved in 2015 and launched by ETS.
• NH-TCAP: 2012, NH Board of Education and NH IHE Network commitment; 2013, NH IHEs voted unanimously to adapt, pilot, and validate the common assessment for teacher candidates, adapted from the PACT; 2013–2014, five institutions in the NH IHE Network engaged in a small-scale pilot; 2014–2015, the NH IHE Network piloted with 12 institutions and 270 candidates; 2015–2016, full implementation.
• KPTP: 2003, KSDE required statewide use of the KPA for initial licensure and to upgrade to a 5-year license, but it was discontinued and replaced with a mentoring program; in 2009, the Kansas Performance Teaching Portfolio (KPTP) was introduced at the request of EPPs for initial teacher licensure.

Author and Owner
• PACT: PACT Consortium of 2001.
• edTPA: Stanford University.
• PPAT: Educational Testing Services.
• NH-TCAP: NH IHE Network (15 EPPs) and partners.
• KPTP: Kansas State Department of Education (KSDE).

Users
• PACT: Grown to 32 IHEs and programs in California.
• edTPA: Grown to 41 states and over 700 programs.
• PPAT: Explored by educators in 17 states.
• NH-TCAP: 15 EPPs in New Hampshire.
• KPTP: Kansas EPPs (16 of 24; Myers & Nelson, 2017).

Design
• PACT: Subject-specific portfolio linked to the California content standards (17 credential areas), focusing on the Teaching Event (TE) portfolio and the PACT. Five components: Task 1, Planning; Task 2, Instruction; Task 3, Assessment of student learning; Task 4, Reflection; Task 5, Academic language; three rubrics per five tasks for scoring. Data from the Teaching Event are used as one source of evidence for program completion and recommendation for initial license in California. Elementary candidates complete the TE in either literacy or mathematics, plus three additional content-area tasks (developed by their program) in core content areas not assessed by the TE: literacy or mathematics, history-social science, and science (Pecheone & Chung Wei, 2007).
• edTPA: Informed by the National Board for Professional Teaching Standards (NBPTS) and the Interstate Teacher Assessment and Support Consortium (InTASC) standards. Subject-specific portfolio (27 credential areas) focusing on a 3–5-day learning segment. Three summative tasks: Task 1, Planning for instruction and assessment; Task 2, Instructing and engaging students in learning; Task 3, Assessing student learning; includes Context for Learning and embedded Academic Language and Analysis of Teaching components, with five rubrics for each of the three tasks. Data from the composite of the three tasks are used to meet standards for high-stakes credentialing in adopting states. Elementary candidates are assessed in subject-specific portfolios of literacy, mathematics, or both subjects in a four-task model with 18 rubrics.
• PPAT: InTASC Model Core Teaching Standards-based portfolio with embedded content rather than subject-specific versions. One formative task and three summative tasks: Task 1, Knowledge of students and the learning environment; Task 2, Assessment and data collection to measure and inform student learning; Task 3, Designing instruction for student learning; Task 4, Implementing and analyzing instruction to promote student learning.
• NH-TCAP: Six strands: (I) Contextualizing learners and learning, (II) Planning and preparing, (III) Instructing students and supporting learning, (IV) Assessing student learning, (V) Reflecting and growing professionally, and (VI) Using academic language; assessed on 12 rubrics (http://ihenetwork.org/Websites/ihenetwork/files/Content/5850139/NHTCAP_brochure.pdf).
• KPTP: Six focus areas aligned with the KSDE Professional Education Standards: Focus Area A, Analysis of contextual information (includes understanding of child development); Focus Area B, Analysis of learning environment factors (additionally, a focus on reading and the role of technology); Focus Area C, Instructional implementation (additionally, ensures students' effective use of technology); Focus Area D, Analysis of classroom learning environment; Focus Area E, Analysis of assessment procedures; and Focus Area F, Reflection and self-evaluation, with a scoring rubric aligned to the Kansas Professional Education Standards by focus area. The total KPTP is scored on the six focus areas as covered in a unit of study organized in four tasks: Task 1, Contextual information and learning environment factors; Task 2, Designing instruction; Task 3, Teaching and learning; Task 4, Reflection and professionalism.

Additional Features
• PACT: Also includes Embedded Signature Assignments (ESAs) developed as campus-specific tasks with rubrics for formalized scoring, still in technical development (community study, child case study, observation of classroom management, curriculum unit).
• edTPA: First nationally available teacher performance assessment.
• PPAT: Nationally available teacher performance assessment.
• KPTP: Candidates who do not meet the minimum proficiency are provided one of two levels of remediation support: (a) minimal and (b) extensive.

Submission for Scoring and Cost
• PACT: Institutional scoring by trained scorers; cost was not evident in this review.
• edTPA: Submitted to Evaluation Systems of Pearson (ES), SCALE's operational partner, for external scoring at $300; retakes at $100 per task. All tasks are submitted upon completion at the end of the student teaching experience, and submission windows offer programs choice in establishing a pacing timeline. Retakes may be submitted for single or multiple tasks following the first submission attempt.
• PPAT: Submitted through the Educational Testing Services (ETS) platform at $300; retakes at $85. Each task is submitted sequentially as completed, permitted twice per year per task, and scored throughout the student teaching experience. Retakes of single tasks may be submitted prior to submission of the subsequent task during the experience.
• NH-TCAP: Cost was not evident in this review.
• KPTP: Submitted to the Kansas State Department of Education for scoring at $60.00, following the depletion of grant funds. The institutional coordinator schedules the submission date and sends institutional identifiers; KSDE shares candidate scores with each institution. Institutions may use the data to recommend program completion, licensure, and/or program review.

Scoring
• PACT: All PACT Teaching Events are locally scored by trained PACT consortium scorers as part of official scoring and reporting. Reliability in scoring: scorer training is a 2-day, subject-specific process; IHEs/programs send a lead trainer or collaborate with one another to score, and scorers who fail to calibrate must be retrained and pass before scoring. Teaching Events that do not meet the passing standard, or that are just above the passing standard (i.e., have at least one score of 1), should be double scored, and a random sample of 10% of the remaining Teaching Events should also be double scored to check reliability.
• edTPA: All edTPA tasks are either evaluated locally for formative purposes or externally scored as part of official scoring and reporting. Reliability in scoring: scorer training is 20+ hours and subject-specific, drawing from teacher educators and P–12 teachers across the country; scorers must demonstrate consistent scoring on multiple portfolios before qualifying, and those who fail to calibrate must be retrained and pass before scoring. edTPAs that do not meet the passing standard, or that fall within one standard error of measurement below it, must be double scored, and a random sample of 10% of the remaining edTPAs are also double scored to check reliability. Scorers are monitored for scoring consistency (Pecheone, Whittaker, & Klesch, 2016).
• PPAT: Task 1 is evaluated locally; Tasks 2–4 are externally scored for official scoring and reporting. Reliability in scoring: scorer training offers practice sessions, and qualifying raters must pass a certification test verifying mastery of accurate scoring; each task is double-scored, with a discrepancy between the two scorers resolved by a third scorer. Scores for Tasks 2 and 3 are added together, and the score for Task 4 is multiplied by two to reflect the double weighting of the task; tasks that are not submitted receive a score of zero.
• NH-TCAP: 2015–2016 TCAP piloters suggested a cut score of 24 out of 48 (across 12 rubrics), with a maximum of one "1" per strand; IHEs may require higher scores (http://ihenetwork.org/initiatives). The IHE Network position is that the NH-TCAP is one of a multitude of measures used by each IHE to determine candidate readiness for teaching. Scores are not made public or required for state licensure.
• KPTP: A total of 10 ratings (two focus areas for Task 1, three for Task 2, four for Task 3, and one for Task 4); total possible = 30, with a score of 20 to pass, on a 3-point rating scale where 1 = criteria not met, 2 = criteria partially met, and 3 = criteria met. Reliability in scoring: scorer training is one and one-half days to learn content standards, review exemplars, and score training cases; sample rubrics with indicators at each proficiency level are used, and note-taking while scoring is required for a candidate profile in the event of a dispute. Portfolios are double-scored and averaged for a final score, with an adjudication process for substantially different scores in which a KSDE staff member reviews and makes a final score determination. KSDE has oversight of scorer training for EPP representatives and in-service and retired teachers; EPPs may not score their own candidates. Scorers must submit a Record of Evidence including final scores, keywords, and justification statements.

Standard Setting
• PACT: A 3-stage process using models by Haertel (2002) and Haertel and Lorié (2000) as well as the process used by the National Board for Professional Teaching Standards (Phillips, 1986; Pecheone & Chung Wei, 2007). Cut scores and use: passing all five categories of the TE and no more than three scores of "1" across tasks; cut scores by category: Planning, 1 out of 3 scores can be a "1"; Instruction, Assessment, Reflection, and Academic Language, 1 out of 2 scores can be a "1" (Pecheone & Chung Wei, 2007).
• edTPA: Evidence-based models by Haertel and Lorié (2004) as well as the process used by the National Board for Professional Teaching Standards (Pecheone, Whittaker, & Klesch, 2016). Cut scores and use: determined by each state using panel recommendations; a nationally recommended professional passing standard (score of 42) was established by a national panel of educators and policy makers in 2013, and the panel recommended a "ramping up" timeline for states, with individual state standard setting using the Haertel and Lorié method.
• PPAT: ETS Multistate Standard-Setting Study process; for the PPAT, ETS convened EPP faculty and K–12 teachers for a panel discussion (Reece & Tannenbaum, 2015). Cut scores and use: determined by each state.
• NH-TCAP: Followed the 3-stage process used for PACT; the Spencer research group uses models by Haertel (2002) and Haertel and Lorié (2000) as well as the process used by the National Board for Professional Teaching Standards (Pecheone & Chung Wei, 2007). Cut scores and use: identical to those established for the PACT (IHE Network).
• KPTP: A 30-point total is possible, resulting from 10 scores aligned to a 3-point rubric; educators noted a score of 20 as the cut score based on alignment to the Kansas Professional Education Standards, and piloters used candidate work samples in the standard-setting process (Myers & Nelson, 2017; Nelson, 2017). Cut scores and use: determined by each EPP, with a recommended score of 20 indicated.

Validity
• PACT: Validity of the TE includes content validity (strong linkage between the TE and the CA Teaching Performance Expectations and skills determining readiness to practice); construct validity (compared the structure of the guiding question items to the results of a factor analysis); bias or fairness (examined differences in the scores of different demographic groups); criterion-related concurrent validity (compared raters' holistic ratings of candidates to pass/fail rates); and predictive validity (a longitudinal study, funded by the Carnegie Foundation, is in process measuring first-year teacher effectiveness by student achievement; Pecheone & Chung Wei, 2007).
• edTPA: Nationally validated in 2013; meets standards outlined in the Standards for Educational and Psychological Testing (AERA, APA & NCME, 2014), based on more than 45,000 edTPA portfolio results from the first two years of implementation (2014 and 2015 edTPA administrative reports; Pecheone, Whittaker, & Klesch, 2016).
• NH-TCAP: Grant-funded validation led by the Spencer research group. Each IHE Network member determines its level of involvement in the larger calibration process and in defining the minimum performance required; the IHE Network is working toward calibration of scoring both within and between institutions.
• KPTP: KSDE began using the KPTP with preservice rather than in-service teachers. Using grant funds, educators working with REL piloted the assessment using a revised work sample model and ultimately validated the KPTP (Myers & Nelson, 2017). Three areas were identified for improvement and support: (a) coherence of KPTP tasks and alignment with the state educator evaluation system, (b) reliability of scoring and the scorer training process, and (c) use of KPTP data to provide meaningful information to candidates and EPPs.

Resources
• PACT: Resources online at http://pacttpa.org/_main/hub.php?pageName=Home
• edTPA: Extensive resources online at http://www.edtpa.com/ and http://edtpa.aacte.org/
• PPAT: Library of PPAT examples online at https://www.ets.org/ppa/test-takers/ and https://www.ets.org/s/ppa/pdf/ppat-candidate-educator-handbook.pdf
• NH-TCAP: http://ihenetwork.org/initiatives and http://ihenetwork.org/Websites/ihenetwork/files/Content/5850139/NHTCAP_brochure.pdf
• KPTP: http://www.mcpherson.edu/wp-content/uploads/2014/08/KPTP-Final-2009.pdf; http://ksde.org/Portals/0/TLA/Accreditation/KPTP_implemention_guidelines.pdf; http://www.ksde.org/Portals/0/TLA/Accreditation/KPTP_overview.pdf?ver=2013-11-12-141308-947; sample portfolios: http://tinyurl.com/lv7c93f

REFERENCES

Allington, R. L. (2005). Ignoring the policy makers to improve teacher preparation. Journal of Teacher Education, 56, 199–204.
Ball, D. L., & Forzani, F. M. (2009). The work of teaching and the challenge of
teacher education. Journal of Teacher Education, 60, 497–511.
Ball, D. L., & Forzani, F. M. (2011, Summer). Building a common core for learning
to teach and connecting professional learning to practice. American Educator,
35(2), 17–39.
Ball, A., & Tyson, C. (Eds.). (2011). Studying diversity in teacher education. Lanham,
MD: The Rowman & Littlefield Publishing Group, Inc.
Bastian, K., & Lys, D. (2016). Initial Findings from edTPA Implementation in North
Carolina. Education Policy Initiative at Carolina (EPIC). Retrieved from
https://publicpolicy.unc.edu/files/2016/10/Initial-Findings-from-edTPA-
Implementation.pdf
Berliner, D. C. (2005). The near impossibility of testing for teacher quality. Journal
of Teacher Education, 56(3), 205–213.
Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff, J. (2006). How chang-
es in entry requirements alter the teacher workforce and affect student
achievement. Education, Finance, and Policy, 1(2), 176–216. doi:10.1162/
edfp.2006.1.2.176
Brophy, J., & Good, T. (1986). Teacher behavior and student achievement. In M.
C. Wittock (Ed.), Handbook of research on teaching (3rd ed., pp. 328–375). New
York, NY: MacMillan.
Bunch, G. C., Aguirre, J. M., & Tellez, K. (2009). Beyond the scores: Using candidate
responses on high stakes performance assessment to inform teacher prepara-
tion for English learners. Issues in Teacher Education, 18(1), 103–128.
Carnegie Forum on Education and the Economy. (1986). A nation prepared: Teachers
for the 21st century: The report of the TASK Force on teaching as a profession, Carn-
egie Forum on Education and the Economy, May 1986. Washington, DC: Author.
Chief Council of State School Officers. (2013, April). InTASC model core teaching
standards and learning progressions for teachers 1.0. Retrieved from http://www
.ccsso.org/Documents/2013/2013_INTASC_Learning_Progressions_for_
Teachers.pdf
Chung, R. R. (2008, Winter). Beyond assessment: Performance assessments in
teacher education. Teacher Education Quarterly, 35(1), 7–28.
Cochran-Smith, M., & Fries, K. (2005) Researching teacher education in changing
times: Politics and paradigms. In M. Cochran-Smith & K. Zeichner (Eds.),
Studying teacher education: The report of the AERA panel on research and teacher
education (pp. 69–110). Mahwah, NJ: Lawrence Erlbaum Associates Inc.
Cochran-Smith, M., & Fries, K. (2008) Research on teacher education: Chang-
ing times, changing paradigms. In M. Cochran-Smith, S. Feiman-Nemser, J.
McIntyre, & K. Demers (Eds.), Handbook of research on teacher education: En-
during questions in changing contexts (3rd ed., pp. 1050–1093). New York, NY:
Routledge.
Cochran-Smith, M., Piazza, P., & Power, C. (2013). The politics of accountability: As-
sessing teacher education in the United States. The Educational Forum, 77(1),
6–27.
Cochran-Smith, M., Villegas, A. M., Abrams, L. W., Chavez-Moreno, L. C., & Mills, T.
(2016). Research on teacher preparation: Charting the landscape of a sprawl-
ing field. In D. H. Gitomer & C. A. Bell (Eds), Handbook of research on teach-
ing (5th ed., pp. 439–547). Washington, DC: American Educational Research
Association.
Council for the Accreditation of Educator Preparation. (2013). CAEP accreditation
standards and evidence: Aspirations for educator preparation. Washington,
DC: Author.
Cuthrell, K., Stapleton, J. N., Bullock, A. A., Lys, D. B., Smith, J. J., & Fogarty, E.
(2014). Mapping the journey of reform and assessment for an elementary
education teacher preparation program. Journal of Curriculum and Instruction,
8(1), 67–85.
Darling-Hammond, L. (2010). Evaluating teacher effectiveness: How teacher perfor-
mance assessments can measure and improve teaching. Washington, DC: Center
for American Progress. Retrieved from http://files.eric.ed.gov/fulltext/
ED535859.pdf
Darling-Hammond, L., & Bransford, J. (Eds.). (2005). Preparing teachers for a changing
world: What teachers should learn and be able to do. San Francisco, CA: Jossey-Bass.
Darling-Hammond, L., Chung, R., & Frelow, F. (2002). Variation in teacher prepa-
ration: How well do different pathways prepare teachers to teach? Journal of
Teacher Education, 53, 286–302. doi:10.1177/0022487102053004002
Darling-Hammond, L., & Snyder, J. (2000). Authentic assessment of teaching in
context. Teaching and Teacher Education, 16(5–6), 523–545.
Darling-Hammond, L., Wei, R. C., with Johnson, C. M. (2009). Teacher preparation
and teacher learning: A changing policy landscape. In G. Sykes, B. L. Schnei-
der, & D. N. Plank (Eds), Handbook of education policy research (pp. 613–636).
New York, NY: American Educational Research Association and Routledge. Re-
trieved from https://scale.stanford.edu/system/files/Teacher_Preparation
_and_Teacher_Learning_A_Changing_Policy_Landscape_0.pdf
Delandshere, G., & Arens, S. A. (2001). Representations of teaching and standards-
based reform: Are we closing the debate about teacher education? Teaching
and Teacher Education, 17, 547–566.
Delandshere, G., & Petrosky, A. (2004). Political rationales and ideological stances
of the standards-based reform of teacher education in the US. Teaching and
Teacher Education, 20(1), 1–15.
Dover, A. G., & Schultz, B. D. (2016). Troubling the edTPA: Illusions of objectivity
and rigor. The Educational Forum, 80, 95–106. doi:10.1080/00131725.2015.11
02368
Educational Testing Services. (2016a). ETS: A rich history of educator performance
assessment Innovation. Author. Retrieved from https://www.ets.org/s/ppa/
pdf/ets-performance-assessment-history.pdf
Educational Testing Services. (2016b). PPAT Assessment: Candidate and educator
handbook July, 2016, version 2.0. Author. Retrieved from https://www.ets.
org/s/ppa/pdf/ppat-candidate-educator-handbook.pdf
Floden, R. E., & Meniketti, M. (2005). Research on the effects of coursework in the
arts and Sciences and in the foundations of education. In M. Cochran-Smith
& K. Zeichner (Eds.), Studying teacher education: The report of the AERA panel
on research in teacher education (pp. 261–308), Mahwah, NJ: Lawrence Erlbaum
Associates Inc.
Gay, G. (2000). Culturally responsive teaching: Theory, research, and practice. New York,
NY: Teachers College.
Goldhaber, D., Cowan, J., & Theobald, R. (2016). Evaluating prospective teachers:
Testing the predictive validity of the edTPA. Calder Working Paper 157. Re-
trieved from: http://www.caldercenter.org/sites/default/files/WP%20157.pdf
González, N., Moll, L. C., & Amanti, C. (Eds.). (2006). Funds of knowledge: Theorizing
practices in households, communities, and classrooms. New York, NY: Routledge.
Gorlewski, D. A., & Gorlewski, J. A. (2015). Producing professionals: Analyzing what
counts for edTPA. In K. A. O’Hara (Ed.), Teacher evaluation: The charge and the
challenges (pp. 19–37). New York, NY: Peter Lang.
Greenblatt, D., & O’Hara, K. (2015, Summer). Buyer beware: Lessons learned from
edTPA implementation in New York State. Thought & Action, 42(2), 57–67.
Grossman, P. (2008). Responding to our critics: From crisis to opportunity in re-
search on teacher education. Journal of Teacher Education, 59(1), 10–23.
Guaglianone, C. L., Payne, M., Kinsey, G. W., & Chiero, R. (2009). Teaching per-
formance assessment: A comparative study of implementation and impact
amongst California State University Campuses. Issues in Teacher Education,
18(1), 129–148.
Hafner, A. L., & Maxie, A. (2006). Looking at answers about reform: Findings from
the SB 2042 implementation study. Issues in Teacher Education, 15(1), 85–102.
Hall, G. E. (2010). Technology’s Achilles heel: Achieving high-quality implementa-
tion. Journal of Research in Technology Education, 42(3), 231–253.
Hall, G. E., Dirksen, D. J., & George, A. A., (2006). Measuring implementation in
schools: Levels of use. Austin, TX: SEDL.
Hall G. E., Newlove, B. W., George, A. A., Rutherford, W. L., & Hord, S. M. (1991).
Measuring change facilitator stages of concern: A manual for use of the CFSoC Ques-
tionnaire. Greeley, CO: Center for Research on Teaching and Learning.
Hargreaves, A., & Fullan, M. (2013). The power of professional capital. Journal of
Staff Development: The Learning Forward Journal, 34(3), 36–39.
Haertel, E. H. (2002). Standard setting as a participatory process: Implications for
validation of standards-based accountability programs. Educational measure-
ment issues and practice, 21(1), 16–22.
Haertel, E. H., & Lorié, W. A. (2000, April). Validating standards-based test score inter-
pretations. Paper presented at the Annual Meeting of the American Educa-
tional Research Association. New Orleans, LA.
Haertel, E. H., & Lorié, W. A. (2004). Validating standards-based test score interpre-
tations. Measurement: Interdisciplinary Research and Perspectives, 2(2), 61–103.
doi:10.1207/s15366359mea0202_1
Henry, G. T., Campbell, S. L., Thompson, C. L., Patriarca, L. A., Luterbach, K. J.,
Lys, D. B., & Covington, V. M. (2013). The predictive validity of measures
of teacher candidate programs and performance: Toward an evidence-based
approach to teacher preparation. The Journal of Teacher Education, 64(5), 439–453. doi:10.1177/0022487113496431
The Holmes Group. (1986). Tomorrow’s teachers: A report of the Holmes Group. East
Lansing, MI: The Holmes Group, Inc.
Ingersoll, R. M., & May, H. (2011). Recruitment, retention and the minority
teacher shortage. Consortium for Policy Research in Education (CPRE Research
Report #RR-69). Retrieved from http://repository.upenn.edu/gse_pubs/
226/?utm_source=repository.upenn.edu%2Fgse_pubs%2F226&utm_medium
=PDF&utm_campaign=PDFCoverPages
Institutions of Higher Education Network: A Consortium on NH Educator Prepara-
tion Programs. (2016, December 1). Author. Retrieved from http://ihenetwork
.org/initiatives
Irvine, J. J. (2003). Educating teachers for diversity: Seeing with a cultural eye. New York,
NY: Teachers College Press.
Kansas State Department of Education. (2009). Kansas Performance Teaching Port-
folio. Author. Retrieved from http://www.mcpherson.edu/wp-content/uploads/2014/08/KPTP-Final-2009.pdf
Kansas State Department of Education. (2011, September). Kansas Performance Teach-
ing Portfolio: Implementation Guidelines. Author. Retrieved from http://ksde.org/
Portals/0/TLA/Accreditation/KPTP_implemention_guidelines.pdf
Knight, S. L., Lloyd, G. M., Arbaugh, F., Gamson, D., McDonald, S. P., Nolan Jr., J., &
Whitney, A. E. (2015). Reconceptualizing teacher quality to inform preservice
and inservice professional development. Journal of Teacher Education, 66(2),
105–108.
Kornfeld, J., Grady, K., Marker, P. M., & Ruddell, M. R. (2007). Caught in the cur-
rent: A self-study of state-mandated compliance in a teacher education pro-
gram. Teachers College Record, 109(8), 1902–1930.
Kumashiro, K. K. (2012). Bad teacher!: How blaming teachers distorts the bigger picture.
New York, NY: Teachers College Press.
Lachuk, A. J., & Koellner, K. (2015). Performance-based assessment for certifica-
tion: Insights from edTPA implementation. Language Arts, 93(2), 84–95.
Ladson-Billings, G. (1999). Preparing teachers for diverse student populations: A crit-
ical race theory perspective. A Review of Research in Education, 24(1), 211–247.
Lampert, M., Franke, M. L., Kazemi, E., Ghousseini, H., Turrou, A. C., Beasley, H.,
Cunard, A., & Crowe, K. (2013). Keeping it complex: Using rehearsals to sup-
port novice teacher learning of ambitious teaching. Journal of Teacher Educa-
tion, 64(3), 226–243.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation.
Cambridge, England: Cambridge University Press.
Ledwell, K., & Oyler, C. (2016). Unstandardized responses to a “Standardized” test:
The edTPA as a gatekeeper and curriculum change agent. Journal of Teacher
Education, 67(2), 120–134. doi:10.1177/0022487115624739
Lewis, W. D., & Young, T. V. (2013). The politics of accountability: Teacher education
Policy. Educational Policy, 27(2), 190–216. doi:10.1177/0895904712472725
Lit, I. W., & Lotan, R. (2013). A balancing act: Dilemmas of implementing a high-
stakes performance assessment. The New Educator, 9, 54–76. doi:10.1080/154
7688X.2013.751314
Liu, L. B., & Milman, N. B. (2013). Year one implications of a teacher performance
assessment’s impact on multicultural education across a secondary education
teacher preparation program. Action in Teacher Education, 35, 125–142. doi:10
.1080/01626620.2013.775971
Lys, D. B., L’Esperance, M. L., Dobson, E., & Bullock, A. (2014). Large-scale imple-
mentation of the edTPA: Reflections upon institutional change in action. Cur-
rent Issues in Education, 17(3), 1–15.
Margolis, J., & Doring, A. (2013). National assessments for student teachers: Docu-
menting teaching readiness to the tipping point. Action in Teacher Education,
35(4), 272–285. doi:10.1080/01626620.2013.827602
Mehta, J., & Doctor, J. (2013). Raising the bar for teaching. Phi Delta Kappan, 94(7),
8–13.
Meuwissen, K. W., & Choppin, J. M. (2015). Preservice teachers’ adaptations to ten-
sions associated with the edTPA during its early implementation in New York
and Washington states. Education Policy Analysis Archives, 23(103), 1–29.
Meuwissen, K. W., Choppin, J. M., Shang-Butler, H., & Cloonan, K. (2015). Teaching
candidates’ perceptions of and experiences with early implementation of the edTPA
licensure examination in New York and Washington States. Rochester, NY: Warner
Graduate School of Education and Human Development.
Miller, M., Carroll, D., Jancic, M., & Markworth, K. (2015). Developing a culture
of learning around the edTPA: One university’s journey, The New Educator,
11(1), 37–59. doi:10.1080/1547688X.2014.966401
Mitchell, K. J., Robinson, D. Z., Plake, B. S., & Knowles, K. T. (Eds.). (2001). Testing
teacher candidates: The role of licensure tests in improving teacher quality. Commit-
tee on Assessment and Teacher Quality, Center for Education, Board on Test-
ing and Assessment, National Research Council: National Academies Press.
Retrieved from http://www.nap.edu/catalog/10090.html
Myers, S., & Nelson, N. (2017). The Kansas Performance Teaching Portfolio (KPTP).
Regional Educational Laboratory at Marzano Research Laboratory. Re-
trieved on June 23, 2017 from https://www.relcentral.org/wp-content/uploads/2014/08/Presentation-Slides-Nelson.pdf
National Commission on Excellence in Education. (1983). A nation at risk: The im-
perative for educational reform. Washington, DC: U.S. Government Printing Of-
fice. Retrieved from http://www2.ed.gov/pubs/NatAtRisk/risk.html
Nelson, N. (2017). Kansas Performance Teaching Portfolio (KPTP): Candidate Introduc-
tion. Presentation retrieved on August 7, 2017 from http://www.ksde.org/Portals/0/TLA/Accreditation/KPTP_overview.pdf?ver=2013-11-12-141308-947
Newton, S. (2010). Preservice performance assessment and teacher early career effectiveness:
Preliminary findings on the performance assessment for California teachers. Stan-
ford, CA: Stanford University, Stanford Center for Assessment, Learning, and
Equity.
Okhremtchouk, I., Seiki, S., Gilliland, B., Ateh, C., Wallace, M., & Kato, A. (2009).
Voices of preservice teachers: Perspectives on the Performance Assessment
for California Teachers (PACT). Issues in Teacher Education, 18(1), 39–62.
Pecheone, R. L., & Chung, R. R. (2006). Evidence in teacher education: The per-
formance assessment for California teachers (PACT). Journal of Teacher Educa-
tion, 57(1), 22–36. doi:10.1177/0022487105284045
Pecheone, R. L., & Chung Wei, R. R. (2007, March). Technical Report of the Perfor-
mance Assessment for California Teachers (PACT): Summary of Validity and Reliabil-
ity Studies for the 2003-04 Pilot Year. PACT Consortium. Retrieved from http://
pacttpa.org/_files/Publications_and_Presentations/PACT_Technical_Report_March07.pdf
Pecheone, R. L., & Whittaker, A. (2016). Well prepared teachers inspire student learn-
ing. Phi Delta Kappan, 97(7), 8–13. Retrieved from https://scale.stanford.edu/
sites/default/files/Phi%20Delta%20Kappan-2016-Pecheone-8-13.pdf
Pecheone, R. L., Whittaker, A., & Klesch, H. (2016). Educative assessment and mean-
ingful support: 2015 edTPA Administrative Report. Palo Alto, CA: Stanford Cen-
ter for Learning, Assessment, and Equity.
Peck, C. A., Galluci, C., & Sloan, T. (2010). Negotiating implementation of high-
stakes performance assessment policies in teacher education: From compli-
ance to inquiry. Journal of Teacher Education, 61(5), 451–463. doi:10.1177/
0022487109354520
Peck, C. A., & McDonald, M. (2013). Creating “Cultures of Evidence” in teacher
education: Context, policy, and practice in three high-data-use programs. The
New Educator, 9(1), 12–28. doi:10.1080/1547688X.2013.751312
Peck, C.A., Singer-Gabella, M., Sloan, T., & Lin, S. (2014). Driving blind: Why we
need standardized performance assessment in teacher education. Journal of
Curriculum and Instruction, 8(1), 8–30. doi:10.3776/joci.2014.v8n1p8-30
Ratner, A. R., & Kolman, J. S. (2016). Breakers, benders, and obeyers: Inquiring
into teacher educators’ mediation of edTPA. Education Policy Analysis Archives,
24(35), 1–29.
Ravitch, D. (2013). Reign of error: The hoax of the privatization movement and the danger
to America’s public schools. New York, NY: Alfred A. Knopf.
Reece, C. M., & Tannenbaum, R. J. (2015, October). Research Memorandum ETS-
RM-15-11: Recommending a Passing Score for the Praxis® Performance Assessment
for Teachers (PPAT). Princeton, NJ: Educational Testing Services.
Richardson, V. (2003). Constructivist pedagogy. Teachers College Record, 105(9),
1623–1640.
Sato, M. (2014). What is the underlying conception of teaching of the edTPA? Jour-
nal of Teacher Education, 65(5), 421–434. doi:10.1177/0022487114542518
Schon, D. A. (1983). The reflective practitioner: How professionals think in action. New
York, NY: Basic Books.
Shulman, L. S. (1987). Assessment for teaching: An initiative for the profession. The
Phi Delta Kappan, 69(1), 38–44.
Sleeter, C. E. (2001). Preparing teachers for culturally diverse schools: Research and
the overwhelming presence of whiteness. Journal of Teacher Education, 52(2),
94–106.
Sloan, T. (2013). Distributed leadership and organizational change: Implementa-
tion of a teaching performance measure. The New Educator, 9(1), 29–53. doi:
10.1080/1547688X.2013.751313
Sloan, T. F. (2015). Data and learning that affords program improvement: A re-
sponse to the U.S. accountability movement in teacher education. Educational
Research for Policy and Practice, 14(3), 259–271. doi:10.1007/s10671-015-9179-y
Stanford Center for Assessment, Learning, and Equity. (2014, April). edTPA Guide-
lines for Acceptable Candidate Support. Retrieved from https://secure.aacte.
org/apps/rl/resource.php?resid=164&ref=edtpa
Stanford Center for Assessment, Learning, and Equity. (2016a). About edTPA: Project
status. Retrieved from http://edtpa.aacte.org/about-edtpa#Overview-0
Stanford Center for Assessment, Learning, and Equity. (2016b). edTPA Map Slide
2016. Amherst, MA. Retrieved from https://scale.stanford.edu/teaching/
edtpa
Stanford Center for Assessment, Learning, and Equity. (2017a). edTPA. Amherst,
MA: Pearson Education, Inc. Retrieved from https://scale.stanford.edu/
teaching/edtpa
Stanford Center for Assessment, Learning, and Equity. (2017b). edTPA Elementary Education Assessment Handbook. Board of Trustees of the Leland
Stanford Junior University.
Stanford Center for Assessment, Learning and Equity. (2017c). edTPA Resource Li-
brary. New York, NY: AACTE. Retrieved from https://secure.aacte.org/apps/
rl/resource.php?ref=edtpa
Stanford Center for Assessment, Learning, and Equity. (2017d). edTPA Submission
and reporting dates. Amherst, MA: Pearson Education, Inc. Retrieved from
http://www.edtpa.com/PageView.aspx?f=GEN_ScoreReportDates.html
Stillman, J., Anderson, L., Arellano, A., Wong, P. L., Berta-Avila, M., Alfaro, C., &
Struthers, K. (2013). Putting PACT in context and context in PACT: Teacher
educators collaborating around program-specific and shared learning goals.
Teacher Education Quarterly, Fall, 135–157.
Torgerson, C. W., Macy, S. R., Beare, P., & Tanner, D. E. (2009). Fresno assessment
of teachers: A teacher performance assessment that informs practice. Issues in
Teacher Education, 18(1), 63–82.
Tuck, E., & Gorlewski, J. (2016). Racist ordering, settler colonialism, and edT-
PA: A participatory policy analysis. Educational Policy, 30(1), 197–217.
doi:10.1177/0895904815616483
Tucker, M., & Mandel, D. (1986). The Carnegie report: A call for redesigning the
schools. The Phi Delta Kappan, 68(1), 24–27. Retrieved from http://www.jstor.
org/stable/20403252
U.S. Department of Education. (2002, January). No Child Left Behind Act of 2001.
Public Law, PL 107-110. Washington, D. C. Retrieved from http://www2.
ed.gov/policy/elsec/leg/esea02/107-110.pdf
U.S. Department of Education. (2009, November). Race to the Top Executive Sum-
mary. Washington, D. C. Retrieved from http://www2.ed.gov/programs/
racetothetop/executive-summary.pdf
Van Es, E. A., & Conroy, J. (2009). Using the performance assessment for California
teachers to examine pre-service teachers’ conceptions of teaching mathemat-
ics for understanding. Issues in Teacher Education, 18(1), 83–102.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement
gains: A review. Review of Educational Research, 73(1), 89–122.
Wei, R. C., & Pecheone, R. L. (2010). Assessment for learning in preservice teach-
er education: Performance-based assessments. In M Kennedy (Ed.), Teacher
Evolution of Teacher Performance Assessments    37

assessment and the quest for teacher Quality: A handbook (pp. 69–132). San Fran-
cisco, CA: Jossey Bass.
Whittaker, A., & Nelson, C. (2013). Assessment with an “End in View.” The New
Educator, 9(1), 77–93.
Wilson, S. M., & Tamir, E. (2008). The evolving field of teacher education: How un-
derstanding challenge(r)s might improve the preparation of teachers. In M.
Cochran-Smith, M. S. Feiman-Nemser, J. D. McIntyre (Eds.), & K. E. Demers
(Assoc. Ed.), Handbook of research on teacher education: Enduring questions in
changing contexts (3rd ed., pp. 908–935). New York, NY: Routledge.
Wilson, S., & Youngs, P. (2005). Research on accountability processes in teacher ed-
ucation. In M. Cochran-Smith & K. Zeichner (Eds.), Studying teacher education:
The report of the AERA Panel on research and teacher education (pp. 591–643).
Washington, DC: American Educational Research Association.
Zeichner, K. (2014). The struggle for the soul of teaching and teacher education
in the USA. Journal of Education for Teaching, 40(5), 551–568. Retrieved from
http://dx.doi.org/10.1080/02607476.2014.956544
Zeichner, K., Payne, K. A., & Brayko, K. (2015). Democratizing teacher education.
Journal of Teacher Education, 66(2), 122–135. doi:10.1177/0022487114560908
Zeichner, K., & Peña-Sandoval, C. (2015). Venture philanthropy and teacher edu-
cation policy in the US: The role of the New Schools Venture Fund. Teachers
College Record, 117(6), 1–44.
Zumwalt, K., & Craig, E. (2008). Who is teaching? Does it matter? In M. Cochran-
Smith, M. S. Feiman-Nemser, J. D. McIntyre (Eds.) & K. E. Demers (Assoc.
Ed.), Handbook of research on teacher education: Enduring questions in changing
contexts (3rd ed., pp. 404–423). New York, NY: Routledge.
This page intentionally left blank.
CHAPTER 2

FROM ISOLATION TO A
COMMUNITY OF PRACTICE
Redefining the Relationship of Faculty
and Adjunct University Supervisors
During the Implementation of edTPA

Sharilyn C. Steadman and Ellen E. Dobson


East Carolina University

On a chilly March afternoon, a group of experienced university supervisors sat in the windowless meeting room of an off-campus building. One individual was
sharing how, several days earlier, he had referenced a specific component of the
university’s new assessment instrument during a post-observation conference
with one of his teacher candidates. All members of the group listened intently.
Some nodded in affirmation of the approach that he had used. As the universi-
ty supervisor paused, the project director, who was the leader of the implemen-
tation effort, began to summarize the university supervisor’s interaction. The
supervisor, however, quickly interrupted the project director’s comments in or-
der to provide his own summary and then to recommend a particular resource
that he had found to be helpful in discussing learning styles with his teacher
candidates. Others wrote down the name of the website that he recommended.
When he concluded, all members of the group offered animated comments on
the information that the university supervisor had provided.

While such an interaction may seem commonplace, it was anything but that at our university in 2011 when this interaction took place. Months ear-
lier, our College of Education had implemented a new assessment instru-
ment, edTPA®, for teacher candidates, and the individuals meeting that day
had come together for their monthly meeting to discuss their interactions
with their teacher candidates. The group gathered in the meeting room
was composed of sixteen individuals: ten non-faculty university supervisors;
two faculty members who were not supervising that semester (one of whom
was the project director); four faculty university members, including Shari
Steadman, the first author of this chapter (who has supervised interns for
over a decade, but who, for this project, served solely as a researcher); and
two members of the Office of Assessment and Accreditation (including the
other author of this chapter, Ellen Dobson. Ellen has served as a resource for
university supervisors, but has never personally supervised interns). When
the group initially met in December 2010, many of the individuals had nev-
er met. How, then, just three months later, had a community developed in
which all members, regardless of their status at the university or experience
working with teacher candidates, were viewed as equal contributors to the
implementation of the new edTPA assessment?

WE’RE GOING TO IMPLEMENT A NEW
ACCOUNTABILITY INSTRUMENT? WHY . . . AND HOW?

While much has been written about the impact that the infusion of new ac-
countability instruments has had on the curriculum of teacher education
programs, the impact of implementing such assessments does not end with
curriculum revision. Because the internship experience (previously called
the student-teaching experience) is the venue in which interns or teacher
candidates apply their accumulated knowledge via the completion of per-
formance instruments such as edTPA, the work of individuals who serve as
university supervisors to the interns is also affected by the take-up of these
new assessments.
When the administration and education faculty of our large state uni-
versity with a robust teacher education program perceived a need to in-
crease the rigor of our assessment instrument, they chose to participate in
the field testing of a new teacher performance assessment developed by
an entity not affiliated with the university. Initially, the pilot involved only
one student in fall 2010, and the instrument selected was then called the
Teacher Performance Assessment, now entitled edTPA. An analysis of the
selected intern’s response to edTPA suggested that the instrument had the
potential to provide a more thorough, richer, and more robust opportu-
nity for interns to demonstrate their planning, teaching, and assessing skills
while providing faculty more accurate insight into the interns’ performanc-
es. Based on the one-intern pilot data, the decision was made to expand
the pilot for the Spring 2011 semester to include three programs: English
Education, History Education, and Middle Grades Education.
In light of the magnitude of change associated with moving from a locally-
constructed performance assessment to a highly structured, external per-
formance assessment, an implementation unit was formed. The individu-
als included in the implementation unit represented various roles within
teacher education: university faculty who taught courses and also super-
vised English, history, or middle grades education interns, and adjunct uni-
versity supervisors for the same three programs. Adjunct university supervi-
sors were former teachers and administrators with advanced degrees who
were hired from semester to semester on an as-needed basis, but did not
hold faculty status. During the implementation semester, all members of
the implementation unit met monthly to discuss their perceptions of intern
reactions to developing an electronic teacher performance assessment.
Prior to piloting edTPA, faculty university supervisors and adjunct uni-
versity supervisors rarely interacted, despite serving in the same role as uni-
versity supervisors to interns in the same three program areas. Other than
the yearly half-day training session for all university supervisors, the paths
of supervisors working with different programs never crossed. Thus, some
university supervisors had not met each other before the edTPA implemen-
tation semester.
The integration of individuals from three different content areas (Eng-
lish education, history education, and middle grades education), repre-
senting two different positions within the university (faculty and adjunct), to
discuss the implementation of a completely new intern assessment requir-
ing a new means of submission (paper versus electronic) provided a unique
opportunity to explore two questions. First, “How do regularly scheduled
face-to-face interactions of adjunct and faculty university supervisors affect
the established faculty–adjunct relationship?” Second, “How does the im-
plementation of a new assessment instrument contribute to the ways that
university supervisors define their work and their positions within a college
of education?”
As the two groups of supervisors, who occupied different positions at the
university, moved from doing intern supervision in isolation to discussing
their work within a group, they interacted to unpack, understand, and con-
sistently implement the requirements of the new performance assessment.
Our chapter offers an analysis of data, collected in the 2011 implementation semester and again six years later, in 2017, that focuses on new structures and pathways for faculty–adjunct university supervisor communications that emerged during the edTPA implementation semester.

THE INTERACTION OF UNIVERSITY SUPERVISORS:
ONE SHARED GOAL, TWO LEVELS OF PARTICIPANTS,
THREE INFLUENCES ON INTERACTIONS

Three factors influenced the interactions of the participants who took part
in edTPA implementation: (a) the ill-defined work of the university super-
visor, (b) the hegemonic relationship of positions within academia, and
(c) theories of identity. Each factor shaped the common experience of the
participants.

Defining the Work of the University Supervisor

The framework for this study called upon extant literature on the work
and position of university supervisors within teacher education programs.
Research in teacher education has focused little attention on the work of
university supervisors even though they are seen as participants in the stu-
dent-teaching experience, an experience that, over time, has consistently
been cited by classroom teachers as the single most influential component
of their teacher education programs (Clifford & Guthrie, 1990; Guyton
& McIntyre, 1990; McIntyre, Byrd, & Foxx, 1996; Wilson, Floden, & Fer-
rini-Mundy, 2002). As noted in 1996, focused study of the practices and
positioning of university supervisors was “relatively sparse and outdated”
(Enz, Freeman, & Wallin, 1996, p. 132), and little has changed in more
than two decades. For example, student teacher supervision by university-
affiliated individuals occupies only four pages in Studying Teacher Education
(Cochran-Smith & Zeichner, 2005), and these pages relate solely to super-
vision in professional development school settings. Further, in the more
recently published third edition of the Handbook of Research on Teacher Edu-
cation, subtitled Enduring Questions in Changing Contexts (Cochran-Smith,
Feiman-Nemser, McIntyre, & Demers, 2008) neither student teaching nor
student-teacher supervision are afforded any attention within the volume’s
1,354 pages.
Contributing to a sense of vagueness about the work of the university su-
pervisor is the range of individuals who do this work. As Slick (1998) notes,
the position of supervisor may be assigned to a faculty member in addition
to that person’s full teaching load or to an adjunct educator. Sometimes a
supervisor is a retired administrator or teacher; sometimes he or she is a
graduate student teaching assistant. Regardless of the position of the per-
son serving as a university supervisor, never does the work itself carry more
than minimal status.
Further, as Clifford and Guthrie (1990) have pointed out, the low status
of university supervisors is not new; those who supervise student teachers
have never been held in high esteem. When placed on a continuum of vari-
ous positions held within a College of Education, “practicum supervision,
the rationale and expectations for which are often vague, tends to receive
the least attention of all” (Beck & Kosnik, 2002, p. 6). Thus, standing at the
intersection of practice and higher education, the work of the university
supervisor has been largely ignored, and as a result, is little understood
(Steadman & Brown, 2011), commands little prestige, and receives mini-
mal attention.
Given the lack of knowledge about how university supervisors define
their work and professional identities (Poyas & Smith, 2006), their contin-
ued employment by teacher education programs during what is regarded
as the salient component of a future teacher’s development presents an
interesting situation. In light of the current call for a seamless transition
between teacher education courses and school-based experiences, a deeper
understanding of the position, work, and impact of university-based super-
visors is long overdue.
The increasing awareness of the critical contribution that clinical experi-
ences make to the development of teacher candidates and the related em-
phasis on standards-based teacher performance assessments (i.e., edTPA)
are now beginning to bring the work of the university supervisors into greater
focus and has prompted an escalating acknowledgement of the benefits of
university-based faculty and field-based adjuncts working together to bridge
the university–K–12 school divide that teacher candidates often experi-
ence. As Darling-Hammond (2014) asserts, “particularly useful are those
approaches that develop both greater teaching skill and understanding for
the participants and for those involved in mentoring and assessing these
performances” (p. 555), notably university supervisors. The uneven posi-
tioning within the university, however, may stifle such bridge building.

Elements Affecting Faculty–Adjunct Relationships: How We View “The Other”

Since the 1970s, the academic workforce has shifted away from the tra-
ditional full-time, tenured/tenure-track faculty to one largely composed
of full- and part-time non-tenure track teaching staff (Schuster & Finkelstein, 2006). In 2011, the U.S. Department of Education reported that full-
time non-tenure track faculty and part-time faculty made up over 56% of
post-secondary instructional staff (Curtis, 2014). Although now in the ma-
jority, adjunct faculty have reported a negative climate among their ten-
ured/tenure-track colleagues (Kezar & Sam, 2010). This phenomenon
has been described as a two-class system in which adjunct faculty “occupy a
disadvantaged status when compared to their tenured and tenure-eligible
colleagues” (Baldwin & Chronister, 2001, p. 7). Adjunct faculty members typically have higher course loads than tenured/tenure-track faculty, re-
ceive few, if any, insurance benefits, and rarely accrue retirement benefits
(Halcrow & Olson, 2008). According to Baldwin and Chronister, “Histori-
cally, faculty off the tenure track have been viewed from a policy perspective
as an academic equivalent of migrant workers” (2005, p. 155).
The use of adjunct faculty has both advantages and disadvantages. One
noted advantage is cost. Employing an adjunct faculty member is usually
less expensive than employing a tenured/tenure-track faculty member
(Baldwin & Chronister, 2001; Curtis, 2014). Adjunct contracts frequently
cover a short period of time, typically one semester. This is the case at our
university where adjunct university supervisors are hired for one semester,
with their contracts being renewed a semester at a time. Another benefit of
employing adjuncts is flexibility (Kezar & Sam, 2010; Moorehead, Russell,
& Pula, 2015). Instructors can be hired for whichever education program
needs them at the time.
The various differences in the roles of adjunct and tenured/tenure-
track faculty have an impact on how these two groups of individuals are
perceived within the university. Often adjuncts are perceived, and in some
cases perceive themselves, as the bottom of the faculty hierarchy (Levin,
Shaker, & Wagoner, 2011). Many academics report that the increase in hir-
ing adjunct faculty can be seen as a threat to the concept of tenure (Curtis,
2014; Slaughter & Rhoades, 2004). Adjuncts have reported that they often
experience a negative climate among colleagues and that tenured/tenure-
track faculty express animosity towards them (Kezar & Sam, 2010). In their
study of part-time faculty, Levin and Montero Hernandez found that ad-
juncts “experienced diminished professional status due to their lack of in-
teraction with other faculty members” (2014, p. 548). Consequently, the
bifurcation of faculty positions frequently produces a distinct hegemony
between tenured/tenure-track faculty and adjunct faculty members, which
influences how the individuals interact.

Identity and Situated Learning: How We View Ourselves

The final factor that shaped the experiences of those of us involved in the implementation of edTPA in spring 2011 is a set of theories of individual and community identity: socially-situated identity (Gee, 1999), positioning theory (van Langenhove & Harre, 1999), and the notion of communities of practice (Wenger, 1998).
The theories of “socially-situated identity” (Gee, 1999, p. 13) and posi-
tioning theory (van Langenhove & Harre, 1999) provide a contextual lens
through which to view the ways that individuals position and identify them-
selves in new social situations. The resulting theoretical framework posits
that each individual determines the “kind of person” (Gee, 1999, p. 13)
that he or she seeks to be in a particular moment based on the interpreted
rules of a specific setting to which he or she wishes to belong and the indi-
vidual’s prior experiences (Lemke, 2001). Because individuals may move
from one social situation to another in a short period of time (and each
social situation might require a different identity), individuals’ identities
are constantly reforming as they travel from situation to situation. Further,
a single social situation may evolve over time, and thus a person may change
identities several times within one setting, taking on different identities as
a particular situation requires.
This movement within and among settings and in relation to interac-
tions with others suggests that identities are not “stable, natural, or unique
[and that] individuals . . . are not the sole authors of the self but rather are
authored in language and by social practices” (Finders, 1997, p. 32). Thus,
during such interactions, individuals “construct as well as interrogate com-
plex social realities” (Weiner, 2005–2006, p. 287) shaped by the constructs
of the particular situation at a particular moment. Depending upon the
social situation, an individual can position herself or be positioned by oth-
ers as

powerful or powerless, confident or apologetic, dominant or submissive, definitive or tentative, authorized or unauthorized, and so on. A “position” can
be specified by reference to how a speaker’s contributions are hearable with
respect to these and other polarities of character, and sometimes even of role.
Positioned as dependent, one’s cry of pain is hearable as a plea for help. But
positioned as dominant, a similar cry can be heard as a protest or even as a
reprimand. It can easily be seen that the social force of an action and the
position of the actor and interactors mutually determine one another. (van
Langenhove & Harre, 1999, p. 17)

Hence, discursive interactions provide a lens through which to view individuals’ socially-situated identities, how they choose to position themselves,
and how they are positioned by others.
While the way in which individuals identify themselves may shift fre-
quently within a discursive situation, the frequency of shifts diminishes with-
in well-established groups with established acceptable ways of being. These
groups, termed communities of practice (Lave & Wenger, 1991), rest upon
four foundational premises of learning as social participation and as related
to knowledge, knowing, and knowers. First, individuals are social beings
who learn from and with each other. Second, knowledge is defined as hav-
ing competence in a valued enterprise. Third, knowing is demonstrated
through actively pursuing these valued enterprises, and fourth, the goal
of learning is to produce meaningful experiences and engagement with the world (Wenger, 1998, p. 4). Therefore, the activities of a community of
practice provide learners with a framework for making sense of a specific
sphere of life (Lave & Wenger, 1991).

OUR INITIAL STUDY

In order to understand how faculty and adjunct university supervisors defined their work, negotiated their positions, and interacted during the
shared event of implementing the new assessment instrument, edTPA, we
selected a phenomenological approach. Creswell (2013) defines a phe-
nomenological study as an exploration of the common meaning of several
individuals’ lived experiences. In this study, we viewed the lived experiences
as encompassing two sets of phenomena: (a) the group meetings that all
participants attended and (b) the school-based intern-observations that all
university supervisors conducted. Because “[p]henomenology is not only
a description, but it is also an interpretive process in which the researcher
makes an interpretation . . . of the meaning of the lived experiences” (Cre-
swell, 2013, p. 80), we recorded all meetings, and the participants’ respons-
es provided the data for analysis and interpretation of the interactions of
the members of the implementation unit.

Participants and Context

As noted earlier, the selection of secondary English education, secondary history education, and middle grades education identified, by default,
the twelve university supervisors who would participate in the edTPA imple-
mentation unit: ten adjunct university supervisors and two faculty univer-
sity supervisors affiliated with the three programs. The other members of
the pilot were the project director, an associate professor of middle grades
education; a tenured associate professor who served as the coordinator of
the three selected programs; and two members of the assessment and ac-
creditation office. All four faculty members had taught at the middle-school
or the high-school level, had mentored student teachers, and had served as
university supervisors at our university prior to the pilot semester. In addi-
tion, all four of us regularly taught multiple classes at the undergraduate
and graduate levels within our program areas. Both Office of Assessment
and Accreditation personnel had served as middle-school teachers before
assuming their current assignments; one had also served as a university
supervisor.
The ten adjunct university supervisors who participated in the pilot com-
prised a fairly representative group of our college of education’s adjunct
university supervisors. All were former teachers. One individual was a re-
tiring university faculty member; the other nine were part-time adjuncts
who were hired on a semester-to-semester basis. The group consisted of five
women and five men, all of whom were White and who averaged 26.7 years
of teaching experience. Collectively, the group had served as principals at
both the middle-school and high-school levels, department chairs, mentor
teachers, trainers for National Board teacher applicants, and SACS coor-
dinators; further, some had previously been recognized as teachers of the
year at their particular schools and several were National Board certified.
Four of the adjunct supervisors held doctoral degrees. None was taking up
this work for the first time, and the length of previous experience ranged
from one year to 38 years. The mean was slightly over nine years of intern
supervision experience. The workloads that these supervisors carried in the
given semester were not identical; they ranged from two to twelve interns,
with a mean of ten.
Prior to the pilot semester, all university supervisors, including faculty,
had been required to attend a mandatory yearly training meeting. Our uni-
versity’s teacher education candidates’ internship sites are spread across a
large, primarily rural geographic area. With no regular meetings, adjunct
university supervisors rarely saw each other and were afforded significant
autonomy in the supervision of their interns. Occasionally, a situation
arose that the supervisors—or their interns—deemed sufficiently serious
to warrant an email to a faculty program member or to the teacher edu-
cation program. Both faculty and previous interns reported that prior to
the beginning of the edTPA pilot semester, university supervisors could be
inconsistent, both across and within program areas, in their expectations
of intern performance and that upon occasion, they had requested that
interns enact practices (such as lesson plan formatting) that differed from
the information taught in content programs and from the policies of the
teacher education program.
Each adjunct university supervisor was paid a $500 stipend to participate
in the launch of edTPA. The stipends supplemented the adjuncts’ tradi-
tional compensation, and in order to receive the supplements, the adjunct
university supervisors were required to complete a survey at the beginning
of the semester; attend monthly meetings, all of which were video record-
ed; and agree to be interviewed individually at the end of the implementa-
tion semester. None of the faculty or members of the Office of Assessment
and Accreditation received a stipend, but all attended each of the meetings.

Data We Collected and Analyzed

Data from the pilot implementation were collected between December 2010 and June 2011. The pilot implementation participants met jointly
eight times during that period, for a total of 13 hours and 8 minutes, with
the first meeting held approximately three weeks before the beginning of
the pilot semester and the last occurring approximately three weeks after
the semester concluded. All meetings were video recorded, and Shari kept
extensive field notes. Further, all participants attended technology-training
sessions that focused on the skills associated with the new electronic portfo-
lio and progress report. These sessions were also video recorded.
We also requested that all study participants complete a demographic
survey, administered before the pilot implementation began, that provided
a profile of each individual participant as well as, when aggregated, a sum-
mary of the cadre of supervisors assigned to the interns for the three con-
tent areas for the semester. We completed these surveys as well. In addition,
at the completion of the semester, after all teacher education intern obser-
vations had concluded, paperwork had been filed, and the last joint meet-
ing had been held, one of us conducted semi-structured, in-depth individ-
ual interviews with nine university supervisors over a two-day period. The
interviews ranged from 16 to 59 minutes, with a mean of 33 minutes. One
supervisor’s serious illness prevented an interview. Other faculty members
and the other assessment and accreditation person were also interviewed
individually. Interviews with all participants focused on each individual’s
perceptions of the eight group meetings; in addition, interviews with the
university supervisors also afforded data on their experiences with their
teacher education interns during the semester. These experiences were of-
ten referenced in the joint meetings, and the interviews allowed an oppor-
tunity to pursue more in-depth discussion on these events.
Multiple analytic tools, selected for their responsiveness to the specific
data collected, were used within this study. Discourse analysis (Titscher,
Meyer, Wodak, & Vetter, 2000) was the primary tool used to illuminate fre-
quently appearing topics in transcripts of the eight videotaped meetings,
field notes of participants’ reactions to technology demands, the end-of-
the-semester video-recorded interviews of individual university supervisors,
and faculty interviews. Significant statements and quotes were aggregated
to produce themes (Creswell, 2013), and we categorized, coded, gener-
ated analytic memos, and finally analyzed these themes (Corbin & Strauss,
2008). Through this process we developed “textural descriptions” of par-
ticipants’ experiences, a “structural description” of how they experienced
the context and conditions of the joint group meetings and/or the class-
room field experiences, and a combination of the two types of descriptions
to produce an overarching “essence” of the experience of working together
to implement the new accountability instrument (Creswell, 2013, p. 80). Finally, member checks were conducted and participant feedback solicited.
As is true of much qualitative research, the scale of this study on the
edTPA implementation experience is small. Because the study was inten-
tionally situated during the piloting of a new teacher performance ac-
countability instrument, the number of participants was confined to those
associated with the content area programs enrolled in the pilot. Further,
these sixteen individuals offer a demographic richness in some areas (a
roughly equal number of males and females, various levels of experience,
differences in the number of interns the university supervisors were re-
sponsible for, etc.). Unfortunately, however, the participant group lacks
graduate students, who frequently serve as supervisors at many universi-
ties; racial diversity; and very early career educators.

What We Learned: The Development and Evolution of a Community of Practice

In the following sections, we describe how full-time university personnel (faculty and members of the Office of Assessment and Accreditation) and part-
time adjunct university supervisors interacted with each other across the
timeframe of the study, how they defined their work, how they positioned
themselves throughout and at the end of the semester, and how they came
to interact as they did in the vignette shared at the beginning of this chap-
ter and in subsequent meetings.

Meeting Together
The first joint meeting of the implementation team occurred approxi-
mately three weeks before the spring semester pilot began. The ten univer-
sity supervisors had met each other previously, but few were familiar with
the other members of the implementation team.

As the project director stood at the front of the room, each individual briefly
introduced herself or himself. Over the next 82 minutes, the project director
and the other five full-time university members (the other three faculty and the
two members of the Office of Assessment and Accreditation [OAA]) spoke
to the group, providing an overview of the implementation plan, handouts,
and explanations of the new assessment instrument, timeframes, and associ-
ated technology skill requirements. Nine of the university supervisors made
few or no comments and asked only brief clarifying questions. Only one uni-
versity supervisor, a recently-retired, former professor from one of the pro-
grams involved in the study, actively engaged in the presentation. Intermit-
tently throughout the meeting, he offered suggestions, posed challenging
questions, and positioned himself as a vocal skeptic of the value of and ease
with which the new assessment portfolio instrument might be implemented. Near the close of the meeting, when the project director asked for questions,
a handful of adjunct university supervisors voiced short, concrete inquiries
about the technology skills needed to employ the new instrument, the timing
of events, and whether the new portfolio replaced the previous paper-based
portfolio. As responses were given to the questions, some adjunct university
supervisors nodded, while others exchanged confused looks, sighed, and
shook their heads, suggesting a degree of uncertainty and confusion; they
rarely asked follow-up questions. Two faculty members (the project director
and a faculty university supervisor) and the OAA personnel (one of the au-
thors and her colleague) stated that support would be provided and that fac-
ulty and adjunct university supervisors would learn more about the require-
ments and materials together.

In early January, the implementation unit members met for the second time.

Once again, the project director’s voice dominated the majority of the meeting
and, as in the first meeting, nine adjunct university supervisors for the most
part listened quietly to the explanations and information that the project di-
rector and the OAA member who coauthored this chapter provided on the implementation
and requirements of the portfolio instrument. The retired professor-turned-
university supervisor interjected slightly fewer comments than at the first
meeting, but provided several unsolicited explanations of state and College
of Education requirements. When the project director afforded faculty and
adjunct supervisors short opportunities to ask questions at various designat-
ed points during the meeting, six of the adjunct supervisors asked questions
about procedural aspects of implementation of the new assessment, voiced
confirming statements, verbally volunteered feelings of confusion about tech-
nology demands or required equipment, or made moderately challenging
statements regarding timing or procedures. The adjunct university supervi-
sors’ discourse during the first two meetings reflected their confusion and, at
times, frustration, and their voices were quiet and hesitant.

The discursive patterns of the first two meetings and the ways in which
the participants positioned themselves and were positioned suggest an un-
spoken group understanding of who might say what to whom, when and
how they might say it, under what circumstances, for what individual or
group purposes, and with what outcomes (Green & Dixon, 1993). Their
discourse thus provided a way to identify the social rules that defined the
early norms of the meetings. The project director positioned himself as
the authority figure, able to define times when university supervisors were
permitted to make comments or ask questions, and other members of the
group ceded this position to him. Possessing lower authority were faculty
university supervisors, who spontaneously offered comments or delivered
small segments of instructional aspects of the meetings, and the members
of the Office of Assessment and Accreditation, who frequently constructed
and facilitated PowerPoint delivery as visual support materials. The recently-retired faculty member was the only university supervisor to challenge
the authoritative position of the pilot director, interrupting the planned
presentations with challenging comments and questions. By not accept-
ing the group-defined position of adjunct university supervisor as polite
receiver of information, the retired faculty supervisor positioned himself as
straddling two worlds: the realm of the faculty and the realm of the adjunct
university supervisor.
Between the second and third meetings, all university supervisors, both
faculty and adjunct, visited their interns and clinical (also termed cooper-
ating) teachers in their clinical/school settings for the introductory meet-
ings, and some held their first observation visits. The student-teaching in-
terns had been informed about the new portfolio assessment instrument
during their weekly seminar classes taught by faculty. Thus, they peppered
their university supervisors, both faculty and adjunct, with questions about
specific aspects of the instrument, implementation details, what each in-
tern was responsible for, how they were to generate and collect particular
data, etc. Their clinical teachers wanted details as well. Being positioned by
the interns and the clinical teachers as the experts on the new accountabil-
ity instrument was, for the adjuncts, not a familiar position in their work as
university supervisors.

At the third meeting, held in February, the university supervisors demonstrated an awareness of a new responsibility to their interns and clinical teachers
in regard to edTPA and, correspondingly, a new position for themselves with-
in the student teaching triad. They arrived at the meeting with specific ques-
tions, and when provided with opportunities, they actively asked those ques-
tions. Though the director’s and faculty supervisors’ discourse still dominated
the meeting, the nature of the adjunct supervisors’ questions had changed:
their inquiries were more detailed, and in several instances, the adjunct su-
pervisors asked follow-up questions once the pilot director had provided a
response to their initial queries. In a private post-meeting conversation
with Shari, one adjunct supervisor explained her response to the
implementation of the new instrument, saying, “Getting involved in the task
that the interns have to do has made me more confident, I think, in dealing
with both the clinical teachers and the interns.”

At the March meeting, described in part at the opening of this chapter, the change in the nature of discourse among meeting members was sub-
stantial and widespread, involving adjunct and faculty supervisors. After the
incident described at the beginning of this chapter, in which an adjunct
supervisor interrupted the project director to reclaim control of the con-
versation, a shift in the nature of the interactions emerged.
Other adjunct supervisors eagerly referenced in detail their own field expe-
riences with their interns, thereby providing concrete examples of interac-
tions. As this fourth meeting progressed, the adjunct university supervisors’
discourse dominated much of the meeting time as they expressed opinions
on a variety of aspects of the edTPA instrument. In addition, when discussing
a topic, they occasionally spoke to or questioned one another directly, rather
than targeting all utterances to the project director.

The interactions of the group during this meeting illuminated two linked
group-sanctioned positions that had developed over the course of the
meetings and that the group now viewed as available not only to a few, but to
all participants: (a) possessors of valuable knowledge about edTPA, and
(b) teachers of that knowledge. The dissolution of the division between
who was allowed to teach (the project director, faculty, and OAA mem-
bers) and who was not (previously, adjunct university supervisors, except
occasionally for the newly retired faculty member) reflected a shift in the
dynamics of the group meeting and signaled a group-sanctioned change
in who could say what to whom, in what manner, and with what anticipat-
ed response. This shift reified the development of a community of prac-
tice that had not existed when the group began to meet four months ear-
lier. Because communities of practice occur as people take part in actions
whose significance they have collaboratively negotiated (Wenger, 1998),
the actions, both physical and discursive, of the group provided empiri-
cal evidence of mutually-constructed practices that defined the norms
of being a member of the edTPA implementation community. Thus, the
ways in which these group members (both faculty and adjunct university
supervisors, the pilot director, and OAA members) participated in their
edTPA implementation community shaped every individual’s experiences
while simultaneously defining the community in which the experiences
occurred. Further, as this community of practice developed and operated,
the members defined, enacted, and reified specific practices germane to
their particular community. The ways in which community members iden-
tified themselves and interacted with each other, as well as the products
they produced, all served as reflections of the common work and beliefs
of the community. Thus, to answer the question asked at the beginning
of this chapter—“How, then, just three months later, had a community
developed in which all members, regardless of their status at the univer-
sity or experience working with teacher candidates, were viewed as equal
contributors to the implementation of the new edTPA assessment?”—
the discourse and interactions of the individuals in the room reflected
the presence of a mutually-developed common community of practice
through which all individuals viewed the intern experience and their re-
lationships to it.
Months later, as the final meeting of the university supervisors and fac-
ulty began, the project director, sitting in a chair at the front of the circle of
tables, gave a quick overview of what they would be discussing that session
and ended the outline of the agenda by saying, “Unless anyone has other
things to discuss”—an inviting statement that contrasted sharply with the
discourse of the first several meetings.

While the meeting began with the director’s topic, eight minutes into the
session, an adjunct supervisor introduced a new topic, explaining her initial
concerns about assigning local scores to the edTPA product, but then confi-
dently explained how she tried to become familiar with the terminology of
the rubric by formulating questions to use with her interns. As she shared,
other university supervisors, both adjunct and tenured/tenure-track, turned
to look at her and nodded their heads. Another adjunct supervisor immedi-
ately shared an issue that she had with one intern who desired follow-up and
wanted to know if it was okay to provide that. The project director and an ad-
junct university supervisor simultaneously leaned forward to respond, but the
director quickly deferred to the adjunct supervisor. After the adjunct supervi-
sor provided his feedback, several adjunct and faculty university supervisors
confidently followed up with responses. The interactive exchange continued
for over six minutes during which adjunct university supervisors claimed the
floor more than 80% of the time, expressing their thoughts clearly in the lan-
guage associated with the new edTPA instrument.
In the final portion of this concluding meeting, Ellen asked if the adjunct
university supervisors could provide further feedback on an additional local
instrument that was being considered for implementation. Less than four sec-
onds passed before concrete suggestions began to come forward. All supervi-
sors engaged in a collegial discussion, infused with stories of their interactions
with their interns and clinical teachers from this semester and their sugges-
tions based upon those interactions.

When the conversation ended 31 minutes later, every university supervisor had contributed at least three times to the conversation.

Supervisor-Identified Changes
During their individual interviews at the end of the semester, several ad-
junct supervisors offered their thoughts on the impact of the events of the
pilot semester. As one supervisor commented, “First of all for me, it was
a much richer, richer experience because I felt on the team for the first
time.” She continued, “I was closer to the college than I have ever felt be-
cause of the meetings. I like going out to observe the interns, but I need
these meetings, too.”
That increase in affiliation translated into a sense of ease in offering
advice to the program faculty. Adjunct supervisors, who acknowledged that
they had never before made suggestions regarding content-area programs
and/or curriculum, readily thought of and voiced changes that might be beneficial, for example, areas where interns might need opportunities to
develop deeper understanding. As one said,

They are getting what they need in planning, from what I can see, and the
process and templates, all were supporting good planning. But they need to
work on assessment, what are the underlying issues? What are the patterns?
Not just did the students get the answers wrong or right, but what are the
patterns that I see? They need to lift up the rock and see what is underneath
that.

Another adjunct supervisor asserted,

They know their content, but learning issues and assessment issues? The third
piece of the portfolio: being able to really identify a strong theory base for
what they are doing and why they are doing it. The whole concept of language
and how does that affect learning, cause and effect—how does that apply to
this lesson, but also long term?

Adjunct supervisors indicated that they had never had the opportunity to
share their suggestions with program area faculty before, but that they felt
comfortable at that time in offering suggestions and confident that their
voices were heard and that their ideas were valued.
During their individual interviews, more than half of the adjunct uni-
versity supervisors also volunteered that they saw changes in themselves as
a result of being involved in the pilot experience, notably a greater sense
of confidence in their understanding about what makes for effective teach-
ing. One adjunct supervisor expressed that she believed that she was on
more solid footing in her conversations with clinical teachers, interns, and
program area faculty. Another commented, “I wanted to be more confident
in what I knew. I am now. I am more confident now.” A different adjunct su-
pervisor explained, “I know that I was more focused and more intentional
(this semester). I thought that I had been up to this point, but I just had
so many more opportunities this time.” These beliefs supported the notion
that as individuals engaged in teacher education foster an intern’s develop-
ment and movement, they experience growth in their own professional
development and solidification of their identity as mentors of future teach-
ers (Kwan & Lopez-Real, 2010).

Faculty-Identified Changes
The faculty reported similar responses to the events of the pilot semes-
ter. One faculty member explained that implementing a standardized ac-
countability instrument created outside the university “brought all closer to
the same understanding” of what was expected of the teacher candidates in
their internship experience, and she credited the increased level of com-
munication with prompting a shift in the way that faculty viewed adjunct
university supervisors. “At first (adjunct) university supervisors were quiet,
but by the end, we were all colleagues, rather than (adjunct) university su-
pervisors being seen as sub-employees,” she stated. Her use of the term
“sub-employees” suggests her acknowledgement of the prior existence of
a hegemonic structure that restricted adjunct university supervisors to low
status and implies her recognition of the disruption of that structure by the
development of the community of practice.
The project director also noted a change in the relationship of pro-
gram faculty and adjunct university supervisors, stating emphatically, “For
the first time in fifteen years, I felt that the (adjunct) university supervisors
worked side-by-side with the program area to ensure student success.” He
went on to say, “I was impressed by the mutual dialog that took place be-
tween the (adjunct) university supervisors and faculty. I believe that both
groups learned from each other.”
Another faculty member stated,

I found that as the university supervisors became more connected and collegial,
we began to unite as a community of instructional leaders. Ideas were shared;
practices were discussed and modified. The conversations that took place in the
meetings were the driving force behind what I deem to be this positive change.
It was in these meetings that a real collegial bond was nurtured.

Though faculty who were serving as university supervisors did not note a
shift in their own positions or changes in how they defined their own work,
they were quick to identify transformations in their relationships with ad-
junct university supervisors, citing an increase in collegiality and the impor-
tance of the face-to-face nature of their conversational interactions.

Conclusions From the Implementation Experience

The majority of the adjunct university supervisors had served in this po-
sition for many years at this university. Because of the loose nature of the
supervisor group, they had experienced significant freedom in construct-
ing their own supervisor identities and practices; consequently, over time,
they had established their own individual ways of “being” supervisors. By
participating in the implementation of edTPA, these experienced adjunct
university supervisors encountered three challenges: (a) one to their previ-
ously near-invisible relationship with program area faculty, (b) one to their
established identities, and (c) one to their existing understanding of what
it meant to supervise teaching interns.
The joint meetings that took place across this semester afforded greater
levels of face-to-face interaction between tenured/tenure-track and adjunct
university supervisors than members of either group had experienced in
the past. Because of the history of autonomy and physical distance among
our university’s supervisors and between faculty and adjunct supervisors,
the development of a robust community of practice was not anticipated
as the semester began, and indeed, the first two meetings offered no hints
of such a development. Over time, however, a community of practice did
emerge, and its presence allowed all individuals to position themselves and
to be positioned as knowledgeable members of the community authorized
to teach each other.
When the semester began, the project director and OAA members were
viewed as the more knowledgeable members of the team who would help
all university supervisors through the introduction and adoption of edT-
PA. By the end of the semester, however, when all community members
had conceptual knowledge of the edTPA instrument, the adjunct and ten-
ured/tenure-track university supervisors also had the additional knowledge
of what that instrument looked like in action, in relation to real interns:
knowledge that the project director, the members of the OAA, and other
faculty who did not supervise interns lacked. Drawing upon that knowledge,
the university supervisors told stories about their interns’ experiences, con-
cerns, and successes that all were eager to hear. In the process, all university
supervisors positioned themselves within the schools and in the meetings
as experts on the enactment of the new assessment and identified as more
self-assured and knowledgeable mentors.
While the events associated with implementing the new assessment in-
strument challenged the adjunct university supervisors’ notions of what
their work entailed, they discovered new opportunities to define their con-
tributions in relation to the development of beginning teachers. Asked to
dive into new technologies and language and ways of thinking, they re-
sponded at first hesitantly and then actively. Most found a new sense of
competence in recognizing components of effective teaching and an in-
crease in confidence that allowed them to suggest areas where interns and
their programs needed more concentrated focus. These new areas of ex-
pertise translated into an enhanced level of excitement about their own
involvement in teacher development.
Rather than acting as separate groups, isolated from each other, with dif-
ferent responsibilities and disparate positions within our teacher education
program, as the pilot semester ended, the participants were united as mem-
bers of a community of practice with a sense of shared knowledge, common
purpose and practices, and community-sanctioned positions and identities.

FOLLOW-UP REFLECTIONS:
SIX YEARS LATER, WHERE DO WE STAND?

Six years have passed since the last meeting of the edTPA implementation
unit and the end of the edTPA pilot. In those six years, we have seen sig-
nificant changes in teacher education at our university. All 17 of the uni-
versity’s teacher education programs now participate in edTPA, and during
the Fall 2016 semester, 114 interns completed and submitted edTPA port-
folios; 336 interns participated in this process in the Spring 2017 semester.
Rather than the 12 university supervisors required for the pilot semester,
73 university supervisors, both adjunct and faculty, were employed for the
Spring 2017 interns. Many changes have occurred since our initial pilot.
For instance, when Elementary Education, the largest teacher education
program in the university, elected to implement edTPA in Fall 2011, we saw
the demise of our monthly meetings due to (a) the sheer number of individuals
involved, (b) the distance that some university supervisors lived from the
university, and (c) the cost of supplying the significantly increased number
of university supervisors with stipends. The members of the edTPA imple-
mentation community have experienced some changes, as well. Of the
original ten adjunct university supervisors, six continue to supervise interns
and all but two faculty members are still involved in implementing edTPA.
In an effort to identify how the members of the edTPA implementa-
tion community of practice from 2011 currently regard that experience,
we contacted the individuals who still serve as university supervisors for our
university and explained our request. Two members of the original imple-
mentation group were unable to take part in the study, but the remaining
supervisors agreed to participate.
We constructed and administered a survey that included both multiple-
choice and open-ended items. The last open-ended question provided re-
spondents with an opportunity to offer additional comments. The response
rate for each question in the survey was 100%. The supervisors who partici-
pated in the survey have maintained an active level of engagement in their
work as university supervisors, and they regularly attend supervisor meet-
ings, even though they are no longer paid stipends to do so. Several of the
supervisors noted that they are far more knowledgeable now about edTPA
and are confident when explaining the requirements of the edTPA product
to their current interns.
As they recalled their reactions to the implementation semester, they
frequently noted that it was a challenging experience because the assess-
ment instrument was new to everyone involved. Even as they recalled how
feelings of apprehension were frequently present during the semester, a
sense of community infused their recollections, as evidenced in the use
of plural first person pronouns in their statements: “We were finding our
way, working through the process, figuring out what was expected of us.”
Another offered a more effusive response:

I loved being a part of a network of educators all working toward the same
goal. Experiencing the beginning of an important endeavor and having a key
role in the process was invigorating. The fact that we had the opportunity as
secondary level educators to work across discipline areas was also a plus.

Finally, one member of the group summed up an overwhelming sense of
engagement, stating that the university supervisors experienced "more
involvement" than they ever had in the past.
In considering how the edTPA pilot semester affected their practices,
participants consistently reported a positive impact that left them feeling
more knowledgeable and confident about their work as university supervisors
in general. One participant declared that being a part of the edTPA launch
"forced me to be less declarative and more interrogative. I went form [sic]
identifying strengths and weaknesses to asking students about their
perceptions of their performance relative to expectations." Another professed:

I have become more analytical in my approach to intern supervision. The
rubrics continue to be useful tools in post-conference sessions. I find that
having this framework makes it easier to have a common language from the
start of the internship with the intern and the CT (clinical teacher).

Another explained how edTPA influenced the focus of his/her intern ob-
servations: “It (edTPA) has made me much more aware of what to look for
when observing. Since the format for the edTPA is indepth [sic], as every
lesson should be, I also focus on the big picture.”
When discussing how the Spring 2011 experience influenced their per-
ception of university supervisors’ roles, each individual acknowledged a
shift in how they viewed the work associated with the position. Brief state-
ments like, “I understood more” and it “changed into a more technical
role,” as well as, I had “more understanding and participation” mingled
with longer responses. One participant noted a belief that the experience
“moved us” from “instructional evaluators to instructional coaches” and
another asserted that being part of the edTPA implementation and discuss-
ing the focus of edTPA with other university supervisors, whether those
supervisors were faculty members or adjuncts, continues to reinforce a be-
lief in the need for interns to be able to “develop units of study that are
grounded in a focal understanding that all students must acquire. I have to
be sure my interns know how to implement strategies to instruct and assess-
ments to determine success for all students.”
One supervisor stated that being a part of the edTPA initiative continues to
influence his approach toward supervising and reaffirmed a belief

that the role of US [supervisors] is important, because it affords an outside
perspective. I find it all too easy for interns to settle into the climate of their
assigned school and lose sight of the fact that they are preparing for a career
in education as opposed to surviving or thriving in their assigned school.

The opportunity to meet and work with other university supervisors was
mentioned several times as a key benefit of the Spring 2011 experience.
One individual compared her current experience to her first introduction
to edTPA, stating

I enjoyed meeting with other University Supervisors. [Sometimes] we [feel as
if we belong to] a group that "are on an island by ourself" [sic]. [Now] every-
one is helpful with emails, and I enjoy the Elementary Pod which reminds me
of those meetings we had learning about the edTPA process.

Another supervisor recalled that “the launch year provided the opportu-
nity for me to get to know people in other content areas, both faculty and
adjuncts. From that year on, I have felt that I am really a part of the team.”
The statements offered by the university supervisors from the implemen-
tation semester reflect three components of supervision that appear to have
emerged from that experience. First, they shifted their views of themselves
as university supervisors, moving from evaluator to coach and serving as a
grounding force that reminds interns to focus on their students and not
simply themselves. Second, they viewed their work differently; some be-
came more analytical, some changed the focus of their observations, some
recognized the need to focus not just on the “what,” but also the “how” of
being a supervisor. Third, they viewed themselves as part of a team, no lon-
ger alone on Supervision Island.
Taken together, the supervisors’ recollections of the edTPA implementa-
tion experience intimate something more: a finding that may go beyond
this particular group or our specific institution. Taken together, the survey
responses suggest that participation in a community of practice that centers
upon a common goal, in this case the goal of enhancing the development
and assessment of teacher candidates, has the potential to dissolve the he-
gemonic stratification that traditionally infuses university settings. Univer-
sity supervisors, regardless of their positions within the university, left the
2011 implementation with an enhanced view that they were making valu-
able contributions to their interns’ development and possibly their own.
Finally, the participants’ responses affirm that being a participant in the
edTPA pilot still resonates six years after the original experience, shaping
how these individuals view various aspects of their work as university super-
visors, how they view themselves, and the degree of importance that they
assign to being part of a team.
CONCLUSION

When our university’s College of Education decided to implement edTPA


in spring 2011, our focus was on the organization and information required
to prepare each participating intern to complete and submit an electronic
portfolio. The individuals who were asked to participate were selected be-
cause they worked either as faculty or adjunct university supervisors with
interns from the three programs involved in the pilot. The goal was a suc-
cessful launch of a new assessment tool capable of providing programs with
an understanding of their interns’ capabilities to design and implement
lesson plans and to assess student learning.
No mention, and likely no thought, was given to the possibility of a
community of practice developing among faculty supervisors and adjunct
supervisors . . . and yet, such a community did emerge. By addressing a
common purpose, a cohesive experience organically evolved and produced a
community that members continue to value six years later. When individuals
believe that their voices, their opinions, and their experiences are heard
and respected, powerful relationships develop.
As teacher-education programs currently strive to develop new educa-
tors who enter their first classrooms as well-prepared beginning teachers,
a new focus is emerging on the impact of the internship experience on
teacher candidates. Darling-Hammond asserts that "[s]trengthening clinical
practice in teacher preparation is clearly one of the most important
strategies for improving the competence of new teachers and the capacity
of the teaching force as a whole" (2014, p. 557). Thus, university supervisors,
whether they have adjunct or full-time faculty positions, have the unique
opportunity and critical responsibility to help candidates connect peda-
gogical knowledge they studied in teacher preparation coursework with the
application of that knowledge in their evolving practice. By dissolving the
artificial barriers that negatively affect relationships while striving to build
communities of practice among all university supervisors, we serve our edu-
cation candidates and their development more effectively.

REFERENCES

Baldwin, R. G., & Chronister, J. L. (2001). Teaching without tenure: Policies and practices
for a new era. Baltimore, MD: Johns Hopkins University Press.
Baldwin, R. G., & Chronister, J. L. (2005). What happened to the tenure track? In
R. P. Chait (Ed.), The questions of tenure (pp. 125–159). Cambridge, MA: Har-
vard University Press.
Beck, C., & Kosnik, C. (2002). Professors and the practicum: Involvement of univer-
sity faculty in preservice practicum supervision. Journal of Teacher Education,
53(1), 6–19.
Clifford, G. J., & Guthrie, J. W. (1990). Ed School. Chicago, IL: The University of
Chicago Press.
Cochran-Smith, M., Feiman-Nemser, S., McIntyre, D. J., & Demers, K. E. (Eds.).
(2008). Handbook of research on teacher education. New York, NY: Routledge/
Taylor & Francis Group and the Association of Teacher Educators.
Cochran-Smith, M., & Zeichner, K. M. (2005). Studying teacher education: Enduring
questions in changing contexts. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Corbin, J., & Strauss, A. (2008). Basics of qualitative research (3rd ed.). Los Angeles,
CA: SAGE.
Creswell, J. W. (2013). Qualitative inquiry and research design (3rd ed.) Los Angeles,
CA: SAGE.
Curtis, J. (2014). The employment status of instructional staff members in higher educa-
tion, fall 2011. Retrieved from the American Association of University Professors
website: https://www.aaup.org/sites/default/files/files/AAUP-InstrStaff2011-April2014.pdf
Darling-Hammond, L. (2014). Strengthening clinical preparation: The Holy Grail
of teacher education. Peabody Journal of Education, 89(4), 547–561. DOI:
10.1080/0161956X.2014.939009.
Enz, B. J., Freeman, D. J., & Wallin, M. B. (1996). Roles and responsibilities of the
student teacher supervisor: Matches and mismatches in perception. In D. J.
McIntryre & D. M. Byrd (Eds.), Preparing tomorrow’s teachers: The field experi-
ence, teacher education yearbook IV (pp. 131–150). Thousand Oaks, CA: Corwin
Press, Inc.
Finders, M. J. (1997). Just girls: Hidden literacies and life in junior high. New York, NY:
Teachers College Press.
Gee, J. P. (1999). An introduction to discourse analysis. London, England: Routledge.
Green, J. L., & Dixon, C. N. (1993). Talking knowledge into being: Discursive and
social practices in classrooms. Linguistics and Education, 5(3–4), 231–239.
Guyton, E., & McIntyre, D. J. (1990). Student teaching and school experiences. In
W. R. Houston (Ed.), Handbook of research on teacher education (pp. 514–534).
New York, NY: Macmillan.
Halcrow, C., & Olson, M. R. (2008). Adjunct faculty: Valued resource or cheap la-
bor? Focus on Colleges, Universities, and Schools, 2(1). Retrieved from http://
www.nationalforum.com/Electronic%20Journal%20Volumes/Halcrow,Cheryl,FOCUS,Vol2,Num1,2008.pdf
Kezar, A., & Sam, C. (2010). Special issue: Non-tenure track faculty in higher educa-
tion—Theories and tensions. ASHE Higher Education Report, 36(5), 1–91.
Kwan, Y., & Lopez-Real, F. (2010). Identity formation of teacher-mentors: An analy-
sis of contrasting experiences using a Wengerian matrix framework. Teaching
and Teacher Education, 26(3), 722–731.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation.
Cambridge, England: Cambridge University Press.
Lemke, J. L. (2001). Articulating communities: Sociocultural perspectives on sci-
ence education. Journal of Research in Science Teaching, 38(3), 296–316.
Levin, J. S., & Montero Hernandez, V. (2014). Divided identity: Part-time faculty in
public colleges and universities. The Review of Higher Education, 37(4), 531–558.
Levin, J. S., Shaker, G. G., & Wagoner, R. (2011). Post neoliberalism: The profes-
sional identity of faculty off the tenure-track. In B. Pusser, K. Kempner, S.
Marginson, & I. Ordiricka (Eds.), Universities and the public sphere: Knowledge
creation and state building in the era of globalization (pp. 197–217). New York, NY:
Routledge.
McIntyre, D. J., Byrd, D. M., & Foxx, S. M. (1996). Field and laboratory experience.
In J. Sikula, T. J. Buttery, & E. Guyton (Eds.), Handbook of research on teacher edu-
cation (pp. 171–193). New York, NY: Simon & Schuster Macmillan.
Moorehead, D. L., Russell, T. J., & Pula, J. J. (2015). Invisible faculty: Department
Chairs’ perceptions of part-time faculty status in Maryland four-year public
and private higher education institutions. Delta Kappa Gamma Bulletin, 81(4),
102–119.
Poyas, Y., & Smith, K. (2006). Becoming a community of practice—The blurred
identity of clinical faculty teacher educators. Teacher Development, 11(3),
313–334.
Robinson, S. (2017, February). We are the engine of a democratic society. Opening
address of the American Association of Colleges for Teacher Education,
Tampa, FL.
Schuster, J. H., & Finkelstein, M. J. (2006). The American faculty: The restructuring
of academic work and careers. Baltimore, MD: Johns Hopkins University Press.
Slaughter, S., & Rhoades, G. (2004). Academic capitalism and the new economy: Markets,
state, and higher education. Baltimore, MD: Johns Hopkins University Press.
Slick, S. (1998). The university supervisor: A disenfranchised outsider. Teaching and
Teacher Education, 14(8), 821–834.
Steadman, S. C., & Brown, S. D. (2011). Defining the job of university supervisor:
A department-wide study of university supervisors’ practices. Issues in Teacher
Education Journal, 20(1), 51–68.
Titscher, S., Meyer, M., Wodak, R., & Vetter, E. (2000). Methods of text and discourse
analysis. London, England: SAGE.
United States Department of Education. (2006). A test of leadership: Charting the
future of U.S. higher education. Retrieved October 1, 2007, from http://www.
ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf
van Langenhove, L., & Harre, R. (1999). Introducing positioning theory. In R.
Harre & L. van Langenhove (Eds.), Positioning theory (pp. 14–31). Cambridge,
MA: Blackwell.
Weiner, E. J. (2005–2006). Keeping adults behind: Adult literacy education in the
age of official reading regimes. Journal of Adolescent and Adult Literacy, 49(4),
288–301.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cam-
bridge, England: Cambridge University Press.
Wilson, S. M., Floden, R. E., & Ferrini-Mundy, J. (2002). Teacher preparation re-
search: An insider's view from the outside. Journal of Teacher Education, 53(3),
190–204.
CHAPTER 3

FACULTY INVESTMENT
IN STUDENT SUCCESS
A Four-Year Investigation
of edTPA Implementation

Gaoyin Qian, Harriet Fayne, and Leslie Lieman


Lehman College–City University of New York

Despite increased scrutiny of how teacher education programs judge the
quality of their graduates, relatively little is known about the steps that lead
up to the development of robust assessment systems. Federal and state
teacher education reform agendas increasingly require a type of nimble-
ness paired with an accountability structure for which teacher-education
programs are not well prepared. Changing anything in higher education is
no mean feat; rapid change is nearly impossible.
In an effort to gain a nuanced understanding of how assessment can
drive change, we, as school of education (SOE) leaders (the dean and asso-
ciate dean together with the educational technology coordinator, director
of field experiences and professional development network, and academic
assessment coordinator) provided opportunities for faculty engagement

related to this issue. Beginning in Fall 2012, we conducted a longitudinal
study that documented the process of implementing the teacher performance
assessment known as edTPA® in the SOE at Lehman College. Our
students come from heterogeneous ethnic, racial, cultural, and socioeco-
nomic backgrounds. Many are from immigrant families, have acquired
English as a second language, and are first-generation college students.
Add to the mix that our students often fare poorly on standardized exams
and could be negatively impacted if we failed to develop a clear, efficient
strategy that would yield positive outcomes. So, when New York State in-
troduced new certification requirements during one academic year to be
enacted within the next year, we had to move quickly to redesign curricula
in order to prepare candidates for the edTPA.
We realized at this time that the old way of doing business in teacher
preparation was out; however, we needed to get the faculty to believe it.
Unit leaders challenged faculty to, in the words of Lee Shulman (2007),
“take control of the narrative” (p. 25). Using a white water rafting meta-
phor, he contends, “the best way to get where you want to go when nego-
tiating the rapids in a fast-moving stream is to paddle faster than the current”
(p. 25). How did we get faculty to paddle faster when many of them didn’t
want to paddle at all?
First, we adopted a covert leadership approach (Mintzberg, 1998), one
that advocates motivating, supporting, and coordinating faculty’s efforts
rather than directing assignments or controlling activities. Without top-
down directives, faculty came to realize that they needed to redesign pro-
grams in response to a New York State mandate that required candidates to
pass edTPA before being granted a teaching certificate. Across four years,
faculty participated in a series of professional development activities that
were designed to support the edTPA implementation.
Second, we acknowledged the intellectual capital within the SOE by lever-
aging communities of practice as a vehicle for change. Wenger, McDermott,
and Snyder (2002) define communities of practice as informal groups that
are organized by people who “share a concern, a set of problems, or a pas-
sion about a topic” (p. 4). In order for these informal groups to function
well, Wenger (2008) suggests that leaders provide: locations, reasons for
groups to convene, financial support, time for meetings, and opportunities
to be creative. Since there is a natural ebb and flow to how these communi-
ties coalesce, as covert leaders, we wanted to publicly acknowledge those
individuals whose multi-membership allowed them to “be brokers across
boundaries” (p. 255). We set the parameters, provided financial incentives,
and then allowed professionals to act professionally.
Third, we identified critical needs and provided resources to meet them.
We knew that it would not be easy for our candidates to earn passing scores.
edTPA requires close reading of directions, writing to complicated prompts,
facility with technology, and good organizational skills. Our candidates
come to us with significant challenges. A typical candidate is a young mother
who acquired English when she entered formal schooling and whose family
income is modest at best. Most of our preservice teachers juggle part-time
or full-time jobs while attending school.
In this chapter, we focus on how college faculty and unit leadership
implemented the externally mandated performance assessment. Since Fall
2012, we have fostered professional growth by orchestrating and support-
ing activities that are aligned with the five dimensions of a learning com-
munity identified by Hall and Hord (2011; see Table 3.1). We grounded
our work in the concerns-based adoption model (Hall & Loucks, 1978).
Extending Fuller’s theory of teacher development (1969), Hall and Loucks
theorized that implementation of an educational innovation is a process

TABLE 3.1  Five Dimensions of a Professional Learning Community Aligned With Implementation Bridge Actions: 2012–2016

Supportive and Shared Leadership
• Phase One (2012–2013, Preparation): Fall semester: Re-Prep leadership team constituted; plans Winter 2013 edTPA launch during SOE retreat. Spring semester: team approves work group proposals.
• Phase Two (2013–2014, Exploration): Re-Prep staff members visit all student teaching seminars and answer program-specific questions.
• Phase Three (2014–2016, Acculturation): Re-Prep staff monitors edTPA performance and disseminates findings to seminar instructors and program coordinators.

Shared Values and Vision
• Phase One: Retreat One: edTPA Launch.
• Phase Two: Retreat Two: Local Scoring.
• Phase Three: Retreat Three: Assessment Literacy.

Supportive Conditions
• Phase One: Camcorders purchased; local edTPA website developed.
• Phase Two: edTPA lab open four nights per week; edTPA support days for student teachers (2 for Fall; 3 for Spring).
• Phase Three: Targeted workshops developed by faculty on two specific rubrics that had yielded low scores during Spring 2014; locally relevant video cases built; seminar instructors' workload reduced.

Collegial Learning
• Phase One: Work groups prepare deliverables.
• Phase Two: Debriefing sessions with supervisors and seminar instructors.
• Phase Three: Faculty teams curate video footage collected to be used as exemplars in methods classes.

Shared Personal Practices
• Phase One: Work group presentations.
• Phase Two: Candidate/faculty testimonials posted on edTPA website.
• Phase Three: Posters displayed in SOE hallways.
and that faculty’s feelings and actions can be investigated and explained by
the Concerns-Based Adoption Model (CBAM) with its three underlying di-
mensions: (a) affective (stages of concern), (b) actions (levels of use), and
(c) fidelity (innovation configuration map).
We use a chronological narrative structure that takes the reader on a
journey that unfolded in three phases: Phase 1 (Preparation)—Unit leader-
ship developed a targeted set of experiences designed to encourage faculty
to make a shift from teacher-centered to student-centered and program-
matic concerns; Phase 2 (Exploration)—Formal and informal communities
of practice across the School of Education grappled with edTPA handbooks,
terminology, and technological demands; and Phase 3 (Acculturation)—
edTPA was normalized and became part of SOE culture (see the Appendix
of this chapter for major activities). At the end of the chapter, we report on
three studies conducted across the four years. For each, we provide a sum-
mary of our research questions, results, and findings.

PHASE ONE: PREPARATION

In Fall 2012, the Re-Prep (re-envisioning educator preparation) team was
constituted to lead the School of Education's edTPA implementation
initiative. The Re-Prep team was composed of unit leaders, faculty representing
the academic departments and programs, and staff members who specialize
in educational technology, assessment, and clinical placements. Planning a
two-day retreat to introduce all faculty to the design, details, and impact of
edTPA was the first order of business for the Re-Prep team. We agreed that
a guest speaker who had experience in the implementation of the edTPA at
her own university would be the best person to get the conversation started
on Day 1. She provided an overview of the assessment’s critical components
and a framework for aligning them to current curricula. On Day 2, faculty
members formed work groups and began to brainstorm implementation
plans. Forty-one faculty and staff attended the retreat.
After this first retreat, faculty leaders were identified and asked to lead
each of the program work groups. The groups were given a full semester
to come up with strategies to incorporate edTPA requirements into their
curricula and submit deliverables (sample syllabi, major assignments, and
formative assessments) to the Re-Prep team prior to the next academic year.
The goal was to learn from one pilot semester (Fall 2013) using these im-
plementation plans before Spring 2014, the first high-stakes semester. Nine-
teen full-time and seventeen part-time faculty participated in the seven pro-
gram-based groups, meeting, corresponding, and planning to determine
what changes were needed in courses and field experiences to ensure stu-
dent success on the new performance assessment. Each member received
a modest stipend for a minimum of 18 hours of work. By June 2013, the
faculty work groups completed and presented: (a) revised student teaching
seminar syllabi, (b) community/school context assignments to be used in
introductory methods courses, (c) directions for using videotaped teaching
segments in methods courses and student teaching seminars, and (d) cur-
riculum maps with edTPA assignments embedded across courses of study.
The curriculum map prepared by the elementary education work group
can be found in Table 3.2.
Another major Re-Prep team effort was getting the message out to inter-
nal and external stakeholders. During the semester following the January
2013 retreat, periodic updates on implementation were provided during
monthly SOE meetings. Early adopters gave presentations of how they were
modifying their courses or revising field experiences to align with edTPA
components. The educational technology coordinator offered face-to-face
workshops and posted online resources.
The director of field experiences scheduled two edTPA orientation
sessions in May 2013 in preparation for the fall semester, inviting student
teachers to one and cooperating teachers and college supervisors to the
other. Two candidates and one college supervisor who participated in a
small edTPA pilot in Spring 2013 served on a panel to provide accounts of
their own experiences with edTPA. Official communications were sent to
all partner schools and district offices describing the performance assess-
ment and indicating that candidates would need the cooperation of school
personnel in order to satisfy requirements.
As part of the effort to get the message out and to establish effective
and long-term communication with our candidates, the SOE educational
technology coordinator designed a website dedicated to edTPA. It began
with a general overview and technology-related user guides and grew into a
repository for support materials that helped candidates to present evidence
according to required submission standards. The site provided all current
candidates, faculty, and aspiring education students easy access to infor-
mation about the assessment (http://www.lehman.edu/academics/education/edTPA/index.php).

PHASE TWO: EXPLORATION

Participation in activities and initiatives during Phase 1 not only gave faculty
a solid understanding of the performance assessment but also resulted in
attitudinal changes. There was a notable shift from teacher-centered to stu-
dent-centered and programmatic concerns (Fayne & Qian, 2016). With this
shift came a deep dive into the redesign of courses and clinical experiences.
Faculty members, given small financial incentives, worked collaboratively,
formed communities of practice across the SOE, and moved forward with
the first iteration of edTPA.

TABLE 3.2  Proposed Sequence of Graduate Elementary Program Courses Aligned With edTPA Requirements

Child Development
Child Study Report
• Stronger emphasis on using/quoting theory, pedagogy, and child development in supporting statements in assignment.

Program Design
Classroom Observation Hours and Report
• Observation topics are infused with academic language from edTPA.
• Observation report includes description of school context assignment and uses exact language required for context description in edTPA.
Lesson Plan Format
• Introduction of lesson plan format that meets edTPA requirements and is used program wide.
Analyzing Teaching Videotapes
• Practice analyzing videotapes of other teachers in edTPA areas (i.e., classroom environment).

Art Methods, Science Methods, Music Methods
Lesson Planning
• Practice writing series of lessons based on edTPA lesson plan format and integrating common core standards.
• Teach 2 lessons to small group.
• Write reflective commentary on planning, instruction, and assessment.

Social Studies Methods
Unit Planning
• Create a project-based unit plan aligned with the edTPA lesson plan format.
• Construct a series of teaching lessons that integrates literacy, social studies, and common core standards and includes assessment—preparation for writing unit for edTPA.
• Write reflective commentary on planning, instruction, and assessment.
• Write analytical paper which combines observations and teaching and makes connections supported by theory and readings.

Literacy Methods
Unit Planning
• Create a literacy unit plan aligned with common core standards based on edTPA requirements and teach to a small group of students.
• Practice for edTPA sections: planning for literacy, instructing and engaging students in literacy, and assessing students' literacy learning.
• Write reflective commentary on planning, instruction, and assessment.

Math Methods
Unit Planning and Videotaping
• Create a series of math lessons aligned with common core standards and based on edTPA format.
• Teach 2 lessons to small group.
• Videotape 1 lesson.
• Analyze teaching and make suggestions to inform future teaching.
• Analyze work samples, use data to inform future teaching practice.
• Write reflective commentary on planning, instruction, and assessment.

Capstone Seminar: Teacher as Researcher; Student Teaching and Seminar
Complete and Submit edTPA

Note: Academic language has been infused throughout the program.
Re-Prep team members visited all student teaching seminars to address
program-specific challenges. In addition, they organized meetings with
field supervisors so that their concerns about the impact of edTPA on clini-
cal practice could be heard. Perhaps most important were edTPA updates
at all formal faculty meetings. These updates included best practice presen-
tations by unit faculty and staff.
During Fall 2013, all student teachers were required to prepare a full
edTPA submission, even though it was not yet a high-stakes certification re-
quirement. The fall data collection was a pilot, since the New York State Ed-
ucation Department (NYSED) had set May 1, 2014 as the implementation
date, and a majority of participating students would graduate by January
2014. Candidate portfolios were collected in Taskstream, the unit’s ePortfo-
lio system, and retrieved during January 2014 for a local scoring exercise.
The Re-Prep team organized two half-day sessions. Using a PowerPoint
presentation developed by Stanford Center for Assessment, Learning, and
Equity (SCALE; https://scale.stanford.edu), the SOE academic assessment
coordinator walked the entire group through a sample submission. Then
faculty, first independently and then collaboratively, reviewed program-spe-
cific edTPA portfolios and scored them using the SCALE three-point local
scoring rubric as opposed to the five-point official edTPA rubric.
At the end of Fall 2013 and again at the end of Spring 2014, Re-Prep
members met with seminar instructors and student teaching supervisors.
The discussion focused around: (a) academic challenges and psychologi-
cal stress that candidates were experiencing, (b) edTPA components such
as videotaping and assessments, (c) logistics in managing the seminar and
advising candidates, (d) ways of following up with candidates who failed to
meet the edTPA cut-off, and (e) technical aspects of edTPA implementa-
tion. From the unit’s perspective, organizing these initial discussions among
clinical faculty representing different disciplines was a critical feature of
these end-of-semester debriefs. As clinical faculty reflected on their edTPA
experiences with candidates, common concerns surfaced and hearing each
other’s strategies, ideas, and recommendations proved valuable. (Note: In-
structor comments are included in quotations in the paragraph below.)
One major area of concern was English proficiency. “If I cannot under-
stand the spelling and syntax, what can I do about the content?” Content
mastery was deemed to be necessary but faculty were realizing that other
skills, especially English language proficiency, also needed attention. “If a
candidate is strong in math but the kids cannot understand his English, how
can I help him?” This concern led to questions about admissions criteria
and whether standards had to change as a result of edTPA. “At some point,
we will need to make tough decisions about who to admit. Do we need
explicit checkpoints throughout the program? If we don't have stricter
standards, we need more supports." There was also a great deal of conversation
about the stress that candidates were experiencing. A frequent adjective
used to describe student teachers was “overwhelmed.” What caused this
feeling? For some it was the videotaping requirement. For others, it was the
high-stakes nature of the assessment. “Students are saying, I can’t do this, I
am not perfect.”
In the wake of these discussions, the unit built support structures, which
included: (a) opening an edTPA computer lab, (b) providing more con-
tent on the edTPA website, and (c) offering on-campus edTPA days to stu-
dent teachers. The SOE opened the edTPA computer lab four evenings per
week. This lab became a space where faculty and candidates gathered to get
assistance with technical aspects of the assessment. Under the supervision
of the unit’s educational technology coordinator, staff provided support
throughout the semester. No assumptions were made about candidates’
access to or expertise with technology, thus helping candidates to stay fo-
cused on teaching by relieving them of worries about the technological
demands of the evidence chart submission requirements. The lab was the
place where candidates checked out camcorder bundles as they prepared
their video segments, sought out technical advice, and organized their work
as they prepared to submit for scoring. Perhaps even more important than
the technical assistance provided was the emotional support. The lab was
an informal gathering spot where questions were answered either by staff
or by peers. Exit ticket feedback indicated that lab staff were deemed to be
responsive, knowledgeable, and patient.
The SOE edTPA website complemented lab services. Recognizing that
students needed access to multiple means of support, the educational tech-
nology staff designed step-by-step user guides that were posted online so
students could continue to work independently. The website was a portal
to additional edTPA resources, offered a linear overview of the require-
ments, and gave a sneak peek to rising education students about edTPA.
For candidates preparing for official submission, the website established
a standard protocol for recording, downloading, editing, and uploading
video segments; anticipated a range of questions; and offered guides for
submitting edTPA evidence.
The SOE sponsored three on-campus edTPA workshop days during
student teaching. The agenda for these days included two hands-on, task-
based training sessions. The first was led by the lab team on video capture
and analysis and the second was conducted by clinical faculty on specific
edTPA elements, chosen and developed in response to the evaluation of
prior candidate performance on the assessment. Time was also set aside for
individual consultation with lab and/or writing center student assistants.
PHASE THREE: ACCULTURATION

By August 2014, we had entered the normalizing phase of the
implementation. Markers of the new cultural norms included offering
data literacy workshops, developing local video cases, and encouraging co-
teaching initiatives.
Data literacy workshops were designed to help faculty use assessment
data to inform practice. With data gathered over a two-year period, the
Re-Prep team made strategic decisions about where the unit needed to go
next. Consistently, Task 3: Assessment yielded the lowest rubric ratings within
and across academic programs. In response, the team planned a third re-
treat that focused on formative assessment practices.
Northwest Evaluation Association customized a full-day in-service on
three Keeping Learning on Track (KLT) formative assessment strategies:
(a) sharing learning targets, (b) eliciting evidence, and (c) providing feed-
back (NWEA, 2014). KLT techniques had already been implemented as a
signature practice in an elementary residency program. The overarching
objective was to enhance faculty’s understanding, use, and adaptation of
these strategies in courses and field experiences, thereby increasing the
likelihood that candidates would use them to address Task 3 requirements.
As a follow up, we recruited two instructors to develop face-to-face
workshops on specific edTPA commentary elements that consistently sup-
pressed total scores (Rubric 10: Analyzing Teaching and Rubric 13: Student
Use of Feedback). For several semesters these workshops were included in
our On-Campus Day 2 and On-Campus Day 3 (respectively), during which
the instructor led close reading of rubrics, provided sample evidence, and
encouraged small and large group discussions across content areas.
Knowing that we would not be able to sustain the face-to-face model each
semester, we offered the material online. The educational technology coor-
dinator assisted each faculty member in scripting the workshop, preparing
relevant visuals, and recording screen cast sessions that were posted in our
learning management system. Building on the positive response to these
targeted mini-lessons, a third module was created on academic language.
Students now complete the modules individually or in groups, and exit tick-
ets continue to reflect their value.
One exciting offshoot of promoting faculty professional growth was de-
veloping locally relevant videotaped teaching episodes. By watching and an-
alyzing teaching in neighborhood classrooms under the tutelage of course
instructors, candidates could apply the terminology and determine how to
garner evidence to support claims. From Fall 2014 to Spring 2015, the edu-
cational technology coordinator, capitalizing on expertise within Lehman
and at another City University of New York campus, organized a working
group charged with the responsibility of planning ways to capture useful
video footage. The group identified key instructional practices and created
assignments aligned to edTPA. With a focus on student learning, six public
school classroom teachers and two literacy coaches participated in the ini-
tial project, each offering one to three lesson units to be videotaped. With
close to 60 hours of video, faculty began to curate the video into lesson
plan chunks, ranging from 10- to 30-minute sections to 2- to 5-minute moments.
These were then matched with companion assignments that were uploaded
as modules into our learning management system. In following semesters,
faculty in other content areas adapted the assignments, reviewed unused
video footage, and created additional locally relevant video modules.
In order to house the above videos, assignments, online rubric modules,
and more, the educational technology coordinator created a Blackboard
organization, School of Education Student Modules. All students are en-
rolled in the Blackboard organization as soon as they take any education
course. This system-wide design element (as opposed to individual program
or course solutions) has enabled the SOE to offer cross-disciplinary online
support and collaboration. This shared environment in our learning man-
agement system has proved to be very valuable.
The edTPA provided us with a compelling reason to refine and expand
co-teaching as the unit approach to student teaching. We contracted with
two of our clinical instructors to enhance Teaching in Tandem, a locally de-
veloped face-to-face professional development workshop for cooperating
teachers and with St. Cloud University to design an online alternative for
those who could not get to campus. We hoped that employing co-teaching
strategies would make edTPA implementation in the classroom feel less
onerous. The edTPA lesson segment, which could seem artificial, would be
organic if it grew out of co-planning across the entire semester of student
teaching.

OUR IMPLEMENTATION STUDIES

Study 1: Faculty’s Concerns and Shift of Focus

In this study, we concentrated our research on two questions: (a) What
were SOE faculty's initial concerns regarding the edTPA as reflected in
responses to the Stages of Concern Questionnaire (SoCQ; Hall, George, &
Rutherford, 1979) and in focus groups? and (b) Were implementation bridge
(Hall & Hord, 2011) activities effective in allaying anxiety and reducing
resistance?
In order to tap individual faculty’s perceptions about edTPA, we admin-
istered the SoCQ (Hall et al., 1979) at the January 2013 retreat. The 35-item
SoCQ was used to take the temperature of individuals at this early stage
and to serve as a baseline measure. In general, prior to the official launch,
participating faculty members demonstrated a typical non-user SoCQ pro-
file, indicating that they were not fully aware of the edTPA (Fayne & Qian,
2016). This finding pointed out that we needed to provide opportunities
for faculty members to learn more about the edTPA so that they could
begin to envision how the assessment would impact programs and policies
across the unit.
We administered the SoCQ (Hall et al., 1979) again at a second retreat
held in January 2014, one year after the formal introduction of the inno-
vation, both to gauge progress and to determine next steps in the re-en-
visioning process. Results indicated that faculty experienced a shift from
teacher-centered to student-centered and programmatic concerns (Fayne
& Qian, 2016). The following email message illustrates one instructor’s
student-centeredness:

I must take a moment to tell you some good news: Adriana passed the edTPA!
I know this may not make the cover of the New York Times, but for me, it
should. Adriana worked tirelessly to complete this test. A single mother, she
brought her children to the Lehman Library every night for weeks and stayed
until it closed. I received work from her at 3 a.m. some mornings and then
saw her teaching the same day, with no notable exhaustion. Cathy, you saw
how low she felt at one of the campus days and your words of encourage-
ment propelled her to finish. Leslie, thanks so much for all your support in
the lab. And Pam, none of this would happen without your willingness to
take on yet another project (not to mention your de-stressing Adriana when
I overstressed her!). I owe you a big debt of gratitude for all your help this
year . . . By the way, Adriana scored 18 on the assessment . . . not bad for the
toughest part of the test!

By the end of Phase 2, focus group data reinforced this shift. Faculty had
gained enough knowledge and received sufficient support regarding tech-
nical aspects of the edTPA that they were able to concentrate on more com-
plex issues. They started to think practically and realistically about specif-
ics: curriculum changes, time management for instructors and candidates
alike, benefits and drawbacks of the assessment itself, the importance of
teamwork, and the threat (both perceived and real) of increased external
control of program design.

Study 2: Faculty Work Groups Evolve Into Communities of Practice

For the second study, our questions were: (a) What impact did the edTPA
implementation have on course redesign (as measured by content analysis
of curriculum artifacts)? and (b) Did faculty attitudes continue to change
during Phase 2 (as measured by the SoCQ and focus groups)? The faculty
work groups formed during Phase 1 continued and added members. The
discipline-based communities of practice took responsibility for modifying
capstone seminars and redesigning methods courses. With few exceptions,
full-time faculty across the unit were engaged with the edTPA implementa-
tion. Content analysis of work group deliverables revealed integration of
five edTPA elements (academic language, lesson segment, videotaping,
commentary, and student assessment) into program curricula. Three of the
seven discipline-based communities of practice addressed all five elements,
three missed one out of five, and one missed two.
In end-of-the-year focus groups, participants were able to identify more
positive outcomes in 2015 (e.g., increased emphasis on reflective practices,
greater integration across courses, enhanced collaboration among pro-
gram faculty) than they had in 2014. Faculty recognized that candidates
were benefiting from the edTPA, as some indicated in these reflections:

• I went back and looked at my summary and I understand what they are
saying and it is helping them think about their teaching. I didn’t see it
as quite the learning tool, but they are seeing it as a learning tool.
• In order to complete the edTPA, students have become more reflec-
tive—it enables them to learn and identify when students are having
trouble and improves differentiation. Students are forced to really
look at the different needs of students in their class.

Faculty also realized that they were improving their teaching skills. They
had learned to be more explicit:

• We start by reading the manual. We refer to it. We break it down into
small sections and refer back to what we're doing in the classroom.
And we analyze the questions like, “So what are they really asking
with this?” We interpret: What does respect and rapport mean?
• We have been back-mapping . . . a lesson segment and analyze the
video . . . and every single piece we are trying to give exposure to ele-
ments prior to the student teaching semester. Even back-mapping
Taskstream is what we are trying to do.

However, negative consequences of the edTPA mandate remained salient
(e.g., expense, stress related to the high-stakes nature of the assessment,
narrowing of the curriculum). Faculty echoed criticisms similar to those
described in other studies (Ledwell & Oyler, 2016; Margolis & Doring, 2013)
regarding the developmental appropriateness of this type of assessment
and the challenges involved in implementation.
Study 3: Faculty’s Levels of Use and Candidate


Performance

At the end of Phase 3, we conducted a final study that focused on four
questions: (a) What Levels of Use (LoU) did faculty report when asked
about course planning and execution in student teaching seminars?; (b)
What type of relationship exists between Levels of Use ratings and can-
didate performance on the edTPA?; (c) Do faculty make the connection
between edTPA performance and their LoU?; and (d) What other expla-
nations do they offer for candidate success or failure? Thirteen seminar
instructors agreed to serve as informants as we sought to understand how
their teaching practices had been impacted by the assessment.
The 13 instructors were experienced teachers and teacher educators
who, with one exception, had taught the seminar before the edTPA had
been added as an exit requirement. All had received formal training on
the edTPA. In semi-structured interviews, seminar instructors were asked to
select two or three of the five elements that they used most as they prepared
for the edTPA. The five elements were: the edTPA handbook, academic
language, the campus edTPA lab, the SOE edTPA web site, and locally de-
veloped online edTPA modules. Without exception, all instructors selected
the handbook as the key resource.
Because of the uniformity of responses, we limited LoU scoring to the
interview segment during which each participant discussed the handbook.
The LoU manual (Hall et al., 2006) includes operational descriptions of
eight different action patterns that an instructor may exhibit as he or she
is exposed to an innovation. In the LoU framework, the first three levels
(i.e., Nonuse, Orientation, and Preparation) depict different ways that
an instructor can demonstrate non-use or first use of the handbook. The
remaining five levels (i.e., Mechanical, Routine, Refinement, Integration,
and Renewal) describe increasing levels of engagement. It was gratifying to
see that only three of the 13 (23%) instructors fell into the lowest engage-
ment category (III Mechanical Use). The modal rating was IVB Refinement.
Table 3.3 illustrates differences between Level III and Level IVB responses.
The edTPA performance data on 89 candidates were shared to examine the
relationship between the clinical faculty’s LoU and candidates’ edTPA perfor-
mance. Instructors were given tables that included scores for each candidate
enrolled in their seminars during 2014–2016. They were asked to respond to
the following three questions: (a) In general, what can you say about the over-
all results?; (b) In general, why do you think that candidates did well (or did
poorly) on the edTPA?; and (c) Looking at one of the candidates on your list,
what could be some of the reasons for this candidate’s performance?
There was a substantial difference in candidate performance between those
assigned to seminars taught by mechanical users and candidates in sections
TABLE 3.3  Two Vignettes: Level III (Mechanical) vs. Level IVB (Refinement)

Level III: Mechanical Level of Use. John had been teaching and supervising
teacher candidates since 2012 at the college. In his initial responses, he was
not clear about his position on the edTPA. John demonstrated very little
knowledge about the edTPA and did not seem to learn more about it as he
took on the responsibilities of supervising candidates. He talked about what
he would do as a routine: "So I would bring the handbook, I would make
sure that they had downloaded it, if they hadn't already, have them download
the handbook. . . . And we would review page by page." When he was asked
to rank order resources such as the online modules and the edTPA Lab, John
was not even aware of the online modules on Blackboard. He admitted that
he had never used the resources, nor did he guide the students to use the
online modules. He did not collaborate with any other faculty in preparing
candidates for edTPA portfolios. He appeared to rely solely on one
instructional approach: "My strategy was to let them work in small groups,
and share what they've been doing with their progress on each task."
Candidates in John's group had a 33% pass rate on the edTPA.

Level IVB: Refinement Level of Use. Lisa had been teaching and supervising
student teachers at the college for four years. She started with negative
feelings about the edTPA and seriously questioned its impact on the
preparation of competent teachers in her discipline. Lisa started with limited
knowledge about the edTPA, but she was motivated to learn about it while
working with candidates. With a focus on improving candidate outcomes,
she spearheaded the implementation of edTPA in the student teaching
seminar in her program. In an effort to transform the program, she
collaborated with her colleagues in integrating key edTPA elements into
other coursework and co-teaching with another instructor: "We elevate each
other to a higher standard. . . . I not only enjoy it, but I think the students
do. And we're constantly modeling co-teaching in the classroom, which is
also fun. And we work off each other . . . We always talk before and
after . . . to say where are we going next." In order to help candidates be
fully prepared for the edTPA, Lisa employed an array of instructional and
management practices and concluded: "I need to be available 24/7." Despite
the fact that she was working with the most challenging candidates in the
discipline, 70% (i.e., 14 out of 20) of her group of candidates managed to
pass the edTPA.

taught by instructors rated at the Routine level or above (see Table 3.4).
High LoU (IVB-Refinement and VI-Renewal) instructors took greater
ownership of candidate performance, had more in-depth knowledge of their
candidates, demonstrated a higher level of reflectivity, and were better able
to see positive features of the edTPA than did the low LoU (III-Mechanical
and IVA-Routine) instructors (see Table 3.5).
We expected that faculty members’ attributions of why a candidate
passed or failed the edTPA would, in part, reflect both their teaching effi-
cacy and level of engagement. Table 3.5 provides the reader with an imple-
mentation map. We determined that not only was it the ability to grasp
the technical requirements of the assessment but also the professional
TABLE 3.4  Faculty’s LoU Ratings and Candidates’ Passing Rate


on edTPA
# of Candidates
Levels of Use # of Faculty Supervised # of Passes edTPA Pass Rate
III Mechanical Use 3 8 3 38%
IVA Routine 2 14 13 93%
IVB Refinement 7 60 51 85%
VI Renewal 1 7 7 100%

TABLE 3.5  Results of Seminar Instructors' Disposition Configuration

                                   High LoU (n = 8)      Low LoU (n = 5)
Components                         Number   Percent      Number   Percent
1. Ownership or Professionalism
   Yes                             6        75%          1        20%
   No                              2        25%          4        80%
2. Knowing the candidates
   Minimum                         0        0%           2        40%
   Somewhat                        2        25%          1        20%
   Very detailed                   6        75%          2        40%
3. Reflective practice
   Yes                             6        75%          2        40%
   No                              2        25%          3        60%
4. Conceptions about edTPA
   Positive                        8        100%         0        0%
   Negative                        0        0%           4        80%
   Unclear                         0        0%           1        20%

Note: High LoU = Instructors at IVB Refinement and VI Renewal; Low LoU = Instructors at III Mechanical Use and IVA Routine.

dispositions of individual instructors that impacted both the LoU and Fidel-
ity of Implementation. High LoU instructors believed that they had both
the ability and responsibility to determine learning outcomes, looked for
ways to become more effective edTPA coaches, and used performance data
to improve their own practice. Mechanical users, in contrast, seemed to be
passive observers of the process, leaving candidates to fend for themselves.

CONCLUSION

The consistent and continued reflection on "How are we doing?" through
the lens of the edTPA has contributed to program improvement and rein-
forced our belief that covert rather than top-down leadership works best if
you want real rather than surface changes in teacher education programs.
We found no magic bullet that made transformation quick and painless. New
York State programs were expected to be up and running with the edTPA
within 18 months. In reality, it took us four years to normalize the edT-
PA within School of Education culture. The CBAM model gave us solace
because it served as a reminder that change is a developmental process
and that development is uneven within and across individuals. The model
helped us think systematically and strategically. First, we addressed concerns
by providing information and technical assistance. Second, we tracked LoU
and identified individuals who became our local experts. Third, by attend-
ing to fidelity of implementation, we developed a nuanced understanding
of the connection between instructor “moves” and candidate outcomes.
However, we still have work to do. There are faculty who remain on
the margins or who just go through the motions (like the three mechani-
cal users in our case studies). Faculty at the level of Mechanical Use took
fewer instructional actions. They were less likely to be engaged in curricular
and programmatic changes, demonstrate ownership of candidates’ perfor-
mance on the edTPA, make efforts to know candidates’ specific difficul-
ties and needs, or adopt reflective practices than instructors at high LoU.
We continue to ask ourselves: What could we do to help Mechanical Users
move beyond an unproductive mindset?
Candidate submission rates are another source of concern. While we have
seen an increase in passing scores on the edTPA within and across programs,
there continues to be a sizable number of candidates who complete edT-
PA requirements during the student teaching seminar but do not submit it
for official scoring. Why? Understanding why candidates fail to submit and,
therefore, do not obtain certification requires that we think about a number
of critical factors: (a) targeted support to complete the tasks independently
(Chung, 2008; Pecheone & Chung, 2006); (b) quality of clinical faculty’s
feedback about the tasks; and (c) opportunities for planning, teaching, and
reflecting on the teaching experience prior to and during student teaching
(Bunch, Aguirre, & Tellez, 2009; Chung, 2008; Liu & Milman, 2013). As unit
leaders, we make the assumption that the quality of the student teaching
experience should be evident in candidates’ professional and pedagogical
knowledge and skills as measured by the edTPA (Chung, 2008; Okhrem-
tchouk, Newell, & Rosa, 2013; Okhremtchouk et al., 2009).
Even after four years, there are many faculty who continue to question the
value of the edTPA. We strongly believe that there is room for edTPA critics
within a SOE; however, teacher preparation is exactly that: preparation. Part
of that preparation in a growing number of states and institutions is helping
candidates get ready for the edTPA. Without asking faculty to teach to the test and
abandon aspects of the curriculum that are central to their identity as teacher
educators, unit leaders must acknowledge the centrality of an exit assessment
that is not only a credentialing measure but also an indicator of program quality.
APPENDIX:  Implementations Bridge Major Activities by Semester
Activities (• = a semester, Fall 12 through Fall 16, in which the activity occurred)
Re-Prep Team • • • • • • •
Faculty Work Groups • • •
School of Education Retreat • • •
Camcorder Bundles for Candidates to Check out • • • • • • •
SOE edTPA Website • • • • • • • •
Study One: Faculty's Concerns and Shift of Focus • •
  Re-Prep Team Visited Student Teaching Seminars • •
  edTPA Pilot •
  Local Scoring • •
  edTPA Computer Lab • • • • • • •
  edTPA Support Days • • • • • •
  Face to Face Workshop on Academic Language • • • • •
  Online Workshop on Academic Language •
Study Two: Focus Group Works Evolve into Communities of Practice •
  Data Literacy Workshops •
  Face to Face Workshop on Rubric 10: Analyzing Teaching Effectiveness • • •
  Online Module on Rubric 10 • •
  Face to Face Workshop on Rubric 13: Student Use of Feedback • •
  Online Module on Rubric 13 • • •
  Locally Relevant Videos • •
  SOE Student Modules on Blackboard • • • • •
  Co-Teaching Model • • • • •
  Reduced Seminar Instructors' Workload • • • • •
Study Three: Faculty's Levels of Use and Candidate Performance •

REFERENCES

Bunch, G. C., Aguirre, J. M., & Tellez, K. (2009). Beyond the scores: Using can-
didate responses on high stakes performance assessment to inform teacher
preparation for English learners. Issues in Teacher Education, 18(1), 103–128.
Chung, R. R. (2008). Beyond assessment: Performance assessments in teacher edu-
cation. Teacher Education Quarterly, 35(1), 7–28.
Fayne, H. R., & Qian, G. (2016). What does it mean to be student centered? An
institutional case study of edTPA implementation. The New Educator, 12(4),
311–331.
Fuller, F. F. (1969). Concerns of teachers: A developmental conceptualization.
American Educational Research Journal, 6(2), 207–226.
Hall, G. E., George, A. A., & Rutherford, W. L. (1979). Measuring stages of concern
about the innovation: A manual for the use of the SoC Questionnaire. Austin, TX:
Research and Development Center for Teacher Education, The University
of Texas.
Hall, G. E., & Hord, S. M. (2011). Implementing change: Patterns, principles, and pot-
holes. Boston, MA: Pearson.
Hall, G. E., & Loucks, S. (1978). Teacher concerns as a basis for facilitating and per-
sonalizing staff development. Teachers College Record, 80(1), 36–53.
Ledwell, K., & Oyler, C. (2016). Unstandardized responses to a “standardized” test:
The edTPA as gatekeeper and curriculum change agent. Journal of Teacher
Education, 67(2), 120–134.
Liu, L. B., & Milman, N. B. (2013). Year one implications of a teacher performance
assessment’s impact on multicultural education across a secondary education
teacher preparation program. Action in Teacher Education, 35(2), 125–142. do
i:10.1080/01626620.2013.775971
Margolis, J., & Doring, A. (2013). National assessments for student teachers: Document-
ing teaching readiness to the tipping point. Action in Teacher Education, 35(4),
272–285.
Mintzberg, H. (1998, November–December). Covert leadership: Notes on managing
professionals. Harvard Business Review, 76(6), 140–147.
Northwest Evaluation Association (n.d.). Formative assessment practices. Retrieved
from https://www.nwea.org/formative-instructional-practice/
Northwest Evaluation Association. (2014, September 5). Professional development:
Keeping learning on track. Retrieved from https://www.nwea.org/professional
-development/keeping-learning-on-track/
Okhremtchouk, I. S., Newell, P. A., & Rosa, R. (2013). Assessing pre-service teachers
prior to certification: Perspectives on the Performance Assessment for Cali-
fornia Teachers (PACT). Education Policy Analysis Archives, 21(56), 1–31.
Okhremtchouk, I. S., Seiki, S., Gilliland, B., Ateh, C., Wallace, M., & Kato, A.
(2009). Voices of pre-service teachers: Perspectives of the Performance As-
sessment for California Teachers (PACT). Issues in Teacher Education, 18(1),
39–62.
Pecheone, R. L., & Chung, R. R. (2006). Evidence in teacher education: The per-
formance assessment for California teachers (PACT). Journal of Teacher Educa-
tion, 57(1), 22–36. doi:10.1177/0022487105284045
Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for ac-
countability. Change: The Magazine of Higher Learning, 39(1), 20–25.
Wenger, E. (2008). Communities of practice: Learning, meaning, and identity. New York,
NY: Cambridge University Press.
Wenger, E. C., McDermott, R., & Snyder, W. C. (2002). Cultivating communities of prac-
tice: A guide to managing knowledge. Boston, MA: Harvard Business School Press.
CHAPTER 4

MANDATES REVISITED
One Coordinator’s Story of Cultivating
Collegiality and Inquiry Through a
Professional Learning Community

Holley M. Roberts
Georgia College & State University

Mandates are commonplace in the field of education. Efforts to continually improve education are essential, yet these reform efforts have also led to a type of fatigue that can be extremely draining. As a former elementary school teacher, I often saw this fatigue in schools and how it propelled some teachers to respond with compliance rather than defiance because they believed their professional expertise was often overlooked or their perspectives were not considered during decision-making processes. Transitioning from a K–12 setting to working in higher education as a teacher educator and leader afforded me the welcome opportunity to collaborate with colleagues and be actively involved in creating solutions and action plans rather than being a passive recipient of directives.
As a faculty member, assessment coordinator, and now department chair,
I have unique vantage points of witnessing the responses of my colleagues as well as my own responses to this same type of reform overload invading
teacher education. Taking the perspective that challenges create opportunities, this chapter shares my experiences and research as an educative Teacher Performance Assessment (edTPA®) coordinator tasked with implementing a top-down mandate: edTPA, a subject-specific, nationally available performance-based assessment used to determine teacher candidates' readiness to teach. This particular mandate has high-stakes repercussions for the teacher candidates with whom we work, and the teacher candidates' results would affect the rankings of our teacher education programs on both the state and national levels. One purpose in sharing these experiences, findings, and implications of the study, which resulted from faculty commitment and my leadership, is the possibility that our processes might serve as a model or guide for others. Our approach offers one way to respond effectively to reform efforts while also encouraging and promoting the intellectual and curious environment inherent to higher education.
This chapter explores how faculty approached the implementation of
edTPA through an inquiry stance rather than forced compliance by using
their knowledge of and experience with professional learning communi-
ties. The faculty showed initiative and dedication in preserving the tenets and core concepts of their teacher education programs while critically and reflectively using the components of edTPA to question current practice and programmatic curriculum, and this work unfolded gradually. This chapter and research study share the progression of the work, which ensued over three years, as faculty members moved (a) from personal, philosophical, and political stances; (b) to becoming hypersensitive and overly concerned about external logistics of the implementation; and then (c) to collectively analyzing and sharing their expertise in areas of focus in edTPA to support teacher candidate learning and development.
This chapter is written from my perspective, the edTPA coordinator and
principal investigator, who provided the leadership in facilitating a profes-
sional learning community (PLC) and the implementation of the mandat-
ed performance assessment into the teacher education programs in our
college of education. I will share how my leadership influenced faculty to
participate in the PLC by highlighting their expertise and providing the
organizational structure to promote inquiry as well as much needed re-
sources to support the implementation. I also share how I realized that the
implementation was worthy of researching to determine if more could be
learned and if this experience could be relevant and informative to others
who were negotiating mandates. Through this process my research ques-
tion became: How can participating in a professional learning community
support the practice of teacher educators in implementing mandates?
The study was a participatory action research study that focused on
solving the problem of implementing the top-down mandate of edTPA successfully. Kemmis and McTaggart (2003) state that "participatory action research frequently emerges in situations where people want to make
changes thoughtfully—that is, after critical reflection” (p. 346). I served as
the researcher and the participant as I collaborated with faculty in address-
ing the mandate while also leading certain aspects of the implementation.
The data for the study were obtained from several sources: (a) edTPA PLC meeting agendas and minutes, (b) various reports to the administration on the progress of the implementation, (c) faculty members' participation in the professional learning community as indicated by attendance records, (d) selected participants' structured interviews following their experiences in the edTPA PLC, (e) a survey of all participants in the edTPA PLC, and (f) the teacher candidates' aggregate performance data on edTPA.

THE MANDATE COMES DOWN—OUR CONTEXT

In 2012, teacher education programs in Georgia were informed of several reform efforts, including that edTPA, a proprietary performance-based assessment, would become consequential for meeting teaching licensure requirements by Fall 2015. While many teacher education programs in the state reacted to the directive very quickly and seized the opportunity of a nationally scored pilot, the administration at my university did not respond. In a positive way, this decision would eventually benefit the progression of the work significantly by lengthening the implementation timeline and offering an opportunity for a more thoughtful and robust review of edTPA prior to involving the teacher candidates.
The teacher education programs discussed in this chapter are situat-
ed within the state’s designated public liberal arts university in which I
work. The program areas include early childhood, middle grades, special
education, and music education at the undergraduate level and second-
ary education, physical education, and middle grades education at the
graduate level, through a Master of Arts in Teaching. On average our pro-
grams enroll 275 initial teacher candidates. Each of the programs consists
of a field-based mentor-led cohort model, a signature of the university’s
teacher preparation program. The model utilizes a faculty mentor who
teaches, supervises field placements, and serves as the advisor for a group
of 20 to 25 teacher candidates throughout their initial teaching program.
Each of the programs within the college requires significant field experience time, with most teacher candidates completing well over 1,000 hours of field work in the two-year programs along with a year-long internship. Our education programs place teacher candidates in approximately
21 different school systems for field experiences and internships. All of the
undergraduate programs are nationally recognized and have a sequence
of courses that build upon one another and have been deliberately and
intentionally planned to support the development of teacher candidates.
Two of these programs have received national rankings because of the
mentor-led field-based cohort model and program outcomes that are a
result of excellent faculty and quality curriculum. The college of educa-
tion is also nationally accredited. The mandate of edTPA and other reform
efforts were initially seen as an intrusion on the quality programming that
was being offered.
The strategy of using a professional learning community was not my
“brainchild” and was not initially considered as a way to implement edTPA
into the initial teacher education programs in our college of education.
Rather, it was an organic process that simultaneously entailed both my leadership in integrating the mandate and the development of the PLC. The progression of the PLC and my growth as a leader of the mandate informed each other, at times happening together and at other times separately.
During the progression of the implementation I realized that this process
was worthy of studying in order to determine if this approach could inform
others who were grappling with mandated reform efforts and contribute to
the field of teacher education. While the faculty and I worked to implement
the edTPA into the initial teaching programs, their needs and ideas were
shaping my role as a leader among them, which influenced the path we cre-
ated to reach our shared goals of the work.

CRITICAL ROLE OF LEADERSHIP

When I accepted the role of edTPA coordinator, I had not previously served
in any specific leadership capacities within the college of education. I had
served as the chair of two search committees, but that was within my own
program and required me to work with colleagues with whom I was very fa-
miliar. Serving as the edTPA coordinator would impact every initial teacher
education program and most of the approximately 35 faculty in the college
of education. At the time the mandate was enacted, I had participated in
a professional learning community for six years, a Critical Friends Group®
(CFG), which provided knowledge and experience of the benefits of a pro-
fessional learning community, but I did not have that in mind as I thought
about my new role. I also did not have a previously implemented mandate or reform effort of such broad reach across programs, with such high stakes attached, from which to draw. My
initial goals as a leader were to eliminate as many obstacles as I could so
that faculty could work closely with candidates in their areas of expertise.
In order to accomplish this, it was important to build an infrastructure for the implementation of edTPA, providing support related to technology, P–12 partners, teacher candidates, and faculty. I also sought
to encourage faculty to view the implementation as an opportunity to en-
gage in continual reflection and development—two key concepts embed-
ded in our practice as teacher educators and lifelong learners.
As I attended more and more professional development opportunities
to learn about edTPA, I connected with edTPA coordinators across the state
as well as staff from the state licensure commission. These connections were
important to me because we were all embarking on the implementation
together and through this, a network of support was created. This network
was critical because the leadership structure in the college of education was
in transition, and I did not have anyone in a leadership role at my institution whom I could consult regarding edTPA. While this time of transition in the
college was very tumultuous, after much reflection I believe the leadership
restructuring in the college played an enormous role in the formation and
the development of the professional learning community, as faculty were
seeking a place to come together and collaborate during a fractured orga-
nizational structure.

Building the Infrastructure With Technology

While I knew I would need to work to build some type of community
based on the initial reactions of faculty and the current college environ-
ment, I also knew I needed to manage the operational aspects of the imple-
mentation. I felt strongly that one of the obstacles of implementation was
the overwhelming size and scope of the edTPA mandate. I felt that in my
full-time position, I should take the responsibility of creating a structure of
support for faculty to address all of the logistical needs associated with the
implementation. Due to the overwhelming confusion and concern related
to the technological implications of the mandate, I recruited an Instruc-
tional Technology intern who helped create a link labeled edTPA on the
current College of Education’s intranet. The purpose of the site was to pro-
vide edTPA resources in a readily accessible place for teacher candidates
and faculty. The site contained specific resources that were selected from a
myriad of resources and supports that were featured at different meetings,
conferences, and development opportunities and endorsed by Stanford
Center for Assessment, Learning, and Equity (SCALE), the proprietor of
edTPA. Combing through the most useful resources and presenting them
in a convenient place would prove helpful in supporting faculty and candi-
dates in providing information and support materials in their work towards
success with edTPA. Based on a teacher-candidate survey that was created
through input from the edTPA PLC, teacher candidates stated that the
resources provided on the college of education's intranet were very helpful
in their completion of edTPA. The survey also demonstrated that the use of
the resources posted on the college’s intranet site had increased each year
since the implementation pilot.
Another initial concern of faculty members was the teacher candidates’
access to technology and their ability to use the technology necessary to
be successful on the very scripted requirements of the video portion of
edTPA. While most of our teacher candidates are consumers of technol-
ogy, they were not knowledgeable about how to create professional video recordings, compress videos, and upload video that met very specific requirements; therefore, I sought to address the issue of technology availability and its successful use by candidates. I approached the dean and asked if
we could purchase 50 iPads, cases, tripods, microphones, and microphone
extensions with lapse funding; thankfully, he agreed and even provided the
funds for 90 iPads that would be available to approximately 150 teacher
candidates. While my knowledge of iPads was only associated with cruising
through my own social media accounts, I enlisted the guidance of our most
senior Instructional Technology faculty member. He willingly collaborated
with me and provided support in recommending what type of equipment
to order and what applications to download to meet the video require-
ments of edTPA. Fortunately, he spent a lot of time thinking through the
most effective way for teacher candidates to use the iPads based on available
video storage and sound quality. The use of the iPad to video classroom in-
struction has since been integrated into the teacher candidates’ technology
classes. Our collaboration has resulted in several presentations at national
conferences as we have shared our experiences of planning, purchasing,
and providing the equipment to streamline and standardize the videoing
process with others who are also addressing the mandate of edTPA.
In order to provide access to the equipment to teacher candidates, I sug-
gested that we renovate a closet for the new iPad storage and then wrote a
request and was granted a graduate assistant who would work twenty hours
each week to assist in managing the circulation of the equipment. I contact-
ed the university library circulation manager to schedule a meeting to help
me understand the circulation processes used at the university and our op-
tions for maintaining the equipment for college of education students' use
only. We came to an agreement on how to make the equipment available
for teacher education students, while assessing fees for damaged, lost, or
late equipment that could be utilized to replace or add to the equipment
we possessed. These processes encouraged further collaboration beyond
the college and included the business office, registrar, and circulation li-
brarian. We believe that by providing access and maintaining the equip-
ment in the college of education and offering support for the use of the
equipment for teacher candidates, the number of condition codes has been
limited. This is also evident in the data, as condition codes have been reduced
each year. During 2016–2017, 158 candidates uploaded edTPA and only
two students received condition codes. During the first consequential year,
2015–2016, 145 candidates uploaded and received five condition codes.
Based on this data and problems negotiated during the upload sessions,
teacher candidates are highly encouraged to utilize our standardized approach to technology use in order to record and upload their videos
for Task 2 of edTPA.
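Expressed as rates rather than raw counts (a quick computation from the figures reported above, since the number of submitters differed across years), the reduction in condition codes holds:

$$\frac{5}{145} \approx 3.4\% \text{ of uploads in 2015–2016}, \qquad \frac{2}{158} \approx 1.3\% \text{ of uploads in 2016–2017}.$$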

Building the Infrastructure to Support P–12 Partners

In the summer of 2014, I was made aware of a state award that was acces-
sible to education preparation programs in Georgia. The intent of the award
was to continue the implementation work for edTPA and to design specific
strategies for how edTPA would be used by the college to examine program
effectiveness as well as how we would work with partners to use edTPA data
to examine candidates’ readiness to teach, including designing induction
support for early career teachers. The award provided $10,000 that was used
to collaborate with local school systems to implement edTPA. The award
team consisted of faculty, superintendents, and other administrators from
school systems in our area and myself. Part of the funds were used to reim-
burse school administrators for travel to the university to meet with faculty
to provide insights into the critical needs of induction teachers and progress
being made in evaluating practicing teachers on the state teacher evalua-
tion system. These discussions contributed to one of the most significant pieces of work that resulted from the award: an evaluation of the field experience for teacher candidates that directly aligned with edTPA, the Interstate Teacher Assessment and Support Consortium standards, and the state teacher evaluation system. In concert with the college's assessment committee, which I also chaired, we recommended the assessment for adoption as an educator preparation program assessment.
The award money was also used to compensate three faculty members
who created a website for partner teachers based on suggestions from the
team. The website provided short video clips and resources to assist partner
teachers in their work with teacher candidates who were completing edTPA.
This website was necessary for partner teachers who could not access the
college’s intranet site and it only contained information pertaining to the
needs of P–12 partners.
The remaining money was utilized to compensate an instructional tech-
nology professor to create an instructional video for teacher candidates
with step-by-step directions on how to film, clip, and compress teaching vid-
eos using the equipment provided by the college of education. This video
link was made accessible to teacher candidates and faculty on the college
of education’s intranet and to P–12 partners through the edTPA website.
Based on the recommendations of the award team I created an infor-
mational presentation for school leaders of the requirements, policies, and
components of edTPA. Later that summer, I traveled to several different
school districts where our teacher candidates engage in field placements
and internship experiences. I spoke at principals’ meetings to make them
aware of the parental permissions that would be requested and the use of
the video cameras within the school context, which was another primary
concern of faculty. I also made them aware of the resources available to as-
sist partner teachers in supporting teacher candidates in their completion
of edTPA.

Building the Infrastructure to Support Teacher Candidates

One of the major concerns of faculty in the initial discussions of edTPA
was the significant costs that would be incurred by teacher candidates as
a result of the edTPA mandate. After our implementation pilot in 2015,
where the state provided a proportionate number of vouchers to pay for
national scoring, I met with the dean to seek guidance on the process of
implementing a special course fee to be added to all internship courses.
The purpose of the special course fee was to include the cost of edTPA in
the teacher candidate’s tuition payment to the university so that teacher
candidates would not have to access a personal credit card at the time of
the edTPA upload session to pay the $300 fee for external scoring by Pear-
son. While adding fees to student courses is a sensitive topic with upper ad-
ministration, the course fee passed through academic affairs with a strong
rationale and the support of the administration in our college. The fee is
collected in a special course fee account within the college of education
and then the funds are used to purchase vouchers from Pearson. At the
time of the upload sessions, I provide the teacher candidates with a voucher
number to be entered in their edTPA.com account upon registration. The
teacher candidates regularly comment that the voucher is much appreci-
ated because they are able to use financial aid and scholarships to pay for
the course fee and they do not have to pay the $300 fee out of pocket.
One of the most important structures of support that I provided for
teacher candidates has been the group upload sessions. The upload ses-
sions were planned based on recommendations from members of the edTPA PLC. I facilitate group upload sessions for teacher candidates within
each initial teacher education program. The upload times are determined
after considering the submission and reporting dates published by Pearson.
The members of the edTPA PLC recommended that each teacher candidate upload the edTPA early enough for scores to be returned in time to complete a retake prior to the end of the internship. During the upload sessions, I lead the students through the process step by step in order to lessen the chance of errors in the submission
process. Teacher candidates are instructed to come to the upload session
well prepared with their edTPA content assessment checklist (created from
the evidence chart of each edTPA content area handbook), a jump drive
with all necessary files labeled correctly and in PDF format, and a positive
attitude. Prior to the upload sessions, and if an invitation is offered, I visit
cohorts and share a short presentation on how to be best prepared for
the upload session and present lessons learned throughout the process of
implementing edTPA. These informational presentations have helped the
upload sessions run more smoothly each year.

Building the Infrastructure to Support Faculty

Ultimately, it was the teacher education faculty who needed a structure
for support to lead the implementation of edTPA with the teacher candi-
dates and a place to explore and consider the educative opportunities of
edTPA. Rather than mandating the method of implementing edTPA for
all programs in the College of Education, it was decided early on that pro-
grams should have the structures of support they needed to be successful
while maintaining the autonomy to address the mandate in the context of
the programs. Therefore, each month I would schedule meetings, set the
agenda based on needs and areas of inquiry, seek faculty who were becom-
ing experts in these areas and ask them to lead, co-plan the facilitation
and/or protocols that would be used, take minutes in each meeting, and
post the minutes and outcomes of the meeting on the College of Educa-
tion’s intranet site. In doing this, I provided internal leadership that is re-
quired for a community to maintain itself (Wenger, 1998) while fostering
the concept of collaboration in the implementation. A typical agenda of
the monthly meetings can be found in Table 4.1.
In order to continually evaluate and improve our work of preparing
teacher candidates, I regularly provide the edTPA data of each teacher can-
didate and the aggregate data of the cohorts of teacher candidates to the
appropriate mentor leader and program coordinator for review and analy-
sis. Additionally, each year before the beginning of fall semester I provide
an analysis of the unit data highlighting trends within the edTPA data based
on areas of strengths and areas in need of improvement. This information
is utilized to propose the work for the edTPA PLC for the upcoming year.

TABLE 4.1  Sample PLC Meeting Agenda


edTPA PLC Meeting Agenda
11:00 a.m.–12:00 p.m.
1. Welcome and Sign In
2. Task 2 Learning Activity (led by Secondary Faculty)
a. Group Assignments
b. Characteristics and Differences
c. How can we support/scaffold teacher candidates based on what we have learned?
3. Open Discussion
4. Suggestions for future meetings

Upcoming PSC events regarding edTPA: (professional development events would be listed
here)

As the coordinator, I saw it as my responsibility to support faculty engagement in the implementation of edTPA through cultivating an inquiry
stance with faculty in learning about and implementing edTPA. Through
this lens, I along with colleagues developed an edTPA PLC through which
faculty were able to maintain their program’s uniqueness while refining
their work with teacher candidates. As evidenced by the data, the real credit
of the successful implementation of the edTPA lies in work of the PLC.

THE LIFE AND TIMES OF THE PROFESSIONAL LEARNING COMMUNITY

I knew our faculty were not the teach to the test type, which is a popular
method to address such high stakes mandates in the K–12 setting. I also
knew that I should not accept the role of preparing teacher candidates for
edTPA on my own by hosting boot camps for teacher candidates and po-
tentially excluding the faculty from taking ownership of edTPA. Through
my interactions and conversations with faculty, as we attended those first
professional development opportunities together, I learned that many of
them did not value the idea of teaching to the test and were very adamant
that edTPA not overtake the curriculum of the initial teacher education
programs. I also knew from my experience in CFG and through the work of
DuFour (2011), that “[t]here is abundant research linking higher levels of
student achievement to educators who work in the collaborative culture of
a professional learning community” (p. 60).
In November of 2012, when I received an email from the associate dean
that originated from the state’s university system office inviting educator
preparation programs to join eight other state institutions to begin work
towards piloting edTPA, I was intrigued by the opportunity to have an
externally scored assessment as one measure of data to demonstrate the
preparation of the teacher candidates in the college of education. At the
time, I was an assistant professor in the early childhood education program
and, along with six other faculty members, I responded to the opportunity.
At the request of the dean, I agreed to serve as the edTPA coordinator be-
ginning Spring 2013 and she offered one course reassignment to compen-
sate me for the time. My contact information was provided to the university
system office and the state certification agency and I began to receive in-
formation and updates regarding the statewide implementation of edTPA.
In February 2013, I e-mailed an open invitation to all of the faculty in the
college to attend an edTPA professional development offered at another
university in the state that was focused on local scoring. Fortunately, faculty
from each initial teaching program responded to the invitation and par-
ticipated. Shortly after the local scoring professional development, a small
group of faculty, consisting of those who responded to the initial e-mail
and those who attended the professional development, including me, be-
gan meeting regularly and attended various webinars and trainings hosted
throughout the state. At our meetings, we began to process what we were
learning and to think about the implications on the teacher candidates and
individual programs within the college of education.
In the summer of 2013, our dean signed the Institution of Higher Edu-
cation agreement for the university to begin the exploratory membership
with SCALE. This designation catapulted us into the official world of edTPA
and allowed the college to access necessary SCALE handbooks, templates,
and rubrics through our assessment management system. In the fall, at the annual college of education faculty retreat, the small group of faculty who
had been exploring edTPA were asked by the dean to present general in-
formation that we had learned and the state’s timeline for implementation
of edTPA. The presentation was highly controversial and took a completely
different direction than what we had planned. As a matter of fact, we did
not get past the first slide of the presentation. Many faculty members were
visibly angry with the mandate and because of these highly charged emo-
tions, the interactive activities we planned to “get to know edTPA” did not
happen.
While leading the faculty discussion at our retreat, I realized very quickly
that we would need a different approach. So after listening to several fac-
ulty express their discontent with the assessment and the "hijacking" of
teacher education by Pearson, we closed the session and moved on to other
business of the college. While the reactions of faculty derailed the presen-
tation that day, I was not surprised. I had read an article by Hayes and Sokolower (2013) in Rethinking Schools about how Barbara Madeloni, a faculty member at the University of Massachusetts, and her students resisted
the implementation of the edTPA. In the article, Madeloni is quoted as
saying:

This is eerily similar to what is happening in K–12 education, where teachers'
voices are silenced, and teaching is subject to technocratic high surveillance
accountability measures that destroy the potential of the classroom to be a
place of inquiry, creativity, and liberation. (p. 8)

Due to the similarity of the faculty's initial reactions, I suggested that we read and discuss the article at our next meeting. Additionally, the article
was posted to the college’s intranet for faculty to access for further consid-
eration and meaningful dialogue.
Upon the commencement of that fall semester, several faculty members
from the original group who had responded to the email left the university
and other faculty had become interested in learning more about edTPA
and its implications on the profession. I, too, had changed positions from
early childhood faculty to the director of assessment and accreditation.
This small group of faculty members continued meeting and addressing
specific issues related to understanding the terminology and nuances of
edTPA, how the architecture of edTPA compared to the programs' structure
and curriculum, and how critical reflection was integrated into edTPA in
the form of commentaries. Through meeting and addressing these initial
ideas, the group meetings evolved into what we now consider the edTPA
PLC. The group’s goal most closely aligned to Cox’s (2004) definition of
professional learning communities as, “cross disciplinary faculty and staff
group of six to fifteen . . . who engage in an active, collaborative, yearlong
program with a curriculum about enhancing teaching and learning” (p. 8).
Because it was organic and developed over time, we did not identify ourselves as a PLC; rather, we gathered together to work on a problem we were
experiencing through a mandate being enacted. The group met monthly
to address the challenges the mandate of edTPA presented, but ultimately,
the group wanted to discover how edTPA could support teacher candidates
in becoming education professionals who are highly reflective, who pro-
mote and expect high levels of achievement in all students, and who are
advocates for students as individuals beyond just receiving a passing score.
It was clear that “[t]he participants appreciated the opportunity to share
ideas and discuss problems in a safe environment” (Engin & Atkinson,
2015, p. 171). Furthermore, edTPA provided the curriculum in which to
analyze and compare our current practices within the teacher education
programs.
The edTPA PLC participants included representatives from almost all of
the initial certification programs in the college of education. Each month I
would schedule the edTPA PLC to meet and prepare the agenda with items
that were related to the infrastructure and inquiries related to the edTPA’s
architecture or addressing edTPA with teacher candidates. Participants in
the edTPA PLC would sign in to every meeting. While I facilitated the meet-
ings, faculty began to share with one another what they were learning and
had implemented with their students. Members of the learning community
initially decided to begin researching the overall purpose and function of
edTPA and concentrated on moving the dialogue of faculty from resistance
to embracing the value of a valid and reliable measure of the preparedness
of candidates for their first years of teaching, even though they themselves
were grappling with the assessment. The group also worked to increase
their understanding of edTPA language demands and to determine where
edTPA “fit” into the current teacher work sample and each program’s
course work, without losing the uniqueness of programs and their align-
ment with the conceptual framework and professional standards.
Since academic language is a foundation of edTPA and seemed most
inconsistent with the current assignments, the group decided to spend time
learning about academic language through examining SCALE resources
and attending professional development opportunities offered in the state.
In addition, group members analyzed the appropriate handbook for their
content area and compiled a list of reflective prompts to aid candidates in
understanding the language of edTPA prior to the implementation pilot.
In this process, the group realized that in each program the term reflection
was important to maintain but that we also needed to focus more on how
and why we asked teacher candidates to reflect beyond what one might
change about the lesson if he or she were to teach it again.
Finally, the group began to develop the procedures and processes for
a spring implementation pilot of edTPA through editing the current les-
son plan format to include definitions of the components of edTPA most
closely related to the glossary and terminology used by SCALE. Addition-
ally, faculty began using the assessment terminology used in edTPA inter-
changeably to support teacher candidates in aligning terms they were learn-
ing in class. We also focused lessons on academic language and discourse
and video analysis of teaching, based on information provided by faculty
experts within our college. Faculty also provided support to each other by
sharing their understanding of implementing critical structures of support
for students, innovative ways to storyboard their lessons, and effective ways
to provide feedback to students while focusing on the evaluation criteria
of an assessment. In addition, we began to seek ways to explore inter-rater
reliability with our expectations of performance on edTPA. We did this by
studying teacher candidates’ submissions and the rubric scores received
through national scoring.
Through this work, the edTPA PLC transitioned from a group of people
who had come together in a shared interest to learn more about edTPA and
its impact on our teacher candidates to a fully functioning PLC. We sought
to create and offer the infrastructure to support faculty as they grappled
with the high-stakes consequences while attempting to support the teacher
education faculty’s desire for inquiry, collegiality, and program improve-
ment. In doing this, the edTPA PLC developed what DuFour (2004) identi-
fied as six characteristics of professional learning communities:

1. shared mission (purpose), vision (clear directions), values (collective commitments) and goals (indicators, timelines, targets)—all
focused on student learning;
2. collaborative culture with a focus on learning;
3. collective inquiry into best practice and current reality;
4. action orientation—learning by doing;
5. commitment to continuous improvement; and
6. results orientation (pp. 525–526).

This model promoted faculty collegiality, continuous improvement at
programmatic and unit levels, and encouraged critical reflection of our
work with teacher candidates. The PLC also formed the foundation for the
implementation through an inquiry stance rather than forced compliance
through open invitations and professional development opportunities,
open dialogue and choice, collaboration and support across programs, and
ultimately through shared decision-making regarding how we would ad-
dress this mandate.

PLC AND edTPA COORDINATOR: ACCOMPLISHMENT OF SHARED GOALS

The work of the PLC provided faculty members with leadership opportunities
and served as a space for professional development and problem solving.
This democratic approach to the edTPA mandate served as the cornerstone
for increasing communication and collaboration across programs and fa-
cilitating faculty buy-in that was not forced but gradual as we all came to
terms with edTPA. In addition to the collaborative leadership I provided,
the edTPA PLC had a positive impact on the teacher education programs
and the edTPA implementation (a) through creating access and education
related to technology needed by teacher candidates, (b) by providing fac-
ulty resources and student supports to scaffold the successful implemen-
tation, and (c) by creating a structure for edTPA implementation across
the college of education. All of these structures have been critical for the
sustainability and continuation of edTPA for the future.
At the end of both the 2015–2016 and 2016–2017 academic years,
the edTPA PLC held a culminating meeting with the intent of celebrat-
ing and reflecting upon all of the accomplishments and lessons learned
based on the two years leading up to the implementation. At the meeting,
I asked that we share insights we gained by categorizing things we appreci-
ated, things we realized, things we would do better next time, and things
we did well.
As a summary of the comments from the two years, the following points of reflection were identified:

• The edTPA is always a work in progress due to the changes in the assessment and resources as well as the needs of the teacher candidates who are taking the assessment.
• The approach we used allowed us to dig deeper and think more about teacher preparation and the needs of the teacher candidates beyond jumping through hoops, and through this we have gained a deep understanding of edTPA in each of the content areas.
• The edTPA results reaffirmed the quality and strengths of the pro-
grams, curriculum, and learning experiences and that the programs
already supported components of edTPA.
• The mandate of edTPA provided criteria by which to compare program curriculum and expectations within the college of education.
• The edTPA is a valid and reliable assessment that allows us to seek
comparisons to other universities in the state and around the nation.
• Supporting teacher candidates' emotional well-being as they complete the demands of their internship, coursework, and edTPA is critical for candidate success.

Faculty commented that it is important to provide positive language about the assessment and its educative benefits while not overemphasizing the assessment. Faculty consistently reported that we collaborated well
to support teacher candidates by coming to the table to participate in the process; we provided important supports for the faculty and teacher candidates; we have utilized this work to support scholarship and have presented our work at national, international, and regional conferences; and we have improved many of our signature assessments.
Ultimately, the teacher candidates' success has been the best indicator of our work. In the first exploratory pilot, participants were either randomly selected or chosen by mentor leaders to represent a range of teacher candidates based on current academic standing in the cohort. The categories were loosely defined: candidates we thought would do well, candidates we thought would do okay, and candidates we thought would struggle based on program performance. The first-time pass rate for the non-consequential pilot was 65.31%; of 49 submissions, 17 did not meet the standard score of 35. In the following year, 2015–2016, when the assessment was consequential and required for licensure, the first-time pass rate was 96.60%, with 147 teacher candidates uploading and five not meeting the standard score. In 2016–2017, the first-time pass rate was 98.63%, with 148 teacher candidates submitting the edTPA and two not meeting the standard.
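For readers tracing the percentages, the first-time pass rates follow directly from the counts reported above; as a minimal arithmetic check,

$$\text{first-time pass rate} = \frac{\text{submissions} - \text{submissions below standard}}{\text{submissions}}, \qquad \frac{49 - 17}{49} \approx 65.31\%, \qquad \frac{147 - 5}{147} \approx 96.60\%.$$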
Over the last three years, the edTPA PLC has evolved and served its pur-
pose as the catalyst for inquiry and a place of collaboration. What began
as a small group of faculty seeking to learn more about this assessment
evolved into a thriving PLC. While the initial mandate has been addressed,
the work of the edTPA PLC continues in different ways. This is the result
of the strong relationships and expertise developed over the years of the
implementation. Monthly meetings of the PLC are no longer necessary due
to the sustainable processes and procedures that were formed as a result of
the work of the edTPA PLC. The faculty have chosen to address the imple-
mentation of edTPA in monthly departmental meetings in the Department
of Teacher Education during the 2017–2018 academic year. This is made
possible in large part because of my dual roles as interim department chair
and edTPA coordinator, both of which provide the opportunity for me to
continue to lead and support the initiative.

IMPLICATIONS AND CHALLENGES FOR TEACHER EDUCATORS

In the age of accountability, teacher education reform, and the continuous focus on applying value-added measures to teacher education programs, mandates
have become and will continue to be a part of teacher education. There-
fore, it is imperative for educational leaders and teacher education faculty
to seek creative and collaborative ways to address reform efforts that sup-
port faculty inquiry and collegiality and ultimately foster teacher candidate
learning.
In the participatory action research project, the research question I ex-
plored was focused on how participating in a PLC could support teacher
educators in implementing mandates. In reviewing and analyzing the qualitative data, the following findings emerged:

• Professional Learning Communities evolve and provide a catalyst for collegiality to address curriculum/programmatic changes and reform.
• Faculty who were initially involved found edTPA to be a venue for
dialogue to move from frustration to productivity.
• Participation and collaboration fostered ownership of the mandate and utilized faculty expertise.
• The experience provided reaffirmation of program quality.
• The mandate of edTPA provided criteria by which to compare program curriculum and expectations.
• Leadership is critical and having a point person to support the logis-
tics of the mandate is imperative.
• The mentor-led field-based model in our college of education sup-
ported the work of the edTPA PLC.
• Having knowledge of and experience with professional learning
communities supported the work of this particular edTPA PLC.

Through reflecting on the process of implementing edTPA in our college of education and based on data that emerged during our work, one
participant responded that she believed that professional learning commu-
nities could be a practical way to “soften” mandates in teacher education.
While the PLC was not an initial plan and strategy for implementing edTPA
successfully, it provided the space and structure to address the mandate by
encouraging dialogue and critical reflection by the faculty. The edTPA PLC
evolved into a community of learners as we were all attempting to negotiate
and learn more about edTPA and how it would impact the work we were
already doing. The PLC provided a catalyst for collegiality to address cur-
riculum and programmatic changes and reform across the college while
building relationships across programmatic silos. It also served as a venue
for faculty to move beyond frustration to productivity by encouraging facul-
ty to seek their own solutions to the mandate. Based on the data, I learned that mandates can promote inquiry as a positive consequence if participants are willing to view them as an opportunity. As a result of the work, it
has become obvious to me that professional learning communities should
be encouraged to evolve in their scope and actions to meet the needs of the
professionals who are engaged in the PLC. Furthermore, it is critical that
the leader be someone who is also invested in the process and can lead oth-
ers by encouraging collaboration and through highlighting the expertise in
the group by utilizing the tenets of distributive leadership. Kennedy, Deuel,
Nelson, and Slavit (2011) provide three important attributes of distributive
leadership within the context of a PLC: “(1) a leader’s recognition and
use of internal intellectual and experiential resources, (2) differentiated
top-down and lateral decision making processes, and (3) culture building
through dialogue and collaborative inquiry” (p. 21). It is also important to
understand that the work of the group changes; therefore, the PLC should
also change or it will become less authentic and seem contrived, as many
PLCs have become.
Additionally, leadership is important and necessary for the sustainability and success of the PLC. Faculty participants reported that it was critical
to have leadership that responded to the immediate needs and problems that
arose throughout the exploration and implementation of edTPA. By having
a leader who facilitated the implementation and worked to create structures
of support and to oversee the many logistical needs brought about by the
implementation, the members of the PLC were able to provide guidance into
the processes while focusing their energy and expertise on learning more
about edTPA and ways they could best support the teacher candidates. Many
faculty participants reported that having a point person to seek out when
questions arose was critical to the success of the entire process.
While the PLC we used was effective, we recognize context is impor-
tant and others may encounter different challenges as they address similar
mandates using the structure shared within this chapter. The mentor-led
field-based cohort model used in our college of education may have been
particularly supportive of the work of the edTPA PLC because of the unique
role of faculty who are heavily involved in the success of their cohorts. This
level of investment certainly had an impact on participation in the PLC and
the high level of involvement in the entire process.
Another surprising advantage that emerged at our institution was the
transition in college administration during the time of the implementation.
Many faculty were seeking an opportunity to collaborate and join together as changes in leadership and organizational structures left them feeling disconnected from a department or the college. I also found
that the commitment of the faculty to the success of the teacher candi-
dates was most important in establishing the environment for addressing
a mandate in such a positive way. While some faculty struggled with the
top-down enforcement of the mandate, the components of edTPA, and the
additional costs to teacher candidates, they chose to focus on the teacher
candidates and their success and prioritize that above their negative reac-
tions to the mandate.
My experience as a participant and leader in the implementation of the
top-down mandate of edTPA has restored my sense of the possibilities of
inquiry and collegiality in higher education. Unlike our P–12 counterparts, we were not told how to implement the mandate; therefore, we could ap-
proach the mandate with an inquiry stance utilizing an authentic PLC. This
flexibility allowed for faculty growth, program improvement, and teacher
candidate success. Going forward, I believe that there are opportunities for
PLCs to provide a place for faculty to enact a problem-based approach to
address mandates or reform efforts in a way that preserves the unique at-
tributes of higher education while adhering to the regulatory oversight and
accountability associated with our profession. Our approach demonstrated
our response to the top-down mandate and my hope is that others will seek
creative ways to respond to reform efforts in ways that value our intellect,
our experiences, the teacher candidates, and the work we do as teacher
educators.

REFERENCES

Cox, M. D. (2004). Introduction to faculty learning communities. New Directions for
Teaching and Learning, 97, 5–23.
Critical Friends Group®. National School Reform Faculty, Harmony Education
Center. https://www.nsrfharmony.org/
DuFour, R. (2004). Schools as learning communities. Educational Leadership, 61(8),
6–11.
DuFour, R. (2011). Work together: But only if you want to. Kappan, 92(5), 57–61.
Engin, M., & Atkinson, F. (2015). Faculty learning communities: A model for sup-
porting curriculum changes in higher education. International Journal of
Teaching and Learning in Higher Education, 27(2), 164–174.
Hayes, N., & Sokolower, J. (2013). Stanford/Pearson test for new teachers draws
fire. Rethinking Schools, 27(2), 7–8.
Kemmis, S., & McTaggart, R. (2003). Participatory action research. In N. Denzin
& Y. Lincoln (Eds.), Strategies of qualitative inquiry (pp. 336–396). Thousand
Oaks, CA: SAGE.
Kennedy, A., Deuel, A., Nelson, T., & Slavit, D. (2011). Requiring collaboration or
distributing leadership? Kappan, 92(8), 20–24.
Wenger, E. (1998). Communities of practice: Learning as a social system. Systems
Thinker, 9(5), 2–3.
CHAPTER 5

THE POWER OF SUPPORTS TO IMPROVE edTPA OUTCOMES
Kathleen Fabrikant
Armstrong State University

Cynthia Bolton
Armstrong State University

Cindy S. York
Northern Illinois University

Angie Hodge
Northern Arizona University

In the Beginning . . .

In 2012, the state Professional Standards Commission (PSC) announced its
decision to incorporate a high-stakes teacher performance assessment for
initial teacher certification candidates. Candidates would have to pass this
assessment in order to become certified in the state. At the time, there was
institutional pushback based on the fact that there was little to no reliability
data regarding the scoring of the assessment, and the validity data were based
on content that many agreed represented important indicators of effective
teaching, although not the only indicators. Once we were told that this was a
"done deal," our strategy switched to "if you can't beat 'em, join 'em," since
we had no choice in the matter. The focus became how to build structures that
supported faculty and candidates in understanding the assessment tasks and
rubrics in order to ensure that candidates were prepared to be successful.

Educator preparation programs (EPPs) have been a target for change
at the national, state, and local levels. This changing landscape has included
all areas of education preparation from candidate selection, pre-service cur-
ricula, and comprehensive field experiences to data collection, candidate/
completer tracking, and performance assessment. The ultimate objective for
many of these changes is that beginning teachers have the pedagogical skills
and expertise in planning, management, and assessment on the first day of
school. One initiative to meet this objective is a high-stakes work sample port-
folio assessment, edTPA®.
The edTPA portfolio is a national performance-based assessment that
focuses on content pedagogy and student learning in a classroom (Wei
& Pecheone, 2010) and was adopted by the Professional Standards Com-
mission (PSC) as a high-stakes assessment for all pre-service candidates eli-
gible for an initial teaching certification. Both PSC and the Council for the
Accreditation of Educator Preparation (CAEP) endorse edTPA as a tool for
program accreditation.

In an era in which teacher education has been challenged to demonstrate
its effectiveness, performance assessments have emerged not only as useful
measures of teacher performance but also as a way to evaluate the quality of
credential programs for state accountability systems and program accredita-
tion. (Pecheone & Chung, 2006, p. 23)

Research pointed us to the fact that supports are necessary for successful
completion of the edTPA portfolio and that the types and frequency of those
supports can be generalized to some degree but should also be designed
around the unique needs of teacher candidates at specific universities
(Barron, 2015). Our work is an attempt to determine how to best support teach-
er candidates’ edTPA completion and success. In that effort, we began to
systematically explore data related to the following questions:

• What supports did the edTPA Team build to promote edTPA success?
• How did cooperating teachers view their role in supporting candidates'
development of their edTPA portfolios?
• How did supervising faculty view their role in supporting teacher
candidates as the candidates completed their edTPA portfolios?
• Which supports did teacher candidates find most useful for edTPA
success?

This ongoing work illustrates how a collaborative team of dedicated
faculty, staff, and cooperating teachers listened to candidate and faculty
voices and engaged stakeholders as they constructed specific supports in
response to the data gathered.

First Things First

By the spring of 2014, we came to terms with the reality that edTPA was
here to stay. The state had identified the academic year of 2015–2016 as the
first consequential year for implementation of this requirement for all ini-
tial certification candidates. This gave us a very short time to prepare. The
first order of business was to establish leadership that would collaborate
on primary initiatives. An edTPA team, established during Fall 2013 as a
subgroup of the college assessment committee, was composed of the associate
dean, the director of field experiences, the LiveText administrator, two
faculty representatives, and two department heads. One of the first initiatives
discussed was the professional development of the faculty who prepared
teacher candidates, since the motivation and pedagogical content knowl-
edge of the faculty are the foundations of student outcomes. Schieb and
Karabenick (2011) highlighted several themes leading to successful moti-
vation through professional development. From systemic reforms such as
curriculum alignment to the mentoring of candidates through the edTPA
process, the understanding and engagement of the faculty are necessary.
In the summer of 2014, all faculty and clinical supervisors were encouraged
through a paid stipend to attend local evaluation training presented by a
national expert on edTPA. All but two faculty members attended. Feedback
was mixed, ranging from "What's new? This is what we do" to "This is just
another new thing the state is mandating and will be gone in a couple of
years, replaced by something new." Other reactions took the form of
grumbling that this assessment was overly complicated, heavy-handed,
and invasive. Faculty edTPA buy-in was going to be a bit tougher than we
had expected.
Meanwhile, guided by research and lessons learned from other adopters,
many different ideas and initiatives came out of the edTPA team’s work. An
analysis of data gathered led to supports tailored for the university's candi-
dates in an effort to ensure successful completion of the edTPA portfolio
requirement (Petty, Heafner, Lachance, & Polly, 2016; Suleiman & Byrd,
2016). In addition, once faculty were familiar with the edTPA constructs,
they instituted curricular revisions and updates that helped candidates bet-
ter understand the process and requirements for the edTPA portfolio ear-
lier in the program of study (Hobbs, 2015). The use of exemplars (Burns,
Henry, & Lindauer, 2015) by candidates proved to be a useful support as
evidenced by program completer surveys at the end of each semester. An-
other very important support, suggested by the work of Burns et al. (2015)
and Greenblatt (2016), was ensuring that cooperating teachers were better
informed about the edTPA process and how its requirements impacted their
student teachers.

Our Pilot Study of edTPA Implementation in Our Programs

We began edTPA implementation in spring 2014 by randomly identifying
a candidate from each program. Participants were encouraged to vol-
unteer to submit an edTPA portfolio for national scoring at no financial
cost to them. We made the case that candidates would be assisting the col-
lege and their peers in gearing up for this assessment, as well as furthering
their own understanding of pedagogy; in other words, an educative expe-
rience. Participants included volunteers from 4 out of 9 programs (early
childhood education [BSED & MAT], special education, mathematics, and
visual arts). After submission, candidates were interviewed regarding their
individual experiences and completed a short open-response survey.
During the academic year 2014–2015, every teacher candidate, both
graduate and undergraduate, was required to submit a portfolio for local
scoring, though only randomly selected candidates representing each of 9
programs submitted portfolios for official scoring by Pearson (Fall 2014,
n = 26; Spring 2015, n = 22). Programs included elementary education, spe-
cial education, health and physical education, secondary science, second-
ary history, secondary English language arts, secondary mathematics, visual
(art) and performing arts (music). Elementary, special education, and sec-
ondary education are offered at both the Bachelor of Science and Master
of Arts levels at the university. Costs of national submissions were paid for
by the college and through vouchers provided by the state. Feedback was
obtained through a survey based on the previous pilot results. That feedback
was used to further develop candidate supports for edTPA as well as plan
professional development for supervising faculty.
Plans for assessing the impact and effectiveness of the supports resulted
in further development of teacher candidate surveys, supervising faculty
surveys, cooperating teacher surveys, and structured interview protocols.
Teacher candidates were surveyed at the conclusion of the official LiveText
submission seminars for academic years 2015–2016 and 2016–2017. The
survey questions were developed from a document analysis of the Stanford
Center for Assessment, Learning, and Equity (SCALE, 2015b) materials
available in the American Association of Colleges for Teacher Education
(AACTE, 2016) resource library, as well as documents and PowerPoint
presentations from EPP-sponsored seminars and informational sessions on
edTPA (SCALE, 2015a).
Cooperating teachers were interviewed using a structured protocol at
the conclusion of the Fall 2016 semester. Supervising faculty participants,
consisting of 15 university academic and clinical faculty, were also
interviewed.

Building Supports

The edTPA team identified the need to build initial supports for candi-
dates in the pilot semesters. These supports included video support semi-
nars for Task 2, LiveText and Pearson submission support, and an edTPA
resource library. All resources were available in LiveText and included
seminar PowerPoint presentations and resources for supervising faculty, con-
tent mentors (faculty experts in content areas), and cooperating teachers.
Although there were many supports developed along our journey to-
ward edTPA success, they can be discussed in three categories: human re-
sources from faculty and staff; document resources such as handbooks, other
documents, and websites; and support seminars held throughout each
semester.

Human Resources: Faculty and Staff


The human resources were perhaps the most appreciated and needed.
The edTPA team worked collaboratively to identify key faculty roles and
define responsibilities and to develop the resources, both textual and in
seminar form, to support our candidates. Additionally, in Spring 2016, the
position of edTPA coordinator and a permanent edTPA committee were
approved. The necessity for a central point of contact
for candidates, faculty, and cooperating teachers had become evident. The
edTPA coordinator had six responsibilities:

• to establish expert knowledge of all edTPA content areas used by
university candidates;
• to deliver professional development to faculty undertaking the men-
toring of candidates through the edTPA process;
• to serve as a resource to faculty, candidates, and schools;
• to attend and present at edTPA conferences both regional and national;
• to maintain a candidate database; and
• to chair the edTPA committee and a retake committee for candidates
unsuccessful in meeting cut scores.

The edTPA committee had the responsibility of determining candidate
supports, the structure of the submission seminar, the EPP's retake policy,
and a continual restructuring of how the university could better support
teacher candidates. The committee included members from each teacher
education program at the university who could better define the role of fac-
ulty support for teacher candidates. With this new administrative structure,
the EPP had a single person to contact for any and all edTPA matters. This
has resulted in many fewer miscommunications and a more consistent mes-
sage to our candidates.
Initially, faculty and university supervisors acted as content mentors with
the faculty chosen on the basis of their expertise in a specific content area.
Content mentors attended each edTPA seminar alongside their teacher
candidates. They were given time to work with their can-
didates during each seminar and could also meet with them at other con-
venient times. That format was based on feedback from the 2014 seminar,
when content mentors were not fully aware of the edTPA guidelines that
had been shared with the candidates. In the second year (2015), both candi-
dates and mentors heard the same information from the same resources.
Previously, clinical faculty served only as process mentors, which translated
to supporting the deadlines and processes of edTPA, but with no knowl-
edge of the content required in the edTPA portfolios. However, clinical fac-
ulty expressed a need to learn more about constructs and tasks of edTPA.
Based on that feedback, the edTPA Committee developed a series of four
seminars for the supervising faculty to be trained in local scoring and ap-
propriate feedback for the teacher candidates.

Documents and Text Supports


Document and text supports were uploaded into each candidate's LiveText
edTPA assignment. Because the volume of information became
overwhelming, the supervising faculty were also given summaries of the
documents available so they could appropriately help their candidates as
needed. These were indexed within LiveText so candidates could easily
find what they needed. For example, the academic language handout from
SCALE was easily found under the title of “academic language.” Checklists
developed for the video seminars and the submission seminars were pro-
vided during the seminar before they were needed and explained in depth.
In addition to presenting the task-specific information at each seminar, the
seminars were video recorded and technology-based presentations were up-
loaded into LiveText so students could go back and check any information
as needed. Technical screenshots of how to organize documents and submit
assignments into LiveText were included.
One very popular set of documents proved to be the edTPA content-
specific exemplars, which were also housed in LiveText. We asked students
each semester what would help them the most and they consistently asked
for exemplars in their content areas. The edTPA Coordinator created a
set of exemplars covering ten content areas, using EPP candidates’ port-
folios as models. Candidates signed a waiver permitting the use of their
work as resources for training and development. The edTPA coordinator
developed the exemplars using a system of highlighting to indicate what
a scorer likely would tag as evidence for meeting rubric expectations. She
also highlighted what would need to be done to achieve a higher level score
with an explanation of why it was required. These have proved very useful
to our candidates, but we recognize that as edTPA progresses, they will
become outdated.

Seminars
Seminars for candidates were held on campus four times during a se-
mester and were planned and facilitated by the edTPA coordinator, the
field placement office director, and the LiveText coordinator. The first
seminar provided an overview of edTPA with an in-depth focus on the
context for learning (and, for elementary education, the Task 4 context of
learning, facilitated by a faculty member who was an expert on Task 4). The second
seminar focused almost exclusively on Task 1 and planning documents: les-
son plans, instructional materials, planned assessments, and the planning
commentary. Each of the first five rubrics was discussed in depth. The third
seminar focused on Task 2. Interns were given a three-part “instructional as-
signment” to complete (a) during the upcoming Seminar 3, (b) during the
video support seminar, and (c) for the final submission seminar. This in-
structional tool was designed to help candidates critically analyze their vid-
eos. In addition, Rubrics 6–10 were covered in depth and video permission
forms were collected and scanned into candidates’ LiveText accounts. The
fourth seminar focused on Task 3 and an in-depth review of Rubrics 11–15
as well as what artifacts from Task 1 were to be completed in Task 3.
Video support seminars were also scheduled for a time when candidates
would have finished their video recording. These seminars offered assis-
tance with formatting, cropping, and compressing files. Candidates were
given a check-sheet to have completed before these seminars that required
them to have chosen their videos and documented their time stamps as re-
quired by content area. The goal was for the candidates to have completed
the video portion of Task 2 by the end of the seminar. Submission seminars
were scheduled by content area and at times that would not interfere with
candidates who were actively teaching.
In 2016–2017, the edTPA Committee restructured seminar days to allow
candidates free time to write and collaborate either with their colleagues
or with their faculty content mentors. In most cases, their content mentors
were also their university supervisors in the classroom. Other interactive ac-
tivities were added starting in fall 2016. These included a Teaching for Equity
(Whittaker, 2016) emphasis in the first candidate seminar to support Task
1, an interactive worksheet for students to use to track their Task 2 video
preparation, and a requirement for seminar admission, or "ticket in the
door," for the final submission seminar, which specified how each file was
to be named, the number of files required, and the type of each document.
Also during the Spring 2017 semester, three university supervisors host-
ed interactive discussion sessions within the seminars focusing on topics
such as authentic assessments, effective differentiation, and professional
dispositions and communication.

Building Communication and Collaboration

Research was necessary in order to understand the impact of edTPA on
all stakeholders working to support teacher candidates. By understanding
how cooperating teachers and university supervisors viewed their knowledge
of, and responsibilities for, the edTPA assessment and its specific tasks, we
surmised that we would be able to support these key resources. Surveying
and interviewing cooperating teachers and university supervisors was key
in building communication and collaboration that ultimately resulted in a
successful edTPA process.

Cooperating Teachers
Although we routinely conducted a survey of our cooperating teachers
in the field, only a few of those questions were directly associated with sup-
porting our candidates through the edTPA process, which left many of our
edTPA questions unanswered. Since this is one of the most consequential as-
sessments required of our candidates, we decided to conduct in-person and
telephone interviews of six cooperating teachers from a variety of content
areas and grade levels. We selected teachers in the areas of elementary edu-
cation, music, visual arts, and secondary history, science, and mathematics.
We discovered that although the cooperating teachers knew about edTPA,
they really had no clear or organized way of understanding how they
should support their candidates while in their classrooms. How much or
little they knew about the edTPA portfolio depended largely on two factors:
(a) how much the university supervisor knew and communicated to the
cooperating teacher, and (b) how much the candidates
themselves understood. Some of the cooperating teachers felt they had a
The Power of Supports to Improve edTPA Outcomes    113

clear idea of the requirements, partly because they mirrored the portfolios
required for National Certification, and others because they had taken the
initiative to research the edTPA website. However, many of the teachers
simply did not have the time to do the research and were confused by the
candidate’s need to plan for teaching and video recording lessons so far
in advance of teaching them. Planning well in advance was complicated by
the school districts' use of pacing guides, which dictated what
content needed to be covered in a given nine weeks or semester. We found
that some candidates’ plans to teach were thwarted because the classroom
teacher had to reteach concepts or other school activities, such as field
trips, got in the way. Setting aside three or more days for the candidate to
video record was a problem in those cases.
The requirement for the candidates to video record their edTPA lessons
also was problematic for some cooperating teachers. Although the need for
the candidates to video record without distractions was emphasized repeat-
edly in their on-campus seminars, the real world of the classroom didn’t al-
ways cooperate. Some of the interruptions were laughable after the fact, but
not for the stressed candidates desperately trying to complete their videos.
One video showed a school nurse entering the room to do a “lice check” of
the students. Another video had the potential to make the national scorer
motion sick as a result of zooming in and out with the video camera and
making dizzying swoops across the classroom. All of these issues pointed to
the fact that the requirements and needs of their candidates to complete
edTPA had not been well communicated to the cooperating teachers.
Cooperating teachers who took part in the structured interviews had sev-
eral suggestions of how to remedy this problem. One was to create short
videos or online modules for them to use as a guide. Another was to hold
webinars, and yet others were to create a binder or a PowerPoint for them
to use. Fortunately, all of these resources had previously been created, but
the problem was that there was no one central place for all cooperating
teachers to receive the same information and be able to discuss it in a com-
mon forum. The request from cooperating teachers to remedy this was to
have a day on campus just for them, where all of their questions could be
answered before the school year began and they were busy teaching their
classes. Thus, a professional Teacher Camp was developed that reviewed
all internship requirements with an extensive edTPA overview and plenty
of time for discussion and questions, and included packets of information
and websites. Teacher Camp will be an annual event, bringing in new
cooperating teachers each summer in an effort to ensure that we are all on the
same page.
Supervising Faculty
In Fall 2016, 15 university supervisors from across the EPP (elementary,
special education, secondary health and physical education, mathematics,
music, and art education) were trained during four three-hour seminars
conducted by the edTPA coordinator using the model developed by SCALE
(SCALE, 2015b). Redacted edTPA portfolios written by our own univer-
sity candidates were examined (with appropriate permissions from candi-
dates) and faculty scores were compared to actual Pearson scores. A survey
of 7 open-ended questions and 2 Likert-scale questions was completed by
the 15 supervising faculty.
Faculty who had been supervising candidates through edTPA said that
they participated because they wanted to be sure that they truly understood
how edTPA scoring worked. Others were new to the process of supervising
candidates through edTPA. The supervising faculty were shown how the
prompts in the commentaries were closely aligned with the rubrics and how
much national scorers depend on the responses to those prompts. The im-
portance of having the lesson plans, instructional materials, and assessments
support those responses was discussed. The faculty worked together
in content-area groups to decide on scores for each rubric and develop a
rationale and evidence to support their chosen scores. After each group
presented their scores and reasoning, the Pearson scores were provided.
For the most part, faculty were within one level of the Pearson scores. One
participant stated, “I understand the process better now. Now I see what
the reasoning is behind the rubric level scores.” Another supervisor add-
ed, “Talking it through helped me understand why in-depth responses are
scored at a higher level. Working together as a group and looking closely
at the rubrics was eye-opening.” This training gave our supervising faculty
the ability to support cooperating teachers and their teacher candidates in
a more highly informed fashion.

Candidate Perspectives
Using the survey data from teacher candidate participants from Fall
2015–Fall 2016, we attempted to ascertain which supports developed by
the EPP, supervising faculty, and cooperating teachers were perceived to
be the most useful for completing the edTPA portfolio. Teacher candidates
responded to a survey at the end of each semester. The survey was modified
during the first three semesters, and only scores from the semesters in which
edTPA was required (identified as highly consequential) were used for this
study (academic years 2015–2016 and 2016–2017). The final survey con-
sisted of 10 questions.
Candidates continued to report feeling "overwhelmed" by the task of
developing the edTPA portfolio. On the Fall 2016 edTPA survey, almost 60%
of the candidates reported that edTPA took 61 or more hours to complete.
Candidates ranked the perceived usefulness of all of the supports and
resources provided related to their completion of the edTPA. Two main
prompts from the survey were analyzed because of their bearing on the
supports: Prompt 2 and Prompt 6. Prompt 2 asked candidates to rank in
order of VALUE the following supports: edTPA seminars, content mentors,
edTPA handbooks, edTPA exemplars, LiveText submission seminar, and
video support seminar. In Spring 2015, two additional supports were added:
peer support and edTPA resources (which included Making Better Choices,
Rubric Level Progressions, and other resources available on the edTPA
website). A score was then calculated for each support from the rankings,
allowing a comparison across supports.
Survey Prompt 6 rated the helpfulness of each support resource on a
5-point Likert scale from 1 (not helpful at all) to 5 (excellent support).
That prompt listed support personnel such as the process mentor (univer-
sity supervisor), content mentor (program faculty), cooperating teacher,
video support (learning commons), and submission support (LiveText
coordinator).
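For readers who wish to replicate this kind of survey scoring, the following is a minimal sketch, in Python, of one plausible way such data can be scored: a Borda-style weighting for the rank-order responses to Prompt 2 and a simple mean for the Likert ratings from Prompt 6. The support labels, responses, and weighting scheme are illustrative assumptions of ours; the chapter does not report the exact formula the edTPA team used.

    # Illustrative sketch only: one plausible way to score rank-order (Prompt 2)
    # and Likert (Prompt 6) survey data. The data and the Borda-style weighting
    # are hypothetical; the exact formula used by the edTPA team is not reported.
    from collections import defaultdict

    # Each response lists supports from most valued (first) to least valued (last).
    rank_responses = [
        ["cooperating teacher", "edTPA exemplars", "edTPA seminars"],
        ["edTPA exemplars", "cooperating teacher", "edTPA seminars"],
        ["cooperating teacher", "edTPA seminars", "edTPA exemplars"],
    ]

    def borda_scores(responses):
        """Give n - 1 points to a first-place ranking, n - 2 to second, and so on."""
        scores = defaultdict(int)
        for ranking in responses:
            n = len(ranking)
            for position, support in enumerate(ranking):
                scores[support] += (n - 1) - position
        return dict(scores)

    # Likert ratings on the 1 (not helpful at all) to 5 (excellent support) scale.
    likert_responses = {
        "process mentor": [4, 5, 3],
        "cooperating teacher": [5, 5, 4],
    }

    print(borda_scores(rank_responses))
    for support, ratings in likert_responses.items():
        print(support, sum(ratings) / len(ratings))

Any monotone weighting would serve equally well; the point is simply to collapse each candidate's ordering into a single comparable number per support.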
Shown in Figure 5.1 are the scores from Teacher Candidate Prompt 2:
Rank in order of VALUE the following support structures. Scores are indi-
cated by semester. The helpfulness of each support resource was rated on
a 5-point Likert scale (see Figure 5.2). This again is indicated by semester.
Fall 2015 participants indicated the submission support was the most valu-
able resource for them. Spring 2016 and Fall 2016 participants both indi-
cated the Cooperating Teacher was the most valuable resource for them.
Figure 5.1  edTPA supports value scores.

Figure 5.2  Overall helpfulness.

As shown in Figure 5.1, edTPA Support Values, physical resources such
as edTPA Handbooks, edTPA exemplars, and edTPA resources documents
were all ranked high by teacher candidates in terms of the value of their
support. However, content mentors and peer support were also highly
valued, indicating that having personal support was very helpful to the
candidates.
For edTPA overall helpfulness (see Figure 5.2), candidates rated similar
factors much higher. The human factors, such as process mentors,
cooperating teachers, and content mentors, all scored between 58% and
90% in terms of how helpful they were perceived to be by the teacher
candidates. The helpfulness of the support seminars showed a wider
disparity, scoring from 41% to 89%. Of special
note is the upward trajectory of the helpfulness of cooperating teachers.
This may be an indicator of cooperating teachers’ increased knowledge,
familiarity, and comfort with the edTPA.
As these candidate supports continue to be refined based on these survey
results and individual comments from the teacher candidates, we expect their
usefulness scores to increase. This is an evolving process, which will con-
tinue every semester.

REFLECTIONS ON WHAT WE LEARNED

Candidate Preparation

Candidate surveys indicated that providing both a process mentor and
a content mentor was confusing for candidates and also placed a heavy
work burden on faculty in addition to their regular duties. To remedy this
work burden on faculty in addition to their regular duties. To remedy this
shortcoming, university supervisors were trained in local scoring evaluation
by the edTPA coordinator, who was a nationally certified scorer. Candidates
also reported a disconnect between their curricular supports and the
requirements of compiling the edTPA portfolio. Over the past three
semesters, significant program curriculum revisions have been developed and
approved by the university-wide curriculum committee. Those changes and
revisions in candidate edTPA supports are discussed below.
In order to prepare candidates for this rigorous in-depth assessment,
both graduate and undergraduate programs have aligned coursework
with edTPA best practices. All candidates use an EPP lesson planning tem-
plate recognized by the National Council on Teacher Quality (NCTQ) as
an exemplary format (AASU, n.d.). That template and its corresponding
rubric are key assessments in all educator preparation programs and
are aligned with edTPA Planning Task 1. In addition, methods courses have
required the videotaping and critique of instruction with input from the su-
pervising faculty as well as the cooperating teacher. Finally, a stand-alone
assessment course is now part of graduate programs, while an entire “mini-
edTPA” assignment is embedded in the undergraduate courses. Intern-
ship requirements have been adjusted to reflect the use of performance
evaluation standards, which mirror the state’s teacher evaluation standards
(Georgia Department of Education, 2014). Using edTPA as a performance
assessment of teacher candidates during the student teaching semester has
created what one cooperating teacher called a realistic way to prepare for
one’s own classroom using “the same book with a different cover.”

Cooperating Teachers

The face-to-face and phone interviews with selected cooperating teachers
yielded information not previously recognized. The cooperating teachers'
perceptions of the edTPA process aligned with McLee's (2015) perspective
that the edTPA portfolio was realistic preparation for the world
of the real classroom.
The interviews indicated that most of the cooperating teachers felt that
they had adequately supported their teacher candidates in helping the
candidates plan lessons, but felt that they (the cooperating teachers)
did not know enough about edTPA to really understand the requirements
placed on their teacher candidates. All cooperating teachers agreed that they
would welcome a time in person, and perhaps on campus, to have a fo-
rum to ask questions and understand more about edTPA. The teachers had
been provided with a webinar and SCALE documents (SCALE, 2015a) that
outlined the role of the cooperating teacher, but all indicated the need for
more information. The director of the office of field experiences and the
edTPA coordinator traveled to schools with at least three cooperating teachers to
give them a crash course on edTPA in Spring 2016, but it was widely felt
that more information was needed. The cooperating teachers also felt that
coming together in person would help them understand the parameters of
their supports to better assist their teacher candidates. Teacher candidates
indicated that they continued to need edTPA support from cooperating
teachers to identify their focus students (their edTPA portfolios require
identifying a high-, medium-, and low-performing student) as well
as in planning when lessons would be taught.

Supervising Faculty

A total of 15 faculty, both clinical and academic, attended a series of
5 seminars (2–3 hours each) designed to provide local evaluation training
(Barron, 2015) as approved by SCALE (SCALE, 2015b). All faculty indicated
they had not understood the rubrics deeply enough, nor the thinking
behind them. All were experienced university supervisors who understood
best practices of teaching. The faculty felt the preparation of the edTPA
portfolio certainly met the requirements of best practices, but also felt
it was a heavy burden for teacher candidates. Supervising faculty closely
studied the edTPA rubrics, using institutional portfolios as their training
modules in order to increase understanding. All faculty going through the
training scored within one level (adjacent) of the Pearson score, indicating
their acceptable understanding of the scoring process.
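Readers unfamiliar with scoring calibration may find it helpful to see how "within one level (adjacent)" agreement can be quantified. The short Python sketch below computes exact and adjacent agreement between local and official rubric scores; the score lists are hypothetical examples of ours, not the EPP's actual data.

    # Illustrative sketch: exact and adjacent agreement between local faculty
    # scores and official Pearson scores across edTPA rubrics. The score lists
    # below are hypothetical, not the EPP's actual data.

    def agreement_rates(local, official):
        """Return the proportions of exact matches and of scores within one level."""
        if len(local) != len(official):
            raise ValueError("Score lists must be the same length.")
        pairs = list(zip(local, official))
        exact = sum(1 for a, b in pairs if a == b) / len(pairs)
        adjacent = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)
        return exact, adjacent

    # Hypothetical scores for the 15 rubrics, on the 1-5 rubric scale.
    faculty_scores = [3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3, 3, 2, 4, 3]
    pearson_scores = [3, 3, 3, 4, 2, 2, 3, 4, 4, 2, 3, 2, 2, 4, 3]

    exact, adjacent = agreement_rates(faculty_scores, pearson_scores)
    print(f"Exact agreement: {exact:.0%}; adjacent agreement: {adjacent:.0%}")

With a five-level rubric, adjacent agreement is a common, forgiving benchmark; exact agreement is stricter and typically lower.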

MOVING FORWARD

The teacher candidate supports outlined in this study have been added and
modified over the past three years based on results of teacher candidate
surveys, cooperating teacher surveys, and the supervising faculty surveys
after local edTPA scoring training. The edTPA Team, established in late
Fall 2013, evolved into the edTPA Committee in 2016–2017; it has attend-
ed and presented at edTPA conferences; maintained strong relationships
with the State Professional Standards Commission and other institutions
of higher education using edTPA; and participated in webinars for edTPA
coordinators across the state. That participation has resulted in a wide net-
work of experts who advise the EPP in what is considered acceptable candi-
date support (something that has changed markedly over the past two years
[e.g., Understanding Rubric Level Progressions was not available to candidates
until 2015]). In addition, the edTPA committee was charged with planning
and improving edTPA policies, seminars and boot camps, curricular recom-
mendations, and procedures for implementation and submission support.
The training of supervising faculty to serve as edTPA mentors for teacher
candidates has provided candidates with support and feedback. Curricular
changes are developmentally preparing teacher candidates for the edTPA
portfolio assessment. However, there continue to be faculty content mentors
who are tasked with mentoring candidates through edTPA with no release
from a full workload. Our attempt to give the best possible support to our
candidates while relieving the workload of non-supervising faculty has been
a primary focus during the 2016–2017 academic year. Using the com-
bined knowledge of several state universities as well as our regional edTPA
consultant, the edTPA committee is developing foundational supports for
our candidates that are program-based rather than college-focused. Based
on the belief that each program has designed its own curricular supports
and best knows its content area, it is logical that the programs would be-
come the center of support for their majors. Program-driven on-campus
boot camps, envisioned for Fall 2017, will relieve individual faculty from
mentoring each candidate individually. Professional development and lo-
cal scoring training will be available for any new supervising faculty each
semester.
Future initiatives include requiring boot camps three times during the
semester to review Tasks 1, 2, and 3, respectively. Boot camps are scheduled
to occur in the morning of the seminar days, while the afternoon will be
open for candidates to write and to develop their edTPA portfolios. This,
we hope, will give our candidates content-specific support from faculty ex-
perts in addition to edTPA support in the classroom from their university
supervisors.
Our work will expand to include more in-depth exploration of perceptions
of the edTPA process through the development of an open-ended survey and
focus groups that include both candidates and content mentors. We believe
the results of this work, along with the descriptions of the supports provided,
can help other teacher preparation programs beginning an edTPA portfolio
requirement. Starting a new program from scratch can take a lot of trial
and error, and it was our intention that seeing what worked for one
university might streamline the process at other universities.

REFERENCES

American Association of Colleges for Teacher Education (AACTE). (2016). edTPA.
Retrieved from edtpa.aacte.org
Armstrong Atlantic State University (AASU). (n.d.). Professional education unit: Les-
son plan format. Retrieved from http://nctq.org/dmsView/Armstrong_Atlan-
tic_State_University_Professional_Education_Unit_Lesson_Plan_Format
Barron, L. (2015). Preparing pre-service teachers for performance assessments.
Journal of Interdisciplinary Studies in Education, 3(2), 70–76.
120    K. FABRIKANT et al.

Burns, B. A., Henry, J. J., & Lindauer, J. R. (2015). Working together to foster can-
didate success on the edTPA. Journal of Inquiry & Action in Education, 6(2),
18–37.
Georgia Department of Education. (2014). Georgia Department of Education TAPS
Standards. Retrieved from https://www.gadoe.org/School-Improvement/
Teacher-and-Leader-Effectiveness/Documents/FY15%20TKES%20and%20
LKES%20Documents/TAPS_Reference_Sheet%206-5-14.pdf
Greenblatt, D. (2016). Supporting teacher candidates completing the edTPA. In
D. Polly (Ed.), Evaluating teacher education programs through performance-based
assessments, (pp. 184–200). Hershey, PA: IGI Global.
Hobbs, E. L. (2015). The edTPA experience: Student teachers’ perceptions and reflec-
tions on the edTPA assessment (Doctoral dissertation). Aurora University,
Aurora, IL. Retrieved from http://gateway.proquest.com/openurl?url_
ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx
:dissertation&rft_dat=xri:pqdiss:3746484
McLee, L. (2015). “Mama Bear” teacher says edTPA gave student teacher realistic
classroom experience. Ed Prep Matters AACTE blog. Retrieved from http://
edprepmatters.net/2015/06/mama-bear-teacher-says-edtpa-gave-student-
teacher-realistic-classroom-experience/
Pecheone, R. L., & Chung, R. R. (2006). Evidence in teacher education: The Per-
formance Assessment for California Teachers (PACT). Journal of Teacher
Education, 57(1), 22–36.
Petty, T., Heafner, T. L., Lachance, J., & Polly, D. (2016). Supporting teacher edu-
cation candidates through the edTPA process. In D. Polly (Ed.), Evaluating
teacher education programs through performance-based assessments (pp. 201–215).
Hershey, PA: IGI Global.
Schieb, L. J., & Karabenick, S. A. (2011). Teacher motivation and professional develop-
ment: A guide to resources, math, and science partnership. Motivation Assessment
Program. Ann Arbor, MI: University of Michigan.
Stanford Center for Assessment, Learning and Equity (SCALE). (2015a). edTPA
guidance to supervising teachers. Retrieved from https://secure.aacte.org/
apps/rl/resource.php?resid=497&ref=edtpa
Stanford Center for Assessment, Learning and Equity (SCALE). (2015b). edTPA
Resource Library. Retrieved from https://secure.aacte.org/apps/rl/resource.
php?ref=edtpa
Suleiman, R., & Byrd, C. (2016). edTPA preparation: Building support structures
for teacher candidates. In D. Polly (Ed.), Evaluating teacher education programs
through performance-based assessments (pp. 138–145). Hershey, PA: IGI Global.
Wei, R. C., & Pecheone, R. L. (2010). Assessment for learning in preservice teacher
education: Performance-based assessments. In M. M. Kennedy (Ed.), Teacher
assessment and the quest for teacher quality: A handbook (pp. 69–132). San Fran-
cisco, CA: Jossey-Bass.
Whittaker, A. (2016, March). edTPA and equity. Presentation at edTPA Regional
Technical Summit. Macon, GA.
CHAPTER 6

COGNITIVELY GUIDED
INSTRUCTION AS A MEANS
OF PREPARING ELEMENTARY
TEACHER CANDIDATES
FOR edTPA MATHEMATICS
ASSESSMENT TASK 4
Susan Swars Auslander
Georgia State University

Stephanie Z. Smith
Georgia State University

Marvin E. Smith
Kennesaw State University

Our elementary teacher preparation programs are located in a state requir-
ing successful completion of edTPA® as a gatekeeper for teacher certifica-
tion (American Association of Colleges for Teacher Education [AACTE],
2016). That is, teacher candidates not achieving a passing score on edTPA
are not granted teaching licensure, though they can receive a bachelor's
degree in Early Childhood Education (Grades K–5) from our universities.
For teacher preparation programs across the state, edTPA first became con-
sequential for graduates in 2015–2016. Its recent implementation and high-
stakes nature, along with uncertainty about both expectations for success
and scoring processes, have contributed to a pervading sense of anxiety in
our prospective teachers and program faculty.
The version of edTPA our teacher candidates complete, Elementary Ed-
ucation, includes four tasks with one focusing on mathematics (i.e., Math
Task 4; Stanford Center for Assessment, Learning, and Equity [SCALE],
2015). In general, the mathematics preparation of elementary prospec-
tive teachers holds certain challenges, particularly the well-documented
unproductive mathematical beliefs and attitudes of elementary teachers.
For example, many hold procedural beliefs about mathematics, including
that it is a collection of facts, rules, and skills that should be memorized
with repeated practice (Hiebert, 2003; National Council of Teachers of
Mathematics [NCTM], 2014a). Additionally, some have a fixed mind-set
(Dweck, 2006) about learning mathematics rather than a growth mind-set,
and therefore believe not everyone can improve their capacity to learn and
understand mathematics. Furthermore, mathematics anxiety has been of
long-standing concern in this population, with research showing anxiety
to be prevalent (Bekdemir, 2010; Conference Board of the Mathematical
Sciences [CBMS], 2012; Hembree, 1990; Philipp, 2007). These negative
beliefs and attitudes about mathematics, along with the additional anxieties
generated by edTPA, are concerning to us as mathematics educators. How
can we remain true to the goals, values, and practices we know to be im-
portant for our prospective teachers during mathematics methods courses,
while meeting the real challenges of preparing them for this high-stakes
performance assessment? How can we allay the fears and apprehensions
of our teacher candidates about this gatekeeping mechanism, which at times
appear overwhelming? It should be noted that regardless of our philo-
sophical stance about the value of edTPA, as mathematics educators we
believe the responsibility largely lies with us for preparing our prospective
teachers for Math Task 4.
So, we set out to continue our current foci and learning experiences in
our mathematics methods courses, with the additional goal of providing
seamless preparation for Math Task 4. Our courses emphasize the study of
children’s thinking and using these understandings to guide instructional
decision-making, particularly focusing on the frameworks for problem types
and children’s solution strategies in cognitively guided instruction (CGI).
Our aim was that these experiences would serve as a foundation for a simu-
lated Math Task 4 assignment. Through this process we sought to study the
influence of these mathematics methods courses that included a simulated
Math Task 4 on our prospective teachers’ beliefs, along with their perspec-
tives on engaging in the simulated Math Task 4. This chapter describes
the foci and learning experiences provided in our mathematics methods
courses and how they are grounded in the literature, a brief overview of our
research methods, and some of the findings of the study with discussion.

WHAT IS MATH TASK 4?

The Elementary Education version of edTPA comprises three liter-
acy-focused tasks and Task 4: Assessing Students' Mathematics Learning.
Specifically, Math Task 4 requires teacher candidates to teach a learning
segment of 3–5 consecutive lessons based on a mathematical concept as a
central focus (SCALE, 2015). Across these lessons, instruction should sup-
port students’ development of conceptual understanding, procedural flu-
ency, and mathematical reasoning and problem solving skills. Prospective
teachers are to develop or adapt an assessment to be administered upon
conclusion of the learning segment and analyze the data from this group
assessment for evidence of students’ patterns of learning, with the analysis
focusing on mathematical errors, confusions, and partial understandings.
Three students with a common area of struggle are then chosen as focus
students for a re-engagement lesson. Based on the analysis of student work
of the three focus students, teacher candidates identify a targeted learning
goal and design and teach the re-engagement lesson. Then, they administer
another assessment to the three focus students to provide new evidence of
student understanding. The prospective teachers evaluate the effectiveness
of the re-engagement lesson and consider its impact on student learning.
In addition to submitting student work, they prepare and submit a written
commentary of not more than eight single-spaced pages responding to vari-
ous prompts about their instructional processes and associated reflections.
The developers of edTPA assert that this task generally centers on two high-
leverage teaching practices: (a) the use of assessments to analyze student
learning, and (b) re-engagement of students to continue their learning of
specific mathematical concepts.
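Because the heart of Math Task 4 is analyzing assessment data for patterns of learning, a small worked example may help readers picture the analysis step. The Python sketch below is entirely hypothetical (the error categories and student data are ours, not SCALE's): it tallies item-level error codes from a class assessment and flags students who share the most common struggle as potential focus students.

    # Hypothetical sketch of the kind of analysis Math Task 4 asks for: tallying
    # error patterns on a class assessment and identifying students who share a
    # common struggle. Error codes and data are invented for illustration.

    from collections import Counter

    # Each student's errors, coded by type (None means the item was correct).
    class_results = {
        "Student A": ["place_value", None, "regrouping", None],
        "Student B": [None, None, "regrouping", "regrouping"],
        "Student C": ["place_value", "regrouping", None, None],
        "Student D": [None, None, None, None],
        "Student E": ["regrouping", None, "regrouping", None],
    }

    # Count how often each error type occurs across the class.
    error_counts = Counter(
        code for errors in class_results.values() for code in errors if code
    )
    most_common_error, _ = error_counts.most_common(1)[0]

    # Students who made the most common error are candidates for the
    # re-engagement lesson's focus group.
    focus_candidates = [
        name for name, errors in class_results.items() if most_common_error in errors
    ]

    print(f"Most common struggle: {most_common_error}")
    print(f"Potential focus students: {focus_candidates}")

A tally like this only surfaces where to look; in practice, candidates also examine the work samples qualitatively to characterize the errors, confusions, and partial understandings behind the counts.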

OUR MATHEMATICS METHODS COURSES
AND PREPARATION FOR MATH TASK 4

The empirical data presented in this chapter come from the program in
which the first two authors of this chapter are directly involved. The third
author is also involved with an elementary mathematics methods course
at a different university that is taught from the same perspective and
confronted with similar challenges relating to Math Task 4. We have col-
laboratively created, planned, and taught mathematics education courses
across multiple programs for over 14 years and share common philosophi-
cal views on mathematics teaching and learning.
For the program described in this chapter, our teacher candidates com-
plete one mathematics methods course in the first semester of the program
(since the time of this study, the course has been changed to second se-
mester) and a second mathematics methods course in the third semester
of the program. The first course focuses on: number, operations, and al-
gebraic thinking; and the second course emphasizes: geometry, measure-
ment, and data analysis. Both courses center on developing high-leverage
teaching capabilities in the elementary classroom, including: (a) selection
and implementation of mathematical tasks with high levels of cognitive de-
mand (i.e., worthwhile mathematical tasks); (b) use of multiple mathemati-
cal representations; (c) use of mathematical tools; (d) promotion of math-
ematical dialogic discourse, explanation and justification, problem solving,
and connections and applications typical of a standards-based learning en-
vironment (NCTM, 2000, 2014a); and (e) use of children’s thinking and
understandings to guide instruction. We intend that as our teacher candi-
dates develop these capabilities, their pedagogical beliefs will shift toward
a more cognitive orientation and their beliefs in their abilities to effectively
teach elementary mathematics will strengthen.
Our course sessions promote learning through: (a) active inquiry and
analysis of the mathematics in the elementary curriculum, (b) study of chil-
dren’s thinking and learning via video clips and teaching cases, and (c)
examination of examples of classroom practice via video clips and teach-
ing cases. In addition to assigned readings of various articles, texts include
Schifter, Bastable, and Russell’s Developing Mathematical Ideas series (2008),
Children’s Mathematics: Cognitively Guided Instruction (Carpenter, Fennema,
Franke, Levi, & Empson, 2014), and Thinking Mathematically: Integrat-
ing Arithmetic and Algebra in Elementary School (Carpenter, Franke, & Levi,
2003). Our students also engage in careful analysis of the Common Core State
Standards for Mathematics ([CCSS-M], National Governors Association Cen-
ter for Best Practices & Council of Chief State School Officers [NGACBP &
CCSSO], 2010), including both the content standards for the elementary
grades and the Standards for Mathematical Practice, and the Standards for
Teaching Mathematics in Principles to Actions: Ensuring Mathematical Success
for All (NCTM, 2014a). During the courses, we draw elementary curricular
materials and instructional tasks from: Investigations in Number, Data, and
Space (Technical Education Research Centers, 2012); NCTM’s Navigations
(2007) and Curriculum Focal Points (2006) series; Contexts for Learning Math-
ematics (Fosnot, 2006); Georgia Department of Education’s Mathematics
Frameworks Units (2016); and Everyday Mathematics (University of Chicago


School Mathematics Project, 2007).
The development of high-leverage teaching capabilities occurs in a vari-
ety of ways. For example, across the two courses, teacher candidates engage
in examination of mathematical tasks, including analysis of these tasks based
on level of cognitive demand during selection, setup, and implementation
using the guidelines in the text Implementing Standards-based Mathematics In-
struction (Stein, Smith, Henningsen, & Silver, 2009). In particular, the first
course includes an assignment requiring them to select, adapt, and/or gen-
erate and then analyze six worthwhile mathematical tasks with higher levels
of cognitive demand. In addition, the study of children’s thinking is thread-
ed across both courses, with assignments in the first course including four
clinical-style interviews focusing on children’s understandings of number,
operations, and algebraic thinking, with accompanying analysis to deter-
mine children’s understandings and plan appropriate instructional steps.
For these interviews, prospective teachers engage in careful study of the
frameworks from cognitively guided instruction (CGI) about problem types
and children’s solution strategies. Then, toward the beginning of the sec-
ond course, they use this knowledge of children’s thinking to plan and im-
plement at least two CGI-based lessons in their field placement classroom.
Central to our vision of effective mathematics instruction in elementary
classrooms is the placing of children’s thinking and learning at the center
of classroom activity and instructional decision-making. So, during the CGI-
based lessons, children are expected to interpret the meaning of the story
problem, develop their own solutions, represent their thinking in writing,
construct arguments and critique one another’s reasoning, and debate the
merits of different solution strategies. The assignment requires prospec-
tive teachers to plan two CGI-based lessons, including formulating goals for
children’s learning, determining relevant tools and other materials, and
anticipating children’s solution strategies for the specific problem. These
lessons are to use a three-part structure consisting of the launch (i.e., pos-
ing the number story), student work time, and whole group discourse.
Videos of mathematics lessons using this structure are viewed, analyzed,
and discussed. Particularly during the whole group discourse, prospective
teachers should create goal-driven, discourse-rich spaces by closely attend-
ing to children’s ideas and planned learning goals. They are to sequence
children’s presentations of solutions from the least-sophisticated to the
most-sophisticated, while providing careful representations for the entire
class and prompting children to ask questions, consider validity, and dis-
cuss similarities and differences amongst the strategies shared. After im-
plementation of each CGI-based lesson, they assess and analyze children’s
individual work samples using a multi-dimensional analytic rubric in order
to plan next instructional steps. In addition, they engage in a reflective
component about the effectiveness of the lesson by considering and writing


about the impact on children’s learning. We anticipate these CGI-based les-
sons will provide a foundation for our teacher candidates' understanding,
preparing, implementing, and writing about a subsequent simulated Math
Task 4, in part because these lessons integrate important elements of Math
Task 4—attention to children’s conceptual understanding, procedural flu-
ency, and problem solving and reasoning.
Midway through the second course, our goal is that the learning experi-
ences thus far provide substantial preparation for Math Task 4. At this point,
a specific assignment requires our teacher candidates to engage in a simu-
lated Math Task 4 with children in their field placement classrooms, which
are either fourth or fifth grade. For the simulated task, the learning segment
is specified as only two lessons (rather than 3–5) before the re-engagement
lesson, because they spend only two consecutive days each week in placement
classrooms. Prospective teachers are to collaborate with their cooperating
teachers on deciding which mathematical concept to teach, and we tell them
that using CGI-based lessons is permissible but not required. They are
to read the edTPA Handbook, and we provide explicit information on the
expectations of Math Task 4. Their understandings of important elements of
Math Task 4, particularly the required attention to conceptual understand-
ing, procedural fluency, and problem solving and reasoning, are supported
by the readings “Developing Computational Fluency With Whole Numbers”
(Russell, 2000), “Procedural Fluency in Mathematics” (NCTM, 2014b), and
“The Strands of Mathematical Proficiency” in Adding It Up: Helping Children
Learn Mathematics (National Research Council [NRC], 2001). They consider
how these elements are embedded in various cognitively demanding math-
ematical tasks and how children can evidence them, and they are expected
to use this information to construct assessment rubrics. The completed simu-
lated assignment is due at the end of the semester.

GROUNDING OUR COURSES' FOCI
AND LEARNING EXPERIENCES IN THE LITERATURE

The emphases and learning experiences in our mathematics methods
courses, as well as this study, are grounded in the literature on elementary
teachers’ mathematics preparation.

Mathematics Teaching Practices

Reform and change in mathematics education in the United States have
been largely guided by the research-based recommendations of NCTM.
NCTM’s (2014a) Principles to Actions: Ensuring Mathematical Success for All


recommends the use of a framework of eight mathematics teaching practic-
es comprised of a core set of high-leverage practices and essential teaching
skills. This framework supports the “characterization of mathematics learn-
ing as an active process, in which each student builds his or her own math-
ematical knowledge from personal experiences, coupled with feedback
from peers, teachers and other adults, and themselves” (NCTM, 2014a,
p. 9). According to these eight practices, teachers should: establish math-
ematics goals to focus learning, implement tasks that promote reasoning
and problem solving, use and connect mathematical representations, facili-
tate meaningful mathematical discourse, pose purposeful questions, build
procedural fluency from conceptual understanding, support productive
struggle in the learning of mathematics, and elicit and use evidence of stu-
dent thinking. Some of these practices are evident to varying degrees in the
expectations of Math Task 4, such as the emphasis on: children’s problem
solving and reasoning, the development of both conceptual understand-
ing and procedural fluency, and the eliciting of children’s understandings
via assessment to guide instruction. The implementation of NCTM’s high
leverage practices is challenging for many teachers, given the marked dif-
ferences between those practices and the traditional direct instruction per-
vasive in classrooms in the United States.

Teachers’ Beliefs

There are many factors that shape teachers’ instructional practices, with
one being their mathematical beliefs. Though some argue the teacher be-
liefs–practice link is less causal and more dynamic, with the impact of beliefs
molded by other facets such as resources and goals and modified by contex-
tual constraints (Schoenfeld, 2015; Skott, 2015), there is “broad acceptance
that mathematics teachers’ beliefs about mathematics influence the ways in
which they teach the subject” (Beswick, 2012, p. 127). Over time, research
has shown teachers’ beliefs shape their thinking and behaviors, including
instructional decision-making and use of curriculum materials (Beswick,
2006; Cross Francis, 2015; Philipp, 2007; Polly et al., 2013; Thompson, 1992;
Wilson & Cooney, 2002). Beliefs have been characterized as: mental con-
structs that are subjectively true for an individual; value-laden, held with
a certain degree of commitment, and relatively stable; and “expected to
significantly influence individuals’ perceptions and interpretations of expe-
riential encounters and their contributions to the practices in which they
engage” (Skott, 2015, p. 6). Two teacher beliefs relevant to this study are
pedagogical beliefs (i.e., beliefs about teaching and learning) and teaching
efficacy beliefs (i.e., beliefs about one’s capabilities to teach effectively and
influence student learning).

Teachers’ Learning and Change

Several program and course emphases have been identified as important
to elementary prospective teachers’ mathematical learning and change
(CBMS, 2012; Hart, Oesterle, Swars Auslander, & Kajander, 2016; Smith,
Swars, Smith, Hart, & Haardoerfer, 2012; Sowder, 2007; Swars, Smith, Smith,
& Hart, 2009). In particular, they should complete courses that examine
in depth (and from a teacher’s view) the vast majority of K–5 mathemat-
ics and its connections to PreK and middle school mathematics. Further,
courses should provide time and opportunities to think about, discuss, and
explain mathematical ideas, while developing mathematical habits of mind
and furthering mathematics as a sense-making enterprise. In addition, pro-
grams should include a seamless blend of study of mathematics content
and teaching methods, and departments of education and mathematics
should collaborate, including mathematics educators and mathematicians,
in the preparation of teacher candidates. Specific methods for prompting
their learning include: studying children’s thinking, using reform-oriented
curricula and cognitively demanding instructional tasks, emphasizing prob-
lem solving and other mathematical processes, examining case studies of
teaching and learning, and relating coursework to K–12 classrooms (Hart
et al., 2016; Lannin & Chval, 2013; Liljedahl, 2005; Philipp, 2008; Philipp et
al., 2007; Schoenfeld, 2015; Swars et al., 2009).
As an example, it has been posited that instead of trying to interest el-
ementary teachers in mathematics for the sake of mathematics itself, teach-
er learning should provide connections to children’s thinking—with which
teachers are fundamentally concerned—to effectively prompt change and
learning (Lannin & Chval, 2013; Philipp, 2008; Philipp et al., 2007; Swars,
Smith, Smith, Carothers, & Myers, 2016; Tyminski, Land, Drake, Zambak, &
Simpson, 2014). One means of studying children’s mathematical thinking
is the professional development materials from the CGI (Carpenter et al.,
2014) Project. CGI is an approach to teaching and learning mathematics
focused on teachers using knowledge of children’s mathematical thinking
to make instructional decisions. It includes well-defined research-based tax-
onomies of problem types and children’s strategies for solving those prob-
lems. The CGI materials available include video clips, cases, and descrip-
tions of teachers, children, and classroom pedagogy in a CGI textbook.
The use of CGI in university courses and professional development shows
positive influences on elementary teacher development in mathematics,
generally contributing to: productive changes in beliefs, implementation of
more cognitively-based instructional practices, and the promotion of more
inclusive pedagogical practices (Cady, Meier, & Lubinski, 2006; Fennema et
al., 1996; Franke, Carpenter, Fennema, Ansell, & Behrend, 1998; Moscar-
dini, 2014; Steele, 2001; Swars et al., 2009; Vacc & Bright, 1999).

OVERVIEW OF OUR STUDY

Using a mixed methods design, we investigated two research questions:

1. Do elementary teacher candidates’ mathematical beliefs change
during a mathematics methods course that includes a simulated
edTPA Math Task 4?
2. What are elementary teacher candidates’ perspectives on engaging
in a simulated edTPA Math Task 4 during a mathematics methods
course?

Participants included 51 elementary prospective teachers (49 females, 2
males) from three student cohorts in an undergraduate teacher prepara-
tion program at the first two authors’ large, urban university in the south-
eastern United States. The program is 2 years in duration and is completed
during the junior and senior years. It consists of 3 semesters of courses
with concurrent 2-day-per-week field placements, followed by a semester of
student teaching. The field placements adhere to a developmental model,
meaning the teacher candidates start their placements in prekindergarten
and finish in fifth grade prior to the student teaching semester. Preparation
for edTPA is embedded in specific courses across the first three semesters,
including the previously described mathematics methods courses, along
with support provided via on-campus seminars during the final semester of
student teaching. The student teaching semester is when our prospective
teachers complete the four tasks of the Elementary Education edTPA and
submit required documents to Pearson Education for evaluation.
Individual interviews and an open-ended questionnaire provided the
qualitative data, and two belief surveys provided the quantitative data. Six
randomly selected teacher candidates participated in individual, semi-struc-
tured interviews at the end of the second mathematics methods course.
The interview protocol contains seven multi-part questions designed to
learn about preparation for and implementation of the simulated Math
Task 4. All prospective teachers completed the open-ended questionnaire,
which contains 10 multi-part questions with a similar purpose, during the
last class session of the second mathematics methods course.
All teacher candidates completed two belief surveys at the beginning and
end of the second mathematics methods course, including the Mathematics
Beliefs Instrument (MBI) and the Mathematics Teaching Efficacy Beliefs
Instrument (MTEBI). The MBI is a 48-item Likert-type scale instrument
designed to assess teachers’ beliefs about the teaching and learning of
mathematics and the degree to which these beliefs are cognitively aligned
(Peterson, Fennema, Carpenter, & Loef, 1989, as modified by the CGI Proj-
ect). The three subscales include: (a) role of the learner (learner), (b)
relationship between skills and understanding (curriculum), and (c) role
of the teacher (teacher). The MTEBI consists of 21 Likert-type scale items,
with the two subscales of personal mathematics teaching efficacy (PMTE)
and mathematics teaching outcome expectancy (MTOE; Enochs, Smith, &
Huinker, 2000). The PMTE addresses teachers’ beliefs in their individual
capabilities to be effective mathematics teachers. The MTOE addresses
teachers’ beliefs that effective teaching of mathematics can bring about stu-
dent learning regardless of external factors.
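As context for how such instruments yield the subscale scores analyzed
below, the following is a minimal Python sketch of scoring Likert-type item
responses into subscale means. The item-to-subscale map and the reverse-
scored items are hypothetical placeholders for illustration only, not the
actual MBI or MTEBI item assignments.

# Illustrative scoring of Likert-type items (1-5) into subscale means.
# SUBSCALE_ITEMS and REVERSE_SCORED are hypothetical; consult the
# instruments' documentation for the actual item assignments.
SUBSCALE_ITEMS = {"PMTE": [1, 3, 5], "MTOE": [2, 4, 6]}
REVERSE_SCORED = {3, 4}  # negatively worded items are flipped (6 - rating)

def subscale_means(responses):
    """responses: dict mapping item number -> rating on the 1-5 scale."""
    scores = {}
    for subscale, items in SUBSCALE_ITEMS.items():
        values = [6 - responses[i] if i in REVERSE_SCORED else responses[i]
                  for i in items]
        scores[subscale] = sum(values) / len(values)
    return scores

print(subscale_means({1: 4, 2: 3, 3: 2, 4: 5, 5: 4, 6: 3}))
# -> {'PMTE': 4.0, 'MTOE': 2.33...}

Reverse scoring keeps higher subscale means consistently indicating
stronger endorsement, which is why negatively worded items are flipped
before averaging.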
Analysis of the qualitative data involved constant comparison methods
(Lincoln & Guba, 1985) and was documented in a coding manual. Both
inferential and descriptive statistics were used for analysis of the quantita-
tive data. The integration of the quantitative and qualitative data occurred
during interpretation of the results, as a part of the mixed methods design.
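To make the quantitative analysis concrete, the following is a minimal
Python sketch, assuming each candidate’s pre- and post-course subscale
means are stored in paired arrays; the simulated data below are illustrative
placeholders, not the study’s data or actual analysis code.

# A minimal sketch (not the study's actual code) of the descriptive
# statistics and two-tailed paired t-tests reported in Table 6.1.
import numpy as np
from scipy import stats

def pre_post_summary(pre, post):
    """Means, sample SDs, mean difference, and paired-t p-value."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    _, p_value = stats.ttest_rel(post, pre)  # two-tailed by default
    return {
        "pretest_mean": round(float(pre.mean()), 2),
        "pretest_sd": round(float(pre.std(ddof=1)), 2),
        "posttest_mean": round(float(post.mean()), 2),
        "posttest_sd": round(float(post.std(ddof=1)), 2),
        "mean_difference": round(float(post.mean() - pre.mean()), 2),
        "p_value": round(float(p_value), 3),
    }

# Hypothetical paired subscale means for n = 51 candidates (5-point scale).
rng = np.random.default_rng(1)
pre = rng.normal(2.83, 0.57, size=51).clip(1, 5)
post = rng.normal(2.80, 0.74, size=51).clip(1, 5)
print(pre_post_summary(pre, post))

The paired t-test is the appropriate choice here because the pretest and
posttest come from the same 51 candidates, so the test evaluates within-
candidate pre-to-post differences rather than treating the two administra-
tions as independent samples.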

OUR STUDY’S FINDINGS

The findings provide insights into teacher candidates’ mathematical be-
liefs before and after the mathematics methods course with the simulated
Math  Task 4. The data also tell the story of candidates’ perspectives on
preparation for and implementation of the simulated Math Task 4.

Teacher Candidates’ Mathematical Beliefs

The teacher candidates’ mathematical beliefs, as indicated through the
survey results, are provided in Table 6.1. Shown are the means, standard
deviations, and mean differences for the data from the MBI and MTEBI
surveys by overall scale and subscale at the two data collection points. The
data are presented as response values on the 5-point Likert-type scales,
along with probabilities (p-values) for the statistical significance of these
changes using paired t-tests (two-tailed). Across the mathematics methods
course, none of the measures of mathematical pedagogical and teaching effi-
cacy beliefs showed significant changes. Given the marginal mean differences
from pretest to posttest, we must conclude that the mean scores for these
beliefs were uncertain at both the beginning and the end of the course. That
is, the prospective teachers’ beliefs that children can construct their own
TABLE 6.1  Means, Standard Deviations, Mean Differences, and p-Values
for Pedagogical Beliefs (MBI) and Teaching Efficacy Beliefs (MTEBI)

                          Pretest            Posttest
Instrument            Mean   Std. Dev.   Mean   Std. Dev.   Mean Difference   p-Value
Overall MBI           2.83     0.57      2.80     0.74          –0.03          .539
Learner Subscale      2.87     0.59      2.81     0.77          –0.06          .367
Curriculum Subscale   2.87     0.45      2.85     0.66          –0.02          .781
Teacher Subscale      2.76     0.78      2.74     0.91          –0.02          .749
Overall MTEBI         2.75     0.69      2.76     0.73          +0.01          .890
PMTE Subscale         2.67     0.77      2.69     0.83          +0.02          .720
MTOE Subscale         2.88     0.78      2.86     0.71          –0.02          .733

mathematical knowledge (learner), mathematics skills should be taught in
relation to understanding and problem solving (curriculum), and mathe-
matics instruction should be organized to facilitate children’s construction
of knowledge (teacher) did not change toward more cognitive alignment.
Further, the teacher candidates’ beliefs in their individual capabilities to
teach mathematics effectively (PMTE) did not increase, and their beliefs
that their own effective instruction of mathematics would influence stu-
dent learning (MTOE) did not positively shift. According to these data, the
intent of the course experiences to shift mathematical pedagogical beliefs
toward more cognitive orientation and increase mathematics teaching ef-
ficacy was not realized.

Teacher Candidates’ Perspectives on the Simulated Math Task 4

The teacher candidates’ perspectives on the simulated Math Task 4 were
revealed through emergent themes from our analysis of the interview and
open-ended questionnaire data. These themes include: anxiety and stress,
mismatches with placement classrooms, readiness and support, and impact
and learning.

Anxiety and Stress


The analysis of the interview data shows that five of the six prospective
teachers were experiencing marked anxiety and stress about edTPA. Their
apprehension was linked to uncertainty about Math Task 4 and edTPA in
general, the detailed expectations for formatting and writing of required
documents for Math Task 4, and the struggle to understand and identify
complex instructional elements involved in Math Task 4.
The high stakes nature of edTPA was a potent force on the teacher can-
didates. Said one: “[It is] not a very settling place to be, knowing that my
teaching certificate and the past 6½ years of college . . . rides on THAT [edT-
PA]” (Participant 3). Interview participants and open-ended questionnaire
respondents described edTPA as: “overwhelming,” “difficult,” “confusing,”
“stressful,” “terrifying,” “time-consuming,” “tedious,” and “annoying.”
These negative associations were linked in part to a piecemeal introduction
to the details of the assessment, as one asserted:

When we first heard [about edTPA early in the program], it scared the crap
out of us. Because all we know is we have to pass it, because that’s what they
tell us. You have to pass this $300 test, but we really need more information so
that just gives us a little bit of ease. (Participant 4)

The prospective teachers’ emphatic call for more information earlier in
the program about the expectations and processes of edTPA is illustrated
by this statement:

We kept hearing about it [edTPA] at the beginning, and they’re like, well,
don’t worry about it. Don’t worry about it ‘til Practicum 2. Don’t worry about
it ‘til Practicum 3. And, then we’re at the end of Practicum 3, and still we’ve
not had a seminar as far as what will be on the edTPA that we are going to
be doing in a few months that determines whether we will get our teaching
certificate or not. We’re like, “that’s kind of a big deal,” and I’d like to know!
Being in the dark for so long, I guess it just creates fear of what it’s going to
be. (Participant 2)

The uncertainty about edTPA was a growing source of anxiety across the
teacher preparation program for the teacher candidates, and this was a
powerful, pervading presence during interactions about edTPA in the
mathematics methods course.
In addition to lack of knowledge about edTPA across the program, the
formatting and writing of the required Math Task 4 documents and artifacts
generated anxieties, with concerns about writing the “right way,” or the per-
ceived right way according to portfolio scorers at Pearson Education. One
simply stated, “I don’t like the format” (Participant 1). Another elaborated,
“Layout can just confuse you more than the actual processes . . . filling it out
with the appropriate academic jargon and what they [portfolio scorers] are
looking for is confusing and stressful” (Participant 3). Another described
her anxieties with writing the commentary portion as:

What I struggled with was the writing it out. Like, I don’t know what edTPA—I
don’t know what they want me to write so I can get the [passing score]. . . . I
don’t know if it is the right language that they want. So that the only thing
I’m nervous about is my language, but I feel prepared other than that.” (Par-
ticipant 2)

She went on to say:

They give you the page limits of don’t go over 8. But, you should have 6.
Should I do 7 1/2? Like should I go all the way to 8? Should I be right at 6?
Um, the nitty gritty stuff is the stuff I’m worried about.

This notion of writing it the “right way” was expressed by others, with the
“right way” presented as an unknown requirement for receiving a passing score.
Another aspect contributing to concerns about Math Task 4 was the com-
plexity of expectations for planning, instruction, and assessment, specifi-
cally the attention to conceptual understanding, procedural fluency, and
problem solving and reasoning. These components were challenging for
the prospective teachers to understand and identify in mathematical tasks,
as one stated, “We’ve struggled to figure out how exactly this task is proce-
dural, conceptual, and problem solving, or if it’s not” (Participant 3). This
difficulty extended to assessments, as she went on to share, “I’m still trying
to figure out, does this cover all three of the categories to be a good assess-
ment?” Locating cognitively demanding tasks with all three elements was
also a challenge for some: “Finding activities that met all three [elements]
was kind of difficult. Just because you want to make sure problem solving
was in there. And, I realized that looking at a lot of these tasks online that
there’s not much problem solving. It’s conceptual or procedural” (Partici-
pant 4). One open-ended questionnaire respondent summarized this com-
plexity of expectations as an “overly complicated mess of work.”

Mismatches With Placement Classrooms


The analysis of the interview data revealed that with the exception of
one teacher candidate, the participants described mismatches associated
with Math Task 4 and their field placement classroom experiences. These
mismatches were related to pedagogical emphases and the everyday reali-
ties of teaching and classrooms. The prospective teachers had to negotiate
their opportunity to complete the simulated Math Task 4 with their cooper-
ating teachers. Attempts to structure lessons for edTPA that were different
from typical lessons in their placement classrooms caused difficulties for
children, and some cooperating teachers considered edTPA as a distraction
from their test preparation efforts with children.
The prospective teachers indicated that the perceived expectations for
pedagogy in Math Task 4, which aligned with the instructional emphases
in their mathematics methods course, were not consistent with the typical
mathematics instruction in their field placement classrooms. Math Task 4
expects that planning, instruction, and assessment consider the complex
relationships amongst conceptual understanding, procedural fluency, and
problem solving and reasoning. However, the teacher candidates described
children who struggled with CGI- and problem-based lessons, requiring
children to come up with their own solution strategies. For example, one
asserted, “My students struggled with [solving] a number story. They didn’t
really do very well because . . . they didn’t know how to do it . . . then they
turned their brains off” (Participant 2). Another described her experience
trying to use a CGI-based lesson as:

It was just very hard for them . . . I realized that they don’t do it in class, like
they don’t do it at school. The number story they had so much trouble do-
ing, and I didn’t realize that they did not get practice at actual problem solv-
ing . . . The problem solving, I was like “oh my God!,” it was stressful. (Partici-
pant 4)

Ultimately, this teacher candidate decided not to use a CGI-based lesson
for her simulated Math Task 4. She went on to say, “I just wished there
were better placements. [The methods course instructor] was showing us
[problem-solving instructional videos] . . . it doesn’t match up as it should”
(Participant 4). Similarly, an open-ended questionnaire respondent wrote,
“Implementation-resistant teachers and students had difficulty adapting to
new procedures. It was hard to get students to produce work or effectively
use time rather than wait for me to give answers.”
The prospective teachers also described Math Task 4 as not matching the
realities of responding to students’ thinking in the moment while teaching
mathematics in elementary classrooms. They struggled with the periodic
assessment required by Math Task 4 that did not anticipate responsive, it-
erative, day-to-day, cognitively-guided instruction. One asserted, “I don’t
think I’d ever do a math lesson the way edTPA wants me to plan out a math
lesson . . . I would have a lesson plan all ready to go, and I’d have to rewrite
it because the students weren’t ready for what I need to teach them. And,
on the drop of a dime I’ve had to totally change my lesson plan. . . . edTPA
just wants you to do all this thinking and time into it . . . when it comes down
to it, what you plan to teach on Wednesday you may end up teaching on
Friday” (Participant 2). Another stated:

We’re basically doing so much for this one test, but if we think realistically
it doesn’t seem so realistic. . . . My cooperating teacher told me . . . you don’t
have time to plan for all of these things. . . . You’re not going to have time
for that. Things are going to happen. . . .You’re not going to be able to fin-
ish. . . . Maybe you’re going faster than the schedule, or you need to reteach
longer than you thought you would. (Participant 5)

Further, given the myriad of demands on practicing teachers, including
pressures related to student achievement on state standardized testing, ne-
gotiating such a large assignment with cooperating teachers was an issue:

I just felt like even my cooperating teacher sometimes, you know, with this
whole [state standardized assessment] thing going on, sometimes I just felt
like she has so much on her plate I didn’t want [to ask for help]. So, just not
being able. Looking for new resources and being scared to ask for help. . . . I
went to the math coach at the school. (Participant 6)

The approaching state standardized assessment was frequently mentioned
by the teacher candidates as an impediment to Math Task 4 implementa-
tion. All in all, the prospective teachers perceived that the expectations of
Math Task 4 did not mesh with the complexity of classroom teaching and
the flexibility it requires.

Readiness and Support


Though participants were experiencing significant anxiety about edTPA,
four of them indicated a degree of readiness about Math Task 4 at the end
of the mathematics methods course. This readiness was linked to the infor-
mation about and practice for Math Task 4 provided in the course, along
with the pedagogy learned in the course—all of which led to some relief
from their anxiety and stress. Specifically related to readiness, one (Partici-
pant 6) spoke of being ready but hoping to have a student teaching place-
ment classroom that supports the instructional emphasis of Math Task 4. It
should be noted that this teacher candidate, who indicated readiness and
did not express apprehension, had unwavering support in her current
placement classroom for implementing Math Task 4. The pedagogical em-
phasis in the classroom was conducive to implementation, and the cooper-
ating teacher was familiar with the assessment and supported her. For the
other five, three indicated readiness with reservations and two indicated
they were not ready. One with reservations asserted, “I can actually do it, as
the edTPA is not as scary. I mean it’s still terrifying, but it’s not as big and
bad as I thought it was going to be” (Participant 2). Another with misgiv-
ings stated, “I ask myself, ‘how does one edTPA?’ I don’t really know . . . I
don’t feel ready. I don’t feel ready at all. I feel like I’m kind of starting to
finally get a grasp of what they want” (Participant 3). Two others stated they
were not ready, with one speaking about edTPA broadly, “I don’t know if
I’m ready for edTPA in general . . . I don’t think I am” (Participant 5), and
the other stating, “I struggled . . . I’m struggling . . . I don’t want to say I am
ready; I can do it” (Participant 1).

Across all of the interviews, a common thread was that they felt more
ready for Math Task 4 than the other tasks in edTPA. For example: “I’m
prepared to do the math if anything, that’s probably the one I’m prepared
to do, to be honest” (Participant 4). From the open-ended questionnaire
data, 58% of the respondents wrote, “Yes,” they felt prepared to a degree to
complete Math Task 4 during student teaching, 18% indicated uncertainty
about their readiness, and 24% wrote they did not feel prepared.
The prospective teachers described supports for their readiness linked
to Math Task 4, including information provided by us as instructors and
also the learning experiences, instructional resources, and examples pro-
vided in the course. One said:

[The instructor] took the time to go through the task. You know, part A and
part B, and that was just like everything clicked. And, I talked to other girls
from our cohort, and like it clicked for a lot of us. Oh, that’s what this means,
and that’s what we need to do with this part . . . She went through and ex-
plained everything . . . let us ask questions. We were just able to ask questions
and get REAL answers. (Participant 2)

In addition to information about Math Task 4, the simulated assignment
was beneficial. Said one:

[The instructor] was so detailed . . . she gave us additional resources and so
okay, I got this. I know how to do it. And she let us do it as a mock the first
time, so it’s not like we’re going into student teaching and will be like, “oh
wait, this is a whole different form or anything.” We already used the format
once so we’re used to it. . . . It’s not a shocker. (Participant 6)

When considering other supports, in response to a question about how
the mathematics methods course prepared them for Math Task 4, the teach-
er candidates stated that engagement in mathematical tasks and assess-
ments with subsequent analysis for conceptual understanding, procedural
fluency, and problem solving and reasoning, along with preparation for
and implementation of the CGI-based lessons, particularly analysis of vid-
eos of model classroom lessons, were especially beneficial. Notably, they
perceived the current and future usefulness of their learning in the course,
as one stated, “This is the first course I thought about where I had to do an
assignment and the stuff she gave me was actually beneficial in my teach-
ing” (Participant 2). Further, they said: “Really, the chunk of what I got
out of her classroom is different ways to explore math. . . . We also watched
several videos of numbers stories and how they [children] explored math
through a number story” (Participant 1):

I learned a lot of different strategies to teach math. . . . She gave us really good
tasks during class that we can possibly do with our students with all three
[conceptual understanding, procedural fluency, and problem solving and
reasoning] of those things that we’re looking for as a teacher. (Participant 4)

The procedural and the conceptual—she really pushes on those key things
that need to happen. . . . if it was a different assessment or a different les-
son. . . . focusing on those three main things. She made sure to put emphasis
on that and finding those three things when we worked on geometry and
those three things when we worked on graphing. (Participant 3)

Impact and Learning


The teacher candidates had mixed perspectives on how their engagement
in a simulated Math Task 4 impacted them as future teachers of mathemat-
ics. Three affirmed that it had made them better teachers of mathematics
for a variety of reasons, particularly posing purposeful problems, focusing on
conceptual understanding, and planning instruction and assessment based
on careful consideration of the needs of students. In response to a related
question, one said, “I definitely think more about the types of problems I
pose to students and the sorts of answers I’m looking for. Or, the way that they
came to the answers . . . having students think non-algorithmically . . . do they
really understand?” (Participant 3). Another stated:

The whole task made me think about math in a different way. . . . The whole
timing thing, like these kids aren’t on my time, but they’re on their own time.
I have to take that into consideration. . . . The assessment might be easy for
me, but is it easy at their level of cognitive demand? . . . It makes me think
about how I set up my lessons. So, I really need to know my students, know
what they need. . . . You don’t want to set them up for failure. (Participant 1)

Another said the simulated Math Task 4 made her “think strategically. You
need to really plan out how you are going to assess . . . the ways you are go-
ing to differentiate with the kids that you are teaching” (Participant 5).
Three were skeptical about Math Task 4’s impact on them as future teach-
ers of mathematics. One spoke of it as including “restrictions . . . with the
things being asked of you may not apply in your classroom” (Participant 6),
while another spoke of its influence as, “I’m not sure. I don’t know. I never
even thought about it that way” (Participant 4). Finally, another candidate
asserted, “I don’t think I’d ever think to do a math lesson the way edTPA
wants me to plan out a math lesson” (Participant 2), with the structure of it
not really accounting for the flexibility that teachers need in the classroom.
In response to a question on what the prospective teachers learned by
preparing and implementing a simulated Math Task 4, some linked their
learning to improving their own classroom pedagogy, while others linked
their learning to the actual completion of the Math Task 4 process. In terms
of improving their mathematics instruction, the prospective teachers men-
tioned the importance of: deliberate and purposeful planning that requires
time and effort; careful alignment of instruction and assessment with the
standards and learning goals; children deeply understanding mathematical
concepts and coming up with their own solutions; differentiation based on
the varying mathematical understandings of children; and implementation
of problem-based, student-centered instruction early on (i.e., from “Day 1”
of the school year, Participant 4). However, two teacher candidates linked
their learning from the preparation and implementation of the simulated
Math Task 4 with the importance of planning early for edTPA, such as “you
cannot wait ‘til the last minute” (Participant 6), and that as a result of the
simulated assignment, the candidate knows Math Task 4 is something she
can successfully complete.

DISCUSSION

The mathematics methods courses described in this chapter are an ex-
ample of our efforts to provide preparation for Math Task 4 in ways that
support our methods course objectives. In particular, the emphases on the
frameworks for story problems and children’s solution strategies in CGI and
CGI-based lessons serve to provide preparation for a simulated Math Task 4
assignment. In addition, our study of these experiences adds to the body of
literature in needed ways, particularly in light of the increasing high stakes
use of edTPA during teacher preparation programs and the scarce research
specifically on preparing prospective teachers for Math Task 4.
We are troubled, as researchers with a long history of studying math-
ematical belief shifts of both prospective and practicing elementary teach-
ers during mathematics education learning experiences, by the continuing
uncertainty of the teacher candidates’ beliefs in this study. Using these same
beliefs surveys, multiple studies have shown significant changes in beliefs in
productive ways as a result of similar course learning experiences (Smith et
al., 2012; Swars, Hart, Smith, Smith, & Tolar, 2007; Swars et al., 2009; Swars
et al., 2016). In this inquiry, two important course intentions—that peda-
gogical beliefs would become more cognitively oriented and that beliefs
about teaching effectiveness would increase—did not occur. The absence
of these shifts is telling and disconcerting to us, particularly in light of the
abundant evidence in the literature on the importance of teachers’ beliefs
and their role in shaping choices of instructional practices (Beswick, 2006;
Cross Francis, 2015; Philipp, 2007; Polly et al., 2013; Thompson, 1992; Wil-
son & Cooney, 2002).
The interview and open-ended questionnaire data provide insights into
some of the struggles and issues the prospective teachers had with the
simulated Math Task 4 and preparing to complete edTPA, which filtered
their learning and the expected changes in beliefs during the mathemat-
ics methods course. The marked fear and at times overwhelming anxiety
about edTPA, along with the substantial misalignment between their field-
placement classrooms and both the expectations for Math Task 4 and the
course learning emphases, likely contributed to continuing uncertain be-
liefs about cognitively-oriented pedagogy, personal teaching efficacy, and
teaching outcome expectancy.
We believe the mathematics portion of the Elementary Education ver-
sion of edTPA has potential to align well with recommendations for how
mathematics should be taught and learned (NCTM, 2000, 2014a; NGACBP
& CCSSO, 2010). Notably, our teacher candidates have indicated they felt
more prepared for Math Task 4 than the three literacy edTPA tasks, which
can be attributed to their learning in our methods courses. Our prospective
teachers have expressed significant appreciation for the detailed informa-
tion on Math Task 4 provided during these courses. They also mentioned
the relevance of their learning in the course with Math Task 4 implemen-
tation, particularly the analysis of mathematical tasks and assessments for
the needed elements (e.g., conceptual understanding, procedural fluency,
and problem solving and reasoning) and the CGI-based lesson assignment,
including the analysis of models of classroom practice provided in videos.
The use of video cases of teaching along with careful analysis in mathemat-
ics methods courses has been shown to support prospective teachers in en-
acting instructional practices focused more closely on student thinking, as
evidenced on PACT, the precursor to edTPA (Sun & van Es, 2015).
We note that though there were mixed responses from candidates about
how Math Task 4 impacts them as future teachers of mathematics, the
completion of the simulated task did result in positive outcomes related to
their learning. They spoke of learning about the importance of: purposeful
and deliberate planning; the close alignment of instruction, assessment,
and learning goals; children deeply understanding mathematics via prob-
lems and their own reasoning; and using children’s understandings and
needs to guide instruction. However, it is concerning that in the simulated
Math Task 4 documents, teacher candidates were preoccupied with using lan-
guage in the “right way” as deemed by the portfolio scorers, with this “right
way” presented as an unknown and a hoop to jump through. They did
not view the assessment commentary as an opportunity to write reflectively
and analytically about their teaching; perhaps more emphasis on this in the
mathematics methods courses is needed.
The central role of schools and placement classrooms when it comes to
edTPA planning and implementation cannot be overstated (Greenblatt
& O’Hara, 2015; Meuwissen & Choppin, 2015). Within the high stakes
context of successful edTPA completion, we are concerned with the con-
tinuing mismatches between methods courses and classroom placements.
Math Task 4 has the potential for promoting good practices by emphasiz-
ing problem solving and reasoning, conceptual understanding, and proce-
dural fluency, but for some in this study it became a significant source of
struggle if they did not have placement classrooms conducive to this more
complex pedagogical emphasis. In fact, only one teacher candidate stat-
ed her placement classroom environment and cooperating teacher were
supportive, and she was the one who indicated the greatest readiness for
Math Task 4 during student teaching, describing her experiences with the
simulated Math Task 4 as “easy.” Our programs are located in a large met-
ropolitan area, with many school districts used as placement sites. How field
placement classrooms are chosen varies, but for a number of the school
districts, administrators choose which teachers receive student interns and
student teachers. For our teacher candidates, it would be ideal to purpose-
fully place them in classrooms that model the instruction learned about in
university courses; however, that does not appear to be feasible. This raises
the question of how teacher candidates can be held to such a high level of
accountability when placement classroom contexts may not be supportive
of the pedagogical expectations of edTPA.
However, it is hoped this mismatch with pedagogical emphasis could
have a silver lining. Teaching in ways that differ from how one learned as
a student is difficult (Hiebert, 2003; NCTM, 2014b). And, given the high
stakes nature of edTPA, we have seen an increased urgency for our prospec-
tive teachers to work through the challenges of planning and implement-
ing instruction that is different from the common practices in their field
placement classrooms. It seems this pressure has increased subsequent to
the use of edTPA in our teacher preparation programs. Developing an abil-
ity to work through and adapt to multiple and sometimes diverging expecta-
tions and demands could perhaps strengthen their understanding of the
outcomes of various teaching practices.
Most teacher candidates in our study were placed in urban schools hav-
ing some of the characteristics identified in the literature—prevalence of
minority student populations, high numbers of students eligible for the fed-
erally funded free and reduced lunch program, high numbers of immigrant
students with English as a second language, and teacher shortages (Jacob,
2007). A purpose of this teacher preparation program is to prepare teacher
candidates for urban school contexts. Greenblatt and O’Hara (2015, p. 59)
argue that “edTPA privileges certain student teaching placements” and
that the “challenges of the edTPA are exacerbated in schools in low-income
communities where . . . students often are not scoring well on standardized
tests. Not only are these schools more likely to have scripted curricula, but
they also have students with a variety of special needs.” These factors help
us understand some of the results of this study. Prospective teachers placed
in more homogeneous school contexts may not have these additional chal-
lenges for edTPA planning and implementation. Thus, the degree of vari-
ability from one school setting to another calls into question the fairness of
edTPA for teacher candidates (Meuwissen & Choppin, 2015).
In addition, it should be noted who makes up the student body at the
university in this study: “[The university] now enrolls more African Ameri-
cans, Latinos, Asian Americans, first generation students, and Pell students
than any other four-year university in [the state]” (Georgia State University,
2015, p. 6). Further, according to U.S. News and World Report (n.d.), the uni-
versity is now one of only two universities to rank in the Top 15 in the nation
for both its racial/ethnic diversity and the number of low-income students
enrolled. The prospective teachers share many of these characteristics with
the overall population of the university. Many are first generation college
students, and most hold jobs while attending the university as full-time stu-
dents. Perhaps these distinctive characteristics of the teacher candidates,
who may have minimal family history related to navigating the college years
and understanding the demands of higher education, were factors contrib-
uting to the profound fear and anxiety associated with edTPA, particularly
in light of its high stakes nature.
The findings of this study also illuminate issues such as the best time
to share details of the expectations and processes of edTPA with prospec-
tive teachers. Clearly, our teacher candidates have escalating anxieties and
stress when first hearing about edTPA, and somehow the growth of these
apprehensions must be halted early on. Yet we must coordinate the need
for detailed information with the appropriate time when that information
will be understandable and useful. Within the program, it seems that as a
way of de-emphasizing edTPA in order to prevent it from hijacking pro-
gram experiences, concrete information about the assessment as a whole is
not provided until the student teaching semester. Delaying detailed infor-
mation about edTPA is also considered a means of preventing the creation
of a program culture that feels like it is “teaching to the test” (i.e., edTPA),
a dilemma found in K–12 schools given the pressures associated with stu-
dent achievement on standardized tests. Faculty knowledge is another issue
related to who shares what about edTPA and when. Just as edTPA is new
to teacher candidates, it is also new to faculty members, who are learning
about its expectations and processes. Some faculty members are philosophi-
cally opposed to edTPA, which plays out in their decisions about what infor-
mation to include or not include about the assessment during the courses
they teach. However, our belief as teacher educators is that within a con-
text of edTPA being a high stakes reality for teacher candidates, providing
timely, detailed preparation is the only responsible path. We also believe
that since the Elementary Education edTPA involves two different content
areas—literacy and mathematics—that preparation for these content areas
must come from those with expertise in these areas. In sum, these factors all
shape when and how much information about edTPA teacher candi-
dates receive from various instructors and supervisors. This aligns with the
findings of others indicating large variations and disparities in the level and
kinds of support that prospective teachers receive (Ledwell & Oyler, 2016;
Ratner & Kolman, 2016).
Though the teacher candidates expressed anxiety and stress about edT-
PA, for the 2015–2016 school year 99% of graduates from the university’s
teacher preparation program successfully passed the requirements of the
elementary education edTPA (“Teacher Preparation Program Effective-
ness,” 2016). Similar to the findings of others (Ledwell & Oyler, 2016), it
appears that edTPA did not serve as a gatekeeper to the profession as some
argue it should (Adkins, 2016; Darling-Hammond & Hyler, 2013; Pecheone
& Whittaker, 2016). However, historically in the program, by the time teach-
er candidates are at the student teaching semester, any who are not a good
fit for the teaching profession have already opted out or been advised out of
the program and have changed to a different major. The prospective teach-
ers’ success during the field intensive aspects of the program—when they
are in elementary schools two days per week for the first three semesters—
provides a good indicator of whether or not an individual is well-suited for
teaching.
In summary, the findings of our study provide insights into some of the
issues related to preparation for Math Task 4 and show the influence of
Math Task 4 in ways that are desirable and not desirable. The lack of change
in these teacher candidates’ mathematical beliefs is decidedly concerning,
particularly when considering patterns from our previous research results
and the intended course outcomes. One positive outcome of Math Task 4
is the careful consideration and conversations we have had as mathemat-
ics educators in preparing to assure the teacher candidates’ success. As al-
ways, our major goal in mathematics methods courses is to focus on good
teaching; but now the goals also include preparing teacher candidates
for Math Task 4 in ways that are consistent with important pedagogical
changes intended to be promoted during elementary teacher preparation
experiences.

REFERENCES

AACTE. (2016). How are states using edTPA for program approval and/or licen-
sure? Retrieved from http://edtpa.aacte.org/state-policy
Adkins, A. (2016, May). The benefits of edTPA. Educational Leadership, 73(8), 55–58.
Bekdemir, M. (2010). The pre-service teachers’ mathematics anxiety related to
depth of negative experiences in mathematics classroom while they were stu-
dents. Educational Studies in Mathematics, 75, 311–328.
Beswick, K. (2006). The importance of mathematics teachers’ beliefs. The Australian
Mathematics Teacher, 62(4), 17–22.
Beswick, K. (2012). Teachers’ beliefs about school mathematics and mathemati-
cians’ mathematics and their relationship to practice. Educational Studies in
Mathematics, 79, 127–147. DOI 10.1007/s10649-011-9333-2
Cady, J. A., Meier, S. L., & Lubinski, C. A. (2006). Developing mathematics teachers:
The transition from preservice to experienced teacher. Journal of Educational
Research, 99(5), 295–305.
Carpenter, T. P., Fennema, E., Franke, M. L., Levi, L., & Empson, S. B. (2014). Chil-
dren’s mathematics: Cognitively guided instruction. Portsmouth, NH: Heinemann
and NCTM.
Carpenter, T. P., Franke, M. L., & Levi, L. (2003). Thinking mathematically: Integrating
arithmetic & algebra in elementary school. Portsmouth, NH: Heinemann.
CBMS. (2012). The mathematical education of teachers II. Providence, RI: American
Mathematical Society.
Cross Francis, D. L. (2015). Dispelling the notion of inconsistencies in teachers’
mathematics beliefs and practices: A 3-year case study. Journal of Mathematics
Teacher Education, 18(2), 173–201.
Darling-Hammond, L., & Hyler, M. E. (2013, Summer). The role of performance
assessment in developing teaching as a profession. Rethinking Schools, 27(4),
10–15.
Dweck, C. (2006). Mindset: The new psychology of success. New York, NY: Random
House.
Enochs, L., Smith, P., & Huinker, D. (2000). Establishing factorial validity of the
Mathematics Teaching Efficacy Beliefs Instrument. School Science and Math-
ematics, 100(4), 194–202.
Fennema, E., Carpenter, T. P., Franke, M. L., Levi, L., Jacobs, V. R., & Empson,
S. B. (1996). A longitudinal study of learning to use children’s thinking in
mathematics instruction. Journal for Research in Mathematics Education, 27(4),
403–434.
Franke, M. L., Carpenter, T., Fennema, E., Ansell, E., & Behrend, J. (1998). Under-
standing teachers’ self-sustaining generative change in the context of profes-
sional development. Teaching and Teacher Education, 14, 67–80.
Fosnot, C. T. (2006). Contexts for learning mathematics. Portsmouth, NH: Heinemann.
Georgia Department of Education. (2016). Mathematics frameworks units. Retrieved
from https://www.georgiastandards.org/Frameworks/Pages/BrowseFrame-
works/math.aspx
Georgia State University. (2015). 2015 status report. Retrieved from http://success.
gsu.edu/download/2015-status-report-georgia-state-university-complete-col-
lege-georgia/?wpdmdl=6470560
Greenblatt, D., & O’Hara, K. E. (2015, Summer). Buyer beware: Lessons learned
from edTPA implementation in New York state. Thought and Action, 57–67.
Hart, L. C., Oesterle, S., Swars Auslander, S., & Kajander, A. (Eds.). (2016). The
mathematics preparation of elementary teachers: Issues and strategies for content
courses. Charlotte, NC: Information Age.
Hembree, R. (1990). The nature, effects, and relief of mathematics anxiety. Journal
for Research in Mathematics Education, 21(1), 33–46.
Hiebert, J. (2003). What research says about the NCTM standards. In J. Kilpatrick,
W. G. Martin, & D. Schifter (Eds.), A research companion to principles and stan-
dards for school mathematics (pp. 5–23). Reston, VA: NCTM.
Jacob, B. A. (2007). The challenges of staffing urban schools with effective teachers.
The Future of Children, 17(1), 129–153.
Lannin, J. K., & Chval, K. B. (2013). Challenge beginning teacher beliefs. Teaching
Children Mathematics, 19(8), 508–515.
Liljedahl, P. G. (2005). Mathematical discovery and affect: The effect of AHA! Expe-
riences on undergraduate mathematics students. International Journal of Math-
ematical Education in Science and Technology, 36(2–3), 219–235.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. New York, NY: SAGE.
Ledwell, K., & Oyler, C. (2016). Unstandardized responses to a “standardized” test:
The edTPA as gatekeeper and curriculum change agent. Journal of Teacher
Education, 67(2), 120–134.
Meuwissen, K. W., & Choppin J. M. (2015). Preservice teachers’ adaptations to ten-
sions associated with the edTPA during its early implementation in New York
and Washington states. Education Policy Analysis Archives, 23(103), 1–25.
Moscardini, L. (2014). Developing equitable elementary mathematics classrooms
through teachers learning about children’s mathematical thinking: Cogni-
tively guided instruction as an inclusive pedagogy. Teaching and Teacher Educa-
tion, 43(2014), 69–79.
NGACBP & CCSSO. (2010). Common Core State Standards for Mathematics. Washing-
ton, DC: Authors.
NCTM. (2000). Principles and standards for school mathematics. Reston, VA: Author.
NCTM. (2006). Curriculum focal points. Reston, VA: Author.
NCTM. (2007). Navigations. Reston, VA: Author.
NCTM. (2014a). Principles to actions: Ensuring mathematical success for all. Reston, VA:
Author.
NCTM. (2014b). Procedural fluency in mathematics. Retrieved from: http://www.nctm
.org/Standards-and-Positions/Position-Statements/Procedural-Fluency
-in-Mathematics/
NRC. (2001). The strands of mathematical proficiency. In J. Kilpatrick, J. Swafford,
& B. Findell (Eds.), Adding it up: Helping children learn mathematics (pp. 115–
155). Washington, DC: National Academy Press.
Pecheone, R. L., & Whittaker, A. (2016). Well-prepared teachers inspire student
learning. Phi Delta Kappan, 97(7), 8–13.
Peterson, P. L., Fennema, E., Carpenter, T. P., & Loef, M. (1989). Teachers’ peda-
gogical content beliefs in mathematics. Cognition and Instruction, 6(1), 1–40.
Philipp, R. A. (2007). Mathematics teachers’ beliefs and affect. In F. K. Lester (Ed.),
Second handbook of research on mathematics teaching and learning (pp. 257–315).
Charlotte, NC: Information Age.
Philipp, R. A. (2008). Motivating prospective elementary school teachers to learn
mathematics by focusing on children’s thinking. Issues in Teacher Education,
17(2), 7–16.
Philipp, R. A., Ambrose, R., Lamb, L., Sowder, J. L., Schappelle, B. P., & Sowder, L.
(2007). Effects of early field experiences on the mathematics content knowl-
edge and beliefs of prospective elementary teachers: An experimental study.
Journal for Research on Mathematics Education, 38(5), 438–476.
Polly, D., McGee, J. R., Wang, C., Lambert, R. G. Pugalee, D. K., & Johnson, S.
(2013). The association between teachers’ beliefs, enacted practices, and stu-
dent learning in mathematics. The Mathematics Educator, 22(2), 11–30.
Ratner, A. R., & Kolman, J. S. (2016). Breakers, benders, and obeyers: Inquiring into
teacher educators’ mediation of edTPA. Education Policy Analysis Archives,
24(35), 1–26.
Russell, S. J. (2000, November). Developing computational fluency with whole num-
bers. Teaching Children Mathematics, 154–158.
SCALE. (2015). edTPA elementary education assessment handbook. Retrieved from http://
edtpa.aacte.org/
Schifter, D., Bastable, V., & Russell, S. J. (2008). Developing mathematical ideas: Case-
books. Parsippany, NJ: Dale Seymour/Pearson Education.
Schoenfeld, A. H. (2015). What counts, when?—Reflection on beliefs, affect, at-
titude, orientations, habits of mind, grain size, time scale, context, theory,
and method. In B. Pepin & B. Roesken-Winter (Eds.), From beliefs to dynamic
affect systems in mathematics education (pp. 395–404). Switzerland: Springer
International.
Skott, J. (2015). Towards a participatory approach to ‘beliefs’ in mathematics educa-
tion. In B. Pepin & B. Roesken-Winter (Eds.), From beliefs to dynamic affect sys-
tems in mathematics education (pp. 3–23). Switzerland: Springer International.
Smith, M. E., Swars, S. L., Smith, S. Z., Hart, L. C., & Haardoerfer, R. (2012). Ef-
fects of an additional mathematics content course on elementary teachers’
mathematical beliefs and knowledge for teaching. Action in Teacher Education,
34(4), 336–348.
Sowder, J. T. (2007). The mathematical education and development of teachers. In
F. K. Lester (Ed.), Second handbook of research on mathematics teaching and learn-
ing (pp.157–223). Charlotte, NC: Information Age.
Stein, M. K., Smith, M. S., Henningsen, M. A., & Silver, E. A. (2009). Implementing
standards-based mathematics instruction. New York, NY: Teachers College Press
and NCTM.
Steele, D. F. (2001). The interfacing of preservice and inservice experiences of re-
form-based teaching: A longitudinal study. Journal of Mathematics Teacher Edu-
cation, 4(2), 139–172.
Swars, S. L., Smith, S. Z., Smith, M. E., Carothers, J., & Myers, K. (2016). The
preparation experiences of Elementary Mathematics Specialists: Examining
influences on beliefs, content knowledge, and teaching practices. Journal of
Mathematics Teacher Education. Advance online publication. doi: 10.1007/
s10857-016-9354-y
Swars, S. L., Smith, S. Z., Smith, M. E., & Hart, L. C. (2009). A longitudinal study
of effects of a developmental teacher preparation program on elementary
prospective teachers’ mathematics beliefs. Journal of Mathematics Teacher Edu-
cation, 12(1), 47–66.
Swars, S. L., Hart, L., Smith, S. Z., Smith, M., & Tolar, T. (2007). A longitudinal study
of elementary pre-service teachers’ mathematics beliefs and content knowl-
edge. School Science and Mathematics, 107(9), 325–335.
Sun, J., & van Es, E. A. (2015). An exploratory study of the influence that analyz-
ing teaching has on preservice teachers’ classroom practice. Journal of Teacher
Education, 66(3), 201–214.
Teacher Preparation Program Effectiveness. (2016). Retrieved from http://educa-
tion.gsu.edu/teacher-preparation-program-effectiveness/
Technical Education Research Centers. (2012). Investigations in number, data, and
space. Upper Saddle River, NJ: Pearson Scott Foresman.
Thompson, A. G. (1992). Teachers’ beliefs and conceptions: A synthesis of the re-
search. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and
learning (pp. 127–146). New York, NY: Macmillan Library Reference USA.
Tyminski, A. M., Land, T. J., Drake, C., Zambak, V. S., & Simpson, A. (2014). Pre-
service elementary mathematics teachers’ emerging ability to write problems
to build on children’s mathematics. In J. Lo, K. R. Leatham, & L. R. Van
Zoest (Eds.), Research trends in mathematics teacher education (pp. 193–218).
New York, NY: Springer.
U.S. News and World Report. (n.d.). Campus ethnic diversity: National universities. Re-
trieved from http://colleges.usnews.rankingsandreviews.com/best-colleges/
rankings/national-universities/campus-ethnic-diversity
University of Chicago School Mathematics Project. (2007). Everyday mathematics.
New York, NY: McGraw-Hill Education.
Vacc, N. N., & Bright, G. W. (1999). Elementary preservice teachers’ changing be-
liefs and instructional use of children’s mathematics thinking. Journal for Re-
search in Mathematics Education, 30(1), 89–110.
Wilson, M., & Cooney, T. (2002). Mathematics teacher change and development:
The role of beliefs. In G. Leder, E. Pehkonen, & G. Toerner (Eds.), Beliefs: A
hidden variable in mathematics education? (pp. 127–148). Dordrecht, The Neth-
erlands: Kluwer Academic Press.
CHAPTER 7

NOT JUST FOR PRESERVICE TEACHERS

edTPA as a Tool for Practicing Teachers
and Induction Support

John Seelke and Xiaoyang Gong
University of Maryland, College Park

In 2010, the University of Maryland (UMD) was among the first teacher prep-
aration programs to pilot the teacher performance assessment edTPA®. Six
years later, over 725 teacher preparation programs across 39 states are using
edTPA in some fashion. Currently eight states (Georgia, Illinois, Minne-
sota, New Jersey, New York, Oregon, Washington, Wisconsin) require an of-
ficially scored edTPA for either program completion or teacher licensure,
with six states (Alaska, Alabama, California, Delaware, Hawaii, West Vir-
ginia) requiring either an officially scored edTPA or another teacher per-
formance assessment for program completion or licensure. By 2019, three
other states (Connecticut, Iowa, Tennessee) will require edTPA, with one
state, North Carolina, requiring edTPA or another assessment (AACTE,
2017). Programs in non-mandated states have more leeway in how they use
edTPA. Some of these programs require all candidates to complete official
scoring. Other programs rely on local evaluation, with faculty or K–12
partners serving as assessors. Local evaluation is different from edTPA official
scoring, as it uses qualitative ratings (emerging, proficient, and advanced)
as opposed to numeric values of 1–5, used in official scoring. Local evalua-
tion provides programs the opportunity to implement edTPA without hav-
ing candidates pay $300 for national scoring; however, in most cases it does
not have the same reliability and validity standards that are found in official
scoring.
At UMD, completing edTPA satisfies a state requirement that all teacher
candidates in approved teacher preparation programs successfully com-
plete a teaching portfolio; however, Maryland is not one of the states that
have a policy connected to edTPA. In order to support candidates who an-
ticipate moving within the next three years to states that require edTPA for
certification or licensure, our university provides vouchers for official scor-
ing. In 2017, our university provided nearly 75 teacher candidates vouchers
for national scoring. Portfolios for all teacher candidates who are enrolled
in the capstone student teaching internship (approximately 300 per year),
including those that are officially scored, undergo local evaluation.
With local evaluation, our university relies on multiple practitioners in teacher preparation, including mentor teachers, local National Board Certified (NBC) teachers, district-level mentors of first-year teachers, and faculty members, to serve as evaluators. Evaluators provide a categorical rating for each rubric along with qualitative feedback that justifies the rating and offers the teacher candidate suggestions for improvement. The addition of qualitative feedback is unique to our university's version of local evaluation and gives teacher candidates an extra layer of support to glean from their edTPA results. In 2014, our university created a professional development plan through which candidates take their feedback from local evaluation and identify areas of strength and areas for growth going into their first year of teaching. The UMD plan is one of the main sources used by the Stanford Center for Assessment, Learning and Equity (SCALE) to create an edTPA professional development (PD) growth plan for any teacher preparation program using edTPA (SCALE, 2016).
With the increased participation of K–12 partners through local evalua-
tion as well as the introduction of the professional development plan, our
university has viewed edTPA as more than just a summative assessment that
candidates complete at the end of their program. Rather, our university
sees the potential of edTPA to impact teachers in their induction year of
professional teaching, with the assessment serving as a bridge between what
candidates study in their teacher preparation program and the pedagogical
practices they use daily in the classroom. Our institution also believes that

exposing K–12 partners to edTPA, through local evaluation, mentor orien-


tation, or other means, can provide implicit professional development to
veteran teachers. To examine these beliefs, three similar surveys were given
between 2015 and 2017 to local evaluators, mentor teachers, and alumni in
their first year of teaching.
Our chapter begins with a short literature review of the research around edTPA and its impact on practicing teachers. We then briefly discuss our university's story around local evaluation, specifically sharing context on how the university recruits, trains, and supports local evaluators, how it shares information about edTPA with candidates' mentor teachers, and how it connects edTPA to induction through the professional development growth plan. In the next section, we describe the methodology, including the sample and data sources. In the last section of the chapter, we discuss findings from the three surveys and explore future research around edTPA as a tool for formative growth.

LITERATURE REVIEW

As noted in Chapter 1, numerous scholars have studied the reasons be-


hind the implementation of teacher performance assessments, both at
the preservice level and the in-service level. Such assessments have long
impacted both the work of teacher preparation and of practicing teach-
ers. Since this chapter focuses on the connection between teacher per-
formance assessments and practicing teachers, we will briefly discuss re-
search around this issue.
Possibly the most well-known assessment that looks at teacher practice
is the assessment teachers take to achieve NBC. Studies have shown that
teachers who attempted NBC, even those who did not successfully pass
the assessment, felt that the process made them better teachers, due to
the amount of reflection required (Hattie & Clinton, 2010; Steeley, 2003).
Since edTPA is based on NBC, one possible hope is that those who complete edTPA (even if they do not successfully pass it) would have similar feelings about the reflection process. However, one key difference between those completing edTPA and those pursuing NBC is that teachers choose to endure the rigorous NBC process; thus, it makes sense that those teachers would view the assessment as a means of improving their practice. Such assumptions cannot be made about preservice teachers who complete edTPA, as the overwhelming majority of candidates who complete the assessment do so as a requirement of their state certification process or their teacher preparation program.
Because edTPA is so new, it has not been studied as much as other assessments. One criticism that has emerged relates to the seeming disconnect between official scoring evaluators hired through Pearson and the candidates whose portfolios are being evaluated, as well as the involvement of Pearson in the assessment of teacher candidates. Studies have noted that candidates felt anxious not knowing who would be assessing their portfolios, even in low-stakes instances where they were submitting for official scoring solely for edTPA completion as opposed to earning a particular cut-score (Hobbs, 2015; Huston, 2015). Huston (2015) commented that two participants in his study "admitted to tailoring their answers [in their commentary] based on the concept of audience, and that their answers did not necessarily reflect what they might consider as best practice" (p. 107). Other researchers have specifically targeted the involvement of Pearson as an edTPA partner as a reason to be against the assessment (Dover, Schultz, Smith, & Duggan, 2015; Jordan & Hawley, 2016; Madeloni & Gorlewski, 2013; Singer, 2014). As Jordan and Hawley (2016) noted, "the problem that many students now have with edTPA is the same problem people have with most major corporations—we simply do not trust them" (p. 1).
Interestingly, the Performance Assessment for California Teachers (PACT), an immediate predecessor to edTPA designed by a collaborative of university teacher preparation programs including SCALE, relies on both faculty and K–12 evaluators to complete portfolio evaluation, a process with noted benefits. In researching PACT, Peck and McDonald (2013) indicated that "direct faculty examination of individual 'cases' was considered to be pivotal in making the relevance and value of PACT data clear" (p. 27). Additionally, Whittaker and Nelson (2013) wrote about the experience with PACT at San Jose State University. They noted that because of time constraints on faculty, much of their PACT evaluation was done by district partners, including mentor teachers as well as teacher leaders in their local districts. They also observed that "when faculty and school personnel work together, there is enormous collaboration and communication about mutual goals for the development of teachers across the learning to teach continuum and shared values about teaching and learning" (p. 92).
Over the past two years, SCALE and Pearson have piloted a type of official scoring called regional scoring, which specifically aims to address the concern that evaluators from across the country may not understand the context of a particular region, something that could affect a candidate's score. With regional scoring, evaluators complete the official scorer training process, making them eligible to score portfolios nationally; however, they are also assigned to evaluate a portion of portfolios from their own region, which could include a state or a group of states. For example, the Maryland edTPA collaborative serves as a region in 2017, meaning that portfolios submitted officially from Maryland can be scored by evaluators from Maryland. It will be interesting to see how regional scoring shapes perceptions of edTPA in the near future.

THE UNIVERSITY OF MARYLAND STORY—LOCAL EVALUATION, MENTOR ORIENTATION, AND THE PROFESSIONAL DEVELOPMENT PLAN

Our university first piloted edTPA in the spring of 2010 with four secondary English candidates. Following that initial pilot, the university gradually added content areas, deliberately scaling up over time. During the first two years of participation, SCALE had not yet partnered with Pearson to offer official scoring, so our university relied on doctoral students and faculty to evaluate the portfolios. In 2012, all universities participating in the edTPA pilot had the opportunity to send their portfolios for official scoring at no cost. Our university took advantage of this generous offer, yet still had faculty and doctoral students locally assess one task per portfolio.
In 2013, with increasing numbers of portfolios and without the option of Pearson official scoring (due to a change in SCALE policy under which only states with pending edTPA policy could submit for official scoring), our university created an edTPA leadership team. The leadership team, consisting of the edTPA local evaluation director and eight faculty members who represented multiple teacher preparation programs (i.e., elementary education, secondary education, PK–12 education), believed that the entire portfolio needed to be evaluated. Additionally, the leadership team recognized that our university did not have enough human capital to rely on in-house assessors alone. This led to the crucial decision to create a model for local evaluation and to invite our PK–12 partners to participate in the evaluation process.

Recruiting and Training edTPA Local Evaluators

During the 2012–2013 academic year, our edTPA leadership team met
bi-monthly to organize its local evaluation process. To create a pool of eval-
uators, the team initially turned to faculty, graduate students, and candidates' mentor teachers in our university's four partner districts. Given the close
connection between edTPA and the assessment for NBC, the team also de-
cided to reach out to NBC teachers within partner districts.
The recruitment process usually begins in October when the direc-
tor of local evaluation first sends out recruitment letters to each of the
potential groups of evaluators. By December the pool of evaluators is
usually complete. Over the past four years, the number of evaluators has
remained at around 100, with nearly 40% returning on a yearly basis. In addition to mentors and local NBC teachers, the program has also reached out to two other groups: teachers interested in pursuing NBC (who may work at one of our professional development school partners but may not be serving as mentors) and alumni who completed edTPA and have been in the classroom for at least three years.
As with official scoring, all of our local evaluators must complete training. Our local evaluation training, however, does not match the rigor of official scorer training and cannot substitute for it. Since nearly all of our evaluators are practicing K–12 teachers, the training is a hybrid of online modules (which introduce evaluators to, or refresh them on, the edTPA format and the evaluation process) and a three-hour in-person session, at which evaluators meet in content area teams and complete a group evaluation of a candidate sample from the previous year. Our university has tinkered with the process over the years, trying to keep returning evaluators from repeating the same training while still drawing on their expertise to support new evaluators. For example, in the spring of 2017, all evaluators had to attend the in-person training; however, returning evaluators stayed for only half of the session. For the few returning evaluators whose schedules prevented them from attending in person (or who, in a couple of cases, had moved from the local area but were still interested in participating), our edTPA office created a webinar focused on particular areas such as scorer bias and providing effective feedback. The webinar was received positively, and our edTPA office hopes to replicate it in the future as a means of supporting K–12 partners.
Candidates typically complete their edTPA in mid- to late April, with evaluators having two weeks to complete their work. Most evaluators are asked to review a minimum of three portfolios. As mentioned earlier, evaluators provide a categorical rating (emerging, proficient, or advanced) for each rubric. They also provide qualitative feedback, sharing evidence for their rating choices as well as what candidates could have done to reach the next level (e.g., move from proficient to advanced). For their efforts, evaluators earn $50 per portfolio, with the funding provided by a combination of grants and money from the student teaching lab fee. First-time evaluators can also earn a continuing professional development (CPD) credit for attending the training. The CPD credit gives our university a no-cost way to compensate K–12 partners for participating in local evaluation.

Mentor Orientation

With only 25% of mentors serving as local evaluators, our university needed to make sure that the remaining mentor teachers working with teacher candidates understood what edTPA is, how the assessment may affect their candidates, and how they can (or cannot) support their candidates around edTPA. While each program at our university has its own method of mentor orientation, since 2013 the orientations have specifically included information about edTPA. For example, the secondary content areas (mathematics, science, social studies, and English) hold group orientations in August at their various school sites. Part of the hour-long session includes sharing tools to support candidates through edTPA as well as offering suggestions on how mentors can infuse edTPA support into their normal routines, such as focusing an observation on pedagogical strategies found in the edTPA rubrics.

Professional Development Plan

The idea behind creating a PD plan connected to edTPA stemmed from


two distinct aspects of local evaluation. First, our university realized that candidates were often not taking advantage of the qualitative feedback provided by evaluators. Candidates tended to treat edTPA as just another assessment connected to preservice teaching (e.g., Praxis) where the only goal was to receive a passing rating. Our university felt that part of the reason candidates did not examine their edTPA results closely was that they lacked a tool for deciphering those results and applying them in their first year of teaching. Involving candidates in the creation of a professional development plan immediately after they received their local evaluation ratings, but before the program ended, gave candidates an opportunity to truly use their edTPA evaluation as one measure of their strengths and areas needing growth as they entered the profession.
Second, a small cohort of local evaluators were district employees whose main focus was working with first-year teachers in the induction process. Some of these evaluators described conversations around edTPA with first-year teachers who had just completed the assessment at our university, noting how they were able to discuss a candidate's strengths or areas of improvement while referring to pedagogical strategies found in edTPA. Additionally, various actors connected to the teacher preparation program, including alumni and district-level professional development school partners, noted the similarities between edTPA and district teacher observation protocols. Thus, the hope was that candidates could use their edTPA results to create a PD plan they could take into their first year of teaching and use as a source of support for their initial formal observations.
Our university created its version of a PD plan in the spring of 2015 and piloted it in the summer of 2015 with 10 candidates who had just completed edTPA. The plan simply asked candidates to identify three edTPA rubrics that represented their strengths, three rubrics that represented areas of growth, and three specific areas of support on which they hoped to focus during their first year of teaching. They used the feedback provided through local evaluation to identify their strengths and areas for growth. In follow-up surveys, pilot participants noted that while they appreciated the opportunity to examine their edTPA feedback and consider their strengths and areas of growth, they did not use the PD plan during their first year of teaching. Some noted they felt overwhelmed as first-year teachers and either turned to other resources or felt they did not have the time to go back and look at the PD plan. Such feedback has led our university to consider ways to educate local districts about the PD plan and to infuse it into specific aspects of the induction process.
Despite some challenges, the idea of creating and using a PD plan gained
some interest from SCALE. In the fall of 2015, our university shared its
prototype with SCALE, which later used it as a basis for a PD plan that was
shared with the entire edTPA community (SCALE, 2016). In the spring of
2016, our university completed a second pilot with the professional growth
plan, with some content areas choosing to use it with their undergraduate
or their master’s certification (MCERT) students. In the spring of 2017, our
university again invited programs to use the PD plan as an end-of-course assignment. At least one of the MCERT capstone courses requires candidates to complete the PD plan. The hope is that by spring 2018, all candidates will use the PD plan in some form, giving the teachers who take employment in our partner districts a head start on the induction process.

RESEARCH ON OUR EFFORTS TO CONNECT EDTPA, OUR PARTNERS, AND INDUCTION

Since 2015, our edTPA office has conducted research around the connec-
tion between edTPA and K–12 partners and the relation between the assess-
ment and induction. Our research aims to answer the following research
questions: (a) How did edTPA influence the teaching of mentor teachers,
first-year teachers (alumni), and edTPA local evaluators? and (b) How were
the perceptions of mentor teachers, first-year teachers, and edTPA local
evaluators similar or different?
For our study, we collected survey data from three groups of participants involved in the edTPA implementation: (a) first-year teachers who completed edTPA in 2016 (n = 91 out of 350), (b) mentor teachers from local districts (n = 140 out of 400 mentors), and (c) local evaluators from 2014 to 2016 (n = 128 out of 200). The local evaluators were solicited during the summer, which may have allowed them more time to complete the survey. Both the first-year teachers and the mentor teachers were solicited via email in the spring of 2017, in the middle of the school year, when practicing teachers were often busy with multiple responsibilities. Our edTPA office is considering options to gather more responses in future research, such as surveying participants toward the end of the school year.
We should note that 19.6% of the mentor teachers also served as edTPA local evaluators, meaning there was some overlap between the local evaluators and the mentor teachers. Also, each survey included slightly different questions around edTPA. The local evaluator survey focused both on the training and preparation for local evaluation and on perceived connections between edTPA and current teaching practices. The first-year teacher survey focused on how edTPA connected to practices around planning, instruction, and assessment as well as on possible connections to current teaching practices. The mentor teacher survey was similar to the first-year teacher survey but included questions on preparation for understanding and working with edTPA. Table 7.1 includes a summary of the number and types of questions for each survey as well as sample questions.
TABLE 7.1  Summary of the Surveys for the Three Groups of Participants

Demographic questions: first-year teachers, 3 items (e.g., "What are or were your certification content areas?"); mentor teachers, 5 items (e.g., "In what county are you teaching?"); local evaluators, 8 items (e.g., "Are you currently Nationally Board Certified?")

Likert-scale items: first-year teachers, 8 items (e.g., "The edTPA made me more aware of my teaching practices."); mentor teachers, 7 items (e.g., "I believe that edTPA can improve the status of the teaching profession."); local evaluators, 16 items (e.g., "edTPA is worthwhile for candidates' professional development.")

Open-ended items: first-year teachers, 6 items (e.g., "Did you complete a professional development plan using your edTPA results?"); mentor teachers, 3 items (e.g., "What are the strengths and the weaknesses of the edTPA?"); local evaluators, 6 items (e.g., "How does edTPA change or shape the way you think about teaching?")

The following section summarizes major themes that emerged from the survey data, including both numeric data from the Likert-scale questions and participant quotes from the open-ended items. Note that the initial Likert-scale survey given to local evaluators during the summers of 2015 and 2016 used a 4-point scale (strongly disagree, disagree, agree, strongly agree). The researchers felt that a 5-point scale (strongly disagree, disagree, neutral, agree, strongly agree) better captured participant perceptions and thus used the 5-point scale in the spring of 2017 with first-year teachers and mentor teachers. The researchers plan to use a 5-point scale when they survey local evaluators in 2017 but felt that the data from the 2015 and 2016 cohorts of local evaluators spoke to important parts of the two research questions, and thus those data are included here. Any differences in scale are noted in the tables.

Theme 1: edTPA Promotes Critical Self-Reflection

The most significant theme identified in the data was the facilitation of self-reflection through the edTPA process. First-year teachers used standards in edTPA to evaluate their practices of planning, instruction, and assessment. Such exercises prompted them to develop the skill, or habit, of reflection, which also benefited their day-to-day teaching. The assessment therefore served as a pathway for achieving professional growth. One first-year teacher commented:

The edTPA required me to carefully plan my instruction, review appropriate


evidence-based strategies to support my learners, and critically reflect on my
instruction. I think that reflection piece was most valuable because I con-
stantly feel the need to ask myself many questions. Did my students meet my
objective? Could I have modified my instruction better to help them teach
the objective? Even though I don’t have as much time as I did on edTPA
to sit down and reflect on each lesson, I try to still bring in that mindset of
reflection when I am planning for my week or even when I am in the middle
of my instruction and need to modify to better reach my students. Keeping
the mindset that there is always something I can do to alter my instruction
or classroom environment to impact student progress helps me from getting
frustrated with my most challenging students.

While local evaluators and mentor teachers were not specifically asked about
the reflection process, many of them commented on reflection when asked
about the strengths of the edTPA process. One local evaluator noted that edTPA

asks teacher candidates to reflect upon their own teaching and really think
about each part of lesson planning and especially think about the “why” of
what the candidates are doing. It does a great job of asking candidates to ex-
plain their choices and provide support for those choices.

Another evaluator also mentioned,

all teaching candidates should have an opportunity to experience edTPA be-


cause it will set them up for success in their career (if they continue reflecting
on their practice like this assessment requires).

Theme 2: edTPA May or May Not Connect to Current Teaching Practices

In discussing edTPA with teacher candidates, our university faculty often


noted the similarities between edTPA and observation tools such as the
Charlotte Danielson Framework for Teaching (FFT). The same sentiment
was echoed in qualitative comments from multiple local evaluators about
how edTPA related to their current work in the classroom, including how
they were observed. One evaluator noted how completing edTPA could
offer candidates a head start on understanding some of the initiatives they
face as beginning teachers:

[edTPA helps] beginning teachers to focus on what is effective teaching. It


helps them to critically look at planning, teaching and assessing—but more
it makes them continually ask how do they know their students are learning?
Today’s teachers are under the gun with common core, race to the top, and
FFT. Highly effective teachers are those that can articulate not just what their
students know but how they know that they have learned.

At least one first-year teacher noted how edTPA was used when being
observed by a district consulting teacher, an employee who specifically
coached and offered induction support to first-year teachers. The teacher commented:

I have used the strategies [in edTPA] on a daily basis. They have been help-
ful for me to meet standard while getting observed by my consulting teacher.
Reading my informal and formal observation reports lets me know that I am
using the strategies I used for my edTPA project.

However, the survey data (Table 7.2) show some differences between the perceptions of first-year teachers and mentor teachers. Mentor teachers were more positive than first-year teachers about edTPA's role in supporting professional teaching. Some of the disconnect could stem from the first-year teacher item using the word "observed" while the mentor item used "evaluated." In some of the qualitative responses, first-year teachers noted that writing pages of commentary (something found in edTPA) was not a part of their observations, and that visits by administrators or district personnel (such as consulting teachers) were more similar to observations from their university supervisor or mentor teacher. One first-year teacher represented this sentiment with the following comments:

At no point in my observation year thus far have I been asked to reflect on


a recording of my teaching. Furthermore, when administration comes in to
observe my classes, they do not evaluate based on a strict rubric. Rather, there
is a debriefing session with the administrator and an opportunity to reflect in person. Therefore, the observations completed by the mentor and the supervisor in the graduate program more closely mirror the expectations first-year teachers should have for observation.

TABLE 7.2  Items for First-Year Teachers and Mentor Teachers on a 5-Point Scale

First-Year Teachers
  The edTPA directly relates to how I am observed by my school district. (M = 2.69, SD = 1.13)
  Completing the edTPA helped prepare me for my first year of teaching. (M = 2.98, SD = 1.23)
  The edTPA would be useful for my future teaching practices. (M = 2.94, SD = 1.24)
Mentor Teachers
  I believe that edTPA is connected to how teachers are evaluated within my district. (M = 3.45, SD = 0.96)
  I believe that edTPA can be used as a tool to support first-year teachers. (M = 3.69, SD = 0.84)

A second first-year teacher echoed the same idea, noting:


I’m more focused on the input from current observations since they
reflect a more holistic and relevant view of my teaching practices. I have
grown as a teacher over the course of the first semester and the resource
teacher, consulting teacher, and administrators in the building are more
helpful than a video from the latter part of my internship.

Theme 3: edTPA Aligns With Core Pedagogical Practices

Local evaluators and first-year teachers were both asked how the planning, instruction, and assessment rubrics connected to pedagogical practices (the mentor teacher survey focused more on understanding edTPA and its expectations; future surveys will also ask mentors to address connections to the three components of the cycle of teaching at the heart of edTPA). Table 7.3a shows how strongly local evaluators felt edTPA reflected the aspects of practice teacher candidates need to consider going into their first year. Table 7.3b shows that first-year teachers also saw the connection, although not quite as strongly as the veteran teachers who served as evaluators and mentor teachers.
TABLE 7.3a  Items for Local Evaluators on a 4-Point Scale

Local Evaluators
  Overall, the edTPA rubrics reflect my personal vision of high quality teaching. (M = 3.40, SD = 0.54)
  The Planning for Instruction and Assessment rubrics capture the most important elements of planning for teacher candidates to consider in their practice. (M = 3.35, SD = 0.54)
  The Instructing and Engaging Students in Learning rubrics capture the most important elements of instruction for teacher candidates to consider in their practice. (M = 3.40, SD = 0.57)
  The Assessing Student Learning rubrics capture the most important elements of assessment for teacher candidates to consider in their practice. (M = 3.37, SD = 0.56)
  edTPA will help prepare higher quality teachers. (M = 3.54, SD = 0.59)

TABLE 7.3b  Items for First-Year and Mentor Teachers on a 5-Point Scale

First-Year Teachers
  I believe that edTPA is connected to how teachers are evaluated within my district. (M = 3.45, SD = 0.96)
  The Planning for Instruction and Assessment rubrics of the edTPA connect to my current planning practices. (M = 3.22, SD = 1.02)
  The Instructing and Engaging Students in Learning rubrics of the edTPA connect to my current instruction practices. (M = 3.40, SD = 1.00)
  The Assessing Student Learning rubrics of the edTPA connect to my current assessment practices. (M = 3.30, SD = 1.02)
Mentor Teachers
  I observe the connections between edTPA and high-quality teaching. (M = 3.70, SD = 0.88)

Both local evaluators and first-year teachers shared in their qualitative comments that edTPA connected to their teaching process. One first-year teacher noted the connection to key components of teaching practices when commenting that edTPA provided "a process by which I was able to

design, implement and evaluate a series of lessons. Insights gained from


this process undoubtedly informed and continue to inform my decision-
making process with respect to planning and teaching.” Other first-year
teachers specifically focused on the role of assessment. One teacher noted
that completing edTPA helped when “considering multiple teaching goals
while planning a single assignment/project. Building on student skills over
succeeding assignment/projects. Generally, considering long-term teach-
ing goals when planning specific assignments and connecting assignments,
beyond obvious content connections.” Another first-year teacher noted
how the assessment task of edTPA specifically taught, “it is important to
always be assessing your students, and changing your planning and imple-
mentation to benefit your students and help them to be successful.”

Local evaluators noted that among the strengths of edTPA was how it mimicked what teacher candidates would face as full-time teachers. As one
evaluator commented, “one of the strengths is that it accurately reflects
what teachers must do when they get in the workforce. They must plan les-
sons, evaluate student learning, think about what they will do next and have
a rationale for doing it. It also forces students to evaluate their teaching in
some of the softer areas like classroom management, student engagement,
and relationships with students . . . It also allows students to see themselves
teaching.”

Theme 4: School Environment Influences edTPA Impact

One of the qualitative questions posed to first-year teachers was "How does your school environment impact how you teach your lessons? Does the environment encourage or hinder some of the practices you saw in the edTPA?" This question stemmed from previous research around edTPA and practicing teachers, which noted that teachers' perceptions about the usefulness of the assessment could be shaped by the place where they worked (Seelke, 2017). That research revealed mixed results, with some alumni noting how their school environment supported the pedagogy and practices found in edTPA while others felt their school environment hindered or was not connected at all to edTPA.
Our survey of first-year teachers revealed similar findings. Seventy-nine first-year teachers responded to the qualitative questions, with 22% of them reporting challenges in implementing strategies described in edTPA due to contextual factors such as time limitations and strict curricula. As one teacher noted:

By nature the school day is so fast paced and so many content areas need to be
taught every day, so there is not the time to thoughtfully plan and reflect on
each lesson the way edTPA requires. Additionally, I implement many scripted
programs, as required by my county. There is less room to modify the instruc-
tion in these programs after reflecting on student engagement or progress.

On the other hand, 33% of first-year teachers were positive about school environments that supported the implementation of practices outlined in edTPA. For example, one teacher noted, "my school environment is very invested in having students engaged and challenged. I feel this was similar to part II of edTPA. The environment encourages some of the practices I saw in edTPA." Other teachers commented that expectations such as completing student learning objectives were similar to the process of completing edTPA.

A third group of teachers noted both connections and disconnections


between edTPA and their current school. One teacher commented,

My school environment provides me with a flexible curriculum where I can


alter my lessons accordingly throughout the practice. However, it does not
allow enough planning periods for me to go over and reflect on my practice
in such in-depth level.

This comment echoes other teachers' observations that they do not have the time to carry out the practices they completed within edTPA. A second teacher wrote,

there are a lot of different types of learners in my classes (504s, IEPs, ESOL,
etc.), so I have to be more flexible with my instruction. This encourages
edTPA practices related to diversifying instruction/assessment. However, my
school does not have a set curriculum for the courses I teach, so the structure
that edTPA provides is not available at my school.

Finally, a group of first-year teachers noted that the realities of the teaching profession limited how much edTPA could connect to their current work. One first-year teacher noted that "first-year teachers often have more preps than experienced ones (I have three). It is impossible to plan lessons like the ones we used for edTPA given the amount of work we have." Other first-year teachers said they saw value in some of the pedagogical practices found in edTPA (e.g., providing detailed feedback for all students); however, they felt they did not have the time to complete all of these practices given the demands of the teaching profession.

Theme 5: Even Practicing Teachers Benefit From edTPA

Both mentor teachers and local evaluators viewed edTPA as a tool that connected to their own work as teachers and as a tool that can support candidates' professional development as they enter their first year of teaching. Tables 7.4a and 7.4b demonstrate how strongly mentors and local evaluators believed in the impact of edTPA. Both groups also shared these sentiments in their qualitative comments, showing how edTPA affects not only preservice teachers' practices but also the practices of the teachers who work with them. In fact, multiple subthemes emerged around professional development for first-year teachers as well as for the more experienced mentor teachers and local evaluators.
TABLE 7.4a  Items for Mentor Teachers on a 5-Point Scale

Mentor Teachers
  I believe that edTPA is connected to my current work as a mentor teacher. (M = 3.71, SD = 0.89)
  I believe that edTPA can be used as a tool to support first-year teachers. (M = 3.69, SD = 0.84)

TABLE 7.4b  Items for Local Evaluators on a 4-Point Scale

Local Evaluators
  edTPA is worthwhile for candidates' professional development. (M = 3.61, SD = 0.51)
  edTPA helped me reflect upon my own understanding of teaching. (M = 3.55, SD = 0.61)

Professional Development of One’s Own Practice


Both local evaluators and mentor teachers noted in their comments how working with edTPA benefited their teaching in two ways. First, edTPA led them to reconsider their own teaching practices. As one mentor teacher noted:

As a mentor teacher, observing and providing feedback to teacher candidate


helps me to reflect and improve my own teaching. A deep understanding of
edTPA rubrics enables me to better support teacher candidates. Collabora-
tion and professional dialogue with teacher candidates is helpful to plan and
meet the needs of our students in multiple ways.

Second, many local evaluators and mentor teachers noted the connec-
tions between edTPA and NBC. In some cases, local evaluators or mentors
felt that working with edTPA helped them pursue NBC. Others, who were
already nationally board certified, saw edTPA as a reinforcement of some of
their beliefs about teaching pedagogy. As one local evaluator stated:

edTPA is very similar to the National Board Process so it didn’t really change
the way I thought about teaching, but maybe just confirmed it. I used it as a
PD check to make sure I wasn’t falling into bad habits or teaching ruts that
often plague veteran teachers. In addition, the edTPA provided me a differ-
ent perspective on what scorers are looking for and are tied to the rubric. This
will be very helpful when it is time to recertify.

Still other evaluators and mentors acknowledged that some of their cur-
rent practices may need to be refined and that working with edTPA helped

them think about things they could do in order to become better teachers.
As one participant commented:

As for my teaching practice with my own students, the edTPA allowed me to


reflect on areas that I have veered away from or toward in my current role
that may need tweaking for me to make sure I am keeping active in my parent
connections, as well as the different levels of communication that are needed
between specialist, teachers, parents, program, and administration.

Professional Development as Mentor Teachers


Mentor teachers noted that working with edTPA, particularly in the role of local evaluator, helped them consider better ways to support future interns, not only around edTPA but also around teaching pedagogy. As one mentor noted, serving as a local evaluator shaped how he provided feedback to interns:

The edTPA has not changed my thoughts about teaching. However, I did use
the vocabulary of the edTPA and rubrics when giving feedback to my interns.
Before I provided feedback to my interns during their first semester, I asked
them to provide a reflection of their lesson. During that time I organized the
good teaching practices (general or content specific) into the rubric that cor-
responded. Asking the interns to reflect before getting my specific feedback
was very interesting. The things I observed that were most important to teach-
ing, they identified and thought about adjustments to the lesson themselves.

A second local evaluator shared how working with edTPA helped change their perspective on what types of supports could be offered to preservice teachers:

It has reminded me of the importance of reflection. As a teacher educator


now, this has really challenged my understanding of preservice teachers. It
has encouraged me as a teacher educator to do a better job, especially in
helping the preservice teachers teach equitably and work to meet the needs
of their diverse student populations, rather than expecting that the students
change their own culture to be successful in class.

Professional Development Around Induction Support


Veteran teachers (mentors or local evaluators) noted that working with
edTPA helped prepare them to better support first-year teachers since
they had a clearer sense of candidate expectations. As one local evaluator
commented:

As a teacher leader, the edTPA process has led me to see how and what teacher
candidates are being prepared with when they are leaving institutes of higher

learning. Knowing what first year teachers know how to do, helps me with the
mentoring that I am providing them. It has also made me look back on my
own practice when I was in the classroom and ask myself the hard questions.

NEXT STEPS

While our university has been able to sustain edTPA local evaluation for four years, we face a critical juncture in 2017 around local evaluation versus official Pearson scoring. While our edTPA office has consistently improved its training procedures to maintain consistency among scorer ratings, the university does not have the resources to show that its local evaluation process meets the reliability and validity standards required by accreditation bodies such as the Council for the Accreditation of Educator Preparation. Thus, starting in the 2017–2018 academic year, all our edTPA portfolios will be sent out for official scoring. That being said, our university is committed to maintaining some type of role for K–12 partners in its edTPA work, particularly given the feedback that members of the K–12 community have shared on how working with edTPA has positively impacted their practices. One possible strategy is working with partner districts to identify teacher leaders who are pursuing NBC and using edTPA as a tool to support their efforts. Another possibility is using local evaluators to evaluate practice edTPAs completed earlier in the teaching internship as a formative assessment that prepares candidates for their actual edTPA submissions. A third option is continuing the work around local evaluation, but doing so by content area or program rather than across the entire college.
The surveys used in this study also reveal that our university still has work to do on its messaging around edTPA, making sure all parties involved (candidates, mentors, other K–12 partners) view edTPA not just as a stressful assessment completed at the end of the program but as a tool for growth as candidates enter the profession. As one local evaluator who worked with candidates noted,

While I do think it’s a worthwhile assessment of a teacher candidate, it is very


stressful for them as well. I was told in [evaluator] training that it was just one
part of many assessments of the teacher candidates. . . . But [some] student
teachers were not told the same thing. They were so stressed out about it and
I feel that maybe they needed more reassurance that they would do well, that
it isn’t the “be all, end all” to their teaching career.

Of course, the “be all, end all” aspects of the assessment can vary depend-
ing on the state policy around edTPA.
As edTPA becomes more widely used across the nation, more researchers
are examining the effectiveness and the usefulness of the tool in preparing

teacher candidates as they enter the profession. For example, Goldhaber,


Cowan, and Theobald (2016) recently published a paper looking at the
predictive validity of edTPA, including the potential connection between
a candidate’s successful completion of the assessment and student achieve-
ment in reading or mathematics. Other such studies are sure to follow. Ad-
ditionally, other research has focused on the impact edTPA has on teacher
preparation programs or candidate experiences during their student teach-
ing year. While such research is needed and encouraged, the results of this
study demonstrate a need to explore how edTPA can be used not just as a summative tool to measure a candidate's readiness for the field but also as a formative tool to support the development of first-year teachers in induction as well as of the veteran teachers who mentor or support those just entering the profession. While edTPA is currently described as a tool that is supposed to be "educative for candidates, faculty, and programs" (SCALE, 2016, p. 6), it also needs to be seen as a bridge between teacher preparation and those within the profession, a means of raising the bar not just for student teachers but for all teachers.

REFERENCES

American Association of Colleges for Teacher Education. (2017). Participation map. Retrieved from http://edtpa.aacte.org/state-policy
Council for the Accreditation of Educator Preparation. (2017). The CAEP standards. Retrieved from http://www.caepnet.org/standards/introduction
Dover, A. G., Schultz, B. D., Smith, K., & Duggan, T. J. (2015, March 30). Who’s
preparing our candidates: edTPA, localized knowledge and the outsourcing
of teacher evaluation. Teachers College Record. Retrieved from https://www.
tcrecord.org/Content.asp?ContentID=17914.
Goldhaber, D., Cowan, J., & Theobald, R. (2016). Evaluating prospective teachers:
Testing the predictive validity of the edTPA. Washington, DC: National Cen-
ter for Analysis of Longitudinal Data in Education Research. Retrieved from
http://www.caldercenter.org/sites/default/files/WP%20157.pdf
Hattie, J., & Clinton, J. (2010). The assessment of teachers. Teaching Education,
12(3), 279–300.
Hobbs, E. L. (2015). The edTPA experience: Student teachers’ perceptions and reflections
on the edTPA assessment (Doctoral dissertation). Available from ProQuest Dis-
sertations & Theses Global (Order No. 3746484). Retrieved from http://
search.proquest.com/docview/1763838245?accountid=14696
Huston, T. P. (2015). Being assessed: Student teachers’ experiences of IUTPA (Doctoral
dissertation). Available from ProQuest Dissertations & Theses Full Text (Or-
der No. 3715594). Retrieved from http://search.proquest.com/docview/171
0722235?accountid=14696.

Jordan, A. W., & Hawley, T. (2016). By the elite, for the vulnerable: The edTPA, aca-
demic oppression, and the battle to define good teaching. Teachers College Re-
cord. Retrieved from http://www.tcrecord.org/content.asp?contentid=19461
Madeloni, B., & Gorlewski, J. (2013). Wrong answer to the wrong question. Rethink-
ing Schools, 27(4), 16–21.
Peck, C. A., & McDonald, M. (2013). Creating “cultures of evidence” in teacher
education: Context, policy, and practice in three high-data-use programs. The
New Educator, 9(1), 12–28.
Seelke, J. L. (2017, April). Examining the impact of edTPA on practicing teachers. Paper
presented at the annual meeting of the American Educational Research As-
sociation, San Antonio, TX.
Singer, A. (2014, June 18). SCALE and edTPA fire back!: Methinks they doth protest
too much. Huffington Post. Retrieved from http://www.huffingtonpost.com/
alan-singer/scale-andedtpa-fire-back_b_5506351.html.
Stanford Center for Assessment, Learning and Equity. (2016). Educative assessment and meaningful support: 2015 edTPA administrative report. Retrieved from https://secure.aacte.org/apps/rl/resource.php?resid=647&ref=edtpa
Steeley, N. J. (2003). A qualitative study of three kindergarten teachers’ practice and profes-
sionalism: Does the national board for professional teaching standards’ certification
make a difference? (Doctoral dissertation). Available from ProQuest Disserta-
tions & Theses Full Text. (Order No. 3080963).
Whittaker, A., & Nelson, C. (2013). Assessment with an “End in View.” The New
Educator, 9(1), 77–93.
CHAPTER 8

FORCING ME TO REFLECT
Preservice and Novice Teachers’
Reflective Thinking in Varied
School Contexts

Dianna Gahlsdorf Terrell


Saint Anselm College

Kathryn McCurdy
University of New Hampshire, Manchester

Megan L. Birch
Plymouth State University

Thomas H. Schram
University of New Hampshire

Page Tompkins
Upper Valley Educators Institute


We have yet to meet a teacher who has reflected on her or his first year with
a sense of preparedness, pride, and fulfillment. To the contrary, toward the
end of the first year of teaching, many novice teachers express the sensation
of desperately clawing their way to the end of the school year. The frantic
pace of the school year leaves little time for processing, and becomes ex-
traordinarily emotionally draining. While most teachers take a short time
to power down during their summer vacation, many quickly return to the
work of teaching by reflecting on how the year went. By August, many teach-
ers are ready to begin the school year anew with fresh insight and perspec-
tive. This rejuvenation is rooted in and relies on reflection.
As researchers with over 100 years of combined teaching practice, we remember our first years of teaching, including what we felt prepared for and what we didn't. Variables in the first teaching year initially seemed to lack pattern, but after some reflection they became more predictable. A growing awareness of what to expect in terms of scheduling demands, committee work, the curricular arc, and the general flow of the academic year gradually tempered the day-to-day anxiety that defined the earlier parts of the year. The summer following the first year of teaching opened a larger space for reflection.
We asked three novice teachers at the end of their first year to look back and reflect on what they felt prepared for and what they didn't. Even though the three novice teachers varied in both their preparation and their first-year contexts, their responses were surprisingly similar.
When asked to look back on what they believe to be the biggest lesson
from their first year of teaching, each participant expressed variations on the same theme: they learned to be flexible and willing to learn from mistakes. They each acknowledged in one way or another that good teachers
seek evidence of student learning, and good teaching demands responsive-
ness to students’ needs. We theorize that the consistency across their reflec-
tions sheds light on the successes of educator preparation programs, and
specifically on assessments that call for reflection.
It seems banal, yet for teacher educators involved in the work of prepar-
ing new teachers, these responses might generate a sense of relief or even
delight in their clear consistency. Each of these teachers conveys compe-
tence with a skill that is prompted and reinforced time and again in many
teacher education programs. What each of these responses conveys is a
level of mastery, and indeed even a comfort with pedagogical reflection.

REFLECTION

We suspect that the similar reflections of our three cases are not happen-
stance, but the result of explicit work and deliberate prompting. There is,

after all, a significant body of conceptual and empirical research to suggest


that prompts on performance assessments elicit particular types of thinking, and that candidates' responses on performance assessments are linked to
the quality of teachers’ observed classroom practice (Darling-Hammond,
2010; Darling-Hammond, Newton, & Wei, 2012).
Still, there is limited investigation of the relationship between discrete
tasks required on performance assessments, and teacher learning and prac-
tice (Ball & Cohen, 1999; Darling-Hammond, 2010; Feiman-Nemser, 2012).
Following the example of a high-quality assessment of teacher performance
designed by teachers and teacher educators called the Performance Assess-
ment of California Teachers (PACT), teacher preparation programs in our
state use the New Hampshire Teacher Candidate Assessment of Perfor-
mance (NH-TCAP). NH-TCAP is comprised of six distinct yet interrelated
strands: contextualizing learners and learning, planning and preparing,
instructing students and supporting learning, assessing student learning,
reflecting and growing professionally, and using academic language. To-
gether, these strands require teacher candidates to demonstrate strate-
gies they will use to make learning accessible to their students; explain the
thinking underlying their teaching decisions; analyze strategies they use to
teach; and examine the effects of their instructional design and teaching
practices on students’ learning. In our study, we were interested particularly
in exploring the nature of the reflections prompted by the reflecting and
growing professionally strand (see Table 8.3 later in this chapter). We argue
that while “reflective thinking” is a ubiquitous requirement in performance
assessments, the characteristics and operationalization of reflective think-
ing, its transferability to classroom practice, and the implied assumptions
about what both say about the “quality” of novice teachers are unevenly
researched or agreed upon.
For a profession that often declares there to be no "silver bullets" and that even expresses reluctance about the idea of best practices, teaching seems fixated on reflection. A Boolean search of peer-reviewed journals in Academic Search Premier over the last ten years yielded close to 2,000 articles on subjects specific to preparing teachers with titles that included the term "reflective teaching." Indeed, a long tradition of conceptual research guides our understanding of reflective thinking (Dewey, 1933; Jay & Johnson, 2002; Schön, 1983; Zeichner & Liston, 1987), but the predominant definition comes from Dewey (1933), who suggested reflective thinking brings clarity to confusing situations. Subsequent interpretations of reflective thought came from Schön, who is credited with linking reflective thought to professional learning and who outlined a distinction between reflecting on action and reflecting in action (Clarà, 2015).
In a classic study, Zeichner and Liston (1987) qualified Dewey’s work by
calling for critical reflection. Subsequent to Zeichner and Liston’s work,
numerous typologies of reflection have emerged, with aspects of critical reflection holding court as the highest form of reflection. Ensuing studies
(Clarà, 2015; Rodgers, 2002), however, have raised questions about pre-
service and novice teachers’ capability to reflect in practice or the need
to reflect “critically.” To a great extent this line of argumentation provides
fodder for our study.
For example, we found that Larrivee’s (2008) typology of reflective
thinking provides a close approximation of how reflection is characterized
by the field. Unlike many scholars’ prior conceptualizations of reflection,
which only include three levels, Larrivee’s (2008) typology describes four
levels, which she argues capture the “expansive range of meanings” in the
profession (p. 342). Larrivee’s (2008) levels include Pre-Reflection, in which
teachers react to teaching situations “automatically, without conscious con-
sideration of alternative responses” (p. 342); Surface Reflection, in which
teachers’ reflections “focus on strategies and methods used to reach pre-
determined goals” (p. 342); Pedagogical Reflection, in which teachers “apply
the field’s knowledge base and current beliefs about what represents quality
practices” (p. 343) (see Table 8.1); and Critical Reflection, in which teachers
"focus their attention inwardly at their own practice and outwardly at the
social conditions in which these practices are situated” (p. 343).
Meanwhile, there is a substantive contrast between the types of reflection
we ask of educators in the profession and the types of reflection we
ask of preservice teachers. In New Hampshire, where we have drawn
extensively from the work of California educators in the development of
a locally administered performance assessment called the NH-TCAP,1 our
performance assessment does not explicitly require critical reflection.
This appreciable gap in definitions of what signifies quality reflection, between advocates for the profession and those who prepare teachers for the profession, is not to be taken lightly. We recognize that the quality
of prompts and evaluation criteria outlined by the rubrics to some extent
drive our candidates’ responses. For example, Soslau, Kotch-Jester, and Jor-
lin (2015) argue that if performance assessments do not assess a particular
trait, teacher candidates mistakenly infer that they do not have to do it. This,
the authors argue, is both mis-educative and dangerous. For these reasons
alone, the conceptualization and operationalization of reflection become
an issue worth studying.

TABLE 8.1  Pedagogical Reflection

Definition: Level of reflection based on application of teaching knowledge, theory, and research

Characterizations:
• Understand the theoretical basis for classroom practice
• Foster consistency between espoused theory and theory in use

Practices:
• Reflect on educational goals
• Understand theories underlying applications
• Connect theoretical principles and practice

Source: Larrivee, 2008
Cognitive psychologists have taken up the issue of reflection with many
arguing that reflection, and corresponding cognitive exercises like men-
tal rehearsal and elaboration, require thinkers to retrieve information and
"add layers to learning" (Brown, Roediger, & McDaniel, 2014, pp. 209–210).
Psychologists continue to parse the manner in which humans encounter
new material and select, organize, and integrate new data into working
models (Mayer, 2012). In many ways, then, the term reflection in the educa-
tion discipline is a proxy for describing thinking.
Psychologists propose that metacognitive strategies, including self-assessment, monitoring, and self-evaluation, increase the likelihood of trans-
ferability of learning from one context to the next (Dimmitt & McCor-
mick, 2012). These metacognitive strategies, which are to some extent
signature practices of the reflective practitioner, are useful in our discipline
when our candidates are asked to transfer learning from teaching intern-
ships to the professional classroom. Furthermore, a significant body of
scholarship suggests that the act of reflecting through writing enhances
critical thinking and brings learners’ reflections from a state of brouillon,
or “scramble,” to a state of order (Bean, 2011, p. 18). This suggests that
not only does quality reflection signal good teaching, it signals good think-
ing as well.
Regardless, this theoretical and empirical lineage provides clear testi-
mony as to why and how reflective thinking has emerged as a hallmark
practice of learning to teach, as well as the many troubles associated with
establishing criteria and evaluative measures for reflective thinking. The
PACT, and derivative performance assessments like edTPA® and NH-TCAP,
prompt students to engage in numerous reflective thinking tasks with the
implication that reflective thinking signals learning to teach, and that the
skills involved are clearly defined, worth measuring, and measurable.
In light of the proliferation of performance assessments that call for reflec-
tive thinking, and the overwhelming response from teacher candidates both
bemoaning and celebrating the way performance assessments "forced them
to reflect" (Okhremtchouk et al., 2009, p. 49; IHE Network, 2015), we stud-
ied our candidates’ responses to examine whether and how the NH-TCAP
contributed to teacher learning from teacher preparation through the first
year of teaching, focusing particularly on reflective thinking. We address the
following overarching research question and sub-questions: What is the rela-
tionship between preservice teachers’ reflective thinking and performance
on the NH-TCAP, and reflective thinking during the first year of teaching?
• How do teacher candidates perform on reflective thinking tasks in the NH-TCAP relative to expectations?
• What, if any, changes in reflective thinking occur from performance
on the NH-TCAP to practice in the first year?
• How is reflective thinking operationalized by the NH-TCAP?

STUDIES

This mixed-methods study sought to clarify what we mean when we call for reflection, how well the task of reflection is executed by our teaching
candidates, and what that says about the quality of teaching. We used two
approaches to find out.

Study One: Reflections on Teaching Practice

This research is based on data from three case studies derived from a
larger longitudinal study that followed teacher candidates from entry
into a teacher education program through their first year of teaching.
The three participants selected for analysis in this study, Marilyn, Zoey,
and Maeve, represent a purposive sample (Teddlie & Yu, 2007) and were
chosen for several reasons. Participants in this study completed the NH-
TCAP during the 2014–2015 academic year as a component part of the
graduation requirements of their teacher preparation program. All were
trained and endorsed by institutions of higher education in New Hamp-
shire. None were from the same institution. All three carried out NH-
TCAP in elementary settings—two in math, and one in literacy. Marilyn
was prepared by an elite private college, Maeve by a Catholic college, and
Zoey by a public university. Upon completion of the preparation pro-
grams, all participants secured a full-time teaching position during the
2015–2016 academic year. In all three cases, first-year teaching contexts
varied, and were significant departures from the teaching internship site.
In making these choices, the researchers sought to gain insight into par-
ticipants’ experiences with and perceptions of reflection in the NH-TCAP.
In many ways, the cases exemplified consistency in NH-TCAP administra-
tion—using essentially the same instrument with very minor differences be-
tween mathematics and literacy. In other ways, participants were selected to
ensure variation across school and community contexts.
Our study draws on multiple data sources, including (a) the NH-TCAP
completed at the end of teacher preparation; (b) an initial interview prior
to the first year of teaching; and (c) three interviews paired with classroom
observations in the fall, spring, and end of the first year of teaching. Interview
protocols were adapted from the Boston College Teachers for a New Era pro-
tocols (Cochran-Smith & the Boston College Evidence Team, 2009).
We then conducted analyses of preservice performance on the NH-
TCAP, and responses to reflection prompts throughout interviews the fol-
lowing year. Informed by Hill and colleagues’ (2005) consensual approach
to qualitative analysis, the research team completed within-case analyses be-
ginning with a scoring calibration exercise of the Reflection Strand. By fo-
cusing on data from the Reflection Strand of the NH-TCAP and selecting segments
of interview transcripts designed to elicit reflective thinking, the team
“winnow[ed] the data” for analysis (Creswell, 2013; Guest, MacQueen, &
Namey, 2011). Interview excerpts were coded, and we used codes to de-
termine patterns of reflection relative to Larrivee’s (2008) framework (see
Table 8.2).
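To make this analytic step concrete, the following sketch shows, in Python, one minimal way that hand-coded excerpts can be tallied against Larrivee's (2008) levels. It is an illustration only: the excerpt data and the tally_levels helper are hypothetical, the code labels simply mirror Table 8.2, and our actual analysis rested on consensual human coding rather than any script.

# A minimal sketch of tallying hand-coded interview excerpts against
# the Larrivee (2008) framework. Code labels mirror Table 8.2; the
# excerpts below are hypothetical, not data from our participants.
LARRIVEE_LEVELS = {
    "Pre-Reflection": ["No-Own", "Immediate", "Assertion", "Simple", "Survive"],
    "Surface Reflection": ["Inattention", "Blob", "Anecdotal", "Patch", "Technical"],
    "Pedagogical Reflection": ["Impact", "Attention", "Adjusting", "Improvement"],
    "Critical Reflection": ["Ethical", "Critical", "Context", "Metacog", "SJ"],
}

# Invert the codebook so each code maps back to its level.
CODE_TO_LEVEL = {code: level
                 for level, codes in LARRIVEE_LEVELS.items()
                 for code in codes}

def tally_levels(coded_excerpts):
    """Count how often each Larrivee level appears among a case's
    hand-coded (excerpt, code) pairs, to surface patterns of reflection."""
    counts = {level: 0 for level in LARRIVEE_LEVELS}
    for _excerpt, code in coded_excerpts:
        counts[CODE_TO_LEVEL[code]] += 1
    return counts

# Hypothetical, already-coded excerpts from a single case:
excerpts = [
    ("...prepare for the next day to help the kids.", "Adjusting"),
    ("...look at all the students' work as a whole...", "Impact"),
]
print(tally_levels(excerpts))
# {'Pre-Reflection': 0, 'Surface Reflection': 0,
#  'Pedagogical Reflection': 2, 'Critical Reflection': 0}

A tally of this kind is only a summary device; the analytic weight in our study rested on the consensual, within-case reading of each excerpt.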
Since all three teachers graduated from different preparation programs,
we theorized that if the performance assessment’s reflection requirement
had any impact, we might see some similarities in the manner and style
of reflections. Each within-case analysis describes how our candidates re-
flected based on evidence derived from the NH-TCAP work sample and
their responses to interview prompts the first year. The descriptions below
are intentionally limited to key attributes, which framed and informed can-
didates during critical points of reflection.

Marilyn: A Case of Learning From the Best

From our first meeting, Marilyn told us she had always wanted to be a
teacher. Yet, as graduation closed in, Marilyn was still searching for a teach-
ing position. With a unique past of volunteer teaching overseas, and work-
ing on her Teaching English to Speakers of Other Languages certification
at the time, Marilyn toyed with the idea of teaching abroad. Ultimately,
well into the summer months after graduation, she accepted a position
stateside. She had graduated from a small, highly selective liberal arts col-
lege, and now would teach second grade in a well-resourced school in a
suburb of a major northeastern city. Her new school stood in contrast to
past classroom settings, and especially her teaching internship in a rural,
under-resourced school where she completed the Elementary Mathematics
NH-TCAP with second graders.
Remarkably, Marilyn's account of her first year of teaching was not fraught
with overwhelm. She speculated, "There were times when I was frus-
trated, but not overwhelmed.” Marilyn saw herself as a learner of teaching,
mentioning in early interviews her appreciation for having another knowl-
edgeable person in the classroom from whom to learn. Yet, Marilyn ulti-
mately made the transition from student teaching to co-teacher, assuming
teaching responsibilities for social studies and math. While demonstrating
her commitment to lead teaching, the move suggested an attempt to position herself both as a teacher and as a learner.

TABLE 8.2  A Priori List of Codes

Pre-Reflection
 No-Own: Classroom is beyond control of teacher. Students own their learning only. No accommodations.
 Immediate: Only immediate demands of classroom are addressed by teacher
 Assertion: Assertions without evidence
 Simple: Oversimplifies problems
 Survive: Survival mode

Surface Reflection
 Inattention: Misses clear patterns
 Blob: Limited accommodations for students, little differentiation or ability to recognize individual students
 Anecdotal: Anecdotal evidence only
 Patch: Short-term solutions to problems are enacted
 Technical: Addresses technical questions only. Recounting instances of teaching, scripted curriculum.

Pedagogical Reflection
 Impact: Analyzes teaching practice impact on student learning
 Attention: Searching for patterns, building personal theories ("t") or constructive critique of own teaching, use of research/theory ("T")
 Adjusting: Adjusting methods based on perceived student learning
 Improvement: Genuine curiosity/desire to improve teaching strategies

Critical Reflection
 Ethical: Ethical ramifications of work addressed
 Critical: Questions her own assumptions and beliefs
 Context: Recognizes context: Pol, econ, hist, cult, soc, etc.
 Metacog: Metacognitive awareness
 SJ: Equity, social justice, acts to bring alignment between moral commitments and teaching world
Marilyn attributed much of her growth to completion of the NH-TCAP
as well as experiences with her mentor teacher and her co-teacher during
her first year of teaching. For example, during student teaching Marilyn
was asked to do a unit on erosion. She recalled “looking at the assessment
and being like ‘okay cool, but now what? What do I do with this?’” Mean-
while, Marilyn reflected on her growth by the end of her first year. She felt
better able “to look at a student’s work and figur[e] out where they are and
look at all the students’ work as a whole and think, ‘Alright, my class really
needs to work on this, this, and this.’”
Analyzed through the lens of Larrivee (2008), Marilyn's reflections were
consistent with Pedagogical Reflection. Yet, on multiple occasions Marilyn
engaged in what we identified as recalling reflection, meaning that she would
recollect reflections that had occurred previously. In other words, during
interviews Marilyn narrated stories of her reflection. For example, she said, "we have
to go through every day [to see] where the kids are at, what they've . . . (here
she trailed off and changed direction), the product that they produce and
prepare for the next day to help the kids." In this statement, Marilyn summarized a series of earlier conversations she had with her co-teacher.
These exchanges highlighted Marilyn’s capability to think about her
planning, instruction, and assessment through a reflective lens, and also
demonstrated Marilyn's second-in-command role in the classroom with
regard to decision-making. On multiple occasions, Marilyn referred in plu-
rality to the shared work between her and her co-teacher. This tendency to
view herself as part of “we” also meant that she would often stop short of an
explicit description of how specific interactions impacted her teaching. Her
interview comments suggested a shared rationale for making instructional
adjustments. Marilyn did not clarify her role, despite heavy prompting dur-
ing our interviews.
Marilyn repeatedly described her frequent meetings with her co-teacher
to dissect lessons that had occurred and plan for future lessons. As part of
these descriptions, Marilyn used language that was consistent with pedagogi-
cal reflection described by Larrivee (2008), including notions of adjustment
and prior knowledge of students. This suggested Marilyn was actively en-
gaged in reflection with her mentor teacher, whom she considered a knowl-
edgeable practitioner. During interviews, she engaged in recollections of
those conversations with her mentor; however, she did not engage in any
in-the-moment reflection.
For Marilyn, her work on the NH-TCAP was fairly predictive of her ca-
pacity as a pedagogical reflective practitioner in her first year of teaching.
Although the interviews were not spaces for Marilyn to reflect, she did
share that “this year has confirmed for me that I am where I’m supposed to
be . . . because I see myself teaching in 10 years.”

Zoey: Reflecting as Route Seeking

Zoey's undergraduate program in Family Studies, encompassing semester-long practica at the preschool and kindergarten level, affirmed her con-
viction that, “I am in the right place when I am in the classroom.” Unlike
the other two cases featured in this study, Zoey completed a Master’s-level
program at a public university. Her full-year internship in a first-grade classroom led
to certification in both elementary and special education. Zoey began the
yearlong internship by foregrounding her commitment to “act as my own
teacher researcher.”
Zoey interned in a rural K–4 public school located in a small community.
The school, with little racial diversity and low free-and-reduced priced lunch
enrollment, was well-funded and enjoyed a strong reputation compared to
other area schools. As a dual certification intern, Zoey benefitted from ad-
ditional supervision of a special education case manager and gained direct
experience in the design and implementation of IEPs, targeted academic
and behavioral interventions, and assessment. Following graduation, Zoey
obtained a teaching position in a combined first/second grade classroom.
In some ways, in her first year of teaching Zoey’s relationship with her men-
tor mirrored the experience she had enjoyed with her elementary and spe-
cial education cooperating teachers. These relationships, she explained,
deepened her opportunities for ongoing reflection.
Having said that, Zoey found herself in a drastically different demo-
graphic environment than her internship. Her new job was in a small city
characterized by significant poverty and a diverse population. Zoey charac-
terized these new realities as “a great experience to learn, to push myself,
and to challenge myself with the things that I wasn’t expecting.” Offsetting
the challenges, however, was the opportunity to work under the guidance
of both an induction mentor and peer coach who planned with her and
observed her literacy instruction and assessment.
Mission-driven and detail-oriented, Zoey, with the help of the literacy
coach, was charged with improving all students’ literacy achievement. Zoey
took this goal to heart, as she got comfortable with the unfamiliar. She re-
counted, “I never even knew one-on-one conferencing was something you
were supposed to do with students . . . except through my experience do-
ing it.” This sentiment of learning through experience and with the guid-
ance of a coach was echoed multiple times. Characterized by her ability to
implement detailed strategies, Zoey encountered unexpected roadblocks
that required her to set aside ego to reassess her approach by engaging in
what Larrivee (2008) might deem pedagogical reflection. “I have to make sure
I’m not taking things personally. I have to make sure that when I make a
mistake it’s okay to make those mistakes . . . because when I make those mis-
takes I can actually learn from them and figure out what I can do differently
next time to reach my students.” Zoey’s tenacity was fueled by her unwaver-
ing bottom line: finding a pathway to reaching students.
Zoey, like our other two participants, sharpened her capacity for peda-
gogical reflection over the course of her first year of teaching. For example,
Zoey demonstrated the use of prior knowledge in the following excerpt:
[My student] missed a lot of school in Kindergarten. So, it’s kind of like she
didn’t go to Kindergarten is what they tell me. So, I kind of understand why
she’s at this level . . . and I’ve seen growth, and she’s now able to recognize that
she needs vowels in words and she’s recognizing more letter sounds.

Zoey also shared how experiences from her internship shaped her first year
of teaching:

I feel like my guided reading groups are really strong, which is something that
I am prepared for coming in this year . . . but like one-on-one conferencing
with students during reading I had to learn all of that stuff this year.

Here, not only did Zoey convey adjusting (see Table 8.2) her instruction,
she did so through an awareness of herself as a planner, instructor, and as-
sessor of her students. She attributed much of her growth to conversations
with her literacy coach who encouraged Zoey to tinker with instruction, to
meet the needs of her students. She clarified, “I don’t feel like I was as pre-
pared to differentiate until . . . I actually needed to do it here and it’s been
awesome with the multi-age because it’s like true, true differentiating. . . . ”
Her pedagogical adjustments were bolstered by her own reflections and
facilitated by coaches. In turn, Zoey enhanced her differentiation skills that
were familiar, but not fully developed until she led her own classroom. In
these ways, Zoey built a route to quality teaching in her new context.

Maeve: Spontaneous Critical Reflection

Like many young teachers we meet in our work, Maeve recounted the
pleasure she felt playing teacher as a child: practicing her board writing
on a whiteboard and fabricating worksheets with incorrect answers so she
could break in her red pen and mark a few wrong. Due in large part to
those early recollections, Maeve reprimanded herself for entering college
as a pre-med major. With a characteristic combination of self-deprecation
and authenticity, Maeve admonished herself for taking a full year to “wake
up.” It was in her second year at a Catholic liberal arts college that Maeve
enrolled in the elementary education program. By her senior year, Maeve
had made such an impression on the educators at her teaching internship
in a fifth-grade classroom at a relatively homogeneous, academically suc-
cessful, well-financed school that the building principal praised Maeve’s
instinctual teaching ability and management skills.
Still, Maeve’s capstone work on her Elementary Math NH-TCAP did not
convey what seemed to be an innate capacity for teaching. In the reflection
strand of her NH-TCAP, Maeve wrote minimally, and in a rather hyper-fo-
cused manner. Much of her reflection in the NH-TCAP centered on correct
or incorrect answers without any investigation into the pedagogical turns
that yielded those answers. Absent the acknowledgment of how her teach-
ing and students' learning were interrelated, Maeve did not exhibit mastery
of pedagogical reflection on the NH-TCAP.
In contrast, throughout her first year of teaching third grade in a nearby elementary school with considerably more linguistic, racial, and socioeconomic diversity, Maeve's reflective capability seemed to kick into high
gear. Maeve naturally exhibited mastery of pedagogical reflection and was the
only one of our three candidates who consistently engaged in what Larrivee
(2008) deems the highest level of reflection: critical reflection. Her interviews seemed
to offer her a needed opportunity to reflect on teaching in the moment. In
most regards, Maeve’s interviews were characterized by pedagogical reflection.
For example, during a teaching episode observed by researchers, Maeve
described in the follow-up interview her efforts to differentiate instruction
of equivalent fractions. She states,

Yesterday I had a higher group, so I didn’t write anything on the board until I
[saw] what they did. Today I had a lower group, so I wrote it on the board as
I was saying, “Draw a rectangle.” Still, I had kids draw a rectangle this way and
[others drew] a rectangle that way . . . I wanted them to physically be doing it,
but I also used mine to refer to rather than student work . . . So they still drew
it, but it wasn’t necessarily the way that I did it . . . 

While Maeve recognized that her students came to the room with extraor-
dinarily diverse learning needs and strengths, her efforts to provide stu-
dents an opportunity to discover mathematical concepts and differentiate
instruction were characteristic of her broader approach to practice.
When Maeve ventured into critical reflection it was often in combination
with an aspect of what Larrivee (2008) would call pre- or surface reflection.
In those instances, Maeve’s reflections conveyed her ethical commitment
to children and her awareness of the impact of poverty on her instruction
while at the same time conveying the struggles typical of a first-year teacher.
Maeve reflects,

As tough as it has been to get my bearings, every day is a very humbling expe-
rience with [my students] because you really do realize that these kids need
a place to come to where they can be safe and, like, . . . eat . . . So, that’s what
I’ve tried to do.

In moments like these, the reflections of first-year teachers, and particularly those working within high-need communities, are telling but not quite
surprising. In one regard, Maeve expressed the highest form of critical reflec-
tion, but in the same instant her reflections conveyed day-to-day struggles.
When asked how teachers might be better prepared for the demands of
the first year, Maeve’s response affirmed her belief that formal training is
subordinate to instinct:

There’s no amount of prep work that can get you to feel completely comfort-
able and completely ready on your first go of it. There are a lot of times this
year that I was so out of my comfort zone, but I knew that I needed to learn
how to be comfortable with the uncomfortable.

Maeve’s case raises several questions for us about the constricting nature
of the performance assessment and whether our students are simply trying
to meet the lowest bar on the NH-TCAP. In natural settings, this participant
was clearly capable of pedagogical reflection. Maeve was free to make surpris-
ing emotional revelations. In the presence of a rubric, Maeve sank to the
low end. This juxtaposition offers an important caution to teacher edu-
cators seeking to use performance assessments as a proxy or predictor of
performance in the field: The presence of a high-stakes rubric, and the ab-
sence of the natural setting of teaching yielded an NH-TCAP work sample
that did not capture this teacher’s true potential or capacity for reflection.
These within-case analyses revealed attempts by preservice teachers to
provide justifications for their actions, instructional choices, and possible
next steps. In all cases, regardless of preparation program or licensure
path, reflective analyses were focused on resolutions of perceived prob-
lems, or what Larrivee (2008) labels pedagogical reflection. Only one of the
case participants engaged in critical reflection in her responses: reflection in
which teachers evaluate the purpose of teaching while taking into account
historical, social, and political contexts. We note this absence of critical re-
flection given Zeichner and Liston’s (1987) assertion that without “massive
and fundamental changes in the conditions of the teacher’s work . . . we will
continue to pedal wildly and go nowhere” (p. 45). In light of findings from
our second study on the work required by the NH-TCAP, however, this hy-
per-focus on pedagogical reflection, at the cost of critical reflection in most cases,
should hardly be a surprise.

Study Two: Analysis of the Assessment

Each of our three participants had in common a clear capacity for peda-
gogical reflection. Having made that claim, we would be remiss to suggest this
success is an outcome prompted solely by the candidates’ teacher prepara-
tion program. To the contrary, the capacity for pedagogical reflection could be
the result of internal drive and mentoring—as might be claimed by Marilyn.
The capacity for pedagogical reflection could be organic and instinctive—as
appears to be the case with Maeve. Still others may claim pedagogical reflec-
tion evolves from an attention to detail and strategizing—as might be the
case with Zoey. Undoubtedly, there are contextual factors that compound,
restrict, or support pedagogical reflection.
Yet, despite contextual factors and internal drives, we note the re-
markable similarity in our candidates’ utterances that convey pedagogical
reflection. In our second study, we set out to determine whether the prompts
and rubrics from the NH-TCAP—and subsequently the interview prompts
about reflection, which were derived from the NH-TCAP—solicited a par-
ticular type of reflection from our candidates in ways that supported peda-
gogical reflection.
The Reflecting and Growing Professionally Strand of NH-TCAP provides
insights into how the NH-TCAP operationalizes reflective thinking. In our
second study, we analyzed three separate components of this strand (see Ta-
bles 8.3 and 8.4) including (a) the language at the beginning of the strand,
which frames the tasks for candidates; (b) the prompts, which are the spe-
cific questions to which candidates respond; and (c) the rubrics, which are
provided to each candidate and are used to evaluate candidates’ responses.

Methods
To analyze the NH-TCAP framing materials, prompts, rubrics, and in-
terview prompts, we drew on the work of Porter and colleagues (Porter,
2006; Polikoff, Porter, & Smithson, 2011) to conduct a content analysis and
identify what Porter (2006) calls the content language of the assessment
relative to reflective thinking. Similar to our methodological approach
for our within-case analyses, we coded the NH-TCAP framing materials,
prompts, and rubrics with an a priori start list of codes derived from the
Larrivee (2008) framework to maximize coherence (Creswell, 2013; Guest
et al., 2011).

TABLE 8.3  Summary of the TCAP Reflecting and Growing Professionally Strand

Daily Reflections: Candidates are asked to reflect after each lesson, incorporating the prompts:
• Provide specific examples as to what worked, what did not, for whom, and why in relation to the lesson focus, learning objectives, and academic language.
• How does what you learned about student learning inform what you plan to do in the next lesson?

Reflective Prompts: Candidates are asked to respond to two reflective prompts:
1. Based on your experience teaching this learning segment, describe
   a. what you learned about your students and what questions you have about your students' learning.
   b. how your students' learning was affected by your planning, instruction, or assessment decisions.
   c. the theories or research that inform these conclusions.
   d. what you learned about yourself as a teacher.
2. If you could go back and teach this learning segment again to the same group of students, describe
   a. what you would do differently in relation to planning, instruction, and assessment.
   b. how the changes would improve the learning of students with different needs and characteristics.

TABLE 8.4  Rubrics Used by Evaluators to Assess Candidates' Ability to Reflect

REFLECTING AND GROWING PROFESSIONALLY: MONITORING STUDENT PROGRESS
9. How does the candidate monitor student learning and make appropriate adjustments in instruction during the learning segment?

Level 1:
• Daily notes indicate inconsistent monitoring of student performance.
• There is limited evidence of adjusting instruction in response to observed problems, e.g., student confusion, a lack of challenge, time management.

Level 2:
• Daily notes identify what students could or could not do within each lesson.
• Adjustments to instruction are focused on improving directions for learning tasks, time management, or reteaching.

Level 3:
• Daily notes indicate monitoring of student progress toward meeting the standards/objectives for the learning segment.
• Adjustments to instruction are focused on addressing some individual and collective learning needs.

Level 4:
• All components of Level 3 plus: Adjustments to instruction are focused on deepening conceptual understanding, computational/procedural fluency, and mathematical reasoning.

Note: Evidence for this rubric comes primarily from the Daily Notes on Student Learning.

REFLECTING AND GROWING PROFESSIONALLY: REFLECTING ON LEARNING
10. How does the candidate use research, theory, and reflections on teaching and learning to guide practice?

Level 1:
• Reflections on teaching practice are erroneously supported through a significant misapplication of theory or research principles. OR
• Changes in teaching practice are not based on reasonable assumptions about how student learning was affected by planning, instruction, or assessment decisions.

Level 2:
• Reflections on teaching practice are consistent with principles from theory and research.
• Changes in teaching practice are based on reasonable assumptions about how student learning was affected by planning, instruction, or assessment decisions.

Level 3:
• Reflections on teaching practice are based on sound knowledge of research and theory linked to knowledge of students in the class.
• Changes in teaching practice are based on reasonable assumptions about how student learning was affected by planning, instruction, or assessment decisions.

Level 4:
• Reflections on teaching practice integrate sound knowledge of research and theory about effective teaching practice, knowledge of students in the class, and knowledge of content.
• Changes in teaching practice are specific and strategic to improve individual and collective student understanding of standards/objectives.
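Before turning to what this coding revealed, a brief illustration may help. The Python sketch below shows the kind of content-language scan this coding implies, using a handful of marker phrases as crude stand-ins for our full a priori codebook; the flag_levels helper and the marker lists are hypothetical, and the actual coding relied on human judgment rather than keyword matching.

# An illustrative (not actual) content-language scan of a rubric
# descriptor. Marker phrases are simplified stand-ins for the full
# a priori codebook derived from Larrivee (2008).
LEVEL_MARKERS = {
    "Surface Reflection": ["directions for learning tasks", "time management",
                           "reteaching", "monitoring of student progress"],
    "Pedagogical Reflection": ["individual and collective learning needs",
                               "research", "theory", "affected by planning"],
    "Critical Reflection": ["equity", "social justice", "social conditions",
                            "metacognit"],
}

def flag_levels(descriptor):
    """Return the Larrivee levels whose marker phrases appear in a
    rubric descriptor (lowercase substring match)."""
    text = descriptor.lower()
    return [level for level, markers in LEVEL_MARKERS.items()
            if any(marker in text for marker in markers)]

# Level 3 descriptor from rubric nine (see Table 8.4):
level3 = ("Daily notes indicate monitoring of student progress toward "
          "meeting the standards/objectives for the learning segment. "
          "Adjustments to instruction are focused on addressing some "
          "individual and collective learning needs.")
print(flag_levels(level3))
# ['Surface Reflection', 'Pedagogical Reflection']

As the output suggests, a single descriptor can plausibly be read at more than one level, which is why human coding, not a mechanical scan, carried the analysis.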
Our analysis indicates that the NH-TCAP directed teachers towards
some types of reflection and away from others. The NH-TCAP operational-
izes reflective thinking primarily as a set of pedagogical reflective practices.
Specifically, the prompts elicited responses across Larrivee’s (2008) middle
two levels of reflection (surface reflection and pedagogical reflection) with the
rubrics generally progressing along a similar continuum. The NH-TCAP
does not elicit or seek to evaluate critical reflection, including metacognition,
but neither does it preclude these forms of reflection. We conclude by dis-
cussing the implications of this approach to reflection for the NH-TCAP as
an assessment for teacher learning.

Framing Materials
The framing properties of the NH-TCAP reflection strand, which in-
clude the stated purpose and the directions for the strand, situate the tasks
as an exercise in pedagogical reflection. Some elements called for technical
or descriptive reflections on teaching, or surface reflection. For instance,
the purpose statement, which introduces each strand, indicates that the
reflection strand is intended to elicit a candidate’s description of “ . . . what
you learned from teaching the learning segment . . . provid[ing] evidence
of your ability to analyze your teaching and your students’ learning to im-
prove your teaching practice” (NH-TCAP, 2015, p. 21). It is possible for a
novice teacher to interpret this purpose statement as calling for a focus on
describing their teaching. Candidates might generate superficial responses
focused on what worked and what didn’t work. However, examining one’s
own teaching practice for impacts on student learning with the intention of
improving practice parallels the language of pedagogical reflection.
Of the five prompts in the reflection strand of the NH-TCAP, only one
may plausibly limit responses to surface reflection. The first prompt requires
candidates to reflect on daily lessons, but also arguably directs them
towards pedagogical reflection: “What is working? What is not? For whom?
Why?” (NH-TCAP, 2015, p. 21). A candidate may respond to this prompt
with surface reflection if they focus more on the “what is working” and “what
is not” aspects of the prompt. However, the “for whom” and “why” aspects
of the prompt incline candidates towards examining the impact of their
instruction on students, and to constructively critique their own teaching,
characteristics of pedagogical reflection.
Four of the five reflection prompts oriented candidates clearly in the di-
rection of pedagogical reflection, spanning multiple dimensions of Larrivee’s
(2008) description of pedagogical reflection. Several prompts explicitly asked
candidates to use descriptive reflection to inform future plans, to consider
the impact of their teaching on student learning, to use research or theory
as a lens to analyze their own instruction, and to reflect on what aspects
of their teaching needed to improve. In each case, a candidate could not
plausibly respond to the prompt without engaging in pedagogical reflection.
Although none of the prompts explicitly require critical reflection, some
prompts reference “differences in learners,” possibly eliciting critical reflec-
tion. For example, one prompt asks, “what do you think explains the learn-
ing or differences in learning that you observed during the learning seg-
ment?” (NH IHE Network, 2014, p. 21). Given the overall framing of the
strand and the rubric descriptors discussed below, the question appears to
elicit, at minimum, pedagogical reflection. Nevertheless, this prompt could
provide an opening for a candidate to discuss systemic or social factors that
impact student learning.

NH-TCAP Rubrics
Two rubrics are used by scorers to assess candidates’ work on the Reflec-
tion Strand in the NH-TCAP (see Table 8.4). The first rubric, "Monitoring
Student Progress," assesses a candidate's capacity to monitor student prog-
ress and make adjustments during and between lessons in the segment.
The second rubric, “Reflecting on Learning,” assesses a candidate’s use of
research and theory in their analysis of their own teaching, and the abil-
ity to examine evidence of student learning to guide subsequent teaching.
Each of the two rubrics lays out performance level descriptors on a level 1
to 4 scale, where 1 is considered failing and 2 is considered to be minimally
prepared for the classroom (Lombardi, 2011). On both rubrics the descrip-
tors for scores at or above passing were characterized by a mix of what Larrivee
(2008) might call surface and pedagogical reflection. None of the rubrics in the
NH-TCAP reflection strand sought to evaluate critical reflection. We make
the case, therefore, that both of these reflection strand rubrics emphasize
pedagogical reflection over surface or critical reflection.
The lowest levels of the NH-TCAP reflection rubrics describe reflec-
tion as an absence of surface or pedagogical reflection, what Larrivee (2008)
would deem pre-reflection. Reflective practices that signal pre-reflection are
overly reactive, wherein teachers attribute ownership of learning problems
to students rather than their own pedagogical choices. To some degree,
pre-reflection is, in fact, non-reflection. That is to say, candidates who con-
vey non-reflection earn a score of 1 on the NH-TCAP reflective thinking
rubrics. Take, for instance, the language in one of the descriptors: There is
limited evidence of adjusting instruction in response to observed problems,
184    D. G. TERRELL et al.

e.g., student confusion, a lack of challenge, time management” (NH-TCAP,


2015, p. 38). Reflection scored as level 1, or what Larrivee (2008) refers to
as pre-reflection, signals failure on the NH-TCAP.
At performance levels of 2 and 3 on the two rubrics measuring reflective
thinking, the descriptors conveyed a mix of surface and pedagogical reflec-
tion. For rubric nine, which evaluates how well candidates monitor student
progress, the level 2 descriptor states, “Daily notes identify what students
could or could not do within each lesson. Adjustments to instruction are
focused on improving directions for learning tasks, time management, or
re-teaching” (NH-TCAP, 2015, p. 38). In other words, to pass this rubric
with a 2, candidates must demonstrate at minimum a capacity for surface
reflection and a focus on technical corrections to teaching.
The Level 3 descriptors on rubric nine, however, conveyed more of a
mix between surface and pedagogical reflection. The first part of the descrip-
tor requires that, “Daily notes indicate monitoring of student progress
toward meeting the standards/objectives for the learning segment” (NH-
TCAP, 2015, p. 38). This might be satisfied through either a more sophis-
ticated form of surface reflection, or through pedagogical reflection focused
on the impact of particular teaching techniques on student learning. The
second part of this level-3 descriptor requires “[a]djustments to instruc-
tion . . . focused on addressing some individual and collective learning
needs” (NH-TCAP, 2015, p. 38). In other words, to meet a 3 on this rubric,
a candidate must clearly articulate pedagogical reflection by examining the
impact of teaching on student learning and make corresponding adjust-
ments to instruction.
In contrast, the tenth rubric, "Reflecting on Learning," includes
the same bifurcation, but at level 2. To earn a passing score with a 2 on
the tenth rubric, candidates must demonstrate that “[c]hanges in teaching
practice are based on reasonable assumptions about how student learning
was affected by planning, instruction, or assessment decisions” (NH-TCAP,
2015, p. 39). As was the case with the ninth rubric, candidates could fulfill
this criterion through surface reflections, such as reflections on how to
adjust specific routines and procedures, but could also extend their think-
ing through pedagogical reflection that included examinations of how their
underlying approaches affect student learning. It is essential to point out
here that the same types of reflection required to earn a 3 on the ninth ru-
bric would only signal a level 2 performance on the tenth rubric—or mini-
mally passing. What this conveys to us is a theoretical inconsistency in mini-
mum expectations of teacher candidates' reflective thinking capabilities.
Both reflection strand rubrics describe increasingly sophisticated forms
of pedagogical reflection across scores of 2, 3, and 4. For example, the ninth
rubric, "Monitoring Student Progress," describes an expectation that
candidates reflect on ways that they can improve their teaching. A score of
4 expands on that expectation to include adjustments intended to "deepen
key skills, [and] concepts, and/or thinking processes” (NH-TCAP, 2015,
p. 38).
Similarly, rubric ten, "Reflecting on Learning," calls for the candidates'
use of research as a tool for analyzing practice in increasingly complex terms
along the scoring continuum (i.e., a score of 2, 3, or 4). Using research as a
tool to analyze practice is a core aspect of pedagogical reflection. The tenth rubric articulates the developmental nature of teaching candidates' capacity
to use research by claiming a candidate will earn a 2 by showing consistency
with principles of research, a 3 by “linking” research to their knowledge
about students, and a 4 by demonstrating an ability to “integrate sound
knowledge of research” (NH-TCAP, 2015, p. 39).
In sum, our content analysis of the NH-TCAP Reflection Strand revealed
that the NH-TCAP framing materials, prompts, and rubrics all operational-
ize reflective thinking primarily in terms of pedagogical reflective practices,
with some aspects of surface reflection incorporated. Non-reflective practices
tend to occupy the lowest level descriptors of reflective practice on the NH-
TCAP rubrics, and candidates demonstrating this level of reflection on the
NH-TCAP would fail the reflection strand. Our analysis further revealed
that while the task does not preclude candidates’ critical reflection, neither
the NH-TCAP framing materials, the prompts, nor rubrics elicit or evaluate
critical reflection. Absent a clear articulation of the limits of the performance
assessment to elicit or assess reflective thinking as defined by the profes-
sion, we contend that typical performance assessments share a threat to
construct validity.

IMPLICATIONS AND CONCLUSIONS

As an instrument designed to serve as an assessment of learning and assessment for learning (Reagan, Schram, McCurdy, Chang, & Evans, 2016), the
NH-TCAP values and operationalizes reflective thinking. Here we discuss
two sets of implications derived from our study. First, we address implica-
tions derived from findings generated by the case analysis. Next, we address
implications derived from the content analysis of the NH-TCAP framing
materials, prompts, and rubrics.
The cases we analyzed indicate that reflective practice, like playing the
piano or learning to swim, is a skill that develops over time. Marilyn, Zoey,
and Maeve conveyed growth in the scope of their reflection during their
first year of teaching, compared to their internships. Marilyn honed her skills for
reflecting on individual students as well as her whole class. Zoey drew from
previous experiences to make sense of and strengthen her skills in one-
to-one conferences. Maeve showed almost no pedagogical reflection in her
NH-TCAP but proved successful in doing so her first year. Thus, the NH-
TCAP positions candidates for and directs candidates through the analytic
work required for reflection. “Passing” the NH-TCAP does not signify so-
phisticated reflection. In fact, none of the participants in our study earned
a 4 on either rubric.
Our data also suggest that the context and support structure available
to candidates, such as the co-teaching relationship Marilyn shared with
her mentor, impact a candidate’s reflection. Thus, this study highlights the
need for further research as to how context shapes reflection. How does
the implementation of the NH-TCAP within our programs constrict and/
or inform candidates’ understanding of reflection?
A second set of implications arises from our examination of how the NH-
TCAP operationalizes reflection. Based on our analysis of the reflection
strand, we found that the NH-TCAP operationalizes reflective thinking pri-
marily in terms of pedagogical reflective practices. As described above, the
participants in this study responded accordingly, demonstrating pedagogical
reflection on the NH-TCAP. They subsequently continued to demonstrate
this type of reflection in their first year of teaching. Based on the Larrivee
(2008) framework, critical reflection includes both conceptions of equity and
social justice oriented reflection as well as metacognitive reflection. The
absence of each of these forms of reflection is significant.
The structure of the NH-TCAP leaves little room for teachers to reflect
critically on the larger social and institutional contexts of school, to con-
sider how these contexts can affect the learning of their students, or to ex-
amine their own underlying assumptions and biases which affect teaching
and learning. We are persuaded by the body of literature suggesting that eq-
uity and justice-oriented reflection is an important component of effective
teaching, teacher development, and teacher education (Banks et al., 2005;
Larrivee, 2008; Zeichner & Liston, 1996). We are simultaneously aware of
some of the challenges associated with incorporating this form of reflection
in an assessment of this type (Nagle, 2009).
The NH-TCAP’s stated purpose, intentionally narrow focus, and low-
stakes nature may all partially explain the absence of equity and justice-ori-
ented reflection. According to the designers, the purpose of the NH-TCAP
is to provide a necessary but not all-encompassing assessment of candidate
skills, designed to facilitate feedback oriented towards candidate improve-
ment, and intended to be one of multiple measures of teacher effective-
ness (Reagan et al., 2016). In keeping with the voluntary and collaborative
nature of the NH-TCAP initiative, institutions of higher education sought
to focus a shared performance assessment around a set of core practices
that were common to all institutions, without limiting or distorting indi-
vidual institution’s ability to emphasize aspects of teaching and the teaching
profession that were priorities in their settings. Finally, the NH-TCAP is a
low-stakes assessment, and is not required for licensure, which may limit
the extent to which the NH-TCAP comes to symbolize what the state of New
Hampshire considers most important.
Despite these mitigating factors, participating teacher preparation programs, both in-
dividually and as part of the New Hampshire Institutions of Higher Educa-
tion Network, should be aware that equity and justice-oriented reflection
is not captured by NH-TCAP, and must consider the implications of this
absence for their program’s curriculum, assessment, and improvement.
Further study is needed to determine how, if at all, the NH-TCAP might
operationalize equity and justice-oriented reflection.
Teachers’ metacognition is also largely left out of the NH-TCAP. Meta-
cognitive reflection, a dimension of what Larrivee (2008) would include
under the category of critical reflection, incorporates practices such as ob-
serving one’s own thinking process, evaluating one’s own progress towards
self-identified goals, and actively inquiring into one’s own development
(Larrivee, 2008). This form of reflection has been identified as important
to fostering teachers’ ability to analyze instructional events and situations in
a way that enables them to understand and handle the complexities of life
in the classroom, to take control of their own learning, and to continuously im-
prove (Hammerness et al., 2005). The designers of the NH-TCAP explicitly
chose to develop their own assessment, rather than use a standardized tool
such as edTPA, because they wanted to develop an assessment for learning
in addition to having an assessment of learning (Reagan et al., 2016). In
light of this, the absence of metacognitive critical reflection is conspicuous.
Providing opportunities for candidates to demonstrate metacognitive abil-
ity is an important potential enhancement to the NH-TCAP if it is to fulfill
its stated purpose. We might tell our candidates that earning a 4 is rare,
but how might we use the rubric to encourage metacognitive, critical, and
justice-oriented reflection?
In sum, the NH-TCAP elicited reflection, particularly pedagogical reflection,
from participating preservice teachers, and these teachers demonstrated
this and other dimensions of reflection in their first year as teachers as their
knowledge and experience grew. This was consistent with the types of reflec-
tion the NH-TCAP was designed to elicit and assess. More research is needed to
understand how contextual factors such as school characteristics, differences
between student teaching contexts and job contexts, mentor programs for
first-year teachers, and differences in student populations interact with re-
flective practice. Additionally, given the stated purpose of the NH-TCAP, the
absence of prompts and evaluative criteria related to candidates’ metacog-
nitive abilities appears to be a crucial gap in performance assessments that
measure reflective thinking. Finally, we suggest that further consideration is
needed regarding whether and how the NH-TCAP might direct candidates
to reflect on broader social contexts, equity, and social justice.
NOTE

1. The NH-TCAP is an open-source version of the California PACT with similar prompts and rubrics.

REFERENCES

Ball, D. L., & Cohen, D. K. (1999). Developing practice, developing practitioners:
Toward a practice-based theory of professional education. In G. Sykes & L.
Darling-Hammond (Eds.), Teaching as the learning profession: Handbook of policy
and practice (pp. 3–22). San Francisco, CA: Jossey-Bass.
Banks, J., Cochran-Smith, M., Moll, L., Richert, A., Zeichner, K., LePage, P., & Mc-
Donald, M. (2005). Teaching diverse learners. In L. Darling-Hammond (Ed.),
Preparing teachers for a changing world: What teachers should learn and be able to do
(pp. 232–274). San Francisco, CA: Jossey-Bass.
Bean, J. C. (2011). Engaging ideas: The professor's guide to integrating writing, critical
thinking, and active learning in the classroom (2nd ed.). San Francisco, CA:
Jossey-Bass.
Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of
successful learning. Cambridge, MA: Harvard University Press.
Clarà, M. (2015). What is reflection? Looking for clarity in an ambiguous notion.
Journal of Teacher Education, 66(3), 261–271.
Cochran-Smith, M., & Boston College Evidence Team. (2009). “Re-culturing”
teacher education: Inquiry, evidence, and action. Journal of Teacher Education,
60(5), 458–468.
Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods ap-
proaches. Los Angeles, CA: SAGE.
Darling-Hammond, L. (2010). Evaluating teacher effectiveness: How teacher per-
formance assessments can measure and improve teaching. Washington, DC:
Center for American Progress. Retrieved September 2016 from http://files.
eric.ed.gov/fulltext/ED535859.pdf
Darling-Hammond, L., Newton, S. P., & Wei, R. C. (2012). Developing and assessing
beginning teacher effectiveness: The potential of performance assessments.
Stanford, CA: Stanford Center for Opportunity Policy in Education.
Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the
educative process. Boston, MA: Heath and Company.
Dimmitt, C., & McCormick, C. B. (2012). Metacognition in education. In K. R. Harris, S. Graham, & T. Urdan (Eds.), APA educational psychology handbook, vol. 1:
Theories, constructs, and critical issues (pp. 157–188). Washington, DC: American Psychological Association.
Feiman-Nemser, S. (2012). From preparation to practice: Designing a continuum to
strengthen and sustain teaching. In S. Feiman-Nemser (Ed.), Teachers as learn-
ers (pp. 105–150). Cambridge, MA: Harvard University Press.
Guest, G., MacQueen, K. M., & Namey, E. E. (2011). Applied thematic analysis. Los
Angeles, CA: SAGE.
Hammerness, K., Darling-Hammond, L., Bransford, J., Berliner, D., Cochran-Smith,
M., McDonald, M., & Zeichner, K. (2005). How teachers learn and develop. In
L. Darling-Hammond & J. Bransford (Eds.), Preparing teachers for a changing
world: What teachers should learn and be able to do (pp. 358–389). San Francisco,
CA: Jossey-Bass.
Hill, C. E., Knox, S., Thompson, B. J., Williams, E. N., Hess, S. A., & Ladany, N.
(2005). Consensual qualitative research: An update. Journal of Counseling Psy-
chology, 52(2), 196–205.
Jay, J., & Johnson, K. (2002). Capturing complexity: A typology of reflective practice
in teacher education. Teaching and Teacher Education, 18(1), 73–85.
Larrivee, B. (2008). Development of a tool to assess teachers’ level of reflective prac-
tice. Reflective Practice, 9(3), 341–360.
Lombardi, J. (2011). Guide to performance assessment for California teachers (PACT).
Boston, MA: Pearson Education, Inc.
Mayer, R. E. (2012). Information processing. In K. R. Harris, S. Graham, & T. Urdan
(Eds.), APA educational psychology handbook, vol. 1: Theories, constructs and criti-
cal issues (pp. 85–100). Washington, DC: American Psychological Association.
Nagle, J. (2009). Becoming a reflective practitioner in the age of accountability. The
Educational Forum, 73(1), 76–86.
New Hampshire Institutions of Higher Education Network (IHE Network). (2015,
June). IHE Meeting Minutes June 2015.
Okhremtchouk, I., Seiki, S., Gilliland, B., Ateh, C., Wallace, M., & Kato, A. (2009).
Voices of pre-service teachers: Perspectives on the Performance Assessment
for California Teachers (PACT). Issues in Teacher Education, 18(1), 39.
Polikoff, M. S., Porter, A. C., & Smithson, J. (2011). How well aligned are state assess-
ments of student achievement with state content standards? American Educa-
tional Research Journal, 48(4), 965–995.
Porter, A. C. (2006). Curriculum assessment. In J. L. Green, G. Camilli, & P. B. El-
more (Eds.), Handbook of complementary methods in education research (pp. 141–
159). Washington, DC: American Education Research Association.
Reagan, E. M., Schram, T., McCurdy, K., Chang, T., & Evans, C. M. (2016). Politics of
policy: Assessing the evolution, implementation, and impact of the Performance
Assessment for California Teachers. Education Policy Analysis Archives, 23(9), 1–27.
Rodgers, C. (2002). Defining reflection: Another look at John Dewey and reflective
thinking. Teachers College Record, 104(4), 842–866.
Schön, D. (1983). The reflective practitioner. New York, NY: Basic Books.
Soslau, E., Kotch-Jester, S., & Jorlin, A. (2015, December). The dangerous message
teacher candidates infer: "If the edTPA does not assess it, I don't have to do
it." Teachers College Record. Retrieved from http://www.tcrecord.org
Teddlie, C., & Yu, F. (2007). Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research, 1(1), 77–100.
Zeichner, K., & Liston, D. (1987). Teaching student teachers to reflect. Harvard
Educational Review, 57(1), 23–48.
Zeichner, K., & Liston, D. (1996). Reflective teaching: An introduction. Mahwah, NJ:
Erlbaum.
CHAPTER 9

STATE EDUCATION
AGENCY USE OF TEACHER
CANDIDATE PERFORMANCE
ASSESSMENTS
A Case Study of the Implementation
of a Statewide Portfolio-Based
Assessment System in Kansas

Stephen J. Meyer and Emma V. Espel


RMC Research Corporation

Nikkolas J. Nelson
Kansas State Department of Education

Accountability for teacher preparation programs has received substantial
attention in recent years. Changes to national (Council for Accreditation
of Educator Preparation [CAEP]) accreditation standards, new state re-
quirements for data collection and reporting, and proposals for changing
reporting requirements associated with Title II of the Higher Education
Act of 1965, as amended in 2008 by the Higher Education Opportunity Act,
have all contributed to a focus among educators, researchers, and policy-
makers on how to best measure teacher candidate performance outcomes.
However, tools for assessing the performance of teacher candidates and
practicing teachers vary substantially in their design, implementation, and
utility. For example, a study of the characteristics and utility of five obser-
vation instruments used with practicing teachers (which share many char-
acteristics of instruments used with teacher candidates) found that they
focused on common elements of practice, but varied in terms of their re-
lationship to student achievement scores and the extent to which observa-
tion scores were independent of the characteristics of students in teachers’
classrooms (Gill, Shoji, Coen, & Place, 2016). While there is emerging un-
derstanding of assessments used for practicing teachers, there is a need for
better understanding of the design, implementation, and utility of assess-
ments for teacher candidates.
State education agencies play a substantial role in decisions about the use
of teacher candidate assessments. Nearly all states require that teachers pass
tests to demonstrate proficiency in basic skills, subject matter, and profession-
al knowledge as a condition of certification or licensure (Hoogstra, 2011). In
recent years, several states began to encourage or require the use of teacher
candidate performance assessments. Based on a review of publicly available
information from state education agency websites, we found that goals identi-
fied by states for these assessments vary and include the following:

• to demonstrate that candidates are ready for the profession;
• to provide targeted feedback to candidates;
• to facilitate professional dialogue among candidates and mentors/
supervisors during preparation;
• to prepare candidates to collect and present evidence of effective
teaching and learning;
• to help schools identify the professional learning needs of new
teachers and direct induction and mentoring activities;
• to strengthen partnerships among educator preparation programs
and partner school systems; and
• to provide a source of evidence for licensure/certification decisions,
program review, and accreditation.

States also vary in the assessments they use and in their approaches for
collecting and using these data. Based on a review of publicly available in-
formation from state education agency and test publisher websites con-
ducted in early 2017, we found that 23 states either require or have plans to
require a teacher candidate performance assessment as part of completing
a preparation program or as a condition of certification or licensure. The
most prevalent assessment is the edTPA®. Ten states either require or have
plans to require edTPA (Georgia, Illinois, Iowa, Minnesota, New Jersey, New
York, Oregon, Tennessee, Washington, and Wisconsin) and an additional six
states require or have plans to require either edTPA or an alternative assess-
ment (Alaska, Alabama, Delaware, Hawaii, North Carolina, and West Virgin-
ia). The remaining seven states (Arizona, California, Kansas, Massachusetts,
Michigan, Missouri, and New Hampshire) either require or have plans to
require a state-specific assessment. State education agencies are in a particu-
larly good position to help bring coherence to the performance assessment
measures and processes used by programs as they develop statewide assess-
ment systems in response to internal priorities and external requirements for
program accountability.
In this chapter, we discuss the implementation and use of data from can-
didate performance assessments by one state education agency—the Kan-
sas State Department of Education (KSDE)—that uses a state-specific can-
didate performance assessment called the Kansas Performance Teaching
Portfolio (KPTP). KSDE requires educator preparation programs to imple-
ment candidate performance assessments, the passing scores from which
are to be used as a criterion for program completion. Most Kansas colleges
and universities that prepare educators (22 of 26) have adopted the KPTP.
The remaining four institutions use locally developed and scored portfolios
with content similar to the KPTP.
As authors of this chapter, we bring together the perspectives of KSDE
and representatives of the Regional Educational Laboratory Central at
Marzano Research who worked with KSDE to support analysis of data from
the performance assessment. Author Nikk Nelson represents KSDE and
provides information about the KPTP's history, implementation, challenges, and lessons learned. Authors Stephen Meyer and Emma Espel represent RMC
Research, a subcontractor to the Regional Educational Laboratory (REL)
Central at Marzano Research, and provide information about a data analy-
sis tool designed to inform KPTP implementation and improvement.
Specifically, we provide an overview of the KPTP, describing its develop-
ment, content, and use by the state and educator preparation programs,
with a focus on how the assessment is implemented and scored. Next, we
describe emerging priorities for improving the assessment and a data analy-
sis tool designed to inform KPTP implementation and improvement. We
discuss the analysis tool and how data from the tool have been used and
conclude with a discussion of next steps for implementation of candidate
performance assessment in the state and areas for consideration by state
education agencies and preparation programs as they develop and imple-
ment approaches for candidate performance assessment.

THE KANSAS PERFORMANCE TEACHING PORTFOLIO (KPTP)

History

In 2003, KSDE introduced a statewide performance assessment for practicing teachers known as the Kansas Performance Assessment (KPA). The
KPA required practitioners at the initial license level to create a unit of study
including lesson plans, assessment plans, and documented demonstration
of reflective practice. These materials were organized into a portfolio and
submitted to KSDE for scoring. A passing score on the KPA was used to al-
low a beginning practitioner to upgrade his or her initial license to a 5-year
professional teaching license. New teachers and their employers expressed
substantial concerns about the KPA, arguing that it added too much pres-
sure to already overworked new teachers. In response, KSDE discontinued
the KPA and focused on encouraging districts to provide required mentor-
ing programming for new teachers. Completion of a district-provided year-
long mentoring program replaced the KPA as the requirement to upgrade
from an initial to a professional license.
Despite discontinuation of the KPA, KSDE and educator preparation
program administrators and faculty saw value in a capstone performance
assessment that offered more robust measurement of pedagogy than the standard multiple-choice items and “imagine this is your classroom” essay questions
that were characteristic of typical state pedagogical licensing exams. Prep-
aration programs wanted to assess candidate performance in an authentic
setting where candidates could demonstrate proficiency teaching real stu-
dents. To meet this need, KSDE collaborated with a national testing com-
pany to develop a preservice candidate performance assessment designed
to be completed during the student teaching internship. This assessment,
known as the KPTP, was introduced in 2009 and continues to be used in
Kansas as the primary candidate performance assessment.

Purpose and Use of the KPTP

The KPTP is one measure of a candidate's readiness to serve as a practicing educator. The assessment is designed to be a culminating experience
in which candidates apply what they have learned during their prepara-
tion to demonstrate how they incorporate contextual factors and student
characteristics in the design and implementation of a unit of study. The
state requires that programs that prepare educators use KPTP scores (or
scores from a similar assessment) as a criterion for program completion,
based on candidate achievement of a minimum score. Educator prepara-
tion programs also often use scores from KPTP as evidence to demonstrate
adherence to national accreditation standards and state standards for program approval and/or accreditation.
Each educator preparation program is required to have a remediation
policy in place for candidates who do not meet the KSDE-established min-
imum score on the assessment. KSDE recommends that policies include
two levels of remediation. Candidates needing minimal remediation typically
have low scores on parts of the KPTP, but demonstrate an understanding
of requirements and provide adequate evidence for most tasks. These can-
didates may be asked to reformat or reorganize their submission to comply
with page limits or other submission requirements, attach required appen-
dices, address a single area that lacked evidence, and/or rewrite a single
task area to provide clarification or additional detail. Candidates needing
extensive remediation do not demonstrate an understanding of requirements
and provide minimal or no evidence for the tasks. These candidates may be
asked to completely rewrite their submission or repeat the student teaching
internship and submit a new KPTP. The cost of KPTP administration is $60,
which is used to provide a stipend of $30 to each of two independent scor-
ers. The cost is typically built into student teaching course fees.

Content

The KPTP measures candidate performance in six focus areas that align
with KSDE Professional Education Standards and represent key areas of
teaching practice (Kansas State Department of Education, 2016). Focus ar-
eas are discussed in the following section and include: (a) analysis of contex-
tual information, (b) analysis of learning environment factors, (c) instruc-
tional implementation, (d) analysis of classroom learning environment, (e)
analysis of assessment procedures, and (f) reflection and self-evaluation.

Focus Area A (Analysis of Contextual Information)


The candidate will: (a) have acquired a knowledge base of how students
learn and develop, (b) provide learning opportunities that will support
their understanding of child development, (c) have the knowledge to select
developmentally appropriate differentiated instruction, and (d) include
multiple instructional strategies to meet the needs of all learners including
those with exceptionalities.

Focus Area B (Analysis of Learning Environment Factors)


The candidate: (a) demonstrates the ability to provide different ap-
proaches to learning; (b) creates instructional opportunities that are eq-
uitable, based on developmental levels, and adapted to diverse learners;
(c) understands a variety of appropriate instructional strategies to develop
various kinds of students' learning including critical thinking, problem solving, and reading; (d) plans effective instruction based upon the knowledge
of all students, community, subject matter, curriculum outcomes, and cur-
rent methods of teaching reading; (e) demonstrates the ability to integrate
across and within content fields; and (f) understands the role of technology
in society and demonstrates skills using instructional tools and technology
to gather, analyze, and present information.

Focus Area C (Instructional Implementation)


The candidate understands and uses a variety of appropriate instruction-
al strategies including a wide range of technological tools to develop vari-
ous kinds of students’ learning including critical thinking, problem solving,
and reading. The candidate ensures effective student use of technology.

Focus Area D (Analysis of Classroom Learning Environment)


The candidate uses an understanding of individual and group motivation
and behavior, including effective verbal and nonverbal communication tech-
niques to create a positive learning environment that fosters active inquiry,
supportive interaction, collaboration, and self-motivation in the classroom.

Focus Area E (Analysis of Assessment Procedures)


The candidate understands and uses formal and informal assessment
strategies to evaluate and ensure the continual intellectual, social, and oth-
er aspects of personal development of all learners. The candidate monitors
his or her own teaching strategies and behavior in relation to student suc-
cess, modifying plans, and instructional approaches accordingly.

Focus Area F (Reflection and Self-Evaluation)


The candidate is a reflective practitioner who continually evaluates the
effects of his or her choices and actions on others (students, parents, and
other professionals in the learning community), actively seeks out oppor-
tunities to grow professionally, and participates in the school improvement
process. The candidate fosters collegial relationships with school person-
nel, parents, and agencies in the larger community to support all students’
learning and well-being.

Task Areas

Standards within each focus area are scored based on a unit of study that
is designed and implemented by the candidate. Scores for the standards are
summed to generate a total KPTP score and scores for the following four
task areas: (a) contextual information and learning environment factors,
(b) designing instruction, (c) teaching and learning, and (d) reflection
and professionalism. Candidates use a KPTP template to submit informa-
tion for each task (Kansas State Department of Education, 2013).

Task 1 (Contextual Information and Learning Environment Factors)


Task 1 is about the class and learning environment. The candidate provides
information about the broader school community (e.g., rural, urban, sub-
urban), the school district (e.g., size, required testing), the selected class
(e.g., number of students, identified disabilities), and selects a student sub-
group and two Focus Students from the class. Candidates are encouraged
to select student subgroups for which meaningful adaptations to instruction
can be designed. For example, if in gathering contextual information, a
candidate discovers that four students have behavior disorders, a subgroup
consisting of those students would likely provide the candidate with a good
opportunity to demonstrate how to adapt instruction to meet student needs.
Focus Students must include students identified as English Language Learn-
ers or as needing some other adaptation(s). The candidate then analyzes
this contextual information and provides information about implications for
instruction (for the whole class, subgroup, and Focus Students) and methods
for fostering a positive classroom learning environment.

Task 2 (Designing Instruction)


Task 2 is about planning a unit of study: teaching, learning, and improving
professional practice. The candidate provides an outline of a unit of study,
focused on one content area (e.g., science, math, social studies). The candi-
date chooses objectives that are aligned with state standards for the content
area and plans a series of lessons that enables students to achieve those objec-
tives. Within the unit, the candidate: (a) designs lessons including the use of
technology, reading strategies, integration of other content, and community
resources; (b) identifies and/or designs assessments that will verify student
achievement of the objectives; (c) includes two detailed Focus Lessons that
are video recorded and observed; (d) includes technology and reading strat-
egies at least once within those two plans; and (e) prepares adaptive plans for
each of the two detailed lessons for the Focus Students identified in Task 1.

Task 3 (Teaching and Learning)


Task 3 is about implementation, focusing on teaching, learning and im-
proving professional practice. In this task area, the candidate: (a) demon-
strates ability to implement an instructional unit (developed in Task 2) and
then analyzes and reflects on teaching and (b) keeps a daily teaching re-
flection log of lessons to be completed following each day’s lesson, paying
attention to identified Focus Students and responding to specific prompts
regarding these students. The two Focus Lessons are used as the basis of
Task 3. Prompts focus on changes the candidate made in the unit design
and reasons for changes. For example, did the results of an assessment,
even an informal formative assessment, lead to adjustments in the next
day’s lesson? Did an instructional strategy fall flat? In Step 2 of Task 3, the
observation, video recording, and narrative reflection on the two detailed
unit lessons provided in Task 2 take place, and the candidate responds to
prompts for this section. In the final step of this task area, the candidate
analyzes the assessment plan. Data from the pre-assessment are used to in-
form the instructional unit for Task 2 and then candidate implementation
of the unit in Task 3. The candidate is expected to provide assessment data,
analysis, and interpretation for pre-assessment, formative assessments, and
summative assessment in relation to the unit objectives.

Task 4 (Reflection and Professionalism)


Task 4 is about reflection on professional practice. The candidate uses
data from Task 3 to analyze the effects of instruction on student learn-
ing and identify two objectives that were most successful and two that
were least successful and provides rationale. The candidate then reflects
on strategies for improvement in future instruction and identifies profes-
sional learning opportunities that could support his or her professional
growth. Lastly, the candidate reflects on the communications s/he had
with students, parents, faculty, and other professionals, identifies four or
five communications that ultimately had a positive impact in the class-
room, and discusses that impact.
All materials are submitted by candidates as a work-sample portfolio
(35 pages maximum) plus appendices, which include lesson plans, assess-
ments, and any relevant student scoring rubrics or keys. The portfolio in-
cludes detailed descriptions (e.g., of students and context), unit and lesson
plans, lesson materials and assessments with rationales, reflections based on
two video-recorded lessons, data from student assessments and interpreta-
tion, reflections on achievement of learning objectives and future profes-
sional development, and description of professional communication and
its impact.
The template that the candidates complete also asks guided questions in
each task area. For example, once the candidate has gathered Task 1 contex-
tual information, the candidate is asked: “How will you help create a positive,
self-motivating, classroom learning environment?” In Task 2, the candidate
is asked: “Why are the learning objectives you selected appropriate for the
content area and grade level?” As part of Task 3, candidates are asked: “Based
on the summative assessment results, what objectives were most successful?”
These guided questions encourage candidates to understand and provide ex-
planation for their instructional decisions, the outcomes of those decisions,
and any needed adjustments or extensions to their instructional design and implementation identified through student assessment.

Implementation Guidance

KPTP content guidelines (Kansas State Department of Education, 2016) are used to convey requirements and provide detailed guidance to
candidates for completing the KPTP, including templates for submission
of required content. Candidates are encouraged to review the content
guidelines in advance of early clinical field experiences. The guidelines
include the scoring rubrics, prompts to guide candidate responses, and
checklists to guide their completion of each task. An implementation
guidance document (Kansas State Department of Education, 2011) pro-
vides guidance to educator preparation programs. Institutions using the
KPTP are expected to integrate the KPTP tasks within the program’s clini-
cal coursework, field experiences, and student teaching (Figure 9.1) and
are expected to inform, advise, and support candidates throughout the
KPTP process.
During candidates’ student teaching experiences, cooperating teachers are
expected to: (a) review the KPTP content guidelines and create a plan for
when the candidate will teach the unit for the KPTP, (b) observe the candidate
and provide constructive criticism relevant to the KPTP focus areas, and (c)
provide appropriate support as the candidate performs the KPTP unit of study.

Figure 9.1  Integration of the KPTP in clinical courses and experiences. The figure shows three levels of integration:
• Clinical Coursework: KPTP is integrated into clinical coursework; purposeful integration of reflective writing and self-assessment.
• Field Experiences: practice in completing portions of the KPTP tasks beginning in early clinical field experiences; purposeful integration of reflective writing and self-assessment.
• Student Teaching: the KPTP is formally implemented; appropriate support provided by institutional faculty/supervisors and mentor/cooperating teachers.


Either the cooperating teacher or building level administrative supervisor may
serve as an observer for the KPTP video recorded lesson to provide observa-
tion-based feedback that the candidate uses for written reflective responses.

KPTP Scores

Candidates are rated on focus areas associated with each of the four
tasks, for a total of 10 ratings (2 focus areas for Task 1, 3 for Task 2, 4 for
Task 3, and 1 for Task 4), using a 3-point rating scale where 1 = criteria not
met, 2 = criteria partially met, and 3 = criteria met. Two scorers evaluate
each KPTP and their scores are averaged to generate a final score for each
candidate. The total possible score for the KPTP is 30 (up to 3 points for
each of 10 ratings) and candidates must have a score of at least 20 to dem-
onstrate competency on the assessment. If scorers provide substantially dif-
ferent scores for a particular standard (i.e., if one scorer provides a score of
1 and the other provides a score of 3), an adjudication process is triggered.
The KSDE staff member with oversight for the scoring process reviews the
evidence provided by each scorer and makes a final score determination.
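The arithmetic just described is simple enough to sketch in code. The following example is our illustration, not part of KSDE's scoring system, and every name in it is hypothetical: it averages two scorers' ratings across the 10 focus-area ratings, sums them against the 30-point maximum and 20-point passing threshold, and flags any 1-versus-3 disagreement for adjudication.

# A minimal sketch, assuming the scoring rules described above: two
# scorers each assign 10 focus-area ratings on a 1-3 scale, paired
# ratings are averaged and summed to a 30-point total, 20 is the
# passing score, and a 1 vs. 3 split on any rating triggers
# adjudication. Illustrative only; not KSDE's actual software.

PASSING_SCORE = 20
NUM_RATINGS = 10  # 2 (Task 1) + 3 (Task 2) + 4 (Task 3) + 1 (Task 4)

def score_kptp(scorer_a, scorer_b):
    """Return the final score, pass/fail status, and the indices of
    any ratings that require adjudication."""
    assert len(scorer_a) == len(scorer_b) == NUM_RATINGS
    averaged = [(a + b) / 2 for a, b in zip(scorer_a, scorer_b)]
    total = sum(averaged)  # maximum possible total is 30
    # On a 1-3 scale, a difference of 2 can only be a 1 vs. 3 split.
    adjudicate = [i for i, (a, b) in enumerate(zip(scorer_a, scorer_b))
                  if abs(a - b) == 2]
    return total, total >= PASSING_SCORE, adjudicate

total, passed, flags = score_kptp(
    [3, 2, 1, 3, 3, 2, 3, 2, 3, 3],  # scorer A's 10 ratings
    [3, 3, 3, 3, 2, 2, 3, 2, 3, 3],  # scorer B's 10 ratings
)
print(total, passed, flags)  # 26.0 True [2]

In this example the third rating (index 2) is a 1-versus-3 split, so under the process above a KSDE staff member would review both scorers' evidence before a final score is assigned.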

Scoring Process

The scoring process is managed by KSDE, which has oversight for recruiting
and training scorers, implementing scoring activities, and distributing candi-
date scores. In the fall and spring of each year, educator preparation programs
submit KPTP materials for each candidate to KSDE for scoring. Materials are
uploaded to a common document warehouse and KSDE sends documents via
email to trained scorers. KPTP documents are randomly assigned to scorers
who include practicing and retired educators and faculty members. Scorers
may not score KPTPs from candidates at their own institution, and they sign a con-
fidentiality agreement in which they agree to maintain the confidentiality of
teacher candidates and institutions before, during, and after scoring sessions.
Scorers include brief statements that document evidence in each focus
area and justify selection of scores. Scorers submit a record of evidence for each candidate to KSDE that includes scores
for each focus area along with interpretive words that describe evidence in
each focus area (e.g., “accurate,” “meets needs,” or “limited”) and a justifi-
cation statement for the scores assigned that uses language from the rubric.
KSDE compiles submitted scores and sends a spreadsheet with final scores
and the complete record of evidence for each candidate via email to the
KPTP coordinator at each institution. The record of evidence includes final
scores, keywords, and justification statements.

Scorer Training

KSDE requires that all KPTP scorers participate in a formal training ses-
sion which typically lasts a day and a half. The training is designed to famil-
iarize scorers with the KPTP content guidelines and scoring tools including
scoring rubrics, a list of interpretive words, and the record of evidence.
During the training, scorers practice scoring with validated benchmarks for
each of the four task areas, and complete training cases. Trainers are pro-
vided with exemplar cases and examples of evidence that support assign-
ment of particular scores.
Scorers are encouraged to identify their own biases that may contribute
to scoring decisions and the training focuses on providing scorers with an
understanding of how to apply the rubric consistently, incorporating only
the evidence that is called for by the rubric. The training also emphasizes
the importance of holistic scoring and avoiding a “deficit model” of scor-
ing. For example, to meet criteria for Focus Area E (analysis of assessment
procedures), candidates must use a variety of informal and formal assess-
ment techniques, analyze disaggregated data and use assessment results to
inform and improve instruction, use assessment results to monitor teach-
ing strategies in relation to focus student success, and modify plans and
instructional approaches accordingly. When candidates do not include a
pre-assessment, scorers are sometimes inclined to assign a score of 1 based
on this perceived deficit. However, the rubric does not explicitly require a
pre-assessment to meet criteria for this focus area. During training, scorers
are provided with several sample KPTP materials and closely engage rubric
content to ensure that scoring is consistently applied. Sample rubric crite-
ria associated with the KPTP tasks are presented in Table 9.1. One sample
criterion from each relevant focus area is included.
The training emphasizes careful note-taking during the scoring process
to ensure that scores are closely tied to evidence identified in the task area
rubrics. Guidance for good note-taking and sample KPTP notes are shared.
Scorers are also instructed not to criticize, coach, or rescue the candidate and
to include only language from the document itself, the interpretive words list,
and the scoring rubrics as part of the record of evidence. Scorer notes are re-
viewed by trainers to inform scorer training and counseling and are sometimes
used to conduct a validity check of a particular candidate’s scores by another
scorer. Scorer notes become part of the candidate's permanent record, and if there
is a challenge to a scoring decision, the notes are reviewed to evaluate the ap-
propriateness of the score. All guidelines, templates, and policies related to
KPTP scoring are available on the KSDE website (www.ksde.org).
TABLE 9.1  Sample Rubric Criteria (Associated With “Criteria Met”)

Task 1: Contextual Information and Learning Environment Factors
• Responses for 2 focus students provide detailed and appropriate references to student characteristics, including but not limited to prior learning, culture, language, exceptionality, family values, and community values (Focus Area A).
• Responses provide detailed and appropriate strategies for the selected subgroup to become self-motivated and work productively and cooperatively (Focus Area D).

Task 2: Designing Instruction
• The instructional design includes instructional activities that are developmentally appropriate and have appropriate adaptations to meet the needs of all learners (Focus Area A).
• The instructional design includes clear evidence of appropriate adaptations and differentiations to meet the needs of all students (Focus Area B).
• The assessment plan includes a clear description of how the results of the assessments will be used (Focus Area E).

Task 3: Teaching and Learning
• Candidate uses a variety of teaching and learning strategies that are appropriate for students’ diverse contextual factors and reading abilities (Focus Area C).
• Candidate maintains an environment that includes independent and/or group participation to encourage positive social interaction, equitable engagement, and self-motivation of all students (Focus Area D).
• Disaggregated data were analyzed, and assessment results used to inform and improve instruction (Focus Area E).
• Reflection demonstrates a consistent and thorough ability to reflect on the implementation and outcomes of the daily instruction in relation to the impact on the whole class and the focus students (Focus Area F).

Task 4: Reflection and Professionalism
• Reflection identifies and discusses two or more strategies to extend instruction for successful learner objectives (Focus Area F).

ENGAGEMENT OF EDUCATOR PREPARATION PROGRAMS, SCALE-UP, AND CHALLENGES

KSDE provides implementation guidance (described above) to educator preparation programs wishing to adopt the KPTP, which includes best prac-
tice ideas for KPTP implementation. Institutions adopting the KPTP are en-
couraged to have faculty discuss the tool as early as possible in clinical course-
work. For example, KSDE encourages programs to have the candidate read
Task 1 in the content guidelines prior to conducting field observations and
to imagine the kind of information s/he would provide as evidence for Task
1 in their portfolio. Institutions are also asked to elect a KPTP coordinator
who acts as liaison between the preparation program and KSDE with respect
to the institution’s KPTP submissions, receipt of scores, and organization of
training for faculty or candidates. Institutions adopting the KPTP are also
offered a piloting opportunity. For example, they can begin KPTP implemen-
tation with just their elementary education program candidates, disseminate
those data, and then make a decision about scaling up.
The most common challenge reported by educator preparation program
faculty and administrators is candidate anxiety about the KPTP. Program fac-
ulty report that candidates view a “state assessment” differently from one that
is used as a local requirement. Candidates believe that there is more scrutiny
involved and more at stake with their performance. To alleviate this concern,
KSDE offers “candidate training” to discuss the purpose of the KPTP and
best practices for completing it. The training includes guidance for develop-
ing a timeline for completion, ensuring appropriate student subgroups are
selected, and other topics. Program faculty and administrators have reported
that this presentation is beneficial and helps to put candidates at ease.

EMERGING PRIORITIES
FOR IMPROVING THE KPTP AND ITS UTILITY

Implementation of the KPTP has continued since its development in 2009, with most Kansas graduates from educator preparation programs using the as-
sessment to demonstrate that they are prepared to be professional educators.
After several years of implementation, KSDE began to identify areas in which
the assessment and its use could be improved. Suggestions for improvement
came from administrators and faculty who felt that some KPTP elements were
redundant and could be better clarified. KPTP scorers also identified parts
of the KPTP where they had particular difficulty assigning scores and KSDE
noticed that the need to adjudicate scores (because of substantially differing
scores across scorers) tended to be greater for some task areas.
Three areas, discussed below, were identified as possible areas for im-
provement: (a) coherence of KPTP tasks and alignment with state educator
evaluation system, (b) reliability of scoring and scorer training process, and
(c) use of KPTP data to provide meaningful information to candidates and
educator preparation programs.

Coherence of KPTP Tasks and Alignment With State Educator Evaluation System

Review of KPTP scores and feedback from scorers revealed redundancies across KPTP tasks (particularly across Tasks 2 and 3) and that can-
didates often provided duplicate information. For example, candidates
sometimes repeated information from Task 2 in the daily reflection logs
in Task 3. KSDE was also concerned that the rubric elements for Task 4
particular elements. This was seen as contradictory to the holistic scoring
model that scorers were encouraged to use. KSDE wanted to ensure that
scorers were consistent in their understanding and application of rubric
elements related to the task area, which suggested possible revisions to the
assessment, such as providing additional guided questions and clarifying
expectations across tasks. The state also adopted a new educator evalu-
ation system in 2015, the Kansas Educator Evaluation Protocol (Kansas
State Department of Education, 2012), and sought to use common lan-
guage across rubrics for this protocol and the KPTP to better introduce
candidates to the criteria with which they will ultimately be evaluated in
practice.

Reliability of Scoring and Scorer Training Process

KSDE offered comprehensive initial training to KPTP scorers, but wanted to improve its monitoring of their performance and the reliabil-
ity, or consistency, of scoring. KSDE sought a better way to monitor the
performance of scorers and provide them with feedback to improve the
reliability of the scoring process. Further, KSDE sought information about
aspects of the scoring process that could better inform scorer training.
For example, if scorers were less consistent in their assignment of scores
in particular task areas, training might be adjusted to emphasize those
areas. Lastly, KSDE was interested in exploring the potential of providing
individualized feedback to scorers that might be used to improve their
practice. For example, if a scorer provided consistently lower-than-aver-
age scores in a particular task area, they might be prompted to reconsider
how they apply rubric criteria or biases that they may bring to scoring
process.

Use of KPTP Data to Provide Meaningful Information to Candidates and Educator Preparation Programs

Faculty and administrators reported to KSDE that they rarely provided records of evidence (with detailed information about performance) to
candidates and rarely received requests from candidates to review them.
They indicated that candidates typically only requested this more detailed
information when their KPTP score fell below the cut score. Further, there
was no indication that educator preparation programs were consistently
using data from the KPTP to review the performance of their candidates.
While KSDE recognized that the KPTP is not an ultimate determination
of candidate ability, it prioritizes providing feedback that is valuable to
candidates and educator preparation programs. Reports that data were
not regularly used prompted KSDE to consider ways to improve data and
reporting.

PARTNERSHIP WITH REL CENTRAL TO SUPPORT DATA USE TO INFORM KPTP IMPLEMENTATION AND IMPROVEMENT

The Partnership

KSDE collaborated with the REL Central at Marzano Research to explore ways that existing data could be leveraged to meet KSDE's priori-
ties for improving the KPTP and use of its data. REL Central is one of 10
Regional Educational Laboratories (see https://ies.ed.gov/ncee/edlabs)
that support education stakeholders across the nation in using research and
data to inform policy and practice. These organizations carry out their
work through partnerships consisting of practitioners, policymakers, and
others who share a common educational interest or concern. Partnerships
between researchers and practitioners are increasingly recognized as a way
to inform policy and practice and ensure that research and evaluation ac-
tivities are relevant and closely aligned with the needs of education stake-
holders (Farrell et al., 2017). KSDE’s collaboration with REL Central was
one of several activities undertaken between 2012 and 2017 as part of REL
Central’s Educator Effectiveness Research Alliance, which included state
education agency and district staff, college and university faculty, adminis-
trators of teacher preparation programs, researchers, and others in seven
states. The partnership took place at a time when state education agencies
in the Central region and nationally were actively considering changes or
making changes to how they collected and used data to evaluate teacher
candidates and the programs that prepare them.
The partnership began in late 2013 with a series of conversations among
KSDE and REL Central staff and the work was undertaken over an approxi-
mately 2-year period. Initial activities involved developing a shared under-
standing of the KPTP scoring process, scorer training activities, and the
KPTP data maintained by the state. KPTP data for six academic years (over
6,500 candidate records) were shared by KSDE and compiled by REL Cen-
tral. Next, REL Central worked closely with KSDE to develop and test a data
analysis tool that aligned with their priorities.

Partnership Goals

Goals for the partnership were determined collaboratively by KSDE and REL Central and are summarized in Table 9.2.

TABLE 9.2  Summary of Partnership Goals

To Inform KPTP Revisions: KSDE aims to continuously refine the KPTP and the scoring process and sought data about past use of the tool to inform these ongoing efforts.

To Inform and Improve Scorer Training: KSDE was interested in documenting variation in KPTP scores among scorers and over time to identify areas where scores were more and less consistent, provide feedback to scorers, and inform scorer training.

To Improve Reliability of Scoring: The extent to which different scorers will come to similar conclusions about the same candidate is an important feature of any high-quality performance assessment that involves multiple ratings. KSDE wanted additional information about the reliability of scoring to better substantiate results and to identify the extent to which there were inconsistencies among scorers that suggested a need for revision of training activities or retraining of particular scorers.

To Improve Score Reporting: KSDE sought to develop accessible and easily generated reports for its own use, for scorers, and for educator preparation programs.

To Inform Scorers and Educator Preparation Programs: KSDE was interested in providing information to scorers that allowed them to see how their scores varied over time and compared with other scorers in the state. Similarly, KSDE sought a way to provide information about the average KPTP performance of educator preparation programs relative to state averages.

To Improve Data Management: KSDE sought to create a database of KPTP scores that integrated data from multiple years and a data entry mechanism that would minimize the effort needed to enter data and generate reports based on the data.

The KPTP Data Analysis Tool

Collaborating with KSDE, REL Central developed a customized analysis tool that allows for integration of data from multiple years and generation of reports for KSDE stakeholders, scorers, and institutions. The tool integrated KPTP data that the state had collected for six academic years. Because it was easily available to KSDE staff, Microsoft Excel was used as the platform for the tool and Excel macros were used to create user-friendly features such as a data entry interface, separate tabs for generating different types of reports, and dropdown menus that allow for selection of specific educator preparation programs, scorers, and/or years of data. The tool includes explicit instructions for users about how to enable macros, how to generate program- and scorer-level reports, and how to enter data. Formulas used to generate reports were hidden and protected to avoid accidental changes.
The tool generates educator preparation program- and scorer-specific
reports. Reports align with the format of prior reports and best practices
for data visualization (Evergreen, 2016). A central feature of the tool is its
ability to quickly generate customized reports for each KPTP scorer. These
“scorer feedback reports” are described in more detail below. A guidance
document was created to accompany these reports with information about
how to interpret the tables and charts, possible implications of findings in
the report, cautions about interpretations that may be made, and an over-
view of the rubric content that is used to generate each task area score.

Scorer Feedback Reports

Scorer feedback reports show how mean (average) scores from an indi-
vidual scorer: (a) compare to other scorers, (b) vary over time, and (c) vary
across KPTP task areas. A drop-down menu appears in the upper left cor-
ner, allowing the user to generate a report for an individual scorer, based
on their identification number.
Two tables at the top of the report present mean total scores and scores
for each of four KPTP task areas for an individual scorer and for all scorers
in the state. Scores are presented for all years for which data are available
(summed) and each academic year. Sample sizes (Ns) are presented for the individual scorer and all scorers, by year. Standard deviations (SDs) associ-
ated with each mean are also presented to give an indication of the extent
to which individual scores deviate from the mean. The standard deviation
is a measure of how spread out a set of values is, with higher standard devia-
tions indicating greater variability, or spread, in scores.
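The statistics in these tables are simple to reproduce. As noted above, the actual tool is built in Microsoft Excel; purely as an illustration, the following pandas sketch computes the same yearly means, standard deviations, and Ns for one scorer alongside all scorers. The column names ('scorer_id', 'year', 'total_score') are assumptions for this example, not the tool's real schema.

# Illustrative pandas sketch of the scorer feedback statistics: yearly
# mean, SD, and N of total scores for one scorer, side by side with
# the same statistics for all scorers. Column names are assumed.

import pandas as pd

def scorer_feedback(df, scorer_id):
    """Join one scorer's yearly summary with the all-scorer summary."""
    def summarize(frame):
        return frame.groupby("year")["total_score"].agg(
            mean="mean", sd="std", n="count")
    own = summarize(df[df["scorer_id"] == scorer_id])
    return own.join(summarize(df), lsuffix="_scorer", rsuffix="_all")

# Toy data: three scorers across two academic years.
scores = pd.DataFrame({
    "scorer_id":   [101, 101, 102, 103, 101, 102],
    "year":        ["2014-15"] * 4 + ["2015-16"] * 2,
    "total_score": [28.0, 27.5, 22.0, 24.5, 29.0, 23.0],
})
print(scorer_feedback(scores, 101))

Large, persistent gaps between the "_scorer" and "_all" columns are exactly the calibration signals the feedback reports visualize.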
Next, the report presents five charts that summarize information in the
tables—one for the total score (range = 0 to 30) and one for each of the
four task area scores (range = 0 to 3). Red lines correspond to mean scores
for the individual scorer and blue lines correspond to mean scores for all
scorers. “Error bars” appear above and below each point and correspond
to one standard deviation below and one standard deviation above each
mean value to give a sense of the variation of individual scores. Charts from
a sample scorer feedback report appear in Figures 9.2 through 9.6, based
on actual data from an unidentified scorer.

Figure 9.2 indicates that, across six years, this scorer has total KPTP
scores that are about four points higher, on average, than those for all scor-
ers. Differences are apparent in each of the six years, and the extent of the difference varies by task area (Figures 9.3 to 9.6). Differences between this scorer's average ratings and those for all scorers appear to be largest for Task 2 and Task 3 scores (Figures 9.4 and 9.5) and smallest for Task 1 scores (Figure 9.3). Lastly, variation among scores (indicated by the length of the error bars) is largest for Task 4 scores (Figure 9.6).
Several possible implications are apparent. Consistently higher-than-av-
erage ratings for this scorer suggest a positive bias and need for calibration
with other scorers. This scorer may need additional training or consulta-
tion that focuses on better aligning portfolio elements with the KPTP ru-
bric criteria. Examination of the size of the differences suggests a focus on
improving the scorer's understanding of rubric criteria for Tasks 2 and 3 because differences were largest for those tasks (Figures 9.4 and 9.5). Also, the larger variation among scores in Task 4 for this scorer and for all scorers (indicated by longer error bars and larger standard deviations) suggests that
rubric criteria for this task may be more difficult to apply or simply that
candidates vary more in their performance on this task area (Figure 9.6).
Given concerns (discussed above) about Task 4, determining the cause of
this additional variation could help to inform revisions to the assessment or
future training.

Figure 9.2  Sample scorer feedback report comparing an individual scorer’s Total
scores to those of all scorers: 2010–2011 to 2015–2016.

Figure 9.3  Sample scorer feedback report comparing an individual scorer’s Task
1 scores to those of all scorers: 2010–2011 to 2015–2016.

Figure 9.4  Sample scorer feedback report comparing an individual scorer’s Task
2 scores to those of all scorers: 2010–2011 to 2015–2016.

Figure 9.5  Sample scorer feedback report comparing an individual scorer’s Task
3 scores to those of all scorers: 2010–2011 to 2015–2016.

Figure 9.6  Sample scorer feedback report comparing an individual scorer’s Task
4 scores to those of all scorers: 2010–2011 to 2015–2016.

Users of information from the KPTP analysis tool were provided with
some cautions about interpreting results. First, the number of portfo-
lios scored can affect the mean and standard deviation. If relatively few
portfolios were scored in a particular year, the mean and standard devia-
tions for that year may be affected by a few very low or very high scores.
Therefore, the number of portfolios scored by a scorer at a given point in
time or at a particular educator preparation program should be considered
when interpreting results. Second, differences in average scores may be a
result of the quality of the portfolios scored rather than a reflection of scor-
ing tendency or bias. If a scorer’s scores are consistently lower than overall
average scores, this may mean either that the scorer reviewed lower quality
portfolios or that the scorer tended to consistently give lower scores than
other scorers.

HOW DATA FROM THE KPTP ANALYSIS TOOL HAVE BEEN USED

In line with the goals outlined at the start of the partnership, data have
been used by KSDE to inform KPTP revisions, inform and improve scorer
training, improve reliability of scoring, improve score reporting, support
scorers, support institutions, and improve data entry and management. For
example, KSDE staff have used the reports to identify scorers who need to
improve calibration because of ratings which differ substantially from aver-
ages. In consultation with these scorers, scorer feedback reports were used
to guide discussion of calibration concerns. Confidential reports were
also provided to each educator preparation program that administers the
KPTP about their average KPTP scores over time and by task.
Scorer feedback reports were shared with all KPTP scorers in 2015. Some
scorers who had scored several hundred documents over multiple years
had received little to no feedback on their scoring performance. Scorers
were encouraged to reflect on the information in their report and consider
ways in which their scores may reflect bias or incorporation of criteria not
expressly listed on the rubric. The reports were intended to be the founda-
tion of an overall recalibration effort. Initial review of data by KSDE unfor-
tunately suggested that sharing results from the reports did not substantial-
ly affect scoring practice. KSDE staff have observed that when scorers settle
into a scoring habit, it can be very difficult for that habit to change even when scorers are confronted with evidence that their ratings run consistently low or high. KSDE plans to
continue using the scorer feedback reports but in conjunction with a more
rigorous, in-person, recalibration training.
The analysis tool also provided institutions with a statewide view of
overall KPTP score data disaggregated by task. Institutions then had the opportunity to compare those data with their own data over time, allowing them to identify scoring trends. For example, an educator prepara-
tion program with scores that were consistently lower than average in a
particular task area might ask: Are there aspects of our clinical coursework, or inconsistencies within it, that are creating misconceptions around assessment? Answering these types of questions may lead preparation program
faculty to have data-informed conversations around program revision and
improvement.

NEXT STEPS FOR THE KPTP

KSDE is currently exploring several revision areas for the KPTP. The cur-
rent KPTP model is close to ten years old. To add value to the feedback
that candidates ultimately receive, revisions are focused on Task 4. Since
the assessment was developed, KSDE has adopted new professional educa-
tion standards and recently adopted a new educator evaluation system, the
Kansas Educator Evaluation Protocol. Using common language in rubrics
for this protocol and the KPTP will introduce preservice candidates to the type
of criteria that will be used for evaluation of practicing teachers.
The KPTP also has the potential to provide feedback that informs targeted support and mentoring. Mentoring of new teachers is a central component
of their transition from an initial to a professional license. The candidate
and employing school district can use KPTP data to develop a professional
learning and support plan and as a basis for evaluating progress. Use of
KPTP data in this way can help educator preparation programs and candi-
dates to consider the KPTP results as something valuable beyond gradua-
tion and initial licensing.
Lastly, higher education institutions often struggle with getting data
back from candidates once they become practicing teachers. KSDE is in-
terested in exploring ways to collect information from practicing teachers
about their preparation experiences. Survey, interview, or focus group data
might be used to better understand how well teachers were prepared for
the profession, and provide them an opportunity to reflect on the KPTP
assessment process and the extent to which the areas in which they were
assessed adequately reflected expectations of practice.
As discussed, several recent state policy changes have implications for
implementation of the KPTP, including the state’s adoption of new profes-
sional education standards, implementation of new educator evaluation
system, and a partnership with a new national accreditation organization
for educator preparation programs. Responding to these changes, KSDE
formed an advisory committee in 2015 with the goal of developing ideas
related to revision of the KPTP and the assessment process. The commit-
tee has identified areas of focus and is exploring next steps.

CONSIDERING THE ROLE OF STATE EDUCATION AGENCIES, PARTNERSHIPS, AND DATA USE IN CANDIDATE PERFORMANCE ASSESSMENT

State education agencies are likely to continue to play a substantial role in decisions related to candidate performance assessments. State-directed
candidate performance assessment activities can support: (a) generation of
statewide and educator preparation program-specific data that may be used
in statewide accountability systems and as evidence for programs seeking
national accreditation, (b) development of statewide norms that provide
a basis for comparison across programs and inform needs for program im-
provement, and (c) efficiencies associated with use of common tools. Based
on this work, we suggest several areas for consideration by state education
agencies and institutions as they develop and implement approaches for
candidate performance assessment.

Using Candidate Performance Assessment Data for Continuous Improvement

While state education agencies cite several purposes for implementing candidate performance assessments, one that is not frequently men-
tioned is to provide data for program improvement. Educator prepara-
tion programs may wish to consider developing approaches for analyzing
candidate performance data like those used by KSDE to explore their
potential for program improvement. Use of data in this manner may also
prompt exploration of new ways to develop and support mentoring activi-
ties during the first years of a candidate’s professional life. Further, bet-
ter alignment of candidate performance assessments with those used for
practicing teachers has great potential to inform educator preparation
programs and K–12 school leaders about the strengths and areas of im-
provement of their candidates, teachers, and programs. This alignment
may also provide a basis for establishing stronger partnerships between
K–12 school and educator preparation programs that best meet the needs
of candidates and practicing teachers and support continuous improve-
ment of educator preparation and K–12 professional learning.

Exploring New Uses of Existing Data to Support Decision-Making

The partnership between KSDE and REL Central resulted in an analysis tool for use by the state, but also represents a model that other state
agencies and educator preparation programs may wish to explore to ad-
vance their own data use related to candidate performance assessment
or other activities. Data use has been identified as a priority to support
decision making in public education. For example, in 2015, Congress
created the Commission on Evidence-Based Policymaking to study how
the federal government can better support data use and every state has
made substantial investments in statewide longitudinal data systems over
the past decade (Data Quality Campaign, 2017). KSDE had used candi-
date performance assessment data collected via the KPTP primarily to
determine whether candidates were prepared to become Kansas educa-
tors. Many types of data maintained by states are used primarily for that
type of purpose—to demonstrate compliance with state or federal re-
quirements or to measure point-in-time performance. The KPTP analysis
tool demonstrates how existing data may be leveraged for other purposes.
State education agencies and educator preparation programs may wish to
explore data that they maintain and how they might be applied in similar
ways to inform practice.

Increasing Communication to Ensure Effective Data Use

Any effort to collect, analyze, and report data will be more successful if it
is clear to relevant stakeholders that the data are useful and serve an infor-
mation need. In this case, the partnership goals were aligned closely with
information priorities of KSDE. Substantial communication between REL
Central and KSDE was needed to ensure this alignment. Similarly, communication between KSDE, educator preparation programs, and KPTP scorers
has been essential to successful use of data from the KPTP analysis tool.
KSDE has provided information from the tool to preparation programs and
KPTP scorers as a way to foster dialogue about candidate performance,
the KPTP assessment, and the scoring process. KSDE has also provided
information to help with interpretation of the data and has sought feed-
back about the utility of the data provided. This sort of communication
has been important to ensure that the candidate performance assessment
activities and the data they generate are perceived as valuable by all relevant
stakeholders.

Using Partnerships and Analytic Tools to Support Data Use

The partnership resulted in a customized data tool using commonly-available software that KSDE can continue to use over time. While data
are increasingly prevalent in state education agencies, capacity to use
data varies substantially. Access to specialized software is often limited
and staff with the skills or time to explore new uses of data are often scarce. This project represents one of many types
of research-practice partnerships that researchers and practitioners are
actively exploring (see, for example, Desimone, Wolford, & Hill, 2016;
Penuel, Allen, Coburn, & Farrell, 2015). State education agencies and
educator preparation programs may wish to explore similar partnerships
with outside entities that can support data use, particularly with stand-
alone data tools that can be used independently once they have been
introduced.

CONCLUDING THOUGHTS FOR STATE EDUCATION AGENCIES EXPLORING STATEWIDE CANDIDATE PERFORMANCE ASSESSMENT

Kansas is a relatively small state with 26 educator preparation programs. Many states have substantially more programs and have limited staff to
manage statewide performance assessment. These capacity issues make na-
tional assessments, such as the PRAXIS Performance Assessment of Teach-
ers (PPAT) or the Pearson edTPA, particularly attractive because they offer a national pool of scorers and a mechanism for technical support that is
difficult to provide at the state level. For states that wish to explore develop-
ment and implementation of a state-specific assessment, we suggest starting
small and closely engaging faculty and administrators. KSDE serves primar-
ily in support and management roles to facilitate KPTP implementation.
Real investment by educator preparation programs is vital to the success of
any candidate performance assessment effort and must be built on recog-
nized needs of programs and shared vision and goals.
Like many states, Kansas strives to measure pedagogical ability as an in-
dication of teacher preparation quality and its impact on student success.
The utility of the KPTP and other candidate performance assessments is
limited, however, in that they can provide only a general idea of how can-
didates will perform as practicing teachers. Ongoing monitoring of the im-
plementation of candidate assessment protocols and their utility is critical
to ensure their relevance.
The lesson KSDE continues to learn is that ongoing support at the state level is needed as the KPTP is adopted by more institutions. Successful implementation of the assessment requires dedicated staff at both the state department and educator preparation programs. If the KPTP is just one more project on a long list of assigned duties, it can be very difficult to complete updates and revisions in a timely manner. Without sufficient commitment to implementation by the educator preparation program and support from the state, the quality and utility of data collected by the instrument can be compromised.

ACKNOWLEDGMENTS

Work described in this chapter was supported in part by the Institute of Ed-
ucation Sciences, U.S. Department of Education, through Contract Num-
ber ED-IES-12-C-0007. The opinions expressed are those of the authors and
do not represent views of the Institute of Education Sciences or the U.S.
Department of Education.

REFERENCES

Data Quality Campaign. (2017). From hammer to flashlight: A decade of data in educa-
tion. Washington, DC: Author.
Desimone, L. M., Wolford, T., & Hill, K. L. (2016). Research-practice: A practical
conceptual framework. AERA Open, 2(4), 1–14.
Evergreen, S. D. H. (2016). Effective data visualization: The right chart for the right data.
Los Angeles, CA: SAGE.
Farrell, C. C., Davidson, K. L., Repko-Erwin, M. E., Penuel, W. R., Herlihy, C., Potvin,
A. S., & Hill, H. C. (2017). A descriptive study of the IES Researcher–Practitioner
Partnerships in Education Research program: Interim report (Technical Report No.
2). Boulder, CO: National Center for Research in Policy and Practice.
Gill, B., Shoji, M., Coen, T., & Place, K. (2016). The content, predictive power, and
potential bias in five widely used teacher observation instruments (REL 2017–191).
Washington, DC: U.S. Department of Education, Institute of Education Sci-
ences, National Center for Education Evaluation and Regional Assistance,
Regional Educational Laboratory Mid-Atlantic. Retrieved from http://ies.
ed.gov/ncee/edlabs
Higher Education Act of 1965, Title II, 20 U.S.C. 1001 (1965).
Higher Education Opportunity Act, Title II, 20 U.S.C. 1001 (2008).
Hoogstra, L. (2011). Tiered teacher certification and performance-based assessment. Na-
perville, IL: REL Midwest at Learning Point Associates.
Kansas State Department of Education. (2011, September). Kansas Performance
Teaching Portfolio: Implementation Guidelines. Topeka, KS: Author.
Kansas State Department of Education. (2012, August). Kansas Educator Evaluation
Protocol. Topeka, KS: Author.
Kansas State Department of Education. (2013, January). Kansas Performance Teach-
ing Portfolio: Template. Topeka, KS: Author.
Kansas State Department of Education. (2016). Kansas Performance Teaching Portfo-
lio: Content Guidelines. Topeka, KS: Author.
Penuel, W. R., Allen, A., Coburn, C. E., & Farrell, C. (2015). Conceptualizing re-
search–practice partnerships as joint work at boundaries. Journal of Education
for Students Placed at Risk, 20(1–2), 182–197.
CHAPTER 10

USING THE
CONCERNS-BASED ADOPTION
MODEL TO SUPPORT
edTPA COORDINATORS
AND FACULTY DURING THE
IMPLEMENTATION PROCESS
Joyce E. Many
Georgia State University

Shaneeka Favors-Welch
Georgia State University

Karen Kurz
Berry College

Tamra Ogletree
University of West Georgia

Clarice Thomas
Georgia State University


At a southeastern university College of Education faculty meeting, a group of around 40 faculty listen intently as a colleague explains to them that she has been delegated by the dean to become the educative Teacher Performance Assessment (edTPA®) coordinator. She begins by saying, "The edTPA is a subject-specific, performance-based teacher candidate assessment that was developed by educators for other educators. It is standards-based, with an emphasis on authentically measuring the skills and knowledge all teachers need when they enter the profession. Assessments like edTPA have been adopted in many states to measure each candidate's readiness to teach. Our state will begin using cut-off scores on edTPA as a requirement for teacher licensure." A hand goes up and a faculty member asks, "What if I do not want to participate?" Another faculty member asks, "Who is going to teach us about the edTPA, and am I going to have to change my course content?" Another states emphatically that she thinks this is just another example of top-down decision making that has no benefits for future educators. The coordinator answers the questions to the best of her ability but has many concerns as well.
—Vignette Reflecting Authors' Experiences

The term edTPA evokes strong feelings in educational stakeholders across the United States. Education reformers whose goal is to professionalize teaching through standardized performance-based testing approaches have embraced edTPA (Darling-Hammond, 2010; Stanford Center for Assessment, Learning, and Equity, 2014; Wei & Pecheone, 2010). In contrast, advocates for "critical pedagogy" and a more "ideological" concept have greatly criticized edTPA (Sato, 2014, p. 433). The opening vignette reflects conversations occurring in institutions of higher education faced with the newness of edTPA, and one we found typical in the state of Georgia during our research. Both faculty and edTPA coordinators questioned how edTPA would be implemented. They were concerned about faculty "buy-in" and whether program changes would have to be made. They wanted to know what materials they would need and were worried about the ramifications if students did not do well on the assessment.
The intent of edTPA is to provide clear and consistent guidelines across disciplines and states for beginning teachers to demonstrate they have the skills necessary to ensure all students learn (Darling-Hammond, 2010). The state of Georgia implemented edTPA as a certification requirement as part of an effort to strengthen standards for licensing new teachers. Beginning September 2015, teacher candidates had to pass the edTPA with a cut score of 35 (on the 15-rubric handbooks) to be eligible for certification, with the cutoff score rising to 38 in 2017 and potentially continuing to rise in subsequent years.
In Georgia, and in other states across the country, as policies have been
adopted requiring edTPA or other performance assessments, institutions
of higher education have been faced with an intensive need for faculty

professional development in order to ensure teacher candidates have not


only the knowledge and expertise required by the assessment, but also the
technical and informational resources for success. The vignette written
at the beginning of the chapter, although fictional, was very close to real-
ity for many Georgia colleges and universities as faculty dealt with edTPA
implementation. With new educational reforms, such as those related to
implementation of edTPA initiatives, research indicates faculty experience
a range of concerns as they go through the process of engaging in innova-
tion (Hall, 2010; Overbaugh & Lu, 2008). When Georgia prepared to adopt
edTPA as a certification requirement, we noted critical concerns being ex-
pressed by teacher educators from public and private institutions of higher
learning across the state. We found the concerns-based adoption model
(CBAM) to be a vital lens through which we could (a) better understand
faculty anxiety and their process of engaging in edTPA-related initiatives,
and (b) provide support to aid their implementation efforts.
In this chapter, we begin by providing an overview of the CBAM and an
introduction to the statewide studies we have conducted exploring edTPA
coordinators and faculty concerns about and engagement in edTPA initia-
tives. Then, we will draw from our research to describe the Stages of Concern
teacher educators may experience when implementing edTPA initiatives
in a high-stakes environment, the nature of the initiatives they have imple-
mented in programs and courses, and the types of professional develop-
ment and supports that they have found useful in light of specific concerns.

UNDERSTANDING EDUCATIONAL
INNOVATION FROM A CBAM PERSPECTIVE

With any change, especially in the event of top-down mandated changes, concerns arise among stakeholders in affected organizations. CBAM was initially developed in the 1970s as a model to analyze implementation and change (Hall, Wallace, & Dossett, 1973). The model evolved out of the work of Frances Fuller (1969) and others with preservice teachers in response to the innovation-focused approach to educational change. The CBAM has been widely applied to study the process of implementing educational change in both K–12 settings and teacher education contexts for more than four decades (Anderson, 1997; Hall, 2010; Hall & Hord, 2015; Hall, Newlove, George, Rutherford, & Hord, 1991; Hall et al., 1973). The central assumption of the model is that the single most important factor in any change process is the people involved in the change, those who are actually doing the work; therefore, facilitating change means understanding the existing attitudes and perceptions of those involved in the process (Hall & Hord, 2015). Each person will view and engage in innovation and changes differently.

Figure 10.1  Dimensions of the CBAM. Source: Adapted from https://www.sedl.org/cbam/ (retrieved May 1, 2017).

As shown in Figure 10.1, CBAM has three diagnostic dimensions for assessing and guiding the process of new program implementation. The degree to which innovations are implemented in educational organizations (a) is affected by participants' attitudes and beliefs at particular stages (Stages of Concern), (b) is evidenced in their behaviors (Levels of Use), and (c) is manifested in different ways that can be described (Innovation Configuration).
The CBAM posits that educators' concerns about an innovation peak in a linear fashion across a series of stages. Although participants may have concerns related to a number of stages, their peak concern is reflective of their most intense reactions and beliefs at a given time. The seven stages begin with Awareness or Unconcerned, where an individual's attention is focused on other things rather than the innovation at hand, and then move to subsequent stages directly related to the innovation in question, including Informational, Personal, Management, Consequence, Collaboration, and Refocusing (see Table 10.1).

TABLE 10.1  Stages of Concern

Unconcerned/Awareness: The individual has little involvement or concern with the innovation. The person's attention is focused elsewhere.

Informational: There is interest in learning more about the innovation. The concern is not self-oriented or necessarily change facilitation oriented. The focus is on the need to know more about the innovation, its characteristics, requirements, and effects.

Personal: There is uncertainty about the demands of an innovation, personal ability to meet the demands, and one's own role in relation to the innovation. Doubts about one's adequacy, organizational support, degree of involvement in decision making, and rewards for doing the job are included. Potential conflicts with existing structures or personal commitment are concerns.

Management: The time, logistics, available resources, and energy involved in using the innovation are the focus. Central issues relate to efficiency, organizing, scheduling, and managing.

Consequence: The focus is on the innovation's impact on students. Considerations include the relevance of the innovation, the outcomes on students, and changes needed to improve student outcomes.

Collaboration: The emphasis is on the need to collaborate with others to improve use of the innovation.

Refocusing: Ideas about alternatives to the innovation are a focus. Thoughts and opinions are oriented toward increasing benefits and are based on substantive questions about the maximum effectiveness of the innovation. Thought may be given to alternative forms or possible replacement of the innovation.

Source: Based on Hall et al. (1991) and George, Hall, & Stiegelbauer (2006).

Two research instruments have been developed to understand the Stages of Concern of those involved in implementing innovations. The Stages of Concern questionnaire (SoC) provides a valid and reliable measure of faculty members' Stages of Concern about an innovation through a 35-item questionnaire (George, Hall, & Stiegelbauer, 2006), while the 35-item Measuring Change Facilitators' Stages of Concern (CFSoC) questionnaire (Hall et al., 1991) focuses on the concerns of facilitators responsible for the change process in organizations.
The second dimension of the CBAM, the Level of Use, focuses on the behaviors or actions of individuals who are involved in implementing an innovation. Because change is a process, not all individuals will be involved to the same degree at any given time. To understand the change process, the CBAM approach examines the specific level at which an individual is involved. The Level of Use dimension outlines a range of involvement levels, beginning with those associated with a non-user of an innovation (where individuals are not yet using the innovation but may be taking action to learn more), moving to levels where individuals are becoming familiar with the innovation or preparing to use it for the first time. At the user level, individuals progress from Mechanical use of the innovation, where attempts are made simply to master the tasks, to Routine use of the innovation without regard to improving or making changes, to Refinement, where individuals attempt to improve use of the innovation to achieve greater impact, to Integration, where individuals work together with others to achieve a collective impact on students, and finally to Renewal, where ongoing analysis leads to evaluation of the innovation's effectiveness and to potential modifications or new developments to increase effectiveness.
Finally, the third dimension of the CBAM is understanding the na-
ture of what is being implemented in the name of any given innovation.
The Innovation Configuration concept underscores the need for descrip-
tive details to create a map or picture of what is done in the name of an
innovation. The nature of change in educational contexts may differ from
classroom to classroom, school to school, and organization to organization.
By creating a verbal picture of what is done in the name of innovation, we
can better understand what the change process involves.

ANALYZING THE IMPLEMENTATION OF edTPA FROM A CBAM PERSPECTIVE

As we began the implementation of edTPA initiatives in response to our state's adoption of edTPA as a high-stakes assessment, we became increasingly aware of the concerns being expressed both by faculty and by the individuals responsible for implementing edTPA at specific institutions (edTPA coordinators). We found that the CBAM helped us not only to understand the concerns being expressed by those involved, but also to think about the change process and how faculty and coordinators might be supported during this time. We discussed the CBAM at meetings of a teacher education research consortium in our state, and ultimately a team of administrators, faculty, and doctoral students from five public and private institutions designed a series of studies. The first, a mixed-methods study of edTPA coordinators, analyzed the Stages of Concern of these change facilitators, whether their concerns were related to contextual factors (public/private status, size of institution, number of handbooks, number of roles they played at their institution), and what types of professional development helped or were needed in light of their concerns. The second statewide study surveyed methods course faculty and supervisors to understand their Stages of Concern and their level of integration of edTPA initiatives. In the following sections, we begin by outlining what we learned about coordinators' and faculty members' concerns in relation to edTPA implementation and the types of resources that seemed helpful at particular stages. Then, we describe a new instrument we developed, based on Levels of Use and Innovation Configuration concepts, to explore the nature and extent of integration of edTPA initiatives in programs and courses.

Stages of Concern Related to edTPA Implementation

We learned about edTPA coordinators' and faculty members' concerns through our two research studies. In our mixed-methods study, 34 edTPA coordinators in our state responded to the Change Facilitator Stages of Concern (CFSoC) survey (Many et al., 2016), and 22 of those surveyed agreed to participate in follow-up interviews regarding their concerns and the concerns of their faculty. In our faculty/supervisor study, 145 participants responded to our edTPA Level of Integration (LoI) survey and 78 responded to our Stages of Concern (SoC) survey (Bhatnagar, Kim, & Many, 2017; Bhatnagar et al., 2017). The descriptions found below are based on information from those studies.

The Unconcerned Stage: “I Have a Lot of Things to Do.”


At the Unconcerned Stage faculty members are introduced to an innova-
tion and it is not an area of intense concern (Hall, 2010). Faculty members
were more likely than edTPA coordinators to be at this stage during the
year prior to edTPA becoming consequential, with 33 faculty members out
of 78 (42%) peaking at the Unconcerned Stage while 7 edTPA coordinators
out of 34 (21%) peaked at the same stage on the CFSoC (this stage is called
Awareness in the change facilitator’s survey). While examining responses
at this stage, two reoccurring themes were conveyed; (a) the feeling of be-
ing overwhelmed with current duties, and (b) not knowing the impact that
edTPA would have in relation to their future job status. For example, one
participant expressed it this way, “Well, I don’t have much choice. I kind
have my fingers in everything. So, yeah, I have a lot of things to do, so I
try . . . I think the focus would to more intentionally give edTPA enough
attention.”
During the time that these studies took place, major changes had taken place in our state with respect to data collection systems in educator preparation, in policies governing program approval, and in regard to the new CAEP standards. Both edTPA coordinators and faculty were intensely involved in curricular reform and organizational changes related to these issues. Some respondents noted that concerns over areas such as these were taking precedence in their lives, overshadowing issues related to edTPA. One
program facilitator explained it like this, “I am the Livetext Coordinator, so
I set up all the courses and everything each semester. [I’m very busy] meet-
ing with faculty about their assessments other than edTPA and really trying

not to make edTPA the only thing that we do.” While it is possible that fac-
ulty can be associated with more than one stage simultaneously, these state-
ments illustrate how individuals in the Unconcerned Stage may prioritize
other job duties and program elements in lieu of edTPA. For most of the
individuals during the implementation year, however, attention had turned
to an intense need to acquire more information about edTPA and how to
incorporate necessary tasks into their daily job responsibilities.

Informational Stage of Concern: "I Spent a Lot of Time Looking at Material."
The Informational Stage describes the time when faculty members are
aware of the change and feel the need to take a personal interest in learn-
ing about the innovation, its features, outcomes, and requirements (Hall,
2010). On the SoC survey, 5 of the 78 faculty respondents (6%) peaked at the Informational Stage, and 1 of the 34 coordinators (3%) peaked at the same stage. Individuals at this Stage of Concern indicated a general concern around learning about the edTPA assessment; often this was expressed as a need to engage directly with the assessment.
As one interviewee explained, “I spent a lot of time looking at material
from SCALE and trying to inform and educate myself so I can then better
work with faculty and students as we pilot this based on the Dean’s request
we piloted.” Coordinators noted that faculty members at the informational
stage wanted to better understand the edTPA assessment and were con-
cerned about how it would relate to their current practice in their courses and programs. This meant the coordinators had to become experts in all of
the handbooks. For example, one edTPA coordinator said,

In order to assist our program coordinators, first, I had to make sure that I
understood the handbooks and so I had to read them and really delve into
understanding the rubrics so that when I worked with coordinators and fac-
ulty I could support them in that effort.

Many of the coordinators underscored the importance of local evaluation training sessions, for themselves as well as for faculty, as a resource to provide an understanding of edTPA as a performance assessment. Thinking about her faculty's need for information, one coordinator explained,

Until last year, a lot of them were still at the information stage and we found
interactions I’ve had at local evaluation trainings, the support . . . they were
really getting into the edTPA handbooks. Now you can only read handbooks
so many times, there is just so much in it there so densely packed with infor-
mation, if you really have to take it at small chunks at a time and digest what
it is saying and it’s the same case with students, so the faculty can try to under-
stand [what] the handbook incorporates and what all is in there.

For educators in our state, the main resources they identified that would have been helpful, but which were not available in the implementation year, were exemplars from specific content handbooks. One coordinator explained,

Now the biggest concern was that everybody wanted to see a well-done port-
folio. So and this is something that the students would all ask also. “So if I’m
going to prepare an edTPA portfolio show me an example of a really well
done,” and they didn’t have those examples. That’s another concern that we
were limited in the resources that we had which we can share with our faculty
and students.

Through professional development resources and experiences, faculty and coordinators became engaged in and knowledgeable about the specifics of edTPA and the new initiatives that might potentially be needed as a result of adopting this performance assessment as a high-stakes policy. From a CBAM perspective, educators then experience a Stage of Concern related to feelings of personal anxiety. In the case of our educators, the Personal Stage of Concern was related to feelings of professional doubt, tension, and resistance.

The Personal Stage of Concern: “We Had Some Resistance.”  


On the quantitative surveys, only 12% of faculty and 9% of the edTPA coordinators were peaking at the Personal Stage of Concern at the time of the survey. However, many of the edTPA coordinators interviewed (17 of the 22) reflected in detail on their own personal struggles, and they discussed at length the personal concerns of their faculty. Their comments indicated that the Personal Stage of Concern in relation to edTPA implementation can be characterized by anxiety related to personal efforts to cope with resistance against compliance, desire for faculty commitment, doubts about personal capabilities, and difficulties in coping with changing roles they had encountered or were facing as a result of implementing edTPA.
Coping with feelings of resistance from faculty as a result of the state policy decision to require edTPA as a high-stakes assessment for certification was the biggest concern of edTPA coordinators. Some coordinators simultaneously struggled with their own philosophical conflicts with the use of edTPA as a certification requirement. These coordinators faced a particularly difficult task given that they had been chosen as the individuals responsible for facilitating implementation. One coordinator stated,

I don’t have any lack of certainty about my ability to facilitate. I just have a
personal, moral argument with myself because I’m the best on my campus to

do this job, but I don’t believe in edTPA so I’m forced to advocate or teach
something that I don’t really think is helpful.

Other coordinators personally recognized value in edTPA as a performance


assessment, but they also found that coping with resistance from faculty was
particularly stressful. One remarked,

I really do believe that edTPA is a good instrument, and it can tell us, as a pro-
gram, and tell our candidates, a lot about their teaching, and hopefully make
our program a better program. There’s just a lot of pushback. The collabora-
tion, I just didn’t feel that the faculty were really—yeah, okay, we’re doing this
because the state says we have to do it.

Another coordinator noted the developmental nature of faculty concerns, recognizing that just when she thought faculty complaints within her institution had been worked through, anxieties would crop up again. She explained,

I think what is most interesting about this is, is that how the concerns peak in
particular places. And after we piloted and we had gotten all of these personal
responses, where people were just like so angry, so upset and talking, “How
can I do this when I have to do publications and presentations? How can I
do this when I have so much else I want to do in my program?” We would be
working through all of that and had settled it, and then in a couple of months,
oh my god, here it comes again. I finally started realizing, you get to those
stages but only when you knew enough to be worried about it.

Coordinators noted that faculty concerns and resistance often emerged at the time that they began to work with faculty to create course learning experiences to prepare students for edTPA. While some faculty were very upfront about their resistance to making any changes to courses, others were less overtly resistant but then demonstrated limited follow-through in supporting candidates within programs to ensure success. As one coordinator commented,

It’s more about the individual faculty members making a commitment to


the actual implementation. I think that we provided the structure, and we’ve
taken a pretty in-depth inquiry approach to learning more about it. It’s more
about getting the faculty to buy-in to the commitment level. And it goes back
to, I think, their many other commitments.

Other coordinators described specific situations where they had encoun-


tered a lack of faculty support when changes were suggested in order to
provide preservice teachers with familiarity with edTPA language or format.
One coordinator noted,

We put together some resources and were trying to help faculty look at every
aspect of any course where we were teaching—where our candidates were
teaching, weaving that into the lesson plan. Totally redesigned the lesson
plan. That was an area where faculty didn’t like that. There was a lot of push-
back on that.

Another also described the resistance she encountered as faculty in differ-


ent programs tried to cope with the need to make changes to the curricu-
lum saying,

Then we kept getting this flare-up of how am I going to reorganize my pro-


gram? I can’t put this in and I already got too much in my program. I can’t
have time to do this and do everything else I’m doing. All those personal
concerns, it was because it was coming in waves as people got more and more
informed about what was happening.

Another type of concern was also evident for edTPA coordinators as they
struggled to facilitate the change process. As this is a high-stakes assessment
and students had to meet state-specified cut scores for certification, some
coordinators questioned their capability to prepare faculty and students.
Reflecting on her concerns, one coordinator said,

Having to do it while doing other things makes it a little bit hard, and it makes
you full of doubt whether or not you’re really getting people prepared. I also
don’t like to do things like prepare students—candidates for tests and then
have them not do well. That’s a lot of stress to put onto one person as edTPA
coordinator. It’s a lot of stress to put onto a department.

Coordinators discussed the stress of managing edTPA implementation


effectively in the midst of wearing multiple hats and the importance of in-
stitutional support.  Coping with changing roles, one coordinator shared,

I tend to be one of those people who take it upon myself if anything goes
wrong and take it personally, but the support from my department chair has
meant a great deal in saying that “we’re all in this together, we will figure it
out together, you’re doing the best you can but we all share the responsibil-
ity to know what they can do that is best for our students, but don’t take it
personally.”

When administrative support was not evident, edTPA coordinators ex-


pressed distress at having to cope with their responsibilities. Some coordi-
nators felt a lack of administrative support implied a lack of understanding
or even empathy for the amount of time required to provide leadership
for edTPA. Another coordinator explained, “I volunteered to be edTPA
coordinator, but when I went to talk about what could be taken off the

plate, because I’m also assistant dean, the assessment coordinator, and I
also teach, nothing was taken off the plate.”
Several coordinators expressed the value of having a network of other
edTPA coordinators to whom to turn for both information and emotional
support. In Georgia, the agency overseeing teacher certification and edu-
cator preparation (Georgia Professional Standards Commission) created
zones across the state with each having an edTPA Regional Coordinator to
aid implementation. The agency also sponsored statewide technical meetings
and monthly webinars. Coordinators felt “very supported by the network of
people that were doing edTPA in Georgia.” One stressed the supportive role
that group of colleagues played in her development saying, “Other edTPA
coordinators, as we got to know each other through these meetings—so that,
probably, for me, was one of the most important things. It gave me, after go-
ing to some of those meetings—and then the webinars that we would have, it
gave me—it fueled me back up to keep leading the charge.”
As individuals found support and resources to aid in addressing person-
al concerns, attention was turned next to orchestrating changes that were
needed to deal with the implementation process. This brought about a fo-
cus on a new set of concerns related to the Management Stage.

The Management Stage of Concern: "Managing Our Resources and Deciding Who Will Be Doing What."
According to the SoC results, 9 of the 78 methods faculty or supervisors (11.4%) peaked at the Management Stage of Concern during the year prior to edTPA becoming consequential. In contrast, of the 34 coordinators responding to the CFSoC survey, 14 (41%) peaked at the management stage. Unsurprisingly, those responsible for facilitating the change process in the year leading up to edTPA becoming consequential were more likely to peak at the stage focused on management issues. Interviews with the edTPA coordinators indicated that managing time and resources, providing technical support, providing candidate support, and providing faculty training and support were the issues that concerned them most.
Managing time, resources, and faculty support.  Many of the coordinators
struggled with how to assist faculty in finding the time and resources they
needed to be able to focus on edTPA effectively. Coordinators expressed
concerns about how faculty would find the time it would take to learn
about the assessment and make curriculum changes. One stressed, "I think
that my main concern has been, and continues to be, the amount of time
that faculty have to devote to learning all they can about the edTPA.”
Another coordinator explained her concern as focusing on “managing
the change process [to keep from] overloading staff or faculty absolutely

and . . . ironically an overabundance of available resources and trying to


determine which of those have value.”
Other edTPA coordinators also spoke about their need to provide fac-
ulty support and training so that edTPA could be integrated within the
curriculum. One coordinator saw this aspect as being the most challenging
saying,

So I guess that’s the biggest implementation challenge was to get everybody


acquainted with these expectations and have them become aware of what
they should be doing in their classrooms that will align with preparing the
candidates for their edTPA portfolio eventually to send to Pearson.

Because coordinators provided leadership for faculty in considering how


concepts assessed in edTPA were embedded throughout the program, in a
number of cases, they struggled with the issue of whether faculty were being
asked to “teach to the test.” In one case a coordinator struggled with that
notion reflecting,

And so making sure that we’re doing that with integrity and fidelity and not
compromising our standards and not changing our curriculum to teach to
the test. But at the same time making sure that our candidates will be well
prepared for that performance based assessment.

Other coordinators saw management challenges in coping with devel-


oping a framework for what needed to be done and who in an institution
would take on particular tasks. Implementation of edTPA initiatives could
involve working with not only methods faculty, college supervisors, and stu-
dents but also mentor teachers and other stakeholders, and clerical and
technical staff. One coordinator explained how her faculty were beginning
to recognize the enormity of the management questions saying,

So, I have seen it go through this process—when they begin to get involved
in it; that management of how are we going to do it? How are we going to
make sure we got everybody with the ability to do the videotaping? Make sure
we have got all of the release forms in every language for every school we are
in that are translated. Who is going to go out there to the schools when the
principal doesn’t want to be able to let our student teachers do this? All the
management issues become huge. How are we going to do the upload pro-
cess? Who is going to help us? Who is going to be there when we’ve got 30
who need to upload at the same time and we can’t get it all done.

Coordinators also struggled with managing support and guidance work-


ing across programs, departments, and colleges. Coordinators described
the infrastructures that developed in order to support faculty and staff

across content area disciplines. One coordinator at a large research institu-


tion shared the scope of her management issues saying,

I think my concerns probably are understood within the context of how large
we are and how many initial prep programs we have. Because we are spread
across two colleges, 7 departments, we have over 35 initial prep programs. We
have one version at undergrad and one version at one at MAT and certifica-
tion only—all in the same certification field. In many cases, we have different
faculty with the different levels who operate as self-contained programs.

One supportive approach mentioned by edTPA coordinators was the establishment of an edTPA liaison group, which brought together faculty and staff from across programs, departments, and colleges within an institution to discuss best practices and concerns related to edTPA implementation.
An associate dean who collaborated on edTPA coordination with an assess-
ment coordinator and two program coordinators shared,

But what I do get the most out of, which is as much for me as my faculty, is
that we do edTPA liaison groups. So, I have liaisons for every department, ev-
ery program. We have a brown bag for 2 hours once a month; we begin . . . by
sharing best practice. . . . To tell you the truth, that is where I learned the most
about the on the ground applications of edTPA and what needs to be done.
I learn by listening to their concerns. I learned what types of resources they
need, and then I try to find that and figure out how to structure it.

Providing technical and conceptual support to candidates.  In addition


to managing resources and tensions over faculty time and needs for
professional development, assisting students with the technical aspects
of their edTPA portfolios and the uploading of their work became an
additional management concern. One coordinator from a large institution
stressed the management of technical issues as ongoing troubleshooting,
explaining,

You really have to streamline what kind of resources will come in handy, so
the ITC [Instructional Technology Center] in the College of Education wrote
a manual about recommendations on developing your edTPA portfolio and
how to do the video. And we also got a grant to purchase 75 mini iPads, which
the students would issue out of the ITC. And we found that the most effective
method of doing the video-taping of themselves while teaching because it can
automatically compress the video and it can save a lot of time for us basically,
and the file will be upload ready. So all of these things they took a lot of time
initially to be management issues. . . . So it’s a process, it took a lot of time to
understand the different bits of pieces of the edTPA system itself and then the
portfolio uploading process.

At a number of institutions, the edTPA coordinators were also directly


responsible for providing candidate support. Support ranged from offering orientation workshops and training seminars to making certain candidates were on track and ensuring candidates were immersed in edTPA concepts and reflective experiences throughout their programs. Coordinators spoke
about specific types of support they learned were necessary after working
through the edTPA experience with candidates and in particular they un-
derscored the importance of not creating additional stress for the preser-
vice teachers. As one explained, “So now you know we have kind of zeroed
in on things like academic language . . . and Making Good Choices, but there
is so much and you don’t want to give it all to the students because it is go-
ing to overwhelm them.”
Another coordinator also noted the need for structuring the candidates’
experiences saying,

Yeah for every time we go to the handbook we very methodically try to have
activities around parts of the handbook so it’s not just read this, do it. It’s what
does this mean to you? How do you understand this? And as they are working
on their edTPA, I am not allowed to provide them specifically with feedback,
but they are able to provide feedback for each other. We do a lot of sessions
where they share what they have written, they share and respond, and I will sit
back and ask questions about that but I don’t say well you better do it this way
or that way. It has taken a lot of time and energy to try to put that structure
in place.

The Consequences Stage of Concern: "They're Concerned for How This Is Going to Impact Our Students."
In the fourth stage, the Consequences Stage of Concern, educators center their attention on the impact an innovation is going to have on improving practices. Only 1 of the 78 faculty members responding to the SoC survey (1%) and only 1 edTPA coordinator out of 34 (3%) peaked at the consequences stage. Discussions with respondents indicated most concerns focused on anticipation of what might happen in future semesters, when edTPA became consequential in the state.
Preparing preservice teachers for entry into the classroom is no small task. Coordinators expressed their own concerns, and remarked on the concerns of their faculty, that the policy related to passing edTPA at a specific cut score for certification had added an additional roadblock for prospective teachers to navigate. They also questioned the impact that edTPA
would have on their students’ success in completing the program and the
financial pressure that candidates were facing. One respondent stated, “I
do have concerns about how this is going to impact students, you know
[if] they don’t pass it, there is a financial obligation; there is a certification
penalty.” Similarly, another noted, “the demand of the fee coming out of

their pockets at the end, added on with so many other fees when they are
paying graduation fees, commencement, and moving out of the dorm. And
[they] pay the GACE certification exam and now a new ethics exam, and
now we’re adding the edTPA fee onto that.” Another went on to express
concerns related to what would happen if the candidates don’t pass the as-
sessment, saying, “And the whole managerial aspect of that, managing the
money and the time and what happens if they don’t pass the edTPA. That
hasn’t been fully explained to us.”
Faculty members are always looking to make the best choices for their candidates. These decisions, whether at the unit, program, or course level, are all geared toward students' successful completion of the program and becoming teachers. Faculty also expressed concerns about the impact edTPA results would have on programs and course curricula.
In part, addressing these concerns had to be put on hold during the year
prior to the assessment being consequential because faculty were unsure
what the results would be on the nationally scored exam once candidates
knew they had to pass to be certified. One coordinator stated,

But it’s hard to plan for something that you don’t know what the results are
going to be and so we are kind of playing the waiting game right now at [insti-
tution] until we get the scores back in the fall to know how to proceed in the
future, So basically what I talked to our leadership team and our faculty about
is we have made decision for next year but those decisions may change the
following year because this is an evolutionary process and once we get more
information about the implications and consequences of this performance
assessment then we need to reevaluate, readjust, and make some decision in
regards to what is best for our program and what is best for our students. We
are kind of in waiting mode right now.

In addition to specific concerns for programs and students, overall concerns about the unknown were also expressed as respondents reflected on the consequence stage. Not knowing what the results would be, or what changes would be called for in light of the results, led some coordinators to feel anxiety about the future of teacher education. The following participant expressed,

I am concerned for all programs in the state in terms of teacher education in


general. I want all programs in our state to be successful; I don’t want Teacher
Ed to be shut down; if this is going to come in the public eye. In the event
that maybe our scores are not as high as we would hope they would be, I don’t
want the public to get a view of Teacher Ed as not being strong in terms of
preparing our candidates.

Statements such as the one above transcend the impact edTPA has on faculty and students. Instead, they place emphasis on the overall picture of teacher education and its future in preparing preservice teachers for the profession.

The Collaboration Stage: "Learn From and Support One Another to Increase Effectiveness"
At the CBAM's next stage, the Collaboration Stage of Concern, individuals focus on coordinating and cooperating with others to improve the implementation of innovations (Hall, 2010). Of the participants who responded to the SoC, 15 of the 78 methods faculty and supervisors (19%) and 5 of the 34 coordinators (15%) peaked at the Collaboration Stage.
Interviews indicated respondents felt the best way to continue to improve
their efforts at implementing edTPA effectively was through networking
with and learning from others. Collaborative efforts were seen as important
for moving to the next level of effectiveness. One coordinator explained
this as a need for “resources and conversations in order to tackle issues
beyond initial understandings.” Another explained how collaboration and
coordination was integral to what she needed to do saying,

Really, my job is helping other programs who need that information, [to get
that information] out to them, know what has been done but not doing it
myself. Although that idea of how do we think about support when faculty
are feeling this is an overload, that is something that I worry about, but it isn’t
something that is as much of a concern of mine as is the collaboration.

In particular, coordinators reflected on how to coordinate efforts within their institutions and to learn from faculty at other institutions, in the state and nationally, that had implemented edTPA in their teacher education programs. For example, coordinators underscored the value of "organizing
liaison groups to learn from and support one another to increase effec-
tiveness.” This networking was seen as a way to “create support systems to
share/gain knowledge, resources, implementation strategies.” In discuss-
ing her institution’s creation of an ad-hoc committee and her continued
concerns for collaboration, one noted,

[We] needed to figure out how to organize so we could share knowledge


across people, who had expertise and did have knowledge, and then figure
out what we needed to do at the unit level and at the program level. What
is going to be the responsibility of who, where, when?. . . . So, being able to
figure out how we can coordinate across that and facilitate communication
internally has been a concern.

In addition to collaborating within institutions, coordinators also underscored the need for "networking with colleagues externally to question and learn what is happening at other institutions." The participants raved about the support system that had developed in the state through monthly webinars, in-person meetings, and professional development. These opportunities played an important role in addressing coordinators' and faculty members' desire for opportunities to learn from and reflect with others on how to make edTPA implementation a beneficial reform. As one coordinator explained,

My main concern is collaborating with all the coordinators and other people
in the state in order to increase the effectiveness of the assessment for our
students, that its educative, so that its successful, so that our faculty don’t
feel burdened and that they feel like it’s an educative experience for their
candidates.

The Refocusing Stage of Concern: "We Would Prefer to Refocus and Replace With the More Powerful Alternative."
At the Refocusing Stage of Concern, educators explore ideas about alternatives to an innovation, including changes to policy or initiatives (Hall, 2010). Seven of the 78 faculty members responding to the SoC survey (9%) peaked at the refocusing stage, while 2 of the 34 edTPA coordinators (6%) peaked at the same stage. At this stage, respondents expressed concerns about what they perceived as a lack of input into decisions about the best options for assessing candidates' readiness to teach. One coordinator addressed her opinion emphatically, saying, "There's been absolutely no opportunity, no space in conversation to
ing, “There’s been absolutely no opportunity, no space in conversation to
say; we would prefer to refocus and replace with the more powerful alter-
native . . . that just has simply not been allowed in the conversation, which
disturbs me." This coordinator went on to stress that she felt edTPA is supported by teacher education accreditation agencies as an accountability tool used to validate teacher educators' performance, but that this support undermines teacher educators' ability to make decisions regarding effective assessments. She explained,

I actually think that the state should not require it, that they should allow us
to choose whether we want edTPA or a teacher work sample, or to create our
own performance assessment . . . our prior teaching work sample was giving us
that same data and we were already working on that. So . . . it concerns me that
we aren’t given choices, because ultimately what this is, is we need an external
“unbiased” source, person to manage this test, because you guys—the faculty
in the university—essentially can’t be trusted to come up with, you know, un-
biased and reliable process on your own.

Educators grappling with concerns at the refocusing stage desire to be involved in policy changes that will affect their job duties and their students. At this stage, the participants in our study expressed a desire to advocate for the use of alternative assessments driven more by faculty and less by state mandates.
In summary, from a CBAM perspective, understanding educators' attitudes and beliefs in relation to implementation of an innovation can be helpful in clarifying the nature of supports and resources needed by faculty. In our studies, surveying and interviewing coordinators, methods faculty, and supervisors helped us to understand the Stages of Concern faced by teacher educators confronted with a high-stakes policy change requiring cut scores on edTPA as a certification requirement. In addition, however, as shown in Figure 10.1 at the beginning of this chapter, the degree to which faculty are involved (Level of Use) and the nature of the activities they implement (Innovation Configuration) are also directly related to the effectiveness of any new innovation. We now turn to how educators might seek to understand what faculty are doing in relation to edTPA implementation by looking at an instrument we developed in our research to describe faculty members' level of integration of edTPA initiatives.

Understanding Level of Integration of edTPA Innovations


They're working together as faculty and they're doing the things that it takes. They're making accommodations to be sure their students are supported. They're reading together. They're talking through the courses in the program and how in those courses they can embed signature assignments . . . and that kind of thing, and these conversations were really not happening a year ago, so I've seen a shift in the faculty's interest and their investment in this.
—edTPA Coordinator Reflecting on Faculty Engagement During a Pilot Year

When states adopt edTPA as a high-stakes policy, faculty are faced with how to ensure their candidates are prepared for the assessment and are able to score high enough to be certified. In Georgia, for most of the faculty and edTPA coordinators in our study, this resulted in concentrated scrutiny of teacher preparation programs in light of the demands of edTPA, and consideration of possible curricular reforms, processes, and structures that might need to be implemented. As indicated in the quote above from an edTPA coordinator, faculty gave careful attention to what they were doing in courses and programs and how they might make accommodations to ensure they were meeting students' needs.
As we noted earlier, from the perspective of the CBAM model of implementing an innovation, change is a process that is impacted by the nature of the changes that are integrated (Innovation Configuration) and the degree to which changes are implemented both at the level of individual faculty and at the institution as a whole (Level of Use; Hall, Dirksen, & George, 2006).
In thinking about how faculty at universities engage in innovations to pre-
pare their candidates to complete an edTPA portfolio, a variety of initiatives
might be undertaken at both the course and at the program or unit level.
Our research drew on both constructs to develop an edTPA Level of Inte-
gration (edTPA LoI) survey.
Creating and validating the edTPA LoI survey.  In order to create items related to the extent to which faculty had been involved in the integration of edTPA, a listing of potential components of what such integration might entail was identified by examining questionnaires sent to edTPA coordinators in Georgia by the state agency responsible for the policy change. In the year prior to our study, the agency had provided professional development workshops, webinars, and area consultants to support institutions in responding to the policy change requiring edTPA adoption. As part of this process, a statewide edTPA policy committee and area coordinators had surveyed institutions to understand the nature of their involvement in edTPA activities and what might be done to support their progress. Components addressed in these questionnaires were used as the basis for the creation of items for the edTPA LoI, with a Likert Scale rating used to determine the extent to which respondents could identify particular elements as being addressed either in their program overall or within the particular courses that they taught. The points on the Likert Scale corresponded with six Levels of Use identified by Hall et al. (2006): Nonuse, Orientation, Preparation, Mechanical Use, Refinement, and Integration. The survey items can be seen in Table 10.2.
To establish validity and reliability of the survey, a total of 453 teacher ed-
ucation faculty in 35 institutions across the state of Georgia were e-mailed
the online edTPA LoI survey (Bhatnagar et al., 2017). Respondents includ-
ed a total of 145 faculty, representing perspectives from across a variety
of the edTPA content handbooks: elementary, middle grades, secondary
(English, Math, Science, and Social Studies), special education, and P–12
(art, music, foreign language, and physical education). Some of the faculty
were involved in initiatives addressing more than one edTPA handbook.
We conducted an exploratory factor analysis to examine the clustering
of items and whether survey items measured what they were expected to
measure. A reliability analysis was also conducted to confirm the consis-
tency of items within each factor that emerged. Finally, we conducted pair-
wise comparisons to contrast how edTPA faculty describe their integration
of edTPA initiatives within their program and within their personal practice
in courses (see Bhatnagar et al., 2017 for an in-depth description of the
method of analysis and results).
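For readers who wish to carry out a similar item analysis on their own survey data, the sketch below illustrates, in Python, how the exploratory factor analysis and reliability steps described above might be performed. It is a minimal illustration rather than the analysis reported in this chapter: the file name and column layout are hypothetical, the third-party factor_analyzer package is one of several suitable tools, and Cronbach's alpha is computed directly from its standard formula.

# Illustrative sketch (not the authors' code): exploratory factor analysis
# and Cronbach's alpha for Likert-scale survey responses. Assumes a CSV in
# which rows are respondents and columns are survey items, plus the
# third-party factor_analyzer package (pip install factor-analyzer).
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical data file and column layout.
responses = pd.read_csv("edtpa_loi_responses.csv").dropna()

# Extract five factors with an oblique rotation, since factors measuring
# related constructs (e.g., program-level vs. personal integration) can
# be expected to correlate.
fa = FactorAnalyzer(n_factors=5, rotation="oblimin")
fa.fit(responses)

# Inspect the loadings against the conventional .30 threshold.
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))

# Internal consistency of one hypothetical factor (here, its first four items).
print(f"Cronbach's alpha: {cronbach_alpha(responses.iloc[:, :4]):.2f}")

Model fit statistics such as the RMSEA, CFI, and TLI reported later in this section come from additional model-fitting routines that this basic loading extraction does not produce.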
TABLE 10.2  edTPA Levels of Integration Survey

Program Level Items
1. Has your Program conducted a pilot of the edTPA portfolio in practica/student teaching?
2. Has your Program analyzed scores from Local Evaluation of portfolios to identify program needs?
3. Has your Program integrated edTPA related content in course lectures, discussions, and activities prior to student teaching?
4. Has your Program integrated technical knowledge and skills needed for edTPA portfolio construction in course lectures, discussions, and activities prior to student teaching?
5. Has your Program integrated assignments focusing on edTPA related content prior to student teaching?
6. Has your Program integrated assignments utilizing technical knowledge and skills needed for edTPA portfolio construction prior to student teaching?
7. Has your Program offered faculty and supervisor professional development to understand edTPA content?
8. Has your Program offered faculty and supervisor professional development to understand the technical knowledge and skills needed to submit an edTPA portfolio?
9. Has your Program analyzed scores from National Scoring of portfolios to identify program needs?
10. Have faculty in your Program used data from national scores of portfolios to develop individualized plans for teachers in the induction phase of teaching (first 3 years)?

Personal Level Items
1. Have you analyzed scores from Local Evaluation of portfolios to identify what you need to address in your course(s)?
2. Have you integrated edTPA related content in your course lectures/seminars, discussions, and/or activities?
3. Have you integrated edTPA related content in your course assignments?
4. Have you integrated technical knowledge and skills needed for edTPA portfolio construction in course lectures/seminars, discussions, and/or activities?
5. Have you integrated technical knowledge and skills needed for edTPA portfolio construction in your course assignments?
6. Have you participated in professional development to understand edTPA content?
7. Have you participated in professional development to understand the technical knowledge and skills needed to submit an edTPA portfolio?
8. Have you analyzed scores from National Scoring of portfolios to identify what you need to address in your course(s)?
9. Have you prepared candidates to analyze their national edTPA score results to develop an individual induction plan?

Note: Items were rated on a 6-point Likert scale using the following descriptors:
1. Nonuse: Not at this time, or I don’t know
2. Orientation: Acquiring information about this; have not started preparations
3. Preparation: Preparing to integrate this; have not started implementation
4. Mechanical Use: Currently implementing; focusing primarily on complying with requirements
5. Refinement: Have implemented and are making adjustments based on results of implementation
6. Integration: Have implemented and are collaborating with others and studying professional resources to make refinements
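To make this validation workflow concrete, the following minimal sketch shows what the factor-extraction step might look like. The chapter does not name the software the authors used, so the choice of Python's factor_analyzer package, the data file, and the column layout (one 1–6 Likert rating per survey item) are all assumptions for illustration.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    # Hypothetical file: one row per respondent, one 1-6 Likert rating
    # per survey item (10 program items, 9 personal items).
    responses = pd.read_csv("edtpa_loi_responses.csv")

    # Extract five factors with an oblique rotation, since related
    # survey constructs are expected to correlate.
    fa = FactorAnalyzer(n_factors=5, rotation="oblimin")
    fa.fit(responses)

    # Items are assigned to the factor on which they load above .30.
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    print(loadings.round(2))

Model fit indices such as the RMSEA, CFI, and TLI reported below would typically come from a structural equation modeling package rather than from this extraction step.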
Exploratory factor analysis indicated a five-factor solution with good model fit (N = 145; RMSEA = 0.08; CFI = 0.99; TLI = 0.98), and factor loadings for survey items were well within the accepted range (>.30; Costello & Osborne, 2005). The five factors included: (a) analyzing local/national edTPA scores,
(b)  integration of edTPA in program design and delivery, (c) professional
development around edTPA, (d) integrating edTPA in personal practice,
and (e) using edTPA data to inform practice. Internal consistency reliabil-
ity of each factor was examined using Cronbach’s alpha and was found to
be acceptable across all five factors (.69–.92). Overall, faculty indicated a
higher level of integration of edTPA initiatives at their program level as op-
posed to the level of their specific courses.
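As a companion sketch, the fragment below computes Cronbach's alpha for the two scales and contrasts each respondent's mean program-level and personal-level ratings with a paired t-test, one plausible form of the pairwise comparisons; the precise procedures are documented in Bhatnagar et al. (2017), and the file and column names here are hypothetical.

    import pandas as pd
    from scipy import stats

    responses = pd.read_csv("edtpa_loi_responses.csv")  # hypothetical file

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    program_items = [f"prog_{i}" for i in range(1, 11)]   # hypothetical names
    personal_items = [f"pers_{i}" for i in range(1, 10)]

    print(f"alpha, program scale:  {cronbach_alpha(responses[program_items]):.2f}")
    print(f"alpha, personal scale: {cronbach_alpha(responses[personal_items]):.2f}")

    # Do respondents rate program-level integration higher than integration
    # in their own courses? Compare each respondent's two scale means.
    t, p = stats.ttest_rel(responses[program_items].mean(axis=1),
                           responses[personal_items].mean(axis=1))
    print(f"program vs. personal: t = {t:.2f}, p = {p:.3f}")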
We were not surprised that more reform was taking place at the program
level than at the individual course level. What was more revealing
were the distinct patterns across the five factors which gave a glimpse into
the nature of the activities garnering the most attention. Notably, our results
were based on respondents who had implemented changes to programs/
courses during a pilot year in anticipation of the portfolio assessment be-
ing subsequently used in the state as a high-stakes assessment. The patterns
we describe below regarding the factors associated with level of integration
should be situated in this context.
Analyzing local/national edTPA scores.  In this factor, faculty indicated
they had implemented edTPA in their student teaching/practicum experi-
ences and were using local evaluation scores to identify needs at both the
program level and at their course level. National scoring was also being
used to identify program needs (although not needs at the course level).
At this point in our state, programs were required to pilot edTPA, and most
programs were locally analyzing their own portfolios. Institutions received
vouchers to allow a subset of portfolios to be sent for national scoring. Given that individual programs at an institution may have had only a few portfolios sent for national scoring, changes at the course level were informed more by the local evaluation data.
Integration of edTPA in program design and delivery.  A second set of
items gave an indication of the ways in which faculty had innovated across
their program. Faculty responses indicated that preparation for edTPA at
the program level included providing both content related to edTPA and
technical knowledge and skills prior to student teaching. Respondents’ rat-
ings for these items emphasized what had been addressed in the programs
in which they taught, even when respondents were not directly involved in
the changes.
Integration of edTPA in personal practice.  In this category, five items re-
lated to the degree to which faculty had integrated content or technical
knowledge and skills in their own personal courses by addressing it in lec-
tures or in-class activities and/or through incorporation of specific assignments. Related to these areas of integration into courses was whether or not individual faculty members had participated in professional development to understand edTPA content.
Professional development around edTPA.  The survey also included three
items that grouped around the construct of professional development. Items in this factor addressed whether faculty and supervisors had been offered professional development related to edTPA content and the technical skills needed to construct a portfolio, and whether respondents personally had participated in professional development to understand those technical skills.
Using edTPA data to inform practice.  Items within this category related
to the use of data from national scoring. The focus was either on using results to inform the respondent’s own courses, or on using data at the program level or in collaboration with students to aid candidates in creating induction plans for their first year of teaching. These three items were rated the lowest by respondents, indicating most faculty were acquiring information about this process or were preparing to implement these innovations but had not yet addressed these items in their work.

Using the edTPA LoI to Understand and Describe the Nature and
Extent of edTPA Related Innovations
We envision that our survey will be beneficial to institutions in understanding
the extent and nature of reforms made across programs and courses within
their institution. During an implementation year where institutions were
piloting edTPA, we found the most activity was related to implementing the
assessment in student teaching, offering professional development for fac-
ulty on the content of edTPA, and analyzing data related to local evaluation
to inform program initiatives. Being personally engaged in professional
development on content was related to faculty ratings on the inclusion of
content and technical knowledge and skills in specific courses.
We believe these patterns may change substantially over time, as faculty
engage in edTPA initiatives during a context of using the edTPA as a high-
stakes assessment. For example, with national scores available for all stu-
dents in a program, the use of such data may be more directly related to
faculty decision making in courses. We also believe factors which showed
little or no use during an implementation year (using edTPA data to inform
practice related to induction) could show growth as faculty begin to envi-
sion how to use the scores to help students identify their strengths and ar-
eas of need. Finally, within an institution, using the edTPA LoI with faculty
and supervisors may be informative as to the nature of professional devel-
opment/support needed within individual programs. This might be par-
ticularly useful in the event of variation in overall scores across programs or
when faculty change has occurred. Used in such ways, the edTPA LoI survey
could be a helpful tool for institutions as they seek to understand edTPA
reform initiatives over time, as well as for analyzing attention to edTPA re-
lated innovations across programs within their institution and across faculty
within programs.
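As one hypothetical illustration of such monitoring, an institution could average Levels of Integration ratings by program and factor and flag cells that remain below Mechanical Use on the six-point scale; the file and column names below are assumptions for illustration, not part of the published instrument.

    import pandas as pd

    # Hypothetical long-format scores: one row per respondent x factor,
    # with columns "program", "factor", and "rating" (1-6 scale).
    loi = pd.read_csv("edtpa_loi_scored.csv")

    # Mean Level of Integration for each program on each factor.
    support_map = loi.pivot_table(index="program", columns="factor",
                                  values="rating", aggfunc="mean")
    print(support_map.round(1))

    # Programs averaging below Mechanical Use (4) on any factor are
    # candidates for targeted professional development or other supports.
    print(support_map[(support_map < 4).any(axis=1)])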

Using CBAM to Understand and to Support the edTPA Implementation Process

In summary, we believe the CBAM can be an effective tool to understand
and support faculty engaged in considering, planning for, and/or imple-
menting edTPA initiatives in teacher education programs, particularly when
policy changes create tension-filled contexts for change. Change is seen as
a process of implementation, not an event where something is or is not adopted (Hall et al., 1973; Hall, 2010). Hall (2010) uses the metaphor of an implementation bridge linking current practices in educational settings to new practices. The farther educators travel along the three dimensions, the closer they get to the far side of the bridge, where innovations are realized to their fullest potential. Drawing on this metaphor in Figure 10.2,
we present how considering implementation of edTPA as a change process
might be beneficial in understanding how to support faculty with varying
concerns as those individuals engage in relevant edTPA related initiatives.
Once faculty begin to develop knowledge of edTPA and to understand what
students need to know and be able to do to demonstrate their abilities, they
will inevitably begin to experience the concerns described by those in our
research. The edTPA Level of Integration instrument (edTPA LoI) can be
utilized to understand the nature of changes occurring at both the pro-
gram level and within individual courses. By recognizing where faculty are
in relation to these constructs, administrators can more effectively identify
the nature of supports and resources needed within their institution or by
reaching out to external opportunities.
When edTPA is adopted as a high-stakes assessment, we recommend
institutions thoughtfully consider who will fulfill the role of coordinating
edTPA activities and fully appreciate the complexity of this leadership task.
The edTPA coordinators in our study were called upon to juggle and attend
to multiple and competing demands, responsibilities, and roles. At the be-
ginning stages (see stages associated with Learning about edTPA and Self in
Figure 10.2), coordinators grappled with concerns in relation to develop-
ing the knowledge and ability to serve as the go-to person across all of the
handbooks, and with tensions related to expectations of the institution and the need for administrative support. Faculty also needed opportunities to learn
about the tasks, either through piloting the assessment or through profes-
sional development.

Figure 10.2  Understanding and supporting the edTPA implementation process in light of educators’ Stages of Concern.

Often after initial engagement, both coordinators and faculty grappled with philosophical concerns related to the high-stakes na-
ture of the assessment and what they perceived as a loss of voice in decision
making regarding their program. As teacher educators moved from initial
learning about edTPA to the Personal Stage of Concern and to more in-
tense involvement in edTPA initiatives, our inquiries revealed the following
institutional supports to be particularly valuable:

• Distributed Leadership. Recognizing and capitalizing on the wisdom
and best practices developing across programs and departments is
important. Institutions should establish a model of distributed lead-
ership across individuals, groups, and offices (Sloan, 2013) rather
than expecting the edTPA coordinator to assume sole responsibility
for addressing edTPA for an institution.
• Identify and Compensate Support Personnel. Adding additional person-
nel support or identifying support staff for specific tasks may be
important as new structures or requirements are put in place. Fund-
ing faculty to attend workshops and conferences is particularly im-
portant at the informational stage while institutional knowledge is
being developed. Faculty stipends for curriculum mapping, creating
templates and resources for programs, and creation of new semi-
nars or courses may be needed as new resources must be developed.
• Support Faculty Autonomy. Within institutions, making an effort to retain
faculty autonomy in decision making processes and policy regard-
ing edTPA within their programs is important. Whether and how
concepts related to edTPA, such as academic language, assessment,
differentiation of instruction, planning, etc., are emphasized in
courses and seminars can be determined at the program level. Fac-
ulty involvement should be evident as institutions decide whether
policies/procedures should be at the unit level or the program level
regarding such things as retakes, counting edTPA as part of student
teaching grades, requiring programs to do local evaluation of port-
folios, or upload sessions. Retaining respect for faculty autonomy
and inclusion of faculty in institutional decision making processes is
particularly important in addressing anxiety.

As a result of support from institutional leadership and colleagues, edT-
PA coordinators indicated that their concerns about personal capacity were
somewhat alleviated, and they developed confidence in their position. In-
stitutional leadership was also instrumental in helping edTPA coordinators
cope with faculty resistance to change, deal with negative reactions towards
the policy push for edTPA, and promote a sense of collegiality to develop
the best strategies for their institution. Finally, ensuring faculty maintain a
role in determining institutional and program level policies can aid faculty
in dealing with the tensions as well.
As illustrated in Figure 10.2, as faculty and edTPA coordinators become
more engaged in edTPA implementation, concerns become more focused
on the task of integrating edTPA into courses and programs. At this point,
institutions may find it beneficial to develop new infrastructures to support
information sharing and support collaboration. As data become available,
faculty become more focused on specific areas of need and desire increased
information about how other colleagues are integrating initiatives into their
programs. For educators at the Management, Consequence, Collaboration,
and Refocusing Stages of Concern, the following supports can aid them in
addressing their concerns and finding new ways to integrate edTPA initia-
tives in their programs:

• edTPA Liaison meetings, data retreats, and stakeholder retreats. Monthly
meetings of faculty, supervisors, and staff associated with efforts to
implement edTPA are crucial. Sharing best practices and identify-
ing areas of need evolve naturally from these conversations. Meet-
ings can also include guest speakers related to specific areas of need
and can be a context for identifying the need for unit level supports
to address issues in common across all programs. Faculty and P–12
educators may meet in data retreats to consider what can be learned
from yearly results and the implications for program and course re-
design. Stakeholders may also meet annually to consider policy and
procedures implications of edTPA results and how to link educative
use of edTPA within the program to planning for professional devel-
opment during the induction years.
• Statewide webinars and summits for edTPA coordinators and interested
faculty. edTPA coordinators need ongoing support and communica-
tion to grapple with the task of managing edTPA implementation.
At the state level, monthly webinars, focusing on areas of need iden-
tified by edTPA coordinators, can provide an important scaffold for
these facilitators’ ongoing development and can provide support
for other faculty who might attend only when the sessions are of
particular interest to their own needs. Drive-in conferences and
summits are also effective jump-starts for motivating and informing
coordinators and faculty, but may not in isolation be adequate to
meet the challenges faced by educators confronted with the need to
implement edTPA in a high stakes context.
• edTPA Policy Committees. As performance assessments such as
edTPA become consequential for certification and licensure, the
importance of reflecting on data to understand consequences and
the need to stay open to refocusing becomes apparent. One advan-
tage of using assessments with strong reliability and validity can be
that data can be informative at the program level, the institution
level, and at the state level. At the state level, policy committees
should engage all stakeholders (including faculty who may or may
not be advocates of edTPA as a high stakes assessment) in consid-
ering what can be learned, what can be improved, what might be
unintended negative outcomes, and what else might be considered
as alternatives.

In our research we found aligning the nature of supports and resources
to the needs and pressing concerns of faculty is vital. When educators
are experiencing a “peak” or heightened level of concern associated with
a particular stage (e.g., Personal), experiencing supports aligned with a
different stage (professional development on handbooks or webinars on
the upload process) will not suffice. In addition, while movement along
the Stages of Concern is developmental and is aligned with increased en-
gagement in edTPA initiatives and particular types of support, educators should bear in mind two cautionary notes. First, administrators should not
assume that once an institution or program has dealt with particular con-
cerns, those concerns will be resolved and will no longer be an issue. Al-
though over time concerns “peak” in a linear process from left to right de-
velopmentally, educators implementing edTPA may, and most often do,
express concerns related to stages from across the continuum. Second,
within an institution or a state, there will always be educators peaking at
varying Stages of Concern. New faculty and supervisors are added on a se-
mester by semester basis and existing faculty are often reassigned to roles
they have not previously played. This means institutions have an ongoing
need (a) to provide initial overviews to edTPA, (b) to continually listen
to and support faculty voicing their personal philosophical struggles with
the use of the assessment and/or policies in place, and (c) to provide op-
portunities for in-depth critique and analysis of data and the use of edTPA
in programs and at the unit level.
In summary, the challenges of making changes in teacher education in
response to implementation of edTPA as a high stakes assessment are com-
plex. The CBAM aids us in viewing these changes as a process which is
affected by teacher educators’ concerns, the degree to which they are en-
gaged, and the ways in which they attempt to integrate initiatives into their
programs. The better we understand their concerns and efforts to address
edTPA implementation, the more effectively we can support the degree to
which edTPA initiatives may have a positive impact on our students.

REFERENCES

Anderson, S. E. (1997). Understanding teacher change: Revisiting the con-
cerns based adoption model. Curriculum Inquiry, 27(3), 331–367.
doi:10.1111/0362-6784.00057
Bhatnagar, R., Kim, J., & Many, J. E. (2017). An instrument to study state-wide im-
plementation of edTPA: Validating the Levels of edTPA Integration survey.
Journal of Research in Education, 27(1), 24–33. Retrieved from https://www.eeraorganization.org/jre-spring-2017
Bhatnagar, R., Kim, J., Many, J. E., Ogletree, T., Favors, S., Thomas, C., Williams, M.
J., Tanguay, C., Kurz, K., Wilson, J., & Ariail, M. (2017, April). Understanding
faculty concerns and the change process resulting from adoption of a high-stakes per-
formance assessment. Paper presented at the American Educational Research
Association, San Antonio, TX.
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis:
Four recommendations for getting the most from your analysis. Practical As-
sessment, Research & Evaluation, 10(7), 1–9.
Darling-Hammond, L. (2010). Evaluating teacher effectiveness: How teacher performance
assessments can measure and improve teaching. Washington, DC: Center for
American Progress.
Fuller, F. F. (1969). Concerns of teachers: A developmental conceptualization. Amer-
ican Educational Research Journal, 6(2), 207–226.
George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation
in schools: The Stages of Concern Questionnaire. Austin, TX: Southwest Edu-
cational Development Laboratory.
Hall, G. E. (2010). Technology’s Achilles heel: Achieving high-quality implementa-
tion. Journal of Research on Technology in Education, 42(3), 231–253. doi:10.1080/
15391523.2010.10782550
Hall, G. E., Dirksen, D. J., & George, A. A. (2006). Measuring implementation
in schools: Levels of use. Austin, TX: Southwest Educational Development
Laboratory.
Hall, G. E., & Hord, S. M. (2015). Implementing change: Patterns, principles, and pot-
holes (4th ed.). Upper Saddle River, NJ: Pearson.
Hall, G. E., Newlove, B. W., George, A. A., Rutherford, W. L., & Hord, S. M. (1991).
Measuring change facilitator stages of concern: A manual for use of the CFSoC Ques-
tionnaire. Greeley, CO: Center for Research on Teaching and Learning.
Hall, G. E., Wallace, R. C., & Dossett, W. A. (1973). A developmental conceptualization
of the adoption process within educational institutions (Report No. 3006). Austin,
TX: The University of Texas at Austin, Research and Development Center for
Teacher Education.
Many, J. E., Bhatnagar, R., Ariail, M., Tanguay, C., Jones Williams, M., Thomas, C.,
Kim, J., Cannon, S., Favors, S., Ogletree, T., Wilson, J., Kurz, K., Howrey, S., &
An, S. (2016, April). State-wide implementation of edTPA in preparation for high-
stakes testing: A mixed-methods study of the concerns of edTPA coordinators. Paper
presented at the Annual Meeting of the American Educational Research As-
sociation, Washington, DC.
Overbaugh, R., & Lu, R. (2008). The impact of a NCLB-EETT funded professional
development program on teacher self-efficacy and resultant implementation.
Journal of Research on Technology in Education, 41(1), 43–61. doi:10.1080/1539
1523.2008.10782522
Sato, M. (2014). What is the underlying conception of teaching of the edTPA? Jour-
nal of Teacher Education, 65(5), 421–434. doi:10.1177/0022487114542518
Sloan, T. (2013). Distributed leadership and organizational change: Implementa-
tion of a teaching performance measure. The New Educator, 9(1), 29–53. doi:
10.1080/1547688X.2013.751313
Stanford Center for Assessment, Learning, and Equity. (2014). edTPA. Amherst,
MA: Pearson Education, Inc. Retrieved from http://www.edtpa.com/Home.
aspx
Wei, R. C., & Pecheone, R. L. (2010). Assessment for learning in preservice teacher
education: Performance-based assessments. In M. Kennedy (Ed.), Teacher as-
sessment and the quest for teacher quality: A handbook (pp. 69–132). San Fran-
cisco, CA: Jossey Bass.
ABOUT THE EDITORS

Joyce Many is Associate Dean of Undergraduate Studies and Educator
Preparation in the College of Education and Human Development at Geor-
gia State University. A former chair of the AERA SIG: Accreditation, Assess-
ment, and Program Evaluation Research in Educator Preparation, she is a
regular participant in state and national task forces and convenings focus-
ing on teacher education reform, policy, and research. She has published
more than 75 journal articles and has edited and contributed to multiple
books, including Clinical Teacher Education: Reflections from an Urban Profes-
sional Development School Network and the Handbook of Instructional Practices
for Literacy Teacher Educators. She is founder of Georgia’s Teacher Education
Research Consortium and has served on AACTE’s Research and Dissemina-
tion Committee.

Ruchi Bhatnagar is a Clinical Assistant Professor at Georgia State University.
She received her PhD in teacher education from the University of Michi-
gan, Ann Arbor. She started working at Georgia State University in Spring
2010 as a postdoctoral research fellow. She serves as the assessment coordi-
nator for the unit of educator preparation. Her research interests include:
teacher education policy and its impact on teaching and teacher education,
assessments in teacher education, program evaluation, teacher retention,
and teacher quality.

ABOUT THE CONTRIBUTORS

Dr. Susan Swars Auslander is an associate professor of mathematics education
in the Department of Early Childhood and Elementary Education at Georgia
State University. She teaches mathematics methods and content courses for
prospective and practicing elementary teachers, including courses that pre-
pare elementary mathematics specialists. Her research interest is elementary
teacher mathematical development, with a focus on how different education-
al experiences and contexts influence elementary teachers’ mathematical
learning and change. Specific outcomes explored include classroom instruc-
tional practices, specialized content knowledge, and teacher affect.

Dr. Megan Birch works to understand and expand anti-oppressive and
emancipatory pedagogical practices. Her interests include teacher educa-
tion, curriculum, and secondary English-language arts education. In par-
ticular, she is interested in how the social visions and narratives teachers
use to think about identities and difference impact students, schools, and
society. Committed to working democratically to improve teacher educa-
tion, Dr. Birch’s recent work has also included participation in collaborative
statewide initiatives to transform teacher education. Dr. Birch is a faculty
member and former director of educator preparation at Plymouth State
University, where she currently co-coordinates the teacher certification pro-
grams for English-language arts. She is a teacher consultant for the Nation-
al Writing Project in New Hampshire, as well as an active member of the
New Hampshire IHE Network and New Hampshire’s Council for Teacher
Education. Prior to coming to Plymouth State, she taught in public schools
in Prince George’s County and Montgomery County, Maryland.

Ruchi Bhatnagar is a clinical assistant professor at Georgia State University.
She received her PhD in teacher education from the University of Michi-
gan, Ann Arbor. She started working at Georgia State University in Spring
2010 as a postdoctoral research fellow. She serves as the assessment coordi-
nator for the unit of educator preparation. Her research interests include:
teacher education policy and its impact on teaching and teacher education,
assessments in teacher education, program evaluation, teacher retention,
and teacher quality.

Cynthia Bolton, PhD, is currently the interim dean of the College of Educa-
tion and professor of educational psychology at Armstrong State University
in Savannah, GA. Dr. Bolton holds a PhD in Educational Psychology and BS
in Public Health from the University of North Carolina Chapel Hill, and a
Master of Education in Special Education from University of North Carolina
Charlotte. She has taught PK–12 grade mathematics, science, health educa-
tion, and special education. Dr. Bolton’s areas of expertise include develop-
ing and maintaining quality assurance and assessment systems, accreditation,
innovative pedagogical instructional strategies, and developing partnerships.
Dr. Bolton works as a consultant with local school systems regarding assess-
ment of induction programs, teacher leadership and partnership opportuni-
ties, and at the national level serves as a Councilor and Site Visitor for the
Council for the Accreditation of Educator Preparation (CAEP).

Ellen E. Dobson currently serves as the director of assessment, data manage-
ment, and digital learning for the College of Education at East Carolina Uni-
versity (ECU). Prior to her current position, she worked as the Taskstream
account manager and edTPA coordinator for the Educator Preparation pro-
gram unit at ECU. She is an edTPA National Academy consultant. Dr. Dob-
son received her Doctor of Education degree from East Carolina University
in 2013. She completed her Master of Education degree at Penn State Uni-
versity and her Bachelor of Arts degree at the University of Maryland.

Emma Espel, PhD is a senior research associate at RMC Research. Dr. Es-
pel’s areas of expertise include innovative and theory-driven research and
evaluation methodologies, advanced quantitative design and analysis, in-
strument development and psychometrics, qualitative approaches includ-
ing protocol development and data collection, and technical assistance for
local and state education agencies and policy evaluation. Dr. Espel co-leads
work under the Rural Education Research Alliance in seven states on the
topic of educator mobility through the Regional Education Laboratory (REL)
Central at Marzano Research and previously served as a researcher with
REL Central working with educator preparation. She is a certified What
Works Clearinghouse reviewer. Dr. Espel obtained her PhD in Developmen-
tal Psychology from the University of Denver.
Kathleen Fabrikant, EdD, is an associate professor of education within the
Department of Secondary, Adult, & Physical Education at Armstrong State
University in Savannah, GA. She has served as professor in residence at
Armstrong’s Professional Development School and is now the edTPA coor-
dinator for the university’s education preparation programs. She has also
served as a middle-grades special education teacher and as the executive
director of a literacy nonprofit. Her research interests include edTPA sup-
ports and their effect on teacher candidates and their K–12 students.

Shaneeka Favors-Welch received a BS in Mathematics from Southern Poly-
technic University and a MAT in Mathematics from Clayton State Univer-
sity. She is a doctoral student at Georgia State University in Teaching and
Learning with a concentration in Teaching and Teacher Education. In 2008
she founded a non-profit, GROWE (Giving Resources and Opportunities
to Women through Education), dedicated to advancing women in STEM
fields. She spent her first year as a doctoral student in China teaching high
school mathematics and physics courses. Her particular areas of interests
are cultural asset pedagogies, decolonization theories, and critical quanti-
tative methods. She hopes to engage educators and other stakeholders in
culturally responsive practices.

Harriet Fayne, PhD, is the interim provost and senior vice president for
academic affairs at Lehman College, City University of New York. Dr.
Fayne served as the dean of the School of Education from 2011–2016. In
that capacity, she led initiatives to reform teacher preparation, showcase
practitioner inquiry, build community partnerships, and foster institu-
tional growth. Prior to accepting an appointment at Lehman, she spent
sixteen years as Education Department chair before becoming the inau-
gural dean of the School of Professional Studies at Otterbein University
in Westerville, Ohio. She has authored or co-authored articles and book
chapters that focus on teacher education redesign and regularly presents
her work at international and national conferences. Dr. Fayne holds a BA
with a major in American Studies from Barnard College, a MAT in Social
Studies Education from Harvard University, and a PhD in educational
psychology from Columbia University.

Xiaoyang Gong is a PhD candidate in the Center for Science and Technol-
ogy in Education in the College of Education at the University of Mary-
land (UMD), College Park. She worked in UMD edTPA office as a graduate
assistant in academic year 2016–2017. Before enrolling in the doctorate
program, she was a high school chemistry teacher. Her research interests
include students’ affective perceptions in technology-integrated environ-
ments, teachers’ pedagogical practices of integrating technology in class-
room instruction, and equity issues in science education.
Angie Hodge, PhD, is an assistant professor of mathematics education in
the Department of Mathematics and Statistics at Northern Arizona Univer-
sity. She is special projects coordinator for the Academy of Inquiry Based
Learning. She has taught a variety of courses using IBL over the last 10
years. She travels all over the world to run ultramarathons, including the
Leadville 100-mile run. 

Dr. Karen Kurz has been a faculty member at Berry College Charter School
of Education and Human Sciences since Fall 1995 and served as assistant
dean for graduate studies in education from 2001 to 2017. She teaches
courses in curriculum, research methods and assessment at undergradu-
ate and graduate education levels. She is the edTPA coordinator, and has
served as assessment coordinator and CAEP Standard 4 committee chair.
Professional service includes, serving on the board of examiners for the
Georgia Professional Standards Commission, adjudicator for National Asso-
ciation for Sport and Physical Education, mentor for the National Board of
Professional Teaching Standards portfolio development, Georgia Associa-
tion of Colleges of Teacher Education President, and Georgia Association
of Independent Colleges of Teacher Education President.

Leslie Lieman is the educational technology coordinator for Lehman Col-
lege School of Education, City University of New York. Ms. Lieman provides
leadership in a wide-range of school activities. Her professional develop-
ment workshops for faculty and educational technology initiatives for stu-
dents have led to improved use and integration of technology in teaching,
learning, and assessment. Ms. Lieman created and supervises an edTPA
technology and support lab to assist students’ successful completion of New
York State teacher certification exams. Her instructional design expertise
and project development of online tutorials, classroom teaching videos,
ePortfolios, websites, accessible resources and more, impacts student en-
gagement and college coursework. She regularly presents her work at local,
state, and national conferences. Ms. Lieman holds a BA from Binghamton
University SUNY, a Master’s in Social Work from Hunter College CUNY and
a Master’s in Educational Technology from Michigan State University.

Joyce Many, PhD, is associate dean of undergraduate studies and educator
preparation at Georgia State University. A former chair of the AERA SIG:
Accreditation, Assessment, and Program Evaluation Research in Educator
Preparation, she is a regular participant in state and national task forces
and convenings focusing on teacher education reform, policy, and re-
search. She has published more than 75 journal articles and has edited and
contributed to multiple books, including Clinical Teacher Education: Reflec-
tions from an Urban Professional Development School Network and the Handbook
of Instructional Practices for Literacy Teacher Educators. She is founder of Geor-
gia’s Teacher Education Research Consortium and has served on AACTE’s
Research and Dissemination Committee.

Stephen Meyer, PhD, is a senior research associate at RMC Research. Cur-
rently, Dr. Meyer serves as a researcher for Regional Education Labora-
tory (REL) Central at Marzano Research leading research and technical
assistance activities in the 7-state region. As part of REL Central, he led a
cross-state Research Alliance focused on effective teacher preparation and
consisting of policy makers, state database administrators, researchers, and
educators who represent state agencies, teacher preparation programs, uni-
versities, and research organizations. Currently, Dr. Meyer leads evaluations
of TeachDETROIT, an urban teacher residency program, and of Houghton
Mifflin Harcourt coaching models for teacher professional development.
Dr. Meyer also serves as a What Works Clearinghouse reviewer. He received
his PhD in Education-Measurement, Evaluation, and Statistical Analysis
from the University of Chicago.

Dr. Kathryn McCurdy is an assistant clinical professor at the University of
New Hampshire where she also serves as the director of field placement for
its Manchester campus. Prior to her doctoral work, Kathryn taught middle-
school mathematics in Boston where she also worked for years supporting
and mentoring preservice and beginning teachers. Her research interests are
novice teacher learning, mentoring and induction support structures and
activities, teacher performance assessments, and continuing professional
learning of veteran teachers. Kathryn earned her master’s degree from the
University of Michigan and her PhD from the University of New Hampshire.

Nikkolas Nelson studied at the University of Kansas receiving bachelor’s
degrees in English and secondary English education and a master’s degree
in curriculum and instruction. He taught secondary English briefly before
beginning employment with the Kansas State Department of Education in
2009 as an educator licensing consultant taking over project management
duties for the Kansas Performance Teaching Portfolio (KPTP) in 2011. He
lives in Lawrence, Kansas with his wife Amy and their chinchillas.

Dr. Tamra W. Ogletree is an associate professor and program coordinator
of the language and literacy program at the University of West Georgia.
She is also the director of the Cherokee Rose Writing Project which is an
affiliate of the National Writing Project. She holds a PhD in Language and
Literacy and a Certificate in Interdisciplinary Qualitative Research from
the University of Georgia. Tamra holds an L-7 certificate in Educational
Leadership and an MEd in Early Childhood and Middle Grades Education.
She has experience in public and private education from the Pre-K level
to High School. Her research focus includes literacy education, teacher
preparation, social justice and equity, and qualitative methodology with an
emphasis in ethical research practices.

Gaoyin Qian, PhD, currently serves as the associate dean of the school of
education at Lehman College. Dr. Qian has been a professor of literacy ed-
ucation at Lehman College. His research has focused on secondary-school
students’ epistemological thinking and their learning from science text.
He has published on a variety of topics such as word recognition, early lit-
eracy among young children from immigrant families, computer mediated
discussion and literacy assessments, and the role of epistemological beliefs
in teaching and learning. Dr. Qian was the former director and principal
investigator of the Lehman College Robert Noyce Scholarship program,
which was funded by the National Science Foundation. The program was
designed to prepare qualified mathematics and science teachers for high
need middle schools. Dr. Qian received his master’s degree in English Edu-
cation from Missouri State University and completed his PhD in Reading
Education at University of Georgia.

Dr. Holley Morris Roberts is a teacher educator and leader. She received her
BS and MEd in Early Childhood Education from Georgia College and her
EdD in Curriculum Studies from Georgia Southern University. She taught
in P–12 public and private schools for 11 years and has served as a teacher
educator for the last 12 years. At Georgia College she has also served as the
director of assessment and accreditation and is currently the interim chair
of the Department of Teacher Education and edTPA coordinator. Dr. Rob-
erts’ research includes critical literacy, cultural awareness, higher education
assessment, and accreditation. Dr. Roberts has received the Teaching Excel-
lence Award for Georgia College, is a member of Phi Kappa Phi, and serves
as a reviewer for the Council for the Accreditation of Educator Preparation
(CAEP). Holley Roberts lives in Milledgeville, Georgia.

Dr. Tom Schram is an associate professor and director of the Division of Ed-
ucator Preparation at the University of New Hampshire, a position to which
he brings over thirty-five years in education ranging from K–6 teaching
to higher education teaching, research, and administration. His scholarly
agenda encompasses educator preparation, teacher education program
development and policies, school-university partnerships, and field-based
qualitative research design and methods. He is a founding member and
elected vice president of the New Hampshire Institutions of Higher Edu-
cation (IHE) Network, a nonprofit consortium comprised of all the pub-
lic and private higher education educator preparation programs in New
Hampshire. His current areas of focus include implementation of a teacher
residency program in New Hampshire’s rural high-need communities as
well as statewide and national collaborative initiatives centered around
school-university partnerships.

John Seelke is the director of edTPA local evaluation at the University of
Maryland. He also is a PhD candidate completing his dissertation on the
perceptions of practicing teachers who completed edTPA. A ten-year veter-
an secondary mathematics teacher, John won the 2007 Presidential Award
for Excellence in Math and Science Teaching while teaching in a Washing-
ton, DC public school.

Dr. Marvin E. Smith is an associate professor in the Elementary and Early
Childhood Education Department in the Bagwell College of Education at
Kennesaw State University in Kennesaw, GA. He received his PhD in Cur-
riculum and Instruction (Mathematics Education) from the University of
Wisconsin–Madison. His scholarly interests include learning and teaching
elementary mathematics with understanding, classroom assessment, and
teacher education and development.

Dr. Stephanie Z. Smith is an associate professor in the Early Childhood and
Elementary Education Department of the College of Education and Hu-
man Development at Georgia State University in Atlanta, GA. She received
her PhD in Curriculum and Instruction (Mathematics Education) from the
University of Wisconsin–Madison. Her scholarly interests include learning
and teaching elementary mathematics with understanding, conceptions of
mathematics, and teacher education and development.

Sharilyn C. Steadman earned a bachelor’s degree in English and a Master
of Education degree from Belmont University and taught high school Eng-
lish for nine years. She received her Doctorate in English and Education
from the University of Michigan. There, her dissertation focused on the
work and self-identification of university supervisors. Steadman has taught
English education at Florida State University and East Carolina Universi-
ty. Recently, she assumed the position of Director of the Master of Arts in
Teaching program at ECU. She is the author or co-author of articles and
chapters on the theoretical and practical understandings of professional
support and development of preservice and early-career teachers.

Dr. Carla Lynn Tanguay is associate to the dean for clinical practice at Geor-
gia State University serving all initial teacher preparation programs in the
areas of teacher performance assessment and program evaluation, as well
as induction and mentoring through school partnerships. Additionally, she
is the coordinator in the Department of Early Childhood and Elementary
Education (ECEE) for the Bachelor of Science in Education programs in
Early Childhood Education with concentrations in special education and
English to speakers of other languages. She serves as the unit level chair
of the Assessment and Accreditation Committee and on committees with
the Georgia Professional Standards Commission. Her scholarly interests
include assessment, educational policy, urban education, teacher develop-
ment, induction, and retention.

Dianna Gahlsdorf Terrell is associate professor in education at Saint An-
selm College in Manchester, New Hampshire. She is a research team mem-
ber on the Spencer Small Grant, “Performance Assessments and Teacher
Learning in and Beyond Teacher Preparation,” and previously directed a
community grant from the New Hampshire Charitable Association to build
a network of civic education professional development providers and non-
profits in New Hampshire.

Clarice Thomas is a doctoral candidate in Teaching and Learning with a
concentration in Teaching and Teacher Education at Georgia State
University. Her research interests include narrative research of experiences
within the school-to-prison nexus, equitable education for African American
students, and pre-service teacher preparation in urban education. Clarice is a
former alternative education instructor and secondary history teacher.

Page Tompkins, EdD, is the executive director of the Upper Valley Edu-
cators Institute where he teaches in the Teacher and School Leadership
programs. Page is currently a member of the New Hampshire Professional
Standards Board, and is active in the New Hampshire Institutions of High-
er Education Network, including co-coordinating the Leadership Prepa-
ration Programs Committee. Page is a former high-school principal and
social-science teacher, with a background in project-based learning. Prior
to coming to UVEI, Page was the founding director of the Reach Institute
for School Leadership in Oakland, California, and he was an instructor in
the Principal Leadership Institute masters program and the Leadership for
Educational Equity doctorate program at UC Berkeley.

Cindy S. York, PhD, is associate professor of instructional technology within
the Department of Educational Technology, Research, and Assessment at
Northern Illinois University. Her research interests include the examina-
tion of practitioners in order to better prepare students in the areas of
instructional design and the integration of technology into teacher educa-
tion. Her past includes K–12 teaching and corporate experiences. She was
a 2014–2017 AECT board of directors member and the 2011–2012 presi-
dent of the Division of Distance Learning for AECT. She may be reached at
cindy.york@niu.edu.