

Communication Education, Vol. 60, No. 2, April 2011, pp. 255–278

Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009

Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore

This comprehensive review of the assessment of oral communication in the communication discipline is both descriptive and empirical in nature. First, some background on the topic of communication assessment is provided. Following the descriptive background, we present an empirical analysis of academic papers, research studies, and books about assessing communication, all of which were presented or published from 1975 to 2009. We outline the results of content and thematic analyses of a database of 558 citations from that time period, including 434 national convention presentations, 89 journal articles, and 35 other extant books and publications. Three main themes and eight subthemes are identified in the database, and trends evident in the resulting data are considered. The study concludes with a discussion of the trends and overarching themes gleaned from the research efforts, and the authors' recommendations of best practices for how to conduct oral communication assessment.

Keywords: Assessment; Evaluation; Communication Assessment; Communication Skills Assessment; Evaluating Communication; Assessing Communication; Assessing Student Learning; Learning Outcomes; Program Evaluation
Sherwyn Morreale (Ph.D., University of Denver, 1989) is Associate Professor and Director of Graduate Studies in the Communication Department, University of Colorado at Colorado Springs. Philip Backlund (Ph.D., University of Denver, 1977) is a professor of Communication Studies at Central Washington University. Ellen Hay (Ph.D., Iowa State University, 1982) is a professor and chair of the Department of Communication Studies at Augustana College, Rock Island, IL. Michael Moore (Ph.D., University of Missouri, 1973) is an emeritus professor of Communication from Morehead State University. Sherwyn Morreale can be contacted at smorreal@uccs.edu.

ISSN 0363-4523 (print)/ISSN 1479-5795 (online) © 2011 National Communication Association. DOI: 10.1080/03634523.2010.516395

Access to and success in college are substantially influenced by prior academic achievement. Learning is a continuum; gaps and weaknesses at one point, whether in high school or college, create barriers to successful performance at the next level. Student learning outcomes data are essential to better understand what is working and what is not, to identify curricular and pedagogical weaknesses, and to use this information to improve performance. (Kuh & Ikenberry, 2009, p. 1)

As a national initiative, early mandates to assess student learning were often perceived as an inappropriate expectation of faculty set by college administrators and legislators external to their campuses. Many faculty members firmly believed their current practices for grading knowledge and performance were quite sufficient. Times and attitudes evolved, and assessment is now institutionalized on the majority of American campuses (Ewell, 2009). As we will detail in the following report, assessment in higher education and in the communication discipline has developed considerably over the last 35 years. As a policy and practice in higher education, assessment likely will be with us for years to come. Assessment is not going away, nor should it.

Today a wide range of organizations external to campuses (regional accrediting bodies, legislatures, state boards of education, and others) endorse and mandate assessment. They require assessment as part of their accountability processes to ensure faculty at institutions of higher education are doing their jobs well. These two processes, assessment and accountability, may be a source of confusion for some, because both are often collectively referred to as assessment. In reality, assessment and accountability are two different processes, but with one potentially embedded in the other. Put simply, when we assess our own performance or that of our students, it is assessment; when others assess our performance or that of our department, program, or institution, it is accountability (Frye, 2006).

Another persistent source of confusion about these two processes is the language used in their discussion and application. Precision of language about assessment could support greater clarity of practice and perhaps more enthusiastic support and participation. To that end, and to inform the report that follows, Table 1 presents some of the more commonly used terms in the assessment movement.

The present study examines the development and evolution of the assessment movement in the communication discipline by providing a comprehensive overview of historical trends in related scholarship over the last 35 years. The goal of this research study is to serve the needs of scholars, teachers, and administrators who are committed to engaging in oral communication assessment effectively, both in and outside the communication discipline. We begin with some descriptive background for this study and then outline the method and results of gathering and analyzing data about assessing communication from national convention programs, educational journals, and other books and publications.

Background to the Present Study

Some understanding of assessment in general, and in the communication discipline in particular, is in order. A brief sketch of the history of the assessment movement includes references to communication's role therein. Then we detail the National Communication Association's involvement in assessing communication programs and student learning outcomes.


Table 1 Common Terms Used in the Assessment Initiative

Assessment and accountability: In general, when we assess our own performance, it is considered assessment; when others assess our performance, it is considered accountability. That is, assessment is a set of initiatives we take to monitor the results of our actions and to improve ourselves; accountability is a set of initiatives others take to monitor the results of our actions and to penalize or reward us accordingly.

Assessment: The systematic process of determining educational objectives, gathering, analyzing, and using information about student learning and learning outcomes to make decisions about programs, individual student progress, or accountability.

Measurement: The systematic investigation of people's attributes or behaviors.

Benchmark: A criterion-referenced objective standard that is used for comparative purposes. A program can use its own data as a baseline or benchmark against which to compare future performance. It can also use data from another program as a benchmark.

Direct assessment: Direct assessment of student learning requires students to display their knowledge and skills as they respond to or are evaluated using an assessment instrument. Objective tests, essays, presentations, and classroom assignments all meet this criterion.

Indirect assessment: Indirect assessments such as surveys and interviews ask students to reflect on their learning rather than demonstrate it.

Formative assessment: An assessment that is used for improvement (on an individual or program level) rather than for making final decisions or for accountability. It is also used to provide feedback to improve teaching, learning, and the curricula, as well as to identify students' strengths and weaknesses.

Summative assessment: A sum total or final product measure of achievement at the end of an instructional unit or course of study.

Performance-based assessment: An assessment technique involving the gathering of data through systematic observation of a behavior or process and evaluating these data based on a clearly articulated set of performance criteria to serve as the basis for evaluative judgments. Evaluating speeches is a good example of this type of assessment.

Evaluation: This term broadly covers all potential investigations of institutional functioning, based on formative, summative, or performance-based assessment processes. Evaluation may include assessment of learning, but it might also include nonlearning-centered investigations (e.g., satisfaction with instructional facilities).

Objectives: The specific knowledge, skills, or attitudes that students are expected to achieve through their college experience (e.g., any expected/intended student outcomes).

Outcomes: The results of instruction; the specific knowledge, skills, or developmental attributes that students actually develop through their college experience (viz., the assessment results).

Rubric: A scoring tool that lists the criteria for an assignment or task, or what counts (e.g., purpose, organization, and mechanics) in a piece of writing. A rubric also articulates gradations of quality for each criterion it contains, from excellent to poor.

Norm: An interpretation of scores on a measure that focuses on the rank ordering of students, not their performance in relation to criteria.

Value-added: The effects educational providers have had on students during their programs of study; the impact of participating in higher education on student learning and development above that which would have occurred through natural maturation. Value-added factors are usually measured as longitudinal change or difference between pretest and posttest.
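To make the rubric entry above more concrete, the sketch below shows one way a simple speaking rubric could be represented. It is our hypothetical illustration in Python; the criteria, descriptors, and scoring function are not drawn from any instrument discussed in this article.

```python
# Hypothetical, abbreviated speaking rubric: each criterion is paired with
# gradations of quality, as described in Table 1. All names are illustrative.
speech_rubric = {
    "organization": {
        4: "Clear thesis; previewed and signposted main points",
        3: "Discernible structure with minor lapses",
        2: "Structure attempted but hard to follow",
        1: "No discernible structure",
    },
    "delivery": {
        4: "Fluent, vocally varied, adapted to the audience",
        3: "Generally fluent with occasional hesitations",
        2: "Frequent hesitations or monotone delivery",
        1: "Delivery impedes understanding",
    },
}

def total_score(ratings: dict) -> int:
    """Sum the per-criterion ratings a rater assigns to one speech."""
    return sum(ratings[criterion] for criterion in speech_rubric)

print(total_score({"organization": 3, "delivery": 4}))  # prints 7
```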


We compare assessment of communication to that in other disciplines and describe the current status of communication assessment nationally.

General Historical Background

In the 1960s, poor results of what was termed "choice-based curriculum" led to the undergraduate curriculum reform calls of the 1980s. During that time, many students were deemed not adequately prepared for college, and students graduating from college sometimes lacked skills necessary for workplace success, a condition that some might say has not changed sufficiently yet. Over 20 national reports on skills assessment were published by a variety of associations and agencies between 1983 and 1989 (Hay, 1989, 1992).

As the 20th century drew to a close, interest in assessment continued to increase. The Goals 2000: Educate America Act and President Bush's No Child Left Behind program focused on improving education with an emphasis on assessing learning outcomes. The Goals 2000 Act codified into law six national education goals developed in 1989 and added two goals to encourage parental participation and the professional development of educators ("Clinton intends," 1993). The national goal on literacy and lifelong learning was of particular importance to communication educators: "The proportion of college graduates who demonstrate an advanced ability to think critically, communicate effectively, and solve problems will increase substantially" (Lieb, 1994, p. 1). Even though the "communicate effectively" portion of the objective might not have received the full attention some communication educators believed it deserved, it does provide a national rationale for assessing communication education.

National Communication Association (NCA) Assessment Initiatives

NCA, formerly known as the Speech Communication Association, has actively developed a national assessment agenda since the 1970s. The Speech Communication Association Task Force on Assessment and Testing, formed in 1978, was charged with gathering, analyzing, and disseminating information about the testing of speech communication skills (Backlund & Morreale, 1994). This task force has evolved into the NCA Communication Assessment Division (CAD), which addresses activities such as defining communication skills and competencies, publishing summaries of assessment procedures and instruments, publishing standards for effective communication programs, and developing guidelines for program review. A significant portion of the research on assessment supported by NCA has focused on two interrelated areas: communication programs and student learning outcomes.

Communication program assessment. The purpose of program assessment is continuous improvement of departmental educational efforts through self-evaluation. Such assessment provides an opportunity for department members to demonstrate the unique contributions of their departments to administrators and to fend off threats based upon fiscal constraints or political motivations.


Program assessment requires department members to examine curricula, educational experiences, and student learning. Evaluation may be part of a campus-wide effort (Backlund, Hay, Harper, & Williams, 1990) and/or part of a departmental initiative (Makay, 1997; Shelton, Lane, & Waldhart, 1999). Work in this area is also associated increasingly with mandates from state agencies and accreditation boards. NCA has a set of Guidelines for Developing and Assessing Undergraduate Programs in Communication available on the association's website (NCA, 2008).

Communication learning outcomes assessment. The purpose of learning outcomes assessment is to examine actual student learning in any course or other arena in which teaching and learning may occur. Faculty ultimately must own assessment of student learning; they are the ones who write student-focused learning objectives; select appropriate instruments of assessment; collect, analyze, and interpret the data; and then use the data for course and program improvement. Developing student-learning outcomes in communication begins with defining communication competence as it relates to the desired educational outcomes of the instructional program. Several publications, including the published proceedings of NCA's national assessment conference in 1993, discuss various approaches to examining communication competence (Christ, 1994; Morreale, Brooks, Berko, & Cooke, 1994; Morreale, Spitzberg, & Barge, 2006).

Assessment in Communication Compared to Other Disciplines

While the terms in Table 1 apply to assessment in all disciplines, assessment within the communication discipline tends to be unique. Methods of assessment used in other academic areas cannot always be adapted to communication, particularly to the assessment of oral communication skills. Comparing communication skills assessment with assessment in other academic areas calls attention to three distinctions, which can be addressed by well-designed assessment programs (Morreale & Backlund, 2007).

First, assessment of learning in many disciplines can use such methods as achievement tests as well as objective and subjective tests of content. By contrast, communication is generally seen as a process skill, similar to reading and writing. While it is important to assess students' knowledge about how they should communicate, it is equally if not more important to assess their communication performance in authentic situations. Thus, communication skills have generally been assessed with performance measures, while communication knowledge has been assessed with more traditional assessment tools such as paper-and-pencil tests and essays.

Second, due to the interactive nature of communication, assessment of communication performance encounters other challenges. The appropriateness and effectiveness of communication is generally based on the situation and on the perceptions of the viewer, or the impression made by the communicator on the observer. As a result, there may be more than one correct answer or way of performing. To complicate matters further, evaluation of a communicator depends on criteria that are often culturally bound, thus making assessment more difficult than in other academic subjects.

Third, in the communication discipline, we are faced with the uncertainty of not knowing whether our educational programs have worked. Assessment results can only be predictive of a certain potential or propensity to communicate competently in the future. The determination of competence in communication will be affected by numerous factors impinging on any interaction at any given time. Determining whether a student has achieved a given level of maintainable competence requires observation of the student's performance in a multitude of diverse situations.

Current Status of Communication Assessment

The assessment of student learning about communication has taken root nationally. Two efforts external to the communication discipline support this observation. The College Board recently published content standards for English language arts as well as math and statistics; these content standards are recommended as secondary school assessment tools to test for college readiness. Communication is considered critical in the language arts standards, which prominently include rubrics for assessing speaking, listening, and media literacy (see http://professionals.collegeboard.com for a full description of The College Board standards).

The Association of American Colleges and Universities now offers its Liberal Education for America's Promise (LEAP) program as a primary vehicle for advancing and enhancing undergraduate liberal education for all students. Communication competence and its assessment figure significantly in the LEAP program (see http://www.aacu.org/leap for a full description of the LEAP program). The essential learning outcome for intellectual and practical skills includes written and oral communication, information literacy, as well as teamwork and problem solving. The learning outcome for personal and social responsibility includes intercultural knowledge and competency.

More information on the nature and status of communication assessment is available in an updated edition of an NCA publication on large-scale assessment (Morreale & Backlund, 2007). However, to further investigate the evolution of assessment in communication studies, we conducted an analysis of convention presentations, journal articles, and other publications, as described in the following section.

Method

This study utilized a triangulated methodology to examine trends in research related to oral communication assessment over a 35-year period extending back to 1975, when assessment began to take on national impetus. Content and thematic analyses were used to develop a database of citation items, code those items, identify themes and categories, and produce a comprehensive description of how communication assessment has been approached over the years.


Content Analysis

First, a content analysis process was conducted to identify and count presentations at conventions of the NCA and scholarly articles on communication assessment in leading national communication education journals. Additionally, a list was developed of extant publications, such as books and monographs, that address assessment of communication. More specifically, for the time period from 1975 to 2009, we reviewed events and presentations listed in NCA and Speech Communication Association (SCA) convention programs. For the same time period, we examined the tables of contents of Communication Education, the Association for Communication Administration Bulletin, and Communication Teacher (formerly Speech Communication Teacher). Some past issues of Speech Communication Teacher were not included because they are not available in any electronic database. The tables of contents for journals published by the four regional communication associations also were reviewed but not included because most were not available electronically.

We also searched an array of databases and the Internet for books and other nonserial publications, such as conference proceedings within and outside of the communication discipline. Assessment, evaluation, assessing, and evaluating were the initial keywords used in this search. These keywords were adjusted during the data-gathering process based on the results of queries conducted in various databases. The results of the content analysis processes were recorded in RefWorks, a computerized bibliographic database system that is housed on the campus of one of the authors but was electronically accessible to all authors involved in this study. The resulting database contained a total of 558 items, including 434 convention presentations, 89 journal articles, and 35 other books and nonserial publications. After developing the database, we subjected the items in the database to thematic analysis using a qualitative coding and categorizing process (Saldaña, 2009).
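As an illustration of the keyword screening just described, the short sketch below shows how title matching against the initial search terms could be automated. It is our own sketch, not part of the study's procedure (which relied on manual review and RefWorks); the file name and its "title" column are hypothetical.

```python
import csv

# Initial search terms named in the text; later adjusted during data gathering.
KEYWORDS = {"assessment", "evaluation", "assessing", "evaluating"}

def matches(title: str) -> bool:
    # Keep an item if any keyword appears as a whole word in its title.
    return bool(KEYWORDS & set(title.lower().split()))

# "convention_programs.csv" and its "title" field are hypothetical examples.
with open("convention_programs.csv", newline="") as f:
    kept = [row for row in csv.DictReader(f) if matches(row["title"])]

print(f"{len(kept)} candidate citations retained for coding")
```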

Thematic Analysis

Thematic analysis of the items in the database, including convention presentations, journal articles, and other extant publications, was used to determine trends and analyze patterns in the evolution of oral communication assessment from 1975 to 2009. The first step in the thematic analysis process involved identifying the general theme or main focus of each item. Each of the four researchers in this study worked independently, engaging in a first cycle coding process of all the items in the database to develop a preliminary list of main themes. After comparing the four sets of themes, three broad categories of themes emerged, focused on the why, what, and how of assessing communication.

Next, the authors collaborated to identify subthemes for each of the three categories and a description of each subtheme to be used in a second cycle coding process. The goal of this collaboration was to agree on a set of subthemes that are comprehensive and mutually exclusive. If the set of subthemes was comprehensive, then each of the presentations, articles, and publications in the database could be assigned to one of the themes and subthemes. If the subthemes were mutually exclusive, it would improve the likelihood that each item would clearly fall into one subtheme rather than another. According to Saldaña (2009), the goal of this type of coding and categorizing process is to "organize and group similarly coded data into categories or families" (p. 8) because they share some characteristics. Table 2 presents a description of the three categories or main themes and their associated subthemes. A pilot test of these subthemes was conducted to ensure their viability before engaging in second cycle coding of the entire database of articles, presentations, and publications. All four raters categorized the same 32 items from the database.
Table 2 Themes and Subthemes Used for the Thematic Analysis Process

Theme 1: What is communication assessment, and why do we do it? This category of bibliographic items includes theoretical issues, fundamentals of oral communication assessment, and reflections about the oral communication process. Items also focus on why we engage in the communication process.

Subtheme A: General overview. These items discuss basic assessment concepts and assessment language, and they reflect upon and provide a sense of where the movement is at or could be headed.

Subtheme B: Rationale. These items explain why we do communication assessment.

Theme 2: What is assessed? This category of bibliographic items includes the qualities, knowledge, abilities, and dispositions that are commonly assessed. These items focus more on what should be considered in the assessment process rather than how to do the assessment.

Subtheme C: Student learning outcomes. These items aid in defining what could and should be assessed. The items identify typical aspects of student learning that are or should be assessed.

Subtheme D: Program/departmental evaluation. These items focus on evaluation that occurs at the unit or programmatic level, and provide guidance in what qualities and procedures to consider in such a review or evaluation.

Theme 3: How is it assessed? This category of bibliographic items considers the how-tos of specific assessment practices and processes.

Subtheme E: Assessment guidelines and frameworks. These items focus on how to develop and organize assessment efforts; they also explain how departments can gather and analyze assessment data.

Subtheme F: Assessment in specific contexts and courses. These items focus on assessment practices and processes with an emphasis on how to assess specific knowledge sets, behavioral skills, and dispositions across a range of situations and contexts (e.g., interpersonal, group, public, organizational, K-12).

Subtheme G: Assessment strategies and techniques. These items focus on data-gathering strategies (e.g., portfolio, survey, behavioral coding) and their use in both classroom and nonclassroom contexts (e.g., applied situations, communication centers and labs).

Subtheme H: Assessment instruments. These items focus on assessment instruments/measures, including the criteria for evaluating various assessment instruments.


Cronbach's alpha was used to calculate the consistency of their ratings, and a coefficient of .862 was achieved. Cronbach's alpha coefficient is a measure of internal consistency reliability, and it is useful for understanding the extent to which the ratings from a group of judges are consistent (Stemler, 2004). Since a reliable and consistent set of subthemes emerged in the pilot test, Cronbach's alpha was calculated to determine the two most consistent and reliable pairs of coders, who would engage in second cycle coding and work as two independent teams to code half of the entire database of items. One pair of coders achieved a reliability coefficient of .90 and the second pair a coefficient of .75. The two teams then each took half of the database and used the set of subthemes in Table 2 to code their items. Any items about which the two coders in a pair disagreed were coded by a third coder in order to determine the subtheme for that item. The results of the content and thematic analysis are presented next, followed by a discussion of trends and overarching themes evident in the results.
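For readers who want to see how such a coefficient is computed, the sketch below implements the standard Cronbach's alpha formula for an items-by-raters matrix. This is our illustration only: the study's actual ratings are not reproduced here, and the pilot matrix below is randomly generated stand-in data (32 items, four raters, numeric codes 1-8 for subthemes A-H).

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (items x raters) matrix of numeric codes."""
    k = ratings.shape[1]                          # number of raters
    rater_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's codes
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of per-item totals
    return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

# Hypothetical stand-in for the pilot test: 32 items coded by 4 raters.
rng = np.random.default_rng(0)
true_codes = rng.integers(1, 9, size=(32, 1))     # "true" subtheme code, 1-8
ratings = np.clip(true_codes + rng.integers(-1, 2, size=(32, 4)), 1, 8)

print(f"alpha = {cronbach_alpha(ratings):.3f}")
```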

Results

The results of the content analysis and coding of the NCA/SCA convention programs, the national communication journals, and other books and publications provide a detailed picture of how interest in communication assessment has been approached over the years. Patterns, in 5-year periods extending from 1975 to 2009, became evident when the items in the database were coded into subthemes. In the appendix to this writing, full citations are provided for the journal references and books that were identified and coded for this study. They are organized by theme and subtheme to facilitate the reader's ability to tie specific studies or books to a particular topic or subtheme. A list of the 434 coded convention papers is available directly from the authors of this study.

To interpret the results now presented in Tables 3–6, the eight subthemes, based on subtheme letter (e.g., A, B, C), are briefly listed again here:

Theme One: What Is Communication Assessment, and Why Do We Do It?

Subtheme A: General overview of assessment.
Subtheme B: Rationale for doing assessment.

Theme Two: What Is Assessed?


Subtheme C: Student learning outcomes assessment.
Subtheme D: Program/department evaluation.

Theme Three: How Is It Assessed?


Subtheme E: Assessment guidelines and frameworks.
Subtheme F: Assessment in specific contexts and courses.
Subtheme G: Assessment strategies and techniques.
Subtheme H: Assessment instruments.


Table 3 presents all of the coded citations, including convention papers, journal articles, and other extant publications, identified by subtheme during each of the seven 5-year periods. A total of 558 relevant citations were spread out over 1975 to 2009. As Table 3 shows, the majority of the citations (383; 68.6%) occurred from 1990 to 2004. The subtheme occurring most frequently (123; 22.0%) was assessment in specific contexts and courses (F). The next most popular subthemes were assessment strategies and techniques (G; 83; 14.8%), general overview of assessment (A; 82; 14.6%), and program/department evaluation (D; 81; 14.5%). The least popular subtheme was rationale for doing assessment (B; 14; 2.5%).

Table 4 presents the coded convention papers, identified by subtheme during each of the seven 5-year periods. A total of 434 relevant citations were spread out over 1975 to 2009. As with the total number of citations, this table indicates the majority of the convention presentations (322; 74.2%) occurred from 1990 to 2004. The most popular subtheme for convention presentations also was assessment in specific contexts and courses (F; 113; 26%). The next most popular subthemes were program/departmental evaluation (D; 66; 15.2%) and assessment strategies and techniques (G; 65; 15%). The least popular subtheme of convention papers was rationale for doing assessment (B; 12; 2.7%).

Table 5 presents the coded journal articles, identified by subtheme during each of the seven 5-year periods. A total of 89 relevant articles were spread out over 1975 to 2009.
Table 3 Total Citations of Convention Papers, Journal Articles, and Books and Other Extant Publications by Theme and Years (1975 to 2009)

Theme   1975–79   1980–84   1985–89   1990–94   1995–99   2000–04   2005–09   Total
A           7        15         7        17        12        13        11        82
B           0         1         3         2         5         2         1        14
C           5         1         6        15         6         5         4        42
D           0         6         5        20        31        13         6        81
E           0         2         8        13        16         8        10        57
F           2         9         9        32        28        30        13       123
G           1         3         6        22        22        23         6        83
H           6         5         7        16        22        10        10        76
Total      21        42        51       137       142       104        61       558
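As a quick check on the reconstructed totals (our arithmetic, not an addition by the authors), the 68.6% share reported above follows directly from the three middle column totals of Table 3:

$$\frac{137 + 142 + 104}{558} = \frac{383}{558} \approx 0.686.$$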

Table 4 Convention Papers by Theme and Years (1975 to 2009)

Theme   1975–79   1980–84   1985–89   1990–94   1995–99   2000–04   2005–09   Total
A           1         2         5        13        10        12         8        51
B           0         0         3         2         5         1         1        12
C           3         1         4         4         5         4         3        24
D           0         5         1        11        30        13         6        66
E           0         1         3         7        16         8         9        44
F           1         7         8        28        26        30        13       113
G           1         0         3        15        19        21         6        65
H           4         1         4        13        21         8         8        59
Total      10        17        31        93       132        97        54       434

Table 5 Journal Articles by Theme and Year (1975 to 2009)

Theme   1975–79   1980–84   1985–89   1990–94   1995–99   2000–04   2005–09   Total
A           3        10         1         4         0         0         2        20
B           0         1         0         0         0         0         0         1
C           1         0         2         8         1         1         1        14
D           0         1         4         7         1         0         0        13
E           0         0         5         5         0         0         1        11
F           1         1         1         3         2         0         0         8
G           0         2         1         5         1         2         0        11
H           2         2         2         2         1         2         0        11
Total       7        17        16        34         6         5         4        89


This table reveals that the 5-year period from 1990 to 1994 was the most popular for publication of journal articles on assessment (34; 38.2%). A steady decline in journal articles is evident in each 5-year period since 1994. Interestingly, the most popular subtheme for journal articles was general overview of assessment (A; 20; 22.4%), rather than assessment in specific contexts and courses (F; 8; 9%).

Table 6 presents the books and other extant publications, identified by subtheme during each of the seven 5-year periods. A total of 35 relevant books were published from 1975 to 2009. The 5-year period from 1990 to 1994 was the most popular for books and other publications (10; 28.6%). The most popular subtheme for books was general overview of assessment (A; 11; 31.4%).

Discussion of Results, Trends, and Overarching Themes

Based on the results just outlined, we now discuss several trends in oral communication assessment over the past 35 years, including some recommendations for how to conduct assessment research in the future.

One general trend indicates that communication assessment received considerable attention in the 1990s up until the early 2000s (see Table 3). But since 2005, assessment publications and convention presentations have declined. This changing pattern of interest parallels national attention toward assessment.
Table 6 Books and Other Extant Publications by Theme and Year (1975 to 2009)

Theme   1975–79   1980–84   1985–89   1990–94   1995–99   2000–04   2005–09   Total
A           3         3         1         0         2         1         1        11
B           0         0         0         0         0         1         0         1
C           1         0         0         3         0         0         0         4
D           0         0         0         2         0         0         0         2
E           0         1         0         1         0         0         0         2
F           0         1         0         1         0         0         0         2
G           0         1         2         2         2         0         0         7
H           0         2         1         1         0         0         2         6
Total       4         8         4        10         4         2         3        35


With calls for educational reform in the 1980s, state legislatures and accrediting agencies demanded more accountability in the 1990s. Individuals in communication programs needing to respond to these mandates began to share more information as well as their resources for assessing communication. Additionally, in the early 1990s, NCA convened a major national conference on assessment that produced an array of new assessment instruments (Morreale et al., 1994). Then it appears that assessment pressures lessened from 2005 to 2009. That may be, in part, because we have figured out what we are doing about assessment and how we should do it. However, now might be the right time to revisit earlier discussions of assessment guidelines and frameworks as well as strategies and techniques (i.e., subthemes E and G) with a focus on contemporary instructional challenges. For example, it would be helpful to have more studies that focus on assessing the role of communication in enhancing student learning (i.e., subtheme C) in online environments, in distance education settings, and in global contexts.

A second trend suggests there has been a change over time in the venues for discussions about communication assessment. From 1975 to 1994, journal articles and other publications (see Tables 5 and 6) were more numerous, keeping pace with presentations in the convention format (see Table 4). Since 1995, there has been a shift such that almost all of the conversations about assessment are conducted at conventions, not in academic journals. Perhaps because of the demise of the Association of Communication Administration Bulletin, there are fewer journals in which assessment manuscripts can be published. Or perhaps department and program chairs, who believe assessment to be a concern, might not have the time to prepare manuscripts for journal publication. In either case, we may be well advised to begin seeking more publications in scholarly journals, focused on valid and reliable best practices in assessment of student learning outcomes at the program and departmental levels (i.e., subthemes C and D).

A third trend, indicated by the subthemes reported in the results, suggests that most of the attention over time has focused topically on assessment in specific contexts and courses (see Table 3, subtheme F), with this discussion occurring mainly at the convention level. These how-tos of assessment appear to have dominated convention discussions. Approximately 26% of convention papers were about assessment of knowledge, abilities, and dispositions in different situations and contexts, 15% were about program and department evaluation, and 14% about assessment strategies and techniques. These papers, and the how-to recommendations they contain, could provide useful models for departments interested in course-based assessment and the assessment instruments for use therein (i.e., subthemes F and H). Meta-analyses of the content of the accessible convention papers could make a useful contribution to the assessment literature in communication.

By contrast to the subtheme topics at conventions, general overviews of the assessment process were the primary focus in journal articles and other publications (see Tables 5 and 6). In these more permanent records, we have not given attention to the how-tos of assessment. The publication of such applied studies would be useful, if they are characterized by the academic rigor expected in journal publications.


Finally, rationale writings, explaining why we do assessment, appear to be the least popular subtheme across all outlets: conventions, journals, and books. Despite the popular notion that we need to argue for communication assessment, not many of us are writing rationale statements, which might be a topic worth revisiting and updating.

The results and trends next are examined briefly in light of the three questions that emerged from the original thematic analysis of the 558 citations: What is communication assessment and why do we do it? What is assessed? How is it assessed?

What Is Communication Assessment, and Why Do We Do It?

What is communication assessment? Of the 558 citations in the database, 17% (i.e., 95) provided answers to the what and why of assessment. And while there is no shortage of definitions, the scholars appear to accept as a basic premise that assessment is how we document our efforts to develop student learning. This leads to a specific definition of assessment as "the process of gathering and analyzing information from multiple sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences" (Teaching and Learning Center, 2010). Assessment of communication is thus considered the process of documenting, usually in measurable terms, student gains in communication knowledge, skills, attitudes, and beliefs.

Why do we assess communication? Recently, one of the coauthors of this article participated in an accreditation visit to a university. The faculty interviewed for this accreditation visit seemed enthusiastic about assessment. Two faculty members, self-described curmudgeons, stated they formerly were strong skeptics but now were strong supporters of the concept of assessment. Their reasons for a change of heart mirror the essential rationale for assessing communication identified in this study's database. Those studies summarily point to four reasons for engaging in communication assessment:

1. It is good for students. As the citations in our database strongly suggest, the primary motivation for assessment is to develop a stronger educational program for students.
2. It brings faculty together. Determining how to assess student learning outcomes for a communication program requires faculty to think beyond their individual courses and collaboratively examine the entire educational program.
3. It satisfies the needs of external agencies. Assessment provides information and data to outside agencies. State legislatures, state boards of education, and every regional accrediting body require some form of assessment.
4. It is the right thing to do. Improvement is the ultimate response to "why do we conduct assessments?" If each faculty member and each administrator is committed to student learning, then assessment of that learning is obviously appropriate.


The 95 reviewed citations in the what and why category consistently reinforced the point that the fundamental reason for communication assessment is improvement. They provide a useful rationale for continuing to refine and enhance assessment programs in the communication discipline.

What Is Assessed?

Indeed, there is a commonality of interest in assessment to meet the expectations of others about improving student learning. However, what is actually assessed, while described in 22% (123) of the 558 items in our database, appears to vary considerably. These 123 citations center on assessing specific competencies (e.g., media literacy, service learning) in specific courses (e.g., fundamentals, public relations) and at multiple levels (e.g., pre-K, secondary, college). Furthermore, these citations also center on communication competence at the institutional level, often as part of general education, and on the wide range of skills believed to comprise communication competence. Another less frequent emphasis is on speaking and listening skills, with a modest level of interest specifically in public speaking and listening. There remains very little focus on more cognitive (as opposed to behavioral) skills; less than 5% of the 123 items reviewed displayed a focus on cognitive skills.

With regard to the context in which assessment is conducted, the greatest interest continues to be in assessment of the academic program/major and the academic department. Without question, the academic level receiving the most attention in the literature is the college and university level, with over 54% of the 123 items focusing specifically on assessment of communication competence and/or academic programs at this level. Only 4% address assessment at the community college level. While there is some interest in assessment of communication competence or specific competencies within K-12, that interest appears very limited.

Based on the 123 citations in this category, our discipline clearly needs to continue efforts to enhance assessment of communication competence at the college and university level, to include the knowledge, skills, and dispositions of our students upon graduation. Moreover, some focus on assessing communication at other levels of the educational enterprise also may be in order.

How Is It Assessed?

As noted in the results, communication educators have spent considerable time and energy focusing on how to do assessment. Approximately 61% (340) of the total of 558 citations were categorized into the four subthemes responding to this how-to-do-it question. These papers, articles, and texts have provided overviews of assessment plans, many explaining how specific departments devised their response to assessment mandates. They have also offered suggestions on how to assess the various abilities and contexts (i.e., public speaking, listening, interpersonal, intercultural, and organizational) as well as professional applications (i.e., communication assessment for teachers, lawyers, doctors, or engineers).


Different assessment strategies such as surveys, interviews, focus groups, capstone courses, and portfolios have been examined, but to a lesser extent. Finally, communication educators have developed and reported on instruments to evaluate effectiveness in public speaking, interpersonal interaction, listening, and various dimensions of motivation to communicate.

Scholars in the communication discipline now need to look more closely at the content of the 340 studies categorized here as how is communication assessed. Using this body of literature, we need to develop research-driven models for student learning and program assessment. Based on these overarching trends gleaned from our research efforts, recommendations for best practices in regard to communication assessment will conclude this report. But first, we mention several limitations of the present study.

Limitations of Study

This review of the assessment literature was limited by two constraints. First, we did not examine the journals and convention programs of affiliated organizations. For example, examining the work of the International Listening Association or the International Communication Association might have expanded our understanding of assessment. While we did consider including the journals of the regional communication associations, their convention programs were not readily accessible for the 35-year time frame of interest in this study. Second, because many of the convention sources were unavailable in their entirety, we were only able to examine the titles of convention papers and some of the other books. Had we had the opportunity to review the content of all 558 manuscripts in our database, we could have engaged in more substantive analysis of the content of the studies. Such analysis would have provided yet more useful insights into the how-tos of communication assessment, a topic reserved for a later study. Given this limitation, the following recommendations for best practices, while supported by the data from the qualitative thematic analysis, also are based on the authors' extensive experience with assessment, accountability, accreditation, program review, and external evaluation.

Recommendations for Best Practices in Oral Communication Assessment

The results of this study imply communication assessment has become a permanent part of the fabric of academic life in higher education. This permanence highlights the following critical issues for consideration by communication department and program administrators, scholar teachers, and scholarly associations.

Recommendations for Administrators

Legislatures, accrediting bodies, state boards of education, and internal reviewers will continue to inquire about whether the communication education received by students produces the desired effects. Department and program administrators as well as directors need to be fully cognizant of any expectations or requirements on their campuses related to communication instruction and programs.


In addition to curricula for communication majors, communication expectations in other majors or in general education are typical examples. Not only should quality instruction be in place to satisfy those expectations, but assessment of student achievement should also occur. While the department's needs and the requirements imposed by other constituencies may vary, administrators' responses can be guided by fundamental questions related to incorporating the assessment of student learning outcomes in programmatic assessment (i.e., subthemes C and D). Administrators need to facilitate strategic planning discussions with faculty as a way to define, review, and redefine academic programs in the communication discipline. The following set of questions, derived from our consulting activities, suggests that assessment can serve as an integral part of defining, reviewing, and redefining academic programs in the communication discipline:

1. Who are we and why do we exist? What is the mission of our program?
2. What do we want to accomplish? What are our goals and objectives? Who do we serve?
3. What assessment procedures can we use to determine if the goals and objectives are met?
4. What is our assessment plan, and is it sufficiently rigorous?
5. What are the results of our assessment program, and how are they being used?
6. What changes will we make to our goals/objectives/outcomes/processes based on the results?
7. What evidence is there that this assessment process is a continuous cycle of improvement?

Recommendations for Scholar Teachers

Communication faculty members have responsibilities and opportunities, regarding communication assessment, at two levels. At the campus level, faculty members need to support their administrators, chairs, or program directors in the development of rigorous course-based assessment activities. At its core, student learning occurs in courses, and only faculty members know how best to assess learning in their particular courses (i.e., subthemes F, G, and H). Their recommendations about assessment can and should provide the essential substance for departmental assessment plans and programs. At a disciplinary level, communication faculty need to turn their attention back to the national dissemination of what they learn through conducting rigorous assessment of student learning at the local level. Such sharing should begin to occur through publications as well as in discussions and presentations at conferences and conventions. Basic course directors and faculty, for example, could begin to report about their best practices for course-based assessment in multiple sections of the same course. Communication faculty might also want to consider framing the publication of their assessment efforts under the umbrella of the scholarship of teaching and learning, a well-respected initiative in the communication discipline (Huber & Morreale, 2001).


Recommendations for Scholarly Associations

Finally, NCA may want to take a greater role in ensuring that faculty and departments have access to an archival history of assessment in the discipline, as well as the most recent assessment resources to meet their needs. Convention papers on assessment could be collected in a centralized location, and reports of updated assessment practices could be solicited for inclusion on the association-based assessment website. NCA could also help to provide a national venue for an ongoing dialogue about communication assessment in general (i.e., subthemes A and B). Without such resources, the discipline may find itself reinventing the assessment wheel every 10 to 15 years.

Conclusion

The time is right for our discipline to become fully engaged once again in communication assessment. This re-engagement should be characterized by rigor and an emphasis on valid and reliable results from the assessment process. Moreover, all stakeholders should recognize the development of communication assessment programs as a genuinely constructive activity, rather than just another expectation of administrators and legislators. By developing and implementing best practices and disseminating the how-tos of those practices at conventions, in academic journals, and in other publications, we can collaborate to assess communication programs and student learning more effectively.

References
Backlund, P., Hay, E.A., Harper, S., & Williams, D. (1990). Assessing the outcomes of college: Implications for speech communication. Association for Communication Administration Bulletin, 72, 13–20.
Backlund, P., & Morreale, S.P. (1994). History of the Speech Communication Association's assessment efforts and present role of the committee on assessment and testing. In S.P. Morreale, M. Brooks, R. Berko, & C. Cooke (Eds.), 1994 SCA summer conference proceedings and prepared remarks (pp. 9–16). Annandale, VA: Speech Communication Association.
Christ, W.G. (Ed.). (1994). Assessing communication education: A handbook for media, speech, and theatre educators. Hillsdale, NJ: Erlbaum.
Clinton intends to establish national academic standards. (1993, February 24). Greeley Tribune, p. A1.
Ewell, P.T. (2009). Assessment, accountability, and improvement: Revisiting the tension (NILOA Occasional Paper No. 1). Urbana, IL: National Institute for Learning Outcomes Assessment.
Frye, R. (2006). Assessment, accountability, and student learning outcomes. Dialogue, 1(2), 1–12. Retrieved from http://pandora.cii.wwu.edu/dialogue/default.htm
Hay, E.A. (1989). Education reform and speech communication. In P.J. Cooper & K.M. Galvin (Eds.), The future of speech communication education (pp. 12–17). Annandale, VA: Speech Communication Association.


Hay, E.A. (1992). Assessment trends in speech communication. In E.A. Hay (Ed.), Program assessment in speech communication (pp. 3–7). Annandale, VA: Speech Communication Association.
Huber, M.T., & Morreale, S.P. (Eds.). (2001). Disciplinary styles in the scholarship of teaching and learning: A conversation. Washington, DC: American Association for Higher Education and The Carnegie Foundation for the Advancement of Teaching.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: National Institute for Learning Outcomes Assessment.
Lieb, B. (1994, October). National contexts for developing postsecondary communication assessment. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.
Makay, J.J. (1997). Assessment in communication programs: Issues and ideas administrators must face. Journal of the Association for Communication Administration, 1, 62–68.
Morreale, S.P., & Backlund, P. (Eds.). (2007). Large-scale assessment in oral communication: K-12 and higher education (3rd ed.). Washington, DC: National Communication Association.
Morreale, S.P., Brooks, M., Berko, R., & Cooke, C. (Eds.). (1994). 1994 SCA summer conference proceedings and prepared remarks. Annandale, VA: Speech Communication Association.
Morreale, S.P., Spitzberg, B.H., & Barge, J.K. (2006). Human communication: Motivation, knowledge, and skills (2nd ed.). Belmont, CA: Wadsworth.
National Communication Association. (2008). Guidelines for developing and assessing undergraduate programs in communication. National Communication Association Insider, 1. Retrieved from http://www.natcom.org/index.asp?bid=14887
Saldaña, J. (2009). The coding manual for qualitative researchers. New York: Sage.
Shelton, M.W., Lane, D.R., & Waldhart, E.S. (1999). A review and assessment of national education trends in communication instruction. Communication Education, 48, 228–237.
Stemler, S.E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research & Evaluation, 9. Retrieved from http://PAREonline.net/getvn.asp?v=9&n=4
Teaching and Learning Center. (2010). Assessing students in a learner-centered classroom [Workshop]. Eugene, OR: University of Oregon. Retrieved from http://tep.uoregon.edu/workshops/teachertraining/learnercentered/assessing/assessing.html

Appendix
The following are all the journal references and books that were identified and coded for this study. They are organized by subtheme to facilitate the reader's ability to locate writings relevant to a particular topic or subtheme.

Subtheme A: General Overview
Aitken, J.E., & Neer, M. (1992). The public relations need in assessment reporting. Association for Communication Administration Bulletin, 81, 53–59.
Allen, R.R., & Brown, K.L. (Eds.). (1976). Developing communication competence in children: A report of the Speech Communication Association's national project on speech communication competencies. Skokie, IL: National Textbook Company.
Allen, R.R., & Wood, B.S. (1978). Beyond reading and writing to communication competence. Communication Education, 27, 286–292.
Anderson, R.S., & Speck, B.W. (Eds.). (1998). Changing the way we grade student performance: Classroom assessment and the new learning paradigm (New Directions for Teaching and Learning, No. 74). San Francisco, CA: Jossey-Bass.


Anonymous. (1979). The 7th annual SCA seminar on assessment of programs. Association for Communication Administration Bulletin, 30, 4–6.
Ashmore, T.M. (1980). The rhetoric of quality assessment. Association for Communication Administration Bulletin, 32, 26–32.
Backlund, P.M., Booth, J., Moore, M., Parks, A.M., & VanRheenen, D. (1982). A national survey of state practices in speaking and listening skill assessment. Communication Education, 31, 125–129.
Becker, S.L. (1980). The rhetoric of quality assessment: A no vote. Association for Communication Administration Bulletin, 32, 33–35.
Becker, S.L. (1984). Evaluation: Reactivity and validity. Association for Communication Administration Bulletin, 48, 46–48.
Clevenger, T., Jr. (1980). Evaluation of the quality assessment seminar. Association for Communication Administration Bulletin, 32, 47–50.
Cole, S.S. (1990). Outcomes assessment: The tip of the iceberg. Association for Communication Administration Bulletin, 74, 34–36.
Cooper, P.J. (1987). A response to assessment. Association for Communication Administration Bulletin, 60, 55–55.
Cooper, P.J., & Galvin, K.M. (Eds.). (1989). The future of speech communication education. Annandale, VA: Speech Communication Association.
Goldberg, A.A. (1980). The rhetoric of quality assessment: A response. Association for Communication Administration Bulletin, 32, 36–37.
Goulden, N.R. (1992). Theory and vocabulary for communication assessments. Communication Education, 41, 258–269.
Gray, P.A. (1984). Assessment of basic oral communication skills: A selected, annotated bibliography. Annandale, VA: Speech Communication Association.
Hay, E.A. (1992). A national survey of assessment trends in communication departments. Communication Education, 41, 247–257.
Hunt, G.T. (1990). The assessment movement: A challenge and an opportunity. Association for Communication Administration Bulletin, 72, 5–12.
Kurylo, A. (2007). Teaching about assessment in professional organizations. Communication Teacher, 21, 93–98.
Larson, C.E. (1978). Problems in assessing functional communication. Communication Education, 27, 304–309.
Larson, C.E., Backlund, P.M., Redmond, M.V., & Barbour, A. (1978). Assessing functional communication. Annandale, VA: Speech Communication Association.
McCroskey, J.C. (2007). Raising the question #8 assessment: Is it just measurement? Communication Education, 56, 509–514.
Morreale, S.P., & Backlund, P.M. (1996). Large scale assessment of oral communication: K-12 and higher education (2nd ed.). Annandale, VA: Speech Communication Association.
Morreale, S.P., Backlund, P.M., Hay, E.A., & Jennings, D.K. (Eds.). (2007). Large scale assessment of oral communication (3rd ed.). Washington, DC: National Communication Association.
Mottet, T.P. (2004). Seminar in communication assessment. Communication Teacher, 18, 111–115.
Rubin, D.L., & Mead, N.A. (1984). Large scale assessment of oral communication skills: Kindergarten through grade 12. Annandale, VA: Speech Communication Association.
Rubin, R.B. (1984). Communication assessment instruments and procedures in higher education. Communication Education, 33, 178–180.
Spitzberg, B.H. (1983). Communication competence as knowledge, skill, and impression. Communication Education, 32, 323–329.
Sullivan, J. (1980). Quality assessment: An insider's view. Association for Communication Administration Bulletin, 32, 38–40.
Wood, B.S. (1976). Children and communication: Verbal and nonverbal language development. Englewood Cliffs, NJ: Prentice-Hall.


Subtheme B: Rationale
Allen, T.H. (2002). Charting a communication pathway: Using assessment to guide curriculum development in a re-vitalized general education plan. Communication Education, 51, 26-39.
McCaleb, J.L. (Ed.). (1987). How do teachers communicate? A review and critique of assessment practices [Teacher Education Monograph No. 7]. Washington, DC: ERIC Clearinghouse on Teacher Education.

Subtheme C: Student Learning Outcomes


Backlund, P.M. (1985). SCA national guidelines for essential speaking and listening skills for elementary school students. Communication Education, 34, 185-195.
Backlund, P., Hay, E.A., Harper, S., & Williams, D. (1990, April). Assessing the outcomes of college: Implications for speech communication. Association for Communication Administration Bulletin, 72, 13-20.
Bassett, R.E., Whittington, N., & Staton-Spicer, A. (1978). The basics in speaking and listening for high school graduates: What should be assessed? Communication Education, 27, 293-303.
Canary, D.J., & MacGregor, I.M. (2008). Differences that make a difference in assessing student communication competence. Communication Education, 57, 41-63.
Christ, W.G. (Ed.). (1994). Assessing communication education: A handbook for media, speech, and theatre educators. Hillsdale, NJ: Erlbaum.
Clark, R.A. (2002). Learning outcomes: The bottom line. Communication Education, 51, 396-404.
Jones, E.A., & Melander, L. (1993). Speech communication skills for college students. University Park, PA: National Center on Postsecondary Teaching, Learning, and Assessment.
Litterst, J.K. (1990). Communication competency assessment of non-traditional students. Association for Communication Administration Bulletin, 72, 60-67.
Phillips, G.M., Kelly, L., & Rubin, R.B. (1991). Communication incompetencies: A theory of training oral performance behavior. Carbondale, IL: Southern Illinois University Press.
Quianthy, R.L. (1990). Communication is life: Essential college sophomore speaking and listening competencies. Annandale, VA: Speech Communication Association.
Rubin, R. (1985). Ethical issues in the evaluation of communication behavior. Communication Education, 34, 13-17.
Rubin, D., & Bazzle, R.E. (1981). Development of an oral communication assessment program: The Glynn County speech proficiency examination for high school students. Brunswick, GA: Glynn County Board of Education.
Rubin, D.L., & Hampton, S. (1998). National performance standards for oral communication K-12: New standards and speaking/listening/viewing. Communication Education, 47, 183-193.
Runkel, R. (1990). Assessing problem-solving abilities in the theatre curriculum: A cumulative and sequential approach. Association for Communication Administration Bulletin, 74, 44-48.
Smith, R.M., & Hunt, G.T. (1990). Defining the discipline: Outcome assessment and the prospects for communication programs. Association for Communication Administration Bulletin, 72, 1-4.
Vangelisti, A.L., & Daly, J.A. (1989). Correlates of speaking skills in the United States: A national assessment. Communication Education, 38, 132-143.
Wenger, P.E., & Fischbach, R.M. (1983). Speech communication instruction: An interdisciplinary assessment. Association for Communication Administration Bulletin, 45, 36-38.
Wood, B.S. (1977). Development of functional communication competencies grades K-6. Annandale, VA: Speech Communication Association.

Subtheme D: Program/Departmental Evaluation


Aitken, J.E., & Neer, M. (1992). A faculty program of assessment for a college level competency-based communication core curriculum. Communication Education, 41, 270-286.
Comer, K.C. (1987). Development of quality factors for college and university theatre programs. Association for Communication Administration Bulletin, 60, 11-14.
Downey, B.J. (1980). A proposed instrument for program quality assessment. Association for Communication Administration Bulletin, 32, 4-7.
Hagood, A.D. (1986). Program evaluation in a major university. Association for Communication Administration Bulletin, 56, 9-11.
Hay, E.A. (Ed.). (1992). Program assessment in speech communication. Annandale, VA: Speech Communication Association.
McBath, J.H. (1990). Use of departmental review as part of the assessment process. Association for Communication Administration Bulletin, 72, 38-44.
McGlone, E.L. (1984). Program evaluation and elimination: A case study. Association for Communication Administration Bulletin, 50, 19-25.
Nebergall, R.E. (1980). ACA seminar/workshop on assessment of programs: Report on session three. Association for Communication Administration Bulletin, 32, 45-46.
Parker, B.L., & Drummond-Reeves, S.J. (1992). Alumni outcomes assessment: Boise State University survey, 1990. Association for Communication Administration Bulletin, 79, 1-11.
Platt, R.W. (1985). External evaluation of small college communication programs. Association for Communication Administration Bulletin, 54, 40-42.
Reynolds, B. (1986). Program evaluation in undergraduate only institutions. Association for Communication Administration Bulletin, 56, 12-13.
Smith, R.M. (1990). Issues, problems, and opportunities in assessment of communication programs. Association for Communication Administration Bulletin, 72, 21-26.
Symons, J.M. (1990). Assessment guidelines for theatre programs in higher education. Association for Communication Administration Bulletin, 72, 35-37.
Taylor, A. (1983). Curriculum accreditation: A case against. Association for Communication Administration Bulletin, 44, 41-43.
Valentine, C.A. (1980). Program evaluations and standards: An overview. Association for Communication Administration Bulletin, 32(2), 8-12.

Subtheme E: Assessment Guidelines and Frameworks


Buerkel-Rothfuss, N.L. (1990). Communication competence: A test out procedure. Association for Communication Administration Bulletin, 72, 68-72.
Crocker-Lakness, J., Manheimer, S., & Scott, T. (1991). The Speech Communication Association's criteria for the assessment of oral communication. Annandale, VA: Speech Communication Association.
Hawkins, K.W. (1987). Use of the Rasch model in communication education: An explanation and example application. Communication Education, 36, 107-118.
Hay, E.A. (1990). Nontraditional approaches to assessment. Association for Communication Administration Bulletin, 72, 73-75.
King, P.E., & Witt, P.L. (2009). Teacher immediacy, confidence testing, and the measurement of cognitive learning. Communication Education, 58, 110-123.
Knight, M.E., & Lumsden, D. (1990). Outcomes assessment: Creating principles, policies, and faculty involvement. Association for Communication Administration Bulletin, 72, 27-34.
Parker, B.L., & Drummond-Reeves, S.J. (1992). Outcomes assessment research: Guidelines for conducting communication alumni surveys. Association for Communication Administration Bulletin, 79, 12-19.


Rubin, R.B. (Ed.). (1983). Improving speaking and listening skills: New directions for college learning assistance (No. 12). San Francisco, CA: Jossey-Bass.
Rubin, R.B., & Graham, E.E. (1988). Communication correlates of college success: An exploratory investigation. Communication Education, 37, 14-28.
Rubin, R.B., Graham, E.E., & Mignerey, J.T. (1990). A longitudinal study of college students' communication competence. Communication Education, 39, 1-14.
Spitzberg, B.H., & Hurt, H.T. (1987). The measurement of interpersonal skills in instructional contexts. Communication Education, 36, 28-45.
Stiggins, R.J., Backlund, P.M., & Bridgeford, N.J. (1985). Avoiding bias in the assessment of communication skills. Communication Education, 34, 135-141.
Willmington, S.C. (1989). Oral communication assessment procedures and instrument development. Association for Communication Administration Bulletin, 69, 72-78.

Subtheme F: Assessment in Specific Contexts and Courses


Backlund, P.M., Brown, K.L., Gurry, J., & Jandt, F. (1982). Recommendations for assessing speaking and listening skills. Communication Education, 31, 9-17.
Bostrom, R.N., & Brown, M.H. (1990). Listening behavior: Measurement and application. New York, NY: Guilford Press.
Chesebro, J.W., McCroskey, J.C., Atwater, D.F., Bahrenfuss, R.M., Cawelti, G., Gaudino, J.L., & Hodges, H. (1992). Communication apprehension and self-perceived communication competence of at-risk students. Communication Education, 41, 345-360.
Erickson, J.G., & Omark, D.R. (Eds.). (1981). Communication assessment of the bilingual, bicultural child: Issues and guidelines. Baltimore, MD: University Park Press.
Ford, W.S.Z., & Wolvin, A.D. (1993). The differential impact of a basic communication course on perceived communication competencies. Communication Education, 42, 215-223.
Redmond, M.V. (1998). Outcomes assessment and the capstone course in communication. Southern Communication Journal, 64, 68-75.
Ritter, E.M. (1977). Accountability for interpersonal communication instruction: A curriculum perspective. Central States Speech Journal, 28, 204-209.
Rubin, R.B., & Feezel, J.D. (1985). Teacher communication competence: Essential skills and assessment procedures. Central States Speech Journal, 36, 4-13.
Rubin, R.B., Welch, S.A., & Buerkel, R. (1995). Performance-based assessment of high school speech instruction. Communication Education, 44, 30-39.
Trank, D.M., & Steele, J.M. (1983). Measurable effects of a communication skills course: An initial study. Communication Education, 32, 227-236.

Subtheme G: Assessment Strategies and Techniques


Angelo, T.A., & Cross, K.P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.
Arneson, P., & Arnett, R.C. (1998). The praxis of narrative assessment: Communication competence in an information age. Journal of the Association for Communication Administration, 27, 44-58.
Backlund, P. (1992). Using student ratings of faculty in the instructional development process. Association for Communication Administration Bulletin, 81, 7-12.
Dick, R.C., & Robinson, B.M. (1992). Assessing self-acquired competency portfolios in speech communication: National and international issues. Association for Communication Administration Bulletin, 81, 60-68.
Emmert, P., & Barker, L.L. (1989). Measurement of communication behavior. New York, NY: Longman.


Larson, V.L., & McKinley, N.L. (1987). Communication assessment and intervention strategies for adolescents. Eau Claire, WI: Thinking Publications.
Lederman, L.C. (1990). Assessing educational effectiveness: The focus group interview as a technique for data collection. Communication Education, 39, 117-127.
Lederman, L.C., & Ruben, B.P. (1984). Systematic assessment of communication games and simulations: An applied framework. Communication Education, 33, 152-159.
Malinauskas, M.J. (1990). Get what you like: An assessment scheme. Association for Communication Administration Bulletin, 72, 76-79.
McNeilis, K.S. (2002). Assessing communication competence in the primary care medical interview. Communication Studies, 53, 400-428.
Morreale, S.P., Brooks, M., Berko, R., & Cooke, C. (Eds.). (1994). 1994 summer conference proceedings and prepared remarks: Assessing college student competency in speech communication. Annandale, VA: Speech Communication Association.
Neer, M.R. (1989). The role of indirect tests in assessing communication competence. Association for Communication Administration Bulletin, 69, 64-71.
Northwest Regional Educational Laboratories. (1998). Improving classroom assessment: A toolkit for professional developers. Portland, OR: Northwest Regional Educational Laboratory.
Rubin, D.L., Daly, D., McCroskey, J.C., & Mead, N.A. (1982). A review and critique of procedures for assessing speaking and listening skills among preschool through grade twelve students. Communication Education, 31, 285-303.
Saraceni, I.J. (1990). The video camera as an assessment tool in the acting class. Association for Communication Administration Bulletin, 74, 37-43.
Stiggins, R.J. (Ed.). (1981). Using performance rating scales in large-scale assessments of oral communication proficiency. Portland, OR: Clearinghouse for Applied Performance Testing.
Stitt, J.K., Simonds, C.J., & Hunt, S.K. (2003). Evaluation fidelity: An examination of criterion-based assessment and rater training in the speech communication classroom. Communication Studies, 54, 341-353.
Young, R., & He, A.W. (Eds.). (1998). Talking and testing: Discourse approaches to the assessment of oral proficiency. Philadelphia, PA: John Benjamins.

Subtheme H: Assessment Instruments


Bostrom, R.N. (1990). Assessing achievement with standardized tests: The NTE speech communication examination. Association for Communication Administration Bulletin, 72, 45-50.
Carlson, R.E., & Smith-Howell, D. (1995). Classroom public speaking assessment: Reliability and validity of selected evaluation instruments. Communication Education, 44, 87-97.
Hayes, D.T. (1978). Toward validation of a measure of speech experience for prediction in the basic college-level speech communication course. Communication Studies, 29, 20-24.
Morreale, S.P. (2007). Assessing motivation to communicate (2nd ed.). Washington, DC: National Communication Association.
Morreale, S.P., Moore, M., Surges-Tatum, D., & Webster, L. (2007). Competent speaker speech evaluation form (2nd ed.). Washington, DC: National Communication Association.
Papa, M.J., & Graham, E.E. (1991). The impact of diagnosing skill deficiencies and assessment-based communication training on managerial performance. Communication Education, 40, 368-384.
Rubin, R.B. (1982). Assessing speaking and listening competence at the college level: The communication competency assessment instrument. Communication Education, 31, 19-32.
Rubin, R.B. (1985). The validity of the communication competency assessment instrument. Communication Monographs, 52, 173-185.
Rubin, R.B., & Martin, M.M. (1994). Development of a measure of interpersonal communication competence. Communication Research Reports, 11, 33-44.


Rubin, R.B., & Roberts, C.V. (1987). A comparative examination and analysis of three listening tests. Communication Education, 36, 142-153.
Rubin, R.B., Sisco, J., Moore, M.R., & Quianthy, R. (1983). Oral communication assessment procedures and instrument development in higher education. Annandale, VA: Speech Communication Association.
Sayer, J.E., & Chase, L.J. (1978). Contemporary graduate study: Evaluating the post-coursework comprehensive written examination. Association for Communication Administration Bulletin, 26, 30-33.
Spitzberg, B.H. (2007). Conversational skills rating scale (2nd ed.). Washington, DC: National Communication Association.
Thomson, S., & Rucker, M.L. (2002). The development of a specialized public speaking competency scale: Test of reliability. Communication Research Reports, 19, 18-28.
Watson, K.W., & Barker, L.L. (1983). Watson-Barker listening test. Auburn, AL: Spectra.
Watson, K.W., Barker, L.L., & Roberts, C.V. (1989). Watson-Barker high school listening test: Development and administration. Auburn, AL: Spectra.
