
Rachel Ann L. Reyes
Jaime Duncan
May 5, 2012
Prof. Celerino Baclaan

ASSESSMENT
Geoff Brindley

I. Terminology and Key Concepts
A. Assessment - refers to a variety of ways of collecting information on a learner's language ability or achievement. It is an umbrella term encompassing measurement instruments administered on a one-off basis (i.e. tests), as well as qualitative methods of monitoring and recording student learning (e.g. observation, simulations or project work).
B. Evaluation - concerned with the overall language program and not just with what individual students have learnt.

II. Kinds of Assessment
A. According to Purpose
1. Proficiency Assessment - measurement of general language abilities acquired by the learner independent of a course of study (often done through the administration of standardized commercial language-proficiency tests).
2. Assessment of Achievement - aims to establish what a student has learned in relation to a particular course or curriculum (frequently carried out by the teacher). It may be based either on the specific content of the course or on the course objectives (Hughes, 1989).
B. According to Coverage
1. Formative Assessment - carried out by teachers DURING the learning process, with the aim of using the results to improve instruction.
2. Summative Assessment - carried out at the END of the course, term or school year, often for the purpose of providing aggregated information on program outcomes to educational authorities.
C. According to Interpretation
1. Norm-referenced - ranks learners in relation to each other (e.g. a score or percentage in an examination reports a learner's standing compared to other candidates).
2. Criterion-referenced - learners' performance is described in relation to an explicitly stated standard (e.g. a person's ability may be reported in terms of a "can-do" statement describing the kinds of tasks he or she can perform using the target language, such as "can give personal information").

III. Key Requirements
A. Validity
1. Construct validity - the extent to which the content of the test/assessment reflects current theoretical understanding of the skill(s) being assessed.
2. Content validity - whether the test represents an adequate sample of the ability being assessed.
3. Criterion-related validity - the extent to which the results correlate with other independent measures of ability.
4. Consequential validity - the extent to which a test or assessment serves the purposes for which it is intended (Messick, 1980, 1989).
B. Reliability - concerned with ascertaining to what degree scores on tests or assessments are affected by measurement error, i.e. variation in scores caused by factors unrelated to the ability being assessed (e.g. conditions of administration, test instructions, fatigue, guessing, etc.).
- The consistency of test results over time can be estimated in terms of test-retest reliability, where the same test is given to a group at two different points in time, or by administering two equivalent forms of the same test.
- To examine whether performance is consistent across different parts of the same test, various kinds of internal consistency estimates can be calculated (Bachman 1990; J.D. Brown 1996).
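For concreteness (the formulas below are standard in the measurement literature and are not part of Brindley's summary): test-retest reliability is usually reported as the correlation between scores from the two administrations, and a common internal consistency estimate is Cronbach's alpha, where k is the number of items, \sigma^2_{Y_i} the variance of scores on item i, and \sigma^2_X the variance of total test scores:

\[
r_{\text{test-retest}} = \frac{\operatorname{Cov}(X_1, X_2)}{\sigma_{X_1}\,\sigma_{X_2}},
\qquad
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)
\]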

IV. Purposes
Assessment is carried out to collect information on learners' language proficiency and/or achievement that can be used by the stakeholders in language learning programs for the following purposes:
1. Selection
2. Certification
3. Accountability
4. Diagnosis
5. Instructional decision-making
6. Motivation
V. Background

1960s-1970s [Objective tests: large-item multiple choice]
Influence of structural linguistics. Language tests were designed to assess learners' mastery of different areas of the linguistic system, such as phoneme discrimination, grammatical knowledge and vocabulary.
Disadvantage: Provides no information on learners' ability to use language for communicative purposes.

1970s-early 1980s [Integrative tests: cloze tests and dictation, which require learners to use linguistic and contextual knowledge]
Unitary Competence Hypothesis - there is a single general proficiency factor which underlies test performance (Oller 1976; Oller and Hinofotis 1980).
Critique: No single test can give an accurate picture of an individual's proficiency; a range of different assessment procedures is necessary (Cohen, 1994).
Problem: Integrative tests are indirect tests. They do not require the testee to demonstrate the language skills they would need to use in order to communicate in the real world.

1990-present [Communicative Language Teaching (CLT)]
Assessments are direct: language tests and assessments contain the language-use situations that learners would encounter in using the language for communicative purposes in everyday life.
Activities: Oral interviews, listening to and reading extracts from the media, and various kinds of authentic writing tasks which reflect real-life demands (Weir, 1990, 1993).

VI. Research in Language Assessment

Researcher(s) - Description
- Clapham and Corson (1997) - Current issues in language assessment
- Douglas (1995) - A summary of trends in language testing
- Shohamy (1995) - Particular issues and problems involved in the assessment of language performance
- Hamayan (1995) - Describes a variety of assessment procedures not involving the use of formal tests
- Kunnan (1997) - Categorizes over a hundred language testing research studies in terms of the framework for language test validation proposed by Messick (1980)
- Brindley (1998b), Turner (1998), Perkins (1998), Kroll (1998) - Current developments and research in the assessment of the skills of listening, speaking, reading, and writing

A. Two important issues in Language Assessment
1. Research into the Nature of Communicative Language Ability
What does it mean to know how to use a language? (Spolsky, 1985)
Two approaches to construct definition (McNamara, 1996):
a. Compiling detailed specifications of the features of the target language performances which learners have to carry out, often on the basis of an analysis of communicative needs (Shohamy, 1995).

b. Employing a theoretical model of language ability as a basis for constructing tests and assessment tools (Canale and Swain 1980; Bachman 1990; Palmer 1996).
Critique: Assessments based on the "real-life" approach, which take the context of language use as the point of departure, are considered problematic by many measurement specialists, since they are not based on an underlying theoretical model of communicative language ability and thus lack generalisability beyond the assessment situation (Bachman 1990; Shohamy 1995; McNamara 1996).
2. Research into Self-assessment of Language Ability
Problem: A good deal of the assessment taking place in language learning classrooms is aimed not so much at measuring outcomes as at improving the quality of learning and instruction.

Proponents have argued that participating in self-assessment can assist learners to become skilled judges of their own strengths and weaknesses and to set realistic goals for themselves, thus developing their capacity to become self-directed (Dickinson 1987; Oscarson 1997). Research suggests that, with training, learners are capable of self-assessing their language ability with reasonable accuracy (Blanche and Merino 1989). However, the concept of self-assessment may be quite unfamiliar and threatening to many learners, since it alters traditional teacher-learner relationships (Blue 1994; Heron 1988).

VII. Practice
Direct assessment of language performance is time-consuming and therefore expensive, particularly individual testing. Achievement assessment may also be very resource-intensive. Given the potential practical problems arising when new tests or systems are introduced into an existing curriculum, assessment researchers have argued that institutions need to consider resourcing requirements carefully at both the planning and implementation stages. If teachers are required to construct and administer their own assessment tasks, it is crucial to provide adequate support (e.g. professional development, materials development and rater training) and to establish systems for ensuring the quality of the assessment tools used (Bottomley et al. 1994; Brindley 1998a).

VIII. Current and Future Trends and Directions

Second language acquisition (SLA) research increasingly overlaps with language assessment research (Bachman and Cohen, 1998). Methods of language analysis developed by SLA researchers are increasingly used to investigate language use in assessment situations (e.g. Ross 1992; Young and Milanovic 1992; Lazaraton 1996), and the results of such research are increasingly employed in constructing tests and assessment procedures (Fulcher 1996b). The notion of what it means to know how to use a language continues to be refined and elaborated.

One development is the expansion of recent models of communicative language ability to include what were previously regarded as non-language factors, such as personality and background knowledge. The framework of Bachman and Palmer (1996) includes the test taker's topical knowledge (knowledge of the world that can be mobilized in tests) and affective schemata (emotional memories influencing the way test takers behave).
- This is an important development, since it recognizes the key role that personal characteristics may play in language performance and opens the way for the development of assessment procedures which attempt to build such factors into the assessment situation.
- However, as McNamara (1996) points out, it also opens a Pandora's box because of the complexity of such variables and the challenges involved in adequately measuring them.

Computer-adaptive assessment enables tasks to be tailored to the test taker's level of ability and enables test takers to receive immediate feedback on their performance (Gruba and Corbel 1997); a brief illustrative sketch follows at the end of this section. Computerized versions of major proficiency tests are increasingly available worldwide (Educational Testing Service 1998). Researchers are investigating ways of bringing together electronically mediated communication, computer technology and linguistic analysis in language tests, including automated scoring of open-ended responses, video-mediated testing, and handwriting and speech recognition (Burstein et al. 1996; J.D. Brown 1997). The potential of the internet for the delivery of language tests is increasingly being exploited.

More attention is being paid to the assessment of achievement, which was neglected in the past (Weir 1993; Brindley 1998).
- There has been an increase in the use of alternative methods of assessing and recording achievement which can capture the outcomes of learning that occur in the classroom but which do not involve standardized tests.
- These methods include structured observation, progress grids, learning journals, project work, teacher-developed tasks, peer-assessment and self-assessment (Brindley 1989; Cohen 1994; Hamayan 1995; Genesee and Upshur 1996; Bailey 1998; Shohamy 1998).
- Although research has provided some information on how these methods are used in language programs, the nature and extent of their impact on learning have yet to be fully investigated. One notable gap in the context of language learning concerns the nature and use of teacher-constructed assessment tasks, a question which has been explored in some depth in general education (Linn and Burton 1994).
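As an illustration only (this sketch is not from Brindley's chapter), the core idea behind computer-adaptive assessment can be expressed as a short loop: the learner's ability estimate is updated after each response, and the next task is chosen to match that estimate. The item pool, the difficulty values and the simple step-size update below are hypothetical choices made for the sake of the example; operational systems typically rely on item response theory rather than a fixed step.

# Hypothetical item pool: task name -> difficulty on an arbitrary scale
items = {"easy": -1.0, "medium": 0.0, "hard": 1.0, "very hard": 2.0}

def next_item(ability, remaining):
    # Choose the unused task whose difficulty is closest to the current estimate
    return min(remaining, key=lambda name: abs(items[name] - ability))

def run_adaptive_test(answers, ability=0.0, step=0.5):
    remaining = set(items)
    while remaining:
        task = next_item(ability, remaining)
        remaining.remove(task)
        correct = answers(task)                 # test taker responds to the task
        ability += step if correct else -step   # simple up/down ability update
        # immediate feedback after each task
        print(f"{task}: {'correct' if correct else 'wrong'} -> estimate {ability:+.1f}")
    return ability

# Example: a test taker who handles the easier tasks but misses the harder ones
final = run_adaptive_test(lambda task: items[task] <= 0.0)
print("final ability estimate:", final)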

IX. Conclusion
Language assessment is a complex and rapidly evolving field which underwent significant change in the 1990s. Considerable progress has been made from a theoretical perspective: the models of ability now underlying language tests are much more sophisticated than the somewhat crude skills-based models of earlier periods. Progress has also been made in test analysis with the advent of measurement techniques which can model the multiple factors involved in test performance (Bachman and Eignor 1997). Despite these advances, a number of important areas are in urgent need of further investigation:

- More data-based studies of language skills in use are needed to increase our knowledge of the nature of language ability.
- Cost-effective ways of integrating teaching and assessment need to be found.
- In order to formulate ethical standards of practice, we need to find out more about the ways in which tests and other assessments are used.
Only through the systematic exploration of such questions will it eventually be possible to improve the quality of the information that language assessment provides.

Reference: Carter, Ronald and David Nunan (eds.). The Cambridge Guide to Teaching English to Speakers of Other Languages. Cambridge: Cambridge University Press, 2001.
