
UNIT BAHASA INGGERIS IPGKTAA

alarm clock, petrol pump, speedometer, meeting new people, opinion of this class

Assessment - The process of measuring something with the purpose of assigning a numerical value. Scoring - The procedure of assigning a numerical value to an assessment task. Evaluation - The process of determining the worth of something in relation to established benchmarks, using assessment information. Test - An instrument or activity used to accumulate data on a person's ability to perform a specified task. In kinesiology, the content of these tests is usually cognitive, skill, or fitness.

A measurement takes place when a test is given and a score is obtained. If the test collects quantitative data, the score is a number. If the test collects qualitative data, the score may be a phrase or word such as "excellent".
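As an illustration only (not part of the original notes), the quantitative/qualitative distinction can be made concrete in a few lines of Python; the Score alias and function name below are invented for the sketch:

```python
from typing import Union

# A score is quantitative (a number) or qualitative (a word or phrase
# such as "excellent"). Names here are invented for illustration.
Score = Union[float, str]

def describe_score(raw: Score) -> str:
    if isinstance(raw, (int, float)):
        return f"quantitative score: {raw}"
    return f"qualitative score: {raw!r}"

print(describe_score(87.5))         # quantitative data -> a number
print(describe_score("excellent"))  # qualitative data -> a word
```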

Not the same as testing! An ongoing process to ensure that the course/class objectives and goals are met. A process, not a product. A test is a form of assessment. (Brown, 2004, p. 5)

Informal assessment can take a number of forms:


unplanned comments, verbal feedback to students, observing students perform a task or work in small groups, and so on.

Formal assessments are exercises or procedures which are:


systematic, and give students and teachers an appraisal of students' achievement. Tests are an example.

Multiple-choice, true-false, matching; norm-referenced and criterion-referenced tests

Norm-referenced tests
standardized tests (SPM, STPM, CPT, TOEFL, IELTS); place test-takers on a mathematical continuum in rank order
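As a hedged sketch (the real scaling behind exams such as SPM or TOEFL is far more elaborate), placing test-takers on a continuum can be illustrated with z-scores, which express each raw score relative to the group mean; all names and scores below are invented:

```python
import statistics

# Invented raw scores for illustration
raw_scores = {"Aina": 72, "Ben": 58, "Chen": 85, "Dina": 64}

mean = statistics.mean(raw_scores.values())
stdev = statistics.stdev(raw_scores.values())

# z-score: how many standard deviations a score sits above/below the mean
z = {name: (score - mean) / stdev for name, score in raw_scores.items()}

# Rank order, highest raw score first
ranked = sorted(raw_scores, key=raw_scores.get, reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}: raw={raw_scores[name]}, z={z[name]:+.2f}")
```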

Criterion-referenced tests
give test-takers feedback on specific objectives (criteria); test the objectives of a course, which gives them instructional value

Authentic assessment
reflects student learning, achievement, motivation, and attitudes on instructionally relevant classroom activities (O'Malley & Valdez, 1996).

Examples:
performance assessment, portfolios, self-assessment

Motivation, achievement, improvement, diagnosis, prescription, grading, classification, prediction

Relevance, educational value, economic value, time, norms, bias, safety

Diagnose students' strengths and needs; provide feedback on student learning; provide a basis for instructional placement; inform and guide instruction; communicate learning expectations; motivate and focus students' attention and effort; provide practice applying knowledge and skills

Provide a basis for evaluation for the purpose of:


grading, promotion/graduation, program admission/selection, accountability, and gauging program effectiveness

1920–World War II: Screening (norm-referenced testing began)

Based on the bell curve; uses standardized tests; compares students to students; aims to create a spread of scores.
Item analysis distinguishes items: high achievers get them correct and low achievers get them wrong (a small sketch of this follows below).
Used for screening people in and out.
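A hedged sketch of that item analysis, using invented data: the discrimination index D is the proportion of high scorers who answered an item correctly minus the proportion of low scorers who did, so items near D = +1 separate high from low achievers:

```python
# Each row: (total test score, answered this item correctly) - invented data
results = [
    (95, True), (90, True), (88, True), (85, False),
    (60, True), (55, False), (50, False), (45, False),
]

results.sort(key=lambda row: row[0], reverse=True)  # best totals first
half = len(results) // 2
upper, lower = results[:half], results[half:]       # high vs. low achievers

p_upper = sum(correct for _, correct in upper) / len(upper)
p_lower = sum(correct for _, correct in lower) / len(lower)

# D near +1: item discriminates well; D near 0 or negative: weak item
print(f"discrimination index D = {p_upper - p_lower:+.2f}")
```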

1970: Criterion-referenced testing began

Specific standards are established. Certain information/learning is necessary to continue to the next steps of learning. Students' learning is compared to the criteria or standards (NOT to each other). Assumption: if students do not reach the standards, find other means of teaching them. Banks of test items are created to match different types of curriculum (mostly multiple-choice items).
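A minimal sketch of the comparison-to-criterion logic, with an invented 80% mastery cut-off and invented scores (criterion-referenced programs set their own standards):

```python
MASTERY_CUTOFF = 0.80  # invented cut-off for illustration

proportion_correct = {"Aina": 0.92, "Ben": 0.74, "Chen": 0.81}

for name, p in proportion_correct.items():
    if p >= MASTERY_CUTOFF:
        print(f"{name}: criterion met, continue to next learning step")
    else:
        # Per the slide's assumption: find other means of teaching,
        # then reassess against the same standard.
        print(f"{name}: criterion not met, reteach and reassess")
```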

1980s: Authentic assessment

Not all of what we teach can be assessed by paper-and-pencil tests or by multiple-choice items. Students need to demonstrate what they have learned: performance-based assessment (based on constructivist learning theory). Assessment is different from testing or grading (closer to diagnosis). Multiple means of assessment are used.

Rubrics (specific criteria; teaching is planned around the criteria). Includes attention to non-academic or difficult-to-assess areas:
cooperative learning, critical thinking skills, social learning

Differentiates summative vs. formative assessment. Portfolios.

Rubrics may have layers, may be based on developmental levels, and may be weighted for different categories at different times of the year (a weighted-scoring sketch follows after this list). Be sensitive to when a rubric is presented to learners:
too early = overwhelming (you haven't taught it yet); too late = not useful for modifying or developing products

Eventually students can develop rubrics themselves (once they have internalized the criteria).
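A minimal sketch of category weighting in a rubric; the categories, weights, and 1-4 scale below are invented, and the weights could be changed at different times of the year:

```python
# Invented rubric: category -> weight (weights sum to 1.0)
weights = {"content": 0.5, "organization": 0.3, "mechanics": 0.2}

# Scores for one piece of student work on a 1-4 scale (invented)
scores = {"content": 3, "organization": 4, "mechanics": 2}

weighted_total = sum(weights[c] * scores[c] for c in weights)
print(f"weighted rubric score: {weighted_total:.2f} / 4.00")  # 3.10 / 4.00
```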

Development of assessment in the Malaysian setting:


What are the changes? Why is there a need for the changes? How do these changes influence the education system, society, etc.?

Pre-assessment (diagnostic): pretests, observations, journals/logs, discussions, questionnaires, interviews

Formative (ongoing): quizzes, discussions, assignments, projects, observations, portfolios, journal logs

Summative (final): standardized tests, teacher-made tests, portfolios, projects

How would you document a student's performance during a discussion? Which types of assessment noted in the chart could be considered authentic assessment?

Practicality, reliability, validity, authenticity, washback

An effective test is practical


is not excessively expensive; stays within appropriate time constraints; is relatively easy to administer; and has a scoring/evaluation procedure that is specific and time-efficient

A reliable test is consistent and dependable. If you give the same test to the same students on two different occasions, the test should yield similar results.
Student-related reliability, rater reliability, test administration reliability, test reliability
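As a hedged illustration of that idea (not a procedure from these notes): giving the same test twice and correlating the two sittings gives a simple test-retest reliability estimate; the scores below are invented:

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Invented scores: the same five students on two sittings of one test
first_sitting = [72, 58, 85, 64, 90]
second_sitting = [70, 61, 83, 66, 88]

r = correlation(first_sitting, second_sitting)
print(f"test-retest reliability estimate: r = {r:.2f}")  # close to 1.0
```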

The most common issues in student-related reliability are temporary illness, fatigue, a bad day, anxiety, and other physical and psychological factors, which may make an observed score deviate from a true score.

Human error, subjectivity, and bias may enter into the scoring process. Inter-rater unreliability occurs when two or more scorers yield inconsistent scores for the same test, possibly because of a lack of attention to scoring criteria, inexperience, inattention, or even preconceived bias toward particular "good" and "bad" students.
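A minimal sketch of checking two raters against each other, with invented band ratings: exact agreement is the simplest measure (chance-corrected statistics such as Cohen's kappa go further):

```python
# Invented band ratings by two raters for the same six essays
rater_a = ["A", "B", "B", "C", "A", "B"]
rater_b = ["A", "B", "C", "C", "A", "A"]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"exact agreement: {agreement:.0%}")  # 4 of 6 essays, about 67%
```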

Test administration reliability deals with the conditions in which the test is administered:
street noise outside the building, bad equipment, room temperature, the condition of chairs and tables, photocopying variation

Test reliability concerns the test itself: the test may be too long, or test items may be poorly written or ambiguous.

A test is valid if it actually assesses the objectives and what has been taught.
Content validity, criterion validity (tests the objectives), construct validity, consequential validity, face validity

A test is valid if the teacher can clearly define the achievement that he or she is measuring. A test of tennis competency that asks someone to run a 100-yard dash lacks content validity. If a teacher uses the communicative approach to teach speaking and then uses the audiolingual method to design test items, the test is going to lack content validity.

The extent to which the objectives of the test have been measured or assessed. For instance, if you are assessing reading skills such as scanning and skimming, how are the exercises designed to test these objectives? In other words, the test is valid if the objectives taught are the objectives tested, and the items are actually testing these objectives.

A construct is an explanation or theory that attempts to explain observed phenomena. If you are testing vocabulary and the lexical objective is to use the lexical items for communication, a test that asks students to write definitions will not match the construct of communicative language use.

Accuracy in measuring intended criteria; its impact on the preparation of test-takers; its effect on the learner; the social consequences of a test's interpretation (e.g. an exit exam for pre-basic students at El Colegio, the College Board)

Face validity refers to the degree to which a test looks right, and appears to measure the knowledge or ability it claims to measure:
a well-constructed, expected format with familiar tasks; a test that is clearly doable within the allotted time limit; directions that are crystal clear; tasks that relate to the course (content validity); a difficulty level that presents a reasonable challenge

The language in the test is as natural as possible; items are contextualized rather than isolated; topics are relevant and meaningful for learners; some thematic organization to items is provided; tasks represent, or closely approximate, real-world tasks

Washback refers to the effects tests have on instruction, in terms of how students prepare for the test. Cram courses and "teaching to the test" are examples of such washback. In some cases the student may learn while working on a test or assessment. Washback can be positive or negative.

Self- and peer-assessments


Oral production: student self-checklist, peer checklist, offering and receiving a holistic rating of an oral presentation. Listening comprehension: listening to TV or radio broadcasts and checking comprehension with a partner. Writing: revising work on your own, peer editing. Reading: reading textbook passages followed by self-check comprehension questions; self-assessment of reading habits (Brown, 2001, p. 416).

Performance assessment - any form of assessment in which the student constructs a response orally or in writing. It requires the learner to accomplish a complex and significant task, while bringing to bear prior knowledge, recent learning, and relevant skills to solve realistic or authentic problems (O'Malley & Valdez, 1996; Herman et al., 1992).

Portfolio assessment, student self-assessment, peer assessment, student-teacher conferences, oral interviews, writing samples, projects or exhibitions, experiments or demonstrations

Constructed response, higher-order thinking, authenticity, integrative, process and product, depth versus breadth

Specify to students the purpose of the journal. Give clear directions to students on how to get started (prompts, for instance "I was very happy when…"). Give guidelines on the length of each entry. Be clear yourself on the principal purpose of the journal. Help students to process your feedback, and show them how to respond to your responses.

Commonly used when teaching writing. One-on-one interaction between teacher and student. Conferences are formative assessment, as opposed to offering a final grade or a summative assessment; in other words, they are meant to provide guidance and feedback.

Commonly used with the communicative language teaching (CLT) approach. A portfolio is a collection of a student's work that demonstrates to the student and others the effort, progress, and achievement in a given area. You can have a reading portfolio or a writing portfolio, for instance. You can also have a reflective or assessment portfolio, as opposed to collecting every piece of evidence for each objective achieved in the course.

Specify the purpose of the portfolio. Give clear directions to students on how to get started. Give guidelines on acceptable materials or artifacts. Collect portfolios on pre-announced dates and return them promptly. Help students to process your feedback. Establish a rubric to evaluate the portfolio and discuss it with your students.

Cooperative test construction involves the students' contribution to the design of test items. It is based on the concept of collaborative and cooperative learning, in which students are involved in the process (Brown, 2001, p. 420).
