
Notes in Educ 8 (Assessment of Student Learning 2)

II. Establishing High-Quality Classroom Assessments

Classroom assessment consists of determining purpose and learning targets, systematically obtaining information from students, interpreting the information collected, and using the information. High-quality classroom assessments are technically sound and provide results that demonstrate and improve targeted student learning. They also inform instructional decision making.

CRITERIA FOR ENSURING HIGH-QUALITY CLASSROOM ASSESSMENTS

A. Clear and Appropriate Learning Targets
A learning target includes both what students should know and be able to do and the criteria for judging student performance.

B. Appropriateness of Assessment Methods
Even though most targets may be measured by several methods, the reality of teaching is that certain methods measure some types of targets better than other methods do. Particular methods are more likely to provide quality assessments for certain types of targets. Once you have identified the targets, match them with methods.

Types of Assessment Methods
1. Objective test: structures both the students' responses and how they are scored. Typically, students either select a response from two or more possibilities or supply a one- or two-word answer to a question. The answers are then scored objectively, in the sense that each item is marked correct or incorrect according to pre-established guidelines. The major types of objective tests are supply type (short answer and completion) and selection type (multiple choice, true/false, and matching).
2. Essay test: a paper-and-pencil assessment that allows students to construct a response ranging from several sentences to several pages in length. Essay tests are typically restricted-response or extended-response, depending on the degree of freedom given to the student.
3. Performance-based assessment: requires students to demonstrate a skill or proficiency by asking them to create, produce, or do something, often in a setting that involves real-world applications. Examples include paintings, speeches, musical presentations, demonstrations, research papers, athletic performances, exhibitions, and other products that require students to construct a unique response to a task.

4. Oral question: used continuously during instruction to monitor student understanding. Teachers ask students questions about content or process, or engage students in verbal interaction individually or in groups. This method includes oral examinations, interviews, conferences, and other conversations in which information about student learning is obtained.
5. Observation: teachers constantly observe students informally to assess their understanding and progress. Teachers watch students as they respond to questions and study, and listen to them as they speak and discuss with others.
6. Self-report: students are asked to complete a form or answer questions that reveal how they think about themselves or how they rate themselves.

Matching Targets with Methods
1. Knowledge
a. Well-constructed objective tests do a good job of assessing subject-matter and procedural knowledge, particularly when students must recognize or remember isolated facts, definitions, spellings, concepts, and principles.
b. Asking students questions orally about what they know is also an effective way to assess knowledge.
c. Essays can be used effectively to assess knowledge when your objective is for students to learn large, related chunks or structures of knowledge.
2. Reasoning
a. Reasoning is demonstrated most efficiently in essays that ask students to compare, evaluate, critique, provide justification for, organize, integrate, defend, and solve problems.
b. Performance-based assessments are also quite effective in measuring reasoning skills, as long as the product or demonstration clearly illustrates procedures that reveal reasoning, or is a performance from which reasoning can be inferred.
c. Objective questions can be an excellent method for assessing certain aspects of reasoning when the item demands more than simply recalling or recognizing a fact.
d. Student self-reports of the reasoning they used in answering a question or solving a problem can help you diagnose learning difficulties.
3. Skills
a. Performance-based methods are clearly the preferred way to determine systematically whether or not a student has mastered a skill. On a more informal basis, teachers use observation extensively to assess progress in demonstrating skills.

b. Objective tests and oral questioning can be used to assess students' knowledge of the skills, such as knowing the proper sequence of actions or recognizing the important dimensions of the skill.
4. Products
a. Performance-based assessment is the best way to assess student products.
b. Use objective items, essay items, and oral questions to determine whether students know the components of the product or to evaluate different products.
5. Affect
a. Affect refers to attitudes, values, feelings, self-concept, interests, and other feelings and beliefs. Affective outcomes are best assessed either by observing students or by using student self-reports. The most direct and efficient way to assess affect is to ask the students directly through self-report surveys and questionnaires.
b. Observation can be effective in determining affect informally. For example, affective traits are often apparent when a student shows negative feelings through body posture.
c. Some performance-based assessments provide ample opportunities for teachers to observe affect, though these observations are usually non-systematic and inferences are required.

C. Validity
Validity refers to the extent to which a test serves its purpose, or the efficiency with which it measures what it intends to measure. It is a characteristic that pertains to the appropriateness of the inferences, uses, and results of the test or any other method used to gather data. Validity concerns the inferences drawn from results, not the test itself; in reality, the same test or instrument can be valid for one purpose and invalid for another. Validity is always a matter of degree, depending on the situation. For example, a social science test may have high validity for inferring that students know the sequence of events leading up to the American Revolution, less validity for inferring that students can reason, and even less validity for inferring that students can communicate effectively in writing.

How is Validity Determined?
Validity is always determined by professional judgment, made by the user of the information (for classroom assessment, the teacher).

Sources of information for validity:
1. Content-Related Evidence: determines the extent to which the assessment is representative of the domain of interest. Once the content domain is specified, review the test items to be assured that there is a match between the intended inferences and what is on the test. A table of specification, a two-way chart that maps the content topics against the targets or cognitive levels to be assessed, will help further delineate what targets should be assessed and what is important from the content domain.

2. Criterion-Related Evidence: determines the relationship between an assessment and another measure of the same trait. It provides validity by relating an assessment to some valued measure (criterion) that can either provide an estimate of current performance or predict future performance. The principle is that when you have two or more measures of the same thing, and these measures give similar results, you have established criterion-related evidence. For example, if you are interested in the extent to which preparation by your students, as indicated by scores on a final exam in mathematics, predicts how well they will do next year, you can examine the grades of previous students and determine informally whether students who scored high on your final exam are getting high grades and students who scored low are obtaining low grades. If a correlation is found, then an inference about predicting how your students will perform, based on their final exam, is valid. (A small computational sketch of this idea appears after this list of evidence sources.)
3. Construct-Related Evidence: determines whether an assessment is a meaningful measure of an unobservable trait or characteristic such as intelligence, reading comprehension, honesty, motivation, attitude, learning style, or anxiety. There are three types of construct-related evidence:
a. Theoretical: an explanation or definition of the characteristic, so that its meaning is clear and not confused with any other characteristic. This is particularly important whenever you emphasize reasoning and affect targets. For example, suppose you want to assess students' attitudes toward reading. What is your definition of attitude? Do you mean how much students enjoy reading, value reading, or read in their spare time? Are you interested in their desire to read or their perception of their ability to read?
b. Logical: for some reasoning constructs, you can ask students to comment on what they were thinking when they answered questions. Ideally, their thinking reveals the intended reasoning process. Another logical type of evidence comes from comparing the scores of groups that, as determined by other criteria, should respond differently. These can be students who have been taught compared with students who have not, the same group before and after being taught, different age groups, or groups identified by other means as differing on the construct.
c. Statistical: statistical procedures can be used to correlate scores from measures of the construct with scores from other measures of the same construct and with measures of similar but different constructs. For example, self-concept of academic ability scores from one survey should be related to another measure of the same thing, but less related to measures of self-concept of physical ability.
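To make the criterion-related example above concrete, here is a minimal sketch in Python of how the relationship between two measures can be quantified with a correlation coefficient. The score lists are invented purely for illustration; the function is the standard Pearson correlation formula, not part of any prescribed procedure.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores (standard formula)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: final-exam scores and the same students' grades the
# following year. A strong positive correlation supports a criterion-related
# (predictive) validity claim for the final exam.
final_exam = [95, 88, 76, 64, 52]
next_year_grades = [92, 85, 80, 60, 58]
print(round(pearson_r(final_exam, next_year_grades), 2))  # about 0.97
```

A coefficient near +1 mirrors the informal check described in the text: students who scored high on the exam earning high grades later, and low scorers earning low grades.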

Test Validity Enhancers
1. Prepare a table of specification.
2. Construct appropriate test items.
3. Formulate directions that are brief, clear, and concise.
4. Consider the reading vocabulary of the examinees.
5. Make the sentence structure of your test items simple.
6. Never have an identifiable pattern of answers.
7. Arrange the test items from easy to difficult.
8. Provide adequate time for students to complete the assessment.
9. Use different methods to assess the same thing.
10. Use the test only for its intended purposes.

D. Reliability
Reliability refers to the consistency with which a student may be expected to perform on a given test. It is concerned with the consistency, stability, and dependability of results. A reliable result is one that shows similar performance at different times or under different conditions. For example, a Math teacher who wants to assess her students' addition skills gives a quiz. To be sure about the students' level of performance before designing instruction, she gives another quiz a few days later. The addition quiz scores are fairly consistent: those who scored high on the first quiz also scored high on the second, and likewise for those who scored low. The results are reliable. (A short computational sketch of this test-retest idea follows the list of reliability enhancers below.)

Reliability is directly related to error. Every assessment involves some degree of error, so reliability is a matter of degree: low, moderate, or high. Factors that affect test reliability:
1. scorer inconsistency because of his or her subjectivity
2. limited sampling because of the incidental inclusion or accidental exclusion of some material on the test
3. changes in the individual examinee and instability during the examination
4. the testing environment

Test Reliability Enhancers
1. Use a sufficient number of items or tasks; a longer test is more reliable.
2. Use independent raters or observers who can give similar scores to the same performance.
3. Make sure the assessment procedures and scoring are objective.
4. Continue the assessment until the results are consistent.
5. Eliminate or reduce the influence of extraneous events or factors.
6. Assess the difficulty level of the test.
7. Use shorter assessments more frequently rather than a few long assessments.
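As referenced in the reliability discussion above, this is a minimal sketch of the test-retest idea behind the Math teacher example, using hypothetical quiz scores. The standard-library function statistics.correlation computes a Pearson coefficient and requires Python 3.10 or later.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores for the same five students on the same addition
# quiz given twice, a few days apart (test-retest reliability).
quiz_1 = [10, 9, 8, 5, 3]
quiz_2 = [10, 8, 8, 4, 4]

r = correlation(quiz_1, quiz_2)
print(f"test-retest reliability estimate: {r:.2f}")
# A value near 1.0 means consistent results (high reliability);
# a value near 0 would indicate a large degree of error.
```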

E. Fairness
A fair assessment is one that provides all students with an equal opportunity to demonstrate achievement. Fair assessments are unbiased and nondiscriminatory, uninfluenced by irrelevant or subjective factors; neither the assessment task nor the scoring is differentially affected by race, gender, ethnic background, handicapping condition, or other factors unrelated to what is being assessed.

Key Components of Fairness
1. Students' knowledge of learning targets and assessments. A fair assessment is one in which it is clear what will and will not be tested. You need to be very clear and specific about the learning target: what is to be assessed and how it will be scored. Students should know the content and scoring criteria before the assessment, and often before instruction, so they know what to study and focus on.
2. Opportunity to learn. Students know what to learn and are then provided ample time and appropriate instruction. You must plan instruction that focuses specifically on helping students understand, provides students with feedback on their progress, and gives students the time they need to learn.
3. Prerequisite knowledge and skills. You need a good understanding of the prerequisites your students bring, so examine your assessments carefully to identify which prerequisites are required. For example, suppose you want to test math reasoning skills and your questions are based on short paragraphs that provide the needed information. In this situation, math reasoning skills can be demonstrated only if students can read and understand the paragraphs; thus, reading skills are a prerequisite.

4. Avoiding teacher stereotypes. Stereotypes are judgments about how groups of people will behave based on characteristics such as gender, race, socioeconomic status, and physical appearance. They interfere with your objectivity. It is your responsibility to judge each student on his or her performance on the assessment tasks, not on how others who share the student's characteristics perform.

5. Avoiding bias in assessment tasks and procedures. Bias is present if the assessment distorts performance because of the student's ethnicity, gender, race, religious background, and so on. There are two major forms of assessment bias:
a. Offensiveness occurs when the content of the assessment offends, upsets, distresses, angers, or otherwise creates negative affect for particular students or a subgroup of students. This happens most often when stereotypes of particular groups are present in the assessment.
b. Unfair penalization is bias that disadvantages a student because of content that makes it more difficult for students from some groups to perform compared with students from other groups. Bias is evident when an unfair advantage or disadvantage is given to one group because of gender, socioeconomic status, race, language, or another characteristic.

F. Positive Consequences
1. Students. The most direct consequence of assessment is that students learn and study in a way that is consistent with your assessment tasks. If the assessment is a multiple-choice test of students' knowledge of specific facts, students will tend to memorize information. Assessment also has clear consequences for student motivation: if students know what will be assessed and how it will be scored, and if they believe the assessment will be fair, they are likely to be more motivated to learn. The student-teacher relationship is also influenced by the nature of assessment; when teachers construct assessments carefully and provide feedback to students, the relationship is strengthened.
2. Teachers. Teachers tend to teach to the test. If the assessment calls for memorization of facts, the teacher tends to teach lots of facts; if the assessment requires reasoning, the teacher structures exercises and experiences that get students to think. Assessments also help the teacher make more valid judgments, which lead to better information and decision making about students.

Assessments may also influence how you are perceived by others.

G. Practicality and Efficiency
1. Teacher familiarity with the method. Teachers need to know about the assessment methods they select, including the strengths and limitations of each method, how to administer the assessment, how to score and properly interpret student responses, and the appropriateness of the method for the given learning targets.
2. Time required. Gather only as much information as you need for the decision or other use of the results. The time required should include how long it takes to construct the assessment, how much time students need to provide answers, and how long it takes to score the results; the time needed for each of these aspects differs for each assessment method. Another consideration in deciding about time for assessment is reliability: the reliability of a test or other assessment is directly related to its length, and the longer the test, the greater the reliability (see the sketch after this section for the standard formula behind this rule of thumb).
3. Complexity of administration. Practical and efficient assessments are easy to administer: the directions and procedures for administration are clear, and little time and effort are needed.
4. Ease of scoring. Scoring needs to match your method and purpose. Use the easiest method of scoring appropriate to the method and purpose of the assessment.
5. Ease of interpretation. Provide sufficient information so that whatever interpretation is made is accurate. Interpretation is easier if you plan, prior to the assessment, how to use the results.
6. Cost. It would be unwise to use a less reliable assessment procedure just because it costs less. Other things being equal, use the most economical assessment.
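The rule of thumb in item 2 above, that a longer test is more reliable, is usually quantified with the standard Spearman-Brown prophecy formula. The sketch below uses hypothetical numbers purely for illustration.

```python
def spearman_brown(r_current: float, k: float) -> float:
    """Predicted reliability when a test is lengthened by a factor of k,
    assuming the added items are comparable to the existing ones."""
    return (k * r_current) / (1 + (k - 1) * r_current)

# Hypothetical example: a quiz with reliability 0.60 is doubled in length.
print(round(spearman_brown(0.60, 2), 2))  # 0.75: longer test, higher reliability
```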

EDUCATION 8 ACTIVITY
Activities to be done individually:
1. Refer to the notes I sent.
2. Use yellow pad paper and a black or blue ballpen only.
3. Make your handwriting clean and clear.

Topic 1: The Role of Assessment in Teaching
Answers to questions 1, 2, and 3 should be in paragraph form, 5-7 sentences each.

1. What is the relationship between teacher decision making, complex classroom environments, and assessment?
2. What does it mean to say that assessment is not an add-on activity?
3. How do assessments communicate expectations for student learning?
4. Identify each of the following examples as pre-instructional assessment, ongoing assessment, or post-instructional assessment.
a. giving a pop quiz
b. giving a cumulative final exam
c. giving students praise for correct answers
d. using homework to judge student knowledge
e. reviewing student scores on last year's standardized test
f. changing the lesson plan because of student inattention
g. reviewing student files to understand the cultural background of the students
h. asking the students about their prior ideas of the topic
i. asking students to share their ideas while discussing the topic
j. giving a test after every lecture
5. Identify each of the following quotes as referring to one of the four components of classroom assessment: purpose, measurement, evaluation, or use.
a. "Last week I determined that my students did not know very much about the Civil War."
b. "This year I want to see if I could assess student attitudes."
c. "The test helped me to identify where students were weak."
d. "I like the idea of using performance-based assessment."
e. "I intend to combine several different assessments to determine the grade."

Topic 2: Establishing High-Quality Classroom Assessments
A. Match each description with the type of assessment. Write the word or term.
Descriptions:
1. Based on verbal instructions
2. Made up of questionnaires and surveys
3. Selection or supply type
4. Constructs unique response to demonstrate skill
5. Constructed response, either restricted or extended
6. Used constantly by teachers informally
Choices:
a. objective
b. essay
c. performance-based
d. oral question
e. observation
f. self-report
B. For each of the following situations or questions, indicate which assessment method provides the best match. Use the same choices as in A. Write the word or term.
1. Mrs. Santos needs to check whether students are able to draw graphs correctly, like the example just demonstrated in class.
2. Mr. Garcia wants to see if his students are comprehending the story before moving to the next set of instructional activities.
3. Ms. Manahan wants to find out how many spelling words her students know.
4. Ms. Sanchez wants to see how well her students can compare and contrast the Vietnam War with World War II.
5. Mr. Paterno's objective is to enhance his students' self-efficacy and attitudes toward school.
6. Mr. Andrada wants to know if his Science students can identify the different parts of a microscope.
C. Mr. Castro asks the other Math teachers in the high school to review his midterm to see if the test items represent his learning targets. Which type of evidence for validity is being used? Why?
a. content-related
b. criterion-related
c. construct-related
D. The students in the following lists are rank-ordered based on their performance on two tests of the same content (highest score at the top). Do the results suggest a reliable assessment? Explain your answer.

Test A      Test B
Germaine    Ryann
Cynthia     Robert
Ryann       Steve
Steve       Germaine
Robert      Cynthia

E. Which aspect of fairness is illustrated in each of the following assessment situations? Give your suggestion for making the assessment fair.
1. Students complained because they were not told what to study for the test.
2. Students studied the wrong way for the test (e.g., they memorized content).
3. The teacher was unable to cover the last unit, which was on the test.
4. The story students read, the one they would be tested on, was about life in a country where there is snow. Students who had been to a country with snow showed better comprehension than students who had never been to such a place.
