
Reflection: Development process of rubrics

Introduction

Before starting this project, we learned about the history, purpose, and required components of webquests. We were then asked to build a rubric to evaluate a webquest. The rubric we built can evaluate any webquest related to our task.

Discussion

A rubric is a scoring tool for subjective assessments. Our assessment rubric was used to evaluate students once the webquest was completed. It is a set of criteria and standards linked to learning objectives, used to assess a student's performance on webquests, essays, and other assignments. Rubrics allow for standardized evaluation according to specified criteria, making grading simpler and easier.

We looked through different rubrics from websites for reference. We did not copy them, because we know that as teachers we should develop our own rubrics, so that our students benefit from having a teacher who knows their needs and designs an assessment just for them. There are two types of rubric structure, holistic and analytical; we chose an analytical rubric because it fits best for the task of evaluating a webquest.

Before building the rubric, we searched online for information about the kind of rubric we were to build. We then analysed this information and started to construct the rubric, focusing on the objective, which is to evaluate a webquest, and using a range to rate performance. A rubric is normally presented as a table, so the first step in building ours was to draw a table of about nine rows and nine columns. The standards and levels in the top row of such rubrics are almost always the same, but the criteria in the left column differ. In the top row we included the standards below expectation, need improvement, satisfactory, and excellent, with respective scores of 10, 40, 70, and 100 marks.
After that, we listed the specific criteria for evaluating a webquest in the left column: overall layout appeal, introduction, task, process, resources, and evaluation. We then wrote descriptors for each criterion at each level of the rating scale, containing specific performance characteristics that indicate the degree to which a level is met. We compared each level of the rating scale and wrote the descriptors and examples accordingly, to clarify the meaning of each dimension.

The descriptors for the below expectation level are negative statements about the webquest, meaning the webquest gives a very poor impression to the evaluator. The descriptors for the need improvement level are better than those for below expectation, and the descriptors for the satisfactory level are better than those for need improvement. Lastly, the descriptors for the excellent level describe a very good webquest. The column for the excellent level is wider, because we have more, and longer, descriptors explaining the criteria for a successful webquest.

We wrote two to four qualitative and quantitative descriptors for each dimension. The descriptors we wrote are clear and can easily be compared across levels, so that suitable marks can be given. For example, for the task criterion we used simple, comparable descriptors such as "the task is not related to the topic of study" for below expectation and "the task is related well to the topic of study" for excellent.

In the top right of the table we included the score, weight, and value of the webquest being evaluated. The score is the mark given to a webquest by the evaluator, which is subjective and ranges from 1 to 100. We weighted the criteria because the task and problem-solving (process) criteria are more important than the others, so each of those two is given a weight of 20% while each of the others is given 15%. The value is obtained by multiplying score and weight; for example, for overall layout appeal the value is 40 × 15% = 6 marks. The total score is the sum of the values for all criteria, and the maximum score is 100 marks. We chose a maximum of 100 marks because it is easy to calculate and it also gives the percentage score of a webquest. In the process of building the rubric, we learned many things.
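The weighted scoring described above is a simple sum of score-times-weight products. The sketch below illustrates it in Python; the criterion names and weights follow the rubric described in the text, while the scores shown are purely illustrative:

```python
# Sketch of the rubric's weighted scoring, assuming the criteria and
# weights described in the text (task and process at 20%, others at 15%).
weights = {
    "overall layout appeal": 0.15,
    "introduction": 0.15,
    "task": 0.20,
    "process": 0.20,
    "resources": 0.15,
    "evaluation": 0.15,
}

def weighted_total(scores: dict[str, float]) -> float:
    """Multiply each criterion's score (1-100) by its weight and sum the values."""
    return sum(scores[c] * w for c, w in weights.items())

# A score of 40 on overall layout appeal contributes 40 * 15% = 6 marks,
# matching the worked example in the text.
scores = {c: 40 for c in weights}
print(weighted_total(scores))
```

Because the weights sum to 100%, a webquest scoring the same mark on every criterion receives exactly that mark as its total, which is why a 100-mark maximum doubles as a percentage.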
Firstly, we learned what rubrics are and why they are important as scoring tools for complex and subjective assessments. We also learned how to compare subjective criteria and how to write the qualitative and quantitative descriptors for all criteria and levels; this allows subjective assessment to be more objective and consistent. In addition, we learned to build a fair and just rubric that reflects the processes of real-life problem solving, and we discovered different ways of approaching a particular problem and learned things that benefit us. In short, we loved the process of building a rubric, and it made us look at subjective things from more angles.

Conclusion

We finally completed the rubric and evaluated the Units of Measure Webquest according to its standardized criteria. We noticed that this webquest did not have a good layout, because its background was plain white, and its content was not interesting enough to attract attention. The webquest received an overall score of 61 points out of a full score of 100. After evaluating this webquest, we had learned how to use a rubric and confirmed that our rubric was complete.

URL of Webquest Evaluated
http://www.cape.k12.mo.us/CJHS/science/gibbar/Calendar/Projects/WebQuests/Units%20of%20Measurement/WebQuest.htm