
Evaluating First Year Students’ Feedback at the College of

Languages/Salahaddin University
A Research by

Dr. Fatimah Rashid Hasan Al Bajalani, Professor
Dept. of English, College of Languages, Salahaddin University- Erbil
fatimah.hassan@su.edu.krd

Dr. Lanja A. Dabbagh, Instructor
Dept. of English, College of Languages, Salahaddin University- Erbil
Lanja_dabbagh@yahoo.com

Abstract
The study aims at evaluating first year students’ feedback at both English and
Arabic departments, College of Languages, Salahaddin University in the academic year
2011-2012 in terms of understanding the items of the feedback form, the way students
answer, and comparing the answers of top ten students with others.

It is hypothesized that, because most of the students admitted to these two
departments have a poor command of the language and may not understand their
teachers, who speak the target language in class, their feedback is not reliable.

The procedure of the study is to compare students' answers (feedback) given in May
with their answers given after the aim and items of the process had been explained to
them, after seating had been arranged so that every student answered independently,
and after the answers of the top ten students had been separated from those of the
other students.

The results revealed that students' evaluations of teaching are, to some extent,
unreliable and invalid. The choice of method and feedback instrument depends on what
we want to know and why. It is suggested that students' feedback should not be the
only method used to evaluate teachers, for several reasons: the academic year started
late and was interrupted by many holidays, the process of feedback is new to the
students, the classes and the style of lecturing differ from what they had been exposed
to before, and the system of the university and the college is both new and different
from what they experienced at school.

Key words: students’ feedback, evaluation, teaching, quality assurance.

Introduction

Systems for evaluating teaching and course quality in higher education have long
been established all over the world. Within Kurdistan Region in the north of Iraq, there
has been a growth of interest in this area from a range of different perspectives driven
both internally by institutions themselves and externally by funding agencies, national
quality initiatives, the response to widening participation and general public calls for
increased transparency, accountability and quality assurance.

Whilst there is a large number of possible sources of feedback and evaluation
data on both teaching and course quality (including, for example, course
documentation, progression rates, curriculum design processes, teaching committees,
etc.), the most common source of input to teaching evaluation in Kurdistan this year is
feedback from students.
The study aims at evaluating first year students’ feedback at both English and
Arabic departments, College of Languages, Salahaddin University in the academic year
2011-2012 in terms of understanding the items of the feedback form, the way students
answer, and comparing the answers of top ten students with others.

It is hypothesized that, because most of the students admitted to these two
departments have a poor command of these languages, which are second languages to
them, and may not understand their teachers, who speak the target language in class,
their feedback is not reliable.

The procedure of the study is to compare students' answers (feedback) given in May
with their answers given after the aim and items of the process had been explained to
them, after seating had been arranged so that every student answered independently,
and after the answers of the top ten students had been separated from those of the
other students.

Literature Review

The use of student evaluation of teaching (SET) is widespread in higher
education (Richardson, 2005). SET is generally about eliciting perceived performance
feedback on teacher and/or course related aspects. Much of the literature deals with
the dual summative and formative objectives of SET and the controversial issues
involved (see, for instance, Greenwald, 1997; Haskell, 1997; Marsh and Roche, 1997;
Pounder, 2007; and Sproule, 2000). The perspective taken in this paper is that of SET
and its formative purpose, i.e. teachers' use of feedback to reflect on their teaching
approach and courses and to make modifications where required and possible.
In doing so, they may simply assume that the teaching or course features
evaluated are equally important. If this assumption is not valid, however, teachers may
modify the wrong aspects of their teaching or courses, or devote their attention to
improving aspects that are already adequate.

Student evaluation of teaching quality in higher education is a well-recognized
practice, and research on the subject has been conducted for over seventy years (O'Neil,
1997). The merits of student evaluation have also been well debated, with some
academics arguing that students are not suitably qualified to judge the quality of
teaching (see, for example, Wallace, 1999) and others offering strong support for the
use of student evaluation for quality assurance purposes (see, for example, Oldfield and
Baron, 2000; Murray, 1997).

Feedback is conceptualized as information provided by an agent (e.g., teacher,
peer, book, parent, self, experience) regarding aspects of one's performance or
understanding. A teacher or parent can provide corrective information, a peer can
provide an alternative strategy, a book can provide information to clarify ideas, a parent
can provide encouragement, and a learner can look up the answer to evaluate the
correctness of a response. Feedback thus is a "consequence" of performance.
To assist in understanding the purpose, effects, and types of feedback, it is useful
to consider a continuum of instruction and feedback. At one end of the continuum is a
clear distinction between providing instruction and providing feedback. However, when
feedback is combined with a more correctional review, the feedback and instruction
become intertwined until "the process itself takes on the forms of new instruction,
rather than informing the student solely about correctness" (Kulhavy, 1977, p. 212). To
take on this instructional purpose, feedback needs to provide information specifically
relating to the task or process of learning that fills a gap between what is understood
and what is aimed to be understood (Sadler, 1989), and this can be done in a number of
different ways. These may be through affective processes, such as increased effort,
motivation, or engagement. Alternatively, the gap may be reduced through a number of
different cognitive processes, including restructuring understandings, confirming to
students that they are correct or incorrect, indicating that more information is available
or needed, pointing to directions students could pursue, and/or indicating alternative
strategies for understanding particular information. Winne and Butler (1994) provided an
excellent summary in their claim that "feedback is information with which a learner can
confirm, add to, overwrite, tune, or restructure information in memory, whether that
information is domain knowledge, meta-cognitive knowledge, beliefs about self and
tasks, or cognitive tactics and strategies" (p. 5740).
Feedback has no effect in a vacuum; to be powerful in its effect, there must be a
learning context to which feedback is addressed. It is but part of the teaching process
and is that which happens second—after a student has responded to initial instruction—
when information is provided regarding some aspect(s) of the student’s task
performance. It is most powerful when it addresses faulty interpretations, not a total
lack of understanding. Under the latter circumstance, it may even be threatening to a
student: “If the material studied is unfamiliar or abstruse, providing feedback should

have little effect on criterion performance, since there is no way to relate the new
information to what is already known” (Kulhavy, 1977, p. 220).
The focus here is on feedback as information about the content and/or
understanding of the constructions that students have made from the learning
experience; this is not the same as a behaviorist input-output model. Contrary to the
behaviorists' argument, Kulhavy (1977) demonstrated that feedback is not necessarily a
reinforcer, because feedback can be accepted, modified, or rejected. Feedback by itself
may not have the power to initiate further action. In addition, it is the case that
feedback is not only given by teachers, students, peers, and so on, but can also be
sought by students, peers, and so on, and detected by a learner without being
intentionally sought.

Winne and Butler (1994) argued that the benefits of feedback of teaching
depend heavily on learners’:
1. being attentive to the varying importance of the feedback information during
study of the task,
2. having accurate memories of those features when outcome feedback is provided
at the task’s conclusion, and
3. being sufficiently strategic to generate effective internal feedback about
predictive validities (e.g., Which factors boost my performance?).
It is likely that feedback at this task level is most beneficial when it helps students
reject erroneous hypotheses and provides cues as to directions for searching and
strategizing. Such cues can sensitize students to the competence or strategy information
in a task or situation (Harackiewicz, 1979; Harackiewicz, Manderlink, & Sansone, 1984).

Materials and Methods


The procedure was carried out to obtain students' feedback during the academic
year 2011-2012, in both the English and Arabic departments at the College of Languages
of Salahaddin University- Hawler. The process was performed in two sittings, in May and
June. Quality assurance members distributed the written feedback forms to the
students. At the first sitting, the procedure was carried out as it usually is. At the second
sitting, the top ten students of each department were separated from the other
students, and the class was arranged so that each student had enough space to answer
the questions of the feedback form individually, without students interfering with each
other's answers.

The students of the first year were chosen from the English and Arabic
departments, and only three subjects were chosen. The top ten students, selected from
both departments on the basis of their grades and their teachers' evaluations, were
seated separately in the same hall where the procedure was carried out. Both
researchers were present during the second sitting of the procedure.

Results
Analyzing the scores of the students' feedback in all three subjects of both
departments using the statistical programme SPSS, the researchers arrived at the
following results. The two tables below show the comparison between the results of
May and June for both departments in all three subjects:

Table 1: Students' feedback in May and June (mean scores)

            Arabic department                        English department
Month       Literature  Grammar  Academic Debate     Literature  Grammar  Phonetics
May         3.54        4.01     3.54                3.70        4.61     4.12
June        2.80        3.38     3.76                3.08        4.28     3.65

The results show that in the English department the scores dropped in June,
relative to May, for all three subjects. In the Arabic department, the scores dropped
in literature and grammar, while they increased in academic debate.
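The comparison above can be reproduced directly from the means in Table 1. The following sketch (written in Python for illustration, whereas the study itself used SPSS; the figures are copied from Table 1) computes the May-to-June change for each subject:

```python
# Mean feedback scores from Table 1, keyed by (department, subject): (May, June).
scores = {
    ("Arabic", "Literature"): (3.54, 2.80),
    ("Arabic", "Grammar"): (4.01, 3.38),
    ("Arabic", "Academic Debate"): (3.54, 3.76),
    ("English", "Literature"): (3.70, 3.08),
    ("English", "Grammar"): (4.61, 4.28),
    ("English", "Phonetics"): (4.12, 3.65),
}

# A positive change means the mean score rose in June, after the aim and
# items of the form had been explained to the students.
for (dept, subject), (may, june) in scores.items():
    change = june - may
    direction = "increased" if change > 0 else "decreased"
    print(f"{dept:7} {subject:15} {may:.2f} -> {june:.2f} "
          f"({direction} by {abs(change):.2f})")
```

Only academic debate in the Arabic department shows a rise; every other subject's mean falls, the largest drop (0.74) being in Arabic literature.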

The following table shows a comparison between the top students' scores and
the rest of the students' scores. The results show that there is a significant difference
between the two sets of scores. This indicates that the scores given by the top
students, who are better able to comprehend the form, are more reliable.

Table 2: The difference between the top students' feedback and the rest of the students'

                        Number of Students   Mean     Std. Deviation
Top Students            105                  3.7109   0.93771
Rest of the Students    695                  3.4458   1.05415
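The significance of this difference can be checked from the summary statistics alone. The sketch below is a Python illustration, not the SPSS analysis the study used, and it assumes the reported deviations are sample standard deviations; it computes Welch's two-sample t statistic from the values in Table 2:

```python
import math

# Summary statistics from Table 2.
n_top, mean_top, sd_top = 105, 3.7109, 0.93771
n_rest, mean_rest, sd_rest = 695, 3.4458, 1.05415

# Welch's t-test for two independent samples with unequal variances,
# computed directly from the summary statistics.
standard_error = math.sqrt(sd_top**2 / n_top + sd_rest**2 / n_rest)
t = (mean_top - mean_rest) / standard_error
print(f"t = {t:.2f}")
```

The statistic comes out at roughly 2.65, well above the 1.96 threshold for significance at the 5% level with samples this large, which is consistent with the significant difference reported above.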

The following figure shows the difference in students' feedback between May and
June. In May, most students gave scores above three, though some marks fell below
two, while in June students' feedback ranged from just under three to just under five.

Figure 1: Students’ feedback in May and June

Discussion

Feedback is one of the most powerful influences on learning and achievement,
but this impact can be either positive or negative. Its power is frequently mentioned in
research on learning and teaching, but surprisingly few recent studies have
systematically investigated its meaning. Although feedback is among the major
influences, the type of feedback and the way it is given can be differentially effective.
Models of feedback in the literature identify the particular properties and
circumstances that make it effective, and some typically thorny issues remain under
discussion, including the timing of feedback and the effects of positive and negative
feedback.
Based on the results of this study, it is clear that students' evaluations of teaching
are, to some extent, unreliable and invalid. The reason may be that most of the students
admitted to these two departments have a poor command of the language and may not
understand their teachers, who speak the target language in class; as a result, their
feedback is not reliable.

The choice of method and feedback instrument depends very much on what we
want to know and why we want to know it. If the purpose of our evaluation is formative,
i.e. the improvement of teaching and courses, then our methods and approach will be
very different from those employed if our evaluation purpose were summative, for
example, to make personnel-related decisions.

Conclusions

Depending on the results of this research, the following points can be concluded:

1. Students who did not understand the feedback items as designed did not give
logical answers to the questions.
2. The 'now you see it, now it's gone' syndrome can affect students' retention of
the feedback messages, as students move quickly from one question to another
without giving enough time and thought to each question.
3. Students give points without realizing that these points will affect the teachers’
performance, the course outline, and the teaching method in the future.
4. Handwritten feedback is slow and time-consuming to write individually, and
even slower when class sizes are large. Students also regard the feedback as
authoritative, so it can feel threatening to them if they do not give their
teachers good points.
5. Students rarely, if ever, wrote positive comments about their teachers. Most, if
not all, comments were negative, critical, and somewhat offensive.
6. The results of the first feedbacks showed all the negative points mentioned
above.

7. The results of the second feedback were somewhat better, which shows that it is
worthwhile for the quality assurance members to pay more attention to the way
the procedure is performed.
8. In such a case the following points can be achieved:
a. Students will obtain a sense of ownership: an opportunity to comment on
their academic experience.
b. Our departments will receive valuable, informed feedback, on a regular
basis, on the courses they run.
c. There will be an improvement of the quality of courses.
d. The concept of partnership will be established between students, staff
and teachers of our department. This will improve staff-student relations.
e. The scheme will develop students’ transferable skills and help to improve
the job prospects of quality assurance members.
f. The scheme will diffuse tensions by providing a less formal complaints
procedure.
g. The scheme will embed students’ views in the department’s decision-
making processes.
h. The scheme will raise the profile of student representation as a whole.

Recommendations
1. In order to minimize the effects of those variables known to impact upon
students' evaluations, care should be taken to ensure that students remain
anonymous and that lecturers are not present during administration of any
feedback mechanisms used. The purpose of the evaluation should also be clearly
explained to students, and students should be told what will be done with the
information they provide once their feedback has been analysed.
2. The other variables which may impact upon SET, and which are, arguably,
further outside the control of the individual lecturer than the administrative
variables are, should be kept in mind at all stages of the evaluative process, in
particular during the analysis and interpretation phase.
3. Generally, in all circumstances it may be more appropriate to provide more
information about the questions of the feedback form and to explain every item
comprehensively. Quality assurance members should always make it clear to the
students that what they are doing is for their own benefit, not to grade the
teacher. It is obvious that one cannot easily tell to what extent individual
students are benefiting from the feedback as designed. Nevertheless, these
feedback forms are designed to improve methods of teaching in the department.
4. More time is required to avoid the problems of a slow, time-consuming feedback
process, and dividing students into smaller groups will yield more reliable and
beneficial results.


References
Greenwald, A. (1997). Validity concerns and usefulness of student ratings of instruction.
American Psychologist, 52(11), 1182–1186.

Harackiewicz, J. M. (1979). The effects of reward contingency and performance
feedback on intrinsic motivation. Journal of Personality and Social Psychology, 37(8),
1352–1363.

Harackiewicz, J. M., Manderlink, G., & Sansone, C. (1984). Rewarding pinball wizardry:
Effects of evaluation and cue value on intrinsic interest. Journal of Personality and Social
Psychology, 47, 287–300.

Haskell, R. (1997). Academic freedom, promotion, reappointment, tenure, and the
administrative use of student evaluation of faculty (SEF): (Part IV) Analysis and
implications of views from the court in relation to academic freedom, standards, and
quality of instruction. Education Policy Analysis Archives, 5(21).

Kulhavy, R. W. (1977). Feedback in written instruction. Review of Educational Research,
47(1), 211–232.

Marsh, H., & Roche, L. (1997). Making students' evaluations of teaching effectiveness
effective: The central issues of validity, bias, and utility. American Psychologist, 52(11),
1187–1197.

Murray, H. G. (1997). Does evaluation of teaching lead to improvement of teaching?
International Journal for Academic Development, 2(1), 8–23.

Oldfield, B., & Baron, S. (2000). Student perceptions of service quality in a UK university
business and management faculty. Quality Assurance in Education, 8(2), 85–95.

O'Neil, C. (1997). Student ratings at Dalhousie. Focus, 6(5), 1–8. Halifax: Dalhousie
University.

Pounder, J. (2007). Is student evaluation of teaching worthwhile? An analytical
framework for answering the question. Quality Assurance in Education, 15(2), 178–191.

Richardson, J. T. (2005). Instruments for obtaining student feedback: A review of the
literature. Assessment and Evaluation in Higher Education, 30(4), 387–415.

Sadler, R. (1989). Formative assessment and the design of instructional systems.
Instructional Science, 18, 119–144.

Sproule, R. (2000). Student evaluation of teaching: A methodological critique of
conventional practices. Education Policy Analysis Archives, 8(50).

Wallace, J. (1999). The case for students as customers. Quality Progress, 32(2), 47–51.

Winne, P. H., & Butler, D. L. (1994). Student cognition in learning from teaching. In T.
Husen & T. Postlewaite (Eds.), International encyclopaedia of education (2nd ed., pp.
5738–5745). Oxford, UK: Pergamon.

