
Proceedings of the Australian Universities Quality Forum (AUQF) Canberra July 2008 (refereed)

The Promise of Good Teaching – getting much more from student surveys

Dr Sandra J Welsman

Principal, Frontiers Insight and AUQA Auditor, Australia

In the context of rising expectations of university teaching performance, this paper discusses a
commissioned exercise to map usage of a major University’s Student Evaluation of Teaching (SET)
service. With a view to increasing the value of SET, the analysis examined how survey results were
being handled in Faculty systems and applied to improve teaching practices.

The voluntary SET program had been operating for a decade and had evolved in response to self-review
and feedback. This review confirmed variation in usage of the SET facility among teachers and among
schools, although uptake had risen since 2003. Messages identified during analysis of SET material
and interviews with Faculty leaders are discussed with reference to evaluation literature and SET user
viewpoints. In particular:

 Although University procedures state that surveys should be conducted for the purpose of improving
teaching and learning, promotion is a key driver in a voluntary system.

 Faculty leaders – some with experience in open teaching evaluation processes – see a need for access to
the considerable data collected by SET systems and sent privately to academics.

 Deans and Heads of Schools are interested in receiving SET reports plus expert analysis to enhance
their understanding, including of benchmarks and of tools to help improve teaching.

 Some see current concepts of student evaluation of teaching as ‘limited’; there was interest in exploring
peer evaluation of skills and gleaning other ideas from international universities.

Keywords: evaluation, teaching, feedback

Demand for verifiable teaching performance has escalated alongside rising student expectations of their
universities, competition for students, and national policy attention to development of skills (COAG,
2006). Emerging funding incentives include the Federal Government’s Learning and Teaching
Performance Fund and the Carrick Institute Awards.

Responsibilities on Faculties and academics to achieve excellence in teaching are being restated and
reinforced in Australian university policies. Faculty Deans are expected, for instance, to ensure
undergraduate education is of high quality, appropriate to student needs, and responsive to the strategic
goals of a given University. Various plans expect Deans and School heads to evaluate and improve
quality of the teaching and learning in their disciplines and programs.

Students are being promised good teaching, and assured that their views will ‘have weight’. University
Teaching and Learning Codes – tucked away on websites but with persuasive extracts on lead webpages –
set a framework of expectations of teaching practice. Such Codes generally confirm that students should
reasonably expect opportunity for constructive feedback, and that results obtained through such feedback
will be used to improve their educational experience.

1. Aims – to understand drivers behind the numbers

This study was commissioned during 2006 by the academic development unit (ADU) of a major
university. Aims were to map usage of their Student Evaluation of Teaching (SET) products and services,
to understand influences on how results from surveys are being handled within Faculty systems, and to
look for signs of use of SET information in improving teaching practices.


The SET program had operated for a decade and evolved on a number of fronts in response to self-review
and feedback. Academic teacher usage of SET survey questionnaires and reports had always been
voluntary, but had increased notably from 2003. Reasons were not clear.

Across-university surveying had also expanded, including a First-year Experience Questionnaire (FYEQ)
to explore student background, their early perceptions of studying at the university, plus satisfaction with
teaching, courses and overall experience. SET and FYEQ, in addition to the usual Course Experience
Questionnaire surveys of students at program completion, and the Graduate Destination Surveys, were
providing regular sets of data and comments that should be useful to teachers and faculties in achieving
their promises of good teaching.

Through this study of SET data and practices, the University was looking, firstly, to increase usage of
evaluation tools and information as a university-wide resource, by identifying and responding to concerns
about SET questionnaires and tools; and, equally, to encourage application of data and insights to
improve teaching and learning (T&L) practices, and methods of reporting advances in T&L at individual
teacher and group levels, so contributing measurably to the university’s development directions and to
university and student success.

Methodology included analysis of SET data for trends, review of university policy statements and reports
providing context and frameworks, interviews and discussions across Faculties and Schools, then analysis
of themes in material collected, with cross-reference to literature on Teaching Evaluation and Surveys.

2. Mapping Student Evaluation of Teaching

This SET facility had been developed during the 1990s. Its evaluation methods, survey forms and analysis
techniques were research-based and built on a substantial literature on the validity (including reliability) of
properly derived and used student ratings of teaching effectiveness, for formative and summative
purposes (Marsh, 1984; Arubayi, 1987; Marsh & Dunkin, 1993). Teaching and course survey questions
were structured to interrogate aspects known to be assessed well by students, including:

 Clarity of communication with students
 Provision of adequate and helpful feedback to guide learning
 Effective use of teaching media and materials
 Ability to motivate or inspire interest in subject matter
 Ability to relate concepts and areas of content
 Classroom climate and teacher-student rapport
 Teachers’ availability and helpfulness, and
 The amount they had learned and its significance to them.
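The aspects above can be sketched as a minimal per-aspect scoring structure. This is an illustrative sketch only: the aspect names, 5-point rating scale, and response data below are invented, not the university’s actual SET instrument.

```python
from statistics import mean

# Hypothetical SET-style responses: each student rates aspects from 1 (poor) to 5 (excellent).
# Aspect names and data are invented for illustration only.
responses = [
    {"clarity": 5, "feedback": 4, "rapport": 5},
    {"clarity": 4, "feedback": 3, "rapport": 4},
    {"clarity": 5, "feedback": 4, "rapport": 3},
]

# Per-aspect mean scores - the kind of summary a SET report might return to a teacher.
report = {
    aspect: round(mean(r[aspect] for r in responses), 2)
    for aspect in responses[0]
}
print(report)
```

A report structured this way keeps each surveyed aspect separate, so a teacher can see, for instance, strong clarity scores alongside weaker feedback scores, rather than a single overall number.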

While maintaining the initial framework to enable continuity of data across years, the SET service had
been monitored against ongoing research (Ramsden, 2003a; 2003b). Elements had also been adjusted in
response to teacher requests and feedback.

As in most Australian institutions, conducting SET evaluations was not a mandatory policy, except for
new academics during probation periods. However, formal evaluation reports were increasingly expected
in promotion applications. Trends were identified in analysis of SET data over recent years for discussion
with Faculty and School leaders. Patterns included:

 SET survey questionnaires and reports were the most utilised of evaluation services in this university,
and usage had increased notably, with 60% more teachers requesting survey materials for large group
teaching (lectures) and more than 50% growth in course surveys since 2003. This trend had flattened to
low growth over 2004 to 2006.


 During first semester of 2006, there were nearly 900 teacher requests for survey forms (to survey large
group T&L, small group T&L, or course structure/content) with many adding ‘open questions’ to the
surveys. Some 36,000 student responses were processed.

 A graphical presentation of SET survey results (for all large classes, small groups or courses within a
Faculty) revealed normal curves centred around ‘very good’ to ‘good’, with a few ‘borderline-unsatisfactory’
ratings. Under the voluntary and private reporting systems, these ratings would be known only to the
lecturer concerned.

There was variation in SET usage among schools or centres within Faculties, as well as among Faculties,
but substantial uptake of SET since 2003 had reduced these differences. Over the years, large amounts of
quantitative and qualitative data had been collected through SET, with almost all details being sent only
to individual teachers for their private use.

Structured interviews were conducted to elicit greater understanding of issues and influences and to assist
analysis of material towards the study and academic development unit objectives.

Major themes identified during analysis of these interviews included that promotion was a major factor in
higher teaching evaluation survey usage; that Deans were seeking SET data plus useful analysis to assist
planning and development; and that there was an ongoing need to link SET structure and mechanics with
new teaching challenges.

3. Promotion as a primary driver of teaching evaluation

Most universities stress that the fundamental purpose of teaching evaluation is to improve the quality of
teaching, and so to improve the quality of student learning. Evaluation guides also routinely stress that
surveys need to lead to changes and improvements in teaching and learning.

Even so, when interviewed, many academic leaders across universities and faculties frankly identify
promotion as the main reason academics ask their students to complete SET surveys. A number
mentioned they first focussed on SET when told that their staff seeking promotion should obtain formal
teaching evaluations and include reports in teaching portfolios.

Promotion committee demands for evaluation reports are backed by policies in career-path universities
nationally and internationally, especially in the USA. By default, formal evaluations were becoming
mandatory for those academics concerned to achieve promotion. This further distorts the purpose and
‘meaning’ of student evaluations, with surveys being perceived by some as components of ‘compliance
and performance’, rather than tools to improve T&L for the students’ sake (see also MacDonald, 2006).

A more complex set of influences and indicators also emerged during interviews.

 Closer ties with promotions might explain some academics seeing evaluation as a ‘policing function’
and being concerned about veracity of the process and student reports – although research has found
informed students to be reasonable, discerning, knowing, and tending to generosity (Wood, 2004;
Nulty et al, 2005; Yunker & Yunker, 2003; Marsh & Roche, 1997).

 Market pressures were influencing survey usage by some groups and academics. A number of centres
subject to highly competitive student marketplaces had used and refined SET evaluations for their
course and teaching improvement over many years.

‘We began requiring surveys because we are so market driven – we had to know what students were
saying, how staff were going – we discuss results. Teachers can respond with a two page report from
their perspective. At times we identify problems and get assistance.’


 By 2006, some faculties were requiring systematic evaluation of courses (but not teaching), either
using the central SET course survey, or a tailored departmental survey, or peer evaluation. Views on
SET were mixed.

‘On balance, people thought that departmental surveys provided more useful feedback, but SET surveys
had the virtue of reducing our workload. Positive comments were that it was nice for someone else to
do all the paper work and to prepare reports. The most useful feedback on the course itself seemed to be
the student's comments, rather than the scores.’

However, there were clear signs of teachers seeking deeper information on their performance, including
by asking for additional open-answer questions in surveys. This likely reflected growing attention by
promotion panels to how teachers respond to SET feedback as well as public exhortations to ‘improve
teaching’. Suggestions on ways to encourage SET usage and application of results included strong
University policy weighting on teaching and associated real rewards (alongside encouragements such as
a Dean’s Prize for teaching).

‘Generally everyone wants to be a good teacher – it goes to the core – but some are not good … a
growing sense of colleagues feeling they need to improve teaching and learning … believe if they are
teaching well will help in a career way … The University also needs to really value good teaching,
scholarship of teaching and creative output.’

4. Deans seek SET data plus useful analysis

It was surprising how little Deans or Heads of Schools saw of SET reports in voluntary systems.
Statistical reports from an ADU for whole faculties show broad patterns, but aggregation (for privacy)
across years and courses removes insights on variation. Averaging camouflages the poorer performance
tails that universities aspiring for excellence will need to address.
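The camouflage effect can be illustrated with a minimal sketch. The per-teacher ratings and the ‘borderline’ threshold below are invented for illustration, not drawn from the study’s data: a healthy faculty-wide mean can coexist with a borderline tail that only the distribution reveals.

```python
from statistics import mean

# Invented per-teacher mean SET ratings for one faculty, on a 5-point scale.
faculty_ratings = [4.6, 4.2, 4.4, 3.9, 4.1, 4.5, 2.3, 4.0, 4.3, 2.4]

# Hypothetical threshold below which a rating counts as 'borderline-unsatisfactory'.
BORDERLINE = 2.5

faculty_mean = round(mean(faculty_ratings), 2)          # the aggregate looks healthy
tail = [r for r in faculty_ratings if r < BORDERLINE]   # invisible in the average

print(f"Faculty mean: {faculty_mean}, borderline ratings: {len(tail)}")
```

Here the aggregate mean sits comfortably in the ‘good’ range while two borderline ratings survive only in the distribution, which is exactly the information removed when reports are averaged across years and courses for privacy.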

Senior academic leaders, likely spurred by successively stronger policy statements and expectations on
them to ‘lift teaching’, were generally concerned that information important to improving teaching was
not available for Faculty planning and development.

‘Management access to evaluations – [is] the quickest way of dealing with issues in T&L. Would
prefer to see the results highly public – [there will be] a cringe in the first year, then becomes
established. … need clear pathways to T&L improvement and to be unafraid of using the data … need
[academic development unit] guidance on how to take the data further.’

Some Australian academics have experienced open evaluation systems in overseas universities, especially
the USA. These can involve, say, evaluation reports being posted on notice boards, ‘open to everyone …
so much more useful as a tool for staff development and improving teaching’.

‘If we are to make meaningful use of evaluation then we need to make reports publicly available. … As
management instruments the SET surveys are not perfect but are a guide – the free-form comments are
informative … for use in bettering teaching.’

There was also concern that expectations of students (who also communicate among themselves) were
not being met. ‘If students can’t see action responses to their survey inputs why would students bother to
respond accurately or at all’ – a logic that appears irrefutable and has been noted in occasional research
where the question is asked (e.g. Surratt & Desselle, 2007).

‘If I had access to the SET reports I would be looking for evidence of positive course experiences or
lack of this … I would use SET as one tool to track transformation in teaching – generally and
individuals … A poor academic response would be unwillingness to engage in a pedagogic justification
of the way they taught.’


Concerns expressed during interviews suggested that the performance expectations universities are now
setting for academic leaders are translating into an impatience for useful information and value-adding
analysis by ADUs, and for top-level policy support to secure both.

5. Linking SET structures and mechanics with new teaching challenges

Steps are being taken in Australian universities to address the need to provide useful information to
faculties for their use in improving course programs. (See, for instance, papers on ‘closing the loop’:
Palermo, 2004; Nair, Soediro & Wayland, 2005).

However, Faculty Deans and Heads of Schools are generally seeking more – including SET reports
coupled with ‘expert analysis’ to assist their understanding, plus benchmarks and other tools to assist
teaching improvement in changing times. This can be seen in various comments:

‘Useful to receive a set of reports from ‘the centre’ on SET evaluations at School or Centre level. These
would ideally include skilled third-party analysis of raw data and open-answer questions (trends,
implications, indications of problems, clues on alignment) with enough depth for the Dean to have
useful discussion with school heads and associates.’

‘Now need information on how effectively teachers are using delivery technologies available to them
and being genuinely interactive? Survey questions need to move with the times.’

‘Need benchmarks for the use of SET feedback in improving teaching – what, how, results – across
comparable universities, plus more assistance with formal evaluation of course structures, materials,
achieving teaching objectives etc – linking skill development in these areas to indicators from SET
evaluations.’

‘Also need ways for teachers to be able to turn SET survey instruments from rather blunt tools to suit
their course purposes – so a teacher can use formal SET processes to obtain focussed feedback such as
they receive if they prepare a survey themselves. Plus procedures for small ongoing surveys, such as
sampling through a semester course – so the teacher (and School) can adjust and respond during the
course - even weekly feedback forms.’

‘We are looking for a SET adaptation, a way of evaluating boring compulsory building block courses –
so the evaluation is more informative for students and less dispiriting for teachers.’

Even as there is increasing support now for SET surveys and services, some see current concepts of
student evaluation of teaching as ‘limited’. This study found notable interest in exploring peer
evaluation of skills, for instance by gleaning ideas from top universities in the USA.

‘Australian universities need a next-level of sophistication in evaluation, eg. to assess alignment of
programs and learning outcomes with stated objectives. Plus peer evaluation processes, so constructive
peer review can be linked to outcomes expected of T&L teams (Faculties, Schools).’

‘Looking forward three years on academic development services – we need more on peer review, more
tools for evaluating flexible learning, for keeping up with education technologies, and specialised
course evaluations.’ (Perhaps by working with employers eg. Kabanoff, Richardson & Brown, 2003).

In conclusion, this study demonstrated that, in the fast-changing Australian higher education environment,
tools and procedures introduced gently a decade ago may be overtaken by the daily influences of many
marketplaces – ranging from promotion signals and the global career orientation of academics, to student
expectations and the availability of many educational choices.

In all this, the goal of using surveys as tools to improve teaching can be submerged, indicating a need for
ongoing monitoring of the value of SET services, and for policies backing their intended usage. For
ADUs, the challenge is to develop, refine and deliver useful day-to-day services, while keeping ahead of the
game by observing and responding to increasingly demanding policy, practical and people circumstances.


References

Arubayi, E. (1987) Improvement of instruction and teacher effectiveness: Are student ratings reliable and
valid?, Higher Education, 16, 267-278.
Baume, D. (2006) Towards the End of the Last Non-Professions?, International Journal for Academic
Development, 11(1), 57-60.
COAG (2006) Communiqué: A New National Reform Agenda. Council of Australian Governments,
February. Available from www.coag.gov.au/meetings/100206/index.htm.
Kabanoff, B., Richardson, A., Brown, S. (2003) Business Graduates’ Perceptions of the Quality of their
Course: A View from their Workplace, Australasian Association for Institutional Research. Retrieved June
1, 2006 from www.aair.org.au/jir/Oct03/Contents.htm.
MacDonald, R. (2006) The use of evaluation to improve practice in learning and teaching, Innovations in
Education and Teaching International, 43(1), 3-13.
Marsh, H. (1984) Students' evaluations of university teaching: Dimensionality, reliability, potential biases,
and utility, Journal of Educational Psychology, 76, 707-754.
Marsh, H., Dunkin, M. eds. (1993) Students' evaluations of university teaching: A multidimensional
perspective, Agathon, New York.
Marsh, H., Roche, L. (1997) Making Students' Evaluations of Teaching Effectiveness Effective - The
critical issues of validity, bias and utility, American Psychologist, 52(11), 1187-1197.
Nair, C., Soediro, S., Wayland, C. (2005) A system for Evaluating the Student Voice in the 21st Century,
Proceedings of 2005 Forum of the Australasian Association for Institutional Research. Retrieved June 1,
2006 from www.aair.org.au/Forum2005/Nair.pdf.
Nulty, D., Hughes, C., Sweep, T., Southwell, D. (2005) Evidence of teaching quality: what counts - and
what should count, Workshop: The Effective Teaching and Learning Conference. The University of
Queensland, Brisbane.
Palermo, J. (2004) Closing the Loop on Student Evaluations, Australasian Association for Institutional
Research Forum, Retrieved June 1, 2006 from www.aair.org.au/jir/2004Papers/.
Ramsden, P. (2003a) Learning to Teach in Higher Education, 2nd edn, Routledge Falmer, London.
Ramsden, P. (2003b) Student Surveys and Quality Assurance, Australian Universities Quality Forum -
'National Quality in a Global Context'. Melbourne. Retrieved June 1, 2006 from
http://www.auqa.edu.au/auqf/2003/proceedings/index.htm.
Surratt, C., Desselle, S. (2007) Pharmacy Students’ Perceptions of a Teaching Evaluation Process, American
Journal of Pharmaceutical Education, 71(1).
Wood, V. (2004) On the horns of a dilemma: Is student evaluation of lecturers a valuable tool or a devil in
disguise?, Australasian Association for Institutional Research Forum. Retrieved June 1, 2006 from
http://www.aair.org.au/jir/2004Papers/.
Yunker, P., Yunker, J. (2003) Are student evaluations of teaching valid? Evidence from an analytical
business core course, Journal of Education for Business, 78(6), 313.

