
A Short Guide To:

Multi-Rater Developmental
Feedback Surveys
TABLE OF CONTENTS

Multi-Rater Feedback Surveys
Confidentiality of Your Responses
Understanding the Rating Scale
Your Comments and Supporting Text
Appendix I: The Human Response Bias
Multi-Rater Feedback Surveys
Multi-Rater Feedback Surveys are a means of providing a person with greater
insight into their own behaviour and performance. They measure reputation:
how others view our performance, usually framed around key competencies
relevant to the role or position.
People who experience the results of our behaviour are typically better placed
to comment on it than we are. Equally, it is other people's perceptions, rather
than our own, that affect real-world outcomes. We get a job because someone
else thinks we're up to it; we are promoted because our manager thinks we're
good enough. Very rarely do these outcomes occur because we think we
deserve them!
For participants undertaking their first Developmental Feedback Survey, the
experience can be quite daunting. The big challenge is remaining objective
and focused on future development, rather than reacting defensively about
past performance that can't be changed. For those in leadership roles, it
often becomes an indispensable tool for understanding where and how to focus
development in a new context.

Figure 1: The Johari Window. A Developmental Feedback Survey provides useful
information in the quadrant of behaviour that is Known to Others but Not
Known to Self.

The purpose of undertaking a multi-rater feedback survey is purely
developmental. As participants, we should see it as an opportunity to identify
areas of strength and development. As raters, we should see it as an
opportunity to provide someone with honest and helpful information on which
they can act and develop.

Confidentiality of Your Responses
As a rater, your scores on the participant are confidential and your name is not
displayed against any scores you enter. Scores are always presented back to
the participant as an average for the whole rater group (e.g. direct reports, or
peers) and not for any individual rater response.
If you are the only person in a rater group (the participant's Manager being
an obvious example), your ratings will be identifiable to the participant,
because the group average will simply be your own scores. If you are
concerned about your responses being identifiable as the only person in a
rater group, talk to the survey coordinator or the participant before you
complete the survey.
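The averaging described above can be sketched in a few lines. The data layout and function name here are illustrative assumptions, not part of any actual survey tool; the point is simply that only per-group means are reported, and that a group of one cannot hide its single rater:

```python
# Hypothetical sketch of per-group score averaging in a multi-rater survey.
# The data layout and names are illustrative assumptions, not a real tool's API.
from statistics import mean

# Each entry: (rater group, score on the 1-5 scale for one question)
responses = [
    ("peers", 4), ("peers", 3), ("peers", 5),
    ("direct reports", 2), ("direct reports", 4),
    ("manager", 3),  # a group of one: the average IS this rater's score
]

def group_averages(responses):
    """Collect scores by rater group and report only the group mean."""
    groups = {}
    for group, score in responses:
        groups.setdefault(group, []).append(score)
    # Individual scores are never reported back, only the per-group mean
    return {group: round(mean(scores), 1) for group, scores in groups.items()}

print(group_averages(responses))
# The "manager" average equals that single rater's exact score,
# which is why sole members of a rater group are identifiable.
```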
Any free text comments you enter are identified by your rater group (e.g.
direct reports or peers), but your name is not displayed against any comment
you make. We show your rater group because it preserves your anonymity while
helping the participant contextualise suggestions and comments. As with any
free text, be aware that the more specific your comments, the more likely you
are to be identified. Sometimes you might be quite happy for this to happen;
other times you might not.

Understanding the Rating Scale
If available, familiarise or remind yourself of the question areas or
competency domains involved. Refreshing your memory of the general framework
before you dive in and start answering questions often gives you a clearer
sense of the individual's relative performance in each area, which in turn
helps you rate more accurately.
As a rater, you will be asked to indicate how often you observe the described
behaviour in the person you are rating. The questions in the survey will have
been developed or included based on their link to behaviours that are
consistently attributable to higher performance in the participant's role or
position.
The five-point rating scale presented for each question is as follows:

Rating              Description
1. Almost Never     Never, or almost never, demonstrates this behaviour/attribute
2.                  Between never and sometimes
3. Sometimes        Sometimes, or often, demonstrates this behaviour/attribute
4.                  Between sometimes and always
5. Almost Always    Always, or almost always, demonstrates this behaviour/attribute
To rate the participant accurately, use the following guidelines:

- Think of the rating scale descriptors, and try to avoid either an overly
favourable or an overly critical response pattern.
- Think of how this individual specifically behaves; try to picture actual
examples in your mind.
- Remember, your ratings are confidential, so be as honest and accurate as
you can.
- Rate the individual on each question alone; try not to think of anything
other than what is being asked in that question.
- Remember that this exercise is about improvement and development. Your
views will help drive better performance across your organisation.
- Finally, answer based on your own context and experience with the
participant, not on what you might have heard from other people.

Your Comments and Supporting Text
Developmental Feedback Surveys usually contain questions that allow for free
text comments. These let you elaborate on your ratings or provide specific
examples that make your intent clearer, and give you as a rater the
opportunity to suggest how the individual could improve their performance.
To provide useful developmental comments and suggestions, use the following
guidelines:

Don't:
- Generalise ("Be a better manager").
- Identify what they cannot change ("Increase the budget" or "Become better
looking").

Do:
- Be specific ("Slow your speech down"; "Make more time for articulating the
strategy at the monthly retreat").
- Suggest how they can build on strengths ("You are great at managing
external relationships, so delegate some of the administration to free up
time for client meetings").

Note: your comments will be reported back verbatim to the participant, so
please don't enter anything that you don't want the participant to read
directly.

Appendix I: The Human Response Bias
Human beings have inherent biases in the way we perceive the world around us.
These often stem from in-built psychological mechanisms that help us operate
effectively, but they can also be counterproductive when giving accurate
developmental feedback to others!
Some of these natural biases are presented below as a useful reminder of how
they might affect us as individual raters. The point is not to expect to
eradicate them completely, but to consciously work to minimise their effect
on our own responses. This is what will enable us to provide the most honest
and accurate feedback to participants.

Fundamental Attribution Error
This is the tendency to attribute others' mistakes or poor performance to
internal dispositions and their achievements to external events, e.g. someone
has failed to meet a deadline "because they are lazy" rather than because
they were relying on someone else to deliver a crucial piece of information
that did not arrive until the last minute.
How to manage: Consider the context the individual was operating in and how
typical this is of his/her behaviour.

Rater Bias
Views of ability are influenced by personal biases: someone who is liked is
rated more highly than someone the rater does not particularly like. One of
the most telling examples of this involves physical attraction. Numerous
studies have shown that more attractive people are judged as being more
intelligent and capable than their less attractive counterparts.
How to manage: Be aware of your individual biases. Base your ratings on
behavioural evidence rather than your like/dislike of others.

The Halo Effect
Ratings of overall performance are based on one characteristic or event, e.g.
a subject is categorised as performing generally poorly because of a previous
judgement, no matter what their recent behaviour. This can be positive or
negative, e.g. you base your rating on one particular event that sticks out
in your mind rather than on an overview of the subject's behaviour over time.
Where negative, this is sometimes referred to as the Horns Effect.
How to manage: If one particular event sticks out in your memory, think about
how representative it is of the individual given the broader knowledge you
have of them.

Shifting standards
Subjects are rated against each other rather than against the standard, e.g.
a subject with moderate strength is compared against a subject with
significant strength and is rated lower than they should be as a result.
How to manage: Rate subjects against the standard rather than against each
other.

Stereotyping
Raters have stereotypes of how certain personnel should behave or how
particular groups (women, young people, ethnic groups) perform. Subjects may
be appraised as good (or bad) because they belong to a particular group.
How to manage: Think of some stereotypes you may hold. How might these affect
how you rate the participant against the criteria?

Negative information bias
Raters are influenced significantly more by perceived or acknowledged
weaknesses than by strengths. Strengths are likely to be taken for granted,
while weaknesses have an undue impact on perception, e.g. recall of one
negative experience results in the subject being rated low across the board,
even though it was not typical of the subject.
How to manage: If you have had a negative experience with an individual in
the past, challenge this by thinking of times they have not behaved like
this. Identify consistent behaviour.

Clone syndrome
Raters prefer people similar to themselves in biographical background,
personality and attitudes. Sometimes this can even extend to physical
similarities.
How to manage: Be aware of those you are similar to. Check that this
similarity is not influencing your perspective by thinking of evidence that
supports the rating you have given the individual.

Central tendency
This reflects a tendency to give the subject a rating of 3 for most or all of
the items. It occurs for a number of reasons: the rater does not want to
indicate that anyone has a development need or that anyone is better than
anyone else; the rater is in a hurry and can't be bothered taking the time to
fill in the survey properly; or the rater is unable to rate the individual
against an item and gives a 3 as a default.

How to manage: Take the time to look at each item and its behavioural
descriptors, and think about the subject and the behaviour they have
demonstrated that relates to the item. Finally, if you are unable to rate a
subject on a particular item, do not rate them on that item.

Leniency
Leniency is a tendency to give overly positive ratings to subjects,
regardless of the behaviour they have demonstrated.

How to manage: If you have rated someone as a 5 on all items, consider
whether their behaviour really reflects the very positive ratings you have
given. Remember that to obtain a 5, the subject's behaviour would have to be
significantly stronger than that of the majority of his/her peers in relation
to the item rated.

