
Article

Assessing University Students:


Searching for an English Language Exit Test

David D. Qian
The Hong Kong Polytechnic University
Hung Hom, Kowloon, Hong Kong
David.Qian@polyu.edu.hk

Abstract • In order to motivate university students to improve their English proficiency, the Hong Kong government decided to adopt a common exit English language test for all graduating students. In the process of selecting a suitable measure for this purpose, an empirical study with a sample of over 240 students was conducted to compare two English proficiency tests, the English Test of the Graduating Students' Language Proficiency Assessment (GSLPA) and the Academic Version of the International English Language Testing System (IELTS). The comparisons covered both the speaking and writing components of the two tests, based on information provided by the two tests as well as the performances of the participating candidates on the components under study. Results indicate: (1) that GSLPA writing and speaking scores distinguish candidates' abilities more clearly than the corresponding scores on the IELTS, but that IELTS overall scores, generated from writing, speaking, reading and listening sub-scores, have a discriminating power similar to that of the GSLPA; (2) that the GSLPA and IELTS writing subtests basically measure different skills; and (3) that the constructs of the GSLPA and IELTS speaking subtests, while having their own distinctive features, overlap by about 48%. This paper recommends options for improving the current assessment framework for graduating university students and discusses the possible impact of such a test on English language teaching and learning.

Keywords • English proficiency test, GSLPA, Hong Kong, IELTS, performance test, speaking assessment, writing test, university exit test.

Background
In Hong Kong, English language proficiency is an important factor for
new university graduates in securing employment (Qian 2005). When
applying for a job, new graduates are often required to show some evidence of their English language ability, based on scores from a standardized English proficiency test or a company-made English language test (Qian and Lumley 2005). In recent years, however, concerns have been expressed, especially by the business community, that the general level of English language proficiency of students and the workforce in Hong Kong has been on the decline (Berry and Lewkowicz 2000; Coniam and Falvey 2002; Lumley and Qian 2003; Nunan 2002). This negative perception was so widespread that the former Chief Executive of Hong Kong had to raise it as an issue in his first policy address in October 1997. In an effort to address such concerns, the Government of the Hong Kong Special Administrative Region has since launched various initiatives (Coniam and Falvey 2002; Lumley and Qian 2003; Nunan 2002). Among these initiatives, two have been the most high-profile, namely the Workplace English Campaign (WEC) and the Common English Proficiency Assessment Scheme (CEPAS).
WEC is an assessment and training scheme aimed at the general workforce in Hong Kong, whereas CEPAS is specifically aimed at benchmarking the English language proficiency of fresh university graduates.
CEPAS was officially launched by the University Grants Committee of Hong Kong (UGC) in July 2002, after several years of inter-institutional discussions and consultations, in the form of a framework for assessing the English language proficiency levels of all tertiary-level graduating students in Hong Kong (Berry and Lewkowicz 2000). The UGC eventually decided that CEPAS should contain only a single standardized exit test for all eight tertiary-level institutions and that the Academic Modules of the International English Language Testing System (IELTS), owned, developed and delivered jointly by the British Council, IDP: IELTS Australia and the University of Cambridge ESOL Examinations (Cambridge ESOL), should be the test to fulfil this assessment purpose (UGC 2002).
However, it was not an easy decision to make. During the several years leading to this decision, numerous deliberations took place in the various management committees overseeing the eight institutions and at various constituents' meetings (Berry and Lewkowicz 2000). At the Hong Kong Polytechnic University (HKPU), an investigation was also carried out to compare the appropriateness of the IELTS, a mixture of receptive and productive components, and another, wholly task-based performance test being considered for CEPAS at the time, the English Test of the Graduating Students' Language Proficiency Assessment (GSLPA). Specifically, the study aimed to achieve the following purposes:
WEC is an assessment and training scheme aimed at the general work-
force in Hong Kong, whereas CEPAS is specifically aimed at benchmark-
ing the English language proficiency of fresh university graduates.
CEPAS was officially launched by the University Grants Committee
of Hong Kong (UGC) in July 2002, after several years' inter-institutional
discussions and consultancies in the form of a framework for assessing the
English language proficiency levels of all tertiary-level graduating stu-
dents in Hong Kong {Berry and Lewkowicz 2000). The UGC eventually
decided that CEPAS should contain only a single standardized exit test
for all the eight tertiary-level institutions and that the Academic Modules
ofthe International English Language Testing System (IELTS), owned,
developed and delivered jointly by the British Council, IDP: IELTS Aus-
tralia and the University of Cambridge ESOL Exaniinations (Cambridge
ESOL), should be the test to fulfil this assessment purpose (UGC 2002).
However, it was not an easy decision to make. During the several years
leading to this decision, numerous deliberations were entertained at vari-
ous management committees overseeing the eight institutions and various
constituents' meetings (Berry and Lewkowicz 2000). At the Hong Kong
Polytechnic University (HKPU), an investigation was aiso carried out to
compare the appropriateness ofthe IELTS, a mixture of receptive and
productive components, and another wholly task-based performance test
being considered for CEPAS at the time, the English Test ofthe Gradu-
ating Students' Language Proficiency Assessment (GSLPA). Specifically,
the study aimed to acbieve the following purposes;

(1) to compare the distributions of obtained scores on the corresponding writing and speaking components of the IELTS and GSLPA in order to examine the discriminating power of each test; and
(2) to establish intercorrelations between the writing and speaking components of the two tests in order to determine whether or not corresponding components of the two different tests measure the same areas of language knowledge and skills.

GSLPA
First implemented in the 1999/2000 academic year, the GSLPA has been a formal English language exit test at HKPU for three years. During the three-year period from 1999/2000 to 2001/2002, over 6,500 HKPU final-year students sat the test, which was also made available to students from Lingnan University in Hong Kong.
The GSLPA test aims to provide information for prospective employers and other interested parties about the English language proficiency of students around the time of graduation. Its content therefore looks forward to the context of professional employment rather than backwards at the context of using language for academic purposes (Lumley and Qian 2003). The GSLPA was initially developed with UGC research funds at HKPU. Its continuing development has received expert input from language testing specialists from the UK, USA, Canada, Australia and other countries. It is a wholly workplace-oriented, task-based performance test consisting mainly of two components, writing and speaking.
The writing component, consisting of three tasks, lasts 105 minutes. The first two tasks (A and B) require the candidate to write a memo or professional letter. Task A is intended to be straightforward, addressing a relatively unproblematic situation for which most of the necessary information is supplied as writing prompts. Task B, which carries more weight in scoring, is designed to be more cognitively and linguistically challenging. The task requires some kind of problem solving or argument on the part of the candidate in a workplace context, with careful attention to audience, register and communicative strategies. Candidates are given sufficient time (90 minutes for both tasks) and space to draft their responses before writing the final version for submission. Task C takes the form of a proofreading and error-correction task, based on a supplied passage on a theme related to the modern workplace. This task contributes only in a minor way to the final band score for writing.

GSLPA's speaking component, in the form of a semi-direct test conducted in multimedia language laboratories, lasts about 40 minutes and is composed of five tasks as follows:
• Task 1: Summarizing and reporting information from a radio interview.
• Task 2: Responding to a series of questions at a job interview.
• Task 3: Presenting information from a written (graphic) source to a business meeting.
• Task 4: Leaving a work-related telephone message.
• Task 5: Providing information about an aspect of life in Hong Kong to a newly-arrived international colleague.
Each student's writing and speaking performances are evaluated by two trained and experienced raters. Where there is a noticeable discrepancy between the two ratings of a performance, a third rating is conducted. All the ratings are finally processed by a computer statistical programme, Facets, which was developed on the basis of Rasch modelling, to generate final scores for reporting. Factors such as item difficulty, rater harshness, student ability, and parity across multiple test versions are taken into account in this computation by default.
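For reference, Facets implements the many-facet Rasch model. In its rating-scale form (a standard statement of the model from the Rasch measurement literature, not a formula taken from the GSLPA documentation), the log-odds of candidate n being awarded category k rather than k-1 by rater j on task i are decomposed as:

```latex
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

where B_n is the candidate's ability, D_i the task (item) difficulty, C_j the rater's severity, and F_k the difficulty of the step up to category k. Estimating these facets jointly is what allows reported scores to be adjusted for rater harshness and for difficulty differences across test versions.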
Results from the GSLPA are reported on a scale from 1 (low) to 6 (high). The scale uses + to indicate intermediate points: 1, 1+, 2, 2+, 3, 3+, and so on. Including the intermediate points, the GSLPA scale has 11 bands (see Appendix A for a full description of the bands). Further details of the development and design of the test are discussed in Lumley and Qian (2003). The Rasch reliabilities based on the 2002 operational results were 0.94 for the writing component and 0.97 for the speaking component.

IELTS
The IELTS test is jointly provided by three organizations: the British Council, IDP: IELTS Australia and Cambridge ESOL, with its development and validation unit based in Cambridge, UK.
There are two versions of the IELTS test: the Academic Modules and the General Modules. Both versions contain four components: listening, speaking, reading and writing. According to the IELTS handbook, the Academic Modules of the IELTS are designed to assess a candidate's English language proficiency for academic study at the undergraduate or postgraduate level, whereas the General Modules are designed to assess candidates who intend to go to English-speaking countries to complete their secondary education or to undertake work experience or training programmes below degree level. People who need to demonstrate their English proficiency in order to immigrate to Australia or New Zealand are also required to sit the General Modules (British Council, IDP: IELTS Australia and University of Cambridge ESOL Examinations 2005).
The listening subtest, which lasts 30 minutes, is composed of four sections, including two monologues and two conversations. The subtest asks 40 questions in a variety of item types, including multiple choice, short-answer questions, sentence completion, table/chart completion, and matching.
The reading subtest, lasting 60 minutes, contains 40 questions based on three passages with a total length of 2,000-2,750 words. The question formats include multiple choice, short-answer questions, cloze summary, sentence completion, table/chart completion, matching lists or phrases, supplying headings for paragraphs, and so on.
The writing component, also lasting 60 minutes, is made up of two academically oriented tasks. The first task asks the candidate to describe, in about 150 words, a chart, graph or table of the kind they might encounter during university study. The second task usually requires the candidate to write an argumentative essay of about 250 words on a controversial topic supplied in the question paper.
The speaking subtest, which lasts 11-14 minutes, is conducted in the form of a one-to-one interview with an examiner. The interview is recorded on audiotape for future quality assurance (see below). The subtest comprises three parts. In the first part, the examiner interviews the candidate with questions of a familiar nature selected from a pool; in the second part, the candidate makes a short presentation on a topic supplied at the beginning of the session; and the third part takes the form of a discussion between the examiner and the candidate, based on questions linked to the main theme of Part 2 (British Council, IDP: IELTS Australia and University of Cambridge ESOL Examinations 2005).
According to the most recent information available, the Cronbach's alpha reliabilities for the IELTS listening and reading subtests in 2003 were 0.89 and 0.88 respectively. IELTS candidates' writing and speaking performances are rated by a single certified rater at local test centres for the purpose of score reporting. The reliability of the writing and speaking assessments is ensured 'through a sample monitoring process', whereby a subsample of candidates' performances is collected and later re-rated by senior examiners as a quality check. In 2003, the overall correlational agreement between local raters and senior examiners was 0.91 for both writing and speaking scores. The reliabilities based on these correlations were therefore 0.84, using the Spearman-Brown formula (British Council, IDP: IELTS Australia and University of Cambridge ESOL Examinations 2004).
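For reference, the Spearman-Brown prophecy formula in its general form relates the reliability of a composite of k parallel ratings, r_kk, to the single-rating reliability r_11; exactly how Cambridge ESOL applied it to the 0.91 agreement figure is not spelled out in the source:

```latex
r_{kk} = \frac{k\,r_{11}}{1 + (k-1)\,r_{11}}
```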
Results from the IELTS are reported on a scale from 1 (non-user) to 9 (expert user). The reading and listening band scores are reported in whole and half bands, while writing and speaking scores are reported in whole bands only. Appendix B lists the descriptions of the nine overall bands.

Methodology
The data were collected from a voluntary sample of 243 final-year students from 17 academic departments at HKPU, who sat both the GSLPA and the Academic Modules of the IELTS within a month of each other. The IELTS was administered to them directly by IDP Australia and the British Council. With consent from the participating students, the test results were provided to the research team as the data for the present study.
To achieve the two purposes of the study, statistical analyses were performed using SPSS Version 14. Three procedures were applied to the data: computation of descriptive statistics, mainly for comparing the distributions of obtained scores on the corresponding writing and speaking components of the IELTS and GSLPA in order to examine the discriminating power of each test; and calculation of Pearson product-moment correlations and of shared variances in order to determine whether or not these components of the two tests measure the same areas of language knowledge and skills.
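The analyses themselves were run in SPSS. Purely as an illustrative sketch of the three procedures (the file name and column names below are hypothetical placeholders), an equivalent computation in Python might look like this:

```python
# Illustrative sketch only: the study used SPSS Version 14, not Python.
# "cepas_scores.csv" and its column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

scores = pd.read_csv("cepas_scores.csv")  # one row per candidate

# Procedure 1: descriptive statistics for the score distributions
print(scores.describe())
print(scores["gslpa_writing"].value_counts().sort_index())  # band frequencies

# Procedures 2 and 3: Pearson correlations and shared variances (R^2 = r^2)
pairs = [("gslpa_writing", "ielts_writing"), ("gslpa_speaking", "ielts_speaking")]
for a, b in pairs:
    paired = scores[[a, b]].dropna()  # e.g. one candidate lacked an IELTS speaking score
    r, p = pearsonr(paired[a], paired[b])
    print(f"{a} vs {b}: r = {r:.2f} (p = {p:.3f}), shared variance = {r**2:.2f}")
```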

Results and Discussion


Score Distributions
The comparisons made here focus only on the writing and speaking subtests of the GSLPA and IELTS because the former, as a performance test, does not contain separate measures for assessing reading and listening abilities. Table 1 provides general statistics describing the test results of the 243 participating candidates. Tables 2-5 and Figures 1-4 illustrate the distributions of band scores on writing and speaking achieved by the same group of students.

Table 1. Descriptive Results from the GSLPA and IELTS

Test             Possible      Mean Score   Mean Score   Obtained
                 Score Range   (SD)         in %         Score Range
GSLPA Writing    1-6           3.7 (0.58)   62%          2.5-5.5
GSLPA Speaking   1-6           3.9 (0.71)   65%          1.5-6.0
GSLPA Overall    1-6           3.8 (0.54)   63%          2.3-5.3
IELTS Listening  1-9           6.7 (0.85)   74%          4.5-9.0
IELTS Reading    1-9           6.4 (0.76)   71%          4.0-9.0
IELTS Writing    1-9           6.0 (0.76)   67%          4.0-8.0
IELTS Speaking   1-9           6.1 (0.88)   68%          4.0-8.0
IELTS Overall    1-9           6.3 (0.64)   70%          4.5-8.0

Table 2. Score Distribution of the GSLPA Writing Component

Band    Frequency   Percent
1.0     0           0
1.5     0           0
2.0     0           0
2.5     11          4.5
3.0     36          14.8
3.5     76          31.3
4.0     86          35.4
4.5     21          8.6
5.0     12          4.9
5.5     1           0.4
6.0     0           0
Total   243         100

Table 3. Score Distribution of the IELTS Writing Component

Band    Frequency   Percent
1       0           0
2       0           0
3       0           0
4       5           2.1
5       59          24.3
6       125         51.4
7       51          21.0
8       3           1.2
9       0           0
Total   243         100

Table 4. Score Distribution of the GSLPA Speaking Component

Band    Frequency   Percent
1.0     0           0
1.5     1           0.4
2.0     2           0.8
2.5     9           3.7
3.0     31          12.8
3.5     64          26.3
4.0     64          26.3
4.5     48          19.8
5.0     18          7.4
5.5     5           2.1
6.0     1           0.4
Total   243         100

Table 5. Score Distribution of the IELTS Speaking Component

Band    Frequency   Percent
1       0           0
2       0           0
3       0           0
4       7           2.9
5       52          21.5
6       116         47.9
7       54          22.3
8       13          5.4
9       0           0
Total   242         100

Table 6. Distribution of the IELTS Overall Scores

Band    Frequency   Percent
1.0     0           0
1.5     0           0
2.0     0           0
2.5     0           0
3.0     0           0
3.5     0           0
4.0     0           0
4.5     1           0.4
5.0     8           3.3
5.5     35          14.4
6.0     67          27.6
6.5     71          29.2
7.0     43          17.7
7.5     16          6.6
8.0     2           0.8
8.5     0           0
9.0     0           0
Total   243         100

[Figure 1. Distribution of GSLPA Writing Scores: bar chart of percentage of candidates by band, 1.0-6.0]


[Figure 2. Distribution of IELTS Writing Scores: bar chart of percentage of candidates by band, 4.0-8.0]


[Figure 3. Distribution of GSLPA Speaking Scores: bar chart of percentage of candidates by band, 1.0-6.0]


[Figure 4. Distribution of IELTS Speaking Scores: bar chart of percentage of candidates by band, 4.0-9.0]

It can be observed from these tables and figures that, while nine bands are available for the IELTS to describe candidates' overall, writing and speaking proficiency, only about half of these bands were actually used in describing the writing and speaking proficiency levels of this group of candidates. The scores on the IELTS writing subtest are distributed over five bands (4-8). The speaking scores span the same range.
On the other hand, the GSLPA scores are spread over a wider range. For example, the scores on the GSLPA writing subtest are distributed over seven bands (2.5-5.5), including four half bands, which were reported in the score certificates. The GSLPA speaking scores are distributed over 10
bands (1.5-6), which again include five half bands as they were reported
in the certificates.
It can also be observed from Tables 2-5 that GSLPA writing and speaking scores are more evenly distributed than their IELTS counterparts. Most notably, as shown in Tables 3 and 5, 125 (51.4%) of the 243 candidates bunch up on Band 6 of the IELTS writing subtest, and 116 candidates (47.9%) cluster on Band 6 of the IELTS speaking subtest. As shown in Table 6, the distribution of the IELTS overall scores looks much better than the distributions of its writing and speaking scores. This is because the IELTS listening and reading scores also contributed equally to the production of the IELTS overall scores. With a potential range of 17 bands/half bands, and spreading over 11 bands (4-9) and 10 bands (4.5-9) respectively in this dataset, the listening and reading scores are much more widely distributed.
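A minimal sketch of the equal-contribution point: the overall band is, in effect, the mean of the four component bands snapped to the half-band reporting grid. The exact rounding convention is an assumption here, based on IELTS's published practice of reporting overall scores in half bands, not a detail given in this article:

```python
import math

def ielts_overall(listening: float, reading: float, writing: float, speaking: float) -> float:
    """Average the four component bands and round to the nearest half band."""
    mean = (listening + reading + writing + speaking) / 4
    return math.floor(mean * 2 + 0.5) / 2  # round half up onto the 0.5-band grid

print(ielts_overall(7.0, 6.0, 6.0, 6.0))  # -> 6.5: strong receptive scores lift the overall band
```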

Intercorrelations
Table 7 reports the intercorrelations between all components ofthe two
tests. As indicated in the table, the overall scores ofthe two tests are
highly correlated. The correlation between the scores on the GSLPA and
IELTS speaking components is also fairly strong. However, there is only
a weak, albeit statistically significant, correlation between the writing
components ofthe two tests. It is also interesting to note that the GSLPA
speaking component is relatively strongly associated with the IELTS
listening component. That is probably because some ofthe GSLPA speak-
ing tasks, such as Task 1, also require listening comprehension.
Based on these correlation coefficients, the shared variances (R²) shown in Table 8 were obtained. The magnitude of an R² indicates the proportion of overlap between two datasets, in this case the two tests whose correlation (r) was used to produce the R². Because all the candidates took both tests, it is safe to assume that the unshared variance is not caused by differences in test populations but by differences between the tests. In the light of these shared variances, the following observations can be made.
The low value of 0.21 indicates that there is only 21% overlap of the variance between the scores on the writing components of the GSLPA and IELTS, which suggests that these two writing subtests basically measure different skills.
The R² value of 0.48 indicates an overlap of almost half of the variance between the scores on the speaking components of the two tests. Nevertheless, more than half of the construct (52%) of each test is distinctly different from the other.

The 53% overlap of the variance of the overall scores on the two tests, based on the R² value of 0.53 in Table 8, suggests that the GSLPA and IELTS share about 53% of their test constructs, but there is still 47% of the variance in each test that cannot be explained by the variance in the other test. In other words, although the two tests seem to be measuring similar areas of knowledge about half the time, almost half the time they seem to be measuring different constructs or aspects of constructs.
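These figures follow directly from the correlations in Table 7, since the shared variance is simply the squared correlation. For the speaking components and the overall scores, for example:

```latex
R^2 = r^2 = (0.69)^2 \approx 0.48,
\qquad
R^2 = (0.73)^2 \approx 0.53.
```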

Table 7. Pearson Correlations between the Scores on the Components of the GSLPA and IELTS

Test             GSLPA     GSLPA    IELTS    IELTS     IELTS      IELTS    IELTS
                 Speaking  Overall  Writing  Speaking  Listening  Reading  Overall
GSLPA Writing    .41**     .80**    .46**    .38**     .45**      .49**    .55**
GSLPA Speaking             .88**    .41**    .69**     .67**      .44**    .67**
GSLPA Overall                       .51**    .66**     .64**      .55**    .73**
IELTS Writing                                .42**     .38**      .35**    .67**
IELTS Speaking                                         .58**      .43**    .78**
IELTS Listening                                                   .55**    .79**
IELTS Reading                                                              .73**

** p < .01 (2-tailed)

Table 8. Shared Variances (R²) of the Correlation Coefficients

Test             IELTS    IELTS     IELTS      IELTS    IELTS
                 Writing  Speaking  Listening  Reading  Overall
GSLPA Writing    .21      .14       .20        .24      .30
GSLPA Speaking   .17      .48       .45        .19      .45
GSLPA Overall    .26      .44       .41        .30      .53

Implications and Conclusion


The present study has compared the score distributions of the writing and speaking subtests of the GSLPA and of the IELTS, and has examined the extent to which the constructs of the corresponding components of the two tests can be said to be the same or similar, based on their intercorrelations and shared variances.
In terms of what the two tests measure, the study has found, through examining intercorrelations and shared variances, that the overlap of the writing constructs of the two tests was only 21%, that the overlap of the speaking constructs was about 48%, and that the overall scores of the GSLPA and IELTS shared 53% explained variance. These figures clearly indicate that about half of the constructs of the two measures are distinct from each other and test different areas of knowledge. This finding is actually in line with the stated purposes of the two tests: the IELTS is an entry test which measures English proficiency levels in order to evaluate candidates' readiness for academic studies, whereas the GSLPA is an exit test which measures English proficiency levels in order to evaluate candidates' suitability for professional employment. Published task descriptions of the two tests also indicate that they differ considerably in their design (see British Council, IDP: IELTS Australia and University of Cambridge ESOL Examinations 2005; Lumley and Qian 2003).
The study also found that the GSLPA writing and speaking scores were distributed more widely over the available proficiency bands than their IELTS counterparts. Three factors may have contributed to the narrow spread of the IELTS writing and speaking scores. First, the IELTS uses only whole bands to report writing and speaking scores, whereas the GSLPA utilizes both whole and half bands in score reporting. Second, each candidate's IELTS writing and speaking performance is graded by only one rater for score reporting purposes. In contrast, the GSLPA follows a strict double-rating system, the results of which are further equated and finalized through Rasch analysis to guarantee the fairness and reliability of reported scores. Third, the GSLPA is targeted at a population of graduating university students, and is based on language testing research with that population. It is therefore not surprising that the GSLPA spreads out students' performances more powerfully in the 3-4.5 score range than the IELTS does in the equivalent 5-7 bands.
However, it would not be fair to conclude just from these observations that the GSLPA has better discriminating power than the IELTS, because by design the GSLPA writing and speaking subtests, being the only components of the test, measure candidates' performance in these areas more intensively than the corresponding components of the IELTS, as indicated by the different amounts of test time allowed and the number of tasks assigned to each subtest. It is therefore critical that the IELTS make full use of its receptive components (reading and listening) to compensate for what its productive components (writing and speaking) are unable to achieve; that is to say, it is desirable that the IELTS better utilize the available proficiency bands in reporting test scores. This compensation appears to have been made through careful grading of reading and listening performances. However, the IELTS could discriminate better among mid-level candidates if the mid-range score bands of its writing and speaking subtests were further refined and applied more meticulously by utilizing half bands.

Implications for Making an Ideal Exit Test


So what recommendations can these findings suggest for the purpose of
establishing an appropriate exit test for graduating students in Hong
Kong? To make the picture clear, strengths and weaknesses of each test
first need to be identified.

Test Components. The current exit test used by the UGC, the IELTS, appears to be more comprehensive than the GSLPA, as it contains components covering all four commonly recognized language skills: listening, speaking, reading and writing. This comprehensive format may make the test look more credible to the general public, especially to some potential test users. In contrast, the GSLPA contains only two main components, writing and speaking, which, to some test users, may sound insufficient in terms of skill coverage, although this format was drawn up based on a large-scale survey of an important test user group, namely senior executive officers of business companies, who pointed out that professional writing and speaking were the most essential communicative skills in the workplace.

Test Orientation. The GSLPA is a workplace-oriented performance test which is fully task-based and contextualized; as such, all items in the GSLPA are clearly situated in a workplace context. What the candidate is required to demonstrate in the test is exactly what he or she is expected to perform in a professional workplace. From this perspective, GSLPA scores are highly relevant as evidence of English language proficiency for future employment.

On the other hand, the IELTS is designed to assess English language proficiency for academic purposes. Generally speaking, none of its components is specifically workplace-oriented. Even its limited number of productive tasks in the writing and speaking subtests are stripped of the details of real-life contexts. People whose English proficiency can cope with academic studies do not necessarily perform well in professional workplace communication, because the two types of communication involve different sets of constructs, strategies and registers, among other things. There is little ground to believe that the IELTS Academic Version, in its present form, can effectively serve the purpose of benchmarking post-tertiary candidates for professional workplace communication, a primary goal of the UGC when the committee launched CEPAS.
In addition to using an exit test as leverage to enhance students' awareness of the importance of English language proficiency, the UGC also intended to use the exit test to facilitate students' job searches or their pursuit of further studies (UGC 2002). It is therefore important that the construct of the exit test is valid for both purposes, that is, for employment and for academic studies. However, as the above analyses suggest, neither test can claim with confidence to serve both purposes at the same time. To address this problem, three options are recommended:
(1) The IELTS was chosen as the exit test mainly because of its international status. To keep this internationally reputable test as the exit test in Hong Kong, one solution would be to ask the IELTS providers to design a new version catering for both workplace and academic needs. However, this option is technically challenging, as academic study skills and workplace communicative skills are basically two different constructs.
(2) Alternatively, the IELTS providers could add to their portfolio a new version of the IELTS for professional workplaces, so that graduates who plan to seek employment instead of pursuing further studies would be able to take this version and obtain direct evidence of their English proficiency for professional communication when leaving university. In the long run, this option would add to the predictive validity of the IELTS as a whole.
(3) As a locally developed test, the GSLPA does not enjoy as high an international reputation as the IELTS. However, the GSLPA has undergone rigorous trialling and validation, just as any reputable international test would, as evidenced in GSLPA development reports. Therefore, as the third option, it would be sensible to add the GSLPA to the CEPA scheme as a concurrent exit test, so that graduates preferring to obtain evidence of English language proficiency just to facilitate their job search would have the option of taking the GSLPA. In fact, a reasonable proportion of students have indicated their preference for the GSLPA, according to a recent survey (Qian 2002).

Implications for Teaching and Learning


An exit test can influence teaching and learning once it becomes high-stakes. At present, the IELTS is only a voluntary test for graduating university students in Hong Kong, so the test's impact on classroom activities is limited. For example, at HKPU, only some short IELTS preparation courses are offered at the Centre for Independent Language Learning, and test preparation has basically not interfered with the regular curriculum. Nevertheless, as long as concerns over the deterioration of English proficiency persist in the business sector, which is highly influential on government policies in Hong Kong, and as long as there is insufficient proof to convince the government that university students' English language proficiency has considerably improved, chances are that an exit test will eventually become mandatory for all graduating students. If and when this happens, regular classroom teaching and learning will likely be affected in some significant manner, as has happened with high-stakes tests elsewhere (Cheng 1998). Therefore, in identifying an exit test that would potentially affect classroom teaching and learning, it is important to ensure that its washback effects are positive, as the ultimate purpose of implementing an exit test has to be to 'enhance awareness of importance of the proficiency in English generally' (UGC 2002: 1) and to continuously and effectively improve students' English language proficiency through the enhancement of teaching and learning activities. The concern about negative washback effects would also be valid should the IELTS become a high-stakes, compulsory exit test, since the test contains a number of discrete-point item types, such as multiple choice and matching, which may have a negative impact on teaching and learning, as such formats allow for too much guessing.

REFERENCES

Berry, V., and J. Lewkowicz
2000 'Exit Tests: Is There an Alternative?', Hong Kong Journal of Applied Linguistics 5(1): 19-49.
British Council, IDP: IELTS Australia and University of Cambridge ESOL Examinations
2005 IELTS Handbook.
British Council, IDP: IELTS Australia and University of Cambridge ESOL Examinations
2004 Test Performance 2003 (http://www.ielts.org/teachersandresearchers/analysisoftestdata/article173.aspx).
Cheng, Liying
1998 'Impact of a Public Examination Change on Students' Perceptions and Attitudes toward their English Learning', Studies in Educational Evaluation 24(3): 279-301.
Coniam, David, and P. Falvey
2002 'The Representative Nature of a Sample: The Hong Kong Pilot Benchmark Assessment (English) Exercise', Hong Kong Journal of Applied Linguistics 7(1): 16-33.
Lumley, Tom, and D. Qian
2003 'Assessing English for Employment in Hong Kong', in C. Coombe and N. Hubley (eds.), Assessment Practices (Waldorf, MA: TESOL Publications): 135-47.
Nunan, David
2002 'The Impact of English as a Global Language: Policy and Planning in Greater China', Hong Kong Journal of Applied Linguistics 7(1): 1-15.
Qian, David
2002 'Students' Perceptions of GSLPA and IELTS as Exit Tests' (Hong Kong: The Hong Kong Polytechnic University Department of English).
2005 'Why Engineers Need to Be Multilinguists: Perspectives of Employers and Employees', in E. Witte, L. Van Mensel, M. Pierrard, L. Mettewie, A. Housen and R. De Groof (eds.), Language, Attitudes & Education in Multilingual Cities (Wetteren: Universa Press): 199-210.
Qian, David, and T. Lumley
2005 'Rating Writing Performance: What Do the Judgments of Test Users Tell Us?' Paper presented at the 27th Language Testing Research Colloquium, Ottawa, Canada.
The Hong Kong Polytechnic University
2007 'The GSLPA English: Assessment Levels' (http://gslpa.poly.edu.hk/eng/web/about3.html).
University Grants Committee of Hong Kong
2002 'Common English Proficiency Assessment for Graduating University Students'. Public announcement (http://www.ugc.edu.hk/eng/ugc/publication/press/2002/pr290702e.htm, accessed 29 July 2002).

APPENDIX A: The GSLPA Proficiency Band Descriptors

Writing Bands
W6: Can produce clear, convincing, well argued texts, using suitable tone and style. Vocabulary is precise and effective, and grammar is well controlled. A highly proficient writer.
W5+*
W5: Can produce well organized texts that communicate successfully and clearly on required tasks. Has generally good control of tone, style, vocabulary and sentence structure, despite some inaccuracies. A clearly competent writer.
W4+
W4: Can produce relevant, interpretable and generally well organized texts that address task requirements. Vocabulary is generally adequate, and grammatical errors do not obscure communication. A generally competent writer.
W3+
W3: Can produce generally relevant and interpretable texts that show basic ability to organise content appropriately. Vocabulary is adequate to convey basic meanings and grammatical errors rarely prevent communication. A basic writer.
W2+
W2: Can produce texts with some relevance to the task and some sense of organisation. Vocabulary and grammar are inconsistent, but allow meaning to be conveyed. A limited writer.
W1+
W1: Has some grasp of basic forms of written English, although these may not be applied relevantly, appropriately or consistently. An elementary writer.

* Grades which include + indicate that the student's performance falls between two levels (e.g. 3+ means the student's performance is between a 3 and a 4).

Speaking Bands
S6: Can speak clearly, precisely and confidently on a range of tasks, using complex language when necessary. Speech is effortless for listener to follow.
S5+
S5: Can communicate successfully, clearly and with confidence on a range of speaking tasks, generally using precise language. Sense is easy to follow throughout.
S4+
S4: Can convey meaning successfully on a range of speaking tasks, despite inaccuracies or limitations in vocabulary. Although organization sometimes lacks clarity or fluency, message can always be followed.
S3+
S3: Can convey meaning on a limited range of speaking tasks, but is sometimes hesitant. Despite inaccuracies, unevenness in pronunciation and/or limitations in vocabulary, message is mostly comprehensible.
S2+
S2: Can convey basic meaning in some situations, although there are frequent pauses and hesitations, and ideas may lack organization. Speech can be hard to follow.
S1+
S1: Has some grasp of basic forms of spoken English, although speech is disjointed and very hard to follow.

(Adapted from The Hong Kong Polytechnic University 2007)

APPENDIX B: The IELTS Proficiency Band Descriptors

9—Expert user
Has fully operational command of the language: appropriate, accurate and fluent with complete understanding.
8—Very good user
Has fully operational command of the language with only occasional unsystematic inaccuracies and inappropriacies. Misunderstandings may occur in unfamiliar situations. Handles complex detailed argumentation well.
7—Good user
Has operational command of the language, though with occasional inaccuracies, inappropriacies and misunderstandings in some situations. Generally handles complex language well and understands detailed reasoning.
6—Competent user
Has generally effective command of the language despite some inaccuracies, inappropriacies and misunderstandings. Can use and understand fairly complex language, particularly in familiar situations.
5—Modest user
Has partial command of the language, coping with overall meaning in most situations, though is likely to make many mistakes. Should be able to handle basic communication in own field.
4—Limited user
Basic competence is limited to familiar situations. Has frequent problems in understanding and expression. Is not able to use complex language.
3—Extremely limited user
Conveys and understands only general meaning in very familiar situations. Frequent breakdowns in communication occur.
2—Intermittent user
No real communication is possible except for the most basic information using isolated words or short formulae in familiar situations and to meet immediate needs. Has great difficulty in understanding spoken and written English.
1—Non user
Essentially has no ability to use the language beyond possibly a few isolated words.
0—Did not attempt the test
No assessable information provided.

(IELTS Handbook 2005: 4)
