



Assessing Writing 15 (2010) 177–193

Measuring the Academic Skills of University Students: Evaluation of a diagnostic procedure
Elizabeth J. Erling a,∗ , John T.E. Richardson b
a Faculty of Education and Language Studies, The Open University, Walton Hall, Milton Keynes MK7 6AA,
United Kingdom
b Institute of Educational Technology, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Received 6 February 2010; received in revised form 12 August 2010; accepted 20 August 2010
Available online 22 September 2010

Abstract
Measuring the Academic Skills of University Students is a procedure developed in the 1990s at the
University of Sydney’s Learning Centre to identify students in need of academic writing development by
assessing examples of their written work against five criteria. This paper reviews the literature relating to the
development of the procedure with a focus on studies exploring its reliability and validity. It then describes a
study in which teams of language experts and subject experts used the procedure to rate assignments produced
by students taking three distance-learning courses. This study provides further insight into the psychometric
properties of the procedure: the overall ratings had satisfactory internal consistency and test–retest reliability
and were highly correlated with the marks awarded to the assignments analyzed. However, the ratings were
also highly correlated with each other and yielded just one principal component, suggesting that even skilled
assessors were unable to differentiate between different aspects of academic writing. We therefore conclude
that the procedure is a reliable and valid means of identifying students who need writing skills development
but that it should not be relied on to identify their particular areas of weakness.
© 2010 Elsevier Ltd. All rights reserved.

Keywords: Academic language skills; Academic literacy; Academic writing; Distance education; Measuring the Academic
Skills of University Students; University students

1. Introduction

In many English-speaking countries, national policies aimed at widening participation in higher
education and financial pressures on universities and colleges to recruit internationally have led

∗ Corresponding author. Tel.: +44 113 217 9716.


E-mail addresses: e.j.erling@open.ac.uk (E.J. Erling), J.T.E.Richardson@open.ac.uk (J.T.E. Richardson).

doi:10.1016/j.asw.2010.08.002

to an increasingly diverse student population. This has led in turn to concerns about the aca-
demic writing or literacy skills of students entering higher education (Jones, 2004; Lillis & Scott,
2007). Difficulties in academic writing can impair the progress of both domestic and interna-
tional students and both native and non-native speakers of English (Bonanno & Jones, 2007, p. 7;
Paton, 2007). This is hardly surprising: As Lillis and Scott (2007) pointed out, “Students’ writ-
ten texts continue to constitute the main form of assessment and such writing is a ‘high stakes’
activity in university education. If there are ‘problems’ in writing, then the student is likely to
fail” (p. 9). To increase retention and to enhance the student experience, institutions are seeking
new ways to support writing in higher education as traditional approaches no longer seem to be
sufficient.
In the US, for instance, where writing skills have traditionally been taught through generic
first-year classes in English composition, most universities now also host “writing centers” that
offer support to students in either face-to-face sessions or online tutorials. But in universities
where a large proportion of the student population is deemed to be in need of further writing
support, neither generic nor individual initiatives offer sufficient solutions. There is growing
recognition that universities need to take responsibility for their students’ language development
as part of their general academic development, and that writing
skills development in higher education is best carried out in the context of the students’ dis-
cipline. As a result, there has been a shift towards a “writing across the curriculum” approach
that brings together writing and subject specialists to develop subject-specific writing (Benesch,
2001; Johns, 2001). This has coincided with the development of diagnostic procedures that have
been designed to identify the English-language needs of students entering higher education and
to increase students’ and staff awareness of the role of language in the development and success
of students in their academic courses (Jones, 2001; Knoch, 2009). Another development has been
research in “academic literacies” that sees writing as a social practice that needs to be embed-
ded and developed in the context of learning (Lea & Stierer, 2000; Lea & Street, 1998; Lillis,
2001).
One example of an institutional initiative to support the development of writing skills within
the disciplines has occurred at the Learning Centre at the University of Sydney, which adopted
a collaborative model that brings together subject and language specialists to provide additional
support to students thought to be at risk of failing. It was in this context that Webb and Bonanno
(1994, 1995) devised the diagnostic procedure called “Measuring the Academic Skills of Univer-
sity Students” (MASUS) to identify students who need writing development in different aspects of
academic writing. This procedure is being used in a growing number of institutions in Australia
and elsewhere as a way of situating language assessment and development within the context
of particular academic disciplines. There is however a paucity of evidence with regard to its
psychometric properties.
A recent study undertaken at the Open University, an open-access distance-learning institution
in the United Kingdom, provides further insight into the psychometric properties of the MASUS
instrument. Motivated partly by the finding that students from ethnic minorities tended to achieve
poorer results on their written assignments, this study used the MASUS procedure to help uncover
whether variations in students’ academic writing skills might be responsible for
this gap in attainment. A supplementary benefit of this research was that it gave insight into the
discriminative validity of the MASUS instrument. This article first reviews the literature relating
to the development of the MASUS procedure with a focus on its reliability and validity. The
second part describes an empirical evaluation of the reliability and validity of the procedure based
on research undertaken at the Open University.

1.1. The development of the MASUS procedure

Studies on the diagnostic assessment of writing suggest that such assessment should be direct
rather than indirect; identify strengths and weaknesses in learners’ use of language; focus on spe-
cific elements rather than global abilities; and provide detailed feedback to stakeholders (Alderson,
2005; Knoch, 2009). The developers of the MASUS procedure seem to have aimed to fulfil all
of these criteria, in part by grounding the procedure in systemic functional linguistics (Halliday,
1985), in which “language is a resource for making meaning, and these meanings are determined
by the context of use” (Webb & Bonanno, 1994, p. 577). The application of this idea to student
texts involves a collaborative process between content and language specialists to achieve a shared
understanding of the language use required for a particular assignment. In the original diagnostic
procedure, students were provided with relevant background information and asked to produce a
test paper on a topic appropriate to their academic context. The raters then assessed the test papers
using a standard set of descriptors that were derived from previous corpus studies of student essay
writing (Drury & Webb, 1991; Jones, Gollin, Drury, & Economou, 1989) and that were tailored
to the students’ discipline and academic level as well as the particular assignment topic. These
descriptors were classified into four broad assessment criteria that represented “a spectrum of
perspectives on the student’s writing from macro-scale to micro-scale”:

1. Is information retrieval and processing of visual, verbal and numerical data correct and appro-
priate for the task?
2. Is the structure and development of the answer clear and generically appropriate to the question
and its context?
3. Does the grammar conform to the appropriate patterns of written academic English?
4. Is the message communicated without the interference of grammatical errors? (Webb &
Bonanno, 1994, p. 578)

In subsequent research, these criteria were referred to simply as “Use of source material”,
“Structure and development of text”, “Control of academic writing style”, and “Grammatical
correctness”, respectively, and this is how they will be referred to in this article. The first two
criteria focus on top-down perspectives on text – text content and text design – while the third and
fourth work from bottom-up perspectives, focussing on sentence and clause level (see Donohue &
Erling, submitted for publication). A fifth criterion, “Qualities of presentation”, has been added,
but this is usually simply assessed as appropriate or not appropriate and not included in the overall
assessment. Each of the other criteria is rated on a scale from 1 (low) to 4 (high),
where ratings of 1 or 2 are “used as indicators of problems or weaknesses that could obstruct the
student’s progress, and therefore indicate a need for some intervention” (Bonanno & Jones, 2007,
p. 3).
At the University of Sydney, the MASUS procedure is used as a way to identify a student’s
strengths and weaknesses in writing, and then students are offered opportunities to improve. Each
of the four criteria is linked to the Learning Centre’s academic writing curriculum, and students
are given feedback as to which workshops they should attend to improve a particular area.

1.2. Monitoring and evaluation of the MASUS procedure

There have been various attempts to monitor the need for embedded writing support at the
University of Sydney and to evaluate the ability of the MASUS procedure to identify this need.

Much of this work has focused on correlations between students’ ratings on the MASUS procedure,
skills tested in school-leaving exams and various writing demands of different subject areas
(pharmacy, accountancy, etc.).
In trials carried out in 1993 using the MASUS procedure, a majority of 136 first-year pharmacy
students (75%) and of 442 first-year accounting students (59%) were deemed to be in need of some
writing development. Here it was found that students with more previous experience of English in
their schooling were more likely to produce written academic work that satisfied the expectations
of writing at university (Webb & Bonanno, 1994; see also Webb, English, & Bonanno, 1995).
Later trials found that there was not necessarily a correlation between accounting and pharmacy
students’ MASUS ratings and students’ marks at the end of their first year of study (Webb &
Bonanno, 1995). Webb and Bonanno (1995) suggested that this was because the assessments on
which the marks were based had not demanded the high level of writing competence assessed
by the MASUS procedure but required only short notes testing the students’ factual recall of
course contents. While this study suggested that students’ academic writing abilities do not seem
particularly significant in the early phases of programmes in pharmacy and accountancy, later
longitudinal studies suggested otherwise.
In a further evaluation of the role of writing development and student progression, Holder,
Jones, Robinson, and Krass (1999) explored the relationship between students’ ratings on the
MASUS procedure, their school-leaving examination, their marks at the end of the first year of
study, and their subsequent progression through the degree programme. They found that there
were significant correlations between students’ results on the MASUS procedure and their results
in the seven different subjects assessed at the end of their first year at university. However, these
included both positive and negative relationships: All four ratings on the MASUS procedure
were positively correlated with students’ marks in biology and introductory pharmacy but were
negatively correlated with students’ marks in mathematics. This was explained by the fact that
those students who had specialized in mathematics and science at high school might have had
restricted opportunities for developing their writing skills. This was subsequently confirmed in
a follow-up study which showed that pharmacy students who had specialized in mathematics or
science at high school tended to obtain lower MASUS ratings than those who had specialized
in English or the humanities (Jones, Holder, & Robinson, 2000). Holder et al. (1999) also found
that writing abilities become more important later in students’ study programmes and in their
future careers. Their study showed that about a third of the students on the three-year pharmacy
programme needed extra time to complete their degree because they had to retake courses that they
had failed or discontinued. Whether students graduated within the minimum time was predicted
both by their overall performance in the school-leaving examination and by their results on the
MASUS procedure.
Further evaluations were undertaken to explore correlations between students’ written and
spoken abilities and their success in their university programmes. Jones, Krass, Holder, and
Robinson (2000) administered the MASUS procedure to a cohort of first-year pharmacy students,
together with a standardized test which contained both verbal items and quantitative items, and an
interview in which the students had to ask a health professional about their work. The students’
scores on the verbal items significantly correlated with their ratings on each of the MASUS criteria.
However, their scores on the quantitative items significantly correlated only with their ratings
on the fourth MASUS criterion, grammatical correctness, and not with their ratings on other,
more discursive, aspects of academic writing. Jones et al. suggested that students with weaker
grammatical skills were more likely to misinterpret the quantitative items in the standardized test.
Students’ scores in the interview correlated with their ratings on three of the MASUS criteria (use

of source material, control of academic writing style, and grammatical correctness). In other words,
students with good written communication skills also tended to have good oral communication
skills. Finally, the students’ overall scores across the subjects taken in their first year of study were
found to be significantly predicted by their overall performance in the school-leaving examination,
their scores on the quantitative items on the standardized test, and their ratings on one MASUS
criterion, control of academic writing style. However, they were not significantly predicted by
their ratings on the other three MASUS criteria. In short, the assessment regime in the first year
of study appeared to be measuring quantitative abilities rather than writing skills.
Scouller, Bonanno, Smith, and Krass (2008) carried out a further investigation to explore the
relationship between students’ results on the MASUS procedure and their own evaluations of
their communicative abilities. In this study, 135 first-year pharmacy students were assessed using
the MASUS procedure and also asked to rate their own skills in written communication. Students
were regarded as having passed the MASUS procedure if the total of their ratings across the
four criteria was 10 or more (out of a possible maximum of 16). The students who had passed
produced significantly higher ratings of their own communication skills than those who had
failed. However, 81% of the students who had failed the MASUS procedure judged their own
written communication skills to be at least satisfactory in general. Scouller et al. argued that these
unrealistic self-evaluations might make students less likely to seek remedial support. Scouller et
al. also asked the students to report the length of the longest academic text that they had previously
had to write in English. They found that the length of the longest text that students had written
varied significantly (and directly) with their ratings on three of the four MASUS criteria (use
of source material, control of academic writing style, and grammatical correctness) but not with
their ratings on structure and development of text. This suggests that writing more complex texts
enhanced students’ academic writing skills with regard to grammar, style, and the use of source
material, but that the structure and development of their academic texts needed different kinds of
support.
While the body of evidence suggesting that the MASUS procedure provides a useful means
of evaluating the academic writing skills of students entering higher education is growing, it has
primarily been obtained at a single university, and mainly from students in a single academic
discipline, pharmacy. However, the MASUS is being used at other institutions, both in Australia
and elsewhere. At the University of Sydney, it is being applied to a variety of genres other than
conventional test papers (Bonanno, 2002). In many disciplines, MASUS ratings are reportedly
being used for summative as well as formative purposes on the assumption that this would motivate
the students to take the procedure more seriously and to seek support if necessary. Indeed, some
departments have made satisfactory MASUS ratings a requirement for admission into the second
year of study. Subsequently, the procedure has also been employed with students entering graduate
programmes (see Bonanno & Jones, 2007, p. 7).
An adaptation of this approach is also being used at the University of Wollongong, where
Skillen, Merten, Trivett, and Percy (1998) asked students who were taking courses in the first
two semesters of a biology programme to write two assignments in each course. The students
brought drafts of their assignments to class, exchanged them with one another, and evaluated
their partner’s draft using all five criteria from the MASUS procedure. They received instruction
on discipline-specific writing skills and then submitted the final assignments to be marked by
teaching staff using the same evaluation scheme based on the five MASUS criteria. Here it was
found that the ratings obtained on the assignments for the first semester were significantly higher
than those that had been obtained by the previous cohort of students who had not received such
support. The ratings obtained on the first assignment in the second semester were significantly

higher for students who had taken the first semester’s course than for students who had not taken
that course, especially on the MASUS criteria that had been targeted by the instructional sessions.
Hampton et al. (2003) followed up both groups of students and found that they passed a higher
proportion of their courses in the first year than did other science students. This remained the case
even when variations in their overall performance in the school-leaving examination had been
taken into account. The students who had received academic writing instruction were also more
likely to continue into the second and third years of study than were other science students.

1.3. Psychometric properties of the MASUS procedure

Evidence from the University of Sydney and the University of Wollongong suggests that
the MASUS procedure can be employed within a successful programme of academic writing
development for university students. However, this does not provide evidence of the adequacy
of the MASUS instrument. There has indeed been little attempt to evaluate its psychometric
properties. When discussing the issues of its reliability and validity, commentators such as Holder
et al. (1999) and Jones (2001, 2004) simply referred back to the original papers by Webb and
Bonanno (1994, 1995) and Webb et al. (1995). However, only Webb and Bonanno (1995) presented
evidence for the “validity of the literacy assessment instrument” (pp. 786, 788). In this study, the
MASUS procedure was administered to two successive cohorts comprising 136 and 155 pharmacy
students, respectively. In both cohorts the students’ ratings varied directly with the level of the
English qualification that they had obtained in their school-leaving examination. In a second
aspect of the study, the test papers produced by 58 accounting students were divided into four sets
and were rated by two experienced writing experts, by one less experienced writing expert, and
by one subject expert. The correlations among the totals of their MASUS ratings were measured
using Spearman’s rho. For two sets of papers, the correlation coefficients among all six possible
pairs of raters were statistically significant. For the other two sets of papers, however, only the
correlation coefficients between the two experienced writing experts were statistically significant.
Of course, the stability of the ratings assigned to two similar cohorts of students and the
consistency of the ratings assigned to a single cohort of students by different raters only provide
evidence of the reliability of ratings generated by the MASUS procedure and say nothing at all
about their validity. One particular issue is that of the procedure’s construct validity. Webb and
Bonanno (1995) implied that the MASUS criteria represented theoretically distinct “layers of
analysis of student writing”, and Bonanno (2002) emphasized the need to assign separate ratings
to each of the assessment areas. However, the ratings given to the same students on the different
MASUS criteria tend to be highly correlated with each other (Holder et al., 1999; Jones, Krass, et
al., 2000; Scouller et al., 2008). Moreover, Webb and Bonanno (1995) mentioned that “principal
components analyses carried out in the cohort monitoring part of the study had weighted each
of the literacy ratings equally” (p. 788). This suggests that the various MASUS criteria are not
measuring qualitatively different aspects of academic writing but are simply alternative measures
of a single underlying construct.
There is better evidence for the criterion validity of the MASUS procedure, as can be seen in
the studies mentioned above, where students’ MASUS ratings vary with the level of the English
qualification that they have obtained in their school-leaving examination (Scouller et al., 2008;
Webb & Bonanno, 1994, 1995; Webb et al., 1995), with their oral communication skills (Jones,
Krass, et al., 2000), and with their previous experience of writing longer academic texts (Scouller
et al., 2008). Moreover, students who have specialized in English or the humanities at high school
tend to obtain higher ratings than students who have specialized in mathematics or science (Jones,

Holder, et al., 2000). With regard to the predictive validity of the MASUS procedure, students’
ratings in their first year of study predict whether they will complete their programmes in the
minimum time (Holder et al., 1999). They may not predict their overall attainment in their first
year of study (Jones, Krass, et al., 2000; Webb & Bonanno, 1995); this is because they are
positively related to their attainment on certain courses (presumably ones that demand a high
level of academic writing skills) but negatively related to their attainment on courses that demand
a high level of mathematical skills (Holder et al., 1999). This latter finding, together with the fact
that students’ ratings tend not to be correlated with their overall performance in the school-leaving
examination, is evidence for the discriminant validity of the MASUS procedure and shows that it
is not simply a measure of overall intellectual ability.
We had an opportunity to consider these issues in the rather different context of the Open
University, an open-access distance-learning institution in the UK. The data used in this study
came from a project designed to investigate the relationship between language use and attain-
ment. This was partly motivated by the finding that students from ethnic minorities tended to
achieve poorer results both across the UK (Richardson, 2008) and specifically at the Open Uni-
versity (Richardson, 2009). In this project, to ensure that students were sufficiently motivated,
the MASUS procedure was applied to the actual assignments submitted by students taking three
different courses. This enabled us to investigate the test–retest reliability, internal consistency, con-
struct validity, and criterion validity of the MASUS ratings. Further analyses considered whether
variations in students’ writing skills might be responsible for the attainment gap between White
and ethnic minority students, and this analysis provided evidence for the discriminative validity
of the MASUS procedure.

2. Method

2.1. Context

The Open University was established in 1969 to provide degree programmes by distance edu-
cation throughout the UK. Originally, nearly all of its courses were delivered by correspondence
materials, combined with television and radio broadcasts, video and audio recordings, tutorial
support at a local level and (in some cases) week-long residential schools. Nowadays, the Univer-
sity makes extensive use of computer-based support, especially CD-ROMs, dedicated websites
and computer-mediated conferencing. At the time of writing, the University has around 150,000
students taking undergraduate courses and more than 30,000 students taking postgraduate courses.
For most undergraduate courses, the University does not impose any formal entrance require-
ments, provided that students are over the normal minimum age of 16. Recently, there have been
a number of strategic initiatives aimed at increasing the participation of students from underrep-
resented groups, such as those from ethnic minorities, those with no prior experience of higher
education, and those living in areas of multiple deprivation. As at other institutions, this has raised
issues regarding students’ writing skills and academic writing development. This article reports
on research that was partly conducted to identify whether students from ethnic minorities were
more likely to be deemed as needing writing development.

2.2. Courses

The research study from which these data were taken was concerned with students taking
three particular courses in which there were a relatively high proportion of students from ethnic

minorities, a relatively low pass rate, and a relatively high gap in attainment between White and
ethnic minority students:

• DD100 An Introduction to the Social Sciences: Understanding Social Change was an intro-
ductory course assessed by seven assignments. It was popular among students who were new
to the Open University or to higher education in general. It introduced students to contempo-
rary social science through practical social concerns as well to basic study skills. Assignments
mainly consisted of essays, although a few used other genres. Students were expected to write
theoretically from a social science perspective and to use evidence to support their arguments.
• K204 Working with Children and Families was an intermediate course that was assessed by six
assignments and an examination. It was designed to meet the educational and training needs
of people working with children and their families in social care, childcare, health, education,
and leisure settings. Essays were the only type of assignment, and students were expected to
be familiar with this genre when they started the course.
• T175 Networked Living: Exploring Information and Communication Technologies was an intro-
ductory course that was assessed by four general assignments, three online assignments, and
a longer end-of-course assignment. Students studied examples of information and communi-
cation technologies, learned about the concepts on which they were based, and considered the
contexts in which they were used. They also developed skills in communication, numeracy,
information literacy, and learning that were considered essential for further study and employ-
ment in this field. Most assignments were made up of a number of questions that each required
a discursive answer of one or two paragraphs. There were no essays.

All of these courses ran from February to October 2008. This study focused on three particular
tutor-marked assignments submitted at the beginning, in the middle, and towards the end of
each of the three courses (henceforth referred to as Assignments 1, 2, and 3). These assignments
were marked by tutors following normal procedures, using a percentage scale (where 40% was
the passing mark) and explicit grading criteria and standards, and the marks were returned to
contribute to the students’ overall assessment for the course. The assignments were separately
assessed using the MASUS procedure (see below). The project was approved by the three course
directors and by the University’s ethics committee.

2.3. Participants

Student assignments were collected in two ways. The first was through an intervention in which
28 students consented to take part. The students were drawn from tutorial groups in London and
Birmingham, where it was expected that there would be a high proportion from ethnic minorities.
The MASUS procedure was applied to three of their assignments. Because of nonsubmission and
student drop-out, only 73 (rather than 84) assignments were available from these students for
assessment. These student participants were interviewed by the researchers and given develop-
mental feedback on their assignments if desired. The second was to collect, from those taking the
three courses, the assignments of another 50 students who had identified themselves at registration
as coming from an ethnic minority. These students were selected on the basis that they had received relatively
high, moderate, or relatively low marks. There was no significant difference between the assign-
ments collected in these two different ways, nor any significant effect of the feedback, as the
application of the MASUS procedure was not integrated into a writing development programme.

Across all 78 students, 18 were White, 18 were Black African, 17 were Black Caribbean, 13
were Pakistani, six were Bangladeshi, two were Chinese, and four were from other groups of
Asian origin (note that these terms are self-classifications of their ethnicity using categories based
on the UK census, not their countries of origin). The students consisted of 24 men and 54 women
aged between 19 and 58 with a mean age of 34.7 years.

2.4. Procedure

For each of the three courses, three tutors were recruited as co-researchers. One was a subject
expert who tutored students on the course. The others were language experts who served as tutors
on one of the University’s English-language courses or were members of a special interest group on
English as an additional language. After two days of training during which the researchers were
familiarized with the MASUS procedure, each group adapted the MASUS framework to the
assignments they were analyzing. They then each applied the MASUS procedure independently
to the relevant students’ assignments using descriptors that were appropriate to each assignment
(an example is provided in Appendix A). Following Skillen, Merten, Trivett, and Percy (1998),
the 4-point rating scale was used for all five criteria. Webb and Bonanno (1995) suggested that
“greater consistency of marking is likely to be achieved when markers cross-check their ratings
against other markers’ ratings” (p. 789). The language experts therefore conferred with each other
to produce a single agreed assessment for each of the five criteria with advice from the subject
expert as to whether the students had grasped the relevant course content. An overall rating was
obtained by averaging the ratings on the five criteria.

3. Results

Analysis of the results suggests that there is a substantial proportion of students taking courses
with the Open University who are at risk of underachieving because of weaknesses in their aca-
demic writing. The ratings given to Assignment 1 revealed that the proportion of students at
risk of underachieving on each of the five MASUS criteria (that is, rated either 1 or 2) was
29% for use of source material, 47% for structure and development of text, 45% for control
of academic writing style, 24% for grammatical correctness, and 17% for qualities of presenta-
tion.
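
As a concrete illustration of this scoring arithmetic, the short Python sketch below computes overall ratings as the mean of the five criterion ratings (as described in Section 2.4) and the proportion of students flagged as at risk (rated 1 or 2) on each criterion. The data frame and column names are hypothetical examples, not the study data.

```python
import pandas as pd

# Hypothetical ratings: one row per student, one column per MASUS criterion,
# each rated from 1 (low) to 4 (high). The values are invented for illustration.
ratings = pd.DataFrame({
    "source_material":         [3, 2, 4, 1, 3],
    "structure_development":   [2, 2, 3, 1, 4],
    "academic_style":          [3, 2, 3, 2, 4],
    "grammatical_correctness": [4, 3, 3, 2, 4],
    "presentation":            [4, 3, 4, 2, 3],
})

# Overall rating: the mean of the five criterion ratings (Section 2.4).
ratings["overall"] = ratings.mean(axis=1)

# Proportion of students at risk on each criterion, i.e. rated either 1 or 2.
at_risk = (ratings.drop(columns="overall") <= 2).mean()
print(at_risk.round(2))
print(ratings["overall"])
```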

3.1. Construct validity

Table 1 shows that the Pearson correlation coefficients among the MASUS ratings given to
the three assignments are all statistically highly significant and represent large effects in absolute
terms (see Cohen, 1988, pp. 80–81). This is consistent with the idea that the five MASUS criteria
are “an interdependent set of descriptors” (Holder et al., 1999, p. 24) and that the overall rating
gives an accurate picture of a student’s performance on an assignment.
In order to evaluate whether the different descriptors were assessing different characteristics
of the assignments (and therefore of the students’ academic writing skills), principal component
analyses were carried out on the 15 MASUS ratings provided for each student
across the three assignments, both for each of the courses individually and for the three courses
combined. In each analysis, just one component had an eigenvalue greater than one, implying that
the 15 MASUS ratings are alternative measures of a single underlying construct (see Table 2).
There was very little systematic variation among the coefficients, either across the three assign-

Table 1
Intercorrelations among MASUS ratings for three assignments.
A B C D E

Assignment 1 (N = 77)
A. Use of source material — .78 .58 .58 .61
B. Structure and development of text — .60 .61 .61
C. Control of academic writing style — .74 .63
D. Grammatical correctness — .64
E. Qualities of presentation —
Assignment 2 (N = 73)
A. Use of source material — .70 .65 .54 .51
B. Structure and development of text — .70 .51 .52
C. Control of academic writing style — .70 .63
D. Grammatical correctness — .66
E. Qualities of presentation —
Assignment 3 (N = 70)
A. Use of source material — .47 .49 .47 .47
B. Structure and development of text — .62 .49 .63
C. Control of academic writing style — .74 .74
D. Grammatical correctness — .75
E. Qualities of presentation —

Table 2
Results of principal components analyses on the MASUS ratings of three assignments.
Course

DD100 K204 T175 Overall

Assignment 1
A. Use of source material .77 .70 .80 .77
B. Structure and development of text .79 .75 .88 .80
C. Control of academic writing style .58 .81 .89 .84
D. Grammatical correctness .74 .84 .82 .82
E. Qualities of presentation .58 .79 .72 .72
Assignment 2
A. Use of source material .68 .65 .70 .72
B. Structure and development of text .73 .80 .88 .76
C. Control of academic writing style .79 .79 .89 .86
D. Grammatical correctness .79 .87 .84 .80
E. Qualities of presentation .73 .82 .81 .78
Assignment 3
A. Use of source material .69 .54 .60 .57
B. Structure and development of text .83 .79 .85 .78
C. Control of academic writing style .67 .85 .87 .83
D. Grammatical correctness .74 .82 .85 .78
E. Qualities of presentation .75 .86 .88 .83
Percentage of variance explained 52.86 61.49 67.51 60.75

ments or across the three courses. The sample size was sufficiently large that the overall analysis
could be expected to generate a reliable solution (Sapnas & Zeller, 2002). The results for the
separate courses are based upon much smaller samples but are presented to show the consistency
of the findings.
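
To make the extraction rule explicit (retain components of the correlation matrix with eigenvalues greater than one), the following Python sketch runs a principal components analysis on a 15-column matrix of ratings. The input is randomly generated placeholder data, so the numbers will not reproduce Table 2; only the procedure is shown.

```python
import numpy as np

# Placeholder data: one row per student, 15 columns = five MASUS criteria
# rated on each of three assignments (complete cases only).
rng = np.random.default_rng(0)
masus = rng.integers(1, 5, size=(70, 15)).astype(float)

# Principal components of the correlation matrix.
corr = np.corrcoef(masus, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]               # sort components by descending eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_retained = int((eigvals > 1).sum())           # components with eigenvalues greater than one
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])  # loadings on the first component
explained = 100 * eigvals[0] / eigvals.sum()    # percentage of variance explained

print(n_retained, round(explained, 1))
print(np.round(loadings, 2))
```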

3.2. Reliability

Given that the MASUS criteria appear to be measuring a single construct, it makes sense to
ask how reliably they are measuring that construct. Cronbach’s (1951) coefficient alpha was used
to estimate the reliability of the overall ratings from the internal consistency of the ratings on the
five criteria. The value of coefficient alpha was .84 or higher both for each of the three courses
and on the combined data set. This would be regarded as satisfactory on conventional research-
based criteria (Robinson, Shaver, & Wrightsman, 1991). The Pearson correlation coefficients
between the overall ratings given by the same assessors to different assignments can be regarded
as estimates of test–retest reliability. These were all .66 or higher, which again would be regarded
as satisfactory.
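
For readers who want to replicate this kind of reliability check on their own ratings, the sketch below computes coefficient alpha from five criterion ratings and a Pearson correlation between the overall ratings given to two assignments. The arrays are invented for illustration and are not the project data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha (Cronbach, 1951) for an (n_cases x k_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings on the five criteria for one assignment (rows = students).
assignment_1 = np.array([[3, 2, 3, 4, 4],
                         [2, 2, 2, 3, 3],
                         [4, 3, 3, 3, 4],
                         [1, 1, 2, 2, 2],
                         [3, 4, 4, 4, 3]], dtype=float)
print(round(cronbach_alpha(assignment_1), 2))     # internal consistency

# Test-retest reliability: correlation between the overall ratings the same assessors
# gave to two different assignments (the second set is a made-up perturbation).
overall_1 = assignment_1.mean(axis=1)
overall_2 = overall_1 + np.array([0.2, -0.1, 0.0, 0.3, -0.2])
print(round(np.corrcoef(overall_1, overall_2)[0, 1], 2))
```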

3.3. Predictive validity

We next considered whether the MASUS ratings of an assignment predicted the marks that had
been awarded by the student’s tutor. Table 3 shows that the relationship between the overall ratings

Table 3
Correlation coefficients between overall MASUS ratings and marks awarded.

                 Course
              DD100    K204     T175    Overall
Assignment 1  .56**    .79**    .46*    .54**
Assignment 2  .51**    .71**    .53*    .56**
Assignment 3  .78**    .45*     .32     .39**

* p < .05. ** p < .01 (two-tailed tests).

and the marks was positive, but that the magnitude of the Pearson correlation coefficients varied
across the three courses and across the three assignments for each course. Thus, the strength of
the relationship between the MASUS ratings on an assignment and the tutor’s marks appeared to
depend on the particular discipline being taught and on the demands of the particular assignment.
This could mean either that the underlying relationship was the same but was more apparent
in some courses than others, or that the underlying relationship itself was different in different
courses. For instance, the correlation coefficients tended to be higher in the case of DD100 and
K204, where students were assessed by means of essays. This might suggest that the relationship
between academic writing and attainment is stronger in courses assessed through essays than in
courses using other assignment genres.
To examine this matter in more detail, analyses of covariance were carried out using the assign-
ment marks as dependent variables, the course as the independent variable, and the overall MASUS
ratings as covariates. The regression coefficients for the covariate across the three assignments were
highly significant: An increase of 1 point in the overall MASUS rating was associated with an
increase of between 8% and 18% in the marks awarded for different assignments. Nevertheless,
the interactions between the effect of the covariate and the effect of the independent variable
were not statistically significant. This implies that the regression of the marks awarded for each
assignment on the overall MASUS ratings was homogeneous across the three courses. In other
words, the underlying relationship between the MASUS ratings and the tutors’ marks was the

same across different courses, regardless of whether or not the students were assessed by means
of essays.
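
The homogeneity-of-regression check described here amounts to comparing a model that includes the covariate-by-course interaction with one that does not. The following sketch shows one way to set this up with the statsmodels formula interface; the small data frame and the variable names are an invented example, not the project data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented long-format data: one row per assignment, with the tutor's percentage mark,
# the overall MASUS rating, and the course as a categorical factor.
data = pd.DataFrame({
    "mark":   [55, 62, 70, 48, 81, 66, 59, 73, 44, 68, 77, 52],
    "masus":  [2.4, 2.8, 3.2, 2.0, 3.6, 3.0, 2.6, 3.2, 1.8, 3.0, 3.4, 2.2],
    "course": ["DD100"] * 4 + ["K204"] * 4 + ["T175"] * 4,
})

# Full model with the covariate-by-course interaction versus a reduced ANCOVA model.
full = smf.ols("mark ~ masus * C(course)", data=data).fit()
reduced = smf.ols("mark ~ masus + C(course)", data=data).fit()

# A non-significant F test for the interaction indicates homogeneous regression slopes,
# i.e. the same MASUS-mark relationship across courses.
f_stat, p_value, df_diff = full.compare_f_test(reduced)
print(round(f_stat, 2), round(p_value, 3))

# Marks gained per 1-point increase in the overall MASUS rating (the covariate slope).
print(round(reduced.params["masus"], 1))
```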
Further analyses of covariance were carried out to determine which MASUS criteria were more
important in predicting the tutors’ marks. These analyses used the assignment marks as dependent
variables, the course as the independent variable, and the ratings on the five MASUS criteria as
covariates (see Table 4 for the unstandardized regression coefficients). We found that the use of

Table 4
Regression coefficients between five separate MASUS ratings and marks awarded.

                                        Assignment 1   Assignment 2   Assignment 3
A. Use of source material               10.72**        11.03**        11.07**
B. Structure and development of text     3.20          10.00*          0.04
C. Control of academic writing style    −4.00          −0.73          −3.17
D. Grammatical correctness               0.63          −3.63           0.22
E. Qualities of presentation             2.00           4.52           5.24

* p < .05. ** p < .01 (two-tailed tests).

source material was a highly significant predictor of the marks awarded for all three assignments.
Apart from a smaller effect of the structure and development of text in the second assignment,
there were no other significant relationships between the MASUS criteria and the marks when the
effect of the first criterion was statistically controlled. It is unsurprising that tutors were awarding
marks primarily for students’ use of the source material, because the grading criteria and standards
that they were expected to use prioritized this.

3.4. Comparisons among ethnic groups

The final part of the analysis explored whether there were variations in academic writing
skills across students from different ethnic groups and whether these might explain variations in
their academic attainment. The upper panel of Table 5 shows the mean overall MASUS ratings for
each ethnic group on each assignment (estimated means, adjusted for the varying representation of
different ethnic groups on the different courses). An analysis of variance with repeated measures
on the factor of assignments yielded significant effects of both course and ethnicity. Pairwise
contrasts showed that the White students obtained significantly higher ratings than each of the
other four ethnic groups. The effect of ethnicity on the overall ratings would be regarded as
“large” in Cohen’s (1988, p. 287) terms, and this effect did not vary significantly across the three
assignments.
The middle panel in Table 5 shows the estimated mean marks awarded to students in each
ethnic group on each assignment. An analysis of variance with repeated measures on the factor
of assignments yielded a significant effect of ethnicity on marks. The effect of ethnicity would be
regarded as large. The effect varied significantly across the assignments, and was more pronounced
in Assignment 2 than in Assignment 1 or Assignment 3. Pairwise contrasts showed that the Black
African students and the students of other ethnicity obtained significantly lower marks than the
White students; however, the Black Caribbean and Pakistani students did not. Thus while ethnicity
clearly is a factor in these students’ performance on their assignments, further research is needed
to better understand these results.

Table 5
Overall MASUS ratings and marks awarded for each assignment by students’ ethnicity.
Assignment 1 Assignment 2 Assignment 3

M SE M SE M SE

Overall MASUS ratings
White 3.12 0.17 3.24 0.15 3.28 0.15
Black African 2.64 0.15 2.74 0.13 2.74 0.13
Black Caribbean 2.65 0.15 2.73 0.13 2.73 0.13
Pakistani 2.73 0.20 2.60 0.18 2.65 0.18
Other 2.65 0.19 2.61 0.17 2.66 0.17
Marks
White 66.39 4.13 83.00 4.46 71.56 4.26
Black African 54.45 3.54 50.19 3.82 59.19 3.64
Black Caribbean 66.44 3.54 69.26 3.82 68.29 3.64
Pakistani 64.76 4.28 63.50 4.62 63.29 4.41
Other 58.00 4.53 58.50 4.89 63.08 4.66
Marks adjusted for effects of overall MASUS ratings
White 63.07 2.93 74.89 3.82 66.38 4.05
Black African 56.37 2.89 51.65 3.06 59.17 3.35
Black Caribbean 67.66 2.88 69.88 3.05 69.27 3.27
Pakistani 64.26 3.47 66.07 3.71 61.74 4.48
Other 59.95 3.25 61.39 3.93 64.99 4.20

Note: Ratings range from 1 (low) to 4 (high). Marks are on a percentage scale.

To explore whether variations in academic writing might be a factor that could explain these
variations in attainment across different ethnic groups, an analysis of covariance with repeated
measures on the factor of assignments was carried out on the marks awarded on each assignment,
using the overall MASUS ratings of each of the three assignments as a varying covariate. The
lower panel in Table 5 shows the estimated mean marks statistically adjusted for the effects of
the covariate: in other words, the mean marks are estimated as those that would have arisen if
the overall MASUS ratings had been the same across the five ethnic groups and across the three
assignments. The covariate explained a highly significant amount both of the between-subjects
variation, and of the within-subjects variation. Despite this, the effect of ethnicity remained both
highly significant and large. Pairwise contrasts showed that the Black African students obtained
significantly lower marks than the White students, but the other three groups of students from
ethnic minorities did not. Thus, variation in students’ academic writing skills explained some of
the variation in attainment across different ethnic groups, but by no means all.

4. Discussion

Across the five MASUS criteria, between 17% and 47% of the students in the study conducted
at the Open University were found to be at risk of underachieving due to weaknesses in their
academic writing. These figures are broadly similar to those obtained by Holder et al. (1999) and
Scouller et al. (2008) at the University of Sydney, a highly selective research-focused institution
where the majority of students are well-qualified young adults. The results point to the need for
additional writing support in the form of a large-scale writing development programme. Given the
size of the Open University and the diverse nature of its student body, this is a daunting prospect,
but, if implemented, it should considerably benefit student retention and attainment.

The overall ratings awarded to students’ assignments on the MASUS showed satisfactory
levels of both internal consistency and test–retest reliability. They were highly correlated with the
percentage marks that the students’ tutors had given to the assignments, and they differentiated
significantly between White and ethnic minority students. The latter may partially explain the gap
in the academic attainment of students from ethnic minorities (Richardson, 2009), but clearly,
further research needs to be carried out in this area to better understand this issue. The results of
these statistical analyses provide evidence of the reliability, criterion validity and discriminative
validity of the MASUS procedure and suggest that the procedure constitutes a reliable and valid
means of identifying students who are in need of additional support with regard to their academic
writing skills.
However, consistent with the findings of Holder et al. (1999), Jones, Krass, et al. (2000), and
Scouller et al. (2008), the ratings given to the same assignments on the five MASUS criteria were
very highly correlated with each other. Consistent with the findings of Webb and Bonanno (1995),
principal components analyses on the 15 ratings given to the same students across three different
assignments yielded just one principal component. This suggests that the 15 ratings were simply
alternative measures of a single underlying construct, and that even relatively skilled assessors
were unable to differentiate among the various aspects of academic writing that are represented
by the different criteria. Thus, the procedure seems to be able to reliably identify students in
need of writing development and point to strengths and weaknesses in students’ use of language
(Alderson, 2005). However, the present results imply that the instrument as used in this particular
study cannot be relied upon to identify the particular areas of academic writing where students
might be in need of support, i.e. it tends to focus on more global rather than specific language
abilities (Alderson, 2005).
These findings might indicate a need for more thorough briefing and training of the assessors.
They might also indicate the need for the procedure to be improved through the use of more detailed
descriptors of the various categories (see Knoch, 2009). But the more detailed the descriptors are,
the more specific they are to a particular assignment, and the more thorough the training needs to
be in applying the procedure. Ideally the MASUS descriptors would be adapted for each context
to allow tutors and students of a particular discipline deeper insight into the language required
of assignments in that field. Moreover, space needs to be made for tutors and students to explore
these texts, so that the MASUS can be used as the basis for writing development. Facilitating this
kind of writing development provision is a massive undertaking for a university, but it may be what
is necessary to provide the level of support that students appear to need, both at the Open University
and at the University of Sydney.
The MASUS procedure is grounded in a holistic approach to language that allows for a
multifaceted description of the complex nature of academic writing, from the highest level of
making arguments, to the details of the grammar required to make those arguments. These criteria
inevitably overlap and interact, and it therefore may not be surprising that the procedure might
only be measuring a single underlying construct. But it is important to note that the MASUS
procedure was designed to be more than an instrument for diagnostic writing assessment; it is
intended to be used as a stimulus for dialogue about writing development (Donohue & Erling, submitted
for publication). Therefore, even if the instrument cannot provide such detailed results, imple-
mentation of the MASUS procedure can help to raise awareness of the complexities of academic
writing and of the need for an explicit focus on students’ writing development within the con-
text of their academic discipline. The various criteria in the MASUS procedure provide a useful
structure for approaching written assignments and a vocabulary that tutors and students can use to
discuss them. As Jones (2004) commented, it is “a powerful technology to help build students’ and

teachers’ understanding of how and why language works in the way it does”, and it also provides
students with “the necessary tools to learn how a text is linked to its context and to reflect on their
roles as writers in these contexts” (p. 268). Work at the University of Sydney and the University
of Wollongong, as well as ongoing work at the Open University and elsewhere, has shown that
the judicious use of the MASUS procedure in a writing development programme can enhance
both students’ academic writing and their subsequent attainment.

Acknowledgements

This research was part of a broader project on the relationship between students’ academic
language use and attainment. It was supported by funds from Student Services at the Open Univer-
sity. We are grateful to Jim Donohue, who initiated and supervised the project; to the participating
researchers, Kerry Bannister, Christine Buller, Zoe Doyé, David Hann, Christina Healey, John
Kearsey, Chris Lee, and Harish Mehra; to Paul Ginns, James Hartley, Rachel Hawkins, Janet
Jones, Maki Kimura, Mary Lea, Marc Marschark, and Tony O’Shea-Poon for their comments and
advice; and to the students who participated.

Appendix A. MASUS Checklist

4 = very good/hardly any problems/mainly accurate/largely appropriate.
3 = good/minor problems/some inaccuracies/some inappropriacies.
2 = only fair/some problems/often inaccurate/often inappropriate.
1 = poor/major problems/inaccurate/inappropriate.

Criteria Comments

A. Use of source material and personal experience – is information taken from study, research and experience correct and/or appropriate for the task? (4 3 2 1)
1. Used relevant information from reading
2. Irrelevant information from course material is avoided
3. Information from course material and other research or personal experience is interpreted and
transferred correctly
4. Text is free from plagiarism/information is integrated with your own words and ideas
5. Recognition of various perspectives in the field
6. Accurate referencing in text, bibliography or reference list is correct
B. Structure and development of text – is the structure and development of the answer clear and appropriate to the task and its context? (4 3 2 1)
1. Introduction engages with the task and orientates to how the argument will be presented
2. Text structure is appropriate to the task
3. Claims build up the argument
4. Conflicting arguments are presented, addressed and effectively managed
5. Evidence and other experience is used that supports the claims in the argument
6. Beginnings of paragraphs and sentences orientate to the argument
7. Information flow in the argument is linked
(argument moves between high level generalisations and low level details and examples)
8. Statement of conclusion follows from argument & relates to title
C. Control of academic writing style – does the grammar conform to appropriate patterns of written academic English? (4 3 2 1)
1. Appropriate use of abstract vocabulary
2. Appropriate use of technical terms from the field
3. Appropriate use of collocation

4. Appropriate level of formality and ‘objectivity’
5. Appropriate use of modality, interpersonal metaphor and other evaluative language
6. Appropriate use of noun groups and grammatical metaphor
7. Control of taxonomic relations
8. Control of reference chains and other cohesive devices
D. Grammatical correctness (4 3 2 1)
1. Correctly formed clause structures
2. Correct subject-verb agreement
3. Correctly formed tense choice
4. Correctly formed passives
5. Correctly formed modality
6. Correct use of articles
7. Correct use of conjuncts, adjuncts and disjuncts
E. Qualities of presentation (4 3 2 1)
1. Punctuation use generally correct
2. Spelling generally correct
3. Capitals, italics etc. used correctly
4. Word processing appropriate

Note: Adapted from materials created by the Learning Centre, University of Sydney (Bonanno &
Jones, 1997).

References

Alderson, C. (2005). Diagnosing foreign language proficiency: The interface between learning and assessment. London:
Continuum.
Benesch, S. (2001). Critical English for academic purposes. Mahwah, NJ: Erlbaum.
Bonanno, H. (2002). Standing the test of time: Revisiting a first year diagnostic procedure. In: Changing agendas “Te ao
hurihuri” (Proceedings of the Sixth Pacific Rim Conference on First Year in Higher Education). Brisbane: Queensland
University of Technology. Retrieved from http://www.fyhe.qut.edu.au/past papers/papers02.htm
Bonanno, H., & Jones, J. (1997). Measuring the Academic Skills of University Students: The MASUS procedure, a
diagnostic assessment. Sydney: University of Sydney, Learning Assistance Centre Publications.
Bonanno, H., & Jones, J. (2007). The MASUS procedure: Measuring the Academic Skills of University Students, a
diagnostic assessment (3rd ed.). Sydney: University of Sydney, Learning Centre.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York: Academic Press.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Donohue, J., & Erling, E. J. (submitted for publication). The role of text description in university-wide literacy development.
Drury, H., & Webb, C. (1991). Teaching academic writing at the tertiary level. Prospect, 7 (1), 7–27.
Halliday, M. A. K. (1985). An introduction to functional grammar. London: Edward Arnold.
Hampton, G., Skillen, J., Russell, W., Robinson, S., Rodgerson, L., & Trivett, N. E. (2003). Integrating tertiary liter-
acy into the curriculum: Effects on performance and retention. In: Proceedings of Improving Learning Outcomes
Through Flexible Science Teaching Symposium (pp. 25–30). University of Sydney: UniServe Science. Retrieved from
http://science.uniserve.edu.au/pubs/procs/wshop8
Holder, G. M., Jones, J., Robinson, R. A., & Krass, I. (1999). Academic literacy skills and progression rates amongst
pharmacy students. Higher Education Research and Development, 18, 19–30. doi:10.1080/0729436990180103
Johns, A. (2001). Text, role and context: Developing academic literacies. Cambridge: Cambridge University Press.
Jones, J. (2001). A diagnostic assessment of the academic writing of first year students: The value of collaborative research.
HERDSA News, 23 (3), 33–35.
Jones, J. (2004). Learning to write in the disciplines: The application of systemic functional linguistic theory to the teaching
and research of student writing. In: L. J. Ravelli & R. A. Ellis (Eds.), Analysing academic writing: Contextualized
frameworks (pp. 254–273). London: Continuum.

Jones, J., Gollin, S., Drury, H., & Economou, D. (1989). Systemic-functional linguistics and its application to the TESOL
curriculum. In: S. Hasan & J. R. Martin (Eds.), Advances in discourse processes: Language development: Learning
language, learning culture. Meaning and choice in language: Studies for Michael Halliday (pp. 257–328). Norwood,
NJ: Ablex.
Jones, J., Holder, G. M., & Robinson, R. A. (2000). School subjects and academic literacy skills at university. Australian
Journal of Career Development, 9 (2), 27–31.
Jones, J., Krass, I., Holder, G. M., & Robinson, R. A. (2000). Selecting pharmacy students with appropriate communication
skills. American Journal of Pharmaceutical Education, 64, 68–73.
Knoch, U. (2009). Diagnostic assessment of writing: A comparison of two rating scales. Language Testing, 26 (2),
275–304.
Lea, M. R., & Stierer, B. (Eds.). (2000). Student writing in higher education: New contexts. Buckingham, UK: Society
for Research into Higher Education & Open University Press.
Lea, M. R., & Street, B. (1998). Student writing in higher education: An academic literacies approach. Studies in Higher
Education, 23, 157–172. doi:10.1080/03075079812331380364
Lillis, T. (2001). Student writing: Access, regulation, desire. London: Routledge.
Lillis, T., & Scott, M. (2007). Defining academic literacies research: Issues of epistemology, ideology and strategy. Journal
of Applied Linguistics, 4, 5–32. doi:10.1558/japl.v4i1.5
Paton, M. (2007). Why international students are at greater risk of failure: An inconvenient truth. International Journal
of Diversity in Organisations, Communities and Nations, 6 (6), 101–111.
Richardson, J. T. E. (2008). The attainment of ethnic minority students in UK higher education. Studies in Higher
Education, 33, 33–48. doi:10.1080/03075070701794783
Richardson, J. T. E. (2009). The role of ethnicity in the attainment and experiences of graduates in distance education.
Higher Education, 58, 321–338. doi:10.1007/s10734-008-9196-3
Robinson, J. P., Shaver, P. R., & Wrightsman, L. S. (1991). Criteria for scale selection and evaluation. In: J. P. Robinson,
P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes. (pp. 1–16). San
Diego, CA: Harcourt Brace Jovanovich.
Sapnas, K. G., & Zeller, R. A. (2002). Minimizing sample size when using exploratory factor analysis for measurement.
Journal of Nursing Measurement, 10, 135–154.
Scouller, K., Bonanno, H., Smith, L., & Krass, I. (2008). Student experience and tertiary expectations: Factors
predicting academic literacy amongst first-year pharmacy students. Studies in Higher Education, 33, 167–178.
doi:10.1080/03075070801916047
Skillen, J., Merten, M., Trivett, N., & Percy, A. (1998). The IDEALL approach to learning development: A model for
fostering improved literacy and learning outcomes for students. In: P. L. Jeffery (Ed.), AARE 1998 Adelaide Conference.
Coldstream, Victoria: Australian Association for Research in Education.
Webb, C., & Bonanno, H. (1994). Systematic measurement of students’ academic literacy skills. Research and Develop-
ment in Higher Education, 16, 577–581.
Webb, C., & Bonanno, H. (1995). Assessing the literacy skills of an increasingly diverse student population. Research
and Development in Higher Education, 17, 784–790.
Webb, C., English, L., & Bonanno, H. (1995). Collaboration in subject design: Integration of the teaching and assessment
of literacy skills into a first-year accounting course. Accounting Education, 4, 335–350.

Elizabeth Erling is a lecturer in English Language Teaching at the UK Open University. Her research interests are in
world Englishes, language policy, teacher development and English for academic purposes.
John Richardson is professor of student learning and assessment at the UK Open University. His main research interests
are in the interrelationship between students’ perceptions of courses and their approaches to studying.
