
AN ASSESSMENT OF ENGLISH ERRORS AMONG SECONDARY SCHOOL STUDENTS AND THE PEDAGOGICAL IMPLICATIONS


'Demola Jolayemi
Department of Modern European Languages, University of Ilorin, Nigeria.
ABSTRACT
Reasons for the poor performance in English among Nigerian learners could be found in
the English curriculum, the teaching methodology, the students and the language testing
procedures. It is in the light of this that this paper has attempted to discuss some basic issues in
error evaluation and its implications for studies in the English language. A pragmatic test was
administered to 100 SS III students drawn from four Local Government Areas in Kwara State.
Twenty (20) erroneous sentences were collected from their responses and given to ten (10)
Native Speakers of English (NSE) and ten Non-Native Teachers of English (NNTE) for
assessment, using the criteria of intelligibility and acceptability. At the end of the study, it was
discovered that while the errors were intelligible, they were not acceptable. Using the t-test
statistic, Native Speakers of English were found to be more tolerant of the errors than the Non-Native Teachers of English. The paper ended with some pedagogical implications and
suggestions for English studies.
Background
After a systematic exposure of learners to a body of knowledge in English, the teacher,
essentially, will want to know the performance or the learning outcome of students. He does this
by exposing learners to a corresponding body of tests. Language tests help to elicit from
language learners the extent to which the taught skills have been mastered. Language testing is
therefore the systematic process of getting information from learners regarding their levels of
acquisition of certain skills (Alderson, 1981; Carroll, 1981; Weir, 1981).
A language test must be valid, reliable and communicative (Lado, 1961; Clark, 1972;
Oller, 1979; Morrow, 1981; Palmer & Bachman, 1981). Furthermore, a language tester may be
interested in the kinds of errors his students make. He may even be interested in the gravity of
the errors committed by learners. As such, he engages in the evaluation of learners' errors. This
process is called error evaluation.
Error Evaluation (henceforth EE) may be defined as a process of assessing the extent to
which an error impedes comprehension. This evaluation is usually done by using the criteria of
acceptability and intelligibility. Experts believe that errors differ in terms of their gravity. Thus, EE
helps to determine the gravity of an error, that is, the extent to which we can accept or reject an
error. This is why Valette (1977) calls the process the 'level of error tolerance', and this
considerably influences our attitude towards the answer scripts of students. It has been
discovered that individuals' attitudes to errors differ, but research has shown that NNTE seem to
be less tolerant of learners' errors than the NSE.
Davies (1983) carried out a research into how some individuals evaluated learners'
errors. He made use of sentences from Moroccan pupils of EFL, most of them erroneous and a
few correct. He presented eighty-two of such sentences to forty-three Moroccan teachers of
English and the same to forty-three non-teacher native speakers of English. Each of the
examiners had to rate each of the sentences on a rating scale of 0-5; in other words, 0 meant no
error at all and 5 the worst error. From the result of his research he concluded that five major
points determined each examiner's attitude to learners' errors (LE).
One of such factors that influenced judges' assessment (JA), according to Davies, was
the language background of the markers and of the respondents. Davies realised that Moroccan
teachers of English judged certain errors low because they were familiar with the strategies
employed by students, while native judges did not favour such errors at all. Another factor that
influenced JA was the comprehension or intelligibility of such sentences. Errors that affected
intelligibility were more favoured by the native speakers than by the teachers. The reason for
this, according to Davies, was that non-native speaker teachers, because of their special insights
into students' communicative strategies, were not well placed to assess how intelligible a
learner's utterance would be to ordinary NSE. The knowledge of the content of the curriculum
also affected the JA. The teachers seemed to be strict in areas that were in the syllabus and had
probably been taught in the school curriculum; therefore, they seemed to mark down heavily
errors not expected of the students, while native speakers, who had no background of the
producers of the sentences, seemed lenient. Similarly, the marking context influenced the JA of
errors. According to Davies, native judges felt that they were meeting novel usages of English
and hence were lenient in scoring the errors, while the non-native teachers felt threatened that
their own mastery of the English language was probably being tested and tended to be very
strict. In conclusion, the researcher found out that the general attitude of NSE to errors was much
more lenient than that of non-native teachers; that is, Non-Native Teachers of English were less
tolerant of errors.
Furthermore, James (1977) compared the judges' assessment of errors by twenty Native
Speakers and twenty Non-Native Speakers of English. He arrived at the same conclusion as
Davies. Likewise, Goroseh (1973) found a considerable difference between teachers and non-teachers of English in their assessment of learners' pronunciation and comprehension.
Similarly, Hughes and Lascaratou (1982) concluded in their report that NSE were found
to mark more leniently than non-native teachers. According to them, this might be so because of
the NSE's better knowledge of the wide variety of acceptable structures of the language. Thirty-two
erroneous sentences of Greek ESL students were used. But in their case, unlike Davies (1983),
who used two groups of judges to evaluate the gravity of the errors vis-a-vis acceptability and
intelligibility, Hughes and Lascaratou (1982) involved three separate groups in their study. There
were thirty (30) judges in all: ten (10) of them were native teachers of English, ten Greek teachers
of English and ten educated NNS of English. A descriptive essay on a car accident was elicited
from some ESL Greek students who were in their penultimate year of High School. Some
thirty-two erroneous sentences were collected and presented to these three groups of judges.
Through the JA, Hughes and Lascaratou found that the Greek teachers made references to the
'basicness' of the grammatical rules violated; this group of judges depended almost only on the
criterion of intelligibility in their evaluation. The English native teachers and the non-teachers made
use of both the criteria of intelligibility and acceptability in their error evaluation. Further, the
writers pointed out that if the objective of teaching was emphasis on communicative competence,
then the work must be assessed with reference to the effectiveness of the communication that it
achieves (p. 209).
In other words, intelligibility and acceptability, rather than either alone, must be the
objectives of communication and therefore of assessment. Other works on error evaluation are
those of Palmer (1980), James (1974) and Burt and Kiparsky (1975).
Purpose of the Study
The purpose of this study was to assess the attitude of NNTE to learners' errors with the
aim of improving the status quo.
Significance of the Study
It is hoped that the findings of this study will help positively to change the attitude of
English Language markers and teachers to their learners' errors in such a way that they may be
more tolerant of errors in English. It is hoped that this study will be of immense significance to
language teachers, as they will discover that the best error evaluator is the one who allows both
the criteria of intelligibility and acceptability, rather than just one, to influence his error judgement.
Data Collection
For the purpose of data collection for this study, four Secondary Schools in four Local
Government Areas in Kwara State were randomly selected. These were:
1. Government Secondary School, Afon - Asa L.G.A.
2. Patigi Secondary School, Patigi - Patigi L.G.A.
3. Offa Grammar School, Offa - Offa L.G.A.
4. Government High School, Ilorin - Ilorin West L.G.A.
Twenty-five (25) Senior Secondary School (SSS III) students from each of the schools
were subjected to an integrative essay test on 'My experience since I have been in this school'
for 45 minutes. From the 100 answer scripts collected, 20 syntactic errors were taken. These
were given to ten NNTE and ten NSE for evaluation, using the criteria of intelligibility and
acceptability. This was to be done by indicating, with a tick (√), whether or not each of the
syntactically erroneous sentences was acceptable and intelligible. There were three options for
the judges on each of the two variables: they were to decide whether each of the erroneous
sentences was intelligible (I) or unintelligible (UNI), and whether it was acceptable (A) or
unacceptable (UNA); they were equally to indicate if they were undecided (UND) in their choices.
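To make the scoring of such tick-based responses concrete, the following minimal sketch (in Python) tallies, for each group of judges, how many verdicts were favourable on each criterion. The response data and the one-point-per-favourable-tick convention are illustrative assumptions, not figures taken from the study.

from collections import Counter

# Hypothetical judge verdicts: one (intelligibility, acceptability) pair per
# erroneous sentence. Codes follow the instrument described above:
# I = intelligible, UNI = unintelligible, A = acceptable, UNA = unacceptable,
# UND = undecided.
responses = {
    "NSE":  [("I", "UNA"), ("I", "UNA"), ("UNI", "UNA"), ("I", "UND")],
    "NNTE": [("I", "UNA"), ("UNI", "UNA"), ("UNI", "UNA"), ("UND", "UNA")],
}

def tally(verdicts):
    # One point per sentence judged intelligible (I) or acceptable (A).
    intelligibility = Counter(v[0] for v in verdicts)["I"]
    acceptability = Counter(v[1] for v in verdicts)["A"]
    return intelligibility, acceptability

for group, verdicts in responses.items():
    i_score, a_score = tally(verdicts)
    print(group, "intelligibility =", i_score, "acceptability =", a_score)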

Data Analysis
Two hypotheses were drawn (see below) and the data collected were used to test them;
the t-test statistic was used to determine whether a significant difference existed.
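For readers unfamiliar with the mechanics of the test, the sketch below computes a pooled-variance independent-samples t statistic for two groups of twenty judges and compares it with the critical value used in this study (two-tailed, df = 38, p = .05). The judge scores themselves are invented for illustration; only the group size, degrees of freedom and critical value come from the study.

import math

def independent_t(sample_a, sample_b):
    # Pooled-variance independent-samples t statistic.
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    se = math.sqrt(pooled * (1 / n_a + 1 / n_b))
    return (mean_a - mean_b) / se

# Invented scores for two groups of 20 judges (one total score per judge).
nse_scores  = [8, 6, 7, 9, 5, 6, 8, 7, 6, 9, 7, 5, 8, 6, 7, 9, 6, 7, 8, 5]
nnte_scores = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4, 3, 5, 6, 4, 5, 3, 4, 6, 5, 4]

t = independent_t(nse_scores, nnte_scores)
critical_t = 2.021  # two-tailed, df = 38, p = .05 (as in the study's tables)
print("calculated t =", round(t, 2))
print("significant" if abs(t) > critical_t else "not significant")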
Hypotheses
From the background and purpose of the study as highlighted previously, two null hypotheses
were drawn. These were:
H01: There is no significant difference between NSE and NNTE assessments on the
acceptability of the syntactic errors of the SS III students.
H02: Significant difference between the NSE and NNTE assessments on the intelligibility of
the syntactic errors of the students does not exist.
Presentation of Results
The findings of the study are presented in the tables below:
Table 1
Assessment by NSE and NNTE on Acceptability

Group   N    X     SD    DF   Critical t-value   Calculated t-value   Decision
NSE     20   0     0     38   2.021              0.7                  Not significant
NNTE    20   4     0.4
P = .05
Table 2
Assessment by NSE and NNTE on Intelligibility

Group   N    X     SD    DF   Critical t-value   Calculated t-value   Decision
NSE     20   126   3.58  38   2.021              2.46                 Significant
NNTE    20   97    3.43
P = .05
Discussion and Conclusion


It would be recalled that two null hypotheses were drawn for the purpose of this study.
Hypothesis one sought to know whether or not there was a significant difference between NSE
and NNTE in the evaluation of the acceptability of the syntactic errors of the students in Kwara
State. The result as presented in Table 1 indicated that twenty erroneous sentences were
presented to both sets of evaluators. While the NSE awarded a total score of zero, the mean and
the standard deviation equally being zero, the NNTE awarded a total of four, with a mean of 0.2
and a standard deviation of 0.4. Under the probability level of 0.05 and the degree of freedom of
38, the calculated t was 0.7 while the critical t was 2.021. As the calculated t was less than the
critical t (see Table 1), H01 was accepted. This means that there is no significant difference
between the NSE and NNTE evaluations of acceptability.
In addition, the same treatment was given to H02, which sought to know if there existed
any significant difference in the evaluation of the intelligibility of the syntactic errors of the
students by NSE and NNTE. Findings from above indicated that the twenty erroneous sentences
attracted a total score of 126 from the NSE, the mean being 6.3 and the standard deviation 3.58;
the same number of sentences, however, attracted a total of 97 from the NNTE, with a mean of
4.85 and a standard deviation of 3.43. Under the probability level of 0.05 and the degree of
freedom of 38, the calculated t stood at 2.46 while the critical t was 2.021. This indicated that the
calculated t exceeded the critical t; therefore, H02 was rejected. This means that there is a
significant difference in the evaluation of the two sets of judges and that the NSE were more
tolerant of the syntactic errors of SS III students in Kwara State than the NNTE, using the
criterion of intelligibility.
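As a quick arithmetic check (a minimal sketch; the totals and group size are those reported above), the means follow directly from the totals:

# Twenty judges per group; totals as reported in Table 2.
nse_total, nnte_total, n_judges = 126, 97, 20

print(nse_total / n_judges)   # 6.3  (NSE mean)
print(nnte_total / n_judges)  # 4.85 (NNTE mean)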
The implication that we wish to draw out here is that language testers should clearly
distinguish the criterion of acceptability from that of intelligibility. Let us take for
consideration the following erroneous sentences:
1. My main body of this essay is to analyse to you the experience I have been gained since
when I was admitted to this school.
2. The student come to school at 7.00 am and to be in their class for lesson,
3. I was love by all my teacher also
4. My last experience was that as the game prefect which was a surprised to me.
Evidently, all the above sentences are deficient in verb formation, number selection and
diction. However, a careful reader would notice that, in spite of these deficiencies, comprehension
is not impeded. Thus, while we cannot accept any of the sentences, we cannot say that they are
unintelligible.
In the early part of this paper, it has been made abundantly clear that language testing,
of which EE is a subset, involves decision making. However, from the findings of this study it
is clear that NNTE are less tolerant of learners' errors in English. Yet, most of these students are
able to perform some other skills whose instructions are passed through the medium of English.
In fact, many students often score high grades in all other subjects of the Senior Secondary
School Examination while performing woefully in English. Consequently, this paper wishes to
suggest that teachers of English, testers of English and applied linguists in general should be less
strict in their attitudes to learners' errors in English, as abundant evidence has been given
(Davies, 1983; James, 1977; and Hughes & Lascaratou, 1982, among others). Equally, they should
allow both the criteria of intelligibility and acceptability, rather than just one, to influence their
scoring pattern.
REFERENCES
Alderson, C. J. (1981). Introduction. In J. C. Alderson & A. Hughes (Eds.), Issues in language
testing. London: The British Council. ELT Document 3, 5-8.
Baker, D. (1989). Language testing: A critical survey and practical guide. London: Edward Arnold.
Burt, M. K. & Kiparsky, C. (1975). Global and local mistakes. In J. Schumann & N. Stenson (Eds.),
New frontiers in second language learning, 37-53.
Carroll, J. B. (1981). Specifications for an English language testing service. In J. C. Alderson &
A. Hughes (Eds.), Issues in language testing. London: The British Council. ELT Document 3, 66-89.
Clark, J. L. D. (1972). Foreign language testing: Theory and practice. Philadelphia: Centre for
Curriculum Development.
Davies, E. E. (1983). Error evaluation: The importance of viewpoint. ELT Journal, 32(4), 63-75.
Goroseh, M. (1973). Assessment of intervariability in testing oral performance of adult students.
In C. Svartvik (Ed.), Testing Journal, 3(7), 17-23.
Harris, D. P. (1969). Testing English as a second language. New York: McGraw-Hill.
Heaton, J. B. (1975). Writing English language tests. London: Longman.
Hughes, A. (1981). Epilogue. Issues in Language Testing, ELT Document 2, 206-209.
Hughes, A. & Lascaratou, C. (1982). Competing criteria for error gravity. ELT Journal, 36(3), 19-31.
James, C. (1974). Linguistic measure for error gravity. Audio-Visual Language Journal, 12(1), 9-16.
James, C. (1977). Judgements of error gravity. ELT Journal, 31(2), 116-124.
Lado, R. (1961). Language testing. London: Longman.
Morrow, K. (1981). Communicative language testing: Revolution or evolution? Issues in language
testing, ELT Document 3, 9-25.
Oller, J. W., Jr. (1979). Language tests at school. London: Longman.
Palmer, A. S. & Bachman, L. F. (1981). Basic concerns in test validation. Issues in language
testing, ELT Document 3, 135-151.
Palmer, D. (1980). Expressing error gravity. ELT Journal, 34(2), 92-96.
Tinuoye, M. C. (1991). Forum on common errors in usage and pronunciation: Confusion between
'their' and 'there'. JESEL, (3), 47-49.
Valette, R. M. (1977). Modern language testing. New York: Harcourt Brace Jovanovich.
Weir, J. C. (1981). Reaction to the Morrow paper (1). ELT Document 2, 26-37.
