teachers. The reason, according to Davies, was that non-native speaker teachers, because of their special insights into students' communicative strategies, were not well placed to assess how intelligible a learner's utterance would be to ordinary NSE. Knowledge of the content of the curriculum also affected the JA. The teachers seemed to be strict in areas that were in the syllabus and had probably been taught in the school curriculum; they therefore tended to mark down heavily errors not expected of the students, while the native speakers, who had no background knowledge of the producers of the sentences, seemed lenient. Similarly, the marking context influenced the JA of errors. According to Davies, the native judges felt that they were meeting novel usages of English and were hence lenient in scoring the errors, whereas the non-native judges (teachers) felt threatened that their own mastery of the English language was probably being tested and tended to be very strict. In conclusion, the researcher found that the general attitude of NSE to errors was much more lenient than that of NN teachers; that is, non-native teachers of English were less tolerant of errors.
Furthermore, James (1977) compared the error assessments of twenty native speakers and twenty non-native speakers of English. He arrived at the same conclusion as Davies. Likewise, Goroseh (1973) found a considerable difference between teachers and non-teachers of English in their assessment of learners' pronunciation and comprehension.
Similarly, Hughes and Lascaratou (1982) concluded in their report that NSE were found to mark more leniently than NN teachers. According to them, this might be because of the NSE's better knowledge of the wide variety of acceptable structures of the language. Thirty-two erroneous sentences of Greek ESL students were used. But unlike Davies (1983), who used two groups of judges to evaluate the gravity of the errors vis-à-vis acceptability and intelligibility, Hughes and Lascaratou (1982) involved three separate groups in their study. There were thirty (30) judges in all: ten (10) of them were native teachers of English, ten were Greek teachers of English, and ten were educated native speakers of English who were not teachers. A descriptive essay on a car accident was elicited from some ESL Greek students who were in their penultimate year of high school. Some thirty-two erroneous sentences were collected and presented to these three groups of judges. Through the JA, Hughes and Lascaratou found that the Greek teachers made references to the 'basicness' of the grammatical rules violated; this group of judges depended almost solely on the criterion of acceptability in their evaluation. The English native teachers and non-teachers made use of both the criteria of intelligibility and acceptability in their error evaluation. Further, the writers pointed out that if the objective of teaching was emphasis on communicative competence, then the work must be assessed with reference to the effectiveness of the communication that it achieves (p. 209).
In other words, both intelligibility and acceptability, rather than either alone, must be the objectives of communication and, therefore, of assessment. Other works on error evaluation include those of Palmer (1980), James (1974), and Burt and Kiparsky (1975).
Purpose of the Study
The purpose of this study was to assess the attitude of NNTE to learners' errors with the aim of improving the status quo.
Significance of the Study
It is hoped that the findings of this study will help to change positively the attitude of English language markers and teachers to their learners' errors, in such a way that they may be more tolerant of errors in English. It is also hoped that this study will be of immense significance to language teachers, as they will discover that the best error evaluator is the one who allows both criteria, intelligibility and acceptability, rather than just one, to influence his error judgement.
Data Collection
For the purpose of data collection for this study, four secondary schools in four Local Government Areas in Kwara State were randomly selected. These were:
1. Government Secondary School, Afon - Asa L.G.A
Data Analysis
Two hypotheses were drawn (see below), and the data collected were used to test them. The t-test statistic was used to determine whether a significant difference existed.
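The decision procedure used here, an independent-samples t-test with pooled variance (df = 38, critical t = 2.021 at p = .05 for two groups of twenty judges), can be sketched as follows. The score lists below are purely hypothetical illustrations of judges' ratings, not the study's actual data.

```python
import math
from statistics import mean, stdev

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance; returns (t, df)."""
    na, nb = len(sample_a), len(sample_b)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    sp2 = ((na - 1) * stdev(sample_a) ** 2
           + (nb - 1) * stdev(sample_b) ** 2) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (mean(sample_a) - mean(sample_b)) / se, na + nb - 2

# Hypothetical intelligibility scores for two groups of 20 judges each.
nse = [5] * 10 + [9] * 10    # illustrative NSE ratings (mean 7)
nnte = [3] * 10 + [7] * 10   # illustrative NNTE ratings (mean 5)

t, df = pooled_t(nse, nnte)
critical_t = 2.021           # two-tailed, p = .05, df = 38
decision = "significant" if abs(t) > critical_t else "not significant"
```

If the calculated t exceeds the critical t in absolute value, the null hypothesis of no difference between the two groups of judges is rejected; otherwise it is accepted.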
Hypotheses
From the background and purpose of the study as highlighted previously, two null hypotheses
were drawn. These were:
H01: There is no significant difference between NSE and NNTE assessments of the acceptability of the syntactic errors of the SS III students.
H02: There is no significant difference between NSE and NNTE assessments of the intelligibility of the syntactic errors of the students.
Presentation of Results
The findings of the study are presented in the tables below:
Table 1
Assessment by NSE and NNTE on Acceptability

Group   N    X    SD    DF   Critical t-value   Calculated t-value   Decision
NSE     20   0    0     38   2.021              0.7                  Not significant
NNTE    20   4    0.4

P = .05
Table 2
Assessment by NSE and NNTE on Intelligibility

Group   N    X     SD     DF   Critical t-value   Calculated t-value   Decision
NSE     20   126   3.58   38   2.021              2.46                 Significant
NNTE    20   97    3.43
As Table 1 shows, twenty erroneous sentences were presented to both sets of evaluators. While the NSE awarded a total score of zero, the mean being zero and the standard deviation being, equally, zero, the NNTE awarded four, 0.2 and 0.4 respectively. At the probability level of 0.05 and with 38 degrees of freedom, the calculated t was 0.7 while the critical t was 2.021. As the calculated t was less than the critical t (see Table 1), H01 was accepted. This means that there is no significant difference between the NSE and NNTE evaluations.
In addition, the same treatment was given to H02, which sought to know whether there existed any significant difference between NSE and NNTE in their evaluation of the intelligibility of the students' syntactic errors. The findings indicated that the twenty erroneous sentences attracted a total score of 126 from the NSE, the mean being 6.3 and the standard deviation 3.58; the same sentences attracted a total of 97 from the NNTE, with a mean of 4.85 and a standard deviation of 3.43. At the probability level of 0.05 and with 38 degrees of freedom, the calculated t stood at 2.46 while the critical t was 2.021. Since the calculated t exceeded the critical t, H02 was rejected. This means that there is a significant difference between the evaluations of the two sets of judges and that, on the criterion of intelligibility, the NSE were more tolerant of the syntactic errors of SS III students in Kwara State than the NNTE.
The implication that we wish to draw out here is that language testers should clearly distinguish the criterion of acceptability from that of intelligibility. Let us take for consideration the following erroneous sentences:
1. My main body of this essay is to analyse to you the experience I have been gained since
when I was admitted to this school.
2. The student come to school at 7.00 am and to be in their class for lesson,
3. I was love by all my teacher also
4. My last experience was that as the game prefect which was a surprised to me.
Evidently, all the above sentences are deficient in verb formation, number selection and
diction. However, a careful reader would notice that in spite of these deficiencies comprehension
is not impeded. Thus, while we cannot accept any of the sentences, we cannot say that they are unintelligible.
In the early part of this paper, it was made abundantly clear that language testing, of which EE is a subset, involves decision making. However, from the findings of this study it is clear that NNTE are less tolerant of learners' errors in English. Yet most of these students are able to perform other skills whose instruction is delivered through the medium of English. In fact, many students often score high grades in all other subjects of the Senior Secondary School Examination while performing woefully in English. Consequently, this paper wishes to suggest that teachers of English, testers of English and applied linguists in general should be less strict in their attitudes to learners' errors in English, as abundant evidence has been given (Davies, 1983; James, 1977; Hughes & Lascaratou, 1982; among others). Equally, they should allow
both the criteria of intelligibility and acceptability, rather than just one, to influence their scoring
pattern.
REFERENCES
Alderson, C. J. (1981). Introduction. In J. C. Alderson & A. Hughes (Eds.), Issues in language testing. London: The British Council, ELT Document 3, 5-8.
Baker, D. (1989). Language testing: A critical survey and practical guide. London: Edward Arnold.
Burt, M. K. & Kiparsky, C. (1975). Global and local mistakes. In J. Schumann & N. Stenson (Eds.), New frontiers in second language learning (pp. 37-53).
Carroll, J. B. (1981). Specifications for an English language testing service. In J. C. Alderson & A. Hughes (Eds.), Issues in language testing. London: The British Council, ELT Document 3, 66-89.
Clark, J. L. D. (1972). Foreign language testing: Theory and practice. Philadelphia: Centre for Curriculum Development.
Davies, E. E. (1983). Error evaluation: The importance of viewpoint. ELT Journal, 32(4), 63-75.
Goroseh, M. (1973). Assessment of intervariability in testing oral performance of adult