ACHIEVEMENT TEST
REPORT
SUBMITTED TO
Dr. Geeta Sahni

By

Tathagata Dutta
Roll : 232
B. ED. (2009-10)

CONTENTS

S. NO.   TOPIC
1.   ACKNOWLEDGEMENT
2.   EVALUATION AND EDUCATION
     - INTRODUCTION
     - CONTINUOUS AND COMPREHENSIVE EVALUATION
     - FORMATIVE AND SUMMATIVE EVALUATION
     - CONCLUSION
3.   CONCEPTS IN LANGUAGE TESTING
     - INTRODUCTION
     - TESTING LISTENING
     - TESTING SPEAKING
     - TESTING READING
     - TESTING WRITING
4.   TESTS AND TYPES OF TESTING
     - PROFICIENCY TESTS
     - ACHIEVEMENT TESTS
     - DIAGNOSTIC TESTS
     - PLACEMENT TESTS
5.   KINDS OF TESTING
6.   MY LEARNERS
7.   DETAILS OF THE SYLLABUS COVERED
8.   BLUEPRINT AND ROUGH DRAFT OF THE ACHIEVEMENT TEST
9.   THE FRAMED ACHIEVEMENT TEST
10.  ANALYSIS OF THE ACHIEVEMENT TEST BEFORE THE TEST
11.  MARKSHEET OF THE ENTIRE CLASS
12.  DETAILED MARKSHEET OF THE HIGH AND LOW ACHIEVERS GROUP
13.  ITEM ANALYSIS AND ITEM DISCRIMINATION
14.  ANALYSIS OF THE OBJECTIVE TYPE ITEMS
15.  ANALYSIS OF THE SUBJECTIVE TYPE ITEMS
16.  ACHIEVEMENT TEST ANALYSIS AT A GLANCE
17.  WHAT IS STATISTICS?
     - HISTOGRAM, FREQUENCY CURVE
18.  MEASURES OF CENTRAL TENDENCY
     - ARITHMETIC MEAN
     - MEDIAN
     - MODE
     - STANDARD DEVIATION
19.  OVERALL ANALYSIS
20.  BIBLIOGRAPHY

ACKNOWLEDGEMENT

I am very thankful to my mentor and guide at the Central Institute of Education, Dr. Sahni, for being patient with me and for helping to make this Achievement Test report a success. I am also thankful to Ms. Shubhangi, whose previous ATR provided scaffolding for the creation of this record. My special thanks to Mr. Tulika Rajpal, who has been a kind soul and helped us during our teaching practice and also while making this report.
I would also like to convey my regards to the institution, Government Senior Secondary School for Boys, 1, Roopnagar. I am also thankful to the library and the staff at the Central Institute and at the British Council.
My special thanks to all my hostel mates, who have been kind and very cooperative during the preparation of this report.

EVALUATION AND EDUCATION


Evaluation is the process of determining the extent to which pupils achieve instructional objectives. It is a scheme for collecting evidence of behavioral changes in the learners and for judging the direction and extent of such changes. Evaluation is a continuous process, forms an integral part of the total system of education, and is very closely related to educational objectives. It exercises a good influence on pupils' study habits and also reflects the methods that have been used by the teacher. Thus, it not only helps to measure educational achievement but also shows how to improve it.
The purpose of evaluation is to make provisions for guiding the growth of the learners, to diagnose their strengths and weaknesses, and to point out areas where remedial measures need to be devised. It makes a judgment on the quality or worth of an educational programme or of students' achievement and provides for a subsequent modification of the curriculum.
Wrightstone defines evaluation, a relatively new technical term, as a method introduced to designate a more comprehensive concept of measurement than is implied in conventional tests and examinations... the emphasis in measurement is upon single aspects of subject matter achievement in specific skills and abilities... the emphasis in evaluation is upon broad changes and major objectives of an educational programme. These include not only subject matter achievement but also attitudes, interests, ideas, ways of thinking, work habits and personal and social adaptability.

CONTINUOUS AND COMPREHENSIVE EVALUATION

In order to meet the objective of real education, there is a need to continuously and comprehensively evaluate children. In fact, educationalists argue that if we really want the education system to turn the corner and bring about the coveted all-round development of the personality of the child, then continuous and comprehensive evaluation is the way forward.
The continuous aspect takes care of the continual (placement and formative evaluation) part of evaluation, and the comprehensive component takes care of the assessment of the all-round development of the child's personality. Assessment here is a process of reasoning from evidence. To design assessments of students' learning that will provide useful evidence requires that we coordinate and align three key components: COGNITION, which refers to a model of thinking and learning of students within the subject domain; OBSERVATIONS, the tasks or activities that students engage in that provide evidence of learning; and INTERPRETATION, the process or methods for making sense of the evidence.

[Diagram: ASSESSMENT, LEARNING OBJECTIVES and INSTRUCTIONAL ACTIVITIES aligned with one another]

Assessments are effective and useful only to the degree that these three components are in synchrony.
In the Indian education system, the terms evaluation and assessment are associated with examinations, stress and anxiety. The National Curriculum Framework (2005) seeks to provide guidelines for a good evaluation and examination system that can become an integral part of the learning process and benefit both the learners themselves and the educational system by giving valuable feedback.
While speaking on school stages and assessment in Chapter 3 of the NCF (2005), the framers state the assessment required at different stages:

- For ECCE and classes I and II of the Elementary stage, assessment must be purely qualitative judgments of children's activities in various domains. There should be no tests, oral or written.
- For classes III to VIII of the Elementary stage, various methods may be used, but these should be seen as part of the teaching process and not a constant threat.
- For classes IX to XII of the Secondary and Higher Secondary stages, assessment may be based more on tests, examinations and projects for the knowledge-based areas of the curriculum, along with self-assessment.

FORMATIVE AND SUMMATIVE EVALUATION

Evaluation may be undertaken for three principal reasons:

1. Accountability.
2. Curriculum development and betterment.
3. Self-development of teachers and other language teaching professionals.

Evaluation for the purpose of accountability

This is mainly concerned with determining whether there has been value for money, in other words whether something has been both effective and efficient. Generally, the information derived from this is not used in any major way to improve the functioning of the curriculum or classroom practice. Rather, it provides us with information on whether something should be continued or discontinued. Evaluations of this type are largely, although not exclusively, the domain of policy makers or providers of resources. Usually, such evaluations are carried out after an innovation has been running for some time, or at the end of a project. This type of evaluation, known as SUMMATIVE EVALUATION, has also tended to involve testing and measurement, and analyses of the statistical significance of the results obtained. Summative evaluations are limited by their focus on overall outcomes at the end of an educational innovation.
Evaluation for the purpose of curriculum development

Teachers have a key role to play in the curriculum renewal and development process. It is the teacher, rather than the tester or the evaluation expert, who has most information about the specific classroom context. This information may be reported at various times and in various forms, for example as responses to questionnaires, interviews, records, or diary keeping. It may be largely descriptive and qualitative, and need not entail tests, measurements, and inferences about curriculum quality from statistical data. This type of evaluation, which is intended to improve the curriculum by gathering data from different people over a period of time, is called FORMATIVE EVALUATION. Such evaluations are ongoing and monitor developments by identifying the strengths and weaknesses of all aspects of teaching and learning. As opposed to merely passing an evaluative judgment on the end product of a teaching programme, formative evaluation is designed to provide information that may be used as the basis for future planning and action.

Evaluation for the purpose of teacher self-development

A third and major role that evaluation has to play is in formalizing and extending a teacher's knowledge about teaching and learning in classrooms. This is sometimes referred to as ILLUMINATIVE EVALUATION, because it involves raising the consciousness of teachers and other ELT practitioners as to what actually happens in the language teaching classroom. This type of evaluation is developmental and formative in nature; the focus is more on the process of teaching and learning and less on the end product, and it has a major role to play in teacher self-development.

CONCLUSION

When we evaluate different aspects of the teaching and the learning process, it becomes important to make explicit the criteria used in our judgments, and to be principled in our evaluation. Ill-prepared and ad-hoc evaluations are likely to be unreliable, unfair and also uninformative. They do not provide a suitable base on which to make any educational decision.
Evaluation means much more than administering tests to learners and analyzing the results. It not only focuses on the learner but also makes a commentary on the process of teaching. Successful evaluation should be systematic. In order to achieve this we need to take into account the concept of management as reflected through our leadership skills. As teachers we need to be aware of the role of managers and evaluate our own management styles. We need to know why we wish to evaluate, what evaluation is for, and how to organize it.

CONCEPTS IN LANGUAGE
TESTING

English is the official associate language in India and as such becomes the second language in the national curriculum framework. With the growing importance of English in every aspect of public life, the teaching of the English language has also evolved through the decades. At present most language teachers follow the Communicative Language Teaching (CLT) method in the classroom. The focus is on fluency and on a guided approach to help the learner arrive at the accurate way of using the foreign language. The rules of the language are not given; a covert method is followed where the learners are guided to arrive at the rules. The teaching of the English language can be divided into four categories or skills:
four categories or skills:
LISTENING
RECEPTIVE

ORAL
SPEAKING
READING

PRODUCTIVE

WRITTEN
WRITING

While conducting a language test all four skills have to be kept in mind, along with the usage of grammar and vocabulary. However, each of the four skills has a different set of considerations, ways of testing and evaluation. We shall consider all the skills in the following sections.

TESTING LISTENING
An oral and receptive skill, the testing of listening parallels in most ways the testing of reading. There may also be situations where the testing of oral ability is considered, for one reason or another, impractical; in such cases a test of listening should be included for its backwash effect and also for diagnostic purposes.
The special problems in constructing listening tests arise out of the transient nature of the spoken language. A listening test should be able to test the following abilities of the learners:
- Ability to obtain the gist.
- Ability to follow an argument.
- Ability to recognize the attitude of the speaker.
To test these abilities of the learners, the teacher has to be careful about the sample of speech/text and has to keep in mind the test specifications. The samples should be of authentic native-speaker speech. Possible sources are the radio, television, the Internet and even our own recordings.

TESTING SPEAKING
The objective of teaching the spoken language is the development of the ability to interact successfully in that language, and this involves comprehension as well as production. The representative tasks can be grouped under the following heads:
- EXPRESSIONS: likes/dislikes, agreement/disagreement, preferences, opinions.
- DIRECTING: instructing, persuading, advising.
- DESCRIBING: actions, events, objects, people, processes.
- ELICITING: information, directions, classifications.
- NARRATION: sequences of events.
- REPORTING: descriptions, comments, decisions and choices.
The skills that are tested in a test of oral ability can be subdivided into two broad heads: informational and interactional skills. In the task, the student should be informative about the theme and should also interact with other students. While evaluating the skills in managing interactions, the following abilities should be kept in mind:
- Initiate interactions.
- Change the topic of an interaction.
- Share the responsibility for the development of an interaction.
- Take their turns and give turns to other speakers.
- Come to a decision.
- End an interaction.
The tester has to choose an appropriate technique. The test may take the form of an interview, role play, interpretation, a prepared monologue, reading aloud, responses to audio/video recordings, or simulated conversations.

TESTING READING
Reading used to be the principal aim of most foreign-language courses, and it was developed through textual analyses, vocabulary tests, and translations into English; listening and speaking were merely thought to be by-products. But with the change in approaches to teaching a foreign language, there is a new goal in reading: not a verbatim translation but total comprehension without recourse to English.
The primary aim in teaching a foreign language was to enable students to read foreign texts in the original. Thus, when a student learns to read a foreign language, his/her mind should also be functioning in that language. Reading requires a familiarity on the part of the reader with the two fundamental building blocks of that particular language: structure and vocabulary. The broader the student's knowledge of structure and the greater his/her vocabulary, the more difficult the text he/she will be able to approach. Consequently, two general types of test items are necessary to evaluate student reading potential: vocabulary items and structural (syntactical and morphological) items.
Reading can be differentiated from writing, speaking, and listening by another characteristic: speed. In learning a new language the student wishes eventually to read it easily and rapidly. Fluency in speaking and ease in listening comprehension correspond to speed in reading. The tasks that are generated while testing a student's reading skill depend on the speed of the learner. There is a distinction, based on the difference of purpose, between expeditious reading and slow, careful reading.
In expeditious reading operations, candidates may be asked to do:
1. SKIMMING, where the objective is to:
- Obtain the main ideas and discourse topic quickly and efficiently;
- Establish quickly the structure of a text;
- Decide the relevance of the text to their needs.
2. SCANNING, where the objective is to find:
- Specific words or phrases; figures, percentages;
- Specific items in an index;
- Specific names in a bibliography or a set of references.
In a careful reading operation, candidates may be asked to:
- Identify discourse markers;
- Interpret complex sentences;
- Interpret topic sentences and the logical organization of the text;
- Identify implicitly and explicitly stated main ideas;
- Recognize the writer's intentions;
- Distinguish fact from opinion, hypothesis from fact;
- Infer the meaning of an unknown word from context;
- Make pragmatic inferences.

TESTING WRITING
Of the four language skills, writing may truly be considered the most sophisticated. In listening and in reading, the student receives a message formulated by another; his role is passive even though he may be mentally interpreting and analyzing what he or she is hearing or reading. In speaking, the student is engaged in communicating his own ideas and feelings, but with approximations and explanations. Communication through the written word, on the other hand, possesses a certain degree of finality and demands real proficiency from the writer if it is to be effective. The mechanics (vocabulary, spelling, and grammar) must be mastered before the student can aspire to precision of expression, fluency, and style. Tests must consequently be so structured that they measure the various aspects of students' progress toward the acquisition of this skill. This can be achieved if the testing is divided into three parts:

- We have to set writing tasks that are properly representative of the population of tasks that we should expect the students to be able to perform.
- The tasks should elicit valid samples of writing.
- It is essential that the samples of writing can and will be scored validly and reliably.

TESTS AND TYPES OF TESTING


Tests can be categorized according to the types of information they provide. This categorization is useful because it not only helps in deciding whether an existing test is suitable for a particular purpose but also in writing new tests where these are necessary:

(a) Proficiency tests
(b) Achievement tests
(c) Diagnostic tests
(d) Placement tests

Proficiency tests are designed to measure people's ability in a language, regardless of any training they may have had in that language. The content of a proficiency test, therefore, is not based on the content or objectives of language courses that people taking the test may have followed. Rather, it is based on a specification of what candidates have to be able to do in the language in order to be considered proficient. This raises the question of what we mean by the word proficient.
In the case of some proficiency tests, proficient means having sufficient command of the language for a particular purpose. Such a test may even attempt to take into account the level and kind of English needed to follow courses in particular subject areas. It might, for example, have one form of the test for arts subjects, another for sciences, and so on. Whatever the particular purpose to which the language is to be put, this will be reflected in the test content at an early stage of test development.
There are other proficiency tests which, by contrast, do not have any occupation or course of study in mind. But these general proficiency tests should have a detailed specification of what it is that successful candidates have demonstrated they can do. Despite differences between them in relation to content and level of difficulty, all proficiency tests have in common the fact that they are not based on courses that candidates have previously taken.
In contrast to proficiency tests, teachers are much more likely to be involved in the preparation and use of achievement tests. Achievement tests are directly related to language courses, their purpose being to establish how successful individual students, groups of students, or the courses themselves have been in achieving objectives. There are two kinds of achievement test: final achievement tests and progressive achievement tests.
Final achievement tests are those administered at the end of a course of study. The content of these tests must be related to the courses with which they are concerned. Because its content is so firmly based on the syllabus or on the books and manuals used, this has also been called the syllabus-content approach. It has an obvious appeal, since the test only contains what it is thought that the students have actually encountered, and thus, in this respect, can be called a fair test. The disadvantage of such a test is that if the syllabus is badly designed, then the results of the test can be very misleading.
An alternative approach is to base the test's content directly on the objectives of the course. This has a number of advantages. First, it compels course designers to be explicit about objectives. Secondly, it makes it possible for performance on the test to show just how far students have achieved those objectives. This in turn puts pressure on those responsible for the syllabus and for the selection of books and materials to ensure that these are consistent with the course objectives.
One may wonder if there is any real difference between final achievement tests and proficiency tests. If a test is based on the objectives of a course, and these are equivalent to the language needs on which a proficiency test is based, there is no reason to expect a difference between the form and content of the two tests. But two things have to be remembered. First, objectives and needs will not typically coincide in this way. Secondly, many achievement tests are not in fact based on course objectives. These facts have implications both for the users of test results and for the test writers. The users need to know on what basis an achievement test has been constructed, and be aware of the possibly limited validity and applicability of the test scores. Test writers, on the other hand, must create achievement tests that reflect the objectives of a particular course, and not expect a general proficiency test to provide a satisfactory alternative.
Progressive achievement tests, as their name suggests, are intended to measure the progress that students are making. They contribute to formative assessment. One way to measure progress is to administer achievement tests at regular intervals. But in addition to this, the teacher can also create a set of pop quizzes which provide a rough check on the students' progress.
Diagnostic tests are used to identify learners' strengths and weaknesses. They are intended primarily to ascertain what learning still needs to take place. We can be fairly confident of our ability to create tests that will tell us that someone is particularly weak in, say, speaking as opposed to reading in a language.
But there is a lack of good diagnostic tests. This is because the size of such tests would make them impractical to administer in a routine fashion. Diagnostic tests could be extremely useful for individualized instruction. Learners would be shown where gaps exist in their command of the language, and could then be directed to sources of information, exemplification and practice.
Placement tests, as their name suggests, are intended to provide the information that will help to place students at the stage of the teaching programme most appropriate to their abilities. Typically, they are used to assign students to classes at different levels. Placement tests depend on the identification of the key features at different levels of teaching in the institution.

MY LEARNERS
I was assigned to teach English to the class XI learners of Government Senior Secondary School for Boys, Roopnagar. The students of this section were enrolled in the Arts programme of the CBSE. Though the class strength was 36 students, 7 of them had opted for Sanskrit.
The students were bright and eager to learn. But they had been conditioned into a method of learning that was teacher-oriented. This posed a problem for me at the beginning, as they were very reluctant to speak up in class. They expected me to provide them with the answers. But slowly they started to open up, to speak in English and to take part in class discussions confidently.
The class was a pretty boisterous one and as a teacher I was sometimes at a loss while dealing with some of the more mischievous learners. But, in the end, all of them came to love the subject. Though they were still a bit hesitant in their usage of the language, they were definitely on the road to becoming more confident in dealing with it.

DETAILS OF THE SYLLABUS


COVERED
S. NO. | LINGUISTIC AREA | TOPIC
1. | Prose | The Adventure
2. | Poetry | The Browning Version
3. | Reading | Note-making
4. | Writing | Letter to the editor
5. | Grammar | Reported speech, Idioms, Tenses

MARKSHEET OF THE ENTIRE CLASS

CLASS ROLL NO. | EXAM ROLL NO. | NAME | MARKS OBTAINED | % OBTAINED
1. | XIA1 | AMAN PREET SINGH | 28 | 47%
2. | XIA2 | ARVIND | 37 | 62%
3. | XIA3 | BHARAT KUMAR | 34 | 57%
4. | XIA4 | CHITRANJAN KUMAR | 39 | 65%
5. | XIA5 | DEEPAK SHARMA | 43 | 72%
6. | XIA6 | GANGESH KUMAR JHA | 28 | 47%
7. | XIA7 | HARI GOVIND NIRALA | 37 | 62%
8. | XIA8 | JASPAL SINGH | 30 | 50%
9. | XIA9 | MAHENDER SHUKLA | 29 | 48%
10. | XIA10 | MANOJ | 32 | 53%
11. | XIA11 | MANOJ KUMAR | 31 | 52%
12. | XIA12 | MELEKHRAJ MAHAGURUJI | 40 | 67%
13. | XIA13 | MAYANK KHANDELWAL | 29 | 48%
14. | XIA14 | PANKAJ SINGH | 38.5 | 64%
15. | XIA15 | PRABHAKAR PAL | 33 | 55%
16. | XIA16 | RAHUL | 21 | 33%
17. | XIA17 | RAHUL | 25 | 42%
18. | XIA18 | RAHUL MATHUR | 38 | 63%
19. | XIA19 | RAJU CHAUDHURI | Ab. | --
20. | XIA20 | RANI GOSWAMI | 32 | 53%
21. | XIA21 | SARAFAT ALI | 27 | 45%
22. | XIA22 | SUNNY SINGH | 30 | 50%
23. | XIA23 | TEJASVI SANKAR | 29 | 48%
24. | XIA24 | VIKAS KANT | 41 | 68%
25. | XIA25 | VINOD NEGI | 41 | 68%
26. | XIA26 | VIVEK PANDEY | 34 | 57%
27. | XIA27 | VIRENDER SINGH | Ab. | --
28. | XIA28 | YODENDER SINGH | 32 | 53%
29. | XIA29 | ZUNAID AHMED | Ab. | --

MARKSHEET OF THE ENTIRE CLASS AND THEIR GRADES:

CLASS ROLL NO. | EXAM ROLL NO. | NAME | MARKS OBTAINED | % OBTAINED | GRADE
1. | XIA1 | AMAN PREET SINGH | 28 | 47% | C+
2. | XIA2 | ARVIND | 37 | 62% | B+
3. | XIA3 | BHARAT KUMAR | 34 | 57% |
4. | XIA4 | CHITRANJAN KUMAR | 39 | 65% | B++
5. | XIA5 | DEEPAK SHARMA | 43 | 72% |
6. | XIA6 | GANGESH KUMAR JHA | 28 | 47% | C+
7. | XIA7 | HARI GOVIND NIRALA | 37 | 62% | B++
8. | XIA8 | JASPAL SINGH | 30 | 50% | C++
9. | XIA9 | MAHENDER SHUKLA | 29 | 48% | C+
10. | XIA10 | MANOJ | 32 | 53% | C++
11. | XIA11 | MANOJ KUMAR | 31 | 52% | C++
12. | XIA12 | MELEKHRAJ MAHAGURUJI | 40 | 67% | B++
13. | XIA13 | MAYANK KHANDELWAL | 29 | 48% | C+
14. | XIA14 | PANKAJ SINGH | 38.5 | 64% | B+
15. | XIA15 | PRABHAKAR PAL | 33 | 55% |
16. | XIA16 | RAHUL | 21 | 33% | F
17. | XIA17 | RAHUL | 25 | 42% |
18. | XIA18 | RAHUL MATHUR | 38 | 63% | B+
19. | XIA19 | RAJU CHAUDHURI | Ab. | -- | --
20. | XIA20 | RANI GOSWAMI | 32 | 53% | C++
21. | XIA21 | SARAFAT ALI | 27 | 45% | C+
22. | XIA22 | SUNNY SINGH | 30 | 50% | C++
23. | XIA23 | TEJASVI SANKAR | 29 | 48% | C+
24. | XIA24 | VIKAS KANT | 41 | 68% | B++
25. | XIA25 | VINOD NEGI | 41 | 68% | B++
26. | XIA26 | VIVEK PANDEY | 34 | 57% |
27. | XIA27 | VIRENDER SINGH | Ab. | -- | --
28. | XIA28 | YODENDER SINGH | 32 | 53% |
29. | XIA29 | ZUNAID AHMED | Ab. | -- | --
THE LEARNERS ARE GRADED IN ACCORDANCE WITH THE FOLLOWING TABLE:

RANGE OF PERCENTAGE OBTAINED | GRADE
95 - 85 |
84 - 80 | A++
79 - 75 | A+
74 - 70 |
69 - 65 | B++
64 - 60 | B+
59 - 55 |
54 - 50 | C++
49 - 45 | C+
44 - 40 |
39 - 35 |
LESS THAN 35 | F

ITEM ANALYSIS: DIFFICULTY AND DISCRIMINATION

Item analysis is a process which involves a careful study of the score pattern on each of the test items. The analysis tells us how well each item is working, that is, the contribution it is making to the overall picture of the candidates' ability emerging from the test. The analysis of the students' responses to the objective-test items is a powerful tool for improvement and for accumulating a bank of high-quality items. It suggests why an item is not effective and how it might be improved.

The analysis of the responses to the individual items of a test is helpful for two broad reasons. First, the teacher can discover if there are certain points that a sizeable number of students have failed to master. Second, the teacher can verify how well certain items have worked in relation to the test as a whole. This information will be useful in the construction of new tests. Item analysis usually provides two kinds of information on the test items:
- ITEM DIFFICULTY, which helps us to decide if the test items are right for the target group.
- ITEM DISCRIMINATION, which helps us to see if the individual items are providing information on the candidates' abilities that is consistent with that provided by the other items of the test.
Item difficulty is determined by observing what percentage of students answer the item correctly. The more difficult the item is, the fewer the students who select the correct answer. The level of difficulty of an item is calculated in the following manner:

For objective items:
Level of difficulty = {(total no. of correct responses of the High group + total no. of correct responses of the Low group) / (total no. of students)} x 100

For subjective items:
Level of difficulty = {(total frequency of marks of the High group + total frequency of marks of the Low group) / (total no. of students x marks per question)} x 100

The analysis of the score is done as follows:

LEVEL OF DIFFICULTY | THE ITEM IS
ABOVE 90% | EASY
BETWEEN 80% - 90% | QUESTIONABLE
BETWEEN 50% - 80% | GOOD
BETWEEN 30% - 50% | QUESTIONABLE
BELOW 30% | DIFFICULT

ITEM DISCRIMINATION tells us how well an item performs in separating the better students from the poorer ones. If the upper third of the students generally gets an item correct and the lower third generally gets it wrong, then it is a good discriminator between the two groups. Very difficult items should discriminate between the very good students and all of the others; relatively easy items should discriminate between the majority of students in the class and the few poor ones. The item discrimination level is calculated in the following manner:

For objective items:
Level of discrimination = (total no. of correct responses in the High group - total no. of correct responses in the Low group) / (0.5 x total no. of students in both groups)

For subjective items:
Level of discrimination = (total frequency of marks in the High group - total frequency of marks in the Low group) / (0.5 x total no. of students in both groups x marks per question)

The analysis of the score is done as follows:

LEVEL OF DISCRIMINATION | THE ITEM IS
0 - 0.2 | VERY POOR
0.2 - 0.4 | POOR
0.4 - 0.6 | AVERAGE
0.6 - 0.8 | GOOD
0.8 - 1.0 | BEST
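
As an illustration of how these two indices can be computed, here is a minimal Python sketch (not part of the original report). The helper names are mine, and the example figures are those of item A.1 from the subjective-item analysis (high group 45, low group 24.5, 18 students in both groups, 7 marks per question).

```python
def difficulty_index(high_total, low_total, n_students, marks_per_question=1):
    """Percentage of the available marks actually scored by the two groups.

    For an objective item, pass marks_per_question=1 and give the number of
    correct responses in each group; for a subjective item give the total
    frequency of marks of each group and the marks per question.
    """
    return (high_total + low_total) / (n_students * marks_per_question) * 100


def discrimination_index(high_total, low_total, n_students, marks_per_question=1):
    """How well the item separates the high group from the low group."""
    return (high_total - low_total) / (0.5 * n_students * marks_per_question)


def classify_difficulty(p):
    if p > 90:
        return "EASY"
    if p >= 80:
        return "QUESTIONABLE"
    if p >= 50:
        return "GOOD"
    if p >= 30:
        return "QUESTIONABLE"
    return "DIFFICULT"


def classify_discrimination(d):
    # Bands taken from the table above.
    for upper, label in [(0.2, "VERY POOR"), (0.4, "POOR"),
                         (0.6, "AVERAGE"), (0.8, "GOOD")]:
        if d < upper:
            return label
    return "BEST"


if __name__ == "__main__":
    # Item A.1: high group scored 45, low group 24.5, 18 students, 7 marks.
    p = difficulty_index(45, 24.5, 18, 7)        # about 55  -> GOOD
    d = discrimination_index(45, 24.5, 18, 7)    # about 0.33 -> POOR
    print(round(p), classify_difficulty(p), round(d, 2), classify_discrimination(d))
```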

KINDS OF TESTING
The test that is created by the language teacher takes into consideration the different approaches to test construction. Some of these approaches are described below:

DIRECT TESTING:
Testing is said to be direct when it requires the candidate to perform precisely the skill that we wish to measure. If we want to know how well candidates can write compositions, we get them to write compositions. The tasks, and the texts that are used, should be as authentic as possible. Direct testing is easier to carry out when it is intended to measure the productive skills of speaking and writing; for the receptive skills, testers have to devise methods of eliciting such evidence accurately and without the method interfering with the performance of the skills in which they are interested. Direct testing has a number of attractions. First, provided that we are clear about just what abilities we want to assess, it is relatively straightforward to create the conditions which will elicit the behaviour on which to base our judgements. Secondly, at least in the case of the productive skills, the assessment and interpretation of students' performance is also quite straightforward. Thirdly, since practice for the test involves practice of the skills that we wish to foster, there is likely to be a helpful backwash effect.
DISCRETE-POINT TESTING:
This refers to the testing of one element at a time, item by item. This might, for example, take the form of a series of items, each testing a particular grammatical structure. It will almost always be indirect. Diagnostic tests of grammar of the kind referred to in an earlier section are a form of discrete-point testing.

INTEGRATIVE TESTING:
As opposed to discrete-point testing, integrative testing requires the candidate/student to combine many language elements in the completion of a task. This might involve writing a composition, making notes while listening to a lecture, taking a dictation, or even completing a cloze passage.

NORM-REFERENCED TESTING:
A test designed to provide information which relates one candidate's performance to that of the other candidates is called a norm-referenced test. We are not told directly what the student is capable of doing in the language. For example, if we have to judge the reading test of an individual student and make a statement on the performance, we may give two kinds of answers: the student obtained a score that puts him or her in the top 10% of the candidates, or in the bottom 5%; or he or she did better than 60% of those who took the test.
CRITERION-REFERENCED TESTING:
The purpose of these tests is to classify people according to whether or not they are able to perform some task or set of tasks satisfactorily. The tasks are set, and those who perform them satisfactorily pass; those who do not, fail. This means that the students are encouraged to measure their progress in relation to a meaningful criterion. These tests have two positive virtues:
- They set meaningful standards in terms of what people can do, which do not change with different groups of candidates.
- They motivate the candidates to achieve those standards.

ANALYSIS OF THE SUBJECTIVE


TYPE ITEMS
There are eleven subjective type items. Section A and Section B, which test the reading and writing capabilities of the learners, are of a subjective nature. Several items in Section D, the literature portion, deal with the subjective understanding of the learners.
A.1. Read the following passage and make a note:
This question tests the learners' reading skill. It also tells how fast the readers can read a section on the early life of Akbar. Since the students have taken up the Arts programme, I thought that something which was part of their course would be of immense help. Marking would be done on the basis of following the format, keeping to the word limit and the use of language while making the note.

HIGH GROUP
Marks | Tally | Frequency
6 | III | 18
5 | IIII | 20
4.5 | | 0
4 | I | 4
3 | I | 3
2 | | 0
1 | | 0
0 | | 0
TOTAL | | 45

LOW GROUP
Marks | Tally | Frequency
6 | | 0
5 | I | 5
4.5 | I | 4.5
4 | I | 4
3 | II | 6
2 | I | 2
1 | I | 1
0 | II | 0
TOTAL | | 24.5

ITEM DIFFICULTY
= {(45 + 24.5) / (18 x 7)} x 100
= 55%
GOOD
ITEM DISCRIMINATION
= (45 - 24.5) / (0.5 x 18 x 7)
= 20.5 / 63
= 0.3
POOR
DETAILED ANALYSIS:
Around 55% of the students have answered this item correctly, which makes the item good on the difficulty index; with regard to the discrimination index, the item is poor, since the value is 0.3.
ANALYSIS:
The item is ACCEPTABLE.

Suggestion:
The learners have tried to keep to the word limit. But there is a definite problem with coherent sentence formation and a lack of vocabulary. There should be more practice of note-making so that they can further improve on their present knowledge and ability.
-------------------------------

B.1. Write a letter to the editor of a national newspaper regarding the dismal state of traffic in front of your school, especially when the school gets over.

A letter to the editor allows the evaluation and testing of the learners' ability of expression. The learners can score good marks if they get the format of the formal letter correct. The teacher would also check their coherence while presenting their argument.

HIGH GROUP
Marks | Tally | Frequency
8 | II | 16
7 | II | 14
6 | I | 6
5 | II | 10
4 | I | 4
3 | I | 3
2 | | 0
0 | | 0
TOTAL | | 53

LOW GROUP
Marks | Tally | Frequency
8 | | 0
7 | | 0
6 | | 0
5 | II | 10
4 | I | 4
3 | II | 6
2 | II | 4
0 | II | 0
TOTAL | | 24

ITEM DIFFICULTY
= {(53 + 24) / (18 x 10)} x 100
= 42%
QUESTIONABLE
ITEM DISCRIMINATION
= (53 - 24) / (0.5 x 18 x 10)
= 0.3
POOR
DETAILED ANALYSIS:
Only 42% of the students have answered this item correctly, which makes the item questionable on the difficulty index; with regard to the discrimination index, the item is poor, since the value is 0.3.
ANALYSIS:
The item is QUESTIONABLE.
Suggestion:
Most of the learners scored well in maintaining the correct format of the letter to the editor. But, as in the note-making, there was some problem in the coherent structuring of sentences while presenting their argument.
---------------------------------

Section D
1.1. Name the poet. Why does he say that he would sing about The Tale of Melon City? (1.5 marks)

HIGH GROUP
Marks | Tally | Frequency
1.5 | | 0
1 | IIIIIII | 7
0 | II | 0
TOTAL | | 7

LOW GROUP
Marks | Tally | Frequency
1.5 | | 0
1 | IIIIII | 6
0 | III | 0
TOTAL | | 6

ITEM DIFFICULTY
= {(7 + 6) / (18 x 1.5)} x 100
= 49%
QUESTIONABLE
ITEM DISCRIMINATION
= (7 - 6) / (0.5 x 18 x 1.5)
= 0.07
VERY POOR
DETAILED ANALYSIS:
Only 49% of the students have answered this item correctly, which makes the item questionable on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.07.
ANALYSIS:
The item is QUESTIONABLE.
--------------------------------

2. b. Do you think that they were fitting titles? (2 marks)

HIGH GROUP
Marks | Tally | Frequency
2 | | 0
1 | IIIII III | 8
0 | I | 0
TOTAL | | 8

LOW GROUP
Marks | Tally | Frequency
2 | | 0
1 | IIIII II | 7
0 | II | 0
TOTAL | | 7

ITEM DIFFICULTY
= {(8 + 7) / (18 x 2)} x 100
= 41%
QUESTIONABLE
ITEM DISCRIMINATION
= (8 - 7) / (0.5 x 18 x 2)
= 0.05
VERY POOR
DETAILED ANALYSIS:
Only 41% of the students have answered this item correctly, which makes the item questionable on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.05.
ANALYSIS:
The item is QUESTIONABLE.
------------------------------

3. b. What happened after that? (3 marks)

HIGH GROUP
Marks | Tally | Frequency
2.5 | I | 2.5
2 | II | 4
1 | III | 3
0.5 | I | 0.5
0 | II | 0
TOTAL | | 10

LOW GROUP
Marks | Tally | Frequency
2.5 | | 0
2 | I | 2
1 | IIII | 4
0.5 | | 0
0 | IIII | 0
TOTAL | | 6

ITEM DIFFICULTY
= {(10 + 6) / (18 x 3)} x 100
= 29%
DIFFICULT
ITEM DISCRIMINATION
= (10 - 6) / (0.5 x 18 x 3)
= 0.1
VERY POOR
DETAILED ANALYSIS:
Only 29% of the students have answered this item correctly, which makes the item difficult on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.1.
ANALYSIS:
The item is UNACCEPTABLE.
--------------------------------

4. b. Give two examples of irony. (2 marks)

HIGH GROUP
Marks | Tally | Frequency
2 | | 0
1 | IIIII II | 7
0 | II | 0
TOTAL | | 7

LOW GROUP
Marks | Tally | Frequency
2 | I | 2
1 | IIIII I | 6
0 | II | 0
TOTAL | | 8

ITEM DIFFICULTY
= {(7 + 8) / (18 x 2)} x 100
= 42%
QUESTIONABLE
ITEM DISCRIMINATION
= (7 - 8) / (0.5 x 18 x 2)
= -0.05
VERY POOR
DETAILED ANALYSIS:
Only 42% of the students have answered this item correctly, which makes the item questionable on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is -0.05.
ANALYSIS:
The item is QUESTIONABLE.
------------------------------

2.4. b. In what context is it being used here? (1 mark)

HIGH GROUP
Marks | Tally | Frequency
1 | IIIII I | 6
0.5 | | 0.5
0 | | 0
TOTAL | | 6.5

LOW GROUP
Marks | Tally | Frequency
1 | IIIII I | 6
0.5 | | 0
0 | | 0
TOTAL | | 6

ITEM DIFFICULTY
= {(6.5 + 6) / (18 x 1)} x 100
= 67%
GOOD
ITEM DISCRIMINATION
= (6.5 - 6) / (0.5 x 18 x 1)
= 0.05
VERY POOR
DETAILED ANALYSIS:
Around 67% of the students have answered this item correctly, which makes the item good on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.05.
ANALYSIS:
The item is ACCEPTABLE.
-------------------------

D.2.5. Give two reasons why you like or dislike this play. (2 marks)

HIGH GROUP
Marks | Tally | Frequency
2 | II | 4
1.5 | I | 1.5
1 | III | 3
0 | III | 0
TOTAL | | 8.5

LOW GROUP
Marks | Tally | Frequency
2 | | 0
1.5 | | 0
1 | IIIII I | 6
0 | III | 0
TOTAL | | 6

ITEM DIFFICULTY
= {(8.5 + 6) / (18 x 2)} x 100
= 40%
QUESTIONABLE
ITEM DISCRIMINATION
= (8.5 - 6) / (0.5 x 18 x 2)
= 0.27
POOR
DETAILED ANALYSIS:
Only 40% of the students have answered this item correctly, which makes the item QUESTIONABLE on the difficulty index; with regard to the discrimination index, the item is POOR, since the value is 0.27.
ANALYSIS:
The item is QUESTIONABLE.
--------------------------

3.C.2. What was the plan of action decided by the professor? (1 mark)

HIGH GROUP
Marks | Tally | Frequency
1 | IIIII III | 8
0 | I | 0
TOTAL | | 8

LOW GROUP
Marks | Tally | Frequency
1 | IIIII III | 8
0 | I | 0
TOTAL | | 8

ITEM DIFFICULTY
= {(8 + 8) / (18 x 1)} x 100
= 88%
QUESTIONABLE
ITEM DISCRIMINATION
= (8 - 8) / (0.5 x 18 x 1)
= 0.0
VERY POOR
DETAILED ANALYSIS:
Around 88% of the students have answered this item correctly, which makes the item questionable on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.
ANALYSIS:
The item is QUESTIONABLE.
-----------------------

D.3.G. What did he do? How did the audience react?

HIGH GROUP
Marks | Tally | Frequency
2 | I | 2
1.5 | I | 1.5
1 | III | 3
0.5 | II | 1
0 | II | 0
TOTAL | | 7.5

LOW GROUP
Marks | Tally | Frequency
2 | I | 2
1.5 | | 0
1 | IIIII | 5
0.5 | | 0
0 | III | 0
TOTAL | | 7

ITEM DIFFICULTY
= {(7.5 + 7) / (18 x 2)} x 100
= 40%
QUESTIONABLE
ITEM DISCRIMINATION
= (7.5 - 7) / (0.5 x 18 x 2)
= 0.05
VERY POOR
DETAILED ANALYSIS:
Only 40% of the students have answered this item correctly, which makes the item questionable on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.05.
ANALYSIS:
The item is QUESTIONABLE.
------------------------

D.3.K. Given the fact that the accounts of history written by Bakhar can be highly disputed, how did Rajendra try to explain the fantastic happening with the help of science? (4 marks)

HIGH GROUP
Marks | Tally | Frequency
3 | I | 3
2 | IIII | 8
1 | I | 1
0 | III | 0
TOTAL | | 12

LOW GROUP
Marks | Tally | Frequency
3 | | 0
2 | I | 2
1 | IIIII | 5
0 | III | 0
TOTAL | | 7

ITEM DIFFICULTY
= {(12 + 7) / (18 x 4)} x 100
= 27%
DIFFICULT
ITEM DISCRIMINATION
= (12 - 7) / (0.5 x 18 x 4)
= 0.1
VERY POOR
DETAILED ANALYSIS:
Only 27% of the students have answered this item correctly, which makes the item difficult on the difficulty index; with regard to the discrimination index, the item is very poor, since the value is 0.1.
ANALYSIS:
The item is UNACCEPTABLE.
------------------------------

REVIEW OF THE ACHIEVEMENT TEST AT A GLANCE:

SECTION | ITEM | PART | LEVEL OF DIFFICULTY (% / ANALYSIS) | LEVEL OF DISCRIMINATION (VALUE / ANALYSIS) | FINAL ANALYSIS
A | 1. | | 55% GOOD | 0.3 POOR | ACCEPTABLE
B | 1. | | 42% QUESTIONABLE | 0.3 POOR | QUESTIONABLE
C | 1. | a. | 78% GOOD | 0.02 VERY POOR | ACCEPTABLE
C | 1. | b. | 61% GOOD | 0.3 POOR | ACCEPTABLE
C | 1. | c. | 44% QUESTIONABLE | 0.2 POOR | QUESTIONABLE
C | 1. | d. | 78% GOOD | 0.2 POOR | ACCEPTABLE
C | 2. | a. | 72% GOOD | 0.1 VERY POOR | ACCEPTABLE
C | 2. | b. | 50% GOOD | 0.3 POOR | ACCEPTABLE
C | 2. | c. | 56% GOOD | 0.2 POOR | ACCEPTABLE
C | 2. | d. | 56% GOOD | 0.0 VERY POOR | ACCEPTABLE
C | 3. | a. | 72% GOOD | 0.1 VERY POOR | ACCEPTABLE
C | 3. | b. | 66% GOOD | 0.0 VERY POOR | ACCEPTABLE
C | 3. | c. | 50% GOOD | 0.3 POOR | ACCEPTABLE
C | 3. | d. | 61% GOOD | -0.1 VERY POOR | ACCEPTABLE
D | 1.1 | | 49% QUESTIONABLE | 0.07 VERY POOR | QUESTIONABLE
D | 1.2 | a | 61% GOOD | 0.01 VERY POOR | ACCEPTABLE
D | 1.2 | b | 41% QUESTIONABLE | 0.05 VERY POOR | QUESTIONABLE
D | 1.3 | a | 50% GOOD | 0.3 POOR | ACCEPTABLE
D | 1.3 | b | 29% DIFFICULT | 0.1 VERY POOR | UNACCEPTABLE
D | 1.4 | a | 78% GOOD | 0.2 POOR | ACCEPTABLE
D | 1.4 | b | 42% QUESTIONABLE | -0.05 VERY POOR | QUESTIONABLE
D | 2.1 | a | 72% GOOD | 0.3 POOR | ACCEPTABLE
D | 2.1 | b | 94% EASY | -0.1 VERY POOR | UNACCEPTABLE
D | 2.2 | a | 83% QUESTIONABLE | 0.3 POOR | QUESTIONABLE
D | 2.2 | b | 83% QUESTIONABLE | -0.1 VERY POOR | QUESTIONABLE
D | 2.3 | a | 82% QUESTIONABLE | 0.3 POOR | QUESTIONABLE
D | 2.3 | b | 72% GOOD | 0.1 VERY POOR | ACCEPTABLE
D | 2.4 | a | 77% GOOD | 0.0 VERY POOR | ACCEPTABLE
D | 2.4 | b | 67% GOOD | 0.05 VERY POOR | ACCEPTABLE
D | 2.5 | | 40% QUESTIONABLE | 0.27 POOR | QUESTIONABLE
D | 3.A | | 72% GOOD | 0.5 AVERAGE | ACCEPTABLE
D | 3.B | | 88% QUESTIONABLE | 0.2 POOR | QUESTIONABLE
D | 3.C | 1 | 77% GOOD | 0.0 VERY POOR | ACCEPTABLE
D | 3.C | 2 | 88% QUESTIONABLE | 0.0 VERY POOR | QUESTIONABLE
D | 3.D | | 41% QUESTIONABLE | 0.05 VERY POOR | QUESTIONABLE
D | 3.E | | 77% GOOD | 0.0 VERY POOR | ACCEPTABLE
D | 3.F | | 83% QUESTIONABLE | 0.3 POOR | QUESTIONABLE
D | 3.G | | 40% QUESTIONABLE | 0.05 VERY POOR | QUESTIONABLE
D | 3.H | | 83% QUESTIONABLE | 0.3 POOR | QUESTIONABLE
D | 3.I | | 66% GOOD | 0.4 AVERAGE | ACCEPTABLE
D | 3.J | | 41% QUESTIONABLE | 0.05 VERY POOR | QUESTIONABLE
D | 3.K | | 27% DIFFICULT | 0.1 VERY POOR | UNACCEPTABLE

TOTAL NUMBER OF ITEMS: 42
NUMBER OF ACCEPTABLE ITEMS: 23
NUMBER OF QUESTIONABLE ITEMS: 16
NUMBER OF UNACCEPTABLE ITEMS: 03

WHAT IS STATISTICS?
Statistics is the body of mathematical techniques or processes for
gathering, organizing, analyzing, and interpreting numerical data. Because
most research yields quantitative data, statistics is a basic tool of
measurement, evaluation and research.

The word statistics is sometimes used to describe the numerical data that are gathered. Statistical data describe group behavior or group characteristics abstracted from a number of individual observations that are combined to make generalizations possible. When we speak of the age, size or any other characteristic of an average 5th grade learner, we are making a generalized statement about all 5th grade learners, not about any particular learner. Thus, the statistical measurement is an abstraction that may be used in place of a great mass of individual measures.
The research worker who uses statistics is concerned with more than the manipulation of data. Statistical data collection serves the fundamental purposes of description and analysis, and its proper application involves answering the following questions:
- What facts need to be gathered to provide the information necessary to test the hypotheses?
- How are these data to be selected, gathered, organized, and analyzed?
- What assumptions underlie the statistical methodology to be employed?
- What conclusions can be validly drawn from the analysis of the data?
Research consists of the systematic observation and description of the characteristics or properties of objects for the purpose of discovering relationships between variables. The ultimate purpose is to develop generalizations that may be used to explain phenomena and to predict future occurrences. To conduct research, we must establish principles so that the observations and descriptions have a commonly understood meaning. Measurement is the most precise and universally accepted process of description, assigning quantitative values to the properties of objects and events.

The science of statistics has gained enormous importance and popularity because of the various functions performed by it. Some of the functions of statistics are as follows:
- Provides precise and definite numerical outcomes of the data
- Simplifies large volumes of complex data into an understandable form
- Helps in making proper comparisons
- Helps in framing and testing hypotheses
- Enlarges individual knowledge and experience
- Helps in the formulation of policies
- Helps in business forecasting

A proper statistical enquiry is conducted in the following stages:
I. Collection of data
II. Organization and presentation of numerical data
III. Analysis of numerical data
IV. Interpretation of numerical data

HISTOGRAM
The histogram is one of the most frequently used graphs to convey statistical data. In this graph, the frequencies are represented by bars or columns placed one next to the other. Each column represents the test scores in one of the class intervals of the frequency distribution.

Unlike a bar graph, a histogram is a form of representation which is used for continuous class intervals of a data set, and the y-axis shows a count of the number of cases (frequency) falling in each category. Also, since there are no gaps between consecutive bars or rectangles, the resultant graph appears like a solid figure. This figure forms a histogram, which is a graphical representation of a grouped frequency distribution with continuous classes. Also, unlike in a bar graph, the width of the bar plays a significant role in its construction.
Here, in fact, the areas of the rectangles erected are proportional to the corresponding frequencies. However, since the widths of the rectangles are all equal, the heights of the rectangles are proportional to the frequencies.
Histogram data

X (Marks) | Y (No. of students)
20-25 | 1
25-30 | 8
30-35 | 8
35-40 | 5
40-45 | 4
Total | 26

[Histogram of the class marks: Marks (class intervals 20-45) on the x-axis, No. of students on the y-axis]

FREQUENCY POLYGON

Frequency polygons are a graphical device for understanding the shapes of distributions. They serve the same purpose as histograms, but are especially helpful in comparing sets of data. Frequency polygons are also a good choice for displaying cumulative frequency distributions.
To create a frequency polygon, start just as for a histogram, by choosing a class interval. Then draw an x-axis representing the values of the scores in your data. Mark the middle of each class interval with a tick mark, and label it with the middle value represented by the class. Draw the y-axis to indicate the frequency of each class. Place a point in the middle of each class interval at the height corresponding to its frequency. Finally, connect the points. You should include one class interval below the lowest value in your data and one above the highest value. The graph will then touch the x-axis on both sides.
The frequency polygon is useful for comparing distributions. This is achieved by overlaying the frequency polygons drawn for different data sets.
X (Marks) | Mid point | Y (No. of students)
20-25 | 22.5 | 1
25-30 | 27.5 | 8
30-35 | 32.5 | 8
35-40 | 37.5 | 5
40-45 | 42.5 | 4
Total | | 26

[Frequency polygon of the class marks: Marks on the x-axis, No. of students on the y-axis]
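
As an illustration, here is a minimal Python sketch (not part of the original report, and assuming the matplotlib library is available) that redraws the histogram and the frequency polygon described above from the grouped frequency distribution of the class marks:

```python
# Redraw the histogram and frequency polygon of the class marks.
import matplotlib.pyplot as plt

class_limits = [20, 25, 30, 35, 40, 45]          # class boundaries 20-25 ... 40-45
frequencies  = [1, 8, 8, 5, 4]                   # no. of students in each class
midpoints    = [(a + b) / 2 for a, b in zip(class_limits[:-1], class_limits[1:])]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: adjacent rectangles, one per class interval, heights = frequencies.
ax1.bar(class_limits[:-1], frequencies, width=5, align="edge", edgecolor="black")
ax1.set_title("Histogram")
ax1.set_xlabel("Marks")
ax1.set_ylabel("No. of students")

# Frequency polygon: plot the midpoints against the frequencies and add one
# empty class on either side so the polygon touches the x-axis.
poly_x = [midpoints[0] - 5] + midpoints + [midpoints[-1] + 5]
poly_y = [0] + frequencies + [0]
ax2.plot(poly_x, poly_y, marker="o")
ax2.set_title("Frequency polygon")
ax2.set_xlabel("Marks")
ax2.set_ylabel("No. of students")

plt.tight_layout()
plt.show()
```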

MEASURES OF CENTRAL
TENDENCY

ARITHMETIC MEAN

CLASS INTERVAL (marks secured) | FREQUENCY f (no. of students) | MID-VALUE x | fx
20 - 25 | 1 | 22.5 | 22.5
25 - 30 | 8 | 27.5 | 220
30 - 35 | 8 | 32.5 | 260
35 - 40 | 5 | 37.5 | 187.5
40 - 45 | 4 | 42.5 | 170
TOTAL | Σf = 26 | | Σfx = 860

MEAN (x̄) = Σfx / Σf
= 860 / 26
= 33.07
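
A minimal Python sketch (not part of the original report) of the same grouped-data mean calculation, with the class intervals and frequencies taken from the table above:

```python
# Grouped-data arithmetic mean: sum(f * midpoint) / sum(f).
classes     = [(20, 25), (25, 30), (30, 35), (35, 40), (40, 45)]
frequencies = [1, 8, 8, 5, 4]

midpoints = [(lo + hi) / 2 for lo, hi in classes]
fx        = [f * x for f, x in zip(frequencies, midpoints)]

mean = sum(fx) / sum(frequencies)   # 860 / 26 = 33.08 (rounded to 33.07 in the report)
print(round(mean, 2))
```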

MEDIAN

CLASS INTERVAL | MID-VALUE x | TALLY | FREQUENCY f | CUMULATIVE FREQUENCY
20 - 25 | 22.5 | I | 1 | 1
25 - 30 | 27.5 | IIIII III | 8 | 9
30 - 35 | 32.5 | IIIII III | 8 | 17 (MEDIAN CLASS)
35 - 40 | 37.5 | IIIII | 5 | 22
40 - 45 | 42.5 | IIII | 4 | 26

The median class contains the (Σf / 2)th value:
Σf / 2 = 26 / 2 = 13, which falls in the class 30 - 35.

MEDIAN = 30 + {(26/2 - 9) / 8} x 5
= 30 + {(13 - 9) / 8} x 5
= 30 + (4 / 8) x 5
= 30 + 2.5
= 32.5
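
The median calculation above can be sketched in Python as follows (again, not part of the original report); the interpolation formula median = L + ((Σf/2 - c.f.) / f) x h is applied to the median class:

```python
# Grouped-data median by interpolation within the median class.
classes     = [(20, 25), (25, 30), (30, 35), (35, 40), (40, 45)]
frequencies = [1, 8, 8, 5, 4]

n = sum(frequencies)                  # 26
half = n / 2                          # 13
cumulative = 0
for (lower, upper), f in zip(classes, frequencies):
    if cumulative + f >= half:        # this is the median class (30 - 35)
        h = upper - lower             # class width, 5
        median = lower + (half - cumulative) / f * h
        break
    cumulative += f

print(median)                         # 30 + (13 - 9) / 8 * 5 = 32.5
```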

MODE

MODE ≈ 3 MEDIAN - 2 MEAN,

where
the MEDIAN value is 32.5
and
the ARITHMETIC MEAN is 33.07.

MODE = 3(32.5) - 2(33.07)
= 97.5 - 66.14
= 31.36
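
A short Python check (not part of the original report) of the empirical relationship used above:

```python
# Empirical mode for a moderately skewed distribution: mode = 3*median - 2*mean.
mean, median = 33.07, 32.5
mode = 3 * median - 2 * mean
print(round(mode, 2))                 # 97.5 - 66.14 = 31.36
```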

STANDARD DEVIATION

CLASS | FREQUENCY f | MID-VALUE x | fx | X = (x - x̄) | X² | fX²
20 - 25 | 1 | 22.5 | 22.5 | -10.57 | 111.72 | 111.72
25 - 30 | 8 | 27.5 | 220 | -5.57 | 31.02 | 248.16
30 - 35 | 8 | 32.5 | 260 | -0.57 | 0.32 | 2.56
35 - 40 | 5 | 37.5 | 187.5 | 4.43 | 19.62 | 98.1
40 - 45 | 4 | 42.5 | 170 | 9.43 | 88.92 | 355.68

MEAN = 33.07, ΣfX² = 816.22

STANDARD DEVIATION (σ) = √(ΣfX² / Σf)
= √(816.22 / 26)
= √31.39
= 5.60
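
A minimal Python sketch (not part of the original report) of the grouped-data standard deviation, recomputing the mean and the squared deviations from the same frequency distribution:

```python
# Grouped-data standard deviation: sigma = sqrt(sum(f * (x - mean)^2) / sum(f)).
from math import sqrt

classes     = [(20, 25), (25, 30), (30, 35), (35, 40), (40, 45)]
frequencies = [1, 8, 8, 5, 4]
midpoints   = [(lo + hi) / 2 for lo, hi in classes]

n    = sum(frequencies)
mean = sum(f * x for f, x in zip(frequencies, midpoints)) / n
var  = sum(f * (x - mean) ** 2 for f, x in zip(frequencies, midpoints)) / n
sigma = sqrt(var)

print(round(sigma, 2))                # approximately 5.6, as in the report
```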

OVERALL ANALYSIS
WRITING SECTION:
The learners showed appreciable ability here; they realized that they can score good marks if they keep to the correct format. However, they have a long way to go in sentence construction. They also have to do a lot of reading to improve their vocabulary.
GRAMMAR SECTION:
The learners scored high marks in this section. Their improvement in the usage of verbs in the grammar section is appreciable. However, it is quite baffling that they do not make use of that same ability in the construction of their answers.
LITERATURE SECTION:
The learners scored high marks in this section also. They scored heavily in the objective section, which shows that they were quite thorough with the text. But in the subjective section they fared quite miserably. They did not have the ability to answer questions which were based on their understanding.

BIBLIOGRAPHY

1. SHAW, STUART D. & WEIR, CYRIL J. (2007). EXAMINING WRITING. NEW DELHI: CUP.
2. VALETTE, REBECCA (1967). MODERN LANGUAGE TESTING. NEW YORK: HARCOURT.
3. GARRETT, HENRY E. (1999). STATISTICS IN PSYCHOLOGY AND EDUCATION. DELHI: PARAGON INTERNATIONAL.
4. REA-DICKENS, PAULINE & GERMAINE, KEVIN (2000). EVALUATION. OXFORD: OUP.
5. HUGHES, ARTHUR (2007). TESTING FOR LANGUAGE TEACHERS. CAMBRIDGE: CUP.
6. WOOLFOLK, ANITA (2005). EDUCATIONAL PSYCHOLOGY. DELHI: PEARSON EDUCATION.
