
Evaluation

A common definition is: “The process of determining to what extent the educational objectives are
actually being realized” (Tyler, 1950, p. 69)

Another accepted definition is: “Evaluation is the process of determining merit, worth, or significance; an
evaluation is a product of that process” (Scriven, 1991, p. 53)

Concept of Evaluation:

In every walk of life the process of evaluation takes place in one form or another. If the evaluation
process were eliminated from human life, then perhaps the aim of life might be lost. It is only through
evaluation that one can discriminate between good and bad. The whole cycle of social development
revolves around the evaluation process.

In education, how much a child has succeeded in his aims can only be determined through evaluation.
Thus there is a close relationship between evaluation and aims.

Education is considered as an investment in human beings in terms of development of human resources,
skills, motivation, knowledge and the like. Evaluation helps to build an educational programme, assess
its achievements and improve upon its effectiveness.

It serves as an in-built monitor within the programme to review the progress in learning from time to
time. It also provides valuable feedback on the design and the implementation of the programme. Thus,
evaluation plays a significant role in any educational programme.

Evaluation plays an enormous role in the teaching-learning process. It helps teachers and learners to
improve teaching and learning. Evaluation is a continuous process and a periodic exercise.

It helps in forming judgements about the values, educational status, or achievement of the student. Evaluation in
one form or the other is inevitable in teaching-learning, as in all fields of educational activity
judgements need to be made.

In learning, it contributes to formulation of objectives, designing of learning experiences and assessment
of learner performance. Besides this, it is very useful in bringing about improvement in teaching and the
curriculum. It provides accountability to the society, parents, and to the education system.

Uses of Evaluation:


(i) Teaching:
Evaluation is concerned with assessing the effectiveness of teaching, teaching strategies, methods and
techniques. It provides feedback to the teachers about their teaching and the learners about their
learning.

(ii) Curriculum:

The improvement in courses/curricula, texts and teaching materials is brought about with the help of
evaluation.

(iii) Society:

Evaluation provides accountability to society in terms of the demands and requirements of the
employment market.

(iv) Parents:

Evaluation mainly manifests itself in a perceived need for regular reporting to parents.

In brief, evaluation is a very important requirement for the education system. It fulfills various purposes
in systems of education like quality control in education, selection/entrance to a higher grade or tertiary
level.

It also helps one to take decisions about success in specific future activities and provides guidance for
further studies and occupations. Some educationists view evaluation as virtually synonymous with
learner appraisal, but evaluation has an expanded role.

It plays an effective role in questioning or challenging the objectives.

A simple representation explaining the role of evaluation in the teaching-learning process

Evaluation has four different aspects, namely:

(i) Objectives,

(ii) Learning experiences,

(iii) Learner appraisal and

(iv) Relationship between the three.


Characteristics of Evaluation:

1. Evaluation implies a systematic process which omits the casual, uncontrolled observation of pupils.

2. Evaluation is a continuous process. In an ideal situation, the teaching-learning
process on the one hand and the evaluation procedure on the other hand go
together. It is certainly a wrong belief that the evaluation procedure follows the
teaching-learning process.

3. Evaluation emphasises the broad personality changes and major objectives of
an educational programme. Therefore, it includes not only subject-matter
achievements but also attitudes, interests and ideals, ways of thinking, work
habits and personal and social adaptability.

4. Evaluation always assumes that educational objectives have previously been
identified and defined. This is the reason why teachers are expected not to lose
sight of educational objectives while planning and carrying out the teaching-learning
process either in the classroom or outside it.

5. A comprehensive programme of evaluation involves the use of many
procedures (for example, analytico-synthetic, heuristic, experimental, lecture,
etc.); a great variety of tests (for example, essay type, objective type, etc.); and
other necessary techniques (for example, socio-metric, controlled-observation
techniques, etc.).

6. Learning is more important than teaching. Teaching has no value if it does not
result in learning on the part of the pupils.

7. Objectives and accordingly learning experiences should be so relevant that
ultimately they should direct the pupils towards the accomplishment of
educational goals.

8. Evaluation assesses the students and their complete development brought about
through education.

9. Evaluation is the determination of the congruence between the performance
and objectives.

The following are the steps involved in the process of evaluation:

(i) Identifying and Defining General Objectives:

In the evaluation process the first step is to determine what to evaluate, i.e., to set
down educational objectives. What kind of abilities and skills should be developed
when a pupil studies, say, Mathematics, for one year? What type of
understanding should be developed in the pupil who learns his mother tongue?
Unless the teacher identifies and states the objectives, these questions will
remain unanswered.

The process of identifying and defining educational objectives is a complex one;
there is no simple or single procedure which suits all teachers. Some prefer to
begin with the course content, some with general aims, and some with lists of
objectives suggested by curriculum experts in the area.

While stating the objectives, therefore, we can successfully focus our attention on
the product i.e., the pupil’s behaviour, at the end of a course of study and state it
in terms of his knowledge, understanding, skill, application, attitudes, interests,
appreciation, etc.
(ii) Identifying and Defining Specific Objectives:

It has been said that learning is the modification of behaviour in a desirable
direction. The teacher is more concerned with a student’s learning than with
anything else. Changes in behaviour are an indication of learning. These changes,
arising out of classroom instruction, are known as the learning outcome.

What type of learning outcome is expected from a student after he has
undergone the teaching-learning process is the first and foremost concern of the
teacher. This is possible only when the teacher identifies and defines the
objectives in terms of behavioural changes, i.e., learning outcomes.

These specific objectives will provide direction to the teaching-learning process. Not
only that, they will also be useful in planning and organising the learning activities,
and in planning and organising evaluation procedures too.

Thus, specific objectives determine two things: one, the various types of learning
situations to be provided by the class teacher to his pupils and second, the
method to be employed to evaluate both—the objectives and the learning
experiences.

(iii) Selecting Teaching Points:

The next step in the process of evaluation is to select teaching points through
which the objectives can be realised. Once the objectives are set up, the next step
is to decide the content (curriculum, syllabus, course) to help in the realisation of
objectives.

For the teacher, the objectives and courses of school subjects are ready at hand.
His job is to analyse the content of the subject matter into teaching points and to
find out what specific objectives can be adequately realised through the
introduction of those teaching points.

(iv) Planning Suitable Learning Activities:

In the fourth step, the teacher will have to plan the learning activities to be
provided to the pupils and, at the same time, bear two things in mind—the
objectives as well as teaching points. The process then becomes three
dimensional, the three co-ordinates being objectives, teaching points and learning
activities. The teacher gets the objectives and content readymade.

He is completely free to select the type of learning activities. He may employ the
analytico-synthetic method; he may utilise the inducto-deductive reasoning; he
may employ the experimental method or a demonstration method; or he may put
a pupil in the position of a discoverer; he may employ the lecture method; or he
may ask the pupils to divide into groups and to do a sort of group work followed
by a general discussion; and so on. One thing he has to remember is that he
should select only such activities as will make it possible for him to realise his
objectives.

(v) Evaluating:

In the fifth step, the teacher observes and measures the changes in the behaviour
of his pupils through testing. This step adds one more dimension to the evaluation
process. While testing, he will keep in mind three things: objectives, teaching
points and learning activities; but his focus will be on the attainment of
objectives. This he cannot do without enlisting the teaching points and planning
learning activities of his pupils.

Here the teacher will construct a test by making the maximum use of the teaching
points already introduced in the class and the learning experiences already
acquired by his pupils. He may plan for an oral test or a written test; he may
administer an essay type test or an objective type of test; or he may arrange a
practical test.

Purposes and Functions of Evaluation:

Evaluation plays a vital role in teaching-learning experiences. It is an integral part
of the instructional programmes. It provides information on the basis of which
many educational decisions are taken. We are to stick to the basic function of
evaluation, which is required to be practiced for the pupil and his learning processes.
Evaluation has the following functions:

1. Placement Functions:

a. Evaluation helps to study the entry behaviour of the children in all respects.

b. It helps to undertake special instructional programmes.

c. To provide for individualisation of instruction.

d. It also helps to select pupils for higher studies, for different vocations and
specialised courses.

2. Instructional Functions:

a. A planned evaluation helps a teacher in deciding and developing the ways,
methods and techniques of teaching.

b. It helps to formulate and reformulate suitable and realistic objectives of
instruction.

c. It helps to improve instruction and to plan appropriate and adequate
techniques of instruction.

d. It also helps in the improvement of the curriculum.

e. To assess different educational practices.

f. It ascertains how far the learning objectives could be achieved.

g. To improve instructional procedures and quality of teachers.

h. To plan appropriate and adequate learning strategies.

3. Diagnostic Functions:

a. Evaluation has to diagnose the weak points in the school programme as well as the
weaknesses of the students.

b. To suggest relevant remedial programmes.

c. The aptitude, interest and intelligence of each individual child are also to be recognised
so that he may be energised in the right direction.

d. To adapt instruction to the different needs of the pupils.

e. To evaluate the progress of these weak students in terms of their capacity,
ability and goal.

4. Predictive functions:

a. To discover potential abilities and aptitudes among the learners.

b. Thus to predict the future success of the children.

c. And also helps the child in selecting the right electives.

5. Administrative Functions:

a. To adopt better educational policy and decision making.

b. Helps to classify pupils in different convenient groups.

c. To promote students to the next higher class.

d. To appraise the supervisory practices.

e. To have appropriate placement.

f. To draw comparative statement on the performance of different children.

g. To have sound planning.

h. Helps to test the efficiency of teachers in providing suitable learning
experiences.

i. To mobilise public opinion and to improve public relations.

j. Helps in developing comprehensive criterion tests.


6. Guidance Functions:

a. Assists a person in making decisions about courses and careers.

b. Enables a learner to know his pace of learning and lapses in his learning.

c. Helps a teacher to know the children in detail and to provide necessary
educational, vocational and personal guidance.

7. Motivation Functions:

a. To motivate, to direct, to inspire and to involve the students in learning.

b. To reward their learning and thus to motivate them towards study.

8. Development Functions:

a. Gives reinforcement and feedback to the teacher, students and the
teaching-learning processes.

b. Assists in the modification and improvement of the teaching strategies and
learning experiences.

c. Helps in the achievement of educational objectives and goals.

9. Research Functions:

a. Helps to provide data for research generalisation.

b. Evaluation clears the doubts for further studies and researches.

c. Helps to promote action research in education.

10. Communication Functions:

a. To communicate the results of progress to the students.

b. To intimate the results of progress to parents.

c. To circulate the results of progress to other schools.


Types of Evaluation:

Evaluation can be classified into different categories in many ways.

Some important classifications are as follows:

1. Placement Evaluation:

Placement evaluation is designed to place the right person in the right place. It
ascertains the entry performance of the pupil. The future success of the
instructional process depends on the success of placement evaluation.

Placement evaluation aims at evaluating the pupil’s entry behaviour in a sequence
of instruction. In other words, the main goal of such evaluation is to determine the
level or position of the child in the instructional sequence.

We have a planned scheme of instruction for the classroom which is supposed to
bring a change in pupils’ behaviour in an orderly manner. Then we prepare or
place the students for planned instruction for their better prospects.

When a pupil is to undertake new instruction, it is essential to know the answers
to the following questions:

a. Does the pupil possess the required knowledge and skills for the instruction?

b. Has the pupil already mastered some of the instructional objectives or not?

c. Is the mode of instruction suitable to the pupil’s interests, work habits and
personal characteristics?
We get the answers to all these probable questions by using a variety of tests, self-report
inventories, observational techniques, case studies, attitude tests and
achievement tests.

Sometimes past experiences, which inspire present learning, also lead to
further placement in a better position or admission. This type of evaluation is
helpful for admission of pupils into a new course of instruction.

Examples:

i. Aptitude test

ii. Self-reporting inventories

iii. Observational techniques

2. Formative Evaluation:

Formative evaluation is used to monitor the learning progress of students during
the period of instruction. Its main objective is to provide continuous feedback to
both teacher and student concerning learning successes and failures while
instruction is in process.

Feedback to students provides reinforcement of successful learning and identifies
the specific learning errors that need correction. Feedback to the teacher provides
information for modifying instruction and for prescribing group and individual
remedial work.

Formative evaluation helps a teacher to ascertain the pupil-progress from time to
time. At the end of a topic or unit or segment or a chapter, the teacher
can evaluate the learning outcomes, based on which he can modify his methods,
techniques and devices of teaching to provide better learning experiences.

The teacher can even modify the instructional objectives, if necessary. In other
words, formative evaluation provides feedback to the teacher. The teacher can
know which aspects of the learning task were mastered and which aspects were
poorly or not at all mastered by pupils. Formative evaluation helps the teacher to
assess the relevance and appropriateness of the learning experiences provided
and to assess instantly how far the goals are being fulfilled.

Thus, it aims at improvement of instruction. Formative evaluation also provides
feedback to pupils. The pupil knows his learning progress from time to time. Thus,
formative evaluation motivates the pupils for better learning. As such, it helps the
teacher to take appropriate remedial measures. “The idea of generating
information to be used for revising or improving educational practices is the core
concept of formative evaluation.”

It is concerned with the process of development of learning. In this sense,
evaluation is concerned not only with the appraisal of the achievement but also
with its improvement. Education is a continuous process.

Therefore, evaluation and development must go hand in hand. The evaluation has
to take place in every possible situation or activity and throughout the period of
formal education of a pupil.

Cronbach was the first educationist who gave the best argument for formative
evaluation. According to him, the greatest service evaluation can perform is to
identify aspects of the course where revision is desirable. Thus, this type of
evaluation is an essential tool to provide feedback to the learners for
improvement of their self-learning and to the teachers for improvement of their
methodologies of teaching, nature of instructional materials, etc.

It is a positive evaluation because of its attempt to create desirable learning goals
and tools for achieving such goals. Formative evaluation is generally concerned
with the internal agent of evaluation, like participation of the learner in the
learning process.

The functions of formative evaluation are:

(a) Diagnosing:

Diagnosing is concerned with determining the most appropriate method or
instructional materials conducive to learning.
(b) Placement:

Placement is concerned with finding out the position of an individual in the
curriculum from which he has to start learning.

(c) Monitoring:

Monitoring is concerned with keeping track of the day-to-day progress of the
learners and with pointing out changes necessary in the methods of teaching,
instructional strategies, etc.

Characteristics of Formative Evaluation:

The characteristics of formative evaluation are as follows:

a. It is an integral part of the learning process.

b. It occurs frequently during the course of instruction.

c. Its results are made immediately known to the learners.

d. It may sometimes take the form of teacher observation only.

e. It reinforces learning of the students.

f. It pinpoints difficulties being faced by a weak learner.

g. Its results cannot be used for grading or placement purposes.

h. It helps in modification of instructional strategies, including the method of teaching,
immediately.

i. It motivates learners, as it provides them with knowledge of progress made by
them.

j. It sees the role of evaluation as a process.

k. It is generally a teacher-made test.

l. It does not take much time to be constructed.


Examples:

i. Monthly tests.

ii. Class tests.

iii. Periodical assessment.

3. Diagnostic Evaluation:

It is concerned with identifying the learning difficulties or weaknesses of pupils
during instruction. It tries to locate or discover the specific area of weakness of a
pupil in a given course of instruction and also tries to provide remedial measures.

N.E. Gronlund says “…… formative evaluation provides first-aid treatment for
simple learning problems whereas diagnostic evaluation searches for the
underlying causes of those problems that do not respond to first-aid treatment.”

When the teacher finds that in spite of the use of various alternative methods,
techniques and corrective prescriptions the child still faces learning difficulties, he
takes recourse to a detailed diagnosis through specifically designed tests called
‘diagnostic tests’.

Diagnosis can be made by employing observational techniques, too. In case of
necessity, the services of psychological and medical specialists can be utilised for
diagnosing serious learning handicaps.

4. Summative Evaluation:

Summative evaluation is done at the end of a course of instruction to know to
what extent the objectives previously fixed have been accomplished. In other
words, it is the evaluation of pupils’ achievement at the end of a course.

The main objective of the summative evaluation is to assign grades to the pupils.
It indicates the degree to which the students have mastered the course content. It
helps to judge the appropriateness of instructional objectives. Summative
evaluation is generally the work of standardised tests.
It tries to compare one course with another. The approaches of summative
evaluation imply some sort of final comparison of one item or criterion against
another. It has the danger of producing negative effects.

This evaluation may brand a student as a failed candidate, and thus cause
frustration and setback in the learning process of the candidate, which is an
example of the negative effect.

The traditional examinations are generally summative evaluation tools. Tests for
formative evaluation are given at regular and frequent intervals during a course;
whereas tests for summative evaluation are given at the end of a course or at the
end of a fairly long period (say, a semester).

The functions of this type of evaluation are:

(a) Crediting:

Crediting is concerned with collecting evidence that a learner has achieved some
instructional goals in contents in respect to a defined curricular programme.

(b) Certifying:

Certifying is concerned with giving evidence that the learner is able to perform a
job according to the previously determined standards.

(c) Promoting:

It is concerned with promoting pupils to next higher class.

(d) Selecting:

Selecting the pupils for different courses after completion of a particular course
structure.

Characteristics of Summative Evaluation:

a. It is terminal in nature as it comes at the end of a course of instruction (or a
programme).
b. It is judgemental in character in the sense that it judges the achievement of
pupils.

c. It views evaluation “as a product”, because its chief concern is to point out the
levels of attainment.

d. It cannot be based on teachers’ observations only.

e. It does not pin-point difficulties faced by the learner.

f. Its results can be used for placement or grading purposes.

g. It reinforces the learning of students who have learnt an area.

h. It may or may not motivate a learner. Sometimes, it may have negative effect.

Examples:

1. Traditional school and university examinations.

2. Teacher-made tests.

3. Standardised tests.

Need and Importance of Evaluation:

Nowadays, education has multifold programmes and activities to inculcate in
students a sense of common values, an integrated approach, group feelings,
community interrelationship leading to national integration, and knowledge to
adjust in different situations.

Evaluation in education assesses the effectiveness or worth of an educational
experience, which is measured against instructional objectives.

Evaluation is done to fulfill the following needs:

1. (a) It helps a teacher to know his pupils in detail. Today, education is child-
centered. So, the child’s abilities, interest, aptitude, attitude, etc., are to be properly
studied so as to arrange instruction accordingly.
(b) It helps the teacher to determine, evaluate and refine his instructional
techniques.

(c) It helps him in setting, refining and clarifying the objectives.

(d) It helps him to know the entry behaviour of the students.

2. It helps an administrator.

(a) In educational planning and

(b) In educational decisions on selections, classification and placement.

3. Education is a complex process. Thus, there is a great need of continuous
evaluation of its processes and products. It helps to design better educational
programmes.

4. The parents are eager to know about the educational progress of their children
and evaluation alone can assess the pupils’ progress from time to time.

5. A sound choice of objectives depends on accurate information regarding the
pupil’s abilities, interest, attitude and personality traits, and such information is
obtained through evaluation.

6. Evaluation helps us to know whether the instructional objectives have been
achieved or not. As such, evaluation helps in planning better strategies for
education.

7. A sound programme of evaluation clarifies the aims of education and it helps us
to know whether aims and objectives are attainable or not. As such, it helps in
reformulation of aims and objectives.

8. Evaluation studies the ‘total child’ and thus helps us to undertake special
instructional programmes like enrichment programmes for the bright and
remedial programmes for the backward.
9. It helps a student in encouraging good study habits, in increasing motivation
and in developing abilities and skills, in knowing the results of progress and in
getting appropriate feedback.

10. It helps us to undertake appropriate guidance services.

From the above discussions it is quite evident that evaluation is quite essential for
promoting pupil growth. It is equally helpful to parents, teachers, administrators
and students.

“Measurement” is the act of determining a target's size, length, weight, capacity, or other aspect.
There are a number of terms similar to “measure” but which vary according to the purpose (such as
“weigh,” “calculate,” and “quantify”). In general, measurement can be understood as one action
within the term “instrumentation.”

Methods of Measurement

Measurement of any quantity involves two parameters: the magnitude of the value and the unit of
measurement. For instance, if we have to measure the temperature, we can say it is 10 degrees C. Here
the value “10” is the magnitude and “C”, which stands for “Celsius”, is the unit of measurement. Similarly,
we can say the height of a wall is 5 meters, where “5” is the magnitude and “meters” is the unit of
measurement.
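To make the idea concrete, here is a minimal sketch (in Python, with invented example values) that treats a measured value simply as a magnitude paired with a unit:

from dataclasses import dataclass

@dataclass
class Measurement:
    magnitude: float   # the numerical value, e.g. 10
    unit: str          # the unit of measurement, e.g. "degrees Celsius"

# The two examples from the text above
temperature = Measurement(10, "degrees Celsius")
wall_height = Measurement(5, "meters")
print(temperature.magnitude, temperature.unit)   # -> 10 degrees Celsius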

There are two methods of measurement: 1) direct comparison with the standard, and 2) indirect
comparison with the standard. Both the methods are discussed below:

1) Direct Comparison with the Standard

In the direct comparison method of measurement, we compare the quantity directly with the primary or
secondary standard. Say for instance, if we have to measure the length of the bar, we will measure it
with the help of the measuring tape or scale that acts as the secondary standard. Here we are
comparing the quantity to be measured directly with the standard.

Even if you make the comparison directly with the secondary standard, it is not necessary for you to
know the primary standard. The primary standards are the original standards made from certain
standard values or formulas. The secondary standards are made from the primary standards, but most
of the time we use secondary standards for comparison, since it is not always feasible to use the
primary standards from the accuracy, reliability and cost points of view. There is no difference in the
measured value of the quantity whether one is using the direct method by comparing with the primary or
secondary standard.

The direct comparison method of measurement is not always accurate. In the above example of measuring
the length, there is limited accuracy with which our eye can read the readings, which can be about 0.01
inch. Here the error does not occur because of the error in the standards, but because of the human
limitations in noting the readings. Similarly, when we measure the mass of any body by comparing it with
some standard, it is very difficult to say that both the bodies are of exactly the same mass, for some
difference between the two, no matter how small, is bound to occur. Thus, in the direct method of
measurement there is always some difference, however small, between the actual value of the quantity
and the measured value of the quantity.

2) Indirect Method of Measurement

There are a number of quantities that cannot be measured directly by using some instrument. For
instance, we cannot measure the strain in a bar due to an applied force directly. We may have to record
the temperature and pressure deep underground or in some far-off remote places. In such
cases indirect methods of measurement are used.

In the indirect method of measurement some transducing device, called a transducer, is used, which is
coupled to a chain of connecting apparatus that forms part of the measuring system. In this
system the quantity which is to be measured (input) is converted into some other measurable quantity
(output) by the transducer. The transducer used is such that the input and the output are proportional
to each other. The readings obtained from the transducer are calibrated as per the relation between
the input and the output; thus the reading obtained from the transducer represents the actual value of the
quantity to be measured. Such type of conversion is often necessary to make the desired information
intelligible.
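The proportional input–output relation just described can be illustrated with a short sketch. This is only an assumed linear calibration with invented numbers, not the procedure of any particular instrument: known inputs and the transducer outputs they produce are fitted to a straight line, and that line is then inverted to turn a new output reading back into the measured quantity.

# Hypothetical linear calibration of a transducer (all values are assumed).
known_inputs = [0.0, 10.0, 20.0, 30.0]        # e.g. applied strain, in arbitrary units
observed_outputs = [0.02, 2.05, 4.01, 6.03]   # corresponding transducer signal, e.g. volts

n = len(known_inputs)
mean_x = sum(known_inputs) / n
mean_y = sum(observed_outputs) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(known_inputs, observed_outputs)) / sum((x - mean_x) ** 2 for x in known_inputs)
offset = mean_y - slope * mean_x

def to_measured_value(output_signal):
    # Invert the fitted line: recover the input quantity from an observed output.
    return (output_signal - offset) / slope

print(round(to_measured_value(3.0), 2))   # estimated input for an output reading of 3.0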

The indirect method of measurement comprises a system that senses, converts, and finally
presents an analogous output in the form of a displacement or chart. This analogous output can be in
various forms, and often it is necessary to amplify it in order to read it accurately and make an accurate
reading of the quantity to be measured. The majority of transducers convert a mechanical input into an
analogous electrical output for processing, though there are transducers that convert a mechanical input
into an analogous mechanical output that is measured easily.

Analog and digital measuring methods

A distinction is also made between analog measurement and digital measurement. In the analog
method, measurement is done continuously, and the size of the signal is analogous to the measured
value. The measured value is indicated on a pointer instrument that has a scale (e.g. voltage,
resistance). Another example of analog measurement is the measurement of temperature by using a
mercury thermometer, as mentioned above.

By contrast, in the digital method, the measured value is converted to binary format and shown in
numerical form. The measurement of the speed by counting the number of revolutions within a defined
period of time is a digital measuring procedure, too.
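As a tiny worked example of the revolution-counting procedure just mentioned (the count and gate time below are assumed values):

revolutions_counted = 250     # pulses registered by the sensor
gate_time_seconds = 10.0      # the defined counting period
speed_rpm = revolutions_counted / gate_time_seconds * 60.0
print(speed_rpm)              # -> 1500.0 revolutions per minute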

Though digital indications allow for more precise readings so that reading errors or inaccuracies can be
prevented – as may happen with analog indications – the latter are commonly considered to be easier to
grasp by humans than digital indications.

Continuous and discontinuous method

As the terms suggest, the time factor is in the foreground here, and the measuring signal can be analog
as well as digital. If the quantity to be measured is constantly captured, for instance with a continuous
line recorder, this is called continuous measurement.

In the discontinuous method, however, the signal path between the measuring point and the measuring
output (e.g. line recorder) is only activated intermittently. The parameter to be measured is captured
with periodic interruptions, e.g. with a dotted line recorder. In this way, the values of the parameter to
be measured can be represented over a longer period.

Deflection method of measurement

As said above, measurement is the process of comparing the quantity to be measured with a known
quantity, i.e. the calibration parameter. In the deflection method of measurement, this can be the
extension labeled on a scale, and the weight of an object can easily be determined, for example, by
using a spring balance. In this case, the known quantity is the spring force that increases proportionately
to the spring deflection.
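A short numerical sketch of the spring-balance case (the spring constant and deflection are assumed values, not taken from any particular instrument):

k = 200.0    # spring constant in N/m (assumed)
x = 0.049    # observed spring deflection in m (assumed)
g = 9.81     # gravitational acceleration in m/s^2

weight = k * x        # the known quantity: spring force, proportional to deflection
mass = weight / g     # the inferred mass of the object
print(round(weight, 2), "N,", round(mass, 3), "kg")   # -> 9.8 N, 0.999 kg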

Characteristics of Measuring Instruments

Measuring instruments are usually specified by their metrological properties, such as range of measurement, scale
graduation value, scale spacing, sensitivity and reading accuracy.
Range of Measurement. It indicates the size values between which measurements may be
made on the given instrument.
Scale range. It is the difference between the values of the measured quantities corresponding
to the terminal scale marks.
Instrument range. It is the capacity or total range of values which an instrument is capable
of measuring. For example, a micrometer screw gauge with capacity of 25 to 50 mm has instrument
range of 25 to 50 mm but scale range is 25 mm.
Scale Spacing. It is the distance between the axes of two adjacent graduations on the scale.
Most instruments have a constant value of scale spacing throughout the scale. Such scales are said
to be linear.
In case of non-linear scales, the scale spacing value is variable within the limits of the scale.
Scale Division Value. It is the measured value of the measured quantity corresponding to
one division of the instrument, e.g., for ordinary scale, the scale division value is 1 mm. As a rule,
the scale division should not be smaller in value than the permissible indication error of an
instrument.
Sensitivity (Amplification or gearing ratio). It is the ratio of the scale spacing to the division
value (a worked example follows these definitions). It could also be expressed as the ratio of the product
of all the larger lever arms and the product of all the smaller lever arms. It is the property of a measuring
instrument to respond to changes in the measured quantity.
Sensitivity Threshold. It is defined as the minimum measured value which may cause any
movement whatsoever of the indicating hand. It is also called the discrimination or resolving power
of an instrument and is the minimum change in the quantity being measured which produces a
perceptible movement of the index.
Reading Accuracy. It is the accuracy that may be attained in using a measuring instrument.
Reading Error. It is defined as the difference between the reading of the instrument and the
actual value of the dimension being measured.
Accuracy of observation. It is the accuracy attainable in reading the scale of an instrument.
It depends on the quality of the scale marks, the width of the pointer/index, the space between the
pointer and the scale, the illumination of the scale, and the skill of the inspector. The width of scale
mark is usually kept one-tenth of the scale spacing for accurate reading of indications.
Parallax. It is apparent change in the position of the index relative to the scale marks, when
the scale is observed in a direction other than perpendicular to its plane.
Repeatability. It is the variation of indications in repeated measurements of the same
dimension. The variations may be due to clearances, friction and distortions in the instrument’s
mechanism. Repeatability represents the reproducibility of the readings of an instrument when a
series of measurements is carried out under fixed conditions of use.
Measuring force. It is the force produced by an instrument and acting upon the measured
surface in the direction of measurement. It is usually developed by springs whose deformation and
pressure change with the displacement of the instrument’s measuring spindle.
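As a worked example of the sensitivity ratio defined above (both values are assumed for illustration):

scale_spacing_mm = 1.0           # distance between adjacent graduations on the dial (assumed)
scale_division_value_mm = 0.01   # change in the measured quantity per division (assumed)
sensitivity = scale_spacing_mm / scale_division_value_mm
print(sensitivity)               # -> 100.0, i.e. the pointer moves 1 mm for every 0.01 mm of input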

Measurement errors

The absolute correct determination of a quantity is not possible – there is always a certain variation: the
measurement error. This is referred to as the difference between the measured value of a quantity and
its true value. There are several reasons for measurement errors. These include instrument errors,
environmental errors or human errors. In addition, the distinction is made between “systematic errors”
and “random errors”.
Systematic errors are consistent and therefore, reproducible errors. They can be caused by the
measuring instrument itself (e.g. due to wear, ageing or environmental influences), but also by the
measuring procedure. Systematic errors can, therefore, be corrected by taking appropriate measures.

Random errors, in contrast, are non-systematic errors, which may be attributable to miscalibrated
instruments, environmental conditions (e.g. temperature) or to the operator (reading mistake). Such
errors can eventually be compensated by repeated measurements and determination of an average
value (always given that the instrument is correctly calibrated).

Error calculation

Since exact measurements are not possible, deviations of the measured values from their actual values
always affect the measuring result. Consequently, it will also deviate from its true value. In order to
minimize such errors, the error calculation method is adopted. Actually, this term is misleading because
errors cannot be calculated, and it is only possible to estimate them in a realistic manner. Consequently,
the objective of error calculation is to determine the best estimate for the true value (measuring result)
and for the magnitude of the variation (measuring uncertainty).
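A minimal sketch of this idea, using a set of invented repeated readings: the mean serves as the best estimate of the true value, and the standard uncertainty of the mean serves as the estimate of the variation.

import statistics

readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]   # repeated measurements (assumed values)

best_estimate = statistics.mean(readings)              # best estimate of the true value
spread = statistics.stdev(readings)                    # spread of the individual readings
uncertainty = spread / len(readings) ** 0.5            # standard uncertainty of the mean

print(round(best_estimate, 3), "+/-", round(uncertainty, 3))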

What are the different types of measurement?

According to the way of measuring, we have the following types:
[1] Linear Measurement

Linear measuring instruments are designed to measure the distance between two surfaces or points
(end measurements), as the Vernier caliper and micrometer do. The basic fundamental unit of linear
measurement is the meter, one of the seven SI base units; in practice, linear dimensions are commonly stated in mm.
[2] Angular Measurement

The concept of an angle comes from the circle; an angle is, in fact, a part of a circle. We measure an angle in
degrees or radians. Usually, the primary objective of angle measurement is not to measure angles as such but the
assessment of alignment of machine parts or products.

[3] Comparing measurement

A comparator is a device that compares an unknown length with a standard. Such measurement is known as
comparison measurement, and the instrument which provides such a comparison is known as a
comparator.

Comparators are generally used for linear measurements, and they give only the dimensional difference in
relation to a basic dimension of the object.
Various comparators currently available basically differ in their methods of amplifying and recording the
variations measured.
Comparison measurement depends on the least count of the standard and the means of
comparing. A comparator should have a high degree of accuracy and precision.

A comparator can be mechanical, pneumatic, or electrical by the means of amplification of signals;
in each case, linearity of the scale within the measuring range should be assured.
[4] Temperature measurement

Temperature is one of the seven fundamental units of measurement; the unit of temperature is the kelvin (K).

We use two types of scale in measuring temperature, namely:

1. Relative scales [Fahrenheit (°F) and Celsius (°C)]

2. Absolute scales [Rankine (°R) and Kelvin (K)]
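The relations between these scales can be written as small conversion helpers (a sketch using the standard conversion constants):

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

def celsius_to_kelvin(c):
    return c + 273.15

def fahrenheit_to_rankine(f):
    return f + 459.67

print(celsius_to_fahrenheit(25))   # -> 77.0 degrees F
print(celsius_to_kelvin(25))       # -> 298.15 K
print(fahrenheit_to_rankine(77))   # -> 536.67 degrees R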

THE THREE TYPES OF MEASUREMENT

There are three broad categories that encompass most, if not all, measurement regarding your
marketing strategy. They are not mutually exclusive; your marketing strategy will end up being
measured by a combination of all three. The three measures are descriptive, diagnostic, and
predictive.

Descriptive is the most basic form of measurement: a Klout score, your Google PageRank, the number
of unique visitors to your website. Descriptive measurements are what most of us believe
measurement to be. In reality your Klout score is just a snapshot in time, a rank compared to others at
that moment. It doesn’t help you understand how you got there, what you did to achieve such a score,
or help with understanding how to increase one’s Klout score in the future. Descriptive measurements
are the most elementary form of measurement.

Example: Comparing the ad agencies in Regina to one another. All the comparisons in that post were of
a descriptive nature, none of which would give us an understanding of why one agency is more social
than another.

Diagnostic is the next step in measurement. Whereas a descriptive measure allows you to take a snapshot in
time, a diagnostic measurement helps you understand how something happened. Setting up Google
Analytics or some other measurement tool is the first step in moving from descriptive to diagnostic.
When you can compare results over time you begin to notice trends and attribute outcomes to
specific actions.

Example: When you see a spike in visitors to your site one day, tracing that traffic surge back to the gala
you sponsored the night before. The ad on the back of the program had a specific landing page on
your website, so the surge in traffic and any resulting sales or outcomes can be attributed to your
sponsorship of the gala.
When you begin to diagnose how your business is acquiring new customers and clients, then you can
begin to predict what tactics you should use in the future.

Predictive measures allow you to take what the descriptive measurements, and the diagnosis of those
measurements, have unveiled to help decision-making in the future.

Example: Looking deeper into Google AdWords to understand which keywords are more profitable for
the “time on site” goals you are trying to achieve, and using last month’s data to increase the effectiveness of
next month’s campaign.

Comparing referral traffic after sharing a new blog post on Linkedin, Twitter, Google+, and Facebook to
better understand which audience you should be targeting in the future. Furthermore, comparing data
from these referral sources to understand where your most profitable traffic comes from and setting
goals to increase effectiveness.

This article was inspired by a post called “A True Measure of Influence”. Tom Webster introduces the
three categories of measurement and talks about how you can’t rely on a Klout score as a barometer of
your online marketing effectiveness. It’s definitely worth a read.

Whether you like them or not, tests are a way of checking your knowledge or comprehension. They are
the main instrument used to evaluate your learning by most educational institutions. According to
research studies, tests have another benefit: they make you learn and remember more than you might
have otherwise. Although it may seem that all tests are the same, many different types of tests exist and
each has a different purpose and style.

Diagnostic Tests

These tests are used to diagnose how much you know and what you know. They can help a teacher know
what needs to be reviewed or reinforced in class. They also enable the student to identify areas of
weakness.

Placement Tests

These tests are used to place students in the appropriate class or level. For example, in language
schools, placement tests are used to check a student’s language level through grammar, vocabulary,
reading comprehension, writing, and speaking questions. After establishing the student’s level, the
student is placed in the appropriate class to suit his/her needs.

Progress or Achievement Tests

Achievement or progress tests measure the students’ improvement in relation to their syllabus.
These tests only contain items which the students have been taught in class. There are two types of
progress tests: short-term and long-term.
Short-term progress tests check how well students have understood or learned material covered in
specific units or chapters. They enable the teacher to decide if remedial or consolidation work is
required.

Long-term progress tests are also called Course Tests because they check the learners’ progress over
the entire course. They enable the students to judge how well they have progressed. Administratively,
they are often the sole basis of decisions to promote to a higher level.

Progress tests can also be structured as quizzes, rather than as tests. They can be answered by teams of
students, rather than individuals. They can be formulated as presentations, posters, assignments, or
research projects. Structuring progress tests in this way takes into account the multiple intelligences and
differing learning styles of the students. Yet many students still expect a “regular test” as a part of the course.

Proficiency Tests

These tests check learner levels in relation to general standards. They provide a broad picture of
knowledge and ability. In English language learning, examples are the TOEFL and IELTS exams, which are
mandatory for foreign-language speakers seeking admission to English-speaking universities. In addition,
the TOEIC (Test of English for International Communication) checks students’ knowledge of Business
English, as a prerequisite for employment.

Internal Tests

Internal tests are those given by the institution where the learner is taking the course. They are often
given at the end of a course in the form of a final exam.

External Tests

External tests are those given by an outside body. Examples are the TOEFL, TOEIC, IELTS, SAT, ACT, LSAT,
GRE and GMAT. The exams themselves are the basis for admission to university, job recruitment, or
promotion.

Objective Tests

Objective tests are those that have clear right or wrong answers. Multiple-choice tests fall into this
group. Students have to select a pre-determined correct answer from three or four possibilities.

Subjective Tests

Subjective tests require the marker or examiner to make a subjective judgment regarding the marks
deserved. Examples are essay questions and oral interviews. For such tests, it is especially important
that both examiner and student are aware of the grading criteria in order to increase their validity.
Combination Tests

Many tests are a combination of objective and subjective styles. For example, on the TOEFL iBT, the Test
of English as a Foreign Language, the reading and listening sections are objective, and the writing and
speaking sections are subjective.

Pre-assessment or diagnostic assessment

Before creating the instruction, it’s necessary to know what kind of students you’re creating the
instruction for. Your goal is to get to know your students’ strengths, weaknesses and the skills and
knowledge they possess before taking the instruction. Based on the data you’ve collected, you can create
your instruction.

Formative assessment

Formative assessment is used in the first attempt at developing instruction. The goal is to monitor
student learning to provide feedback. It helps identify the first gaps in your instruction. Based on this
feedback you’ll know what to focus on for further expansion of your instruction.

Summative assessment

Summative assessment is aimed at assessing the extent to which the most important outcomes at the
end of the instruction have been reached. But it measures more: the effectiveness of learning, reactions
to the instruction and the benefits on a long-term basis. The long-term benefits can be determined by
following students who attend your course, or test. You are able to see whether and how they use the
learned knowledge, skills and attitudes.


Confirmative assessment

When your instruction has been implemented in your classroom, it’s still necessary to carry out assessment.
Your goal with confirmative assessments is to find out if the instruction is still a success after a year, for
example, and if the way you're teaching is still on point. You could say that a confirmative assessment is
an extensive form of a summative assessment.

Norm-referenced assessment

This compares a student’s performance against an average norm. This could be the average national
norm for the subject History, for example. Another example is when the teacher compares the average
grade of his or her students against the average grade of the entire school.

Criterion-referenced assessment

It measures a student’s performance against a fixed set of predetermined criteria or learning standards.
It checks what students are expected to know and be able to do at a specific stage of their education.
Criterion-referenced tests are used to evaluate a specific body of knowledge or skill set; it’s a test to
evaluate the curriculum taught in a course.

Ipsative assessment

It measures the performance of a student against previous performances from that student. With this
method you’re trying to improve yourself by comparing previous results. You’re not comparing yourself
against other students, which may not be so good for your self-confidence.

A good test can be defined as one that is:


• Reliable

• Valid

• Practical

• Socially Sensitive

• Candidate Friendly.

Briefly and simply, I will review the meaning of each of these characteristics.

Reliable

Reliability refers to the accuracy of the obtained test score or to how close the obtained scores for
individuals are to what would be their “true” score, if we could ever know their true score. Thus,
reliability is the lack of measurement error; the less measurement error, the better. The reliability
coefficient, similar to a correlation coefficient, is used as the indicator of the reliability of a test. The
reliability coefficient can range from 0 to 1, and the closer to 1 the better. Generally, experts tend to
look for a reliability coefficient in excess of .70. However, many tests used in public safety screening are
what is referred to as multi-dimensional. Interpreting the meaning of a reliability coefficient for a
knowledge test based on a variety of sources requires a great deal of experience and even experts are
often fooled or offer incorrect interpretations. There are a number of types of reliability, but the type
usually reported is internal consistency or coefficient alpha. All things being equal, one should look for
an assessment with strong evidence of reliability, where information is offered on the degree of
confidence you can have in the reported test score.
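To make the internal-consistency idea more concrete, here is a minimal sketch of a coefficient alpha calculation. The item scores are invented for illustration, and the function is the simplified textbook formula, not the procedure of any particular test publisher.

def cronbach_alpha(item_scores):
    # item_scores: one list per test item, each holding the scores of the same respondents.
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    sum_item_variances = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_variances / variance(totals))

# Four test items answered by five candidates (assumed data)
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
    [4, 4, 3, 5, 3],
]
print(round(cronbach_alpha(items), 2))   # a value above .70 is usually taken as acceptable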
Valid

Validity will be the topic of our third primer in the series. In the selection context, the term “validity”
refers to whether there is an expectation that scores on the test have a demonstrable relationship to job
performance, or other important job-related criteria. Validity may also be used interchangeably with
related terms such as “job related” or “business necessity.” For now, we will state that there are a
number of ways of evaluating validity including:

• Content

• Criterion-related

• Construct

• Transfer or transportability

• Validity generalization

A good test will offer extensive documentation of the validity of the test.

Practical

A good test should be practical. What defines or constitutes a practical test? Well, this would be a
balancing of a number of factors including:

• Length – a shorter test is generally preferred

• Time – a test that takes less time is generally preferred

• Low cost – speaks for itself

• Easy to administer

• Easy to score

• Differentiates between candidates – a test is of little value if all the applicants obtain the same score

• Adequate test manual – provides a test manual offering adequate information and documentation

• Professionalism – is produced by test developers possessing high levels of expertise

The issue of the practicality of a test is a subjective judgment, which will be impacted by the constraints
facing the public-sector jurisdiction. A test that may be practical for a large city with 10,000 applicants
and a large budget may not be practical for a small town with 10 applicants and a minuscule testing
budget.
Socially Sensitive

A consideration of the social implications and effects of the use of a test is critical in the public sector,
especially for high-stakes jobs such as public safety occupations. The public safety assessment
professional must be considerate of and responsive to multiple groups of stakeholders. In addition, in
evaluating a test, it is critical that attention be given to:

• Avoiding Adverse Impact – Recent events have highlighted the importance of balance in the
demographics of safety force personnel. Adverse impact refers to differences in the passing
rates on exams between males and females, or minorities and majority group members. Tests
should be designed with an eye toward the minimization of adverse impact. A complicated
topic, I addressed adverse impact in greater depth in previous blog posts here and here.

• Universal Testing – The concept behind universal testing is that your exams should be able to be
taken by the most diverse set of applicants possible, including those with disabilities and by
those who speak other languages. Having a truly universal test is a difficult, if not impossible,
standard to meet. However, organizations should strive to ensure that testing locations and
environments are compatible with the needs of as wide a variety of individuals as possible. In
addition, organizations should have in place committees and procedures for dealing with
requests for accommodations.

Candidate Friendly

One of the biggest changes in testing over the past twenty years has been the increased attention paid
to the candidate experience. Thus, your tests should be designed to look professional and be easy to
administer. Furthermore, the candidate should see a clear connection between the exams and the job.
As the candidate completes the selection battery, you want the reaction to be “That was a fair test, I
had an opportunity to prove why I deserve the job, and this is the type of organization where I would
like to work.” One of my early sets of blog posts for IPMA-HR dealt with “how you treat a candidate makes a
difference” here, here and here.

MEANING OF INTERVIEW:

The word interview comes from Latin and middle French words meaning to “see between” or “see each
other”. Generally, an interview means a private meeting between people when questions are asked and
answered. The person who answers the questions of an interview is called the interviewee. The
person who asks the questions in an interview is called the interviewer. It suggests a meeting between
two persons for the purpose of getting a view of each other or for knowing each other. When we
normally think of an interview, we think of a setting in which an employer tries to size up an applicant for a
job.

According to Gary Dessler, “An interview is a procedure designed to obtain information from a person’s
oral response to oral inquiries.”
According to Thill and Bovee, “An interview is any planned conversation with a specific purpose involving
two or more people”.

According to Dr. S. M. Amunuzzaman, “Interview is a very systematic method by which a person enters
deeply into the life of even a stranger and can bring out needed information and data for the research
purpose.”


So, an interview is a formal meeting between two people (the interviewer and the interviewee) where
questions are asked by the interviewer to obtain information, qualities, attitudes, wishes, etc. from the
interviewee.

TYPES OF INTERVIEWS

There are many types of interviews that an organization can arrange. It depends on the objectives of
taking the interview. Some important types of interviews are stated below:

1. Personal interviews: Personal interviews include:

• Selection of the employees

• Promotion of the employees

• Retirement and resignation of the employees

Of course, this type of interview is designed to obtain information through discussion and observation
about how well the interviewee will perform on the job.

2. Evaluation interviews: Interviews that take place annually to review the progress of the
interviewee are called evaluation interviews. Naturally, they occur between superiors and
subordinates. The main objective of this interview is to find out the strengths and weaknesses of
the employees.

3. Persuasive interviews: This type of interview is designed to sell someone a product or an idea.
When a sales representative talks with a target buyer, persuasion takes the form of convincing
the target that the product or idea meets a need.

4. Structured interviews: Structured interviews tend to follow formal procedures; the interviewer
follows a predetermined agenda or questions.

5. Unstructured interviews: When the interview does not follow formal rules or procedures, it
is called an unstructured interview. The discussion will probably be free-flowing and may shift
rapidly from one subject to another depending on the interests of the interviewee and the
interviewer.

6. Counseling interviews: These may be held to find out what has been troubling a worker and
why he or she has not been working as expected.

7. Disciplinary interviews: Disciplinary interviews occur when an employee has been
accused of breaching the organization's rules and procedures.

8. Stress interviews: These are designed to place the interviewee in a stressful situation in order to observe
the interviewee's reaction.

9. Public interviews: These are interviews conducted through public media such as radio, television and newspapers, for example with representatives of political parties.

10. Informal or conversational interview: In the conversational interview, no predetermined
questions are asked, in order to remain as open and adaptable as possible to the interviewee's
nature and priorities; during the interview the interviewer "goes with the flow".

11. General interview guide approach: The guide approach is intended to ensure that the same
general areas of information are collected from each interviewee; this provides more focus than
the conversational approach but still allows a degree of freedom and adaptability in getting the
information from the interviewee.

12. Standardized or open-ended interview: Here the same open-ended questions are asked of all
interviewees; this approach facilitates faster interviews that can be more easily
analyzed and compared.
13. Closed or fixed-response interview: This is an interview in which all interviewees are asked the same
questions and asked to choose their answers from among the same set of alternatives. This format is
useful for those not practiced in interviewing.

Characteristics of a Good Interview

1. Come to the interview well prepared, with background knowledge of the subject, familiarity with your
recording equipment, and a consent form for the interviewee to sign giving you permission to use the
tape-recorded interview for research purposes. You should also mention that the interview will be
archived as part of a larger project documenting the lives of Latino migrants in the United States.

2. Make the narrator as comfortable as possible; polite, friendly behavior will put your interviewee at
ease. Interviews should not begin abruptly. Take the time to introduce yourself and to talk about your
project. For example, “Hello Mr. Jones, I’m Jill Savage. How are you today? Thanks for taking time to let
me interview you about your migration experiences for my oral history project. Let’s find a quiet place
where we can sit down and talk. Where would you like to sit to do the interview? How would you like to
proceed with the interview?”

3. Take time to find a quiet spot in which to conduct the interview. Remember that even the sounds of
clocks, pets and chatter add distracting noise to the recording and may also distract you and the
interviewee, affecting the overall quality of the interview and recording. Set up the recorder between
yourself and the interviewee. Before you turn on the recorder, ask if the narrator is ready to begin.

4. Begin the interview with a few simple questions that the interviewee can answer easily and
comfortably.

5. Ask questions one at a time and do not rush the interviewee to respond. Allow the interviewee time
to think and respond. Do not become anxious about silence. Silences can make for a better interview; pause
at least ten seconds before asking a new question.

6. Speak clearly so that the interviewee can easily understand and hear you. Keep the questions as brief
as possible so that what you are asking will be clear to the interviewee. Repeat the question if you need
to.

7. Ask as many open-ended questions as possible. These questions encourage the interviewee to tell
stories rather than provide yes/no responses.

8. When constructing your questions, write them in clear, plain English. Remember that your
interviewees are not academics.

For example, do not ask: "How has gender impacted your migration experience?" Rather, ask: "What
was your experience like as a woman crossing the border?" "How did being a woman affect your decision
to migrate?" "How was your experience as a woman different from that of other migrants you know?"
"Tell me about what your experiences as a single man were like immigrating to the United States."

Another example: do not ask, "Did you access social networks?" or "What social networks, if any, did you
access?" Instead, consider: "Were there people (family members, friends, or co-workers, for example)
that you depended on to help you with your trip?" or "Were there family members or friends that you
were able to depend on when you first came to the United States?" Then you can ask follow-up
questions if they answer yes, for example: "Who were they? In what ways (or how) did they help
you? Was that common practice?"

9. Listen actively to the interviewee's answers and then ask follow-up questions like "How did you feel
about that?" or "What happened next?" to bring out more details before you go on to the next question
on your page. Respond appropriately to the interviewee. Pause or say something like "That must have
been difficult" if the interviewee describes a painful memory. Also, if the interviewee is clearly overcome
by emotion, ask if they would like to take a break and/or stop the interview and return to it later.

10. Do not contradict or correct your interviewee and keep your personal opinions to yourself as much
as possible. Do not ask leading questions like: “Tell me about that winter, you must have had a
miserable time.”

11. Do not rush the end of the interview. Have a good closing question that helps the interviewee
summarize or come to a conclusion. You might consider asking them if there is anything they wish to say
that they may not have already told you, before pausing the recorder.

Why is Interview Important?

1. The assessment of the employees:

Employees are assessed through the interview process, and that assessment is considered one of
the best ways of knowing a person's potential.

This is one of the reasons why assessing employees through the interview
process is essential.

2. No other procedure:

There is no other selection procedure better than the interview, which is why interviews form a
vital part of the selection process.

The interview is also what helps in linking the interviewee and the interviewer.

3. It forms a bridge between the sender and the receiver:

The interview process acts as a bridge: it conveys what the sender has to communicate, while the
receiver gets to know the sender. In this way it bridges all sorts of gaps.

4. Speaking skills:

A person can be evaluated by the manner in which he or she communicates. Speaking skills cannot
be judged from writing; they can only be judged from the way a person speaks.
This is another reason why interviews are important in the recruiting
process.

5. Check the confidence level:

An individual may have to present in front of other people in the office, and if he or she turns out to be shy
and lacking in confidence, it will not do the company any good.

It is obvious that companies require efficient people for their own benefit, but it also holds
true that the employees have to give their best to earn the best.

So, interviews are conducted to find out whether a person is able to speak up in front of a number of people.

6. Social behaviour is analyzed:

Another benefit of conducting interviews is that the social behaviour of the individual is analyzed. When a
person speaks, his or her body language and choice of words are assessed, and basic etiquette
is taken into account.

Among many others, this is also one of the important reasons.

7. The body language and the smartness of the individual:

How smart the person is, and how he or she presents himself or herself in front of others (i.e. the body
language of the person), is observed through the interview process.

8. Quality of answers is tested:

Even in ordinary conversation with friends, the way we pronounce words and speak is noticed, and we
are sometimes interrupted for saying the wrong word at the wrong time. If this is noticed among friends,
it is noticed all the more during an interview.

So, in order to know how well a person speaks and to examine the quality
of the answers the person gives, the interview process is essential.

These are some of the points which clearly state that the interview process is important to all
organizations, whatever the size and type of the enterprise.

Meaning of Cumulative Record Card:

A Cumulative Record Card is one which contains the results of the different assessments and judgements made
from time to time during the course of study of a student or pupil. Generally it covers three consecutive
years. It contains information regarding all aspects of the life of the child or educand: physical, mental, social,
moral and psychological. It seeks to give as comprehensive a picture as possible of the personality of the
child.
“The significant information gathered periodically on a student through the use of various techniques –
tests, inventories, questionnaires, observation, interviews, case studies, etc.”

Basically, a Cumulative Record Card is a document in which useful and reliable information about a
particular pupil or student is recorded cumulatively in one place. It thus presents a complete and growing
picture of the individual concerned for the purpose of helping him during his long stay at school; at
the time of leaving, it helps in the solution of his manifold problems of an educational, vocational and
personal-social nature, thereby assisting in his best development.

According to Jones, a Cumulative Record is "a permanent record of a student which is kept up-to-date
by the school; it is his educational history with information about his school achievement, attendance,
health, test scores and similar pertinent data." If the Cumulative Record is kept together in a folder, it is
called a Cumulative Record Folder (CRF). If the Cumulative Record is kept in an envelope, it is called a
Cumulative Record Envelope (CRE). If the Cumulative Record is kept on a card, it is called a Cumulative
Record Card (CRC).
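
To make the idea of cumulative recording concrete, the following minimal sketch (in Python) shows one way such a record might be organised digitally. The field names and structure are illustrative assumptions based on the items Jones mentions (achievement, attendance, health, test scores), not a prescribed format.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RecordEntry:
    """A single dated observation added to the cumulative record."""
    when: date
    category: str   # e.g. "achievement", "attendance", "health", "test score"
    detail: str

@dataclass
class CumulativeRecordCard:
    """A growing, up-to-date collection of entries for one pupil."""
    pupil_name: str
    entries: list[RecordEntry] = field(default_factory=list)

    def add_entry(self, category: str, detail: str, when: date | None = None) -> None:
        # New information is appended as it is obtained, never overwritten,
        # so the card remains a continuous history of the pupil.
        self.entries.append(RecordEntry(when or date.today(), category, detail))

# Illustrative usage with hypothetical data:
crc = CumulativeRecordCard("A. Pupil")
crc.add_entry("attendance", "Present 92% of days, Term 1")
crc.add_entry("test score", "Mathematics unit test: 78/100")

Keeping each entry dated and categorised, as in this sketch, is simply one way of honouring the "continuous and up-to-date" characteristics described below; an actual CRC format would be set by the school.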

Need for School Record:

The modern type of Cumulative Record was first made available in 1928 by the American Council on
Education. The need for such a record was felt in view of the inadequate information contained
in the various forms then available. The Secondary Education Commission made the following
observation regarding the need for school records: "neither the external examinations singly nor together
can give a correct and complete picture of a pupil's all-round progress at any particular stage of his
education, yet it is important for us to assess this in order to determine his future course of study or his
future vocation."

For this purpose, a proper system of school records should be maintained for every pupil, indicating the
work done by him in the school from day to day, month to month, term to term and year to year. Such a
school record will present a clear and continuous statement of the attainment of the child in different
intellectual pursuits throughout the successive stages of his education. It will also contain a progressive
evaluation of development in other directions of no less importance, such as the growth of his interests,
aptitudes and personal traits, his social adjustments, and the practical and social activities in which he takes
part.

Characteristics of Cumulative Record:

The Cumulative Record is characterised on the following grounds:

(i) The Cumulative Record is a permanent record about the pupil or student.

(ii) It is maintained up-to-date. Whenever any new information is obtained about the pupil it is entered
in the card.

(iii) It presents a complete picture about the educational progress of the pupil, his past achievements
and present standing.
(iv) It is comprehensive in the sense that it contains all information about the pupil’s attendance, test
scores, health etc.

(v) It contains only that information which is authentic, reliable, pertinent, objective and useful.

(vi) It is continuous in the sense that it contains information about the pupil from the time he enters
pre-school education or the kindergarten system till he leaves school.

(vii) Whenever any information is desired by anybody concerned with the welfare of the child, the
information should be given, but not the card itself.

(viii) Confidential information about the pupil is not entered in the CRC but kept in a separate file.

Basic Principles that Should Govern the Maintenance of the CRC:

(i) Keeping the record is a continuous process and should cover the whole history from pre-school or
kindergarten to college, and the record should follow the child from school to school. The card will furnish
valuable information about the growth of the child, and the new school can place him and deal with him to
greater advantage.

(ii) All the teachers and guidance workers should have access to these records. Matters that are too
confidential may be kept at a separate place. The child concerned may have an opportunity to study
his own Cumulative Record in consultation with the counsellor.

(iii) The essential data should be kept in a simple, concise and readable form so that it may be
convenient to find out the main points of life of the child at a glance.

(iv) Records should be based on objective data. They should be as reliable as possible.

(v) The record system should provide for a minimum of repetition of items.

(vi) It should contain reliable, accurate and objective information.

(vii) A manual should be prepared, and directions for the guidance of persons filling out or using the
records should be given in it.

(viii) The record should be maintained by the counsellor and should not be circulated throughout the
faculty for entries to be made on it by other members of the staff. Those entries should be made by them on
other forms, and the entries in this card should be made very carefully by the counsellor.

Meaning of Objective Type Test:


Simply, an objective type test is one which is free from any subjective bias either from the tester or
the marker. It refers to any written test that requires the examinee to select the correct answer from
among one or more of several alternatives or supply a word or two and that demands an objective
judgement when it is scored.

Objective-Centered Test/Objective-Based Test:

When questions are framed with reference to the objectives of instruction, the test becomes
objective-based. This type of test may contain essay type and objective type test items.

An essay test may be objective-centered or objective-based, though it may be difficult to score it
objectively. An objective type test, on the other hand, can always be scored objectively, though it may
not be objective-centered if it is not planned with reference to the objectives of instruction.

Objective-type tests have two characteristics viz.:

1. They are pin-pointed, definite and so clear that a single, definite answer is expected.

2. They ensure perfect objectivity in scoring. The scoring will not vary from examiner to examiner.

Merits of Objective Type Test:

1. Objective type test gives scope for wider sampling of the content.

2. It can be scored objectively and easily. The scoring will not vary from time to time or from examiner
to examiner.

3. This test reduces (a) the role of luck and (b) cramming of expected questions. As a result, there is
greater reliability and better content validity.

4. This type of question has greater motivational value.

5. It possesses economy of time, for it takes less time to answer than an essay test. Comparatively,
many test items can be presented to students. It also saves a lot of the scorer's time.

6. It eliminates extraneous (irrelevant) factors such as speed of writing, fluency of expression, literary
style, good handwriting, neatness, etc.

7. It measures the higher mental processes of understanding, application, analysis, prediction and
interpretation.

8. It permits stencil, machine or clerical scoring. Thus scoring is very easy; a brief scoring sketch is given after this list.

9. Linguistic ability is not required.
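
As a minimal sketch of the machine or clerical scoring mentioned in point 8, the short Python example below checks responses against a fixed answer key. The item identifiers and key are hypothetical and serve only to illustrate that the score cannot vary from examiner to examiner.

# A minimal, illustrative scoring routine for objective-type items.
# The item IDs and answer key below are hypothetical.
ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}

def score(responses, key=ANSWER_KEY):
    """Return the number of correctly answered items.

    Scoring is purely mechanical, so the result is identical
    no matter who (or what) does the marking.
    """
    return sum(
        1
        for item, correct in key.items()
        if responses.get(item, "").strip().upper() == correct
    )

# Example: two of the three hypothetical items answered correctly.
print(score({"Q1": "B", "Q2": "C", "Q3": "A"}))  # prints 2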

Limitations of Objective Type Test:


1. Objectives like ability to organise matter, ability to present matter logically and in a coherent
fashion, etc., cannot be evaluated.

2. Guessing is possible. No doubt the chances of success may be reduced by the inclusion of a large
number of items.

3. If a respondent marks all responses as correct, the result may be misleading.

4. Construction of the objective test items is difficult while answering them is quite easy.

5. They demand more of analysis than synthesis.

6. Linguistic ability of the testee is not at all tested.

7. The printing cost is considerably greater than that of an essay test.

Observation:

Science begins with observation and must ultimately return to observation for its final validation. Moser
and Kalton are of the opinion that "observation implies the use of the eyes rather than of the ears and the
voice". Observation may be defined as systematic viewing, coupled with consideration of the seen
phenomena, in which main consideration must be given to the larger unit of activity by which the specific
observed phenomena occurred (Young). Observing natural phenomena, aided by systematic classification
and measurement, led to the development of theories and of the laws of nature's forces. It is the classic
method of scientific inquiry. The accumulated knowledge of biologists, physicists, astronomers and other
natural scientists is built upon centuries of systematic observation, much of it of phenomena in their
natural surroundings rather than in the laboratory.

Components of Observation:

Observation involves three processes: 1. sensation, 2. attention and 3. perception. Sensation is gained
through the sense organs, which depends upon the physical alertness of the observer. Then comes
attention, which is largely a matter of habit. The third is perception, which involves the interpretation of
sensory reports. Thus sensation merely reports the facts. Observation helps in studying collective
behaviour and complex social situations; following up the individual units composing the situations;
understanding the whole and the parts in their interrelation; and getting the out-of-the-way details of the
situation.

Characteristics of Observation:

Firstly, observation is at once a physical as well as a mental activity; the use of the sense organs is
involved, as in observation one has to see or hear something.

Secondly, observation is selective, because one has to observe only the range of those things which fall
within the observation.

Thirdly, observation is purposive. Observation is limited to those facts and details which help in achieving
the specified objective of the research.

Fourthly, observation has to be efficient: mere watching alone is not enough; there should be scientific
thinking. Further, these observations should be based on tools of research which have been properly
standardised.

Meaning of Anecdotal Record:

It is a well-known fact that students spend most of their time in school with teachers and peers. It is
natural that certain significant incidents or happenings occur in the lives of students which are worth
noting, as they are based on some sort of experience. Keeping this in mind, teachers are expected to
record these happenings for the purpose of data collection.

Teachers may ask the students to write down the facts concerned on a piece of paper, or may record the
matter with a tape recorder after asking the student about the incident, without the student's knowledge.

The student should not know that the facts written down or asked about are being recorded by the
teacher. One thing, however, should be kept in mind by the teacher: the facts questioned or written down
must be selective and factual in nature.

Nowadays this type of data collection is required for the purpose of guidance services. The technique
embodying this process is called the anecdotal record.

Characteristics of a Good Anecdotal Record:

The following characteristics of a good anecdotal record have been suggested by Prescott; a brief structural sketch follows the list:

(i) Anecdotal record gives the date, place and situation in which the action occurred. This is called the
setting.

(ii) It describes the actions of the individual (pupil/child), the reactions of the other people involved, and
the responses of the former to these reactions.

(iii) It quotes what is said to the individual and by the individual during the action.

(iv) It notes "mood cues": the postures, gestures, voice qualities and facial expressions which serve as cues
to help the reader understand how the individual felt. It does not provide interpretations of his feelings, but only
the cues by which a reader may judge what they were.

(v) The description is inclusive and extensive enough to cover the episode. The action or conversation
is not left incomplete and unfinished but is followed through to the point where an aspect of a
behavioural moment in the life of the individual is supplied.
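
As a minimal sketch, and assuming hypothetical field names, the Python structure below records the elements Prescott lists (setting, actions and reactions, quotations, and mood cues) so that nothing essential is left out of an anecdote; it is an illustration, not a prescribed format.

# Illustrative structure for one anecdotal entry; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Anecdote:
    setting: str                 # date, place and situation in which the action occurred
    actions: str                 # what the pupil did, how others reacted, and his responses
    quotations: list[str] = field(default_factory=list)   # what was said to and by the pupil
    mood_cues: list[str] = field(default_factory=list)    # postures, gestures, voice, expressions

    def is_complete(self) -> bool:
        # A usable anecdote should at least describe the setting and the actions.
        return bool(self.setting.strip()) and bool(self.actions.strip())

# Hypothetical example entry:
note = Anecdote(
    setting="2024-03-12, playground, after the morning break",
    actions="Shared his lunch with a new pupil who had forgotten hers; classmates joined them.",
    quotations=["\"You can sit with us.\""],
    mood_cues=["relaxed posture", "smiling"],
)
print(note.is_complete())  # True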

Uses of Anecdotal Record:

The uses of the anecdotal record are as follows:

(i) The anecdotal record is useful for the guidance worker and the teacher, as it presents a dynamic picture
of the pupil in different situations.

(ii) The anecdotal record is used by teachers to know and understand the pupil on the basis of
descriptions of happenings in the student's life.
(iii) The anecdotal record enables teacher and guidance worker to understand the personality pattern of
students.

(iv) The anecdotal record enables teachers and guidance worker to study and understand the
adjustment patterns of the students.

(v) The anecdotal record is helpful to the teacher and the guidance worker in assisting the students in
solving their problems and difficulties.

(vi) The anecdotal record is a better means of data collection and is helpful in improving the student's
relationships with teachers, peers and others.

(vii) The anecdotal record is helpful for the students to get rid of mental tensions, anxieties and so on.

(viii) The anecdotal record is useful for parents to know about the child clearly, so that they can try to help
their child in various ways.

(ix) The anecdotal record is useful for improving teaching standards once the comments of the students
are known through the anecdotal technique.

Limitations of Anecdotal Record:

The anecdotal record has the following limitations:

(i) Anecdotal records are of no value if proper care is not taken by the teacher in collecting data about
the student's behaviour.

(ii) Anecdotal records are of little use if objectivity in data collection is not strictly followed and
maintained.

(iii) In some cases anecdotal records are confined merely to exceptional children, as a result of which
average students are seriously neglected.

(iv) The anecdotal record is a technique of data collection for the purpose of guidance services which
provides only partial information about the students.

(v) The anecdotal record is of no use if the incidents and their descriptions are not properly recorded.

(vi) The anecdotal record sometimes invites disappointment and tension among students, which is not
desirable on the part of the teacher.

(vii) It is not always possible for the teacher to detect every observable incident, because an incident that
is important and memorable for the student may not be treated as important by the teacher.

(viii) Sometimes students who are sentimental, reactive or tense do not respond, answer or
write correctly, as a result of which the anecdotal records carry little weight as far as their uses and
importance are concerned.

(ix) The preparation of the anecdotal record is nothing but an unnecessary wastage of time and money.
