
Dissertation Report on

EVALUATING TRAINING AND DEVELOPMENT IN VARIOUS SECTORS


Submitted by Vidhu Arora A0101907548 MBA (Gen), Class of 2009 Under the Supervision of Dr. R.S. Rai Assistant Professor, Decision Science.

In Partial Fulfillment of Award of Master of Business Administration





Declaration

I, Vidhu Arora, student of Master of Business Administration at Amity Business School, Amity University Uttar Pradesh, hereby declare that I have completed the dissertation EVALUATING TRAINING AND DEVELOPMENT IN VARIOUS SECTORS as part of the course requirement. I further declare that the information presented in this project is true and original to the best of my knowledge.

Date: March 2009 Place: Noida

Name: Vidhu Arora Enroll. No: A0101907548 Program: MBA (Gen)




Certificate

I, Dr. R.S. Rai, hereby certify that Vidhu Arora, student of Master of Business Administration at Amity Business School, Amity University Uttar Pradesh, has completed the dissertation EVALUATING TRAINING AND DEVELOPMENT IN VARIOUS SECTORS under my guidance.

Dr R.S. Rai Assistant Professor, Decision Science Amity Business School

Acknowledgement

Completion of a project report is a milestone in the life of every management student, and the successful completion of a live project enhances the student's self-confidence. The satisfactory completion of any task is the outcome of invaluable contributions from many people, explicit or implicit, and the key to acknowledging such contributions lies with one's professor. Words are poor bearers of gratitude, but I take this opportunity to offer my sincere thanks to my professor, Dr. R.S. Rai, Assistant Professor, Decision Science, for his benevolent and expert guidance, without which this dissertation would not have seen the light of day.

My sincere thanks to Dr. Sanjay Srivastava, Additional Director General, ABS, and Dr. Sanjeev Bansal for keeping faith in me and giving me admission into Amity. My heartfelt thanks to the people from the various organizations for allowing me to carry out the project in their organizations and for giving me access to all the material required.

I also offer sincere and humble thanks to all my friends and colleagues who encouraged and inspired me from time to time; it is only because of them that I did not lose sight of my track and completed the research. I thank my parents, who supported me and provided all the help required during my dissertation period and the preparation of this report. Finally, I thank God for making all this possible. Thank you all for your time, co-operation and support.

Vidhu Arora A0101907548 MBA (Gen)

Table of Contents

Declaration
Certificate
Acknowledgement
Table of contents
Table of figures
Preface ..... 1
1. Literature Review ..... 2
   1.1 General approaches ..... 8
   1.2 Evaluation models and taxonomies ..... 9
   1.3 Issues with evaluation approaches ..... 11
   1.4 Experience with training programme evaluation models ..... 17
2. Problem identification ..... 19
3. Methodology ..... 20
4. Discussions ..... 23
   4.1 T&D at Indian Oil ..... 23
   4.2 T&D at JK Tyres ..... 35
   4.3 T&D at PepsiCo ..... 41
   4.4 T&D at General Motors ..... 48
5. Analysis ..... 54
6. Results and Recommendations ..... 57
   6.1 Choosing an evaluation style ..... 57
   6.2 Simulation-based training ..... 59
7. Conclusion ..... 67
Bibliography ..... 69
Annexure 1: JK Tyres ..... 75
Annexure 2: PepsiCo ..... 82
Annexure 3: General Motors ..... 93

Table of Figures

Figure 1: Models and 'schools of thought' in evaluation ..... 8
Figure 2: 'Chain of consequences' for a training event ..... 13
Figure 3: Evaluation of outcomes and link to decision making ..... 15
Figure 4: Research methodology ..... 21
Figure 5: Training procedure at General Motors ..... 49
Figure 6: Use of evaluation style matrix ..... 58
Figure 7: Three phases of simulation evaluation ..... 64
Figure 8: Possible learning outcomes for simulation ..... 66

Preface

There are several issues involved in evaluating management development, more so when the evaluation also considers the method of management development, as this paper explores. Through a review of the literature and an analysis of the training procedures of organizations across several sectors, this paper addresses the high-level issues and establishes the basis for a suitable model of evaluation to measure the effectiveness of the method(s) employed in a managerial development intervention or event. The paper begins by reviewing why we should evaluate, what should be evaluated, and how to evaluate. It then considers suitable models and methods of evaluation for measuring the effectiveness of a management development intervention. The study of the training procedures and evaluation methods used in organizations across sectors gives a holistic overview of the methods in use and offers an in-depth comparison of the methods available. The paper concludes by commenting on the method employed in undertaking the intervention, with particular reference to experiential learning and the use of computer-based simulations during a training intervention.


Chapter 1: Literature Review

WHY EVALUATE

A number of authors consider the need and reasons for evaluation, though all tend to fall into four broad categories identified by Mark Easterby-Smith (Easterby-Smith, 1994). He notes four general purposes of evaluation (p. 14):

1. Proving: the worth and impact of training. Designed to demonstrate conclusively that something has happened as a result of training or developmental activities.
2. Improving: a formative purpose, to explicitly discover what improvements to a training programme are needed.
3. Learning: where the evaluation itself is, or becomes, an integral part of the learning of a training programme.
4. Controlling: quality aspects in the broadest sense, in terms of both quality of content and delivery to established standards.

More recent literature uses different terminology, which can be placed into these four broad categories. Russell (1999) does not include such a Learning purpose explicitly, but suggests that the evaluation of performance interventions produces the following benefits:

- Determines if performance gaps still exist or have changed (Proving)
- Determines whether the performance intervention achieved its goals (Controlling)
- Assesses the value of the performance intervention against organizational results (Proving)
- Identifies future changes or improvements to the intervention (Improving)

According to Russell, whichever evaluation model is chosen, the evaluation should focus on how effective the performance intervention was at meeting the organization's goals.

Russ-Eft and Preskill (2001) review evaluation across the entire organisation, not just in the training or development arena, and cite six distinct reasons to evaluate. Evaluation:

- Ensures quality (Controlling)
- Contributes to increased organisation members' knowledge (Learning)
- Helps prioritise resources (Improving)
- Helps plan and deliver organisational initiatives (Improving)
- Helps organisation members be accountable (Controlling)
- Produces findings that can help convince others of the need for, or effectiveness of, various organisational initiatives (Proving)

They include a seventh reason: to gain experience in evaluation, as it is fast becoming a marketable skill. Increasingly, in the business and Human Resources media and among the professional bodies representing training and development professionals (for example, the American Society for Training and Development, ASTD), there is a call for better evaluation of training and development interventions. The basic principle driving this is for training to demonstrate its worth to organisations, whether that be its attributable Return on Investment (ROI), its value in improving the performance of individuals (such as productivity gains or reduced accidents), or its value to the organisation (such as more efficient use of resources or demonstrable improvement in quality). Essentially, training and development costs time and money and needs to be shown to be worthwhile.

Exactly what is to be measured as part of the evaluation is an especially problematic area. Aspects of behaviour or reaction that are relatively easy to measure are usually trivial. Measuring changes in behaviour, for example, may require the observations to be reduced to simpler and more trivial characteristics.


Alternatively, these characteristics may be assessed by the individual's subordinates; however, the purist would no doubt claim that such holistic judgements are of dubious validity. The problem is that the general requirement for quantitative measurement tends to produce a trivialisation in the focus of the evaluation. According to Bedingham (1997), ultimately the only criteria that make sense are those related to on-the-job behaviour change. Bedingham advocates the use of research-based 360-degree questionnaires that objectively measure competency sets and skills applicable to most organisations, functions or disciplines, with the results of feedback taken immediately prior to the event made available to trainees during their training, thus allowing individuals to see easily how they actually do something and the relevance of the training. They can then start transferring the learned skills immediately on return to the workplace.

Managerial effectiveness and competency

David McClelland is often cited as the father of the modern competency movement. In 1973, he challenged the then orthodoxy of academic aptitude and knowledge content tests as being unable to predict performance or success in life, and as being biased against minorities and women (Young, 2002). Identified through patterns of behaviour, competencies are characteristics of people that differentiate performance in a specific job or role (Kelner, 2001; McClelland, 1973). Competencies distinguish well between roles at the same level in different functions, and also between roles at different levels (even in the same function), often by the number of competencies required to define the role. A competency model for a middle manager is usually defined within ten to twelve competencies, of which two are relatively unique to the given role. Kelner suggests that competency models for senior executives require fifteen to eighteen competencies, up to half of which are unique to the model (Kelner, 2001).
Kelner (2001) cites a 1996 unpublished paper in which the late David McClelland performed a meta-analysis of executives assessed on competencies, discovering that only eight competencies could consistently predict performance in any executive with 80 percent accuracy.


The first scientifically valid set of scaled competencies (competencies that have sets of behaviours ordered into levels of sophistication or complexity) was developed by Hay/McBer (Spencer and Spencer, 1993). The competencies found to be most critical for effective managers include:

- Achievement Orientation
- Developing Others
- Directiveness
- Impact and Influence
- Interpersonal Understanding
- Organisational Awareness
- Team Leadership

This set of characteristics, or individual competencies, that a manager brings to the job needs to match the job well, or additional effort may be necessary to carry out the job, or the manager may not be able to use certain managerial styles effectively. These are in turn affected by the organisational climate and the actual requirements of the job. Managerial effectiveness is the combination of these four critical factors: Organisational Climate, Managerial Styles, Job Requirements and the Individual Competencies that the manager brings to the job. Reddin (1970) points out that 'managerial effectiveness' is not a quality but a statement about the relationship between a manager's behaviour and some task situation. According to Reddin, it is therefore not possible to discover as fact the qualities of effectiveness, which would then be of self-evident value. If, however, the Hay/McBer competencies are able to predict performance in any executive to 80 percent accuracy, and these competencies are trainable, then any training programme designed to develop managerial effectiveness in any role can be evaluated by assessing the changes in behaviour of the participant that demonstrate the competency.


Learning outcomes

Most researchers conceptualise learning as a multidimensional construct, and there is considerable commonality across different attempts to classify types of learning outcomes. Kraiger et al. (1993) synthesised the work of Gagne (1984), Anderson (1982) and others, proposing three broad categories of learning outcomes: skill-based, cognitive and affective.

Skill-based learning outcomes address technical or motor skills. A number of game-based instructional programmes have been used for practice and drill of technical skills. For example, Gopher et al. (1994) investigated the effect of military trainees' use of an aviation computer game on subsequent test flights.

Cognitive learning outcomes include three subcategories:

- Declarative knowledge. The learner is typically required to reproduce or recognise some item of information. For example, White (1984) demonstrated that students who played a computer game focusing on Newtonian principles were able to answer questions on motion and force more accurately than those who did not play the game.
- Procedural knowledge. Requires a demonstration of the ability to apply knowledge, general rules, or skills to a specific case. For example, Whitehall and McDonald (1993) found that students using a variable-payoff electronics game achieved higher scores on electronics troubleshooting tasks than students who received standard drill and practice.
- Strategic knowledge. Requires application of learned principles to different contexts, or deriving new principles for general or novel situations (referred to by others as constructivist learning (Dede, 1997)). Wood and Stewart (1987), for example, found that use of a computer game to improve students' practical reasoning skills led to improvements in critical thinking.

Affective learning outcomes refer to attitudes, including feelings of confidence, self-efficacy, preferences and dispositions. Some research has shown that games can influence attitudes, for example Wiebe and Martin (1994) and Thomas et al. (1997).


General approaches to evaluation

Schools

Easterby-Smith groups the major approaches and classifies them within two dimensions: the scientific–constructivist dimension and the research–pragmatic dimension.

Scientific–constructivist dimension

This dimension represents distinct paradigms. According to Filstead (1979), these paradigms represent distinct, largely incompatible ways of seeing and understanding the world, yet in practice most evaluations contain elements of each point of view (Easterby-Smith, 1994). The scientific approach favours the use of quantitative techniques, attempting to operationalise all variables in measurable terms, normally analysed by statistical techniques in order to establish absolute criteria. It contrasts greatly with constructivist or naturalistic methods, which emphasise the collection of the different views of various stakeholders before data collection begins; the process continues by reviewing largely, but not exclusively, qualitative data alongside the views of those stakeholders.

Research–pragmatic dimension

This dimension represents contrasting styles of how the evaluation is conducted. Easterby-Smith (1980) described the two extremes as the research and pragmatic styles respectively. Research styles stress the importance of rigorous procedures, that the direction and emphasis of the evaluation study should be guided by theoretical considerations, and that these considerations are aimed at producing enduring generalisations and knowledge about the learning and developmental processes involved. The researcher in this instance should be independent and maintain an objective view of the courses under investigation without becoming personally involved. The pragmatic approach, in contrast, emphasises reducing data collection and other time-consuming aspects of the evaluation study to the minimum possible. As Easterby-Smith et al. (1991) point out, when working within companies the researcher is dependent on the cooperation and time of managers and other informants, who may, and will, have priorities more important than the researcher's study.

Easterby-Smith combines the dimensions into a useful matrix (Figure 1), with the scientific–constructivist dimension on one axis and the research–pragmatic dimension on the other. Experimental research sits at the scientific/research corner, with illuminative evaluation and goal-free evaluation toward the constructivist side, the systems model at the scientific/pragmatic corner, and interventionalist evaluation at the constructivist/pragmatic corner. The arrows in the figure show the influence of one school on the development of the linked school.

Figure 1. Models and 'schools of thought' in evaluation


Experimental Research

Experimental research has its roots in traditional social science research methodology. Campbell and Stanley (1966), Campbell et al. (1970), Kirkpatrick (1959/60) and Hesseling (1966) are cited in Easterby-Smith (1994) as the best-known representatives of this school. The emphases in experimental research are:

1. Determining the effects of training and other forms of development
2. Demonstrating that any observed changes in behaviour or state can be attributed to the training, or treatment, that was provided

There is an emphasis on theoretical considerations, preordinate designs and quantitative measurement, and on comparing the effects of different treatments. In a training evaluation study this would involve at least two groups being evaluated before and after a training intervention. One group receives the training treatment whilst the other does not. The evaluation would measure the differences between the groups in specific quantifiable aspects related to their work.
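The two-group, before-and-after design described above can be sketched numerically. The following Python snippet is purely illustrative: the test scores are hypothetical, and Welch's t statistic is used as one reasonable choice for comparing the pre-to-post gains of the trained group against the control group, the kind of quantitative comparison this school favours.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical pre/post test scores (0-100) for a trained and a control group.
trained_pre  = [52, 48, 61, 55, 58, 50, 63, 57]
trained_post = [68, 60, 75, 70, 66, 64, 77, 69]
control_pre  = [54, 49, 60, 56, 57, 51, 62, 58]
control_post = [55, 50, 62, 55, 59, 52, 63, 57]

# Compare each participant's *gain*, not the raw post-test score,
# so pre-existing differences between the groups are controlled for.
trained_gain = [post - pre for pre, post in zip(trained_pre, trained_post)]
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]

t = welch_t(trained_gain, control_gain)
print(round(t, 2))
```

A t value well above 2 suggests the difference in gains is unlikely to be due to chance alone; in practice the statistic would be compared against the t distribution with the appropriate degrees of freedom. Note that this sketch also illustrates the sample-size problem discussed later: with eight participants per group, only very large effects reach significance.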


Evaluation models and taxonomies

Kirkpatrick model

Donald Kirkpatrick created the most familiar evaluation taxonomy, a four-step approach to evaluation (Kirkpatrick, 1959/60), now referred to as a model of four levels of evaluation (Kirkpatrick, 1994). It is one of the most widely accepted and implemented models used to evaluate training interventions. Kirkpatrick's four levels measure the following:

1. Reaction to the intervention
2. Learning attributed to the intervention
3. Application of behaviour changes on the job
4. Business results realised by the organization
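As a taxonomy, the four levels lend themselves to a simple lookup structure. The sketch below is illustrative only: the example instrument attached to each level is a common choice in practice, not part of Kirkpatrick's model itself.

```python
# Kirkpatrick's four levels, paired with hypothetical example instruments.
KIRKPATRICK_LEVELS = {
    1: ("Reaction",  "post-course satisfaction questionnaire"),
    2: ("Learning",  "pre/post knowledge test"),
    3: ("Behaviour", "on-the-job observation some weeks after training"),
    4: ("Results",   "business metrics such as productivity or quality"),
}

def describe(level):
    """Return a one-line description of how a given level might be measured."""
    name, instrument = KIRKPATRICK_LEVELS[level]
    return f"Level {level} ({name}): measured via {instrument}"

print(describe(3))
```

Structuring an evaluation plan this way makes explicit which levels a given programme actually measures; as noted below, many programmes in practice stop at Level 1.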

Russ-Eft and Preskill (2001) note that the ubiquity of Kirkpatrick's model stems from its simplicity and understandability. Having reviewed 57 journal articles in the training, performance, and psychology literature that discussed training evaluation models, they found that 44 (77%) of these included Kirkpatrick's model. They also note that it is only in recent years (since 1996) that several alternative models have been developed.

In an article written in 1977, Donald Kirkpatrick considered how evaluation at his four levels provided evidence or proof of training effectiveness. Proof of effectiveness requires an experimental design using a control group to eliminate possible factors affecting the measurement of outcomes from a training programme. Without such a design (i.e., evaluating only the changes in a group participating in a training programme), the evaluation can provide evidence of training effectiveness but not proof (Kirkpatrick, 1998).

Brinkerhoff model

Brinkerhoff's model has its roots in evaluating training and HRD interventions. Brinkerhoff's cyclical model consists of six stages grouped into the following four stages of performance intervention:


1. Performance analysis
2. Design
3. Implementation
4. Evaluation

Brinkerhoff's (1988, 1989) model addresses the need for evaluation throughout the entire human performance intervention process. The six stages are:

1. Goal setting: identify business results and performance needs and determine if the problem is worth addressing
2. Programme design: evaluation of all types of interventions that may be appropriate
3. Programme implementation: evaluates the implementation and addresses the success of the implementation
4. Immediate outcomes: focuses on learning that takes place during the intervention
5. Intermediate outcomes: focuses on the after-effects of the intervention some time following the intervention
6. Impacts and worth: how the intervention has impacted the organization, the desired business results, and whether it has addressed the original performance need or gap

Holton's HRD Evaluation Research and Measurement Model

Holton (1996) identified three outcomes of training (learning, individual performance, and organisational results) that are affected by primary and secondary influences. Learning is affected by trainees' reactions, their cognitive ability, and their motivation to learn. The outcome of individual performance is influenced by motivation to transfer the learning, the programme's design, and the conditions of training transfer. Organisational results are affected by the expectations for return on investment, the organisation's goals, and external events and factors. Holton's model bears great similarity to Kirkpatrick's Levels 2, 3 and 4. The model is more testable than others in that it is the only one that identifies specific variables affecting training's impact, through identification of various objects, relationships, influencing factors, predictions and hypotheses. Russ-Eft and Preskill (2001) view Holton's model as being related to a theory-driven approach to evaluation.


EXPERIMENTAL APPROACH

There are a number of reasons why this approach may not work as well as it might, particularly with management training, where sample sizes are limited and especially where training and development activities are secondary to the main objectives of the organization. Easterby-Smith (1994) discusses four main reasons:

Sample sizes. The statistical techniques essential to experimental research need large samples to discover statistical significance when evaluating management training and development. This is particularly difficult in this field, where group sizes are often less than ten and rarely greater than 30.

Control groups. There are special problems in achieving genuine control groups. In one study (Easterby-Smith and Ashton, 1975), the managers selected to receive training were those closer to their bosses, rather than being chosen by more objective or even random selection criteria. Further, does receiving no training have a negligible effect on the control group? Indeed, Guba and Lincoln (1989) now include those who are left out as one of the main groups of stakeholders.

Causality. The intention of experimental design is primarily to demonstrate causality between the training intervention and any subsequent outcomes. However, it is often hard to isolate the intervention from other influences on the manager's behaviour. While it may be possible to reduce the clutter of other influences for training interventions that have a specific, identifiable skills focus, interventions of a more complex nature may be subject to myriad external influences between evaluations.


Time and effort. Kirkpatrick identifies that an experimental design to prove that training is effective requires additional steps, including the evaluation of a control group, and, particularly with regard to his Level 3 (Behaviour) and Level 4 (Results), ensuring that any post-test evaluation is undertaken long enough after the training event for the participant to have had an opportunity to change behaviour, or for results to be realisable (Kirkpatrick, 1998).

ILLUMINATIVE EVALUATION

In response to the problems above, evaluators working within the experimental tradition usually try to increase the sample size. The illuminative evaluation school instead takes issue with the comparative and quantitative aspects of experimental research, tending to concentrate on the views of different people in a more qualitative way. However, this school has been noted to be more costly than anticipated (Jenkins et al., 1981), and this style of evaluation has also at times been rejected by sponsors.

SYSTEMS MODEL

Easterby-Smith (1994) notes three main features of the systems model school: starting with objectives, an emphasis on identifying the outcomes of training, and a stress on providing feedback about these outcomes to the people involved in providing inputs to the training. Additionally, evaluation is the assessment of the total value of a training system, training course or programme, in social as well as financial terms. Critical of the approach, Hamblin (1974) suggests that it is restrictive, because evaluation has to be related to objectives, whilst being over-ambitious, because the total value of training is evaluated in social as well as financial terms. However, Hamblin's own cycle of evaluation depends heavily on the formulation of objectives, either as a starting point or as a product of the evaluation process. Another important feature of Hamblin's work is the emphasis on measurement of outcomes from training at different levels. It assumes that any training event will, or can, lead to a chain of consequences, each of which may be seen as causing the next consequence.


Hamblin stresses that it would be unwise to conclude from an observed change at one of the higher levels of effect that this was due to a particular training intervention, unless one has followed the chain of causality through the intervening levels of effect. Should a change in job behaviour be observed, for example, the constructivist approach would possibly be to ask the individual for his or her own views of why they were now behaving in a different way, and then compare this interpretation with the views of one or two close colleagues or subordinates.

Hamblin's five-level model

Hamblin (1974), also widely referenced, devised a five-level model similar to Kirkpatrick's. Hamblin adds a fifth level that measures ultimate value variables of human good (economic outcomes). This can also be viewed as falling within the tradition of the behavioural objectives approach (Russ-Eft and Preskill, 2001).

Figure 2 depicts this chain: a training event leads to reaction effects, then learning effects, job behaviour effects, organisation effects and, finally, ultimate value effects.

Figure 2. 'Chain of consequences' for a training event


The third main feature of the systems model is the stress on feedback to trainers and other decision makers in the training process. This features significantly in the work of Warr et al. (1970), who take a very pragmatic view of evaluation, suggesting that it should help the trainer in making decisions about a particular programme as it is happening. Rackham (1973) subsequently makes a further distinction between feedback assisting decisions about current programmes and feedback that can contribute to decisions about future programmes. This Rackham began to appreciate after attempting to improve the amount of learning achieved in successive training programmes by feeding back to the trainers data about the reactions and learning achieved in earlier programmes. What Rackham noticed was that the process of feedback from one course to the next resulted in clear improvements when the programmes were non-participative in nature, but that there were no apparent improvements in programmes that involved a lot of participation.

The idea of feedback as an important aspect of evaluation was developed further by Burgoyne and Singh (1977). They distinguish between evaluation as feedback and feedback adding to the body of knowledge. The former they saw as providing transient and perishable data relating directly to decision-making, and the latter as generating permanent and enduring knowledge about education and training processes. Burgoyne and Singh relate evaluative feedback to a range of different decisions about training in the broad sense:

1. Intra-method decisions: about how particular methods are handled; for example, lectures may vary from straight delivery to lively debates.
2. Method decisions: for example, whether to employ a lecture, a case study, or a simulation in order to introduce the topic.
3. Programme decisions: about the design of a particular programme, whether it should be longer or shorter, more or less structured, taught by insiders or by visiting speakers.
4. Strategy decisions: about the optimum use of resources, and about the way the training institution might be organized.


5. Policy decisions: about the overall provision of funding and resources, and the role of the institution as a whole; whether, for example, a management training college should see itself as an agent for change or as something that oils the wheels of change already taking place. (Figure 3)

Figure 3 relates the decision levels (from intra-method and programme decisions up to policy) to the course or event and its behavioural and organisational outcomes, with evaluation feeding back to the decision levels and evaluation research contributing to the body of knowledge about training and education.

Figure 3. Evaluation of outcomes and link to decision making

The systems model has been widely accepted, especially in the UK, but there are a number of problems and limitations that should be understood. According to Easterby-Smith (1994), the main limitation is that feedback (i.e. data provided from evaluation of what has happened in the past) can only contribute marginally to decisions about what should happen in the future. Because of the legacy of the past training event, feedback can highlight incremental improvements based on the previous design, but cannot show when radical change is needed. The emphasis on outcomes provides a good and logical basis for evaluation, but it represents a mechanistic view of learning: in the extreme, it suggests that learning consists of facts and knowledge being placed in people's heads, becoming internalised and then gradually incorporated into people's behavioural responses.


The emphasis on starting with objectives brings us to a classic critique of the systems approach: just whose objectives are they? MacDonald-Ross (1973) has questioned whether there is any particular value in specifying formal objectives at all, since, among other things, this might place undue restrictions on the learning that could be achieved from a particular educational or training experience.

GOAL-FREE EVALUATION

This leads to the next major evaluation school: goal-free evaluation. This starts from the assumption that the evaluator should avoid consideration of formal objectives in carrying out his or her work. Scriven (1972) proposed the radical view that the evaluator should take no notice of the formal objectives of a programme when carrying out an investigation of it. Goal-free evaluation leans toward the constructivist method: the evaluator should avoid discussing or even seeing the published objectives of the programme, and should discover from participants what happened and what was learned (Scriven, 1972).

INTERVENTIONALIST EVALUATION

According to Easterby-Smith (1994), this approach includes responsive evaluation and utilization-focused evaluation. He cites Stake (1980), who contrasts responsive evaluation with the preordinate approach of experimental research. The latter requires the design to be clearly specified and determined before evaluation begins; it makes use of objective measures, evaluates these against criteria determined by programme staff, and produces reports in the form of research-type reports. In contrast, responsive evaluation is concerned with programme activities rather than intentions, and takes account of different value perspectives. In addition, Stake stresses the importance of attempting to respond to the audience's requirements for information (in contrast with the tendency of some goal-free evaluators to distance themselves from some of the principal stakeholders). Stake positions this method more pragmatically by recognising that different styles of evaluation will serve different purposes, and that preordinate evaluations may be preferable to responsive evaluations under certain circumstances.


Guba and Lincoln (1989) take this method further with what they call responsive constructivist evaluation. They recommend starting with the identification of stakeholders and their concerns, and arranging for these concerns to be exchanged and debated before further data is collected. Patton (1978) stresses the importance of identifying the motives of key decision makers before deciding what kind of information needs to be collected. He recognises that some stakeholders have more influence than others (a state of affairs Guba and Lincoln argue should not be the case), but goes further by concentrating on the uses to which the subsequent information might be put. Interventionalist evaluation, however, runs the danger of being so flexible in its approach, because it considers the views of all stakeholders and the uses of the subsequent information, that its form adapts and changes to every passing whim and circumstance, thereby producing results that are weak and inconclusive. It may also be that the evaluator becomes too close to the programme and its stakeholders (something goal-free evaluators set out to avoid), leading to a reduction in impartiality and credibility.

EXPERIENCE WITH TRAINING PROGRAMME EVALUATION MODELS

Many researchers measuring the effects of training have looked at one or more of the outcomes identified by Kirkpatrick (1959/60, 1994): reactions, learning, behaviour, and results. Evaluation of trainee reactions to learning has yielded mixed results (Russ-Eft and Preskill, 2001), yet it is often the only formal evaluation of a training programme and is relied upon to assess learning and behaviour change. It may seem reasonable to assume that enjoyment is a precursor to learning, and that if trainees enjoy the training they are likely to learn, but such an assumption is not supported by a meta-analytic study combining the results of several other studies (Alliger et al., 1997).
Kirkpatrick's second level, learning, is the measurement most commonly used after trainee reactions to assess the impact of training.
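The four Kirkpatrick levels can be treated as a simple checklist for auditing how far an evaluation actually goes. The sketch below is only an illustration (the evaluation record and its scores are hypothetical); it shows the common pattern, noted above, of measuring only the first two levels:

```python
# Kirkpatrick's four outcome levels, in order of increasing depth.
KIRKPATRICK_LEVELS = ("reactions", "learning", "behaviour", "results")

def coverage(evaluation: dict) -> list:
    """Return which of Kirkpatrick's levels an evaluation actually measured."""
    return [level for level in KIRKPATRICK_LEVELS if level in evaluation]

# Hypothetical programme that, like many, stops at reaction sheets and a test:
evaluation = {"reactions": 4.2, "learning": 0.78}
print(coverage(evaluation))  # ['reactions', 'learning']
```

An evaluation that also tracked on-the-job behaviour and business results would cover all four entries of the tuple.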


Studies investigating the relationship between learning and work behaviour have also shown mixed results and offer little concrete evidence to support the notion that increased learning from a training programme relates to organisational performance (Collins, in press, cited in Russ-Eft and Preskill, 2001). Training transfer is defined as applying the knowledge, skills, and attitudes acquired during training to the work setting. Most organisations are genuinely interested in this aspect of the effectiveness of training events and programmes, yet the paucity of research dedicated to transfer of training contradicts the importance of the issue. Some research in this area has focussed on comparing alternative conditions for training transfer. A typical design compares groups who receive different training methods and/or a control group that receives no training. Such studies do not all use the same research design and the results are inconsistent. However, the research does indicate that some form of post-training transfer strategy facilitates training transfer (Russ-Eft and Preskill, 2001). Evaluating the results of training in terms of business results, financial results and return on investment (ROI) is much discussed in the popular literature, most of it offering anecdotal evidence or conjecture about the necessity of evaluating training's return on investment and the methods trainers might use to implement such an evaluation. Solid research on the topic is not, however, so voluminous. Mosier (1990) proposes a number of capital budgeting techniques that could be applied to evaluating training, whilst recognising that there are common reasons why such financial analyses are rarely conducted or reported:

- It is difficult to quantify or establish a monetary value for many aspects of training.
- No usable cost-benefit tool is readily available.
- The time lag between training and results is problematic.
- Training managers and trainers are not familiar with financial models.

Little progress appears to have been made in this area since Mosier wrote.
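The ROI arithmetic behind such capital budgeting discussions is itself simple; the difficulty Mosier identifies lies in quantifying the inputs. A minimal sketch, with purely hypothetical figures:

```python
def training_roi(monetary_benefits: float, total_costs: float) -> float:
    """Return on investment as a percentage: net benefit divided by cost.
    The hard part in practice is establishing monetary_benefits at all."""
    return (monetary_benefits - total_costs) / total_costs * 100

# E.g. a programme costing 100,000 that is credited with 150,000 in benefits:
print(training_roi(150_000, 100_000))  # 50.0 (percent)
```

The formula is trivial; the first of Mosier's four obstacles (quantifying benefits) is why such a number is rarely reported with any confidence.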

Chapter 2: Problem Identification

Problem Statement
Technology and global competition, the two driving forces of economic change in today's business world, haven't bypassed the once staid world of training and development. Companies seeking to gain advantage through better trained and better developed workers are employing everything from e-learning delivery systems to multicultural and polyglot training solutions. They are hiring chief learning officers to deal with the increasingly complex field. And they are demanding better accounting of results. Yet despite the focus on efficiency and cost control, overall spending on training and development continues to rise across sectors. Companies clearly subscribe to the belief that smarter, better trained workers increase the chances of success.

The objective of this research is to identify the factors that lead to the choice of a particular T&D methodology. At the very least, the research should corroborate the existing assumptions regarding the influencing factors. It should be in a position to verify that the steps various companies are taking to stimulate training and development are in the right direction and will eventually lead to satisfied employees. The purposes of this study are:

- To describe selected successful models of training and development as they may apply to firms in various sectors.
- To describe the various methods of needs assessment for training and development programs being used in various companies.
- To compare the evaluation of training and development procedures being followed in various organizations.

The results of this study will be used to formulate recommendations on developing a comprehensive plan (model) for the creation of effective employee training and development. The study will be conducted through a comprehensive review and critique of the existing literature on training and development models.

A little information goes a long way
We call it research, but it's often called measurement, staff surveys, polling, focus groups, or feedback. It's a simple way to find out some amazing things. I am still reeling from the amount of information that I took from the experience surveys I conducted. I didn't ask about satisfaction, engagement or morale; I don't think any of those correlate with business results in a satisfactory way. Instead I asked whether people can see how their work connects to business goals, whether their bosses regularly listen to what they have to say, whether they get the required support for development, and whether they feel they can control their job.

Metrics Used In Research

Metrics is the numerical part of research; put more simply, it is a piece of research jargon for numbers and the related analysis of them. Clearly, it is essential to be able to measure. In my project, I collected soft data. An example of soft data would be employee views of the organization. It is soft because the employees' views are determined by many factors that are varied and even outside the strict concern of the organization, for example how the person feels about his or her own health. Experience surveys were conducted with the HR department to seek information regarding the T&D processes and methods adopted in the organization.
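As an illustration of how such soft survey data might be summarized numerically, the sketch below averages ratings for items like those described above; the four items echo the survey questions, and every response value is hypothetical (a 1-5 scale is assumed):

```python
from statistics import mean

# Hypothetical 1-5 ratings from four respondents, per survey item.
responses = {
    "work connects to business goals": [4, 5, 3, 4],
    "boss regularly listens":          [3, 4, 4, 2],
    "support for development":         [2, 3, 3, 3],
    "control over own job":            [4, 4, 5, 4],
}

# Mean score per item, rounded for reporting.
item_means = {item: round(mean(scores), 2) for item, scores in responses.items()}
for item, score in item_means.items():
    print(f"{item}: {score}")
```

Even a summary this crude turns soft opinions into comparable numbers, which is all "metrics" means here; it says nothing, of course, about why a given item scores low.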



Research Methodology
Unquestionably, data analysis is the most complex and mysterious of all the phases of a qualitative project, and the one that receives the least thoughtful discussion in the literature. Exploratory research was undertaken to gain insight into employee behavior. At the same time, experience surveys were conducted with experts in the industry to obtain data regarding the various training methods being adopted in companies. Several depth interviews were conducted with the target experts. Depth interviews, a qualitative market research method conducted among "a priori" target segments, were my preferred first step to explore, discover and assess the business market. The point of in-depth interviews is to allow for a semi-structured discussion in which the respondents are treated with great respect as "experts" in whatever field or situation they happen to represent. In-depth interviews are often used in place of focus groups when bringing together a group of people with similar characteristics to stimulate a conversation on a specific topic is unnecessary.


Chapter 4: Discussions
Training & Development at INDIAN OIL
INDIANOIL is the largest industrial corporation in India in terms of sales turnover. As per the Fortune 500 ranking, it is the 153rd-largest industrial corporation in the world. The training departments of the Head Office and the Regional Offices look after the formal training requirements of about 17,000 employees in the Marketing Division of the corporation. The Head Office training center looks after the management training of over 2,000 managers in the Marketing Division. In addition, it receives nominations from the Refinery and Pipelines Divisions, the R&D Center, the Assam Oil Division, Indian Oil Blending Limited (IOBL), the Oil Co-ordination Committee (OCC), PCRA, PII and defence personnel. It also offers managers of organizations from developing countries, with whom it has co-operation agreements, an opportunity to participate in its training programmes.

The Corporation has not only established its leadership in this field but has also gained unquestionable credibility at the international level. The credit for this achievement goes to its most potent and vital force: the dedicated efforts of Indian Oil's employees. Indian Oil aims at world-class excellence, which requires top-notch managerial training and development of its human resources. That is why INDIANOIL has given the highest importance to training right from its inception, which has enabled it to maintain and perpetuate its profitable and efficient existence.

Evolution of Training in INDIAN OIL

Prior to 1964, the individual companies had their own training schemes, patterned on the approach of the erstwhile oil companies. The early focus was on supervisory development. Since the organization was designed along divisional lines, the training activities were also carried out almost independently, division-wise. Training in the Marketing Division started with programmes organized for defence personnel on the handling of petroleum products, called the Petrol, Oil and Lubricants (POL) courses.

Even as early as 1963-64, technical programmes on fuel engineering were conducted for the sales force with the help of MOBIL, USA. In 1965, the Administrative Staff College (ASC) was set up in Bombay to conduct functional programmes in areas such as personnel management and productivity for INDIANOIL personnel, in addition to the POL programmes for defence personnel. The first few programmes were adapted from those conducted by Burmah Shell, initially with guest faculty and later with the help of internal faculty. The emphasis of the organization's training programmes was largely on the handling of petroleum products and Liquid Petroleum Gas (LPG), with a few behavioural programmes. A few years later, in 1967-68, as more programmes were added, training was taken up at the regional level with four regional training centers, each headed by a Branch Training Officer. Training activities in the Refineries Division started in the early 1960s in the Guwahati Refinery. In the initial years, United Nations experts helped in organizing supervisory development programmes. Full-fledged training centers were set up in all the refineries (Guwahati, Barauni, Gujarat, Haldia, Digboi Assam Oil Division (AOD) and Mathura), with major emphasis on technical and skill-development programmes in the initial years of their existence. In the 1970s, with the help of the Marketing Division, general management and behavioural programmes were organized in the refinery training centers. For increased understanding and coordination between the two divisions, a series of interface programmes has been organized since 1975-76. The Indian Oil Management Academy (IMA) started functioning in August 1979 to meet the emerging training needs at selected managerial levels; it was to conduct specific functional and developmental programmes for officers of the R&P Division. The AOD itself has a long history of emphasis on training, even prior to its nationalization and integration with INDIANOIL; various technical and skill-development programmes were conducted by the AOD for officers and staff at all levels.


The refinery at Digboi being peculiar in terms of its technology and operations, specific training programmes in operations and maintenance were organized there. The AOD had a strong and systematic approach to training, with major emphasis on planned on-the-job training. In the mid-1970s, the performance appraisal forms were amended in INDIAN OIL to introduce a training component to help in the identification of training needs. In the early 1980s, INDIAN OIL reviewed its corporate plans, as a result of which the need was felt to give a different orientation to the training activity, keeping the organization's development in view. It was also decided to have an organization development (OD) intervention by an outside consultant, with a view to developing a proper linkage between the corporate plans and human resources development. Accordingly, Professor M. Athreya was invited as an OD interventionist. Based on the suggestions made by the consultant, emphasis was given to human resources development and it became a subsystem of the Personnel function. Consequently, there were certain organizational changes in the Personnel function, which was regrouped and reorganized into three subsystems: Personnel and Administration, Training, and HRD. The HRD group was specifically assigned the task of integrating the identified corporate mission with department and individual goals, which included appropriate career planning and role analysis. Consequently, further changes were introduced in the Annual Performance Appraisal (APA) system: the APA form was redesigned and training need identification was given more emphasis. In 1990, the personnel department was redesignated as Human Resource Management to reflect the greater emphasis on HRD, though the basic set-up continued as before. Training also got a fillip with the introduction of general management and leadership courses and interface programmes. At the same time, there was considerable technical upgradation, which necessitated greater emphasis on technical training. As a result of the HRD outlook in the organization, eight disciplines were identified in the Marketing Division, namely Marketing, LPG, Operations, Technical Services, Personnel, Finance, Sales and Aviation, and appropriate career path models were drawn.

Interdisciplinary programmes were introduced to expose officers to functions other than their parent discipline. The concept of 'staple' programmes, to which all officers were to be exposed, was simultaneously introduced. Coinciding with this, and in keeping with the corporate objective, a closer interface was envisaged between the divisions. In 1987, the Tata Management Training Centre (TMTC) was invited to study the training activities in INDIAN OIL. The TMTC offered suggestions to improve the training infrastructure and to make better use of manpower for training within the organization. In Indian Oil Corporation today, training committees at the corporate office level, Head Office (Division) level and Regional/unit level play an active role in formulating training plans, reviewing ongoing courses, etc. In the Marketing Division, the training activity is organized with set-ups at selected locations, at each regional headquarters and at the head office. Workmen training is organized at the selected locations, while officer training and some workmen training are organized at regional headquarters, apart from the training programmes for direct recruits and promotee officers and for middle and senior level officers at the HO. In the Refineries and Pipelines Divisions, each refinery has a comprehensive training set-up taking care of workmen training, officer training for junior and mid-level officers, and management training programmes for direct recruits, while the Indian Oil Management Academy (IMA) organizes programmes for middle and senior level officers in addition to the Junior Executive Development programmes for promotee officers. The Pipelines training activities are also organized on a three-tier basis, with the unit/location, regional office and head office handling workmen training; middle and junior level officer training programmes; and external and middle and senior level officer training, respectively.
The internal training programme at the head office level for mid/senior level officers is taken up by the IMA.


Over the past few years there has been a substantial increase in the number of employees exposed to training in the various divisions of INDIAN OIL. In addition to the efforts of the training department, certain training programmes are conducted by other departments, such as the Fire & Safety Department and Inspection. Over the years INDIAN OIL has invested substantial financial resources in training.

Training Mission

- To facilitate the process of integrating the personal ambitions and aspirations of employees with the corporate objectives through training interventions.
- To continuously scan the environment, review training programmes and design need-based inputs to ensure achievement of a high level of excellence in customer satisfaction.
- To equip the workforce with skills to make Indian Oil Corporation a global player.
- To assist and guide the employees in their pursuit of knowledge and self-actualization, expounding the belief that there are no limits to human potential and growth.
- To facilitate the induction of new employees into Indian Oil Corporation through suitable orientation programmes.
- To enable, through training, the Defence Services to efficiently handle the storage, distribution and consumption of petroleum products, which shall also play a vital role in building customer relations over the long term.

TRAINING PROCESS

ROLE OF TRAINING COMMITTEES

The main role of the Training Committees is to oversee the training functions and the training needs of the organization, keeping environmental changes in view. Based on the needs identified, training programmes focused on specific areas of interest are approved by the committee for implementation. The training committee also reviews the training activities on a half-yearly basis.

27 | P a g e

HO TRAINING COMMITTEE
Chairman - Director (M)
Members - Executive Director, and General Managers I/C
Convener - DGM (T&D)

In addition, one Regional ED is invited to the HO Training Committee.

REGIONAL TRAINING COMMITTEE
Chairman - ED of the Regional Office
Members - General Managers / Dy. General Managers (HOD) of the Region
Convener - Training In-charge of the Region (Senior Manager / Manager (T&D))

In Indian Oil Corporation, the Training Department has a training calendar, which is sent to all departments. Basically, two types of training programmes are conducted by the training department:

o Functional programmes
o Developmental programmes

The training department makes use of in-house personnel for functional programmes and employs people from outside for developmental programmes. The current system provides for consultation with the concerned officer by his superior to ascertain training needs. Similarly, the superior, in consultation with the unit level coordinators and the Regional Training Heads, identifies the training needs of the workmen, and new need-based programmes are mounted. Each employee's training needs are identified through the system of Annual Performance Appraisal (APA). Nomination for regional courses is as per the eligibility criteria laid down for each programme.

Once the nominations are identified and the course announcement made, withdrawal of a nomination is normally not permitted. Participants carry out an overall course evaluation at the end of each programme, and the courses are modified depending upon the feedback received. Participants attending external training are required to make a formal presentation regarding the training received, along with an action plan for implementation. This ensures transfer of knowledge to on-the-job performance.

Role of training

Training has performed a very important role in helping the Corporation reach the commanding heights of performance over the years. It has played a pivotal role in helping the organization adapt itself to change, the most important capability called for in the current changing environment, and in assisting employees in their pursuit of knowledge and self-actualization.

Training Linkage to Corporate / Divisional Objectives

The training policies have been developed for four main reasons:

- To define the relationship between the organization's objectives and its commitment to the training function.
- To provide operational guidelines for management, for example to state management's responsibilities for planning and implementing training, and in particular to ensure that training resources are allocated to priority and statutory requirements.
- To provide information for employees, for example to stress the performance standards expected, to indicate the organization's commitment to training and development, and to inform employees of the opportunities for training and development (including willingness to grant time off and/or payment of fees for external courses).
- To enhance public relations, for example to help recruit high-caliber recruits, to reassure clients and the public at large about the quality of products or services, or to project an image as a caring and progressive employer by taking part in government-sponsored social training programmes.

Participants' expectations from a training programme (inputs from employees):

- Improving and solving specific problems confronted in job functions.
- As a means to improve promotional prospects.
- For professional growth in the organization.
- To develop understanding of specific subjects covered in the training programme.
- As a change from the routine job schedule.
- To get acquainted with new technology.
- For personal growth.
- To gain new and pertinent knowledge.
- To acquire specific approaches, skills or techniques that can be applied on the job.
- To help confirm some earlier ideas.
- To become acquainted with problems, ideas and solutions from other departments.
- To look at oneself and one's job objectively.

The training process of INDIAN OIL involves several steps:

1. Defining organizational objectives and strategies
2. Assessment of the training needs
3. Establishing training goals
4. Devising the training programme
5. Implementation of the programme
6. Evaluation of the results

The training procedure in INDIAN OIL covers:

1. Identification of training needs
2. Training nominations
3. Training facilities
4. Training techniques
5. Training faculty
6. Preparation of the trainee
7. Evaluation of the effectiveness of training in Indian Oil
8. Follow-up


Identification of training needs

In Indian Oil, training needs are identified by the training committees at HO/Regional level, keeping in view the changing environment and the objectives and mission of the organization. Based on this, new training needs also get identified year after year, and traditional programmes are redesigned so as to be effectively utilized. Each officer's training needs are identified in the Training Needs Exercise, which is carried out every two years. Prior to identifying the needs of each officer, the current system provides for consultation with the concerned officer by his superior to ascertain training needs. The training needs of officers are recorded in the training need form, which constitutes the basis on which nominations are accepted by the training centers for the various in-house training programmes. Nominations to external training programmes are encouraged only for programmes for which in-house equivalents are not available, and where there is a self/organizational need. The training needs of workmen in the employees' category are identified by the superiors in consultation with the unit level coordinators and the Regional Training Heads, and new need-based in-house programmes are mounted. Employees also write in their Annual Performance Appraisal (APA) forms about the training they would like to undergo.

Training Nominations

As far as possible, officers are given the opportunity to attend HO training programmes of their choice once every two years. Nominations for regional courses are as per the eligibility criteria laid down for each programme. The main thrust of training activities at the regional level is improving functional competency. Nominations for the workmen category are finalized at the regional training centers, keeping in view the specific needs of each employee segment.


Withdrawal of nomination

Once the nominations are identified by the training department and the course announcement is made, withdrawal of a nomination is normally not permitted. However, in extreme and unavoidable circumstances it is permitted, subject to prior approval of the competent authority.

Training Facilities

The non-residential training programmes are conducted in the training halls located at the HO/Regional headquarters. The training halls have been carefully designed with acoustic requirements in view, and are equipped with the latest and most sophisticated audio-visual equipment to ensure training effectiveness. Management training courses and supervisory development courses are conducted in some reputed nominated hotels. The training centers have also acquired the latest electronic gadgets, such as Liquid Crystal Display projectors, Videorama, electronic boards (Panaboard) and direct projectors, to improve training efficiency. The training center at HO has a well-equipped reference library, with an excellent collection of books and CDs on various aspects of management, information technology, the petroleum industry and energy management, besides general disciplines.

Training Techniques

Depending on the course objectives, the training methodology mix is carefully designed. In the training courses the management provides ample opportunity for the employees to take an active part in the learning process. In the management training courses, emphasis is placed on the use of state-of-the-art training technologies such as simulation exercises, computer-aided management games, live video recording, structured experiential instruments and the case study method. Syndicate project studies are also given in most of the courses to study live organizational problems and give recommendations, which are then duly considered by the management for implementation. The participants of various training programmes are also required to prepare reports and make formal presentations, which aids the process of learning. For workmen training, adequate emphasis is given to hands-on training.


Training Faculty

The trainer has to be prepared for the job, for he is the key figure in the entire programme. The strength of training in INDIANOIL lies in the faculty being generated from within; in-house faculty imparts a lot of credibility to the training efforts. The training faculty members are selected on the basis of their positive attitude towards the training assignment and their ability to communicate effectively with others. Each faculty member is required to have thorough knowledge of his subject. The training faculty is responsible for ensuring that the formal training activities are in line with the organizational requirements. Functional managers from various departments are invited as part-time faculty members to share knowledge pertaining to their disciplines; this also helps the functional managers remain up-to-date in their specialized functions. On a very selective basis, guest speakers are invited to the training programmes in areas where sufficient expertise is not available within the organization; they are selected from among CEOs of top corporate houses and professional institutes such as the IIMs and IITs. The training center is also well equipped with professionally qualified trainers who are competent to conduct the management development programmes, both at HO and Regional level.

Preparation of the Trainee (participants)

This step consists of:

1. Putting the learner (trainee) at ease.
2. Stating the importance and the ingredients of the job and its relationship to the work flow.
3. Explaining why he is being taught.
4. Creating interest, encouraging questions, and finding out what the learner already knows about his job.
5. Explaining the 'why' of the whole job and relating it to some other job the worker already knows.
6. Placing the learner as close to his normal working position as possible.
7. Familiarizing him with equipment, materials, tools and trade terms.

Evaluating the Effectiveness of Training in INDIAN OIL

For management training courses, session-wise evaluation is done for each session. The overall course evaluation is done by the participants at the end of each programme, and the courses are modified depending upon the feedback received. At periodic intervals, evaluation of training effectiveness is conducted by reputed outside professional agencies such as the Tata Management Training Centre and the Administrative Staff College of India. Participants attending external training are required to make a formal presentation regarding the training received, along with an action plan for implementation. This ensures transfer of knowledge to on-the-job performance. The training function also enjoys adequate support from top management to monitor the various programmes and upgrade them from time to time. Post-course evaluation and monitoring of functional courses is done with the support of the respective functional groups. These interventions ensure a high degree of innovation in the training efforts.

Follow-up

This step is undertaken with a view to testing the effectiveness of the training efforts. It consists of:

- Putting the trainee on his own.
- Checking frequently to be sure that he/she has followed instructions.
- Tapering off extra supervision and close follow-up until he is qualified to work with normal supervision.


Training & Development at JK Tyres ltd.

Training is one of the key functional areas in JK Tyre, as it aims at developing the human resource of the organization, on whom the success or failure of the company depends. JK Tyre has a large employee base, and every employee has to be given proper training so that they are productive and efficient in their respective field. The HR department takes up the responsibility of giving training to employees as per the needs identified by the employees and the organization. JK Tyre was established in 1977, and since then the company has been growing in numbers. To transform itself from an old economy company, JK Tyre had to undergo sweeping changes to acquire its new face. It necessitated a paradigm shift in focus from national leadership to global leadership, which entailed a new sensitivity to business and a new set of skills among its employees to make them market-savvy. It had to prepare its people to handle ambiguity and uncertainty, and train them to be more competent, responsible and aggressive in order to beat the competition and meet present market demands. To make employees adjust to the new environment, the attitude and work style of the employees had to be changed. Coping with change also requires specific skills in terms of retaining strategic control and delivering customer-oriented products and services. With competitors setting new standards in business, business houses now talk in terms of world standards, competing with the best in the new global markets. It was the HR department that was required to play a big role in that scenario. It came up with many programmes in technical and behavioral areas which changed the mindset of the employees and made them more productive and efficient. That is how JK Tyre was able to meander its way to success even in the face of strong competition from rival companies.


Even today training is a thrust area in JK Tyre and all its employees are required to undergo training. The HR department conducts various programmes in order to make employees competent to take quick and sound decisions, handle new technology, tap new markets, and be effective at marketing and customer handling. Since JK Tyre has spread its wings into foreign markets, it is also necessary to train employees to function in cross-cultural scenarios. To maintain this lead over the years the company has undergone numerous strategic interventions. Apart from forays into related fields, joint ventures and diversification, the organization has continuously strived to improve its performance in all spheres, such as restructuring of the marketing division, enterprise resource planning, customer orientation with cost economics, and technology upgradation; in all these endeavours it was excellent training that helped the company achieve great results.

MODES OF OFFERING TRAINING PROGRAMS AT JK TYRE

IN-HOUSE TRAINING PROGRAMMES
Training activities in JK Tyre are continuously developed and directed so as to meet substantially all the specific training needs of the employees. In such programmes the company decides everything, right from the faculty, participants and venue; everything is organized by the company itself. The training can be provided inside the organization's premises as well as outside.

EXTERNAL PROGRAMMES
For programmes of a specialized nature, or for training programmes which require simulation techniques not available within JK Tyre, external faculty assistance is fully sought, and at times entire programmes are hired out. These are conducted outside JK Tyre premises, and employees are required to attend them. In such programmes the consultancy that is hired is the decision maker, but the company decides which employees to send by asking the superior to nominate employees based on the PMS.


TRAINING NEEDS IDENTIFICATION AT JK TYRE
Before providing the employees of the organization any kind of training, it is essential to identify the need for training. The benefits that the employees get from training are:
Increased skills and knowledge
Higher productivity
Opportunity for promotion
Increased mobility
High morale
Fewer accidents

The benefits that the organization gets from training are as follows:
Economical supervision
Better climate
Recruitment through promotion
Better utilization of machines and materials

Thus identifying training needs is imperative for both the organization and the employees. Before employees are given any kind of training programme, the needs are identified first. At JK Tyre too, the needs are identified by the appraiser for the appraisee, and training is then given to the employee based on those needs. Whenever a training programme is conducted, the superior is asked to nominate the employees he wants to attend it. The training needs are identified at mainly two levels:
At the individual level, through the PMS (Performance Management System).
At the organizational level.


Apart from these, the needs are also identified from:
Talent Management System
Succession Planning

INDIVIDUAL LEVEL
At the individual level the needs are identified with the help of the Performance Management System (PMS); this is the only system used to identify individual needs. The Performance Management System is a system in which the performance of all employees is measured, as well as developed, on a continuous basis. It is primarily used as a basis to identify the training needs of the individual. The objectives of the Performance Management System at JK Tyre are as follows:
To manage performance and align individual objectives to business objectives
To help the appraisee give his/her best performance by focusing on major value-adding results
To promote objectivity in performance assessment and rewards
To help him/her become aware of his/her strengths and build on these
To bring some areas of improvement to his/her attention

ORGANIZATIONAL LEVEL
At this level the needs are identified based on organizational requirements. A meeting is held at JK Tyre at the senior level, in which the senior people discuss the training that can be given to employees depending on the upcoming needs of the organization, and the training programmes that meet those requirements are identified accordingly. Apart from this, needs are identified through the Talent Management System and Succession Planning. Here too, meetings are held in which the head of the department selects the people who he/she thinks are talented (based on their performance and potential) and who could be his/her successors.


In such meetings the head expresses who, in his/her view, could be his/her successor; training needs are identified accordingly and addressed for the employees chosen in the meeting, in order to bring them up to that particular level. Owing to this, JK Tyre is meticulous about choosing its training officers. Training officers bear a big responsibility on their shoulders: it is their job to change the mindset of the employees, which is a very tough task. They are expected to be competent enough to understand the organizational needs as well as the future needs of the market, and to shape the employees accordingly so that they can function as per the needs of both the market and the organization. Therefore only crème de la crème officers, with good communication skills and the ability to persuade employees to undergo training, are selected for this post.

JK Tyre mainly focuses on two kinds of training programmes, one technical and the other behavioural. The employees at JK Tyre are given training on a continuous basis. The training needs of an employee are identified through the PMS (Performance Management System), in which the superior writes down the kind of training that should be given to his subordinates. Whenever a training programme is conducted, be it internal or external, superiors are asked to nominate the employees they think need that particular kind of training, so employees receive the training their superiors send them for.

Training Evaluation
At JK Tyre the Kirkpatrick Model is used to measure the effectiveness of the training programmes conducted every year. After every training programme, questionnaires are distributed in order to gauge the reactions of the employees to the programme. The employees rate the parameters described in the questionnaires. From these ratings a group average is computed, which tells the organization how well the training programme was received by the trainees.
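The group-average calculation described above is simple arithmetic. A minimal sketch follows; the parameter names and rating scale are hypothetical illustrations, not JK Tyre's actual questionnaire:

```python
def group_average(ratings_by_parameter):
    """Average the trainees' ratings for each feedback parameter,
    then average across parameters for an overall programme score.

    ratings_by_parameter: dict mapping a questionnaire parameter
    (e.g. "course content") to a list of per-trainee ratings.
    """
    per_parameter = {
        param: sum(scores) / len(scores)
        for param, scores in ratings_by_parameter.items()
    }
    overall = sum(per_parameter.values()) / len(per_parameter)
    return per_parameter, overall

# Hypothetical ratings on a 1-5 scale from four trainees
feedback = {
    "course content": [4, 5, 3, 4],
    "faculty": [5, 5, 4, 4],
}
per_param, overall = group_average(feedback)  # overall is 4.25 here
```

A low per-parameter average would flag the specific aspect of the programme (content, faculty, venue) that needs modification, which matches how the feedback is said to drive course changes.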


After 90 days, i.e. three months after the training, questionnaires are distributed again to find out how much the employees have learnt and whether they have applied anything from the training in their day-to-day work. At the third level of the model, behaviour, the superior evaluates the performance of his subordinates through the Performance Management System: whether any change in behaviour has occurred, and whether, after training, the employee is applying the learnings of the training in his day-to-day work. Every year JK Tyre conducts an event known as the People Management Award, in which results are drawn up showing how beneficial the training programmes proved to the employees. The results pertain not to any one training programme but to all the training programmes as a whole, and are based on the percentage implementation of the training plan as per the Annual Calendar. Marks are awarded accordingly.
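The "percentage implementation of training plan" metric above is a straightforward ratio of programmes conducted to programmes planned in the Annual Calendar. A sketch, with purely illustrative figures:

```python
def plan_implementation_pct(programmes_planned, programmes_conducted):
    """Percentage of the Annual Calendar's training plan actually delivered."""
    if programmes_planned == 0:
        return 0.0
    return 100.0 * programmes_conducted / programmes_planned

# e.g. 40 programmes planned in the Annual Calendar, 34 conducted
pct = plan_implementation_pct(40, 34)  # 85.0
```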


Training & Development at Pepsico

PepsiCo's commitment to Talent Sustainability means they are continually working to develop and retain exceptional people. Their people are their greatest strength: without a broad base of talented people, the company cannot continue to deliver exceptional results. Its goals are to attract, hire, develop and retain the most talented people. By valuing their employees, supporting their ability to work effectively together and providing them with the tools they need to succeed, they are ensuring that PepsiCo is the kind of company where talented people of all backgrounds want to work. It will continue to foster an inclusive environment by increasing female and minority representation in management ranks, engaging employees in health and wellness programs and creating rewarding job opportunities for people with disabilities. It will leverage the reach of its employee base and continue to encourage employees around the world to participate in community service and inclusion activities, which are designed not only to positively impact the communities they serve, but to drive employees to be leaders in social responsibility.

PHILOSOPHY
The basic philosophy is to make training an effective instrument in transforming PepsiCo into a learning organisation.

OBJECTIVES
Make learning one of the fundamental values of the Company
Ensure value addition through training to the overall business process
Institutionalise learning opportunities that supplement work experience
Integrate organisational and individual developmental needs
Enable employees to keep abreast of the latest knowledge and skills, and to undertake current and future responsibilities in a more effective manner

Provide linkages between the different functionaries of the training activity
Provide linkages of the training activity with the overall Human Resource function

Training Year
A Training Year shall mean a period of one year commencing from 1st April till 31st March of the subsequent year. There are different training interventions in a particular training year:

In-house Training Programme: A training programme designed, developed and conducted within the Company, exclusively for the regular employees of the Company, with or without the assistance of external agency(ies).

External Training Programme: A training programme designed, developed and conducted within India, by an outside agency, not exclusively for the employees of the Company, and to which one or more employees of the Company may be nominated.

Planned Intervention: A grade/level/category-wise in-house training programme, normally based on a template course design, and conducted to improve the competency base of employees as felt necessary by the organisation.

Need-based Programme: A training programme designed, developed and conducted on the basis of the developmental needs felt and identified for the employees concerned in the Training Needs Form.

Specified Intervention: An external training programme or an in-house training programme, other than a Planned Intervention or a Need-based Programme, conducted to improve certain specified competencies as felt necessary by the organisation.

TRAINING TARGET
It shall be the endeavour of the Company to provide seven mandays of training in a training year to every employee.


TRAINING NEED ANALYSIS (TNA)
The objectives of Training Need Analysis are to:
Systematically identify the developmental needs of employees
Integrate the so-identified individual needs with organisational needs
Enhance the relevance and acceptance of training programmes

Employees identify their training needs once in two years, as per the Training Plan implemented in 1998. Each employee identifies his/her training needs in a maximum of four areas, in consultation with his reporting officer. The training needs expressed should be related to the employee's present responsibilities and his likely areas of future assignment. Training needs are classified as Essential or Desirable, along two time-frames: short-term (for immediate job performance) and long-term (for future job performance, in the next two years or so). The identified needs are prioritized in the following manner and addressed accordingly:

Priority A: Essential, short-term
Priority B: Desirable, short-term
Priority C: Essential, long-term
Priority D: Desirable, long-term
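The four-way priority scheme above maps directly onto two yes/no attributes of a training need. A sketch of that mapping (illustrative only; the company's actual Training Needs Form is a paper/electronic form, not code):

```python
def training_need_priority(essential: bool, short_term: bool) -> str:
    """Classify a training need into the Priority A-D scheme:
    A = essential short-term,  B = desirable short-term,
    C = essential long-term,   D = desirable long-term."""
    if essential and short_term:
        return "A"
    if short_term:          # desirable, short-term
        return "B"
    if essential:           # essential, long-term
        return "C"
    return "D"              # desirable, long-term
```

Sorting the identified needs by this priority letter gives the order in which they would be addressed in the Training Calendar.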

Training needs identification in the case of executives is done by the executive concerned, in consultation with his/her Reporting Officer, in the Training Needs Form. In the case of non-executives, it is done by their Reporting Officers in the Training Needs Form.


The Departmental Training Co-ordinator shall trigger the TNA exercise from 1st September every second year, with the distribution of Training Need Forms. He/she would consolidate the filled-up Training Need Forms of executives and non-executives of his/her department and submit them to the respective Training Centre by 30th October.

In the case of projects/stations/units, based on such Training Need Forms, a department-wise analysis would be done by the Training Centre of the unit and discussed in the Site Management Committee/Heads of Department for evolving the yearly Training Calendar for the next two years, by 15th November. With a view to integrating the process of evolving the Training Calendar and utilising training resources optimally, the finalisation of Training Calendars would be done by the Heads of Training in association with the Head of Personnel of the respective Regions by 30th November.

In the case of advanced training needs where it is not possible for the Training Centres to design and conduct programmes, these should be forwarded to PMI for designing and conducting Company-level programmes and for incorporation in its Training Calendar. For this purpose, the Heads of Personnel of the Regions and PMI shall meet before 15th December to share the Training Calendars of the projects/stations of the respective Region and to provide inputs to the Training Calendar of PMI.

Where training needs are expressed by only a few employees and it is not viable to design and conduct programmes at the project/station level, the Heads of Personnel/HR of each Region would explore the possibility of conducting the programmes at the Region level, either at the Regional Headquarters or at any of the projects/stations of the Region, so that the training needs are not left unfulfilled for want of sufficient numbers of employees.
In case of Company-level need-based programmes, the Training Need Analysis forwarded by individual projects/stations and Corporate Centre would be consolidated by PMI and the yearly training calendar for the next two years would be prepared accordingly, by end February.


In the case of the Corporate Centre, the Training Need Forms and department-wise analysis would be consolidated by PMI, and the yearly supplementary training calendar for the next two years would be prepared accordingly, after discussion with and acceptance by the EDs/GMs of the respective departments, by 15th December.

TRAINING CALENDAR
The Heads of Training from the projects/stations would meet at the Regional Headquarters during the first week of January to share training calendars. They would also provide inputs to RHQ/PMI regarding programmes to be assigned to RHQ/PMI, drawn from the training needs

identified by the employees of their respective projects. Each Training Centre/PMI shall bring out, by 15th February every year, a Training Calendar specifying the schedules of the programmes, both planned interventions and need-based interventions, planned to be conducted by it during the following training year. Each Training Centre shall circulate, on a bi-monthly basis, a calendar of the programmes scheduled for the next two months to all HODs and other training agencies. The Training Calendars of the various Training Centres and of PMI would be made widely available to all departments/sections at all plants/offices. Copies of the Training Calendar would also be kept in the Central Library of the unit. Copies of the Training Calendar of one project/station would be circulated to the other Training Centres and PMI by 15th March, for need-based utilisation.

NOMINATION SYSTEM
The objectives of the nomination system are:
to ensure that employees are nominated to training in areas which are relevant to their duties or which have been identified as their developmental needs
to ensure that opportunities to attend training programmes are made available to all employees
to achieve the Training Target of an average of seven mandays of training in a training year for each employee
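The seven-manday target and the training-coverage objective above reduce to simple aggregates over attendance records. A minimal sketch, with hypothetical record fields (the company's actual records system is not described at this level of detail):

```python
def training_stats(records, headcount, target_mandays=7):
    """Summarise a training year against the seven-manday target.

    records: list of (employee_id, mandays) attendance entries;
             an employee may appear in several entries.
    Returns (average mandays per employee, % of employees who attended
    at least one programme, whether the average meets the target).
    """
    total_mandays = sum(days for _, days in records)
    trained = {emp for emp, _ in records}
    avg = total_mandays / headcount if headcount else 0.0
    coverage = 100.0 * len(trained) / headcount if headcount else 0.0
    return avg, coverage, avg >= target_mandays

# Hypothetical year: 4 employees, one of whom attended two programmes
records = [("E1", 5), ("E2", 10), ("E1", 4), ("E3", 9)]
avg, coverage, met = training_stats(records, headcount=4)
# avg = 7.0 mandays/employee, coverage = 75.0%, target met
```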



TRAINING EVALUATION
The objective of training evaluation is to enhance value addition through training programmes, by building on strengths and removing shortcomings, if any, and to measure the impact of training programmes on job behaviour.

Training Evaluation would be done at three levels:
Pre-training Evaluation
Programme Feedback
Impact Assessment

Pre-training Evaluation: Pre-training Evaluation is aimed at detecting shortcomings in the programme design before the commencement of the programme. For this, in case of all in-house long-duration programmes, the Training Centre/PMI/HR Group, as the case may be, shall review the programme design, content etc. in the light of the feedback obtained from a sample of participants.

Programme Evaluation: The Training Centre/PMI/HR Group, as the case may be, shall seek participant feedback at the end of the training programme in the Programme Feedback Form for making modifications/improvements in future programmes.

Impact Assessment: It involves measuring the change in the job behaviour of the employee on account of the learning during the training programme. The information would be collected through the Impact Assessment Form (IAF), six months after completion of the programme.


Pre-training Evaluation is required in the case of all long-duration programmes. Impact Assessment is required in the case of all long-duration programmes and Planned Interventions of a duration of not less than 10 days.

Programme Evaluation Report

After Impact Assessment, the Training Centre/PMI/HR Group, as the case may be, would prepare a Programme Evaluation Report in case of all Long-duration programmes and Planned Interventions of a duration of not less than 10 training days in the proforma and circulate it to all the participants concerned.


Training & Development at General Motors

GM's commitment to learning encourages a culture of continuous learning and personal development that allows GM's employees to be the best. To help achieve this objective, General Motors University (GMU) offers many classes, seminars and professional development programs in many different areas of specialization. Employees can also benefit from initiatives like global task teams, tuition assistance, and paid education leave. GMU is a global network of education and training designed to help employees continuously improve their skills to conduct and grow the business of General Motors. GMU courses are conducted in classrooms and learning laboratories throughout the world and also available electronically via GM's Intranet through GM University On-Line. GMU currently has 15 colleges charged with developing curricula tailored not only to the professional needs of its students, but also to the unique challenges facing employees from a business sector, divisional or regional perspective. Since its inception in 1997, GMU has helped GM become a learning organization able to compete in the world marketplace where knowledge is important and a key to success. GMU continues its focus on creating and strengthening learning initiatives and strategies that align the growth and developmental needs of GM employees within the framework of GM's overall business needs and priorities.


The Training Procedure at General Motors India

Activity | References | Control Points
Competency Determination & Review | Job Descriptions | Review in Jan.
Training Need Identification | PMF/Appraisals/Interview Rating etc. | Complete in Jan.
Compilation of Training Needs | Training Compilation Sheet | Within 15 days of PMF receipt
Identified Training Needs Review | - | Within a week of compilation
Training Plan | Training Plan (Annual/6 mth/Qtr) | Actual against Plan; % Trg. coverage
Training Execution | Attendance Sheet / Feedback Sheet | -
Training Assessment & Effectiveness | Test Papers, Feedback Sheets etc. | -
Training Effectiveness (corrective action if necessary) | Section: Training Evaluation, Corrective action procedure (a, b & c) | % of Trg. found effective
Training Records | Database | Avg. Trg. Mandays/Emp.


Training Needs Identification
The training needs identification process at General Motors is done through the PMP forms, i.e. the Performance Management Process. The Performance Management Process (PMP) provides tools that help build a performance-driven culture. The foundation of the PMP is a formal discussion between a leader/supervisor and employee, linking business objectives to the employee's individual objectives. The process is outlined in the Performance Management Process, with forms and instructions to facilitate a thorough and value-added discussion. This discussion also sets the framework for ongoing feedback between employee and leader/supervisor.

At General Motors the IDP, or Individual Development Plan, is a Web-based application on GMU On-line that enables you to create your own training development plan. It allows you, via a series of user-friendly Web pages, to enter, view, print, edit, and track your plans electronically. If you are a Training Manager, you can also generate special summary reports on-line.

IDP provides value in a variety of ways. Below are just a few of the key contributions the IDP offers:

Enables the employee to electronically create and track his own development plans, which will help them to achieve their objectives.

Allows the development plan to move with the employee when they change jobs or organizations.

Provides leaders and supervisors the ability to generate reports to help identify their organization's training priorities for a given time period.

Provides leaders the ability to track completion of PMPs.

Assists GM University in forecasting and responding to the training needs of GM.

The Individual Development Plan consists of three sections within the PMP Form:
Strengths
Development Opportunities
Development Plan/Training Plan

The training needs were identified by studying the PMP forms, in which each employee had mentioned the area in which he needed training. Thereafter the data was collected in a database and the training needs were analyzed.

Evaluating Training Effectiveness
A training programme is likely to be more effective when the trainees want to learn, are involved in their jobs and have career strategies; the contents of the programme, and the ability and motivation of the trainee, also bear on training effectiveness.
1. For domestic external training, the employee shall fill in the External Training Nomination Form (FMT/HRD/TRN/004), obtain Staff Head approval and forward it to the Training Coordinator-HR.
2. The Level E (Training) shall obtain further approvals and notify the employee.
3. The Level E (Training) shall make arrangements for payments to be made for the course and shall register the employee for the course.
4. After completion of training, the employee shall complete the Course Feedback Sheet (FMT/HRD/TRN/005) and submit it to Level E (Training).
5. If certificates are issued at the end of the course, a copy of the certificate must be handed over to Level E (Training).
6. For international external training, the Staff Head shall make the recommendation to the President and Managing Director, through Level B (Staff Head HR), for approval before any arrangements are made.
7. The Level E (Training) shall assist the employee in making reservations for the training programme.
8. The company shall provide sufficient cash and travellers' cheques to the employee in advance for the purpose of such external training.

At GM India, On-the-Job Training (OJT) is executed based on the OJT plan. The DTC executes the OJT for new hires or recently transferred employees.


GM India evaluates the effectiveness of training in order to take appropriate corrective actions. Training evaluation depends on the type of training. Evaluation is done for all employees who have undergone training in the calendar year, in each department, as per the evaluation procedure below.

Evaluation Procedure
Off-the-job training evaluation: For all trainings (except awareness trainings and trainings of less than or equal to 4 hours), a Feedback Sheet (FMT/HRD/TRN/005) is filled in by the participants. Trainings of one or more days' duration are evaluated by one or more of the following methods; the method is decided by the faculty, by the trainee's superior, or by Level D (Training):
Pre/Post Training Tests: administered by the faculty before and after the training programme, through a test paper.
Post Training Skills Tests: administered by the faculty after the training is over, to assess the skill acquired, either through a written test or a test on the equipment.
Training effectiveness evaluation: done by the immediate superior of the trainee, 4-5 months after the training is over, through the Training Assessment Form (FMT/HRD/TRN/006).
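A pre/post training test, as described above, is summarised by comparing each trainee's scores before and after the programme. A sketch of one way the gains might be tabulated (illustrative only; GM's actual test papers and scoring are not specified in this detail):

```python
def learning_gain(pre_scores, post_scores):
    """Per-trainee gain and mean gain between pre- and post-training tests.

    pre_scores and post_scores are parallel lists, one entry per trainee,
    in the same order.
    """
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    mean_gain = sum(gains) / len(gains) if gains else 0.0
    return gains, mean_gain

# Hypothetical test scores (out of 100) for three trainees
gains, mean_gain = learning_gain([55, 60, 70], [75, 72, 80])
# gains = [20, 12, 10], mean_gain = 14.0
```

A small or negative mean gain would be one trigger for the corrective action the procedure calls for.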

On-the-job training evaluation: After completion of OJT, each employee has to undergo the OJT effectiveness assessment test or a skill test, as applicable. This test is designed by the DTC (objective questions or a work quality assessment).

Training Records
GM India maintains records of off-the-job types of training. The records are maintained in a database.


Procedure: Level E (Training) maintains training records (FMT/HRD/TRN/007) other than OJT and EMS training. OJT records are maintained by the respective DTCs. Training records are preserved for two years after the employee's separation. Workmanship qualification criteria for special processes, and the achievement of those criteria, are contained in the process owner's DMN. The HR department maintains records related to education (qualifications) and experience for each employee.


Chapter 5: ANALYSIS
Training and development are always used in conjunction, without knowing whether one leads to the other or not. Training as a stand-alone element of organizational growth is immaterial unless it has an element of tangible growth attached to it. Quite often trainers and managers from various functions tend to use these terms interchangeably, or in conjunction, without knowing their relevance, importance and sequencing in the whole scheme of things. It is therefore important to understand what training is, what its objectives are, and what the expected deliverables from a training programme are.

Training is an important tool for increasing the overall productivity of an organization. The emergence of new sectors, where the human element plays a pivotal role, has exponentially increased the need for training. Most organizations, in spite of putting in place the best standards to plan and execute training-related plans, often find that their exercise is not bringing in the requisite deliverables, be it employee performance or productivity. Organizations are happy noting the short-term changes instead of the long-term implications of a training session, which is why training and development are used loosely by most organizations, without even knowing their rightful place under the sun.

Globalization, liberalization and modernization are the buzzwords in today's business world. We are living in a global village where, in this competitive world, the survival of the fittest has been replaced by the survival of the fastest. The development of human resources poses new challenges in today's business world. Training and development are therefore receiving a great deal of emphasis for the following reasons:

Rapid technological development
Change of management style and functioning
Performance of manpower

Training is an activity leading to skilled behavior

It is not what you want in life, but it is knowing how to reach it.

It is not where you want to go, but it is knowing how to get there. It is not what you dream of doing, but it is having the knowledge to do it. It is not a set of goals, but it is more like a vision.

Classroom learning is more or less theoretical in nature; unless the theory is integrated with practical study, the study remains static and incomplete. A training programme always invites many expectations, because the activities have to justify that the investment in terms of time, money and energy has demonstrated benefit to the organization. It was observed that the training activities of an organization are largely measured by the number of training programmes conducted per year, or the number of training programmes per employee per annum, which does not reflect the quality of the training. In addition to quantity, quality needs to be taken as a measure when evaluating the training activities of any organization.

Theoretically, there should be a proper match between the training output and the expectations from the training, which yields benefits to the organization. But studies show that there is always a wide gap between the two, and as the gap widens, the effectiveness of the training decreases. This gap arises mainly because in many organizations training is imparted routinely, and often merely to fulfil fancy figures, without proper assessment of its impact on the organization. It is commonly believed that executing a function nicely is taken more seriously than evaluating the positive impact of the training programme.

Many organizations do not realize the need to evaluate training and development programmes once the session ends. The most probable reasons any management cites for ignoring the crucial "evaluation" stage are lack of time, resources or tools. As organizations increasingly adopt training and development programmes, there is a pressing need to evaluate them. The process of examining a training programme is called training evaluation. Training evaluation checks whether training has had the desired effect, and ensures that candidates implement their learning in their respective workplaces, in their regular work routines.

Training is a transforming process that requires some input and in turn produces output in the form of knowledge, skills and attitudes (KSAs).

Nowadays, training is an investment, because departments such as marketing and sales, human resources, production and finance depend on training for their survival. Training thus provides the opportunity to raise the profile of development activities in the organization. It is observed that, across sectors, management uses Kirkpatrick's Four Levels of Evaluation, and it remains the favourite irrespective of the sector. There are difficulties in trying to quantify the results of training and development programmes, and many training ROI assessments are "best estimates"; even so, it may be of more value to obtain "best estimates" than to have no assessment at all. Usually a lot of time, money and other resources are invested in training programmes, and hence it becomes mandatory for any organization to investigate the effectiveness of the training programme, the benefits accrued and the ROI.

The evaluation process involves informing the participants about the prime objectives of the training and development programme. The opinions and views of the participants regarding the relevance of the goals, to them as well as to the organization, are sought based on Kearns' Baseline Evaluation Model. The participants are informed well in advance of the context of the programme and the training methods, and are continuously encouraged to pen their own thoughts throughout the programme so that they can easily recall these points when they are actually required to fill in their feedback questionnaires. This information also helps the participants draw up an action plan for applying the acquired knowledge in their jobs. Six months after the completion of the programme, the participants have another opportunity to re-evaluate the impact of the training received on their performance.
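The training ROI "best estimates" discussed above typically use the standard ratio of net benefit to cost. A sketch, with hypothetical figures (any real assessment would have to estimate the benefit figure, which is precisely the hard part the text notes):

```python
def training_roi_pct(estimated_benefit, total_cost):
    """Standard ROI formula: net benefit as a percentage of cost."""
    if total_cost == 0:
        return 0.0
    return 100.0 * (estimated_benefit - total_cost) / total_cost

# e.g. a programme costing 200,000 with estimated benefits of 260,000
roi = training_roi_pct(260_000, 200_000)  # 30.0 per cent
```

Even when the benefit figure is a rough estimate, computing the ratio forces the organization to state its assumptions, which is the spirit of the "best estimate" argument above.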



EXPERIENCE WITH TRAINING PROGRAMME EVALUATION MODELS

Reviewing the history and development of training evaluation research shows that there are many variables that ultimately affect how trainees learn and how they transfer their learning to the workplace. Russ-Eft and Preskill suggest that a comprehensive evaluation of learning, performance and change would include the representation of a considerable number of variables (Russ-Eft and Preskill, 2001). Such an approach, whilst laudable in terms of purist academic research, is likely to cause another problem: that of collecting data to demonstrate the effects of so many independent variables and factors. Thus, we need to recognise that there is a trade-off between the cost and duration of a research design and the quality of the information it generates (Warr et al., 1970). Hamblin (1974) points out that a considerable amount of evaluation research has been done. This research has been carried out with a great variety of focal theories, usually looking for consistent relationships between educational methods and learning outcomes, using a variety of observational methods but with a fairly consistent and stable background theory. However, the underlying theory has been taken from behaviourist psychology, summed up in the 'patient' model: what the subject (patient) does (behaviour or response) is a consequence of what has been done to him (treatment or stimulus). Another approach, according to Burgoyne (Burgoyne and Cooper, 1975), which researchers appear to take to avoid confronting value issues, is to hold that all value questions can ultimately be reduced to questions of fact. This usually takes the form of regarding 'managerial effectiveness' as a personal quality which is potentially objectively measurable, and therefore a quality whose possession could be assessed as a matter of fact. In practice this approach has proved elusive.


Seashore et al. (1960) felt that the existence of 'managerial effectiveness' as a unitary trait could be confirmed if they found high intercorrelations between five important aspects of managerial behaviour: high overall performance ratings, high productivity, a low accident record, low absenteeism and few errors. There are no fixed rules about the style of evaluation used for a particular evaluative purpose. However, Easterby-Smith (1994) suggests that studies aimed at fulfilling the purpose of proving will tend to be located towards the research end of the dimension, and studies aimed at improving will tend to be located near the pragmatic end. On the methodological dimension there may be more concern with proving at the scientific end, and with learning at the constructivist end (Figure 4).

Figure 4. Use of evaluation style matrix (adapted from Easterby-Smith). One axis runs from research styles (experimental research, illuminative evaluation, goal-free evaluation) towards pragmatic styles (systems model, interventionalist evaluation); the other runs from proving, through improving, to learning, with learning at the constructivist end.

A number of emerging challenges including globalization, economic pressures, and the changing nature of work have combined to create a business environment that demands innovative, flexible training solutions. Simulations are a promising tool for creating more realistic, experiential learning environments to meet these challenges.


Simulation-Based Training Defined

Simulations are generally defined as artificial environments that are carefully created to manage individuals' experiences of reality. For instance, Jones (1998, p. 329) defines a simulation as an exercise involving "reality of function in a simulated environment". Cannon-Bowers and Bowers (in press) note that an essential feature of simulations and other synthetic learning environments (e.g., virtual reality) is "the ability to augment, replace, create, and/or manage a learner's actual experience with the world by providing realistic content and embedded instructional features". Although not all simulations utilize technology (e.g., board games, role plays), our focus in the current article is computer-based simulations, because of their growing use and the pressing need for research on their effectiveness. A number of constructs conceptually overlap with simulations. For instance, games represent a specific type of simulation that features competitive engagement, adherence to a set of rules, and a scoring system (Cannon-Bowers & Bowers, in press; Teach, 1990). Thavikulwat (2004) notes that "games" and "simulations" are terms used relatively interchangeably (e.g., simulation-based games). Also, virtual worlds represent very elaborate simulations that allow for interactions among multiple players as well as between players and objects in the world (Cannon-Bowers & Bowers, in press). In the current article we use the term simulation-based training to refer broadly to all types of computer-based simulations that are used to create synthetic learning environments.
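The taxonomy in this paragraph, with games and virtual worlds as special cases of simulations, can be sketched as a small type hierarchy; all names and fields below are illustrative assumptions, not drawn from the cited authors:

```python
from dataclasses import dataclass, field

@dataclass
class Simulation:
    """Artificial environment managing a learner's experience of reality."""
    name: str
    computer_based: bool = True

@dataclass
class Game(Simulation):
    """A simulation featuring competitive engagement, rules and scoring."""
    rules: list[str] = field(default_factory=list)
    scoring: bool = True

@dataclass
class VirtualWorld(Simulation):
    """An elaborate simulation supporting many players and
    player-object interaction."""
    max_players: int = 100

# A business game is a simulation by definition, as the text argues.
market_sim = Game(name="market-entry exercise",
                  rules=["four quarters", "fixed budget"])
assert isinstance(market_sim, Simulation)
```

The subclass relation mirrors the conceptual overlap the paragraph describes: every game is a simulation, but not every simulation is a game.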


Margaret Gredler (1996) notes that a major design weakness of most studies evaluating simulation-based training and development is that the simulations are compared with regular classroom instruction, although the instructional goals of each can differ. Similarly, many studies show measurement problems in the nature of the post-tests used. The problems highlighted above regarding the choice of evaluation methodology are further compounded by the lack of well-designed research studies on the development and use of games and simulations (Gredler, 1996): much of the published literature consists of anecdotal reports
and testimonials providing sketchy descriptions of the game or simulation, reporting only on perceived student reactions. Most of the research, as noted by Pierfy (1977), is flawed by basic weaknesses in both design and measurement. Some studies implemented games or simulations that were brief treatments of less than 40 minutes and assessed effects weeks later; intervening instruction in these cases contaminated the results.

DO SIMULATIONS REQUIRE A DIFFERENT ASSESSMENT OF EFFECTIVENESS?

McKenna (1996) cites many papers criticising the evaluation of learning effectiveness using interactive multimedia, and many researchers call for more research (Clark and Craig, 1992; Reeves, 1993) to quantitatively evaluate the real benefits of simulation- and games-based learning. The question of whether simulation-based learning requires new and different assessment techniques, beyond those in use today, remains relatively unexplored. Although there have been some attempts at constructing theoretical frameworks for the evaluation of virtual worlds (Rose, 1995; Whitelock et al., 1996), very few working examples or reports on the practical use of these frameworks exist. Whitelock et al. (1996) argue that effective evaluation methods need to be established to discover whether conceptual learning takes place in virtual environments.

In practice, however, the assessment of virtual environments and simulations has focused primarily on their usefulness for training and less on their efficacy in supporting learning, especially in domains with high conceptual and social content. Hodges (1998) suggests that simulations (especially virtual reality) prove most valuable in job training where hands-on practice is essential but actual equipment cannot often be used. Greeno et al. (1993) suggest that knowledge of how to perform a task is embedded in the contextual environment, and Druckman and Bjork (1994) suggest that only when a task is learned in its intended performance situations can the learned skills be used in those situations.


Since the early days of simulation and gaming as a teaching method, there have been calls for hard evidence to support the teaching effectiveness of simulations (Anderson and Lawton, 1997). Their paper states that, despite the extensive literature, it remains difficult, if not impossible, to support objectively even the most fundamental claims for the efficacy of games as a teaching pedagogy; there is relatively little hard evidence that simulations produce learning or that they are superior to other methodologies. They go on to trace the reasons to the selection of dependent variables and to the lack of rigour with which investigations have been conducted.

HOW TO EVALUATE THE EFFECTIVENESS OF SIMULATIONS

One of the major problems of simulations is how to evaluate their training effectiveness (Feinstein and Cannon, 2002, citing Hays and Singer, 1989). Although researchers have lauded the benefits of simulation for more than 40 years (Wolfe and Crookall, 1998), very few of these claims are supported by substantial research (Miles et al., 1986; Butler et al., 1988; Wolfe, 1981, 1985, 1990). Many of the authors cited above attribute the lack of progress in simulation evaluation to poorly designed studies and to the difficulties inherent in creating an acceptable evaluation methodology. A number of empirical studies have examined the effects of game-based instructional programs on learning. For example, both Ricci et al. (1996) and Whitehall and McDonald (1993) found that instruction incorporating game features led to improved learning. The rationale for these positive results varied, given the different factors examined in these studies. Whitehall and McDonald argued that the incorporation of a variable payoff schedule within the simulation led to increased risk taking among students, resulting in greater persistence and improved performance. Ricci et al. proposed that instruction incorporating game features enhanced student motivation, leading to greater attention to training content and greater retention.


However, although anecdotal evidence suggests that students seem to prefer games over other, more traditional methods of instruction, reviews have reported mixed results regarding the training effectiveness of games and simulations. Pierfy (1977) evaluated the results of 22 simulation-game effectiveness studies to determine patterns of training effectiveness. Twenty-one studies reported comparable assessments: three reported results favouring the effectiveness of games, three favoured conventional teaching, and the remaining fifteen reported no significant differences. Eleven of the studies tested for retention of learning; eight of these indicated that retention was superior for games, while the remaining three yielded no significant result. Seven of the eight studies assessing student preference reported greater interest in simulation-game activities than in conventional teaching methods. More recently, Druckman (1995) concluded that games seem to be effective in enhancing motivation and increasing student interest in subject matter, yet the extent to which this translates into more effective learning is less clear.

THE PROBLEM OF SIMULATION VALIDATION

Specifically with regard to simulation-based training programmes (but also applying, perhaps with different terminology, to all training delivery methods), this section reviews the literature on simulation evaluation, developing a coherent framework for pursuing the evaluation problem. Three prominent constructs appear in the literature: fidelity, verification, and validation.
The important criterion is validation: that the simulation, as a method of teaching and as a training intervention, produces (or helps to produce) learning and transfer of learning (Feinstein and Cannon, 2002). Yet fidelity and verification are easier to evaluate (though not necessarily to measure objectively) and often distract evaluators from the trickier issue of validation.

Simulation Fidelity

Fidelity is the level of realism that a simulation presents to a learner. Hays and Singer (1989) describe fidelity as how similar a training situation must be, relative to the operational situation, in order to train most efficiently. Fidelity focuses on the equipment that is used to simulate a particular learning environment.


In more technologically sophisticated simulations that use virtual reality, for example, the construct of fidelity has an additional dimension, that of presence: the degree to which an individual believes that they are immersed within the virtual world or simulation (Bailey and Witmer, 1994; Witmer and Singer, 1994; Dede, 1997; Salzman et al., 1999). The degree of fidelity or presence in a learning environment is a difficult element to measure (Witmer and Singer, 1994). Much research during the 1960s and 1970s studied the relationship between fidelity and its effects on training and education. According to Feinstein and Cannon (2002), these studies found that a higher level of fidelity does not translate into more effective training or enhanced learning. In fact, it may be that lower levels of fidelity, combined with an effective human-virtual-environment interface (navigational simplicity) and a significant degree of presence, can help trainees acquire knowledge or skills within the simulation (Salzman et al., 1999; Stanney et al., 1998; Gagne, 1984; Alessi, 1988).

Simulation Verification

Verification is the process of assessing that a model is operating as intended: a process designed to see if we have "built the model right" (Pegden et al., 1995). During this process, simulation developers need to test and debug the simulation (nowadays usually software errors) through a series of alpha and beta tests under different conditions, verifying that the model works as intended. Developers are often absorbed by this process, producing what appear to be brilliant models that work correctly but with no appreciation of their educational effect, and hence of their validity. In this sense, verification can be a trap, notwithstanding its critical status as a necessary condition of validity (Feinstein and Cannon, 2002).

Validation

Building on the work of Feinstein and Cannon, Cannon and Burns, and others, there are three main questions beyond the general issues of evaluation noted above (Figure 5):

1. Are we measuring the right thing? (Validity of the constructs.)
2. Does the simulation provide the opportunity to learn the right thing? (Verification of the simulation, and fidelity appropriate to the content and audience.)
3. Does the method of using a simulation deliver the learning? (Validity of the simulation as a method of development.)

Figure 5. Three Faces of Simulation Evaluation (adapted from Anderson, Cannon, Malik, and Thavikulwat, 1998)
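Verification, described above as checking that we have "built the model right", is in software terms ordinary testing. A toy sketch, assuming a hypothetical pricing rule inside a business simulation; passing such checks verifies the model while, as the text warns, saying nothing about its educational validity:

```python
def simulated_revenue(units_sold: int, price: float,
                      discount: float = 0.0) -> float:
    """Hypothetical pricing rule inside a business simulation."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be in [0, 1)")
    return units_sold * price * (1.0 - discount)

# Verification checks: does the model behave as the design intends?
assert simulated_revenue(100, 10.0) == 1000.0      # baseline case
assert simulated_revenue(100, 10.0, 0.2) == 800.0  # discount applied
try:
    simulated_revenue(100, 10.0, 1.5)              # invalid input rejected
except ValueError:
    pass
```

A model that passes every such check is verified but not yet validated: whether playing with it produces learning is a separate, empirical question.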

Evaluation of simulations for learning outcomes

An analysis of the simulation literature identifies the learning outcomes instructors adopt as they strive to educate business students. These learning outcomes have been advanced as targeting the skills and knowledge needed by practising managers; the principal sources are Teach (1989), Teach and Giovahi (1988), Miles and Randolph (1985), Miles et al. (1986), Anderson (1982) and Anderson and Lawton (1997). Simulation researchers have speculated that the method is an effective pedagogy for achieving many of these outcomes. Table 1 below identifies these learning outcomes, where P = measurement of learners' perceptions of learning outcomes and O = an objective measurement of learning outcomes. It is clear from the table that the vast majority of evaluations have relied on learners' perceptions of their learning outcomes. Objective measurement (for any learning intervention) is more difficult; however, there appears to be a need to bring in more objective measures to
help understand if simulations are an effective method for people to learn business management skills.

Table 1. Possible learning outcomes for simulations (adapted from Anderson and Lawton, 1997)


Facts and concepts of the business discipline
- Increase the student's knowledge of basic principles and concepts of the discipline

Interpersonal skills
Improve the student's ability to:
- Participate effectively in group problem solving
- Motivate coworkers
- Provide meaningful feedback to coworkers
- Resolve conflicts
- Communicate clearly with coworkers
- Develop people
- Lead
- Form coalitions
- Develop consensus
- Delegate responsibility
- Supervise
- Manage people
- Work as a member of a team
- Work in a group environment
- Appraise performance
- Increase the student's knowledge of human behaviour in a group setting

General analytical, critical thinking, problem-solving, or decision-making skills
Improve the student's ability to:
- Identify problems
- Frame problems
- Structure unstructured problems
- Analyse problems
- Use data to make better decisions
- Distinguish relevant from irrelevant data
- Interpret data
- Implement ideas and plans
- Make decisions using incomplete information
- Solve problems
- Solve problems creatively
- Solve problems systematically
- Make good decisions

The interrelationships among things
Improve the student's ability to:
- Integrate material from various functional areas of business
- See the big picture
- Increase the student's understanding of the complex interrelationships in a business organisation
- Increase the student's understanding of why organisational subsystems must be integrated for organisational effectiveness

Business-specific knowledge and skills
Improve the student's ability to:
- Assess the situation quickly
- Plan effectively
- Plan business operations
- Schedule and coordinate
- Prioritize tasks
- Forecast
- Use spreadsheets for decision-making
- Increase the student's understanding of the decision process






Chapter 7: Conclusion
This paper commenced with three key questions: 1. Why evaluate? 2. What to evaluate? 3. How to evaluate?

Why evaluate? Any deliberate learning intervention costs an organisation, and individuals, time and money. The value placed on this may vary from person to person; nevertheless there is a value, hence it is worth ensuring that the value of the outcome is greater than or equal to the value of the input.

What to evaluate? This is more difficult to answer. In business, and especially in management, behavioural competencies as learning outcomes are increasingly recognized as a valid and useful measurement.

How to evaluate? This poses significant problems for the researcher. The training community is most familiar with Kirkpatrick's four levels of evaluation, and most frequently measures Level 1 (reactions to the training intervention) as a proxy for all learning outcomes achieved. The basic premise is that if trainees enjoy the training, then they will learn from it. This may be true; however, it does not mean that the trainees have learned what was intended, let alone what was needed. This suggests that a goal-free evaluation approach (Scriven, 1972) may be suitable to establish what was learned, regardless of what the intervention intended. Greater objectivity about on-the-job behaviour change may be obtained through the use of a suitable instrument and 360-degree assessment. However, the researcher needs to be aware of Wimer's warning that 360-degree feedback can have detrimental effects, especially when the feedback is particularly negative or is not handled sensitively (Wimer, 2002).
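Kirkpatrick's four levels, referred to above, can be captured in a small data structure, with the common Level 1 practice of averaging reaction scores computed directly (the level descriptions paraphrase Kirkpatrick; the scores are invented):

```python
from statistics import mean

KIRKPATRICK_LEVELS = {
    1: "Reaction  - did trainees like the programme?",
    2: "Learning  - did knowledge, skills or attitudes change?",
    3: "Behaviour - did on-the-job behaviour change?",
    4: "Results   - did organisational outcomes improve?",
}

# Typical Level 1 data: end-of-course feedback on a 1-5 scale.
reaction_scores = [4, 5, 3, 4, 4, 5, 2, 4]
level1 = mean(reaction_scores)
print(f"Level 1 (Reaction) mean: {level1:.2f} / 5")
# As the text notes, a high Level 1 score is only a proxy: it says
# nothing about Levels 2-4 (learning, behaviour change, results).
```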


Evaluation style

Choosing an evaluation style requires the researcher to understand clearly the intentions of the evaluation. In order to demonstrate the effectiveness of using simulations as a method of developing managerial competencies, this researcher is interested in proving (rather than learning about) the effectiveness of the delivery method, and in proving (rather than improving) the development intervention. The experimental research approach to evaluation appears the most objective and scientific for this purpose.

Researcher involvement and bias

There are significant problems to consider: the researcher's involvement, whether direct or not, and the fact that measuring participants in a programme may itself have a direct impact on the achievement of the learning outcomes. However, if this is the same for all delivery methods being evaluated, the comparison is made on the same basis.

Sample size

The next, and very significant, issue is sample size. It is recognised that in order to achieve statistical validity the sample will need to be significantly large, of the order of 100 or more participants.

Control group and other influences

The suitability of the control group needs to be considered as well. It can be expected that individuals will react differently to the same training intervention. Aspects of the individual such as preferred learning style, educational history, perhaps gender, race or cultural heritage, and age may all play a part in whether they, as an individual, learn and change behaviour as a result of a particular learning intervention.

The design of a suitable evaluation research method will need to consider all of these issues.
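The experimental approach favoured here reduces, at its simplest, to comparing a trained group's post-test scores against a control group's. A minimal sketch using Welch's t-statistic, with all scores invented for illustration (a real study would still need the sample-size and control-group safeguards discussed above):

```python
from statistics import mean, variance

def welch_t(sample_a: list[float], sample_b: list[float]) -> float:
    """Welch's t-statistic for two independent samples with
    possibly unequal variances (sample variance, n-1 denominator)."""
    na, nb = len(sample_a), len(sample_b)
    se = (variance(sample_a) / na + variance(sample_b) / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical post-test scores (out of 100).
simulation_group = [78, 85, 80, 74, 88, 82, 79, 84]
control_group = [72, 75, 70, 78, 74, 71, 76, 73]

t = welch_t(simulation_group, control_group)
print(f"t = {t:.2f}")  # compare against the critical value for the chosen alpha
```

The statistic alone does not settle the question; the significance threshold, degrees of freedom and, above all, the adequacy of the design determine what the comparison can prove.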



References

ALESSI, S. M. (1988) Fidelity in the Design of Instructional Simulations. Journal of Computer-Based Instruction, 15, 40-47.
ALLIGER, G. M., TANNENBAUM, S., BENNET, W., TRAVER, H. & SHOTLAND, A. (1997) A Meta-Analysis of the Relations among Training Criteria. Personnel Psychology, 50, 341-358.
ANDERSON, J. R. (1982) Acquisition of Cognitive Skills. Psychology Review, 89, 369-406.
BEDINGHAM, K. (1997) Proving the Effectiveness of Training. Industrial and Commercial Training, 29, 88-91.
BRINKERHOFF, R. O. (1988) An Integrated Evaluation Model for HRD. Training & Development, 42, 66-68.
BRINKERHOFF, R. O. (1989) Achieving Results from Training, San Francisco, Jossey-Bass.
BURGOYNE, J. & COOPER, C. L. (1975) Evaluation Methodology. Journal of Occupational Psychology, 48, 53-62.
BURGOYNE, J. & SINGH, R. (1977) Evaluation of Training and Education: Micro and Macro Perspectives. Journal of European Industrial Training, 1, 17-21.
BUTLER, R. J., MARKULIS, P. M. & STRANG, D. R. (1988) Where Are We? An Analysis of the Methods and Focus of the Research on Simulation Gaming. Simulation & Games, 19, 3-26.
CAMPBELL, J. P., DUNNETTE, M. D., LAWLER, E. E. & WEICK, K. E. (1970) Managerial Behaviour, Performance and Effectiveness, Maidenhead, McGraw-Hill.
COLLINS, D. B. (In Press) Performance-Level Evaluation Methods Used in Management Development Studies from 1986-2000. Human Resource Development Quarterly.
DEDE, C. (1997) The Evolution of Constructivist Learning Environments. Educational Technology, 52, 54-60.
DRUCKMAN, D. (1995) The Educational Effectiveness of Interactive Games. IN CROOKALL, D. & ARAI, K. (Eds.) Simulation and Gaming Across Disciplines and Cultures: ISAGA at a Watershed. Thousand Oaks, CA, Sage.
DRUCKMAN, D. & BJORK, A. (1994) Learning, Remembering, Believing: Enhancing Human Performance, Washington DC, National Academy Press.
EASTERBY-SMITH, M. (1980) The Evaluation of Management and Development: an Overview. Personnel Review, 10, 28-36.
EASTERBY-SMITH, M. (1994) Evaluating Management Development, Training and Education, Aldershot, Gower.
EASTERBY-SMITH, M. & ASHTON, D. J. L. (1975) Using Repertory Grid Technique to Evaluate Management Training. Personnel Review, 4, 15-21.
EASTERBY-SMITH, M., THORPE, R. & LOWE, A. (1991) Management Research: An Introduction, London, Sage.
FEINSTEIN, A. H. & CANNON, H. M. (2002) Constructs of Simulation Evaluation. Simulation and Gaming, 33, 425-440.
FILSTEAD, W. J. (1979) Qualitative Methods: A Needed Perspective in Evaluation Research. IN COOK, T. D. & REICHARDT, C. S. (Eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, Sage.
GAGNE, R. M. (1984) Learning Outcomes and Their Effects: Useful Categories of Human Performance. American Psychologist, 39, 377-385.
GREDLER, M. E. (1996) Educational Games and Simulations: A Technology in Search of a (Research) Paradigm. IN JONASSEN, D. H. (Ed.) Handbook of Research for Educational Communications and Technology. New York, Simon & Schuster Macmillan.
GREENO, J., SMITH, D. & MOORE, J. (Eds.) (1993) Transfer on Trial: Intelligence, Cognition and Instruction, Norwood NJ, Ablex.
GUBA, E. G. & LINCOLN, Y. S. (1989) Fourth Generation Evaluation, London, Sage.
HAMBLIN, A. C. (1974) Evaluation and Control of Training, Maidenhead, McGraw-Hill.
HAYS, R. T. & SINGER, M. J. (1989) Simulation Fidelity in Training Systems Design: Bridging the Gap between Reality and Training, New York, Springer-Verlag.
HESSELING, P. (1966) Strategy of Evaluation Research in the Field of Supervisory and Management Training, Assen, Van Gorcum.
HODGES, M. (1998) Virtual Reality in Training. Computer Graphics World.
HOLTON, E. F., III (1996) The Flawed Four-Level Evaluation Model. Human Resource Development Quarterly, 7, 5-21.
JENKINS, D., SIMMONS, H. & WALKER, R. (1981) Thou Nature Are My Goddess: Naturalistic Enquiry in Educational Evaluation. Cambridge Journal of Education, 11, 169-189.
KELNER, S. P. (2001) A Few Thoughts on Executive Competency Convergence. Center for Quality of Management Journal, 10, 67-71.
KIRKPATRICK, D. (1959/60) Techniques for Evaluating Training Programs: Parts 1 to 4. Journal of the American Society for Training and Development, November, December, January and February.
KIRKPATRICK, D. L. (1994) Evaluating Training Programs: The Four Levels, San Francisco, Berrett-Koehler.
KIRKPATRICK, D. L. (1998) Evaluating Training Programs: Evidence vs. Proof. IN KIRKPATRICK, D. L. (Ed.) Another Look at Evaluating Training Programs. Alexandria, VA, ASTD.
KRAIGER, K., FORD, J. K. & SALAS, E. (1993) Application of Cognitive, Skill-based, and Affective Theories of Learning Outcomes to New Methods of Training Evaluation. Journal of Applied Psychology, 78, 311-328.
MACDONALD-ROSS, M. (1973) Behavioural Objectives: A Critical Review. Instructional Science, 2, 1-52.
MCCLELLAND, D. C. (1973) Testing for Competence Rather Than Intelligence. American Psychologist, 28, 1-14.
MCKENNA, S. (1996) Evaluating IMM: Issues for Researchers. Charles Sturt University.
MILES, R. H. & RANDOLPH, W. A. (1985) The Organisation Game: A Simulation, Glenview, IL, Scott, Foresman and Company.
MILES, W. G., BIGGS, W. D. & SCHUBERT, J. N. (1986) Students' Perceptions of Skill Acquisition Through Cases and a General Management Simulation: A Comparison. Simulation & Games, 17, 7-24.
MOSIER, N. R. (1990) Financial Analysis: The Methods and Their Application to Employee Training. Human Resource Development Quarterly, 1, 45-63.
PATTON, M. Q. (1978) Utilization-Focused Evaluation, Beverly Hills, Sage.
PIERFY, D. A. (1977) Comparative Simulation Game Research: Stumbling Blocks and Steppingstones. Simulation and Gaming, 8, 255-268.
RACKHAM, N. (1973) Recent Thoughts on Evaluation. Industrial and Commercial Training, 5, 454-461.
REDDIN, W. J. (1970) Managerial Effectiveness, London, McGraw-Hill.
REEVES, T. (1993) Research Support for Interactive Multimedia: Existing Foundations and New Directions. IN LATCHEM, C., WILLIAMSON, J. & HENDERSON-LANCETT, L. (Eds.) Interactive Multimedia. London, Kogan Page.
RICCI, K., SALAS, E. & CANNON-BOWERS, J. A. (1996) Do Computer-based Games Facilitate Knowledge Acquisition and Retention? Military Psychology, 8, 295-307.
ROSE, H. (1995) Assessing Learning in VR: Towards Developing a Paradigm. HITL.
RUSS-EFT, D. & PRESKILL, H. (2001) Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change, Cambridge, MA, Perseus Publishing.
RUSSELL, S. (1999) Evaluating Performance Interventions. Info-line.
SALZMAN, M. C., DEDE, C., LOFTIN, R. B. & CHEN, J. (1999) A Model for Understanding How Virtual Reality Aids Complex Conceptual Learning. Presence: Teleoperators and Virtual Environments, 8, 293-316.
SCRIVEN, M. (1972) Pros and Cons about Goal-Free Evaluation. Evaluation Comment, 3, 1-4.
SEASHORE, S. E., INDIK, B. P. & GEORGOPOULOS, B. S. (1960) Relationships Among Criteria of Effective Job Performance. Journal of Applied Psychology, 44, 195-202.
SPENCER, L. M. & SPENCER, S. (1993) Competence at Work: Models for Superior Performance, New York, John Wiley & Sons.
STAKE, R. E. (1980) Responsive Evaluation. University of Illinois.
STANNEY, K., MOURANT, R. & KENNEDY, R. (1998) Human Factors Issues in Virtual Environments. Presence: Teleoperators and Virtual Environments, 7, 327-351.
TEACH, R. D. & GIOVAHI, G. (Eds.) (1988) The Role of Experiential Learning and Simulation in Teaching Management Skills.
TEACH, R. D. (Ed.) (1989) Using Forecast Accuracy as a Measure of Success in Business Simulations.
THOMAS, R., CAHILL, J. & SANTILLI, L. (1997) Using an Interactive Computer Game to Increase Skill and Self-efficacy Regarding Safer Sex Negotiation: Field Test Results. Health Education & Behavior, 24, 71-86.
WARR, P. B., BIRD, M. W. & RACKHAM, N. (1970) Evaluation of Management Training, Aldershot, Gower.
WHITE, B. (1984) Designing Computer Games to Help Physics Students Understand Newton's Laws of Motion. Cognition and Instruction, 1, 69-108.
WHITEHALL, B. & MCDONALD, B. (1993) Improving Learning Persistence of Military Personnel by Enhancing Motivation in a Technical Training Program. Simulation & Gaming, 24, 294-313.
WHITELOCK, D., BRNA, P. & HOLLAND, S. (1996) Proceedings of the European Conference on AI in Education. Lisbon, Portugal, Colibri Editions.
WIEBE, J. H. & MARTIN, N. J. (1994) The Impact of a Computer-based Adventure Game on Achievement and Attitudes in Geography. Journal of Computing in Childhood Education, 5, 61-71.
WIMER, S. (2002) The Dark Side of 360-degree Feedback. Training & Development, 37-42.
WITMER, B. & SINGER, M. J. (1994) Measuring Immersion in Virtual Environments. ARI Technical Report 1014.
WOLFE, J. (1981) Research on the Learning Effectiveness of Business Simulation Games: A Review of the State of the Science. Developments in Business Simulation & Experiential Exercises, 8, 72.
WOLFE, J. (1985) The Teaching Effectiveness of Games in Collegiate Business Courses: A 1973-1983 Update. Simulation & Games, 16, 251-288.
WOLFE, J. (Ed.) (1990) The Guide to Experiential Learning, New York, Nichols.
WOLFE, J. & CROOKALL, D. (1998) Developing a Scientific Knowledge of Simulation/Gaming. Simulation & Gaming, 29, 7-19.
WOOD, L. E. & STEWART, P. W. (1987) Improvement of Practical Reasoning Skills with a Computer Game. Journal of Computer-Based Instruction, 14, 49-53.
YOUNG, M. (2002) Clarifying Competency and Competence. Henley Working Paper. Henley, Henley Management College.


Annexure I: List of training programs held for two or more days (April 2007 to March 2008) at JK Tyre

S.No. | Training program | Dates and no. of days | Participants
1 | Plant orientation at BTP | (5 days) | 5
2 | Thomas Profiling PPA & HGA | 2-3rd May '07 (2 days) | -
3 | Thomas International certification program: ability testing & team diagnosis (2 modules) | 4-5th May '07 (2 days) | -
4 | Communication skill | 4-5th May '07 (2 days) | -
5 | Vehicle familiarization at all | (3 days) | -
6 | Presentation skills | 25-26th July '07 (2 days) | 19
7 | Advanced MS Office | 9-10th Aug '07 (2 days) | 17
8 | Team dynamics | 21-22nd Aug '07 (2 days) | 10
9 | 7th Experimental workshop on learning instruments, HRD & OD | 23-26th May '07 (4 days) | 1
- | Orientation at HO | (5 days) | -

S.No. | Training program | Dates and no. of days | Participants
1 | Inventory Management | 19-20th April '07 (2 days) | 2
2 | Workshop on international contract management | 30-31st July '07 (2 days) | 1
3 | Marketing summit 2007 | 21-22nd Aug '07 (2 days) | 1
4 | PHDCCI international tax conference | 31st-1st Sept '07 (2 days) | 2
5 | Internal session on Microsoft Project 2003 professionals | 16-17th Feb '08 (2 days) | 1
6 | Seminar on annual "energy efficiency 2007 markups Evans" | 21-22nd Feb '08 (2 days) | -
7 | TS 16949: internal audit | 13-14th Feb '08 (2 days) | 4
8 | ICWAI national convention 2008 | 7, 8 & 9th March '08 (3 days) | 1
9 | Executive education program on "marketing strategies in a competitive environment" | 6-9th Jan '08 (4 days) | -
10 | Financial modeling for project finance | 11-13th Oct '07 (3 days) | 1
11 | Workshop on "Nurturing leadership for business excellence" | 30th Nov-1st Dec '07 (2 days) | 4
12 | Workshop on equity valuation | 29-30th June '07 (2 days) | -


JK Tyre & Industries Limited CSU8.01-FR.05








3 = GOOD































Feedback can be taken from the superior and/or from the participant. These questions are indicative; programme-specific questions may be designed in place of the following questions.












Annexure II (PepsiCo)


Annexure III (General Motors)

The External Training Nomination Form:

______________________ (Unit HR Head)

__________________ (Staff Head - HR)

(President & MD)


The Training Plan

CC = Course Completed
CP = Course Planned


The Training Evaluation Form


The Training Effectiveness Feedback Form
