
Training evaluation: an empirical study in Kuwait

Ahmad Al-Athari, European Centre for TQM, University of Bradford, Bradford, UK
Mohamed Zairi, SABIC Chair in Best Practice Management, European Centre for TQM, University of Bradford, Bradford, UK

Keywords

Training, Evaluation, Model, Human resource development, Organisational development

Abstract

This paper is based on a study which examined the current training evaluation activity and the challenges that face Kuwaiti organisations. The study sample comprised five UK organisations (recognised as best practice organisations in their T&D activities) and 77 Kuwaiti organisations (40 government and 37 private). Interviews and questionnaires were used. The study reveals that the majority of respondents, in both the government and the private sector, only evaluate their training programmes occasionally. The most popular evaluation tool and technique used by the government and private sectors was the questionnaire. The most common model used by Kuwaiti organisations is the Kirkpatrick model, while the most common level of evaluation for both the government and the private sector is the reaction type.

Introduction

The Manpower Services Commission (1981) in its Glossary of Training Terms defines evaluation as:

The assessment of the total value of the training system, training course or programme in social as well as financial terms. Evaluation differs from validation in that it attempts to measure the overall cost-benefit of the course or programme and not just the achievement of its laid-down objectives. The term is also used in the general judgmental sense of the continuous monitoring of a programme or of the training function as a whole (McDougall, 1990).

The Glossary of Training Terms also defines the validation of training as:
. Internal validation. A series of tests and assessments designed to ascertain whether a training programme has achieved the behavioural objectives specified.
. External validation. A series of tests and assessments designed to ascertain whether the behavioural objectives of an internally valid training programme were realistically based on an accurate initial identification of training needs in relation to the criteria of effectiveness adopted by the organisation (Rae, 1986).

Evaluation has become a very important task for the organisation, and there are several very sound reasons for starting to put more effort into it. According to Kearns and Miller (1996, p. 9), evaluation:
. is about building credibility and a solid foundation for T&D decisions;
. provides a basis for maximising return on investment;
. helps to categorise training by the type of return you will get from your investment;
. for those who get it right, should lead to building up the training function, not depleting it;
. automatically links T&D with strategic and operational business objectives;
. ensures buy-in and commitment at all levels; and
. produces results that can act as a great reinforcer of learning and further motivate individuals to develop themselves.

Journal of European Industrial Training, Vol. 26 No. 5, 2002, pp. 241-251. © MCB UP Limited [ISSN 0309-0590] [DOI 10.1108/03090590210424911]

The most forgotten stage in any training programme is the evaluation. Magdy (1999), in his research in the USA, found that organisations spend $30 billion annually on training programmes and only 10 per cent of that expenditure goes to evaluation. Often, the value of conducting training evaluations is overshadowed by the necessity simply to gain participants' immediate post-course reactions, the results of which are sometimes mistakenly viewed as indicating whether or not the course was successful overall. In addition, budgetary and other constraints have caused many trainers and designers to employ standardised, commercially available evaluation instruments. These have many disadvantages: they are generally not focused, they offer little assistance in assessing long-term effects, and they take a one-size-fits-all approach (McClelland, 1994). So, to get the best benefit from the evaluation instrument, it has to be designed to meet the goals and objectives of the programme. Designing training to meet goals or objectives is not a new concept. Nor is the concept of attempting to measure changes that have occurred as a result of the training and determining what benefits the organisation has received for its investment. However, to accomplish both tasks is a challenge. Add to that variables such as multiple functional as well as behavioural objectives, and decisions about what process or procedures to use become more complex.

Study objectives and methodology


The study in this paper is part of a larger research project aimed at identifying best practice in training and its impact on employees' and organisations' effectiveness and performance. The part discussed in this paper had the main objective of identifying the training evaluation activities and challenges in Kuwaiti organisations. To achieve this objective, the study identified what was presented in the literature and published case studies as best practice in reward, appraisal and training (Nadler, 1970; Hamblin, 1974; Laird, 1978; Robinson, 1985; Kenney and Reid, 1986; Kirkpatrick, 1986; Rae, 1986; Camp et al., 1986; McDougall, 1990; Hames, 1991; Oakland, 1993; McClelland, 1994; Kearns and Miller, 1996; Buren and Bassi, 1999; Newman, 1999; Magdy, 1999). These practices were reproduced in a generic format and structured into questionnaires to assess their applicability and the viewpoint of experienced practitioners toward them. The survey focused on targeting government and private (joint venture) organisations in Kuwait to see how many subscribed to the ideas proposed, thus providing further evidence of whether these ideas were the right approach to successful human resource development in the future. The questionnaire was designed and piloted to assess: the time required to complete the questionnaire, simplicity, clear language, clarity of instructions, comprehensiveness and item sequence. The pilot sample included NatWest Bank, British Airways, IBM, Elida Faberge, ICL and the University of Bradford. Once the final questionnaire version was available, the survey sample was selected. For the purpose of the study, the only criteria for sample selection were the size of the organisations and their financial status. The sources used to select the sample were the Ministry of Planning and case study analysis in the literature. The selected populations for this research are training department managers and HRD managers in all government and joint venture organisations. For the government sector, there are currently 48 authorities; for the private sector, the study covered only the main shareholding companies (joint ventures with the government; only 38 companies working in investment, insurance, industry, real estate, transport and services) (Ministry of Planning, 1998), in addition to the banking sector (eight banks) and the hotels sector (14 hotels).

In order to identify the training evaluation activity and to compare government with private organisations in relation to these activities, a sample of 108 organisations was selected from the Kuwaiti organisations population. A total of 96 questionnaires were collected, and 77 were processed and analysed. The sample represents both government and private sectors and is divided almost equally between government (40, representing 51.9 per cent) and private organisations (37, representing 48.1 per cent of the sample). The largest group in the sample, accounting for 23 organisations across the government and private sectors, works in the services sector, representing 29.9 per cent of the sample, followed by 14 organisations working in the manufacturing sector, which accounted for 18.2 per cent of the sample. The banking and finance sectors accounted for 13 organisations, representing 16.9 per cent, while hotels and catering organisations accounted for 11, representing 14.3 per cent of the sample. The transportation and storage, and construction sectors accounted for eight organisations each, representing 10.4 per cent apiece.

Study findings
Initially, the study participants were presented with several statements to assess the perceived importance of training evaluation, evaluation models, evaluation tools and techniques, evaluation input and output, and challenges. Participants were requested to show how strongly they agreed with these statements on a five-point Likert scale.
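As a minimal illustrative sketch only (not the authors' actual analysis procedure), the way five-point Likert responses could be collapsed into the broad extent bands reported in the tables below, and turned into per-sector percentages, might look as follows; the band cut-offs, variable names and sample ratings are assumptions for illustration.

    from collections import Counter

    # Hypothetical mapping of a five-point Likert rating (1-5) onto the broad
    # bands used in the tables; the exact cut-off points are an assumption.
    def band(rating: int) -> str:
        if rating <= 2:
            return "small extent"
        if rating == 3:
            return "considerable extent"
        return "great extent"

    def band_percentages(ratings):
        """Percentage of ratings falling in each band, rounded to one decimal."""
        counts = Counter(band(r) for r in ratings)
        total = len(ratings)
        return {b: round(100 * n / total, 1) for b, n in counts.items()}

    # Illustrative (invented) ratings from ten respondents in one sector.
    example_ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
    print(band_percentages(example_ratings))
    # -> {'great extent': 70.0, 'considerable extent': 20.0, 'small extent': 10.0}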

Importance of evaluation

In order to identify how Kuwaiti organisations view training evaluation, the respondents were asked about the level of importance of the evaluation process in their organisation. Their answers reveal that only eight government organisations out of 40, representing 20 per cent, believe that training evaluation is important, while 32 government organisations, representing 80 per cent, believe that it is somewhat important. On the other hand, 22 private organisations out of 37, representing 59.4 per cent, believe that training evaluation is very important, while 13, representing 35.1 per cent, believe that it is somewhat important. In addition, only two private organisations do not believe in its importance (Figure 1).



Figure 1 Importance of evaluation

The above findings show that private organisations believe in the importance of training evaluation more than the government organisations do. This could again be attributed to the nature of each sector. The private organisations believe that each dinar (the Kuwaiti currency) they spend has to pay off, or give them returns, so training must pay off. This conclusion is consistent with ASTD (1997) research; they found that organisations in the USA with higher annual sales were especially likely to say that they conducted evaluations, while services sector organisations were the least likely to evaluate their training. ASTD (1997) conducted a survey on the importance of evaluation to US organisations; 81 per cent attached some level of importance, while 84 per cent felt that it was important only for HRD.

Frequency of training evaluation

In order to investigate the importance of training evaluation and to support the previous findings, the respondents were asked about the frequency with which they conduct evaluations of their training programmes (Figure 2). Their answers show that the majority in both the government and the private sector only sometimes evaluate their training programme, while only 7.5 per cent of government and 13.5 per cent of private organisations usually evaluate their training programme. Furthermore, 35 per cent of government and 18.9 per cent of private sector organisations rarely evaluate their training programme. One of the main difficulties faced by the training efforts in Kuwaiti organisations is that no specific body is in charge of evaluating the training programme. The responsibility for training evaluation is formally left to the training co-ordinator, who occasionally evaluates the programme just for reporting to management and inclusion in the annual report. That is why most Kuwaiti organisations only occasionally evaluate their training programmes. These organisations therefore need to assign a specific body, either the training department or the HR management department, to be in charge of evaluation. This result is also consistent with Al-Ali (1998). He stated:
We could see the lack of professional management thinking that could find a training programme which suits the work environment and employees' needs (Al-Ali, 1998, p. 165).

Figure 2 Frequency of training evaluation



Magdy (1999, p. 21) stated that:

The most forgotten stage in any training programme is the evaluation.

He found that, in the USA, organisations spend $30 billion annually on training programmes and only 10 per cent of that expenditure goes to evaluation.

Evaluation tools and techniques

Many instruments are used in evaluating training effectiveness. The most popular in the field include:
. tests;
. questionnaires;
. interviews;
. observations; and
. performance records.
In the training context, all known evaluation instruments can be used to collect the necessary data. However, the evaluation purpose and strategy will govern which evaluation instruments are most appropriate. The evaluation tool and technique most used by the government and private sectors is the questionnaire (Table I), which is used by 70 per cent of government organisations and 81 per cent of the private sample. Observation and performance records are no less important for the private sector, being used by 81 per cent and 70 per cent respectively. Furthermore, 62 per cent of the government organisations sample use performance records as evaluation tools.

Tests, interviews, other management tools, attitude surveys, and CAT/CAL were each used to a great extent by 54 per cent or less of the private and government samples. Al-Muraifea (1993) found in his study that the only method used to evaluate the training courses was direct observation in the classroom. Also, Al-Ali (1999) found that the most common evaluation tool used by Kuwaiti organisations is the questionnaire. Another finding by ASTD (1997) is consistent with the above; they found that 94 per cent of 300 US organisations use the questionnaire. Furthermore, they found that 100 per cent of public sector organisations use tests to evaluate their training programmes. The above discussion indicates that the entire sample in both sectors uses the questionnaire (see Table I). This instrument is widely used in training programmes, and it includes trainees' evaluations of training, often collected on a "smile sheet". Robinson and Robinson (1989) stated:

Almost all HRD professionals provide end-of-course questionnaires, which are completed by participants and given to the instructor. However, these reaction evaluations, too frequently, are poorly designed and yield minimally helpful information.

So, whatever evaluation instrument is used by Kuwaiti organisations, it must be designed to meet the goals and objectives of the programme.

Table I Evaluation tools and techniques

Tool/technique            Sector        Small extent (%)  Considerable extent (%)  Great extent (%)  Total (%)
Questionnaire             Government           0                  30                      70            100
                          Private              5                  14                      81            100
Interview                 Government           8                  45                      47            100
                          Private             27                  41                      32            100
Test                      Government           8                  38                      54            100
                          Private             30                  22                      48            100
Observation               Government          15                  45                      35            100
                          Private              5                  14                      81            100
Attitude survey           Government          40                  43                      17            100
                          Private             68                  19                      13            100
Performance record        Government           3                  35                      62            100
                          Private             11                  19                      70            100
CAT, CAL                  Government          90                   5                       5            100
                          Private             86                  14                       0            100
Other management tools    Government          88                   5                       7            100
                          Private             36                  39                      25            100


Designing training to meet goals or objectives is not a new concept. Nor is the concept of attempting to measure changes that have occurred as a result of the training and determining what benefits the organisation has received for its investment. However, to accomplish both tasks is a challenge. Add to that variables such as multiple functional, as well as behavioural, objectives, and decisions about what process or procedures to use become more complex.

Evaluation models

The selection of an evaluation model is very important in relation to what the organisation sponsoring the training activity aims to get out of it. Is it to benchmark the organisation's training activity, or to see how much the organisation benefited from the training programme: what was the employees' reaction, how much did they learn, did they change their behaviour, and what were the results? The respondents were asked about the type of evaluation model and methods they use for their evaluations. Their answers show that the entire sample in both the government and the private sector uses the Kirkpatrick model to evaluate their training programmes, to differing extents, while only 5 per cent of the private sector use the CIRO model. Furthermore, none of the sample in either sector uses the Investors in People standard or benchmarking (Table II). As the study shows, the entire sample uses the Kirkpatrick model to conduct training evaluation. This finding is consistent with ASTD (1997) research; they found that 67 per cent of 300 US organisations reported that they use the Kirkpatrick model. Large organisations were much more likely to use the Kirkpatrick model than smaller organisations.

Table II Evaluation models

Model                  Sector        Small extent (%)  Considerable extent (%)  Great extent (%)  Total (%)
Kirkpatrick            Government          52                  28                      20            100
                       Private             46                  43                      11            100
CIRO                   Government         100                   0                       0            100
                       Private             95                   5                       0            100
Investors in People    Government         100                   0                       0            100
                       Private            100                   0                       0            100
Benchmark              Government         100                   0                       0            100
                       Private            100                   0                       0            100

Evaluation of training outcomes

When the respondents were asked about the evaluation of training outcomes, especially with the use of the Kirkpatrick model, their answers indicate that the most common evaluation level for both government and private organisations is trainee reaction. Of the government organisations, 47 per cent evaluate learning, 40 per cent evaluate results, and 35 per cent evaluate job behaviour. On the other hand, 48 per cent of private organisations evaluate results, 11 per cent evaluate job behaviour, and only 10 per cent evaluate learning (Table III). This finding is consistent with ASTD (1997); they found that 67 per cent of the organisations included in their study used the Kirkpatrick model; 92 per cent of them evaluated reaction, 51 per cent evaluated learning, while 32 per cent evaluated job behaviour, and only 26 per cent evaluated results. Evaluating only the trainees' reaction to the training programme, by asking them what they thought of it in order to determine their degree of satisfaction, without moving to the next level, can be misleading. Their answers will sometimes be based on how much they liked the instructor, or whether or not they had a good time. This means that, if it is a good programme but they felt unsatisfied or did not like the instructor, they will kill the programme with their answers (see Table III). Saari et al. (1988) stated that organisations typically use only "happy sheets", and ignore whether training has had an impact on the learning, behaviour and performance of the trainee on the job. Evaluation has become a very important task for the organisation, and there are several very sound reasons for starting to put more effort into it, such as building credibility for T&D decisions, seeing the return on investment and building up the training function.


Table III Evaluation of training outcomes

Level            Sector        Small extent (%)  Considerable extent (%)  Great extent (%)  Total (%)
Reaction         Government           0                  15                      85            100
                 Private              3                  24                      73            100
Learning         Government           0                  53                      47            100
                 Private             49                  41                      10            100
Job behaviour    Government           5                  60                      35            100
                 Private             35                  54                      11            100
Result           Government           5                  55                      40            100
                 Private             22                  30                      48            100

However, for Kuwaiti organisations, the problem is that nobody is interested in the findings of the evaluation process; the only target of such evaluation is the instructor. Al-Muraifea (1993) stated:

One of the defects in the structure of the Training Department in most Kuwaiti organisations is that there is no body or division in charge of the results of such assessments, so ultimately nobody ensures the effectiveness of employee training in Kuwait. It is necessary to know whether objectives have been achieved or not.

Measuring training input

To reach a conclusion about evaluation activity in Kuwait, the respondents were asked whether or not they measure their training input. Their answers indicate that almost the entire sample measures their training input, while only 2.5 per cent of the government sample do not perform any such measurement (Figure 3).

Input measurement

In order to identify the kind of training input measurement, the sample was provided with a list from which to choose. Their answers show that all private sector organisations measure their total training expenditure, the number of employees receiving training, and the number of courses they offer to their employees. In addition, 97 per cent of them measure their payments to outside training providers, 81 per cent measure their trainee travel expenses, and 68 per cent measure their training expenditure per employee. The most popular training input measurements for the government organisations are the number of employees receiving training and the number of courses they offer to their employees. A total of 79 per cent of government organisations measure their total training time/days, and 72 per cent of them measure the cost of paying for training facilities and equipment. In conclusion, the entire private organisations sample chose to measure the quantitative input, while the government organisations chose the qualitative input (Table IV). This could also be attributed to the nature of each sector and the responsibility for payment. Almost all government organisations' payment comes from the civil services authority (Dewan). So the lack of measurement of the quantitative input by the government sample could be attributed to the monopoly of the civil services authority (Dewan) over training activity and payment, while the private sector is responsible for its own payment. As indicated before, the private sector is always looking for a payback on expenditure, and that is why this sector measures the quantitative input. This finding is consistent with ASTD (1997); they found in their research that total expenditures on training and the number of employees receiving training were measured by nearly nine out of ten organisations that evaluated their training in the USA (88 per cent). Just over three-quarters (77 per cent) of these organisations kept track of the number of courses they offer. Organisations with high volumes of annual sales tended to measure total training expenditures, training as a percentage of payroll, and tuition reimbursements more than other organisations. Tracking training as a percentage of payroll and the cost of facilities and equipment was also more prevalent among large firms. Other comparisons by industry group indicated that none of the organisations in the service sector or in agriculture, mining and construction recorded their course development expenditures.

Figure 3 Measuring training input



Table IV Input measurement

Input                                     Sector        Yes (%)  No (%)  Total (%)
Total training expenditure                Government       31      69      100
                                          Private         100       0      100
Number of employees receiving training    Government       97       3      100
                                          Private         100       0      100
Number of courses                         Government       97       3      100
                                          Private         100       0      100
Payment to outside training providers     Government       28      72      100
                                          Private          97       3      100
Total training time/days                  Government       79      21      100
                                          Private          51      49      100
Trainee travel expenses                   Government       31      69      100
                                          Private          81      19      100
Course development expenditures           Government       28      72      100
                                          Private          51      49      100
Cost of facilities and equipment          Government       72      28      100
                                          Private          54      46      100
Training expenditure per employee         Government       13      87      100
                                          Private          68      32      100
Training expenditure as % of payroll      Government        8      92      100
                                          Private          30      70      100
Course development time                   Government       23      77      100
                                          Private          30      70      100

Measuring training output

The respondents were asked whether they measure their training output or not. Their answers indicate that the entire private organisations sample measures their training output, while only 40 per cent of the government sample do so (Figure 4).

Training output measurement

In order to identify the kind of training output measurement, the sample was provided with a list from which to choose. Their answers show that all the government organisations which measure their training output measure their employees' job satisfaction and their productivity (Table V). In addition, 80 per cent of them measure their employees' absenteeism, and 69 per cent measure customer satisfaction as an output of their training programmes.



The private organisations match the government organisations in measuring their employees' productivity and job satisfaction, which are measured by 97 per cent and 95 per cent of the private organisations sample respectively; 92 per cent measure customer satisfaction, 84 per cent measure profitability, and 76 per cent measure sales as a training output. Comparing Kuwaiti organisations with US organisations concerning training output measurement (Table V), it appears from the ASTD (1997) research that customer satisfaction is the most commonly measured outcome, tracked by 69 per cent of organisations. Surprisingly, profit-making organisations were less likely to track this measure than non-profit organisations. Job satisfaction was a distant second at 38 per cent (63 per cent among health-care firms). Organisations in the finance, real estate and insurance sectors were more apt to measure return on expectations and sales as outcomes than other organisations.

Figure 4 Measuring training output

Evaluation challenges

The evaluation process for assessing T&D effectiveness is not easy; it requires special techniques, financial resources and the availability of the required information. However, there are some challenges which might hinder the evaluation process. The respondents were asked to determine the most important evaluation challenges that they face in conducting sound evaluation. More than 50 per cent in both sectors believe that finding evaluation methods that suit a variety of courses, the cost of doing evaluations well, translating evaluation results into top management's language and determining specific actions to take based on evaluation results are the most important challenges they face (Table VI). These are in addition to determining the impact of training on financial performance, the time required to do evaluations well, identifying appropriate quantitative and qualitative measures, getting top management buy-in, finding qualified measurement and evaluation professionals, and getting trainees and managers to participate in evaluations.

Table V Training output measurement

Output                    Sector        Yes (%)  No (%)  Total (%)
Customer satisfaction     Government       69      31      100
                          Private          92       8      100
Job satisfaction          Government      100       0      100
                          Private          95       5      100
Productivity              Government      100       0      100
                          Private          97       3      100
Return on expectations    Government       19      81      100
                          Private          27      73      100
Sales                     Government       18      82      100
                          Private          76      24      100
Return on investment      Government       25      75      100
                          Private          24      76      100
Cost/benefit ratio        Government       19      81      100
                          Private          16      84      100
Profitability             Government       18      82      100
                          Private          84      16      100
Absenteeism               Government       80      20      100
                          Private          22      78      100


Table VI Evaluation challenges

                               Small extent (%)  Considerable extent (%)  Great extent (%)  Do not know (%)  Total (%)
Cost of doing evaluations well
  Government                         23                                         70                 7           100
  Private                            14                                         86                             100
Determining the impact of training on financial performance
  Government                         20                  18                     57                 5           100
  Private                             5                  41                     54                             100
Time required to do evaluations well
  Government                         18                  28                     54                             100
  Private                            14                                         86                             100
Identifying appropriate quantitative measures
  Government                         10                  20                     62                 8           100
  Private                             8                  22                     54                16           100
Identifying appropriate qualitative measures
  Government                         10                  10                     57                23           100
  Private                            11                  30                     37                22           100
Finding evaluation methods that suit a variety of courses
  Government                         13                                         87                             100
  Private                             3                  22                     75                             100
Getting trainees and managers to participate in evaluations
  Government                         33                  18                     49                             100
  Private                            16                  14                     65                 5           100
Getting top management buy-in
  Government                          8                  28                     64                             100
  Private                            24                  16                     60                             100
Finding qualified measurement and evaluation professionals
  Government                         10                  35                     65                             100
  Private                            35                  19                     38                 8           100
Translating evaluation results into top management's language
  Government                         28                                         72                             100
  Private                             3                  24                     70                 3           100
Determining specific actions to take based on evaluation results
  Government                         30                                         70                             100
  Private                             8                  11                     78                 3           100
The challenges of translating evaluation results into top management's language and determining specific actions to take based on evaluation results are consistent with the previous finding. This indicates that evaluation activity in Kuwaiti organisations is not well designed to meet their objectives, and also shows that no action is taken to improve training activity and training results based on the evaluation, because the organisations cannot determine what kind of action they should take based on its results (see Table VII). This result also contrasts with Al-Ali (1998, p. 165), who stated that there is a big gap between the training department and employees in relation to training needs analysis and training evaluations. In addition, the smaller evaluation challenges facing private organisations are identifying appropriate qualitative measures and finding qualified measurement and evaluation professionals, cited by 37 per cent and 38 per cent of the private organisations sample respectively, as Table VI shows.

Difficulty in obtaining information needed for evaluation

HRD professionals who must perform training evaluations rely on a variety of resources to do their work. However, not all the information they require to use these resources is widely available (Figure 5). The respondents were therefore asked whether they faced any difficulty in obtaining the information needed for evaluations. Their answers indicate that 75 per cent of government and 59.5 per cent of private organisations face difficulty in obtaining the information needed.



Figure 5 Difficulty in obtaining information needed for evaluation

Information difficult to obtain

More than 60 per cent of the entire sample for both government and private organisations faced some degree of difficulty in obtaining the following (Table VII):
. Information on the latest advances in measurement and evaluation.
. Information about tools/methods for benchmarking training outcomes against other companies or organisations.
. Information on what other companies or HRD executives are doing.
. Information on measurement and evaluation tools themselves.
. Information on how to conduct sound measurement and evaluation.
. Information on the evaluation skills HRD professionals will need in the future.
. Information on all the measurement and evaluation resources available.
. Information about outside providers of measurement and evaluation assistance.

This finding is also consistent with Al-Ali (1999, pp. 7-38), who stated:

The most important challenges facing the Kuwaiti organisations are difficulties in measuring performance improvement in certain jobs (services), difficulties in measuring the change in behaviour of individuals over a short period of time, and the absence of a follow-up process after T&D programmes. Other difficulties such as lack of knowledge about the evaluation process seem to exist in the government sector more than in the private sector.

For the purpose of comparison, respondents to the ASTD (1997) survey reported that information on the latest advances in evaluation and training was the most difficult to locate, followed closely by information about how to benchmark their training outcomes against other companies. Executives whose organisations used the Kirkpatrick model were also likely to perceive a lack of information on evaluation and measurement geared towards HRD executives.

Table VII Kinds of information difficult to obtain

                               Small extent (%)  Considerable extent (%)  Great extent (%)  Do not know (%)  Total (%)
Information on the latest advances in measurement and evaluation
  Government                                              27                     73                            100
  Private                                                 23                     77                            100
Tools/methods for benchmarking training outcomes against other companies or organisations
  Government                                              27                     73                            100
  Private                             5                   18                     72                 5          100
Information on what other companies or HRD executives are doing
  Government                         17                   17                     66                            100
  Private                             9                   23                     65                 5          100
Information on measurement and evaluation tools themselves
  Government                         33                   10                     57                            100
  Private                            14                   18                     68                            100
Information on how to conduct sound measurement and evaluation
  Government                          7                   50                     43                            100
  Private                            14                   27                     59                            100
Information on the evaluation skills HRD professionals will need in the future
  Government                         17                   20                     63                            100
  Private                            27                   23                     50                            100
Information on all the measurement and evaluation resources available
  Government                         22                   26                     52                            100
  Private                             5                   32                     63                            100
Information about outside providers of measurement and evaluation assistance
  Government                         17                   33                     50                            100
  Private                            23                   18                     59                            100



Conclusion
A minority in the government sector and the majority in the private sector believe that training evaluation is the single most important factor in the success of their training system. However, the majority in both the government and the private sector only occasionally evaluate their training programmes. The most popular evaluation tool and technique used by the government and private sectors was the questionnaire, followed by observation and performance records. The most common model used by Kuwaiti organisations is the Kirkpatrick model, while the most common level of evaluation for both the government and the private sector is reaction. Furthermore, almost the entire study sample measured their training input, which included measuring the total training expenditure, the number of employees receiving training, the number of courses offered to employees, payments to outside training providers, trainee travel expenses, training expenditure per employee, total training time/days and the amount paid for training facilities and equipment. On the other hand, all the private and a minority of the government organisations measured their training output, which includes employees' job satisfaction, productivity, employees' absenteeism and customer satisfaction. The most important evaluation challenges that deter Kuwaiti organisations from conducting sound evaluation were as follows: finding evaluation methods that suit a variety of courses, the cost of doing evaluations well, translating evaluation results into top management's language and determining specific actions to take based on evaluation results. The study also revealed that the majority in both sectors face difficulty in obtaining the information needed for evaluations.

References

Al-Ali, A. (1999), "HRD training and development practices and related organisational factors in Kuwaiti organisations", PhD thesis, University of Bradford, Bradford.
Al-Ali, S. (1998), Scientific Research, Kuwait Library Co., Kuwait.
Al-Muraifea, K.M. (1993), "Employees' training programs of public authority for applied education and training", PhD thesis, University of Hull, Hull.
ASTD (1997), "Training industry trend", November, American Society for Training and Development, Alexandria, VA, available at: www.ASTD.com

Buren, E. and Bassi, J. (1999), Sharpening the Leading Edge, Report code 99ASTDIR, American Society for Training and Development, Alexandria, VA.
Camp, R., Blanchard, P. and Huszczo, E. (1986), Towards a More Organisational Effective Training Strategy and Practice, Areston Books, Englewood Cliffs, NJ.
Hamblin, A.C. (1974), The Evaluation and Control of Training, McGraw-Hill Book Company, Maidenhead and London.
Hames, R.D. (1991), "Dynamic learning: a quality approach to quality training", Total Quality Management, Vol. 2 No. 1, pp. 39-44.
Kearns, P. and Miller, T. (1996), Measuring the Impact of Training and Development on the Bottom Line, Technical Communication Publishing.
Kenney, J. and Reid, M. (1986), Training Interventions, Institute of Personnel Management, London.
Kirkpatrick, D. (1986), "Do training classes change attitudes?", Personnel, Vol. 63 No. 7, pp. 11-15.
Laird, D. (1978), Approaches to Training and Development, Addison-Wesley, London.
McClelland, S. (1994), "A model for designing objective-oriented training evaluations", Industrial and Commercial Training, Vol. 26 No. 1, pp. 3-9.
McDougall, N. (1990), "Management training evaluation for decision making", MSc dissertation, UMIST, Manchester.
Magdy, A. (1999), "Measuring and evaluating salesforce training effectiveness", PhD thesis, Old Dominion University, Norfolk, VA.
Manpower Services Commission (1981), Glossary of Training Terms, HMSO, London.
Ministry of Planning (1998), "Annual statistical abstract", Statistics and Information Sector, Kuwait Times, Edition 34, Kuwait.
Nadler, L. (1970), Developing Human Resources, Gulf, Houston, TX.
Newman, J. (1999), "An evaluation of a professional development technology training program as reported by selected school administration", PhD thesis, University of Sarasota, Sarasota, FL.
Oakland, J.S. (1993), Total Quality Management, 2nd ed., Butterworth-Heinemann, Oxford.
Rae, L. (1986), How to Measure Training Effectiveness, Gower Publishing, Aldershot.
Robinson, D. and Robinson, J. (1989), Training for Impact, Jossey-Bass Publishers, San Francisco, CA.
Robinson, K.R. (1985), A Handbook of Training Management, Kogan Page, London.
Saari, L., Jo, T., McLaughlin, S. and Zimmerle, D. (1988), "A survey of management training and education practices in US companies", Personnel Psychology, Vol. 41 No. 4, pp. 731-43.

