
THE TRAINING CYCLE AND THE TRAINING QUALITY STANDARDS (TQS) AT QUEENS BOROUGH PUBLIC LIBRARY

The Queens Borough Public Library serves a book-hungry population of 2.2 million from 63 locations plus 6 Adult Learning Centers. It has circulated more books and other library materials than any other library system in the country since 1994. It has 980 full-time employees, 600 part-time employees, and about 1,000 volunteers. Developing a learning event here, as in any organization, requires planning, execution, and follow-up or revision. We can describe these phases as an inter-related system or continuous cycle with five phases:

1. Training Needs Analysis (TNA). TNA has two purposes: a) to determine that the training is needed, and b) to make certain that the training is based on reliable and identified training requirements.

2. Planning. How to plan the training activity depends on the number of people to be trained, the technical complexity of the training, the quality, suitability and cost of outside training, and the availability and skills of staff to do the necessary design work and implement the training plan.

3. Development. In this phase the designer develops the learning objectives and selects training methods, tools, and timing appropriate for the skills to be learned and the learners participating.

4. Delivery. The trainers conduct the actual training; the trainer monitors the progress and responses of the learners, attempting to evaluate the effectiveness of the design and delivery.

5. Evaluation. The training program is evaluated and feedback is gathered for updating or revising the training design. This is typically the most neglected phase of the training cycle.

Assessing training effectiveness often entails using the four-level model developed by Donald Kirkpatrick [Kirkpatrick, D.L. (1994). Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler]. According to this model, evaluation should always begin with level one and then, as time and budget allow, move sequentially through levels two, three, and four.

The four levels are:

Level 1 - Reaction: how participants have reacted to the program. Reaction outcomes are measures of the trainees' perceptions, emotions, and subjective evaluations of the training experience. This level is not indicative of the training's performance potential because it does not measure what new skills the learners have acquired or what they have learned that will transfer back to the working environment. However, the interest, attention, and motivation of the participants are critical to the success of any training program: people learn better when they react positively to the learning environment.

Level 2 - Learning: what participants have learned from the program. Learning outcomes are measured against the requisite learning objectives and the overall learning needs. The learning evaluation requires post-testing to ascertain what skills were learned during the training.

Level 3 - Behavior: whether what was learned is being applied on the job. Job behavior outcomes are the true measure of how effective the training was. This evaluation involves testing the trainees' capabilities to perform the learned skills on the job rather than in the classroom.

Level 4 - Results: whether that application is achieving results. Organizational outcomes are the highest level in the hierarchy and are concerned with how well the training solved the organizational problems. While it is often difficult to isolate the results of a training program, it is usually possible to link training contributions to organizational improvements. Kirkpatrick defines Level 4 evaluation as the results linked to training. These results could take the form of reduced absenteeism and turnover, quality improvement, productivity gains, or even cost reduction.

However, over time the need to measure the dollar-value impact of training became so important to organizations that a fifth level was added by Dr. Phillips, who outlines his approach to Level 5 in his book Return on Investment in Training and Performance Improvement Programs (Butterworth-Heinemann, Woburn, MA, 1997). In the revised model, the fifth level of evaluation is developed by collecting Level 4 data, converting the data to monetary values, and comparing them to the cost of the program to represent the return on the training investment.
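
As an illustration of that Level 5 calculation, the sketch below computes the return on investment from hypothetical benefit and cost figures (the numbers and the function name are ours, not part of the standard); the underlying arithmetic is simply ROI (%) = (net program benefits / program costs) x 100.

    def training_roi(monetary_benefits, program_costs):
        """Return ROI (%) for a training program.

        monetary_benefits: Level 4 results converted to dollar values
        program_costs: fully loaded cost of designing and delivering the program
        """
        net_benefits = monetary_benefits - program_costs
        return (net_benefits / program_costs) * 100

    # Hypothetical example: $60,000 in measured benefits against $40,000 in costs
    print(training_roi(60000, 40000))  # 50.0, i.e. a 50% return on the investment
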
The American Society for Training and Development (ASTD) offers a Benchmarking Service (BMS) to all types of organizations: private, public, non-profit, government, etc. The accumulated entries in the database are also used by ASTD Research as the foundation for its annual summary review of statistical trends in training, known as the State of the Industry Report (SOIR).

In a study done by ASTD and DDI, the participants were asked to report on their evaluation methods using Kirkpatrick's four levels of evaluation. These are traditionally described in sequence from the most basic: the Reaction level (1), the Learning level (2), the Behavior level (3), and the Results level (4). Typically, a very large majority of BMS organizations report using level 1 (Reaction) measures for evaluation, but after that there is a significant fall-off in use of the higher levels. Within the larger group of BMS organizations in 2001, 91% reported using level 1, whereas 36% used level 2, 17% used level 3, and only 9% used level 4. (Training projections for 2003: Results of an ASTD/DDI poll - available at http://www.americanpressinstitute.org/trainer/ASTD_report.pdf)

Like many organizations, QBPL faced problems using levels 3 and 4 for the evaluation of its training. In addition, training events are planned, designed, and delivered by different departments: Information Technology and Systems, Programs and Services, Extension Services, and Training and Development. Each brought different types of learning objectives (if any), different designs, structures, delivery methods, and so on. For these reasons we felt a need for standards to assure the consistency and effectiveness of training throughout the system, and in the summer of 2000 we started to develop our Training Quality Standards. We started from a simplified approach to quality fundamentals:

- The customer establishes the definition of quality.
- If measurements are not performed, then improvements cannot be made.
- Do it right the first time!

We gathered a team with representatives from different departments and started our journey toward developing our Training Quality Standards. Training Standards provide criteria to evaluate the effectiveness of a training program. They enable managers, developers, and trainers to approach training projects with a common language and establish guidelines that assist all parties in achieving the greatest return from the investment made in training. They apply to any event designed to improve organizational results through the enhancement of knowledge and skills, and to all training conducted in our organization. This includes existing programs and those under consideration for development or purchase.

Standard 1 - Learning Objectives

1. Before training: The developer:

a) Writes the Learning Objectives (module and/or unit): Given (insert the conditions), the (state the job title of the performer) will be able to (describe the performance) that (describe the principal criteria) within (state the time limit for the performance). (A small illustrative sketch of this template follows this standard.)

b) Organizes the Learning Objectives into a plan or agenda that includes the sequence of events and the training time allocated to each objective.

2. During training: The trainer:

a) At Program Introduction: Displays the objectives and reaches agreement with the trainees on the relevance of the objectives to their needs.

b) At Program Conclusion: Reviews the objectives with the trainees to determine whether they have been achieved.

Rationale: Objectives provide the basis for:

1) Before training: Conducting a needs analysis, determining program content, selecting instructional techniques, and allocating time and resources.

2) During training: Focusing the trainees' effort toward the acquisition of the desired behavior, and assessing the program's success in meeting its objectives and, as a direct result, the trainees' needs.

3) After training: Evaluating the program's success in terms of its impact on productivity (see Standard 5).
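
The sketch below is one way to picture the Standard 1 objective template as a structured record; the field names and the filled-in sample objective are purely hypothetical illustrations, not taken from the standard itself.

    # Assemble a learning objective from the Standard 1 template:
    # "Given (conditions), the (performer) will be able to (performance)
    #  that (criteria) within (time limit)."
    def learning_objective(conditions, performer, performance, criteria, time_limit):
        return (f"Given {conditions}, the {performer} will be able to "
                f"{performance} that {criteria} within {time_limit}.")

    # Hypothetical example of a filled-in objective
    print(learning_objective(
        conditions="a patron request at the reference desk",
        performer="librarian",
        performance="complete a materials-hold form",
        criteria="contains no errors",
        time_limit="five minutes",
    ))
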
Standard 2 - Needs Analysis

1) Before training: The developer collects behavioral data on the performance of the population targeted for training (as much data as is practically and economically feasible). A report should contain the data collected, its analysis, and the conclusions drawn. The data will be collected by: observation, response to a simulated situation, surveying employees or customers, interviews or surveys with the immediate managers, and interviews or surveys with the performers.

2) During training: The instructor can:

a) Give the trainees problems to solve that reveal their present competence with the knowledge or skill to be taught, which will produce a baseline performance. This could require pre-class assignments, cases, or practice exercises prior to conducting the training.

b) Ask the students to describe their perceived knowledge or skill discrepancies, or the problems they are experiencing in a given area.

Rationale:

1) Needs Analysis focuses the training on the knowledge and skills needed to perform by identifying the actual discrepancies in current performance (see the sketch below).

2) Learning occurs most effectively when the trainee perceives a problem and wants to clear up his/her uneasiness about the problem.
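
A minimal sketch of how such a performance discrepancy might be summarized from collected behavioral data; the observation records, variable names, and target value are hypothetical and only illustrate the gap-analysis idea behind this standard.

    # Each record is one observed performance, scored 1 (meets the desired
    # behavior) or 0 (does not); the target is the desired success rate.
    observations = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical observation data
    target_rate = 0.90                        # hypothetical desired performance level

    current_rate = sum(observations) / len(observations)
    discrepancy = target_rate - current_rate

    print(f"Current performance: {current_rate:.0%}")
    print(f"Performance discrepancy: {discrepancy:.0%}")  # gap the training must close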

Standard 3 - Modeling

There are examples or models of the desired behavior during the training program. This Standard can be met as follows:

1. Before training: This depends on the type of behavior to be exhibited:

a) Knowledge-based: The developer can prepare sample problems and solutions.

b) Interactive: The developer can either: 1) prepare for a live demonstration (creating role plays, or the trainer can competently demonstrate the desired behavior), or 2) produce a video of the desired behavior.

2. During training: The trainer can:

a) In knowledge-based training, show sample problems and solutions; in interactive training, conduct a demonstration of the desired behavior in a role-play situation.

b) Discuss the relevance, importance, and structure of each of the units of knowledge and skill that enable the trainee to perform the desired behavior.

Rationale:

1. People commit their energies to that which they perceive as relevant.

2. Examples and models provide graphic illustrations which permit the trainee to structure and compare the desired behavior to his/her own experience.

Standard 4 - Performance Evaluation

This is evaluation of the trainee's performance of the desired behavior to determine competence during training. This Standard can be met as follows:

1. Before training, the developer can:

a) Provide adequate time during the program for each student to perform the desired behavior.

b) Establish Expert Criteria to measure the desired behavior.

c) Establish a rating scale to measure levels of competence.

d) Create an Evaluation Form to record ratings and comments.

e) Establish a performance standard (a minimum level of competence by which the success of both the course and the trainee can be evaluated).

f) Create cases (if applicable) to simulate situations likely to be encountered on the job.

2. During training, the trainer can:

a) Employ methods that permit the trainees to evaluate their own performance: 1) have trainees identify both strengths and improvement opportunities; 2) use the criteria, evaluation forms, and rating scale to measure competence; 3) (interactive behavior) videotape the performance, which allows the trainees to see the performance rather than recall it from memory.

b) Observe, evaluate, and comment on trainee performance.

c) Ensure that multiple performances and evaluations take place (if time is allocated).

Rationale: Evaluating performance allows both the trainee and the trainer to determine whether learning has taken place. Evaluation increases the trainees' learning motivation by creating accountability for the acquisition of the desired behavior.
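
As a rough illustration of how the rating scale and performance standard in this standard might be applied, the sketch below averages criterion ratings for one trainee and checks them against a minimum competence level; the criteria, scale, and threshold are hypothetical.

    # Hypothetical Expert Criteria ratings on a 1-5 competence scale
    ratings = {
        "identifies the patron's need": 4,
        "selects the correct procedure": 3,
        "completes the task accurately": 5,
    }
    performance_standard = 3.5  # hypothetical minimum average rating to pass

    average = sum(ratings.values()) / len(ratings)
    print(f"Average rating: {average:.1f}")
    print("Meets performance standard" if average >= performance_standard
          else "Needs further practice")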

Standard 5 - Post Training Evaluation

This is obtaining evidence that the desired behavior has persisted after training and is linked to results. The Standard requires evaluation of on-the-job performance. To meet this Standard, the developer must collect behavioral data on the population trained. The process uses the same techniques as those employed in a needs analysis. A written report is prepared based on the data; it should contain the data collected, its analysis, the conclusions drawn, and an action plan for correcting, if applicable, any performance discrepancies identified. Rationale: Post Training Evaluation shows that either:

1. Participants are using the skills/knowledge learned during training to produce or improve results, or

2. The problem persists. In this case, the process acts as a needs analysis and provides data for modification of the training or of the desired behavior.

TRAINING QUALITY STANDARDS EVALUATION

Program Name ___________________________________________________________
Program Length ________ Decision Maker ____________________________________
Developer _____________________ Development Date __________________________
Trainer(s) _______________________________________________________________
Trainees' Job Title _________________ Average Class Size ________________ trainees
Evaluated by _________________________ Evaluation Date ______________________

Instructions

1.0 Score and describe how the program meets each Standard (on the following page):
10.0 = Meets
5.0 = Questionable
0.0 = Does Not Meet or Not Evident
N = Not Applicable (only for Standard 2 item 3)

2.0 Attach a copy of the learning objectives.

Total Quality Standards Score

1. Learning objectives which describe the desired behavior(s) to be exhibited during the program. ______
2. Evidence of a performance discrepancy in relation to the desired behavior resulting from a lack of skill or knowledge. (Needs Analysis) ______
3. Examples or models of the desired behavior during the training program. ______
4. Evaluation of student performance of the desired behavior to determine competence during the training. ______
5. Evidence that the desired behavior has persisted after training and is linked to results. (Post Training Evaluation) ______

Total (Divide by 5) ______

EFFECTIVENESS RATING ______
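
A small sketch of the scoring arithmetic described above: each standard is scored 10.0 (meets), 5.0 (questionable), or 0.0 (does not meet), and the total is divided by 5 to give the effectiveness rating; the example scores below are hypothetical.

    # Scores for the five Training Quality Standards (hypothetical example)
    scores = {
        "1 Learning Objectives": 10.0,
        "2 Needs Analysis": 5.0,
        "3 Modeling": 10.0,
        "4 Performance Evaluation": 10.0,
        "5 Post Training Evaluation": 0.0,
    }

    effectiveness_rating = sum(scores.values()) / 5
    print(f"Effectiveness rating: {effectiveness_rating:.1f}")  # 7.0 on the 0-10 scale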

Justification of Ratings

1. Learning objectives which describe the desired behavior(s) to be exhibited during the program. Attach a copy.

2. Evidence of a performance discrepancy in relation to the desired behavior resulting from a lack of skill or knowledge. (Needs Analysis)

3. Examples or models of the desired behavior during the training program.

4. Evaluation of student performance of the desired behavior to determine competence during the training.

5. Evidence that the desired behavior has persisted after training and is linked to results. (Post Training Evaluation)
