
Management

Asim Tufail (Group Chief, Consumer & Personal Banking)
Fareed Vardag (Chief Risk Officer)
Iqbal Zaidi (Group Chief, Compliance)
Mohammad Abbas Sheikh (Group Chief, Special Assets Management)
Mohammad Aftab Manzoor (Chief Executive Officer)
Muhammad Jawaid Iqbal (Group Chief, Corporate & Investment Banking)
Muhammad Shahzad Sadiq (Group Chief, Audit & CRR)
Muhammad Yaseen (Group Chief, Treasury)
Mujahid Ali (Group Chief, Information Technology)
Shafique Ahmed Uqaili (Group Chief, Human Resources)
Khawaja Mohammad Almas (Head, Core Banking Projects)
Tahir Hassan Qureshi (Chief Financial Officer)
Tariq Mehmood (Group Chief, Operations)
Waheed ur Rehman (Company Secretary)
Zia Ijaz (Group Chief, Commercial & Retail Banking)
Privatisation Commission, Government of Pakistan
Annexure B: ABL Organizational Structure
[Organizational chart: Khalid Sherwani (President), with M. Naveed Masud; reporting units include Islamic Banking & Planning; Audit & Inspection; International Division; Treasury; Regional Offices (16); Establishment; Human Resources; Business Promotion; Special Assets Management; Credit; Finance; and Information Technology.]
Comparative Analysis of Domestic Banking Industry of Pakistan

(Rs. million)

Bank          Deposits    Advances    Investments
ACB           51,732      30,035      26,759
BAH           34,240      23,775      18,831
BOP           23,767      6,621       8,295
BB            7,761       3,298       1,328
FB            24,554      21,935      6,842
UNION BANK    328,182     167,523     142,877
KB            2,640       490         2,118
MB            5,079       3,532       856
Metro         28,515      19,444      15,013
MCB           182,706     78,924      89,610
NBP           362,866     140,547     143,525
PCB           21,155      10,876      10,306
PB            14,640      9,016       7,534
SPB           12,341      8,522       6,365
SB            20,545      11,378      9,844
UB            37,760      28,890      11,822
UBL           154,915     74,117      69,385

 
A performance appraisal, employee appraisal, performance review, or (career) development discussion[1] is a method by which the job performance of an employee is evaluated (generally in terms of quality, quantity, cost, and time), typically by the corresponding manager or supervisor[2]. A performance appraisal is a part of guiding and managing career development. It is the process of obtaining, analyzing, and recording information about the relative worth of an employee to the organization.


Aims
Generally, the aims of a performance appraisal are to:

• Give employees feedback on their performance


• Identify employee training needs
• Document criteria used to allocate organizational rewards
• Form a basis for personnel decisions: salary increases, promotions, disciplinary
actions, bonuses, etc.
• Provide the opportunity for organizational diagnosis and development
• Facilitate communication between employee and administration
• Validate selection techniques and human resource policies to meet federal Equal
Employment Opportunity requirements

Methods
A common approach to assessing performance is to use a numerical or scalar rating system whereby managers are asked to score an individual against a number of objectives/attributes (a minimal scoring sketch follows the method list below). In some companies, employees receive assessments from their manager, peers, subordinates, and customers, while also performing a self-assessment. This is known as a 360-degree appraisal and helps establish good communication patterns.
The most popular methods used in the performance appraisal process include the following:

• Management by objectives
• 360-degree appraisal
• Behavioral observation scale
• Behaviorally anchored rating scales
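
To make the scalar rating approach concrete, the following is a minimal sketch in Python; the attribute names, weights, and scores are invented for illustration, not taken from any of the methods above. Each rater scores the employee from 1 to 5 on each attribute, and the weighted scores are averaged across raters in 360-degree fashion.

# Minimal sketch of a scalar (1-5) rating system with 360-degree input.
# Attribute names, weights, and scores are hypothetical illustrations.

RATING_WEIGHTS = {"quality": 0.4, "quantity": 0.3, "cost": 0.2, "timeliness": 0.1}

def weighted_score(ratings: dict[str, int]) -> float:
    """Combine one rater's 1-5 scores into a single weighted score."""
    return sum(RATING_WEIGHTS[attr] * score for attr, score in ratings.items())

def appraisal_360(all_ratings: list[dict[str, int]]) -> float:
    """Average the weighted scores from manager, peers, self, etc."""
    return sum(weighted_score(r) for r in all_ratings) / len(all_ratings)

raters = [
    {"quality": 4, "quantity": 3, "cost": 4, "timeliness": 5},  # manager
    {"quality": 5, "quantity": 4, "cost": 3, "timeliness": 4},  # peer
    {"quality": 4, "quantity": 4, "cost": 4, "timeliness": 4},  # self-assessment
]
print(f"360-degree appraisal score: {appraisal_360(raters):.2f}")  # prints 4.00

In practice, the attributes and weights would be derived from job analysis rather than chosen arbitrarily, and the averaged score would be only one input to the appraisal discussion.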

Trait-based systems, which rely on factors such as integrity and conscientiousness, are
also commonly used by businesses. The scientific literature on the subject provides
evidence that assessing employees on factors such as these should be avoided. The
reasons for this are two-fold:

1) Because trait-based systems are by definition based on personality traits, they make it
difficult for a manager to provide feedback that can cause positive change in employee
performance. This is caused by the fact that personality dimensions are for the most part
static, and while an employee can change a specific behavior they cannot change their
personality. For example, a person who lacks integrity may stop lying to a manager
because they have been caught, but they still have low integrity and are likely to lie again
when the threat of being caught is gone.

2) Trait-based systems, because they are vague, are more easily influenced by office
politics, causing them to be less reliable as a source of information on an employee's true
performance. The vagueness of these instruments allows managers to fill them out based
on who they want to/feel should get a raise, rather than basing scores on specific
behaviors employees should/should not be engaging in. These systems are also more
likely to leave a company open to discrimination claims because a manager can make
biased decisions without having to back them up with specific behavioral information.

Criticism
Performance appraisals are an instrument for social control. They are annual discussions, avoided
more often than held, in which one adult identifies for another adult three improvement areas to
work on over the next twelve months. You can soften them all you want, call them development
discussions, have them on a regular basis, have the subordinate identify the improvement areas
instead of the boss, and discuss values. None of this changes the basic transaction... If the intent
of the appraisal is learning, it is not going to happen when the context of the dialogue is
evaluation and judgment.
QUESTION NO. 1 (B)

Care to step onto a business version of a land mine? Try to make sense of the economically sensitive and emotionally loaded topic of pay.
Pay is a subject loaded with emotion because it communicates an individual's value to an


organization and describes a cornpany's commitment to its employees. Employees'
attitudes about the fairness of pay affect their motivation and productivity. Yet businesses
trying to compete in the marketplace and provide profitable returns to shareholders are
under constant pressure to keep pay in line.

The ability to attract and retain quality employees who add to your bottom line depends on your ability to craft an attractive compensation package. The traditional "salary-plus-bonus, seniority-based" pay strategy is on its last legs. Fortunately, a variety of "new pay" options are offering business owners a wide array of new choices.

Since the 1980s, a new paradigm of pay determination has emerged to reflect business trends including leaner, flatter organizational structures, customer focus, quality improvement, re-engineering, and team-based work structures. Companies have struggled for years to develop individual merit pay programs, but many have come to realize that employee evaluations are too subjective and bear little relationship to how well the company is doing in achieving its financial goals. Many executives have found that while their employees may be rated above average on individual performance and may have earned corresponding merit increases, the company is actually losing market share, profits, or both. The challenge, then, is how to link individual and company performance in a way that will meet business goals.

1. Skill-based pay (rewards employees for learning and using new skills)
2. Team pay (rewards employees for solving particular business problems)
3. Gainsharing (rewards employees for creating direct benefits for the bottom line)

Sharing the risks and rewards

The principal concept behind "new pay" is that individual performance and overall organizational success are, in fact, inexorably linked. What follows is that salaries and pay increases, which are derived from company revenues, must be tied at least in part to productivity and performance improvement. New pay options shift a certain amount of bottom-line responsibility onto the employees, who also collect a greater share of the rewards of outstanding performance. But new pay strategies cannot be expected to succeed in a vacuum. Instead, pay determination should flow directly from a company's business plan. When the business plan changes, companies need to review their pay strategies, too.

A trend toward nontraditional pay programs, particularly among small and growing
companies, has emerged over the past five to eight years.
Among the array of new pay strategies evolving within companies, the most prevalent right now are skill-based pay, team pay, and gainsharing.

Paying for playing

Skill-based pay systems reward employees according to the competencies they learn and
use in the work setting. The highest value is placed on cross-trained employees who can
perform multiple functions.

This compensation system parallels traditional merit pay in that employees are evaluated
individually instead of as a team; however, raises are not automatic. Raises are granted
only when the skills an employee learns and displays enable the company to avoid
additional hires or realize other tangible benefits, such as better use of existing
employees.

Skill-based pay systems work best in an environment where on-the-job training is emphasized; where the company's business goals are communicated among all employees; and where an effort is being made to promote a strong sense of ownership rather than entitlement. The critical task of human resources in a skill-based pay scenario is to establish pay levels that not only match identifiable skills across all job descriptions, but also reflect the contribution of an employee's learned skills to the achievement of company goals.

Paying for winning

Team-pay systems place a premium on achieving specific, measurable goals, rather than
on having employees display certain skills that may be used to achieve those outcomes.
For instance, in a team pay system, groups of up to 10 employees work together to solve
specific problems or to achieve benchmark improvements such as increased customer
satisfaction. Bonuses then are awarded to individuals based on the performance of the group, provided that the team meets its objectives. A key difference between team pay and traditional merit pay is that with team pay, one employee's work affects another person's compensation. This underscores the importance of having every function within an organization contribute to the company's success in a measurable way.

Paying for playing nicely

Gainsharing systems encourage employees and managers to work together to solve problems of cost, quality, safety, or efficiency that lead to a monetary gain for the company. The company then shares the gain with employees, typically retaining 50 percent of the monetary gain and distributing the other half among members of the employee-management team. In a manufacturing environment, for example, payouts for a specified gain would be distributed not only among the line workers involved, but also among employees within business units linked to production, such as purchasing, sales, shipping, and accounts receivable.

Compared with a traditional salary-plus-bonus pay structure, gainsharing over time tends to provide employees larger financial rewards at a lower cost to the company. (Gainsharing systems have produced an average of 8 percent pay increases for employees, with companies realizing an equivalent benefit to their P&Ls.) Further, because employee payouts must be earned from year to year, gainsharing systems do not add to the fixed cost of employee base salaries, but instead are considered a variable expense.
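
The 50/50 split described above reduces to simple arithmetic. The following minimal Python sketch, with invented figures, divides a measured gain between the company and the members of the employee-management team.

# Minimal gainsharing sketch: the company keeps 50% of a measured gain
# and distributes the rest equally among team members. Figures are invented.

def gainsharing_payouts(gain: float, team_size: int, company_share: float = 0.5):
    """Split a monetary gain between the company and an employee team."""
    company_keeps = gain * company_share
    per_employee = (gain - company_keeps) / team_size
    return company_keeps, per_employee

company_keeps, per_employee = gainsharing_payouts(gain=100_000, team_size=25)
print(f"Company retains: {company_keeps:,.0f}")                  # 50,000
print(f"Each of 25 team members receives: {per_employee:,.0f}")  # 2,000

Because the payout exists only when a gain is actually measured, the expense is variable rather than fixed, which is the point made above.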

Revamping your pay plans

As new economic pressures and social patterns add complexity to compensation issues, more companies, even small businesses, are looking to outside counsel for help. Before doing this, however, ask yourself the following questions:

 What is your business plan?


Unless the company mission and goals are clearly articulated, it will be difficult to develop a compensation strategy to support them. Companies also need to understand what kinds of workers they will need in determining the company's future.
 How do we want to pay in comparison to our competitors?
Knowing what your competitors pay their employees will better position your company to
develop a strong recruiting message, regardless of how it pays in comparison. Companies
that cannot afford to pay at or above the market average for base salaries may be able to
offer rapid advancement, access to training, or bonuses and long-term incentives instead.
In fact, a growing number of highly skilled managers are leaving large corporations and
substantial salaries behind in favor of the hands-on challenges and ownership potential
offered by small companies.
 What activity do we want to reward?

If your company's business plan calls for an empowered, customer-focused workforce organized into self-directed teams, its compensation program should reinforce that goal by asking employees to help determine their own performance targets and how they will be paid for achieving them.

 How much pay should be maintained as fixed cost, and how much placed at risk?

The amount of pay at risk (pay that is tied to team or company performance) will vary depending upon an employee's position within the organization. Employees at the lower end of the pay scale cannot afford to place much pay at risk, but more substantial incentive pay helps motivate those with greater bottom-line responsibility.

 What percentage of total pay should be distributed annually versus long term?

More companies are adopting long-term employee stock ownership plans (ESOPs) or, in companies that are not publicly traded, phantom share plans that reward employees for increasing shareholder value. The value of shares awarded to employees generally ranges from one-half of base salary for lower-level support staff to four or five times base salary for top executives. However, if the company cannot afford to pay competitive base salaries or annual bonuses, those multiples should be increased to reflect business conditions.
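
Those multiples translate directly into award values. A minimal Python sketch, assuming invented level names, salaries, and an invented intermediate multiple; only the half-to-five-times range comes from the text.

# Sketch of long-term share (or phantom share) awards scaled by level,
# using the one-half to four-or-five-times-base-salary range from the text.
# Level names, salaries, and the manager multiple are hypothetical.

AWARD_MULTIPLES = {"support staff": 0.5, "manager": 1.5, "executive": 5.0}

def share_award(base_salary: float, level: str) -> float:
    """Value of shares awarded for a given base salary and level."""
    return base_salary * AWARD_MULTIPLES[level]

for level, salary in [("support staff", 30_000), ("executive", 200_000)]:
    print(f"{level}: award worth {share_award(salary, level):,.0f}")
# support staff: award worth 15,000
# executive: award worth 1,000,000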

Finally, when developing a new compensation plan, seek input from participating
employees. This proposition may be uncomfortable to think about, but asking employees
how they would like to be paid can unearth some surprisingly creative, and often workable, solutions. For example, in 1993 I was approached by the president of a medium-sized heavy-equipment dealer here in the Midwest. He said that his five departments
were not working together. Each department was meeting its goals but company profits
were not increasing. A meeting was held with the president and the five department
heads. We learned that since their bonus plans were tied only to department performance,
there was no reward for interdepartmental teamwork, nor were their performance goals
attached to company profitability.

After several more meetings, we scrapped the old bonus plan entirely, disposing of all
department targets, and instead, we crafted a new plan that rewarded sales volume and
profits company-wide with bonuses payable only after the owners had received at least a
5 percent return on invested capital. By the end of the first year, the company had
exceeded its sales targets by 30 percent and its profit targets by 50 percent. The
executives doubled their bonuses.
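
The mechanics of that plan are easy to sketch: no bonuses are paid until the owners have received at least a 5 percent return on invested capital, and only profit above that hurdle feeds the bonus pool. The minimal Python sketch below uses invented figures and an invented pool rate; only the 5 percent hurdle comes from the example.

# Sketch of the anecdote's bonus gate: company-wide bonuses are paid only
# after owners earn at least a 5% return on invested capital (ROIC).
# The pool rate and all monetary figures are invented.

def bonus_pool(profit: float, invested_capital: float,
               pool_rate: float = 0.10, hurdle: float = 0.05) -> float:
    """Return the bonus pool; zero unless the ROIC hurdle is cleared."""
    owners_return = invested_capital * hurdle
    if profit < owners_return:
        return 0.0                                # hurdle not met: no bonuses
    return (profit - owners_return) * pool_rate   # share profit above hurdle

print(bonus_pool(profit=900_000, invested_capital=10_000_000))  # 40000.0
print(bonus_pool(profit=400_000, invested_capital=10_000_000))  # 0.0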

Clearly, pay systems that require individual and team contributions to overall company
performance are here to stay. For companies, new pay systems offer greater control over
costs and profits, along with more ways to attract and retain top-notch employees. For
employees, new pay means more responsibility for personal income, along with monetary
and psychological rewards gained by contributing to the company's success.
QUESTION NO. 5(B)

Program Evaluation
Some Myths About Program Evaluation
1. Many people believe evaluation is a useless activity that generates lots of boring data
with useless conclusions. This was a problem with evaluations in the past when program
evaluation methods were chosen largely on the basis of achieving complete scientific
accuracy, reliability and validity. This approach often generated extensive data from
which very carefully chosen conclusions were drawn. Generalizations and
recommendations were avoided. As a result, evaluation reports tended to reiterate the
obvious and left program administrators disappointed and skeptical about the value of
evaluation in general. More recently (especially as a result of Michael Patton's
development of utilization-focused evaluation), evaluation has focused on utility,
relevance and practicality at least as much as scientific validity.

2. Many people believe that evaluation is about proving the success or failure of a
program. This myth assumes that success is implementing the perfect program and never
having to hear from employees, customers or clients again -- the program will now run
itself perfectly. This doesn't happen in real life. Success is remaining open to continuing
feedback and adjusting the program accordingly. Evaluation gives you this continuing
feedback.

3. Many believe that evaluation is a highly unique and complex process that occurs at a
certain time in a certain way, and almost always includes the use of outside experts.
Many people believe they must completely understand terms such as validity and
reliability. They don't have to. They do have to consider what information they need in
order to make current decisions about program issues or needs. And they have to be
willing to commit to understanding what is really going on. Note that many people regularly undertake some form of program evaluation -- they just don't do it in a formal fashion, so they don't get the most out of their efforts, or they draw conclusions that are inaccurate (some evaluators would disagree that this is program evaluation if not done methodically). Consequently, they miss precious opportunities to make more of a difference for their customers and clients, or to get a bigger bang for their buck.

So What is Program Evaluation?


First, we'll consider "what is a program?" Typically, organizations work from their
mission to identify several overall goals which must be reached to accomplish their
mission. In nonprofits, each of these goals often becomes a program. Nonprofit programs
are organized methods to provide certain related services to constituents, e.g., clients,
customers, patients, etc. Programs must be evaluated to decide if the programs are indeed
useful to constituents. In a for-profit, a program is often a one-time effort to produce a
new product or line of products.
So, still, what is program evaluation? Program evaluation is carefully collecting
information about a program or some aspect of a program in order to make necessary
decisions about the program. Program evaluation can include any or a variety of at least
35 different types of evaluation, such as for needs assessments, accreditation, cost/benefit
analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes,
etc. The type of evaluation you undertake to improve your programs depends on what
you want to learn about the program. Don't worry about what type of evaluation you need
or are doing -- worry about what you need to know to make the program decisions you
need to make, and worry about how you can accurately collect and understand that
information.

Where Program Evaluation is Helpful


Frequent Reasons:
Program evaluation can:
1. Understand, verify or increase the impact of products or services on customers or
clients - These "outcomes" evaluations are increasingly required by nonprofit funders as
verification that the nonprofits are indeed helping their constituents. Too often, service
providers (for-profit or nonprofit) rely on their own instincts and passions to conclude
what their customers or clients really need and whether the products or services are
providing what is needed. Over time, these organizations find themselves doing a lot of guessing about what would be a good product or service, and relying on trial and error to decide how new products or services could be delivered.
2. Improve delivery mechanisms to be more efficient and less costly - Over time, product or service delivery can end up being an inefficient collection of activities that are more costly than they need be. Evaluations can identify program strengths and weaknesses to improve the program.
3. Verify that you're doing what you think you're doing - Typically, plans about how to deliver services end up changing substantially as those plans are put into place. Evaluations can verify whether the program is really running as originally planned.

Other Reasons:
Program evaluation can:
4. Facilitate management's really thinking about what their program is all about, including its goals, how it meets its goals, and how it will know whether it has met its goals or not.
5. Produce data or verify results that can be used for public relations and promoting
services in the community.
6. Produce valid comparisons between programs to decide which should be retained, e.g.,
in the face of pending budget cuts.
7. Fully examine and describe effective programs for duplication elsewhere.

Basic Ingredients: Organization and Program(s)


You Need An Organization:
This may seem too obvious to discuss, but before an organization embarks on evaluating
a program, it should have well established means to conduct itself as an organization,
e.g., (in the case of a nonprofit) the board should be in good working order, the
organization should be staffed and organized to conduct activities to work toward the
mission of the organization, and there should be no current crisis that is clearly more
important to address than evaluating programs.

You Need Program(s):


To effectively conduct program evaluation, you should first have programs. That is, you need a strong impression of what your customers or clients actually need. (You may have used a needs assessment to determine these needs -- itself a form of evaluation, but usually the first step in a good marketing plan.) Next, you need some effective methods to meet each of those needs. These methods are usually in the form of programs.

It often helps to think of your programs in terms of inputs, process, outputs and
outcomes. Inputs are the various resources needed to run the program, e.g., money,
facilities, customers, clients, program staff, etc. The process is how the program is carried
out, e.g., customers are served, clients are counseled, children are cared for, art is created,
association members are supported, etc. The outputs are the units of service, e.g., number
of customers serviced, number of clients counseled, children cared for, artistic pieces
produced, or members in the association. Outcomes are the impacts on the customers or
on clients receiving services, e.g., increased mental health, safe and secure development,
richer artistic appreciation and perspectives in life, increased effectiveness among
members, etc.
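
One simple way to keep these four elements straight for each program is to record them in a small data structure. The sketch below is a minimal Python illustration; the counseling program and its entries are hypothetical.

# A minimal data structure for the inputs/process/outputs/outcomes model.
# The example program and its entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ProgramLogicModel:
    name: str
    inputs: list[str] = field(default_factory=list)    # resources consumed
    process: list[str] = field(default_factory=list)   # how the program runs
    outputs: list[str] = field(default_factory=list)   # units of service
    outcomes: list[str] = field(default_factory=list)  # impacts on clients

counseling = ProgramLogicModel(
    name="Client counseling",
    inputs=["funding", "counselors", "facilities"],
    process=["clients are counseled in weekly sessions"],
    outputs=["number of clients counseled"],
    outcomes=["increased mental health", "greater self-reliance"],
)
print(counseling.outputs, counseling.outcomes)

Writing the model down this way makes the later distinction between outputs and outcomes (see the outcomes-based evaluation section below) harder to blur.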

Planning Your Program Evaluation


Depends on What Information You Need to Make Your Decisions and
On Your Resources.
Often, management wants to know everything about their products, services or programs.
However, limited resources usually force managers to prioritize what they need to know
to make current decisions.

Your program evaluation plans depend on what information you need to collect in order
to make major decisions. Usually, management is faced with having to make major
decisions due to decreased funding, ongoing complaints, unmet needs among customers
and clients, the need to polish service delivery, etc. For example, do you want to know
more about what is actually going on in your programs, whether your programs are
meeting their goals, the impact of your programs on customers, etc? You may want other
information or a combination of these. Ultimately, it's up to you.

But the more focused you are about what you want to examine by the evaluation, the
more efficient you can be in your evaluation, the shorter the time it will take you and
ultimately the less it will cost you (whether in your own time, the time of your employees
and/or the time of a consultant).

There are trade offs, too, in the breadth and depth of information you get. The more
breadth you want, usually the less depth you get (unless you have a great deal of
resources to carry out the evaluation). On the other hand, if you want to examine a certain
aspect of a program in great detail, you will likely not get as much information about
other aspects of the program.

Those starting out in program evaluation, or who have very limited resources, can use various methods to get a good mix of breadth and depth of information. They can both understand more about certain areas of their programs and not go bankrupt doing so.

Key Considerations:
Consider the following key questions when designing a program evaluation.
1. For what purposes is the evaluation being done, i.e., what do you want to be able to
decide as a result of the evaluation?
2. Who are the audiences for the information from the evaluation, e.g., bankers, funders, board, management, staff, customers, clients, etc.
3. What kinds of information are needed to make the decision you need to make and/or
enlighten your intended audiences, e.g., information to really understand the process of
the product or program (its inputs, activities and outputs), the customers or clients who
experience the product or program, strengths and weaknesses of the product or program,
benefits to customers or clients (outcomes), how the product or program failed and why,
etc.
4. From what sources should the information be collected, e.g., employees, customers,
clients, groups of customers or clients and employees together, program documentation,
etc.
5. How can that information be collected in a reasonable fashion, e.g., questionnaires,
interviews, examining documentation, observing customers or employees, conducting
focus groups among customers or employees, etc.
6. When is the information needed (so, by when must it be collected)?
7. What resources are available to collect the information?

Some Major Types of Program Evaluation


When designing your evaluation approach, it may be helpful to review the following
three types of evaluations, which are rather common in organizations. Note that you
should not design your evaluation approach simply by choosing which of the following
three types you will use -- you should design your evaluation approach by carefully
addressing the above key considerations.

Goals-Based Evaluation
Often programs are established to meet one or more specific goals. These goals are often
described in the original program plans.
Goal-based evaluations assess the extent to which programs are meeting predetermined goals or objectives. Questions to ask yourself when designing an evaluation to see if you reached your goals are:
1. How were the program goals (and objectives, if applicable) established? Was the
process effective?
2. What is the status of the program's progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the program
implementation or operations plan? If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities, training, etc.) to
achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals?
(Depending on the context, this question might be viewed as a program management
decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes - know why
efforts are behind schedule before timelines are changed)?
7. How should goals be changed (be careful about making these changes - know why
efforts are not achieving the goals before changing the goals)? Should any goals be added
or removed? Why?
8. How should goals be established in the future?

Process-Based Evaluations
Process-based evaluations are geared to fully understanding how a program works -- how it produces the results that it does. These evaluations are useful if programs are long-standing and have changed over the years, if employees or customers report a large number of complaints about the program, or if there appear to be large inefficiencies in delivering program services. They are also useful for accurately portraying to outside parties how a program truly operates (e.g., for replication elsewhere).

There are numerous questions that might be addressed in a process evaluation. These
questions can be selected by carefully considering what is important to know about the
program. Examples of questions to ask yourself when designing an evaluation to
understand and/or closely examine the processes in your programs are:
1. On what basis do employees and/or the customers decide that products or services are
needed?
2. What is required of employees in order to deliver the product or services?
3. How are employees trained about how to deliver the product or services?
4. How do customers or clients come into the program?
5. What is required of customers or clients?
6. How do employees select which products or services will be provided to the customer
or client?
7. What is the general process that customers or clients go through with the product or
program?
8. What do customers or clients consider to be strengths of the program?
9. What do staff consider to be strengths of the product or program?
10. What typical complaints are heard from employees and/or customers?
11. What do employees and/or customers recommend to improve the product or
program?
12. On what basis do employees and/or the customer decide that the product or services are no longer needed?

Outcomes-Based Evaluation
Program evaluation with an outcomes focus is increasingly important for nonprofits and increasingly asked for by funders. An outcomes-based evaluation facilitates your asking whether your organization is really doing the right program activities to bring about the outcomes you believe (or, better yet, you've verified) to be needed by your clients (rather than just engaging in busy activities which seem reasonable to do at the time). Outcomes are
benefits to clients from participation in the program. Outcomes are usually in terms of
enhanced learning (knowledge, perceptions/attitudes or skills) or conditions, e.g.,
increased literacy, self-reliance, etc. Outcomes are often confused with program outputs
or units of services, e.g., the number of clients who went through a program.

The United Way of America (http://www.unitedway.org/outcomes/) provides an excellent overview of outcomes-based evaluation, including an introduction to outcomes
measurement, a program outcome model, why to measure outcomes, use of program
outcome findings by agencies, eight steps to success for measuring outcomes, examples
of outcomes and outcome indicators for various programs and the resources needed for
measuring outcomes. The following information is a top-level summary of information
from this site.

To accomplish an outcomes-based evaluation, you should first pilot, or test, this evaluation approach on one or two programs at most (before doing all programs).

The general steps to accomplish an outcomes-based evaluation are to:


1. Identify the major outcomes that you want to examine or verify for the program under
evaluation. You might reflect on your mission (the overall purpose of your organization)
and ask yourself what impacts you will have on your clients as you work towards your
mission. For example, if your overall mission is to provide shelter and resources to
abused women, then ask yourself what benefits this will have on those women if you
effectively provide them shelter and other services or resources. As a last resort, you
might ask yourself, "What major activities are we doing now?" and then for each activity,
ask "Why are we doing that?" The answer to this "Why?" question is usually an outcome.
This "last resort" approach, though, may just end up justifying ineffective activities you
are doing now, rather than examining what you should be doing in the first place.
2. Choose the outcomes that you want to examine, prioritize the outcomes and, if your
time and resources are limited, pick the top two to four most important outcomes to
examine for now.
3. For each outcome, specify what observable measures, or indicators, will suggest that
you're achieving that key outcome with your clients. This is often the most important and
enlightening step in outcomes-based evaluation. However, it is often the most
challenging and even confusing step, too, because you're suddenly going from a rather
intangible concept, e.g., increased self-reliance, to specific activities, e.g., supporting
clients to get themselves to and from work, staying off drugs and alcohol, etc. It helps to
have a "devil's advocate" during this phase of identifying indicators, i.e., someone who
can question why you can assume that an outcome was reached because certain
associated indicators were present.
4. Specify a "target" goal of clients, i.e., what number or percent of clients you commit to
achieving specific outcomes with, e.g., "increased self-reliance (an outcome) for 70% of
adult, African American women living in the inner city of Minneapolis as evidenced by
the following measures (indicators) ..."
5. Identify what information is needed to show these indicators, e.g., you'll need to know
how many clients in the target group went through the program, how many of them
reliably undertook their own transportation to work and stayed off drugs, etc. If your
program is new, you may need to evaluate the process in the program to verify that the
program is indeed carried out according to your original plans. (Michael Patton,
prominent researcher, writer and consultant in evaluation, suggests that the most
important type of evaluation to carry out may be this implementation evaluation to verify
that your program ended up to be implemented as you originally planned.)
6. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use below). Consider program documentation, observation of program personnel and clients in the program, questionnaires and interviews about clients' perceived benefits from the program, case studies of program failures and successes, etc. You may not need all of the above.
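
Step 4's "target" goal lends itself to a simple check once the indicator data from step 5 are gathered. The sketch below is a minimal Python illustration; the client records and indicator names echo the examples above but are otherwise invented.

# Sketch of checking an outcome target: did at least 70% of clients show
# every indicator for the outcome? Client records and indicator names are
# hypothetical.

INDICATORS = ["own_transport_to_work", "stayed_off_drugs"]
TARGET = 0.70  # the committed share of clients, as in step 4's example

clients = [
    {"own_transport_to_work": True,  "stayed_off_drugs": True},
    {"own_transport_to_work": True,  "stayed_off_drugs": False},
    {"own_transport_to_work": True,  "stayed_off_drugs": True},
]

achieved = sum(all(c[i] for i in INDICATORS) for c in clients) / len(clients)
status = "target met" if achieved >= TARGET else "target not met"
print(f"{achieved:.0%} of clients met all indicators ({status})")  # 67%, not met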

Selecting Which Methods to Use


Overall Goal in Selecting Methods:
The overall goal in selecting evaluation method(s) is to get the most useful information to
key decision makers in the most cost-effective and realistic fashion. Consider the
following questions:
1. What information is needed to make current decisions about a product or program?
2. Of this information, how much can be collected and analyzed in a low-cost and
practical manner, e.g., using questionnaires, surveys and checklists?
3. How accurate will the information be (reference the above table for disadvantages of
methods)?
4. Will the methods get all of the needed information?
5. What additional methods should and could be used if additional information is needed?
6. Will the information appear as credible to decision makers, e.g., to funders or top
management?
7. Will the nature of the audience conform to the methods, e.g., will they fill out questionnaires carefully, engage in interviews or focus groups, let you examine their documentation, etc.?
8. Who can administer the methods now or is training required?
9. How can the information be analyzed?

Note that, ideally, the evaluator uses a combination of methods, for example, a
questionnaire to quickly collect a great deal of information from a lot of people, and then
interviews to get more in-depth information from certain respondents to the
questionnaires. Perhaps case studies could then be used for more in-depth analysis of
unique and notable cases, e.g., those who benefited or not from the program, those who
quit the program, etc.

Four Levels of Evaluation:


There are four levels of evaluation information that can be gathered from clients,
including getting their:
1. reactions and feelings (feelings are often poor indicators that your service made a lasting impact)
2. learning (enhanced attitudes, perceptions or knowledge)
3. changes in skills (applied the learning to enhance behaviors)
4. effectiveness (improved performance because of enhanced behaviors)

Usually, the farther down the list your evaluation information goes, the more useful your evaluation is. Unfortunately, it is quite difficult to reliably get information about effectiveness. Still, information about learning and skills is quite useful.

Analyzing and Interpreting Information


Analyzing quantitative and qualitative data is often the topic of advanced research and
evaluation methods. There are certain basics which can help to make sense of reams of
data.

Always start with your evaluation goals:


When analyzing data (whether from questionnaires, interviews, focus groups, or
whatever), always start from review of your evaluation goals, i.e., the reason you
undertook the evaluation in the first place. This will help you organize your data and
focus your analysis. For example, if you wanted to improve your program by identifying
its strengths and weaknesses, you can organize data into program strengths, weaknesses
and suggestions to improve the program. If you wanted to fully understand how your
program works, you could organize data in the chronological order in which clients go
through your program. If you are conducting an outcomes-based evaluation, you can
categorize data according to the indicators for each outcome.

Basic analysis of "quantitative" information (for information other than commentary, e.g., ratings, rankings, yes's, no's, etc.):
1. Make copies of your data and store the master copy away. Use the copy for making edits, cutting and pasting, etc.
2. Tabulate the information, i.e., add up the number of ratings, rankings, yes's, and no's for each question.
3. For ratings and rankings, consider computing a mean, or average, for each question. For example, "For question #1, the average ranking was 2.4". This is more meaningful than indicating, e.g., how many respondents ranked 1, 2, or 3.
4. Consider conveying the range of answers, e.g., 20 people ranked "1", 30 ranked "2", and 20 people ranked "3".
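
Steps 2 through 4 amount to simple tabulation; here is a minimal Python sketch using the 20/30/20 distribution from the example in step 4.

# Tabulating 1-3 rankings for one question: counts per answer and the mean.
from collections import Counter
from statistics import mean

rankings = [1] * 20 + [2] * 30 + [3] * 20  # the distribution from step 4

counts = Counter(rankings)
print(dict(sorted(counts.items())))              # {1: 20, 2: 30, 3: 20}
print(f"average ranking: {mean(rankings):.1f}")  # 2.0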
Basic analysis of "qualitative" information (respondents' verbal answers in interviews,
focus groups, or written commentary on questionnaires):
1. Read through all the data.
2. Organize comments into similar categories, e.g., concerns, suggestions, strengths,
weaknesses, similar experiences, program inputs, recommendations, outputs, outcome
indicators, etc.
3. Label the categories or themes, e.g., concerns, suggestions, etc.
4. Attempt to identify patterns, or associations and causal relationships, in the themes, e.g., all people who attended programs in the evening had similar concerns, most people came from the same geographic area, most people were in the same salary range, what processes or events respondents experienced during the program, etc.
5. Keep all commentary for several years after completion in case it is needed for future reference.
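
A crude keyword pass can give steps 2 and 3 a head start before hand review. The sketch below is a minimal Python illustration; the comments and keyword lists are invented, and real categorizing would still be checked by a person.

# Crude first pass at theming qualitative comments by keyword.
# Comments and keyword lists are hypothetical; hand review is still needed.

THEMES = {
    "concerns":    ["worried", "concern", "problem"],
    "suggestions": ["should", "suggest", "could"],
    "strengths":   ["great", "helpful", "liked"],
}

def theme_of(comment: str) -> str:
    lowered = comment.lower()
    for theme, keywords in THEMES.items():
        if any(k in lowered for k in keywords):
            return theme
    return "uncategorized"

comments = ["The evening sessions were great.",
            "You should offer weekend hours.",
            "I'm worried about parking."]
for c in comments:
    print(f"{theme_of(c):>13}: {c}")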

Interpreting Information:
1. Attempt to put the information in perspective, e.g., compare results to what you expected or promised; to feedback from management or program staff; to any common standards for your services; to original program goals (especially if you're conducting a program evaluation); to indications of accomplishing outcomes (especially if you're conducting an outcomes evaluation); or to descriptions of the program's experiences, strengths, weaknesses, etc. (especially if you're conducting a process evaluation).
2. Consider recommendations to help program staff improve the program, conclusions
about program operations or meeting goals, etc.
3. Record conclusions and recommendations in a report document, and associate
interpretations to justify your conclusions or recommendations.

Reporting Evaluation Results


1. The level and scope of content depend on the audience for whom the report is intended, e.g., bankers, funders, employees, customers, clients, the public, etc.
2. Be sure employees have a chance to carefully review and discuss the report. Translate
recommendations to action plans, including who is going to do what about the program
and by when.
3. Bankers or funders will likely require a report that includes an executive summary (this
is a summary of conclusions and recommendations, not a listing of what sections of
information are in the report -- that's a table of contents); description of the organization
and the program under evaluation; explanation of the evaluation goals, methods, and
analysis procedures; listing of conclusions and recommendations; and any relevant
attachments, e.g., inclusion of evaluation questionnaires, interview guides, etc. The
banker or funder may want the report to be delivered as a presentation, accompanied by
an overview of the report. Or, the banker or funder may want to review the report alone.
4. Be sure to record the evaluation plans and activities in an evaluation plan which can be
referenced when a similar program evaluation is needed in the future.
Contents of an Evaluation Report -- Example
An example of evaluation report contents is included later in this document. See Contents of an Evaluation Plan, but don't forget to look at the next section, "Who Should Carry Out the Evaluation?".

Who Should Carry Out the Evaluation?


Ideally, management decides what the evaluation goals should be. Then an evaluation
expert helps the organization to determine what the evaluation methods should be, and
how the resulting data will be analyzed and reported back to the organization. Most
organizations do not have the resources to carry out the ideal evaluation.

Still, they can do the 20% of effort needed to generate 80% of what they need to know to
make a decision about a program. If they can afford any outside help at all, it should be
for identifying the appropriate evaluation methods and how the data can be collected. The
organization might find a less expensive resource to apply the methods, e.g., conduct
interviews, send out and analyze results of questionnaires, etc.

If no outside help can be obtained, the organization can still learn a great deal by
applying the methods and analyzing results themselves. However, there is a strong
chance that data about the strengths and weaknesses of a program will not be interpreted
fairly if the data are analyzed by the people responsible for ensuring the program is a
good one. Program managers will be "policing" themselves. This caution is not to fault
program managers, but to recognize the strong biases inherent in trying to objectively
look at and publicly (at least within the organization) report about their programs.
Therefore, if at all possible, have someone other than the program managers look at and
determine evaluation results.
QUESTION NO. 5 (B)
Assessment center

Description
The Assessment Center is an approach to selection whereby a battery of tests and exercises is administered to a person or a group of people over a number of hours (usually within a single day).

Assessment centers are particularly useful where:

• Required skills are complex and cannot easily be assessed with interviews or simple tests.
• Required skills include significant interpersonal elements (e.g.
management roles).
• Multiple candidates are available and it is acceptable for them to
interact with one another.

Individual exercises

Individual exercises provide information on how the person works by themselves. The classic exercise is the in-tray, of which there are many variants, but which have a common theme of giving the person a large, unstructured pile of work and then seeing how they go about doing it.

Individual exercises (and especially the 'in-tray') are very common and have a correlation with cognitive ability. Other variants include planning exercises (here are some problems; how will you address them?) and case analysis (here is a scenario; what's wrong, and how would you fix it?).

One-to-one exercises

In one-to-one exercises, the candidate interacts in various ways with another person,
being observed (as with other exercises) by the assessor(s). They are often used to
assess listening, communication and interpersonal skills, as well as other job-related
knowledge and skills.

In role-play exercises, the person takes on a role (possibly the job being applied for)
and interacts with someone who is acting (possibly one of the assessors) in a defined
scenario. This may range from dealing with a disaffected employee to putting a
persuasive argument to conducting a fact-finding interview.

Other exercises may have elements of role-play but are in more 'normal' positions, such
as making a presentation or doing an interview (interesting reversal!).

Group exercises

Group exercises test how people interact in a group, for example showing in practice
the Belbin Team Roles that they take.

Leaderless group discussions (often of a group of candidates) start with everyone in a relatively equal position (although this may be affected by factors such as the shape of the table).

A typical variant is to assign roles to each candidate and give them a brief of which the others are unaware. These groups can be used to assess such skills as negotiation, persuasion, teamwork, planning and organization, decision-making, and leadership.

Another variant is simply to give the group a topic to discuss (this has less face validity).

Business simulations may be used, sometimes with computers being used to add
information and determine outcomes of decisions. These often work with 'turns' that are
made of data given to the group, followed by a discussion and decision which is entered
into the computer to give the results for the next round.

Relevant topics increase face validity. Studies (Bass, 1954) have shown high inter-rater reliability (.82) and test-retest reliability (.72).
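
Inter-rater reliability figures such as the .82 cited above are correlations between different raters' scores for the same candidates. A minimal Python sketch, with invented scores:

# Inter-rater reliability as the Pearson correlation between two assessors'
# scores for the same six candidates. The scores are invented.
from statistics import correlation  # available in Python 3.10+

rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 3, 4, 2, 5, 3]
print(f"inter-rater reliability: {correlation(rater_a, rater_b):.2f}")  # 0.82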

Self-assessment exercises

A neat trick is to ask candidates to assess themselves, for example by asking them to
rate themselves after each exercise. There is usually a high correlation between
candidate and assessor ratings (indicating honesty).

Ways of improving these exercises include:

• Increasing the length of the assessment form to include behavioral dimensions based on selection competencies.
• Changing instructions to promote a more realistic appraisal by applicants of their own skills.
• Implying that candidates will be held accountable if a discrepancy is found between their ratings and assessor ratings.

Those with low self-assessment accuracy are likely to find behavioral modification and
adaptation difficult (perhaps as they have low emotional intelligence).
Development
Developing assessment centers involves much test development, although much can be
selected 'off the shelf'. A key area of preparation is with assessors, on whose judgment
candidates will be rejected and selected.

Identify criteria

Identify the criteria by which you will assess the candidates. Derive these from a sound
job analysis.

Keep the number of criteria low -- fewer than six is good -- in order to help assessors remember and focus. This also helps simplify the final judgment process.

Develop exercises

Make exercises as realistic as possible. This will help both candidates and assessors, and will give a good idea of what the candidate is like in real situations.

Design the exercises around the criteria so the criteria can be identified, rather than finding a nice exercise and then seeing if you can spot any useful criteria. Allow for both confirmation and disconfirmation of criteria.

Include clear guidelines for players so they can get 'into' the exercises as easily as possible. You should be assessing them on the exercise, not on their memory.

Include guidelines also for role-players, assessors, and those who will set up the exercises (e.g., what parts to include in exercise packs, how to set them up ready for use, etc.).

Triangulate for results across multiple exercises so each exercise supports others,
showing different facets of the person and their behavior against the criteria.

Select assessors

Select assessors based on their ability to make effective judgments. Gender is not
important, but age and rank are.

There are two approaches to selecting assessors. You can use a small pool of assessors
who become better at the job, or you can use many people to help diffuse acceptance of
the candidates and the selection method.

Do use assessors who are aware of organizational norms and values (this militates
against using external assessors), but do also include specialists, e.g. organizational
psychologists (who may well be external, unless you are in a large company).
Develop tools for assessors

Asking assessors to make personal judgments is likely to result in bias. Tools can be
developed to help them score candidates accurately and consistently.

Include behavioral checklists (lists of behaviors that display criteria) and behavioral coding that uses prepared data-gathering sheets (this standardizes data across gatherers).
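
A behavioral checklist reduces scoring to tallying pre-mapped behaviors rather than making on-the-spot judgments. The sketch below is a minimal Python illustration; the behaviors and criteria are invented.

# Behavioral checklist sketch: each observed behavior is pre-mapped to a
# criterion, so assessors tally rather than judge. Entries are invented.
from collections import Counter

BEHAVIOR_TO_CRITERION = {
    "summarized others' points": "communication",
    "invited quiet member to speak": "teamwork",
    "proposed an agenda": "planning",
    "kept discussion on time": "planning",
}

observed = ["proposed an agenda", "summarized others' points",
            "kept discussion on time"]

tally = Counter(BEHAVIOR_TO_CRITERION[b] for b in observed)
print(dict(tally))  # {'planning': 2, 'communication': 1}

Standardized sheets like this are what allow observations from different assessors and exercises to be compared during the final scoring discussion.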

Traditional assessment has a process of observe, record, classify, evaluate. Schema-based assessment has examples of poor, average and good behavior (there is no separation of evaluation and observation).

Prepare assessors and others

Ensure the people who will be assessing, role-playing, etc. are ready beforehand. The
assessment center should not be a learning exercise for assessors.

Two days of training are better than one. Include theory of social information
processing, interpersonal judgment, social cognition and decision-making theory.

Make assessors responsible for giving feedback to candidates and accountable to the organization for their decisions. This encourages them to be careful with their assessments.

Run the assessment center

If you have planned everything well, it will go well. Things to remember include:

• Directions to the center sent well beforehand, including by road, rail and air.
• Welcome for candidates, with refreshments and waiting area between
exercises.
• Capturing feedback from assessors immediately after sessions.
• A focus with assessors on criteria.
• Swift and smooth correction of assessors who are not using criteria.
• A timetable for everyone that runs on time.
• Lunch! Coffee breaks!
• Thanks to everyone involved.
• Finishing the exercises in time for the assessors to do the final
scoring/discussion session.
Follow-up

After the center, follow up with candidates and assessors as appropriate. A good
practice is to give helpful feedback to candidates who are unsuccessful so they can
understand their strengths and weaknesses.

Discussion
Assessments have grown hugely in popularity. In 1973 only about 7% of companies
were using them. By the mid-1980s, this had grown to 20%, and by the end of the
1990s it had leapt again to 65%.

Assessment centers allow assessment of potential skill and so are good when seeking
new recruits. They allow a wide range of criteria to be assessed, including group activity and aggregations of higher-level managerial competences.

Assessment centers are not cheap to put on and require multiple assessors who must be
available. Organizational psychologists can be of particular value to assess and identify
the subtler aspects of behavior.

Origins

The assessment center was originated by AT&T, who included the following nine
components:

1. Business game
2. Leaderless group discussion
3. In-tray exercise
4. Two-hour interview
5. Projective test
6. Personality test
7. 'Q sort'
8. Intelligence tests
9. Autobiographical essay and questionnaire

Validity

Reliability and validity are difficult to establish, as there are so many parts and so much variation. A 1966 study showed high validity in identifying middle managers. There is a lower adverse effect on individuals than with separate tests (e.g., psychometrics).

Criticisms

The outcomes of assessment centers are based on the judgments of the assessors, and hence on the quality of those judgments. Not only are judgments subject to human bias, but they are also affected by the group psychology effects of assessors interacting. Assessors often deviate from marking schemes, often collapsing multiple criteria into a generic 'performance' criterion. This is often due to overburdening of assessors with more than 4-5 criteria (so use fewer). More attention is often given to direct observation than to other data (e.g., psychometric tests). Assessors even use their own private criteria -- especially organizational fit.


Assessment Center Defined

An assessment center consists of a standardized evaluation of behavior based on multiple inputs. Multiple trained observers and techniques are used. Judgments about behaviors are made, in major part, from specifically developed assessment simulations. These judgments are pooled in a meeting among the assessors or by a statistical integration process.

Essential Features of an Assessment Center
• Job analysis of relevant behaviors
• Measurement techniques selected based on job analysis
• Multiple measurement techniques used, including simulation exercises
• Assessors' behavioral observations classified into meaningful and relevant categories (dimensions, KSAOs)
• Multiple observations made for each dimension
• Multiple assessors used for each candidate
• Assessors trained to a performance standard
QUESTION NO. 4 (B)

According to R.D. Gatewood and H.S. Field, employee selection is the "process of
collecting and evaluating information about an individual in order to extend an
offer of employment." Employee selection is part of the overall staffing process of
the organization, which also includes human resource (HR) planning,
recruitment, and retention activities. By doing human resource planning, the
organization projects its likely demand for personnel with particular knowledge,
skills, and abilities (KSAs), and compares that to the anticipated availability of
such personnel in the internal or external labor markets. During the recruitment
phase of staffing, the organization attempts to establish contact with potential job
applicants by job postings within the organization, advertising to attract external
applicants, employee referrals, and many other methods, depending on the type
of organization and the nature of the job in question. Employee selection begins
when a pool of applicants is generated by the organization's recruitment efforts.
During the employee selection process, a firm decides which of the recruited
candidates will be offered a position.

Effective employee selection is a critical component of a successful organization. How employees perform their jobs is a major factor in determining how successful an organization will be. Job performance is essentially determined by
successful an organization will be. Job performance is essentially determined by
the ability of an individual to do a particular job and the effort the individual is
willing to put forth in performing the job. Through effective selection, the
organization can maximize the probability that its new employees will have the
necessary KSAs to do the jobs they were hired to do. Thus, employee selection is
one of the two major ways (along with orientation and training) to make sure that
new employees have the abilities required to do their jobs. It also provides the
base for other HR practices—such as effective job design, goal setting, and
compensation—that motivate workers to exert the effort needed to do their jobs
effectively, according to Gatewood and Field.
Job applicants differ along many dimensions, such as educational and work
experience, personality characteristics, and innate ability and motivation levels.
The logic of employee selection begins with the assumption that at least some of
these individual differences are relevant to a person's suitability for a particular
job. Thus, in employee selection the organization must (1) determine the relevant
individual differences (KSAs) needed to do the job and (2) identify and utilize
selection methods that will reliably and validly assess the extent to which job
applicants possess the needed KSAs. The organization must achieve these tasks in
a way that does not illegally discriminate against any job applicants on the basis
of race, color, religion, sex, national origin, disability, or veteran's status.

AN OVERVIEW OF THE SELECTION PROCESS
Employee selection is itself a process consisting of several important stages, as
shown in Exhibit 1. Since the organization must determine the individual KSAs
needed to perform a job, the selection process begins with job analysis, which is
the systematic study of the content of jobs in an organization. Effective job
analysis tells the organization what people occupying particular jobs "do" in the
course of performing their jobs. It also helps the organization determine the
major duties and responsibilities of the job, as well as aspects of the job that are
of minor or tangential importance to job performance. The job analysis often
results in a document called the job description, which is a comprehensive
document that details the duties, responsibilities, and tasks that make up a job.
Because job analysis can be complex, time-consuming, and expensive,
standardized job descriptions have been developed that can be adapted to
thousands of jobs in organizations across the world. Two examples of such
databases are the U.S. government's Standard Occupational Classification
(SOC), which has information on at least 821 occupations, and the Occupational
Information Network, which is also known as O*NET. O*NET provides job
descriptions for thousands of jobs.
An understanding of the content of a job assists an organization in specifying the knowledge, skills, and abilities needed to do the job. These KSAs can be expressed in terms of a job specification, an organizational document that details what is required to successfully perform a given job.

Exhibit 1
Selection Process
Source: Adapted from Gatewood and Field, 2001.

1. Job Analysis: The systematic study of job content in order to determine the major duties and responsibilities of the job. Allows the organization to determine the important dimensions of job performance. The major duties and responsibilities of a job are often detailed in the job description.

2. The Identification of KSAs or Job Requirements: Drawing upon the information obtained through job analysis or from secondary sources such as O*NET, the organization identifies the knowledge, skills, and abilities necessary to perform the job. The job requirements are often detailed in a document called the job specification.

3. The Identification of Selection Methods to Assess KSAs: Once the organization knows the KSAs needed by job applicants, it must be able to determine the degree to which job applicants possess them. Selection methods include, but are not limited to, reference and background checks, interviews, cognitive testing, personality testing, aptitude testing, drug testing, and assessment centers.

4. The Assessment of the Reliability and Validity of Selection Methods: The organization should be sure that the selection methods it uses are reliable and valid. In terms of validity, selection methods should actually assess the knowledge, skill, or ability they purport to measure and should distinguish between job applicants who will be successful on the job and those who will not.

5. The Use of Selection Methods to Process Job Applicants: The organization should use its selection methods to make selection decisions. Typically, the organization will first try to determine which applicants possess the minimum KSAs required. Once unqualified applicants are screened out, other selection methods are used to make distinctions among the remaining job candidates and to decide which applicants will receive offers.

The necessary KSAs are called job requirements, which simply means
they are thought to be necessary to perform the job. Job requirements are
expressed in terms of desired education or training, work experience, specific
aptitudes or abilities, and in many other ways. Care must be taken to ensure that
the job requirements are based on the actual duties and responsibilities of the job
and that they do not include irrelevant requirements that may discriminate
against some applicants. For example, many organizations have revamped their
job descriptions and specifications in the years since the passage of the
Americans with Disabilities Act to ensure that these documents contain only job-
relevant content.

Once the necessary KSAs are identified, the organization must either develop a
selection method to accurately assess whether applicants possess the needed
KSAs, or adapt selection methods developed by others. There are many selection
methods available to organizations. The most common is the job interview, but
organizations also use reference and background checking, personality testing,
cognitive ability testing, aptitude testing, assessment centers, drug tests, and
many other methods to try to accurately assess the extent to which applicants
possess the required KSAs and whether they have unfavorable characteristics that
would prevent them from successfully performing the job. For both legal and
practical reasons, it is important that the selection methods used are relevant to
the job in question and that the methods are as accurate as possible in the
information they provide. Selection methods cannot be accurate unless they
possess reliability and validity.

VALIDITY OF SELECTION METHODS


Validity refers to the quality of a measure that exists when the measure actually assesses the construct it is intended to measure. In the selection context, validity refers to the appropriateness,
meaningfulness, and usefulness of the inferences made about applicants during
the selection process. It is concerned with the issue of whether applicants will
actually perform the job as well as expected based on the inferences made during
the selection process. The closer the applicants' actual job performances match
their expected performances, the greater the validity of the selection process.

ACHIEVING VALIDITY
The organization must have a clear notion of the job requirements and use
selection methods that reliably and accurately measure these qualifications. A list
of typical job requirements is shown in Exhibit 2. Some qualifications—such as
technical KSAs and nontechnical skills—are job-specific, meaning that each job
has a unique set. The other qualifications listed in the exhibit are universal in that
nearly all employers consider these qualities important, regardless of the job. For
instance, employers want all their employees to be motivated and have good work
habits.

The job specification derived from job analysis should describe the KSAs needed
to perform each important task of a job. By basing qualifications on job analysis
information, a company ensures that the qualities being assessed are important
for the job. Job analyses are also needed for legal reasons. In discrimination suits,
courts often judge the job-relatedness of a selection practice on whether or not
the selection criteria were based on job analysis information. For instance, if
someone lodges a complaint that a particular test discriminates against a
protected group, the court would (1) determine whether the qualities measured
by the test were selected on the basis of job analysis findings and (2) scrutinize
the job analysis study itself to determine whether it had been properly conducted.

SELECTION METHODS
The attainment of validity depends heavily on the appropriateness of the
particular selection technique used. A firm should use selection methods that
reliably and accurately measure the needed qualifications. The reliability of a
measure refers to its consistency. It is defined as "the degree of self-consistency
among the scores earned by an individual." Reliable evaluations are consistent
across both people and time. Reliability is maximized when two people evaluating
the same candidate provide the same ratings, and when the ratings of a candidate
taken at two different times are the same. When selection scores are unreliable,
their validity is diminished. Some of the factors affecting the reliability of
selection measures are:
• Emotional and physical state of the candidate. Reliability suffers if
candidates are particularly nervous during the assessment process.
• Lack of rapport with the administrator of the measure. Reliability suffers
if candidates are "turned off" by the interviewer and thus do not "show
their stuff" during the interview.
• Inadequate knowledge of how to respond to a measure. Reliability suffers
if candidates are asked questions that are vague or confusing.
• Individual differences among respondents. If the range of scores on the attribute measured by a selection device is large, the device can reliably distinguish among people.
• Question difficulty. Questions of moderate difficulty produce the most
reliable measures. If questions are too easy, many applicants will give the
correct answer and individual differences are lessened; if questions are too
difficult, few applicants will give the correct answer and, again, individual
differences are lessened.
• Length of measure. As the length of a measure increases, its reliability
also increases. For example, an interviewer can better gauge an applicant's
level of interpersonal skills by asking several questions, rather than just
one or two.
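
To make the definition of reliability above concrete, here is a minimal sketch in Python (the ratings are hypothetical, not from the source text) that estimates inter-rater reliability as the Pearson correlation between two interviewers' ratings of the same candidates. A coefficient near 1.0 indicates that the two evaluators rate candidates consistently.

# Minimal sketch with hypothetical data: inter-rater reliability estimated
# as the Pearson correlation between two interviewers' ratings of the
# same six candidates (1-5 scales).
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length lists of scores.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 2, 5, 3, 4, 3]
print(f"Inter-rater reliability: r = {pearson_r(rater_a, rater_b):.2f}")

With these sample ratings the sketch prints r = 0.82, suggesting the two raters are applying the rating scale in broadly the same way.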

Exhibit 2
A Menu of Possible Qualities Needed for Job Success

1. Technical KSAs or aptitude for learning them


2. Nontechnical skills, such as
1. Communication
2. Interpersonal
3. Reasoning ability
4. Ability to handle stress
5. Assertiveness
3. Work habits
1. Conscientiousness
2. Motivation
3. Organizational citizenship
4. Initiative
5. Self-discipline
4. Absence of dysfunctional behavior, such as
1. Substance abuse
2. Theft
3. Violent tendencies
5. Job-person fit; the applicant
1. is motivated by the organization's reward system
2. fits the organization's culture regarding such things as
risk-taking and innovation
3. would enjoy performing the job
4. has ambitions that are congruent with the promotional
opportunities available at the firm

In addition to providing reliable assessments, the firm's assessments should accurately measure the required worker attributes. Many selection techniques are
available for assessing candidates. How does a company decide which ones to
use? A particularly effective approach to follow when making this decision is
known as the behavior consistency model. This model specifies that the best
predictor of future job behavior is past behavior performed under similar
circumstances. The model implies that the most effective selection procedures are
those that focus on the candidates' past or present behaviors in situations that
closely match those they will encounter on the job. The closer the selection
procedure simulates actual work behaviors, the greater its validity. To implement
the behavioral consistency model, employers should follow this process:

1. Thoroughly assess each applicant's previous work experience to determine if the candidate has exhibited relevant behaviors in the past.
2. If such behaviors are found, evaluate the applicant's past success on each
behavior based on carefully developed rating scales.
3. If the applicant has not had an opportunity to exhibit such behaviors,
estimate the future likelihood of these behaviors by administering various
types of assessments. The more closely an assessment simulates actual job
behaviors, the better the prediction.

ASSESSING AND DOCUMENTING VALIDITY
Three strategies can be used to determine the validity of a selection method. The
following section lists and discusses these strategies:

1. Content-oriented strategy: Demonstrates that the company followed proper procedures in the development and use of its selection devices.
2. Criterion-related strategy: Provides statistical evidence showing a
relationship between applicant selection scores and subsequent job
performance levels.
3. Validity generalization strategy: Demonstrates that other companies have
already established the validity of the selection practice.

When using a content-oriented strategy to document validity, a firm gathers evidence that it followed appropriate procedures in developing its selection
program. The evidence should show that the selection devices were properly
designed and were accurate measures of the worker requirements. Most
importantly, the employer must demonstrate that the selection devices were
chosen on the basis of an acceptable job analysis and that they measured a
representative sample of the KSAs identified. The sole use of a content-oriented
strategy for demonstrating validity is most appropriate for selection devices that
directly assess job behavior. For example, one could safely infer that a candidate
who performs well on a properly-developed typing test would type well on the job
because the test directly measures the actual behavior required on the job.
However, when the connection between the selection device and job behavior is
less direct, content-oriented evidence alone is insufficient. Consider, for example,
an item found on a civil service exam for police officers: "In the Northern
Hemisphere, what direction does water circulate when going down the drain?"
The aim of the question is to measure mental alertness, which is an important
trait for good police officers. However, can one really be sure that the ability to
answer this question is a measure of mental alertness? Perhaps, but the
inferential leap is a rather large one.

When employers must make such large inferential leaps, a content-oriented strategy, by itself, is insufficient to document validity; some other strategy is
needed. This is where a criterion-related strategy comes into play. When a firm
uses this strategy, it attempts to demonstrate statistically that someone who does
well on a selection instrument is more likely to be a good job performer than
someone who does poorly on the selection instrument. To gather criterion-
related evidence, the HR professional needs to collect two pieces of information
on each person: a predictor score and a criterion score.

• Predictor scores represent how well the individual fared during the
selection process as indicated by a test score, an interview rating, or an
overall selection score.
• Criterion scores represent the job performance level achieved by the
individual and are usually based on supervisor evaluations.

Validity is calculated by statistically correlating predictor scores with criterion scores (statistical formulas for computing correlation can be found in most introductory statistical texts). This correlation coefficient (designated as r) is called a validity coefficient. To be considered valid, r must be statistically significant and its magnitude must be sufficiently large to be of practical value. When a suitable correlation is obtained (r > 0.3, as a rule of thumb), the firm can
conclude that the inferences made during the selection process have been
confirmed. That is, it can conclude that, in general, applicants who score well
during selection turn out to be good performers, while those who do not score as
well become poor performers.
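
As a minimal illustration (the scores below are hypothetical, not drawn from the source), the validity coefficient can be computed directly. This sketch uses statistics.correlation, which assumes Python 3.10 or later:

# Minimal sketch with hypothetical data: a validity coefficient computed by
# correlating predictor scores (overall selection scores) with criterion
# scores (later supervisor performance ratings) for the same employees.
from statistics import correlation  # requires Python 3.10+

predictor = [72, 85, 60, 90, 78, 66, 88, 70]          # selection scores
criterion = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.3]  # performance ratings

r = correlation(predictor, criterion)
print(f"Validity coefficient: r = {r:.2f}")

# Per the rule of thumb in the text, r > 0.3 (together with statistical
# significance) suggests the selection inferences are being confirmed.
if r > 0.3:
    print("Selection scores usefully predict later job performance.")

A full validation study would also test r for statistical significance, which this sketch omits for brevity.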

A criterion-related validation study may be conducted in one of two ways: a predictive validation study or a concurrent validation study. The two approaches
differ primarily in terms of the individuals assessed. In a predictive validation
study, information is gathered on actual job applicants; in a concurrent study,
current employees are used. The steps to each approach are shown in Exhibit 3.

Concurrent studies are more commonly used than predictive ones because they
can be conducted more quickly; the assessed individuals are already on the job
and performance measures can thus be more quickly obtained. (In a predictive
study, the criterion scores cannot be gathered until the applicants have been
hired and have been on the job for several months.) Although concurrent validity
studies have certain disadvantages compared to predictive ones, available
research indicates that the two types of studies seem to yield approximately the
same results.

Up to this point, our discussion has assumed that an employer needs to validate
each of its selection practices. But what if it is using a selection device that has
been used and properly validated by other companies? Can it rely on that validity
evidence and thus avoid having to conduct its own study? The answer is yes. It
can do so by using a validity generalization strategy. Validity generalization is
established by demonstrating that a selection device has been consistently found
to be valid in many other similar settings. An impressive amount of evidence
points to the validity generalization of many specific devices. For example, some
mental aptitude tests have been found to be valid predictors for nearly all jobs
and thus can be justified without performing a new validation study to
demonstrate job relatedness. To use validity generalization evidence, an
organization must present the following data:

• Studies summarizing a selection measure's validity for similar jobs in other settings.
• Data showing the similarity between the jobs for which the validity
evidence is reported and the job in the new employment setting.
• Data showing the similarity between the selection measures in the other
studies composing the validity evidence and those measures to be used in
the new employment setting.
MAKING A FINAL SELECTION
The extensiveness and complexity of selection processes vary greatly depending
on factors such as the nature of the job, the number of applicants for each
opening, and the size of the organization. A typical way of applying selection
methods to a large number of applicants for a job requiring relatively high levels
of KSAs would be the following:

1. Use application blanks, resumes, and short interviews to determine which job applicants meet the minimum requirements for the job. If the number
of applicants is not too large, the information provided by applicants can
be verified with reference and/or background checks.
2. Use extensive interviews and appropriate testing to determine which of the
minimally qualified job candidates have the highest degree of the KSAs
required by the job.
3. Make contingent offers to one or more job finalists as identified by Step 2.
Job offers may be contingent upon successful completion of a drug test or
other forms of background checks. General medical exams can only be
given after a contingent offer is made.

One viable strategy for arriving at a sound selection decision is to first evaluate
the applicants on each individual attribute needed for the job. That is, at the
conclusion of the selection process, each applicant could be rated on a scale (say,
from one to five) for each important attribute based on all the information
collected during the selection process. For example, one could arrive at an overall
rating of a candidate's dependability by combining information derived from
references, interviews, and tests that relate to this attribute.

Exhibit 3
Steps in the Predictive and Concurrent Validation Processes

Predictive Validation
1. Perform a job analysis to identify needed competencies.
2. Develop/choose selection procedures to assess needed
competencies.
3. Administer the selection procedures to a group of applicants.
4. Randomly select applicants or select all applicants.
5. Obtain measures of the job performance of the applicants after they have been employed for a sufficient amount of time. For most jobs, this would be six months to a year.
6. Correlate job performance scores of this group with the scores
they received on the selection procedures.

Concurrent Validation

1 and 2. These steps are identical to those taken in a predictive validation study.
3. Administer the selection procedures to a representative group of job incumbents.
4. Obtain measures of the current job performance level of the job incumbents who have been assessed in step 3.
5. Identical to step 6 in a predictive study.

Decision-making is often facilitated by statistically combining applicants' ratings on different attributes to form a ranking or rating of each applicant. The
applicant with the highest score is then selected. This approach is appropriate
when a compensatory model is operating, that is, when it is correct to assume
that a high score on one attribute can compensate for a low score on another. For
example, a baseball player may compensate for a lack of power in hitting by being
a fast base runner.
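
A minimal sketch of such a compensatory combination follows (the attributes, weights, and ratings are hypothetical, not from the source): each applicant's attribute ratings are merged into a weighted overall score, so a strong rating on one attribute can offset a weak rating on another.

# Minimal sketch with hypothetical data: a compensatory scoring model.
# Attribute ratings (1-5) are combined into a weighted overall score,
# and applicants are ranked on that single score.
WEIGHTS = {"technical": 0.4, "communication": 0.3, "dependability": 0.3}

applicants = {
    "Applicant A": {"technical": 5, "communication": 2, "dependability": 4},
    "Applicant B": {"technical": 3, "communication": 4, "dependability": 4},
    "Applicant C": {"technical": 4, "communication": 3, "dependability": 3},
}

def overall_score(ratings):
    # Weighted sum -- the compensatory combination of attribute ratings.
    return sum(WEIGHTS[attr] * score for attr, score in ratings.items())

for name, ratings in sorted(applicants.items(),
                            key=lambda kv: overall_score(kv[1]),
                            reverse=True):
    print(f"{name}: {overall_score(ratings):.2f}")

Here Applicant A ranks first (3.80) despite the lowest communication rating, because high technical and dependability ratings compensate for it.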

In some selection situations, however, proficiency in one area cannot compensate for deficiencies in another. When such a non-compensatory model is operating, a
deficiency in any one area would eliminate the candidate from further
consideration. Lack of honesty or an inability to get along with people, for
example, may serve to eliminate candidates for some jobs, regardless of their
other abilities.

When a non-compensatory model is operating, the "successive hurdles" approach may be most appropriate. Under this approach, candidates are eliminated during
various stages of the selection process as their non-compensable deficiencies are
discovered. For example, some applicants may be eliminated during the first
stage if they do not meet the minimum education and experience requirements.
Additional candidates may be eliminated at later points after failing a drug test or
honesty test or after demonstrating poor interpersonal skills during an interview.
The use of successive hurdles lowers selection costs by requiring fewer
assessments to be made as the list of viable candidates shrinks.
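
The following minimal sketch (the hurdles, thresholds, and candidate data are hypothetical, not from the source) illustrates the successive hurdles approach: pass/fail checks are applied in sequence, typically in order of increasing assessment cost, and a failure at any stage removes the candidate from further, more expensive assessments.

# Minimal sketch with hypothetical data: the non-compensatory
# "successive hurdles" approach. A failure at any hurdle eliminates
# the candidate, so later (costlier) assessments are run on fewer people.
def meets_minimums(c):
    return c["education_years"] >= 12 and c["experience_years"] >= 1

def passes_drug_test(c):
    return c["drug_test_clear"]

def interview_ok(c):
    return c["interpersonal_rating"] >= 3  # 1-5 interview rating

HURDLES = [meets_minimums, passes_drug_test, interview_ok]

candidates = [
    {"name": "A", "education_years": 14, "experience_years": 2,
     "drug_test_clear": True, "interpersonal_rating": 4},
    {"name": "B", "education_years": 10, "experience_years": 5,
     "drug_test_clear": True, "interpersonal_rating": 5},
    {"name": "C", "education_years": 16, "experience_years": 3,
     "drug_test_clear": True, "interpersonal_rating": 2},
]

remaining = candidates
for hurdle in HURDLES:
    remaining = [c for c in remaining if hurdle(c)]

print("Finalists:", [c["name"] for c in remaining])  # -> ['A']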
QUESTION NO. 4 (A)

Substantial research examining the efficacy of Realistic Job Previews (RJPs) has been
conducted in the past decade (Wanous, 1989). Nearly all of this research has focused on
the effects of RJPs on one or more desirable organizational outcomes, such as some
measure of job acceptance, job persistence, or job satisfaction. Concern has been
expressed that the reported results of RJP interventions have been, at best, equivocal
(Milkovich and Boudreau, 1994). Nearly as many RJP studies have been conducted that
found no relationship between realistic job information and reduced turnover rates as the
number of studies which found a significant reduction (e.g., Premack and Wanous, 1985;
Taylor, 1994; Wanous and Colella, 1989). Results have been less than overwhelming
even in those situations in which statistically significant relationships were demonstrated.
As a result of these mixed findings, considerable effort is now being directed toward
uncovering the theoretical processes explaining the role of RJPs in influencing these
positive organizational outcomes (Fedor et al., in press). An inference can reasonably be
drawn from this RJP literature that, absent positive organizational utility, an RJP cannot
be seriously proposed as an appropriate recruiting or socialization tool.
This article explores the possibility that the provision of realistic pre-employment and
post-employment job information is ethically required, absent any positive, or even in the
face of negative, returns to the organization. In fact, one of the suggested explanations for
RJP's influence on the reduction of turnover implies an ethical underpinning -- employer
honesty (Meglino et al., 1988; Suszko and Breaugh, 1986). The frequent incidence of
positive organizational utility may merely be a fortuitous benefit of an ethically
mandatory practice. Efforts directed toward isolating the most efficient RJP contents,
methods and media, while not without practical importance, do nothing to establish or
enhance an organizational imperative to provide recruits and new employees with
accurate job information.
RJPs are designed to provide "realistic" job information. This realistic information is sometimes thought to include only the negative aspects of a job -- that information which is thought to be more likely to be withheld from the recruit. An RJP, however, provides positive and neutral information as well. It is, of course, the provision of negative
information that sets RJPs off from what might be characterized as the "traditional"
recruiting situation. Theoretically, at least, where the organization and the recruit have
unlimited time and financial resources, the RJP provides all of the information necessary
to provide the recruit with a complete picture of the job and the organization.
Furthermore, what is or isn't a negative job aspect is frequently determined within the
sole purview of the recruit (Meglino et al., 1993). It is difficult for the recruiting
organization to recognize which job/organization characteristics may have important
consequences for the prospective employee. For purposes of this article, the RJP is
considered to truthfully provide all relevant positive, neutral, and negative job
information, despite the impracticality of such a requirement. The totality of this
information is what we characterize in this article as "accurate" information.
The importance and ethics of providing employment recruits with accurate job
information was made abundantly evident during the United States' war with Iraq. The
truthfulness of the recruiting information the U.S. military services dispensed to attract
men and women to active and reserve duty was questioned by many military personnel.
In particular, the call of many Reserve and National Guard personnel to active duty in a
combat zone generated reactions among many of these individuals, ranging from surprise
and shock to outrage. Of course, the body of knowledge common to all potential
employees (in this case, the general citizenry's awareness of military affairs and reserve
status in time of war) may be an input into consideration of the ethical adequacy of
recruiting information. While the individual and societal consequences of the transmission of inaccurate job information are substantial in the military context, the consequences in other organizational settings are only slightly less substantial.
Review of the personnel literature and the expanding body of business ethics literature
uncovers little direct consideration of the ethical imperative of organization recruiters and
trainers to dispense truthful and realistic job information by direct face-to-face
communication, in recruiting advertisements or other recruiting literature, or in employee
training media. While much has been written about the ethics and legalities of selection,
little has directly considered the organizational tactics ...
QUESTION NO. 2 (B)

Human Resource Management (HRM) is the term used to describe formal systems devised for
the management of people within an organization. These human resources responsibilities are
generally divided into three major areas of management: staffing, employee compensation, and
defining/designing work. Essentially, the purpose of HRM is to maximize the productivity of an
organization by optimizing the effectiveness of its employees. This mandate is unlikely to change
in any fundamental way, despite the ever-increasing pace of change in the business world. "The
basic mission of human resources will always be to acquire, develop, and retain talent; align the
workforce with the business; and be an excellent contributor to the business. Those three
challenges will never change."

Until fairly recently, an organization's human resources department was often consigned to lower
rungs of the corporate hierarchy, despite the fact that its mandate is to replenish and nourish the
company's work force, which is often cited—legitimately—as an organization's greatest resource.
But in recent years recognition of the importance of human resources management to a
company's overall health has grown dramatically. This recognition of the importance of HRM
extends to small businesses, for while they do not generally have the same volume of human
resources requirements as do larger organizations, they too face personnel management issues
that can have a decisive impact on business health. "Hiring the right people—and training them
well—can often mean the difference between scratching out the barest of livelihoods and steady
business growth…. Personnel problems do not discriminate between small and big business. You
find them in all businesses, regardless of size."
=============================================================

PRINCIPLES OF HUMAN RESOURCE MANAGEMENT


One guiding principle is the simple recognition that human resources are the most important assets of an organization; a business cannot be successful without effectively managing this resource. A second holds that business success "is most likely to be achieved if the personnel policies and procedures of the enterprise are closely linked with, and make a major contribution to, the achievement of corporate objectives and strategic plans." A third guiding principle, similar in scope, holds that it is HR's
responsibility to find, secure, guide, and develop employees whose talents and desires are
compatible with the operating needs and future goals of the company. Other HRM factors that
shape corporate culture—whether by encouraging integration and cooperation across the
company, instituting quantitative performance measurements, or taking some other action—are
also commonly cited as key components in business success. HRM "is a strategic approach to
the acquisition, motivation, development and management of the organization's human
resources. It is devoted to shaping an appropriate corporate culture, and introducing programs
which reflect and support the core values of the enterprise and ensure its success."
=========================================================================
POSITION AND STRUCTURE OF HUMAN RESOURCE MANAGEMENT
Human resource management department responsibilities can be broadly classified by individual,
organizational, and career areas. Individual management entails helping employees identify their
strengths and weaknesses; correct their shortcomings; and make their best contribution to the
enterprise. These duties are carried out through a variety of activities such as performance
reviews, training, and testing. Organizational development, meanwhile, focuses on fostering a
successful system that maximizes human (and other) resources as part of larger business
strategies. This important duty also includes the creation and maintenance of a change program,
which allows the organization to respond to evolving outside and internal influences. The third
responsibility, career development, entails matching individuals with the most suitable jobs and
career paths within the organization.
Human resource management functions are ideally positioned near the theoretic center of the
organization, with access to all areas of the business. Since the HRM department or manager is
charged with managing the productivity and development of workers at all levels, human resource
personnel should have access to—and the support of—key decision makers. In addition, the
HRM department should be situated in such a way that it is able to effectively communicate with
all areas of the company.
HRM structures vary widely from business to business, shaped by the type, size, and governing
philosophies of the organization that they serve. But most organizations organize HRM functions
around the clusters of people to be helped—they conduct recruiting, administrative, and other
duties in a central location. Different employee development groups for each department are
necessary to train and develop employees in specialized areas, such as sales, engineering,
marketing, or executive education. In contrast, some HRM departments are completely
independent and are organized purely by function. The same training department, for example,
serves all divisions of the organization.
In recent years, however, observers have cited a decided trend toward fundamental
reassessments of human resources structures and positions. "A cascade of changing business
conditions, changing organizational structures, and changing leadership has been forcing human
resource departments to alter their perspectives on their role and function almost overnight."
"Previously, companies structured themselves on a centralized and compartmentalized basis—
head office, marketing, manufacturing, shipping, etc. They now seek to decentralize and to
integrate their operations, developing cross-functional teams…. Today, senior management
expects HR to move beyond its traditional, compartmentalized 'bunker' approach to a more
integrated, decentralized support function." Given this change in expectations, Johnston noted
that "an increasingly common trend in human resources is to decentralize the HR function and
make it accountable to specific line management. This increases the likelihood that HR is viewed
and included as an integral part of the business process, similar to its marketing, finance, and
operations counterparts. However, HR will retain a centralized functional relationship in areas
where specialized expertise is truly required," such as compensation and recruitment
responsibilities.

=========================================================================
HUMAN RESOURCE MANAGEMENT—KEY RESPONSIBILITIES
Human resource management is concerned with the development of both individuals and the
organization in which they operate. HRM, then, is engaged not only in securing and developing
the talents of individual workers, but also in implementing programs that enhance communication
and cooperation between those individual workers in order to nurture organizational development.
The primary responsibilities associated with human resource management include: job analysis
and staffing, organization and utilization of work force, measurement and appraisal of work force
performance, implementation of reward systems for employees, professional development of
workers, and maintenance of work force.
Job analysis consists of determining—often with the help of other company areas—the nature
and responsibilities of various employment positions. This can encompass determination of the
skills and experiences necessary to adequately perform in a position, identification of job and
industry trends, and anticipation of future employment levels and skill requirements. "Job analysis
is the cornerstone of HRM practice because it provides valid information about jobs that is used
to hire and promote people, establish wages, determine training needs, and make other important HRM decisions." Staffing, meanwhile, is the actual process of managing the flow of personnel
into, within (through transfers and promotions), and out of an organization. Once the recruiting
part of the staffing process has been completed, selection is accomplished through interviews, reference checks, testing, and other tools.

Organization, utilization, and maintenance of a company's work force is another key function of
HRM. This involves designing an organizational framework that makes maximum use of an
enterprise's human resources and establishing systems of communication that help the
organization operate in a unified manner. Other responsibilities in this area include safety and
health and worker-management relations. Human resource maintenance activities related to
safety and health usually entail compliance with federal laws that protect employees from hazards
in the workplace. These regulations are handed down from several federal agencies.
Maintenance tasks related to worker-management relations primarily entail: working with labor
unions; handling grievances related to misconduct, such as theft or sexual harassment; and
devising communication systems to foster cooperation and a shared sense of mission among
employees.

Performance appraisal is the practice of assessing employee job performance and providing
feedback to those employees about both positive and negative aspects of their performance.
Performance measurements are very important both for the organization and the individual, for
they are the primary data used in determining salary increases, promotions, and, in the case of
workers who perform unsatisfactorily, dismissal.

Reward systems are typically managed by HR areas as well. This aspect of human resource
management is very important, for it is the mechanism by which organizations provide their
workers with rewards for past achievements and incentives for high performance in the future. It
is also the mechanism by which organizations address problems within their work force, through
institution of disciplinary measures. Aligning the work force with company goals "requires
offering workers an employment relationship that motivates them to take ownership of the
business plan."

Employee development and training is another vital responsibility of HR personnel. HR is responsible for researching an organization's training needs, and for initiating and evaluating
employee development programs designed to address those needs. These training programs can
range from orientation programs, which are designed to acclimate new hires to the company, to
ambitious education programs intended to familiarize workers with a new software system.
"After getting the right talent into the organization," , "the second traditional challenge to human
resources is to align the workforce with the business—to constantly build the capacity of the
workforce to execute the business plan." This is done through performance appraisals, training,
and other activities. In the realm of performance appraisal, HRM professionals must devise
uniform appraisal standards, develop review techniques, train managers to administer the
appraisals, and then evaluate and follow up on the effectiveness of performance reviews. They
must also tie the appraisal process into compensation and incentive strategies, and work to
ensure that federal regulations are observed.

Responsibilities associated with training and development activities, meanwhile, include the
determination, design, execution, and analysis of educational programs. The HRM professional
should be aware of the fundamentals of learning and motivation, and must carefully design and
monitor training and development programs that benefit the overall organization as well as the
individual. The importance of this aspect of a business's operation can hardly be overstated.
"The quality of employees and their development through training and education are major
factors in determining long-term profitability of a small business…. Research has shown specific
benefits that a small business receives from training and developing its workers, including:
increased productivity; reduced employee turnover; increased efficiency resulting in financial
gains; [and] decreased need for supervision."

Meaningful contributions to business processes are increasingly recognized as within the purview
of active human resource management practices. Of course, human resource managers have
always contributed to overall business processes in certain respects—by disseminating
guidelines for and monitoring employee behavior, for instance, or ensuring that the organization is
obeying worker-related regulatory guidelines—but increasing numbers of businesses are
incorporating human resource managers into other business processes as well. In the past,
human resource managers were cast in a support role in which their thoughts on cost/benefit
justifications and other operational aspects of the business were rarely solicited. But the
changing character of business structures and the marketplace are making it increasingly
necessary for business owners and executives to pay greater attention to the human resource
aspects of operation: "Tasks that were once neatly slotted into well-defined and narrow job
descriptions have given way to broad job descriptions or role definitions. In some cases,
completely new work relationships have developed; telecommuting, permanent part-time roles
and outsourcing major non-strategic functions are becoming more frequent." All of these
changes, which human resource managers are heavily involved in, are important factors in
shaping business performance.

===================================================================
Importance of Human Resource

Human resources are an integral part of any organization, and great stress is laid on implementing an effective human resource system. Many departments in an organization depend on the human resource function to set up strategic plans and to process official assignments. Companies that do not have a proper human resource department suffer from administrative disorder and a lack of management in office activities.

THE FOLLOWING ACTIVITIES ARE CARRIED OUT BY HR

1. RECRUITMENT APPROACH
- using modern online recruitment and resume assessment
2. SELECTION METHODS
- using modern tools like psychometrics, personality profiling, etc.
========================================================
4. STAFF INITIATIONS

1. INDUCTION PROGRAMS
- tailoring induction to each individual
2. ORIENTATION PROGRAMS
- tailoring orientation to each individual
==========================================================
5. HOW EFFECTIVE IS THE HUMAN RESOURCE DEVELOPMENT PROCESS, AND WHAT ARE THE VARIOUS METHODS/SYSTEMS USED?

1. PERFORMANCE APPRAISAL SYSTEMS
- 360-degree systems
2. PERFORMANCE MANAGEMENT SYSTEM
- self-development programs
- management development programs
- training
- coaching
==========================================
6. ASSESSMENT OF POTENTIAL
- use it for promotions
- for succession planning
- for talent management
===============================================
7. HR AUDIT
===================================================

8. HR STRATEGIC PLANNING


======================================================
9. ORGANIZATIONAL BEHAVIOR PROGRAMS

- employee engagement
- motivation
- organization culture
- organization development
==============================================
10. HUMAN RESOURCE PLANNING
- HR planning
- manpower planning
============================================
11. HUMAN RESOURCE DEVELOPMENT
- org. learning
- training
- education
- development
- training evaluation
- e-learning
- management development
- career planning/development
================================================
12. REWARDS MANAGEMENT
- job evaluation
- managing reward process
- administration of rewards
- benefits
QUESTION NO. 3 (A)

SYSTEM REGULATIONS
31.01.02 Fair Labor Standards
December 4, 1997
Revised January 10, 2002
Revised February 28, 2005
Supplements System Policy 31.01
1. GENERAL
1.1 System components will comply with the Fair Labor Standards Act (FLSA) and
related federal and state laws. Related administrative procedures are detailed in the HR
Manual.
1.2 All faculty, staff and student employees (except as set out in Section 1.3) of the
System are covered by the FLSA, although certain classes of employees are exempt
from its overtime pay and minimum wage requirements. An employee's rights under
the FLSA may not be waived. No employee may agree, even voluntarily, to work in
violation of the FLSA.
1.3 A graduate student who teaches is exempt from overtime pay and minimum wage
requirements. A graduate student engaged in research in the course of obtaining an
advanced degree under the supervision of a faculty member does not fall under the
requirements of the FLSA, even though the student may receive a stipend for his or
her services.
2. MINIMUM WAGE PROVISIONS
The System pays all employees, including student workers, at least the federal minimum wage prescribed by the FLSA.
3. DETERMINATION OF EXEMPTION STATUS OF EMPLOYEES
Each employee's overtime pay and minimum wage coverage under the FLSA (exempt, nonexempt or partially exempt) must be determined on an individual basis in accordance with the terms of federal regulations. When questions arise concerning an employee's status under the FLSA, advice of the relevant human resources office should be obtained.
4. OVERTIME
The FLSA and state law govern the handling of overtime work. See System Regulation 31.01.09, Overtime, for more information.
5. DEDUCTIONS TO PAY FOR EXEMPT EMPLOYEES
5.1 Subject to the following exceptions, an exempt employee will receive full salary for
any week in which work is performed without regard to the days and number of hours
worked. In general, an employee need not be paid for any workweek in which the
employee performs no work. The exceptions to these provisions are:
(1) Deductions may be made when an employee is absent from work for one or
more full days for personal reasons, other than sickness or disability or other
circumstances for which leave with pay or release time is available.
(2) Deductions may be made for absences of one or more full days because of
sickness or disability (including workers' compensation accidents) if the
deduction is made after the employee exhausts paid sick leave or workers'
compensation benefits.
(3) Deductions may be made for penalties imposed for infractions of significant
safety rules relating to prevention of serious danger in the workplace or to
other employees.
(4) Deductions may be made for unpaid disciplinary suspensions of one or more
full days imposed pursuant to System policy or regulation or component rule
for infractions of workplace conduct rules.
(5) The System is not required to pay the full weekly salary in the first and last
weeks of employment if the employee does not work the full week.
5.2 Deductions may be made for absences of less than one day for personal reasons or due to sickness when accrued leave is not used because permission for its use has not been sought or has been denied, accrued sick leave and vacation have been exhausted, or the employee chooses to use leave without pay. Deductions from pay due to a budget-required furlough will not disqualify an employee from being paid on a salary basis except in the workweek in which the furlough occurs and for which the employee's pay is reduced.
5.3 The A&M System prohibits improper deductions from pay. If deductions are
inadvertently made in contradiction to Department of Labor regulations,
reimbursement will be made retroactively to the affected employee. An employee
who believes an improper deduction has been made from his or her pay should notify
his or her payroll office.
6. EQUAL PAY FOR EQUAL WORK UNDER THE FLSA
System employees are covered by the Equal Pay Act, an amendment to the FLSA, that prohibits gender-based wage differentials between persons employed in the same location on jobs that require equal skill, effort, and responsibility and that are performed under similar working conditions. Jobs need only be substantially equal, not identical, for comparison purposes. The law permits differences in pay based on factors other than gender, such as bona fide seniority or merit systems or systems that reward productivity.
7. EMPLOYMENT OF MINORS
7.1 The FLSA prescribes at what age and in which types of occupations minors can be
employed. Federal regulations also limit hours of work for certain age groups. A list
of prohibited occupations and other restrictions on employment of minors is available
from component human resources offices.
7.2 To protect the System from an unwitting violation of the age restrictions, the
employing component must obtain and keep on file a Minor's Employment Release
form (HR-200) if the person being employed is younger than 18 years. In addition,
the employing System component must obtain and keep on file a Federal Certificate of
Age issued by the U.S. Department of Labor, a state Certificate of Age issued by the
Texas Workforce Commission or other proof of age acceptable to the component
human resources officer for any person offered employment when there is any reason
to believe the person being employed is younger than 19 years.
8. EXAMINATION BY DEPARTMENT OF LABOR REPRESENTATIVES
8.1 If an examination is initiated by Wage and Hour or Equal Employment Opportunity
Commission representatives, the head of the department or comparable administrative
unit being examined will immediately report the pertinent facts through normal
channels to the appropriate Human Resources Officer and CEO (before any
information is provided to the representatives). The component human resources
office will notify the System Human Resources Office (SHRO) immediately so SHRO
may provide counsel concerning federal regulations, assist in resolving charges of
noncompliance or other findings, and communicate information from the examination
System-wide.
8.2 The FLSA forbids discharge of or discrimination against employees who file
complaints alleging noncompliance or who participate in legal proceedings under the
law.
9. ADMINISTRATION
9.1 SHRO is responsible for administering and answering questions on the FLSA for all
System components. Inquiries as well as requests for special exemptions should be
submitted to that office through the appropriate human resources officer.
Interpretation of FLSA legal issues should be directed to the System Office of General
Counsel (OGC). Component human resources offices may also consult the
Department of Labor or outside consultants for advice on FLSA matters, and any
information received should be forwarded to SHRO and OGC for dissemination
System-wide.
9.2 Each System component human resources office is responsible for posting, and
keeping posted, notices pertaining to the applicability of the FLSA. These notices,
which can be obtained from the Department of Labor, are to be displayed in
conspicuous places to facilitate observation by all employees.
9.3 Component human resources offices are also responsible for ensuring that all FLSA- and DOL-required records are maintained.
QUESTION NO. 3 (B)

Control Mechanisms
It is very common for a person to use one main scheme for manipulating others. That is usually quite an unconscious thing, something one simply does habitually.

Usually one learns early in life what it takes to control the energy of others, to get them to
give you energy and avoid giving your own energy away. The particular scheme used is
often a reflection of which control mechanisms one's parents used.

For example, if one had a father who would always find fault with anything one said, one would probably instinctively discover that if one doesn't offer any information, then there is nothing he could find fault with. You would gain power by withholding information
and waiting for others to make the first move. That could easily become a permanent
pattern and become a way of sucking up the energy of others in turn. If you are holding
yourself back, then others will have to reach more and will enter your territory without
knowing what is going on.

A control mechanism doesn't have to appear very controlling at first glance. Being meek
or submissive or quiet might be a perfectly fine control mechanism. If you are submissive
then people will possibly keep you around, feed you energy and rely on you.

Often a control mechanism will be a way of increasing one's status with others. It could
be that one plays on being more educated, more experienced, stronger, bigger than others,
or that one simply shows an arrogance that says so. High status might be gained by
playing either high or low status roles. You might control others by being aloof and put
them down and drive a bigger car. But a bum on the street might control others by
making them feel guilt or pity. You might suck energy towards you by being
unapproachable or by being overwhelming, by always staying collected, or by always
starting a scene.

Control mechanisms usually fall in one of four categories, dividing people into one of
four personality types:

Intimidator: Somebody who controls others by overwhelming them, commanding them, telling them what to do.

Interrogator: Somebody who gets information from others in order to find something
wrong with it. Gets others to do or say something and then finds weaknesses in it.

Aloof: Somebody who doesn't volunteer information, but controls others by having them reach for the hidden information. Stays above others by not reacting, but waiting for them to make a mistake.
Victim: Somebody who makes others feel sorry for them. Talks about and demonstrates
how they are particularly unlucky or persecuted. Controls others by getting them to feel
pity or guilt.

Main control mechanisms are usually somewhat hidden, even though they are used openly. It is just that people tend not to notice. Everybody, including the person,
might just assume that it is a personality trait or a natural thing to do according to the
circumstances.

Having a control mechanism at all is based on the subconscious belief that there isn't
enough energy to go around, that you somehow need to suck it out of others. That luckily
isn't true, so it opens the door to the transformation of control mechanisms into something
else.

Mainly one needs to realize what one is doing. It might be the best idea for the person to
examine the phenomenon in others first, before admitting to anything personally. People
tend to be kind of defensive about their control mechanisms, unless they are pinpointed
very precisely. Part of the makeup of a control mechanism is to control others so as to
evade being found out. If the person gets used to seeing control mechanisms in others she
will tend to take her own defenses down a bit.

Examining one's early life might be a good way of identifying which control mechanism
one is using. It is very likely to be a defense against something one's parents were doing
as manipulation. And since that provides the proper context it is also the best place to
examine the mechanism and start changing it around. Basically it was a survival response
to the lack of resources in a series of incidents. It could be addressed through re-experiencing.

Any given person might use a whole number of different mechanisms and fixed ideas to
be right and to control energy. However, there are most likely one or two main control
mechanisms one is using continuously. Transforming those will be more valuable than
any of the more peripheral ones.

A control mechanism needs to be replaced with another way of getting energy and feeling
powerful, resourceful or safe. We need to find inner sources for these qualities. And we
need to alleviate whatever it is that the control mechanism keeps at bay. There is some
unpleasant event that the mechanism is there to keep away.
QUESTION NO. 5 (B)

training programme evaluation


training and learning evaluation, feedback forms, action plans and follow-up

This section begins with an introduction to training and learning evaluation, including some useful learning reference models. The
introduction also explains that for training evaluation to be truly
effective, the training and development itself must be appropriate for
the person and the situation. Good modern personal development and
evaluation extend beyond the obvious skills and knowledge required
for the job or organisation or qualification. Effective personal
development must also consider: individual potential (natural abilities
often hidden or suppressed); individual learning styles; and whole
person development (life skills, in other words). Where training or
teaching seeks to develop people (rather than merely being focused on
a specific qualification or skill) the development must be approached
on a more flexible and individual basis than in traditional paternalistic
(authoritarian, prescribed) methods of design, delivery and testing.
These principles apply to teaching and developing young people too,
which interestingly provides some useful lessons for workplace
training, development and evaluation.

introduction
A vital aspect of any sort of evaluation is its effect on the person being
evaluated.

Feedback is essential for people to know how they are progressing, and evaluation is crucial to the learner's confidence too.

And since people's commitment to learning relies so heavily on confidence and a belief that the learning is achievable, the way that
tests and assessments are designed and managed, and results
presented back to the learners, is a very important part of the learning
and development process.

People can be switched off the whole idea of learning and development very quickly if they receive only negative critical
test results and feedback. Always look for positives in negative
results. Encourage and support - don't criticize without adding some
positives, and certainly never focus on failure, or that's just what you'll
produce.

This is a much overlooked factor in all sorts of evaluation and testing, and since this element is not typically included within evaluation and assessment tools, the point is emphasised loud and clear here.

So always remember - evaluation is not just for the trainer or teacher or organisation or policy-makers - evaluation is
absolutely vital for the learner too, which is perhaps the most
important reason of all for evaluating people properly, fairly, and with
as much encouragement as the situation allows.

Most of the specific content and tools below for workplace training
evaluation are based on the work of Leslie Rae, an expert and author on
the evaluation of learning and training programmes, and this
contribution is greatly appreciated. W Leslie Rae has written over 30
books on training and the evaluation of learning - he is an expert in his
field. His guide to the effective evaluation of training and learning,
training courses and learning programmes, is a useful set of rules and
techniques for all trainers and HR professionals.

This training evaluation guide is augmented by an excellent set of free
learning evaluation and follow-up tools, created by Leslie Rae.

There are other training evaluation working files on the free resources
page.

It is recommended that you read this article before using the free
evaluation and training follow-up tools.

See also the section on Donald Kirkpatrick's training evaluation model,
which represents fundamental theory and principles for evaluating
learning and training.

Also see Bloom's Taxonomy of learning domains, which establishes
fundamental principles for training design and evaluation of learning,
and thereby, training effectiveness.

Erik Erikson's Psychosocial (Life Stages) Theory is very helpful in
understanding how people's training and development needs change
according to age and stage of life. These generational aspects are
increasingly important in meeting people's needs (now firmly a legal
requirement within age discrimination law) and also in making the
most of what different age groups can offer work and organisations.
Erikson's theory is helpful particularly when considering broader
personal development needs and possibilities outside of the obvious
job-related skills and knowledge.

Multiple Intelligence theory (section includes free self-tests) is
extremely relevant to training and learning. This model helps address
natural abilities and individual potential which can be hidden or
suppressed in many people (often by employers).

Learning Styles theory is extremely relevant to training and teaching,
and features in Kolb's model, and in the VAK learning styles model
(also including a free self-test tool). Learning Styles theory also relates
to methods of assessment and evaluation, in which inappropriate
testing can severely skew results. Testing, as well as delivery, must
take account of people's learning styles, for example some people find
it very difficult to prove their competence in a written test, but can
show remarkable competence when asked to give a physical
demonstration. Text-based evaluation tools are not the best way to
assess everybody.

The Conscious Competence learning stages theory is also a helpful
perspective for learners and teachers. The model helps explain the
process of learning to trainers and to learners, and also helps to
refine judgements about competence, since competence is rarely a
simple question of 'can or cannot'. The Conscious Competence model
particularly provides encouragement to teachers and learners when
feelings of frustration arise due to apparent lack of progress. Progress
is not always easy to see, but can often be happening nevertheless.

lessons from (and perhaps also for) children's education
While these various theories and models are chiefly presented here for
adult work-oriented training, the principles also apply to children's and
young people's education, which provides some useful fundamental
lessons for workplace training and development.

Notably, while evaluation and assessment are vital of course (because
if you can't measure it you can't manage it) the most important thing
of all is to be training and developing the right things in the right ways.
Assessment and evaluation (and children's testing) will not ensure
effective learning and development if the training and development
has not been properly designed in the first place.

Lessons for the workplace are everywhere you look within children's
education, so please forgive this diversion.

If children's education in the UK ever actually worked well, successive
governments managed to wreck it by the 1980s, and have made it
worse since then. This was achieved by the imposition of a ridiculously
narrow range of skills and delivery methods, plus similarly narrowly-
based testing criteria and targets, and a self-defeating administrative
burden. All of this perfectly characterises arrogance and delusion found
in X-Theory management structures, in this case of high and mighty
civil servants and politicians, who are not in the real world, and who
never went to normal school and whose kids didn't either. A big lesson
from this for organisations and workplace training is that X-Theory
directives and narrow-mindedness are a disastrous combination.
Incidentally, according to some of these same people, society is broken
and our schools and parents are to blame and are responsible for
sorting out the mess. Blaming the victims is another classic behaviour
of inept governance. Society is not broken; it just lacks some proper
responsible leadership, which is another interesting point:

The quality of any leadership (government or organisation) is defined
by how it develops its people. Good leaders have a responsibility to
help people understand, develop and fulfil their own individual
potential. This is very different to just training them to do a job, or
teaching them to pass an exam and get into university, which ignores
far more important human and societal needs and opportunities.

Thankfully modern educational thinking (and let's hope policy too) now
seems to be addressing the wider development needs of the individual
child, rather than aiming merely to transfer knowledge in order to pass
tests and exams. Knowledge transfer for the purpose of passing tests
and exams, especially when based on such an arbitrary and extremely
narrow idea of what should be taught and how, has little meaning or
relevance to the development potential and needs of most young
people, and even less relevance to the demands and opportunities of
the real modern world, let alone the life skills required to become a
fulfilled confident adult able to make a positive contribution to society.

The desperately flawed UK children's education system of the past
thirty years, and its negative impacts on society, offer many useful
lessons for organisations. Perhaps most significantly, if you fail to
develop people as individuals, and only aim to transfer knowledge and
skills to meet the organisational priorities of the day, then you will
seriously hamper your chances of fostering a happy productive society
within your workforce, assuming you want to, which I guess is another
subject altogether.

Assuming you do want to develop a happy and productive workforce,
it's useful to consider and learn from the mistakes that have been
made in children's education:

• the range of learning is far too narrowly defined and ignores individual potential, which is then devalued or blocked
• the range of learning focuses on arbitrary criteria set from the policy-makers' own perspectives (classic arrogant X-Theory management - it's stifling and suppressive)
• policy-makers give greatest or exclusive priority to the obvious 'academic' intelligences (reading, writing, arithmetic, etc), when others of the multiple intelligences (notably interpersonal and intrapersonal capabilities, helpfully encompassed by emotional intelligence) arguably have a far bigger value in work and society (and certainly cause more problems in work and society if under-developed)
• testing and assessment of learners and teachers is measuring the wrong things, too narrowly, in the wrong way - like measuring the weather with a thermometer
• testing (the wrong sort, although none would be appropriate for this) is used to assess and pronounce people's fundamental worth - which quite obviously directly affects self-esteem, confidence, ambition, dreams, life purpose, etc (nothing too serious then..)
• wider individual development needs - especially life needs - are ignored (many organisations and educational policy-makers seem to think that people are robots and that their work and personal lives are not connected; and that work is unaffected by feelings of well-being or depression, etc)
• individual learning styles are ignored (learning is delivered mainly through reading and writing when many people are far better at learning through experience, observation, etc - again see Kolb and VAK)
• testing and assessment focuses on proof of knowledge in a distinctly unfair situation only helpful to certain types of people, rather than assessing people's application, interpretation and development of capabilities, which is what real life requires (see Kirkpatrick's model - and consider the significance of assessing what people do with their improved capability, beyond simply assessing whether they've retained the theory, which means relatively very little)
• children's education has traditionally ignored the fact that developing confident happy productive people is much easier if primarily you help people to discover what they are good at - whatever it is - and then build on that.

Teaching, training and learning must be aligned with individual
potential, individual learning styles, and wider life
development needs, and this wide flexible individual approach to
human development is vital for the workplace, just as it is for schools.

Returning to consider workplace training itself, and the work of Leslie Rae:
evaluation of workplace learning and
training
There have been many surveys on the use of evaluation in training and
development (see the research findings extract example below). While
surveys might initially appear heartening, suggesting that many
trainers/organisations use training evaluation extensively, when more
specific and penetrating questions are asked, it is often the case that
many professional trainers and training departments are found to use
only 'reactionnaires' (general vague feedback forms), including the
invidious 'Happy Sheet' relying on questions such as 'How good did you
feel the trainer was?', and 'How enjoyable was the training course?'. As
Kirkpatrick, among others, teaches us, even well-produced
reactionnaires do not constitute proper validation or evaluation of
training.

For effective training and learning evaluation, the principal questions should be:

• To what extent were the identified training needs objectives achieved by the programme?
• To what extent were the learners' objectives achieved?
• What specifically did the learners learn, or what were they usefully reminded of?
• What commitment have the learners made about the learning they are going to implement on their return to work?

And back at work,

• How successful were the trainees in implementing their action plans?
• To what extent were they supported in this by their line managers?
• To what extent has the action listed above achieved a Return on Investment (ROI) for the organization, either in terms of identified objectives satisfaction or, where possible, a monetary assessment?

Organizations commonly fail to perform these evaluation processes, especially where:

• The HR department and trainers do not have sufficient time to do so, and/or
• The HR department does not have sufficient resources - people and money - to do so.

Obviously the evaluation cloth must be cut according to available
resources (and the cultural atmosphere), which tend to vary
substantially from one organization to another. The fact remains that
good methodical evaluation produces good reliable data; conversely,
where little evaluation is performed, little is ever known about the
effectiveness of the training.

evaluation of training
There are two principal factors which need to be resolved:

• Who is responsible for the validation and evaluation processes?
• What resources of time, people and money are available for validation/evaluation purposes? (Within this, consider the effect of variation to these, for instance an unexpected cut in budget or manpower. In other words anticipate and plan contingency to deal with variation.)

responsibility for the evaluation of training
Traditionally, in the main, any evaluation or other assessment has
been left to the trainers "because that is their job..." My (Rae's)
contention is that a 'Training Evaluation Quintet' should exist, each
member of the Quintet having roles and responsibilities in the process
(see 'Assessing the Value of Your Training', Leslie Rae, Gower, 2002).
Considerable lip service appears to be paid to this, but the actual
practice tends to be a lot less.

The 'Training Evaluation Quintet' advocated consists of:

• senior management
• the trainer
• line management
• the training manager
• the trainee

Each has their own responsibilities, which are detailed next.

senior management - training evaluation responsibilities
• Awareness of the need and value of training to the organization.
• The necessity of involving the Training Manager (or equivalent) in senior management meetings where decisions are made about future changes when training will be essential.
• Knowledge of and support of training plans.
• Active participation in events.
• Requirement for evaluation to be performed, and for regular summary reports.
• Policy and strategic decisions based on results and ROI data.
the trainer - training evaluation responsibilities
• Provision of any necessary pre-programme work etc and programme planning.
• Identification at the start of the programme of the knowledge and skills level of the trainees/learners.
• Provision of training and learning resources to enable the learners to learn within the objectives of the programme and the learners' own objectives.
• Monitoring the learning as the programme progresses.
• At the end of the programme, assessment of and receipt of reports from the learners of the learning levels achieved.
• Ensuring the production by the learners of an action plan to reinforce, practise and implement learning.

the line manager - training evaluation responsibilities
• Work-needs and people identification.
• Involvement in training programme and evaluation development.
• Support of pre-event preparation and holding briefing meetings with the learner.
• Giving ongoing, and practical, support to the training programme.
• Holding a debriefing meeting with the learner on their return to work to discuss, agree or help to modify and agree action for their action plan.
• Reviewing the progress of learning implementation.
• Final review of implementation success and assessment, where possible, of the ROI.

the training manager - training evaluation responsibilities
• Management of the training department and agreeing the training needs and the programme application.
• Maintenance of interest and support in the planning and implementation of the programmes, including a practical involvement where required.
• The introduction and maintenance of evaluation systems, and production of regular reports for senior management.
• Frequent, relevant contact with senior management.
• Liaison with the learners' line managers, and arrangement of learning implementation responsibility programmes for the managers.
• Liaison with line managers, where necessary, in the assessment of the training ROI.
the trainee or learner - training evaluation responsibilities
• Involvement in the planning and design of the training programme where possible.
• Involvement in the planning and design of the evaluation process where possible.
• Obviously, to take interest and an active part in the training programme or activity.
• To complete a personal action plan during and at the end of the training for implementation on return to work, and to put this into practice, with support from the line manager.
• To take interest in and support the evaluation processes.

N.B. Although the principal role of the trainee in the programme is to
learn, the learner must be involved in the evaluation process. This is
essential, since without their comments much of the evaluation could
not occur. Neither would the new knowledge and skills be
implemented. If trainees neglect either responsibility, the business
wastes its investment in training. Trainees will assist more readily if
the process avoids the look and feel of a paper-chase or number-
crunching exercise. Instead, make sure trainees understand the
importance of their input - exactly what they are being asked to do,
and why.

training evaluation and validation options
As suggested earlier, what you are able to do, rather than what you
would like to do or what should be done, will depend on the various
resources and cultural support available. The following summarizes a
spectrum of possibilities within these dependencies.
1 - do nothing

Doing nothing to measure the effectiveness and result of any business
activity is never a good option, but it is perhaps justifiable in the
training area under the following circumstances:

• If the organization, even when prompted, displays no interest in the evaluation and validation of the training and learning - from the line manager up to the board of directors.
• If you, as the trainer, have a solid process for planning training to meet organizational and people-development needs.
• If you have a reasonable level of assurance or evidence that the training being delivered is fit for purpose, gets results, and that the organization (notably the line managers and the board, the potential source of criticism and complaint) is happy with the training provision.
• If you have far better things to do than carry out training evaluation, particularly if evaluation is difficult and cooperation is sparse.

However, even in these circumstances, there may come a time when
having kept a basic system of evaluation will prove to be helpful, for
example:

• You receive a sudden unexpected demand for a justification of a part or all of the training activity. (These demands can spring up, for example, with a change in management, or policy, or a new initiative.)
• You see the opportunity or need to produce your own justification (for example to increase training resource, staffing or budgets, new premises or equipment).
• You seek to change job and need evidence of the effectiveness of your past training activities.

Doing nothing is always the least desirable option. At any time
somebody more senior to you might be moved to ask "Can you prove
what you are saying about how successful you are?" Without
evaluation records you are likely to be at a loss for words of proof...

2 - minimal action

The absolutely basic action for a start of some form of evaluation is as follows:

At the end of every training programme, give the learners sufficient
time and support in the form of programme information, and have the
learners complete an action plan based on what they have learned on
the programme and what they intend to implement on their return to
work. This action plan should not only include a description of the
action intended but comments on how they intend to implement it, a
timescale for starting and completing it, and any resources required,
etc. A fully detailed action plan always helps the learners to
consolidate their thoughts. The action plan will have a secondary use
in demonstrating to the trainers, and anyone else interested, the types
and levels of learning that have been achieved. The learners should
also be encouraged to show and discuss their action plans with their
line managers on return to work, whether or not this type of follow-up
has been initiated by the manager.

3 - minimal desirable action leading to evaluation

When returning to work to implement the action plan the learner
should ideally be supported by their line manager, rather than have
the onus for implementation rest entirely on the learner. The line
manager should hold a debriefing meeting with the learner soon after
their return to work, covering a number of questions, basically
discussing and agreeing the action plan and arranging support for the
learner in its implementation. As described earlier, this is a clear
responsibility of the line manager, which demonstrates to senior
management, the training department and, certainly not least, the
learner, that a positive attitude is being taken to the training. Contrast
this with, as often happens, a member of staff being sent on a training
course, after which all thoughts of management follow-up are
forgotten.

The initial line manager debriefing meeting is not the end of the
learning relationship between the learner and the line manager. At the
initial meeting, objectives and support must be agreed, then
arrangements made for interim reviews of implementation progress.
After this when appropriate, a final review meeting needs to consider
future action.

This process requires minimal action by the line manager - it involves
no more than the sort of observations being made as would be normal
for a line manager monitoring the actions of his or her staff. This
process of review meetings requires little extra effort and time from
the manager, but does much to demonstrate at the very least to the
staff that their manager takes training seriously.

4 - training programme basic validation approach

The action plan and implementation approach described in (3) above is
placed as a responsibility on the learners and their line managers and,
apart from the provision of advice and time, does not require any
resource involvement from the trainer. There are two further parts of
an approach which also require only the provision of time for the
learners to describe their feelings and information. The first is the
reactionnaire which seeks the views, opinions, feelings, etc., of the
learners about the programme. This is not at a 'happy sheet' level, nor
a simple tick-list - but one which allows realistic feelings to be stated.

This sort of reactionnaire is described in the book ('Assessing the Value
of Your Training', Leslie Rae, Gower, 2002). This evaluation seeks a
score for each question against a 6-point range of Good to Bad, and
also the learners' own reasons for the scores, which is especially
important if the score is low.
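
To illustrate the scoring approach just described, the following is a minimal sketch in Python (illustrative only - not taken from Rae's book): it averages the 6-point scores per question and collects the learners' own reasons wherever a score is low. The questions, responses and threshold are all invented for illustration.

# Illustrative sketch: summarising reactionnaire scores on a 6-point
# range where 6 = Good and 1 = Bad, keeping the learners' reasons for
# low scores, which are especially important to review.
from statistics import mean

# Hypothetical responses: one dict per learner, mapping each question
# to a (score, reason) pair.
responses = [
    {"pace of the programme": (5, ""), "relevance to my job": (2, "too generic")},
    {"pace of the programme": (4, ""), "relevance to my job": (3, "some useful parts")},
]

LOW_SCORE = 3  # invented threshold at or below which reasons are reviewed

def summarise(responses):
    """Average each question's scores and gather low-score reasons."""
    summary = {}
    for response in responses:
        for question, (score, reason) in response.items():
            entry = summary.setdefault(question, {"scores": [], "reasons": []})
            entry["scores"].append(score)
            if score <= LOW_SCORE and reason:
                entry["reasons"].append(reason)
    return {q: (round(mean(e["scores"]), 1), e["reasons"]) for q, e in summary.items()}

for question, (average, reasons) in summarise(responses).items():
    print(f"{question}: average {average}, low-score reasons: {reasons}")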

Reactionnaires should not be automatic events on every course or
programme. This sort of evaluation can be reserved for new
programmes (for example, the first three events) or when there are
indications that something is going wrong with the programme.

Sample reactionnaires are available in the set of free training evaluation tools.

The next evaluation instrument, like the action plan, should be used at
the end of every course if possible. This is the Learning Questionnaire
(LQ), which can be a relatively simple instrument asking the learners
what they have learned on the programme, what they have been
usefully reminded of, and what was not included that they expected to
be included, or would have liked to have been included. Scoring ranges
can be included, but these are minimal and are subordinate to the text
comments made by the learners. There is an alternative to the LQ
called the Key Objectives LQ (KOLQ) which seeks the amount of
learning achieved by posing the relevant questions against the list of
Key Objectives produced for the programme. When a reactionnaire and
LQ/KOLQ are used, they must not be filed away and forgotten at the
end of the programme, as is the common tendency, but used to
produce a training evaluation and validation summary. A factually-
based evaluation summary is necessary to support claims that a
programme is 'good/effective/satisfies the objectives set'. Evaluation
summaries can also be helpful for publicity for the training programme,
etc.

Example Learning Questionnaires and Key Objectives Learning
Questionnaires are included in the set of free evaluation tools.

5 - total evaluation process

If it becomes necessary, the processes described in (3) and (4) can be
combined and supplemented by other methods to produce a full
evaluation process that covers all eventualities. Few occasions or
environments allow this full process to be applied, particularly when
there is no Quintet support, but it is the ultimate aim. The process is
summarized below:

• Training needs identification and setting of objectives by the organization
• Planning, design and preparation of the training programmes against the objectives
• Pre-course identification of people with needs and completion of the preparation required by the training programme
• Provision of the agreed training programmes
• Pre-course briefing meeting between learner and line manager
• Pre-course or start of programme identification of learners' existing knowledge, skills and attitudes ('3-Test' before-and-after training example tool, manual version and working file version)
• Interim validation as programme proceeds
• Assessment of terminal knowledge, skills, etc., and completion of perceptions/change assessment ('3-Test' example tool, manual version and working file version)
• Completion of end-of-programme reactionnaire
• Completion of end-of-programme Learning Questionnaire or Key Objectives Learning Questionnaire
• Completion of Action Plan
• Post-course debriefing meeting between learner and line manager
• Line manager observation of implementation progress
• Review meetings to discuss progress of implementation
• Final implementation review meeting
• Assessment of ROI

Whatever you do, do something. The processes described above
allow considerable latitude depending on resources and the cultural
environment, so there is always the opportunity to do something -
obviously the more tools used and the wider the approach, the more
valuable and effective the evaluation will be. However, be pragmatic.
Large expensive critical programmes will always justify more
evaluation and scrutiny than small, one-off, non-critical training
activities. Where there is heavy investment and expectation, the
evaluation should be correspondingly detailed and complete. Training
managers particularly should clarify measurement and evaluation
expectations with senior management prior to embarking on
substantial new training activities, so that appropriate evaluation
processes can be established when the programme itself is designed.
Where large and potentially critical programmes are planned, training
managers should err on the side of caution - ensure adequate
evaluation processes are in place. As with any investment, a senior
executive is always likely to ask, "What did we get for our
investment?", and when he asks, the training manager needs to be
able to provide a fully detailed response.
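
Where a monetary assessment is possible, the underlying ROI arithmetic itself is simple, as the minimal Python sketch below shows; all figures are invented for illustration, and attributing a monetary benefit to the training is of course the genuinely hard part.

# Illustrative sketch: the usual training ROI arithmetic,
# ROI % = (monetary benefit - training cost) / training cost * 100.
# Both figures below are invented.

def training_roi(benefit, cost):
    """Return ROI as a percentage of the training cost."""
    return (benefit - cost) / cost * 100

cost = 12_000     # e.g. design, delivery, venue and trainee time
benefit = 30_000  # e.g. measured productivity gain attributed to the training

print(f"ROI: {training_roi(benefit, cost):.0f}%")  # prints: ROI: 150%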

the trainer's overall responsibilities - aside from training evaluation
Over the years the trainer's roles have changed, but the basic purpose
of the trainer is to provide efficient and effective training programmes.
The following suggests the elements of the basic role of the trainer, but
it must be borne in mind that different circumstances will require
modifications of these activities.

1. The basic role of a trainer (or however they may be designated) is to
offer and provide efficient and effective training programmes aimed at
enabling the participants to learn the knowledge, skills and attitudes
required of them.

2. A trainer plans and designs the training programmes, or otherwise
obtains them (for example, distance learning or e-technology
programmes on the Internet or on CD/DVD), in accordance with the
requirements identified from the results of a TNIA (Training Needs
Identification and Analysis - or simply TNA, Training Needs Analysis) for
the relevant staff of an organization or organizations.

3. The training programmes cited at (1) and (2) must be completely
based on the TNIA, which has been either (a) completed by the trainer
on behalf of and at the request of the relevant organization, or (b)
determined in some other way by the organization.

4. Following discussion with or direction by the organization
management who will have taken into account costs and values (e.g.
ROI - Return on Investment in the training), the trainer will agree with
the organization management the most appropriate form and methods
for the training.

5. If the appropriate form for satisfying the training need is a direct
training course or workshop, or an Intranet provided programme, the
trainer will design this programme using the most effective
approaches, techniques and methods, integrating face-to-face
practices with various forms of e-technology wherever this is possible
or desirable.

6. If the appropriate form for satisfying the training need is some form
of open learning programme or e-technology programme, the trainer,
with the support of the organization management, should obtain the
programme, plan its utilization, and be prepared to support the learner
in the use of the relevant materials.

7. The trainer, following contact with the potential learners (preferably
through their line managers) to arrange any pre-programme activity
and/or initial evaluation activities, should provide the appropriate
training programme(s) to the learners put forward by their
organization(s). During and at the end of the programme, the trainer
should ensure that: (a) an effective form of training/learning validation
is followed; and (b) the learners complete an action plan for
implementation of their learning when they return to work.

8. Provide, as necessary, having reviewed the validation results, an
analysis of the changes in the knowledge, skills and attitudes of the
learners to the organization management with any recommendations
deemed necessary. The review would include consideration of the
effectiveness of the content of the programme and the effectiveness of
the methods used to enable learning, that is whether the programme
satisfied the objectives of the programme and those of the learners.

9. Continue to provide effective learning opportunities as required by the organization.

10. Enable their own CPD (Continuing Professional Development) by all
possible developmental means - training programmes and self-development methods.

11. Arrange and run educative workshops for line managers on the
subject of their fulfillment of their training and evaluation
responsibilities.

Dependent on the circumstances and the decisions of the organization
management, trainers do not, under normal circumstances:

1. Make organizational training decisions without the full agreement of
the organizational management.

2. Take part in the post-programme learning implementation or
evaluation unless the learners' line managers cannot or will not fulfil
their training and evaluation responsibilities.
Unless circumstances force them to behave otherwise, the trainer's
role is to provide effective training programmes, and the role of the
learners' line managers is to continue the evaluation process after the
training programme, counsel and support the learner in the
implementation of their learning, and assess the cost-value
effectiveness or (where feasible) the ROI of the training. Naturally, if it
will help the trainers to become more effective in their training, they
can take part in, but not run, any pre- and post-programme actions as
described, always remembering that these are the responsibilities of
the line manager.
Perspective on Evaluating Training
Evaluation is often looked at from four different levels (the "Kirkpatrick levels") listed
below. Note that the farther down the list, the more valid the evaluation.

1. Reaction - What does the learner feel about the training?
2. Learning - What facts, knowledge, etc., did the learner gain?
3. Behaviors - What skills did the learner develop, that is, what new information is the learner using on the job?
4. Results or effectiveness - What results occurred, that is, did the learner apply the new skills to the necessary tasks in the organization and, if so, what results were achieved?

Although level 4, evaluating results and effectiveness, is the most desired result from
training, it's usually the most difficult to accomplish. Evaluating effectiveness often
involves the use of key performance measures -- measures you can see, e.g., faster and
more reliable output from the machine after the operator has been trained, higher ratings
on employees' job satisfaction questionnaires from the trained supervisor, etc. This is
where following sound principles of performance management is of great benefit.
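
As a simple aid to thinking with the four levels, the short Python sketch below (purely illustrative - the evidence entries are invented) records one line of evaluation evidence against each level for a single programme.

# Illustrative sketch: one line of evaluation evidence per Kirkpatrick
# level. The farther down the list, the more valid, but harder to
# collect, the evidence becomes.
kirkpatrick_evidence = {
    1: ("Reaction", "end-of-course reactionnaire scores and comments"),
    2: ("Learning", "before-and-after test results"),
    3: ("Behaviors", "line manager's observation of new skills used on the job"),
    4: ("Results", "output, quality or satisfaction measures traced to the training"),
}

for level in sorted(kirkpatrick_evidence):
    name, evidence = kirkpatrick_evidence[level]
    print(f"Level {level} ({name}): {evidence}")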

Basic Suggestions for Evaluating Training


Typically, evaluators look for validity, accuracy and reliability in their evaluations.
However, these goals may require more time, people and money than the organization
has. Evaluators are also looking for evaluation approaches that are practical and relevant.

Training and development activities can be evaluated before, during and after the
activities. Consider the following very basic suggestions:

Before the Implementation Phase

• Will the selected training and development methods really result in the employee's learning the knowledge and skills needed to perform the task or carry out the role? Have other employees used the methods and been successful?
• Consider applying the methods to a highly skilled employee. Ask the employee for their impressions of the methods.
• Do the methods conform to the employee's preferences and learning styles? Have the employee briefly review the methods, e.g., documentation, overheads, etc. Does the employee experience any difficulties understanding the methods?

During Implementation of Training

• Ask the employee how they're doing. Do they understand what's being said?
• Periodically conduct a short test, e.g., have the employee explain the main points of what was just described to him, e.g., in the lecture.
• Is the employee enthusiastically taking part in the activities? Is he or she coming late and leaving early? It's surprising how often learners will leave a course or workshop and immediately complain that it was a complete waste of their time. Ask the employee to rate the activities from 1 to 5, with 5 being the highest rating. If the employee gives a rating of anything less than 5, have the employee describe what could be done to get a 5.

After Completion of the Training

• Give him or her a test before and after the training and development, and compare the results (a simple sketch of this comparison follows the list).
• Interview him or her before and after, and compare results.
• Watch him or her perform the task or conduct the role.
• Assign an expert evaluator from inside or outside the organization to evaluate the learner's knowledge and skills.
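
The before-and-after comparison mentioned in the first suggestion above can be as simple as the following Python sketch; the names, scores and follow-up threshold are all invented for illustration.

# Illustrative sketch: estimating each trainee's learning gain from
# pre- and post-training test scores. All data below is invented.
pre_scores = {"A. Khan": 45, "B. Ali": 60, "C. Shah": 70}
post_scores = {"A. Khan": 75, "B. Ali": 85, "C. Shah": 72}

for trainee, before in pre_scores.items():
    after = post_scores[trainee]
    gain = after - before
    # A small gain flags a trainee who may need follow-up support
    # from the line manager (see the debriefing meeting above).
    flag = " <- review with line manager" if gain < 5 else ""
    print(f"{trainee}: {before} -> {after} (gain {gain}){flag}")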
- Receive and count working cash at the beginning of the day

- Receive cash and cheques for deposit and check accuracy of deposit slips

- Process cash withdrawals

- Perform specialized tasks such as preparing cashier's cheques, personal money orders
and exchanging foreign currency

- Record all transactions promptly, accurately and in compliance with the Bank
procedures

- Balance currency, cash and cheques in the cash drawer at the end of each shift

- Attempt to resolve issues and problems with customers' accounts

- Promote Bank products and services to customers
