This study unit addresses the analysis of performance by business organizations. A pervasive
consideration is the pursuit of quality in all aspects of the organization’s activities. The importance of
quality management has been recognized by the International Organization for Standardization,
which has issued quality assurance standards. Also crucial to successful business performance is
effective planning. The aspect of planning covered in this study unit is forecasting, including a variety
of mostly quantitative forecasting models. The study unit continues with project management, a topic
of growing importance to all types of organizations in a technology-based society. The next subunit
describes business process reengineering and its implications, concluding with an approach to the
persistent problem of bottleneck management.
Core Concepts
■ TQM is the continuous pursuit of quality in every aspect of organizational activities through (1) a
philosophy of doing it right the first time, (2) employee training and empowerment, (3) promotion
of teamwork, (4) improvement of processes, and (5) attention to satisfaction of internal and
external customers.
■ The total cost of quality (conformance and nonconformance costs) should be minimized.
■ Benchmarking involves analysis and measurement of key outputs against those of the best
organizations. This procedure also involves identifying the underlying key actions and causes
that contribute to the performance difference.
■ The balanced scorecard is a means of implementing an organization’s strategy. It connects critical
success factors determined in a strategic analysis to performance measures that may be financial
or nonfinancial.
■ ISO 9000 is a series of voluntary generic standards for establishing and maintaining a quality
management system (QMS) within a company. QMS principles are (1) customer focus,
(2) leadership, (3) involvement of people, (4) use of a process approach, (5) a systems approach
to management, (6) continual improvement, (7) a factual approach to decision making, and
(8) mutually beneficial supplier relationships.
■ Forecasts are the basis for business plans. They attempt to answer questions about the outcomes
of events, the timing of events, or the future value of a statistic (e.g., sales).
■ Project management techniques are designed to aid the planning and control of large-scale
projects having many interrelated activities. A project is a temporary undertaking with specified
objectives that often involves a cross-functional team and working outside customary
organizational lines.
■ One approach to business process analysis is reengineering (also called business process
reengineering). It involves process innovation and core process redesign. Instead of improving
existing procedures, it finds new ways of doing things.
■ The theory of constraints (TOC) is a short-term approach to managing bottlenecks (binding
constraints) in production and distribution processes.
Copyright © 2008 Gleim Publications, Inc. and/or Gleim Internet, Inc. All rights reserved. Duplication prohibited. www.gleim.com
2 SU 1: Business Performance
[Control chart: monthly sample means (March, April, May) plotted around a $1.00 center line, with an upper control limit (UCL) of $1.05 and a lower control limit (LCL) of $0.95]
vi) A P chart is based on an attribute (acceptable/not acceptable) rather
than a measure of a variable. Specifically, it shows the percentage
of defective items in a sample.
vii) A C chart also is an attribute control chart. It shows defects per
item.
viii) An R chart shows the range of dispersion of a variable, such as size
or weight.
ix) An X-bar chart shows the sample mean for a variable.
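Control limits like the UCL and LCL shown above are conventionally set a fixed number of standard deviations from the center line. A minimal sketch of that convention (the ±3-standard-deviation choice and the sample values are assumptions, not from the text):

```python
from statistics import mean, stdev

def control_limits(samples, k=3.0):
    """Center line and control limits for an X-bar style chart.

    samples: per-period sample means of the variable being tracked.
    k: number of standard deviations used for the limits (3 is conventional).
    """
    center = mean(samples)
    spread = stdev(samples)
    return center - k * spread, center, center + k * spread

# Hypothetical monthly cost-per-unit sample means
lcl, cl, ucl = control_limits([1.01, 0.99, 1.00, 1.02, 0.98])

# Points outside the limits warrant investigation
out_of_control = [x for x in [1.00, 1.06] if not lcl <= x <= ucl]
```

Observations inside the limits are treated as random variation; only points outside the limits trigger an investigation.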
c) Variations in a process parameter may have several causes.
i) Random variations occur by chance. Present in virtually all
processes, they are not correctable because they will not repeat
themselves in the same manner. Excessively narrow control limits
will result in many investigations of what are simply random
fluctuations.
ii) Implementation deviations occur because of human or mechanical
failure to achieve target results.
iii) Measurement variations result from errors in the measurements of
actual results.
iv) Model fluctuations can be caused by errors in the formulation of a
decision model.
v) Prediction variances result from errors in forecasting data used in a
decision model.
3) A Pareto diagram is a bar chart that assists managers in what is commonly
called 80:20 analysis.
a) The 80:20 rule, formulated by management theorist Joseph M. Juran,
states that 80% of all effects are the result of only 20% of all causes.
b) In the context of quality control, managers optimize their time by focusing
their effort on the sources of most problems.
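The 80:20 analysis behind a Pareto diagram can be sketched by ranking defect causes and accumulating their share of total defects (the cause names and counts below are hypothetical):

```python
def pareto_rank(cause_counts):
    """Sort causes by defect count (descending) and attach cumulative %."""
    total = sum(cause_counts.values())
    ranked, running = [], 0
    for cause, count in sorted(cause_counts.items(), key=lambda kv: -kv[1]):
        running += count
        ranked.append((cause, count, round(100 * running / total, 1)))
    return ranked

# Hypothetical defect tally: a few causes dominate
ranked = pareto_rank({"misalignment": 120, "scratches": 40,
                      "wrong label": 25, "dents": 10, "other": 5})

# The "vital few" causes that account for roughly 80% of defects
vital_few = [cause for cause, _, cum_pct in ranked if cum_pct <= 80]
```

Managers would then direct quality efforts at the vital few causes first.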
8. The costs of quality must be assessed in terms of relative costs and benefits. Thus, an
organization should attempt to minimize its total cost of quality. Moreover,
nonquantitative factors also must be considered. For example, an emphasis on quality
improves competitiveness, enhances employee expertise, and generates goodwill.
a. Conformance costs include costs of prevention and costs of appraisal, which are
financial measures of internal performance.
1) Prevention attempts to avoid defective output. These costs include
(a) preventive maintenance, (b) employee training, (c) review of equipment
design, and (d) evaluation of suppliers.
2) Appraisal embraces such activities as statistical quality control programs,
inspection, and testing.
b. Nonconformance costs include internal failure costs (a financial measure of internal
performance) and external failure costs (a financial measure of customer
satisfaction).
1) Internal failure costs occur when defective products are detected before
shipment. Examples are scrap, rework, tooling changes, and downtime.
2) The costs of external failure, e.g., warranty costs, product liability costs, and
loss of customer goodwill, arise when problems occur after shipment.
a) They are even more important in service enterprises than in
manufacturing environments. Faulty goods sometimes may be
reworked or replaced to a customer’s satisfaction, but poor service tends
to result in a loss of customers.
3) Environmental costs also are external failure costs, e.g., fines for
nonadherence to environmental law and loss of customer goodwill.
c. A cost-of-quality report includes most costs related to quality. However, some items
often are not reported, for example, lost contribution margins from poor product
quality. They are opportunity costs and are not usually recorded by the accounting
system. The result is understatement of the costs of poor quality.
1) Lost contribution margins from reduced sales, market share, and sales prices
are external failure costs that also are not usually included in a cost-of-quality
report.
2) An example of a cost of quality report is presented below:
Prevention costs $35,000
Appraisal costs 5,000
Internal failure costs 17,500
External failure costs 9,500
Total costs of quality $67,000
d. Quality cost indices measure the cost of maintaining a given level of quality, for
example, total quality costs divided by direct labor costs.
1) EXAMPLE: To continue the example, if total direct labor costs were $201,000,
the quality cost index for the period was 33.3% ($67,000 ÷ $201,000).
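The cost-of-quality report and index above can be reproduced in a few lines (figures taken from the example):

```python
# Cost-of-quality report from the example
costs = {"prevention": 35_000, "appraisal": 5_000,
         "internal failure": 17_500, "external failure": 9_500}
total_cost_of_quality = sum(costs.values())          # $67,000

# Quality cost index: total quality costs relative to direct labor costs
direct_labor = 201_000
quality_cost_index = total_cost_of_quality / direct_labor

# Conformance vs. nonconformance split
conformance = costs["prevention"] + costs["appraisal"]
nonconformance = total_cost_of_quality - conformance
```

The split shows how much is spent preventing and detecting defects versus paying for failures.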
9. Quality and productivity do not necessarily have an inverse relationship. The robust
quality view is that improving quality and reducing costs in each category may be possible
if the most efficient prevention methods are applied.
a. For example, selection of a supplier meeting high quality standards regarding defect
rates and delivery times may drive down not only failure costs, but also the
prevention and appraisal costs incurred when supplier performance was less reliable.
10. Management of time is related to TQM.
a. Product development time is a crucial factor in the competitive equation. A company
that is first in the market with a new product has obvious advantages.
1) Reducing development time is also important because product life cycles are
becoming shorter.
2) Companies need to respond quickly and flexibly to new technology, changes in
consumer tastes, and competitive challenges.
b. One financial measure of product development is breakeven time. It is the time from
management approval of the project to the time when the cumulative present value
of cash inflows equals the cumulative present value of cash outflows.
1) The most popular method of determining breakeven time calculates the time
required for the present value of the cumulative cash flows to equal zero.
a) An alternative that results in a longer breakeven time is to consider the
time required for the present value of the cumulative cash inflows to equal
the present value of all the expected future cash outflows.
c. Customer-response time is the delay from placement of an order to delivery of the
good or service. Response time is a function of time drivers. A change in a time
driver causes a change in the time required for an activity. Such changes reflect
uncertainty about arrivals of customers in the queue and bottlenecks (points at
which capacity is reached or exceeded).
1) Customer response time consists of order receipt time (delay between the
customer’s placement of an order and its receipt by the production facility),
manufacturing lead or cycle time (delay from the order’s receipt by the
production facility to its completion), and order delivery time.
2) Manufacturing lead or cycle (throughput) time equals order waiting time plus
manufacturing time.
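The decomposition in 1) and 2) is simple addition; a sketch with hypothetical durations (in hours):

```python
def customer_response_time(order_receipt, order_wait, manufacturing, delivery):
    """Total delay from order placement to delivery of the good or service.

    Manufacturing lead (throughput) time = order waiting time +
    manufacturing time.
    """
    lead_time = order_wait + manufacturing
    return order_receipt + lead_time + delivery

# Hypothetical order: 4 h to reach the plant, 10 h in queue,
# 22 h to manufacture, 12 h to deliver
total = customer_response_time(order_receipt=4, order_wait=10,
                               manufacturing=22, delivery=12)
```

Reducing any one time driver (queue time at a bottleneck, say) shortens the whole response time.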
11. Benchmarking is a primary tool used in TQM. It is a means of helping organizations with
productivity management and business process analysis.
a. Benchmarking involves analysis and measurement of key outputs against those of the
best organizations. This procedure also involves identifying the underlying key
actions and causes that contribute to the performance difference.
1) Best practices are recognized by authorities in the field and by customers for
generating outstanding results. They are generally innovative technically or in
their management of human resources.
2) Benchmarking is an ongoing process that requires quantitative and qualitative
measurement of the difference between the performance of an activity and the
performance by the benchmark. This entity need not be a competitor.
b. The following are kinds of benchmarking:
1) Competitive benchmarking studies an organization in the same industry.
2) Process (function) benchmarking studies operations of organizations with
similar processes regardless of industry. Thus, the benchmark need not be a
competitor or even a similar entity.
a) This method may introduce new ideas that provide a significant competitive
advantage.
3) Strategic benchmarking is a search for successful competitive strategies.
4) Internal benchmarking is the application of best practices in one part of the
organization to its other parts.
c. The first phase in the benchmarking process is to select and prioritize
benchmarking projects.
1) An organization must understand its critical success factors and business
environment to identify key business processes and drivers and to develop
parameters defining what processes to benchmark. The criteria for selecting
what to benchmark relate to the reasons for the existence of a process and its
importance to the entity’s mission, values, and strategy. These reasons relate
in large part to satisfaction of end-user or customer needs.
d. The next phase is to organize benchmarking teams. A team organization is
appropriate. It permits an equitable division of labor, participation by those
responsible for implementing changes, and inclusion of a variety of functional
expertise and work experience.
1) Team members should have (a) knowledge of the function to be benchmarked,
(b) respected positions in the organization, (c) good communication skills,
(d) teaming skills, (e) motivation to innovate and to support cross-functional
problem solving, and (f) project management skills.
2) The team must thoroughly investigate and document the organization’s internal
processes. The organization is a series of processes, not a fixed structure.
a) A process is a network of related and interdependent activities joined by their
outputs. One way to determine the primary characteristics of a process is
to trace the path a request for a product or service takes through the
organization.
b) The team must develop a family of measures that are true indicators of
process performance. It also must develop a process taxonomy, that is,
a set of process elements, measures, and phrases that describes the
process to be benchmarked.
c) The development of key indicators for performance measurement in a
benchmarking context is an extension of the basic evaluative function of
internal auditors. Internal auditors evaluate governance, risk
management, and control processes. Evaluation requires establishment
of adequate criteria by management. In the absence of these criteria,
internal auditors must work with management to develop “appropriate
evaluation criteria” (Standard 2120.A4).
e. Researching and identifying best-in-class performance is often the most difficult
phase. The critical steps are
1) Setting up databases
2) Choosing information-gathering methods (internal sources, external public
domain sources, and original research)
3) Formatting questionnaires (lists of questions prepared in advance), and
4) Selecting benchmarking partners.
f. Data analysis identifies performance gaps, obtains an understanding of the reasons
they exist, and prioritizes the key activities that will facilitate the behavioral and
process changes needed. Sophisticated statistical and other methods may be
needed when the study involves many variables, testing of assumptions, or quantified
results.
g. Leadership is most important in the implementation phase because the team must
be able to justify its recommendations. Moreover, the process improvement teams
must manage the implementation of approved changes.
Customer Perspective
Objective: Increase customer satisfaction
Measures:  Greater market share
           Higher customer retention rate
           Positive responses to surveys
b. The intent of the standards is to ensure the quality of the process, not the product.
The marketplace determines whether a product is good or bad.
1) For this reason, the ISO deems it unacceptable for phrases referring to ISO
certification to appear on individual products or packaging.
c. Only one of the standards is a certification standard.
1) ISO 9001:2000, Quality Management Systems – Requirements, is the standard
that provides a model for quality assurance programs.
2) For this reason, “ISO 9001:2000 certified” is the only acceptable formulation.
There is no such thing as “ISO 9000 certification.”
3) ISO 9000:2005 was issued recently. It applies to (a) entities implementing a
QMS, (b) entities seeking assurance about products provided by suppliers,
(c) users of the products, (d) everyone needing an understanding of quality
terminology, (e) those who assess QMSs, (f) those who provide advice or
training relative to a QMS, and (g) standard setters.
2. The following are the objectives of applying the standards:
a. Achieving and continuously improving quality relative to requirements
b. Improving operations to meet all needs of stakeholders (interested parties)
c. Giving confidence to employees that quality requirements are met and improvement is
occurring
d. Giving confidence to stakeholders that quality of delivered products is achieved
e. Providing confidence that the quality system meets expectations
3. QMS standards are founded on the following quality management principles defined in
ISO 9000:2000 and ISO 9004:2000:
a. Customer focus means understanding needs, meeting requirements, and trying to
surpass expectations.
b. Leadership develops unity of purpose by maintaining an environment that permits full
involvement in reaching entity objectives.
c. Involvement of people in the fullest sense allows their abilities to be used for the
entity’s benefit.
d. A process approach to managing activities and resources is the efficient way to
obtain desired results.
e. A systems approach to management integrates and aligns processes to obtain
desired results more efficiently and effectively.
f. Continual improvement of overall performance should be a permanent objective.
g. A factual approach to decision making is based on data and information.
h. Mutually beneficial supplier relationships increase all parties’ value creation.
4. The following are the basic requirements of a QMS:
a. Key processes affecting quality must be identified and included.
1) A process management approach must be used. It manages the entity as a
set of linked processes that are controlled for continuous improvement.
b. General requirements. The entity must have a quality policy and quality goals. It
also must design a QMS to control process performance. Quality goals are
measurable and specific.
1) The QMS is documented in the (a) quality policy, (b) quality manual,
(c) procedures, (d) work instructions, and (e) records.
7. The ISO also has issued a set of environmental standards known as ISO 14000. These
standards are comparable in purpose to ISO 9000 but concern environmental quality
systems. Although they have not been as widely adopted as the ISO 9000 standards, they
may become a necessity for conducting international business.
a. ISO 14000 establishes internationally recognized standards that will diminish barriers
to trade and make it easier to do business across borders.
b. Some companies feel that adherence to ISO 14000 standards will reduce monitoring
and inspection by regulatory agencies.
c. A survey of managers found that failure to obtain ISO 14000 certification could
constitute a potential nontariff trade barrier because customers will require it.
d. At present, the main benefit of adopting ISO 14000 standards is internal. Companies
learn how well their environmental management system operates relative to those
of other companies.
e. Some companies have decided to seek ISO 14000 certification because they found
that ISO 9000 was beneficial.
f. Some European countries already have environmental systems standards in place,
and how these single-country standards will mesh with ISO 14000 is not clear.
However, individual countries’ standards are typically more strict.
g. Some are concerned that regulators may use voluntary ISO audits or self-audits as
a basis for punitive action. To allay these fears in the U.S., the Environmental
Protection Agency has issued new audit guidelines that are intended to avoid such
self-incrimination.
8. The scope of ISO 19011:2002 extends to (a) the principles of auditing, (b) managing audit
programs, (c) conducting QMS audits and environmental management system audits, and
(d) the competence of QMS and environmental management system auditors.
a. It applies to all entities that must perform internal or external audits of QMSs or
environmental management systems or manage an audit program.
b. ISO 19011 may apply to other types of audits if due consideration is given to
identifying the competencies required of the auditors.
9. ISO 10012:2003 is a generic standard. It addresses the management of measurement
processes and confirmation of measuring equipment used to support compliance with
required measures.
a. It states quality management requirements of a measurement management system
(MMS) that can be used as part of the overall management system.
b. It is not to be used as a requirement for demonstrating conformance with other
standards. Interested parties may agree to use ISO 10012:2003 as an input for
satisfying MMS requirements in certification activities. However, other standards
apply to specific elements affecting measurement results, e.g., details of
measurement methods, competence of personnel, or comparisons among
laboratories.
1.4 FORECASTING
1. Forecasts are the basis for business plans, including budgets. They attempt to answer
questions about the outcomes of events (e.g., the effect of a war involving a producer of oil
on the oil market), the timing of events (e.g., when will unemployment fall), or the future
value of a statistic (e.g., sales). In addition to intuition (informed judgment), many
quantitative methods are useful in projecting the future from past experience.
a. Examples of forecasts include sales projections, inventory demand, cash flow, and
future capital needs.
1) Models are used in the forecasting process to make decisions that
optimize future results.
2) The reliability of the forecast should be determined before using it. No objective
method can determine the reliability of judgmental forecasts. When quantitative
methods are used, however, measurement of reliability is usually possible, e.g.,
by calculating the standard error of the estimate.
2. Correlation analysis is used to measure the strength of the linear relationship between two
or more random variables. Correlation between two variables can be seen by plotting their
values on a single graph to form a scatter diagram. If the points tend to form a straight
line, correlation is high. Otherwise, correlation is low. Correlation measures only linear
relationships.
a. If the points form a curve, several possibilities exist.
1) A linear relationship (a straight line) may be used to approximate a portion of the
curve.
2) A linear relationship exists between some other function of the independent
variable x (e.g., log x) and the dependent variable y.
3) No relationship exists.
b. The coefficient of correlation (r) measures the relative strength of the linear
relationship. It has the following properties:
1) The magnitude of r is independent of the scales of measurement of x and y.
2) –1.0 ≤ r ≤ 1.0
a) A value of –1.0 indicates a perfectly inverse linear relationship between
x and y.
b) A value of zero indicates no linear relationship between x and y.
c) A value of +1.0 indicates a perfectly direct linear relationship between
x and y.
c. Scatter diagrams may be used to demonstrate correlations. Each observation
creates a dot that pairs the x and y values. The collinearity and slope of these
observations are related to the coefficient of correlation by the above-stated rules.
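The coefficient of correlation can be computed directly from paired observations; a minimal sketch (the sample data are hypothetical):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Coefficient of correlation (r) for paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly direct linear relationship gives r = +1.0
r_direct = pearson_r([1, 2, 3, 4], [10, 20, 30, 40])

# A perfectly inverse linear relationship gives r = -1.0
r_inverse = pearson_r([1, 2, 3], [3, 2, 1])
```

Plotting the same pairs as a scatter diagram would show the collinearity that these r values summarize.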
1) The coefficient of determination may be calculated as

r² = 1 – [∑(yi – ŷi)² ÷ ∑(yi – ȳ)²]

If: r² = the coefficient of determination
∑ = summation
yi = an actual data point
ŷi = a point on the regression line calculated from the sample
linear regression equation
ȳ = the mean of the observed data points
2) EXAMPLE: The assertion that new car sales are a function of disposable
income with a coefficient of correlation of .8 is equivalent to stating that 64%
(.8²) of the variation of new car sales (from average new car sales) can be
explained by the variation in disposable income (from average disposable
income).
3) Because r² increases as the number of independent variables increases,
regardless of whether the additional variables are actually correlated with the
dependent variable, r² may be adjusted (reduced) to allow for this effect. If k is
the number of independent variables and n is the number of observations, the
formula for adjusted r² is

Adjusted r² = 1 – [(1 – r²)(n – 1) ÷ (n – k – 1)]
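The adjustment penalizing extra independent variables can be sketched as follows (the observation and predictor counts are hypothetical; the unadjusted r² reuses the .64 from the car-sales example):

```python
def adjusted_r_squared(r2, n, k):
    """Reduce r-squared to allow for the number of independent variables.

    r2: unadjusted coefficient of determination
    n:  number of observations
    k:  number of independent variables
    """
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With r-squared = .64, 20 observations, and 3 independent variables:
adj = adjusted_r_squared(0.64, n=20, k=3)
```

The adjusted value is always below the unadjusted one when k > 0, so adding uncorrelated predictors no longer inflates the apparent fit.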
3. Regression (least squares) analysis extends correlation to find an equation for the linear
relationship among variables. The behavior of the dependent variable is explained in terms
of one or more independent variables. Thus, regression analysis determines functional
relationships among quantitative variables.
a. Simple regression has one independent variable, and multiple regression has more
than one.
1) EXAMPLE: A dependent variable such as sales is dependent on advertising,
consumer income, availability of substitutes, and other independent variables.
2) Multicollinearity is the condition in which two or more independent variables are
strongly correlated. The effect is greater uncertainty regarding the coefficient of
the variables; that is, their standard errors increase. Multicollinearity is a
concern in multiple regression.
b. Regression analysis is used to find trend lines in business data such as sales or
costs (time series analysis or trend analysis) and to develop models based on the
association of variables (cross-sectional analysis, a method that is not time related
as is trend analysis). Examples are
1) Trend in product sales
2) Trend in overhead as a percentage of sales
3) Relationship of direct labor hours to variable overhead
4) Relationship of direct material usage to accounts payable
c. Some reasonable basis should exist for expecting the variables to be related.
1) If they are obviously independent, any association found by regression is mere
coincidence.
2) Regression does not determine causality, however. Although x and y move
together, the apparent relationship may be caused by some other factor.
a) EXAMPLE: A strong negative correlation exists between the decline in
infant mortality and the increase in the number of senior citizens. Both
are likely to be primarily caused by a factor such as better medical care.
3) The statistical relationships revealed by regression and correlation analysis are
valid only for the range of the data in the sample.
d. The simple regression equation is

y = a + bx + e
If: y = the dependent variable
a = the y-axis intercept (the fixed cost in cost functions)
b = the slope of the regression line (the variable portion of the
total cost in cost functions)
x = the independent variable
e = the error term
1) Assumptions of the model are that
a) For each value of x, there is a distribution of values of y. The means of
these distributions form a straight line. Hence, x and y are linearly
related.
b) The error term (e) is normally distributed with a mean or expected value
equal to zero.
i) The y-intercept (a) and the slope of the regression line (b) also have
normal distributions.
c) Errors in successive observations are statistically independent.
i) Thus, the estimators are unbiased.
ii) Autocorrelation (serial correlation) occurs when the observations
are not independent; in other words, later observations may be
dependent on earlier ones.
d) The distribution of y around the regression line is constant for different
values of x.
i) Thus, the observations are characterized by homoscedasticity or
constant variance. The deviation of points from the regression line
does not vary significantly with a change in the size of the
independent variable.
● Heteroscedasticity is the condition in which the variance of
the error term is not constant.
g. The following equations (the normal equations) can be used to determine the
equation for the least squares regression line (the equation for the line is in the
form of y = a + bx):

∑y = na + b∑x
∑xy = a∑x + b∑x²
1) EXAMPLE: The use of the two equations can be illustrated with the following
data based on a set of six paired observations (n = 6):
y      x      xy             x²
$ 6    2      6 × 2 = 12     4
  7    3      7 × 3 = 21     9
  5    2      5 × 2 = 10     4
  4    1      4 × 1 =  4     1
  8    3      8 × 3 = 24     9
  6    2      6 × 2 = 12     4
∑y = $36   ∑x = 13   ∑xy = 83   ∑x² = 31
e) Alternative formulas that are ordinarily simpler to use are given below:
i) The slope may be expressed as

b = [n∑xy – (∑x)(∑y)] ÷ [n∑x² – (∑x)²]
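Applying the least squares fit to the six paired observations above can be sketched as follows (the intercept follows from a = (∑y – b∑x) ÷ n):

```python
# The six paired observations from the example
xs = [2, 3, 2, 1, 3, 2]
ys = [6, 7, 5, 4, 8, 6]
n = len(xs)

sum_x, sum_y = sum(xs), sum(ys)                 # 13 and 36
sum_xy = sum(x * y for x, y in zip(xs, ys))     # 83
sum_x2 = sum(x * x for x in xs)                 # 31

# Slope and intercept of the least squares line y = a + bx
b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / n
```

The fitted line is approximately y = 2.18 + 1.76x for these data.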
Because costs increased $200 for 500 additional hours, the variable cost is
$.40 per machine hour. For the low month, the total variable portion of that
monthly cost is $320 ($.40 × 800 hours). Given that the total cost is $400 and
$320 is variable, the remaining $80 must be a fixed cost. The regression
equation is y = 80 + .4x.
2) The major criticism of the high-low method is that the high and low points may be
abnormalities not representative of normal events.
4. Time series or trend analysis relies on past experience. Changes in the value of a
variable (e.g., unit sales of a product) over time may have several possible components.
a. In time series analysis, the dependent variable is regressed on time (the independent
variable).
b. The secular trend is the long-term change that occurs in a series. It is represented by
a straight line or curve on a graph.
c. Seasonal variations are common in many businesses. A variety of analysis methods
includes seasonal variations in a forecasting model, but most methods make use of a
seasonal index.
d. Cyclical fluctuations are variations in the level of activity related to the business
cycle. Although some of these fluctuations are beyond the control of the firm, they need to
be considered in forecasting. They are usually incorporated as index numbers.
e. Irregular or random variables are any variations not included in the categories
above. Business can be affected by random happenings (e.g., weather, strikes, fires,
etc.).
f. The percentage-of-sales method is the most widely used for sales forecasting. It
adjusts the current level of sales by a specified percentage increase or decrease.
This method is a form of trend analysis that is convenient and easy to apply and
intuitively appealing to managers. It is also useful for developing pro forma financial
statements by estimating items that vary directly with sales as percentages of
expected sales.
1) This method is based on the assumptions that most items directly correlate with
sales and that current levels of all assets are optimal for current sales.
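As a minimal sketch of the method (the sales level, growth rate, and item percentages below are hypothetical, not from the text), the percentage-of-sales approach forecasts sales and then scales the items that vary directly with sales:

```python
def percentage_of_sales(current_sales, growth_rate, item_percentages):
    """Forecast sales, then estimate pro forma items that vary directly with sales.

    item_percentages maps each line item to its historical percentage of sales.
    """
    forecast_sales = current_sales * (1 + growth_rate)
    pro_forma = {item: pct * forecast_sales for item, pct in item_percentages.items()}
    return forecast_sales, pro_forma

# Hypothetical figures: $1,000,000 of current sales, 10% expected growth.
sales, items = percentage_of_sales(1_000_000, 0.10,
                                   {"receivables": 0.15, "inventory": 0.20})
print(round(sales), round(items["receivables"]))  # 1100000 165000
```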
5. Exponential smoothing is a technique used to level or smooth variations encountered in a
forecast. This technique also adapts the forecast to changes as they occur.
a. The simplest form of smoothing is the moving average, in which each forecast is
based on a fixed number of prior observations. Exponential smoothing is similar to
the moving average.
b. Exponential means that greater weight is placed on the most recent data, with the
weights of all data falling off exponentially as the data age. The selection of alpha
(α), the smoothing factor, is important because a high alpha places more weight on
recent data.
c. The equation for the forecast (F) for period t + 1 is
F(t + 1) = αx(t) + (1 – α)F(t)
where x(t) is the observed value and F(t) is the forecast for period t.
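The smoothing recurrence can be sketched directly. The α, initial forecast, and demand figure below are hypothetical illustrations:

```python
def exponential_smoothing(observations, alpha, initial_forecast):
    """Return the forecast series F, where F(t+1) = alpha * x(t) + (1 - alpha) * F(t)."""
    forecasts = [initial_forecast]
    for x in observations:
        forecasts.append(alpha * x + (1 - alpha) * forecasts[-1])
    return forecasts

# With alpha = .5, an initial forecast of 100, and actual demand of 120:
print(exponential_smoothing([120], 0.5, 100))  # [100, 110.0]
```

A higher α would move the new forecast closer to the 120 observation, consistent with the point above that a high alpha weights recent data more heavily.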
b. Graphical Presentation
c. If the average time for 100 units in the example were 3 minutes per unit, the total time
would be 300 minutes. At an average time of 2.4 minutes for 200 units, the total time
would be 480 minutes. In other words, the additional 100 units required only
180 minutes (480 – 300), or 1.8 minutes per unit.
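Because the average fell from 3.0 to 2.4 minutes when output doubled, the example describes an 80% cumulative-average learning curve (2.4 ÷ 3.0 = .8). A sketch of that model:

```python
import math

def average_time(base_units, base_average, total_units, learning_rate):
    """Cumulative-average learning curve: each doubling of cumulative output
    multiplies the average time per unit by the learning rate."""
    doublings = math.log2(total_units / base_units)
    return base_average * learning_rate ** doublings

avg_200 = average_time(100, 3.0, 200, 0.8)   # about 2.4 minutes per unit
extra = 200 * avg_200 - 100 * 3.0            # time for the second 100 units
print(round(avg_200, 2), round(extra, 1))    # 2.4 180.0
```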
7. Simulation is a technique for experimenting with logical and mathematical models using a
computer.
a. Despite the power of mathematics, many problems cannot be solved by known
analytical methods because of the behavior of the variables and the complexity of
their interactions, e.g.,
1) Corporate planning models
2) Financial planning models
3) New product marketing models
4) Queuing system simulations
5) Inventory control simulations
b. Experimentation is neither new nor uncommon in business. Building a mockup of a
new automobile, having one department try out new accounting procedures, and test-
marketing a new product are all forms of experimentation. In effect, experimentation
is organized trial and error using a model of the real world to obtain information prior
to full implementation.
c. Models can be classified as either physical or abstract.
1) Physical models include automobile mockups, airplane models used for wind-
tunnel tests, and breadboard models of electronic circuits.
2) Abstract models may be pictorial (architectural plans), verbal (a proposed
procedure), or logical-mathematical. Experimentation with logical-mathematical
models can involve many time-consuming calculations. Computers have
eliminated much of this costly drudgery and have led to the growing interest in
simulation for management.
d. The simulation procedure has five steps.
1) Define the objectives. The objectives serve as guidelines for all that follows.
The objectives may be to aid in the understanding of an existing system (e.g.,
an inventory system with rising costs) or to explore alternatives (e.g., the effect
of investments on the firm’s financial structure). A third type of objective is
estimating the behavior of some new system, such as a production line.
2) Formulate the model. The variables to be included, their individual behavior,
and their interrelationships must be defined in precise logical-mathematical
terms. The objectives of the simulation serve as guidelines in deciding which
factors are relevant.
3) Validate the model. Some assurance is needed that the results of the
experiment will be realistic. This assurance requires validation of the model --
often using historical data. If the model gives results equivalent to what actually
happened, the model is historically valid. Some risk remains, however, that
changes could make the model invalid for the future.
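As an illustration of the technique (not an example from the text; the demand range, unit margin, and fixed cost are hypothetical), a Monte Carlo simulation repeatedly samples random inputs and summarizes the resulting outputs:

```python
import random

def simulate_average_daily_profit(days, seed=42):
    """Monte Carlo sketch: average daily profit with uncertain demand."""
    rng = random.Random(seed)          # seeded so that runs are repeatable
    margin, fixed_cost = 5.0, 200.0    # hypothetical unit margin and fixed cost
    total = 0.0
    for _ in range(days):
        demand = rng.randint(30, 70)   # uncertain daily demand
        total += demand * margin - fixed_cost
    return total / days

# Average demand is 50 units, so simulated profit should be near $50 per day.
print(round(simulate_average_daily_profit(10_000)))
```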
i. Markov analysis. Markov processes are useful in decision problems in which the
probability of the occurrence of a future state depends only on the current state.
1) A characteristic of the Markov process is that the initial state matters less and
less as time goes on because the process will eventually reach its steady
state.
2) EXAMPLE: A machine tool may be in one of two states, in adjustment or out of
adjustment. The machine moves from one state to the other in 1 day with fixed
transition probabilities.
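The two-state chain can be iterated numerically to show the steady state. The transition probabilities below are hypothetical, since the example's probability table is not reproduced here:

```python
def steady_state_probability(p_stay_in, p_stay_out, start_in=1.0, steps=100):
    """Probability of being in adjustment after many days of a two-state Markov chain."""
    in_adj = start_in
    for _ in range(steps):
        # P(in tomorrow) = P(in today) * P(stay in) + P(out today) * P(move in)
        in_adj = in_adj * p_stay_in + (1 - in_adj) * (1 - p_stay_out)
    return in_adj

# Hypothetical: P(stay in adjustment) = .7, P(stay out of adjustment) = .4.
# The steady state is the same whether the machine starts in or out of adjustment,
# illustrating that the initial state matters less and less as time goes on.
print(round(steady_state_probability(0.7, 0.4, start_in=1.0), 4))  # 0.6667
print(round(steady_state_probability(0.7, 0.4, start_in=0.0), 4))  # 0.6667
```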
4) The insufficient reason (Laplace) criterion may be used when the decision
maker cannot assign probabilities to the states of nature arising after a
decision.
a) The reasoning is that, if no probability distribution can be assigned, the
probabilities must be equal, and the expected value is calculated
accordingly. For each decision, the payoffs for the various states of
nature are simply added, and the decision with the highest total is
chosen. This criterion is risk-neutral.
5) An expected value criterion might be used by a risk-neutral player, that is, one
for whom the utility of a gain is the same as the disutility of an equal loss.
11. For decisions involving risk, expected value provides a rational means for selecting the
best alternative. The expected value of an action is found by multiplying the probability of
each outcome by its payoff and adding the products. It is the long-term average payoff
for repeated trials. The best alternative has the highest expected value.
a. EXAMPLE: A dealer in yachts may order 0, 1, or 2 yachts for this season’s inventory.
The cost of carrying each excess yacht is $50,000, and the gain for each yacht sold
is $200,000. The probabilities of demand are .1 for 0 yachts, .5 for 1 yacht, and
.4 for 2 yachts.
State of Nature        Decision:     Decision:     Decision:
(Actual Demand)        Order 0       Order 1       Order 2
0 yachts               $0            $(50,000)     $(100,000)
1 yacht                0             200,000       150,000
2 yachts               0             200,000       400,000
a) The decision with the greatest expected value is to order two yachts.
Absent additional information, the dealer should order two.
b. Perfect information is the knowledge that a future state of nature will occur with
certainty, i.e., being sure of what will occur in the future. The expected value of
perfect information (EVPI) is the difference between the expected value without
perfect information and the return if the best action is taken given perfect information.
1) EXAMPLE (continued):
Best Action Expected Value
State of Nature Pr Best Action Payoff (Pr × Payoff)
Demand = 0 .1 Buy 0 $ 0 $ 0
Demand = 1 .5 Buy 1 200,000 100,000
Demand = 2 .4 Buy 2 400,000 160,000
$260,000
a) The dealer expects to make $260,000 with perfect information about future
demand and $225,000 if the choice with the best expected value is
made. EVPI is
Expected value with perfect information $260,000
Expected value without perfect information (225,000)
$ 35,000
b) The dealer will not pay more than $35,000 for perfect information.
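The expected values and EVPI in the yacht example can be verified in a few lines; the payoff table and probabilities are taken from the example above:

```python
# payoffs[order][demand], from the yacht example
payoffs = {
    0: {0: 0, 1: 0, 2: 0},
    1: {0: -50_000, 1: 200_000, 2: 200_000},
    2: {0: -100_000, 1: 150_000, 2: 400_000},
}
probs = {0: 0.1, 1: 0.5, 2: 0.4}

# Expected value of each order quantity without additional information.
expected = {order: sum(probs[d] * pay[d] for d in probs) for order, pay in payoffs.items()}
best_without_info = max(expected.values())

# With perfect information, the dealer always takes the best action for the
# demand that actually occurs.
with_perfect_info = sum(probs[d] * max(payoffs[o][d] for o in payoffs) for d in probs)
evpi = with_perfect_info - best_without_info

print(round(best_without_info), round(with_perfect_info), round(evpi))
# 225000 260000 35000
```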
12. Well-designed surveys using questionnaires or interviews are often used to determine
customer preferences, attitudes, and tastes. They also may be used to gather opinions
from experts.
3. Gantt charts are simple to construct and use. A Gantt chart divides the project into logical
subprojects, called activities or tasks, estimates the start and completion times for each
activity, and shows each activity as a horizontal bar along a time scale. A traditional Gantt
chart is given below. However, a chart also may be drawn that displays work completed as
of a given date.
a. The major advantage of the Gantt chart is its simplicity. It forces the planner to think
ahead and define logical activities. As the project progresses, actual completion
times can be compared with planned times. Furthermore, the technique requires no
special tools or mathematics and can be used on small projects.
b. The major disadvantage is that interrelationships among activities are not shown.
Several special methods have been developed to show these on a Gantt chart, but
they are feasible only for simple relationships.
4. Program evaluation and review technique (PERT) was developed to aid managers in
controlling large-scale, complex projects. PERT diagrams are free-form networks showing
each activity as a line between events. A sequence of lines shows interrelationships
among activities. PERT diagrams are more complex than Gantt charts, but they have the
advantages of incorporating probabilistic time estimates and identifying the critical path.
a. Events are discrete moments in time representing the start or finish of an activity.
They consume no resources.
b. Activities are tasks to be accomplished. They consume resources (including time)
and have a duration over time.
1) A dummy activity, however, consumes no time or resources but establishes
precedence among activities. It is used specifically in project management
networks.
2) The latest finish is the latest that an activity can finish without causing delay in
the completion of the project.
c. The network diagram is formed by
1) The lines (activities) connected from left to right in the necessary sequence of
their accomplishment. They can be marked with time lengths.
2) Circles representing events and numbered for identification.
d. The critical path is the longest path in time through the network. It is critical because,
if any activity on the critical path takes longer than expected, the entire project will be
delayed. Every network has at least one critical path. Some have more than one.
1) The mean completion time for the critical path is the sum of the means of the
activity times.
2) The standard deviation of the completion time for the critical path is the
square root of the sum of the variances (squares of the standard deviations) of
the activity times.
a) EXAMPLE: If the critical path has two activities, and the standard
deviations of the completion times are 3 and 4, the standard deviation for
the critical path is √(3² + 4²) = √25 = 5.
e. Paths that are not critical have slack time. One advantage of PERT is that it identifies
slack time, which represents unused resources that can be diverted to the critical
path.
f. Several techniques have been developed to include cost information in the analyses.
This variation of PERT is often called PERT-Cost. It entails combining activities into
work packages to facilitate cost control. By estimating costs for each work package,
a manager can develop a budget that indicates when costs should be incurred during
the project.
g. Activity times can be expressed probabilistically. Computer programs are available
to make the calculations and find critical paths.
h. PERT analysis includes probabilistic estimates of activity completion times. Three
time estimates are made – optimistic, most likely, and pessimistic.
1) The time estimates for an activity are assumed to approximate a beta probability
distribution. In contrast with the normal distribution, this distribution has finite
endpoints (the optimistic and pessimistic estimates) and is unimodal; that is, it
has only one mode (the most likely time).
2) PERT approximates the mean of the beta distribution by dividing the sum of the
optimistic time, the pessimistic time, and four times the most likely time (the
mode) by six.
3) The standard deviation is approximated by dividing the difference between the
pessimistic and optimistic times by six. The basis for this approximation is
that various probability distributions have tails that lie about plus or minus three
standard deviations from the mean. For example, 99.7% of observations in the
normal distribution are expected to lie within this range.
i. EXAMPLE: If an activity can be completed in 6 days (optimistic time), 10 days (most
likely time), or 20 days (pessimistic time), the expected duration is 11 days {[6 + (4 ×
10) + 20] ÷ 6}.
1) Thus, the most likely time is weighted the most heavily.
2) The standard deviation is 2.33 [(20 – 6) ÷ 6].
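The two approximations can be sketched as follows, using the 6-, 10-, and 20-day estimates from the example above:

```python
def pert_estimates(optimistic, most_likely, pessimistic):
    """Beta-approximation mean and standard deviation of a PERT activity time."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6  # mode weighted 4x
    std_dev = (pessimistic - optimistic) / 6                 # range spans about 6 sd
    return mean, std_dev

mean, sd = pert_estimates(6, 10, 20)
print(mean, round(sd, 2))  # 11.0 2.33
```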
j. EXAMPLE:
1) For the example network, the following are the paths and path times:
Path Time (hours)
Start-1-9-6 16.5
Start-1-2-5-6 16.9
Start-1-2-3-4-6 20.8
Start-1-3-4-6 19.3
Start-1-7-8-6 17.4
2) Path Start-1-2-3-4-6 is the critical path because it has the longest time.
3) Path 1-3 takes only 5.7 hours, but the critical path events (1-2-3) take 7.2 hours.
The slack time represented by path 1-3 is thus 7.2 – 5.7, or 1.5. People
assigned to path 1-3 have an extra 1.5 hours to help elsewhere.
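Given the path times listed above, identifying the critical path and each path's slack is a simple maximum; this sketch uses the example's figures:

```python
path_times = {  # hours, from the example network
    "Start-1-9-6": 16.5,
    "Start-1-2-5-6": 16.9,
    "Start-1-2-3-4-6": 20.8,
    "Start-1-3-4-6": 19.3,
    "Start-1-7-8-6": 17.4,
}
critical = max(path_times, key=path_times.get)            # longest path in time
slack = {p: round(path_times[critical] - t, 1) for p, t in path_times.items()}

print(critical)                 # Start-1-2-3-4-6
print(slack["Start-1-3-4-6"])   # 1.5 hours of slack
```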
5. The critical path method (CPM) was developed independently of PERT and is widely used
in the construction industry. CPM may be thought of as a subset of PERT. Like PERT, it is
a network technique. Unlike PERT, it uses deterministic time and cost estimates. Its
advantages include cost estimates plus the concept of “crash” efforts and costs.
a. Activity times are estimated for normal effort and crash effort. Crash time is the time
to complete an activity, assuming that all available resources were devoted to the
task (overtime, extra crew, etc.).
b. Activity costs are also estimated for normal and crash efforts.
c. These estimates allow the project manager to estimate the costs of completing the
project if some of the activities are completed on a crash basis.
d. The network diagram is constructed in the same manner as PERT diagrams. Once
the diagram is constructed, the critical paths are found for normal and crash times.
More than one critical path may exist for each diagram.
e. Crashing the network means finding the minimum cost for completing the project in
minimum time.
1) EXAMPLE (CMA, adapted): Builder uses the critical path method to monitor
jobs. It is currently 2 weeks behind schedule on Job #181, which is subject to a
$10,500-per-week completion penalty. Path A-B-C-F-G-H-I has a normal
completion time of 20 weeks, and critical path A-D-E-F-G-H-I has a normal
completion time of 22 weeks. The following activities can be crashed:
Cost to Crash Cost to Crash
Activities 1 Week 2 Weeks
BC $ 8,000 $15,000
DE 10,000 19,600
EF 8,800 19,500
Builder desires to reduce the normal completion time of Job #181 and report the
highest possible income for the year. Builder should crash activity DE 1 week
and activity EF 1 week. Activities to be crashed must lie on the critical path, so
activity BC should not be selected; crashing it would not reduce the total time to
complete the project. The only feasible choices are therefore DE and EF.
Crashing each for 1 week costs $18,800 ($10,000 + $8,800), which is less than
the cost of crashing either activity alone for 2 weeks. The $18,800 total is also
less than the $21,000 ($10,500 × 2) 2-week delay penalty.
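The crash decision reduces to comparing the cost of each feasible option against the penalty; the figures come from the example:

```python
penalty_per_week = 10_500

# Each option below removes the 2-week delay. BC is excluded because it is
# not on the critical path, so crashing it would not shorten the project.
options = {
    "crash DE and EF 1 week each": 10_000 + 8_800,
    "crash DE 2 weeks": 19_600,
    "crash EF 2 weeks": 19_500,
    "accept the 2-week penalty": 2 * penalty_per_week,
}
best = min(options, key=options.get)
print(best, options[best])  # crash DE and EF 1 week each 18800
```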
f. CPM computer programs allow updating of the solution as work proceeds.
6. Network models are used to solve managerial problems pertaining to project scheduling,
information systems design, and transportation systems design. Networks consisting of
nodes and arcs may be created to represent in graphic form problems related to
transportation, assignment, and transshipment. The shortest-route, minimal spanning tree,
and maximal flow problems are other applications of network models.
a. A shortest-route algorithm minimizes total travel time from one site to each of the
other sites in a transportation system.
b. The maximal flow algorithm maximizes throughput in networks with distinct entry
(source node) and exit (sink node) points. Examples of applications are highway
transportation systems and oil pipelines. Flows are limited by capacities of the arcs
(e.g., highways or pipes).
c. The minimal spanning tree algorithm identifies the set of connecting branches
having the shortest combined length. A spanning tree is a group of branches (arcs)
that connects each node in the network to every other node. An example problem is
the determination of the shortest telecommunications linkage among users at remote
sites and a central computer.
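As a sketch of the shortest-route problem (the road network below is hypothetical), Dijkstra's algorithm computes the minimum travel time from a source node to every other node:

```python
import heapq

def shortest_route(graph, source):
    """Dijkstra's algorithm over a graph of {node: {neighbor: travel_time}}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                            # stale queue entry
        for neighbor, time in graph.get(node, {}).items():
            new_d = d + time
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(queue, (new_d, neighbor))
    return dist

roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
print(shortest_route(roads, "A")["D"])  # 8 (A -> C -> B -> D)
```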
7. Flowcharting is a pictorial method of analyzing and understanding the processes and
procedures involved in operations, whether manual or computerized.
a. Flowcharting is therefore useful in describing the sequence of a project’s planned
activities and decisions.
b. Flowcharting is based on standardized symbols.
1) However, different systems of symbols have been devised, so care must be
taken that the usage is consistent.
c. Today, software has simplified the flowcharting task.
d. One drawback of flowcharting is that it is not feasible for a complicated project with
many simultaneous activities. A second drawback is that a flowchart does not display
the time needed for each activity and decision.
b. TOC analysis defines all costs as fixed in the short-term except direct materials costs.
Accordingly, the throughput contribution equals sales dollars minus direct
materials costs, which include materials handling costs as well as raw materials and
purchased components. This approach is a type of supervariable costing because
only direct materials costs are inventoried.
1) The objective of TOC analysis is to maximize throughput contribution and to
minimize investments (defined as materials costs of all inventories, plus R&D
costs, plus fixed assets) and other operating costs (defined as all operating
costs other than direct materials costs necessary to earn the throughput
contribution).
c. TOC analysis identifies the bottleneck operation that determines the throughput
contribution. This operation has large inventories waiting to be processed.
1) The bottleneck operation establishes the processing schedule for
nonbottleneck operations. Hence, nonbottleneck production should not exceed
what can be processed by the bottleneck operation.
2) In the longer term, actions should be undertaken to improve the capacity of the
bottleneck operation so that the increase in the throughput contribution
exceeds the additional costs.
3) Production flow is managed using a drum-buffer-rope (DBR) system.
a) The drum (or drummer providing the beat to which a production process
marches) is the bottleneck operation or binding constraint.
b) The buffer is a minimal amount of work-in-process input to the drum that is
maintained to ensure that it is always in operation.
c) The rope is the sequence of activities preceding and including the
bottleneck operation that must be coordinated to avoid inventory buildup.
Analysis of the rope includes consideration of lead times.
d. The optimal strategy to avoid bottleneck problems is to (1) redesign processes,
(2) apply improved technology, (3) redesign products to make them easier to
manufacture, or (4) possibly eliminate some products that are difficult to
manufacture. Value engineering is useful for this purpose because it explicitly
balances product cost and the needs of potential customers (product functions).
e. To summarize, the steps in a TOC analysis include
1) Determining the bottleneck operation or binding constraint, that is, the one that
restricts output to less than the amount demanded.
2) Discovering the best use of the bottleneck operation, for example, by choosing
the optimal product mix or by enhancing product flow through (a) minimizing
setups, (b) ascertaining the appropriate lot size, (c) improving the quality of
units produced, and (d) focusing on the throughput contribution instead of
efficiency.
3) Using the DBR system to manage production through the bottleneck operation.
4) Increasing the bottleneck operation’s capacity after the foregoing procedures are
complete, provided that the throughput contribution exceeds the cost.
5) Redesigning the process or product(s) for greater flexibility and faster
throughput.
f. A TOC report should present relevant performance measures, for example, of
(1) throughput contribution, (2) elimination of bottlenecks, (3) reduction of average
lead times, and (4) number of unfilled orders.
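The product-mix choice in step 2) of the TOC analysis can be sketched by ranking products on throughput contribution per bottleneck hour and filling scarce capacity greedily. All figures below are hypothetical:

```python
def bottleneck_product_mix(products, bottleneck_hours):
    """Allocate scarce bottleneck hours to products ranked by throughput
    contribution per bottleneck hour.

    products: {name: (unit_throughput_contribution, bottleneck_hours_per_unit, demand)}
    """
    ranked = sorted(products, key=lambda p: products[p][0] / products[p][1],
                    reverse=True)
    plan, remaining = {}, bottleneck_hours
    for name in ranked:
        contribution, hours, demand = products[name]
        units = min(demand, int(remaining // hours))  # limited by demand and capacity
        plan[name] = units
        remaining -= units * hours
    return plan

# Hypothetical: X contributes $60/unit over 2 bottleneck hours ($30/hour);
# Y contributes $50/unit over 1 bottleneck hour ($50/hour); 200 hours available.
print(bottleneck_product_mix({"X": (60, 2, 100), "Y": (50, 1, 150)}, 200))
# {'Y': 150, 'X': 25}
```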