PART ONE – BACKGROUND

The book is divided into five parts, each of which covers a different aspect of
quantitative methods. This first part describes the underlying concepts of quantitative
methods, setting the context for the rest of the book.

At first, you might not realise how much managers use quantitative methods, but with
a bit of thought you can soon see the breadth of decisions that depend on
quantitative analyses. Whenever you see a set of accounts, financial analysis,
market survey, forecast sales, share prices, performance figures, sales analysis,
productivity measures – and just about anything else – you can see the results of
quantitative methods. Of course, this does not mean that you have to do a lot of
rigorous mathematical analysis – and it certainly does not mean that you have to be
a mathematical whiz-kid. But it does mean that you have to understand the results
and know what is happening. Quantitative methods form an essential skill that is
needed by every manager – and, thankfully, these skills are not particularly difficult
to learn.

There are three chapters in this first part of the book. Chapter 1 discusses the ways
that managers use numbers in their decisions, and why they must understand a range of
quantitative ideas. The rest of the book describes a series of key analyses for
business, all of which rely on some basic quantitative tools. Chapters 2 and 3 review
these tools – with Chapter 2 describing calculations and equations, and Chapter 3
showing how to draw graphs.

CHAPTER 1 – MANAGERS AND NUMBERS

REVIEW OF THE CHAPTER


This chapter introduces the idea that quantitative methods are widely used in
business. It discusses the importance of numerical information, the general
approach of quantitative methods, and the way that quantitative models are used to
solve problems. These methods are not complicated and do not need sophisticated
mathematics. Managers are much more likely to use – and understand – simple
numerical measures and analyses than any sophisticated mathematical techniques.

The main function of managers can be summarised as ‘making decisions’ – or, more
accurately, analysing problems, making decisions to solve them and then
implementing the decisions. Their problems almost invariably include numerical
features – and it is difficult to imagine a management problem that does not have a
significant quantitative element. For this reason managers must have some
quantitative skills and understand a range of associated methods. No manager can
function properly without these skills – which is the reason that you are attending a
class in quantitative methods. You can find a lot of evidence to support this view
from examples in newspapers or by searching the Web.

The chapter suggests a general approach to making decisions – with problem
identification, analysis, decision and implementation. There are many variations on
this basic model, with quantitative methods appearing in the analysis stage. Then the
analysis itself can be broken down into a series of related steps, with a key element
being the design of a model. This simplified representation of reality helps clarify
ideas and allows experimentation.

We do not dwell on the detailed methodology of decision making, which is a
considerable topic in itself. But we do assume some core steps in management
decisions, aspects of which are described throughout the book.

It is worth emphasising that we are describing methods that are widely used in
practice – and we are certainly not doing pointless calculations. Every function of
management – from accounting to human resource management – relies on
quantitative methods. The book is not about mathematics – so it does not
emphasise proofs or derivations, and does not get bogged down in arithmetic
manipulation. It discusses topics that are essential for the survival of any
organisation.

This practical approach is reinforced by the use of software. The book does not
assume that you have access to any particular software, but it assumes that virtually
all calculations are actually done by computer. Spreadsheets are particularly useful
for the routine arithmetic. These need not be especially sophisticated or expensive.
If you do not already have suitable software, it might be worth looking at OpenOffice,
which is now maintained by the Apache Software Foundation and can be downloaded
free from www.openoffice.org.
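As a flavour of the routine arithmetic involved, here is a minimal sketch – Python is used purely for illustration (the book itself assumes a spreadsheet), and the sales figures are invented:

```python
# Hypothetical quarterly sales figures, and the sort of routine arithmetic
# (totals, averages, percentage changes) usually left to a spreadsheet.
sales = [120, 135, 150, 180]

total = sum(sales)                 # 585
average = total / len(sales)       # 146.25

# percentage change from each quarter to the next
changes = [100 * (b - a) / a for a, b in zip(sales, sales[1:])]
```

A spreadsheet does the same work with cell formulas such as SUM and AVERAGE; the point is only that the calculations themselves are simple.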

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the importance and benefits of numbers
o Say why quantitative methods are particularly useful for managers
o Understand the use of models
o Describe a general approach to solving problems
o Use computers for calculations

KEY TERMS FROM THE GLOSSARY


o calculations – arithmetic manipulation of numbers
o feedback – return of information to managers so that they can compare
actual performance with plans
o measure – a numerical description of some attribute
o model – a simplified representation of reality
o operations – all the activities that make an organisation’s products
o qualitative – not using numbers, but based on opinions and judgement
o quantitative – using numbers
o quantitative methods – a broad range of numerical approaches to
solving problems
o spreadsheet – a general program that stores values in the cells of a grid,
and does calculations based on defined relationships between cells
o symbolic model – model where real properties are represented by
symbols, usually algebraic variables

USEFUL WEBSITES
The general Website to accompany this book is at www.pearsoned.co.uk/waters.

You can find details of software from suppliers’ sites, such as www.microsoft.com.
There is a huge amount of information on the Web, and it is best to start with a search
engine, such as those available at www.altavista.com, www.baidu.com, www.bing.com,
www.excite.com, www.google.com, www.lycos.com, www.webcrawler.com and
www.yahoo.com.

Many Websites give tutorials on spreadsheets (often free) – particularly Microsoft Excel.
Some sources are manufacturers (primarily Microsoft at www.office.microsoft.com),
universities (such as Iowa State University at www.extension.iastate.edu/pages/excel),
companies (such as Baycon Group Inc at www.baycongroup.com) and individuals
(such as Brad James at www.usd.edu/trio/tut/excel).

CHAPTER 2 – CALCULATIONS AND EQUATIONS

REVIEW OF THE CHAPTER


Business and management students come from a wide range of backgrounds and
interests. A single course might contain some students with backgrounds in science or
engineering, and others with backgrounds in arts or humanities. A textbook –
particularly in quantitative methods – cannot assume much common knowledge. This
book starts with the basics. It assumes that you have no previous knowledge of
management or quantitative methods. Then it works from initial principles and develops
ideas in a logical sequence.

This chapter reviews some basic mathematics that is used in the rest of the book. It
gives a summary of some core ideas – but it cannot attempt a comprehensive
description of basic arithmetic (which is covered in many other texts). You might find
this review unnecessary, as you already know all the material – in which case you can
skip over this chapter very quickly. Or you might use parts of the chapter for revision.
Or you might find a lot of new material and have to spend more time on the chapter.
Whatever your starting point, it is essential that you get to grips with this basic material,
as it is used in the rest of the book. A little effort here may save a lot of worry
later on.

This chapter introduces the idea of working with numbers for arithmetic and analyses.
Then more general views are developed with algebraic models. These models can
provide working equations, which are solved to find specific solutions. And there are
specific procedures to solve problems that involve powers, roots, simultaneous
equations and other factors.
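The sorts of calculation this chapter reviews can be sketched directly – Python is used here purely for illustration (the book itself assumes the arithmetic is done by spreadsheet), and the equations are invented examples:

```python
import math

# Solving a simple equation: 3x + 12 = 27, so x = (27 - 12) / 3
x = (27 - 12) / 3                  # x = 5.0

# Solving simultaneous equations by elimination:
#   2a + 3b = 12
#   4a -  b = 10
# Multiply the second equation by 3 and add it to the first: 14a = 42
a = (12 + 3 * 10) / (2 + 3 * 4)    # a = 3.0
b = (12 - 2 * a) / 3               # b = 2.0

# Powers, roots and common logarithms
power = 2 ** 10                    # 2 raised to the power 10
root = math.sqrt(144)              # the number that, multiplied by itself, gives 144
log10 = math.log10(1000)           # 10^3 = 1000, so the common logarithm is 3
```

The elimination step mirrors the algebraic procedure the chapter describes: combine the equations so that one variable cancels, solve for the other, then substitute back.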

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Understand the underlying operations of arithmetic
o Work with integers, fractions, decimals and percentages
o Round numbers to decimal places and significant figures
o Understand the principles of algebra
o Solve an equation to find a previously unknown value
o Appreciate the use of inequalities
o Understand the principles of simultaneous equations
o Use algebraic methods to solve simultaneous equations
o Work with powers and roots
o Describe numbers in scientific notation
o Use logarithms.

KEY TERMS FROM THE GLOSSARY


algebra – use of symbols to represent variables and describe relationships between them
arithmetic – any kind of calculation with numbers
base – the value of b when a number is represented in the logarithmic format n = b^p


common fraction or fraction – part of a whole expressed as the ratio of a numerator over
a denominator, such as 1/5
common logarithms – logarithms to the base 10
constant – a number or quantity that always has the same, fixed value, such as π, e
or 2
decimal fraction – part of a whole described by a number following a decimal point, such
as 0.5
decimal places – number of digits following a decimal point
denominator – bottom line of a common fraction
e, or exponential constant – a constant calculated from (1 + 1/n)^n, where n is an
indefinitely large number; it is approximately 2.7182818
equation – algebraic formula that shows the relationship between variables, saying that
the value of one expression equals the value of a second expression
independent equations – equations that are not related, and are not different versions of
the same equation
inequality – relationship that is less precise than an equation, typically with a form like a ≤
b
integer – whole number without any fractional parts
logarithm – the value of p when a number is represented in the logarithmic format
n = b^p
natural logarithms – logarithms to the base e
negative number – number below zero
numerator – top line of a common fraction
percentage – fraction expressed as a part of a hundred
positive number – number above zero
power – the value of b when a number is represented as a^b (i.e. the number of times a
is multiplied by itself)
round – to state a number to a specified number of decimal places or significant
figures
scientific notation – represents a number in the form a × 10^b
significant figures – the meaningful digits of a number, counted from the first non-zero
digit on the left
solving an equation – using the known constants and variables in an equation to
find the value of a previously unknown constant or variable
square root – the square root of n, √n, is the number that is multiplied by itself to give n
variable – a quantity that can take different values, such as x, a or P
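Two of these definitions – the exponential constant e and the logarithmic format n = b^p – can be checked with a short sketch (Python used purely for illustration):

```python
import math

# The glossary defines e as (1 + 1/n)^n for an indefinitely large n
n = 1_000_000
e_approx = (1 + 1 / n) ** n        # close to math.e = 2.71828...

# In the format n = b^p, p is the logarithm of n to base b
n_value, base = 1000, 10
p = math.log(n_value, base)        # a common logarithm, since the base is 10
recovered = base ** p              # raising the base back to the power p recovers n
```

Making n larger pushes e_approx closer and closer to the true value of e.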

USEFUL WEBSITES
The general Website to accompany this book is at www.booksites.net/waters.

As always, you should search the Web for sites that may be of help. Many of these are
available, often providing tutorials and advice about specific topics. Several Websites
describe mathematical material, including technical tutoring (which is not a tutor offering
services) at www.hyper-ad.com/tutoring/, Matthew Pinkney’s site at
www.mathsrevision.net, www.mathcentre.ac.uk, www.math.about.com,
www.webmath.com and www.sosmath.com. It is usually best to search through a range
of sites until you find the material you want.

Many sites give financial data, with the following giving useful starting points.
www.bankofengland.co.uk – financial information from the Bank of England
www.financialtimes.com – information from the Financial Times
www.money.cnn.com – financial information from CNN
www.reuters.com – information from Reuters
www.uk.finance.yahoo.com – Yahoo’s specialised financial information
www.fool.co.uk – the Motley Fool Website, which has a strange name but gives lots of
useful financial information.

CHAPTER 3 – DRAWING GRAPHS

REVIEW OF THE CHAPTER


The last chapter reviewed the basic mathematics that is used in the rest of the
book, focussing on calculations and equations. This chapter looks specifically at
graphs. These are widely used as an efficient way of presenting information in a
pictorial format – so it is important that managers can use them and understand
exactly what they mean.

Business graphs are almost invariably drawn on rectangular – or Cartesian –
co-ordinates. Then the basic figures are most commonly straight lines. The chapter
reviews the principles of these, and shows how they can be extended to more
complicated forms such as quadratic and polynomial equations. Drawing
several lines on the same axes gives a way of solving simultaneous equations.

As with the numerical tools described in the last chapter, you may already be
familiar with the material in this review. Then you can skip over it quickly, or use
parts of the chapter for revision. On the other hand, you may find a lot of new
material and have to spend more time on the chapter. Again, it is essential that
you get to grips with this basic material as it is used throughout the rest of the
book.
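The calculations behind these graphs can be sketched directly – a minimal Python illustration with invented coefficients, showing the roots of a quadratic and the intersection of two straight lines:

```python
import math

# Roots of a quadratic y = ax^2 + bx + c: where its graph crosses the x-axis
a, b, c = 1, -5, 6                 # y = x^2 - 5x + 6 = (x - 2)(x - 3)
disc = b * b - 4 * a * c
root1 = (-b - math.sqrt(disc)) / (2 * a)   # 2.0
root2 = (-b + math.sqrt(disc)) / (2 * a)   # 3.0

# Two straight lines y = m1*x + c1 and y = m2*x + c2 cross where their graphs
# intersect -- the graphical way of solving a pair of simultaneous equations
m1, c1 = 2, 1                      # y = 2x + 1
m2, c2 = -1, 7                     # y = -x + 7
x_cross = (c2 - c1) / (m1 - m2)    # 2.0
y_cross = m1 * x_cross + c1        # 5.0
```

Drawing the curves gives the same answers by eye; the arithmetic here simply confirms what the graphs show.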

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the benefits of graphs
o Use Cartesian co-ordinates to draw graphs
o Draw straight line graphs and interpret the results
o Draw graphs of quadratic equations and calculate the roots
o Draw graphs of more complicated curves, including polynomials and
exponential curves
o Use graphs to solve simultaneous equations

KEY TERMS FROM THE GLOSSARY


axes – rectangular scales for drawing graphs
co-ordinates – values of x and y that define a point on Cartesian axes
dependent variable – a variable whose value is set by the value of the
independent variable (usually the y value on a graph)
gradient – a measure of how steeply a graph is rising
graph – a pictorial view of the relationship between (usually two) variables,
often drawn on Cartesian co-ordinates
independent variable – a variable that can take any value, and sets the value
of a dependent variable (usually the x value on a graph)
intercept – the point where the graph of a line crosses the y-axis
line graph – graph that shows the relationship between two variables, usually
on Cartesian axes
linear relationship – a relationship between two variables of the form y = ax +
b, giving a straight line graph

non-linear relationship – any relationship between variables that is not linear


origin – the point where x and y Cartesian axes cross
polynomial – equation containing a variable raised to some power
positive quadrant – top right-hand quarter of a graph, where both x and y are
positive
quadratic equation – equation with the general form y = ax^2 + bx + c
roots of a quadratic equation – points where the curve crosses the x-axis

USEFUL WEBSITES
Software suppliers have useful Websites – such as www.office.microsoft.com,
www.serif.com, www.lotus.com and www.smartdraw.com – and these often
include tutorials and guidance on drawing graphs.

You can start looking for information about finance and share prices to present in
graphs at stock exchange Websites, such as www.londonstockexchange.com,
www.deutsche-boerse.com, www.nasdaq.com and www.nyse.com. There are
also information services, such as www.bbc.co.uk, www.bloomberg.com,
www.uk.finance.yahoo.com, www.financialtimes.com, www.fool.co.uk,
www.money.cnn.com and www.reuters.com.

PART TWO – COLLECTING AND SUMMARISING DATA

The book is divided into five parts, each of which covers a different aspect of
quantitative methods. The first part reviewed the background and context for
quantitative methods and laid the foundations for the rest of the book. Later
parts look at different quantitative analyses. An obvious point is that these
analyses rely on the availability of accurate numerical data. There is no
point in having convincing analyses if they use the wrong data – or if the
correct data is not available, or is collected in a way that introduces errors.
So the second part of the book shows how to collect and summarise
numerical data.

There are four chapters in the second part. Chapter 4 shows how to collect the
data that managers need for their decisions. The raw data often has too much
detail, so we have to summarise it and present it in ways that highlight its
important features. Chapter 5 shows how to do this with different types of
diagrams. Chapter 6 continues this theme by looking at numerical descriptions
of data. Chapter 7 describes index numbers, which monitor changing values
over time.

The message underlying this part of the book is that managers need accurate
information for their decisions. This has to be carefully collected – and if they
have poor information, they are unlikely to make good decisions.

CHAPTER 4 – COLLECTING DATA

REVIEW OF THE CHAPTER


The first chapter discussed the importance of quantitative analyses – but
these rely on the availability of relevant and accurate numerical information.
Managers can design the best and most accurate models and analyses, but
these are useless unless they have the necessary data to work them. To put it
simply, managers must have reliable data to make their decisions. There are
several implications in this statement.
o Firstly, it assumes that managers make better decisions when they have
relevant information than they would without it. This seems obvious – as
you cannot expect good decisions when you do not have relevant
information. And you can find many examples of managers making poor
decisions when they do not have enough (good) information.
o Secondly, it suggests that data collection is an essential function for
business, providing the raw, unprocessed data. This raw data can then be
turned into useful information. Later chapters discuss the use of data, but
here the book starts by describing its basic collection.
o Thirdly, it assumes that there is some way of determining when ‘enough’
data has been collected. This idea of measuring the amount of data
collected, and then balancing the benefit against the cost, is difficult. The
problems, of course, are providing a reasonable measure for the amount
of data collected, and then assigning realistic costs and benefits.

Data can be classified in several ways – quantitative/qualitative,
nominal/ordinal/cardinal, etc. Each of these is best collected in a different
way. So the chapter starts by giving some classifications of data, and then
showing how these can be collected.

Sometimes businesses collect all relevant information – and this trend is
growing with increasingly sophisticated information processing systems.
Often data is collected using samples. There are different ways of organising
these, usually involving some element of randomness. Here we look at the
ways of using random samples, while Chapter 14 returns to this theme by
looking in more detail at the size of sample needed.

Most samples collect data from people, and this is usually organised through
questionnaires. You are almost certainly familiar with surveys, so the chapter
gives some basic rules for running them.
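The main sampling schemes can be sketched as follows – a minimal Python illustration with an invented population of 100 numbered customers:

```python
import random

# A sketch of three sampling schemes from the chapter, applied to a
# hypothetical 'population' of 100 numbered customers.
population = list(range(1, 101))
random.seed(1)                     # fixed seed so the sketch is repeatable

# Random sample: every member has the same chance of being selected
random_sample = random.sample(population, 10)

# Systematic sample: data collected at regular intervals (every 10th member)
systematic_sample = population[4::10]      # members 5, 15, 25, ..., 95

# Stratified sample: a sample from each distinct group in the population --
# here two hypothetical strata of 60 and 40 customers, sampled in proportion
stratum_a, stratum_b = population[:60], population[60:]
stratified_sample = random.sample(stratum_a, 6) + random.sample(stratum_b, 4)
```

Notice that the stratified sample keeps the 60:40 balance of the strata, which a purely random sample would only achieve on average.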

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the importance of data collection
o Discuss the amount of data to be collected
o Classify data in different ways
o Identify sources of data
o Understand the concept of populations and samples
o Discuss the reasons for using samples
o Describe and use different types of sample
o Consider different ways of collecting data from samples
o Design questionnaires

KEY TERMS FROM THE GLOSSARY


bias – a systematic error in a sample
cardinal – data that can be measured
census – a sample of the entire population
cluster sample – result of choosing a sample in clusters rather than individually
continuous data – data that can take any value (rather than just discrete
values)
data – raw facts that are processed to give information
data collection – the gathering of facts that are needed for decisions
discrete data – data that is limited to integer values
information – data that has been processed into a useful form
marginal benefit – benefit from the last unit made, collected, etc.
marginal cost – cost of one extra unit made, collected, etc
multi-stage sample – sample that successively breaks a population into
smaller parts, confining it to a small geographical area
nominal data – data for which there is no convincing quantitative measure
ordinal data – data that cannot be precisely measured, but which can be
ordered or ranked
population – every source of data for a particular application
primary data – new data that is collected for a particular purpose

questionnaire – set of questions used to collect data


quota sample – sample structured in such a way that it has the same
characteristics as the population
random numbers – a string of digits that follow no patterns
random sample – sample in which every member of the population has the
same chance of being selected
sample – members of the population chosen as sources of data
sampling frame – a list of every member of a population
secondary data – data that already exists and can be used for a problem
stratified sample – takes a sample from each distinct group in a population
systematic sample – sample in which data is collected at regular intervals

USEFUL WEBSITES
Governments publish a huge amount of data on their Websites. In the UK you
might start at the following sites – or equivalent sites for other countries.
www.ons.gov.uk – homepage of the UK Office for National Statistics; you can get
the same information from www.statistics.gov.uk. Government sites that give
public information and links to other sites include www.open.gov.uk,
www.yougov.com and www.direct.gov.uk.

You can find international information from the United Nations (whose
homepage is www.un.org) and its subsidiaries, such as its statistics division at
www.un.org/depts/unsd. Other international bodies are the Economic
Commission for Europe at www.unece.org, the World Bank (www.worldbank.org),
the International Monetary Fund (www.imf.org) and the European Union
(www.europa.eu.int and www.eurostat.ec.europa.eu).

For specific information, you might try market research companies, such as
www.gallup.com, www.mori.com, www.nop.com, www.mrs.org.uk, or many
others. Companies that present information include newspapers such as
www.financialtimes.com, www.wsj.com (Wall Street Journal),
www.economist.com, www.fortune.com, etc – and television companies such as
www.bbc.co.uk, www.cnn.com, etc.

CHAPTER 5 – DIAGRAMS FOR PRESENTING DATA

REVIEW OF THE CHAPTER


Chapter 4 described the difference between data and information – where data
are the raw numbers that are processed to give useful information. Chapters 5
and 6 describe different approaches to processing data and presenting the
results.

The basic problem is that raw data swamps us with detail, while we are more
interested in the underlying trends and patterns. To identify these, the data has
to be summarised, reduced and presented to give useful information.

This chapter describes different types of diagrams for summarising data. These
are widely used, and you are probably familiar with the principles. But it is
important to recognise that being familiar with diagrams does not necessarily
give an understanding. The diagrams you can see in advertisements and
newspapers are often poorly drawn, and can be misleading. Sometimes this is
done deliberately, but often the presenter simply does not know how to do it
properly. The aim of this chapter is to encourage good practice for drawing
diagrams, and this means giving clear, fair and honest presentations of data.

In practice, there is a huge variety of possible formats for presenting data, and
we can only give a review of the most common. Then the main foci of the
chapter are to:
• review the purpose of data reduction, and show how this can be viewed as
processing data into information
• discuss tables of data, as probably the most widely used format for
presenting data
• describe a variety of other diagrams for presenting data and say when these
are most appropriate
• recognise the limitations of diagrammatic representations
• encourage a clear and honest representation of data.

This last point is particularly important. Diagrams give a powerful format that is
very good at giving an overall impression of data – but, unfortunately, this
impression can easily be misleading. Chapter 6 describes alternative numerical
descriptions of data, which are more rigorous and give a more precise view –
but they lack the impact of diagrams. The best approach to data presentation is
often to combine the two main formats – showing an overview with a diagram
and then describing the details numerically.
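The data reduction this chapter describes can be sketched as follows – a minimal Python illustration with invented observations, building the frequency table that lies behind a bar chart, histogram or ogive:

```python
# Raw observations (invented for illustration)
raw_data = [3, 7, 4, 4, 9, 2, 7, 4, 6, 7, 3, 8, 4, 5, 7, 6]

# Group the observations into classes and count the frequency in each
classes = [(1, 3), (4, 6), (7, 9)]             # class ranges (inclusive)
frequency_table = {}
for low, high in classes:
    frequency_table[(low, high)] = sum(low <= x <= high for x in raw_data)

# Percentage frequencies, and the cumulative frequencies used for an ogive
total = len(raw_data)
percentages = {c: 100 * f / total for c, f in frequency_table.items()}
cumulative = []
running = 0
for c in classes:
    running += frequency_table[c]
    cumulative.append(running)
```

Sixteen raw numbers reduce to three class frequencies: detail is lost, but the underlying pattern becomes much easier to see.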

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Discuss the aims of data reduction and presentation
o Design tables of numerical data
o Draw frequency distributions of data
o Use graphs to show the relationship between two variables
o Design pie charts

o Draw different kinds of bar chart
o Consider pictograms and other formats
o Draw histograms for continuous data
o Draw ogives and Lorenz curves for cumulative data

KEY TERMS FROM THE GLOSSARY


bar charts – diagrams that represent the frequency of observations in a class
by the length of a bar
class – range or entry in a frequency distribution
cumulative frequency distribution – a diagram showing the sum of
frequencies in lower classes
cumulative percentage frequency distribution – a diagram showing the sum
of percentage frequencies in lower classes
data presentation – format for showing the characteristics of data and
emphasising the underlying patterns
data reduction – reducing the amount of detail in data to emphasise the
underlying patterns
frequency distribution – diagram showing the number of observations in each
class
frequency table – table showing the number of observations in each class
histograms – frequency distributions for continuous data
ogive – graph of the cumulative frequency against class for continuous data
percentage frequency distribution – diagram showing the percentage of
observations in each class
pictogram – bar chart where the plain bar is replaced by some kind of picture
pie chart – diagram that represents the frequency of observations in a class by
the area of a sector of a circle
scatter diagram – unconnected graph of a set of points (x, y)

USEFUL WEBSITES
A surprising number of Websites give tutorials and advice about drawing
diagrams, including software suppliers and training companies. Software
suppliers have useful Websites – such as www.office.microsoft.com,
www.serif.com, www.lotus.com and www.smartdraw.com – and these often
include advice and guidance on drawing graphs. Other useful sites include
www.lacher.com and www.fodoweb.com/erfora.

CHAPTER 6 – USING NUMBERS TO DESCRIBE DATA

REVIEW OF THE CHAPTER


This is the second chapter that discusses the transformation of data into
useful information – focussing on the summary of raw data and its
presentation in useful formats. Chapter 5 described some diagrams for data
reduction, which can have considerable impact and are good at giving overall
impressions. However, they are not so good at giving rigorous and precise
summaries. This chapter discusses numerical descriptions, which have less
impact than diagrams, but give specific measures.

There are several ways of measuring data, but the two most useful describe:
• location or mean – generally average values that give a typical value or
show where the centre of the data is
• spread or deviation – measures of how spread out the data are around
the centre

You are almost certainly familiar with the idea of an average. The arithmetic
mean is often used to give some idea of a typical value – but in practice it can
be misleading. Unfortunately, people are often happy to use the arithmetic
mean without recognising its limitations, and they do not realise that it can
give a mistaken view. Although less widely used (and arguably with less clear
meanings) other measures for location can be more reliable, particularly the
median. Politicians are often mocked for making claims like ‘Half of people
live on incomes below the national average’; but this is snappier than the
alternative, ‘Half of people live on incomes below the national median, while
considerably more than half live on incomes below the national mean’.

In the same way, the range is the easiest measure of spread, but it can be
affected by odd outlying values. The mean error and mean absolute deviation
are more complicated, but they also have clear meanings. So you probably
wonder why the more obscure and difficult mean squared error – and
particularly the variance and standard deviation – are the most widely used
measures of spread. The answer is that they give more reliable views and are
used in other analyses.
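These measures can be sketched as follows – a minimal Python illustration with an invented data set containing one outlying value, which shows why the median can be more reliable than the mean:

```python
import statistics

# A small hypothetical data set with one high outlying value
data = [2, 3, 3, 4, 5, 5, 5, 6, 7, 40]

mean = statistics.mean(data)               # pulled up by the outlier
median = statistics.median(data)           # the middle value -- more robust here
mode = statistics.mode(data)               # the most frequent value
data_range = max(data) - min(data)         # simplest spread measure, outlier-sensitive
mad = sum(abs(x - mean) for x in data) / len(data)   # mean absolute deviation
variance = statistics.pvariance(data)      # mean squared deviation
std_dev = statistics.pstdev(data)          # square root of the variance
```

Here the mean (8) sits above nine of the ten observations, while the median (5) gives a much more typical value – exactly the distinction the chapter draws.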

Other measures of data are mentioned, but these are far less widely used by
managers.

You probably find it easiest to describe data using the standard functions that
come with spreadsheets (such as the descriptive statistics option in Microsoft
Excel’s data analysis tool). But spreadsheets are not designed specifically to
deal with statistical analyses, and they can be rather complicated, give limited
analyses, use awkward formats, make a number of assumptions, and so on.
Specialised statistics packages can be better at describing data. There are
many of these available, but they have the disadvantage that you have to
learn how to use them and they often appear daunting. Sometimes the output
from a statistical package seems more obscure than the raw data entered.

You may find it worthwhile to have a look at a range of packages and see
which one you prefer.

The chapter – along with the last – seems to emphasise the way that you can
present your own data; but really it aims at giving a wider appreciation of the
broad subject of data presentation. In other words, it also allows you to look
at someone else’s data, interpret it properly, understand the main points, and
appreciate any limitations of the analyses. Data presentation allows you to
pass on your own results properly to other people, and to understand results
that they pass to you.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the need for numerical measures of data
o Understand measures of location
o Find the arithmetic mean, median and mode of data
o Understand measures of data spread
o Find the range and quartile deviation of data
o Calculate mean absolute deviations, variances and standard
deviations
o Use coefficients of variation and skewness.

KEY TERMS FROM THE GLOSSARY


arithmetic mean – the ‘average’ of a set of numbers
average – typical value for a set of data, often the arithmetic mean
coefficient of skewness – a measure of the symmetry or skewness of a set of
data
coefficient of variation – the ratio of standard deviation over mean
deviation – distance an observation is away from the mean
grouped data – raw data already divided into classes
interquartile range – distance between the first and third quartiles
mean – the ‘average’ of a set of numbers
mean absolute deviation – average distance of observations from the mean
mean squared deviation or variance – average of the squared distance from
the mean
measure of location – showing the ‘centre’ or typical value for a set of data
measure of spread – showing how widely data is dispersed about its centre
median – the middle value of a set of numbers
mode – the most frequent value in a set of numbers
quartile deviation – half the interquartile range
quartiles – points that are a quarter of the way through data when it is
sorted by size
range – the difference between the largest and smallest values in a set of data
semi-interquartile range – half the interquartile range
standard deviation – measures data spread, equal to the square root of the
variance
variance – a measure of the spread of data using the mean squared deviation
weighted mean – a mean that uses a different weight for each observation

USEFUL WEBSITES
You can find help with data description in a surprising number of Websites, such
as www.mathstore.ac.uk, Alan Dix’s www.meandeviation.com, the Government
of Canada’s www.statcan.ca/english/edu, www.webmath.com,
www.math.about.com, www.stat-help.com, www.conceptstew.co.uk, and
Matthew Pinkney’s site at www.mathrevision.net.

This material is also covered in on-line textbooks, such as
www.statsoft.com/textbook/stathome.html, David Lane’s
www.davidmlane.com/hyperstat, David Garson’s
www2.chass.ncsu.edu/garson/pa765/statnote.htm and Wikibooks at
www.wikibooks.org/wiki/statistics.

You can get lists of other sources from, for example, www.statpages.org and
www.math.yorku.ca.

CHAPTER 7 – DESCRIBING CHANGES WITH INDEX NUMBERS

REVIEW OF THE CHAPTER


Chapter 5 of the book described ways of summarising data in diagrams, then
Chapter 6 discussed more precise numerical measures. This chapter continues
the general theme of data description by considering indices. These are
numerical measures that monitor changes over time.

When you collect data, you can summarise it to get a snapshot of the situation
at one particular point. But most values change over time – in the way that the
case study at the end of Chapter 6 described changes over a year in the
number of customers seen at a consumer advice office. A convenient way of
describing such changes is with index numbers, which show the ratio of two
values recorded at different times. For convenience, we normally multiply this
ratio by 100. If we sold 10 units last year, and this year we sold 15 units, the
index of sales is 15 / 10 × 100 = 150.
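This calculation is trivial to automate. A minimal sketch (in Python, using the sales figures from the text plus one invented middle year):

```python
def index_number(current, base):
    """Index for a value relative to its base-period value, scaled by 100."""
    return current / base * 100

# 10 units sold in the base year, then 12 and 15 in later years
sales = [10, 12, 15]
indices = [index_number(s, sales[0]) for s in sales]
print(indices)   # [100.0, 120.0, 150.0]
```

The base period always gets an index of 100; changing the base just means dividing the series by a different reference value.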

Index numbers usually monitor changes in a single variable over time, in the
way that a sales index records changing sales, or an unemployment index
monitors the number of people looking for work. These give a simple and
effective way of following patterns. But often we want to compare different
variables, or take a more complex picture, in the way that FTSE indices monitor
the changing value of stocks and shares, or consumer price surveys monitor
average prices. These are more complicated and difficult to interpret, especially
as there are several ways of calculating compound indices. The chapter
describes the most commonly used indices. These can give varying views,
illustrated by the announcement in 2010 by the UK government that it would link
various benefits to the consumer price index (CPI) rather than the higher retail
price index (RPI) and save several billion pounds a year.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


• Understand the purpose of index numbers
• Calculate indices for changes in the value of a variable
• Change the base of an index
• Use simple aggregate and mean price relative indices
• Calculate aggregate indices using base-weighting and current-weighting
• Appreciate the use of the retail price index

KEY TERMS FROM THE GLOSSARY


aggregate index – monitors the way that several variables change over time
base period – the fixed point of reference for an index
base-period weighted or base-weighted index – aggregate index which
assumes that quantities purchased do not change from the base period

base value – value of a variable in the base period
current-period weighted index or current-weighted index – aggregate index
which assumes that the current amounts bought were also bought in the base
period
index or index number – a number that compares the value of a variable at
any point in time with its value in a base period
Laspeyres index – base-weighted index
mean price relative index – composite index found from the mean value of
indices for separate items
Paasche index – current-weighted index
percentage point change – change in an index between two periods
price index – index for monitoring the price of an item
simple aggregate index or simple composite index – composite index that
adds all prices (say) together and calculates an index based on the total price
weighted index – aggregate price index (say) that takes into account both
prices and the importance of items
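The difference between the Laspeyres and Paasche forms is easier to see in a worked example. A sketch, with invented prices and quantities for two items:

```python
# invented figures: (base price, base quantity, current price, current quantity)
items = [
    (2.0, 10, 2.5, 8),    # item A
    (5.0, 4, 6.0, 5),     # item B
]

# Laspeyres (base-weighted): current prices applied to base-period quantities
laspeyres = (sum(p1 * q0 for p0, q0, p1, q1 in items)
             / sum(p0 * q0 for p0, q0, p1, q1 in items) * 100)

# Paasche (current-weighted): assumes current quantities were also bought
# in the base period
paasche = (sum(p1 * q1 for p0, q0, p1, q1 in items)
           / sum(p0 * q1 for p0, q0, p1, q1 in items) * 100)

print(round(laspeyres, 2), round(paasche, 2))
```

Here purchases shifted towards the item with the smaller price rise, so the current-weighted Paasche index (about 121.95) comes out a little below the base-weighted Laspeyres index (122.5) – the kind of gap behind the CPI/RPI difference mentioned above.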

USEFUL WEBSITES
Probably the best place to see the use of indices is in government
publications. Some sources for these are given in chapter 4.

PART THREE – SOLVING MANAGEMENT PROBLEMS

The book is divided into five parts, each of which covers a different aspect of
quantitative methods. The first part looked at the underlying concepts of
quantitative methods, showing why and how managers use quantitative
methods. This set the broad context for the rest of the book. The second part
showed how to collect, summarise and present data. Together, these parts
have laid the foundations and given the basic tools for tackling management
problems.

This third part of the book shows how to use quantitative methods for solving
some common – and even universal – types of management problem. The
problems tackled here are deterministic, which means we are dealing with
conditions of ‘certainty’. In other words, the variables take fixed and identifiable
values for the period we are looking at. The term ‘certainty’ might be a bit
misleading, because we do not know exactly what will happen – only that things
will take fixed values. The alternative, which we look at in following parts,
assumes ‘uncertainty’ where probabilities are attached to values. For instance,
with certainty we might say that sales will be 1,000 units; with uncertainty we
might only say that there is a probability of 0.4 that sales are more than 1,000.

There are four chapters in this part. Chapter 8 describes some calculations for
finance and performance. Chapter 9 uses regression to describe the
relationship between variables, and Chapter 10 extends these ideas in
forecasting. Chapter 11 introduces the ideas of linear programming (which has
nothing to do with computer programming).

CHAPTER 8 – FINANCE AND PERFORMANCE

REVIEW OF THE CHAPTER


The first two parts of the book set the context for the use of quantitative methods
by managers – describing possible calculations in the first part of the book and
data collection in the second part. Now the rest of the book looks at some
specific types of quantitative models. The most common areas for managers to
meet quantitative ideas are probably finance (including accounts) and
measuring performance. This chapter illustrates a range of quantitative models
in these areas. So it has three main aims:
1. To describe some models that are widely used in business
2. To reinforce the idea that quantitative models really are used by
managers
3. To form links with other courses

Every business is concerned with its finances. As the finances are invariably
expressed in quantitative terms, all managers must have some understanding of
quantitative ideas. There is a huge variety of financial models, ranging from
transaction recording in accounts to comprehensive analyses of the international
futures and derivatives markets. This chapter takes some of the most common,
concentrating on:
• Measures of performance
• Break-even points
• Value of money over time
• Discounting monetary value
• Models for debts and loans

You will do other courses in finance and/or accounting, so this material either
introduces or reinforces the ideas you meet there. It is worth mentioning that
spreadsheets were originally developed for repetitive financial calculations, and
this is still probably their most common use.
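As a small example of such a model – with invented figures, sketched in Python rather than a spreadsheet – the break-even point follows from fixed costs, unit price and unit variable cost:

```python
fixed_cost = 10000.0      # costs incurred whatever the output
price = 25.0              # revenue per unit sold
variable_cost = 15.0      # extra cost for each unit made

# break-even: the volume at which revenue first covers total cost
break_even = fixed_cost / (price - variable_cost)

# profit at any other volume
units = 1500
profit = units * (price - variable_cost) - fixed_cost
print(break_even, profit)   # 1000.0 5000.0
```

Each unit contributes 10 towards the fixed costs, so the organisation breaks even at 1,000 units and makes a profit beyond that.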

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


• Appreciate the importance of measuring performance
• Calculate a number of performance ratios
• Find break-even points
• Understand the reasons for economies of scale
• Do calculations for compound interest
• Discount amounts of money to their present value
• Calculate net present values and internal rates of return
• Depreciate the value of assets
• Calculate the payments for sinking funds, mortgages and annuities

KEY TERMS FROM THE GLOSSARY


annual equivalent rate or annual percentage rate – true interest rate for
borrowing or lending money
annuity – amount invested to give a fixed income over some period
break-even point – sales volume at which an organisation covers its costs and
begins to make a profit
capacity – the maximum output that is possible in a specified time
compound interest – interest paid on both the principal and the interest
previously earned
depreciation – amount by which an organisation reduces the value of its
assets
discount factor – value of (1 + i)^-n when discounting to present value
discount rate – value of i when discounting to present value
discounting to present value – calculating the present value of an amount
available at some point in the future
diseconomies of scale – effect where the average cost per unit rises as the
number of units produced increases
economies of scale – effect where the average cost per unit declines as the
number of units produced increases
interest – amount paid to lenders as reward for using their money
internal rate of return – discount rate that gives a net present value of zero
marginal cost – the cost of one extra unit made, collected, etc
marginal revenue – the revenue generated by selling one more unit of a
product
mortgage – amount borrowed to buy a house, or other capital facilities
net present value – the result of subtracting the present value of all costs from
the present value of all revenues
partial productivity – the output achieved for each unit of a specified resource
performance ratio – some measure of actual performance divided by a
standard reference value
present value – the discounted value of a future amount
principal – amount originally borrowed for a loan
productivity – amount of output for each unit of resource used
profit – residue when all costs are subtracted from all revenues
simple interest – interest paid only on the initial deposit, but not on interest
already earned
sinking fund – a fund that receives regular payments so that a specified sum is
available at a specified point in the future
utilisation – proportion of available capacity that is actually used
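Several of these terms fit together in one short calculation. A sketch, assuming a 10% discount rate and an invented project:

```python
rate = 0.10                            # discount rate i

# compound interest: 1000 invested for 3 years
future_value = 1000 * (1 + rate) ** 3

# discounting to present value uses the discount factor (1 + i)**-n
def present_value(amount, n, i=rate):
    return amount * (1 + i) ** -n

# net present value: discounted revenues minus discounted costs
cash_flows = [-1000, 400, 400, 400]    # outlay now, then three annual returns
npv = sum(present_value(cf, year) for year, cf in enumerate(cash_flows))
print(round(future_value, 2), round(npv, 2))
```

Here the net present value is slightly negative (about -5.26), so at a 10% discount rate the project does not quite pay – in other words its internal rate of return is just below 10%.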

USEFUL WEBSITES
You can find useful explanations and examples of performance in
www.bized.ac.uk, www.businesslink.gov.uk and www.apqc.org (the American
Productivity and Quality Center). You can get related information from
organisations, such as the Operational Research Society (www.orsoc.org.uk),
International Federation of Operational Research Societies (www.ifors.org),
European Operations Management Association (www.euroma-online.org) and
the Association of Operations Management (www.apics.org).

A source of general finance information is www.investopedia.com, and more
specific information comes in www.hsps.sph.sc.edu (the University of South
Carolina’s site), www.businesstown.com, www.theequator (Steve’s financial
modelling tutorial), www.entrepreneur.com and www.dwmbeancounter.com.

Information about financial irregularities comes in any news site, such as
www.news.bbc.co.uk, www.financialtimes.com, www.wsj.com (The Wall Street
Journal), www.economist.com and www.guardian.co.uk.

CHAPTER 9 – REGRESSION AND CURVE FITTING

REVIEW OF THE CHAPTER


We have already seen with graphs in chapter 3 how managers can show a
relationship between two variables. This chapter expands this idea to explore
numerical aspects of such relationships. For instance, it measures the
strength of a relationship between sales and price. Then you can tell if
changes in one variable are associated with changes in a second variable.
Profit changes with sales, output changes with hours worked, borrowing
changes with interest rates, and so on.

Occasionally there is a perfect relationship between variables, and we can
find an equation that describes it exactly. Then substituting values for
independent variables gives an exact value for the dependent variable. More
usually, the relationship is not so clear and there is variability or noise. Then
we want the equation that gives the best fit to the data, and for this we use
regression.

This chapter describes linear regression, which finds the straight line of best fit
through a set of points. Here the ‘best’ line is defined as the one that
minimises the error (actually the mean squared error). Then the coefficients
of determination and correlation show how good this line is. This topic clearly
includes uncertainty and might be placed in later sections of the book, but we
want to introduce this important topic early without getting bogged down in
statistical analyses. Some people use regression to illustrate a wide range of
statistical models, but we have avoided this and focussed on a broad
introduction to the subject.
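The standard least-squares formulas behind this are short enough to show directly. A sketch in Python, with invented observations:

```python
import math

x = [1, 2, 3, 4, 5]        # independent variable
y = [3, 5, 6, 9, 11]       # invented observations with some noise

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# slope b and intercept a of the line that minimises the mean squared error
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
b = sxy / sxx
a = mean_y - b * mean_x

# Pearson's coefficient of correlation, and the coefficient of determination
syy = sum((yi - mean_y) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)
print(round(a, 3), round(b, 3), round(r ** 2, 3))
```

Here the line of best fit is y = 0.8 + 2x, and a coefficient of determination of about 0.98 says the line explains about 98 percent of the variation in the data.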

When the independent variable is time, we have a method of forecasting (a
theme that is explored in the next chapter). This illustrates the point that you
are likely to meet linear regression in many different places. And there are
several extensions to the basic models. The most common of these is
multiple (linear) regression, where the value of a dependent variable depends
on the values of a set of independent variables. More complicated forms of
curve fitting are less widely used in practice.

An important point is that regression describes relationships, but does not
suggest cause and effect. A very common error is to assume that because
two variables are related, changes in one actually cause changes in the other.
You can find many examples where this mistaken reasoning leads to
ridiculous conclusions. More worrying are the examples where the mistaken
reasoning leads to plausible conclusions, which then become accepted as
‘fact’. For instance, a government might suggest that high pay is linked to
inflation – so the way to reduce inflation is to reduce wage rates and
subsequent standards of living.

The calculations for regression are quite tedious, so they are always done by
computer. Spreadsheets do this reasonably well, but specialised software
can be easier to use. One warning is that statistical packages often give a
large amount of analysis, only some of which is needed.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Understand the purpose of regression
o See how the strength of a relationship is related to the amount of
noise
o Measure the errors introduced by noise
o Use linear regression to find the line of best fit through a set of data
o Use this line of best fit for causal forecasting
o Calculate and interpret coefficients of determination and correlation
o Use Spearman's coefficient of rank correlation
o Understand the results of multiple regression
o Use curve fitting for more complex functions

KEY TERMS FROM THE GLOSSARY


autocorrelation – a relationship between the errors in multiple regression
causal forecasting – using a relationship to forecast the value of a dependent
variable that corresponds to a known value of an independent variable
coefficient of correlation or Pearson’s coefficient – a measure of the
strength of a linear relationship
coefficient of determination – proportion of the total sum of squared errors
from the mean that is explained by a regression
curve-fitting – finding the function that best fits a set of data
dependent variable – a variable whose value is set by the value of the
independent variable
extrapolation – causal forecasting with values outside the range used to define
the regression
independent variable – a variable that can take any value, and sets the value
of a dependent variable
line of best fit – line that minimises some measure of the error (usually the
mean squared error) in a set of data
linear regression – process that finds the straight line that best fits a set of data
mean absolute error – average error, typically in a forecast
mean squared error – average value of squared error, typically in a forecast
multicollinearity – a relationship between the independent variables in multiple
regression
multiple (linear) regression – process that finds the line of best fit between a
dependent variable and a set of independent variables
noise – the random errors in observations (often described as errors)
non-linear regression – procedure to find the function that best fits a set of
data
Pearson’s coefficient – a measure of the strength of a linear relationship
(known as the coefficient of correlation)
regression – method of finding the best equation to describe the relationship
between variables
Spearman's coefficient (of rank correlation) – a measure of the
correlation of ranked data
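Spearman’s coefficient, from the glossary above, is easy to calculate when there are no tied ranks. A sketch with invented rankings:

```python
# ranks given to five products by two assessors (no tied ranks)
rank_a = [1, 2, 3, 4, 5]
rank_b = [2, 1, 4, 3, 5]

n = len(rank_a)
d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))

# Spearman's coefficient: 1 - 6 * sum(d^2) / (n * (n^2 - 1))
spearman = 1 - 6 * d_squared / (n * (n ** 2 - 1))
print(spearman)
```

A value of 0.8 suggests the two sets of rankings broadly agree; +1 would be perfect agreement and -1 perfect disagreement.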

USEFUL WEBSITES
As well as the usual mathematical and statistical sites, such as
www.mathstore.ac.uk and www.zweigmedia.com/thirdedsite, you can get
other ideas at the Minitab Corporation site at www.minitab.com/resources,
Kovach Computing Services at www.kovcomp.co.uk, David Lane at
www.davidmlane.com and Middle Tennessee State University at
www.mtsu32.mtsu.edu:11308.

CHAPTER 10 – FORECASTING

REVIEW OF THE CHAPTER


The last chapter showed how linear regression describes the relationship
between two variables. Then setting a value for the independent variable allows
a forecast of the value for a dependent variable. Moreover, a common use of
regression puts time as the independent variable, giving a means of forecasting
values for the future. This chapter continues the theme of forecasting,
introducing some more specialised methods.

The chapter starts by showing the importance of forecasting. When managers
make decisions, they always become effective at some point in the future – so
their decisions should be based on prevailing future conditions. But there is no
way of knowing exactly what will happen in the future, and the best they can do
is forecast likely conditions. Because of variability, these forecasts often contain
errors, but with care these should be small.

There are two distinct approaches to forecasting:


• qualitative or judgemental forecasting
• quantitative forecasting
Judgemental forecasting collects opinions from ‘experts’, so this is generally
easiest and most widely used. However, it can be very unreliable and is often
little more than informed guesswork. Personal insight is particularly suspect (as
you can see when ‘experts’ forecast the winner of a horse race, the price of
shares, or the cost of a project). For this reason, it is best to avoid judgemental
forecasts whenever possible. Quantitative forecasts are more reliable and you
should use them whenever there is enough reliable data.

There are two approaches to quantitative forecasting:


• causal forecasting, as illustrated by regression in the last chapter
(actually, this is rather a misnomer, as it considers relationships rather
than cause-and-effect)
• projective forecasting, which projects underlying patterns

The chapter describes some principles of projective forecasting for time series,
specifically simple averages, moving averages, exponential smoothing, and
models for seasonality and trend. There are many more complex methods
based on these principles, but you should remember that because a method is
more complicated it does not necessarily give better results.
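The two simplest projective methods can be sketched in a few lines, with an invented demand series:

```python
demand = [120, 135, 128, 140, 132, 145]

# moving average over the most recent 3 periods
n = 3
moving_avg = sum(demand[-n:]) / n

# exponential smoothing: new forecast = alpha * latest + (1 - alpha) * old forecast
alpha = 0.2                       # smoothing constant
forecast = demand[0]              # initialise with the first observation
for value in demand[1:]:
    forecast = alpha * value + (1 - alpha) * forecast

print(round(moving_avg, 1), round(forecast, 3))
```

A small smoothing constant such as 0.2 makes the forecast respond slowly to changes; a larger value makes it more sensitive, but also more affected by random noise.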

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the importance of forecasting to every organisation
o List different types of forecasting method
o Discuss the characteristics of judgemental forecasting
o Use a variety of approaches to judgemental forecasting
o Describe the characteristics of projective forecasting
o Understand the importance of time series

o Calculate errors and a tracking signal for forecasts
o Forecast using simple averages, moving averages, and exponential
smoothing
o Forecast time series with seasonality and trend

KEY TERMS FROM THE GLOSSARY


causal methods – quantitative methods of forecasting that analyse the effects
of outside influences and use these to produce forecasts
exponential smoothing – weighting based on the idea that older data is less
relevant and therefore should be given less weight
forecast – some form of prediction of future conditions
judgemental forecasts – forecasts that rely on subjective assessments and
opinions
moving average – an average of the most recent periods of data
projective method – quantitative methods that extend the pattern of past
demand into the future
seasonal index – the amount by which a deseasonalised value is multiplied to
get a seasonal value
sensitivity – speed at which a forecast responds to changing conditions
smoothing constant – parameter used to adjust the sensitivity of exponential
smoothing forecasts
time series – a series of observations taken at regular intervals of time
tracking signal – a measure to monitor the performance of a forecast

USEFUL WEBSITES
Sites with forecasting tutorials include www.math.about.com,
www.forecastpro.com and www.dwmbeancounter.com. Sites with general advice
about forecasting include www.businesslink.gov.uk, www.ibf.org (the Institute of
Business Forecasting) and www.bized.ac.uk.

Sites specifically for oil prices and energy include www.eia.doe.gov (the US
government’s Energy Information Administration) and www.energy.ca.gov.

CHAPTER 11 – LINEAR PROGRAMMING

REVIEW OF THE CHAPTER


Linear programming is a widely used method of solving certain types of
business problem. Specifically, it is used for problems of constrained
optimisation, where there are:
o a set of variables whose optimal values have to be found
o a set of constraints which limit the values that can be taken by
the variables (including non-negativity conditions)
o an objective function, which measures the quality of a solution
and which we want to optimise
o linearity – both the constraints and the objective function are linear
with respect to the problem variables.

Many problems in business are – or can be approximated by – linear
constrained optimisation. There are dozens of standard problems (product
mix, assignment, transportation, knapsack, and so on) and a huge variety of
other applications have been reported. LP has proved a successful approach
to many types of problem, but its scope may be limited by its apparent
technical difficulty. Some managers find it difficult to understand and cast it
aside as a theoretical view.

There are three stages in linear programming – formulating the problem,
finding the optimal solution, and then doing a sensitivity analysis. Formulation
is the most difficult and needs most skills. The other two steps are always
done by computer, as the arithmetic is simple but horribly tedious. Some
spreadsheets have routines that you can use for linear programming, such as
‘Solver’ in Excel. But these can be difficult to use and the results are not
always clear. In general it is easier to use a specialised package – such as
the widely-available Lindo – but you have to balance the effort of learning to
use new software with the benefits.

We illustrate the principles of linear programming with examples that have two
or three variables. These are fairly easy to understand, but realistic problems
become very large and cumbersome. It might be easy to follow the logic of
linear programming with two variables, but real problems with thousands of
variables are – to say the least – daunting. Complex formulations take years
to develop and test, and then need a huge amount of data. Much of this data
consists of approximations and estimates, so the final solution needs careful
interpretation before it is implemented.
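For a two-variable problem the logic is easy to show: the optimal solution always lies at an extreme point of the feasible region, so one (inefficient, but transparent) approach is simply to check every corner. A sketch of an invented product-mix problem:

```python
from itertools import combinations

# maximise 5x + 4y  subject to  6x + 4y <= 24,  x + 2y <= 6,  x >= 0, y >= 0
# each constraint written as a*x + b*y <= c (last two are non-negativity)
constraints = [(6, 4, 24), (1, 2, 6), (-1, 0, 0), (0, -1, 0)]

def objective(x, y):
    return 5 * x + 4 * y

# extreme points lie where two constraint lines cross
candidates = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                      # parallel lines never cross
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
        candidates.append((x, y))     # keep only feasible corners

best = max(candidates, key=lambda p: objective(*p))
print(best, objective(*best))
```

The best corner is (3, 1.5) with a profit of 21, where both resource constraints are binding. Real problems hand all of this – and the sensitivity analysis – to a package such as Solver or Lindo.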

Apart from its complexity and approximations, one problem with LP is its basic
assumption that problems are linear. Extensions have been developed to
overcome this, including integer, zero-one, non-linear and goal programming.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the concept of constrained optimisation
o Describe the stages in solving a linear programme
o Formulate linear programmes and understand the basic assumptions
o Use graphs to solve linear programmes with two variables
o Calculate marginal values for resources
o Calculate the effect of changing an objective function
o Interpret printouts from computer packages

KEY TERMS FROM THE GLOSSARY


constrained optimisation – problems with an aim of optimising some
objective, subject to constraints
decision variables – variables whose value we can choose
extreme point – corner of the feasible region in linear programming
feasible region – area of a graph in which all feasible solutions lie for a linear
programme
formulation – getting a problem in the right form, particularly with linear
programming
linear programming (LP) – a method of solving some problems of constrained
optimisation
non-negativity constraint – constraints that set all variables to be positive,
especially in linear programmes
objective function – function to be optimised, especially in linear programming
sensitivity analysis – seeing what happens when a problem (particularly a
linear programme) is changed slightly
shadow price – marginal value of resources in a linear programme
solution – finding an optimal solution to a problem (particularly a linear
programme)

USEFUL WEBSITES
Linear programming is a popular topic in mathematics and management
Websites. Useful introductions are on www.teachnet.ie/jcleary/index1.html,
www.en.wikipedia.org and www.zweigmedia.com/thirdedsite. The Solver
routine in Excel is provided by Frontline Systems Inc, and their site at
www.frontsys.com describes the latest features.

You can get related information from organisations, such as the Operational
Research Society (www.orsoc.org.uk), International Federation of Operational
Research Societies (www.ifors.org), the Institute for Operations Research and
the Management Sciences (www.informs.org), European Operations Management
Association (www.euroma-online.org) and the Association of Operations
Management (www.apics.org).

PART 4 – INTRODUCING STATISTICS

The book is divided into five parts, each of which covers a different aspect of
quantitative methods in business. The first part described the underlying
concepts of quantitative methods, setting the context for the rest of the book.
The second part showed how to collect and summarise data, and the third part
used this data to solve some common business problems. These problems were
deterministic, which means that they dealt with certainties. This fourth part of
the book introduces the idea of uncertainty, described by probabilities and
statistical methods. The final part shows how to solve a range of problems that
include uncertainty.

With certainty we know exactly what sales, costs, prices, production, interest
rates, and so on will be – and we can do related calculations. In reality, most
problems include some level of uncertainty, as we can never know exactly
what will happen in the future. When the amount of uncertainty is small we
can ignore it, arguing that all models are simplifications of reality, so ‘certainty’
is just one of the assumptions. This allows us to use deterministic models that
are much easier to understand and use. However, when there is more
uncertainty we have to accept it and include probabilities in appropriate
statistical models. These are generally more difficult to work with and properly
understand.

There are four chapters in this fourth part of the book. Chapter 12 introduces
the ideas of probability as a way of measuring uncertainty. This is the core idea
that is developed in the rest of the book. Chapter 13 looks at probability
distributions, which describe some common patterns in uncertain data. Chapter
14 returns to the theme of using samples for collecting data, and chapter 15
introduces the ideas of statistical testing, focussing on hypothesis testing.

CHAPTER 12 – UNCERTAINTY AND PROBABILITIES

REVIEW OF THE CHAPTER


There can be uncertainty in almost every problem in business. We might say
that ‘demand will increase by 5 percent next year’ but this is a best estimate and
we usually mean that ‘demand will probably increase by 5 percent next year’.
This apparently subtle difference has many consequences, all of which are
based on the ideas of uncertainty and probability.

The deterministic models described so far are accurate enough for many
circumstances, but sometimes the amount of uncertainty is too great and we
have to explicitly include it. This means that we have to add elements of
probability, which inevitably make the models more complicated – and the
results less convincing.

This chapter introduces the ideas of probability, which is a measure between 0
and 1 that defines the likelihood of an event. This concept is so common that
we usually take it for granted – but in reality it can be quite hard to imagine.
What exactly does it mean when the results of an opinion poll are accurate to
within 2% nineteen times out of twenty? And what exactly does it mean when
you hear that ‘there is a significant chance that the economy will decline by up to
1 percent next year’?

The chapter discusses the meaning of probabilities for different types of events
(independent, mutually exclusive, conditional, etc) and describes some related
calculations. The underlying ideas may be fairly straightforward, but it is easy to
get confused – especially, say, with Bayes’ theorem.
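
As a quick illustration of where the confusion arises, Bayes' theorem –
P(A|B) = P(B|A) x P(A) / P(B) – can be worked through in a short Python sketch.
The inspection example and all of its probabilities are invented purely for
illustration; they do not come from the book.

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Invented example: 2% of items are defective; a test flags 95% of
# defective items but also (wrongly) flags 10% of good ones.
p_defect = 0.02
p_flag_given_defect = 0.95
p_flag_given_good = 0.10

# Total probability that an item is flagged (defective or good)
p_flag = (p_flag_given_defect * p_defect
          + p_flag_given_good * (1 - p_defect))

# Probability that a flagged item is actually defective
p_defect_given_flag = p_flag_given_defect * p_defect / p_flag
print(round(p_defect_given_flag, 3))
```

Even with a 95 percent detection rate, only about 16 percent of flagged items
are actually defective here – exactly the kind of counter-intuitive result
that makes conditional probabilities hard to imagine.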

The ideas introduced in this chapter are important, as they are used throughout
the remainder of the book. So it is essential that you understand them before
moving on. If you have any difficulties it is worth sorting them out early, perhaps
using other sources of information.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o appreciate the difference between deterministic and stochastic problems
o define probability and appreciate its importance
o calculate probabilities for independent events
o calculate probabilities for mutually exclusive events
o understand the concept of dependent events and conditional probabilities
o use Bayes' Theorem to calculate conditional probabilities
o draw probability trees

KEY TERMS FROM THE GLOSSARY


a-priori – probability calculated by analysing circumstances
Bayes' Theorem – equation for calculating conditional probabilities


conditional probabilities – probabilities for dependent events with the
form P(a/b)
dependent events – events in which the occurrence of one event directly
affects the probability of another
deterministic – describing a situation of certainty
empirical probability – probability found by observation or experiment
independent events – events for which the occurrence of one event does not
affect the probability of a second
mutually exclusive events – events where only one can happen, but not both
probabilistic or stochastic – containing uncertainty that is measured by
probabilities
probability – likelihood or relative frequency of an event
probability tree – diagram showing a series of related probabilities
stochastic or probabilistic – containing uncertainty that is measured by
probabilities
Venn diagram – a diagram that represents probabilities as circles that may or
may not overlap

USEFUL WEBSITES
Several Websites give advice on statistics, and it is best to search available sites
until you find the material you want. You might find some useful ideas in
www.math.about.com, www.stat-help.com, www.conceptstew.co.uk, Alan Dix’s
www.meandeviation.com, Matthew Pinkney’s site at www.mathrevision.net,
www.mathstore.ac.uk, the Government of Canada’s
www.statcan.co/english/edu, and www.webmath.com.

This material is also covered in on-line textbooks, such as


www.statsoft.com/textbook/stathome.html, David Lane’s
www.davidmlane.com/hyperstat, David Garson’s
www2.chass.ncsu.edu/garson/pa765/statnote.htm and Wikipedia at
www.wikibooks.org/wiki/statistics.

You can also get lists of other sources from, for example, www.statpages.org
and www.math.yorku.ca. There are also a number of statistical societies that
have useful Websites, such as www.rss.org.uk (Royal Statistical Society),
www.amstat.org (American Statistical Association), www.statsoc.org.au
(Statistical Society of Australia) and www.ssc.ca (Statistical Society of Canada).


CHAPTER 13 – PROBABILITY DISTRIBUTIONS

REVIEW OF THE CHAPTER


The last chapter reviewed the ideas of probability, and this chapter develops
the theme by describing probability distributions. These are particularly
important aspects of probability, which you can imagine in terms of relative
frequency distributions. They show the relative frequency of different events.

You can develop empirical distributions for a particular problem – perhaps
finding the distribution of people visiting a shop during a particular period.
These empirical distributions are restricted to the specific problem considered.
However, they often follow general patterns. For instance, you might find the
probability distribution for the production of gold from mines in South Africa –
and find that this has a similar shape to the weight of adult men in a particular
town (they are both Normally distributed). Such patterns give standard
distributions whose features are well known, and which you can use for
different types of problem. There are several of these standard patterns, and
the chapter describes the most important:
• binomial distribution for Bernoulli trials
• Poisson distribution for random events
• Normal distribution, which is the most widely used ‘bell-shaped’
distribution

The first two are discrete distributions – which are more directly linked to
frequency distributions – while the Normal is continuous.
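
The standard distributions can be evaluated directly from their formulas. The
Python sketch below shows one probability from each of the three, using only
the standard library; the parameter values are invented for illustration.

```python
import math

# Binomial: P(r successes in n trials), each with success probability p
def binomial(r, n, p):
    return math.comb(n, r) * p**r * (1 - p)**(n - r)

# Poisson: P(r random events) when the mean number of events is mu
def poisson(r, mu):
    return math.exp(-mu) * mu**r / math.factorial(r)

# Normal: P(X <= x) for a continuous variable with mean mu and
# standard deviation sigma
def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

print(round(binomial(2, 5, 0.1), 4))       # 2 defects in 5 trials
print(round(poisson(3, 2.0), 4))           # 3 events when 2 are expected
print(round(normal_cdf(110, 100, 10), 4))  # value up to 1 sd above mean
```

The first two functions return probabilities of exact counts (discrete),
while the Normal gives a cumulative probability (continuous) – matching the
distinction made above.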

The chapter illustrates the use of these distributions by standard calculations,
broadly based on the shape of the distribution and the relative likelihood of
events. Some other distributions – including the beta, student-t, χ² and
negative exponential – appear in later chapters.

SPECIFIC AIMS OF THE CHAPTER ALLOW READERS TO:


o Understand the role of probability distributions
o Draw an empirical probability distribution
o Describe the difficulties of sequencing and scheduling
o Calculate numbers of combinations and permutations
o Know how to use a binomial distribution and calculate probabilities
o Know how to use a Poisson distribution and calculate probabilities
o Work with a Normal distribution and do related calculations
o Ease calculations by approximating one distribution by another

KEY TERMS FROM THE GLOSSARY


binomial distribution – distribution that shows the probabilities of different
numbers of successes in a number of trials
combination – number of ways of selecting r things from n, when the order of
selection does not matter


Normal distribution – the most widely used bell-shaped probability distribution
for continuous data
permutation – number of ways of selecting r things from n, when the order of
selection is important
Poisson distribution – probability distribution largely used for describing
random events
probability distribution – a description of the relative frequency of
observations
sequencing – putting activities into an order for processing

USEFUL WEBSITES
You can look at the Websites identified for the last chapter.


CHAPTER 14 – USING SAMPLES

REVIEW OF THE CHAPTER


Chapter 4 described ways of collecting data, and suggested that most data is
collected from samples. It described random and other sampling methods that
guaranteed a sample with the required characteristics – usually that the sample
has the same features as the broader population. But we avoided the question
of how big these samples should be. This chapter fills the missing step and
describes the effects of sample size.

The essential aim of sampling is to find the characteristics of a population by
collecting data from a representative sample. Larger samples generally give
more reliable results – but collecting and analysing data always costs money.
So sampling has to balance the reliability of information with the cost of
collection and analysis.

Most work with samples uses – not surprisingly – sampling distributions for
some feature, and these describe the features of samples drawn from
populations. So you have to keep a clear track of three elements:
o the population and overall distribution of the feature considered
o a sample and the specific distribution of the feature within it
o a sampling distribution describing features for all such samples.

The most common calculations for sampling lead to point estimates and
confidence intervals – which show our confidence that a population value is
within a specified range. Then you might hear that ‘we are 95 percent confident
that profits will be between $1 and $1.2 million’. Clearly, we are always more
confident that a result will be within a broader confidence interval. This basic
idea can be extended to deal with small samples (with t-distributions), one-sided
tests, proportions, and so on.
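
The confidence interval calculation itself is short. The Python sketch below
uses the Normal distribution with z = 1.96 for a 95 percent interval (so it
assumes a large sample); the sample figures are invented for illustration.

```python
import math

# 95% confidence interval for a population mean from a large sample
n = 64        # sample size
mean = 1.1    # sample mean (say, $ million profit)
s = 0.4       # sample standard deviation

std_error = s / math.sqrt(n)   # standard error of the mean
half_width = 1.96 * std_error  # z = 1.96 for 95% confidence
low, high = mean - half_width, mean + half_width
print(f"95% confident the mean is between {low:.3f} and {high:.3f}")
```

Replacing 1.96 by a larger z value (say 2.58 for 99 percent confidence)
widens the interval – confirming that we are always more confident about a
broader range.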

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Understand how and why to use sampling
o Appreciate the aims of statistical inference
o Use sampling distributions to find point estimates for population means
o Calculate confidence intervals for means and proportions
o Use one-sided distributions
o Use t-distributions for small samples

KEY TERMS FROM THE GLOSSARY


central limit theorem – theorem that describes how sample means are
distributed about the population mean
confidence interval – the interval that we are, for instance, 95% confident that
a value lies within
degrees of freedom – a measure of the number of independent pieces of
information used in probability distributions


interval estimate – estimated range within which the value for a population is
likely to lie
point estimate – single estimate of a population value from a sample
sampling distribution of the mean – distribution of the mean of samples from
the population
standard error – standard deviation of the sampling distribution of the mean
statistical inference – process of collecting data from a random sample of a
population and using it to estimate features of the whole population
t-distribution or student-t distribution – a distribution used instead of the
Normal distribution for small samples

USEFUL WEBSITES
The best starting points are the statistics sites mentioned in chapter 12. Useful
tutorials are in www.socialresearchmethods.net/kb/sampling.htm and
www.wise.cgu.edu/sdmmod.


CHAPTER 15 – TESTING HYPOTHESES

REVIEW OF THE CHAPTER


This chapter continues the theme of statistical analyses – and particularly the
idea of finding information about a population from a sample discussed in the
last chapter. It introduces the ideas of statistical hypothesis testing. For this a
precise statement – or hypothesis – is suggested for a population, and then a
sample is examined to test whether it supports the hypothesis. In practice, with
probabilistic data a hypothesis can never be proved, so the best we can say is
either ‘the hypothesis can be rejected’ or ‘the hypothesis cannot be rejected’. In
normal speech this second phrase seems rather negative, but it is the most
positive statement we can really make. This is summarised in the view that ‘you
cannot prove a negative’. For instance, it would be easy to prove that
abominable snowmen exist (by finding one), but it is impossible to prove that
they do not exist.

A key element here is the significance level, which is the maximum chance of
rejecting a hypothesis when it is in fact true. It follows that a higher
significance level is less rigorous – with a lower significance level needing
stronger evidence to reject the hypothesis. But whatever significance level is
chosen, it will always fall short of ‘proof’.
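
The mechanics can be sketched for the simplest case, a z-test on a population
mean. All of the figures below are invented for illustration, and the Normal
cumulative probability is computed directly rather than read from tables.

```python
import math

# Null hypothesis: the population mean is 100. A sample of 50 gives a
# mean of 103, with a (known) standard deviation of 10.
n, sample_mean, mu0, sigma = 50, 103.0, 100.0, 10.0

# Test statistic: how many standard errors the sample mean is from mu0
z = (sample_mean - mu0) / (sigma / math.sqrt(n))

# Two-tail p-value from the Normal distribution
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05  # chosen significance level
if p_value < alpha:
    print(f"z = {z:.2f}, p = {p_value:.4f}: reject the null hypothesis")
else:
    print(f"z = {z:.2f}, p = {p_value:.4f}: cannot reject the null hypothesis")
```

Note that even when the p-value is below the significance level, the
conclusion is only ‘reject’ or ‘cannot reject’ – never ‘proved’.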

The principal benefit of hypothesis testing is that it defines a rigorous procedure
that you can use in many situations. Arguably, understanding the approach of
hypothesis testing and the rigorous ideas behind it are more important than the
details of the arithmetic.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Understand the purpose of hypothesis testing
o List the steps involved in hypothesis testing
o Understand the errors involved and the use of significance levels
o Test hypotheses about population means
o Use one and two-tail tests
o Extend these tests to deal with small samples
o Use the tests for a variety of problems
o Consider non-parametric tests, particularly the chi-squared test

KEY TERMS FROM THE GLOSSARY


alternative hypothesis – hypothesis that is true when we reject the null
hypothesis
chi-squared (or χ²) test – a non-parametric hypothesis test
contingency table – table showing the relationship between two parameters
critical value – the test value for a chi-squared test
distribution-free tests (or non-parametric tests) – hypothesis tests that make
no assumptions about the distribution of the population


hypothesis testing – seeing whether a belief about a population is supported


by the evidence from a sample
non-parametric or distribution free tests – hypothesis tests that make no
assumptions about the distribution of the population
null hypothesis – an original hypothesis that is being tested
parametric test – hypothesis test that concerns the value of a parameter
significance level – the minimum acceptable probability that a value
actually comes from the hypothesised population.

USEFUL WEBSITES
It is difficult to find Websites that give specific advice on hypothesis testing,
beyond the general statistical sites already mentioned for chapter 12.


PART 5 – MANAGEMENT PROBLEMS WITH UNCERTAINTY

The book is divided into five parts, each of which covers a different
aspect of quantitative methods. The first described the underlying concepts of
quantitative methods, setting the context for the rest of the book. The second
part showed how to collect, summarise and present data. The third part used
this data to solve deterministic problems, where we knew conditions with
certainty. The fourth part showed how uncertainty can be measured and
analysed using probabilities. This is the fifth part, which uses statistical ideas to
tackle problems that contain uncertainty.

In practice, virtually all problems contain uncertainty, but deterministic models
are accurate enough for many circumstances. Only when the uncertainty
grows, or the consequences are more severe, is it worth using a complicated
probabilistic model. One cynical viewpoint suggests that statistical models are
not always used to clarify situations, but to baffle the less informed with technical
jargon and build a defence for decisions that might prove to be wrong.

There are five chapters in this part. Chapter 16 describes decision analysis,
which allows managers to give structure to problems and make decisions in
conditions of uncertainty. Chapter 17 looks at the use of statistics in quality
control and broader quality management. Chapter 18 describes some models
for inventory management, while chapter 19 shows how to use network analysis
for planning and scheduling projects. Chapter 20 looks at the management of
queues, and broader uses of simulation.


CHAPTER 16 – MAKING DECISIONS

REVIEW OF THE CHAPTER


This chapter describes some ways that managers can approach decisions. It
has two main purposes:
• to describe different formats for presenting problems and showing their
structure
• to show how these formats can help managers make their decisions
Often the first of these is more important, allowing managers to analyse and
describe the details of their problems – defining the alternatives, events,
outcomes and relationships. When the problem has been clearly described,
managers are in a better position to make reasonable – and defensible –
decisions.

Decisions appear in different circumstances, usually classified as:


• certainty – when the outcome from each alternative is known in advance
and managers can scan the outcomes to identify their best options
• risk – when there is some number of outcomes to each alternative, each
with a known probability that managers can use to calculate expected
values or utilities
• uncertainty – when there is some number of outcomes to each
alternative, but probabilities cannot be put to these and managers have
to use some kind of decision rules
• ignorance – when managers do not really know anything about a
problem and cannot make any informed decision.
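
For the ‘risk’ case, the standard recommendation is the alternative with the
best expected value. The Python sketch below rolls a small payoff matrix into
expected values; the alternatives, payoffs and probabilities are all invented
for illustration.

```python
# Decision under risk: pick the alternative with the best expected value.
probabilities = [0.3, 0.5, 0.2]     # probabilities of three events

payoffs = {                          # payoff for each (alternative, event)
    "expand":   [100, 40, -20],
    "stay put": [ 50, 30,  10],
}

# Expected value = sum over events of probability * payoff
expected = {
    name: sum(p * v for p, v in zip(probabilities, row))
    for name, row in payoffs.items()
}
best = max(expected, key=expected.get)
print(expected, "-> choose", best)
```

The same payoff matrix can also be fed to the decision criteria used under
uncertainty, simply by ignoring the probabilities.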

Even recognising these differences can be a major step forward. Managers often
face pressure to assume (often mistakenly) that there is less risk – so that real
uncertainty is treated as risk, and real risk is assumed to be certainty.

Many problems really consist of a series of related decisions, and decision
trees give a useful format for tackling these. We have already met this idea with
probability trees, and can extend their use to more complicated situations.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the need to structure decisions
o Draw maps of problems
o List the main elements of a decision and construct a payoff matrix
o Tackle decisions under certainty
o Describe situations of uncertainty and use decision criteria to suggest
decisions
o Describe situations of risk and use expected values to suggest decisions
o Use Bayes' theorem to update conditional probabilities
o Appreciate the use of utilities
o Use decision trees to solve problems with sequential decisions


KEY TERMS FROM THE GLOSSARY


decision criteria – simple rules that recommend an alternative for decisions
with uncertainty
decision nodes – points in a decision tree where decisions are made
decision tree – diagram that represents a series of alternatives and events by
the branches of a tree
expected value – the sum of the probability multiplied by the value of the
outcome
nodes – representation of decisions or events in networks
payoff matrix or payoff table – table that shows the outcomes for each
combination of alternatives and events in a decision
problem map, relationship diagram or mind map – a diagram that shows
interactions and relationships in a problem
random nodes – points in a decision tree where events happen
regret – difference between the best possible outcome and the actual outcome
in a decision
strict uncertainty or uncertainty – situation in which we can list possible
events for a decision, but cannot give them probabilities
terminal nodes – points in a decision tree at the end of each path
utility – a measure that shows the real value of money to a decision maker

USEFUL WEBSITES
You can find tutorials on decision analysis from universities, such as the
Arizona State University site at www.public.asu.edu/~kirkwoodwww.public or
software suppliers such as Palisade Europe at
www.palisadeeurope.com/training/precisiontree.html and Vanguard
Corporation at www.vanguard.com/dphelp4. Other useful sites are
www.mindtools.com and www.en.wikipedia.org/wiki/decision-tree.


CHAPTER 17 – QUALITY MANAGEMENT

REVIEW OF THE CHAPTER


In chapter 4 we introduced the idea of sampling to collect data from a
population, and we returned to this theme in chapters 14 and 15. In practice,
most managers are likely to meet sampling in some context, typically market
surveys or quality control. This chapter considers the role of sampling in quality
control, which has now broadened into quality management. In recent years
this has become increasingly important, and many people refer to a ‘quality
revolution’. Now organisations often find that they are not competing by
providing high quality – rather, high quality products are a sine qua non,
without which they cannot even start trading.

Quality management has broadened into a management philosophy that
concerns almost every aspect of operations (exemplified by Total Quality
Management). This has been assisted by a variety of quantitative tools. For
instance, at the heart of quality management is the need to measure and
monitor performance. This is the traditional role of quality control, and
ensures that variation is within acceptable limits. For instance, quality control
has traditionally tested that product weights, say, are between acceptable
limits.

Quality control most often occurs as acceptance sampling or process control.
These have been an area of constant research for more than a century and a
huge amount of work has been done to establish standard methods and
procedures. You will probably meet these again in courses on, say,
operations management.
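
Acceptance sampling rests on the binomial distribution of chapter 13: take a
sample of n units from a batch, and accept the batch if at most c units are
defective. The sampling plan (n = 20, c = 1) and the defect rates in this
Python sketch are invented for illustration.

```python
import math

# Probability of accepting a batch under a plan (sample n, accept if
# at most c defectives), when a fraction p_defective of the batch is bad.
def prob_accept(n, c, p_defective):
    return sum(
        math.comb(n, r) * p_defective**r * (1 - p_defective)**(n - r)
        for r in range(c + 1)
    )

# A good batch (2% defective) is usually accepted; a bad one (15%) rarely
for p in (0.02, 0.15):
    print(f"{p:.0%} defective -> accepted with probability "
          f"{prob_accept(20, 1, p):.3f}")
```

Plotting the acceptance probability against the defect rate gives the
operating characteristic curve defined in the glossary below.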

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Discuss the meaning of quality and appreciate its importance
o Describe the costs of quality management
o Review the principles of Total Quality Management (TQM)
o See how quality control forms part of the broader quality
management function
o Discuss the variation in a process and the need to control it
o Describe some key tools of quality control
o Design sampling plans for acceptance sampling
o Draw process control charts

KEY TERMS FROM THE GLOSSARY


5-whys method – repeatedly asking questions to find the cause of faults
acceptable quality level (AQL) – the poorest level of quality, or the most
defects, that is acceptable in a batch
acceptance sampling – tests a sample from a batch to see whether the whole
batch reaches an acceptable level of quality


achieved quality – shows how closely a product conforms to its designed
specifications
cause-and-effect diagrams, Ishikawa diagram or a fish bone diagram –
diagram that shows the causes of problems with quality
consumer's risk – the highest acceptable probability of accepting a bad
batch, with more defects than LTPD
designed quality – the quality that a product is designed to have
loss function – function that shows the notional cost of missing a performance
target
lot tolerance percent defective (LTPD) – the level of quality that is
unacceptable, or the highest number of defects that customers are willing to
accept
operating characteristic curve – curve that shows how well a sampling plan
separates good batches from bad ones
Pareto analysis – the ‘80/20 rule’ for identifying the small number of
causes that create most problems
process control – taking a sample of products to check that a process is
working within acceptable limits
producer’s risk – highest acceptable probability of rejecting a good batch,
with fewer defects than the AQL
process control charts – diagrams for monitoring a process over time
quality – the ability of a product to meet, and preferably exceed, customer
expectations
quality control – using a series of independent inspections and tests to
make sure that designed quality is actually being achieved.
quality management – all aspects of management related to product quality
sampling by attribute – taking a quality control sample where units are
described as either acceptable or defective
sampling by variable – taking a quality control sample where units have a
measurable feature
total quality management (TQM) – the system of having the whole
organisation working together to guarantee, and systematically improve,
quality

USEFUL WEBSITES
There are many sources of information on the Web. In addition to general
sites, you might start by looking at:
www.asq.org – the American Society for Quality
www.iso.org – International Standards Organisation, particularly their ISO
9000 family of standards
www.quality.nist.gov – describes the prestigious Malcolm Baldrige National
Quality Award
www.shingoprize.org – which describes the Shingo Prize for quality.

Some basic information is in www.deming.eng.clemson.edu. For other


information you might look in www.qualitydigest.com, www.bized.ac.uk, and
www.apqr.org (the American Productivity and Quality Center).


CHAPTER 18 – INVENTORY MANAGEMENT

REVIEW OF THE CHAPTER


This chapter discusses the control of stocks. In common with topics such as
quality control, stock control has received a huge amount of attention over the
past 80 or 90 years. Early work was done in the 1920s and there have been
continuous developments ever since.

It typically costs around 25 percent of their value a year to hold stocks, and
any reduction gives a direct contribution to profits. The aim of any control
system is to balance these costs with the benefits of holding stocks (or
possibly with the costs of not holding them). The best way of doing this
depends on circumstances.
Manufacturers and other organisations have increasingly moved to just-in-
time operations to minimise their stocks. These ideas have been extended
into more general lean operations, efficient consumer response and quick
response to pull goods very quickly through the supply chain. Alternatively
material requirements planning (MRP) has evolved into MRP II
(manufacturing resource planning) and ERP (enterprise resource planning) to
co-ordinate supply and demand.

However, the lean options cannot be used in all organisations, and many have
to base decisions on the alternative independent demand models. These
build a model of prevailing circumstances, and calculate the features of an
inventory system that will give the best results. The models come in a huge
variety, with traditional ones using a fixed order quantity that is some variant
of the economic order quantity. Other models assume periodic review – again
with a huge number of variations.
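
The economic order quantity itself comes from balancing ordering and holding
costs, giving EOQ = √(2DS/H). The Python sketch below uses invented demand
and cost figures purely for illustration.

```python
import math

# Economic order quantity: the order size that minimises total cost
demand = 1200     # D: units demanded per year
order_cost = 50   # S: cost of placing one order
hold_cost = 6     # H: cost of holding one unit for a year

eoq = math.sqrt(2 * demand * order_cost / hold_cost)
orders_per_year = demand / eoq
print(f"order {eoq:.0f} units, about {orders_per_year:.1f} times a year")
```

With these figures the EOQ is about 141 units, ordered roughly 8 to 9 times
a year; changing any of the three inputs shifts the balance between ordering
and holding costs.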

Despite the amount of work done on stock control, this is an area where major
changes are still occurring – triggered by e-business, automation, global
positioning and tracking systems, international sourcing, and so on. Many
organisations are racing to ‘replace inventory by information’ but this does not
come without risk, and managers still have to balance the costs of holding
stock with the benefits (or the costs of not holding it).

SPECIFIC AIMS OF THE CHAPTER ALLOW READERS TO:


o Appreciate the need for stocks and the associated costs
o Discuss different approaches to inventory management
o Calculate an economic order quantity and reorder level
o Calculate the effects of fixed production rates
o Appreciate the need for safety stock and define a service level
o Calculate safety stock when lead time demand is Normally distributed
o Describe periodic review models and calculate target stock levels
o Do ABC analyses of inventories


KEY TERMS FROM THE GLOSSARY


ABC analysis – Pareto analysis for inventory items
cycle service level – the probability that all demand can be met in a stock
cycle
dependent demand – situation in which demands for materials are somehow
related to each other
economic order quantity – order size that minimises costs for a simple
inventory system
independent demand – demand where there is no link between different
demands for items
reorder level – stock level when it is time to place an order
safety stock – additional stock that is used to cover unexpectedly high demand
service level – probability that demand can be met from stock
stock – stores of materials that organisations keep until needed
target stock level – stock level that determines the order size for stocks with
periodic review

USEFUL WEBSITES
Some useful source sites are www.inventoryops.com and www.apics.org
(formerly the American Production and Inventory Control Society – now the
Association for Operations Management). You can get related information from
organisations, such as the Operational Research Society (www.orsoc.org.uk),
International Federation of Operational Research Societies (www.ifors.org),
European Operations Management Association (www.euroma-online.org) and
the Association of Operations Management (www.apics.org).


CHAPTER 19 – PROJECT MANAGEMENT

REVIEW OF THE CHAPTER


Project network analysis was developed in the 1950s by two groups who
faced similar problems. The first group worked on the Polaris missile project
for the US Department of Defense. At the time the US government felt that
progress was too slow and gave high priority to this huge project. PERT was
developed to control the work of the 3000 contractors involved. The second
group worked for Du Pont and developed CPM to help with the maintenance
of facilities.

Any differences between the original ideas have now disappeared, and the
only remaining difference is that PERT considers activity durations which
follow a beta distribution, while CPM assumes the durations are fixed.
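
In practice PERT works with three duration estimates per activity, combined
by the rule of sixths (defined in the glossary below). The three estimates in
this Python sketch are invented for illustration.

```python
# PERT estimates from the 'rule of sixths':
#   expected duration = (optimistic + 4 * most_likely + pessimistic) / 6
#   variance          = ((pessimistic - optimistic) / 6) ** 2
optimistic, most_likely, pessimistic = 4.0, 6.0, 14.0

expected = (optimistic + 4 * most_likely + pessimistic) / 6
variance = ((pessimistic - optimistic) / 6) ** 2
print(f"expected duration {expected}, variance {variance:.2f}")
```

Summing the expected durations and variances along the critical path then
gives the distribution of the overall project duration.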

Another question considers the choice between activity-on-arrow networks
and activity-on-node. These are very similar in principle, so differences are
really a matter of detail. Activity-on-arrow format is arguably the more
rigorous, but activity-on-node does not need dummy activities, and is easier to
program (and is consequently used in most standard software). It seems that
the activity-on-node approach is becoming the most common, and this is the
format used in the book.

The main benefits of network analysis are that it gives a very powerful
planning tool, and yet is fairly simple to understand. It is used on almost all
projects of any size, but is especially common in construction and IT projects.
In common with other quantitative methods, it is the preparation needed for
the analysis that is often most useful.

SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the need to plan complex projects
o Divide a project into distinct activities and show the relationships between
them
o Draw a project as a network of connected activities
o Calculate the timing of activities
o Identify critical paths and the overall project duration
o Reduce the duration of a project
o Draw Gantt charts
o Consider the resources needed during a project.
o Use PERT when there is uncertainty in activity durations

KEY TERMS FROM THE GLOSSARY


critical activities – activities at fixed times in a project
critical path – a series of critical activities in a project
critical path method (CPM) – a method of project planning that assumes each
activity has a fixed duration


dependence table – table showing the relationships between activities in a
project
float, total float (or sometimes slack) – the difference between the amount
of time available for an activity in a project and the time actually used
Gantt chart – diagram for showing the schedule of a project
non-critical activities – activities in a project that have some flexibility in their
timing
project – a distinct and unique set of activities that makes a one-off product
project (or programme) evaluation and review technique (PERT) –
method of project planning that assumes each activity has an uncertain duration
project network analysis – the most widely used method of organising
complex projects
rule of sixths – rule to find the expected duration of an activity for PERT
slack – sometimes used to mean float or total float, being the difference
between the amount of time available for an activity in a project and the time
actually used

USEFUL WEBSITES
Two main Institutions for project management have Websites at:
www.pmi.org – Project Management Institute
www.apm.org.uk – Association of Project Management
You can also try related sites, such as www.pmforum.org, www.allpm.com
and www.pmtoday.co.uk.


CHAPTER 20 – QUEUES AND SIMULATION

REVIEW OF THE CHAPTER


This chapter describes two types of analysis.
• It starts by looking at queuing theory
• then uses queues as a way of introducing simulation.

We are all familiar with queues, but usually assume that they are there
because of incompetence, lack of consideration, or deliberately to annoy us.
It often seems that no-one has done any work to reduce their discomfort. In
practice, a lot of research has been done on queuing – with much early work
done in telephone exchanges. The underlying principle is that shorter queues
need more servers and hence cost more, so organisations have to balance the
cost of providing service against the cost of keeping customers waiting. It
is also worth emphasising that
queues occur in many types of operation, and are certainly not limited to
queues of people.

The mathematical analyses of queues are described in the subject of ‘queuing
theory’. These analyses can give useful results, but practical applications are
limited by the assumptions of the models and the complexity of the calculations.
Apart from very simple queues, a lot of effort is needed to solve any problem.
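For the simplest case – a single server with random arrivals at rate λ and random service at rate μ – the standard operating characteristics can be calculated directly. A minimal sketch in Python, using invented rates (`lam` and `mu` in customers per hour):

```python
# Single-server queue with random arrivals and random service times.
# Invented rates: lam = arrival rate, mu = service rate (customers per hour).
lam, mu = 4.0, 6.0

rho = lam / mu          # utilisation factor: proportion of time the server is busy
L = lam / (mu - lam)    # average number of customers in the system
Lq = rho * L            # average number of customers waiting in the queue
W = 1 / (mu - lam)      # average time a customer spends in the system
Wq = rho * W            # average time a customer spends waiting in the queue

print(rho, L, Lq, W, Wq)
```

These few lines already show the pattern mentioned above: as `lam` approaches `mu` the utilisation factor approaches 1 and the queue lengths and waiting times grow very quickly.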

Another approach is needed for more complex problems, and this is where
simulation can help. The distinctive approach is to build a simulation model
(usually on a computer) that allows you to ‘follow’ typical operations for some
period of time and measure relevant features. Then by repeating the
simulation run a large number of times, you can find general patterns of
performance.
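The approach described above can be sketched for the same single-server queue: generate random arrival and service times, ‘follow’ customers through the operation, and then average the results over a large number of repeated runs. The Python sketch below is a minimal Monte Carlo illustration with invented rates, not a full simulation package; `simulate_queue` is a hypothetical helper written for this example.

```python
import random

def simulate_queue(lam, mu, n_customers, seed=None):
    """Simulate a single-server queue and return the mean waiting time."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current customer
    server_free = 0.0    # time at which the server next becomes free
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)        # random gap between arrivals
        start = max(arrival, server_free)      # wait if the server is busy
        total_wait += start - arrival
        server_free = start + rng.expovariate(mu)  # random service time
    return total_wait / n_customers

# Repeat the simulation run many times and look at the general pattern.
runs = [simulate_queue(lam=4.0, mu=6.0, n_customers=2000, seed=r)
        for r in range(50)]
average_wait = sum(runs) / len(runs)
print(average_wait)  # should be near the theoretical waiting time of 1/3 hour
```

A single run gives one set of typical, but artificial, results; it is the averaging over many runs that reveals the underlying performance of the system.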

Simulation can be used for a huge variety of problems – not limited to queues
– and it has become one of the most valuable quantitative tools for business.
It has the benefits of being easy to understand and able to tackle the most
complicated of problems. Unfortunately, it has the disadvantage of needing
some expertise to build and run the models. Many software packages have
been developed to help with this, including specialised languages,
sophisticated graphics and virtual operations.

THE SPECIFIC AIMS OF THE CHAPTER ALLOW YOU TO:


o Appreciate the scope of queuing problems and describe the features of
queues
o Calculate the characteristics of a single server queue
o Describe the characteristic approach of simulation
o Do manual simulations of queuing systems
o Use computers for bigger simulations


KEY TERMS FROM THE GLOSSARY


Monte Carlo simulation – type of simulation model that includes a lot of
uncertainty
negative exponential distribution – probability distribution used for
continuous random values, such as the times between arrivals at a queue
operating characteristics – features and calculations for a queuing system
queue – line of customers waiting to be served
simulation – process that analyses problems by imitating real operations,
giving a set of typical, but artificial, results
utilisation factor – proportion of time a queuing system is busy (= λ/μ for a
single server)

USEFUL WEBSITES
A source of a lot of information about queuing – as well as a list of books on-line
– is Myron Hlynka’s site at ww2.uwindsor.ca/~hlynka.queue.html. A useful
source for illustrations is www.opsresearch.com.
