

Forecast Metrics and Evaluation


Nadav Zivelin
Principal Product Manager
Agenda
• Discuss main disciplines of evaluating forecast quality

• Detailed discussion of quantitative accuracy metrics

• Discussion of best practice measurement


How to Evaluate Forecast Quality
• Two core disciplines are used to evaluate forecast quality

• Qualitative

• Heuristic-based

• Common sense metrics

• Requires less expertise

• Quantitative

• Method-based

• Robust period over period tracking

• Initial effort required to set up


Qualitative Evaluation
• Forecast assessment using look and feel

• Eyeball the data and compare the history with the forecast pattern

• If the two are in sync, decide whether it is a ‘reasonable’ forecast

• History and the forecast are approximately the same volume

• The increasing/decreasing trend in history is reflected in the forecast

• No random spikes in the forecast

• Historical events are accurately reflected in the forecast

• No unnecessary forecasts and no missing forecasts (see the sketch below)
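Most of these checks are done by eye, but a couple can be scripted as a first-pass filter. Below is a minimal Python sketch; the function name and thresholds are our own illustrative choices, not part of any product.

import numpy as np

def check_forecast_sanity(history, forecast, volume_tol=0.25, spike_factor=3.0):
    """Flag a forecast whose volume or spikes look out of line with history.

    Thresholds are illustrative; tune them to your business.
    """
    history = np.asarray(history, dtype=float)
    forecast = np.asarray(forecast, dtype=float)

    # Check 1: average forecast volume should be near average historical volume.
    volume_ratio = forecast.mean() / history.mean()
    volume_ok = abs(volume_ratio - 1.0) <= volume_tol

    # Check 2: no forecast point should spike far beyond anything seen in history.
    spike_ok = forecast.max() <= spike_factor * history.max()

    return {"volume_ok": volume_ok, "no_spikes": spike_ok}

# Example: a forecast that roughly tracks history passes both checks.
print(check_forecast_sanity([100, 120, 110, 130], [115, 125, 120, 118]))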


Example: Qualitative Analysis

[Chart: monthly demand from 10/21/2006 through 9/21/2007, marked "End of History", followed by three candidate forecasts (Forecast 1, Forecast 2, Forecast 3) from 10/21/2007 through 6/21/2008; y-axis 0–20,000 units]
Quantitative Evaluation
• Measuring Forecast Goodness Using Mathematical Evaluation

• Numerical comparison between Historical Data and Forecast

• Typically the objective is to measure the deviation of the forecast from history and report it as a percentage

• The forecast goodness is evaluated using this forecast error

• The lower the error, the higher the quality of the forecast


Quantitative vs. Qualitative

Qualitative
• Benefits: leverages user expertise; easy to perform; attention to key items
• Gaps: subjective; labor intensive

Quantitative
• Benefits: ideal for measurement over time; quick snapshot of the business; objective and consistent
• Gaps: more difficult initial setup

Best Practice: Quantitative Analysis Measured Over Time


Keys to Quantitative Evaluation

• Archived future forecast

• Predefined accuracy measurement level

• Accuracy calculation method


Archive Forecast
• What forecast version should be archived?
• Statistical

• Sales

• Final Forecast

• What lag(s) should be archived?


• One or more future forecast periods?

• At a minimum, archiving the Final Forecast is recommended (see the sketch below)
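One common way to structure such an archive is a table keyed by item, target period, and lag. A minimal pandas sketch; the column names and sample data are illustrative.

import pandas as pd

# Each row records which item, which period is being forecast, how many
# periods ahead the forecast was generated (lag), and the forecast value.
archive = pd.DataFrame(
    [
        ("SKU-1", "2008-01", 1, 950.0),   # forecast made one period before 2008-01
        ("SKU-1", "2008-01", 3, 1100.0),  # forecast made three periods before
        ("SKU-1", "2008-02", 1, 980.0),
    ],
    columns=["item", "target_period", "lag", "forecast"],
)

# Pull the lag-1 archive to compare against actuals once they arrive.
lag1 = archive[archive["lag"] == 1]
print(lag1)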


Accuracy Measurement Level
• Forecast accuracy can be computed at different levels

• Metrics at different levels tell different stories

• Generally, MAPE at less-aggregated levels will be higher than at more-aggregated levels


• Aggregate metrics less susceptible to statistical noise

• Lower-level errors get averaged out in aggregation

• Measure accuracy where it is most meaningful for your business


• Who is the primary owner/user of the forecast?

• What is the primary use of the forecast?

• Most error calculations can be aggregated and reported at higher levels (see the sketch below)
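The averaging-out effect is easy to demonstrate. A minimal NumPy sketch with made-up numbers, where two items' offsetting errors vanish once demand is aggregated:

import numpy as np

# Two items over three periods: one over-forecast, one under-forecast.
demand   = np.array([[100.0, 120.0, 110.0],   # item A
                     [200.0, 180.0, 190.0]])  # item B
forecast = np.array([[120.0, 140.0, 130.0],   # item A over-forecast by 20
                     [180.0, 160.0, 170.0]])  # item B under-forecast by 20

# Item-level MAPE, averaged across all item/period combinations.
item_mape = (100 * np.abs(demand - forecast) / demand).mean()

# Aggregate first (sum demand and forecast across items), then compute MAPE.
agg_demand, agg_forecast = demand.sum(axis=0), forecast.sum(axis=0)
agg_mape = (100 * np.abs(agg_demand - agg_forecast) / agg_demand).mean()

print(f"item-level MAPE: {item_mape:.1f}%")  # ~14.4%
print(f"aggregate MAPE:  {agg_mape:.1f}%")   # 0.0% -- the errors offset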


Forecast Error Metrics
• Mean Forecast Error (MFE, MPE, or Bias): measures the average deviation of forecast from actuals.

• Mean Absolute Deviation (MAD): measures the average absolute deviation of forecast from actuals.

• Mean Absolute Percentage Error (MAPE): measures absolute error as a percentage of sales.

• Weighted Mean Absolute Percentage Error (WMAPE): MAPE weighted (normalized) against measures like sales volume, sales $ volume, etc.
Forecast Error Metrics – Formulae

MFE = \frac{1}{n} \sum_{t=1}^{n} (D_t - F_t)

MAD = \frac{1}{n} \sum_{t=1}^{n} |D_t - F_t|

MAPE = \frac{100}{n} \sum_{t=1}^{n} \frac{|D_t - F_t|}{D_t}

WMAPE = \frac{\sum_i \frac{|D_i - F_i|}{D_i} \cdot D_i}{\sum_i D_i}

where D_t is the actual demand and F_t is the forecast in period t.
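A direct translation of these formulae into Python, assuming NumPy arrays of actuals and forecasts (a sketch; the function names are our own):

import numpy as np

def mfe(d, f):
    """Mean Forecast Error (bias): average of signed errors."""
    return np.mean(d - f)

def mad(d, f):
    """Mean Absolute Deviation: average of absolute errors."""
    return np.mean(np.abs(d - f))

def mape(d, f):
    """Mean Absolute Percentage Error; requires all d > 0."""
    return 100 * np.mean(np.abs(d - f) / d)

def wmape(d, f):
    """Weighted MAPE with volume weights; the D_i weights cancel the APE
    denominators. Multiply by 100 to report as a percentage."""
    return np.sum(np.abs(d - f)) / np.sum(d)

d = np.array([100.0, 150.0, 200.0])
f = np.array([110.0, 140.0, 180.0])
print(mfe(d, f), mad(d, f), mape(d, f), wmape(d, f))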
Forecast Error Metrics – MFE

MFE = \frac{1}{n} \sum_{t=1}^{n} (D_t - F_t)
• Want MFE to be as close to zero as possible (minimum bias)

• A large positive (negative) MFE means that the forecast is undershooting (overshooting) the actual observations

• Note that zero MFE does not imply that forecasts are perfect (no error), only that the mean is “on target”
• Positive and negative errors cancel out here (see the sketch below)

• Also called forecast BIAS

Not recommended as a single metric
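The cancellation behavior is easy to demonstrate with illustrative numbers:

import numpy as np

demand   = np.array([100.0, 100.0, 100.0, 100.0])
forecast = np.array([ 50.0, 150.0,  60.0, 140.0])  # badly wrong every period

mfe = np.mean(demand - forecast)          # 0.0 -- positive and negative errors cancel
mad = np.mean(np.abs(demand - forecast))  # 45.0 -- the errors are actually large

print(f"MFE = {mfe}, MAD = {mad}")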


Forecast Error Metrics – MAD

MAD = \frac{1}{n} \sum_{t=1}^{n} |D_t - F_t|
• Measures absolute error

• Positive and negative errors thus do not cancel out (as they do with MFE)

• Want MAD to be as small as possible

• Since MAD is expressed in the units of the data, there is no way to know whether the error is large or small in relation to the actual data (see the sketch below)
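A two-line illustration of the scale problem, with made-up numbers: the same MAD can represent a 50% miss or a 0.5% miss.

import numpy as np

# Identical MAD of 50 units, very different situations:
low_volume  = np.abs(np.array([100.0])   - np.array([150.0])).mean()    # 50% off
high_volume = np.abs(np.array([10000.0]) - np.array([10050.0])).mean()  # 0.5% off
print(low_volume, high_volume)  # both 50.0 -- MAD alone cannot tell them apart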
Forecast Error Metrics – MAPE

MAPE = \frac{100}{n} \sum_{t=1}^{n} \frac{|D_t - F_t|}{D_t}

• Same as MAD except…

• Measures the absolute deviation of forecast from actual as a percentage of the actual data

• Indicates persistent absolute error in forecast

• Combinations with very small or zero volumes can cause large skew in the results (see the sketch below)

Most common measure of forecast accuracy
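The low-volume skew is worth seeing concretely. An illustrative sketch in which one tiny-demand period dominates the average:

import numpy as np

demand   = np.array([1000.0, 1000.0, 2.0])   # one near-zero-volume period
forecast = np.array([1050.0,  950.0, 10.0])  # only 8 units off on the small one

ape = 100 * np.abs(demand - forecast) / demand
print(ape)         # [5. 5. 400.] -- the tiny period contributes 400%
print(ape.mean())  # ~136.7% MAPE, driven almost entirely by the 2-unit period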


Forecast Error Metrics – WMAPE

WMAPE = \frac{\sum_i \frac{|D_i - F_i|}{D_i} \cdot D_i}{\sum_i D_i}

With volume weights, the D_i terms cancel and WMAPE simplifies to \frac{\sum_i |D_i - F_i|}{\sum_i D_i}.
• Unlike MAPE, which is a straight mathematical average of APEs, here the APEs are weighted
• Weighting is usually by sales volume, but it can also be based on revenue, profit, etc.

• Since this is a weighted measure, it does not have the same problems as MAPE, such as over-skewing due to low/zero volumes

• Allows businesses to identify areas which affect business goals

• Combinations with large weights can skew the results in their favor (see the sketch below)
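Reusing the numbers from the MAPE sketch shows how volume weighting removes the low-volume skew (illustrative data):

import numpy as np

demand   = np.array([1000.0, 1000.0, 2.0])
forecast = np.array([1050.0,  950.0, 10.0])

mape  = 100 * np.mean(np.abs(demand - forecast) / demand)
wmape = 100 * np.sum(np.abs(demand - forecast)) / np.sum(demand)

print(f"MAPE  = {mape:.1f}%")   # ~136.7% -- dominated by the 2-unit period
print(f"WMAPE = {wmape:.1f}%")  # ~5.4% -- weighted by volume, the skew disappears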
Forecast Error Measurement
Recommendations
• Accuracy based on final adjusted forecast
• Important to measure accuracy of statistical forecast but final forecast drives the business

• Generate a bias metric


• Allows identification of systemic bias up or down

• Use percentage error calculation


• Calculations can easily be aggregated to a variety of levels

• Initially evaluate error at a high level and drill down as necessary

• Until a business baseline is established, it is difficult to ascertain whether accuracy is acceptable


• The goal should not be an accuracy of X%

• The goal should be continuous reduction of error (see the sketch below)
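Measured over time, the question becomes "is the error trending down?" rather than "did we hit X%?". A minimal pandas sketch of period-over-period WMAPE tracking, assuming archived lag-1 forecasts joined with actuals (column names and data are illustrative):

import pandas as pd

# One row per item per period: the archived lag-1 forecast and the actual.
df = pd.DataFrame({
    "period":   ["2008-01"] * 2 + ["2008-02"] * 2 + ["2008-03"] * 2,
    "actual":   [100.0, 200.0, 110.0, 190.0, 105.0, 210.0],
    "forecast": [120.0, 230.0, 120.0, 205.0, 110.0, 215.0],
})

df["abs_err"] = (df["actual"] - df["forecast"]).abs()

# WMAPE per period: total absolute error over total actual volume.
by_period = df.groupby("period").agg(abs_err=("abs_err", "sum"),
                                     actual=("actual", "sum"))
by_period["wmape_pct"] = 100 * by_period["abs_err"] / by_period["actual"]

print(by_period["wmape_pct"])  # the series should trend down period over period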
