Measuring Forecast Performance

Never compare your inside with somebody else’s outside – Hugh Macleod

I’m aware that this topic has been covered ad nauseam, but first a brief word on the subject of benchmarking your forecast accuracy against competitors or industry peers:

Don’t.

Does any company in the world have the exact same product mix that you do? The same market presence? The same merchandising and promotional strategies?

If your answer to all three of the above questions is ‘yes’, then you have a lot more to worry about than your forecast accuracy.

For the rest of you, you’re probably wondering to yourself: “How do I know if we’re doing a good job of forecasting?”

Should you measure MAPE? MAD/Mean? Weighted MAPE? Symmetric MAPE? Comparison to a naïve method? Should you be using different methods depending on volume?

Yes! Wait, no! Okay, maybe…

The problem here is that if you’re looking for some arithmetic equation to definitively tell you whether or not your forecasting process is working, you’re tilting at windmills.

It’s easy to measure on-time performance: Either the shipment arrived on time or it didn’t. In cases where it didn’t, you can pinpoint where the failure occurred.

It’s easy to measure inventory record accuracy: Either the physical count matches the computer record or it doesn’t. In cases where it doesn’t, the number of variables that can contribute to the error is limited.

In both of the above cases (and most other supply chain performance metrics), near-perfection is an achievable goal if you have the resources and motivation to attack the underlying problems. You can always rank your performance in terms of ‘closeness to 100%’.

Demand forecast accuracy is an entirely different animal. Demand is a function of human behaviour (which is often, but not always, rational), weather, the actions of your competitors, and completely unforeseen events whose impact on demand only makes sense in hindsight.

So is measuring forecast accuracy pointless?

Of course not, so long as you acknowledge that the goal is continuous improvement, not ‘closeness to 100%’ or ‘at least as good as our competitors’. And, for God’s sake, don’t rank and reward (or punish) your demand planners based solely on how accurate their forecasts are!

Always remember that a forecast is the result of a process and that people’s performance and accountability should be measured on things that they can directly control.

Also, reasonableness is what you’re ultimately striving for, not some arbitrary accuracy measurement. As a case in point, item/store level demand can be extremely low for the majority of items in any retail enterprise. If a forecast is 1 unit for a week and you sell 3, that’s a 67% error rate – but was it really a bad forecast?
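
As a minimal sketch of that arithmetic (assuming, as the 67% figure implies, that the error is expressed as a percentage of actual demand), the same one-week miss looks very different depending on what you divide by:

```python
# Worked example only: a 1-unit forecast against 3 units actually sold.
forecast = 1
actual = 3

error_vs_actual = abs(actual - forecast) / actual      # 2/3, roughly 67%
error_vs_forecast = abs(actual - forecast) / forecast  # 2/1, i.e. 200%

print(f"Error vs. actual demand: {error_vs_actual:.0%}")   # 67%
print(f"Error vs. the forecast:  {error_vs_forecast:.0%}")  # 200%
```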

A much better way to think of forecast performance is in terms of tolerance. For products that sell 10-20 units per year at a store, a MAPE of 70% might be quite tolerable. But for items that sell 100-200 units per week, a MAPE of 30% might be unacceptable.

Start by just setting a sliding scale based on volume, using whatever level of error you’re currently achieving for each volume level as a benchmark ‘tolerance’. It doesn’t matter so much where you set the tolerances – it only matters that the tolerances you set are grounded in reasonableness.
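
A sliding scale of that sort might look something like the sketch below. The volume bands and tolerance percentages are purely illustrative assumptions; yours should be seeded from the error levels you’re actually achieving today.

```python
# Illustrative tolerance bands keyed on an item's typical weekly volume.
# These numbers are placeholders -- derive your own from current performance.
TOLERANCE_BANDS = [
    (10, 0.70),            # very slow movers: up to 70% error tolerated
    (100, 0.45),           # mid-volume items: up to 45%
    (float("inf"), 0.30),  # fast movers: up to 30%
]

def tolerance_for(weekly_volume: float) -> float:
    """Return the error tolerance that applies to an item's volume level."""
    for ceiling, tolerance in TOLERANCE_BANDS:
        if weekly_volume <= ceiling:
            return tolerance
    return TOLERANCE_BANDS[-1][1]
```

With those placeholder bands, tolerance_for(5) returns 0.70 while tolerance_for(150) returns 0.30.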

Your overall forecast performance is a simple ratio: (Number of Forecasts Outside Tolerance ÷ Number of Forecasts Produced) × 100%.
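
In code, that ratio might be computed along the lines below (a self-contained sketch with made-up numbers; each pair is an item’s observed percentage error and the tolerance for its volume band, e.g. as returned by a lookup like tolerance_for above):

```python
# Made-up (error, tolerance) pairs for four forecasts.
results = [
    (0.80, 0.70),  # slow mover, 80% error vs. 70% tolerance -> outside
    (0.50, 0.70),  # slow mover, within tolerance
    (0.25, 0.30),  # fast mover, within tolerance
    (0.40, 0.30),  # fast mover, outside tolerance
]

outside = sum(1 for error, tolerance in results if error > tolerance)
rate = outside / len(results) * 100
print(f"{rate:.0f}% of forecasts were outside tolerance")  # 50%
```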

Whenever your error rate exceeds tolerance (for that item’s volume level), you need to figure out what caused the error to be abnormally high and, more importantly, if any change to the process could have prevented that error from occurring.

Perhaps your promotional forecasts are always biased to the high side. Does everyone involved in the process understand that the goal is to rationally predict the demand, not provide an aspirational target?

Perhaps demand at a particular store is skyrocketing for a group of items because a nearby competitor closed up shop. Do you have a process whereby the people in the field can communicate this information to the demand planning group?

Perhaps sales of a seasonal line are in the doldrums because spring is breaking late in a large swath of the country. Have your seasonal demand planners been watching the Weather Channel?

Not every out of tolerance forecast result has an explanation. And not every out of tolerance forecast with an explanation has a remedy.

But some do.

Working your errors in this fashion is where demand insight comes from. Over time, the number of forecasts out of tolerance will drop and your understanding of demand drivers will increase. Then you can tighten the tolerances a little and start the cycle again.
