The Point of No Return


Events in the past may be roughly divided into those which probably never happened and those which do not matter. – William Ralph Inge (1860-1954)


Tedious. Banal. Tiresome.

These are all worthy adjectives to describe this topic.

So why am I even discussing it?

Because, for some reason I’m unable to explain, the question of how to deal with saleable merchandise returns in the sales forecasting process often seems to take on the same gravity as a discussion of Roe v. Wade or the existence of intelligent extraterrestrial life.

Point-of-sale (POS) data, imperfect as it is, is really the only information we have for building a historical proxy of customer demand. However, POS data contains both sales and merchandise returns, so the existential question becomes: Do we build our history using gross sales or net sales?

The main argument on the ‘gross sales’ side of the debate is that a return is an unpredictable inventory event, not a true indicator of ‘negative demand’.

On the ‘net sales’ side, the main argument is that constructing a forecast using gross sales data overstates demand and will ultimately lead to excess inventory.

So which is correct?

Gross sales and here’s why: Demand has two dimensions – quantity and time.

Once a day has gone into the past, whatever happened, happened. Although most retailers have transaction ID numbers on receipts that allow for returns to be associated with the original purchase, we must assume that the customer intended to keep the item on the day it was purchased.

The fact that there was negative demand a few days (or weeks) later doesn’t change the fact that there was positive demand on the day of the original purchase.

Whenever I’m at a client who starts thinking about this too hard, I like to use the following example:

Suppose that you know with 100% certainty that you will sell 10 units of Product X on a particular day. Further suppose that you know with 100% certainty that 4 units of Product X will be returned in a saleable state on that same day.

You don’t know exactly when the sales will happen throughout the day, nor do you know exactly when the returns will happen. At the beginning of that day, what is the minimum number of units of Product X you would want to have on the shelf?

If your answer is 10 units, then that means you want to plan with gross sales.

If your answer is less than 10 units, then that means you’re not very serious about customer service.


Measuring Forecast Performance

Never compare your inside with somebody else’s outside – Hugh Macleod

I’m aware that this topic has been covered ad nauseam, but first a brief word on the subject of benchmarking your forecast accuracy against competitors or industry peers:


Does any company in the world have the exact same product mix that you do? The same market presence? The same merchandising and promotional strategies?

If your answer to all three of the above questions is ‘yes’, then you have a lot more to worry about than your forecast accuracy.

For the rest of you, you’re probably wondering to yourself: “How do I know if we’re doing a good job of forecasting?”

Should you measure MAPE? MAD/Mean? Weighted MAPE? Symmetric MAPE? Comparison to a naïve method? Should you be using different methods depending on volume?
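For readers unfamiliar with the alphabet soup, here is a minimal sketch (in Python; not from the original text, and the sample data is invented purely for illustration) of how each of these metrics is typically computed:

```python
# Common forecast error metrics, computed over paired lists of
# actuals and forecasts. All return a percentage.

def mape(actual, forecast):
    # Mean Absolute Percentage Error; items with zero actual demand
    # are skipped because the percentage is undefined there.
    terms = [abs(a - f) / a for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(terms) / len(terms)

def mad_over_mean(actual, forecast):
    # MAD/Mean: mean absolute deviation scaled by mean demand.
    n = len(actual)
    mad = sum(abs(a - f) for a, f in zip(actual, forecast)) / n
    return 100.0 * mad / (sum(actual) / n)

def wmape(actual, forecast):
    # Weighted MAPE: total absolute error over total demand,
    # so high-volume items carry more weight.
    return 100.0 * sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

def smape(actual, forecast):
    # Symmetric MAPE: error scaled by the average of actual and forecast.
    terms = [2 * abs(a - f) / (a + f) for a, f in zip(actual, forecast) if a + f]
    return 100.0 * sum(terms) / len(actual)

# Invented example: four item/week observations, including the
# "forecast 1, sell 3" case from the text and a zero-demand week.
actual   = [3, 10, 0, 8]
forecast = [1, 12, 1, 8]
print(mape(actual, forecast))
print(wmape(actual, forecast))
print(smape(actual, forecast))
```

Note that MAD/Mean and weighted MAPE coincide when computed over the same window, and that each metric treats low-volume items very differently – which is exactly why no single number settles the question.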

Yes! Wait, no! Okay, maybe…

The problem here is that if you’re looking for some arithmetic equation to definitively tell you whether or not your forecasting process is working, you’re tilting at windmills.

It’s easy to measure on time performance: Either the shipment arrived on time or it didn’t. In cases where it didn’t, you can pinpoint where the failure occurred.

It’s easy to measure inventory record accuracy: Either the physical count matches the computer record or it doesn’t. In cases where it doesn’t, the number of variables that can contribute to the error is limited.

In both of the above cases (and most other supply chain performance metrics), near-perfection is an achievable goal if you have the resources and motivation to attack the underlying problems. You can always rank your performance in terms of ‘closeness to 100%’.

Demand forecast accuracy is an entirely different animal. Demand is a function of human behaviour (which is often, but not always, rational), weather, the actions of your competitors and completely unforeseen events whose impact on demand only makes sense through hindsight.

So is measuring forecast accuracy pointless?

Of course not, so long as you acknowledge that the goal is continuous improvement, not ‘closeness to 100%’ or ‘at least as good as our competitors’. And, for God’s sake, don’t rank and reward (or punish) your demand planners based solely on how accurate their forecasts are!

Always remember that a forecast is the result of a process and that people’s performance and accountability should be measured on things that they can directly control.

Also, reasonableness is what you’re ultimately striving for, not some arbitrary accuracy measurement. As a case in point, item/store level demand can be extremely low for the majority of items in any retail enterprise. If a forecast is 1 unit for a week and you sell 3, that’s a 67% error rate – but was it really a bad forecast?

A much better way to think of forecast performance is in terms of tolerance. For products that sell 10-20 units per year at a store, a MAPE of 70% might be quite tolerable. But for items that sell 100-200 units per week a MAPE of 30% might be unacceptable.

Start by just setting a sliding scale based on volume, using whatever level of error you’re currently achieving for each volume level as a benchmark ‘tolerance’. It doesn’t matter so much where you set the tolerances – it only matters that the tolerances you set are grounded in reasonableness.

Your overall forecast performance is then a simple ratio: Number of Forecasts Outside Tolerance / Number of Forecasts Produced × 100% – the lower the percentage, the better you’re doing.
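The sliding scale and the out-of-tolerance ratio described above can be sketched in a few lines of Python. The band cut-offs and sample records below are hypothetical – the text’s point is that you should seed the tolerances from whatever error levels you currently achieve at each volume level:

```python
# Hypothetical volume-based sliding scale of error tolerances,
# plus the overall out-of-tolerance percentage.

def tolerance_for(weekly_volume):
    # Slower movers get a looser tolerance; fast movers a tighter one.
    # These cut-offs are illustrative, not prescriptive.
    if weekly_volume < 1:       # roughly 10-20 units per year
        return 70.0
    elif weekly_volume < 20:    # mid-volume items
        return 50.0
    else:                       # 100-200+ units per week
        return 30.0

def pct_out_of_tolerance(records):
    # records: list of (weekly_volume, observed_error_pct) pairs,
    # one per item/store forecast produced.
    out = sum(1 for vol, err in records if err > tolerance_for(vol))
    return 100.0 * out / len(records)

# Invented sample: a slow mover with a 67% error is still within
# tolerance, while a fast mover at 35% is flagged for review.
records = [(0.3, 67.0), (150.0, 35.0), (150.0, 12.0), (5.0, 80.0)]
print(pct_out_of_tolerance(records))
```

The flagged forecasts – not the aggregate percentage – are where the real work happens, as the next paragraphs describe.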

Whenever your error rate exceeds tolerance (for that item’s volume level), you need to figure out what caused the error to be abnormally high and, more importantly, if any change to the process could have prevented that error from occurring.

Perhaps your promotional forecasts are always biased to the high side. Does everyone involved in the process understand that the goal is to rationally predict the demand, not provide an aspirational target?

Perhaps demand at a particular store is skyrocketing for a group of items because a nearby competitor closed up shop. Do you have a process whereby the people in the field can communicate this information to the demand planning group?

Perhaps sales of a seasonal line are in the doldrums because spring is breaking late in a large swath of the country. Have your seasonal demand planners been watching the Weather Channel?

Not every out of tolerance forecast result has an explanation. And not every out of tolerance forecast with an explanation has a remedy.

But some do.

Working your errors in this fashion is where demand insight comes from. Over time, your forecasts out of tolerance will drop and your understanding of demand drivers will increase. Then you can tighten the tolerances a little and start the cycle again.

Overly Sophistimicated

There are many methods for predicting the future. For example, you can read horoscopes, tea leaves, tarot cards or crystal balls. Collectively, these are known as ‘nutty methods’. Or you can put well researched facts into sophisticated computer models, more commonly known as ‘a complete waste of time.’ – Scott Adams

If you have your driver’s license, you can get into virtually any automobile in any country in the world and drive it. Not only that, but you can drive any car made between 1908 and today.

You want to make a left turn? Rotate the steering wheel counter-clockwise.
Right turn? Clockwise.
Speed up? Press your foot down on the accelerator pedal.
Slow down? Remove your foot from the accelerator pedal.
Come to a stop? Press your foot down on the brake pedal.

Think of all the advances in automotive technology – from the Ford Model T in 1908 to the Tesla Model S in 2016… Over 100 years and countless technological leaps, yet the ‘user interface’ has remained the same (and universally applied) for all this time.

This is what makes the skill of driving easy to learn and transferable from one car to the next. And all of the complexities of road design, elevation and traffic are handled by keeping the driver’s decisions in any scenario very simple: speed up, slow down, stop or turn. Heck, even the lunar rover used the same user interface to deal with extraterrestrial terrain!

Not only that, but because the interface is simple and control on the part of the driver is absolute, there is built-in accountability for the result. If the car is travelling faster than the speed limit, it’s because the driver made it so, manufacturing defects (most often caused by ‘over-sophistimication’) notwithstanding.

While supply chain forecasting software hasn’t been around since the early 1900s, it’s been around long enough that it doesn’t seem unreasonable to expect some level of uniformity in the user interface by now.

Yet, while a semi-experienced driver can walk up to an Avis counter and be off cruising in a car model that they’ve never driven before within minutes, it would take weeks (if not months) for an experienced forecaster to become proficient in a software tool that they’ve never used before.

The difference, in my opinion, is that the automobile was designed from the start to be used by any person. Advanced degrees in chemistry, physics and engineering are needed to build a car, not operate it.

While no one expects that ‘any person’ can be a professional forecaster, it should not be necessary (nor is it economically feasible) for every person accountable for predicting demand to have a PhD in statistics to understand how to operate a forecasting system. The less understood the methods are for calculating forecasts, the easier it is for people on the front line of the process to dodge accountability for the results. Police hand out speeding tickets to drivers, not passengers.

Obviously, not all cars are alike. They compete on features, gadgets, styling, horsepower and price. But whatever new gizmos car manufacturers dream up, they can’t escape the simple, intuitive user interface that has been in place for over 100 years.

While I’m sure it’s an enriching intellectual exercise to fill pages with clouds of Greek symbols in the quest to develop the most sophisticated forecasting algorithm, wouldn’t it be nice if managing a demand forecast was as easy as driving a car?

Cross Purposes


A compromise is the art of dividing a cake in such a way that everyone believes he has the biggest piece. – Ludwig Erhard (1897 – 1977)


Last week, I had the chance to catch up with a good friend and colleague of many years. His name is Ian and he is the VP of Supply Chain at a mid-sized retailer (with vast prior experience at a large retailer).

After a couple of beers, he asked me point blank: “Forecasting and Replenishment. One job or two?”

Without hesitation, I responded “Two!” Then I proceeded to describe the differences in skill sets, business relationships and aptitudes between a Demand Planner and a Supply Planner.

“Okay”, he said. “Who does the Demand Planning group report to?”

That’s a very interesting question indeed.

The goal of the demand planning process is to create a sales forecast that is as accurate and unbiased as possible. In retail, the process of coming up with a forecast is typically a “joint effort” between the Category Manager and the Supply Chain Planner.

The Category Manager is measured primarily on sales. Therefore he/she has a tendency to make optimistic projections and bias the forecast upward, knowing that a higher forecast will drive the purchase of more inventory, thereby (theoretically) reducing the likelihood of lost sales.

The Supply Chain Planner is measured primarily on inventory turns. Therefore he/she has a tendency to “keep the forecast lean” to avoid carryover inventory after a promotion or selling season.

Therein lies the rub. If you give control of the forecasting process to the Merchandising group, Supply Chain feels like you’re “putting the fox in charge of the henhouse”. If the forecasting process falls under Supply Chain, Merchandising feels like they have no control over one of the key inputs that drive their businesses.

So should the Demand Planning function reside within Merchandising or Supply Chain?


Think about it. Neither group can be faulted for exhibiting behaviour on which they are rewarded. The problem is that they have competing objectives and the biases on either side can have a direct negative impact on the P&L.

That’s why the Demand Planning function needs to report to someone who has accountability for the entire P&L – either the CEO or, more practically, the CFO.

Remember the goal for the sales forecast: As accurate and as unbiased as possible. By having Demand Planners report into Finance, they can be effective mediators between the competing groups and would have “a seat at the table” when matters relating to the sales forecast are discussed (promotions, product launches, safety stock policies, etc.).

Their job would be to chair S&OP meetings with their Merchandising and Supply Chain counterparts and hear both sides of the story with an objective ear. Without either group having the direct ability to impose their biases on the forecast, they must instead make a convincing case to the Demand Planner to support their view.

As a consequence, the current “blurred lines” of accountability are made clear once and for all:

Merchandising: Stimulate demand and be accountable for sales and gross margin.

Demand Planning: Forecast demand and be accountable for forecast quality.

Supply Chain: Optimize supply and be accountable for inventory turns and availability.