If you don’t mind haunting the margins, I think there is more freedom there. – Colin Firth
A couple of months ago, I wrote a piece called Employing the Law of Large Numbers in Bottom Up Forecasting. The morals of that story were fourfold:
- That when sales at item/store level are intermittent (fewer than 52 units per year), a proper sales pattern at that level can't be determined from the demand data at that level alone.
- That any retailer has a sufficient percentage of slow selling item/store combinations that the problem simply can’t be ignored in the planning process.
- That using a multi-level, top-down approach to developing properly shaped forecasts in a retail context is fundamentally flawed.
- That the Law of Large Numbers can be used in a store centric fashion by aggregating sales across similar items at a store only for the purpose of determining the shape of the curve, thereby eliminating the need to create any forecasts above item/store level.
A high-level explanation of the Profile Based Forecasting approach developed by Darryl Landvater (though not dissimilar to what many retailers had been doing for years with systems like INFOREM and various homegrown solutions) was presented as the antidote to this problem. Oh, and by the way, it works fabulously well, even with such a low level of "sophistication" (i.e. unnecessary complexity).
But being able to shape a forecast for intermittent demands without using top-down forecasting is only one aspect of the slow seller problem. The objective of this piece is to look more closely at the implications of intermittent demands on replenishment.
The Bunching Problem
Regardless of how you provide a shape to an item/store forecast for a slow selling item (using either Profile Based Forecasting or the far more cumbersome and deeply flawed top-down method), you are still left with a forecasted stream of small decimal numbers.
In the example below, the shape of the sales curve cannot be determined using only sales history from two years ago (blue line) and the most recent year (orange line), so the pattern for the forecast (green dashed line) was derived from an aggregation of sales of similar items at the same store and multiplied by the selling rate of the item/store itself (in this case 13.5 units per year):
You can see that the forecast indeed has a defined shape – it’s not merely a flat line that would be calculated from intermittent demand data with most forecasting approaches. However, when you multiply the shape by a low rate of sale, you don’t actually have a realistic demand forecast. In reality, what you have is a forecast of the probability that a sale will occur.
Having values to the right of the decimal in a forecast is not a problem in and of itself. But when the value to the left of the decimal is a zero, it can create a huge problem in replenishment.
That's because replenishment calculations always operate in discrete units and can't tell the difference between a forecast of true demand and a forecast of the probability of a sale.
Using the first 8 weeks of the forecast calculated above, you can see how time-phased replenishment logic will behave:
The store sells 13 to 14 units per year, has a safety stock of 2 units and 2 units in stock (a little less than 2 months of supply). By all accounts, this store is in good shape and doesn’t need any more inventory right now.
However, the replenishment calculation is being told that 0.185 units will be deducted from inventory in the first week, which will drive the on hand below the safety stock. An immediate requirement of 1 unit is triggered to ensure that doesn’t happen.
Think of what that means. Suppose you have 100 stores in which the item is slow selling and the on hand level is currently sitting at the safety stock (not an uncommon scenario in retail). Because small decimal forecasts trigger immediate requirements at all of those stores, the DC needs to ship out 100 pieces to support sales of fewer than 20 pieces at store level – demand has been distorted by 500%.
Now, further suppose that this isn’t a break-pack item and the ship multiple to the store is an inner pack of 4 pieces – instead of 100 pieces, the immediate requirement would be 400 pieces and demand would be distorted by 2,000%!
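The behaviour in this example is easy to reproduce. Below is a minimal sketch of time-phased replenishment logic operating on the decimal forecast; the function and parameter names are illustrative, not from any particular system:

```python
import math

def time_phased_orders(on_hand, safety_stock, weekly_forecast, order_multiple=1):
    """Project on hand forward week by week and trigger an order
    whenever the projection would dip below safety stock."""
    orders = []
    for demand in weekly_forecast:
        on_hand -= demand                  # forecast deducted from inventory
        order = 0
        if on_hand < safety_stock:         # projection dips below safety stock
            shortfall = safety_stock - on_hand
            order = math.ceil(shortfall / order_multiple) * order_multiple
            on_hand += order               # immediate requirement triggered
        orders.append(order)
    return orders

# The 0.185-unit forecast in Week 1 drives the projection below safety
# stock, so 1 whole unit is triggered immediately:
print(time_phased_orders(on_hand=2, safety_stock=2,
                         weekly_forecast=[0.185, 0.185, 0.308, 0.308]))  # [1, 0, 0, 0]
```

Calling the same function with `order_multiple=4` triggers 4 pieces at once for that same 0.185-unit forecast, which is how the distortion across 100 stores balloons to 2,000%.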
The Antidote to Bunching – Integer Forecasts
What’s needed to prevent bunching from occurring is to convert the forecast of small decimals (the probability of a sale occurring) into a realistic forecast of demand, while still retaining the proper shape of the curve.
This problem has been solved (likewise by Darryl Landvater) using simple accumulator logic with a random seed to convert a forecast of small decimals into a forecast of integers.
It works like this:
- Start with a random number between 0 and 1
- Add this random number to the decimal forecast of the first period
- Continue to add forecasts for subsequent periods to the accumulation until the value to the right of the decimal in the accumulation “tips over” to the next integer – place a forecast of 1 unit at each of these “tip-over” points
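The three steps above can be sketched as follows. This is an illustrative reconstruction of the accumulator idea, not Landvater's actual implementation:

```python
import random

def integerize(decimal_forecast, seed=None):
    """Convert a shaped forecast of small decimals into an integer
    forecast using accumulator logic with a random starting seed."""
    rng = random.Random(seed)
    accumulator = rng.random()       # start with a random number between 0 and 1
    integer_forecast = []
    for probability in decimal_forecast:
        accumulator += probability   # add each period's decimal forecast
        units = int(accumulator)     # did the accumulation "tip over"?
        accumulator -= units
        integer_forecast.append(units)  # 1 unit at each tip-over point, else 0
    return integer_forecast
```

Note that the total of the integer forecast always stays within one unit of the total of the decimal forecast, so the item/store's rate of sale is preserved along with the shape.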
Here’s our small decimal forecast converted to integers in this fashion:
Because a random seed is being used for each item/store, the timing of the first integer forecast will vary from one item/store to the next.
And because the accumulator uses the shaped decimal forecast, the shape of the curve is preserved. In faster selling periods, the accumulator will tip over more frequently and the integer forecasts will likewise be more frequent. In slower periods, the opposite is true.
Below is our original forecast after it has been converted from decimals to integers using this logic:
And when the requirements across multiple stores are placed back on the DC, they are not “bunched” and a more realistic shipment schedule results:
Stabilizing the Plans – Variable Consumption Periods
Just to stay grounded in reality, none of what has been described above (or, for that matter, in the previous piece Employing the Law of Large Numbers in Bottom Up Forecasting) improves forecast accuracy in the traditional sense. This is because, quite frankly, it’s not possible to predict with a high degree of accuracy the exact quantity and timing of 13 units of sales over a 52 week forecast horizon.
The goal here is not pinpoint accuracy (the logic does start with a random number after all), but reasonableness, consistency and ease of use. It allows for long tail items to have the same multi-echelon planning approach as fast selling items without having separate processes “on the side” to deal with them.
For fast selling items with continuous demand, it is common to forecast in weekly buckets, spread the weekly forecast into days for replenishment using a traffic profile for that location and consume the forecast against actuals to date for the current week:
In the example above, the total forecast for Week 1 is 100 units. By end of day Wednesday, the posted actuals to date totalled 29 units, but the original forecast for those 3 days was 24 units. The difference of -5 units is spread proportionally across the remainder of the week so as to keep the total forecast for the week at 100 units. The assumption is that you have higher confidence in the weekly total of 100 units than in the exact daily timing of when those 100 units will actually sell.
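The consumption arithmetic can be sketched as follows, assuming a list of daily forecasts for the week and the actuals posted so far (the names and the particular daily split are illustrative):

```python
def consume(daily_forecast, actuals_to_date):
    """Replace elapsed days with actuals and spread the difference
    proportionally across the remaining days, keeping the week's total."""
    elapsed = len(actuals_to_date)
    remaining = daily_forecast[elapsed:]
    # forecast-vs-actual difference for the days already posted (-5 in the example)
    diff = sum(daily_forecast[:elapsed]) - sum(actuals_to_date)
    remaining_total = sum(remaining)
    # spread the difference proportionally over the rest of the week
    adjusted = [f + diff * f / remaining_total for f in remaining]
    return list(actuals_to_date) + adjusted

# Week forecast totals 100; Mon-Wed forecast was 24 but actuals were 29
week = consume([8, 8, 8, 12, 14, 26, 24], [10, 9, 10])
```

The adjusted week still totals 100 units. For slow movers, the same logic would simply be applied over a longer consumption period rather than the days of a single week.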
For slow moving items, we would not even have confidence in the weekly forecasts, so consuming forecast against actual for a week makes no sense. However, there would still be a need to keep the forecast stable in the very likely event that the timing and magnitude of the actuals don’t match the original forecast. In this case, we would consume forecast against actuals on a less frequent basis:
The logic is the same, but the consumption period is longer to reflect the appropriate level of confidence in the forecast timing.
Controlling Store Inventory – Selective Order Release
Let’s assume for a moment a 1 week lead time from DC to store. In the example below, a shipment is planned in Week 2, which means that in order to get this shipment in Week 2, the store needs to trigger a firm replenishment right now:
Using standard replenishment rules that you would use for fast moving items, this planned shipment would automatically trigger as a store transfer in Week 1 to be delivered in Week 2. But this replenishment requirement is being calculated based on a forecast in Week 2 and as previously mentioned, we do not have confidence that this specific quantity will be sold in this specific week at this specific store.
When that shipment of 1 unit arrives at the store (bringing the on hand up to 3 units), it’s quite possible that you won’t actually sell it for several more weeks. And the overstock situation would be further exacerbated if the order multiple is greater than 1 unit.
This is where having the original decimal forecast is useful. Remember that, as a practical matter, the small decimals represent the probability of a sale in a particular week. This allows us to calculate a tradeoff between firming this shipment now or waiting for the sale to materialize first.
Let’s assume that choosing to forgo the shipment in Week 2 today means that the next opportunity for a shipment is in Week 3. In the example below, we can see that there is a 67.8% chance (0.185 + 0.185 + 0.308) that we will sell 1 unit and drop the on hand below safety stock between now and the next available ship date:
Based on this probability, would you release the shipment or not? The threshold for this decision could be determined based on any number of factors such as product size, cost, etc. For example, if an item is small and cheap, you might use a low probability threshold to trigger a shipment. If another slow selling item is very large and expensive, you might set the threshold very high to ensure that this product is only replenished after a sale drives the on hand below the safety stock.
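A sketch of such a release check, treating the decimal forecast as the probability of a sale (as the text does) and supplying the threshold per item; the names are illustrative:

```python
def should_release_now(decimal_forecast, weeks_to_next_opportunity, threshold):
    """Firm the planned shipment now only if the chance of a sale
    (and hence of dropping below safety stock) before the next
    shipping opportunity meets the item's threshold."""
    # sum the weekly probabilities up to the next available ship date
    chance_of_sale = sum(decimal_forecast[:weeks_to_next_opportunity])
    return chance_of_sale >= threshold

# 0.185 + 0.185 + 0.308 = a 67.8% chance of a sale before the Week 3 opportunity
release = should_release_now([0.185, 0.185, 0.308, 0.308], 3, threshold=0.50)
```

A small, cheap item might use a threshold of 0.25 and release this shipment; a large, expensive one might use 0.90 and wait for an actual sale to pull the on hand down first.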
Remember, the probabilities themselves follow the sales curve, so an order has a higher probability of triggering in a higher selling period than in a lower selling period, which would be the desired behaviour.
The point of all of this is that the same principles of Flowcasting (forecast only at the point of consumption, every item has a 52 week forecast and plan, only order at the lead time, etc.) can still apply to items on the long tail, so long as the planning logic you use incorporates these elements.