It pays to be obvious, especially if you have a reputation for subtlety. – Isaac Asimov (1920-1992)
The sun came up today.
I’ve been tracking it daily in a spreadsheet for months. Please reach out if you’re interested in seeing my data. My suspicion is that you won’t.
In (belated) honour of Groundhog Day, the topic du jour is in stock reporting.
You come into the office on Monday morning, log in to your reporting/BI dashboard and display your company’s overall in stock report for the last 21 days, up to and including yesterday. As you look at it, you’re thinking about all of the conversations about in-stock you’ve had over the last 3 weeks and anticipate what today’s conversations will be about:
For the sake of argument, we’ll assume that there’s no major issue with how you calculate the in stock measure. Everyone understands it and there’s broad agreement that it’s a good approximation of the organization’s ability to have stock in the right place at the right time. (This isn’t always the case, but that’s a topic for another day).
It certainly looks like a bit of a roller coaster ride from one day to the next. That’s where applying some principles of statistical process control can help:
By summarizing the results over the last 21 days using basic statistical measures, we can see that the average in stock performance has been 92% and we can expect it to normally fluctuate between 86% (lower control limit) and 97% (upper control limit) on any given day.
In other words, everything that happens between the green dashed lines above is just the normal variation in the process. When you publish an in stock result that’s between 86% and 97% for any given day, it’s like reporting that “the sun came up today”.
Out of the last 21 days, the only one that’s potentially worth talking about is Day 11. Something obviously happened there that took the process out of control. (Even the so-called “downward trend” that you were planning to talk about today is just 3 or 4 recent data points that are within the control limits).
I used the word “potentially” as a qualifier there, because statistical process control was originally developed to help manufacturers isolate the causes of defects, so that they could fix the failing part of the process and prevent future defects with the same cause. In most cases, the causes (and therefore the fixes) were completely within their control.
Now when you think of “the process” that ultimately results in product being in front of a customer at a retail store, there are a LOT of things that could have gone wrong and many of them are not in the retailer’s control. In the example above, a trucker strike prevented some deliveries from reaching the stores, causing some of them to run out of stock. Everybody probably knew that in stock would suffer as soon as they heard about the strike. But there was really nothing anybody could have done about it and very little that can be done to prevent it from happening again.
So where does that leave us?
Common Cause Variation is not worth discussing, because that’s just indicative of the normal functioning of the process.
Special Cause Variation is often not worth discussing (in this context) unless you have complete control over the sub-process that failed and can implement a process change to fix it.
So what should we be looking at?
In terms of detecting true process problems that need to be discussed and addressed, you want to look for several observations in a row falling outside the established control limits, investigate what changed in the process and decide if you want to do something to correct it or just accept the “new normal”. For example:
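A simple screen for both signals might look like the sketch below: it flags individual days outside the limits, and separately flags runs of consecutive out-of-limit days that suggest a sustained process shift rather than a one-off event. The choice of three days as the minimum run length is an illustrative assumption, as is the made-up daily data:

```python
# Flag days outside the control limits, and runs of consecutive out-of-limit
# days that suggest a genuine process shift rather than a one-off event.
def out_of_control_runs(values, lcl, ucl, min_run=3):
    """Return (singles, runs): all day numbers outside the limits, plus
    any stretches of at least min_run consecutive out-of-limit days."""
    outside = [day for day, v in enumerate(values, start=1)
               if not lcl <= v <= ucl]
    runs, current = [], []
    for day in outside:
        if current and day == current[-1] + 1:
            current.append(day)
        else:
            if len(current) >= min_run:
                runs.append(current)
            current = [day]
    if len(current) >= min_run:
        runs.append(current)
    return outside, runs

# Day 11 is a one-off special cause; days 19-21 mimic a sustained shift.
daily = [0.92] * 10 + [0.80] + [0.91] * 7 + [0.83, 0.82, 0.84]
singles, runs = out_of_control_runs(daily, lcl=0.86, ucl=0.97)
print(singles)  # [11, 19, 20, 21]
print(runs)     # [[19, 20, 21]]
```

With this framing, the lone Day 11 reading shows up as a single out-of-limit point (investigate, but don’t redesign the process over it), while only the three-day stretch at the end would trigger the “what changed?” conversation.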
But you can also take a broader view and ask questions like:
- How can we get our average in stock up from 92% to 96%?
- How can we reduce the variation between the upper and lower limits to give our customers a more consistent experience?
By asking these questions, what you’re looking for are significant changes you can make to the process that will break through the current upper control limit and set a new permanent standard for how the process operates day to day:
But be warned: The things you need to do to achieve this are not for the faint of heart. Things like:
- Completely tearing apart how you plan stock flow from the supplier to the shelf and starting from scratch
- Switching from cheap overseas suppliers to ones who are closer and more responsive
- Refitting your distribution network to flow smaller quantities more frequently to the store
They all have costs and ancillary benefits to the operation beyond just improving the in stock measure, but this is the scale of change that’s needed to do it without just blowing your inventory holdings out of the water.
Reporting your in stock rates (or any other process output measure for that matter) regularly is a fine thing to do. Just make sure that you’re drawing the right conclusions about what the report is actually telling you.