On Shelf Symbiosis (Robots Optional)

 

The cows shorten the grass, and the chickens eat the fly larvae and sanitize the pastures. This is a symbiotic relation. – Joel Salatin


Daily In Stock.

It’s the gold standard measure of customer service in retail. The inventory level for each item at each selling location is evaluated independently on a daily basis to determine whether or not you are “in stock” for that item at that store.

The criteria to determine whether or not you are “in stock” can vary (e.g. at least one unit on hand, enough to cover forecasted sales until the next shipment arrives, X% of minimum display stock covered, etc.), but the intent is the same: to develop a single, quantifiable metric that represents how well customers are being served (at least with regard to inventory availability).

One strength of this measure is that – unless you get crazy with conditions and filters – it’s relatively easy to calculate with available information. A simple version is as follows:

  • Collect nightly on hands for all item/locations where there is a customer expectation that the store should have stock at all times (e.g. currently active planogrammed items)
  • If there’s at least 1 unit of stock recorded, that item/location is “in stock” for that day. If not, that item/location is “out of stock” for that day.
  • Divide the number of “in stock” records by the number of item/locations in the population and that’s your quick and easy in stock percentage.
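For the code-inclined, here’s a minimal sketch of that calculation in Python (the field names and the one-unit threshold are purely illustrative):

```python
from typing import Iterable

def daily_in_stock_pct(on_hand_records: Iterable[dict], threshold: int = 1) -> float:
    """Quick and easy in stock %: the share of item/locations whose nightly
    on hand meets the threshold (default: at least 1 unit recorded)."""
    records = list(on_hand_records)
    if not records:
        return 0.0
    in_stock = sum(1 for r in records if r["on_hand"] >= threshold)
    return 100.0 * in_stock / len(records)

# Nightly snapshot for active planogrammed item/locations (field names are made up)
snapshot = [
    {"item": "A", "store": 101, "on_hand": 3},
    {"item": "A", "store": 102, "on_hand": 0},
    {"item": "B", "store": 101, "on_hand": 12},
]
print(f"{daily_in_stock_pct(snapshot):.1f}%")  # 66.7%
```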

By calculating this measure daily, it becomes less necessary to worry about selling rates in the determination. If an item/location is in stock with 2 units today, but the selling rate is 5 units per day, it stands to reason that the same item/location will be out of stock tomorrow. What’s important is not so much the pure efficacy of the measure as the fact that it’s evaluated daily and moving in the right direction.

Using this measure, people can picture the physical world the customer is seeing. If your in-stock is 94% at a particular store on a particular day, that means that 6% of the shelf positions in that store were empty, representing potential lost sales.

Here’s the problem, though: Customers don’t care about the percentage of the time that your digital stock records are >0 (or some other formula) – they want physical products on the shelf to buy.

That’s the major weakness of the in stock measure – in order to interpret it as a true customer service measure, the following (somewhat dubious) assumptions must be made:

  1. The units that the system says are in the store are actually physically in the store. You can deduct 5 points from your in stock just by making this assumption.
  2. Even if assumption #1 is true, you then need to assume that the inventory within the 4 walls of the store is in a customer accessible location where they would expect to find it.

That’s where shelf scanning robots come in – quiet, unassuming sentinels traversing the aisles to find those empty shelves and alert staff to take action.

As cool and futuristic as that notion is, it must be noted that this is still a reactive approach, no matter how quickly the holes can be spotted.

The real question is: Why did the shelf become empty in the first place?

Let’s consider that in the context of our 2 assumptions:

  1. It could very well be that a shortage of stock is the result of shitty planning. But for the sake of argument, let’s say that you have the most sophisticated and responsive planning process and system in the world. If there is no physical stock anywhere in the store, but the planning system is being told that the store is holding 12 units, what exactly would you expect it to do? Likewise, if there is “extra” physical stock in the store that’s not accounted for in the on hand balance, the replenishment system will be sending more before it’s actually needed, which results in a different set of problems – more on that later.
  2. To the extent that physical stock exists in the 4 walls of the store (whether the system inventory is accurate or not) and it is not in a selling location, the general consensus is that this is a stock management issue within the store (hence the development of robots to more quickly and accurately find the holes).

While the use of a daily recalculating planning process is the best way to achieve high levels of in stock, more needs to be done to ensure that the in stock measure more closely resembles on shelf availability, which is what the customer actually sees.

Instituting a store inventory accuracy program to find and permanently fix the process failures that cause mismatches between the stock records and the physical goods to occur in the first place will make the in stock measure more reliable from a “what’s in the 4 walls” perspective.

Flowing product directly from the back door to the shelf location as a standard operating procedure gives confidence that any stock that is within the store is likely on the shelf (and, ideally, only on the shelf). This goes beyond just speeding up receiving and putaway (although that could be a part of it). It’s as much about lining up the space planning, replenishment planning and physical flow of goods such that product arrives at the store in quantities that can fit on the shelf upon arrival. This really isn’t super sophisticated stuff:

  1. From the space plan, how much capacity (in units) is allocated to the item at the store? How much of that capacity is “reserved” by the minimum display quantity?
  2. Is the number of units in a typical shipment less than the remaining shelf space after the minimum display quantity is subtracted from the shelf capacity?
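If it helps to see it in code, here’s what that check amounts to – a rough sketch with made-up field names, nothing more:

```python
def shipment_fits_on_shelf(shelf_capacity_units: int,
                           min_display_qty: int,
                           ship_multiple_units: int) -> bool:
    """Question 2: does a typical shipment fit in the shelf space left over
    after the minimum display quantity is reserved?"""
    remaining_space = shelf_capacity_units - min_display_qty
    return ship_multiple_units <= remaining_space

# 24-unit facing, 8 units reserved as minimum display, case pack of 12
print(shipment_fits_on_shelf(24, 8, 12))  # True  -> 12 fits in the remaining 16
print(shipment_fits_on_shelf(24, 8, 24))  # False -> part of the case heads to the back room
```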

If the answer to question 2 is “no”, then you’re basically guaranteeing that at least some of the inbound stock is going to go onto an overhead or stay in the back room. The shelf might be filled up shortly after the shipment arrives, but you can’t count on the replenishment system to send more when the shelf is low a few weeks later, because the backroom or overhead stock is still in the store, leading to potential holes.

Solving this problem requires thinking about the structural policies that allocate space and flow product into the store:

  • Is enough shelf space allocated to this item based on the demand rate?
  • Are shipping multiples/delivery frequency suitable to the demand rate and shelf allocation?

Finding this balance on as many items as possible serves to ensure – structurally – that any product in the store exists briefly on the receiving dock, then only resides in the selling location after that (similar to a DC flowthrough operation with no “putaway” into storage racking).

As with literally everything in retail, the number 100% doesn’t exist – it’s highly unlikely that you’ll be able to achieve this balance for all items in all locations at all times. But the more this becomes a standard criterion for allocating space and setting replenishment policies, the more you narrow the gap between “in stock” and “on the shelf”.

So if the three ingredients to on shelf availability are 1) continuous daily replanning, 2) maintaining accurate inventory records and 3) organizing the supply chain and space plans to flow product directly to the shelf while avoiding overstock, then any work done in any of these areas in isolation will definitely help.

Taken together, however, they work symbiotically to provide exponential value in terms of customer service:

  • More accurate inventory balances means that the right product is flowing into the back of the store when it’s needed to fulfill demand, decreasing the potential for holes on the shelf due to stockout.
  • Stocking product only on the shelf without any overhead/backroom stock keeps it all in one place so that it doesn’t end up misplaced or miscounted, increasing inventory accuracy.
  • Improved inventory accuracy increases the likelihood that when a shipment arrives, the free shelf space that’s expected to be there is actually there when the physical stock arrives.

The (stated) intent of utilizing shelf scanning robots is to help humans more effectively keep the shelves stocked, not to make them obsolete.

I think it a nobler goal to design from end-to-end for the express purpose of maximizing on shelf availability as part of day in, day out execution.

And obsolete those robots.

Store Inventory Accuracy: Getting It Right

 

A man who has committed a mistake and doesn’t correct it, is committing another mistake. – Confucius (551BC – 479BC)


 

A couple months ago, I wrote a piece entitled What Everybody Gets Wrong About Store Inventory Accuracy. Here it is in a nutshell:

  • Retailers are pretty terrible at keeping their store inventory accurate
  • It’s costing them a lot in terms of sales, customer service and yes, shrink
  • The problem is pervasive and has not been properly addressed due to some combination of willful blindness, misunderstanding and fear

I think what mostly gives rise to the inaction is the assumption that the only way to keep inventory accurate is to expend vast amounts of time and energy on counting.

Teaching people how to bandage cuts, use eyewash stations or mend broken bones is not a workplace health and safety program. Yes, those things would certainly be part of the program, but the focus should be far more heavily weighted to prevention, not in dealing with the aftermath of mishaps that have already occurred.

In a similar vein, a store cycle counting program is NOT an inventory accuracy program!

A recent trend I’ve noticed among retailers is to mine vast quantities of sales and stock movement data to predict which items in which stores are most likely to have inventory record discrepancies at any given time. Those items and stores are targeted for more frequent counting so as to minimize the duration of the mismatch. Such programs are often described as being “proactive”, but how can that be so if the purpose of the program is still to correct errors in the stock ledger after they have already happened?

Going back to the workplace safety analogy, this is like “proactively” locating an eyewash station near the key cutting kiosk. That way, the key cutter can immediately wash his/her eyes after getting metal shavings in them. Perhaps safety glasses or a protective screen might be a better idea.

Again, what’s needed is prevention – intervening in the processes that cause the inaccurate records in the first place.

Think of the operational processes in a store that adjust the electronic stock ledger on a daily basis:

  • Receiving
  • POS Scanning
  • Returns
  • Adjustments for damage, waste, store use, etc.

Two or more of those processes touch every single item in every single store on a fairly frequent basis. To the extent that flaws exist in those processes that result in the wrong items and quantities being recorded in the stock ledger (or even the right items and quantities at the wrong time), then any given item in any given store at any given time can have an inaccurate inventory balance without anyone knowing about it or why until it is discovered long after the fact.

By the same token, fixing defects in a relatively small number of processes can significantly (and permanently) improve inventory accuracy across a wide swath of items.

So how do you find these process defects?

At the outset, it may not be as difficult as you think. In my experience, a 2 hour meeting with anyone who works in Loss Prevention will give you plenty of things to get started on. Whether it’s an onerous and manual receiving process that is prone to error, poor shelf management or lackadaisical behaviour at the checkout, identifying the problems is usually not the hard part – it’s actually making the changes necessary to begin to address them (which could involve system changes, retraining, measurement and monitoring or all of the above).

If your organization actually cares about keeping inventory records accurate (versus fixing them long after they have been allowed to degrade), then there’s nothing stopping you from working on those things immediately, before a single item is ever counted (see the Confucius quote at the top). If not, then I hate to say it but you’re doomed to having inaccurate inventory in perpetuity (or at least until someone at or near the top does start caring).

Tackling some low hanging fruit is one thing, but to attain and sustain high levels of accuracy – day in and day out – over the long term, rooting out and correcting process defects needs to become part of the organization’s cultural DNA. The end goal is one that can never be reached – better every day.

This entails moving to a three pronged approach for managing stock:

  • Counting with purpose and following up (Control Group Cycle Counting)
  • Keeping the car between the lines on the road (Inspection Counting)
  • Keeping track of progress (Measurement Counting)

Control Group Cycle Counting

The purpose of this counting approach is not to correct inventory balances that have become inaccurate. Rather, it’s to detect the process failures that cause discrepancies in the first place.

It works like this:

  1. Select a sample of items that is representative of the entire store, yet small enough to detail count in a reasonable amount of time (for the sake of argument, let’s say that’s 50 items in a store). This sample is the control group.
  2. Perform a highly detailed count of the control group items, making sure that every unit of stock has been located. Adjust the inventory balances to set the baseline for the first “perfect” count.
  3. One week later, count the exact same items in detail all over again. Over such a short duration, the expectation is that the stock ledger should exactly match the number of units counted. If there are any discrepancies, whatever caused the discrepancy must have occurred in the last 7 days.
  4. Research the transactions that have happened in the last week to find the source of the error. If the discrepancy was 12 units and a goods receipt for a case of 12 was recorded 3 days ago, did something happen in receiving? If the system record shows 6 units but there are 9 on the shelf, was the item scanned once with a quantity override, even though 4 different items may have actually been sold? The point is that you’re asking people about potential errors that have recently happened and will have a better chance of successfully isolating the source of the problem while it’s in everyone’s mind. Not every discrepancy will have an identifiable cause and not every discrepancy with an identifiable cause will have an easy remedy, but one must try.
  5. Determine the conditions that caused the problem to occur. Chances are, those same conditions could be causing problems on many other items outside the control group.
  6. Think about how the process could have been done differently so as to have avoided the problem to begin with and trial new procedure(s) for efficiency and effectiveness.
  7. Roll out new procedures chainwide.
  8. Repeat steps 3 to 7 forever (changing the control group every so often to make sure you continue to catch new process defects).
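For illustration, the arithmetic behind steps 3 and 4 is trivial – it’s the follow-up research that takes the effort. A rough sketch (the SKU numbers and field names are invented):

```python
def weekly_control_group_check(system_on_hand: dict, counted: dict) -> dict:
    """Compare this week's detailed count of the control group to the stock
    ledger. Any non-zero variance was caused by something in the last 7 days."""
    return {
        item: counted[item] - system_on_hand.get(item, 0)
        for item in counted
        if counted[item] != system_on_hand.get(item, 0)
    }

ledger = {"SKU001": 12, "SKU002": 6, "SKU003": 4}
count  = {"SKU001": 12, "SKU002": 9, "SKU003": 0}
print(weekly_control_group_check(ledger, count))
# {'SKU002': 3, 'SKU003': -4} -> research last week's receipts, sales and adjustments for these
```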

Eight simple steps – what could be easier, right?

Yes, this process is somewhat labour intensive.
Yes, this requires some intestinal fortitude.
Yes, this is not easy.

But…

How much time does your sales staff spend running around on scavenger hunts looking for product that “the system says is here”?

How much money and time do you waste on emergency orders and store-to-store transfers because you can’t pick an online order?

How long do you think your customers will be loyal if a competitor consistently has the product they want on the shelf or can ship it to their door in 24 hours?

Inspection Counting

In previous pieces written on this topic, I’ve referred to this as “Process Control Counting” – so coined by Roger Brooks and Larry Wilson in their book Inventory Record Accuracy – which they describe as being “controversial in theory, but effective in practice”.

We’ve found that moniker to be not very descriptive and potentially confusing to people who are not well versed in inventory accuracy concepts (i.e. every retailer we’ve encountered in the last 25 years).

The Inspection Counting approach is designed to quickly identify items with obvious large discrepancies and correct them on the spot.

Here’s how it works:

  1. Start at the beginning of an aisle and look up the first item using a handheld scanner that can instantly display the inventory balance.
  2. Quickly scan the shelf and determine whether or not it appears the system balance is correct.
  3. If it appears to be correct, move on to the next item. If there appears to be a large discrepancy, do some simple investigation to see if it can be located – if not, then perform a count, adjust the balance and move on.

It may seem like this approach is not very scientific and subject to interpretation and judgment on the part of the person doing the inspection counting. That’s because it is. (That’s the “controversial” part).

But there are clear advantages:

  • It is fast – Every item in the store can be inspection counted every few weeks.
  • It is efficient – The items that are selected to be counted are items that are obviously way off (which are the ones that are most important to correct).
  • It is more proactive – “Hole scans” performed today quite often reveal major inventory errors that occurred days or weeks ago and were only discovered when the shelf went empty – bad news early is better than bad news late.

No matter how many process defects are found and properly addressed through Control Group Counting, there will always be theft and honest mistakes. Inspection Counting provides a stopgap to ensure that no inventory record goes unchecked for a long period of time, even when there are thousands of items to cycle through.

As part of an overall program underpinned by Control Group Counting and process defect elimination, the number of counts triggered by an inspection (and the associated time and effort) should decrease over time as fewer defects cause the discrepancies in the first place.

Measurement Counting

The purpose of this counting approach is to use sampling to estimate the accuracy of the population based on the accuracy of a representative group.

It works like this:

  1. Once a month, select a fresh sample of items that is representative of the entire store, yet small enough to detail count in a reasonable amount of time, similar to how a control group is selected. This sample is the measurement group.
  2. Perform a highly detailed count of the measurement group items, making sure that every unit of stock has been located.
  3. Post the results in the store and discuss them in executive meetings every month. Is accuracy trending upward or downward? Do certain stores need some additional temporary support? Have new root causes been identified that need to be addressed?
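A minimal sketch of the measurement itself, assuming a +/- 5% tolerance for what counts as “accurate” (the tolerance and record layout are illustrative, not prescriptive):

```python
def record_accuracy_pct(records: list[tuple[int, int]], tolerance: float = 0.05) -> float:
    """records: (system_on_hand, counted_qty) pairs for the measurement group.
    A record is 'accurate' if the count is within +/- tolerance of the system balance."""
    def is_accurate(system: int, counted: int) -> bool:
        if system == 0:
            return counted == 0
        return abs(counted - system) / system <= tolerance
    hits = sum(1 for system, counted in records if is_accurate(system, counted))
    return 100.0 * hits / len(records)

sample = [(10, 10), (6, 9), (4, 4), (12, 0), (3, 3)]
print(f"{record_accuracy_pct(sample):.0f}% of sampled records accurate")  # 60%
```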

Whether retailers like it or not, inventory accuracy is a KPI that customers are measuring anecdotally and it’s colouring their viewpoint on their shopping experience. Probably a good idea to actually measure and report on it properly, right?

If you’re doing a good job detecting and eliminating process defects that cause inaccurate inventory and continuously making corrections to erroneous records, then this should be reflected in your measurement counts over time. Who knows? If you can demonstrate a high level of accuracy on a continuously changing representative sample, maybe you can convince the Finance and Loss Prevention folks to do away with annual physical counts altogether.

What Everybody Gets Wrong About Store Inventory Accuracy

 

Don’t build roadblocks out of assumptions. – Lorii Myers


Retailers are not properly managing the most important asset on their balance sheets – and it’s killing customer service.

I analyzed sample data from 3 retailers who do annual “wall to wall” physical counts. There were 898,526 count records in the sample across 92 stores. For each count record (active items only on the day of the count), the system on hand balance before the count was captured along with the physical quantity counted. The products in the sample include hardware, dry grocery, household consumables, sporting goods, basic apparel and all manner of specialty hardlines items. Each of the retailers reports annual shrink percentages that are in line with industry averages.

A system inventory record is considered to be “accurate” if the system quantity is adjusted by less than +/- 5% after the physical count is taken. Here are the results:

So 54% of inventory records were accurate within a 5% tolerance on the day of the count. Not good, right?

It gets worse.

For 19% of the total records counted (that’s nearly 1 in every 5 item/locations), the adjustment changed the system quantity by 50% or more!

Wait, there’s more!

In addition, I calculated simple in-stock measures before and after the count as follows:

Reported In Stock: Percentage of records where the system on hand was >0 just before the count

Actual In Stock: Percentage of records where the counted quantity was >0 just after the count

Here are the results of that:

Let’s consider what that means for a moment. If you ran an in-stock report based on the system on hand just before those records were counted, you would think that you’re at 94%. Not world class, but certainly not bad. However, once the lie is exposed on that very same day, you realize that the true in-stock (the one your customer sees) is 5% lower than what you’ve been telling yourself.

Sure, this is a specific point in time and we don’t know how long it took the inventory accuracy to degrade for each item/location, but how can you ever look at an in-stock report the same way again?

Further, when you look at it store by store, it’s clear that stores with higher levels of inventory accuracy experience a lesser drop in in-stock after the records are counted. Each of the blue dots on the scatterplot below represents one of the 92 stores in the sample:


A couple of outliers notwithstanding, it’s clear that the higher on hand accuracy is, the more truthful the in-stock measure is and vice-versa.

Now let’s do some simple math. A number of studies have consistently shown that an out-of-stock results in a lost sale for the retailer about 1/3 of the time. Assuming the 5% differential between reported and actual in-stock is structural, this means that having inaccurate inventory records could be costing retailers 1.67% of their topline sales. This is in addition to the cost of shrink.

So, a billion dollar retailer could be losing almost $17 million per year in sales just because of inaccurate on hands and nothing else.
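The back-of-envelope arithmetic, spelled out (all numbers taken from the paragraphs above):

```python
reported_in_stock = 0.94           # what the system said just before the counts
actual_in_stock   = 0.89           # what the counts revealed
lost_sale_rate    = 1 / 3          # share of out-of-stocks that become lost sales
annual_sales      = 1_000_000_000  # a billion dollar retailer

structural_gap = reported_in_stock - actual_in_stock        # 0.05
lost_sales_pct = structural_gap * lost_sale_rate            # ~1.67% of topline sales
print(f"~${annual_sales * lost_sales_pct:,.0f} per year")   # ~$16,666,667 per year
```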

Let’s be clear, this isn’t like forecast accuracy where you are trying to predict an unknown future. And it’s not like the myriad potential flow problems that can arise and prevent product from getting to the stores to meet customer demands. It is an erosion in sales caused by the inability to properly keep records of assets that are currently observable in the physical world.

So why hasn’t this problem been tackled?

Red Herring #1: Our Shrink Numbers Are Good

Whenever we perform this type of analysis for a retailer, it’s not uncommon for people to express incredulity that their store inventory balances are so inaccurate.

“That can’t possibly be. Our shrink numbers are below industry average.”

To that, I ask two related questions:

  1. Who gives a shit about industry averages?
  2. What about your customers?

In addition to the potential sales loss, inaccurate on hands can piss customers off in many other ways. For example, if it hasn’t happened already, it won’t be long until you’re forced by competition to publish your store on hand balances on your website. What if a customer makes a trip to the store or schedules a pickup order based on this information?

The point here is that shrink is a financial measure; on hand accuracy is a customer service measure. Don’t assume that “we have low shrink” means the same thing as “our inventory management practices are under control”.

Red Herring #2: It Must Have Been Theft

It’s true that shoplifting and employee theft is a problem that is unlikely to be completely solved. Maybe one day item level RFID tagging will become ubiquitous and make it difficult for product to leave the store without being detected. In the meantime, there’s a limit to what can be done to prevent theft without either severely inconveniencing customers or going bankrupt.

But are we absolutely sure that the majority of inventory shrinkage is caused by theft? Using the count records mentioned earlier, here is another slice showing how the adjustments were made:

From the second column of this table, you can see that for 29% of all the count transactions, the system inventory balances were decreased by at least 1 unit after the count.

Think about that next time you’re walking the aisles in a store. If you assume that theft is the primary cause for negative adjustments, then by extension you must also believe that one out of every 3 unique items you see on the shelves will be stolen by someone at least once in the course of a year – and it could be higher than that if an “accurate” record on the day of the count was negatively adjusted at other times throughout the year. I mean, maybe… seems a bit much, though, don’t you think?

Now let’s look at the first column (count adjustments that increase the inventory balance). If you assume that all of the inventory decreases were theft, then – using the same logic – you must also believe that for one out of every 5 unique items, someone is sneaking product into the store and leaving it on the shelves. I mean, come on.

Perhaps there’s more than theft going on here.

Red Herring #3: The Problem Is Just Too Big

Yes, it goes without saying that when you multiply out the number of products and locations in retail, you get a large number of individual inventory balances – it can easily get into the millions for a medium to large sized retailer. “There’s no way that we can keep that many inventory pools accurate on a daily basis,” the argument goes.

But the flaw in this thinking stems from the (unfortunately quite popular) notion that the only way to keep inventory records accurate is through counting and correcting. The problem with this approach (besides being highly labour intensive, inefficient and prone to error) is that it corrects errors that have already happened and does not address whatever process deficiencies caused the error in the first place.

This is akin to a car manufacturer noticing that every vehicle rolling off the assembly line has a scratch on the left front fender. Instead of tracing back through the line to see where the scratch is occurring, they instead just add another station at the end with a full time employee whose job it is to buff the scratch out of each and every car.

The problem is not about the large number of inventory pools, it’s about the small number of processes that change the inventory balances. To the extent that inventory movements in the physical world are not being matched with proper system transactions, a small number of process defects have the potential to impact all inventory records.

When your store inventory records don’t match the physical stock on hand, it must necessarily be a result of one of the following processes:

  • Receiving: Is every carton being scanned into the store’s inventory? Do you “blind receive” shipments from DCs or suppliers that have not demonstrated high levels of picking accuracy for the sake of speed?
  • POS Scanning and Saleable Returns: Do cashiers scan each and every individual item off the belt or do they sometimes use the mult key for efficiency? If an item is missing a bar code and must be keyed under a dummy product number, is there a process to record those circumstances to correct the inventory later?
  • Damage and Waste: Whenever a product is found damaged or expired, is it scanned out of the on hand on a nightly basis?
  • Store Use, Transformations, Transfers: If a product is taken from the shelf for use within the store (e.g. paper towels to clean up a mess) or used as a raw material for another product (e.g. flour taken from the pantry aisle to use in the bakery), is it stock adjusted out? Are store-to-store transfers or DC returns scanned out of the store’s inventory correctly before they leave?
  • Counting: Before a stock record is changed because of a count, are people making sure that they’ve located and counted all units of that product within the store or do they just “pencil whip” based on what they see in front of them and move on?
  • Theft: Are there more things that can be done within the store to minimize theft? Do you actively “transact” some of your theft when you find empty packaging in the aisle?

So how can retailers finally make a permanent improvement to the accuracy of their store on hands?

  • They need to actually care about it (losing 1-2% of top line sales should be a strong motivator)
  • They need to measure store on hand accuracy as a KPI
  • They need an approach whereby process failures that cause on hand errors can be detected and addressed
  • They need an efficient approach for finding and correcting discrepancies as the process issues are being fixed

Stay tuned for more on that.

Jimmy Crack Corn

 

Science may have found a cure for most evils; but it has found no remedy for the worst of them all – the apathy of human beings. – Helen Keller (1880-1968)


On hand accuracy.

It has been a problem ever since retailers started using barcode scanning to maintain stock records in their stores.

It’s certainly not the first time we’ve written on this topic, nor is it likely to be the last.

The real question is: Why is this such a pervasive problem?

I think I may have the answer: Nobody cares.

Okay, maybe that’s a little harsh. It’s probably more fair to say that there is a long list of things that retailers care about more than the accuracy of their on hands.

I’m not being judgmental, nor am I trying to invoke shame. I’m just making a dispassionate observation based on 25 years’ experience working in retail.

Whatever you think of the axiom “what gets measured gets managed” (NOT a quote from Peter Drucker), I would argue that it is largely true.

By that yardstick, I have yet to come across a single retailer who routinely measures the accuracy of their on hands as a KPI, even though – if you think about it – it wouldn’t be that difficult to do. Just send out a count list of a random sample of SKUs each month to every store and have them do a detailed count. Either the system record matches what’s physically there or it doesn’t.
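To show just how simple the mechanics could be, here’s a sketch of drawing that monthly sample (purely illustrative):

```python
import random

def monthly_count_list(active_skus: list[str], sample_size: int = 50,
                       seed: int | None = None) -> list[str]:
    """Draw a fresh random sample of SKUs for a store to detail count this month.
    Either the system record matches what's physically there or it doesn't."""
    rng = random.Random(seed)
    return rng.sample(active_skus, min(sample_size, len(active_skus)))

skus = [f"SKU{i:05d}" for i in range(20_000)]
print(monthly_count_list(skus, sample_size=5, seed=42))
```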

Measuring forecast accuracy (the ability to predict an unknown future) seems to take up a lot more time and attention than inventory accuracy (the ability to keep a stock record in synch with a quantity that exists in the physical world right now), but the accuracy of on hand records has a much greater influence on the customer experience than forecast accuracy – by a very wide margin.

And on hand accuracy will only become more important as retailers expand customer delivery options to include click and collect and ship from store. Even “old school” shoppers (those who just want to go to the store to buy something and leave) will be expecting to check online to see how much a store has in stock before getting in their cars.

It’s quite clear that retailers should care about this more, so why don’t they?

Conflating Accuracy and Shrink

After a physical stock count, positive and negative on hand variances are costed and summed up. If the value of the system on hand drops by less than 2% of sales after the count adjustments are made, this is deemed to be a good result when compared to the industry as a whole. The conclusion is drawn that the management of inventory must therefore be under control and that on hand records must not be that far off. The problem with shrink is that the positive and negative errors can still be large in magnitude, but they cancel each other out, thereby hiding significant issues with on hand record accuracy (by item/location, which is what the customer cares about). Shrink is a measure for accountants, not customers.
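A tiny made-up example shows how offsetting errors can hide behind a “good” shrink number:

```python
# Five item/store records from a count: (system on hand, physical count), all costed at $10/unit
records = [(10, 4), (2, 8), (6, 6), (0, 5), (9, 4)]
unit_cost = 10.0

net_adjustment_value = sum((counted - system) * unit_cost for system, counted in records)
accurate_records = sum(1 for system, counted in records if system == counted)

print(net_adjustment_value)                 # 0.0 -> shrink looks great
print(f"{accurate_records} of {len(records)} records were actually accurate")  # 1 of 5
```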

Store Replenishment is Manual Anyhow

It’s still common practice for many retailers to use visual shelf reviews for store replenishment. Department managers walk through the aisles with RF scanning guns, scan the shelf tags for items they want to order and use an app on the gun to place replenishment orders. Most often, this process is used when perpetual inventory capabilities don’t exist at store level, but it’s not uncommon to see it also being used even if stores have system calculated on hand balances. Why? Because there isn’t enough trust in the accuracy of the on hands to use them for automated replenishment. Hmmm…

It’s Perceived to be an Overwhelming Problem

It’s certainly true that the number of item/store inventory pools that need to be kept accurate can get quite large. The predominant thinking in retail is that the only way to make inventory records more accurate is to count each item more frequently. Do the math on that and many retailers conclude that the labour costs to maintain accurate inventory records will drive them into bankruptcy.

The problem with this viewpoint is that frequent counting and correcting isn’t really maintaining accurate records – it’s continuously fixing inaccurate records. A different way to look at it is not by the sheer volume of item/location records to be managed, but rather by the number of potential process failure points that could affect any given item in any given location.

Think about an auto assembly line where every finished car that rolls off has a 2 inch scratch on the right front fender. One option to address this problem is to set up an additional station at the end of the line to buff out the scratch on every single car that rolls through. This is analogous to the “count and correct” approach to managing inventory records – highly labour intensive and only addresses the problem after it has already occurred.

Another option would be to trace back through the process until you find where the scratch is occurring and why. Maybe there’s a bolt sticking out from a pass-through point that’s causing the scratch. Cut off the end of the bolt, no more scratches. Addressing this one point of process failure permanently resolves the root cause of the defect for every car that passes through the process.

Going back to our store on hand accuracy example, a retailer may have thousands or millions of item/store combinations, but the number of processes (potential points of failure) that change on hand balances is limited:

  • DC picking
  • Store receiving
  • Stock writedowns for damage or waste
  • Counts
  • Sales and saleable returns

For retailers who have implemented store perpetual inventory, each of these processes that affect the movement of physical stock has a corresponding transaction that changes the on hand balance accordingly. How carefully are those transactions being recorded for accuracy (versus speed)?

Are DC shipments regularly audited for accuracy? Do stores “blind receive” shipments only from highly reliable sources? Are there nightly procedures to scan out damaged or unsaleable goods? Is the store well organized so that all units of a particular item can be easily found before a physical count is done? Is every sale being properly scanned at the checkout?

Of course, the elephant (or maybe scapegoat?) in the room is theft. After all, there is no corresponding transaction for those stock movements. While there are certainly things that can be done to reduce theft, I consider it to be a self evident fact that it won’t be eliminated completely anytime soon.

But before you assume that every negative stock adjustment “must have been theft”, are you totally certain that all of the other processes are being transacted properly?

Does it seem reasonable to assume that for every single unique product whose on hand balance decreases after a physical count (typically 20-30% of all products in a store) all of those units were stolen since the last count?

And if we do assume that theft is the culprit in the vast majority of those cases, then what are we to assume about products whose on hand balances increase after being counted (typically 10-20% of all products in a store)? Are customers or employees sneaking items into the store, leaving them on the shelves and secretly leaving without being detected?

Setting theft aside, there’s still plenty that can be done by thoroughly examining and addressing the potential points of process failure that cause on hands to become inaccurate in the first place, while at the same time reducing the amount of time and money being spent on “counting and correcting”.

What’s Step 1 on this path?

You need to care.

What’s Good for the Goose

 

What’s good for the goose is good for the gander – Popular Idiom


Thinking in retail supply chain management is still evolving.

Which is a nicer way of saying that it’s not very evolved.

Don’t get me wrong here. It wasn’t that long ago that virtually no retailer even had a Supply Chain function. When I first started my career, retailers were just beginning to use the word “logistics” – a military term, fancy that! – in their job descriptions and org charts. At the time it was an acknowledgement that sourcing, inbound transportation, distribution and outbound transportation were all interrelated activities, not stand alone functions.

A positive development, but “logistics” was really all about shipping containers, warehouses and trucks – the mission ended at the store receiving bay.

Time passed and barcode scanning at the checkouts became ubiquitous.

More time passed and many medium to large sized retailers (though by no means a large majority) implemented scan based receiving and perpetual inventory balances at stores in a centralized system. This was followed quickly by computer assisted store ordering and with that came the notion that store replenishment could be a highly automated, centralized function.

Shortly thereafter, retailers began to recognize that they needed more than just operational logistics, but true supply chain management – covering all of the planning and execution processes that move product from the point of manufacture to the retail shelf.

In theory, at least.

I say that, because even though most retailers of size have adopted the supply chain management vernacular and have added Supply Chain VP roles to their org structures, over the years I’ve heard some dubious “supply chain” discussions that tend to suggest that thinking hasn’t fully evolved past “trucks and warehouses”. Some of you reading this now may find yourselves falling into this train of thought without even realizing it.

So how do you know if your thinking is drifting away from holistic supply chain thinking toward myopic logistics centric thinking?

An approach that we use is to apply the Goose and Gander Rule to these situations. If you find yourself advocating behaviour in the middle of the supply chain that seems nonsensical if applied upstream or downstream, then you’re not thinking holistically.

Here are a few examples:


The warehouse is overstocked. We can’t sell it from there, so let’s push it out to the stores.


At a very superficial level, this argument makes some sense. It is true that product can’t sell if it’s sitting in the warehouse (setting aside the fact that using this approach to transfer overstock from warehouses to stores generally doesn’t make it sell any faster).

Now suppose that a supplier unexpectedly shipped a truckload of product that you didn’t need to your distribution centre because they were overstocked. Would you just receive it and scramble to find a place to store it? Because that’s what happens when you push product to stores.

Or how would you feel if you were out shopping and as you were approaching the checkout, a member of the store staff started filling your cart with items that the store didn’t want to stock any more? Would you just pay for it with a shrug and leave?

I hate to break the news, but there is no such thing as “push” when you’re thinking of the retail supply chain holistically. The only way to liquidate excess inventory is to encourage a “pull” by dropping the price or negotiating a return. All pushing does is add more cost to the product and transfer the operational issues downstream.


If we increase DC to store lead times, we can have store orders locked in further in advance and optimize our operations.


Planning with certainty is definitely easier than planning with uncertainty, but where does it end? Do you increase store lead times by 2 days? 2 weeks? 2 months? Why not lock in store orders for the next full year?

Increasing lead times does nothing but make the supply chain less responsive and that helps precisely no one. And, like the “push” scenario described above, stores are forced to hold more inventory, so you’re improving efficiency at one DC, but degrading it in dozens of stores served by that DC.

Again, would you be okay with suppliers arbitrarily increasing order lead times to improve their operational efficiency at your expense?

Would you shop at a store that only allows customers in the door who placed their orders two days in advance?

Customers buy what they want when they want. There are things that can be done to influence their behaviour, but it can’t be fully controlled in such a way that you can schedule your supply chain flow to be a flat line, day in and day out.


We sell a lot of slow moving dogs. We should segregate those items in the DC and just pick and deliver them to the stores once a month.


The first problem with this line of thinking is that “slow moving” doesn’t necessarily mean “not important to the assortment”.

Also, aren’t you sending 1 or 2 (or more) shipments a week to the same stores from the same building anyhow?

When’s the last time you went shopping for groceries and were told by store staff that, even though you need mushroom soup today, they only sell mushroom soup on alternate Thursdays?

Listen, I’m not arguing that retailers’ logistics operations shouldn’t be run as efficiently as possible. You just need to do it without cheating.

We need to remember that the SKU count, inventory and staff levels across the store network are many times greater than in the logistics operations. Employing tactics that hurt the stores in order to improve KPIs in the DCs or Transport operations is tantamount to cutting off your nose to spite your face.

Managing the Long Tail

If you don’t mind haunting the margins, I think there is more freedom there. – Colin Firth


 

A couple of months ago, I wrote a piece called Employing the Law of Large Numbers in Bottom Up Forecasting. The morals of that story were fourfold:

  1. That when sales at item/store level are intermittent (fewer than 52 units per year), a sales pattern at that level can’t be properly determined from the demand data at that level.
  2. That any retailer has a sufficient percentage of slow selling item/store combinations that the problem simply can’t be ignored in the planning process.
  3. That using a multi level, top-down approach to developing properly shaped forecasts in a retail context is fundamentally flawed.
  4. That the Law of Large Numbers can be used in a store centric fashion by aggregating sales across similar items at a store only for the purpose of determining the shape of the curve, thereby eliminating the need to create any forecasts above item/store level.

A high level explanation of the Profile Based Forecasting approach developed by Darryl Landvater (but not dissimilar to what many retailers were doing for years with systems like INFOREM and various home grown solutions) was presented as the antidote to this problem. Oh and by the way, it works fabulously well, even with such a low level of “sophistication” (i.e. unnecessary complexity).

But being able to shape a forecast for intermittent demands without using top-down forecasting is only one aspect of the slow seller problem. The objective of this piece is to look more closely at the implications of intermittent demands on replenishment.

The Bunching Problem

Regardless of how you provide a shape to an item/store forecast for a slow selling item (using either Profile Based Forecasting or the far more cumbersome and deeply flawed top-down method), you are still left with a forecasted stream of small decimal numbers.

In the example below, the shape of the sales curve cannot be determined using only sales history from two years ago (blue line) and the most recent year (orange line), so the pattern for the forecast (green dashed line) was derived from an aggregation of sales of similar items at the same store and multiplied through by the selling rate of the item/store itself (in this case 13.5 units per year):

You can see that the forecast indeed has a defined shape – it’s not merely a flat line that would be calculated from intermittent demand data with most forecasting approaches. However, when you multiply the shape by a low rate of sale, you don’t actually have a realistic demand forecast. In reality, what you have is a forecast of the probability that a sale will occur.
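To be clear about the mechanics of that scaling step, here’s a rough sketch – my own rendering of the idea, not Landvater’s actual implementation, and the profile values are invented:

```python
def shaped_forecast(profile_weekly_sales: list[float], annual_rate: float) -> list[float]:
    """Scale an aggregated selling pattern (similar items at the same store)
    to an individual item/store's annual rate, preserving the shape of the curve."""
    total = sum(profile_weekly_sales)
    return [annual_rate * w / total for w in profile_weekly_sales]

# A crude 52-week seasonal profile built from an aggregation of similar items (values invented)
profile = [5, 5, 6, 8, 10, 12, 12, 10, 8, 6, 5, 5] * 4 + [5, 5, 6, 8]
forecast = shaped_forecast(profile, annual_rate=13.5)
print(round(sum(forecast), 1))              # 13.5 -> same annual rate as the item/store
print([round(f, 3) for f in forecast[:4]])  # small decimals that follow the profile's shape
```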

Having values to the right of the decimal in a forecast is not a problem in and of itself. But when the value to the left of the decimal is a zero, it can create a huge problem in replenishment.

Why?

Because replenishment calculations always operate in discrete units and don’t know the difference between a forecast of true demand and a forecast of a probability of a sale.

Using the first 8 weeks of the forecast calculated above, you can see how time-phased replenishment logic will behave:

The store sells 13 to 14 units per year, has a safety stock of 2 units and 2 units in stock (a little less than 2 months of supply). By all accounts, this store is in good shape and doesn’t need any more inventory right now.

However, the replenishment calculation is being told that 0.185 units will be deducted from inventory in the first week, which will drive the on hand below the safety stock. An immediate requirement of 1 unit is triggered to ensure that doesn’t happen.

Think of what that means. Suppose you have 100 stores in which the item is slow selling and the on hand level is currently sitting at the safety stock (not an uncommon scenario in retail). Because of small decimal forecasts triggering immediate requirements at all of those stores, the DC needs to ship out 100 pieces to support sales of fewer than 20 pieces at store level – demand has been distorted 500%.

Now, further suppose that this isn’t a break-pack item and the ship multiple to the store is an inner pack of 4 pieces – instead of 100 pieces, the immediate requirement would be 400 pieces and demand would be distorted by 2,000%!
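Here’s a sketch of why the standard logic behaves this way, using the numbers from the example above (the replenishment logic itself is simplified for illustration):

```python
import math

def immediate_requirement(on_hand: float, safety_stock: float,
                          week1_forecast: float, ship_multiple: int = 1) -> int:
    """Simplified time-phased logic: it can't tell a sale probability from real
    demand, so if the projected balance dips below safety stock it orders now."""
    projected = on_hand - week1_forecast
    if projected >= safety_stock:
        return 0
    shortfall = safety_stock - projected
    return math.ceil(shortfall / ship_multiple) * ship_multiple

# One store: 2 on hand, safety stock of 2, week 1 forecast of 0.185
print(immediate_requirement(2, 2, 0.185))                         # 1 unit, right now
print(100 * immediate_requirement(2, 2, 0.185))                   # 100 such stores -> 100 units
print(100 * immediate_requirement(2, 2, 0.185, ship_multiple=4))  # inner pack of 4 -> 400 units
```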

The Antidote to Bunching – Integer Forecasts

What’s needed to prevent bunching from occurring is to convert the forecast of small decimals (the probability of a sale occurring) into a realistic forecast of demand, while still retaining the proper shape of the curve.

This problem has been solved (likewise by Darryl Landvater) using simple accumulator logic with a random seed to convert a forecast of small decimals into a forecast of integers.

It works like this:

  • Start with a random number between 0 and 1
  • Add this random number to the decimal forecast of the first period
  • Continue to add the forecasts for subsequent periods to the accumulation until the accumulated value “tips over” to the next integer – place a forecast of 1 unit at each of these “tip-over” points
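Here’s a minimal sketch of that accumulator logic in code – my own rendering of the idea, not Landvater’s actual implementation; only the first three weekly values come from the example above, the rest are invented:

```python
import random

def integer_forecast(decimal_forecast: list[float], seed: float | None = None) -> list[int]:
    """Convert a shaped decimal forecast (sale probabilities) into a shaped integer
    forecast: accumulate the decimals and place 1 unit each time the running
    total tips over the next whole number."""
    acc = random.random() if seed is None else seed  # start with a random number between 0 and 1
    result = []
    for f in decimal_forecast:
        prev = int(acc)
        acc += f
        result.append(int(acc) - prev)  # 0 in most weeks, 1 at each tip-over point
    return result

# Weeks 1-3 from the example above; the remaining weekly values are invented
weekly = [0.185, 0.185, 0.308, 0.308, 0.308, 0.431, 0.431, 0.431]
print(integer_forecast(weekly, seed=0.40))  # [0, 0, 1, 0, 0, 1, 0, 0]
```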

Here’s our small decimal forecast converted to integers in this fashion:

Because a random seed is being used for each item/store, the timing of the first integer forecast will vary by each item/store.

And because the accumulator uses the shaped decimal forecast, the shape of the curve is preserved. In faster selling periods, the accumulator will tip over more frequently and the integer forecasts will likewise be more frequent. In slower periods, the opposite is true.

Below is our original forecast after it has been converted from decimals to integers using this logic:

And when the requirements across multiple stores are placed back on the DC, they are not “bunched” and a more realistic shipment schedule results:

Stabilizing the Plans – Variable Consumption Periods

Just to stay grounded in reality, none of what has been described above (or, for that matter, in the previous piece Employing the Law of Large Numbers in Bottom Up Forecasting) improves forecast accuracy in the traditional sense. This is because, quite frankly, it’s not possible to predict with a high degree of accuracy the exact quantity and timing of 13 units of sales over a 52 week forecast horizon.

The goal here is not pinpoint accuracy (the logic does start with a random number after all), but reasonableness, consistency and ease of use. It allows for long tail items to have the same multi-echelon planning approach as fast selling items without having separate processes “on the side” to deal with them.

For fast selling items with continuous demand, it is common to forecast in weekly buckets, spread the weekly forecast into days for replenishment using a traffic profile for that location and consume the forecast against actuals to date for the current week:

In the example above, the total forecast for Week 1 is 100 units. By end of day Wednesday, the posted actuals to date totalled 29 units, but the original forecast for those 3 days was 24 units. The difference of -5 units is spread proportionally to the remainder of the week such as to keep the total forecast for the week at 100 units. The assumption being used is that you have higher confidence in the weekly total of 100 units than you have in the exact daily timing as to when those 100 units will actually sell.
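A rough sketch of that consumption logic, using the Week 1 numbers from above (the daily traffic profile is invented):

```python
def consume_forecast(daily_forecast: list[float], actuals_to_date: list[float]) -> list[float]:
    """Keep the weekly total intact: the gap between actuals and the original
    forecast to date is spread proportionally over the remaining days."""
    n = len(actuals_to_date)
    remaining = daily_forecast[n:]
    diff = sum(daily_forecast[:n]) - sum(actuals_to_date)  # e.g. 24 - 29 = -5
    remaining_total = sum(remaining)
    adjusted = [f + diff * f / remaining_total for f in remaining]
    return list(actuals_to_date) + adjusted

week1_forecast = [8, 8, 8, 19, 19, 19, 19]  # 100 units spread by an invented traffic profile
actuals = [10, 9, 10]                       # 29 sold by Wednesday vs 24 originally forecast
print([round(x, 2) for x in consume_forecast(week1_forecast, actuals)])  # remaining days drop to 17.75
print(sum(consume_forecast(week1_forecast, actuals)))                    # still 100 for the week
```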

For slow moving items, we would not even have confidence in the weekly forecasts, so consuming forecast against actual for a week makes no sense. However, there would still be a need to keep the forecast stable in the very likely event that the timing and magnitude of the actuals don’t match the original forecast. In this case, we would consume forecast against actuals on a less frequent basis:

The logic is the same, but the consumption period is longer to reflect the appropriate level of confidence in the forecast timing.

Controlling Store Inventory – Selective Order Release

Let’s assume for a moment a 1 week lead time from DC to store. In the example below, a shipment is planned in Week 2, which means that in order to get this shipment in Week 2, the store needs to trigger a firm replenishment right now:

Using standard replenishment rules that you would use for fast moving items, this planned shipment would automatically trigger as a store transfer in Week 1 to be delivered in Week 2. But this replenishment requirement is being calculated based on a forecast in Week 2 and as previously mentioned, we do not have confidence that this specific quantity will be sold in this specific week at this specific store.

When that shipment of 1 unit arrives at the store (bringing the on hand up to 3 units), it’s quite possible that you won’t actually sell it for several more weeks. And the overstock situation would be further exacerbated if the order multiple is greater than 1 unit.

This is where having the original decimal forecast is useful. Remember that, as a practical matter, the small decimals represent the probability of a sale in a particular week. This allows us to calculate a tradeoff between firming this shipment now or waiting for the sale to materialize first.

Let’s assume that choosing to forgo the shipment in Week 2 today means that the next opportunity for a shipment is in Week 3. In the example below, we can see that there is a 67.8% chance (0.185 + 0.185 + 0.308) that we will sell 1 unit and drop the on hand below safety stock between now and the next available ship date:

Based on this probability, would you release the shipment or not? The threshold for this decision could be determined based on any number of factors such as product size, cost, etc. For example, if an item is small and cheap, you might use a low probability threshold to trigger a shipment. If another slow selling item is very large and expensive, you might set the threshold very high to ensure that this product is only replenished after a sale drives the on hand below the safety stock.
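In code, the tradeoff is nothing more than a cumulative probability compared against a threshold – a sketch, with the thresholds purely illustrative:

```python
def should_release(decimal_forecast_to_next_ship: list[float], threshold: float) -> bool:
    """Firm the planned shipment only if the cumulative probability of a sale
    before the next ship opportunity exceeds the item's threshold."""
    probability_of_sale = min(1.0, sum(decimal_forecast_to_next_ship))
    return probability_of_sale >= threshold

weeks_until_next_ship = [0.185, 0.185, 0.308]  # 67.8% chance of a sale, per the example above
print(should_release(weeks_until_next_ship, threshold=0.50))  # True  -> small, cheap item: release
print(should_release(weeks_until_next_ship, threshold=0.90))  # False -> big, expensive item: wait
```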

Remember, the probabilities themselves follow the sales curve, so an order has a higher probability of triggering in a higher selling period than in a lower selling period, which would be the desired behaviour.

The point of all of this is that the same principles of Flowcasting (forecast only at the point of consumption, every item has a 52 week forecast and plan, only order at the lead time, etc.) can still apply to items on the long tail, so long as the planning logic you use incorporates these elements.

Employing the Law of Large Numbers in Bottom-Up Forecasting

 

It is utterly implausible that a mathematical formula should make the future known to us, and those who think it can would once have believed in witchcraft. – Jakob Bernoulli (1655-1705)


This is a topic I’ve touched on numerous times in the past, but I’ve never really taken the time to tackle the subject comprehensively.

Before diving in, I just want to make clear that I’m going to stay in my lane: the frame of reference for this entire piece is around forecasting sales at the point of consumption in retail.

In that context, here are some truths that I consider to be self evident:

  1. Consumers buy specific items in specific stores at specific times. Therefore, in order to plan the retail supply chain from consumer demand back, forecasts are needed by item by store.
  2. Any retailer has a large enough percentage of intermittent demand streams at item/store level (e.g. fewer than 1 sale per week) that they can’t simply be ignored in the forecasting process.
  3. Any given item can have continuous demand in some locations and intermittent demand in other locations.
  4. “Intermittent” doesn’t mean the same thing as “random”. An intermittent demand stream could very well have a distinct pattern that is not visible to the naked eye (nor to most forecast algorithms that were designed to work with continuous demands).
  5. Because of points 1 to 4 above, the Law of Large Numbers needs to be employed to see any patterns that exist in intermittent demand streams.

On this basis, it seems to be a foregone conclusion that the only way to forecast at item/store is by employing a top-down approach (i.e. aggregate sales history to some higher level(s) than item/store so that a pattern emerges, calculate an independent forecast at that level, then push down the results proportionally to the item/stores that participated in the original aggregation of history).

So now the question becomes: How do you pick the right aggregation level for forecasting?

This recent (and conveniently titled) article from Institute of Business Forecasting by Eric Wilson called How Do You Pick the Right Aggregation Level for Forecasting? captures the considerations and drawbacks quite nicely and provides an excellent framework to discuss the problem in a retail context.

A key excerpt from that article is below (I recommend that you read the whole thing – it’s very succinct and captures the essence of how to think about this problem in a few short paragraphs):


When To Go High Or Low?

Despite all the potential attributes, levels of aggregation, and combinations of them, historically the debate has been condensed down to only two options, top down and bottom up.

The top-down approach uses an aggregate of the data at the highest level to develop a summary forecast, which is then allocated to individual items on the basis of their historical relativity to the aggregate. This can be any generated forecast as a ratio of their contribution to the sum of the aggregate or on history which is in essence a naïve forecast.

More aggregated data is inherently less noisy than low-level data because noise cancels itself out in the process of aggregation. But while forecasting only at higher levels may be easier and provides less error, it can degrade forecast quality because patterns in low level data may be lost. High level works best when behavior of low-level items is highly correlated and the relationship between them is stable. Low level tends to work best when behavior of the data series is very different from each other (i.e. independent) and the method you use is good at picking up these patterns.

The major challenge is that the required level of aggregation to get meaningful statistical information may not match the precision required by the business. You may also find that the requirements of the business may not need a level of granularity (i.e. Customer for production purposes) but certain customers may behave differently, or input is at the item/customer or lower level. More often than not it is a combination of these and you need multiple levels of aggregation and multiple levels of inputs along with varying degrees of noise and signals.


These are the two most important points:

  • “High level works best when behavior of low-level items is highly correlated and the relationship between them is stable.”
  • “Low level tends to work best when behavior of the data series is very different from each other (i.e. independent) and the method you use is good at picking up these patterns.”

Now, here’s the conundrum in retail:

  • The behaviour of low level items is very often NOT highly correlated, making forecasting at higher levels a dubious proposition.
  • Most popular forecasting methods only work well with continuous demand history data, which can often be scarce at item/store level (i.e. they’re not “good at picking up these patterns”).

My understanding of this issue was firmly cemented about 19 years ago when I was involved in a supply chain planning simulation for beer sales at 8 convenience stores in the greater Montreal area. During that exercise, we discovered that 7 of those 8 stores had a sales pattern that one would expect for beer consumption in Canada (repeated over 2 full years): strong sales during the summer months, lower sales in the cooler months and a spike around the holidays. The actual data is long gone, but for those 7 stores, it looked something like this:

The 8th store had a somewhat different pattern.

And by “somewhat different”, I mean exactly the opposite:

Remember, these stores were all located within about 30 kilometres of each other, so they all experienced generally the same weather and temperature at the same time. We fretted over this problem for a while, thinking that it might be an issue with the data. We even went so far as to call the owner of the 8-store chain to ask him what might be going on.

In an exasperated tone that is typical of many French Canadians, he impatiently told us that of course that particular store has slower beer sales in the summer… because it is located in the middle of 3 downtown university campuses: fewer students in the summer months = a decrease in sales for beer during that time for that particular store.

If we had visited every one of those 8 stores before we started the analysis (we didn’t), we may have indeed noticed the proximity of university campuses to one particular store. Would we have pieced together the cause/effect relationship to beer sales? My guess is probably not. Yet the whole story was right there in the sales data itself, as plain as the nose on your face.

We happened upon this quirk after studying a couple dozen SKUs across 8 locations. A decent sized retailer can sell tens of thousands of SKUs across hundreds or thousands of locations. With millions of item/store combinations, how many other quirky criteria like that could be lurking beneath the surface and driving the sales pattern for any particular item at any particular location?

My primary conclusion from that exercise was that aggregating sales across store locations is definitely NOT a good idea.
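
To make that concrete, here’s a tiny illustration in Python of what a top-down model would do with those 8 stores. The numbers are invented (the original beer data is long gone, as noted above): seven summer-peaking stores swamp the campus store in the aggregate, and when the aggregate shape is pushed back down, the campus store inherits a summer peak it will never see.

```python
# Illustrative only: invented monthly indices, not the original beer data.
# Seven stores peak in the summer; the campus store dips instead.
summer_peak_store = [0.6, 0.6, 0.8, 1.0, 1.2, 1.5, 1.6, 1.5, 1.1, 0.9, 0.8, 1.4]
campus_store      = [1.4, 1.3, 1.2, 1.0, 0.8, 0.5, 0.4, 0.5, 1.1, 1.3, 1.3, 1.2]

stores = [summer_peak_store] * 7 + [campus_store]

# Top-down: aggregate across stores, then push the aggregate shape back down.
aggregate = [sum(month) for month in zip(*stores)]

def shape(series):
    """Each period's share of the annual total."""
    total = sum(series)
    return [round(x / total, 3) for x in series]

print("aggregate shape      :", shape(aggregate))     # looks like the 7 'normal' stores
print("campus store's shape :", shape(campus_store))  # exactly the opposite
```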

So in terms of figuring out the right level of aggregation, that just leaves us with the item dimension – stay at store level, but aggregate across categories of similar items. But in order for this to be a good option for the top level, we run into the second condition from the excerpt above: “behavior of low-level items is highly correlated and the relationship between them is stable”.

That second part becomes a real issue when it comes to trying to aggregate across items. Retailers live every day on the front line of changing consumer sentiment and behaviour. As a consequence of that, it is very uncommon to see a stable assortment of items in every store year in and year out.

Let’s say that a category currently has 10 similar items in it. After an assortment review, it’s decided that 2 of those items will be leaving the category and 4 new products will be introduced into the category. This change is planned to be executed in 3 months’ time. This is a very simple variation of a common scenario in retail.

Now think about what that means with regard to managing the aggregated sales history for the top level (category/store):

  • The item/store sales history currently includes 2 items that will be leaving the assortment. But you can’t simply exclude those 2 items from the history aggregation, because this would understate the category/store forecast for the next 3 months, during which time those 2 items will still be selling.
  • The item/store level sales history currently does not include the 4 new items that will be entering the assortment. But you can’t simply add surrogate history for the 4 new items into the aggregation, because this would overstate the category/store forecast for the next 3 months before those items are officially launched.

In this scenario, how would one go about setting up the category/store forecast in such a way that:

  1. It accounts for the specific items participating in the aggregation at different future times (before, during and after the anticipated assortment change)?
  2. The category/store forecast is being pushed down to the correct items at different future times (before, during and after the anticipated assortment change)?

And this is a fairly simple example. What if the assortment changes above are being rolled out to different stores at different times (e.g. a test market launch followed by a staged rollout)? What if not every store is carrying the full 10 SKU assortment today? What if not every store will be carrying the full 12 SKU assortment in the future?

The complexity of trying to deal with this in a top-down structure can be nauseating.
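
Even this simple version of the scenario requires some kind of time-phased membership rule to keep the category/store aggregate honest – which items count in the aggregation (and receive a share of the push-down) in which future weeks. A toy sketch of that bookkeeping (the item names and changeover week are mine, purely for illustration) might look like the following – and this is before layering on staged rollouts or store-specific assortments:

```python
# Hypothetical time-phased membership for one category at one store.
current_items   = {f"item_{i:02d}" for i in range(1, 11)}      # 10 items today
leaving         = {"item_03", "item_07"}                        # 2 items exiting
entering        = {"item_11", "item_12", "item_13", "item_14"}  # 4 items entering
changeover_week = 13                                            # ~3 months out

def participating_items(week):
    """Items whose history/forecast belong in the category/store aggregate for `week`."""
    if week < changeover_week:
        return current_items                        # the leavers are still selling
    return (current_items - leaving) | entering     # the new assortment thereafter
```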

So it seems that we find ourselves in a bit of a pickle here:

  1. The top-down approach is unworkable in retail because the behaviour of different locations for the same item is not correlated (beer in Montreal stores) and the relationships among items at the same location are not stable (constantly changing assortments).
  2. In order for the bottom-up approach to work, there needs to be some way of finding patterns in intermittent data. It’s a self-evident truth that the only way to do this is by aggregating.

So the Law of Large Numbers is still needed to solve this problem, but in a retail setting, there is no “right level” of aggregation above item/store at which to develop reliable independent top level forecasts that are also manageable.

Maybe we haven’t been thinking about this problem in the right way.

This is where Darryl Landvater comes in. He’s a long-time colleague and mentor of mine, best known as a “manufacturing guy” (he’s the author of World Class Production and Inventory Management, as well as co-author of The MRP II Standard System), but he’s really a “planning guy”.

A number of years ago, Darryl recognized the inherent flaws with using a top-down approach to apply patterns to intermittent demand streams and broke the problem down into two discrete parts:

  1. What is the height of the curve (i.e. rate of sale)?
  2. What is the shape of the curve (i.e. selling profile)?

His contention was that aggregation isn’t needed to calculate completely independent sales forecasts (i.e. height + shape). Instead, what’s needed is to aggregate only to calculate selling profiles, to be used in cases where the discrete demand history for an item at a store is insufficient to determine one. We’re still using the Law of Large Numbers, but only to solve the specific problem inherent in slow selling demands – finding the shape of the curve.

It’s called Profile Based Forecasting and here’s a very simplified explanation of how it works:

  1. Calculate an annual forecast quantity for each independent item/store based on sales history from the last 52+ weeks (at least 104 weeks of rolling history is ideal). For example, if an item in a store sold 25 units 2 years ago and 30 units over the most current 52 weeks, then the total forecast for the upcoming 52 weeks might be around 36 units with a calculated trend applied.
  2. Spread the annual forecast into individual time periods as follows:
    • If the item/store has a sufficiently high rate of sale that a pattern can be discerned from its own unique sales history (for example, at least 70 units per year), then calculate the selling pattern from only that history and multiply it through the item/store’s selling rate.
    • If the item/store’s rate of sale is below the “fast enough to use its own history” threshold, then calculate a sales pattern using a category of similar items at the same store and multiply those percentages through the independently calculated item/store annual forecast.

There is far more to it than that, but the separation of “height of the curve” from “shape of the curve” as described above is the critical design element that forms the foundation of the approach.
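
As a rough illustration of the mechanics described above, here’s a sketch in Python. The 70-units-per-year threshold comes from the example earlier; the function names and the simple trend calculation are my own placeholders, not the actual implementation:

```python
import numpy as np

WEEKS = 52
OWN_HISTORY_THRESHOLD = 70   # units/year; the illustrative cut-off mentioned above

def annual_forecast(prior_year_total, latest_year_total):
    """Height of the curve: project next year's total from the last two years.
    e.g. 25 units two years ago and 30 units last year -> trend of 1.2 -> ~36 units."""
    trend = latest_year_total / prior_year_total if prior_year_total else 1.0
    return latest_year_total * trend

def weekly_profile(weekly_history):
    """Shape of the curve: each week's share of the period total."""
    total = weekly_history.sum()
    return weekly_history / total if total else np.full(len(weekly_history), 1 / len(weekly_history))

def profile_based_forecast(item_store_104wk, category_store_52wk):
    """Combine the item/store's own height with the appropriate shape."""
    prior, latest = item_store_104wk[:WEEKS], item_store_104wk[WEEKS:]
    height = annual_forecast(prior.sum(), latest.sum())

    if latest.sum() >= OWN_HISTORY_THRESHOLD:
        shape = weekly_profile(latest)                # fast enough: use its own pattern
    else:
        shape = weekly_profile(category_store_52wk)   # too slow: borrow the category's shape

    return height * shape                             # 52 weekly forecast quantities
```

The point of the sketch is simply the separation of concerns: the height is always calculated independently at item/store, and aggregation is borrowed only for the shape, and only when the item’s own history is too sparse to reveal one.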

Think about what that means:

  1. If an item/store’s rate of sale is sufficient to calculate its own independent sales profile at that level, then it will do so.
  2. If the rate of sale is too low to discern a pattern, then the shape being applied to the independent item/store’s rate of sale is derived by looking at similar items in the category within the same store. Because the profiles are calculated from similar products and only represent the weekly percentages through which to multiply the independent rate of sale, they don’t need to be recalculated very often and are generally immune to the “ins and outs” of specific products in the category. It’s just a shape, remember.
  3. All forecasting is purely bottom-up. Every item at every store can have its own independent forecast with a realistic selling pattern and there are no forecasts to be calculated or managed above the item/store level.
  4. The same forecast method can be used for every item at every store. The only difference between fast and slow selling items is how the selling profile is determined. As the selling rate trends up or down over time, the appropriate selling profile will be automatically applied based on a comparison to the threshold. This makes the approach very “low touch” – demand planners can easily oversee several hundred thousand item/store combinations by managing only exceptions.

With realistic, properly shaped forecasts for every item/store enabled without any aggregate level modelling, it’s now possible to do top-down stuff that makes sense, such as applying promotional lifts or overrides for an item across a group of stores and applying the result proportionally based on each store’s individual height and shape for those specific weeks, rather than using a naive “flat line” method.
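
For example, pushing a group-level promotional lift down to stores might look something like this sketch (with invented numbers): each store’s share of the lift is proportional to what its own forecast already says for the promo weeks, peaks and troughs included, rather than an even split.

```python
# Illustrative: spread a group-level promo lift in proportion to each store's
# own forecast for the promo weeks (its individual height and shape).
store_forecasts = {                 # baseline units for the two promo weeks
    "store_001": [12.0, 11.0],
    "store_002": [3.0, 2.5],
    "store_003": [7.0, 8.0],
}
promo_lift_units = 60.0             # extra units expected across the whole group

base_total = sum(sum(weeks) for weeks in store_forecasts.values())

promoted = {
    store: [week + promo_lift_units * week / base_total for week in weeks]
    for store, weeks in store_forecasts.items()
}
```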

Simple. Intuitive. Practical. Consistent. Manageable. Proven.

Customer Service Collateral Damage

 

Good intentions can often lead to unintended consequences. – Tim Walberg


Speed kills.

Retailers with brick and mortar operations are always trying to keep the checkout lines moving and get customers out the door as quickly as possible. Many collect time stamps on their sales transactions in order to measure and reward their cashiers based on how quickly they can scan.

Similarly, being able to receive quickly at the back of the store is seen as critical to customer service – product only sells off the shelf, not from the receiving bay or the back of a truck.

This focus on speed has led to many in-store transactional “efficiencies”:

  • If a customer puts 12 cans of frozen concentrated juice on the belt, a cashier may scan the first one and use the multiplier key to add the other 11 to the bill all at once.
  • If a product doesn’t scan properly or is missing the UPC code, just ask the customer for the price and key the sale under a “miscellaneous” SKU or a similar item with the same price, rather than calling for a time consuming code check.
  • If a shipment arrives in the receiving bay, just scan the waybill instead of each individual case and get the product to the floor.

These time saving measures can certainly delight “the customer of this moment”, but there can also be consequences.

In the “mult key” example, the 12 cans scanned could be across 6 different flavours of juice. The customer may not care since they’re paying the same price, but the inventory records for 6 different SKUs have just been fouled up for the sake of saving a few seconds. To the extent that the system on hand balances are used to make automated replenishment decisions, this one action could be inconveniencing countless customers for several more days, weeks or even months before the lie is exposed.

The smile on a customer’s face because you saved her 5 seconds at the checkout or the cashier speed rankings board in the break room might be tangible signs of “great customer service”, but the not-so-easy-to-see stockouts and lost sales that arise from this practice over time are extremely costly.

The same goes for skipping code checks or “pencil whipping” back door receipts. Is sacrificing accuracy for the sake of speed really good customer service policy?

A recent article published in Canadian Grocer magazine begins with the following sentence:

“A lack of open checkouts and crowded aisles may be annoying to grocery shoppers, but their biggest frustration is finding a desired product is out of stock, according to new research from Field Agent.”

According to the article, out of stocks are costing Canadian grocers $63 billion per year in sales. While better store level planning and replenishment can drive system reported in-stocks close to 100%, the benefits are muted if the replenishment system thinks the store has 5 units when they actually have none.

Not only does this affect the experience of a walk-in customer looking at an empty shelf, but it’s actually even more serious in an omnichannel world where the expectation is that retailers will publish store inventories on their public websites (gulp!). An empty shelf is one thing, but publishing an inaccurate on hand on your website is tantamount to lying right to your customers’ faces.

We’ve seen firsthand that it’s not uncommon for retailers to have a store on hand accuracy percentage in the low 60s (meaning that almost 40% of the time, the system on hand record differs from the counted quantity by more than 5% at item/location level). Furthermore, we’ve found that on the day of an inventory count, the actual in stock is several points lower than the reported in stock on average.
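
For what it’s worth, the KPI itself is not hard to compute once you have cycle count results to compare against. A minimal sketch (the 5% tolerance matches the definition above; the handling of zero counts and the sample records are my own assumptions):

```python
# Sketch of an on hand accuracy KPI: share of item/locations where the system
# on hand matches the physical count within tolerance. Records are invented.
TOLERANCE = 0.05

counts = [
    # (system_on_hand, counted_on_hand)
    (12, 12),
    (5, 0),     # phantom stock: system says 5, shelf and backroom are empty
    (8, 9),
    (3, 3),
]

def is_accurate(system, counted, tol=TOLERANCE):
    if counted == 0:
        return system == 0
    return abs(system - counted) / counted <= tol

accuracy = sum(is_accurate(s, c) for s, c in counts) / len(counts)
print(f"on hand accuracy: {accuracy:.0%}")   # 50% for this invented sample
```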

Suffice it to say that inaccurate on hand records are a big part of the out of stocks problem.

Nothing I’ve said above is particularly revolutionary or insightful. The real question is: why has this been allowed to continue?

In my view, there are 3 key reasons:

  1. Most retailers conflate shrink with inventory accuracy and make the horribly, horribly mistaken assumption that if their financial shrink is below 1.5%, then their inventory management is under control. Shrink is a measure for accountants, not customers, and responsibility for store inventory management belongs in Store Operations, not Finance.
  2. Nobody measures the accuracy of their on hands. It’s fine to measure the speed of transactions and the efficiency of store labour, but if you’re taking shortcuts to achieve those efficiencies, you should also be measuring the consequence of those actions – especially when the consequence so profoundly impacts the customer experience.
  3. Retailers think that inaccurate store on hands is an intractable problem that’s impossible to economically solve. That was true for every identified problem in human history at one point. However, I do agree that if no action is taken to solve the problem because it is “impossible to solve”, then it will never be solved.

It’s true that overcoming inertia on this will not be easy.

Your customers’ expectations will continue to rise regardless.

Rise of the Machines?

 

It requires a very unusual mind to undertake the analysis of the obvious. – Alfred North Whitehead (1861-1947)


 

My doctor told me that I need to reduce the amount of salt, fat and sugar in my diet. So I immediately increased the frequency of oil changes for my car.

Confused?

I don’t blame you. That’s how I felt after I read a recent survey about the adoption of artificial intelligence (AI) in retail.

Note that I’m not criticizing the survey itself. It’s a summary of collected thoughts and opinions of retail C-level executives (pretty evenly split among hardlines/softlines/grocery on the format dimension and large/medium/small on the size dimension), so by definition it can’t be “wrong”. I just found some of the responses to be revealing – and bewildering.

On the “makes sense” side of the ledger, the retail executives surveyed intend to significantly expand customer delivery options for purchases made online over the next 24 months, specifically:

  • 79% plan to offer ship from store
  • 80% plan to offer pick up in store
  • 75% plan to offer delivery using third party services

This supports my (not particularly original) view that the physical store affords traditional brick and mortar retailers a competitive advantage over online retailers like Amazon, at least in the short to medium term.

However, the next part of the survey is where we start to see trouble (the title of this section is “Retailers Everywhere Aren’t Ready for the Anywhere Shelf”):

  • 55% of retailers surveyed don’t have a single view of inventory across channels
  • 78% of retailers surveyed don’t have a real-time view of inventory across channels

What’s worse is that there is no mention at all about inventory accuracy. I submit that the other 45% and 22% respectively may have inventory visibility capabilities, but are they certain that their store level inventory records are accurate? Do they actually measure store on hand accuracy (by item by location in units, which is what a customer sees) as a KPI?

The title of the next slide is “Customer Experience and Supply Chain Maturity Demands Edge Technologies”. Okay… Sure… I guess.

The slide after that concludes that retail C-suite executives believe that the top technologies “having the broadest business impact on productivity, operational efficiency and customer experience” are as follows:

  • #1 – Artificial Intelligence/Machine Learning
  • #2 – Connected Devices
  • #3 – Voice Recognition

Towards the end, it was revealed that “The C-suite is planning a 5X increase in artificial intelligence adoption over the next 2 years”. And that 50% of those executives see AI as an emerging technology that will have a significant impact on “sharpening inventory levels” (whatever that actually means).

So just to recap:

  • Over the next 2 years, retailers will be aggressively pursuing customer delivery options that place ever increasing importance on visibility and accuracy of store inventory.
  • A majority of retailers haven’t even met the visibility criterion, and it’s highly unlikely that the ones who have are meeting the accuracy criterion (the second part is my assumption and I welcome being proved wrong on that).
  • Over the next 2 years, retailers intend to increase their investment in artificial intelligence technologies fivefold.

I’m reminded of the scene in Die Hard 2 (careful before you click – the language is not suitable for a work environment or if small children are nearby) where terrorists take over Dulles International Airport during a zero visibility snowstorm and crash a passenger jet simply by transmitting a false altitude reading to the cockpit of the plane.

Even in 1990, passenger aircraft were quite technologically advanced and loaded with systems that could meet the definition of “artificial intelligence“. What happens when one piece of critical data fed into the system is wrong? Catastrophe.

I need some help understanding the thought process here. How exactly will AI solve the inventory visibility/accuracy problem? Are we talking about every retailer having shelf scanning robots running around in every store 2 years from now? What does “sharpen inventory levels” mean and how is AI expected to achieve that (very nebulous sounding) goal?

I’m seriously asking.