Store Inventory Accuracy: Getting It Right

 

A man who has committed a mistake and doesn’t correct it is committing another mistake. – Confucius (551 BC – 479 BC)


 

A couple months ago, I wrote a piece entitled What Everybody Gets Wrong About Store Inventory Accuracy. Here it is in a nutshell:

  • Retailers are pretty terrible at keeping their store inventory accurate
  • It’s costing them a lot in terms of sales, customer service and yes, shrink
  • The problem is pervasive and has not been properly addressed due to some combination of willful blindness, misunderstanding and fear

I think what mostly gives rise to the inaction is the assumption that the only way to keep inventory accurate is to expend vast amounts of time and energy on counting.

Teaching people how to bandage cuts, use eyewash stations or mend broken bones is not a workplace health and safety program. Yes, those things would certainly be part of the program, but the focus should be weighted far more heavily toward prevention than toward dealing with the aftermath of mishaps that have already occurred.

In a similar vein, a store cycle counting program is NOT an inventory accuracy program!

A recent trend I’ve noticed among retailers is to mine vast quantities of sales and stock movement data to predict which items in which stores are most likely to have inventory record discrepancies at any given time. Those items and stores are targeted for more frequent counting so as to minimize the duration of the mismatch. Such programs are often described as being “proactive”, but how can that be so if the purpose of the program is still to correct errors in the stock ledger after they have already happened?

Going back to the workplace safety analogy, this is like “proactively” locating an eyewash station near the key cutting kiosk. That way, the key cutter can immediately wash his/her eyes after getting metal shavings in them. Perhaps safety glasses or a protective screen might be a better idea.

Again, what’s needed is prevention – intervening in the processes that cause the inaccurate records in the first place.

Think of the operational processes in a store that adjust the electronic stock ledger on a daily basis:

  • Receiving
  • POS Scanning
  • Returns
  • Adjustments for damage, waste, store use, etc.

Two or more of those processes touch every single item in every single store on a fairly frequent basis. To the extent that those processes are flawed, recording the wrong items or quantities in the stock ledger (or even the right items and quantities at the wrong time), any given item in any given store at any given time can have an inaccurate inventory balance without anyone knowing about it, or why, until it is discovered long after the fact.

By the same token, fixing defects in a relatively small number of processes can significantly (and permanently) improve inventory accuracy across a wide swath of items.

So how do you find these process defects?

At the outset, it may not be as difficult as you think. In my experience, a two-hour meeting with anyone who works in Loss Prevention will give you plenty of things to get started on. Whether it’s an onerous and manual receiving process that is prone to error, poor shelf management or lackadaisical behaviour at the checkout, identifying the problems is usually not the hard part – it’s actually making the changes necessary to begin to address them (which could involve system changes, retraining, measurement and monitoring or all of the above).

If your organization actually cares about keeping inventory records accurate (versus fixing them long after they have been allowed to degrade), then there’s nothing stopping you from working on those things immediately, before a single item is ever counted (see the Confucius quote at the top). If not, then I hate to say it but you’re doomed to having inaccurate inventory in perpetuity (or at least until someone at or near the top does start caring).

Tackling some low hanging fruit is one thing, but to attain and sustain high levels of accuracy – day in and day out – over the long term, rooting out and correcting process defects needs to become part of the organization’s cultural DNA. The end goal is one that can never be reached – better every day.

This entails moving to a three pronged approach for managing stock:

  • Counting with purpose and following up (Control Group Cycle Counting)
  • Keeping the car between the lines on the road (Inspection Counting)
  • Keeping track of progress (Measurement Counting)

Control Group Cycle Counting

The purpose of this counting approach is not to correct inventory balances that have become inaccurate. Rather, it’s to detect the process failures that cause discrepancies in the first place.

It works like this:

  1. Select a sample of items that is representative of the entire store, yet small enough to detail count in a reasonable amount of time (for the sake of argument, let’s say that’s 50 items in a store). This sample is the control group.
  2. Perform a highly detailed count of the control group items, making sure that every unit of stock has been located. Adjust the inventory balances to set the baseline for the first “perfect” count.
  3. One week later, count the exact same items in detail all over again. Over such a short duration, the expectation is that the stock ledger should exactly match the number of units counted. If there are any discrepancies, whatever caused the discrepancy must have occurred in the last 7 days.
  4. Research the transactions that have happened in the last week to find the source of the error. If the discrepancy was 12 units and a goods receipt for a case of 12 was recorded 3 days ago, did something happen in receiving? If the system record shows 6 units but there are 9 on the shelf, was the item scanned once with a quantity override, even though 4 different items may have actually been sold? The point is that you’re asking people about potential errors that have recently happened and will have a better chance of successfully isolating the source of the problem while it’s in everyone’s mind. Not every discrepancy will have an identifiable cause and not every discrepancy with an identifiable cause will have an easy remedy, but one must try.
  5. Determine the conditions that caused the problem to occur. Chances are, those same conditions could be causing problems on many other items outside the control group.
  6. Think about how the process could have been done differently so as to have avoided the problem to begin with and trial new procedure(s) for efficiency and effectiveness.
  7. Roll out new procedures chainwide.
  8. Repeat steps 3 to 7 forever (changing the control group every so often to make sure you continue to catch new process defects).

Eight simple steps – what could be easier, right?
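As a rough sketch, the weekly check in steps 3 and 4 boils down to comparing the ledger against a fresh detailed count and pulling the last week’s transactions for anything that doesn’t match. The function name and data shapes below are hypothetical, not from any particular retail system:

```python
def weekly_control_group_check(ledger, detailed_count, recent_transactions):
    """Flag control-group items whose system balance no longer matches a
    fresh detailed count, along with last week's transactions to research.

    ledger: item -> system on-hand balance at the moment of the count
    detailed_count: item -> units physically located in this week's count
    recent_transactions: item -> list of (txn_type, qty) from the last 7 days
    """
    to_research = {}
    for item, counted_qty in detailed_count.items():
        system_qty = ledger.get(item, 0)
        if system_qty != counted_qty:
            # Last week's baseline count was verified as perfect, so whatever
            # caused this gap must have happened within the last 7 days.
            to_research[item] = {
                "discrepancy": system_qty - counted_qty,
                "transactions": recent_transactions.get(item, []),
            }
    return to_research
```

For instance, a system balance of 6 against 9 units on the shelf would surface as a discrepancy of -3, with that week’s POS and receiving transactions attached for follow-up while the events are still fresh in everyone’s mind.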

Yes, this process is somewhat labour intensive.
Yes, this requires some intestinal fortitude.
Yes, this is not easy.

But…

How much time does your sales staff spend running around on scavenger hunts looking for product that “the system says is here”?

How much money and time do you waste on emergency orders and store-to-store transfers because you can’t pick an online order?

How long do you think your customers will be loyal if a competitor consistently has the product they want on the shelf or can ship it to their door in 24 hours?

Inspection Counting

In previous pieces written on this topic, I’ve referred to this as “Process Control Counting” – so coined by Roger Brooks and Larry Wilson in their book Inventory Record Accuracy – which they describe as being “controversial in theory, but effective in practice”.

We’ve found that moniker is not very descriptive and can be confusing to people who are not well versed in inventory accuracy concepts (i.e. every retailer we’ve encountered in the last 25 years).

The Inspection Counting approach is designed to quickly identify items with obvious large discrepancies and correct them on the spot.

Here’s how it works:

  1. Start at the beginning of an aisle and look up the first item using a handheld scanner that can instantly display the inventory balance.
  2. Quickly scan the shelf and determine whether or not it appears the system balance is correct.
  3. If it appears to be correct, move on to the next item. If there appears to be a large discrepancy, do some simple investigation to see if it can be located – if not, then perform a count, adjust the balance and move on.
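A minimal decision rule for step 3 might look like the following. The thresholds are purely illustrative; in practice the inspector’s judgment sets them:

```python
def looks_way_off(system_qty, shelf_estimate, min_units=5, tolerance=0.5):
    """Hypothetical rule of thumb for inspection counting: only treat the
    record as 'obviously wrong' when the gap is large both in absolute
    units and relative to the larger of the two quantities."""
    gap = abs(system_qty - shelf_estimate)
    if gap < min_units:
        return False  # small gaps are left for control-group analysis
    return gap / max(system_qty, shelf_estimate, 1) > tolerance
```

A system balance of 20 against 3 on the shelf would trigger an investigation; a balance of 10 against 8 would not.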

It may seem like this approach is not very scientific and subject to interpretation and judgment on the part of the person doing the inspection counting. That’s because it is. (That’s the “controversial” part).

But there are clear advantages:

  • It is fast – Every item in the store can be inspection counted every few weeks.
  • It is efficient – The items that are selected to be counted are items that are obviously way off (which are the ones that are most important to correct).
  • It is more proactive – “Hole scans” performed today are quite often major inventory errors that occurred days or weeks ago and were only discovered when the shelf was empty – bad news early is better than bad news late.

No matter how many process defects are found and properly addressed through Control Group Counting, there will always be theft and honest mistakes. Inspection Counting provides a stopgap, ensuring that no inventory record goes unchecked for a long period of time, even when there are thousands of items to cycle through.

As part of an overall program underpinned by Control Group Counting and process defect elimination, the number of counts triggered by an inspection (and the associated time and effort) should decrease over time as fewer defects cause the discrepancies in the first place.

Measurement Counting

The purpose of this counting approach is to use sampling to estimate the accuracy of the population based on the accuracy of a representative group.

It works like this:

  1. Once a month, select a fresh sample of items that is representative of the entire store, yet small enough to detail count in a reasonable amount of time, similar to how a control group is selected. This sample is the measurement group.
  2. Perform a highly detailed count of the measurement group items, making sure that every unit of stock has been located.
  3. Post the results in the store and discuss them in executive meetings every month. Is accuracy trending upward or downward? Do certain stores need some additional temporary support? Have new root causes been identified that need to be addressed?
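The monthly measurement itself is straightforward to sketch. The sample size and the ±5% tolerance below follow the counting discussion in this piece, but the function and data shapes are otherwise assumptions:

```python
import random

def monthly_accuracy(all_items, ledger, physical_count,
                     size=50, tolerance=0.05, seed=None):
    """Estimate store-wide on-hand accuracy from a fresh random sample.

    all_items: iterable of item identifiers in the store
    ledger: item -> system on-hand balance
    physical_count: item -> units physically located in a detailed count
    Returns the share of sampled records within +/- tolerance of the
    system quantity.
    """
    rng = random.Random(seed)
    # Draw a fresh sample each month so the estimate stays representative.
    group = rng.sample(sorted(all_items), min(size, len(all_items)))
    accurate = sum(
        1 for item in group
        if abs(physical_count[item] - ledger.get(item, 0))
        <= tolerance * max(ledger.get(item, 0), 1)
    )
    return accurate / len(group)
```

Tracked month over month, this single number is the accuracy trend to post in the store and review in executive meetings.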

Whether retailers like it or not, inventory accuracy is a KPI that customers are measuring anecdotally and it’s colouring their viewpoint on their shopping experience. Probably a good idea to actually measure and report on it properly, right?

If you’re doing a good job detecting and eliminating process defects that cause inaccurate inventory and continuously making corrections to erroneous records, then this should be reflected in your measurement counts over time. Who knows? If you can demonstrate a high level of accuracy on a continuously changing representative sample, maybe you can convince the Finance and Loss Prevention folks to do away with annual physical counts altogether.

The key to being in-stock


Abraham Lincoln is widely considered the greatest President in American history. He preserved the Union, abolished slavery and helped to strengthen and modernize government and the economy. He also led a fragile America through one of her darkest and most crucial periods – the American Civil War.

In the early days of the war, there were lots of competing ideas about how to secure victory and who should attempt it. Most of the generals at that time had concluded that the war could only be won through long, savage and bloody battles in the nation’s biggest cities – like Richmond, New Orleans and even Washington.

Lincoln – who taught himself strategy by reading obsessively – had a different plan. He laid out a large map and pointed to Vicksburg, Mississippi, a small city deep in the South. Not only did it control important navigation waterways, but it was also a junction of other rivers, as well as the rail lines that supplied Confederate armies and plantations across the South.

“Vicksburg is the key”, he proclaimed. “We can never win the war until that key is ours”.

As it turns out, Lincoln was right.

It would take years, blood, sweat and ferocious commitment to the cause, but the strategy he’d laid out was what won the war and ended slavery in America forever. Every other victory in the Civil War was possible because Lincoln had correctly understood the key to victory – taking the city that would split the South in half and gaining control of critical shipping lanes.

Lincoln understood the key. Understanding the key is paramount in life and in business.

It’s no secret that many retailers are struggling – especially in terms of the customer journey – most notably when it comes to retail out-of-stocks, which have, sadly, averaged around 8% for decades.

So what’s the key to finally ending out-of-stocks?

The key is speed and completeness of planning.

First, we all know that the retail supply chain can and should only be driven by a forward looking forecast of consumer demand – how much you think you’ll sell, by product and consumption location.

Second, everyone also agrees (though few understand the key to solving this thorn in our ass) that store/location on-hands need to be accurate.

But the real key is that, once these are in place, the planning process must be at least done daily and must be complete – from consumption to supplier.

Daily re-forecasting and re-planning is necessary to re-orient and re-synch the entire supply chain based on what did or didn’t sell yesterday. Forecasts will always be wrong and speedy re-planning is the key to mitigating forecast error.

However, that is not enough to sustain exceptionally high levels of daily in-stock. In addition, the planning process must be complete – providing the latest projections from consumption to supply, giving all trading partners their respective projections in the language in which they operate (e.g., units, volume, cube, weight, dollars). The reason is simple – all partners need to see, as soon as possible, the result of the most up to date plans. All plans are re-calibrated to help you stay in stock. And the process repeats, day in, day out.

We have retail clients that are achieving, long term, daily in-stocks of 98%+, regardless of the item, time of year or planning scenario.

They understand the key to making it happen.

Now you do too.

What Everybody Gets Wrong About Store Inventory Accuracy

 

Don’t build roadblocks out of assumptions. – Lorii Myers


Retailers are not properly managing the most important asset on their balance sheets – and it’s killing customer service.

I analyzed sample data from 3 retailers that do annual “wall to wall” physical counts. There were 898,526 count records in the sample across 92 stores. For each count record (active items only on the day of the count), the system on hand balance before the count was captured along with the physical quantity counted. The products in the sample include hardware, dry grocery, household consumables, sporting goods, basic apparel and all manner of specialty hardlines items. Each of the retailers reports annual shrink percentages that are in line with industry averages.

A system inventory record is considered to be “accurate” if the system quantity is adjusted by less than +/- 5% after the physical count is taken. Here are the results:

So 54% of inventory records were accurate within a 5% tolerance on the day of the count. Not good, right?

It gets worse.

For 19% of the total records counted (that’s nearly 1 in every 5 item/locations), the adjustment changed the system quantity by 50% or more!
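Both headline figures can be reproduced from the raw count records. A sketch, where each record is assumed to be a (system on hand before count, counted quantity) pair:

```python
def count_record_stats(records, tolerance=0.05, big=0.50):
    """records: list of (system_qty, counted_qty) pairs captured at count time.
    Returns (share accurate within +/- tolerance, share adjusted by 'big' or
    more), with adjustments measured relative to the pre-count system quantity.
    """
    n = len(records)
    accurate = sum(
        1 for s, c in records if abs(c - s) <= tolerance * max(abs(s), 1)
    )
    large = sum(
        1 for s, c in records if abs(c - s) >= big * max(abs(s), 1)
    )
    return accurate / n, large / n
```

Run over the full 898,526-record sample, the first figure came out at 54% and the second at 19%.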

Wait, there’s more!

In addition, I calculated simple in-stock measures before and after the count as follows:

Reported In Stock: Percentage of records where the system on hand was >0 just before the count

Actual In Stock: Percentage of records where the counted quantity was >0 just after the count
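Both measures fall straight out of the count records. As a sketch, with each record assumed to be a (system on hand before count, counted quantity) pair:

```python
def in_stock_measures(records):
    """records: list of (system_qty_before_count, counted_qty) pairs.
    Returns (reported in-stock, actual in-stock) as fractions of records."""
    n = len(records)
    # What the in-stock report claimed just before the count...
    reported = sum(1 for system_qty, _ in records if system_qty > 0) / n
    # ...versus what was physically on the shelf just after it.
    actual = sum(1 for _, counted_qty in records if counted_qty > 0) / n
    return reported, actual
```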

Here are the results of that:

Let’s consider what that means for a moment. If you ran an in-stock report based on the system on hand just before those records were counted, you would think that you’re at 94%. Not world class, but certainly not bad. However, once the lie is exposed on that very same day, you realize that the true in-stock (the one your customer sees) is 5% lower than what you’ve been telling yourself.

Sure, this is a specific point in time and we don’t know how long it took the inventory accuracy to degrade for each item/location, but how can you ever look at an in-stock report the same way again?

Further, when you look at it store by store, it’s clear that stores with higher levels of inventory accuracy experience a lesser drop in in-stock after the records are counted. Each of the blue dots on the scatterplot below represents one of the 92 stores in the sample:


A couple of outliers notwithstanding, it’s clear that the higher on hand accuracy is, the more truthful the in-stock measure is and vice-versa.

Now let’s do some simple math. A number of studies have consistently shown that an out-of-stock results in a lost sale for the retailer about 1/3 of the time. Assuming the 5% differential between reported and actual in-stock is structural, this means that having inaccurate inventory records could be costing retailers 1.67% of their topline sales. This is in addition to the cost of shrink.

So, a billion dollar retailer could be losing almost $17 million per year in sales just because of inaccurate on hands and nothing else.
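For the record, the arithmetic behind those two numbers:

```python
# Worked version of the arithmetic above. The 5% gap and the ~1/3 lost-sale
# rate come from the text; the billion-dollar topline is the illustration.
in_stock_gap = 0.05            # reported minus actual in-stock
lost_sale_rate = 1 / 3         # share of out-of-stocks that become lost sales
annual_sales = 1_000_000_000   # a billion-dollar retailer

lost_sales_share = in_stock_gap * lost_sale_rate      # ~1.67% of topline sales
lost_sales_dollars = annual_sales * lost_sales_share  # ~$16.7 million per year
```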

Let’s be clear, this isn’t like forecast accuracy where you are trying to predict an unknown future. And it’s not like the myriad potential flow problems that can arise and prevent product from getting to the stores to meet customer demands. It is an erosion in sales caused by the inability to properly keep records of assets that are currently observable in the physical world.

So why hasn’t this problem been tackled?

Red Herring #1: Our Shrink Numbers Are Good

Whenever we perform this type of analysis for a retailer, it’s not uncommon for people to express incredulity that their store inventory balances are so inaccurate.

“That can’t possibly be. Our shrink numbers are below industry average.”

To that, I ask two related questions:

  1. Who gives a shit about industry averages?
  2. What about your customers?

In addition to the potential sales loss, inaccurate on hands can piss customers off in many other ways. For example, if it hasn’t happened already, it won’t be long until you’re forced by competition to publish your store on hand balances on your website. What if a customer makes a trip to the store or schedules a pickup order based on this information?

The point here is that shrink is a financial measure, on hand accuracy is a customer service measure. Don’t assume that “we have low shrink” means the same thing as “our inventory management practices are under control”.

Red Herring #2: It Must Have Been Theft

It’s true that shoplifting and employee theft is a problem that is unlikely to be completely solved. Maybe one day item level RFID tagging will become ubiquitous and make it difficult for product to leave the store without being detected. In the meantime, there’s a limit to what can be done to prevent theft without either severely inconveniencing customers or going bankrupt.

But are we absolutely sure that the majority of inventory shrinkage is caused by theft? Using the count records mentioned earlier, here is another slice showing how the adjustments were made:

From the second column of this table, you can see that for 29% of all the count transactions, the system inventory balances were decreased by at least 1 unit after the count.

Think about that next time you’re walking the aisles in a store. If you assume that theft is the primary cause for negative adjustments, then by extension you must also believe that one out of every 3 unique items you see on the shelves will be stolen by someone at least once in the course of a year – and it could be higher than that if an “accurate” record on the day of the count was negatively adjusted at other times throughout the year. I mean, maybe… seems a bit much, though, don’t you think?

Now let’s look at the first column (count adjustments that increase the inventory balance). If you assume that all of the inventory decreases were theft, then – using the same logic – you must also believe that for one out of every 5 unique items, someone is sneaking product into the store and leaving it on the shelves. I mean, come on.

Perhaps there’s more than theft going on here.

Red Herring #3: The Problem Is Just Too Big

Yes, it goes without saying that when you multiply out the number of products and locations in retail, you get a large number of individual inventory balances – it can easily get into the millions for a medium to large sized retailer. “There’s no way that we can keep that many inventory pools accurate on a daily basis” the argument goes.

But the flaw in this thinking stems from the (unfortunately quite popular) notion that the only way to keep inventory records accurate is through counting and correcting. The problem with this approach (besides being highly labour intensive, inefficient and prone to error) is that it corrects errors that have already happened and does not address whatever process deficiencies caused the error in the first place.

This is akin to a car manufacturer noticing that every vehicle rolling off the assembly line has a scratch on the left front fender. Instead of tracing back through the line to see where the scratch is occurring, they instead just add another station at the end with a full time employee whose job it is to buff the scratch out of each and every car.

The problem is not about the large number of inventory pools, it’s about the small number of processes that change the inventory balances. To the extent that inventory movements in the physical world are not being matched with proper system transactions, a small number of process defects have the potential to impact all inventory records.

When your store inventory records don’t match the physical stock on hand, it must necessarily be a result of one of the following processes:

  • Receiving: Is every carton being scanned into the store’s inventory? Do you “blind receive” shipments from DCs or suppliers that have not demonstrated high levels of picking accuracy for the sake of speed?
  • POS Scanning and Saleable Returns: Do cashiers scan each and every individual item off the belt or do they sometimes use the mult key for efficiency? If an item is missing a bar code and must be keyed under a dummy product number, is there a process to record those circumstances to correct the inventory later?
  • Damage and Waste: Whenever a product is found damaged or expired, is it scanned out of the on hand on a nightly basis?
  • Store Use, Transformations, Transfers: If a product is taken from the shelf for use within the store (e.g. paper towels to clean up a mess) or used as a raw material for another product (e.g. flour taken from the pantry aisle to use in the bakery), is it stock adjusted out? Are store-to-store transfers or DC returns scanned out of the store’s inventory correctly before they leave?
  • Counting: Before a stock record is changed because of a count, are people making sure that they’ve located and counted all units of that product within the store or do they just “pencil whip” based on what they see in front of them and move on?
  • Theft: Are there more things that can be done within the store to minimize theft? Do you actively “transact” some of your theft when you find empty packaging in the aisle?

So how can retailers finally make a permanent improvement to the accuracy of their store on hands?

  • They need to actually care about it (losing 1-2% of top line sales should be a strong motivator)
  • They need to measure store on hand accuracy as a KPI
  • They need an approach whereby process failures that cause on hand errors can be detected and addressed
  • They need an efficient approach for finding and correcting discrepancies as the process issues are being fixed

Stay tuned for more on that.

Grandmaster Collaboration

Garry Kasparov is one of the world’s greatest ever chess grandmasters – reigning as World Champion for 15 years from 1985-2000, the longest such reign in chess history. Kasparov was a brilliant tactician, able to out-calculate his opponents and “see” many moves into the future.

In addition to his chess prowess, Kasparov is famous for the 1997 chess showdown, aptly billed as the final battle for supremacy between human and artificial intelligence. The IBM supercomputer, Deep Blue, defeated Kasparov in a 6 game match – the first time that a machine beat a reigning World Champion.

Of course chess is a natural game for the computational power of AI – Deep Blue reportedly being able to calculate over 200 million moves per second. Today, virtually all top chess programs that you and I can purchase are stronger than any human on earth.

The loss to Deep Blue intrigued Kasparov and made him think. He recalled Moravec’s paradox: machines and humans frequently have opposite strengths and weaknesses. There’s a saying that chess is “99 percent tactics” – that is, the short combinations of moves players use to get an advantage in position. Computers are tactically flawless compared to humans.

On the flip side, humans, especially chess Grandmasters were brilliant at recognizing strategic themes of positions and deeply grasping chess strategy.

What if, Kasparov wondered, the computer’s tactical prowess were combined with the human big-picture, strategic thinking that top Grandmasters had honed after years of play and positional study?

In 1998 he helped organize the first “advanced chess” tournament in which each human player had a machine partner to help during each game. The results were incredible and the combination of human/machine teams regularly beat the strongest chess computers (all of which were stronger than Kasparov). According to Kasparov, “human creativity was more important under these conditions”.

Since 2014, and to this day, “freestyle” chess tournaments have been held in which teams made up of humans and any combination of computers compete against each other, along with the strongest stand-alone machines. The human-machine combinations win most of the time.

In freestyle chess, the “team” is led by human executives, who draw on a stable of machine tactical advisers, deciding whose advice to probe in depth and, ultimately, the strategic direction to take the game in.

For us folks in supply chain, and especially in supply chain planning, there’s a lot to be learned from the surprisingly beneficial collaboration of chess grandmaster and supercomputer.

Humans excel at certain things. So do computers.

Combine them effectively, as Kasparov did, and you’ll undoubtedly get…

Grandmaster Collaboration.

Jimmy Crack Corn

 

Science may have found a cure for most evils; but it has found no remedy for the worst of them all – the apathy of human beings. – Helen Keller (1880-1968)


On hand accuracy.

It has been a problem ever since retailers started using barcode scanning to maintain stock records in their stores.

It’s certainly not the first time we’ve written on this topic, nor is it likely to be the last.

The real question is: Why is this such a pervasive problem?

I think I may have the answer: Nobody cares.

Okay, maybe that’s a little harsh. It’s probably more fair to say that there is a long list of things that retailers care about more than the accuracy of their on hands.

I’m not being judgmental, nor am I trying to invoke shame. I’m just making a dispassionate observation based on 25 years’ experience working in retail.

Whatever you think of the axiom “what gets measured gets managed” (NOT a quote from Peter Drucker), I would argue that it is largely true.

By that yardstick, I have yet to come across a single retailer who routinely measures the accuracy of their on hands as a KPI, even though – if you think about it – it wouldn’t be that difficult to do. Just send out a count list of a random sample of SKUs each month to every store and have them do a detailed count. Either the system record matches what’s physically there or it doesn’t.

Measuring forecast accuracy (the ability to predict an unknown future) seems to take up a lot more time and attention than inventory accuracy (the ability to keep a stock record in synch with a quantity that exists in the physical world right now). Yet the accuracy of on hand records has a much greater influence on the customer experience than forecast accuracy – by a very wide margin.

And on hand accuracy will only become more important as retailers expand customer delivery options to include click and collect and ship from store. Even “old school” shoppers (those who just want to go to the store to buy something and leave) will be expecting to check online to see how much a store has in stock before getting in their cars.

It’s quite clear that retailers should care about this more, so why don’t they?

Conflating Accuracy and Shrink

After a physical stock count, positive and negative on hand variances are costed and summed up. If the value of the system on hand drops by less than 2% of sales after the count adjustments are made, this is deemed to be a good result when compared to the industry as a whole. The conclusion is drawn that the management of inventory must therefore be under control and that on hand records must not be that far off.

The problem is that the positive and negative errors can still be large in magnitude yet cancel each other out in the total, hiding significant issues with on hand record accuracy by item/location – which is what the customer cares about. Shrink is a measure for accountants, not customers.
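A toy example (with made-up dollar values) shows how badly shrink can hide record-level inaccuracy:

```python
# Each pair is (system value, counted value) in dollars for one item/location.
items = [(100, 40), (100, 158), (100, 100), (100, 101)]

system_total = sum(s for s, _ in items)    # 400
counted_total = sum(c for _, c in items)   # 399

# Shrink nets the big positive and negative errors against each other.
shrink_pct = (system_total - counted_total) / system_total   # 0.25% -- "great"

# Record-level accuracy (within +/-5%) tells the customer's story.
accurate_share = sum(
    1 for s, c in items if abs(c - s) <= 0.05 * s
) / len(items)                                               # only half accurate
```

A 0.25% “shrink” would look excellent against industry benchmarks, yet half the records here are badly wrong.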

Store Replenishment is Manual Anyhow

It’s still common practice for many retailers to use visual shelf reviews for store replenishment. Department managers walk through the aisles with RF scanning guns, scan the shelf tags for items they want to order and use an app on the gun to place replenishment orders. Most often, this process is used when perpetual inventory capabilities don’t exist at store level, but it’s not uncommon to see it also being used even if stores have system calculated on hand balances. Why? Because there isn’t enough trust in the accuracy of the on hands to use them for automated replenishment. Hmmm…

It’s Perceived to be an Overwhelming Problem

It’s certainly true that the number of item/store inventory pools that need to be kept accurate can get quite large. The predominant thinking in retail is that the only way to make inventory records more accurate is to count each item more frequently. Do the math on that and many retailers conclude that the labour costs to maintain accurate inventory records will drive them into bankruptcy.

The problem with this viewpoint is that frequent counting and correcting isn’t really maintaining accurate records – it’s continuously fixing inaccurate records. A different way to look at it is not by the sheer volume of item/location records to be managed, but rather by the number of potential process failure points that could affect any given item in any given location.

Think about an auto assembly line where every finished car that rolls off has a 2 inch scratch on the right front fender. One option to address this problem is to set up an additional station at the end of the line to buff out the scratch on every single car that rolls through. This is analogous to the “count and correct” approach to managing inventory records – highly labour intensive and only addresses the problem after it has already occurred.

Another option would be to trace back through the process until you find where the scratch is occurring and why. Maybe there’s a bolt sticking out from a pass-through point that’s causing the scratch. Cut off the end of the bolt, no more scratches. Addressing this one point of process failure permanently resolves the root cause of the defect for every car that passes through the process.

Going back to our store on hand accuracy example, a retailer may have thousands or millions of item/store combinations, but the number of processes (potential points of failure) that change on hand balances is limited:

  • DC picking
  • Store receiving
  • Stock writedowns for damage or waste
  • Counts
  • Sales and saleable returns

For retailers who have implemented store perpetual inventory, each of these processes that affects the movement of physical stock has a corresponding transaction that changes the on hand balance accordingly. How carefully are those transactions being recorded for accuracy (versus speed)?

Are DC shipments regularly audited for accuracy? Do stores “blind receive” shipments only from highly reliable sources? Are there nightly procedures to scan out damaged or unsaleable goods? Is the store well organized so that all units of a particular item can be easily found before a physical count is done? Is every sale being properly scanned at the checkout?

Of course, the elephant (or maybe scapegoat?) in the room is theft. After all, there is no corresponding transaction for those stock movements. While there are certainly things that can be done to reduce theft, I consider it to be a self-evident fact that it won’t be eliminated completely anytime soon.

But before you assume that every negative stock adjustment “must have been theft”, are you totally certain that all of the other processes are being transacted properly?

Does it seem reasonable to assume that, for every single unique product whose on hand balance decreases after a physical count (typically 20-30% of all products in a store), all of those units were stolen since the last count?

And if we do assume that theft is the culprit in the vast majority of those cases, then what are we to assume about products whose on hand balances increase after being counted (typically 10-20% of all products in a store)? Are customers or employees sneaking items into the store, leaving them on the shelves and secretly leaving without being detected?

Setting theft aside, there’s still plenty that can be done by thoroughly examining and addressing the potential points of process failure that cause on hands to become inaccurate in the first place, while at the same time reducing the amount of time and money being spent on “counting and correcting”.

What’s Step 1 on this path?

You need to care.

Pissed Off People

Jim is basically your average bloke. One Saturday afternoon, about 25 years ago, he’s doing something a lot of average blokes do: cleaning his home – a small farmhouse in the west of England.

After some dusting, it’s time to vacuum. Like everyone at the time, he’s shocked at how quickly his top-of-the-line Hoover cleaner loses its suction power.

Jim is pissed. Royally pissed off. Madder than a wet hen.

So mad, in fact, that he took the cleaner out to his shed, took it apart and examined why it would lose suction power so quickly. After a few experiments he correctly deduced that fine dust blocked the filter almost immediately, which is why performance in conventional cleaners dips so fast.

Jim continued to be pissed until one day he visited a timber mill, looking for some wood. In those days, timber mills planed the logs on the spot for you. Jim watched his wood travel along until it reached a cyclone specifically designed to change the dynamics of airflow, separating the dust from the air via centrifugal force.

BOOM! James Dyson, still pissed at how shit traditional vacuum cleaners were, got the core idea of the Dyson cyclone cleaner. An idea that he would use to eventually deposit over £3 billion into his back pocket.

Unbelievably, it took Dyson three years and 5,127 small, incremental prototypes to finally “perfect” his design and revolutionize cleaning forever. Can you imagine how pissed you’d need to be to work, diligently, through that many iterations to finally see your idea through?

Dyson’s story is incredible and enlightening – offering us a couple of key insights into the innovative process.

First, most folks think that innovation happens as a result of ideas just popping into people’s heads. That’s missing the key piece of the puzzle: the problem! Without a problem, a flaw, a frustration, innovation cannot happen. As Dyson himself states, “creativity should be thought of as a dialogue. You have to have a problem before you can have game-changing innovation”.

Second, for innovative solutions to emerge you need pissed off people. People like Dyson who are mad, frustrated and generally peeved with current solutions and approaches for the problem at hand. So they are always thinking, connecting and, at times, creating a breakthrough solution – sometimes years after initially surfacing the problem. So, while it’s easy to say that the “idea” just happened, more often than not you’ve been mulling it over, subconsciously, because you’re pissed about something.

Here’s a true story about Flowcasting and how it eventually saw the light of day as a result of some pissed off people.

About 25 years ago, I was the leader of a team whose mandate was to improve supply chain planning for a large, very successful Canadian retailer. I won’t bore you with the details but eventually we designed, on paper, what we now call Flowcasting.

Problem was, it was very poorly received by the company’s Senior Leadership team, especially the Supply Chain executives. On numerous occasions I was informed that this idea would never work and that we needed to change the design. More than once I was also threatened with being fired if we didn’t change it.

Trouble was, our team loved the design and could see it potentially working. As I got more pressure and more “nevers” from the leadership team, I was getting more and more pissed. Royally pissed off, as a matter of fact.

As luck would have it, as a pissed off person, I didn’t back down (there’s a lesson here too – “never” is not a valid reason why something might not work, regardless of who says it). One person on the team suggested I contact Andre Martin, and he and his colleague, Darryl Landvater, helped us convince the non-believers that it would be the future and that we should pilot a portion of the design. The rest is, of course, history.

The Flowcasting saga didn’t stop there. As we were embarking on our early pilot of the DC-supplier integration, Andre and Darryl tried, unsuccessfully, to convince a few major technology planning vendors that an integrated solution, from store/consumption to supply was needed and that they needed to build it, from scratch.

All the major technology players turned them down, citing lots of “nevers” themselves as to why this solution was either not needed, or would not scale and/or work.

To be honest, it pissed them off, as they’ve admitted to me many times over the years.

So much so that, despite all the warnings from the experts, they “put their money where their mouth is” and built a Flowcasting solution that connects the store to supplier in an elegant, intuitive and seamless fashion – properly planning for crucial retail planning scenarios like slow sellers, promotions and seasonal items, just to name a few.

In 2015, using the concept of Flowcasting and the technology that they developed, a retailer seamlessly connected their supply chain from consumption to supply – improving in-stocks, sales and profits and instilling a process that facilitates any-channel planning however they wish to do it.

Sure, having a reasonably well thought out design was important. As was having a solution suited for the job.

But what really enabled the breakthrough were some pissed-off people!

What’s Good for the Goose

 

What’s good for the goose is good for the gander – Popular Idiom


Thinking in retail supply chain management is still evolving.

Which is a nicer way of saying that it’s not very evolved.

Don’t get me wrong here. It wasn’t that long ago that virtually no retailer even had a Supply Chain function. When I first started my career, retailers were just beginning to use the word “logistics” – a military term, fancy that! – in their job descriptions and org charts. At the time it was an acknowledgement that sourcing, inbound transportation, distribution and outbound transportation were all interrelated activities, not stand-alone functions.

A positive development, but “logistics” was really all about shipping containers, warehouses and trucks – the mission ended at the store receiving bay.

Time passed and barcode scanning at the checkouts became ubiquitous.

More time passed and many (though by no means a majority) medium to large sized retailers implemented scan based receiving and perpetual inventory balances at stores in a centralized system. This was followed quickly by computer assisted store ordering, and with that came the notion that store replenishment could be a highly automated, centralized function.

Shortly thereafter, retailers began to recognize that they needed more than just operational logistics, but true supply chain management – covering all of the planning and execution processes that move product from the point of manufacture to the retail shelf.

In theory, at least.

I say that because, even though most retailers of size have adopted the supply chain management vernacular and have added Supply Chain VP roles to their org structures, over the years I’ve heard some dubious “supply chain” discussions that tend to suggest that thinking hasn’t fully evolved past “trucks and warehouses”. Some of you reading this may find yourselves falling into this train of thought without even realizing it.

So how do you know if your thinking is drifting away from holistic supply chain thinking toward myopic logistics centric thinking?

An approach that we use is to apply the Goose and Gander Rule to these situations. If you find yourself advocating behaviour in the middle of the supply chain that seems nonsensical if applied upstream or downstream, then you’re not thinking holistically.

Here are a few examples:


The warehouse is overstocked. We can’t sell it from there, so let’s push it out to the stores.


At a very superficial level, this argument makes some sense. It is true that product can’t sell if it’s sitting in the warehouse (setting aside the fact that using this approach to transfer overstock from warehouses to stores generally doesn’t make it sell any faster).

Now suppose that a supplier unexpectedly shipped a truckload of product that you didn’t need to your distribution centre because they were overstocked. Would you just receive it and scramble to find a place to store it? Because that’s what happens when you push product to stores.

Or how would you feel if you were out shopping and as you were approaching the checkout, a member of the store staff started filling your cart with items that the store didn’t want to stock any more? Would you just pay for it with a shrug and leave?

I hate to break the news, but there is no such thing as “push” when you’re thinking of the retail supply chain holistically. The only way to liquidate excess inventory is to encourage a “pull” by dropping the price or negotiating a return. All pushing does is add more cost to the product and transfer the operational issues downstream.


If we increase DC to store lead times, we can have store orders locked in further in advance and optimize our operations.


Planning with certainty is definitely easier than planning with uncertainty, but where does it end? Do you increase store lead times by 2 days? 2 weeks? 2 months? Why not lock in store orders for the next full year?

Increasing lead times does nothing but make the supply chain less responsive and that helps precisely no one. And, like the “push” scenario described above, stores are forced to hold more inventory, so you’re improving efficiency at one DC, but degrading it in dozens of stores served by that DC.

Again, would you be okay with suppliers arbitrarily increasing order lead times to improve their operational efficiency at your expense?

Would you shop at a store that only allows customers in the door who placed their orders two days in advance?

Customers buy what they want when they want. There are things that can be done to influence their behaviour, but it can’t be fully controlled in such a way that you can schedule your supply chain flow to be a flat line, day in and day out.


We sell a lot of slow moving dogs. We should segregate those items in the DC and just pick and deliver them to the stores once a month.


The first problem with this line of thinking is that “slow moving” doesn’t necessarily mean “not important to the assortment”.

Also, aren’t you sending 1 or 2 (or more) shipments a week to the same stores from the same building anyhow?

When’s the last time you went shopping for groceries and were told by store staff that, even though you need mushroom soup today, they only sell mushroom soup on alternate Thursdays?

Listen, I’m not arguing that retailers’ logistics operations shouldn’t be run as efficiently as possible. You just need to do it without cheating.

We need to remember that the SKU count, inventory and staff levels across the store network are many times greater than those of the logistics operations. Employing tactics that hurt the stores in order to improve KPIs in the DCs or Transport operations is tantamount to cutting off your nose to spite your face.

Covered in Warts

It’s the early 1990s and Joanne is down on her luck. Recently divorced, a single mother and jobless, she decides to move back from England to Scotland to at least be closer to her sister and family.

During her working days in Manchester she had started scribbling some ideas and notes for a nonsensical book idea and, by the time she’d moved home, had written three chapters of a book. Once back near Edinburgh, she continued to write and improve her manuscript until she had a first draft completed in 1995 – fully five years from her first penned thoughts.

During the next two years she pitched the very rough manuscript to a dozen major publishers. They all rejected it, believing the story would not resonate with readers and that, as a result, sales would be dismal.

Undaunted, she eventually convinced Bloomsbury to take a very small chance on the book – advancing her a paltry £1,500 and agreeing to print 1,000 copies, 500 of which would be sent to various libraries.

In 1997 and 1998 the book, Harry Potter and the Philosopher’s Stone by J.K. Rowling, would win both the Nestlé Smarties Book Prize and the British Book Awards Children’s Book of the Year. That book would launch Rowling’s worldwide success and, to date, her books have sold over 400 million copies.

The eventual success of the Harry Potter series of books is very instructive for breakthroughs and innovation.

The most important breakthroughs—the ones that change the course of science, business, or history — are fragile. They rarely arrive dazzling everyone with their brilliance.

Instead, they often arrive covered in warts — the failures and seemingly obvious reasons they could never work that make them easy to dismiss. They travel through tunnels of skepticism and uncertainty, their champions often dismissed as crazy.

Luckily, most of the champions of breakthrough ideas are what many would describe as loons – people who refuse to give up on their ideas and will work, over time, to smooth and eliminate the warts.

When it comes to supply chain planning innovation, you’d have to put Andre Martin into the loon category as well.

In the mid 1970’s Andre invented Distribution Resource Planning (DRP) and, along with his colleague Darryl Landvater, designed and implemented the first DRP system in 1978 – connecting distribution to manufacturing and changing planning paradigms forever.

Most folks don’t know but around that time Andre saw that the thinking of DRP could be extended to the retail supply chain – connecting the store to the factory using the principles of DRP and time-phased planning.

The idea, which has since morphed into what is now labelled Flowcasting, was covered in warts. Over the course of the last 40 years Andre and Darryl have refined the thinking, smoothed the warts, eliminated dissension, educated an industry and, unbelievably, built a solution that enables Flowcasting.

I’ve been a convert and a colleague in the wart-reduction efforts over the last 25 years – experiencing first-hand some irrational responses and views from, first, a large Canadian retailer, and more recently the market in general.

But, like J.K.’s, the warts are largely being exposed as mere pimples, and people and retailers are seeing the light – the retail supply chain can only deliver if it’s connected from consumer to supplier, driven only by a forecast of consumer demand. Planned and managed using the principles of Flowcasting.

The lesson here is to realize that if you think you’ve got a breakthrough idea, there’s a good chance it’ll be covered in warts and will need time, effort, patience and determination to smooth and eliminate them.

It can, however, be done.

And you can do it.

Godspeed.

Managing the Long Tail

If you don’t mind haunting the margins, I think there is more freedom there. – Colin Firth


 

A couple of months ago, I wrote a piece called Employing the Law of Large Numbers in Bottom Up Forecasting. The morals of that story were fourfold:

  1. That when sales at item/store level are intermittent (fewer than 52 units per year), a proper sales pattern at that level can’t be determined from the demand data at that level.
  2. That any retailer has a sufficient percentage of slow selling item/store combinations that the problem simply can’t be ignored in the planning process.
  3. That using a multi level, top-down approach to developing properly shaped forecasts in a retail context is fundamentally flawed.
  4. That the Law of Large Numbers can be used in a store centric fashion by aggregating sales across similar items at a store only for the purpose of determining the shape of the curve, thereby eliminating the need to create any forecasts above item/store level.

A high level explanation of the Profile Based Forecasting approach developed by Darryl Landvater (but not dissimilar to what many retailers were doing for years with systems like INFOREM and various home grown solutions) was presented as the antidote to this problem. Oh and by the way, it works fabulously well, even with such a low level of “sophistication” (i.e. unnecessary complexity).

But being able to shape a forecast for intermittent demands without using top-down forecasting is only one aspect of the slow seller problem. The objective of this piece is to look more closely at the implications of intermittent demands on replenishment.

The Bunching Problem

Regardless of how you provide a shape to an item/store forecast for a slow selling item (using either Profile Based Forecasting or the far more cumbersome and deeply flawed top-down method), you are still left with a forecasted stream of small decimal numbers.

In the example below, the shape of the sales curve cannot be determined using only sales history from two years ago (blue line) and the most recent year (orange line), so the pattern for the forecast (green dashed line) was derived from an aggregation of sales of similar items at the same store and multiplied through the selling rate of the item/store itself (in this case 13.5 units per year):

You can see that the forecast indeed has a defined shape – it’s not merely a flat line that would be calculated from intermittent demand data with most forecasting approaches. However, when you multiply the shape by a low rate of sale, you don’t actually have a realistic demand forecast. In reality, what you have is a forecast of the probability that a sale will occur.

Having values to the right of the decimal in a forecast is not a problem in and of itself. But when the value to the left of the decimal is a zero, it can create a huge problem in replenishment.

Why?

Because replenishment calculations always operate in discrete units and don’t know the difference between a forecast of true demand and a forecast of a probability of a sale.

Using the first 8 weeks of the forecast calculated above, you can see how time-phased replenishment logic will behave:

The store sells 13 to 14 units per year, has a safety stock of 2 units and 2 units in stock (a little less than 2 months of supply). By all accounts, this store is in good shape and doesn’t need any more inventory right now.

However, the replenishment calculation is being told that 0.185 units will be deducted from inventory in the first week, which will drive the on hand below the safety stock. An immediate requirement of 1 unit is triggered to ensure that doesn’t happen.

Think of what that means. Suppose you have 100 stores in which the item is slow selling and the on hand level is currently sitting at the safety stock (not an uncommon scenario in retail). Because small decimal forecasts trigger immediate requirements at all of those stores, the DC needs to ship out 100 pieces to support sales of fewer than 20 pieces at store level – demand has been distorted by 500%.

Now, further suppose that this isn’t a break-pack item and the ship multiple to the store is an inner pack of 4 pieces – instead of 100 pieces, the immediate requirement would be 400 pieces and demand would be distorted by 2,000%!
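
To make the bunching mechanics concrete, here’s a minimal sketch of time-phased replenishment logic in Python. This is an illustration of the behaviour described above, not actual planning-system code; the function name and the forecast values are assumptions for demonstration:

```python
# Illustrative sketch: simple projected-on-hand replenishment logic
# fed a decimal (probability) forecast. All names and values are
# assumed for demonstration purposes only.

def planned_orders(on_hand, safety_stock, forecasts, order_multiple=1):
    """Return the planned order quantity per period, ordering whenever
    the projected on hand would fall below safety stock."""
    orders = []
    projected = on_hand
    for f in forecasts:
        projected -= f          # the logic "deducts" the decimal forecast
        qty = 0
        while projected < safety_stock:
            qty += order_multiple
            projected += order_multiple
        orders.append(qty)
    return orders

# 2 units on hand, safety stock of 2: even a 0.185-unit decimal
# forecast drives the projection below safety stock in week 1,
# triggering an immediate 1-unit requirement.
weekly_forecast = [0.185, 0.185, 0.308, 0.308, 0.308, 0.308, 0.185, 0.185]
print(planned_orders(2, 2, weekly_forecast))   # → [1, 0, 0, 0, 1, 0, 0, 0]

# With an inner pack of 4, the same decimal forecast triggers 4 pieces.
print(planned_orders(2, 2, [0.185], order_multiple=4))   # → [4]
```

Run across 100 stores sitting at safety stock, this same logic produces the 100-piece (or 400-piece) day-one surge described above.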

The Antidote to Bunching – Integer Forecasts

What’s needed to prevent bunching from occurring is to convert the forecast of small decimals (the probability of a sale occurring) into a realistic forecast of demand, while still retaining the proper shape of the curve.

This problem has been solved (likewise by Darryl Landvater) using simple accumulator logic with a random seed to convert a forecast of small decimals into a forecast of integers.

It works like this:

  • Start with a random number between 0 and 1
  • Add this random number to the decimal forecast of the first period
  • Continue adding the forecasts for subsequent periods to the accumulation; each time the accumulated value “tips over” to the next whole number, place a forecast of 1 unit in that period
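
The accumulator can be sketched in a few lines of Python. This is a hedged illustration of the idea described above, not the actual product logic; the forecast values and seed are assumptions for demonstration:

```python
import random

def integerize(decimal_forecast, seed=None):
    """Convert a decimal (probability-of-sale) forecast into an integer
    forecast while preserving the shape of the curve, using the
    accumulator-with-random-seed idea. Assumes slow sellers, i.e. each
    period's decimal forecast is below 1, so at most one unit can
    "tip over" per period."""
    rng = random.Random(seed)
    acc = rng.random()              # random start between 0 and 1
    integer_forecast = []
    for f in decimal_forecast:
        acc += f
        if acc >= 1.0:              # accumulation "tips over"
            integer_forecast.append(1)
            acc -= 1.0
        else:
            integer_forecast.append(0)
    return integer_forecast

decimals = [0.185, 0.185, 0.308, 0.308, 0.308, 0.308, 0.185, 0.185]
print(integerize(decimals, seed=42))
```

Because the seed differs by item/store, the tip-over timing varies across stores, which is exactly what de-bunches the demand on the DC.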

Here’s our small decimal forecast converted to integers in this fashion:

Because a random seed is being used for each item/store, the timing of the first integer forecast will vary by each item/store.

And because the accumulator uses the shaped decimal forecast, the shape of the curve is preserved. In faster selling periods, the accumulator will tip over more frequently and the integer forecasts will likewise be more frequent. In slower periods, the opposite is true.

Below is our original forecast after it has been converted from decimals to integers using this logic:

And when the requirements across multiple stores are placed back on the DC, they are not “bunched” and a more realistic shipment schedule results:

Stabilizing the Plans – Variable Consumption Periods

Just to stay grounded in reality, none of what has been described above (or, for that matter, in the previous piece Employing the Law of Large Numbers in Bottom Up Forecasting) improves forecast accuracy in the traditional sense. This is because, quite frankly, it’s not possible to predict with a high degree of accuracy the exact quantity and timing of 13 units of sales over a 52 week forecast horizon.

The goal here is not pinpoint accuracy (the logic does start with a random number after all), but reasonableness, consistency and ease of use. It allows for long tail items to have the same multi-echelon planning approach as fast selling items without having separate processes “on the side” to deal with them.

For fast selling items with continuous demand, it is common to forecast in weekly buckets, spread the weekly forecast into days for replenishment using a traffic profile for that location and consume the forecast against actuals to date for the current week:

In the example above, the total forecast for Week 1 is 100 units. By end of day Wednesday, the posted actuals to date totalled 29 units, but the original forecast for those 3 days was 24 units. The difference of -5 units is spread proportionally to the remainder of the week such as to keep the total forecast for the week at 100 units. The assumption being used is that you have higher confidence in the weekly total of 100 units than you have in the exact daily timing as to when those 100 units will actually sell.
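
As a hedged sketch (assumed daily split and function name, with the week’s numbers taken from the example), the consumption logic might look like this:

```python
def consume_forecast(daily_forecast, actuals_to_date):
    """Re-spread a week's daily forecast against posted actuals,
    keeping the weekly total constant. The difference between actuals
    and the original forecast for the elapsed days is absorbed
    proportionally by the remaining days."""
    week_total = sum(daily_forecast)
    days_elapsed = len(actuals_to_date)
    remaining = daily_forecast[days_elapsed:]
    # Whatever hasn't sold yet is spread over the remaining days
    # in proportion to their original forecasts.
    left_to_sell = week_total - sum(actuals_to_date)
    remaining_total = sum(remaining)
    respread = [f / remaining_total * left_to_sell for f in remaining]
    return actuals_to_date + respread

# Mon-Wed were forecast at 24 units but sold 29; the -5 difference is
# absorbed by the rest of the week so the total stays at 100.
original = [8, 8, 8, 19, 19, 19, 19]   # assumed daily split totalling 100
print(consume_forecast(original, [10, 9, 10]))
```

For slow movers, the same function applies unchanged; only the consumption period passed in would cover several weeks rather than one.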

For slow moving items, we would not even have confidence in the weekly forecasts, so consuming forecast against actual for a week makes no sense. However, there would still be a need to keep the forecast stable in the very likely event that the timing and magnitude of the actuals don’t match the original forecast. In this case, we would consume forecast against actuals on a less frequent basis:

The logic is the same, but the consumption period is longer to reflect the appropriate level of confidence in the forecast timing.

Controlling Store Inventory – Selective Order Release

Let’s assume for a moment a 1 week lead time from DC to store. In the example below, a shipment is planned in Week 2, which means that in order to get this shipment in Week 2, the store needs to trigger a firm replenishment right now:

Using standard replenishment rules that you would use for fast moving items, this planned shipment would automatically trigger as a store transfer in Week 1 to be delivered in Week 2. But this replenishment requirement is being calculated based on a forecast in Week 2 and as previously mentioned, we do not have confidence that this specific quantity will be sold in this specific week at this specific store.

When that shipment of 1 unit arrives at the store (bringing the on hand up to 3 units), it’s quite possible that you won’t actually sell it for several more weeks. And the overstock situation would be further exacerbated if the order multiple is greater than 1 unit.

This is where having the original decimal forecast is useful. Remember that, as a practical matter, the small decimals represent the probability of a sale in a particular week. This allows us to calculate a tradeoff between firming this shipment now or waiting for the sale to materialize first.

Let’s assume that choosing to forgo the shipment in Week 2 today means that the next opportunity for a shipment is in Week 3. In the example below, we can see that there is a 67.8% chance (0.185 + 0.185 + 0.308) that we will sell 1 unit and drop the on hand below safety stock between now and the next available ship date:

Based on this probability, would you release the shipment or not? The threshold for this decision could be determined based on any number of factors such as product size, cost, etc. For example, if an item is small and cheap, you might use a low probability threshold to trigger a shipment. If another slow selling item is very large and expensive, you might set the threshold very high to ensure that this product is only replenished after a sale drives the on hand below the safety stock.
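
The release decision above can be sketched as follows. This is an illustrative assumption of how such a check might be coded (the function name and thresholds are invented for demonstration; the probabilities are the example’s):

```python
def should_release(decimal_forecast, weeks_until_next_ship, threshold):
    """Decide whether to firm a shipment now: sum the probabilities of
    a sale between now and the next available ship date and compare
    against an item-specific threshold."""
    p_sale = sum(decimal_forecast[:weeks_until_next_ship])
    return p_sale >= threshold, p_sale

# From the example: 0.185 + 0.185 + 0.308 = 0.678 chance of a sale
# before the next ship opportunity.
probs = [0.185, 0.185, 0.308, 0.308]

# Small, cheap item: low threshold, so the shipment fires now.
release, p = should_release(probs, 3, threshold=0.5)
print(release, round(p, 3))

# Large, expensive item: high threshold, so we wait for a sale.
release, p = should_release(probs, 3, threshold=0.9)
print(release, round(p, 3))
```

The threshold becomes the single policy lever per item, which keeps slow sellers inside the same planning process as everything else rather than in a side system.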

Remember, the probabilities themselves follow the sales curve, so an order has a higher probability of triggering in a higher selling period than in a lower selling period, which would be the desired behaviour.

The point of all of this is that the same principles of Flowcasting (forecast only at the point of consumption, every item has a 52 week forecast and plan, only order at the lead time, etc.) can still apply to items on the long tail, so long as the planning logic you use incorporates these elements.

Ordinary Observation


It’s September 28, 1928 in a West London lab. A young physician, Alex, is doing some basic research that has been assigned to him regarding antibacterial agents. He’d been doing the same thing for a number of days when he noticed something odd.

What caught his eye and attention that fateful day was that mold had actually killed some bacteria in one of his plates. Usually samples like this are discarded, but instead Alex kept this sample and began to wonder. If this mold could kill this type of bacteria, could it be used to kill destructive bacteria in the human body?

Alexander Fleming would spend the next 14 years working out the kinks and details before “penicillin” was officially used to treat infections. His discovery of the world’s first antibiotic would revolutionize medicine.

Dr. Fleming was able to develop this innovation through the simple power of ordinary observation. Sherlock Holmes once famously said to Watson: “You see, but you do not observe. The distinction is clear.” According to psychologist and writer Maria Konnikova, “To observe, you must learn to separate situation from interpretation, yourself from what you are seeing.”

Here’s another example of the power of observation. Fast forward to 1955 and a relatively unknown, small furniture store in Älmhult, Sweden. One day, the founder and owner noticed something odd. An employee had purchased a table to take home to his family. Rather than struggling to cram the assembled table into his car, this employee took the legs off and carefully placed them in a box, which, in turn, would fit nicely in his car for the trip home.

As it turned out, the owner of the store, Ingvar Kamprad, would observe this unpacking phenomenon regularly. Carefully he observed what his employees were doing and why it was so effective. And, if this concept was better for his employees, it stood to reason that it would also be better for his customers – and the bottom line.

Soon after, Kamprad would work tirelessly to perfect the idea of selling disassembled furniture – changing the customer journey for furniture acquisition forever, and making IKEA synonymous with this brand promise and a worldwide household name. All because of the power of ordinary observation.

A final story about observation and its impact on supply chain planning.

Ken Moser is one of Canada’s top retailers – leading and managing one of Canadian Tire’s best stores in northern Ontario. About 15 years ago, he was visited by a chap who would eventually build the world’s first and, to date, best Flowcasting solution.

This person followed Ken around the store, asking questions and observing how the store operated and how Ken thought – particularly about how to manage the inventory of tens of thousands of items. Rumour has it that when Ken got to a section of the store, he proclaimed something like: “These items are set-it-and-forget-it. I have no idea when they’ll sell, and neither do you. All I know is that, like clockwork, they’ll only sell one a month. For others, it’s like one every quarter.”

Our Flowcasting architect was fascinated with this observation and spent time watching/observing customers perusing this section of the store. And like the two examples above, deep observation and reflection would eventually morph into an approach to forecasting and planning slow selling items that is, to date, the only proven solution in retail. All from the awesome power of ordinary observation.

Yogi Berra, the great Yankee catcher and sometimes philosopher, hit the nail on the proverbial head regarding the importance of ordinary observation when he proclaimed…

You can observe a lot, just by watching.

Turns out, you can.