The Great Lever of Power

I shan’t be pulling the levers there, but I shall be a very good back-seat driver. – Margaret Thatcher


A number of years ago, I saw a television interview with President Ronald Reagan after he left office. In that interview, he reminisced about his political career, including when he first stepped into the Oval Office in 1981.

I can’t find any transcripts or direct quotes from that interview, but I do distinctly remember him saying something to the effect of: “Before I assumed the presidency, I imagined a great lever of power on the Resolute Desk. When I took office, I learned that the lever actually existed – but it wasn’t connected to anything.” (If anyone out there has the exact quote, please share!)

I think of that whenever I hear senior leaders in retail say things like “our inventory is too high – we need to get it under control”.

What often follows this declaration is a draconian set of directives to “bring the inventory down”:

  • “Look at all of our outstanding purchase orders and cancel anything that’s not needed”
  • “We can’t sell excess stock out of the DCs, so return as much as possible and push the rest out to the stores where it can sell”

[One quarter later…]:

  • “Oh shit, our in-stock has nosedived and we’re losing sales! Buy! Buy! Buy!”

Rinse and repeat.

It has been described to me as a “swinging pendulum” in terms that would lead one to believe that these inventory imbalances are cyclical in nature, like the rate of inflation in the economy: when inflation gets too high, the central bank steps in with an interest rate hike to steer it back to an acceptable range.

A couple of problems with that:

  1. The behaviour of consumers drives the inflation rate and this behaviour can’t be directly controlled. In contrast, the processes that drive inventory flow are internal to the retailer and, as such, are directly controllable.
  2. The pendulum swings themselves are caused by management’s efforts to control the pendulum swings – that popping sound you heard was my head exploding.

I should note that I rarely hear “We need to review our inventory management policies and processes to determine what’s causing our inventory levels to be higher than expected, so that we can improve the process to ensure that we can flow stock better in the future without sacrificing in stock.”

Inventory is not an “input variable” that can be directly manipulated by management and brought to “the right level” in the aggregate. It is an output of policies and processes being executed day in, day out for every item at every location over a period of time. Believing that inventory levels can be directly controlled with blunt instruments is like believing that you can directly impact your gross margin without changing the price or the cost (or both).
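To make that concrete, here’s a toy simulation (a sketch only – every number in it is invented) of a single item replenished with a simple order-up-to policy. Notice that nowhere does anyone “set” the inventory level; it falls out of the policy parameters and the demand:

```python
import random

random.seed(42)

# Illustrative only: one item replenished with a daily order-up-to policy.
# The inventory level is never "set" directly -- it emerges from the policy
# parameters (lead time, order-up-to level) and from the demand stream.
daily_demand_mean = 5      # assumed average units sold per day
lead_time_days = 7         # assumed supplier lead time
order_up_to = 60           # the policy "input" that management actually owns

on_hand = order_up_to
on_order = []              # list of (arrival_day, qty)
history = []

for day in range(90):
    # receive anything arriving today
    on_hand += sum(qty for arrival, qty in on_order if arrival == day)
    on_order = [(arrival, qty) for arrival, qty in on_order if arrival != day]

    # sell (demand is random and not directly controllable by the retailer)
    sales = min(on_hand, random.randint(0, 2 * daily_demand_mean))
    on_hand -= sales

    # daily review: order back up to the target inventory position
    position = on_hand + sum(qty for _, qty in on_order)
    if position < order_up_to:
        on_order.append((day + lead_time_days, order_up_to - position))

    history.append(on_hand)

print(f"Average on hand over 90 days: {sum(history) / len(history):.1f} units")
```

Change the order-up-to level or the lead time and the average moves – that’s the process being changed, not the inventory being “commanded” to a level.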

It may sound trite, but if management doesn’t like the output of the process, then they must necessarily be taking issue with the process inputs or the process itself (both of which, by the way, are owned by management).

On the input side:

  • Are your stocking policies excessive compared to variability in demand?
  • Are you purchasing in higher quantities or with higher lead times than you used to (e.g. container loads from overseas versus pallets from a domestic source)?
  • Are you buffering poor inbound performance from suppliers with more safety stock?

On the process side:

  • Are demand planners striving to predict what will happen in an unbiased way or are they encouraged to be optimistic?
  • Are people buying first and figuring out how to sell it later?
  • Is your inventory higher because your sales have been increasing?

Management does not “own results”.

Management owns the processes that give rise to the results. If you make the determination that “inventory is too high” and you don’t know why, then you’re not doing your job.

Or to put it another way:

The aim of leadership should be to improve the performance of man and machine, to improve quality, to increase output, and simultaneously to bring pride of workmanship to people. Put in a negative way, the aim of leadership is not merely to find and record failures of men, but to remove the causes of failure: to help people to do a better job with less effort. – W. Edwards Deming

The beauty of being wrong


Let’s be honest, no one likes to be wrong. From early schooling and continuing through our careers, we’ve been conditioned to do our best to be right. It keeps us out of trouble, builds our self-esteem and helps us progress.

But what if our views on being wrong were, well, wrong?

There is growing research from a number of disciplines showing that, in order to improve, grow, innovate and lead, you need to be able to question your own thinking – allowing different ideas and views to be heard and, essentially, being humble enough to admit that you might be wrong about what you think you know.

To illustrate this point of view, consider Julia Galef, co-founder of the Center for Applied Rationality, who asks a beautiful, metaphorical question: “Are you a soldier or a scout?”

Soldiers defend and protect. Scouts, in contrast, seek and try to understand. In her view (and that of a number of others), this worldview shapes how you process information and develop ideas, and guides your ability to change.

The mindset of a scout is anchored in curiosity. They love to learn, feel intrigued when something new contradicts their previous views and they are also extremely grounded: their self-worth as a person or teammate isn’t tied to how right or wrong they are about a specific topic.

Scouts have what many refer to as “intellectual humility” – a term that has been popularized in the past several years by a number of influential folks, including Google’s Laszlo Bock and University of Virginia Professor Edward Hess, who even penned a brilliant book entitled “Humility Is the New Smart”. According to Hess, in order to compete you need to assume the role of lifelong humble inquirer.

Intellectual humility is loosely defined as “a state of openness to new ideas, and a willingness to be receptive to new sources of evidence”.  At the heart of intellectual humility are questions.  Scouts ask lots of questions and are comfortable with all sorts of answers.

In short, scouts have a completely different view about being wrong. They’re actually cool with it. They embrace it. They understand that being wrong is as important as being right – because being wrong helps you learn, change, iterate and, ultimately, make breakthroughs.

While you might feel a tad uncomfortable with this assertion, I would contend that virtually every major innovation, change or scientific breakthrough started out, at some point, being “wrong”.

While I’m not sure I’d consider myself a scout (though I do like the term and metaphor), I can surely confess I’ve been wrong a lot.  Maybe even more often than I’ve been right.

As just one example, a few years ago I was on a team working to achieve inventory accuracy in stores.  We’d followed a sensible approach that we’ve outlined in previous newsletters – frequently counting a control group of items to uncover and correct the root causes of the errors.

The team had surfaced and resolved some important discipline and housekeeping errors and consistently had the control group of products at 92–94% accuracy. Unfortunately, I helped to convince the team that we needed to get to virtually 100% accurate before we could roll it out to all stores.

Unfortunately, I was wrong.

A couple of years ago I was talking to a colleague who has more experience and, importantly, a different view.  He asked me why I thought that we needed to be 100% accurate and, during the discussion, politely reminded me that “perfection is the enemy of great”.  According to him, what we’d done could have been rolled out to all stores, instead of only rolling out minor procedural changes.

Here’s a great example of why I think being wrong is actually pretty instructive.  This learning helped me change my perspective on change and altered my thinking and approach.  Now “perfection” will never be the goal and good enough truly is – since good enough gets done (implemented) and done can be built on and improved.

Now, I’m not saying you should try to be wrong.  It’s just that being wrong has gotten a bad rap. Innovation and change require that you get good and comfortable with the notion of being wrong.  Wrong leads to right.

Of course, that’s my view and, well, I could be wrong.

On Shelf Symbiosis (Robots Optional)

 

The cows shorten the grass, and the chickens eat the fly larvae and sanitize the pastures. This is a symbiotic relation. – Joel Salatin


Daily In Stock.

It’s the gold standard measure of customer service in retail. The inventory level for each item at each selling location is evaluated independently on a daily basis to determine whether or not you are “in stock” for that item at that store.

The criteria to determine whether or not you are “in stock” can vary (e.g. at least one unit on hand, enough to cover forecasted sales until the next shipment arrives, X% of minimum display stock covered, etc.), but the intent is the same. To develop a single, quantifiable metric that represents how well customers are being served (at least with regard to inventory availability).

One strength of this measure is that – unless you get crazy with conditions and filters – it’s relatively easy to calculate with available information. A simple version is as follows:

  • Collect nightly on hands for all item/locations where there is a customer expectation that the store should have stock at all times (e.g. currently active planogrammed items)
  • If there’s at least 1 unit of stock recorded, that item/location is “in stock” for that day. If not, that item/location is “out of stock” for that day.
  • Divide the number of “in stock” records by the number of item/locations in the population and that’s your quick and easy in stock percentage.
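If you wanted to compute that quick-and-easy version from a nightly snapshot, it’s only a few lines of code. Here’s a minimal sketch (the field names are assumptions, not anyone’s actual schema):

```python
from typing import Dict, List

def daily_in_stock(snapshot: List[Dict]) -> float:
    """Quick and easy in stock %: the share of active item/locations
    with at least 1 unit on hand in last night's snapshot."""
    # keep only item/locations the customer expects to find in stock
    population = [rec for rec in snapshot if rec["active_planogram"]]
    if not population:
        return 0.0
    in_stock = sum(1 for rec in population if rec["on_hand"] > 0)
    return 100.0 * in_stock / len(population)

# toy example: one item in stock, one out, one not on the planogram
snapshot = [
    {"item": "A", "store": 1, "on_hand": 3, "active_planogram": True},
    {"item": "B", "store": 1, "on_hand": 0, "active_planogram": True},
    {"item": "C", "store": 1, "on_hand": 0, "active_planogram": False},
]
print(f"{daily_in_stock(snapshot):.1f}%")   # 50.0%
```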

By calculating this measure daily, it becomes less necessary to worry about selling rates in the determination. If an item/location is in stock with 2 units today, but the selling rate is 5 units per day, it stands to reason that the same item/location will be out of stock tomorrow. What’s important is not so much the pure precision of the measure as the fact that it’s evaluated daily and moving in the right direction.

Using this measure, people can picture the physical world the customer is seeing. If your in-stock is 94% at a particular store on a particular day, then that means that 6% of the shelf positions in that store were empty, representing potential lost sales.

Here’s the problem, though: Customers don’t care about the percentage of the time that your digital stock records are >0 (or some other formula) – they want physical products on the shelf to buy.

That’s the major weakness of the in stock measure – in order to interpret it as a true customer service measure, the following (somewhat dubious) assumptions must be made:

  1. The number of units of an item that the system says is in the store is actually physically in the store. You can deduct 5 points from your in stock just by making this assumption alone.
  2. Even if assumption #1 is true, you then need to assume that the inventory within the 4 walls of the store is in a customer accessible location where they would expect to find it.

That’s where shelf scanning robots come in – quiet, unassuming sentinels traversing the aisles to find those empty shelves and alert staff to take action.

As cool and futuristic as that notion is, it must be noted that this is still a reactive approach, no matter how quickly the holes can be spotted.

The real question is: Why did the shelf become empty in the first place?

Let’s consider that in the context of our 2 assumptions:

  1. It could very well be that a shortage of stock is the result of shitty planning. But for the sake of argument, let’s say that you have the most sophisticated and responsive planning process and system in the world. If there is no physical stock anywhere in the store, but the planning system is being told that the store is holding 12 units, what exactly would you expect it to do? Likewise, if there is “extra” physical stock in the store that’s not accounted for in the on hand balance, the replenishment system will be sending more before it’s actually needed, which results in a different set of problems – more on that later.
  2. To the extent that physical stock exists in the 4 walls of the store (whether the system inventory is accurate or not) and it is not in a selling location, the general consensus is that this is a stock management issue within the store (hence the development of robots to more quickly and accurately find the holes).

While the use of a daily recalculating planning process is the best way to achieve high levels of in stock, more needs to be done to ensure that the in stock measure more closely resembles on shelf availability, which is what the customer actually sees.

Instituting a store inventory accuracy program to find and permanently fix the process failures that cause mismatches between the stock records and the physical goods to occur in the first place will make the in stock measure more reliable from a “what’s in the 4 walls” perspective.

Flowing product directly from the back door to the shelf location as a standard operating procedure gives confidence that any stock that is within the store is likely on the shelf (and, ideally, only on the shelf). This goes beyond just speeding up receiving and putaway (although that could be a part of it). It’s as much about lining up the space planning, replenishment planning and physical flow of goods such that product arrives at the store in quantities that can fit on the shelf upon arrival. This really isn’t super sophisticated stuff:

  1. From the space plan, how much capacity (in units) is allocated to the item at the store? How much of that capacity is “reserved” by the minimum display quantity?
  2. Is the number of units in a typical shipment less than the remaining shelf space after the minimum display quantity is subtracted from the shelf capacity?

If the answer to question 2 is “no”, then you’re basically guaranteeing that at least some of the inbound stock is going to go onto an overhead or stay in the back room. The shelf might be filled up shortly after the shipment arrives, but you can’t count on the replenishment system to send more when the shelf is low a few weeks later, because the backroom or overhead stock is still in the store, leading to potential holes.
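A sketch of what that check looks like for a single item/store combination (the function and field names below are mine, purely for illustration):

```python
def shipment_fits_shelf(shelf_capacity_units: int,
                        min_display_qty: int,
                        typical_shipment_units: int) -> bool:
    """Question 2 above: does a typical shipment fit in the shelf space
    left over after the minimum display quantity is accounted for?"""
    free_space = shelf_capacity_units - min_display_qty
    return typical_shipment_units <= free_space

# e.g. 24 units of shelf capacity, 6 reserved as minimum display stock:
print(shipment_fits_shelf(24, 6, 12))   # True  -> product flows to the shelf
print(shipment_fits_shelf(24, 6, 24))   # False -> overflow to the back room
```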

Solving this problem requires thinking about the structural policies that allocate space and flow product into the store:

  • Is enough shelf space allocated to this item based on the demand rate?
  • Are shipping multiples/delivery frequency suitable to the demand rate and shelf allocation?

Finding this balance on as many items as possible serves to ensure – structurally – that any product in the store exists briefly on the receiving dock, then only resides in the selling location after that (similar to a DC flowthrough operation with no “putaway” into storage racking).

As with literally everything in retail, the number 100% doesn’t exist – it’s highly unlikely that you’ll be able to achieve this balance for all items in all locations at all times. But the more this becomes a standard criterion for allocating space and setting replenishment policies, the more you narrow the gap between “in stock” and “on the shelf”.

So if the three ingredients to on shelf availability are 1) continuous daily replanning, 2) maintaining accurate inventory records and 3) organizing the supply chain and space plans to flow product directly to the shelf while avoiding overstock, then any work done in any of these areas in isolation will definitely help.

Taken together, however, they work symbiotically to provide exponential value in terms of customer service:

  • More accurate inventory balances mean that the right product is flowing into the back of the store when it’s needed to fulfill demand, decreasing the potential for holes on the shelf due to stockouts.
  • Stocking product only on the shelf without any overhead/backroom stock keeps it all in one place so that it doesn’t end up misplaced or miscounted, increasing inventory accuracy.
  • Improved inventory accuracy increases the likelihood that when a shipment arrives, the free shelf space that’s expected to be there actually is.

The (stated) intent of utilizing shelf scanning robots is to help humans more effectively keep the shelves stocked, not to make them obsolete.

I think it a nobler goal to design from end to end for the express purpose of maximizing on shelf availability as part of day in, day out execution.

And obsolete those robots.

Experts on the future

In 1971, Judah Folkman, a doctor working in Boston, developed a new approach to treat cancer – essentially by stopping the blood vessels supplying the tumors. Blocking the flow, he concluded, would halt the growth of the tumors.

At the time, the only accepted and endorsed approach to treating cancer was chemotherapy. Dr Folkman’s idea was scorned and ridiculed by the medical establishment consisting of a group of PhD insiders – mostly from the field of biology.

According to Dr. Folkman, when he attempted to share his idea and thinking with the scientific community, the entire room would get up and leave – as if, collectively, they all had to take a piss at the exact same time. Over time, the criticism got so bad that special committees were formed to review his ideas; they not only judged his idea to be of little value, they also threatened to revoke his medical license if he did not cease – one rejection letter went so far as to call him a ‘clown’.

Folkman, however, was undaunted and pressed on. Painstakingly, his ideas slowly started to be accepted by more “open” thinkers and eventually morphed into drugs available for cancer trials. To his credit, in the summer of 2003, at a major medical conference, the results from a large trial for patients with advanced colon cancer validated Dr. Folkman’s thinking.

The way to treat cancer had been transformed. At the event, the crowd rose in a standing ovation. The presenter said something to the effect of, “it’s a shame that Dr. Folkman couldn’t be here to experience this” – little did he know that Dr. Folkman was sitting in a back row, smiling.

Eventually he couldn’t hide his fame and was asked about his achievement – which had taken the better part of 32 years. Most folks wanted to know how he felt and why he continued on his journey in light of all the criticism and personal attacks.

His answers were and still are very insightful.

First, in terms of the ridicule, he proclaimed, “You can always tell the leader of new thinking from all the arrows in their ass”.

And, even more profound, on why he never gave up: “There are no experts of the future.”

Presently we’re living through an unprecedented time and there are a lot of questions about the future – how will the world look after the virus is subdued and what will the new normal look like? Some of these questions are focused on retail and supply chain management.

How will consumers change their behaviors? How much of our sales will be transacted online? Will home delivery become even more significant? Will supply chain networks become more diverse and less susceptible to a single country’s supply disruption? What other customer delivery methods will emerge?

All good questions for which my answer is the same as yours, “I really don’t know”. In my opinion, a lot will change, I’m just not sure where, when, how much and how fast.

Remember, “There are no experts of the future”.

What I do know is that human behavior will not change. We are social animals and like and need to acquire stuff. We just might shift, perhaps dramatically over time, how we go about this.

For us supply chain planners – especially retailers – that means having the supply chain driven by and connected to consumer demand will be crucial. As consumer demand shifts and evolves, having a complete model of the business and providing longer-term visibility to all stakeholders will be a core capability – both in the short to medium term and in the longer term, to proactively plan for and respond to the next disruption.

Wait a minute…that sounds like Flowcasting, doesn’t it?

Store Inventory Accuracy: Getting It Right

 

A man who has committed a mistake and doesn’t correct it, is committing another mistake. – Confucius (551BC – 479BC)


A couple months ago, I wrote a piece entitled What Everybody Gets Wrong About Store Inventory Accuracy. Here it is in a nutshell:

  • Retailers are pretty terrible at keeping their store inventory accurate
  • It’s costing them a lot in terms of sales, customer service and yes, shrink
  • The problem is pervasive and has not been properly addressed due to some combination of willful blindness, misunderstanding and fear

I think what mostly gives rise to the inaction is the assumption that the only way to keep inventory accurate is to expend vast amounts of time and energy on counting.

Teaching people how to bandage cuts, use eyewash stations or mend broken bones is not a workplace health and safety program. Yes, those things would certainly be part of the program, but the focus should be far more heavily weighted to prevention, not in dealing with the aftermath of mishaps that have already occurred.

In a similar vein, a store cycle counting program is NOT an inventory accuracy program!

A recent trend I’ve noticed among retailers is to mine vast quantities of sales and stock movement data to predict which items in which stores are most likely to have inventory record discrepancies at any given time. Those items and stores are targeted for more frequent counting so as to minimize the duration of the mismatch. Such programs are often described as being “proactive”, but how can that be so if the purpose of the program is still to correct errors in the stock ledger after they have already happened?

Going back to the workplace safety analogy, this is like “proactively” locating an eyewash station near the key cutting kiosk. That way, the key cutter can immediately wash his/her eyes after getting metal shavings in them. Perhaps safety glasses or a protective screen might be a better idea.

Again, what’s needed is prevention – intervening in the processes that cause the inaccurate records in the first place.

Think of the operational processes in a store that adjust the electronic stock ledger on a daily basis:

  • Receiving
  • POS Scanning
  • Returns
  • Adjustments for damage, waste, store use, etc.

Two or more of those processes touch every single item in every single store on a fairly frequent basis. To the extent that flaws exist in those processes that result in the wrong items and quantities being recorded in the stock ledger (or even the right items and quantities at the wrong time), any given item in any given store at any given time can have an inaccurate inventory balance without anyone knowing about it – or why – until it is discovered long after the fact.

By the same token, fixing defects in a relatively small number of processes can significantly (and permanently) improve inventory accuracy across a wide swath of items.

So how do you find these process defects?

At the outset, it may not be as difficult as you think. In my experience, a 2 hour meeting with anyone who works in Loss Prevention will give you plenty of things to get started on. Whether it’s an onerous and manual receiving process that is prone to error, poor shelf management or lackadaisical behaviour at the checkout, identifying the problems is usually not the hard part – it’s actually making the changes necessary to begin to address them (which could involve system changes, retraining, measurement and monitoring or all of the above).

If your organization actually cares about keeping inventory records accurate (versus fixing them long after they have been allowed to degrade), then there’s nothing stopping you from working on those things immediately, before a single item is ever counted (see the Confucius quote at the top). If not, then I hate to say it but you’re doomed to having inaccurate inventory in perpetuity (or at least until someone at or near the top does start caring).

Tackling some low hanging fruit is one thing, but to attain and sustain high levels of accuracy – day in and day out – over the long term, rooting out and correcting process defects needs to become part of the organization’s cultural DNA. The end goal is one that can never be reached – better every day.

This entails moving to a three pronged approach for managing stock:

  • Counting with purpose and following up (Control Group Cycle Counting)
  • Keeping the car between the lines on the road (Inspection Counting)
  • Keeping track of progress (Measurement Counting)

Control Group Cycle Counting

The purpose of this counting approach is not to correct inventory balances that have become inaccurate. Rather, it’s to detect the process failures that cause discrepancies in the first place.

It works like this:

  1. Select a sample of items that is representative of the entire store, yet small enough to detail count in a reasonable amount of time (for the sake of argument, let’s say that’s 50 items in a store). This sample is the control group.
  2. Perform a highly detailed count of the control group items, making sure that every unit of stock has been located. Adjust the inventory balances to set the baseline for the first “perfect” count.
  3. One week later, count the exact same items in detail all over again. Over such a short duration, the expectation is that the stock ledger should exactly match the number of units counted. If there are any discrepancies, whatever caused the discrepancy must have occurred in the last 7 days.
  4. Research the transactions that have happened in the last week to find the source of the error. If the discrepancy was 12 units and a goods receipt for a case of 12 was recorded 3 days ago, did something happen in receiving? If the system record shows 6 units but there are 9 on the shelf, was the item scanned once with a quantity override, even though 4 different items may have actually been sold? The point is that you’re asking people about potential errors that have recently happened and will have a better chance of successfully isolating the source of the problem while it’s in everyone’s mind. Not every discrepancy will have an identifiable cause and not every discrepancy with an identifiable cause will have an easy remedy, but one must try.
  5. Determine the conditions that caused the problem to occur. Chances are, those same conditions could be causing problems on many other items outside the control group.
  6. Think about how the process could have been done differently so as to have avoided the problem to begin with and trial new procedure(s) for efficiency and effectiveness.
  7. Roll out new procedures chainwide.
  8. Repeat steps 3 to 7 forever (changing the control group every so often to make sure you continue to catch new process defects).
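If the weekly counts and the store’s transaction history live in anything resembling a table, the comparison in step 3 (and the research list for step 4) is easy to automate. A minimal sketch, with made-up data shapes and column names:

```python
from datetime import date, timedelta

def weekly_discrepancies(control_counts, stock_ledger, transactions):
    """Compare this week's control group counts to the system balances and
    attach the last 7 days of transactions to anything that doesn't match."""
    findings = []
    for item, counted_qty in control_counts.items():
        system_qty = stock_ledger.get(item, 0)
        if counted_qty == system_qty:
            continue  # no discrepancy -- nothing to research
        recent = [t for t in transactions
                  if t["item"] == item
                  and t["date"] >= date.today() - timedelta(days=7)]
        findings.append({
            "item": item,
            "system_qty": system_qty,
            "counted_qty": counted_qty,
            "variance": counted_qty - system_qty,
            "recent_transactions": recent,   # the research list for step 4
        })
    return findings

# toy data: a 12 unit shortfall lines up with a case receipt 3 days ago
counts = {"widget": 6}
ledger = {"widget": 18}
txns = [{"item": "widget", "type": "receipt", "qty": 12,
         "date": date.today() - timedelta(days=3)}]
for f in weekly_discrepancies(counts, ledger, txns):
    print(f["item"], f["variance"], f["recent_transactions"][0]["type"])
```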

Eight simple steps – what could be easier, right?

Yes, this process is somewhat labour intensive.
Yes, this requires some intestinal fortitude.
Yes, this is not easy.

But…

How much time does your sales staff spend running around on scavenger hunts looking for product that “the system says is here”?

How much money and time do you waste on emergency orders and store-to-store transfers because you can’t pick an online order?

How long do you think your customers will be loyal if a competitor consistently has the product they want on the shelf or can ship it to their door in 24 hours?

Inspection Counting

In previous pieces written on this topic, I’ve referred to this as “Process Control Counting” – so coined by Roger Brooks and Larry Wilson in their book Inventory Record Accuracy – which they describe as being “controversial in theory, but effective in practice”.

We’ve found that moniker to be not very descriptive and potentially confusing to people who are not well versed in inventory accuracy concepts (i.e. every retailer we’ve encountered in the last 25 years).

The Inspection Counting approach is designed to quickly identify items with obvious large discrepancies and correct them on the spot.

Here’s how it works:

  1. Start at the beginning of an aisle and look up the first item using a handheld scanner that can instantly display the inventory balance.
  2. Quickly scan the shelf and determine whether or not it appears the system balance is correct.
  3. If it appears to be correct, move on to the next item. If there appears to be a large discrepancy, do some simple investigation to see if it can be located – if not, then perform a count, adjust the balance and move on.

It may seem like this approach is not very scientific and subject to interpretation and judgment on the part of the person doing the inspection counting. That’s because it is. (That’s the “controversial” part).

But there are clear advantages:

  • It is fast – Every item in the store can be inspection counted every few weeks.
  • It is efficient – The items that are selected to be counted are items that are obviously way off (which are the ones that are most important to correct).
  • It is more proactive – “Hole scans” performed today quite often reveal major inventory errors that occurred days or weeks ago and were only discovered when the shelf was empty – bad news early is better than bad news late.

No matter how many process defects are found and properly addressed through Control Group Counting, there will always be theft and honest mistakes. Inspection Counting provides a stopgap to ensure that no inventory record goes unchecked for a long period of time, even when there are thousands of items to cycle through.

As part of an overall program underpinned by Control Group Counting and process defect elimination, the number of counts triggered by an inspection (and the associated time and effort) should decrease over time as fewer defects cause the discrepancies in the first place.

Measurement Counting

The purpose of this counting approach is to use sampling to estimate the accuracy of the population based on the accuracy of a representative group.

It works like this:

  1. Once a month, select a fresh sample of items that is representative of the entire store, yet small enough to detail count in a reasonable amount of time, similar to how a control group is selected. This sample is the measurement group.
  2. Perform a highly detailed count of the measurement group items, making sure that every unit of stock has been located.
  3. Post the results in the store and discuss them in executive meetings every month. Is accuracy trending upward or downward? Do certain stores need some additional temporary support? Have new root causes been identified that need to be addressed?
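The arithmetic behind the monthly measurement is as simple as it sounds. Here’s a sketch (the field names are invented, and the tolerance is whatever your organization decides counts as “accurate” – ±5% in this example):

```python
def measured_accuracy(measurement_counts, tolerance=0.05):
    """Estimate store-wide on hand accuracy from a representative sample.
    A record is 'accurate' if the counted quantity is within +/- tolerance
    of the system balance (a zero system balance must match exactly)."""
    accurate = 0
    for rec in measurement_counts:
        system_qty = rec["system_on_hand"]
        counted_qty = rec["counted_qty"]
        if system_qty == 0:
            accurate += counted_qty == 0
        else:
            accurate += abs(counted_qty - system_qty) / system_qty <= tolerance
    return 100.0 * accurate / len(measurement_counts)

sample = [
    {"item": "A", "system_on_hand": 20, "counted_qty": 20},  # accurate
    {"item": "B", "system_on_hand": 10, "counted_qty": 14},  # off by 40%
    {"item": "C", "system_on_hand": 0,  "counted_qty": 3},   # stock the system doesn't know about
]
print(f"{measured_accuracy(sample):.0f}% accurate")   # 33% accurate
```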

Whether retailers like it or not, inventory accuracy is a KPI that customers are measuring anecdotally and it’s colouring their viewpoint on their shopping experience. Probably a good idea to actually measure and report on it properly, right?

If you’re doing a good job detecting and eliminating process defects that cause inaccurate inventory and continuously making corrections to erroneous records, then this should be reflected in your measurement counts over time. Who knows? If you can demonstrate a high level of accuracy on a continuously changing representative sample, maybe you can convince the Finance and Loss Prevention folks to do away with annual physical counts altogether.

The key to being in-stock


Abraham Lincoln is widely considered the greatest President in history. He preserved the Union, abolished slavery and helped to strengthen and modernize government and the economy. He also led a fragile America through one of her darkest and most crucial periods – the American Civil War.

In the early days of the war, there were lots of competing ideas about how to secure victory and who should attempt it. Most of the generals at that time had concluded that the war could only be won through long, savage and bloody battles in the nation’s biggest cities – like Richmond, New Orleans and even Washington.

Lincoln – who taught himself strategy by reading obsessively – had a different plan. He laid out a large map and pointed to Vicksburg, Mississippi, a small city deep in the South. Not only did it control important navigation waterways, but it was also a junction of other rivers, as well as the rail lines that supplied Confederate armies and plantations across the South.

“Vicksburg is the key”, he proclaimed. “We can never win the war until that key is ours”.

As it turns out, Lincoln was right.

It would take years, blood, sweat and ferocious commitment to the cause, but the strategy he’d laid out was what won the war and ended slavery in America forever. Every other victory in the Civil War was possible because Lincoln had correctly understood the key to victory – taking the city that would split the South in half and gaining control of critical shipping lanes.

Lincoln understood the key. Understanding the key is paramount in life and in business.

It’s no secret that many retailers are struggling – especially in terms of the customer journey – most notably when it comes to retail out-of-stocks. Retail out-of-stocks have, sadly, hovered around an average of 8% for decades.

So what’s the key to finally ending out-of-stocks?

The key is speed and completeness of planning.

First, we all know that the retail supply chain can and should only be driven by a forward looking forecast of consumer demand – how much you think you’ll sell, by product and consumption location.

Second, everyone also agrees (though few understand the key to solving this thorn in our ass) that store/location on-hands need to be accurate.

But the real key is that, once these are in place, the planning process must be done at least daily and must be complete – from consumption to supplier.

Daily re-forecasting and re-planning is necessary to re-orient and re-synch the entire supply chain based on what did or didn’t sell yesterday. Forecasts will always be wrong and speedy re-planning is the key to mitigating forecast error.

However, that is not enough to sustain exceptionally high levels of daily in-stock. In addition, the planning process must be complete – providing the latest projections from consumption to supply, giving all trading partners their respective projections in the language in which they operate (e.g., units, volume, cube, weight, dollars). The reason is simple – all partners need to see, as soon as possible, the result of the most up to date plans. All plans are re-calibrated to help you stay in stock. And the process repeats, day in, day out.

We have retail clients that are achieving, long term, daily in-stocks of 98%+, regardless of the item, time of year or planning scenario.

They understand the key to making it happen.

Now you do too.

What Everybody Gets Wrong About Store Inventory Accuracy

 

Don’t build roadblocks out of assumptions. – Lorii Myers


Retailers are not properly managing the most important asset on their balance sheets – and it’s killing customer service.

I analyzed sample data from 3 retailers who do annual “wall to wall” physical counts. There were 898,526 count records in the sample across 92 stores. For each count record (active items only on the day of the count), the system on hand balance before the count was captured along with the physical quantity counted. The products in the sample include hardware, dry grocery, household consumables, sporting goods, basic apparel and all manner of specialty hardlines items. Each of the retailers reports annual shrink percentages that are in line with industry averages.

A system inventory record is considered to be “accurate” if the system quantity is adjusted by less than +/- 5% after the physical count is taken. Here are the results:

So 54% of inventory records were accurate within a 5% tolerance on the day of the count. Not good, right?

It gets worse.

For 19% of the total records counted (that’s nearly 1 in every 5 item/locations), the adjustment changed the system quantity by 50% or more!

Wait, there’s more!

In addition, I calculated simple in-stock measures before and after the count as follows:

Reported In Stock: Percentage of records where the system on hand was >0 just before the count

Actual In Stock: Percentage of records where the counted quantity was >0 just after the count

Here are the results of that:

Let’s consider what that means for a moment. If you ran an in-stock report based on the system on hand just before those records were counted, you would think that you’re at 94%. Not world class, but certainly not bad. However, once the lie is exposed on that very same day, you realize that the true in-stock (the one your customer sees) is 5 points lower than what you’ve been telling yourself.

Sure, this is a specific point in time and we don’t know how long it took the inventory accuracy to degrade for each item/location, but how can you ever look at an in-stock report the same way again?
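If you want to reproduce the reported-versus-actual gap from your own count files, it boils down to a few lines (a sketch – the column names are mine, not any particular system’s):

```python
def in_stock_gap(count_records):
    """Reported in stock uses the system balance just before the count;
    actual in stock uses the physically counted quantity."""
    n = len(count_records)
    reported = sum(1 for r in count_records if r["system_on_hand_before"] > 0)
    actual = sum(1 for r in count_records if r["counted_qty"] > 0)
    return 100.0 * reported / n, 100.0 * actual / n

# toy example: the system thinks 3 of 4 records are in stock, but one is phantom
records = [
    {"item": "A", "system_on_hand_before": 4, "counted_qty": 4},
    {"item": "B", "system_on_hand_before": 2, "counted_qty": 0},  # phantom stock
    {"item": "C", "system_on_hand_before": 0, "counted_qty": 0},
    {"item": "D", "system_on_hand_before": 1, "counted_qty": 3},
]
reported_pct, actual_pct = in_stock_gap(records)
print(f"Reported: {reported_pct:.0f}%  Actual: {actual_pct:.0f}%")  # 75% vs 50%
```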

Further, when you look at it store by store, it’s clear that stores with higher levels of inventory accuracy experience a lesser drop in in-stock after the records are counted. Each of the blue dots on the scatterplot below represents one of the 92 stores in the sample:


A couple of outliers notwithstanding, it’s clear that the higher on hand accuracy is, the more truthful the in-stock measure is and vice-versa.

Now let’s do some simple math. A number of studies have consistently shown that an out-of-stock results in a lost sale for the retailer about 1/3 of the time. Assuming the 5% differential between reported and actual in-stock is structural, this means that having inaccurate inventory records could be costing retailers 1.67% of their topline sales. This is in addition to the cost of shrink.

So, a billion dollar retailer could be losing almost $17 million per year in sales just because of inaccurate on hands and nothing else.
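Spelling out that arithmetic (the one-third lost-sale conversion and the 5-point gap come straight from the paragraphs above):

```python
in_stock_gap_pts = 0.05        # reported minus actual in-stock (5 points)
lost_sale_rate = 1.0 / 3.0     # roughly 1 in 3 out-of-stocks becomes a lost sale
annual_sales = 1_000_000_000   # a billion dollar retailer

lost_sales_pct = in_stock_gap_pts * lost_sale_rate   # ~1.67% of topline sales
print(f"{lost_sales_pct:.2%} of sales -> ${annual_sales * lost_sales_pct:,.0f} per year")
# 1.67% of sales -> $16,666,667 per year
```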

Let’s be clear, this isn’t like forecast accuracy where you are trying to predict an unknown future. And it’s not like the myriad potential flow problems that can arise and prevent product from getting to the stores to meet customer demands. It is an erosion in sales caused by the inability to properly keep records of assets that are currently observable in the physical world.

So why hasn’t this problem been tackled?

Red Herring #1: Our Shrink Numbers Are Good

Whenever we perform this type of analysis for a retailer, it’s not uncommon for people to express incredulity that their store inventory balances are so inaccurate.

“That can’t possibly be. Our shrink numbers are below industry average.”

To that, I ask two related questions:

  1. Who gives a shit about industry averages?
  2. What about your customers?

In addition to the potential sales loss, inaccurate on hands can piss customers off in many other ways. For example, if it hasn’t happened already, it won’t be long until you’re forced by competition to publish your store on hand balances on your website. What if a customer makes a trip to the store or schedules a pickup order based on this information?

The point here is that shrink is a financial measure; on hand accuracy is a customer service measure. Don’t assume that “we have low shrink” means the same thing as “our inventory management practices are under control”.

Red Herring #2: It Must Have Been Theft

It’s true that shoplifting and employee theft are problems that are unlikely to be completely solved. Maybe one day item level RFID tagging will become ubiquitous and make it difficult for product to leave the store without being detected. In the meantime, there’s a limit to what can be done to prevent theft without either severely inconveniencing customers or going bankrupt.

But are we absolutely sure that the majority of inventory shrinkage is caused by theft? Using the count records mentioned earlier, here is another slice showing how the adjustments were made:

From the second column of this table, you can see that for 29% of all the count transactions, the system inventory balances were decreased by at least 1 unit after the count.

Think about that next time you’re walking the aisles in a store. If you assume that theft is the primary cause for negative adjustments, then by extension you must also believe that one out of every 3 unique items you see on the shelves will be stolen by someone at least once in the course of a year – and it could be higher than that if an “accurate” record on the day of the count was negatively adjusted at other times throughout the year. I mean, maybe… seems a bit much, though, don’t you think?

Now let’s look at the first column (count adjustments that increase the inventory balance). If you assume that all of the inventory decreases were theft, then – using the same logic – you must also believe that for one out of every 5 unique items, someone is sneaking product into the store and leaving it on the shelves. I mean, come on.

Perhaps there’s more than theft going on here.

Red Herring #3: The Problem Is Just Too Big

Yes, it goes without saying that when you multiply out the number of products and locations in retail, you get a large number of individual inventory balances – it can easily get into the millions for a medium to large sized retailer. “There’s no way that we can keep that many inventory pools accurate on a daily basis” the argument goes.

But the flaw in this thinking stems from the (unfortunately quite popular) notion that the only way to keep inventory records accurate is through counting and correcting. The problem with this approach (besides being highly labour intensive, inefficient and prone to error) is that it corrects errors that have already happened and does not address whatever process deficiencies caused the error in the first place.

This is akin to a car manufacturer noticing that every vehicle rolling off the assembly line has a scratch on the left front fender. Instead of tracing back through the line to see where the scratch is occurring, they instead just add another station at the end with a full time employee whose job it is to buff the scratch out of each and every car.

The problem is not about the large number of inventory pools, it’s about the small number of processes that change the inventory balances. To the extent that inventory movements in the physical world are not being matched with proper system transactions, a small number of process defects have the potential to impact all inventory records.

When your store inventory records don’t match the physical stock on hand, it must necessarily be a result of one of the following processes:

  • Receiving: Is every carton being scanned into the store’s inventory? Do you “blind receive” shipments from DCs or suppliers that have not demonstrated high levels of picking accuracy for the sake of speed?
  • POS Scanning and Saleable Returns: Do cashiers scan each and every individual item off the belt or do they sometimes use the mult key for efficiency? If an item is missing a bar code and must be keyed under a dummy product number, is there a process to record those circumstances to correct the inventory later?
  • Damage and Waste: Whenever a product is found damaged or expired, is it scanned out of the on hand on a nightly basis?
  • Store Use, Transformations, Transfers: If a product is taken from the shelf for use within the store (e.g. paper towels to clean up a mess) or used as a raw material for another product (e.g. flour taken from the pantry aisle to use in the bakery), is it adjusted out of stock? Are store-to-store transfers or DC returns scanned out of the store’s inventory correctly before they leave?
  • Counting: Before a stock record is changed because of a count, are people making sure that they’ve located and counted all units of that product within the store or do they just “pencil whip” based on what they see in front of them and move on?
  • Theft: Are there more things that can be done within the store to minimize theft? Do you actively “transact” some of your theft when you find empty packaging in the aisle?

So how can retailers finally make a permanent improvement to the accuracy of their store on hands?

  • They need to actually care about it (losing 1-2% of top line sales should be a strong motivator)
  • They need to measure store on hand accuracy as a KPI
  • They need an approach whereby process failures that cause on hand errors can be detected and addressed
  • They need an efficient approach for finding and correcting discrepancies as the process issues are being fixed

Stay tuned for more on that.

Grandmaster Collaboration

Garry Kasparov is one of the world’s greatest ever chess grandmasters – reigning as World Champion for 15 years, from 1985 to 2000, and holding the world number one ranking for a record length of time. Kasparov was a brilliant tactician, able to out-calculate his opponents and “see” many moves into the future.

In addition to his chess prowess, Kasparov is famous for the 1997 chess showdown, aptly billed as the final battle for supremacy between human and artificial intelligence. The IBM supercomputer, Deep Blue, defeated Kasparov in a 6 game match – the first time that a machine beat a reigning World Champion.

Of course chess is a natural game for the computational power of AI – Deep Blue reportedly being able to evaluate over 200 million positions per second. Today, virtually all top chess programs that you and I can purchase are stronger than any human on earth.

The loss to Deep Blue intrigued Kasparov and made him think. He recalled Moravec’s paradox: machines and humans frequently have opposite strengths and weaknesses. There’s a saying that chess is “99 percent tactics” – that is, the short combinations of moves players use to get an advantage in position. Computers are tactically flawless compared to humans.

On the flip side, humans, especially chess Grandmasters, were brilliant at recognizing the strategic themes of positions and deeply grasping chess strategy.

What if, Kasparov wondered, the computer’s tactical prowess were combined with the human big-picture, strategic thinking that top Grandmasters had honed after years of play and positional study?

In 1998 he helped organize the first “advanced chess” tournament, in which each human player had a machine partner to help during each game. The results were incredible: human/machine teams regularly beat the strongest chess computers (all of which were stronger than Kasparov). According to Kasparov, “human creativity was more important under these conditions”.

As of 2014, and continuing to this day, there are so-called “freestyle” chess tournaments in which teams made up of humans and any combination of computers compete against each other, along with the strongest stand-alone machines. The human-machine combination wins most of the time.

In freestyle chess, the “team” is led by human executives, who rely on a team of mega-grandmaster tactical advisers and decide whose advice to probe in depth and, ultimately, the strategic direction to take the game in.

For us folks in supply chain, and especially in supply chain planning, there’s a lot to be learned from the surprisingly beneficial collaboration of chess grandmaster and supercomputer.

Humans excel at certain things. So do computers.

Combine them effectively, as Kasparov’s experiments showed, and you’ll undoubtedly get…

Grandmaster Collaboration.

Jimmy Crack Corn

 

Science may have found a cure for most evils; but it has found no remedy for the worst of them all – the apathy of human beings. – Helen Keller (1880-1968)


On hand accuracy.

It has been a problem ever since retailers started using barcode scanning to maintain stock records in their stores.

It’s certainly not the first time we’ve written on this topic, nor is it likely to be the last.

The real question is: Why is this such a pervasive problem?

I think I may have the answer: Nobody cares.

Okay, maybe that’s a little harsh. It’s probably more fair to say that there is a long list of things that retailers care about more than the accuracy of their on hands.

I’m not being judgmental, nor am I trying to invoke shame. I’m just making a dispassionate observation based on 25 years experience working in retail.

Whatever you think of the axiom “what gets measured gets managed” (NOT a quote from Peter Drucker), I would argue that it is largely true.

By that yardstick, I have yet to come across a single retailer who routinely measures the accuracy of their on hands as a KPI, even though – if you think about it – it wouldn’t be that difficult to do. Just send out a count list of a random sample of SKUs each month to every store and have them do a detailed count. Either the system record matches what’s physically there or it doesn’t.

Measuring forecast accuracy (the ability to predict an unknown future) seems to take up a lot more time and attention than inventory accuracy (the ability to keep a stock record in synch with a quantity that exists in the physical world right now), but the accuracy of on hand records has a much greater influence on the customer experience than forecast accuracy – by a very wide margin.

And on hand accuracy will only become more important as retailers expand customer delivery options to include click and collect and ship from store. Even “old school” shoppers (those who just want to go to the store to buy something and leave) will be expecting to check online to see how much a store has in stock before getting in their cars.

It’s quite clear that retailers should care about this more, so why don’t they?

Conflating Accuracy and Shrink

After a physical stock count, positive and negative on hand variances are costed and summed up. If the value of the system on hand drops by less than 2% of sales after the count adjustments are made, this is deemed to be a good result when compared to the industry as a whole. The conclusion is drawn that the management of inventory must therefore be under control and that on hand records must not be that far off. The problem with shrink is that the positive and negative errors can still be large in magnitude, but they cancel each other out, thereby hiding significant issues with on hand record accuracy (by item/location, which is what the customer cares about). Shrink is a measure for accountants, not customers.
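Here’s a tiny worked example of how offsetting errors can hide behind a “good” shrink number (the items, quantities and costs are invented for illustration):

```python
# Two items, both badly wrong at item level, nearly perfect in aggregate.
records = [
    {"item": "A", "system": 20, "counted": 10, "unit_cost": 5.00},  # 10 units missing
    {"item": "B", "system": 10, "counted": 19, "unit_cost": 5.50},  # 9 units "found"
]

net_shrink_value = sum((r["system"] - r["counted"]) * r["unit_cost"] for r in records)
accurate_records = sum(1 for r in records if r["system"] == r["counted"])

print(f"Net shrink: ${net_shrink_value:.2f}")                      # Net shrink: $0.50
print(f"Accurate records: {accurate_records} of {len(records)}")   # 0 of 2
```

The shrink line looks negligible, yet neither record was accurate – which is exactly what the customer standing in front of an empty shelf experiences.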

Store Replenishment is Manual Anyhow

It’s still common practice for many retailers to use visual shelf reviews for store replenishment. Department managers walk through the aisles with RF scanning guns, scan the shelf tags for items they want to order and use an app on the gun to place replenishment orders. Most often, this process is used when perpetual inventory capabilities don’t exist at store level, but it’s not uncommon to see it also being used even if stores have system calculated on hand balances. Why? Because there isn’t enough trust in the accuracy of the on hands to use them for automated replenishment. Hmmm…

It’s Perceived to be an Overwhelming Problem

It’s certainly true that the number of item/store inventory pools that need to be kept accurate can get quite large. The predominant thinking in retail is that the only way to make inventory records more accurate is to count each item more frequently. Do the math on that and many retailers conclude that the labour costs to maintain accurate inventory records will drive them into bankruptcy.

The problem with this viewpoint is that frequent counting and correcting isn’t really maintaining accurate records – it’s continuously fixing inaccurate records. A different way to look at it is not by the sheer volume of item/location records to be managed, but rather by the number of potential process failure points that could affect any given item in any given location.

Think about an auto assembly line where every finished car that rolls off has a 2-inch scratch on the right front fender. One option to address this problem is to set up an additional station at the end of the line to buff out the scratch on every single car that rolls through. This is analogous to the “count and correct” approach to managing inventory records – highly labour intensive and only addressing the problem after it has already occurred.

Another option would be to trace back through the process until you find where the scratch is occurring and why. Maybe there’s a bolt sticking out from a pass-through point that’s causing the scratch. Cut off the end of the bolt, no more scratches. Addressing this one point of process failure permanently resolves the root cause of the defect for every car that passes through the process.

Going back to our store on hand accuracy example, a retailer may have thousands or millions of item/store combinations, but the number of processes (potential points of failure) that change on hand balances is limited:

  • DC picking
  • Store receiving
  • Stock writedowns for damage or waste
  • Counts
  • Sales and saleable returns

For retailers who have implemented store perpetual inventory, each of these processes that affect the movement of physical stock has a corresponding transaction that changes the on hand balance accordingly. How carefully are those transactions being recorded for accuracy (versus speed)?

Are DC shipments regularly audited for accuracy? Do stores “blind receive” shipments only from highly reliable sources? Are there nightly procedures to scan out damaged or unsaleable goods? Is the store well organized so that all units of a particular item can be easily found before a physical count is done? Is every sale being properly scanned at the checkout?

Of course, the elephant (or maybe scapegoat?) in the room is theft. After all, there is no corresponding transaction for those stock movements. While there are certainly things that can be done to reduce theft, I consider it to be a self evident fact that it won’t be eliminated completely anytime soon.

But before you assume that every negative stock adjustment “must have been theft”, are you totally certain that all of the other processes are being transacted properly?

Does it seem reasonable to assume that for every single unique product whose on hand balance decreases after a physical count (typically 20-30% of all products in a store) all of those units were stolen since the last count?

And if we do assume that theft is the culprit in the vast majority of those cases, then what are we to assume about products whose on hand balances increase after being counted (typically 10-20% of all products in a store)? Are customers or employees sneaking items into the store, leaving them on the shelves and secretly leaving without being detected?

Setting theft aside, there’s still plenty that can be done by thoroughly examining and addressing the potential points of process failure that cause on hands to become inaccurate in the first place, while at the same time reducing the amount of time and money being spent on “counting and correcting”.

What’s Step 1 on this path?

You need to care.

Pissed Off People

Jim is basically your average bloke. One Saturday afternoon, about 25 years ago, he’s doing something a lot of average blokes do: cleaning his home – a small farmhouse in the west of England.

After some dusting, it’s time to vacuum. Like everyone at the time, he’s shocked at how quickly his top-of-the-line Hoover cleaner loses its suction power.

Jim is pissed. Royally pissed off. Madder than a wet hen.

So mad, in fact, that he hauled the cleaner out to his shed, took it apart and examined why it would lose suction power so quickly. After a few experiments he correctly deduced that fine dust blocked the filter almost immediately, which is why performance in conventional cleaners dips so fast.

Jim continued to be pissed until one day he visited a timber mill, looking for some wood. In those days, timber mills planed the logs on the spot for you. Jim watched his wood travel along until it reached a cyclone specifically designed to change the dynamics of airflow, separating the dust from the air via centrifugal force.

BOOM! James Dyson, still pissed at how shit traditional vacuum cleaners were, got the core idea of the Dyson cyclone cleaner. An idea that he would use to eventually deposit over £3 billion into his back pocket.

Unbelievably it took Dyson three years and 5,127 small, incremental prototypes to finally “perfect” his design and revolutionize cleaning forever. Can you imagine how pissed you’d need to be to work, diligently, over that many iterations to finally see your idea through?

Dyson’s story is incredible and enlightening – offering us a couple of key insights into the innovative process.

First, most folks think that innovation happens as a result of ideas just popping into people’s heads. That’s missing the key piece of the puzzle: the problem! Without a problem, a flaw, a frustration, innovation cannot happen. As Dyson himself states, “creativity should be thought of as a dialogue. You have to have a problem before you can have game-changing innovation”.

Second, for innovative solutions to emerge you need pissed off people. People like Dyson who are mad, frustrated and generally peeved with current solutions and approaches for the problem at hand. So they are always thinking, connecting and, at times, creating a breakthrough solution – sometimes years after initially surfacing the problem. So, while it’s easy to say that the “idea” just happened, more often than not you’ve been mulling it over, subconsciously, because you’re pissed about something.

Here’s a true story about Flowcasting and how it eventually saw the light of day as a result of some pissed off people.

About 25 years ago, I was the leader of a team whose mandate was to improve supply chain planning for a large, very successful Canadian retailer. I won’t bore you with the details but eventually we designed, on paper, what we now call Flowcasting.

Problem was, it was very poorly received by the company’s Senior Leadership team, especially the Supply Chain executives. On numerous occasions I was informed that this idea would never work and that we needed to change the design. I was also threatened with being fired more than once if we didn’t change.

The problem was, our team loved the design and could see it potentially working. As I was getting more pressure and more “nevers” from the leadership team, I was getting more and more pissed. Royally pissed off as a matter of fact.

As luck would have it, as a pissed off person, I didn’t back down (there’s a lesson here too – “never” is not a valid reason why something might not work, regardless of who says it). One person on the team suggested I contact Andre Martin, and he and his colleague, Darryl Landvater, helped us convince the non-believers that it would be the future and that we should pilot a portion of the design. The rest is, of course, history.

The Flowcasting saga didn’t stop there. As we were embarking on our early pilot of the DC-supplier integration, Andre and Darryl tried, unsuccessfully, to convince a few major technology planning vendors that an integrated solution, from store/consumption to supply, was needed and that they needed to build it from scratch.

All the major technology players turned them down, citing lots of “nevers” themselves as to why this solution was either not needed, or would not scale and/or work.

To be honest, it pissed them off, as they’ve admitted to me many times over the years.

So much so that, despite all the warnings from the experts, they “put their money where their mouths were” and built a Flowcasting solution that connects the store to the supplier in an elegant, intuitive and seamless fashion – properly planning for crucial retail planning scenarios like slow sellers, promotions and seasonal items, just to name a few.

In 2015, using the concept of Flowcasting and the technology that they developed, a retailer seamlessly connected their supply chain from consumption to supply – improving in-stocks, sales and profits and instilling a process that facilitates any-channel planning however they wish to do it.

Sure, having a reasonably well thought out design was important. As was having a solution suited for the job.

But what really enabled the breakthrough were some pissed-off people!