What’s Good for the Goose


What’s good for the goose is good for the gander – Popular Idiom


Thinking in retail supply chain management is still evolving.

Which is a nicer way of saying that it’s not very evolved.

Don’t get me wrong here. It wasn’t that long ago that virtually no retailer even had a Supply Chain function. When I first started my career, retailers were just beginning to use the word “logistics” – a military term, fancy that! – in their job descriptions and org charts. At the time it was an acknowledgement that sourcing, inbound transportation, distribution and outbound transportation were all interrelated activities, not stand-alone functions.

A positive development, but “logistics” was really all about shipping containers, warehouses and trucks – the mission ended at the store receiving bay.

Time passed and barcode scanning at the checkouts became ubiquitous.

More time passed and many (but by no means a large majority) of medium to large-sized retailers implemented scan-based receiving and perpetual inventory balances at stores in a centralized system. This was followed quickly by computer-assisted store ordering and with that came the notion that store replenishment could be a highly automated, centralized function.

Shortly thereafter, retailers began to recognize that they needed more than just operational logistics, but true supply chain management – covering all of the planning and execution processes that move product from the point of manufacture to the retail shelf.

In theory, at least.

I say that because even though most retailers of size have adopted the supply chain management vernacular and have added Supply Chain VP roles to their org structures, over the years I’ve heard some dubious “supply chain” discussions that tend to suggest that thinking hasn’t fully evolved past “trucks and warehouses”. Some of you reading this now may find yourselves falling into this train of thought without even realizing it.

So how do you know if your thinking is drifting away from holistic supply chain thinking toward myopic logistics centric thinking?

An approach that we use is to apply the Goose and Gander Rule to these situations. If you find yourself advocating behaviour in the middle of the supply chain that seems nonsensical if applied upstream or downstream, then you’re not thinking holistically.

Here are a few examples:


The warehouse is overstocked. We can’t sell it from there, so let’s push it out to the stores.


At a very superficial level, this argument makes some sense. It is true that product can’t sell if it’s sitting in the warehouse (setting aside the fact that using this approach to transfer overstock from warehouses to stores generally doesn’t make it sell any faster).

Now suppose that a supplier unexpectedly shipped a truckload of product that you didn’t need to your distribution centre because they were overstocked. Would you just receive it and scramble to find a place to store it? Because that’s what happens when you push product to stores.

Or how would you feel if you were out shopping and as you were approaching the checkout, a member of the store staff started filling your cart with items that the store didn’t want to stock any more? Would you just pay for it with a shrug and leave?

I hate to break the news, but there is no such thing as “push” when you’re thinking of the retail supply chain holistically. The only way to liquidate excess inventory is to encourage a “pull” by dropping the price or negotiating a return. All pushing does is add more cost to the product and transfer the operational issues downstream.


If we increase DC to store lead times, we can have store orders locked in further in advance and optimize our operations.


Planning with certainty is definitely easier than planning with uncertainty, but where does it end? Do you increase store lead times by 2 days? 2 weeks? 2 months? Why not lock in store orders for the next full year?

Increasing lead times does nothing but make the supply chain less responsive and that helps precisely no one. And, like the “push” scenario described above, stores are forced to hold more inventory, so you’re improving efficiency at one DC, but degrading it in dozens of stores served by that DC.

Again, would you be okay with suppliers arbitrarily increasing order lead times to improve their operational efficiency at your expense?

Would you shop at a store that only allows customers in the door who placed their orders two days in advance?

Customers buy what they want when they want. There are things that can be done to influence their behaviour, but it can’t be fully controlled in such a way that you can schedule your supply chain flow to be a flat line, day in and day out.


We sell a lot of slow moving dogs. We should segregate those items in the DC and just pick and deliver them to the stores once a month.


The first problem with this line of thinking is that “slow moving” doesn’t necessarily mean “not important to the assortment”.

Also, aren’t you sending 1 or 2 (or more) shipments a week to the same stores from the same building anyhow?

When’s the last time you went shopping for groceries and were told by store staff that, even though you need mushroom soup today, they only sell mushroom soup on alternate Thursdays?

Listen, I’m not arguing that retailers’ logistics operations shouldn’t be run as efficiently as possible. You just need to do it without cheating.

We need to remember that the SKU count, inventory and staff levels across the store network are many times greater than in the logistics operations. Employing tactics that hurt the stores in order to improve KPIs in the DCs or Transport operations is tantamount to cutting off your nose to spite your face.

Covered in Warts

It’s the early 1990s and Joanne is down on her luck. A recently divorced, jobless single mother, she decides to move from England back to Scotland to at least be closer to her sister and family.

During her working days in Manchester she had started scribbling ideas and notes for a nonsensical book idea and, by the time she’d moved home, had written three chapters. Once back near Edinburgh, she continued to write and improve her manuscript until she had a first draft completed in 1995 – fully five years from her first penned thoughts.

During the next two years she pitched the very rough manuscript to a dozen major publishers. They all rejected it, believing the story would not resonate with readers and, as a result, that sales would be dismal.

Undismayed, she eventually convinced Bloomsbury to take a very small chance on the book – advancing her a paltry £1,500 and agreeing to print 1,000 copies, 500 of which would be sent to various libraries.

In 1997 and 1998 the book, Harry Potter and the Philosopher’s Stone by J. K. Rowling, would win both the Nestlé Smarties Book Prize and the British Book Awards Children’s Book of the Year. That book would launch Rowling’s worldwide success and, to date, her books have sold over 400 million copies.

The eventual success of the Harry Potter series is very instructive about breakthroughs and innovation.

The most important breakthroughs – the ones that change the course of science, business, or history – are fragile. They rarely arrive dazzling everyone with their brilliance.

Instead, they often arrive covered in warts – the failures and seemingly obvious reasons they could never work that make them easy to dismiss. They travel through tunnels of skepticism and uncertainty, their champions often dismissed as crazy.

Luckily, most champions of breakthrough ideas are what many would describe as loons – people who refuse to give up on their ideas and will work, over time, to smooth and eliminate the warts.

When it comes to supply chain planning innovation, you’d have to put Andre Martin into the loon category as well.

In the mid-1970s, Andre invented Distribution Resource Planning (DRP) and, along with his colleague Darryl Landvater, designed and implemented the first DRP system in 1978 – connecting distribution to manufacturing and changing planning paradigms forever.

Most folks don’t know it, but around that time Andre saw that the thinking of DRP could be extended to the retail supply chain – connecting the store to the factory using the principles of DRP and time-phased planning.

The idea, which has since morphed and been labelled Flowcasting, was covered in warts. Over the course of the last 40 years, Andre and Darryl have refined the thinking, smoothed the warts, eliminated dissension, educated an industry and, unbelievably, built a solution that enables Flowcasting.

I’ve been a convert and a colleague in the wart-reduction efforts over the last 25 years – experiencing first-hand some irrational responses and views from, first, a large Canadian retailer and, more recently, the market in general.

But, as with J.K.’s manuscript, the warts are largely being exposed as pimples, and people and retailers are seeing the light – the retail supply chain can only deliver if it’s connected from consumer to supplier, driven only by a forecast of consumer demand, and planned and managed using the principles of Flowcasting.

The lesson here is to realize that if you think you’ve got a breakthrough idea, there’s a good chance it’ll be covered in warts and will need time, effort, patience and determination to smooth and eliminate them.

It can, however, be done.

And you can do it.

Godspeed.

Managing the Long Tail

If you don’t mind haunting the margins, I think there is more freedom there. – Colin Firth


A couple of months ago, I wrote a piece called Employing the Law of Large Numbers in Bottom Up Forecasting. The morals of that story were fourfold:

  1. That when sales at item/store level are intermittent (fewer than 52 units per year), a proper sales pattern can’t be determined from the demand data at that level.
  2. That any retailer has a sufficient percentage of slow selling item/store combinations that the problem simply can’t be ignored in the planning process.
  3. That using a multi-level, top-down approach to developing properly shaped forecasts in a retail context is fundamentally flawed.
  4. That the Law of Large Numbers can be used in a store-centric fashion by aggregating sales across similar items at a store only for the purpose of determining the shape of the curve, thereby eliminating the need to create any forecasts above item/store level.

A high-level explanation of the Profile Based Forecasting approach developed by Darryl Landvater (but not dissimilar to what many retailers were doing for years with systems like INFOREM and various home-grown solutions) was presented as the antidote to this problem. Oh and by the way, it works fabulously well, even with such a low level of “sophistication” (i.e. unnecessary complexity).

But being able to shape a forecast for intermittent demands without using top-down forecasting is only one aspect of the slow seller problem. The objective of this piece is to look more closely at the implications of intermittent demands on replenishment.

The Bunching Problem

Regardless of how you provide a shape to an item/store forecast for a slow selling item (using either Profile Based Forecasting or the far more cumbersome and deeply flawed top-down method), you are still left with a forecasted stream of small decimal numbers.

In the example below, the shape of the sales curve cannot be determined using only sales history from two years ago (blue line) and the most recent year (orange line), so the pattern for the forecast (green dashed line) was derived from an aggregation of sales of similar items at the same store and multiplied through the selling rate of the item/store itself (in this case 13.5 units per year):

You can see that the forecast indeed has a defined shape – it’s not merely a flat line that would be calculated from intermittent demand data with most forecasting approaches. However, when you multiply the shape by a low rate of sale, you don’t actually have a realistic demand forecast. In reality, what you have is a forecast of the probability that a sale will occur.

Having values to the right of the decimal in a forecast is not a problem in and of itself. But when the value to the left of the decimal is a zero, it can create a huge problem in replenishment.

Why?

Because replenishment calculations always operate in discrete units and don’t know the difference between a forecast of true demand and a forecast of a probability of a sale.

Using the first 8 weeks of the forecast calculated above, you can see how time-phased replenishment logic will behave:

The store sells 13 to 14 units per year, has a safety stock of 2 units and 2 units in stock (a little less than 2 months of supply). By all accounts, this store is in good shape and doesn’t need any more inventory right now.

However, the replenishment calculation is being told that 0.185 units will be deducted from inventory in the first week, which will drive the on hand below the safety stock. An immediate requirement of 1 unit is triggered to ensure that doesn’t happen.

Think of what that means. Suppose you have 100 stores in which the item is slow selling and the on hand level is currently sitting at the safety stock (not an uncommon scenario in retail). Because of small decimal forecasts triggering immediate requirements at all of those stores, the DC needs to ship out 100 pieces to support sales of fewer than 20 pieces at store level – demand has been distorted by 500%.

Now, further suppose that this isn’t a break-pack item and the ship multiple to the store is an inner pack of 4 pieces – instead of 100 pieces, the immediate requirement would be 400 pieces and demand would be distorted by 2,000%!
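
To make the mechanics concrete, here’s a minimal sketch in Python of how discrete, time-phased replenishment logic reacts to a decimal forecast. It’s an illustration only (the function name and structure are my own, not any particular system’s); the numbers are from the example above:

```python
import math

def planned_orders(on_hand, safety_stock, weekly_forecast, order_multiple=1):
    """Plan a receipt in any week where the projected on-hand balance
    would otherwise drop below safety stock. The logic operates on the
    forecast as-is: it can't tell a true demand forecast from a
    forecast of the probability of a sale."""
    orders = []
    projected = on_hand
    for forecast in weekly_forecast:
        projected -= forecast
        order = 0
        if projected < safety_stock:
            shortfall = safety_stock - projected
            # Round the shortfall up to the store's ship multiple
            order = math.ceil(shortfall / order_multiple) * order_multiple
            projected += order
        orders.append(order)
    return orders

# 2 on hand, safety stock of 2, and the small decimal forecasts from above
weekly = [0.185, 0.185, 0.308, 0.308, 0.185, 0.185, 0.308, 0.308]
print(planned_orders(2, 2, weekly))                    # [1, 0, 0, 0, 1, 0, 0, 0]
print(planned_orders(2, 2, weekly, order_multiple=4))  # [4, 0, 0, 0, 0, 0, 0, 0]
```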

The Antidote to Bunching – Integer Forecasts

What’s needed to prevent bunching from occurring is to convert the forecast of small decimals (the probability of a sale occurring) into a realistic forecast of demand, while still retaining the proper shape of the curve.

This problem has been solved (likewise by Darryl Landvater) using simple accumulator logic with a random seed to convert a forecast of small decimals into a forecast of integers.

It works like this:

  • Start with a random number between 0 and 1
  • Add this random number to the decimal forecast of the first period
  • Continue adding the decimal forecasts for subsequent periods to the accumulation; each time the accumulated value “tips over” to the next whole number, place a forecast of 1 unit in that period (see the sketch below)
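
Here’s a minimal sketch of that accumulator logic in Python – an illustration of the technique as described, not Darryl’s actual implementation (the weekly values are from the earlier example):

```python
import random

def integerize(decimal_forecast, seed=None):
    """Convert a forecast of small decimals (probabilities of a sale)
    into a forecast of integers, preserving the shape of the curve.

    A random starting value between 0 and 1 staggers the "tip-over"
    timing across item/store combinations so requirements don't bunch."""
    rng = random.Random(seed)
    accumulator = rng.random()
    integer_forecast = []
    previous_whole = int(accumulator)  # always 0 here; kept for clarity
    for decimal in decimal_forecast:
        accumulator += decimal
        if int(accumulator) > previous_whole:
            # Accumulation tipped over to the next integer: forecast 1 unit
            integer_forecast.append(1)
            previous_whole = int(accumulator)
        else:
            integer_forecast.append(0)
    return integer_forecast

weekly = [0.185, 0.185, 0.308, 0.308, 0.185, 0.185, 0.308, 0.308]
print(integerize(weekly, seed=42))   # [0, 1, 0, 0, 0, 0, 1, 0]
```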

Here’s our small decimal forecast converted to integers in this fashion:

Because a random seed is being used for each item/store, the timing of the first integer forecast will vary by each item/store.

And because the accumulator uses the shaped decimal forecast, the shape of the curve is preserved. In faster selling periods, the accumulator will tip over more frequently and the integer forecasts will likewise be more frequent. In slower periods, the opposite is true.

Below is our original forecast after it has been converted from decimals to integers using this logic:

And when the requirements across multiple stores are placed back on the DC, they are not “bunched” and a more realistic shipment schedule results:

Stabilizing the Plans – Variable Consumption Periods

Just to stay grounded in reality, none of what has been described above (or, for that matter, in the previous piece Employing the Law of Large Numbers in Bottom Up Forecasting) improves forecast accuracy in the traditional sense. This is because, quite frankly, it’s not possible to predict with a high degree of accuracy the exact quantity and timing of 13 units of sales over a 52 week forecast horizon.

The goal here is not pinpoint accuracy (the logic does start with a random number after all), but reasonableness, consistency and ease of use. It allows for long tail items to have the same multi-echelon planning approach as fast selling items without having separate processes “on the side” to deal with them.

For fast selling items with continuous demand, it is common to forecast in weekly buckets, spread the weekly forecast into days for replenishment using a traffic profile for that location and consume the forecast against actuals to date for the current week:

In the example above, the total forecast for Week 1 is 100 units. By end of day Wednesday, the posted actuals to date totalled 29 units, but the original forecast for those 3 days was 24 units. The difference of -5 units is spread proportionally over the remainder of the week so as to keep the total forecast for the week at 100 units. The assumption being used is that you have higher confidence in the weekly total of 100 units than you have in the exact daily timing of when those 100 units will actually sell.

For slow moving items, we would not even have confidence in the weekly forecasts, so consuming forecast against actual for a week makes no sense. However, there would still be a need to keep the forecast stable in the very likely event that the timing and magnitude of the actuals don’t match the original forecast. In this case, we would consume forecast against actuals on a less frequent basis:

The logic is the same, but the consumption period is longer to reflect the appropriate level of confidence in the forecast timing.
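
Here’s a small sketch of that consumption logic in Python, using the fast-mover numbers from above. It’s a simplified illustration (my own function, not production logic); the length of the consumption period is the only thing that changes between fast and slow movers:

```python
def consume_forecast(period_forecasts, actuals_to_date, elapsed_periods):
    """Re-spread the remaining forecast within a consumption period so
    that the period total stays intact after posting actuals to date."""
    total = sum(period_forecasts)
    remaining = period_forecasts[elapsed_periods:]
    remaining_total = max(total - actuals_to_date, 0.0)
    base = sum(remaining)
    factor = remaining_total / base if base else 0.0
    return [f * factor for f in remaining]

# Fast mover: 100 units forecast for the week, spread Mon-Sun by traffic.
# By end of day Wednesday, 29 units have sold against a 24-unit forecast.
daily = [8, 8, 8, 12, 20, 28, 16]
print(consume_forecast(daily, actuals_to_date=29, elapsed_periods=3))
# Thu-Sun re-spread to total 71 (was 76), keeping the week at 100.
# For a slow mover, the same call would run over monthly or quarterly buckets.
```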

Controlling Store Inventory – Selective Order Release

Let’s assume for a moment a 1 week lead time from DC to store. In the example below, a shipment is planned in Week 2, which means that in order to get this shipment in Week 2, the store needs to trigger a firm replenishment right now:

Using standard replenishment rules that you would use for fast moving items, this planned shipment would automatically trigger as a store transfer in Week 1 to be delivered in Week 2. But this replenishment requirement is being calculated based on a forecast in Week 2 and as previously mentioned, we do not have confidence that this specific quantity will be sold in this specific week at this specific store.

When that shipment of 1 unit arrives at the store (bringing the on hand up to 3 units), it’s quite possible that you won’t actually sell it for several more weeks. And the overstock situation would be further exacerbated if the order multiple is greater than 1 unit.

This is where having the original decimal forecast is useful. Remember that, as a practical matter, the small decimals represent the probability of a sale in a particular week. This allows us to calculate a tradeoff between firming this shipment now or waiting for the sale to materialize first.

Let’s assume that choosing to forgo the shipment in Week 2 today means that the next opportunity for a shipment is in Week 3. In the example below, we can see that there is a 67.8% chance (0.185 + 0.185 + 0.308) that we will sell 1 unit and drop the on hand below safety stock between now and the next available ship date:

Based on this probability, would you release the shipment or not? The threshold for this decision could be determined based on any number of factors such as product size, cost, etc. For example, if an item is small and cheap, you might use a low probability threshold to trigger a shipment. If another slow selling item is very large and expensive, you might set the threshold very high to ensure that this product is only replenished after a sale drives the on hand below the safety stock.
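
As a rough sketch of that tradeoff, assuming (as described above) that the summed decimals approximate the chance of a sale before the next shipping opportunity – the function and threshold values here are hypothetical illustrations:

```python
def should_release(decimal_forecast, weeks_until_next_ship, threshold):
    """Firm the planned shipment now only if the chance of a sale
    (approximated by summing the weekly decimal forecasts) before the
    next shipping opportunity meets the item's threshold."""
    chance_of_sale = sum(decimal_forecast[:weeks_until_next_ship])
    return chance_of_sale >= threshold

weekly = [0.185, 0.185, 0.308, 0.308]
# 0.185 + 0.185 + 0.308 = 0.678, the 67.8% chance from the example
print(should_release(weekly, 3, threshold=0.50))  # small, cheap item: True
print(should_release(weekly, 3, threshold=0.90))  # large, expensive item: False
```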

Remember, the probabilities themselves follow the sales curve, so an order has a higher probability of triggering in a higher selling period than in a lower selling period, which would be the desired behaviour.

The point of all of this is that the same principles of Flowcasting (forecast only at the point of consumption, every item has a 52 week forecast and plan, only order at the lead time, etc.) can still apply to items on the long tail, so long as the planning logic you use incorporates these elements.

Ordinary Observation


It’s September 28, 1928 in a West London lab. A physician, Alex, was doing some basic research that had been assigned to him regarding antibacterial agents. He’d been doing the same thing for a number of days when he noticed something odd.

What caught his eye and attention that fateful day was that mold had actually killed some bacteria in one of his plates. Usually samples like this would be discarded, but instead Alex kept this one and began to wonder. If this mold could kill this type of bacteria, could it be used to kill destructive bacteria in the human body?

Alexander Fleming would spend the next 14 years working out the kinks and details before “penicillin” was officially used to treat infections. It was a discovery that would revolutionize medicine: the world’s first antibiotic.

Dr. Fleming was able to develop this innovation through the simple power of ordinary observation. Sherlock Holmes once famously said to Watson: “You see, but you do not observe. The distinction is clear.” According to psychologist and writer Maria Konnikova: “To observe, you must learn to separate situation from interpretation, yourself from what you are seeing.”

Here’s another example of the power of observation. Fast forward to 1955, a relatively unknown and small furniture store in Älmhult, Sweden. One day, the founder and owner noticed something odd. An employee had purchased a table to take home to the family. Rather than struggling to cram the assembled table into his car, this employee took the legs off and carefully placed them in a box, which, in turn, would fit nicely in his car for delivery home.

As it turned out, the owner of the store, Ingvar Kamprad, would observe this packing phenomenon regularly. He carefully observed what his employees were doing and why it was so effective. And, if this concept was better for his employees, it would stand to reason that it would also be better for his customers – and the bottom line.

Soon after, Kamprad would work tirelessly to perfect the idea of selling disassembled furniture – changing the customer journey for furniture acquisition forever, and making IKEA synonymous with this brand promise and a worldwide household name. All because of the power of ordinary observation.

A final story about observation and its impact on supply chain planning.

Ken Moser is one of Canada’s top retailers – leading and managing one of Canadian Tire’s best stores in northern Ontario. About 15 years ago, he was visited by a chap who would eventually build the world’s first and, to date, best Flowcasting solution.

This person followed Ken around the store, asking questions and observing how the store operated and how Ken thought – particularly about how to manage the inventory of tens of thousands of items. Rumour has it that when Ken got to a section of the store, he proclaimed something like: “These items are set-it-and-forget-it. I have no idea when they’ll sell, and neither do you. All I know is that, like clockwork, they’ll only sell one a month. For others, it’s like one every quarter.”

Our Flowcasting architect was fascinated with this observation and spent time watching/observing customers perusing this section of the store. And like the two examples above, deep observation and reflection would eventually morph into an approach to forecasting and planning slow selling items that is, to date, the only proven solution in retail. All from the awesome power of ordinary observation.

Yogi Berra, the great Yankee catcher and sometimes philosopher, hit the nail on the proverbial head regarding the importance of ordinary observation when he proclaimed…

You can observe a lot by just watching.

Turns out, you can.

Employing the Law of Large Numbers in Bottom-Up Forecasting


It is utterly implausible that a mathematical formula should make the future known to us, and those who think it can would once have believed in witchcraft. – Jakob Bernoulli (1655-1705)


This is a topic I’ve touched on numerous times in the past, but I’ve never really taken the time to tackle the subject comprehensively.

Before diving in, I just want to make clear that I’m going to stay in my lane: the frame of reference for this entire piece is around forecasting sales at the point of consumption in retail.

In that context, here are some truths that I consider to be self-evident:

  1. Consumers buy specific items in specific stores at specific times. Therefore, in order to plan the retail supply chain from consumer demand back, forecasts are needed by item by store.
  2. Any retailer has a large enough percentage of intermittent demand streams at item/store level (e.g. fewer than 1 sale per week) that they can’t simply be ignored in the forecasting process.
  3. Any given item can have continuous demand in some locations and intermittent demand in other locations.
  4. “Intermittent” doesn’t mean the same thing as “random”. An intermittent demand stream could very well have a distinct pattern that is not visible to the naked eye (nor to most forecast algorithms that were designed to work with continuous demands).
  5. Because of points 1 to 4 above, the Law of Large Numbers needs to be employed to see any patterns that exist in intermittent demand streams.

On this basis, it seems to be a foregone conclusion that the only way to forecast at item/store is by employing a top-down approach (i.e. aggregate sales history to some higher level(s) than item/store so that a pattern emerges, calculate an independent forecast at that level, then push down the results proportionally to the item/stores that participated in the original aggregation of history).

So now the question becomes: How do you pick the right aggregation level for forecasting?

This recent (and conveniently titled) article from the Institute of Business Forecasting by Eric Wilson, called How Do You Pick the Right Aggregation Level for Forecasting?, captures the considerations and drawbacks quite nicely and provides an excellent framework to discuss the problem in a retail context.

A key excerpt from that article is below (I recommend that you read the whole thing – it’s very succinct and captures the essence of how to think about this problem in a few short paragraphs):


When To Go High Or Low?

Despite all the potential attributes, levels of aggregation, and combinations of them, historically the debate has been condensed down to only two options, top down and bottom up.

The top-down approach uses an aggregate of the data at the highest level to develop a summary forecast, which is then allocated to individual items on the basis of their historical relativity to the aggregate. This can be any generated forecast as a ratio of their contribution to the sum of the aggregate or on history which is in essence a naïve forecast.

More aggregated data is inherently less noisy than low-level data because noise cancels itself out in the process of aggregation. But while forecasting only at higher levels may be easier and provides less error, it can degrade forecast quality because patterns in low level data may be lost. High level works best when behavior of low-level items is highly correlated and the relationship between them is stable. Low level tends to work best when behavior of the data series is very different from each other (i.e. independent) and the method you use is good at picking up these patterns.

The major challenge is that the required level of aggregation to get meaningful statistical information may not match the precision required by the business. You may also find that the requirements of the business may not need a level of granularity (i.e. Customer for production purposes) but certain customers may behave differently, or input is at the item/customer or lower level. More often than not it is a combination of these and you need multiple levels of aggregation and multiple levels of inputs along with varying degrees of noise and signals.


These are the two most important points:

  • “High level works best when behavior of low-level items is highly correlated and the relationship between them is stable.”
  • “Low level tends to work best when behavior of the data series is very different from each other (i.e. independent) and the method you use is good at picking up these patterns.”

Now, here’s the conundrum in retail:

  • The behaviour of low level items is very often NOT highly correlated, making forecasting at higher levels a dubious proposition.
  • Most popular forecasting methods only work well with continuous demand history data, which can often be scarce at item/store level (i.e. they’re not “good at picking up these patterns”).

My understanding of this issue was firmly cemented about 19 years ago when I was involved in a supply chain planning simulation for beer sales at 8 convenience stores in the greater Montreal area. During that exercise, we discovered that 7 of those 8 stores had a sales pattern that one would expect for beer consumption in Canada (repeated over 2 full years): strong sales during the summer months, lower sales in the cooler months and a spike around the holidays. The actual data is long gone, but for those 7 stores, it looked something like this:

The 8th store had a somewhat different pattern.

And by “somewhat different”, I mean exactly the opposite:

Remember, these stores were all located within about 30 kilometres of each other, so they all experienced generally the same weather and temperature at the same time. We fretted over this problem for a while, thinking that it might be an issue with the data. We even went so far as to call the owner of the 8-store chain to ask him what might be going on.

In an exasperated tone that is typical of many French Canadians, he impatiently told us that of course that particular store has slower beer sales in the summer… because it is located in the middle of 3 downtown university campuses: fewer students in the summer months means fewer beer sales at that store during that time.

If we had visited every one of those 8 stores before we started the analysis (we didn’t), we may have indeed noticed the proximity of university campuses to one particular store. Would we have pieced together the cause/effect relationship to beer sales? My guess is probably not. Yet the whole story was right there in the sales data itself, as plain as the nose on your face.

We happened upon this quirk after studying a couple dozen SKUs across 8 locations. A decent sized retailer can sell tens of thousands of SKUs across hundreds or thousands of locations. With millions of item/store combinations, how many other quirky criteria like that could be lurking beneath the surface and driving the sales pattern for any particular item at any particular location?

My primary conclusion from that exercise was that aggregating sales across store locations is definitely NOT a good idea.

So in terms of figuring out the right level of aggregation, that just leaves us with the item dimension – stay at store level, but aggregate across categories of similar items. But in order for this to be a good option for the top level, we now have another problem: “behavior of low-level items is highly correlated and the relationship between them is stable”.

That second part becomes a real issue when it comes to trying to aggregate across items. Retailers live every day on the front line of changing consumer sentiment and behaviour. As a consequence of that, it is very uncommon to see a stable assortment of items in every store year in and year out.

Let’s say that a category currently has 10 similar items in it. After an assortment review, it’s decided that 2 of those items will be leaving the category and 4 new products will be introduced into the category. This change is planned to be executed in 3 months’ time. This is a very simple variation of a common scenario in retail.

Now think about what that means with regard to managing the aggregated sales history for the top level (category/store):

  • The item/store sales history currently includes 2 items that will be leaving the assortment. But you can’t simply exclude those 2 items from the history aggregation, because this would understate the category/store forecast for the next 3 months, during which time those 2 items will still be selling.
  • The item/store level sales history currently does not include the 4 new items that will be entering the assortment. But you can’t simply add surrogate history for the 4 new items into the aggregation, because this would overstate the category/store forecast for next 3 months before those items are officially launched.

In this scenario, how would one go about setting up the category/store forecast in such a way that:

  1. It accounts for the specific items participating in the aggregation at different future times (before, during and after the anticipated assortment change)?
  2. The category/store forecast is being pushed down to the correct items at different future times (before, during and after the anticipated assortment change)?

And this is a fairly simple example. What if the assortment changes above are being rolled out to different stores at different times (e.g. a test market launch followed by a staged rollout)? What if not every store is carrying the full 10 SKU assortment today? What if not every store will be carrying the full 12 SKU assortment in the future?

The complexity of trying to deal with this in a top-down structure can be nauseating.

So it seems that we find ourselves in a bit of a pickle here:

  1. The top-down approach is unworkable in retail because the behaviour between locations for the same item is not correlated (beer in Montreal stores) and the relationships among items for the same location are not stable (constantly changing assortments).
  2. In order for the bottom-up approach to work, there needs to be some way of finding patterns in intermittent data. It’s a self-evident truth that the only way to do this is by aggregating.

So the Law of Large Numbers is still needed to solve this problem, but in a retail setting, there is no “right level” of aggregation above item/store at which to develop reliable independent top level forecasts that are also manageable.

Maybe we haven’t been thinking about this problem in the right way.

This is where Darryl Landvater comes in. He’s a long-time colleague and mentor of mine best known as a “manufacturing guy” (he’s the author of World Class Production and Inventory Management, as well as co-author of The MRP II Standard System), but in reality he’s actually a “planning guy”.

A number of years ago, Darryl recognized the inherent flaws with using a top-down approach to apply patterns to intermittent demand streams and broke the problem down into two discrete parts:

  1. What is the height of the curve (i.e. rate of sale)?
  2. What is the shape of the curve (i.e. selling profile)?

His contention was that it’s not necessary to use aggregation to calculate completely independent sales forecasts (i.e. height + shape). Instead, what’s needed is to aggregate only to calculate selling profiles, to be used in cases where the discrete demand history for an item at a store is insufficient to determine one. We’re still using the Law of Large Numbers, but only to solve for the specific problem inherent in slow selling demands – finding the shape of the curve.

It’s called Profile Based Forecasting and here’s a very simplified explanation of how it works:

  1. Calculate an annual forecast quantity for each independent item/store based on sales history from the last 52+ weeks (at least 104 weeks of rolling history is ideal). For example, if an item in a store sold 25 units 2 years ago and 30 units over the most current 52 weeks, then the total forecast for the upcoming 52 weeks might be around 36 units with a calculated trend applied.
  2. Spread the annual forecast into individual time periods as follows:
    • If the item/store has a sufficiently high rate of sale that a pattern can be discerned from its own unique sales history (for example, at least 70 units per year), then calculate the selling pattern from only that history and multiply it through the item/store’s selling rate.
    • If the item/store’s rate of sale is below the “fast enough to use its own history” threshold, then calculate a sales pattern using a category of similar items at the same store and multiply those percentages through the independently calculated item/store annual forecast.
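
Here’s a deliberately simplified sketch of those two steps in Python. The threshold, the trend calculation and the data shapes are illustrative assumptions only – the real logic has far more nuance:

```python
def annual_forecast(units_two_years_ago, units_last_year):
    """Height of the curve: a trended annual rate of sale.
    E.g. 25 units then 30 units -> a trend of 1.2 -> 36 units."""
    trend = units_last_year / units_two_years_ago if units_two_years_ago else 1.0
    return units_last_year * trend

def spread_forecast(annual_units, own_history, category_history, threshold=70):
    """Shape of the curve: use the item/store's own weekly history when it
    sells fast enough; otherwise borrow only the shape from similar items
    at the same store and multiply it through the item's own rate."""
    source = own_history if sum(own_history) >= threshold else category_history
    total = sum(source)
    profile = [week / total for week in source]      # weekly percentages
    return [pct * annual_units for pct in profile]   # height x shape

# A slow seller: a 36-unit annual forecast shaped by the category's profile
# (4 periods shown for brevity; the spread still totals the item's own rate)
print(sum(spread_forecast(annual_forecast(25, 30), [0, 1, 0, 2], [5, 10, 20, 15])))  # 36.0
```

Note that the category history contributes nothing but the weekly percentages – the height of the curve always comes from the item/store itself.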

There is far more to it than that, but the separation of “height of the curve” from “shape of the curve” as described above is the critical design element that forms the foundation of the approach.

Think about what that means:

  1. If an item/store’s rate of sale is sufficient to calculate its own independent sales profile at that level, then it will do so.
  2. If the rate of sale is too low to discern a pattern, then the shape being applied to the independent item/store’s rate of sale is derived by looking at similar items in the category within the same store. Because the profiles are calculated from similar products and only represent the weekly percentages through which to multiply the independent rate of sale, they don’t need to be recalculated very often and are generally immune to the “ins and outs” of specific products in the category. It’s just a shape, remember.
  3. All forecasting is purely bottom-up. Every item at every store can have its own independent forecast with a realistic selling pattern and there are no forecasts to be calculated or managed above the item/store level.
  4. The same forecast method can be used for every item at every store. The only difference between fast and slow selling items is how the selling profile is determined. As the selling rate trends up or down over time, the appropriate selling profile will be automatically applied based on a comparison to the threshold. This makes the approach very “low touch” – demand planners can easily oversee several hundred thousand item/store combinations by managing only exceptions.

With realistic, properly shaped forecasts for every item/store enabled without any aggregate level modelling, it’s now possible to do top-down stuff that makes sense, such as applying promotional lifts or overrides for an item across a group of stores and applying the result proportionally based on each store’s individual height and shape for those specific weeks, rather than using a naive “flat line” method.

Simple. Intuitive. Practical. Consistent. Manageable. Proven.

Noise is expensive


Did you know that the iHome alarm clock, common in many hotels, shows a small PM when the time is after 12 noon? You wonder how many people fail to note that the tiny ‘pm’ isn’t showing when they set the alarm, and miss their planned wake-up. Seems a little complicated and unnecessary, wouldn’t you agree?

Did you also know that most microwaves also depict AM or PM? If you need the clock in the microwave to tell you whether it’s morning or night, something’s a tad wrong.

More data/information isn’t always better. In fact, in many cases, it’s a costly distraction or even provides the opportunity to get the important stuff wrong.

Contrary to current thinking, data isn’t free.

Unnecessary data is actually expensive.

If you’re like me, then your life is being subjected to lots of data and noise…unneeded and unwanted information that just confuses and adds complication.

Just think about shopping now for a moment.  In a recent and instructive study sponsored by Oracle (see below), the disconnect between noise and what consumers really want is startling:

  1. 95% of consumers don’t want to talk or engage with a robot
  2. 86% have no desire for other shiny new technologies like AI or virtual reality
  3. 48% of consumers say that these new technologies will have ZERO impact on whether they visit a store and even worse, only 14% said these things might influence them in their purchasing decisions

What this is telling us – and especially supply chain technology firms – is that, from the consumer’s view, we don’t seem to understand what’s noise and what’s actually relevant. I’d argue we’ve got big-time noise issues in supply chain planning, especially as it relates to retail.

I’m talking about forecasting consumer sales at a retail store/webstore or point of consumption.  If you understand retail and analyze actual sales you’ll discover something startling:

  1. 50%+ of product/store combinations sell fewer than 20 units per year, or about 1 every 2-3 weeks.

Many of the leading supply chain planning companies believe that the answer to forecasting and planning at store level is more data and more variables…in many cases, more noise. You’ll hear many of them proclaim that their solution takes hundreds of variables into account, simultaneously processing hundreds of millions of calculations to arrive at a forecast.  A forecast, apparently, that is cloaked in beauty.

As an example, consider the weather.  According to these companies not only can they forecast the weather, they can also determine the impact the weather forecast has on each store/item forecast.

Now, since you live in the real world with me, here’s a question for you:  How often is the weather forecast (from the weather network that employs weather specialists and very sophisticated weather models) right?  Half the time?  Less?  And that’s just trying to predict the next few days, let alone a long term forecast.  Seems like noise, wouldn’t you agree?

Now, don’t get me wrong.  I’m not saying the weather does not impact sales, especially for specific products.  It does.  What I’m saying is that people claiming to predict it with any degree of accuracy are really just adding noise to the forecast.

Weather.  Facebook posts.  Tweets.  The price of tea in China.  All noise, when trying to forecast sales by product at the retail store.

All this “information” needs to be sourced.  Needs to be processed and interpreted somehow.  And it complicates things for people as it’s difficult to understand how all these variables impact the forecast.

Let’s contrast that with a recent retail implementation of Flowcasting.

Our most recent retail implementation of Flowcasting factors none of these variables into the forecast and resulting plans.  No weather forecasts, social media posts, or sentiment data is factored in at all.

None. Zip. Zilch.  Nada.  Heck, it’s so rudimentary that it doesn’t even use any artificial intelligence – I know, you’re aghast, right?

The secret sauce is an intuitive forecasting solution that produces integer forecasts over varying time periods (monthly, quarterly, semi-annually) and consumes these forecasts against actual sales. So, the forecasts and their consumption can be thought of as probabilities. Think of it like someone managing a retail store, who can say fairly confidently: “I know this product will sell one this month, I just don’t know what day!”

The solution also includes simple replenishment logic to ensure all dependent plans are sensible, and ordering for slow selling products is based on how probable you believe a sale is in the short term (i.e., orders are only triggered for a slow selling item if the probability of making a sale is high).

In addition to the simple, intuitive system capabilities above, the process also employs a different kind of intelligence – human. Planners and category managers, since they are speaking the same language – sales – easily come to consensus for situations like promotions and new product introductions. Once the system is updated, the solution automatically translates and communicates the impact of these events to all partners.

So, what are the results of using such a simple, intuitive process and solution?

The process is delivering world class results in terms of in-stock, inventory performance and costs.  Better results, from what I can tell, than what’s being promoted today by the more sophisticated solutions.  And, importantly, enormously simpler, for obscenely less cost.

Noise is expensive.

The secret for delivering world class performance (supply chain or otherwise) is deceptively simple…

Strip away the noise.

Customer Service Collateral Damage


Good intentions can often lead to unintended consequences. – Tim Walberg


Speed kills.

Retailers with brick and mortar operations are always trying to keep the checkout lines moving and get customers out the door as quickly as possible. Many collect time stamps on their sales transactions in order to measure and reward their cashiers based on how quickly they can scan.

Similarly, being able to receive quickly at the back of the store is seen as critical to customer service – product only sells off the shelf, not from the receiving bay or the back of a truck.

This focus on speed has led to many in-store transactional “efficiencies”:

  • If a customer puts 12 cans of frozen concentrated juice on the belt, a cashier may scan the first one and use the multiplier key to add the other 11 to the bill all at once.
  • If a product doesn’t scan properly or is missing the UPC code, just ask the customer for the price and key the sale under a “miscellaneous” SKU or a similar item with the same price, rather than calling for a time consuming code check.
  • If a shipment arrives in the receiving bay, just scan the waybill instead of each individual case and get the product to the floor.

These time saving measures can certainly delight “the customer of this moment”, but there can also be consequences.

In the “mult key” example, the 12 cans scanned could be across 6 different flavours of juice. The customer may not care since they’re paying the same price, but the inventory records for 6 different SKUs have just been fouled up for the sake of saving a few seconds. To the extent that the system on hand balances are used to make automated replenishment decisions, this one action could be inconveniencing countless customers for several more days, weeks or even months before the lie is exposed.

The smile on a customer’s face because you saved her 5 seconds at the checkout or the cashier speed rankings board in the break room might be tangible signs of “great customer service”, but the not-so-easy-to-see stockouts and lost sales that arise from this practice over time are extremely costly.

Similarly with skipping code checks or “pencil whipping” back door receipts. Is sacrificing accuracy for the sake of speed really good customer service policy?

A recent article published in Canadian Grocer magazine begins with the following sentence:

“A lack of open checkouts and crowded aisles may be annoying to grocery shoppers, but their biggest frustration is finding a desired product is out of stock, according to new research from Field Agent.”

According to the article, out of stocks are costing Canadian grocers $63 billion per year in sales. While better store level planning and replenishment can drive system reported in-stocks close to 100%, the benefits are muted if the replenishment system thinks the store has 5 units when they actually have none.

Not only does this affect the experience of a walk-in customer looking at an empty shelf, but it’s actually even more serious in an omnichannel world where the expectation is that retailers will publish store inventories on their public websites (gulp!). An empty shelf is one thing, but publishing an inaccurate on hand on your website is tantamount to lying right to your customers’ faces.

We’ve seen firsthand that it’s not uncommon for retailers to have a store on hand accuracy percentage in the low 60s (meaning that almost 40% of the time, the system on hand record differs from the counted quantity by more than 5% at item/location level). Furthermore, we’ve found that on the day of an inventory count, the actual in stock is several points lower than the reported in stock on average.
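
For clarity, here’s one hedged way such an on-hand accuracy KPI could be computed – the 5% tolerance comes from the definition above, while the function and sample data are illustrative assumptions:

```python
def on_hand_accuracy(records, tolerance=0.05):
    """Percentage of item/locations where the system on-hand matches
    the physical count within the tolerance (5% here)."""
    hits = sum(1 for system_qty, counted_qty in records
               if abs(system_qty - counted_qty) <= counted_qty * tolerance)
    return hits / len(records)

# (system on hand, counted quantity) pairs for four item/locations
counts = [(5, 5), (5, 0), (10, 10), (12, 10)]
print(f"{on_hand_accuracy(counts):.0%}")   # 50%
```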

Suffice it to say that inaccurate on hand records are a big part of the out of stocks problem.

Nothing I’ve said above is particularly revolutionary or insightful. The real question is why has it been allowed to continue?

In my view, there are 3 key reasons:

  1. Most retailers conflate shrink with inventory accuracy and make the horribly, horribly mistaken assumption that if their financial shrink is below 1.5%, then their inventory management is under control. Shrink is a measure for accountants, not customers, and the responsibility for store inventory management belongs in Store Operations, not Finance.
  2. Nobody measures the accuracy of their on hands. It’s fine to measure the speed of transactions and the efficiency of store labour, but if you’re taking shortcuts to achieve those efficiencies, you should also be measuring the consequence of those actions – especially when the consequence so profoundly impacts the customer experience.
  3. Retailers think that inaccurate store on hands is an intractable problem that’s impossible to economically solve. That was true for every identified problem in human history at one point. However, I do agree that if no action is taken to solve the problem because it is “impossible to solve”, then it will never be solved.

It’s true that overcoming inertia on this will not be easy.

Your customers’ expectations will continue to rise regardless.

Lucky the car was dirty


It’s 1971 and Bill Fernandez would do something that would change the course of history. On that fateful day, Bill decided to go for a nice stroll with his good friend, Steve Jobs. As luck would have it, their walk took them past the house of another of Bill’s pals, Steve Wozniak.

Luckily, Woz’s car was dirty and he was outside, washing it. Bill introduced the two Steves and they instantly hit it off. They both shared a passion for technology and practical jokes. Soon after, they started hanging out, collaborating and eventually working together to form Apple. The rest is history.

It’s incredible, in life and business, how powerful and important Luck is.

People who know me well, know that I’m an avid reader and one of the authors that’s influenced my thinking the most is the legendary Tom Peters – you know, of In Search of Excellence fame, among many other brilliant works.

Tom’s also a big believer in Luck. In fact, he believes it’s the most important factor in anyone’s success. I think he’s right. As he correctly points out in his ditty below, you make your own luck and, when you do, you just get luckier and luckier – which is an ongoing philosophy that helps you learn, change, grow and deliver.

So, today, I’m celebrating and counting my lucky stars. I know that luck is THE factor in any success (and failures) that I’ve had. Just consider…

Years ago, I started my career fresh from school at a prestigious consulting firm in downtown Toronto. As luck would have it, one of my Partners, Gus, gave me some brilliant advice. He said to me, “Mike, you don’t know shit. The only way to learn is to read. Tons. I’ll make a deal with you. For every business-related book you read, the firm will pay for it.” Luckily, I took the advice of Gus and this propelled me into life-long reading and learning.

Roughly 20 years ago, another massive jolt of luck helped me considerably. I was leading a team at a large Canadian retailer that would eventually design what we now call Flowcasting, along with delivering the first full scale implementation of integrated time-phased planning and supplier scheduling in retail.

The original design was enthusiastically supported by our team, but did not have the blessings of Senior Management. In fact, the VP at the time (my boss) indicated that this would not work, we’d better change it, or I’d be fired.

Luckily one of the IT folks, John, then said to me something like “this is just like DRP at store level. You should call Andre Martin and see what he thinks”. To which I replied, “Who’s Andre Martin and what is DRP?”. The next day John brought me a copy of Andre’s book, Distribution Resource Planning. I read it (luckily I’m a reader, you know) and agreed. I called Andre the next day and eventually he and his colleague, Darryl, helped us convince Senior Management the design was solid – which led to a very successful implementation and helped change the paradigm of retail planning.

As luck would have it, my director on that initial project would later become CEO of Princess Auto Ltd (PAL) – as you know, an early adopter of the Flowcasting process and solution. Given his understanding of the potential of planning and connecting the supply chain from consumption to supply, it was not surprising that we were called to help. Luck had played an important role again.

Luck also played a significant role in the successful implementation of Flowcasting at PAL. The Executive Sponsor, Ken, and the Team Lead, Kim, were people that:

  1. Could simplify things;
  2. Could see the potential of the organization working in harmony driven by the end consumer; and
  3. Had credibility within the organization to help drive and instill the change.

We were lucky that the three of us had very similar views and philosophy regarding change – focusing on changing the mental model, and less on spewing what I’d call Corporate Mayonnaise.

In addition to being like-minded, the project team at PAL were lucky in that they used a software solution that was designed for the job. The RedPrairie Collaborative Flowcasting solution was designed for purpose – a simple, elegant, low-touch, intuitive system that is easy to use and even easier to implement.

We were very lucky that as an early adopter, we were given the opportunity to use the solution to prove the concept, at scale. As a result, our implementation focused mainly on changing minds and behaviors rather than the typical system and integration issues that plague these implementations when a solution not fit for purpose is deployed.

So, my advice to you is simple. When you get the chance, jot down all the luck you’ve had in your career and life so far. If you’re honest, you’ll realize that luck has played a huge role in your success and who you are today.

And, by all means, you should continue to welcome and encourage more luck into your life.

Thank you and Good Luck!

Rise of the Machines?


It requires a very unusual mind to undertake the analysis of the obvious. – Alfred North Whitehead (1861-1947)


My doctor told me that I need to reduce the amount of salt, fat and sugar in my diet. So I immediately increased the frequency of oil changes for my car.

Confused?

I don’t blame you. That’s how I felt after I read a recent survey about the adoption of artificial intelligence (AI) in retail.

Note that I’m not criticizing the survey itself. It’s a summary of collected thoughts and opinions of retail C-level executives (pretty evenly split among hardlines/softlines/grocery on the format dimension and large/medium/small on the size dimension), so by definition it can’t be “wrong”. I just found some of the responses to be revealing – and bewildering.

On the “makes sense” side of the ledger, the retail executives surveyed intend to significantly expand customer delivery options for purchases made online over the next 24 months, specifically:

  • 79% plan to offer ship from store
  • 80% plan to offer pick up in store
  • 75% plan to offer delivery using third party services

This supports my (not particularly original) view that the physical store affords traditional brick and mortar retailers a competitive advantage over online retailers like Amazon, at least in the short to medium term.

However, the next part of the survey is where we start to see trouble (the title of this section is “Retailers Everywhere Aren’t Ready for the Anywhere Shelf”):

  • 55% of retailers surveyed don’t have a single view of inventory across channels
  • 78% of retailers surveyed don’t have a real-time view of inventory across channels

What’s worse is that there is no mention at all about inventory accuracy. I submit that the other 45% and 22% respectively may have inventory visibility capabilities, but are they certain that their store level inventory records are accurate? Do they actually measure store on hand accuracy (by item by location in units, which is what a customer sees) as a KPI?

The title of the next slide is “Customer Experience and Supply Chain Maturity Demands Edge Technologies”. Okay… Sure… I guess.

The slide after that concludes that retail C-suite executives believe that the top technologies “having the broadest business impact on productivity, operational efficiency and customer experience” are as follows:

  • #1 – Artificial Intelligence/Machine Learning
  • #2 – Connected Devices
  • #3 – Voice Recognition

Towards the end, it was revealed that “The C-suite is planning a 5X increase in artificial intelligence adoption over the next 2 years”, and that 50% of those executives see AI as an emerging technology that will have a significant impact on “sharpening inventory levels” (whatever that actually means).

So just to recap:

  • Over the next 2 years, retailers will be aggressively pursuing customer delivery options that place ever-increasing importance on the visibility and accuracy of store inventory.
  • A majority of retailers haven’t yet achieved the visibility part, and it’s highly unlikely that the ones who have are also achieving accuracy (the second part is my assumption and I welcome being proved wrong on that).
  • Over the next 2 years, retailers intend to increase their investment in artificial intelligence technologies fivefold.

I’m reminded of the scene in Die Hard 2 (careful before you click – the language is not suitable for a work environment or if small children are nearby) where terrorists take over Dulles International Airport during a zero visibility snowstorm and crash a passenger jet simply by transmitting a false altitude reading to the cockpit of the plane.

Even in 1990, passenger aircraft were quite technologically advanced and loaded with systems that could meet the definition of “artificial intelligence”. What happens when one piece of critical data fed into the system is wrong? Catastrophe.

I need some help understanding the thought process here. How exactly will AI solve the inventory visibility/accuracy problem? Are we talking about every retailer having shelf scanning robots running around in every store 2 years from now? What does “sharpen inventory levels” mean and how is AI expected to achieve that (very nebulous sounding) goal?

I’m seriously asking.

Unvarnished

It’s an altercation that’s stuck with me for decades.

Roughly twenty years ago I was leading a retail team that would eventually design what we now call Flowcasting. We were an eclectic team, full of passion and dedicated to designing and implementing something new, and much better.

After a particularly explosive team session – one that saw tensions and ideas run hot – everyone went back to their workstations to let sleeping dogs lie. But one business team member, who’d really gotten into it with one of the IT associates, could not contain his passion. He promptly walked over to the IT associate’s cubicle and said…

“Oh, one more thing…F**k You!!”

Like most of the team, I was a little startled. I went over and talked to my team member and we had a good chat about how inappropriate his actions were. Luckily, the IT team member was one cool dude and didn’t take offence – the event just rolled off his back. To his credit, the next day my team member formally apologized and all was forgiven.

Now, please don’t think I’m condoning this type of action. I’m not. However, as a student of business, change and innovation I’ve been actively learning and trying to understand what really seeds innovation and, in particular, what types of people seem to be able to make change happen.

And, during my research and studies, I keep coming back to this event. It’s evidence of what seems to be a defining trait of innovative teams and people. They are what many refer to as…

Unvarnished.

If I think back to that team from two decades ago, we were definitely unvarnished. We called a spade a spade. Had little to no respect for the company hierarchy and even less for the status quo. And, as a team, we were brutally honest with each other – everyone felt very comfortable letting me know when I was full of shit, which was, and continues to be, often.

But that team moved, as Steve Jobs would say, mountains – not only designing what would later morph into Flowcasting, but implementing a significant portion of the concept and, as a result, changing the mental model of retail planning.

I had no idea at the time, but being unvarnished was the key trait we had. Francesca Gino has extensively studied what makes great teams and penned a brilliant book about her learnings, entitled “Rebel Talent”.

She dedicates considerable time to unvarnishment and quotes extensively from Ed Catmull, the famed leader of Pixar Animation Studios, who worked brilliantly with another member of the unvarnished hall of fame – Steve Jobs.

According to Catmull, “a hallmark of creative cultures is that people feel free to share ideas, opinions and criticisms. When the group draws on the unvarnished perspectives of all its members, the collective knowledge and decision making benefits.”

According to Catmull, and others (including me), “Candor is the key to constructive collaboration”. The KEY to disruptive innovation.

Here’s another example to prove my point. When I was consulting at a national western Canadian retailer, our team was lucky to have an Executive Sponsor who was, as I now understand, unvarnished as well.

As the project unfolded, I was amazed at how he operated and the way he encouraged and responded to what I’d call dissent. Most leaders absolutely abhor dissent – having unfortunately been schooled over time that the company hierarchy is there for a reason and is the tie-breaker on decision making and direction setting.

Our Sponsor openly encouraged people to dissent with him and readily and openly changed his mind whenever required. I vividly remember a very tense and rough session around job design and rollout in which he was at loggerheads with the team, including me. When I think back, it was amazing to see how “safe” team members felt disagreeing with him – and, in this case, very passionately.

As it turned out, over the next few days, we continued the dialogue and he changed his opinion 180 degrees – eventually agreeing with his direct report.

Researchers refer to this as working with “psychological safety” – which is a fancier way of saying people are free to be unvarnished. To say what they believe, why, and to whom, with no consequences whatsoever.

Without question, as I’ve been thinking about and studying great teams and innovation, I realize just how brilliant this Sponsor was and what an environment he helped to foster.

How many Executives, Leaders or teams are really working in an unvarnished environment – with complete psychological safety? I think you’d agree, not many.

If you, your company and your supply chain are going to compete and continually evolve and improve, won’t ongoing innovation need to become a way of life? And that means people need to collaborate better, disrupt faster and feel completely comfortable challenging and destroying the status quo.

Now, I’m not saying that when you don’t agree with someone, you should tell them to go F-themselves.

What I am saying – along with other folks who are a lot smarter than me – is that hiring, promoting, encouraging and fostering unvarnished people and an unvarnished working environment will be crucial!

So here’s to being unvarnished. To being and working in safety. To real collaboration and candor.

And to looking your status quo in the eye and saying…”F**k you!”