Employing the Law of Large Numbers in Bottom-Up Forecasting

 

It is utterly implausible that a mathematical formula should make the future known to us, and those who think it can would once have believed in witchcraft. – Jakob Bernoulli (1655-1705)


This is a topic I’ve touched on numerous times in the past, but I’ve never really taken the time to tackle the subject comprehensively.

Before diving in, I just want to make clear that I’m going to stay in my lane: the frame of reference for this entire piece is around forecasting sales at the point of consumption in retail.

In that context, here are some truths that I consider to be self-evident:

  1. Consumers buy specific items in specific stores at specific times. Therefore, in order to plan the retail supply chain from consumer demand back, forecasts are needed by item by store.
  2. Any retailer has a large enough percentage of intermittent demand streams at item/store level (e.g. fewer than 1 sale per week) that they can’t simply be ignored in the forecasting process.
  3. Any given item can have continuous demand in some locations and intermittent demand in other locations.
  4. “Intermittent” doesn’t mean the same thing as “random”. An intermittent demand stream could very well have a distinct pattern that is not visible to the naked eye (nor to most forecast algorithms that were designed to work with continuous demands).
  5. Because of points 1 to 4 above, the Law of Large Numbers needs to be employed to see any patterns that exist in intermittent demand streams.

On this basis, it seems to be a foregone conclusion that the only way to forecast at item/store is by employing a top-down approach (i.e. aggregate sales history to some higher level(s) than item/store so that a pattern emerges, calculate an independent forecast at that level, then push down the results proportionally to the item/stores that participated in the original aggregation of history).
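
To make the mechanics concrete, here's a minimal sketch of that top-down push-down in Python. The item, store names and numbers are invented, and the aggregate forecast simply stands in for whatever model would be used at the higher level:

    # Top-down approach: aggregate item/store history, forecast at the
    # aggregate level, then push the result down proportionally.
    history = {  # units sold last year by (item, store) - invented numbers
        ("cola-12pk", "store-01"): 520,
        ("cola-12pk", "store-02"): 130,
        ("cola-12pk", "store-03"): 350,
    }

    total_history = sum(history.values())  # 1,000 units across all stores
    aggregate_forecast = 1100              # e.g. a model applies a +10% trend

    # Each item/store receives its share of aggregate history applied
    # to the aggregate forecast
    for key, units in history.items():
        share = units / total_history
        print(key, round(aggregate_forecast * share, 1))
    # ('cola-12pk', 'store-01') 572.0, and so on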

So now the question becomes: How do you pick the right aggregation level for forecasting?

This recent (and conveniently titled) article from the Institute of Business Forecasting by Eric Wilson called How Do You Pick the Right Aggregation Level for Forecasting? captures the considerations and drawbacks quite nicely and provides an excellent framework for discussing the problem in a retail context.

A key excerpt from that article is below (I recommend that you read the whole thing – it’s very succinct and captures the essence of how to think about this problem in a few short paragraphs):


When To Go High Or Low?

Despite all the potential attributes, levels of aggregation, and combinations of them, historically the debate has been condensed down to only two options, top down and bottom up.

The top-down approach uses an aggregate of the data at the highest level to develop a summary forecast, which is then allocated to individual items on the basis of their historical relativity to the aggregate. This can be any generated forecast as a ratio of their contribution to the sum of the aggregate or on history which is in essence a naïve forecast.

More aggregated data is inherently less noisy than low-level data because noise cancels itself out in the process of aggregation. But while forecasting only at higher levels may be easier and provides less error, it can degrade forecast quality because patterns in low level data may be lost. High level works best when behavior of low-level items is highly correlated and the relationship between them is stable. Low level tends to work best when behavior of the data series is very different from each other (i.e. independent) and the method you use is good at picking up these patterns.

The major challenge is that the required level of aggregation to get meaningful statistical information may not match the precision required by the business. You may also find that the requirements of the business may not need a level of granularity (i.e. Customer for production purposes) but certain customers may behave differently, or input is at the item/customer or lower level. More often than not it is a combination of these and you need multiple levels of aggregation and multiple levels of inputs along with varying degrees of noise and signals.


These are the two most important points:

  • “High level works best when behavior of low-level items is highly correlated and the relationship between them is stable.”
  • “Low level tends to work best when behavior of the data series is very different from each other (i.e. independent) and the method you use is good at picking up these patterns.”

Now, here’s the conundrum in retail:

  • The behaviour of low level items is very often NOT highly correlated, making forecasting at higher levels a dubious proposition.
  • Most popular forecasting methods only work well with continuous demand history data, which can often be scarce at item/store level (i.e. they’re not “good at picking up these patterns”).

My understanding of this issue was firmly cemented about 19 years ago when I was involved in a supply chain planning simulation for beer sales at 8 convenience stores in the greater Montreal area. During that exercise, we discovered that 7 of those 8 stores had a sales pattern that one would expect for beer consumption in Canada (repeated over 2 full years): strong sales during the summer months, lower sales in the cooler months and a spike around the holidays. The actual data is long gone, but for those 7 stores, it looked something like this:

The 8th store had a somewhat different pattern.

And by “somewhat different”, I mean exactly the opposite:

Remember, these stores were all located within about 30 kilometres of each other, so they all experienced generally the same weather and temperature at the same time. We fretted over this problem for a while, thinking that it might be an issue with the data. We even went so far as to call the owner of the 8-store chain to ask him what might be going on.

In an exasperated tone that is typical of many French Canadians, he impatiently told us that of course that particular store has slower beer sales in the summer… because it is located in the middle of 3 downtown university campuses: fewer students in the summer months = a decrease in sales for beer during that time for that particular store.

If we had visited every one of those 8 stores before we started the analysis (we didn’t), we may have indeed noticed the proximity of university campuses to one particular store. Would we have pieced together the cause/effect relationship to beer sales? My guess is probably not. Yet the whole story was right there in the sales data itself, as plain as the nose on your face.

We happened upon this quirk after studying a couple dozen SKUs across 8 locations. A decent-sized retailer can sell tens of thousands of SKUs across hundreds or thousands of locations. With millions of item/store combinations, how many other quirky criteria like that could be lurking beneath the surface and driving the sales pattern for any particular item at any particular location?

My primary conclusion from that exercise was that aggregating sales across store locations is definitely NOT a good idea.

So in terms of figuring out the right level of aggregation, that just leaves us with the item dimension – stay at store level, but aggregate across categories of similar items. But for this to be a good option for the top level, we run up against the other requirement: “behavior of low-level items is highly correlated and the relationship between them is stable”.

That second part becomes a real issue when it comes to trying to aggregate across items. Retailers live every day on the front line of changing consumer sentiment and behaviour. As a consequence of that, it is very uncommon to see a stable assortment of items in every store year in and year out.

Let’s say that a category currently has 10 similar items in it. After an assortment review, it’s decided that 2 of those items will be leaving the category and 4 new products will be introduced into the category. This change is planned to be executed in 3 months’ time. This is a very simple variation of a common scenario in retail.

Now think about what that means with regard to managing the aggregated sales history for the top level (category/store):

  • The item/store sales history currently includes 2 items that will be leaving the assortment. But you can’t simply exclude those 2 items from the history aggregation, because this would understate the category/store forecast for the next 3 months, during which time those 2 items will still be selling.
  • The item/store level sales history currently does not include the 4 new items that will be entering the assortment. But you can’t simply add surrogate history for the 4 new items into the aggregation, because this would overstate the category/store forecast for the next 3 months, before those items are officially launched.

In this scenario, how would one go about setting up the category/store forecast in such a way that:

  1. It accounts for the specific items participating in the aggregation at different future times (before, during and after the anticipated assortment change)?
  2. The category/store forecast is being pushed down to the correct items at different future times (before, during and after the anticipated assortment change)?

And this is a fairly simple example. What if the assortment changes above are being rolled out to different stores at different times (e.g. a test market launch followed by a staged rollout)? What if not every store is carrying the full 10 SKU assortment today? What if not every store will be carrying the full 12 SKU assortment in the future?

The complexity of trying to deal with this in a top-down structure can be nauseating.

So it seems that we find ourselves in a bit of a pickle here:

  1. The top-down approach is unworkable in retail because the behaviour between locations for the same item is not correlated (beer in Montreal stores) and the relationships among items for the same location are not stable (constantly changing assortments).
  2. In order for the bottom-up approach to work, there needs to be some way of finding patterns in intermittent data. It’s a self-evident truth that the only way to do this is by aggregating.

So the Law of Large Numbers is still needed to solve this problem, but in a retail setting, there is no “right level” of aggregation above item/store at which to develop reliable independent top level forecasts that are also manageable.

Maybe we haven’t been thinking about this problem in the right way.

This is where Darryl Landvater comes in. He’s a long-time colleague and mentor of mine best known as a “manufacturing guy” (he’s the author of World Class Production and Inventory Management, as well as co-author of The MRP II Standard System), but in reality he’s actually a “planning guy”.

A number of years ago, Darryl recognized the inherent flaws with using a top-down approach to apply patterns to intermittent demand streams and broke the problem down into two discrete parts:

  1. What is the height of the curve (i.e. rate of sale)?
  2. What is the shape of the curve (i.e. selling profile)?

His contention was that it’s not necessary to aggregate in order to calculate completely independent sales forecasts (i.e. height + shape) at some higher level. Instead, what’s needed is to aggregate only to calculate selling profiles, to be used in cases where the discrete demand history for an item at a store is insufficient to determine one. We’re still using the Law of Large Numbers, but only to solve the specific problem inherent in slow selling demands – finding the shape of the curve.

It’s called Profile Based Forecasting and here’s a very simplified explanation of how it works:

  1. Calculate an annual forecast quantity for each independent item/store based on sales history from the last 52+ weeks (at least 104 weeks of rolling history is ideal). For example, if an item in a store sold 25 units 2 years ago and 30 units over the most current 52 weeks, then the total forecast for the upcoming 52 weeks might be around 36 units with a calculated trend applied.
  2. Spread the annual forecast into individual time periods as follows:
    • If the item/store has a sufficiently high rate of sale that a pattern can be discerned from its own unique sales history (for example, at least 70 units per year), then calculate the selling pattern from only that history and multiply it through the item/store’s selling rate.
    • If the item/store’s rate of sale is below the “fast enough to use its own history” threshold, then calculate a sales pattern using a category of similar items at the same store and multiply those percentages through the independently calculated item/store annual forecast (a simplified sketch of this logic follows below).
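
Here's a minimal sketch of that height-versus-shape separation in Python. The 70-units-per-year threshold comes from the example above; the function names and profile math are my simplified illustration, not the actual algorithm:

    def weekly_profile(weekly_history):
        # Convert 52 weeks of unit sales into percentages summing to 1.0
        total = sum(weekly_history)
        return [week / total for week in weekly_history]

    def profile_based_forecast(annual_forecast, own_history, category_history,
                               threshold=70):
        # Height of the curve (annual_forecast) always belongs to the
        # item/store. Shape comes from its own history if it sells fast
        # enough; otherwise it is borrowed from similar items in the same
        # category at the same store.
        if sum(own_history) >= threshold:
            shape = weekly_profile(own_history)       # fast seller: own pattern
        else:
            shape = weekly_profile(category_history)  # slow seller: borrowed shape
        return [annual_forecast * pct for pct in shape]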

There is far more to it than that, but the separation of “height of the curve” from “shape of the curve” as described above is the critical design element that forms the foundation of the approach.

Think about what that means:

  1. If an item/store’s rate of sale is sufficient to calculate its own independent sales profile at that level, then it will do so.
  2. If the rate of sale is too low to discern a pattern, then the shape being applied to the independent item/store’s rate of sale is derived by looking at similar items in the category within the same store. Because the profiles are calculated from similar products and only represent the weekly percentages through which to multiply the independent rate of sale, they don’t need to be recalculated very often and are generally immune to the “ins and outs” of specific products in the category. It’s just a shape, remember.
  3. All forecasting is purely bottom-up. Every item at every store can have its own independent forecast with a realistic selling pattern and there are no forecasts to be calculated or managed above the item/store level.
  4. The same forecast method can be used for every item at every store. The only difference between fast and slow selling items is how the selling profile is determined. As the selling rate trends up or down over time, the appropriate selling profile will be automatically applied based on a comparison to the threshold. This makes the approach very “low touch” – demand planners can easily oversee several hundred thousand item/store combinations by managing only exceptions.

With realistic, properly shaped forecasts for every item/store enabled without any aggregate level modelling, it’s now possible to do top-down stuff that makes sense, such as applying promotional lifts or overrides for an item across a group of stores and applying the result proportionally based on each store’s individual height and shape for those specific weeks, rather than using a naive “flat line” method.

Simple. Intuitive. Practical. Consistent. Manageable. Proven.

Noise is expensive


Did you know that the iHome alarm clock, common in many hotels, shows a small PM when the time is after 12 noon?  You have to wonder how many people fail to notice that the tiny ‘PM’ isn’t showing when they set the alarm, and miss their planned wake-up.  Seems a little complicated and unnecessary, wouldn’t you agree?

Did you also know that most microwaves depict AM or PM too? If you need the clock on the microwave to tell you whether it’s morning or night, something’s a tad wrong.

More data/information isn’t always better. In fact, in many cases, it’s a costly distraction or even provides the opportunity to get the important stuff wrong.

Contrary to current thinking, data isn’t free.

Unnecessary data is actually expensive.

If you’re like me, then your life is being subjected to lots of data and noise…unneeded and unwanted information that just confuses and adds complication.

Just think about shopping now for a moment.  In a recent and instructive study sponsored by Oracle (see below), the disconnect between noise and what consumers really want is startling:

  1. 95% of consumers don’t want to talk or engage with a robot
  2. 86% have no desire for other shiny new technologies like AI or virtual reality
  3. 48% of consumers say that these new technologies will have ZERO impact on whether they visit a store and even worse, only 14% said these things might influence them in their purchasing decisions

What this is telling us – and especially supply chain technology firms – is that we don’t seem to understand, from the consumer’s view, what’s noise and what’s actually relevant. I’d argue we’ve got big time noise issues in supply chain planning, especially when it relates to retail.

I’m talking about forecasting consumer sales at a retail store/webstore or point of consumption.  If you understand retail and analyze actual sales you’ll discover something startling:

  1. For 50%+ of product/store combinations, sales are less than 20 units per year – about 1 unit every 2 to 3 weeks.

Many of the leading supply chain planning companies believe that the answer to forecasting and planning at store level is more data and more variables…in many cases, more noise. You’ll hear many of them proclaim that their solution takes hundreds of variables into account, simultaneously processing hundreds of millions of calculations to arrive at a forecast.  A forecast, apparently, that is cloaked in beauty.

As an example, consider the weather.  According to these companies not only can they forecast the weather, they can also determine the impact the weather forecast has on each store/item forecast.

Now, since you live in the real world with me, here’s a question for you:  How often is the weather forecast (from the weather network that employs weather specialists and very sophisticated weather models) right?  Half the time?  Less?  And that’s just trying to predict the next few days, let alone a long term forecast.  Seems like noise, wouldn’t you agree?

Now, don’t get me wrong.  I’m not saying the weather does not impact sales, especially for specific products.  It does.  What I’m saying is that people claiming to predict it with any degree of accuracy are really just adding noise to the forecast.

Weather.  Facebook posts.  Tweets.  The price of tea in China.  All noise, when trying to forecast sales by product at the retail store.

All this “information” needs to be sourced.  Needs to be processed and interpreted somehow.  And it complicates things for people as it’s difficult to understand how all these variables impact the forecast.

Let’s contrast that with a recent retail implementation of Flowcasting.

Our most recent retail implementation of Flowcasting factors none of these variables into the forecast and resulting plans.  No weather forecasts, social media posts, or sentiment data is factored in at all.

None. Zip. Zilch.  Nada.  Heck, it’s so rudimentary that it doesn’t even use any artificial intelligence – I know, you’re aghast, right?

The secret sauce is an intuitive forecasting solution that produces integer forecasts over varying time periods (monthly, quarterly, semi-annually) and consumes these forecasts against actual sales. The forecasts and their consumption can be thought of as probabilities. Think of it like someone managing a retail store, who can say fairly confidently: “I know this product will sell one this month, I just don’t know what day!”

The solution also includes simple replenishment logic to ensure all dependent plans are sensible, and ordering for slow selling products is based on how probable you believe a sale is in the short term (i.e., orders are only triggered for a slow selling item if the probability of making a sale is high).
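
As a rough sketch of both ideas – consuming an integer forecast against actual sales, and only ordering when a sale is probable – consider the following Python. The Poisson model and the 60% trigger are my illustrative assumptions, not the actual solution's logic:

    import math

    def remaining_forecast(period_forecast, sold_so_far):
        # Consume the integer forecast against actual sales: if this month's
        # forecast of 1 unit has already sold, expect no more this month.
        return max(period_forecast - sold_so_far, 0)

    def prob_of_sale(annual_rate, weeks):
        # P(at least one sale in the next `weeks`), assuming Poisson demand
        return 1 - math.exp(-annual_rate * weeks / 52)

    annual_rate = 20                              # units/year - a typical slow seller
    print(remaining_forecast(2, 1))               # 1 unit still expected this period
    print(f"{prob_of_sale(annual_rate, 1):.0%}")  # ~32% chance of a sale next week

    # Only trigger an order when a near-term sale is likely:
    if prob_of_sale(annual_rate, 2) > 0.60:       # ~54% over 2 weeks -> no order yet
        print("trigger order")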

In addition to the simple, intuitive system capabilities above, the process also employs a different kind of intelligence – human.  Planners and category managers, since they are speaking the same language – sales – easily come to consensus for situations like promotions and new product introductions.  Once the system is updated, the solution automatically translates and communicates the impact of these events for all partners.

So, what are the results of using such a simple, intuitive process and solution?

The process is delivering world class results in terms of in-stock, inventory performance and costs.  Better results, from what I can tell, than what’s being promoted today by the more sophisticated solutions.  And, importantly, enormously simpler, for obscenely less cost.

Noise is expensive.

The secret for delivering world class performance (supply chain or otherwise) is deceptively simple…

Strip away the noise.

Customer Service Collateral Damage

 

Good intentions can often lead to unintended consequences. – Tim Walberg


Speed kills.

Retailers with brick and mortar operations are always trying to keep the checkout lines moving and get customers out the door as quickly as possible. Many collect time stamps on their sales transactions in order to measure and reward their cashiers based on how quickly they can scan.

Similarly, being able to receive quickly at the back of the store is seen as critical to customer service – product only sells off the shelf, not from the receiving bay or the back of a truck.

This focus on speed has led to many in-store transactional “efficiencies”:

  • If a customer puts 12 cans of frozen concentrated juice on the belt, a cashier may scan the first one and use the multiplier key to add the other 11 to the bill all at once.
  • If a product doesn’t scan properly or is missing the UPC code, just ask the customer for the price and key the sale under a “miscellaneous” SKU or a similar item with the same price, rather than calling for a time consuming code check.
  • If a shipment arrives in the receiving bay, just scan the waybill instead of each individual case and get the product to the floor.

These time saving measures can certainly delight “the customer of this moment”, but there can also be consequences.

In the “mult key” example, the 12 cans scanned could be across 6 different flavours of juice. The customer may not care since they’re paying the same price, but the inventory records for 6 different SKUs have just been fouled up for the sake of saving a few seconds. To the extent that the system on hand balances are used to make automated replenishment decisions, this one action could be inconveniencing countless customers for several more days, weeks or even months before the lie is exposed.

The smile on a customer’s face because you saved her 5 seconds at the checkout or the cashier speed rankings board in the break room might be tangible signs of “great customer service”, but the not-so-easy-to-see stockouts and lost sales that arise from this practice over time are extremely costly.

The same goes for skipping code checks or “pencil whipping” back door receipts. Is sacrificing accuracy for the sake of speed really good customer service policy?

A recent article published in Canadian Grocer magazine begins with the following sentence:

“A lack of open checkouts and crowded aisles may be annoying to grocery shoppers, but their biggest frustration is finding a desired product is out of stock, according to new research from Field Agent.”

According to the article, out of stocks are costing Canadian grocers $63 billion per year in sales. While better store level planning and replenishment can drive system reported in-stocks close to 100%, the benefits are muted if the replenishment system thinks the store has 5 units when they actually have none.

Not only does this affect the experience of a walk-in customer looking at an empty shelf, but it’s actually even more serious in an omnichannel world where the expectation is that retailers will publish store inventories on their public websites (gulp!). An empty shelf is one thing, but publishing an inaccurate on hand on your website is tantamount to lying right to your customers’ faces.

We’ve seen firsthand that it’s not uncommon for retailers to have a store on hand accuracy percentage in the low 60s (meaning that almost 40% of the time, the system on hand record differs from the counted quantity by more than 5% at item/location level). Furthermore, we’ve found that on the day of an inventory count, the actual in stock is several points lower than the reported in stock on average.
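
For anyone who wants to start measuring, the KPI implied above is simple to compute. A minimal sketch in Python, using the 5% tolerance from the definition in the previous paragraph and invented sample data:

    def on_hand_accuracy(records, tolerance=0.05):
        # Share of item/locations where the system on hand is within the
        # tolerance of the physically counted quantity
        hits = 0
        for system_qty, counted_qty in records:
            base = max(counted_qty, 1)  # avoid divide-by-zero on empty shelves
            if abs(system_qty - counted_qty) / base <= tolerance:
                hits += 1
        return hits / len(records)

    # (system on hand, counted on shelf) for five item/locations
    counts = [(5, 0), (12, 12), (8, 7), (20, 20), (3, 3)]
    print(f"{on_hand_accuracy(counts):.0%}")  # 60% in this invented sample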

Suffice it to say that inaccurate on hand records are a big part of the out of stocks problem.

Nothing I’ve said above is particularly revolutionary or insightful. The real question is why has it been allowed to continue?

In my view, there are 3 key reasons:

  1. Most retailers conflate shrink with inventory accuracy and make the horribly, horribly mistaken assumption that if their financial shrink is below 1.5%, then their inventory management is under control. Shrink is a measure for accountants, not customers, and the responsibility for store inventory management belongs in Store Operations, not Finance.
  2. Nobody measures the accuracy of their on hands. It’s fine to measure the speed of transactions and the efficiency of store labour, but if you’re taking shortcuts to achieve those efficiencies, you should also be measuring the consequence of those actions – especially when the consequence so profoundly impacts the customer experience.
  3. Retailers think that inaccurate store on hands is an intractable problem that’s impossible to economically solve. That was true for every identified problem in human history at one point. However, I do agree that if no action is taken to solve the problem because it is “impossible to solve”, then it will never be solved.

It’s true that overcoming inertia on this will not be easy.

Your customers’ expectations will continue to rise regardless.

Lucky the car was dirty


It’s 1971 and Bill Fernandez would do something that would change the course of history. On that fateful day, Bill decided to go for a nice stroll with his good friend, Steve Jobs. As luck would have it, their walk took them past the house of another of Bill’s pals, Steve Wozniak.

Luckily, Woz’s car was dirty and he was outside, washing it. Bill introduced the two Steves and they instantly hit it off. They both shared a passion for technology and practical jokes. Soon after, they started hanging out, collaborating and eventually working together to form Apple. The rest is history.

It’s incredible, in life and business, how powerful and important Luck is.

People who know me well, know that I’m an avid reader and one of the authors that’s influenced my thinking the most is the legendary Tom Peters – you know, of In Search of Excellence fame, among many other brilliant works.

Tom’s also a big believer in Luck. In fact, he believes it’s the most important factor in anyone’s success. I think he’s right. As he correctly points out, you make your own luck and, when you do, you just get luckier and luckier – an ongoing philosophy that helps you learn, change, grow and deliver.

So, today, I’m celebrating and counting my lucky stars. I know that luck is THE factor in any success (and failures) that I’ve had. Just consider…

Years ago, I started my career fresh from school at a prestigious consulting firm in downtown Toronto. As luck would have it, one of my Partners, Gus, gave me some brilliant advice. He said to me, “Mike, you don’t know shit. The only way to learn is to read. Tons. I’ll make a deal with you. For every business-related book you read, the firm will pay for it”. Luckily, I took Gus’s advice, and it propelled me into life-long reading and learning.

Roughly 20 years ago, another massive jolt of luck helped me considerably. I was leading a team at a large Canadian retailer that would eventually design what we now call Flowcasting, along with delivering the first full-scale implementation of integrated time-phased planning and supplier scheduling in retail.

The original design was enthusiastically supported by our team, but did not have the blessings of Senior Management. In fact, the VP at the time (my boss) indicated that this would not work, we’d better change it, or I’d be fired.

Luckily one of the IT folks, John, then said to me something like “this is just like DRP at store level. You should call Andre Martin and see what he thinks”. To which I replied, “Who’s Andre Martin and what is DRP?”. The next day John brought me a copy of Andre’s book, Distribution Resource Planning. I read it (luckily I’m a reader, you know) and agreed. I called Andre the next day and eventually he and his colleague, Darryl, helped us convince Senior Management the design was solid – which led to a very successful implementation and helped change the paradigm of retail planning.

As luck would have it, my director on that initial project would later become CEO of Princess Auto Ltd (PAL) – as you know, an early adopter of the Flowcasting process and solution. Given his understanding of the potential of planning and connecting the supply chain from consumption to supply, it was not surprising that we were called to help. Luck had played an important role again.

Luck also played a significant role in the successful implementation of Flowcasting at PAL. The Executive Sponsor, Ken, and the Team Lead, Kim, were people who:

  1. Could simplify things;
  2. Could see the potential of the organization working in harmony, driven by the end consumer; and
  3. Had credibility within the organization to help drive and instill the change.

We were lucky that the three of us had very similar views and philosophy regarding change – focusing on changing the mental model, and less on spewing what I’d call Corporate Mayonnaise.

In addition to being like-minded, the project team at PAL were lucky in that they used a software solution that was designed for the job. The RedPrairie Collaborative Flowcasting solution was designed for purpose – a simple, elegant, low-touch, intuitive system that is easy to use and even easier to implement.

We were very lucky that as an early adopter, we were given the opportunity to use the solution to prove the concept, at scale. As a result, our implementation focused mainly on changing minds and behaviors rather than the typical system and integration issues that plague these implementations when a solution not fit for purpose is deployed.

So, my advice to you is simple. When you get the chance, jot down all the luck you’ve had in your career and life so far. If you’re honest, you’ll realize that luck has played a huge role in your success and who you are today.

And, by all means, you should continue to welcome and encourage more luck into your life.

Thank you and Good Luck!

Rise of the Machines?

 

It requires a very unusual mind to undertake the analysis of the obvious. – Alfred North Whitehead (1861-1947)


 

My doctor told me that I need to reduce the amount of salt, fat and sugar in my diet. So I immediately increased the frequency of oil changes for my car.

Confused?

I don’t blame you. That’s how I felt after I read a recent survey about the adoption of artificial intelligence (AI) in retail.

Note that I’m not criticizing the survey itself. It’s a summary of collected thoughts and opinions of retail C-level executives (pretty evenly split among hardlines/softlines/grocery on the format dimension and large/medium/small on the size dimension), so by definition it can’t be “wrong”. I just found some of the responses to be revealing – and bewildering.

On the “makes sense” side of the ledger, the retail executives surveyed intend to significantly expand customer delivery options for purchases made online over the next 24 months, specifically:

  • 79% plan to offer ship from store
  • 80% plan to offer pick up in store
  • 75% plan to offer delivery using third party services

This supports my (not particularly original) view that the physical store affords traditional brick and mortar retailers a competitive advantage over online retailers like Amazon, at least in the short to medium term.

However, the next part of the survey is where we start to see trouble (the title of this section is “Retailers Everywhere Aren’t Ready for the Anywhere Shelf”):

  • 55% of retailers surveyed don’t have a single view of inventory across channels
  • 78% of retailers surveyed don’t have a real-time view of inventory across channels

What’s worse is that there is no mention at all about inventory accuracy. I submit that the other 45% and 22% respectively may have inventory visibility capabilities, but are they certain that their store level inventory records are accurate? Do they actually measure store on hand accuracy (by item by location in units, which is what a customer sees) as a KPI?

The title of the next slide is “Customer Experience and Supply Chain Maturity Demands Edge Technologies”. Okay… Sure… I guess.

The slide after that concludes that retail C-suite executives believe that the top technologies “having the broadest business impact on productivity, operational efficiency and customer experience” are as follows:

  • #1 – Artificial Intelligence/Machine Learning
  • #2 – Connected Devices
  • #3 – Voice Recognition

Towards the end, it was revealed that “The C-suite is planning a 5X increase in artificial intelligence adoption over the next 2 years”. And that 50% of those executives see AI as an emerging technology that will have a significant impact on “sharpening inventory levels” (whatever that actually means).

So just to recap:

  • Over the next 2 years, retailers will be aggressively pursuing customer delivery options that place ever increasing importance on visibility and accuracy of store inventory.
  • A majority of retailers haven’t even met the visibility criteria and it’s highly unlikely that the ones who have are meeting the accuracy criteria (the second part is my assumption and I welcome being proved wrong on that).
  • Over the next 2 years, retailers intend to increase their investment in artificial intelligence technologies fivefold.

I’m reminded of the scene in Die Hard 2 (careful before you click – the language is not suitable for a work environment or if small children are nearby) where terrorists take over Dulles International Airport during a zero visibility snowstorm and crash a passenger jet simply by transmitting a false altitude reading to the cockpit of the plane.

Even in 1990, passenger aircraft were quite technologically advanced and loaded with systems that could meet the definition of “artificial intelligence”. What happens when one piece of critical data fed into the system is wrong? Catastrophe.

I need some help understanding the thought process here. How exactly will AI solve the inventory visibility/accuracy problem? Are we talking about every retailer having shelf scanning robots running around in every store 2 years from now? What does “sharpen inventory levels” mean and how is AI expected to achieve that (very nebulous sounding) goal?

I’m seriously asking.

Unvarnished

It’s an altercation that’s stuck with me for decades.

Roughly twenty years ago I was leading a retail team that would eventually design what we now call Flowcasting. We were an eclectic team, full of passion and dedicated to designing and implementing something new, and much better.

After a particularly explosive team session – that saw tensions and ideas run hot – everyone went back to their workstations to let sleeping dogs lie. One business team member, who’d really gotten into it with one of the IT associates, could not contain his passion. He promptly walked over to the team member’s cubicle and said…

“Oh, one more thing…F**k You!!”

Like most of the team, I was a little startled. I went over and talked to the team member and we had a good chat about how inappropriate his actions were. Luckily the IT team member was one cool dude and he didn’t take offence to it – the event just rolled off his back. To his credit, the next day my team member formally apologized and all was forgiven.

Now, please don’t think I’m condoning this type of action. I’m not. However, as a student of business, change and innovation I’ve been actively learning and trying to understand what really seeds innovation and, in particular, what types of people seem to be able to make change happen.

And, during my research and studies, I keep coming back to this event. It’s evidence of what seems to be a key trait and characteristic of innovative teams and people. They are what many refer to as…

Unvarnished.

If I think back to that team from two decades ago, we were definitely unvarnished. We called a spade a spade. We had little to no respect for the company hierarchy and even less for the status quo. And, as a team, we were brutally honest with each other, and everyone on the team felt very comfortable letting me know when I was full of shit – which was, and continues to be, often.

But that team moved, as Steve Jobs would say, mountains – not only designing what would later morph into Flowcasting, but implementing a significant portion of the concept and, as a result, changing the mental model of retail planning.

I had no idea at the time, but being unvarnished was the key trait we had. Francesca Gino has extensively studied what makes great teams and penned a brilliant book about her learnings, entitled “Rebel Talent”.

She dedicates considerable time to unvarnishment and quotes extensively from Ed Catmull, famed leader of Pixar Animation Studios, who’s worked brilliantly with another member of the unvarnished hall of fame – Steve Jobs.

According to Catmull, “a hallmark of creative cultures is that people feel free to share ideas, opinions and criticisms. When the group draws on the unvarnished perspectives of all its members, the collective knowledge and decision making benefits.”

According to Catmull, and others (including me), “Candor is the key to constructive collaboration”. The KEY to disruptive innovation.

Here’s another example to prove my point. When I was consulting at a national western Canadian retailer, our team was lucky to have an Executive Sponsor who was, as I now understand, unvarnished as well.

As the project unfolded I was amazed how he operated and the way he encouraged and responded to what I’d call dissent. Most leaders of teams absolutely abhor dissent – having been unfortunately schooled over time that company hierarchy was there for a reason and was the tie-breaker on decision making and direction setting.

Our Sponsor openly encouraged people to dissent with him and readily and openly changed his mind whenever required. I vividly remember a very tense and rough session around job design and rollout in which he was at loggerheads with the team, including me. When I think back, it was amazing to see how “safe” team members felt disagreeing with him – and, in this case, very passionately.

As it turned out, over the next few days, we continued the dialogue and he changed his opinion 180 degrees – eventually agreeing with his direct report.

Neuroscience refers to this as being able to work with “psychological safety” – which is a fancier way of saying people are free to be unvarnished. To say what they believe, why and to whom, with no consequences whatsoever.

Without question, as I’ve been thinking about and studying great teams and innovation, I realize just how brilliant this Sponsor was and the environment he helped to foster.

How many Executives, Leaders or teams are really working in an unvarnished environment – with complete psychological safety? I think you’d agree, not many.

If you, your company and your supply chain is going to compete and continually evolve and improve, won’t ongoing innovation need to become a way of life? And that means people need to collaborate better, disrupt faster and feel completely comfortable challenging and destroying the status quo.

Now, I’m not saying that when you don’t agree with someone to tell them to go F-themselves.

What I am saying – and other folks who are a lot smarter than me – is that hiring, promoting, encouraging and fostering people and a working environment that is unvarnished will be crucial!

So here’s to being unvarnished. To being and working in safety. To real collaboration and candor.

And to looking your status quo in the eye and saying…”F**k you!”

Concealing Your Shame

 There is no shame in not knowing; the shame lies in not finding out. – Russian Proverb

Customer expectations of brick & mortar retailers are changing.

Most retailers are failing miserably at meeting those expectations with regard to providing information about stock availability at their stores online.

I’m not talking about whether or not they have sufficient stock to meet customer demand – it’s even more basic than that. When a customer is looking to visit your store can you even properly tell him/her what your stock status actually is?

Recently, I decided to anecdotally put one particular store to the test on this. I chose this store for the following reasons:

  1. They actually publish their store on hand balances online for all the world to see in real time.
  2. They offer a “buy online, pick up in store” option.
  3. I visit the store fairly frequently and it’s about 1 kilometre from my house.

On the day of my “study”, I only had 2 items I needed. Before leaving, I called up the pages for those items on my iPhone and went to the store. When I got there, I refreshed the pages to retrieve the most up-to-date stock information and compared that number to what I actually found on the shelf. After that, I wandered around the aisles and picked a few other items at random and did the same thing.

Now before I share the results, there are some rather significant caveats that I need to mention:

  1. The inventory is updated in real time, but obviously it’s based on POS transactions. When I did the “physical count” on the shelf, it’s certainly possible that some other customer had picked the item off the shelf but had not yet paid for it.
  2. The study was performed on a busy Saturday afternoon about 4 weeks before Christmas. Not exactly ideal timing for ensuring that the store was stocked neatly or that there wasn’t a lot of product floating around in customer baskets as per point 1 above.
  3. I know that this store has a very large back room and doesn’t keep separate on hand balances for shelf stock and backroom stock. In cases where my count is short, it’s certainly possible that the product was in the back room or displayed elsewhere in the store.
  4. When I got a count discrepancy, I did not ask the staff for help in locating the “missing” items. As I mentioned, we are only weeks away from Christmas and I wasn’t about to waste people’s time finding items that I had no intention of purchasing.

The first item on my list was a carbon dioxide cylinder for our SodaStream. Note that I’ve attempted to crop out any information that would reveal who the retailer is (logos, shelf tags, product identifiers, etc.). This won’t stop some of you from recognizing them, but I can’t do much about that.

Okay, back to the SodaStream cylinder. When I reached the shelf and refreshed the page on my phone, here’s what I got:

Wow, 337 units in stock! (As an aside, this retailer almost always shows the aisle number in the store where the product can be found, which is stellar – not sure why it’s not shown in this case, but it’s a product I buy often, so I knew exactly where to go).

Now here’s the shelf:

You can’t see them all in this image, but the actual count was 18 units, far short of 337. Obviously this is either a massive inventory record error or there’s a pallet of them on a secondary display or in the back room. So long as they sell fewer than 18 per day, buyers of this item will be happy.

RESULT: INCONCLUSIVE

The second item on my list was a large, bark deterring dog collar for my mother-in-law’s dog (it uses vibration or noise to deter barking, not electric shocks, so don’t judge me!). As you’ll see below, my phone told me to go to aisle 56 to find 1 unit:

Unfortunately when I got to the aisle, there was none to be found. I spent a few minutes searching all of the overheads, pegs and bins in this aisle and one aisle over in each direction and couldn’t find it.

RESULT: FAIL

While in aisle 56, I picked another random item (mulberry scented dog shampoo) and looked it up on my phone:

And here is the shelf:

6 units – right on the nose.

RESULT: SUCCESS

Now, how about this Bissell Little Green pet stain remover?

This item is on promotion for $25.00 off and I found an end aisle display with 12 units:

…and one more unit in the home in aisle 60:

So that’s 13 on the shelf vs 32 units reported on hand. But because this item is promoted, there is almost certainly more in the back room to replenish the shelves.

RESULT: INCONCLUSIVE

On to aisle 17 to check out the Stanley chalk line reels.

Hoping to find 5…

…and 5 it is.

RESULT: SUCCESS

You get the picture (no pun intended). I also documented a few other items in the same way, but I’ll spare you the photographic evidence:

  • Richard Self Adhesive Drywall Tape: 3 online, 4 on the shelf (RESULT: PRETTY CLOSE)
  • T.S.P. Heavy Duty Cleaner (400g): 10 online, 4 on the shelf (RESULT: FAIL)
  • Soft Glide Cabinet Hinge: 12 online, none to be found anywhere (RESULT: EPIC FAIL)
  • OOK Picture Hanging Kit: 14 online, 13 on the peg (RESULT: PRETTY CLOSE)

In summary:

  • There were 3 failures out of 9 (I’m counting “Pretty Close” and “Inconclusive” in the success column for fairness)
  • 2 of those 3 failures could have resulted in a lost sale on that day (i.e. the reported on hand was > 0, but there was no stock to be found on the sales floor).
  • With regard to the bark deterrent collar (one of the items I actually wanted to buy), there’s more to the story:
    • When I got home, I ordered the item for in store pickup and the on hand immediately dropped to zero
    • Later that day, I received an email notification and a phone call informing me that the item wouldn’t be available for pickup until the next day
    • From this, I’m surmising that they couldn’t find it in the store and had one delivered from a nearby store overnight
    • The next day, I picked up the item at my home store – lost sale averted

So what was the point of all this and why did I choose “Concealing Your Shame” as the title? Am I trying to shame this retailer for what (anecdotally and with all of my previous caveats applied) looks like imperfect performance?

Au contraire!

Store on hand accuracy is not easy to achieve and this retailer is to be highly commended for their confidence and willingness to be as transparent to customers as possible.

No, the shame is reserved for those retailers who have on hand balances readily available in their systems but choose not to share them. I guess the thinking is that you can’t fail if you don’t try.

I say it again: customer expectations are changing.

If you’re afraid to share your on hand balances with your customers, I have 2 questions:

  1. Why? (you already know why)
  2. What are you doing about it?

Questions and Answers


Did you know that most, if not all, organizations and innovations started with a question, or series of questions?

Reed Hastings concocted Netflix by asking himself a simple question: “What if DVDs could be rented through a subscription-type service, so no one ever had to pay late fees?” (Rumor was that this was just after he’d been hit with a $40 late fee.)

Apple Computer was forged by Woz and Jobs asking, “Why aren’t computers small enough for people to have them in their homes and offices?”

In the 1940s, the Polaroid instant camera was conceived based on the question of a three-year-old. Edwin H. Land’s daughter grew impatient after her father had taken a photo and asked, “Why do we have to wait for the picture, Daddy?”

Harvard child psychologist Dr. Paul Harris estimated that between the ages of two and five, a child asks about 40,000 questions. Yup, forty thousand!

Questions are pretty important. They lead to thinking, reflection, discovery and sometimes breakthrough ideas and businesses.

The problem is that we’re not five years old anymore and, as a result, we just don’t seem to ask enough questions – especially the “why” and “what if” kinds of questions. We should.

Turns out our quest for answers and solutions would be much better served by questions. To demonstrate the power of questions, let’s consider the evolution of solutions to develop a forward looking, time-phased forecast of consumer demand by item/store.

Early solutions recognized that, at item/store level, a significant number of products sell at a very slow rate. Using just that item’s sales history, at that store, made it difficult to determine a selling pattern – how the forecasted demand would unfold over the calendar year.

To solve this dilemma, many of the leading solutions turned to the “law of large numbers” – aggregating a number of similar products into a grouping in order to determine a sales pattern.

I won’t bore you with the details, but the essence of the thinking is that, for the retail store, the forecast pattern would need to be derived from a higher level forecast, and each individual store’s forecast would be that store’s contribution to the total, spread across time using the higher level forecast’s selling pattern.

It’s the standard approach used by many solutions, one of which has even labelled it as multi-level forecasting. Most retail clients who are developing a time-phased forecast at item/store are using this approach.

Although the approach does produce a time-phased item/store forecast, it has glaring and significant problems – most notably in terms of complexity, manageability and reasonableness of using the same selling pattern for a product across a number of stores.

To help you understand, consider a can of pork and beans at a grocery retailer. What level of aggregation would you pick so that the aggregate selling pattern could be used in every store for that product? If you think about it for a while, you’ll understand that two stores even within a few miles of each other could easily have very different selling patterns. Using the same pattern to spread each store’s forecast would yield erroneous and poor results. And, in practice, it does.

Not only that, but you need to manage many different levels of a system-calculated forecast and ensure that these multi-level forecasts stay synchronized across levels – which requires more system processing. Trying to determine the appropriate levels to forecast in order to account for the myriad retail planning challenges has also been a big problem – which has tended to make the resulting implementations more complex.

As an example, for most of these implementations, it’s not uncommon to have 3 or more forecasting levels to “help” determine a selling pattern for the item/store. Adding to the issue is that as the multi-level implementation becomes more complex, it’s harder for planners to understand and manage.

Suffice it to say, this approach has not worked well. It’s taken a questioner, at heart, to figure out a better, simpler and more effective way.

Instead of the conventional wisdom, much like our 3 year old above, he asked some simple questions…

“What if I calculated a rolling, annual forecast first?” “Couldn’t I then spread that forecast into the weekly/daily selling pattern?”

As it turns out, he was right.

Then, another question…

“Why do I have to create a higher level forecast to determine a pattern?” “Couldn’t I just aggregate sales history for like items, in the same store, to determine the selling pattern?”

Turns out, he could.

Finally, a last question…

“Couldn’t I then multiply the annual forecast by the selling pattern to get my time-phased, item/store forecast?”

Yes, indeed he could.

Now, the solution he developed also included some very simple and special thinking around slow selling items and using a varying time period to forecast them – fast sellers in weekly periods, slower sellers monthly and even slower sellers in quarterly or semi-annual periods.

The questions he asked himself were along the lines of: “Why does every item, at the retail store, need to be forecast in weekly time periods?”

Given the very slow rate of sales for most item/stores, the answer is they don’t and shouldn’t.
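
As a sketch of that varying-period idea, here's an illustrative rule in Python. The cutoffs are my own guesses, chosen so that each period expects roughly one or more sales; they are not the actual solution's thresholds:

    def forecast_bucket(annual_units):
        # Choose a forecast period long enough that at least ~1 sale
        # is expected per period
        if annual_units >= 52:
            return "weekly"        # ~1+ per week
        if annual_units >= 12:
            return "monthly"       # ~1+ per month
        if annual_units >= 4:
            return "quarterly"     # ~1+ per quarter
        return "semi-annual"

    for rate in (104, 20, 6, 2):
        print(rate, "->", forecast_bucket(rate))
    # 104 -> weekly, 20 -> monthly, 6 -> quarterly, 2 -> semi-annual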

The solution described above was arrived at by asking questions. It works beautifully and if you’re interested in learning more and perhaps asking a few questions of your own, you know how to find me.

So, if you’re a retailer and are using the complicated, hard-to-manage, multi-level forecasting approach outlined above, perhaps you should ask a question or two as well…

1. “Why are we doing it like this?”
2. “Who is using the new approach and how’s it working?”

They’re great questions and, as you now know, questions will lead you to the answers!

Accuracy or Precision?

 

It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible. – Aristotle (384 BC – 322 BC)


My favourite part about writing these articles is finding just the right quote to introduce them. Before we get started, go back and read the quote from Aristotle above if you happened to skip past it – I think it both accurately and precisely summarizes my argument.

Now in the context of forecasting for the supply chain, let’s talk about what each of these terms mean:

Accuracy: Ability to hit the target (i.e. how close is the actual to the forecast?)

Precision: Size of the target you’re aiming at (i.e. specificity of product, place and timing of the forecast)

I’m sorry to be a total downer, but the reason this article is titled Accuracy or Precision is because you can’t have both. The upper right quadrant in the illustration above ain’t happening (a bit more on that later).

In the world of forecasting, people seem obsessed with accuracy and often ask questions like:

  • What level of forecast accuracy are you achieving?
  • How should we be benchmarking our forecast accuracy?
  • Are we accurate enough? How can we be more accurate?

The problem here is that any discussion about forecast accuracy that does not at the same time account for precision is a complete waste of time.

For example, one tried and true method for increasing forecast accuracy is by harnessing the mystical properties of The Law of Large Numbers.

To put it another way – by sacrificing precision.

Or to put it in the most cheeky way possible (many thanks to Richard Sherman for this gem, which I quote often):


Sherman’s Law:
Forecast accuracy improves in direct correlation to its distance from usefulness.
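
A quick simulation makes Sherman's Law visible. This is a toy Python example under invented assumptions – a slow seller doing about 20 units per store per year, random daily demand, and a perfectly calibrated average-rate forecast – showing that the very same forecast looks terrible at store/week precision and respectable at chain/month precision:

    import random
    random.seed(1)

    DAYS, STORES = 364, 100
    rate = 20 / DAYS  # daily sale probability for a ~20 units/year item

    # Simulated daily unit sales (0 or 1) for each store
    sales = [[1 if random.random() < rate else 0 for _ in range(DAYS)]
             for _ in range(STORES)]

    def mape(actuals, forecast):
        # Mean absolute error as a percentage of the forecast
        return sum(abs(a - forecast) for a in actuals) / (len(actuals) * forecast)

    # Precise: one store, one week at a time (forecast is ~0.38 units/week)
    store_weeks = [sum(sales[0][w * 7:(w + 1) * 7]) for w in range(52)]
    print(f"store/week error: {mape(store_weeks, rate * 7):.0%}")  # huge

    # Aggregated: all stores, 4-week months (forecast is ~154 units/month)
    chain_months = [sum(sum(sales[s][m * 28:(m + 1) * 28]) for s in range(STORES))
                    for m in range(13)]
    print(f"chain/month error: {mape(chain_months, rate * 28 * STORES):.0%}")  # small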


So how do we manage the tradeoff between precision and accuracy in forecasting?

You must choose the level of precision that is required (and no more precise than that) and accept that in doing so, you may be sacrificing accuracy.

For a retailer, the only demand that is truly independent is customer demand at the point of sale. Customers choose specific items in specific locations on specific days. That’s how the retail business works.

This means that the precision of the forecasting process must be by item by location by day – full stop.

Would you be able to make a more accurate prediction by forecasting in aggregate for an item (or a group of items) across all locations by month? Without a doubt.

Will that help you figure out when you need to replenish stock for a 4 pack of 9.5 watt A19 LED light bulbs at store #1378 in Wichita, Kansas?

Nope. Useless.

I can almost see the wincing and hear the heart palpitations that this declaration will cause.

“Oh God! You’ll NEVER be able to get accurate forecasts at that level of precision!” To that I say two things:

  1. It depends on what level of accuracy is actually required at that level of precision.
  2. Too damn bad. That’s the requirement as per your customers’ expectation.

With regard to the first point, keep in mind that it’s not uncommon for an item in a retail store to sell fewer than 20 units per YEAR. On top of that, there are minimum display quantities and pack rounding that will ultimately dictate how much inventory will be available to customers to a much greater degree than the forecast.

Forecasts by item/location/day are still necessary to plan and schedule the upstream supply chain properly, but it’s only necessary for forecasts at that level of precision to be reasonable, not accurate in the traditional sense of the word. This is especially true if you also replan daily with refreshed sales and inventory numbers for every item at every location.

There are those out there who would argue that my entire premise is flawed. That I’m not considering the fact that with advances in artificial intelligence, big data and machine learning, it will actually be possible to process trillions of data elements simultaneously to achieve both precision and accuracy. That I shouldn’t even be constraining my thinking to daily forecasting – soon, we’ll be able to forecast hourly.

Let’s go back to the example I mentioned earlier – an item that sells 20 units (give or take) in a location throughout the course of a year. Assuming that store is open for business 12 hours out of every day and closed 5 days per year for holidays, there are 4,320 hours in which those 20 units will sell. Are we to believe that collecting tons of noise (whoops, I meant “data”) from social media, weather forecasting services and the hourly movement of soybean prices (I mean, why not, right?) will actually be able to predict with accuracy the precise hour for each of those 20 units in that location over the next year? Out of 4,320 hours to choose from? Really?

(Let’s put aside the fact that no retailer that I’ve ever seen even measures how accurate their on hand records are right now, let alone thinking they can predict sales by hour).

I sometimes have a tendency to walk the middle line on these types of predictions. “I don’t see it happening anytime soon, but who knows? Maybe someday…”

Well, not this time.

This is utter BS. Unless all of the laws of statistics have been debunked recently without my noticing, degrees of freedom are still degrees of freedom.

Yes, I’m a loud and proud naysayer on this one and if anyone ever actually implements something like that and demonstrates the benefits they’re pitching, I will gleefully eat a wheelbarrow of live crickets when that time comes (assuming I’m not long dead).

In the meantime, I’m willing to bet my flying car, my personal jetpack and my timeshare on the moon colony (all of which were supposed to be ubiquitous by now) that this will eventually be exposed as total nonsense.

The Autonomous Self-learning Supply Chain

I have to admit, it’s hard work trying to keep up with the latest lingo and thinking when it comes to supply chain planning. Suffice it to say, the concept of digitizing the supply chain is not only cool, but offers tremendous value to those companies that achieve it…and it will, over time, become the norm, in my humble opinion.

A number of companies and supply chain technologists are pursuing a vision they describe as the Autonomous Supply Chain – a supply chain that is largely self-learning, adapting and holistically focused on continuously meeting the needs of consumers and customers.

A lot of folks, when they hear this, shudder at the thought, or dismiss it out of hand…poppycock, they say, this will never happen and is a futurist’s wet dream.

I beg to differ and not only essentially agree with the vision, but can offer initial proof that the concept not only has merit, but also tremendous potential.

One of our most recent retail clients uses the Flowcasting process to plan and manage the flow of inventory from supplier to consumer. What’s brilliant, and consistent with the idea of the autonomous and self-learning supply chain, is that they have, within their Flowcasting solution, a digital twin of their entire, extended supply chain.

What’s a digital twin?

A digital twin is a complete model of the business, whereby all physical product flows, both current and planned, are digitally represented within the solution – a complete, up-to-date, real time view of their business; containing all projected flows from supplier to consumer for an extended planning horizon of 52 or more weeks.

The Flowcasting solution and digital model of the business enables what we often refer to as continuous planning.

The process and solution re-plans and re-calibrates the entire value chain, digitally, based on what happens physically. Changes in sales, inventories, or shipments will result in re-forecasting and re-planning product flows – to stay in stock, flow inventory, and respond to real exceptions or unplanned events. The process, solution and supply chain is self-learning.
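
To give a feel for what that nightly re-planning looks like, here's a toy time-phased netting loop in Python of the kind a DRP-style engine runs. All names and numbers are invented, and real Flowcasting logic is far richer (lead times, pack sizes, promotions and so on):

    def replan(on_hand, forecast, on_order, order_point=5, order_qty=24):
        # Re-project inventory week by week from today's actual on hand and
        # plan a receipt in any week the projection would breach the order point
        planned, projected = [], on_hand
        for week, (demand, arriving) in enumerate(zip(forecast, on_order)):
            projected += arriving - demand
            if projected < order_point:
                planned.append((week, order_qty))  # planned receipt
                projected += order_qty
        return planned

    # Yesterday's sales change on_hand; tonight's run re-plans the whole horizon
    print(replan(on_hand=12, forecast=[4, 4, 5, 6, 4], on_order=[0, 0, 12, 0, 0]))
    # -> [(1, 24)]: one planned receipt in week 1 keeps the projection healthy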

The result is that the Flowcasting process/solution can manage the flow of information and trigger the movement of goods, digitally, on auto-pilot, a vast majority of the time—requiring planner input only when judgment and experience are needed.

When I think about how our client is using the Flowcasting process/solution to plan, I would estimate 95% of the product flows are initiated automatically (e.g., digitally) based on the solution interpreting what yesterday’s sales and inventory movements mean, and then re-adjusting, self-correcting, and altering current and planned product flows.

Furthermore, as part of the implementation, we worked with the planners and semi-automated how they would handle certain exceptions, based on learning from initial planners responding to these exceptions. It’s certainly not a stretch to think that, at some point, a machine/algorithm could learn too and respond to these types of anomalies in order to enable the smooth and continuous flow of product.

And what are the results of using a self-learning, self-correcting and fairly autonomous planning process (i.e., Flowcasting)?

Highest in-stocks in company history, increased sales, improved inventory turns, reduced costs and, most importantly, happier customers.

Please understand I’m not talking about a Skynet scenario here. I firmly believe that supply chain planning solutions can largely become autonomous and self-learning, but will always require some human input for situations where intuition and judgement are required. But, I’d argue this will be the exception and is also a form of a self-learning supply chain (e.g., people learn from experience).

The autonomous, self-learning supply chain is quite a vision. And, like all visions, it needs initial pilots and examples to move the ball forward, provide initial learnings and help people understand what is and might be possible. Our recent retail implementation of Flowcasting, we believe, helps the cause and should provide food for thought for any retailer.

So to the folks and companies pursuing this vision (most notably JDA Software), I can only offer best wishes and the advice from Calvin Coolidge…

“Press on. Nothing in this world can take the place of persistence”.