In 1972, for my 10th birthday, my Mom bought me a wooden chess set and a chess book to teach me the basics of the game. I was hooked shortly after, and the timing was perfect: it coincided with Bobby Fischer’s ascent to chess immortality in September 1972, when he became the 11th World Champion.
As a chess aficionado, I was recently intrigued by a new and different chess book, Game Changer, by International Grandmaster Matthew Sadler and International Master Natasha Regan.
The book chronicles the evolution and rise of computer chess super-grandmaster AlphaZero – a completely new chess algorithm developed by British artificial intelligence (AI) company DeepMind.
Until the emergence of AlphaZero, the king of chess algorithms was Stockfish. Stockfish was built by providing the engine a vast library of recorded grandmaster games, along with extensive libraries of chess openings, middle-game tactics and endgames. It relied on this incredible database of chess knowledge and its monstrous computational abilities.
And the approach worked. Stockfish was the king of chess machines, and its estimated rating of around 3200 is higher than that of any human in history. In short, a match between current World Champion Magnus Carlsen and Stockfish would see the machine win every time.
Enter AlphaZero. What’s intriguing and instructive about AlphaZero is that its developers took a completely different approach to enabling its chess knowledge: machine learning.
Rather than trying to provide the sum total of chess knowledge to the engine, the developers provided only the rules of the game.
AlphaZero was architected to learn from examples, rather than drawing on pre-specified human expert knowledge. The basic approach is that the machine learning algorithm analyzes a position, determines a probability for each possible move, and uses those probabilities to assess the strongest move.
And where did it get examples from which to learn? By playing itself, repeatedly. Over the course of 9 hours, AlphaZero played 44 million games against itself, continuously learning and adjusting the parameters of its neural network as it went.
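To make that idea concrete, here’s a toy sketch of the self-play loop in Python. To be clear, this is nothing like DeepMind’s actual system – no neural network, no tree search, and a trivially simple game (Nim: 21 stones, take 1 to 3 per turn, whoever takes the last stone wins) – but it shows the core loop: start with only the rules, assign a probability to every legal move, and nudge those probabilities after every game of self-play.

```python
import math
import random
from collections import defaultdict

# Toy self-play learner. Everything here is illustrative: AlphaZero pairs a
# deep neural network with Monte Carlo tree search, whereas this sketch uses
# a simple table of move preferences in place of the network's parameters.

N_STONES = 21
MOVES = (1, 2, 3)
prefs = defaultdict(float)  # (stones_left, move) -> learned preference

def move_probabilities(stones):
    """The 'policy': a probability for every legal move in this position."""
    legal = [m for m in MOVES if m <= stones]
    top = max(prefs[(stones, m)] for m in legal)  # subtract max for stability
    weights = [math.exp(prefs[(stones, m)] - top) for m in legal]
    total = sum(weights)
    return {m: w / total for m, w in zip(legal, weights)}

def self_play_game():
    """Play one game against itself; return both move histories and the winner."""
    stones, player = N_STONES, 0
    history = ([], [])
    while True:
        probs = move_probabilities(stones)
        move = random.choices(list(probs), weights=list(probs.values()))[0]
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player  # took the last stone: this player wins
        player = 1 - player

def train(games=20000, step=0.1):
    """Self-play loop: reinforce the winner's moves, discourage the loser's."""
    for _ in range(games):
        history, winner = self_play_game()
        for player in (0, 1):
            sign = step if player == winner else -step
            for state_move in history[player]:
                prefs[state_move] += sign

train()
print(move_probabilities(21))  # taking 1 (leaving 20, a multiple of 4) dominates
```

After enough games, the learned policy typically ends up strongly preferring what a game theorist would tell you directly: always leave your opponent a multiple of 4 stones. Nobody told it that – it was given only the rules, exactly as AlphaZero was.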
In 2017, AlphaZero played a 100-game match against Stockfish, and the match resulted in a comprehensive victory for AlphaZero. Imagine: a chess algorithm architected on a probabilistic machine learning approach taught itself how to play and then smashed the reigning algorithmic world champion!
What was even more impressive to the plethora of interested grandmasters was the manner in which AlphaZero played. It played like a human, like the great attacking players of all time – a more precise version of Tal, Kasparov, and Spassky, complete with pawn and piece sacrifices to gain the initiative.
The AlphaZero story is very instructive for us supply chain planners and retail Flowcasters in particular.
As loyal disciples know, retail Flowcasting requires the calculation of millions of item/store forecasts – a staggering number. Not surprisingly, people cannot manage that many forecasts, and even managing by exception is proving to have its limits.
What’s emerging, and is consistent with the AlphaZero story and learning, is that algorithms (either machine learning or a unified model approach) can shoulder the burden of grinding through and developing item/store specific baseline forecasts of sales, with little to no human touch required.
If you think about it, it’s not that far-fetched, and it will facilitate a game-changing paradigm shift in demand planning.
First, it will relieve demand planners of the burden of learning and understanding different algorithms and approaches for developing a reasonable baseline forecast. Keep in mind that I said a reasonable forecast. When we work with retailers, helping them design and implement Flowcasting, most folks are shocked that we don’t worship at the feet of forecast accuracy – at least not in the traditional sense.
In retail, with so many slow-selling items, chasing traditional forecast accuracy is a bit of a fool’s game. What’s more important is to ensure the forecast is sensible and to assess it on some sort of sliding scale. To wit, if an item usually sells between 20 and 24 units a year at a store, with a store-specific selling pattern, then a reasonable forecast and selling pattern would be in that range.
Slow selling items (indeed, perhaps all items) should be forecasted almost like a probability…for example, you’re fairly confident that 2 units will sell this month, you’re just not sure when. That’s why, counter-intuitively, daily re-planning is more important than forecast accuracy to sustain exceptionally high levels of in-stock…whew, there, I said it!
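To put a little flesh on that, here’s a minimal sketch of what a probability-style forecast for a slow seller might look like. The Poisson distribution is my choice for illustration only (Flowcasting doesn’t prescribe any particular model), using the 20-24 units a year example from above:

```python
from math import exp, factorial

# Illustrative only: a slow seller doing roughly 22 units a year at one
# store, expressed as probabilities of unit sales per month rather than
# a single-number forecast. The Poisson model is an assumption made for
# this sketch, not a prescription.

annual_rate = 22                   # midpoint of the 20-24 units/year range
monthly_rate = annual_rate / 12    # about 1.8 units per month

def prob_exactly(k, rate):
    """Poisson probability of exactly k unit sales in the period."""
    return rate ** k * exp(-rate) / factorial(k)

for k in range(5):
    print(f"P({k} units this month) = {prob_exactly(k, monthly_rate):.2f}")

# "Fairly confident that 2 units will sell this month, just not sure when":
p_two_or_more = 1 - prob_exactly(0, monthly_rate) - prob_exactly(1, monthly_rate)
print(f"P(2 or more units this month) = {p_two_or_more:.2f}")  # about 0.55
```

Viewed this way, the question isn’t whether the point forecast was “accurate” but whether actual sales landed in a sensible range – which is exactly why daily re-planning does the heavy lifting.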
What an approach like this means is that planners will no longer be dilly-dallying around, tuning models and learning the intricacies of various forecasting approaches. Let the machine do it, and review and work with the output.
Of course, demand planners will sometimes need to add judgment to the forecast: situations where the future will be different in ways that would be unknowable to the algorithm, and where planners have unique market insights, be they national or local.
Second, and more importantly, it will allow demand planners to shift their role from analytic to strategic – spending considerably more time picking the “winners” and developing strategies and tactics to drive sales, customer loyalty and engagement.
In reality, spending more time shaping the demand, rather than forecasting it.
And that, in my opinion, will be a game changing shift in thinking, working and performance.
Rise of the Machines?
It requires a very unusual mind to undertake the analysis of the obvious. – Alfred North Whitehead (1861-1947)
My doctor told me that I need to reduce the amount of salt, fat and sugar in my diet. So I immediately increased the frequency of oil changes for my car.
Confused?
I don’t blame you. That’s how I felt after I read a recent survey about the adoption of artificial intelligence (AI) in retail.
Note that I’m not criticizing the survey itself. It’s a summary of collected thoughts and opinions of retail C-level executives (pretty evenly split among hardlines/softlines/grocery on the format dimension and large/medium/small on the size dimension), so by definition it can’t be “wrong”. I just found some of the responses to be revealing – and bewildering.
On the “makes sense” side of the ledger, the retail executives surveyed intend to significantly expand customer delivery options for purchases made online over the next 24 months, specifically:
- 79% plan to offer ship from store
- 80% plan to offer pick up in store
- 75% plan to offer delivery using third party services
This supports my (not particularly original) view that the physical store affords traditional brick and mortar retailers a competitive advantage over online retailers like Amazon, at least in the short to medium term.
However, the next part of the survey is where we start to see trouble (the title of this section is “Retailers Everywhere Aren’t Ready for the Anywhere Shelf”):
- 55% of retailers surveyed don’t have a single view of inventory across channels
- 78% of retailers surveyed don’t have a real-time view of inventory across channels
What’s worse is that there is no mention at all about inventory accuracy. I submit that the other 45% and 22% respectively may have inventory visibility capabilities, but are they certain that their store level inventory records are accurate? Do they actually measure store on hand accuracy (by item by location in units, which is what a customer sees) as a KPI?
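For the avoidance of doubt, here’s the kind of measurement I mean, sketched in a few lines of Python. The data is invented; the point is the unit of measure – a record is either exactly right for that item at that store, or it isn’t:

```python
# Store on-hand accuracy, measured the way a customer experiences it:
# by item, by location, in units. A record counts as accurate only when
# the system quantity exactly matches the physical count. All data here
# is made up for illustration.

counts = [
    # (item, store, system_on_hand, physical_count)
    ("SKU-1001", "Store-07", 4, 4),
    ("SKU-1002", "Store-07", 0, 2),  # system says sold out; the shelf has 2
    ("SKU-1003", "Store-07", 6, 5),  # one unit shrank, damaged or misplaced
]

accurate = sum(1 for _, _, system, actual in counts if system == actual)
print(f"On-hand record accuracy: {100 * accurate / len(counts):.0f}%")  # 33%
```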
The title of the next slide is “Customer Experience and Supply Chain Maturity Demands Edge Technologies”. Okay… Sure… I guess.
The slide after that concludes that retail C-suite executives believe that the top technologies “having the broadest business impact on productivity, operational efficiency and customer experience” are as follows:
- #1 – Artificial Intelligence/Machine Learning
- #2 – Connected Devices
- #3 – Voice Recognition
Towards the end, it was revealed that “The C-suite is planning a 5X increase in artificial intelligence adoption over the next 2 years”. And that 50% of those executives see AI as an emerging technology that will have a significant impact on “sharpening inventory levels” (whatever that actually means).
So just to recap:
- Over the next 2 years, retailers will be aggressively pursuing customer delivery options that place ever increasing importance on visibility and accuracy of store inventory.
- A majority of retailers haven’t even met the visibility criteria and it’s highly unlikely that the ones who have are meeting the accuracy criteria (the second part is my assumption and I welcome being proved wrong on that).
- Over the next 2 years, retailers intend to increase their investment in artificial intelligence technologies fivefold.
I’m reminded of the scene in Die Hard 2 (careful before you click – the language is not suitable for a work environment or if small children are nearby) where terrorists take over Dulles International Airport during a zero visibility snowstorm and crash a passenger jet simply by transmitting a false altitude reading to the cockpit of the plane.
Even in 1990, passenger aircraft were quite technologically advanced and loaded with systems that could meet the definition of “artificial intelligence”. What happens when one piece of critical data fed into the system is wrong? Catastrophe.
I need some help understanding the thought process here. How exactly will AI solve the inventory visibility/accuracy problem? Are we talking about every retailer having shelf scanning robots running around in every store 2 years from now? What does “sharpen inventory levels” mean and how is AI expected to achieve that (very nebulous sounding) goal?
I’m seriously asking.