Changing the game

In 1972, for my 10th birthday, my mom bought me a wooden chess set and a chess book to teach me the basics of the game.  Shortly after, I was hooked, and the timing was perfect: it coincided with Bobby Fischer’s ascendancy to chess immortality in September 1972, when he became the 11th World Champion.

As a chess aficionado, I was recently intrigued by a new and different chess book, Game Changer, by International Grandmaster Matthew Sadler and International Master Natasha Regan.

The book chronicles the evolution and rise of computer chess super-grandmaster AlphaZero – a completely new chess algorithm developed by British artificial intelligence (AI) company DeepMind.

Until the emergence of AlphaZero, the king of chess algorithms was Stockfish.  Stockfish was architected by feeding the engine the entire library of recorded grandmaster games, along with the entire library of chess openings, middlegame tactics and endgames.  It relied on this incredible database of chess knowledge and on its monstrous computational abilities.

And the approach worked.  Stockfish was the king of chess machines, and its official chess rating of around 3200 is higher than that of any human in history.  In short, a match between current World Champion Magnus Carlsen and Stockfish would see the machine win every time.

Enter AlphaZero.  What’s intriguing and instructive about AlphaZero is that the developers took a completely different approach to building its chess knowledge.  Their approach used machine learning.

Rather than trying to provide the engine with the sum total of chess knowledge, the developers gave it only the rules of the game.

AlphaZero was architected to learn from examples, rather than drawing on pre-specified human expert knowledge.  The basic approach is that the machine learning algorithm analyzes a position, assigns a probability to each possible move, and uses those probabilities to assess which move is strongest.
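To make that idea concrete, here’s a minimal sketch in Python (with made-up move scores and a plain softmax; an illustration of the idea, not DeepMind’s actual code) of turning a position evaluation into move probabilities and picking the strongest candidate:

```python
import math

def move_probabilities(move_scores):
    """Convert raw scores for each legal move into probabilities (softmax)."""
    max_score = max(move_scores.values())
    exps = {move: math.exp(score - max_score) for move, score in move_scores.items()}
    total = sum(exps.values())
    return {move: e / total for move, e in exps.items()}

# Hypothetical scores a trained network might assign to three legal moves
scores = {"e2e4": 1.8, "g1f3": 0.9, "h2h3": -1.2}
probs = move_probabilities(scores)
print(probs)
print("strongest candidate:", max(probs, key=probs.get))  # e2e4
```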

And where did it get examples to learn from?  By playing itself, repeatedly.  Over the course of nine hours, AlphaZero played 44 million games against itself, continuously learning and adjusting the parameters of its neural network along the way.

In 2017, AlphaZero played a 100-game match against Stockfish, and the match resulted in a comprehensive victory for AlphaZero.  Imagine: a chess algorithm built on a probabilistic machine learning approach taught itself how to play and then smashed the reigning algorithmic world champion!

What was even more impressive to the many interested grandmasters was the manner in which AlphaZero played.  It played like a human, like the great attacking players of all time: a more precise version of Tal, Kasparov, and Spassky, complete with pawn and piece sacrifices to gain the initiative.

The AlphaZero story is very instructive for us supply chain planners and retail Flowcasters in particular.

As loyal disciples know, retail Flowcasting requires the calculation of millions of item/store forecasts, a staggering number.  Not surprisingly, people cannot manage that many forecasts, and even attempting to manage by exception is proving to have its limits.

What’s emerging, and it’s consistent with the lesson of the AlphaZero story, is that algorithms (whether machine learning or a unified model approach) can shoulder the burden of grinding through and developing item/store-specific baseline forecasts of sales, with little to no human touch required.
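As a toy illustration of what “little to no human touch” means, here’s a minimal sketch (invented sales history and a deliberately simple exponential smoothing model, nothing like a production Flowcasting engine) of one routine grinding out a baseline for every item/store combination:

```python
def baseline_forecast(weekly_sales, alpha=0.2):
    """Simple exponential smoothing: one baseline rate per item/store."""
    level = weekly_sales[0]
    for actual in weekly_sales[1:]:
        level = alpha * actual + (1 - alpha) * level
    return round(level, 2)

# Invented history for a handful of item/store combinations
history = {
    ("item_123", "store_001"): [0, 1, 0, 0, 2, 0, 1, 0],
    ("item_123", "store_002"): [3, 4, 2, 5, 3, 4, 4, 3],
    ("item_456", "store_001"): [10, 12, 9, 11, 13, 10, 12, 11],
}

# The machine grinds through every combination; planners only review the output
baselines = {key: baseline_forecast(sales) for key, sales in history.items()}
print(baselines)
```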

If you think about it, it’s not as far-fetched as it sounds.  And it will facilitate a game-changing paradigm shift in demand planning.

First, it will relieve demand planners of the burden of learning and understanding the different algorithms and approaches for developing a reasonable baseline forecast.  Keep in mind that I said a reasonable forecast.  When we work with retailers, helping them design and implement Flowcasting, most folks are shocked that we don’t worship at the feet of forecast accuracy, at least not in the traditional sense.

In retail, with so many slow-selling items, chasing traditional forecast accuracy is a bit of a fool’s game.  What’s more important is to ensure the forecast is sensible and to assess it on some sort of sliding scale.  To wit, if an item at a store usually sells between 20 and 24 units a year, with a store-specific selling pattern, then a reasonable forecast and selling pattern would be in that range.
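In code, that sliding-scale sanity check might look something like the following sketch (the tolerance, function name and numbers are mine, purely to illustrate the idea):

```python
def forecast_is_reasonable(annual_forecast, history_low, history_high, tolerance=0.25):
    """Flag a forecast only if it falls well outside the item/store's usual selling range."""
    low = history_low * (1 - tolerance)
    high = history_high * (1 + tolerance)
    return low <= annual_forecast <= high

# An item/store that usually sells 20 to 24 units a year
print(forecast_is_reasonable(22, 20, 24))   # True: sensible, leave it alone
print(forecast_is_reasonable(45, 20, 24))   # False: worth a planner's attention
```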

Slow-selling items (indeed, perhaps all items) should be forecast almost like a probability…for example, you’re fairly confident that 2 units will sell this month, you’re just not sure when.  That’s why, counter-intuitively, daily re-planning is more important than forecast accuracy for sustaining exceptionally high levels of in-stock…whew, there, I said it!
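To see what forecasting “almost like a probability” could look like, here’s a quick sketch using a Poisson assumption (my choice of distribution for illustration, not a prescribed Flowcasting formula): an item selling roughly 24 units a year averages about 2 a month, and the math tells you how confident to be about any given month.

```python
import math

def prob_of_selling(k, monthly_rate):
    """Poisson probability of selling exactly k units in a month."""
    return math.exp(-monthly_rate) * monthly_rate ** k / math.factorial(k)

rate = 24 / 12   # roughly 24 units a year -> about 2 per month
for k in range(5):
    print(f"P(sell exactly {k} this month) = {prob_of_selling(k, rate):.2f}")

# Roughly a 27% chance of exactly 2 units; which day they sell on is anyone's
# guess, which is why daily re-planning beats chasing pinpoint accuracy.
```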

What an approach like this means is that planners will no longer be dilly-dallying around tuning models and learning the intricacies of various forecasting approaches.  Let the machine do it, and review and work with the output.

Of course, demand planners will sometimes need to add judgment to the forecast: situations where the future will be different, and where that information and its resulting impacts would be unknowable to the algorithm.  Situations where planners have unique market insights, be they national or local.

Second, and more importantly, it will allow demand planners to shift their role from analytic to strategic, spending considerably more time picking the “winners” and developing strategies and tactics to drive sales, customer loyalty and engagement.

In reality, spending more time shaping the demand, rather than forecasting it.

And that, in my opinion, will be a game-changing shift in thinking, working and performance.