
The Innovator's Solution

     

When should I trust my computer’s demand forecast?

Posted by Jeff Bodenstab on Feb 28, 2017 9:02:00 PM


Editor’s Note: There will be no blog next week. The next blog will be posted on March 14th.

A few weeks ago we laid out a four-step process for minimizing forecast bias when making manual adjustments to the demand forecast. It was based on Nobel laureate Daniel Kahneman’s book Thinking, Fast and Slow. Today, again drawing on Kahneman’s insight, we ask: “Under what circumstances should you avoid manual intervention altogether?”

The four-step process we described previously does a good job of finding a happy medium between the computer and the human expert (what some call “manual overrides”). But in some cases the human input doesn’t add value and can actually reduce forecast accuracy; in statistical terms, this is negative Forecast Value Added (FVA). Regarding when to adjust the computer’s demand forecast with expert opinion or intuition, many people would guess:

  • algorithms perform better in highly structured environments with lots of clean data
  • humans perform better in messy, complex, real-world environments where data is often lacking, low-quality, or conflicting

Surprisingly, according to Kahneman it is exactly the opposite. In a chapter entitled “Expert Intuition: When Can We Trust It?” he says two conditions are required for high-value-added expert intuition to develop:

  1. an environment that is sufficiently regular to be predictable
  2. an opportunity to learn these regularities through prolonged practice

Regarding the first condition, Kahneman says, “Statistical algorithms greatly outdo humans in noisy environments for two reasons: they are more likely than human judges to detect weakly valid cues and much more likely to maintain a modest level of accuracy by using such cues consistently.” In other words, people often miss cues in the environment that would be useful to them, and even when they are aware of such cues, they don’t use them the same way every time. Because most real-world environments are messy and noisy, they do not favor human experts over algorithms.

Regarding the second condition, Kahneman says that “Whether professionals have a chance to develop intuitive expertise depends essentially on the quality and speed of feedback, as well as on sufficient opportunity to practice.” Fast, accurate feedback is not always available to a human expert.

The problem with demand forecasting and supply chain planning is that the conditions needed for good expert intuition often do not apply. Demand chains contain lots of messy data, many unknowns, and challenging complexities. And the latency in most supply chains means that results and consequences are not immediately visible; they don’t become evident until much later. It’s very difficult to determine how specific actions affected key performance indicators (KPIs), and even when the deviation from a KPI is clear, it’s often tough to uncover the root cause.

So when should you trust your computer’s demand forecast? Assuming you have a reasonably good working algorithm (some organizations don’t, but that’s a more fundamental problem), the question should really be turned around. It’s not about when you can trust the computer; it’s about when you can trust the expert. And Kahneman says that’s only when the two conditions above exist. Otherwise, leave the forecast alone. Over the long run, you are probably doing more harm than good.

A data-driven way to know when manual overrides are not adding value is to track Forecast Value Added. By examining the contribution of specific individuals, groups, or other data inputs, it’s possible to identify those that are improving the forecast and those that are not. The reason for a poor showing can be lack of skill, but it can also be institutional.
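
To make that concrete, here is a minimal sketch (my illustration, not from the original post) of one way FVA could be tracked per contributor: compare the error of the untouched statistical forecast with the error of the manually adjusted forecast, and also check signed bias to spot systematic over-forecasting. The column names, function names, and numbers are assumptions invented for the example.

  # Illustrative sketch only: measuring Forecast Value Added (FVA) per planner.
  # Assumed columns: actual, stat_forecast, adjusted_forecast, planner.
  import pandas as pd

  def mape(actual, forecast):
      # Mean absolute percentage error, skipping zero-demand periods
      mask = actual != 0
      return ((forecast[mask] - actual[mask]).abs() / actual[mask]).mean() * 100

  def fva_report(df):
      # FVA = statistical MAPE minus adjusted MAPE:
      # positive means the override helped, negative means it hurt.
      rows = []
      for planner, grp in df.groupby("planner"):
          stat_err = mape(grp["actual"], grp["stat_forecast"])
          adj_err = mape(grp["actual"], grp["adjusted_forecast"])
          bias = (grp["adjusted_forecast"] - grp["actual"]).mean()  # > 0 = over-forecasting
          rows.append({"planner": planner, "stat_mape": stat_err,
                       "adjusted_mape": adj_err, "fva": stat_err - adj_err,
                       "bias": bias})
      return pd.DataFrame(rows)

  # Made-up history: planner A's overrides hurt accuracy, planner B's help
  history = pd.DataFrame({
      "planner":           ["A", "A", "B", "B"],
      "actual":            [100, 120,  80,  90],
      "stat_forecast":     [ 95, 125,  85,  88],
      "adjusted_forecast": [115, 140,  82,  89],
  })
  print(fva_report(history))

Run over a rolling history, a consistently negative FVA (or a persistently large positive bias) for a given contributor is the signal that their overrides should be scaled back or removed.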

One of my favorite examples of institutional bias appears in Nate Silver’s book The Signal and the Noise. He shows that local television weather forecasters are biased toward forecasting rain and snow rather than sunshine. Why? Because if they predict rain and the sun shines, their audience is pleased by the unexpectedly good weather. But if they predict sun and it rains, the audience is unhappy and perhaps ill-prepared, and they blame the weatherperson.

These one-sided incentives create bias. In fact, Silver says that unaltered NOAA weather forecasts consistently outperform the local forecasters, whose forecast value added is therefore negative. It’s likely that similar incentives exist in your organization and should be detected and eradicated, or at least minimized, in the forecasting process. For example, the demand collaboration process could be influenced by factors that bias the forecast: salespeople could be overly optimistic because inflating the forecast (consciously or subconsciously) increases the chances of product availability.


A number of companies have created a forecasting process they can trust. One of my favorites is Lennox Residential, which has achieved 99.7 percent no-touch, computer-controlled automation in its planning and replenishment. At Lennox, 997 out of 1,000 planning decisions have been automated to the point where there is no manual intervention at all.

Another is Cipla Medpro Pharmaceutical, which is also approaching nearly autonomous supply chain planning. Its statistical forecast has consistently proven to be up to 20 percent more accurate than the forecast from Cipla’s own market intelligence system. They are now at the point where they have confidently switched off their manual overrides and put complete trust in the forecasts.



