For the sake of demonstration, let's consider our favorite *made-up* consumer packaged goods brand, called Eggsactly. Eggsactly specializes in the manufacture and marketing of egg and eggless products, which can be found in the refrigerated Milk & Eggs section of grocery stores, as well as the frozen section.

Eggsactly's inability to consistently forecast demand with high accuracy has significantly harmed its ability to balance supply and demand, and it has stunted the company's growth. For example, a recent unforeseen surge in demand quickly depleted Eggsactly's inventory, including safety stock. As a result, Eggsactly lost sales and its service levels plummeted, among other inefficiencies and detriments to the business. In response, Eggsactly reactively ramped up its manufacturing capacity and overproduced for the following months, when sales dropped again, leaving it with excess inventory it will now have to sell at a significant discount or eventually throw away.

Eggsactly has decided it wants to get back to basics and gain a deeper understanding of the essential demand planning metrics that could set it back on the right path. On the surface, these measurements and points of reference may seem simple and logical, but if you splice 'em, dice 'em, and mix 'em all together, especially using machine-learning models, you can build quite a sophisticated forecast that defies even 2020's expectations. Here are 11 of the most important demand planning metrics you'll need in 2020 and beyond.

1. Forecasted Vs. Actual

2. Bias

3. MAPE

4. SMAPE

5. WMAPE

6. MAE

7. MAD

8. MSE

9. RMSE

10. Perfect Order

11. Phase-out / Obsolescence

### 11 Most Important Demand Planning Metrics You Need in 2020

It's important to note that there are many metrics and various calculations that matter throughout the forecasting process. The following list isn't intended to be exhaustive; it's an introduction to what's possible.

**1. Forecasted vs. Actual**

Consider what a manual forecast, based on internal calculations, would look like for a given month for Eggsactly:

Here, dividing 85 by 100 gives us a forecast accuracy of 85%. Eggsactly predicted that 100 Vegan-om-lets would be sold, but in reality, only 85 were sold; the prediction was off by 15. These are the two core stats from which everything else blossoms.
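Those two core stats take only a few lines to compute; this minimal sketch just reuses the hypothetical Vegan-om-let numbers from the example:

```python
# Hypothetical Vegan-om-let figures from the example above.
forecast = 100   # units Eggsactly predicted it would sell
actual = 85      # units actually sold

# Forecast accuracy: actual divided by forecast (85 / 100 = 85%).
accuracy = actual / forecast * 100

# The raw miss between prediction and reality.
error = forecast - actual

print(f"accuracy: {accuracy:.0f}%, off by {error} units")
```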

And thus commences our descent into demand planning metrics.

**2. Bias**

Bias is the historical average forecast error for a specific SKU, along with the direction it skews. If it's above +4% error, the forecast is biased toward underforecasting; below -4%, and it's biased toward overforecasting. Error equals demand minus forecast, which you can see below:

Average the errors together and you get -12%, which points to a bias toward overforecasting for this particular SKU.
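As a sketch, here's how that averaging might look in Python. The monthly figures below are hypothetical stand-ins for the table, with error taken as demand minus forecast, so a negative average percentage error signals overforecasting:

```python
# Hypothetical monthly forecasts and demands for one SKU.
forecasts = [100, 120, 90, 110, 105, 95]
demands   = [ 85, 110, 80, 100,  90, 88]

# Percentage error per month: (demand - forecast) / forecast.
pct_errors = [(d - f) / f * 100 for f, d in zip(forecasts, demands)]

# Bias is the average of those signed percentage errors.
bias = sum(pct_errors) / len(pct_errors)

# Apply the +/-4% thresholds described above.
direction = ("underforecasting" if bias > 4
             else "overforecasting" if bias < -4
             else "roughly unbiased")
print(f"average error: {bias:.1f}% -> {direction}")
```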

Zoom out a bit and there’s forecaster bias, in which there’s consistent error for *all* SKUs. This comes from either unnecessary forecast safeguards or an erroneous historical point generated by a bad model that skews the whole forecast.

**3. MAPE: Mean Absolute Percentage Error**

Let's return to our original two stats: forecasted volume for a certain SKU and the actual demand. Divide each period's forecast error by the actual demand to get a percentage error, then average those percentages to get the MAPE; the closer the MAPE is to 0%, the better your forecasting was.

The MAPE has its issues, though. First off, it doesn't differentiate between SKUs with varying shelf lives and seasonalities; it weights them all equally. Here's what that could look like:

This demand planning metric doesn't take into account the variables that affect demand, such as the Vegan-om-let having quicker turnover because it can expire, compared to the frozen quiche, which can't. Plus, Eggsactly's Vegan-om-let mixes are an ingredient rather than a stand-alone food; there's no comparable sense of urgency to buy the quiches.

It's clear that outlier MAPEs distort the average MAPE, resulting in a muddled forecast. But worse than that? An overforecast of 50% averaged with an underforecast of 50% gives the illusion of a perfect 0% error...
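That canceling illusion is easy to demonstrate. In this minimal sketch, two signed percentage errors of equal size and opposite sign average out to zero:

```python
# Two hypothetical periods: one missed high by 50%, one missed low by 50%.
errors = [50.0, -50.0]   # signed percentage errors

# Averaging signed errors lets them cancel each other out.
signed_avg = sum(errors) / len(errors)
print(signed_avg)  # 0.0 -- looks like a perfect forecast, but isn't
```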

**4. SMAPE: Symmetrical Mean Absolute Percentage Error**

...Which is why SMAPE exists. For SMAPE, you take each error and use its absolute value, which stops the positive and negative error values from canceling each other out. Add 'em all up, divide, and out comes a more honest average error metric.
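Here's a sketch of SMAPE in its common formulation, where each period's absolute error is scaled by the average of actual and forecast before averaging; the figures are hypothetical:

```python
def smape(forecasts, actuals):
    """Symmetric MAPE, as a percentage, in its common formulation."""
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2)
             for f, a in zip(forecasts, actuals)]
    return sum(terms) / len(terms) * 100

# The +50% / -50% pair from above no longer nets out to zero:
# absolute values keep both misses in the picture.
print(smape([150, 50], [100, 100]))
```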

**5. WMAPE: Weighted Mean Absolute Percentage Error**

SMAPE still isn’t great for combos of slow- and fast-moving SKUs, which hinge on factors like expiration date, seasonality, and maybe even a global health crisis. Thanks to SMAPE, the positive and negative values no longer cancel out, but the granularity needed to get a truly accurate forecast doesn’t exist without contextualizing each SKU. In addition, there’s Pareto’s Principle, which states that on average, 20% of SKUs are responsible for 80% of sales. How do we take this into account?

This is where WMAPE comes in, a value-weighted demand planning metric that factors in both forecast errors and actual observations. Let’s say the eggs sell 10x more than the quiche. You can either weight them with revenue or weight them with units. Let’s take a look at what units would look like:

In this case, the error matches up more accurately to the actual quantities being sold, giving you the necessary information to make your next forecast take into account what you sell more of. Consequently, WMAPE is the industry standard.
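A minimal unit-weighted WMAPE sketch, assuming hypothetical figures where the eggs sell roughly ten times the quiche, might look like this:

```python
# Hypothetical per-SKU forecasts and actual unit sales.
skus = {
    "eggs":   {"forecast": 1100, "actual": 1000},
    "quiche": {"forecast": 150,  "actual": 100},
}

# Unit-weighted WMAPE: total absolute error over total actual units,
# so high-volume SKUs dominate the metric.
total_error  = sum(abs(v["forecast"] - v["actual"]) for v in skus.values())
total_actual = sum(v["actual"] for v in skus.values())
wmape = total_error / total_actual * 100

print(f"WMAPE: {wmape:.1f}%")
```

Weighting by revenue instead would just swap unit counts for dollar values in the same ratio.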

**6. MAE: Mean Absolute Error**

MAE is the mean of the absolute errors. It's not scaled to average demand, it targets the median demand, and it protects against outliers. To get a percentage, divide the MAE by average demand.
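The calculation is short; this sketch uses hypothetical figures:

```python
# Hypothetical forecasts and actuals for three periods.
forecasts = [100, 120, 90]
actuals   = [ 85, 110, 95]

# MAE: the mean of the absolute errors, in units.
mae = sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Dividing by average demand turns it into a percentage.
mae_pct = mae / (sum(actuals) / len(actuals)) * 100

print(f"MAE: {mae:.1f} units ({mae_pct:.1f}% of average demand)")
```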

**7. MAD: Mean Absolute Deviation**

Like MAE, MAD is a mean of absolute values and isn't scaled to average demand. However, instead of measuring the errors themselves like MAE does, MAD measures how much those errors deviate from the mean error. The smaller the MAD, the higher the forecast accuracy.
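Following the definition above (each error's deviation from the mean error, averaged), a sketch with hypothetical errors looks like this; note that some texts use MAD as a synonym for MAE, so definitions vary:

```python
# Hypothetical forecast errors for three periods.
errors = [15, 10, -5]

# The mean error across the periods.
mean_error = sum(errors) / len(errors)

# MAD here: average absolute deviation of each error from the mean error.
mad = sum(abs(e - mean_error) for e in errors) / len(errors)

print(f"MAD: {mad:.2f}")
```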

When comparing forecasts with one another, MAD values reveal which is the most accurate. This asset, coupled with MAPE’s shortcoming of taking errors at face value, is why academics prefer MAD. MAD isn’t without its own shortcomings, though: its key drawback is that it can only help you determine whether your demand forecast is good or bad if you’re intimately familiar with your business’s numbers. Once you start to scale, it’s game over.

**8. MSE: Mean Square Error**

The MSE is a way to find out how closely data points fit to your forecast's mean regression line. Somewhat like the SMAPE, the MSE eliminates negative values, here by squaring each forecast error so misses in either direction count. Just find the distance between each point and the regression line, square it, add up the squares, and take the mean (when the line's parameters were estimated from the same data, divide by the number of observations minus two instead of the raw count). The smaller the MSE, the better the forecast.
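As a minimal sketch with hypothetical figures, squaring and averaging the errors looks like this:

```python
# Hypothetical forecasts and actuals for three periods.
forecasts = [100, 120, 90]
actuals   = [ 85, 110, 95]

# Square each forecast error so negatives can't cancel, then average.
sq_errors = [(f - a) ** 2 for f, a in zip(forecasts, actuals)]
mse = sum(sq_errors) / len(sq_errors)

print(f"MSE: {mse:.2f}")
```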

**9. RMSE: Root Mean Square Error**

The RMSE is just the MSE with one added step. By taking the MSE and square-rooting it, you can see the average distance of the data points from the regression line. The pros of the RMSE are that it targets average demand, protects against bias, and helps set parameters for safety stock calculations.

Of course, this demand planning metric has its downfalls too. Like MAE, the RMSE isn't scaled to demand, but unlike MAE, it doesn't weight the errors equally; instead, it emphasizes the biggest errors and is highly sensitive to outliers. A single large miss can blow up an otherwise good RMSE.
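That sensitivity is easy to see in a sketch: compare a steady error profile with one containing a single large outlier (both sets of errors are hypothetical):

```python
import math

def rmse(errors):
    """Root mean square error: square, average, then square-root."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

steady  = [10, 10, 10, 10]   # consistent, moderate misses
outlier = [2, 2, 2, 40]      # mostly tiny misses, one big one

# The outlier profile's total miss is smaller, yet its RMSE is far worse,
# because squaring lets the single big error dominate.
print(rmse(steady), rmse(outlier))
```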

**10. Perfect Order Performance**

Perfect order performance is a composite of four rates: on-time delivery (the percentage of sales arriving on time), in-full delivery (the percentage delivered with the correct items in the correct quantities), damage-free delivery (the percentage arriving damage-free), and accurate documentation. Multiply the four values in decimal form by each other, then multiply by 100 to get a full percentage. Gaining an understanding of top performers, bottom performers, and the median is important and can help businesses identify and understand where and how failures are occurring.
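Multiplying the four rates together is a one-liner; the component rates below are hypothetical:

```python
# Hypothetical component rates, in decimal form.
on_time     = 0.95   # % of sales arriving on time
in_full     = 0.97   # % delivered with correct items and quantities
damage_free = 0.99   # % arriving damage-free
documented  = 0.98   # % with accurate documentation

# Perfect order performance: the product of the four rates, as a percentage.
perfect_order = on_time * in_full * damage_free * documented * 100

print(f"perfect order: {perfect_order:.1f}%")
```

Note how quickly the composite drops: four rates in the high 90s still land below 90% overall.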

**11. Phase out / Obsolescence**

Planned product sunsets, or phase-outs, should be monitored on a regular basis to mitigate risks like obsolescence.

### Accurate demand forecasts can make a lot of things go right.

At the end of the day, accurate forecasts can greatly help companies create an operating budget that best serves their specific needs. They can also help sell a reputation to clients, investors, and even potential hires. The benefits of accurate sales forecasts extend not just to sales and inventory teams, but also to manufacturing and distribution teams.

There are interpersonal benefits, too. When the accuracy of distribution predictions improves, so do customer service and satisfaction. A great forecast helps your manufacturing team know exactly how much working capital is needed, which gives them the background needed to negotiate better pricing for components of the production process, like raw materials and packaging. Your sales team can then put out the most strategic distribution schemes. All of this translates to sharper, more visible performance benchmarks for your team and more transparent communication with investors. The benefits are quite literally top-to-bottom, touching every team member along the way.