# Forecast Accuracy
You can't manage what you don't measure, and demand forecasts are no exception. Forecast accuracy quantifies how close your demand plan was to what actually happened — and more importantly, it tells you where to focus improvement efforts. 📊
## Why It Matters
Poor forecast accuracy cascades through the entire business:
- Inventory — Over-forecast and you build excess stock; under-forecast and you stock out.
- Service levels — Customers don't care about your forecast process, they care about availability.
- Financial planning — Revenue, margin, and cash flow projections all start with a demand number.
- Supply operations — Production scheduling, procurement, and logistics capacity all depend on a reliable demand signal.
Improving forecast accuracy by even a few percentage points often delivers outsized returns in working capital and service.
## Key Metrics
### MAPE (Mean Absolute Percentage Error)
The most widely used forecast accuracy metric. Expresses error as a percentage of actual demand.
$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{A_i - F_i}{A_i} \right| \times 100$$

where $A_i$ = actual demand and $F_i$ = forecast for period $i$. Simple and intuitive, but it has well-known flaws: it is undefined when actuals are zero, and it overweights low-volume items.
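A minimal sketch of the formula in Python (the demand numbers are hypothetical; skipping zero actuals is one common convention, not the only one):

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent.

    Periods with zero actual demand are skipped, since simple MAPE
    is undefined there; other conventions exist.
    """
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(abs((a - f) / a) for a, f in pairs) / len(pairs) * 100

actuals   = [100, 120, 80, 90]
forecasts = [110, 100, 85, 95]
print(f"MAPE: {mape(actuals, forecasts):.1f}%")  # → MAPE: 9.6%
```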
### Weighted MAPE (wMAPE)
Weights each item's error by its share of total volume, giving high-volume products proportionally more influence. This is often more useful than simple MAPE for business decisions.
$$\text{wMAPE} = \frac{\sum_{i=1}^{n} |A_i - F_i|}{\sum_{i=1}^{n} A_i} \times 100$$

Rule of thumb: if you can only track one accuracy metric, make it wMAPE. It avoids the distortions of simple MAPE and reflects the items that matter most to the business.
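The distortion is easy to demonstrate with hypothetical numbers: one tiny item that misses by 100% inflates simple MAPE but barely moves wMAPE:

```python
def mape(actuals, forecasts):
    """Simple MAPE, in percent; assumes no zero actuals."""
    n = len(actuals)
    return sum(abs((a - f) / a) for a, f in zip(actuals, forecasts)) / n * 100

def wmape(actuals, forecasts):
    """Weighted MAPE: total absolute error over total actual volume, in percent."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

# Four normal-volume items plus one 2-unit item that is 100% off.
actuals   = [100, 120, 80, 90, 2]
forecasts = [110, 100, 85, 95, 4]
print(f"MAPE:  {mape(actuals, forecasts):.1f}%")   # inflated by the tiny item
print(f"wMAPE: {wmape(actuals, forecasts):.1f}%")  # dominated by real volume
```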
### Bias
Measures whether your forecast systematically over- or under-predicts demand. A forecast can have a low MAPE yet still be heavily biased — and bias is often more actionable than error.
$$\text{Bias} = \frac{\sum_{i=1}^{n} (F_i - A_i)}{\sum_{i=1}^{n} A_i} \times 100$$

Positive bias = over-forecasting (builds inventory). Negative bias = under-forecasting (risks stockouts). The target is zero.
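A short sketch (hypothetical numbers) showing why bias and error are independent: one forecast is 10% high everywhere, the other has errors of the same size that cancel in sign:

```python
def bias(actuals, forecasts):
    """Signed forecast bias in percent: positive = over-forecasting."""
    return sum(f - a for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

actuals     = [100, 120, 80, 90]
always_high = [110, 132, 88, 99]   # every forecast 10% high: clearly biased
offsetting  = [ 90, 130, 72, 98]   # same-size errors, signs cancel: unbiased

print(f"{bias(actuals, always_high):+.1f}%")  # → +10.0%
print(f"{bias(actuals, offsetting):+.1f}%")   # → +0.0%, yet per-item error is real
```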
### Tracking Signal
Detects when a forecast has drifted systematically out of control. It's the ratio of cumulative error to mean absolute deviation (MAD), functioning like a quality control chart for your forecast.
$$\text{Tracking Signal} = \frac{\sum_{i=1}^{n} (A_i - F_i)}{\frac{1}{n} \sum_{i=1}^{n} |A_i - F_i|}$$

When the tracking signal exceeds ±4 to ±6 (thresholds vary by organization), the forecast model likely needs recalibration.
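A minimal sketch with invented numbers: a model that lags a demand step-up produces errors of one sign, pushing the signal past typical control limits:

```python
def tracking_signal(actuals, forecasts):
    """Cumulative error divided by MAD; large magnitude = systematic drift."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

# Demand stepped up; the model keeps under-forecasting by 10 units.
actuals   = [100, 105, 110, 115, 120, 125]
forecasts = [ 90,  95, 100, 105, 110, 115]
print(tracking_signal(actuals, forecasts))  # → 6.0, outside a ±4 control limit
```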
### Forecast Value Added (FVA)
Measures whether each step in the demand planning process — especially human overrides — actually improves accuracy. Compare accuracy at each stage against a naive or baseline forecast.
$$\text{FVA}_{\text{step}} = \text{Error}_{\text{before step}} - \text{Error}_{\text{after step}}$$

If FVA is negative, that process step is making the forecast worse. This happens more often than people expect, particularly with sales overrides that introduce optimism bias.
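An FVA analysis can be sketched as a walk through the process stages, measuring wMAPE at each one. The stage forecasts below are hypothetical; the consensus stage illustrates an optimistic override destroying value:

```python
def wmape(actuals, forecasts):
    """Weighted MAPE in percent, used here as the error measure for FVA."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

actuals = [100, 120, 80, 90]
stages = [
    ("naive",       [ 95, 100, 95, 95]),   # e.g. last period carried forward
    ("statistical", [105, 115, 85, 92]),   # model output
    ("consensus",   [115, 130, 95, 100]),  # after an optimistic sales override
]

prev_err = None
for name, fcst in stages:
    err = wmape(actuals, fcst)
    note = "" if prev_err is None else f"  FVA vs previous step: {prev_err - err:+.1f} pts"
    print(f"{name:12s} wMAPE {err:5.1f}%{note}")
    prev_err = err
```

The statistical step shows positive FVA over the naive baseline; the consensus override shows negative FVA, which is exactly the pattern the metric exists to expose.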
## Summary Table

| Metric | What it measures | Watch out for |
| --- | --- | --- |
| MAPE | Average percentage error per item | Undefined at zero actuals; overweights low-volume items |
| wMAPE | Volume-weighted percentage error | Can mask problems on low-volume items |
| Bias | Systematic over- or under-forecasting | Says nothing about error magnitude; pair with (w)MAPE |
| Tracking signal | Sustained drift out of control | Control limits (±4 to 6) vary by organization |
| FVA | Accuracy added or destroyed by each process step | Requires a naive baseline for comparison |
## Where to Measure
### Aggregation Level
Forecast accuracy varies dramatically depending on where you measure:
- SKU-location — Most granular, highest error. This is where execution happens.
- Product family — Errors partially cancel out. Useful for capacity planning.
- Category / Business unit — Smoothest signal. Relevant for financial planning.
Always report accuracy at the level where decisions are made. Measuring only at the aggregate level can hide serious problems at SKU level.
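The cancellation effect above can be sketched with two hypothetical SKUs whose errors offset: each is 20% off, yet the aggregate looks perfect.

```python
def wmape(actuals, forecasts):
    """Weighted MAPE in percent."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

# One SKU over-forecast, one under-forecast, by the same amount.
actuals   = {"SKU-A": [100, 100], "SKU-B": [100, 100]}
forecasts = {"SKU-A": [120, 120], "SKU-B": [ 80,  80]}

sku_level = {s: wmape(actuals[s], forecasts[s]) for s in actuals}
print(sku_level)  # each SKU is 20% off

# Aggregate first, then measure: the errors cancel completely.
agg_a = [sum(v) for v in zip(*actuals.values())]
agg_f = [sum(v) for v in zip(*forecasts.values())]
print(wmape(agg_a, agg_f))  # → 0.0
```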
### Time Horizon
- 1-month lag — How good was last month's forecast? Your most frequent check.
- 3-month lag — The horizon most relevant to S&OP/IBP decisions (lead time coverage).
- 6+ month lag — Strategic accuracy. Expect lower precision, focus on bias.
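Measuring lag accuracy requires archived forecast snapshots, not just the latest plan. A minimal sketch, assuming a hypothetical archive keyed by lag and period:

```python
# snapshots[lag][period] = forecast made `lag` months before `period`.
# All numbers are illustrative.
actuals = {"2024-04": 100, "2024-05": 120, "2024-06": 90}
snapshots = {
    1: {"2024-04": 105, "2024-05": 112, "2024-06": 95},
    3: {"2024-04": 120, "2024-05": 100, "2024-06": 80},
}

def wmape_pairs(pairs):
    """Weighted MAPE in percent over (actual, forecast) pairs."""
    return sum(abs(a - f) for a, f in pairs) / sum(a for a, _ in pairs) * 100

for lag, fcst in sorted(snapshots.items()):
    pairs = [(actuals[p], fcst[p]) for p in actuals]
    print(f"lag-{lag} wMAPE: {wmape_pairs(pairs):.1f}%")  # longer lag, higher error
```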
## Common Pitfalls ⚠️
- Measuring at the wrong level — Aggregate accuracy looks great while individual SKUs are wildly off. Always decompose.
- Ignoring bias — A 20% MAPE with zero bias is very different from 20% MAPE with 15% positive bias. Track both.
- Not tracking FVA — Without FVA, you can't tell if your consensus process adds value or just adds noise.
- Gaming the metric — If planners are incentivized on accuracy alone, they may sandbag forecasts or avoid adjustments. Pair accuracy with bias.
- Comparing across segments — A 10% MAPE on stable A-items is mediocre; 10% MAPE on erratic C-items is exceptional. Benchmark within segments.
## Response Playbook by Failure Type
Measure-and-report is not enough. Pair each failure pattern with a default response:

| Failure pattern | Typical signal | Default response |
| --- | --- | --- |
| Persistent over-forecasting | Positive bias across consecutive periods | Challenge optimistic overrides; check the FVA of the consensus step |
| Persistent under-forecasting | Negative bias across consecutive periods | Look for sandbagging incentives; review demand inputs |
| Systematic drift | Tracking signal beyond ±4 to 6 | Recalibrate or re-select the forecast model |
| Value-destroying process step | Negative FVA for that step | Rework or remove the step |
| High error without bias | Elevated (w)MAPE, bias near zero | Benchmark within the segment; treat as volatility, not process failure |

Use this playbook in the monthly demand review and log actions with owners and due dates.
## Accuracy Bands by Segment
Do not apply one accuracy target to all items. Set expected bands by segment behavior: tight bands for stable, high-volume A-items and much wider bands for erratic, low-volume C-items. Treat these as planning bands, not universal benchmarks, and tune them by lead time, service target, and market volatility.
## Review Cadence and Ownership
Use a fixed cadence so accuracy management becomes operational discipline:
- Weekly: exception signals for short-range execution (owner: demand planning team)
- Monthly: full KPI pack (wMAPE, bias, tracking signal, FVA) in demand review (owner: demand manager)
- Quarterly: target/band recalibration by segment and horizon (owner: demand + finance + supply leadership)
No metric should be reported without a named owner and an action rule when thresholds are missed.
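An owner-plus-action-rule pairing can be sketched as a simple exception check. The thresholds and owner names below are hypothetical placeholders, not recommended values:

```python
# Hypothetical bands per KPI: (min, max) in the metric's own units;
# None means that side is unbounded.
THRESHOLDS = {
    "wmape": (None, 25.0),
    "bias": (-5.0, 5.0),
    "tracking_signal": (-4.0, 4.0),
}
OWNERS = {
    "wmape": "demand manager",
    "bias": "demand manager",
    "tracking_signal": "demand planning team",
}

def exceptions(kpis):
    """Return (metric, value, owner) for every KPI outside its band."""
    out = []
    for name, value in kpis.items():
        lo, hi = THRESHOLDS[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            out.append((name, value, OWNERS[name]))
    return out

print(exceptions({"wmape": 31.2, "bias": 2.1, "tracking_signal": -4.8}))
```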
## Further Reading
- Demand Planning — The end-to-end process that forecast accuracy measures.
- Demand Sensing & Shaping — Short-term techniques that complement traditional accuracy measurement.