**Guideline 3.2: Decompose by causal forces**

Causal forces that may affect a time-series can be classified as growing, decaying, supporting, opposing, regressing, and unknown (Armstrong and Collopy 1993). A time-series can be the product of opposing causal forces. To forecast such a situation, decompose the time-series into the components affected by those forces and extrapolate each component separately.

Consider, for example, the problem of forecasting highway deaths. The number of deaths tends to increase with the number of miles driven, but to decrease as the safety of vehicles and roads improves. Because of these conflicting forces, the direction of the trend in the fatality rate is uncertain. By decomposing the problem into miles-driven-per-year and deaths-per-mile-driven, the analyst can use knowledge about the individual trends to extrapolate each component. The forecast for the total number of deaths per year is calculated as the product of the two component forecasts.
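The decomposition can be sketched in a few lines of Python. The numbers and the naive linear-trend extrapolation below are invented purely for illustration; any evidence-based extrapolation method could be substituted for each component.

```python
def linear_trend_forecast(series, horizon):
    """Extrapolate the average per-period change (a naive linear trend)."""
    change = (series[-1] - series[0]) / (len(series) - 1)
    return series[-1] + change * horizon

# Component 1: miles driven per year (growing force), billions of miles.
miles = [2900, 2950, 3000, 3050]
# Component 2: deaths per billion miles driven (decaying force).
deaths_per_mile = [12.0, 11.5, 11.0, 10.5]

h = 2  # forecast two years ahead
miles_fc = linear_trend_forecast(miles, h)           # growing component
rate_fc = linear_trend_forecast(deaths_per_mile, h)  # decaying component

# Total deaths forecast is the product of the two component forecasts.
total_deaths_fc = miles_fc * rate_fc
print(round(total_deaths_fc, 1))  # → 29925.0
```

Extrapolating each component separately lets the analyst apply knowledge about each causal force, rather than extrapolating the ambiguous trend in the aggregate series.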

**3.3 Modify trends to incorporate more knowledge**

In situations involving high uncertainty, conservatism might call for a reduction in the magnitude of a trend to keep the forecast closer to the current situation. The process is commonly referred to as dampening. Forecasters should, however, also consult cumulative knowledge about the situation to identify when dampening would not be conservative. For example, if a long-term trend arises from well-supported and persistent causal forces, such as in Moore’s Law for computers, a more conservative approach might be to dampen toward the long-term trend.

*Guideline 3.3.1: Modify trends to incorporate more knowledge if the series is variable or unstable*

Damp initial estimates of the trend in a time-series, especially when the series is variable. The level of variability in a data-series can be assessed using statistical procedures, judgmentally, or both. Forecast accuracy is almost always improved by dampening the trend.
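A minimal sketch of trend dampening, in the spirit of the well-known damped-trend exponential smoothing approach: each additional step ahead multiplies the trend by a damping factor between 0 and 1, so the forecast flattens toward the current level as the horizon grows. The function and parameter values are illustrative, not taken from the source.

```python
def damped_trend_forecast(level, trend, phi, horizon):
    """Forecast `horizon` steps ahead, multiplying the per-period trend
    by phi (0 < phi <= 1) at each successive step."""
    return level + trend * sum(phi ** i for i in range(1, horizon + 1))

# phi = 1.0 extrapolates the trend fully; smaller phi is more conservative.
print(round(damped_trend_forecast(100.0, 5.0, 0.8, 3), 2))  # → 109.76
print(round(damped_trend_forecast(100.0, 5.0, 1.0, 3), 2))  # → 115.0
```

The more variable or unstable the series, the smaller the damping factor one might choose.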

*Guideline 3.3.2: Modify trends to incorporate more knowledge if the historical trend conflicts with causal forces*

Ask diverse experts (preferably three or more) whether the observed trend in a time-series conflicts with the causal forces acting on that time-series. This condition is called a contrary series. For example, while the historical trend in furniture sales may have been up, news of increased mortgage defaults and unemployment in the U.S. should have alerted forecasters that economic forces would be exerting downward pressure on sales. When a contrary series situation occurs, damp the trend heavily towards, or even to, a no-change forecast.

*Guideline 3.3.3: Modify trends to incorporate more knowledge if the forecast horizon is longer than the historical series*

Avoid making forecasts for periods longer into the future than the length of the historical time-series. If forecasts are nevertheless needed, (1) dampen the trend towards zero as the forecast horizon increases, and (2) average the trend with trends from analogous series.
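The two recommendations above can be sketched together. This hypothetical function pools the series' own trend with trends from analogous series, then damps the result toward zero once the horizon exceeds the length of the history; the specific damping schedule is an assumption for illustration.

```python
def long_horizon_trend(own_trend, analog_trends, horizon, history_length):
    """Average the series' own trend with trends from analogous series,
    then damp toward zero when the horizon exceeds the history length."""
    pooled = (own_trend + sum(analog_trends)) / (1 + len(analog_trends))
    damping = min(1.0, history_length / horizon)
    return pooled * damping

# Own trend +4 per period, analogues +2 and +3: forecasting 20 periods
# ahead from only 10 periods of history halves the pooled trend of 3.
print(long_horizon_trend(4.0, [2.0, 3.0], horizon=20, history_length=10))  # → 1.5
```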

*Guideline 3.3.4: Modify trends to incorporate more knowledge if the short- and long-term trend directions are inconsistent*

Damp the short-term trend towards the long-term trend as the forecast horizon lengthens if the two trends are inconsistent. Unless there have been important and well-established changes in causal forces, such as a new law that imposes tariffs on imports, the long-term trend represents more knowledge about the behaviour of the series.
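One simple way to implement this shift, sketched here with an invented linear weighting scheme, is to move the weight from the short-term trend to the long-term trend as the horizon approaches some maximum:

```python
def blended_trend(short_term, long_term, horizon, max_horizon):
    """Shift weight from the short-term trend to the long-term trend
    as the forecast horizon lengthens (linear schedule, for illustration)."""
    w = min(1.0, horizon / max_horizon)
    return (1 - w) * short_term + w * long_term

# Short-term trend -2, long-term trend +1: halfway to the maximum
# horizon, the forecast trend is the midpoint of the two.
print(blended_trend(-2.0, 1.0, horizon=5, max_horizon=10))  # → -0.5
```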

**3.4 Modify seasonal factors to reflect uncertainty**

For situations of high uncertainty about seasonal factors, modifying the factors can improve forecast accuracy. Conservatism suggests damping the seasonal factors or incorporating more information, e.g., from adjacent time periods or analogous data series.

*Guideline 3.4.1: Modify seasonal factors to reflect uncertainty if estimates vary substantially across years*

If estimates of the size of seasonal factors vary substantially from one year to the next, this suggests uncertainty. Variations might be due to shifting dates of major holidays, strikes, natural disasters, irregular marketing actions such as advertising or price reductions, and so on. To deal with such variations, damp seasonal factors or average them with those for time periods immediately before and after.
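Both remedies, averaging with adjacent periods and damping toward no seasonality, can be sketched for multiplicative seasonal factors. The quarterly factors and the damping weight below are invented for illustration.

```python
def stabilise_seasonal_factors(factors, damp=0.5):
    """Smooth each multiplicative seasonal factor with its neighbours
    (the periods immediately before and after), then damp toward 1.0
    (no seasonal effect). damp=0 removes seasonality; damp=1 keeps it."""
    n = len(factors)
    smoothed = [(factors[i - 1] + factors[i] + factors[(i + 1) % n]) / 3
                for i in range(n)]
    return [1.0 + damp * (s - 1.0) for s in smoothed]

quarterly = [1.30, 0.70, 1.10, 0.90]  # noisy estimated factors, invented
print([round(f, 3) for f in stabilise_seasonal_factors(quarterly)])
# → [0.983, 1.017, 0.95, 1.05]
```

Note that both operations preserve the average factor of 1.0, so the adjustment redistributes rather than inflates the forecast level.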

*Guideline 3.4.2: Modify seasonal factors to reflect uncertainty if few years of data are available*

When historical data is insufficient to confidently estimate seasonal factors, damp the estimates strongly, or avoid estimating them altogether. Alternatively, compensate for the lack of information by combining estimates with those from analogous data series.

*Guideline 3.4.3: Modify seasonal factors to reflect uncertainty if causal knowledge is weak*

Seasonal factors are likely to increase forecast error if prior knowledge about the causes of seasonality in the series to be forecast is poor. In that situation, damp the seasonal factors to reflect the extent to which causal knowledge is weak. If there is no established causal basis for seasonality (in contrast to, e.g., the demand for ice cream in summer), do not use seasonal factors.

**Guideline 3.5: Combine forecasts from alternative extrapolation methods and alternative data**

Combining forecasts from alternative extrapolations can improve forecast accuracy. Doing so incorporates more information into the final forecast. The alternative forecasts can be estimated from different extrapolation methods and different data sources; e.g., analogous time-series.

#### 4. Causal Methods

Causal methods can provide useful forecasts when good knowledge about causation is available. An example of a simple causal model might be that brand-X sales are forecast to increase by 5% when advertising spend is increased by 50%.

Perhaps the most common approach to forecasting with causal methods is the regression model with statistically estimated coefficients. The method is conservative in that estimates regress toward the mean value of the series in response to unattributed variability, such as measurement error in the causal variables. Regression analysis is insufficiently conservative, however, because it does not allow for uncertainty about causal effects arising from omitted variables, from the need to forecast the causal variables, from changing causal relationships, and from spurious causal inferences when variables in the model correlate with important excluded variables over the estimation period. In addition, using statistical significance tests and sophisticated statistical methods to select predictor variables from large databases is unreliable.

**Guideline 4.1: Use prior knowledge to specify variables, relationships, and effects**

Only include causal variables in a model if they are known to be related to the variable being forecast. Identify causal variables from well-established theory (e.g., price elasticities for normal goods), obvious relationships (e.g., rainfall and crop production), or experimental evidence.

For simple problems, one might consider, with considerable trepidation, statistical analysis of non-experimental data. Statistical analysis cannot, however, be used to discover causal relationships in complex situations. Statistical procedures find patterns in numbers, even random numbers. The chances that such patterns correspond well with causal relations are slight when the data are non-experimental and the situation is complex.

Prior research can provide forecasters with evidence to help them to estimate elasticities for the situation they are concerned with. Elasticities are the percentage change that occurs in the variable to be forecast in response to a 1% change in the causal variable. For forecasting sales, one can find income, price, and advertising elasticities for various product types in published meta-analyses. If little prior research exists, ask domain experts for their estimates.
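Applying elasticity estimates can be sketched as follows. The figures are invented, and summing the percentage effects across variables is a small-change approximation adopted here for simplicity.

```python
def elasticity_forecast(current_sales, changes):
    """Each causal variable's percentage change, multiplied by its
    elasticity, gives a percentage change in the forecast variable.
    `changes` is a list of (pct_change, elasticity) pairs."""
    total_pct_change = sum(pct * elasticity for pct, elasticity in changes)
    return current_sales * (1 + total_pct_change / 100)

# Invented figures: price up 10% with price elasticity -1.5,
# advertising up 20% with advertising elasticity +0.2.
print(round(elasticity_forecast(1000.0, [(10, -1.5), (20, 0.2)]), 1))  # → 890.0
```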

**Guideline 4.2: Modify effect estimates to reflect uncertainty**

Forecast accuracy suffers when the forecasting model includes spurious causal variables. When there is uncertainty about the relationship between a causal variable and the variable being forecast, damp the causal effect towards no effect. In general, damp more the greater the uncertainty.

Another strategy for addressing uncertainty over relationships is to adjust the weights of causal variables so that they are more equal with one another; that is, to shrink the variable coefficients towards equality, which we refer to as “equalising.” When uncertainty about relative effect sizes is high, consider the most extreme form of equalising: standardise the variables and weight them equally.
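Equalising can be sketched as shrinking estimated coefficients toward their common mean. The coefficients and shrink factor below are invented; in practice, the approach applies to coefficients of standardised variables.

```python
def equalise(coefficients, shrink):
    """Shrink estimated coefficients toward their common mean.
    shrink=0 leaves them unchanged; shrink=1 makes them fully equal
    (with standardised variables, the extreme form of equalising)."""
    mean = sum(coefficients) / len(coefficients)
    return [c + shrink * (mean - c) for c in coefficients]

print([round(c, 2) for c in equalise([0.9, 0.3, 0.6], shrink=0.5)])
# → [0.75, 0.45, 0.6]
```

The greater the uncertainty about relative effect sizes, the larger the shrink factor one might choose.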

**Guideline 4.3: Use all important variables**

Forecast accuracy can be improved by including all of the important causal variables in the forecasting model. Regression models are limited in this regard, as regression analysis can typically estimate the effects of no more than two or three causal variables properly when using non-experimental data. Moreover, the effect of a causal variable can only be estimated by regression analysis if the variable varies in the data available for estimation. For example, if the price of a product changes little or not at all in the historical data, regression cannot estimate a model that will accurately forecast the effect of a price increase.

Fortunately, there is an alternative to regression models for causal forecasting that allows forecasters to include all important knowledge about causal variables in a model. The method is called the index method (Graefe and Armstrong 2013). To construct an index model, use prior knowledge to identify all relevant variables and their effects on whatever is being forecast. Ideally, this knowledge would derive from experimental studies. In cases where experimental research is scarce, survey independent experts with diverse knowledge.
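An index model can be sketched as follows, in the spirit of Graefe and Armstrong (2013): code each relevant variable 1 if it favours an option and 0 otherwise, give all variables equal (unit) weights, and favour the option with the higher total score. The election-style variables here are invented for illustration.

```python
# Invented binary indicators for two hypothetical candidates:
# 1 = variable favours the candidate, 0 = it does not.
candidate_a = {"incumbent": 1, "economy_growing": 1, "party_united": 0}
candidate_b = {"incumbent": 0, "economy_growing": 0, "party_united": 1}

# Unit weights: the index score is simply the count of favourable variables.
score_a = sum(candidate_a.values())
score_b = sum(candidate_b.values())
print("A" if score_a > score_b else "B")  # → A
```

Because unit weights require no statistical estimation, the index method can accommodate many more variables than regression analysis, including variables that do not vary in the historical data.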

**Guideline 4.4: Combine forecasts from dissimilar models**

As with judgmental and extrapolation methods, forecast accuracy can be improved by combining forecasts from different causal models. Combining forecasts from models that incorporate different causal variables and that are estimated using different data sources is one way of dealing with the limitations of regression analysis. This technique is conservative in that it incorporates more information than a single model with few variables.

#### 5. Combine Forecasts

**Guideline 5: Combine forecasts from diverse evidence-based methods**

Combining forecasts from evidence-based methods is conservative in that more knowledge is used, and the effects of biases and mistakes, such as data errors, computational errors, and poor model specification, are likely to offset one another. The combined forecast can never be worse than the typical component. If the errors of the component forecasts bracket the true value, the combined forecast will always be more accurate than the typical forecast. In addition, combining forecasts reduces the likelihood of large errors, as the combined forecast will always have a lower error than the worst component. These benefits of combining are not intuitively obvious (experts and managers tend to believe they can pick the best forecast), so combining is seldom used. In fact, even if the manager knows the best method, the combined forecast is often more accurate than the forecast from the best method.
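The bracketing argument can be checked numerically; the values below are invented.

```python
# Two forecasts whose errors (-10 and +8) bracket the true value.
actual = 100.0
forecasts = [90.0, 108.0]
combined = sum(forecasts) / len(forecasts)

errors = [abs(f - actual) for f in forecasts]
combined_error = abs(combined - actual)
typical_error = sum(errors) / len(errors)

# With bracketing, the combined forecast beats the typical forecast,
# and it is never worse than the worst component.
print(combined_error, typical_error)  # → 1.0 9.0
```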

Equally weighting the component forecasts is conservative in the absence of strong evidence of large differences in out-of-sample forecast accuracy between methods. Equal weighting might involve averaging the forecasts within each of the judgmental, extrapolative, and causal classes of methods, then averaging the three class averages to calculate the combined forecast.
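The two-stage equal weighting just described can be sketched as follows; the component forecasts are invented.

```python
def combine_by_class(forecasts_by_class):
    """Average the forecasts within each class of methods, then average
    the class means, so no class dominates merely because it contributed
    more component forecasts."""
    class_means = [sum(fs) / len(fs) for fs in forecasts_by_class.values()]
    return sum(class_means) / len(class_means)

forecasts = {  # invented component forecasts
    "judgmental": [104.0, 96.0],
    "extrapolation": [110.0, 100.0, 105.0],
    "causal": [95.0],
}
print(combine_by_class(forecasts))  # → 100.0
```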

#### 6. Judgmental Adjustments

**Guideline 6: Avoid unstructured judgmental adjustments to forecasts**

Forecasters and managers are often tempted to make informal adjustments to forecasts from quantitative methods. Managers commonly adjust statistical forecasts, and most forecasting practitioners expect that judgmental adjustments reduce errors. Little evidence supports that belief. Rather, unstructured judgmental adjustments tend to reduce objectivity and introduce biases and random errors, and should be avoided.

Judgmental adjustments may improve accuracy when experts have domain knowledge about important influences not included in the forecasting models, such as special events and changes in causal forces. When that is the case, adjustments are conservative in that more knowledge and information is used in the forecasting process. Use structured procedures to elicit judgmental adjustments from experts who know the information used by the forecasting methods, but have not been shown the forecasts.

#### CONCLUSIONS

The Golden Rule of Forecasting is a unifying theory of how best to go about making accurate forecasts. The rule is universal: it applies to all forecasting problems. In other words, it applies to any situation in which a decision-maker would benefit from knowing what is likely to happen in the future. The rule is also easy to understand: it can be stated as “forecast conservatively, by being consistent with cumulative knowledge about the situation and about forecasting methods.” When analysts fail to follow the Golden Rule, there is little reason to expect that their forecasts will be of practical use to decision makers.

The Golden Rule checklist, available from GoldenRuleofForecasting.com, provides guidance that can help decision makers to quickly assess whether a forecast has been derived using procedures that are consistent with the Golden Rule. Analysts can use the checklist to help them to choose appropriate forecasting methods for their problem and to help them to implement the methods. Managers can insist that analysts follow the Golden Rule when they are preparing forecasts. Better decisions will follow.

The Golden Rule of Forecasting allows firms to improve the accuracy of their forecasts substantially. Firms that follow the Golden Rule have a competitive advantage over those that do not, as the Moneyball story vividly illustrates (Lewis 2004). When others follow suit, they will all provide better service to their customers at lower cost.