The Golden Rule of Forecasting: for better forecasts and better decisions

  • REPORT 69
  • J. Scott Armstrong, Kesten C. Green, Andreas Graefe
  • May 2015

Abstract

This report presents the Golden Rule of Forecasting. Firms that follow the Golden Rule gain a competitive advantage from more accurate forecasts, and can provide better service to their customers at lower cost. The short form of the Golden Rule is: be conservative. Conservative forecasting requires consistency with cumulative knowledge about the situation, and with findings from the decades of research on forecasting methods. The Rule applies to all forecasting problems. The Rule rejects methods that fail to make proper use of cumulative knowledge, such as the complex statistical techniques associated with the terms big data, analytics, data mining, stepwise regression, and neural networks. We provide an evidence-based checklist of twenty-eight Golden Rule guidelines. Analysts can use the checklist to help them to improve their forecasting, especially when the situation is uncertain and complex. Managers can use the checklist to identify dubious forecasts quickly and inexpensively, and to make better decisions.

ABOUT FORECASTING

Claims that forecasting is impossible are often made in popular management books. Such claims are, however, false.

The past century has seen substantial advances in forecasting procedures. Advances in forecasting are most clearly seen in the astonishing improvements in the accuracy of weather, medicine, sports, and election forecasts.

Practitioners in many fields have, however, failed to adopt evidence-based forecasting practices. Examples of poor forecasting practice are common in economic, population, public policy, demand, and business forecasting in general.

What is more, practitioners have been attracted to complex statistical methods for analysing big databases that conflict with evidence on forecasting. The methods include data mining, stepwise regression, neural networks, analytics, and variations of these provided under various names by consultancies. The methods are attractive to analysts for the very reason that they are at odds with evidence on what forecasting methods work best: they are easier to use because they do not require the analyst to know about the situation.

Decision-makers depend on forecasts to make plans and to choose between alternative courses of action, even if they choose not to think explicitly about the forecasts. For example, it would be a rare person who makes a formal forecast about the success of a proposed marriage, yet the decision to marry rests on an implicit one. Better forecasts lead to better decisions. To help practitioners make better forecasts, and to help decision-makers know the difference between good and bad forecasts, we present the Golden Rule of Forecasting.

THE GOLDEN RULE

The Golden Rule is a new unifying theory of forecasting that codifies the knowledge gained from a century of research on how to get the best forecasts in the circumstances. The short form of the Golden Rule is to be conservative. The long form is to be conservative by adhering to cumulative knowledge about the situation and about forecasting methods. The Golden Rule of Forecasting was proposed and tested by Armstrong, Green, and Graefe (2015).

With a little familiarity, the Golden Rule is simple to understand. Even the simplest rule is, however, easier to follow if you have some guidelines, and so we provide an evidence-based checklist. The checklist helps forecasters to examine whether they are following best practice, and managers to quickly check the worth of forecasts.

The Golden Rule applies to all forecasting problems. Ignoring the Golden Rule will harm forecast accuracy. Inaccurate forecasts are especially likely when (1) the situation is uncertain and complex, and (2) there is reason to expect bias in the forecasting process.

Forecasters’ biases—whether deliberate or otherwise—are often the source of Golden Rule violations. The biggest obstacle to greatly improving the accuracy of the forecasts that decision-makers rely upon is, however, ignorance of the cumulative knowledge from forecasting research. In recent times, the proliferation of “big data” and complex statistical procedures has led many forecasters away from cumulative knowledge, and hence to violate the Golden Rule.

The Golden Rule of Forecasting: BE CONSERVATIVE

AN EVIDENCE-BASED CHECKLIST

Checklists are of great value as tools to help decision-makers working in complex fields as diverse as aviation and medicine. Thankfully, practitioners in those fields recognise that unaided judgment based on experience is inadequate for analysing the multifarious aspects of the complex situations they face. The Golden Rule checklist performs a similar service for situations that require forecasts. The checklist makes scientific forecasting accessible to all: Analysts, clients, and lawyers can use the checklist to check for, and avoid, violations of the Golden Rule.

The checklist provides guidelines for the different stages of the forecasting process, namely (1) problem formulation, implementation of (2) judgmental, (3) extrapolative, and (4) causal methods, and (5) combining forecasts and (6) making judgmental adjustments.

The guidelines in the Golden Rule checklist follow logically from the Golden Rule. The guidelines are also consistent with the evidence from forecasting research. A review of conservative forecasting procedures identified 105 papers with comparative experimental evidence. Conservatism improved forecast accuracy in 102 of the papers. On average, violating a single guideline led to an increase in forecast error of 33% (see Table 1).

When we surveyed forecasting experts, we found that almost all would typically follow, or would consider following, the great majority of the guidelines.

Table 1: Evidence on accuracy of forecasts from conservative procedures by method

Method type            Total             Conservative better     Error increase vs.
                       (no. of papers)   or similar              conservative
                                         (no. of papers)         (per cent, (n))
Problem Formulation         25                 25                     45 (12)
Judgmental                  36                 34                     36 (16)
Extrapolative               17                 16                     25 (16)
Causal                      12                 12                     44 (11)
Combined                    15                 15                     18 (14)
All Method Types           105                102                     33 (69)
Weighted average*                                                     32

Details of the evidence behind the Golden Rule guidelines and the expert survey responses are available at GoldenRuleOfForecasting.com.

The Golden Rule checklist provides non-experts with a tool to identify dubious forecasts quickly and inexpensively. Managers who want accurate forecasts should require forecasters to use the checklist. Managers can also use the checklist themselves to evaluate the procedures that forecasters used. If guidelines are violated, managers should insist that forecasters correct violations and resubmit their forecasts. If the manager is unable to assess whether the guidelines have been followed, the forecast should be rejected on the basis of inadequate disclosure.

Using the checklist requires little training—intelligent people with no background in forecasting can use it to check whether appropriate forecasting procedures were used. A one-page copy of the Golden Rule checklist and an online computer-aided checklist are available at no cost from GoldenRuleOfForecasting.com.

The balance of this report describes the 28 individual Golden Rule checklist guidelines. They are organised under numbered headings that correspond to those in the checklist. Evidence on the individual guidelines is provided in Armstrong, Green, and Graefe’s (2015) article in the Journal of Business Research.

THE GOLDEN RULE CHECKLIST

1. Problem Formulation

Formulate the forecasting problem before considering what forecasting methods to use. To help formulate the problem, seek all relevant information and knowledge, avoid or find ways to counter bias, and provide full disclosure.

1.1 Use all important knowledge and information

Use all relevant, reliable, and important information to formulate the problem, and no more. Including unimportant and dubious variables, as some analysts are tempted to do when they have access to complex statistical techniques and “big data”, will reduce forecast accuracy. Knowing everything that is important about the situation may require considerable research effort.

Guideline 1.1.1: Use all important knowledge and information by selecting evidence-based methods validated for the situation

Use only procedures that have been empirically validated under conditions similar to those of the situation being forecast. There is a great deal of evidence on the accuracy of forecasts from alternative methods under different conditions. That evidence is summarised in Principles of Forecasting (Armstrong 2001), and is also freely available from ForecastingPrinciples.com.

Proposed forecasting methods must be validated against methods that evidence has shown to be valid; in other words, “is it really a better mousetrap?” Managers should ask about independent validation testing rather than assume that it was done. Many statistical forecasting procedures have been proposed without adequate validation studies, simply on the basis of experts’ opinions. Statisticians in particular have typically shown little interest in how well their proposed methods perform in forecasting the unknown, as opposed to how well they fit historical data.

Guideline 1.1.2: Use all important knowledge and information by decomposing to best use knowledge, information, and judgment

Forecasters can make better use of more knowledge and information by decomposing the forecasting problem. Decomposition involves forecasting different parts of the problem separately, and then combining the forecasts of the parts in order to calculate an aggregate forecast.

Additive decomposition involves, for example, forecasting different product, geographic or demographic segments separately and then adding the forecasts together. Multiplicative decomposition involves, for example, forecasting market size and market share, then multiplying the forecasts to calculate a company’s sales forecast.

Another advantage of decomposition is that it allows the forecaster to use the evidence-based forecasting methods that are most appropriate for different parts of the problem—e.g., the forecaster might use a causal method for the market size forecast, but extrapolate a market share forecast using data from analogous geographical regions.
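
To make the two forms of decomposition concrete, the short Python sketch below builds an aggregate sales forecast both ways; the segment names, market-size, and share figures are invented purely for illustration.

```python
# Hypothetical illustration of additive and multiplicative decomposition.

# Additive decomposition: forecast each segment separately, then sum.
segment_forecasts = {"retail": 120_000, "online": 45_000, "export": 15_000}  # units
additive_total = sum(segment_forecasts.values())

# Multiplicative decomposition: forecast market size and market share
# separately, then multiply to obtain the company sales forecast.
market_size_forecast = 2_500_000   # units for the whole category
market_share_forecast = 0.072      # company's expected share
multiplicative_total = market_size_forecast * market_share_forecast

print(f"Additive (sum of segments):    {additive_total:,.0f} units")
print(f"Multiplicative (size x share): {multiplicative_total:,.0f} units")
```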

1.2 Avoid bias

Biases can lead forecasters to depart from prior knowledge or to use methods that have not been validated. Biases may be unconscious, such as those arising through optimism, financial and other incentives, deference to authority, or as a consequence of confusing forecasting with planning. Bias may also be deliberate if the purpose of the forecast is to further an organisational or political objective.

Guideline 1.2.1: Avoid bias by concealing the purpose of the forecast

Intentional biases can be avoided by ensuring forecasters are unaware of the purpose of the forecast. To implement this guideline, give the forecasting problem to independent forecasters who are not privy to how the forecast will be used. This is critical when making forecasts that might be challenged by lawsuits as being biased in order to, for example, entice investors.

Guideline 1.2.2: Avoid bias by specifying multiple hypotheses

Obtaining experimental evidence on multiple reasonable hypotheses—such as alternative methods and models, alternative data, alternative causal theories, and alternative possible outcomes—is an ideal way to avoid bias. Properly done, evaluating multiple hypotheses should help to overcome even unconscious bias. The approach has a long tradition in science (Chamberlin 1890, 1965). Consider in particular using an appropriate no-change model as a benchmark hypothesis. The no-change model is often an appropriate conservative forecasting approach for complex and highly uncertain problems that is hard to beat when it comes to accuracy.
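
One practical way to use a no-change benchmark is to score any proposed model against it on held-out data and adopt the model only if it is clearly more accurate. The sketch below is a minimal illustration with invented numbers, not a prescribed procedure.

```python
# Illustrative comparison of a candidate model against a no-change benchmark.
# The series and the candidate forecasts are made-up numbers.

actuals   = [102, 98, 105, 110, 107, 111]   # realised values
candidate = [100, 101, 99, 112, 104, 115]   # forecasts from a proposed model
no_change = [100] + actuals[:-1]            # naive benchmark: last observed value

def mae(forecasts, outcomes):
    """Mean absolute error of forecasts against outcomes."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

print("Candidate model MAE:", round(mae(candidate, actuals), 2))
print("No-change MAE:      ", round(mae(no_change, actuals), 2))
# Only adopt the candidate if it beats the no-change benchmark out of sample.
```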

Guideline 1.2.3: Avoid bias by obtaining signed ethics statements before and after forecasting

To reduce deliberate bias, ask forecasters to sign ethics statements at the outset and again at the completion of a forecasting project. Ideally, these would state that the forecaster is familiar with and will follow evidence-based forecasting procedures, and would include declarations of any actual or potential conflicts of interest.

Guideline 1.3: Disclose data and procedures – Provide full disclosure for independent audits, replications and extensions

Forecasters should fully disclose the data and methods they used for forecasting, and describe how they were selected, to enable independent assessments and replications. Replications are vital for detecting mistakes and for developing cumulative knowledge. Even the possibility of an audit or replication is likely to encourage the forecaster to take more care to follow evidence-based procedures.

2. Judgmental Methods

Judgmental forecasts are often made when considering important decisions such as whether to launch a new product, enter a new market, hire a new manager, or move manufacturing operations. For situations where quantitative data is scarce, judgment may be the only option.

Guideline 2.1: Avoid unaided judgment

Unaided judgment does not use knowledge effectively due to many shortcomings including faulty memories, inadequate mental models, and unreliable mental processing. When experts use their unaided judgment, they tend to recall events that are recent, extreme, and vivid and those events tend to dominate their forecasts. Unaided judges tend to perceive patterns in the past and predict their persistence, without prior reasons for the patterns. Even forecasting experts are prone to departing from conservatism in that way when they use their unaided judgement.

Guideline 2.2: Use alternative wording and pre-test questions

The way a question is framed can have a large effect on the response. To reduce response errors, pose questions to elicit forecasts from expert judges in multiple ways, pre-test the questions to ensure they are understood as intended, and combine responses from the alternative questions.

Guideline 2.3: Ask judges to write reasons against the forecast

Asking experts to write reasons why their forecast may be wrong can improve forecast accuracy. The approach encourages them to consider more information and contributes to full disclosure of the information they have used and their reasoning. It also helps to reduce overconfidence.

Guideline 2.4: Use judgmental bootstrapping (to model expert judgment)

People are inconsistent in applying their knowledge. They often provide different answers to the same questions when asked on different occasions. Inconsistencies can be the result of information overload, boredom, fatigue, distraction, or forgetfulness. Judgmental bootstrapping is a technique that protects against those problems by applying an expert’s implicit rules in a consistent manner.

Judgmental bootstrapping involves developing a quantitative model to simulate how an expert would make a forecast. To develop a bootstrapping model, ask an expert to make forecasts for artificial cases in which the values of causal factors vary independently. Then estimate a model by regressing the expert’s forecasts against the values of the causal variables (Armstrong 2001a). A key condition is that variables with causal effects in the model that are opposite to what is expected from experimental evidence or logic should be removed from the model.
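
The sketch below illustrates the estimation step under simplifying assumptions: a handful of hypothetical artificial cases, a single expert, and an ordinary least-squares fit via numpy. It is an illustration of the idea, not a complete bootstrapping protocol.

```python
import numpy as np

# Judgmental bootstrapping sketch: regress an expert's forecasts for
# artificial cases on the causal variables that describe those cases.
# All numbers are hypothetical.

# Each row is an artificial case: [price index, advertising index, distribution index]
cases = np.array([
    [1.0, 0.5, 0.8],
    [1.2, 0.5, 0.8],
    [1.0, 1.0, 0.8],
    [1.0, 0.5, 1.0],
    [1.2, 1.0, 1.0],
    [0.8, 0.8, 0.9],
])
expert_forecasts = np.array([100, 92, 108, 104, 101, 107])  # expert's sales forecasts

# Fit a linear model (with intercept) to the expert's judgments.
X = np.column_stack([np.ones(len(cases)), cases])
coefs, *_ = np.linalg.lstsq(X, expert_forecasts, rcond=None)

# The fitted model now applies the expert's implicit rules consistently.
new_case = np.array([1.1, 0.7, 0.9])
bootstrap_forecast = coefs[0] + coefs[1:] @ new_case
print("Bootstrapping model forecast:", round(float(bootstrap_forecast), 1))
# Per the guideline, drop any variable whose estimated sign contradicts
# prior experimental evidence or logic before using the model.
```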

Guideline 2.5: Use structured analogies

A situation of interest is likely to turn out like analogous situations. Evidence on behaviour from analogous situations can be used to increase the overall knowledge applied to the problem. To forecast using structured analogies, ask five to 20 independent experts to identify analogous situations, describe similarities and differences, rate each analogy’s similarity to the target, and to report the outcome of each. The modal (most common) outcome from among the experts’ top-rated analogies has been used with notable success as the forecast in validation studies to date (Green and Armstrong 2007).
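
A minimal sketch of the aggregation step is shown below; the experts, analogies, similarity ratings, and outcomes are all hypothetical.

```python
from collections import Counter

# Structured-analogies sketch with made-up expert responses.
# Each expert nominates analogies, rates similarity (0-10), and reports the
# outcome of each analogy. Keep each expert's top-rated analogy (or analogies)
# and take the modal outcome as the forecast.

expert_analogies = [
    # (similarity rating, reported outcome of the analogous situation)
    [(8, "entrant gains <5% share"), (5, "entrant gains 15% share")],   # expert 1
    [(9, "entrant gains <5% share")],                                   # expert 2
    [(7, "price war, entrant exits"), (6, "entrant gains <5% share")],  # expert 3
    [(8, "entrant gains <5% share"), (8, "price war, entrant exits")],  # expert 4
]

top_rated_outcomes = []
for analogies in expert_analogies:
    best_rating = max(rating for rating, _ in analogies)
    top_rated_outcomes += [outcome for rating, outcome in analogies if rating == best_rating]

forecast = Counter(top_rated_outcomes).most_common(1)[0][0]
print("Modal outcome of top-rated analogies:", forecast)
```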

Guideline 2.6: Combine independent forecasts from judges

Combine independent forecasts from judges using pre-specified structured methods. When predicting the behaviour of others, use experts with different information, theories, and biases; when forecasting people’s own behaviour, ask a representative sample of the population of interest to predict it (e.g. through intentions surveys).

Avoid traditional group meetings as a procedure for combining forecasts. The risk of bias is high in face-to-face meetings because people can be reluctant to share their opinions in order to avoid conflict or ridicule. Instead, combine the forecasts of judges using established and validated methods. A key method for combining expert judgments is the Delphi technique: a multi-round survey that elicits independent and anonymous forecasts and the reasons for them (Rowe and Wright 2001). Freeware for conducting Delphi surveys is available from http://www.forecastingprinciples.com/index.php/software.
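
As a simple illustration of pre-specified combining rules (here an unweighted mean and a median, applied to invented expert forecasts):

```python
import statistics

# Combining independent expert forecasts with simple, pre-specified rules.
# The forecasts below are hypothetical next-year sales estimates (units)
# from experts who did not confer with one another.
expert_forecasts = [118_000, 125_000, 140_000, 122_000, 131_000]

mean_combined   = statistics.mean(expert_forecasts)
median_combined = statistics.median(expert_forecasts)  # less sensitive to one extreme judge

print(f"Mean of experts:   {mean_combined:,.0f}")
print(f"Median of experts: {median_combined:,.0f}")
```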

3. Extrapolation Methods

Consider extrapolation when time-series data (such as monthly sales of toothpaste) or cross-sectional data (such as changes in sales of toothpaste following promotions in some supermarkets) are available. Forecasting by extrapolation assumes that patterns in the data (such as trends, seasonality, and behavioural responses) will occur in the future and elsewhere. Extrapolation for forecasting is in part conservative because it is based on data about past behaviour. However, it ceases to be conservative when the extrapolation is at odds with cumulative knowledge about the situation.

Guideline 3.1: Use the longest time-series of valid and relevant data

Conservative forecasting requires knowing the current situation and the history of the situation. To reduce the risk of bias, use all relevant data. When forecasting time-series, use the longest obtainable data-series. Failing to do so opens the possibility that by choosing the starting point or the data set, the forecaster can influence the resulting forecast. For example, energy prices can go up and down quite dramatically and for extended periods, yet the long-term trend in real prices has been for prices to fall, for good economic reasons. The belief that things are different now—that recent trends are the “new normal”—has led to disastrous forecasts by governments, businesses, and investors.

Guideline 3.2: Decompose by causal forces

Causal forces that may affect a time-series can be classified as growing, decaying, supporting, opposing, regressing, and unknown (Armstrong and Collopy 1993). A time-series can be the product of opposing causal forces. To forecast such a situation, decompose the time-series into the components affected by those forces and extrapolate each component separately.

Consider, for example, the problem of forecasting highway deaths. The number of deaths tends to increase with the number of miles driven, but decrease as the safety of vehicles and roads improves. Because of the conflicting forces, the direction of the trend in the fatality rate is uncertain. By decomposing the problem into miles-driven-per-year and deaths-per-mile-driven, the analyst can use knowledge about the individual trends to extrapolate each component. The forecast for the total number of deaths per year is calculated as the product of the two component forecasts.
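
A worked version of the highway-deaths example follows, with purely illustrative figures.

```python
# Decomposition by causal forces, using the highway-deaths example.
# The figures below are hypothetical and purely for illustration.

# Component 1: miles driven per year tends to grow with population and income.
miles_driven_forecast = 3.3e12        # vehicle-miles forecast for the year

# Component 2: deaths per mile driven tends to decay as vehicles and roads improve.
deaths_per_mile_forecast = 1.1e-8     # fatalities per vehicle-mile

# The aggregate forecast is the product of the two component forecasts.
deaths_forecast = miles_driven_forecast * deaths_per_mile_forecast
print(f"Forecast highway deaths: {deaths_forecast:,.0f}")
```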

3.3 Modify trends to incorporate more knowledge

In situations involving high uncertainty, conservatism might call for a reduction in the magnitude of a trend to keep the forecast closer to the current situation. The process is commonly referred to as dampening. Forecasters should, however, also consult cumulative knowledge about the situation to identify when dampening would not be conservative. For example, if a long-term trend arises from well-supported and persistent causal forces, such as in Moore’s Law for computers, a more conservative approach might be to dampen toward the long-term trend.

Guideline 3.3.1: Modify trends to incorporate more knowledge if the series is variable or unstable

Damp initial estimates of the trend in a time-series, especially when the series is variable. The level of variability in a data-series can be assessed using statistical procedures or judgmentally, or both. Forecast accuracy is almost always improved by dampening the trend.
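
The sketch below shows one common way to damp an estimated trend; the damping factor is an assumption chosen for illustration, not a recommended value. The same mechanism also damps the trend more as the forecast horizon lengthens, which is the idea behind Guideline 3.3.3.

```python
# Damping a trend estimate for an extrapolative forecast.
# All numbers, including the damping factor, are illustrative assumptions.

current_level   = 1_000.0   # latest observed value of the series
estimated_trend = 50.0      # estimated change per period
damping_factor  = 0.8       # 1.0 = no damping; lower values damp more

def damped_trend_forecast(level, trend, phi, horizon):
    """Forecast 'horizon' periods ahead, damping the trend more at longer horizons."""
    # Cumulative damped trend: phi + phi^2 + ... + phi^horizon
    cumulative = sum(phi ** h for h in range(1, horizon + 1))
    return level + trend * cumulative

for h in (1, 4, 12):
    print(f"h={h:2d}: {damped_trend_forecast(current_level, estimated_trend, damping_factor, h):,.0f}")
```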

Guideline 3.3.2: Modify trends to incorporate more knowledge if the historical trend conflicts with causal forces

Ask diverse experts (preferably three or more) whether the observed trend in a time-series conflicts with the causal forces acting on that time-series. The condition is called contrary series. For example, while the historical trend in furniture sales may have been up, news of increased mortgage defaults and unemployment in the U.S. should have alerted forecasters that economic forces would be exerting downward pressure on sales. When a contrary series situation occurs, damp the trend heavily towards, or even to, a no-change forecast.

Guideline 3.3.3: Modify trends to incorporate more knowledge if the forecast horizon is longer than the historical series

Avoid making forecasts for periods longer into the future than the length of the historical time-series. If forecasts are nevertheless needed, (1) dampen the trend towards zero as the forecast horizon increases, and (2) average the trend with trends from analogous series.

Guideline 3.3.4: Modify trends to incorporate more knowledge if the short- and long-term trend directions are inconsistent

Damp the short-term trend towards the long-term trend as the forecast horizon lengthens if the trends are inconsistent. Unless there have been important and well-established changes in causal forces —such as a new law that imposes tariffs on imports—the long-term trend represents more knowledge about the behaviour of the series.

3.4 Modify seasonal factors to reflect uncertainty

For situations of high uncertainty about seasonal factors, modifying the factors can improve forecast accuracy. Conservatism suggests damping the seasonal factors, or incorporating more information; e.g. from adjacent time periods or analogous data series.

Guideline 3.4.1: Modify seasonal factors to reflect uncertainty if estimates vary substantially across years

If estimates of the size of seasonal factors vary substantially from one year to the next, this suggests uncertainty. Variations might be due to shifting dates of major holidays, strikes, natural disasters, irregular marketing actions such as advertising or price reductions, and so on. To deal with such variations, damp seasonal factors or average them with those for time periods immediately before and after.
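
A minimal sketch of damping seasonal factors toward 1.0 (no seasonal effect) follows; the factors and the damping weight are illustrative assumptions, and the same shrinking logic applies to Guidelines 3.4.2 and 3.4.3.

```python
# Damping seasonal factors toward 1.0 (no seasonality) to reflect uncertainty.
# The factors and the damping weight are illustrative assumptions.

seasonal_factors = {"Q1": 0.70, "Q2": 0.95, "Q3": 1.40, "Q4": 0.95}
damping_weight = 0.5   # 0 = ignore seasonality entirely, 1 = use factors as estimated

damped = {q: 1.0 + damping_weight * (f - 1.0) for q, f in seasonal_factors.items()}
print(damped)   # e.g. Q3 shrinks from 1.40 toward 1.0 -> 1.20
```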

Guideline 3.4.2: Modify seasonal factors to reflect uncertainty if few years of data are available

When historical data is insufficient to confidently estimate seasonal factors, damp the estimates strongly, or avoid estimating them altogether. Alternatively, compensate for the lack of information by combining estimates with those from analogous data series.

Guideline 3.4.3: Modify seasonal factors to reflect uncertainty if causal knowledge is weak

Seasonal factors are likely to increase forecast error if prior knowledge on the causes of seasonality in the series to be forecast is poor. In that situation, damp seasonal factors to reflect the extent to which causal knowledge is weak. If there is no established causal basis for seasonality—as there is, for example, for ice cream demand in summer—do not use seasonal factors.

Guideline 3.5: Combine forecasts from alternative extrapolation methods and alternative data

Combining forecasts from alternative extrapolations can improve forecast accuracy. Doing so incorporates more information into the final forecast. The alternative forecasts can be estimated from different extrapolation methods and different data sources; e.g., analogous time-series.

4. Causal Methods

Causal methods can provide useful forecasts when good knowledge about causation is available. An example of a simple causal model might be that brand-X sales are forecast to increase by 5% when advertising spend is increased by 50%.

Perhaps the most common approach to forecasting using causal methods is regression models with statistically estimated coefficients. The method is conservative in that estimates regress toward the mean value of the series in response to unattributed variability, such as from measurement error in the causal variables. Regression analysis is insufficiently conservative, however, because it does not allow for uncertainty regarding causal effects that arises from omitted variables, predicting the causal variables, changing causal relationships, and inferred causality if variables in the model correlate with important excluded variables over the estimation period. In addition, using statistical significance tests and sophisticated statistical methods to help select predictor variables from large databases is unreliable.

Guideline 4.1: Use prior knowledge to specify variables, relationships, and effects

Only include causal variables in a model if they are known to be related to the variable being forecast. Identify causal variables from well-established theory (e.g., price elasticities for normal goods), obvious relationships (e.g., rainfall and crop production), or experimental evidence.

For simple problems, one might consider, with considerable trepidation, statistical analysis of non-experimental data. Statistical analysis cannot, however, be used to discover causal relationships in complex situations. Statistical procedures find patterns in numbers, even random numbers. The chances that such patterns correspond well with causal relations are slight when the data are non-experimental and the situation is complex.

Prior research can provide forecasters with evidence to help them to estimate elasticities for the situation they are concerned with. Elasticities are the percentage change that occurs in the variable to be forecast in response to a 1% change in the causal variable. For forecasting sales, one can find income, price, and advertising elasticities for various product types in published meta-analyses. If little prior research exists, ask domain experts for their estimates.
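
A small worked example of applying an elasticity is shown below; the elasticity value, baseline sales, and price change are assumptions for illustration only.

```python
# Applying a price elasticity taken from prior research (value assumed here).
# Elasticity: % change in the forecast variable per 1% change in the causal variable.

price_elasticity = -1.5        # assumed, e.g. from published meta-analyses for the category
baseline_sales   = 200_000     # current annual unit sales
price_change_pct = 10.0        # planned price increase, in per cent

sales_change_pct = price_elasticity * price_change_pct            # -15%
sales_forecast = baseline_sales * (1 + sales_change_pct / 100.0)  # 170,000 units
print(f"Forecast sales after the price change: {sales_forecast:,.0f} units")
```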

Guideline 4.2: Modify effect estimates to reflect uncertainty

Forecast accuracy suffers when the forecasting model includes spurious causal variables. When there is uncertainty about the relationship between a causal variable and the variable being forecast, damp the causal effect towards no effect. In general, damp more the greater the uncertainty.

Another strategy for addressing uncertainty over relationships is to adjust the weights of causal variables so that they are more equal with one another. In other words, adjust the variable coefficients towards equality, an approach we refer to as “equalising”. When uncertainty about relative effect sizes is high, consider the most extreme form of equalising: standardise the variables, and weight them equally.
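
The sketch below illustrates that extreme form of equalising: standardise each causal variable and weight the standardised values equally, with signs taken from prior causal knowledge. The variable names, data, and signs are hypothetical.

```python
import statistics

# "Equalising" sketch: standardise the causal variables and weight them equally.
# Data, variable names, and sign assumptions are all illustrative.

history = {
    "relative_price":       [1.00, 1.05, 0.95, 1.10, 0.98],
    "advertising_share":    [0.20, 0.25, 0.18, 0.30, 0.22],
    "distribution_breadth": [0.60, 0.65, 0.55, 0.70, 0.62],
}
current = {"relative_price": 0.95, "advertising_share": 0.28, "distribution_breadth": 0.68}
expected_sign = {"relative_price": -1, "advertising_share": +1, "distribution_breadth": +1}

def z_score(value, series):
    return (value - statistics.mean(series)) / statistics.stdev(series)

# Equal weights on standardised variables, signed by prior causal knowledge.
index = sum(expected_sign[v] * z_score(current[v], history[v]) for v in history) / len(history)
print("Equal-weights causal index:", round(index, 2))
# A positive index points to above-average sales; it can be related to past
# outcomes (e.g. by a simple regression on the index) to produce a forecast.
```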

Guideline 4.3: Use all important variables

Forecast accuracy can be improved by including all of the important causal variables in the forecasting model. Regression models are limited in this regard, as regression analysis can typically estimate the effects of no more than two or three causal variables properly when using non-experimental data. Moreover, the effect of a causal variable can only be estimated by regression analysis if the variable varies in the data available for estimation. For example, if the price of a product changes little or not at all in the historical data, regression cannot estimate a model that will accurately forecast the effect of a price increase.

Fortunately, there is an alternative to regression models for causal forecasting that allows forecasters to include in a model all knowledge about causal variables that is important. The method is called the index method (Graefe and Armstrong 2013). To construct an index model, use prior knowledge to identify all relevant variables and their effects on whatever is being forecast. Ideally, this knowledge would derive from experimental studies. In cases where experimental research is scarce, survey independent experts with diverse knowledge.
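
A minimal sketch of an index model with unit weights follows; the variables and their favourable/unfavourable scores are hypothetical, and in practice the variable list and directions of effect should come from experimental evidence or expert surveys as described above.

```python
# Index-method sketch: score each causal variable as favourable (1) or not (0)
# and sum the scores. The variables and scores below are hypothetical.

variables = {
    "price below main competitor":        1,
    "distinctive packaging":              1,
    "distribution in all major chains":   0,
    "advertising share above fair share": 1,
    "positive expert product reviews":    0,
    "loyalty to incumbent is weak":       1,
}

index_score = sum(variables.values())
print(f"Index score: {index_score} of {len(variables)}")
# The option with the highest index score is forecast to do best; scores can
# also be related to past outcomes to produce a quantitative forecast.
```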

Guideline 4.4: Combine forecasts from dissimilar models

As with judgmental and extrapolation methods, forecast accuracy can be improved by combining forecasts from different causal models. Combining forecasts from models that incorporate different causal variables and that are estimated using different data sources is one way of dealing with the limitations of regression analysis. This technique is conservative in that it incorporates more information than a single model with few variables.

5. Combine Forecasts

Guideline 5: Combine forecasts from diverse evidence-based methods

Combining forecasts from evidence-based methods is conservative in that more knowledge is used, and the effects of biases and mistakes such as data errors, computational errors, and poor model specification are likely to offset one another. The combined forecast can never be worse than the typical component. If the errors bracket the true value, the combined forecast will always be more accurate than the typical forecast. In addition, combining forecasts reduces the likelihood of large errors, as the combined forecast will always have a lower error than the worst component. These benefits of combining are not intuitively obvious—experts and managers believe they can pick the best forecast—so combining is seldom used. In fact, even if the manager knows the best method, the combined forecast is often more accurate than the forecast from the best method.

Equally weighting component forecasts is conservative in the absence of strong evidence of large differences in out-of-sample forecast accuracy from different methods. Equal weighting might involve averaging the forecasts from each of the judgmental, extrapolative, and causal methods, then averaging the three averages to calculate a combined forecast.
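
A minimal sketch of the equal-weighting scheme just described, with hypothetical component forecasts:

```python
import statistics

# Equal-weights combination across method types, per Guideline 5.
# The component forecasts are hypothetical annual sales forecasts (units).

forecasts_by_method = {
    "judgmental":    [118_000, 125_000, 131_000],   # e.g. Delphi panel, intentions survey
    "extrapolative": [122_000, 128_000],            # e.g. damped trend, analogous series
    "causal":        [135_000],                     # e.g. index model
}

# First average within each method type, then average the type averages.
type_averages = [statistics.mean(f) for f in forecasts_by_method.values()]
combined_forecast = statistics.mean(type_averages)
print(f"Combined forecast: {combined_forecast:,.0f} units")
```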

6. Judgmental Adjustments

Guideline 6: Avoid unstructured judgmental adjustments to forecasts

Forecasters and managers are often tempted to make informal adjustments to forecasts from quantitative methods. Managers commonly adjust statistical forecasts, and most forecasting practitioners expect that judgmental adjustments reduce errors. Little evidence supports that belief. Rather, unstructured judgmental adjustments tend to reduce objectivity and introduce biases and random errors, and should be avoided.

Judgmental adjustments may improve accuracy when experts have domain knowledge about important influences not included in the forecasting models, such as special events and changes in causal forces. When that is the case, adjustments are conservative in that more knowledge and information is used in the forecasting process. Use structured procedures to elicit judgmental adjustments from experts who know the information used by the forecasting methods, but have not been shown the forecasts.

CONCLUSIONS

The Golden Rule of Forecasting is a unifying theory of how best to go about making accurate forecasts. The Rule is universal: It applies to all forecasting problems. In other words, it applies to any situation in which a decision-maker would benefit by knowing what is likely to happen in the future. The rule is easy to understand: It can be stated as “forecast conservatively, by being consistent with cumulative knowledge about the situation and about forecasting methods.” When analysts fail to follow the Golden Rule, there is little reason to expect that their forecasts will be of practical use to decision makers.

The Golden Rule checklist, available from GoldenRuleofForecasting.com, provides guidance that can help decision makers to quickly assess whether a forecast has been derived using procedures that are consistent with the Golden Rule. Analysts can use the checklist to help them to choose appropriate forecasting methods for their problem and to help them to implement the methods. Managers can insist that analysts follow the Golden Rule when they are preparing forecasts. Better decisions will follow.

The Golden Rule of Forecasting allows firms to improve the accuracy of their forecasts substantially. Firms that follow the Golden Rule have a competitive advantage over those that do not, as the Moneyball story vividly illustrates (Lewis 2004). When others follow suit, they will all provide better service to their customers at lower cost.

REFERENCE LIST

Armstrong, J. S. (2001). Principles of Forecasting: A Handbook for Researchers and Practitioners. New York: Springer.

Armstrong, J. S. (2001a). Judgmental bootstrapping: Inferring experts’ rules for forecasting. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 171–192). New York: Springer.

Armstrong, J. S., & Collopy, F. (1993). Causal forces: Structuring knowledge for time-series extrapolation. Journal of Forecasting, 12(2), 103–115.

Armstrong, J. S., Green, K. C., & Graefe, A. (2015). Golden Rule of Forecasting: Be conservative. Journal of Business Research [forthcoming – available from goldenruleofforecasting.com].

Chamberlin, T. C. (1890, 1965). The method of multiple working hypotheses. Science, 148, 754–759. (Reprint of an 1890 paper).

Graefe, A., & Armstrong, J. S. (2013). Forecasting elections from voters’ perceptions of candidates’ ability to handle issues. Journal of Behavioral Decision Making, 26(3), 295–303.

Green, K. C., & Armstrong, J. S. (2007). Structured analogies for forecasting. International Journal of Forecasting, 23(3), 365–376.

Lewis, M. M. (2004). Moneyball: The art of winning an unfair game. New York: Norton.

Rowe, G., & Wright, G. (2001). Expert opinions in forecasting: The role of the Delphi technique. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 125–144). New York: Springer.

About the Authors:

Prof. J. Scott Armstrong, The Wharton School, University of Pennsylvania and the Ehrenberg-Bass Institute (Adjunct Professor)

Dr Kesten C. Green, University of South Australia Business School and the Ehrenberg-Bass Institute (Senior Research Associate)

Dr Andreas Graefe, LMU Munich, Germany
