Working Paper No. 237
By Richard Harrison, George Kapetanios and Tony Yates
This paper explores the effects of measurement error on dynamic forecasting models. It illustrates a trade-off that confronts forecasters and policymakers when they use data that are measured with error. On the one hand, observations on recent data give valuable clues about the shocks that are hitting the system and that will be propagated into the variables to be forecast. On the other hand, those recent observations are likely to be the least well measured. The paper studies two classes of forecasting problem. The first comprises cases where the forecaster takes the coefficients of the data-generating process as given and must choose how much of the historical time series of data to use in forming a forecast. We show that if recent data are sufficiently badly measured relative to older data, it can be optimal not to use recent data at all. The second class of problems we study is more general. We show that, for a general class of linear autoregressive forecasting models, the optimal weight to place on a data observation of a given age, relative to its weight in the true data-generating process, depends on the measurement error in that observation. We illustrate the gains in forecasting performance using a model of UK business investment growth.
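The trade-off described above can be sketched in a minimal simulation. This is not the paper's model: it assumes a stylised AR(1) process in which only the most recent observation is contaminated by measurement error, and all parameter values are hypothetical. Standard signal-extraction logic then implies that the optimal one-step forecast downweights the noisy latest observation relative to the true data-generating-process coefficient, and the downweighting grows with the measurement-error variance.

```python
import random

# Stylised illustration (assumed setup, not the paper's model):
#   y_t = rho * y_{t-1} + e_t,   e_t ~ N(0, sig_e^2)
# with the latest observation measured with error:
#   x_T = y_T + eta_T,           eta_T ~ N(0, sig_eta^2).
# The stationary variance of y is sig_y2 = sig_e^2 / (1 - rho^2), and
# signal extraction gives E[y_T | x_T] = kappa * x_T with
#   kappa = sig_y2 / (sig_y2 + sig_eta^2),
# so the optimal forecast of y_{T+1} is rho * kappa * x_T: the weight on
# the observation is shrunk below the true DGP coefficient rho.

random.seed(0)
rho, sig_e, sig_eta = 0.8, 1.0, 2.0      # hypothetical parameter values
sig_y2 = sig_e**2 / (1 - rho**2)
kappa = sig_y2 / (sig_y2 + sig_eta**2)   # optimal downweighting factor

def draw_stationary_y(burn_in=200):
    """Draw y_T approximately from the stationary AR(1) distribution."""
    y = 0.0
    for _ in range(burn_in):
        y = rho * y + random.gauss(0.0, sig_e)
    return y

n = 20000
mse_naive = mse_shrunk = 0.0
for _ in range(n):
    y_T = draw_stationary_y()
    x_T = y_T + random.gauss(0.0, sig_eta)          # noisy latest datum
    y_next = rho * y_T + random.gauss(0.0, sig_e)   # realised outcome
    mse_naive += (y_next - rho * x_T) ** 2          # true DGP coefficient
    mse_shrunk += (y_next - rho * kappa * x_T) ** 2 # downweighted coefficient
mse_naive /= n
mse_shrunk /= n

print(round(kappa, 3))
print(mse_shrunk < mse_naive)
```

With these values kappa is about 0.41, so the forecaster optimally uses well under half the weight the true coefficient would assign to the noisy observation; as sig_eta grows, kappa falls toward zero, echoing the paper's result that sufficiently badly measured recent data are best ignored.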