Monetary policy and data uncertainty

Working papers set out research in progress by our staff, with the aim of encouraging comments and debate.
Published on 27 October 2005

Working Paper No. 281
By Jarkko Jääskelä and Tony Yates

One of the problems facing policymakers is that recent releases of data are liable to subsequent revision. This paper discusses how to deal with this, and is in two parts. In the normative part of the paper, we study the design of monetary policy rules in a model in which data uncertainty varies with the vintage of the data. We show how the coefficients on lagged variables in optimised simple rules for monetary policy increase as the relative measurement error in early vintages of data increases. We also explore scenarios in which policymakers are uncertain about how much the measurement error in new data exceeds that in old data. An optimal policy can then be one in which it is better to assume that the ratio of measurement error in new data to that in old data is larger, rather than smaller. In the positive part of the paper, we show that the response of monetary policy to vintage-varying data uncertainty may generate evidence of apparent interest rate smoothing in interest rate reaction functions, but we suggest that it may not generate enough to account for what has been observed in the data.
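The mechanism in the normative part can be illustrated with a stylised signal-extraction exercise (this sketch is not the paper's model; the variable names, the i.i.d. state, and the grid search are illustrative assumptions). A policymaker who responds to a noisy first release of the output gap should optimally shrink the weight on that release as its measurement error grows; in a dynamic rule, the weight taken away from noisy new data shows up as a larger coefficient on lagged variables.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50_000
x = rng.normal(0.0, 1.0, T)  # "true" output gap (i.i.d. here for simplicity)

def best_weight(sigma_new, grid=np.linspace(0.0, 1.0, 101)):
    """Grid-search the weight w on the noisy first release that minimises
    the expected squared gap E[(w * y_new - x)^2] between the response
    w * y_new and the true state x."""
    y_new = x + rng.normal(0.0, sigma_new, T)  # first-release (new-vintage) data
    losses = [np.mean((w * y_new - x) ** 2) for w in grid]
    return grid[int(np.argmin(losses))]

# Mild vs. severe measurement error in the first release
w_low_noise = best_weight(0.5)
w_high_noise = best_weight(2.0)

# Classic signal-extraction result: w* = 1 / (1 + sigma^2),
# so noisier first releases optimally receive less weight.
print(w_low_noise, w_high_noise)
```

In this sketch the optimal weight falls from roughly 0.8 (sigma = 0.5) to roughly 0.2 (sigma = 2), matching the closed-form attenuation factor 1/(1 + sigma²). The complement of that weight is what, in a dynamic rule, would load onto lagged (better-measured) information, producing the appearance of interest rate smoothing discussed in the positive part of the paper.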


