Good afternoon everyone.
It is a great pleasure and honour to speak at today’s Society of Professional Economists conference. Thanks to George Buckley and his fellow organisers for the invitation to participate.
Last week’s MPC decision
Let me start with last week’s MPC decision.
As you all know, the MPC decided to raise Bank Rate by 25bp to 0.5%. The Committee also agreed to commence the unwind of gilt holdings accumulated through quantitative easing (by ending re-investment of maturing bonds), and to announce an intended unwind of the corporate bond portfolio.
This further withdrawal of the monetary policy accommodation introduced at the outset of the pandemic was motivated by a forecast with three important and inter-related elements, which I will address in turn.
First, the evolution of international energy and goods prices, which continue to drive headline inflation developments in the UK. These have repeatedly surprised to the upside in both magnitude and persistence over the past year, putting painful pressure on the cost of living and placing the MPC in the current, very uncomfortable position of forecasting headline CPI inflation to peak at over 7% in April.
For a net importer of energy and goods, these developments represent a deterioration of the terms of trade. This is an adverse real shock, with adverse real consequences. Higher international energy and goods prices unavoidably weigh on UK real national income.
But it is a real shock with substantial nominal implications. Higher energy and core goods prices account for the bulk of the inflation target overshoot, and a large part of the successive upside surprises we have seen in inflation outturns.
This lays bare that we are in difficult trade-off territory for monetary policy. The driver of higher inflation is also a driver of lower real income and demand in the later years of our forecast, and thus higher unemployment. No ‘divine coincidence’ here. Bringing down inflation through monetary tightening has to be weighed against its impact on employment and activity.
Looking ahead, the sensitivity of headline inflation to developments in international energy prices is significant, as the alternative forecast scenarios published in the MPC’s February Monetary Policy Report illustrate.
Were international energy and goods prices to stabilise – even at permanently higher levels – eventually their direct impact on UK inflation would dissipate. Should they fall slowly in line with futures prices, inflation is forecast to fall below the 2% target by a meaningful margin at the policy-relevant horizon.
Yet uncertainty on this dimension remains considerable. With the pandemic giving way to geopolitics as the driver of international energy prices, they are unlikely to become easier to forecast.
This brings me to the second key element of the forecast: our treatment of the labour market and wage developments.
Recent data confirm that the UK labour market is tight. It is tighter than we expected at the time of the previous Monetary Policy Report published in November. And we expect it to tighten further in the coming months.
On the back of this labour market tightness and in light of our Agents’ recent wage survey – which embodied the highest expectation for pay settlements since well before the financial crisis – we have revised up underlying wage growth in 2022 to rates approaching 5%. Given expected productivity developments, this rate is probably stronger than that consistent with the inflation target over the medium term.
But we expect this stronger momentum in domestic wage and cost growth to ease beyond this year, as headline inflation falls and unemployment rises owing to the impact of higher energy prices on real incomes and domestic demand.
This is a crucial assumption – the ‘big call’ – underlying our inflation outlook.
It is a call grounded in the experience of the past quarter of a century, as embodied in wage equations estimated over that period. On that basis, in our forecast the easing of domestic inflationary pressures is achieved without the UK falling into recession.
But, by nature, it is also a call surrounded by uncertainty.
The structure of the UK labour and product markets may be changing, rendering the experience of the past twenty years less relevant for the outlook of wages over the forecast period. The pandemic has lowered labour force participation rates, while Brexit may have changed the structure of both labour and goods markets, reducing the discipline of competitive pressures on the evolution of domestic margins and costs.
The environment created by higher international energy and goods prices may also have implications.
To the extent that any implied deterioration in the terms of trade is permanent, there will be a real hit to UK income that must be absorbed by some domestic actor.
So, in the end, someone will have to bear that cost. The important question for monetary policy is how that happens; in particular, how pragmatic and realistic UK firms and households are in accepting the macroeconomic implications of the higher imported energy and goods prices.
The longer that firms try to maintain real profit margins and employees try to maintain real wages – each in an attempt to protect their own real income from the unavoidable impact of higher external prices – the more likely it is that domestically-generated inflation will achieve its own self-sustaining momentum even as the external impulse to UK inflation recedes.
Our baseline assumes that this risk of so-called second round effects will be contained, in part by the monetary policy measures taken and in prospect. But should this assumption come under threat or prove to be misplaced, a further monetary policy response would be required.
And that brings me to the third element of our published forecast: the outlook for monetary policy.
Under our baseline paths for wages and energy prices, our published scenarios suggest that leaving Bank Rate unchanged at 0.5% indefinitely would – unsurprisingly – leave inflation above our 2% target at the policy-relevant horizon, whereas following the market-implied path to 1.2% by the end of this year would have left inflation somewhat below target.
I leave it to you to draw any implications for where the MPC sees the path of Bank Rate headed.
But I would emphasise that these scenarios, by nature, embody the underlying assumptions about energy prices and wage developments that I have discussed. Since the outlook for wages and energy prices is uncertain, as I have emphasised, the prospective path for Bank Rate is also uncertain.
Were we to see evidence of second round effects in wage and cost developments, a tighter policy than otherwise might be required. Were energy prices to fall steadily in line with futures rather than stabilise as we assume, then – other things equal – more policy accommodation could be maintained.
Exploring how monetary policy decisions and their communication should be managed in the face of such uncertainties is the topic to which I will now turn in the body of my talk.
Monetary policy and uncertainty
As I hope to have illustrated, the MPC’s assessment and treatment of uncertainty has played an important role in coming to last week’s monetary policy decision. Of course, the current conjuncture has its own specificities – many of which are uncomfortable for the MPC, as I have discussed.
But dealing with uncertainty is part of the job. If we lived in a world of perfect certainty, then monetary policy would be a much simpler – perhaps even a trivial – task. Sitting on the MPC at present, that is not how it feels.
So I hope you will indulge me with scope to discuss more generally how monetary policymakers might deal with risk and uncertainty, before concluding with some more specific remarks about where the MPC stands today.
The economic literature on monetary policy and uncertainty is voluminous and surprisingly inconclusive. I don’t have the time to survey that literature here today. But I will summarise a few points that I think are relevant both to my own remarks, and to the current conjuncture.
Much of the analytical discussion has taken place within a framework that casts monetary policy strategy in the form of a control problem. Within a system that defines the behaviour of the economy, policymakers seek to minimise the social costs arising from the departure of inflation, employment and economic activity from their efficient levels, using the short-term interest rate (and potentially other monetary policy instruments) as a control variable.
This is monetary policy as engineering. That cannot be an adequate or complete characterisation of the challenges the MPC faces. But it can help clarify and codify how to think about some of the important conceptual issues surrounding monetary policy decisions in the face of uncertainty.
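To fix ideas, that control problem is often written – in a stylised form of my own that abstracts from many details – as the minimisation of a quadratic loss in deviations of inflation from target and of activity from potential:

```latex
% Stylised policymaker loss function (illustrative notation):
% \pi_t is inflation, \pi^* the target, x_t the output gap,
% \lambda the relative weight on output stabilisation,
% i_t the policy rate used as the control variable.
\min_{\{i_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t
  \Big[ (\pi_t - \pi^*)^2 + \lambda \, x_t^2 \Big]
```

subject to the equations describing how inflation and the output gap respond to the path of the policy rate.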
Characterising monetary policy as a control problem has a long history. It achieved renewed prominence as the academic literature sought to formalise and evaluate inflation targeting strategies, once they were introduced by central banks from the early 1990s. Since the UK was in the vanguard of these developments, unsurprisingly the Bank of England – in part, through the efforts of many attendees of today’s conference – contributed richly to that research programme.
A common starting point was the standard new Keynesian macroeconomic model, building on the work of Michael Woodford in collaboration with my former colleague and mentor Julio Rotemberg (who very sadly passed away a few years ago). Linear approximations of these models around the inflation target steady state, in conjunction with quadratic approximations of social welfare derived from the household preferences upon which the model was based, defined the system within which model-optimal monetary policy could be determined.
Over the past quarter of a century, a lot of ink has been spilt in academic journals, doctoral theses, and central bank working papers exploring the features and implications of this general framework in great detail.
For those of us schooled in modern macro using Thomas Sargent’s Macroeconomic Theory textbook, this linear / quadratic (or LQ) framework was associated with one important result: certainty equivalence. In other words, within the LQ set-up defined by Rotemberg and Woodford, the optimal monetary policy rule governing how the policy interest rate should respond to changes in the state of the economy is invariant to the distribution of shocks to that economy.
This does not mean that uncertainty does not matter. On the contrary, shocks are the drivers of economic fluctuations. The welfare losses associated with inflation volatility stemming from those shocks provide a rationale for the stabilisation of inflation that underlies the adoption of inflation targeting.
But the analysis implies that the optimal policy rule – defining how the policy rate should respond to economic developments – does not change if and when the degree of uncertainty in the economy increases.
My impression is that this result did not sit comfortably with monetary policymakers. And – as is often the case in the academic literature – a result proving something did not matter proved to be the catalyst for a research agenda seeking to demonstrate that it did.
Engineering again provided some of the inspiration for this research programme. But different strands of the engineering literature were taken up, which – when filtered through the macroeconomic lens – offered different implications for monetary policy.
One strand of research developed from so-called Brainard uncertainty. If uncertainty entered the system via the parameters of the economy, rather than as shocks to the economy, then the consequences were different.
For example, if the slope of the Phillips curve governing the relationship between inflation and unemployment were uncertain, then the Brainard view implied that the monetary policy response should be attenuated relative to the case when such uncertainty were absent. And the greater the uncertainty about the slope, the greater the attenuation required.
The intuition behind this result is reasonably straightforward. Uncertainty about the slope of the Phillips curve introduces scope for the monetary policy response to any specific shock to be mis-calibrated. That mis-calibration threatens to introduce additional volatility into the inflation outlook beyond that created by the shock itself. So policymakers need to manage a trade-off between, on the one hand, the impact of the underlying shock on inflation and, on the other hand, the impact of any policy mis-calibration on inflation.
Managing this trade-off implies a less aggressive response to any specific shock than would be desired if there were no uncertainty about the parameters of the economy, and thus no scope for policy mis-calibration.
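A stripped-down version of Brainard's result makes the attenuation logic concrete. The set-up and numbers below are my own illustrative choices, not drawn from any Bank model: the inflation outcome depends on the policy instrument through an uncertain multiplier, and minimising the expected squared deviation delivers a response that shrinks as multiplier uncertainty grows.

```python
# Toy Brainard (1967)-style problem, illustrative numbers only.
# Outcome: y = b * r + u, where r is the policy response, u a shock,
# and the multiplier b is uncertain with mean b_bar and variance sig2.
# Minimising E[y^2] over r gives r = -b_bar * u / (b_bar**2 + sig2),
# which is attenuated relative to the certainty response -u / b_bar.

def optimal_response(u, b_bar, sig2):
    return -b_bar * u / (b_bar**2 + sig2)

u, b_bar = 2.0, 1.0
certain = optimal_response(u, b_bar, 0.0)    # -2.0: fully offsets the shock
uncertain = optimal_response(u, b_bar, 1.0)  # -1.0: half as aggressive
```

With no uncertainty the response fully offsets the shock; with multiplier variance equal to the squared mean, it is exactly half as large – and the larger the variance, the greater the attenuation.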
On occasion, policymakers have embraced this Brainard view in rationalising their decisions. Former Federal Reserve Vice-Chairman Alan Blinder went so far as to state that the FOMC should first
“estimate how much you need to tighten or loosen monetary policy to ‘get it right’. Then do less”. But other strands of the engineering literature offer guidance pointing in the opposite direction.
One thread falls under the label ‘robust control’. Broadly speaking, the desire for robustness in policy design starts from the premise that ‘the perfect should not become the enemy of the good’.
In the face of uncertainty about the outlook for and structure of the economy, policy should be based on principles that give good outcomes across a broad set of possible eventualities, rather than an optimal outcome in one specific context, especially if the optimality of that outcome is fragile to changes in context.
One – perhaps extreme – characterisation of such robust control thinking starts from the adoption of a so-called MINIMAX strategy to monetary policy decisions. Even if it is too abstract to offer a practical guide to monetary policy, it can act as a device to help clarify some ideas underlying the policy debate.
The MINIMAX approach involves choosing the policy setting which works best in the worst circumstances that you might face across the possibilities of how the economy behaves.
More precisely, if we characterise uncertainty as being reflected in a set of different possible models of the economy, policymakers seek to minimise the welfare cost in that model where those welfare costs are highest, given the observed developments in macroeconomic data.
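The mechanics can be shown with a toy decision table. The policies, candidate models and loss numbers below are invented purely for illustration – the point is only the selection criterion, not the values:

```python
# Hypothetical welfare losses for each policy under each candidate
# model of the economy (all numbers invented for illustration).
losses = {
    "hold":    {"benign": 1.0, "spiral": 9.0, "trap": 4.0},
    "tighten": {"benign": 2.0, "spiral": 3.0, "trap": 6.0},
    "ease":    {"benign": 2.5, "spiral": 12.0, "trap": 1.0},
}

def minimax_policy(losses):
    # Choose the policy whose worst-case loss across models is smallest.
    return min(losses, key=lambda policy: max(losses[policy].values()))

choice = minimax_policy(losses)  # "tighten": worst case 6.0 beats 9.0 and 12.0
```

Note that the minimax choice ignores how likely each model is: a policy can be selected purely because it caps the damage in the worst scenario, which is the source of both the approach's appeal and its criticised activism.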
The intuition here can be understood using a recent practical example. Most new Keynesian macroeconomic models have more than one steady state. But the inflation targeting literature originally focused on developments around one of those states, namely – and of course unsurprisingly – that defined by the inflation target itself.
But in the face of deflationary risks and the aftermath of the global financial crisis, another steady state – one characterised by the zero lower bound on nominal interest rates, the liquidity trap and the threat of a persistent deflationary spiral – could no longer be ignored.
Since this other steady state entailed lasting deviations from the inflation target, the threat of falling into it weighed heavily on central bank thinking. Indeed, some characterised the liquidity trap as an absorbing state: the Hotel California of monetary policy, where you can try to check out but can never leave.
The experience of Japan since the early 1990s was held up as a salutary example.
Robust control MINIMAX thinking implies an aggressive response to the threat of falling into such a deflationary liquidity trap. Since the costs of doing so are large and persistent – possibly permanent – the magnitude of the policy response should rise as the risk of entering such a trap increases. Monetary policy should do everything it can to avoid a deflation trap. Go big and go early. If what you have done is not enough, do more. No Brainard-like attenuation in the face of risk here.
By nature, when seen in isolation, such an approach is highly asymmetric – and has been criticised as such.
But should the implied aggressive easing in the face of deflation prove successful, then the balance of risks and uncertainties faced by monetary policymakers may shift abruptly. If the biggest threat to price stability then became the emergence of a self-sustaining cost / price spiral, then a literal – but symmetric – interpretation of the MINIMAX approach might then entail an aggressive tightening to choke off the threat of such a spiral.
As this discussion illustrates, the literal-but-symmetric interpretation of the MINIMAX approach to robust monetary policy would imply very ‘activist’ policy responses in the face of uncertainty. In this framework, policymakers might have to switch in short order from having their foot to the floor on the monetary accelerator to stamping on the monetary brake.
Unsurprisingly, on my reading, that characterisation also finds little sympathy in policymaking circles. In the interests of avoiding tail outcomes, such activism threatens to induce volatility into monetary policy, financial markets, the economy and inflation in other circumstances, which is both costly and hard to communicate.
But the research on monetary policy and uncertainty as a whole has influenced the discussion and conduct of policy in practice. I would pick out a number of lessons that have been drawn.
- The magnitude and evolution of uncertainty and risk can, do and should influence monetary policy decisions.
- Not all uncertainty is the same. The way uncertainty influences monetary policy decisions depends on the circumstances and the nature of the uncertainty that is being faced – its source, its persistence, its magnitude.
- Monetary policy should not be paralysed in the face of uncertainty. If policymakers were to wait until all uncertainty is resolved before taking decisions, they would wait forever.
- At the same time, there is no general rule that says policy responses should either be attenuated or more activist in the face of uncertainty.
- While it is helpful to ask what might be the most adverse consequences of any decision taken, possible tail outcomes need to be balanced in a judicious and pragmatic way against more likely outcomes.
I see these elements as being embodied in the so-called risk management approach to the formulation of monetary policy, which has been embraced by many central bank policymakers.
At a practical level, in recent years this approach has acknowledged the need for monetary policy to be cognisant of uncertainties and risks stemming from (say) the lower bound on policy rates or the threat of a liquidity trap, while at the same time recognising that policy considerations cannot ignore the implications of decisions should the trap be avoided and inflation rise over the medium term.
The former aspect of risk management has justified a more aggressive response to downside shocks to price stability as the lower bound drew closer. Our discussions in the MPC now also address the latter aspect of the risk management approach: how policy should evolve as inflation rises and those asymmetric risks stemming from the lower bound recede.
In undertaking these discussions, the risk management approach entails judgement and pragmatism. Engineering is not enough. The inevitable subjectivity of these judgements re-introduces some art into the science of monetary policy.
And it is differences in those judgements across the members of the MPC that lead to finely balanced vote outcomes like the one we saw last week.
Guiding monetary policy by the stars
In discussing those judgements – and, in particular, in converting them into monetary policy decisions about Bank Rate – I am often confronted by questions about ‘starred variables’ in the standard new Keynesian model.
In highly stylised form, three starred variables lie at the heart of these models: Pi-star, the steady state rate of inflation; R-star, the Wicksellian neutral real rate of interest; and Y-star, the potential level of economic activity.
Pi-star is the easiest to dispose of. Within an inflation targeting framework, the steady-state level of inflation is determined exogenously by the central bank’s mandate. In that context, the inflation target should be as clear as possible: uncertainty about the target only adds unnecessary – and costly – volatility into the economy.
The UK’s inflation targeting framework delivers on this score. The inflation target is defined as an annual rate of CPI inflation of 2%, which is given primacy in the remit delivered to the Bank of England by the Treasury and deemed to hold at all times.
There are inevitably times – like those we are experiencing today – when unforeseen shocks to the economy drive inflation away from target. These circumstances raise questions about how quickly it is feasible and/or desirable to return inflation to target. But behind those questions, the clarity of Pi-star as expressed in the form of the inflation target is the touchstone for the MPC’s approach to monetary policy.
R-star and Y-star are trickier to deal with. These are both real variables that, at least over medium-term horizons, are largely independent of monetary policy. They cannot be determined or even much influenced by the central bank.
Moreover, R-star and Y-star are not directly observable. They are latent variables, surrounded by considerable conceptual as well as empirical uncertainty. Researchers don’t always agree on what they mean, let alone how to measure them, and still less what their specific value is at any point in time.
The Wicksellian neutral real rate of interest is a concept that may be recovered from the data and interpreted in a specific model context after the fact and with the benefit of hindsight. Even then, that exercise is difficult and unlikely to prove definitive. But R-star is not a very meaningful guide or reference for real time monetary policy making.
In similar vein, the academic literature suggests that over-reliance on uncertain – and likely flawed – real time estimates of R-star and/or Y-star can lead to significant monetary policy mistakes. In research exploring the origins of the 1970s ‘Great Inflation’ in the United States, Athanasios Orphanides has explored how the Federal Reserve repeatedly over-estimated the level of US potential GDP (and thus under-estimated the extent of inflationary pressure in the US economy). Such analysis has been replicated in other jurisdictions and periods with similar results.
Orphanides’ historical work was conducted in parallel with research by his then colleagues at the Federal Reserve exploring the performance of simple policy rules across a variety of empirical models.
One lesson drawn from this cross-model evaluation of the robustness of simple monetary policy rules is that rules that rely on differences consistently perform better than rules that rely on levels. In other words, rules where changes in policy interest rates are driven by the evolution of growth rates of economic activity outperform rules where the level of the policy rate is determined on the basis of estimates of the neutral real rate and output gap.
One explanation of this result is that, by their nature, difference rules largely eliminate the influence of R-star or Y-star from the determination of policy choices. Difference rules thereby avoid introducing the policy mis-calibration and resulting noise that stems from using poorly estimated or mis-specified R- and Y-star measures.
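A small simulation can illustrate the mechanism. The toy model below is my own construction – a backward-looking Phillips curve and IS relationship with invented parameters, not the Bank's forecast model. When a level rule is built on a mis-estimated R-star, inflation settles away from target, with a bias inversely proportional to the rule's response coefficient; a difference rule, whose resting point requires inflation to equal the target regardless of any R-star estimate, avoids that bias.

```python
# Toy backward-looking economy (illustrative parameters, not estimated):
#   Phillips curve: pi_t = pi_{t-1} + kappa * x_t
#   IS curve:       x_t  = -sigma * (i_t - pi_{t-1} - rstar_true)
#   Level rule:     i_t  = rstar_est + pi_{t-1} + phi * (pi_{t-1} - target)
kappa, sigma, phi, target = 0.5, 1.0, 0.5, 2.0
rstar_true = 1.0

def simulate_level_rule(rstar_est, pi0=3.0, periods=200):
    pi = pi0
    for _ in range(periods):
        i = rstar_est + pi + phi * (pi - target)  # level rule
        x = -sigma * (i - pi - rstar_true)        # output gap
        pi = pi + kappa * x                       # inflation update
    return pi

pi_correct = simulate_level_rule(rstar_est=1.0)  # R-star known: pi -> 2.0
pi_biased = simulate_level_rule(rstar_est=0.0)   # 1pp under-estimate: pi -> 4.0
```

In this toy economy the level rule's steady state is pi = target + (rstar_true - rstar_est) / phi, so a 1pp mis-estimate with phi = 0.5 leaves inflation 2pp from target indefinitely. A difference rule, i_t = i_{t-1} + phi * (pi_{t-1} - target), can only come to rest where pi = target, whatever the policymaker believes about R-star.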
I will explore briefly this idea in a narrower setting using a version of a model built by my colleagues Rich Harrison, Martin Seneca and Matt Waldron (a sketch of which is provided in the Annex).
Essentially, this is a canonical three equation new Keynesian macroeconomic model estimated for the UK, with a few additional features to ensure more realistic persistence in inflation and output. Although estimated, I would emphasise that this is a stylised representation of the UK economy – I am using it for illustrative purposes, rather than to foreshadow any specific policy decision.
In the background of the model are ‘cost-push’ shocks that introduce a difficult trade-off between inflation volatility around target and output gap volatility, which monetary policymakers seek to manage by varying the policy rate.
My focus here is on the treatment by monetary policy of shocks to the R- and Y-star concepts embodied in this model. One can think of these as risk premium and productivity shocks. The crucial feature is that shocks to these starred variables have two components: one temporary, the other persistent. Neither policymakers nor households and firms can observe these shocks in real time. Rather, they have to learn about the persistence of the driving shocks over time on the basis of how the observed data evolve.
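The signal-extraction problem this creates can be sketched with a scalar Kalman filter. This is a hypothetical parameterisation of my own, not the model described in the Annex: the persistent component follows an AR(1) process, the observer sees only its sum with temporary noise, and beliefs about persistence are revised gradually as data accumulate.

```python
# Scalar Kalman filter sketch (my own illustrative parameters).
# Persistent component: z_t = rho * z_{t-1} + state noise (variance q).
# Observation: s_t = z_t + temporary noise (variance r).
rho, q, r = 0.95, 0.1, 1.0

def kalman_step(z_hat, p, s):
    z_pred = rho * z_hat           # predict the persistent component
    p_pred = rho**2 * p + q        # predicted belief variance
    k = p_pred / (p_pred + r)      # Kalman gain
    z_new = z_pred + k * (s - z_pred)  # update using observed shock s
    p_new = (1 - k) * p_pred
    return z_new, p_new

# A truly persistent shock of -1 is only gradually recognised as such:
z_hat, p = 0.0, 1.0
beliefs = []
for _ in range(10):
    z_hat, p = kalman_step(z_hat, p, s=-1.0)
    beliefs.append(z_hat)
```

On these numbers the first observation is attributed roughly half to temporary noise, and the estimate drifts towards the truth only as the shock keeps showing up in the data – the environment in which real-time policy mis-perception of the starred variables arises.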
I should recognise at this point the debt I owe to my colleagues Jack Meaning and, in particular, Alberto Polo, who helped me with the simulations of this model. The past couple of weeks have been a busy time for me with the MPC: I would not have been able to complete this work without their considerable help.
Charts 1 and 2 illustrate the advantages of difference rules over level rules in this set-up. When restricted to simple forms and optimised, a difference rule performs better than a level rule in stabilising inflation and lowering social losses.
This is true however much weight is placed on output gap stabilisation relative to inflation stabilisation – in other words, for all values of ‘lambda’ in standard policy evaluation exercises. And – at least in our set-up – it is true even as the importance of the persistent shock to the starred variables relative to the temporary shock diminishes.
The intuition behind the result is as I have discussed. If monetary policy decisions are based on a mis-interpretation of the shock driving R-star, then the difference rule is more ‘forgiving’ of those mis-perceptions than the level rule, and less noise is introduced into the system as a result.
This is illustrated by the impulse responses shown in Chart 3. These show the evolution of model variables following a risk premium shock that lowers R-star persistently, but is believed by policymakers, firms and households to be temporary.
The level rule implies that policy rates snap back after the incidence of the shock. After all, if the downward shock to R-star is temporary, failing to raise policy rates in parallel with the recovery of R-star would lead to excessive monetary ease. But because the true shock to R-star is persistent, that approach creates a monetary tightening that widens the output gap and weighs on inflation.
By contrast, the difference rule leaves policy rates lower and therefore does not induce the same instability in output and inflation. Its stabilising properties are superior – and associated with a slower, steadier and more persistent rise in the policy rate.
In the end, the learning process embodied in this model ensures that the outcomes of the different rules eventually converge. But during this process of convergence, the difference rule dominates in terms of social welfare and implies a slower, steadier and more persistent evolution of policy rates following the shock: one characterisation of a ‘steady-handed’ policy response.
Of course, Chart 3 only illustrates the responses for one combination of underlying shocks and perceptions. A full analysis would explore all those combinations. But we can show – as illustrated in Chart 4 – that it is possible to construct scenarios (defined here as combinations of underlying shocks) that create this steady-handed response under the difference rule.
This is probably the most relevant result for the current conjuncture. Where there is uncertainty about the level of R-star, the scenario shows the advantages of setting policy rates on the one-step-at-a-time basis embodied in difference rules. Policies that attempt to move quickly back to preconceived neutral levels without reference to the real time evolution of the data run the risk of adding noise and volatility through a mis-calibration of the policy response.
Monetary policy outlook
Coming back to last week’s monetary policy announcement, the MPC agreed to continue with a further modest tightening of monetary policy.
Placing the specifics of that decision in a longer context, since I joined the MPC in September we have: (1) retired the forward guidance that signalled unchanged policy rates; (2) brought asset purchases to an end; (3) started to raise Bank Rate and continued that process last week; and (4) announced a start to the reduction of asset portfolios accumulated as a result of QE.
From a starting point of substantial policy accommodation, this is a consistent, measured and resolute set of actions intended to rebalance the stance of monetary policy and address the inflationary pressures that have emerged.
We have signalled that more is to come in the coming months if the path sketched out in our February forecast plays out. But equally we have flagged that the outlook for Bank Rate beyond the coming months is uncertain, reflecting the two-sided risks to inflation at the policy-relevant two to three year horizon.
How the stance of monetary policy evolves will depend on how the economic outlook evolves. This should be self-evident. But it is an assertion that bears repeating today.
Not all of the MPC’s decisions last week were unanimous. The published vote pointed to a finely balanced decision over whether to raise Bank Rate by a standard 25bp to 0.5% or to hike by a bigger 50bp step in order to get back to pre-Covid levels of Bank Rate in one bound.
That decision was finely balanced for me as an individual voter on the MPC, as well as for the overall balance of votes on the Committee as a whole. The considerations I have outlined in this talk help to explain why I voted with the majority on this occasion.
First, I am sceptical of efforts to return Bank Rate quickly to some pre-defined neutral level or terminal rate, given the inevitable conceptual and empirical uncertainty surrounding such a level or rate. As we have seen, such an approach risks increasing inflation and output volatility if policy is mis-calibrated owing to assuming the wrong value of R- or Y-star.
Better to adopt a more measured and data-dependent approach, which learns from how the economy responds to each step taken rather than pre-commits to a concept surrounded by uncertainty. This is likely to produce a steadier, more persistent and more purposeful path for policy rates.
Second, I worry that taking unusually large policy steps may validate a market narrative that Bank policy is either foot-to-the-floor on the accelerator or foot-to-the-floor on the brake. This narrative was fuelled by the (perhaps necessarily) activist responses to the onset of the global financial crisis and the pandemic. It is not unique to the UK.
In my mind, the perceptions underlying this narrative may help to explain both markets’ reluctance to price in policy tightening in the middle of last year and the subsequent relatively rapid steepening of the money market yield curve of late as central banks have embraced the need to address inflationary pressures.
And such perceptions can shape reality. If such swings in market sentiment and expectations were to weaken central banks’ ability to steer the market rates of most relevance to spending and investment decisions, then the transmission of monetary policy would be at risk.
Of course, there may be occasions where aggressive monetary policy actions are necessary. Recent history is littered with them: the global financial crisis and onset of the pandemic are cases in point. I would certainly not wish to rule out changes in Bank Rate of more than the usual 25bp in all circumstances.
Retaining flexibility is important. And given the inflationary pressures we currently face, I can certainly understand why colleagues on the MPC voted for a 50bp hike last week.
But if – as I expect is likely – we are set to need more nuanced monetary policy actions in the future as the macroeconomic situation normalises and decisions become more finely balanced, then re-establishing a reputation for measured and purposeful steps in policy is valuable. Restricting ourselves to a 25bp hike now – albeit with the prospect of more to come in the coming months – is an investment in containing market expectations of aggressive ‘activism’ that I saw as worth making.
In the early years of the MPC, former Bank Governor (then Deputy Governor and now Lord) King argued that the establishment of the inflation target should mark the end of characterising policymakers as either ‘hawks’ or ‘doves’: “it made no sense to use these descriptions because each member of the Committee had the same objective”.footnote 
An alternative distinction could emerge along the dimension of how ‘activist’ policymakers were in pursuit of their common target: whether they would use policy aggressively to return inflation to target quickly, or take a more measured approach reflecting the potential for difficult trade-offs between inflation and output volatility created by some shocks.
Given the data and structural uncertainties facing monetary policy that I have discussed in this talk, a case can be made for a measured rather than activist approach to policy decisions, with a focus on more persistent developments in the data that have lasting implications for the outlook for price stability.
That is what I would label a ‘steady handed’ approach to monetary policy. Even if it does not provide guidance in all circumstances, I hope it can help explain why I voted for a 25bp hike – rather than something larger – last week.
With that provocation, allow me to open up to questions.
My thanks to Jack Meaning, Alberto Polo and Rich Harrison for helpful discussions in the preparation of this speech. I would also like to thank Andrew Bailey, Lennart Brandt, Ben Broadbent, Alan Castle, Michael Goldby, Jonathan Haskel, Andrew Hauser, Michael Saunders, Martin Seneca, Fergal Shortall, Tom Smith, Matthew Swannell and Silvana Tenreyro for their comments. Opinions (and all remaining errors and omissions) are my own.
Blanchard, O. and J. Galí (2007). ‘Real wage rigidities and the new Keynesian model,’ Journal of Money, Credit and Banking 39(1), pp. 35–65.
Blinder, A.S. (1998). Central banking in theory and practice, MIT Press.
Brainard, W. (1967). ‘Uncertainty and the effectiveness of policy,’ American Economic Review 57(2), pp. 411–425.
Carney, M. (2017). ‘Lambda,’ speech given at the London School of Economics, 16 January.
Cunliffe, J. (2018). ‘A little bit of stodginess?’ speech given at Cumbria Chambers of Commerce, 13 July.
Currie, D.A. (1985). ‘Macroeconomic policy design and control theory – A failed partnership?’ Economic Journal 95(378), pp. 285–306.
Evans, C.L., J.D.M. Fisher, F. Gourio and S. Krane (2015). ‘Risk management for monetary policy near the zero lower bound,’ Brookings Papers on Economic Activity (Spring), pp. 141–196.
Gerdesmeier, D. and B. Roffia (2004). ‘Taylor rules for the euro area: The issue of real time data,’ Bundesbank Discussion Paper 2004/37.
Greenspan, A. (2004). ‘Risk and uncertainty in monetary policy,’ American Economic Review 94(2), pp. 33–40.
Haldane, A.G. (1995). Targeting inflation: A conference of central banks on the use of inflation targets, Bank of England.
Issing, O. (2002). ‘Monetary policy in a world of uncertainty,’ Economie Internationale 92, pp. 165–179.
King, M.A. (1999). ‘MPC two years on,’ speech given at Queen’s University Belfast, 17 May.
Levin, A.T., V. Wieland and J.C. Williams (1999). ‘Robustness of simple monetary policy rules under model uncertainty,’ in J.B. Taylor (ed.) Monetary policy rules, Chicago University Press, pp. 263–299.
Levin, A.T. and J.C. Williams (2003). ‘Robust monetary policy with competing reference models,’ Journal of Monetary Economics 50(5), pp. 945–975.
McCallum, B.T. (1988). ‘Robustness properties of a rule for monetary policy,’ Carnegie-Rochester Conference Series on Public Policy 29, pp. 173–203.
Mian, A., L. Straub and A. Sufi (2021). ‘Indebted demand,’ Quarterly Journal of Economics 136(4), pp. 2243–2307.
Onatski, A. and J.H. Stock (2002). ‘Robust monetary policy under model uncertainty in a small model of the US economy,’ Macroeconomic Dynamics 6(1), pp. 85–110.
Orphanides, A. (2002). ‘Monetary policy rules and the Great Inflation,’ American Economic Review 92(2), pp. 115–120.
Orphanides, A. and S. van Norden (2002). ‘The unreliability of output gap estimates in real time,’ Review of Economics and Statistics 84(4), pp. 569–583.
Pill, H. (2010). ‘Monetary policy in a low interest rate environment: A checklist,’ NBER International Seminar on Macroeconomics 6(1), pp. 335–345.
Rotemberg, J.J. and M. Woodford (1997). ‘An optimization-based econometric framework for the evaluation of monetary policy,’ NBER Macroeconomics Annual 12, pp. 297–346.
Sargent, T.J. (1987). Macroeconomic theory, 2nd ed., Academic Press.
Svensson, L.E.O. (1999). ‘Inflation targeting as a monetary policy rule,’ Journal of Monetary Economics 43(3), pp. 607–654.
The policy decisions are explained in greater detail in the Bank’s February 2022 Monetary Policy Report.
Owing to some statistical weaknesses, the official series for the UK terms of trade, which is based upon the import price time series, does not reflect this deterioration. The discussion in the text is based on an economic assessment of the evolution of UK import and export prices over recent months.
‘Divine coincidence’ here refers to the property of new Keynesian macroeconomic models that implies there is no trade-off between the stabilization of inflation and the stabilization of the welfare-relevant output gap in the conduct of monetary policy. See Blanchard and Galí (2007).
See the Bank’s November 2021 Monetary Policy Report.
We do not have comparable survey results for the period prior to the financial crisis, so a longer time series comparison is not possible.
How one such wage equation accounts for the movements in underlying pay growth since 2011 is shown in Chart 3.7 of the Bank’s February 2022 Monetary Policy Report.
For an overview, see Issing (2002).
For a critical review of this characterisation, see Currie (1985).
For an example, see Svensson (1999).
See Haldane (1995).
See Rotemberg and Woodford (1997).
See Brainard (1967).
See p.17 in Blinder (1998); for an application to UK monetary policy, see Cunliffe (2018).
For applications of this principle to the design of monetary policy rules, see McCallum (1988) and Levin et al. (1999).
See Onatski and Stock (2002).
See, for example, Greenspan (2004).
For a discussion of this issue, see Evans et al. (2015).
At least, that is the conventional view, which I maintain in this speech. Mian et al. (2021) offer an alternative view.
See Orphanides and van Norden (2002).
See Orphanides (2002).
See Levin et al. (1999) and Levin and Williams (2003). This agenda has continued using the excellent macroeconomic model database managed by Volker Wieland and hosted at the Goethe Universität in Frankfurt.
An analogue to this result is found in the engineering literature, which advocates difference-based control rules when steady-state values are unknown or uncertain.
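To illustrate the point with a stylised example of my own (not drawn from the literature cited): a level-based Taylor-type rule, i(t) = r* + π(t) + a[π(t) − π*] + b[y(t) − y*], requires estimates of the neutral rate r* and potential output y*, whereas its first-difference counterpart, Δi(t) = a[π(t) − π*] + bΔy(t), adjusts the policy rate in steps based on observable changes, so mis-measurement of R-star or Y-star cannot feed directly into the rate setting.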
For an explanation and discussion of the role ‘lambda’ plays in managing monetary policy trade-offs, see Carney (2017).
Consideration of lower bound constraints and their role in rationalising aggressive policy easing decisions on risk management grounds – in the manner suggested by Evans et al. (2015) – are absent from these model exercises. This might imply that the case for an equally rapid reversal of the policy easing as disinflationary risks abate is understated. However, expectations of such a rapid reversal may themselves undermine some of the stimulative effect of the easing: expectations of a steady-handed approach to policy decisions may help stabilise the economy close to the lower bound (or even avoid its incidence altogether, as in Pill (2010)).
Within the framework discussed in earlier sections of this speech, it may be seen as consistent with the MINIMAX characterisation of robust monetary policy that I outlined.
See King (1999), p. 7.