 

The Dappled World – speech by Andy Haldane

10 November 2016

Given at GLS Shackle Biennial Memorial Lecture

Speech

All footnotes and references can be found in the speech pdf.
 
Introduction
 
I am delighted to be giving this year’s Shackle Biennial Memorial Lecture.
 
The past few years have witnessed a jarring financial crisis as great as any experienced since the world wars, a crisis whose aftershocks are still being felt today. Against that dramatic backdrop, I thought I would use this occasion to reflect on the state of economics, not least in helping us make sense of such catastrophic phenomena.
 
This topic has risen in both prominence and urgency since the financial crisis (Coyle and Haldane (2015), Battiston et al (2016)). Indeed, it would probably not be an exaggeration to say the economic and financial crisis has spawned a crisis in the economics and finance profession - and not for the first time. Much the same occurred after the Great Depression of the 1930s when economics was rethought under Keynes’ intellectual leadership (Keynes (1936)).
 
Although this crisis in economics is a threat for some, for others it is an opportunity – an opportunity to make a great leap forward, as Keynes did in the 1930s. For the students in this room, there is the chance to rethink economics with as clean a sheet of paper as you are ever likely to find. That is perhaps why the number of students applying to study economics has shot up over recent years. This is one of the silver linings of the crisis. No discipline could ask for a better endowment.
 
But seizing this opportunity requires a re-examination of the contours of economics and an exploration of some new pathways. That is what I wish to do in this lecture, drawing liberally on the work of George Shackle. In the light of the crisis, there has been renewed interest in Shackle’s work as economists have sought new insights into age-old problems. Indeed, it was this quest that first brought Shackle’s work to my own attention around five years ago.
 
In exploring new pathways, I will draw my inspiration from three features of economic systems which underpinned Shackle’s own work. First is the importance of recognising that these systems may often find themselves in a state of near-continuous disequilibrium. Indeed, even the notion of an equilibrium, stationary through time, may itself be misleading (Shackle (1972)). It was perhaps this feature of Shackle’s work that earned him his heterodox label.
 
Second is the importance of looking at economic systems through a cross-disciplinary lens. Drawing on insights from a range of disciplines, natural as well as social sciences, can provide a different perspective on individual behaviour and system-wide dynamics. In Shackle’s own work he drew liberally on a range of economic schools of thought, from Keynesian to Austrian, as well as history, philosophy and psychology (Shackle (1949, 1979)).
 
Third is the importance of radical uncertainty. Shackle saw uncertainty about the future as fundamental to human decision-making and, thus, to the functioning of social systems (Shackle (1972)). Human imagination was a crucial frame for social progress (Shackle (1979)). This meant social systems were inherently unpredictable in their behaviour. Latterly, the importance of radical uncertainty in making sense of social systems has gained new traction (Taleb (2014), King (2016)).
 
I wish to explore whether there are insights and techniques from other disciplines which better capture these features of economic systems and which thus hold promise when moving the economics profession forward. These approaches would still be considered heterodox in much of the profession even though, in many other fields, they would be considered entirely standard.
 
One of the potential failings of the economics profession, I will argue, is that it may have borrowed too little from other disciplines - a methodological mono-culture. In keeping with this spirit, the title of my lecture is itself borrowed. In 1999 Professor Nancy Cartwright, a philosopher of science, published a book with the title The Dappled World: A Study of the Boundaries of Science (Cartwright (1999)). This quote captures its essence:
 
“Science as we know it is apportioned into disciplines, apparently arbitrarily grown up; governing different sets of properties at different levels of abstraction; pockets of great precision; large parcels of qualitative maxims resisting precise formulation; erratic overlaps; here and there, once in a while, corners line up but mostly ragged edges; and always the cover of law just loosely attracted to the jumbled world of material things.”
 
Cartwright was describing the natural sciences. She describes them as a patchwork of theory and evidence, some of it precisely cut, most of it haphazardly shaped and pieced together irregularly. And there is, she argues, no shame in that. To the contrary, it may be the best science can do, given limited knowledge and limited time, in trying to make sense of an often-jumbled world.
 
Here is another way of putting Cartwright’s point. Sometimes the popular perception of the natural sciences, and physics in particular, is that it too is a methodological monoculture. The defining characteristics of this monoculture are great laws and unifying theories. For example, it is perhaps no accident that the world’s most famous equation is this one: E = mc². Not only is this uniquely famous equation drawn from theoretical physics; it defines a great law, a unifying theory.
 
Yet much of modern physics does not, in fact, deal in great laws and unifying theories. Instead it deals in great unknowns and empirical regularities, in ragged edges and qualitative maxims (Buchanan (2014)). Advances in physics these days often come courtesy of massive empirical fishing expeditions, industrial-scale searches for needles in haystacks, often using large-scale computational techniques. The Large Hadron Collider – in essence, a massive Monte Carlo machine - would be one prominent example of these techniques in practice.
 
So even physics, for some the theoretical pinnacle of the natural sciences, is these days far from being a precise science. Through its evolution, the intellectual bloodline of physics and the other natural sciences has been mixed and re-mixed. What we have today is a methodological hybrid, an intellectual mongrel. This has been not just a natural evolution, but an essential one in making sense of our dappled world.
 
From Natural to Social Sciences
 
While Cartwright’s Dappled World aims to dispel the notion that natural science is precise science, it goes further in arguing that the self-same case can be made, just as forcefully, for the social sciences. It, too, is a haphazard patchwork of theories and hunches, operating at different levels of abstraction, often loosely held together. The economic and financial world is every bit as dappled as the natural world. And, as with the natural sciences, there is real virtue in that eclectic approach.
 
Yet this view jars somewhat with the dominant methodological direction of travel in economics. That methodological lead was provided by Karl Popper in the 1930s (Popper (1934)). In The Logic of Scientific Discovery, Popper argued for a “deductive” approach to advancing knowledge. This involved, first, specifying a clear set of assumptions or axioms. From those were deduced a set of logical propositions or hypotheses. And then, and only then, were these hypotheses taken to the data to be validated or falsified.
 
This pursuit of empirically-falsifiable hypotheses had even-earlier antecedents. The use of dynamic optimisation techniques goes back at least to Leon Walras (Walras (1874)). And he, in turn, drew on the tools developed by mathematicians such as Leibniz, Lagrange and Euler. These techniques were used to construct early “general equilibrium” models of the economy, models which were internally consistent when looked at as a whole.
 
Newtonian physics was built on the same principle of internal consistency within systems, with energy within that system always preserved. Those systems were subject to disturbances which could cause them to oscillate dynamically. Ultimately, however, these systems typically had an equilibrium or steady-state to which they returned. As one of the simplest examples, Newton’s pendulum exhibits damped harmonic motion once displaced before returning to a state of rest.
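In its standard textbook form, that damped harmonic motion is the solution of a second-order system (the notation below is the generic physics convention, not anything specific to Newton's own treatment):

```latex
\ddot{x}(t) + 2\zeta\omega_0\,\dot{x}(t) + \omega_0^2\,x(t) = 0, \qquad 0 < \zeta < 1,
\quad\Longrightarrow\quad
x(t) = A\,e^{-\zeta\omega_0 t}\cos\!\left(\omega_0\sqrt{1-\zeta^2}\,t + \phi\right),
```

so oscillations of steadily shrinking amplitude carry the system back towards its state of rest at x = 0.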
 
Much of mainstream macro-economics and finance has essentially followed this intellectual lead. It typically starts with a set of assumptions or axioms – in economics often defining the preferences of consumers and the technology facing firms. From those assumptions are derived equations of motion for the behaviour of consumers and firms – which, rather revealingly, are often called the Euler conditions. Then, and only then, are these first-order conditions for behaviour taken to the data to be tested.
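The canonical example of such a first-order condition is the consumption Euler equation, written here in standard notation (consumption c_t, discount factor β, real interest rate r_{t+1}, period utility u):

```latex
u'(c_t) = \beta\,\mathbb{E}_t\!\left[(1 + r_{t+1})\,u'(c_{t+1})\right]
```

Consumption today is traded off against consumption tomorrow until, at the margin, the household is indifferent between the two.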
 
These models, so derived, predictably exhibit the same damped harmonic motion as Newton’s pendulum. Let me give a couple of examples of this approach in mainstream practice in economics and finance. The most famous equation in finance is probably this one:
 
Figure 1: The Black-Scholes Formula


Source: Black and Scholes (1973).
 
This is the Black-Scholes options pricing formula (Black and Scholes (1973)). Under a (relatively) small set of assumptions, it provides a (relatively) simple analytical description of how to price a financial option. It cannot rival Einstein for its simplicity and aesthetic beauty. But it nonetheless has many of the trademark characteristics of theoretical physics.
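For a European call option, the standard form of the formula shown in Figure 1 is (with S the underlying price, K the strike, r the risk-free rate, σ the volatility, T−t the time to expiry and N(·) the standard normal distribution function):

```latex
C(S,t) = S\,N(d_1) - K e^{-r(T-t)} N(d_2), \qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}}, \qquad
d_2 = d_1 - \sigma\sqrt{T-t}.
```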
 
That should come as no surprise because this equation was itself drawn from theoretical physics. In seeking a solution to their option-pricing problem, Fischer Black, Myron Scholes and Robert Merton drew an explicit link between their contingent-claims pricing problem and the heat transfer equation in physics (Churchill (1963)). If not quite a lift and shift from theoretical physics, the Black-Scholes formula was certainly a genetic mutation.
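The link runs through the Black-Scholes partial differential equation, which a standard change of variables reduces to the one-dimensional heat equation familiar from physics:

```latex
\frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
 + r S \frac{\partial V}{\partial S} - r V = 0
\quad\longrightarrow\quad
\frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}.
```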
 
Moving from finance to economics, the dominant approach over recent decades to modelling the macro-economy has probably been the Dynamic Stochastic General Equilibrium (DSGE) model (Smets and Wouters (2003)). In its plain-vanilla form, this comprises a set of representative, optimising households and firms. This model gives rise to an equilibrium which is unique and stationary, and dynamics around that equilibrium which are regular and oscillatory.
 
Various knobs and whistles have been added to this workhorse framework, often involving market frictions in price-setting, competition and credit provision. These add colour to the model’s dynamics but, by and large, leave intact its properties – stable, stationary, oscillatory. It is not just among academics that this workhorse framework has found favour. The majority of central banks also take the DSGE framework as their starting point, including the Bank of England (Burgess et al (2013)).
 
The DSGE approach has many of the hallmarks of Newtonian physics. As every action has an equal and opposite reaction, every shock has an equal and proportional reaction in DSGE models. The economy’s dynamics exhibit the same damped harmonic motion as Newton’s pendulum, or as a rocking horse hit with a stick. The rocking horse metaphor is apt, as it was first used by Swedish economist Knut Wicksell almost a hundred years ago to describe the business cycle motion of an economy (Wicksell (1918)).
 
Mainstream finance and macroeconomics has, then, followed firmly in the footsteps of giants, part Popperian, part Newtonian. It has been heavily indebted, intellectually, to Classical physics. That has led some to dub the dominant economic paradigm “econo-physics” (Mirowski (1989)). Less kindly, some have described economics as suffering from physics-envy.
 

Assessing the Pros and Cons

Despite recent criticism, which has come thick and fast, it is important not to overlook the benefits from having followed this path. One benefit, shared with theoretical physics, is that economic theory has well-defined foundations. There are fewer “free”, or undefined, parameters floating around the model. Nobel Laureate Robert Lucas said “beware of economists bearing free parameters”. He was right. A theory of everything is a theory of nothing.

The advantages do not stop there. On the assumption agents’ behaviour is representative – it broadly mirrors the average person’s – these models of micro-level behaviour can be simply-summed to replicate macro-economic behaviour. The individual is, in effect, a shrunken replica of the economy as a whole. These macro-economic models are, in the jargon, micro-founded – that is, constructed bottom-up from optimising, micro-economic foundations.
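In symbols, the aggregation step is deliberately trivial: with N identical households, each satisfying the Euler condition above, aggregate consumption is just

```latex
C_t = N c_t \quad\Longrightarrow\quad
u'\!\left(C_t/N\right) = \beta\,\mathbb{E}_t\!\left[(1+r_{t+1})\,u'\!\left(C_{t+1}/N\right)\right],
```

so the individual's first-order condition doubles as the economy-wide one.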

These advantages carry across into the policy sphere. If the assumptions underlying these models are valid, then the behavioural rules from which they are derived will be unaffected by changes in the prevailing policy regime. These models are then a robust test-bed for policy analysis. They are, in economists’ jargon, immune to the Lucas critique (Lucas (1976)). This feature, above all others, probably explains these models’ ubiquity in policy organisations.

Not least in the light of the crisis, however, the potential pitfalls of these approaches have also become clearer of late. Recently, these models have been subject to stinging critiques (Romer (2016)). One common complaint is that they may not do an especially good job of describing the real world, especially in situations of economic stress. Exhibit one is that they offered a spectacularly poor guide to the economy’s dynamics around the time of the global financial crisis.

To illustrate, Chart 1 plots the range of forecasts for UK GDP growth from 2008 onwards produced by 27 economic forecasters (including the Bank) in 2007, the dawn of the financial crisis. Three features are notable. First, pre-crisis forecasts were very tightly bunched in a range of one percentage point, perhaps because they were drawn from similar models. The methodological mono-culture produced, unsurprisingly, the same crop.

Second, these forecasts foresaw a continuation of the gentle undulations in the economy seen in the decade prior to the crisis - the so-called Great Moderation (Bernanke (2004)). At the time, these damped oscillations seemed to match well the damped harmonic motion of DSGE models. A good crop today foretold of an only slightly less good crop tomorrow.

Third, most striking of all, every one of these forecasts was not just wrong but spectacularly so. Few forecasters foresaw even a slight downturn in GDP in 2008 and none foresaw a recession. Yet we witnessed not just a recession but the largest since the 1930s. The one-year-ahead forecast error in 2008 was 8 percentage points. The crop failed and the result was economic famine.

While forecasting performance has improved in the period since, there has been a continued string of serially correlated errors, with the speed of the recovery consistently over-estimated (Chart 2). The average forecast error one-year-ahead has been consistently negative, averaging 0.5 percentage points per year. The average error two years ahead has been over one percentage point per year.

At root, these were failures of models, methodologies and mono-cultures. It has been argued that these models were not designed to explain such extreme events. To quote Robert Lucas once more: “The charge is that the [..] forecasting model failed to predict the events of September 2008. Yet the simulations were not presented as assurance that no crisis would occur, but as a forecast of what could be expected conditional on a crisis not occurring” (Lucas (2009)).

Chart 1: Range of GDP forecasts in 2007Q4
Source: Bank of England November 2007 Inflation Report; Bank of England Survey of Economic Forecasters.

Chart 2: Forecasts for World Growth by the IMF since 2007
Source: IMF World Economic Outlook.

For me, this is not really a defence. Economics is important because of the social costs of extreme events. Economic policy matters precisely because of these events. If our models are silent about these events, this jeopardises the very thing that makes economics interesting and economic policy important. It risks squandering the human capital endowment the financial crisis has bestowed.

Even if some of the post-crisis criticism of workhorse macro-economic models is overdone, it still raises the question of whether new modelling approaches might be explored which provide a different lens on the world or which better match real-world dynamics. If, as Cartwright suggests, there is more methodologically that links the natural and social sciences than divides them, there could be considerable scope for disciplinary cross-pollination of ideas and models.

So just how interdisciplinary is economics? The short answer appears, regrettably, to be “not very”. Table 1 looks at responses by professionals in different social sciences to the proposition “inter-disciplinary knowledge is better than knowledge obtained by a single discipline” (Fourcade, Ollion and Algan (2015)). A majority, often a large majority, of respondents in the other social sciences agree with this statement. Only in economics do a majority disagree.

Table 1: Responses to the question - Interdisciplinary knowledge is better than knowledge obtained by a single discipline?

Source: Fourcade, Ollion and Algan (2015). Notes: Table shows responses as a percentage of professors in each discipline.

Economists’ words are matched by their actions. The science periodical Nature recently published interdisciplinary indices, measuring the number of references made to outside disciplines by a subject area and the number of citations made outside the discipline to that subject area. Over the period 1950-2014, economics sat in the bottom left-hand quadrant (Chart 3). Alongside theoretical physics, economics was at the bottom of the inter-disciplinary league table (van Noordan (2015)).

Chart 3: Citations and references to outside disciplines

Source: van Noordan (2015).

Chart 4: Citations and references to outside disciplines over time

Source: van Noordan (2015).

Looking at these patterns over time paints a somewhat more optimistic picture (Chart 4). There is clear evidence of economics having become more cross-disciplinary during the course of this century. Once finance is taken out of these estimates, however, economics continues to lag. And alternative metrics, such as the degree of rigidity in hierarchies and gender diversity, paint economics in a less favourable light (Fourcade, Ollion and Algan (2015)). Despite progress, it is difficult to escape the conclusion that economics remains an insular, self-referential discipline.

The economics profession’s intellectual bloodline is purer than in most other disciplines. In some respects, it is a source of strength that the workhorse economic model is a thoroughbred. That quest for purity also contributed importantly, however, to this workhorse falling at the first meaningful fence put in its path. This suggests it is probably as good a time as any to consider mixing the intellectual bloodline, just as physics and the other natural sciences have done.

Agent-Based Models (ABMs)

Against this backdrop, what other disciplines and approaches might be productive alternative lines of enquiry? One potentially promising strand is so-called agent-based modelling. Agent-based models (ABMs) are interconnected systems of individual “agents” who follow well-defined behavioural rules of thumb.

What makes these models interesting is that these agents are heterogeneous and interactive. In other words, these models relax one of the key assumptions of the standard model – a single, representative agent. For social systems, this does not sound too implausible. Humans are social animals. Indeed, the feature which sets humans apart from other animals is their degree of social interaction (Harari (2015)).

Chart 5: The most highly cited articles on agent-based modelling across disciplines
Source: Google scholar; Bank calculations. Notes:  This was found by performing a Google Scholar search of the relevant term for “agent-based modelling” in each field and then using the citation count of the most highly cited paper which could be fairly and solely attributed to each distinct field. When searching for “Monte Carlo” only papers modelling distinct subsystems were considered and not those using the technique of random number generation more generally.

This, on the face of it, modest adaptation gives rise to some fundamental changes in model dynamics. Linear, proportional responses to shocks become the exception; complex, non-linear responses the rule. Single, stationary equilibria become the exception; multiple, evolutionary equilibria the rule. I discuss these differences in the next section.

Before doing so, I want to bring to life some of the uses of ABMs. These models have found widespread application across a broad array of disciplines in both the natural and social sciences. They have been used to address a massive array of problems, big and small: from segregating nuts to segregating races, from simulating the fate of the universe to simulating the fate of a human cell, from military planning to family planning, from flocking birds to herding (fat) cats.

Yet in economics, ABMs have been used relatively less (Battiston et al (2016)). Chart 5 shows the number of citations that the most-cited article on agent-based modelling has in different fields. Economics has been a late and slow-adopter of ABM techniques. In other words, ABMs may be another example of the lack of cross-disciplinary perspective in economics.

To be clear, ABMs are no panacea for the modelling ills of economics. In discussing them here, the implication is not that they should replace DSGE models, lock, stock and barrel. Rather, their value comes from providing a different, complementary, lens through which to make sense of our dappled economic and financial world, a lens which other disciplines have found useful when understanding their own worlds or when devising policies to improve them.

(a) Early Origins

ABMs grew out of some of the earliest applications of computing at Los Alamos National Laboratory in the United States (Metropolis (1987)). In 1945, it was this lab that developed the first-ever nuclear weapon - the fission bomb. The chain-reactions used in these weapons are based on the fissioning of large atoms by neutrons. Neutron transport involves high-dimension, discrete collisions, with each step probabilistic. This quickly gives rise to a vast probability tree of possible outcomes.

Enrico Fermi had experimented with a method to solve these sorts of high-dimensional problems in the 1930s, using a small mechanical adding machine. The technique involved generating random numbers and comparing them with probabilities derived from theory. By doing this for a large number of possible neutron paths, he painstakingly painted a picture of the transport of neutrons. Fermi had, by hand, constructed the first agent-based model.
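To give a flavour of the technique, here is a minimal sketch in the spirit of that calculation; the flight lengths and event probabilities are invented for illustration and bear no relation to real nuclear data:

```python
import random

# Illustrative sketch of Fermi-style Monte Carlo neutron transport.
# The flight lengths and event probabilities are made up for illustration.
P_ABSORB, P_FISSION = 0.3, 0.2          # remaining probability: scatter and keep travelling

def simulate_neutron(max_steps=100):
    """Follow one neutron; return the number of new neutrons it spawns."""
    position = 0.0
    for _ in range(max_steps):
        position += random.uniform(-1.0, 1.0)   # random flight between collisions
        draw = random.random()                  # compare a random number with the assumed probabilities
        if draw < P_ABSORB:
            return 0                            # absorbed: this branch of the chain ends
        if draw < P_ABSORB + P_FISSION:
            return 2                            # fission: two new neutrons released
        # otherwise the neutron scatters and keeps travelling
    return 0

# Averaging over many simulated paths builds up the picture Fermi assembled by hand.
paths = 100_000
k = sum(simulate_neutron() for _ in range(paths)) / paths
print(f"average neutrons produced per neutron: {k:.2f}")
```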

Figure 2: John von Neumann
Source: LANL (public domain). John von Neumann in the 1940s

Figure 3: The hydrogen bomb
Source: The Official CTBTO Photostream (CC-BY-2.0). An atmospheric nuclear test conducted by the U.S. at Enewetak Atoll on 1 November 1952
 

These models were about to get a machine which could do these large-scale computations for them. Assembled at Los Alamos during World War II was a dream team of the great and good in science, including John von Neumann, Richard Feynman, Niels Bohr and Robert Oppenheimer. Von Neumann himself offered use of one of the first-ever computers to perform neutron calculations. That exercise culminated in the hydrogen bomb. ABMs had found their first real-world application.

(b) Military Planning

By 1947, the use of random numbers in computation had a name which reflected its probabilistic nature - Monte Carlo, home of the casino. In a lecture in 1948, von Neumann described the idea of cellular automata - artificial systems of cells exhibiting life-like behaviour which evolved according to simple rules of behaviour (von Neumann et al (1966), Sarkar (2000)). Perhaps the best-known example of cellular automata came several years later: Conway's “Game of Life” (Conway (1970)).

In 1949, the emerging technique of Monte Carlo analysis was beginning to gain traction. A symposium organised and sponsored by the RAND Corporation – a spin-off of the US War Department – was the springboard for developing some of the first military applications of ABMs. What had once been simple board-and-dice based war games switched to computational ABMs during the 1960s, borrowing heavily from von Neumann’s cellular automata (Woodcock et al (1988)).

These models have since been used to provide insights in a range of real-world military operations, past and present. They have been used to understand the dynamics of past battles, such as the German U-boat campaigns (Champagne (2003), Hill et al (2004)). And they have been used, typically secretly, to plan military strategies for current conflicts, with organisations such as the US Naval Research Laboratory and the US Air Force investing in ABM technology (Manheimer et al (1997)).

Figure 4: WW2 U-Boat Campaigns
Source: IWM Collections (public domain). Torpedo fired on a merchant ship

Figure 5: War Games
Source: GSC Game World (GNU FDL). Image from ‘Cossacks: European Wars’

The US Air Force has an agent-based model known as Systems Effectiveness Analysis Simulation (SEAS). The SEAS package models the entire surface of the earth and its satellites. Each agent has a specific longitude and latitude to a resolution of one millimetre. In order to allow interaction, SEAS agents are equipped with sensors, communications, and weapons allowing co-operative or competitive behaviour (Brooks et al (2004)).

Most recently, the work on ABMs in a military context has taken on a new direction. As autonomous vehicles such as drones are used in war, they add a new dimension to ABM simulations. In a world of human agents, ABMs could only approximate agent behaviours. In a world of military automata, these simulations come closer to replicating real-world warfare (Sanchez and Lucas (2002)).

A more benign military application of ABMs has been in the context of war games. Military ABMs were a forerunner of many of today’s computer war games: Eve Online, World of Warcraft and Command and Conquer are all popular, sophisticated ABM-based games which mix computer and human controlled agents.

(c) Physical Sciences

As the Large Hadron Collider illustrates, modern physics is heavily computational in certain fields. This arises in part because even simple systems can quickly exhibit complex behaviours. In general, a system comprising three bodies undergoing motion via a force cannot be solved analytically – the “three-body problem”. And there are many problems in physics which involve the interaction of “particles” in ways which are complex and non-deterministic.

These features have resulted in ABMs being extremely widely used in the physical sciences, at various different levels of resolution. At the micro scale, some of the earliest applications of ABMs were to study plasmas – charged particles which constantly interact. Plasmas are the fourth state of matter after solids, liquids and gases. They are prime ABM candidates because they exhibit rich, non-linear, complex behaviour.

In 1966, Edward Teller wrote a paper which described how to build a plasma ABM with 32 interacting particles (Brush, Sahlin and Teller (1966)). Even with “only” 32 interacting agents, this was a formidable computational challenge. But the advent of parallel and high performance computing since the 1990s has changed the scale of plasma particle simulations. Simulations today regularly use as many as 10^9 computational agents (Turrell et al (2015)).

These plasma ABMs have found widespread application, micro and macro. At the micro level, they have been used to help identify and kill cancer cells with pinpoint accuracy, while leaving healthy surrounding cells unharmed (Bulanov et al (2002)). At the macro level, they have been used to simulate and reproduce the way stars produce energy, using a nuclear fusion reactor here on Earth, an approach that has the potential to provide clean energy for billions of years (Turrell (2013)).

Simulating the processes powering stars is modest by comparison with ABMs which aim to model the entire universe. In 1985, one of the first comprehensive computational models of the Universe was published, with over 32,000 particles representing known matter (Davis et al (1985)). As estimates put the number of protons and neutrons in the Universe at 10^80, this ABM was impressively parsimonious (Penrose (1999)). These models have generated important insights, including the role of “dark matter” in the evolution of galaxies.

Not all applications of ABMs are, however, large in scale and universal in application. A 1987 paper called “Why the Brazil Nuts Are on Top” used an ABM with agents of different physical sizes to explain how mixed nuts segregate over time when shaken, with Brazil nuts generally rising to the surface (Rosato et al (1987)). This research was not as nuts as it sounds. It has since found real-world applications in industries such as pharmaceuticals and manufacturing.

Figure 6: Segregation of nuts
Source: Sae Rpss (CC-BY-SA-3.0).

Figure 7: The universe
Source: NASA Headquarters - Greatest Images of NASA (NASA-HQ-GRIN) (public domain).

 

(d) Operational Research

ABMs have been used to study, and help solve, a range of logistical problems where these involve coordination across a large number of agents. One example would be crowd dynamics. ABMs have been used to simulate crowd behaviour at festivals and parades to aid logistics and improve safety (Batty et al (2003)). They have also been used to simulate crowd dynamics in emergency situations – for example, simulating people evacuating a building to determine their speed and to help improve safety procedures.

ABMs have been used to model and simulate the behaviour of shoppers – for example, at a micro level, simulating flows of people through a shopping centre. At a macro level, they have been used to simulate how shopping patterns might be affected by the introduction of new stores. For example, what is the impact of introducing an out-of-town hyper-market on city-centre stores?

ABMs have been extensively used for the study of behaviour in transport and utility networks. They have been used to simulate fluctuations in demand, and the impact of operational and economic events, on the functioning of energy grids. For example, the EMCAS (Electricity Market Complex Adaptive System) is an ABM used to simulate the US electricity grid. Its high level of granularity allows it to compute electricity prices each hour at each location in the transmission network. EMCAS has also been used to simulate the introduction of new energy sources and technologies.

In transportation systems, ABMs have been used for traffic management, taxi dispatch, traffic signal control and combined rail and road transport planning. Long before the age of Uber, ABMs were being used to maximise the efficiency of taxi pick-ups. They have been used as a means of undertaking air traffic control on a decentralised basis, including by the US Federal Aviation Authority. As well as increasing the efficiency of routing, this decentralised approach can improve aircraft safety by serving as a backstop when communication with a central air traffic controller is lost.

A more recent application of the same technology is autonomous vehicles, such as driverless cars. As with aircraft, these too need to solve a complex co-ordination problem, taking account not just of the environment but the behaviour of other agents. As with aircraft, these models can be used to demonstrate the scope for both efficiency savings (it is estimated each autonomous vehicle can replicate the performance of 11 conventional ones) and safety improvements (it is estimated an autonomous vehicle network might reduce road accidents by over 90%).

Figure 8: Air traffic and transport control
Source: United States Air Force (public domain).

Figure 9: Electricity grids
Source: Rept0n1x (CC-BY-SA-3.0).


(e) Biology

Given their cellular structures, ABMs have found a wide range of biological and medical applications. In biology, an area which was ripe for ABMs was so-called morphogenesis - the biological process that causes an organism to develop shape and pattern. In 1953, Alan Turing had considered this when explaining the origins of the “dappled” pattern found on Friesian cattle.

Biologists began thinking about constructing an ABM of an entire cell from the late 1990s (Tomita et al (1999)). In more recent years, ABMs have broken into the biomedical sciences, being applied to cancer, immunology, vascular disease and in vitro development (Thorne et al (2007)). One of the most exciting recent developments in ABMs has been the construction of a complete model of the human pathogen Mycoplasma genitalium (Karr et al (2012)). These models are likely to become more popular with the advent of powerful gene editing technologies.

Most recently, ABMs have been used to streamline drug or device design, simulate clinical trials, and predict the effects of drugs on individuals. ABMs can help bridge the different scales inherent in biological problems - from gene to cell, from cell to tissue, and from tissue to organism. Concrete applications have included the treatment of acute inflammation and the healing of wounds (An et al (2009)); the optimal rest periods necessary to enhance bone formation; and the distribution, or “hallmarking”, of cancer cells and their spread.

In the field of disease control, some recent major work in epidemiology has only been achievable through the use of ABMs. For example, using geographical data, commuting patterns, age distributions and other census-derived information, an ABM was recently used to simulate the spread of an influenza pandemic across all 57 million of Italy’s inhabitants (Degli et al. (2008)). This approach could, in principle, be tailored to other epidemics on a country-by-country basis.

In the treatment of patients, ABMs have been used as a tool for scheduling in Emergency Departments. Using data on the weekly distribution of patient arrivals and the number of physicians available, ABMs have been used to determine which combinations of resources reduce the ‘door-to-doc’ time for patients.

Figure 10: Friesian Cattle
Source: Keith Weller/USDA (public domain).

Figure 11: Bio-medical science
Source: National Institutes of Health (public domain).

(f) Ecology

ABMs have been applied to a number of ecological problems involving the interaction of multiple agents with each other and with the environment. In the 1990s, ABMs were applied to population and resource dynamics and migration patterns (Grimm (1999)). For example, there is now a large body of work on ABMs of marine organisms (Werner et al (2001)), simulating how these organisms are moved around by ocean currents and how they interact in food webs.

Model-derived fish spawning locations have been found to coincide with observed spawning locations of fish (Heath and Gallego (1998)). These models have also been used to simulate the effects of hypothetical interventions, such as the impact of introducing new predatory species to eliminate pests, different techniques for managing fish stocks, or the impact of contamination of marine eco-systems (Madenjian et al (1993)).

ABM techniques have been applied to the management of forests, by simulating the establishment, growth, and death of individual trees, taking account of tree-tree interactions as they compete for resources. These models have been used to help forest management and assess the systemic impact of environmental changes, such as widespread deforestation.

Figure 12: Migrating animals
Source: Bjørn Christian Tørrissen (CC-BY-SA-3.0).

Figure 13: Oceanographic patterns
Source: Bruno de Giusti (CC-BY-SA-2.5-IT).

(g) Economics and Finance

Although the path less followed, ABM techniques have found a number of applications in economics and finance. In 1957, Guy Orcutt proposed a new model which “consists of various sorts of interacting units which receive inputs and generate outputs” (Orcutt (1957)). Orcutt was ahead of his time: of the 500+ subsequent citations of his work, just 8 date from before 1980.

Perhaps the most famous application of ABMs in economics is Thomas Schelling's work on racial segregation (Schelling (1969, 1971)). This demonstrated how, in a simple cellular structure with agents following simple rules of thumb, a pattern of segregation might naturally emerge. The model’s predictions closely matched locational patterns in real cities and communities.
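A minimal sketch of the mechanism is below; the grid size, tolerance threshold and share of empty cells are illustrative choices rather than Schelling's original parameters:

```python
import random

# Minimal Schelling-style segregation sketch; all parameters are illustrative.
SIZE = 30            # grid is SIZE x SIZE
TOLERANCE = 0.3      # agents want at least 30% of occupied neighbours to share their type
n_each = int(SIZE * SIZE * 0.45)                      # two groups, roughly 10% of cells left empty
cells = [1] * n_each + [2] * n_each + [0] * (SIZE * SIZE - 2 * n_each)
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if too few of its occupied neighbours match its type."""
    me = grid[r][c]
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbours if n != 0]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < TOLERANCE

for _ in range(50):                                    # rounds of relocation
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] != 0 and unhappy(r, c)]
    if not movers:
        break                                          # everyone content: segregation has emerged
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    random.shuffle(movers)
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0       # move to a random empty cell
        empties.append((r, c))

unhappy_left = sum(unhappy(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] != 0)
print(f"unhappy agents remaining: {unhappy_left}")
```

Even with mild preferences – agents here are content with a neighbourhood that is 70% unlike them – the grid typically sorts itself into sharply segregated clusters, which was Schelling’s point.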

In the 1980s, Robert Axelrod applied ABMs to game theory (Axelrod (1997)). These could be used to simulate the outcome of dynamic games among interacting agents over time. For example, the optimal behavioural response of agents was often different in repeated games. These techniques found application in real-world settings, from trade wars to Cold Wars.

Through the 1990s, some Wall Street firms began using agent-based models to forecast financial variables – for example, prepayment rates for individual mortgages, with high degrees of reliability. ABM techniques became attractive for helping to explain how even highly irrational agents might still produce efficient markets, and for explaining some of the irregularities in financial markets, such as crashes and fat-tailed asset price distributions.

In the light of the crisis, interest in ABM methods has blossomed, albeit from a low base. They have been used to study the effects of fiscal and monetary policies (Dosi et al (2015)), systemic risk (Geanakoplos et al (2012)) and financial market liquidity (Bookstaber (2015)). ABMs of entire economies have also begun to be developed.

One example here is the Complexity Research Initiative for Systemic Instabilities (CRISIS), an open source collaboration between academics, firms, and policymakers (Klimek (2015)). Another is EURACE, a large micro-founded macroeconomic model with regional heterogeneity (Dawid (2012)). A third is the Complex Adaptive System model, which incorporates bounded rationality and heterogeneity to reproduce business cycles. A fourth is the MINSKY model.

Figure 14: Segregation
Source: Esther Bubley (public domain).

Figure 15: Financial markets
Source: Katrina Tuliao (CC-BY-2.0).

The Costs and Benefits of ABM

If ABMs have found relatively limited application in economics and finance, despite widespread application elsewhere, does that matter? That depends on the potential benefits, and associated costs, of ABM techniques in understanding economic phenomena.

Starting with costs, ABM technology has been transformed over the past decade, for two reasons. First, the cost and speed of running these models have been revolutionised. As computing capacity has grown in line with Moore’s Law, the number of interacting agents that can be simulated using ABMs has escalated. The largest ABMs can now deal with interactions among 100 billion agents (LLNL (2013)). This is an order of magnitude larger than the number of humans on the planet.

Second, there has been a revolution, every bit as significant, in the availability of data to calibrate these models. This Big Data revolution is hardly unique to economics and finance, affecting pretty much every aspect of academe, business and public policy (McKinsey (2011)). Nonetheless, with economics and finance a late-adopter of ABM technology, large-scale and more granular economic and financial data removes an important constraint on the development of these models.

At the same time as the costs of developing and simulating ABMs have shrunk, it seems likely the benefits of developing these models may have become larger. Inter-connections between agents have lengthened and strengthened over recent decades, locally and globally (Haldane (2015)). This connective tissue linking individuals and economies can take a variety of forms.

Flows of goods and services along global supply chains have never been larger. Flows of people across borders have probably never been greater. Flows of capital across borders have certainly never been greater. And, most striking of all, flows of information across agents and borders are occurring on an altogether different scale than at any time in the past. All of these trends increase the importance of taking seriously interactions between agents when modelling an economy’s dynamics.

These benefits can perhaps best be illustrated by drawing out some of the key behavioural differences between ABM and mainstream macro-models. These differences should not be exaggerated: in practice, ABMs lie along a spectrum with micro-founded DSGE models at one end and statistical models at the other. But nor should they be overlooked.

(a) Emergent Behaviour 

In standard macro-models, system dynamics are fully defined by the distribution of shocks to the economy and the behavioural parameters determining how they ripple through the system. There is a classic Frisch/Slutsky impulse-propagation mechanism determining the economy’s fortunes. If the distribution of shocks and the parameters of the model are known and fixed, the dynamics of this system are well-defined and predictable.
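Schematically, and in generic linear state-space notation rather than any particular model's, the Frisch/Slutsky view writes the economy as

```latex
y_t = A\,y_{t-1} + B\,\varepsilon_t, \qquad \varepsilon_t \sim \text{i.i.d.}(0,\Sigma),
\qquad \frac{\partial y_{t+k}}{\partial \varepsilon_t'} = A^{k} B,
```

so that, once A, B and Σ are pinned down, the propagation of any impulse through the system is fully determined and, in the stable case, dies away geometrically.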

Complex systems, of which ABMs are one example, do not in general have these properties. The Frisch/Slutsky decomposition is very unlikely to be stable, if it exists at all. The reason is that a complex system’s dynamics do not derive principally from disturbances arising outside the system but from interactions within the system. Dynamics are endogenously, not exogenously, driven.

These feedback effects within the system may either amplify or dampen cycles. They may also give rise to abrupt shifts or discontinuities in system behaviour if pushed beyond a critical threshold or tipping point (Wilson and Kirman (2016)). These complex patterns are often referred to as “emergent” behaviours because they “emerge” without any outside stimulus. And because these emergent patterns arise from complex interactions, they are often difficult or impossible to predict.

To bring this to life, imagine that instead of a single wooden rocking horse, the system instead comprised a pack of wild horses. Taking a stick to one of them will generate “emergent” behaviour. It may result in them all staying put, one of them fleeing or all of them fleeing. And if they do all flee, this will be in a direction, and to a destination, impossible to predict. These emergent behaviours depend crucially on behavioural interactions within the system.

In the natural sciences, examples of these emergent behaviours are legion. They include the dynamics of sandpiles which “self-organise” as each new grain of sand is added until a tipping point is reached and collapse occurs (Bak, Tang and Wiesenfeld (1988)); the flocking of migrating birds and fish, whose patterns exhibit complex, and sometimes chaotic, patterns of motion (Macy and Willer (2002)); and the dynamics of traffic jams among cars and pedestrians, whose flows are irregular and emergent (Nagel and Paczuski (1995)).

These emergent properties of complex systems carry important implications for model-building. In these systems, there is a sharp disconnect between the behaviour of individual agents and the behaviour of the system as a whole. Aggregating from the microscopic to the macroscopic is very unlikely to give sensible insights into real world behaviour, for the same reason the behaviour of a single neutron is uninformative about the threat of nuclear winter (Haldane (2015)). The simple aggregation of “micro-founded” models, rather than being a virtue, may then be a cause for concern.

The emergent dynamics of these systems are likely to exhibit significant degrees of uncertainty and ambiguity, as distinct from risk (Knight (1921), Shackle (1979)). This uncertainty is intrinsic to complex systems, and makes forecasting and prediction in these systems extremely difficult.

There is unlikely to be any simple, stable mapping from shocks through to outcomes, from causes to consequences, from stick to rocking horse. Indeed, in these systems you do not need any shocks to generate variability in the system. This, too, stands in sharp contrast to existing macro-economic orthodoxy, which tends to emphasize the identification of exogenous shocks as a key factor in understanding system dynamics.

If I had a pound for every time I had heard someone in the Bank of England say “it all depends on the shock”, I could have long since retired. Yet in complex systems, it is simply not true. There is no simple link between either the size or source of disturbance and its downstream impact on the economy or financial system. Indeed, in these systems the very notion of identification or causation becomes blurred.

(b) Heuristic Behaviour

Mainstream models in macro-economics and finance tend to have a fairly sophisticated treatment of risk. Provided the distribution of possible outcomes is reasonably well-understood, this risk can be priced and hedged in financial markets. Saving and investment behaviour can then be analysed under the assumption agents optimise their risk-adjusted decision-making (Haldane (2012)).

A world of radical uncertainty, the like of which arises in a complex system, changes that perspective fundamentally. Uncertainty means it may sometimes be impossible to compute future outcomes. In the language of computer science, behavioural decisions are no longer “Turing computable” (Beinhocker (2006), Velupillai (2000)). The relevant Euler conditions, familiar from mainstream macro-models, may not even exist.

Faced with that uncertainty, it is often rational for agents to rely on simpler decision rules (Gigerenzer (2015)). These are often called “rules of thumb” or heuristics. Use of such rules is sometimes thought to be arbitrary, sub-optimal or irrational. Yet in a world of uncertainty, that is far from clear. Heuristics may be the most robust means of making decisions in a world of uncertainty.

Rather than solving a complex inter-temporal optimisation, many consumers appear to follow simple rules of thumb when deciding their spending (Allen and Carroll (2001)). Rather than solving a complex mean-variance optimisation, many investors appear to invest passively or to equally-weight assets in their portfolios (Gigerenzer (2014)). And rather than solve a complex inter-temporal trade-off, monetary policy in practice seems to mimic simple rules of thumb (Taylor (2016)).
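The best known of those monetary policy rules of thumb is the Taylor rule which, in its original calibration, sets the policy rate i_t mechanically from inflation π_t and the output gap ỹ_t:

```latex
i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,\tilde{y}_t,
```

with r* the equilibrium real rate and π* the inflation target, both set to 2% in Taylor's original formulation.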

Some would interpret these simple decision rules as irrational, in the sense of being inconsistent with the Euler condition from standard macro-models. But even the concept of rationality needs careful reconsideration in an environment of radical uncertainty. Rationality can only be defined in relation to the environment in which decisions are made – what some have called ecological rationality (Gigerenzer (2014)). Heuristics can be the ecologically-rational response to radical uncertainty.

In ABMs, the behaviour of agents is characterised, not by Euler conditions, but by behavioural rules of thumb. These systems also exhibit radical uncertainty. That means there is a degree of model-consistency in ABMs – heuristics and uncertainty are mutually consistent. In that sense, the behavioural rules embedded in ABMs are neither as irrational, nor as prone to the Lucas critique, as some critics might imply.

(c) Non-Normal Behaviour

In many standard models, the equilibrium of the system is both singular and stationary. There is a natural and unique state of rest towards which the model converges following a disturbance. Wicksell’s rocking horse is not a perpetual motion machine and nor does it turn somersaults. While many models of multiple equilibria exist in economics and finance, they tend to occupy the suburbs rather than the city-centre of the profession.

In ABMs, the equilibria which emerge are often non-stationary or multiple, sometimes both. The equilibrium may often be an evolutionary one, the type of which often arises in ecological and biological models. The dynamics around this equilibrium are also often highly non-linear, and sometimes discontinuous, with a degree of non-linearity which is state-dependent (Taleb (2014)).

The combined effect of non-stationary, multiple equilibria and highly non-linear dynamics makes for non-standard, and often highly non-normal, distributions for the variables in these systems. For example, they are more likely to exhibit excess sensitivity in their fluctuations relative to fundamentals. And they may also be subject to large dislocations or discontinuities. In consequence, they are liable to have much fatter tails than the Gaussian distributions that often emerge from linearised, DSGE models.

(d) Matching the Dappled World

These features of ABMs – emergent behaviour, multiple equilibria, non-normalities - can be illustrated using the original cellular automata example, Conway’s Game of Life. Take as the starting point the simple cellular structure shown in Figure 16. Now augment that with some very simple behavioural rules of thumb describing the evolution of the cells. Then run this model forward to see what behaviour emerges among the cells.

As the simulation shows, even a simple cellular structure following simple decision rules gives rise to complex system dynamics. In this example, the properties of this system are clearly “emergent” – you would be hard-pressed to have predicted these patterns ex-ante. They give rise to a multiplicity of non-stationary equilibria, which evolve through time. And the resulting distribution of outcomes in this system is highly non-normal, non-linear and fat-tailed.
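For concreteness, the rules being iterated here are very short indeed. A minimal sketch is below; the grid size and random starting seed are arbitrary, and Figure 16's glider gun is just one well-known alternative starting configuration:

```python
import random

# Minimal Conway's Game of Life sketch; grid size and initial seed are arbitrary.
SIZE = 20
live = {(r, c) for r in range(SIZE) for c in range(SIZE) if random.random() < 0.3}

def step(live):
    """One generation: a cell survives with 2-3 live neighbours, is born with exactly 3."""
    counts = {}
    for r, c in live:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    cell = ((r + dr) % SIZE, (c + dc) % SIZE)   # wrap round the edges
                    counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

for generation in range(100):
    live = step(live)
print(f"live cells after 100 generations: {len(live)}")
```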

Figure 16: Conway's game of life
Source:  Kieff (CC-BY-SA-3.0-migrated-with-disclaimers).  Bill Gosper’s “Glider Gun” animation created by Johan Bontes.
 
Of course, this system is hypothetical. So it is interesting to ask whether the behaviour exhibited in real-world economic and financial systems is broadly consistent with the patterns from complex ABMs. One simple, reduced-form way to assess that is to look at the statistical distribution of various economic and financial time-series for evidence of discontinuity, non-normality and fat-tails. These are properties which, we know, exist in other natural and social systems (Barabasi (2005), Turrell (2013)).
 

The short answer is yes – the properties of economic and financial systems are little different from those of other social, and many natural, systems (Haldane and Nelson (2012)). To provide a few illustrations, Chart 6 looks at the distribution of a set of economic and financial variables over a time-series dating back at least 150 years – equity prices, bond prices, GDP etc. These historical distributions can be compared with a normal curve calibrated to the same data.

In each case, there is strong evidence of non-normality in the empirical distribution. Specifically, the tails of the historical distribution are often significantly higher than normality would imply. For real variables like GDP, around 18% of the data across the UK, US, Germany and Japan fall outside the ‘bell’ curve described by a best-fit normal distribution. For financial variables like equity prices, around 10% of that data fall outside the normal distribution. In both cases, there is evidence of out-sized dislocations in variables, which occur with far-higher frequency than normality would imply. This evidence is consistent with, if not proof of, complex system dynamics.
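A rough sketch of that kind of calculation is below, assuming the series of interest is available as an array; the three-standard-deviation cut-off is an illustrative definition of an "out-sized" move, not necessarily the one used for Chart 6:

```python
import numpy as np
from scipy import stats

def tail_comparison(series):
    """Share of observations beyond 3 sigma, against the Gaussian benchmark."""
    x = np.asarray(series, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)            # best-fit normal: match mean and variance
    empirical_share = np.mean(np.abs(x - mu) > 3 * sigma)
    normal_share = 2 * stats.norm.sf(3)            # roughly 0.27% under normality
    return empirical_share, normal_share, stats.kurtosis(x)   # excess kurtosis > 0 means fat tails

# Example with simulated fat-tailed data (Student-t), standing in for a historical series.
sample = stats.t.rvs(df=3, size=10_000, random_state=0)
emp, benchmark, kurt = tail_comparison(sample)
print(f"beyond 3 sigma: {emp:.2%} vs {benchmark:.2%} under normality; excess kurtosis {kurt:.1f}")
```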

Chart 6: Long-run distributions of economic and financial variables

UK house prices: 1846-2015, annual
Oil prices: 1862-2015, annual
M4 credit growth: 1881-2015, annual
CPI inflation: 1662-2015, annual
World GDP growth (across the UK, US, Germany, Japan): 1871-2015, annual
UK GDP growth: 1701-2015, annual
UK wage growth: 1751-2015, annual
Equity prices: 1709-2016, monthly
Corporate bond spreads: 1854-2016, monthly
Dollar-Sterling exchange rate: 1791-2016, monthly

Source: Hills, Thomas and Dimsdale (2016); Bank calculations.

 

Real-World Applications of Agent-Based Models

Given that ABMs potentially better match the moments of real-world data, at least in some situations and in some markets, and given their seeming success in other disciplines, the Bank of England has recently made an investment in them as part of its One Bank Research Agenda (Bank of England (2015)). Let me briefly describe two pieces of Bank research which have drawn on ABMs in an attempt to better understand two markets and how policy might reshape dynamics in these markets.

(a) The UK Housing Market

The housing market has been one of the primary sources of financial stress in a great many countries (Jorda, Schulerick and Taylor (2014)). Not coincidentally, this market has also been characterised by pronounced cyclical swings. Chart 7 runs a filter through UK house price inflation in the period since 1896. It exhibits clear cyclicality, with peak-to-trough variation often of around 20 percentage points. Mortgage lending exhibits a similar cyclicality.

Chart 7: Long-run UK house price growth 1846 to 2015
Source: Hills, Thomas and Dimsdale (2016); Bank calculations. Notes: The chart shows the Hodrick-Prescott trend in annual house price growth data (where lambda = 6.25). Data during WWI and WWII are interpolated.
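A sketch of the filtering step described in the notes, assuming the annual house price growth series has been loaded into a pandas Series (the file and column names here are placeholders rather than the actual dataset):

```python
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Placeholder file and column names; the underlying series is annual UK house
# price growth from the Hills, Thomas and Dimsdale (2016) long-run dataset.
growth = pd.read_csv("uk_house_price_growth_annual.csv", index_col="year")["growth"]

# Hodrick-Prescott filter with lambda = 6.25, the conventional setting for annual
# data and the value quoted in the notes to Chart 7.
cycle, trend = hpfilter(growth, lamb=6.25)

print(trend.tail())                 # the smooth trend plotted in Chart 7
print(cycle.abs().describe())       # size of the cyclical swings around that trend
```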

House prices, like other asset prices, also exhibit out-sized booms and busts. Chart 6 plots the distribution of UK house price growth since 1846. It has fat-tails, with the probability mass of big rises or falls larger than implied by a normal distribution. For example, the probability of a 10% movement in house prices in any given year is twice as large as normality would imply.

Capturing these cyclical dynamics, and fat-tailed properties, of the housing market is not straightforward using aggregate models. These models typically rely, as inputs, on a small number of macro-economic variables, such as incomes and interest rates. They have a mixed track record in explaining and predicting housing market behaviour.

One reason for this poor performance may be that the housing market comprises not one but many sub-markets - a rental market, sales market, a mortgage market etc. Moreover, there are multiple players operating in these markets - renters, landlords, owner-occupiers, mortgage lenders and regulators – each with distinctive characteristics, such as age, income, gearing and location.

It is the interaction between these multiple agents in multiple markets which shapes the dynamics of the housing market. Aggregate models suppress these within-system interactions. The housing market model developed at the Bank aims to unwrap and model these within-system interactions and use them to help explain cyclical behaviour (Baptista, Farmer, Hinterschweiger, Low, Tang and Uluc (2016)).

Specifically, the model comprises households of three types:

  • Renters who decide whether to continue to rent or attempt to buy a house when their rental contract ends and, if so, how much to bid;
  • Owner-occupiers who decide whether to sell their house and buy a new one and, if so, how much to bid/ask for the property; and
  • Buy-to-let investors who decide whether to sell their rental property and/or buy a new one and, if so, how much to bid/ask for the property. They also decide whether to rent out a property and, if so, how much rent to charge.

The behavioural rules of thumb that households follow when making these decisions are based on factors such as their expected rental payments, house price appreciation and mortgage cost. These households differ not only by type, but also by characteristics such as age and income.
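As a rough illustration of this structure, the sketch below sets out a toy household agent covering the three types, with simple invented rules of thumb; none of the rules or parameter values are taken from Baptista et al (2016).

```python
import random
from dataclasses import dataclass

@dataclass
class Household:
    kind: str          # "renter", "owner_occupier" or "btl_investor"
    age: int
    income: float

    def decide(self, expected_rent, expected_hpi, mortgage_rate):
        """Return a stylised action based on rents, expected house price
        inflation and mortgage costs. Purely illustrative rules of thumb."""
        if self.kind == "renter":
            # Try to buy if owning looks cheaper than renting.
            buy_cost = mortgage_rate - expected_hpi
            if buy_cost < expected_rent:
                return ("bid", 4.5 * self.income)   # bid capped by an income multiple
            return ("keep_renting", None)
        if self.kind == "owner_occupier":
            # Occasionally move; ask price anchored on expected appreciation.
            if random.random() < 0.05:
                return ("sell_and_bid", 4.5 * self.income * (1 + expected_hpi))
            return ("stay", None)
        # Buy-to-let investor: compare rental yield plus appreciation with mortgage cost.
        if expected_rent + expected_hpi > mortgage_rate:
            return ("buy_to_let", expected_rent)
        return ("sell_rental", None)

# A small, heterogeneous population of agents.
random.seed(0)
households = [Household("renter", 28, 30_000),
              Household("owner_occupier", 45, 55_000),
              Household("btl_investor", 60, 80_000)]
for h in households:
    print(h.kind, h.decide(expected_rent=0.04, expected_hpi=0.03, mortgage_rate=0.045))
```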

An important feature of the model is that it includes an explicit banking sector, itself a feature often missing from off-the-shelf DSGE models. The banking sector provides mortgage credit to households and sets the terms and conditions available to borrowers in the mortgage market, based on their characteristics.

The banking sector’s lending decisions are, in turn, subject to regulation by a central bank or regulator. They set loan-to-income (LTI), loan-to-value (LTV) and interest cover ratio policies, with the objective of safeguarding the stability of the financial system. These so-called macro-prudential policy measures are being used increasingly by policy authorities internationally (IMF-FSB-BIS (2016)).
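A minimal sketch of how such lending constraints might be screened in code is given below; the limit values are placeholders for illustration, not the FPC’s actual settings, and the interest cover check is a stylised stand-in.

```python
def approve_mortgage(loan, income, house_value, interest_rate,
                     max_lti=4.5, max_ltv=0.9, min_icr=1.25):
    """Apply stylised loan-to-income, loan-to-value and interest cover checks.
    All limit values here are illustrative placeholders only."""
    lti = loan / income
    ltv = loan / house_value
    # Stylised interest cover ratio: income relative to annual interest payments.
    icr = income / (loan * interest_rate)
    return lti <= max_lti and ltv <= max_ltv and icr >= min_icr

# Example applications: the first passes all three checks, the second breaches the LTI limit.
print(approve_mortgage(loan=200_000, income=50_000, house_value=250_000, interest_rate=0.03))
print(approve_mortgage(loan=300_000, income=50_000, house_value=320_000, interest_rate=0.03))
```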

The various agents in the model, and their inter-linkages, are shown schematically in Figure 17.

Figure 17: Agents and interactions in the housing market model
Source: Baptista et al (2016).

This multi-agent model can be calibrated using micro datasets. This helps ensure agents in the model have characteristics, and exhibit behaviours, which match those of the population at large. For example, the distributions of loan-to-income and loan-to-value ratios on mortgages are calibrated to match the UK population using data on over a million UK mortgages. And the impact on the sale price of a house of it remaining unsold is calibrated to match historical housing transactions data.

One of the key benefits of the ABM approach is in providing a framework for drawing together and using, in a consistent way, data from a range of sources to calibrate a model. For example, a variety of data sources were used to calibrate this model, including:

  • Housing market data: FCA Product Sales Data, Council of Mortgage Lenders, Land Registry and WhenFresh/Zoopla.
  • Household surveys: English Housing Survey, Living Cost and Food Survey, NMG Household Survey, Wealth and Asset Survey, Survey of Residential Landlords (ARLA) and Private Landlord Survey.

Micro-economic data such as these are essential for understanding the impact of regulatory policies, such as macro-prudential policies which affect the housing market. For example, the Bank has been making use of the FCA’s Product Sales Database to get a more granular picture of the mortgage position of households. This is a very detailed database, covering over 13 million financial transactions by UK households since 2005. By combining these data with Land Registry data, it is also possible to build up a regional picture of pockets of indebtedness.
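As a rough illustration of the kind of data join involved, the sketch below merges toy loan-level mortgage records with toy property location records and computes the regional share of lending at or above 4.5 times income. All column names and figures are hypothetical stand-ins for the FCA and Land Registry data.

```python
import pandas as pd

# Toy stand-ins for loan-level mortgage data (FCA Product Sales Data style)
# and property location data (Land Registry style); column names are hypothetical.
mortgages = pd.DataFrame({
    "property_id": [1, 2, 3, 4],
    "loan": [180_000, 350_000, 120_000, 300_000],
    "income": [45_000, 70_000, 40_000, 60_000],
})
properties = pd.DataFrame({
    "property_id": [1, 2, 3, 4],
    "region": ["London", "London", "North West", "South East"],
})

merged = mortgages.merge(properties, on="property_id")
merged["high_lti"] = merged["loan"] / merged["income"] >= 4.5

# Regional share of mortgages at or above 4.5 times income.
regional_share = merged.groupby("region")["high_lti"].mean()
print(regional_share)
```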

Chart 8 documents the evolution of high (more than 4.5 times income) leverage mortgages since 2008, on a regional basis. Warmer colours suggest a higher fraction of loans at or above that multiple. What you will see is a gradual heating-up of the mortgage market over the past few years, with a clear epicentre of London and the South-East in the run up to the macro-prudential intervention made by the Bank of England’s Financial Policy Committee (FPC) in June 2014.

Chart 8: Proportion of mortgages with a loan-to-income ratio greater than 4.5
Source: FCA Product Sales Database; Land Registry; Bank calculations.

One of the key features of an agent-based model is that it is able to generate complex housing market dynamics, without the need for exogenous shocks. In other words, within-system interactions are sufficient to generate booms and busts in the housing market. Cycles in house prices and in mortgage lending are, in that sense, an “emergent” property of the model.

Chart 9 shows a simulation run of the model, looking at the dynamic behaviour of listed prices, house prices when sold and the number of years a property is on the market. The model exhibits large cyclical swings, which arise endogenously as a result of feedback loops in the model. Some of these feedback loops are dampening (“negative feedback”), others amplifying (“positive feedback”).

For example, when mortgage rates fall, this boosts the affordability of, and demand for, housing, putting upward pressure on house prices. This generates expectations of higher future house price inflation and a further increase in housing demand - an amplifying loop.

Ultimately, however, affordability constraints bite, dampening house price expectations and demand – a dampening loop. We can use the simulated data from Chart 9 to construct distributions of house price inflation over time (Chart 10). This simulated distribution exhibits fat tails, though not as heavy as those of the historical distribution. Nonetheless, the model goes some way towards matching the moments of the real-world housing market.
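A toy simulation of these two loops is sketched below: extrapolative price expectations amplify demand, while an affordability constraint eventually bites and dampens it. The dynamics and parameter values are purely illustrative and are not drawn from the Bank’s model.

```python
import numpy as np

def simulate_prices(periods=200, income=100.0, seed=0):
    """Toy price dynamics with an extrapolative-expectations (amplifying) loop
    and an affordability (dampening) loop. Illustrative only."""
    rng = np.random.default_rng(seed)
    prices = [100.0]
    expected_hpi = 0.0
    for _ in range(periods):
        # Amplifying loop: expected appreciation raises demand and prices.
        demand = 1.0 + 0.5 * expected_hpi + 0.01 * rng.standard_normal()
        # Dampening loop: demand falls as prices outstrip incomes.
        affordability = income / prices[-1]
        demand *= min(1.0, affordability)
        new_price = prices[-1] * demand
        # Expectations adapt slowly to realised house price growth.
        expected_hpi = 0.8 * expected_hpi + 0.2 * (new_price / prices[-1] - 1.0)
        prices.append(new_price)
    return np.array(prices)

prices = simulate_prices()
print(prices[:5].round(1), prices[-1].round(1))
```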

Chart 9: Model simulations of the housing market

Source: Baptista et al (2016). Notes: Blue is the list price index, red the house price index and green the number of years a house is on the market.

Chart 10: The distribution of house prices
Source: Baptista et al (2016); Hills, Thomas and Dimsdale (2016); Bank calculations. Notes: The blue diamonds show the distribution of simulated house price growth for over 160 years from the model. The red diamonds show the distribution of real house price growth between 1847 and 2015.

This same approach can also be used to examine the impact of various macro-prudential policy measures, whether hard limits (such as an LTV limit of 80% for all mortgage contracts) or soft limits (such as an LTI cap for some fraction of mortgages). These policies could also be state-contingent (such as an LTV limit if credit growth rises above a certain threshold).

As an example, we can simulate the effects of introducing a loan-to-income (LTI) limit of 3.5, where 15% of mortgages are not bound by this limit. This simulation is broadly similar, though not directly comparable, to the macro-prudential intervention made by the Bank of England’s Financial Policy Committee (FPC) in June 2014.
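One simple way such a “soft” limit might be implemented inside a simulation is sketched below: a randomly chosen 15% of borrowers are exempt from the cap, with the remainder held at or below 3.5 times income. The allocation rule is deliberately crude and illustrative.

```python
import numpy as np

def apply_soft_lti_cap(loans, incomes, limit=3.5, exempt_share=0.15, seed=0):
    """Cap loans at `limit` times income, except for a randomly chosen
    `exempt_share` of borrowers who may exceed it. Illustrative rule only."""
    rng = np.random.default_rng(seed)
    loans = np.asarray(loans, dtype=float)
    incomes = np.asarray(incomes, dtype=float)
    exempt = rng.random(loans.size) < exempt_share
    capped = np.minimum(loans, limit * incomes)
    return np.where(exempt, loans, capped)

# Example: loans above 3.5x income are pulled down to the cap unless exempt.
loans = [150_000, 260_000, 300_000, 90_000]
incomes = [50_000, 60_000, 70_000, 30_000]
print(apply_soft_lti_cap(loans, incomes))
```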

Chart 11 looks at the simulated impact of this policy on the distribution of loan-to-income ratios across households, relative to a policy of no intervention. The incidence of high LTI mortgages (above 3.5) decreases, with some clustering just below the limit. With some borrowers nudged out of riskier loans, a greater degree of insurance is provided to households and the banking system. Another advantage of this class of models is that they allow you to simulate the longer-run impacts once the second-round effects and feedback loops have taken hold. Chart 12 shows that the distribution of house price growth narrows under the scenario relative to the baseline.

Chart 11: Simulated effect of a loan-to-income policy
Source: Baptista et al (2016).

Chart 12: Simulated effect on house price growth
Source: Baptista et al (2016).
(b) An Agent-Based Model of Financial Markets

A second ABM project looks at behaviour in financial markets (Braun-Munzinger, Liu, and Turrell (2016)). As in the housing market, this involves complex interactions between multiple agents. And, as in the housing market, these interactions are prone to generating abrupt dislocations in prices, fattening the tails of the asset price distribution. Chart 13 looks at the empirical distribution of daily corporate bond price movements over various intervals, pre- and post-crisis. It is clearly fat-tailed.

Chart 13: Distribution of corporate bond price returns
Source: Braun-Munzinger et al (2016). Notes: The distribution function of daily log-price returns over three periods, shown against data generated by a single model run.

The dynamics of financial markets are also an area of active policy interest, not least in the light of the financial crisis. During the crisis, there were sharp swings in asset prices and liquidity premia in many financial markets. Since the crisis, there have been concerns about market-makers’ willingness to make markets, potentially impairing liquidity. These are policy questions which are not easily amenable to existing asset pricing models.

Braun-Munzinger et al (2016) build a model which seeks to capture some of the interactions between market players that might give rise to these asset price patterns. In particular, the model comprises three classes of agent: a market maker, making two-way prices in the asset; a set of funds trading in the asset, but pursuing distinct trading strategies; and end-investors in these funds.

Funds are, in turn, assumed to be one of three types: value traders – who assume yields converge over time to some equilibrium value, buying/selling when the asset is under/over-valued; momentum traders – who follow short-term trends on the assumption they persist; and passive funds – who only trade in response to in- and out-flows from investors. These interactions are shown schematically in Figure 18.

Figure 18: Overview of the corporate bond trading model

Source: Braun-Munzinger, Liu, and Turrell (2016).
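A stylised sketch of these three trading rules is given below, with invented parameterisations rather than those used by Braun-Munzinger et al (2016); positive numbers denote buying and negative numbers selling.

```python
def value_trader(yield_now, fair_yield, aggressiveness=0.5):
    """Buy when the bond looks cheap (yield above fair value), sell when dear."""
    return aggressiveness * (yield_now - fair_yield)

def momentum_trader(yield_history, horizon=5, aggressiveness=0.5):
    """Sell when yields have been rising (prices falling), buy when falling."""
    if len(yield_history) <= horizon:
        return 0.0
    trend = yield_history[-1] - yield_history[-1 - horizon]
    return -aggressiveness * trend

def passive_fund(investor_flow):
    """Trade only to accommodate investor in- and out-flows."""
    return investor_flow

# Example: the current yield sits above fair value after a recent rise.
history = [0.030, 0.031, 0.033, 0.036, 0.040, 0.043]
print(value_trader(history[-1], fair_yield=0.035))   # positive: buys the cheap bond
print(momentum_trader(history))                      # negative: sells into the rising-yield trend
print(passive_fund(investor_flow=-0.02))             # negative: sells to meet redemptions
```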

Table 3: Fund characteristics
Source: Morningstar and Bank of America Merrill Lynch. Notes: The model is calibrated against empirical data including the flow-performance relationship and the distribution of sizes of open-ended corporate bond funds.

The model is based on, and calibrated against, the corporate bond market using micro-level data on 1,000 mutual funds. These data can be used to calibrate the size distribution of funds, their trading behaviours and the links between their performance and flows in and out of the fund. Some of the fund characteristics are shown in Table 3.

The interactions among these players give rise to interesting dynamics, some of which are shown schematically in Figure 19. For example, imagine a shock to the expected loss on a bond. This reduces demand for that bond by funds holding it and causes a re-pricing by the market-maker and momentum selling by funds, generating a further fall in the bond’s price and in the wealth of the funds holding it.

Figure 19: Transmission mechanism of expected loss rate shock

Figure 19: Transmission mechanism of expected loss rate shock

Source: Braun-Munzinger et al (2016).  Notes: A schematic showing the feedback loops following a shock to the value of the expected loss rate. The colours in the feedback loop indicate the different market players; funds, the market maker, and the investor pool.

This fall in fund performance then gives rise to a second feedback loop, inducing investor withdrawals from those funds which have under-performed, further reducing demand for the bond and amplifying the fall in its price.  It is only after some time that the influence of value investors stabilises the market.   

Each individual run of the model is like hitting a wild horse once and has an unpredictable outcome. But we can get an idea of the general behaviour of the funds by running the same scenario repeatedly - if you like, hitting the wild horses hundreds of times and looking at their most likely response. The most likely behaviour of funds in this model-market is oscillatory, with shocks amplified in the short run and only damped after a period of several hundred days (Chart 14).

But by rolling the dice over and over again, we can also look at the distribution of possible outcomes, as the average may hide extreme behaviour in the tails.  For example, if the fraction of passive investors increases, this dampens average changes in bond yields (Chart 15).  But it also increases the chances of much bigger changes in yields – the tails of the distribution fatten.
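The repeated-run exercise can be sketched as follows: simulate the same scenario many times, then summarise the median path and the tails of the resulting distribution. The “model” here is a trivial noisy, mean-reverting yield process standing in for the full ABM.

```python
import numpy as np

def one_run(shock=0.0036, periods=100, rng=None):
    """Toy stand-in for a single model run: a noisy, mean-reverting yield path
    hit by a loss-rate shock at time zero. Illustrative only."""
    rng = rng or np.random.default_rng()
    yields = np.empty(periods)
    y = 0.03 + shock  # the shock pushes yields up on impact
    for t in range(periods):
        y += 0.1 * (0.03 + shock / 2 - y) + 0.0005 * rng.standard_normal()
        yields[t] = y
    return yields

# Repeat the same scenario many times and summarise the distribution of outcomes.
rng = np.random.default_rng(42)
runs = np.array([one_run(rng=rng) for _ in range(250)])
median_path = np.median(runs, axis=0)
p5, p95 = np.percentile(runs, [5, 95], axis=0)
print(median_path[-1], p5[-1], p95[-1])
```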

Chart 14: Impact of a shock to the expected loss rate
Source: Braun-Munzinger et al (2016). Notes: What happens after a shock to the expected loss rate; a sudden change in firms' expected loss rate on bonds causes both short-term fluctuations in yield and a new, higher long-term yield. Results presented are the median of 100 individual simulation runs; individual model runs exhibit a range of outcomes.

Chart 15: Distribution of outcomes after a shock to the expected loss rate
Source: Braun-Munzinger et al (2016). Notes: The outcomes for median yield over 100 trading days after a 0.36% loss rate shock. Percentiles indicate the distribution of results taken from 250 simulation runs.

How can this model help in understanding the dynamics of real-world financial markets and the appropriate setting of policy in these markets? Chart 13 compares the actual distribution of corporate bond price changes with the distribution which emerges from ABM simulations. There is a reasonable correspondence between the two, with fatter than normal tails.

The model can be used counter-factually – for example, to assess the impact of a rise in the number of passive or momentum traders relative to value investors. This makes for larger and longer-lived oscillations. So too does a reduction in market-making capacity – for example, lower market-maker inventories – as this amplifies the impact on prices of any shock to fund demand.

One topical policy issue is whether constraints might be imposed on some funds to forestall investor redemptions in the face of falling prices and performance. For example, US money market mutual funds experienced such redemption runs during the course of the financial crisis. And, more recently, UK property investment funds also exhibited run-like redemptions following the EU referendum result, which depressed asset prices.

The model can be used to assess the impact of different approaches to constraining redemption. Chart 16 shows the impact on yields of varying the redemption window, for three different rates of investor flow. Extending the redemption window from one day to one month would, according to the model, have reduced the amplitude of the resulting asset price cycle by a factor of around five.
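The mechanism can be sketched as follows: instead of meeting all redemption requests on the day they arrive, outflows are spread evenly over a window of several days, reducing the peak selling a fund must do on any one day. The rule and numbers below are illustrative only.

```python
import numpy as np

def spread_redemptions(requests, window_days=1):
    """Spread each day's redemption requests evenly over `window_days`,
    returning the daily outflows the fund actually has to meet."""
    requests = np.asarray(requests, dtype=float)
    outflows = np.zeros(requests.size + window_days - 1)
    for day, amount in enumerate(requests):
        outflows[day:day + window_days] += amount / window_days
    return outflows

# Example: a burst of redemptions met same-day versus over a 20-day window.
requests = np.array([10.0, 8.0, 6.0, 0.0, 0.0])
print(spread_redemptions(requests, window_days=1).max())   # peak daily outflow: 10.0
print(spread_redemptions(requests, window_days=20).max())  # much smaller peak
```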

Chart 16: Impact of action to reduce speed with which investor redemption requests are fulfilled

Source: Braun-Munzinger et al (2016). Notes: When the parameter which controls the flow of money out of funds, s, is large, redemption management can significantly reduce the magnitude of the maximum change in yield following a large loss rate shock. The redemption management policy splits investor redemptions over a variable number of days, with diminishing returns. The loss rate shock is 0.35%, which is of a comparable order of magnitude to the annual loss rate change between 2007 and 2008.

Conclusion

In one of his most famous metaphors, Shackle described the economy as a kaleidoscope, a collision of colours subject to on-going, rapid and radical change. Many of our existing techniques for modelling and measuring the economy invoke a rather different metaphor, with the economy a colourless, inanimate rocking horse.

Both approaches have their place in making sense of the dappled economic and financial world. But, to date, they have not been given equal billing. The global financial crisis is an opportunity to rebalance those scales, to take uncertainty and disequilibrium seriously, to make the heterodox orthodox. A profession that has, perhaps for too long, been shackled needs now to be Shackled.
