The events of the past couple of years have hardly helped the reputation of the world’s economists, and have led to a great deal of soul-searching within the profession. Following a conference at the LSE last year, the Queen asked a very simple question – “what went wrong?” A group of eminent economists answered with a letter a few weeks ago, and a few more added their own take on things yesterday.
The first letter mentions the underlying imbalances in areas such as the housing and credit markets, and the way in which relatively benign macroeconomic indicators led to fairly loose monetary policy – central banks' models didn't predict any great trouble ahead, so there was no concrete evidence upon which to take pre-emptive action. Their central thesis, however, is that the crisis was not foreseen because economists failed to think about the economic system as a whole:
Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction. This, combined with the psychology of herding and the mantra of financial and policy gurus, led to a dangerous recipe. Individual risks may rightly have been viewed as small, but the risk to the system as a whole was vast.
This seems to be a reasonable argument. If one looks at the baseline DSGE framework used by many central banks, there are all sorts of implicit assumptions that various asset and financial markets work perfectly. Blanchard’s The State of Macro, written just as the financial crisis went critical, acknowledges these sorts of deficiencies in macro models:
The current financial crisis makes it clear that the arbitrage approach to the determination of the term structure of interest rates and asset prices implicit in the basic NK [New Keynesian] model falls short of the mark: Financial institutions matter, and shocks to their capital or liquidity position appear to have potentially large macroeconomic effects.
The second royal letter takes a much more radical tone, arguing that the emphasis on mathematical modelling by economists has been to the detriment of a broader view incorporating insights from psychology, economic history, and how individuals and organisations actually act (rather than what rational agents would do). They write:
… [the previous letter] overlooks the part that many leading economists have had in turning economics into a discipline that is detached from the real world, and in promoting unrealistic assumptions that have helped to sustain an uncritical view of how markets operate.
One of the authors of the letter was interviewed on the Today programme yesterday, and it seems that their criticism isn’t perhaps of the use of maths per se, but of the countless simplifying assumptions that must be made in these models (e.g. about individual behaviour, or ignoring out-of-equilibrium dynamics) which are then seemingly forgotten when it comes to applying things to real life. This I agree with – whenever you read an economic model it’s vital to continually test the assumptions against real life and, if they don’t hold, at least have some idea what that means for the model.
That’s the way the Bank of England works: its BEQM model has a DSGE “core” which is, by necessity, a caricature of reality. However, the Bank recognises the deficiencies of this core, and so the predictions it gives are supplemented with a number of more ad-hoc corrections that take into account areas the core model ignores. Sure, it would be better for the Bank to have a model that takes everything into account, but that’s simply not realistic. At least by having a rigorous core to the model (which, unlike other forms of forecasting such as using VARs, is based on theory and can give structure to why the model predicts something) the Bank can back up its forecasts with hard data. If the model doesn’t fit the real world, then it’s possible to look at what failed and bring the model more into line with reality – as with any scientific model, BEQM and its cousins provide predictions which can be falsified.
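To make that “rigorous core plus ad-hoc corrections” structure concrete, here is a deliberately toy sketch in Python. It is not the actual BEQM: the Phillips-curve-style core, the parameter values, and the correction rule (averaging recent forecast misses) are all invented for illustration.

```python
# Toy sketch of core-plus-corrections forecasting. All names and
# numbers are invented; this is not the Bank of England's BEQM.

def core_forecast(inflation, output_gap, beta=0.9, kappa=0.3):
    """Stylised theoretical core: next-period inflation from
    current inflation and the output gap."""
    return beta * inflation + kappa * output_gap

def adjusted_forecast(inflation, output_gap, recent_errors):
    """Supplement the core with an ad-hoc correction: the mean of
    recent forecast misses, standing in for effects (say, credit
    conditions) that the core deliberately leaves out."""
    correction = sum(recent_errors) / len(recent_errors) if recent_errors else 0.0
    return core_forecast(inflation, output_gap) + correction

print(core_forecast(2.0, 0.5))                  # the core alone
print(adjusted_forecast(2.0, 0.5, [0.2, 0.4]))  # core plus correction
```

The point is the structure rather than the arithmetic: the theory-based core remains falsifiable, while the correction layer records exactly where, and by how much, it has been missing.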
What I disagree with is the notion that mathematical modelling encourages economists to make stupid assumptions and then disregard them. Nothing could be further from the truth. Any model, whether it’s defined in mathematical terms (like BEQM) or in more verbose language, is a simplification of real life: that’s the definition of a model. All models therefore make assumptions. At least by using maths, I’m forced to make my assumptions explicit, otherwise the results just won’t appear. If, instead, I attempt to describe a model by pure intuition alone, there are all sorts of hidden assumptions that are often hard to pin down.
Far from making models detached from the real world, mathematical modelling helps verify just how our simplified version of the world differs from the actual thing. It’s then the job of the economist applying these models to carefully note what assumptions are broken, and either fix things up or at least admit that there’s a possibility that things might not come out entirely as predicted. “Is my financial model based entirely on rational agents? Let’s check what happens if we add some behavioural assumptions instead – do things change much?” These are the sorts of questions that economists can (and do) ask. While it’s certainly true that many practitioners cling dogmatically to false assumptions of frictionless markets and the like, it’s the mathematics itself that highlights these simplifications.
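That sort of “do things change much?” sensitivity check can itself be sketched in code. The toy asset market below is entirely invented (the demand rules, weights, and shock process are illustrative assumptions, not any published model): it compares price volatility when all agents are rational value-traders against a market dominated by naive trend-chasers, holding the random shocks fixed.

```python
# Toy sensitivity check: swap rational agents for trend-chasers and
# see whether the price path changes much. Purely illustrative; the
# model and all parameters are invented.
import random

def simulate(weight_chasers, steps=200, seed=42):
    """Price moves with demand from two agent types: 'rational'
    agents push the price towards a fixed fundamental value, while
    trend-chasers extrapolate the last price change."""
    random.seed(seed)          # fix the shocks so only behaviour differs
    fundamental = 100.0
    prices = [100.0, 100.0]
    for _ in range(steps):
        shock = random.gauss(0, 1)
        trend = prices[-1] - prices[-2]
        rational_demand = 0.2 * (fundamental - prices[-1])
        chaser_demand = 0.9 * trend
        step = (1 - weight_chasers) * rational_demand + weight_chasers * chaser_demand
        prices.append(prices[-1] + step + shock)
    return prices

def volatility(prices):
    """Mean absolute price change over the simulated path."""
    changes = [abs(b - a) for a, b in zip(prices, prices[1:])]
    return sum(changes) / len(changes)

print(volatility(simulate(weight_chasers=0.0)))  # all rational
print(volatility(simulate(weight_chasers=0.8)))  # mostly trend-chasers
```

Under these (invented) assumptions the trend-chasing market is noticeably more volatile – which is exactly the kind of thing the mathematics forces you to state and lets you check, rather than leave buried in intuition.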
By all means criticise the unwarranted assumptions of mathematical models and the way they fail to see the “big picture”. Just don’t forget that a well-written analytical model is far more truthful about its assumptions than any paper based on hunches and intuitions will be. Caveat emptor…