The lower panels of Figure 10 are histograms that express the probability distribution, as in Figure 9. Panel D, for the normal distribution, reproduces the familiar bell curve as expected. The largest price swing in any one day is about 4.5 per cent. The real data, panel C, might not look that different; however, closer inspection reveals that it has what are known as fat tails: the extremes of the distribution reach out to much larger losses and gains. As seen more easily in the upper panel, the price swing exceeded 4.5 per cent on 54 different days over the 60-year period - so an event that according to theory should almost never happen actually happened about once per year on average.
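As a rough check on that claim, here is a minimal sketch in Python of how many 4.5 per cent days a normal distribution would predict over 60 years. The daily standard deviation of about 1 per cent is an assumption (it is in the right range for the S&P 500, but the exact value over this window may differ):

```python
# Minimal sketch: how often should a normal distribution produce a
# daily move beyond 4.5%? Assumes a daily standard deviation of ~1%.
from scipy.stats import norm

sigma = 1.0                    # assumed daily std deviation, in per cent
threshold = 4.5                # size of move, in per cent
trading_days = 60 * 252        # roughly 60 years of trading

# Two-sided tail probability: a move below -4.5% or above +4.5%
p_extreme = 2 * norm.sf(threshold, loc=0.0, scale=sigma)
expected = p_extreme * trading_days

print(f"expected extreme days: {expected:.2f}")   # about 0.1
# The real data contains 54 such days -- hundreds of times more.
```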
Now, it has often been argued that the markets are normal most of the time, with only the occasional lapse into abnormal behaviour. Perhaps Black Monday and the credit crunch are inherently unpredictable events that come out of nowhere. If so, then there isn't much we can do about these "Black Swans," to use Taleb's term. But if we look a little closer at the financial data, we see that it does have, not regularity, but a kind of character.
Figure 10. Panel A shows daily percentage price changes in the S&P 500 index over a period of nearly 60 years. Panel B is what the price changes would look like if they followed a normal distribution with the same standard deviation. Panel C is a histogram of price changes. Panel D is a histogram of the corresponding normal distribution.
For example, financial crashes are often compared to earthquakes. This is no loose metaphor: in a very real sense, financial crashes feel like an earthquake happening in slow motion. The top panel of Figure 11 is a zoomed view of the S&P 500 data for the period of the recent credit crunch. The lower panel shows 50 minutes of seismographic data recorded during the earthquake of January 17, 1995, in Kobe, Japan. The two plots look strikingly similar. So when one of the traders at Lehman Brothers told a BBC reporter in September 2008 that "It is terrible ... like a massive earthquake," she was being accurate.6

But the correspondence goes even deeper than that, for it turns out that the frequency of both phenomena is described by the same kind of mathematical law. If you double the size of an earthquake, it becomes about four times rarer. This is called a power law, because the probability varies inversely with the size raised to the power 2. (A number raised to the power 2 is the number squared; a number raised to the power 3 is the number cubed; and so on.) Financial crashes are similar: numerous studies have demonstrated that price changes for major international indices follow a power-law distribution with a power of approximately three.7

Power-law distributions are scale-free, in the sense that there is no "typical" or "normal" representative event. There is only the rule that the larger an event is, the less likely it becomes. In many respects, power-law distributions are therefore the opposite of the normal distribution. The bell curve is concentrated about the mean, with a well-defined standard deviation. The power law, in contrast, is scale-free, but biased towards smaller events. (If economists were to do the job of a geophysicist, they would say that earthquakes don't exist - there is just a constant low level of vibration in the earth.)

Given that we can't predict earthquakes any better than we can predict financial crashes, it is again tempting to see crashes as isolated events. However, the scale-free nature of financial data implies that this is not the case. There is no clear boundary between normal and extreme: only the knowledge that the bigger the change, the smaller the chance of it happening. To understand this more clearly, it may help to take another look at Pascal's triangle, which also has a power law hidden within it.
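In symbols (a sketch, glossing over the distinction between the probability density and the cumulative tail), a power law with power \(\alpha\) says:

```latex
P(\text{size} > s) \;\propto\; s^{-\alpha},
\qquad
\frac{P(\text{size} > 2s)}{P(\text{size} > s)} \;=\; 2^{-\alpha}.
```

With \(\alpha = 2\), doubling the size divides the probability by \(2^2 = 4\), as for earthquakes; with the \(\alpha \approx 3\) found for stock indices, doubling a price change makes it roughly eight times rarer.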
Figure 11. Top panel is a zoomed view of the S&P 500 data from the preceding figure, showing the time period of the 2008 credit crunch. Lower panel shows seismological data from the 1995 earthquake in Kobe, Japan.8
Fractal markets.
Figure 12 shows a modified version of Pascal's triangle in which the odd numbers are covered with triangles, leaving only the even numbers. Now imagine this extending to larger versions of the triangle (corresponding to more coin tosses). The result then converges towards a peculiar geometric object known as the Sierpinski gasket, described by the Polish mathematician Wacław Sierpiński in 1915. This figure is shown in the lower panel.
The usual way to construct this figure is to start with a triangle; divide it into four smaller triangles by connecting the midpoints of its sides; and remove the central triangle. This process is then repeated for each of the three remaining triangles, and so on, ad infinitum.
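A minimal sketch of the first construction: print Pascal's triangle with the odd entries marked and the even entries blank. As the number of rows grows, the marked pattern approaches the gasket (the row count here is arbitrary):

```python
# Pascal's triangle modulo 2: odd entries ('#') trace out the
# Sierpinski gasket; even entries (the blanks) form the white triangles.
rows = 32
row = [1]
for n in range(rows):
    line = " ".join("#" if value % 2 else " " for value in row)
    print(" " * (rows - n) + line)
    # build the next row from pairwise sums, with 1s on the ends
    row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
```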
The Sierpinski gasket is an early example of what the mathematician Benoît Mandelbrot called fractals. These are geometric objects that have the property of self-similarity - no matter how far you zoom in, the object continues to reveal finer detail and structure, and different scales have a similar appearance. There is no preferred or "normal" size for one of the white triangles.
As Mandelbrot noted, financial data is also self-similar - if you look at data taken over years, months, weeks, days, or even seconds, it is hard to know from the shape of the plot what the timescale is.9 And it turns out that the gasket is close to being a visual history of stock-market crashes.
To see this, suppose that each of the white triangles corresponds to a price change, with the length of its top edge equal to the size of the percentage change in a single day. The figure is dominated by a few large events, but as you zoom in further and further you can see that there are innumerable smaller triangles that correspond to much smaller fluctuations.
In real markets, the largest price changes tend to be crashes, which happen in times of panic. If the overall figure has sides of length 100, then the triangle at the centre has size 50. Let's say that corresponds to a fall of 50 per cent in one day - the worst imaginable crash, I hope, and the one we haven't had yet in a major index. Then there are three crashes of half the size (25 per cent); nine crashes of a quarter the size (12.5 per cent); and 27 crashes of an eighth the size (6.25 per cent). In general, every time we halve the size of the price change, there are three times as many that occur.
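That halving-and-tripling rule pins down the power. If halving the size \(s\) triples the count \(N(s)\), then:

```latex
N(s/2) = 3\,N(s)
\;\Longrightarrow\;
N(s) \propto s^{-d},
\qquad
d = \frac{\log 3}{\log 2} \approx 1.585.
```

(The same number, \(\log 3 / \log 2\), is also the fractal dimension of the gasket.)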
Figure 12. Top panel shows Pascal's triangle with the odd numbers covered. If you extend this pattern to larger versions of the figure, the result begins to resemble the Sierpinski gasket (lower panel). Market price fluctuations follow a similar pattern to the size of the triangles.
This relationship is equivalent to a power law, similar to that for earthquakes except that the power is slightly different (about 1.6 rather than 2). It also captures the behaviour of the financial markets. Figure 13 shows the 100 largest price changes in the S&P 500, ranked in descending order (solid line). The dotted line is the result you would get by assuming that price changes follow the same distribution as the Sierpinski gasket. It is obviously not a perfect fit (this can be improved by adjusting the power). However, it is much better than the results calculated using the normal distribution, shown by the dashed line, which are clearly far too small. We therefore see that the presence of extreme market crashes is no more bizarre or inexplicable than the presence of the large triangles in the gasket: they all belong to the picture, and are created by the same processes.
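A toy version of this comparison can be simulated directly (a sketch with invented parameters, not the actual S&P 500 calculation): build the gasket-style list of crashes - one of 50 per cent, three of 25 per cent, and so on - and set the 100 largest against the 100 largest moves drawn from a normal distribution with an assumed daily standard deviation of 1 per cent:

```python
# A toy reconstruction of the comparison in Figure 13 (not the real
# S&P 500 data): gasket-style crash sizes versus normal draws.
import numpy as np

rng = np.random.default_rng(0)

# Gasket distribution: one event of 50%, three of 25%, nine of 12.5%, ...
gasket = []
size, count = 50.0, 1
while len(gasket) < 100:
    gasket += [size] * count
    size, count = size / 2, count * 3
gasket = sorted(gasket, reverse=True)[:100]

# Normal distribution with an assumed daily sigma of 1%, over ~60 years
draws = np.abs(rng.normal(0.0, 1.0, 60 * 252))
normal = np.sort(draws)[::-1][:100]

for rank in (1, 10, 100):
    print(f"rank {rank:3d}: gasket {gasket[rank-1]:6.2f}%  "
          f"normal {normal[rank-1]:5.2f}%")
```

The gasket's top-ranked events come out dozens of times larger than anything the normal distribution produces, which is the gap the figure illustrates.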
Figure 13. Plot comparing the 100 largest price changes in the S&P 500, ranked in descending order (solid line), with the 100 largest price changes from the normal distribution (dashed line), and the result you would get by assuming that price changes follow the same distribution as the Sierpinski gasket (dotted line). A largest crash of 50 per cent is assumed (not shown). The laddering effect arises because the triangles in the gasket come in groups of the same size - for example, there is one of the largest size, three of the next largest size, and so on.
It isn"t just financial crashes and earthquakes that have these fractal patterns; the lengths of coastlines, the size of craters on the moon, the diameters of blood vessels, the populations of cities, the arrangements of species in ecosystems, and many other natural and man-made systems show fractal statistics. As we will see in Chapter 7, the sizes of businesses follow a fractal pattern - there are many smaller ones, and just a few big multinationals at the top. Pareto"s 80-20 law was an observation that wealth scales fractally. Indeed, fractals are a kind of signature of complex organic systems operating far from equilibrium, which tend to evolve towards a state known as self-organised criticality.10
Going critical.
The classic example of self-organised criticality is the humble sandpile. Imagine a conical sandpile with sides of a certain slope. If the slope is too steep, then the pile will be unstable, and adding just a single grain could cause it to collapse. This is the chaotic state, which shows sensitivity to initial conditions. On the other hand, if the slope is very shallow, then adding a few extra grains to the top of the pile won't cause much of a disturbance. In this state the system is stable. If you now continue to add grains of sand, then the sandpile will eventually converge, or self-organise, to a critical state. In a sense, this state is maximally efficient, because it has the steepest sides and reaches as high as possible without becoming fully unstable. However, it is not very robust. Adding further grains of sand will create avalanches that range in size from extremely small to very large, and that follow a power-law, scale-free distribution. The system is not chaotic or stable, but on the border between the two.
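A minimal sketch of the standard sandpile model (the Bak-Tang-Wiesenfeld version; the grid size and grain counts here are arbitrary) shows how the power law emerges: drop grains at random, topple any cell that reaches four grains, and record the size of each avalanche:

```python
# Bak-Tang-Wiesenfeld sandpile: the classic model of self-organised
# criticality. Any cell holding 4+ grains topples, sending one grain
# to each neighbour (grains fall off the edges of the grid).
import numpy as np

rng = np.random.default_rng(1)
N = 20
grid = np.zeros((N, N), dtype=int)
avalanches = []

for _ in range(20000):
    x, y = rng.integers(0, N, size=2)
    grid[x, y] += 1                      # drop one grain at random
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            break
        for i, j in unstable:            # topple every unstable cell
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni, nj] += 1
    avalanches.append(size)

# Small avalanches are common; huge ones are rare but keep occurring.
hist = np.bincount(avalanches)
for s in (1, 10, 100):
    if s < len(hist):
        print(f"avalanches of size {s:3d}: {hist[s]}")
```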
One can therefore hypothesise that markets, like the earth's crust or species in ecosystems, self-organise to a critical state, and so exhibit fractal statistics. This is an interesting insight, and one that links financial markets to the natural world. Unfortunately, it isn't much help in making precise predictions, or even in calculating the odds of a crash in the future.11 One reason is that we can know the exact distribution only by making a statistical sample of existing data. If you know that the data is normally distributed, then it is possible to quickly estimate the mean and standard deviation using standard techniques. But if the data is scale-free, then the largest and most important events occur extremely rarely. Even several decades of S&P 500 data gives us only a few major crashes. This makes it much more difficult to come up with accurate statistical estimates for the probability of similar events.12 Also, fractal statistics tell you only about the distribution of price changes, not the timing or the degree of clustering. We know that earthquakes follow a power-law distribution, but we still have no idea when the next one will strike.13 (One useful tip, though, is that volatility does show clustering - if the markets are stormy, it's not safe to expect them to calm down soon.) Finally, a model that was valid in the 1960s or 1980s won't be valid in the 2010s, because the entire economy will be different - the mix of companies, investors, regulators, and so on will have changed.
Traders and investors like to have simple formulae that they can understand. The great appeal of the normal distribution is that it boils risk down to a single number, the standard deviation. It is possible to come up with more elaborate versions of the bell curve that include fat tails (extreme events), asymmetry, or temporal clustering, but this introduces extra complications and parameters, and the sampling problem means that the resulting estimates for extreme events are little better than guesses. Some hedge funds do use sophisticated statistical algorithms, but they tend to be focused on specialised trades that exploit isolated pockets of predictability. A dirty secret of quantitative finance is that the tools used are usually quite simple - those math and physics Ph.D.s are largely for show. So in practice, the broader field of risk modelling hasn't moved on all that much since Pascal and his triangle.
A good example is the most widely used risk-modelling technique, Value at Risk or VaR, seen earlier. As the name suggests, it is used to estimate the worst-case loss that an institution could face on a given financial position. The method has been adopted and sanctioned by banks, regulators, and credit rating institutions, and is important for investors and analysts. Risk is estimated by taking historical data over a time window ranging from a few months to several years, depending on the case, and applying standard statistical techniques to give a likelihood for a particular loss. This is usually expressed in terms either of standard deviations or of a percentage confidence level. A 3-standard-deviation event is one that has about a 99.9 per cent probability of not happening, so this defines a kind of maximum limit on the expected loss. A 1.65-standard-deviation event has a 95 per cent probability of not happening.
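As a minimal sketch of the historical method (with a simulated return series standing in for real data, and an invented position size):

```python
# Historical VaR sketch: the 95% VaR is the loss exceeded on only
# 5% of days in the historical window. The returns here are simulated;
# a real calculation would use the institution's own position data.
import numpy as np

rng = np.random.default_rng(2)
daily_returns = rng.normal(0.0005, 0.01, 500)   # ~2 years of fake returns
position = 100_000_000                           # assumed $100m position

var_95 = -np.percentile(daily_returns, 5) * position
var_999 = -np.percentile(daily_returns, 0.1) * position

print(f"1-day VaR, 95% confidence:   ${var_95:,.0f}")
print(f"1-day VaR, 99.9% confidence: ${var_999:,.0f}")
# The catch: both numbers are only as good as the historical window
# and the distribution that happened to produce it.
```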
Despite its popularity, the model fails on a regular basis. As mentioned in the introduction, Goldman Sachs complained of 25-standard-deviation events during 2007. On February 29, 2008, Bear Stearns reported a VaR of $62 million at 95 per cent confidence. By mid-March, its share price had dropped from $70 to $2, representing a loss of $8 billion. Bear Stearns were of course aware of the drawbacks of VaR, but as they stated in their final report to the Securities and Exchange Commission, "the company believes VaR models are an established methodology for the quantification of risk in the financial services industry despite these limitations."14 Indeed, it often seems that VaR is used as a device to sanction risk-taking behaviour and to provide an excuse when things go wrong.
Even before the crisis, there were many attempts to make VaR more robust by introducing extra variables (aka fudge factors) into the model to account for unseen risks. However, because the risks are unseen and therefore unobservable, there is no reliable way to forecast their future values.15 The calculated risk will also depend on the exact values of these variables, making the techniques difficult to share on an industry-wide basis. Which is why risk still tends to be expressed in terms of standard deviations, even though the concept has little meaning for non-normal distributions.
The danger, as hedge fund manager David Einhorn put it, is that quantitative risk management gives users "a false sense of security ... like an air bag that works all the time, except when you have a car accident."16 And by downplaying the possibility of extreme events, it increases risky behaviour, thus making disaster more likely. The next way to improve economics, then, is to use the findings of complexity and fractal statistics, not to predict the timing or magnitude of the next crash, but to model the financial system and find ways to reduce the likelihood and impact of extreme events. We may not be able to build a perfect air bag, but we can make the roads safer in the first place.
Getting to normal.
One useful observation from complexity theory is that, in complex organic systems, there is a trade-off between efficiency and robustness.17 In the chaotic regime, the system fluctuates uncontrollably. In the stable regime, changes are small and follow the normal distribution. If left to their own devices, the systems will often evolve towards the critical state on the boundary between chaos and order. Here the fluctuations follow a power-law, scale-free distribution: the system is highly efficient, in the sense that it is being pushed to the maximum, but it is not robust because it is susceptible to extreme fluctuations.
In the same way, our financial system is very efficient, in the narrow sense that it generates big profits for banks and investors over the short term (this has nothing to do with the efficient market hypothesis, which is about normally distributed price changes). However, its fluctuations follow a power-law distribution, and it is susceptible to crashes that have a severe impact on the rest of society. The market is like a giant sandpile: investors pile more and more money on to the top, in the hope that they can make a quick profit before the whole thing comes crashing down. In this state, extreme events aren't aberrations - they're part of the landscape.
An interesting question, then, is whether it is possible to improve the balance between stability and efficiency: to create a financial system that is less profitable in the short term, but also less prone to harmful collapses. After all, the financial system is not a natural system, but is something that we have created ourselves, so we should be able to engineer it in such a way that it behaves in a more stable fashion. Unlike a sandpile, we have at least some influence over our destiny.
Four steps immediately suggest themselves. The first is to better regulate the introduction of new financial products. The finance industry has become increasingly deregulated in the last few decades, based on the dogma that markets are stable and self-regulating. Whenever governments propose new rules, the banks complain that this will stifle innovation in the field of financial engineering. But in other kinds of engineering or technology, regulations are strictly applied because they save money and lives - they keep our systems operating in the stable regime, instead of veering off into chaos. This reduces efficiency by some measures because it means that new products take longer to get to market, but the benefits outweigh the costs.
As discussed further in Chapter 6, a major contributing factor to the credit crunch was the proliferation of new financial products (i.e. schemes) that allowed risk to be sliced and diced and sold off to third parties. The risk calculations were performed using standard techniques, which unsurprisingly weren't up to the job. In fact these new products, such as credit default swaps and collateralised mortgage obligations, turned out to have the same effect on risk as a sausage factory has on a diseased animal - instead of controlling risk, they disguised it and helped it propagate. Furthermore, they were not properly regulated by the financial authorities, which was a large part of their attraction. Their adoption was rather like the pharmaceutical industry introducing a new drug and claiming that there was no need to perform the necessary trials before bringing it to market.
Actually, some healthcare companies (or quacks) try to do this all the time, which is why regulatory bodies have to be on their toes. During the 2009 swine flu pandemic, the US Food and Drug Administration was forced to take action against 120 products that claimed to be able to cure or prevent the disease. These included a $2,995 "photon genie" to stimulate the immune system, and a bottle of "Silver Shampoo" that would kill the airborne swine flu virus if it settled on your hair.18 The FDA said that these claims were "a potentially significant threat to public health because they may create a false sense of security and possibly prevent people from seeking proper medical help."19 Rather like risk models then.
It might seem that financial markets are so complex that they are impossible to regulate. However, this impression is largely due to the carefully maintained myths that markets are efficient and optimal, so any attempt to interfere with their function will be counterproductive. As Adair Turner, chair of the UK's Financial Services Authority, observes, "the whole efficient market theory, Washington consensus, free market deregulation system" has resulted in "regulatory capture through the intellectual zeitgeist." The abandonment of this framework puts regulators "in a much more worrying space, because you don't have an intellectual system to refer each of your decisions."20 Effective regulation doesn't mean that regulators have to be smarter than hedge fund managers. The first step is to change their position, from allowing new financial products unless and until they are proven faulty, to (the default in other critical industries like medicine or nuclear energy) not allowing them unless they can be shown to provide a measurable improvement over other alternatives, with no dangerous side-effects. It costs about a billion dollars to bring a new cancer drug to market, and one reason is that it has to jump over many regulatory hurdles before it can be adopted as a therapy. People don't automatically assume that drug regulators are more clever or cunning than drug developers, but they seem quite effective at their jobs.
Living on the edge.
The next method to move the economy to a less risky regime is to reduce the incentives for bankers to make bets that have a high probability of paying off in the short term, thus generating fabulous bonuses, but are guaranteed to eventually blow up. The asymmetry in these bets is like that of Pascal's wager: they have a great upside (the bonuses) and a minimal downside (eventually the bet will go wrong, but there are no negative bonuses, and by that time the person will probably be on the beach anyway). After the credit crunch there was great public demand for measures such as pay caps, and withholding a portion of bonuses for a few years to limit short-termism. However, at the time of writing, regulators have made only limited progress. As Mervyn King put it, paraphrasing Winston Churchill: "Never in the field of financial endeavour has so much money been owed by so few to so many. And, one might add, with so little real reform."21

The third suggestion, mentioned also in Chapter 3, is that credit creation and leverage should be controlled. The total amount of debt in the economic system in 2008, relative to gross domestic product, was approximately three times what it was in the 1980s.22 That's fine when markets are rising, but any unexpected events quickly become amplified and create indirect effects. Regulations shouldn't apply just to banks, but to any institution that creates credit, including derivative markets.
Leverage is of course linked to perceived risk: if you can convince a lending institution that a proposed investment has low risk, they will lend you more money.

Finally, then, the risk models used by banks and financial institutions should be modernised to better reflect the fractal nature of the markets and the possibility of extreme events. Techniques from areas of mathematics such as fractal analysis can help, for example by generating realistic stress tests for financial products or institutions; however, it is equally important to acknowledge the limitations of mathematical models of any type.23 The main implications are that banks need to hold larger reserves - i.e. keep some money under the mattress, even when risk seems low - and develop scepticism about their ability to predict the future. Just because a formula says it's safe, that doesn't mean it really is. Time-tested risk management techniques such as experience-honed intuition, common sense, and conservatism might even come back into vogue.
Perhaps the most important thing to realise with complex systems is that models can actually be counterproductive if they are taken too literally. Risk assessment based on faulty models may give a comforting illusion of control, but can turn out to be highly dangerous. Overconfidence in models makes us blind to the dangers that lurk below the surface. Just as engineers and biological systems exploit redundancy to provide a margin of safety, and boat-builders over-design their ships rather than build them to withstand only the "normal" wave, so we need to buffer the economic system against unexpected shocks.
To summarise, the trade-off between efficiency and robustness in complex systems suggests that we can lower the risk level of the economy, if we are prepared to accept lower levels of short-term efficiency. Along with structural changes of the sort discussed in Chapter 2, this will also require a high degree of regulation. This shouldn't be surprising - all forms of life, from bacteria to an ecosystem, are closely regulated. Economists talk about the invisible hand, but if you look at your own hand, everything about it - the temperature, the blood pressure, the salinity of the cells, and so on - is subject to a fierce degree of regulation that would put any financial regulator to shame. Of course, it is possible to go too far in the other direction - we want to steady the economy, not stop it. A first step towards finding this balance is to change the "intellectual zeitgeist" by absorbing new ideas from science.
It often feels that our financial system is living on the edge. Insights from complexity, network theory, nonlinear dynamics, and fractal statistics may help us find our way back to a more stable and less nail-biting regime. Of course, finance is inherently risky, and when it comes to calculating the danger mathematical models are only one piece of the puzzle. Risk is ultimately a product of human behaviour, which has a way of eluding neat mathematical equations.
CHAPTER 5.
THE EMOTIONAL ECONOMY.
In an insane world, the person who is rational has the problem. Money is as addictive as cocaine.
Andrew Lo, professor of financial engineering (2009).
If everything on Earth were rational, nothing would happen.
Fyodor Dostoevsky (1880).
Mainstream economists see the economy as rational and efficient. This is based on the idea that individual investors make decisions rationally. It has clearly been a while since these economists visited a mall. While we do sometimes apply reason and logic to financial decisions, we are also highly influenced by the opinions of other people, advertisers, and random compulsions that drift into our head for no particular reason. Indeed, markets rely for their existence on emotions such as trust and confidence: without trust there is no credit, and without confidence there is no risk-taking. This chapter shows that the emphasis on rationality in economic theory says more about economists and their training than it does about the real world; and discusses new approaches that take into account the fact that money is emotional stuff.
Market crashes often seem to happen in the autumn. The worst days of the 1929 Wall Street crash were October 24 (known as Black Thursday) and October 29 (Black Tuesday). The crash of October 19, 1987, gave us Black Monday. During the credit crunch, there was an entire so-called Black Week: five days of trading that began on October 6, 2008, and that set records for volume and weekly decline. Historically, by far the worst month for US investors is actually September: the S&P 500 falls on average by 1.3 per cent during that month.1 There is clearly something about the end of summer and the onset of winter that makes investors see black.
This pattern will be familiar to people who live in northern climates (I grew up in Edmonton, so I know what I'm talking about here). It's the time of year when the days are shortening at their fastest rate. I found it strangely comforting to realise this fact during the peak of the credit crunch. Of course, I thought: the markets are suffering from Seasonal Affective Disorder (SAD). They'll cheer up in the spring, when the sun starts to shine again.
Now, this might not be a very logical interpretation of stock market history. But the idea that markets have a life of their own, which includes patches of depression and elation, is hardly new. Even Alan Greenspan referred to the "irrational exuberance" that preceded the dot-com bust.
Indeed, when you stand back and take a cool, hard look at the economy, it doesn't look like the most rational thing on earth. Those traders, for example, who stand around in pits wearing brightly coloured shirts and shouting instructions over the phone to buy or sell shares or futures contracts - they seem a little, shall we say, over-excited. I suppose their actions could be driving prices back to equilibrium in an impartial, Newtonian sort of way, but I wouldn't want to bet on it.
Much trading these days is carried out by computerised trading algorithms, which are presumably immune to mood swings or psychological ailments. But it remains true that money brings out intense emotional reactions. A neurological study at University College London showed that the physical response of our brain when we experience financial loss is the same as that caused by fear or pain.2 When the stock index spikes down, people the world over take it on the chin; when it spikes up, it"s like a shot in the arm.
It is therefore strange that orthodox economic theory sees individuals as entirely cold, rational, and emotionless. This is the myth of Homo economicus: aka rational economic man.
Irrational numbers.
Before going on to plumb the depths, or shallows, of rational economic man, a short mathematical detour may be in order. In mathematics, the word "rational" has a rather different and specialised meaning: it is used to describe numbers that can be expressed as a ratio of two integers - for example 1/2, or 2/3, or 237/12. The Pythagoreans believed that everything in the world was created from the whole numbers - specifically the positive integers - and so it followed that any number should be expressible as a fraction, i.e. was rational. It therefore caused something of a stir when a member of the Pythagorean sect discovered the existence of irrational numbers.
As the theorem of Pythagoras tells us, a right-angled triangle that has two equal sides of length 1 will have a hypotenuse of length equal to the square root of 2. That member, Hippasus, tried to express this number as a fraction, but found that it couldn't be done. Instead, he provided a mathematical proof that the square root of 2 is an irrational number. This was bad enough, but he then made the mistake of leaking the result to people outside the secretive cult. A short time later he died at sea under suspicious circumstances.
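Hippasus's own argument has not survived, but the standard proof is short. Suppose \(\sqrt{2} = p/q\) with the fraction in lowest terms; then:

```latex
\sqrt{2} = \frac{p}{q}
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p \text{ is even, say } p = 2r
\;\Rightarrow\; 4r^2 = 2q^2
\;\Rightarrow\; q^2 = 2r^2
\;\Rightarrow\; q \text{ is even.}
```

Both \(p\) and \(q\) being even contradicts "lowest terms", so no such fraction can exist.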
Over 2,000 years later, while the first neoclassical economists were busy putting the economy on a logical footing, the story of irrational numbers took another turn. In 1874 the mathematician Georg Cantor showed that irrational numbers actually outnumber the rational numbers. In fact, if there were a way to arbitrarily select a number in the range of 0 to 1, purely by chance, then the probability of choosing a rational number would be zero. Rational numbers are like needles in a haystack - the only way to find one is if someone tells you exactly where it is.
To Cantor"s contemporaries his claim seemed absurd, because there are obviously infinitely many rational numbers. But Cantor proved that there are different kinds of infinity. The rational numbers are countable, in the sense that you can construct a long list of them that, if you had the inclination and an infinite amount of time, you could read. However, the irrational numbers are uncountable - no such list can be created, even in theory.
The reaction to these revelations was that Cantor became about as popular as Hippasus. Henri Poincaré condemned his work as a "grave disease" on mathematics; others called him a "scientific charlatan" and even a "corrupter of youth."3 Cantor suffered from depression anyway, but the hostile response to his work worsened his condition, and his last years were spent in a sanatorium. Irrational numbers helped drive him crazy.
Now, it might seem that rational numbers have little to do with rational human beings or economic theories. However, when Hippasus showed that root 2 is irrational, he was proving that the Pythagorean theory of numbers hid an internal contradiction. The structure could not stand - not everything could be expressed in terms of whole numbers, as their dogma dictated. When Cantor showed that there are different kinds of infinity, with more irrationals than rationals, his result again revealed contradictions and inconsistencies in the supposedly stable edifice of mathematics that are still not completely resolved.
Neoclassical economic theory is based on a body of work with a different kind of rationality at its core. By assuming that people behave in a logical or rational way, actions and motivations can be reduced to mathematical equations, just as rational numbers can be reduced to ratios of whole numbers. And once again, evidence of irrationality is showing that these apparently solid foundations are built on sand.
The Logic Piano.
Before he got into economics, the first love of William Stanley Jevons was the study of logic. He wrote a number of books and papers on the subject, including the bestselling textbook Elementary Lessons on Logic. In 1870 he even produced and exhibited at the Royal Society his "Logic Piano," a kind of mechanical computer resembling a keyboard that could perform elementary logical tasks and show how conclusions are derived from a set of premises.
The aim of Jevons, and the other neocla.s.sical economists, was to ground the study of money on a set of logical principles. The two main ingredients were the concept of the "average man" and the utility theory of Jeremy Bentham (1748-1832).
According to Bentham, mankind was ruled by "two sovereign masters - pain and pleasure ... The principle of utility recognises this subjection, and assumes it for the foundation of that system, the object of which is to rear the fabric of felicity by the hands of reason and of law."4 The pursuit of pleasure was therefore a rational enterprise that could be explained in terms of logic. To question this was to "deal in sounds instead of sense, in caprice instead of reason, in darkness instead of light". (John Stuart Mill described Bentham as curiously naive about the complexities of the real world: "Knowing so little of human feelings, he knew still less of the influences by which those feelings are formed: all the more subtle workings both of the mind upon itself, and of external things upon the mind, escaped him."5)

Of course, one person's pleasure is often another person's pain, but what counted was the response of the "average man." This hypothetical person was first proposed by the French scientist Adolphe Quetelet in his book A Treatise on Man (published in English in 1842). Quetelet found that many human statistics - such as mortality, height and weight, criminality, insanity, and so on - could be modelled using the bell curve, with a well-defined mean - the average man - and a certain spread, due to deviations from the average. He argued that the average man therefore captured the true essence of society. "The greater the number of individuals observed, the more do peculiarities, whether physical or moral, become effaced, and allow the general facts to predominate, by which society exists and is preserved." His book even turned the idea of the average man into a kind of moral ideal - something to aspire to. As he put it: "If an individual at any epoch of society possessed all the qualities of the average man, he would represent all that is great, good, or beautiful."6

Armed with these ideas of utility and the average man, the neoclassical economists argued that, on average, investors would act rationally to maximise their own utility. Even if a few individuals had only a fragile grasp of logic, what counted was the average man, who could always be counted on to do the right and reasonable thing. It was therefore possible to build up a detailed mathematical model of the economy based on equations. Individual irrationality was seen only as a kind of random noise that could safely be ignored.
The model economy.
The culmination of this effort was what many consider to be the jewel in the crown of neoclassical economics: the Arrow-Debreu model, created by Kenneth Arrow (the uncle of economist Larry Summers) and Gérard Debreu in the 1950s.7 It finally proved, in a mathematically rigorous fashion, the conjecture of Léon Walras that idealised market economies would have an equilibrium.
The model consists of a number of ingredients:

* An inventory of available goods, with prices in a single currency. Goods at different times or places are handled by treating them as different things: when a banana arrives in London from Puerto Rico, it gets assigned a new inventory number, and will have a different price.
* A list of firms, each of which has a set of production processes that describes how the firm produces or consumes goods.
* A list of households, each of which has a specified consumption plan, which describes how it intends to consume goods; an endowment, which includes things like goods and services (including labour) that it can sell; and other assets such as shares in companies.
The consumption plan for each household is determined by a unique utility function that reflects their preferences for the available goods (in fact it is necessary only for each household to rank the available goods in terms of preference). Preferences are assumed to remain fixed with time. For firms, the utility function is simply their profits. Given a particular set of prices, it is possible to compute the optimal consumption plan for households, and the optimal production process for firms. From those one can calculate the total demand for the available products at the specified price, and also the total supply from firms.
The model therefore describes, in very general terms, a basic market economy. Its description may seem very dry and abstract and mathematical, but this is precisely the point. Just as the Pythagorean theorem works for any right-angled triangle, independent of the exact dimensions, so the Arrow-Debreu model doesn't care what goods the economy produces or the exact preferences of the households - it can work for any goods and any preferences.
The achievement of Arrow and Debreu was to mathematically prove that, given certain conditions, there exists an equilibrium set of prices for which supply and demand are perfectly balanced. The model didn't say whether the equilibrium was stable, or unique, or explain how or if the market would attain it; but it did say that, in theory at least, one such point existed. Furthermore, according to the first fundamental theorem of welfare economics, any such equilibrium will be Pareto-efficient, meaning that it is impossible to make any household better off without making at least one other household worse off.
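The Arrow-Debreu proof is abstract and non-constructive, but a toy version of the ingredients can be computed directly. Here is a minimal sketch, with all numbers invented for illustration: two households with Cobb-Douglas preferences trade two goods, and the price of the first good (the second serves as the unit of account) is nudged up or down until demand matches supply - the price-adjustment process Walras called tatonnement:

```python
# A toy two-good, two-household exchange economy in the spirit of the
# Arrow-Debreu setup (a sketch with invented numbers, not their model).
# With Cobb-Douglas utility u = x^a * y^(1-a), a household facing
# prices (px, py) with endowment (ex, ey) demands
#     x = a * wealth / px,   where wealth = px*ex + py*ey.
alphas = [0.3, 0.6]                      # preference weights
endowments = [(1.0, 0.5), (0.5, 1.0)]    # initial holdings of (x, y)

def excess_demand_x(px, py=1.0):
    """Total demand for good x minus total supply (y is the numeraire)."""
    demand = 0.0
    for a, (ex, ey) in zip(alphas, endowments):
        wealth = px * ex + py * ey
        demand += a * wealth / px
    supply = sum(ex for ex, _ in endowments)
    return demand - supply

# Tatonnement: raise the price when demand exceeds supply, lower it
# when supply exceeds demand, until the market clears.
px = 0.5
for _ in range(1000):
    px += 0.1 * excess_demand_x(px)

print(f"equilibrium price of x: {px:.4f}")          # ~0.8333
print(f"residual excess demand: {excess_demand_x(px):.2e}")
```

By Walras's law, once the market for one good clears, the other clears automatically - which is why adjusting a single price is enough in this two-good toy.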
To accomplish this level of generality, the model had to make a number of assumptions. One of them was the assumption of perfect competition, which states that no individual or firm can set prices - everyone is a price-taker, so there are no monopolies or unions. As discussed further in Chapter 7, this ignores the power discrepancies in the real economy. But perhaps the biggest catch was that, to make the proof work, it required that all participants in the market act rationally to maximise their utility, now and in the future.
Since the future is unknown, the model assumed that households and firms could create a list of all possible future states, and draw up a consumption plan for each one. For example, a household's optimal consumption would depend on the price of food, which in turn would depend on the weather. So it would need a separate plan for each of the different weather states (flood, drought, cloudy with some rain, hurricane, etc.), now and forever. The same would have to be done for other events, including the arrival of new technologies, changes in availability of commodities such as oil, and so on. The utility function of the household would be extended to cover all of these different contingencies.
Just to be clear, the claim here isn't that individuals do their best to make the right decisions based on the information that is immediately available to them. The claim is that, first of all, they can create a list of all the possible future states of the world. Then, they make the best decisions taking into account each of these separate future worlds. These people aren't just rational, they're hyper-rational.