Banking as we know it actually began almost by accident, as a sideline of a very different business, goldsmithing. Goldsmiths, by virtue of the high value of their raw material, always had really strong, theft-resistant safes. Some of them began renting out the use of these safes: individuals who had gold but no safe place to keep it would put it in the goldsmiths’ care, receiving a ticket that would allow them to claim their gold whenever they wanted it.

At this point two interesting things started happening. First, the goldsmiths discovered that they didn’t really have to keep all that gold in their safes. Since it was unlikely that all the people who had deposited gold with them would demand it at the same time, it was (usually) safe to lend much of the gold out, keeping only a fraction in reserve. Second, tickets for stored gold began circulating as a form of currency; instead of paying someone with actual gold coins, you could transfer ownership of some of the coins you had stored with a goldsmith, so the slip of paper corresponding to those coins became, in a sense, as good as gold.

And that’s what banking is all about. Investors generally face a trade-off between liquidity—the ability to lay your hands on funds on short notice—and returns, putting your money to work earning even more money. Cash in your pocket is perfectly liquid, but earns no return; an investment in, say, a promising start-up may pay off handsomely if all goes well, but can’t easily be turned into cash if you face some financial emergency. What banks do is partially remove the need for this trade-off. A bank provides its depositors with liquidity, since they can lay hands on their funds whenever they want. Yet it puts most of those funds to work earning returns in longer-term investments, such as business loans or home mortgages. So far, so good—and banking is a very good thing, not just for bankers but for the economy as a whole, most of the time.
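The goldsmiths’ fractional-reserve logic comes down to simple arithmetic, sketched below; the numbers (a 10 percent reserve, 1,000 units of deposits) are illustrative assumptions, not figures from the text:

```python
def can_meet_withdrawals(deposits, reserve_fraction, demanded):
    # A fractional-reserve bank keeps only part of its deposits as
    # ready cash; the rest earns returns in loans that cannot be
    # sold quickly without fire-sale losses.
    reserves = deposits * reserve_fraction
    return demanded <= reserves

# Ordinary day: only a few depositors want their money back.
print(can_meet_withdrawals(1000, 0.10, 60))   # True
# Panic: far more is demanded than the reserves on hand.
print(can_meet_withdrawals(1000, 0.10, 400))  # False
```

As long as withdrawals stay near their usual level, the arrangement works; the trouble described in the passage that follows is precisely the case where everyone demands their funds at once.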
On occasion, however, banking can go very wrong, for the whole structure depends on depositors’ not all wanting their funds at the same time. If for some reason all or at least many of a bank’s depositors do decide simultaneously to withdraw their funds, the bank will be in big trouble: it doesn’t have the cash on hand, and if it tries to raise cash quickly by selling off loans and other assets, it will have to offer fire-sale prices—and quite possibly go bankrupt in the process.

What would lead many of a bank’s depositors to try withdrawing their funds at the same time? Why, fear that the bank might be about to fail, perhaps because so many depositors are trying to get out. So banking carries with it, as an inevitable feature, the possibility of bank runs—sudden losses of confidence that cause panics, which end up becoming self-fulfilling prophecies. Furthermore, bank runs can be contagious, both because panic may spread to other banks and because one bank’s fire sales, by driving down the value of other banks’ assets, can push those other banks into the same kind of financial distress.

As some readers may already have noticed, there’s a clear family resemblance between the logic of bank runs—especially contagious bank runs—and that of the Minsky moment, in which everyone simultaneously tries to pay down debt. The main difference is that high levels of debt and leverage in the economy as a whole, making a Minsky moment possible, happen only occasionally, whereas banks are normally leveraged enough that a sudden loss of confidence can become a self-fulfilling prophecy. The possibility of bank runs is more or less inherent in the nature of banking.

Before the 1930s there were two main answers to the problem of bank runs. First, banks themselves tried to seem as solid as possible, both through appearances—that’s why bank buildings were so often massive marble structures—and by actually being very cautious.
In the nineteenth century banks often had “capital ratios” of 20 or 25 percent—that is, the value of their deposits was only 75 or 80 percent of the value of their assets. This meant that a nineteenth-century bank could lose as much as 20 or 25 percent of the money it had lent out, and still be able to pay off its depositors in full. By contrast, many financial institutions on the eve of the 2008 crisis had capital backing only a few percent of their assets, so that even small losses could “break the bank.”
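The capital-ratio arithmetic above can be made concrete with a small sketch; the balance-sheet figures below ($100 of assets funded by $80 or $97 of deposits) are illustrative assumptions in the spirit of the text, not actual bank data:

```python
def capital_cushion(assets, deposits):
    # Capital is the gap between what the bank owns (assets) and what
    # it owes depositors; it is the largest loss the bank can absorb
    # while still paying depositors in full.
    return assets - deposits

# Nineteenth-century-style balance sheet: a 20 percent capital ratio.
print(capital_cushion(100, 80))  # 20: up to $20 of loan losses survivable
# 2008-style balance sheet: capital of only a few percent of assets.
print(capital_cushion(100, 97))  # 3: even small losses "break the bank"
```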
Second, there were efforts to create “lenders of last resort”—institutions that could advance funds to banks in a panic, and thereby ensure that depositors were paid and the panic subsided. In Britain, the Bank of England began playing that role early in the nineteenth century. In the United States, the Panic of 1907 was met with an ad hoc response organized by J. P. Morgan, and the realization that you couldn’t always count on having J. P. Morgan around led to the creation of the Federal Reserve.

But these traditional responses proved dramatically inadequate in the 1930s, so Congress stepped in. The Glass-Steagall Act of 1933 (and similar legislation in other countries) established what amounted to a system of levees to protect the economy against financial floods. And for about half a century, that system worked pretty well.

On one side, Glass-Steagall established the Federal Deposit Insurance Corporation (FDIC), which guaranteed (and still guarantees) depositors against loss if their bank should happen to fail. If you’ve ever seen the movie It’s a Wonderful Life, which features a run on Jimmy Stewart’s bank, you might be interested to know that the scene is completely anachronistic: by the time the supposed bank run takes place, that is, just after World War II, deposits were already insured, and old-fashioned bank runs were things of the past.

On the other side, Glass-Steagall limited the amount of risk banks could take. This was especially necessary given the establishment of deposit insurance, which could have created enormous “moral hazard.” That is, it could have created a situation in which bankers could raise lots of money, no questions asked—hey, it’s all government-insured—then put that money into high-risk, high-stakes investments, figuring that it was heads they win, tails taxpayers lose.
One of the first of many deregulatory disasters came in the 1980s, when savings and loan institutions demonstrated, with a vengeance, that this kind of taxpayer-subsidized gambling was more than a theoretical possibility.