Thursday, November 29, 2012

Macro graduate

Noah Smith has a depressing look at the reaction of some graduate students to the macro they are taught. He links this to the ‘war’ between freshwater and saltwater visions of macro, and any disconnect between what is taught and the real world is bound to be more acute with the former. Particularly today, courses that attach New Keynesian theory to the end of the programme - and in some cases (through accident or design) end up not teaching it at all - are just asking for trouble. Even in more normal times, isn’t it a good idea to give students some idea of what central banks think they are doing? - they might just want a job in one!

However, even if you put this ideological problem to one side, I think there is a difficulty for anyone teaching graduate macro which is rather different from anything encountered teaching at the undergraduate level. At the masters level it is just logical to delay teaching New Keynesian economics until some way into the course. As is frequently said, New Keynesian theory is an elaboration of the RBC construct, so all of that needs to be taught first. To take the example of Oxford’s MPhil, we do the Ramsey model, RBC, OLG, growth theory, and the flex price open economy all before New Keynesian economics, and it makes sense to do it this way.

Now this would not be a problem if this other stuff was as obviously interesting and relevant as Keynesian economics is today. However I fear that it is often not presented as such. Take growth theory for example. Now in principle this is all about why some countries are rich and some poor, which should be attention grabbing. But if in practice it amounts to discussing whether the speed of catch up is consistent with the Solow model, it can appear rather irrelevant. With the Ramsey model, I suspect the question of whether the allocation is optimal was not quite what students really wanted to know when they started the course. And if you do not teach the RBC model as the way to explain the business cycle, there is not that much to get excited about.

The problem of lack of motivation is compounded by something that I think those teaching micro often fail to appreciate. Today graduate macro is intrinsically harder than micro. In terms of the techniques involved it is probably no more or less difficult, but what in my experience students find really hard is that everything we teach fits together. Yet until you have done everything it is difficult to understand why we choose to focus on some model features to discuss some issues, but on other aspects of the macroeconomy when talking about different issues. Motivation is useful when the subject is challenging.

Luckily I think recent events, and specifically the debates over how quickly to reduce government debt, have come to the rescue. Now this year I have been experimenting with starting the macro course with the two-period OLG model, instead of first developing the Ramsey model. This has a straightforward advantage, which is that students are familiar with the two-period consumption model from their undergraduate training, so we do not have to hit them with Hamiltonians quite so soon. However I think the main plus is that it allows us to focus on government debt and intergenerational equity right at the beginning of the course. There is an obvious interest among students in debt and intergenerational equity, and the crowding out effects of government debt on capital and therefore output in the simple two-period OLG model (with no wage income in the second period) are also dramatic. Almost certainly overstated as well, but better to start here than with a model where government debt does not matter at all!
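For readers who like to see the mechanics, here is a minimal sketch of that crowding-out result, assuming log utility, Cobb-Douglas production and a constant stock of government bonds per worker. The parameter values are illustrative choices of mine, not taken from the course:

```python
# A minimal sketch (illustrative parameters, not the course's model) of
# crowding out in a two-period OLG economy: log utility, Cobb-Douglas
# production y = A*k**alpha, no wage income in the second period, and a
# constant stock of government bonds b per worker that the young must
# hold alongside capital: k' + b = s(w).
alpha, beta, A = 0.3, 0.96**30, 1.0   # one model period is ~30 years

def steady_state_k(b, k0=0.2, iters=5000):
    """Iterate the capital map k' = s(w(k)) - b to its fixed point."""
    k = k0
    for _ in range(iters):
        w = (1 - alpha) * A * k**alpha    # competitive wage
        s = beta / (1 + beta) * w         # log-utility saving rule
        k = max(s - b, 1e-9)              # bonds crowd out capital
    return k

for b in (0.0, 0.01, 0.03):
    k = steady_state_k(b)
    print(f"debt per worker {b:.2f}: capital {k:.3f}, output {A * k**alpha:.3f}")
```

Raising steady-state debt per worker lowers the capital stock and output, and pushing debt high enough eliminates the steady state altogether, which is the dramatic (and, as noted above, almost certainly overstated) effect.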


We will see at the end of the year whether this turned out to be a good idea. Even if it is, I am still searching for the equivalent motivation when presenting the basic ideas behind flex price new open economy macro. What determines international competitiveness, its relationship to PPP, non-traded goods and home bias are all things students should know about, but I’m not sure it really grabs their attention. Any ideas will be gratefully received.

Wednesday, November 28, 2012

Our Fragile Intellect, by Gerald R. Crabtree


I would be willing to wager that if an average citizen from Athens of 1000 BC were to appear suddenly among us, he or she would be among the brightest and most intellectually alive of our colleagues and companions. We would be surprised by our time-visitor’s memory, broad range of ideas and clear-sighted view of important issues. I would also guess that he or she would be among the most emotionally stable of our friends and colleagues. I do not mean to imply something special about this time in history or the location, but would also make this wager for the ancient inhabitants of Africa, Asia, India or the Americas of perhaps 2,000 to 6,000 years ago. I mean to say simply that we Homo sapiens may have changed as a species in the past several thousand years; I will use 3,000 years to emphasize the potential rapidity of change and to provide a basis for calculations, although dates between 2,000 and 6,000 years ago might suffice equally well. The argument that I will make is that new developments in genetics, anthropology and neurobiology make a clear prediction about our historical past as a species and our possible intellectual fate. The message is simple: our intellectual and emotional abilities are genetically surprisingly fragile.
How many genes are required to carry out our everyday tasks, read a book, care for a loved one, conceive of a just law or compose a song? An accurate answer to these questions is critical to understanding our genetic fragility. The larger the number of genes required, the more susceptible we are as a species to random genetic events that reduce our intellectual and emotional fitness. Recently the means to answer this question have emerged from genetic studies and insights into the human genome. Several lines of evidence, from classic as well as modern genetic studies, have converged to indicate that the number of genes required for normal human intelligence and abilities might be surprisingly large.
As biologists we commonly think in terms of traits controlled by single genes. Indeed, the one-gene one-protein paradigm was a critical part of our education, and the thought that one protein did one thing governed much of the thinking during the past 50 years. Hence, when I recently mentioned to a group of my colleagues that the average Greek of 1000 BC might be intellectually and emotionally superior to our average present-day colleagues, they raised the objection that this was impossible, because the most recent estimates of the frequency of random mutations in yeast are about 3.80 × 10⁻¹⁰ to 8.4 × 10⁻⁹ per base pair per generation [1]. Furthermore, the vast majority of these random mutations do not influence the function of a gene. Hence, if you imagine a small number of intelligence genes that control this trait, our abilities would not be affected during the course of 3,000 years (100 to 150 generations). However, modern genetic studies in mammals are suggesting something very different from this simple analysis.
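To make the colleagues' objection concrete, here is a minimal sketch of the arithmetic, assuming roughly 1.5 kb of mutable coding sequence per gene and a trait controlled by only ten genes (both figures are my assumptions for illustration, not numbers from the essay):

```python
# Rough expected number of mutational hits to a small set of trait
# genes over ~150 generations, using the yeast per-base rates quoted
# above. Gene size (1.5 kb) and gene count (10) are assumptions.
GENE_BP, N_GENES, GENERATIONS = 1_500, 10, 150

for rate in (3.80e-10, 8.4e-9):                 # per bp per generation
    hits = rate * GENE_BP * N_GENES * GENERATIONS
    print(f"rate {rate:.2e}: expected hits ~ {hits:.4f}")
# Both results are far below one hit per lineage, which is why a trait
# controlled by a handful of genes would indeed be stable over 3,000 years.
```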
Perhaps the most effective way to estimate the number of genes in humans needed for full intellectual function comes from studies of X-linked intellectual deficiency (XLID). Because there is but one X chromosome in males, the effects of X chromosome mutations cannot be rescued or compensated by the second copy, as for other chromosomes. Present studies indicate that mutation of about 215 intellectual deficiency genes (ID genes) on the X chromosome gives rise to XLID and/or emotional disability [2,3]. Present estimates indicate that there are 818 human X chromosome protein-coding genes out of a total of 19,586 genes (taken from Vertebrate Gene Annotation version 35 [Vega v35; March 2009]; http://vega.sanger.ac.uk/index.html). Thus, this line of evidence indicates that about ¼ of the genes on the human X chromosome are needed for full intellectual and emotional function. Of the 215 genes on the X chromosome that give rise to XLID when mutated, 86 have been characterized and do not seem to be neomorphs (a gain of inappropriate function). If we derive our estimate from this group of characterized genes, a more conservative estimate is that about 10% of the genes on the X chromosome are necessary for normal intellectual and emotional function. Because mutation of any one of these genes gives rise to compromise, we can state that they do not operate as a robust network, but rather as links on a chain, in which failure of any one link gives rise to deficiency. If the X chromosome is not enriched for genes required for intellectual development, there should be between 2,000 and 5,000 genes needed for intellectual and emotional function. The X chromosome does not appear to be enriched for ID genes, as shown by the distribution of unmapped autosomal loci [4]. In addition, autosomal recessive mental retardation seems to be very heterogeneous even within a genetically similar background, indicating that it is due to mutations in many genes [4,5]. Many of these genes appear to function quite indirectly, such as Brm, one of two ATPase subunits of BAF chromatin regulatory complexes [6]. Although Brm would not normally be considered an intelligence gene or thought to contribute to the origins of abstract thought in humans, even minor point mutations give rise to mild to severe mental retardation [6]. Brm and its homologue SWI2/Snf2 play critical roles in chromatin regulation in many species. A critical point is that a gene need not be human- or brain-specific in its function to be essential for our specific human intellectual abilities. A third estimate of the number of genes that function like links on a chain to support normal intellectual and emotional function can be made by assaying how frequently human genetic diseases in general have an intellectual deficiency component. This analysis is more difficult than it might seem, but a recent study of the OMIM database indicates that about ½ of all human genetic diseases have a neurologic component [7], frequently including some aspect of intellectual deficiency. These figures are consistent with the rough estimate of 2,000 to 5,000 genes required for intellectual and emotional function. With this estimate in hand we can revisit the calculations of how quickly our intellects might change with a reduction in selection.
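As a sketch, the extrapolation behind the 2,000-to-5,000 figure can be reproduced directly from the numbers quoted above; nothing here is assumed beyond those figures:

```python
# Scale the fraction of X-linked genes implicated in XLID up to the
# whole genome (Vega v35 gene counts, as quoted in the text).
X_GENES, TOTAL_GENES = 818, 19_586
XLID_GENES = 215          # X-linked ID genes in total
CHARACTERIZED = 86        # conservative subset, non-neomorphic

upper = XLID_GENES / X_GENES * TOTAL_GENES        # ~26% of X genes
lower = CHARACTERIZED / X_GENES * TOTAL_GENES     # ~10%, conservative

print(f"genome-wide ID genes: roughly {lower:.0f} to {upper:.0f}")
# -> prints roughly 2059 to 5148, i.e. the 2,000-5,000 range used here
```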
If the proper function of 2,000 to 5,000 genes is necessary for our intellectual ability, then in the simplest case the complex traits of emotional and intellectual fitness will drift, under reduced selection, at 2,000 to 5,000 times the rate of a trait specified by a single gene. Independent studies in humans using phenotypic methods have estimated that the germline suffers about one deleterious mutation per average protein-coding gene per 100,000 generations [8-11]. These are mostly point mutations that compromise gene function without totally inactivating it. Recently, direct sequencing of parents and their children has found about 35 to 50 new mutations per genome per generation [8], or about 5,000 new mutations in the past 3,000 years (120 generations). Of these germline mutations only a small fraction (less than 1%) will be harmful, and some vanishingly small fraction will increase fitness. Thus direct sequencing as well as phenotypic analysis indicates that the germline suffers at least one deleterious mutation per average protein-coding gene per 100,000 generations [8-11]. If indeed 2,000 to 5,000 genes are necessary for our intellectual and emotional stability, then about one child in 20 to 50 should suffer a new mutation affecting intellectual function. Another way to state the same information is that every twenty to fifty generations we should sustain a deleterious mutation. Within 3,000 years, or about 120 generations, we have all very likely sustained two or more mutations harmful to our intellectual or emotional stability.
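The paragraph's arithmetic can be laid out explicitly; this minimal sketch uses only the figures given above:

```python
# One deleterious hit per gene per 100,000 generations, applied to the
# estimated 2,000-5,000 ID genes, over ~120 generations (3,000 years).
PER_GENE_RATE = 1 / 100_000   # deleterious mutations per gene per generation
GENERATIONS = 120

for n_genes in (2_000, 5_000):
    per_child = n_genes * PER_GENE_RATE          # new ID hits per birth
    print(f"{n_genes} genes: one child in {1 / per_child:.0f} affected; "
          f"~{per_child * GENERATIONS:.1f} hits per lineage "
          f"over {GENERATIONS} generations")
# -> one child in 20 to 50, and roughly 2 to 6 hits per lineage,
#    matching the 'two or more mutations' claim in the text.
```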
A test of this estimated frequency of deleterious heterozygous mutations was recently published [12]. A survey of 185 human genomes found on average about 100 heterozygous mutations predicted to produce a loss of function. Remarkably, about 20 of these were found to be homozygous. Often these mutations were in genes, such as olfactory receptors, that seem less important in humans and may be deteriorating due to lack of selection. This estimate was made on the basis of exon sequences and hence would miss regulatory mutations, which are much more difficult to predict. Hence, it represents an underestimate of the number of deleterious mutations in current human genomes derived from different human populations with different migration routes over the past 50,000 years. The number of mutations that lead to intellectual deficiency can be derived from examination of the frequency of mental retardation in the children of consanguineous marriages. If our genomes were free of such heterozygous mutations, there would be no tendency for mental retardation to occur in children of consanguineous marriages. Needless to say, this is not the case. For reasons mentioned below, the best estimates are derived from 1st-degree consanguinity, for which there is relatively little information. However, incidental reports indicate that 1st-degree consanguinity (in which ¼ of the genome is reduced to homozygosity) leads to mental retardation in about ¼ to ½ of offspring [13], and lesser degrees of consanguinity to lower frequencies [5]. These figures are roughly consistent with the estimate of 2 or 3 deleterious heterozygous ID mutations per genome. However, heterozygous mutations (affecting only one copy) are generally not considered likely to produce a problem without reduction to homozygosity by consanguinity or random chance.
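A minimal sketch of the consistency check implied here, under the text's own framing (¼ of the genome reduced to homozygosity, so each heterozygous mutation is unmasked with probability ¼):

```python
# If a genome carries m heterozygous deleterious ID mutations, and 1/4
# of the genome is reduced to homozygosity in the child (as stated for
# 1st-degree consanguinity), each mutation is unmasked with p = 1/4.
for m in (1, 2, 3):
    p_affected = 1 - 0.75**m   # chance at least one mutation unmasked
    print(f"{m} mutation(s) -> {p_affected:.0%} of offspring affected")
# -> 25%, 44%, 58%: in the neighbourhood of the quoted 1/4-to-1/2
#    range, hence 'roughly consistent' with 2 or 3 such mutations.
```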
But new discoveries indicate that the human nervous system is uniquely susceptible to heterozygosity. Recently Gage and colleagues reported [14] that Line 1 (L1) repetitive elements in humans transpose and appear to lead to gene inactivation in neurons. The somatic origin of these transpositions was demonstrated by direct sequencing of different brain regions by Faulkner and colleagues [15], who found that other repetitive elements could also transpose and insert into or control critical neurodevelopmental genes. Indeed, these elements have a strong tendency to insert into coding regions, and these insertions lead to transcriptional interruption [16]. Thus, even if they insert into a long intron they can be damaging. Over 7,000 L1 insertions were detected in three individuals. The L1 insertions occur in neural stem cells and lead to clones of neurons with specific insertion sites. Gage and colleagues estimate that each neuron sustains about 80 L1 insertions, indicating that most neurons would have a number of genes whose activity could potentially be affected. These could be beneficial and lead to greater diversity, but this seems less likely based on the prior work of Boeke and colleagues [16]. Transposon inactivation would not be a problem if we were dealing with a single or small number of intelligence genes, rather than the several thousand that could lose function in a specific brain region. By random L1 insertion, heterozygosity is transformed to homozygous loss of function in a clone of neural stem cells, producing a focal defect in the brain. L1 insertions do not occur randomly, but rather target transcribed genes, indicating that they have a high probability of inactivating a gene; indeed, insertion sites in ID genes have already been documented [15].
Thus, if one were heterozygous for a gene involved in formulating speech, and this gene were lost in some of the neural progenitors for the speech regions, one would expect a specific loss of speech function, even if this gene were used for other essential embryonic processes (see below). Many neurons with deleterious insertions might be eliminated by their failure to form effective neural circuits, which could lower their impact on neural functions. One could argue that anything that occurs in Nature must be good for us, but this line of reasoning is quite incorrect: more species have become extinct by natural means than are presently living on our planet, and internal parasites can be quite harmful. A practical implication of these studies is that identical twins will be non-identical genetically in neuronal subpopulations, and hence the contribution of genetic factors will be underestimated in classic identical-twin studies. It is also worth noting that the number of genes that could compromise intellectual function by this means would be much larger than that estimated by the analysis of the X chromosome, because even embryonic-lethal genes could be inactivated by insertion of mobile elements such as L1 transposons. Another less obvious consequence is that this route to homozygosity will make intellectual ability less heritable, and when heritability falls, the selective pressure needed to maintain neurologic traits in general rises. This makes the job of maintaining the 2,000 to 5,000 genes in good working order even more difficult. The simple lesson is that as a species we are almost certainly more susceptible to heterozygous inactivation of ID genes than we had previously understood.
Another route to homozygous inactivation (removing or altering both gene copies) in individuals already bearing a germline mutation in one allele of the estimated 2,000 to 5,000 genes required for intellectual fitness is a feature of the nervous system that has recently come to light. For reasons that are unclear, apparently between 10 and 50% of human neurons are aneuploid, i.e. have chromosomal abnormalities that lead to breaks, losses and duplications of genetic material [17]. Again, it appears that aneuploidy might originate in neural stem cells [18] and hence be clonal, thereby resulting in a focal loss of function in a specific region of the brain. Furthermore, neurons with aneuploid genomes form genetically mosaic neural circuitries as part of the normal organization of the mammalian brain [19]. Aneuploidy of chromosome 21 is of course the basis of Down Syndrome, which is accompanied by a reduction in intellectual function and illustrates the effect of alterations in gene copy number. Copy number variation appears to have a role in several neurologic diseases [20], including autism [21], which for uncertain reasons has become more common in recent years [22]. However, the apparent recent increase in the incidence of autism may simply be due to greater awareness of the condition, and in any event would probably not be impacted by the rate of mutation accumulation within a 50-year period. The above two arguments suggest that focal loss of heterozygosity might be an underlying feature of neurologic diseases that would be difficult to detect by present-day genome sequencing approaches designed to find the genes at fault in human disease. In order to detect focal loss of heterozygosity, neurons from many regions of the brain would need to be sampled and their DNA sequenced. Aneuploidy and transposon insertion are non-germline routes to homozygous inactivation of a gene, and are the reason that 1st-degree consanguinity gives the best estimate of the frequency of heterozygous mutations in the human genome. As is the case with transposon inactivation of genes, clonal aneuploidy would lead to misinterpretation of studies with identical twins, causing one to underestimate the genetic contribution to intellectual or emotional traits. As with retrotransposon insertion, focal aneuploidy would also reduce the heritability of neurologic traits, making them more difficult to maintain by selection.
A third and perhaps even more likely way that inactivation of one of the two copies of an ID gene could be damaging is through compound heterozygosity. The calculations mentioned above and recent population genome sequencing studies [8] suggest that most of us are heterozygous for two or more of the 2,000 to 5,000 genes that appear to be required for intellectual function. This brings up the complex issue of cooperativity between the ID genes. Presently, there are no easy ways of defining gene pairs that lead to reduced function when one allele of both genes is defective. Heterozygous inactivation of two or more genes encoding proteins within the same biochemical pathway, genetic circuit or protein complex is known to produce reduced function. One recent example is that human intellectual deficiency is produced by mutation of at least six subunits of nBAF complexes [6,23,24], which are large ATP-dependent chromatin remodeling complexes found in a specialized assembly in the nervous system [25]. It seems quite likely that compound heterozygosity of genes encoding subunits within these complexes would reduce intellectual fitness, and indeed this is the case for nBAF subunits [24]. In general, it is quite difficult to know whether loss of one allele of, for example, an enzyme removing a neurotoxic intermediate would exaggerate or lead to defects in an individual heterozygous for a gene required for dendritic morphogenesis. These considerations make human genetic studies designed to find the genes at fault in human cognitive disorders quite difficult, yet double or compound heterozygosity would almost certainly contribute to reduced function among the estimated 2,000 to 5,000 genes required for full intellectual and emotional function. One could argue that this group of genes operates as a robust network; however, this cannot be the case, since the criterion used for selecting these genes is that inactivation of any one of the 2,000 to 5,000 leads to reduced function, demonstrating that they function like links on a chain rather than as a robust, failsafe network. Reduced function due to double or compound heterozygosity may be expected to operate exponentially over time, as deleterious heterozygous mutations accumulate in our genome at a linear rate.

If we are losing emotional and intellectual traits, how did we get them in the first place? Needless to say, this is one of the most important questions of modern anthropology and the subject of much investigation and debate.
I can only speculate, but it seems necessary, and also just plain fun, to step outside my comfort zone and comment. One clear fact is that the expansion of the human frontal cortex and endocranial volume (Fig 1), which is thought to have given humanity our capacity for abstract thought, occurred between about 50,000 and 500,000 years ago [26,27] in our prehistoric African ancestors. These ancestors did not have a written language, and for most of their history probably did not have much of a verbal language [26,28]. They also did not have organized agriculture that permitted life at high density in cities and societies. Thus, the selective pressures that gave us our capacity for abstract thought and human mental characteristics operated among hunter-gatherers living in dispersed bands, nothing like our present-day high-density, supportive societies. It also seems clear that both written and verbal language first appeared well after endocranial expansion (Fig 1), and hence could not have been a driving force in achieving our present brain size (blue area in Fig 1) about 50,000 years ago. Furthermore, it seems that our intellectual capacity has not changed very much in the last 50,000 years, since our African ancestors began their migrations. How do we know this? Because societies with different migration routes, which experienced quite different environments, seem to have near-identical intellectual capacities. For example, written language was independently invented by the group with the longest migration path as hunter-gatherers, the Indians of Middle and South America, and also independently by a people with one of the shortest migration paths and the earliest cities, the Sumerians, in what is now Iraq. In addition, whether a migration group lived a high-density city life made possible by agriculture, or as dispersed hunter-gatherers, did not greatly influence their intellectual development. If we are to understand how 2,000 to 5,000 genes were optimized for abstract thought to produce our present abilities, we almost certainly have to look to this period 50,000 to 500,000 years ago, and to ancestors common to all humans on earth today. Yet somehow the selective pressures that allowed survival as dispersed hunter-gatherers led to the evolution of a brain capable of writing symphonies and performing higher mathematics. Almost certainly our present-day abilities are a collateral effect of being selected for more fundamental tasks.
Because it seems clear that we did not develop the ability for abstract thought by being selected for abstract thought itself, it must be that life as a dispersed hunter-gatherer was more intellectually demanding than we would commonly think. The fact that the expansion of the frontal cortex, and with it the capability for abstract thought, was driven by evolutionary forces that appear to have operated before the development of verbal or written language (Fig 1) might seem an affront to people like myself who make our living by writing and speaking. We seem to be forced to the conclusion that life as a hunter-gatherer required at least as much abstract thought as operating successfully in our present society. We know that most of our ancestors lived the dispersed hunter-gatherer life until about 5- to 10-thousand years ago, when the invention of agriculture led to our high-density societies, written language and a lifestyle something like what we have today. Regardless of how we have lived since we began our migrations, hunter-gatherer or cosmopolitan, we are intellectually about the same. Surprisingly, it seems that if one is a good architect, mathematician or banker, these skills were an offshoot of the evolutionary perfection of skills leading to our ancestors' survival as nonverbal, dispersed hunter-gatherers.
To understand the extremes of selection that must have occurred when our ancestors stopped relying on speed, strength and agility to survive and began to survive by using thought, we have to consider the difficulty of optimizing 2,000 to 5,000 genes. For the reasons mentioned above, it seems that retrotransposon insertion and aneuploidy of neurons substantially reduce the heritability of neuronal traits. Without going into the mathematics, when the heritability of a trait is reduced, the selective pressure required to maintain the trait is increased. In addition, one would need to sum the selective pressure for each of the genes operating independently to produce the trait. Thus, extraordinary selective pressure was necessary to optimize and maintain such a large group of intelligence genes. This optimization probably occurred in a world where every individual was exposed to nature’s raw selective mechanisms on a daily basis. In the transition to surviving by thinking, most people (our non-ancestors) probably died simply due to errors of judgment or a lack of an intuitive, non-verbal comprehension of things such as the aerodynamics and gyroscopic stabilization of a spear while hunting a large, dangerous animal.
Figure 1. Expansion of endocranial volume during the past 2.5 million years among Homo sapiens ancestors. Modified from R.G. Klein (ref 24). Note that language follows the expansion.
One might think that our modern abilities could not have originated from a time 50,000 to 500,000 years ago and from selection based on hunter-gatherer abilities. We think of the common hunter-gatherer abilities as crude, unrefined and not intellectually challenging; how could our modern abilities be an offshoot of being selected in this way? It seems that the field of artificial intelligence may be making a significant contribution to this question. When the field was first born several decades ago, it promised household robots that would do all our daily tasks: cook meals; take the dishes off the table, wash them and put them away; mow the lawn; fix that leaky rain gutter; repair a child’s toy; and bring us freshly cooked croissants and coffee in the morning. Needless to say, we do not have these robots now, and probably none of the readers of this piece will ever benefit from such a household robot. (Although one AI expert I consulted said computers might have this kind of computational power in 10 years.) This is true even though such a robot would have the commercial value of the world’s automotive industry, and hence there is immense impetus to design one. Paradoxically, things that we consider intellectual, such as playing chess, winning at Jeopardy, flying a jet plane or driving a car, are fairly straightforward for a computer and do not require even a small fraction of the computational power required for common human actions. The point is that selection could easily have operated on common (but computationally complex) tasks, like building shelter, with the result of allowing us to do more computationally simple tasks, like playing chess. Indeed, mutation of any one of 2,000 to 5,000 genes prevents us from effectively doing these common everyday tasks, and selection for the ability to perform them would tend to optimize the function of the entire group of genes. But, as mentioned above, the selective pressure would have to be remarkable.
When might we have begun to lose these abilities? Most likely we started our slide with the invention of agriculture, which enabled high-density living in cities. Selective pressure then turned to resistance to the diseases that naturally grow out of high-density, urban living. A principle of genetics is that when one selects highly for one trait (such as resistance to infectious disease), other traits are inadvertently selected against. It is also quite likely that the need for intelligence was reduced as we began to live in supportive, high-density cities that made up for lapses of judgment or failures of comprehension. Community life would, I believe, tend to reduce the selective pressure placed on every individual, every day of their life; indeed, that's why I prefer to live in such a society.
Several considerations could weaken the argument that intellectual and emotional fitness are slowly decaying. The most significant is the assumption that modern society has reduced selective pressure for intellectual fitness. Even if one agrees with the assumption that selection for intellectual fitness has decreased, selective pressure for the genes required for intellectual and emotional function could originate from other sources. Probably the most significant is that genes used for intellectual development could also be needed for early development or even fertility. Indeed, this is true of some of the genes required for diverse cellular functions, where retardation or emotional compromise is found only with alleles that partially impair the function of the gene. An estimate of the frequency with which XLID genes are also required for other functions can be derived from the observation that about ½ of XLID patients have syndromes suggesting these genes are used in the development or function of other tissues or organs. However, these other syndromic features appear not to be lethal, and many do not impair reproduction; hence, there would be little limit on the ability of mutations in these genes to become prevalent in the human population without selection. The estimate that 215 of 818 genes on the X chromosome are required for intellectual function already accounts for the possible use of these genes in early development, because the estimates are derived from viable individuals. While multiple usage of genes could slow the rate of accumulation of mutations in intellectual fitness genes, if the estimate of the number of genes required is correct, the rate of accumulation of deleterious mutations is correct, and selection is only slightly relaxed, then one would still conclude that nearly all of us are compromised compared to our ancient ancestors of Asia, Africa, Europe and the Americas of, say, 3,000 to 6,000 years ago.

Another common counterargument to the possibility that we are losing our intellectual fitness, raised by my colleagues, is that we are under constant selection for our intellectual traits. Presumably, musical ability, employment and emotional stability may all confer mating advantages that would reduce the rate at which mutations affecting these traits become fixed in our genome. This argument is clearly correct, but I fear it does not take into account the extreme selection that must operate to maintain traits dependent upon thousands of genes, in the face of the relatively low heritability of those traits due to non-germline inactivation operating within the group of genes. Needless to say, a hunter-gatherer who did not correctly conceive a solution to providing food or shelter probably died along with his or her progeny, while a modern Wall Street executive who made a similar conceptual mistake would receive a substantial bonus.
Yet another, less compelling counterargument goes like this: our generation has an intricate written language, uses computers, drives cars, designs spacecraft and plays chess, which the ancients of several thousand years ago did not; hence, we must be smarter than they were. This argument presumes that operating a computer or playing chess is more complex than building a house, farming, surviving in the jungle, or washing the dishes and putting them away. However, as mentioned above, our nervous system evolved until recently to do common but computationally complex tasks very well; hence our modern abilities are just a retrofit of modes of thought that we were selected for as hunter-gatherers, until the very recent invention of farming. Furthermore, the faults in this argument are easily revealed by the fact that an inexpensive hand-held computer can beat all but the best chess players in the world. In addition, relatively little computational power is needed to fly a plane or drive a car. In contrast, the computational complexity of many common practical tasks is revealed by the immense difficulty of building a computer that could direct a household robot to do what humans do very well. Although the analogy between a computer and a brain is frequently drawn, it is not a very good one. Among other differences, our nervous system has far more computational units than any existing computer, operates in analogue mode(s) and is electrochemical in nature. Humans play chess and accomplish other tasks using different strategies than computers. Nevertheless, the difficulty of reproducing human tasks is one measure of how computationally complex a given task might be and what its intrinsic value might be. This is not to negate in any way rare intellectual skills that are very valuable to society.
In addition to common household tasks, another example of a very difficult computational problem that humans do very well is the game Foldit, in which players use their spatial intuition to predict protein structures [29]. Foldit has been described as resembling a Rubik's cube with a thousand faces. Yet humans beat supercomputers at this game, much in the same way that we can take the dishes off the table, wash them and put them away better than a supercomputer. Almost certainly we are very good at Foldit because the game uses spatial reasoning and skills that were perfected and selected for in our non-verbal, hunter-gatherer ancestors 50,000 to 500,000 years ago. In contrast, humans are bad chess players, probably because our brains were not selected for this kind of game, designed to perfect skills for organized warfare. Organized warfare, being a communal activity, was not invented until after our brains had undergone nearly all evolutionary selection, at a time when it was too late to perfect chess-playing (or warfare) abilities. As a result we are rather poor warriors, but we are excellent at the spatial reasoning critical to Foldit, building shelter and other common tasks. If we had survived for the past million years based on our chess-playing skills, we would almost certainly play a master game in far less than one second. Indeed, the only way the game could be made challenging would be to have 1,000 pieces, each of which could make a dozen or more different moves. In other words, a chess board would come to look like a table full of dirty dishes that needed to be washed and put away: a truly massive intellectual exercise, which should not be diminished by the fact that many of us can do it. It seems too obvious to state, but the tautology applies: our brains are good at the things they have been selected to be good at. Many kinds of modern, refined intellectual activity (by which our children are judged) may not necessarily require more innovation, synthesis and creativity than more ancient forms. Inventing the bow-and-arrow, which seems to have occurred once, about 40,000 years ago, was probably as complex an intellectual task as inventing language or coming up with the theory of relativity. Our intellectual abilities were highly selected, at immense human expense, to accomplish seemingly common tasks that require the perfected actions of 2,000 to 5,000 genes.
If the above argument is correct, one would predict that individuals in undisturbed hunter-gatherer societies would be more intellectually capable than those of us in more modern, urban, distributive societies. Certainly Jared Diamond, who has spent a 50-year career among one of the few remaining such societies, feels that this is the case, but he also acknowledges the difficulty of testing the idea. Because all remaining hunter-gatherer societies are restricted geographically, they have higher frequencies of reduction of heterozygous mutations to homozygosity, which, as mentioned above, is a particular concern when large numbers of genes are at issue.
The hypothesis that genes critical to intellectual function are decaying could be tested by a form of genetic triangulation (Fig 2). The sequences of the genomes of many individuals whose last common ancestors spanned the period from the present day to 5,000 years ago should produce an estimate of the rapidity of change and the level of selection operating on these genomes at various time intervals during this 5,000-year period. Five thousand years would probably be an adequate interval, since it would span the invention of agriculture for several population groups, which enabled high-density living in cities and the shift to selection for resistance to infection. To obtain the required fineness and discrimination, many genomes would need to be sequenced. If we focus on the interval between 5,000 years ago and the present day, we would need 100 genome sequences for a 50-year fineness map. Since each generation produces 2,000 to 4,000 signature new mutations, these could guide the temporal ordering. If the genes that control our intellectual development act like links on a chain, even one conservative mutation in any of 2,000 to 5,000 genes would diminish our intellectual abilities, yet be difficult to detect with certainty. Because mutations that control the evolution of specific characteristics have often been found in regulatory rather than coding regions, full genome sequences would need to be determined. In addition, many of the mutations would almost certainly produce weak alleles that might erode our abilities in subtle ways. However, as a first pass, an examination of the coding regions of XLID genes, of those from the OMIM database having ID phenotypes, and of memory and learning genes from other organisms would be a good place to begin, and would give estimates of the rate of emergence of alleles that might be deleterious in this large set of genes. I would be very happy to learn from this test that there is no substance to my argument.
Fig 2. Genetic triangulation to measure the rate of change of ID genes over the past 5,000 years, based on genome sequences of present-day individuals with last common ancestors separated by specific times, Δt (500 years in this case, for illustration). The bar at the top indicates the transition from a hunter-gatherer to a higher-density lifestyle, when selection based on resistance to infection might begin to dominate.

If, on the other hand, such a study found accelerating rates of accumulation of deleterious alleles over the past several thousand years, then we would have to think about these issues more seriously. But we would not have to think too fast. One does not need to imagine a day when we might no longer be able to comprehend the problem, or the means to do anything about the slow decay in the genes underlying our intellectual fitness. Nor do we need to have visions of the world’s population docilely watching reruns on televisions that they can no longer understand or build. It is exceedingly unlikely that one hundred or two hundred years will make any difference at the rate of change that might be occurring. Remarkably, it seems that while our genomes are fragile and built like a chain with many links, our society is robust, almost entirely by virtue of education, which allows strengths to be rapidly distributed to all members. The sciences have come so far in the past hundred years that we can safely predict that the accelerating rate of knowledge accumulation within our intellectually robust society will lead to the solution of this potentially very difficult problem by socially and morally acceptable means.

Tuesday, November 20, 2012

Faber, Marc Faber!!!!

Many analysts say Marc Faber is the most outspoken guy around and should not be taken seriously, but there he was in Hong Kong, where he has long been resident, among the great presentations from the biggest players in gold and silver at the annual London Bullion Market Association conference.


"When the People's Bank speaks it pays to listen," as Tom Kendall of Credit Suisse put it in his conference summary.
"Especially when it talks about gold."

But the star of the show, at least by popular vote at Tuesday's close, was Swiss ex-pat and long-time Asian resident, Marc Faber.
If you know his work, you can guess his theme - what doom and gloom mean for the boom in gold. Starting, of course, with the unintended consequences of constant government meddling.

"Continuous interventions by governments with fiscal and monetary measures, instead of smoothing the business cycle, have actually led to greater instability. The short-term fixes of the New-Keynesians have had a very negative impact, particularly in the United States."
Faber's big beef is with US Federal Reserve chairman Ben Bernanke. But "numerous Fed members make Mr. Bernanke look like a hawk," he said.  Nor does it matter who is running the White House, because, thanks to welfare and military budgets, "spending is out of control, tax is low, and most spending is mandatory."
So Federal Reserve policy is inevitable, Faber went on, and while we haven't yet got the negative interest rates demanded by Fed member Janet Yellen, we have got negative real interest rates. The US and the West had sub-inflation interest rates in the 1970s too, and we got a boom in commodity prices then as well. But exchange controls are now missing from the developed world, and that makes one important point, said Marc Faber:
"Ben Bernanke can drop as many Dollar bills as he likes into this room," he told the LBMA conference in Hong Kong,
"but what he doesn't know is what we will do with them. His helicopter drop will not lead to an even increase in all prices. Sometimes it will be commodities, sometimes precious metals, collectibles, wages or financial assets. [More importantly], the doors to this room are not locked. And so money flows out and has an impact elsewhere - not in this room."
That elsewhere has of course been emerging Asia, most notably China (see our video pick of the Top 5 Slides from LBMA 2012 on YouTube for more).  But back home, these negative interest rates are forcing people to speculate, to do something with the money, said Faber.
These rates are artificially low, well below the 200-year average.
That's doing horrible things to the United States' domestic savings and, therefore, capital investment. "You don't become rich by consuming.  You need capital formation," said Marc Faber.  Unlike investing in a factory to earn profits and repay your loan,  "Consumer credit is totally different. You spend it once, and you have merely advanced expenditure from the future."
So far, so typical for the doom-n-gloomster. Noting total US debt at 379% of GDP, "if we included the unfunded liabilities then this chart would jump to the fifth floor of this hotel!" said Faber, waving his red laser pointer at the ceiling. After the private sector "responded rationally" to the runaway credit growth of 20% by collapsing credit in 2007-2009, the US government stepped in to take over - and "Government credit is the most unproductive credit of all."
In short, the easy money and bail-outs which got us here - from the Fed's rescue of Goldman Sachs during the Tequila Crisis in Mexican debt, through LTCM in the late '90s and then the Tech Stock boom and bust - have had serious consequences. "Bubbles are a disaster from a social point of view," said Faber. Looking at his charts of the generational shift in wealth, it would take a Fed voting member to disagree.
"Only at the Federal Reserve they don't eat or drive!" exclaimed Faber as he turned on the central bank's inflation target, produced by "the Ministry of Truth, the Bureau of Labor Studies. It is a complete fraud." But even as the United States' persistently mistaken policies lead to the emerging powers side-stepping it ("We are in a new world. China's exports to commodity-producing countries - such as Australia and Brazil - are greater than its exports to the United States. Exports from South Korea to commodity-exporting countries are greater than its exports to the US and Europe combined!"), there will come a slowdown in commodity demand and leveling off in prices in time.
"I would rather be long precious metals than industrial commodities," said Marc Faber.
Which was of course what most people at the LBMA conference wanted to hear. Less welcome was his warning not to hold gold in the United States or even Switzerland. Because "if gold is owned by a minority, then in a crisis the government will take it away." But even Faber said that some of his 25% personal allocation to precious metals is still in his home country, rather than in Asia where he's lived for almost 30 years.
Once the deflationary collapse finally arrives (the impossible question is knowing when, said Faber), there will be great opportunities in real and productive assets. But until then, and as for the Gold Price ahead, "Gold is not anywhere close to a bubble stage," he concluded. And every time he thinks about selling to take profit?
"I keep in my toilet a picture of Mr. Bernanke. And every time I think about selling my gold, I look at it and I know better!"
Please Note: This article is to inform your thinking, not lead it. Only you can decide the best place for your money, and any decision you make will put your money at risk. Information or data included here may have already been overtaken by events – and must be verified elsewhere – should you choose to act on it.

Monday, November 19, 2012

MAP

I would like to thank all the visitors who take some time out of their busy lives to read this blog. My advice is that you should also read the previous (earlier) posts; some of them are neither technical nor mechanical if you read from the beginning. The blog was not dedicated to economics or finance initially, because when I started I was a graduating engineer (also interested in markets) who had just had a serious heartbreak (which is kind of obvious from some of the earlier posts). Gradually, though, I became very keen on financial markets and invested a lot of time trying to understand them and make sense of them. I will be learning all my life; each passing moment tries to teach us all a lot, and learning works like osmosis.
I will always try to bring in interesting, relevant topics, with my views alongside those of international thinkers. The contents will broadly be finance and economics, but now and then you will get a lighter topic.
And do read the previous posts; they will certainly not be a waste of time.

Sunday, November 18, 2012

Luke Johnson on roll-up riches

One of the ways financiers make money is by doing a “roll-up”: a series of horizontal acquisitions in a fragmented industry. Typically these are industrial sectors that are obscure, unloved and overlooked. I first saw this technique in action among various listed companies I researched as a stockbroking analyst in the 1980s. For example, a series of companies sprang up that consolidated funeral homes, including Kenyon Securities, Hodgson Holdings and Great Southern Group. They rationalised the use of hearses and crematoria, saved money on administration and marketing, and arbitraged the different earnings multiples between private and public companies.

I have been involved as a principal in roll-ups in healthcare, recruitment, food distribution, and financial and marketing services. All did well for investors, which is fairly remarkable in itself. Of course, not every transaction worked – after all, between the various companies we probably executed more than 50 mergers. I haven’t performed a rigorous analysis, but I suspect at least one in eight deals destroyed rather than generated value. But there were decent returns overall.

A roll-up should possess certain features if it is to succeed. First, the industry in question should not already be consolidated; there must be plenty of small family-run companies to buy. The sector must not be so niche that monopoly concerns arise. Second, the businesses to be acquired should be available cheaply – ideally on profit multiples of three or four. Third, the buying vehicle should develop an effective formula for finding, negotiating and integrating acquisitions. Fourth, the acquirer must have the wherewithal to carry out the deals – cash and/or shares that vendors will accept. Finally, the enlarged group should be able to achieve savings and economies of scale, be it through buying, distribution or administration.

There are plenty of pitfalls, because buying companies is a risky undertaking. One can simply pay too much, or rush deals and do insufficient due diligence. Other cases suffer from culture clashes, or earn-out structures that unravel badly. Frequently the purchased companies are not amalgamated, so the supposed advantages of the whole exercise are lost. And if the cash for acquisitions is borrowed and the acquired companies don’t deliver, the buyer can breach covenants with the bank and end up bust.

I once made the mistake of buying into a roll-up that had gone wrong. It looked cheap. In fact it was a wasteland of debt, disgruntled ex-owners, litigation, subsidiaries that had not been integrated and vendors limbering up to compete with their old businesses. Unlike an organically grown business, it was a jumble of hastily assembled operations with little logic and even less direction or soul. Purely financial constructs like that rarely succeed: there needs to be an underlying coherence and rationale.

The American master of roll-ups is probably the Floridian entrepreneur Wayne Huizenga. He co-founded Waste Management, Blockbuster Video and AutoNation. By targeting garbage collection, video rental and car retailing, he found three unglamorous industries with no dominant businesses. He was able to use highly rated quoted paper to absorb smaller rivals on lower multiples, thereby enhancing earnings. But he appears to have lost his magic touch: witness the shares of his latest venture, Swisher Hygiene.

There have been fewer buy-and-build ventures in recent years.
Many markets are now dominated by a handful of big organisations, with little opportunity to acquire family companies at fair prices. Many of the public companies using this strategy failed, so private equity firms have taken the lead. They initially back a “platform” company and then add bolt-on acquisitions. However, it is much harder now to find immature sectors where the right improvements can be delivered through merger sprees. Roll-ups work if there is a genuine operational formula that can be applied to the acquired companies, and duplicate overhead costs that can be eliminated. Accomplishing this requires expert leaders who focus on identifying the right partner companies and merging them properly, while resisting the temptation to get carried away.
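As a hypothetical illustration of the multiple arbitrage described above (the numbers are invented for the sketch, not taken from the article): a quoted buyer on a high price/earnings multiple pays for a private target on a low multiple with newly issued shares, and earnings per share rise.

```python
# Hypothetical roll-up deal: all figures invented for illustration.
buyer_earnings, buyer_pe, buyer_shares = 10.0, 12.0, 100.0  # £m, P/E, m shares
target_earnings, target_pe = 2.0, 4.0                       # £m, P/E

consideration = target_earnings * target_pe        # £8m, paid in shares
share_price = buyer_earnings * buyer_pe / buyer_shares
new_shares = consideration / share_price           # shares issued to vendor

eps_before = buyer_earnings / buyer_shares
eps_after = (buyer_earnings + target_earnings) / (buyer_shares + new_shares)
print(f"EPS before: {eps_before:.3f}, after: {eps_after:.3f}")
# -> EPS rises (~0.100 to ~0.112): the deal is earnings-accretive purely
#    because the buyer's paper is rated more highly than the target's.
```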

This article is by Luke Johnson, FT.
lukej@riskcapitalpartners.co.uk

Merkel Union messed up

Last month, Athens; next week, Lisbon: by dint of austerity tourism, Germany’s Chancellor Angela Merkel is announcing her near-term plan for the euro. So long as governments in the periphery are at least attempting to abide by the centre’s conditions, the European Central Bank will prevent cliff dwellers from toppling into the abyss. There will be drama, of course; witness the brinkmanship on Greece. But neither sovereign bankruptcy nor banking bust will be allowed to trigger a crack-up.

The chancellor’s uneasy peace is intended to free Europe to focus on its longer-term challenge: how to reform the monetary union so that it is sound in the future. By all appearances, Ms Merkel dominates this design process. Her principle, that support for the periphery must be conditional upon control from the centre, is enshrined in both the new fiscal compact and the drive towards centralised bank supervision. In an ironic echo of the euro’s founding, however, the chancellor may be about to allow an imprudent concession, poisoning the monetary union when the next crisis hits.

When the euro was created, the Germans thought they were getting an ECB in the image of the Bundesbank. In one sense this was true: the ECB has outdone the Bundesbank in holding down German inflation. But in terms of governance, the Germans were kidding themselves. The ECB’s governing council works on a one member, one vote system. Only two out of 23 members are German. On paper, at least, the president of the Bundesbank is no more powerful than the governor of the Bank of Cyprus or the Bank of Malta.

In ordinary times, this governance structure worked fine. Germans got the price stability they wanted, along with a gratifying feeling of transcending primitive national loyalties. But in extraordinary times, Germans who revere the Bundesbank have been shocked. As the crisis has forced the ECB to stretch its mandate, it has embraced a policy of bailouts that the Bundesbank detests. The fact that the ECB’s policy is right does not alter the reality that many Germans hate it. This is a recipe for trouble.

At some point, banks in peripheral Europe are bound to look weak again. To avoid an expensive bailout, German officials will presumably press for them to shed risk and thicken capital cushions. But to avoid painful deleveraging, officials from the periphery are likely to take the opposite position, preferring charitable forbearance. Given the ECB’s governance structure, not to mention the inherent difficulty of confronting powerful bankers, it seems overwhelmingly likely that the periphery will prevail and supervision will err towards softness.

Now suppose that German misgivings prove justified: unchecked bank risk-taking leads to a systemic bust. On any plausible projection, the peripheral governments will still be labouring under heavy debt burdens and will lack the capacity to rescue their own banks. The cost of the bailout will then fall on a central rescue fund, most likely the European Stability Mechanism. Germany provides a quarter of the ESM’s money, so the Germans will be expected to pay for much of the crisis they tried vainly to prevent.

As Thomas Mayer of Deutsche Bank argues in a new book, this cannot be tenable. Germans are already unhappy with the ECB’s monetary policy, even though inflation remains dormant. But if its governance structure prevents Germany heading off banking crises that Germans must nonetheless pay for, unhappiness will turn into fury. Consider the anger of Americans and Britons about bailouts for plutocratic bankers. Now imagine their feelings if the bankers were foreign.

At the start of the 1990s, Germany agreed to the ECB’s governance structure because its postwar leaders were reluctant to press their national interest and because they wanted European acquiescence in German unification. Today, Germany is more assertive and unification is history. It is naive to expect Germans to tolerate governance arrangements that impose high potential taxation without commensurate representation. After all, one country, one vote organisations have been tried before. Just look at the UN, where the principle serves merely to ensure that powerful countries treat the General Assembly with contempt.

It is not too late for Europe to avoid this error. The ECB plans to create a new board to oversee its bank supervisors and, although this will be subservient to the ECB’s governing council, it will be afforded great autonomy. There is no reason why the new bank supervision board cannot have a voting system that favours richer countries, modelled on the weighted voting at the World Bank and International Monetary Fund. Of course, to embrace weighted voting would be to admit that Europe has failed to realise its post-nationalist ideals. But the crisis has made that clear already.

Tuesday, November 13, 2012

The Mantle of Science

Scientism is the profoundly unscientific attempt to transfer uncritically the methodology of the physical sciences to the study of human action. Both fields of inquiry must, it is true, be studied by the use of reason—the mind’s identification of reality. But then it becomes crucially important, in reason, not to neglect the critical attribute of human action: that, alone in nature, human beings possess a rational consciousness. Stones, molecules, planets cannot choose their courses; their behavior is strictly and mechanically determined for them. Only human beings possess free will and consciousness: for they are conscious, and they can, and indeed must, choose their course of action. To ignore this primordial fact about the nature of man—to ignore his volition, his free will—is to misconstrue the facts of reality and therefore to be profoundly and radically unscientific.

Man’s necessity to choose means that, at any given time, he is acting to bring about some end in the immediate or distant future; that is, that he has purposes. The steps that he takes to achieve his ends are his means. Man is born with no innate knowledge of what ends to choose or how to use which means to attain them. Having no inborn knowledge of how to survive and prosper, he must learn what ends and means to adopt, and he is liable to make errors along the way. But only his reasoning mind can show him his goals and how to attain them.

We have already begun to build the first blocks of the many-storied edifice of the true sciences of man—and they are all grounded on the fact of man’s volition. On the formal fact that man uses means to attain ends we ground the science of praxeology, or economics; psychology is the study of how and why man chooses the contents of his ends; technology tells what concrete means will lead to various ends; and ethics employs all the data of the various sciences to guide man toward the ends he should seek to attain, and therefore, by imputation, toward his proper means. None of these disciplines can make any sense whatever on scientistic premises. If men are like stones, if they are not purposive beings and do not strive for ends, then there is no economics, no psychology, no ethics, no technology, no science of man whatever.

Before proceeding further, we must pause to consider the validity of free will, for it is curious that the determinist dogma has so often been accepted as the uniquely scientific position. And while many philosophers have demonstrated the existence of free will, the concept has all too rarely been applied to the “social sciences.” In the first place, each human being knows universally from introspection that he chooses. The positivists and behaviorists may scoff at introspection all they wish, but it remains true that the introspective knowledge of a conscious man that he is conscious and acts is a fact of reality. What, indeed, do the determinists have to offer to set against introspective fact? Only a poor and misleading analogy from the physical sciences. It is true that all mindless matter is determined and purposeless. But it is highly inappropriate, and moreover question-begging, simply and uncritically to apply the model of physics to man.

Why, indeed, should we accept determinism in nature? The reason we say that things are determined is that every existing thing must have a specific existence. Having a specific existence, it must have certain definite, definable, delimitable attributes; that is, every thing must have a specific nature. Every being, then, can act or behave only in accordance with its nature, and any two beings can interact only in accord with their respective natures. Therefore, the actions of every being are caused by, determined by, its nature. But while most things have no consciousness and therefore pursue no goals, it is an essential attribute of man’s nature that he has consciousness, and therefore that his actions are self-determined by the choices his mind makes.

At best, the application of determinism to man is just an agenda for the future. After several centuries of arrogant proclamations, no determinist has come up with anything like a theory determining all of men’s actions. Surely the burden of proof must rest on the one advancing a theory, particularly when the theory contradicts man’s primary impressions. Surely we can, at the very least, tell the determinists to keep quiet until they can offer their determinations—including, of course, their advance determinations of each of our reactions to their determining theory. But there is far more that can be said. For determinism, as applied to man, is a self-contradictory thesis, since the man who employs it relies implicitly on the existence of free will.
If we are determined in the ideas we accept, then X, the determinist, is determined to believe in determinism, while Y, the believer in free will, is also determined to believe in his own doctrine. Since man’s mind is, according to determinism, not free to think and come to conclusions about reality, it is absurd for X to try to convince Y or anyone else of the truth of determinism. In short, the determinist must rely, for the spread of his ideas, on the nondetermined, free-will choices of others, on their free will to adopt or reject ideas. In the same way, the various brands of determinists—behaviorists, positivists, Marxists, and so on—implicitly claim special exemption for themselves from their own determined systems. But if a man cannot affirm a proposition without employing its negation, he is not only caught in an inextricable self-contradiction; he is conceding to the negation the status of an axiom.

A corollary self-contradiction: the determinists profess to be able, some day, to determine what man’s choices and actions will be. But, on their own grounds, their own knowledge of this determining theory is itself determined. How then can they aspire to know all, if the extent of their own knowledge is itself determined, and therefore arbitrarily delimited? In fact, if our ideas are determined, then we have no way of freely revising our judgments and of learning truth—whether the truth of determinism or of anything else.
Determinists often imply that a man’s ideas are necessarily determined by the ideas of others, of “society.” Yet A and B can hear the same idea propounded; A can adopt it as valid while B will not. Each man, therefore, has the free choice of adopting or not adopting an idea or value. It is true that many men may uncritically adopt the ideas of others; yet this process cannot regress infinitely. At some point in time, the idea originated, that is, the idea was not taken from others, but was arrived at by some mind independently and creatively. This is logically necessary for any given idea. “Society,” therefore, cannot dictate ideas. If someone grows up in a world where people generally believe that “all redheads are demons,” he is free, as he grows up, to rethink the problem and arrive at a different conclusion. If this were not true, ideas, once adopted, could never have been changed.
We conclude, therefore, that true science decrees determinism for physical nature and free will for man, and for the same reason: that every thing must act in accordance with its specific nature. And since men are free to adopt ideas and to act upon them, it is never events or stimuli external to the mind that cause its ideas; rather, the mind freely adopts ideas about external events. A savage, an infant, and a civilized man will each react in entirely different ways to the sight of the same stimulus—be it a fountain pen, an alarm clock, or a machine gun, for each mind has different ideas about the object’s meaning and qualities. Let us therefore never again say that the Great Depression of the 1930s caused men to adopt socialism or interventionism (or that poverty causes people to adopt Communism). The depression existed, and men were moved to think about this striking event; but that they adopted socialism or its equivalent as the way out was not determined by the event; they might just as well have chosen laissez-faire or Buddhism or any other attempted solution. The deciding factor was the idea that people chose to adopt.

What led the people to adopt particular ideas? Here the historian may enumerate and weigh various factors, but he must always stop short at the ultimate freedom of the will. Thus, in any given matter, a person may freely decide either to think about a problem independently or to accept uncritically the ideas offered by others. Certainly, the bulk of the people, especially in abstract matters, choose to follow the ideas offered by the intellectuals. At the time of the Great Depression, there was a host of intellectuals offering the nostrum of statism or socialism as a cure for the depression, while very few suggested laissez-faire or absolute monarchy.

New Study

Why Nations Fail (WNF) by Professors Daron Acemoglu and James Robinson (AR) has deservedly gained right of entry to the pantheon of Big Books on economic development.

Like the pantheon’s other occupants – most recently Jared Diamond’s Guns, Germs and Steel (GGS) and Ian Morris’ Why the West Rules - For Now (WWR) – WNF tackles one of the biggest questions facing humanity: why some countries are rich and others poor. It is daringly ambitious in the parsimony of its answer; its scholarship is serious while avoiding the modern bane of narrow erudition; and, above all, it offers a deep and plausible insight about development.

GGS was dazzlingly pioneering. WWR had the additional virtue of heavy subject matter being leavened by light and fluid prose, thanks to frequent appearances by Asimov, Kipling, Dickens, and the like. WNF does not draw upon as breathtakingly broad a range of disciplines as either GGS or WWR. This stems from the different timescales of enquiry. Professor Diamond starts the development clock around 13,000 BC and Professor Morris more than a million years ago, but the AR story begins “only” about 700-800 years ago, necessarily ruling out evidence from genetics, evolution, paleobiology and archaeology, which form the staple in both GGS and WWR.

WNF is both a derivative and a development of an academic paper that AR co-authored in 2000 with Professor Simon Johnson (MIT) (full disclosure: Professor Johnson is my colleague and co-author). In “The Colonial Origins of Comparative Development”, one of the most widely cited and justly influential academic papers on economic development in the last 15 years, the trio argued that the quality of economic institutions was the key long-run determinant of economic prosperity (measured broadly in terms of per capita GDP). Good economic institutions protected property rights and guaranteed the sanctity of contract, which are the key prerequisites for private sector investment and entrepreneurship.

In WNF, though, AR go one step further in arguing that economic institutions are in turn determined by politics. The more concentrated political power is, the more a small group in society tries to extract wealth for itself to the detriment of the rest: this is a world of “extractive” institutions. Conversely, dispersed political power, as in democracies, is conducive to contestability and competition, which creates the conditions for broadly shared prosperity (a world of “inclusive” institutions). Thus, their parsimonious explanation for the disparities in wealth across the world is: political institutions.

Invoking the very Occam’s razor spirit that imbues the book, WNF can be summarised in a single graph (the original figure is not reproduced here; a stylised sketch follows below). Economic development (proxied by per capita GDP) is measured on the y-axis and an index of political institutions (higher values denote more representative or inclusive ones) on the x-axis. The choice of axes is very important because WNF asserts that causation runs from politics (the independent variable on the x-axis) to economic development (the dependent variable). The authors are unsympathetic to causation running the other way. That is, they reject the modernisation hypothesis, which asserts that improvements in standards of living will lead to more democratic politics, stemming, for example, from increased demand for political freedom and participation. For AR, political institutions bear the deep imprints of history, and although they are not immutable, their susceptibility to change induced by economic development is limited.

The upward-sloping line in the figure reflects a strong relationship (on average) between political institutions and economic development, validating the central argument of WNF. However, China and India stand out as outliers (they are far away from the line). And the interesting thing is that each of these countries is an exception to, or even a challenge to, the AR thesis, but in opposite ways. India (which is way below the line) is too economically underdeveloped, given the quality of its political institutions, and China (well above the line) is too rich, given that it is still so undemocratic.
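Since the original figure is not reproduced here, the following minimal Python sketch (with entirely invented data, not AR’s) draws the kind of picture the review describes: an upward-sloping fitted line, with China sitting above it and India below it.

```python
# A stylised reconstruction of the figure the review describes.
# All numbers are invented for illustration; they are not AR's data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
institutions = rng.uniform(0, 10, 40)                        # index: higher = more inclusive
log_gdp = 7 + 0.35 * institutions + rng.normal(0, 0.4, 40)   # log per capita GDP

# Two deliberate outliers, in opposite directions:
china = (2.0, 9.3)   # weak political institutions, high income -> above the line
india = (7.5, 7.3)   # strong political institutions, low income -> below the line

slope, intercept = np.polyfit(institutions, log_gdp, 1)      # fitted line

plt.scatter(institutions, log_gdp, s=15, label="other countries")
plt.scatter(*china, color="red", label="China (above the line)")
plt.scatter(*india, color="green", label="India (below the line)")
xs = np.linspace(0, 10, 100)
plt.plot(xs, intercept + slope * xs, "k--", label="fitted line")
plt.xlabel("political institutions (more inclusive ->)")
plt.ylabel("log per capita GDP")
plt.legend()
plt.show()
```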
AR can mount two defences. First, they could contend that all countries should be treated equally because every political unit is one experiment, one data point (regardless of size). After all, their thesis holds true for a vast majority of countries (that is why the line is upward-sloping), and they must be granted some leeway, given that they have daringly embraced a mono-causal explanation of what is a complex relationship. Second, AR would contend that theirs is a claim about the medium- to long-run horizons, which are never clearly specified but which rule out criticisms based on relationships observed for, say, 20-30 years. Reproducing the graph for 1980 would show that China was not an outlier (although India was). Wait for another 20 years or so, AR might plead, and the anomalies in the figure will fade away or at least move in the direction predicted in their book.

This defence is more problematic. Suppose that we were to revisit the book in 2030. What would have to happen to China and India for them to be consistent with the relationship predicted by AR? India in 20 years would have to slide into authoritarian chaos and become the political equivalent of countries such as Venezuela today; or it would have to boom to become the equivalent of countries such as China in terms of standards of living. Conversely, China would either have to become a near-Jeffersonian democracy or suffer a dramatic collapse in output (i.e. post negative growth). None of these four outcomes is impossible, but none is likely either.

One could make a stronger critique of AR. Even if China and India were to move rapidly in the direction predicted by them over the next 20 years, it would still beg the question of how China managed to sustain 30-50 years of historically unprecedented rapid growth (and poverty reduction) under repressive political conditions, and how India squandered 30-40 years of democracy with its Hindu rate of growth. Of course, there are answers, but the point is that they would have to be different from, and even orthogonal to, AR’s central thesis.

In other words, the inability of Acemoglu and Robinson to explain the development trajectories of these two large countries is a fault not of their rich and excellent book, but of the sui generis, uncooperating realities of Chinese and Indian history.

Tuesday, November 6, 2012

How Did Economists Get It So Wrong? - Paul Krugman

It’s hard to believe now, but not long ago economists were congratulating themselves over the success of their field. Those successes — or so they believed — were both theoretical and practical, leading to a golden era for the profession. On the theoretical side, they thought that they had resolved their internal disputes. Thus, in a 2008 paper titled “The State of Macro” (that is, macroeconomics, the study of big-picture issues like recessions), Olivier Blanchard of M.I.T., now the chief economist at the International Monetary Fund, declared that “the state of macro is good.” The battles of yesteryear, he said, were over, and there had been a “broad convergence of vision.” And in the real world, economists believed they had things under control: the “central problem of depression-prevention has been solved,” declared Robert Lucas of the University of Chicago in his 2003 presidential address to the American Economic Association. In 2004, Ben Bernanke, a former Princeton professor who is now the chairman of the Federal Reserve Board, celebrated the Great Moderation in economic performance over the previous two decades, which he attributed in part to improved economic policy making.
Last year, everything came apart.
Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy. During the golden years, financial economists came to believe that markets were inherently stable — indeed, that stocks and other assets were always priced just right. There was nothing in the prevailing models suggesting the possibility of the kind of collapse that happened last year. Meanwhile, macroeconomists were divided in their views. But the main division was between those who insisted that free-market economies never go astray and those who believed that economies may stray now and then but that any major deviations from the path of prosperity could and would be corrected by the all-powerful Fed. Neither side was prepared to cope with an economy that went off the rails despite the Fed’s best efforts.
And in the wake of the crisis, the fault lines in the economics profession have yawned wider than ever. Lucas says the Obama administration’s stimulus plans are “schlock economics,” and his Chicago colleague John Cochrane says they’re based on discredited “fairy tales.” In response, Brad DeLong of the University of California, Berkeley, writes of the “intellectual collapse” of the Chicago School, and I myself have written that comments from Chicago economists are the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten.
What happened to the economics profession? And where does it go from here?
As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth. Until the Great Depression, most economists clung to a vision of capitalism as a perfect or nearly perfect system. That vision wasn’t sustainable in the face of mass unemployment, but as memories of the Depression faded, economists fell back in love with the old, idealized vision of an economy in which rational individuals interact in perfect markets, this time gussied up with fancy equations. The renewed romance with the idealized market was, to be sure, partly a response to shifting political winds, partly a response to financial incentives. But while sabbaticals at the Hoover Institution and job opportunities on Wall Street are nothing to sneeze at, the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.
Unfortunately, this romanticized and sanitized vision of the economy led most economists to ignore all the things that can go wrong. They turned a blind eye to the limitations of human rationality that often lead to bubbles and busts; to the problems of institutions that run amok; to the imperfections of markets — especially financial markets — that can cause the economy’s operating system to undergo sudden, unpredictable crashes; and to the dangers created when regulators don’t believe in regulation.
It’s much harder to say where the economics profession goes from here. But what’s almost certain is that economists will have to learn to live with messiness. That is, they will have to acknowledge the importance of irrational and often unpredictable behavior, face up to the often idiosyncratic imperfections of markets and accept that an elegant economic “theory of everything” is a long way off. In practical terms, this will translate into more cautious policy advice — and a reduced willingness to dismantle economic safeguards in the faith that markets will solve all problems.
II. FROM SMITH TO KEYNES AND BACK
The birth of economics as a discipline is usually credited to Adam Smith, who published “The Wealth of Nations” in 1776. Over the next 160 years an extensive body of economic theory was developed, whose central message was: Trust the market. Yes, economists admitted that there were cases in which markets might fail, of which the most important was the case of “externalities” — costs that people impose on others without paying the price, like traffic congestion or pollution. But the basic presumption of “neoclassical” economics (named after the late-19th-century theorists who elaborated on the concepts of their “classical” predecessors) was that we should have faith in the market system.
This faith was, however, shattered by the Great Depression. Actually, even in the face of total collapse some economists insisted that whatever happens in a market economy must be right: “Depressions are not simply evils,” declared Joseph Schumpeter in 1934 — 1934! They are, he added, “forms of something which has to be done.” But many, and eventually most, economists turned to the insights of John Maynard Keynes for both an explanation of what had happened and a solution to future depressions.
Keynes did not, despite what you may have heard, want the government to run the economy. He described his analysis in his 1936 masterwork, “The General Theory of Employment, Interest and Money,” as “moderately conservative in its implications.” He wanted to fix capitalism, not replace it. But he did challenge the notion that free-market economies can function without a minder, expressing particular contempt for financial markets, which he viewed as being dominated by short-term speculation with little regard for fundamentals. And he called for active government intervention — printing more money and, if necessary, spending heavily on public works — to fight unemployment during slumps.
It’s important to understand that Keynes did much more than make bold assertions. “The General Theory” is a work of profound, deep analysis — analysis that persuaded the best young economists of the day. Yet the story of economics over the past half century is, to a large degree, the story of a retreat from Keynesianism and a return to neoclassicism. The neoclassical revival was initially led by Milton Friedman of the University of Chicago, who asserted as early as 1953 that neoclassical economics works well enough as a description of the way the economy actually functions to be “both extremely fruitful and deserving of much confidence.” But what about depressions?
Friedman’s counterattack against Keynes began with the doctrine known as monetarism. Monetarists didn’t disagree in principle with the idea that a market economy needs deliberate stabilization. “We are all Keynesians now,” Friedman once said, although he later claimed he was quoted out of context. Monetarists asserted, however, that a very limited, circumscribed form of government intervention — namely, instructing central banks to keep the nation’s money supply, the sum of cash in circulation and bank deposits, growing on a steady path — is all that’s required to prevent depressions. Famously, Friedman and his collaborator, Anna Schwartz, argued that if the Federal Reserve had done its job properly, the Great Depression would not have happened. Later, Friedman made a compelling case against any deliberate effort by government to push unemployment below its “natural” level (currently thought to be about 4.8 percent in the United States): excessively expansionary policies, he predicted, would lead to a combination of inflation and high unemployment — a prediction that was borne out by the stagflation of the 1970s, which greatly advanced the credibility of the anti-Keynesian movement.
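Friedman’s prescription is usually summarised as his “k-percent rule”. In standard textbook form (not a formula that appears in this essay), it says simply that the money stock should grow at a fixed rate, whatever the state of the business cycle:

```latex
% Friedman's k-percent rule: the money stock M grows at a constant rate k
% each period, regardless of the state of the business cycle.
\[
  M_{t+1} = (1 + k)\,M_t
  \qquad\text{equivalently}\qquad
  \frac{M_{t+1} - M_t}{M_t} = k
\]
```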
Eventually, however, the anti-Keynesian counterrevolution went far beyond Friedman’s position, which came to seem relatively moderate compared with what his successors were saying. Among financial economists, Keynes’s disparaging vision of financial markets as a “casino” was replaced by “efficient market” theory, which asserted that financial markets always get asset prices right given the available information. Meanwhile, many macroeconomists completely rejected Keynes’s framework for understanding economic slumps. Some returned to the view of Schumpeter and other apologists for the Great Depression, viewing recessions as a good thing, part of the economy’s adjustment to change. And even those not willing to go that far argued that any attempt to fight an economic slump would do more harm than good.
Not all macroeconomists were willing to go down this road: many became self-described New Keynesians, who continued to believe in an active role for the government. Yet even they mostly accepted the notion that investors and consumers are rational and that markets generally get it right.
Of course, there were exceptions to these trends: a few economists challenged the assumption of rational behavior, questioned the belief that financial markets can be trusted and pointed to the long history of financial crises that had devastating economic consequences. But they were swimming against the tide, unable to make much headway against a pervasive and, in retrospect, foolish complacency.
III. PANGLOSSIAN FINANCE
In the 1930s, financial markets, for obvious reasons, didn’t get much respect. Keynes compared them to “those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those that he thinks likeliest to catch the fancy of the other competitors.”
And Keynes considered it a very bad idea to let such markets, in which speculators spent their time chasing one another’s tails, dictate important business decisions: “When the capital development of a country becomes a by-product of the activities of a casino, the job is likely to be ill-done.”
By 1970 or so, however, the study of financial markets seemed to have been taken over by Voltaire’s Dr. Pangloss, who insisted that we live in the best of all possible worlds. Discussion of investor irrationality, of bubbles, of destructive speculation had virtually disappeared from academic discourse. The field was dominated by the “efficient-market hypothesis,” promulgated by Eugene Fama of the University of Chicago, which claims that financial markets price assets precisely at their intrinsic worth given all publicly available information. (The price of a company’s stock, for example, always accurately reflects the company’s value given the information available on the company’s earnings, its business prospects and so on.) And by the 1980s, finance economists, notably Michael Jensen of the Harvard Business School, were arguing that because financial markets always get prices right, the best thing corporate chieftains can do, not just for themselves but for the sake of the economy, is to maximize their stock prices. In other words, finance economists believed that we should put the capital development of the nation in the hands of what Keynes had called a “casino.”
It’s hard to argue that this transformation in the profession was driven by events. True, the memory of 1929 was gradually receding, but there continued to be bull markets, with widespread tales of speculative excess, followed by bear markets. In 1973-4, for example, stocks lost 48 percent of their value. And the 1987 stock crash, in which the Dow plunged nearly 23 percent in a day for no clear reason, should have raised at least a few doubts about market rationality.
These events, however, which Keynes would have considered evidence of the unreliability of markets, did little to blunt the force of a beautiful idea. The theoretical model that finance economists developed by assuming that every investor rationally balances risk against reward — the so-called Capital Asset Pricing Model, or CAPM (pronounced cap-em) — is wonderfully elegant. And if you accept its premises it’s also extremely useful. CAPM not only tells you how to choose your portfolio — even more important from the financial industry’s point of view, it tells you how to put a price on financial derivatives, claims on claims. The elegance and apparent usefulness of the new theory led to a string of Nobel prizes for its creators, and many of the theory’s adepts also received more mundane rewards: Armed with their new models and formidable math skills — the more arcane uses of CAPM require physicist-level computations — mild-mannered business-school professors could and did become Wall Street rocket scientists, earning Wall Street paychecks.
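For reference, the CAPM’s central relation, in its standard textbook form (not quoted from the essay), prices each asset by its comovement with the market portfolio:

```latex
% CAPM: asset i's expected return equals the risk-free rate plus a
% premium proportional to its (beta-measured) comovement with the market.
\[
  \mathbb{E}[R_i] \;=\; R_f + \beta_i\,\bigl(\mathbb{E}[R_m] - R_f\bigr),
  \qquad
  \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
\]
```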
To be fair, finance theorists didn’t accept the efficient-market hypothesis merely because it was elegant, convenient and lucrative. They also produced a great deal of statistical evidence, which at first seemed strongly supportive. But this evidence was of an oddly limited form. Finance economists rarely asked the seemingly obvious (though not easily answered) question of whether asset prices made sense given real-world fundamentals like earnings. Instead, they asked only whether asset prices made sense given other asset prices. Larry Summers, now the top economic adviser in the Obama administration, once mocked finance professors with a parable about “ketchup economists” who “have shown that two-quart bottles of ketchup invariably sell for exactly twice as much as one-quart bottles of ketchup,” and conclude from this that the ketchup market is perfectly efficient.
But neither this mockery nor more polite critiques from economists like Robert Shiller of Yale had much effect. Finance theorists continued to believe that their models were essentially right, and so did many people making real-world decisions. Not least among these was Alan Greenspan, who was then the Fed chairman and a long-time supporter of financial deregulation whose rejection of calls to rein in subprime lending or address the ever-inflating housing bubble rested in large part on the belief that modern financial economics had everything under control. There was a telling moment in 2005, at a conference held to honor Greenspan’s tenure at the Fed. One brave attendee, Raghuram Rajan (of the University of Chicago, surprisingly), presented a paper warning that the financial system was taking on potentially dangerous levels of risk. He was mocked by almost all present — including, by the way, Larry Summers, who dismissed his warnings as “misguided.”
By October of last year, however, Greenspan was admitting that he was in a state of “shocked disbelief,” because “the whole intellectual edifice” had “collapsed.” Since this collapse of the intellectual edifice was also a collapse of real-world markets, the result was a severe recession — the worst, by many measures, since the Great Depression. What should policy makers do? Unfortunately, macroeconomics, which should have been providing clear guidance about how to address the slumping economy, was in its own state of disarray.
IV. THE TROUBLE WITH MACRO
“We have involved ourselves in a colossal muddle, having blundered in the control of a delicate machine, the working of which we do not understand. The result is that our possibilities of wealth may run to waste for a time — perhaps for a long time.” So wrote John Maynard Keynes in an essay titled “The Great Slump of 1930,” in which he tried to explain the catastrophe then overtaking the world. And the world’s possibilities of wealth did indeed run to waste for a long time; it took World War II to bring the Great Depression to a definitive end.
Why was Keynes’s diagnosis of the Great Depression as a “colossal muddle” so compelling at first? And why did economics, circa 1975, divide into opposing camps over the value of Keynes’s views?
I like to explain the essence of Keynesian economics with a true story that also serves as a parable, a small-scale version of the messes that can afflict entire economies. Consider the travails of the Capitol Hill Baby-Sitting Co-op.
This co-op, whose problems were recounted in a 1977 article in The Journal of Money, Credit and Banking, was an association of about 150 young couples who agreed to help one another by baby-sitting for one another’s children when parents wanted a night out. To ensure that every couple did its fair share of baby-sitting, the co-op introduced a form of scrip: coupons made out of heavy pieces of paper, each entitling the bearer to one half-hour of sitting time. Initially, members received 20 coupons on joining and were required to return the same amount on departing the group.
Unfortunately, it turned out that the co-op’s members, on average, wanted to hold a reserve of more than 20 coupons, perhaps, in case they should want to go out several times in a row. As a result, relatively few people wanted to spend their scrip and go out, while many wanted to baby-sit so they could add to their hoard. But since baby-sitting opportunities arise only when someone goes out for the night, this meant that baby-sitting jobs were hard to find, which made members of the co-op even more reluctant to go out, making baby-sitting jobs even scarcer. . . .
In short, the co-op fell into a recession.
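The mechanism is easy to reproduce in a toy simulation. The sketch below is my own illustration, not the model from the 1977 article: every couple wants a reserve larger than the 20 coupons it was issued, and since one couple’s night out is the only source of another couple’s baby-sitting income, activity seizes up entirely.

```python
# Toy simulation of the baby-sitting co-op story: couples hoard scrip,
# so nights out (and hence baby-sitting jobs) dry up. Illustrative only,
# not the model in the 1977 JMCB article.
import random

random.seed(1)

N_COUPLES = 150
INITIAL_COUPONS = 20
DESIRED_RESERVE = 30   # members want to hold more than they were issued

coupons = [INITIAL_COUPONS] * N_COUPLES

def nights_out_per_round(coupons):
    """A couple goes out (spending one coupon) only if it already holds
    its desired reserve; otherwise it prefers to baby-sit and accumulate."""
    goers = [i for i, c in enumerate(coupons) if c >= DESIRED_RESERVE]
    sitters = [i for i in range(len(coupons)) if i not in goers]
    random.shuffle(goers)
    nights = 0
    for g in goers:
        if not sitters:
            break                  # no one left to baby-sit
        s = sitters.pop()
        coupons[g] -= 1            # going out costs one coupon
        coupons[s] += 1            # sitting earns one coupon
        nights += 1
    return nights

for week in range(5):
    print(f"week {week}: nights out = {nights_out_per_round(coupons)}")
# Every couple starts below its desired reserve, so no one goes out:
# the demand for baby-sitting is zero and the co-op sits in recession.
```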
O.K., what do you think of this story? Don’t dismiss it as silly and trivial: economists have used small-scale examples to shed light on big questions ever since Adam Smith saw the roots of economic progress in a pin factory, and they’re right to do so. The question is whether this particular example, in which a recession is a problem of inadequate demand — there isn’t enough demand for baby-sitting to provide jobs for everyone who wants one — gets at the essence of what happens in a recession.
Forty years ago most economists would have agreed with this interpretation. But since then macroeconomics has divided into two great factions: “saltwater” economists (mainly in coastal U.S. universities), who have a more or less Keynesian vision of what recessions are all about; and “freshwater” economists (mainly at inland schools), who consider that vision nonsense.
Freshwater economists are, essentially, neoclassical purists. They believe that all worthwhile economic analysis starts from the premise that people are rational and markets work, a premise violated by the story of the baby-sitting co-op. As they see it, a general lack of sufficient demand isn’t possible, because prices always move to match supply with demand. If people want more baby-sitting coupons, the value of those coupons will rise, so that they’re worth, say, 40 minutes of baby-sitting rather than half an hour — or, equivalently, the cost of an hour’s baby-sitting would fall from 2 coupons to 1.5. And that would solve the problem: the purchasing power of the coupons in circulation would have risen, so that people would feel no need to hoard more, and there would be no recession.
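Spelled out, the revaluation in this story is simple arithmetic:

```latex
% If a coupon appreciates from 30 to 40 minutes of sitting time, the
% coupon price of an hour of baby-sitting falls from 2 to 1.5:
\[
  \frac{60\ \text{min}}{30\ \text{min per coupon}} = 2\ \text{coupons per hour},
  \qquad
  \frac{60\ \text{min}}{40\ \text{min per coupon}} = 1.5\ \text{coupons per hour}
\]
```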
But don’t recessions look like periods in which there just isn’t enough demand to employ everyone willing to work? Appearances can be deceiving, say the freshwater theorists. Sound economics, in their view, says that overall failures of demand can’t happen — and that means that they don’t. Keynesian economics has been “proved false,” Cochrane, of the University of Chicago, says.
Yet recessions do happen. Why? In the 1970s the leading freshwater macroeconomist, the Nobel laureate Robert Lucas, argued that recessions were caused by temporary confusion: workers and companies had trouble distinguishing overall changes in the level of prices because of inflation or deflation from changes in their own particular business situation. And Lucas warned that any attempt to fight the business cycle would be counterproductive: activist policies, he argued, would just add to the confusion.
By the 1980s, however, even this severely limited acceptance of the idea that recessions are bad things had been rejected by many freshwater economists. Instead, the new leaders of the movement, especially Edward Prescott, who was then at the University of Minnesota (you can see where the freshwater moniker comes from), argued that price fluctuations and changes in demand actually had nothing to do with the business cycle. Rather, the business cycle reflects fluctuations in the rate of technological progress, which are amplified by the rational response of workers, who voluntarily work more when the environment is favorable and less when it’s unfavorable. Unemployment is a deliberate decision by workers to take time off.
Put baldly like that, this theory sounds foolish — was the Great Depression really the Great Vacation? And to be honest, I think it really is silly. But the basic premise of Prescott’s “real business cycle” theory was embedded in ingeniously constructed mathematical models, which were mapped onto real data using sophisticated statistical techniques, and the theory came to dominate the teaching of macroeconomics in many university departments. In 2004, reflecting the theory’s influence, Prescott shared a Nobel with Finn Kydland of Carnegie Mellon University.
Meanwhile, saltwater economists balked. Where the freshwater economists were purists, saltwater economists were pragmatists. While economists like N. Gregory Mankiw at Harvard, Olivier Blanchard at M.I.T. and David Romer at the University of California, Berkeley, acknowledged that it was hard to reconcile a Keynesian demand-side view of recessions with neoclassical theory, they found the evidence that recessions are, in fact, demand-driven too compelling to reject. So they were willing to deviate from the assumption of perfect markets or perfect rationality, or both, adding enough imperfections to accommodate a more or less Keynesian view of recessions. And in the saltwater view, active policy to fight recessions remained desirable.
But the self-described New Keynesian economists weren’t immune to the charms of rational individuals and perfect markets. They tried to keep their deviations from neoclassical orthodoxy as limited as possible. This meant that there was no room in the prevailing models for such things as bubbles and banking-system collapse. The fact that such things continued to happen in the real world — there was a terrible financial and macroeconomic crisis in much of Asia in 1997-8 and a depression-level slump in Argentina in 2002 — wasn’t reflected in the mainstream of New Keynesian thinking.
Even so, you might have thought that the differing worldviews of freshwater and saltwater economists would have put them constantly at loggerheads over economic policy. Somewhat surprisingly, however, between around 1985 and 2007 the disputes between freshwater and saltwater economists were mainly about theory, not action. The reason, I believe, is that New Keynesians, unlike the original Keynesians, didn’t think fiscal policy — changes in government spending or taxes — was needed to fight recessions. They believed that monetary policy, administered by the technocrats at the Fed, could provide whatever remedies the economy needed. At a 90th birthday celebration for Milton Friedman, Ben Bernanke, formerly a more or less New Keynesian professor at Princeton, and by then a member of the Fed’s governing board, declared of the Great Depression: “You’re right. We did it. We’re very sorry. But thanks to you, it won’t happen again.” The clear message was that all you need to avoid depressions is a smarter Fed.
And as long as macroeconomic policy was left in the hands of the maestro Greenspan, without Keynesian-type stimulus programs, freshwater economists found little to complain about. (They didn’t believe that monetary policy did any good, but they didn’t believe it did any harm, either.)
It would take a crisis to reveal both how little common ground there was and how Panglossian even New Keynesian economics had become.
V. NOBODY COULD HAVE PREDICTED . . .
In recent, rueful economics discussions, an all-purpose punch line has become “nobody could have predicted. . . .” It’s what you say with regard to disasters that could have been predicted, should have been predicted and actually were predicted by a few economists who were scoffed at for their pains.
Take, for example, the precipitous rise and fall of housing prices. Some economists, notably Robert Shiller, did identify the bubble and warn of painful consequences if it were to burst. Yet key policy makers failed to see the obvious. In 2004, Alan Greenspan dismissed talk of a housing bubble: “a national severe price distortion,” he declared, was “most unlikely.” Home-price increases, Ben Bernanke said in 2005, “largely reflect strong economic fundamentals.”
How did they miss the bubble? To be fair, interest rates were unusually low, possibly explaining part of the price rise. It may be that Greenspan and Bernanke also wanted to celebrate the Fed’s success in pulling the economy out of the 2001 recession; conceding that much of that success rested on the creation of a monstrous bubble would have placed a damper on the festivities.
But there was something else going on: a general belief that bubbles just don’t happen. What’s striking, when you reread Greenspan’s assurances, is that they weren’t based on evidence — they were based on the a priori assertion that there simply can’t be a bubble in housing. And the finance theorists were even more adamant on this point. In a 2007 interview, Eugene Fama, the father of the efficient-market hypothesis, declared that “the word ‘bubble’ drives me nuts,” and went on to explain why we can trust the housing market: “Housing markets are less liquid, but people are very careful when they buy houses. It’s typically the biggest investment they’re going to make, so they look around very carefully and they compare prices. The bidding process is very detailed.”
Indeed, home buyers generally do carefully compare prices — that is, they compare the price of their potential purchase with the prices of other houses. But this says nothing about whether the overall price of houses is justified. It’s ketchup economics, again: because a two-quart bottle of ketchup costs twice as much as a one-quart bottle, finance theorists declare that the price of ketchup must be right.
In short, the belief in efficient financial markets blinded many if not most economists to the emergence of the biggest financial bubble in history. And efficient-market theory also played a significant role in inflating that bubble in the first place.
Now that the undiagnosed bubble has burst, the true riskiness of supposedly safe assets has been revealed and the financial system has demonstrated its fragility. U.S. households have seen $13 trillion in wealth evaporate. More than six million jobs have been lost, and the unemployment rate appears headed for its highest level since 1940. So what guidance does modern economics have to offer in our current predicament? And should we trust it?
VI. THE STIMULUS SQUABBLE
Between 1985 and 2007 a false peace settled over the field of macroeconomics. There hadn’t been any real convergence of views between the saltwater and freshwater factions. But these were the years of the Great Moderation — an extended period during which inflation was subdued and recessions were relatively mild. Saltwater economists believed that the Federal Reserve had everything under control. Freshwater economists didn’t think the Fed’s actions were actually beneficial, but they were willing to let matters lie.
But the crisis ended the phony peace. Suddenly the narrow, technocratic policies both sides were willing to accept were no longer sufficient — and the need for a broader policy response brought the old conflicts out into the open, fiercer than ever.
Why weren’t those narrow, technocratic policies sufficient? The answer, in a word, is zero.
During a normal recession, the Fed responds by buying Treasury bills — short-term government debt — from banks. This drives interest rates on government debt down; investors seeking a higher rate of return move into other assets, driving other interest rates down as well; and normally these lower interest rates eventually lead to an economic bounceback. The Fed dealt with the recession that began in 1990 by driving short-term interest rates from 9 percent down to 3 percent. It dealt with the recession that began in 2001 by driving rates from 6.5 percent to 1 percent. And it tried to deal with the current recession by driving rates down from 5.25 percent to zero.
But zero, it turned out, isn’t low enough to end this recession. And the Fed can’t push rates below zero, since at near-zero rates investors simply hoard cash rather than lending it out. So by late 2008, with interest rates basically at what macroeconomists call the “zero lower bound” even as the recession continued to deepen, conventional monetary policy had lost all traction.
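One conventional way to write down the bind, using a standard Taylor-type rule purely as an illustration (Krugman does not invoke one here), is that the rate the rule calls for can be negative while the actual policy rate cannot:

```latex
% Zero lower bound: the actual policy rate is the rule-implied rate
% truncated at zero (Taylor-type rule used purely for illustration;
% r* = equilibrium real rate, pi* = inflation target, y_t = output gap).
\[
  i_t \;=\; \max\bigl(0,\; r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,y_t \bigr)
\]
% Once the rule-implied rate turns negative, i_t is stuck at zero and
% conventional monetary policy loses traction.
```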
Now what? This is the second time America has been up against the zero lower bound, the previous occasion being the Great Depression. And it was precisely the observation that there’s a lower bound to interest rates that led Keynes to advocate higher government spending: when monetary policy is ineffective and the private sector can’t be persuaded to spend more, the public sector must take its place in supporting the economy. Fiscal stimulus is the Keynesian answer to the kind of depression-type economic situation we’re currently in.
Such Keynesian thinking underlies the Obama administration’s economic policies — and the freshwater economists are furious. For 25 or so years they tolerated the Fed’s efforts to manage the economy, but a full-blown Keynesian resurgence was something entirely different. Back in 1980, Lucas, of the University of Chicago, wrote that Keynesian economics was so ludicrous that “at research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another.” Admitting that Keynes was largely right, after all, would be too humiliating a comedown.
And so Chicago’s Cochrane, outraged at the idea that government spending could mitigate the latest recession, declared: “It’s not part of what anybody has taught graduate students since the 1960s. They [Keynesian ideas] are fairy tales that have been proved false. It is very comforting in times of stress to go back to the fairy tales we heard as children, but it doesn’t make them less false.” (It’s a mark of how deep the division between saltwater and freshwater runs that Cochrane doesn’t believe that “anybody” teaches ideas that are, in fact, taught in places like Princeton, M.I.T. and Harvard.)
Meanwhile, saltwater economists, who had comforted themselves with the belief that the great divide in macroeconomics was narrowing, were shocked to realize that freshwater economists hadn’t been listening at all. Freshwater economists who inveighed against the stimulus didn’t sound like scholars who had weighed Keynesian arguments and found them wanting. Rather, they sounded like people who had no idea what Keynesian economics was about, who were resurrecting pre-1930 fallacies in the belief that they were saying something new and profound.
And it wasn’t just Keynes whose ideas seemed to have been forgotten. As Brad DeLong of the University of California, Berkeley, has pointed out in his laments about the Chicago school’s “intellectual collapse,” the school’s current stance amounts to a wholesale rejection of Milton Friedman’s ideas, as well. Friedman believed that Fed policy rather than changes in government spending should be used to stabilize the economy, but he never asserted that an increase in government spending cannot, under any circumstances, increase employment. In fact, rereading Friedman’s 1970 summary of his ideas, “A Theoretical Framework for Monetary Analysis,” what’s striking is how Keynesian it seems.
And Friedman certainly never bought into the idea that mass unemployment represents a voluntary reduction in work effort or the idea that recessions are actually good for the economy. Yet the current generation of freshwater economists has been making both arguments. Thus Chicago’s Casey Mulligan suggests that unemployment is so high because many workers are choosing not to take jobs: “Employees face financial incentives that encourage them not to work . . . decreased employment is explained more by reductions in the supply of labor (the willingness of people to work) and less by the demand for labor (the number of workers that employers need to hire).” Mulligan has suggested, in particular, that workers are choosing to remain unemployed because that improves their odds of receiving mortgage relief. And Cochrane declares that high unemployment is actually good: “We should have a recession. People who spend their lives pounding nails in Nevada need something else to do.”
Personally, I think this is crazy. Why should it take mass unemployment across the whole nation to get carpenters to move out of Nevada? Can anyone seriously claim that we’ve lost 6.7 million jobs because fewer Americans want to work? But it was inevitable that freshwater economists would find themselves trapped in this cul-de-sac: if you start from the assumption that people are perfectly rational and markets are perfectly efficient, you have to conclude that unemployment is voluntary and recessions are desirable.
Yet if the crisis has pushed freshwater economists into absurdity, it has also created a lot of soul-searching among saltwater economists. Their framework, unlike that of the Chicago School, both allows for the possibility of involuntary unemployment and considers it a bad thing. But the New Keynesian models that have come to dominate teaching and research assume that people are perfectly rational and financial markets are perfectly efficient. To get anything like the current slump into their models, New Keynesians are forced to introduce some kind of fudge factor that for reasons unspecified temporarily depresses private spending. (I’ve done exactly that in some of my own work.) And if the analysis of where we are now rests on this fudge factor, how much confidence can we have in the models’ predictions about where we are going?
The state of macro, in short, is not good. So where does the profession go from here?
VII. FLAWS AND FRICTIONS
Economics, as a field, got in trouble because economists were seduced by the vision of a perfect, frictionless market system. If the profession is to redeem itself, it will have to reconcile itself to a less alluring vision — that of a market economy that has many virtues but that is also shot through with flaws and frictions. The good news is that we don’t have to start from scratch. Even during the heyday of perfect-market economics, there was a lot of work done on the ways in which the real economy deviated from the theoretical ideal. What’s probably going to happen now — in fact, it’s already happening — is that flaws-and-frictions economics will move from the periphery of economic analysis to its center.
There’s already a fairly well developed example of the kind of economics I have in mind: the school of thought known as behavioral finance. Practitioners of this approach emphasize two things. First, many real-world investors bear little resemblance to the cool calculators of efficient-market theory: they’re all too subject to herd behavior, to bouts of irrational exuberance and unwarranted panic. Second, even those who try to base their decisions on cool calculation often find that they can’t, that problems of trust, credibility and limited collateral force them to run with the herd.
On the first point: even during the heyday of the efficient-market hypothesis, it seemed obvious that many real-world investors aren’t as rational as the prevailing models assumed. Larry Summers once began a paper on finance by declaring: “THERE ARE IDIOTS. Look around.” But what kind of idiots (the preferred term in the academic literature, actually, is “noise traders”) are we talking about? Behavioral finance, drawing on the broader movement known as behavioral economics, tries to answer that question by relating the apparent irrationality of investors to known biases in human cognition, like the tendency to care more about small losses than small gains or the tendency to extrapolate too readily from small samples (e.g., assuming that because home prices rose in the past few years, they’ll keep on rising).
Until the crisis, efficient-market advocates like Eugene Fama dismissed the evidence produced on behalf of behavioral finance as a collection of “curiosity items” of no real importance. That’s a much harder position to maintain now that the collapse of a vast bubble — a bubble correctly diagnosed by behavioral economists like Robert Shiller of Yale, who related it to past episodes of “irrational exuberance” — has brought the world economy to its knees.
On the second point: suppose that there are, indeed, idiots. How much do they matter? Not much, argued Milton Friedman in an influential 1953 paper: smart investors will make money by buying when the idiots sell and selling when they buy and will stabilize markets in the process. But the second strand of behavioral finance says that Friedman was wrong, that financial markets are sometimes highly unstable, and right now that view seems hard to reject.
Probably the most influential paper in this vein was a 1997 publication by Andrei Shleifer of Harvard and Robert Vishny of Chicago, which amounted to a formalization of the old line that “the market can stay irrational longer than you can stay solvent.” As they pointed out, arbitrageurs — the people who are supposed to buy low and sell high — need capital to do their jobs. And a severe plunge in asset prices, even if it makes no sense in terms of fundamentals, tends to deplete that capital. As a result, the smart money is forced out of the market, and prices may go into a downward spiral.
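The mechanics can be caricatured in a few lines of Python. This is a stylised toy, not Shleifer and Vishny’s actual model: a leveraged arbitrageur suffers a non-fundamental price shock, the capital loss forces sales, and the sales push the price further from fundamentals.

```python
# Stylised "limits of arbitrage" dynamics, illustrative only -- a toy
# sketch, not Shleifer and Vishny's (1997) model. Arbitrageurs hold a
# leveraged long position in an underpriced asset; each further price
# drop destroys capital, forces sales, and deepens the drop.

FUNDAMENTAL = 100.0   # the asset's true value (arbitrary units)
LEVERAGE = 5.0        # position value allowed per unit of capital (assumed)
IMPACT = 8.0          # price impact per share of forced selling (assumed)

price, capital = 100.0, 20.0
shares = LEVERAGE * capital / price      # fully levered at the start

# A fundamentals-unjustified shock knocks the price down:
new_price = 92.0
capital += shares * (new_price - price)  # mark-to-market loss
price = new_price

for t in range(1, 7):
    allowed = LEVERAGE * max(capital, 0.0) / price  # shares capital supports
    forced = max(shares - allowed, 0.0)             # shares that must be sold
    new_price = price - IMPACT * forced             # forced sales move the price
    capital += shares * (new_price - price)         # further mark-to-market loss
    shares -= forced
    price = new_price
    print(f"t={t}: price={price:6.2f}  capital={capital:5.2f}  shares={shares:4.2f}")

# The price settles well below FUNDAMENTAL: the "smart money" no longer
# has the capital to close the gap, so the mispricing persists.
```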
The spread of the current financial crisis seemed almost like an object lesson in the perils of financial instability. And the general ideas underlying models of financial instability have proved highly relevant to economic policy: a focus on the depleted capital of financial institutions helped guide policy actions taken after the fall of Lehman, and it looks (cross your fingers) as if these actions successfully headed off an even bigger financial collapse.
Meanwhile, what about macroeconomics? Recent events have pretty decisively refuted the idea that recessions are an optimal response to fluctuations in the rate of technological progress; a more or less Keynesian view is the only plausible game in town. Yet standard New Keynesian models left no room for a crisis like the one we’re having, because those models generally accepted the efficient-market view of the financial sector.
There were some exceptions. One line of work, pioneered by none other than Ben Bernanke working with Mark Gertler of New York University, emphasized the way the lack of sufficient collateral can hinder the ability of businesses to raise funds and pursue investment opportunities. A related line of work, largely established by my Princeton colleague Nobuhiro Kiyotaki and John Moore of the London School of Economics, argued that prices of assets such as real estate can suffer self-reinforcing plunges that in turn depress the economy as a whole. But until now the impact of dysfunctional finance hasn’t been at the core even of Keynesian economics. Clearly, that has to change.
VIII. RE-EMBRACING KEYNES
So here’s what I think economists have to do. First, they have to face up to the inconvenient reality that financial markets fall far short of perfection, that they are subject to extraordinary delusions and the madness of crowds. Second, they have to admit — and this will be very hard for the people who giggled and whispered over Keynes — that Keynesian economics remains the best framework we have for making sense of recessions and depressions. Third, they’ll have to do their best to incorporate the realities of finance into macroeconomics.
Many economists will find these changes deeply disturbing. It will be a long time, if ever, before the new, more realistic approaches to finance and macroeconomics offer the same kind of clarity, completeness and sheer beauty that characterizes the full neoclassical approach. To some economists that will be a reason to cling to neoclassicism, despite its utter failure to make sense of the greatest economic crisis in three generations. This seems, however, like a good time to recall the words of H. L. Mencken: “There is always an easy solution to every human problem — neat, plausible and wrong.”
When it comes to the all-too-human problem of recessions and depressions, economists need to abandon the neat but wrong solution of assuming that everyone is rational and markets work perfectly. The vision that emerges as the profession rethinks its foundations may not be all that clear; it certainly won’t be neat; but we can hope that it will have the virtue of being at least partly right.
Paul Krugman is a Times Op-Ed columnist and winner of the 2008 Nobel Memorial Prize in Economic Science. His latest book is “The Return of Depression Economics and the Crisis of 2008.”