Lars Hansen- The Nobel Third Wheel

(Note: I thought I had published this post earlier, but never did. It might seem a little random to publish this today, because it is random. However, I think Lars Hansen's work is important and decided to still publish, belatedly.)

It is not very often that I can say a Nobel Laureate did not get enough attention after his award. I think that is safe to say about this year's winner, Lars Hansen.

His co-Laureates, Gene Fama and Robert Shiller, had been household names (okay, only a weird subset of households) for a long time. Business magazines, newspapers, and economics blogs discussed the two figures often. This was largely because caricatures of Fama's and Shiller's work fit into a headline- "Market is ALWAYS Rational" or "Market is Irrational."

While neither of these headlines is true of their work, they are catchier than one about Hansen- "Parameters can be Better Estimated." I would not read that newspaper, and I blog on economics. If I brought up Fama's and Shiller's work at a party, people might actually discuss it. If I brought up Hansen's work, I would hear crickets. But just because that is true does not make it right. Hansen's work is important, and people outside of economics Ph.D. programs should be aware of it.

Who is he?

Lars Hansen got his Ph.D. from Minnesota in 1978. He is a Minnesota macroeconomist from the days of the freshwater/saltwater divide. His time at Minnesota overlapped with two other famous macro Nobel Laureates, Thomas Sargent and Chris Sims. Since 1981, he has been in the economics department at the University of Chicago.

What did he do?

Along with his co-laureates, Lars Hansen won the Nobel "for their empirical analysis of asset prices." He is most famous for developing an econometric technique known as the generalized method of moments (GMM). It is a complex technique that improves estimation while requiring fewer assumptions than traditional estimators. It is not a simple idea, and even Tyler Cowen basically punted when trying to explain GMM. He quotes from some lecture notes-

Unlike maximum likelihood estimation (MLE), GMM does not require complete knowledge (BCA: read assumptions) of the distribution of the data. Only specified moments derived from an underlying model are needed for GMM estimation. In some cases in which the distribution of the data is known, MLE can be computationally very burdensome whereas GMM can be computationally very easy. The log-normal stochastic volatility model is one example. In models for which there are more moment conditions than model parameters, GMM estimation provides a straightforward way to test the specification of the proposed model. This is an important feature that is unique to GMM estimation.
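In symbols, the recipe those notes describe looks roughly like this (standard textbook notation, not anything specific to Hansen's original paper): the model tells you that some function of the data and the parameters averages to zero, and GMM picks the parameter value that makes the corresponding sample averages as close to zero as possible.

```latex
% Generic GMM setup (textbook notation, my own sketch):
\begin{aligned}
&\text{Model-implied moment conditions: } && \mathbb{E}\!\left[g(x_t,\theta_0)\right] = 0,\\
&\text{Sample analog (an average): } && \bar{g}_T(\theta) = \tfrac{1}{T}\textstyle\sum_{t=1}^{T} g(x_t,\theta),\\
&\text{GMM estimator: } && \hat{\theta} = \arg\min_{\theta}\; \bar{g}_T(\theta)'\, W_T\, \bar{g}_T(\theta).
\end{aligned}
```

When there are more moment conditions than parameters, the minimized value of that objective (scaled by T and computed with an efficient choice of the weighting matrix W_T) is approximately chi-squared, which is the specification test the notes mention.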

Basically, it allows the econometrician to do more with fewer assumptions about the data. John Cochrane describes it as follows-

Like all of Lars' work, it looks complex at the outset, but once you see what he did, it is actually brilliant in its simplicity. The GMM approach basically says, anything you want to do in statistical analysis or econometrics can be written as taking an average.

I cannot improve on Cochrane's explanation and so will leave it to him. It takes a little work, but Cochrane shows the beauty well. I encourage you to follow the link. For now, we economists (and people who care about economics) can thank Lars Hansen for making our understanding less bad.
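For readers who think in code, here is a toy version of that "taking an average" idea. This is my own illustrative sketch, not Hansen's or Cochrane's code: it simulates a linear model with an endogenous regressor and two instruments, writes the moment conditions E[z(y − βx)] = 0, and estimates β by driving the sample averages of those moments toward zero (with an identity weighting matrix, so it skips the efficient two-step weighting of full GMM).

```python
# Toy GMM example (illustrative sketch only, not Hansen's or Cochrane's code).
# Model: y = beta*x + e, with x endogenous and two instruments z1, z2.
# Moment conditions implied by the model: E[ z * (y - beta*x) ] = 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5_000
beta_true = 2.0

z = rng.normal(size=(n, 2))                   # two instruments
u = rng.normal(size=n)                        # shock that makes x endogenous
x = z @ np.array([1.0, 0.5]) + u + rng.normal(size=n)
e = u + rng.normal(size=n)                    # error correlated with x through u
y = beta_true * x + e

def sample_moments(beta):
    """'Taking an average': sample analog of E[z * (y - beta*x)]."""
    resid = y - beta * x
    return (z * resid[:, None]).mean(axis=0)  # one average per moment condition

def gmm_objective(params):
    g = sample_moments(params[0])
    return g @ g                              # identity weighting matrix, for simplicity

fit = minimize(gmm_objective, x0=[0.0], method="Nelder-Mead")
print("GMM estimate of beta:", fit.x[0])      # close to 2.0; plain OLS would be biased
```

With two moment conditions and only one parameter, the minimized objective here is also the raw ingredient for the overidentification test described in the lecture notes above.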
