Two Cheers for (Good) Theory

Everyone is having a grand-ole time cheering about the "new" empirical wave of economics. Write-ups about the recent Nobel winner, Angus Deaton, have talked all about his influence on data analysis:

His method of careful analysis of data from household surveys has transformed four large swaths of the dismal science: microeconomics, econometrics, macroeconomics and development economics.

He has brought microeconomics — traditionally a field populated by theorists — into closer connection with the data. Partly because of his influence, modern microeconomists are more likely to spend their days knee-deep in large-scale data sets describing the real-world decisions made by millions of people, and less likely to be mired in Greek-letter abstractions.

Much of the empirical revolution in economics has been enabled by the tools that Mr. Deaton developed. These tools reimagine the role of economic theory, using it to organize and interpret the tidal wave of data coming from the hundreds of household surveys conducted around the world each year.

Bloggers have gone nuts over it, but it's not only them. The John Bates Clark Medal has been heavy on empirical micro lately:

(E)ssentially changing this prize from “Best Economist Under 40” to “Best Applied Microeconomist Under 40”. Of the past seven winners, the only one who isn’t obviously an applied microeconomist is Levin, and yet even he describes himself as “an applied economist with interests in industrial organization, market design and the economics of technology.”

I get it. Increased data and computational power have let us do things we couldn't do 10 years ago. Empirical work has exploded. Plus, for the blogosphere, the data turn into pretty pictures that we can all ooh and aah at. FiveThirtyEight and others have made a whole industry out of this.

It's great. I love it.

But theory is ultimately the force that drives our understanding of the world. As Hayek wrote, the abstract is primary. That's why I love seeing Rakesh Vohra come to the defense of theory. I know; shocking that a theorist would defend theory... It's short, snarky, and spot on, as blog posts should be. (And it's hard to excerpt, so go read the whole thing.)

If you'll indulge me, let another theorist defend the first theorist using theory. I'll use a simple model to think through the marginal benefits of theoretical and empirical work. I think supply and demand is a pretty useful theory for thinking through things... but I might be biased.

Let me try my best at making pretty pictures. (If FiveThirtyEight wants me to start designing charts for them, I'm available. Call me.)

Suppose that, because of a technology "shock," the marginal cost of empirical work declines. This shifts the supply curve of empirical work to the right.

[Figure: rightward shift of the supply curve for empirical work]

This LOWERS the marginal value of empirical work: the next regression is worth less than it was before the technology shock.

If you believe, as I do, that empirical work and theoretical work are complements, then this should shift the marginal-benefit (demand) curve for theoretical work to the right. I don't think markets adjust instantaneously, so there may be a lag before the theory market responds to the change in the empirical market.

[Figure: rightward shift of the demand curve for theoretical work]

If this model gives us any insight into the markets for research, I'd say it implies the exact opposite of the standard conclusion.

The marginal value of theory is now HIGHER because of the "data" revolution. If today sits between the shock and the final adjustment of theoretical work, the marginal value of theory will continue to grow.
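To make the comparative statics concrete, here is a minimal sketch with linear supply and demand curves. All the numbers (intercepts, slopes, and the complementarity strength `k`) are made up for illustration; the qualitative result is what matters:

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand P = a - b*Q and supply P = c + d*Q."""
    q = (a - c) / (b + d)
    p = a - b * q
    return q, p

# Market for empirical work: the technology shock lowers the supply
# intercept c, shifting supply to the right.
q_emp0, p_emp0 = equilibrium(a=10, b=1, c=4, d=1)   # before the shock
q_emp1, p_emp1 = equilibrium(a=10, b=1, c=2, d=1)   # after: cheaper empirics

# Complements: the demand intercept for theory rises with the
# equilibrium quantity of empirical work.
k = 0.5  # strength of complementarity (assumed)
q_th0, p_th0 = equilibrium(a=8 + k * q_emp0, b=1, c=2, d=1)
q_th1, p_th1 = equilibrium(a=8 + k * q_emp1, b=1, c=2, d=1)

assert p_emp1 < p_emp0   # marginal value of empirical work falls
assert q_emp1 > q_emp0   # but more of it gets done
assert p_th1 > p_th0     # marginal value of theory rises
```

The assertions are exactly the claims in the text: cheaper empirics mean more empirical work at a lower marginal value, and, through complementarity, a higher marginal value of theory.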

So I'll get back to my theory work...

9 thoughts on “Two Cheers for (Good) Theory”

  1. I'll also defend (good) theory... and as an applied micro guy, my endorsement isn't self-serving.

    Though I suspect my definition differs from many other folks', and I doubt the likelihood is high that we'll get a lot more good theory, I agree with your analysis, at least in the abstract. The problem as I see it is that better access to data is likely to reduce the quality of theory.

    Good theory is certainly useful to us applied micro people. My papers usually look like this: 1) discuss the theory relevant to the question, 2) explain my empirical model, 3) describe the data, and 4) explain my analysis and results. More good theory will be helpful; more crappy theory won't.

    • I doubt the likelihood is high that we'll get a lot more good theory

      Do you mean the supply curve of good theory is vertical?

      I'm not quite sure what exact mechanism you have in mind. How does better data reduce the marginal quality of theory? Could you use a model to help explain it? 🙂

      I agree that this post doesn't talk about good vs. bad theory. I'm taking it as given that we are talking about a certain fixed quality of theory and empirical work.

      • Brian,

        I'm not a big consumer of the theory literature, so take my comments with a grain of salt.

        I think we have two markets: good theory and bad theory. The bad theory supply curve is more elastic than the good theory supply curve.
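Since the reply above asks for a model, here is one minimal way to formalize the elasticity claim, again with linear curves and made-up numbers: when the same outward demand shift hits both markets, the market with the flatter (more elastic) supply curve responds with a larger quantity increase. So if the data revolution raises demand for theory generally, we get proportionally more bad theory than good:

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand P = a - b*Q and supply P = c + d*Q."""
    q = (a - c) / (b + d)
    return q, a - b * q

shift = 2.0  # the same outward demand shift hits both theory markets

# Bad theory: flat (elastic) supply, small d.
q_bad0, _ = equilibrium(a=8, b=1, c=2, d=0.2)
q_bad1, _ = equilibrium(a=8 + shift, b=1, c=2, d=0.2)

# Good theory: steep (inelastic) supply, large d.
q_good0, _ = equilibrium(a=8, b=1, c=2, d=5)
q_good1, _ = equilibrium(a=8 + shift, b=1, c=2, d=5)

# The quantity response is larger where supply is more elastic.
assert (q_bad1 - q_bad0) > (q_good1 - q_good0)
```

With linear curves the quantity response to a demand shift of size Δa is Δa/(b+d), so a smaller supply slope d (more elastic supply) always means a bigger quantity response.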

  2. I like the post! I agree with 90% of it.

    However, I would just point out that Angus Deaton would probably agree with you. He has complained about atheoretical randomized controlled trials in a number of places, saying that they are not useful to policy makers because their findings can't be generalized.

    This makes sense when you see that Deaton's structural version of empirical micro is much different from Josh Angrist's (atheoretical) reduced-form version. Take, for example, Deaton's Almost Ideal Demand System. This is a clear attempt at combining theory with data. First, the AIDS model itself is derived using duality theory and can be used to test whether a number of theoretical restrictions hold. Second, one big reason you might want to estimate an AIDS model is to come up with some of the structural parameters (namely demand elasticity estimates) you would need to populate a standard partial-equilibrium market model. Once you get those, you could analyze the consequences of a number of different government policies (e.g. taxes, quotas, etc). You could never do that with the Local Average Treatment Effects that Angrist and company estimate.
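For readers who haven't seen it, the AIDS budget-share equation from Deaton and Muellbauer (1980) takes the following standard form (this is textbook notation, not something stated in the comment above):

```latex
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{x}{P}\right)
```

where $w_i$ is the budget share of good $i$, $p_j$ are prices, $x$ is total expenditure, and $P$ is a price index. The theoretical restrictions the comment mentions are testable parameter restrictions: adding-up ($\sum_i \alpha_i = 1$, $\sum_i \beta_i = 0$, $\sum_i \gamma_{ij} = 0$), homogeneity ($\sum_j \gamma_{ij} = 0$), and Slutsky symmetry ($\gamma_{ij} = \gamma_{ji}$).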

    That's why I think it's funny that Noah Smith seemed so excited about the Deaton win. He seems very enamored with natural experiments and the "credibility revolution," but that doesn't really characterize Deaton's work, at least not his work in demand analysis. That work is grounded firmly in theory. Of course, I honestly wonder how familiar Smith was with Deaton's work before he won the Nobel. In his Bloomberg article, he says that the AIDS model was adopted as a way to "measure consumption." I'm really not sure what he means by that. To be clear, I'm no Deaton scholar either, so maybe I'm missing something.

    • I don't know enough about Deaton's work to make any substantive comments on it directly. My main point was not to discuss Deaton, but to discuss how people have used his Nobel as a "Go Team Data" moment.

      I'd say the same thing about ways that Chetty has been brought up to show the power of empirical work.

      • Right. I was just pointing out that the people who frame Deaton's win as a triumph of data over theory (e.g., Smith in his Bloomberg piece) probably shouldn't.

  3. Yes, Hayek said the abstract was primary, but in this regard, he was nuts. How could we possibly create an abstraction of things we did not already understand as concrete entities?

  4. Pingback: Theory vs data, computerization, old wine and new bottles: Morgenstern and Econometric Society fellows, 1953 | The Undercover Historian
