
Matching the Moment, But Missing the Point?


This essay critically evaluates the benefits and costs of the dominant methodology in macroeconomics, the DSGE approach. Although the approach has led to great progress in some areas, it has also created biases and blind spots in the profession that hold back our understanding and our ability to govern the macroeconomy. There is great scope for progress in macroeconomics by judiciously pushing the boundaries of some of the methodological restrictions imposed by the DSGE approach.

Modern macroeconomics relies heavily on dynamic stochastic general equilibrium (DSGE) models of the economy. In the aftermath of the Great Financial Crisis of 2008/09, DSGE macroeconomists have faced scathing criticism both from within the profession and from outside the field, and the DSGE approach has come under heavy fire. In this essay, I will evaluate this criticism and discuss what I view as the main benefits and shortcomings of the DSGE approach for macroeconomic analysis.

Curiously, a majority of the critics of dynamic stochastic general equilibrium macroeconomics agree that it is, in principle, desirable for macroeconomic models (i) to incorporate dynamics, i.e. a time dimension, (ii) to deal with stochastic uncertainty, and (iii) to study general equilibrium effects. It seems the critique of DSGE macroeconomics therefore does not refer to models being dynamic, stochastic, and featuring general equilibrium analysis, but rather to broader methodological concerns about modern macroeconomics.

In the following sections I will thus evaluate the benefits and costs of the methodological restrictions on macroeconomic research that are imposed by the DSGE approach. Although some of them are useful, I will argue that others are counterproductive for the profession. Dogmatically applying these methodological restrictions to all macroeconomic problems risks biasing the scientific progress in macroeconomics in a single direction. This comes at the expense of other approaches that would have led to a deeper and more robust understanding of the real world.

I expect that most of the progress in macroeconomics will come from focusing on the merit of individual methodological restrictions imposed by the DSGE approach – and removing them if warranted. This holds much promise for the macro profession in future years and will ultimately allow us to develop new theories that improve both our understanding of the macroeconomy and the robustness of that understanding.

Before proceeding, let me emphasize two caveats, which I offer as a member of the macro profession who himself at times employs DSGE models to analyze interesting macroeconomic questions.

First, the field of DSGE macroeconomics is incredibly diverse. Many modern macroeconomists who employ the DSGE approach have a deep appreciation of the methodological concerns that I am discussing below. They have been – and are – working hard on addressing them to expand the frontiers of our knowledge. I do not intend to criticize those individual research programs. Rather, I want to argue that the DSGE approach has led to shortcomings in the macro profession as a whole that deserve, in my view, more attention in future research.

Second, I cannot offer a single unified alternative to DSGE macroeconomics, nor do I think it would be desirable to do so. In this article, I deliberately abstain from advocating any specific alternative approaches (including the ones I employ in my own research) to push the boundaries of DSGE. I believe instead that the most desirable future direction for macroeconomics is less dogma, more diversity, and more acceptance of diversity of thought within the macro profession.

DSGE Methodology

At its most basic level, the DSGE approach can be described as a research methodology for the field of macroeconomics. A research methodology defines the general strategy that is to be applied to research questions in a field, defines how research is to be conducted, and identifies a set of methods and restrictions on what is permissible in the field.

A methodology consists not only of a set of formal methods, such as the powerful set of DSGE methods taught in graduate school, but also of a less explicit set of requirements and restrictions that are imposed on the researcher and that are sometimes more akin to unspoken social conventions. When teachers tell their students to make their macroeconomic models “more rigorous” or to “impose more discipline” on their model, they frequently refer to such unspoken restrictions. For example, it is not acceptable to call a dynamic stochastic general equilibrium model with two time periods a DSGE model. This article will consider both the explicit and the implicit, unspoken requirements and restrictions imposed by the DSGE approach.

The methodological restrictions imposed by the DSGE approach fall into two categories. First, we consider conceptual restrictions, such as the requirement for models to be dynamic, stochastic, and general equilibrium (as captured by the name of the approach), the use of microfoundations, the analysis of stationary equilibria, etc. Then we will turn to the quantitative methods and restrictions that are part of the DSGE approach.

To evaluate benefits and costs, we need to have an ultimate objective in mind against which these benefits and costs are measured. I will take this objective to be a sound understanding of the functioning of the macroeconomy, with an eye toward guiding economic policy and predicting the future course of the economy.

Going beyond the benefits and costs of specific methodological restrictions, I will also discuss two broader implications of the widespread use of the DSGE approach – the way in which the complexity of DSGE models limits the scope of our analysis in macroeconomics and, finally, the lack of robustness in our understanding (“groupthink”) that is generated by having a single dominant methodological approach.

Conceptual Restrictions

… D, S, GE and more

Three of the conceptual methodological restrictions imposed by the DSGE approach are apparent from its name: the DSGE approach requires macro models to be dynamic and stochastic, and to analyze general equilibrium. Few critics question that these three elements are useful in principle, as mentioned in the introduction. However, the three elements carry an interpretation that is far more specific than a naïve reading of the words abbreviated by “DSGE” suggests:

Dynamic means that a model following the DSGE approach is expected to be an infinite horizon model – it is socially unacceptable to call a stochastic general equilibrium model in which the dynamics consist of two time periods a DSGE model, even though it technically contains the elements D, S and GE. Using infinite horizon models carries both large benefits and costs. On the positive side, they allow for elegant and parsimonious descriptions of economic models since each period can be described as following the same laws of motion. In some respects, this makes infinite horizon models even simpler than two period models – in which, by their very nature, the two periods are asymmetric.
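To make this concrete, consider the recursive formulation of the textbook stochastic growth model (a standard illustration, not a feature of any particular paper): the same Bellman equation describes every period, so the entire infinite-horizon problem collapses into one time-invariant functional equation,

V(k, A) = \max_{c,\,k'} \Big\{ u(c) + \beta\, \mathbb{E}\big[ V(k', A') \,\big|\, A \big] \Big\} \quad \text{s.t.} \quad c + k' = A f(k) + (1-\delta)\,k,

whereas a two-period model necessarily requires separate, asymmetric problems for the first and the second period.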

On the downside, an infinite time horizon introduces far greater complexity in solving models and creates a bias towards models that have a well-behaved ergodic steady state. On the first issue, it is rarely possible to explicitly solve stochastic infinite horizon models, which makes it necessary to use approximations and computer simulations even to solve simple DSGE models. Further consequences of this complexity are discussed below in a separate section.

On the second issue, an infinite horizon model is only well behaved and can be subjected to the standard methods of economic analysis if it has an ergodic steady state. This may be problematic because many real-world processes do not obviously follow a well-defined ergodic distribution. If an economy is assumed to always revert back towards its steady state, there is much less concern about destabilizing dynamics than there may be in the real world, where individuals – and, potentially, humanity as a whole – are subject to a finite life span.

Stochastic means not only that models should take account of uncertainty, but, in the conventional DSGE approach (inherited from real business cycle analysis), that a fundamental driving force of uncertainty is productivity shocks. Although DSGE researchers have long ventured beyond productivity shocks and introduced all other kinds of shocks, productivity shocks are still the most common source of uncertainty in DSGE models, and the first type of shocks we typically tell our students to incorporate in their macroeconomic models. This prevalence stands in marked contrast to the much less robust empirical evidence on the relevance of productivity shocks.
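For concreteness, the standard specification (a typical calibration, not tied to any particular study) models log productivity as a persistent AR(1) process,

\ln A_{t+1} = \rho \ln A_t + \varepsilon_{t+1}, \qquad \varepsilon_{t+1} \sim N(0, \sigma^2),

with ρ commonly set near 0.95 in quarterly models, so that a single exogenous driving force accounts for the bulk of simulated aggregate fluctuations.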

Shocks to productivity also introduce a bias regarding the efficiency of equilibria: when macroeconomic fluctuations are driven by changes in productivity, the first welfare theorem continues to apply and there is no role for policymakers to intervene. It is not clear that this is the best benchmark for economic shocks.

General equilibrium means that macroeconomic models need to be built from the bottom up based on solid microfoundations. This distinguishes DSGE models from the preceding methodology in macroeconomics that was dominant up to the 1970s. At the time, macroeconomists used structural equations that were based on empirical relationships between macroeconomic variables to describe the path of the economy. It was the exclusive sphere of microeconomists to develop theories based on the notion that individual economic behavior was the result of an optimization problem that described how economic actors maximized their objective (profits, or utility) given the constraints that they faced. (Curiously, DSGE models need to be micro-founded, but they don’t really need to be full general equilibrium models to be called “DSGE” – it is, for example, perfectly acceptable to speak of small open economy DSGE models even though they take world prices as given and are thus partial equilibrium models.)

One of the driving forces behind employing microfoundations in macroeconomics was to use more consistent methodologies across economics and to allow 1970s macroeconomics to benefit from the great methodological innovations in microeconomics of the prior decades. As a first approximation, the thought was that we can describe the aggregate behavior of the macroeconomy simply by adding up the actions of all the individual agents in the economy as described by microeconomics. It turns out that doing so carries both large benefits and disadvantages. But before evaluating these in detail, let us consider the question of the desirability of a more consistent methodology for micro- and macroeconomics from a more general perspective.

There are many sciences in which there are different methodological approaches at the micro level and the macro level. For example, the following pairs of scientific fields describe micro- and macro-level aspects of the same processes: nuclear physics and chemistry, chemistry and microbiology, microbiology and medicine. In all these fields, macro level researchers use different methodologies than micro level researchers. They commonly use approximate laws that hold at the macro level, even though they are not (yet) able to derive them in detail from the underlying micro-foundations. In general, many macro phenomena in all the described scientific fields are what systems theorists call “emergent phenomena” that emerge from the interactions of entities at the micro level but are too complex to be satisfactorily described from a micro perspective given our current state of knowledge.

To put it more starkly, we know that physicists understand the micro-level processes that occur in our bodies in much greater detail and precision than medical doctors – but would you rather see your physician or your physicist if you fall sick, on the basis that the latter better understands the micro-foundations of what is going on in your body?

In macroeconomics, there are a number of emergent phenomena that are still difficult to trace back to their precise micro origins. One of the most important such concepts is aggregate demand, which does not have a clear counterpart in microeconomics.

In any given field, micro and macro approaches inform each other, but in most fields, macro level researchers (say engineers or medical doctors) would not be willing to throw out the set of macro laws and heuristics that their field has accumulated over centuries and use exclusively the micro foundations of their field. This is, however, what happened in macroeconomics when the DSGE approach became the dominant approach.

While highlighting the desirability of different methodological approaches for micro and macro level researchers in principle, I also want to stress the desirability for the two subfields to learn from each other. For example, much of the progress in medicine over the past decades has been driven by insights from biochemistry.

Rational Expectations were considered one of the most important advances of the DSGE approach in the 1970s, after Robert Lucas Jr. pointed out (in what became known as the Lucas critique) that rational agents would update their expectations and change their behavior in response to policy changes. If macroeconomic models employ statistical relationships between macroeconomic variables that were derived from past observed behavior and that ignore such changes in expectations, then they are bound to be wrong. This was an important insight, especially in the 1970s, when policymakers and macroeconomists around the world battled with high inflation that was, in part, driven by unmodeled inflationary expectations.

The Lucas critique led to an innovation in macroeconomics that was clearly driven by a microeconomic insight, i.e. the effects of rational expectations in optimizing models. DSGE models are based on microeconomic fundamentals such as preferences and technologies that are typically not affected by policy action. Along this dimension, policy analysis in a DSGE model has the potential to be more robust. For example, in New Keynesian DSGE models, monetary policy cannot permanently increase output since economic agents have rational expectations and foresee that permanently expansive monetary policy only leads to inflation.
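A simple way to see this point (a textbook sketch, not drawn from any specific model discussed here) is the New Keynesian Phillips curve,

\pi_t = \beta\, \mathbb{E}_t[\pi_{t+1}] + \kappa\, x_t,

where π denotes inflation and x the output gap: with forward-looking expectations and a discount factor β close to one, a permanently higher inflation rate raises the steady-state output gap only by (1-β)\pi/\kappa, which is approximately zero, so there is no exploitable long-run tradeoff between inflation and output.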

From a somewhat broader perspective, the Lucas critique is an application of the principle that if you leave something out of your model and that thing changes, you will get things wrong. DSGE models are neither necessary nor sufficient to deal with this broader problem – for example, the macroeconometric models at many central banks have explicitly incorporated inflation expectations in response to the Lucas critique without relying on full microfoundations. Furthermore, there are many dimensions along which the DSGE literature falls short of capturing the true microeconomic foundations of economic behavior. For example, it is common to employ assumptions and parameter values that are clearly at odds with actual measured microeconomic behavior in order to fit aggregate economic behavior. This includes, among others, assumptions on the homogeneity of economic agents (or heterogeneity along only a small number of groups or dimensions), on the elasticity of labor supply, which is typically assumed to be an order of magnitude higher than what is observed in micro data so as to fit the observed response of employment in recessions, or on utility functions that exhibit strong habit persistence in the New Keynesian literature so as to fit the behavior of the inflation rate.
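To illustrate the kind of assumptions at issue (a typical functional form, used here purely as an example), period utility in many quantitative models takes a form such as

u(c_t, n_t) = \ln(c_t - h\, c_{t-1}) - \chi\, \frac{n_t^{1 + 1/\varphi}}{1 + 1/\varphi},

where h governs habit persistence and φ is the Frisch elasticity of labor supply; estimated New Keynesian models often require values of h well above one half, and macro calibrations of φ of one or more, even though many micro studies estimate substantially lower values.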

If models abstract from certain features of reality or, even more, if they employ fundamental parameter values that are at odds with empirical estimates at the micro level in order to replicate certain aggregate summary statistics of the economy, then they are not actually capturing the true microeconomic incentives faced by economic agents, but are “bent” to fit the data, as was the case with 1970s-style macroeconomic models. Since they are not capturing the true underlying preferences and technologies of the agents in the economy, the described behavior is not robust to changes in policy regimes or other external factors.

The broader point of the “Lucas critique,” that models can only make useful predictions if they do not leave out some of the most important effects of the policies under consideration, applies to any model, including DSGE models. Researchers who employ DSGE models have to keep in mind that any macro model is bound to make some simplifications that destroy its robustness to some types of policy intervention. When investigating a specific research question, the art of being a good researcher is to distinguish which simplifications matter and which ones don’t.

Welfare experiments are a second aspect of DSGE models that is made possible by building on micro foundations and that has proven very useful. In the context of the traditional macroeconomic models of the 1970s, it was not possible to make direct statements about welfare, although the models could be used to speak about real variables that matter for welfare, such as growth or unemployment. Since DSGE models explicitly assume utility functions for all economic agents, evaluating the impact of different economic policies on the utility of agents is a useful way to study welfare effects.

What we discussed in the context of the Lucas critique applies equally here: any welfare calculation is only as reliable as the macroeconomic model from which it is derived. If a model makes the wrong simplifications, the welfare implications derived from it will not capture reality. A typical example is the low cost of business cycle fluctuations obtained in standard real business cycle models – if periods of unemployment correspond to voluntary, evenly shared reductions in labor supply by all agents, then it is unsurprising that the costs of unemployment are low, but it is questionable whether the model is a useful guide to reality. Again, the art of being a good researcher is to make sure that those aspects of the model that matter for welfare in a given policy experiment are included in one’s model.

More generally, micro foundations are a useful tool for many types of questions in macroeconomics, but they are not a goal in themselves. For some questions, micro foundations are indispensable; for others, they may be misplaced.

Quantitative Macroeconomics

…Matching the Moment, but Missing the Point?

The second important aspect of DSGE methodology that I want to discuss is its quantitative ambition. DSGE models aim to describe the macroeconomy quantitatively, in an engineering-like fashion.

A typical modern approach to writing a paper in DSGE macroeconomics is as follows:

  • to establish “stylized facts” about the quantitative interrelationships of certain macroeconomic variables (e.g. moments of the data such as variances, autocorrelations, covariances, …) that have hitherto not been jointly explained;
  • to write down a DSGE model of an economy subject to a defined set of shocks that aims to capture the described interrelationships; and
  • to show that the model can “replicate” or “match” the chosen moments when it is fed with stochastic shocks generated by the assumed shock process.

This last step is used to test the fitness of DSGE models by comparing the simulated moments from the model to the observed moments in the data. Models that roughly match the observed moments are accepted; models that are not consistent with the data are rejected.
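The sketch below illustrates the mechanics of such a comparison in the simplest possible terms. It is a hypothetical stand-in, not the procedure of any particular paper: the "model" is reduced to an AR(1) for log output, and the "data" moments are placeholders.

```python
# A minimal sketch of the "moment matching" step: simulate output from a
# hypothetical model, then compare a conventional set of second moments with
# their empirical counterparts.
import numpy as np

rng = np.random.default_rng(0)

def simulate_model(T=200, rho=0.95, sigma=0.007):
    """Hypothetical stand-in for a solved DSGE model: log output follows an
    AR(1) driven by productivity shocks."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * rng.standard_normal()
    return y

def second_moments(x):
    """Standard deviation and first-order autocorrelation of a detrended series."""
    sd = x.std()
    autocorr = np.corrcoef(x[:-1], x[1:])[0, 1]
    return sd, autocorr

y_model = simulate_model()
y_data = simulate_model(rho=0.9, sigma=0.009)   # placeholder for detrended data

print("model: sd=%.4f, autocorr=%.2f" % second_moments(y_model))
print("data:  sd=%.4f, autocorr=%.2f" % second_moments(y_data))
# Whether these numbers are "close enough" is typically judged informally --
# precisely the lack of a well-defined goodness-of-fit statistic discussed below.
```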

However, the test imposed by matching DSGE models to the data is problematic in at least three respects: First, the set of moments chosen to evaluate the model is largely arbitrary. The macro profession has developed conventions as to which moments in the data are customary to compare to the model, but there is no strong scientific basis for one particular set of moments over another.

Second, for a given set of moments, there is no well-defined statistic to measure the goodness of fit of a DSGE model or to establish what constitutes an improvement in such a framework. Whether the moments generated by the model satisfactorily match the moments observed in the real world is often determined by an eyeball comparison and is largely at the discretion of the reader. The scientific rigor of this method is questionable.

Third, the evaluation is complicated by the fact that, at some level, all economic models are rejected by the data. All macroeconomic models, whether DSGE or not, simplify complex social interactions into a small set of variables and interrelationships. In addition, DSGE models, as we emphasized in the previous section, frequently impose a number of restrictions that are in direct conflict with micro evidence. If a model has been rejected along some dimensions, then a statistic that measures the goodness-of-fit along other dimensions is meaningless.

Should we have greater confidence in DSGE models that match more moments and that achieve a closer match to the data than other models? Are these likely to provide a more useful guide to reality? There is no scientific basis to answer this question affirmatively. In some instances, the criterion of matching moments may even be a dangerous guide for how useful a model is for the real world. For example, the most important macroeconomic events, such as financial crises, are not well captured by the second moments that are typically employed to evaluate macroeconomic models – they are tail events. As a result, a good model of financial crises may well distinguish itself by not matching the moments used to evaluate regular business cycle models, which are driven by a different set of shocks.

The Theory of the Second-Best asserts that in an economy with multiple market failures, correcting one may actually reduce overall economic efficiency. A meta-theory of the second best applies to economic models: since our models of the real world are never “first-best” and always contain simplifications, improving the fit of a model along one dimension may make it a worse guide to reality.

Focusing on the quantitative fit of models also creates powerful incentives for researchers (i) to introduce elements that bear little resemblance to reality for the sake of achieving a better fit, (ii) to introduce opaque elements that provide the researcher with free (or almost free) parameters, and (iii) to introduce elements that improve the fit for the reported moments but deteriorate the fit along other, unreported dimensions.

Albert Einstein observed that “not everything that counts can be counted, and not everything that can be counted counts.” DSGE models make it easy to offer a wealth of numerical results by following a well-defined set of methods (which requires one or two years of investment in graduate school, but is relatively straightforward to apply thereafter). There is a risk that researchers focus too much on numerical predictions of questionable reliability and relevance, which absorb a lot of time and effort, rather than on deeper conceptual questions that are of higher relevance for society.

The Complexity of DSGE Models

…Limiting the Scope of Our Investigation?

DSGE models are not easy to solve: economic agents confront an infinitely forward-looking optimization problem; value and policy functions typically do not have an explicit representation; rational expectations imply that the expectations and actions of all agents have to be mutually consistent; etc. The simplest benchmark RBC model is non-trivial to solve for beginners; each friction that is introduced on top of that benchmark makes the model exponentially more difficult to solve, especially if global solution methods need to be employed.
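To give a flavor of what "no explicit representation" means in practice, the sketch below computes a global solution by value function iteration for the deterministic neoclassical growth model, a much simpler cousin of an actual DSGE model. Parameter values are illustrative only; adding shocks, frictions, or additional state variables multiplies the computational burden.

```python
# A minimal sketch of global solution by value function iteration for the
# deterministic neoclassical growth model with log utility.
import numpy as np

alpha, beta, delta = 0.36, 0.96, 0.1          # illustrative technology/preference parameters
k_grid = np.linspace(0.5, 10.0, 500)          # grid for the capital stock
V = np.zeros(len(k_grid))                     # initial guess for the value function

# Resources available at each grid point: f(k) + (1 - delta) * k
resources = k_grid**alpha + (1 - delta) * k_grid

for _ in range(1000):
    # Consumption implied by every (k, k') choice; rule out negative consumption
    c = resources[:, None] - k_grid[None, :]
    utility = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)
    V_new = np.max(utility + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:      # stop once the Bellman operator has converged
        break
    V = V_new

# Policy function k'(k) has no closed form; it is recovered numerically on the grid
policy = k_grid[np.argmax(utility + beta * V[None, :], axis=1)]
print("Approximate steady-state capital:", policy[np.abs(policy - k_grid).argmin()])
```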

Many times, the conceptual requirements and the quantitative ambitions of DSGE models are in direct conflict with each other – the conceptual restrictions make quantitative analysis more difficult and vice versa. Some conceptual insights that are difficult to simulate numerically are not spelled out; some numerical simulations that are difficult to square with the conceptual restrictions of DSGE models are not performed.

Biases

The complexity of DSGE models thus introduces a number of biases into the macroeconomic profession:

First, there is a bias in the positive mechanisms that the profession is able to describe in DSGE models. Mathematical and computational complexity impose serious restrictions on the set of models that DSGE macroeconomists can analyze. In other words, the set of ideas that we can describe in rigorously quantified DSGE models is smaller than the set of ideas that we can express in simpler models. These methodological restrictions limit our modeling and, ultimately, our thinking.

There is a danger that ideas that are at present too complex to capture in DSGE models (for example, because numerical simulations are beyond our computational capabilities) get discounted as “unscientific” just because they do not fit into the dominant prevailing methodological apparatus.

Secondly, complexity also introduces a normative bias. When adding frictions into economic models, it is easiest to assume them in well-behaved analytical forms, e.g. as reduced-form shocks to technology or other parameters, as fixed wedges, or as convex constraints. Oftentimes, these assumptions automatically imply that the welfare theorems in the described system hold. In other words, the assumptions that keep a model numerically more tractable and the assumptions that assure that a model economy is constrained efficient frequently overlap. But it is dangerous to derive normative implications from a model that was designed to be constrained efficient just so it can be solved more easily. The desire to channel economic frictions into well-behaved analytical forms therefore introduces a normative bias into macroeconomics that makes the welfare theorems hold more frequently than an accurate description of the economy would suggest.

Third, the complexity introduced by the DSGE approach conflicts with Occam’s razor, i.e. with the scientific principle that models should be as simple as possible. This implies that ideas are presented less clearly than they could be and that some economic insights are clouded or obscured by complexity. This is especially problematic in graduate education – some graduate students who have successfully passed qualifying exams in macroeconomics are unable to reproduce basic macroeconomic relationships that have been known for generations.

A fourth and related bias introduced by the complexity of solving DSGE models occurs in the amount of time spent on methods versus substance. This starts during graduate education but carries over into the profession at large. Since the methods of DSGE macroeconomics require a significant investment, graduate education in macroeconomics often focuses so heavily on methods that it gives insufficient attention to content. More broadly, the average DSGE macroeconomist spends a considerable amount of time, energy and effort dealing with the complexity generated by satisfying simultaneously the conceptual and numerical requirements of the DSGE approach. At the margin, society may benefit more if some of these resources were spent on tackling macroeconomic problems without being subject to the methodological restrictions imposed by the DSGE approach.

Methodological innovations frequently create methodological traps. After every methodological innovation, some works are more concerned with incorporating the latest techniques than with the underlying questions themselves. However, applying the latest methodology does not guarantee economic insight – not every mathematical truism is a useful economic theory. In fact, the two all too often get confused. Ultimately, the main criterion to judge the usefulness of macroeconomic models must be their potential to contribute to our understanding of the real world and to improve our ability to govern the macroeconomy.

Finally, the complexity of DSGE models also introduces a selection bias into who becomes a macroeconomist. If we believe that the distribution of technical skills and of conceptual economic skills across the population is not perfectly correlated, then the increasing technical demands of DSGE models cause the pool of macroeconomists to be on average less well-endowed with conceptual economic skills. (Max Planck once famously remarked that he decided to study physics rather than economics in 1874 because the latter seemed too difficult – presumably too difficult for his ingenious but very mathematical mind. One can only wonder which of the two fields he would have chosen today.)

Uniformity in Macroeconomics

…Dangers of Groupthink?

An interesting property of scientific methodologies, including the DSGE approach, is that they generate network externalities. The more people use a given methodology, the greater the payoffs to using it. A methodology provides a common framework of thought that facilitates the exchange of ideas and the incremental nature of scientific progress. These forces, by themselves, lead to natural monopolies in scientific methodologies. By some accounts, they have catapulted the DSGE approach into the position of a natural monopoly. Indeed, some macroeconomists tend to dismiss anything that is not DSGE as not being macroeconomics, or not being scientific.

The uniformity created by a single dominant methodology in a scientific field is desirable if that methodology is able to efficiently encompass all the important phenomena to be described in a given field. However, the macroeconomics profession is far from this goal. Each of the restrictions imposed by the DSGE approach that we discussed in the previous sections has some clear benefits, but there are also situations in which it is unnecessary for the insights to be gained, inconvenient because of the additional mathematical burden imposed, or outright misplaced in the sense that it does not represent the most fitting restriction to capture the empirical evidence.

Robustness

If a methodology becomes dominant but is not sufficiently broad to capture all the phenomena of interest in a given field, then it exposes the field to a dangerous lack of robustness. Macroeconomics has arguably suffered from such a lack of robustness: prior to 2008, mainstream macroeconomics was unprepared to understand the roots, mechanisms and policy solutions of the financial crisis of 2008/09. One of the reasons was arguably the focus on models with a well-behaved ergodic steady state and on second moments of macroeconomic variables rather than on extreme tail events.

The importance of robustness creates a powerful countervailing force to the network externalities that we discussed earlier, which makes methodological diversity desirable from a social point of view. However, it is not clear if private and social incentives for diversity are aligned.

Methodological Uniformity may in the end not be very desirable for the macro profession. If we want to build a useful quantitative model of the economy, it is not clear that imposing the conceptual restrictions of the DSGE approach is always a good idea – for example, no major central bank in the world uses a DSGE model as its main model of the economy. Similarly, if we want to understand a conceptual economic mechanism, it is not clear how useful it is to subject it to the requirement to perform detailed numerical simulations if these are costly to implement.

In short, as in many other scientific disciplines that study macro phenomena, it may be better for macroeconomists to embrace more diversity of methodological approaches, some of them focusing on quantitative insights, others on conceptual insights, and yet others working on combining the two. Recall that even in physics – the science which is perhaps closest to reaching a single unifying framework – mankind has not yet managed to find a theory of everything.


  • 1. This paper was prepared for the 2015 Conference "A Just Society" honoring Joseph Stiglitz's 50 years of teaching. It has benefited greatly from detailed comments by Joseph Stiglitz and Martin Guzman, as well as interesting conversations with Boragan Aruoba and Wouter den Haan.
