King, Robert G. "Quantitative Theory and Econometrics." Economic Quarterly, Federal Reserve Bank of Richmond, 1995. https://www.highbeam.com/doc/1G1-18550506.html
Quantitative theory uses simple, abstract economic models together with a small amount of economic data to highlight major economic mechanisms. To illustrate the methods of quantitative theory, we review studies of the production function by Paul Douglas, Robert Solow, and Edward Prescott. Consideration of these studies takes an important research area from its earliest days through contemporary real business cycle analysis. In these quantitative theoretical studies, economic models are employed in two ways. First, they are used to organize economic data in a new and suggestive manner. Second, models are combined with economic data to display successes and failures of particular theoretical mechanisms. Each of these features is present in each of the three studies, but to varying degrees, as we shall see.
These quantitative theoretical investigations changed how economists thought about the aggregate production function, i.e., about an equation describing how the total output of many firms is related to the total quantities of inputs, in particular labor and capital inputs. Douglas taught economists that the production function could be an important applied tool, as well as a theoretical device, by skillfully combining indexes of output with indexes of capital and labor input. Solow taught economists that the production function could not be used to explain long-term growth, absent a residual factor that he labeled technical progress. Prescott taught economists that Solow's residual was sufficiently strongly procyclical that it might serve as a source of economic fluctuations. More specifically, he showed that a real business cycle model driven by Solow's residuals produced fluctuations in consumption, investment, and output that broadly resembled actual U.S. business cycle experience.
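For concreteness, the residual can be written in symbols. With a production function of the form studied in Section 2, $Y_t = A_t N_t^{\alpha} K_t^{1-\alpha}$, Solow's residual is the part of measured output growth left unexplained by input growth. (The explicit functional form here anticipates the Cobb-Douglas case discussed below; Solow's own calculation required only a share-weighted decomposition, not this exact form.)

$$\Delta \log A_t = \Delta \log Y_t - \alpha\, \Delta \log N_t - (1-\alpha)\, \Delta \log K_t,$$

where $\alpha$ is labor's share of income.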
In working through three key studies by Douglas, Solow, and Prescott, we focus on their design, their interrelationship, and the way in which they illustrate how economists learn from studies in quantitative theory. This learning process is of considerable importance to ongoing developments in macroeconomics, since the quantitative theory approach is now the dominant research paradigm being used by economists incorporating rational expectations and dynamic choice into small-scale macroeconomic models.
Quantitative theory is thus necessarily akin to applied econometric research, but its methods are very different, at least at first appearance. Indeed, practitioners of quantitative theory - notably Prescott (1986) and Kydland and Prescott (1991) - have repeatedly clashed with practitioners of econometrics. Essentially, advocates of quantitative theory have suggested that little is learned from econometric investigations, while proponents of econometrics have suggested that little tested knowledge of business cycle mechanisms is uncovered by studies in quantitative economic theory.
This article reviews and critically evaluates recent developments in quantitative theory and econometrics. To define quantitative theory more precisely, Section 1 begins by considering alternative styles of economic theory. Subsequently, Section 2 considers the three examples of quantitative theory in the area of the production function, reviewing the work of Douglas, Solow, and Prescott. With these examples in hand, Section 3 then considers how economists learn from exercises in quantitative theory.
One notable difference between the practice of quantitative theory and of econometrics is the manner in which the behavioral parameters of economic models are selected. In quantitative theoretical models of business cycles, for example, most behavioral parameters are chosen from sources other than the time series fluctuations in the macroeconomic data that are to be explained in the investigation. This practice has come to be called calibration. In modern macroeconometrics, the textbook procedure is to estimate parameters from the time series that are under study. Thus, this clash of methodologies is frequently described as "calibration versus estimation."
After considering how a methodological controversy between quantitative theory and econometrics inevitably grew out of the rational expectations revolution in Section 4 and describing the rise of quantitative theory as a methodology in Section 5, this article then argues that the ongoing controversy cannot really be about "calibration versus estimation." It demonstrates that classic calibration studies estimate some of their key parameters and that classic estimation studies are frequently forced to restrict some of their parameters so as to yield manageable computational problems, i.e., to calibrate them. Instead, in Section 6, the article argues that the key practical issue is the style of "model evaluation," i.e., the manner in which economists determine the dimensions along which models succeed or fail.
In terms of the practice of model evaluation, there are two key differences between standard practice in quantitative theory and econometrics. One key difference is indeed whether there are discernible differences between the activities of parameter selection and model evaluation. In quantitative theory, parameter selection is typically undertaken as an initial activity, with model evaluation being a separate secondary stage. By contrast, in the dominant dynamic macroeconometric approach, that of Hansen and Sargent (1981), parameter selection and model evaluation are undertaken in an essentially simultaneous manner: most parameters are selected to maximize the overall fit of the dynamic model, and a measure of this fit is also used as the primary diagnostic for evaluation of the theory. Another key difference lies in the breadth of model implications utilized, as well as the manner in which they are explored and evaluated. Quantitative theorists look at a narrow set of model implications; they conduct an informal evaluation of the discrepancies between these implications and analogous features of a real-world economy. Econometricians typically look at a broad set of implications and use specific statistical methods to evaluate these discrepancies.
By and large, this article takes the perspective of the quantitative theorist. It argues that there is a great benefit to choosing parameters in an initial stage of an investigation, so that other researchers can readily understand and criticize the attributes of the data that give rise to such parameter estimates. It also argues that there is a substantial benefit to limiting the scope of inquiry in model evaluation, i.e., to focusing on a set of model implications taken to display central and novel features of the operation of a theoretical model economy. This limitation of focus seems appropriate to the current stage of research in macroeconomics, where we are still working with macroeconomic models that are extreme simplifications of macroeconomic reality.
Yet quantitative theory is not without its difficulties. To illustrate three of its limitations, Section 7 of the article reconsiders the standard real business cycle model, which is sometimes described as capturing a dominant component of postwar U.S. business cycles (for example, by Kydland and Prescott [1991] and Plosser [1989]). The first limitation is one stressed by Eichenbaum (1991): since it ignores uncertainty in estimated parameters, a study in quantitative theory cannot give any indication of the statistical confidence that should be placed in its findings. The second limitation is that quantitative theory may direct one's attention to model implications that do not provide much information about the endogenous mechanisms contained in the model. In the discussion of these two limitations, the focus is on a "variance ratio" that has been used, by Kydland and Prescott (1991) among others, to suggest that a real business cycle arising from technology shocks accounts for three-quarters of postwar U.S. business cycle fluctuations in output. In discussing the practical importance of the first limitation, Eichenbaum concluded that there is "enormous" uncertainty about this variance ratio, which he suggested arises because of estimation uncertainty about the values of parameters of the exogenous driving process for technology. In terms of the second limitation, the article shows that a naive model - in which output is driven only by production function residuals without any endogenous response of factors of production - performs nearly as well as the standard quantitative theoretical model according to the "variance ratio." The third limitation is that the essential focus of quantitative theory on a small number of model implications may easily mean that it misses crucial failures (or successes) of an economic model. This point is made by Watson's (1993) recent work, which showed that the standard real business cycle model badly misses capturing the "typical spectral shape of growth rates" for real macroeconomic variables, including real output. That is, by focusing on only a small number of low-order autocovariances, prior investigations such as those of Kydland and Prescott (1982) and King, Plosser, and Rebelo (1988) simply overlooked the fact that there is an important predictable component of output growth at business cycle frequencies.
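To fix ideas, the variance ratio at issue can be sketched as follows. This is a schematic version only; Kydland and Prescott's (1991) construction involves particular filtering and measurement choices not reproduced here:

$$\lambda = \frac{\operatorname{var}(\hat{y}_t)}{\operatorname{var}(y_t)},$$

where $\hat{y}_t$ is the output series the model predicts when driven by measured technology shocks, $y_t$ is actual output, and both are expressed as business cycle components. A value of $\lambda$ near 0.75 underlies the "three-quarters" claim; the point of the naive-model comparison above is that a comparable $\lambda$ can be achieved without any endogenous response of inputs.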
However, while there are shortcomings in the methodology of quantitative theory, its practice has grown at the expense of econometrics for a good reason: it provides a workable vehicle for the systematic development of macroeconomic models. In particular, it is a method that can be used to make systematic progress in the current circumstances of macroeconomics, when the models being developed are still relatively incomplete descriptions of the economy. Notably, macroeconomists have used quantitative theory in recent years to learn how the business cycle implications of the basic neoclassical model are altered by a wide range of economic factors, including fiscal policies, international trade, monopolistic competition, financial market frictions, and gradual adjustment of wages and prices.
The main challenge for econometric theory is thus to design procedures that can be used to make similar progress in the development of macroeconomic models. One particular aspect of this challenge is that the econometric methods must be suitable for situations in which we know before looking at the data that the model or models under study are badly incomplete, as we will know in most situations for some time to come. Section 8 of the article discusses a general framework of model-building activity within which quantitative theory and traditional macroeconometric approaches are each included. On this basis, it then considers some initial efforts aimed at developing econometric methods to capture the strong points of the quantitative theory approach while providing the key additional benefits associated with econometric work. Chief among these benefits are (1) the potential for replication of the outcomes of an empirical evaluation of a model or models and (2) an explicit statement of the statistical reliability of the results of such an evaluation.
In addition to providing challenges to econometrics, Section 9 of the article shows how the methods of quantitative theory also provide new opportunities for applied econometrics, using Friedman's (1957) permanent income theory of consumption as a basis for constructing two more detailed examples. The first of these illustrates how an applied econometrician may use the approach of quantitative theory to find a powerful estimator of a parameter of interest. The second of these illustrates how quantitative theory can aid in the design of informative descriptive empirical investigations.
In macroeconometric analysis, issues of identification have long played a central role in theoretical and applied work, since most macroeconomists believe that business fluctuations are the result of a myriad of causal factors. Quantitative theories, by contrast, typically are designed to highlight the role of basic mechanisms and typically identify individual causal factors. Section 10 considers the challenges that issues of identification raise for the approach of quantitative theory and the recent econometric developments that share its model evaluation strategy. It suggests that the natural way of proceeding is to compare the predictions of a model or models to characteristics of economic data that are isolated with a symmetric empirical identification.
The final section of the article offers a brief summary as well as some concluding comments on the relationship between quantitative theory and econometrics in the future of macroeconomic research.
1. STYLES OF ECONOMIC THEORY
The role of economic theory is to articulate the mechanisms by which economic causes are translated into economic consequences. By requiring that theorizing be conducted in a formal mathematical way, economists have assured a rigor of argument that would be difficult to attain in any other manner. Minimally, the process of undertaking a mathematical proof lays bare the essential linkages between assumptions and conclusions. Further, and importantly, mathematical model-building has also forced economists to make sharp abstractions: as model economies become more complex, there is a rapidly rising cost to establishing formal propositions. Articulation of key mechanisms and abstraction from less important ones are essential functions of theory in any discipline, and the speed at which economic analysis has adopted the mathematical paradigm has led it to advance at a much greater rate than its sister disciplines in the social sciences.
If one reviews the history of economics over the course of this century, the accomplishments of formal economic theory have been major. Our profession developed a comprehensive theory of consumer and producer choice, first working out static models with known circumstances and then extending it to dynamics, uncertainty, and incomplete information. Using these developments, it established core propositions about the nature and efficiency of general equilibrium with interacting consumers and producers. Taken together, the accomplishments of formal economic theory have had profound effects on applied fields, not only in the macroeconomic research that will be the focal point of this article but also in international economics, public finance, and many other areas.
The developments in economic theory have been nothing short of remarkable, matched within the social sciences perhaps only by the rise of econometrics, in which statistical methods applicable to economic analysis have been developed. For macroeconomics, the major accomplishment of econometrics has been the development of statistical procedures for the estimation of parameters and testing of hypotheses in a context where a vector of economic variables is dynamically interrelated. For example, macroeconomists now think about the measurement of business cycles and the testing of business cycle theories using an entirely different statistical conceptual framework from that available to Mitchell (1927) and his contemporaries.(1)
When economists discuss economic theory, most of us naturally focus on formal theory, i.e., the construction of a model economy - which naturally is a simplified version of the real world - and the establishment of general propositions about its operation. Yet, there is another important kind of economic theory, which is the use of much more simplified model economies to organize economic facts in ways that change the focus of applied research and the development of formal theory. Quantitative theory, in the terminology of Kydland and Prescott (1991), involves taking a more detailed stand on how economic causes are translated into economic consequences. Quantitative theory, of course, embodies all the simplifications of abstract models of formal theory. In addition, it involves making (1) judgments about the quantitative importance of various economic mechanisms and (2) decisions about how to selectively compare the implications of a model to features of real-world economies. By its very nature, quantitative theory thus stands as an intermediate activity to formal theory and the application of econometric methods to evaluation of economic models.
A decade ago, many economists thought of quantitative theory as simply the natural first step in a progression of research activities from formal theory to econometrics, but there has been a hardening of viewpoints in recent years. Some argue that standard econometric methods are not necessary or are, in fact, unhelpful; quantitative theory is sufficient. Others argue that one can learn little from quantitative theory and that knowledge about important economic mechanisms can be obtained only through econometrics. For those of us who honor the traditions of both quantitative theory and econometrics, not only did the onset of this controversy come as a surprise, but its depth and persistence also were unexpected. Accordingly, the twin objectives of this paper are, first, to explore why the events of recent years have led to tensions between practitioners of quantitative theory and econometrics and, second, to suggest dimensions along which the recent controversy can lead to better methods and practice.
2. EXAMPLES OF QUANTITATIVE THEORY
This section discusses three related research topics that take quantitative theory from its earliest stages to the present day. The topics all concern the production function, i.e., the link between output and factor inputs.(2)
The Production Function and Distribution Theory
The production function is a powerful tool of economic analysis, which every first-year graduate student learns to manipulate. Indeed, the first example that most economists encounter is the functional form of Cobb and Douglas (1928), which is also the first example studied here. For contemporary economists, it is difficult to imagine that there once was a time when the notion of the production function was controversial. But, 50 years after his pioneering investigation, Paul Douglas (1976) reminisced:
Critics of the production function analysis such as Horst Mendershausen and his mentor, Ragnar Frisch, . . . urged that so few observations were involved that any mathematical relationship was purely accidental and not causal. They sincerely believed that the analysis should be abandoned and, in the words of Mendershausen, that all past work should be torn up and consigned to the wastepaper basket. This was also the general sentiment among senior American economists, and nowhere was it held more strongly than among my senior colleagues at the University of Chicago. I must admit that I was discouraged by this criticism and thought of giving up the effort, but there was something which told me I should hold on. (P. 905)
The design of the investigation by Douglas was as follows. First, he enlisted the assistance of a mathematician, Cobb, to develop a production function with specified properties.(3) Second, he constructed indexes of physical capital and labor input in U.S. manufacturing for 1899-1922. Third, Cobb and Douglas estimated the production function
$$Y_t = A\, N_t^{\alpha} K_t^{1-\alpha}.$$
In this specification, $Y_t$ is the date $t$ index of manufacturing output, $N_t$ is the date $t$ index of employed workers, and $K_t$ is the date $t$ index of the capital stock. The least squares estimates for 1899-1922 were $\hat{A} = 1.01$ and $\hat{\alpha} = 0.75$. Fourth, Cobb and Douglas performed a variety of checks of the implications of their specification. These included comparing their estimated $\hat{\alpha}$ to measures of labor's share of income, which earlier work had shown to be reasonably constant through time. They also examined the extent to which the production function held for deviations from trend rather than levels. Finally, they examined the relationship between the model's implied marginal product of labor ($\alpha Y/N$) and a measure of real wages that Douglas (1926) had constructed in earlier work.
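To make the mechanics of the estimation step concrete, here is a minimal sketch of the least squares computation in Python. Because the specification imposes constant returns to scale, it can be estimated by regressing $\log(Y_t/K_t)$ on $\log(N_t/K_t)$. The index series below are illustrative placeholders, not the original 1899-1922 Cobb-Douglas data:

```python
import numpy as np

# Placeholder index series (1899 = 100) standing in for the original
# Cobb-Douglas manufacturing indexes, which are not reproduced here.
Y = np.array([100.0, 104.0, 109.0, 118.0, 124.0, 131.0])  # output index
N = np.array([100.0, 103.0, 106.0, 111.0, 115.0, 119.0])  # labor index
K = np.array([100.0, 107.0, 115.0, 126.0, 138.0, 151.0])  # capital index

# Impose constant returns to scale, as Cobb and Douglas did:
#   log(Y/K) = log(A) + alpha * log(N/K) + error.
y = np.log(Y / K)
x = np.log(N / K)
X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
logA, alpha = coef
A = np.exp(logA)
print(f"A-hat = {A:.3f}, alpha-hat = {alpha:.3f}")

# Checks in the spirit of Cobb and Douglas: with this functional form,
# labor's share of income is alpha itself, and the implied marginal
# product of labor is alpha * Y / N, which they compared to real wages.
Y_fitted = A * N**alpha * K**(1.0 - alpha)
mpl = alpha * Y_fitted / N
```

Run on the actual indexes, a computation of this type is what produced the estimates $\hat{A} = 1.01$ and $\hat{\alpha} = 0.75$ reported above.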
The results of the Cobb-Douglas quantitative theoretical investigation are displayed in Figure 1. Panel A provides a plot of the data on output, labor, and capital from 1899 to 1922. All series are benchmarked at 100 in 1899, and it is notable that capital grows dramatically over the sample period. Panel B displays the fitted production function, $\hat{Y}_t = \hat{A} N_t^{\hat{\alpha}} K_t^{1-\hat{\alpha}}$, graphed as a dashed line and manufacturing output, $Y_t$, graphed as a solid line. As organized by the production function, variations in the factors $N$ and $K$ clearly capture the upward trend in output.(4)
With the Cobb-Douglas study, the production function moved from the realm of pure theory - where its properties had been discussed by Clark (1889) and others - to that of quantitative theory. In the hands of Cobb and Douglas, the production function displayed an ability to link (1) measures of physical output to measures of factor inputs (capital and labor) and (2) measures of real wages to measures of average products. It thus became an engine of analysis for applied research. …