
Real Business Cycle Theory

In your 1980 paper ‘Methods and Problems in Business Cycle Theory’ you seem to be anticipating in some respects the next decade’s work. You appear to be asking for the kind of methodological approach which Kydland and Prescott were about to take up. Were you aware of what they were doing at the time?

Yes. But I wasn’t anticipating their work.

But your statements in that paper seem to be calling for the kind of methodology that they have used.

Well, Prescott and I have been very close for years and we talk about everything. But if you’re asking whether at the time I wrote that paper I had an idea that you could get some sort of satisfactory performance out of a macroeconomic model in which the only disturbances were productivity shocks, then the answer is no. I was as surprised as everybody else when Kydland and Prescott showed that was possible [laughter].

Is it fair to say that you, Friedman, Tobin and other leading macroeconomists up until 1980 tended to think of a long-run smooth trend around which there are fluctuations? Basically, differences of opinion concerned what caused these fluctuations and what you could do about them. Then Kydland and Prescott [1982] came along and changed that way of thinking.

Well, they talk about business cycles in terms of deviations from trend as well. The difference is that Friedman, Tobin and I would think of the sources of the trend as being entirely from the supply side and the fluctuations about trend as being induced by monetary shocks. Of course we would think of very different kinds of theoretical models to deal with the long-run and the short-run issues. Kydland and Prescott took the sources that we think of as long term to see how well they would do for these short-term movements. The surprising thing was how well it worked. I am still mostly on the side of Friedman and Tobin, but there is no question that our thinking has changed a lot on the basis of this work.

In an article in Oxford Economic Papers Kevin Hoover [1995b] has suggested that ‘the calibration methodology, to date, lacks any discipline as stern as that imposed by econometric methods … and above all, it is not clear on what standards competing, but contradictory models are to be compared and adjudicated’. Does this pose a problem?

Yes, but it is not a problem that’s resolved by Neyman–Pearson statistics. There the whole formalism is for testing models that are nested. It has always been a philosophical issue to compare non-nested models. It’s not something that Kydland and Prescott introduced. I think Kydland and Prescott are in part responding to the sterility of Neyman–Pearson statistical methods. These methods just don’t answer the questions that we want to answer. Maybe they do for studying the results of agricultural experiments, or something like that, but not for dealing with economics.

Would you agree with the view that a major contribution of the real business cycle approach has been to raise fundamental questions about the meaning, significance and characteristics of economic fluctuations?

I think that is true of any influential macroeconomics. I don’t think that statement isolates a unique contribution of real business cycle theory.

In commenting on recent developments in new classical economics Gregory Mankiw [1989] has argued that although real business cycle theory has ‘served the important function of stimulating and provoking scientific debate, it will [he predicts] ultimately be discarded as an explanation of observed fluctuations’. What are your predictions for the future development of macroeconomics?

I agree with Mankiw, but I don’t think he understands the implication of his observation. We are now seeing models in the style of Kydland and Prescott with nominal rigidities, imperfect credit markets, and many other features that people thinking of themselves as Keynesians have emphasized. The difference is that within an explicit equilibrium framework we can begin to work out the quantitative implications of these features, not just illustrate them with textbook diagrams.

New Keynesian Economics

When we interviewed Gregory Mankiw in 1993 [see Snowdon and Vane, 1995] he suggested that ‘the theoretical challenge of Lucas and his followers has been met’ and that Keynesian economics is now ‘well founded on microeconomic models’. Do you think that new Keynesians such as Mankiw have created firm microeconomic foundations for Keynesian models?

There are some interesting theoretical models by people who call themselves ‘new Keynesians’. I don’t know who first threw out this challenge but I would think it was Patinkin. When I was a student this idea of microfoundations for Keynesian models was already on everyone’s agenda and I thought of Patinkin as the leading exponent of that idea.

Keynesian models in the 1960s, and this is what excited people like Sargent and me, were operational in the sense that you could quantify the effects of various policy changes by simulating these models. You could find out what would happen if you balanced the budget every year, or if you increased the money supply, or changed fiscal policy. That was what was exciting. They were operational, quantitative models that addressed important policy questions.

Now in that sense new Keynesian models are not quantitative, are not fitted to data; there are no realistic dynamics in them. They are not used to address any policy conclusions. What are the principal policy conclusions of ‘new Keynesian economics’? Ask Greg Mankiw that question the next time you interview him [laughter]. I don’t even ask that they prove interesting policy conclusions, just that they attempt some. Everyone knows that Friedman said we ought to expand the money supply by 4 per cent per year. Old Keynesians had similar ideas about what we ought to do with the budget deficit, and what they thought the effects of it would be. New Keynesian economics doesn’t seem to make contact with the questions that got us interested in macroeconomics in the first place.

In Europe, where unemployment is currently a much bigger problem than in the USA, some new Keynesian work has tried to explain this phenomenon in terms of hysteresis effects. This work implies that Friedman [1968a] was wrong when he argued that aggregate demand disturbances cannot affect the natural rate. So in that sense some new Keynesian economists are trying to address the problem of unemployment, suggesting that aggregate demand management still has a role to play.

When Friedman wrote his 1968 article the average rate of unemployment in the USA was something like 4.8 per cent and the system always seemed to return to about that level. Since then the natural rate has drifted all over the place. It looked much more like a constant of nature back in those days than it does now. Everyone would have to agree with that. That is not a theory but an observation about what has happened. Now in Europe the drift upwards has been much more striking. Unemployment is a hugely important problem. But I don’t want to call anyone who notes that that is a problem a Keynesian.

Ljungqvist and Sargent [1998] have done some very exciting work on this, trying to make the connections between the European welfare state and unemployment rates. I don’t know whether they have got it right or not.

That has also been a theme of Patrick Minford et al.’s [1985] work in the UK. It is a tough theme to defend though, because the welfare state has been in place for 30 years more or less in its present form in most European countries. Perhaps the best way is to identify changes within the incentive structure rather than the level of benefits.

Yes, that is what you have got to do. Ljungqvist and Sargent try to address that issue as well.

General and Methodological Issues

Do you think it is healthy to subject students to a breadth of perspectives at the undergraduate level?

I don’t know. I teach introductory macro and I want my students to see specific, necessarily pretty simple, models and to compare their predictions to US data. I want them to see for themselves rather than just be told about it. Now that does give a narrowness to their training. But the alternative of giving them a catalogue of schools and noting what each says without giving students any sense of how economic reasoning is used to try to account for the facts is not very attractive either. Maybe there is a better way to do it.

Have you ever thought of writing a basic introductory textbook?

I have thought a lot about it, but it would be hard to do. I sat down once with my course notes, to see how far the notes I had been using over the years were from a textbook, and it was a long, long way [laughter]. So I have never done it.

Is the philosophy of science and formal methodology an area that interests you?

Yes. I don’t read very much in the area but I like to think about it.

You acknowledge that Friedman has had a great influence on you, yet his methodological approach is completely different to your own approach to macroeconomics. Why did his methodological approach not appeal to you?

I like mathematics and general equilibrium theory. Friedman didn’t. I think that he missed the boat [laughter].

His methodological approach seems more in keeping with Keynes and Marshall.

He describes himself as a Marshallian, although I don’t know quite what that means. Whatever it is, it’s not what I think of myself as.

Would you agree that the appropriate criterion for establishing the fruitfulness of a theory is the degree of empirical corroboration attained by its predictions?

Something like that. Yes.

You are a Friedmanite on that issue of methodology?

I am certainly a Friedmanite. The problem with that statement is that not all empirical corroborations are equal. There are some crucial things that a theory has to account for and if it doesn’t we don’t care how well it does on other dimensions.

Do you think that it is crucial for macroeconomic models to have neoclassical choice-theoretic microfoundations?

No. It depends on the purposes you want the model to serve. For short-term forecasting, for example, the Wharton model does very well with little in the way of theoretical foundations, and Sims, Litterman and others have had pretty good success with purely statistical extrapolation methods that involve no economics at all. But if one wants to know how behaviour is likely to change under some change in policy, it is necessary to model the way people make choices. If you see me driving north on Clark Street, you will have good (though not perfect) predictive success by guessing that I will still be going north on the same street a few minutes later. But if you want to predict how I will respond if Clark Street is closed off, you have to have some idea of where I am going and what my alternative routes are – of the nature of my decision problem.

Why do you think there is more consensus among economists over microeconomic issues than over macroeconomic issues?

What is the microeconomic consensus you are referring to? Does it just mean that microeconomists agree on the Slutsky equation, or other purely mathematical propositions? Macroeconomists all take derivatives in the same way, too. On matters of application and policy, microeconomists disagree as vehemently as macroeconomists – neither side in an antitrust action has any difficulty finding expert witnesses.

I think there is a tremendous amount of consensus on macroeconomic issues today. But there is much that we don’t know, and so – necessarily – a lot to argue about.

Do you see any signs of an emerging consensus in macroeconomics, and if so, what form will it take?

When a macroeconomic consensus is reached on an issue (as it has been, say, on the monetary sources of inflation) the issue passes off the stage of professional debate, and we argue about something else. Professional economists are primarily scholars, not policy managers. Our responsibility is to create new knowledge by pushing research into new, and hence necessarily controversial, territory. Consensus can be reached on specific issues, but consensus for a research area as a whole is equivalent to stagnation, irrelevance and death.

In what areas, other than the monetary sources of inflation, do you think there is now a consensus in macro? Do you think, for example, that there is a majority of economists who are now anti fine-tuning?

Yes. Fine-tuning certainly has come down a few pegs. Paul Krugman has been doing a lot of very effective writing attacking non-economists writing about economic matters. Paul is speaking for the whole profession in a very effective way and addressing the most important questions in social science. Economists have a lot of areas of agreement, partly due to the fact that we look at numbers. If somebody says the world is breeding itself into starvation, we look at numbers and see that per capita incomes are rising in the world. It seems to me that on a lot of questions there is a huge amount of consensus among economists. More and more we are focusing on technology, supply-side, long-run issues. Those are the big issues for us now, not depression prevention.