
What about Milton Friedman?

Well, I know Bob Lucas regards Friedman as being incredibly influential to

the research programme in the monetary area. Friedman’s work certainly

influenced people interested in the monetary side of things – Neil Wallace,

for example, was one of Friedman’s students. But I’m more biased towards

Neil Wallace’s programme, which is to lay down theoretical foundations for

money. Friedman’s work in the monetary field with Anna Schwartz [1963] is

largely empirically orientated. Now when Friedman talked about the natural

rate – where the unit of account doesn’t matter – that is serious theory. But

Friedman never accepted the dynamic equilibrium paradigm or the extension

of economic theory to dynamic stochastic environments.

You were a graduate student at a time when Keynesianism ‘seemed to be the

only game in town in terms of macroeconomics’ [Barro, 1994]. Were you ever

persuaded by the Keynesian model? Were you ever a Keynesian in those

days?

Well, in my dissertation I used a Keynesian model of business cycle fluctuations.

Given that the parameters are unknown, I thought that maybe you

could apply optimal statistical decision theory to better stabilize the economy.

Then I went to the University of Pennsylvania. Larry Klein was there – a

really fine scholar. He provided support for me as an assistant professor,

which was much appreciated. I also had an association with the Wharton

Economic Forecasting group. However, after writing the paper on ‘Investment

under Uncertainty’ with Bob Lucas [Econometrica, 1971], plus reading

his 1972 Journal of Economic Theory paper on ‘Expectations and the Neutrality

of Money’, I decided I was not a Keynesian [big smile]. I actually

stopped teaching macro after that for ten years, until I moved to Minnesota in

the spring of 1981, by which time I thought I understood the subject well

enough to teach it.

Business Cycles

The study of business cycles has itself gone through a series of cycles.

Business cycle research flourished from the 1920s to the 1940s, waned during

the 1950s and 1960s, before witnessing a revival of interest during the 1970s.

What were the main factors which were important in regenerating interest in

business cycle research in the 1970s?

There were two factors responsible for regenerating interest in business cycles.

First, Lucas beautifully defined the problem. Why do market economies

experience recurrent fluctuations of output and employment about trend?

Second, economic theory was extended to the study of dynamic stochastic

economic environments. These tools are needed to derive the implications of

theory for business cycle fluctuations. Actually the interest in business cycles

was always there, but economists couldn’t do anything without the needed

tools. I guess this puts me in the camp which believes that economics is a

tool-driven science – absent the needed tools we are stymied.

Following your work with Finn Kydland in the early 1980s there has been

considerable re-examination of what are the stylized facts of the business

cycle. What do you think are the most important stylized facts of the business

cycle that any good theory needs to explain?

Business cycle-type fluctuations are just what dynamic economic theory

predicts. In the 1970s everybody thought the impulse or shock had to be

money and was searching for a propagation mechanism. In our 1982

Econometrica paper, ‘Time to Build and Aggregate Fluctuations’, Finn and I

loaded a lot of stuff into our model economy in order to get propagation. We

found that a prediction of economic theory is that technology shocks will

give rise to business cycle fluctuations of the nature observed. The magnitude

of the fluctuations and persistence of deviations from trend match observations.

The facts that investment is three times more volatile than output, and

consumption one-half as volatile, also match, as does the fact that most

business cycle variation in output is accounted for by variation in the labour

input. This is a remarkable success. The theory used, namely neoclassical

growth theory, was not developed to account for business cycles. It was

developed to account for growth.
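
To make the measurement concrete, here is a minimal sketch of how such relative volatilities are typically computed – per cent deviations from a Hodrick–Prescott trend – using simulated stand-in series rather than the actual US data; the construction of c and i below simply builds in the ratios quoted above.

```python
# Illustrative sketch of measuring cyclical volatilities (simulated data,
# not the US series used by Kydland and Prescott).
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def cyclical_sd(series):
    """Std dev of log deviations from an HP trend (lambda = 1600 for quarterly data)."""
    cycle, _trend = hpfilter(np.log(series), lamb=1600)
    return cycle.std()

rng = np.random.default_rng(0)
y = np.exp(np.cumsum(rng.normal(0.005, 0.01, size=200)))  # stand-in quarterly output
c = y ** 0.5   # built so consumption is half as volatile as output
i = y ** 3.0   # built so investment is three times as volatile

print("sd(c)/sd(y):", cyclical_sd(c) / cyclical_sd(y))   # 0.5 by construction
print("sd(i)/sd(y):", cyclical_sd(i) / cyclical_sd(y))   # 3.0 by construction
```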

Were you surprised that you were able to construct a model economy which

generated fluctuations which closely resembled actual experience in the

USA?

Yes. At that stage we were still searching for the model to fit the data, as

opposed to using the theory to answer the question – we had not really tied

down the size of the technology shock and found that the intertemporal

elasticity of labour supply had to be high. In a different context I wrote a

paper with another one of my students, Raj Mehra [Mehra and Prescott,

1985] in which we tried to use basic theory to account for the difference in

the average returns on stock and equity. We thought that existing theory

would work beforehand – the finance people told us that it would [laughter].

We actually found that existing theory could only account for a tiny part of

the huge difference.
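
The logic of that finding compresses into one formula. In the simplest version of the Mehra–Prescott setting – a textbook simplification, not the paper’s exact calculation – the risky asset’s payoff is aggregate consumption itself, and with constant relative risk aversion $\gamma$ the predicted premium is approximately

$$ \mathbb{E}[r^{e}] - r^{f} \;\approx\; \gamma\,\sigma^{2}_{\Delta \ln c}. $$

With annual US consumption-growth volatility of roughly 3.6 per cent, even $\gamma = 10$ delivers a premium of little more than one percentage point, against the roughly six percentage points observed – hence the puzzle.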

How do you react to the criticism that there is a lack of available supporting

evidence of strong intertemporal labour substitution effects?

Gary Hansen’s [1985] and Richard Rogerson’s [1988] key theoretical development

on labour indivisibility is central to this. The margin that they use is the

number of people who work, not the number of hours of those that do work.

This results in the stand-in or representative household being very willing to

intertemporally substitute even though individuals are not that willing. Labour

economists using micro data found that the association between hours

worked and compensation per hour was weak for full-time workers. Based on

these observations they concluded that the labour supply elasticity is small.

These early studies ignore two important features of reality. The first is that

most of the variation in labour supply is in the number working – not in the

length of the workweek. The second important feature of reality ignored in

these early studies is that wages increase with experience. This suggests that

part of individuals’ compensation is this valuable experience. Estimates of

labour supply are high when this feature of reality is taken into account. The

evidence in favour of high intertemporal labour supply elasticity has become

overwhelming. Macro and micro labour economics have been unified.
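
The aggregation result behind this can be sketched in a line, under the usual lottery and complete-insurance assumptions of Hansen’s [1985] formulation. Individuals either work a fixed shift $\bar h$ or not at all, employment is allocated by lottery with probability $\pi$, and individual period utility is $\ln c + A\ln(1-h)$. Expected utility is then

$$ \ln c + \pi A \ln(1-\bar h) \;=\; \ln c + B H, \qquad B = \frac{A\ln(1-\bar h)}{\bar h}, \quad H = \pi \bar h, $$

which is linear in aggregate hours $H$: the stand-in household’s intertemporal elasticity of labour supply is infinite regardless of the curvature of individual preferences.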

Many prominent economists such as Milton Friedman [see Snowdon and

Vane, 1997b], Greg Mankiw [1989] and Lawrence Summers [1986] have

been highly critical of real business cycle models as an explanation of aggregate

fluctuations. What do you regard as being the most serious criticisms

that have been raised in the literature against RBC models?

I don’t think you criticize models – maybe the theory. A nice example is

where the Solow growth model was used heavily in public finance – some of

its predictions were confirmed, so we now have a little bit more confidence in

that structure and what public finance people say about the consequences of

different tax policies. Bob Lucas [1987] says technology shocks seem awfully

big and that is the feature he is most bothered by. When you look at how

much total factor productivity changes over five-year periods and you assume

that changes are independent, the quarterly changes have to be big. The

difference between total factor productivity in the USA and India is at least

400 per cent. This is a lot bigger than what is needed for a business cycle: if, say,

over a two-year period the shocks are such that productivity growth is a couple of

per cent below or above average, that is enough to give rise to a recession or boom. Other factors are

also influential – tax rates matter for labour supply and I’m not going to rule

out preference shocks either. I can’t forecast what social attitudes will be, I

don’t think anybody can – for example, whether or not the female labour

participation rate will go up.
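
The variance arithmetic in that answer is worth making explicit. If quarterly log-productivity shocks $\varepsilon_t$ are independent with standard deviation $\sigma$, then a five-year (20-quarter) change has standard deviation

$$ \mathrm{sd}\!\left(\sum_{t=1}^{20}\varepsilon_t\right) = \sigma\sqrt{20} \approx 4.5\,\sigma, $$

so a quarterly $\sigma$ of around 0.007 – the figure cited later in this interview – already implies five-year movements of roughly ±3 per cent about trend.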

In your 1986 Federal Reserve Bank of Minneapolis paper, ‘Theory Ahead of

Business Cycle Measurement’, you concluded that attention should be focused

on ‘determinants of the average rate of technological advance’. What

in your view are the main factors that determine the average rate at which

technology advances?

What determines total factor productivity is the question in economics. If

we knew why total factor productivity in the USA was four times bigger than

in India, I am sure India would immediately take the appropriate actions and

be as rich as the USA [laughter]. Of course the general rise throughout the

world has to be related to what Paul Romer talks about – increasing returns

and the increase in the stock of usable knowledge. But there is a lot more to

total factor productivity, particularly when you look at the relative levels

across countries or different experiences over time. For example, the Philippines

and Korea were very similar in 1960 but are quite different today.

How important are institutions?

Very. The legal system matters and matters a lot, particularly the commercial

code and the property rights systems. Societies give protection to certain

groups of specialized factor suppliers – they protect the status quo. For

example, why in India do you see highly educated bank workers manually

entering numbers into ledgers? In the last few years I have been reading quite

a lot about these types of issues. However, there seem to be more questions

than answers [laughter].

When it comes to the issue of technological change, are you a fan of

Schumpeter’s work?

The old Schumpeter, but not the new [laughter]. The new suggests that we

need monopolies – what the poor countries need is more competition, not

more monopolies.

In your 1991 Economic Theory paper, co-authored with Finn Kydland, you

estimated that just over two-thirds of post-war US fluctuations can be attributed

to technology shocks. A number of authors have introduced several

modifications of the model economy, for example Cho and Cooley [1995].

How robust is the estimate of the contribution of technology shocks to aggregate

fluctuations to such modifications?

The challenge to that number has come from two places. First, the size of the

estimate of the intertemporal elasticity of labour supply. Second, are technology

shocks as large as we estimated them to be? You can have lots of other

factors and they need not be orthogonal – there could be some moving in

opposite directions that offset each other or some moving in the same direction

that amplify each other. Are the shocks that big? Marty Eichenbaum

[1991] tried to push them down and came up with a 0.005 number for the

standard deviation of the total factor productivity shocks. My number is

0.007. I point out to Marty that Ian Fleming’s secret agent 005 is dead. Agent

007 survives [laughter].

How do you view the more recent development of introducing nominal

rigidities, imperfect credit markets and other Keynesian-style features into

RBC models?

I like the methodology of making a theory quantitative. Introducing monopolistic

competition with sticky prices has been an attempt to come up with a

good mechanism for the monetary side. I don’t think it has paid off as much

as people had hoped, but it is a good thing to explore.

The new classical monetary-surprise models developed in the 1970s by Lucas,

Sargent, Wallace and others were very influential. When did you first begin to

lose faith in that particular approach?

In our 1982 paper Finn and I were pretty careful – what we said was that in

the post-war period if the only shocks had been technology shocks, then the

economy would have been 70 per cent as volatile. When you look back at

some of Friedman and Schwartz’s [1963] data, particularly from the 1890s

and early 1900s, there were financial crises and associated large declines in

real output. It is only recently that I have become disillusioned with monetary

explanations. One of the main reasons for this is that a lot of smart people

have searched for good monetary transmission mechanisms but they haven’t

been that successful in coming up with one – it’s hard to get persistence out

of monetary surprises.

How do you now view your 1977 Journal of Political Economy paper,

co-authored with Finn Kydland, in which monetary surprises, if they can be

achieved, have real effects?

Finn and I wanted to make the point about the inconsistency of optimal plans

in the setting of a more real environment. The pressure to use this simple

example came from the editor – given the attention that paper has subsequently

received, I guess his call was right [laughter].

What do you regard to be the essential connecting thread between the monetary-

surprise models developed in the 1970s and the real business cycle

models developed in the 1980s?

The methodology – Bob Lucas is the master of methodology, as well as

defining problems. I guess when Finn and I undertook the research for our

1982 piece we didn’t realize it was going to be an important paper. Ex post

we see it as being an important paper – we certainly learnt a lot from writing

it and it did influence Bob Lucas in his thinking about methodology. That

paper pushed the profession into trying to make macroeconomic theory more

quantitative – to say how big things are. There are so many factors out there –

most of them we have got to abstract from, the world is too complex otherwise

– we want to know which factors are little and which are significant.

Turning to one of the stylized facts of the business cycle, does the evidence

suggest that the price level and inflation are procyclical or countercyclical?

Finn and I [Kydland and Prescott, 1990] found that in the USA prices since

the Second World War have been countercyclical, but that in the interwar

period they were procyclical. Now if you go to inflation you are taking the

derivative of the price level and things get more complex. The lack of a

strong uniform regular pattern has led me to be a little suspicious of the

importance of the monetary facts – but further research could change my

opinion.

What is your current view on the relationship between the behaviour of the

money supply and the business cycle?

Is it OK to talk about hunches? [laughter]. My guess is that monetary and

fiscal policies are really tied together – there is just one government with a

budget constraint. In theory, at least, you can arrange to have a fiscal authority

with a budget constraint and an independent monetary authority – in

reality some countries do have a high degree of independence of their central

bank. Now I’ve experimented with some simple closed economy models

which unfortunately get awfully complicated, very fast [laughter]. In some of

those models government policy changes do have real consequences – the

government ‘multipliers’ are very different from those in the standard RBC

model. Monetary and fiscal policy are not independent – there is a complex

interaction between monetary and fiscal policy with respect to debt management,

money supply and government expenditure. So I think that there is a

rich class of models to be studied and as we get better tools we are going to

learn more.
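
The ‘one government with a budget constraint’ point can be written down directly. In a standard formulation (the notation here is illustrative), the consolidated constraint each period is

$$ P_t G_t + (1+i_{t-1})B_{t-1} \;=\; P_t T_t + B_t + (M_t - M_{t-1}), $$

spending plus debt service financed by taxes, new debt or money creation – so a path for monetary policy restricts the feasible paths of fiscal policy, and vice versa, which is the sense in which the two are tied together.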

One of the main features of Keynesianism has always been the high priority

given by its advocates to the problem of unemployment. Equilibrium business

cycle theory seems to treat unemployment as a secondary issue. How do you

think about unemployment?

When I think about employment it is easy because you can go out and

measure it – you see how many hours people work and how many people

work. The problem with unemployment is that it is not a well-defined

concept. When I look at the experience of European economies like France

and Spain, I see unemployment as something to do with the arrangements

that these societies set up. Unemployment, particularly among the young, is

a social problem. Lars Ljungqvist and Tom Sargent [1998] are doing some

very interesting work on this question and that is something I want to study

more.

Given that your work has provided an integrated approach to the theory of

growth and fluctuations, should we perhaps abandon the term ‘business

cycle’ when we refer to aggregate economic fluctuations?

Business cycles are in large part fluctuations due to variations in how many

hours people work. Is that good language or not? I think I’ll leave that for you

to decide [laughter]. I’m sympathetic to what your question implies, but I

can’t think of any better language right now.

Methodology

You are known as a leading real business cycle theorist. Are you happy with

that label?

I tend to see RBC theory more as a methodology – dynamic applied general

equilibrium modelling has been a big step forward. Applied analyses that

people are doing now are so much better than they used to be. So in so far as I

am associated with that, and have helped get that started, I am happy with

that label.

Do you regard your work as having resulted in a revolution in macroeconomics?

No – I have just followed the logic of the discipline. There has been no real

dramatic change, only an extension, to dynamic economics – it takes time to

figure things out and develop new tools. People are always looking for the

revolutions – maybe some day some revolution will come along, but I don’t

think I’ll sit around and wait for it [laughter].

What role have calibration exercises played in the development of real business

cycle models?

I think of the model as something to use to measure something. Given the

posed question, we typically want our model economy to match reality on

certain dimensions. With a thermometer you want it to register correctly

when you put it in ice and in boiling water. In the past economists have tried

to find the model and that has held them back. Today people don’t take the

data as gospel; they look at how the data are collected. So it has forced people

to learn a lot more about government statistics on the economy.
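
As an illustration of the thermometer analogy, here is a minimal calibration sketch – the numbers are round illustrative values, not those of any particular study: parameters of the growth model are set so that its steady state reproduces long-run national-accounts averages.

```python
# Illustrative calibration: pin down depreciation and the discount factor
# from long-run averages (round, illustrative numbers).
capital_share  = 0.36   # theta: capital's share of income
capital_output = 3.2    # K/Y, annual average
invest_output  = 0.25   # I/Y, annual average

# Steady state of the growth model: I = delta * K  =>  delta = (I/Y) / (K/Y)
delta = invest_output / capital_output

# Steady-state Euler equation: 1 = beta * (1 - delta + theta * Y/K)
beta = 1.0 / (1.0 - delta + capital_share / capital_output)

print(f"delta = {delta:.3f}")   # about 0.078 per year
print(f"beta  = {beta:.3f}")    # about 0.967 per year
```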

How important was Lucas’s [1980a] paper on ‘Methods and Problems in

Business Cycle Theory’ in your development of the calibration approach?

It’s hard to recall exactly – I saw his vision more clearly later on. Back then I

kept thinking of trying to find the model, as opposed to thinking of economic

theory in terms of a set of instructions for constructing a model to answer a

particular question. There never is a right or wrong model – the issue is

whether a model is good for the purpose it is being used for.

Kevin Hoover [1995b] has suggested that ‘the calibration methodology, to

date, lacks any discipline as stern as that imposed by econometric methods’.

What happens if you have a Keynesian and a real business cycle model which

both perform well? How do you choose between the two?

Well, let’s suppose you work within a Keynesian theoretical framework and it

provides guidance to construct models, and you use those models and they

work well – that’s success, by definition. There was a vision that neoclassical

foundations would eventually be provided for Keynesian models but in the

Keynesian programme theory doesn’t provide much discipline in constructing

the structure. A lot of the choice of equations came down to an empirical

matter – theory was used to restrict these equations, some coefficients being

zero. You notice Keynesians talk about equations. Within the applied general

equilibrium approach we don’t talk about equations – we always talk about

production functions, utility functions or people’s ability and willingness to

substitute. We are not trying to follow the physicist in discovering the laws of

motion of the economy, unlike Keynesians and monetarists. Keynesian approaches

were tried and put to a real test, and to quote Bob Lucas and Tom

Sargent [1978], in the 1970s Keynesian macroeconometric models experienced

‘econometric failure on a grand scale’.

To what extent is the question of whether the computational experiment

should be regarded as an econometric tool an issue of semantics?

It is pure semantics. Ragnar Frisch wanted to make neoclassical economics

quantitative – he talked about quantitative theoretical economics and quanti354

tative empirical economics, and their unification. The modern narrow definition

of econometrics only focuses on the empirical side.

Lawrence Summers [1991a] in a paper on ‘The Scientific Illusion in Empirical

Macroeconomics’ has argued along the lines that formal econometric

work has had little impact on the growth of economic knowledge, whereas the

informal pragmatic approach of people like Friedman and Schwartz [1963]

has had a significant effect. Are you sympathetic to Summers’s view?

In some ways I’m sympathetic, in others I’m unsympathetic – I think I’ll

hedge [laughter]. With regard to representing our knowledge in terms of the

likelihood of different things being true, so that as we get more observations

over time we zero in on the truth, it doesn’t seem to work that way.