The morning papers contain articles on the newly released paper on monetary policy by James Bullard, the President of the Federal Reserve Bank of St. Louis. The basic thrust of the paper is that the Fed’s efforts to keep interest rates so low and for “an extended period” of time may eventually backfire and result in a Japan-like dilemma of stagnation and price deflation.
The “appropriate tool” in the present situation, Bullard contends, is the use of “quantitative easing.” More specifically, he argues that the Federal Reserve needs to be willing to buy longer-term Treasury issues to expand the amount of Federal Reserve Credit outstanding in spite of the fact that there are more than $1.0 trillion in excess reserves currently in the banking system.
The problem, as Bullard sees it, is a policy dilemma that results from the fact that nominal interest rates include a factor to account for inflationary expectations and that current Federal Reserve operating procedures rely on some form of what is called “the Taylor Rule” to set target interest rates. Bullard contends in his paper (which can be accessed through this article: http://blogs.wsj.com/economics/2010/07/29/feds-bullard-raises-policy-concerns/) that using the Taylor Rule can result in one of two “steady state” outcomes: one with higher interest rates and higher inflation, and the other with low interest rates and outright deflation.
The latter “steady state” position is what the Japanese have experienced. The former is where the United States has been operating. The fear is that by continuing the Federal Reserve policy of keeping its target interest rate close to zero for “an extended period,” the United States will migrate from where it is now into a situation more similar to that of the Japanese.
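The mechanics behind the two outcomes can be seen in a small numerical sketch. The sketch below is my own illustration, not a calculation from Bullard’s paper, and every parameter value in it (the real rate, the inflation target, the response coefficient) is simply assumed: a Taylor-type rule that cannot push the nominal rate below zero crosses the Fisher relation at two points, one at the inflation target and one in deflation with the policy rate stuck at zero.

```python
# Illustrative sketch (my own, not from Bullard's paper): a Taylor-type
# rule that cannot push the nominal rate below zero, combined with the
# Fisher relation, has two steady states. All parameter values are
# assumptions chosen only for illustration.

r_star = 0.5     # assumed long-run real interest rate, percent
pi_target = 2.0  # assumed inflation target, percent
phi = 1.5        # assumed policy response to inflation gaps

def taylor_rule(pi):
    """Nominal policy rate from a Taylor-type rule, floored at zero."""
    return max(0.0, r_star + pi + phi * (pi - pi_target))

def fisher(pi):
    """Fisher relation: steady-state nominal rate = real rate + inflation."""
    return r_star + pi

# A steady state is an inflation rate at which the two schedules cross.
for step in range(-300, 501):
    pi = step / 100.0
    if abs(taylor_rule(pi) - fisher(pi)) < 1e-9:
        print(f"steady state: inflation {pi:+.2f}%, nominal rate {fisher(pi):.2f}%")
```

With these assumed numbers the two crossings come out at 2% inflation with a positive policy rate, and at mild deflation with the rate at zero; the second crossing is the Japanese-style outcome Bullard warns about.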
This is why Bullard suggests that the Fed may need to focus more on “quantitative easing” going forward.
The concept of “quantitative easing” came to the fore early on in the financial crisis that accompanied the Great Recession. When nominal interest rates approached zero, the Federal Reserve (and the Bank of England) argued that it needed to continue to provide more reserves for the banking system (print more money electronically) even though it could not drive nominal interest rates below zero.
Quantitative monetary policy procedures went out of fashion in the late 1980s. Paul Volcker had used quantitative measures in the late 1970s and early 1980s to “frame” his efforts to combat the inflation being experienced at the time. However, by the late 1980s, policy makers began to lose confidence in quantitative measures for policy purposes because the various monetary measures that were used did not provide consistent information, at least to those making policy at the time.
As a consequence, policy makers relied more and more on interest rate targets, and this is when the Taylor Rule came into use. Many claim that adherence to this rule, even if only implicit, resulted in a period of relative calm in financial markets and the economy now referred to as “the Great Moderation.”
One reason given for the disenchantment with quantitative monetary measures is that so much reliance had been placed on mathematical modeling within the Fed: monetary measures just did not lend themselves to such a formal process. Hence, quantitative targets were not easy to produce, and actual results were even harder to explain because of the divergent movements of the different measures.
We can observe this kind of behavior over the past two years in many of the monetary measures.
For example, if one looks at the behavior of the narrow measure of the money stock, M1, relative to the behavior of a broader measure of the money stock, M2, from 2008 through the present, one can get different signals. In the first six months of 2008, the year-over-year growth of the M1 measure was close to zero. The year-over-year growth rate of M2 was around 6%.
As the financial crisis hit and progressed in the fall of 2008, the rate of growth of the M1 money stock increased dramatically whereas that of the M2 money stock rose only modestly. In the first quarter of 2009 the M1 money stock was growing, year-over-year, at around 17%; the M2 measure had peaked in the fourth quarter of 2008 at about 10% and was beginning to decline.
Furthermore, the behavior of other monetary aggregates seemed completely out of line with these money stock measures: Total Reserves in the first quarter of 2009 were increasing, year-over-year, at about 1,850%; the Monetary Base rose by about 110%, year-over-year.
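For anyone who wants to reproduce comparisons like these, the year-over-year calculation itself is straightforward. The sketch below shows the mechanics; the level figures in it are illustrative round numbers chosen to land near the growth rates quoted above, not the actual published series.

```python
# Sketch of the year-over-year growth calculation behind figures like the
# ones above. The levels are illustrative round numbers chosen to land
# near the quoted growth rates, not the actual published series.

m1 = {"2008-Q1": 1375.0, "2009-Q1": 1610.0}   # $ billions, illustrative
m2 = {"2008-Q1": 7650.0, "2009-Q1": 8340.0}   # $ billions, illustrative

def yoy_growth(levels, year_ago, now):
    """Percent change between the same quarter in consecutive years."""
    return 100.0 * (levels[now] - levels[year_ago]) / levels[year_ago]

print(f"M1, year-over-year: {yoy_growth(m1, '2008-Q1', '2009-Q1'):.1f}%")
print(f"M2, year-over-year: {yoy_growth(m2, '2008-Q1', '2009-Q1'):.1f}%")
```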
How do you explain these differences? Econometric models couldn’t do it.
Two basic things were happening. First, as the financial crisis progressed, people took more and more money out of less liquid asset holdings and began putting the funds in currency or very liquid bank deposits. Although the M2 measure rose during this time period, most of its increase was coming in the M1 component.
Second, the Federal Reserve was pumping unprecedented amounts of reserves into the financial system. These funds did not go into bank lending so as to expand the money stock: the banks just held onto the money. In August 2008, excess reserves in the banking system totaled less than $2.0 billion. In the first quarter of 2009 excess reserves averaged over $800 billion.
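One rough way to see why the reserve injections did not show up in the money stock is the money multiplier, the ratio of a money stock measure such as M2 to the monetary base. The sketch below is my own back-of-the-envelope arithmetic with rounded, approximate figures, not the Fed’s own accounting.

```python
# Back-of-the-envelope arithmetic (rounded, approximate figures) showing
# why a surge in the monetary base need not show up in the money stock:
# when banks sit on the new reserves as excess reserves, the money
# multiplier (here, M2 divided by the monetary base) collapses.

base_aug_2008 = 0.9   # monetary base, $ trillions (approximate)
base_q1_2009  = 1.7   # after the Fed's emergency programs (approximate)

m2_aug_2008 = 7.7     # M2 money stock, $ trillions (approximate)
m2_q1_2009  = 8.3     # only modestly higher (approximate)

print(f"M2 multiplier, August 2008: {m2_aug_2008 / base_aug_2008:.1f}")
print(f"M2 multiplier, Q1 2009:     {m2_q1_2009 / base_q1_2009:.1f}")
# The base roughly doubled, but with excess reserves jumping from under
# $2 billion to roughly $800 billion, the multiplier fell by nearly half
# and M2 barely moved.
```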
How can you mathematically model this kind of behavior?
And the problems of interpretation continue. Money stock growth has dropped off. In the second quarter of 2010 the year-over-year rate of growth of M1 was just under 6.0% while the rate of growth of M2 was less than 2.0%. The non-M1 component of M2 was growing at well under 1.0%. People were still putting money into transaction balances rather than savings, staying as liquid as they could. The rates of growth in both Total Reserves and the Monetary Base fell dramatically through 2010, yet excess reserves in the banking system still averaged more than $1.0 trillion. Banks, too, were acting very conservatively, not lending and keeping as liquid as they could.
The point of this discussion is that the Federal Reserve needs to focus a lot more on the quantitative monetary measures than it has in recent history. But understanding what is going on over shorter periods of time requires institutional knowledge within a historical context and not just formal mathematical modeling. The consideration of monetary variables is important for setting and conducting monetary policy!
It is still true that, over the longer run, important things like inflation and deflation are “always and everywhere” a monetary phenomenon. Interest rates do not correlate with those outcomes nearly as reliably over the longer run.
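The long-run claim rests on the quantity equation. Below is a minimal sketch of the arithmetic, with growth rates that are purely assumed for the example.

```python
# The long-run link is the quantity equation, M * V = P * Y, which in
# growth-rate form says, approximately:
#   money growth + velocity growth = inflation + real output growth
# The growth rates below are assumed, used only to show the arithmetic.

money_growth    = 6.0   # percent per year, assumed
velocity_growth = 0.0   # roughly stable over long horizons, assumed
real_growth     = 3.0   # percent per year, assumed

implied_inflation = money_growth + velocity_growth - real_growth
print(f"implied long-run inflation: {implied_inflation:.1f}% per year")
```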
Bullard is arguing for a greater focus on the monetary aggregates. By buying Treasury securities in a “quantitative easing” program, the Fed would be expanding the monetary base. Why would you want to expand the monetary base? Because commercial banks are not lending, and if the economy is going to show more life going forward, bank lending is going to have to increase and the money stock measures are going to have to start growing faster again.
Milton Friedman argued that in the 1929-1933 period, the M2 money stock measure declined by one-third even though the monetary base rose modestly. Friedman criticized the Fed for letting the money stock measure fall. To him, the Fed needed to provide more base money to get the banks lending again so that the money stock would grow. Is Bullard saying we are in the same type of situation Friedman described?
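Friedman’s observation can be put in the same multiplier terms used above. The figures in the sketch below are stylized round numbers for illustration, not the actual Friedman-Schwartz series.

```python
# Stylized version of Friedman's 1929-1933 point, using round numbers for
# illustration rather than the actual Friedman-Schwartz series: the base
# rose modestly while M2 fell by roughly one-third, so the money
# multiplier must have collapsed.

m2_1929, m2_1933     = 45.0, 30.0   # $ billions, stylized
base_1929, base_1933 = 7.0, 8.0     # $ billions, stylized

print(f"M2 multiplier, 1929: {m2_1929 / base_1929:.1f}")
print(f"M2 multiplier, 1933: {m2_1933 / base_1933:.1f}")
# Bullard's case for quantitative easing is the mirror image: expand the
# base enough that, even with a depressed multiplier, the money stock
# measures start growing again.
```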
Friday, July 30, 2010
Monetary Targets: The Latest Take