The Fed’s Misleading Money Supply Measures


The most robust model for determining national income is the monetarist model. The economy’s nominal value is determined by the course taken by the money supply. Indeed, the positive relationship between the growth rate of the money supply and the growth of both nominal GDP and nominal aggregate demand is unambiguous and overwhelming.
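In textbook terms (a standard restatement of the quantity theory, not an equation taken from this column), the link runs through the equation of exchange,

MV = Py,

where M is the money supply, V its velocity of circulation, P the price level, and y real output. With velocity reasonably stable, faster money growth translates into faster growth of nominal GDP (P times y).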

So, what is the most appropriate measure of money for taking the economy’s temperature and predicting its course? Is a narrow metric, like the monetary base (M0), best? Or should we focus on broad monetary measures, like M3 and M4? For determining national income, the more inclusive the metric, the better. Indeed, to obtain the most complete and accurate picture possible, all the important components of the money supply must be included, not just a few.

Obtaining data on the money supply seems straightforward. Just go to the Fed’s monetary database, choose the broadest measure of the money supply, and you’re good to go. Right? No, it’s not that simple. First, since the Fed stopped reporting the M3 measure in March 2006, M2 remains the broadest measure the Fed reports. And M2 is not very broad.

The Fed’s money supply measures are limited to rather narrow metrics, and that’s a problem. For broader measures, you have to go to the Center for Financial Stability (CFS) in New York, where I am a special advisor. The CFS was founded in 2009 by Lawrence “Larry” Goodman to, in part, improve the measurement and reporting of money supply statistics. The CFS was fortunate that William A. “Bill” Barnett, the world’s foremost expert on Divisia monetary aggregates, agreed to develop and lead the CFS program on advances in monetary and financial measurement. As a result, the CFS, under Bill’s watchful eye, produces a detailed monthly report, “CFS Divisia Monetary Data for the United States.” This report contains a broad monetary measure, M4, which has five more components than M2: institutional money market funds, large time deposits, repurchase agreements, commercial paper, and Treasury bills. These components are important because they all serve, to varying degrees, as money. To exclude them from a monetary measure is to exclude a great deal.

Thus, the CFS money supply measures contain important components that are excluded from the Fed’s M2. In this sense, the CFS data are superior. But narrowness is just the start of the Fed’s data problems.

What really separates the CFS measures from the Fed’s is that the CFS measures of money are not simple sums of the various components that make up the different measures of money (read: the different M’s). It has long been recognized that simple-sum aggregation can create big measurement problems. Indeed, Irving Fisher, in his classic 1922 book The Making of Index Numbers, concluded that simple-sum aggregation produces the worst kind of index. And when it comes to measuring monetary aggregates, Milton Friedman recognized that there were cases in which simple-sum aggregation could yield poor measures of the money supply.

Why was Friedman’s conjecture correct? Money takes the form of various types of financial assets that are used for transactions and as a store of value. The money created by the Fed (banknotes, coins, and banks’ deposits at the Fed) constitutes the monetary base (M0). This state money, or high-powered money, possesses the highest degree of moneyness among the various types of financial assets called money. The components of the monetary base stand ready to be used in transactions in which goods and services are exchanged for “money.”

But, in addition to the assets that make up the monetary base, there are many others that possess varying degrees of moneyness, a characteristic that can be measured by the ease with which, and the opportunity cost at which, they can be exchanged for base money. These other assets are, to varying degrees, substitutes for money and are included in broader measures of money. But these other assets should not all be given the same weight when they are added up to obtain a broad money supply measure. Instead, they should receive a lower per-unit weight than the components of the high-powered monetary base, with those that are easiest to exchange for base money receiving higher weights than those with a lower degree of liquidity.
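One way to make the “degree of moneyness” idea operational (a sketch drawn from the Divisia literature, not a formula given in this column) is Barnett’s user-cost formula,

u_{i,t} = (R_t - r_{i,t}) / (1 + R_t),

where R_t is the yield on a benchmark, purely store-of-value asset and r_{i,t} is the own yield on monetary asset i. Currency, which yields nothing, carries the highest user cost and therefore the largest per-unit weight, while an asset whose own yield approaches the benchmark carries a weight near zero.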

How can we move beyond simple-sum aggregates and determine the weights that should be attached to the components of broad money supply measures? Enter Bill Barnett. Using aggregation theory developed in 1925 by the French engineer François Divisia (1889-1964), Bill was the first to apply the Divisia index to the measurement of the money supply. Thus, the broad measures of money produced in the CFS monthly report are not simple-sum M3 and M4 numbers. They are superior Divisia M3 and Divisia M4 metrics, in which each component is weighted according to its degree of moneyness.
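For readers who want to see the mechanics, here is a minimal sketch of the discrete Törnqvist-Theil approximation used to build Divisia monetary indices. It is my own illustration, not CFS code; the component names, quantities, and yields are hypothetical, and the weights follow the user-cost formula above.

```python
import numpy as np

def user_costs(R, r):
    """Barnett user cost of each monetary asset: (R - r) / (1 + R)."""
    return (R - r) / (1.0 + R)

def divisia_growth(m0, m1, R0, R1, r0, r1):
    """One-period log growth of a Tornqvist-Theil Divisia monetary index.

    m0, m1 : component quantities in periods t-1 and t
    R0, R1 : benchmark yields in t-1 and t
    r0, r1 : own yields on each component in t-1 and t
    """
    u0, u1 = user_costs(R0, r0), user_costs(R1, r1)
    s0 = u0 * m0 / np.sum(u0 * m0)   # expenditure shares, period t-1
    s1 = u1 * m1 / np.sum(u1 * m1)   # expenditure shares, period t
    s_bar = 0.5 * (s0 + s1)          # averaged shares
    return float(np.sum(s_bar * (np.log(m1) - np.log(m0))))

# Hypothetical components: currency, checking deposits, money-market funds
m_prev = np.array([100.0, 300.0, 200.0])
m_curr = np.array([102.0, 303.0, 215.0])
r_prev = np.array([0.00, 0.01, 0.12])   # own yields, period t-1
r_curr = np.array([0.00, 0.01, 0.13])   # own yields, period t
print(divisia_growth(m_prev, m_curr, R0=0.14, R1=0.15, r0=r_prev, r1=r_curr))
```

A simple sum would count the fast-growing money-market funds dollar for dollar; the Divisia index discounts them because their user cost, and hence their moneyness, is low.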

We all know that good science is not possible without good data. After all, bad measurements and bad data can lead to disastrous missile launches. What pilot would want to “fly blind” with a malfunctioning altimeter? What about the money supply? Do the Fed’s flawed measures really make a difference for policymakers and investors? In a nutshell, the answer is “yes.” Let’s take a look at a dramatic example of how the Fed’s data caused economic “emergency landings.”

When Paul Volcker took over the reins of the Fed, the health of the US economy was “bad”. Indeed, 1979 ended with a double-digit inflation rate of 13.3%. Do you remember stagflation?

Chairman Volcker realized that money matters, and it didn’t take long for him to act. On Saturday, October 6, 1979, he stunned the world with an unexpected announcement. He proclaimed that he was going to put money supply measures on the Fed’s dashboard. For him, it was obvious that, to restore the US economy to good health, inflation would have to be wrung out. And, to kill inflation, the money supply would have to be controlled.

Volcker achieved his goal. By 1982, the annual inflation rate had fallen to 3.8%, a great achievement. But the problem was that Volcker’s squeezing out of inflation produced a relatively short (less than a year) recession that began in January 1980, and another, more severe slump that began soon after and ended in November 1982.

Paul Volcker’s problem was that the monetary altimeter installed on his dashboard was faulty. When my friend Volcker looked at his M2 gauge, it showed M2 data calculated by a simple summation of its components, the Fed’s normal procedure. As the chart below shows, the Fed believed that the double-digit federal funds rates it engineered allowed it to apply the money-supply brakes with just the right amount of pressure. In fact, if the money supply had been measured correctly by a Divisia metric, Volcker would have realized that the Fed was not merely tapping the brakes, but needlessly slamming on them from 1978 until early 1982. The Fed was enforcing a monetary policy that was much tighter than it thought, an excessive tightening that resulted in two recessions.

Why the huge discrepancies between the standard simple-sum M2 measurements that Volcker watched and the true Divisia M2 measure? As the Fed pushed the federal funds rate up, the opportunity cost of holding cash rose. As a result, retail money market funds and time deposits, for example, became relatively more attractive and received a lower weight in the Divisia metric. Faced with higher interest rates, people had a much greater incentive to avoid holding “large” cash and checking balances, choosing instead to keep their funds in relatively high-yielding assets. As the federal funds rate rose, the gap between the simple-sum measures and Divisia M2 widened.
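The arithmetic is easy to see with a hypothetical example (the rates below are illustrative, not figures from the Volcker era or the CFS). With a benchmark yield of 15%, a zero-yield checking deposit has a user cost of (0.15 - 0)/1.15, or about 13%, while a money market fund yielding 14% has a user cost of (0.15 - 0.14)/1.15, or about 0.9%. A simple-sum M2 counts a dollar in the money fund the same as a dollar in the checking account; a Divisia M2 gives the money fund only a small fraction of that weight. So, when high rates pushed balances out of checking accounts and into money funds, the simple-sum measure overstated how much effective money was actually circulating.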

Sadly, the data issues associated with Volcker’s monetary experiment have been overlooked by the Fed. Indeed, as Bill Barnett concluded in his book Getting It Wrong: How Faulty Monetary Statistics Undermine the Fed, the Financial System, and the Economy, as the complexity of financial products has increased, the quantity and quality of the Fed’s data have declined. Fortunately, we have a reliable alternative source of high-quality money supply data: the Center for Financial Stability.

Where are we today? As shown in the chart below, the growth rate of Divisia M4 is 4.2% year over year. That rate is a bit low. Indeed, it is lower than it has been over the past year, and it is also below its 30-year trend rate of 4.9%. This suggests that the unwinding of the Fed’s balance sheet has resulted in a somewhat “tight” monetary stance.

If we look at aggregate demand measured in nominal terms, it is holding up rather well (see the chart below). At 5.65% per annum, nominal aggregate demand, as measured by final sales to domestic purchasers, is above its 30-year trend rate of 4.82%. So, for now, things look pretty good. But a monetary storm cloud would develop if the Fed stumbled and slipped into quantitative-tightening mode. Without Divisia M4 on its dashboard, the Fed might not see the storm building on the horizon.
