Over the past decade, inflation has persistently undershot the Fed's inflation target. The Fed's preferred measure of inflation, the core PCE deflator, has averaged 1.56 percent over this time compared to a target of 2 percent. The Fed officially began inflation targeting in 2012, but was implicitly targeting 2 percent long before that time. So below-target inflation has been happening for close to a decade, and for many observers it is a mystery.
There has been a spate of articles asking why the Fed has not been able to hit its inflation target. Some have wondered whether the Fed really understands, or even controls, the inflation rate. Even Fed officials have been perplexed by the low inflation, since it cannot be explained by their Phillips curve models. As a result, they sometimes attribute the persistently low inflation to developments such as falling oil prices, demographics, global competition, changes in labor's share of income, a safe asset shortage, and even the rise of Amazon.
These explanations, however, are not satisfactory since the Fed should be able to determine the inflation rate over the medium to long-run. That is, the Fed should be able to respond over time to developments that might cause inflation to drift off target. The Fed should be, in theory, the final arbiter of the trend inflation rate.
So why has inflation been so low? In my view, the answer is simple: the Fed is getting the inflation it wants. There is no mystery. One does not get a decade of trend inflation that is below target by accident. Instead, revealed preferences tell us inflation is where it is because the FOMC allowed it to be there. Put differently, the Fed has chosen not to fully offset the shocks and secular forces listed above that have pushed inflation down. This is a policy choice.
Fed officials and others may disagree, but the revealed preference argument is hard to ignore. Moreover, there are other reasons to believe that the low inflation is, in fact, the desired outcome of the FOMC. They are presented below.
SEP Core Inflation Forecasts
The first reason to believe the low inflation is a desired outcome comes from the FOMC itself. The FOMC's Summary of Economic Projections (SEP) provides central tendency forecasts for core PCE inflation. The FOMC's definition of the SEP is as follows (my emphasis):
Each participant's projections are based on his or her assessment of appropriate monetary policy.
The SEP, in other words, reveals FOMC members' forecasts of economic variables conditional on the Fed doing monetary policy right. And up until recently, doing monetary policy right meant not overshooting 2 percent inflation in the following year, as seen in the figure below. Even now, 2 percent is still seen largely as a ceiling. There is nothing symmetric about 2 percent in these SEP forecasts.
Most FOMC members, therefore, have treated 2 percent as a ceiling over the past decade. This is "appropriate" monetary policy for them. Keep in mind that at this forecast horizon most of them also believe they have meaningful influence on inflation. Both of these observations point to the low inflation as a choice.
Textual Analysis
The second reason to believe that low inflation is a desired outcome comes from a recent study by the San Francisco Fed. It is titled "Taking the Fed at its Word: Direct Estimation of Central Bank Objectives using Text Analytics" and the abstract reads (my emphasis):
We directly estimate the Federal Open Market Committee's (FOMC) loss function, including the implicit inflation target, from the tone of the language used in FOMC transcripts, minutes, and members' speeches. Direct estimation is advantageous because it requires no knowledge of the underlying macroeconomic structure nor observation of central bank actions. We find that the FOMC had an implicit inflation target of approximately 1.5 percent on average over our baseline 2000 - 2013 sample period.
Fed officials, via their words, actually want 1.5 percent inflation on average. And shocker of all shockers, they have come very close to getting just that rate of inflation since 2009.
The Neel Kashkari Counterfactual
The third reason to believe low inflation is a desired outcome comes from imagining a counterfactual FOMC. Imagine an FOMC whose twelve members are all clones of Neel Kashkari, as seen below. With this FOMC, interest rates would not have been raised over the past few years--and maybe would even have been lowered. Do we really think inflation would be the same? I find that hard to believe.
To be clear, I do think there are important secular forces pushing down trend inflation, like the demand for safe assets. But again, the Fed should be able to offset such pressures if it chose to do so. The real question, then, is why the Fed has settled for trend inflation near 1.5 percent. That is a question for a different post. This post is simply a retort to all those who think the low inflation is a mystery. Folks, it is not a mystery. It is a choice.
It is worth noting that this choice is actually more than a choice for trend inflation. It is implicitly a choice for lower trend aggregate demand (AD) growth. As seen below, aggregate demand growth was averaging 5.6 percent in the decades before the crisis. Since the recovery started, it has averaged about 3.6 percent. That is a 2 percentage point decline in the trend. The red line in the figure shows what a naive autoregressive forecast would have predicted over the past decade conditional on past nominal expenditure history. There has been a sizable AD shortfall.
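To make the "naive autoregressive forecast" idea concrete, here is a minimal sketch of that kind of exercise; the growth numbers, lag length, and sample are illustrative assumptions, not the data behind the post's figure.

```python
# Minimal sketch of a naive AR(1) forecast for nominal GDP (aggregate demand) growth.
# The numbers below are made-up placeholders, not the series used in the post.
import numpy as np

# Hypothetical annual nominal GDP growth rates (percent), pre-crisis sample
pre_crisis_growth = np.array([5.9, 5.4, 6.1, 5.2, 5.8, 5.5, 5.7, 5.3])

# Fit an AR(1): g_t = c + rho * g_{t-1} + e_t, by ordinary least squares
y, x = pre_crisis_growth[1:], pre_crisis_growth[:-1]
X = np.column_stack([np.ones_like(x), x])
c, rho = np.linalg.lstsq(X, y, rcond=None)[0]

# Project the next ten years conditional only on the pre-crisis history
forecast = []
g = pre_crisis_growth[-1]
for _ in range(10):
    g = c + rho * g
    forecast.append(g)

print("AR(1) forecast of nominal GDP growth:", np.round(forecast, 2))
# Comparing a path like this with realized growth of roughly 3.6 percent per year
# is the sense in which the figure shows an aggregate demand shortfall.
```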
In my view, it is this dearth of aggregate demand growth rather than the low inflation that is a problem. The slowdown in AD growth has arguably contributed to problems like hysteresis and populism. If so, this policy choice has been costly.
P.S. Adam Ozimek gives us estimates of how costly this AD shortfall has been.
Macroeconomics would not be what it is today without Edmund Phelps. This book assembles the field's leading figures to highlight the continuing influence of his ideas from the past four decades. Addressing the most important current debates in macroeconomic theory, it focuses on the rates at which new technologies arise and information about markets is dispersed, information imperfections, and the heterogeneity of beliefs as determinants of an economy's performance. The contributions, which represent a breadth of contemporary theoretical approaches, cover topics including the real effects of monetary disturbances, difficulties in expectations formation, structural factors in unemployment, and sources of technical progress. Based on an October 2001 conference honoring Phelps, this incomparable volume provides the most comprehensive and authoritative account in years of the present state of macroeconomics while also pointing to its future. The fifteen chapters are by the editors and by Daron Acemoglu, Jess Benhabib, Guillermo A. Calvo, Oya Celasun, Michael D. Goldberg, Bruce Greenwald, James J. Heckman, Bart Hobijn, Peter Howitt, Hehui Jin, Charles I. Jones, Michael Kumhof, Mordecai Kurz, David Laibson, Lars Ljungqvist, N. Gregory Mankiw, Dale T. Mortensen, Maurizio Motolese, Stephen Nickell, Luca Nunziata, Wolfgang Ochel, Christopher A. Pissarides, Glenda Quintini, Ricardo Reis, Andrea Repetto, Thomas J. Sargent, Jeremy Tobacman, and Gianluca Violante. Commenting are Olivier J. Blanchard, Jean-Paul Fitoussi, Mark Gertler, Robert E. Hall, Robert E. Lucas, Jr., David H. Papell, Robert A. Pollak, Robert M. Solow, Nancy L. Stokey, and Lars E. O. Svensson. Also included are reflections by Phelps, a preface by Paul A. Samuelson, and the editors' introduction
Kevin Warsh has a nice WSJ oped warning of financial problems to come. The major point of this essay: "countercyclical capital buffers" are another bright regulatory idea of the 2010s that has now fallen flat.

As in previous posts, a lot of banks have lost asset value equal to or greater than their entire equity due to plain vanilla interest rate risk. The ones that haven't suffered runs are now staying afloat only because you and I keep our deposits there at ridiculously low interest rates. Commercial real estate may be next. Perhaps I'm over-influenced by the zombie-apocalypse goings-on in San Francisco -- the $755 million default on the Hilton and Parc 55, the $558 million default on the whole Westfield mall after Nordstrom departed, and on and on. How much of this debt is parked in regional banks? I would have assumed that the Fed's regulatory army could see something so obvious coming, but since they completely missed plain vanilla interest rate risk, and the fact that you don't have to stand in line any more to run on your bank, who knows?

So, banks are at risk; the Fed now knows it, and is reportedly worried that further interest rate increases to lower inflation will cause more problems. To some extent that's a feature, not a bug -- the whole theory behind the Fed lowering inflation is that higher interest rates "cool economic activity," i.e. make banks hesitant to lend, people lose their jobs, and through the Phillips curve (?) inflation comes down. But the Fed wants a minor contraction, not full-on 2008. (That did bring inflation down though!)

I don't agree with all of Kevin's essay, but I always cherry-pick wisdom where I find it, and there is plenty. On what to do:

Ms. Yellen and the other policy makers on the Financial Stability Oversight Council should take immediate action to mitigate these risks. They should promote the private recapitalization of small and midsize banks so they survive and thrive.

Yes! But. I'm a capital hawk -- my answer is always "more." But we shouldn't be here in the first place. Repeating a complaint I've been making for a while, everything since the great treasury market bailout of March 2020 reveals how utterly broken the premises and promises of post-2008 financial regulation are. One of the most popular ideas was "countercyclical capital buffers." A nice explainer from Kaitlyn Hoevelmann at the St. Louis Fed (picked because it came up first on a Google search):

"A countercyclical capital buffer would raise banks' capital requirements during economic expansions, with banks required to maintain a higher capital-to-asset ratio when the economy is performing well and loan volumes are growing rapidly."

Well, that makes sense, doesn't it? Buy insurance on a clear day, not when the forest fire is half a mile upwind.

More deeply, remember "capital" is not "reserves" or "liquid assets." "Capital" is one way banks have of getting money, by selling stock, rather than selling bonds or taking deposits. (There is lots of confusion on this point. If someone says "hold" capital, that's a sign of confusion.) It has the unique advantage that equity holders can't run to get their money out at any time. In bad times, the stock price goes down and there's nothing they can do about it. But also obviously, it's a lot easier to sell bank stock for a high price in good times than it is just after it has been revealed that the bank has lost a huge amount of money, i.e. like now. Why don't banks naturally issue more equity in good times?
Well, because buying insurance is expensive, and most of all there is no deposit insurance or too-big-to-fail guarantee subsidizing stock. So banks always leverage as much as they can. Behavioralists will add that bankers get over-enthusiastic and happy to take risks in good times.

Why don't regulators demand more capital in good times, so banks are ready for the bad times ahead? That's the natural idea of "countercyclical capital buffers." And after 2008, all worthy opinion said regulators should do that. Only some cynical types like me opined that the regulators will be just as human, just as behavioral, just as procyclically risk averse, just as prey to political pressures in the future as they were in the past. And so it has turned out. Despite 15 years of writing about procyclical capital, of "managing the credit cycle," here we are again -- no great amounts of capital issued in the good times, and now we want banks to do it when they're already in trouble, and anyone buying bank stock will be providing money that first of all goes to bail out depositors and other debt holders. As the ship is sinking, go on Amazon to buy lifeboats. Just as in 2008, regulators will be demanding capital in bad times, after the horse has left the barn. So, the answer has to be: more capital, always!

Kevin has more good points:

Bank regulators have long looked askance at capital from asset managers and private equity firms, among others. But this is no time for luxury beliefs. Capital is capital, even from disparaged sources. Policy makers should also green-light consolidation among small, midsize and even larger regional banks. I recognize concerns about market power. But the largest banks have already secured a privileged position with their "too big to fail" status. Hundreds of banks need larger, stronger franchises to compete against them, especially in an uncertain economy. Banks need prompt regulatory approval to be confident that proposed mergers will close. Better to allow bank mergers before weak institutions approach the clutches of the Federal Deposit Insurance Corp.'s resolution process. Voluntary mergers at market prices are preferable to rushed government auctions that involve large taxpayer losses and destruction of significant franchise value.

It is a bit funny to see the Administration against all mergers, and then, when a bank fails, Chase gets to swallow up failing banks with government sweeteners. Big-is-bad is another luxury belief. Yes, banks are uncompetitive. Look at the interest on your deposits (mine, at Chase: 0.01%) and you'll see it just as clearly as you can see lack of competition in a medical bill. But most of that lack of competition comes from regulation, not evil behavior. As per Kevin:

The past decade's regulatory policies have undermined competition and weakened resiliency in the banking business.

A final nice point:

The Fed's flawed inflation forecasts in the past couple of years are a lesson in risk management. Policy makers shouldn't bet all their chips on hopes for low prices or anything else. Better to evaluate the likely costs if the forecast turns out to be wrong.

Maybe the lesson of the massive failure to forecast inflation is that inflation is just bloody hard to forecast. Rather than spend a lot of effort improving the forecast, spend effort recognizing the uncertainty of any forecast, and being ready to react to contingencies as they arise. (I'm repeating myself, but that's the blogger's prerogative.)
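For readers unfamiliar with the mechanics, here is a minimal sketch of the countercyclical buffer idea in the Basel III spirit; the 4.5 and 2.5 percent figures are the standard Basel ranges, and the bank in the example is hypothetical.

```python
# Illustrative sketch of a countercyclical capital buffer: a CET1 requirement
# that rises in booms and is released in busts. Percentages follow the usual
# Basel III ranges; the bank numbers are made up.
def required_cet1(risk_weighted_assets, ccyb_rate):
    """Minimum CET1 capital = (4.5% minimum + 2.5% conservation buffer
    + countercyclical buffer of 0-2.5%) times risk-weighted assets."""
    assert 0.0 <= ccyb_rate <= 0.025, "CCyB is set between 0% and 2.5% of RWA"
    return (0.045 + 0.025 + ccyb_rate) * risk_weighted_assets

rwa = 100.0  # hypothetical risk-weighted assets, $ billions

# In good times the regulator turns the buffer on; in bad times it releases it.
print("Boom, CCyB = 2.5%:", required_cet1(rwa, 0.025))   # 9.5
print("Bust, CCyB = 0%:  ", required_cet1(rwa, 0.0))      # 7.0
# The post's complaint: in practice the buffer was rarely switched on in the
# good times, so the extra equity gets demanded only after losses have hit.
```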
The Fed's much-anticipated new monetary policy framework is now public. Fed Chairman Jerome Powell outlined the policy framework last week in Jackson Hole; you can view his speech here. Overall, I thought Powell's delivery was very good. While there's room for improvement, I think the new framework is a step in the right direction (George Selgin provides a good critique here). There were three things in Powell's speech that stuck out for me. I discuss these below.

Shortfalls vs. Deviations

At the 22:30 mark, Powell reports what may very well be the most substantive change to the monetary policy statement. Here, he states that the FOMC will now interpret important macroeconomic time series like GDP and unemployment as exhibiting "shortfalls" instead of "deviations" from some ideal or "maximum" level (a frustratingly vague concept). The practical effect of this shift is to remove (or make less prominent) in the minds of FOMC members the idea that the economy is, or will soon be, "overheating" (i.e., embarked on an unsustainable path that can only end in misery for those most vulnerable to economic recession).

The idea of "deviation from (some) trend" seems like a plausible description of the postwar U.S. up to the mid-1980s. Severe contractions were usually followed by equally robust recoveries. However, this representation seems to break down after the "great moderation" that began in the mid-1980s. Since then, economic recessions have not been followed by above-average growth. Instead, each recession seems better described as a "growth shortfall." We're not entirely sure what accounts for this cyclical asymmetry, but it seems consistent with Milton Friedman's "plucking model." I think we can expect a stream of research resurrecting this old idea (see here, for example). In any case, the upshot here is that, to the extent that "overheating" is no longer considered a serious threat, the FOMC will be less likely to implement "preemptive" policy rate hikes.

This constitutes a tacit acknowledgement that the period leading up to "lift off" and what followed might have been handled better. As I wrote at the time (see my discussion here), standard Phillips Curve logic did not seem to support tightening (unemployment was above the estimated natural rate, inflation was below target, and inflation expectations were declining). But the Committee somehow talked itself into the need to "normalize," to act preemptively and not get caught "behind the curve." In fairness, monetary policy is always about balancing risks (in this case, the perceived risk of overheating). In the near future, less weight will be assigned to the risk of overheating.

The Maximum Level of Employment

At the 22:30 mark, Powell states, "Of course, when employment is below its maximum level, as is so clearly the case now, we will actively seek to minimize that shortfall..." I have a hard time not interpreting "maximum" here as "socially desirable." I think most people would agree that the 2008 financial crisis caused employment to decline below its maximum level. The workers rendered idle in that episode constituted a social waste, and the Fed was right to loosen monetary policy to stimulate economic activity in the face of recessionary headwinds. But the recession induced by C-19 is very different from standard recessions. This was laid out very clearly by St. Louis Fed President Jim Bullard on March 23, 2020: Expected U.S. Macroeconomic Performance during the Pandemic Adjustment Period.
According to Bullard, the temporary removal of some workers from their jobs is not, in this case, a waste of resources. The decline in employment in this case should be viewed as an investment in public health. That is, the maximum level of employment declined, and its recovery is driven mostly by the contagion dynamic (as well as improvements in social distancing protocols, masking, testing, treatments, etc.). The role of monetary policy here is to calm financial markets (which the Fed successfully accomplished in March) and to aid the fiscal authority with its income maintenance programs. In short, the primary monetary/fiscal policy objective here is to deliver insurance, not stimulus.

Monetary stimulus is appropriate, however, to the extent that demand factors (e.g., an individually rational, but collectively irrational, restraint on spending) are inhibiting the recovery dynamic. The evidence for this is usually assumed to be found in falling inflation and inflation expectations, and declining bond yields. And usually, this makes sense, because we usually assume that recessions are caused by collapses in aggregate demand (as in 2008-09). But what if the increase in the demand for money (safe assets in general) is driven by a collectively rational fear? We'd expect to see the exact same inflation and interest rate dynamic, but the role for stimulative monetary policy would be more difficult to justify (though the desirability of insurance remains). So, maybe it is not so clearly the case now that employment is below or, at least, far below its "maximum" level.

Note that a significant part of the decline in aggregate employment is coming from the leisure and hospitality sector. Arguably, we do not want, at this stage of the pandemic, to promote the indoor dining experiences people enjoyed earlier this year. This activity will return slowly as economic fundamentals improve. The "full employment" level of employment in this sector is clearly below what it was in Jan 2020. But, to be fair, it is entirely possible, and perhaps even likely, that the level of employment even here is lower than the "full employment" level. It's very hard to tell by how much, though.

Average Inflation Targeting

At the 24:00 mark, Powell explains how AIT will help anchor inflation expectations. Missing the inflation target for a prolonged period of time will cause expectations to drift away from target and line up with the historical experience. This view of expectation formation is firmly rooted in the "adaptive expectations" tradition. That is, expectations are assumed to be formed by looking backward instead of forward. People sometimes claim that adaptive expectations are inconsistent with "rational" expectations. But this is not necessarily the case. In fact, it makes sense to use the historical record of inflation realizations to make inferences about the long-run inflation target if people are not sure of the monetary authority's true inflation target; see, for example, here: Monetary Policy Regimes and Beliefs.

It's still not entirely clear to me whether FOMC members view AIT as a policy to pursue passively (i.e., let inflation creep up to and beyond target on its own) or actively (i.e., take explicit actions to promote an overshoot of inflation). If it's the former, then I'm on board with the idea. But if it's the latter, I am not. In particular, with the liquidity-trap-like conditions we're presently in, the Fed does not have the tools (or political will) to boost inflation persistently.
It is likely to fail, just as the Bank of Japan failed. (I explain here why it's more difficult for a central bank to raise the inflation target than to lower it.) So, as I've advocated many times in the past, why not just declare 2% as a soft ceiling and let fiscal policy do the rest? My view rests on the belief that missing the inflation target from below by 50bp over the past eight years is not a significant macroeconomic problem (especially given how crudely inflation is measured). The FOMC did view it as a problem, but mainly, it seems, because of the embarrassment associated with missing its target. "We are a central bank. We have an inflation target. Central banks are supposed to hit their inflation targets. We need to hit our inflation target to remain credible." This is why earlier FOMC statements emphasized the Fed's "symmetric" inflation target. That did not work, and so now we have AIT which, I'm afraid, might not work either.

Happily (for those who want to see higher inflation), Congress seems comfortable with the idea of producing large budget deficits into the foreseeable future. So, if we get higher inflation, it will largely be a fiscal phenomenon. The purpose of AIT is to accommodate any rise in inflation for the purpose of increasing inflation expectations and avoiding the specter of deflation (people often point to Japan as a case to avoid, but Japan seems to be doing fine as far as I can tell).

There is the question of how the Fed would react should inflation rise sharply and persistently above 2%. Even if the event is unlikely, it would be good to state a contingency plan. In the past, the Fed could be expected to raise its policy rate sharply. But this event, should it transpire, will almost surely take place during an employment shortfall (since this is now the acknowledged new normal). The only prediction I'll make here is that the FOMC will have a lot of explaining to do in this event.
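To make the adaptive-expectations logic behind Powell's anchoring argument concrete, here is a minimal sketch; the 0.2 updating weight and the inflation path are illustrative assumptions, not estimates of anything.

```python
# Minimal sketch of adaptive (backward-looking) inflation expectations:
# expectations move toward realized inflation, so persistently missing the
# 2 percent target from below drags expected inflation below target.
lam = 0.2          # illustrative updating weight, not an estimate
expected = 2.0     # start with expectations anchored at the 2 percent target

realized_inflation = [1.5] * 10   # a decade of 1.5 percent inflation, assumed here

for pi in realized_inflation:
    expected += lam * (pi - expected)   # pi_e(t) = pi_e(t-1) + lam*(pi(t-1) - pi_e(t-1))

print(f"Expected inflation after ten years of misses: {expected:.2f}%")
# Expectations drift toward 1.5, "lining up with the historical experience."
# AIT tries to undo this drift by promising a period of inflation above 2 percent.
```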
Friday May 12 we had the annual Hoover monetary policy conference. Hoover twitter stream here. Conference webpage and schedule here (update 5/24: now contains videos). As before, the talks, panels, and comments will eventually be written and published.

The Fed has experienced two dramatic institutional failures: inflation peaking at 8%, and a rash of bank failures. There were panels focused on each, and much surrounding discussion.

We started with a little celebration of the 30th anniversary of Taylor (1993), which put the Taylor rule on the map. As Andy Levin pointed out in the discussion, academic immortality comes when they omit the number after your name. Rich Clarida, Volker Wieland and I quickly outlined some of its academic influence. John Lipsky added some very interesting commentary on how the Taylor rule was important on Wall Street, and specifically from his experience at Salomon Bros.

The second panel, on financial regulation, was a smash. Anat Admati chaired, with presentations by Darrell Duffie, Randy Quarles, and Amit Seru. Duffie showed how online banking has taken over, and the combination of twitter and online banking makes runs happen much faster than before. You don't have to stand in line, you can all push "withdraw" at once. He also showed a glaring hole in liquidity regulations: a bank cannot count as liquidity its ability to use the discount window at the Fed. Seru covered some of his recent work, showing just how many banks have lost 10% or more of their asset value, and thus the value of their equity. (Nobody mentioned commercial real estate, the next shoe to drop.) They gently disagreed, Darrell viewing more liquidity and better liquidity rules as the main solution, and Amit more equity. All seemed to agree that the current regulatory mechanism is fundamentally broken. Randy gave a thoughtful, eloquent, and impassioned talk laying to rest the common notion that "deregulation" caused SVB to fail. It would have passed all the stress tests. This will be important to read when the papers are all available. I take the implication that the regulatory structure is, again, fundamentally broken. No, more of the current regulations would not have helped. But Randy didn't say that.

Peter Henry next presented "Disinflation and the Stock Market: Third World Lessons for First World Monetary Policy" (a paper with Anusha Chari), discussed by Josh Rauh and chaired by Bill Nelson. A key innovation: they use stock market reactions to measure whether disinflations are a success on a cost/benefit basis. Large inflations seem to end with stock market expansions. Moderate disinflations don't really do much for stock markets. Most disinflationary reforms fail.

Over lunch, Haruhiko Kuroda, former Governor of the Bank of Japan, updated us on the Japanese situation. He is confident 2% inflation will return soon.

Niall Ferguson and Paul Schmelzing presented "The Safety Net: Central Bank Balance Sheets and Financial Crises 1587-2020" (with Martin Kornejew and Moritz Schularick), with Barry Eichengreen discussing and Michael Bordo chairing. A taste: the paper concludes that lender of last resort operations do work, and also create moral hazard. Barry had an eloquent discussion, noting among other things that not all balance sheet expansions are the same. Look for those in the written versions.

Next, Mickey Levy presented "The Fed: Bad Forecasts and Misguided Monetary Policy," with Steve Davis discussing and Jim Wilcox chairing.
The Fed -- and most industry analysts -- completely missed 8% inflation, both ahead of time and as it was happening. Why? How can the Fed do better? (And why is the Fed not asking this question?) To me, it looks like the forecast is not much more than an AR(1) reversion to 2% inflation. The paper has a good summary of how Fed forecasts are made, along with recommendations for institutional improvement. Steve Davis had an excellent discussion, pointing to a central incentive problem: the Fed uses forecasts to try to shape expectations. Like public health authorities, it can be afraid to reveal actual fears. I also see conceptual flaws -- not much attention to supply or fiscal policy, using the Phillips curve as a causal model and as a model in itself, too much attention to the one-period link from expected inflation to inflation, and too much attention to the forecast rather than risk management: what do we do if things come out differently?

The conference day ended with the traditional policy panel, with Jim Bullard (talk here), Philip Jefferson (talk here), Jeff Lacker, and Charlie Plosser, chaired by John Taylor.

Bullard pointed to the huge fiscal stimulus as a source of inflation, warming my heart. He opined that this stimulus is fading, making him hopeful for a soft landing. He presented the following chart. This is a very interesting measure of how much "stimulus" is sitting out there in the economy. The government did write a lot of checks, which went straight to people's bank accounts, and eventually were spent, driving up inflation. On the other hand, I am still a bit shocked that we're running a $1 trillion deficit despite beyond-full employment and output revving at every bit that the "supply" side of the economy can produce. What's your measure of fiscal stimulus? Which forecasts inflation? This is a very provocative and interesting idea.

Jefferson gave a great talk. He has the measured cadence of a seasoned central banker, but speaks very clearly and directly. He started by announcing his appointment as vice-chair, which got a well-deserved ovation. He then jumped right in:

The title of the conference "How to Get Back on Track: A Policy Conference" is potent. Its intent and ambiguity are striking. First, the title presupposes that U.S. monetary policy is currently on the wrong track. Second, the webpage for this conference advances a puzzling definition of the phrase "on track." How so? According to the Hoover webpage, "A key goal of the conference is to examine how to get back on track and, thereby, how to reduce the inflation rate without slowing down economic growth" (emphasis added). As this audience knows, there are macroeconomic models that permit disinflation with no slowdown in economic growth, but the assumptions underlying these models are very strong. It's not clear, at least to me, why such a strict metric would be used to assess real-world monetary policymaking....

I loved this. It shows he took the time to read up on the conference, and I love seeing basic premises challenged. Later, this struck me as thoughtful:

I want to share with you a few strategic principles that are important to me. First, policymakers should be ready to react to a wide range of economic conditions with respect to inflation, unemployment, economic growth, and financial stability. The unprecedented pandemic shock is a good reminder that under extraordinary circumstances it will be difficult to formulate precise forecasts in real time.
Our dual mandate from the Congress is especially helpful here. It provides the foundation for all our policy decisions. Second, policymakers should clearly communicate monetary policy decisions to the public. Our commitment to transparency should be evident to the public, and monetary policy should be conducted in a way that anchors longer-term inflation expectations. Third—and this is where I am revealing my passion for econometrics—policymakers should continuously update their priors about how the economy works as new data become available. In other words, it is appropriate to change one's perspective as new facts emerge. In this sense, I am in favor of a Bayesian approach to information processing.

The first point brings us back to the problem that the Fed has so far been too silent about: how did it miss 8% inflation? And how should it operate when such huge misses are possible? The Fed seems to have been making a forecast, then announcing a policy path that works for the forecast, and then trying to stick to it. In this first principle you see a quite different view. Let's call it data-dependent rather than time-dependent. This is a conference about the Taylor rule. Should the Fed look at more than inflation and employment? Well, yes and no, according to these comments. And when models are not certain, distrust and update.

Plosser and Lacker previewed an upcoming paper on the Fed's deviation from rules. Stay tuned. The evening started with a delightful speech by Sebastian Edwards on Latin American inflation. Stay tuned for that too. Videos should be up soon, and written versions as fast as we can get authors to turn them in. This is just a teaser!

Update: Videos are now up, with some more commentary here.
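Since the conference opened with Taylor (1993), here is a minimal sketch of that original rule for reference; the inflation and output-gap inputs in the example are made-up numbers, not conference data.

```python
# Minimal sketch of the original Taylor (1993) rule:
# i = pi + r* + 0.5*(pi - pi*) + 0.5*(output gap),
# with r* = 2 and pi* = 2 in Taylor's paper.
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Suggested federal funds rate, in percent."""
    return inflation + r_star + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Made-up illustrative inputs, not data from the conference:
print(taylor_rule(inflation=8.0, output_gap=1.0))   # 13.5 -- far above actual policy rates in 2022
print(taylor_rule(inflation=2.0, output_gap=0.0))   # 4.0  -- the rule's neutral setting
```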
In a recent WSJ oped (which I will post here when 30 days have passed), I criticized the "supply shock" theory of our current inflation. Alan Blinder responds in WSJ letters:

First, Mr. Cochrane claims, the supply-shock theory is about relative prices (that's true), and that a rise in some relative price (e.g., energy) "can't make the price of everything go up." This is an old argument that monetarists started making a half-century ago, when the energy and food shocks struck. It has been debunked early and often. All that needs to happen is that when energy-related prices rise, many other prices, being sticky downward, don't fall. That is what happened in the 1970s, 1980s and 2020s.

Second, Mr. Cochrane claims, the supply-shock theory "predicts that the price level, not the inflation rate, will return to where it came from—that any inflation should be followed by a period of deflation." No. Not unless the prices of the goods afflicted by supply shocks return to the status quo ante and persistent inflation doesn't creep into other prices. Neither has happened in this episode.

When economists disagree about fairly basic propositions, there must be an unstated assumption about which they disagree. If we figure out what it is, we can think more productively about who is right. I think the answer here is simple: to Blinder there is no "nominal anchor." In my analysis, there is. This is a question about which one can honorably disagree. (WSJ opeds have a hard word limit, so I did not have room for nuance on this issue.)

Suppose there are two goods, TVs and hamburgers. Chip production problems make TVs temporarily scarce. We agree, the relative price of TVs must rise. TVs could go up, hamburgers could go down, or half of both. When the chip shortage eases, the relative price goes back to normal. TVs could come down, or hamburgers go up. In Blinder's view, TV prices go up now, and hamburger prices catch up when the chip shortage eases. ("Inflation creeps in to other prices.") The reason, true enough, is that prices are stickier downward than upward.

As Blinder writes, a half century ago monetarists started making an "old argument" against this analysis. Informally, they pointed out that people have to have enough money to buy TVs and hamburgers at higher prices. If they don't, the price level cannot permanently rise. Maybe TVs go up temporarily, but if people don't have enough money to pay the higher prices, those downwardly sticky prices eventually drift down. A bit more formally, monetarists start with MV=PY: money times velocity equals the overall price level (TVs and hamburgers) times overall quantity. Without more M, PY cannot go up. In the short run, with sticky downward prices, we'll have less Y. But lower demand for hamburgers eventually causes sellers to lower their prices, and when the chip shortage eases lower demand also brings back the price of TVs. Without more M there is, eventually, no more P.

So to monetarists, a "supply shock" would indeed first raise the price of TVs, and thus the measured price level. But lower demand for TVs and hamburgers means the price level eventually comes back to where it was, as I had stated. MV=PY is a "nominal anchor." It is a force that determines the overall price level. Relative price changes cannot change this overall price level, once we get past the price stickiness that sends some demand into output. My analysis also has a nominal anchor, fiscal theory, that nails down the overall price level, while allowing some stickiness in the short run.
The difference between fiscal theory and monetarism or some other theory does not matter for today's discussion. (Exercise for the reader: do new-Keynesian models have a nominal anchor, and if so, what is it?)

Now, how does Blinder avoid this logic? Simple. In his view there is no nominal anchor. There are two kinds of economic analysis that remove the nominal anchor.

First, the nominal anchor simply may be absent. Standard 1970s ISLM analysis, the subject of an eloquent elegy (eulogy?) in Blinder's recent book, does not have a nominal anchor. The price level today is whatever the price level was yesterday plus whatever supply and demand shocks move it around today. The MV=PY constraint on the overall price level is simply absent. Given Blinder's writing in favor of ISLM and against everything that has happened since, I think this is the basic disagreement in our analysis. So you have it: if you think there is a nominal anchor, then you're with me: supply shocks without more demand (more M, more debt) cannot permanently raise the price level. If you think there is no nominal anchor, and prices are whatever they were yesterday plus shocks, then Blinder's analysis that supply shocks can set off inflation that does not reverse, and permanently raise the price level, is possible.

A second possibility: there is a nominal anchor, but it is passive. In a monetarist analysis, a supply shock comes that raises the price of TVs because hamburgers are sticky downward. The Fed, not wishing to see a recession, "accommodates" this supply shock by printing more M, so the price level goes up. And in monetarist analysis an interest rate target automatically provides that sort of accommodation, which is why they don't like interest rate targets. This is a standard analysis of the 1970s supply shocks. They didn't cause inflation directly, but they did induce the Fed to accommodate, to print up more money, which allowed broad-based inflation. Indeed, that might be a good fiscal theory story for recent inflation, merging supply shocks and fiscal theory. Why did the government send $5 trillion to people? Exactly so they could pay the higher prices, and no price had to fall. It worked like gangbusters. Facing the energy shortages after the Ukraine war, Europe subsidized demand with fiscal transfers to much the same effect.

I don't think Blinder is making this argument, though. If that's the argument, the "supply shock" really still isn't the important driver of inflation. The inflation would not happen without the Fed (or fiscal policy) giving people the money to pay the higher prices. It's a supply-induced demand shock. The emergence of inflation is still 100% tied to loose monetary and fiscal policy. It's just making a plea that this inflationary policy was a good idea, avoiding painful price declines in some goods and wages.

A minor rhetorical complaint: "This is an old argument that monetarists started making a half-century ago." Old arguments aren't necessarily bad. Adam Smith made the argument that free trade is good 250 years ago! And if "old" is an insult, I might point out that ISLM hasn't been publishable since about 1980, while "new" Keynesian models, which work in a completely different way, have been the rage.

Update: I emailed Alan, who answered in part,

The Fed's reaction function is the nominal anchor, but for the inflation rate more than the price level.

This is an interesting and important comment, and I think it is representative of the standard view.
It's halfway between the two views I articulated. There is no natural nominal anchor – no M in MV=PY, and no B in B/P = expected present value of surpluses (the fiscal theory alternative) – that constrains the price level. The price level is wherever the Fed drives it by interest rate policy (the "standard view" of "interest rates and inflation part 1"). But for a supply shock to cause inflation in this view, there must be monetary accommodation. The Fed could run a price level target. It would raise interest rates to keep the price level from rising, or to bring it back after a shock. The Fed could run an inflation target. It would raise interest rates to keep inflation from rising. So inflation does not really emerge as a pure "supply shock." It is supply shock + demand accommodation, via the interest rate policy.

Alan added,

Back when OPEC I struck in 1973-74, many monetarists denied that it could possibly cause inflation without a rise in M.

The argument has indeed been around for a while. Did M rise? I'll leave monetarists to the history on that one. To be clear, in this sequence I used MV=PY as an easy illustrative example. I think there is a nominal anchor, but it is fiscal theory, not monetarism. And I acknowledge that Blinder's "standard view" is most prevalent in policy commentary.

Update: A correspondent reminds me of basic IS/LM AS/AD. Do not confuse a "supply shock" -- a relative price movement -- with an "aggregate supply shock" -- a shock that makes the overall price level, and wages, rise for a given level of output. The latter is a Phillips curve phenomenon. They may be related in the depths of Keynesian thinking, but they are not the same, and it is a bit of a mistake to jump from one to another.
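To put arithmetic on the TV/hamburger example, here is a minimal sketch of the monetarist nominal-anchor logic; all the numbers are invented for illustration, and the point is only the accounting constraint, not a claim about actual data.

```python
# Minimal sketch of the "nominal anchor" logic: with M, V, and long-run
# quantities fixed, MV = PY pins down total nominal spending, so a higher
# relative price of TVs must eventually be offset by lower hamburger prices.
M, V = 100.0, 2.0                    # money stock and velocity (hypothetical)
q_tv, q_burger = 10.0, 100.0         # long-run quantities (hypothetical)
p_tv, p_burger = 10.0, 1.0           # initial prices; total spending = 200 = M*V

assert p_tv * q_tv + p_burger * q_burger == M * V

# Chip shortage: say the TV price rises 20%.
p_tv_new = p_tv * 1.20
# With no extra M, nominal spending is still M*V, so hamburger prices must give:
p_burger_new = (M * V - p_tv_new * q_tv) / q_burger

print("New hamburger price:", p_burger_new)   # 0.8 -- sticky downward, so this takes time
# The overall price level ends up back where the anchor puts it; only the
# relative price changed. Drop the anchor, as in Blinder's reading, and the
# hamburger price can stay put while the price level rises permanently.
```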
Effective January 1, 2015, Indonesia's new government took the decisive step of implementing a new fuel pricing system, dramatically reducing gasoline and diesel subsidy costs. This paved the way for the government's first budget, passed in February, to shift spending towards development priorities, especially infrastructure, the allocation for which is double the 2014 outturn. Successful implementation of the bold vision of the budget, however, will require overcoming administrative constraints to spending and dramatically lifting revenue collection performance. Achieving this, and having the benefits flow through into faster economic growth and poverty reduction, is likely to take time, especially with the pace of sustainable economic growth having slowed, due partly to lower commodity prices. Beyond the fiscal sector, reforms taken in the first months of the government's term in key areas such as investment licensing also face complex challenges to make operational. The government has signaled its strong reform intentions, and raised expectations. Early progress will now need to be consolidated by effectively implementing major reforms and the budget posture, against a still-challenging global economic backdrop for Indonesia.
I've been reading a lot of macro lately. In part, I'm just catching up from a few years of book writing. In part, I want to understand inflation dynamics, the quest set forth in "expectations and the neutrality of interest rates," and an obvious next step in the fiscal theory program. Perhaps blog readers might find interesting some summaries of recent papers, when there is a great idea that can be summarized without a huge amount of math. So, I start a series on cool papers I'm reading. Today: "Tail risk in production networks" by Ian Dew-Becker, a beautiful paper. A "production network" approach recognizes that each firm buys from others, and models this interconnection. It's a hot topic for lots of reasons, below. I'm interested because prices cascading through production networks might induce a better model of inflation dynamics. (This post uses Mathjax equations. If you're seeing garbage like [\alpha = \beta] then come back to the source here.)

To Ian's paper: Each firm uses other firms' outputs as inputs. Now, hit the economy with a vector of productivity shocks. Some firms get more productive, some get less productive. The more productive ones will expand and lower prices, but that changes everyone's input prices too. Where does it all settle down? This is the fun question of network economics.

Ian's central idea: The problem simplifies a lot for large shocks. Usually when problems are complicated we look at first or second order approximations, i.e. for small shocks, obtaining linear or quadratic ("simple") approximations. On the x axis, take a vector of productivity shocks for each firm, and scale it up or down. The x axis represents this overall scale. The y axis is GDP. The right hand graph is Ian's point: for large shocks, log GDP becomes linear in log productivity -- really simple. Why? Because for large enough shocks, all the networky stuff disappears. Each firm's output moves up or down depending only on one critical input.

To see this, we have to dig deeper into complements vs. substitutes. Suppose the price of an input goes up 10%. The firm tries to use less of this input. If the best it can do is to cut use 5%, then the firm ends up paying 5% more overall for this input; the "expenditure share" of this input rises. That is the case of "complements." But if the firm can cut use of the input 15%, then it pays 5% less overall for the input, even though the price went up. That is the case of "substitutes." This is the key concept for the whole question: when an input's price goes up, does its share of overall expenditure go up (complements) or down (substitutes)?

Suppose inputs are complements. Again, this vector of technology shocks hits the economy. As the size of the shock gets bigger, the expenditure of each firm, and thus the price it charges for its output, becomes more and more dominated by the one input whose price grows the most. In that sense, all the networkiness simplifies enormously. Each firm is only "connected" to one other firm. Turn the shock around. Each firm that was getting a productivity boost now gets a productivity reduction. Each price that was going up now goes down. Again, in the large shock limit, our firm's price becomes dominated by the price of its most expensive input. But it's a different input. So, naturally, the economy's response to this technology shock is linear, but with a different slope in one direction vs. the other. Suppose instead that inputs are substitutes.
Now, as prices change, the firm expands more and more its use of the cheapest input, and its costs and price become dominated by that input instead. Again, the network collapses to one link. Ian:

"negative productivity shocks propagate downstream through parts of the production process that are complementary (\(\sigma_i < 1\)), while positive productivity shocks propagate through parts that are substitutable (\(\sigma_i > 1\)). ...every sector's behavior ends up driven by a single one of its inputs....there is a tail network, which depends on \(\theta\) and in which each sector has just a single upstream link."

Equations: Each firm's production function is (somewhat simplifying Ian's (1)) \[Y_i = Z_i L_i^{1-\alpha} \left( \sum_j A_{ij}^{1/\sigma} X_{ij}^{(\sigma-1)/\sigma} \right)^{\alpha \sigma/(\sigma-1)}.\] Here \(Y_i\) is output, \(Z_i\) is productivity, \(L_i\) is labor input, \(X_{ij}\) is how much good j firm i uses as an input, and \(A_{ij}\) captures how important each input is in production. \(\sigma>1\) means inputs are substitutes, \(\sigma<1\) complements. Firms are competitive, so price equals marginal cost, and each firm's price is \[ p_i = -z_i + \frac{\alpha}{1-\sigma}\log\left(\sum_j A_{ij}e^{(1-\sigma)p_j}\right).\; \; \; (1)\] Small letters are logs of big letters. Each price depends on the prices of all the inputs, plus the firm's own productivity. Log GDP, plotted in the above figure, is \[gdp = -\beta'p\] where \(p\) is the vector of prices and \(\beta\) is a vector of how important each good is to the consumer.

In the case \(\sigma=1\), (1) reduces to a linear formula. We can easily solve for prices and then gdp as a function of the technology shocks: \[p_i = - z_i + \alpha \sum_j A_{ij} p_j\] and hence \[p=-(I-\alpha A)^{-1}z,\] where the letters represent vectors and matrices across \(i\) and \(j\). This expression shows some of the point of networks, that the pattern of prices and output reflects the whole network of production, not just individual firm productivity. But with \(\sigma \neq 1\), (1) is nonlinear with no known closed-form solution. Hence approximations.

You can see Ian's central point directly from (1). Take the \(\sigma<1\) case, complements. Parameterize the size of the technology shocks by a fixed vector \(\theta = [\theta_1, \ \theta_2, \ ...\theta_i,...]\) times a scalar \(t>0\), so that \(z_i=\theta_i \times t\). Then let \(t\) grow keeping the pattern of shocks \(\theta\) the same. Now, as the \(\{p_i\}\) get larger in absolute value, the term with the greatest \(p_j\) has the greatest value of \( e^{(1-\sigma)p_j} \). So, for large technology shocks \(z\), only that largest term matters, the log and e cancel, and \[p_i \approx -z_i + \alpha \max_{j} p_j.\] This is linear, so we can also write prices as a pattern \(\phi\) times the scale \(t\); in the large-t limit \(p_i = \phi_i t\), and \[\phi_i = -\theta_i + \alpha \max_{j} \phi_j.\;\;\; (2)\] With substitutes, \(\sigma>1\), the firm's costs, and so its price, will be driven by the smallest (most negative) upstream price, in the same way: \[\phi_i \approx -\theta_i + \alpha \min_{j} \phi_j.\]

To express gdp scaling with \(t\), write \(gdp=\lambda t\), or, when you want to emphasize the dependence on the vector of technology shocks, \(\lambda(\theta)\). Then we find gdp by \(\lambda =-\beta'\phi\). In this big price limit, the \(A_{ij}\) contribute a constant term, which also washes out.
Thus the actual "network" coefficients stop mattering at all so long as they are not zero -- the max and min are taken over all non-zero inputs. Ian:

...the limits for prices, do not depend on the exact values of any \(\sigma_i\) or \(A_{i,j}.\) All that matters is whether the elasticities are above or below 1 and whether the production weights are greater than zero. In the example in Figure 2, changing the exact values of the production parameters (away from \(\sigma_i = 1\) or \(A_{i,j} = 0\)) changes...the levels of the asymptotes, and it can change the curvature of GDP with respect to productivity, but the slopes of the asymptotes are unaffected....when thinking about the supply-chain risks associated with large shocks, what is important is not how large a given supplier is on average, but rather how many sectors it supplies...

For a full solution, look at the (more interesting) case of complements, and suppose every firm uses a little bit of every other firm's output, so all the \(A_{ij}>0\). The largest input price in (2) is then the same for each firm \(i\), and you can quickly see that the biggest price belongs to the firm with the smallest technology shock: \(\max_j \phi_j = -\theta_{\min} + \alpha \max_j \phi_j\), so \(\max_j \phi_j = -\theta_{\min}/(1-\alpha)\). Now we can solve the model for prices and GDP as a function of technology shocks: \[\phi_i \approx -\theta_i - \frac{\alpha}{1-\alpha} \theta_{\min},\] \[\lambda \approx \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min}.\] We have solved the large-shock approximation for prices and GDP as a function of technology shocks. (This is Ian's example 1.)

The graph is concave when inputs are complements, and convex when they are substitutes. Let's do complements. We get the graph to the left of the kink by changing the sign of \(\theta\). If the identity of \(\theta_{\min}\) did not change, \(\lambda(-\theta)=-\lambda(\theta)\) and the graph would be linear; it would go down on the left of the kink by the same amount it goes up on the right of the kink. But now a different \(j\) has the largest price and the worst technology shock. Since this must be a worse technology shock than the one driving the previous case, GDP is lower and the graph is concave. \[-\lambda(-\theta) = \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\max} \ge\beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min} = \lambda(\theta).\] Therefore \(\lambda(-\theta)\le-\lambda(\theta):\) the left side falls by more than the right side rises.

Does all of this matter? Well, surely more for questions when there might be a big shock, such as the big shocks we saw in a pandemic, or big shocks we might see in a war. One of the big questions that network theory asks is: how much does GDP change if there is a technology shock in a particular industry? The \(\sigma=1\) case, in which expenditure shares are constant, gives a standard and fairly reassuring result: the effect on GDP of a shock in industry i is given by the ratio of i's output to total GDP ("Hulten's theorem"). Industries that are small relative to GDP don't affect GDP that much if they get into trouble. You can intuit that constant expenditure shares are important for this result. If an industry has a negative technology shock, raises its prices, and others can't reduce use of its inputs, then its share of expenditure will rise, and it will all of a sudden be important to GDP. Continuing our example, if one firm has a negative technology shock, then it is the minimum technology, and \[ d\,gdp/dz_i = \beta_i + \frac{\alpha}{1-\alpha}.\] For small firms (industries) the latter term is likely to be the most important.
All the A and \(\sigma\) have disappeared, and basically the whole economy is driven by this one unlucky industry and labor. Ian:

...what determines tail risk is not whether there is granularity on average, but whether there can ever be granularity – whether a single sector can become pivotal if shocks are large enough.

For example, take electricity and restaurants. In normal times, those sectors are of similar size, which in a linear approximation would imply that they have similar effects on GDP. But one lesson of Covid was that shutting down restaurants is not catastrophic for GDP, [Consumer spending on food services and accommodations fell by 40 percent, or $403 billion between 2019Q4 and 2020Q2. Spending at movie theaters fell by 99 percent.] whereas one might expect that a significant reduction in available electricity would have strongly negative effects – and that those effects would be convex in the size of the decline in available power. Electricity is systemically important not because it is important in good times, but because it would be important in bad times. Ben Moll turned out to be right and Germany was able to substitute away from Russian gas a lot more than people had thought, but even that proves the rule: if it is hard to substitute away from even a small input, then large shocks to that input imply larger expenditure shares and larger impacts on the economy than its small output in normal times would suggest.

There is an enormous amount more in the paper and voluminous appendices, but this is enough for a blog review.

****

Now, a few limitations, or really thoughts on where we go next. (No more in this paper, please, Ian!) Ian does a nice illustrative computation of the sensitivity to large shocks. Ian assumes \(\sigma>1\), so the main ingredients are how many downstream firms use your products and a bit their labor shares. No surprise, trucks and energy have big tail impacts. But so do lawyers and insurance. Can we really not do without lawyers? Here I hope the next step looks hard at substitutes vs. complements.

That raises a bunch of issues. Substitutes vs. complements surely depends on time horizon and size of shocks. It might be easy to use a little less water or electricity initially, but then really hard to reduce more than, say, 80%. It's usually easier to substitute in the long run than in the short run. The analysis in this literature is "static," meaning it describes the economy when everything has settled down. The responses -- you charge more, I use less, I charge more, you use less of my output, etc. -- all happen instantly, or equivalently the model studies a long run where this has all settled down. But then we talk about responses to shocks, as in the pandemic. Surely there is a dynamic response here, not just including capital accumulation (which Ian studies). Indeed, my hope was to see prices spreading out through a production network over time, but this structure would have all price adjustments happen instantly. Mixing production networks with sticky prices is an obvious idea, which some of the papers below are working on.

In the theory and data handling, you see a big discontinuity. If a firm uses any inputs at all from another firm, if \(A_{ij}>0\), that input can take over and drive everything. If it uses no inputs at all, then there is no network link and the upstream firm can't have any effect. There is a big discontinuity at \(A_{ij}=0.\) We would prefer a theory that does not jump from zero to everything when the firm buys one stick of chewing gum.
Ian had to drop small but nonzero elements of the input-output matrix to produce sensible results. Perhaps we should regard very small inputs as always substitutes?

How important is the network stuff anyway? We tend to use industry categorizations, because we have an industry input-output table. But how much of the US input-output table is simply vertical: loggers sell trees to mills, who sell wood to lumberyards, who sell lumber to Home Depot, who sells it to contractors who put up your house? Energy and tools feed each stage, but making those doesn't use a whole lot of wood. I haven't looked at an input-output matrix recently, but just how "vertical" is it?
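As a toy illustration of the Hulten benchmark and this "how vertical is it?" question, here is a small input-output calculation; the matrix and final-demand shares are invented for the example.

```python
import numpy as np

# A mostly "vertical" 3-sector chain: sector 2 sells to 1, which sells to 0,
# which sells (mostly) to households.  A[i, j] is the share of sector i's
# costs spent on inputs from j; beta holds final-demand shares.
A = np.array([[0.0, 0.3, 0.0],
              [0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0]])
beta = np.array([0.7, 0.2, 0.1])

# Hulten benchmark (sigma = 1): d log GDP / d theta_i equals the Domar weight,
# sales_i / GDP = [beta' (I - A)^{-1}]_i, bigger than beta_i for any sector
# that also supplies inputs further down the chain.
domar = beta @ np.linalg.inv(np.eye(3) - A)
print("final-demand shares:", beta)
print("Domar weights:      ", domar.round(3))

# With sigma = 1 these weights are the whole story; the point of the paper is
# that with sigma < 1 and large shocks, the weights themselves can blow up.
```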
The "what's the question" question is doubly important for this branch of macro that explicitly models heterogeneous agents and heterogenous firms. Why are we doing this? One can always represent the aggregates with a social welfare function and an aggregate production function. You might be interested in how aggregates affect individuals, but that doesn't change your model of aggregates. Or, you might be interested in seeing what the aggregate production or utility function looks like -- is it consistent with what we know about individual firms and people? Does the size of the aggregate production function shock make sense? But still, you end up with just a better (hopefully) aggregate production and utility function. Or, you might want models that break the aggregation theorems in a significant way; models for which distributions matter for aggregate dynamics, theoretically and (harder) empirically. But don't forget you need a reason to build disaggregated models. Expression (1) is not easy to get to. I started reading Ian's paper in my usual way: to learn a literature start with the latest paper and work backward. Alas, this literature has evolved to the point that authors plop results down that "everybody knows" and will take you a day or so of head-scratching to reproduce. I complained to Ian, and he said he had the same problem when he was getting in to the literature! Yes, journals now demand such overstuffed papers that it's hard to do, but it would be awfully nice for everyone to start including ground up algebra for major results in one of the endless internet appendices. I eventually found Jonathan Dingel's notes on Dixit Stiglitz tricks, which were helpful. Update:Chase Abram's University of Chicago Math Camp notes here are also a fantastic resource. See Appendix B starting p. 94 for production network math. The rest of the notes are also really good. The first part goes a little deeper into more abstract material than is really necessary for the second part and applied work, but it is a wonderful and concise review of that material as well.
My first post described a few anecdotes about what a warm person Bob Lucas was, and such a great colleague. Here I describe a little bit of his intellectual influence, in a form that is, I hope, accessible to average people.

The "rational expectations" revolution that brought down Keynesianism in the 1970s was really much larger than that. It was really the "general equilibrium" revolution. Macroeconomics until 1970 was sharply different from regular microeconomics. Economics is all about "models," complete toy economies that we construct via equations and in computer programs. You can't keep track of everything in even the most beautiful prose. Microeconomic models, and "general equilibrium" as that term was used at the time, wrote down how people behave — how they decide what to buy, how hard to work, whether to save, etc. Then they similarly described how companies behave and how government behaves. Set this in motion and see where it all settles down; what prices and quantities result.

But for macroeconomic issues, this approach was sterile. I took a lot of general equilibrium classes as a PhD student — Berkeley, home of Gerard Debreu, was strong in the field. But it was devoted to proving the existence of equilibrium with more and more general assumptions, and never got around to calculating that equilibrium and what it might say about recessions and government policies. Macroeconomics, exemplified by the ISLM tradition, inhabited a different planet. One wrote down equations for quantities rather than people, for example that "consumption" depended on "income," and investment on interest rates. Most importantly, macroeconomics treated each year as a completely separate economy. Today's consumption depended on today's income, having nothing to do with whether people expected the future to look better or worse. Economists recognized this weakness, and a vast and now thankfully forgotten literature tried fruitlessly to find "micro foundations" for Keynesian economics. But building foundations under an existing castle doesn't work. The foundations want a different castle.

Bob's "islands" paper is famous, yes, for a complete model of how unexpected money might move output in the short run and not just raise inflation. But you can do that with half a page of simple math, and Bob's paper is hard to read. Its deeper contribution, and the reason for that difficulty, is that Bob wrote out a complete "general equilibrium" model. People, companies and government each follow described rules of behavior. Those rules are derived as being the optimal thing for people and companies to do given their environment. And they are forward-looking. People think about how to make their whole lives as pleasant as possible, companies to maximize the present value of profits. Prices adjust so supply = demand. Bob said, by example, that we should do macroeconomics by writing down general equilibrium models.

General equilibrium had also been sidelined by the presumption that it only studies perfect economies. Macroeconomics is really about studying how things go wrong, how "frictions" in the economy, such as the "sticky" wages underlying Keynesian thinking, can produce undesirable and unnecessary recessions. But here too, Bob requires us to write down the frictions explicitly. In his model, people don't see the aggregate price level right away, and do the best they can with local information. That is the real influence of the paper and Bob's real influence in the profession.
(Current macroeconomic modeling reflects the fact that the Fed sets interest rates, and does not control the money supply.)

You can see this influence in Tom Sargent's textbooks. The first textbook has an extensive treatment of Keynesian economics. It's about the most comprehensible treatment there is — but it is no insult to Tom to say that in that book you can see how Keynesian economics really doesn't hang together. Tom describes how, the minute he learned from Bob how to do general equilibrium, everything changed instantly.

Rational expectations was, like any other advance, a group effort. But what made Bob the leader was that he showed the rest how to do general equilibrium. This is the heart of my characterization that Bob is the most important macroeconomist of the 20th century. Yes, Keynes and Friedman had more policy impact, and Friedman's advocacy of free markets in microeconomic affairs is the most consequential piece of 20th century economics. But within macroeconomics, there is before Lucas and after Lucas. Everyone today does economics the Lucas way. Even the most new-Keynesian article follows the Lucas rules of how to do economics.

Once you see models founded on complete descriptions of people, businesses, government, and frictions, you can see the gaping holes in standard ISLM models. This is some of his stinging critique, such as "After Keynesian Macroeconomics." Sure, if people's income goes up they are likely to consume more, as the Keynesians posited. But interest rates, wages, and expectations of the future also affect consumption, which Keynesians leave out. "Cross-equation restrictions" and "budget constraints" are missing. Now, the substantive prediction that monetary policy can only move the real economy via unexpected money supply growth did not bear out, and both subsequent real business cycle models and new-Keynesianism brought persistent responses. But the how-we-do-macroeconomics part is the enduring contribution.

The paper still had enduring practical lessons. Lucas, together with Friedman and Phelps, brought down the Phillips curve. This curve, relating inflation to unemployment, had been (and sadly, remains) at the center of macroeconomics. It is a statistical correlation, but like many correlations people got enthused with it and started reading it as a stable relationship, and indeed a causal one. Raise inflation and you can have less unemployment. Raise unemployment in order to lower inflation. The Fed still thinks about it in that causal way. But Lucas, Friedman, and Phelps bring a basic theory to it, and thereby realize it is just a correlation, which will vanish if you push on it. Rich guys wear Rolexes. That doesn't mean that giving everyone a Rolex will have a huge "multiplier" effect and make us all rich.

This is the essence of the "Lucas critique," which is a second big contribution that lay readers can easily comprehend. If you push on correlations they will vanish. Macroeconomics was dedicated to the idea that policy makers can fool people. Monetary policy might try to boost output in a recession with a surprise bit of money growth. That will work once or twice. But like the boy who cried wolf, people will catch on, come to expect higher money growth in recessions, and the trick won't work anymore. Bob showed here that all the "behavioral" relations of Keynesian models will fall apart if you exploit them for policy, or push on them, though they may well hold as robust correlations in the data. The "consumption function" is the next great example.
Keynesians noticed that when income rises people consume more, so they wrote a consumption function relating consumption to income. But, following Friedman's great work on consumption, we know that correlation isn't always true in the data. The relation between consumption and income is different across countries (about one for one) than it is over time (less than one for one). And we understand that with Friedman's theory: People, trying to do their best over their whole lives, don't follow mechanical rules. If they know income will fall in the future, they consume a lot less today, no matter what today's current income. Lucas showed that people who behave in this sensible way will follow a Keynesian consumption function, given the properties of income over the business cycle. You will see a Keynesian consumption function. Econometric estimates and tests will verify a Keynesian consumption function. Yet if you use the model to change policies, the consumption function will evaporate.

This paper is devastating. Large scale Keynesian models had already been constructed, and used for forecasting and policy simulation. It's natural. The model says, given a set of policies (money supply, interest rates, taxes, spending) and other shocks, here is where the economy goes. Well, then, try different policies and find ones that lead to better outcomes. Bob shows the models are totally useless for that effort. If the policy changes, the model will change. Bob also showed that this was happening in real time. Supposedly stable parameters drifted around. (This one is also very simple mathematically. You can see the point instantly. Bob always uses the minimum math necessary. If other papers are harder, that's by necessity not bravado.)

This devastation is sad in a way. Economics moved to analyzing policies in much simpler, more theoretically grounded, but less realistic models. Washington policy analysis sort of gave up. The big models lumber on, the Fed's FRB/US for example, but nobody takes the policy predictions that seriously. And they don't even forecast very well. For example, in the 2008 stimulus, the CEA was reduced to assuming a back-of-the-envelope 1.5 multiplier, this 40 years after the first large scale policy models were constructed. Bob always praised the effort of the last generation of Keynesians to write explicit quantitative models, to fit them to data, and to make numerical predictions of various policies. He hoped to improve that effort. It didn't work out that way, but not by intention.

This affair explains a lot of why economists flocked to the general equilibrium camp. Behavioral relationships, like what fraction of an extra dollar of income you consume, are not stable over time or as policy changes. But one hopes that preferences — how impatient you are, how much you are willing to save more to get a better rate of return — and technology — how much a firm can produce with given capital and labor — do not change when policy changes. So, write models for policy evaluation at the level of preferences and technology, with people and companies at the base, not from behavioral relationships that are just correlations.
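To see the consumption-function example in miniature, here is a small simulation; the income process, interest rate, and numbers are all mine, purely illustrative.

```python
import numpy as np

# Permanent-income consumers facing AR(1) income y_t = rho*y_{t-1} + eps_t and
# interest rate r consume the annuity value of expected lifetime income, which
# works out to c_t = [r / (1 + r - rho)] * y_t.  A regression of c on y then
# recovers a tidy "marginal propensity to consume" -- but it is a function of
# the income process, not a policy-invariant structural parameter.
rng = np.random.default_rng(0)
r = 0.05

def estimated_mpc(rho, T=5000):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    c = r / (1 + r - rho) * y          # optimal consumption rule
    return np.polyfit(y, c, 1)[0]      # OLS slope of consumption on income

print("MPC with persistent income (rho = 0.95):", round(estimated_mpc(0.95), 2))
print("MPC after income becomes transitory (rho = 0.30):", round(estimated_mpc(0.30), 2))
# Within each regime the consumption function looks stable and econometrics
# will "verify" it; change the regime (here, rho) and the coefficient jumps.
```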
Another deep change: Once you start thinking about macroeconomics as intertemporal economics — the economics that results from people who make decisions about how to consume over time, businesses that make decisions about how to produce this year and next — and once you see that their expectations of what will happen next year, and what policies will be in place next year, are crucial, you have to think of policy in terms of rules, and regimes, not isolated decisions. The Fed often asks economists for advice, "should we raise the funds rate?" Post-Lucas macroeconomists answer that this isn't a well posed question. It's like saying "should we cry wolf?" The right question is, should we start to follow a rule, a regime, should we create an institution, that regularly and reliably raises interest rates in a situation like the current one? Decisions do not live in isolation. They create expectations and reputations. Needless to say, this fundamental reality has not soaked into policy institutions. And that answer (which I have tried at Fed advisory meetings) leads to glazed eyes. John Taylor's rule has been making progress for 30 years trying to bridge that conceptual gap, with some success.

This was, and remains, extraordinarily contentious. 50 years later, Alan Blinder's book, supposedly about policy, is really one long snark about how terrible Lucas and his followers are, and how we should go back to the Keynesian models of the 1960s.

Some of that contention comes back to basic philosophy. The program applies standard microeconomics: derive people's behaviors as the best thing they can do given their circumstances. If people pick the best combination of apples and bananas when they shop, then also describe consumption today vs. tomorrow as the best they can do given interest rates. But a lot of economics doesn't like this "rational actor" assumption. It's not written in stone, but it has been extraordinarily successful. And it imposes a lot of discipline. There are a thousand arbitrary ways to be irrational. Somehow, though, a large set of economists are happy to write down that people pick fruit baskets optimally, but don't apply the same rationality to decisions over time, or in how they think about the future.

But "rational expectations" is really just a humility condition. It says, don't write models in which the predictions of the model are different from the expectations in the model. If you do, if your model is right, people will read the model and catch on, and the model won't work anymore. Don't assume that you, the economist (or Fed chair), are so much less behavioral than the people in your model. Don't base policy on an attempt to fool the little peasants over and over again. It does not say that people are big super rational calculating machines. It just says that they eventually catch on.

Some of the contentiousness is also understandable by career concerns. Many people had said "we should do macro seriously, like general equilibrium." But it isn't easy to do. Bob had to teach himself, and get the rest of us to learn, a range of new mathematical and modeling tools to be able to write down interesting general equilibrium models. A 1970 Keynesian can live just knowing how to solve simple systems of linear equations, and run regressions. To follow Bob and the rational expectations crowd, you had to learn linear time-series statistics, dynamic programming, and general equilibrium math. Bob once described how tough the year was that he spent teaching himself functional analysis and dynamic programming.
The models themselves consisted of a mathematically hard set of constructions. The older generation either needed to completely retool, fade away, or fight the revolution.

Some good summary words: Bob's economics uses "rational expectations," or at least forward-looking and model-consistent expectations. Economics becomes "intertemporal," not "static" (one year at a time). Economics is "stochastic" as well as "dynamic": we can treat uncertainty over time, not just economies in which everyone knows the future perfectly. It applies "general equilibrium" to macroeconomics. And I've just gotten to the beginning of the 1970s.

When I got to Chicago in the 1980s, there was a feeling of "well, you just missed the party." But it wasn't true. The 1980s as well were a golden age. The early rational expectations work was done, and the following real business cycles were the rage in macro. But Bob's dynamic programming, general equilibrium tool kit was on a rampage all over dynamic economics. The money workshop was one creative use of dynamic programming and intertemporal tools after another, ranging from taxes to Thai villages (Townsend). I'll mention two.

Bob's consumption model is at the foundation of modern asset pricing. Bob parachuted in, made the seminal contribution, and then left finance for other pursuits. The issue at the time was how to generalize the capital asset pricing model. Economists understood that some stocks pay higher returns than others, and that they must do so to compensate for risk. They understood that the risk is, in general terms, that the stock falls in some sense in bad times. But how to measure "bad times?" The CAPM uses the market, other models use somewhat nebulous other portfolios. Bob showed us, at least in the purest theory, that stocks must pay higher average returns if they fall when consumption falls. (Breeden also constructed a consumption model in parallel, but without this "endowment economy" aspect of Bob's.) This is the purest, most general theory, and all the others are (useful) specializations. My asset pricing book follows.

The genius here was to turn it all around. Finance had sensibly built up from portfolio theory, like supply and demand: Given returns, what stocks do you buy, and how much do you save vs. consume? Then markets have to clear: find the stock prices, and thus returns, at which people will buy exactly the amount that's for sale and consume what is produced. That's hard. (Technically, finding the vector of prices that clears markets is hard. Yes, N equations in N unknowns, but they're nonlinear and N is big.) Bob instead imagined that consumption is fixed at each moment in time, like a desert island in which so many coconuts fall each day and you can't store them or plant them. Then, you can just read prices from people's preferences. This gives the same answer as if the consumption you assume is fixed had been derived from a complex production economy. You don't have to solve for prices that equate supply and demand. Brilliantly, though prices cause consumption for individual people, consumption causes prices in aggregate. This is part of Bob's contribution to the hard business of actually computing quantitative models in the stochastic dynamic general equilibrium tradition.
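Here is a minimal numerical version of that trick, a two-state endowment economy solved in a few lines; all parameter values are mine, purely illustrative.

```python
import numpy as np

# Two-state Lucas-tree sketch: consumption growth g follows a Markov chain,
# utility is power utility, and the price-consumption ratio v(s) solves the
# linear system v(s) = sum_{s'} Pi[s, s'] * beta * g[s']**(1 - gamma) * (1 + v(s')).
beta, gamma = 0.98, 2.0
g  = np.array([1.03, 0.99])        # consumption growth in "boom" and "bust"
Pi = np.array([[0.9, 0.1],
               [0.3, 0.7]])        # transition probabilities

M = Pi * (beta * g ** (1 - gamma)) # discounted-growth terms, column by column
v = np.linalg.solve(np.eye(2) - M, M @ np.ones(2))
print("price/consumption ratio in boom and bust:", v.round(2))

# No market-clearing fixed point to hunt for: consumption is the endowment,
# so prices are read directly off preferences and the consumption process.
```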
Bob, with Nancy Stokey, also took the new tools to the theory of taxation. (Bob Barro also was a founder of this effort in the late 1980s.) You can see the opportunity: we just learned how to handle dynamic (over time, expectations of tomorrow matter to what you do today), stochastic (there is uncertainty about what will happen tomorrow) economics (people make explicit optimizing decisions) for macro. How about taking that same approach to taxes? The field of dynamic public finance is born.

Bob and Nancy, like Barro, show that it's a good idea for governments to borrow and then repay, so as to spread the pain of taxes evenly over time. But not always. When a big crisis comes, it is useful to execute a "state contingent default." The big tension of Lucas-Stokey (and now, all) dynamic public finance: You don't want any capital taxes, for the incentive effects. If you tax capital, people invest less, and you just get less capital. But once people have invested, a capital tax grabs revenue for the government with no economic distortion. Well, that is, if you can persuade them you'll never do it again. (Do you see expectations, reputations, rules, regimes, wolves in how we think of policy?) Lucas and Stokey say, do it only very rarely, to balance the disincentive of a bad reputation with the need to raise revenue in once-a-century calamities.

Bob went on, of course, to be one of the founders of modern growth theory. I always felt he deserved a second Nobel for this work. He's absolutely right: Once you look at growth, it's hard to think about anything else. The average Indian lives on $2,000 per year. The average American, $60,000. That was $15,000 in 1950. Nothing else comes close. I only work on money and inflation because that's where I think I have answers. For us mortals, good research proceeds where you think you have an answer, not necessarily from working on Big Questions.

Bob brilliantly put together basic facts and theory to arrive at the current breakthrough. Once you work through it, growth does not come from more capital, or even more efficiency. It comes from more and better ideas. I remember being awed by his first work for cutting through the morass and assembling the facts that only look salient in retrospect. A key one: Interest rates in poor countries are not much higher than they are in rich countries. Poor countries have lots of workers, but little capital. Why isn't the return on scarce capital enormous, with interest rates in the hundreds of percent, to attract more capital to poor countries? Well, you sort of know the answer: capital is not productive in those countries. Productivity is low, meaning those countries don't make use of better ideas on how to organize production.

Ideas too are produced by economic activity, but, as Paul Romer crystallized, they are fundamentally different from other goods. If I produce an idea, you can use it without hurting my use of it. Yes, you might drive down the monopoly profits I gain from my intellectual property. But if you use my pizza recipe, that's not like using my car. I can still make pizza, whereas if you use my car I can't go anywhere. Thus, the usual free market presumption that we will produce enough ideas is false. (Don't jump too quickly to advocate government subsidies for ideas. You have to find the right ideas, and governments aren't necessarily good at subsidizing that search.) And the presumption that intellectual property should be preserved forever is also false. Once produced, it is socially optimal for everyone to use it. I won't go on.
It's enough to say that Bob was as central to the creation of idea-based growth theory, which dominates today, as he was to general equilibrium macro, which also dominates today.

Bob is an underrated empiricist. Bob's work on the size distribution of firms (great tweet summary by Luis Garicano) similarly starts from basic facts of the size distribution of firms and the lack of relationship between size and growth rates. It's interesting how we can go on for years with detailed econometric estimates of models that don't get basic facts right. I loved Bob's paper on money demand for the Carnegie Rochester conference series. An immense literature had tried to estimate money demand functions with dynamics, and was pretty confusing. It made a basic mistake, by looking at first differences rather than levels and thereby isolating the noise and drowning out the signal. Bob made a few plots, basically rediscovered cointegration all on his own, and made sense of it all. And don't forget the classic international comparison of inflation-output relations. Countries with volatile inflation have less Phillips curve tradeoff, just as his islands model featuring confusion between relative prices and the price level predicts.

One last note to young scholars. There is a tendency today to value people by the number of papers they produce, and how quickly they rise through the ranks. Read Bob's CV. He wrote about one paper a year, starting quite late in life. But, as Aesop said, they were lions. In his Nobel prize speech, Bob also passed on that he and his Nobel-winning generation at Chicago always felt they were in some backwater, where the high prestige stuff was going on at Harvard and MIT. You never know when it might be a golden age. And the AER rejected his islands paper (as well as Akerlof's lemons). If you know it's good, revise and try again. I will miss his brilliant papers as much as his generous personality.

Update: See Ivan Werning's excellent "Lucas Miracles" for an appreciation by a real theorist.
This paper offers a personal review of the current state of knowledge on monetary policy. In a nutshell, the old principles -- what Friedman knew -- have largely survived, but modern monetary policy departs in some important ways from them. The older wisdom that monetary policy determines inflation in the long run but can have systematic shorter-run effects has survived a major challenge. Most of the new ideas stem from the recognition of the crucial role of expectations. In today's world, this observation lies behind the spectacular trend toward ever greater central bank transparency. It is more than likely that ideas will change again in the wake of the global financial crisis. Early debates challenge the old wisdom that central banks ought to be mainly concerned with price stability. In particular, financial stability has always been part of a central bank's mission, but it has occupied limited space in theoretical and empirical studies.
(This post continues part 1, which just looked at the data. Part 3 on theory is here.) When the Fed raises interest rates, how does inflation respond? Are there "long and variable lags" to inflation and output?

There is a standard story: The Fed raises interest rates; inflation is sticky, so real interest rates (interest rate minus inflation) rise; higher real interest rates lower output and employment; the softer economy pushes inflation down. Each of these is a lagged effect. But despite 40 years of effort, theory struggles to substantiate that story (next post), it's hard to see in the data (last post), and the empirical work is ephemeral -- this post.

The vector autoregression and the related local projection are today the standard empirical tools to address how monetary policy affects the economy, and have been since Chris Sims' great work in the 1970s. (See Larry Christiano's review.) I am losing faith in the method and results. We need to find new ways to learn about the effects of monetary policy. This post expands on some thoughts on this topic in "Expectations and the Neutrality of Interest Rates," several of my papers from the 1990s,* and excellent recent reviews from Valerie Ramey and from Emi Nakamura and Jón Steinsson, who eloquently summarize the hard identification and computation troubles of contemporary empirical work.

Maybe popular wisdom is right, and economics just has to catch up. Perhaps we will. But a popular belief that does not have solid scientific theory and empirical backing, despite a 40-year effort for models and data that will provide the desired answer, must be a bit less trustworthy than one that does have such foundations. Practical people should consider that the Fed may be less powerful than traditionally thought, and that its interest rate policy has different effects than commonly thought. Whether and under what conditions high interest rates lower inflation, and whether they do so with long and variable but nonetheless predictable and exploitable lags, is much less certain than you think.

Here is a replication of one of the most famous monetary VARs, Christiano, Eichenbaum and Evans 1999, from Valerie Ramey's 2016 review:

Fig. 1 Christiano et al. (1999) identification. 1965m1–1995m6 full specification: solid black lines; 1983m1–2007m12 full specification: short dashed blue (dark gray in the print version) lines; 1983m1–2007m12, omits money and reserves: long-dashed red (gray in the print version) lines. Light gray bands are 90% confidence bands. Source: Ramey 2016. Months on x axis.

The black lines plot the original specification. The top left panel plots the path of the federal funds rate after the Fed unexpectedly raises the interest rate. The funds rate goes up, but only for 6 months or so. Industrial production goes down and unemployment goes up, peaking at month 20. The figure plots the level of the CPI, so inflation is the slope of the lower right hand panel. You see inflation goes the "wrong" way, up, for about 6 months, and then gently declines. Interest rates indeed seem to affect the economy with long lags.

This was the broad outline of consensus empirical estimates for many years. It is common to many other studies, and it is consistent with the beliefs of policy makers and analysts. It's pretty much what Friedman (1968) told us to expect. Getting contemporary models to produce something like this is much harder, but that's the next blog post.
What's a VAR?

I try to keep this blog accessible to nonspecialists, so I'll step back momentarily to explain how we produce graphs like these. Economists who know what a VAR is should skip to the next section heading.

How do we measure the effect of monetary policy on other variables? Milton Friedman and Anna Schwartz kicked it off in the Monetary History by pointing to the historical correlation of money growth with inflation and output. They knew, as we do, that correlation is not causation, so they pointed to the fact that money growth preceded inflation and output growth. But as James Tobin pointed out, the cock's crow comes before, but does not cause, the sun to rise. So too, people may go get out some money ahead of time when they see more future business activity on the horizon. Even correlation with a lead is not causation. What to do?

Clive Granger's causality and Chris Sims' VAR, especially "Macroeconomics and Reality," gave today's answer. (And there is a reason that everybody mentioned so far has a Nobel prize.) First, we find a monetary policy "shock," a movement in the interest rate (these days; money, then) that is plausibly not a response to economic events and especially not to expected future economic events. We think of the Fed setting interest rates by a response to economic data plus deviations from that response, such as

interest rate = (#) output + (#) inflation + (#) other variables + disturbance.

We want to isolate the "disturbance," movements in the interest rate not taken in response to economic events. (I use "shock" to mean an unpredictable variable, and "disturbance" to mean a deviation from an equation like the above, but one that can persist for a while. A monetary policy "shock" is an unexpected movement in the disturbance.) The "rule" part here can be, but need not be, the Taylor rule, and can include other variables than output and inflation. It is what the Fed usually does given other variables, and therefore (hopefully) controls for reverse causality from expected future economic events to interest rates.

Now, in any individual episode, output and inflation following a shock will be influenced by subsequent shocks to the economy, monetary and otherwise. But those average out. So, the average value of inflation, output, employment, etc. following a monetary policy shock is a measure of how the shock affects the economy all on its own. That is what has been plotted above.

VARs were one of the first big advances in the modern empirical quest to find "exogenous" variation and (somewhat) credibly find causal relationships. Mostly the huge literature varies on how one finds the "shocks." Traditional VARs use regressions of the above equations and take the residual as the shock, with a big question just how many and which contemporaneous variables one adds to the regression. Romer and Romer pioneered the "narrative approach," reading the Fed minutes to isolate shocks. Some technical details at the bottom, and much more discussion below. The key is finding shocks. One can just regress output and inflation on the shocks to produce the response function, which is a "local projection" not a "VAR," but I'll use "VAR" for both techniques for lack of a better encompassing word.
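Here is a bare-bones sketch of those two steps -- rule residuals as shocks, then local projections -- on made-up data. The variable names and the rule specification are illustrative only, not a replication of any particular paper.

```python
import numpy as np
import statsmodels.api as sm

# Made-up monthly data, purely to show the mechanics.
rng = np.random.default_rng(0)
T = 400
ff, infl, unemp = rng.normal(size=(3, T)).cumsum(axis=1) * 0.1

# Step 1: estimate the "rule" and keep the residual as the policy shock.
X = sm.add_constant(np.column_stack([ff[:-1], infl[1:], unemp[1:]]))
rule = sm.OLS(ff[1:], X).fit()
shock = rule.resid                     # deviations from the usual response

# Step 2: local projections -- regress inflation h months later on today's shock.
for h in range(0, 25, 6):
    y = infl[1 + h:]
    x = sm.add_constant(shock[:len(y)])
    irf = sm.OLS(y, x).fit()
    print(f"h = {h:2d}: response {irf.params[1]:+.3f} (s.e. {irf.bse[1]:.3f})")
# Plotting these coefficients against h gives impulse-response figures like the ones above.
```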
Losing faith

Shocks, what shocks?

What's a "shock" anyway? The concept is that the Fed considers its forecast of inflation, output, and other variables it is trying to control, gauges the usual and appropriate response, and then adds 25 or 50 basis points, at random, just for the heck of it. The question VARs try to answer is the same: What happens to the economy if the Fed raises interest rates unexpectedly, for no particular reason at all?

But the Fed never does this. Ask them. Read the minutes. The Fed does not roll dice. They always raise or lower interest rates for a reason; that reason is always a response to something going on in the economy, and most of the time to how it affects forecasts of inflation and employment. There are no shocks as defined.

I speculated here that we might get around this problem: If we knew the Fed was responding to something that had no correlation with future output, then even though that is an endogenous response, it is a valid movement for estimating the effect of interest rates on output. My example was, what if the Fed "responds" to the weather? Well, though endogenous, it's still valid for estimating the effect on output. The Fed does respond to lots of things, including foreign exchange, financial stability issues, equity markets, terrorist attacks, and so forth. But I can't think of any of these in which the Fed is not thinking of the event's effect on output and inflation, which is why I never took the idea far. Maybe you can.

Shock isolation also depends on complete controls for the Fed's information. If the Fed uses any information about future output and inflation that is not captured in our regression, then information about future output and inflation remains in the "shock" series. The famous "price puzzle" is a good example. For the first few decades of VARs, interest rate shocks seemed to lead to higher inflation. It took a long specification search to get rid of this undesired result. The story was that the Fed saw inflation coming in ways not completely controlled for by the regression. The Fed raised interest rates to try to forestall the inflation, but was a bit hesitant about it, so did not cure the inflation that was coming. We see higher interest rates followed by higher inflation, though the true causal effect of interest rates goes the other way. This problem was "cured" by adding commodity prices to the interest rate rule, on the idea that fast-moving commodity prices would capture the information the Fed was using to forecast inflation. (Interestingly, these days we seem to see core inflation as the best forecaster, and throw out commodity prices!) With those and some careful orthogonalization choices, the "price puzzle" was tamped down to the one-year or so delay you see above. (Neo-Fisherians might object that maybe the price puzzle was trying to tell us something all these years!)

Nakamura and Steinsson write of this problem: "What is being assumed is that controlling for a few lags of a few variables captures all endogenous variation in policy... This seems highly unlikely to be true in practice. The Fed bases its policy decisions on a huge amount of data. Different considerations (in some cases highly idiosyncratic) affect policy at different times. These include stress in the banking system, sharp changes in commodity prices, a recent stock market crash, a financial crisis in emerging markets, terrorist attacks, temporary investment tax credits, and the Y2K computer glitch. The list goes on and on. Each of these considerations may only affect policy in a meaningful way on a small number of dates, and the number of such influences is so large that it is not feasible to include them all in a regression.
But leaving any one of them out will result in a monetary policy "shock" that the researcher views as exogenous but is in fact endogenous."

Nakamura and Steinsson offer 9/11 as another example, summarizing my "high frequency identification" paper with Monika Piazzesi: The Fed lowered interest rates after the terrorist attack, likely reacting to its consequences for output and inflation. But VARs register the event as an exogenous shock.

Romer and Romer suggested that we use Fed Greenbook forecasts of inflation and output as controls, as those should represent the Fed's complete information set. They provide narrative evidence that Fed members trust Greenbook forecasts more than you might suspect. This issue is a general Achilles heel of empirical macro and finance: Does your procedure assume agents see no more information than you have included in the model or estimate? If yes, you have a problem. Similarly, "Granger causality" answers the cock's crow-sunrise problem by saying that if unexpected x leads unexpected y, then x causes y. But it's only real causality if the "expected" includes all information, as the price puzzle counterexample shows.

Just what properties do we need of a shock in order to measure the response to the question, "what if the Fed raised rates for no reason?" This strikes me as a bit of an unsolved question -- or rather, one that everyone thinks is so obvious that we don't really look at it. My suggestion that the shock only need be orthogonal to the variable whose response we're estimating is informal, and I don't know of formal literature that's picked it up. Must "shocks" be unexpected, i.e. not forecastable from anything in the previous time information set? Must they surprise people? I don't think so -- it is neither necessary nor sufficient for a shock to be unforecastable for it to identify the inflation and output responses. Not responding to expected values of the variable whose response you want to measure should be enough. If bond markets found out about a random funds rate rise one day ahead, it would then be an "expected" shock, but clearly just as good for macro. Romer and Romer have been criticized because their shocks are predictable, but this may not matter. The above Nakamura and Steinsson quote says leaving out any information leads to a shock that is not strictly exogenous. But strictly exogenous may not be necessary for estimating, say, the effect of interest rates on inflation. It is enough to rule out reverse causality and third effects. Either I'm missing a well-known econometric literature, as is everyone else writing the VARs I've read who don't cite it, or there is a good theory paper to be written.

Romer and Romer, thinking deeply about how to read "shocks" from the Fed minutes, define shocks thus to circumvent the "there are no shocks" problem:

we look for times when monetary policymakers felt the economy was roughly at potential (or normal) output, but decided that the prevailing rate of inflation was too high. Policymakers then chose to cut money growth and raise interest rates, realizing that there would be (or at least could be) substantial negative consequences for aggregate output and unemployment. These criteria are designed to pick out times when policymakers essentially changed their tastes about the acceptable level of inflation. They weren't just responding to anticipated movements in the real economy and inflation. [My emphasis.]

You can see the issue. This is not an "exogenous" movement in the funds rate.
It is a response to inflation, and to expected inflation, with a clear eye on expected output as well. It really is a nonlinear rule: ignore inflation for a while, until it gets really bad, then finally get serious about it. Or, as they say, it is a change in rule, an increase in the sensitivity of the short-run interest rate response to inflation, taken in response to inflation seeming to get out of control in a longer-run sense. Does this identify the response to an "exogenous" interest rate increase? Not really. But maybe it doesn't matter.

Are we even asking an interesting question?

The whole question, what would happen if the Fed raised interest rates for no reason, is arguably beside the point. At a minimum, we should be clearer about what question we are asking, and whether the policies we analyze are implementations of that question.

The question presumes a stable "rule" (e.g. \(i_t = \rho i_{t-1} + \phi_\pi \pi_t + \phi_x x_t + u_t\)) and asks what happens in response to a deviation \( +u_t \) from the rule. Is that an interesting question? The standard story for 1980-1982 is exactly not such an event. Inflation was not conquered by a big "shock," a big deviation from 1970s practice, while keeping that practice intact. Inflation was conquered (so the story goes) by a change in the rule, by a big increase in \(\phi_\pi\). That change raised interest rates, but arguably without any deviation from the new rule \(u_t\) at all. Thinking in terms of the Phillips curve \( \pi_t = E_t \pi_{t+1} + \kappa x_t\), it was not a big negative \(x_t\) that brought down inflation, but the credibility of the new rule that brought down \(E_t \pi_{t+1}\). If the art of reducing inflation is to convince people that a new regime has arrived, then the response to any monetary policy "shock" orthogonal to a stable "rule" completely misses that policy.

Romer and Romer are almost talking about a rule-change event. For 2022, they might be looking at the Fed's abandonment of flexible average inflation targeting and its return to a Taylor rule. However, they don't recognize the importance of the distinction, treating changes in rule as equivalent to a residual. Changing the rule changes expectations in quite different ways from a residual of a stable rule. Changes with a bigger commitment should have bigger effects, and one should standardize somehow by the size and permanence of the rule change, not necessarily the size of the interest rate rise. And, having asked "what if the Fed changes the rule to be more serious about inflation," we really cannot use the analysis to estimate what happens if the Fed shocks interest rates and does not change the rule. It takes some mighty invariance result from an economic theory that a change in rule has the same effect as a shock to a given rule. There is no right and wrong, really. We just need to be more careful about what question the empirical procedure asks, if we want to ask that question, and if our policy analysis actually asks the same question.

Estimating rules: Clarida, Galí, and Gertler

Clarida, Galí, and Gertler (2000) is a justly famous paper, and in this context it is notable for doing something totally different to evaluate monetary policy. They estimate rules, fancy versions of \(i_t = \rho i_{t-1} +\phi_\pi \pi_t + \phi_x x_t + u_t\), and they estimate how the \(\phi\) parameters change over time. They attribute the end of 1970s inflation to a change in the rule, a rise in \(\phi_\pi\) from the 1970s to the 1980s.
In their model, a higher \( \phi_\pi\) results in less volatile inflation. They do not estimate any response functions. The rest of us were watching the wrong thing all along. Responses to shocks weren't the interesting quantity. Changes in the rule were the interesting quantity. Yes, I criticized the paper, but for issues that are irrelevant here. (In the new Keynesian model, the parameter that reduces inflation isn't the one they estimate.) The important point here is that they are doing something completely different, and offer us a roadmap for how else we might evaluate monetary policy if not by impulse-response functions to monetary policy shocks.

Fiscal theory

The interesting question for fiscal theory is, "What is the effect of an interest rate rise not accompanied by a change in fiscal policy?" What can the Fed do by itself? By contrast, standard models (both new and old Keynesian) include concurrent fiscal policy changes when interest rates rise. Governments tighten in present value terms, at least to pay higher interest costs on the debt and the windfall to bondholders that flows from unexpected disinflation. Experience and estimates surely include fiscal changes along with monetary tightening. Both fiscal and monetary authorities react to inflation with policy actions and reforms. Growth-oriented microeconomic reforms with fiscal consequences often follow as well -- rampant inflation may have had something to do with Carter-era trucking, airline, and telecommunications reform.

Yet no current estimate tries to look for a monetary shock orthogonal to fiscal policy change. The estimates we have are at best the effects of monetary policy together with whatever induced or coincident fiscal and microeconomic policy tends to happen at the same time as central banks get serious about fighting inflation. Identifying the component of a monetary policy shock orthogonal to fiscal policy, and measuring its effects, is a first-order question for the fiscal theory of monetary policy. That's why I wrote this blog post. I set out to do it, and then started to confront how VARs are already falling apart in our hands.

Just what "no change in fiscal policy" means is an important question that varies by application. (Lots more in "fiscal roots" here, fiscal theory of monetary policy here and in FTPL.) For simple calculations, I just ask what happens if interest rates change with no change in primary surplus. One might also define "no change" as no change in tax rates, automatic stabilizers, or even habitual discretionary stimulus and bailout -- no disturbance \(u_t\) in a fiscal rule \(s_t = a + \theta_\pi \pi_t + \theta_x x_t + ... + u_t\). There is no right and wrong here either; there is just making sure you ask an interesting question.

Long and variable lags, and persistent interest rate movements

The first plot shows a mighty long lag between the monetary policy shock and its effect on inflation and output. That does not mean that the economy has long and variable lags. This plot is actually not representative, because in the black lines the interest rate itself quickly reverts to zero. It is common to find a more protracted interest rate response to the shock, as shown in the red and blue lines. That mirrors common sense: When the Fed starts tightening, it sets off a year or so of stair-step further increases, and then a plateau, before a similar stair-step reversion.
That raises the question, does the long-delayed response of output and inflation represent a delayed response to the initial monetary policy shock, or does it represent a nearly instantaneous response to the higher subsequent interest rates that the shock sets off? Another way of putting the question, is the response of inflation and output invariant to changes in the response of the funds rate itself? Do persistent and transitory funds rate changes have the same responses? If you think of the inflation and output responses as economic responses to the initial shock only, then it does not matter if interest rates revert immediately to zero, or go on a 10 year binge following the initial shock. That seems like a pretty strong assumption. If you think that a more persistent interest rate response would lead to a larger or more persistent output and inflation response, then you think some of what we see in the VARs is a quick structural response to the later higher interest rates, when they come. Back in 1988, I posed this question in "what do the VARs mean?" and showed you can read it either way. The persistent output and inflation response can represent either long economic lags to the initial shock, or much less laggy responses to interest rates when they come. I showed how to deconvolute the response function to the structural effect of interest rates on inflation and output and how persistently interest rates rise. The inflation and output responses might be the same with shorter funds rate responses, or they might be much different. Obviously (though often forgotten), whether the inflation and output responses are invariant to changes in the funds rate response needs a model. If in the economic model only unexpected interest rate movements affect output and inflation, though with lags, then the responses are as conventionally read structural responses and invariant to the interest rate path. There is no such economic model. Lucas (1972) says only unexpected money affects output, but with no lags, and expected money affects inflation. New Keynesian models have very different responses to permanent vs. transitory interest rate shocks. Interestingly, Romer and Romer do not see it this way, and regard their responses as structural long and variable lags, invariant to the interest rate response. They opine that given their reading of a positive shock in 2022, a long and variable lag to inflation reduction is baked in, no matter what the Fed does next. They argue that the Fed should stop raising interest rates. (In fairness, it doesn't look like they thought about the issue much, so this is an implicit rather than explicit assumption.) The alternative view is that effects of a shock on inflation are really effects of the subsequent rate rises on inflation, that the impulse response function to inflation is not invariant to the funds rate response, so stopping the standard tightening cycle would undo the inflation response. Argue either way, but at least recognize the important assumption behind the conclusions. Was the success of inflation reduction in the early 1980s just a long delayed response to the first few shocks? Or was the early 1980s the result of persistent large real interest rates following the initial shock? (Or, something else entirely, a coordinated fiscal-monetary reform... But I'm staying away from that and just discussing conventional narratives, not necessarily the right answer.) 
If the latter, which is the conventional narrative, then you think it does matter if the funds rate shock is followed by more funds rate rises (or positive deviations from a rule), that the output and inflation response functions do not directly measure long lags from the initial shock. De-convoluting the structural funds rate to inflation response and the persistent funds rate response, you would estimate much shorter structural lags. Nakamura and Steinsson are of this view: While the Volcker episode is consistent with a large amount of monetary nonneutrality, it seems less consistent with the commonly held view that monetary policy affects output with "long and variable lags." To the contrary, what makes the Volcker episode potentially compelling is that output fell and rose largely in sync with the actions [interest rates, not shocks] of the Fed. And that's a good thing too. We've done a lot of dynamic economics since Friedman's 1968 address. There is really nothing in dynamic economic theory that produces a structural long-delayed response to shocks, without the continued pressure of high interest rates. (A correspondent objects to "largely in sync" pointing out several clear months long lags between policy actions and results in 1980. It's here for the methodological point, not the historical one.) However, if the output and inflation responses are not invariant to the interest rate response, then the VAR directly measures an incredibly narrow experiment: What happens in response to a surprise interest rate rise, followed by the plotted path of interest rates? And that plotted path is usually pretty temporary, as in the above graph. What would happen if the Fed raised rates and kept them up, a la 1980? The VAR is silent on that question. You need to calibrate some model to the responses we have to infer that answer. VARs and shock responses are often misread as generic theory-free estimates of "the effects of monetary policy." They are not. At best, they tell you the effect of one specific experiment: A random increase in funds rate, on top of a stable rule, followed by the usual following path of funds rate. Any other implication requires a model, explicit or implicit. More specifically, without that clearly false invariance assumption, VARs cannot directly answer a host of important questions. Two on my mind: 1) What happens if the Fed raises interest rates permanently? Does inflation eventually rise? Does it rise in the short run? This is the "Fisherian" and "neo-Fisherian" questions, and the answer "yes" pops unexpectedly out of the standard new-Keynesian model. 2) Is the short-run negative response of inflation to interest rates stronger for more persistent rate rises? The long-term debt fiscal theory mechanism for a short-term inflation decline is tied to the persistence of the shock and the maturity structure of the debt. The responses to short-lived interest rate movements (top left panel) are silent on these questions. Directly is an important qualifier. It is not impossible to answer these questions, but you have to work harder to identify persistent interest rate shocks. For example, Martín Uribe identifies permanent vs. transitory interest rate shocks, and finds a positive response of inflation to permanent interest rate rises. How? You can't just pick out the interest rate rises that turned out to be permanent. 
You have to find shocks, or components of the shock, that are ex-ante predictably going to be permanent, based on other forecasting variables and the correlation of the shock with other shocks. For example, a short-term rate shock that also moves long-term rates might be more permanent than one that does not. (That requires the expectations hypothesis, which doesn't work, and long-term interest rates move too much anyway in response to transitory funds rate shocks. So this is not directly a suggestion, just an example of the kind of thing one must do. Uribe's model is more complex than I can summarize in a blog.) Given how small and ephemeral the shocks are already, subdividing them into those that are expected to have permanent vs. transitory effects on the federal funds rate is obviously a challenge. But it's not impossible.

Monetary policy shocks account for small fractions of inflation, output, and funds rate variation. Friedman thought that most recessions and inflations were due to monetary mistakes. The VARs pretty uniformly deny that result. The effects of monetary policy shocks on output and inflation add up to less than 10 percent of the variation of output and inflation. In part the shocks are small, and in part the responses to the shocks are small. Most recessions come from other shocks, not monetary mistakes.

Worse, both in data and in models, most inflation variation comes from inflation shocks, most output variation comes from output shocks, and so forth. The cross-effects of one variable on another are small. And "inflation shock" (or "marginal cost shock"), "output shock," and so forth are just labels for our ignorance -- error terms in regressions, unforecasted movements -- not independently measured quantities. (This is an old point, made for example in my 1994 paper with the great title "Shocks." Technically, the variance of output is the sum of the squares of the impulse-response functions -- the plots -- times the variance of the shocks. Thus small shocks and small responses mean not much variance explained.)

This is a deep point. The exquisite attention paid to the effects of monetary policy in new-Keynesian models, while interesting to the Fed, is then largely beside the point if your question is what causes recessions. Comprehensive models work hard to match all of the responses, not just those to monetary policy shocks. But it's not clear that the nominal rigidities that are important for the effects of monetary policy are deeply important for other (supply) shocks, and vice versa. This is not a criticism. Economics always works better if we can use small models that focus on one thing -- growth, recessions, the distorting effect of taxes, the effect of monetary policy -- without having to have a model of everything in which all effects interact. But be clear: we no longer have a model of everything. "Explaining recessions" and "understanding the effects of monetary policy" are somewhat separate questions.

Monetary policy shocks also account for small fractions of the movement in the federal funds rate itself. Most of the funds rate movement is in the rule, the reaction-to-the-economy term. Like much empirical economics, the quest for causal identification leads us to look at tiny causes with tiny effects, which do little to explain the variation in the variable of interest (inflation). Well, cause is cause, and the needle is the sharpest item in the haystack. But one worries about the robustness of such tiny effects, and to what extent they summarize historical experience.
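To put rough numbers on the variance point, here is a back-of-the-envelope calculation of my own; the response size, shock volatility, and inflation volatility below are hypothetical, chosen only to show how small responses to small shocks translate into a small variance share.

```python
# Share of inflation variance due to policy shocks = sum of squared
# impulse-response coefficients times the shock variance, divided by the
# total variance of inflation. All numbers here are hypothetical.
import numpy as np

theta = -0.2 * 0.9 ** np.arange(48)   # hypothetical inflation response to a unit policy shock
sigma_shock = 1.0                     # hypothetical shock standard deviation
sigma_inflation = 2.0                 # hypothetical total standard deviation of inflation

explained = np.sum(theta ** 2) * sigma_shock ** 2
print(f"variance share: {explained / sigma_inflation ** 2:.1%}")   # a few percent
```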
To be concrete, here is a typical shock regression, 1960:1-2023:6 monthly data, standard errors in parentheses:

ff(t) = a + b ff(t-1) + c [ff(t-1) - ff(t-2)] + d CPI(t) + e unemployment(t) + monetary policy shock,

where "CPI" is the percent change in the CPI (CPIAUCSL) from a year earlier.

              ff(t-1)    ff(t-1)-ff(t-2)    CPI        Unemp      R²
coefficient   0.97       0.39               0.032      -0.017     0.985
(s.e.)        (0.009)    (0.07)             (0.013)    (0.009)

The funds rate is persistent -- the lag term (0.97) is large. Recent changes matter too: once the Fed starts a tightening cycle, it's likely to keep raising rates. And the Fed responds to CPI and unemployment.

The plot shows the actual federal funds rate (blue), the model or predicted federal funds rate (red), the shock which is the difference between the two (orange), and the Romer and Romer dates (vertical lines). You can't see the difference between actual and predicted funds rate, which is the point: they are very similar, so the shocks are small. (They are closer horizontally than vertically, so the vertical difference plotted as the shock is still visible.) The shocks are much smaller than the funds rate, and smaller than the rise and fall in the funds rate in a typical tightening or loosening cycle. The shocks are bunched, with by far the biggest ones in the early 1980s. The shocks have been tiny since the 1980s. (Romer and Romer don't find any shocks!)

Now, our estimates of the effect of monetary policy look at the average values of inflation, output, and employment in the 4-5 years after a shock. Really, you say, looking at the graph? That's going to be dominated by the experience of the early 1980s. And with so many positive and negative shocks close together, the average value 4 years later is going to be driven by subtle timing of when the positive or negative shocks line up with later events.

Put another way, here is a plot of inflation 30 months after a shock regressed on the shock: shock on the x axis, subsequent inflation on the y axis. The slope of the line is our estimate of the effect of the shock on inflation 30 months out (source, with details). Hmm.

One more graph (I'm having fun here): this is a plot of inflation for the 4 years after each shock, times that shock. The right-hand side is the same graph with an expanded y scale. The average of these histories is our impulse response function. (The big lines are the episodes that multiply the big shocks of the early 1980s. They mostly converge because, whether multiplied by positive or negative shocks, inflation went down in the 1980s.) Impulse response functions are just quantitative summaries of the lessons of history. You may be underwhelmed that history is sending a clear story. Again, welcome to causal economics -- tiny average responses to tiny but identified movements are what we estimate, not broad lessons of history. We do not estimate "what is the effect of the sustained high real interest rates of the early 1980s," for example, or "what accounts for the sharp decline of inflation in the early 1980s?" Perhaps we should, though confronting the endogeneity of the interest rate responses some other way. That's my main point today.

Estimates disappear after 1982

Ramey's first variation in the first plot is to use data from 1983 to 2007. Her second variation is to also omit the monetary variables. Christiano, Eichenbaum, and Evans were still thinking in terms of money supply control, but our Fed does not control the money supply. The evidence that higher interest rates lower inflation disappears after 1983, with or without money. This too is a common finding.
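For readers who want to reproduce this kind of rule-plus-shock regression, here is a minimal sketch, assuming the monthly funds rate, CPI level, and unemployment rate are already loaded in a pandas DataFrame df with a DatetimeIndex and (hypothetical) column names 'ff', 'cpi', and 'unemp'; the last two lines echo the post-1982 sample split discussed above.

```python
# Minimal sketch of the shock regression above; column names are assumptions.
import pandas as pd
import statsmodels.api as sm

df = df.copy()
df['cpi_yoy'] = 100 * (df['cpi'] / df['cpi'].shift(12) - 1)   # year-over-year CPI inflation
df['ff_lag'] = df['ff'].shift(1)
df['dff_lag'] = df['ff'].shift(1) - df['ff'].shift(2)

X = sm.add_constant(df[['ff_lag', 'dff_lag', 'cpi_yoy', 'unemp']]).dropna()
y = df.loc[X.index, 'ff']
fit = sm.OLS(y, X).fit()
shocks = fit.resid                      # "monetary policy shocks": deviations from the rule
print(fit.params, fit.rsquared)

# Re-estimate on a post-1982 sample to see how the coefficients change.
late = X.index[X.index >= '1983-01-01']
fit_late = sm.OLS(y.loc[late], X.loc[late]).fit()
```

The residuals here play the role of the orange shock series in the plot described above.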
The disappearance might simply be because there aren't any monetary policy shocks in the later sample. Still, we're driving a car with a yellowed AAA road map dated 1982 on it. Monetary policy shocks still seem to affect output and employment, just not inflation. That poses a deeper problem. If there just aren't any monetary policy shocks, we would just get big standard errors on everything. That only the inflation response disappears points to the vanishing Phillips curve, which will be the weak point in the theory to come. It is the Phillips curve by which lower output and employment push down inflation. But without the Phillips curve, the whole standard story for how interest rates affect inflation goes away.

Computing long-run responses

The lags in the above plot are already pretty long horizons, with interesting economics still going on at 48 months. As we get interested in long-run neutrality, identification via long-run sign restrictions (monetary policy should not permanently affect output), and the effect of persistent interest rate shocks, we are interested in even longer-run responses. The "long run risks" literature in asset pricing is similarly crucially interested in long-run properties. Intuitively, we should know this will be troublesome. There aren't all that many nonoverlapping 4-year periods after interest rate shocks with which to measure effects, let alone 10-year periods.

VARs estimate long-run responses with a parametric structure. Organize the data (output, inflation, interest rate, etc.) into a vector \(x_t = [y_t \; \pi_t \; i_t \; ...]'\); then the VAR can be written \(x_{t+1} = Ax_t + u_t\). We start from zero, move \(x_1 = u_1\) in an interesting way, and then the response function just simulates forward, with \(x_j = A^j x_1\). But here an oft-forgotten lesson of 1980s econometrics pops up: it is dangerous to estimate long-run dynamics by fitting a short-run model and then working out its long-run implications. Raising matrices to the 48th power \(A^{48}\) can do weird things; the 120th power (10 years), weirder things. OLS and maximum likelihood prize one-step-ahead \(R^2\), and will happily accept small one-step-ahead misspecifications that add up to big misspecification 10 years out. (I learned this lesson in "How big is the random walk in GNP?") Long-run implications are driven by the largest eigenvalue of the \(A\) transition matrix and its associated eigenvector: \(A^j = Q \Lambda^j Q^{-1}\). This is a benefit and a danger. Specify and estimate the dynamics of the combination of variables associated with the largest eigenvalue correctly, and lots of details can be wrong. But standard estimates aren't trying hard to get these right. (The sketch below illustrates how sensitive long horizons are to small changes in the estimated dynamics.)

The "local projection" alternative directly estimates long-run responses: run regressions of inflation in 10 years on the shock today. You can see the tradeoff: there aren't many non-overlapping 10-year intervals, so this will be imprecisely estimated. The VAR makes a strong parametric assumption about long-run dynamics. When it's right, you get better estimates. When it's wrong, you get misspecification.

My experience running lots of VARs is that monthly VARs raised to large powers often give unreliable responses. Run at least a one-year VAR before you start looking at long-run responses. Cointegrating vectors are the most reliable variables to include; they are typically the state variables that most reliably carry long-run responses. But pay attention to getting them right. Imposing integrating and cointegrating structure by just looking at units is a good idea.
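Here is a small numerical illustration of that fragility, with made-up numbers of my own: two monthly persistence parameters that are nearly indistinguishable one step ahead imply very different responses four and ten years out, and the same arithmetic governs a VAR matrix through its largest eigenvalue.

```python
# Long-horizon responses are powers of the short-run dynamics, so small
# one-step differences compound. All numbers are hypothetical.
import numpy as np

for rho in (0.97, 0.99):                      # nearly identical monthly persistence
    print(f"rho={rho}: 48-month response {rho**48:.2f}, 120-month response {rho**120:.2f}")

A = np.array([[0.95, 0.10],                   # a made-up two-variable transition matrix
              [0.02, 0.90]])
lam = np.max(np.abs(np.linalg.eigvals(A)))    # the largest eigenvalue governs the long run
print(f"largest eigenvalue {lam:.3f}")
print(np.linalg.matrix_power(A, 120))         # A^120: what a 10-year response uses
```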
The regression of long-run returns on dividend yields is a good example of getting this structure right. The dividend yield is a cointegrating vector, and is the slow-moving state variable. A one-period VAR \[\left[ \begin{array}{c} r_{t+1} \\ dp_{t+1} \end{array} \right] = \left[ \begin{array}{cc} 0 & b_r \\ 0 & \rho \end{array}\right] \left[ \begin{array}{c} r_{t} \\ dp_{t} \end{array}\right]+ \varepsilon_{t+1}\] implies a long-horizon regression \(r_{t+j} = b_r \rho^j dp_{t} +\) error. Direct regressions ("local projections") \(r_{t+j} = b_{r,j} dp_t + \) error give about the same answers, though the downward bias in \(\rho\) estimates is a bit of an issue, but with much larger standard errors. The constraint \(b_{r,j} = b_r \rho^j\) isn't bad. But it can easily go wrong. If you don't impose that dividends and prices are cointegrated, or impose a cointegrating vector other than (1, -1), if you allow a small sample to estimate \(\rho>1\), or if you don't put in dividend yields at all and just use a lot of short-run forecasters, it can all go badly.

Forecasting bond returns was for me a good counterexample. A VAR forecasting one-year bond returns from today's yields gives very different results from taking a monthly VAR, even with several lags, and using \(A^{12}\) to infer the one-year return forecast. Small pricing errors or microstructure noise dominate the monthly data, which produces junk when raised to the twelfth power. (Climate regressions are having fun with the same issue. Small estimated effects of temperature on growth, raised to the 100th power, can produce nicely calamitous results. But use basic theory to think about units.) Nakamura and Steinsson (appendix) show how sensitive some standard estimates of impulse response functions are to these questions.

Weak evidence

For the current policy question, I hope you get a sense of how weak the evidence is for the "standard view" that higher interest rates reliably lower inflation, though with a long and variable lag, and that the Fed has a good deal of control over inflation. Yes, many estimates look the same, but there is a pretty strong prior going into that. Most people don't publish papers that don't conform to something like the standard view. Look how long it took from Sims (1980) to Christiano, Eichenbaum, and Evans (1999) to produce a response function that does conform to the standard view, what Friedman told us to expect in 1968. That took a lot of playing with different orthogonalizations, variable inclusions, and other specification assumptions. This is not criticism: when you have a strong prior, it makes sense to see if the data can be squeezed into the prior. Once authors like Ramey and Nakamura and Steinsson started to look with a critical eye, it became clearer just how weak the evidence is.

Standard errors are also wide, but the variability in results due to changes in sample and specification is much larger than formal standard errors. That's why I don't stress that statistical aspect. You play with 100 models, try one variable after another to tamp down the price puzzle, and then compute standard errors as if the 100th model were written in stone. This post is already too long, but showing how results change with different specifications would have been a good addition. For example, here are a few more Ramey plots of inflation responses, replicating various previous estimates. Take your pick.

What should we do instead? Well, how else should we measure the effects of monetary policy?
One natural approach turns to the analysis of historical episodes and changes in regime, with specific models in mind. Romer and Romer pass on thoughts on this approach:

...some macroeconomic behavior may be fundamentally episodic in nature. Financial crises, recessions, disinflations, are all events that seem to play out in an identifiable pattern. There may be long periods where things are basically fine, that are then interrupted by short periods when they are not. If this is true, the best way to understand them may be to focus on episodes—not a cross-section proxy or a tiny sub-period. In addition, it is valuable to know when the episodes were and what happened during them. And, the identification and understanding of episodes may require using sources other than conventional data.

A lot of my and others' fiscal theory writing has taken a similar view. The long quiet zero bound is a test of theories: old-Keynesian models predict a deflation spiral, new-Keynesian models predict sunspot volatility, fiscal theory is consistent with stable, quiet inflation. The emergence of inflation in 2021 and its easing despite interest rates below inflation likewise validates fiscal vs. standard theories. The fiscal implications of abandoning the gold standard in 1933, plus Roosevelt's "emergency" budget, make sense of that episode. The new-Keynesian reaction parameter \(\phi_\pi\) in \(i_t = \phi_\pi \pi_t\), which leads to unstable dynamics for \(\phi_\pi>1\), is not identified by time series data. So use "other sources," like plain statements on the Fed website about how they react to inflation. I already cited Clarida, Galí, and Gertler, for measuring the rule, not the response to the shock, and explaining the implications of that rule for their model.

Nakamura and Steinsson likewise summarize Mussa's (1986) classic study of what happens when countries switch from fixed to floating exchange rates:

The switch from a fixed to a flexible exchange rate is a purely monetary action. In a world where monetary policy has no real effects, such a policy change would not affect real variables like the real exchange rate. Figure 3 demonstrates dramatically that the world we live in is not such a world.

Also, analysis of particular historical episodes is enlightening. But each episode has other things going on and so invites alternative explanations. 90 years later, we're still fighting about what caused the Great Depression. 1980 is the poster child for monetary disinflation, yet as Nakamura and Steinsson write,

Many economists find the narrative account above and the accompanying evidence about output to be compelling evidence of large monetary nonneutrality. However, there are other possible explanations for these movements in output. There were oil shocks both in September 1979 and in February 1981.... Credit controls were instituted between March and July of 1980. Anticipation effects associated with the phased-in tax cuts of the Reagan administration may also have played a role in the 1981–1982 recession ....

Studying changes in regime, such as fixed to floating exchange rates or the zero bound era, helps somewhat relative to studying a particular episode, in that it averages over other shocks. But the attraction of VARs will remain. None of these approaches produces what VARs seemed to produce: a theory-free quantitative estimate of the effects of monetary policy. Many tell you that prices are sticky, but not how prices are sticky.
Are they old-Keynesian backward-looking sticky or new-Keynesian rational-expectations sticky? What is the dynamic response of relative inflation to a change in a pegged exchange rate? What is the dynamic response of real relative prices to productivity shocks? Observations such as Mussa's graph can help to calibrate models, but do not answer those questions directly. My observations about the zero bound or the recent inflation similarly seem (to me) decisive about one class of model vs. another, at least subject to Occam's razor about epicycles, but likewise do not provide a theory-free impulse response function.

Nakamura and Steinsson write at length about other approaches, model-based moment matching and the use of micro data in particular. This post is going on too long; read their paper. Of course, as we have seen, VARs only seem to offer a model-free quantitative measurement of "the effects of monetary policy," but it's hard to give up on the appearance of such an answer. VARs and impulse responses also remain very useful ways of summarizing the correlations and cross-correlations of the data, even without a cause-and-effect interpretation.

In the end, many ideas are successful in economics when they tell researchers what to do, when they offer a relatively clear recipe for writing papers. "Look at episodes and think hard" is not such a recipe. "Run a VAR" is. So, as you think about how we can evaluate monetary policy, think about a better recipe as well as a good answer. (Stay tuned. This post is likely to be updated a few times!)

VAR technical appendix

Technically, running VARs is very easy, at least until you start trying to smooth out responses with Bayesian and other techniques. Line up the data in a vector, i.e. \(x_t = [i_t \; \pi_t\; y_t]'\). Then run a regression of each variable on lags of the others, \[x_t = Ax_{t-1} + u_t.\] If you want more than one lag of the right-hand variables, just make a bigger \(x\) vector, \(x_t = [i_t\; \pi_t \; y_t \; i_{t-1}\; \pi_{t-1} \;y_{t-1}]'.\)

The residuals of such regressions \(u_t\) will be correlated, so you have to decide whether, say, the correlation between interest rate and inflation shocks means the Fed responds within the period to inflation, or inflation responds within the period to interest rates, or some combination of the two. That's the "identification" assumption issue. You can write it as a matrix \(C\) so that \(u_t = C \varepsilon_t\) and cov\((\varepsilon_t \varepsilon_t')=I\), or you can include some contemporaneous values on the right-hand sides.

Now, with \(x_t = Ax_{t-1} + C\varepsilon_t\), you start with \(x_0=0\), choose one series to shock, e.g. \(\varepsilon_{i,1}=1\), leaving the others alone, and just simulate forward. The resulting path of the other variables is the above plot, the "impulse response function." Alternatively, you can run a regression \(x_t = \sum_{j=0}^\infty \theta_j \varepsilon_{t-j}\) and the \(\theta_j\) are (different, in sample) estimates of the same thing. That's "local projection." Since the right-hand variables are all orthogonal, you can run single or multiple regressions. (See here for equations.) Either way, you have found the moving average representation, \(x_t = \theta(L)\varepsilon_t\), in the first case with \(\theta(L)=(I-AL)^{-1}C\), in the second case directly. Since the right-hand variables are all orthogonal, the variance of the series is the sum of its loadings on all of the shocks, \(cov(x_t) = \sum_{j=0}^\infty \theta_j \theta_j'\).
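For concreteness, here is a minimal sketch of those mechanics in code, assuming a T x 3 numpy array named data (a hypothetical name) holding the interest rate, inflation, and output; the recursive ordering implemented by the Cholesky factor is just one identification choice among many.

```python
# Minimal VAR(1) with a recursive (Cholesky) identification and an impulse
# response computed by simulating forward, as described in the appendix.
import numpy as np

x = data - data.mean(axis=0)                  # x_t = [i_t, pi_t, y_t], demeaned
X, Y = x[:-1], x[1:]
A = np.linalg.lstsq(X, Y, rcond=None)[0].T    # x_{t+1} = A x_t + u_t
U = Y - X @ A.T                               # reduced-form residuals u_t
C = np.linalg.cholesky(np.cov(U.T))           # u_t = C eps_t, cov(eps_t) = I

horizon = 48
irf = np.zeros((horizon, 3))
irf[0] = C[:, 0]                              # one-standard-deviation move in the first shock
for j in range(1, horizon):
    irf[j] = A @ irf[j - 1]                   # theta_j = A^j C, first column

# Summing squared theta_j across shocks and horizons gives the forecast error
# variance decomposition discussed next.
```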
This "forecast error variance decomposition" is behind my statement that small amounts of inflation variance are due to monetary policy shocks rather than shocks to other variables, and mostly inflation shocks. Update:Luis Garicano has a great tweet thread explaining the ideas with a medical analogy. Kamil Kovar has a nice follow up blog post, with emphasis on Europe. He makes a good point that I should have thought of: A monetary policy "shock" is a deviation from a "rule." So, the Fed's and ECB's failure to respond to inflation as they "usually" do in 2021-2022 counts exactly the same as a 3-5% deliberate lowering of the interest rate. Lowering interest rates for no reason, and leaving interest rates alone when the regression rule says raise rates are the same in this methodology. That "loosening" of policy was quickly followed by inflation easing, so an updated VAR should exhibit a strong "price puzzle" -- a negative shock is followed by less, not more inflation. Of course historians and practical people might object that failure to act as usual has exactly the same effects as acting. * Some Papers: Comment on Romer and Romer What ends recessions? Some "what's a shock?"Comment on Romer and Romer A new measure of monetary policy. The greenbook forecasts, and beginning thoughts that strict exogeneity is not necessary. Shocks monetary shocks explain small fractions of output variance.Comments on Hamilton, more thoughts on what a shock is.What do the VARs mean? cited above, is the response to the shock or to persistent interest rates?The Fed and Interest Rates, with Monika Piazzesi. Daily data and interest rates to identify shocks. Decomposing the yield curve with Monika Piazzesi. Starts with a great example of how small changes in specification lead to big differences in long run forecasts. Time seriesA critique of the application of unit root tests pretesting for unit roots and cointegration is a bad ideaHow big is the random walk in GNP? lessons in not using short run dynamics to infer long run properties. Permanent and transitory components of GNP and stock prices a favorite of cointegration really helps on long run propertiesTime series for macroeconomics and finance notes that never quite became a book. Explains VARs and responses.
Today, I'll add an entry to my occasional reviews of interesting academic papers. The paper: "Price Level and Inflation Dynamics in Heterogeneous Agent Economies," by Greg Kaplan, Georgios Nikolakoudis and Gianluca Violante. One of the many reasons I am excited about this paper is that it unites the fiscal theory of the price level with heterogeneous agent economics. And it shows how heterogeneity matters. There has been a lot of work on "heterogeneous agent new-Keynesian" models (HANK). This paper inaugurates heterogeneous agent fiscal theory models. Let's call them HAFT.

The paper has a beautifully stripped-down model. Prices are flexible, and the price level is set by fiscal theory. People face uninsurable income shocks, however, and a borrowing limit. So they save an extra amount in order to self-insure against bad times. Government bonds are the only asset in the model, so this extra saving pushes down the interest rate, the discount rate, and the government's debt service cost. The model has a time-zero shock and then no aggregate uncertainty.

This is exactly the right place to start. In the end, of course, we want fiscal theory, heterogeneous agents, and sticky prices to add inflation dynamics. And on top of that, whatever DSGE smorgasbord is important to the issues at hand: production side, international trade, multiple real assets, financial frictions, and more. But the genius of a great paper is to start with the minimal model.

Part II: effects of fiscal shocks

I am most excited by part II, the effects of fiscal shocks. This goes straight to important policy questions.

Note: This figure plots impulse responses to a targeted and untargeted helicopter drop, aggregated at the quarterly frequency. The helicopter drop is a one-time issuance of 16% of total government nominal debt outstanding at t = 0. Only households in the bottom 60% of the wealth distribution receive the issuance in the targeted experiment (dashed red line). The orange line plots dynamics in the representative agent (RA) model. The dashed black line plots the initial steady state. Source: Kaplan et al., Figure 7.

At time 0, the government drops $5 trillion of extra debt on people, with no plans to pay it back. The interest rate does not change. What happens? In the representative agent economy, the price level jumps, just enough to inflate away outstanding debt by $5 trillion. (In this simulation, inflation subsequent to the price level jump is just set by the central bank, via an interest rate target. So the rising price level line of the representative agent (orange) benchmark is not that interesting. It's not a conventional impulse response showing the change after the shock; it's the actual path after the shock. The difference between the colored heterogeneous agent lines and the orange representative agent line is the important part.)

Punchline: In the heterogeneous agent economies, the price level jumps a good deal more. And if transfers are targeted to the bottom of the wealth distribution, the price level jumps more still. It matters who gets the money.

This is the first step on an important policy question: why was the 2020-2021 stimulus so much more inflationary than, say, 2008? I have a lot of stories ("fiscal histories," FTPL), one of which is a vague sense that printing money and sending people checks has more effect than borrowing in treasury markets and spending the results. This graph makes that sense precise. Sending people checks, especially people who are on the edge, does generate more inflation.
In the end, whether government debt is inflationary or not comes down to whether people treat the asset as a good savings vehicle, and hang on to it, or try to spend it, thereby driving up prices. Sending checks to people likely to spend them gives more inflation.

As you can see, the model also introduces some dynamics, where in this simple setup (flexible prices) the RA model just gives a price level jump. To understand those dynamics, and more of the model's intuition, look at the response of real debt and the real interest rate. The greater inflation means that the same increase in nominal debt is a lesser increase in real debt. Now the crucial feature of the model steps in: due to self-insurance, there is essentially a liquidity value of debt. If you have less debt, the marginal value of debt is higher; people bid down the real interest rate in an attempt to get more debt. But the higher real rate means the real value of debt rises, and as the debt rises, the real interest rate falls.

To understand why this is the equilibrium, it's worth looking at the debt accumulation equation, \[ \frac{db}{dt} = r_t (b_t; g_t) b_t - s_t. \] \(b_t\) is the real value of nominal debt, \(r_t=i_t-\pi_t\) is the real interest rate, and \(s_t\) is the real primary surplus. Higher real rates (debt service costs) raise debt. Higher primary surpluses pay down debt. Crucially -- the whole point of the paper -- the interest rate depends on how much debt is outstanding and on the distribution of wealth \(g_t\). (\(g_t\) is a whole distribution.) More debt means a higher interest rate. More debt does a better job of satisfying self-insurance motives; then the marginal value of debt is lower, so people don't try to save as much, and the interest rate rises. It works a lot like money demand.

Now, if the transfer were proportional to current wealth, nothing would change, and the price level would jump just like the RA (orange) line. But it isn't: in both cases, more-constrained people get more money. The liquidity constraints are less binding, so they're willing to save more. For a given aggregate debt, the real interest rate will rise. So the orange line with no change in real debt is no longer a steady state. We must have, initially, \(db/dt>0.\) Once debt rises and the distribution of wealth mixes, we go back to the old steady state. So real debt must rise less initially, so that it can continue to rise, and to do that, we need a larger price level jump. Whew. (I hope I got that right. Intuition is hard!)

In a previous post on heterogeneous agent models, I asked whether HA matters for aggregates, or whether it is just about the distributional consequences of unchanged aggregate dynamics. Here is a great example in which HA matters for aggregates, both for the size and for the dynamics of the effects.

Here's a second cool simulation. What if, rather than a lump-sum helicopter drop with no change in surpluses, the government just starts running permanent primary deficits?

Note: Impulse response to a permanent expansion in primary deficits. The dotted orange line shows the effects of a reduction in surplus in the Representative Agent model. The blue line labelled "Lump Sum" illustrates the dynamics following an expansion of lump sum transfers. The dashed red line labelled "Tax Rate" plots dynamics following a tax cut. The orange line plots dynamics in the representative agent (RA) model. The dashed black line plots the initial steady state. Source: Kaplan et al., Figure 8.
In the RA model, a decline in surpluses is exactly the same thing as a rise in debt. You get the initial price jump, and then the same inflation following the interest rate target. Not so in the HA models! Perpetual deficits are different from a jump in debt with no change in deficits. Again, real debt and the real rate help to understand the intuition. The real amount of debt is permanently lower. That means people are more starved for buffer-stock assets, and bid down the real interest rate. The nominal rate is fixed, by assumption in this simulation, so a lower real rate means more inflation.

For policy, this is an important result. With flexible prices, RA fiscal theory only gives a one-time price level jump in response to unexpected fiscal shocks. It does not give steady inflation in response to steady deficits. Here we do have steady inflation in response to steady deficits! It also shows an instance of the general "discount rates matter" theorem. Granted, here the central bank could lower inflation by just lowering the nominal rate target, but we know that's not so easy once we add realism to the model.

To see just why this is the equilibrium, and why surpluses are different from debt, again go back to the debt accumulation equation, \[ \frac{db}{dt} = r_t (b_t, g_t) b_t - s_t. \] In the RA model, the price level jumps so that \(b_t\) jumps down, and then with smaller \(s_t\), \(r b_t - s_t\) is unchanged with a constant \(r\). But in the HA model, the lower value of \(b\) means less liquidity value of debt, and people try to save, bidding down the interest rate. We need to work down the debt demand curve, driving down the real interest costs \(r\) until they partially pay for some of the deficits. There is a sense in which "financial repression" (artificially low interest rates) via perpetual inflation helps to pay for perpetual deficits. Wow!

Part I: r < g

The first theory part of the paper is also interesting. (Though these are really two papers stapled together, since as I see it the theory in the first part is not at all necessary for the simulations.) Here, Kaplan, Nikolakoudis and Violante take on the r<g question clearly. No, r<g does not doom fiscal theory! I was so enthused by this that I wrote up a little note, "Fiscal theory with negative interest rates," here. Detailed algebra of my points below is in that note. (An essay, "r<g," and also an r<g chapter in FTPL explain the related issue, why it's a mistake to use averages from our real economy to calibrate perfect foresight models. Yes, we can observe \(E(r)<E(g)\), yet present values converge.) I'll give the basic idea here.

To keep it simple, think about what happens with a negative real interest rate \(r<0\) and a constant surplus \(s\), in an economy with no growth and perfect foresight. You might think we're in trouble: \[b_t = \frac{B_t}{P_t} = \int e^{-r\tau} s \, d\tau = \frac{s}{r}.\] A negative interest rate makes present values blow up, no? Well, what about a permanently negative surplus \(s<0\) financed by a permanently negative interest cost \(r<0\)? That sounds fine in flow terms, but it's really weird as a present value, no?

Yes, it is weird. Debt accumulates at \[\frac{db_t}{dt} = r_t b_t - s_t.\] If \(r>0\), \(s>0\), then the real value of debt is generically explosive for any initial debt but \(b_0=s/r\). Because of the transversality condition ruling out real explosions, the initial price level jumps so that \(b_0=B_0/P_0=s/r\). But if \(r<0\), \(s<0\), then debt is stable.
For any \(b_0\), debt converges, and the transversality condition is satisfied. We lose fiscal price level determination. No, you can't take the present value of a negative cashflow stream with a negative discount rate and get a sensible present value.

But \(r\) is not constant. The more debt, the higher the interest rate. So \[\frac{db_t}{dt} = r(b_t) b_t - s_t.\] Linearizing around the steady state \(b=s/r\), deviations of debt from that steady state grow at rate \(r + b\, dr(b)/db\). So even if \(r<0\), if more debt raises the interest rate enough -- if \(dr(b)/db\) is large enough -- dynamics are locally, and it turns out globally, unstable even with \(r<0\). Fiscal theory still works!

You can work out an easy example with bonds in utility, \(\int e^{-\rho t}[u(c_t) + \theta v(b_t)]dt\), and, simplifying further, log utility \(u(c) + \theta \log(b)\). In this case \(r = \rho - \theta v'(b) = \rho - \theta/b\) (see the note for the derivation), so debt evolves as \[\frac{db}{dt} = \left[\rho - \frac{\theta}{b_t}\right]b_t - s = \rho b_t - \theta - s.\] Now the \(r<0\) part still gives stable dynamics and multiple equilibria. But if \(\theta>-s\), then dynamics are again explosive for all but \(b=s/r\), and fiscal theory works anyway. (A small numerical sketch of this example follows below.)

This is a powerful result. We usually think that in perfect foresight models \(r>g\) (\(r>0\) here), and consequently that positive vs. negative primary surpluses, \(s>0\) vs. \(s<0\), is an important dividing line. I don't know how many fiscal theory critiques I have heard that say a) it doesn't work because r<g, so present values explode, and b) it doesn't work because primary surpluses are always slightly negative. This is all wrong. The analysis, as in this example, shows that fiscal theory can work fine, and doesn't even notice a transition from \(r>0\) to \(r<0\), from \(s>0\) to \(s<0\). Financing a steady small negative primary surplus with a steady small negative interest rate, or \(r<g\), is seamless. The crucial question in this example is whether \(s<-\theta\); at that boundary, there is no equilibrium any more. You can finance only so much primary deficit by financial repression, i.e. by squeezing down the amount of debt so its liquidity value is high, pushing down the interest costs of debt.

The paper staples these two exercises together, and calibrates the above simulations to \(s<0\) and \(r<g\). But I bet they would look almost exactly the same with \(s>0\) and \(r>g\). \(r<g\) is not essential to the fiscal simulations.*

The paper analyzes self-insurance against idiosyncratic shocks as the cause of a liquidity value of debt. That's interesting, and allows the authors to calibrate the liquidity value against microeconomic observations on just how much people suffer such shocks and want to insure against them. The Part I simulations are just that, heterogeneous agents in action. But this theoretical point is much broader, and applies to any economic force that pushes up the real interest rate as the volume of debt rises. Bonds in utility, here and in the paper's appendix, work. They are a common stand-in for the usefulness of government bonds in financial transactions. And in that case, it's easier to extend the analysis to a capital stock, real estate, foreign borrowing and lending, gold bars, crypto, and other means of self-insuring against shocks. Standard "crowding out" stories by which higher debt raises interest rates work. (Blanchard's r<g work has a lot of such stories.) The "segmented markets" stories underlying faith in QE give a rising \(r(b)\).
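To see the knife-edge at work, here is a toy simulation of my own of the bonds-in-utility example above (the parameter values are made up): paths of real debt that start away from the steady state diverge even though the steady-state real rate and the surplus are both negative, which is exactly what restores fiscal price level determination.

```python
# Bonds-in-utility example: r(b) = rho - theta/b, so db/dt = rho*b - theta - s.
# With theta > -s there is a positive steady state b* = (theta + s)/rho, and
# paths starting away from it diverge; ruling those out pins down b_0 = B_0/P_0.
# Parameter values are hypothetical.
import numpy as np

rho, theta, s = 0.03, 0.05, -0.01       # note s < 0 and, below, r* < 0
b_star = (theta + s) / rho              # steady-state real debt
r_star = rho - theta / b_star           # steady-state real rate (negative here)

dt, years = 0.01, 100
for b0 in (0.95 * b_star, b_star, 1.05 * b_star):
    b = b0
    for _ in range(int(years / dt)):
        b += dt * (rho * b - theta - s) # Euler step of the debt accumulation equation
    print(f"b0 = {b0:.3f}: b after {years} years = {b:.2f}  (b* = {b_star:.3f}, r* = {r_star:.3f})")
```

In the model, paths on which real debt collapses to zero or explodes are not equilibria, so the initial price level must jump to put real debt exactly at the steady state.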
So the general principle -- any force that makes the real interest rate rise with the quantity of debt -- is robust to many different kinds of models.

My note explores one issue the paper does not, and it's an important one in asset pricing. OK, I see how dynamics are locally unstable, but how do you take a present value when r<0? If we write the steady state \[b_t = \int_{\tau=0}^\infty e^{-r \tau}s\, d\tau = \int_{\tau=0}^T e^{-r \tau}s\, d\tau + e^{-rT}b_{t+T}= (1-e^{-rT})\frac{s}{r} + e^{-rT}b,\] then with \(r<0\) and \(s<0\), the integral and the final term of the present value formula each explode to infinity. It seems you really can't discount with a negative rate. The answer is: don't integrate forward \[\frac{db_t}{dt}=r b_t - s \] to the nonsense \[ b_t = \int e^{-r \tau} s\, d\tau.\] Instead, integrate forward \[\frac{db_t}{dt} = \rho b_t - \theta - s\] to \[b_t = \int e^{-\rho \tau} (s + \theta)\,d\tau = \int e^{-\rho \tau} \frac{u'(c_{t+\tau})}{u'(c_t)}(s + \theta)\,d\tau.\] In the last equation I put consumption (\(c_t=1\) in the model) back in for clarity. Discount the flow value of liquidity benefits at the consumer's intertemporal marginal rate of substitution. Do not use liquidity to produce an altered discount rate.

This is another deep, and frequently violated, point. Our discount factor tricks do not always work in infinite-horizon models. \(1=E(R_{t+1}^{-1}R_{t+1})\) works just as well as \(1 = E\left[\beta u'(c_{t+1})/u'(c_t)\, R_{t+1}\right]\) in a finite-horizon model, but you can't always use \(m_{t+1}=R_{t+1}^{-1}\) in infinite-period models. The integrals blow up, as in the example. This is a good thesis topic for a theoretically minded researcher. It's something about Hilbert spaces. Though I wrote the discount factor book, I don't know how to extend discount factor tricks to infinite periods. As far as I can tell, nobody else does either. It's not in Duffie's book. In the meantime, if you use discount factor tricks like affine models -- anything but the proper SDF -- to discount an infinite cashflow, and you find "puzzles" and "bubbles," you're on thin ice. There are lots of papers making this mistake.

A minor criticism: the paper doesn't show the nuts and bolts of how to calculate a HAFT model, even in the simplest example. Note by contrast how trivial it is to calculate a bonds-in-utility model that gets most of the same results. Give us a recipe book for calculating textbook examples, please!

Obviously this is a first step. As FTPL quickly adds sticky prices to get reasonable inflation dynamics, so should HAFT. For FTPL (or FTMP, the fiscal theory of monetary policy, i.e. adding interest rate targets), adding sticky prices made the story much more realistic: we get a year or two of steady inflation eating away at bond values, rather than a price level jump. I can't wait to see HAFT with sticky prices. For all the other requests for generalization: you just found your thesis topic. Send typos, especially in equations.

Updates

*Greg wrote, and pointed out that this isn't exactly right: "In the standard r>g, s>0 case, an increased desire to hold real assets (such as more income risk) leads to a lower real rate and higher real debt -- the standard 'secular stagnation' story. With r<g, s<0, an increased desire to hold real assets leads to higher real rates and higher debt." To understand this comment, you have to look at the supply and demand graph in the paper, or in my note. The "supply" of debt in the steady state, \(b = s/r\), plotted with \(r\) as a function of \(b\), flips sign from a declining curve to a rising curve when \(s\) and \(r\) change sign. The "demand" \(r(b)\) is upward sloping.
So when demand shifts out, \(b\) rises, but \(r\) falls when \(r>0\) and rises when \(r<0\). With positive interest rates, you produce a greater amount of real debt, for the same surplus, with a lower real interest rate. With negative interest rates and a negative surplus, you produce more debt with a less negative real rate. Hmm. The \(r<g\) region is still a little weird. There is also the possibility of multiple equilibria, like the new-Keynesian zero bound equilibria; see the paper and the note.

Erzo Luttmer has a related HAFT paper, "Permanent Primary Deficits, Idiosyncratic Long-Run Risk, and Growth." It's calibrated in much more detail, and is also more detailed on the r<g and long-run deficit questions. It includes fiscal theory (p. 14) but does not seem centrally focused on inflation. I haven't read it yet, but it's important if you're getting into these issues.

I still regard r<g as a technical nuisance. In most of the cases here, it does not relieve the government of the need to repay debts, it does not lead to a Magic Money Tree, and it does not undermine fiscal price level determination. I am still not a fan of OLG models, which delicately need the economy truly to go on growing forever. I'm not totally persuaded that HA is first-order important for getting aggregate inflation dynamics right. The Phillips curve still seems like the biggest rotten timber in the ship to me. But these issues are technical and complex, and I could be wrong. Attention is limited, so you have to place your bets in this business; but fortunately you can still read after other people work it out!

Noah Kwicklis at UCLA has a very interesting related paper, "Transfer Payments, Sacrifice Ratios, and Inflation in a Fiscal Theory HANK":

I numerically solve a calibrated Heterogeneous Agent New-Keynesian (HANK) model that features nominal rigidities, incomplete markets, hand-to-mouth households, nominal long-term government debt, and active fiscal policy with a passive monetary policy rule to analyze the implications of the fiscal theory of the price level (FTPL) in a setting with wealth and income inequality. In model simulations, the total cumulative inflation generated by a fiscal helicopter drop is largely determined by the size of the initial stimulus and is relatively insensitive to the initial distribution of the payments. In contrast, the total real GDP and employment response depends much more strongly on the balance sheets of the transfer recipients, such that payments to and from households with few assets and high marginal propensities to consume (MPCs) move aggregate output much more strongly than payments to or from households with low MPCs....
With President Milei's election in Argentina, dollarization is suddenly on the table. I'm for it. Here's why.

Why not? A standard of value

Start with "why not?" Dollarization, not a national currency, is actually a sensible default. The dollar is the US standard of value. We measure length in feet, weight in pounds, and the value of goods in dollars. Why should different countries use different measures of value? Wouldn't it make sense to use a common standard of value? Once upon a time every country, and often every city, had its own weights and measures. That made trade difficult, so we eventually converged on international weights and measures. (Feet and pounds are actually a US anachronism, since everyone else uses meters and kilograms. Clearly if we had to start over we'd use SI units, as science and engineering already do.) Moreover, nobody thinks it's a good idea to periodically shorten the meter in order to stimulate the economy, say by making the sale of cloth more profitable. As soon as people figure out they need to buy more cloth to make the same jeans, the profit goes away.

Precommitment

Precommitment is, I think, the most powerful argument for dollarization (as for euroization of, say, Greece): a country that dollarizes cannot print money to spend more than it receives in taxes. A country that dollarizes must also borrow entirely in dollars, and must endure costly default rather than relatively less costly inflation if it doesn't want to repay debts. Ex post inflation and devaluation are always tempting: to pay deficits, to avoid paying debt, to transfer money from savers to borrowers, to advantage exporters, or to goose the economy ahead of elections. If a government can precommit itself to eschew inflation and devaluation, then it can borrow a lot more money on better terms, and its economy will be far better off in the long run. An independent central bank is often advocated for its precommitment value. Well, locating the central bank 5,000 miles away in a country that doesn't care about your economy is as independent as you can get!

The Siren Vase. Greek, 480-470 BC. Source: The Culture Critic.

Precommitment is an old idea. See picture. It's hard. A country must set things up so that it cannot give in to temptation ex post, and it will regret and try to wriggle out of that commitment when the time comes. A lot of the structure of our laws and government amounts to a set of precommitments. An independent central bank with a price-level mandate is a precommitment not to inflate. A constitution and property rights are precommitments not to expropriate electoral minorities.

Especially in Argentina's case, precommitment is why full dollarization is better than an exchange rate peg or a currency board. A true exchange rate peg -- one dollar for one peso, as much as you like -- would seem to solve the temptation-to-inflate problem. But the country can always abrogate the peg, reinstitute currency controls, and inflate. An exchange rate peg is ultimately a fiscal promise: the country will raise enough taxes so that it can get the dollars necessary to back its currency. When that seems too hard, countries devalue the peg or abandon it altogether. A currency board is tougher. Under a currency board, every peso issued by the government is backed by a dollar. That seems to ensure adequate reserves to handle any conceivable run.
But a strapped government eyes the great Uncle-Scrooge swimming pool full of dollars at the currency board, and is tempted to abrogate the board, grab the assets, and spend them. That's exactly how Argentina's currency board ended. Dollarization is a burn-the-ships strategy. There is no return. Reserves are neither necessary nor sufficient for an exchange rate peg. The peg is a fiscal promise, and stands and falls with fiscal policy.

A currency board, to the government.

Full dollarization -- the country uses actual dollars, and abandons its currency -- cannot be so swiftly undone. The country would have to pass laws to reinstitute the peso, declare all dollar contracts to be peso contracts, ban the use of dollars, and try to confiscate them. Dollars pervading the country would make that hard. People who understand that their wealth is being confiscated and replaced by monopoly money would make it harder -- harder than some technical change in the amount of backing at the central bank for the same peso notes and bank accounts underlying a devalued peg or even an abrogated currency board.

The design of dollarization should make it harder to undo. The point is precommitment, after all: to make it as costly as possible for a following government to de-dollarize. It's hard to confiscate physical cash, but if domestic Argentine banks have dollar accounts and dollar assets, it is relatively easy to declare the accounts to be in pesos and grab the assets. It would be better if dollarization were accompanied by full financial, capital, and trade liberalization, including allowing foreign banks to operate freely and Argentinian banks to become subsidiaries of foreign banks. The absence of a central bank and domestic deposit insurance will make that even more desirable. Then Argentinian bank "accounts" could be claims to dollar assets held offshore, which remain intact no matter what a future Peronist government does.

Governments in fiscal stress that print up money, like Argentina, also impose an array of economy-killing policies to try to prop up the value of their currency, so that the money printing generates more revenue. They restrict imports with tariffs, quotas, and red tape; they restrict exports to try to steer supply to home markets at lower prices; they restrict currency conversion and do so at manipulated rates; they restrict capital markets, stopping people from investing abroad or borrowing abroad; they force people to hold money in oligopolized bank accounts at artificially low interest rates. Dollarization is also a precommitment to avoid, or at least reduce, all these harmful policies, since generating a demand for the country's currency doesn't do any good for the government budget when there isn't a currency.

Zimbabwe dollarized in 2009, giving up on its currency after the greatest hyperinflation ever seen. The argument for Argentina is similar. Ecuador dollarized successfully in much less trying circumstances. It's not a new idea, and unilateral dollarization is possible. In both cases there was a period in which both currencies circulated. (Sadly, Zimbabwe ended dollarization in 2019, with a re-introduction of the domestic currency and a redenomination of dollar deposits at a very unfavorable exchange rate. It is possible to undo dollarization, and the security of dollar bank accounts in the face of such appropriation is an important part of the dollarization precommitment.)

The limits of precommitment

Dollarization is no panacea. It will work if it is accompanied by fiscal and microeconomic reform.
It will be of limited value otherwise. I'll declare a motto: all successful inflation stabilizations have come from a combination of fiscal, monetary, and microeconomic reform.

Dollarization does not magically solve intractable budget deficits. Under dollarization, if the government cannot repay debt or borrow, it must default. And Argentina has plenty of experience with sovereign default. Argentina already borrows abroad in dollars, because nobody abroad wants peso debt, and has repeatedly defaulted on dollar debt. The idea of dollar debt is that explicit default is more costly than inflation, so the country will work harder to repay debt. Bond purchasers, aware of the temptation to default, will put clauses in debt contracts that make default more costly still. For you to borrow, you have to give the bank the title to the house. Sovereign debt issued under foreign law, with rights to grab assets abroad, works similarly. But sovereign default is not infinitely costly, and countries like Argentina sometimes choose default anyway. Where inflation may represent simply hugging the mast and promising not to let go, default is a set of loose handcuffs that you can wriggle out of, painfully.

Countries are like corporations. Debt denominated in the country's own currency is like corporate equity (stock): if the government can't or won't pay it back, the price can fall, via inflation and currency devaluation. Debt denominated in foreign currency is like debt: if the government can't or won't pay it back, it must default. (Most often, default is partial. You get back some of what was promised, or you are forced to convert maturing debt into new debt at a lower interest rate.)

The standard ideas of corporate finance tell us who issues debt and who issues equity. Small businesses, new businesses, businesses that don't have easily valued assets, businesses where it is too easy for the managers to hide cash, are forced to borrow, to issue debt. You have to borrow to start a restaurant. Businesses issue equity when they have good corporate governance and good accounting, so stockholders can be sure they're getting their share. These ideas apply to countries, and to the choice between borrowing in their own currency and borrowing in foreign currency. Countries with poor governance, poor accounting, out-of-control fiscal policies, and poor institutions for repayment have to borrow in foreign currency if they are going to borrow at all, with intrusive conditions making default even more expensive. Issuing and borrowing in your own currency, with the option to inflate, is the privilege of countries with good institutions, and of democracies where voters get really mad about inflation in particular. Of course, when things get really bad, the country can't borrow in either domestic or foreign currency. Then it prints money, forcing its citizens to take it. That's where Argentina is. In personal finance, you start with no credit at all; then you can borrow; finally you can issue equity. On this scale of healthier economies, dollarizing is the next step up for Argentina.

Dollarization and foreign currency debt have another advantage. If a country inflates its way out of a fiscal mess, that benefits the government, but it also benefits all private borrowers at the expense of private savers. Private borrowing inherits the inflation premium of government borrowing, as the effective government default induces a widespread private default.
Dollarization and sovereign default can allow the sovereign to default without messing up private contracts, and all the prices and wages in the economy. It is possible for sovereigns to pay higher interest rates than good companies, and for the sovereign to be more likely to default than those companies. It doesn't always happen, because sovereigns about to default usually grab all the wealth they can find on the way down, but the separation of sovereign default from inflationary chaos is also an advantage.

Greece is a good example, and to some extent Italy as well, both in the advantages and in the cautionary tale about the limitations of dollarization. Greece and Italy used to have their own currencies. They also had borders, trade controls, and capital controls. They had regular inflation and devaluation. Every day seemed to be another "crisis" demanding another "just this once" splurge. As a result, they paid quite high interest rates to borrow, since savvy bondholders wanted insurance against another "just this once."

They joined the EU and the eurozone. This step precommitted them to free trade, relatively free capital markets, and no national currency. Sovereign default was possible, but regarded as very costly. Having banks stuffed with sovereign debt made it more costly. Leaving the euro was possible, but even more costly. Deliberately having no plan to do so made it more costly still. The ropes tying hands to the mast were pretty strong. The result: borrowing costs plummeted. Governments, people, and businesses were able to borrow at unheard-of low rates. And they did so, with aplomb. The borrowing could have financed public and private investment to take advantage of the new business opportunities the EU allowed. Sadly, it did not. Greece soon experienced the higher ex-post costs of default that the precommitment imposed. Dollarization -- euroization -- is a precommitment, not a panacea. Precommitments impose costs on yourself ex post. Those costs are real.

A successful dollarization for Argentina has to be part of a joint monetary, fiscal, and microeconomic reform. (Did I say that already? :) ) If public finances aren't sorted out, a default will come eventually. And public finances don't need a sharp bout of "austerity" to please the IMF. They need decades of small primary surpluses, tax revenues slightly higher than spending, to credibly pay down any debt. To get decades of revenue, the best answer is growth. Tax revenue equals the tax rate times income. More income is a lot easier than a higher tax rate, which at least partially lowers income. Greece and Italy did not accomplish the microeconomic reform part. Fortunately for Argentina, microeconomic reform is low-hanging fruit, especially for a Libertarian president.

Transition

Well, so much for the Promised Land, they may have asked of Moses, but how do we get there? And let's not spend 40 years wandering the Sinai on the way. Transition isn't necessarily hard. On 1 January 1999, Italy switched from lira to euro (euro cash followed in 2002). Every price changed overnight, every bank account was redenominated, every contract reinterpreted, all instantly and seamlessly. People turned in lira banknotes for euro banknotes. The biggest complaint is that stores might have rounded up converted prices. If only Argentina could have such problems.

Why is Argentina not the same? Well, for a lot of reasons. Before getting to the euro, Italy had adopted the EU open market. Exchange rates had been successfully pegged at the conversion rate, with no funny business about multiple rates.
The ECB (really the Italian central bank) could simply print up euros to hand out in exchange for lira. The assets of the Italian central bank and other national central banks were also redenominated in euros, so printing up euros to soak up national currencies was not inflationary -- assets still equal liabilities. Banks with lira deposits that convert to euros also have lira assets that convert to euros. And there was no sovereign debt crisis, bank crisis, or big inflation going on. Italian government debt was trading freely on an open market. Italy would spend and receive taxes in euros, so if the debt was worth its current price in lira as the present value of surpluses, it was worth exactly the same price, at the conversion rate, in euros.

None of this is true in Argentina. The central problem, of course, is that the government is broke. The government does not have dollars to exchange for pesos. Normally, this would not be a problem. Reserves don't matter; the fiscal capacity to get reserves matters. The government could simply borrow dollars internationally, give the dollars out in exchange for pesos, and slowly pay off the resulting debt. If Argentina redenominated interest-bearing peso debt to dollars at a market exchange rate, that would have no effect on the value of the debt. Obviously, borrowing additional dollars would likely be difficult for Argentina right now. And to the extent that its remaining debt is a claim to future inflationary seigniorage revenues, its debt is also worth less once converted to dollars, even at a free-market rate, because without seigniorage or fiscal reforms, budget deficits will increase.

And that leads to the primary argument against dollarization I hear these days. Yes, it might be the promised land, but it's too hard to get there. I don't hear loudly enough, though: what is the alternative? One more muddle of currency boards, central bank rules, promises to the IMF, and so forth? How do you suddenly create the kind of stable institutions that Argentina has lacked for a century to justify a respectable currency?

One might say this is a problem of price, not of quantity. Pick the right exchange rate, and conversion is possible. But that is not even clearly true. If the state is truly broke, if pesos are only worth anything because of the legal restrictions forcing people to hold them, then pesos and peso debt are genuinely worthless. The only route to dollarization would be essentially a complete collapse of the currency and debt. They are worth nothing. We start over. You can use dollars, but you'll have to export something to the US -- either goods or capital, i.e. stocks and bonds in private companies -- to get them. (Well, to get any more of them. Lots of dollars line Argentine mattresses already.) That is enough economic chaos to really put people off.

In reality, I think the fear is not a completely worthless currency, but that a move to quick dollarization would make pesos and peso claims worth very little, and people would rebel against seeing their money holdings and bank accounts even more suddenly worthless than they are now. Maybe, maybe not. Just who is left in Argentina counting on a robust value of pesos?

But the state is not worth nothing. It may be worth little marked to market, or in current dollar borrowing capacity. But a reformed, growing Argentina, with tax, spending, and microeconomic reform, could be a great place for investment, and for tax revenue above costs.
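To make that valuation logic concrete, here is a minimal sketch; the numbers and the code are mine, purely illustrative assumptions, not figures from the post or from any data. The idea: government debt is backed by the present value of primary surpluses plus seigniorage, so removing seigniorage (dollarization) lowers the backing unless surpluses (reform) rise to replace it.

```python
# Hypothetical illustration (my numbers, not the post's) of the valuation logic above:
# government debt is backed by the present value of primary surpluses plus seigniorage,
# all expressed in percent of GDP.

def debt_backing(surpluses, seigniorage, discount=0.95):
    """Present value of the surplus-plus-seigniorage stream (both in % of GDP)."""
    return sum(discount ** t * (s + sg)
               for t, (s, sg) in enumerate(zip(surpluses, seigniorage)))

years = 30
modest_surplus = [0.5] * years   # small primary surpluses, % of GDP per year
inflation_tax  = [2.0] * years   # seigniorage from money printing, % of GDP per year

with_seigniorage = debt_backing(modest_surplus, inflation_tax)
no_reform        = debt_backing(modest_surplus, [0.0] * years)   # dollarized, seigniorage gone
with_reform      = debt_backing([1.5] * years,  [0.0] * years)   # larger surpluses instead

print(f"peso debt backed partly by seigniorage: {with_seigniorage:5.1f}% of GDP")
print(f"dollarized, no fiscal reform:           {no_reform:5.1f}% of GDP")
print(f"dollarized, with fiscal reform:         {with_reform:5.1f}% of GDP")
```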
Once international lenders are convinced those reform efforts are locked in, and that Argentina will grow to anything like its amazing potential, they'll be stumbling over themselves to lend. So a better dollarization plan redeems pesos at the new, greater value of the post-reform Argentine state. The question is a bit of chicken and egg: dollarization has to be part of the reform, but only reform allows dollarization at a decent peso exchange value. So there is a genuine question of the sequencing of reforms.

This question reminds me of the totally fruitless discussion when the Soviet Union broke up. American economists amused themselves with clever optimal sequencing of liberalization schemes. But if competent benevolent dictators (sorry, "policy-makers") were running the show, the Soviet Union wouldn't have failed in the first place.

[Figure: The end of hyperinflation in Germany. Price level 1919-1924. Note left-axis scale. Source: Sargent (1982), "The ends of four big inflations."]

A better historical analogy is, I think, the ends of the hyperinflations after WWI, so beautifully described by Tom Sargent in 1982. The inflations were stopped by a sudden, simultaneous fiscal, monetary, and (to some extent) microeconomic reform. The fiscal problem was solved by renegotiating reparations under the Versailles treaty, along with severe cuts in domestic spending, for example firing a lot of government and (nationalized) railroad workers. There were monetary reforms, including an independent central bank forbidden to buy government debt. There were some microeconomic reforms as well. Stopping inflation took no monetary stringency or high interest rates: interest rates fell, and the governments printed more money, as real money demand increased. There was no Phillips curve of high unemployment. Employment and the economies boomed.

So I'm for almost-simultaneous and fast reforms.

1) Allow the use of dollars everywhere. Dollars and pesos can coexist. Yes, this will put downward pressure on the value of the peso, but that might be crucial to maintain interest in the other reforms, which will raise the value of the peso.

2) Instant unilateral free trade and capital opening. Argentina will have to export goods and capital to get dollars. Get out of the way. Freeing imports will lower their prices and make the economy more efficient. Capital will only come in, which it should do quickly, if it knows it can get out again. Float the peso.

3) A long list of growth-oriented microeconomic reforms. That's why you elected a Libertarian president.

4) Slash spending. Reform taxes. Low marginal rates, broad base. Subsidies in particular distort prices to transfer income. Eliminate them.

5) Once reforms are in place, and Argentina has some borrowing capacity, redenominate debt to dollars, and borrow additional dollars to exchange pesos for dollars. All existing peso contracts, including bank accounts, change on that date. Basically, you want people to hold peso bills and peso debt in the interim as claims on the post-reform government. Peso holders then have an incentive to push for reforms that will raise the eventual exchange value of the peso.

6) Find an interim lender. The central problem is who will lend to Argentina in midstream in order to retire pesos. This is like debtor-in-possession financing, but for a bankrupt country. This could be a job for the IMF. The IMF could lend Argentina dollars for the purpose of retiring pesos. One couldn't ask for much better "conditionality" than a robust Libertarian pro-growth program.
Having the IMF along for the ride might also help to commit Argentina to the program. (The IMF can enforce conditionality better than private lenders.) When things have settled down, Argentina should be able to borrow dollars privately to pay back the IMF. The IMF might charge a decent interest rate to encourage that.

How much borrowing is needed? Less than you think. Interest-paying debt can simply be redenominated in dollars once you pick a rate. That might be hard to pay off, but that's a problem for later. So Argentina really only needs to borrow enough dollars to retire cash pesos. I can't find numbers, but hyperinflationary countries typically don't have much real value of cash outstanding. The US has 8% of GDP in currency outstanding. If Argentina has half that, then it needs to borrow only 4% of GDP in dollars to buy back all its currency. That's not a lot. If the peso really collapses, borrowing a little bit more (against the great future growth of the reform program) to give everyone $100, the sort of fresh start that Germany did after WWII and after unification, is worth considering.

Most of the worry about Argentina's borrowing ability envisions continued primary deficits with slow fiscal adjustment. Make the fiscal adjustment tomorrow. "You never want a serious crisis to go to waste," said Rahm Emanuel wisely. "Sequencing" reforms means that everything promised tomorrow is up for constant renegotiation. Especially when parts of the reform depend on other parts, I'm for doing it all as fast as possible, and then adding refinements later if need be. Roosevelt had his famous 100 days, not an 8-year sequenced program.

The Argentine reform program is going to hurt a lot of people, or at least recognize losses that had long been papered over in the hope they would go away. Politically, one wants to make the case: "We're all in this, we're all hurting. You give up your special deal, preferential exchange rate, special subsidy or whatever, but so will everyone else. Hang with me to make sure they don't get theirs, and in a year we'll all be better off." If reforms come in a long sequence, which means a long renegotiation, it's much harder to get buy-in from people who are hurt early on that those who come later will also do their part.

The standard answers

One standard critique of dollarization is monetary policy and "optimal currency areas." By having a national currency, the country's wise central bankers can artfully inflate and devalue the currency on occasion to adapt to negative shocks, without the inconvenience and potential dislocation of everyone in the country lowering prices and wages.

Suppose, say, the country produces beef, and exports it in order to import cars. If world demand for beef declines, the dollar price of beef declines. The country is going to have to import fewer cars. In a dollarized country, or with a pegged exchange rate, the internal price of beef and wages go down. With its own currency and a floating rate, the value of the currency could go down, leaving beef prices and wages the same inside the country, but the price of imported cars goes up. If lowering prices and wages causes more recession and dislocation than raising import prices, then the artful devaluation is the better idea. (To think about this question more carefully you need traded and non-traded goods: beef, cars, and haircuts. The relative prices of beef, cars, and haircuts, along with the demand for haircuts, also differ under the two regimes.)
Similarly, suppose there is a "lack of demand" recession and deflation. (90 years later, economists are still struggling to say exactly where that comes from.) With its own central bank and currency, the country can artfully inflate just enough to offset the recession. A country that dollarizes also has to import not-always-optimal US inflation. Switzerland did a lot better than the US and EU once again in the covid era.

This line of thinking answers the question, "OK, if Argentina ($847 bn GDP, beef exports) should have its own currency in order to artfully offset shocks, why shouldn't Colorado ($484 bn GDP, beef exports)?" Colorado is more dependent on trade with the rest of the US than Argentina is. But, the story goes, people can more easily move across states. A common federal government shoves "fiscal stimulus" to states in trouble. Most of all, "lack of demand" recessions seem to be national, in part because of the high integration of the states, so recessions are fought by national policy and don't need state-specific monetary stimulus.

This is the standard "optimal currency area" line of thinking, which recommends a common currency in an integrated free-trade zone such as the US, small Latin American countries that trade a lot with the US, and Europe. Standard thinking especially likes a common currency in a fiscal union. Some commenters felt Greece should keep or revert to the drachma because the EU didn't have enough common countercyclical fiscal policy. It likes independent currencies elsewhere.

I hope you're laughing out loud by now. A wise central bank, coupled with a thrifty national government, that artfully inflates and devalues just enough to technocratically exploit price stickiness and financial frictions, offsetting national "shocks" with minimum disruption, is a laughable description of Argentina's fiscal and monetary policies. Periodic inflation, hyperinflation, and default, together with a wildly overregulated economy with far too many capital and trade controls, is more like it. The lure of technocratic stabilization policy in the face of Argentina's fiscal and monetary chaos is like fantasizing about whether you want the tan or black leather on your new Porsche while you're on the bus to Carmax to see if you can afford a 10-year-old Toyota.

Another reason people argue that even small countries should have their own currencies is to keep the seigniorage. Actual cash pays no interest. Thus, a government that issues cash earns the spread between the interest on government bonds and the zero interest on cash. Equivalently, if demand for cash is proportional to GDP, then as GDP grows, say 2% per year, the government can let cash grow 2% per year as well, i.e. it can print up that much cash and spend it. But this sort of seigniorage is small for modern economies that don't have inflation. Without inflation, a well-run economy might pay 2% on its debt, so it saves 2% by issuing currency. 2% interest times cash of 10% of GDP is 0.2% of GDP. On the scale of Argentine (or US) debt and deficits, that's couch change.

When inflation is higher, interest rates are higher, and seigniorage, or the "inflation tax," is higher. Argentina is living off that now. But the point is to not inflate forever, and to forswear bigger inflation taxes. Keeping this small seigniorage is one reason for countries to keep their currency and peg to the dollar or run a currency board. The currency board holds interest-bearing dollar assets, and the government gets the interest. Nice.
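As a back-of-the-envelope check on the two numbers above, the roughly 4% of GDP needed to retire cash pesos and the roughly 0.2% of GDP of seigniorage forgone, here is a minimal sketch. The GDP figure and the percentages are the ballpark assumptions used in the text, not data.

```python
# Back-of-the-envelope check of the two numbers above; the percentages and the
# GDP figure are the post's ballpark assumptions, not data.

gdp = 487e9   # Argentine GDP in current dollars (the 2021 figure quoted later in the post)

# 1) One-time borrowing needed to retire circulating peso cash:
#    the US holds about 8% of GDP in currency; assume Argentina holds half that.
cash_share = 0.5 * 0.08
print(f"cash to retire: {cash_share:.0%} of GDP, roughly ${cash_share * gdp / 1e9:.0f} billion")

# 2) Annual seigniorage forgone without inflation: interest saved on
#    non-interest-bearing cash, at 2% on cash holdings of 10% of GDP.
seigniorage_share = 0.02 * 0.10
print(f"seigniorage forgone: {seigniorage_share:.1%} of GDP, "
      f"roughly ${seigniorage_share * gdp / 1e9:.1f} billion per year")
```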
But as I argued above, the extra precommitment value of total dollarization is worth the small lost seigniorage. Facing Argentina's crisis, plus its catastrophic century of lost growth, lost seigniorage is a cost I judge far below the benefit.

Other countries dollarize but agree with the US Fed to rebate them some money for the seigniorage. Indeed, if Argentina dollarizes and holds 10% of its GDP in non-interest-bearing US dollars, that's a nice little present to the US. A dollarization agreement with Argentina to give them back the seigniorage would be the least we can do. But I don't think Argentina should hold off waiting for Jay Powell to answer the phone. The Fed has other fires to put out. If Argentina unilaterally dollarizes, it can work this sort of thing out later.

Dollarization would obviously be a lot easier if it were worked out together with the US government and US banks. Getting cash sent to Argentina, and getting banks to have easy payment systems in dollars and links to US banks, would make it all easier. If Argentina gets rid of its central bank, it still needs a payment system to settle claims in dollars. Accounts at, say, Chase could function as a central bank. But it would all be easier if the US cooperates.

Updates:

Some commenters point out that Argentina may be importing US monetary policy just as the US imports Argentine-style fiscal policy. That would lead to importing a big inflation. They suggest a Latin American monetary union, like the euro, or using a third country's currency. The Swiss franc is pretty good. Maybe the Swiss can set the world standard of value. Both are good theoretical ideas, but a lot harder to achieve in the short run. Dollarization will be hard enough. Argentines have a lot of dollars already; most trade is invoiced in dollars, so getting dollars via trade is relatively easy; and the Swiss have not built out a banking infrastructure capable of supporting a global currency. The EMU lives on top of the EU, and has its own fiscal/monetary problems. Building a new currency before solving Argentina's problems sounds like a long road. The question asked was dollarization, so I stuck to that for now.

I imagined here unilateral dollarization. But I didn't emphasize enough: the US should encourage dollarization! China has figured this out and desperately wants anyone to use its currency. Why should we not want more people to use our currency? Not just for the seigniorage revenue, but for the ease of trade and international linkages it promotes. The Treasury and Fed should have a "how to dollarize your economy" package ready to go for anyone who wants it. Full integration is not trivial, including access to currency, getting bank access to the Fed's clearing systems, instituting cyber and money-laundering protocols, and so forth.

Important update:

Daniel Raisbeck and Gabriela Calderon de Burgos at Cato have a lovely essay on Argentinian dollarization, also debunking an earlier Economist article that proclaimed it impossible. They include facts and comparisons with other dollarization experiences, not just theory, as I did. (Thanks to the correspondent who pointed me to the essay.) Some quotes:

At the end of 2022, Argentines held over $246 billion in foreign bank accounts, safe deposit boxes, and mostly undeclared cash, according to Argentina's National Institute of Statistics and Census. This amounts to over 50 percent of Argentina's GDP in current dollars for 2021 ($487 billion).
Hence, the dollar scarcity pertains only to the Argentine state. ...

The last two dollarization processes in Latin American countries prove that "purchasing" the entire monetary base with U.S. dollars from one moment to the next is not only impractical, but it is also unnecessary. In both Ecuador and El Salvador, which dollarized in 2000 and 2001 respectively, dollarization involved parallel processes. In both countries, the most straightforward process was the dollarization of all existing deposits, which can be converted into dollars at the determined exchange rate instantly. ... In both Ecuador and El Salvador, dollarization not only did not lead to bank runs; it led to a rapid and sharp increase in deposits, even amid economic and political turmoil in Ecuador's case. ...

There is a general feature of ending hyperinflation: people hold more money. In this case, people hold more bank accounts once they know those accounts are safe. Short summary of the rest: all those dollar deposits (out of mattresses, into the banking system) allowed the central bank to retire its local-currency liabilities.

Emilio Ocampo, the Argentine economist whom Milei has put in charge of plans for Argentina's dollarization should he win the presidency, summarizes Ecuador's experience thus:

People exchanged their dollars through the banks and a large part of those dollars were deposited in the same banks. The central bank had virtually no need to disburse reserves. This was not by design but was a spontaneous result.

In El Salvador also:

Dollar deposits also increased spontaneously in El Salvador, a country that dollarized in 2001. By the end of 2022, the country's deposits amounted to 49.6 percent of GDP—in Panama, another dollarized peer, deposits stood at 117 percent of GDP.

El Salvador's banking system was dollarized immediately, but the conversion of the circulating currency was voluntary, with citizens allowed to decide if and when to exchange their colones for dollars. Ocampo notes that, in both Ecuador and El Salvador, only 30 percent of the circulating currency had been exchanged for dollars four months after dollarization was announced, so that both currencies circulated simultaneously. In the latter country, it took over two years for 90 percent of the monetary base to be dollar-based.

Cachanosky explains that, in an El Salvador-type, voluntary dollarization scenario, the circulating national currency can be dollarized as it is deposited or used to pay taxes, in which case the sums are converted to dollars once they enter a state-owned bank account. Hence, "there is no need for the central bank to buy the circulating currency" at a moment's notice.

Dollarization starts with both currencies and a peg. As long as people trust that dollarization will happen at the peg, the conversion can take a while. You do not need dollars to soak up every peso on day 1. Dollarization is, above all, a commitment that the peg will last for years, not necessarily a commitment that the whole conversion happens in a day.

I speculated about private borrowing at lower rates than the sovereign, once default rather than inflation is the only way out for the sovereign. This happened:

... as Manuel Hinds, a former finance minister in El Salvador, has explained, solvent Salvadorans in the private sector can borrow at rates of around 7 percent on their mortgages while international sovereign bond markets will only lend to the Salvadoran government at far higher rates.
As Hinds writes, under dollarization, "the government cannot transfer its financial costs to the private sector by printing domestic money and devaluing it."

A nice bottom line: ask people in Ecuador, El Salvador, and Panama what they think:

This is yet another lesson of dollarization's actual experience in Latin American countries. It is also a reason why the vast majority of the population in the dollarized nations has no desire for a return to a national currency. The monetary experiences of daily life have taught them that dollarization's palpable benefits far outweigh its theoretical drawbacks.

Even more important update:

From Nicolás Cachanosky, "How to Dollarize Argentina." The central problem is the non-money liabilities of the central bank. A detailed plan. Many other blog posts at the link. See his comment below.

Tyler Cowen on dollarization in Bloomberg. Great quote: "The question is not how to adopt a new currency, it is how to adopt a new currency and retain a reasonable value for the old one." Dollarization is easy: hyperinflate the peso to zero a la Zimbabwe. Repeat quote.

Emilio Ocampo on dollarization as a commitment device:

One of the main reasons to dollarize is to eliminate high, persistent, and volatile inflation. However, to be effective, dollarization must generate sufficient credibility, which in turn depends critically on whether its expected probability of reversal is low. ... The evidence suggests that, in the long-run, the strongest insurance against reversal is the support of the electorate, but in the short-run, institutional design [dollarization] can play a critical role. Fifty years ago, in testimony to U.S. Congress, Milton Friedman argued that "the whole reason why it is an advantage for a developing country to tie to a major country is that, historically speaking, the internal policies of developing countries have been very bad. U.S. policy has been bad, but their policies have been far worse. ..." (1973, p. 127). In this respect, not much has changed in Argentina since.

Craig Richardson explains how dollarization failed in Zimbabwe, a wonderful cautionary tale. Deficits did not stop, the government issued "bonds" and forced banks to buy them, and bank accounts became delinked from currency. Gresham's law prevailed: the government "bonds," circulating at half face value, drove out cash dollars. With persistent government and trade deficits there was a "dollar shortage."