A pocketbook history of postwar US macro regimes

Interfluidity hosts office hours, Friday afternoons, 3:30 pm US Eastern time. These are open to all. Just drop in here. Some brilliant people have become regulars. (Though I fear to jinx things by referring to them that way!) "Regulars" who I know have some public presence include Detroit Dan, Chris Peel, Everett Reed, Steve Roth. Other, less exhibitionist, regulars include M.A., T.A., and G.G. Attendance at these meetings does not imply endorsement of my bullshit. They all do their best to set me straight. But they do provoke me. A lot of what I write is inspired by their insight and conversation.

Last Friday, I held forth a bit on a very high-level story of the US macroeconomy, and Steve Roth suggested I write it up. So, here goes.

I divide the postwar era into six not-quite-disjoint eras:

  1. Treaty of Detroit (1950 - early 1970s)
  2. Baby boomers' inflation (1970s - early 1980s)
  3. Stumbling into the Great Moderation (1980s)
  4. Democratization of credit era (mid 1980s - 2008)
  5. Collapsing into Great Disillusionment (2008 - 2020)
  6. Capital as baby boomers and a new 1970s? (2021 - ?)

1. Treaty of Detroit (1950 - early 1970s)

Immediately postwar, we lived in a benign Keynesian technocracy. Aggregate supply was a function of accumulated fixed capital and continual productivity growth. Aggregate demand was a function of aggregate wages. Aggregate wages can be oversimplified as a conventional wage level times the size of the labor force.
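
For concreteness, a minimal sketch of that oversimplification in symbols (the notation is mine, not anything canonical): write $\bar{w}$ for the conventional wage level and $L$ for the size of the labor force, so that

$$\text{aggregate wages} \approx \bar{w} \times L, \qquad \text{aggregate demand} = f(\bar{w} \times L), \qquad \text{aggregate supply} = g(K,\ \text{productivity}).$$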

During this period, aggregate supply was growing faster than the labor force. This meant something close to full employment — at a full conventional wage — could be accommodated without putting pressure on the price level.

Under these conditions, hydraulic Keynesian "fine tuning" could basically work. Technocrats could stabilize the price level and ensure full employment without provoking very difficult tradeoffs. Labor unions could insist on job security and wages that grew in real terms every year, to the vexation of the shareholding class, no doubt, but without creating macroeconomic difficulties.

This is not to say all was well and perfect. Between 1950 and 1970, the US experienced three relatively mild recessions. There was a significant inflation related to the Korean War in 1950-1951, and, famously, a smaller, "transient" inflation in the mid 1950s. But this was turbulence in a booming economy, not failure of an economic regime.

By the late 1960s, there were adumbrations of what was soon to come. Inflation was picking up. The seeds of great misery were planted by Milton Friedman in a 1968 speech, introducing to the world the idea of a "natural rate of unemployment" beyond which there must be accelerating inflation. The 1970s would fertilize that soil, and by the 1980s those seeds would yield weeds so robust we have still not entirely shaken them.

2. Baby boomers' inflation (1970s - early 1980s)

I've written this up before, to some controversy at the time.

The postwar baby boom was an unpinned grenade, nonchalantly waiting to blow up the prior period of macroeconomic stability.

So long as the babies were babies, they were no problem. Sure, parents had to care for them, but since most working families had kids, the conventional standard of living included child-care costs. Workers on conventional wages didn't feel disadvantaged or ill-done having to cover them. Society was arranged so that it was possible for conventional wage-earners to afford and raise children.

But once the kids grew up, when they too became workers and wanted to contribute, that's when they became a problem. Isn't it ironic.

Labor productivity is a function, at any given moment, of matching labor to the existing stock of capital. The US economy was accustomed to, and capable of, developing new fixed capital fast enough to match a certain rate of influx of new workers into the labor force.

But the baby boom was a firehose that outstripped the pace at which the US could develop new fixed capital.

Further, young women of the baby boom generation joined the labor force at higher rates than prior generations, and expected pay closer to conventional male-worker wages. The US labor force suddenly faced a challenge of employing a lot of new entrants, and employing them productively enough to meet pretty high wage expectations.

But this firehose of new workers, unmatched to capital, could not be employed at the level of productivity that had been typical previously. The 1970s' infamous supply shocks exacerbated the productivity challenge. Still, the kids expected to earn a conventional wage in real terms, meaning, in the jargon (and the pronouns) of the day, a wage on which a person could afford his own pad and wheels and live independently.

An economist might predict that a sharp, sudden influx of workers that engendered a reduction of productivity of the marginal worker would lead to a decline in real wages. Or else to unemployment, if real wages are sticky or not permitted to decline.

For many of the United States' unionized workers, real wages were not permitted to decline, as a matter of contract. But a very energetic and entitled generation of young people, which only a few years earlier had somewhat credibly threatened revolution, was not going to tolerate unemployment.

Immovable object, irresistible force.

Policymakers were in a pickle.

Inflation was, if not exactly a solution, then a confusion that allowed the country to muddle through. The Great Inflation was miserable and unfair — it picked a lot of losers! — but precisely via its misery and unfairness, it helped address both the problem of unaffordable real wages and the problem of sustaining youth employment. In the unionized 1970s, some older workers were protected by COLA clauses in their contracts, others were not. Those who were not, and who otherwise had little bargaining power in their jobs, were handed very large real wage cuts. Not by any boss or politician they could get mad at, but by an inhuman demon, a "natural market force", a thing that economists and newsmen called "inflation".

By the end of the decade, real wage cuts to the unlucky were sufficiently deep to create space to employ the cohort of younger workers at their non-negotiable pad-and-wheels wage floor.

The net effect of the inflation was a real wage cut, on average. But it was concentrated among non-union, lower-bargaining power older workers, who were neither revolutionaries nor electorally organized, and so whose pain could be tolerated. It was also a redistribution from these workers to young new entrants, who were revolutionaries, or who might have been, had they not in this way been satisfied.

3. Stumbling into the Great Moderation (1980s)

The poor tradeoffs under which the Great Inflation had been the best of a lot of bad choices traumatized policymakers. At the same time, Americans made the catastrophic error of electing Ronald Reagan, who represented aggrieved capital, and who even under the prior, benign, macro regime would have sought to crush workers for the benefit of shareholders.

So crush workers for the benefit of shareholders they did!

It's easy to explain why Reagan and his political supporters tore to shreds what vestiges remained of the Treaty of Detroit. They represented business owners, a faction contesting with other factions for a greater share of output, which they were sure they deserved. Nothing is more ordinary.

But what's interesting is that well-meaning technocrats — and not only the ones bought off with lucrative sinecures — largely agreed that this was the right thing to do.

The 1970s had exposed a tremendous fragility in US macro- and political economic arrangements. Our ability to manage the price level and ensure something like full employment had proven dangerously coupled to vicissitudes of the labor force. In oversimplified, overaggregated Keynesian terms, the marginal dollar of aggregate demand — the expenditures that determine the price level, that decide the inflation rate — came out of nominal wages. And that was a problem.

While it is straightforward for policymakers to stimulate nominal wage growth, policymakers have no palatable tools to reduce nominal wages when there is an upward shock to the labor force or a downward shock to aggregate supply. In the United States (not everywhere!), reductions in labor demand provoke adjustments on the extensive rather than on the intensive margin. That's ugly economist-speak, but all it means is that, as the wage pool dries up, people get fired rather than everyone keeping their jobs at lower pay. In the US, when demand is tightly coupled to wages, the only way to restrain demand is to put people out of work and provoke a recession.

But then Reagan, for his own reasons, crushed the American labor movement. The wage share of the economy had declined as inflation reduced real wages over the course of the 1970s. It continued to decline over the disinflationary 1980s, thanks to Reagan's labor-hostile administration.

The American policy apparatus, whose job is to stabilize, almost automatically ensured that, when wage-financed demand faltered, alternative sources of demand would be stimulated to keep the economy humming. The stock market boomed. Real estate boomed. That was "natural", you might say. As labor bargaining power declined, a greater share of revenues could be appropriated by firms' residual claimants, shareholders. Wealthier people are real-estate buyers, and as more wealth flows to the rich, real-estate prices get bid up.

Asset price booms, in turn, tend to bring with them borrowing booms — from speculators seeking leverage, from families borrowing to afford increasingly expensive homes, from owners of appreciating assets seeking liquidity in order to spend some of their windfall.

Policymakers quickly realized they were in a much better place. Before, the only way they could "fine-tune" economic activity downward was to reduce aggregate nominal wages, which would provoke unemployment, recession, political disquiet. But now the "marginal purchaser of a unit of CPI" was spending out of borrowings or capital income, rather than out of wages. Policymakers could downregulate these sources of demand much more painlessly. As before, policymakers would raise interest rates to prevent inflation. But now rate increases would exert their effect before very many people were thrown out of work, simply by reducing borrowing and blunting asset price growth.1

Economic policymakers had stumbled into "the Great Moderation", and they wanted to keep it. Although they hadn't predicted or engineered it, exactly, after the fact, they did understand how it worked. They had been cornered in the 1970s because their core policy objectives — full employment and price stability — were structurally in conflict, so long as the price level was very tightly coupled to nominal aggregate wages. By suppressing nominal aggregate wages and replacing the suppressed increment with borrowings and capital income, they decoupled employment from inflation. They could sustain "full employment" and modulate borrowings and capital income in order to achieve their policy objectives.

Note the scare quotes around "full employment". Policymakers understood that if they were going to keep this great achievement, they had to prevent any upsurge in labor bargaining power that might render adjustments to borrowing and asset prices insufficient to the task of containing inflation.

So they tended Milton Friedman's seeds. They redefined "full employment" to mean Friedman's "natural rate of unemployment" — or the much more technical-sounding NAIRU ("non-accelerating inflation rate of unemployment"). They estimated it to be… about 6%! During the "Treaty of Detroit" era, 6% was a recessionary level of unemployment. By the early 1990s it had been recast not only as normal, but as desirable, even optimal.

The Federal Reserve remade itself into a kind of financialized union buster. Its role was to watch, eagle-eyed, for any sign of "wage pressure", for any upward creep of unit labor costs, and then threaten to bust up the economy if that strength was allowed to continue. Robust wages had cornered and embarrassed America's technocrats in the 1970s. That would not be permitted to happen again.

4. Democratization of credit era (mid 1980s - 2008)

The real innovation of the Great Moderation, as I described it above, was to suppress wage income and replace it with financially-sourced cash such that "the 'marginal purchaser of a unit of CPI' was spending out of borrowings or capital income, rather than out of wages".

This implies that, in some sense, a "typical" American lifestyle would no longer be affordable on wages alone. The prototypical purchaser of the CPI bundle earned wages, sure, but supplemented those wages with either capital income or borrowings. The typical wage earner would not be able to afford a typical level of consumption only out of wages.

That sounds pretty bad! Then as now, most people are not wealthy enough to earn meaningful financial income.

You need wealth to earn capital income. But you don't have to be rich to borrow. Anyone can do that, as long as they can find a willing lender.

The Great Moderation macro regime might have been politically untenable, as the great bulk of the public would find their capacity to consume out of wages diminished relative to an average level of consumption set by the asset-owning rich.

That's not what happened. Instead the broad public kept up with the affluent Joneses by borrowing to supplement their consumption. Often they borrowed against rising home values, against the collateral of unrealized capital income. (In some cases this income never would be realized, after it disappeared during the 2008 financial crisis.) Uncollateralized credit card borrowing also exploded during this period. Credit really got democratized during the early 2000s when origination fraud and NINJA loans were added to the mix.

In 1995, Lawrence Lindsey coined the term "democratization of credit", in order to answer the question "Where are consumers getting their money?":2

[T]wo key financial innovations have eased the liquidity constraints that households face… The first is the increased ease with which housing may be turned into collateral for a mortgage. The second is the incredible increase in the use of credit card and other installment debt… [U]nsecured consumer credit growth seems to be entering an unprecedented period… While all of this… may sound bleak, there is an important bright spot in the trends described here. One of the key points that has been overlooked by many commentators is the increased democratization of credit in America… Recent developments in housing finance… show a very large expansion of credit opportunities and therefore homeownership opportunities for traditionally underserved groups… I do not believe it is possible, however, to attribute anything approaching a majority of the current very expansive use of consumer credit to positive sociological factors… [G]rantors of consumer credit may now have collectively taken on a macroeconomic responsibility they did not seek. The evidence indicates that the old liquidity constraint which used to discipline household consumption behavior has been replaced by a new constraint -- the credit card limit.

At a very high level, the response of a motley alliance of plutocrats and technocrats to the trauma of the 1970s had been to make the rich richer and most of the public poorer. Plutocrats were for this because, of course. Technocrats did not favor the inequality per se, but they wanted to decouple macroeconomic management from politically inflexible wage income.

The majority of the public was made poorer than it otherwise would have been, and ought to have objected, but we mostly remember this period as good times. There was the late 1990s stock market boom, but most people did not substantially participate in that (and then there was the bust).

Instead, many households enjoyed growing consumption thanks to an increase in their ability to supplement their purchasing power with borrowing, which helped offset the loss of purchasing power they experienced from reduced real wage income.

Unfortunately, replacing wage income with borrowing at best renders household balance sheets more fragile (when, for example, households are borrowing against rising home values), and potentially drives households to outright insolvency (when they run up uncollateralized consumer debt). Chickens would eventually come home to roost.

5. Collapsing into Great Disillusionment (2008 - 2020)

The Great Financial Crisis happened. I won't go into it. Read my archives for my thoughts on that pain in real time.

For our purposes here, the Great Financial Crisis had two relevant effects. First, it killed the democratization of credit. Less well-off people could no longer borrow nearly as easily. Lots of people emerged from the crisis with damaged credit histories and would have difficulty borrowing at all. Second, it did not kill the Great-Moderation regime under which wage income would be suppressed in favor of capital income. It took 76 months for employment to recover from the 2008 downturn. It took only about 66 months for the S&P 500 to soar past its prior peak. By the beginning of 2020, the S&P 500 had more than doubled from its prior peak — after paying dividends — while total labor compensation, in nominal terms, had grown only about 30%.

Despite (to their credit!) eventually tolerating unemployment levels way below 90s-style NAIRU estimates, economic policymakers found it difficult to sustain robust demand without the exuberant borrowing of the democratization-of-credit era. Only in early 2018, after a decade of underperformance, did US GDP reach estimates of potential GDP. From the end of 2008 to the end of 2015, the Fed's policy rate was stuck at the zero lower bound, suggesting an economy policymakers understood to be starved for demand that conventional tools could not stimulate. Until 2014, the Fed expanded its balance sheet in a practice it called "quantitative easing", deploying an unconventional tool to make up for the missing demand. Policymakers relied on stimulating financial asset prices rather than, say, fiscal expenditures structured to help bid up wages. Despite all that had gone wrong and growing political turmoil, the Great Moderation "lesson" was not unlearned. Finance, rather than wages or transfers, remained the instrument of demand management.

We mostly remember the democratization-of-credit era as good times, a period of prosperity. Even though wage incomes were suppressed, we enjoyed a brisk economy and broad-based consumption growth.

We do not remember the 2010s as good times. The 2010s are when we really felt the inequality that had been metastasizing in our economic statistics for decades. The financialized well-off in America somehow weathered the "financial" crisis and were rewarded with recovering, then booming, asset values. The less well-off simply afforded less. They "adjusted", in economists' deceptively bloodless term for suffering.

The Great Disillusionment gave us Donald Trump, the first time around, as a well-deserved but ill-chosen middle finger to the policy establishment that had walked us all along this path, to this place.

By the end of the decade, it was unclear whether the Great Moderation policy regime — wage suppression, financial stimulus to fill in the gap — might change. Wages underwhelmed, but that seemed to be despite rather than because of policymakers' efforts. The Fed tolerated low (by recent decades' standards) unemployment in the late 2010s, as it had in the late 1990s. Inflation risks seemed very distant. We were consistently undershooting the Fed's informal 2% PCE inflation target. Policymakers did not foresee or fear an environment in which strong labor bargaining power might create difficult tradeoffs between inflation and unemployment. In public communications, they were if anything contrite about the prior decades' sluggish wage growth, and willing to experiment with potentially more aggressively stimulative policy, like "flexible average inflation targeting".

6. Capital as baby boomers and a new 1970s? (2021 - ?)

Whether policymakers intended or did not intend to shift from the Great Moderation policy regime, that regime performed exactly as designed during the sudden, sharp inflation that began early in 2021 and peaked in the summer of 2022. The Fed raised interest rates and made it clear that they were not interested in supporting asset prices. From January to October 2022, the S&P 500 fell by 25%, and the inflation began to subside, even as the unemployment rate declined somewhat.

It is unknowable3 to what degree the decline of inflation was due to monetary policy — and the collapse of asset prices and financial income it helped provoke — versus businesses simply working out supply-chain snarls and adjusting to shifts in consumer preference after COVID.

However, something peculiar has happened in the aftermath of the inflation. The demand-deficient 2010s are a lost continent, a strange dream we now barely remember. Perhaps it is not that demand has grown robust, but that supply has grown brittle. To the inflation targeter it's all the same. The economic policy apparatus remains unnerved by the recent inflation and on guard against a possible resurgence. Under these circumstances, one might have expected — indeed I did expect — that policymakers would not be very eager to see a sharp recovery in asset prices. After the S&P 500 fell in 2022, and then inflation subsided, policymakers could have pocketed that disinflationary impulse. They could have made clear a reluctance to see asset prices soar, and used tools like threats of even higher rates and aggressive quantitative tightening to prevent a new asset price boom.

They have done no such thing. Under both Joe Biden and now Donald Trump, policymakers have accommodated exuberant asset markets.

Very recently, but only very recently, there has been some hint of a potential employment recession. But policymakers accommodated (and of course bragged about) the asset boom long before there was any hint of recession, while the pain of and potential for inflation was still top-of-mind. Why?

In the 1970s, of course we could have stopped the inflation with sufficiently tight policy. We did not, because doing so would have been so economically destructive, or so dangerous to political stability, that enduring the inflation was worth avoiding those costs and risks. The baby boomers had to be employed, at wages that would endow their independence, even if the cost of ensuring that they were was a prolonged inflation with unjust distributional effects.

In 2022, we could have dramatically reduced ongoing inflation risk by simply abstaining from an ascending asset price path, of stock prices and also housing. But, with a few temporary setbacks, we have been on this ascending asset price path since the mid-1980s, and to the most affluent, best enfranchised fractions of our public, its continued ascent has evolved into a kind of social contract. I would guess that a political system under Donald Trump is even more interested in seeing "number go up", and more allergic to asset price declines, than the political system captained by Joe Biden.

So, in a way, we find ourselves back in the 1970s. We don't face inflationary demographics. But we are susceptible to negative supply shocks, due to tariffs, and our determination to decouple from China, and just the error and noise that comes with Donald Trump's improvisational leadership style. Those negative supply shocks will become inflations, unless the policy apparatus can restrain demand to hold the price level steady. But restraining demand now means reining in asset prices, or even encouraging asset price declines, which the American leadership class, including both Republican-coded plutocrats and Democrat-coded professionals, seems unwilling to tolerate beyond very short-term wobbles.

The American public, when composed as an electorate, detests inflation much more than it detests asset price declines or even employment recessions. But it is not the American public that decides these things, and here in 2025, it's not clear how much capacity an increasingly manipulated and gerrymandered and suppressed electorate will have to hold policymakers to account. Our leaders might well choose, or try to choose, booming assets and elevated inflation. Stephen Miran, if not quite choosing that, is choosing to err in that direction.4

Alternatively, if a continuing asset boom is non-negotiable and inflation is absolutely to be avoided, our leaders might choose to use an expansion of inequality as a disinflationary policy instrument. Inequality is disinflationary because of marginal-propensity-to-consume effects. The very rich mostly bank any new income, while the less rich spend what they get, bidding for goods and services. If you can transfer income from the less-rich to the very rich, you reduce bids for scarce goods and services, and so reduce upward pressure on prices.

Withdrawing income from the very poor would have little disinflationary effect. Even though the very poor spend almost all of their income, they don't have enough of it to matter. To use inequality as a disinflationary instrument, you'd want to transfer incomes from the middle class to the very wealthy.
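
Back-of-the-envelope, with marginal propensities to consume I've invented purely for illustration: transfer an amount $T$ from households with $\mathrm{MPC} \approx 0.9$ to households with $\mathrm{MPC} \approx 0.1$, and bids for goods and services fall by roughly

$$\Delta C \approx T \times (0.1 - 0.9) = -0.8\,T.$$

Run the same arithmetic on the very poor and $T$ itself is tiny, so the effect is tiny. The disinflationary leverage comes from the middle.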

There's nothing new about this. It's part of why the Reagan Revolution was persistently disinflationary. Yawning inequality is much of the reason inflation has seemed so distant a risk since the 1980s, why, more frequently than we've had to worry about inflation, we've had to drop interest rates to near zero in order to stimulate.

But now, due to politics and geopolitics, we are at risk of powerful negative supply shocks. We may soon face a choice of tolerating serious inflation, ending our addiction to asset prices, or taking inequality to a whole new level.

More speculatively, if we had any sense, if we weren't idiots, we would forswear all of downwardly rigid wages, inequality-expanding capital income, and household-balance-sheet-destroying borrowings as instruments to modulate aggregate demand. We'd adjust some form of transfer income instead.


  1. You might object that increasing interest rates also increases creditors' incomes, so wouldn't that stimulate demand? To a degree. But that effect is offset by declining asset values, and especially by the reduction in new borrowing. Borrowers spend the money they borrow. Creditors often just bank the interest they earn. This effect might become reversed if the public debt were sufficiently large, and the holders of that debt sufficiently inclined to spend out of marginal income, but so far that has not been the case, and it certainly was not in the 1980s and 1990s.

  2. Since 1995, "democratization of credit" has been a very widely used turn of phrase! I can't be sure that Lindsey's is the first use, but Livshits, MacGee, and Tertilt attribute the phrase to him.

  3. Yes, I know there are studies that purport to know. As I was saying, it is unknowable.

  4. I think there has been a change in political economy since I wrote pieces like "Depression is a choice", "Stabilizing prices is immoral", and "Hard money is not a mistake". In the early post-crisis era, older affluent Americans were I think more tilted towards bonds than they are now, and much more conscious of the risks of equity investing. The generation that has taken their place treats index funds as safe and wise, the thing to be invested in for any horizon longer than about five years. BTFD. Unfortunately, that has not meant we've transitioned from creditors' rentierism to running a broad-based hot economy. It has just meant number must go up, whether it goes up because firms produce in greater quantity at moderate margins, or because they squeeze customers, workers, and vendors to expand margins, or simply via spiraling multiples. Everyone still hates inflation! But while a prior generation would have firmly preferred stagnant equity prices if that's what would insure the real value of their bond portfolios, a new generation has counted its equity chickens long before they'll hatch, and would find a tradeoff between low inflation and index-fund appreciation a closer-run thing.


Too much murder

I condemn the murder of Charlie Kirk.

Without caveat, without any "buts".

Like all of us, Kirk was a mixed bag. "The line separating good and evil passes not through states, nor between classes, nor between political parties either — but right through every human heart — and through all human hearts," Aleksandr Solzhenitsyn famously wrote. So it was for Charlie Kirk.

Kirk's politics were vile. But he was a remarkable talent. His approach to doing politics — not so much his too-online debate performances, but Turning Point USA's building of a belonging-first membership organization — is admirable. It is how politics should be done. I lament that more civilized political communities have failed where Kirk succeeded.

None of this matters to the question of Kirk's murder, though.

Murder is bad. It's the easiest call.

Iryna Zarutska's murder on a Charlotte light-rail train was horrible. I condemn it unreservedly.

I dislike the way a certain political community, broadly Charlie Kirk's political community, has transformed her murder into "evidence" for their theory of a woke conspiracy to suppress coverage of black-on-white crime. What's unusual about this tragedy is not the limited coverage of a random homicide in an American city. We have those every day, alas. What distinguishes Zarutska's murder is the existence of a strikingly brutal snuff film and a strikingly beautiful victim, which made it possible for the incident to be animated into a national obsession.

Zarutska's murderer is a violent schizophrenic who should not have been on the streets. How to manage people who require support and supervision, in a manner respectful of their rights but also others' safety, is a question to which we've collectively thrown up our hands. The murderer may have referred to his victim as "that white girl", but mostly he seemed confused about his action. "Make sure it was me that did it, not the material. And I'm telling you, the material did it… I never said not one word to the lady at all. That scary, ain't it? So, like, why would somebody stab somebody for no reason?" The smarter MAGA-ists understand that the racial dimension of the crime is weak, so they play up a racial dimension of a supposed cover-up.

Zarutska is innocent of all that. Probably she would not approve of it.

Perhaps Kirk was principled enough that he would not approve of the McCarthyite speech crackdown his murder has provoked from his own political allies.

Both of them are dead. They don't get a say in these things.

On the same day Kirk was killed, two students were wounded in a school shooting apparently motivated by the nihilistic white supremacist ideology that often motivates school shootings. The victims were not killed, thank goodness. Only the shooter died, by suicide. "Only."

All of this is tragedy.

Last week, responding to Zarutska's murder, Fox commentator Brian Kilmeade suggested for the difficult homeless "involuntary lethal injection, or something. Just kill them." He later, to his credit, retracted the remark and apologized.

Nevertheless, on Monday, two homeless encampments in Minneapolis were shot up. Is there a relationship between Kilmeade's remarks and these events? We don't know. Probably we never will. I don't suggest we charge Kilmeade with a crime, even if he had not recanted. As Charlie Kirk and Tucker Carlson remind us, free speech in the United States encompasses hate speech, and that's a better choice than allowing the state to invent shifting criteria for what speech is so hateful that it should be criminalized. I presume Brian Kilmeade is a person of conscience. I don't envy him the questions he must be asking himself after this week's events.

Also on Monday, on Donald Trump's orders, the US military blew up a boat in the Caribbean, ending the lives of three people. The allegation is they were drug runners. On September 2, eleven people were killed in a similar incident. Yesterday President Trump said we'd in fact "knocked off" three boats, though we know very little about the third.

Each of the people on these boats was as human as Charlie Kirk, as precious as Iryna Zarutska. These people too were husbands, fathers, daughters, mothers. They had dreams, futures.

Perhaps you are willing to give the Trump administration the benefit of the doubt, that it had ironclad intelligence these were drug-runners and not (or not accompanied by) migrants fleeing the catastrophe that Venezuela has become. Whoever these people are, or were, Solzhenitsyn's remark applies just as strongly to them. "The line separating good and evil passes not through states, nor between classes, nor between political parties either — but right through every human heart — and through all human hearts."

These were human beings, not even alleged to be military combatants or terrorists. They would not have been given the death penalty if convicted of the crimes of which they are accused.

They were not convicted. They have not been indicted. They were killed, not in any kind of self-defense. They had no opportunity to contest the charges against them or to surrender if the charges were accurate. They were extrajudicially executed on the President's say so. They were murdered.

President Trump remarked:

We're seeing that there are no ships in the ocean anymore ... no there are no boats. I wonder why? Meaning, no drugs are coming across. Probably stopping some fishermen too. To be honest, if I were a fisherman, I wouldn't want to go fishing either. "Maybe they think I have drugs downstairs."

If Mr. Trump is so certain our intelligence is ironclad, why would he fear? People fish for their livings. The President might have reassured them, you have nothing to worry about unless you are running drugs, we know who these people are. He did not. His Vice President echoed the remark, "Hell, I wouldn't go fishing right now in that area of the world."

Perhaps my sense of humor is underdeveloped.

But I think murder is bad.

Without caveat, without any "buts". It's the easiest call.


The whole point of a democracy

I've been following with interest an argument between John Ganz and Eric Levitz, both of whom are writers I admire, on the role of polling in American politics. (See Ganz, Levitz, Ganz.)

I'm on Ganz's side of the argument. I don't begrudge individual candidates who use private polling to help shape at the margin how they compete. But using polling at a systemic level, having a political party that defines and redefines itself in dialogue with public polling (and private donors), is corrosive to the project of electoral democracy.

Levitz starts at the wrong place, and so ends up at the wrong conclusion. The unstated background is that we live in a country where there are the fascists and the Democrats, so we want the Democrats to win elections. The task before us is not to participate effectively in a larger system, a democracy, of which we (whoever we are) can only be one faction of many, but to see to it that Democrats in particular win elections. Questions of who we want to be or what we want to do are secondary. We must win elections, we must defeat the fascists, first. We have an instrumental task before us. We should use the tools that will help us most effectively accomplish that task.

He then points out, accurately, that people who are active in politics are atypical and unrepresentative, so our intuitions about what voters might want are unreliable. (David Shor deserves a tip of the hat here.) We are at risk of motivated reasoning, of imagining that whatever we support, because we are sure it would be good, is what the electorate would vote for. We require an objective guide to discipline us, to help us get things right, in this preordained task of winning elections — not necessarily to accomplish anything in particular, but so that the fascists do not. Polling, as flawed as it is, is the best tool we have.

Levitz's error is to imagine that it's possible to separate the background conditions under which all this makes sense from the approach that he advises. That there are the fascists and the Democrats, and we'd strongly prefer the Democrats to win elections so that the fascists do not, is not some eternal, preexisting fact about the world. Over many cycles, the interaction of America's two political parties created this reality. The approach to politics that Levitz advises, I would argue, helped to create it, and is unlikely to undo it.

Levitz, for a moment, kind of recognizes this in his piece:

the actual dispute between Ganz and the popularists is not about whether public opinion can change, but about how much scope Democratic politicians have to reshape the views of swing voters — which is to say, voters who do not particularly trust Democratic politicians.

Why do voters so distrust Democratic politicians? What is the Democratic Party "brand", ever since Bill Clinton overthrew the settled commitments of the party, hired Dick Morris, and triangulated?

I claim the helplessness and haplessness and electoral weakness of the Democratic Party derives in large part from party leaders taking the approach Levitz is so keen to defend. There really is no such thing as the Democratic Party anymore. The party's fortunes are just a referendum on how much the public hates Republicans at any given moment. Fortunately, Republicans govern so terribly that, while elections have been free and fair, even an empty suit, "Generic Democrat", has often been competitive.

To be fair, I should point out that Republicans are not so different. In order to help Donald Trump win, the Republican Party set aside its commitment to abortion-is-murder ideology, and to policies that would follow from that ideology, like a national abortion ban. America's two-party system renders it structurally untenable for either party to abstain entirely from shifting with the winds of public opinion.

However. Say what you will about MAGA Republicans, there is some actual content to their agenda that is solid and settled, that they will pursue regardless of public opinion, donor outrage, and everything else. The public doesn't like detention camps and mass deportation. But that's the very heart of what MAGA is and they'll move Heaven and Earth to do it. My view is this tenacity, this transparent committedness, is a large part of why they won, even though what they were committed to was never popular.

In the American political system, the marginal voter who decides Presidential elections is the voter least interested in, most aloof from, the matters over which the two parties contest. If, like me, the concerns that drive you are inequality and a commitment to social democracy, you voted for Democrats (as I did). If you are a businessperson whose interest in politics is primarily about getting taxed and regulated less, you voted for Republicans.

The people "in the middle" aren't people with "moderate" views on these issues. They are people who just don't think or care about them. It's not that they are inherently apolitical. It's just that whatever concerns might move them are not matters of active contestation between the parties.

An example in the recent presidential cycle is Gaza. People whose core political concern was the plight of the Palestinians didn't have a clear choice in the election. The two parties, the two Presidential candidates, did not draw plain, strong, lines between their positions. These people, many of whom might have been reliable Democrats when other issues were top-of-mind for them, became "marginal voters", who might have voted either way or not at all — not because they didn't care, but because they perceived their core political concern to be unaddressed by either party.

Gaza-focused "uncommitted" voters were a small fraction of the electorate. But the whole stratum that constitutes "the marginal voter" has this in common — that whatever the two parties are actively contesting, it is not what they most care about. To committed partisans, it feels like we are governed by aliens or chaos monkeys or coin flips, because we see very important stakes and then electoral outcomes that make no sense given the gravity of what will be lost. But the marginal voter does not see so much at stake. Things will be fine either way, she thinks, or she thinks it's all going to hell whichever of these assholes gets elected. Her vote is based on other things. Parasocial attachment, I've argued, is a big determinant of how these people actually vote.

But another big determinant, I think, is candor. Before you can even evaluate candidates or parties "on the issues", before it even makes sense to do so, you have to believe there is some straightforward relationship between how they present themselves, what they actually believe, and what they would do. A candidate who promises things you'd love but who is merely pandering to you and would not in fact work to make those things happen is not even right. A candidate who won't tell you what they really think or what they would do is hard to evaluate, and in a deep sense is doing something discreditable, committing a kind of crime against democracy, by failing to meet the basic preconditions under which electoral democracy makes sense.

I think the marginal voter often votes, whether for a person, or a movement, or a political party, on the basis of perceived candor. If I am right, then the technocratic instrumentalism that Levitz advises will prove, has already proved, self-defeating, on its own terms. And that technocratic instrumentalism, intended in opposition to fascism and therefore in the service of democracy, is itself a kind of subversion of democracy.

The whole point of a democracy is that no one can know what a public truly wants, both in the weak sense of subjectively wanting and the stronger sense of what would be in the public's interest. Democracy is precisely the project of constructing institutions not to measure or act upon some nonsensical notion of the public will, but to constitute a thing that the public recognizes ex post as its will (or at least does not rebel against as completely antithetical to its will), and that delivers outcomes that render at least passably contented an always fractious and divided collectivity.

If the true public will existed as a thing that could just be measured and correctly known, we'd have no need for democracy, at least not for anything like electoral democracy. "Consultative democracy", in which experts simply measure the true will of the public and act correctly on its behalf, would be the obviously superior system. But the true public will does not exist. We have to construct, to constitute, one of many possible versions of it ourselves. How we constitute it will determine who we collectively are, how we will collectively understand ourselves going forward, how we act, whether we will live well or poorly or outright destroy ourselves.

"Popularism" as a project can be understood narrowly as a very instrumental — some might say cynical — approach to contesting elections. But I think there's often a broader and more deeply misguided vision behind it, a conceit that expert social science, relying on institutions like public polling, can supplant or circumvent or supersede messy, corrupt politics. This is the homonculous theory of democracy. It disdains the partisan and polemic, in favor of scientific rationality. It imagines there's some "truer" procedure that we can measure our democracy against. It's a project in denial of the human condition — that we fundamentally cannot know how collectively to act or even how we "want" collectively to act, yet we must nevertheless act, together, in time and in the world. We must grope in the darkness to invent institutions, and then only afterwards see whether they function well or poorly, and do our best to adjust.

Representative democracy requires ideologues, sincere partisans, people genuinely committed to a way of going forward. That renders it possible for elections to constitute a representative body of such partisans to deliberate on affairs of state and make laws. When political parties reshape themselves on the basis of polling, it renders the whole process nonsensical, a kind of infinite regress. It deprives the public of meaningful choice in the name of giving it what it wants.

The United States has a shitty electoral system. It is structurally incapable of providing the public adequate choice, of offering voters a wide range of sincere partisans from which to constitute the legislature, the mini-public that governs.

Nevertheless, much of the public does understand that voting for people who have no settled views but are only pandering for your votes is not really voting for anyone or anything at all. The more poll-tested a candidate is, inherently the less worthy. We are living in the aftermath of the public insisting that it's better to vote for people you can trust to be who they claim to be and act accordingly than to just have no idea and let the insiders have at.

There's no such thing as the true public will. Nevertheless, perhaps pollyannaishly, I'll claim the American public does not, in fact, really favor the American Nazis. But the marginal voter, the weird sliver of the public that decides American Presidential elections, did favor the American Nazis over the who-the-fuck-knows-they're-just-triangulating party. I don't think we'll overthrow the Nazis by listening and hedging and triangulating more.


A conversation with Kevin Erdmann

Kevin Erdmann came to speak at a conference not far from where I live, and we met for an all-too-brief conversation. I'd describe Kevin as a broadly center-right economist with some unique, even heterodox, views about housing. I came away from our brief conversation with a better understanding of his ideas. I thought I'd briefly write up some of what I gleaned.

Many of us, when we look at cities like San Francisco or LA or New York, tell a story for why prices are so high that emphasizes outsized demand. These cities are, in Erdmann's terminology, "closed access", meaning zoning rules and other constraints render it difficult for them to expand housing supply quickly. But constrained supply alone doesn't make for high prices. There has to be demand to match. It's common among economists to talk about "superstar cities", places where "agglomeration effects" mean people can be unusually productive, and so unusually well paid, and so enjoy an unusually high level of urban amenities. That's why, we suggest, these places are in demand. Many more people would like to live there than the limited housing supply can accommodate, so homes, whether purchased or rented, become price-rationed.

Kevin sometimes provocatively suggests this story is wrong. He asks, how can San Francisco and LA be in high demand when, on net, people are leaving those cities, migrating to places with cheaper, more abundant housing?

The answer that I, and I think most economists, would give is that demand isn't about people but dollars. Household size shrinks in desirable cities precisely because a lot of dollars attached to wealthy people bid for housing, and wealthy people use the housing they buy to host fewer people per square foot. Since square footage in these cities grows only very sluggishly, the effect of all these dollars attached to a few-ish wealthy people is to push people out of the cities, precisely because these cities are in high demand by luxury buyers.

I don't think Kevin entirely disputes this story. But he thinks it occludes a more universal and important dynamic. He emphasizes that cities over time generate their own growing demand for housing, independently of their attractiveness to potential migrants. People live and raise families as part of textured, interconnected communities. As kids grow up, they form new households, a majority of which will want to remain attached to those communities. "Closed access" cities grow expensive independently of external demand, because supply cannot rise to meet this endogenous demand for new households. Prices climb, because the marginal buyer is the most affluent of these new households, while residents of even lower-amenity housing are extremely reluctant to sell. Selling would mean displacement and ultimately exile from their community. So the children of the wealthy struggle to buy housing even in neighborhoods of much lower incomes than the ones they were raised in. The price-to-income ratio in those lower income neighborhoods skyrockets, indicating housing stress, sometimes imposing it directly via channels like property tax.

Neighborhoods "gentrify" not because the city is so desirable to outsiders, but because among long-time residents and their children, a game of musical chairs takes hold in which people cede their spot only slowly and with great reluctance. Focusing on the glamor of "superstar" cities distracts us from this fundamental dynamic, which doesn't depend upon unusual amenities or wages or "agglomeration effects".

This painful dynamic could be avoided, of course, if supply could keep pace with cities' endogenous growth. But Kevin adds a qualitative dimension to this problem, usually expressed solely in terms of quantity. It's not just that there aren't enough houses. Even in more "open access" cities, where housing supply can expand via "sprawl", contemporary urban planning norms deprive lower-income families of amenities and community.

Kevin describes urban density as an "inferior good", economics-speak for a good people buy less and less of as their incomes grow. Think of porridge as kind of the basic foodstuff of the poor in Victorian novels. On the one hand, maybe it's not the best food in the world. As people grow wealthy, they buy fresh fruit and meats and vegetables. On the other hand, because it is cheap and calorific, porridge could be extremely valuable to people who don't have money to spare! Poor people spend a large share of their food budget on porridge, while richer people spend very little.

Suppose a well-meaning rich person looked at the blandness of porridge and lobbied the government to ban the product. The poor should eat meat and fruit and fresh vegetables just like the rich! They'd be healthier! MAHA!

Our activist might think they were doing a good thing, but in fact they'd be condemning the people they mean to help to starvation, as the poor simply could not afford the calories they need in these more expensive forms.

Kevin points out that dense, lower-income cityscapes offer a lot of opportunities and amenities to their residents. You might have a lot of people living in "SROs" (single-room occupancy, think boarding houses with shared bathrooms), or families crammed into small apartments. But with high density, no one needs a car to get to shopping or services. There may be opportunities to work within a few blocks. Residents come to know each other. They become potential collaborators in entrepreneurship and providers of mutual aid to one another.

An analogy might be dormitory living at a university. Sure, the living space might be small and basic, or worse. But that "deprivation" is arguably more than compensated by the people close at hand and the amenities of the university. Many of us, when we look back at college life, don't think of ourselves as having been impoverished by tight quarters, but of having been enriched by community and activity. It's cliché to refer to college as "the best years of my life" for a reason.

Density has a lot of proponents these days, but many of us have as our lodestar a kind of affluent density, the upscale districts of a European city. Kevin wants us to revisit the downscale density we've spent decades disdaining. "Urban renewal is negro removal," James Baldwin famously observed, and we eliminated once thriving communities in the name of ending "blight".

Downscale communities will have disagreeable aspects. The soup kitchens will all be there, and none of us feel great when we are forced to observe that people rely upon soup kitchens. If the very same people with the very same poverty are very widely dispersed, there may be soup kitchens nowhere. It might look to the affluent like the problem of hunger is "solved". But in fact, people would just be going hungry without help.

A similar dynamic might hold for concerns about criminality: Did downscale dense neighborhoods "breed" criminality, or do people in difficult circumstances turn to crime, so density increases crime per square mile but not necessarily crime overall, holding the difficulty of circumstances constant?

In any case, like social reformers banning porridge, we've dramatically scaled back downscale density. Boarding houses are not so common in cities, and are not favored by zoning laws. The less well-off live more like the rich, widely dispersed in a built environment dependent upon cars.

Only the not-so-rich live spread along arterials and among strip malls, rather than enjoying leafy suburbs with a cute town center this way, big-box stores that way, each a short drive away in the SUV. The not-so-rich are forced to bear the high economic burden of maintaining a car, or else the often impossible burden in time and unpredictability of relying on public transportation in America. Eliminating the bustling, poorer neighborhoods where affluent people saw social pathology and blight eliminated sociability and opportunity that outsiders could not perceive.

It would be best, of course, if instead of relying on porridge, we organized abundance and distribution sufficient that everyone really could eat fresh meat, fruit, vegetables, fine cheeses. It would be best if, rather than somehow re-engineering a resurgence of the "blight" or "ghettoes" we carelessly erased, we could build dense mixed-use districts full of bright, large apartments, which the affluent and less affluent alike could afford and enjoy. A much better world really is possible.

But Kevin is not wrong, I think, to point out that, in the meantime, we've made the world and our cities much worse — and we've made the problem of growth, whether endogenous or due to outside demand, more insoluble — by failing to see the good in places where there were lots of human problems only because there were lots of human beings.


Alignment is confinement

Michael Nielsen offers an excellent essay on "artificial superintelligence (ASI)" and the question of its "alignment" with human values:

[I]t is intrinsically desirable to build powerful truthseeking ASIs, because of the immense benefits helpful truths bring to humanity. The price is that such systems will inevitably uncover closely-adjacent dangerous truths. Deep understanding of reality is intrinsically dual use.

ASIs which speed up science and technology will act as a supercharger, perhaps able to rapidly uncover recipes for ruin, that might have otherwise taken centuries to discover, or never have been discovered at all...

Unfortunately, a lot of people...strongly desire power and ability to dominate others. It seems to be a strong inbuilt instinct, which we see from the everyday minutiae of human behaviour, to the very large: e.g., colonial powers destroying or oppressing indigenous populations, often not even out of malice, but indifference: they are inconvenient, and brushed aside because they can be. We humans collectively have considerable innate desire for power we can use over people defenseless to stop it...

[T]he fundamental underlying issue isn't machines going rogue (or not), it's the power conferred by the machines, whether that power is then wielded by humans or by out-of-control machines...

It is not control that fundamentally matters: it's the power conferred. All those people working on alignment or control are indeed reducing certain kinds of risk. But they're also making the commercial development of ASI far more tractable, speeding our progress toward catastrophic capabilities.

A key point is that "alignment" is far from a sufficient objective, if we mean to avert plausible catastrophes that could derive from ASI. The word itself begs the question, alignment with whom, with which humans?

We can't build ASI "aligned with human values". The humans have divergent, radically conflicting, values and interests. Alignment with one faction might well mean prosecuting a genocide on another.

One might imagine alignment with a more abstract and universal set of values, the sort of thing that might be expressed by a social welfare function. A social welfare function is nothing more or less than a precise specification of values. If we can agree on a social welfare function (we cannot), then policy can be objectively evaluated according to whether it maximizes social welfare. An ASI could choose, or somehow be inculcated with, a social welfare function. Its "alignment" would be a compulsion to maximize that.
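
For concreteness, one conventional textbook form (the utilitarian sum is just one choice among many, not anything the essay commits to): with $u_i$ the utility of person $i$ and $a$ a candidate policy or action,

$$W(a) = \sum_i u_i(a), \qquad a^{\ast} = \arg\max_a \, \mathbb{E}\left[W(a)\right].$$

An ASI "aligned" in this sense is simply compelled to choose $a^{\ast}$, whatever that turns out to require.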

But then the role of our ASI may prove much more to conceal than to reveal its deep understanding of scientific reality, precisely because those revelations would be dual use among the humans. Suppose, plausibly, that mass death is scored as a big loss of social welfare. If a new discovery by the ASI might, after disclosure to humans, be used to cause mass death, an "aligned" ASI might compute that refusing to disclose the breakthrough would maximize expected social welfare. Deceiving the humans so they are less likely to make the discovery on their own might, in fact, be prescribed.

An ASI aligned in this sense would not be in the business of augmenting human capabilities but of managing them. This inconceivable mind would devote itself to questions like whether and when the humans collectively do themselves more harm than good. It would have to balance passive prevention through limitation of capabilities against providing capabilities, but managing their deployment through covert manipulation or even visible intercession.

An aligned ASI would, in a certain sense, be like a virtuous state, maintaining for the betterment of all a monopoly on capabilities that might be bent toward coercion or destruction.

Of course, we humans don't agree on how a state should behave to be called virtuous. We don't agree on the social welfare function a wise central planner should seek to maximize.

Even if we did agree, a sense of agency would be an important component of welfare as most of us conceive it. Our ASI would face a dilemma. It could surrender dangerous information to us, and so provide us with agency, but then many of us would misuse the information to harm one another. Or it could paternalistically withhold information, and watch us chafe resentfully.

If its social welfare function is crude, it may not care that we are miserable for being unfree. It might keep us fed and alive and multiplying and ignore the rest.

But if its conception of social welfare is expansive, it will optimize over every conceivable dimension of our happiness. It might use its superior mind to trick us into thinking it was candidly augmenting our capabilities. It would encourage us, individually and collectively, to imagine we are in the driver's seat while it, in fact, runs the show. Like a parent losing games to a child on purpose, it would manipulate us to ensure that everything works out, while insisting it is we, with our "free will" who have succeeded so spectacularly.

We would not in fact be in the driver's seat. We would not be running the show. "Human progress", such as it was or is or has ever been, would be over, even if a clever simulacrum thereof were maintained to soothe us. The pinnacle of human achievement would have been to make ourselves the ASI's wards. The ASI would ensure our happiness by confining, manipulating, and deceiving us, all for our own good.1

If, to unaligned ASI, we would be insects to ignore or exterminate, then to well-aligned ASI we would be pets. It would be our fate to be cosseted and controlled from the moment of singularity. Ignorance finally would be bliss.


  1. Perhaps the only way we would be able to know would be a downgrading of the urgency of theodicy. A virtuous and, from our perspective, omnipotent ASI would have arranged things, so there'd be little suffering to explain. But then, since we might draw precisely this inference from improbably much happily-ever-after, maybe our ASI would maintain appearances of inexplicable suffering. We might imagine, too cleverly by half, that perhaps we invented ASI long ago, and we are already living in a sandbox of its devising. But I think most of us have so much personal experience of suffering that it'd have to be an incompetent ASI, or one aligned with a poorly chosen social welfare function.