latest posts:

Where am I?

I am in Montevideo, Uruguay. My family has mostly moved here.

We retain a US residence, in the most economical way I could arrange. To the degree it remains safe to do so, I hope we travel back to see family frequently.

But my kid will attend school in Montevideo, not in the US.

Yes, we have emigrated in response to the election of Donald Trump. We lived the last few years in suburban Pinellas County, Florida. We were very unhappy there. But prior to November 2024, we were gathering ourselves to move to a mid-size US city where a transit-centered urban life might be affordable. We were thinking about Pittsburgh and, um, Minneapolis, even though my wife and I both detest cold. In the US, we enjoyed living in San Francisco, but 400-ish square feet had become intolerable, and even that is less affordable now for us than it had been when we moved.

Leaving the country was a decision based on information flows much slower than the news cycle. While we were packing up our lives in December, I was more optimistic about the direction of the US than I had been since the inauguration. But we were already committed to the move, and had been by May, with advances paid to lawyers and to a private school where the kid will be taught in English. (We figure he'll pick up Spanish quickly, but he's worked very hard in school, and we didn't want to upend his academic successes by throwing him into the deep end of education in a foreign language.)

Was this the right thing to do? I have no idea. You can see some of my thinking coming together in this piece from February.

I feel like a coward. I am relatively privileged as an American — economically, racially. My family is far from the top of the list of people under threat. But then events in Minneapolis are a reminder of how quickly the Niemöller poem can be run through.

Putting aside my own welfare, and my family's, is this the right thing to do, ethically? I don't know. I do feel I have an obligation to do my part, to fight for the United States. I am an American, and always will be. I remain tremendously proud of what the American experiment meant and still means in aspiration. We hold these truths to be self-evident, that all men are created equal. Government of the people, by the people, for the people.

Of course I am, we all should be, ashamed of how short the American reality has often fallen from those aspirations. Of how short the United States is falling now.

But the American project remains, in my view, a miracle in world history, and though it is wobbling — drunkenly, dangerously — we can and should and must save it.

My tiny contribution to human affairs comes through words. I mean to continue to write them. I will continue to have lots to say about how, in my view, the American experiment can be reformed, refounded, rise again from the pile of toxic ashes it has become.

I don't write much about my personal life here. That's not what this blog is for. But there are questions of standing, so you should know. Perhaps my cowardly flight means I have no standing to offer advice to the United States. That, dear reader, is for you to judge.

I will, nevertheless, do my best. If you are ever in Montevideo, please let me know.


Constant real wages can hide a lot of pain

A thing that puzzles economistic types is why the public gets so mad about inflation even when, according to the stats, wages keep up with, or even more than keep up with, prices.

Some explanations are basically psychology. People anchor on certain prices as reasonable, and it takes a while to overcome a kind of instinctive sticker-shock revulsion. Can that box of cereal really cost $8?

There's what one might describe as the fundamental inflation-attribution error: People interpret wage increases as the just deserts of their own efforts, but price increases as exogenous phenomena, beyond their control and unfair. As I wrote a while back:

If a worker earns a constant nominal salary at a constant price level, she just never got a raise. But if a person gets a 4% raise and the price level rises by 4%, her experience is she earned a 4% raise through sweat and skill and staying late, ginning up the nerve to ask, demanding, holding firm. But then Joe Biden came along and took it away from her with his inflation.

There are also explanations that are more objective, less psychological.

When inflation is high, the dispersion of price and wage changes is also high. Under zero inflation and constant wages, forward-looking economic calculation is easy. You can think about what you'll need a few years from now, observe how much those things cost, and try to save what you need.

Your plans won't be perfect. Maybe the price of cereal goes up while the price of bacon goes down. In aggregate there's no inflation, but if you were planning on a lot of cereal purchases, you'll come up short. Relative prices shift regardless of changes in the overall price level. However, under zero or low inflation, prices change only in response to supply and demand fundamentals.

But when the price level changes quickly, the dispersion of price changes tends to increase. Some prices are sticky and increase only sluggishly. Others spike easily. If the overall inflation rate jumps to 8%, but housing increases only 4%, then some substantial fraction of other prices are likely to rise by more than 10%.
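To make the arithmetic concrete, here's a trivial sketch. The one-third housing weight is a round, illustrative number, not an exact CPI weight:

```python
# A weighted-average identity: if housing lags headline inflation,
# the rest of the bundle must run hotter. Weights are illustrative.
housing_weight = 1 / 3          # round number, roughly CPI-like
overall_inflation = 0.08
housing_inflation = 0.04

# overall = w * housing + (1 - w) * rest  =>  solve for rest
rest_inflation = (overall_inflation - housing_weight * housing_inflation) / (1 - housing_weight)

print(f"Implied average inflation of non-housing prices: {rest_inflation:.1%}")
# ~10.0%, and that's just the average; many individual prices rise by more.
```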

Quite directly, this means economic calculation is harder when inflation is 8% than when it is 2%. It's not enough merely to take into account that, a year from now, you'll need 8% more cash to buy the same stuff. Even if you are confident the inflation rate will remain 8%, you face more uncertainty due to shifting relative prices of the things you need. If what you'll need next year is very close to something like the CPI "average" bundle, then sure, figure on an extra 8%. But relatively few consumers actually purchase the average bundle. If what you'll need deviates substantially, maybe planning for 6% higher costs will cover you. But maybe planning for 10% higher costs won't.

Even if your wage increases will exactly match the inflation rate, the risk you face of suffering shortfalls increases as the inflation rate grows.
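You can see this in a toy simulation. Everything below is illustrative, my own made-up parameters rather than estimates: twenty goods, random household bundles, and the assumption that the dispersion of price changes scales up with the inflation rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def experienced_inflation(headline, dispersion, n_goods=20, n_households=100_000):
    """Inflation as experienced by households buying different bundles.
    One shared vector of price changes; households differ only in weights."""
    price_changes = rng.normal(headline, dispersion, size=n_goods)
    weights = rng.dirichlet(np.ones(n_goods), size=n_households)  # random bundles
    return weights @ price_changes

# Made-up assumption: price-change dispersion grows with headline inflation.
for headline, dispersion in [(0.02, 0.01), (0.08, 0.04)]:
    exp_infl = experienced_inflation(headline, dispersion)
    lo, hi = np.percentile(exp_infl, [5, 95])
    print(f"headline {headline:.0%}: 5th-95th percentile of experienced "
          f"inflation runs {lo:.1%} to {hi:.1%}")
```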

What is true of prices is also true of wages. When inflation increases, the dispersion of wage changes also tends to grow. Employees with lower bargaining power see their wages grow more slowly than the rate of inflation. They end up taking real wage cuts. Employees who are in demand capture real gains, flitting between jobs or negotiating raises to ensure their pay increases more than enough to cover rising prices.

Suppose that, in aggregate or at the median, wage gains exactly match inflation. Let's say roughly 50% see wages grow faster than inflation, and roughly 50% see wages grow more slowly.

When inflation is low, the dispersion of these changes is low, so these differences aren't a huge deal. If inflation is 2%, and a substantial fraction are experiencing 2.5% nominal wage growth while their less fortunate twins experience 1.5% wage growth, the worst anyone experiences is real wages declining by 0.5%. That's not great, but perhaps it's not a crisis, if a given worker experiences that for a year or two. Losers fall behind their more fortunate neighbors by 1% a year. Again, not great, but not a catastrophe.

But when inflation is high, it's a different story. If inflation is 8%, and a substantial fraction experience 10% nominal wage growth while their doppelgängers experience only 6%, then the less fortunate see real wage declines of 2%. That's more than enough to really hurt, a loss of more than $1500 for a typical household. In relative terms, losers fall behind winners at a conspicuous 4% per year. Ouch.
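Here's that arithmetic spelled out, using the usual approximation that real wage growth is nominal growth minus inflation, and an illustrative $75,000 of household income, which makes a 2% real cut the $1,500 loss mentioned above:

```python
typical_income = 75_000  # illustrative household income

scenarios = [
    ("2% inflation, lucky",    0.025, 0.02),
    ("2% inflation, unlucky",  0.015, 0.02),
    ("8% inflation, lucky",    0.10,  0.08),
    ("8% inflation, unlucky",  0.06,  0.08),
]

for label, nominal_raise, inflation in scenarios:
    # approximation; the exact figure is (1 + nominal) / (1 + inflation) - 1
    real_change = nominal_raise - inflation
    print(f"{label}: real wage change {real_change:+.1%}, "
          f"about ${typical_income * real_change:+,.0f}")
```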

In aggregate, of course, winners exactly match losers. If Joe is suffering 2% real wage cuts and Jane is enjoying 2% real wage gains, shouldn't we just score that a wash?

No, we shouldn't. We know, as a matter of descriptive economics, and presume, as a matter of normative economics, that utility or welfare is diminishing at the margin in wealth. Holding the average wage constant and all else equal, the greater the wage dispersion, the lower the welfare. Starting from the same wage, it hurts us more to lose 2% than it benefits us to gain the same 2%. A typical household will not consent to betting $1500 on a flip of a coin.
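If you want it concrete: log utility is the simplest textbook example of diminishing marginal utility. The income figure is again illustrative; nothing in the argument depends on it.

```python
import math

income = 75_000   # illustrative
stake = 1_500     # the coin-flip bet from above

u = math.log      # concave utility: losses hurt more than equal gains help

utility_no_bet = u(income)
expected_utility_bet = 0.5 * u(income + stake) + 0.5 * u(income - stake)

print(f"U(no bet)       = {utility_no_bet:.7f}")
print(f"E[U(coin flip)] = {expected_utility_bet:.7f}")
# E[U] of the fair bet is strictly lower, so a typical household declines it.
```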

That's basic, old-school, conventional economics. If we add more psychological, "behavioral-economic" phenomena, like habit formation, endowment effects, and aversion to loss of relative status, the cost of real wage dispersion grows even more.

8% inflation matched, in aggregate or even at the median, by 8% wage growth is much worse than 2% inflation matched by 2% wage growth. Some of that pain is psychological. But higher inflation means higher relative price dispersion and therefore higher risk for nearly all consumers. It means costly wage dispersion not matched to differences in productivity.

When you combine these effects, a substantial fraction of the public ends up losing both gambles. They experience prices growing faster than the headline inflation rate, and wages growing more slowly. The welfare loss this group experiences overwhelms the benefits enjoyed by those who win the same gambles. Everyone faces more risk. Adding zero-sum noise to economic outcomes is not a wash. It's a loss.
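A toy Monte Carlo of the combined gambles, again with made-up parameters, makes the asymmetry visible: mean real income is essentially the same in the low- and high-inflation worlds, but mean welfare (log utility) is lower where dispersion is wider.

```python
import numpy as np

rng = np.random.default_rng(1)
n_households = 100_000
income = 75_000  # illustrative

def mean_log_welfare(inflation, wage_sd, price_sd):
    """Mean log utility of next year's real income when both wage growth
    and personally experienced inflation scatter around the headline rate."""
    wage_growth = rng.normal(inflation, wage_sd, n_households)
    my_inflation = rng.normal(inflation, price_sd, n_households)
    real_income = income * (1 + wage_growth) / (1 + my_inflation)
    return np.log(real_income).mean()

low  = mean_log_welfare(inflation=0.02, wage_sd=0.005, price_sd=0.005)
high = mean_log_welfare(inflation=0.08, wage_sd=0.02,  price_sd=0.02)
print(f"mean log utility at 2% inflation: {low:.6f}")
print(f"mean log utility at 8% inflation: {high:.6f}")
# Mean real income barely differs; mean welfare is lower in the noisy world,
# because losers' losses loom larger than winners' equal-sized gains.
```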


When is the economy good or bad?

The great "vibecession debate" continues. Is the economy good or bad?

Y'all know I think it's bad, bad now under Trump, bad even when people with my politics touted its virtues as Bidenomics.

Whatever.

My purpose here isn't to take sides, but to offer a bit of a historical perspective on how we perceive the economy. Let's take a look at a chart:


[Chart: real median family income, log scale, divided into eras. Click to enlarge.]

This chart is of the Census Bureau's real median family income series.

I didn't choose the series because I think it is a uniquely valuable indicator, but because the "data guys" on the Panglossian side of the debate like to point to real median household income. Unfortunately, the household series only goes back to 1984, and I wanted to go back farther. The family series goes back to 1953.

The two are qualitatively very similar. The main difference is that single-member households are excluded from the family series, pushing the median income upward. (Income from unrelated individuals who cohabitate with a family gets excluded from the measure, pushing incomes down, but this effect is small.)

The first thing to point out, as I've tried to emphasize before, is there's very little information in the level of these series.

In 1978, real median income was apparently higher in absolute terms than it ever had been! It was true then, as it is now, that the typical American was living a lifestyle that the greatest kings and emperors of prior centuries could never afford. GDP per capita was growing faster than 5%, a rate we've only rarely achieved since. Yet very few people, then or now, would argue that the 1978 economy was great.

You can see trouble in the data, just in other data. Arthur Okun, an influential economist of the era, invented the "misery index", and that was pretty high in 1978, although it had come down dramatically from prior years. But much more than then-current conditions, commentators at the time felt — with foresight ultimately vindicated — that the relatively benign moment was a lull in a continuing storm. Despite good data, it was a bad time.

You'll notice that I've divided my chart into eras, and the era to which 1978 belongs is shaded not red for "bad", but yellow for "meh". That is because I am data-driven. Most commentators would say that the period I've marked as the "great inflation" was the worst economy of the postwar US except for the Great Financial Crisis. But by the numbers, by these numbers, the period is only a slow-growth stagnation, comparable to the aughts.

The key point is that the goodness or badness of the economy is never an objective fact about the world. It is a thing we construct, in a project often contentious in real time, but that usually hardens into consensus ex post. During soft patches, we debate whether we are in or near a recession. After a while some committee at the National Bureau of Economic Research gives a verdict that becomes authoritative, if not necessarily true.

There are an infinity of potential data sources, many of which will tell conflicting stories. There is no consistent and universal set of weightings we can apply in order to evaluate the economy. Data sources exist today that could not have existed even a few years ago, like AI-generated evaluations of media sentiment. The stories we tell, the evaluations we give, are never science. They are pastiche. The quality of the economy exists in the netherworld between subjective and objective, in the purgatory of the judgment call, where we humans also reside.

Although there is almost no information in the level of this median real family income series, there is a ton of information in the changes. Despite a misclassification of the great-recession era, turning points in this series do a really good job of segmenting the economy into recognizable eras and marking them as "good" or "bad". That's how I've parsed out the eras on the chart.

I've graphed the series on a log-scale, so that slopes correspond to growth rates. Part of the apparent virtue of the graph results from a kind of look-ahead bias. I define a turning point at 1969 rather than a similar break in 1971 or the more defensible turning point at 1973 because I know that inflation begins to draw up its skirts in the late 1960s. I arbitrarily chose four years as my minimum length of an era, because I didn't want to segment out every leg of every business cycle. This forced my Volcker Shock to extend a year longer than a mechanistic segmenting of turning points would have yielded. Still, the graph is pretty close to a mechanistic segmentation, and marks out eras commonly understood and discussed surprisingly well.1
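For the curious, here is a sketch of what a mechanistic segmentation might look like. It is not the code behind my chart, which, as confessed, admits judgment calls; it just illustrates the rule of breaking at turning points while enforcing a minimum era length.

```python
import numpy as np

def segment_eras(years, log_income, min_length=4):
    """Break a (log) income series into eras at its turning points,
    skipping any break that would create an era shorter than min_length.
    Returns (start_year, end_year, average growth rate) per era."""
    years, y = np.asarray(years), np.asarray(log_income)
    diffs = np.diff(y)
    # turning points: indices where year-over-year growth changes sign
    turns = [i for i in range(1, len(diffs))
             if np.sign(diffs[i]) != np.sign(diffs[i - 1])]
    breaks = [0]
    for t in turns:
        if t - breaks[-1] >= min_length and (len(y) - 1) - t >= min_length:
            breaks.append(t)
    breaks.append(len(y) - 1)
    # on a log scale, the slope over an era is its average growth rate
    return [(years[a], years[b], (y[b] - y[a]) / (years[b] - years[a]))
            for a, b in zip(breaks, breaks[1:])]

# e.g. segment_eras(range(1953, 2025), np.log(median_income)) on annual data
```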

A few points are worth discussing. Just on the basis of this little exercise, it's unsurprising that we have nostalgia for the period from 1954 to 1969. In the history of the economy as represented by this one, imperfect, measure, there is no other period with the length and slope — the persistence and growth — of this period. "Make America Great Again" gets some of its rhetorical force from the fact that there really was, in living memory, a kind of golden age to yearn backwards toward.

(Much of this era was still under Jim Crow. But it was also when the civil rights movement bore its long-sown fruit. The period was riven by unrest and political assassinations. It was a time of remarkable music. Whether on balance it is reasonable to describe the era as a golden age is a judgment call that belongs to you, dear reader.)

I was very surprised to see that the only period that comes close to the postwar boom, in growth but not duration terms, is the Obama-Trump era, from 2012 to 2019. I will confess to you, dear reader, that my own subjective evaluation of this period would be "meh" to "bad". But by the numbers, by these numbers at least, 2012–2019 was the best economy since the postwar boom. Nevertheless, in 2016, in the middle of this apparently great period, the American people wanted a revolution. When the possibility of a benign revolution lost a Democratic primary, they chose a malignant one. Data is only useful if we remember to discuss what it omits as carefully and extensively as we describe what it seems to say.

Although I understood (and wrote ad nauseum about) the deficiencies of the Biden economy, I was surprised that restoring the Trump economy was compelling as a pitch to the American electorate. This data suggests I should not have been surprised.

Still. Was 2012–2019 a better economy than the 1990s boom? For those of you old enough to remember, I'll leave it to you to decide. We were all so much younger then. Is that the reason why it seems so wrong to me? I don't think so. I don't think a 28-year-old in 2018 had the kind of optimism about the economy that I had in 1998.

Today's "vibecession" debate is about how we will, or should, categorize the rightmost five dots on the chart. Based on this dataset alone, it's not surprising that we are arguing. If I were to draw a box around it, my mechanistic classification scheme would have it yellow, "meh". Sometimes we end up deciding the yellow periods are actually bad (the "great inflation"), sometimes we look back at them as decent if not amazing times (the "iffy aughts"). For now, at least through the lens of this dataset, it really is a "faces or vase" kind of question.

The period those last five data points most resembles is the late 1970s. The shape of the series is a similar, shallow "V", and for similar reasons: an inflation spike had eaten away at family purchasing power.

If we pair the series with Michigan consumer sentiment, we find the current pattern looks pretty similar to 1979 and 2007, peaks in real median family income followed by collapses in sentiment:

[Chart: real median family income paired with Michigan consumer sentiment.]
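Both series live on FRED, if you'd like to reproduce the pairing. A minimal sketch; the series IDs are my assumption (MEFAINUSA672N for real median family income, UMCSENT for Michigan consumer sentiment), so double-check them.

```python
import pandas as pd
from pandas_datareader import data as pdr

# Assumed FRED series IDs; verify before relying on them.
income = pdr.DataReader("MEFAINUSA672N", "fred", start="1953-01-01")
sentiment = pdr.DataReader("UMCSENT", "fred", start="1953-01-01")

# Income is annual; average monthly sentiment to an annual series to match.
paired = pd.concat([income, sentiment.resample("YS").mean()], axis=1).dropna()
print(paired.tail())
```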

However good you think the economy is or recently was, if it's not just a vibecession, or if a vibecession can be self-fulfilling, the current set-up looks a bit ominous.

But let's end this with something cheerful!

A series not typically taken as an economic indicator, but that I like, is the suicide rate. I think the suicide rate is useful because it gets at welfare, which is properly the concern of normative economics, rather than "stuff", which is ultimately meaningless except to the degree that it contributes to welfare. King Midas could buy anything in the world, but he was not wealthy.

Welfare is a normative, rather than a positive or "scientific" concept. It is a thing we define, rather than measure. Nevertheless, it is what we care about, the ultimate object of our investigations, and in that sense welfare is more real than all of the things we can measure.

Once we have imposed a definition, we can try to measure proxies for welfare under our definition. I am not going to impose a specific definition, but I'll assert that "people not wishing to kill themselves" correlates with a wide variety of plausible conceptions of welfare, so the suicide rate might be a proxy, no doubt dirty and imperfect, for diswelfare.

Here's a chart:

[Chart: US suicide rate since 1900, with the same eras overlaid. Click to enlarge.]

One thing that's fairly obvious is that the suicide rate does in fact have a pretty strong relationship with things we consider "economic". Both the Panic of 1907 and the Great Depression are very visible on the chart.2

I've included on this chart all the periods we teased out of the real median family income chart above, and bounded them with the same dates. I think the two charts complement one another. In our earlier discussion, classifying the Great Inflation as a "meh" rather than a "bad" period seemed wrong. We see a sharp rise in the suicide rate during the period, which suggests that "meh" should be "bad". The suicide rate during the "iffy aughts" is much more restrained, suggesting a more benign period, as I think we mostly remember it.

There's a surprising spike in the suicide rate during the "Morning in America" period. I don't have an explanation for, or a story to tell about, that. I am eager to entertain yours!

But for the period following, I think layering the suicide data on top of the purchasing power data helps us reconcile the numbers with our intuition. On the real median family income chart, the George-HW-Bush-era recession is an event comparable to the Volcker Shock and the Great Financial Crisis. Those of us who lived through these events know that's wrong: the early 1990s recession wasn't fun, but it was a milder downturn than what the nation had gone through a decade earlier, and would go through nearly two decades later.

The suicide rate tells this tale: It rises during the Volcker Shock and rises sharply in the wake of the Great Financial Crisis. It is declining during the GHWB recession.

The suicide rate data also helps us reconcile discrepancies surrounding the late 1990s boom and the Obama-Trump expansion. My dark view of the Obama-Trump period may be idiosyncratic, but I think nearly all observers would agree that the late 1990s expansion was a bigger boom, a better economy, than the late 2010s. We see the suicide rate decline quite sharply during the late 1990s boom. It rises through most of the Obama-Trump expansion.

In general, there has been a strong upward trend in the suicide rate since the Great Financial Crisis, corresponding I think with the sense of upheaval and malaise also visible in social instability and the rise of outsider political movements. There's a kind of dark matter. Something is amiss, something is wrong that more conventional economic measures aren't capturing.

Perhaps the explanation is something not conventionally economic, negativity bias in social media for example. After all, social media and smartphones become prominent at approximately the same time as the Great Financial Crisis, so maybe they are the cause of the deadly gloom.

That is not my view. Long-time readers will know that I think our secular decline in welfare is related to the bifurcating class structure of American society and the unfairness implicit in how it must be sustained. In my view, the Great Financial Crisis both exacerbated these class divides and unmasked them, made them impossible to paper over as we had managed previously.

I could be wrong! Maybe it really is just down to social media.

All I'll say is whatever it is, if there's a widely distributed product that makes us want to destroy one another and kill ourselves, that strikes me as very much an economic problem, an object that demands economic regulation. "Tech" is a big part of our economy!

Economics is either a positive science of predicting human behavior (in which case it has nothing to say about policy, except to inform other people who decide what we want), or it is a normative enterprise and its object is human welfare. If you are an economist and you say we should do X or Y or Z because it would increase GDP, but you have no reason to think the increase in GDP will make us better rather than worse off other than loose historical correlations, then you are much worse than useless. If it is plausible to you that big chunks of our economy are products so harmful they amount to half a Great Depression in suicide rates, then "maximizing GDP" cannot be a remotely decent proxy for the human welfare that it is your vocation to improve.


  1. If I hadn't cheated at all, if I'd just placed boundaries at sharp turning points, the main difference would be taking the Postwar Boom as continuing through 1973, and then a shorter Great Inflation. The Volcker Shock would have been one year shorter, Morning in America one year longer.

  2. I marked the Great Depression as beginning in 1929, and ending in 1941, when the US joins the war. The Panic of 1907 begins in 1907, and I take its ending to be 1915, the last year of high unemployment. For more on the relationship between suicide rates and the business cycle, see Luo, Florence, Quispe-Agnoli, Ouyang, and Crosby. The suicide rate chart is a composite of three data sets stitched together to cover the period: CDC Health, CDC (recent) Suicide Data, and CDC HIST290. Crude rather than age-adjusted measures are shown. Age-adjustment seems not to alter the qualitative picture. I want to thank Matt Darling for first showing me suicide rate data from 1900, further piquing my curiosity, and the late, irreplaceable Kevin Drum, who put together the chart Matt posted.


The qualitative is the foundation of the quantitative

People often use quantitative data to try to debunk other people's qualitative impressions.

But when the data at issue are putative welfare measures (e.g. real income, GDP-per-capita, etc), the qualitative measures are foundational, and the quantitative mere proxies.

GDP-per-capita is widely used as a welfare measure, not because it conceptually maps well to welfare — for all kinds of reasons it does not! — but because from the mid-20th Century to the first years of the 21st, it mapped pretty well to our qualitative intuitions about relative welfare.

The consensus that there is a good correlation between GDP per capita and qualitative welfare has broken down more recently. (Is Mississippi really "richer", in human welfare terms, than Spain?)

We can have arguments about why it has broken down. Inequality, differences in how medical and social insurance are accounted in GDP, the effect of market power, and failure to account for differences in leisure time are all candidates. But fundamentally, GDP-per-capita was only ever a good measure because there was a widespread consensus that it tracked qualitative outcomes. Once that consensus has broken down, there is no reason to think it should be a welfare measure. The inventor of GDP, Simon Kuznets, explicitly argued that it should not be! (ht Marketplace for the source)

The same is true of "real" wealth or purchasing power measures! They are not inherently welfare measures. (I belabored this in a recent post.)

If people are making qualitative claims that some group's welfare is poor, and you try to debunk those claims with quantitative data, whether GDP-per-capita or real purchasing power measures, you are engaged in a kind of circular reasoning. The only reason we think these should be welfare measures is because they sometimes seem to work at capturing qualitative intuitions about relative welfare.

If qualitatively they seem to cease to work well as welfare measures, then there is no reason to think they are good welfare measures. When you debunk widespread qualitative claims about welfare with this "data", you are really debunking the quality of your measures!

That's not to say unevidenced claims about qualitative welfare must be taken as gospel, at face value. The claims could still be wrong! Welfare is unobservable, hard to measure. This is economics' foundational demon as a "science".

The moments when there is consensus that any quantitative measure maps to welfare are fleeting and precious. During those exceptional moments, it may come to seem plausible that we might maximize welfare "scientifically".

But that is only hubris. When that consensus flags, like now, we have to cop to the fact that human welfare is not a scientific observable. Welfare, by which we mean "prosperity" or "thriving", is something we experience qualitatively, we construct normatively, we strive to achieve politically. Those who think they have it as fact are only imposing elaborate fictions.


Running on democracy hasn't been tried

I am very grateful for this post, by Matt Yglesias:

I know this is deeply unfashionable but IMO politics largely consists of two things:

— Pandering to the fickle views of the voters to try to win elections

— Handling a series of tedious technical issues that most people don't care about or understand well

Having strongly held views not helpful.

I fear readers will think I'm being sardonic, that I intend this as a back-handed diss at Yglesias. I do not. I have my differences with the gentleman. But one real virtue of an inclination toward the trollish and contrarian is that sometimes you just plainly say what you think where others would not.

I don't think Eric Levitz or David Shor (both of whom I regard warmly) would put things quite this way. But I also think they'd have a hard time distinguishing their views, or at least how their views function in practice, from Yglesias'. The worldview that Yglesias' post so succinctly expresses sits at the heart of professional politics in the Democratic Party.

The party contests elections not as some saccharine exercise in representation, but as battles to win. It acts instrumentally, using the best technical and communications tools at its disposal. When it does win, it governs technocratically, understanding public opinion as a constraint to be managed.

Jean-Claude Juncker is a European politician, not an American Democrat. But his famous lament — "We all know what to do, but we don’t know how to get re-elected once we have done it." — jibes pretty well with Yglesias' post and the Democratic Party's practice.

Whatever truth there is or isn't in views like Yglesias' and Juncker's, they don't express much enthusiasm for, or even aspiration towards, democracy. Elections are a problem wise technocrats must work around to govern well, rather than the beating heart of our self-government.

Over the past year, it has become conventional wisdom among professional Democrats that you can't "run on democracy". It's been tried. The electorate, frustrated over the cost of living and other "kitchen-table issues", voted for the authoritarian.

I think this analysis is dumb. I think the American public values democracy a very great deal, but doesn't perceive either of the choices on their ballot to offer it.

Democrats offer paternalistic technocracy. They presume a (genuine!) commitment to protecting the status quo institutions of US electoralism amounts to support for democracy that the public ought to reward. But the public has grown jaded about the quality of those institutions.

Republicans capitalize on the public's resentment toward precisely the attitude Yglesias expresses. They effectively ask, what do you prefer, sanctimonious deference to institutions that, while notionally "democratic", fail in practice to connect government to the people, or a bunch of assholes who you might not agree with, but who are candid with you, who could never be mistaken for the technocrats that ignore and manipulate you on the grounds that they know best?

There is a certain beauty, really, in the outcome of the 2024 election and the year that has followed. Republicans got to prove that professional Democrats are a bunch of pinheads much of the public hates and resents. Democrats got to prove that professional Republicans, while definitely not pinheads, are incompetents and crooks who should be nowhere near the levers of power. Everybody wins, everybody loses.

Except democracy. Democracy just loses. People, like me, devoted to the idea of government of the people, by the people, for the people lose. H.L. Mencken coughs out a laugh in his grave, "Democracy is the theory that the common people know what they want, and deserve to get it good and hard."

I am grateful to Yglesias for his candor. I often respect his technocratic insights. But he is wrong, and people with his take on social affairs and "democracy" are maybe not the best guides for an institution charged with representation. Democracy is dying in the West because it has been overtaken by a caste of professionals, politicians and technocrats, for whom democracy is a slogan, a contest, a source of legitimacy in an instrumental sense, but not an idea or ideal they respect or aspire to. An alternative has emerged in the form of barbarians at the gate, who are even more detestable, even more dangerous to the possibility of any improvement toward democratic ideals.

Mencken is wrong. Democracy is not the implausible "theory that the common people know what they want" with respect to policy technicalities. Democracy is the theory that it is possible to organize society in such a way that free and equal citizens participate in and come to constitute rational government on their own behalf. Democracy is not disproven when a public whose civil society consists primarily of tabloids and television, or now Twitter and TikTok, turns out not to be well versed in its own affairs. Democracy does not demand that every member of the public become expert at "tedious technical issues".

Democracy is the project of building institutions under which every citizen's values and interests, on which each of us is the sole authority, are taken into account; institutions in which any of us who wish can participate; institutions that do become expert, as institutions, at tedious technical issues, and then advocate coherently and capably on constituents' account.

Democracy is also an attitude, almost a religion, of valuing one another as equals, of genuine interest and curiosity into the perspectives of our neighbors, of reverence in every public institution for participation and contribution. We can work side-by-side at the school board to make sure the textbooks arrive and the teachers are hired, and argue spiritedly with one another while we do so. We disagree. Our disagreements together constitute a common enterprise. That common enterprise scales from a neighborhood block party to, yes, government of the people, by the people, for the people, at its highest levels. Our institutions must sometimes be hierarchical, but they should be fluid and permeable.

Neither US political party has a great track record on democracy. Democracy is really my only issue at this point. I spend my time thinking about policy, coming up with clever mechanisms I think might make the world a better place. But the world cannot be a better place without democracy. The Biden Administration was kind of great on domestic economic policy. The Inflation Reduction Act, CHIPS Act, muscular antitrust enforcement, and other interventions constituted what could have been the start of an American renaissance, from my perspective as a technocrat. All of it has been scribbled over by a toddler fingerpainting with shit.

The usual critique is that Biden did a poor job of selling his accomplishments, he wasn't a great communicator, he was too old. That's all true. But it's too narrow. Democracy is not about any one man, no president, no king. The problem is Democrats governed in an institutional context insulated and alienated from any meaningful civil society, a context in which arguably there was no meaningful civil society. Whether you are selling out to lobbyists or erecting the foundations of a brilliant economy, the public won't preserve what it has no means of perceiving, understanding, and evaluating. Our job is to build an institutional context that renders the public capable. Nothing will "work" until we do.