Moderation as responsibility

I’ve been thinking a lot about the loneliness of moderates/centrists/whatever you want to call them, in no small part because that’s the camp in which I place myself. While it’s (perhaps undeservedly) flattering to think of yourself as “practical” and “reasonable”, it’s not a fun identity. There’s no good art to fall back on when you need to fill in the missing parts of your personality. You are constantly disappointing the more vocal members of the chattering classes while simultaneously sharing their frustration with the fire-dog-meme “This is fine” folks who don’t seem constitutionally capable of noticing when the room is in fact actively on fire. It’s a tough political identity to pin down because it is, at least ostensibly, an identity defined by its relation to two polar extremes. Anarchists, socialists, liberals, and conservatives have an easier time because they can start from first principles and work upwards. As society progresses, so does the middle. To define yourself as wherever the middle stands is to be plastic, externally shaped, even inauthentic. Such a positional identity may be safe, but it’s not especially useful.

I would like to suggest a more useful lodestone for moderates: responsibility.

You have social responsibility. As a moderate I am uncomfortable with the libertarian fetishism of individualism without an obligation to others. With all due deference to “Naked and Afraid”, we are primates, and as such we are just shambling hunks of nutrition for other species if left on our own. Individuals, wholly independent of others, are completely useless. You are useless on your own. All human achievement is predicated on coordination with others. Through families, communities, and states. Through exchange, markets, and firms. You need other people, whether you like them or not. Admitting you need others is not weakness.

You have personal responsibility. As a moderate I am often uncomfortable with the type of socialism that promises relief from the obligations of toil. That your comfort and care can be assured regardless of the efforts and investments you make for yourself. There is no life without toil. There is no life without risk. The only institutions that can wholly shelter you from toil and risk demand the enslavement of others. Sure, you can be a party elite, but you’re only going to be fed and sheltered because of those toiling in the gulag. Admitting that others have an obligation to action and self-care is not cruelty.

Which is all to say that moderates should be up in arms, protesting and raging alongside progressives, liberals, democrats, and (yes) classic conservatives. Not because the current administration has strayed too far down an abstract one-dimensional range of political positions. But because their destruction, grifting, and hate are in direct opposition to everything we hold dear. They accept no responsibility for their actions while acknowledging no responsibility for the welfare of others. They are the antithesis of responsible adults.

I’m not much of a political philosopher, but maybe if I get stuck in an airport long enough I’ll hammer out my own “Theory of Responsibility”. I mean, that’s how Rawls got his magnum opus done, right?

Walking around DC

I’m here to discuss women in the criminal justice system as part of the ongoing BRIDGE series organized by Arnold Ventures. DC remains one of my very favorite cities, one I lived in and around for decades. I arrived with some trepidation, of course, now that the federal government is attempting to “occupy” it, deploying National Guard troops (“some armed”) while ICE agents execute their own specific combination of random assault sprinkled in with some light kidnapping. I wasn’t quite sure whether I should expect military vehicles on every other street or just the odd rented van with masked men claiming to be ICE agents pouring out.

What I’ve seen so far is mostly…nothing. I don’t mean DC seems normal, not in the slightest. I mean the streets feel emptier. There are far too few tourists for mid-August. There were families on the steps of the museums, but normally they’d be swarmed. I’m sure to some degree I’m layering my own sensitivities on the scene, but I really do think it is far quieter than it normally is. Than it should be.

Tonight I’m going to head to U Street to visit an old friend, have a drink, catch up. I’ve done this a million times, in this exact neighborhood, for going on 20 years. That this time, with a cheap, tinpot authoritarian claiming to clean up crime while DC is experiencing the lowest rate of violent crime of my lifetime, is the only time I’ve really had any sense of insecurity, any sense that something bad could happen around me, is one of the grossest ironies I’ve ever experienced firsthand.

Anyway, it’s always nice to come home, no matter how hard some are trying to take that feeling away.

The economics of damned lies

Economists have become almost comically skeptical of estimated effects. A researcher estimating the effect of X on Y has always had to consider the bias and efficiency of their estimator, where bias is the result of unconsidered or unobserved forces pulling your estimated effect in one particular direction away from the truth (too positive or too negative), and efficiency is the overall noisiness of the estimate, where a less efficient estimator provides too large a range of possible effect sizes.

Under the umbrella of efficiency were concerns about random measurement error – the basic and unavoidable difficulty of accurately recording the underlying “true” value. Filed under “everywhere and always”, measurement error is often simply the cost of doing business, while nonetheless limiting the precision with which the world can be known and, in turn, the precision with which decision making or policy can be calibrated.
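A quick simulation makes the efficiency cost concrete. This is a minimal sketch with invented numbers, not any particular dataset: random mismeasurement of the outcome leaves the estimated effect centered on the truth but spreads the estimates you could plausibly obtain over a much wider range.

```python
# A minimal sketch, with invented numbers: random mismeasurement of the
# outcome leaves the estimated effect unbiased but less precise.
import numpy as np

rng = np.random.default_rng(0)
n, true_beta, reps = 500, 2.0, 2000

def ols_slope(x, y):
    # Slope from a simple bivariate regression of y on x.
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

slopes_clean, slopes_noisy = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    y = true_beta * x + rng.normal(size=n)
    y_noisy = y + rng.normal(scale=3.0, size=n)  # random error in measuring y
    slopes_clean.append(ols_slope(x, y))
    slopes_noisy.append(ols_slope(x, y_noisy))

# Both sets of estimates center on the true effect of 2.0...
print(np.mean(slopes_clean), np.mean(slopes_noisy))
# ...but the mismeasured ones are spread over a much wider range.
print(np.std(slopes_clean), np.std(slopes_noisy))
```

(Random error in the regressor X, rather than Y, is nastier still: it biases the slope toward zero, so “random” error is not always just an efficiency problem.)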

Coping with bias has been in many ways the story of empirical economics and the “credibility revolution” of the last 25 years. It’s why “identification strategy” is the fourth slide of almost any microeconomics presentation, why the econometrics of every great applied economics working paper is seemingly obsolete before it finds itself in print, and why there is a genuine possibility I will retire with a half dozen ulcers before I finish this blog post. Economists make themselves crazy thinking, strategizing, and internalizing criticism about the potential bias in their estimates. Selection bias, omitted variable bias, reverse causality, and even observer bias lurk in the shadows of our minds. To be an expert in causal inference is to anticipate and guard against myriad sources of bias in your empirical analysis. For many living economists, however, there is a new bogeyman.

Systemic measurement error.

Sounds banal enough. And if you’re a chemist, it is. The gauge is consistently measuring every temperature too high, mass too low, electromagnetic spectra too red. Something to test for every day. Vigilance and repetition, the solution. For economists, however, the answer is less simple.

What happens when the data is rigged to make the results too good? Unemployment too low. Wages too high. Expenditures too productive. <Redacted> too <redacted>. Economists have looked for cheaters as a research subject and rooted out fraud within scientific endeavor itself. But precious few have made it their job to sift through manipulated public data and carefully distill the true underlying numbers. And for good reason — as soon as you declare the data unreliable, you open the door to your own personal bias. Your politics, career ambitions, or even just your good-hearted desire to observe people being more decent than our own pessimism might otherwise allow for. To allow yourself to manipulate the potentially fraudulent data is to potentially make a bad situation worse.
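A toy contrast, with all numbers invented, shows why systematic error is the nastier problem: honest noise averages out as observations accumulate, while a deliberate thumb on the scale never does.

```python
# A toy contrast, all numbers invented: honest noise averages out as the
# sample grows, but a systematic thumb on the scale never does.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 4.4  # hypothetical "true" unemployment rate, in percent

for n in (12, 120, 1200):
    honest = true_rate + rng.normal(scale=0.3, size=n)        # noisy but unbiased
    rigged = true_rate - 0.5 + rng.normal(scale=0.3, size=n)  # shaved down 0.5pp
    print(n, round(honest.mean(), 2), round(rigged.mean(), 2))
# The honest average converges to 4.4; the rigged one converges to 3.9,
# and no amount of additional data reveals the gap from inside the series.
```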

Replicability and transparency of analysis were important before, but now we’re entering an even more tedious and slow landscape, because critics aren’t just going to want to adjudicate your analysis; they’re going to want to adjudicate every observation in your data set. Or perhaps I am being too negative. There is a genuine upside. As people look to distill and correct for systemic measurement error, they’re going to create greater demand for 1) parallel analysis of similar questions using different techniques on the same data and 2) forensic analysis of data and the institutions that create it. Never forget that Sovietology was a genuine research career. More work to be done, but it can be done.

More work that has to be done. Sigh. My stomach hurts.

Bureau of Labor Statistics Under Siege

Thousands of keyboards were likely drenched four days ago as coffee spewed from thousands of nostrils upon reading the headlines that President Trump fired the head of the Bureau of Labor Statistics because he (the prez) didn’t like the July 2025 job numbers that were reported. Apparently, the job stats were not as great as we had been led to expect for the new regime of tariffs and deportations. (Someone should inform the politicians that businessmen need predictability for making any expansionary plans). So, shoot the messenger, that will fix it.

The First Ire was apparently kindled especially by the truly massive downward revisions to the May (-125,000) and June (-133,000) job figures, which reduced the combined employment gain for those months by 258,000. That made for three anemic employment months in a row, which is a different picture than had been earlier portrayed. For those unfamiliar with past BLS reports, that could seem like manipulation or gross incompetence. For instance, whitehouse.gov published an article titled, “BLS Has Lengthy History of Inaccuracies, Incompetence”, excoriating the “Biden-appointed”, now-fired Erika McEntarfer, who “consistently published overly optimistic jobs numbers — only for those numbers to be quietly revised later.”

But massive overestimations of job creation, followed a month or two or three later by massive downward revisions, are pretty standard procedure for the BLS in recent years. Fellow blogger Jeremy Horpedahl has noted prior occurrences of this, e.g. here and here. There is no reason to suspect nefarious motives, though. The understaffed and overworked folks at BLS seem to be doing the best they can. It is just a fact that some key data simply is not available as early as other data. There are also rational adjustments, e.g. seasonal trends, that must first be estimated, and only later get revised.
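A stylized simulation (invented firm counts and volatility, not actual BLS methodology) shows why lower initial response rates mechanically mean bigger revisions: the first estimate is built from a smaller sample, so it lands further from the final number on average.

```python
# A stylized sketch, with invented numbers, of why lower initial survey
# response rates produce larger revisions: the early estimate comes from
# a smaller sample, so it sits further from the final estimate on average.
import numpy as np

rng = np.random.default_rng(2)
n_firms, reps = 5000, 1000

def typical_revision(initial_response_rate):
    # Standard deviation of (final estimate - initial estimate) across reps.
    revisions = []
    for _ in range(reps):
        job_changes = rng.normal(loc=20, scale=300, size=n_firms)
        respond_first = rng.random(n_firms) < initial_response_rate
        initial = job_changes[respond_first].mean() * n_firms  # scaled-up early estimate
        final = job_changes.mean() * n_firms                   # estimate once everyone responds
        revisions.append(final - initial)
    return float(np.std(revisions))

print(typical_revision(0.70))  # roughly pre-pandemic response rates
print(typical_revision(0.55))  # recent, lower response rates: larger typical revision
```

The same logic runs in reverse, too: nothing here requires bad faith, just arithmetic on who has answered the survey yet.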

Bloomberg explains some of the fine points of the recent revisions:

The downward revision to the prior two months was largely a result of seasonal adjustment for state and local government education, BLS said in earlier comments to Bloomberg. Those sectors substantially boosted June employment only to be largely revised away a month later.

But economists say the revisions also point to a more concerning, underlying issue of low response rates.

BLS surveys firms in the payrolls survey over the course of three months, gaining a more complete picture as more businesses respond. But a smaller share of firms are responding to the first poll. Initial collection rates have repeatedly slid below 60% in recent months — down from the roughly 70% or more that was the norm before the pandemic.

In addition to the rolling revisions to payrolls that BLS does, there’s also a larger annual revision that comes out each February to benchmark the figures to a more accurate, but less timely data source. BLS puts out a preliminary estimate of what that revision will be a few months in advance, and last year [2024], that projection was the largest since 2009.

Perhaps it would be wise for the BLS to hang a big “preliminary” label on any of the earlier results they publish, to minimize the howls when the big revisions hit later. Or perhaps some improvements could be made in pre-adjusting the adjustments, since revisions there do seem to swing things around outrageously. I expect forthcoming BLS reports to be the subject of derision from all sides. We all know which parties will scoff if the job report looks great or if it looks not great. Presumably the interim head of the Bureau, William Wiatrowski, is busy polishing his resume.

And POTUS should be careful what he wishes for – “great” job growth numbers would, ironically, strengthen the case for the Fed to delay the interest rate cuts he so desires.

The (attempted) return of Soviet economic statistics

From Warren Nutter’s “The Structure and Growth of Soviet Industry: A Comparison with the United States,” The Journal of Law and Economics 2 (1959): 147-174:

“Let us acknowledge at once that all statistics contain faults and errors. Let us also acknowledge that no government or other agency resists the temptation to stretch figures to its own account if it feels it can get away with it. Representative government, competitive scholarship, and free public discourse are the Western institutions that have counteracted error and misrepresentation in statistics, imperfectly to be sure, but at least to some degree.

The peculiar difficulties with Soviet statistics stem, in the first instance, from the system of authoritarian, centralized planning-from what has been called a “command economy.” Published statistics come from only one source: the state. There are no independent sources to restrain each other or used as checks against each other, except to the extent that related figures published by different state agencies might not be fully coordinated before publication. At the same time, the suppliers of data to the central authorities -the economic and administrative units- have a stake in the figures they report, since their performance is judged on the basis of them. The Soviet statistical authorities do not hide their concern over the misreporting that results from this feature of the economic system. A second set of difficulties stems from the crusading nature of Soviet communism. Statistics are grist for the propaganda mill. Knowing the ideological views of Soviet leaders, one cannot expect them to dispense facts in a passive and detached manner.”

As many of you likely know, the President fired the director of the Bureau of Labor Statistics because he didn’t like that the newest employment numbers painted an unflattering portrait of the US labor market. Fortunately, the US continues to benefit from alternatives to government statistics, but make no mistake: the BLS produces the absolute best labor market measurements the world has ever known.

It’s telling that while Soviet data often fooled outsiders at the time (despite the occasional raising of doubts), there was no such delusion within the Soviet Union, where provincial leaders would consistently look to outside sources for accurate economic reports.

Nutter is credited with co-founding the “Virginia School of Political Economy” at the University of Virginia with future Nobel Laureate James Buchanan. The Virginia school is most associated with public choice economics, which by the 70s was often construed as an intellectual counterbalance to modeling government as an infallible corrective to market failures, a view particularly enticing to those favoring a socialist planned economy. That an administration and political coalition that loves to rail against omnipresent socialist threats is demanding that the US embrace a Soviet-style data apparatus is a reminder that history is never without irony.

The consequences of a “Papers, please” economy

While DOGE is advertising their new deregulation AI (HT MR) with promises of “trimming 100,000 of those rules“, the reality is that the administration is ushering in the most profound layer of government involvement in our lives since the introduction of the income tax.

It defies the opportunity cost of my time to try to recap the crappy-policy-via-executive-order blunderbuss that has been the last 6 months, but it is sufficient to focus on two dimensions: immigrant targeting and tariffs. ICE is pulling people off of the street and detaining them for hours “based on their physical appearance” in what can only be described as a dedicated effort to remove current immigrants, denaturalize past immigrants, and deter future immigrants. While these travesties play out one raid and immigration court ambush after another, tariffs are being introduced rapidly and haphazardly, always at the expense of the economy, and sometimes even in opposition to their stated goals of reshoring manufacturing. The prospect for (relatively) frictionless commerce across borders is quickly becoming unobtainable.

So what’s going to happen, now? Is America going to become a Whiter, autarkic island that steadily de-growths itself into a quieter state of nostalgic bliss, cheerfully accepting a shorter, sicker, less opulent life than before? Sure, the food will be worse and more expensive, our electronics obsolete and more expensive, our cars older and made from inferior materials (and more expensive), but that’s just the way things have to be, right? People will live and do as they’re told, right?

Have you met people?

People adapt. They find every workaround, every crack. Their lives will change, in many ways for the worse, but they will work with what they have to make the best that they can. And in this case, the best way to adapt will be to become just a little more criminal. Not fully criminal, just a little more. More aspects of our lives will become akin to driving 10 mph over the speed limit because that’s just what everyone does.

Daily life will, slowly and at times imperceptibly, move underground. More jobs will pay in cash. Fewer exchanges will be made absent a personal relationship. More goods will arrive in suitcases at the luggage return. Friends will ask friends to pick up a phone/earbuds/tablet for them when visiting less economically restricted countries, while also reminding them to delete their messaging apps before heading through security. More goods will be altered from their true, optimal consumable form to qualify for a lower tariff. The advantage of physical over digital media will widen again. Where exchange exists outside of the law, trust needs to be found outside of contract. At the margin, business will become just a fraction of a percent more nepotistic. More employees will be found somewhere in the family tree. Everyone will just become a bit more crooked and, in doing so, expect everyone else to be just a little more crooked. The US is a shockingly high trust society because it pays to be trustworthy. This is how such things unravel.

More immigrants will live within arrangements that hide them from not just the authorities, but from observation in general. Curtains and blinds covering windows at all hours. Dinner will be taken at home rather than at the restaurant. Clubs and concert gatherings that appeal to immigrant crowds, or even just less White crowds, will advertise less, relying on word of mouth. Workers will move more often, rather than garner attention. The sick and injured will not be taken to the emergency room. The gaps in an already fractured society will become a little wider.

Employers will keep more people off the books. Off health insurance and workers’ compensation. Employees will, perversely, be grateful for the lower exposure. Insurance companies will find new ways to audit liability without exposing their clients’ personnel. Predatory human trafficking will find larger herds of underground populations to hide its practices within. Fewer people will trust and rely on the law. Fewer people will enjoy its protections.

What about compensating wage differentials? On the one hand, labor supply will be reduced as it is pushed underground, reducing workers’ numbers and safely available hours. On the other hand, the necessity for employers to reduce the visibility of their workers while incurring the risk of legal punishment will reduce demand. The net effect on equilibrium wages is uncertain. However, those employers who manage to guarantee longer and safer tenured employment will capture greater rents from those they employ. Getting to work more consistently and going to sleep feeling safer is quite the fringe benefit, one that employers may find to be a more profitable form of compensation than simple wages.
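The ambiguity can be sketched with a linear supply-and-demand toy model (all parameters invented): pulling supply in pushes wages up, pulling demand in pushes them down, and the net effect depends on which shift dominates.

```python
# A back-of-the-envelope sketch, all parameters invented: when both labor
# supply and labor demand contract, the net wage effect is ambiguous.

def equilibrium_wage(demand_intercept, supply_intercept, b_d=1.0, b_s=1.0):
    # Inverse demand: w = demand_intercept - b_d * L
    # Inverse supply: w = supply_intercept + b_s * L
    L = (demand_intercept - supply_intercept) / (b_d + b_s)
    return supply_intercept + b_s * L  # wage where the curves cross

baseline = equilibrium_wage(100, 20)
supply_dominates = equilibrium_wage(95, 40)   # big supply pullback, small demand drop
demand_dominates = equilibrium_wage(70, 25)   # big demand pullback, small supply drop

print(baseline, supply_dominates, demand_dominates)
```

Same qualitative shifts, opposite wage outcomes, which is exactly why the paragraph above can only say “uncertain.”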

I’d keep writing, but I already sound like a paranoid crank. I’m not sure I am comfortable, anonymous reader, with this level of intellectual vulnerability in such a public forum. Papers, please.

I’m Chair! 😬

As of July 1st of this year, I am the Chairman of the Department of Economics at my university. It’s one of those positions that includes more work and not much compensation. Depending on who I tell, I’m given both congratulations and condolences. Generally, at my university there is an expectation that department faculty ‘take turns’ being chair. So, we’re expected to serve whether the pay is good or not. There’s a lot of informal practice around this process.

In addition, economics majors have become less popular at liberal arts institutions over the past several years. No one knows why, and there are probably multiple reasons. At my institution, our department has healthy enrollment in the peripheral majors: the Economics BA and BS have lower enrollment, but the Business Economics and Global Affairs majors are more popular than ever.

All the same, I’d like to increase the number of students who have declared majors in our department and the number of Economics graduates. How do I do that?

Continue reading

Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent article in Newsweek by Gabriel Snyder, who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language; it turns out that human language itself is a very constrained dataset. Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox—the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easier to teach AI systems higher-order skills like playing chess or passing standardized tests than seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills derived from how a human body navigates the world are the product of billions of years of evolution and are so highly developed that they can be automated by humans, while neocortical-based reasoning skills came much later and require much more conscious cognitive effort to master. However, the reverse is true of machines. Simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensionality, discrete space that is “basically a serialized version of our thoughts.”  

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)
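For what it’s worth, the back-of-the-envelope numbers LeCun cites in the excerpt check out:

```python
# Checking the arithmetic quoted from LeCun: optic-nerve bandwidth times a
# 4-year-old's waking hours lands near the ~10^14 bytes he cites for a
# large LLM's text training corpus.
optic_nerve_bytes_per_sec = 1e6   # "about one megabyte per second, give or take"
waking_hours = 16_000             # a 4-year-old awake for 16,000 hours

visual_bytes = optic_nerve_bytes_per_sec * waking_hours * 3600  # 3600 seconds/hour
llm_training_bytes = 1e14         # "roughly 10 to the 14th power bytes"

print(f"{visual_bytes:.1e}")               # ~5.8e13, the same order of magnitude
print(visual_bytes / llm_training_bytes)   # comparable, in 4 years vs 400,000
```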

Given all this, developing the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach of Joint Embedding Predictive Architecture (JEPA) that aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.

And so, LeCun is strikingly pessimistic on the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not to even bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach seems to be at variance with other firms, which continue to pour tens of billions of dollars into LLMs. Meta, however, seems focused on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Impossible Trinity of Macroeconomic Stability

Trump wants both low taxes and low interest rates. I hope that he doesn’t get both.

For the last ten days of my Principles of Macroeconomics course, I emphasize the aggregate supply and aggregate demand model coupled with monetary offset. What’s monetary offset? It says that, given some target and administrative insulation, the Federal Reserve can ‘offset’ the aggregate demand effects of government fiscal policy. It’s what gives us a relatively stable economy, despite big fiscal policy changes from administration to administration.

For example, if the Fed has a 2% inflation target, then they have an idea of how much total spending in the economy (NGDP) must change. If the federal government changes tax revenues or spends more, then the Fed can increase or decrease the money supply in order to achieve the NGDP growth rate that will realize their target. For example, after the 2017 Tax Cuts and Jobs Act lowered taxes, the Federal Funds rate rose in 2018. The effect of the tax cuts on NGDP was *offset* by monetary policy tightening to keep inflation near 2%.

If the Fed doesn’t engage in monetary offset, then fiscal policy has a bigger impact on the business cycle, causing more erratic bouts of unemployment and inflation. The economy would be less stable. Importantly, monetary offset works in both directions. It prevents tight fiscal policy from driving us into a national depression, and loose fiscal policy from fueling inflation. That’s good, since politicians face an incentive/speed/knowledge/political problem.
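The mechanism can be sketched in a toy model (all numbers invented; the “impulses” are stand-ins for percentage-point contributions to NGDP growth):

```python
# A toy model of monetary offset, all numbers invented. NGDP growth is
# baseline growth plus a fiscal impulse plus a monetary impulse; the Fed
# chooses its impulse to hit the target, whatever fiscal policy does.
target = 4.0      # NGDP growth consistent with ~2% inflation + ~2% real growth
baseline = 3.0    # assumed growth with neutral fiscal and monetary policy

fiscal_impulses = [1.5, -0.8, 2.2, 0.1]  # hypothetical swings across administrations

with_offset, without_offset = [], []
for fiscal in fiscal_impulses:
    monetary = target - baseline - fiscal   # Fed leans against the fiscal swing
    with_offset.append(baseline + fiscal + monetary)
    without_offset.append(baseline + fiscal + 1.0)  # passive Fed, fixed stance

print(with_offset)     # on target every year
print(without_offset)  # inherits the fiscal cycle
```

With offset, total spending growth hits 4% every year regardless of the fiscal swing; without it, NGDP growth swings with the fiscal cycle, which is the instability the post describes.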

Personally, I would love lower taxes and lower interest rates. I’d get to enjoy more of my income rather than sending it to Uncle Sam and, after refinancing, I’d pay less to service my debts. BUT, the same is true for everyone else too. All of that greater spending would result in higher prices and persistent inflation.

Right now, low taxes and high spending mean that the government is running persistent budget deficits – it’s borrowing money. That’s stimulative. If the Fed lowers interest rates, individuals would refinance and borrow more. That’s also stimulative. If both fiscal and monetary policy are stimulative as part of achieving the Fed’s target, then there is nothing wrong. But deviation from that policy goal brings economic turbulence.

This analysis implies an impossible trinity of macroeconomic stability (not the one from international finance):

Continue reading

The option to leave

The US, like every geopolitical entity to ever exist, has produced global public goods (e.g., international security, defeating the Nazis) and global public bads (e.g., greenhouse gases, failed interference in other countries).

I would like to posit something very simple: the greatest public good the United States has ever produced is the option to leave where you are and emigrate to the United States. If a country and its leadership are failing, non-trivial fractions of its population have had the viable option to pack their bags and walk out the door. Perhaps unfairly, this is doubly true for the best, brightest, and most endowed with resources, making the threat all the more salient. It’s voting with your feet, i.e., Tiebout effects writ large.

If you are a failing nation, your options become to watch your population dissipate or put up a wall blocking exit. Either that or, you know, actively take steps to improve your country so that fewer people wish to leave their home and start over elsewhere. The ramifications of stifled immigration to the United States will be felt for decades, and not just in the United States in the form of an enervated economy and betrayal of our core civic values, but globally in weakened constraints on every failing regime.