How to Train Your Artificial Economist

Apparently the Claude 3 Opus AI/LLM is a pretty decent economist.

As much as I appreciate the prospect of an AI economist, allow me to ask the most annoying and, in turn, most important question an economist can ask of any proposition: “Compared to what?”

It seems to me any consideration of the quality of economic analysis produced by an AI/LLM model demands a series of comparison points. We need bad economic analysis. We need AIs that generate mediocre, decent, atrocious, acceptable, and perhaps if possible, brilliant economic analysis for comparison. Which, it seems to me, is entirely possible given that a large language model (LLM) is trained on reams of text. So, let’s do it. Let’s see how many different artificial economists we can produce and observe. A digital zoo of economic Pokemon with less violence and more discussion of underlying elasticities.

What happens when we train Claude on every edition of Mankiw’s principles textbook? Cowen and Tabarrok’s textbook? All of the principles books? The most daunting book in all of graduate economics? What happens when we train it on sociology and anthropology textbooks? NYT and WSJ editorials? What happens when we let it consume nothing but Presidential State of the Union addresses? Campaign speeches? Every book in the Google digital library? Twitter? The economics subreddit? A perfectly respectable blog?

How should we evaluate the outcomes? Should it attempt the preliminary exams required to continue PhD training at the University of Chicago? The final exams in Intermediate Micro and Macroeconomics at the University of Virginia? At what price would it have sold shares of Gamestop? Perhaps it could write an explicit function that would advise a family when to buy instead of rent based on age, city, income, and number of children. Maybe it could manage to pull off a reverse-Sokal hoax, writing a paper making a genuine scholarly contribution worthy of passing through the review process at a top 25 peer-reviewed economics journal. Maybe it could convince your brother-in-law to stop asking for stock tips and just buy into index funds.
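To make that buy-versus-rent idea concrete, here is a minimal sketch of the kind of explicit function I have in mind. Everything in it is invented for illustration: the price-to-rent numbers, the thresholds, and the scoring rule are hypothetical placeholders, not housing advice and not anyone's actual model.

```python
# Hypothetical sketch of a buy-vs-rent rule an artificial economist might write.
# All data and thresholds below are made up for illustration.

# Illustrative price-to-rent ratios by city (invented numbers).
PRICE_TO_RENT = {"Austin": 21, "San Francisco": 38, "Cleveland": 14}

def buy_or_rent(age: int, city: str, income: float, children: int) -> str:
    """Crude heuristic: buy when the local price-to-rent ratio is low
    and the household looks likely to stay put (older, more kids, higher income)."""
    ratio = PRICE_TO_RENT.get(city, 25)  # default to a middling market
    stay_put_score = (age >= 35) + (children >= 2) + (income >= 90_000)
    if ratio < 20 or (ratio < 30 and stay_put_score >= 2):
        return "buy"
    return "rent"

print(buy_or_rent(age=38, city="Cleveland", income=85_000, children=2))      # -> "buy"
print(buy_or_rent(age=27, city="San Francisco", income=120_000, children=0)) # -> "rent"
```

The point is not the particular rule; it is that an explicit, inspectable function like this gives us something we can grade, compare across differently trained models, and argue about.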

In the end, the market test for what stands as a valuable contribution from an AI is what will matter for most of us. But the time is quickly approaching when we will leave behind awe- and angst-filled proclamations of whether an AI model is discretely good or bad, useful or dumb. The next step demands granularity of evaluation and consideration. Perhaps not false cardinal (continuous) values, but ordinal rankings aligned with useful and actionable assessments of their analysis. And in case you think this is dull or tedious, consider for a moment what it will mean to evaluate the analytical skills of AIs stratified by their training materials. It will stand for many as a meta-analysis of the broad merit of entire disciplines, literatures, and oeuvres. It will be coarse and efficient, messy and cruel. It will cultivate and distill the core messages of intellectual and social identities, many of which were previously latent, if not outright inert. Subtext will be made text, its merits evaluated and compared.

That last bit is perhaps the most terrifying. The entire culture of etiquette and politeness, of politics, is built around the institutions that ensure that too much is never said too directly. I have no doubt that this has some of you salivating. You are so very comfortable in your truth that it enrages you when you are implored not to call ideas silly, arguments wrong, people stupid. A utopia of the mind awaits us in this new world of AI-adjudicated debates and augmented salons. Be careful what you wish for. And don’t be so sure your imagined AI arbitrator is going to be remotely fair. Or on your side.

An AI is only as good as the material it is trained on. Genuine insights are found in economics journals by the thousands every year, but fallacies and sophistries are found by the billions in the endless sea of casual text that fills the internet, airwaves, and podcasts. We all (all) spend large parts of our day being casually wrong about things because it costs us precisely nothing to be wrong. The law of large numbers, in the parlance of statistics, will inoculate AIs from such intellectual food poisoning as the randomness of our errors cancels out. What that won’t save us from, however, is the raw populism underlying much of the casual text out there. Is it outlandish to say there are more people who receive rewards, pecuniary and non-pecuniary, for telling people what they want to hear than for telling them the truth? Have you ever consumed any media ever?
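A toy simulation makes the distinction above concrete. Assume some “true” number the casual text is chattering about (the true value, the noise scale, and the bias here are all invented for illustration): unbiased random errors average away as the sample grows, but a shared systematic bias does not.

```python
# Minimal sketch: random errors cancel with enough observations; a shared bias does not.
import random

random.seed(0)
TRUE_ELASTICITY = -0.4   # the "right answer" in this toy world (invented)
N = 100_000              # number of casual, error-prone takes

# Case 1: everyone is wrong at random around the truth.
noisy = [TRUE_ELASTICITY + random.gauss(0, 1.0) for _ in range(N)]
print(sum(noisy) / N)    # close to -0.4: the noise washes out

# Case 2: everyone is wrong in the same flattering direction.
biased = [TRUE_ELASTICITY + 0.5 + random.gauss(0, 1.0) for _ in range(N)]
print(sum(biased) / N)   # close to +0.1: no amount of data fixes a shared bias
```

That second case is the populism problem: more text does not help when the errors all lean the same way.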

I’m not an AI doomer. I remain rather sanguine on the entire enterprise. But part of the human condition is never knowing for 100% sure what is right or wrong. We pass that on to all of our intellectual offspring, no matter how smart or artificial they are. Or at least, we should.

2 thoughts on “How to Train Your Artificial Economist”

  1. Joy Buchanan March 11, 2024 / 6:36 pm

    Wonder why the AI economist sounds so much like Mike when he’s in a blogging mood… must have trained on our site.

