Learning the Bitter Lesson at EconLog

I’m in EconLog with:

Learning the Bitter Lesson in 2026

At the link, I speculate on doom, hardware, human jobs, the jagged edge (via a Joshua Gans working paper), and the Manhattan Project. The fun thing about being six years late to a seminal paper is that you can check how its predictions are holding up.

Sutton draws on decades of AI history to argue that researchers have learned a “bitter” truth: again and again, they assume the next advance in intelligence will come from building in specialized human expertise. History shows instead that methods that scale with computation outperform those that rely on human knowledge. In computer chess, for example, brute-force search on specialized hardware triumphed over knowledge-based approaches. Sutton warns that researchers resist this lesson because building in knowledge feels satisfying, but the real breakthroughs come from computation’s relentless scaling.

The article has been up for a week, and some intelligent comments have already come in. Folks are pointing out that I might be underrating the models’ ability to improve themselves. One commenter wrote:

Second, with the frontier AI labs driving toward automating AI research the direct human involvement in developing such algorithms/architectures may be much less than it seems that you’re positing.

If that commenter is correct, there will be less need for humans than I suggested.

Also, Jim Caton over on LinkedIn (James, are we all there now?) pointed out that more efficient models might not need more hardware. If the AIs figure out how to make themselves more efficient, will “scaling” even be the right word for improvement anymore? The fun thing about writing about AI is that you will probably be wrong within weeks.

Between the time I proposed this piece to EconLog and its publication, Ilya Sutskever suggested on the Dwarkesh Podcast that “we’re moving from the age of scaling to the age of research.”
