AI isn’t going to be what you expect

Perhaps a more accurate title would be “AI isn’t going to be what you want it to be or are afraid it will be.” And by “you” I mean specifically you. Whatever you have in your mind’s eye, that’s what you should correct your expectations against. In those rare times when we have the slow unveiling of a revolutionary technology, over the span of years or even decades, there is a window of time in which we all form an expectation of what it will look like in its final form, and we’re all wrong. Every one of us. Except Neal Stephenson, but that is another story.

I think we come by this bias honestly. There’s a tendency to see a new technology and either try to will it into being exactly the thing that would be optimal for you, or succumb to pessimistic paranoia that this is why you were always fated to lose. In the early 00’s, the start-up tech boom and, later, the stock market bubble were driven, I think, by the irresistible optimism that “The Internet” was a way someone could enter a new market from their garage and bootstrap their way to millions while skipping those less-than-fun decades of grinding your way to a customer base. If you had a clever concept, then millions of customers were a click away. It was “idea person” catnip. And by idea person I mean someone who has lots of ideas but rarely can be bothered to follow through with anything for more than a few days. Eventually enough vaporware was bought and sold that people started to question what was real, MicroStrategy got caught cooking the books, everyone had the “maybe this thing isn’t real” thought all at once, and the market tanked. Flash forward 15 years and the internet had radically changed everyone’s life, but it did so in hard-to-foresee ways, through firms that were painstakingly built by experts and/or exploded into market leadership through network effects they’ve been teaching in Econ 101 for at least 30 years.

I observed a similar effect in my own research career. In my early years I was obsessed with agent-based computational modeling (something I’ve written about before). For all the optimism I carried for the methodology, it always paled in comparison to the expectations and claims made by others. There was an observable pattern, too. What I saw was a way to model things that weren’t tractable with other economic methods, be it classical analytical approaches, game theory, or dynamic stochastic general equilibrium models. What they saw was a way to write and publish economic models without having to learn high-level math. It’s both a way in and a way around: a way to skip a stage they wanted to believe was unnecessary to make a scholarly contribution and/or make a career in academic social science. For some it was also a way to retake scientific territory annexed by economists. In either case, their expectations were deeply biased.

What I hear from a lot of commentators, particularly those most obsessed with and optimistic about AI, is a wishing into existence of the tool that would best serve them. To reimagine the cliché of a hammer in a world full of nails: they are a toolbox that is missing a screwdriver, but have no fear, AI will be the universal screwdriver. No need for screwdrivers anymore; everyone will have a near-infinite supply of (near) zero marginal cost universal screwdrivers, ready at a moment’s notice. If you are a professional screwdriver, well, you are out of luck, but that’s how the fates work, and bully for me, because I can accomplish so much now that I have an infinite supply of the skill I lacked. I am neither constrained by my own personal deficiencies, nor am I constrained by resources insufficient to hire a team of screwdrivers. I am what I always dreamed I would be: a specialist in what the world still needs, no longer dependent on or deferential to people with the skills I lack. If a prognosticator is predicting a specific future for AI that would greatly increase their relative status among a narrow stratum of professionals or scholars, you should index their prediction accordingly.

The inverse of this, of course, is the people who imagine themselves to be the screwdrivers in the previous story. They have specialized in a labor product that is soon to be available at zero marginal cost. Their value will be decimated, and thus there is no hope. The irony, of course, is that it is the exact same story, but it perhaps seems more likely now that it is put in a pessimistic light. Obsolescence happens, after all. They’re both almost guaranteed to be wrong, though. Both sets of expectations are being radically biased by the narcissism of the imaginer.

My impulses are, of course, similar to everyone else’s. I try to keep this in check through my experience with the tech bubble (N=1, I know). AI will change our lives, but it will probably take at least 5-7 years longer than expected, and at least that long before the change is successfully “monetized.” The changes will be significant; they will show up in almost all of our work lives. It will disappoint in many ways. I remember telling someone that our expectations for the internet were too high for it to ever meet them. Then the iPhone came out, and suddenly its penetration into our lives was fully actualized.

I don’t know what you think AI will be, but you’re wrong. And that’s ok. We all are.