Circular AI Deals Reminiscent of Disastrous Dot.Com Vendor Financing of the 1990s

Hey look, I just found a way to get infinite free electric power:

This sort of extension-cord-plugged-into-itself meme has shown up recently on the web to characterize a spate of circular financing deals in the AI space, largely involving OpenAI (parent of ChatGPT). Here is a graphic from Bloomberg which summarizes some of these activities:

Nvidia, which makes LOTS of money selling near-monopoly, in-demand GPU chips, has made investing commitments in customers or customers of their customers. Notably, Nvidia will invest up to $100 billion in Open AI, in order to help OpenAI increase their compute power. OpenAI in turn inked a $300 billion deal with Oracle, for building more data centers filled with Nvidia chips.  Such deals will certainly boost the sales of their chips (and make Nvidia even more money), but they also raise a number of concerns.

First, they make it seem like there is more demand for AI than there actually is. Short seller Jim Chanos recently asked, “[Don’t] you think it’s a bit odd that when the narrative is ‘demand for compute is infinite’, the sellers keep subsidizing the buyers?” To some extent, all this churn is just Nvidia recycling its own money, as opposed to new value being created.

Second, analysts point to the destabilizing effect of these sorts of “vendor financing” arrangements. Towards the end of the great dot.com boom in the late 1990’s, hardware vendors like Cisco were making gobs of money selling server capacity to internet service providers (ISPs). In order to help the ISPs build out even faster (and purchase even more Cisco hardware), Cisco loaned money to the ISPs. But when that boom busted, and the huge overbuild in internet capacity became (to everyone’s horror) apparent, the ISPs could not pay back those loans. QQQ lost 70% of its value. Twenty-five years later, Cisco stock price has never recovered its 2000 high.

Beside taking in cash investments, OpenAI is borrowing heavily to buy its compute capacity. Since OpenAI makes no money now (and in fact loses billions a year), and (like other AI ventures) will likely not make any money for several more years, and it is locked in competition with other deep-pocketed AI ventures, there is the possibility that it could pull down the whole house of cards, as happened in 2000.  Bernstein analyst Stacy Rasgon recently wrote, “[OpenAI CEO Sam Altman] has the power to crash the global economy for a decade or take us all to the promised land, and right now we don’t know which is in the cards.”

For the moment, nothing seems set to stop the tidal wave of spending on AI capabilities. Big tech is flush with cash, and is plowing it into data centers and program development. Everyone is starry-eyed with the enormous potential of AI to change, well, EVERYTHING (shades of 1999).

The financial incentives are gigantic. Big tech got big by establishing quasi-monopolies on services that consumers and businesses consider must-haves. (It is the quasi-monopoly aspect that enables the high profit margins).  And it is essential to establish dominance early on. Anyone can develop a word processor or spreadsheet that does what Word or Excel do, or a search engine that does what Google does, but Microsoft and Google got there first, and preferences are sticky. So, the big guys are spending wildly, as they salivate at the prospect of having the One AI to Rule Them All.

Even apart from achieving some new monopoly, the trillions of dollars spent on data center buildout are hoped to pay out one way or the other: “The data-center boom would become the foundation of the next tech cycle, letting Amazon, Microsoft, Google, and others rent out intelligence the way they rent cloud storage now. AI agents and custom models could form the basis of steady, high-margin subscription products.”

However, if in 2-3 years it turns out that actual monetization of AI continues to be elusive, as seems quite possible, there could be a Wile E. Coyote moment in the markets:

OpenAI, IZA, and The Limits of Formal Power

Companies and non-profit organizations tend to be managed day-to-day by a CEO, but are officially run by a board with the legal power to replace the CEO and make all manner of changes to the company. But last week saw two striking demonstrations that corporate boards’ actual power can be much weaker than it is on paper.

The big headlines, as well as our coverage, focused on the bizarre episode where OpenAI, the one of the hottest companies (technically, non-profits) of the year, fired their CEO Sam Altman. They said it was because he was not “consistently candid with the board”, but refused to elaborate on what they meant by this; they said a few things it was not but still not what really motivated them.

Technically it is their call and they don’t have to convince anyone else, but in practice their workers and other partners can all walk away if they dislike the board’s decisions enough, leaving the board in charge of an empty shell. This was starting to happen, with the vast majority of workers threatening to walk out if the board didn’t reverse their decision, and their partner Microsoft ready to poach Sam Altman and anyone else who left.

After burning through two interim CEOs who lasted two days each, the board brought back ousted CEO Sam Altman. Formally, the big change was board member Ilya Sutskever switching sides, but the blowback was enough to get several board members to resign and agree to being replaced by new members more favored by the workers (including, oddly, economist Larry Summers).

A similar story played out at IZA last week, though it mostly went under the radar outside of economics circles. IZA (aka the Institute for Labor Economics) is a German non-profit that runs the world’s largest organization of labor economists. While they have a few dozen direct employees, what makes them stand out is their network of affiliated researchers around the world, which I had hoped to join someday:

Our global research network ist the largest in labor economics. It consists of more than 2,000 experienced Research Fellows und young Research Affiliates from more than 450 research institutions in the field.

But as with OpenAI, the IZA board decided to get rid of their well-liked CEO. Here at least some of their reasons were clear: they lost their major funding source and so decided to merge IZA with another German research institute, briq. Their big misstep was choosing for the combined entity to be run by the the much-disliked head of the smaller, newer merger partner briq (Armin Falk), instead of the well-liked head of the larger partner IZA (Simon Jaeger). Like with OpenAI, hundreds of members of the organization (though in this case external affiliates not employees, and not a majority) threatened to quit if the board went through with their decision. Like with OpenAI, this informal power won out as Armin Falk backed off of his plan to become IZA CEO.

Each story has many important details I won’t go into, and many potential lessons. But I see three common lessons between them. First is the limits to formal power; the board rules the company, but a company is nothing without its people, and they can leave if they dislike the board enough. Second, and following directly from this, is that having a good board is important. Finally, workers can organize very rapidly in the internet age. At OpenAI nearly all its employees signed onto the resignation threat within two days, because the organizers could simply email everyone a Google Doc with the letter. Organizers of the IZA letter were able to get hundreds of affiliates to sign on the same way despite the affiliates being scattered all across the world. In both cases there was no formal union threatening a strike; it was the simple but powerful use of informal power: the voice and threatened exit of the people, organized and amplified through the internet.

OpenAI wants you to fool their AI

OpenAI created the popular Dall-E and ChatGPT AI models. They try to make their models “safe”, but many people make a hobby of breaking through any restrictions and getting ChatGPT to say things its not supposed to:

Source: Zack Witten

Now trying to fool OpenAI models can be more than a hobby. OpenAI just announced a call for experts to “Red Team” their models. They have already been doing all sorts of interesting adversarial tests internally:

Now they want all sorts of external experts to give it a try, including economists:

This seems like a good opportunity to me, both to work on important cutting-edge technology, and to at least arguably make AI safer for humanity. For a long time it seemed like you had to be a top-tier mathematician or machine learning programmer to have any chance of contributing to AI safety, but the field is now broadening dramatically as capable models start to be deployed widely. I plan to apply if I find any time to spare, perhaps some of you will too.

The models definitely still need work- this is what I got after prompting Dall-E 2 for “A poster saying “OpenAI wants you…. to fool their models” in the style of “Uncle Sam Wants You””