After the Fall: What Next for Nvidia and AI, In the Light of DeepSeek

Anyone not living under a rock the last two weeks has heard of DeepSeek, the cheap Chinese knock-off of ChatGPT that was supposedly trained using far fewer resources than most American Artificial Intelligence efforts have been using. The bearish narrative flowing from this is that AI users will be able to get along with far fewer of Nvidia’s expensive, powerful chips, and so Nvidia sales and profit margins will sag.

The stock market seems to agree with this story. The Nvidia share price crashed with a mighty crash last Monday, and it has continued to trend downward since, with plenty of zig-zags.

I am not an expert in this area, but have done a bit of reading. There seems to be an emerging consensus that DeepSeek got to where it got to largely by building on what was already developed for ChatGPT and similar prior models. For this and other reasons, the claim of fantastic savings in model training has been largely discounted. DeepSeek did do a nice job of making the most of limited chip resources, but those advances will now be incorporated into everyone else’s models.

Concerns remain regarding built-in bias and censorship to support the Chinese communist government’s point of view, and regarding the safety of user data kept on servers in China. Even apart from nefarious purposes for collecting user data, DeepSeek has apparently been very sloppy in protecting user information:

Wiz Research has identified a publicly accessible ClickHouse database belonging to DeepSeek, which allows full control over database operations, including the ability to access internal data. The exposure includes over a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information.

Shifting focus to Nvidia – – my take is that DeepSeek will have little impact on its sales. The bullish narrative is that the more efficient algos developed by DeepSeek will enable more players to enter the AI arena.

The big power users like Meta and Amazon and Google have moved beyond limited chatbots like ChatGPT or DeepSeek. They are aiming beyond “AI” to “AGI” (Artificial General Intelligence), which matches or surpasses human capabilities across a wide range of cognitive tasks. Zuck plans to replace mid-level software engineers at Meta with code-bots before the year is out.

For AGI they will still need gobs of high-end chips, and these companies show no signs of throttling back their efforts. Nvidia remains sold out through the end of 2025. I suspect that when the company reports earnings on Feb 26, it will again show high profits and project strong earnings growth.

Its price-to-earnings ratio is higher than its peers’, but that appears to be justified by its earnings growth. For a growth stock, a key metric is the price/earnings-to-growth (PEG) ratio, and by that standard, Nvidia looks downright cheap:

Source: Marc Gerstein on Seeking Alpha
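To make the PEG arithmetic concrete, here is a minimal sketch. The numbers below are invented for illustration, not Nvidia’s actual figures:

```python
# Hypothetical illustration of the PEG ratio -- all numbers are made up.

def peg_ratio(price: float, eps: float, growth_pct: float) -> float:
    """PEG = (price / earnings per share) / expected annual EPS growth (in %)."""
    pe = price / eps
    return pe / growth_pct

# A stock at $120 with $4 of EPS trades at a P/E of 30.
# If earnings are expected to grow 60% a year, PEG = 30 / 60 = 0.5 --
# "cheap" by the common rule of thumb that PEG below 1 suggests undervaluation.
print(peg_ratio(120, 4, 60))  # 0.5
```

The point is that a high P/E can still look cheap once rapid earnings growth is factored in.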

How the fickle market will react to these realities, I have no idea.

The high volatility in the stock makes for high options premiums. I have been selling puts and covered calls to capture roughly 20% yields, at the expense of missing out on any rise in share price from here.
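A rough sketch of where a yield like that comes from, with invented strike, premium, and expiry numbers (not my actual trades):

```python
# Hypothetical sketch of the yield from selling a cash-secured put.
# Strike, premium, and days-to-expiry are invented for illustration.

def annualized_put_yield(strike: float, premium: float, days: int) -> float:
    """Premium received as a % of the cash securing the put, annualized (simple)."""
    period_yield = premium / strike
    return period_yield * (365 / days) * 100

# Selling a 30-day put at a $100 strike for a $1.80 premium yields
# 1.8% for the month -- roughly 22% on an annualized basis.
print(round(annualized_put_yield(100, 1.80, 30), 1))  # 21.9
```

High implied volatility fattens the premium (the $1.80 here), which is why a volatile stock can throw off yields like this while capping any upside from the shares themselves.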

Disclaimer: Nothing here should be considered as advice to buy or sell any security.

DeepSeek vs. ChatGPT: Has China Suddenly Caught or Surpassed the U.S. in AI?

The biggest single-day loss of market value by any company in stock market history occurred yesterday, as Nvidia plunged 17% to shave $589 billion off the AI chipmaker’s market cap. The cause of the panic was the surprisingly good performance of DeepSeek, a new Chinese AI application similar to ChatGPT.

Those who have tested DeepSeek find it performs about as well as the best American AI models, with lower consumption of computing resources. It is also much cheaper to use. What really stunned the tech world is that the developers claimed to have trained the model for only about six million dollars, which is way, way less than the billions that a large U.S. firm like OpenAI, Google, or Meta would spend on a leading AI model. All this despite the attempts by the U.S. to deny China the most advanced Nvidia chips. The developers of DeepSeek claim they worked with a modest number of chips, models with deliberately curtailed capabilities that met U.S. export restrictions.

One conclusion, drawn by the Nvidia bears, is that this shows you *don’t* need ever more of the most powerful and expensive chips to get good development done. The U.S. AI development model has been to build more, huge, power-hungry data centers and fill them up with the latest Nvidia chips. That has allowed Nvidia to charge huge profit premiums, as Google and other big tech companies slurp up all the chips that Nvidia can produce. If that supply/demand paradigm breaks, Nvidia’s profits could easily drop in half, e.g., from 60+% gross margins to a more normal (but still great) 30% margin.

The Nvidia bulls, on the other hand, claim that more efficient models will lead to even more usage of AI, and thus increase the demand for computing hardware – – a cyber instance of Jevons’ Paradox (where the increase in the efficiency of steam engines in burning coal led to more, not less, coal consumption, because it made steam engines more ubiquitous).

I read a bunch of articles to try to sort out hype from fact here. Folks who have tested DeepSeek find it to be as good as ChatGPT, and occasionally better. It can explain its reasoning explicitly, which can be helpful. It is open source, which I think means the code, or at least the “weights,” have been published. It does seem to be unusually efficient. Westerners have downloaded it onto (powerful) PCs and run it there successfully, if a bit slowly. This means you can embed it in your own specialized code, or do your AI work away from the prying eyes of OpenAI or other U.S. AI providers. In contrast, ChatGPT, as far as I know, can only be run on OpenAI’s remote servers.

Unsurprisingly, in the past two weeks DeepSeek has been the most-downloaded free app, surpassing ChatGPT.

It turns out that being starved of computing power led the Chinese team to think their way to several important innovations that make much better use of it. See here and here for gentle technical discussions of how they did that. Some of it involved hardware-ish things like improved memory management. Another key factor is a “mixture of experts” design, which activates only the parts of the model relevant to a given query, instead of running the entire network every time.
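The sparse-routing idea can be sketched in a few lines. This is a toy illustration of top-k mixture-of-experts routing in general, not DeepSeek’s actual architecture; all the sizes and weights are invented:

```python
import numpy as np

# Toy sketch of sparse mixture-of-experts routing: a gating network scores
# every expert for a given input, but only the top-k experts actually run.
# All sizes and weights below are invented for illustration.

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2

gate_w = rng.normal(size=(d_model, n_experts))           # gating network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                                   # score each expert
    chosen = np.argsort(scores)[-top_k:]                  # keep only the top-k
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                              # softmax over the chosen
    # Only the chosen experts do any work -- the other six stay idle,
    # which is where the compute savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = moe_forward(rng.normal(size=d_model))
print(out.shape)  # (16,)
```

Here only 2 of the 8 experts run per input, so most of the model’s parameters sit out any given query — the rough intuition behind getting more capability per unit of compute.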

A number of experts scoff at the claimed six million dollar figure for training, noting that if you include all the costs that were surely involved in the development cycle, the total must run to hundreds of millions of dollars. That said, it was still appreciably cheaper than the usual American way. Furthermore, it seems quite likely that making use of answers generated by ChatGPT helped DeepSeek to rapidly emulate ChatGPT’s performance. It is one thing to catch up to ChatGPT; it may be tougher to surpass it. Also, presumably the compute-efficient tricks devised by the DeepSeek team will now be applied in the West as well. And there is speculation that DeepSeek actually has access to thousands of the advanced Nvidia chips, but hides that fact since it involved end-running U.S. export restrictions. If so, then their accomplishment would be less amazing.

What happens now? I wish I knew. (I sold some Nvidia stock today, only to buy it back when it started to recover in after-hours trading). DeepSeek has Chinese censorship built into it. If you use DeepSeek, your information gets stored on servers in China, the better to serve the purposes of the government there.

Ironically, before this DeepSeek story broke, I was planning to write a post here this week pondering the business case for AI. For all the breathless hype about how AI will transform everything, it seems little money has been made except for Nvidia. Nvidia has been selling picks and shovels to the gold miners, but the gold miners themselves seem to have little to show for the billions and billions of dollars they are pouring into AI. A problem may be that there is not much of a moat here – – if lots of different tech groups can readily cobble together decent AI models, who will pay money to use them? Already, it is being given away for free in many cases. We shall see…