AI: Your little helper in your life. What of Nvidia and the Magnificent Seven?

Here are the questions:

+ Did the Chinese really pull off a major AI breakthrough? Answer: Yes.

+ Are we normal people aware of AI’s powers, and will we buy AI big-time? Answer: Yes.

+ Will the Magnificent Seven (and others) make those sales? Answer: Yes.

+ Will Nvidia sell the picks and shovels to help the Magnificent Seven (and others) make those sales? Answer: Yes.

+ Do we need to temper our unbridled enthusiasm for Nvidia’s stock? Answer: Yes.

+ Will Harry sell any of his huge position in Nvidia? Answer: Probably Yes.

For now, he’s mulling, eyeing where to put any funds he frees up. Cash doesn’t turn him on.

To start, let’s look at the Magnificent Six (all tech, not including Tesla) over the last ten days:

Nvidia has done gruesomely; the perception is it will sell less. Meta (Facebook) has done the best; the perception is that it will produce its AI better and more cheaply, now that the Chinese have led the way.

For some time, I’ve been wrestling with the question “Will the demand for AI be there?” I’ve been asking my business friends.

My friend, the insurance broker, believes he’ll be able to use AI to do research more quickly and to present more enticing deals to his customers. And do it all with, ultimately, fewer people.

My friend, the building contractor, wants to have AI check out the architectural plans he gets. Can they be built better, and cheaper? Can AI read the plans and spit out what materials he needs to build the project?

My friends, the real estate developers and syndicators, want AI to run profit scenarios on projects and figure out ways to tweak their profitability.

Nobody I’ve spoken with is a computer nerd. But they all intuitively understand the immense power of AI, and how it will revolutionize their lives.

They’re all looking for Their Little Helper.

 

Yesterday, the Economist ran a brilliant piece.

Here’s the Economist:

The market reaction, when it came, was brutal. On January 27th, as investors realised just how good DeepSeek’s “V3” and “R1” models were, they wiped around a trillion dollars off the market capitalisation of America’s listed tech firms. Nvidia, a chipmaker and the chief shovel-seller of the artificial-intelligence (AI) gold rush, saw its value fall by $600bn. Yet even if the Chinese model-maker’s new releases rattled investors in a handful of firms, they should be a cause for optimism for the world at large. DeepSeek shows how competition and innovation will make AI cheaper and therefore more useful.

DeepSeek’s models are practically as good as those made by Google and OpenAI—and have been produced at a fraction of the cost. Barred by American export controls from using cutting-edge chips, the Chinese firm undertook an efficiency drive, even reprogramming the chips it used to train the model to eke out every drop of power. The cost of building an AI model that can stand toe-to-toe with the best has plummeted. Within days, DeepSeek’s chatbot was the most downloaded app on the iPhone.

The contrast with America’s approach could not be starker. Sam Altman, the boss of OpenAI, has spent years telling investors—and America’s new president—that vast sums of money and computing power are needed to stay at the forefront of AI. Investors have accordingly been betting that a handful of firms stand to reap vast monopoly-like rents. Yet if fast followers such as DeepSeek can eat away at that lead for a fraction of the cost, then those profits are at risk.

Nvidia became the most valuable listed company in the world thanks to a widespread belief that building the best AI required paying through the nose for its best chips (on which its profit margins are reported to exceed 90%). No wonder DeepSeek’s success led to a stockmarket drubbing for the chipmaker on January 27th. Others in the data-centre business are also licking their wounds, from Siemens Energy (which would have built the turbines to power the build-out) to Cameco (which would have provided the uranium to fuel the reactors to turn the turbines). Had OpenAI been listed, its stock would surely have taken a tumble as well.

Yet there are far more winners than losers from the DeepSeek drama. Some of them are even within tech. Apple will be cheering that its decision not to throw billions at building AI capabilities has paid off. It can sit back and pick the best models from a newly commoditised selection. Smaller labs, including France’s Mistral and the Emirati TII, will be racing to see if they can adopt the same improvements, and try to catch up with their bigger rivals.

Moreover, efficiency gains are unlikely to result in less spending on AI overall. The Jevons paradox—the observation that greater efficiency can lead to more, not less, use of an industrial input—will surely come into play. The possible applications for a language model with computing costs as cheap as DeepSeek’s ($1 per million tokens) are vastly more numerous than those for Anthropic’s ($15 per million tokens). Many uses for cheaper AI are as yet unimagined.

Even Nvidia may not suffer too much in the long run. Although its market clout may be diminished, it will continue to sell chips in vast quantities. Reasoning models, including DeepSeek’s R1 and OpenAI’s o3, require much more computing power than conventional large language models to answer questions. Nvidia will be hoping to supply some of that.

Chart: The Economist

However, the real winners will be consumers. For AI to transform society, it needs to be cheap, ubiquitous and out of the control of any one country or company. DeepSeek’s success suggests that such a world is imaginable. Take Britain, where Sir Keir Starmer, the prime minister, has unveiled a plan to use AI to boost productivity. If he does not need to pay most of the efficiency gains back to Microsoft in usage fees, his proposal has a better chance of success. When producers’ rents vanish, they remain in the pockets of users.

Some have begun to suggest that DeepSeek’s improvements don’t count because they are a consequence of “distilling” American models’ intelligence into its own software. Even if that were so, R1 remains a ground-breaking innovation. The ease with which DeepSeek found greater efficiency will spur competition. It suggests many more such gains are still to be discovered.

For two years the biggest American AI labs have vied to make ever more marginal improvements in the quality of their models, rather than models that are cheap, fast and good. DeepSeek shows there is a better way.
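The Jevons-paradox point the Economist makes is easy to check with back-of-the-envelope arithmetic. The per-million-token prices below are the ones quoted in the piece; the monthly workload and the demand-growth multiplier are purely illustrative assumptions of mine, not figures from the article:

```python
# Per-million-token API prices, as quoted by the Economist.
deepseek_price = 1.0    # dollars per million tokens (DeepSeek)
anthropic_price = 15.0  # dollars per million tokens (Anthropic)

# Hypothetical workload: 2 billion tokens a month = 2,000 million tokens.
tokens_millions = 2_000

cost_deepseek = deepseek_price * tokens_millions    # $2,000/month
cost_anthropic = anthropic_price * tokens_millions  # $30,000/month

# Jevons paradox in miniature: the price per token falls 15x, but if
# cheaper tokens cause usage to grow by more than 15x, total spending
# on AI rises anyway. Assume a hypothetical 20x growth in demand:
usage_multiplier = 20
new_total = deepseek_price * tokens_millions * usage_multiplier  # $40,000/month

print(cost_deepseek, cost_anthropic, new_total)
```

Under those made-up numbers, the cheap model ends up generating more aggregate spending ($40,000 a month) than the expensive one did ($30,000), which is exactly why lower prices need not mean a smaller market for compute.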

My son Michael is a sadist

He sends his poor father a note: “Worth Reading.”

The Short Case for Nvidia.

He knows his father has a huge position in Nvidia.

He also knows his father is obsessive. And he will plow through stuff he doesn’t understand. Which 85% of this was.

Here are some “conclusion” type paragraphs I excerpted:

At a high level, NVIDIA faces an unprecedented convergence of competitive threats that make its premium valuation increasingly difficult to justify at 20x forward sales and 75% gross margins. The company’s supposed moats in hardware, software, and efficiency are all showing concerning cracks. The whole world—thousands of the smartest people on the planet, backed by untold billions of dollars of capital resources—are trying to assail them from every angle….

Perhaps most devastating is DeepSeek’s recent efficiency breakthrough, achieving comparable model performance at approximately 1/45th the compute cost. This suggests the entire industry has been massively over-provisioning compute resources. Combined with the emergence of more efficient inference architectures through chain-of-thought models, the aggregate demand for compute could be significantly lower than current projections assume. The economics here are compelling: when DeepSeek can match GPT-4 level performance while charging 95% less for API calls, it suggests either NVIDIA’s customers are burning cash unnecessarily or margins must come down dramatically.

The fact that TSMC will manufacture competitive chips for any well-funded customer puts a natural ceiling on NVIDIA’s architectural advantages. But more fundamentally, history shows that markets eventually find a way around artificial bottlenecks that generate super-normal profits. When layered together, these threats suggest NVIDIA faces a much rockier path to maintaining its current growth trajectory and margins than its valuation implies. With five distinct vectors of attack—architectural innovation, customer vertical integration, software abstraction, efficiency breakthroughs, and manufacturing democratization—the probability that at least one succeeds in meaningfully impacting NVIDIA’s margins or growth rate seems high. At current valuations, the market isn’t pricing in any of these risks.

And the implication for the AI makers/suppliers:

But you better believe that Meta and every other big AI lab is taking these DeepSeek models apart, studying every word in those technical reports and every line of the open source code they released, trying desperately to integrate these same tricks and optimizations into their own training and inference pipelines. So what’s the impact of all that? Well, naively it sort of seems like the aggregate demand for training and inference compute should be divided by some big number. Maybe not by 45, but maybe by 25 or even 30? Because whatever you thought you needed before these model releases, it’s now a lot less.

Read the entire “Short Case for Nvidia” here.

That’s it for now. I’m supremely optimistic.

AI is the greatest invention of our time.

It’s 3:32 AM for me on Thursday morning, January 30, 2025.

I have to be on the tennis court in less than five hours.

Harry Newton