NVIDIA has announced record Q2 revenues and profits in stellar results that exploded analyst expectations – especially those of some Wall Street bloggers who warned investors, only days earlier, that its “party may be over” and “not to be the greater fool”. An analyst self-own if ever there was one. (Just toss a coin; it’s as likely to give you a heads-up.)
The software and fabless GPU and SoC designer revealed record revenues of $13.51 billion (against an outlook of $11 billion), up 88% on Q1 and 101% year on year. Net income for the quarter was over $6.1 billion – an order of magnitude higher than last year’s Q2 profit of $656 million.
Ten times profits on a doubling of revenue: impressive stuff. So, it is no surprise that, since January, NVIDIA stock has seen a steady climb from roughly $143 to $471, valuing the company north of $1.16 trillion. In trading today, shares are likely to pass the $500 mark.
So, what’s going on? The answer, of course, is AI and cloud data centers. Three quarters of NVIDIA’s Q2 revenues – $10.3 billion, up 141% sequentially and 171% year on year – came from those data centers, principally those of large cloud services providers.
On the post-results analyst call, founder and CEO Jensen Huang hailed a new age for enterprise technology:
A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI. NVIDIA GPUs connected by our Mellanox networking and switch technologies, and running our CUDA AI software stack, make up the computing infrastructure of generative AI. During the quarter, major cloud service providers announced massive NVIDIA H100 AI infrastructures. And leading enterprise IT system and software providers announced partnerships to bring NVIDIA AI to every industry. The race is on to adopt generative AI.
Maybe so. There is no doubt that compelling enterprise use cases, partnerships, and products exist for GPT and other large language models, with more incoming.
But, as revealed in our earlier report, to see that as driven by strategic enterprise decisions is risky. Much of the clamor for generative AI at present is via shadow IT, as individuals play with free tools in the cloud. NVIDIA hardware lets them do that, on whichever platform they prefer.
As the FT observed this week (quoted in our report), most C-level enterprise strategists are talking up AI to ride the share price wave, but few are implementing generative tools from the top down; less than one in six, in fact. But that’s not NVIDIA’s concern – for now. Evidence suggests that more stellar quarters are on the horizon.
Executive Vice President and CFO Colette Kress added detail to the headlines:
Data center compute revenue nearly tripled year on year, driven primarily by accelerating demand from cloud service providers and large consumer internet companies for our HGX platform, the engine of generative AI and large language models.
Major companies including AWS, Google Cloud, Meta, Microsoft Azure, and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying in-volume HGX systems based on our Hopper and Ampere architecture tensor core GPUs.
Networking revenue almost doubled year on year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI. There is tremendous demand for NVIDIA accelerated computing and AI platforms. […] We expect supply to increase each quarter through next year.
By geography, data center growth was strongest in the US, as customers direct their capital investments to AI and accelerated computing. China’s demand was within the historical range of 20% to 25% of our data center revenue, including compute and networking solutions.
So, what are the implications of China still representing a big chunk of revenues, with tensions rising between West and East? Kress said:
We believe that current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our data center GPUs, if adopted, would have an immediate material impact to our financial results.
Then she added:
However, over the long term, restrictions prohibiting the sale of our data center GPUs to China, if implemented, will result in a permanent loss of opportunity for the US.
An extraordinary statement that might be read as, ‘tie our hands, and the whole country loses’.
Consumer internet companies have also driven strong demand for NVIDIA hardware, she continued:
Their investments in data center infrastructure, purpose-built for AI, are already generating significant returns. For example, Meta recently highlighted that, since launching Reels and AI recommendations, it has driven a more than 24% increase in time spent on Instagram. Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud, as well as demand for on-premise infrastructure.
Yet as we have seen, that isn’t strictly true.
Generative AI app use within the enterprise is certainly booming, as ChatGPT and the rest promise something for nothing; but that’s not the same as enterprise AI deployment. Beware that distinction: any spike in lawsuits against OpenAI and others, for scraping data that was never in the public domain, may eat into that consumer-style enthusiasm.
But little of that will bother NVIDIA, as enterprise use cases do emerge, making the horizon look rosy – in the short to medium term, at least. Kress said:
Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our data center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity.
Virtually every industry can benefit from generative AI, she noted:
AI co-pilots, such as those just announced by Microsoft, can boost the productivity of over a billion office workers and tens of millions of software engineers. Millions of professionals in legal services, sales, customer support, and education will be able to leverage AI systems that are trained in their fields.
And the co-pilots and assistants are set to create new multi-hundred-billion-dollar market opportunities for our customers. We are seeing some of the earliest applications of generative AI in marketing, media, and entertainment.
CEO Huang certainly waved the flag for the technology in a quick ‘LLM 101’ for Wall Street analysts:
Large Language Models are pretty phenomenal. [They have] the ability to understand unstructured language. But at its core, what [an LLM] has learned is the structure of human language. And it has encoded within it – compressed within it – a large amount of human knowledge that it has learned from the corpuses that it studied. When you see smaller models, it’s very likely that they were derived or distilled from, or learned from, larger models, just as you have professors and teachers and students.
And you’re going to see this going forward. You start from a very large model, and it has a large amount of generality and generalization and what’s called zero-shot capability. And so, for a lot of applications, and for questions or skills that you haven’t trained it specifically on, these Large Language Models, miraculously, have the capability to perform them. That’s what makes it so magical!
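Huang’s professor-and-student analogy describes what researchers call knowledge distillation: a smaller “student” model is trained to mimic the full output distribution of a larger “teacher”, not just its final answers. As a minimal, hypothetical sketch – nothing from the earnings call itself – the core of a distillation loss can be written in a few lines of Python:

```python
import math

def softmax(logits, temperature=1.0):
    # A temperature above 1 softens the distribution, exposing the teacher's
    # relative preferences across all outputs, not just its top pick.
    scaled = [z / temperature for z in logits]
    peak = max(scaled)
    exps = [math.exp(z - peak) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # minimizing this trains the student to reproduce the teacher's whole
    # output distribution, which is how smaller models "learn from" larger ones.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when the student’s logits match the teacher’s exactly, and grows as the two distributions diverge – a compact way of seeing why Huang’s “students” inherit so much of their “professors’” knowledge.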
Hyperbole aside, Huang set out his vision in no uncertain terms:
We’re seeing two simultaneous platform shifts at the same time. One is accelerated computing, and the reason for that is because it’s the most cost-effective, most energy-effective, and most performant way of doing computing now. Then, all of a sudden, enabled by accelerated computing, generative AI came along. And this incredible application now gives everyone two reasons to transition, to do a platform shift from general-purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. So, this is not a near-term thing. This is a long-term industry transition.
The most significant opportunities are in the world’s largest industries, where companies can realize trillions of dollars of productivity gains. It is an exciting time for NVIDIA, our customers, partners, and the entire ecosystem to drive this generational shift in computing.
The best way for companies to increase their throughput, and improve energy efficiency and cost efficiency, is to divert their capital budgets to accelerated computing and generative AI, claimed Huang. In short: send your cash to NVIDIA!
In fairness, it has worked for many investors. The average crypto nerd who bet their shirt on Bitcoin would have done better to buy NVIDIA shares rather than huddle next to one of its GPUs and pray it mines some tokens.