Anyone waiting for the AI boom to cool off will have to wait a lot longer. NVIDIA just posted another massive quarter, powered by the kind of demand most companies can only dream of. Cloud GPUs are sold out. Blackwell demand is overwhelming. Even older architectures like Hopper and Ampere are running at full utilisation. Jensen Huang says we have entered the virtuous cycle of AI, where more compute creates better models and better models create even more compute demand. Whatever label people want to put on this moment, the results tell their own story.

Breakdown:
NVIDIA reported third-quarter fiscal 2026 revenue of $57 billion, up 22 percent from the previous quarter and 62 percent from a year earlier. The data centre business alone delivered $51.2 billion, up 66 percent year on year. CFO Colette Kress called this a significant achievement given the company’s size.
Blackwell is driving a large part of this momentum. Jensen Huang said Blackwell sales are off the charts and cloud GPUs are sold out. Customers are already shifting from the GB200 line to the GB300, which now makes up two thirds of total Blackwell revenue. Even legacy GPUs across Hopper and Ampere families remain fully utilised.
The company backed its claims with performance numbers. Blackwell Ultra trains models five times faster than Hopper. On DeepSeek R1 benchmarks, NVIDIA says Blackwell delivers ten times higher performance per watt and ten times lower cost per token compared to the H200. These gains could reshape the economics of model training and inference.
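To make those multiples concrete, here is a minimal sketch of how a 10x reduction in cost per token flows through to the per-million-token rates that inference providers typically quote. The baseline dollar figure is invented for illustration; only the 10x ratio comes from NVIDIA's claim.

```python
# Illustration of NVIDIA's claimed 10x cost-per-token improvement.
# The H200 baseline cost below is hypothetical, chosen only to show
# the arithmetic; the 10x ratio is the figure cited in the article.

def cost_per_million_tokens(cost_per_token: float) -> float:
    """Scale a per-token cost to the commonly quoted per-million-token rate."""
    return cost_per_token * 1_000_000

# Assumed (hypothetical) H200 baseline: $0.000002 per generated token.
h200_cost_per_token = 2e-6

# Claimed Blackwell improvement: 10x lower cost per token.
blackwell_cost_per_token = h200_cost_per_token / 10

print(f"H200:      ${cost_per_million_tokens(h200_cost_per_token):.2f} per 1M tokens")
print(f"Blackwell: ${cost_per_million_tokens(blackwell_cost_per_token):.2f} per 1M tokens")
```

At fleet scale the same ratio applies to power: a 10x gain in performance per watt means the same data-centre power envelope can serve roughly ten times the inference workload, which is why these multiples matter more than raw speed for AI factory economics.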
Huang also pushed back against fears of an AI bubble. He said the company sees accelerating compute demand across both training and inference, with exponential growth visible in both workloads. NVIDIA believes the industry has entered a compounding cycle where each breakthrough triggers even more infrastructure expansion.
Looking ahead, the Rubin architecture is scheduled for a 2026 ramp. First silicon has already arrived, and the company says the ecosystem is preparing for a fast and wide rollout.
NVIDIA is now involved in AI factory projects that together total nearly five million GPUs across cloud providers, sovereign compute initiatives, enterprises and supercomputing centres. Large deployments include xAI’s Colossus 2, a gigawatt-scale data centre; a partnership with AWS and HUMAIN involving up to 150,000 accelerators; and a new agreement with Anthropic, which plans to adopt NVIDIA’s architecture and commit up to one gigawatt of compute.
Huang summed it up plainly: NVIDIA runs every important AI model today, whether it comes from OpenAI, Anthropic, xAI, Gemini, science labs, biology teams or robotics groups.
Why this matters:
NVIDIA is no longer just selling GPUs. It is shaping the physical backbone of global AI infrastructure. Every new model requires more compute than the last, and every breakthrough pushes demand higher. Governments are building sovereign AI centres, enterprises are creating internal AI factories and cloud providers are racing to scale capacity. NVIDIA sits at the centre of all of this. Its results are not hype. They are a signal of where the world is heading.
The Big Picture:
AI is shifting from software to heavy infrastructure. This is the cloud era all over again, only larger, faster and more capital intensive. The companies that control compute, networking and acceleration will dictate the pace of global innovation. NVIDIA’s advantage across hardware, software and ecosystem partnerships keeps widening because no other player has matched its full-stack dominance.
The Crunch:
People keep asking when the AI boom ends. NVIDIA keeps answering with numbers that say it is only getting started.