Nvidia unveils Blackwell Ultra and Vera Rubin, its latest AI superchips


At Nvidia’s annual GTC conference in San Jose, CEO Jensen Huang took center stage to unveil an ambitious lineup of AI superchips and next-generation hardware designed to maintain the company’s dominant position in the fast-evolving artificial intelligence market. The event showcased a wave of innovation, headlined by the debut of the Blackwell Ultra chips — a powerful new series engineered to accelerate AI workloads — and the introduction of a revolutionary GPU family called Vera Rubin, slated for release in 2026. This new architecture pairs Nvidia’s first-ever custom-built CPU, named Vera, with a cutting-edge GPU design dubbed Rubin, honoring the pioneering astronomer who uncovered evidence of dark matter.

Nvidia’s meteoric rise has been fueled by the AI explosion triggered by the launch of ChatGPT in late 2022. As AI adoption surged, Nvidia’s sales soared more than sixfold, with the company’s GPUs becoming indispensable for training massive language models. Tech giants like Microsoft, Google, and Amazon have leaned heavily on Nvidia’s hardware to power their data centers, and as the AI boom intensifies, these companies are doubling down on their investments in Nvidia’s technology. Both Google and Microsoft used the GTC stage to announce expanded collaborations, signaling Nvidia’s central role in shaping the AI landscape.

The Blackwell Ultra chips are designed to redefine performance and efficiency for large-scale AI tasks. Huang explained that these chips would allow cloud providers to offer high-speed AI services, particularly for latency-sensitive applications like real-time customer interactions, autonomous vehicles, and advanced medical diagnostics. Nvidia projects that Blackwell Ultra could deliver up to 50 times more revenue for cloud providers compared to its previous Hopper generation, thanks to faster data processing, lower energy consumption, and the ability to handle more complex workloads.

The Blackwell Ultra lineup will feature multiple configurations tailored to different needs. The GB300 model combines the GPU with an Nvidia Arm-based CPU for enhanced performance, while the B300 offers the GPU on its own for more flexible deployments. For larger-scale operations, Nvidia revealed an eight-GPU server blade configuration and an even more powerful rack version that houses a staggering 72 Blackwell chips — ideal for data centers tackling enormous AI models.

The Vera Rubin system represents Nvidia’s vision for the future of AI processing. When paired with the Vera CPU, the Rubin GPU will achieve an unprecedented 50 petaflops of performance during inference — more than twice the output of the current Blackwell architecture. It will also support up to 288 GB of ultra-fast memory, a critical advantage for developers working with massive datasets and sophisticated AI models.
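
For readers keeping score, the comparison comes down to simple arithmetic. The short calculation below assumes the roughly 20 petaflops of inference performance Nvidia has cited for Blackwell as the baseline (a figure not stated in this article); the variable names and numbers are illustrative only.

    # Rough comparison of the quoted inference figures, in petaflops.
    # The Blackwell baseline is an assumption drawn from Nvidia's public
    # Blackwell specifications, not a number given in this article.
    rubin_inference_pflops = 50
    blackwell_inference_pflops = 20  # assumed baseline

    speedup = rubin_inference_pflops / blackwell_inference_pflops
    print(f"Rubin vs. Blackwell inference throughput: {speedup:.1f}x")  # about 2.5x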

One notable shift in Nvidia’s strategy is how it now counts GPUs. Traditionally, when Nvidia combined two chips into a single unit, it still counted as one GPU. With Rubin, each chip will now be recognized individually, even when combined. This approach anticipates the arrival of "Rubin Next," an enhanced version scheduled for 2027, which will incorporate four GPUs into a single processing unit, driving performance even higher for the most demanding AI workloads.
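
As a rough illustration of what that accounting change means in practice, the sketch below tallies a hypothetical rack under both conventions; the function, the dual-die assumption, and the 72-package figure are illustrative examples rather than Nvidia specifications.

    # Illustrative sketch: the same hardware counted under the old
    # per-package convention versus the new per-die convention.
    def gpu_count(packages: int, dies_per_package: int, count_dies: bool) -> int:
        """Return the GPU tally for a rack under the chosen convention."""
        return packages * dies_per_package if count_dies else packages

    packages = 72          # hypothetical rack size
    dies_per_package = 2   # assumes a dual-die package

    print(gpu_count(packages, dies_per_package, count_dies=False))  # old convention: 72
    print(gpu_count(packages, dies_per_package, count_dies=True))   # new convention: 144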

Nvidia also expanded its AI hardware portfolio with new systems aimed at developers and researchers. The DGX Spark and DGX Station desktop machines are built to run complex language models like Meta’s Llama or the Chinese-developed DeepSeek, enabling local model training and inference without relying on large-scale data centers. Alongside these machines, Nvidia launched Dynamo, open-source inference software designed to help customers get more performance out of Nvidia hardware, and introduced advanced networking upgrades capable of linking thousands of GPUs into a unified, high-performance AI cluster.
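
For a sense of what local inference on a machine like the DGX Spark or DGX Station might look like, the sketch below loads an open-weight model with the Hugging Face Transformers library. The model identifier and generation settings are placeholders chosen for illustration; Nvidia’s own software stack for these systems may differ.

    # Minimal local-inference sketch using Hugging Face Transformers.
    # The model name is a placeholder for any open-weight model (a Llama
    # or DeepSeek variant, for example) that fits in local GPU memory.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
        device_map="auto",    # place model weights on the local GPU(s)
        torch_dtype="auto",   # keep the checkpoint's native precision
    )

    result = generator(
        "Summarize the Vera Rubin GPU announcement in one sentence.",
        max_new_tokens=60,
    )
    print(result[0]["generated_text"])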

Huang addressed growing investor concerns about DeepSeek R1, an AI model developed in China that some feared could weaken Nvidia’s market position. He reassured the audience that, far from diminishing Nvidia’s relevance, models like DeepSeek demand even greater computing power because of their sophisticated reasoning capabilities. The Blackwell Ultra chips, he argued, are specifically designed to handle this new class of resource-intensive AI models.

The GTC event wasn’t just about hardware — Nvidia’s software and cloud partnerships took center stage as well. Google Cloud announced an expanded collaboration, bringing Nvidia’s Blackwell GPUs to a new lineup of virtual machines designed to accelerate AI training for businesses and developers. Google DeepMind, the research lab behind the Gemini AI model, is partnering with Nvidia to fine-tune performance on Nvidia GPUs. Additionally, Nvidia will integrate DeepMind’s SynthID watermarking tool into its Cosmos video generation platform, aiming to improve transparency and trust in AI-generated content.

Microsoft also announced a broader partnership with Nvidia to supercharge its Azure AI platform. The company revealed that its latest AI services, including ChatGPT and Microsoft Copilot, will now run on upgraded infrastructure powered by Nvidia’s Blackwell chips. Microsoft is also integrating faster versions of Meta’s Llama models on Azure, alongside the newly added Mistral Small 3.1 model, which supports multimodal input — combining both text and images. To support these innovations, Microsoft launched new Azure virtual machines equipped with Nvidia’s high-performance Blackwell GPUs and next-generation networking technology, designed to handle complex AI tasks with greater efficiency. Huang confirmed that even more advanced Nvidia chips will arrive on Azure in 2025, extending this strategic partnership further into the future.

Reflecting on the rapid rise of AI, Huang noted, “This last year is where almost the entire world got involved.” He emphasized that the acceleration of AI development — from research labs to mainstream businesses — has created unprecedented demand for computing power, and Nvidia is committed to meeting that demand with more powerful chips, smarter software, and deeper collaborations with the biggest names in tech.

Nvidia’s GTC conference wasn’t just a product showcase — it was a statement of intent. The company is positioning itself not just as a hardware leader but as the driving force behind the AI revolution, shaping the tools, platforms, and partnerships that will define the next generation of artificial intelligence. With Blackwell Ultra and Vera Rubin leading the charge, Nvidia looks poised to remain at the forefront of AI innovation for years to come.


 
