Nvidia Blackwell: New superchip fuels race to AI supremacy

Chipmaker Nvidia has released a new superchip that it says will turbocharge artificial intelligence and bring it to a whole new level.

Is this the start of a new power-hungry industrial revolution?

It wasn’t a concert — Taylor Swift was nowhere to be seen. Still, thousands of people packed an arena in San Jose, California, on March 18 to listen and cheer. The star of this show was Jensen Huang, who was on stage showing off a new chip to be launched later in the year. His two-hour performance has since been watched by nearly 27 million people on YouTube.

Huang is the CEO and co-founder of Nvidia and was presenting in his customary black leather jacket at the company’s annual developer conference. Though still not a household name beyond the tech community, Nvidia caused waves recently after its market capitalization topped $2 trillion (€1.84 trillion), making it the third-most valuable listed company in the US behind Microsoft and Apple. All this is linked to the company’s semiconductors, called graphics processing units (GPUs). Nvidia is a chip designer — and outsources chipmaking to expert manufacturers.

Its hardware was initially used for video gaming. But the company found other applications, such as cryptocurrency mining, 3D modeling and self-driving vehicles. Most importantly, it pivoted to integrating its chips into generative artificial intelligence (GAI) systems — a form of self-learning artificial intelligence capable of generating text, images or other media. At first glance, technology just for artificial intelligence (AI) may seem like a short road, but the possibilities around the technology have taken the world by storm since the introduction of ChatGPT in November 2022. Today, Nvidia’s biggest customers are cloud-computing titans and companies that build AI models.

The new Blackwell superchip

Through its know-how, Nvidia has the chance to power this transformative technology. It currently holds around 80% of the global market for such AI chips. The new chip presented in California is called Blackwell. With 208 billion transistors, it is an upgrade of the company’s H100 chip, which Huang said is currently the most advanced GPU in production.

The next-generation chip is 30 times faster at some tasks than its predecessor. Huang said the company spent around $10 billion to develop Blackwell. Each chip will cost $30,000 to $40,000. The company hopes its newest product will tighten its hold on the AI chip market.

How does the technology work?

The Blackwell chip is part of an advanced system that the company says can be used for “trillion-parameter scale generative AI.” The chips break tasks into small pieces. This parallel processing makes it possible to work out calculations faster. The new chip has a number of key features that reduce both latency and energy use, says Bibhu Datta Sahoo, a professor at the University at Buffalo’s Center for Advanced Semiconductor Technologies.
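The parallel-processing idea described above can be sketched in a few lines. This is an illustration only, not Nvidia’s hardware or API: one large computation is split into chunks that are worked on concurrently (with Python threads standing in for GPU cores) and the partial results are then combined.

```python
# Illustration of the parallel-processing idea: break one big task into
# small pieces, compute the pieces concurrently, then combine the results.
# Python threads stand in for GPU cores here; this is not Nvidia's API.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one small piece of the overall task.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Run all chunks concurrently and combine the partial results.
        return sum(pool.map(partial_sum, chunks))

# The parallel result matches a plain sequential computation.
data = list(range(1_000))
assert parallel_sum_of_squares(data) == sum(x * x for x in data)
```

A real GPU applies the same divide-and-combine pattern across thousands of cores at once, which is why it suits the enormous matrix arithmetic behind AI model training.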

Among other features, the Blackwell chip allows a large number of GPUs to be connected so that large AI models can be trained with a smaller carbon footprint. It also incorporates accelerated decompression of most major data formats, which makes it possible to shift data processing away from other types of chips. Asked whether the chip could change the world, Sahoo told DW that it is difficult to say, with so many teams working on things that could revolutionize AI model training. Nonetheless, “the Blackwell chip is a very good step in the right direction.”

More power, less energy

For Huang, change cannot come fast enough. He pointed out that general-purpose computing has run out of steam and that accelerated computing has reached a turning point.

“We are seeing the start of a new industrial revolution,” he said in San Jose. Creative ways need to be found to scale up while driving down costs so society can “consume more and more computing while being sustainable.” To make this possible, data centers need to grow and become more powerful. But some fear that power-hungry AI chips will just add to energy use and strain grids. Nvidia sees the problem and says its new chip — though more powerful — is more energy efficient.

Experts agree. Based on available data, the sheer performance of the Blackwell chip can reduce energy consumption by a factor of 3 to 4 compared with the previous generation of GPUs for training large AI models, said Sahoo. This energy efficiency is especially important “considering the fact that the power consumption of data centers in the US is expected to reach 35 GW by 2030, up from 17 GW in 2022.”

The road ahead is not without roadblocks

Despite all the advances in building powerful chips for the next generation of AI, some have their doubts and fear a financial bubble as investors pile in. So far, it has been the AI hardware makers that have seen the biggest boom.

This is natural since the underlying infrastructure must be in place before software can be used. Now that the infrastructure is coming into place, the technology can expand. To secure its position, Nvidia is ramping up investments in its networking and software offerings to connect and manage its superchip hardware. Yet the future holds a number of other challenges.

Growing demand for semiconductors could put a strain on global supply chains. Most precarious is the fact that so much chip manufacturing is based in Taiwan.

Finally, the competition is not waiting to see what happens. Big rivals like Intel and AMD plus startups Cerebras and Groq are all working on their own chips. Even Nvidia’s biggest customers — Amazon, Google and Microsoft — are getting into the chip design business. In an industry where size matters and new technology is quickly outdated, it will be an expensive race to stay on top.

Edited by: Uwe Hessler
