Broadcom unveils Tomahawk 5 chip to unlock AI networking

With RDMA over Converged Ethernet, or RoCE, Ethernet switching is poised to replace InfiniBand as the interconnect for GPUs, says Ethernet switch chip vendor Broadcom.

Broadcom 2022

For some time, specialists in computer networking have been talking about a second network. The usual network is the one that connects client computers to servers, the LAN. The rise of artificial intelligence has created a network “behind” this network, a “scale-out” network for running AI tasks such as deep learning programs that must be trained across thousands of GPUs.

This has led to what switch silicon vendor Broadcom describes as a critical impasse. Nvidia, the main supplier of GPU chips for running deep learning, is also becoming the main supplier of the networking technology that interconnects those chips, using the InfiniBand technology it gained when it acquired Mellanox in 2020.

The danger, some say, is that it’s all tied to one company, with no diversification and no way to build a data center where many vendors’ chips can compete.

“What Nvidia is doing is saying, I can sell a GPU for a few thousand dollars, or I can sell the equivalent of an embedded system for half a million to over a million dollars,” said Ram Velaga, senior vice president and general manager of the Core Switching Group at networking chip giant Broadcom, in an interview with ZDNet.

“That doesn’t sit well with cloud providers at all,” Velaga told ZDNet, meaning the likes of Amazon, Alphabet’s Google, Meta and others. The economics of these cloud giants are based on reducing costs as they scale computing resources, which dictates avoiding reliance on a single supplier.

“And now there’s this tension in this industry,” he said.

To resolve this tension, Broadcom says the solution is to follow the open networking path of Ethernet technology and move away from the proprietary path of InfiniBand.

Broadcom on Tuesday unveiled the Tomahawk 5, the company’s latest switch chip, capable of switching a total of 51.2 terabits per second of bandwidth between endpoints.

“There’s an engagement with us, saying, Hey, look, if the Ethernet ecosystem can deliver all of the benefits that InfiniBand is able to bring to a GPU interconnect, and bring them to a broadly available technology like Ethernet, so that it can be ubiquitous, and create a very large network fabric, it will help people compete on the merits of the GPU, rather than the merits of a proprietary network,” Velaga said.

The Tomahawk 5, available now, arrives two years after Broadcom’s previous part, the Tomahawk 4, a 25.6-terabit-per-second chip.

The Tomahawk 5 aims to level the playing field by adding features that were previously the preserve of InfiniBand. The main difference is latency, the average time to deliver the first bit of data from point A to point B. Latency has been an advantage for InfiniBand, and it becomes especially crucial when moving data between GPU and memory, whether to fetch input data or to fetch parameter data for large neural networks in AI.

A new technology called RDMA over Converged Ethernet, or RoCE, bridges the latency gap between InfiniBand and Ethernet. With RoCE, an open standard replaces the tight coupling of Nvidia GPUs and InfiniBand.

“Once you get RoCE, there’s no longer that InfiniBand advantage,” Velaga said. “Ethernet performance actually matches InfiniBand.”

“Our thesis is that if we can outperform InfiniBand, chip-to-chip, and you have a whole ecosystem that is really looking for Ethernet success, you have a recipe for replacing InfiniBand with Ethernet and enabling a large ecosystem of GPUs to succeed,” said Velaga.

Cloud computing giants such as Amazon “insist that the only way to sell the GPU to them is to use a standard NIC interface capable of transmitting over Ethernet,” says Ram Velaga, general manager of Broadcom’s Core Switching Group.

Broadcom, 2022

The reference to a large GPU ecosystem is actually an allusion to the many competing silicon vendors in the AI market that are coming up with new chip architectures.

They include a series of well-funded startups such as Cerebras Systems, Graphcore and SambaNova, but they also include the cloud providers’ own silicon, such as Google’s Tensor Processing Unit, or TPU, and Amazon’s Trainium chip. All of these efforts could have more opportunity if computing resources did not depend on a single network sold by Nvidia.

“The big cloud guys are saying today, we want to build our own GPUs, but we don’t have InfiniBand fabric,” Velaga observed. “If you can give us a fabric equivalent to Ethernet, we can do the rest of this stuff ourselves.”

Broadcom is betting that as the latency issue subsides, InfiniBand’s weaknesses will become apparent, such as the number of GPUs the technology can support. “InfiniBand has always been a system that had a certain scale limit, maybe a thousand GPUs, because it didn’t really have a distributed architecture.”

Additionally, Ethernet switches can service not only GPUs, but also Intel and AMD processors, so merging networking technology into a single approach has some economic advantages, Velaga suggested.

“I expect the fastest adoption in this market to come from GPU interconnect, and over a period of time I would probably expect the balance to be fifty-fifty,” said Velaga, “because you’ll have the same technology that can be used for CPU interconnect and GPU interconnect, and the fact that there are a lot more CPUs sold than GPUs, you’ll have volume normalization.” GPUs will consume the majority of the bandwidth, while CPUs can consume more ports on an Ethernet switch.

In line with this vision, Velaga points to capabilities suited to AI processing, such as a total of 256 Ethernet ports running at 200 gigabits per second, the most of any switch chip. Broadcom says such a dense 200-gigabit-per-second port configuration is important for enabling “flat, low-latency AI/ML clusters.”
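The port and bandwidth figures quoted in the article are consistent with one another, which a few lines of Python can confirm (a trivial sketch; the arithmetic is mine, not from Broadcom’s briefing):

```python
# Sanity check of the Tomahawk 5 figures cited above.
ports = 256          # 200G Ethernet ports on a single Tomahawk 5
gbps_per_port = 200  # gigabits per second per port

total_tbps = ports * gbps_per_port / 1000  # 1 Tb/s = 1,000 Gb/s
print(total_tbps)  # 51.2 -- the chip's rated switching bandwidth

# That is exactly double the Tomahawk 4's 25.6 Tb/s, in keeping with the
# generation-over-generation doubling the article describes.
print(total_tbps == 2 * 25.6)  # True
```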

Although Nvidia has a lot of power in the data center world, with sales of data center GPUs this year expected at $16 billion, the buyers, the cloud companies, also have a lot of power, and the advantage is on their side.

“The big cloud guys want this,” Velaga said of the pivot from InfiniBand to Ethernet. “When you have these huge clouds with a lot of buying power, they’ve shown they’re able to force a supplier to bend, and that’s the momentum we’re on,” he said. “All these clouds really don’t want that, and they insist that the only way to sell them the GPU is with a standard NIC interface that can transmit over Ethernet.

“It’s already happening: you look at Amazon, that’s how they buy, look at Meta, Google, that’s how they buy.”

