Cryptopolitan
2025-07-15 16:57:58

Broadcom takes aim at Nvidia with new Tomahawk Ultra chip as AI battle escalates

Broadcom’s chip division rolled out a new networking processor on Tuesday, designed to turbocharge artificial intelligence workloads that demand tight coordination among hundreds of compute units. The announcement marks another salvo in its tussle with the AI chip-making leader, Nvidia. The latest Broadcom device, known as Tomahawk Ultra, serves as a traffic controller, shuttling vast volumes of data between dozens or even hundreds of silicon chips housed within a single server rack.

Tomahawk Ultra directly challenges Nvidia

AI training and inference rely on “scale-up” computing, where chips are clustered closely to share data at blistering speeds. Until now, Nvidia’s NVLink Switch reigned supreme for that task, but Broadcom claims its newcomer can link up to four times more processors in a single network. Rather than leaning on a proprietary interconnect, the Ultra leverages a turbocharged version of Ethernet, beefed up for low latency and high throughput.

Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group, told reporters that the chip can manage communications among far more units than Nvidia’s rival product, all while using a broadly supported protocol.

“Tomahawk Ultra is a testament to innovation, involving a multi-year effort by hundreds of engineers who reimagined every aspect of the Ethernet switch,” said Velaga. “This highlights Broadcom’s commitment to invest in advancing Ethernet for high-performance networking and AI scale-up,” he added.

Broadcom has already been supplying chip-making services to customers like Google, helping the search giant assemble its own AI accelerators as an alternative to Nvidia GPUs. With Tomahawk Ultra now shipping, the company hopes to further erode Nvidia’s dominance by offering data center architects a switch that scales to larger clusters at similar, or better, speeds.

Broadcom engineers spent about three years in development

The processors will be fabricated by Taiwan Semiconductor Manufacturing Co. using its five-nanometer node, the same advanced process behind many of the world’s fastest chips. Velaga noted that Broadcom’s engineering teams spent roughly three years in development, originally targeting high-performance computing markets before pivoting to the booming generative AI sector.

“AI and HPC workloads are converging into tightly coupled accelerator clusters that demand supercomputer-class latency, critical for inference, reliability, and in-network intelligence from the fabric itself,” said Kunjan Sobhani, lead semiconductor analyst at Bloomberg Intelligence. “Demonstrating that open-standards Ethernet can now deliver sub-microsecond switching, lossless transport, and on-chip collectives marks a pivotal step toward meeting those demands of an AI scale-up stack, projected to be double-digit billions in a few years.”

In traditional scale-out setups, servers are spread across racks and linked via standard networks, which adds latency. Scale-up, by contrast, keeps compute elements within a narrow physical footprint, often a single rack, so bits bounce back and forth in microseconds. That kind of speed is vital when training massive neural nets or running real-time inference. As AI models grow ever larger, the race is on to design infrastructure that can handle exabytes of parameters while staying cost-effective and power-efficient.
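To put the latency stakes in rough perspective, the following is a minimal back-of-the-envelope sketch, not a vendor benchmark. It assumes a ring all-reduce across a hypothetical rack of accelerators, which requires 2×(N−1) communication steps, and plugs in placeholder per-hop switch latencies (sub-microsecond for a scale-up fabric versus several microseconds for a multi-hop scale-out path); both figures are illustrative assumptions, not Broadcom or Nvidia data.

```python
# Back-of-the-envelope sketch: cumulative switch latency for a ring all-reduce.
# The per-hop figures below are illustrative placeholders, not measurements
# of Tomahawk Ultra or NVLink Switch.

def allreduce_switch_latency_us(num_chips: int, per_hop_latency_us: float) -> float:
    """Switch-latency overhead of a ring all-reduce, ignoring bandwidth and software costs."""
    steps = 2 * (num_chips - 1)  # reduce-scatter phase plus all-gather phase
    return steps * per_hop_latency_us

chips = 64  # hypothetical number of accelerators in one scale-up domain
for label, hop_us in [("sub-microsecond scale-up switch (assumed 0.25 us/hop)", 0.25),
                      ("multi-hop scale-out network (assumed 5 us/hop)", 5.0)]:
    total = allreduce_switch_latency_us(chips, hop_us)
    print(f"{label}: ~{total:.0f} us of switch latency per all-reduce")
```

Because every collective operation pays the per-hop cost many times over, even small differences in switch latency compound quickly at cluster scale, which is why both vendors compete so aggressively on this metric.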
By adopting Ethernet, a well-understood, open standard, and pushing its performance envelope, Broadcom hopes to offer data-center operators an easier path to expanding their AI farms without being locked into one vendor’s ecosystem. With Tomahawk Ultra now in customers’ hands, the contest over who supplies the world’s AI engines is entering a new, more crowded phase, one where openness and scale could tip the balance just as much as raw chip horsepower.

This also sets Broadcom on the path to becoming a force to be reckoned with in the AI industry. In the first quarter, the company was tipped as one of the major firms that could take a stake in and help operate Intel’s factories alongside TSMC, Nvidia, and AMD, in a deal reportedly proposed by TSMC.
