Nvidia to showcase Blackwell server installations at Hot Chips 2024


By Calvin S. Nelson


Forward-looking: Nvidia will be showcasing its Blackwell tech stack at Hot Chips 2024, with pre-event demonstrations this weekend and at the main event next week. It is an exciting time for Nvidia enthusiasts, who will get an in-depth look at some of Team Green's latest technology. However, what remains unsaid are the potential delays reported for the Blackwell GPUs, which could impact the timelines of some of these products.

Nvidia is determined to redefine the AI landscape with its Blackwell platform, positioning it as a comprehensive ecosystem that goes beyond traditional GPU capabilities. Nvidia will showcase the setup and configuration of its Blackwell servers, as well as the integration of various advanced components, at the Hot Chips 2024 conference.

Many of Nvidia's upcoming presentations will cover familiar territory, including its data center and AI strategies, along with the Blackwell roadmap. That roadmap outlines the release of Blackwell Ultra next year, followed by Vera CPUs and Rubin GPUs in 2026, and Vera Ultra in 2027. Nvidia had already shared this roadmap at Computex last June.

For tech enthusiasts eager to dive deep into the Nvidia Blackwell stack and its evolving use cases, Hot Chips 2024 will provide an opportunity to explore Nvidia's latest advancements in AI hardware, liquid cooling innovations, and AI-driven chip design.

One of the key presentations will offer an in-depth look at the Nvidia Blackwell platform, which consists of multiple Nvidia components, including the Blackwell GPU, Grace CPU, BlueField data processing unit, ConnectX network interface card, NVLink Switch, Spectrum Ethernet switch, and Quantum InfiniBand switch.

Additionally, Nvidia will unveil its Quasar Quantization System, which combines algorithmic advances, Nvidia software libraries, and Blackwell's second-generation Transformer Engine to enhance FP4 LLM operations. This development promises significant bandwidth savings while maintaining the high-performance standards of FP16, representing a major leap in data processing efficiency.
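To give a rough sense of where those savings come from, the sketch below quantizes FP16 weights to a 4-bit grid with per-block scales. It is a minimal NumPy illustration of generic block-wise 4-bit quantization, not Nvidia's Quasar system or the Transformer Engine; the signed-integer grid and the block size of 32 are illustrative assumptions rather than details of the FP4 format.

```python
# Minimal block-wise 4-bit quantization sketch (illustrative only, NOT Nvidia's
# Quasar system): FP16 weights are mapped to a signed 4-bit grid with one FP16
# scale per block. The 4-bit codes are stored in int8 here for simplicity.
import numpy as np

def quantize_4bit(weights: np.ndarray, block_size: int = 32):
    """Quantize an FP16 weight vector to 4-bit codes plus per-block scales."""
    w = weights.astype(np.float16).reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0  # signed 4-bit range: -7..7
    codes = np.clip(np.round(w / scales), -7, 7).astype(np.int8)
    return codes, scales.astype(np.float16)

def dequantize_4bit(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct approximate FP16 weights from codes and per-block scales."""
    return (codes.astype(np.float16) * scales).reshape(-1)

weights = np.random.randn(1024).astype(np.float16)
codes, scales = quantize_4bit(weights)
approx = dequantize_4bit(codes, scales)

# 4 bits per weight plus a small per-block scale, versus 16 bits per weight:
# roughly a 4x cut in weight storage and memory bandwidth.
print("max abs reconstruction error:", np.abs(weights - approx).max())
```

The point of the exercise is simply that moving weights in 4-bit form shifts most of the cost from memory traffic to a cheap rescale at compute time, which is the trade-off low-precision LLM inference exploits.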

Another focal point will be the Nvidia GB200 NVL72, a multi-node, liquid-cooled system featuring 72 Blackwell GPUs and 36 Grace CPUs. Attendees will also explore the NVLink interconnect technology, which facilitates GPU communication with exceptional throughput and low-latency inference.

Nvidia's progress in data center cooling will also be a topic of discussion. The company is investigating the use of warm water liquid cooling, a technique that could reduce power consumption by up to 28%. This approach not only cuts energy costs but also eliminates the need for below-ambient cooling hardware, which Nvidia hopes will position it as a leader in sustainable tech solutions.
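As a back-of-the-envelope illustration of what that figure could mean, the snippet below applies the quoted 28% reduction to a hypothetical facility power draw; only the percentage comes from Nvidia, and the baseline is an assumption for the sake of the example.

```python
# Hypothetical illustration: only the "up to 28%" figure is from the article;
# the 10 MW facility draw is an assumed baseline, not an Nvidia number.
facility_power_mw = 10.0   # assumed total data center power draw, in megawatts
max_reduction = 0.28       # "up to 28%" reduction cited for warm water cooling

saved_mw = facility_power_mw * max_reduction
print(f"Up to {saved_mw:.1f} MW saved on a {facility_power_mw:.0f} MW facility")
```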

In line with these efforts, Nvidia's involvement in the COOLERCHIPS program, a U.S. Department of Energy initiative aimed at advancing cooling technologies, will be highlighted. Through this project, Nvidia is using its Omniverse platform to develop digital twins that simulate energy consumption and cooling efficiency.

In another session, Nvidia will discuss its use of agent-based AI systems capable of autonomously executing tasks for chip design. Examples of AI agents in action will include timing report analysis, cell cluster optimization, and code generation. Notably, the cell cluster optimization work was recently recognized as the best paper at the inaugural IEEE International Workshop on LLM-Aided Design.
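To make the agent idea concrete, here is a minimal, hypothetical sketch of the dispatch pattern such systems rely on: a task is routed to a tool, and in a full agent an LLM would pick the tool and interpret its output before acting again. The task kinds, tool functions, and data formats below are invented stand-ins, not Nvidia's actual chip-design agents.

```python
# Generic agent-loop sketch (hypothetical; not Nvidia's system). A task such as
# timing report analysis or cell clustering is dispatched to a matching tool.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    kind: str      # e.g. "timing_analysis", "cell_clustering", "codegen"
    payload: str   # raw input the agent works on

def analyze_timing(report: str) -> str:
    """Toy stand-in: count failing paths flagged in a timing report."""
    violations = [line for line in report.splitlines() if "VIOLATED" in line]
    return f"{len(violations)} timing violation(s) found"

def cluster_cells(netlist: str) -> str:
    """Toy stand-in for a cell cluster optimization step."""
    return f"proposed clustering for {len(netlist.split())} cells"

TOOLS: Dict[str, Callable[[str], str]] = {
    "timing_analysis": analyze_timing,
    "cell_clustering": cluster_cells,
}

def run_agent(task: Task) -> str:
    """Dispatch a task to its tool; a real agent would let an LLM choose the
    tool, read the result, and decide on the next action autonomously."""
    tool = TOOLS.get(task.kind)
    if tool is None:
        return f"no tool available for task kind '{task.kind}'"
    return tool(task.payload)

print(run_agent(Task("timing_analysis", "path A VIOLATED\npath B met")))
```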
