NVIDIA Says Its New H100 Data Center GPU Is Up To Six Times Faster Than Its Predecessor

Midway through last year, NVIDIA announced its first-ever data center CPU. At the time, the company shared only a few tidbits about the chip, noting, for example, that it would provide data transfer rates of up to 900 GB/s between components. Fast forward to the 2022 GPU Technology Conference, which kicked off Tuesday morning. At the event, CEO Jensen Huang unveiled the Grace CPU Superchip, the first discrete CPU that NVIDIA plans to release as part of the Grace series.

Built on ARM’s recently introduced Armv9 architecture, the Grace CPU Superchip is actually two Grace CPUs connected through NVLink, the interconnect technology behind those 900 GB/s transfer rates. It integrates a whopping 144 ARM cores into a single socket and consumes about 500 watts of power. Ultra-fast LPDDR5X memory built into the chip provides memory bandwidth of up to 1 terabyte per second.
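For a rough sense of what those headline numbers mean per core, here's a back-of-envelope calculation based only on the figures above (144 cores, 1 TB/s, ~500 W); the per-core split is our own arithmetic, not an NVIDIA spec:

```python
# Back-of-envelope math from NVIDIA's stated Grace CPU Superchip specs.
cores = 144            # ARM cores across the two Grace dies
bandwidth_gb_s = 1000  # ~1 TB/s of LPDDR5X memory bandwidth
power_w = 500          # approximate package power

bw_per_core_gb_s = bandwidth_gb_s / cores
w_per_core = power_w / cores

print(f"~{bw_per_core_gb_s:.1f} GB/s of memory bandwidth per core")
print(f"~{w_per_core:.1f} W per core")
```

That works out to roughly 7 GB/s of memory bandwidth and about 3.5 W per core, assuming an even split.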

While they’re very different chips, a helpful way to conceptualize NVIDIA’s new silicon is to think of Apple’s recently announced M1 Ultra. In the simplest of terms, the M1 Ultra consists of two M1 Max chips connected via Apple’s aptly named UltraFusion technology.

When NVIDIA starts shipping the Grace CPU Superchip to customers like the Department of Energy in the first half of 2023, it will give them the option to configure it as a standalone CPU system or as part of a server with up to eight Hopper-based GPUs (more on that later). The company claims its new chip is twice as fast as traditional servers. NVIDIA estimates that it will achieve a score of approximately 740 points in the SPECrate 2017_int_base benchmark, placing it among higher-end data center processors.

In addition to the Grace CPU Superchip, NVIDIA detailed its highly anticipated Hopper architecture. Named after pioneering computer scientist Grace Hopper, it’s the successor to the company’s current Ampere architecture (you know, the one that powers all of the company’s impossible-to-find RTX 30-series GPUs). Before you get excited, you should know that NVIDIA hasn’t announced any mainstream GPUs at GTC. Instead, we got the H100 data center GPU. It’s an 80 billion transistor behemoth built using TSMC’s advanced 4nm process. At the heart of the H100 is NVIDIA’s new Transformer Engine, which the company claims offers unparalleled performance when it comes to calculating transformer models. In recent years, transformer models have become extremely popular with AI scientists working on systems such as GPT-3 and AlphaFold. NVIDIA claims that the H100 can reduce the time it takes to train large models to days or even just a few hours. The H100 will be available later this year.
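To see why a "Transformer Engine" matters, it helps to know that the core of every transformer model is scaled dot-product attention, which is dominated by large matrix multiplications — exactly the kind of work GPU tensor cores accelerate. Here's a minimal NumPy sketch of that operation (the toy sizes are illustrative, not tied to GPT-3 or any real model):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (seq, seq) similarity matrix
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v             # weighted sum of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16           # toy sequence length and embedding size
q, k, v = (rng.standard_normal((seq_len, d_model)) for _ in range(3))

out = attention(q, k, v)
print(out.shape)  # (8, 16)
```

In a real model these matrices are vastly larger and the operation repeats across dozens of layers, which is why hardware tuned for this pattern can cut training times so dramatically.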
