Nvidia unveils new GPU-based platform to fuel generative AI performance

Nvidia has announced a new AI computing platform called Nvidia HGX H200, a turbocharged version of the company's Nvidia Hopper architecture powered by its newest GPU offering, the Nvidia H200 Tensor Core. The company is also teaming up with HPE to deliver a supercomputing system, built on Nvidia Grace Hopper GH200 Superchips, designed specifically for generative AI training.

A surge in enterprise interest in AI has fueled demand for Nvidia GPUs to handle generative AI and high-performance computing workloads. Its latest GPU, the Nvidia H200, is the first to offer HBM3e, high-bandwidth memory that is 50% faster than current HBM3. It delivers 141GB of memory at 4.8 terabytes per second, providing nearly double the capacity and 2.4 times more bandwidth than its predecessor, the Nvidia A100.

Nvidia unveiled the first HBM3e processor, the GH200 Grace Hopper Superchip platform, in August "to meet [the] surging demand for generative AI," Nvidia founder and CEO Jensen Huang said at the time.
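Memory bandwidth is the headline number here because, at generation time, a large language model must stream essentially all of its weights from GPU memory for every token it produces, so bandwidth caps token throughput. The short Python sketch below makes that ceiling concrete. It is a rough illustration only: the 70B-parameter model size, 8-bit weight storage, batch size of one, and the ~2.0 TB/s A100 figure (implied by the article's 2.4x claim) are assumptions, not Nvidia's benchmark setup.

    # Back-of-envelope ceiling on LLM decode throughput when memory-bandwidth bound:
    # each generated token streams all model weights from GPU memory at least once.
    PARAMS = 70e9            # assumed 70B-parameter model (Llama 2 70B class)
    BYTES_PER_PARAM = 1      # assumed 8-bit (e.g., FP8) weight storage -> 70 GB

    weight_bytes = PARAMS * BYTES_PER_PARAM

    # H200: 4.8 TB/s per the article; A100: ~2.0 TB/s, implied by "2.4x more bandwidth".
    for gpu, tb_per_s in [("A100", 2.0), ("H200", 4.8)]:
        tokens_per_s = tb_per_s * 1e12 / weight_bytes
        print(f"{gpu}: ~{tokens_per_s:.0f} tokens/s upper bound per GPU")

On these assumptions the ceiling rises from roughly 29 to roughly 69 tokens per second, and the 70GB of weights fits on a single H200's 141GB, which is why the doubled capacity matters as much as the raw bandwidth for large models.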

The introduction of the Nvidia H200 will bring further performance leaps, the company said in a statement, adding that compared with its H100 offering, the new architecture nearly doubles inference speed on Llama 2, Meta's 70 billion-parameter LLM. Parameters relate to how neural networks are configured.

"To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory," said Ian Buck, vice president of hyperscale and HPC at Nvidia, in a statement accompanying the announcement. "With Nvidia H200, the industry's leading end-to-end AI supercomputing platform just got faster to solve some of the world's most important challenges."

H200-powered systems are expected to begin shipping in the second quarter of 2024, with the Nvidia H200 Tensor Core GPU available in HGX H200 server boards in four- and eight-way configurations. An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications, Nvidia said. A petaflop is a measure of performance for a computer that can calculate at least one thousand trillion, or one quadrillion, floating-point operations per second. FP8 is an eight-bit floating-point format specification designed to ease the sharing of deep learning networks between hardware platforms.

The H200 can be deployed in any type of data center, including on premises, cloud, hybrid cloud, and edge, and will also be available in the GH200 Grace Hopper Superchip platform.
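FP8 exists in more than one layout; the minimal Python sketch below assumes the E4M3 variant (1 sign bit, 4 exponent bits, 3 mantissa bits, exponent bias 7) commonly used for inference, an assumption on our part since the article does not name a variant. Decoding a raw byte by hand shows how just eight bits still cover a usable numeric range:

    def decode_fp8_e4m3(byte: int) -> float:
        """Decode one FP8 E4M3 byte: 1 sign, 4 exponent, 3 mantissa bits, bias 7."""
        sign = -1.0 if (byte >> 7) & 1 else 1.0
        exp = (byte >> 3) & 0xF
        man = byte & 0x7
        if exp == 0xF and man == 0x7:
            return float("nan")                  # E4M3 reserves S.1111.111 for NaN
        if exp == 0:
            return sign * (man / 8) * 2.0 ** -6  # subnormal range
        return sign * (1 + man / 8) * 2.0 ** (exp - 7)

    assert decode_fp8_e4m3(0x38) == 1.0    # exponent 7 (= bias), mantissa 0
    assert decode_fp8_e4m3(0x7E) == 448.0  # largest finite E4M3 value

For scale, the eight-way figures above work out to roughly 4 petaflops of FP8 compute per GPU, and the 1.1TB of aggregate memory is consistent with eight 141GB H200s.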

Nvidia powers new HPE AI training service with GH200 Grace Hopper Superchips

Two weeks after it was revealed that the UK's Isambard-AI supercomputer would be built with HPE's Cray EX supercomputer technology and powered by Nvidia GH200 Grace Hopper Superchips, the two companies have again teamed up to supply a new turnkey supercomputing system that supports the development of generative AI. The new system includes preconfigured and pretested AI and machine learning software, along with liquid-cooled supercomputers, accelerated compute, networking, storage, and services. Based on the same architecture as Isambard-AI, the solution will integrate with HPE Cray supercomputing technology and be powered by Nvidia Grace Hopper GH200 Superchips, enabling AI research centers and large enterprises to speed up the training of a model by two to three times.

"Together, this solution offers organizations the unprecedented scale and performance required for massive AI workloads, such as large …
