AMD reveals exascale data-center accelerator at CES


The Consumer Electronics Show (CES) may be the last place you'd expect an enterprise product to launch, but AMD unveiled a new server accelerator amid the multitude of consumer CPUs and GPUs it announced at the Las Vegas show. AMD took the wraps off its Instinct MI300 accelerator, and it's a doozy. The accelerated processing unit (APU) is a mix of 13 chiplets, comprising CPU cores, GPU cores, and high-bandwidth memory (HBM). All told, AMD's Instinct MI300 accelerator comes in at 146 billion transistors. For comparison, Intel's ambitious Ponte Vecchio processor will be around 100 billion transistors, and Nvidia's Hopper H100 GPU is a mere 80 billion transistors.

The Instinct MI300 has 24 Zen 4 CPU cores and six CDNA chiplets. CDNA is the data-center variant of AMD's RDNA consumer graphics technology. AMD has not said how many GPU cores there are per chiplet.

Rounding out the Instinct MI300 is 128GB of HBM3 memory stacked in a 3D design. The 3D design allows tremendous data throughput between the CPU, GPU, and memory dies. Data does not need to travel from the CPU or GPU out to DRAM; it just goes to the HBM stack, significantly reducing latency. It also allows the CPU and GPU to work on the same data in memory simultaneously, which speeds up processing.

AMD CEO Lisa Su announced the chip at the end of her 90-minute CES keynote, saying the MI300 is "the first chip that brings together a CPU, GPU, and memory into a single integrated design. What this allows us to do is share system resources for the memory and I/O, and it results in a significant increase in performance and efficiency, as well as [being] much easier to program."

Su said the MI300 delivers eight times the AI performance and five times the performance per watt of the Instinct MI250. She mentioned the much-hyped AI chatbot ChatGPT and noted that it takes months to train the models; the MI300 will cut training time from months to weeks, which could save millions of dollars in electricity, Su said.

Mind you, AMD's MI250 is an impressive piece of silicon, used in the first exascale supercomputer, Frontier, at Oak Ridge National Laboratory. AMD's MI300 chip is similar to what Intel is doing with Falcon Shores, due in 2024, and what Nvidia is doing with its Grace Hopper Superchip, due later this year. Su said the chip is in the labs now and sampling to select customers, with a launch expected in the second half of the year.

New AI accelerator on tap from AMD

The Instinct isn't AMD's only enterprise announcement at CES. Su also introduced the Alveo V70 AI inference accelerator. Alveo is part of the Xilinx FPGA line AMD acquired last year, and the V70 is built with AMD's XDNA AI engine technology. It can deliver 400 million AI operations per second across a range of AI models, including video analytics and customer recommendation engines, according to AMD.

Su said that in video analytics, the Alveo V70 delivers 70% more street coverage for smart-city applications, 72% more hospital bed coverage for patient monitoring, and 80% more checkout-lane coverage in a smart retail store than the competition, though she didn't say what that competition is. All of this fits within a 75-watt power envelope and a small form factor. AMD is going to take pre-orders for the V70 …
