Intel Vision 2024 Offers New Look at Gaudi 3 AI Chip


After first announcing the Gaudi 3 AI accelerator last year, Intel is ready to put the chip in the hands of OEMs in Q2 2024. Intel revealed this and other news, including a new Xeon 6 brand name and an open Ethernet standard for AI workloads, at a pre-briefing held on April 1 ahead of the Intel Vision conference, which is being held April 8-9 in Phoenix, Arizona.

Gaudi 3 AI accelerator will ship to Dell, Hewlett Packard Enterprise, Lenovo and Supermicro

The Gaudi 3 will launch with Dell, Hewlett Packard Enterprise, Lenovo and Supermicro as OEM partners.

Intel Gaudi 3 will be available from suppliers in three form factors: mezzanine card, universal baseboard or PCIe CEM. Gaudi 3 offers 40% faster time-to-train on large language models versus NVIDIA’s H100 AI chip and 50% faster inference on LLMs versus the NVIDIA H100, Intel said.

Gaudi 3 will go head-to-head with NVIDIA’s newly revealed AI accelerator chip, Blackwell. Gaudi 3 is “highly competitive,” said Jeff McVeigh, corporate vice president and general manager of the Software Engineering Group at Intel. McVeigh noted that real-world testing of the two products hasn’t been possible yet.

New Xeon 6 brand name arrives in Q2

Xeon 6 processors, which are available in two variants, Performance-core and Efficient-core, will ship soon. E-core processors will ship in Q2 2024, with P-core processors following shortly after.

The two versions of Xeon 6 processors share the same platform foundation and software stack. The Performance-core is optimized for compute-intensive and AI workloads, while the Efficient-core is optimized for efficiency in the same workloads. The Intel Xeon 6 processor with the E-core shows a 2.4x performance-per-watt improvement and a 2.7x performance-per-rack improvement compared to prior generations.

The Xeon 6 processor delivers significant energy savings compared to the 2nd Gen Intel Xeon processor because it requires fewer server racks, amounting to a power reduction of approximately 1 megawatt.

Network interface card supports open Ethernet standard for AI workloads

As part of Intel’s effort to provide a wide range of AI infrastructure, the company announced an AI network interface card for Intel Ethernet Network Adapters and Intel IPUs. The AI network interface cards, which are already being used by Google Cloud, will offer a secure way to offload functions like storage, networking and container management and to manage AI infrastructure, Intel said. The intent is to enable training and inference on the progressively larger generative AI models Intel predicts companies will want to deploy over Ethernet.

Intel is working with the Ultra Ethernet Consortium to create an open standard for AI networking over Ethernet.

The AI network interface cards are expected to be available in 2026.

Wide-ranging scalable systems strategy aims to smooth out AI adoption

To prepare for what the company forecasts will be the future of AI, Intel plans to roll out a scalable systems strategy for the enterprise.

“We want it to be open and for enterprises to have choice in hardware, software and applications,” said Intel Senior Vice President and General Manager of the Network and Edge Group Sachin Katti at the pre-briefing.

To do so, the scalable systems strategy provides Intel products for all sectors of AI within the enterprise: hardware, software, frameworks and tools. Intel is working with a range of partners to make this approach a reality, including:

  • Google Cloud.
  • Thales.
  • Cohesity.
  • NAVER.
  • Bosch.
  • Ola/Krutrim.
  • NielsenIQ.
  • Seekr.
  • IFF.
  • CtrlS Group.
  • Landing AI.
  • Roboflow.

Intel forecasts a future of AI agents and AI functions

Katti said in the pre-briefing that the enterprise is in an age of AI copilots. Next may come an age of AI agents, which can coordinate other AIs to carry out tasks autonomously, followed by an age of AI functions. The rise of AI functions could mean swarms of agents taking on the work of an entire department, Katti said.

SEE: Articul8, makers of a generative AI software platform, spun out of Intel in January. (TechRepublic)

Intel’s competitors

Intel is attempting to set itself apart from competitors by focusing on interoperability in the open ecosystem. Intel competes in the AI chip space with:

  • NVIDIA, which announced the next-generation Blackwell chip in March 2024.
  • AMD, which in February 2024 revealed a new architectural solution for AI inferencing based on AMD Ryzen Embedded processors.

Intel competes for chip manufacturing dominance with Taiwan Semiconductor Manufacturing Co., Samsung, IBM, Micron Technologies, Qualcomm and others.

TechRepublic is covering Intel Vision remotely.
