Intel Announces Numerous High Performance Computing Innovations


At the 2021 International Supercomputing Conference (ISC), Intel announced its latest innovations in high-performance computing (HPC). The company presented a series of technological advances, partnerships and customer adoptions: several improvements to its Xeon processors for HPC and artificial intelligence, as well as innovations in software, memory, networking technologies for a range of HPC use cases, and exascale-class storage.

3rd Generation Intel Xeon Scalable Processors to Power Next Generation Supercomputers

Intel launched its 3rd Generation Intel Xeon Scalable processors in early 2021; the company says they deliver up to 53% higher performance than the 2nd Generation processors across a range of HPC workloads, including financial services, manufacturing and life sciences.

Intel claims that, compared with an AMD EPYC 7543 processor, NAMD runs 62% faster, LAMMPS 57% faster, RELION 68% faster and Binomial Options 37% faster. Several HPC labs, supercomputing centers and original equipment manufacturers have already adopted the Intel platform, including Dell Technologies, HPE, the Korea Meteorological Administration, Lenovo, the Max Planck Computing and Data Facility, Oracle, Osaka University and the University of Tokyo.
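To read these figures concretely, the short sketch below converts each "X% better" claim into a throughput ratio and the corresponding runtime reduction. It assumes the percentages refer to throughput (work completed per unit time); the numbers are only those quoted above, and nothing else is taken from Intel's benchmark data.

```python
# Illustrative only: converts "X% better performance" (read as a throughput
# ratio) into the corresponding runtime reduction. Percentages are the ones
# quoted above; no other benchmark data is assumed.
claims = {"NAMD": 62, "LAMMPS": 57, "RELION": 68, "Binomial Options": 37}

for workload, pct in claims.items():
    speedup = 1 + pct / 100          # e.g. 62% better -> 1.62x throughput
    runtime_fraction = 1 / speedup   # runtime shrinks to ~62% of baseline
    print(f"{workload}: {speedup:.2f}x throughput, "
          f"runtime ≈ {runtime_fraction:.0%} of baseline")
```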

Trish Damkroger, vice president and general manager of the High Performance Computing Division at Intel, elaborates on Intel’s initiatives to improve HPC:

“To maximize HPC performance, we need to leverage all the computing resources and technological advances at our disposal. Intel is the driving force behind the industry’s evolution to exascale computing, and the advances we are making with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage and high-speed networking are bringing us closer to that goal.”

High-bandwidth memory embedded in 3rd generation Intel Xeon Scalable processors

The next generation of Intel Xeon Scalable processors (codenamed “Sapphire Rapids”) will feature embedded high-bandwidth memory (HBM), significantly increasing available memory bandwidth and improving the performance of HPC applications whose workloads are memory-bandwidth bound. Users will be able to run workloads using the high-bandwidth memory alone or in combination with DDR5 memory.
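Why embedded HBM matters can be seen with a simple roofline estimate: for kernels that perform few floating-point operations per byte moved, attainable performance is capped by memory bandwidth rather than by the compute peak. The sketch below is illustrative only; the peak-compute and bandwidth figures are placeholder assumptions, not Sapphire Rapids specifications.

```python
# Roofline model sketch: attainable performance is the lesser of the compute
# peak and memory bandwidth times arithmetic intensity (FLOPs per byte moved).
# All numbers are placeholders, not product specifications.
def attainable_gflops(peak_gflops: float, bandwidth_gbs: float, intensity: float) -> float:
    return min(peak_gflops, bandwidth_gbs * intensity)

peak = 3000.0      # hypothetical per-socket compute peak, GFLOP/s
intensity = 0.1    # FLOPs/byte, typical of bandwidth-bound stencil/CFD kernels

for label, bw in [("DDR-class bandwidth, 300 GB/s", 300.0),
                  ("HBM-class bandwidth, 1000 GB/s", 1000.0)]:
    print(f"{label}: ~{attainable_gflops(peak, bw, intensity):.0f} GFLOP/s attainable")
```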

Rick Stevens, associate director of Computing, Environment and Life Sciences at Argonne National Laboratory, discusses the integration of HBM into the next generation of Intel Xeon Scalable processors:

“Achieving exascale results requires rapid access to and processing of massive amounts of data. The integration of high-bandwidth memory into Intel Xeon Scalable processors will significantly increase Aurora’s memory bandwidth and allow us to harness the power of artificial intelligence and data analysis to perform advanced simulations and 3D modeling.”

The “Sapphire Rapids”-based platform is expected to offer unique capabilities to accelerate HPC, including increased I/O bandwidth with PCI Express 5.0 (versus PCI Express 4.0) and support for Compute Express Link (CXL) 1.1, enabling advanced use cases in compute, networking and storage.
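The size of that I/O step can be approximated with a generic PCI Express calculation (16 GT/s versus 32 GT/s per lane, 128b/130b line encoding). This is not an Intel figure and ignores protocol overhead above the line encoding.

```python
# Per-direction raw bandwidth of a PCIe x16 link: transfer rate per lane
# times encoding efficiency times lane count, converted from Gbit/s to GB/s.
def pcie_x16_gbytes_per_s(gt_per_s: float, lanes: int = 16) -> float:
    encoding = 128 / 130                      # 128b/130b line encoding
    return gt_per_s * encoding * lanes / 8

print(f"PCIe 4.0 x16: ~{pcie_x16_gbytes_per_s(16):.0f} GB/s per direction")
print(f"PCIe 5.0 x16: ~{pcie_x16_gbytes_per_s(32):.0f} GB/s per direction")
```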

Charlie Nakhleh, associate director of the Laboratory for Weapons Physics at Los Alamos National Laboratory, discusses the possibilities that the next generation of Intel Xeon Scalable processors could bring to the Crossroads supercomputer:

“The Crossroads supercomputer at Los Alamos National Labs is designed to advance the study of complex physical systems for science and national security. Intel’s next-generation Sapphire Rapids Xeon processor, combined with high-bandwidth memory, will dramatically improve the performance of memory-intensive workloads in our Crossroads system. The product [Sapphire Rapids with HBM] accelerates the most complex physics and engineering computations, enabling us to take on major research and development responsibilities in the areas of global security, energy technologies and economic competitiveness.”

Of note, a new integrated AI acceleration engine, Intel® Advanced Matrix Extensions (AMX), is designed to deliver significant performance gains for deep learning inference and training.
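At its core, the operation this kind of matrix engine accelerates is low-precision matrix multiplication with wider accumulators. The NumPy sketch below only mimics that numerical pattern (INT8 inputs accumulated into INT32); it does not use AMX instructions or any Intel library.

```python
import numpy as np

# Mimics the numerical pattern behind AMX-style matrix engines: INT8 inputs
# multiplied and accumulated into INT32 to avoid overflow. Plain NumPy only;
# no AMX intrinsics or Intel libraries are used here.
rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(64, 32), dtype=np.int8)   # e.g. activations
b = rng.integers(-128, 128, size=(32, 64), dtype=np.int8)   # e.g. weights

c = a.astype(np.int32) @ b.astype(np.int32)                  # INT32 accumulation
print(c.shape, c.dtype)   # (64, 64) int32
```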

Other innovations announced by Intel

Other innovations announced by the technology group include:

  • Commercial support for DAOS: Launch of the commercialization phase of Distributed Asynchronous Object Storage (DAOS), an open-source, software-defined object store designed to optimize data exchange on Intel HPC architectures. DAOS support is now available to partners as an L3 support offering, enabling them to provide a complete turnkey storage solution by combining it with their own services (an illustrative access-pattern sketch follows this list).
  • Extending Intel Ethernet for HPC: A new High Performance Networking with Ethernet (HPN) solution that extends the capabilities of Ethernet technology to smaller clusters in the HPC segment. The HPN solution uses standard Intel Ethernet 800 Series network adapters and controllers, switches based on Intel Tofino P4-programmable Ethernet switch ASICs, and Intel Ethernet Fabric Suite software. HPN delivers InfiniBand-like application performance at lower cost while retaining the ease of use of Ethernet.
  • Powering up Intel’s Xe-HPC GPU (codenamed “Ponte Vecchio”): “Ponte Vecchio” is an Xe architecture-based GPU optimized for HPC and AI workloads. It will leverage Intel’s Foveros 3D packaging technology to integrate multiple IP blocks in the package, including HBM memory. The GPU is architected with compute, memory and fabric to meet the scaling needs of the world’s most advanced supercomputers, such as Aurora. “Ponte Vecchio” will be available as an OCP Accelerator Module (OAM) and in subsystems, providing the scaling and expansion capabilities required for HPC applications.
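As referenced in the DAOS item above, the appeal of an object store for HPC data exchange is a flat put/get interface rather than POSIX file semantics. The sketch below uses a deliberately hypothetical ObjectStoreClient to show that access pattern; it is not the DAOS API, and all class and method names are invented for illustration.

```python
from typing import Dict

# Hypothetical, minimal object-store client used only to illustrate the
# put/get access pattern that stores such as DAOS expose to applications.
# This is NOT the DAOS API; names here are invented for the sketch.
class ObjectStoreClient:
    def __init__(self) -> None:
        self._objects: Dict[str, bytes] = {}   # in-memory stand-in for a pool

    def put(self, key: str, value: bytes) -> None:
        self._objects[key] = value

    def get(self, key: str) -> bytes:
        return self._objects[key]

# A simulation step writes a named checkpoint object; an analysis step reads
# it back by key, with no directory hierarchy or file locking involved.
store = ObjectStoreClient()
store.put("checkpoint/step-000100/rank-0", b"\x00" * 1024)
data = store.get("checkpoint/step-000100/rank-0")
print(len(data), "bytes read back")
```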

Translated from Intel annonce de nombreuses innovations en matière de calcul à haute performance