
AMD introduced the powerful Instinct MI100 compute accelerator based on the CDNA architecture. NVIDIA responded with updated A100

AMD and NVIDIA today unveiled new accelerators for high-performance computing and artificial intelligence workloads. While NVIDIA's announcement is an update of the existing A100 accelerator with more memory and higher memory bandwidth, AMD presented an entirely new product, the Instinct MI100, built on the new CDNA architecture.

NVIDIA A100

The new version of the NVIDIA A100 accelerator, like the original, is based on the Ampere architecture. It differs from its predecessor in the amount of HBM2e memory, increased from 40 to 80 GB, and in memory bandwidth, raised from 1.555 to roughly 2 TB/s. The rest of the specifications of the two versions are identical.
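As a quick back-of-the-envelope check on those bandwidth figures, peak HBM bandwidth is simply bus width multiplied by the per-pin data rate. The minimal sketch below assumes a 5120-bit memory bus and approximate per-pin rates of 2.43 and 3.2 Gbit/s; these assumptions are for illustration only and are not taken from the announcement itself.

```python
# Rough sketch: peak HBM bandwidth = bus width (bits) * per-pin rate (Gbit/s) / 8.
# The 5120-bit bus and the per-pin rates are assumptions for illustration,
# not figures quoted in the article.

def hbm_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

# Original A100 (40 GB): ~2.43 Gbit/s per pin -> ~1555 GB/s (1.555 TB/s)
print(hbm_bandwidth_gbps(5120, 2.43))   # 1555.2
# Updated A100 (80 GB): ~3.2 Gbit/s per pin -> ~2048 GB/s (~2 TB/s)
print(hbm_bandwidth_gbps(5120, 3.2))    # 2048.0
```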

At the moment, the new version of the A100 is offered only in the SXM4 form factor, so it is intended for use in NVIDIA's own DGX computing platform, as well as in HGX platforms from partners. The latter will offer special kits early next year to integrate the new version of the A100 into their existing solutions, including options that support 4 and 8 accelerators.

The updated product from NVIDIA will have to compete against a completely new solution from AMD: the Instinct MI100 compute accelerator, built on the 7-nm CDNA architecture. Unlike RDNA, which is used in gaming and professional graphics products, CDNA is aimed at high-performance computing and artificial intelligence workloads.

The AMD Instinct MI100 uses a PCIe 4.0 x16 interface (64 GB/s). Its GPU contains 120 Compute Units, which include new units for the matrix operations used to accelerate AI workloads. According to AMD, the new units do not come at the expense of classic compute: peak FP64 performance is 11.5 teraflops, and FP32 performance is exactly twice that, 23 teraflops, which is higher than the figures NVIDIA quotes for the A100.
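The 2:1 FP32/FP64 ratio quoted by AMD follows from the usual peak-throughput arithmetic. The sketch below assumes 64 FP32 lanes per Compute Unit and a boost clock of roughly 1.5 GHz; only the 120 Compute Units and the resulting TFLOPS totals come from the article.

```python
# Back-of-the-envelope peak-throughput check for the MI100 figures quoted above.
# The 64 FP32 lanes per CU and the ~1.5 GHz clock are assumptions for
# illustration; only the 120 CUs and the TFLOPS totals appear in the article.

def peak_tflops(compute_units: int, lanes_per_cu: int, clock_ghz: float,
                ops_per_lane: int = 2) -> float:
    """Peak TFLOPS = CUs * lanes * ops per clock (2 for FMA) * clock (GHz) / 1000."""
    return compute_units * lanes_per_cu * ops_per_lane * clock_ghz / 1000

fp32 = peak_tflops(120, 64, 1.5)   # ~23 TFLOPS FP32
fp64 = fp32 / 2                    # FP64 at half rate -> ~11.5 TFLOPS
print(fp32, fp64)
```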


However, in bfloat16 calculations the Instinct MI100 loses to its competitor: 92.3 teraflops versus 312 teraflops. In fairness, this is a comparison with the SXM version of the NVIDIA A100; the PCIe version can be somewhat slower in real workloads due to its lower power limit. The Instinct MI100, in turn, is currently offered only as a full-size PCIe card with a 300 W power rating.

The AMD Instinct MI100 is equipped with 32 GB of HBM2 memory with a bandwidth of 1.23 TB/s. For comparison, the original NVIDIA A100 has 40 GB of HBM2e memory with 1.555 TB/s of bandwidth. Thanks to three Infinity Fabric (IF) links of 92 GB/s each (276 GB/s in total), up to four Instinct MI100 accelerators can be connected directly to one another in a fully connected topology. The link bandwidth does not depend on whether the Instinct MI100 cards themselves sit in PCIe 3.0 or 4.0 slots. The PCIe version of the NVIDIA A100 has only one NVLink interface, allowing just two cards to be paired, although its bandwidth is higher at 600 GB/s.
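A short sketch of the arithmetic behind those interconnect numbers: with three Infinity Fabric links per card, each MI100 can connect directly to three peers, which is exactly what a fully connected group of four cards requires.

```python
# Quick arithmetic behind the interconnect figures quoted above: three
# Infinity Fabric links of 92 GB/s each give 276 GB/s of aggregate link
# bandwidth per card, and a full mesh of four GPUs needs three links per card.

IF_LINKS_PER_GPU = 3
IF_LINK_GBPS = 92                 # per-link bandwidth from the article
GPUS_IN_GROUP = 4

aggregate_per_gpu = IF_LINKS_PER_GPU * IF_LINK_GBPS        # 276 GB/s per card
links_needed = GPUS_IN_GROUP - 1                           # full mesh: n-1 links per card
print(aggregate_per_gpu, links_needed <= IF_LINKS_PER_GPU) # 276 True
```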

According to AMD, its new solution offers 1.8-2.1 times better performance per dollar than NVIDIA's A100.

Dell PowerEdge R7525, Gigabyte G482-Z54, HPE Apollo 6500 Gen10 Plus, and Supermicro AS-4124GS-TNR will be the first systems to receive the new AMD Instinct MI100 compute accelerators. Some of the company's partners have already received the new accelerators, as well as systems based on them, for performance evaluation and software tuning.

You can read more about the NVIDIA A100 and AMD Instinct MI100 announcements on our sister site ServerNews.
