AMD introduces Instinct MI100 data center accelerator


AMD has introduced its Instinct MI100 accelerator. The chip is based on the company’s CDNA architecture and is aimed at high-performance computing. AMD equips the MI100 with 32GB of HBM2 memory and 120 compute units.

AMD equips the MI100 with 120 compute units, giving the accelerator 7680 stream processors. The MI100 also has 32GB of HBM2 memory, with a theoretical bandwidth of up to 1.23TB/s. The chip supports PCIe 4.0 and has a TDP of 300W. AMD manufactures the accelerator on a 7nm process.
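The headline figures follow from some simple arithmetic. The sketch below reproduces the stream processor count and the quoted memory bandwidth, assuming the usual 64 stream processors per compute unit and a 4096-bit HBM2 interface at 2.4 Gbps per pin; those two underlying figures are commonly reported for the MI100 but are not stated in this article.

```python
# Back-of-the-envelope check of the MI100 numbers quoted above.
# The per-CU count (64 stream processors) and the memory configuration
# (4096-bit bus, 2.4 Gbps per pin) are assumptions based on commonly
# reported MI100 specifications, not figures from this article.

compute_units = 120
stream_processors_per_cu = 64            # assumed: standard CU width
stream_processors = compute_units * stream_processors_per_cu
print(stream_processors)                 # 7680

bus_width_bits = 4096                    # assumed: 4 HBM2 stacks x 1024 bits
data_rate_gbps = 2.4                     # assumed per-pin data rate
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gb_s / 1000:.2f} TB/s")  # ~1.23 TB/s
```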

The Instinct MI100 accelerator is the first chip to use AMD’s new CDNA architecture. The manufacturer previously announced that it would split its GPU architectures into two separate lines: RDNA for gaming and graphics workloads, and CDNA for data centers and other high-performance computing use cases.

To that end, AMD has removed nearly all graphics-specific hardware from the CDNA architecture, the company explains in a white paper. For example, the architecture provides no hardware for rasterization, tessellation, or blending, and the display engine has also been removed. The MI100 does still support HEVC, H.264, and VP9 decoding, as that functionality can be useful for certain machine learning workloads.

The company further states that the MI100 “performs up to seven times better” than its previous Instinct accelerator. AMD also calls the new MI100 “the first data center GPU to surpass 10 TFLOPS of FP64 computing power.” On its website, AMD claims, among other things, that the MI100 is faster than Nvidia’s A100, of which an 80GB version was announced earlier today.

Single-precision FP32 computing power peaks at 23.1 TFLOPS. The company also mentions Matrix Core engines that should further improve computing performance for HPC and AI workloads, as well as a second-generation Infinity Fabric to link multiple accelerators together. This way, up to two clusters of four GPUs each can be connected within a server, AMD reports.
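For context, the quoted throughput numbers line up with a simple peak-FLOPS estimate. The sketch below assumes a boost clock of roughly 1502 MHz and a 1:2 FP64 rate, which are commonly reported MI100 figures rather than numbers from this article.

```python
# Rough peak-throughput estimate behind the 23.1 TFLOPS FP32 figure.
# The ~1502 MHz boost clock and the half-rate FP64 ratio are assumptions
# based on commonly reported MI100 specifications.

stream_processors = 7680
boost_clock_ghz = 1.502                  # assumed boost clock
flops_per_sp_per_cycle = 2               # one fused multiply-add = 2 FLOPs

fp32_tflops = stream_processors * flops_per_sp_per_cycle * boost_clock_ghz / 1000
fp64_tflops = fp32_tflops / 2            # assumed 1:2 FP64 rate

print(f"FP32 peak: {fp32_tflops:.1f} TFLOPS")  # ~23.1 TFLOPS
print(f"FP64 peak: {fp64_tflops:.1f} TFLOPS")  # ~11.5 TFLOPS, above the 10 TFLOPS mark
```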

The Instinct MI100 should be available for the enterprise market by the end of this year. Dell, Gigabyte, Hewlett Packard Enterprise, and Supermicro are currently working on servers based on the new chip.
