Intel Introduces Xeon Scalable Processors for Data Centers


Intel has announced its Xeon server processors based on the Skylake-SP platform. The lineup comprises 51 different Xeon models, which the company is releasing under the Bronze, Silver, Gold and Platinum series.

Intel previously announced the naming scheme and some model numbers of the Xeon Scalable family, but the server processors have now actually appeared. The most powerful chips are the Platinum models in the 81xx series, which have up to 28 cores and are suitable for servers with two, four and eight sockets thanks to the presence of three UPIs, or Ultra Path Interconnects.

The Gold Xeons come in a 61xx and a 51xx series, with a maximum of 22 and 14 cores respectively. The 61xx chips have three UPIs and are suitable for dual- and quad-socket configurations; the 51xx models have to make do with two UPIs, for use in at most two sockets. The Silver line consists of CPUs with up to twelve cores that can be used in dual-socket systems; these processors support DDR4-2400 memory. Bronze occupies the lower end of the range, with processors that have up to eight cores and support for DDR4-2133.

The chip design is characterized by a mesh topology, which allows Intel to scale up the number of cores relatively easily, hence the name Xeon Scalable. The die of a 28-core Xeon, for example, can be represented as a six-by-six grid, with the top row occupied by I/O such as the UPI and PCIe controllers. The memory controllers sit on either side of the fourth row. The mesh replaces the ring bus of previous Xeons; it should result in lower latencies and requires less space for the metal interconnects.
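
As a rough illustration of why a mesh can shorten on-die transfer distances compared with a ring, the toy calculation below compares average hop counts between all node pairs on a 36-stop ring and on a six-by-six mesh, assuming uniform traffic. It is a simplified sketch for intuition only, not a model of Intel's actual interconnect or its routing.

/*
 * Toy comparison: average hop count on a ring bus vs. a 2D mesh,
 * assuming uniform traffic between all node pairs.
 * Illustrative only; not a model of the Skylake-SP interconnect.
 */
#include <stdio.h>
#include <stdlib.h>

/* Shortest distance between two stops on a ring of n nodes. */
static int ring_hops(int a, int b, int n) {
    int d = abs(a - b);
    return d < n - d ? d : n - d;
}

/* Manhattan distance between two tiles on a mesh with 'cols' columns. */
static int mesh_hops(int a, int b, int cols) {
    int dr = abs(a / cols - b / cols);
    int dc = abs(a % cols - b % cols);
    return dr + dc;
}

int main(void) {
    const int rows = 6, cols = 6, n = rows * cols;
    long ring_total = 0, mesh_total = 0, pairs = 0;

    for (int a = 0; a < n; a++) {
        for (int b = a + 1; b < n; b++) {
            ring_total += ring_hops(a, b, n);
            mesh_total += mesh_hops(a, b, cols);
            pairs++;
        }
    }

    printf("nodes: %d\n", n);
    printf("average ring hops: %.2f\n", (double)ring_total / pairs);
    printf("average mesh hops: %.2f\n", (double)mesh_total / pairs);
    return 0;
}

For 36 nodes the average path on the mesh works out to well under half that of the ring, which is the intuition behind the lower latencies Intel cites.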

Intel manufactures the processors, based on the Skylake architecture, at 14nm. The chips support the AVX-512 instruction set for executing instructions on 512-bit data. Intel has also overhauled the L2 and L3 cache hierarchy. The shared L3 cache has shrunk from 2.5MB per core to 1.375MB per core and is no longer inclusive: it no longer holds a copy of the L2 cache's data by default. In addition, Intel has increased the L2 cache per core from 256KB to 1MB. According to Intel, the revised cache structure provides significant performance gains.
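
For readers unfamiliar with AVX-512, the short C sketch below shows one common way such 512-bit operations are exposed to software: compiler intrinsics that add sixteen single-precision floats in a single instruction. It assumes a compiler and CPU with AVX-512F support (e.g. building with -mavx512f) and is a generic illustration, not code taken from Intel's announcement.

/*
 * Minimal AVX-512 sketch: add two arrays of 16 floats (512 bits)
 * using one vector instruction per operation. Requires AVX-512F.
 */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[16], b[16], c[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    __m512 va = _mm512_loadu_ps(a);    /* load 16 floats (512 bits) */
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_add_ps(va, vb); /* one add across all 16 lanes */
    _mm512_storeu_ps(c, vc);

    for (int i = 0; i < 16; i++) printf("%.1f ", c[i]);
    printf("\n");
    return 0;
}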

Intel pairs the processors with the Lewisburg chipset, or C620. It includes support for up to four 10Gbit/s network controllers and hardware QuickAssist accelerators, which can speed up workloads such as encryption and compression. Integration of Omni-Path interconnects, Intel's InfiniBand alternative, is optional. The TDPs of the Xeons range from 205W for the largest processors to 85W for the smaller variants. There are also special models with TDPs as low as 70W, designed for a lifespan of up to ten years.

Intel itself speaks of the 'biggest advance for data centers in ten years' and claims to have already shipped 500,000 Xeon Scalable processors. It was already clear that Google is among the customers; according to Intel, Amazon and Microsoft are also adopting the new generation.
