Intel takes on GPUs in artificial intelligence


Intel has unveiled a strategy to improve the performance of artificial intelligence applications. The chipmaker is introducing the Nervana platform, which is intended to train deep learning models faster than GPUs.

According to Intel CEO Brian Krzanich, 97 percent of data center servers for AI applications already run on Intel products, such as Xeon processors and Xeon Phi accelerators. However, competition from accelerators based on general-purpose GPUs, such as Nvidia's Tesla cards, is increasing, especially for training neural networks. To counter this, Intel is introducing Nervana.

Intel announced the acquisition of Nervana in August. The first product, currently being developed under the working name Lake Crest, is expected in the first half of 2017. Because development is still largely done by Nervana itself, this first product will be manufactured on a 28nm process at TSMC; a later switch to, for example, Intel's 14nm process is likely.

Lake Crest is an accelerator consisting of twelve processing clusters arranged on a chip. According to EETimes, the ASIC's interconnects are bidirectional 100Gbit/s links, and combined with 32GB of HBM2 memory it reaches a memory bandwidth of 8Tbit/s.
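For context, that 8Tbit/s figure lines up with a configuration of four 8GB HBM2 stacks (the stack count is not confirmed in this report): at the standard HBM2 rate of roughly 256GB/s per stack, 4 x 256GB/s comes to about 1024GB/s, or around 8Tbit/s.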

Intel is also working on Knights Crest, a combination of a Xeon processor and a Nervana accelerator. By 2020, the Nervana platform should reduce the time it takes to train deep learning models by a factor of 100 compared to GPUs, Intel promises.

Intel also announced that Knights Mill will be released in 2017. This is a Xeon Phi accelerator produced at 14nm and based on x86; it can directly address 400GB of memory. Compared to the current Knights Landing Xeon Phi, which offers 7Tflops of computing power, Knights Mill is said to deliver four times the performance.

Finally, Intel announced that it has shipped the first Skylake-based Xeons to customers. These chips, previously codenamed Purley, include Advanced Vector Extensions 512 for vector calculations. One of those customers could be Google, which announced that it has started a new alliance with Intel for data center hardware.
