At Baidu’s Create conference for AI developers, the company and Intel announced a new partnership to work together on Intel’s Nervana Neural Network Processor for training. The two companies will jointly develop high-speed accelerator hardware capable of training AI models quickly and power-efficiently.
A pillar of Intel’s emerging AI product portfolio, the Nervana Neural Network Processor for training (NNP-T) will be a collaborative development effort between Intel and Baidu, the Beijing-based AI and internet company commonly referred to as “the Google of China,” which reported 2018 revenue of more than RMB 100 billion (roughly $15 billion).
The NNP-T’s architecture is distinct from other chips in that it lacks a standard cache hierarchy; its on-chip memory is instead managed directly by software. According to Intel, the NNP-T’s 24 compute clusters, 32GB of HBM2 stacks, and local SRAM enable it to deliver up to 10 times the AI training performance of competing graphics cards, and 3 to 4 times the performance of Lake Crest, the company’s first NNP chip.
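To make the software-managed memory idea concrete, here is a minimal conceptual sketch, not Intel’s actual toolchain or API: on a chip with no hardware cache hierarchy, the software explicitly stages tiles of data into fast on-chip SRAM before computing on them, much like the explicit tile copies in this toy tiled matrix multiply (the `TILE` size and the staging lists are hypothetical stand-ins for a real scratchpad).

```python
TILE = 2  # hypothetical scratchpad tile size, not a real NNP-T parameter

def matmul_tiled(a, b):
    """Multiply matrices a (n x m) and b (m x p), staging one tile at a time."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i0 in range(0, n, TILE):
        for j0 in range(0, p, TILE):
            for k0 in range(0, m, TILE):
                # "DMA" step: copy the working tiles into the simulated scratchpad
                a_tile = [row[k0:k0 + TILE] for row in a[i0:i0 + TILE]]
                b_tile = [row[j0:j0 + TILE] for row in b[k0:k0 + TILE]]
                # compute only on the staged tiles, then accumulate into c
                for i, arow in enumerate(a_tile):
                    for k, aval in enumerate(arow):
                        for j, bval in enumerate(b_tile[k]):
                            c[i0 + i][j0 + j] += aval * bval
    return c
```

On real hardware of this kind, the compiler or runtime decides when each tile moves between HBM2 and SRAM, which is the scheduling freedom a software-managed memory gives up a cache’s automatism to gain.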
It’s not the first time Intel and Baidu have teamed up to develop solutions targeting AI applications. At the Consumer Electronics Show in January, Intel announced the Nervana NNP-I, its Nervana chip for inference. Built on a 10-nanometer process, the NNP-I is optimized for image recognition and includes Intel Ice Lake cores for general operations alongside neural network acceleration, Intel AI chief Naveen Rao said at CES.