Intel Partners with Baidu to Optimize Nervana Neural Network Processor

Moupiya Dutta

At Baidu’s Create conference for AI developers, Baidu and Intel announced a new partnership to work together on Intel’s Nervana Neural Network Processor for training. The two companies will collaborate on high-speed accelerator hardware capable of training AI models quickly and power-efficiently.

The Nervana Neural Network Processor for training (NNP-T), a pillar of Intel’s emerging AI product portfolio, will be developed collaboratively by Intel and Baidu, the Beijing-based AI and internet company commonly referred to as “the Google of China,” which reported 2018 revenues of more than 100 billion yuan.

The NNP-T is optimized for image recognition and has an architecture distinct from other chips in that it lacks a standard cache hierarchy; its on-chip memory is managed by software instead. According to Intel, the NNP-T’s 24 compute clusters, 32GB of HBM2 memory, and local SRAM enable it to deliver up to 10 times the AI training performance of competing graphics cards and 3-4 times the performance of Lake Crest, the company’s first NNP chip.
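To put those multipliers side by side, here is a minimal Python sketch of the arithmetic. The GPU baseline figure is purely hypothetical, since Intel published only relative claims, not absolute throughput numbers.

```python
# Illustrative arithmetic only: the GPU baseline is a made-up placeholder;
# Intel's claims are relative multipliers, not absolute performance figures.
gpu_baseline = 1.0                  # hypothetical training throughput of a competing GPU

nnp_t = 10.0 * gpu_baseline         # "up to 10x the AI training performance of competing graphics cards"
lake_crest_low = nnp_t / 4.0        # NNP-T is claimed to be 3-4x Lake Crest,
lake_crest_high = nnp_t / 3.0       # so Lake Crest would sit at roughly nnp_t/4 .. nnp_t/3

print(f"NNP-T relative throughput: {nnp_t:.1f}x the GPU baseline")
print(f"Implied Lake Crest range:  {lake_crest_low:.1f}x - {lake_crest_high:.1f}x the GPU baseline")
```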

It’s not the first time Intel and Baidu have teamed up to develop solutions targeting AI applications. Intel first introduced its Nervana NNP chips in 2017, and it announced the NNP-I, its Nervana chip for inference, at the Consumer Electronics Show last January. Built on a 10-nanometer process, the NNP-I is optimized for image recognition and includes Intel Ice Lake cores for general operations and neural network acceleration, Intel’s Naveen Rao said at CES.
