Nvidia releases world’s first deep learning supercomputer, the DGX-1

HIGHLIGHTS

The new Tesla P100 chip at the heart of the DGX-1 packs 150 billion transistors, which Nvidia claims makes it the world’s largest chip.

Nvidia has released a new state-of-the-art chip, the Tesla P100 GPU, and it puts machine learning on a whole new level. At Nvidia’s GPU Technology Conference, CEO Jen-Hsun Huang revealed that the new chip can perform deep learning neural network tasks 12 times faster than the company’s previous top-end system.

The P100 cost over $2 billion in investment and R&D, making it a huge commitment to AI on Nvidia’s part. With 150 billion transistors on this single powerful chip, Nvidia claims it is the world’s largest chip. The company has packed eight of these into a crazy-powerful supercomputer called the DGX-1. The machine comes ready to run with deep-learning software pre-installed, and the first units are shipping to AI researchers at MIT, Stanford and UC Berkeley. At the event, Huang called the DGX-1 “one beast of a machine”.

So far, Nvidia has been making high-power graphics processing chips for the video game industry. Deep learning is a type of AI in which data is fed through layers of simulated neurons to train a computer to recognise complex patterns. Many tech companies, Google, Microsoft, Amazon and Facebook in particular, are now committed to developing deep-learning technology, and Nvidia wants to position itself as an AI chip maker.
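
As a purely illustrative sketch of that idea, here is roughly what “feeding data through layers of simulated neurons” looks like in Python. The layer sizes and random weights below are assumptions for demonstration only and have nothing to do with Nvidia’s own software.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of simulated neurons: a weighted sum followed by a ReLU nonlinearity
    return np.maximum(0.0, inputs @ weights + biases)

# Illustrative sizes: a batch of 4 inputs with 8 features each,
# passed through two hidden layers of 16 neurons into 3 output scores
x = rng.normal(size=(4, 8))
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 3)), np.zeros(3)

hidden = layer(layer(x, w1, b1), w2, b2)  # data fed through the layers of neurons
scores = hidden @ w3 + b3                 # final layer produces scores over 3 patterns
print(scores.shape)                       # (4, 3)

Training consists of adjusting those weights over millions of examples until the output scores match known answers, and it is that repeated adjustment that demands the kind of processing power Nvidia is selling.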

AI demands huge processing power to make machines do tasks that are impossible or highly complex for humans to code by hand. Last year, Microsoft bagged first place at the ImageNet computer vision challenge with a neural net roughly five times deeper than any used before. DeepMind used a humongous amount of computing power to train its Go-playing AlphaGo – 1,202 CPUs and 176 GPUs, to be precise.

To handle bigger and more complex data, a deep-learning machine needs more layers of neurons. Deeper networks can accomplish impressive machine-learning feats, for instance accurate image recognition in self-driving cars. That is why researchers and data scientists need more powerful chips, which is exactly what Nvidia plans to produce.
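
A rough back-of-the-envelope sketch shows why depth translates directly into a need for more powerful chips: every extra layer adds another block of multiply-accumulate operations for every single input. The layer counts and widths below are assumed figures for illustration, not taken from any real model.

def forward_pass_macs(input_dim, layer_widths):
    # Count multiply-accumulate operations for one pass through fully connected layers
    macs, prev = 0, input_dim
    for width in layer_widths:
        macs += prev * width  # one multiply-add per connection between layers
        prev = width
    return macs

shallow = forward_pass_macs(1024, [4096] * 5)   # a 5-layer network
deep = forward_pass_macs(1024, [4096] * 50)     # a 10x deeper network
print(f"shallow: {shallow:,} multiply-adds per input")
print(f"deep: {deep:,} multiply-adds per input (~{deep / shallow:.0f}x the work)")

Multiply that by millions of training images and thousands of training passes, and the appeal of a chip like the P100 becomes obvious.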
