Google announced at its recent I/O conference that it has developed the Tensor Processing Unit (TPU), an application-specific processor designed for deep learning. Google states that the TPU runs its AI workloads significantly faster than general-purpose hardware, and claims that the chip effectively advances chip technology roughly seven years into the future.
This custom IC (integrated circuit) is built to run Google's TensorFlow machine learning software. Thousands of TPUs have been in use in Google's own data centres for over a year now, powering deep learning tasks that include Google's search algorithm and the Street View feature of Google Maps. Notably, AlphaGo, Google's computer program that recently beat a professional Go player, was also powered by TPUs.
GPUs (Graphics Processing Units) are preferred over CPUs (Central Processing Units) for deep learning because their massively parallel architecture suits the matrix computations at the heart of neural networks. Fundamentally, though, GPUs were designed to render and display 3D graphics. Although NVIDIA's graphics card architecture adapts well to deep learning, dedicated processors such as TPUs, designed specifically for the task, hold a clear advantage: Google claims its TPUs deliver 10 times better performance per watt than conventional alternatives.
Google’s TPUs are accelerators for deep learning, while Intel chips are general-purpose processors. Since Google now relies on deep learning for most of its applications, designing its own chip makes sense. While TPUs will probably not replace Intel’s server chips, they do pose a potential threat to NVIDIA, which competes in the same deep-learning field.
In addition, Google aims to make TPUs available over its cloud computing platform, which would give it a competitive edge over Amazon’s AWS and Microsoft’s Azure. At present, Google has no plans to sell TPUs to third parties.