WHY THIS MATTERS IN BRIEF
- Many recent breakthroughs in artificial intelligence have been possible only because companies now have access to previously unimaginable amounts of computing power, and an arms race is now under way to build the “best” AI infrastructure stack
Google CEO Sundar Pichai announced today at the Google I/O developer conference that Google has begun building its own custom Application-Specific Integrated Circuit (ASIC) chips, called Tensor Processing Units (TPUs), named after its open-source deep learning framework TensorFlow. The technology is one of a kind, the sort of investment that makes sense only at Google’s hyperscale.
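For context, TensorFlow programs of that era were expressed as dataflow graphs of tensor operations. Here is a minimal sketch, using the TensorFlow 1.x API of the period, of the kind of dense matrix arithmetic TPUs are designed to accelerate; the values are arbitrary illustrations:

```python
import tensorflow as tf

# Build a small dataflow graph: two constant tensors and a matrix
# multiply, the dense linear-algebra workload TPUs accelerate.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(a, b)

# Execute the graph in a session (TensorFlow 1.x style).
with tf.Session() as sess:
    print(sess.run(product))  # [[19. 22.] [43. 50.]]
```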
These TPUs powered AlphaGo, the artificial intelligence (AI) system that beat top-ranked Go player Lee Sedol earlier this year, and they have also already been put to work inside Google Search and Google Street View. Now it sounds like they will become available for other companies to use, too.
“When you use the Google Cloud Platform, you can take advantage of TPUs as well,” said Pichai. But Google isn’t relying on new specialty hardware alone to take on public cloud leader Amazon Web Services (AWS); over time, Google will expose more and more machine learning APIs, Pichai said. Google has already launched the Cloud Machine Learning Platform service and the Vision API.
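To give a flavor of those APIs, here is a minimal sketch of a label-detection request against the Vision API’s REST endpoint; the API key placeholder and the local file photo.jpg are assumptions for illustration, and a real call requires valid Google Cloud credentials:

```python
import base64
import json

import requests  # third-party HTTP client (pip install requests)

# "YOUR_API_KEY" and photo.jpg are placeholders for illustration only.
API_KEY = "YOUR_API_KEY"
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

# The Vision API expects image bytes as a base64-encoded string.
with open("photo.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

# Ask for up to five labels describing the image.
body = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(ENDPOINT, json=body)
print(json.dumps(resp.json(), indent=2))
```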
“Our goal is to lead the industry on machine learning and make that innovation available to our customers,” said Google distinguished hardware engineer Norm Jouppi.
“Building TPUs into our infrastructure stack will allow us to bring the power of Google to developers across software like TensorFlow and Cloud Machine Learning with advanced acceleration capabilities. Machine Learning is transforming how developers build intelligent applications that benefit customers and consumers, and we’re excited to see the possibilities come to life.”