
Google’s AI is now helping chip designers design faster AI chips

WHY THIS MATTERS IN BRIEF

As AI becomes more capable of innovating and designing new products, the global rate of innovation and disruption will accelerate exponentially.

 


Artificial Intelligence (AI) has been getting steadily better at designing and innovating products, from creating new AIs to designing everything from chairs and clothing lines through to art, lunar landers, music, robots, and trainers. Recently there has been a lot of focus on designing computer chips that are optimised to run AI workloads. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip optimised for today’s AI, not the AI of two to five years ago, so Google has come up with a solution – get AI to design the AI chip.

 

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fuelling advances in the other,” the Google researchers write in a paper describing the work, posted to arXiv today.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn’t exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learned to do a particularly time-consuming part of chip design called “placement.” After studying chip designs for long enough, Google’s new AI was able to produce a design for a Google Tensor Processing Unit in under 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and chip area.

 

Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that performance is maximised while power consumption and the area of the chip are minimised. Heightening the challenge is the requirement that all of this happen while obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.
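To make those trade-offs concrete, here is a minimal, hypothetical sketch of the kind of proxy objective a placement tool scores. It is not Google’s actual cost model: the half-perimeter wirelength estimate, the bounding-box area, and the density_limit parameter are all illustrative simplifications.

```python
# Illustrative sketch only -- not Google's actual cost model. It shows the
# kind of proxy a placer trades off: wirelength (a stand-in for power and
# performance), interconnect density, and consumed chip area.
from dataclasses import dataclass

@dataclass
class Macro:
    x: float   # placement coordinates on the chip canvas
    y: float
    w: float   # macro width
    h: float   # macro height

def placement_cost(macros, nets, density_limit=0.6):
    """Score a placement: lower is better.

    nets: list of index tuples, each naming the macros a wire connects.
    """
    # Half-perimeter wirelength: a common, cheap estimate of routed wire.
    wirelength = 0.0
    for net in nets:
        xs = [macros[i].x for i in net]
        ys = [macros[i].y for i in net]
        wirelength += (max(xs) - min(xs)) + (max(ys) - min(ys))

    # Bounding box of all macros approximates consumed chip area.
    area = (max(m.x + m.w for m in macros) - min(m.x for m in macros)) * \
           (max(m.y + m.h for m in macros) - min(m.y for m in macros))

    # Crude density check: penalise placements whose macros fill too much
    # of their bounding box, standing in for interconnect-density rules.
    utilisation = sum(m.w * m.h for m in macros) / max(area, 1e-9)
    density_penalty = max(0.0, utilisation - density_limit) * 1000.0

    return wirelength + 0.01 * area + density_penalty
```

Real placers evaluate far richer models of congestion and timing, but the shape of the problem is the same: many macros, one scalar score to optimise.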

Goldie and Mirhoseini modelled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labelled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
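As a rough illustration of that learning loop, the sketch below reuses the hypothetical placement_cost() and Macro from the earlier snippet and runs a tabular REINFORCE policy that places toy macros on a grid, nudging its probabilities toward placements with higher reward, i.e. lower proxy cost. The grid size, netlist, and learning rate are all made up; the paper’s actual method trains a neural network policy on real netlists.

```python
# Toy REINFORCE sketch of "learning placement by doing" -- an assumption-laden
# stand-in for the paper's method. Requires Macro and placement_cost from the
# previous snippet.
import numpy as np

rng = np.random.default_rng(0)
GRID = 8                    # place each macro on an 8x8 grid of slots
NUM_MACROS = 4
# One softmax distribution over grid cells per macro (a tabular "policy").
logits = np.zeros((NUM_MACROS, GRID * GRID))

def softmax(row):
    p = np.exp(row - row.max())
    return p / p.sum()

def sample_placement():
    # Sample one grid cell per macro from its current policy.
    return [rng.choice(GRID * GRID, p=softmax(logits[m]))
            for m in range(NUM_MACROS)]

def reward(cells):
    # Reward is the negative proxy cost: better placements score higher.
    macros = [Macro(x=float(c % GRID), y=float(c // GRID), w=1.0, h=1.0)
              for c in cells]
    nets = [(0, 1), (1, 2), (2, 3), (3, 0)]   # toy netlist
    return -placement_cost(macros, nets)

baseline, lr = 0.0, 0.1
for step in range(2000):
    cells = sample_placement()
    r = reward(cells)
    baseline += 0.05 * (r - baseline)         # running-mean reward baseline
    for m, c in enumerate(cells):
        # REINFORCE update: grad log p(c) = one_hot(c) - softmax(logits[m]).
        p = softmax(logits[m])
        grad = -p * (r - baseline)
        grad[c] += (r - baseline)
        logits[m] += lr * grad
```

Over many iterations the policy concentrates probability on cells that shrink wirelength and area, which is the sense in which the placement-bot “becomes better at its task the more designs it does.”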

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.
