
Intel unveils Loihi, a revolutionary new Neuromorphic self-learning processor

WHY THIS MATTERS IN BRIEF

As the age of silicon-based computing systems nears its natural conclusion, new computing platforms are emerging that are capable of self-learning, that promise to revolutionise AI development, and that are orders of magnitude faster and more energy efficient than today’s “antiquated” systems.

 

Intel recently held their annual keynote in Las Vegas, and while most of the coverage was pretty much the usual symphony of marketing material and future projects, I think one particular segment stood out above the rest – their announcements around their latest Neuromorphic and Quantum processors.

 


It is no secret that we’re nearing the cliff edge of what’s possible with today’s silicon-based computing platforms. Even as we see a path to 5nm, 1nm and even 0.5nm transistors, the economics of switching from one fabrication process to the next increase almost exponentially, so it’s looking increasingly likely that silicon will only take us so far. This is one of the reasons why today we’re seeing a proliferation of new computing architectures, including Chemical, DNA, Liquid, Neuromorphic, Photonic, and yes, Quantum computing platforms, and it wouldn’t be too much of a stretch to suggest that the much lauded “death of silicon” is one of Intel’s main motivators for investigating, and experimenting with, new processor types.

 

The future of AI
 

Recently Intel unveiled their first 17 qubit quantum computing chip, and now, a few months later, the company has announced its first foray into Neuromorphic computing – a form of computing that could one day see the awesome power of today’s biggest supercomputers condensed down into a package no larger than your fingernail.

 


Intel’s new chip, called Loihi, is a self-learning neuromorphic processor – the kind that will one day help us revolutionise Artificial Intelligence (AI) all over again – and its architecture operates in a similar way to the human brain. Just like the brain, it’s designed to create new internal neural pathways over time, the trait that, in our case, gives us humans our IQ and our astounding problem solving capability, and thanks to over 130,000 artificial neurons and the equivalent of over 130 million human synapses, Loihi will be able to learn by itself.
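To get a feel for what an “artificial neuron” actually does, here’s a minimal sketch, in Python, of the kind of leaky integrate-and-fire spiking neuron model that neuromorphic chips implement in silicon – note that the threshold and leak values below are purely illustrative and aren’t Intel’s actual parameters.

```python
# A minimal leaky integrate-and-fire (LIF) spiking neuron, the class of
# model that neuromorphic chips like Loihi realise in hardware.
# All parameters here are illustrative, not Intel's actual values.

def simulate_lif(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Integrate input current over time; emit a spike (1) whenever the
    membrane potential crosses the threshold, then reset."""
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of incoming current
        if v >= threshold:        # threshold crossing -> fire
            spikes.append(1)
            v = v_reset           # reset the membrane after spiking
        else:
            spikes.append(0)
    return spikes

# Example: a steady input of 0.3 per timestep fires roughly every 4 steps.
print(simulate_lif([0.3] * 20))
```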

 

Training the Loihi chip in the lab
 

Loihi’s digital circuitry mimics the mechanics of the human brain, which not only helps it accelerate machine learning tasks to crazy speeds but lets it do so using just a thousandth of the computing power, and energy, of today’s increasingly antiquated looking systems.

Neuromorphic chip models draw inspiration from how biological neurons communicate and learn, using spikes and “plastic” synapses – ones whose strength can be modulated based on the relative timing of those spikes – like the artificial synapses recently developed by MIT that operate billions of times faster than our own human neurons. It’s this trait that could one day let these new computing platforms self-organise and make decisions based on patterns and associations by themselves, without the need for human input or intervention.
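For the curious, here’s a sketch of the classic spike-timing-dependent plasticity (STDP) rule, the best known example of the kind of timing-modulated synapse update described above – the constants are illustrative and aren’t taken from Loihi’s learning engine.

```python
import math

# A toy spike-timing-dependent plasticity (STDP) rule: synapses are
# strengthened or weakened depending on the relative timing of the
# spikes on either side of them. Constants are illustrative only.

def stdp_weight_change(pre_spike_t, post_spike_t,
                       a_plus=0.1, a_minus=0.12, tau=20.0):
    """Return the synaptic weight change for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic one (dt > 0)
    the synapse is strengthened; if it follows, it is weakened. The
    effect decays exponentially with the timing gap."""
    dt = post_spike_t - pre_spike_t
    if dt > 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post before pre: depression
        return -a_minus * math.exp(dt / tau)

# Pre fires at t=10 ms, post at t=15 ms: the synapse strengthens.
print(stdp_weight_change(10.0, 15.0))   # ~ +0.078
# Reversed order weakens it.
print(stdp_weight_change(15.0, 10.0))   # ~ -0.093
```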

 


The Loihi test chip is based on Intel’s 14nm process technology. Its features include a fully asynchronous neuromorphic “many core” mesh that supports a wide range of sparse, hierarchical and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons. Each of these neuromorphic cores includes a learning engine that can be programmed to adapt its network parameters on the fly, supporting supervised, unsupervised, reinforcement and other AI learning “paradigms.” The chip also allows for the development and testing of several highly efficient algorithms for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
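Intel hasn’t published a public programming interface for Loihi at the time of writing, but to illustrate what that “sparse” connectivity means in practice, here’s a purely hypothetical Python sketch that stores each neuron’s connections as a short adjacency list rather than a dense matrix – the sizes are toy numbers chosen for readability.

```python
import random

# A toy illustration of sparse, recurrent connectivity: each neuron
# connects to a small random subset of the others, stored as adjacency
# lists instead of a dense num_neurons x num_neurons weight matrix.
# Sizes are tiny for readability; Loihi-scale networks would have
# ~130,000 neurons.

def build_sparse_network(num_neurons=1000, fanout=50, seed=42):
    """Return {neuron_id: list of (target_id, weight)} adjacency lists."""
    rng = random.Random(seed)
    network = {}
    for src in range(num_neurons):
        targets = rng.sample(range(num_neurons), fanout)
        network[src] = [(dst, rng.uniform(-1.0, 1.0)) for dst in targets]
    return network

net = build_sparse_network()
# Only `fanout` entries are stored per neuron, versus `num_neurons`
# entries per row in a dense matrix.
print(len(net[0]))   # 50
```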

Intel now plans to spend the first half of 2018 sharing the chip with leading university and research institutions with a focus on advancing AI, where an increasing need for collection, analysis and decision making from highly dynamic and unstructured natural data is driving demand for compute that may outpace both classic CPU and GPU architectures. The future’s arriving, and it’s going to be here sooner than you think…
