
AI everywhere as DARPA spins up new intelligent edge computing projects

WHY THIS MATTERS IN BRIEF

Today the vast majority of AI workloads run in centralised hyperscale datacenters; tomorrow they’ll run at the edge of the network on our smart devices, and revolutionise industries in new ways.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

The US military’s bleeding edge research arm DARPA, whose recent projects have included everything from using gamers’ brainwaves to train swarms of killer robots in battle and turning plants and animals into living sensor networks, to many other fabulous and weird things including the development of the world’s first conscious robots, this week announced that they’re looking to fund research into developing what they call Shallow Neural Network Architectures (SNNAs). In other words, they want to develop lean AIs that can run on low powered systems at the network’s edge rather than relying on large centralised hyperscale datacenters to do all the heavy lifting for them.

 


The project, codenamed Hyper-Dimensional Data Enabled Neural Networks (HyDDENN), would deliver results comparable to existing state of the art Deep Neural Networks, or DNNs, running in hyperscale data centers, but without the latency and massive computational requirements.

Conventional DNNs, like OpenAI’s revolutionary GPT-3 natural language model with its 175 billion parameters, are “growing wider and deeper, with the complexity growing from hundreds of millions to billions of parameters in the last few years,” a DARPA presolicitation document says. “The basic computational primitive to execute training and inference functions in DNNs is the ‘multiply and accumulate (MAC)’ operation. As DNN parameter count increases, SOA networks require tens of billions of MAC operations to carry out one inference.”
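To make the “multiply and accumulate” point concrete, here’s a minimal Python sketch of a single fully connected layer doing inference, written so the MAC structure is visible: every weight contributes exactly one multiply and one accumulate, which is why MAC counts scale directly with parameter counts. The layer sizes and weights are illustrative assumptions, not figures from DARPA’s documents.

```python
# Illustrative only: one dense layer written out so the multiply-and-accumulate
# (MAC) structure is explicit. Layer sizes are hypothetical, not DARPA's.

def dense_layer(inputs, weights, biases):
    """Compute one dense layer; every weight costs exactly one MAC."""
    outputs = []
    macs = 0
    for out_idx, bias in enumerate(biases):
        acc = bias
        for in_idx, x in enumerate(inputs):
            acc += x * weights[out_idx][in_idx]   # one multiply, one accumulate
            macs += 1
        outputs.append(max(acc, 0.0))             # ReLU activation
    return outputs, macs

if __name__ == "__main__":
    import random
    n_in, n_out = 512, 256                        # hypothetical layer sizes
    x = [random.random() for _ in range(n_in)]
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    _, mac_count = dense_layer(x, w, b)
    print(f"MACs for one {n_in} -> {n_out} layer: {mac_count:,}")  # 131,072
```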

This means that the accuracy of a DNN “is fundamentally limited by available MAC resources,” DARPA says. “Consequently, SOA high accuracy DNNs are hosted in the cloud centers with clusters of energy hungry processors to speed up processing. This compute paradigm will not satisfy many DoD applications which demand extremely low latency, high accuracy Artificial Intelligence (AI) with severe size, weight, and power constraints.”

 


With HyDDENN, the agency says it hopes to break free from its reliance on large MAC-based DNNs.

“HyDDENN will explore and develop innovative data representations with shallow NN architectures based on efficient, non-MAC, digital compute primitives to enable highly accurate and energy efficient AI for DoD Edge systems.”
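The solicitation doesn’t spell out which non-MAC primitives it has in mind, but hyper-dimensional computing is the usual reference point: data are encoded as very wide binary vectors and compared with XOR, population counts, and majority votes rather than multiplies. The toy Python sketch below illustrates that style of computation under assumed dimensions and encodings; it is not a reconstruction of any HyDDENN design.

```python
# Toy hyper-dimensional (HD) classifier: everything is XOR, popcount and
# majority voting on wide binary vectors -- no multiply-accumulate anywhere.
# Dimensions, noise levels and the encoding are illustrative assumptions.
import random

DIM = 10_000  # HD vectors are typically thousands of bits wide

def random_hv():
    return [random.randint(0, 1) for _ in range(DIM)]

def bind(a, b):
    """Bind two hypervectors with elementwise XOR (fuller encoders use this to pair features with values)."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Combine hypervectors with a bitwise majority vote."""
    half = len(hvs) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*hvs)]

def hamming(a, b):
    """Similarity is just Hamming distance: the popcount of an XOR."""
    return sum(x ^ y for x, y in zip(a, b))

def noisy(hv, flips=500):
    """Flip a few bits to simulate a noisy observation of a prototype."""
    out = hv[:]
    for i in random.sample(range(DIM), flips):
        out[i] ^= 1
    return out

random.seed(0)
proto_a, proto_b = random_hv(), random_hv()

# Class prototypes are built by bundling a handful of noisy training examples.
class_a = bundle([noisy(proto_a) for _ in range(5)])
class_b = bundle([noisy(proto_b) for _ in range(5)])

query = noisy(proto_a)  # an unseen example drawn from class A
prediction = "A" if hamming(query, class_a) < hamming(query, class_b) else "B"
print("predicted class:", prediction)  # expected: A
```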

The aim is to reduce parameter counts at least tenfold while matching the accuracy of a comparable MAC-based DNN solution. “With efficient digital compute hardware, these innovations will lead to at least 100 fold reduction in combined compute power and throughput, while retaining high-accuracy output when compared to the SOA DNN approach.”
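Read together, those two targets are mostly arithmetic: cut the parameters roughly tenfold and make each remaining operation cheaper than a MAC, and the savings multiply up to the quoted 100-fold figure. The sketch below works through that reading with hypothetical numbers; the 10x-cheaper-primitive assumption is mine, since the solicitation only states the combined goal.

```python
# Back-of-envelope reading of the HyDDENN targets. All figures are hypothetical
# and chosen only to show how a 10x parameter cut combined with cheaper
# non-MAC primitives could yield a ~100x compute reduction.
sota_ops_per_inference = 20e9   # "tens of billions" of MACs per inference (DARPA's ballpark)
cost_per_mac = 1.0              # arbitrary energy/compute unit for one MAC
param_reduction = 10            # HyDDENN goal: at least 10x fewer parameters
primitive_cost_ratio = 0.1      # assumption: a non-MAC primitive at ~1/10 the cost of a MAC

baseline = sota_ops_per_inference * cost_per_mac
hyddenn = (sota_ops_per_inference / param_reduction) * cost_per_mac * primitive_cost_ratio

print(f"baseline compute cost : {baseline:.1e}")
print(f"HyDDENN-style cost    : {hyddenn:.1e}")
print(f"reduction             : {baseline / hyddenn:.0f}x")  # 100x
```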

 


Although DARPA’s focus is on military applications, where the increasingly connected battlefield will require significant tactical edge deployments, such as the ability of AIs to build military networks on the fly, the agency believes the technology could find use elsewhere.

“It is expected that HyDDENN will have significant impact in the areas of Edge/IoT communications and contextual Edge sensing and classification,” the document states. DoD-relevant applications mentioned by DARPA include contextual communications, speech recognition, gesture recognition, and medical diagnostics.

For a project with such lofty aims, HyDDENN has limited funding available. The award value of the Phase 1 Feasibility Study (6 months) tops out at $300,000, while the Phase 2 Proof of Concept (12 months) should not exceed $700,000.

 


By the end of the second phase, the project’s researchers are expected to have developed an ASIC architecture and high-level logic designs at the register-transfer level, “as well as a project plan to implement a future fully programmable integrated chip-scale digital IC with the proposed HD data representation, logic primitives, and shallow HD NN to attain the HyDDENN performance goals and metrics for the targeted application.”
