Tech giants team up to create the world’s first Deep Learning standard



Many of today’s technology companies are creating proprietary AIs that can’t interoperate with one another, but a new framework from Facebook and Microsoft sets out to change that.


As Artificial Intelligence (AI), and in this case more specifically Machine Learning, becomes more pervasive across society, it was inevitable that more companies would jump on the bandwagon and create their own versions, and in several cases companies such as AMD, Fujitsu, Google and Nvidia have teamed up to ensure their products are compatible. That said, the software that runs most of today’s Deep Learning (DL) and AI-specific hardware is still proprietary, so Facebook and Microsoft have now teamed up to develop a common framework for building DL models that can interoperate and talk to each other.



The Open Neural Network Exchange (ONNX), which is available on GitHub, is described as a standard that will allow developers to move their neural networks from one framework to another, provided both adhere to the ONNX standard.

According to the joint press release from the two companies, that portability doesn’t exist today. Companies must choose the framework they’re going to use for their model before they start developing it, but the framework that offers the best options for testing and tweaking a neural network isn’t necessarily the one with the features you want when you bring a product to market.

In their press release the companies state that Caffe2, PyTorch, and Microsoft’s Cognitive Toolkit will all support the ONNX standard when it’s released later this month, and that models trained with one framework will be able to move to another for inference.



Facebook’s side of the post has a bit more detail on how this benefits developers and what kind of code compatibility was required to support it. It describes PyTorch as having been built to “push the limits of research frameworks, to unlock researchers from the constraints of a platform and allow them to express their ideas easier than before.”

Caffe2, in contrast, was built with “products, mobile, and extreme performance in mind. The internals of Caffe2 are flexible and highly optimized, so we can ship bigger and better models into underpowered hardware using every trick in the book.”

By creating a standard that allows models to move from one framework to another, AI developers can now take advantage of the strengths of both, but there are still some limitations. ONNX isn’t currently compatible with dynamic flow control in PyTorch, and Facebook mentions other incompatibilities with “advanced programs” in PyTorch that it doesn’t detail.
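The dynamic flow control limitation is worth illustrating. A trace-based exporter records only the operations that actually run on the dummy input, so a model whose branching depends on input values, as in this hypothetical example, can’t be fully captured in a static graph:

```python
# Illustrative example of dynamic flow control, the PyTorch feature the
# announcement flags as incompatible with ONNX export at launch.
# A trace-based exporter would record only whichever branch the dummy
# input happens to take, silently dropping the other path.
import torch
import torch.nn as nn


class DataDependent(nn.Module):
    def forward(self, x):
        # The branch taken depends on the *values* in x, not just its
        # shape, so the control flow can't be baked into a static graph.
        if x.sum() > 0:
            return x * 2
        return x - 1


model = DataDependent()
print(model(torch.ones(2)))   # positive sum -> takes the first branch
print(model(-torch.ones(2)))  # negative sum -> takes the second branch
```

A plain feed-forward model, by contrast, performs the same sequence of operations on every input, which is exactly the kind of static computation graph ONNX was designed to describe.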



Still, despite the teething issues, this early effort to create common ground, and a common DL framework, is a positive step. After all, most of the ubiquitous ecosystems we take for granted, such as USB compatibility, 4G LTE networks, and Wi-Fi, to name just a few, are fundamentally enabled by standards, and over time those standards have helped propel their growth and adoption.

A siloed, go-it-alone solution is fine for a company building something it will only ever use in house, but if you want to offer a platform that others can build on, then standardising that model is how you encourage others to use it.

The major difference between Microsoft and the other companies developing AI and DL products is the difficulty Microsoft faces in baking them into its consumer-facing products. With Windows 10 Mobile effectively dead, Microsoft has to rely on its Windows market to drive people towards Cortana, and that’s an intrinsically weaker position than that of Apple or Google, both of which have huge mobile platforms, or Facebook, which has over a billion users.




While the hope is that the new ONNX framework will benefit everyone in the space, it may well benefit Microsoft more than many of the other players. At least now we can all rest assured that our models will play nicely with each other. That is, of course, unless they develop their own secret language and lock us out of our systems… ahem.
