
General AI, the “holy grail” of AI, demonstrated for the first time

WHY THIS MATTERS IN BRIEF

AIs that can learn quickly, and from very small data sets, will fundamentally change how we train them, what they’re capable of, and how fast they’re deployed into the market.

Recently we saw a new “Master Algorithm” that could be used to create the first generation of super intelligent machines, and now a team of researchers from Maryland, USA, announced this week that they’ve invented a general Artificial Intelligence (AI) way for machines to identify and process 3D images, one that doesn’t require humans to go through the tedium of inputting specific information to account for each and every instance, scenario, difference, change and category that could crop up. They claim it’s a world first, even though it follows on from a not too dissimilar breakthrough from Google DeepMind, whose own AlphaZero platform recently taught itself a mix of board games, including chess, to grandmaster level in just four hours.

All that said, it’s important to note that while doing away with the huge volumes of training data used to train today’s AIs is a staggering breakthrough of potentially epic proportions, this announcement doesn’t yet herald the emergence of fabled Artificial General Intelligence (AGI), which is still a little way off, and for which Google DeepMind published an architecture last year. However, this new development, albeit a distant cousin of AGI for now, will no doubt influence the speed at which the first AGIs emerge.

This is actually a huge deal for the technology sector, and a massive step in the case for general AI over specific AI. Once fine-tuned, this development will have the power to shape and change how everyone, from police and intelligence officials to retail marketers and medical professionals, goes about their daily business.

Some examples

First, the basics: a quick rundown of existing technology.

Currently, neural networks, the computing systems designed to mimic how humans think and make decisions, are only as good as the information that’s fed into them. And that information has to be very specific: it has to account for each and every circumstance that could come up, or the system fails.

For example, in order to guarantee safe passage for their passengers, the makers of driverless vehicles have to design algorithms that account for every scenario their cars could encounter on the road, everything from pedestrian crosswalks and bike paths to sidewalks and concrete barriers. They also have to account for every little scenario that could cross an autonomous car’s path in order to program the car with the proper response. Miss a scenario, or design it incorrectly, and, as happened recently with a fatal driverless Uber accident, someone potentially dies.

That’s specific AI for you: the system can only follow the detailed, step-by-step directions that are inputted, and ordinarily the real programming challenge in specific AI comes when glitches are discovered.

For now let’s keep using the driverless car example, and say a pedestrian is indeed hit and injured due to a system design flaw that failed to take into account the scenario that brought on that particular accident. The designers, reeling from the failure and under pressure from manufacturers and industry folk, now have to find a speedy fix for the system. In essence, they have to go back into the code and input data that accounts for that particular scenario, in order to prevent a similar accident from happening again. The problem is, they can’t just stick in a new direction or a new command. They have to go back to the beginning of the algorithm, start over with the existing steps, and layer in the new directions as they proceed, repeating all the information they’ve already inputted and all the commands they’ve already codified.
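
To make that retraining burden concrete, here’s a minimal, purely illustrative sketch in PyTorch of what adding a single new scenario to a conventionally trained classifier typically involves. The names (`backbone`, `old_dataset`, `new_scenario_data`) are hypothetical placeholders, not anyone’s real codebase; the point is simply that the output layer has to be rebuilt and the network retrained over the full dataset, old examples included.

```python
import torch
import torch.nn as nn

NUM_OLD_CLASSES = 10          # the scenarios the system already knows

# Stand-in for a trained feature extractor (hypothetical, for illustration).
backbone = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
)

# To teach the system ONE new scenario, specific AI can't simply append a
# rule. The classification head has to be rebuilt with an extra output...
new_head = nn.Linear(256, NUM_OLD_CLASSES + 1)
model = nn.Sequential(backbone, new_head)

# ...and training then loops over the ENTIRE labelled dataset again, every
# old scenario plus the new one, or the network forgets what it already
# knew (the "catastrophic forgetting" problem).
def retrain(model, full_dataset, epochs=10):
    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in full_dataset:   # old data AND new data
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

# retrain(model, old_dataset + new_scenario_data)  # the full pass, again
```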

Tedious? Very. That’s how specific AI works, and its limits are, well, very real.

Now enter ZAC, which stands for Z Advanced Computing, who are bringing general AI to the 3D world.

“We have figured out how to apply General-AI to machine learning, for the first time in history,” said ZAC executives Bijan Tadayon and Saied Tadayon in a PowerPoint, during a private presentation from their Maryland home-based office.

Their demonstration of their ZAC Cloud technology focused on the identification and description of various types of shoes: open-toed versus closed-toed, high-heeled versus flat, buckled boot versus leather-banded boot, and so on. Basically, it entailed dragging and dropping an image of a shoe across a computer screen into their ZAC Platform. Sound trivial? Hardly.

That simple act could very well prove to be “the shot heard around the technology world,” and here’s how they explained it.

“In our demo, we [chose] shoe, because shoe represents quite a complex object with many details and huge variations. Often, shoe designers throw in interesting features and various bells or whistles … A trained neural network … can only generically recognise shoes, e.g., ‘brown boot,’ and it is incapable of detailed recognition, especially for small features (e.g., bands, rings, and buckles),” they said.

But the ZAC platform? It correctly identified the distinguishing features of the shoes, open-toe versus closed-toe, for example, and from a variety of angles, whether the image was presented toe-to-the-front or toe-to-the-side, without extra training. What’s of even greater significance is that theirs doesn’t take a total reprogramming of the system to add in new information; the new information, or “learnings,” can simply be layered atop the old.
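
ZAC haven’t published how their platform works, so the sketch below is emphatically not their technique. It’s a generic illustration of the kind of behaviour being described, in the style of few-shot, prototype-based learning: each class is stored as the average embedding of a handful of examples, and adding a new class, “leather-banded boot,” say, touches nothing that already exists. `embed_fn` and the example images are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    """Generic few-shot classifier: one mean-embedding 'prototype' per class."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn   # any pretrained image -> vector function
        self.prototypes = {}       # class name -> mean embedding

    def add_class(self, name, examples):
        # Learn a new category from a few examples; no retraining needed,
        # the new "learning" is simply layered atop the old ones.
        with torch.no_grad():
            vecs = torch.stack([self.embed_fn(x) for x in examples])
        self.prototypes[name] = vecs.mean(dim=0)

    def classify(self, image):
        # Nearest prototype by cosine similarity wins.
        with torch.no_grad():
            v = self.embed_fn(image)
        names = list(self.prototypes)
        protos = torch.stack([self.prototypes[n] for n in names])
        sims = F.cosine_similarity(v.unsqueeze(0), protos)
        return names[int(sims.argmax())]

# Usage sketch (all inputs hypothetical):
# clf = PrototypeClassifier(embed_fn)
# clf.add_class("open-toe heel", few_open_toe_images)      # a handful each
# clf.add_class("buckled boot", few_buckled_boot_images)
# clf.add_class("leather-banded boot", few_banded_images)  # added later, nothing retrained
# print(clf.classify(new_shoe_photo))
```

Metric-based approaches like this are one well-known way to get few-shot, layer-on-top behaviour; whatever ZAC are actually doing under the hood remains proprietary.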

Now think of the applications… there are billions, literally.

Retailers trying to snag online customers can market, advertise and showcase their products with a variety of descriptors and from a variety of angles, and add in new merchandise without having to reprogram the entire system. Buyers, meanwhile, will be able to search on specific terms that direct them straight to the products they’re actually seeking, “bells or whistles” and all.

And that’s just retail. Think medical applications, national security and law enforcement.

“With our demo, we have demonstrated general-AI technology for the first time in history, as the tip of the iceberg, developed by us to revolutionize AI and Machine Learning,” said Bijan Tadayon. “We have demonstrated a very complex task … [that’s] not at all possible with the use of Deep Neural Networks or other specific-AI technologies.”

And, maybe even more eye-opening for the programmers of the world…

“With our demo,” he went on, “we have demonstrated that you do not need millions of images to train a complex task, we only need a small number of training samples to do the training. That is the ‘Holy Grail’ of AI and Machine Learning.”

Also, given Team ZAC’s background, this can hardly be dismissed as pie-in-the-sky stuff either. In mid-May, Team ZAC earned a “Judges Choice Award” for their general AI innovation at the US-China Innovation Alliance forum in Texas, out of a field of about 120 participants, and they’ve been given all-expenses-paid trips to China to present their findings to major technology companies and investors in the next few months. The Tadayons themselves hold impressive technology credentials, including graduating at the top of their class from Cornell University, inventing and patenting more than 100 science-based applications and products, and starting up and developing several technology business ventures.

In other words, ZAC isn’t made up of a team of randoms, and it could be that they’ve just, as they call it, invented the holy grail of machine learning. Watch this space, closely…
