WHY THIS MATTERS IN BRIEF
AI is showing that, given the ability to manipulate things in the real world, it can do things better than expert humans can …
A little while ago an Artificial Intelligence (AI) beat a human team for the first time in a physical sport, in this case drone racing. Let’s face it, the ability to beat human players in all kinds of games like chess, Dota, Go, and StarCraft is no longer surprising; after all, AI has proved it can outperform its human creators at certain tasks, especially when it comes to processing and analysing information. But, despite all this, physical skill has remained a human prerogative – until now.
Researchers at ETH Zurich have created an AI-powered robot tasked with learning how to play the popular labyrinth marble maze game. The goal of the game is simple: using two knobs that tilt the board, you have to steer a marble from a start point to an end point without letting it fall into any of the holes along the way.
See it in action
But if you’ve ever played it, you know it’s actually quite challenging. The reason is that the game demands fine motor skills, spatial reasoning and a lot of practice.
The robot, named CyberRunner, is equipped with two motors (its hands), a camera (its eyes), and a computer (its brain), allowing it to play the game just like a person would.
Much like a human, CyberRunner learns through experience. It leverages recent advances in model-based reinforcement learning, an approach in which the AI chooses promising behaviours by predicting the outcomes of different courses of action.
While playing, CyberRunner observes the labyrinth through its camera and receives rewards based on its performance. It keeps a memory of the collected experience, which the learning algorithm uses to work out how the physical system behaves; based on that knowledge, it can pick out the most promising behaviours. Because the algorithm keeps running while the robot plays, CyberRunner’s control of the two motors improves with every run.
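To make the idea concrete, here is a minimal, hypothetical sketch of that model-based loop in Python. It is not the CyberRunner code: it uses a toy two-dimensional “marble” whose dynamics the agent learns from collected experience and then uses to pick actions by predicting their outcomes. Every name, number and the simple linear model are assumptions made purely for illustration.

```python
# Toy sketch of a model-based RL loop (illustrative only, not CyberRunner's code):
# 1) collect experience, 2) fit a model of the dynamics, 3) plan with the model.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(state, action):
    """The 'real' environment, unknown to the agent: position drifts with the tilt action."""
    return state + 0.1 * action + rng.normal(0, 0.01, size=2)

GOAL = np.array([1.0, 1.0])

def reward(state):
    """Closer to the end point is better."""
    return -np.linalg.norm(state - GOAL)

# 1) Collect experience with random actions (the robot "playing").
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1, size=2)              # two knobs -> two tilt commands
    s_next = true_dynamics(s, a)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next if np.linalg.norm(s_next) < 2 else np.zeros(2)

# 2) Fit a simple linear dynamics model by least squares: s' ~ [s, a, 1] @ W.
X = np.hstack([np.array(states), np.array(actions), np.ones((500, 1))])
W, *_ = np.linalg.lstsq(X, np.array(next_states), rcond=None)

def predicted_dynamics(state, action):
    return np.concatenate([state, action, [1.0]]) @ W

# 3) Plan: sample candidate action sequences, roll them out in the LEARNED
#    model, and execute the first action of the most promising sequence.
def plan(state, horizon=5, candidates=200):
    best_score, best_first_action = -np.inf, None
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, size=(horizon, 2))
        s_sim, score = state.copy(), 0.0
        for a in seq:
            s_sim = predicted_dynamics(s_sim, a)
            score += reward(s_sim)
        if score > best_score:
            best_score, best_first_action = score, seq[0]
    return best_first_action

# Run the planner in the "real" environment for a few steps.
s = np.zeros(2)
for step in range(30):
    s = true_dynamics(s, plan(s))
print("final distance to goal:", np.linalg.norm(s - GOAL))
```

The sketch mirrors the three steps described above: gather experience, fit a model of how the board responds to the knobs, then choose actions by predicting their outcomes with that learned model rather than by trial and error on the real board.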
After just 6.06 hours of practice the robot beat the previous world record set by Lars Göran Danielsson, who has been playing since 1988 and posted a time of 15.41 seconds in 2022. CyberRunner completed the maze in 14.48 seconds, just over 6% faster than the human record holder.
Notably, during the learning process the robot discovered shortcuts and found ways to cheat, a habit we tend to think of as an innately human trait. The project’s lead researchers, Thomas Bi and Prof. Raffaello D’Andrea, had to step in and explicitly instruct CyberRunner not to skip parts of the maze.
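One common way to curb that kind of shortcut-seeking, and a plausible reading of what “don’t skip parts of the maze” might amount to in practice, is to shape the reward so that only progress through the course in order counts. The sketch below is purely illustrative and not the authors’ method; the waypoints and threshold are invented for the example.

```python
# Hedged illustration of reward shaping to block shortcuts (not the authors' code):
# the agent is rewarded only for reaching the NEXT waypoint in sequence,
# so teleporting or cutting across the maze earns nothing.
import numpy as np

WAYPOINTS = [np.array([0.2, 0.1]), np.array([0.4, 0.3]),
             np.array([0.7, 0.5]), np.array([1.0, 1.0])]   # made-up course
REACH_THRESHOLD = 0.05   # how close the marble must get to "count" a waypoint

class OrderedProgressReward:
    """Gives +1 only when the next unvisited waypoint is reached, in order."""
    def __init__(self):
        self.next_index = 0

    def __call__(self, marble_position):
        if self.next_index >= len(WAYPOINTS):
            return 0.0                       # course already completed
        target = WAYPOINTS[self.next_index]
        if np.linalg.norm(marble_position - target) < REACH_THRESHOLD:
            self.next_index += 1             # credit only sequential progress
            return 1.0
        return 0.0                           # jumping ahead earns no reward

r = OrderedProgressReward()
print(r(np.array([0.69, 0.51])))   # 0.0 – near waypoint 3, but 1 and 2 not yet passed
print(r(np.array([0.21, 0.09])))   # 1.0 – reached waypoint 1 in order
```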
A preprint of the research paper is already available at CyberRunner.ai, and Bi and D’Andrea will also open-source the project on the website.
“We believe that this is the ideal testbed for research in real-world machine learning and AI. Prior to CyberRunner, only organisations with large budgets and custom-made experimental infrastructure could perform research in this area. Now, for less than 200 dollars, anyone can engage in cutting-edge AI research,” said D’Andrea.
“Furthermore, once thousands of CyberRunners are out in the real-world, it will be possible to engage in large-scale experiments, where learning happens in parallel, on a global scale. The ultimate in Citizen Science!”