
AI beats human champions at another physical skill game


AI is showing that, given the ability to manipulate things in the real world, it can do things better than expert humans can …



A little while ago an Artificial Intelligence (AI) beat a human team for the first time ever in a physical sport, in that case drone racing. Let’s face it, AI’s ability to beat human players at all kinds of games like chess, Dota, Go, and StarCraft is no longer surprising; it has long since proved it can outperform its animate creators at certain tasks, especially when it comes to processing and analysing information. But, despite all this, physical skill has remained a human prerogative – until now.




Researchers at ETH Zurich have created an AI-powered robot tasked with learning to play the popular labyrinth marble maze game. The goal of the game is simple: using two knobs that tilt the board, you steer a marble from a start point to an end point without letting it fall into any of the holes along the way.




But if you’ve ever played it, you know it’s actually quite challenging: it demands fine motor skills, spatial reasoning abilities – and a lot of practice.

The robot, named CyberRunner, is equipped with two motors (its hands), a camera (its eyes), and a computer (its brain), allowing it to play the game just like a person would.




Much like a human, CyberRunner learns through experience. It leverages recent advances in model-based reinforcement learning, which let the AI predict the outcomes of different courses of action and choose the potentially most successful behaviour.

While playing the game, CyberRunner observes the labyrinth and receives rewards based on its performance. It keeps a memory of the collected experience, which the algorithm uses to learn how the system behaves and, from that knowledge, to recognise the most promising behaviours. The algorithm runs continuously, so the robot’s control of the two motors keeps improving with every game it plays.
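The loop described above – collect experience, learn a model of how the system behaves, then pick actions by predicting their outcomes – can be sketched in a few lines of Python. This is a toy 1-D stand-in for the labyrinth, not CyberRunner’s actual code: the “marble” is an integer position, the “knobs” tilt it by -1, 0 or +1, and the goal position, dynamics, and reward are all invented for illustration.

```python
import random

# Toy stand-in for the labyrinth: the marble sits at an integer
# position 0..10, and the goal is position 10.
GOAL = 10
ACTIONS = (-1, 0, 1)  # tilt left, hold, tilt right

def true_dynamics(state, action):
    # Hidden "physics" the agent must learn from experience.
    return max(0, min(GOAL, state + action))

def reward(state):
    # The closer the marble is to the goal, the higher the reward.
    return -abs(GOAL - state)

# 1. Collect experience: play with random tilts and keep a memory of
#    observed (state, action) -> next_state transitions.
model = {}
state = 0
while len(model) < len(ACTIONS) * (GOAL + 1):
    action = random.choice(ACTIONS)
    next_state = true_dynamics(state, action)
    model[(state, action)] = next_state  # 2. learn how the system behaves
    state = next_state

# 3. Plan: predict each action's outcome with the learned model and
#    choose the most promising one.
def choose_action(state):
    return max(ACTIONS, key=lambda a: reward(model[(state, a)]))

# Run the learned policy from the start of the board.
state = 0
for _ in range(20):
    state = true_dynamics(state, choose_action(state))
print("final position:", state)  # the marble reaches the goal: 10
```

The real system faces continuous states, noisy physics, and a camera instead of a perfect position readout, but the structure – memory, learned model, outcome prediction – is the same.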

The robot needed just 6.06 hours of practice. Impressively, it beat the previous world record of 15.41 seconds, set in 2022 by Lars Göran Danielsson, who has been playing since 1988. CyberRunner completed the game in 14.48 seconds – over 6% faster than the human record holder.
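The “over 6%” figure follows directly from the two times quoted above:

```python
human = 15.41  # Danielsson's 2022 world-record time, in seconds
robot = 14.48  # CyberRunner's time, in seconds

improvement = (human - robot) / human * 100
print(f"{improvement:.2f}% faster")  # about 6.04% faster
```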




Notably, during the learning process the robot discovered shortcuts and found ways to cheat – a knack for exploiting loopholes often considered an innate human trait. Project lead researchers Thomas Bi and Prof. Raffaello D’Andrea had to step in and instruct CyberRunner not to skip parts of the maze.

A preprint of the research paper is already available, and Bi and D’Andrea will also open source the project on its website.

“We believe that this is the ideal testbed for research in real-world machine learning and AI. Prior to CyberRunner, only organisations with large budgets and custom-made experimental infrastructure could perform research in this area. Now, for less than 200 dollars, anyone can engage in cutting-edge AI research,” said D’Andrea.




“Furthermore, once thousands of CyberRunners are out in the real-world, it will be possible to engage in large-scale experiments, where learning happens in parallel, on a global scale. The ultimate in Citizen Science!”
