DeepMind’s AI can now successfully control plasma in a fusion reactor


In order to sustain fusion, superheated plasma has to be held in magnetic confinement, and AI is proving to be very good at managing that.



If I asked you to name a few things that Artificial Intelligence (AI) can do, you might mention its ability to break its own reality, evolve, imagine, spawn child AIs, and create synthetic genomes for future designer humans. But you probably didn't say "control superheated plasma." So far, the closest AI has come to helping the world figure out fusion was concluding that Cold Fusion, in other words fusion at room temperature, is impossible – until it turned out it might not be … but that's another story.




Now it turns out that DeepMind's streak of applying its world-class AI to hard science problems continues. In collaboration with the Swiss Plasma Center at EPFL in Switzerland, the UK-based AI firm has trained a deep reinforcement learning algorithm to control the superheated soup of matter inside a nuclear fusion reactor. The breakthrough, published in the journal Nature, could help physicists better understand how fusion works, and potentially speed up the arrival of an unlimited source of clean energy.


The Future of Energy, by Futurist Speaker Matthew Griffin


“This is one of the most challenging applications of reinforcement learning to a real-world system,” says Martin Riedmiller, a researcher at DeepMind.

In nuclear fusion, the atomic nuclei of hydrogen atoms get forced together to form heavier atoms, like helium. This produces a lot of energy relative to a tiny amount of fuel, making it a very efficient source of power. It is far cleaner and safer than fossil fuels or conventional nuclear power, which relies on fission, the splitting of heavy nuclei apart. Fusion is also the process that powers stars.




Controlling nuclear fusion on Earth is hard, however. The problem is that atomic nuclei repel each other. Smashing them together inside a reactor can only be done at extremely high temperatures, often reaching hundreds of millions of degrees – hotter than the center of the sun. At these temperatures, matter is neither solid, liquid, nor gas. It enters a fourth state, known as plasma: a roiling, superheated soup of particles.


Courtesy: EPFL


The task is to hold the plasma inside a reactor together long enough to extract energy from it. Inside stars, plasma is held together by gravity. On Earth, researchers use a variety of tricks, including lasers and magnets so powerful that they could lift an aircraft carrier off the ground. In a magnet-based reactor, known as a tokamak, the plasma is trapped inside an electromagnetic cage, forcing it to hold its shape and stopping it from touching the reactor walls, which would cool the plasma and damage the reactor.




Controlling the plasma requires constant monitoring and manipulation of the magnetic field. The team trained its reinforcement-learning algorithm to do this inside a simulation. Once it had learned how to control – and change – the shape of the plasma inside a virtual reactor, the researchers gave it control of the magnets in the Variable Configuration Tokamak (TCV), an experimental reactor in Lausanne. They found that the AI was able to control the real reactor without any additional fine-tuning. In total, the AI controlled the plasma for only two seconds, but that is as long as the TCV reactor can run before getting too hot; on a machine that could run for longer, the control could have continued.
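The sim-to-real recipe described above – learn a controller entirely by trial and error in simulation, then deploy it on the hardware unchanged – can be sketched in miniature. Everything here is an illustrative stand-in: `PlasmaSimulator` is a toy one-dimensional system, not the real TCV simulator, and the "learner" is a simple search over feedback gains rather than DeepMind's deep reinforcement learning setup.

```python
class PlasmaSimulator:
    """Toy 1-D stand-in: the 'plasma position' drifts away from the
    target (0) unless the control action pushes it back."""
    def __init__(self) -> None:
        self.position = 1.0

    def step(self, action: float) -> float:
        self.position += 0.1 * self.position + action  # unstable drift + control
        return -abs(self.position)                     # reward: stay near target


def train_in_simulation(steps: int = 200) -> float:
    """Trial-and-error search for a stabilizing proportional gain."""
    best_gain, best_return = 0.0, float("-inf")
    for gain in [g / 100 for g in range(-50, 1)]:      # candidate policies
        sim, total = PlasmaSimulator(), 0.0
        for _ in range(steps):
            total += sim.step(gain * sim.position)     # proportional feedback
        if total > best_return:
            best_gain, best_return = gain, total
    return best_gain


# The controller found in simulation is then deployed on the real
# machine as-is, mirroring the zero-fine-tuning transfer in the article.
gain = train_in_simulation()
```

The toy system is stable only when the closed-loop multiplier `1.1 + gain` has magnitude below one, so a successful "training run" should return a sufficiently negative gain.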

Ten thousand times a second, the trained neural network takes in 90 different measurements describing the shape and position of the plasma and adjusts the voltage in 19 magnets in response. This feedback loop is far faster than anything previous reinforcement-learning algorithms have had to deal with. To speed things up, the AI was split into two neural networks. A large network, called a critic, learned via trial and error how to control the reactor inside the simulation. The critic's ability was then encoded in a smaller, faster network, called an actor, that runs on the reactor itself.
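Those numbers imply a very tight control budget: at 10 kHz, the actor has 100 microseconds to turn 90 sensor readings into 19 magnet voltages, which is why the distilled network has to be small and fast. A minimal sketch of one tick of that actor-side loop follows; the 90-in/19-out dimensions come from the article, but the single-hidden-layer architecture, random weights, and bounded outputs are illustrative assumptions, not DeepMind's actual network.

```python
import numpy as np

N_MEASUREMENTS = 90   # plasma shape/position observations (from the article)
N_MAGNETS = 19        # magnet voltage commands (from the article)
HIDDEN = 64           # hidden-layer width: an illustrative assumption

rng = np.random.default_rng(0)

# A tiny stand-in for the distilled "actor" network: one hidden layer.
W1 = rng.normal(0.0, 0.1, (HIDDEN, N_MEASUREMENTS))
W2 = rng.normal(0.0, 0.1, (N_MAGNETS, HIDDEN))

def actor(observation: np.ndarray) -> np.ndarray:
    """Map 90 measurements to 19 voltage commands; tanh keeps them bounded."""
    hidden = np.tanh(W1 @ observation)
    return np.tanh(W2 @ hidden)  # normalized commands in [-1, 1]

# One tick of the 10 kHz feedback loop, with placeholder sensor readings.
obs = rng.normal(size=N_MEASUREMENTS)
voltages = actor(obs)
```

In the real system this forward pass would be repeated ten thousand times per second, with each output scaled to the physical voltage range of its magnet coil.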

Source: DeepMind
