
Black box AIs learn to express themselves so researchers can read their minds

WHY THIS MATTERS IN BRIEF

Some AIs are black boxes, which means people don’t know how they do what they do or how they reach their decisions, and that’s a major issue across many industry use cases.

 


The Artificial Intelligence (AI) behind self-driving cars, medical image analysis and other computer vision applications relies on what are called Deep Neural Networks, or DNNs for short. Loosely modelled on the brain, DNNs consist of layers of interconnected “neurons” – mathematical functions that send and receive information and, like the neurons in the human brain, “fire” in response to features of the input data.

 


The first layer in a DNN processes a raw data input, such as pixels in an image, then passes that information to the next layer above, triggering some of those neurons, which then pass a signal to even higher layers until eventually the AI figures out what it’s looking at.
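
To make that layered picture concrete, here is a minimal sketch of such a network in PyTorch. The architecture, the ten output classes and the random stand-in image are illustrative assumptions, not any particular production system.

```python
# A minimal sketch of the layered picture described above, assuming PyTorch:
# raw pixels go into the first layer, each layer's "neurons" fire and pass
# their signal upward, and the final layer says what the image most likely is.
import torch
import torch.nn as nn

dnn = nn.Sequential(
    nn.Flatten(),                 # raw input: a 32x32 RGB image as pixels
    nn.Linear(32 * 32 * 3, 128),  # first layer of "neurons"
    nn.ReLU(),                    # neurons "fire" (or not) on features of the input
    nn.Linear(128, 64),           # signals pass to a higher layer
    nn.ReLU(),
    nn.Linear(64, 10),            # final layer: one score per possible answer
)

image = torch.rand(1, 3, 32, 32)              # stand-in for a real input image
scores = dnn(image)
print("predicted class:", scores.argmax(dim=1).item())
```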

But here’s the problem, says Duke University computer science professor Cynthia Rudin: “We can input, say, a medical image, and observe what comes out the other end, for example ‘this is a picture of a malignant lesion’, but it’s hard to know what happened in between.”

It’s what’s known as AI’s black box problem, and if you want to know how an AI reached a particular decision, for whatever reason, it’s one of the field’s biggest problems, if not the biggest.

 


Furthermore, as AI is increasingly used for everything from diagnosing disease to taking out hostile targets for the military, whether in real life or in the cyber world, not knowing why an AI did what it did, so that you can query it or validate it, is a pressing problem that has to be solved sooner rather than later.

“The problem with deep learning models is they’re so complex that we don’t actually know what they’re learning or how they’re applying it,” said Zhi Chen, a Ph.D. student in Rudin’s lab at Duke. “They can often leverage and use information we don’t want them to, and their reasoning processes can be completely wrong.”

Now though, as the field of so-called Explainable AI takes root, in which people try to read the minds of these AIs using anything from cell biology, oddly, to different visualisation techniques, Rudin, Chen and Duke undergraduate Yijie Bei have come up with a way to address this black box issue. By modifying the reasoning process behind the AI’s predictions, the team were able to troubleshoot the networks and understand whether they were trustworthy.

 


Most methods in this field attempt to uncover what led a computer vision system to the right answer after the fact, by pointing to the key features or pixels that identified an image: “The growth in this chest X-ray was classified as malignant because, to the model, these areas are critical in the classification of lung cancer.” But such approaches don’t reveal the network’s reasoning, just where it was looking.
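
For a sense of what those after-the-fact explanations look like in practice, here is a hedged sketch of one generic post-hoc technique, a simple input-gradient saliency map in PyTorch. The placeholder ResNet-18 and the random stand-in image are assumptions for illustration, and this is just one of many attribution methods, not the specific ones being critiqued here.

```python
# A sketch of post-hoc, pixel-level explanation: highlight which pixels most
# influenced a trained classifier's output. Generic illustration only.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)    # placeholder classifier, not a real medical model
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a chest X-ray

logits = model(image)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()          # gradient of the top score w.r.t. the input pixels

# Pixels with a large gradient magnitude are "where the model was looking",
# but that says nothing about how the model actually reasoned.
saliency = image.grad.abs().max(dim=1).values             # a (1, 224, 224) heat map
print(saliency.shape)
```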

The Duke team tried a different tack. Instead of attempting to account for a network’s decision-making on a post hoc basis, their method trains the network to show its work by expressing its understanding about concepts along the way. Their method works by revealing how much the network calls to mind different concepts to help decipher what it sees.

“It disentangles how different concepts are represented within the layers of the network,” Rudin said.

 


Given an image of a library, for example, the approach makes it possible to determine whether and how much the different layers of the neural network rely on their mental representation of “books” to identify the scene.

The researchers found that, with a small adjustment to a neural network, it is possible to identify objects and scenes in images just as accurately as the original network, and yet gain substantial interpretability in the network’s reasoning process.

“The technique is very simple to apply,” Rudin said.

The method controls the way information flows through the network. It involves replacing one standard part of a neural network with a new part. The new part constrains only a single neuron in the network to fire in response to a particular concept that humans understand. The concepts could be categories of everyday objects, such as “book” or “bike,” but they could also be general characteristics, such as “metal,” “wood,” “cold” or “warm.” By having only one neuron control the information about one concept at a time, it is much easier to understand how the network “thinks.”
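
As a rough illustration of that “one neuron per concept” idea, here is a toy PyTorch module that projects a feature map so each output channel is tied to a single named concept, with a simple auxiliary loss to keep the concept axes disentangled. The class name, the concept list and the loss are illustrative assumptions, not the Duke team’s published implementation.

```python
# A toy sketch of "one neuron per concept". Everything here is a simplified
# stand-in for illustration, not the researchers' actual module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptLayer(nn.Module):
    """Projects a feature map so each output channel is meant to track exactly
    one human-understandable concept ("book", "bike", "metal", ...)."""

    def __init__(self, in_channels, concepts):
        super().__init__()
        self.concepts = concepts
        # One 1x1 projection per concept: output channel k <-> concepts[k].
        self.project = nn.Conv2d(in_channels, len(concepts), kernel_size=1, bias=False)

    def forward(self, features):
        return self.project(features)                      # (batch, n_concepts, H, W)

    def alignment_loss(self, features, concept_idx):
        """Auxiliary loss for a batch of images known to show one concept:
        push that concept's channel up, and keep the projection directions
        roughly orthogonal so the concepts stay disentangled."""
        acts = self.forward(features).mean(dim=(2, 3))     # (batch, n_concepts)
        fire_right_channel = -acts[:, concept_idx].mean()
        w = F.normalize(self.project.weight.flatten(1), dim=1)     # (n_concepts, in_channels)
        off_axis = w @ w.t() - torch.eye(len(self.concepts), device=w.device)
        return fire_right_channel + off_axis.pow(2).sum()

layer = ConceptLayer(in_channels=64, concepts=["book", "bike", "metal", "wood"])
concept_map = layer(torch.rand(2, 64, 16, 16))             # 2 images, 4 concept channels
```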

 


The researchers tried their approach on a neural network trained on millions of labelled images to recognize various kinds of indoor and outdoor scenes, from classrooms and food courts to playgrounds and patios. Then they turned it on images it hadn’t seen before. They also looked to see which concepts the network layers drew on the most as they processed the data.

Chen pulls up a plot showing what happened when they fed a picture of an orange sunset into the network. Their trained neural network says that warm colors in the sunset image, like orange, tend to be associated with the concept “bed” in earlier layers of the network. In short, the network activates the “bed neuron” highly in early layers. As the image travels through successive layers, the network gradually relies on a more sophisticated mental representation of each concept, and the “airplane” concept becomes more activated than the notion of beds, perhaps because “airplanes” are more often associated with skies and clouds.
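
Here is a sketch of how such a trajectory might be traced, under some simplifying assumptions: tap the feature maps at several depths with forward hooks and score each against a fixed set of “concept directions.” The tiny stand-in backbone, the random concept axes and the random “sunset” image below are all placeholders, not the researchers’ actual setup.

```python
# Trace which concepts fire at each depth of a network, sketch only.
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(                                  # stand-in backbone
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
)
concepts = ["bed", "airplane", "book"]
directions = torch.randn(len(concepts), 8)            # placeholder concept axes

trajectory = {}                                       # layer -> {concept: score}

def record(layer_name):
    def hook(module, inputs, output):
        pooled = output.mean(dim=(0, 2, 3))           # average over batch and space
        scores = directions @ pooled                  # similarity to each concept axis
        trajectory[layer_name] = dict(zip(concepts, scores.tolist()))
    return hook

for i, idx in enumerate([1, 3, 5]):                   # tap the output of each ReLU
    net[idx].register_forward_hook(record(f"layer{i + 1}"))

with torch.no_grad():
    net(torch.rand(1, 3, 64, 64))                     # stand-in "sunset" image

for layer, scores in trajectory.items():
    print(layer, max(scores, key=scores.get))         # dominant concept per depth
```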

It’s only a small part of what’s going on, to be sure. But from this trajectory the researchers are able to capture important aspects of the network’s train of thought.

 


The researchers say their module can be wired into any neural network that recognizes images. In one experiment, they connected it to a neural network trained to detect skin cancer in photos.
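
In that spirit, here is a hedged sketch of what wiring such a module into an off-the-shelf image classifier might look like: a dimension-preserving concept-aligned layer dropped into a torchvision ResNet-18 standing in for a skin-lesion model. The module, the concept names and the placement are illustrative assumptions, not the authors’ published code.

```python
# Sketch: swap one layer of an existing image classifier for a concept-aligned
# module. Everything here is a placeholder for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

class ConceptAlign(nn.Module):
    """Keeps the feature map the same size, but reserves channel k of its
    learned 1x1 rotation for the named concept concepts[k]."""

    def __init__(self, channels, concepts):
        super().__init__()
        self.concepts = concepts                            # e.g. channel 0 <-> "irregular border"
        self.rotate = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        with torch.no_grad():                               # start from an orthogonal rotation
            ortho = torch.empty(channels, channels)
            nn.init.orthogonal_(ortho)
            self.rotate.weight.copy_(ortho.view(channels, channels, 1, 1))

    def forward(self, x):
        return self.rotate(x)

model = models.resnet18(weights=None)                       # stand-in for a skin-lesion classifier
concepts = ["irregular border", "asymmetry", "dark pigment"]

# Swap one normalisation layer in the last block for the concept-aligned module.
model.layer4[1].bn2 = ConceptAlign(channels=512, concepts=concepts)

out = model(torch.rand(1, 3, 224, 224))                     # the network still produces class scores
print(out.shape)
```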

Before an AI can learn to spot melanoma, it must learn what makes melanomas look different from normal moles and other benign spots on your skin, by sifting through thousands of training images labelled and marked up by skin cancer experts.

But the network appeared to be summoning up a concept of “irregular border” that it formed on its own, without help from the training labels. The people annotating the images for use in artificial intelligence applications hadn’t made note of that feature, but the machine did.

 


“Our method revealed a shortcoming in the dataset,” Rudin said. Perhaps if they had included this information in the data, it would have made it clearer whether the model was reasoning correctly. “This example just illustrates why we shouldn’t put blind faith in black box models with no clue of what goes on inside them, especially for tricky medical diagnoses,” Rudin said.

The team’s work appeared Dec. 7 in the journal Nature Machine Intelligence.
