
Simple pixel hack cripples state-of-the-art AI medical imaging systems

WHY THIS MATTERS IN BRIEF

As AI becomes increasingly pervasive in healthcare, companies need to be aware that a simple new attack on their AI imaging and diagnostic systems could cause untold chaos, and even result in deaths.

 

Recently we’ve seen how a duelling AI that adjusted just a pixel or two in a selfie defeated state-of-the-art facial recognition systems. On the one hand, these kinds of hacks could be used to let terrorists evade border security; in another industry, healthcare, it turns out the same hacks can turn the powerful Artificial Intelligence (AI) systems meant to analyse medical images such as X-rays into worthless binary scrap. And there may be enormous incentives to carry out such attacks for healthcare fraud and other nefarious ends, say the researchers who have just successfully demonstrated them.

 


 

“The most striking thing to me as a researcher crafting these attacks was probably how easy they were to carry out,” says study lead author Samuel Finlayson, a computer scientist and biomedical informatician at Harvard Medical School in Boston. “This was in practice a relatively simple process that could easily be automated.”

“In addition to how easy they are to carry out, I was also surprised by how relatively unknown these weaknesses are to the medical community,” says study co-author Andrew Beam, one of Finlayson’s colleagues. “There is a lot of coverage about how accurate deep learning can be in medical imaging, but there is a dearth of understanding about potential weaknesses and security issues.”

 

Figure: the percentages listed represent the probability the model assigned to each image showing evidence of a disease. Green tags indicate the model’s analysis was correct; red tags indicate it was incorrect.

 

AI systems known as deep learning neural networks are increasingly helping to analyse medical images. Recently, for example, AI has become better than doctors at diagnosing lung diseases, as well as skin cancer and other conditions.

Then, in April, the US Food and Drug Administration (FDA) announced the approval of the first AI system that can be used for medical diagnosis without the input of a human clinician. Given the costs of healthcare in the US, one might imagine that AI could help make medical imaging cheaper by taking humans out of the loop, say Finlayson, Beam, and their colleagues in the study.

“We as a society stand to receive enormous benefit from the deliberate application of machine learning in healthcare,” Finlayson says. “However, as we integrate these incredible tools into the healthcare system, we need to be acutely aware of their potential downsides as well.”

 


 

The researchers examined how difficult it is to fool medical image analysis software. Computer scientists regularly test deep learning systems with so-called “adversarial examples”, inputs deliberately crafted to make the AIs misclassify them, in order to probe the limitations of current deep learning methods.

The scientists note there may be major incentives to attack medical image analysis software. The healthcare economy is huge, with the US alone spending roughly $3.3 trillion, or 17.8 percent of GDP, on healthcare in 2016, and medical fraud is already routine. One 2014 study estimated medical fraud cost as much as $272 billion in 2011.

In the new study, the researchers tested deep learning systems with adversarial examples on three popular medical imaging tasks: classifying diabetic retinopathy from retinal images, pneumothorax from chest X-rays, and melanoma from skin photos. In such attacks, pixels are modified in ways that look like nothing more than a little noise to a human, but that trick the systems into classifying the pictures incorrectly.

The scientists note their attacks could make deep learning systems misclassify images up to 100 percent of the time, and that the modified images were indistinguishable from the originals to the human eye. They add that such attacks could work on any image, and could even be incorporated directly into the image-capture process.
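For readers curious about the mechanics, the short sketch below shows one of the simplest published recipes for this kind of perturbation, the fast gradient sign method, written in PyTorch. It is a generic illustration rather than the exact attack pipeline used in the study, and the names involved, such as model and epsilon, are placeholders: epsilon caps how far any single pixel is allowed to move, which is what keeps the change effectively invisible to the human eye.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Nudge every pixel by at most `epsilon` in the direction that
    increases the classifier's loss on the correct label."""
    # Work on a copy of the input and track gradients with respect to its pixels
    image = image.clone().detach().requires_grad_(True)

    # Loss of the model's prediction against the true label
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()

    # Step each pixel by +/- epsilon along the sign of its gradient,
    # then clamp back to the valid pixel range [0, 1]
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a small epsilon the perturbed image looks identical to the original, yet the classifier’s output can flip entirely, which is the effect the researchers describe.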

 


 

“One criticism that we have received is that if someone has access to the underlying data, then they could commit many different kinds of fraud, not just using adversarial attacks,” Beam says. “This is true, but we feel that adversarial attacks are particularly pernicious and subtle, because it would be very difficult to detect that the attack has occurred.”

There are many possible reasons deep learning systems might be attacked for medical fraud, the researchers say. With eye images, they note, insurers might want to reduce the rate of surgeries they have to pay for. With chest X-rays, companies running clinical trials might want to get the results they want, given that one 2017 study estimated the median revenue across individual cancer drugs was as high as $1.67 billion four years after approval. With skin photos, the researchers note that dermatology in the US operates under a model in which a physician or practice is paid for the procedures they perform, leading some dermatologists to carry out a huge number of unnecessary procedures to boost revenue.

Such attacks might also be carried out to sabotage the test results of patients so they do not get the treatment they need.

“However, medical fraud is much more pervasive than medical sabotage, and we expect this will likely remain the case even as technology advances,” Finlayson says. “Deep learning may be a new technology, but the humans who use it, for good or ill, are driven by the same motivations we always have been, and greed is sadly a fairly universal vice.”

 


 

Finlayson notes that “computer scientists are working hard to build machine learning models that aren’t susceptible to adversarial attacks in the first place. This is a promising area of research, but it has yet to deliver a golden bullet; we have still yet to see a model that is both highly accurate and highly resistant to attacks.”
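One of the most widely studied ideas in that area of research is adversarial training, in which the model is shown perturbed examples during training so it learns to resist them. The sketch below is a minimal illustration of that idea in PyTorch, not something drawn from the study itself; model, optimizer and epsilon are placeholder names.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, images, labels, optimizer, epsilon=0.01):
    """One training update on adversarially perturbed copies of the batch."""
    # Craft FGSM-style perturbed inputs, as in the attack sketch above
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Standard supervised update, but computed on the perturbed batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```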

Another way to defend against these kinds of attacks is to shore up medical infrastructure.

“We can work on building medical IT systems that carefully track medical images and ensure that they aren’t being manipulated,” Finlayson says. “Even basic measures to implement these sorts of infrastructural defenses could do a lot to prevent adversarial attacks, but I don’t think there is a simple golden bullet on this end either, since new types of adversarial attacks are being discovered every day.”
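One simple building block for that kind of infrastructural defence is recording a cryptographic fingerprint of every image at capture time and checking it again before the image reaches a diagnostic model, so any post-capture tampering becomes detectable. The sketch below illustrates the idea; the file name and where the reference digest would be stored are assumptions.

```python
import hashlib

def fingerprint(image_path: str) -> str:
    """Return the SHA-256 digest of an image file's raw bytes."""
    with open(image_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify(image_path: str, recorded_digest: str) -> bool:
    """True only if the image bytes are unchanged since the digest was recorded."""
    return fingerprint(image_path) == recorded_digest

# At acquisition: store fingerprint("scan_001.dcm") somewhere tamper-resistant.
# Before analysis: if not verify("scan_001.dcm", stored_digest), reject the scan.
```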

“The greatest tragedy in my mind would be if someone took the existence of adversarial examples as proof that machine learning shouldn’t be developed or used in healthcare,” Finlayson says. “All of us on this paper are extremely bullish on deep learning for healthcare. We just think that it’s important to be aware of how these systems could be abused and to safeguard against this abuse in advance.”
