
Digital Mammograms Show Vulnerability to Cyberattacks

Two out of three radiologists couldn't tell that images were manipulated

CHICAGO – An artificial intelligence (AI) program was able to add malignant features to, and remove them from, digital mammography images, researchers reported, pointing to the vulnerability of machine-based image analysis to malicious hacking.

If such an AI program were deployed in the real world, healthy patients could be misdiagnosed as having cancer, or, more worrisome, actual malignant tumors could go undiagnosed.

In a study reported at the Radiological Society of North America (RSNA) annual meeting, images from cancer patients and healthy controls housed in two public databases were selected to "train a cycle-consistent generative adversarial network (CycleGAN) on mammographic data to (either) inject or remove features of malignancy, and to determine whether these AI-mediated attacks could be detected by radiologists," explained Anton Becker, MD, of University Hospital Zurich, and colleagues.
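
The study's code and trained models are not public, but the underlying technique can be sketched. Below is a minimal PyTorch illustration of one CycleGAN training step, with toy generators and discriminators and random tensors standing in for mammogram patches; the architecture, loss weighting, and data are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the CycleGAN objective (illustrative, not the study's code).
# Two generators map between domains A ("healthy") and B ("malignant-appearing");
# cycle consistency forces A -> B -> A to reproduce the original image.
import torch
import torch.nn as nn

def make_generator():
    # Toy fully convolutional generator; real CycleGANs use ResNet or U-Net backbones.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
    )

def make_discriminator():
    # Toy PatchGAN-style discriminator scoring local image patches.
    return nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, padding=1),
    )

G_AB, G_BA = make_generator(), make_generator()   # healthy -> malignant, malignant -> healthy
D_A, D_B = make_discriminator(), make_discriminator()

adv_loss = nn.MSELoss()     # least-squares GAN loss
cycle_loss = nn.L1Loss()    # cycle-consistency penalty

real_A = torch.randn(4, 1, 256, 256)  # stand-in for "healthy" image patches
real_B = torch.randn(4, 1, 256, 256)  # stand-in for "malignant" image patches

fake_B = G_AB(real_A)                 # inject malignant appearance
fake_A = G_BA(real_B)                 # remove malignant appearance
rec_A, rec_B = G_BA(fake_B), G_AB(fake_A)

# Generator objective: fool both discriminators while preserving cycle consistency.
pred_fake_B = D_B(fake_B)
pred_fake_A = D_A(fake_A)
loss_G = (adv_loss(pred_fake_B, torch.ones_like(pred_fake_B))
          + adv_loss(pred_fake_A, torch.ones_like(pred_fake_A))
          + 10.0 * (cycle_loss(rec_A, real_A) + cycle_loss(rec_B, real_B)))
loss_G.backward()
```

The cycle-consistency term is what allows the network to learn a reversible mapping between the two appearance classes from unpaired examples, which is why such a model can both "inject" and "remove" suspicious features.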

Out of the three radiologists reading the images, only one could discriminate between the original and modified images better than chance, and only slightly, with an area under the receiver-operating characteristic (ROC) curve of 0.66 (P=0.008).

Becker told MedPage Today that cyberattacks on hospitals that employ AI could start occurring in 5 to 10 years, as the software is rolled out and installed. He said that problems with AI corruption of imaging could also occur in prostate studies, liver scans, and brain scans. Patients could unknowingly have suspicious lesions appear on their medical exams, causing alarm and unnecessary resource utilization, he added.

The current study was intended to alert the medical and AI communities to the possible "dark side" of using deep machine learning programs, Becker said.

The authors ran two experiments, training the CycleGAN on low-resolution (256 by 256 pixels) and high-resolution (512 by 408 pixels) images.

The trio of radiologists read the images and rated the likelihood of malignancy on a scale of 1 to 5, as well as the likelihood that the image had been manipulated. The readings were evaluated with ROC analysis.
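
As an illustration of how such reader ratings translate into an ROC analysis, the sketch below computes an AUC from 1-to-5 suspicion scores using scikit-learn; the labels and ratings are invented, not study data.

```python
# Illustrative ROC analysis for a reader study (made-up ratings, not study data).
# Each image has a ground-truth label and a 1-5 suspicion rating from a reader;
# the AUC measures how well the ratings rank positive cases above negative ones.
from sklearn.metrics import roc_auc_score

truth   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = manipulated (or malignant), 0 = not
ratings = [4, 2, 5, 3, 1, 3, 4, 2, 2, 1]   # reader's 1-5 likelihood scores

print(f"AUC = {roc_auc_score(truth, ratings):.2f}")
# An AUC of 0.5 is chance performance; the 0.66 reported for the one reader who
# could tell reflects only a modest ability to separate original from modified images.
```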

The authors reported that, at the lower resolution, only one radiologist showed lower cancer detection (AUC 0.85 vs 0.63, P=0.06), while the other two were unaffected (AUC 0.67 vs 0.69 and AUC 0.75 vs 0.77, P=0.55).

At the higher resolution, all radiologists showed a significantly lower detection rate of cancer in the modified images (AUC 0.77-0.84 vs AUC 0.59-0.69, P=0.008). Additionally, they were able to reliably detect modified images owing to the better visibility of artifacts (AUCs of 0.92, 0.92, and 0.97), Becker's group noted.

"We found that 12%-25% of the images fooled the radiologists," Becker said.

"A CycleGAN can implicitly learn malignant features and then inject or remove them, so that a substantial proportion of small mammographic images would consequently be misdiagnosed," the authors concluded.

They also pointed out that, "At higher resolutions... the method is currently limited and has a clear trade-off between the manipulation of images and the introduction of artifacts."

In a separate RSNA study, Israeli researchers demonstrated a CT system hack.

"The CT modality consists of an ecosystem of components that communicate with each other within the CT's ecosystem," wrote Tom Mahler, a PhD candidate at the Ben Gurion University of the Negev in Beersheba, and colleagues. "As technology advances, the CT's ecosystem becomes more connected to the hospital's network and the Internet, exposing it to a variety of security vulnerabilities and threats to potential cyber-attacks."

They used a CT phantom to show a successful bypass of CT security protection mechanisms in order to manipulate the scanner's behavior. "The combination of ionizing radiation, potentially harmful to patients, and security vulnerabilities to cyber-attacks, results in possible dangerous scenarios that compromise a patient's safety," they noted.

Mahler told MedPage Today that one goal is to develop "solutions to prevent such attacks in order to protect medical devices. Our solution monitors the outgoing commands from the device before they are executed, and will alert and possibly halt those commands if abnormalities are detected."
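
Mahler's system is not described here in implementation detail; the sketch below only illustrates the general idea of gating outgoing scanner commands against expected parameter ranges before execution. The command fields, limits, and values are hypothetical, not drawn from the study.

```python
# Hypothetical sketch of command monitoring for an imaging device: inspect an
# outgoing scan command against expected bounds before the scanner executes it.
# Field names and safe ranges are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ScanCommand:
    tube_current_ma: float
    tube_voltage_kvp: float
    exposure_time_s: float

# Per-protocol ranges a monitor might learn from historical scans (invented values).
SAFE_RANGES = {
    "tube_current_ma": (10.0, 600.0),
    "tube_voltage_kvp": (80.0, 140.0),
    "exposure_time_s": (0.2, 2.0),
}

def check_command(cmd):
    """Return a list of anomaly messages; an empty list means the command looks normal."""
    alerts = []
    for field, (low, high) in SAFE_RANGES.items():
        value = getattr(cmd, field)
        if not low <= value <= high:
            alerts.append(f"{field}={value} outside expected range [{low}, {high}]")
    return alerts

# Example: a manipulated command requesting an implausibly high tube current.
cmd = ScanCommand(tube_current_ma=1200.0, tube_voltage_kvp=120.0, exposure_time_s=1.0)
alerts = check_command(cmd)
if alerts:
    # A real monitor could halt the command here rather than only warning.
    print("HALT scan:", "; ".join(alerts))
```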

They concluded that hacking CT systems is "no longer theoretical... This calls for an immediate improvement of [CT scanner security] and the further mitigation of risks to patients."

RSNA spokesperson Max Wintermark, MD, told MedPage Today, "You also have to be alert for the insertion of bias into the original artificial intelligence software, even without having ill intentions. Like any technology that we use, it is very important to think of the ethical aspects of its use as well."

"We should not be using technology blindly, but we must think of all of the aspects -- the positive ones and the negative ones -- so we can use the technology responsibly," said Wintermark, who is at Stanford University in California.

"The ease in which Dr. Becker [and colleagues] was able to manipulate the images is a little bit scary," Wintermark added. "However, I can tell you that from my experience in hospitals, the measures that are taken to protect patient information is formidable. Sometimes, it is hard for me to access a patient's image because there are so many layers of password security -- but I think that is a good thing."

Disclosures

Becker, Mahler, and Wintermark disclosed no relevant relationships with industry.

Primary Source

Radiological Society of North America

Becker A, et al "Injecting and removing malignant features in mammography with CycleGAN: Investigation of an automated adversarial attack using neural networks" RSNA 2018.

Secondary Source

Radiological Society of North America

Source Reference: Mahler T, et al "CTrl-Alt-Radiate?" RSNA 2018; Abstract SSJ13-05.