TY - JOUR
T1 - On the role of deep learning model complexity in adversarial robustness for medical images
AU - Rodriguez, David
AU - Nayak, Tapsya
AU - Chen, Yidong
AU - Krishnan, Ram
AU - Huang, Yufei
N1 - Publisher Copyright:
© 2022, The Author(s).
PY - 2022/12
Y1 - 2022/12
AB - Background: Deep learning (DL) models are highly vulnerable to adversarial attacks in medical image classification. An adversary can modify input data in imperceptible ways so that a model is tricked into, say, classifying an image that actually exhibits a malignant tumor as benign. However, the adversarial robustness of DL models for medical images has not been adequately studied. DL in medicine is inundated with models of varying complexity, particularly very large models. In this work, we investigate the role of model complexity in adversarial settings. Results: Consider a set of DL models that exhibit similar performance on a given task and are trained in the usual manner, without any defense against adversarial attacks. We demonstrate that, among these models, simpler models of reduced complexity show a greater level of robustness against adversarial attacks than the larger models often used in medical applications. We also show that once these models undergo adversarial training, the adversarially trained medical image DL models exhibit a greater degree of robustness than their standard-trained counterparts across all model complexities. Conclusion: These results have significant practical relevance. When medical practitioners lack the expertise or resources to defend against adversarial attacks, we recommend that they select the smallest model that exhibits adequate performance; such a model is naturally more robust to adversarial attacks than larger models.
KW - Adversarial attacks
KW - Adversarial robustness
KW - Medical image classification
KW - Model complexity
KW - Perturbation
UR - http://www.scopus.com/inward/record.url?scp=85132281333&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132281333&partnerID=8YFLogxK
U2 - 10.1186/s12911-022-01891-w
DO - 10.1186/s12911-022-01891-w
M3 - Article
C2 - 35725429
AN - SCOPUS:85132281333
SN - 1472-6947
VL - 22
JO - BMC Medical Informatics and Decision Making
JF - BMC Medical Informatics and Decision Making
M1 - 160
ER -