On the assessment of robustness of telemedicine applications against adversarial machine learning attacks
Document Type
Book Chapter
Publication Date
2021
Department/School
Information Security and Applied Computing
Publication Title
Advances and trends in artificial intelligence. Artificial intelligence practices
Abstract
Telemedicine applications have recently evolved to allow patients in underdeveloped areas to receive medical services. Meanwhile, machine learning (ML) techniques have been widely adopted in such telemedicine applications to help in disease diagnosis. The performance of these ML techniques, however, is limited by the fact that attackers can manipulate clean data to fool the model classifier and undermine the trustworthiness and robustness of these models. For instance, under attack, a benign sample can be treated as a malicious one by the classifier and vice versa. Motivated by this, this paper explores this issue for telemedicine applications. In particular, it studies the impact of adversarial attacks on a mammographic image classifier. First, mammographic images are used to train and evaluate the accuracy of the proposed model. The original dataset is then poisoned to generate adversarial samples that can mislead the model. The structural similarity index (SSIM) is used to quantify the similarity between clean images and their adversarial counterparts. Results show that adversarial attacks can severely fool the ML model.
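The abstract does not give the authors' implementation details, but a minimal sketch of the SSIM comparison step it describes is shown below, assuming grayscale images stored as same-shape NumPy arrays with pixel values in [0, 1]; the small sign-based perturbation used here is only a hypothetical stand-in for the paper's actual adversarial samples.

import numpy as np
from skimage.metrics import structural_similarity

def ssim_between(clean: np.ndarray, adversarial: np.ndarray) -> float:
    """Return the SSIM score between a clean image and its perturbed copy.

    Values near 1.0 mean the adversarial image is visually almost
    indistinguishable from the clean one, even if it fools the classifier.
    """
    return structural_similarity(clean, adversarial, data_range=1.0)

# Illustrative usage with a synthetic image and a small perturbation
# (hypothetical data, not the mammographic dataset from the paper).
clean = np.random.rand(224, 224)
adversarial = np.clip(clean + 0.01 * np.sign(np.random.randn(224, 224)), 0.0, 1.0)
print(f"SSIM(clean, adversarial) = {ssim_between(clean, adversarial):.4f}")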
ISBN
9783030794569
Recommended Citation
Yilmaz, I., Baza, M., Amer, R., Rasheed, A., Amsaad, F., & Morsi, R. (2021). On the assessment of robustness of telemedicine applications against adversarial machine learning attacks. In H. Fujita, A. Selamat, J. C.-W. Lin, & M. Ali (Eds.), Advances and trends in artificial intelligence. Artificial intelligence practices (Vol. 12798, pp. 519–529). Springer International Publishing. https://doi.org/10.1007/978-3-030-79457-6_44