Deceptive tricks in artificial intelligence: adversarial attacks in ophthalmology

Status
VoR (Version of Record)
Authors
Zbrzezny, Agnieszka
Grzybowski, Andrzej
Date
2023-05-04
Journal title
Journal of Clinical Medicine
Issue
9
Volume
12
Pages
1-14
ISSN
2077-0383
Access date
2023-05-04
Abstract EN
Artificial intelligence (AI) systems for diagnosing ophthalmic diseases have progressed significantly in recent years. Diagnosing difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become considerably easier thanks to AI algorithms that now match ophthalmologists in effectiveness. However, when building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, and numerous articles have discussed the topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". A literature review was performed for this study, including a thorough search of open-access research papers via online sources (PubMed and Google). The research provides examples of attack strategies specific to medical images. Unfortunately, dedicated attack algorithms for the various ophthalmic image types have yet to be developed; this remains an open task. Consequently, it is necessary to build algorithms that validate the computations and explain the findings of AI models. In this article, we focus on adversarial attacks, one of the best-known attack methods, which produce evidence (i.e., adversarial examples) of the lack of resilience in decision models that carry no provable guarantees. Adversarial attacks can cause deep learning systems to yield inaccurate results and can have catastrophic effects in healthcare, such as healthcare financing fraud and misdiagnosis.
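The adversarial attacks discussed in the abstract can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known attack techniques. The toy linear "model", its weights, and the four-pixel "image" below are hypothetical stand-ins for a trained deep-learning classifier such as the ophthalmic image models discussed above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Return an adversarial copy of x: x + epsilon * sign(grad_x loss).

    The gradient of the binary cross-entropy loss of a logistic model
    with respect to the input x is (p - y_true) * w.
    """
    p = sigmoid(w @ x + b)          # model's confidence for class 1
    grad_x = (p - y_true) * w       # input gradient of the loss
    return x + epsilon * np.sign(grad_x)

# Hypothetical trained weights and a toy 4-pixel "image" (illustrative only).
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.2
x = np.array([0.6, 0.1, 0.8, 0.4])
y_true = 1.0

x_adv = fgsm_perturb(x, w, b, y_true, epsilon=0.3)
print(sigmoid(w @ x + b))       # confidence on the clean input (above 0.5)
print(sigmoid(w @ x_adv + b))   # confidence after the small perturbation (below 0.5)
```

The perturbation changes each pixel by at most 0.3, yet flips the decision across the 0.5 threshold; on real retinal images the same principle applies with perturbations small enough to be imperceptible to a clinician.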
Keywords EN
adversarial attacks
artificial intelligence
ophthalmology
License type
cc-by
Except as otherwise noted, this item is licensed under the Creative Commons Attribution (CC BY) licence.