TY - JOUR
T1 - The adoption of deep learning interpretability techniques on diabetic retinopathy analysis: a review
AU - Lim, Wei Xiang
AU - Chen, Zhi Yuan
AU - Ahmed, Amr
N1 - Publisher Copyright:
© 2021, International Federation for Medical and Biological Engineering.
PY - 2022/3/1
Y1 - 2022/3/1
AB - Diabetic retinopathy (DR) is a chronic eye condition whose incidence is growing rapidly with the rising prevalence of diabetes. A shortage of ophthalmologists, healthcare resources, and facilities leaves many patients without appropriate eye screening services. Deep learning (DL) therefore has the potential to play a critical role as a powerful automated diagnostic tool in ophthalmology, particularly for the early detection of DR compared with traditional detection techniques. Despite their wide adoption, DL models remain black boxes: they offer no explanation of how representations are learned or why a particular prediction is made. This opacity makes it difficult for intended end-users such as ophthalmologists to understand how the models function, hindering their acceptance for clinical use. Recently, several studies have been published on the interpretability of DL methods applied to DR-related tasks such as DR classification and segmentation. The goal of this paper is to provide a detailed overview of the interpretability strategies used in these tasks, together with the authors’ insights and future directions to help the research community address open research problems. Graphical abstract: [Figure not available: see fulltext.]
KW - Deep learning
KW - Diabetic retinopathy
KW - Interpretability
KW - Review
UR - http://www.scopus.com/inward/record.url?scp=85123621390&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123621390&partnerID=8YFLogxK
DO - 10.1007/s11517-021-02487-8
M3 - Review article
AN - SCOPUS:85123621390
SN - 0140-0118
VL - 60
SP - 633
EP - 642
JO - Medical and Biological Engineering and Computing
JF - Medical and Biological Engineering and Computing
IS - 3
ER -