TY - JOUR
T1 - An embedded recurrent neural network-based model for endoscopic semantic segmentation
AU - Haithami, Mahmood
AU - Ahmed, Amr
AU - Liao, Iman Yi
AU - Jalab, Hamid
N1 - Publisher Copyright:
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2021/4/13
Y1 - 2021/4/13
N2 - Detecting cancers at an early stage would decrease the mortality rate. For instance, detecting all polyps during colonoscopy would increase the chances of a better prognosis. However, endoscopists face difficulties due to the heavy workload of analyzing endoscopic images. Hence, assisting endoscopists during screening would decrease the polyp miss rate. In this study, we propose a new deep learning segmentation model to segment polyps found in endoscopic images extracted during colonoscopy screening. The proposed model modifies the SegNet architecture to embed gated recurrent units (GRUs) within the convolution layers to collect contextual information. Therefore, both global and local information are extracted and propagated through all layers, leading to better segmentation performance than that of the state-of-the-art SegNet. Four experiments were conducted, and the proposed model achieved a better intersection over union (IoU) by 1.36%, 1.71%, and 1.47% on validation sets and by 0.24% on a test set, compared to the state-of-the-art SegNet.
AB - Detecting cancers at an early stage would decrease the mortality rate. For instance, detecting all polyps during colonoscopy would increase the chances of a better prognosis. However, endoscopists face difficulties due to the heavy workload of analyzing endoscopic images. Hence, assisting endoscopists during screening would decrease the polyp miss rate. In this study, we propose a new deep learning segmentation model to segment polyps found in endoscopic images extracted during colonoscopy screening. The proposed model modifies the SegNet architecture to embed gated recurrent units (GRUs) within the convolution layers to collect contextual information. Therefore, both global and local information are extracted and propagated through all layers, leading to better segmentation performance than that of the state-of-the-art SegNet. Four experiments were conducted, and the proposed model achieved a better intersection over union (IoU) by 1.36%, 1.71%, and 1.47% on validation sets and by 0.24% on a test set, compared to the state-of-the-art SegNet.
KW - Embedded RNN
KW - GRU
KW - Polyp Segmentation
KW - SegNet
UR - http://www.scopus.com/inward/record.url?scp=85108846020&partnerID=8YFLogxK
M3 - Conference proceeding article (ISSN)
AN - SCOPUS:85108846020
SN - 1613-0073
VL - 2886
SP - 59
EP - 68
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 3rd International Workshop and Challenge on Computer Vision in Endoscopy, EndoCV 2021
Y2 - 13 April 2021
ER -