Automated segmentation of retinal nonperfusion area in fluorescein angiography in retinal vein occlusion using convolutional neural networks.


Tang Z(1), Zhang X(2), Yang G(1), Zhang G(3)(4), Gong Y(1), Zhao K(1), Xie J(2), Hou J(2), Hou J(2), Sun B(2), Wang Z(1).
Author information:
(1)School of Electronic Science and Engineering, University of Electronic Science and Technology of China, No.4, Section 2, North Jianshe Road, Chengdu, Sichuan, 610054, China.
(2)Shanxi Eye Hospital, 100 Fudong Street, Taiyuan, Shanxi, 030002, China.
(3)Shanxi Intelligence Institute of Big Data Technology and Innovation, 529 South Zhonghuan Street, Taiyuan, Shanxi, 030000, China.
(4)Department of Computer Engineering, Taiyuan University, 18 South Dachang Street, Taiyuan, Shanxi, 030000, China.


PURPOSE: Retinal vein occlusion (RVO) is the second most common retinal vascular disease causing vision loss, after diabetic retinopathy. Retinal nonperfusion (RNP), which appears as hypofluorescent regions on fluorescein angiography (FA), is one of the most significant characteristics of RVO. Quantification of RNP is crucial for assessing the severity and progression of RVO; however, in current clinical practice it is mostly performed manually, which is time-consuming, subjective, and error-prone. The purpose of this study was to develop fully automated methods for segmentation of RNP using convolutional neural networks (CNNs).

METHODS: FA images from 161 patients were analyzed, and RNP areas were annotated by three independent physicians. The optimal way to use data labeled by multiple physicians to train the CNNs was evaluated. An adaptive histogram-based data augmentation method was used to boost CNN performance. CNN methods based on a context encoder module were developed for automated segmentation of RNP and compared with existing state-of-the-art methods.

RESULTS: The proposed methods achieved excellent agreement with the physicians for segmentation of RNP in FA images. CNN performance was improved significantly by the proposed adaptive histogram-based data augmentation method. Training the CNNs on labels averaged across the physicians achieved the best consensus with all physicians, with a mean accuracy of 0.883±0.166 under fivefold cross-validation.

CONCLUSIONS: We report CNN methods for segmenting RNP in RVO on FA images. Our work can help improve clinical workflow, and may be useful for further investigating the association between RNP and retinal disease progression, as well as for evaluating optimal treatments for the management of RVO.