Robust Detection and Classification of Longitudinal Changes in Color Retinal Fundus Images for Monitoring Diabetic Retinopathy [Electronic resource] / Harihar Narasimha-Iyer, Ali Can, Badrinath Roysam et al. // IEEE Transactions on Biomedical Engineering [Electronic resource]. – 2006. – No. 6. – Pp. 1084–1098
- Electronic version (pdf / 1.2 MB)
Usage statistics: Downloads: 8
Contained in:
IEEE Transactions on Biomedical Engineering [Electronic resource] : bulletin of the Institute of Radio Engineers. Vol. 53, No. 6 / IEEE Engineering in Medicine and Biology Group // IEEE Transactions on Biomedical Engineering. – USA, 2006
Abstract:
A fully automated approach is presented for robust detection and classification of changes in longitudinal time-series of color retinal fundus images of diabetic retinopathy. The method is robust to: 1) spatial variations in illumination resulting from instrument limitations and changes both within and between patient visits; 2) imaging artifacts such as dust particles; 3) outliers in the training data; 4) segmentation and alignment errors. Robustness to illumination variation is achieved by a novel iterative algorithm that estimates the reflectance of the retina, exploiting automatically extracted segmentations of the retinal vasculature, optic disk, fovea, and pathologies. Robustness to dust artifacts is achieved by exploiting their spectral characteristics, enabling application to film-based as well as digital imaging systems. False changes from alignment errors are minimized by subpixel-accuracy registration using a 12-parameter transformation that accounts for unknown retinal curvature and camera …
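The abstract describes the illumination-correction step only at a high level. As a rough illustration of the underlying idea (an image modeled as reflectance multiplied by a slowly varying illumination field, estimated only from pixels outside the vessel, optic-disk, fovea, and pathology segmentations), the Python sketch below fits a single quadratic illumination surface and divides it out. The single-pass polynomial fit, the function name estimate_reflectance, and its parameters are illustrative assumptions; the paper's actual algorithm is iterative and is not reproduced here.

```python
import numpy as np

def estimate_reflectance(image, exclude_mask):
    """Illustrative sketch (not the paper's algorithm): model
    image = reflectance * illumination, where the illumination field is a
    smooth quadratic surface fitted only to pixels outside exclude_mask
    (vessels, optic disk, fovea, pathologies)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx / w, yy / h                        # normalized coordinates
    basis = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)

    keep = ~exclude_mask                         # pixels assumed to be "normal" retina
    coeffs, *_ = np.linalg.lstsq(basis[keep], image[keep].astype(float), rcond=None)

    illumination = basis @ coeffs                # smooth illumination estimate, shape (h, w)
    return image / np.maximum(illumination, 1e-6)
```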
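The 12-parameter transformation used for registration is, in related retinal-registration work, a quadratic polynomial in the image coordinates (six coefficients per output coordinate), which can absorb the unknown retinal curvature and camera geometry. The sketch below shows such a model; the function names and the simple least-squares fit from matched landmarks are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def _quadratic_basis(points):
    """Basis [1, x, y, x^2, xy, y^2] for each 2-D point; shape (N, 6)."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_quadratic_transform(src, dst):
    """Least-squares fit of the 12 parameters (6 per output coordinate)
    from matched landmark pairs; at least 6 pairs are required."""
    B = _quadratic_basis(src)
    tx, *_ = np.linalg.lstsq(B, dst[:, 0], rcond=None)
    ty, *_ = np.linalg.lstsq(B, dst[:, 1], rcond=None)
    return np.concatenate([tx, ty])              # 12 parameters

def apply_quadratic_transform(points, theta):
    """Map points so that x' and y' are each quadratic polynomials in (x, y)."""
    B = _quadratic_basis(points)
    return np.stack([B @ theta[:6], B @ theta[6:]], axis=1)
```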