Title: | Color Fundus Photography and Deep Learning Applications in Alzheimer's Disease |
Journal: | Mayo Clinic Proceedings Digital Health |
Published: | 1 Aug 2024 |
DOI: | https://doi.org/10.1016/j.mcpdig.2024.08.005 |
Objective: To report the development and performance of two distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer's disease (AD).

Patients and Methods: Two independent datasets of good-quality retinal photographs derived from subjects with AD and controls (UK Biobank and our tertiary academic institution) were used to build two deep learning models between April 1, 2021, and January 30, 2024. ADVAS is a U-Net-based architecture that employs retinal vessel segmentation. ADRET is a BERT-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs from the UK Biobank. The models' performance in distinguishing AD from non-AD was assessed using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated attention heatmaps were analyzed for distinctive features.

Results: The self-supervised ADRET model achieved higher accuracy than ADVAS in both the UK Biobank (98.27% vs 77.20%, P<0.001) and our institutional testing datasets (98.90% vs 94.17%, P=0.04). No major differences were noted between the original and binary vessel segmentation, or between both-eyes and single-eye models. Attention heatmaps obtained from AD subjects highlighted regions surrounding small vascular branches as the areas most relevant to model decision-making.

Conclusion: A BERT-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs alone can screen for symptomatic AD with high accuracy, outperforming U-Net-pretrained models. Translation into clinical practice will require further validation in larger and more diverse populations, together with integrated techniques to harmonize fundus photographs and attenuate imaging-associated noise.
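As an illustrative aside, the core idea behind a BERT-style self-supervised model such as ADRET is masked-image pretraining: patches of each fundus photograph are hidden and the network learns to reconstruct them, so the encoder acquires useful retinal representations without labels. The sketch below shows this pattern in minimal PyTorch form; the architecture, masking scheme, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of BERT-style masked-image pretraining for a CNN,
# loosely analogous to the ADRET approach described in the abstract.
# All layer sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedFundusAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder: 3-channel fundus photo -> feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Lightweight decoder that reconstructs the masked pixels.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x, mask):
        # Zero out masked patches before encoding, as in masked modeling.
        corrupted = x * (1 - mask)
        return self.decoder(self.encoder(corrupted))

def random_patch_mask(x, patch=16, ratio=0.4):
    """Mask roughly `ratio` of non-overlapping patch x patch regions."""
    b, _, h, w = x.shape
    grid = torch.rand(b, 1, h // patch, w // patch) < ratio
    return grid.float().repeat_interleave(patch, 2).repeat_interleave(patch, 3)

# One pretraining step: reconstruction loss on the masked regions only.
model = MaskedFundusAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(8, 3, 224, 224)  # stand-in fundus photo batch
mask = random_patch_mask(images)
recon = model(images, mask)
loss = ((recon - images) ** 2 * mask).sum() / mask.sum().clamp(min=1)
loss.backward()
opt.step()
```

After pretraining, the encoder would typically be fine-tuned with a small classification head on labeled AD-versus-control photographs.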
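The abstract also reports performance as mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. For readers unfamiliar with how these relate, the short sketch below derives all four from a confusion matrix using scikit-learn; the labels and probabilities are placeholder values, not study data.

```python
# Illustrative computation of the reported evaluation metrics
# (accuracy, sensitivity, specificity, ROC AUC). Placeholder data only.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = AD, 0 = control
y_prob = np.array([0.91, 0.12, 0.78, 0.65, 0.30, 0.08, 0.88, 0.45])
y_pred = (y_prob >= 0.5).astype(int)          # threshold the model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, y_prob)  # area under the ROC curve
print(f"acc={accuracy:.2f} sens={sensitivity:.2f} "
      f"spec={specificity:.2f} auc={auc:.2f}")
```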