Abstract
Owing to the invasiveness of diagnostic tests for anaemia and the costs
associated with screening for it, the condition is often undetected. Here, we show
that anaemia can be detected via machine-learning algorithms trained using retinal
fundus images, study participant metadata (including race or ethnicity, age, sex and
blood pressure) or the combination of both data types (images and study participant
metadata). In a validation dataset of 11,388 study participants from the UK Biobank,
the metadata-only, fundus-image-only and combined models predicted haemoglobin
concentration (in g dl⁻¹) with mean absolute error values
of 0.73 (95% confidence interval: 0.72-0.74), 0.67 (0.66-0.68) and 0.63 (0.62-0.64),
respectively, and with areas under the receiver operating characteristic curve (AUC)
values of 0.74 (0.71-0.76), 0.87 (0.85-0.89) and 0.88 (0.86-0.89), respectively. For
539 study participants with self-reported diabetes, the combined model predicted
haemoglobin concentration with a mean absolute error of 0.73 (0.68-0.78) and anaemia
with an AUC of 0.89 (0.85-0.93). Automated anaemia screening on the basis of fundus
images could particularly aid patients with diabetes undergoing regular retinal
imaging and for whom anaemia can increase morbidity and mortality risks.
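
For readers unfamiliar with the reported metrics, the following is a minimal sketch, assuming scikit-learn and synthetic placeholder data, of how mean absolute error and AUC with 95% confidence intervals can be computed. The variable names, the single anaemia threshold and the percentile-bootstrap CI procedure are illustrative assumptions, not the authors' evaluation pipeline.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, roc_auc_score

def bootstrap_ci(metric, y_true, y_score, n_boot=1000, seed=0):
    """Point estimate plus percentile-bootstrap 95% CI for a metric."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [
        metric(y_true[idx], y_score[idx])          # metric on each resample
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ]
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return metric(y_true, y_score), lo, hi

# Synthetic stand-ins for measured and predicted haemoglobin (g dl⁻¹).
rng = np.random.default_rng(0)
hb_true = rng.normal(14.0, 1.5, 500)
hb_pred = hb_true + rng.normal(0.0, 0.7, 500)

# Illustrative single anaemia threshold (anaemia criteria are sex-specific
# in practice); a lower predicted haemoglobin maps to a higher anaemia score.
anaemia_true = (hb_true < 12.0).astype(int)
anaemia_score = -hb_pred

mae, mae_lo, mae_hi = bootstrap_ci(mean_absolute_error, hb_true, hb_pred)
auc, auc_lo, auc_hi = bootstrap_ci(roc_auc_score, anaemia_true, anaemia_score)
print(f"MAE {mae:.2f} (95% CI {mae_lo:.2f}-{mae_hi:.2f})")
print(f"AUC {auc:.2f} (95% CI {auc_lo:.2f}-{auc_hi:.2f})")
```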