Abstract
Deep neural networks can model the nonlinear architecture of polygenic traits, yet it remains uncertain how reliably attribution methods identify the genetic variants driving model predictions. We introduce a benchmarking framework that quantifies three aspects of interpretability, attribution recall, attribution precision, and stability, and we apply it to deep learning models trained on UK Biobank genotypes for standing height prediction. After quality control, feed-forward neural networks were trained on more than half a million autosomal variants from approximately 300,000 participants and evaluated with four attribution algorithms (Saliency, Gradient SHAP, DeepLIFT, and Integrated Gradients), each with and without SmoothGrad noise averaging. Attribution recall was assessed using synthetic spike-in variants with known additive, dominant, recessive, and epistatic effects, enabling direct measurement of sensitivity to diverse genetic architectures. Attribution precision, an estimate of specificity, was assessed using an equal number of null decoy variants that preserved allele structure while disrupting genotype-phenotype correspondence. Stability was measured as the consistency of variant-level attributions across an ensemble of independently trained models. SmoothGrad increased average recall across effect types by approximately 0.16 among the top 1% of most highly attributed variants and improved average precision by about 0.06 at the same threshold, while stability remained comparable across methods, with median relative standard deviations of 0.4 to 0.5. Among the evaluated attribution methods, Saliency achieved the highest composite score, indicating that its simple gradient formulation provided the best overall balance of recall, precision, and stability.