RT Journal Article
SR Electronic
T1 Assessing performance of pathogenicity predictors using clinically relevant variant datasets
JF Journal of Medical Genetics
JO J Med Genet
FD BMJ Publishing Group Ltd
SP jmedgenet-2020-107003
DO 10.1136/jmedgenet-2020-107003
A1 Adam C Gunning
A1 Verity Fryer
A1 James Fasham
A1 Andrew H Crosby
A1 Sian Ellard
A1 Emma L Baple
A1 Caroline F Wright
YR 2020
UL http://jmg.bmj.com/content/early/2020/08/25/jmedgenet-2020-107003.abstract
AB Background Pathogenicity predictors are integral to genomic variant interpretation but, despite their widespread usage, an independent validation of performance using a clinically relevant dataset has not been undertaken. Methods We derive two validation datasets: an ‘open’ dataset containing variants extracted from publicly available databases, similar to those commonly applied in previous benchmarking exercises, and a ‘clinically representative’ dataset containing variants identified through research/diagnostic exome and panel sequencing. Using these datasets, we evaluate the performance of three recent meta-predictors, REVEL, GAVIN and ClinPred, and compare their performance against two commonly used in silico tools, SIFT and PolyPhen-2. Results Although the newer meta-predictors outperform the older tools, the performance of all pathogenicity predictors is substantially lower in the clinically representative dataset. Using our clinically relevant dataset, REVEL performed best, with an area under the receiver operating characteristic curve of 0.82. A concordance-based approach relying on a consensus of multiple tools reduces performance, owing both to discordance between tools and to false concordance, where tools make common misclassifications. Analysis of tool feature usage may give insight into tool performance and misclassification. Conclusion Our results support the adoption of meta-predictors over traditional in silico tools, but do not support the consensus-based approach used in current practice.
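The headline AUC figures in the abstract (e.g. REVEL's 0.82) come from standard ROC analysis of predictor scores against pathogenic/benign variant labels. As a minimal, self-contained sketch of that computation, here is a plain-Python ROC AUC via the Mann-Whitney rank formulation; the labels and scores below are entirely synthetic illustration data, not values from the study:

```python
# Hypothetical illustration: ROC AUC for a pathogenicity predictor's
# scores against known benign (0) / pathogenic (1) labels.
# All labels/scores here are synthetic, not from the paper.

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic (ties get average rank)."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    ranks = [0.0] * n
    i = 0
    while i < n:
        # Find the run of tied scores and assign the average 1-based rank.
        j = i
        while j + 1 < n and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    n_pos = sum(lab for _, lab in pairs)
    n_neg = n - n_pos
    rank_sum_pos = sum(r for r, (_, lab) in zip(ranks, pairs) if lab == 1)
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic example: higher score = predicted more likely pathogenic.
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.2, 0.1, 0.4, 0.3]
print(round(roc_auc(labels, scores), 3))  # → 0.875
```

An AUC of 1.0 would mean the predictor ranks every pathogenic variant above every benign one; 0.5 is chance-level ranking, which is why the drop seen on the clinically representative dataset matters.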