Abstract
Marginalized graph kernels have shown competitive performance in molecular machine learning tasks but currently lack measures of interpretability, which are important for improving trust in the models, detecting biases, and informing molecular optimization campaigns. Here we conceive and implement two interpretability measures for Gaussian process regression using a marginalized graph kernel (GPR-MGK) to quantify (1) the contribution of specific training data to the prediction and (2) the contribution of specific nodes of the graph to the prediction. We demonstrate the applicability of these interpretability measures for molecular property prediction. We compare GPR-MGK to graph neural networks on four logic datasets and find that the atomic attribution of GPR-MGK generally outperforms the atomic attribution of graph neural networks. We also perform a detailed molecular attribution analysis using the FreeSolv dataset, showing how molecules in the training set influence machine learning predictions and why Morgan fingerprints perform poorly on this dataset. This is the first systematic examination of the interpretability of GPR-MGK and thereby an important step in the further maturation of marginalized graph kernel methods for interpretable molecular predictions.
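To make measure (1) concrete: in standard GPR the predictive mean is a linear combination of the training targets, so the prediction for a query molecule can be decomposed exactly into one additive term per training molecule. The sketch below illustrates this decomposition under the usual GPR predictive-mean formula; the function name, the noise parameter sigma2, and the toy kernel values are hypothetical illustrations, not the paper's implementation.

import numpy as np

def training_set_contributions(K_train, k_star, y_train, sigma2=1e-2):
    # GPR predictive mean: y* = k_*^T (K + sigma^2 I)^{-1} y
    #                         = sum_i k(x*, x_i) * alpha_i
    # Each term of this sum is the contribution of one training molecule.
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + sigma2 * np.eye(n), y_train)
    return k_star * alpha  # shape (n,): per-training-molecule contribution

# Toy usage: three training molecules, one query molecule.
K = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])       # kernel matrix over the training set
k_star = np.array([0.8, 0.4, 0.05])  # similarity of the query to each training molecule
y = np.array([-2.0, -1.0, 3.0])      # training targets (e.g., hydration free energies)
c = training_set_contributions(K, k_star, y)
print(c, c.sum())                    # contributions; their sum is the predictive mean

Because the decomposition is exact, the contributions sum to the prediction itself, which is what allows the kind of molecular attribution analysis described for the FreeSolv dataset.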
Supplementary materials
Supplementary Information: Supplementary Discussion, Tables, and Figures