Abstract
Activation energy characterization of competing reactions is a costly but crucial step for understanding the kinetic relevance of distinct reaction pathways, product yields, and myriad other properties of reacting systems. The standard methodology for activation energy characterization has historically been a transition state search at the highest level of theory that can be afforded. Recently, however, several groups have popularized the idea of predicting activation energies directly, based on nothing more than the reactant and product graphs, a sufficiently complex neural network, and a broad enough dataset. Here, we revisit this task using the recently developed Reaction Graph Depth 1 (RGD1) transition state dataset and several newly developed graph attention architectures. All of these new architectures achieve similar state-of-the-art results of ~4 kcal/mol mean absolute error on withheld testing sets of reactions but perform poorly on external testing sets composed of reactions with differing mechanisms, reaction molecularity, or reactant size distribution. A series of case studies shows that this limited transferability is shared by other contemporary graph-to-activation-energy architectures. We conclude that an array of standard graph architectures can already achieve results comparable to the irreducible error of available reaction datasets but that out-of-distribution performance remains poor.
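To make the graph-to-activation-energy framing concrete, the following is a minimal illustrative sketch (not the paper's actual models) of how a shared graph attention encoder could map reactant and product graphs to a single activation-energy estimate. It assumes PyTorch Geometric is available and that node features are precomputed; the class name EaPredictor, the feature dimensions, and the pooling/readout choices are illustrative assumptions rather than details from the study.

```python
import torch
from torch import nn
from torch_geometric.nn import GATConv, global_mean_pool


class EaPredictor(nn.Module):
    """Hypothetical graph-attention model: (reactant graph, product graph) -> Ea."""

    def __init__(self, node_dim: int = 32, hidden: int = 64, heads: int = 4):
        super().__init__()
        # Shared attention encoder applied to both the reactant and product graphs.
        self.gat1 = GATConv(node_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        # Readout over the concatenated reactant/product embeddings.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode(self, x, edge_index, batch):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return global_mean_pool(h, batch)  # one embedding vector per graph

    def forward(self, reactant, product):
        # `reactant` and `product` are torch_geometric Batch objects.
        zr = self.encode(reactant.x, reactant.edge_index, reactant.batch)
        zp = self.encode(product.x, product.edge_index, product.batch)
        return self.head(torch.cat([zr, zp], dim=-1)).squeeze(-1)  # predicted Ea (e.g., kcal/mol)
```

Such a model would typically be trained with a mean-absolute-error loss against reference barriers, which matches the MAE metric reported above; the actual architectures evaluated in the paper may differ substantially from this sketch.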
Supplementary materials
Supplementary Information: contains additional results referenced in the main text.