Abstract
Forward and retrosynthetic organic reaction prediction are challenging applications of artificial intelligence (AI) research in chemistry. IBM’s freely available RXN for Chemistry (https://rxn.res.ibm.com) treats reaction prediction as a translation problem, using transformer-based machine learning models trained on patent data to convert sequences of reactant SMILES strings into product strings. Here we characterize the performance of the transformer models on 100 undergraduate textbook problems to expose reaction classes where the fundamentals of organic chemistry are violated. The forward prediction model is generally successful in predicting outcomes for substitution reactions but unsuccessful for elimination and organocopper reactions. For the retrosynthesis model, we found characteristic examples of a lack of atom conservation and of nonsensical chemical transformations. We also compared the differences in molecular complexity and synthetic accessibility between predicted and literature reactions to probe how AI plans reactions compared with humans. Forward predictions reproduced a distribution of molecular-complexity differences similar to that of the human reactions from the literature, whereas retrosynthetic predictions showed both positive and negative deviations from literature complexity. Finally, we analyzed the atom mapping in test reactions to expose errors in how the model identifies reactive atoms and species.
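The atom-conservation and synthetic-accessibility checks summarized above can be illustrated with a short RDKit sketch. The snippet below is not the authors' analysis code; it assumes RDKit (with its Contrib SA_Score module) is installed, uses a hypothetical retrosynthetic prediction as input, and flags elements of the target molecule that are missing from the predicted precursors while comparing synthetic accessibility (SA) scores.

```python
# Illustrative sketch only (not the authors' code). Assumes RDKit is installed;
# the SA_Score module ships in RDKit's Contrib directory. The target and
# predicted precursors below are hypothetical placeholders.
import os
import sys
from collections import Counter

from rdkit import Chem
from rdkit.Chem import RDConfig

# Standard way to import the synthetic accessibility (SA) scorer from RDKit Contrib.
sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer


def element_counts(smiles_list):
    """Count heavy atoms by element across a list of SMILES strings."""
    counts = Counter()
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue  # skip unparsable SMILES rather than crash on bad predictions
        for atom in mol.GetAtoms():
            counts[atom.GetSymbol()] += 1
    return counts


def check_retro_prediction(target_smiles, precursor_smiles):
    """Flag atom-conservation failures and report an SA-score difference.

    Every heavy atom in the target must be supplied by some precursor, so
    elements that are under-represented in the precursors indicate the kind
    of atom-conservation failure discussed in the text.
    """
    target_atoms = element_counts([target_smiles])
    precursor_atoms = element_counts(precursor_smiles)
    missing = {el: n - precursor_atoms[el]
               for el, n in target_atoms.items()
               if precursor_atoms[el] < n}

    target_sa = sascorer.calculateScore(Chem.MolFromSmiles(target_smiles))
    precursor_sa = max(sascorer.calculateScore(Chem.MolFromSmiles(s))
                       for s in precursor_smiles)
    return missing, target_sa - precursor_sa


# Hypothetical example: a nitrile target with a predicted precursor set that
# lacks any nitrogen source, which this check flags as {'N': 1}.
print(check_retro_prediction("CCC#N", ["CCBr", "CCO"]))
```

Such per-reaction differences (here, target SA minus the highest precursor SA) are the kind of quantity whose distributions are compared between predicted and literature reactions in the text; the paper's own complexity metrics may differ from this simple proxy.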
Supplementary materials
Supporting Information
Description of columns in linked files; Supporting Figures referred to in the text.
Supplementary weblinks
Repository of test set data and analysis codes used to generate the results in this work.