Abstract
This study is a proof of concept that the human invention of a named reaction can be reproduced by a transformer model operating in a zero-shot learning setting.
While state-of-the-art reaction prediction machine learning models can predict chemical reactions through transfer learning from thousands of training samples of the same reaction types as those to be predicted, how to prepare models to predict truly "unseen" reactions remains an open question. We aim to equip the transformer model with the ability to predict unseen reactions following the concept of "zero-shot learning". To determine what kind of auxiliary information is needed, we reproduce the human invention of the Chan-Lam coupling reaction, whose inventors were inspired by two existing reactions: the Suzuki reaction and Barton's bismuth arylation reaction. After training on samples from these two reactions together with the USPTO dataset, the transformer model predicts the Chan-Lam coupling reaction with 55.7% top-1 accuracy, a substantial improvement over the 17.2% achieved by the model trained on the USPTO dataset alone. Our model also mimics the later stage of this history, in which the initial case of the Chan-Lam coupling reaction was generalized to a wide range of reactants and reagents, via a "one-shot learning" approach. The results of this study show that existing reactions used as auxiliary information can help the transformer predict unseen reactions, and that providing just one or a few samples of the unseen reaction can boost the model's generalization ability.
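As a rough illustration of the setup summarized above (a sketch, not the authors' code), the following Python snippet shows how one might assemble a training set that mixes the general USPTO corpus with auxiliary Suzuki and Barton samples, and then score top-1 accuracy on held-out Chan-Lam reactions by exact match of canonicalized product SMILES. The data iterables and the `predict` callable are hypothetical placeholders.

```python
# Minimal sketch (assumed pipeline, not the authors' released code):
# mix the USPTO corpus with auxiliary Suzuki / Barton samples and score
# top-1 accuracy on Chan-Lam test reactions by canonical-SMILES match.
from typing import Callable, Iterable, List, Tuple
from rdkit import Chem  # used only to canonicalize SMILES for exact-match scoring


def canonical(smiles: str) -> str:
    """Return a canonical SMILES string, or the input unchanged if parsing fails."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else smiles


def build_training_set(
    uspto: Iterable[Tuple[str, str]],
    auxiliary: Iterable[Tuple[str, str]],
) -> List[Tuple[str, str]]:
    """Concatenate (reactants, product) SMILES pairs from USPTO with the
    auxiliary Suzuki / Barton samples that supply the zero-shot signal."""
    return list(uspto) + list(auxiliary)


def top1_accuracy(
    predict: Callable[[str], str],  # hypothetical trained transformer wrapper
    test_set: Iterable[Tuple[str, str]],
) -> float:
    """Fraction of test reactions whose top-1 predicted product matches
    the reference product after canonicalization."""
    pairs = list(test_set)
    hits = sum(
        canonical(predict(reactants)) == canonical(product)
        for reactants, product in pairs
    )
    return hits / len(pairs)
```

The same scoring function would apply to the one-shot setting, with the training set extended by one or a few Chan-Lam examples before re-evaluation.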
Supplementary materials
Supporting Information: Reproducing the Invention of a Named Reaction: Predicting Unseen Chemical Reactions via Zero-shot Learning