Abstract
A longstanding project of the chemical kinetics community is to predict reaction rates and the behavior of reacting systems, even for systems where there are no experimental data. Many important reacting systems (atmosphere, combustion, pyrolysis, partial oxidations) involve a large number of reactions occurring simultaneously, and reaction intermediates that have never been observed, making this goal even more challenging. Improvements in our ability to compute rate coefficients and other important parameters accurately from first principles, and improvements in automated kinetic modeling software, have partially overcome many of these challenges. Indeed, in some cases quite complicated kinetic models have been constructed that accurately predicted the results of independent experiments. However, the process of constructing the models, and of deciding which reactions to measure or compute ab initio, relies on accurate estimates (and indeed most of the numerical rate parameters in most large kinetic models are estimates). Machine-learned models trained on large datasets can improve the accuracy of these estimates and allow better integration of quantum chemistry and experimental data. The need for continued development of shared (perhaps open-source) software and databases, and some directions for improvement, are highlighted. As we model more complicated systems, many of the weaknesses of the traditional ways of doing chemical kinetic modeling, and of testing kinetic models, have been exposed, identifying several challenges for future research by the community.