Abstract
Computational methods, most notably machine learning (ML), have provided theoretical guidance and practical solutions for the development of sustainable polymers, accelerating advances in materials for societal needs such as equipment, the environment, health, and green energy. In previous polymer ML workflows, the Simplified Molecular-Input Line-Entry System (SMILES) notation has consistently served as the primary representation of polymer structures, yet the inherent stochasticity of polymers has long posed challenges for SMILES-based representation learning in polymer ML.
Recently, BigSMILES and its extensions have paved the way for more versatile and concise representations of polymer structures. However, whether BigSMILES outperforms SMILES in polymer ML workflows has yet to be systematically explored and demonstrated. To fill this gap, we conducted extensive experiments spanning a variety of polymer property prediction and inverse design tasks based on both image and text inputs. Our findings reveal that, across 11 tasks involving homopolymer systems, BigSMILES-based ML workflows perform comparably to, or even better than, their SMILES-based counterparts, underscoring the efficacy of BigSMILES in representing polymer structures. Furthermore, BigSMILES offers a more compact textual representation than SMILES, significantly reducing the computational cost of model training, particularly for large language models. Through these comprehensive experiments, we demonstrate for the first time that BigSMILES can match the performance of SMILES while enabling faster model training and lower energy consumption, which could substantially benefit a wide range of future polymer tasks, including property prediction, classification, and polymer generation across diverse polymer types.