Abstract
The inherent structural randomness of polymers has long posed challenges for representation learning in polymer machine learning (ML). The Simplified Molecular-Input Line-Entry System (SMILES) notation, which has excelled in small-molecule research, struggles to flexibly capture the complexity of polymer structures such as random block copolymers. Recently, BigSMILES and its extensions have paved the way for more accurate descriptions of polymer structures. However, whether BigSMILES outperforms SMILES in polymer ML workflows has yet to be systematically examined. To address this gap, we conducted extensive experiments on this question, spanning a variety of polymer property prediction and inverse design tasks based on both image and text inputs. Our findings reveal that across 11 tasks involving homopolymer systems, BigSMILES-based ML workflows perform comparably to, or even better than, their SMILES-based counterparts, underscoring the utility of BigSMILES for representing polymer structures. Furthermore, BigSMILES offers a more compact textual representation than SMILES, significantly reducing the computational cost of model training, particularly for large language models. Through these comprehensive experiments, we demonstrate that BigSMILES achieves performance on par with SMILES while enabling faster model training and lower energy consumption. These advantages could substantially benefit a wide range of future polymer tasks, including property prediction and classification as well as polymer generation across various polymer types.