
Machine Learning Understands Knotted Polymers

Braghetto A.; Kundu S.; Baiesi M.; Orlandini E.
2023

Abstract

Simulated configurations of flexible knotted rings confined inside a spherical cavity are fed into long short-term memory neural networks (LSTM NNs) designed to distinguish knot types. The results show that the networks perform well in knot recognition even when tested against flexible, strongly confined, and therefore highly geometrically entangled rings. In agreement with the expectation that knots delocalize in dense polymers, a suitable coarse-graining procedure applied to the configurations boosts the performance of the LSTMs when knot identification is applied to rings much longer than those used for training. Notably, when the NNs fail, the wrong prediction usually belongs to the same topological family as the correct one. That the LSTMs grasp some basic properties of the ring's topology is corroborated by a test on knot types not used for training. We also show that the choice of NN architecture matters: simpler convolutional NNs do not perform as well. Finally, all results depend on the features used as input. Surprisingly, the coordinates or bond directions of the configurations provide the best accuracy, even though they are not invariant under rotations (while the knot type is). We also tested rotationally invariant features based on distances, angles, and dihedral angles.
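
The abstract does not include the authors' implementation. As a rough illustration of the kind of pipeline it describes, the following is a minimal sketch, assuming PyTorch, of how a ring configuration could be turned into per-bead features (bond directions, plus rotationally invariant bond and dihedral angles) and fed to a bidirectional LSTM classifier. All function names, layer sizes, and the number of knot classes are hypothetical and not taken from the paper.

# Minimal, hypothetical sketch (not the authors' code), assuming PyTorch.
# Feature definitions, layer sizes, and the five knot classes are illustrative only.
import torch
import torch.nn as nn


def bond_directions(coords: torch.Tensor) -> torch.Tensor:
    """Unit bond vectors of a closed ring; coords has shape (n_beads, 3)."""
    bonds = torch.roll(coords, shifts=-1, dims=0) - coords
    return bonds / bonds.norm(dim=-1, keepdim=True)


def bond_and_dihedral_angles(coords: torch.Tensor) -> torch.Tensor:
    """Rotationally invariant per-bead features: bond angle and dihedral angle."""
    b = torch.roll(coords, shifts=-1, dims=0) - coords      # bond i -> i+1 (ring-closed)
    b_prev = torch.roll(b, shifts=1, dims=0)
    b_next = torch.roll(b, shifts=-1, dims=0)
    # Bond angle between consecutive bonds.
    cos_theta = (b_prev * b).sum(-1) / (b_prev.norm(dim=-1) * b.norm(dim=-1))
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    # Dihedral angle defined by three consecutive bonds.
    n1 = torch.linalg.cross(b_prev, b)
    n2 = torch.linalg.cross(b, b_next)
    m1 = torch.linalg.cross(n1, b / b.norm(dim=-1, keepdim=True))
    phi = torch.atan2((m1 * n2).sum(-1), (n1 * n2).sum(-1))
    return torch.stack([theta, phi], dim=-1)                 # (n_beads, 2)


class KnotLSTM(nn.Module):
    """Bidirectional LSTM mapping a sequence of per-bead features to knot-type logits."""

    def __init__(self, n_features: int = 3, hidden: int = 64, n_classes: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)               # x: (batch, seq_len, n_features)
        return self.head(out.mean(dim=1))   # average over beads -> fixed-size vector


if __name__ == "__main__":
    coords = torch.randn(200, 3)                        # a toy "ring" of 200 beads
    feats = bond_directions(coords).unsqueeze(0)        # (1, 200, 3)
    logits = KnotLSTM(n_features=3)(feats)
    print(logits.softmax(dim=-1))                       # scores over 5 hypothetical knot types

A sequence-averaged readout is used here so that rings of different lengths map to a fixed-size representation; the authors' actual architecture, training procedure, coarse-graining step, and feature preprocessing may differ.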
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3494382
Citations
  • PubMed Central: ND
  • Scopus: 7
  • Web of Science: 6
  • OpenAlex: ND