
An Untrained Neural Model for Fast and Accurate Graph Classification

Navarin N.;Pasa L.;Sperduti A.
2023

Abstract

Recent works have demonstrated the feasibility of fast and accurate time series classification methods based on randomized convolutional kernels [5, 32]. For graph-structured data, the majority of randomized graph neural networks are based on the Echo State Network paradigm, in which individual layers or the whole network exhibit some form of recurrence [7, 8]. This paper explores a simple form of randomized graph neural network inspired by the success of randomized convolutions in the 1-dimensional domain. Our idea is straightforward: implement a no-frills convolutional graph neural network and leave its weights untrained. Then, we aggregate the node representations with global pooling operators, obtaining an untrained graph-level representation. Since there is no training involved, computing such a representation is extremely fast. We then apply a fast linear classifier to the obtained representations, opting for LS-SVM since it is among the fastest classifiers available. We show that such a simple approach can obtain competitive predictive performance while being extremely efficient at both training and inference time.
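The pipeline described in the abstract (random, untrained graph convolutions, global sum pooling, then a fast least-squares linear classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the GCN-style symmetric normalization, the ReLU activation, the hidden size, the degree-based node features, and the toy dense-vs-sparse dataset are all assumptions made for the sketch.

```python
import numpy as np

def make_weights(in_dim, hidden, layers, rng):
    # Random, untrained weight matrices, shared by all graphs.
    dims = [in_dim] + [hidden] * layers
    return [rng.normal(scale=1.0 / np.sqrt(dims[i]), size=(dims[i], dims[i + 1]))
            for i in range(layers)]

def embed(A, X, Ws):
    # GCN-style propagation with self-loops and symmetric normalization
    # (an assumption; the paper's exact convolution may differ).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt
    H = X
    for W in Ws:
        H = np.maximum(A_norm @ H @ W, 0.0)  # untrained layer + ReLU
    return H.sum(axis=0)  # global sum pooling -> one vector per graph

def lssvm_fit(Z, y, lam=1e-2):
    # LS-SVM in its primal ridge-regression form:
    # solve (Z^T Z + lam I) w = Z^T y, with a bias column appended.
    Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])
    return np.linalg.solve(Zb.T @ Zb + lam * np.eye(Zb.shape[1]), Zb.T @ y)

def lssvm_predict(Z, w):
    Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])
    return np.sign(Zb @ w)

# Toy usage: separate dense from sparse random graphs.
rng = np.random.default_rng(0)

def random_graph(n, p, rng):
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T  # symmetric, no self-loops

graphs, labels = [], []
for _ in range(10):
    graphs.append(random_graph(12, 0.6, rng)); labels.append(1.0)
    graphs.append(random_graph(12, 0.1, rng)); labels.append(-1.0)

Ws = make_weights(1, 16, 2, rng)
# Node feature: degree (a stand-in for real node labels).
Z = np.vstack([embed(A, A.sum(axis=1, keepdims=True), Ws) for A in graphs])
y = np.array(labels)
w = lssvm_fit(Z, y)
train_acc = (lssvm_predict(Z, w) == y).mean()
```

Because the weights are drawn once and never trained, computing `Z` amounts to a few matrix products per graph, which is where the method's speed comes from; only the small linear system inside `lssvm_fit` depends on the labels.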
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Venue: 32nd International Conference on Artificial Neural Networks, ICANN 2023
ISBN: 978-3-031-44215-5, 978-3-031-44216-2
File in this record:
UNTRAINED_GRAPH_NEURAL_NETWORK_ICANN_2023-11.pdf
Open access
Type: Preprint (submitted version)
License: Free access
Size: 412.46 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3501328
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 1
  • OpenAlex: n/a