Deep recurrent graph neural networks
Pasa L.; Navarin N.; Sperduti A.
2020
Abstract
Graph Neural Networks (GNNs) show good results in classification and regression on graphs; however, most GNN models use a limited depth, being composed of only a few stacked graph convolutional layers. One reason for this is that the number of parameters grows with the number of GNN layers. In this paper, we show how using a recurrent graph convolution layer helps in building deeper GNNs without increasing the complexity of the training phase, while improving predictive performance. We also analyze how the depth of the model influences the final result.
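To make the core idea concrete, below is a minimal sketch of a recurrent graph convolution: a single weight matrix is reused at every propagation step, so the parameter count stays constant as the effective depth grows. This is an illustration under assumptions, not the authors' exact architecture; the class name `RecurrentGraphConv`, the dimensions, and the GCN-style propagation rule are all illustrative choices.

```python
import torch
import torch.nn as nn

class RecurrentGraphConv(nn.Module):
    """A graph convolution whose weights are shared across propagation steps,
    so adding depth (more steps) adds no parameters. Illustrative sketch only."""

    def __init__(self, in_dim: int, hidden_dim: int, num_steps: int):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden_dim)    # applied once to input features
        self.conv = nn.Linear(hidden_dim, hidden_dim)  # reused at every step
        self.num_steps = num_steps

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) node features
        # adj_norm: (num_nodes, num_nodes) normalized adjacency,
        # e.g. D^{-1/2} (A + I) D^{-1/2}
        h = torch.relu(self.encode(x))
        for _ in range(self.num_steps):
            # The same self.conv weights are applied at every step:
            # this is the "recurrent" part that keeps the model size fixed.
            h = torch.relu(self.conv(adj_norm @ h))
        return h

# Toy usage: 10 propagation steps cost no more parameters than 1.
x = torch.randn(5, 8)    # 5 nodes, 8 input features
adj = torch.eye(5)       # placeholder normalized adjacency
model = RecurrentGraphConv(in_dim=8, hidden_dim=16, num_steps=10)
print(model(x, adj).shape)  # torch.Size([5, 16])
```

Contrast this with a standard stacked GCN, where each of the `num_steps` layers would carry its own weight matrix, making the parameter count linear in depth; weight sharing removes that dependence.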
Files in this record:

| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| ES2020-107.pdf | Main article | Published (publisher's version) | Open access | 1.63 MB | Adobe PDF |