SRGNN

Simple Recurrent Graph Neural Network

Authors

  • Juan Belieni, Getulio Vargas Foundation
  • Diego Mesquita, Getulio Vargas Foundation

Keywords:

Graph Neural Networks, Message Passing, Simple Recurrent GNNs, Aggregation, Combination

Abstract

Graph Neural Networks (GNNs) learn graph representations via message passing, which consists of aggregation and combination operations performed at each vertex of a graph. The aggregation operation computes a message from the representations of a vertex's neighborhood, and the combination operation updates the vertex's representation based on that message and its current representation. The expressive power of standard GNNs is bounded by the color refinement algorithm for graph isomorphism testing. Notably, matching this bound only requires that the aggregation and combination operations be injective. GNN variants such as GCN and GIN are typically implemented as multi-layer neural networks in which each layer learns its own weights. It is far less common to define these networks as recurrent GNNs, in which all layers share the same weights. The goal of this work is to evaluate the learning capabilities of this simpler variant of GNNs, which we call Simple Recurrent GNNs (SRGNNs).
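In the standard message-passing formulation (Gilmer et al., 2017; Xu et al., 2019), one iteration updates each vertex v as

    h_v^(t+1) = COMBINE( h_v^(t), AGGREGATE({ h_u^(t) : u in N(v) }) ),

and an SRGNN simply applies the same parameterized iteration T times instead of stacking T independently weighted layers. The sketch below illustrates this weight sharing with a GIN-style sum aggregation in PyTorch; the class name SRGNN, the dense-adjacency interface, and the hyperparameters are illustrative assumptions, not the authors' implementation.

    # Minimal SRGNN sketch: one set of weights reused at every
    # message-passing iteration (hypothetical names, for illustration).
    import torch
    import torch.nn as nn

    class SRGNN(nn.Module):
        def __init__(self, dim: int, num_iters: int):
            super().__init__()
            # One shared combination MLP: the same parameters are
            # applied at every iteration (the "recurrent" part).
            self.combine = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
            )
            self.eps = nn.Parameter(torch.zeros(1))  # GIN-style self-loop weight
            self.num_iters = num_iters

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x: (n, dim) node features; adj: (n, n) dense adjacency matrix.
            h = x
            for _ in range(self.num_iters):
                msg = adj @ h                               # aggregation: sum over neighbors
                h = self.combine((1 + self.eps) * h + msg)  # combination
            return h  # final node representations

    # Usage: a 5-node path graph, 16-dim features, 3 shared iterations.
    n, dim = 5, 16
    adj = torch.zeros(n, n)
    for i in range(n - 1):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    model = SRGNN(dim, num_iters=3)
    out = model(torch.randn(n, dim), adj)
    print(out.shape)  # torch.Size([5, 16])

Sum aggregation is chosen here because, as in GIN, it can be made injective on multisets of neighbor features, which is the property the abstract identifies as sufficient for matching the color refinement bound.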


References

J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. “Neural Message Passing for Quantum Chemistry”. In: Proceedings of the 34th International Conference on Machine Learning. International Conference on Machine Learning. PMLR, July 17, 2017, pp. 1263–1272. https://proceedings.mlr.press/v70/gilmer17a.html (visited on 03/06/2024).

M. Grohe. The Logic of Graph Neural Networks. Jan. 9, 2022. doi: 10.48550/arXiv.2104.14624. arXiv: 2104.14624 [cs]. http://arxiv.org/abs/2104.14624 (visited on 02/07/2024). preprint.

T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks. Feb. 22, 2017. doi: 10.48550/arXiv.1609.02907. arXiv: 1609.02907 [cs, stat]. http://arxiv.org/abs/1609.02907 (visited on 03/05/2024). preprint.

K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How Powerful Are Graph Neural Networks? Feb. 22, 2019. doi: 10.48550/arXiv.1810.00826. arXiv: 1810.00826 [cs, stat]. http://arxiv.org/abs/1810.00826 (visited on 02/28/2024). preprint.

M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. “Deep Sets”. In: Advances in Neural Information Processing Systems. Vol. 30. Curran Associates, Inc., 2017. https://proceedings.neurips.cc/paper/2017/hash/f22e4747da1aa27e363d86d40ff442fe-Abstract.html (visited on 03/06/2024).


Published

2025-01-20

Section

Abstracts