Abstract

With the rise of graph neural networks, sometimes also referred to as geometric deep learning, a range of new network layer types has been introduced. Because this is a very recent development, the design of new architectures relies heavily on intuition and trial and error. In this paper, we evaluate the effect of adding graph pooling layers, which down-sample graphs, to a network, and measure the resulting performance on three different datasets. We find that, especially for smaller graphs, pooling layers should be added with caution, as they can have a negative effect on overall performance.
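To make the pooling operation concrete, here is a minimal sketch of the kind of architecture being compared: a small GNN with an optional pooling layer that down-samples the graph between convolutions. This is an illustrative assumption built on PyTorch Geometric's GCNConv, TopKPooling, and global_mean_pool, not the authors' exact setup; layer sizes and the pooling ratio are placeholders.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool

class PooledGNN(torch.nn.Module):
    """Graph classifier with an optional pooling (down-sampling) layer."""

    def __init__(self, in_channels, hidden, num_classes, use_pooling=True):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)
        # Optional pooling layer: keeps the top 50% of nodes by learned score.
        self.pool = TopKPooling(hidden, ratio=0.5) if use_pooling else None
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        if self.pool is not None:
            # Down-sample the graph: drops half the nodes and the edges
            # incident to them, so later layers see a coarser graph.
            x, edge_index, _, batch, _, _ = self.pool(x, edge_index, batch=batch)
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # graph-level read-out
        return self.lin(x)
```

Training the same model with use_pooling=True and use_pooling=False on each dataset isolates the contribution of the pooling layer, which is the kind of comparison the abstract describes.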
