A Proposal of Multi-Layer Perceptron with Graph Gating Unit for Graph Representation Learning and its Application to Surrogate Model for FEM

Graph neural networks (GNNs) are neural networks for representation learning on graph-structured data, and most of them are constructed by stacking graph convolutional layers. Since stacking n such layers is equivalent to propagating information from n-hop neighbor nodes, GNNs require a sufficiently large number of layers to learn large graphs. However, deep stacking tends to degrade model performance due to a problem known as over-smoothing. In this paper, I present a novel GNN model that stacks feedforward neural networks with gating structures built from graph convolutional networks (GCNs), aiming to resolve the over-smoothing problem and thereby overcome the difficulty GNNs face in learning large graphs. The experimental results showed that the proposed method monotonically improved prediction accuracy up to 20 layers without over-smoothing, whereas the conventional method suffered from over-smoothing at 4 to 8 layers. In two experiments on large graphs, the PPI dataset (a benchmark for inductive node classification) and an application as a surrogate model for the finite element method, the proposed method achieved the highest accuracy among the existing methods compared, including a state-of-the-art accuracy of 99.71% on the PPI dataset.
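To make the architectural idea concrete, the following is a minimal sketch, in plain Python, of one plausible reading of the "graph gating unit": a feedforward block whose gate is computed by a GCN-style propagation over the normalized adjacency matrix, so that neighbor mixing happens only inside the gate while a residual connection preserves each node's own features. All function names (`graph_gating_unit`, `normalized_adjacency`) and the exact block layout are illustrative assumptions, not the paper's definitive implementation.

```python
import math

def matmul(A, B):
    """Naive dense matrix product, sufficient for this small example."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def normalized_adjacency(edges, n):
    """GCN-style A_hat = D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1.0
    deg = [sum(row) for row in A]
    return [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

def graph_gating_unit(X, A_hat, W_u, W_v):
    """One hypothetical gating block (an assumption about the design):
    - value path:  U = X W_u          (per-node feedforward, no mixing)
    - gate path:   g = sigmoid(A_hat X W_v)  (the only neighbor mixing)
    - output:      U * g + X          (elementwise gate plus residual)
    """
    U = matmul(X, W_u)
    G = matmul(A_hat, matmul(X, W_v))
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [[U[i][j] * sig(G[i][j]) + X[i][j] for j in range(len(U[0]))]
            for i in range(len(U))]

# Usage on a 3-node path graph (0-1-2) with 2-dimensional features.
A_hat = normalized_adjacency([(0, 1), (1, 2)], 3)
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
I2 = [[1.0, 0.0], [0.0, 1.0]]
out = graph_gating_unit(X, A_hat, I2, I2)
```

Because the gate saturates between 0 and 1 and the residual path bypasses propagation entirely, stacking many such blocks need not average node features toward a common value, which is one intuition for why a gated design could resist over-smoothing.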