| GIN | 0.964±0.008 | 0.744±0.083 | How Powerful are Graph Neural Networks? | |
| TokenGT | 0.930±0.018 | 1.038±0.125 | Pure Transformers are Powerful Graph Learners | |
| ESA (Edge set attention, no positional encodings) | 0.977±0.001 | 0.595±0.013 | An end-to-end attention-based approach for learning on graphs | - |
| GraphGPS | 0.861±0.037 | 1.462±0.188 | Recipe for a General, Powerful, Scalable Graph Transformer | |
| GATv2 | 0.970±0.007 | 0.676±0.081 | How Attentive are Graph Attention Networks? | |