| GIN | 0.938±0.011 | 0.509±0.044 | How Powerful are Graph Neural Networks? | |
| GATv2 | 0.928±0.005 | 0.549±0.020 | How Attentive are Graph Attention Networks? | |
| ChemBERTa-2 (MTR-77M) | - | 0.889 | ChemBERTa-2: Towards Chemical Foundation Models | |
| TokenGT | 0.892±0.036 | 0.667±0.103 | Pure Transformers are Powerful Graph Learners | |
| GraphGPS | 0.911±0.003 | 0.613±0.010 | Recipe for a General, Powerful, Scalable Graph Transformer | |
| ESA (Edge set attention, no positional encodings) | 0.944±0.002 | 0.485±0.009 | An end-to-end attention-based approach for learning on graphs | |