
Graph Regression on ZINC-Full

Metrics

Test MAE (mean absolute error on the test set; lower is better)
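
For reference, the sketch below shows how a Test MAE figure like those in the results table is computed. It is a minimal NumPy illustration; the `test_mae` helper and the example arrays are purely illustrative and are not taken from any of the listed papers.

```python
import numpy as np

def test_mae(predictions: np.ndarray, targets: np.ndarray) -> float:
    """Mean absolute error over the test set (lower is better)."""
    return float(np.mean(np.abs(predictions - targets)))

# Illustrative values only: predicted vs. true regression targets for four test graphs.
preds = np.array([2.31, -0.57, 1.08, 0.44])
true = np.array([2.25, -0.60, 1.20, 0.40])
print(f"Test MAE: {test_mae(preds, true):.4f}")  # 0.0625 on this toy example
```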

Results

Performance of the listed models on this benchmark, reported as Test MAE (± values indicate variation across runs where reported).

| Model Name | Test MAE | Paper Title | Repository |
|------------|----------|-------------|------------|
| GIN | 0.068±0.004 | How Powerful are Graph Neural Networks? | |
| δ-2-LGNN | 0.045±0.006 | Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings | |
| ESA + rings + NodeRWSE + EdgeRWSE | 0.0109±0.0002 | An end-to-end attention-based approach for learning on graphs | - |
| TokenGT | 0.047±0.010 | Pure Transformers are Powerful Graph Learners | |
| δ-2-GNN | 0.042±0.003 | Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings | |
| ESA + RWSE (Edge set attention, Random Walk Structural Encoding, tuned) | 0.0154±0.0001 | An end-to-end attention-based approach for learning on graphs | - |
| GRIT | 0.023 | Graph Inductive Biases in Transformers without Message Passing | |
| GraphGPS | 0.024±0.007 | Recipe for a General, Powerful, Scalable Graph Transformer | |
| Graphormer | 0.036±0.002 | Do Transformers Really Perform Bad for Graph Representation? | |
| GCN | 0.152±0.023 | Semi-Supervised Classification with Graph Convolutional Networks | |
| ESA + RWSE (Edge set attention, Random Walk Structural Encoding) | 0.017±0.001 | An end-to-end attention-based approach for learning on graphs | - |
| PNA | 0.057±0.007 | Principal Neighbourhood Aggregation for Graph Nets | |
| TIGT | 0.014 | Topology-Informed Graph Transformer | |
| GATv2 | 0.079±0.004 | How Attentive are Graph Attention Networks? | |
| GAT | 0.078±0.006 | Graph Attention Networks | |
| ESA + RWSE + CY2C (Edge set attention, Random Walk Structural Encoding, clique adjacency, tuned) | 0.0122±0.0004 | An end-to-end attention-based approach for learning on graphs | - |
| SignNet | 0.024±0.003 | Sign and Basis Invariant Networks for Spectral Graph Representation Learning | |
| GraphSAGE | 0.126±0.003 | Inductive Representation Learning on Large Graphs | |
| ESA (Edge set attention, no positional encodings) | 0.027±0.001 | An end-to-end attention-based approach for learning on graphs | - |
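
As a usage note, the sketch below shows one way the full ZINC test split can be loaded and a trained graph regressor scored with Test MAE. It assumes PyTorch Geometric's `ZINC` dataset class (with `subset=False` selecting the full dataset rather than the 12k subset); the `evaluate` helper and `model` are placeholders, not the evaluation code of any entry above.

```python
# Minimal sketch: load the full ZINC test split and compute Test MAE for a trained model.
# Assumes PyTorch Geometric is installed; `model` stands in for any trained graph regressor.
import torch
from torch_geometric.datasets import ZINC
from torch_geometric.loader import DataLoader

test_set = ZINC(root="data/ZINC", subset=False, split="test")  # subset=False -> ZINC-full
loader = DataLoader(test_set, batch_size=128, shuffle=False)

@torch.no_grad()
def evaluate(model) -> float:
    model.eval()
    total_abs_err, total_graphs = 0.0, 0
    for batch in loader:
        pred = model(batch)                      # expected shape: one prediction per graph
        total_abs_err += (pred.view(-1) - batch.y.view(-1)).abs().sum().item()
        total_graphs += batch.num_graphs
    return total_abs_err / total_graphs          # Test MAE, as reported in the table above
```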