HyperAI

Graph Regression on Peptides-Struct

Metrics

MAE
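Mean absolute error (MAE) is the evaluation metric for this benchmark: the absolute difference between predicted and true target values, averaged over all targets and all graphs (lower is better). A minimal sketch of the computation, with the `mae` helper being an illustrative name rather than any library's API:

```python
# Minimal sketch: mean absolute error (MAE), the metric reported below.
# For a multi-target regression benchmark such as Peptides-Struct, the
# targets of all graphs are flattened before averaging.
def mae(y_true, y_pred):
    """Mean absolute error over two equal-length sequences of floats."""
    assert len(y_true) == len(y_pred), "prediction/target length mismatch"
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy usage: three targets, small prediction errors.
score = mae([0.0, 1.0, 2.0], [0.1, 0.9, 2.3])
print(f"MAE = {score:.4f}")
```

The `±` values in the table below report the standard deviation of this score across training runs with different random seeds.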

Results

Performance results of various models on this benchmark

| Model | MAE | Paper |
| --- | --- | --- |
| DRew-GCN+LapPE | 0.2536±0.0015 | DRew: Dynamically Rewired Message Passing with Delay |
| Graph Diffuser | 0.2461±0.0010 | Diffusing Graph Attention |
| CIN++-500k | 0.2523 | CIN++: Enhancing Topological Message Passing |
| GatedGCN-tuned | 0.2477±0.0009 | Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark |
| BoP | 0.25 | From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis |
| Exphormer | 0.2481±0.0007 | Exphormer: Sparse Transformers for Graphs |
| ViT-PS | 0.2559 | Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance |
| TokenGT | 0.2489±0.0013 | Pure Transformers are Powerful Graph Learners |
| RDKit + LightGBM | 0.2459 | Molecular Fingerprints Are Strong Models for Peptide Function Prediction |
| NPQ+GATv2 | 0.2589±0.0031 | Neural Priority Queues for Graph Neural Networks |
| NeuralWalker | 0.2463±0.0005 | Learning Long Range Dependencies on Graphs via Random Walks |
| ECFP + LightGBM | 0.2432 | Molecular Fingerprints Are Strong Models for Peptide Function Prediction |
| GCN-tuned | 0.2460±0.0007 | Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark |
| Transformer+LapPE | 0.2529±0.0016 | Long Range Graph Benchmark |
| SAN+LapPE | 0.2683±0.0043 | Long Range Graph Benchmark |
| EIGENFORMER | 0.2599 | Graph Transformers without Positional Encodings |
| ESA + RWSE (edge set attention with random-walk structural encoding, tuned) | 0.2393±0.0004 | An end-to-end attention-based approach for learning on graphs |
| GCN | 0.3496±0.0013 | Long Range Graph Benchmark |
| GPS | 0.2500±0.0005 | Recipe for a General, Powerful, Scalable Graph Transformer |
| GraphMLPMixer | 0.2475±0.0015 | A Generalization of ViT/MLP-Mixer to Graphs |