Graph Regression on Peptides-struct
Metrics: MAE (mean absolute error; lower is better)

Results

Performance results of various models on this benchmark. The table below lists 20 of the 39 entries recorded for this benchmark.
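The ranking metric is plain mean absolute error over the graph-level regression targets. The snippet below is a minimal, framework-free sketch of how the score is computed; the array shapes and values are toy numbers for illustration only, not benchmark data.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE averaged over all graphs and all regression targets (lower is better)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy example: 3 graphs with 2 regression targets each (illustrative values only).
y_true = [[0.10, 1.20], [0.30, 0.90], [0.25, 1.05]]
y_pred = [[0.12, 1.10], [0.28, 1.00], [0.30, 1.00]]
print(mean_absolute_error(y_true, y_pred))  # ~0.0567
```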
| Model Name | MAE | Paper Title |
|---|---|---|
| DRew-GCN+LapPE | 0.2536±0.0015 | DRew: Dynamically Rewired Message Passing with Delay |
| Graph Diffuser | 0.2461±0.0010 | Diffusing Graph Attention |
| CIN++-500k | 0.2523 | CIN++: Enhancing Topological Message Passing |
| GatedGCN-tuned | 0.2477±0.0009 | Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark |
| BoP | 0.25 | From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis |
| Exphormer | 0.2481±0.0007 | Exphormer: Sparse Transformers for Graphs |
| ViT-PS | 0.2559 | Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance |
| TokenGT | 0.2489±0.0013 | Pure Transformers are Powerful Graph Learners |
| RDKit + LightGBM | 0.2459 | Molecular Fingerprints Are Strong Models for Peptide Function Prediction |
| NPQ+GATv2 | 0.2589±0.0031 | Neural Priority Queues for Graph Neural Networks |
| NeuralWalker | 0.2463±0.0005 | Learning Long Range Dependencies on Graphs via Random Walks |
| ECFP + LightGBM | 0.2432 | Molecular Fingerprints Are Strong Models for Peptide Function Prediction |
| GCN-tuned | 0.2460±0.0007 | Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark |
| Transformer+LapPE | 0.2529±0.0016 | Long Range Graph Benchmark |
| SAN+LapPE | 0.2683±0.0043 | Long Range Graph Benchmark |
| EIGENFORMER | 0.2599 | Graph Transformers without Positional Encodings |
| ESA + RWSE (Edge set attention, Random Walk Structural Encoding, tuned) | 0.2393±0.0004 | An end-to-end attention-based approach for learning on graphs |
| GCN | 0.3496±0.0013 | Long Range Graph Benchmark |
| GPS | 0.2500±0.0005 | Recipe for a General, Powerful, Scalable Graph Transformer |
| GraphMLPMixer | 0.2475±0.0015 | A Generalization of ViT/MLP-Mixer to Graphs |
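For anyone reproducing these numbers, Peptides-struct comes from the Long Range Graph Benchmark and is commonly loaded through PyTorch Geometric. The sketch below assumes your PyTorch Geometric version ships torch_geometric.datasets.LRGBDataset with a "Peptides-struct" option (an assumption, not something stated on this page); it evaluates a trivial predict-the-training-mean baseline under the same MAE metric used in the table, as a sanity check rather than a competitive model.

```python
# Minimal sketch, assuming PyTorch Geometric's LRGBDataset exposes the
# Peptides-struct splits (dataset and split names are assumptions here).
import torch
from torch_geometric.datasets import LRGBDataset

root = "data/LRGB"
train_ds = LRGBDataset(root, name="Peptides-struct", split="train")
test_ds = LRGBDataset(root, name="Peptides-struct", split="test")

# Stack the graph-level regression targets: one row per peptide graph.
y_train = torch.cat([data.y for data in train_ds], dim=0)
y_test = torch.cat([data.y for data in test_ds], dim=0)

# Trivial baseline: predict the per-target training mean for every test graph.
mean_pred = y_train.mean(dim=0, keepdim=True).expand_as(y_test)

# Benchmark metric: MAE averaged over all test graphs and all targets.
mae = (y_test - mean_pred).abs().mean().item()
print(f"Mean-predictor test MAE: {mae:.4f}")
```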