Graph Regression on ZINC-500k
Metrics
MAE (mean absolute error; lower is better)
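The scores below are the mean absolute error between a model's predicted regression target and the ground-truth value, averaged over the test graphs. A minimal sketch of the metric (the array values here are hypothetical and only illustrate the computation):

```python
import numpy as np

def mean_absolute_error(y_true, y_pred) -> float:
    """Average of |prediction - target| over all test graphs."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true)))

# Hypothetical per-graph targets vs. model predictions.
targets = np.array([0.12, -1.30, 2.45])
predictions = np.array([0.10, -1.25, 2.60])
print(mean_absolute_error(targets, predictions))  # ≈ 0.0733
```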
Results
Performance of various models on this benchmark.
Comparison table
Model name | MAE |
---|---|
self-attention-in-colors-another-take-on | 0.056 |
graph-learning-with-1d-convolutions-on-random | 0.101 |
graph-neural-networks-with-learnable-1 | 0.090 |
benchmarking-graph-neural-networks | 0.214 |
how-powerful-are-graph-neural-networks | 0.526 |
weisfeiler-and-lehman-go-cellular-cw-networks | 0.094 |
recipe-for-a-general-powerful-scalable-graph | 0.070 |
graph-propagation-transformer-for-graph | 0.077 |
sign-and-basis-invariant-networks-for | 0.084 |
provably-powerful-graph-networks | 0.303 |
towards-better-graph-representation-learning | 0.066 |
transformers-for-capturing-multi-level-graph | 0.062 |
substructure-aware-graph-neural-networks | 0.072 |
learning-efficient-positional-encodings-with | 0.0655 |
geometric-deep-learning-on-graphs-and | 0.292 |
graph-learning-with-1d-convolutions-on-random | 0.088 |
learning-efficient-positional-encodings-with | 0.0696 |
graph-neural-networks-with-learnable-1 | 0.095 |
edge-augmented-graph-transformers-global-self | 0.108 |
neural-message-passing-for-quantum-chemistry | 0.145 |
unlocking-the-potential-of-classic-gnns-for | 0.065 |
on-the-equivalence-between-graph-isomorphism | 0.353 |
graph-neural-networks-with-learnable-1 | 0.104 |
improving-spectral-graph-convolution-for | 0.0698 |
equivariant-matrix-function-neural-networks | 0.063 |
extending-the-design-space-of-graph-neural-1 | 0.059 |
masked-attention-is-all-you-need-for-graphs | 0.051 |
graph-inductive-biases-in-transformers | 0.059 |
residual-gated-graph-convnets | 0.282 |
improving-graph-neural-network-expressivity | 0.101 |
do-transformers-really-perform-bad-for-graph | 0.122 |
neural-message-passing-for-quantum-chemistry | 0.252 |
ckgconv-general-graph-convolution-with | 5.9 |
inductive-representation-learning-on-large | 0.398 |
weisfeiler-and-lehman-go-cellular-cw-networks | 0.079 |