Graph Classification on PROTEINS
Metrics
Accuracy
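Several entries in the table below report accuracy as a mean with a ± deviation (e.g. 75.536±1.851), which on this benchmark conventionally comes from averaging test accuracy over cross-validation folds. As a minimal sketch (the fold accuracies here are illustrative values, not from any actual run):

```python
# Sketch: deriving a "mean±std" leaderboard entry from per-fold accuracies.
# The fold_accuracies list is hypothetical example data.
from statistics import mean, stdev

fold_accuracies = [74.1, 76.8, 75.2, 77.0, 74.5, 76.1, 75.9, 74.8, 76.4, 75.5]

mu = mean(fold_accuracies)       # average accuracy across folds
sigma = stdev(fold_accuracies)   # sample standard deviation across folds
print(f"{mu:.3f}\u00b1{sigma:.3f}")
```

Entries without a ± term typically report only the mean (or a single run), which is worth keeping in mind when comparing models whose scores differ by less than one standard deviation.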
Results
Performance results of various models on this benchmark
Comparison table
| Model | Accuracy (%) |
|---|---|
| panda-expanded-width-aware-message-passing | 76 |
| how-powerful-are-graph-neural-networks | 75.536±1.851 |
| spectral-multigraph-networks-for-discovering | 76.5 |
| rep-the-set-neural-networks-for-learning-set | 70.74 |
| graph-convolutional-networks-with | 76.60 |
| weisfeiler-and-leman-go-neural-higher-order | 76.4 |
| wasserstein-embedding-for-graph-learning | 76.5 |
| gaussian-induced-convolution-for-graphs | 77.65 |
| a-novel-higher-order-weisfeiler-lehman-graph | 76.5 |
| a-non-negative-factorization-approach-to-node | 72.1 |
| spi-gcn-a-simple-permutation-invariant-graph | 72.06 |
| recipe-for-a-general-powerful-scalable-graph | 77.143±1.494 |
| optimal-transport-for-structured-data-with | 74.55 |
| how-powerful-are-graph-neural-networks | 76.2 |
| graph-capsule-convolutional-neural-networks | 76.40 |
| diffwire-inductive-graph-rewiring-via-the | 74.91 |
| graph-representation-learning-via-hard-and | 77.92 |
| online-graph-dictionary-learning | 74.86 |
| hierarchical-graph-representation-learning | 76.25 |
| provably-powerful-graph-networks | 77.20 |
| graph-kernels-a-survey | 76.31 |
| a-fair-comparison-of-graph-neural-networks-1 | 73 |
| unsupervised-universal-self-attention-network | 78.53 |
| on-valid-optimal-assignment-kernels-and | 76.4 |
| randomized-schur-complement-views-for-graph | 84.3 |
| diffwire-inductive-graph-rewiring-via-the | 75.34 |
| self-attention-graph-pooling | 70.04 |
| graph-star-net-for-generalized-multi-task-1 | 77.90 |
| graph-representation-learning-via-hard-and | 78.23 |
| graph-attention-networks | 76.786±1.670 |
| discriminative-graph-autoencoder | 77.71 |
| 190910086 | 81.70 |
| hierarchical-graph-pooling-with-structure | 84.91 |
| segmented-graph-bert-for-graph-instance | 77.09 |
| towards-a-practical-k-dimensional-weisfeiler | 74.60 |
| dgcnn-disordered-graph-convolutional-neural | 75.1 |
| capsule-graph-neural-network | 76.28 |
| wasserstein-weisfeiler-lehman-graph-kernels | 74.28 |
| relation-order-histograms-as-a-network | 77.89 |
| subgraph-networks-with-application-to | 76.78 |
| a-simple-yet-effective-baseline-for-non | 74.7 |
| semi-supervised-classification-with-graph | 75.536±1.622 |
| semi-supervised-graph-classification-a | 77.26 |
| the-multiscale-laplacian-graph-kernel | 76.34 |
| how-attentive-are-graph-attention-networks | 77.679±2.187 |
| graph-level-representation-learning-with | 75.67 |
| a-simple-yet-effective-baseline-for-non | 72.7 |
| principal-neighbourhood-aggregation-for-graph | 77.679±3.281 |
| relational-reasoning-over-spatial-temporal | 80.36 |
| fine-tuning-graph-neural-networks-by | - |
| a-simple-baseline-algorithm-for-graph | 73.6 |
| distinguishing-enzyme-structures-from-non | 74.22 |
| accurate-learning-of-graph-representations-1 | 75.09 |
| understanding-attention-in-graph-neural | 77.09 |
| edge-contraction-pooling-for-graph-neural | 73.5 |
| graph-trees-with-attention | 75.6 |
| a-simple-yet-effective-baseline-for-non | 73.7 |
| edge-contraction-pooling-for-graph-neural | 72.5 |
| pinet-a-permutation-invariant-graph-neural | 75 |
| efficient-graphlet-kernels-for-large-graph | 71.67 |
| cin-enhancing-topological-message-passing | 80.5 |
| dissecting-graph-neural-networks-on-graph | 76.46 |
| template-based-graph-neural-network-with | 82.9 |
| weisfeiler-and-leman-go-neural-higher-order | 75.9 |
| neighborhood-enlargement-in-graph-neural | 78.97 |
| graph-classification-with-recurrent | 74.8 |
| masked-attention-is-all-you-need-for-graphs | 82.679±0.799 |
| an-end-to-end-deep-learning-architecture-for | 76.26 |
| panda-expanded-width-aware-message-passing | 76 |
| function-space-pooling-for-graph | 72.8 |
| a-fair-comparison-of-graph-neural-networks-1 | 73.7 |
| capsule-neural-networks-for-graph | 74.1 |
| transitivity-preserving-graph-representation | 80.12±0.32 |
| cell-attention-networks | 78.2 |
| diffwire-inductive-graph-rewiring-via-the | 75.38 |
| self-attention-graph-pooling | 71.86 |
| asap-adaptive-structure-aware-pooling-for | 74.19 |
| graph-u-nets | 77.68 |
| graph-representation-learning-via-hard-and | 78.65 |
| dropgnn-random-dropouts-increase-the | 76.3 |
| deep-graph-kernels | 75.68 |
| maximum-entropy-weighted-independent-set | 80.71 |
| diffwire-inductive-graph-rewiring-via-the | 75.03 |
| quantum-based-subgraph-convolutional-neural | 78.80 |
| improving-attention-mechanism-in-graph-neural | 76.81 |
| graph-isomorphism-unet | 77.6 |
| hierarchical-representation-learning-in-graph | 73.3 |
| fea2fea-exploring-structural-feature | 77.8 |
| a-persistent-weisfeilerlehman-procedure-for | 75.36 |
| quantum-based-subgraph-convolutional-neural | 78.35 |
| learning-metrics-for-persistence-based-2 | 78.8 |
| generalizing-topological-graph-neural | 78.8 |
| unsupervised-universal-self-attention-network | 80.01 |
| fast-graph-representation-learning-with | 75.1 |
| dissecting-graph-neural-networks-on-graph | 77.44 |
| dagcn-dual-attention-graph-convolutional | 76.33 |
| panda-expanded-width-aware-message-passing | 76.17 |
| graph2vec-learning-distributed | 73.3±2.05 |
| panda-expanded-width-aware-message-passing | 75.759 |