Graph Classification on NCI1
Evaluation Metric
Accuracy
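All results below report plain classification accuracy: the fraction of test graphs whose predicted class matches the ground-truth label. A minimal sketch (the function name and the example labels are illustrative, not from any listed paper; NCI1 is a binary task):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(y_true) == len(y_pred), "label lists must align"
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical predictions for 4 graphs: 3 of 4 correct.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # → 0.75
```

Entries with a ± term report mean accuracy and standard deviation over repeated runs or cross-validation folds.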
Evaluation Results
Performance of each model on this benchmark.
Comparison Table
Model | Accuracy |
---|---|
self-attention-graph-pooling | 74.06% |
graph-isomorphism-unet | 80.2% |
pure-transformers-are-powerful-graph-learners | 76.740±2.054 |
a-simple-baseline-algorithm-for-graph | 75.2% |
gaussian-induced-convolution-for-graphs | 84.08% |
ddgk-learning-graph-representations-for-deep | 68.1% |
cin-enhancing-topological-message-passing | 85.3% |
self-attention-graph-pooling | 67.45% |
on-valid-optimal-assignment-kernels-and | 86.1% |
optimal-transport-for-structured-data-with | 86.42% |
dissecting-graph-neural-networks-on-graph | 83.65% |
weisfeiler-and-leman-go-neural-higher-order | 76.2% |
learning-metrics-for-persistence-based-2 | 87.2% |
cell-attention-networks | 84.5% |
asap-adaptive-structure-aware-pooling-for | 71.48% |
fea2fea-exploring-structural-feature | 74.9% |
neighborhood-enlargement-in-graph-neural | 83.85% |
graph2vec-learning-distributed | 73.22% ± 1.81% |
wasserstein-weisfeiler-lehman-graph-kernels | 85.75% |
dagcn-dual-attention-graph-convolutional | 81.68% |
improving-attention-mechanism-in-graph-neural | 82.28% |
do-transformers-really-perform-bad-for-graph | 77.032±1.393 |
learning-universal-adversarial-perturbations | 85.50% |
optimal-transport-for-structured-data-with | 72.82% |
hierarchical-graph-pooling-with-structure | 78.45% |
an-end-to-end-deep-learning-architecture-for | 69.00% |
capsule-graph-neural-network | 78.35% |
how-powerful-are-graph-neural-networks | 84.818±0.936 |
a-fair-comparison-of-graph-neural-networks-1 | 80% |
principal-neighbourhood-aggregation-for-graph | 84.964±1.391 |
masked-attention-is-all-you-need-for-graphs | 87.835±0.644 |
dropgnn-random-dropouts-increase-the | 84.331±1.564 |
graph-kernels-a-survey | 85.12% |
capsule-neural-networks-for-graph | 65.9% |
graph-classification-using-structural | 67.71% |
provably-powerful-graph-networks | 83.19% |
recipe-for-a-general-powerful-scalable-graph | 85.110±1.423 |
dynamic-edge-conditioned-filters-in | 83.8% |
semi-supervised-classification-with-graph | 84.185±0.644 |
hierarchical-representation-learning-in-graph | 73.5% |
wasserstein-embedding-for-graph-learning | 76.8% |
spi-gcn-a-simple-permutation-invariant-graph | 64.11% |
graph-classification-with-recurrent | 80.7% |
a-novel-higher-order-weisfeiler-lehman-graph | 73.5% |
subgraph-networks-with-application-to | 70.26% |
transitivity-preserving-graph-representation | 77.55±0.16% |
relation-order-histograms-as-a-network | 81.63% |
towards-a-practical-k-dimensional-weisfeiler | 85.5% |
a-fair-comparison-of-graph-neural-networks-1 | 76.4% |
graph-trees-with-attention | 75.9% |
how-attentive-are-graph-attention-networks | 82.384±1.700 |
dissecting-graph-neural-networks-on-graph | 81.43% |
how-powerful-are-graph-neural-networks | 82.7% |
propagation-kernels-efficient-graph-kernels | 84.5% |
a-simple-yet-effective-baseline-for-non | 73.0% |
learning-convolutional-neural-networks-for | 76.34% |
generalizing-topological-graph-neural | 85.1% |
weisfeiler-and-leman-go-neural-higher-order | 86.1% |
relational-reasoning-over-spatial-temporal | 74.48% |
template-based-graph-neural-network-with | 88.1% |
optimal-transport-for-structured-data-with | 85.82% |
graph-capsule-convolutional-neural-networks | 82.72% |
graph-attention-networks | 85.109±1.107 |
pre-training-graph-neural-networks-on | 79.75±0.82 |
a-non-negative-factorization-approach-to-node | 66.2% |
improving-spectral-graph-convolution-for | 84.87% |
spectral-multigraph-networks-for-discovering | 83.4% |