Graph Classification on NCI109
Evaluation Metric
Accuracy
Evaluation Results
Performance of each model on this benchmark.
Comparison Table
Model | Accuracy |
---|---|
self-attention-graph-pooling | 67.86 |
gaussian-induced-convolution-for-graphs | 82.86 |
spectral-multigraph-networks-for-discovering | 82.0 |
principal-neighbourhood-aggregation-for-graph | 83.382±1.045 |
learning-metrics-for-persistence-based-2 | 87.3 |
recipe-for-a-general-powerful-scalable-graph | 81.256±0.501 |
graph2vec-learning-distributed | 74.26 |
cell-attention-networks | 83.6 |
graph-attention-networks | 82.560±0.601 |
dropgnn-random-dropouts-increase-the | 83.961±1.141 |
how-attentive-are-graph-attention-networks | 83.092±0.764 |
hierarchical-graph-pooling-with-structure | 80.67 |
pre-training-graph-neural-networks-on | 77.54±1.51 |
graph-isomorphism-unet | 77.0 |
generalizing-topological-graph-neural | 84.0 |
subgraph-networks-with-application-to | 71.06 |
transitivity-preserving-graph-representation | 75.45±1.26 |
propagation-kernels-efficient-graph-kernels | 83.5 |
semi-supervised-classification-with-graph | 83.140±1.248 |
asap-adaptive-structure-aware-pooling-for | 70.07 |
unsupervised-inductive-whole-graph-embedding | 69.17 |
cin-enhancing-topological-message-passing | 84.5 |
masked-attention-is-all-you-need-for-graphs | 84.976±0.551 |
self-attention-graph-pooling | 74.06 |
graph-convolutional-networks-with | 74.90 |
provably-powerful-graph-networks | 82.23 |
pure-transformers-are-powerful-graph-learners | 72.077±1.883 |
on-valid-optimal-assignment-kernels-and | 86.3 |
relational-reasoning-over-spatial-temporal | 75.85 |
when-work-matters-transforming-classical | 80.66 |
capsule-neural-networks-for-graph | 58.04 |
do-transformers-really-perform-bad-for-graph | 74.879±1.183 |
improving-spectral-graph-convolution-for | 83.62 |
how-powerful-are-graph-neural-networks | 84.155±0.812 |
unsupervised-inductive-whole-graph-embedding | 74.48 |
towards-a-practical-k-dimensional-weisfeiler | 84.7 |
dynamic-edge-conditioned-filters-in | 82.14 |
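Several entries in the table report accuracy as mean±std (e.g. 84.155±0.812), which is the usual convention for repeated runs or cross-validation folds on NCI109. A minimal sketch of how such a figure can be aggregated from per-fold results, assuming per-fold predictions are already available (the fold accuracies below are hypothetical):

```python
import statistics

def fold_accuracy(preds, labels):
    """Fraction of correctly classified graphs in one fold."""
    assert len(preds) == len(labels)
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(preds)

def summarize(fold_accs):
    """Mean ± sample standard deviation over folds, in percent,
    matching the mean±std format used in the table above."""
    mean = statistics.mean(fold_accs) * 100
    std = statistics.stdev(fold_accs) * 100
    return f"{mean:.3f}\u00b1{std:.3f}"

# Hypothetical accuracies from a 10-fold cross-validation run.
accs = [0.84, 0.83, 0.85, 0.82, 0.84, 0.85, 0.83, 0.84, 0.86, 0.83]
print(summarize(accs))  # mean over folds with its sample std
```

Entries without a ± term typically report a single run or only the mean, so values across rows are not all directly comparable in variance.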