Graph Classification on DD
Metric
Accuracy

Results
Accuracy of various models on this benchmark.
Comparison Table
| Model Name | Accuracy (%) |
|---|---|
| a-non-negative-factorization-approach-to-node | 76.0 |
| graph-star-net-for-generalized-multi-task-1 | 79.60 |
| an-end-to-end-deep-learning-architecture-for | 79.37 |
| dgcnn-disordered-graph-convolutional-neural | 77.21 |
| semi-supervised-classification-with-graph | 78.151±3.465 |
| graph-convolutional-networks-with | 78.6 |
| hierarchical-representation-learning-in-graph | 72 |
| how-attentive-are-graph-attention-networks | 75.966±2.191 |
| pure-transformers-are-powerful-graph-learners | 73.950±3.361 |
| dissecting-graph-neural-networks-on-graph | 78.62 |
| deep-graph-kernels | 73.50 |
| principal-neighbourhood-aggregation-for-graph | 78.992±4.407 |
| hierarchical-graph-representation-learning | 82.07 |
| graph-attention-networks | 73.109±3.413 |
| semi-supervised-graph-classification-a | 80.88 |
| graph-trees-with-attention | 76.2 |
| accurate-learning-of-graph-representations-1 | 78.72 |
| unsupervised-universal-self-attention-network | 95.67 |
| a-simple-yet-effective-baseline-for-non | 77.5 |
| graph-level-representation-learning-with | 78.64 |
| wasserstein-embedding-for-graph-learning | 78.6 |
| 190910086 | 82.40 |
| hierarchical-graph-representation-learning | 80.64 |
| maximum-entropy-weighted-independent-set | 84.33 |
| self-attention-graph-pooling | 76.45 |
| graph-representation-learning-via-hard-and | 81.71 |
| masked-attention-is-all-you-need-for-graphs | 83.529±1.743 |
| unsupervised-universal-self-attention-network | 80.23 |
| asap-adaptive-structure-aware-pooling-for | 76.87 |
| a-fair-comparison-of-graph-neural-networks-1 | 76.6 |
| an-end-to-end-deep-learning-architecture-for | 78.72 |
| capsule-graph-neural-network | 75.38 |
| wasserstein-weisfeiler-lehman-graph-kernels | 79.69 |
| graph-capsule-convolutional-neural-networks | 77.62 |
| hierarchical-graph-pooling-with-structure | 80.96 |
| self-attention-graph-pooling | 76.19 |
| understanding-attention-in-graph-neural | 78.36 |
| learning-convolutional-neural-networks-for | 76.27 |
| anonymous-walk-embeddings | 71.51 |
| graph-u-nets | 82.43 |
| dynamic-edge-conditioned-filters-in | 74.1 |
| how-powerful-are-graph-neural-networks | 77.311±2.223 |
| a-simple-yet-effective-baseline-for-non | 75.5 |
| learning-metrics-for-persistence-based-2 | 82.0 |
| dropgnn-random-dropouts-increase-the | 78.151±3.711 |
| relation-order-histograms-as-a-network | 80.45 |
| propagation-kernels-efficient-graph-kernels | 78.8 |
| capsule-neural-networks-for-graph | 74.86 |
| a-simple-baseline-algorithm-for-graph | 24.6 |
| dissecting-graph-neural-networks-on-graph | 78.78 |
| a-novel-higher-order-weisfeiler-lehman-graph | 75.4 |
| ddgk-learning-graph-representations-for-deep | 83.14 |
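Entries in the table are reported either as a single accuracy or as mean±std over cross-validation folds (the common protocol on this dataset is 10-fold CV). A minimal sketch of how both forms of the metric are computed; the fold values here are hypothetical and for illustration only, not taken from the table:

```python
from statistics import mean, stdev

def fold_accuracy(predictions, labels):
    """Fraction of graphs whose predicted class matches the true label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def summarize(fold_accuracies):
    """Format mean +/- sample std across CV folds, in %, as in the table."""
    m = mean(fold_accuracies) * 100
    s = stdev(fold_accuracies) * 100
    return f"{m:.3f}\u00b1{s:.3f}"

# Hypothetical per-fold accuracies from a 10-fold run (illustrative values).
folds = [0.78, 0.80, 0.76, 0.79, 0.81, 0.77, 0.78, 0.80, 0.79, 0.76]
print(summarize(folds))
```

Single-number entries correspond to reporting only the mean (or a single split), which is one reason values from different papers are not always directly comparable.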