# Graph Classification on ENZYMES
## Evaluation Metric

Accuracy
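Accuracy here is the standard top-1 classification accuracy over test graphs, reported as a percentage (ENZYMES is a 6-class graph classification dataset). A minimal sketch, assuming predictions and labels are plain class-index lists; the `accuracy` helper below is illustrative and not taken from any of the listed implementations:

```python
def accuracy(predictions, labels):
    """Return the share of correct predictions as a percentage.

    predictions: predicted class index per test graph
    labels: ground-truth class index per test graph
    """
    if len(predictions) != len(labels) or not labels:
        raise ValueError("predictions and labels must be equal-length, non-empty")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)
```

Scores reported with a ± term in the table below are means over repeated runs or cross-validation folds, with the second number giving the standard deviation.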
## Evaluation Results

Performance of each model on this benchmark is listed below.

### Comparison Table
| Model Name | Accuracy |
|---|---|
| dagcn-dual-attention-graph-convolutional | 58.17% |
| transitivity-preserving-graph-representation | 67.22 ± 3.92% |
| spectral-multigraph-networks-for-discovering | 61.7% |
| masked-attention-is-all-you-need-for-graphs | 79.423 ± 1.658% |
| evolution-of-graph-classifiers | 55.67% |
| capsule-graph-neural-network | 54.67% |
| template-based-graph-neural-network-with | 75.1% |
| how-attentive-are-graph-attention-networks | 77.987 ± 2.112% |
| semi-supervised-classification-with-graph | 73.466 ± 4.372% |
| panda-expanded-width-aware-message-passing | 46.2% |
| wasserstein-embedding-for-graph-learning | 60.5% |
| demo-net-degree-specific-graph-neural | 27.2% |
| hierarchical-graph-representation-learning | 63.33% |
| dissecting-graph-neural-networks-on-graph | 69.50% |
| online-graph-dictionary-learning | 71.47% |
| fea2fea-exploring-structural-feature | 48.5% |
| dynamic-edge-conditioned-filters-in | 52.67% |
| deep-graph-kernels | 53.43% |
| a-fair-comparison-of-graph-neural-networks-1 | 58.2% |
| improving-spectral-graph-convolution-for | 73.33% |
| a-fair-comparison-of-graph-neural-networks-1 | 59.6% |
| optimal-transport-for-structured-data-with | 71.00% |
| graph-star-net-for-generalized-multi-task-1 | 67.1% |
| graph-trees-with-attention | 59.6% |
| a-simple-yet-effective-baseline-for-non | 35.3% |
| 190910086 | 67.30% |
| improving-attention-mechanism-in-graph-neural | 58.45% |
| graph-isomorphism-unet | 70% |
| hierarchical-representation-learning-in-graph | 43.9% |
| graph-convolutional-networks-with | 65.0% |
| wasserstein-weisfeiler-lehman-graph-kernels | 59.13% |
| how-powerful-are-graph-neural-networks | 68.303 ± 4.170% |
| hierarchical-graph-pooling-with-structure | 68.79% |
| bridging-the-gap-between-spectral-and-spatial | 78.39% |
| a-simple-baseline-algorithm-for-graph | 43.7% |
| principal-neighbourhood-aggregation-for-graph | 73.021 ± 2.512% |
| panda-expanded-width-aware-message-passing | 43.9% |
| graph-classification-with-recurrent | 48.4% |
| spi-gcn-a-simple-permutation-invariant-graph | 50.17% |
| panda-expanded-width-aware-message-passing | 53.1% |
| dissecting-graph-neural-networks-on-graph | 70.17% |
| capsule-neural-networks-for-graph | 27% |
| hierarchical-graph-representation-learning | 62.53% |
| panda-expanded-width-aware-message-passing | 31.55% |
| gaussian-induced-convolution-for-graphs | 62.50% |
| graph-attention-networks | 78.611 ± 1.556% |
| a-non-negative-factorization-approach-to-node | 24.1% |
| towards-a-practical-k-dimensional-weisfeiler | 58.2% |
| fine-tuning-graph-neural-networks-by | - |
| bridging-the-gap-between-spectral-and-spatial | 65.13% |
| dropgnn-random-dropouts-increase-the | 65.128 ± 4.117% |
| recipe-for-a-general-powerful-scalable-graph | 78.667 ± 4.625% |
| when-work-matters-transforming-classical | 67.50% |