Graph Classification on COLLAB
Evaluation metric: Accuracy
Evaluation results: performance of each model on this benchmark.
Model Name | Accuracy | Paper Title | Repository
GFN-light | 81.34% | Are Powerful Graph Neural Nets Necessary? A Dissection on Graph Classification |
GMT | 80.74% | Accurate Learning of Graph Representations with Graph Multiset Pooling |
G_DenseNet | 83.16% | When Work Matters: Transforming Classical Network Structures to Graph CNN | -
DGCNN | 68.34% | DGCNN: Disordered Graph Convolutional Neural Network Based on the Gaussian Mixture Model | -
DGCNN | 73.76% | An End-to-End Deep Learning Architecture for Graph Classification | -
sGIN | 80.71% | Mutual Information Maximization in Graph Neural Networks |
PPGN | 81.38% | Provably Powerful Graph Networks |
GCN | 80.6% | Fast Graph Representation Learning with PyTorch Geometric |
GraphSAGE | 73.9% | A Fair Comparison of Graph Neural Networks for Graph Classification |
R-GIN + PANDA | 77.8% | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring |
R-GCN + PANDA | 71.4% | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring |
hGANet | 77.48% | Graph Representation Learning via Hard and Channel-Wise Attention Networks |
GCN + PANDA | 68.4% | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring |
Graph U-Nets | 77.56% | Graph U-Nets |
1-NMFPool | 65.0% | A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks | -
CT-Layer | 69.87% | DiffWire: Inductive Graph Rewiring via the Lovász Bound |
U2GNN (Unsupervised) | 95.62% | Universal Graph Transformer Self-Attention Networks |
U2GNN | 77.84% | Universal Graph Transformer Self-Attention Networks |
FactorGCN | 81.2% | Factorizable Graph Convolutional Networks |
NDP | 79.1% | Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling |
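Since the table only lists final accuracies, the sketch below shows how a number of this kind is typically produced for COLLAB: load the graphs, train a graph classifier, and report classification accuracy on held-out graphs. It is not the evaluation code of any paper listed above. The small GCN architecture, the constant node features (COLLAB graphs carry no node attributes), the single 90/10 split, and all hyperparameters are illustrative assumptions; many leaderboard entries instead report 10-fold cross-validation averages. The PyTorch Geometric APIs used (TUDataset, GCNConv, global_mean_pool) are real, but everything else is a minimal example.

```python
# Minimal sketch: graph classification accuracy on COLLAB with PyTorch Geometric.
# Architecture, split, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F
import torch_geometric.transforms as T
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

# COLLAB graphs have no node attributes, so add a constant feature per node.
dataset = TUDataset(root='data/TUDataset', name='COLLAB',
                    transform=T.Constant()).shuffle()
n = len(dataset)
train_ds, test_ds = dataset[:int(0.9 * n)], dataset[int(0.9 * n):]
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
test_loader = DataLoader(test_ds, batch_size=64)

class GCN(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, dataset.num_classes)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        x = global_mean_pool(x, batch)   # graph-level readout
        return self.lin(x)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = GCN().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(50):
    model.train()
    for data in train_loader:
        data = data.to(device)
        optimizer.zero_grad()
        out = model(data.x, data.edge_index, data.batch)
        F.cross_entropy(out, data.y).backward()
        optimizer.step()

# Benchmark metric: plain classification accuracy on held-out graphs.
model.eval()
correct = 0
with torch.no_grad():
    for data in test_loader:
        data = data.to(device)
        pred = model(data.x, data.edge_index, data.batch).argmax(dim=-1)
        correct += int((pred == data.y).sum())
print(f'Test accuracy: {correct / len(test_ds):.4f}')
```

To compare against the table above under the more common protocol, the same loop would be wrapped in 10-fold cross-validation and the mean test accuracy reported.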