Graph Regression on PCQM4Mv2-LSC
Evaluation Metrics
Test MAE
Validation MAE
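Both scores are the mean absolute error (MAE), in eV, between predicted and DFT-computed HOMO-LUMO gaps; because OGB-LSC withholds the PCQM4Mv2 test labels, several entries below report only a Validation MAE. The following is a minimal sketch of how the metric can be computed, assuming the official `ogb` Python package and its `PCQM4Mv2Evaluator`; the toy arrays are illustrative values, not results from the table.

```python
# Minimal sketch of MAE evaluation for PCQM4Mv2, assuming `pip install ogb`.
import numpy as np
from ogb.lsc import PCQM4Mv2Evaluator

evaluator = PCQM4Mv2Evaluator()

# Ground-truth HOMO-LUMO gaps and model predictions, in eV (made-up numbers).
y_true = np.array([3.05, 4.12, 5.47, 2.98])
y_pred = np.array([3.10, 4.05, 5.50, 3.02])

# The evaluator reports the mean absolute error: mean(|y_pred - y_true|).
result = evaluator.eval({"y_pred": y_pred, "y_true": y_true})
print(result["mae"])  # ≈ 0.0475

# Equivalent plain-NumPy computation of the same metric.
print(np.mean(np.abs(y_pred - y_true)))  # ≈ 0.0475
```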
Evaluation Results
Performance results of each model on this benchmark:
| Model Name | Test MAE | Validation MAE | Paper Title | Repository |
|---|---|---|---|---|
| EGT | 0.0862 | 0.0857 | Global Self-Attention as a Replacement for Graph Convolution | - |
| GPTrans-L | 0.0821 | 0.0809 | Graph Propagation Transformer for Graph Representation Learning | - |
| GPS | 0.0862 | 0.0852 | Recipe for a General, Powerful, Scalable Graph Transformer | - |
| TIGT | - | 0.0826 | Topology-Informed Graph Transformer | - |
| Graphormer + GFSA | - | 0.0860 | Graph Convolutions Enrich the Self-Attention in Transformers! | - |
| Graphormer | - | 0.0864 | Do Transformers Really Perform Bad for Graph Representation? | - |
| EGT+SSA+Self-ensemble | - | 0.0865 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | - |
| GCN | 0.1398 | 0.1379 | Semi-Supervised Classification with Graph Convolutional Networks | - |
| ESA (Edge set attention, no positional encodings) | N/A | 0.0235 | An end-to-end attention-based approach for learning on graphs | - |
| MLP-Fingerprint | 0.1760 | 0.1753 | OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs | - |
| GRIT | - | 0.0859 | Graph Inductive Biases in Transformers without Message Passing | - |
| Uni-Mol+ | 0.0705 | 0.0693 | Highly Accurate Quantum Chemical Property Prediction with Uni-Mol+ | - |
| GRPE-Large | 0.0876 | 0.0867 | GRPE: Relative Positional Encoding for Graph Transformer | - |
| TokenGT | 0.0919 | 0.0910 | Pure Transformers are Powerful Graph Learners | - |
| EGT + Triangular Attention | 0.0683 | 0.0671 | Global Self-Attention as a Replacement for Graph Convolution | - |
| TGT-At | 0.0683 | 0.0671 | Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers | - |
| GIN | 0.1218 | 0.1195 | How Powerful are Graph Neural Networks? | - |
| GPTrans-T | 0.0842 | 0.0833 | Graph Propagation Transformer for Graph Representation Learning | - |
| Transformer-M | 0.0782 | 0.0772 | One Transformer Can Understand Both 2D & 3D Molecular Data | - |
| EGT+SSA | - | 0.0876 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | - |