Graph Regression on PCQM4Mv2-LSC
Evaluation metrics: Test MAE and Validation MAE (mean absolute error on the test and validation splits, reported in eV).
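Both metrics are the mean absolute error (MAE) between predicted and DFT-computed HOMO-LUMO gaps. Below is a minimal sketch of the computation; the array names and values are illustrative and not taken from the leaderboard:

```python
import numpy as np

def mean_absolute_error(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Average of |prediction - target| over all molecules.

    On PCQM4Mv2 the target is the DFT-computed HOMO-LUMO gap in eV,
    so the leaderboard MAE values are in eV as well.
    """
    return float(np.mean(np.abs(y_pred - y_true)))

# Illustrative numbers only; not taken from the leaderboard.
y_true = np.array([3.05, 4.12, 2.87])   # ground-truth gaps (eV)
y_pred = np.array([3.10, 4.00, 2.90])   # model predictions (eV)
print(mean_absolute_error(y_pred, y_true))  # ~0.0667
```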
Evaluation results
Performance of each model on this benchmark.
| Model | Test MAE | Validation MAE | Paper Title | Repository |
|---|---|---|---|---|
| EGT | 0.0862 | 0.0857 | Global Self-Attention as a Replacement for Graph Convolution | - |
| GPTrans-L | 0.0821 | 0.0809 | Graph Propagation Transformer for Graph Representation Learning | - |
| GPS | 0.0862 | 0.0852 | Recipe for a General, Powerful, Scalable Graph Transformer | - |
| TIGT | - | 0.0826 | Topology-Informed Graph Transformer | - |
| Graphormer + GFSA | - | 0.0860 | Graph Convolutions Enrich the Self-Attention in Transformers! | - |
| Graphormer | - | 0.0864 | Do Transformers Really Perform Bad for Graph Representation? | - |
| EGT+SSA+Self-ensemble | - | 0.0865 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | - |
| GCN | 0.1398 | 0.1379 | Semi-Supervised Classification with Graph Convolutional Networks | - |
| ESA (Edge set attention, no positional encodings) | N/A | 0.0235 | An end-to-end attention-based approach for learning on graphs | - |
| MLP-Fingerprint | 0.1760 | 0.1753 | OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs | - |
| GRIT | - | 0.0859 | Graph Inductive Biases in Transformers without Message Passing | - |
| Uni-Mol+ | 0.0705 | 0.0693 | Highly Accurate Quantum Chemical Property Prediction with Uni-Mol+ | - |
| GRPE-Large | 0.0876 | 0.0867 | GRPE: Relative Positional Encoding for Graph Transformer | - |
| TokenGT | 0.0919 | 0.0910 | Pure Transformers are Powerful Graph Learners | - |
| EGT + Triangular Attention | 0.0683 | 0.0671 | Global Self-Attention as a Replacement for Graph Convolution | - |
| TGT-At | 0.0683 | 0.0671 | Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers | - |
| GIN | 0.1218 | 0.1195 | How Powerful are Graph Neural Networks? | - |
| GPTrans-T | 0.0842 | 0.0833 | Graph Propagation Transformer for Graph Representation Learning | - |
| Transformer-M | 0.0782 | 0.0772 | One Transformer Can Understand Both 2D & 3D Molecular Data | - |
| EGT+SSA | - | 0.0876 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | - |