Graph Regression on PCQM4Mv2-LSC
Evaluation Metrics: Test MAE and Validation MAE
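Both metrics are the mean absolute error (MAE) between predicted and DFT-computed HOMO-LUMO gaps (in eV) on the test and validation splits. A minimal NumPy sketch of the computation; the function name and sample arrays are illustrative only:

```python
import numpy as np

def mean_absolute_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """MAE = (1/n) * sum(|y_true - y_pred|)."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Illustrative values; real predictions come from a trained model.
y_true = np.array([5.29, 4.87, 6.01])   # DFT HOMO-LUMO gaps (eV)
y_pred = np.array([5.31, 4.80, 6.10])   # model predictions (eV)
print(mean_absolute_error(y_true, y_pred))  # 0.06
```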
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Test MAE | Validation MAE | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| EGT | 0.0862 | 0.0857 | Global Self-Attention as a Replacement for Graph Convolution | - |
| GPTrans-L | 0.0821 | 0.0809 | Graph Propagation Transformer for Graph Representation Learning | - |
| GPS | 0.0862 | 0.0852 | Recipe for a General, Powerful, Scalable Graph Transformer | - |
| TIGT | - | 0.0826 | Topology-Informed Graph Transformer | - |
| Graphormer + GFSA | - | 0.0860 | Graph Convolutions Enrich the Self-Attention in Transformers! | - |
| Graphormer | - | 0.0864 | Do Transformers Really Perform Bad for Graph Representation? | - |
| EGT+SSA+Self-ensemble | - | 0.0865 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | - |
| GCN | 0.1398 | 0.1379 | Semi-Supervised Classification with Graph Convolutional Networks | - |
| ESA (Edge set attention, no positional encodings) | N/A | 0.0235 | An end-to-end attention-based approach for learning on graphs | - |
| MLP-Fingerprint | 0.1760 | 0.1753 | OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs | - |
| GRIT | - | 0.0859 | Graph Inductive Biases in Transformers without Message Passing | - |
| Uni-Mol+ | 0.0705 | 0.0693 | Highly Accurate Quantum Chemical Property Prediction with Uni-Mol+ | - |
| GRPE-Large | 0.0876 | 0.0867 | GRPE: Relative Positional Encoding for Graph Transformer | - |
| TokenGT | 0.0919 | 0.0910 | Pure Transformers are Powerful Graph Learners | - |
| EGT + Triangular Attention | 0.0683 | 0.0671 | Global Self-Attention as a Replacement for Graph Convolution | - |
| TGT-At | 0.0683 | 0.0671 | Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers | - |
| GIN | 0.1218 | 0.1195 | How Powerful are Graph Neural Networks? | - |
| GPTrans-T | 0.0842 | 0.0833 | Graph Propagation Transformer for Graph Representation Learning | - |
| Transformer-M | 0.0782 | 0.0772 | One Transformer Can Understand Both 2D & 3D Molecular Data | - |
| EGT+SSA | - | 0.0876 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | - |
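For reproducing the leaderboard metric, the benchmark's maintainers ship an official evaluator in the OGB-LSC package. The sketch below assumes the `ogb` Python package (1.3+) with its documented `PCQM4Mv2Evaluator` interface; the prediction and target arrays are placeholders:

```python
# Sketch assuming the documented OGB-LSC evaluator API; y_pred/y_true are placeholders.
import numpy as np
from ogb.lsc import PCQM4Mv2Evaluator

evaluator = PCQM4Mv2Evaluator()
y_true = np.array([5.29, 4.87, 6.01])   # illustrative ground-truth gaps (eV)
y_pred = np.array([5.31, 4.80, 6.10])   # illustrative model predictions (eV)
result = evaluator.eval({"y_pred": y_pred, "y_true": y_true})
print(result["mae"])  # same MAE definition as the leaderboard columns above
```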