Graph Regression on PCQM4Mv2-LSC

Metrics

Test MAE (mean absolute error of the predicted HOMO-LUMO gap, in eV, on the held-out test set)
Validation MAE (mean absolute error on the validation set)
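
Both metrics are the plain mean absolute error between predicted and DFT-computed HOMO-LUMO gaps. As a minimal sketch, assuming the official `ogb` package and placeholder arrays standing in for real model predictions, a leaderboard-style score can be computed with OGB's `PCQM4Mv2Evaluator`:

```python
# Minimal sketch of the leaderboard metric, assuming the official `ogb`
# package (pip install ogb). `y_true` and `y_pred` are placeholder arrays
# standing in for real DFT labels and model predictions.
import numpy as np
from ogb.lsc import PCQM4Mv2Evaluator

evaluator = PCQM4Mv2Evaluator()

# Placeholder HOMO-LUMO gaps in eV, shape (num_graphs,).
y_true = np.array([3.05, 4.12, 2.87])
y_pred = np.array([3.01, 4.20, 2.95])

# Official evaluator: returns {"mae": ...}.
result = evaluator.eval({"y_pred": y_pred, "y_true": y_true})

# The same quantity computed by hand.
manual_mae = np.abs(y_pred - y_true).mean()

print(result["mae"], manual_mae)
```

The evaluator is a thin, shape-checked wrapper around the hand-computed mean, so the two printed values agree up to floating-point error.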

Results

Performance results of the different models on this benchmark.

| Model | Test MAE | Validation MAE | Paper |
| --- | --- | --- | --- |
| EGT | 0.0862 | 0.0857 | Global Self-Attention as a Replacement for Graph Convolution |
| GPTrans-L | 0.0821 | 0.0809 | Graph Propagation Transformer for Graph Representation Learning |
| GPS | 0.0862 | 0.0852 | Recipe for a General, Powerful, Scalable Graph Transformer |
| TIGT | - | 0.0826 | Topology-Informed Graph Transformer |
| Graphormer + GFSA | - | 0.0860 | Graph Convolutions Enrich the Self-Attention in Transformers! |
| Graphormer | - | 0.0864 | Do Transformers Really Perform Bad for Graph Representation? |
| EGT+SSA+Self-ensemble | - | 0.0865 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles |
| GCN | 0.1398 | 0.1379 | Semi-Supervised Classification with Graph Convolutional Networks |
| ESA (Edge set attention, no positional encodings) | N/A | 0.0235 | An end-to-end attention-based approach for learning on graphs |
| MLP-Fingerprint | 0.1760 | 0.1753 | OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs |
| GRIT | - | 0.0859 | Graph Inductive Biases in Transformers without Message Passing |
| Uni-Mol+ | 0.0705 | 0.0693 | Highly Accurate Quantum Chemical Property Prediction with Uni-Mol+ |
| GRPE-Large | 0.0876 | 0.0867 | GRPE: Relative Positional Encoding for Graph Transformer |
| TokenGT | 0.0919 | 0.0910 | Pure Transformers are Powerful Graph Learners |
| EGT + Triangular Attention | 0.0683 | 0.0671 | Global Self-Attention as a Replacement for Graph Convolution |
| TGT-At | 0.0683 | 0.0671 | Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers |
| GIN | 0.1218 | 0.1195 | How Powerful are Graph Neural Networks? |
| GPTrans-T | 0.0842 | 0.0833 | Graph Propagation Transformer for Graph Representation Learning |
| Transformer-M | 0.0782 | 0.0772 | One Transformer Can Understand Both 2D & 3D Molecular Data |
| EGT+SSA | - | 0.0876 | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles |