Semantic Parsing on WikiTableQuestions
Evaluation metrics: Accuracy (Dev), Accuracy (Test)
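Accuracy on WikiTableQuestions is denotation accuracy: a prediction counts as correct when the predicted answer set matches the gold answer set after normalization, measured on the dev and test splits. The sketch below is a minimal approximation of that check, assuming predictions and gold answers are given as lists of strings; the official WikiTableQuestions evaluator applies richer normalization (numbers, dates, Unicode), so the helper names and logic here are illustrative only, not the official scoring script.

```python
# Minimal sketch of denotation accuracy for WikiTableQuestions-style evaluation.
# Assumption: each prediction and gold answer is a list of strings. The official
# evaluator performs richer normalization, so this only approximates the
# Accuracy (Dev/Test) numbers reported on this page.

def normalize(value: str) -> str:
    """Lowercase, trim, and canonicalize numeric strings where possible."""
    value = value.strip().lower()
    try:
        # "2,000" and "2000.0" compare equal after numeric canonicalization.
        return str(float(value.replace(",", "")))
    except ValueError:
        return value

def denotation_match(predicted: list[str], gold: list[str]) -> bool:
    """A prediction is correct when the normalized answer sets are identical."""
    return {normalize(v) for v in predicted} == {normalize(v) for v in gold}

def accuracy(predictions: list[list[str]], golds: list[list[str]]) -> float:
    """Fraction of examples whose predicted denotation matches the gold one."""
    correct = sum(denotation_match(p, g) for p, g in zip(predictions, golds))
    return correct / len(golds)

if __name__ == "__main__":
    preds = [["2,000"], ["france"], ["3"]]
    golds = [["2000"], ["France"], ["4"]]
    print(f"Accuracy: {accuracy(preds, golds):.3f}")  # 0.667
```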
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Accuracy (Dev) | Accuracy (Test) | Paper Title |
| --- | --- | --- | --- |
| ARTEMIS-DA | - | 80.8 | ARTEMIS-DA: An Advanced Reasoning and Transformation Engine for Multi-Step Insight Synthesis in Data Analytics |
| TabLaP | - | 76.6 | Accurate and Regret-aware Numerical Problem Solver for Tabular Question Answering |
| SynTQA (GPT) | - | 74.4 | SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA |
| Mix SC | - | 73.6 | Rethinking Tabular Data Understanding with Large Language Models |
| SynTQA (RF) | - | 71.6 | SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA |
| CABINET | - | 69.1 | CABINET: Content Relevance based Noise Reduction for Table Question Answering |
| Chain-of-Table | - | 67.31 | Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding |
| Tab-PoT | - | 66.78 | Efficient Prompting for LLM-based Generative Internet of Things |
| Dater | 64.8 | 65.9 | Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning |
| LEVER | 64.6 | 65.8 | LEVER: Learning to Verify Language-to-Code Generation with Execution |
| TabSQLify (col+row) | - | 64.7 | TabSQLify: Enhancing Reasoning Capabilities of LLMs Through Table Decomposition |
| Binder | 65.0 | 64.6 | Binding Language Models in Symbolic Languages |
| OmniTab-Large | 62.5 | 63.3 | OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering |
| NormTab (Targeted) + SQL | - | 61.20 | NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data Normalization |
| ReasTAP-Large | 59.7 | 58.7 | ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples |
| TAPEX-Large | 57.0 | 57.5 | TAPEX: Table Pre-training via Learning a Neural SQL Executor |
| MAPO + TaBERT-Large (K = 3) | 52.2 | 51.8 | TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data |
| T5-3B (UnifiedSKG) | 50.65 | 49.29 | UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models |
| TAPAS-Large (pre-trained on SQA) | - | 48.8 | TAPAS: Weakly Supervised Table Parsing via Pre-training |
| Structured Attention | 43.7 | 44.5 | Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs |