Word Sense Disambiguation
Word Sense Disambiguation On Words In Context
Evaluation Metric: Accuracy
Evaluation Results: performance of each model on this benchmark.
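The metric is plain classification accuracy: each WiC example asks whether a target word carries the same sense in two sentences, so a system makes one True/False decision per example. Below is a minimal sketch of that scoring, assuming gold labels and predictions are boolean lists aligned by example index; the function name and sample values are illustrative, not part of the benchmark tooling.

```python
def wic_accuracy(gold: list[bool], pred: list[bool]) -> float:
    """Fraction of WiC examples where the predicted same-sense label matches the gold label."""
    if len(gold) != len(pred):
        raise ValueError("gold and pred must have the same length")
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

# Hypothetical example: 3 of 4 predictions match the gold labels -> 75.0% accuracy.
gold_labels = [True, False, True, True]
predictions = [True, False, False, True]
print(f"Accuracy: {wic_accuracy(gold_labels, predictions) * 100:.1f}%")
```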
| Model Name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| COSINE + Transductive Learning | 85.3 | Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach |
| PaLM 540B (finetuned) | 78.8 | PaLM: Scaling Language Modeling with Pathways |
| ST-MoE-32B 269B (fine-tuned) | 77.7 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| DeBERTa-Ensemble | 77.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| Vega v2 6B (fine-tuned) | 77.4 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| UL2 20B (fine-tuned) | 77.3 | UL2: Unifying Language Learning Paradigms |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 77.1 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| T5-XXL 11B | 76.9 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DeBERTa-1.5B | 76.4 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| ST-MoE-L 4.1B (fine-tuned) | 74.0 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| SenseBERT-large 340M | 72.1 | SenseBERT: Driving Some Sense into BERT |
| SenseBERT-base 110M | 70.3 | SenseBERT: Driving Some Sense into BERT |
| PaLM 2-L (one-shot) | 66.8 | PaLM 2 Technical Report |
| BERT-large 340M | 65.5 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| FLAN-T5-Large 783M | 64.7 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
| LaMini-F-T5 783M | 63.8 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
| Context2vec | 59.3 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| DeConf | 58.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| SW2V | 58.1 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| ELMo | 57.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |