Semantic Analysis
Word Sense Disambiguation On Words In Context
Evaluation Metric: Accuracy
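WiC frames word sense disambiguation as a binary decision: given one target word that appears in two sentences, a model predicts whether the word carries the same sense in both. Accuracy is the fraction of instances answered correctly. The snippet below is a minimal sketch of that computation; the `compute_accuracy` helper and the toy labels are illustrative assumptions, not the scoring code behind this leaderboard.

```python
def compute_accuracy(predictions, gold_labels):
    """Fraction of WiC instances where the predicted same-sense label matches the gold label."""
    assert len(predictions) == len(gold_labels) and gold_labels
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)


if __name__ == "__main__":
    # Each WiC instance asks whether the target word has the same sense in both
    # sentences; labels are True (same sense) or False (different sense).
    gold = [True, False, True, False]   # hypothetical gold labels
    pred = [True, False, False, False]  # hypothetical model predictions
    print(f"Accuracy: {compute_accuracy(pred, gold):.1%}")  # prints "Accuracy: 75.0%"
```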
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Accuracy | Paper Title |
|---|---|---|
| COSINE + Transductive Learning | 85.3 | Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach |
| PaLM 540B (finetuned) | 78.8 | PaLM: Scaling Language Modeling with Pathways |
| ST-MoE-32B 269B (fine-tuned) | 77.7 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| DeBERTa-Ensemble | 77.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| Vega v2 6B (fine-tuned) | 77.4 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| UL2 20B (fine-tuned) | 77.3 | UL2: Unifying Language Learning Paradigms |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 77.1 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| T5-XXL 11B | 76.9 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DeBERTa-1.5B | 76.4 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| ST-MoE-L 4.1B (fine-tuned) | 74 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| SenseBERT-large 340M | 72.1 | SenseBERT: Driving Some Sense into BERT |
| SenseBERT-base 110M | 70.3 | SenseBERT: Driving Some Sense into BERT |
| PaLM 2-L (one-shot) | 66.8 | PaLM 2 Technical Report |
| BERT-large 340M | 65.5 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| FLAN-T5-Large 783M | 64.7 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
| LaMini-F-T5 783M | 63.8 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
| Context2vec | 59.3 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| DeConf | 58.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| SW2V | 58.1 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| ElMo | 57.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |