Cross-Lingual Question Answering on XQuAD
Metric: Average F1

Performance results of various models on this benchmark:

| Model Name | Average F1 | Paper Title |
| --- | --- | --- |
| mLUKE-E | 74.2 | mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models |
| Coupled | - | Rethinking embedding coupling in pre-trained language models |
| ByT5 XXL | - | ByT5: Towards a token-free future with pre-trained byte-to-byte models |
| Decoupled | - | Rethinking embedding coupling in pre-trained language models |
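The "Average F1" column reports the SQuAD-style token-level F1 score averaged over XQuAD's languages. As a rough illustration of how that number is computed, here is a minimal sketch: it omits the answer normalization (lowercasing aside, official SQuAD evaluation also strips punctuation and articles) and the max-over-multiple-references step used in real evaluation scripts, and the function names are illustrative, not from any particular library.

```python
import collections

def token_f1(prediction: str, ground_truth: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer span.

    Simplified: whitespace tokenization and lowercasing only; official
    evaluation also normalizes punctuation and articles.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    # Overlap counted with multiplicity, as in the SQuAD evaluation script.
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def average_f1(per_language_f1: dict) -> float:
    """XQuAD's headline number: the unweighted mean of per-language F1 scores."""
    return sum(per_language_f1.values()) / len(per_language_f1)
```

For example, a prediction "the cat sat" against a gold answer "the cat" yields precision 2/3 and recall 1, hence F1 = 0.8; the leaderboard value is the mean of such scores computed per language.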