Cross-Lingual Question Answering on TyDiQA
Metrics
EM (Exact Match)
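EM scores a prediction 1 only if it string-matches one of the reference answers after normalization, and 0 otherwise; the corpus score is the percentage of questions answered exactly. Below is a minimal Python sketch of the computation, assuming SQuAD-style normalization (lowercasing, stripping punctuation, English articles, and extra whitespace); the official TyDi QA scorer applies its own language-aware preprocessing, so treat this as illustrative only.

```python
import re
import string


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace.

    Assumption: SQuAD-style normalization; the official TyDi QA scorer
    uses its own per-language preprocessing.
    """
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold_answers: list[str]) -> int:
    """1 if the normalized prediction equals any normalized gold answer."""
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))


# Corpus-level EM: percentage of questions answered exactly (toy example).
predictions = ["the Pathways system", "1889"]
golds = [["Pathways system"], ["1989"]]
em = 100.0 * sum(exact_match(p, g) for p, g in zip(predictions, golds)) / len(predictions)
print(f"EM = {em:.1f}")  # -> EM = 50.0
```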
Results
Performance of different models on this benchmark
| Model Name | EM | Paper Title | Repository |
|---|---|---|---|
| ByT5 (fine-tuned) | 81.9 | ByT5: Towards a token-free future with pre-trained byte-to-byte models | - |
| U-PaLM 62B (fine-tuned) | 78.4 | Transcending Scaling Laws with 0.1% Extra Compute | - |
| Flan-U-PaLM 540B (direct-prompting) | 68.3 | Scaling Instruction-Finetuned Language Models | - |
| Flan-PaLM 540B (direct-prompting) | 67.8 | Scaling Instruction-Finetuned Language Models | - |
| ByT5 XXL | 60.0 | ByT5: Towards a token-free future with pre-trained byte-to-byte models | - |
| U-PaLM-540B (CoT) | 54.6 | Transcending Scaling Laws with 0.1% Extra Compute | - |
| PaLM-540B (CoT) | 52.9 | PaLM: Scaling Language Modeling with Pathways | - |
| Decoupled | 42.8 | Rethinking embedding coupling in pre-trained language models | - |
| PaLM 2-S (one-shot) | - | PaLM 2 Technical Report | - |
| PaLM 2-M (one-shot) | - | PaLM 2 Technical Report | - |
| PaLM 2-L (one-shot) | - | PaLM 2 Technical Report | - |