
Long-Range Modeling on SCROLLS

Metrics

Avg. — average score across all tasks
CNLI — ContractNLI (exact match)
GovRep — GovReport (ROUGE-1 / ROUGE-2 / ROUGE-L)
Nrtv — NarrativeQA (F1)
QALT EM-T/H — QuALITY (exact match on the full test set / hard subset)
QMSum — QMSum (ROUGE-1 / ROUGE-2 / ROUGE-L)
Qspr — Qasper (F1)
SumScr — SummScreenFD (ROUGE-1 / ROUGE-2 / ROUGE-L)
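For reference, the SCROLLS subtasks can be loaded programmatically. The snippet below is a minimal sketch, assuming the benchmark is mirrored on the Hugging Face Hub under `tau/scrolls`, that the GovReport subset uses the configuration name `gov_report`, and that examples expose `input` / `output` fields; none of these names are confirmed by this page.

```python
# Minimal sketch for inspecting one SCROLLS subtask.
# Assumptions (not stated on this leaderboard page):
#   - the benchmark is hosted on the Hugging Face Hub as "tau/scrolls"
#   - the GovReport subset uses the config name "gov_report"
#   - each example has "input" (long source document) and "output" (reference) fields
from datasets import load_dataset

# Load the GovReport summarization subset of SCROLLS.
gov_report = load_dataset("tau/scrolls", "gov_report")

# Peek at one training example: a long input document paired with a reference summary.
example = gov_report["train"][0]
print(example["input"][:500])   # long source document (truncated for display)
print(example["output"][:500])  # reference summary (truncated for display)
```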

Results

Performance of the different models on this benchmark:

| Model Name | Avg. | CNLI | GovRep | Nrtv | QALT EM-T/H | QMSum | Qspr | SumScr | Paper Title |
|---|---|---|---|---|---|---|---|---|---|
| LongT5 Base | 38.6 | 85.6 | 57.7 / 30.0 / 31.4 | 23.0 | 37.9 / 36.6 | 33.9 / 11.0 / 22.8 | 46.6 | 34.8 / 9.6 / 21.1 | LongT5: Efficient Text-To-Text Transformer for Long Sequences |
| BART-LS | 39.76 | 87.1 | 59.4 / 29.8 / 30.8 | 26.2 | 37.8 / 34.0 | 35.1 / 11.0 / 22.0 | 48.7 | 37.7 / 10.2 / 21.5 | Adapting Pretrained Text-to-Text Models for Long Text Sequences |
| BART-large SLED | 37.99 | 87.3 | 57.5 / 26.3 / 27.4 | 24.1 | 34.8 / 34.8 | 34.2 / 11.0 / 22.0 | 46.9 | 35.2 / 8.7 / 19.4 | Efficient Long-Text Understanding with Short-Text Models |
| LongT5 XL | 42.53 | 88.2 | 61.1 / 32.3 / 33.7 | 29.3 | 46.0 / 42.1 | 34.9 / 11.8 / 23.5 | 53.1 | 35.8 / 9.6 / 21.1 | LongT5: Efficient Text-To-Text Transformer for Long Sequences |
| LongT5 Large | 41.03 | 87.3 | 61.3 / 32.2 / 33.8 | 27.2 | 40.6 / 38.6 | 35.1 / 12.0 / 23.3 | 52.3 | 60.3 / 31.1 / 32.8 | LongT5: Efficient Text-To-Text Transformer for Long Sequences |
| Naive | 19.35 | 66 | 45.3 / 17.9 / 20.8 | 1.5 | 25.2 / 26.1 | 14.2 / 2.0 / 9.3 | 3.4 | 19.6 / 1.8 / 11.0 | SCROLLS: Standardized CompaRison Over Long Language Sequences |
| UL2 20B | - | 88.7 | - | - | - | - | - | - | UL2: Unifying Language Learning Paradigms |
| BART Base | 29.01 | 77.4 | 47.9 / 18.6 / 22.7 | 15.4 | 26.0 / 25.9 | 30.2 / 8.7 / 20.7 | 26.3 | 27.2 / 4.9 / 16.7 | SCROLLS: Standardized CompaRison Over Long Language Sequences |
| PEGASUS-X-Base | - | - | 59.3 / 29.3 / 30.9 | - | - | 32.9 / 9.8 / 21.4 | - | 35.0 / 8.9 / 20.4 | Investigating Efficiently Extending Transformers for Long Input Summarization |
| LED Base | - | - | - | - | - | - | - | - | SCROLLS: Standardized CompaRison Over Long Language Sequences |
| CoLT5 XL | 43.51 | 88.4 | 61.3 / 32.2 / 33.8 | 31.1 | 48.1 / 43.8 | 36.2 / 12.9 / 24.3 | 53.9 | 36.4 / 10.2 / 21.7 | CoLT5: Faster Long-Range Transformers with Conditional Computation |
| UL2 | 37.87 | - | 53.6 / 26.1 / 28.8 | 24.2 | 45.8 / 40.7 | 31.1 / 8.5 / 20.4 | 37.6 | 32.9 / 7.8 / 19.4 | UL2: Unifying Language Learning Paradigms |
| PEGASUS-X | - | - | 60.3 / 30.0 / 31.5 | - | - | 33.2 / 9.6 / 21.6 | - | 35.7 / 9.1 / 20.6 | Investigating Efficiently Extending Transformers for Long Input Summarization |