Conversational Response Selection on DSTC7
Evaluation Metric
1-of-100 Accuracy
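Under 1-of-100 accuracy, the model scores the ground-truth response against 99 distractor candidates for each dialogue context and is credited only when the true response receives the highest score. The sketch below shows the computation under one illustrative convention; the function name and the assumption that the ground-truth candidate sits at index 0 of each score array are not taken from any benchmark code.

```python
import numpy as np

def one_of_100_accuracy(score_batches):
    """Compute 1-of-100 accuracy.

    score_batches: list of 1-D arrays of length 100, where index 0 holds
    the model's score for the ground-truth response and the remaining 99
    entries hold scores for distractor candidates (assumed layout).
    Returns the fraction of contexts in which the ground-truth response
    received the highest score.
    """
    hits = sum(int(np.argmax(scores) == 0) for scores in score_batches)
    return hits / len(score_batches)

# Toy example: the true response is ranked first in one of two contexts,
# so the metric is 0.5.
example = [
    np.array([0.9] + [0.1] * 99),  # ground truth scored highest -> hit
    np.array([0.2] + [0.3] * 99),  # a distractor outscores it -> miss
]
print(one_of_100_accuracy(example))  # 0.5
```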
Evaluation Results
Reported 1-of-100 accuracy for each model on this benchmark:
| Model Name | 1-of-100 Accuracy | Paper Title | Repository |
|---|---|---|---|
| Bi-encoder | 66.3% | Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring | |
| Sequential Attention-based Network | 64.5% | Sequential Attention-based Network for Noetic End-to-End Response Selection | |
| Multi-context ConveRT | 71.2% | ConveRT: Efficient and Accurate Conversational Representations from Transformers | |
| Sequential Inference Models | 60.8% | Building Sequential Inference Models for End-to-End Response Selection | |
| Bi-encoder (v2) | 70.9% | Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring | |