Conversational Response Selection On E
Metrics
R10@1
R10@2
R10@5
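R10@k is the standard retrieval metric for this task: each context comes with 10 candidate responses (one correct), and R10@k is the fraction of examples whose correct response is ranked in the model's top k. A minimal sketch of this computation, assuming we already have the rank the model assigned to each positive candidate:

```python
def recall_at_k(positive_ranks, k):
    """R10@k: fraction of examples whose single correct response
    lands in the top-k of the model's ranking of 10 candidates.

    `positive_ranks` holds the 0-based rank of the correct response
    for each example (0 = ranked first).
    """
    hits = sum(1 for rank in positive_ranks if rank < k)
    return hits / len(positive_ranks)

# Hypothetical ranks for 4 examples: correct response placed
# 1st, 1st, 3rd, and 6th among the 10 candidates.
ranks = [0, 0, 2, 5]
r10_at_1 = recall_at_k(ranks, 1)  # 2 of 4 in top-1
r10_at_2 = recall_at_k(ranks, 2)  # 2 of 4 in top-2
r10_at_5 = recall_at_k(ranks, 5)  # 3 of 4 in top-5
```

By construction R10@1 ≤ R10@2 ≤ R10@5, which matches the monotone columns in the table below.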
Results
Performance results of various models on this benchmark.
Comparison table
| Model name | R10@1 | R10@2 | R10@5 |
|---|---|---|---|
| multi-hop-selector-network-for-multi-turn | 0.606 | 0.770 | 0.937 |
| one-time-of-interaction-may-not-be-enough-go | 0.563 | 0.768 | 0.950 |
| efficient-dynamic-hard-negative-sampling-for | 0.957 | 0.986 | 0.997 |
| fine-grained-post-training-for-improving | 0.870 | 0.956 | 0.993 |
| sequential-matching-network-a-new | 0.453 | 0.654 | 0.886 |
| dialogue-response-selection-with-hierarchical | 0.721 | 0.896 | 0.993 |
| grayscale-data-construction-and-multi-level | 0.613 | 0.786 | 0.964 |
| speaker-aware-bert-for-multi-turn-response | 0.704 | 0.879 | 0.985 |
| interactive-matching-network-for-multi-turn | 0.621 | 0.797 | 0.964 |
| utterance-to-utterance-interactive-matching | 0.616 | 0.806 | 0.966 |
| do-response-selection-models-really-know-what | 0.762 | 0.905 | 0.986 |
| modeling-multi-turn-conversation-with-deep | 0.501 | 0.700 | 0.921 |
| learning-an-effective-context-response | 0.776 | 0.919 | 0.991 |
| two-level-supervised-contrastive-learning-for-1 | 0.927 | 0.974 | 0.997 |
| contextual-masked-auto-encoder-for-retrieval | 0.930 | 0.977 | 0.997 |