Semantic Retrieval On Contract Discovery
Evaluation Metric
Soft-F1
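Soft-F1 scores partial matches between a retrieved span and the gold span rather than requiring an exact match. A minimal sketch of one common formulation over character offsets, assuming soft precision and recall are defined as overlap length divided by the retrieved and gold span lengths respectively (the benchmark's exact aggregation across documents may differ):

```python
def overlap(a, b):
    """Length of the overlap between two (start, end) character spans."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def soft_f1(retrieved, gold):
    """Harmonic mean of soft precision and soft recall for one span pair.

    Soft precision: how much of the retrieved span is gold.
    Soft recall: how much of the gold span was retrieved.
    """
    inter = overlap(retrieved, gold)
    if inter == 0:
        return 0.0
    precision = inter / (retrieved[1] - retrieved[0])
    recall = inter / (gold[1] - gold[0])
    return 2 * precision * recall / (precision + recall)

# A retrieved span covering half of the gold clause (and half noise)
# gets precision 0.5 and recall 0.5, hence Soft-F1 = 0.5.
print(soft_f1((0, 50), (25, 75)))  # → 0.5
```

With this definition an exact match scores 1.0 and a disjoint span scores 0.0, so the human baseline of 0.84 below reflects near-complete but not perfect clause boundaries.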
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Soft-F1 | Paper Title | Repository |
|---|---|---|---|
| Human baseline | 0.84 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | - |
| k-NN with sentence n-grams, GPT-2 embeddings, fICA | 0.51 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | - |
| DBTW, GPT-1 embeddings, fICA | 0.51 | Dynamic Boundary Time Warping for Sub-sequence Matching with Few Examples | - |
| LSA baseline | 0.39 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | - |
| Universal Sentence Encoder | 0.38 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | - |
| Sentence BERT | 0.31 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | - |