# Semantic Retrieval On Contract Discovery

## Metrics

- Soft-F1

## Results

Performance results of various models on this benchmark:
| Model Name | Soft-F1 | Paper Title | Repository |
|---|---|---|---|
| k-NN with sentence n-grams, GPT-2 embeddings, fICA | 0.51 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | |
| Human baseline | 0.84 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | |
| Sentence BERT | 0.31 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | |
| DBTW, GPT-1 embeddings, fICA | 0.51 | Dynamic Boundary Time Warping for Sub-sequence Matching with Few Examples | - |
| Universal Sentence Encoder | 0.38 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | |
| LSA baseline | 0.39 | Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines | |
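Soft-F1 gives partial credit when a retrieved span only partially overlaps the gold span, rather than requiring an exact match. A minimal sketch, assuming the metric is computed as F1 over token-level overlap between a predicted span and a reference span (the exact formulation in the Contract Discovery paper may differ, e.g. it may operate at the character level):

```python
from collections import Counter


def soft_f1(predicted_tokens, gold_tokens):
    """Token-overlap F1 between a predicted and a gold span.

    Hypothetical sketch: each span is treated as a multiset of tokens,
    so a near-miss retrieval still earns partial credit.
    """
    pred = Counter(predicted_tokens)
    gold = Counter(gold_tokens)
    overlap = sum((pred & gold).values())  # tokens shared by both spans
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)


# A prediction sharing two of three tokens with the gold span
# scores 2/3 rather than 0 under an exact-match criterion.
print(soft_f1("the governing law".split(), "governing law clause".split()))
```

A corpus-level score would then average (or micro-average) this per-span value over all queries in the benchmark.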