DV-ngrams-cosine + NB-weighted BON (re-evaluated) | 93.68 | The Document Vectors Using Cosine Similarity Revisited | |
Llama-2-70b-chat (0-shot) | 95.39 | LlamBERT: Large-scale low-cost data annotation in NLP | |
FLAN 137B (few-shot, k=2) | 95.0 | Finetuned Language Models Are Zero-Shot Learners | |
DV-ngrams-cosine + RoBERTa.base | 95.92 | The Document Vectors Using Cosine Similarity Revisited | |
DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | 95.94 | The Document Vectors Using Cosine Similarity Revisited | |