Parameter-Efficient Fine-Tuning on BoolQ
Evaluation Metric
Accuracy (%)
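For clarity, accuracy here is the usual fraction of BoolQ yes/no questions answered correctly, reported as a percentage:

$$\text{Accuracy} = \frac{\#\,\text{correct predictions}}{\#\,\text{evaluated examples}} \times 100$$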
Evaluation Results
Performance of each model on this benchmark:
Model Name | Accuracy (%) | Paper Title | Repository |
---|---|---|---|
LLaMA2-7b | 81.93 | DoRA: Weight-Decomposed Low-Rank Adaptation | |
LLaMA2-7b | 82.63 | QLoRA: Efficient Finetuning of Quantized LLMs | |
LLaMA2-7b | 80.28 | LoRA: Low-Rank Adaptation of Large Language Models | |
LLaMA2-7b | 82.63 | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | |
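For reference, below is a minimal sketch of how a LoRA adapter might be attached to a LLaMA2-7b checkpoint and scored for BoolQ accuracy with the Hugging Face `peft` library. The checkpoint ID (`meta-llama/Llama-2-7b-hf`), the LoRA rank and target modules, and the true/false next-token scoring heuristic are all illustrative assumptions; the accuracies in the table above come from the cited papers, not from this sketch.

```python
# Minimal LoRA-on-BoolQ sketch. Checkpoint name and all hyperparameters
# are illustrative assumptions, not the settings used by the cited papers.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; access is gated

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative LoRA hyperparameters; each paper in the table tunes its own.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable


def to_prompt(example):
    """Cast a BoolQ row into a true/false prompt for a causal LM."""
    return (
        f"Passage: {example['passage']}\n"
        f"Question: {example['question']}\n"
        "Answer (true or false):"
    )


@torch.no_grad()
def boolq_accuracy(dataset, limit=200):
    """Score by comparing the model's next-token logits for ' true' vs.
    ' false' after the prompt, instead of free-form generation."""
    true_id = tokenizer(" true", add_special_tokens=False).input_ids[0]
    false_id = tokenizer(" false", add_special_tokens=False).input_ids[0]
    rows = dataset.select(range(min(limit, len(dataset))))
    correct = 0
    for row in rows:
        inputs = tokenizer(to_prompt(row), return_tensors="pt").to(model.device)
        logits = model(**inputs).logits[0, -1]  # logits for the next token
        pred = logits[true_id] > logits[false_id]
        correct += int(pred == bool(row["answer"]))
    return 100.0 * correct / len(rows)


val = load_dataset("boolq", split="validation")
print(f"BoolQ accuracy: {boolq_accuracy(val):.2f}%")
```

Scoring a single next token for " true" vs. " false" keeps the accuracy computation deterministic and cheap; the papers listed above each use their own prompting and training recipes, so numbers from this sketch are not directly comparable to the table.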