Parameter-Efficient Fine-Tuning on HellaSwag
Evaluation metric
Accuracy (%)
Evaluation results
Performance of each model on this benchmark
| Model | Accuracy (%) | Paper Title |
|---|---|---|
| LLaMA2-7b | 76.68 | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs |
| LLaMA2-7b | 76.67 | LoRA: Low-Rank Adaptation of Large Language Models |
| LLaMA2-7b | 76.27 | DoRA: Weight-Decomposed Low-Rank Adaptation |
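The accuracy figures above are the fraction of HellaSwag items for which the model ranks the correct ending highest among the four candidates. A minimal sketch of that computation (the scores below are toy values for illustration, not the papers' actual per-ending log-likelihoods):

```python
def multiple_choice_accuracy(scores, labels):
    """Multiple-choice accuracy: the prediction for each item is the
    index of the highest-scoring candidate ending."""
    correct = sum(
        max(range(len(s)), key=s.__getitem__) == y
        for s, y in zip(scores, labels)
    )
    return 100.0 * correct / len(labels)

# Toy example: 3 items, 4 candidate endings each (as in HellaSwag).
scores = [
    [0.1, 0.7, 0.1, 0.1],  # model favors ending 1
    [0.4, 0.2, 0.3, 0.1],  # model favors ending 0
    [0.2, 0.2, 0.5, 0.1],  # model favors ending 2
]
labels = [1, 0, 3]

print(multiple_choice_accuracy(scores, labels))  # → 66.66666666666667
```

In practice, evaluation harnesses score each ending by the model's (length-normalized) log-likelihood of the ending given the context, then apply exactly this argmax-and-compare step.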