Parameter Efficient Fine Tuning On Hellaswag
Evaluation Metric
Accuracy (%)
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| LLaMA2-7b | 76.27 | DoRA: Weight-Decomposed Low-Rank Adaptation | |
| LLaMA2-7b | 76.67 | LoRA: Low-Rank Adaptation of Large Language Models | |
| LLaMA2-7b | 76.68 | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | |
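The entries above are all parameter-efficient fine-tuning methods built on low-rank weight updates. As a minimal sketch (not the papers' code; all shapes and names here are illustrative), the core LoRA idea is to keep the pretrained weight `W` frozen and learn only a low-rank delta `B @ A`, scaled by `alpha / r`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; in practice r is much smaller than the weight dimensions.
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

# LoRA-style adapted weight: W + (alpha / r) * B A
W_adapted = W + (alpha / r) * (B @ A)

# With B initialized to zero, the adapted weight equals the pretrained one,
# so fine-tuning starts exactly from the base model's behavior.
print(np.allclose(W_adapted, W))  # True at initialization
```

Only `A` and `B` are trained, so the number of trainable parameters is `r * (d_in + d_out)` per adapted matrix instead of `d_in * d_out`, which is what makes these methods parameter-efficient. DoRA and GIFT-SW refine this basic scheme (weight decomposition and salient-weight selection, respectively).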