Parameter Efficient Fine Tuning On Winogrande
Evaluation Metric
Accuracy (%)
Evaluation Results
Performance of each model on this benchmark
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| LLaMA2-7b | 69.85 | LoRA: Low-Rank Adaptation of Large Language Models | |
| LLaMA2-7b | 70.80 | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | |
| LLaMA2-7b | 70.09 | DoRA: Weight-Decomposed Low-Rank Adaptation | |
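For context, the entries above fine-tune LLaMA2-7b with parameter-efficient adapters (LoRA, DoRA, GIFT-SW) rather than updating all model weights. Below is a minimal sketch of a LoRA setup using the Hugging Face PEFT library; the model identifier and all hyperparameters are illustrative assumptions, not the configurations reported in the cited papers.

```python
# Minimal LoRA sketch with Hugging Face PEFT.
# Hyperparameters (rank, alpha, target modules) are assumed values for
# illustration, not the settings used by the papers in the table above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires access approval
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Low-rank adapters are injected into the attention projections; only the
# adapter weights are trained while the 7B base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

After fine-tuning, accuracy on WinoGrande is typically measured with an evaluation harness such as EleutherAI's lm-evaluation-harness (winogrande task), which is the metric reported in the table.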