
SingLoRA: Low Rank Adaptation Using a Single Matrix

David Bensaïd, Noam Rotstein, Roy Velich, Daniel Bensaïd, Ron Kimmel
Abstract

Low-Rank Adaptation (LoRA) has significantly advanced parameter-efficient fine-tuning of large pretrained models. LoRA augments the pre-trained weights of a model by adding the product of two smaller matrices that together form a low-rank matrix update. Recent research has shown that scale disparities between these two matrices often cause unstable training dynamics, leading to suboptimal performance. In this paper, we propose SingLoRA, which reformulates low-rank adaptation by learning the weight update as a decomposition of a single low-rank matrix multiplied by its transpose. This simple design inherently removes inter-matrix scale conflicts, ensuring stable optimization, and roughly halves the parameter count. We analyze SingLoRA within the infinite-width neural network framework, showing that it guarantees stable feature learning by construction. Extensive experiments on multiple tasks validate these benefits. In common sense reasoning, fine-tuning Llama 7B on MNLI with SingLoRA achieves 91.3% accuracy, surpassing LoRA (89.1%) and LoRA+ (90.2%), while using only 60% of their parameter budget. In image generation, fine-tuning Stable Diffusion with SingLoRA significantly improves image fidelity on DreamBooth, achieving a DINO similarity score of 0.151, compared to scores of 0.148 and 0.143 for DoRA and LoRA, respectively.
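
To make the single-matrix reformulation concrete, below is a minimal PyTorch sketch of a SingLoRA-style linear adapter. This is not the authors' implementation: the class name SingLoRALinear, the rank and alpha hyperparameters, the small random initialization, and the square-weight assumption are illustrative choices, and any initialization schedules or other details from the paper are omitted.

```python
import torch
import torch.nn as nn


class SingLoRALinear(nn.Module):
    """Frozen linear layer augmented with a single-matrix low-rank update.

    Effective weight: W + (alpha / r) * A @ A.T, where A is the only trainable
    parameter. This sketch assumes a square weight matrix (d_in == d_out), as in
    the attention projections typically targeted by LoRA-style adapters.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        d_out, d_in = base.weight.shape
        assert d_in == d_out, "this sketch assumes a square weight matrix"
        self.base = base
        self.base.weight.requires_grad_(False)   # keep pretrained weights frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / rank
        # Single trainable factor of shape (d, r): roughly half the parameters
        # of LoRA's pair of matrices at the same rank.
        self.A = nn.Parameter(0.01 * torch.randn(d_in, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + b + scale * x (A A^T); the update A A^T is built from a
        # single matrix, so there is no second factor whose scale could drift.
        update = self.scale * (self.A @ self.A.t())
        return self.base(x) + x @ update


# Example usage: wrap a 768x768 projection and fine-tune only A.
layer = SingLoRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(4, 768))
```

Because the update involves only one learned matrix, there is no pair of factors whose relative scale must stay balanced during training, which is the stability property the abstract highlights.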