
Low-Rank Adaptation LoRA


LoRA (Low-Rank Adaptation) is a popular technique for fine-tuning large language models (LLMs), originally proposed by researchers at Microsoft in the paper "LoRA: Low-Rank Adaptation of Large Language Models".

In today's fast-paced technology landscape, large AI models are driving breakthrough advances across many fields. However, customizing these models for specific tasks or datasets can be computationally and resource-intensive. LoRA is an efficient fine-tuning technique that makes it possible to adapt these advanced models to custom tasks and datasets without straining resources or incurring excessive cost. The basic idea is to freeze the original weight matrix and learn a low-rank update matrix that is added to it. In this context, an "adapter" is a set of low-rank matrices that, when added to a base model, produces a fine-tuned model. LoRA achieves performance close to full-model fine-tuning while requiring far less storage: a language model with billions of parameters may need only millions of trainable parameters for LoRA fine-tuning.
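The low-rank update described above can be sketched as follows. This is a minimal illustration with NumPy, not the paper's training setup; the dimensions, rank, and scaling hyperparameter are assumed for the example. The effective weight is W + (alpha / r) · B·A, where only the small matrices A and B would be trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions (assumed for illustration).
d_out, d_in = 64, 64   # frozen base layer shape
r = 4                  # LoRA rank, r << min(d_out, d_in)
alpha = 8              # scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # pretrained weight (frozen)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: update starts at 0

def lora_forward(x):
    """Base layer output plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen base layer.
assert np.allclose(lora_forward(x), W @ x)

# Trainable-parameter comparison: full fine-tuning vs. the LoRA adapter.
full_params = W.size           # d_out * d_in
lora_params = A.size + B.size  # r * (d_in + d_out)
print(full_params, lora_params)
```

Note the parameter saving even at this toy scale: the adapter holds r · (d_in + d_out) = 512 values versus 4096 for the full matrix, and the gap grows with layer size, which is why billion-parameter models need only millions of LoRA parameters.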

