Technique · training
LoRA (Low-Rank Adaptation)
Parameter-efficient fine-tuning that freezes the pretrained weights and injects trainable low-rank decomposition matrices into the attention weight matrices, so training updates under 1% of the model's parameters.
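The core idea fits in a few lines. Below is a minimal PyTorch sketch, not the reference implementation (the class name `LoRALinear`, the rank, and the init constants are illustrative): the pretrained weight W is frozen, and a trainable rank-r update (alpha/r)·BA is added to its output.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x,  with A: (r, d_in), B: (d_out, r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad_(False)
        self.scale = alpha / r
        # B starts at zero so the adapted layer initially equals the base layer
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Wrapping one 768x768 attention projection adds 2 * 768 * 8 = 12,288
# trainable parameters against ~590k frozen ones; across a full transformer,
# where most weights are never adapted, the trainable share drops below 1%.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288
```

After training, the update B A can be merged into W, so inference costs nothing extra and adapters can be swapped per task.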
Deployment timeline
- Products deploying: 1
- Avg research → prod: 5 years
- First commercial deploy: 5 years