gentic.news — AI News Intelligence Platform

Technique · training

LoRA (Low-Rank Adaptation)

Parameter-efficient fine-tuning that injects low-rank decomposition matrices into attention weights, training <1% of parameters.
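The core idea can be sketched in a few lines: the frozen pretrained weight W is augmented with a low-rank update (alpha/r)·BA, and only A and B are trained. A minimal numpy sketch, with hypothetical dimensions (d, r, alpha are illustrative, not from the paper's experiments):

```python
import numpy as np

# LoRA sketch: effective weight W' = W + (alpha/r) * B @ A, training only A and B.
d, r, alpha = 16, 2, 4  # hypothetical: d = hidden size, r = rank, alpha = scaling

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection, small init
B = np.zeros((d, r))                     # trainable up-projection, zero init

def lora_forward(x):
    # Original path plus scaled low-rank path; since B = 0 at init,
    # the adapted layer starts out exactly equal to the pretrained layer.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d)
assert np.allclose(lora_forward(x), W @ x)  # identity at initialization

# Trainable fraction: 2*d*r parameters instead of d*d.
frac = (A.size + B.size) / W.size
print(f"trainable fraction: {frac:.2%}")
```

At this toy scale the fraction is large (2·d·r / d² = 2r/d = 25%), but for real attention matrices with d in the thousands and r around 4–16, the same ratio drops below 1%.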

Origin: Microsoft, 2021-06 · Read origin paper → · Also known as: LoRA

Products deploying: 1 · Avg research → prod: 5y · First commercial deploy: 5y

Deployment timeline

  1. Qwen 3.6

    Deployed 2026-03-31 · Velocity 5y

    Qwen models support LoRA for efficient fine-tuning.

    Confidence: high

Techniques built on this
