gentic.news — AI News Intelligence Platform

Technique · training

QLoRA

LoRA fine-tuning on 4-bit quantized base weights, enabling fine-tuning of a 65B-parameter model on a single 48 GB GPU.

Origin: University of Washington, May 2023 · Read origin paper → · Also known as: Quantized LoRA

Products deploying: 0
Avg research → prod: n/a
First commercial deploy: n/a

Deployment timeline

No verified deployments yet in our tracked product set.
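The core mechanism behind the technique — keep the base weights frozen in 4-bit form, dequantize them on the fly in the forward pass, and train only small low-rank adapter matrices in higher precision — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names are invented for this sketch, and it uses a simplified block-wise absmax quantizer in place of the paper's NF4 data type with double quantization.

```python
import numpy as np

def quantize_4bit(w, block_size=64):
    """Block-wise absmax quantization to 15 signed levels (-7..7).

    Illustrative stand-in for NF4; assumes w.size is divisible by block_size.
    """
    flat = w.reshape(-1, block_size)
    absmax = np.abs(flat).max(axis=1, keepdims=True)
    q = np.round(flat / absmax * 7).astype(np.int8)
    return q, absmax

def dequantize_4bit(q, absmax, shape):
    """Recover an approximate float weight matrix from codes + per-block scales."""
    return (q.astype(np.float32) / 7 * absmax).reshape(shape)

class QLoRALinear:
    """Frozen 4-bit base weight plus trainable LoRA adapters (sketch)."""

    def __init__(self, w, r=8, alpha=16, rng=None):
        rng = rng or np.random.default_rng(0)
        self.shape = w.shape
        # Base weights are quantized once and never updated.
        self.q, self.absmax = quantize_4bit(w)
        out_features, in_features = w.shape
        # Only A and B are trained; B starts at zero so the adapter
        # initially contributes nothing (standard LoRA init).
        self.A = rng.normal(0.0, 0.01, (r, in_features))
        self.B = np.zeros((out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Dequantize the frozen base weight on the fly, then add the
        # low-rank update B @ A scaled by alpha / r.
        w = dequantize_4bit(self.q, self.absmax, self.shape)
        return x @ w.T + (x @ self.A.T) @ self.B.T * self.scaling
```

During fine-tuning, gradients flow only through `A` and `B`, so optimizer state is kept for a tiny fraction of the parameters; this is what makes single-GPU fine-tuning of very large models feasible.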
