Technique · training
QLoRA
LoRA fine-tuning on top of a frozen, 4-bit quantized base model, enabling fine-tuning of a 65B-parameter model on a single 48 GB GPU.
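The core mechanics can be sketched in a few lines of NumPy: the base weight matrix is stored in 4-bit form and only dequantized on the fly during the forward pass, while the trainable parameters are the small low-rank LoRA factors. This is a simplified illustration under stated assumptions: it uses plain absmax 4-bit quantization (the actual QLoRA method uses the NF4 data type with double quantization), and all names here are illustrative, not from any library.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_4bit(w):
    # Absmax 4-bit quantization: map weights to integers in [-7, 7].
    # (A simplification; QLoRA itself uses the NF4 data type.)
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Frozen base weight, kept only in its quantized form.
W = rng.normal(size=(16, 16)).astype(np.float32)
Wq, s = quantize_4bit(W)

# Trainable low-rank LoRA adapters (rank r = 4). B is zero-initialized
# so the adapter contributes nothing before training begins.
r = 4
A = rng.normal(scale=0.01, size=(r, 16)).astype(np.float32)
B = np.zeros((16, r), dtype=np.float32)

def forward(x):
    # Dequantize the base weights on the fly, then add the low-rank update.
    return x @ dequantize(Wq, s).T + x @ (B @ A).T

x = rng.normal(size=(2, 16)).astype(np.float32)
y = forward(x)
```

Memory savings come from storing `W` at 4 bits per weight while gradients flow only into the tiny `A` and `B` matrices; the real implementation additionally pages optimizer state and backpropagates through the dequantization in fused kernels.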
Products deploying: 0
Avg research → prod: —
First commercial deploy: —
Deployment timeline
No verified deployments yet in our tracked product set.