qlora

// Memory-efficient fine-tuning with 4-bit quantization and LoRA adapters. Use when fine-tuning large models (7B+ parameters) on consumer GPUs, when VRAM is limited, or when standard LoRA still exceeds available memory. Builds on the lora skill.
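QLoRA's memory saving comes from keeping the frozen base weights in 4-bit precision while gradients and optimizer state exist only for the small LoRA adapters. A back-of-envelope sketch of why this fits on a consumer GPU (the model dimensions below are illustrative assumptions for a generic 7B decoder, and activation memory is ignored):

```python
# Rough VRAM arithmetic: full 16-bit fine-tuning vs QLoRA on a 7B model.
# All shapes are assumed values for illustration, not a specific checkpoint.

def gib(num_bytes: float) -> float:
    """Convert bytes to GiB."""
    return num_bytes / 2**30

base_params = 7e9  # assumed 7B-parameter base model

# Full 16-bit fine-tuning: weights (2 B) + grads (2 B) + Adam states (8 B) per param.
full_ft = gib(base_params * (2 + 2 + 8))

# QLoRA: frozen 4-bit base weights (0.5 B/param); training state only on adapters.
# Assume rank r=16 adapters on q/k/v/o projections of a 32-layer, 4096-dim model.
r, d, layers, proj_per_layer = 16, 4096, 32, 4
adapter_params = layers * proj_per_layer * 2 * r * d  # two low-rank factors A, B
qlora = gib(base_params * 0.5 + adapter_params * (2 + 2 + 8))

print(f"full 16-bit fine-tune: ~{full_ft:.0f} GiB")   # → ~78 GiB
print(f"QLoRA weights+state:   ~{qlora:.1f} GiB")     # → ~3.4 GiB
print(f"trainable params:      {adapter_params/1e6:.1f}M")
```

The order-of-magnitude gap, not the exact numbers, is the point: the 4-bit base plus adapter-only optimizer state lands well under a 24 GB consumer card even before activation-memory tricks like gradient checkpointing.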

stars: 24 · forks: 0 · updated: May 6, 2026