
lora

Parameter-efficient fine-tuning with Low-Rank Adaptation (LoRA). Use when fine-tuning large language models with limited GPU memory, creating task-specific adapters, or when you need to train multiple specialized models from a single base.
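The core LoRA idea behind this skill can be sketched in a few lines: instead of updating a full weight matrix W (shape d_out x d_in), LoRA freezes W and trains two small matrices B (d_out x r) and A (r x d_in) with rank r much smaller than the layer dimensions, so the adapted weight is W + (alpha / r) * B @ A. The sketch below is illustrative only, not the skill's actual code; `lora_adapted_weight`, `alpha`, and the toy matrices are hypothetical names chosen for the example.

```python
# Minimal, pure-Python sketch of the LoRA low-rank update (illustrative).
# W is frozen; only the small factors A and B would be trained.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_adapted_weight(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where r is the LoRA rank."""
    r = len(A)            # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update, shape d_out x d_in
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x3 base weight plus a rank-1 adapter:
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
A = [[1.0, 1.0, 1.0]]    # r x d_in  = 1x3
B = [[0.5], [0.25]]      # d_out x r = 2x1
W_prime = lora_adapted_weight(W, A, B, alpha=1.0)
# Trainable parameters: r * (d_in + d_out) = 5 here, versus 6 for full
# fine-tuning of this toy layer; the saving grows with layer size, and
# separate (A, B) pairs give separate task adapters over one base W.
```

Because only A and B change per task, several adapters can share one frozen base model, which is what makes the multiple-specialized-models use case cheap.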

Stars: 24 · Forks: 0 · Updated: May 6, 2026 at 04:35