What is a LoRA?
A LoRA (Low-Rank Adaptation) is a small add-on file that adjusts an AI model's behavior without replacing it. Think of it as a patch — it modifies specific aspects of the base model while keeping everything else intact.
For WAN video models, LoRAs can change:
- Style: Push output toward a specific anime style, retro look, flat colors, etc.
- Characters: Generate a specific character consistently
- Motion: Add particular motion patterns (bouncing, camera movement, etc.)
- Speed: Some LoRAs reduce the number of steps needed for generation
LoRA files are typically 100-800 MB in size. You place them in ComfyUI/models/loras/.
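As a minimal sketch, the destination path can be built like this (the filename here is a hypothetical placeholder, not a real LoRA):

```python
from pathlib import Path

def lora_destination(comfyui_root: str, lora_filename: str) -> Path:
    # ComfyUI scans the models/loras/ folder under its install root for LoRA files.
    return Path(comfyui_root) / "models" / "loras" / lora_filename

# Hypothetical filename, for illustration only:
dest = lora_destination("ComfyUI", "wan22_example_style.safetensors")
print(dest.as_posix())  # ComfyUI/models/loras/wan22_example_style.safetensors
```

After copying the file there, restart ComfyUI (or refresh the node's file list) so the new LoRA appears in the dropdown.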
Where to Find LoRAs
CivitAI is the main source for WAN video LoRAs. Search for "WAN 2.2 LoRA" and filter by the Video category.
When searching, pay attention to the base model listed on the LoRA page. WAN has different model variants (T2V 14B, I2V 14B, TI2V 5B) and LoRAs are trained for a specific variant. Using a LoRA on the wrong variant may produce broken output.
For the 14B T2V model used in the beginner guide, look for LoRAs labeled "WAN 2.2 T2V" or "T2V-A14B".
Types of LoRAs You Will Find
- Style LoRAs: Change the visual style (anime, cartoon, retro, flat color, etc.)
- Character LoRAs: Trained on specific anime characters for consistent generation
- Motion/Pose LoRAs: Add specific body motion or camera movement patterns
- Speed LoRAs (Lightning/Turbo): Reduce required steps from 20 to 4-6, with some quality trade-off
How to Add a LoRA in ComfyUI
The basic process is the same for any LoRA:
- Download the LoRA file and place it in ComfyUI/models/loras/
- Add a LoraLoaderModelOnly node to your workflow
- Place it between your model loader and the next node in the chain
- Select the LoRA file and set the strength
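In ComfyUI's API-format workflow JSON, the steps above amount to wiring the LoRA node between the model loader and the node that previously consumed the model. A rough sketch, where the node IDs, the GGUF filename, and the LoRA filename are all hypothetical placeholders:

```python
# Minimal fragment of a ComfyUI API-format workflow (a dict of node_id -> node).
# Each input that references another node is a [node_id, output_index] pair.
workflow = {
    "1": {
        "class_type": "UnetLoaderGGUF",  # provided by the ComfyUI-GGUF extension
        "inputs": {"unet_name": "wan22_t2v_14b_example.gguf"},  # hypothetical file
    },
    "2": {
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "model": ["1", 0],  # takes the MODEL output of the loader
            "lora_name": "wan22_example_style.safetensors",  # hypothetical file
            "strength_model": 1.0,
        },
    },
    "3": {
        "class_type": "ModelSamplingSD3",
        "inputs": {"model": ["2", 0], "shift": 8.0},  # now fed by the LoRA node
    },
}
```

The only rewiring is node 3's `model` input: it points at the LoRA node instead of the loader.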
The Node Chain
For the workflow from the beginner guide:
Unet Loader (GGUF) → LoraLoaderModelOnly → ModelSamplingSD3 → KSampler
You can stack multiple LoRAs by chaining multiple LoraLoaderModelOnly nodes. When stacked LoRAs modify overlapping concepts, their order and relative strengths can change the result, so experiment if a combination misbehaves.
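Stacking can be sketched as repeatedly rewiring the model reference; the helper, node IDs, and filenames below are all hypothetical:

```python
def stack_loras(workflow, model_ref, loras, next_id=10):
    """Append one LoraLoaderModelOnly node per (filename, strength) pair.

    `model_ref` is a [node_id, output_index] pair pointing at the base model.
    Returns the reference to wire into the downstream node (e.g. ModelSamplingSD3).
    """
    for name, strength in loras:
        node_id = str(next_id)
        workflow[node_id] = {
            "class_type": "LoraLoaderModelOnly",
            "inputs": {"model": model_ref, "lora_name": name,
                       "strength_model": strength},
        }
        model_ref = [node_id, 0]  # the next LoRA (or sampler chain) feeds from here
        next_id += 1
    return model_ref

wf = {}
out = stack_loras(wf, ["1", 0],
                  [("style_a.safetensors", 0.8),    # hypothetical files
                   ("motion_b.safetensors", 1.0)])
# `out` now points at the last LoRA node in the chain.
```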
Key Settings
- strength_model: Controls how strongly the LoRA affects the output. Most LoRAs work well between 0.7 and 1.2. Check the LoRA's CivitAI page for recommended values
- Trigger words: Many LoRAs require a specific trigger word in your prompt to activate. This is listed on the LoRA's page
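Putting both settings together, a sketch of how they appear in practice (the trigger word here is invented; use the one listed on your LoRA's page):

```python
trigger_word = "nijistyle"  # hypothetical; real trigger words are on the LoRA's page
prompt = f"{trigger_word}, a girl walking through a neon city at night, anime style"
strength_model = 0.9  # within the common 0.7-1.2 range; check the page for a recommendation
```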
Why LoraLoaderModelOnly?
WAN video models use a separate text encoder (T5) that does not need LoRA modification. The standard LoraLoader node tries to modify both the model and the text encoder, which can cause errors. Always use LoraLoaderModelOnly for WAN video LoRAs.
Tips
- Start without LoRAs. The base WAN 14B model already produces good results. Add LoRAs only when you want a specific effect that prompting alone cannot achieve
- Test one LoRA at a time. Adding too many LoRAs at once makes it hard to tell what is causing issues
- Check compatibility. A LoRA trained for the 14B model will not work correctly on the 5B model (different architecture). Always match the LoRA to your model variant
- Adjust strength gradually. If a LoRA produces artifacts or oversaturated colors, lower the strength. If the effect is barely visible, raise it
What Next?
- Image to Video (I2V): The I2V pipeline produces higher quality results. See the I2V guide
- Browse NijiTube: Many creators share their LoRA settings alongside their videos — check what works for others
- CivitAI: Browse the WAN 2.2 LoRA category to see example outputs and find LoRAs that match your style