NijiTube

Getting Started with AI Anime Video Generation

What You'll Need


  • A GPU with at least 8GB VRAM (RTX 3070 or better recommended)
  • ComfyUI installed on your system
  • WAN 2.2 model downloaded

Step 1: Install ComfyUI

ComfyUI is a node-based interface for Stable Diffusion that makes it easy to build complex generation workflows.


  • Clone the ComfyUI repository
  • Install the required Python dependencies
  • Download the WAN 2.2 model and place it in the models directory
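
The three steps above can be sketched as commands. Assembling them as Python lists keeps the sketch testable without actually running anything; the repository URL is ComfyUI's public GitHub address, and the checkpoint path follows ComfyUI's standard models layout (the exact subfolder for WAN 2.2 weights may differ, so check the model's release notes).

```python
# Sketch of the install steps as shell commands, assembled in Python.
# The model path follows ComfyUI's standard layout; the exact WAN 2.2
# subfolder may differ depending on how the weights are packaged.
REPO_URL = "https://github.com/comfyanonymous/ComfyUI"

install_steps = [
    ["git", "clone", REPO_URL],                            # clone the repository
    ["pip", "install", "-r", "ComfyUI/requirements.txt"],  # Python dependencies
]

# Destination for the downloaded WAN 2.2 checkpoint:
MODEL_DIR = "ComfyUI/models/checkpoints"

for cmd in install_steps:
    print(" ".join(cmd))
```

Running each printed command in a terminal (in order) performs the install; the model file then goes under MODEL_DIR.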

Step 2: Set Up Your First Workflow

Start with a simple txt2img workflow:

  • Add a "Load Checkpoint" node and select WAN 2.2
  • Add a "CLIP Text Encode" node for your prompt
  • Connect them to a "KSampler" node
  • Add a "VAE Decode" node and "Save Image" node
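
The node list above can be written out in ComfyUI's API "prompt" format, where each node is keyed by an id, names its class_type, and references other nodes' outputs as [node_id, output_slot] pairs. Note that KSampler also requires a negative conditioning and a latent input, so a second CLIP Text Encode node and an Empty Latent Image node are included below; the checkpoint filename and prompt text are placeholders.

```python
import json

# The workflow above in ComfyUI's API format. Node ids are arbitrary strings;
# inputs reference other nodes as [node_id, output_slot]. The checkpoint
# filename and prompts are placeholders, not real asset names.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "wan2.2.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",                      # positive prompt
          "inputs": {"clip": ["1", 1], "text": "1girl, anime, cherry blossoms"}},
    "3": {"class_type": "CLIPTextEncode",                      # negative prompt
          "inputs": {"clip": ["1", 1], "text": "lowres, blurry"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 832, "height": 1216, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 30, "cfg": 7.5,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "nijitube"}},
}

print(json.dumps(workflow, indent=2))
```

Wired up in the ComfyUI canvas, this is the same graph as the four-node list above; as JSON it can also be queued by POSTing {"prompt": workflow} to a running ComfyUI server's /prompt endpoint (default http://127.0.0.1:8188).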

Step 3: Generate Your First Video

For video generation with WAN 2.2:

  • Set the frame count to 16-32 frames
  • Use a resolution of 832x1216 for portrait or 1216x832 for landscape
  • Recommended settings: 30 steps, CFG 7.5, Euler sampler
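
As a quick sanity check on clip length, the settings above work out to roughly one to two seconds of video, assuming 16 fps playback (WAN variants differ in frame rate, so check your model's documentation):

```python
# Video settings from the steps above; the 16 fps playback rate is an
# assumption, as WAN variants differ. Swap width/height for landscape.
settings = {
    "frames": 32,     # upper end of the 16-32 range
    "width": 832,     # portrait; 1216x832 for landscape
    "height": 1216,
    "steps": 30,
    "cfg": 7.5,
    "sampler": "euler",
}

FPS = 16  # assumed playback rate
duration_s = settings["frames"] / FPS
print(f"{settings['frames']} frames at {FPS} fps -> {duration_s:.1f} s of video")
```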

Tips for Better Results

  • Use the -tune animation flag when encoding with FFmpeg
  • Add quality tags like "masterpiece, best quality" to your prompts
  • Experiment with different LoRA models for style variation
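
The FFmpeg tip above might look like the following when encoding a folder of generated frames into an MP4. The frame pattern and output filename are placeholders; -tune animation is x264's tuning preset for flat, cel-shaded content, and -crf/-pix_fmt are common quality and compatibility choices rather than requirements.

```python
# Assemble an FFmpeg encode command for a folder of generated frames.
# "frame_%05d.png" and "output.mp4" are placeholder names.
framerate = 16  # match the rate your frames were generated for

cmd = [
    "ffmpeg",
    "-framerate", str(framerate),
    "-i", "frame_%05d.png",   # input frame pattern (placeholder)
    "-c:v", "libx264",
    "-tune", "animation",     # x264 preset for cel-shaded content
    "-crf", "18",             # near-lossless quality
    "-pix_fmt", "yuv420p",    # broad player compatibility
    "output.mp4",
]
print(" ".join(cmd))
# To actually encode: subprocess.run(cmd, check=True)
```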