If you have explored video generation lately, you already know the biggest frustration: motion control is still rough. Most models can generate good-looking frames, but the moment you ask them to move something precisely, say, shift a character's hand, rotate an object, or guide a camera path, they start falling apart. They usually depend on extra encoders, fragile architectures, and limited fine-tuning options. Wan-Move, a new motion-control framework from Alibaba Group and Tongji Lab released under Apache 2.0, aims to fix exactly that.
*WanMove model showcase*
The model understands exactly how every element should move. It generates a 5-second, 480p video with smooth, precise motion that matches your drawn paths, and all of this works on top of existing image-to-video models without any architecture modifications.
*WanMove model pipeline (from the research paper)*
Instead of adding new motion modules, the research team focused on a simple but powerful idea: make the model's own latent features motion-aware. Their approach is surprisingly elegant and rests on three ingredients: dense point trajectories, projection of those trajectories into latent space, and propagation of first-frame features along them, all with zero architectural change. More detailed insights can be found in their research paper; a conceptual sketch of the idea follows below.
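To make that concrete, here is a minimal, hypothetical sketch of the core trick: project pixel-space point trajectories down to the latent grid and copy each tracked point's first-frame latent feature to its position in every later frame. The shapes, the 8x spatial downscale, and the scatter strategy are assumptions for illustration only; this is not the official Wan-Move code.

```python
# Conceptual sketch (not the official Wan-Move implementation): propagate
# first-frame latent features along dense point trajectories projected into
# latent space. Shapes and the 8x downscale factor are assumptions.
import torch

def propagate_first_frame_features(first_latent, trajectories, num_frames, downscale=8):
    """
    first_latent: [C, H, W] latent features of the first frame.
    trajectories: [N, num_frames, 2] pixel-space (x, y) tracks for N points.
    Returns a motion-conditioned latent sequence of shape [num_frames, C, H, W].
    """
    C, H, W = first_latent.shape
    # Project pixel-space trajectories onto the latent grid (assumed 8x downscale).
    latent_tracks = (trajectories / downscale).round().long()
    latent_tracks[..., 0] = latent_tracks[..., 0].clamp(0, W - 1)
    latent_tracks[..., 1] = latent_tracks[..., 1].clamp(0, H - 1)

    cond = torch.zeros(num_frames, C, H, W)
    # Sample each point's feature at its first-frame location ...
    x0, y0 = latent_tracks[:, 0, 0], latent_tracks[:, 0, 1]
    point_feats = first_latent[:, y0, x0]                 # [C, N]
    # ... and scatter it to the point's location in every frame.
    for t in range(num_frames):
        xt, yt = latent_tracks[:, t, 0], latent_tracks[:, t, 1]
        cond[t, :, yt, xt] = point_feats
    return cond

# Dummy usage: 16 latent channels, 60x104 latent grid, 5 tracked points, 21 frames.
if __name__ == "__main__":
    lat = torch.randn(16, 60, 104)
    traj = torch.rand(5, 21, 2) * torch.tensor([104 * 8, 60 * 8])
    cond = propagate_first_frame_features(lat, traj, num_frames=21)
    print(cond.shape)  # torch.Size([21, 16, 60, 104])
```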
Installation
1. The first step is to install ComfyUI if you haven't already. If it is already installed, update it from the Manager by selecting the Update All option.
2. Make sure you have Kijai's Wan Video Wrapper custom node installed. If you already have it, update your custom nodes from the Manager.
3. Download the WanMove model from Kijai's Hugging Face repository and choose the variant that suits your system resources (an optional download sketch follows this list):
(a) WanMove FP8 (Wan21-WanMove_fp8_scaled_e4m3fn_KJ.safetensors), for 12 to 16 GB of VRAM.
(b) WanMove FP16 (Wan21-WanMove_fp16.safetensors), for 24 GB of VRAM or more, with better output quality.
Save the file inside your ComfyUI/models/diffusion_models folder.
4. Restart and refresh ComfyUI.
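If you prefer to fetch the checkpoint from a script instead of the browser, the sketch below uses huggingface_hub. The repo_id is an assumption; check Kijai's Hugging Face page for the exact repository name.

```python
# Hypothetical download helper; repo_id is assumed, filenames come from the steps above.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",  # assumption: verify the actual repository name
    filename="Wan21-WanMove_fp8_scaled_e4m3fn_KJ.safetensors",  # or Wan21-WanMove_fp16.safetensors
    local_dir="ComfyUI/models/diffusion_models",
)
```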
Workflow
1. After installation, you will find the example workflow (wanvideo_WanMove_I2V_example_01.json) inside your ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper/example_workflows folder.
2. Drag and drop the workflow into ComfyUI. If you see red missing-node errors, install the missing nodes from the Manager by selecting the Install missing nodes button. The workflow is based on the basic Wan 2.1 I2V framework, so all the usual models (the Lightx2v I2V 14B cfg step distill LoRA, the Wan 2.1 VAE, CLIP, umt5-xxl, etc.) stay the same. Load each one into its respective node.
3. Set up the nodes and execute the workflow:
(a) Load your image into the Load Image node.
(b) Load the WanMove model (FP16 or FP8 variant) into the WanVideo Model Loader node.
(c) Load the Wan 2.1 model into its model loader node.
(d) Load the Wan 2.1 VAE and the text encoders into their respective nodes.
(e) Add your detailed positive and negative prompts into the prompt boxes.
(f) The new node here is the Spline Editor, which lets you control the movement of your character. Use your cursor to drag and draw a spline path with control points: Ctrl + click adds a control point (sub-divides) between two existing points, Shift + click adds a new point at the end of the path, and right-click deletes a point. The start and end points of the path cannot be deleted. You can add multiple spline paths to control multiple objects; a conceptual sketch of the trajectory data such a path represents follows this list.
(g) Hit Run to execute the workflow.
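For intuition, a spline path ultimately boils down to a dense list of (x, y) coordinates, one per frame. The sketch below interpolates a few hypothetical control points with a cubic spline; it only illustrates the kind of trajectory the Spline Editor produces, not the node's actual implementation, and the frame count is an assumption (81 frames is typical for a 5-second Wan 2.1 clip at 16 fps).

```python
# Conceptual sketch: turn a handful of clicked control points into a smooth,
# per-frame (x, y) trajectory. Not the Spline Editor node's real code.
import numpy as np
from scipy.interpolate import CubicSpline

# Control points you would place by clicking in the Spline Editor (pixel coords).
control_points = np.array([[60, 400], [200, 320], [360, 360], [500, 240]])

# Parameterize the points and sample one position per output frame.
t = np.linspace(0, 1, len(control_points))
spline = CubicSpline(t, control_points, axis=0)
num_frames = 81                                        # assumed clip length
trajectory = spline(np.linspace(0, 1, num_frames))     # shape: (81, 2)
print(trajectory[:3].round(1))
```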
You can use the WanVideo Block Swap node (block swapping feature) if you have low VRAM. Set the value to around 30-40 to run the model efficiently; this usually avoids out-of-memory (OOM) errors. A conceptual sketch of how block swapping saves VRAM follows below.
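Conceptually, block swapping keeps only the transformer blocks currently in use on the GPU and parks the rest in CPU RAM, trading some speed for a much smaller VRAM footprint. The sketch below is a generic illustration of that idea, not the WanVideoWrapper implementation; the class and parameter names are made up.

```python
# Generic block-swapping sketch (illustrative only): offload the first N blocks
# to CPU RAM and move each one to the GPU just-in-time during the forward pass.
import torch
import torch.nn as nn

class BlockSwapRunner:
    def __init__(self, blocks, blocks_to_swap, device="cuda"):
        self.blocks = blocks                              # e.g. the model's transformer blocks
        self.swap_set = set(range(blocks_to_swap))        # these blocks live on CPU
        self.device = device
        for i, blk in enumerate(blocks):
            blk.to("cpu" if i in self.swap_set else device)

    def forward(self, x):
        for i, blk in enumerate(self.blocks):
            if i in self.swap_set:
                blk.to(self.device)                       # bring the block in just-in-time
            x = blk(x)
            if i in self.swap_set:
                blk.to("cpu")                             # evict it right after use
        return x

# Dummy usage: 40 tiny blocks, 30 of them swapped to CPU (mirrors the 30-40 setting).
blocks = nn.ModuleList([nn.Linear(64, 64) for _ in range(40)])
runner = BlockSwapRunner(blocks, blocks_to_swap=30,
                         device="cuda" if torch.cuda.is_available() else "cpu")
out = runner.forward(torch.randn(1, 64).to(runner.device))
print(out.shape)
```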
WanVideo Sampler node settings:
Scheduler: dpm++_sde
Steps: 4
Shift: 5
At first you may get odd-looking motion; it takes a little practice to get clean results. Once you do, you can control body movement, facial expressions, camera zoom effects, and more.