You may have used Wan 2.1 for video generation and noticed that the output sometimes fails to maintain facial consistency. ByteDance recently released the Lynx model, which is trained on Wan 2.1 and delivers noticeably better results by preserving identity across diverse environments.
[Image: Wan Lynx showcase (ref: official page)]
The model incorporates two lightweight adapters (a conceptual sketch follows this list):
(a) IP-Adapter: converts ArcFace-derived facial features into compact identity tokens. This serves the same purpose as earlier methods for consistent generation.
(b) Reference Adapter: injects dense VAE features from the reference image.
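To make the IP-Adapter idea concrete, here is a minimal PyTorch sketch of how a single ArcFace face embedding can be resampled through learned query tokens into a small set of identity tokens. This is a conceptual illustration only; all module names and sizes are assumptions, not the actual Lynx implementation.

```python
# Conceptual sketch (NOT the actual Lynx code): turn one 512-d ArcFace
# embedding into a few identity tokens via learned-query cross-attention.
import torch
import torch.nn as nn

class IdentityResampler(nn.Module):
    def __init__(self, face_dim=512, token_dim=1024, num_tokens=16, num_heads=8):
        super().__init__()
        # Learned query tokens that "read" the face embedding.
        self.queries = nn.Parameter(torch.randn(num_tokens, token_dim))
        self.proj_in = nn.Linear(face_dim, token_dim)
        self.attn = nn.MultiheadAttention(token_dim, num_heads, batch_first=True)

    def forward(self, face_embed):                 # face_embed: (B, 512)
        kv = self.proj_in(face_embed).unsqueeze(1) # (B, 1, token_dim)
        q = self.queries.unsqueeze(0).expand(face_embed.size(0), -1, -1)
        tokens, _ = self.attn(q, kv, kv)           # cross-attention
        return tokens                              # (B, num_tokens, token_dim)

# The resulting identity tokens would then be injected into the video model's
# cross-attention layers alongside the text tokens.
```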
The model is open-source (released under the Apache 2.0 license), so it can be used for commercial and personal projects. For the theoretical details, see the research paper. This makes it useful for creating a consistent AI influencer, social-media content (Instagram/TikTok), or any kind of UGC (user-generated content) video.
Installation
1. Set up Kijai's WanVideoWrapper custom node by following our Wan 2.1 installation tutorial if you have not already. If you have, just update the custom node from the Manager.
2. Download the official Wan 2.1 Lynx model from ByteDance if your system can handle it.
If you are low on VRAM, use Kijai's optimized Wan 2.1 Lynx models (IP layers, ref layers, and resampler) instead; a scripted download sketch follows the list below. Save them into your ComfyUI/models/diffusion_models folder.
(a) Wan2_1-T2V-14B-Lynx_full_ip_layers_fp16.safetensors (full fp16) or
Wan2_1-T2V-14B-Lynx_lite_ip_layers_fp16.safetensors (lite fp16)
(b) Wan2_1-T2V-14B-Lynx_full_ref_layers_fp16.safetensors (full fp16)
(c) lynx_full_resampler_fp32.safetensors (full fp32) or
lynx_lite_resampler_fp32.safetensors (lite, for low VRAM)
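If you prefer scripting the downloads, a minimal sketch with huggingface_hub is shown below. The repo id and file paths are assumptions (Kijai typically publishes converted weights under Kijai/WanVideo_comfy); verify the exact locations on the Hugging Face model page before running.

```python
# Hypothetical download helper -- confirm the repo id and filenames first.
from huggingface_hub import hf_hub_download

REPO = "Kijai/WanVideo_comfy"                     # assumed repo id
DEST = "ComfyUI/models/diffusion_models"

for fname in [
    "Wan2_1-T2V-14B-Lynx_lite_ip_layers_fp16.safetensors",
    "Wan2_1-T2V-14B-Lynx_full_ref_layers_fp16.safetensors",
    "lynx_lite_resampler_fp32.safetensors",
]:
    hf_hub_download(repo_id=REPO, filename=fname, local_dir=DEST)
    print(f"downloaded {fname} -> {DEST}")
```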
3. The rest of the models (Wan 2.1 T2V 14B, LightX2V T2V 14B step-distill v2 LoRA, Wan 2.1 Fun 14B InP, CLIP, VAE, text encoders, etc.) are the same ones you already use for Wan 2.1. If you still need to download them, follow the Wan 2.1 tutorial.
4. Restart and refresh ComfyUI.
Workflow
1. After installing the WanVideoWrapper custom node, you will find the workflow (wanvideo_T2V_14B_lynx_example_01.json) inside your ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper/example_workflows folder. If it is not there, you have not yet updated the WanVideoWrapper custom node from the Manager.
2. Drag and drop it into ComfyUI. If you get a missing-nodes error, install them via the Custom Nodes Manager option in the Manager.
(a) Upload your input image into the Load Image node. Use a good-quality image for better output, and make sure no part of the character is cropped out of the frame.
(b) Load the Wan Lynx IP-layers and ref-layers models into the WanVideo Extra Model Select node.
(c) Load the Lynx resampler model into the Load Lynx Resampler node.
(d) Load the LightX2V step-distill LoRA.
(e) Load the Wan 2.1 T2V 14B model into the WanVideo Model Loader node.
(f) Put your prompt into the prompt box (WanVideo Text Encode node). Use a detailed, natural-language prompt for better output, e.g. "A woman with long brown hair walks through a neon-lit street at night, cinematic lighting, slow camera pan."
WanVideo Sampler node settings (use the same settings as for Wan 2.1 14B T2V; follow our detailed Wan 2.1 tutorial):
CFG: 1
Shift: 8
Steps: 5
Scheduler: LCM
WanVideo Add Lynx Embeds node:
Adjust ip_scale and ref_scale to control how strongly identity is preserved; values around 0.9 work well for both parameters (see the scripting sketch after step (g)).
(g) Hit Run to start the generation.
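If you would rather queue the generation headlessly than press Run, ComfyUI exposes an HTTP API. The sketch below assumes the workflow has been re-exported in API format (Save (API Format) in ComfyUI) as lynx_api.json, and that the Lynx node exposes inputs literally named ip_scale and ref_scale; check both against your own export.

```python
# Queue a run via ComfyUI's HTTP API, patching the Lynx scales first.
import json
import urllib.request

with open("lynx_api.json") as f:                  # assumed API-format export
    wf = json.load(f)

# Set ip_scale/ref_scale to ~0.9 on whichever node exposes them.
for node in wf.values():
    inputs = node.get("inputs", {})
    if "ip_scale" in inputs and "ref_scale" in inputs:
        inputs["ip_scale"] = 0.9
        inputs["ref_scale"] = 0.9

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",               # default ComfyUI address
    data=json.dumps({"prompt": wf}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```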