Lucy Edit: Video Editing and Character Replacement

 

If you have ever tried editing a video, especially one that involves changing outfits, replacing backgrounds, or tweaking objects mid-scene, you know how frustrating it can be. Lucy Edit is a new generation of instruction-guided video editing, built to make complex edits as easy as typing a sentence.

Released by DecartAI, the model is built on Wan2.2 5B, a robust 5-billion-parameter model known for its high-compression VAE + DiT architecture. It is distributed under a non-commercial license, which means you can use it for testing and other non-commercial purposes.

Traditional tools often demand painstaking frame-by-frame masking, motion tracking, and sometimes even re-shooting parts of the video to get things right. AI-based tools, meanwhile, often ruin motion consistency.

This means Lucy Edit can process video data efficiently while maintaining crisp detail and stable motion. The developers at DecartAI went further, refining the motion-preservation and edit-reliability aspects to outperform common inference-time methods that often glitch or blur during edits. For a detailed overview, you can read their research paper.

 

Installation

Update ComfyUI from Manager

1. Set up ComfyUI if you haven't already. Existing users should update ComfyUI from the Manager section by clicking Update All.

2. Move into the ComfyUI/custom_nodes folder and open a command prompt. Clone the Lucy Edit repository with the following command:

git clone https://github.com/DecartAI/Lucy-Edit-ComfyUI.git
 

Now, install the required dependencies:

cd Lucy-Edit-ComfyUI

pip install -r requirements.txt
 

 

Download the Lucy Edit model

3. Lucy Edit offers two setups: API-based and local. To run locally, you need to download the Lucy Edit model from the official repository; choose the one that suits your system resources:

(a) Lucy Edit Dev CUI FP16 (lucy-edit-dev-cui-fp16.safetensors) for low VRAM.

(b) Lucy Edit Dev CUI FP32 (lucy-edit-dev-cui.safetensors) for high VRAM (more than 16 GB).

You can follow the quantized models tutorial if you are unsure how to differentiate between them.

Save it into your ComfyUI/models/diffusion_models folder.


4. The text encoder and VAE models are the same as for the Wan2.2 5B model, so if you already have them, there is no need to download them again.

If you don't, download the text encoder (umt5_xxl_fp8_e4m3fn_scaled.safetensors) and put it into the ComfyUI/models/text_encoders folder, and download the Wan2.2 VAE (wan2.2_vae.safetensors) and save it into the ComfyUI/models/vae folder.
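Assuming the default ComfyUI folder layout described above, the three downloads end up in the locations sketched below (the fp16 diffusion model is used as the example):

```shell
# Create the standard ComfyUI model folders (a no-op if they already exist).
mkdir -p ComfyUI/models/diffusion_models
mkdir -p ComfyUI/models/text_encoders
mkdir -p ComfyUI/models/vae

# After downloading, the files should sit here (fp16 variant shown):
#   ComfyUI/models/diffusion_models/lucy-edit-dev-cui-fp16.safetensors
#   ComfyUI/models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
#   ComfyUI/models/vae/wan2.2_vae.safetensors
ls ComfyUI/models
```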


5. Restart and refresh ComfyUI.




Workflow


1. After cloning the repository, you will find the workflows inside your ComfyUI/custom_nodes/Lucy-Edit-ComfyUI/examples folder:

(a) basic-lucy-edit-dev.json (basic local workflow).

(b) basic-api-lucy-edit.json (API-based). For this, you need to set up an API key on the DecartAI platform; according to the DecartAI team, you get 5000 free credits.
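If the API workflow asks for a key, one common pattern is exporting it as an environment variable before launching ComfyUI. The variable name below is purely hypothetical; check the Lucy-Edit-ComfyUI documentation for the exact name the node actually reads:

```shell
# Hypothetical variable name -- consult the repository docs for the real one.
export DECART_API_KEY="your-key-from-the-decart-platform"
```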

2. Drag and drop the workflow into ComfyUI.

Load the Lucy Edit model, text encoder, and VAE into their respective nodes.

Load your video for editing.

KSampler settings:

CFG: 5.0
Frames: 81
Steps: 50
Sampler: Euler

Write a prompt of 20-30 words so the model understands the intended edit better.

Click Run to execute the workflow.
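The sampler settings above can be collected into a small Python sketch. The field names are illustrative, mirroring the KSampler node's inputs rather than any official API:

```python
# Illustrative collection of the recommended KSampler settings from this guide.
ksampler_settings = {
    "cfg": 5.0,              # classifier-free guidance strength
    "frames": 81,            # video length in frames
    "steps": 50,             # denoising steps
    "sampler_name": "euler", # sampler selected in the node
}

print(ksampler_settings)
```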

 

 

This feels like a turning point for AI-driven video editing. What Photoshop did for image manipulation, Lucy Edit could do for video workflows. It's not just about making creative edits easier; it's about removing the technical barriers that hold creators back.

Whether you are an indie filmmaker, a YouTuber, or a studio experimenting with AI pipelines, this model seems designed to slot right into existing workflows without breaking a sweat.