Until now, high-performance image editing with generative models has been locked behind closed APIs and proprietary tools, limiting innovation, control, and accessibility for developers and researchers. FLUX.1 Kontext changes that. Built by Black Forest Labs, this open-weights 12B-parameter model delivers proprietary-level image editing while running on consumer hardware.
Types:
(a) FLUX.1 Kontext [pro] (used via API) - the commercial variant, optimized for fast, iterative editing.
(b) FLUX.1 Kontext [max] (used via API) - the experimental variant, with stronger prompt adherence.
(c) FLUX.1 Kontext [dev] - the open-weights variant, released under a non-commercial license, which you can run in your own applications.
*Figure: Flux Kontext working (ref. official Hugging Face repo)*
Here, we will focus on FLUX.1 Kontext [dev], which is free for research and non-commercial use under the FLUX.1 Non-Commercial License, with support for ComfyUI, Hugging Face Diffusers, and TensorRT available from day one. You can find more in-depth information in the official research paper.
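Since Diffusers support is available, you can also run Kontext dev outside ComfyUI. Below is a minimal sketch assuming a recent diffusers release that ships FluxKontextPipeline; the model ID follows the official Hugging Face model card, the input/output file names are placeholders, and the repo is gated, so you may need to accept the license and log in with huggingface-cli first.

```python
# Minimal image-editing sketch with Hugging Face Diffusers.
# Assumes a recent diffusers build that includes FluxKontextPipeline
# and a CUDA GPU with enough VRAM for the 12B model.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Any RGB image works as input; this path is a placeholder.
input_image = load_image("input.png")

edited = pipe(
    image=input_image,
    prompt="Make the sky a dramatic sunset",
    guidance_scale=2.5,  # the CFG value used in the tests below
).images[0]
edited.save("output.png")
```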
Evaluated on KontextBench, FLUX.1 Kontext [dev] outperforms open models like ByteDance Bagel and HiDream-E1-Full, and even closed systems like Google's Gemini-Flash Image. Independent evaluations by Artificial Analysis back these findings, validating its lead in categories like character preservation, iterative/local/global editing, and scene consistency.
Installation
Set up ComfyUI if you are a new user. Existing users should update ComfyUI from the Manager section by selecting Update All.
TYPE A: Native model files (safetensors)
1. Set up the native Flux settings. If you are already using Flux workflows, you do not need to download the VAE and text encoders again, as Kontext uses the same models.
2. Download flux1-kontext-dev from Hugging Face and place it into the ComfyUI/models/diffusion_models folder (these downloads can also be scripted; see the sketch after this list).
3. Download the VAE and save it into the ComfyUI/models/vae folder.
4. Download the text encoders (clip_l, and t5xxl_fp16 or t5xxl_fp8_e4m3fn_scaled) and save them into the ComfyUI/models/text_encoders folder.
5. Restart ComfyUI and refresh the browser page for the changes to take effect.
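As an alternative to downloading through the browser, here is a minimal sketch using huggingface_hub. The repo IDs follow the official releases, but the exact file names are assumptions; verify them against each repo before running.

```python
# Sketch: fetch the Kontext model files into ComfyUI's folders.
# Repo IDs match the official Hugging Face releases; file names
# are assumptions -- check them against each repo first.
from huggingface_hub import hf_hub_download

MODELS = "ComfyUI/models"

# Diffusion model -> models/diffusion_models (gated repo; log in first)
hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-Kontext-dev",
    filename="flux1-kontext-dev.safetensors",
    local_dir=f"{MODELS}/diffusion_models",
)

# VAE (the Flux autoencoder) -> models/vae
hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-Kontext-dev",
    filename="ae.safetensors",
    local_dir=f"{MODELS}/vae",
)

# Text encoders -> models/text_encoders
for f in ("clip_l.safetensors", "t5xxl_fp8_e4m3fn_scaled.safetensors"):
    hf_hub_download(
        repo_id="comfyanonymous/flux_text_encoders",
        filename=f,
        local_dir=f"{MODELS}/text_encoders",
    )
```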
TYPE B: GGUF quantized models
You can also use a GGUF quantized variant, which lowers VRAM requirements so the model fits on weaker GPUs (with some quality trade-off at the smaller quants). Follow the instructions below if you have not set it up yet.
(a) Set up the Flux GGUF custom node (ComfyUI-GGUF) by city96.
(b) Download a Flux Kontext GGUF model from QuantStack's Hugging Face repository. Quantization levels range from Q2 (smallest and fastest, lowest quality) to Q8 (largest and slowest, closest to full quality). Save the file into the ComfyUI/models/unet folder; a download sketch follows below.
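The same download can be scripted with huggingface_hub. The repo ID and file name below are assumptions based on QuantStack's usual naming; check the repo for the exact quant file you want.

```python
# Sketch: fetch a mid-range GGUF quant into ComfyUI/models/unet.
# Repo ID and file name are assumptions -- verify against
# QuantStack's Hugging Face repo before running.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="QuantStack/FLUX.1-Kontext-dev-GGUF",
    filename="flux1-kontext-dev-Q4_K_M.gguf",  # quants from Q2 to Q8 available
    local_dir="ComfyUI/models/unet",
)
```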
Workflow
1. Make sure you have the latest ComfyUI version, then open ComfyUI.
Go to the Workflow section (top left) >> Browse Templates >> Flux >> Flux.1 Kontext Dev (for a single subject) or Flux Kontext Dev Grouped (for multiple subjects). Click the template to load and run the workflow.
If you are using the GGUF Flux Kontext model, use the same workflow but replace the Load Diffusion Model node with the Unet Loader (GGUF) node. That's it; the rest stays the same.
2. Upload the image you want to stylize or edit, and type what you want done to it into the prompt box.
3. Load the Flux Kontext dev model, VAE, and text encoders, then hit the Run button to start the workflow.
Test 1 (Human Photography)
*Figure: input image*
This is our input image. We used the following prompt and settings:
Prompt: girl is wearing black beautiful gown
CFG: 2.5
Steps: 28
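For reference, the same test can be reproduced with the Diffusers pipeline from the earlier sketch; a hypothetical call mapping these settings to pipeline parameters (pipe and input_image assumed already created):

```python
# Same settings as the ComfyUI test, via the pipeline from the
# earlier Diffusers sketch.
edited = pipe(
    image=input_image,
    prompt="girl is wearing black beautiful gown",
    guidance_scale=2.5,      # CFG 2.5
    num_inference_steps=28,  # Steps 28
).images[0]
```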
*Figure: output image*
Test 2 (Product Photography)
*Figure: input image (energy drink) for the product photography test*
*Figure: output image (energy drink) for the product photography test*
Test 3 (Editing Test)
*Figure: input image for the editing test*
*Figure: output image for the editing test*