*Flux2 Dev showcase*
FLUX.2 [dev] simplifies the whole prompt-to-image pipeline by switching to a single Mistral Small 3.1 text encoder, and brings serious upgrades in flexibility, architecture, and multi-image control. Whether you want to generate, edit, or blend up to 10 reference images at once, FLUX.2 gives you a cleaner, smarter, and far more capable playground to create in.
The model is released under the FLUX [dev] Non-Commercial License, which means generated outputs can be used for personal, scientific, and commercial purposes, but the model itself can't be deployed for profit.
Installation
1. Install ComfyUI if you have not done so yet. If you already use it, update ComfyUI from the Manager by selecting Update All.
2. Download Flux.2 Dev from the community (choose one according to your system resources; a scripted download sketch follows step 5):
(a) Flux2 Dev BF16 (flux2-dev.safetensors) from Black Forest Labs. Before downloading, you need to accept their conditions and share your profile details.
(b) Flux2 Dev FP8 (flux2_dev_fp8mixed.safetensors) optimized by ComfyUI
(c) Flux2 Dev GGUF by city96 (Q2 for fast inference to Q8 for better quality).
(d) Flux2 Dev GGUF by orabazes (Q2 for fast inference to Q8 for better quality).
If using GGUF models, make sure you have the ComfyUI-GGUF custom node by city96 installed. If not, install it from the Manager by selecting the Custom Nodes Manager option.
Save the model into the ComfyUI/models/diffusion_models folder.
If you do not know what the FP8/BF16/GGUF model variants are, just follow our quantization guide for a detailed overview.
3. Download the Flux2 VAE (flux2-vae.safetensors) and save it into the ComfyUI/models/vae folder.
4. Download the text encoder (mistral_3_small_flux2_bf16.safetensors or mistral_3_small_flux2_fp8.safetensors). Choose one according to your system resources. Save it into the ComfyUI/models/text_encoders folder.
5. Restart ComfyUI and refresh the browser for the changes to take effect.
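If you prefer scripting steps 2-4 instead of clicking through download pages, here is a minimal sketch using the huggingface_hub Python library. Only the filenames and target folders come from the steps above; the repo IDs are assumptions, so verify them against the actual download pages, and remember the Black Forest Labs repo is gated (accept the license on the page, then log in with `huggingface-cli login`).

```python
# Minimal download sketch using huggingface_hub (`pip install huggingface_hub`).
# The repo IDs below are ASSUMPTIONS -- confirm them on each download page.
from huggingface_hub import hf_hub_download

COMFY = "ComfyUI"  # adjust to your ComfyUI install path

# Step 2: the diffusion model (BF16 variant shown; gated repo, login required)
hf_hub_download(
    repo_id="black-forest-labs/FLUX.2-dev",  # assumed repo ID
    filename="flux2-dev.safetensors",
    local_dir=f"{COMFY}/models/diffusion_models",
)

# Step 3: the VAE
hf_hub_download(
    repo_id="black-forest-labs/FLUX.2-dev",  # assumed to ship alongside the model
    filename="flux2-vae.safetensors",
    local_dir=f"{COMFY}/models/vae",
)

# Step 4: the text encoder (swap in the fp8 file for lower VRAM)
hf_hub_download(
    repo_id="Comfy-Org/flux2-dev",  # assumed repo ID
    filename="mistral_3_small_flux2_bf16.safetensors",
    local_dir=f"{COMFY}/models/text_encoders",
)
```

Each call places the file directly in the folder ComfyUI scans at startup, so step 5 (restart and refresh) still applies afterwards.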
Workflow
1. Download the Flux2 Dev workflow from our Hugging Face repository:
(a) Flux2_Dev_workflow.png (basic workflow)
(b) Flux2_Dev_GGUF.json (workflow for the GGUF variant)
2. Drag and drop into ComfyUI.
The workflow combines both functions:
- Text to Image
- Image to Image (up to 10 reference images supported)
Simply unbypass (shortcut: Ctrl+B) the ReferenceLatent nodes to work with reference images. If you want more reference images, add them (up to 10) by following the same pattern.
If you do not want any reference image, select all the ReferenceLatent nodes and press Ctrl+B to bypass them; the workflow then becomes a basic Text to Image workflow.
(a) Load the Flux2 Dev (BF16/FP8) model in the Load Diffusion Model node. Use the Unet Loader (GGUF) node if using a GGUF variant.
(b) Load the text encoder and VAE into their respective loader nodes.
(c) Put your positive prompt into the prompt box. Negative prompts are not required.
(d) Set the KSampler settings (a scripted example follows the prompt examples below):
- Steps: 50 (28 also works well)
- CFG: 4.0
- Sampler: euler
Prompt used (for Text to Image): an American teenage girl taking a selfie, clear camera-facing pose, professional glam makeup, soft natural lighting, smooth skin texture, youthful modern styling, trendy outfit, high-quality portrait photography style
Prompt style (for Image to Image): Apply the design from Reference Image 1 onto objects in Reference Image 2.
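If you later want to queue this workflow without opening the UI, ComfyUI can execute workflows exported in API format (via the Export (API) option) when they are POSTed to its local /prompt endpoint. Below is a minimal sketch, assuming the export was saved as Flux2_Dev_workflow_api.json (a hypothetical filename); it applies the KSampler settings above and queues one generation:

```python
import json
import urllib.request

# Load a workflow exported via ComfyUI's "Export (API)" option.
# Flux2_Dev_workflow_api.json is a hypothetical filename -- use your own export.
with open("Flux2_Dev_workflow_api.json") as f:
    wf = json.load(f)

# In API format, each node is {"class_type": ..., "inputs": {...}}.
# Apply the recommended sampler settings to every KSampler node.
for node in wf.values():
    if node.get("class_type") == "KSampler":
        node["inputs"].update({"steps": 28, "cfg": 4.0, "sampler_name": "euler"})

# Queue the prompt on a locally running ComfyUI instance (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": wf}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # response includes the queued prompt_id
```

Generated images land in ComfyUI's usual output folder, the same as when running the workflow from the UI.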