Install Z Image, Wan 2.2, Flux, Kontext, Qwen Image, Lumina in Forge Neo

 


All the trending models (Z Image Base/Turbo, Wan 2.2, Flux Dev, Flux Kontext Dev, Qwen Image, Lumina, etc.) are now supported in Forge Neo.

Multiple model variants (nvfp4, bf16, gguf, fp8, etc.) are available for the different models and are listed below. Choose the one your system resources can support. You can also follow our quantization model tutorial for a detailed overview.

 

Download the models for Forge Neo

Follow the steps below to install the required models:

Model Downloads and required Text Encoders with VAE

| Architecture | Checkpoint | UNet / DiT | Text Encoder | VAE |
| --- | --- | --- | --- | --- |
| SD1 | CivitAI | N/A | N/A | vae-ft-mse-840000 |
| SDXL | CivitAI | N/A | N/A | sdxl-vae-fp16-fix |
| Lumina-Image-2.0 | | fp16 | gemma_2_2b | ae (Flux) |
| Flux-Dev | | | t5xxl | ae (Flux) |
| Flux-Kontext | N/A | | t5xxl | ae (Flux) |
| Z-Image (Base) | | | qwen_3_4b | ae (Flux) |
| Z-Image-Turbo | | | qwen_3_4b | ae (Flux) |
| Flux.2-Klein 4B | | | qwen_3_8b | flux2-vae |
| Flux.2-Klein 9B | | | qwen_3_8b | flux2-vae |
| Wan 2.2 T2V | | | umt5_xxl | wan_2.1_vae |
| Wan 2.2 I2V | | | umt5_xxl | wan_2.1_vae |
| Qwen-Image | | | qwen_2.5_vl_7b | qwen_image_vae |
| Qwen-Image-Edit | | | qwen_2.5_vl_7b | qwen_image_vae |
| Anima | | bf16 | qwen_3_0.6b | bf16 |
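For quick reference in scripts, the text encoder and VAE pairings above can be expressed as a small lookup table. The dictionary keys and file labels below are simplified identifiers for this sketch; the actual download file names vary by variant.

```python
# Text encoder and VAE required per architecture (mirrors the table above).
# Keys and values are simplified labels, not exact download file names.
REQUIREMENTS = {
    "Lumina-Image-2.0": {"text_encoder": "gemma_2_2b",     "vae": "ae"},
    "Flux-Dev":         {"text_encoder": "t5xxl",          "vae": "ae"},
    "Flux-Kontext":     {"text_encoder": "t5xxl",          "vae": "ae"},
    "Z-Image-Base":     {"text_encoder": "qwen_3_4b",      "vae": "ae"},
    "Z-Image-Turbo":    {"text_encoder": "qwen_3_4b",      "vae": "ae"},
    "Flux.2-Klein-4B":  {"text_encoder": "qwen_3_8b",      "vae": "flux2-vae"},
    "Flux.2-Klein-9B":  {"text_encoder": "qwen_3_8b",      "vae": "flux2-vae"},
    "Wan-2.2-T2V":      {"text_encoder": "umt5_xxl",       "vae": "wan_2.1_vae"},
    "Wan-2.2-I2V":      {"text_encoder": "umt5_xxl",       "vae": "wan_2.1_vae"},
    "Qwen-Image":       {"text_encoder": "qwen_2.5_vl_7b", "vae": "qwen_image_vae"},
    "Qwen-Image-Edit":  {"text_encoder": "qwen_2.5_vl_7b", "vae": "qwen_image_vae"},
    "Anima":            {"text_encoder": "qwen_3_0.6b",    "vae": "bf16"},
}

def required_files(architecture: str) -> dict:
    """Return the text encoder and VAE the given architecture expects."""
    return REQUIREMENTS[architecture]

print(required_files("Qwen-Image"))
```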


Save model files in the correct folders

To make sure everything works correctly inside your WebUI setup, it's important to place each model file in the proper folder. Here is a simple breakdown:


Checkpoint/UNet/DiT models

Place these inside:

webui(root-folder)\models\Stable-diffusion

These are your main generation models, so keeping them in the Stable-diffusion folder ensures they show up correctly inside the WebUI model selector.


Text Encoder models

Put these in:

webui(root-folder)\models\text_encoder

Text encoders are responsible for understanding and processing your prompts. If they’re not in the correct folder, your model may fail to load or generate properly.


VAE files

These should go into:
webui(root-folder)\models\VAE

The VAE (Variational Autoencoder) affects image color, detail clarity, and overall quality. Keeping it in the correct VAE folder allows you to select it separately if needed.
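The three locations above can be sketched as a small path helper. The folder names come from the sections above; the WebUI root used in the example is hypothetical.

```python
from pathlib import Path

# Destination folders relative to the WebUI root (see the sections above).
MODEL_DIRS = {
    "checkpoint": Path("models/Stable-diffusion"),  # Checkpoint / UNet / DiT
    "text_encoder": Path("models/text_encoder"),    # prompt encoders
    "vae": Path("models/VAE"),                      # VAE files
}

def place_model(webui_root: Path, kind: str, filename: str) -> Path:
    """Return the path a model file of the given kind should be saved to."""
    if kind not in MODEL_DIRS:
        raise ValueError(f"unknown model kind: {kind}")
    return webui_root / MODEL_DIRS[kind] / filename

# Example with a hypothetical root folder and file name:
print(place_model(Path("forge-neo"), "vae", "qwen_image_vae.safetensors").as_posix())
```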


Important note for GGUF:
If you're planning to use the GGUF version of qwen_2.5_vl_7b for img2img, there's one extra step:

You’ll also need to download the corresponding mmproj file and select it in the interface.

Without the mmproj file, img2img functionality won’t work properly with the GGUF version. So make sure both files are downloaded and correctly selected before testing.
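As a sanity check before testing, a short script can confirm both files are in place. The file names passed in below are illustrative placeholders, not the exact download names.

```python
from pathlib import Path

def gguf_img2img_ready(model_dir: Path, gguf_name: str, mmproj_name: str) -> bool:
    """Check that a GGUF model and its mmproj companion are both present.

    img2img with the GGUF qwen_2.5_vl_7b needs the mmproj file as well;
    the file names used here are illustrative placeholders.
    """
    missing = [n for n in (gguf_name, mmproj_name) if not (model_dir / n).exists()]
    if missing:
        print("missing files:", ", ".join(missing))
    return not missing
```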


For Nunchaku model users:
If you're using a Nunchaku model variant, make sure you download the correct version for your GPU:

- If you have an RTX 50 series GPU, download the fp4 version.
- For all other GPUs, download the int4 version.

Choosing the correct version ensures better compatibility and performance. Using the wrong one may cause loading issues or reduced efficiency.
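If you want to automate the choice, a small helper can map the GPU's CUDA compute capability to the right variant. Treating major version 12 as the RTX 50 series (Blackwell) is an assumption in this sketch; the value itself can be queried at runtime with `torch.cuda.get_device_capability()`.

```python
def nunchaku_variant(capability_major: int) -> str:
    """Map a CUDA compute capability major version to a Nunchaku variant.

    Assumes RTX 50 series (Blackwell) cards report major version 12;
    obtain the value at runtime via torch.cuda.get_device_capability().
    """
    return "fp4" if capability_major >= 12 else "int4"

print(nunchaku_variant(12))  # fp4  (RTX 50 series)
print(nunchaku_variant(8))   # int4 (e.g. RTX 30/40 series)
```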

Error Handling
If you accidentally download an unsupported version, the model may fail to load or throw errors inside the WebUI.
Some model variants are not supported, so avoid downloading these:

- nvfp4
- fp8mixed

Instead, use one of the supported formats:

- fp8_scaled
- gguf
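A quick file-name check along these lines can catch unsupported variants before you try to load them. Matching the quantization tag as a substring of the file name is an assumption about how community uploads are usually named.

```python
# Quantization tags accepted vs. rejected (from the lists above).
SUPPORTED = ("fp8_scaled", "gguf")
UNSUPPORTED = ("nvfp4", "fp8mixed")

def is_supported_variant(filename: str) -> bool:
    """Heuristically decide whether a model file uses a supported format.

    Assumes the quantization tag appears somewhere in the file name,
    which is the usual convention for community model uploads.
    """
    name = filename.lower()
    if any(tag in name for tag in UNSUPPORTED):
        return False
    return any(tag in name for tag in SUPPORTED)

print(is_supported_variant("qwen_image_fp8_scaled.safetensors"))  # True
print(is_supported_variant("flux_dev_nvfp4.safetensors"))         # False
```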

 

Settings and Parameters

Settings screenshots are provided for each model:

(a) Flux1 Dev
(b) Z Image Turbo
(c) Flux Kontext
(d) Wan 2.2 (Text to Image)
(e) Wan 2.2 (Image to Video & Text to Video)
(f) Qwen Image
(g) Qwen Image Edit
(h) Stable Diffusion XL (SDXL)
(i) Lumina Image 2.0
(j) Stable Diffusion 1.5