    Z Image Turbo - Text Encoder

    Z-Image-Turbo is live for Generation! We're monitoring performance, and we'll be tweaking functionality behind the scenes over the next few days!

    Z-Image is a powerful and highly efficient image generation model with 6B parameters. It currently has three variants:

    • 🚀 Z-Image-Turbo (this model) – A distilled version of Z-Image that matches or exceeds leading competitors with only 8 NFEs (Number of Function Evaluations). It offers ⚡️sub-second inference latency⚡️ on enterprise-grade H800 GPUs and fits comfortably within 16 GB of VRAM on consumer devices. It excels in photorealistic image generation, bilingual text rendering (English & Chinese), and robust instruction adherence.

    • 🧱 Z-Image-Base – The non-distilled foundation model. By releasing this checkpoint, we aim to unlock the full potential for community-driven fine-tuning and custom development.

    • āœļø Z-Image-Edit – A variant fine-tuned on Z-Image specifically for image editing tasks. It supports creative image-to-image generation with impressive instruction-following capabilities, allowing for precise edits based on natural language prompts.

    Original ComfyUI Models: https://huggingface.co/Comfy-Org/z_image_turbo/tree/main/split_files

    Original HF Repo: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

    Comments (274)

    BastardOG · Dec 8, 2025 · 13 reactions

    So if I post an image with overswollen breasts and a big clit of a middle-aged woman it gets blocked for moderation by the mods, but if others post lolis or things that make you puke your guts out they say nothing... get your AI $hit to do a better job please... this is why I stopped paying (I want to pay, but my things get deleted and other posts don't)

    VisionaryAI_Studio · Dec 9, 2025 · 3 reactions

    Hmm, the question is whether that makes you better than people who post lolis...

    BastardOG · Dec 9, 2025

    @VisionaryAI_Studio Well... I think not, but I wanted to post that big-tit girl... it was funny and creepy at the same time... it reminded me of SD :D

    qek · Dec 9, 2025 · 1 reaction

    Real, I reported some cubcon to see how the mods work; it didn't get removed

    BastardOG · Dec 9, 2025

    @qek I have no idea what that is, but I think the mods have their hands full (I don't know with what) and they are trying really hard... I think :) I deleted the images, even the ones that were posted (the one used with a LoRA, but the one without a LoRA was too "real"). Oh well, shitposting goes on...

    Tozi_White · Dec 9, 2025 · 1 reaction

    I honestly don't know what you're posting here that's causing your renders to be taken down. I post a lot of NSFW stuff and there's zero waiting or removal.

    qek · Dec 9, 2025

    @Tozi_White "There's zero waiting"? No, my images appear after big delays, up to 90 minutes

    BastardOG · Dec 9, 2025 · 2 reactions

    @Tozi_White None of your images get the yellow triangle with a ! in the upper corner of the image? Lucky you... I tried to post yesterday and all of them were waiting for moderation... they had the prompt, checkpoint and workflow embedded... a few hours ago they were all still waiting for moderation, so I deleted them (I don't want to get banned for a curvy woman with big tits in thongs)

    Only_Fuuka_OF · Dec 9, 2025 · 16 reactions

    Does this explode 8 GB VRAM cards?

    eldritchadam · Dec 9, 2025 · 4 reactions

    Not a bit! I'm getting 2 image generations at a time in about 70-80 seconds. The nature of the model means those two images are very similar, but sometimes there's a clearly better output between them. Big differences require changing the prompt or clever noise-injection techniques.

    Only_Fuuka_OF · Dec 9, 2025

    @eldritchadam Thanks

    eldritchadam · Dec 9, 2025

    @Only_Fuuka_OF Also worth noting, I'm generating fairly large (1024 by 1360), 9 steps. Basically the recommended default workflow published with the model, plus the SeedVarianceEnhancer custom node, which helps a bit with variation. All in all, this model is loads of fun to play with.

    Only_Fuuka_OF · Dec 9, 2025 · 2 reactions

    @eldritchadam I'm using Forge, so I have little more than steps, CFG, resolution, VAE and encoder

    qek · Dec 10, 2025 · 2 reactions

    @Only_Fuuka_OF Forge does not work; you need Forge Neo

    Only_Fuuka_OF · Dec 10, 2025

    @qek Ah, won't use it then

    4070444890 · Dec 11, 2025

    I loaded the Z-Image template from the latest ComfyUI, ran in low-RAM mode and switched the model to an fp8 version; it works, 25 secs per pic. Edit: the e5m2 version.

    Only_Fuuka_OF · Dec 11, 2025

    @4070444890 All good, but I don't wanna give up normal Forge

    qek · Dec 12, 2025

    @Only_Fuuka_OF Give up!

    Only_Fuuka_OF · Dec 12, 2025 · 1 reaction

    @qek NEVAAAH

    It's exploding here on the CLIP node with 12 GB VRAM

    qek · Dec 13, 2025

    @instahentaigames2111 🔥👨‍🚒🚒

    kholsaksham533 · Dec 14, 2025

    @Only_Fuuka_OF Neo is 100% the same as Forge, plus it has support for new models. Memory management is the same, which is good for SDXL but not for models with huge encoders.

    Only_Fuuka_OF · Dec 14, 2025

    @kholsaksham533 Idk about it being the same; I went from A1111 to Forge and suddenly the exact same prompts and settings were giving out different stuff, so I'm pretty on the fence

    utak3r · Dec 16, 2025

    @instahentaigames2111 I'm running it perfectly well on 12 GB.

    loudhippo840689 · Dec 17, 2025

    Do yourselves a favor and install Stability Matrix and use it to install ComfyUI. If you're not using ComfyUI you are causing yourself all kinds of headaches for no reason, trust me. ComfyUI is what newer models are designed to work on; it's what the model designers themselves use, and it's what gets the most up-to-date support.

    Using Forge or A1111 means you're stuck using the same internal workflow for every project, one created by someone else, and you're at their mercy as to when and if you get updates. That simply doesn't work for every project; sometimes you need a different method to achieve the results you want, and that's what ComfyUI allows you to do. ComfyUI seems complicated at first, but play around with it a bit and it becomes super intuitive. It has templates for every major model built into the menu that work out of the box, and then you have the freedom to combine and use parts of other workflows at will. Learning ComfyUI gives you the freedom to rearrange and build workflows that do what you want them to do. I switched to ComfyUI so I could run a model that wasn't available on other platforms at the time, and since learning it everything else seems generic; I have built workflows for specific use cases and saved a huge, ever-evolving library using bleeding-edge technology as it gets released.

    People who say "until it's available for Forge/A1111 I'm not using it" don't realize that the models aren't the incompatible part: Forge/A1111 are, and specific support for new models has to be written for them, leaving you always behind the curve. With ComfyUI you get day-1 support for most if not all models. Another exciting part is coming here to Civitai and finding premade workflows for all kinds of new features and plugins, built and ready to go. ComfyUI is the standard, and most everything else is a pretty GUI slapped on top of what is essentially ComfyUI workflows running in the background.

    In basic form, all of these models start with the same basic components:

    a model loader
    a text (CLIP) loader and encoder
    a VAE loader and encoder/decoder
    some random noise in the form of an image
    a sampler that takes all of this and makes stuff out of it

    Load up a simple template and you'll see it's really not very hard to customize your own workflows:

    1 load a model and connect it to your KSampler
    2 load a text encoder, connect it to a prompt input box and then to your KSampler
    3 load a VAE, connect it to a VAE encoder, connect a blank image to that VAE encoder, then connect that VAE encoder to your sampler

    Boom, that's a basic workflow.

    Most models and workflows follow this same principle; it's really not that hard, trust me.

    Your model is the brain, your text encoder is a translator from human language to AI language, and the VAE is the translator from pictures to AI language. All of this gets sent to the sampler in AI language (latent data) and it draws an image.

    In essence it's converting words and image data into a format that can run on your GPU, then using the sampler as a calculator to crunch the numbers on your GPU.

    If I can do it, anyone can do it, trust me.

    For anyone who has taken the time to read this long message: I challenge you to install ComfyUI using Stability Matrix as your installer on Windows, then load a basic SDXL workflow from the templates in the menu on the top left, go to each of the boxes it creates, and give it the model it is asking for. Once you have done that, you will be well on your way to using ComfyUI with ease.
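[Editor's note] The node graph described above (model, text encoder, VAE, noise, sampler) can be sketched as plain data flow. The following is a toy Python illustration with stand-in functions; nothing here is a real ComfyUI API, and the function names and update rule are invented purely to show the order in which data moves:

```python
import numpy as np

# Toy stand-ins for the nodes described above: text encoder -> sampler -> VAE.
# These are NOT real ComfyUI or diffusion-model APIs; they only mirror the
# shape of the workflow (conditioning + noise in, decoded image out).

def encode_text(prompt: str, dim: int = 8) -> np.ndarray:
    """Text encoder: turn a prompt into conditioning vectors (toy, hash-seeded)."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(dim)

def sample(latent: np.ndarray, cond: np.ndarray, steps: int = 8) -> np.ndarray:
    """Sampler: iteratively pull the latent toward the conditioning (toy rule)."""
    for _ in range(steps):
        latent = latent + 0.25 * (cond.mean() - latent)
    return latent

def vae_decode(latent: np.ndarray) -> np.ndarray:
    """VAE decoder: map latent back to image space (toy affine + clip)."""
    return np.clip(latent * 0.5 + 0.5, 0.0, 1.0)

# The basic workflow: noise -> conditioned sampler -> VAE decode
noise = np.zeros((4, 4))                 # "some random noise in the form of an image"
cond = encode_text("a photo of a cat")   # prompt -> conditioning
image = vae_decode(sample(noise, cond))  # image values end up in [0, 1]
print(image.shape)
```

In a real graph each of these functions is a node and the arrays are model tensors; the sketch only demonstrates the wiring, not the math.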

    Only_Fuuka_OF · Dec 17, 2025 · 1 reaction

    @loudhippo840689 Bro, ain't nobody reading all that

    qek · Dec 18, 2025

    @loudhippo840689 I don't agree with everything there

    AkaHimeP · Dec 11, 2025 · 9 reactions

    How do I use this on A1111?

    J1B · Dec 11, 2025 · 6 reactions

    A1111 has not been updated in over a year; it is dead.
    My advice would be to move to ComfyUI. There is an easy one-click installer here: https://github.com/Tavris1/ComfyUI-Easy-Install

    BIG_A · Dec 11, 2025

    Use it in your imagination~

    Peckerface · Dec 11, 2025 · 3 reactions

    You can't. ForgeNeo has support, and it has a very similar interface to A1111

    BastardOG · Dec 11, 2025

    SD.Next just added support for FLUX.2 and Z-Image; you can try that. Later edit: it's slower than ComfyUI and harder to get started...

    MMA2 · Dec 11, 2025 · 2 reactions

    I'm using SD WebUI Forge Neo.

    GitHub - Haoming02/sd-webui-forge-classic at neo https://share.google/8K8WNofZPfrPgieyF

    hakfull95539 · Dec 11, 2025

    @J1B Any ComfyUI notebook for Colab? Like TheLastBen for A1111?

    J1B · Dec 11, 2025 · 1 reaction

    @hakfull95539 I have used this one-click ComfyUI RunPod template with Qwen-Image as the default a few times: https://console.runpod.io/hub/template/t1t5m22wdy?id=t1t5m22wdyit

    It's not free, if that's what you are hoping, but you might be able to get a $5 referral link from somebody.

    qek · Dec 12, 2025 · 1 reaction

    @hakfull95539 Create it yourself or fork another one

    jazara930 · Dec 14, 2025 · 3 reactions

    I confirm, as you have already been told, that it works perfectly with Forge Neo.

    hakfull95539 · Dec 14, 2025

    @J1B Thank you, bro

    bitzupa · Dec 17, 2025 · 7 reactions

    @J1B Install ForgeNeo; it looks like A1111 but supports almost everything. Definitely don't use Comfy; you will spend weeks troubleshooting nodes instead of generating.

    qek · Dec 17, 2025 · 2 reactions

    @bitzupa False

    unterwegsduvet108 · Dec 18, 2025 · 2 reactions

    @bitzupa Skill issue tbh, Comfy is easy

    huggar · Dec 18, 2025 · 3 reactions

    Like others have mentioned here, Forge Neo works well. I started with Comfy but moved to Forge Neo for ease of use.

    qek · Dec 18, 2025 · 2 reactions

    @huggar Why would one prefer the inferior one (Neo)?

    bougyakumahou · Jan 4, 2026 · 2 reactions

    @J1B Comfy is cancer and will make you want to redact yourself. Worst trash ever invented. Nobody wants to spend 3 days troubleshooting except Linux clowns.

    UltraFresh · Dec 11, 2025 · 27 reactions

    I HAVE COME HERE TO CHEW ASS, AND KICK BUBBLEGUM.... euhh wait..

    jazara930 · Dec 13, 2025 · 8 reactions

    The checkpoint is amazing! It's incredibly fast on the RTX 3090 with just 8 steps. The realism is very good, and I was impressed with the quality and precision of the text generation!

    It also works pretty well with Forge. Do you know if there's a way to use Flux LoRAs with Z-Image?

    Veralie · Jan 15, 2026

    Would you mind sharing your settings? I've got a 3090, but it's been taking 15 seconds per image and not loading everything into VRAM, so I sit at about 14 GB of used VRAM with the other 10 wasted

    jazara930 · Jan 17, 2026

    Steps: 10, Sampler: Res Multistep, Schedule type: Simple, CFG scale: 1, Shift: 3.5, Seed: -1, Size: 896x1152, Model hash: 2407613050, Model: Z-Image_Turbo bf16, Denoising strength: 0.25, Detail Daemon: "D1:1,0,both,0.12,0.2,0.8,0.4,1.5,0.07,-0.04,0.15,1", Hires Module 1: Use same choices, Hires sampler: Euler, Hires CFG Scale: 1, Hires upscale: 1.5, Hires steps: 5, Hires upscaler: 4x-ClearRealityV1, Version: neo, Diffusion in Low Bits: Automatic (fp16 LoRA), Module 1: ae, Module 2: ZImage_qwen_3_4bText_Encoder_2168935

    210881175 · Dec 14, 2025 · 26 reactions

    VERY stupid at understanding prompts; why does it use a 6 GB LLM as the CLIP?

    qek · Dec 14, 2025

    Flux.2 uses a 24B LLM; its prompt adherence may suck anyway

    Nabonidus · Dec 18, 2025

    The Qwen3 model used for CLIP is not configured as a general-purpose reasoning/thinking AI. Don't think of it as an LLM; that's not what it does.

    Compared to most models I've found it very good at following prompts; short tag-style prompts, short natural-language sentences and long word-jumbles enhanced by an external LLM all work.

    mphobbit · Dec 18, 2025

    If you have problems with prompting, you can use the guide by sweetmax797 to improve them. He created an app but also describes a prompt template for any LLM you prefer.

    210881175 · Dec 14, 2025 · 2 reactions

    In Z-Image the LLM is Qwen3-4B-Instruct-2507.

    You can even use the Qwen3-4B-Thinking-2507 model for fun (without any advantages or miracles)

    qek · Dec 15, 2025

    Pointless; it uses Qwen3-4B by default. I downloaded an abliterated version just in case anyway

    mphobbit · Dec 16, 2025

    It seems to be a version adapted for image generation. In my case, using FP8 versions of the original LLM gave a mess, or size-mismatch errors instead of pictures.

    qek · Dec 16, 2025

    They do not give extremely different results, btw, but maybe just variations

    210881175 · Dec 17, 2025

    @qek I noticed that it follows the prompts more closely (the thinking model)

    varnaart · Dec 15, 2025 · 2 reactions

    I created pictures with a negative prompt a week ago, but I can't see where to write my negative prompt today. What happened to online generation?

    qek · Dec 15, 2025

    It used to not allow changing the CFG and step count

    theally · Dec 16, 2025 · 1 reaction

    Z-Image-Turbo doesn't have the concept of a negative prompt, per the authors of the model.

    Akalabeth · Dec 19, 2025

    What @theally said. It's a distilled model that runs at CFG 1; negative prompts don't work here.

    aces_21676 · Dec 15, 2025 · 8 reactions

    Newb question: can you use Turbo LoRAs with the full Z-Image version, or, vice versa, non-Turbo LoRAs with the Z-Image Turbo checkpoint?

    I'm using ComfyUI. I'm pretty good at image-to-video now, but surprisingly photos are harder for me than video so far.

    Nabonidus · Dec 16, 2025 · 6 reactions

    Z-Image-Turbo has the "turbo" built in, hence the name and the use of 8 or 9 steps. We don't have the base model yet; we should be getting a Z-Image-Omni-Base soon.

    So any existing LoRAs are intended for use with Z-Image-Turbo, even if people shorten that to "Z-Image" in descriptions.

    qek · Dec 16, 2025

    @Nabonidus The non-Turbo one is Z-Image De-Turbo

    thumperunit441 · Dec 16, 2025

    @qek Which still isn't the full model.

    Nabonidus · Dec 17, 2025

    @qek ...which is a modified Z-Image-Turbo intended for training LoRAs for Z-Image-Turbo.

    WittyBitty · Dec 18, 2025 · 7 reactions

    I'm doing something wrong; 8 steps of this model takes the same wall-clock time as, or more than, 25 steps of Illustrious

    Nabonidus · Dec 18, 2025 · 1 reaction

    Check that you're not losing time swapping models in/out of VRAM (use a GGUF model/CLIP if needed), and make certain CFG is set to 1.0.

    If you're confident with a Python environment, you can get some big speedups with Sage Attention and torch.compile.

    It's still going to be slower than an optimized Illustrious, but it's far more capable... and it's an order of magnitude faster than FLUX.2 on most consumer hardware.

    qek · Dec 18, 2025

    What's the program?

    mphobbit · Dec 18, 2025

    The LLM text encoder consumes most of the time and memory; try an FP8 one. The UNet is nothing special, in fact.

    Or try Lumina. That model is tuned toward anime and is in fact similar to Illustrious (a 2B model); Z has the same idea as Lumina (using an LLM as the text encoder).

    qek · Dec 18, 2025

    @mphobbit What about Cosmos Predict2 2B t2i?

    mphobbit · Dec 18, 2025

    @qek Hadn't tried it, but outputs from its Civitai gallery look interesting. It seems the community doesn't support it as much as Lumina.

    qek · Dec 18, 2025

    @mphobbit I still have Cosmos Predict2 2B t2i, got some images looking like real photos, but I should dedicate some time to try again and see more

    WittyBitty · Dec 18, 2025

    @Nabonidus I'm using a Q6_K GGUF. Comfy is running with --highvram so it doesn't unload models; I'm also using --use-pytorch-cross-attention because I gave up on installing Sage Attention. I am able to use torch.compile with cudagraphs (I also gave up on installing Triton/Inductor); I will see if this improves times when I ask for multiple images.

    From what I've read, it is expected to be slower, but with better prompt understanding?

    WittyBitty · Dec 18, 2025

    @qek SwarmUI/Comfy

    WittyBitty · Dec 18, 2025

    Did some tinkering and got faster results, only a few seconds faster than Illustrious, but maybe this can help others.

    What I found is that the GGUF text encoder and model versions execute slower, maybe because of how SwarmUI/Comfy handles them, or (most likely) because of my processor architecture, which is AMD Strix Halo.

    With the GGUF version I was getting 35 seconds for a 1024x1024 image. With safetensors I got 25 s, and after adding --bf16-unet --bf16-text-enc to the Comfy initialization I am getting 22 s. I'm using torch cross-attention and no torch.compile.

    Illustrious here executes 25 steps in about 25 seconds, so Z-Image is now 10% faster.

    For others with Strix Halo, this is my current Comfy startup parameter string (I'm using Windows without WSL):
    ```
    --windows-standalone-build --fast --highvram --disable-mmap --use-pytorch-cross-attention --disable-auto-launch --force-non-blocking --supports-fp8-compute --bf16-unet --bf16-text-enc
    ```

    qek · Dec 18, 2025

    @WittyBitty If ComfyUI's backend is 0.4+, it should be faster; refer to https://github.com/comfyanonymous/ComfyUI/pull/11057

    qek · Dec 18, 2025

    Wait, "8 steps of this model takes the same wall-clock time or more than 25 steps of Illustrious" is almost valid

    WittyBitty · Dec 18, 2025 · 1 reaction

    @qek Yes, I just realized this; the gains appear to be that it's easier to write natural-language prompts and to have text in the image.

    qek · Dec 19, 2025 · 1 reaction

    @WittyBitty Depends on other things, like the sampler...

    WittyBitty · Dec 19, 2025 · 1 reaction

    @qek I'm using dpm++ sde heun with SGM uniform

    210881175 · Dec 19, 2025 · 1 reaction

    This is normal

    210881175 · Dec 19, 2025 · 1 reaction

    @qek 5 sec (Z-Turbo) vs 45 sec (Cosmos) looks strange.

    qek · Dec 19, 2025 · 1 reaction

    @210881175 The Cosmos models aren't distilled

    RavirKun · Dec 19, 2025 · 15 reactions

    Is this model uncensored? Why do ni**les and vag*na look weird?

    whateverr · Dec 19, 2025 · 1 reaction

    It's not fine-tuned like the models you're using, where they don't look weird. The fine-tunes should come after the non-Turbo Z-Image model is released

    theally · Dec 19, 2025 · 7 reactions

    It hasn't been specifically trained on adult bits. That will improve over time, with community finetunes which improve the NSFW capabilities.

    qek · Dec 19, 2025

    @whateverr Already available: LoRAs

    210881175 · Dec 20, 2025 · 15 reactions

    Hey man, I can say "nipples" and "vagina"

    And nothing will happen to me :)

    qek · Dec 20, 2025 · 7 reactions

    @210881175 p*nis

    Nabonidus · Dec 21, 2025 · 2 reactions

    It's uncensored, which means it doesn't actively fight you. That's not the same as actually being trained on NSFW content, but compared to other base models it does nipples and vaginas better than any I can think of. Just don't ask it to make a penis.

    Because it's not fighting NSFW generations, it's easy to make LoRAs to fill in the bits it doesn't know, as opposed to a model like SD 3.5, which was so badly censored it couldn't generate a woman lying on grass or many other SFW images.

    asukaylink663 · Dec 31, 2025 · 1 reaction

    The censorship comes from the text encoder, and there isn't one without censorship yet.

    ctcf6 · Jan 3, 2026

    @asukaylink663 Good to know, thanks

    Mayer2003 · Dec 20, 2025 · 8 reactions

    Great model, but it delivers slightly better results when using an 8B LLM as the text encoder.

    faker1600773 · Dec 20, 2025

    I get an error when I switch my text encoder from Qwen3-4B-UD-Q6_K_XL.gguf to Qwen3-8B-Q8_0.gguf (4B → 8B):

    RuntimeError: Given normalized_shape=[2560], expected input with shape [*, 2560], but got input of size [1, 128, 4096]

    They are not compatible.

    Mayer2003 · Dec 20, 2025

    I don't use the Comfy GGUF loader; there's another one. Just use the ComfyUI Manager search function and look up "GGUF." I ran into the same error.

    byvalyi · Dec 22, 2025 · 1 reaction

    @Mayer2003 Which specific GGUF loader do you use?

    byvalyi · Dec 22, 2025

    @Mayer2003 Thx

    xNzX · Jan 4, 2026

    @byvalyi People say something strange happens with this "calcius gguf" node - https://github.com/city96/ComfyUI-GGUF/issues/230#issuecomment-2704748235
    https://github.com/city96/ComfyUI-GGUF/issues/256#issuecomment-2849342007 - and suggest that, rather than using this "Calcius Heretic text encoder", it may be better to use Qwen3 8B from here: https://huggingface.co/ggml-org/Qwen3-8B-GGUF

    Mayer2003 · Jan 5, 2026

    @xNzX Sounds good, I'll give it a try, thanks.

    marcyyyy · Dec 20, 2025 · 3 reactions

    I can't get it to work. I use all the Hugging Face files correctly, but it says my GPU sucks, even though my GPU is decent and has enough VRAM.

    qek · Dec 20, 2025

    What's the repo?

    210881175 · Dec 20, 2025

    Try zImageTurboAIO:

    https://civitai.com/models/2173571/z-image-turbo-aio

    (take the workflow from the sample images)

    marcyyyy · Dec 20, 2025

    I get a long size-mismatch error for a bunch of things:
    size mismatch for noise_refiner.0.attention.qkv.weight: copying a param with shape torch.Size([11520, 3840]) from checkpoint, the shape in current model is torch.Size([3840, 2304]). size mismatch for noise_refiner.0.attention.out.weight: copying a param with shape torch.Size([3840, 3840]) from checkpoint, the shape in current model is torch.Size([2304, 2304]). size mismatch for noise_refiner.0.attention.q_norm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]). size mismatch for noise_refiner.0.attention.k_norm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
    etc.

    BastardOG · Dec 20, 2025

    @marcyyyy Updated ComfyUI? If yes, best to use a fresh copy of the portable build; it's faster than trying to find out where all the bugs are. Just save your checkpoints, LoRAs and workflows first.

    Nabonidus · Dec 21, 2025

    Which GPU, and what is the actual error?

    210881175 · Dec 21, 2025

    @marcyyyy Yes, get the latest ComfyUI

    top_hell_manager · Dec 26, 2025 · 5 reactions

    I have a problem: ComfyUI won't let me use this model as a checkpoint, only together with Stable Diffusion. I tried using https://civitai.com/models/372465/pony-realism and connected it as a LoRA, and then it works.

    Maybe I did something wrong? Can I use only this model as a checkpoint? And is it normal that as soon as I move away from a portrait-style image, it immediately starts generating anomalies and two or more characters?

    qek · Dec 26, 2025

    "I tried using https://civitai.com/models/372465/pony-realism and connected it as a LoRA, and then it works."
    ?? As a LoRA?

    sylvanmotenai826 · Dec 28, 2025 · 1 reaction

    In ComfyUI, the Z-Image safetensors model file goes in ...\ComfyUI\models\diffusion_models

    and the Z-Image GGUF model in:

    ...\ComfyUI\models\UNET

    xNzX · Dec 28, 2025 · 2 reactions

    Yes, you should use this model as the only checkpoint. Here is a basic workflow: https://www.imagebam.com/view/ME19694D You can load this PNG image into ComfyUI to get the workflow.

    top_hell_manager · Dec 28, 2025

    @xNzX Thanks for the help

    AdaptiveVision · Jan 2, 2026

    That's not a LoRA model; if you don't already know, you should learn the difference between a checkpoint and a LoRA.

    xDegenerate · Dec 31, 2025 · 4 reactions

    Enjoying this one quite a lot; it's sharper than other models and really has that realistic pro-photo feel (like studio photos).

    I didn't download it before because it is 'older' than other ZIT models, but apparently it's not worse :)

    Silverglasses67 · Jan 1, 2026 · 3 reactions

    Impressive

    GeCo · Jan 1, 2026 · 2 reactions

    I have a question:

    Has anyone been able to run Z-Image on GTX cards like the 1070 or 1080 Ti?

    I can't install ComfyUI versions newer than 0.3.45: I get either a DLL error or a message to update the NVIDIA driver, but the driver is fresh.

    A fresh Forge Neo won't install either...

    NNAI · Jan 2, 2026 · 1 reaction

    ComfyUI installs the newest torch by default. I'm not familiar with your cards, but I found that Python 3.12 and torch 2.7.1 work best for me.

    You could try this for a fresh Comfy install. Go to a folder where you want to install ComfyUI and open a command line:

    ```
    git clone https://github.com/comfyanonymous/ComfyUI.git
    cd ComfyUI
    python -m venv .venv
    .venv\Scripts\activate
    pip install "torch<2.8" "torchvision<0.23" "torchaudio<2.8" --extra-index-url https://download.pytorch.org/whl/cu128
    pip install "xformers<0.0.32" --index-url https://download.pytorch.org/whl/cu128 --no-deps
    pip install -r requirements.txt
    pip install "triton-windows<3.4"
    pip install https://github.com/woct0rdho/SageAttention/releases/download/v2.2.0-windows.post3/sageattention-2.2.0+cu128torch2.7.1.post3-cp39-abi3-win_amd64.whl
    ```

    GeCo · Jan 3, 2026

    Everything breaks down at this stage:

    pip install "torch<2.8" "torchvision<0.23" "torchaudio<2.8" --extra-index-url https://download.pytorch.org/whl/cu128

    I can't download some of the files, so the process stops.

    zooltool · Jan 5, 2026

    I was trying to use ChatGPT to help me out, but after a few days I realized it wasn't up to the task at all. Then I asked Claude Sonnet 4.5 and had the solution in no time. I think I've already deleted that chat window, but you might want to try it yourself. BTW, I run Z-Image on a 1070 card with only 8 GB of system RAM.

    JustAiGuy · Jan 24, 2026

    @zooltool How much time does it take for one generation?

    NNAI · Jan 1, 2026 · 3 reactions

    So far not bad, relatively fast, but:
    - it has a serious fetish for ears, earrings and piercings; I can't reliably make any hairstyle where the ears are hidden.
    - if I mention 3D, everyone becomes a child and age control becomes very hard.
    - darker images are usually very noisy.
    - tails on humans only work from a front view, or they grow from weird places.
    - animal fur is usually messy.
    - with a more detailed prompt the seed variations are low; outputs are almost identical.

    For people on Windows with 4000-series cards, I recommend using KJNodes for Sage Attention + torch.compile and staying with torch 2.7.1

    qek · Jan 1, 2026 · 1 reaction

    "I can't make any hair style where ears are hidden reliably": I know it does not have excellent prompt adherence, but it is not unsolvable!
    "if I mention 3d, everyone become a child": I couldn't reproduce it. I made some fake 3D images with Z and didn't even mention ages; there were adults. Proof: https://civitai.com/images/112964657
    "darker images are usually very noisy": try other samplers + schedulers and shift values. Not noisy; proof: https://civitai.com/images/115828687
    "tails on humans only work from front view": why? I couldn't reproduce it either
    "animal fur is usually messy": why? I don't have the issue
    "with more detailed prompt the seed variations are low": I am aware of the issue, but I don't have the problem; my outputs with different seeds are very different, with no (noise) modifications and no fine-tuned Z
    "For people on window": Windows*? "To be on a window" sounds awkward :c

    dociler · Jan 4, 2026 · 4 reactions

    I absolutely love the model, but I do have a challenge with the nipples: they all feel a bit postpartum. Not just in my own generations, but also based on what I see in the comments/posts below. Has anyone found a solution for that?

    qek · Jan 4, 2026 · 1 reaction

    A LoRA

    dociler · Jan 5, 2026

    @qek Yeah... the thing is, though: I work a lot with wildcards and I don't always want nipples. But when they're there, they should be good. Such LoRAs may exist, but it's kind of a challenge.

    dociler · Jan 4, 2026 · 2 reactions

    I'm facing one issue: the moment I interact with ComfyUI while this model runs, my iteration speed drops from 5 sec/it to 70+ sec/it. Even just tweaking a prompt or navigating is enough to do the harm.

    Anybody else experienced that?

    qek · Jan 4, 2026

    Of course; it uses 100% of the CPU and GPU. Do you have another GPU? Enough reserved VRAM?

    NNAI · Jan 4, 2026 · 1 reaction

    You can try disabling hardware acceleration in your browser while using ComfyUI, or try starting ComfyUI with the --reserve-vram 0.8 option to reserve more VRAM for the system; the default is 0.6 on Windows

    Wingo · Jan 6, 2026

    ComfyUI doesn’t recompute nodes whose inputs haven’t changed since the previous run. For example, on the first generation it has to load the text encoder, upscaler, VAE, etc., but if you press Generate again without modifying the prompt or changing any node connections, it reuses the cached results and should run much faster.

    A jump from 5 sec/it to 70+ sec/it is extremely unusual. This usually happens when some part of the pipeline gets offloaded from GPU to CPU/RAM — for instance, if your CLIP Loader or another model component doesn’t fit in VRAM and silently falls back to system memory. In that case even a small UI interaction can trigger a reload or re-allocation, which stalls the entire iteration.

    On my setup the difference is minimal (around 2.13–2.30 sec/it), even when interacting with the UI, so such a huge slowdown strongly suggests VRAM pressure or offloading issues rather than ComfyUI's recomputation logic.

    You might want to check:

    - whether Z-Image or other nodes force additional models into memory,

    - VRAM usage right before / after interacting with the UI,

    - whether your CLIP/Text Encoder is being offloaded to CPU,

    - if any background software is stealing VRAM mid-run.

    liziabc258 · Jan 6, 2026

    Oh, I have the same problem

    csatchelx873Jan 4, 2026Ā· 3 reactions
    CivitAI

    so far not bad but will need to experiment more

    SakanakoChanJan 5, 2026Ā· 7 reactions
    CivitAI

    Idk why but it seems like no matter how i tried to adjust the prompts or other parameters, the face cannot be as stable and good as i want (especially if i do latent upscale)... Maybe its because this is just a turbo model?

    qekJan 5, 2026

    Because algorithmic latent upscale sucks: it makes your latent noisy and broken, so it needs a high denoise value to recover.

    SakanakoChanJan 5, 2026

    @qek I do feel latent upscale hurts faces in realistic models, but it does help improve details and fix some glitches. If a character has very complicated nail polish, a basic 720p image tends not to perform well, but after latent upscaling to 1080p it's much better; the only bad thing is that latent upscale destroys the face in 90% of cases. I don't want to spend much time fixing hands or other details with detailer nodes, nor do I want to use nodes like Ultimate SD Upscale to upscale and fix detail errors at the same time, because it costs too much time. I also tried SeedVR2; the result is good, but it really relies on a well-generated base image, since it won't fix any detail errors. And I tried using a super-resolution model to upscale the image and then i2i with a very small denoise to polish it, but the result is still not as good as I want... Is there any good way to prevent latent upscale from destroying the face? (Or in other words, to have everything except the face polished the way latent upscale does while the face stays good? I know I can do this with masks, but I just want to know if there's a quicker way.)

    qekJan 5, 2026

    @SakanakoChan Why not just use a GAN upscaler and then tiled img2img to refine?

    NNAIJan 5, 2026

    Latent upscale usually needs a denoise of at least 0.5, which will alter the image. Instead, use image upscale with a lower denoise of 0.15-0.25.

    VAE Decode -> Upscale Image By -> VAE Encode -> KSampler with lower denoise -> VAE Decode
    It is a bit slower but allows a lower denoise.
    From my experience you'll need to add about 5% contrast back before encoding.
    Also, this is subjective, but depending on the prompt, for some images I preferred the original image over the upscaled one.

    If you load any of my newer images, there's an upscaler subgraph in the upper-right part; ignore all my custom nodes.
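The ~5% contrast bump mentioned above can be approximated numerically by scaling each pixel value away from the mean; a minimal pure-Python sketch (the function name and the 1.05 factor are my own illustration; in practice you'd use an image-adjustment node or PIL's ImageEnhance.Contrast):

```python
def add_contrast(pixels, factor=1.05):
    """Scale 0-255 pixel values about their mean to boost contrast by ~5%."""
    mean = sum(pixels) / len(pixels)
    # Push each value away from the mean, clamped to the valid 0-255 range
    return [max(0, min(255, round(mean + (p - mean) * factor))) for p in pixels]

print(add_contrast([100, 128, 156]))  # values spread slightly away from the mean
```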

    wktraJan 7, 2026Ā· 21 reactions
    CivitAI

    No matter how hard i try, i can't get this model to generate a woman with a fkat chest...

    qekJan 7, 2026Ā· 1 reaction

    It is easy

    ogbtjyqat450Jan 8, 2026Ā· 5 reactions

    dafkuq?

    qekJan 8, 2026Ā· 3 reactions

    Lol I can generate breasts of all sizes, just need a good prompt and a LoRA for bigger breasts

    wktraJan 9, 2026Ā· 3 reactions

    @qek you misread. It's VERY difficult to get flat chested females. Especially flat chested anime illustrated females. TRY IT

    cielbleu9991Jan 9, 2026Ā· 4 reactions

    I agree, I tried "very small breasts" and "flat chest", but it always gives medium breasts results.

    qekJan 9, 2026Ā· 5 reactions

    @wktraĀ No, I just generated flat chested women, lol. Of course I misread "fkat chest"

    ogbtjyqat450Jan 9, 2026Ā· 4 reactions

    @qekĀ Fkantastic!

    bougyakumahouJan 19, 2026Ā· 1 reaction

    It skews heavily toward 40 year old roastie hags. Try this LORA Tits Size Slider - Z Image Turbo - v1.0 | ZImageTurbo LoRA | Civitai


    prompthomelandJan 10, 2026Ā· 14 reactions
    CivitAI

    Useful; it replaced my current main model for picture generation.

    zalupamirok696Jan 11, 2026Ā· 28 reactions
    CivitAI

    Censorship must be removed from the model!!!

    kuska128Jan 14, 2026Ā· 1 reaction

    This is the original SFW model; the creator also uploaded an uncensored version.

    theallyJan 14, 2026Ā· 4 reactions

    @kuska128Ā That's... not correct. There is only one official Z-Image-Turbo model.

    zalupamirok696Jan 14, 2026Ā· 1 reaction

    @kuska128 Really? Could you please share a link to it?

    zalupamirok696Jan 14, 2026Ā· 2 reactions

    @theallyĀ Dude, I can't even get the characters to kiss properly! And what's wrong with nipples under clothing? Who declared a holy war on them?

    theallyJan 14, 2026Ā· 3 reactions

    @zalupamirok696Ā Limitations in the model training data, not censorship. Wait for the Z-Image-Base release, then the community will be able to go ham finetuning it into something more capable of NSFW generation.

    SSSONGGGJan 14, 2026Ā· 3 reactions

    This is not due to censorship, but rather poor content quality resulting from an insufficient proportion of training data; the model has not undergone specialized training on NSFW-related content.

    SSSONGGGJan 14, 2026Ā· 4 reactions

    You can refer to FLUX.2 for this. True censorship means the model will ignore your prompts entirely. For example, if you prompt for "nude from head to toe", the model will simply disregard it and generate a person wearing clothes instead.

    qekJan 14, 2026Ā· 1 reaction

    @6vidit9Ā The creator made a new, upgraded version: https://civitai.com/models/2242173

    qekJan 14, 2026Ā· 2 reactions

    @zalupamirok696Ā "I can't even get the characters to kiss properly! And what's wrong with nipples under clothing?" I have neither of these issues even with the original Z Image Turbo

    zalupamirok696Jan 15, 2026Ā· 1 reaction

    @qekĀ It's a pity that there isn't even a single piece of evidence to support your words in your profile)

    zalupamirok696Jan 15, 2026

    @6vidit9Ā The examples didn't impress me, unfortunately...)

    6vidit9Jan 15, 2026

    @zalupamirok696Ā That's fine ig, I have generated many from that same and have loved so far.

    6vidit9Jan 15, 2026Ā· 1 reaction

    @zalupamirok696Ā You need to work on your prompt ig, if you'd see people here, they have generated plenty of NSFW images.

    qekJan 15, 2026Ā· 1 reaction

    @zalupamirok696 "It's a pity that there isn't even a single piece of evidence to support your words in your profile." OK, should I post 100 images of kissing pairs to comply with your trolling then? I already smashed NNAI's comment, saw it? We have nothing to talk about then; I only see smiling AI-generated women on your profile.

    zalupamirok696Jan 15, 2026Ā· 3 reactions

    @qekĀ It's a shame I'll never see them because you banned me, hahahaha!

    zalupamirok696Jan 16, 2026Ā· 1 reaction

    @6vidit9Ā Controlnet doesn't count; the model can't even draw female genitalia properly, let alone male ones. And what's the reason for that? Lack of time again? Seriously, there wasn't enough time to draw a woman's pussy and nipples? How can such a powerful model draw NSFW text2img? Forgive me, but all NSFW finetunes look like the work of children with special needs. No offense.

    SSSONGGGJan 16, 2026Ā· 2 reactions

    @zalupamirok696Ā It is hard to expect a large company to blatantly develop a model that includes NSFW content — neither FLUX nor SD can do this, as it would arouse the vigilance of society and the government.

    kuska128Jan 24, 2026Ā· 1 reaction

    @theally Yeah, my bad, confused you with someone else.

    CorgblamJan 13, 2026Ā· 9 reactions
    CivitAI

    I keep getting a Qwen3 error on Forge Neo. No idea what that means.

    ian538Jan 14, 2026Ā· 1 reaction

    I just went through this. You have to download the encoder and VAE and put them in Neo; the encoder and VAE are listed on the checkpoint page. Make sure to use those.

    CorgblamJan 14, 2026Ā· 1 reaction

    @ian538 Thanks. Can't believe I didn't see that.

    caramelrudy801Jan 16, 2026Ā· 1 reaction

    @Corgblam I have the same error. I already downloaded the encoder and VAE, but I still have this problem. Are you using Windows? Linux?

    CorgblamJan 16, 2026Ā· 1 reaction

    @caramelrudy801 Windows 10. And really, all I did was drop them in the Text Encoder folder and VAE folder respectively, load them at the same time via the VAE dropdown menu along with the Z-Image model, and it worked.

    caramelrudy801Jan 16, 2026Ā· 1 reaction

    @Corgblam Thanks for your answer. I read on a forum that you need version 2.6 of Neo and to install Triton for Windows; I'll try it tonight.

    dragonalumniJan 14, 2026Ā· 12 reactions
    CivitAI

    Does this just not work for A1111?

    qekJan 15, 2026Ā· 1 reaction

    Nope

    discipleofyakubJan 15, 2026Ā· 2 reactions

    get sd-webui-forge-neo if you dont like comfy

    210881175Jan 17, 2026Ā· 2 reactions

    A1111 is dead

    qekJan 20, 2026Ā· 1 reaction

    Why do you expect updates to the GUI when it clearly says the last commit was made 2 years ago?

    caramelrudy801Jan 15, 2026Ā· 12 reactions
    CivitAI

    Hello guys, I installed Neo today with Python 3.11 (before I was using A1111 with Python 3.9). I would like to test ZIT checkpoints, but I get a Qwen3 error. Can somebody help me? (I am a beginner; my OS is Win10.) Thanks!

    _Tigerman_Jan 17, 2026Ā· 2 reactions

    ZIT in Neo uses the LUMINA UI preset in the top left corner. You will need a Z-Image-Turbo (ZIT) checkpoint; use a 6 GB model to start, then upgrade if your VRAM allows a bigger model. Put it in neo\models\Stable-diffusion. You will need the FLUX VAE (319 MB), often named ae.safetensors, in neo\models\VAE. Lastly, you will need the text encoder called qwen_3_4b.safetensors (7.5 GB) in neo\models\text_encoder. Then select these files from the dropdown boxes at the top of the page: Checkpoint for the ZIT checkpoint file, and VAE / Text Encoder for the ae and qwen_3_4b files. Finally, set the CFG Scale slider to 1; ZIT doesn't use the negative prompt, and 1 disables it. Write a prompt, then hit Generate.
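Assuming the folder names in the comment above, the resulting file placement would look roughly like this (the checkpoint filename is a placeholder; the other names are taken from the comment):

```shell
# Expected file placement for ZIT in Forge Neo (Windows paths from the comment):
#   neo\models\Stable-diffusion\<zit-checkpoint>.safetensors   # the ZIT model
#   neo\models\VAE\ae.safetensors                              # FLUX VAE, ~319 MB
#   neo\models\text_encoder\qwen_3_4b.safetensors              # text encoder, ~7.5 GB
```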

    caramelrudy801Jan 19, 2026Ā· 1 reaction

    @_Tigerman_ Thanks a lot for your answer. The only way I found to use ZIT on Neo was to erase my folder and restart with a clean install using the Stable Diffusion tutorial (https://www.stablediffusiontutorials.com/2025/11/forge-neo-installation.html); now it is working fine! BTW, what is the "switch" parameter? And does anybody know if there is a way to use it in the X/Y/Z plot?

    BiTZeroJan 18, 2026Ā· 14 reactions
    CivitAI

    I really like ZIT. Prompt adherence is great. But with the size limit of the prompt window, CivitAI online generation is very hard to use. Can we start a petition for it to be enlarged?

    bougyakumahouJan 19, 2026Ā· 2 reactions

    Prompt adherence is terrible. What are you talking about? Try making a woman without the hips of someone who's had 5 kids; it's nearly impossible.

    qekJan 19, 2026Ā· 2 reactions

    @bougyakumahouĀ your fear mongering again

    bougyakumahouJan 20, 2026Ā· 2 reactions

    @qekĀ Ok reddit

    theallyJan 27, 2026Ā· 2 reactions

    We'll raise Z-Image-Turbo and Base to 20,000 characters shortly!

    ravemry9Jan 27, 2026Ā· 1 reaction

    @theallyĀ Very kind, but I'd hate to affect site functionality. Really, I doubt we need more than 18k. 19k at most.

    bougyakumahouJan 19, 2026Ā· 30 reactions
    CivitAI

    Got this thing working, and it mostly ignores prompts (especially for clothing) if they are longer than 5 words, sucks at lighting prompting, completely ignores negatives, can't produce different faces, tends heavily toward unattractiveness, and gives women 40-year-old mom hips after 5 kids.

    qekJan 20, 2026Ā· 26 reactions
    CivitAI
    210881175Jan 21, 2026Ā· 3 reactions

    not today :(

    qekJan 21, 2026Ā· 1 reaction
    jaykrownJan 24, 2026Ā· 14 reactions
    CivitAI

    As far as my research goes, this is currently the most efficient and powerful model available for local generation for anyone with less than 20GB of VRAM. It's a very good combination of efficiency and prompt accuracy. I think this is a good replacement for Flux.

    rivdemon1221554Jan 25, 2026Ā· 9 reactions
    CivitAI

    edit: after testing, yes, it's definitely because people like their 'fast' generation. Overall a decent model; not as in-depth when it comes to finer detail and long prompts, but it's good for its use cases.

    RisingVJan 26, 2026Ā· 9 reactions
    CivitAI

    Hi, does anyone know which sampler is used for ZIT in the civitai generator? You can't choose one in the settings and the generation data just says "undefined".

    theallyJan 27, 2026Ā· 2 reactions

    We're going to expose the sampler/scheduler in the interface shortly! You'll be able to pick from all the usual options.

    RisingVJan 27, 2026Ā· 1 reaction

    @theallyĀ ok, thanks

    reikasamaJan 29, 2026Ā· 12 reactions
    CivitAI

    All the Loras stopped working. Now they do nothing to the image suddenly. Using Civit onsite.

    theallyJan 29, 2026Ā· 2 reactions

    The Dev team is aware, fix soon! If you notice issues, please submit them to [email protected]

    reikasamaJan 29, 2026Ā· 1 reaction

    @theallyĀ thank you!

    reikasamaJan 29, 2026Ā· 1 reaction

    Working again.

    reikasamaJan 29, 2026Ā· 7 reactions
    CivitAI

    Any chance for remacri upscale of Z-image turbo on civit?

    theallyJan 29, 2026Ā· 4 reactions

    We have upscaling in the Generator, from the Magic Wand menu on generated images.

    reikasamaJan 29, 2026Ā· 1 reaction

    @theally I know, but it's not creative upscaling. With Pony I used to do a Remacri upscale, which in essence increases detail to incredible fidelity, where I can choose the denoise level of the upscale, and then I do a final 2x on that to reach around 4K.

    AddicteddJan 29, 2026Ā· 9 reactions
    CivitAI

    It's nuts that this state-of-the-art model is free :D It's on par with Aurora in the image department šŸ˜Ž

    ovatographyJan 31, 2026Ā· 12 reactions
    CivitAI

    Thanks for this amazing model šŸ˜, and special thanks for your support and generosity šŸ»šŸ’Æ

    MahaVakyasFeb 1, 2026Ā· 13 reactions
    CivitAI

    I tried using this model in ForgeUI in Pinokio but it keeps saying "ValueError: Failed to recognize model type!" I tried the ComfyUI layout you have for one of the images and it errors out almost immediately. HELP!?

    WetnWildFeb 3, 2026Ā· 11 reactions
    CivitAI

    I've been testing for a few days and it's unbelievable how fast it is in low memory (8GB). A bit dumb sometimes about understanding prompts in comparison to Qwen, but so fast.

    SCreelFeb 3, 2026Ā· 2 reactions

    Hi! I also have 8GB of VRAM. But it doesn't work for me at all. :'(

    damir_96586599Feb 3, 2026Ā· 8 reactions
    CivitAI

    Great model, good prompt understanding, but I always get blurred images: no surface details, washed-out colors, skin that is one solid piece with no variations, no details...

    I tried all variants of samplers/schedulers, various CFG values, various step counts... always the same.

    Skin-enhancement LoRAs simply don't do anything, so... what am I doing wrong?

    jupiterloverful289Feb 4, 2026Ā· 1 reaction

    what platform are you using?

    BoundingBoxesFeb 5, 2026Ā· 2 reactions

    I reckon you need to upscale

    ALFARANKOFeb 4, 2026Ā· 7 reactions
    CivitAI

    Hey people,

    I need some help with Forge UI. I am really confused about which sampler and scheduler combo to use. I've tried a bunch of them, but I keep jumping from one combination to another every time I change models, and it's getting frustrating.

    Does anyone have a go-to combo that works well for the original models and the tuned ones from the community? Just looking for something consistent so I can stop guessing.

    _Tigerman_Feb 17, 2026Ā· 2 reactions

    This page is useful for showing which text encoders and vae you should use. It also has links for where to download them. https://github.com/Haoming02/sd-webui-forge-classic/wiki/Download-Models

    neowolfoneFeb 5, 2026Ā· 7 reactions
    CivitAI

    The SD.Next documentation states that it supports ZIT, but there are no installation instructions. Could you tell me which folder I should put the model in so that SD.Next will see it? It's not visible in the model drop-down list, but the text encoder and VAE are visible. I've tried the Diffusers and UNET folders, and also created a "Zimage" folder. When automatically downloading from the interface itself (the default downloadable-models list), an error occurs and the interface crashes.
    SD.Next + ZLUDA, Radeon 6800, 16GB VRAM + 32GB RAM

    xNzXFeb 6, 2026Ā· 1 reaction

    Try ~\models\diffusion_models

    wooba001466Feb 9, 2026Ā· 1 reaction

    Is there a unet folder, if so try it.

    wooba001466Feb 9, 2026Ā· 1 reaction

    I used to have an AMD GPU and had to switch to NVIDIA; SD.Next was the only thing I found that worked, and I was stuck with SD 1.5.

    ng554Feb 5, 2026Ā· 11 reactions
    CivitAI

    Does anyone have a prompt for a realistic penis and scrotum they would care to share? Circumcised, uncut, erect or flaccid; realistic size and appearance. Granted, I don't have a lot of experience with this, but no matter what I try I end up with porn-star-size absurdities. I am using this model in Draw Things, but I don't think that makes a difference. Thanks in advance.

    xNzXFeb 6, 2026Ā· 3 reactions

    This model is censored regarding human genitalia. It only generates ugly "pieces of something"; that's the censorship settings.

    BzzzDarklordFeb 6, 2026Ā· 11 reactions
    CivitAI

    And so, @Civitai gives me a new reason to consider leaving them again: apparently their new policy is either giving moderator rights to amateur haters who would drink your blood if they could, OR they now want to blackmail me by blocking on TOS grounds without an actual reason (btw, images from the same set remain unblocked) and demand yellow currency to unblock the others. Where does the moral line lie, eh, CivitAI? You allow TONS of pure disgusting porn to be hosted on your servers, and your images section is full of this filth, but if someone posts a lightly erotic theme, you play the moral 'angel' card? SHAME ON YOU!

    theallyFeb 6, 2026Ā· 5 reactions

    What?

    Send an email to [email protected] and I'll look into it.

    EMYSTRATRIELFeb 6, 2026Ā· 3 reactions

    Lightly erotic theme!🤔

    xNzXFeb 6, 2026Ā· 2 reactions

    Let me guess... banned image was from "semester exams" set? Civitai banned all "school" NSFW works as "minors in porn". All other images from this set will be banned too.

    BzzzDarklordFeb 6, 2026Ā· 2 reactions

    @xNzX There are no minors in the prompt; do your homework.

    BzzzDarklordFeb 6, 2026Ā· 1 reaction

    @theally I know you would review it, @theally, but what for? Do you think it would make any difference? It's just easier not to post anymore, to make the haters happy.

    xNzXFeb 7, 2026Ā· 1 reaction

    @BzzzDarklord Who's talking about the prompt? Images are banned for their appearance. And you didn't answer: was I right in my guess? Earlier, I got a set banned that featured a clearly adult female, but in a school setting. So, as I understand it, a school board on the wall = cause for a ban.

    xNzXFeb 7, 2026Ā· 1 reaction

    "btw from same set its images that remain unblocked" — Yes, that confused me too. But here is practice of Civitai's moderation: moderator get report for one image -- quickly ban them - and quickly goes away. HE DO NOT look on other images from this set. LATER, he look on other images - and ban them all, AND SET "warning point" to your account. And if you get 3 warning points - they simply ban your account. So, be careful. And, if ONE image from set is banned - YOU should manually remove all other similar images. THAT is a Civitai's moderation logic.

    BzzzDarklordFeb 7, 2026Ā· 2 reactions

    @xNzX I'm not planning on removing ANYTHING... And I don't give a flip about the "harmed" feelings of some "new European" perverts practicing Sharia. It's like the idea of covering up the Statue of Liberty because of its breasts. The nude body will remain and will celebrate its freedom. If you're under 18, go get permission from your parents.

    xNzXFeb 7, 2026Ā· 1 reaction

    @BzzzDarklord Of course, you're free to have any opinion you want. But censorship is a matter of policy. The site will make its decision, and if there are a lot of prohibited images, they'll give you a strike point. Three strikes, and they'll delete your account. I lost my first account thinking exactly like you.

    dashr608516Feb 25, 2026Ā· 1 reaction

    ok bye, you won’t be missed 🤔

    ConejoquehaceFeb 9, 2026Ā· 13 reactions
    CivitAI

    Which Workflow can you recommend,? (beginner friendly), is there one with an IMG2IMG option? thanks for your time.

    xNzXFeb 10, 2026Ā· 3 reactions

    Here is a very BASIC img2img workflow for you (with a switcher to text2img): simple, with minimal custom nodes. You can load this PNG image in your ComfyUI to get the workflow (use denoise 0.5-0.6 for img2img): https://civitai.com/posts/26510791/

    ConejoquehaceFeb 10, 2026Ā· 1 reaction

    @xNzX Thanks for the reference, I'll check it ASAP.

    greybladesFeb 10, 2026Ā· 13 reactions
    CivitAI

    What CLIP should I use?

    I suppose I should also ask how, as I can't figure out how to download the qwen_3_4 thing from the workflow I saw.

    BzzzDarklordFeb 11, 2026Ā· 1 reaction

    Download the version you need from here; you can try them all, just make sure the diffusion model is fp8 if you got the fp8 text encoder. If you're missing the VAE or other stuff, you can find the text encoders under the same parent folder on HF.

    BzzzDarklordFeb 11, 2026Ā· 1 reaction

    P.S. I personally strongly recommend the bf16 version.

    Amir1308Feb 18, 2026Ā· 24 reactions
    CivitAI

    How can I avoid getting Asian faces every time?
    It always causes me problems.

    TheP3NGU1NFeb 21, 2026Ā· 10 reactions

    Use 'caucasian'.
    Shocking, I know.

    Amir1308Feb 21, 2026Ā· 2 reactions

    @TheP3NGU1NĀ thanks

    J1BFeb 19, 2026Ā· 20 reactions
    CivitAI

    @theally are custom ZIT models ever coming to the onsite Generator? or would the demand be too much for the GPU's?

    11423813Feb 23, 2026Ā· 16 reactions
    CivitAI

    Why aren't my images uploaded? All images are marked "Not published." Is there a way to fix this?

    BastardOGFeb 23, 2026Ā· 1 reaction

    some of mine get deleted if they have the yellow triangle on top

    BzzzDarklordMar 7, 2026

    Usually, image uploading is not the same as image posting on Civitai. Images you post should appear under your profile; it takes Civitai some time to sort them into 'images to display' or 'images to limit visibility'. Sometimes the process requires a moderator review to approve, if the automatic bot detected keywords or expressions it didn't like. Once they become 'images to display', you will see them under your images section.

    EleniumFeb 24, 2026Ā· 28 reactions
    CivitAI

    A wonderful model, thank you very much!

    RaiganFeb 25, 2026Ā· 37 reactions
    CivitAI

    I'm not sure if I should recommend this model to anyone. It's addictive.

    AeralynaiaFeb 26, 2026Ā· 23 reactions
    CivitAI

    I love you so much! Thank you <3 Hi everyone! I am so happy for Z-Image!

    petercaldwell1505833Feb 27, 2026Ā· 22 reactions
    CivitAI

    I'm using forge neo, and ZIT works just fine, but when I try to use ZIB, I get the error message: "RuntimeError: The size of tensor a (1280) must match the size of tensor b (160) at non-singleton dimension 1"

    xNzXFeb 28, 2026Ā· 2 reactions

    Ask ChatGPT - he knows. As example, here - https://gpt-chatbot.ru/gpt-5 Ask in your native language.

    amattke7417Mar 13, 2026Ā· 10 reactions
    CivitAI

    Bad. Needs too many resources to do what other models can do better.

    prabas029548Mar 13, 2026Ā· 10 reactions
    CivitAI

    I was just checking which is the best-ranked model, and yours came up first. After installing it, I realized people were right—the realism and image quality are amazing. I’ve tried different checkpoints, but none feel like this one. It’s so addictive; every time I see a photo of an Instagram model, I think I can recreate it the same way. There are only minor skin realism issues, but the LORAs are doing a fantastic job. Please keep up the great work—much appreciated.

    AddicteddMar 13, 2026Ā· 2 reactions

    Creator: Tongyi-MAI (also referred to as Tongyi-MAI Lab) is an AI research lab under Alibaba Group / Alibaba Cloud, part of their broader Tongyi (通义) AI ecosystem

    prabas029548Mar 19, 2026

    @AddicteddĀ  are you telling me to download that one?

    AddicteddMar 19, 2026

    @prabas029548Ā No, just telling who is behind the model :) what you downloaded here on (civitai) is correct one, no worries.

    prabas029548Mar 19, 2026

    @Addictedd Got it, thanks for clarifying! It's great to know Tongyi-MAI Lab is the team behind it. I'm glad the version I found on Civitai is the right one; really appreciate the reassurance. Amazing work by them! Hopefully in the future we'll be able to create 1080p AI videos on 8GB VRAM in just 40 seconds.

    nobagic830514Mar 18, 2026Ā· 8 reactions
    CivitAI

    Just checked the videos section. How can I make the same? Which tool/workflow are they using for video generation?

    freestuffpl0x42069Apr 27, 2026

    If you have ComfyUI, you can usually click and drag the image from Civitai into the ComfyUI workspace and it will show the workflow used to make the image. I have a few.

    Iceisnice88Mar 24, 2026Ā· 10 reactions
    CivitAI

    Is there any specific reason I cannot generate NSFW images? No matter the prompt, the model always presents clothed characters.

    fabioff000614Mar 24, 2026Ā· 7 reactions

    Of course. This model doesn't have any naked images in its training data; how do you expect it to generate naked images?

    oneeyed2Mar 27, 2026

    This checkpoint does have nude images in its training data, and it is perfectly capable of generating semi-nudes and nudes. Sex scenes are another matter, though.

    You can try some very basic prompts (e.g.: "photo of a nude woman") on the civitai generator, and you'll see ZiT works fine as is.

    Other online generators often have nsfw filters in place which I guess might be your issue.

    Iceisnice88Apr 1, 2026

    @oneeyed2 Thank you, I was going for nudity, not action. Yet the prompt never yielded even partial nudity, despite several attempts.

    oneeyed2Apr 1, 2026

    @Iceisnice88Ā Were you trying on civitai or locally? What was the exact prompt? Any lora used? Are you using the regular civitai (.com) or the .green one?

    Here is a test image generated on civitai: https://civitai.com/images/126099962

    As you can see, it does work. Try and remix it.

    towepa7730782Apr 2, 2026Ā· 15 reactions
    CivitAI

    A wonderful model

    dorkusdingusApr 4, 2026Ā· 8 reactions
    CivitAI

    what sampler and schedule type do I use for the best results

    ArumatoMidorimaApr 5, 2026

    Sampler: res_multistep
    Scheduler: simple
    (This setup is fast and outputs nice quality; the dpmpp_sde/beta combo is good as well.)

    Mayer2003Apr 5, 2026

    Deis/deis_2m_ode/beta is great for realism and skin texture.

    robbie2837725Apr 6, 2026Ā· 10 reactions
    CivitAI
    Great model, love the turbo speed and quality!
    mytronlight354Apr 7, 2026Ā· 16 reactions
    CivitAI

    Can it run on 8GB VRAM?

    Mayer2003Apr 8, 2026

    Yeah, Google or Bing "z-image turbo gguf".

    anasexta42122Apr 13, 2026Ā· 1 reaction
    CivitAI

    RuntimeError: ERROR: clip input is invalid: None

    help?

    martindieterApr 21, 2026Ā· 4 reactions
    CivitAI

    I use this model with Forge Neo.

    I like the quality and, of course, the speed.

    Unfortunately, the output variations aren't very good...see my article on this topic: https://civitai.red/articles/28925/z-image-turbo-seed-variations

    KIRAAIKAApr 29, 2026
    CivitAI

    This is an extremely high-quality model. It's a pity that it doesn't support NSFW content; otherwise, I would play it constantly. If possible, I hope to see an upgraded version in the future that incorporates new training datasets—specifically, high-quality nude photography and images featuring spread genitalia.

    Checkpoint
    ZImageTurbo

    Details

    Downloads
    21,858
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/7/2025
    Updated
    5/3/2026
    Deleted
    -