CivArchive

    🟡:Flux Models 🟢:SD 3.5 Models 🔵:SD XL Models 🟣:SD 1.5 Models 🔴:Expired Models

    🟡STOIQO NewReality is a cutting-edge model designed to generate photographic content. It empowers users to capture fine details in portraits, explore breathtaking landscapes, and bring mythical creatures to life with unparalleled variety and precision. Tailored for artists and creators who demand high-quality results, it features dynamic light management and intricate textures, making it the ultimate tool for top-tier creative work.

    Versions and Recommended Settings:

    • MAIN SAMPLER: dpmpp_2m + sgm_uniform

    • CFG: 3+ | STEPS: 30+

    • (Recommended): CFG: 5 | STEPS: 40

    Here, finally, is a first fine-tuning test on SD3.5 for NewReality. As always, the first goal of the Alpha version is to focus on the photographic aspect and textures while expanding the range of subjects the model can reproduce. Future fine-tuning will focus more on individual components and details.

    The Main Download (Full 26GB) is the AIO version with t5xxl_fp16, clip_l, clip_g and VAE already included.

    In the file section of the sidebar you can instead find the Secondary Download (Pruned 16GB) of the version containing only the Model, so you can use it accompanied by the clips of your choice, depending on whether you prefer the fp8, the fp16 or want to experiment with new clip_l or t5.
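The Full (26GB) and Pruned (16GB) figures line up with fp16 weight sizes. A back-of-the-envelope sketch, using public approximate parameter counts (my assumptions, not figures from this page):

```python
def fp16_gib(params_billions):
    """Approximate fp16 weight size in GiB (2 bytes per parameter)."""
    return params_billions * 2e9 / 2**30

# Approximate public parameter counts (assumptions, not from this page):
components = {
    "sd3.5_large": 8.1,  # the diffusion model itself
    "t5xxl": 4.7,        # large text encoder
    "clip_g": 0.69,
    "clip_l": 0.12,
    "vae": 0.08,
}

pruned = fp16_gib(components["sd3.5_large"])          # model only
full = sum(fp16_gib(p) for p in components.values())  # AIO bundle
print(f"pruned ≈ {pruned:.1f} GiB, full AIO ≈ {full:.1f} GiB")
```

Both estimates land near the advertised 16GB / 26GB downloads, which is why the Pruned file is "only the Model" while the Full file bundles the text encoders and VAE.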

    • MAIN SAMPLER: Euler+ Beta

    • GUIDANCE (in Forge: 'Distilled CFG Scale'): 2+ | STEPS: 15+

    • (Recommended): GUIDANCE: 2.5 | STEPS: 25

    NewReality FLUX.1 Dev has reached its Alpha version, but I'm still in an experimental phase with the Flux component. Working on the Unet architecture has proven to be more time-consuming and challenging than initially expected. As a result, I will release smaller, more frequent updates during this initial period to gather valuable feedback. I’m fully aware of the existing issues within the model, and your input will be instrumental in addressing and refining them.

    The Main Download (Full 11GB) is the version containing only the Unet, so you can use it accompanied by the clips of your choice, depending on whether you prefer the fp8, the fp16 or want to experiment with new clip_l or t5.

    In the file section of the sidebar you can instead find the Secondary Download (Pruned 20GB) of the AIO version with t5xxl_fp16, clip_l and VAE already included.

    ATTENTION: the Full/Pruned naming here exists only so the Unet can remain the main download; for all the other versions the AIO is the main download. The two files are composed as described above.
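The counter-intuitive sizes (Full 11GB smaller than Pruned 20GB) can be sanity-checked the same way: a ~12B-parameter transformer at fp8 is about 11 GiB, and adding the fp16 text encoders and VAE lands near 20 GiB. A sketch using public approximate parameter counts (my assumptions, not figures from this page):

```python
def gib(params_billions, bytes_per_param):
    """Approximate weight size in GiB for a given precision."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# Flux.1 Dev transformer: ~11.9B params at fp8 (1 byte per weight)
unet_fp8 = gib(11.9, 1)
# AIO adds t5xxl (fp16), clip_l (fp16), and the VAE (fp16)
aio = unet_fp8 + gib(4.7, 2) + gib(0.12, 2) + gib(0.08, 2)
print(f"unet ≈ {unet_fp8:.1f} GiB, AIO ≈ {aio:.1f} GiB")
```

The Unet-only estimate comes out near 11 GiB and the AIO near 20 GiB, matching the two downloads.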

    Refiner is Unnecessary and Clip and VAE are Included

    • MAIN SAMPLER: dpmpp_sde + Karras

    • CFG: 2+ | STEPS: 15+

    • (Recommended): CFG: 4 | STEPS: 30

    NewReality XL PRO is a fine-tuned model designed with a specific focus on enhancing prompt adherence, earning it the name "PRO" (PRO-mpt). This version excels in managing lighting and image composition, offering a more refined and precise visual output based on the given instructions.

    Refiner is Unnecessary and Clip and VAE are Included

    • MAIN SAMPLER: dpmpp_3m_sde + Exponential

    • CFG: 2+ | STEPS: 15+

    • (Recommended): CFG: 4 | STEPS: 20

    Refiner is Unnecessary and Clip and VAE are Included

    • MAIN SAMPLER: dpmpp_sde + Karras

    • CFG: 1+ | STEPS: 4+

    • (Recommended): CFG: 2 | STEPS: 4

    Clip and VAE are Included

    • MAIN SAMPLER: dpmpp_sde + Karras

    • CFG: 2+ | STEPS: 15+

    • (Recommended): CFG: 5 | STEPS: 25

    Credits:

    NewReality is the result of countless hours of work, involving several hundred iterations through training, merging of models and LoRA (Low-Rank Adaptation), fine-tuning specific blocks or captions, as well as performing additive and subtractive merges. Due to the complexity of this process, it is difficult to provide a precise, step-by-step account of every decision and experiment. However, each phase has played a significant role in shaping the final product, whether by direct influence or through valuable lessons learned along the way.

    Given the intricacies and challenges involved, I want to extend my heartfelt gratitude to the creators of the models that have contributed to this journey, either directly or indirectly. Regardless of whether they were incorporated into the final iteration, their work provided inspiration, insight, and progress throughout the project.

    To all the creators whose models supported this endeavor, I extend my deepest thanks. I encourage everyone to explore and support their work, as it deserves recognition for the incredible value it brings to the community.

    EauDeNoire humblemikey socalguitarist ZyloO xlabs_ai aramintastudio Seeker70 Ai_Art_Vision PromptoAI alvdansen aip0pp SG_161222

    WORK IN PROGRESS...


    Comments (118)

    323f802Oct 26, 2024· 8 reactions
    CivitAI

    Any thoughts on SD 3.5? It's insane how fast this came in. What are its limits? Does it do well with some things and not others?

    wiizOct 27, 2024· 2 reactions

    It's only good at 1MP resolution. Very good at styles, but anatomy is terrible. It's a step up from SDXL, but not as good as Flux. Unlike Flux, it can be properly trained, though.
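The 1MP sweet spot mentioned here is easy to hit for any aspect ratio: pick dimensions whose product is near 1024×1024 and snap them to a latent-friendly step. A small helper sketch (the multiple-of-64 snapping is the common community convention, not something stated on this page):

```python
import math

def dims_for_megapixels(aspect_w, aspect_h, target_mp=1.0, multiple=64):
    """Width/height near target_mp megapixels for a given aspect ratio,
    snapped to the nearest multiple (64 is the usual latent-friendly step)."""
    target_px = target_mp * 1024 * 1024
    height = math.sqrt(target_px * aspect_h / aspect_w)
    width = height * aspect_w / aspect_h
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

print(dims_for_megapixels(1, 1))    # square
print(dims_for_megapixels(16, 9))   # widescreen
```

For example, a 16:9 request comes out as 1344×768, which multiplies to roughly one megapixel.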

    LeeAeronOct 26, 2024· 4 reactions
    CivitAI

    Anyone know if Forge works with SD3.5, or is there no support yet?

    cptdanOct 26, 2024· 4 reactions

    Not yet, we're all waiting for lllyasviel. Rumors say he'll add support after the 3.5 Medium release.

    4331997Oct 26, 2024· 7 reactions
    CivitAI

    Any plans for a Q8 gguf?

    alternative_UniverseOct 26, 2024· 1 reaction

    Is that better than large turbo or fp8?

    MescalambaOct 26, 2024· 2 reactions

    @P_Universe It's basically fp16 in a "zip". Q8 is always better than fp8, if you don't mind somewhat slower diffusion due to the compression.
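The "fp16 in a zip" intuition can be illustrated: GGUF's Q8_0 stores int8 weights plus one scale per 32-value block, so the round-trip error is bounded by half a quantization step per block. A toy simulation (illustrative only, not the actual GGUF code; real Q8_0 stores the scale as fp16):

```python
import numpy as np

def q8_0_roundtrip(x, block=32):
    """Quantize to int8 with a per-block absmax scale, then dequantize."""
    blocks = x.reshape(-1, block)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0  # avoid div-by-zero on all-zero blocks
    q = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return (q * scale).reshape(x.shape)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # fake weight tensor
err = np.abs(q8_0_roundtrip(w) - w).max()
print(f"max abs error: {err:.2e} (weights span ±{np.abs(w).max():.3f})")
```

The worst-case error is under 1/254 of each block's largest weight, which is why Q8 tracks fp16 far more closely than a naive per-tensor fp8 cast.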

    @Mescalamba thank you so much for the explanation

    ArkhanerOct 27, 2024· 1 reaction

    I always make my own GGUFs out of the base models; quantization is pretty simple... ComfyUI-GGUF/tools at main · city96/ComfyUI-GGUF

    anduxOct 26, 2024· 2 reactions
    CivitAI

    What are the advantages of SD 3.5? I mean, it's huge; how about the speed?

    henlarsen1973Oct 26, 2024· 4 reactions

    On my 3090 it's a little faster than Flux, but not by much.

    pychobj2001741Oct 27, 2024· 2 reactions

    I think the comprehension is a bit better than Flux, from what I've read.

    5160838Nov 1, 2024· 3 reactions

    Better prompt understanding than Flux.

    Slightly worse quality than Flux but not much.

    Better quality than SDXL.

    BlairBrooksBBOct 26, 2024· 1 reaction
    CivitAI

    Thank you for updating this model, good sir. I'm hooked on 3.5 away from FLUX this week, so glad to get a great model like this one to use. Keep up the good work!

    StargateMaxOct 27, 2024
    CivitAI

    I tried to run the SD3.5 model and it totally froze my PC, including the mouse cursor, I only heard a heavy HDD work even though my C drive is NVMe where ComfyUI & checkpoints are. I waited for 15 minutes and did a hard reset. The Flux version does work, no issues. RTX3080, Ryzen 9 5900X, 32 Gb DDR4. If there's a solution, then I'll try again sometime, but until then I stay away from 3.5.

    FauxRealDoeOct 27, 2024

    You can hardly immediately blame 3.5 for the behavior you experienced. Did you try again? Are you sure everything necessary for 3.5 is installed and up to date? In general, Flux is more resource intensive than 3.5, so it doesn't make much sense to put the blame on the checkpoint. I've been finding it pretty fun to work with so far, so I hope you get it figured out!

    StargateMaxOct 27, 2024

    @FauxRealDoe It happened again. Maybe something wrong with my Comfy, even though it works fine with Flux and SDXL.

    noyartOct 27, 2024

    @StargateMax What Flux model are you using, the default one or some smaller-sized one?

    psychologicauNov 3, 2024· 1 reaction

    It sounds like you filled up the system ram and entered a swap storm. A couple of things, watch the memory usage when it's running to confirm, and check where your swap file is - move it to the NVMe if it's on a mechanical drive. Specific fix? Maybe try smaller quantized models for text encoders etc.
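The freeze described above matches that failure mode: a ~26GB AIO checkpoint plus encoders staged in system RAM on a 32GB machine leaves almost nothing for the OS, and the box grinds into swap. A crude rule-of-thumb check (the headroom figure is my own assumption, not a measured value):

```python
def likely_to_swap(component_gib, system_ram_gib, os_headroom_gib=8.0):
    """Crude check: do the staged model weights plus OS/app headroom exceed RAM?"""
    return sum(component_gib) + os_headroom_gib > system_ram_gib

# ~26 GB SD3.5 AIO checkpoint staged in 32 GB of RAM: almost certainly swapping.
print(likely_to_swap([26.0], 32.0))        # True
# Flux fp8 unet + separate t5 fp16 in 64 GB of RAM: comfortably fits.
print(likely_to_swap([11.0, 9.4], 64.0))   # False
```

This is also why swapping the fp16 text encoder for a quantized one, as suggested above, can be enough to stop the freezes on 32GB systems.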

    lukasv111Oct 27, 2024· 1 reaction
    CivitAI

    Can I sell photos that I generate with this model? (Flux models)

    colinw2292823Oct 27, 2024· 9 reactions

    It's AI, just do it anyway? Flux was clearly originally trained on screenshots from the internet, so who cares?

    pychobj2001741Oct 27, 2024· 6 reactions

    SD3.5 is free for both commercial and non-commercial use if you make less than $1M. But Flux 1.Dev is free only for personal use, not commercial, without a license from them. Flux Schnell is free to use commercially. But like the person above said, go ahead; just don't get popular... They probably have a watermark system. Lol

    bahamutwwOct 30, 2024

    @pychobj2001741 You're right! haha

    AikoMatsuiNov 1, 2024· 1 reaction

    @pychobj2001741 I would assume SDXL and SD1.5 are the same? Free for both commercial and non-commercial use.

    pychobj2001741Nov 1, 2024· 1 reaction

    @AikoMatsui I couldn't remember for SDXL, so I looked at its license on Hugging Face (base SDXL 1.0) and summed up the license with Perplexity: The CreativeML Open RAIL++-M License allows both personal and commercial use of the model and its derivatives, provided that users comply with specific use-based restrictions. These restrictions prohibit certain uses, such as generating harmful content or exploiting minors. Users must also share the license terms when redistributing the model or its derivatives.

    It looks like SD1.5 is on the same license.

    In short, yes, they are both free to use.

    AikoMatsuiNov 2, 2024

    @pychobj2001741 The minors part I get; it should be part of ethical use. Good to know that it's "free" to use.

    ALIENHAZE
    Author
    Nov 2, 2024· 2 reactions

    All outputs generated by my models (SD1.5, SDXL, SD3.5 and Flux) are for commercial use within the limits imposed by the providers (StabilityAI and BlackForestLabs).

    @pychobj2001741 has already explained everything well, but to clarify: the outputs of Flux.1 Dev can also be used commercially.

    The policy requires a license from BlackForestLabs in order to use the 1.Dev models commercially, with a paid generation service, or for direct selling. The outputs of these models cannot be used as datasets for training or fine-tuning other models, but they do not claim rights over individual sales of the generated works.
    @colinw2292823 @lukasv111 @bahamutww @AikoMatsui

    Mr_JingujiOct 28, 2024· 2 reactions
    CivitAI

    CLIPTextEncode

    'NoneType' object has no attribute 'tokenize'

    help pls

    weirdglitchNov 1, 2024

    Use a DualCLIPLoader and download separate clip models for Flux models that don't include clip.

    ALIENHAZE
    Author
    Nov 2, 2024· 2 reactions

    @Mr_Jinguji On what model? And what interface are you using?

    MatoCreatesOct 28, 2024
    CivitAI

    can I use this with automatic1111?

    RomloOct 28, 2024· 2 reactions

    yes you can

    RomloOct 28, 2024· 1 reaction

    not sure about the 3.5 though, a1111 was not updated recently

    brucewayne0Oct 29, 2024· 1 reaction

    I got an error on my a1111:

    While copying the parameter named "first_stage_model.decoder.conv_out.bias", whose dimensions in the model are torch.Size([3]) and whose dimensions in the checkpoint are torch.Size([3]), an exception occurred: ('Cannot copy out of meta tensor; no data!',).

    Mr_JingujiOct 29, 2024· 5 reactions

    You can't atm

    But in the next decade? maybe

    323f802Oct 29, 2024· 1 reaction

    @Mr_Jinguji in 10 years AI will be sending our generated images in the form of WestWorld copies to our homes. Hopefully earlier.

    koalamalassNov 2, 2024· 1 reaction

    use webui forge

    ALIENHAZE
    Author
    Nov 2, 2024· 1 reaction

    @MatoCreates @brucewayne0 @Romlo SDXL and SD1.5 work on almost all interfaces. SD3.5 is only compatible with ComfyUI I think so far, there are unofficial ways to use it with ForgeUI. Flux works with ForgeUI and ComfyUI. Automatic1111 hasn't updated support for new models in a while.

    BleedheartNov 4, 2024

    Automatic1111 can't handle it, sorry.

    michaltajchert134Oct 31, 2024· 1 reaction
    CivitAI

    Anybody have a workflow for it?

    cosmicrainNov 1, 2024· 2 reactions
    CivitAI

    XL PRO Recommended Steps = 3 is a typo, right? Should probably be 30

    AikoMatsuiNov 1, 2024

    Yes, it should be 30. But running dpmpp_sde vs 3m_sde is much slower, so expect 2x to 3x the time needed to run. The results are fairly close unless you push the sharpness up.

    ALIENHAZE
    Author
    Nov 2, 2024

    @cosmicrain Yes, it should be 30; a typo, I've corrected it! Thanks for the feedback

    JohnnyAppletreeNov 3, 2024· 1 reaction
    CivitAI

    Turbo 3.5! Please....

    ibrahimananiskimNov 4, 2024
    CivitAI

    I'm getting terrible faces with LoRAs; can you share your nodes with any LoRA so I can copy them? Thank you

    abelspicyNov 5, 2024
    CivitAI

    How to train Lora on this in kohya_ss?
    getting this error:

    Traceback (most recent call last):
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py", line 1396, in <module>
        trainer.train(args)
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py", line 344, in train
        model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py", line 102, in load_target_model
        text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype, accelerator)
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/train_util.py", line 4813, in load_target_model
        text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/train_util.py", line 4768, in _load_target_model
        text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/model_util.py", line 1005, in load_models_from_stable_diffusion_checkpoint
        converted_unet_checkpoint = convert_ldm_unet_checkpoint(v2, state_dict, unet_config)
      File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/model_util.py", line 267, in convert_ldm_unet_checkpoint
        new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
    KeyError: 'time_embed.0.weight'

    Traceback (most recent call last):
      File "/home/ubuntu/kohya_flux/kohya_ss/venv/bin/accelerate", line 8, in <module>
        sys.exit(main())
      File "/home/ubuntu/kohya_flux/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
        args.func(args)
      File "/home/ubuntu/kohya_flux/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1106, in launch_command
        simple_launcher(args)
      File "/home/ubuntu/kohya_flux/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 704, in simple_launcher
        raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/ubuntu/kohya_flux/kohya_ss/venv/bin/python', '/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py', '--config_file', './lora/config_lora-20241105-230620.toml']' returned non-zero exit status 1

    kapec512Nov 6, 2024· 5 reactions
    CivitAI

    FLUX 1.D ALPHA TWO sometimes causes a GPU blackout on my 4070Ti :D During rendering it suddenly shuts down and the fans spin up to full speed. It scared the hell out of me.

    It happens only when I'm using this particular model. When combining it with a LoRA, it happens more often. Alpha One caused no issues at all.

    RumbleMonkNov 16, 2024· 1 reaction

    Would be great to get a GGUF version of this!

    makisekurisu_jpNov 7, 2024· 8 reactions
    CivitAI

    GGUF Q8 PLEASE~m(__ __)m

    maxpower2000_Nov 9, 2024· 1 reaction

    Perhaps a GGUF Q5 if you have a minute hehe

    ExterMinuteNov 7, 2024
    CivitAI

    Is there a recommended Max & Base Shift value for this model? Shift values are the one thing I can never seem to figure out what works best for.

    everdalevNov 8, 2024· 2 reactions
    CivitAI

    I noticed there is a weird border on the edge of the generated images. Is there any way to fix this?

    MysticMindAiNov 17, 2024

    I'm getting that too, and I believe that has something to do with the 1MP limit for SD3.5L. I saw a video of some sort of control node for it, and need to find it again, because it helped out greatly. Perhaps someone within this page can point us in the right direction.

    6vidit9Nov 9, 2024
    CivitAI

    Hey, I'm not getting the onsite generation option for this new model (even tho the option is there to use), the only thing I'm getting is "This is an experimental build, and as such pricing and results are subject to change", it is greyed out :(

    Lyahn_hereNov 10, 2024

    same here!

    gastffNov 11, 2024

    Same here too

    ARCFXNov 10, 2024· 5 reactions
    CivitAI

    Over 24GB? What kind of setup do you need to run this?! The 4090 only has 24GB...

    bsdesignNov 10, 2024· 4 reactions

    On my 2070 (8GB) graphics card in Forge, the generation process takes about 60-90 sec at the usual recommended settings.

    LoliGardenNov 12, 2024· 1 reaction

    @bsdesign does this hurt the gpu

    bsdesignNov 12, 2024

    @LoliGarden Tough question... I haven't found such information on the net yet.

    LavishNov 13, 2024· 1 reaction

    @LoliGarden The more fancy stuff your PC needs to run SD, the worse it gets. (100 add-ons, 100 diff. tweaks etc.)

    TwistedAIProductionsNov 24, 2024· 1 reaction

    @LoliGarden I hurt my 3080 by rendering AI videos with Deforum, but I was running the video card at 100% utilization for over 24 hours straight at a time. What happened was my thermal paste dried up and my thermal pads degraded, and the card started overheating. I just had to replace the paste and pads and now I'm back in action. The factory pads and paste most card manufacturers use are not up to par anyway, so I shouldn't have to do that again any time soon. Honestly, running AI on your video card is the same kind of wear and tear a card would receive mining crypto 24/7. It heats up your memory modules more than anything. A little preventative maintenance can go a long way.

    lusiwei85942Nov 27, 2024

    maybe RTX A6000

    daerraghNov 10, 2024· 7 reactions
    CivitAI

    The Best Flux model, hands down.

    I would really love to see it updated to push the boundary even further.

    And I'm hoping for a NF4 version, too.

    antonfawkes33350Nov 11, 2024· 10 reactions
    CivitAI

    SD3.5 medium version when?

    punkbuzter340Nov 19, 2024

    SD3.0 is the Medium version of SD3.
    SD3.5 is the Large version of SD3.
    What's missing is something that normal people can run, the Small version.

    antonfawkes33350Nov 19, 2024· 2 reactions
    798thtgNov 21, 2024· 1 reaction

    me too!

    punkbuzter340Nov 23, 2024

    @antonfawkes33350 SAI said they'd release SD3 in Small Medium Large sizes, apparently they've broken their promise then.

    DerD4nnYNov 18, 2024
    CivitAI

    Pruned is 20 and Full is 11? Something mixed up?

    mykeehuNov 18, 2024

    Read information:
    "The Main Download (Full 11GB) is the version containing only the Unet, so you can use it accompanied by the clips of your choice, depending on whether you prefer the fp8, the fp16 or want to experiment with new clip_l or t5.

    In the file section of the sidebar you can instead find the Secondary Download (Pruned 20GB) of the AIO version with t5xxl_fp16, clip_l and VAE already included."

    antonfawkes33350Nov 18, 2024· 1 reaction

    @mykeehu what's the point of having pruned model if you are gonna make it a 20 gigabyte download???

    punkbuzter340Nov 19, 2024

    @antonfawkes33350 Simplicity, if you use Comfy, as you don't need all the extra nodes for the VAE and Triple CLIP Loader. And ease of use for A1111 users, as they don't have to download all the CLIP models and the VAE.
    ... Would be my guess.

    ShencilLizardNov 20, 2024· 12 reactions
    CivitAI

    Do you plan on releasing Q8_0 GGUFs for your models anytime soon? It's less than 1GB more VRAM than fp8 but almost fp16 quality.

    OcinNov 20, 2024
    CivitAI

    What are the recommended ratios?

    unmysticNov 22, 2024· 1 reaction
    CivitAI

    This is a great SD 3.5 model! thanks!!!!

    VelvetElvisNov 25, 2024· 2 reactions
    CivitAI

    How would you use the main AIO model in Comfy? I'm struggling to find the right node to use to load it and be able to pull out the clip/vae. I can get it to run just fine in Forge, but the comfy node setup escapes me.

    AI_Chef_RamsayNov 28, 2024
    CivitAI

    This is my go to flux model for sure. For a long time Alienhaze checkpoints have been my go to, with Afrodite one of my faves for XL but this is my preferred for flux.

    neophoeusDec 11, 2024
    CivitAI

    Pruned fatter than Full???

    ShencilLizardDec 14, 2024· 1 reaction

    I hope you get pruned for not reading.

    EisramDec 15, 2024· 7 reactions
    CivitAI

    Anyone else experiencing this error? KeyError: 'gelu_new'

    saelin33445Dec 16, 2024
    CivitAI

    There are too many bugs in the SD1.5 version.

    thebadsleepwellDec 20, 2024
    CivitAI

    For the XL models, I see the preferred sampler is MAIN SAMPLER: dpmpp__sde. I don't see that in Forge, and can't find any information about how to get it. Any ideas?

    ArialieDec 22, 2024· 4 reactions

    It's DPM++ SDE

    XmutsixDec 26, 2024
    CivitAI

    Do I need to use auxiliary models (ae, clip_l, t5) with the Flux 11.08GB version?

    Haircut66Dec 29, 2024

    The Main Download (Full 11GB) is the version containing only the Unet, so you can use it accompanied by the clips of your choice, depending on whether you prefer the fp8, the fp16 or want to experiment with new clip_l or t5.

    In the file section of the sidebar you can instead find the Secondary Download (Pruned 20GB) of the AIO version with t5xxl_fp16, clip_l and VAE already included.

    skpManiacDec 28, 2024
    CivitAI

    I'm very happy; this works in Automatic1111. I get this error, is it anything to worry about?

    Thanks so much for your hard work :)

    mayurta221780Dec 29, 2024
    CivitAI

    Can this be run on RTX4060 graphic card? It has 8 GB VRAM.

    condzero1950Jan 21, 2025

    You should specify the environment you are/will be using (i.e. Comfy, Forge, SDNext, or your own setup), as there are different ways to save GPU memory when running models. You should be able to by offloading components to the CPU. You can also save GPU memory by quantizing components of the model, using GGUF versions, etc.

    sourav4068362Jan 23, 2025

    It's going to run on a 4060; I'm using one now. The thing is, I have 32 GB of RAM. If you have enough RAM, you can run the 22 GB Flux Dev model with t5 fp16 without any issue. It will take around 1.5 to 2 minutes with 20 steps, depending on the workflow.

    mmdd2543Dec 31, 2024· 4 reactions
    CivitAI

    Just a heads up. For me this model page with all your STOIQO models doesn't come up if I search for "STOIQO NewReality" in Civitai's search field. I had to do a search engine search to find it. Is it just me?

    abbruzzzi831Jan 3, 2025· 1 reaction

    Same

    lamoidfl697Jan 2, 2025· 1 reaction
    CivitAI

    stoiqoNewrealityFLUXSD35_XLPRO simply does not work for me. I get very strange patterns and then it hangs. Using Automatic1111 txt2img with controlnet openpose. Windows 10 box with RTX 4070 gpu (12GB VRAM)

    dvdufoJan 6, 2025

    Same here.

    Rudi_aus_BuddelnJan 9, 2025· 2 reactions

    nobody uses automatic1111 anymore, use forge

    dvdufoJan 15, 2025

    @Rudi_aus_Buddeln I get those strange patterns on comfyui. I'm afraid maybe my downloaded file is broken or incomplete.

    condzero1950Jan 18, 2025
    CivitAI

    Well, I downloaded the pruned model and am now putting it through its paces to provide some feedback. The first few images derived from the author's own prompts. I must say that this model generates images every bit as good as, and maybe better than, FLUX DEV, as you can see below. Using a custom scheduler/sampler based off Katherine Crowson's excellent k-diffusion sampler. A CFG of 2.5 and 30 steps seems to work pretty well. I don't even have to bump up the shift parameter (like I sometimes have to do with the base model) to generate good images!

    NOOBDAJan 21, 2025
    CivitAI

    Unable to download.

    F370NJan 23, 2025· 6 reactions
    CivitAI

    This is easily one of the most underused, underrated models available for 1.5. It baffles me that more people don't know about or use it. Are you still working on/developing it further? Specifically the 1.5 branch?

    foggyghost0Jan 25, 2025· 2 reactions
    CivitAI

    Is Black Forest Labs' Realism LoRA merged into the Flux version? Thx!

    theno1Jan 28, 2025
    CivitAI

    fp16 weights are heavy.

    Do you have fp8 weights?

    EDIT: IDK whether it is a good idea but I was able to reduce the inference time with T5fp8

    kaivanbi922Jan 31, 2025

    FP8 is a great version too. But for maximum detail you're better off going FP16 with more steps (DPMPP_2M, for example, with 40-50 steps). With FP8 you're fine with 30.

    engineerFeb 9, 2025

    @kaivanbi922 And it takes forever, even on a high end (at its time) GPU like 3090Ti.

    MihaiBMar 12, 2025

    @engineer 3090Ti is not high end

    justinof1503328Jan 31, 2025
    CivitAI

    Where do I have to install this in ComfyUI?

    PeachCandiPresentsJan 31, 2025

    In the models folder under checkpoints or diffusion model depending on which you download. All resources are available online if you're able to do a quick Google search. Focus on learning from YouTube and asking model specific questions here.

    justinof1503328Feb 1, 2025

    @PeachCandi That's what I had done before; nothing works.

    juanvelezFeb 6, 2025

    @justinof1503328 Do you use it locally or use any cloud-based service? That's gonna depend on which...

    fujihitaFeb 1, 2025· 1 reaction
    CivitAI

    While the SD3.5 model produces great results, the inference time is unbearable even with the fp8 encoder. It feels more like a Large model than the tagged Medium, and that is likely the case, given it's thrice the size of the SD3.5 Medium base.

    condzero1950Feb 2, 2025

    Not sure why you find it this way. I don't find them bad at all. 3.5M is fairly zippy and 3.5L is half again faster than Flux Dev. Is your environment and hardware up to the task?

    fujihitaFeb 5, 2025

    @condzero1950 No, that's why I'm specifically using SD 3.5 Medium and not Large. This checkpoint you're looking at is labeled Medium but it's weighing like a Large model. I have other Medium checkpoints that run 9 times as fast. You get prompt coherence and output quality like a Large model so I'm suspecting maybe the labeling is wrong.

    condzero1950Feb 6, 2025

    "so I'm suspecting maybe the labeling is wrong."

    Yes, I found this out with one model fine tune. Labeled SD 3.5M but clearly a large. Let the size be your guide (all things being equal) as to what it might be.
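"Let the size be your guide" can be made concrete: at fp16 a bare checkpoint holds roughly (file bytes / 2) parameters, so file size alone separates the variants. A quick sketch (the thresholds and parameter counts are my own rough assumptions, not figures from this page):

```python
def guess_sd35_variant(file_gib):
    """Guess which SD3.5 variant a bare fp16 checkpoint is, from its size.
    Rough assumptions: Medium ~2.5B params (~5 GiB fp16), Large ~8B params
    (~16 GiB fp16); AIO bundles push past 20 GiB."""
    params_b = file_gib * 2**30 / 2e9  # billions of params at 2 bytes each
    if file_gib > 20:
        return "AIO bundle (model + text encoders + VAE)"
    return "Large (~8B)" if params_b > 5 else "Medium (~2.5B)"

print(guess_sd35_variant(16.0))  # a bare ~16 GiB fp16 file points at Large
```

By this yardstick, the ~16GB pruned file discussed in this thread behaves like a Large, which matches the inference times people are reporting.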

    MysticMindAiMar 16, 2025

    Is this really a sd3.5 Large model?? If that's the case, I use Wavespeed either way.

    MysticMindAiMar 16, 2025

    So, it does appear to be Large just by the amount of time it takes. Typically I have been using Absynth and sometimes vanilla 3.5L, and the times are similar between all three. So it seems quite likely this is the SD3.5 Large variant, which makes sense since the quality is a bit better than Medium. Using Wavespeed I can get about 40-45 seconds for a batch of 2 images most times, running at 40 steps. Running at 50 steps only adds about 5-7 seconds total.

    Additionally, when trying out Medium vs Large LoRAs, I always have issues with the Medium LoRAs.

    6vidit9Feb 5, 2025· 2 reactions
    CivitAI

    Thank you very much for putting it back to the onsite generator, I missed this phenomenal checkpoint 🙏❤️

    martindieterFeb 25, 2025
    CivitAI

    I like using your SD3.5 model. However, in 90% of all results there are errors at the top of the image. Like a kind of shift. Can you explain why this is happening or what setting needs to be made to prevent this?

    maffystovskyOct 20, 2025

    Hello. Out of curiosity, did you find a solution? I just started using the model and am having the same issue with the top of the images.

    Kiri_Pale_RiderFeb 27, 2025
    CivitAI

    I'm using the Flux Alpha Two version, but it's making anime-style output in a way I don't understand. The two sample images are photorealistic, so I can't figure out what I'm doing wrong.