🟡:Flux Models 🟢:SD 3.5 Models 🔵:SD XL Models 🟣:SD 1.5 Models 🔴:Expired Models
🟡STOIQO NewReality is a cutting-edge model designed to generate photographic content. It empowers users to capture fine details in portraits, explore breathtaking landscapes, and bring mythical creatures to life with unparalleled variety and precision. This model is tailored for artists and creators who demand high-quality results, featuring dynamic light management and intricate textures, making it the ultimate tool for top-tier creative work.
Versions and Recommended Settings:

MAIN SAMPLER: dpmpp_2m + sgm_uniform
CFG: 3+ | STEPS: 30+
(Recommended): CFG: 5 | STEPS: 40
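For ComfyUI users, these settings map directly onto the standard KSampler node. A minimal sketch of that node as it would appear in an API-format workflow (the node IDs and upstream connections are placeholders, not part of this model card):

# KSampler entry from an API-format ComfyUI workflow, with the
# recommended SD3.5 settings filled in (node ids "1".."4" are placeholders).
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["1", 0],         # from CheckpointLoaderSimple
        "positive": ["2", 0],      # from CLIPTextEncode (prompt)
        "negative": ["3", 0],      # from CLIPTextEncode (negative prompt)
        "latent_image": ["4", 0],  # from EmptySD3LatentImage
        "seed": 0,
        "steps": 40,                  # recommended: 40 (minimum 30)
        "cfg": 5.0,                   # recommended: 5 (minimum 3)
        "sampler_name": "dpmpp_2m",
        "scheduler": "sgm_uniform",
        "denoise": 1.0,
    },
}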
Finally, here is the first fine-tuning test on SD3.5 for NewReality. As always, the first goal of the Alpha version is to focus on the photographic aspect and textures by expanding the range of subjects the model can reproduce. Future fine-tuning will focus more on individual components and details.
The Main Download (Full 26GB) is the AIO version, with t5xxl_fp16, clip_l, clip_g and the VAE already included.
In the Files section of the sidebar you can instead find the Secondary Download (Pruned 16GB), which contains only the model, so you can pair it with the text encoders of your choice, depending on whether you prefer fp8 or fp16, or want to experiment with a new clip_l or t5.
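In ComfyUI the two downloads load differently. A rough sketch of the node choices, again as API-format workflow fragments (all file names are placeholders):

# AIO (Full 26GB): one loader supplies MODEL, CLIP and VAE.
aio = {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {"ckpt_name": "stoiqoNewreality_sd35_AIO.safetensors"},  # placeholder
}

# Pruned (16GB, model only): supply the text encoders yourself. Wiring the
# CLIP output of CheckpointLoaderSimple from a CLIP-less file into a prompt
# node is what produces the "'NoneType' object has no attribute 'tokenize'"
# error reported in the comments below.
clips = {
    "class_type": "TripleCLIPLoader",  # clip_l + clip_g + t5xxl for SD3.x
    "inputs": {
        "clip_name1": "clip_l.safetensors",
        "clip_name2": "clip_g.safetensors",
        "clip_name3": "t5xxl_fp16.safetensors",  # or t5xxl_fp8_e4m3fn
    },
}
# Add a VAELoader node as well if your pruned file has no baked-in VAE.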

MAIN SAMPLER: Euler + Beta
GUIDANCE (in Forge: 'Distilled CFG Scale'): 2+ | STEPS: 15+
(Recommended): GUIDANCE: 2.5 | STEPS: 25
NewReality FLUX.1 Dev has reached its Alpha version, but I'm still in an experimental phase with the Flux component. Working on the Unet architecture has proven to be more time-consuming and challenging than initially expected. As a result, I will release smaller, more frequent updates during this initial period to gather valuable feedback. I’m fully aware of the existing issues within the model, and your input will be instrumental in addressing and refining them.
The Main Download (Full 11GB) contains only the Unet, so you can pair it with the text encoders of your choice, depending on whether you prefer fp8 or fp16, or want to experiment with a new clip_l or t5.
In the Files section of the sidebar you can instead find the Secondary Download (Pruned 20GB), the AIO version with t5xxl_fp16, clip_l and the VAE already included.
ATTENTION: the Full/Pruned naming here exists only to keep the Unet-only file as the main download; for every other version the AIO is the main download. The two files are composed as described above.
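For the Unet-only Flux download, the usual ComfyUI setup is three separate loaders plus a FluxGuidance node; a sketch under the same API-format assumption (file names and node links are placeholders):

unet = {"class_type": "UNETLoader",
        "inputs": {"unet_name": "stoiqoNewrealityFLUX.safetensors",  # placeholder
                   "weight_dtype": "default"}}
clip = {"class_type": "DualCLIPLoader",  # clip_l + t5xxl for Flux
        "inputs": {"clip_name1": "clip_l.safetensors",
                   "clip_name2": "t5xxl_fp16.safetensors",  # or the fp8 t5
                   "type": "flux"}}
vae = {"class_type": "VAELoader", "inputs": {"vae_name": "ae.safetensors"}}
# Flux.1 Dev is guidance-distilled: leave the KSampler cfg at 1.0 and set
# the recommended guidance (2.5) through FluxGuidance on the positive
# conditioning instead.
guidance = {"class_type": "FluxGuidance",
            "inputs": {"guidance": 2.5, "conditioning": ["5", 0]}}  # "5" = CLIPTextEncode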
Refiner is Unnecessary and Clip and VAE are Included
MAIN SAMPLER: dpmpp_sde + Karras
CFG: 2+ | STEPS: 15+
(Recommended): CFG: 4 | STEPS: 30
NewReality XL PRO is a fine-tuned model designed with a specific focus on enhancing prompt adherence, earning it the name "PRO" (PRO-mpt). This version excels in managing lighting and image composition, offering a more refined and precise visual output based on the given instructions.
Refiner is Unnecessary and Clip and VAE are Included
MAIN SAMPLER: dpmpp_3m_sde + Exponential
CFG: 2+ | STEPS: 15+
(Recommended): CFG: 4 | STEPS: 20
Refiner is Unnecessary and Clip and VAE are Included
MAIN SAMPLER: dpmpp_sde + Karras
CFG: 1+ | STEPS: 4+
(Recommended): CFG: 2 | STEPS: 4
Clip and VAE are Included
MAIN SAMPLER: dpmpp_sde + Karras
CFG: 2+ | STEPS: 15+
(Recommended): CFG: 5 | STEPS: 25
Credits:
NewReality is the result of countless hours of work, involving several hundred iterations through training, merging of models and LoRA (Low-Rank Adaptation), fine-tuning specific blocks or captions, as well as performing additive and subtractive merges. Due to the complexity of this process, it is difficult to provide a precise, step-by-step account of every decision and experiment. However, each phase has played a significant role in shaping the final product, whether by direct influence or through valuable lessons learned along the way.
Given the intricacies and challenges involved, I want to extend my heartfelt gratitude to the creators of the models that have contributed to this journey, either directly or indirectly. Regardless of whether they were incorporated into the final iteration, their work provided inspiration, insight, and progress throughout the project.
To all the creators whose models supported this endeavor, I extend my deepest thanks. I encourage everyone to explore and support their work, as it deserves recognition for the incredible value it brings to the community.
EauDeNoire humblemikey socalguitarist ZyloO xlabs_ai aramintastudio Seeker70 Ai_Art_Vision PromptoAI alvdansen aip0pp SG_161222
WORK IN PROGRESS...
Comments (118)
Any thoughts on SD 3.5? It's insane how fast this came out. What are its limits? Does it do well with some things and not others?
It's only good at 1MP resolution. Very good at styles, but anatomy is terrible. It's a step up from SDXL, but not as good as Flux. Unlike Flux, though, it can be properly trained.
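Since the 1MP sweet spot comes up repeatedly here, a small helper for picking SD3.5-friendly resolutions (the function and its rounding rule are my own illustration, assuming the common convention of dimensions that are multiples of 64):

import math

def dims_for_megapixels(aspect_w, aspect_h, megapixels=1.0, multiple=64):
    # Solve w*h = target with w/h = aspect_w/aspect_h, then snap both
    # dimensions to the nearest multiple of 64.
    target = megapixels * 1_000_000
    h = math.sqrt(target * aspect_h / aspect_w)
    w = h * aspect_w / aspect_h
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(w), snap(h)

print(dims_for_megapixels(16, 9))  # (1344, 768), ~1.03 MP
print(dims_for_megapixels(1, 1))   # (1024, 1024), ~1.05 MP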
Does anyone know if Forge works with SD3.5, or is there no support yet?
Not yet; we're all waiting for lllyasviel. Rumors say he'll add support after the 3.5 Medium release.
Any plans for a Q8 gguf?
Is that better than large turbo or fp8?
@P_Universe It's basically fp16 in a "zip". Q8 is always better than fp8, if we don't mind somewhat slower diffusion due to the compression.
@Mescalamba thank you so much for the explanation
I always make my own GGUFs out of the base models - quantization is pretty simple: ComfyUI-GGUF/tools at main · city96/ComfyUI-GGUF
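For reference, the two-step conversion that repo describes, sketched below; the script name, the --src flag, and the output file names are assumptions taken from its README at the time of writing, and the quantize step needs that repo's patched llama.cpp build, so check the repo for current usage:

import subprocess

src = "flux1-dev.safetensors"  # hypothetical input checkpoint
# Step 1: convert the safetensors model to an F16 GGUF.
subprocess.run(["python", "convert.py", "--src", src], check=True)
# Step 2: quantize with the repo's patched llama-quantize build.
subprocess.run(["./llama-quantize", "flux1-dev-F16.gguf",
                "flux1-dev-Q8_0.gguf", "Q8_0"], check=True)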
What are the advantages of SD 3.5? I mean, it's huge; how about the speed?
On my 3090 it's a little faster than Flux, but not much.
I think the prompt comprehension is a bit better than Flux, from what I've read.
Better prompt understanding than Flux.
Slightly worse quality than Flux but not much.
Better quality than SDXL.
Thank you for updating this model, good sir. I'm hooked on 3.5 away from FLUX this week, so glad to get a great model like this one to use. Keep up the good work!
I tried to run the SD3.5 model and it totally froze my PC, including the mouse cursor; I only heard heavy HDD activity, even though my C drive, where ComfyUI & the checkpoints live, is NVMe. I waited 15 minutes and did a hard reset. The Flux version works, no issues. RTX 3080, Ryzen 9 5900X, 32 GB DDR4. If there's a solution I'll try again sometime, but until then I'm staying away from 3.5.
You can hardly immediately blame 3.5 for the behavior you experienced. Did you try again? Are you sure everything necessary for 3.5 is installed and up to date? In general, Flux is more resource intensive than 3.5, so it doesn't make much sense to put the blame on the checkpoint. I've been finding it pretty fun to work with so far, so I hope you get it figured out!
@FauxRealDoe It happened again. Maybe something wrong with my Comfy, even though it works fine with Flux and SDXL.
@StargateMax What Flux model are you using, the default one or a smaller-sized one?
It sounds like you filled up system RAM and entered a swap storm. A couple of things: watch the memory usage while it's running to confirm, and check where your swap file is - move it to the NVMe if it's on a mechanical drive. A specific fix? Maybe try smaller quantized models for the text encoders etc.
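To confirm the swap-storm theory, something like this (psutil is a third-party package, pip install psutil) run alongside a generation will show swap climbing while RAM is pegged:

import time
import psutil

# Print RAM and swap usage once per second; RAM near 100% with swap
# steadily growing is the signature of a swap storm.
while True:
    ram = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"RAM {ram.percent:5.1f}% | swap {swap.percent:5.1f}% "
          f"({swap.used / 2**30:.1f} GiB used)")
    time.sleep(1)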
Can I sell photos that I generate with this model? (Flux models)
It's AI, just do it anyway? Flux was clearly trained on screenshots from the internet originally, so who cares?
SD3.5 is free for both commercial and non-commercial use if you make less than $1M. But Flux.1 Dev is free only for personal use, not commercial, without a license from them. Flux Schnell is free to use commercially. But like the person above said, go ahead, just don't get popular... They probably have a watermark system. Lol
@pychobj2001741 You're right! haha
@pychobj2001741 I would assume SDXL and SD1.5 are the same? Free for both commercial and non-commercial use.
@AikoMatsui I couldn't remember for SDXL, so I looked at its license on Hugging Face (base SDXL 1.0) and summarized the license with Perplexity: The CreativeML Open RAIL++-M License allows both personal and commercial use of the model and its derivatives, provided that users comply with specific use-based restrictions. These restrictions prohibit certain uses, such as generating harmful content or exploiting minors. Users must also share the license terms when redistributing the model or its derivatives.
It looks like SD1.5 is on the same license.
In short, yes, they are both free to use.
@pychobj2001741 The minors part, I get it; it should be part of ethical use. Good to know that it's "free" to use.
All outputs generated by my models (SD1.5, SDXL, SD3.5 and Flux) are for commercial use within the limits imposed by the providers (StabilityAI and BlackForestLabs).
@pychobj2001741 has already explained everything best, but to clarify: the outputs of Flux.1 Dev are also fine for commercial use.
The policy requires a license from BlackForestLabs in order to use the 1.Dev model itself commercially, e.g. in a paid generation service or for direct selling. The outputs cannot be used as datasets for training or fine-tuning other models, but BFL does not claim rights over individual sales of generated works.
@colinw2292823 @lukasv111 @bahamutww @AikoMatsui
CLIPTextEncode
'NoneType' object has no attribute 'tokenize'
help pls
Use a DualCLIPLoader and download separate CLIP models for Flux models that don't include CLIP.
@Mr_Jinguji on what model? and what interface are you using?
can I use this with automatic1111?
yes you can
not sure about the 3.5 though, a1111 was not updated recently
I got this error on my A1111:
While copying the parameter named "first_stage_model.decoder.conv_out.bias", whose dimensions in the model are torch.Size([3]) and whose dimensions in the checkpoint are torch.Size([3]), an exception occurred: ('Cannot copy out of meta tensor; no data!',).
You can't atm
But in the next decade? maybe
@Mr_Jinguji in 10 years AI will be sending our generated images in the form of WestWorld copies to our homes. Hopefully earlier.
use webui forge
@MatoCreates @brucewayne0 @Romlo SDXL and SD1.5 work on almost all interfaces. SD3.5 is only compatible with ComfyUI so far, I think, though there are unofficial ways to use it with ForgeUI. Flux works with ForgeUI and ComfyUI. Automatic1111 hasn't updated support for new models in a while.
Automatic1111 can't handle it, sorry.
Anybody have a workflow for it?
XL PRO Recommended Steps = 3 is a typo, right? Should probably be 30
Yes, it should be 30. But running dpmpp_sde vs the 3m_sde is much slower, so expect 2x to 3x the run time. The results are fairly close unless you push the sharpness up.
@cosmicrain Yes, it's 30; a typo, I've corrected it! Thanks for the feedback.
Turbo 3.5! Please....
I'm getting terrible faces with LoRAs; can you share a workflow with any LoRA so I can copy the nodes? Thank you.
How to train Lora on this in kohya_ss?
getting this error:
Traceback (most recent call last):
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py", line 1396, in <module>
trainer.train(args)
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py", line 344, in train
model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py", line 102, in load_target_model
text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype, accelerator)
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/train_util.py", line 4813, in load_target_model
text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/train_util.py", line 4768, in _load_target_model
text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/model_util.py", line 1005, in load_models_from_stable_diffusion_checkpoint
converted_unet_checkpoint = convert_ldm_unet_checkpoint(v2, state_dict, unet_config)
File "/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/library/model_util.py", line 267, in convert_ldm_unet_checkpoint
new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
Traceback (most recent call last):
File "/home/ubuntu/kohya_flux/kohya_ss/venv/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/kohya_flux/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/ubuntu/kohya_flux/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1106, in launch_command
simple_launcher(args)
File "/home/ubuntu/kohya_flux/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 704, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ubuntu/kohya_flux/kohya_ss/venv/bin/python', '/home/ubuntu/kohya_flux/kohya_flux/sd-scripts/train_network.py', '--config_file', './lora/config_lora-20241105-230620.toml']' returned non-zero exit status 1
FLUX 1.D ALPHA TWO sometimes causes a GPU blackout on my 4070 Ti :D During rendering it suddenly shuts down and the fans spin up to full speed. It scared the shit out of me.
It happens only when I'm using this particular model. When combining it with a LoRA it happens more often. Alpha One caused no issues at all.
Would be great to get a GGUF version of this!
GGUF Q8 PLEASE~m(__ __)m
Perhaps a GGUF Q5 if you have a minute hehe
Is there a recommended Max & Base Shift value for this model? Shift values are the one thing where I can never seem to figure out what works best.
I noticed there is a weird border on the edge of the generated images. Is there any way to fix this?
I'm getting that too, and I believe it has something to do with the 1MP limit for SD3.5L. I saw a video about some sort of control node for it, and need to find it again, because it helped out greatly. Perhaps someone on this page can point us in the right direction.
Hey, I'm not getting the onsite generation option for this new model (even though the option is there to use); the only thing I'm getting is "This is an experimental build, and as such pricing and results are subject to change", and it is greyed out :(
same here!
Same here too
Over 24GB? What kind of setup do you need to run this?! The 4090 only has 24GB...
On my 2070 (8 GB) graphics card in Forge, the generation process takes about 60-90 sec at the usual recommended settings.
@bsdesign Does this hurt the GPU?
@LoliGarden Tough question... I haven't found such information on the net yet.
@LoliGarden The more fancy stuff your PC needs to run SD, the worse it gets. (100 add-ons, 100 diff. tweaks etc.)
@LoliGarden I hurt my 3080 by rendering AI videos with Deforum; however, I was running the video card at 100% utilization for over 24 hours straight at a time. What happened was my thermal paste dried up and my thermal pads degraded, and the card started overheating. I just had to replace the paste and pads and now I'm back in action. The factory pads and paste most card manufacturers use are not up to par anyway, so I shouldn't have to do that again any time soon. Honestly, running AI on your video card is the same kind of wear and tear a card would receive mining crypto 24/7. It heats up your memory modules more than anything. A little preventative maintenance can go a long way.
maybe RTX A6000
The Best Flux model, hands down.
I would really love to see it updated to push the boundary even further.
And I'm hoping for a NF4 version, too.
SD3.5 medium version when?
SD3.0 is the Medium version of SD3.
SD3.5 is the Large version of SD3.
What's missing is something that normal people can run, the Small version.
@punkbuzter340 That is false? https://huggingface.co/stabilityai/stable-diffusion-3.5-medium
me too!
@antonfawkes33350 SAI said they'd release SD3 in Small, Medium and Large sizes; apparently they've broken that promise then.
Pruned is 20 GB and Full is 11 GB? Something mixed up?
Read the information:
"The Main Download (Full 11GB) contains only the Unet, so you can pair it with the text encoders of your choice, depending on whether you prefer fp8 or fp16, or want to experiment with a new clip_l or t5.
In the Files section of the sidebar you can instead find the Secondary Download (Pruned 20GB), the AIO version with t5xxl_fp16, clip_l and the VAE already included."
@mykeehu What's the point of having a pruned model if you're gonna make it a 20-gigabyte download???
@antonfawkes33350 Simplicity if you use Comfy, as you don't need all the extra nodes for the VAE and Triple CLIP Loader. And ease of use for A1111 users, as they don't have to download all the CLIP models and the VAE.
... Would be my guess.
Do you plan on releasing Q8_0 GGUFs for your models anytime soon? It's less than 1GB more VRAM than fp8 but almost fp16 quality.
What are the recommended ratios?
This is a great SD 3.5 model! thanks!!!!
How would you use the main AIO model in Comfy? I'm struggling to find the right node to use to load it and be able to pull out the clip/vae. I can get it to run just fine in Forge, but the comfy node setup escapes me.
This is my go-to Flux model for sure. For a long time Alienhaze checkpoints have been my go-to, with Afrodite one of my faves for XL, but this is my preferred model for Flux.
Pruned fatter than Full???
I hope you get pruned for not reading.
Anyone else experiencing this error? KeyError: 'gelu_new'
There are too many bugs in the SD1.5 version.
For the XL models, I see the preferred sampler is MAIN SAMPLER: dpmpp_sde. I don't see that in Forge, and can't find any information about how to get it. Any ideas?
It's DPM++ SDE
Do I need to use the auxiliary models (ae, clip_l, t5) with the Flux 11.08GB version???
The Main Download (Full 11GB) contains only the Unet, so you can pair it with the text encoders of your choice, depending on whether you prefer fp8 or fp16, or want to experiment with a new clip_l or t5.
In the Files section of the sidebar you can instead find the Secondary Download (Pruned 20GB), the AIO version with t5xxl_fp16, clip_l and the VAE already included.
I'm very happy that this works in Automatic1111. I get this error, though; is it anything to worry about?
Thanks so much for your hard work :)
Can this be run on an RTX 4060 graphics card? It has 8 GB VRAM.
You should specify the environment you are or will be using (i.e. Comfy, Forge, SDNext, or your own setup), as there are different ways to save GPU memory when running models. You should be able to run it by offloading components to the CPU. You can also save GPU memory by quantizing components of the model, using GGUF versions of the model, etc.
It's gonna run on a 4060, as I'm using one right now, but the thing is I have 32 GB of RAM; if you have enough RAM you can run the 22 GB Flux Dev model with t5 fp16 without any issue. It takes around 1:30 to 2 min at 20 steps, depending on the workflow.
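A minimal low-VRAM sketch using diffusers rather than a UI, just to show the offloading idea (the model ID, dtype, and settings are illustrative; enable_model_cpu_offload keeps only the active component on the GPU):

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev",
                                    torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()         # move modules to the GPU only while in use
# pipe.enable_sequential_cpu_offload()  # even lower VRAM, much slower

image = pipe("a photo of a red fox at golden hour",
             num_inference_steps=25, guidance_scale=2.5).images[0]
image.save("fox.png")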
Just a heads up. For me this model page with all your STOIQO models doesn't come up if I search for "STOIQO NewReality" in Civitai's search field. I had to do a search engine search to find it. Is it just me?
Same
stoiqoNewrealityFLUXSD35_XLPRO simply does not work for me. I get very strange patterns and then it hangs. Using Automatic1111 txt2img with controlnet openpose. Windows 10 box with RTX 4070 gpu (12GB VRAM)
Well, I downloaded the pruned model and am now putting it through its paces to provide some feedback. The first few images are derived from the author's own prompts. I must say this model generates images every bit as good as, and maybe better than, FLUX DEV, as you can see below. I'm using a custom scheduler/sampler based off Katherine Crowson's excellent k-diffusion sampler. A CFG of 2.5 and 30 steps seem to work pretty well. I don't even have to bump up the shift parameter (like I sometimes have to do with the base model) to generate good images!
Unable to download.
This is easily one of the most underused, underrated models available for 1.5. It baffles me that more people don't know about or use it. Are you still working on/developing it further? Specifically the 1.5 branch?
Is Black Forest Labs' Realism LoRA merged into the Flux version? Thx!
fp16 weights are heavy.
Do you have fp8 weights?
EDIT: IDK whether it's a good idea, but I was able to reduce the inference time with T5 fp8.
FP8 is a great version too. But for max detail you're going FP16 with more steps (dpmpp_2m, for example, with 40-50 steps). In FP8 you're fine with 30.
@kaivanbi922 And it takes forever, even on a high-end (at its time) GPU like a 3090 Ti.
@engineer A 3090 Ti is not high end.
Where do I have to install this in ComfyUI?
In the models folder, under checkpoints or diffusion_models depending on which version you download. All the resources are available online if you're able to do a quick Google search. Focus on learning from YouTube and asking model-specific questions here.
@PeachCandi That's what I did before; nothing works.
@justinof1503328 Do you use it locally or use any cloud-based service? That's gonna depend on which...
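For reference, the usual ComfyUI folder layout the answer above refers to (folder names vary a little between versions; diffusion_models was called unet in older builds):

ComfyUI/models/
    checkpoints/        <- AIO files (model + CLIP + VAE in one .safetensors)
    diffusion_models/   <- UNet-only files (older builds: models/unet/)
    text_encoders/      <- clip_l, clip_g, t5xxl (older builds: models/clip/)
    vae/                <- standalone VAE files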
While the SD3.5 model produces great results, the inference time is unbearable even with the fp8 encoder. It feels like a Large model rather than the tagged Medium, and that is likely the case, given it's thrice the size of the SD3.5 base.
Not sure why you find it this way; I don't find the times bad at all. 3.5M is fairly zippy and 3.5L is half again as fast as Flux Dev. Are your environment and hardware up to the task?
@condzero1950 No, that's why I'm specifically using SD 3.5 Medium and not Large. This checkpoint is labeled Medium, but it weighs as much as a Large model. I have other Medium checkpoints that run 9 times as fast. You get prompt coherence and output quality like a Large model, so I'm suspecting maybe the labeling is wrong.
"so I'm suspecting maybe the labeling is wrong."
Yes, I found this out with one model fine-tune: labeled SD 3.5M but clearly a Large. Let the size be your guide (all things being equal) as to what it might be.
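One way to check for yourself, sketched below with the safetensors library: sum the tensor sizes under the diffusion-model prefix in the checkpoint header (SD3.5 Medium's MMDiT is roughly 2.5B parameters, Large roughly 8B). The file name and the key prefix are assumptions; inspect your file's keys if the prefix matches nothing.

from safetensors import safe_open

path = "stoiqoNewreality_sd35.safetensors"  # hypothetical file name
total = 0
with safe_open(path, framework="pt") as f:
    for key in f.keys():
        # AIO checkpoints also bundle text encoders and VAE, so count only
        # the MMDiT weights under the usual "model.diffusion_model." prefix.
        if key.startswith("model.diffusion_model."):
            n = 1
            for d in f.get_slice(key).get_shape():
                n *= d
            total += n
print(f"~{total / 1e9:.2f}B parameters")  # ~2.5B -> Medium, ~8B -> Large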
Is this really an SD3.5 Large model?? If that's the case, I'll use Wavespeed either way.
So, it does appear to be Large just by the amount of time it takes. Typically I have been using Absynth and sometimes vanilla 3.5L, and the times are similar between all three. So it seems quite likely this is the SD3.5 Large variant, which makes sense given the quality is a bit better than Medium. Using Wavespeed I can get about 40-45 seconds for a batch of 2 images most times, running at 40 steps. Running at 50 steps only adds about 5-7 seconds total.
Additionally, when trying out Medium vs Large LoRAs, I always have issues with the Medium LoRAs.
Thank you very much for putting it back to the onsite generator, I missed this phenomenal checkpoint 🙏❤️
I like using your SD3.5 model. However, in 90% of all results there are errors at the top of the image, like a kind of shift. Can you explain why this happens, or what setting is needed to prevent it?
Hello. Out of curiosity, did you find a solution? I just started using the model and am having the same issue with the top of the images.
I'm using the Flux Alpha Two version, but it's producing anime-style output in a way I don't understand. The two images should just be photorealistic; I can't understand what I'm doing wrong.