This LoRA is extracted from a fine-tuned checkpoint based on Flux dev de-distilled.
The checkpoint was trained on high-resolution images that were processed so the fine-tune could learn every single detail of the original image, working around the 1024x1024 training limitation. This enables the model to produce very fine details during tiled upscales that hold up even in 32K upscales. The result: extremely detailed, realistic skin and overall realism at an unprecedented scale.
This first alpha version has been trained on male subjects only, but elements like skin details will likely partially carry over to other subjects, though this is not confirmed.
Training for female subjects is happening as we speak.
Highest quality slow version:
Turbo version with shamelessly good quality!!!:
Resources:
Lora:
Fast Lora: https://huggingface.co/ostris/OpenFLUX.1/blob/main/openflux1-v0.1.0-fast-lora.safetensors
Turbo Lora: https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha
Text encoders:
ViT-L-14-BEST-smooth-GmP-ft:
t5-v1_1-xxl:
https://huggingface.co/city96/t5-v1_1-xxl-encoder-bf16/resolve/main/model.safetensors?download=true
Clip G:
Recommended settings:
Without Turbo and Fast Lora
CFG: 2-4
Steps: 50-60
With Turbo and Fast Lora
CFG: 2-4
Steps: 8-16
Turbo Lora weight: 1
Fast Lora weight: 0.33
This Flux variant works with negative prompts, so use them if you want to. Negative prompts work with and without the Turbo LoRA.
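For reference, the two settings profiles above can be encoded in a small helper. This is only a sketch; the function and field names are made up for illustration and are not part of any UI or API:

```python
def recommended_settings(use_speed_loras: bool) -> dict:
    """Return the recommended sampling settings for this LoRA.

    use_speed_loras: True when the Turbo and Fast LoRAs are stacked
    on top of this LoRA, which allows far fewer steps.
    """
    settings = {
        "cfg_range": (2, 4),      # real CFG; negative prompts are supported
        "steps_range": (50, 60),  # slow, highest-quality path
        "lora_weights": {},
    }
    if use_speed_loras:
        settings["steps_range"] = (8, 16)
        settings["lora_weights"] = {
            "turbo_lora": 1.0,    # alimama FLUX.1-Turbo-Alpha
            "fast_lora": 0.33,    # ostris openflux1 fast LoRA
        }
    return settings
```

Note that the CFG range stays the same in both profiles; only the step count and the extra LoRA weights change.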
Comments (15)
6 GB LoRA? Am I reading that right? Would that work even on a 24GB GPU? What's the benefit to using the LoRA rather than the full Sigma Vision model? With 6GB in size, using the original model would be less heavy on the VRAM, no? If my math is correct, Flux Dev FP8 ~11GB + this 6GB Lora + ~6GB for the CLIP encoders (FP8) = ~23GB VRAM used. A bit excessive IMO.
I don't think it makes a difference. The 6 GB are loaded and replace 6 GB of the full model's 23 GB, that's how it works I believe. I'll extract some additional ranks and upload them, but I believe rank 640 is lossless.
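For intuition, a rank-r LoRA stores, for each targeted weight matrix, a low-rank update that gets added to (and in that sense replaces) the corresponding base weights at load time. A minimal numpy sketch with made-up toy shapes, not the actual Flux layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 8   # toy sizes; real Flux layers are far larger
alpha = 1.0                     # LoRA strength / scale

W = rng.standard_normal((d_out, d_in))   # base checkpoint weight
B = rng.standard_normal((d_out, rank))   # LoRA "up" matrix
A = rng.standard_normal((rank, d_in))    # LoRA "down" matrix

# Applying the LoRA: the base weight is patched to W + alpha * (B @ A).
# Only B and A are stored on disk, which is why a high-rank extraction
# (e.g. rank 640) of a large model can still be several GB.
W_patched = W + alpha * (B @ A)

assert W_patched.shape == W.shape
```

Since the patched matrix has the same shape as the original, loading the LoRA does not add the file size on top of the base model's footprint once merged.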
The purpose is so you can use it in combination with your favourite checkpoint to enhance its skin details.
@tamtamx1332 can we use it for fp8 models too?
@tamtamx1332 thanks for all your hard work btw. doing gods work.
@cutetodeath78409597 yes that works.
@cutetodeath78409597 thank you and very welcome!
@tamtamx1332 You were right! I didn't get out of memory error, even when using with a couple of other LoRAs. I've posted an image in the gallery. Good job!
Is this necessary for Sigma Vision Alpha1 checkpoint, or standalone, to use with standard flux model? Any differences (quality) between both?
I used this LoRA with the base Flux 1D model at strength 1.0 and ran it for 50 steps. My results below were mixed. In some cases the images look like cartoons, with the skin texture completely gone, as if Flux doubled down on removing any trace of skin texture. I can't recommend this LoRA.
@condzero1950 Hi. I'd like to offer some suggestions based on personal experience.
Looking at your images, I notice that your Flux Guidance is set to 4, which is too high for photo-realistic images in Flux. Normally, anything higher than 3-3.5 will start giving your images the plastic look. Try Flux Guidance in the range of 2 to 3 max.
Also, I'm not sure what sampler and scheduler you've used, but you should avoid Euler for realistic images that require good and detailed textures. Try DPMPP_2M + Beta.
I've posted an image in the gallery so you can see my settings, which are basically what I outlined above.
Another thing to keep in mind is that the model was probably not trained on fantastical characters, so prompts asking for such will most likely not yield good results. If you absolutely need to depict a fantastical creature, I would probably add an additional LoRA or two specifically made for such tasks.
I hope this helps!
@condzero1950 As mmdd2543 recommended, you need to lower the guidance scale. I've seen people go as low as 1.5 or so for best results. Also, this is a LoRA extracted from Flux dev de-distilled, so it will work best with a Flux dev de-distilled workflow with real CFG. I wish people would stop using the regular Flux dev model, which is a distilled model and thus limited in quality and prompt coherence.
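For context, "real CFG" means the sampler actually runs a conditional and an unconditional (negative-prompt) prediction at each step and combines them, instead of relying on Flux dev's distilled guidance embedding. A schematic numpy sketch of classifier-free guidance, with toy vectors standing in for the model's noise predictions:

```python
import numpy as np

def cfg_combine(uncond, cond, cfg_scale):
    """Classic classifier-free guidance: extrapolate from the
    unconditional prediction toward the conditional one."""
    return uncond + cfg_scale * (cond - uncond)

# Toy stand-ins for the model's predictions at a single sampling step.
uncond = np.array([0.0, 1.0, 2.0])   # negative / empty prompt
cond = np.array([1.0, 1.0, 0.0])     # positive prompt

# cfg_scale = 1 reproduces the conditional prediction exactly;
# higher values push further in the prompt's direction, which is why
# the usable range for this model (roughly 1.5-4) matters so much.
assert np.allclose(cfg_combine(uncond, cond, 1.0), cond)

guided = cfg_combine(uncond, cond, 2.0)
```

The cost of real CFG is that every step requires two model evaluations, which is roughly twice as slow as distilled guidance but restores negative-prompt support.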
@tamtamx1332 Sorry for the off-topic, but do you know which Flux Dev fine tunes, be it distilled or de-distilled, have the best LoRA compatibility? In my personal experience, most fine-tunes have poor LoRA compatibility and most of the time image quality degrades when I use LoRAs with them.
@mmdd2543 I'm not sure. I'm just sticking to Flux dev de-distilled for training and not touching any of the regular Flux dev fine-tunes, since realism, quality, and prompt adherence are very important to me. I've also seen people train LoRAs intended for vanilla Flux dev on Flux dev de-distilled. This model, for example, is recommended for training LoRAs for all Flux models, and it is in fact a Flux dev de-distilled variant: https://civitai.com/models/808669/flux-dev2pro-fp8-special-use-for-training-flux-lora
When you say this was built on Flux Dev de-distilled, do you mean Chroma? Also, Flux Dev or Flux Schnell? Because in the base model you have Flux Schnell added.