This checkpoint is an 8-step distilled LoRA trained on the FLUX.1-dev model.
It uses a multi-head discriminator to improve distillation quality. The LoRA can be used with text-to-image, inpainting, ControlNet, and other FLUX-related models. Recommended settings: guidance_scale=3.5, lora_scale=1, Euler sampler with the Simple scheduler.
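The recommended settings can be applied with a minimal diffusers sketch like the one below (the prompt and output filename are placeholders; loading requires downloading the base model weights):

```python
# Minimal text-to-image sketch using diffusers, assuming the LoRA is pulled
# from the alimama-creative/FLUX.1-Turbo-Alpha repo on Hugging Face.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("alimama-creative/FLUX.1-Turbo-Alpha")
pipe.fuse_lora(lora_scale=1.0)   # recommended lora_scale = 1
pipe.to("cuda")

image = pipe(
    "a photo of a cat",          # placeholder prompt
    guidance_scale=3.5,          # recommended guidance scale
    num_inference_steps=8,       # 8-step distilled LoRA
).images[0]
image.save("out.png")
```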

Key Features:
Rapid Image Generation: FLUX.1-Turbo-Alpha allows users to create images quickly, making it ideal for projects requiring fast turnaround times.
Versatility in Applications: This model supports text-to-image generation and inpainting, providing users with the flexibility to create varied and intricate scenes easily.
Optimized Settings: For optimal performance, it is recommended to use a guidance scale of 3.5 and a Lora scale of 1, which allows for effective control over the output quality and style.
Future-Proofing: Upcoming versions of FLUX.1-Turbo-Alpha will include configurations that utilize fewer steps, further enhancing speed without sacrificing quality.
Ideal Use Cases:
FLUX.1-Turbo-Alpha is perfect for artists, designers, and content creators who need quick and high-quality image outputs. Its adaptability to various creative applications makes it a valuable addition to any toolkit, whether for professional projects or personal exploration.
With FLUX.1-Turbo-Alpha, you can harness the power of AI to elevate your creative projects, transforming ideas into stunning visuals in mere moments.
Thanks to the original creator :
alimama-creative
Original HuggingFace Repo :
https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha
Description
Nothing to add, for now.
FAQ
Comments (21)
Incredible release coming up, guys :)
trust me, it's incredible <3 you will like it :)
It is not an 8-step turbo.
Nothing like that is described on Hugging Face ...
It's a LoRA that makes a picture a little better, whether it's 8 or 45 steps.
Hi, thanks for your comment!
I ran several tests with this LoRA: Euler Simple at 8 steps vs. Euler Simple at 28 steps.
And indeed the quality is lower than the regular dev version at 28 steps (more steps will always be better).
BUT it can do nearly the same things in less time, with somewhat less precision and quality (that's normal).
It can be useful for small configurations that cannot handle a full dev model.
But you're right, this LoRA is in an alpha version, as SDXL once was ^^
Give it some time :) I hope and believe it will keep improving :)
Oh, and the description comes directly from Hugging Face, btw ^^.
@EKKIVOK Quote: "It can be useful for small configurations that cannot handle a full dev model."
It's a LoRA! It ADDS to a model, it doesn't replace it. Therefore it is more likely to cause OOM issues for people with smaller amounts of VRAM. Indeed, with ComfyUI and its awful memory management, you can see plenty of people asking why adding a LoRA ruins their render time.
@blobby99 That's half true, but here I'm talking about the GPU. With Forge, models and LoRAs are loaded into RAM.
There are many configurations with a lot of RAM but a weak GPU. That kind of memory management was not implemented in ComfyUI, but it is in Forge: it's called shared memory, GPU/RAM. So you're right, but it's 50/50.
That's why many people get different results with this LoRA: not everybody has the same computer configuration and settings. That's why I wrote "it can be" and not "it will" ^^
👍 Very good lora, it works perfectly and the results are impeccable.
Thanks :)
Amazing, detailed LoRA, but it's somewhat inconsistent when used with some character LoRAs. I found that the "ByteDance Hyper-FLUX Acceleration LoRA" works better with character LoRAs than Turbo-Alpha, sadly, even though I like the quality of the Turbo one better!
Great work btw, buddy! I hope these problems get fixed in the upcoming updates!! 👏
The model you mention is the second version of this one, so technically it's normal that it's a little more advanced ^^
Does anyone have this running with a FLUX Fill model on ComfyUI? I always get: ERROR lora diffusion_model.img_in.weight shape '[3072, 384]' is invalid for input of size 196608
yeah me too, the same, idk why
same happened to me (btw my model is "flux1-fill-dev-fp8.safetensors")
same
Any solution or replacement for this?
Hey! I think it comes from the model OR the input images used...
Did anybody figure this out? I'm getting the same error.
Check whether the checkpoint and the LoRA were trained on the same base model, e.g. FLUX.1-dev.
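For what it's worth, the numbers in that error message are consistent with a base-model mismatch. A back-of-the-envelope check (the channel counts below are assumptions inferred from the error text, not confirmed specs):

```python
# The LoRA's img_in weight was presumably trained against base FLUX.1-dev,
# whose img_in layer takes 64 packed latent channels, while the Fill model's
# img_in reported in the error takes 384 input channels (extra conditioning
# such as the masked image and mask is concatenated to the input).
hidden_size = 3072   # FLUX transformer hidden size
dev_in = 64          # assumed img_in in_features on base FLUX.1-dev
fill_in = 384        # img_in in_features taken from the error message

lora_numel = hidden_size * dev_in
print(lora_numel)                            # 196608 -- the size in the error
print(lora_numel == hidden_size * fill_in)   # False: shapes are incompatible
```

So the LoRA tensor simply cannot be reshaped into the Fill model's layer, which matches the advice above: the LoRA and the checkpoint must share the same base architecture.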
First of all, I have to thank you for the fantastic job you did, thank you! Speed and excellent image quality are the hallmarks of this model. This is a great model, especially for those with a weaker GPU who want to use FLUX.1-dev. Thank you once again for this model, which I can recommend to everyone.
<3 thanks
will try it with my 3060 6g😋
It will work :)
Details
Files
FLUX.1-Turbo-Alpha.safetensors
Mirrors
FLUX.1-Turbo-Alpha.safetensors
flux1-turbo-alpha-8steps.safetensors
flux_turbo_v1_1.safetensors
diffusion_pytorch_model.safetensors
flux-turbo-8steps.safetensors
flux.1-turbo.safetensors
flux1dev-8steps.safetensors
turbo_diffusion_pytorch_model.safetensors
flux_turbo.safetensors
Flux-Turbo-Alpha.safetensors
FLUX 1 Turbo Alpha.safetensors
fluxturbo.safetensors
flux-alimama-FLUX1-Turbo-Alpha.safetensors
Flux_Turbo_8steps.safetensors
FLUX1-Turbo-Alpha8steps_LoRA_inpaint.safetensors
FLUX.1-Turbo-Alphal.safetensors
turboku.safetensors
Turbo-Alpha-Fluxl.safetensors
flux.1-turbo-alpha.safetensors
alimama-creativeFLUX.1-Turbo-Alpha.safetensors
FLUX1-Turbo-Alpha.safetensors
flux1-turbo-alpha-lora.safetensors
【FLUX加速模型】FLUX.1-Turbo-Alpha.safetensors
FLUX.1-Turbo-Alpha-4steps.safetensors
flux-turbo-alpha-lora.safetensors
ali-flux-8step.safetensors
Flux-Turbo-Lora-8step.safetensors
FLUX.1-Turbo-Alpha diffusion_pytorch_model.safetensors
flux1_turbo_alpha.safetensors
flux1-8-step.safetensors
flux_turbo_alpha.safetensors
Available On (2 platforms)
Same model published on other platforms. May have additional downloads or version variants.