CivArchive
    Flux.1 Lite 8B Alpha (Freepik Company) - Comfy Workflow
    Preview 36267598

    Source: https://huggingface.co/Freepik/flux.1-lite-8B-alpha from Freepik

    The alpha release of Flux.1 Lite, an 8B parameter transformer model distilled from the FLUX.1-dev model. This version uses 7 GB less RAM and runs 23% faster while maintaining the same precision (bfloat16) as the original model. Download GGUF version here.
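The memory claim above is easy to sanity-check: bfloat16 stores each parameter in 2 bytes, so an 8B-parameter transformer needs roughly 16 GB for weights, versus roughly 24 GB for the ~12B FLUX.1-dev transformer (both parameter counts are approximate). A minimal back-of-the-envelope sketch:

```python
def bf16_weight_gb(params_billion: float) -> float:
    """Approximate weight memory in GB for a model stored in bfloat16
    (2 bytes per parameter); ignores activations and runtime overhead."""
    return params_billion * 2.0

lite = bf16_weight_gb(8)    # Flux.1 Lite: ~8B params
dev = bf16_weight_gb(12)    # FLUX.1-dev transformer: ~12B params
print(lite, dev, dev - lite)
```

The ~8 GB difference is in the ballpark of the quoted 7 GB saving; the exact figure depends on the true parameter counts and runtime overhead.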

    ☕ Buy me a coffee: https://ko-fi.com/ralfingerai
    🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb

    Comments (18)

    LeeAeron · Oct 24, 2024 · 2 reactions
    CivitAI

    Very interesting! Downloading and will test it right now!

    Thank you for your work!

    LeeAeron · Oct 24, 2024
    CivitAI

    What VAE do I have to use with it?

    REgenerationSD · Oct 24, 2024
    CivitAI

    What graphics card does it work on, and at what speed? Can 8 GB VRAM work?

    Mescalamba · Oct 24, 2024 · 1 reaction

    If your PC has enough RAM (24 GB or so) it can run; it just needs to offload. Even 22 GB models work on low-VRAM GPUs, it just requires a lot of system memory, and time, because it's slower with offload to system memory.

    The model doesn't have to be fully in GPU memory, but it's a lot faster if it is.
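The offload approach described above can be sketched with Hugging Face diffusers. This is a hedged sketch, not the commenter's actual setup: `FluxPipeline` and `enable_model_cpu_offload` are real diffusers APIs, but whether this particular repo loads cleanly through them is an assumption, and the `needs_offload` helper is invented for illustration.

```python
def needs_offload(model_gb: float, free_vram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Offload to system RAM when weights plus some activation headroom
    would exceed the free VRAM on the card."""
    return model_gb + headroom_gb > free_vram_gb

def load_pipeline(low_vram: bool):
    """Sketch only: requires `pip install diffusers transformers accelerate`,
    a CUDA GPU, and a large model download, so it is defined but not run here."""
    import torch
    from diffusers import FluxPipeline
    pipe = FluxPipeline.from_pretrained(
        "Freepik/flux.1-lite-8B-alpha", torch_dtype=torch.bfloat16
    )
    if low_vram:
        pipe.enable_model_cpu_offload()  # keep weights in RAM, stream sub-models to GPU
    else:
        pipe.to("cuda")  # keep everything resident in VRAM (fastest)
    return pipe

# ~16 GB of bf16 weights: an 8 GB card must offload, a 24 GB card need not.
print(needs_offload(16, 8), needs_offload(16, 24))
```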

    sevenof9247 · Oct 24, 2024

    Nope, 12 GB is the limit - FLUX isn't that good yet... you can still work with SDXL and PONY ;)

    mr_chrisma · Oct 24, 2024 · 1 reaction
    CivitAI

    Could you go a bit more into the process you've used? I see flux LoRAs working with it, and I would expect a distilled model to be utterly incompatible, as I believe dev and schnell are, for example.

    Edit: Ah, I see it's more along the lines of smart layer-skipping. Cool.
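The layer-skipping idea can be illustrated with a toy index map: when distillation removes some transformer blocks, the surviving blocks shift position, which is why LoRAs trained on the full dev model only partially apply (keys for pruned blocks have no target, and the rest must be remapped). The function below is purely illustrative, not code from either model:

```python
def prune_blocks(num_blocks: int, skip: set[int]) -> dict[int, int]:
    """Map surviving original block indices to their new positions after pruning.
    LoRA weights keyed by original index can then be remapped, and weights for
    pruned blocks simply dropped."""
    mapping = {}
    new_idx = 0
    for old_idx in range(num_blocks):
        if old_idx in skip:
            continue  # this block was removed during distillation
        mapping[old_idx] = new_idx
        new_idx += 1
    return mapping

# Toy example: 6 blocks with blocks 2 and 4 skipped -> 4 remain, indices shift.
print(prune_blocks(6, {2, 4}))
```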

    sevenof9247 · Oct 24, 2024
    CivitAI

    Error - it doesn't work in Forge WebUI ...

    All 30 models I've downloaded have worked so far, including GGUFs, NF4, Dev, Schnell.

    Mescalamba · Oct 24, 2024

    It's a diffusion model; I'm not sure what the Forge workflow is for that, but it requires one.

    You probably only used checkpoint models before, which is a slightly different thing.

    RalFinger
    Author
    Oct 25, 2024

    It also didn't work with ForgeUI for me, but I attached a ComfyUI workflow.

    Mescalamba · Oct 24, 2024 · 3 reactions
    CivitAI

    Image quality is really good. Model size is really big, and LoRA isn't really working: it loads part of it, not all of it, and throws quite a few messages in the ComfyUI console.

    But I think it's on a good path. Some smaller quant should probably give the same quality as the regular FP8 dev, which IMHO is a win.

    Yeah, and it's really fast.

    Magicandmore · Oct 25, 2024 · 4 reactions
    CivitAI

    The model is the biggest gift, since the release of the original Flux.1-dev model, for all of us who do not own a consumer-grade tiny little RTX 4090 with 24 GB VRAM.

    It works like a charm for me in my standard ComfyUI workflow with each and every LoRA I am using with it.

    The model is very small when used right, and really, really fast.

    The CLIP encoders and the model fit perfectly in the VRAM of my RTX 4060 Ti, and multiple LoRAs are merged with the model on the fly while rendering the image. A 20 s render time per image was a dream before, and now it is real.

    You definitely should thank RalFinger for hosting this model here and give it more than a try.

    Mescalamba · Oct 25, 2024 · 2 reactions
    CivitAI

    If you want, there are smaller quants at https://huggingface.co/city96/flux.1-lite-8B-alpha-gguf/tree/main .. but the problem with LoRA not working right is still there, due to the shifted blocks. We could use some LoRA block-patch setup to apply LoRAs correctly with this model, because otherwise it's a really pretty good and fast model.
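Those quants are published by city96, whose ComfyUI-GGUF custom nodes are the usual way to load GGUF transformer files in ComfyUI. For picking a quant level, a rough on-disk size estimate is just parameters × bits per weight; the bits-per-weight figures below are approximations for illustration, not exact GGUF numbers:

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: params × bits / 8, ignoring metadata and the
    handful of tensors usually kept at higher precision."""
    return params_billion * bits_per_weight / 8

# Approximate bits per weight (assumed values): Q8_0 ≈ 8.5, Q5_K ≈ 5.5, Q4_K ≈ 4.5
for name, bpw in [("Q8_0", 8.5), ("Q5_K", 5.5), ("Q4_K", 4.5)]:
    print(name, round(gguf_size_gb(8, bpw), 1), "GB")
```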

    5550139 · Oct 26, 2024
    CivitAI

    Do you know if there is a reason why this version, and also the GGUF version of it, completely ignores the ControlNet conditioning?

    drderp · Oct 26, 2024 · 1 reaction
    CivitAI

    So far results are pretty decent, and it shaved 2 seconds off render time (went from 12 to 10) at lower VRAM use as well. So, a good start. I wonder if it is easier to finetune?

    kiryanton930 · Nov 14, 2024

    12 seconds? I have 7 minutes!

    alex456mint1 · Nov 2, 2024 · 1 reaction
    CivitAI

    Thank you so much for this generative art model; everything works really well. Finally I got good results locally. I have been experimenting for a long time with generative art models, online and locally on my computer, and finally found the optimal workflow! Thank the gods and RalFinger. If you're interested, I'll tell you in more detail how I used this model. Thanks again! Looking forward to the beta version of the model.

    dddimish · Nov 3, 2024 · 2 reactions
    CivitAI

    For some reason, details are lost. With the prompt "Batman standing at the grave. Pencildrawing", in most cases it does not draw the grave, only Batman. With other prompts, a decrease in detail is also noticeable, compared with the dev model quantized at Q3-Q4.

    Checkpoint
    Flux.1 D

    Details

    Downloads
    252
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/24/2024
    Updated
    5/14/2026
    Deleted
    -

    Files

    flux1Lite8BAlpha_comfyWorkflow_trainingData.zip