CivArchive

    Originally posted on Huggingface.

    AuraFlow is the largest fully open-sourced flow-based text-to-image generation model.

    This model achieves state-of-the-art results on GenEval. Read our blog post for more technical details.

    The model is currently in beta. We are working on improving it and the community's feedback is important. Join fal's Discord to give us feedback and stay in touch with the model development.

    Credits: A huge thank you to @cloneofsimo and @isidentical for bringing this project to life. It's incredible what two cracked engineers can achieve in such a short period of time. We also extend our gratitude to the incredible researchers whose prior work laid the foundation for our efforts.

    Description

    v3

    FAQ

    Comments (38)

    SouthbayJayAug 27, 2024
    CivitAI

    Ally, are you going to post V3?

    theallyAug 27, 2024· 1 reaction

    Uploading right now! Thanks! :)

    skygerAug 28, 2024· 5 reactions
    CivitAI

    I posted a few images generated with v0.3 in Comfy. It's looking much better than v0.1, but it's definitely screaming for some Pony support, mainly with anatomy in general.

    buiscuit_monsterAug 29, 2024

    Apparently Pony is doing a model based on this.

    skygerAug 29, 2024

    @buiscuit_monster Yup, that's mostly why I'm getting acquainted with it: Pony v6 is my most used model and I want to prepare for it.

    ArtisticWaifuSep 8, 2024· 2 reactions

    @buiscuit_monster yeah, but is it also usable in Forge/A1111? I'm afraid it will only work in Comfy, and I don't want to be forced to use that.

    So how does it compare with v2? Didn't the prompt adherence decrease?

    skygerSep 20, 2024

    @P_Universe I did not test v.2, only v.1 and v.3

    WizerAug 30, 2024· 1 reaction
    CivitAI

    A kind of hard mode from the world of AI

    skygerAug 30, 2024· 2 reactions
    CivitAI

    Adding an update from my initial testing: don't use ancestral samplers, as I don't believe they're implemented yet; I was getting extremely distorted/glitched images with both Euler ancestral and DPM++ ancestral.

    WizerAug 31, 2024

    I'm using euler + sgm_uniform.

    So what sampler and steps do you recommend?

    skygerSep 25, 2024

    @P_Universe Any of the others rendered okay images, so it's up to personal preference as usual. Just for now, stay away from the ancestral ones; you'll get a completely warped image that looks like a failing blue screen of death, if you know what I mean.

    MairnexAug 31, 2024· 5 reactions
    CivitAI

    How to fine-tune this model?

    ArtisticWaifuSep 8, 2024
    CivitAI

    usable on forge/a1111?

    not_a_numberSep 25, 2024· 9 reactions
    CivitAI

    On the 4 GB front I can report that it technically works; practically, forget it: you're looking at nine or ten minutes per iteration. Not per generation, per iteration.

    LemonSparkleOct 11, 2024

    Have you tried to get Flux Schnell working?

    not_a_numberOct 11, 2024

    @LemonSparkle Haven't tried, and I don't think I'd have any chance: my card would need to upcast to fp16, and then we're looking at 30 GB for the pruned model. Compare with SDXL, which runs fine-ish (as in spending more time computing than doing memory transfers) at 6.4 GB in fp16.

    LemonSparkleOct 11, 2024

    @not_a_number Maybe you could do one of the NF4 versions? I think I saw someone say it was like ~7.6G in NF4.

    not_a_numberOct 12, 2024

    @LemonSparkle ROCm doesn't support NF4 at all, let alone the ancient version I need to run to get it working on NAVI14 (RX 5500). I'm actually still in awe that AuraFlow even technically worked (though it's unusable), and SDXL is not fast but absolutely usable. The card is barely not a potato, and it's not exactly new.

    BattleRabbitAIartOct 10, 2024· 2 reactions
    CivitAI

    Maybe I'm just a big dumb dumb (okay, that's not a 'maybe', I am) but I don't know what "flow based" means. Does this mean I can or cannot use it within A1111 or its better but under-supported brother Forge? Do I need to get Comfy? Or is this something new entirely?

    WizerOct 11, 2024· 9 reactions

    Use ComfyUI

    BattleRabbitAIartOct 11, 2024· 2 reactions

    @Wizer Cool, thanks for the info! I got another reason to download and get into using ComfyUI.

    StecFXOct 25, 2024· 14 reactions

    @Wizer Comfy is terrible though.

    WizerOct 25, 2024· 4 reactions

    @StecFX If you think so, you can try using diffusers directly )

    Usage example at the link below

    https://huggingface.co/fal/AuraFlow-v0.3
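    For anyone curious what "using diffusers directly" looks like, here is a minimal sketch. It assumes a recent diffusers release that includes AuraFlowPipeline, a CUDA GPU with enough VRAM, and an example prompt; check the linked model card for the authoritative usage.

    ```python
    # Minimal sketch of running AuraFlow via diffusers (assumes a diffusers
    # version with AuraFlowPipeline support and a CUDA GPU with enough VRAM).
    import torch
    from diffusers import AuraFlowPipeline

    # Load the v0.3 weights in half precision to reduce memory usage.
    pipe = AuraFlowPipeline.from_pretrained(
        "fal/AuraFlow-v0.3", torch_dtype=torch.float16
    ).to("cuda")

    # Generate a single image; prompt and parameters are just examples.
    image = pipe(
        prompt="a photo of a corgi wearing a wizard hat",
        num_inference_steps=25,
        guidance_scale=3.5,
    ).images[0]
    image.save("auraflow_out.png")
    ```

    This downloads several gigabytes of weights on first run, so it is not something to try on a metered connection.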

    dextermorganNov 12, 2024

    @BattleRabbitAIart I guess you can use SwarmUI; it's basically a more user-friendly UI over Comfy.

    BattleRabbitAIartNov 28, 2024

    @dextermorgan I was actually just talking about Swarm with one of my co-workers. It definitely stands out to me and I'll probably give it a try at some point.

    StinkekNov 19, 2024· 8 reactions
    CivitAI

    As a frequent user of Pony Diffusion V6 XL I couldn't ignore the news that V7 will be based on AuraFlow. My PC is far below the requirements, so I thought I'd give this model a try on auraflowai.com, but I tried to generate something twice and the process just seems to go on forever without giving me any pictures or error messages. Yesterday I must've watched the tab for at least half an hour, and NOTHING happened. The FAQ section of that site doesn't say how long generation takes nor anything about possible issues with generation. I can't use Diffusionbee or Draw Things like Huggingface offers because I'm on Windows.

    There's controversy surrounding Pony V7 since this model is obscure and many people would've preferred it on Flux or SD3.5, and the 24 GB VRAM requirement cuts off many potential users (though I heard PonyFlow may get more optimised).

    WizerNov 19, 2024· 2 reactions

    AF works with an RTX 2060 6 GB; the lack of VRAM is compensated* by RAM (or the swap file). You can use ComfyUI to run this locally.
    *(NVIDIA Control Panel -> Manage 3D settings -> CUDA - Sysmem Fallback Policy -> Prefer Sysmem Fallback)

    StinkekNov 19, 2024

    Still nothing on the site hours later.

    StinkekNov 19, 2024

    @Wizer Can I run AF on my GTX 1650 4GB?

    WizerNov 19, 2024· 1 reaction

    @Stinkek 
    I think it will work, but the speed will be very slow.

    You can try running the model in Google Colab (may have to create a swap file)

    https://colab.research.google.com/github/comfyanonymous/ComfyUI/blob/master/notebooks/comfyui_colab.ipynb

    theallyNov 28, 2024· 5 reactions

    We're working to bring AuraFlow to the Civitai Generator - watch this space!

    StinkekDec 1, 2024· 1 reaction

    Currently trying to generate something locally in ComfyUI (I grabbed the workflow from here and set it to use v0.3 instead of v0.1, because that's what I went with). I enabled Prefer Sysmem Fallback.
    My GPU has heated up to 80-82 °C, and my CPU (32 GB RAM) is at ~50% usage. I wish this were like Fooocus, where you can watch the generation process; I don't even know if whatever I'm doing is worth it.

    Edit: it's gone on for so long that my GPU now has a major drop in performance (30% and less) while its temperature stays the same.

    Edit 2: it finished! I hit Queue at 8:36 PM and got the result at 9:20 PM, so the process took about 45 minutes. That's with default settings (25 steps and such).
    For comparison, generation with SDXL in Fooocus on the Speed preset (30 steps) takes me ~6 minutes. I don't use any other preset: Quality is too slow and isn't worth it for my oddly specific goals, and the faster options are too poor in quality and don't support negative prompts.

    StinkekDec 3, 2024

    @Wizer I can generate using AuraFlow on my PC, but it's not viable, as it takes almost an hour to generate a single image. Even if I double-check my prompt, the result may not be worth the time.

    I'm now looking at the Colab page and I don't know what to do with it at all; I've only worked with the RVC Colab before, and that was with step-by-step guidance. Also, is it pay-to-use?

    Maybe I should just wait for Civitai generation.

    WizerDec 4, 2024

    @Stinkek Google Colab can be used for free. I have not been able to reconfigure CUDA to use RAM, so there are some limitations in working with AuraFlow. There is enough VRAM to run the base model, but not enough to use VAE at the same time.

    You can save the latent using the SaveLatent node and then restart ComfyUI, or use a locally installed ComfyUI to retrieve images from the latent. To load the latent, use the LoadLatent node, first moving comfyUI_00001_.latent from “ComfyUI/output/latents” to “ComfyUI/input”.
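    That move step can be sketched in Python as follows. The paths assume a default ComfyUI folder layout, and the placeholder file is created only so the snippet runs stand-alone; with a real latent, drop that line.

    ```python
    # Sketch: move a saved latent from ComfyUI's output folder into its input
    # folder so the LoadLatent node can find it (default install layout assumed).
    import shutil
    from pathlib import Path

    src = Path("ComfyUI/output/latents/comfyUI_00001_.latent")
    dst_dir = Path("ComfyUI/input")

    src.parent.mkdir(parents=True, exist_ok=True)
    src.touch(exist_ok=True)  # placeholder file so this demo is self-contained
    dst_dir.mkdir(parents=True, exist_ok=True)

    # Move (not copy) the latent so output/latents stays clean.
    shutil.move(str(src), str(dst_dir / src.name))
    print((dst_dir / src.name).exists())  # → True
    ```

    After the move, LoadLatent will list comfyUI_00001_.latent among its inputs.
    
    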

    To use ComfyUI in Google Colab you need to run:

    1. Environment Setup - this will install all the necessary components

    2. Checkpoints - this will download the models (SD 1.5 by default). Uncomment the links you need by removing the “#” symbols, and add any extra links; in our case, the link to AF:

    !wget -c https://huggingface.co/fal/AuraFlow-v0.3/resolve/main/aura_flow_0.3.safetensors -P ./models/checkpoints/

    3. Run ComfyUI with cloudflared (Recommended Way). To restart the program, run step 3 again.

    In the console you will get the following text:

    “This is the URL to access ComfyUI: https:// ... .trycloudflare.com.”

    Important: the runtime environment must use T4 GPU (should be set by default, but double-check just in case).

    (And, of course, we look forward to Civitai generator support for the AF.)

    caleb5339726Jan 3, 2025

    @Stinkek Not sure if you know this by now, but the Efficiency Nodes pack (or whatever it's called) has an "Efficient KSampler" which shows the generation as it happens. Workflows always try to be as basic as possible. I notice even professionals use a ridiculous number of steps; an Efficient KSampler would show that half of them are doing nothing. I see it all the time.

    CuauhtemocI5MALMar 15, 2026
    CivitAI

    Which tools are the best for Training a LoRA with this model?

    Checkpoint
    AuraFlow

    Details

    Downloads
    1,663
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/16/2024
    Updated
    5/13/2026
    Deleted
    -

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.