Full checkpoint with an improved text encoder (TE); do not load an additional CLIP/TE.
FLUX.1 (Base UNET) + Google FLAN
This model takes the 42 GB FP32 Google FLAN-T5 XXL text encoder, quantizes it, and pairs it with an improved CLIP-L for Flux. To my knowledge, no one else has posted or attempted this.
Quantized from the FP32 T5-XXL (42 GB, 11B parameters)
Base UNET with no baked-in LoRAs or other changes
A full FP16 version is also available.
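The NF4 variants listed below come from 4-bit NormalFloat quantization (the bitsandbytes scheme popularized by QLoRA), which is what shrinks a 42 GB FP32 text encoder to a loadable size. A minimal pure-Python sketch of the idea; the block size, rounded level values, and helper names are illustrative, not the actual bitsandbytes implementation:

```python
# Sketch of NF4-style 4-bit quantization, the scheme behind the NF4
# checkpoint variants. Real NF4 (bitsandbytes / QLoRA) quantizes per
# block (e.g. 64 weights) with one absmax scale per block and a
# second-level quantization of the scales; this shows a single block.

# The 16 NF4 code levels (approximate, rounded): quantiles of a
# standard normal distribution, normalized into [-1, 1].
NF4_LEVELS = [
    -1.0, -0.6962, -0.5251, -0.3949, -0.2844, -0.1848, -0.0911, 0.0,
    0.0796, 0.1609, 0.2461, 0.3379, 0.4407, 0.5626, 0.7230, 1.0,
]

def quantize_block(weights):
    """Map each weight to the index (0-15) of the nearest NF4 level."""
    scale = max(abs(w) for w in weights) or 1.0  # absmax scaling
    codes = [
        min(range(16), key=lambda i: abs(w / scale - NF4_LEVELS[i]))
        for w in weights
    ]
    return codes, scale

def dequantize_block(codes, scale):
    """Reconstruct approximate weights from 4-bit codes plus the scale."""
    return [NF4_LEVELS[c] * scale for c in codes]

block = [0.9, -0.45, 0.02, -0.88]
codes, scale = quantize_block(block)
restored = dequantize_block(codes, scale)
```

Each weight now costs 4 bits plus a shared per-block scale instead of 32 bits, roughly an 8x size reduction, at the price of a small rounding error per weight.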
Details
Downloads: 86,073
Platform: TensorHub
Platform Status: Available
Created: 12/22/2024
Updated: 3/20/2025
Deleted: -
Available On (6 platforms)
The same model is published on other platforms, which may have additional downloads or version variants.
- FLUX + FLAN (65GB Source) - FLUX_DEV_FLAN_BF16 (SeaArt)
- FLUX + FLAN (65GB Source) - FLUX_Dev-FLAN_NF4 (SeaArt)
- FLUX + FLAN (65GB Source) - FLUX_Schnell_FLAN_BF16 (SeaArt)
- FLUX + FLAN (65GB Source) - FLUX_Schnell-FLAN_NF4 (TensorArt)
- ⚡️FLUX Dev/Schnell (Base UNET) + Google FLAN FP16 - Dev (TensorHub)
- ⚡️FLUX Dev/Schnell (Base UNET) + Google FLAN FP16 - Dev