CivArchive
    LTX-2.3 10Eros NVFP4 - nvfp4mixed-nc
    NSFW

    This is a conversion of 10Eros to NVFP4 format using Silveroxides' convert_to_quant script. Some layers and weights are kept in their native format for better output quality.

    The NC (non-calibrated) version is a direct conversion without SVD optimization.

    The regular one has some additional tuning and experimental parameters.

    These models are drop-in replacements in native ComfyUI workflows. The performance gain is only noticeable on Nvidia 5000-series GPUs and with a moderately up-to-date environment (PyTorch >= 2.10, CUDA >= 13.0).
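    A quick way to check your environment against those thresholds. This is a hedged sketch: `parse_version` and `meets_requirements` are illustrative helpers, not part of ComfyUI; in a real environment you would pass in `torch.__version__` and `torch.version.cuda`.

    ```python
    def parse_version(version_str):
        # Turn a string like "2.11.0+cu130" into a comparable tuple like (2, 11)
        return tuple(int(p) for p in version_str.split("+")[0].split(".")[:2])

    def meets_requirements(torch_version, cuda_version,
                           min_torch=(2, 10), min_cuda=(13, 0)):
        # cuda_version is None on CPU-only PyTorch builds
        if cuda_version is None:
            return False
        return (parse_version(torch_version) >= min_torch
                and parse_version(cuda_version) >= min_cuda)

    # e.g. meets_requirements(torch.__version__, torch.version.cuda)
    print(meets_requirements("2.11.0+cu130", "13.0"))  # True
    ```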

    Comments (16)

    lanceshocker · May 5, 2026

    So what's different with NC?

    Melanippe (Author) · May 5, 2026

    as written:
    > NC (non-calibrated) is a direct conversion without SVD optimization.
    > the other has some additional tuning but it's more experimental.
    From the few comparisons I did, NC seems to have better audio-video alignment, but motion in the learned one seems smoother.

    gackt2 · May 6, 2026

    people have money nowadays to own a blackwell GPU?

    NUGGZ1616 · May 8, 2026

    Got mine 5 months ago and it was only $1000

    gackt2 · May 8, 2026

    @NUGGZ1616 that sounds expensive lol

    jean16Harry · May 8, 2026

    I've got the 6000. I've been eating noodles ever since, but it works great.

    a13ph · May 9, 2026

    No. But why let that stop you?

    SUVO_RAW · May 12, 2026

    5090

    Starry_Eyes · May 6, 2026

    please do a gguf

    firemanbrakeneck · May 6, 2026

    Silver's git only supports a handful of quants. GGUF conversion is handled by city96's repo, but it was always rather flimsy: it needs the model structure to decide how to quantize the different layers, and it relies on a modded, frozen version of the main gguf repo to work with image/video models. City stopped maintaining it around the time LTX2 dropped, and there seem to have only been pulls for Gemma since.

    Starry_Eyes · May 7, 2026

    @firemanbrakeneck Dammit, I was really hoping to try this somehow on my 3070. Guess I have to stick with ltx unsloth+loras.

    firemanbrakeneck · May 8, 2026

    @Starry_Eyes Yeah, I feel you. You do have sulphur's extracted lora version though: https://huggingface.co/SulphurAI/Sulphur-2-base/blob/main/sulphur_lora_rank_768.safetensors

    LoRA extraction should be more commonly achievable and give fair results (as I understand it, you compute the low-rank weights that, applied to the base checkpoint, would most closely reproduce the target checkpoint - it's the same type of math used in LoRA resizing, which works with absolutely everything).
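    That delta-matching idea can be sketched per weight matrix with a truncated SVD, which gives the best rank-r approximation of the weight difference (Eckart-Young). A minimal illustration, not sulphur's actual extraction code; `extract_lora` is a hypothetical helper and real tools apply this per linear layer across the whole checkpoint:

    ```python
    import numpy as np

    def extract_lora(base_w, target_w, rank):
        # Approximate (target - base) as up @ down with the given rank,
        # so that base_w + up @ down ≈ target_w.
        delta = target_w - base_w
        u, s, vt = np.linalg.svd(delta, full_matrices=False)
        up = u[:, :rank] * s[:rank]   # (out_features, rank), singular values folded in
        down = vt[:rank, :]           # (rank, in_features)
        return up, down
    ```

    With rank 768, as in the linked file, you trade file size against how faithfully the delta is reproduced.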

    nessleonhart · May 7, 2026

    Did a direct swap of eros fp8_learned for this model. It takes longer, has worse quality, and OOMs at a lower total frame count. If this needs different settings than Eros, please add them to the post. Edit: 5090/128. I can make 800+ frames with _learned; this OOMs intermittently as low as 150.

    Melanippe (Author) · May 7, 2026

    Well, worse quality is expected since this is a heavier quantization; the slowdown and OOMs are not. Check that your Comfy and dependencies are up to date: in the startup logs you should see a line "comfy_kitchen backend ... available: True ... capabilities: [things with nvfp4]"

    I can do 1440 frames without offloading on a 5090; LTX loses consistency at this length, but the workflow runs fine.

    delta45424155 · May 7, 2026

    Do you have torch 2.11.0 or higher installed? That was my problem.

    ElGordoAI · May 9, 2026

    hmm quite a surprise, thanks for the quant

    Checkpoint
    LTXV 2.3

    Details

    Downloads
    454
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/5/2026
    Updated
    5/14/2026
    Deleted
    -

    Files

    ltx2310erosNVFP4_nvfp4mixedNc.safetensors