Titanforge: Fixing Flux's anatomy while uncensoring and enhancing compatibility
Current status: 🟩🟩🟩🟨🔳🔳🔳🔳🔳🔳 3.5 / 10 (experimental stage)
Titanforge is a fine-tuned model built on the Flux dev architecture, designed to dramatically enhance the representation of both male and female anatomy. The base model in Flux heavily censors certain content and anatomy, which I believe indirectly influences the accuracy of body representations, even in non-explicit scenes. As a proponent of uncensored models, my goal with Titanforge is to break these barriers and offer:
Comprehensive anatomical understanding of the human body, improving not only surface-level accuracy but also deep anatomical consistency.
Enhanced photorealism, providing more lifelike images.
Expanded pose versatility, with a broader comprehension of human poses and the adult language used to describe them.
Better compatibility with adult-themed LoRAs, by understanding explicit terms and scenes, allowing smoother integration without censorship constraints.
Getting rid of the Chindimple ;)
Current Progress:
Titanforge is a work in progress. The model is currently at 36,000 steps, trained on a carefully curated dataset of 1,200 hand-tagged images (avoiding generic tags like '1girl'). My focus has been on ensuring the model learns not just isolated anatomical scenes but also how to infer context when referencing different body parts. There's still much to be done; Titanforge is about 35% complete by my estimate.
Although the model is trained on extended, uncensored anatomy, I am also trying not to neglect the artistic side, so the model should remain useful for all purposes.
The training has been challenging. My hardware (RTX 4090) is maxed out, and training speed has slowed due to VRAM overflow spilling into RAM. As a result, the process is much slower than expected. If you'd like to support the project and help accelerate the training, feel free to check out my GoFundMe campaign: https://gofund.me/c2d02973.
Things to Keep in Mind:
Titanforge is still experimental, and there are occasional issues where the model struggles, especially with more explicit prompts. Shorter prompts tend to work better at this stage. Additionally, the representation of male anatomy is still inconsistent: some scenes render very well, while others fall apart. Anything explicit I show at this point is likely cherry-picked, as the model needs more refinement to handle these scenes consistently. Feedback is always welcome, and any support would be greatly appreciated as the model continues to evolve.
Comments (22)
I would recommend you put your money towards training on cloud GPUs rather than buying a physical card. The cost is around USD 0.80 per hour.
Let me know if you want a Python script to strip the VAE and CLIP tensors from your checkpoint. Most people do not want the bloated file; providing just the unet saves everyone 4+ GB per download.
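For anyone who wants to attempt the split themselves before such a script is shared, here is a minimal sketch of the key-filtering step. The prefixes below are assumptions — tensor naming varies between combined checkpoints, so list your file's keys first and adjust accordingly:

```python
# Assumed prefixes for the non-unet parts of a combined checkpoint;
# inspect your file's actual tensor names before relying on these.
DROP_PREFIXES = ("vae.", "first_stage_model.", "text_encoders.", "cond_stage_model.")

def unet_keys(keys):
    """Return the tensor names that do not belong to the VAE or text encoders."""
    return sorted(k for k in keys if not k.startswith(DROP_PREFIXES))

# With safetensors installed, the actual split would look like:
#   from safetensors.torch import load_file, save_file
#   tensors = load_file("combined.safetensors")
#   save_file({k: tensors[k] for k in unet_keys(tensors)}, "unet_only.safetensors")
```

The result loads as a standalone unet/diffusion model; the VAE and CLIP files are then supplied separately by the UI.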
Basically, you're right, but cloud rental also incurs high costs in the long term. With a local GPU I am more independent (e.g. of the Internet or the cloud provider), have more control, less time pressure, and the option of using the GPU for other purposes.
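For a rough sense of that trade-off, a back-of-the-envelope break-even calculation. The card price here is an assumed figure, not one quoted in either comment:

```python
CARD_PRICE = 1600.0   # assumed RTX 4090 street price in USD
CLOUD_RATE = 0.80     # USD per GPU-hour, figure from the comment above

# Hours of cloud rental that cost as much as buying the card outright
break_even_hours = CARD_PRICE / CLOUD_RATE
break_even_days_24_7 = break_even_hours / 24
```

At these numbers the card pays for itself after roughly 2,000 GPU-hours (about 83 days of round-the-clock training), which is why the answer depends heavily on how long and how continuously you expect to train.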
I like the single-package file; it saves a lot of trouble.
You're in the minority based on community discussions. It doesn't really save any trouble - handling the VAE and CLIP files is very simple. It's just wasting 4+ GB of bandwidth and disk space for every single download for no benefit.
@volatilevisage2
I am also a friend of the one file solution, especially as a Forge user. But I can well understand the arguments in favor of a separate solution and will look into the best way to solve it :) But I won't be home for the next few days.
Just out of interest, where do these discussions take place?
Well, as a low-VRAM user I would appreciate some GGUF versions. Or I could try to create them myself if someone who knows how would send me tutorials and code.
I will provide a GGUF version with lower quants in the next few days. I won't be home until tomorrow. I'll write to you when I've quantized the model :)
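For background on what "lower quants" does, here is an illustrative sketch of absmax 8-bit quantization, the core operation behind formats like GGUF's Q8_0. Real GGUF quantizers apply this block-wise with extra bookkeeping; this is only the basic idea:

```python
import numpy as np

def quantize_q8(w: np.ndarray):
    """Absmax-quantize a float tensor to int8 plus a single scale factor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale
```

Each weight then occupies 1 byte instead of 2 (fp16) or 4 (fp32), at the cost of a bounded rounding error per value — which is why lower quants shrink the file but gradually degrade output quality.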
Testing variations of poses with Evolve in Ruined Fooocus, excellent anatomies! Thanks a lot!
impressive
Looks great! If possible would love to see a fp16 version.
Seconded strongly. FP16 is a joy to behold even though there's less demand at the moment.
Is it possible to train a LoRA on top of the model with this type of configuration? My existing LoRAs all come out looking... well, wrong on top of this model, but if the base model can improve the anatomy of the LoRA, I am willing to retrain them on this version. I get this error when trying to load it up in kohya:
Traceback (most recent call last):
  File "/notebooks/kohya_ss/sd-scripts/flux_train_network.py", line 519, in <module>
    trainer.train(args)
  File "/notebooks/kohya_ss/sd-scripts/train_network.py", line 402, in train
    self.cache_text_encoder_outputs_if_needed(args, accelerator, unet, vae, text_encoders, train_dataset_group, weight_dtype)
  File "/notebooks/kohya_ss/sd-scripts/flux_train_network.py", line 212, in cache_text_encoder_outputs_if_needed
    unet.to("cpu")
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1174, in to
    return self._apply(convert)
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1167, in convert
    raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
Traceback (most recent call last):
  File "/notebooks/kohya_ss/venv/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1106, in launch_command
    simple_launcher(args)
  File "/notebooks/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 704, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/notebooks/kohya_ss/venv/bin/python', '/notebooks/kohya_ss/sd-scripts/flux_train_network.py', '--config_file', './outputs/brsd/config_lora-20240924-234207.toml']' returned non-zero exit status 1

Also, I am in the process of working on cleaning and captioning a dataset of ~200 quality M4M nsfw photos with high-quality NLP captions + some curated tags (Flux prefers NLP, apparently) in various positions and angles. LMK if they'd be useful for you.
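The root NotImplementedError in that trace is PyTorch's meta-device guard: a module built on the "meta" device holds shapes but no tensor data, so .to() has nothing to copy. A minimal reproduction and the workaround the error message itself suggests (whether kohya's config is what puts the unet on meta here is an assumption on my part):

```python
import torch

# A module created on the "meta" device holds shapes but no data,
# so .to("cpu") has nothing to copy and raises NotImplementedError.
layer = torch.nn.Linear(4, 4, device="meta")
try:
    layer.to("cpu")
except NotImplementedError:
    # to_empty() allocates real (uninitialized) storage instead of copying;
    # the actual weights must then be loaded or re-initialized afterwards.
    layer = layer.to_empty(device="cpu")
```

In practice this usually means the checkpoint was only partially loaded (or loaded lazily) before the training script tried to move it, so checking how the script loads this particular file is a reasonable first step.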
NLP isn't as much of a preference as it is a default. Tags flow against the base training data of Flux and other models that use T5 for CLIP. The more unique the tag, the less understanding the T5 can muster into usable tokens. It's a fundamental deviation from booru-style tagging, which (as Pony has very clearly demonstrated) has a very hard limit to its usefulness.
@UnsignedLongshanks "It's a fundamental deviation from booru-style tagging, which (as Pony has very clearly demonstrated) has a very hard limit to its usefulness." LOUDER FOR THE PEOPLE IN THE BACK! AMEN... THANK YOU! I was scoffed at for suggesting this half a year ago.
Well, I'm not sure where the problem is, but it just crashes in ComfyUI.
Pretty decent, seems like it struggles with female body type variety (chest size, fitness, etc) and still plagued by vagina censorship. But it's a great start! Much better nude depictions than I've seen from any other Flux finetune or LoRA. Keep up the great work! ❤️❤️
Does this one go in the Unet folder or in the regular checkpoints folder?
Hey mate. Glad I just found this project, strongly support the goal. I see you're taking donations for accelerating the training, and saw that you're currently using an RTX 4090. Wanted to let you know that the specs for the RTX 5090 are pretty much confirmed (32GB VRAM / 24k CUDA cores) so if you're planning a complete rebuild, I'd recommend holding off until it drops in Q1 2025. It'll be expensive for sure, but nothing compared to the data-center-monopoly-milking cards 😆
Maybe my favorite Flux checkpoint. Very creative.
Hi! Great model. Are there any plans for an fp16 version? I am getting noticeably better results from fp16 unets than from fp8. Thanks!
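The precision gap being described is real: fp8 (e4m3) stores only 3 mantissa bits versus fp16's 10, so every weight is rounded far more coarsely. A stdlib-only sketch of that mantissa rounding — precision only, deliberately ignoring fp8's exponent range and special values:

```python
import math

def round_e4m3(v: float) -> float:
    """Round v to a 3-bit mantissa, emulating fp8-e4m3 precision loss.
    Exponent range and special values are deliberately ignored."""
    if v == 0.0:
        return 0.0
    m, e = math.frexp(v)        # v = m * 2**e with 0.5 <= |m| < 1
    sig = round(m * 16) / 16    # 3 stored mantissa bits + implicit leading bit
    return sig * 2.0 ** e

# Example: 0.1 rounds to 0.1015625 under e4m3-style precision,
# an error of about 1.6%, versus roughly 0.02% for fp16.
```

Whether that rounding is visible in generated images depends on the model, but it is the mechanism behind "fp16 looks better than fp8" reports.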
While the model itself seems to be marvelous, I would suggest using a more theme-appropriate main picture.
I was expecting some Fantasy-related checkpoint from the main pic, and we all know reading comprehension isn't the highest on the internet 😂.