There are more sample images this time around. Do check them out; hope they are enough for your imagination.
It seems there is still a lot of confusion, though, so to help those people out, here are the trigger tags:
very handsome, malay, burly, beefy, grandfather, just1n, high quality, realistic photo, general, army, military, full body, walking, shopping mall, confident, fierce, muscular thighs, middle aged, equinecock, realistic photo, hairy

Try to use the above in short 2-3 word sentences and you should be good.
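Keeping the trigger tags at the front of your captions matters because the training config below sets shuffle_caption = true with keep_tokens = 3: the first three comma-separated tags stay fixed while the rest are shuffled each epoch. A minimal sketch of that behavior (the function name and seed handling are my own, not from kohya's sd-scripts):

```python
import random

def shuffled_caption(caption: str, keep_tokens: int = 3, seed=None) -> str:
    # Sketch of shuffle_caption + keep_tokens: the first `keep_tokens`
    # comma-separated tags stay in place (so trigger words stay first),
    # and only the remaining tags get shuffled.
    tags = [t.strip() for t in caption.split(",")]
    head, tail = tags[:keep_tokens], tags[keep_tokens:]
    random.Random(seed).shuffle(tail)
    return ", ".join(head + tail)
```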
[[datasets]]
[[datasets.subsets]]
num_repeats = 1
[general]
resolution = 1024
shuffle_caption = true
keep_tokens = 3
flip_aug = true
caption_extension = ".txt"
enable_bucket = true
bucket_reso_steps = 64
bucket_no_upscale = true
min_bucket_reso = 256
max_bucket_reso = 2048

Description
Below are the prompts used for the sample images. This is the first LoRA where I got to use the early access feature. Cool~!
[
{"prompt": "malay father with thick hyper puffy pectorals, nipple piercings, prince albert piercings, and chain inserted into urethra,"},
{"prompt": "malay father with hyper erect equinecock, hairy thick pectorals thick muscular thighs"},
{"prompt": "realistic photograph of group of married malay men, beefy bodybuilders, hyper erections, thick PA and chain insertions"}
]

[additional_network_arguments]
unet_lr = 0.0005
text_encoder_lr = 5e-5
network_dim = 128
network_alpha = 16
network_module = "networks.lora"
[optimizer_arguments]
learning_rate = 0.0005
lr_scheduler = "cosine_with_restarts"
lr_scheduler_num_cycles = 3
lr_warmup_steps = 0
optimizer_type = "Adafactor"
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False",]
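For reference, cosine_with_restarts with lr_scheduler_num_cycles = 3 and lr_warmup_steps = 0 decays the learning rate from its base value toward zero along a cosine curve three times, jumping back up at each restart. A rough sketch of that shape (a simplified stand-in, not the exact scheduler implementation):

```python
import math

def cosine_with_restarts_lr(step: int, max_steps: int,
                            base_lr: float = 5e-4,
                            num_cycles: int = 3) -> float:
    # No warmup (lr_warmup_steps = 0): the LR starts at base_lr, decays
    # to 0 along a cosine within each cycle, and restarts at base_lr.
    progress = step / max_steps
    cycle_progress = (num_cycles * progress) % 1.0
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_progress))
```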
[training_arguments]
max_train_steps = 0
max_train_epochs = 73
sample_sampler = "euler_a"
train_batch_size = 4
noise_offset = 0.1
clip_skip = 1
weighted_captions = false
max_token_length = 225
lowram = false
max_data_loader_n_workers = 8
persistent_data_loader_workers = true
save_precision = "bf16"
mixed_precision = "bf16"
save_state = false
xformers = true
sdpa = true
no_half_vae = true
gradient_checkpointing = true
gradient_accumulation_steps = 1
[advanced_training_config]
multires_noise_iterations = 6
multires_noise_discount = 0.3
min_snr_gamma = 5.0
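min_snr_gamma = 5.0 enables Min-SNR loss weighting: each timestep's loss is scaled by min(SNR, γ)/SNR, which caps the influence of low-noise (high-SNR) timesteps while leaving noisy ones at full weight. A one-line sketch for the epsilon-prediction case (helper name is mine):

```python
def min_snr_weight(snr: float, gamma: float = 5.0) -> float:
    # Min-SNR-gamma weighting: timesteps with SNR above gamma get
    # down-weighted; timesteps at or below gamma keep weight 1.0.
    return min(snr, gamma) / snr
```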
[model_arguments]
v2 = false
[dreambooth_arguments]
prior_loss_weight = 1.0
[dataset_arguments]
cache_latents = true