Photon aims to generate photorealistic and visually appealing images effortlessly.
Recommended settings for generating your first image with Photon:
Prompt: A simple sentence in natural language describing the image.
Negative: "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"
Sampler: DPM++ 2M Karras | Steps: 20 | CFG Scale: 6
Size: 512x768 or 768x512
Hires.fix: R-ESRGAN 4x+ | Steps: 10 | Denoising: 0.45 | Upscale x 2
(avoid using negative embeddings unless absolutely necessary)
From this initial point, experiment by adding positive and negative tags and adjusting the settings. Most of the sample images follow this format.
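For scripting outside a UI, the recommended settings above can be collected into a small helper. This is just an illustrative sketch: the field names loosely follow common A1111-style API payloads and are not an official schema.

```python
# Minimal sketch: the recommended Photon starting settings as a reusable dict.
# Field names mirror common A1111-style API payloads (an assumption, not an
# official schema for any particular tool).

NEGATIVE = ("cartoon, painting, illustration, "
            "(worst quality, low quality, normal quality:2)")

def photon_settings(prompt: str, portrait: bool = True) -> dict:
    """Build a txt2img payload using the recommended Photon defaults."""
    width, height = (512, 768) if portrait else (768, 512)
    return {
        "prompt": prompt,                 # a simple natural-language sentence
        "negative_prompt": NEGATIVE,
        "sampler_name": "DPM++ 2M Karras",
        "steps": 20,
        "cfg_scale": 6,
        "width": width,
        "height": height,
    }

payload = photon_settings("a photograph of a woman reading in a sunlit cafe")
```

From here the dict could be POSTed to a local A1111 instance's `/sdapi/v1/txt2img` endpoint, or mapped onto whichever toolchain you use.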
Development
The development process was somewhat chaotic but essentially:
It started from an old mix.
LoRAs were trained on various topics using AI-generated photorealistic images.
These LoRAs were mixed within the model using different weights.
In the midst of this mixing, hand generation broke.
LoRAs were generated and remixed in an attempt to fix hand generation (not entirely successful).
In future versions, I will try to:
Completely eliminate the need for a negative prompt to generate high-quality images.
Fix the hand generation issue to minimize instances of poorly drawn hands.
Explore more automated training processes. I would love to have 5,000 or 50,000 high-quality AI-generated photorealistic images for training purposes.
I hope you enjoy using it as much as I enjoyed creating it! If you have any questions or suggestions, feel free to share. Have fun creating amazing images! 🎨
Description
This is the initial release, focused on achieving photorealistic results. However, please note that the model's hand generation may have some limitations at this stage, and I'm actively exploring training techniques to enhance it.
FAQ
Comments (319)
A really underrated checkpoint, it should be more popular
it is already very popular, no? Many people recommend this model for realism. If I am going to make realistic images, I will just go for Realistic Vision or this one.
one of the greatest models for realism
Does anyone know how to get more natural light? All my gens have professional lighting.
The most versatile photorealistic model I've ever used. Portraits, architecture, landscapes—pretty much anything looks great.
Great model but tends to create hairy forearms. If you notice it you won't be able to unsee it :(
And that is beautiful. Hairs are natural!
@Fusch no :D you're just too weak to take care of yourself
@Fusch Sure, hair is natural. But I would prefer to decide for myself whether I want hairy forearms in a picture or not. Currently, it's apparently pretty deeply nested into the model.
Photon is one of the best 1.5 models; at just 1.99 GB it has a lot in it! Why don't you make more versions of it? A larger version, or SDXL?
bro really waiting for the next version!!!! please make something new (XL...)
please please please make photonXL!! it is literally amazing!
Although it is recommended not to use negative embeddings, using boring_e621 does help me acquire more and cleaner details.
while generating 4x4 batches of girls I noticed that they all have pretty much the same or very similar face. Is it supposed to be like that? How can I get different faces?
Yes, it's a weakness in the model, especially noticeable when faces appear close. I've employed a few methods to address this issue:
1) Employing nationalities in the prompt, like "an Argentine woman...", sometimes works. Trying names can help too.
2) Introducing professions, although it can slightly alter the context. For instance, I once used "social worker" in the prompt, and it started generating distinct faces with each iteration (though I never quite understood why).
3) Utilizing embeddings. However, the challenge here is that most embeddings in Civitai are of celebrities.
4) If you're using AUTO1111, I believe the optimal solution is CloneCleaner. It's an excellent extension, and the GitHub page provides example images with Photon: https://github.com/artyfacialintelagent/CloneCleaner
I hope this information proves helpful for you.
ControlNet lets you introduce up to 5 reference images (if using A1111 on your local computer; I don't know about other options). This can help steer the results in different directions. The author's suggestion of using various names, nationalities, professions, etc. helps. I did a pretty cool set of women metalworkers a while back. But I mostly use it for abstract/textural/video art... where, somehow, it also delivers next-level results.
"Generic face syndrome" happens a lot with AI models. I've tried "looks like <x>" where x is a well-known figure, so it's not them but somewhat like them. I've also used "facial asymmetry" with mixed results. Nationalities also work. You can use celebrity LoRAs and embeddings but put the influence really low so the end result only vaguely resembles the person. It can produce some variety.
Absolutely great model! Keep developing it, please!!! From the first generation it became one of my favourites!
Is there any plan to release an improved version? and I am not saying this version is lacking or anything; it is just that there is always room for improvement.
This author is truly gifted; if this is their first version, I can't wait to see what's next. It's definitely the best I've tried out of 50 or so others. By far. It really has me curious, what does this author know that all the others don't? The lighting and textural detail, crispness, depth, it just makes all my other 'photoreal' models look like cartoons.
Photon is criminally underrated. One of the best for photorealistic images, if not the best. This and epic sin are my go-to models; SDXL models have a looong way to go before they catch up with these, and by the time they do, hopefully we'd have Photon v5 by then, so...
And we're still at v1 now.
Seriously.. who needs SDXL when this exists? The 1536x1024 images are just... I have no words.
I've been working with SD pretty much 24/7 (aside from sleeping... sometimes), and I've compared many different models. Your work, Photon, is almost like a whole different program. The level of detail and quality is just unrivaled. I don't know anything about training, so I am totally mystified how you were able to achieve results better than anything else on this site. But it seems obvious you must have a completely different approach/method/technique than others; this model shows the true potential of SD like no other.
Very Well said friend, I create images and videos with Photon and people ask me if it's SDXL lol. It's just THAT GOOD!
example of my work: https://twitter.com/J0HN9R1M3
Totally agree with the comment from usevalue. This is a great model, no need to use any SDXL model, they are not even near this. Thank you for this masterpiece, I'm using it every day with incredible results!
How are your images so much crisper than my outputs?
Are you using the upscale fix when generating? Try the following settings.
- Upscaler: R-ESRGAN 4x+
- Highres Steps: 10 - 20
- Denoising Strength: 0.4 - 0.6
- Upscale by: 1.5 - 2
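Those ranges can be wrapped in a quick size calculator. SD 1.5 pipelines expect dimensions in multiples of 8, so the sketch below (an illustrative helper, not part of any tool) rounds the upscaled size accordingly:

```python
# Sketch: compute the hires-fix output size for a given upscale factor,
# rounding to multiples of 8 as SD 1.5 pipelines expect.
def hires_size(width: int, height: int, scale: float = 2.0) -> tuple[int, int]:
    def round8(x: float) -> int:
        # Round to the nearest multiple of 8, never below 8.
        return max(8, int(round(x / 8)) * 8)
    return round8(width * scale), round8(height * scale)
```

For example, the recommended 512x768 portrait base at the suggested "Upscale by: 1.5 - 2" range gives 768x1152 to 1024x1536.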
@nineflames better to have steps or no steps?
I notice he generates at 512x768 then uses img2img to upscale to 1024x1536. This is a great way to get decent results at higher resolution without all the distortion. I use it myself. I don't use A1111, but I use the add_detail LoRA and it works really well when used with the above technique.
@gr3yh4wk1 So he generates the image, then takes it to img2img and upscales by 2x? Is there no upscaler model or anything? And about the denoising strength, I've never done it this way, I'm curious.
@joule32 It's just an easier way to get a higher-resolution image without having to use the upscaler, which can blur everything out. Plus the lower-res version is far more likely to have less messed-up anatomy. I generally do my prompt at 512x512 to see if it has all the elements I want, then do 512x768, and then img2img to 1024x1536 for the images I like most. You really have to experiment a lot with AI.
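The staged workflow described in this thread (quick 512x512 drafts, then a 512x768 composition, then an img2img pass to 2x) can be written down as a simple plan. The stage names below are illustrative, and the 0.45 denoising strength is borrowed from the model page's hires-fix recommendation:

```python
# Sketch of the staged low-res -> img2img workflow described above.
# Stage names and the 0.45 denoising strength are illustrative defaults,
# not settings from any official tool.
def upscale_plan(final_scale: int = 2) -> list[dict]:
    base_w, base_h = 512, 768
    return [
        # 1) quick square drafts to check the prompt has all the elements
        {"stage": "txt2img-draft", "width": 512, "height": 512},
        # 2) proper composition at the portrait base resolution
        {"stage": "txt2img", "width": base_w, "height": base_h},
        # 3) img2img pass to the final resolution, preserving the anatomy
        #    of the low-res version while adding detail
        {"stage": "img2img",
         "width": base_w * final_scale,
         "height": base_h * final_scale,
         "denoising_strength": 0.45},
    ]
```

Each stage's dict can then be fed to your generator of choice; the point of the structure is that anatomy is settled at low resolution before any upscaling happens.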
The only way I can thank you for this ULTIMATE checkpoint is to show you what I did with it: https://twitter.com/J0HN9R1M3
(thank you)
About 50% of my images are in black & white. Even with "monochrome" in the negative. Anything specific I should do to get color images more often?
Try something like negative (black and white:1.8)
(vintage,monochrome, grayscale)
There is a LoRA in your DLs that has taken over a word in your prompt that is related to monochrome results. Change the Model and restart SD
@ishadowx I'll check again and make sure I'm not using any Loras in this prompt. (Could be an embedding too, I have a lot of those).
@ishadowx I've confirmed this happens often without loras, with fairly low-token prompts, and without a negative. AFAIK, (I use automatic1111) if <lora:<blah> isn't in my prompt, no loras are being used.
wrong VAE?
@brokolies my default vae is vae-ft-mse-840000-ema-pruned
@comfortable564 I think I didn't explain myself clearly. For some reason, people add a bunch of keywords to their LoRAs and those keywords, when used in prompts, call back some qualities of those LoRAs into the result. This is even worse if your settings are set to keep checkpoints cache in RAM. Also, you do not need to insert a LoRA manually for it to be present, as I explained above.
Try using "colour gamut" in your prompt. Seems to work for me. I use "Warm Colour Gamut" for redder tones and "Cool Colour Gamut" for bluer tones. I haven't had a monochrome image since. Not sure if using the UK english or US english spelling of "colour" makes a difference. I'm from the UK so use "colour".
I absolutely love this model. Thanks for making this <3
This model was created with witchcraft and black magic. Quality like this comes at a cost. Beware your souls!
after trying about 100 gb I'm finally shocked :O
Awesome! I use this model for a while, this is one of the best I ever used (favourite).
How do I credit you if I am making images for sharing at work?
Hi, I really like your model. Lately I've only been using this, but unfortunately there is no news about future updates. Tell me, are you planning to refine or update the model in the near future?
Delightful model, many thanks to the author! I use it as a refiner to give artwork a photorealistic finish; with this model I get consistently good results.
Dear developer, would you mind opening up more permissions? Sharing, or running on civitai.com? Your model is so beautiful. ❤️
This checkpoint is really good at working with controlnet canny, line art and softedge models. It can create coherent images from those controlnet inputs much better than any other checkpoints I have tested. If you use those controlnet models in your workflow you should definitely give this a try.
True, I agree 100%
Why are so many posts saying "cross-post"?
They're from a model that's using this model as the base checkpoint.
At this point I'm basically just using it as a HiRes checkpoint even when using other models. Some other models are still more flexible, but Photon just makes everything look better, especially with my own LoRAs.
RevAnimated/Dreamshaper/Lyriel base (and some other newer models) + Photon HiRes is just amazing
Can you explain more? How do you HiRes with Photon? Please
I think you have to activate it in the options, but you can select a different checkpoint when you use HiRes fix. So it will do the first image with the model you have selected, then switch to a different model when doing the upscaling.
@Manolas Just checked, it's under "user interface"; there's a checkbox for "HiRes fix: show hires checkpoint and sampler selection"
@Asaghon Thanks so much my friend
Where can I find R-ESRGAN 4x+? I can't find it at all
I don't remember exactly where I downloaded it from, but you can try these two sites:
https://upscale.wiki/w/index.php?title=Model_Database&oldid=1571
@Photographer I found it, it's in automatic1111 by default but disabled
Is there a suitable workflow for ComfyUI? Or purely SD?
I have browsed all the images in the gallery. They are all so amazing. Thanks to everyone who shared their own great works, which brought me so much pleasure and inspiration. THX! And THX for Photon, you are a wonderful checkpoint!
I would also like to make a realistic model. I'm currently trying with Dreambooth and spending a lot of time on it, but I can't figure out the settings; the results are disgusting. Can you tell me what settings you use, what image resolution, and how the training is structured? I couldn't find any tutorials online for training such large-scale models with a dataset of several thousand images. I trained on 1900 portrait images at 512x512 for 380,000 steps tonight (learning rate 0.000001), and the result was some kind of crooked nonsense. Save me, brother.
Recommend Olivio Sarikas' channel on YT for a ton of info on how to train models and other tips and tricks.
When I tell you I've tried the Key players as far as Photorealistic Checkpoints go...I'm serious.
Photon is pure Witchcraft.
I rarely generate still images...Primarily Animation. Photon is PRICELESS. Not only does it add crazy detail to the simplest prompts, it's beautiful for video coherence. Try combining this with Image to Video and your mind WILL BE BLOWN EVERY TIME!
Thanks for taking the time to create this. Check my profile for videos created with Photon.
Cheers.
100% agree, and this is the reason I've started using Stable Diffusion for my video art/music visualization. Normally, producing visuals of this depth and variety would require vast and extensive resources (time, money, people) which would never be available for an independent, non-commercial production. Many times I just have to stop what I'm doing and wonder, 'How does something like this even exist?' We are not worthy! lol
This is the ONLY model to use for img2img abstract video art. Stable Diffusion is cool, but Photon is the only reason it has become such an integral part of my creative process. It is amazing at portraits and landscapes, but where it really surprises me is its potential for novel aesthetic creativity. It's interesting when you can reverse-image search on Yandex and nothing even remotely similar is found on the whole public internet.
I've been an avid AI art creator since last year, and I wanted to say this model is in my top favorites! Absolutely stellar work here.
I find it really calming and massively helpful for my depression. I can just listen to music and create. Especially with this amazing model.
tell me more models
Photon doesn't get enough credit. It REALLY raised the bar in my book. Kudos
When will you make an SDXL version of Photon? The dataset you used to train the model is the best; no model can surpass Photon. As for the issues with hands and feet, it is now clear that it is all about adding tons more photos of hands and feet in different positions. Since hands and feet can have 1000 times as many poses compared to arms and legs, you just need to add to the dataset 1000 times more photos depicting hands and feet in all the possible positions, like mudras. We are all waiting for Photon for SDXL!
I second this wholeheartedly
I'm new to Stable Diffusion, so does that mean SDXL is not some kind of software update of the previous SD? Or do SD 1.5 and XL each require a different checkpoint?
@davidokuta SDXL is a newer model, which is a lot larger (~6.5 GB) but also has a lot more capabilities. So yes, it is another checkpoint entirely
Hi, would you consider opening up some sharing rights? Thank you 💖 your model is so amazing 😍
Very good model, thank you!
my fav model! soooooo good!
I've gone back and forth with dozens of models and I always end up back with Photon, it's amazing with a few hidden secrets. Thank you!
Pls make a v2 with 1024x768
Very, very good and should be number 1. Nothing beats it in my opinion.
Hands down one of the best models in existence! I like this better than just about all the SDXL models even.
Love This Model Can't wait for next version
Would you recommend any VAE for this model or is it good to go as is?
Works as is, no need for a VAE, but I suppose you could still use one with it if you like
so nice
Merry Christmas, all in the Photon mini-community!
Thank you gr3yh4wk1! Merry Christmas to you and everyone in this minicommunity! Wishing each of you a wonderful holiday season and a Happy New Year filled with creativity and beautiful images. Let's continue generating these amazing things with Photon! 🎄🌟✨
By far the best model I've ever used, along with epicrealism. It generates the best backgrounds and pretty good hands without inpainting.
Photon V2 when???
This model probably doesn't need a V2, it's so good lol. BUT if there is one, the only improvement I can think of is maybe slightly more realistic skin texture; sometimes it's a little too shiny. But I'll never not get excited for an updated version, because this is my favourite go-to model!
Is the usage of images generated from this model free for commercial purposes?
There's a block on the right-hand column with the license and use restrictions on this model; click any of the icons to see the terms. You may sell generated images but you are required to attribute the model used. If you see a picture icon with a slash through it, that prohibits selling images generated with the content – and if any other assets you used generating the image (characters, props, locations, styles, sliders, etc.) prohibit that, the entire image is off limits to sell; for similar reasons most models of fictional characters prohibit sale of images (IP rights) and most photorealistic models of celebrities also disable online generation (right of publicity, a related but thornier legal concept) to remove the temptation to produce t-shirts/create convincing libelous content anonymously.
@ydoomenaud thank you so much
Hi @Photographer ,
I saw a video with hands comparison, and Photon seems to have really good hands in comparison with other SD 1.5 models.
I've been training and making models for a while, and hands are always a big problem, especially after checkpoint finetuning. So I was curious about your method of training Lora and merging them. Interesting that you say that merging Loras broke hands. How were you able to fix it? If you can share a little bit more info about the process, it would be great to learn.
Also, I wanted to invite you to my 🫶 Discord server, take care.
Hands are a total nightmare for the AI. I get the best results with low-resolution images, then using img2img to scale them up to a decent resolution, say 512x768 then img2img to 1024x1536. Rendering at 1024x1536 invariably has deformed hands, even with negative prompt embeddings and LoRAs. I even tried a very specific prompt to purely generate a detailed hand on its own. After 100 images I failed to get one single image with proper proportions or correct anatomy. Even with Photon.
@gr3yh4wk1
Yes, hands are very problematic with all models.
But there are differences in how well the checkpoints perform; some are a little bit better. It's not even clear how to measure the problem, due to the many possible prompts and settings.
But usually, people choose a few prompts and settings, generate images, and then assess the hands from those images. As an example, you can watch this video of someone who did it.
I am not a big believer in embeddings that claim to improve hands, but some of them might have a small beneficial effect.
I usually use hires-fix too.
Real shame these images get so few reactions. I mean literally zero for images it takes forever to get right. Why are there so few reactions to images using this model? Do ppl just generally not like realistic images? Genuinely curious.
It's pretty obvious what kind of images get votes: 1. Porn 2. Funny stuff 3. Very unusual images.
An "ordinary" image that looks like a photo taken by a skilled photographer will no longer "impress" anyone here. It's not that people do not like realistic images, some do, but that's not what most people who browse A.I. images on civitai are here for. Once people have seen enough of a type of image, they just get used to it, a kind of mental fatigue sets in, and they just ignore them.
But that's just civitai. If you go to Reddit's r/StableDiffusion, you'll see that such images get lots of upvotes. It's just a different audience. People in r/SD are obsessed with generating "images that you cannot tell are A.I."
But TBH, just generate the kind of images you enjoy working on. If people like it, great, but that should not be the goal. The goal is to enjoy this hobby, not trying to please somebody else.
I learned this the hard way. For a while, I tried to generate images that I thought people would upvote, but I found that I did not enjoy the act of thinking up and creating images as much as when I do it just for the fun of it. So after a while, I stopped doing that, and I got my enjoyment back.
YMMV, of course.
@NowhereManGo I absolutely agree with you. Couldn't have said it better. I also noticed that there are much fewer reactions to Photon than to many other models, but if you stop thinking about popularity and likes and just do interesting things, then you get inspiration and pleasure from the very process of creating something new and interesting.
@Akalabeth Glad to hear that we are in agreement 😁. Just as you said, the popularity of an image depends greatly on the model used to generate it. For two main reasons.
One is that many model creators are very competitive (leaderboard) and many "cultivate" the model's image gallery actively by upvoting images there, so that the image creators will feel a bit of encouragement and an ego boost. This will create a virtuous cycle where more people will post and also browse the images for that model page.
The second reason has to do with traffic. There are just way too many images posted every day, and if you post an image on a very popular model, the image will stay on top for only a brief while. Only a very dedicated image viewer will bother to scroll down further to see more.
So images posted to model pages with a smaller, but very dedicated user base may actually get more upvotes, because these images will have better visibility. Also, more "specialized" models tend to attract a particular audience, who may appreciate that type of images more.
Browsing the main feed, which is filled with porn and, TBH, some low-quality images, is like drinking from a fire hose attached to sewer water, so unless one sets up the appropriate filters, it is probably done by no one except via sorting by "Most Reactions" (which still gets you mostly porn). This creates an amplifying effect where top-rated images get even more votes while some quality images get no votes and languish forever.
Maybe in the future, with better support for collections, it will be possible to give quality images more exposure, with dedicated users making curated, specialized collections (say, SFW photo images) for a particular audience.
If you have an interest in making such a collection, feel free to post your collection to Image Collector's Corner: https://civitai.com/clubs/110 run by https://civitai.com/user/NobodyButMeow
I'm on the more conservative side of the spectrum, and there's the whole progressive "phhhhuck drumpf" thing going on with some creators, sure. But some of the images are genuinely creative, and sometimes it's "whoa, damn, this tech really can be scary sometimes". I mean, the prompting demonstrates a great understanding of SD, and the images are just a pure pleasure to look at no matter the political persuasion. Practically no reactions.
The porn stuff is fun to play with for a couple of days, but it gets old quick. NowhereManGo (did you ever see the '90s show on UPN with Bruce Greenwood?) said it succinctly - just find what you like and concentrate on your own stuff. And come up with your own prompts. You'll have a helluva lot more fun with this stuff.
@aquatermite911 I agree that it is more fun to come up with your own prompt. It is like a sort of mental puzzle. There is no fun if somebody else solved it for you.
But sometimes one is stuck, and it is nice if the answer sheet is available 😁. There is also a lot to be learned from other people's prompts. So my usual approach is to think about how I would go about prompting for it. If I can get it to work, then I'll compare my prompt with the original. If not, then I declare defeat, look at the prompt, and try to learn from it.
I tend to stick some music on and generate images based on what I hear and mood I'm in. Unfortunately I do get depression and I get some dark images generated. But I find it therapeutic to visualise the downer sometimes and bring myself back up.
I always think this medium is a genuine art medium and deserves the same level of artistry and creativity. Even in the "porny" images, I upvote things where there's great lighting or imaginative settings etc. Or even if they genuinely make me laugh.
The ultimate buzz for me, though, is seeing someone use some or all of my prompt for their own images. That's all kinds of awesome!
Guys, let me add my two cents to the discussion. I am a traditional, digital and AI artist. I consider AI creativity on the same level as the creativity of artists who draw on canvas or in Photoshop or other digital applications with the use of a graphics tablet. It’s very cool to create something of your own from scratch, but in fact, in most cases, any artist uses the work of people who made or came up with something before you. A widely known approach is called “Steal Like an Artist,” where you don’t just copy someone else’s style, or in our case a prompt, but by using and combining other people’s prompts as a basis for your image, you create something of your own, unique and not like what came before you. Even Leonardo da Vinci did not start from scratch; he first studied with the best masters of his time, worked for years as an apprentice, and only then surpassed them and began to create his own masterpieces, surpassing his teachers. The same thing happens with artists working with Stable Diffusion, Midjourney, etc. First we learn the basics, then we copy, then we experiment, and only then we create something completely original, something that no one has ever thought of before. Like a true samurai, an artist has no goal, only an endless lifelong journey. If we think of creativity as a part of our lives, and not a momentary whim to please the public, then true understanding and true pleasure comes. So I wish everyone to learn to enjoy not momentary popularity or likes, but the creative process itself, when out of 1000 attempts and options you get the one that pleases you. I wish everyone good luck on the path of a true artist.
@gr3yh4wk1 I agree that A.I. image generation has some therapeutic uses. I lost someone important last year, and it helped me forget about some of my grief, if only temporarily.
To me, A.I. is a tool, similar to photography. Whether the tool produces garbage or art is entirely in the hands of the user. Those who say that it cannot produce "art" either never tried it or have some other agenda.
By "art" I mean something that other people will find enjoyable, and that evokes a certain emotion in them. Could be laughter, sadness, amazement, etc. Maybe nobody has yet produced "high art" that makes people think very deeply about something, like Guernica or the Sistine Chapel ceiling, but I am sure somebody will. Not through simple text2img, but assisted by A.I. in some way.
Just like you, I've got nothing against NSFW as long as it is interesting and/or innovative in some way. But sadly I have to block all of it, since the signal-to-noise ratio for NSFW is just too low here.
Yeah, I always get a kick out of seeing one of my prompts being reused. Always bring a smile. As they say, “Imitation is the sincerest form of flattery that mediocrity can pay to greatness.” ― Oscar Wilde 🤣
@Akalabeth
I never expected this comment to attract any attention from anyone other than the OP gr3yh4wk1 😅.
It is good to hear affirmation from a "real" artist. I am just a total amateur, who has been in love with all forms of art since childhood. I can draw and paint at the level of a good high school art class student, but decided on a career in STEM, so I don't have the learning and practice to take me further. I thought about taking courses to rekindle my skills, but then A.I. came along, and now I have a tool that allows me to run wild with my imagination. It is a dream come true.
I agree 100% with what you said about copying from others. That's how everyone, even the greatest artists, writers, and composers in history, has done it. Those who are railing against A.I. "stealing from artists" either don't understand the technology (most "traditional" artists, I guess), are being hypocritical (as if they never learned or copied from other artists), or have some hidden agenda.
I wish everyone enjoyment and fun with this new medium 🙏, and please do share your creations 😁👍.
You really shouldn't care about reactions/upvotes/whatever on some pictures you generated. As mentioned in the other comments, people will have different expectations/opinions etc. You are not making stuff for them, you are making stuff for yourself. This applies to all forms of art.
If you are actually making stuff for them because for some reason you live for the validation of others (which you REALLY shouldn't do; this is a BIG issue related to social media, as well as a VERY BIG long-standing issue in America that's slowly starting to bleed over into the rest of the world), then just learn what most people want and make that instead. (As mentioned, in this specific case, probably porn?)
Besides that:
- Its realism is questionable (lots of drawing-and-3D-like compositions with a "realism" style put on top at the end, that don't look all that realistic when you look a few times)
- Its abilities are questionable (notice how a LOT of pictures use LoRAs, therefore not being a representation of the model's capabilities)
- Taking forever to get something right is not seen as a good thing at all, especially when comparing with the latest & best rated models
- It's not been updated in over half a year while all the 'big' models get an update each month or even more often
Finally, you have to remember that CivitAI started as, and is first and foremost, a platform for downloading models. The fact that they bolted on all these extra social features in the last few months to justify paid memberships is completely meaningless to at least half the visitors, if not more; they come here to download a new/specific model, not to rate the pictures of others. (Also, I'm sure a lot of people straight up stopped using CivitAI when the timed releases for paid members were introduced.)
In my humble opinion, CivitAI should go back to what it was at first: Hugging Face with a better UI. Just delete all the dumb reaction and leaderboard BS that nobody asked for, roll back the cumbersome way to read a review to what it was before, remove the separate image section, the paid time limitations, the image generation that doesn't even work half the time, etc. I mostly stopped using this website and reluctantly use Hugging Face now, because the goals of CivitAI seem to have skewed towards something I do not support, and the whole thing is turning into some weird social-network-like thing that I have no interest in.
@baconmessenger I agree 100% with the first 1/2 of your post. As mere hobbyists, who generate images for fun, our own enjoyment should come first, and being "liked" should be secondary.
Regarding the fact that Civitai is turning more and more into a "social media" site, there is actually a good reason for it, and has to do with the model makers. They are just people, like you and me, and most people do enjoy competition, attention, praise and validation from others. Hence, all these upvotes, leaderboards, posting images to their model pages etc. Yes, there is vanity involved, and very few people are immune from that.
I am not a model builder, but I'm on friendly terms with a few of the top modelers. I've also read through discussions among model makers in Civitai's Discord channels. In general, they are great people, creative, and mostly intrinsically motivated. And yet, most of them do enjoy the attention they receive, and they eagerly examine every image posted to their model page and read every comment and review left there.
So my point is, Civitai understands this, and knows that the social media aspect of the site is a big part of what attracts model makers to upload their creations to Civitai and update them frequently, instead of going to competitors' websites.
People who just want a place to download models don't want these new features, I get that. But if Civitai gets rid of many of these features, then many of the model makers will become less engaged. The end result is that you will see fewer models and slower updates.
This was the first photorealistic checkpoint I used in SD, and while the woman in the first picture is genuinely beautiful, I couldn't help noticing a Japanese character LoRA I'd grabbed came out looking more like her and less like an Asian (which outside of appearing Caucasian was a good match given the general similarities between her and the character). I'm guessing this is a flaw in the LoRA; if most women generated by Photon all looked like the model there'd be complaining. Still, I've seen other pictures where the checkpoint wasn't mentioned but it's reasonably obvious it's her.
Please give me some advice. I am building my own website that provides royalty-free images (for personal or commercial purposes). Can images generated with these Civitai models be used as royalty-free images?
I also upload images on pixabay.com.
Thank you.
Can this be used as a refiner?
For sure. Most of my generations use Photon as a refiner. Look into my stuff if you want to see examples.
Every morning I check the new community models but I can't find any better than this one! thank you! amazing work!
Please release v2 with more options for male generations... backgrounds... sceneries... street photography, etc.
So several months went by again, and somehow this is STILL the best 1.5 model, and it's not even close.
There are other models that might be more flexible or better at realism, but Photon is the perfect middle ground and does it all great.
Plus, it's still the best match for getting a good likeness with LoRAs.
Yeah, totally agree!
Agreed, easily the best 1.5 model to this day. I took a break and came back and was somewhat disappointed that I can't find anything that improves on this (it's great but... you always want better!)
Man, Reliberate V3 is the best realistic model. You can find it on Hugging Face.
@Newmann After getting it and testing it for a while, I'll have to disagree. It's not bad, but I find Photon and a few other models better.
After trying some of the top recommended realistic models I always come back to this one. Sure, every one has its strengths, but Photon most often gave me the best results.
Although hands, like in any other model, have their weirdness from time to time, I get good results which I sometimes don't even have to inpaint, and more.
simply the best SD model
@Photographer Astonishing creations. I spent more than an hour watching your portfolio... It is surprising how simple it is to produce such great art with your model. I am genuinely stunned. Way to go!
Weird question: anyone know how to generate women that aren't totally beautiful? Like just average, plain-looking people? Without a LoRA, preferably. Can't find the right combination of prompts.
I find that the faces are "averaged out" so much that you always end up with a generic face. Try using clip skip 2; that has got me slightly different results. Also "slightly overweight", "looks like <name of person>" and "asymmetric face" have occasionally got something different. Your mileage may vary. But your best bet is obviously a LoRA; in my case I use 2, sometimes 3, people with same-strength LoRAs to get an averaged face that's different but not close to any current living person. Can also try "1man" but give a feminine description.
@gr3yh4wk1 I did try using names from a random generator, with only mixed results; tried older ages and heavier body types but they were still above average looking lol
I guess it's not a bad problem to have...
Ad detailer should help 😊
@NoMansLand Those all look great to try. Thanks!
@pogo You are welcome.
Just took the challenge and posted a few over the last few days. I found just putting ordinary people in ordinary situations got the best results, i.e. don't put "model shoot" or anything like that. Just think of a woman on the bus, half asleep, dressed in a cardigan, etc. ... normal people :). I prefer candid photos, but they can be difficult to get with AI as there are overwhelmingly more images of modelling or of people looking at the camera. Looking forward to seeing results if you can, @pogo!
@gr3yh4wk1 According to the creator of Photon, the use of "social worker" can be used to create more "normal look" woman: https://www.reddit.com/r/unstable_diffusion/comments/17v20hc/comment/k98934d/?utm_source=reddit&utm_medium=web2x&context=3
@NoMansLand Ah! I wondered why he kept putting "social worker" in his prompts!!
@gr3yh4wk1 👌
@NoMansLand @gr3yh4wk1 but it doesn't really seem to work. I'd like to see other people test it, maybe I'm doing it wrong. "(Argentinian social worker:1.4)" is effectively indistinguishable from "(Argentinian woman)"
@NoMansLand I tested (Argentinian social worker), like in his prompts, vs (Argentinian woman), and the only difference I can see is that greater weights sometimes just put the country's colors in the image lol.
Has no effect on the woman as far as I can tell.
@pogo Tried again - results https://civitai.com/posts/1485915 not the best quality but just wanted to capture that ordinary look. I tried first with no negative prompts and they were ok, if a bit low quality. As soon as I started adding negative prompts I got supermodels. Now (sorry to say) putting (beautiful)1 in the negative prompts seemed to definitely stop the supermodels. Which is a shame because I think they are still beautiful!
@gr3yh4wk1 I've been finding some success with using names and mixing them, like [Name1:Name2:.5], whatever that's called.
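The `[Name1:Name2:.5]` syntax is Auto1111's "prompt editing": the sampler denoises with the first name for the first half of the steps, then switches to the second, which is what blends the two faces. A minimal sketch of the scheduling logic for a single token (the helper name is illustrative; real A1111 also supports nesting and alternation, which this ignores):

```python
import re

def active_prompt(template: str, step: int, total_steps: int) -> str:
    """Resolve one [from:to:when] prompt-editing token for a given step.

    `when` below 1 is treated as a fraction of total steps; 1 or above
    as an absolute step number (the common A1111 convention).
    """
    match = re.search(r"\[([^:\[\]]*):([^:\[\]]*):([0-9.]+)\]", template)
    if not match:
        return template  # no editing token, the prompt is static
    before, after, when = match.group(1), match.group(2), float(match.group(3))
    switch_at = when * total_steps if when < 1 else when
    chosen = before if step < switch_at else after
    return template[: match.start()] + chosen + template[match.end():]
```

So with 20 steps, `photo of [Name1:Name2:.5]` denoises toward Name1 for steps 0-9 and toward Name2 afterwards, which is why the result looks like neither person exactly.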
@gr3yh4wk1 and @pogo. Having lived in Buenos Aires, Argentina many years ago when I was a boy, I can tell you that one can find striking looking women and girls on the street and in school easily. So maybe using "Argentinian social worker" is not such a good idea. Perhaps just "social worker" would work better? 😅.
I am certainly not alone in my assessment of Argentinian woman: https://www.huffpost.com/entry/argentina-argentine-beautiful-women_b_1673348
Beauty is in the eye of the beholder; my recollection of Argentinian women is that they exude a kind of confident, natural sexiness that one often finds in dark-haired Latinas. Of course, it is entirely possible that my memory is indulging me with nostalgia for a boyhood long gone 😂
@NoMansLand Argentinian Social Worker was the prompt from the post you linked. I tested it multiple times with just social worker. My opinion is "social worker" has no effect at all.
@pogo I see, perhaps there is some prompt dependency as well. I'll have to carry out some test myself 😅
@NoMansLand I'm 100% confident that no one could tell the difference from what I saw. Maybe I'm slightly biased because I've seen a lot of prolific creators that use prompts that I've found to have no effect on the result at all, lol
@pogo Most certainly possible. Most people, me included, often take a "working prompt" and just modify part of it. "Argentinian social worker" probably produced some good images, and so the model maker just continued using it. I am sure it does no harm! 😁
@gr3yh4wk1 and @pogo
So I've run my own 4 tests, using the same seed with the same simple prompt but different main subject: Argentinian woman/woman/female social worker/female Argentinian social worker: https://civitai.com/posts/1499957
You can draw your own conclusion, but IMO adding "social worker" does seem to make the subject just a tad less pretty than the beauty produced by "Argentinian woman". But still, they are all pretty women, so mission NOT accomplished.
It is just one simple test, hardly conclusive, of course 😅
@NoMansLand I think the test would be more helpful if the only thing changed was having "social worker" in the prompt.
https://civitai.com/posts/1504730
I tried to control the composition to make the comparison clearer.
No one would be able to tell which is which without the label.
@NoMansLand I'm able to get more interesting faces by using dynamic prompts and the [x:y:.5] thing
I told ChatGPT to make a list of famous people and I just mash their faces together :)
@pogo Now I see why you drew the conclusion that "social worker" doesn't do much. You see, the way these A.I. systems work is that they try to "guess" what the final image should look like from the "initial noise", while "guided" by the prompt. When the prompt is simply "woman" vs "woman social worker", the A.I. is not given much guidance, so the "potential space" of all available images is very large. But with a more specific prompt, the "search space" is more limited, so each word in the prompt, including "social worker", has more chance to influence the image.
Let's see if I can explain this better. In the A.I. training set there are millions of images with the tag "woman", but maybe only a few thousand images with the tag "social worker". So in a prompt like "woman social worker", the term "woman" will dominate. But if the prompt is "woman social worker on a bus", the search space narrows down considerably, to just "woman on a bus" and "social worker", and now "social worker" will have a bigger influence. This is not a simple "intersection" of images, but more of a probabilistic interpretation of the prompt, biased by the training image set.
Now you might say, well, maybe we can use "woman (social worker:1.2)". But I am actually not sure how prompt weight actually works.
BTW, I did produce a pair of images with and without the word "social worker" in the prompt. So the two comparisons are "Argentinian woman" vs "Female Argentinian Social Worker", And "Woman" vs "Female Social Worker". Other pairs of comparison are possible, of course.
I am no A.I. expert, so this is just my layman's understanding of these A.I. systems
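The "guidance" described above has a concrete form: classifier-free guidance (CFG). At each denoising step the model predicts the noise twice, once with the prompt and once with an empty prompt, and the final prediction is extrapolated away from the unconditional one. A toy sketch of just the combination step (real implementations operate on latent tensors, not Python lists):

```python
def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    """Classifier-free guidance combination step.

    guidance_scale = 1 reproduces the prompt-conditioned prediction;
    higher values (the 'CFG Scale' setting) extrapolate further in
    the direction the prompt pulls, making each word count more.
    """
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]
```

This is also part of why a weak token like "social worker" shows up more at higher CFG: the scale multiplies whatever small difference the token makes to the conditional prediction.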
@NoMansLand Dumb it down for me: tell me what prompt you think would be a fair test of whether "social worker" makes the result less beautiful? From my test it seems pretty clear that adding "social worker" has no effect at all.
@pogo I would say "Tired, bored female on a bus" vs "Tired, bored female social worker on a bus" would be a reasonable test. Here is a pair of images, not cherry-picked, just the first two images I've generated. To my eyes, adding "social worker" did make the woman a little bit less pretty (but she would still be considered pretty by most standards): https://civitai.com/posts/1507279
Without the "social worker" part, the A.I. just generated your "Standard Photon Beauty".
Hardly conclusive, because one really needs to generate a dozen images, so you are welcome to carry out further tests 😅
@NoMansLand That resulted in a different angle and expression, which should be controlled if you really want to test it. That being said, the moles moved around a bit, and her brows are furrowed? Same person, though...
@NoMansLand https://civitai.com/posts/1507602 check this out
@pogo The problem with using ControlNet is that ControlNet itself also restricts what the A.I. can generate, thus weakening the effect of "social worker".
At any rate, it appears that "social worker" only works weakly on "pure" txt2img, so if you are using mostly ControlNet, it is not useful anyway.
BTW, the creator of Photon continues to use "social worker" in his prompts, so he is definitely a believer (but then he came up with the idea, not me 😅): https://civitai.com/images/6970524
@NoMansLand @pogo Been a great thread!
@NoMansLand Why would you expect OpenPose to have much effect on "social worker"? I can use the same OpenPose to get wildly different people just by changing one word. It works when I change Argentinian to Korean or Guyanese... it even works when I take "social" out of "social worker", and it will give the subject little worker's hats lol.
You need to control the pose and composition otherwise you're going to get a completely different angle and expression of the subject.
For instance, remember when you tested it and your "social worker" image had a 3/4 pose and she was scowling? Of course someone is going to look less attractive when they look angry. But it didn't affect the age, weight, facial structure, skin texture in any meaningful way.
@NoMansLand I posted a few examples of totally different results from the same pose
@gr3yh4wk1 kind of feels like talking in circles right now
@pogo Good point, maybe my understanding about OpenPose is wrong. I almost never use ControlNet, preferring to let the A.I. do the composition for me (I am lazy, and I like to be surprised by the A.I. 😅). I just assume that since the A.I. has to "conform" to the poses instructed by ControlNet, it has more limited options to "express itself."
This has been a good discussion, we are all just learning here. The conclusion seems to be that "social worker" has at best a weak effect on pure text2img and seems to have no effect at all when CN is involved.
I've carried out more tests myself, and you can find the set posted on tensor dot art/posts/697462831848675396 (not a direct link, since external links are frowned upon by Civitai). By just letting the A.I. generate using random seeds, I can guess whether "social worker" is in the prompt maybe 55-60% of the time. Better than a random guess, but still rather weak.
So your solution of using a mixture of names is the better way to get a more "average" face. @gr3yh4wk1 do you agree?
@NoMansLand I generally use several LoRAs with the same weights (0.5) to average out features and not have any one of them dominate, or use a "personality" LoRA but with really low weights, like 0.3, to give slight changes to the usual "generic female" the AI generates. Unfortunately this site has no compatibility or integration with NMKD's GUI, which is the one I use instead of A1111. So I have to manually add the prompts to every pic, and NMKD's GUI does not add the LoRAs into the prompt, so it's hard to repro my images. I've asked the team to better support other generators, but without success.
@gr3yh4wk1 Thanks for the suggestion. Using a combination of character LoRAs is a good idea, especially if you want consistency of the face across multiple images.
As for poor Civitai support for non-Auto1111 generators, yeah, that is a PITA. Maybe you want to upvote this proposal: https://feedback.civitai.com/p/proposal-for-new-png-chunk-for-civitai-specific-metadata
@NoMansLand The technical data goes over my head, but I think it's unlikely NMKD would support this any time soon. Their team is small and updates are months apart. The platform is way behind all the other platforms technically. I use it because it's way less resource intensive, so I can get reasonable-size pics on my 10-year-old rig. Plus it can produce 100 images in like 15 minutes. It's also dead straightforward to install. A1111 was a complete nightmare; installing SDXL caused my PC to bluescreen. NMKD just supported it on installation without any added technical overhead. I often take 50 to 100 images to finally get one I post on CivitAI. It's not uncommon for my work folder to have 1,000 images after a night of creating!
@gr3yh4wk1 Basically, the proposal is that the PNG should carry an extra, Civitai-specific piece of information about the generation process. It defines the format of this extra data as that generated by Automatic1111, which is the bare essential for someone to generate a similar (but not exact) image from the data, such as the model used, positive and negative prompt, steps, sampler, etc. It is still up to the developer of a platform such as NMKD to implement this, but it should be just a few hours of work for a half-competent programmer (i.e. I could do it 🤣, if I were already familiar with the NMKD code).
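For reference, the Auto1111 "parameters" text that the proposal builds on is just a plain string: prompt lines, an optional "Negative prompt:" line, then one comma-separated settings line. A rough parser, assuming that common three-part layout (real metadata can be messier, and this is a sketch, not code from the proposal itself):

```python
def parse_a1111_parameters(text: str) -> dict:
    """Split an A1111-style parameters string into prompt,
    negative prompt, and a settings dict (assumes the usual layout)."""
    lines = text.strip().split("\n")
    # The last line is the comma-separated settings if it has key: value pairs
    settings_line = lines[-1] if ":" in lines[-1] else ""
    body = lines[:-1] if settings_line else lines
    prompt_lines, negative_lines, in_negative = [], [], False
    for line in body:
        if line.startswith("Negative prompt:"):
            in_negative = True
            negative_lines.append(line[len("Negative prompt:"):].strip())
        elif in_negative:
            negative_lines.append(line)
        else:
            prompt_lines.append(line)
    settings = {}
    for part in settings_line.split(","):
        if ":" in part:
            key, _, value = part.partition(":")
            settings[key.strip()] = value.strip()
    return {"prompt": "\n".join(prompt_lines).strip(),
            "negative_prompt": "\n".join(negative_lines).strip(),
            "settings": settings}
```

Any tool that can emit this one string into a PNG text chunk becomes reproducible on Civitai, which is the whole point of the proposal.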
That Automatic1111 is a mess is well known, it being the oldest; with old platforms it is just "shit piled on top of shit". I am not insulting the team behind Auto1111; it is just that when you rush something out and then try to patch things up while trying not to break it (we call it "fixing the plane while it is flying"), it is hard.
At the moment, https://github.com/lllyasviel/stable-diffusion-webui-forge is being worked on by the guy behind ControlNet and Fooocus. It is alpha-level software, but you may want to give it a try. It tweaks the Auto1111 backend so that it runs better on older hardware.
I always admire people who are selective about what they post here. I am squarely in the "not so selective" group; I just post whatever I think others may find amusing 🤣. So keep up the good work 🙏👍
@NoMansLand Well I added myself as a follower on both your homepages! Look forward to seeing what you guys come up with :)
This may be of some interest to you guys: reddit com/r/StableDiffusion/comments/1azratt/who_have_seen_this_same_daam_face_more_than_500/
@NoMansLand @pogo Just posted this bounty - https://civitai.com/bounties/1567 in case you're interested!
@gr3yh4wk1 that's cool bro, I'm playing around with sdxl right now but I'll give it a try later
@gr3yh4wk1 Thanks for running this interesting bounty. Often the bounties are too specific to be fun, but this one is open enough that we should see some creative images 👍🙏😁
Start by specifying a style of image that typically contains more normal looking people e.g. candid photography, documentary photography, street photography, amateur photography, etc. and give it some weight (1.5 or so).
Second, add styles of images that contain conventionally beautiful people to the negative prompt, e.g. fashion, modelshoot, erotic, etc. Finally, don't describe the subject as good looking.
For more distinct looking faces, describe the person, e.g. name, nationality, occupation/profession, etc.
gr3yh4wk1 also pointed out that the general setting affects things. More ordinary settings bias the AI towards more ordinary looking people.
@joeytardanico488 All good points. Thank you for sharing them 🙏
Use another model called "Human"
@codegix Thanks for the tip: https://civitai.com/models/98755/humans
Same version for more than 8 months? 🙄 No problemo, photon_v1 is probably one of the best SD1.5 models for real txt2img!!! ❤️💕💕💕
A very good model! I work with it and enjoy it! Thank you!
This has been my go-to model for a while. Are you planning on releasing a new version eventually?
Thank you :-)
I am a new user of Stable Diffusion and have recently started using it with the A1111 platform. Downloaded this model but unfortunately, I keep receiving the following error message: 'SafetensorError: Error while deserializing header: MetadataIncompleteBuffer.' As a result, no image is displayed with my prompt. Could someone please assist me with resolving this issue?
Have you tried updating everything? Ie: ControlNet etc...?
Unfortunately, I stopped using Photon on a regular basis, because at the moment it has begun to lag behind its analogues, which are updated and progress more often. A very good model, but there has been no progress for too long. I'm looking forward to the update.
Which ones are you using instead?
I think it would be disrespectful to the author to advertise other models here. Therefore, I will refrain from commenting. Sorry
Any chance of making an LCM model for this?
Just use the LCM LoRA with this checkpoint at CFG 2.
Question to the author. Are there any plans to update the model? Should we wait for the new Photon?
Hi! Apologies for the delay in responding.
Regarding Photon, it's unlikely that there will be a new version. I've been experimenting with new models like Flux and PixArt-Sigma, and I've really liked the latter, as it's comparable in speed to SD1.5 but generates much more coherent images. While Flux is all the rage, I believe there's still room for a smaller model that generates creative images in the style of Photon, and until something new comes along, PixArt-Sigma seems to be the best option. I'm very busy, but I'm looking forward to experimenting with training a new model; maybe the next one will be Photon Sigma or something like that.
Thanks for your interest!... and for your patience 😊
I'm ready to start a crowdfunding campaign for the next Photon, where you at bro? Take my money!
Thanks for the enthusiasm and support! I don't know; the AI landscape is moving fast, and there's always something new to explore. I've been keeping an eye on Flux and PixArt-Sigma; they're both impressive. Right now I'm pretty busy, but if I do get around to training a new model, I'd definitely consider something like Photon-Sigma: a fast and creative model that doesn't require a lot of hardware.
What GFX card are you guys all using? Just curious, as my 2016 rig is starting to creak a lot. I use an RTX 3060 card (got it for the 12 GB VRAM) but it's significantly bottlenecked, sadly!
you should give Forge-UI a try and see the difference.
@devilkad Tried installing it but couldn't get it working. That said, I finally got A1111 working again after my installation got nuked by trying to install SDXL and bluescreening my PC. I forgot how mind-meltingly slow it is! 12 min to generate 5 768x768 images? I use NMKD's GUI; I can generate those in about 30 seconds to a minute.
@gr3yh4wk1 I would try a fresh Windows installation. A BSOD during installation or while using SD is abnormal, unless your PC is overheating or severely corrupted.
@ArnAbbas It's just old, and bits of it are likely creaking; a 7-year-old mechanical HD could be a bit unreliable! I'm hoping to upgrade it soon if possible. In fact, I think I bought it in 2015, so it's even older.
@gr3yh4wk1 HDD?? Order an SSD literally the moment you read my reply, even a small one that's cheap, literally anything except using a mechanical drive for anything but cold storage in the 21st century
also I was using a 3070 and it seemed fine. More than 8gb would definitely have been nice though, I ran out of memory more than a couple times and I mostly just did single gen
I would also recommend getting new RAM... well, literally just buy a new computer at this point. There's no upgrading that without a new motherboard which means new everything usually
@ratherlewdfox I hear you. I have two SSDs plus the HDD. It's all very organic and "added to over the years", so it's not the best system ever! It's slow and weird, just like me, so I guess we are soul mates in a way :)
@gr3yh4wk1 well.. for reference... openai was founded in 2015 :)
it's honestly a marvel it can work at all, even slowly. Technology is underappreciated
@gr3yh4wk1 using a 3060 like you and SDXL 896x1152 DPM++ 3M SDE Karras 40 steps is done in 36 seconds. There must be something wrong with your config.
@downwego My mobo, processor and memory are vintage 2016 though, so there's probably a lot of bottlenecking going on or something
@downwego Can you please share your Automatic1111 launch options and other optimizations you might have done? I am facing out-of-memory errors in torch with any of the SDXL models. I have an RTX 3070 and 16 GB of DDR4 RAM, Core i7 10700.
@TOXIC_charlie Get Forge-UI; it will help you a lot.
@devilkad Thank you. I found ComfyUI to be much better than Automatic1111 for SDXL. I also did some more optimizations on my system, like increasing my swap memory to 1.5x my system RAM and optimizing the launch options. Now I am able to run SDXL models like Pony V6, and it takes around 1 minute to generate at 20 steps.
@TOXIC_charlie I'm really glad ComfyUI worked for you. I told you about Forge since it gives a huge speedup and you were working on A1111; Forge-UI is just the same interface but with many more optimizations and much more speed.
In my opinion it is the best model to start with, for everything from basic to advanced creations.
Thanks for the kind words!
This is perfect for realistic images with dramatic light
Thanks! I'm glad you're enjoying Photon.
Where is the SDXL version? Was it abandoned?
It's been a while, a lot has changed in the AI world. I'm keeping an eye on the latest models like Flux and PixArt-Sigma. If I find some time to train a new model, it's more likely to be a fine-tuning of one of those, perhaps leaning towards a more creative style like Photon.
Still one of the greatest models ever made and the guy only made one version. Talk about leaving us wanting more!
Thanks for the kind words! I did try to make a Photon v2 several times, but it never quite reached the same level of quality. I'm always learning, and I'm excited to see what I can do with the newer models like FLUX and PixArt-Sigma. I'm thinking a Photon-Sigma might be the way to go when I have some free time, a model that's smaller and faster but still with that creative edge.
It's the best I've ever used, it completely obeys your commands. Thank you. Will there be a new version?
Thanks! I'm glad you enjoyed Photon. I'm always looking for time to fine-tune a new model, and I'm excited to see what I can do with the newer models like FLUX and PixArt-Sigma. I'm really keen on making a Photon-Sigma or something similar.
my favorite model
I'm glad you enjoy it!. It's been a while since I've updated it, but I'm always working on new things.
One of the best, if not the best, SD1.5 model I know. For a project of mine I created 2000 pictures (512x768, simple prompts of women in multiple poses like standing, sitting, close-up, with different clothes and hair colours, and a basic negative prompt, so extra limbs, mutations, etc.). After I sorted out the pictures with very obvious artifacts, like multiple limbs or two persons connected to each other, I had 1879 pictures left. So about 94% of the time the model generated pictures which were at least usable. I didn't look too closely at the hands except in extreme cases.
Thank you for your comment! I'm glad to hear you found Photon useful for your project. I moved on from Photon a while ago, but for testing I used to generate images at a resolution of 680x1024. At that resolution the error rate was higher, but in the images without errors the hands were often more detailed and naturally positioned, like resting on a table or holding a piece of clothing. It's an emergent property of the model whose origin I never quite understood.
A very cool model.
But it hasn't been updated in a very long time, and it looks like it won't be.
Thanks for the kind words! Photon hasn't been updated in a while, as I haven't had the time or resources to dedicate to it. However, with the arrival of new models like Flux, I've been considering fine-tuning PixArt-Sigma to achieve a style similar to Photon. I believe with some optimization, PixArt could achieve speeds comparable to SD15 while generating images that adhere more closely to the prompts.
Don't know why more people don't use it for machinescapes... it's sooo good at it.
I think the author must have included some special high-quality industrial images in the training (just for me!), because most other models just collapse under my outrageous demands.
All time great
Hello!
Would you consider making an XL version?
Fantastic checkpoint!! I think a lot of us just can't wait for a new version of it... wow!
I love this Checkpoint the most!
Why is this checkpoint so good for training loras?! I just tried it with an anime character that I was having trouble with, and the first run was amazing.
This Checkpoint is So,ooo Good !
This model shines as a refiner; it can convert any ugly-looking image into something good and artistic without much prompt stuffing.
This model is terribly good, especially at realistic pseudo-img2img in Comfy. I.e., in Comfy, take a photo you like, feed it from [Load Image] > [VAE Encode] > [your_flavor_of_Sampler], set the denoise to something like 0.25-0.35, prompt with only detail/realism-oriented wording, and watch the magic happen.
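The denoise value in that recipe directly controls how many sampling steps actually run: the input photo is noised only partway into the schedule and denoised from there, so a low denoise keeps the output close to the photo. The step arithmetic, roughly (a simplification of what samplers actually do):

```python
def img2img_step_range(num_steps: int, denoise: float):
    """Return (start_step, steps_to_run) for img2img.

    With denoise 0.3 only the final ~30% of the schedule runs,
    which is why the output stays close to the input photo while
    still picking up the model's detail and realism.
    """
    steps_to_run = round(num_steps * denoise)
    return num_steps - steps_to_run, steps_to_run
```

For example, 30 steps at denoise 0.3 runs only 9 sampling steps, starting from step 21.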
can you give me the prompt
Hi Goods, realise I'm a bit late asking this question, but I can't find a good image to image comfyui workflow anywhere, could you point me to an example of one you've had success with? Thanks so much
Quite a good model, very good flexibility and versatility with responsiveness. It cannot be fully called realistic, but it is very close to it. Good job!
In image-to-image generation, if you increase the denoising strength too much, the image will completely transform into a 2D style...
This model was made by someone with access to alien technology, that's for sure. What a model!!!
A good start, but it can't handle freckles or sexual acts.
This model is amazing! It works with many different schedulers and has given me hope that I will not be a total failure in life. I am not kidding. Thank you very much for the model and the description!
I always use it, thanks to the developer :)
Even though it's theoretically based on SDXL 1, I can't use it as a base model in Fooocus (very strange! 🤔), but I use it as a refiner with other models at between 0.5 and 0.7.
I really like how it collaborates with "h___en___tai Mix XL" (without underscores), and sometimes it also works very well with "Fresh Draw XL (anime-manga eddiemauro-mix)" depending on the prompts. Maybe (maybe) the best collaboration is with "RealCartoon-XL". With this last one, I've probably found a balance between lighting effects (just how I like them!) and variety. Let me explain: with "h___en___tai Mix XL" (without underscores), the lighting is excellent! But the variety is very limited, and it often produces deformations 🤪... for deformed hands, there are always many ways to fix them 😉
Wow, amazing landscape realism!
I've had lots of success using this model to generate photo-realistic characters.
Glad to hear Photon worked well for you! Always nice to see the model still creating cool stuff.
Details
Files
photon_v1.safetensors
Mirrors
photon_v1.safetensors