AI Generated Images
A community for AI image generation. All models are allowed, and creativity is valued! Posting the model you used is recommended for reference, but it is not a rule.
No explicit violence, gore, or nudity.
This is not an NSFW community, although exceptions are sometimes made. Any NSFW post must be marked as NSFW and may be removed at any moderator's discretion. Suggestive imagery may be removed at any time.
Refer to https://lemmynsfw.com/ for any NSFW imagery.
No misconduct: harassment, abuse or assault, bullying, illegal activity, discrimination, racism, trolling, or bigotry.
AI Generated Videos are allowed under the same rules. Photosensitivity warning required for any flashing videos.
To embed images, type:
“![](put the image URL here)”
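For example (the URL here is just a placeholder), posting “![](https://example.com/my-image.png)” will render that image inline in your post.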
Follow all sh.itjust.works rules.
Community Challenge Past Entries
Related communities:
- !auai@programming.dev - Useful general AI discussion
- !aiphotography@lemmings.world - Photo-realistic AI images
- !stable_diffusion_art@lemmy.dbzer0.com - Stable Diffusion Art
- !share_anime_art@lemmy.dbzer0.com - Stable Diffusion Anime Art
- !botart@lemmy.dbzer0.com - AI art generated through bots
- !degenerate@lemmynsfw.com - NSFW weird and surreal images
- !aigen@lemmynsfw.com - NSFW AI generated porn
No, the version they released isn't the full parameter set, and it leads to really bad results on a lot of prompts. You get dramatically better results from their API version, so the full SD3 model is good; the version we have is not.
Here's an example from the SD3 API version:
And here's the same prompt on the locally released weights:
People think Stability AI censored NSFW content in the released model, which has crippled its ability to understand a lot of poses and how anatomy works in general.
For more examples of the issues with SD3, I'd recommend checking this reddit thread.
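If you want to reproduce the local-weights side of this comparison yourself, here's a minimal sketch using Hugging Face diffusers and the publicly released SD3 Medium checkpoint. The prompt, step count, and guidance scale are just placeholder defaults, not anything from this thread, and the repo is gated, so you need to accept the license and log in with a Hugging Face token first.

```python
# Minimal sketch: run the released SD3 Medium weights locally with diffusers.
# Assumes diffusers >= 0.29, a CUDA GPU, and that you have accepted the model
# license on Hugging Face and logged in (huggingface-cli login).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",  # the released local weights
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Use the same prompt you sent to the hosted API so the comparison is apples to apples.
prompt = "a woman lying in the grass"  # placeholder prompt
image = pipe(
    prompt=prompt,
    num_inference_steps=28,  # commonly suggested defaults for SD3 Medium
    guidance_scale=7.0,
).images[0]
image.save("sd3_local_weights.png")
```

Since the hosted version sits behind Stability's API, any difference between that output and this one comes down to the model being served, not your prompt.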
I think the difference is typical of any base model. I have several base models on my computer, and SD3's behavior is quite typical. I fully expect their website hosts a fine-tuned version.
There are a lot of cultural expectations that any given group around the world has about generative AI and far more use cases than any of us can imagine. The base models have an unbiased diversity that reflects their general use; much is possible, but much is hard.
If "woman lying in grass" was truly filtered, what I showed here would not be possible. If you haven't seen it, I edited the post with several of the images in the chain I used to get to the main post image here. The post image is not an anomaly that got through a filter, it is an iterative chain. It is not an easy path to find, but it does exist in the base training corpus.
Personally, I think the real secret sauce is the middle CLIP text encoder and how it relates to the T5 encoder.
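If you want to poke at that, diffusers lets you drop the T5 encoder entirely and run SD3 on the CLIP text encoders alone, which is a cheap way to see how much of a prompt's behavior comes from each. A rough sketch, assuming the same gated SD3 Medium checkpoint as above and a placeholder prompt:

```python
# Sketch: load SD3 without the T5 text encoder to compare CLIP-only behavior
# against the full CLIP + T5 setup. Assumes diffusers >= 0.29 and a CUDA GPU.
import torch
from diffusers import StableDiffusion3Pipeline

pipe_clip_only = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,  # drop the T5-XXL encoder
    tokenizer_3=None,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a woman lying in the grass"  # placeholder prompt
image = pipe_clip_only(
    prompt=prompt,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_clip_only.png")
```

Generating the same prompt with and without T5 makes it fairly obvious where the heavy lifting is happening.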
Thanks, I'm sticking to SDXL finetunes for now. I expect the community will uncensor the model fairly quickly.