It's time to promote https://lemmy.dbzer0.com/c/stable_diffusion_art.
Very helpful and relaxing.
Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn't work well for people on different instances. Try fixing it like this: !stable_diffusion_art@lemmy.dbzer0.com
Talking to a text-to-image model is kinda like meeting someone from a different generation and culture who only half knows your language. You have to spend time with them to communicate better and understand the "generational and cultural differences," so to speak.
Try checking out PromptHero or Civit.ai to see what prompts people are using to generate certain things.
Also, most text-to-image models are not made to be conversational and will work better if your prompts are similar to what you’d type in when searching for a photo on Google Images. For example, instead of a command like “Generate a photo for me of a…”, do “Disposable camera portrait photo, from the side, backlight…”
Dall-E 3 seems to be the easiest to use, and in my experience it does pretty well with prompts like that.
The catch is that it's quick to throttle you after a while, and it's heavily censored, even for seemingly innocuous words.
Stable Diffusion can be a bit dumb sometimes, occasionally giving you an image of a person wearing jean everything. But if you're willing to put in the time to learn it, and you can run it on your own PC, it gives you a lot of freedom and unlimited image output, as fast as your GPU can handle. You could use the "regional prompter" extension to mark zones where you want jeans, a specific shirt, etc., or use inpainting to regenerate a masked area. It's more work, but it's very flexible and controllable.