this post was submitted on 30 Jan 2024
29 points (91.4% liked)

AI Generated Images


Community for AI image generation. Any models are allowed. Creativity is valuable! Posting the model used is recommended for reference, but it is not a rule.

No explicit violence, gore, or nudity.

This is not an NSFW community, although exceptions are sometimes made. Any NSFW posts must be marked as NSFW and may be removed at any moderator's discretion. Any suggestive imagery may be removed at any time.

Refer to https://lemmynsfw.com/ for any NSFW imagery.

No misconduct: Harassment, Abuse or assault, Bullying, Illegal activity, Discrimination, Racism, Trolling, Bigotry.

AI Generated Videos are allowed under the same rules. Photosensitivity warning required for any flashing videos.

To embed images type:

“![](put image url in here)”

Follow all sh.itjust.works rules.


top 8 comments
[–] aeronmelon@lemmy.world 4 points 9 months ago

Well, it got the number of fingers correct.

[–] Halcyon@discuss.tchncs.de 3 points 9 months ago

My Conpwuter is broken.

[–] admin@sh.itjust.works 3 points 9 months ago

Conpwter repair & Insalltion 9s!

[–] cloudless@feddit.uk 3 points 9 months ago (2 children)

It is really weird how the AI seems to intentionally misspell most of the words. It doesn't even seem to be mixing up languages; I really don't understand the logic behind how the AI created this.

[–] 31337@sh.itjust.works 10 points 9 months ago* (last edited 9 months ago)

The Stable Diffusion algorithm is strange, and I'm surprised someone thought of it, and surprised it works.

IIRC it works like this: Stable Diffusion starts with an image of completely random noise. The idea is that the text prompt describes a hypothetical image to which that noise was added. So, the model tries to "predict," given the text, what the image would look like if it were denoised a little bit. It does this repeatedly until the image is fully denoised.

So, it's very easy for the algorithm to make a "mistake" in one iteration by coloring the wrong pixels black. It's unable to correct its mistake in later denoising iterations, and just fills in the pixels around it with what it thinks looks plausible. And it can't really "plan" ahead of time; it can only do one denoising operation at a time.
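
To make that loop concrete, here is a rough Python sketch of the iterative denoising idea. It is not the actual Stable Diffusion code; `predict_noise`, the step count, and the image size are all made up for illustration, and in the real system the predictor is a large neural network conditioned on the text prompt.

```python
import numpy as np

# Stand-in for the trained model conditioned on the text prompt.
# A real model would estimate the noise present in `image` given the
# prompt; this stub just returns a fixed guess so the loop runs.
def predict_noise(image, prompt_embedding, step):
    return image * 0.1

def generate(prompt_embedding, steps=50, size=(64, 64)):
    rng = np.random.default_rng(0)
    image = rng.standard_normal(size)      # start from pure random noise
    for step in range(steps):
        noise_estimate = predict_noise(image, prompt_embedding, step)
        image = image - noise_estimate      # remove a little noise each step
        # Each step only sees the current image, so an early mistake
        # (e.g. a stray dark patch) is kept and "painted around"
        # rather than undone.
    return image

img = generate(prompt_embedding=None)
print(img.shape)
```

The key point is the loop: every step only refines whatever the previous step produced, which is why a wrongly placed mark early on tends to survive into the final image.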

[–] VubDapple@lemmy.world 7 points 9 months ago

It doesn't understand language. It's just producing something that superficially looks like it.

[–] Deceptichum@kbin.social 2 points 9 months ago

Calvin’s all grown up

[–] 0oWow@lemmy.world 2 points 9 months ago

Ah good. I was looking for someone who can make kappees.