this post was submitted on 05 Oct 2024
638 points (95.8% liked)
Not The Onion
you are viewing a single comment's thread
This article is annoyingly one-sided. The tool performs an act of synthesis just like an art student looking at a bunch of art might. Sure, like an art student, it could copy someone's style or even an exact image if asked (though those asking may be better served by torrent sites). But that's not how most people use these tools. People create novel things with these tools and should be protected under the law.
It’s deterministic. I can exactly duplicate your “art” by typing in the same sentence. You’re not creative, you’re just playing with toys.
That's actually fundamentally untrue, independent of your opinion. I promise that when people generate an image from the same phrase, the results will be different; it is not deterministic (not in the way you mean).
You and I cannot type the same prompt into the same generative AI model and receive the same result; no system works with that level of specificity, by design.
They pretty much all use some form of entropy / noise.
This can actually be true, depending on how the system is configured.
For instance, if you and someone else use the same locally hosted Stable Diffusion UI, enter the exact same prompt, and use the same seed, number of steps, and dimensions, you'll get an identical result.
The only reason outputs differ between generations is the noise derived from the seed, which is normally randomized each time. Set it to the same value as someone else's generation and you will get an identical result unless the prompt is changed.
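If you want to see this for yourself, here's a minimal sketch using Hugging Face's diffusers library (my choice of tooling, not something mentioned above; the model name, prompt, and parameters are just illustrative). On the same machine and software stack, fixing the seed, prompt, step count, and dimensions yields the same image on every run.
```python
# Sketch: fixed-seed generation is reproducible. Assumes a CUDA GPU and the
# illustrative model/prompt below; any Stable Diffusion checkpoint works the same way.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(seed: int):
    # Seeding the generator fixes the initial latent noise, which is the only
    # source of randomness between otherwise identical generations.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(
        "a lighthouse at dusk, oil painting",
        num_inference_steps=30,
        height=512,
        width=512,
        generator=generator,
    ).images[0]

a = generate(seed=1234)
b = generate(seed=1234)
print(np.array_equal(np.array(a), np.array(b)))  # True: same seed, identical image
```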
It’s literally as true as it can possibly be. Given the same inputs (including the same seed), a diffusion model will produce exactly the same output every time. It’s deterministic in the most fundamental sense of the word. That’s why, when you share an image on CivitAI, people appreciate it when you include your input parameters, so they can duplicate the image. I have recreated the exact same images using models from there.
Humans are not deterministic (at least as far as we know). If I give two people exactly the same prompt, and exactly the same “training data” (show them the same references, I guess), they will never produce the same output. Even if I give the same person the same prompt, they won’t be able to reproduce the same image again.
I do actually believe that everything, including human behavior, is deterministic. I also believe there is nothing special about human consciousness or creation, tbh.