this post was submitted on 24 May 2024
1047 points (96.5% liked)

Technology

[–] QuadratureSurfer@lemmy.world 21 points 6 months ago (1 children)

Technically, generative AI will always give the same answer when given the same input. But what happens is that a "seed" is mixed in to help randomize things, so that it can give a different answer every time even if you ask it the same question.
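As a sketch of that idea (a toy "sampler" with a seeded RNG, not any real model's code):

```python
import random

def sample_tokens(seed, n=5):
    """Pick n tokens from a toy vocabulary with a seeded RNG.
    Same seed in -> same 'random' tokens out, every run."""
    vocab = ["the", "cat", "sat", "on", "mat"]
    rng = random.Random(seed)  # the seed that gets "mixed in"
    return [rng.choice(vocab) for _ in range(n)]

# Reproducible with a fixed seed:
assert sample_tokens(42) == sample_tokens(42)

# Vary the seed and you (usually) get a different answer:
print(sample_tokens(1))
print(sample_tokens(2))
```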

[–] jyte@lemmy.world 8 points 6 months ago (1 children)

What happened to my computers being reliable, predictable, idempotent? :'(

[–] QuadratureSurfer@lemmy.world 4 points 6 months ago (1 children)

They still are. Giving a generative AI the same input and the same seed results in the same output every time.

[–] jyte@lemmy.world 1 points 5 months ago (1 children)

Technically they still are, but since you have no control over the seed, in practice they are not.

[–] QuadratureSurfer@lemmy.world 1 points 5 months ago (1 children)

OK, but we're discussing whether computers are "reliable, predictable, idempotent". Statements like this about computers are generally made when discussing the internal workings of a computer among developers or at even lower levels among computer engineers and such.

This isn't something you would say at a higher level for end-users because there are any number of reasons why an application can spit out different outputs even when seemingly given the "same input".

And while I could point out that llama.cpp is open source (so you could just go in and test this by forcing the same seed every time...), it doesn't matter, because your statement effectively boils down to something like this:

"I clicked the button (input) for the random number generator and got a different number (output) every time, thus computers are not reliable or predictable!"
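That analogy cuts the other way, too: even the "random number button" is deterministic once you fix the seed (a stdlib sketch, not any particular app):

```python
import random

# Two generators given the same seed produce identical "random" numbers.
a = random.Random(123)
b = random.Random(123)
assert [a.randint(0, 99) for _ in range(5)] == [b.randint(0, 99) for _ in range(5)]

# End-user apps seed from the clock or OS entropy instead, which is why
# the button looks unpredictable even though the machine underneath is not.
```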

If you wanted to make a better argument about computers not always being reliable/predictable, you're better off pointing at how radiation can flip bits in our electronics (which is one reason we have implemented checksums and other tools to verify that information hasn't been altered over time or in transit). Take, for instance, what happened to some voting machines in Belgium in 2003: https://www.businessinsider.com/cosmic-rays-harm-computers-smartphones-2019-7
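The checksum defence is easy to sketch: a cryptographic hash changes if even one bit of the data does (hypothetical payload, Python's stdlib hashlib):

```python
import hashlib

data = bytearray(b"ballot count: 4096")    # hypothetical payload
digest = hashlib.sha256(data).hexdigest()  # checksum stored alongside it

data[0] ^= 0b00010000                      # a single bit flip ("cosmic ray")
tampered = hashlib.sha256(data).hexdigest()

assert tampered != digest  # re-checking the hash exposes the corruption
```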

Anyway, thanks if you read this far, I enjoy discussing things like this.

[–] jyte@lemmy.world 1 points 5 months ago

You're taking my words far more strictly than I intended :)

It was more along the lines of: as a computer user, up until now I could (more or less) expect the tools I use (software/websites) to behave in a relatively consistent manner (even down to reproducing a crash after the same actions). Doing the same thing twice would (mostly) get me the same result/behaviour. For instance, an Excel feature applied to given data should behave the same the next time I show it to a friend. Or if I found a result on Google by typing a given query, I can hopefully find that website again easily enough with the same query (even if it has ranked up or down a little).

It's not strictly "reliable, predictable, idempotent", but consistent enough that people (users) will say it is.

But with those tools (i.e. ChatGPT), you get an answer, yet you are unable to get that initial answer back with the same initial query; it's basically impossible to get the same* output, because you have no control over the seed.

The random generator comparison is a bit of a stretch: there you expect the output to be different, by design. As a user, you expect the LLM to give you the correct answer, but it's actually never the same* answer.

*and here I mean "same" as in "it might be worded differently, but the meaning is close to that of the previous answer". Just like when you ask someone a question twice: they won't use the exact same wording, but will essentially say the same thing. Which is something those tools (or rather those "end-user services") do not give me. That's what I wanted to point out, in far fewer words :)