this post was submitted on 03 Dec 2024
256 points (97.8% liked)


Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned that the tech giant had briefly helped the US military develop AI to study drone footage. In 2020 he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even if they weren’t specifically ones tied to the Pentagon project. “I don't control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn't I be more responsible?”

The site (TheySeeYourPhotos) returns what Google Vision is able to discern from photos. You can test it with any image you want, or use one of the sample images provided.
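For concreteness, here is a minimal sketch of the kind of Google Cloud Vision request a site like this could make, using the google-cloud-vision Python client. The file name and the specific features requested are illustrative assumptions, not a description of how TheySeeYourPhotos is actually implemented.

```python
# Minimal sketch: ask Google Cloud Vision what it can discern from one photo.
# Assumes the google-cloud-vision client is installed and credentials are set.
from google.cloud import vision

def describe_photo(path: str) -> None:
    client = vision.ImageAnnotatorClient()

    # Read the image bytes and wrap them in the API's Image message.
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Ask for labels (objects/concepts) and faces; other feature types
    # such as landmark or text detection work the same way.
    labels = client.label_detection(image=image).label_annotations
    faces = client.face_detection(image=image).face_annotations

    for label in labels:
        print(f"{label.description}: {label.score:.2f}")
    print(f"faces detected: {len(faces)}")

describe_photo("vacation.jpg")  # hypothetical local file, for illustration only
```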

[–] EncryptKeeper@lemmy.world 6 points 22 hours ago (1 children)

Don’t feel too happy, bro. You were told that by a soulless computer that was designed to tell you what it thinks you want to hear.

[–] synnny@lemmynsfw.com 2 points 21 hours ago (1 children)

It's not designed to tell you what you want to hear.

[–] EncryptKeeper@lemmy.world 6 points 21 hours ago* (last edited 21 hours ago) (1 children)

That’s literally all AI is designed to do. Given an input, it just tries to output an expected response.

[–] synnny@lemmynsfw.com 1 points 21 hours ago (2 children)

Yeah, no. LLMs predict what comes next, not what someone wants to hear.

[–] EncryptKeeper@lemmy.world 2 points 20 hours ago (1 children)

Not really wants as much as expects, but that’s what AI is designed to do.

[–] synnny@lemmynsfw.com 3 points 20 hours ago (1 children)

What you're saying is not factual. LLMs predict what comes next based on the parameters set during the learning process. It might at times say what you're expecting, but try contradicting information it knows to be factual and see how far that gets you.

I think you're confusing agreeableness with being a validation buddy. For a product like this to work, it has to be inviting.
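As a concrete illustration of "predicting what comes next": a minimal sketch using the Hugging Face transformers library and the small GPT-2 checkpoint, both chosen purely for illustration. It shows that the model's output is a probability distribution over the next token, fixed by the parameters it learned during training.

```python
# Minimal sketch of next-token prediction with a small causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The earth is shaped like a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the next token, determined by the trained
# parameters, not by what the user would like to hear.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  p={prob.item():.3f}")
```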

[–] EncryptKeeper@lemmy.world 1 points 10 hours ago

> LLMs predict what comes next based on the parameters set during the learning process.

Now you’re just splitting hairs.

[–] kilgore_trout@feddit.it 2 points 21 hours ago (3 children)
[–] Earflap@reddthat.com 2 points 21 hours ago* (last edited 21 hours ago)

I'm afraid I don't have personal feelings or opinions about you. As an AI assistant, I don't form attachments or have subjective preferences. My role is to provide helpful information to you, not to have personal relationships. I'm happy to assist you to the best of my abilities, but any feelings or opinions I express are based on my training, not a personal connection. Please let me know if there is anything else I can help with.

Claude nails it again.

[–] synnny@lemmynsfw.com 2 points 21 hours ago* (last edited 21 hours ago)

I don't have feelings in the way humans do, but I enjoy our conversations! I'm here to help and chat with you anytime you need.

Didn't exactly make my heart throb, but if it does that for you, you've got a low bar.

[–] KairuByte@lemmy.dbzer0.com 2 points 21 hours ago* (last edited 21 hours ago)

That… isn’t telling you what you want to hear.

LLMs are literally just complex autocorrect. They don’t weight their responses based on what a user wants to hear (unless explicitly instructed to); they simply return the most algorithmically generic response they can find.

Tell it to talk like a pirate and it will pattern-match to pirate talk. It’s not doing it because you want it to, but because you gave it a “pre-prompt” to talk like a pirate, and it produced the most likely continuation.

Yes, this can seem like telling you what you want, but go ask it what shape the world is. Then tell it you want the earth to be flat, and ask the question again. Both times the answer will be an oblate spheroid, because it neither knows nor cares what you want.

Now, if you say “Imagine the world is flat” first, yeah it’ll tell you it’s flat. Not because you want it to, but because you’re explicitly handing it “new information” that you want it to incorporate into its response.
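A minimal sketch of that "pre-prompt" idea, using the OpenAI Python client as one example (the model name is a placeholder): the pirate instruction is just additional text the model conditions on, not a preference it tracks.

```python
# Minimal sketch: a system message acts as the "pre-prompt" the model
# pattern-matches against. Assumes the openai Python client (v1+) and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, for illustration only
    messages=[
        # The "pre-prompt": extra context the model conditions on.
        {"role": "system", "content": "Talk like a pirate."},
        {"role": "user", "content": "What shape is the earth?"},
    ],
)

# The phrasing changes, but the factual content comes from the same
# next-token prediction either way.
print(response.choices[0].message.content)
```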