this post was submitted on 07 Apr 2024
339 points (93.1% liked)

[–] rottingleaf@lemmy.zip 2 points 7 months ago (1 children)

What does it really threaten?

It works in contact centers, where bots answer short, simple questions so that agents' time is used more efficiently. I'm not sure it saves that much money, TBF.

It works for image classification. And still needs checking.

It works for OCR. And still needs checking.

It works for voice recognition and transcription, which is actually cool. Still needs checking.

but they’re a big step towards AGI

What makes you think that? Was the Mechanical Turk a big step towards thinking robots?

They are very good at pretending to be that big step for people who don't know how they work.

[–] Drewelite@lemmynsfw.com 1 points 7 months ago* (last edited 7 months ago) (1 children)

You're right that it doesn't save much money by making people more efficient. That's why they will replace employees instead. That's the threat.

Yes, they make mistakes. So do people. They just have to make fewer than an employee does, and we're on track for that. AI will always make mistakes, and accepting that is actually a step in the right direction. Deterministic systems that rely on concrete input and perfectly crafted statistical models can't work in the real world. Once the system being evaluated (as with most real-world systems) is sufficiently complex, you encounter unknown situations where you'd have to spend infinite time and energy gathering information and computing... or guess.
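The trade-off above can be sketched in a few lines: a system acting on a probabilistic model commits to its best guess when confident, and escalates to a human when it isn't, rather than trying to gather exhaustive information first. This is a minimal illustration, not anyone's actual implementation; all names and the threshold value are made up.

```python
def decide(scores: dict[str, float], threshold: float = 0.7) -> str:
    """Pick the highest-scoring action, or hand off to a human below threshold.

    `scores` maps candidate actions to model confidence (0..1).
    The threshold is an illustrative tuning knob, not a real-world value.
    """
    best_action, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_action if best_score >= threshold else "escalate_to_human"
```

The point is that "guessing" is a deliberate, bounded policy: you pay for certainty only when the model's confidence says it's needed.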

Our company is small, and our customer inquiries increased severalfold when our product expanded. We were panicking, thinking we needed to hire and train a whole customer support department overnight, where we currently have one person. Instead, we implemented AI representatives. Our feedback actually became more positive, because these agents can connect with you instantly, pull nebulous requests out of confusing messages, and alert the appropriate employee when action is needed. Does it make mistakes? Sure, but not often enough to matter, and it's simple for our customer service person to reach out and correct them.
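The flow described above (read a free-form message, infer what the customer wants, alert the right employee) can be sketched roughly like this. This is a hypothetical illustration, not the poster's actual system; the keyword matcher stands in for the LLM call, and all routes and addresses are invented.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    message: str   # raw customer message
    intent: str    # what the model inferred the customer wants
    assignee: str  # employee alerted to follow up

# Illustrative routing table: which inbox handles which inferred intent.
ROUTES = {
    "billing": "finance@example.com",
    "bug_report": "engineering@example.com",
    "other": "support@example.com",  # human fallback for unclear requests
}

def infer_intent(message: str) -> str:
    """Stand-in for an LLM classification call (simple keyword heuristic)."""
    text = message.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "bug_report"
    return "other"

def triage(message: str) -> Ticket:
    """Turn a nebulous customer message into a routed ticket."""
    intent = infer_intent(message)
    return Ticket(message=message, intent=intent, assignee=ROUTES[intent])
```

Note the design choice implied by the comment: anything the model can't confidently classify falls through to the human ("other"), which is why the occasional mistake stays cheap to correct.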

I think people who say this isn't a big deal for AGI don't understand how the human mind works. I find it funny when they try to articulate why LLMs are just a trick: "It's not really creating anything, it's just pulling a bunch of relevant material from its training data and using it as a basis for a similar output." And... what is it you think you do?

[–] rottingleaf@lemmy.zip 1 points 7 months ago (1 children)

And… What is it you think you do?

Unlike an LLM, I rebuild myself, for example.

[–] Drewelite@lemmynsfw.com 1 points 7 months ago* (last edited 7 months ago)

It's trivial to copy an LLM, but if you mean self improvement: https://arxiv.org/abs/2401.10020