this post was submitted on 17 Jun 2024
516 points (99.2% liked)

ours@lemmy.world 5 points 5 months ago

There is no certainty that LLMs can overcome the limitations they are currently running up against.

I think further developments in AI will come, but there is no guarantee they will. LLMs seem to be hitting the Pareto principle, just like the ML models behind self-driving cars did, and that despite huge investments.

jas0n@lemmy.world 3 points 5 months ago

100% this. The base algorithms used in LLMs have been around for at least 15 years, and what we have now is only slightly different from what existed then. The latest advancement was training a model on stupid amounts of data scraped off the Internet, and it took all of that data to produce something that gives half-decent results. There isn't much juice left to squeeze here, yet so many people are assuming exponential growth and "just wait until the AI trains other AI."

It's really like 10% new tech and 90% hype/marketing. The worst part is that it has so many people fooled that you hear these takes repeated by respectable journalists interviewing "tech" journalists, which just perpetuates the hype. And now your boss/manager is buying in =]

ours@lemmy.world 2 points 5 months ago

Breakthroughs are fascinating, and they are the reason predicting the future of tech is so hard. Text embeddings and "Internet-scale" training are likely what enabled this AI boom and its impressive initial results.
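For anyone unfamiliar with the term: a text embedding just maps a piece of text to a vector so that semantically similar texts end up close together. Here's a minimal sketch, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model (my choice for illustration, not something from this thread):

```python
# Minimal illustration of text embeddings: semantically similar sentences
# map to nearby vectors. The model choice is an assumption for this sketch.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "LLMs are hitting diminishing returns.",
    "Large language models show diminishing returns.",
    "My cat likes to sleep on the windowsill.",
]
embeddings = model.encode(sentences)  # one vector per sentence

def cosine(a, b):
    """Cosine similarity: near 1.0 means similar meaning, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings[0], embeddings[1]))  # high: same meaning, different wording
print(cosine(embeddings[0], embeddings[2]))  # low: unrelated topics
```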

I think many people see AI (and other tech) moving linearly forward from the current point, but any software developer knows that is rarely the case. And no one can predict the next breakthrough.

It doesn't help that there is so much hype and confusion around ML, LLMs, and AGI. And because LLMs seem intelligent on the surface, people misunderstand their capabilities (much like with politicians). They certainly have fantastic uses as they are right now, but a lot of people are overly optimistic (or pessimistic, depending on your point of view) about our new "AI overlords".

Personally, I find LLMs absolutely amazing at supporting my professional writing. I don't let them do my work, but they help me play around with phrasing and find better ways to express things, as if I had a sparring writing partner.
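For the curious, that "sparring partner" workflow can be as simple as a small script that asks a model for alternative phrasings without letting it write the piece. A minimal sketch, assuming the openai Python client and the gpt-4o-mini model name (both my assumptions, not details from the comment):

```python
# Sketch of a writing "sparring partner": ask an LLM for alternative phrasings
# of a paragraph you already wrote, rather than having it write for you.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# the model name below is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

def suggest_rewrites(paragraph: str, n_variants: int = 3) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are a writing sparring partner. Offer "
                           f"{n_variants} alternative phrasings of the user's "
                           "paragraph and briefly note the trade-offs. "
                           "Do not add new claims.",
            },
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content

print(suggest_rewrites(
    "There is no certainty that LLMs can overcome their current limitations."
))
```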