Ropianos@feddit.de 16 points 9 months ago

There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?

Image generation, video generation, self-driving cars (Level 4, so the driver doesn't need to pay attention at all times), and capable text comprehension and generation, whether it's used for translation, help with writing reports, or coding. And to top it all off, we have open source models that are at least in the same ballpark as the closed ones, and those models can be run on consumer hardware.
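
To ground that last point, here's a minimal sketch of what running an open-weight model on consumer hardware can look like, assuming the Hugging Face `transformers` library; the model name is just an illustrative placeholder, not one endorsed in this thread:

```python
# Minimal sketch: load an open-weight chat model locally with Hugging Face transformers.
# The model name below is an illustrative placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumption: any open-weight model on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")  # needs `accelerate`

prompt = "Explain in one sentence why open-weight models matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

A 7B-class model like this fits on a single consumer GPU (or CPU, slowly), which is the whole point: you don't need a datacentre to experiment with this stuff.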

Obviously AI is not a solved problem yet, and there are lots of shortcomings (especially with LLMs and logic, where they completely fail on even simple problems), but the progress is astonishing.

AnarchistArtificer@slrpnk.net 3 points 9 months ago

I think a big obstacle to meaningfully using AI is going to be public perception. Understanding the difference between ChatGPT and open source models means that people like us will probably continue to find ways of using AI as it improves, but what I keep seeing is botched applications, where neither the consumers nor the investors pushing AI really understand what it is or what it's useful for. It's like trying to dig a grave with a fork: people will throw the fork away and say it's useless, not realising that that's not how it's meant to be used.

I'm concerned about how the hype is playing out, because I wouldn't be surprised if people got so sick of hearing about AI at all, let alone broken AI nonsense, that it hastens the next AI winter. I worry that legitimate development may be held back by all the nonsense.

FaceDeer@kbin.social 2 points 9 months ago

I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.

AnarchistArtificer@slrpnk.net 1 point 9 months ago

I've seen a weird aspect of it from the science side, where people writing grant applications or papers feel compelled to incorporate AI, because even if they know their sub-field has no reliable use cases for AI yet, they feel the pressure of the hype.

Specifically, when I say the pressure of the hype, I mean that some of the best scientists I've known were pretty bad at the academic schmoozing that brings in better funding and more prestige. In practice, businessmen in boardrooms are often the ones holding the purse strings, and sometimes it's easier to speak their language than to "translate" one's research into something they'll understand.

lolcatnip@reddthat.com 0 points 9 months ago

Businessmen are just the public but with money.
