this post was submitted on 30 Jul 2025
261 points (95.2% liked)
Technology
you are viewing a single comment's thread
The thing is, he is referencing a specific model that recently demonstrated (to its developers, at least) the ability to self-improve without direct human input. But obviously there are caveats to what that actually means.
That he is referencing something specific and recent, though, makes me think he's being genuine here; he believes what he is saying.
Obviously, he's almost certainly jumping the gun. He's demonstrated a lack of critical thinking when it comes to new technological developments. See: all the money he dumped into VR and the Metaverse. (I say this as much as I personally like VR; it's not exactly a money maker.)
I see some commenters here pointing out that he is a coder, but he probably hasn't coded anything in more than a decade at this point. He is fully immersed in the Silicon Valley Kool-Aid; he just seems to voice more public-facing optimism about AI than other CEOs do.
A jabber bot that slightly improved (through pre-training) its words per minute when responding, but is still just mindlessly jabbering/hallucinating, is just as dumb as it was before. I mean, random chance is a not-insignificant factor with LLMs... But also, it's a pretty big assumption these days that "scientific" papers mean anything. There have already been so many fraudulent LLM papers, like LLMs "teaching themselves other languages" or LLMs "showing ability to reason," when all of that was just a product of the training data.