this post was submitted on 04 Sep 2025
143 points (96.7% liked)

Technology

cross-posted from: https://programming.dev/post/36866515

Comments

[–] Gbagginsthe3rd@aussie.zone 3 points 2 hours ago

Lemmy does not accept having a nuanced point of view on AI. Yeah, it's not perfect, but it's still pretty impressive in many ways.

[–] nutsack@lemmy.dbzer0.com 11 points 9 hours ago (1 children)

then some people are going to lose money

[–] sugar_in_your_tea@sh.itjust.works 3 points 9 hours ago (1 children)

Unfortunately, me included, since my retirement money is heavily invested in US stocks.

[–] Modern_medicine_isnt@lemmy.world 2 points 8 hours ago (1 children)

Meh, they come back up over time. Long term, the US stock market has only gone up.

Yup, I'm not worried, just noting that I'll be among those who will lose money.

[–] Corelli_III@midwest.social 18 points 14 hours ago (2 children)

"what if the obviously make-believe genie wasn't real"

capitalists are so fucking stupid, they're just so deeply deeply fucking stupid

[–] douglasg14b@lemmy.world 1 points 3 hours ago

I mean sure, yeah, it's not real now.

Does that mean it will never be real? No, absolutely not. It's not theoretically impossible. It's quite practically possible, and we inch that way slowly, bit by bit, every year.

It's like saying self-driving cars were impossible back in the '90s. They weren't impossible; we just didn't have a solution for them then. There was nothing about them that made them impossible, just the technology of the day. And look at today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.

It's definitely going to happen. It's just not happening right now.

[–] JcbAzPx@lemmy.world 6 points 11 hours ago

Reality doesn't matter as long as line goes up.

[–] oyo@lemmy.zip 31 points 20 hours ago (3 children)

We'll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this, but they can't tell the investors this.

[–] Saledovil@sh.itjust.works 3 points 15 hours ago (1 children)

What if we're not smart enough to build something like that?

[–] scratchee@feddit.uk 7 points 14 hours ago (2 children)

Possible, but seems unlikely.

Evolution managed it, and evolution isn't as smart as us; it just got many, many chances to guess right.

If we can't figure it out directly, we can find a way to get lucky like evolution did. It'll be expensive, and it may need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly); a toy sketch of that kind of blind search is below.

So yeah. My money is that we’ll figure it out sooner or later.

Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.
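As a toy illustration of that blind-search idea (every name and number here is invented for illustration, nothing from a real system):

```python
import random

def evolve(fitness, genome_len=8, pop_size=100, generations=500):
    """Blind search: random guesses, keep what scores best, mutate, repeat."""
    population = [[random.uniform(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # selection: best scorers first
        survivors = population[:pop_size // 10]      # keep the top 10%
        population = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                      for _ in range(pop_size)]      # mutated copies of survivors
    return max(population, key=fitness)

# Usage: "evolve" a genome toward every value being 0.5, with no insight at all.
best = evolve(lambda genome: -sum(abs(g - 0.5) for g in genome))
```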

[–] pulsewidth@lemmy.world 1 points 2 hours ago

Yeah, and it only took evolution (checks notes) 4 billion years to go from nothing to a brain comparable to a human's.

I'm not so sure there will be a fast return in any economic timescale on the money investors are currently shovelling into AI.

We have maybe 500 years (tops) to see if we're smart enough to avoid causing our own extinction through climate change and biodiversity collapse, so I don't think it's anywhere near as clear-cut.

Oh jeez, please don't say "cheap brain-scale computers" next to "AGI" like that. There are capitalists everywhere.

[–] ghen@sh.itjust.works 7 points 20 hours ago (1 children)

Once we get to AGI, it'll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.

[–] Buddahriffic@lemmy.world 12 points 16 hours ago (3 children)

Calling the errors "hallucinations" is kinda misleading, because it implies there's otherwise real knowledge that false stuff gets mixed into. That's not how LLMs work.

LLMs are purely about associations between words. The models are just massive enough that they can add a lot of context to those associations and seem conversational about almost any topic, but there's no depth to any of it. Where they seem to have depth, it's only because the contexts in their training data got very specific, which is bound to happen when they're trained on every online conversation their owners (or rather, people hired by people hired by their owners) could get their hands on.

All a model does is this: given the tokens provided and those already predicted, plus a bit of randomness, pick the most likely token to come next, then repeat until it predicts an "end" token (a toy sketch is at the end of this comment).

Earlier on when using LLMs, I'd ask it about how it did things or why it would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn't do. Its capabilities don't actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn't even have to reflect how it really works.
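The predict-sample-repeat loop mentioned above, as a minimal sketch (`model.next_token_probs` is a made-up stand-in, not any real library's API):

```python
import random

def generate(model, prompt_tokens, max_len=200):
    """Autoregressive generation: predict a distribution, sample, append, repeat."""
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        # The model scores every token in its vocabulary, conditioned on
        # everything in the context so far (assumed to return {token: probability}).
        probs = model.next_token_probs(tokens)
        # "A bit of randomness": sample from the distribution instead of
        # always taking the single most likely token.
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens
```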

[–] JeremyHuntQW12@lemmy.world 2 points 8 hours ago

No that's only a tiny part of what LLMs do.

When you enter a sentence, it first parses the sentence into vectors, then ranks those vectors, then matches them against its database, then reconstructs a sentence from the information it has obtained.

Unlike most software we’re familiar with, LLMs are probabilistic in nature. This means the link between the dataset and the model is broken and unstable. This instability is the source of generative AI’s power, but it also consigns AI to never quite knowing the 100 percent truth of its thinking.

But what is truth? As Lionel Huckster would say.

Most of these so-called "hallucinations" are not errors at all. What has happened is that people have made multiple attempts and only posted the last result.

For instance, in one example Gemini suggested cutting the legs off a couch to fit it into a room. What the poster failed to reveal was that they were using Gemini to come up with solutions to problems in a text adventure game...

[–] ghen@sh.itjust.works 3 points 16 hours ago

Yeah you're right, even in my cynicism I was still too hopeful for it LOL

[–] nialv7@lemmy.world -3 points 12 hours ago* (last edited 12 hours ago) (1 children)

Well, you described pretty well what LLMs were trained to do, but from there you can't derive how they do it. Maybe they don't have real knowledge, or maybe they do. Right now literally no one can definitively claim one way or the other, not even top-of-the-field ML researchers. (They may have opinions, though.)

I think it's perfectly justified to hate AI, but it's better to have a less biased view of what it is.

[–] Buddahriffic@lemmy.world 1 points 9 hours ago (1 children)

I don't hate AI or LLMs. As much as it might mess up civilization as we know it, I'd like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay it than to realize it.

I just think a lot of people are fooled by their conversational capability into thinking they're more than what they are. And because these models are massive, with billions or trillions of weights that the data is encoded into, and no one understands how they work well enough to definitively say "this is why it suggested glue as a pizza topping", whether or not they approach AGI sits in a grey zone.

I'll agree, though, that it was maybe too much to say they don't have knowledge. "Having knowledge" is a pretty abstract, hard-to-define thing itself, and I'm also not sure it directly translates to having intelligence (which is also poorly defined, tbf). One could argue that encyclopedias have knowledge, but they don't have intelligence. And I'd argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).

[–] nialv7@lemmy.world 1 points 7 hours ago

Leaving aside the question of whether it would benefit us, what makes you think LLMs won't bring about the technological singularity? The term LLM doesn't mean that much, after all... it just means a model that is "large" (currently taken to mean having many parameters) and is capable of processing language.

Don't you think whatever does bring about the singularity will, at the very least, understand human language?

So can you clarify: what is it that you think won't become AGI? Is it transformers? Is it any model trained the way we train LLMs today?

[–] JcbAzPx@lemmy.world -1 points 11 hours ago (1 children)

Also not likely in the lifetime of anyone alive today. It's a much harder problem than most want to believe.

[–] Modern_medicine_isnt@lemmy.world 2 points 8 hours ago (1 children)

Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1000 years. There is a good chance you won't see it coming.

[–] jj4211@lemmy.world 1 points 8 hours ago

Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.

I'd expect the same of AGI, not correlated with who spent the most or who is best at LLMs. It might happen decades from now or in the next couple of months. It's a breakthrough that is just going to come out of left field when it happens.

[–] YoHoHoAndAVialOfKetamine@lemmy.dbzer0.com 5 points 14 hours ago (2 children)

Is it just me, or is social media unable to support discussions with enough nuance for this topic, like, at all?

[–] douglasg14b@lemmy.world 1 points 3 hours ago

It's not just you; people really cannot think critically anymore.

[–] Gsus4@mander.xyz 2 points 11 hours ago* (last edited 11 hours ago)

You need ground rules and objectives to reach any desired result (e.g. a court, an academic conference, a comedy club). Online discussions would have to happen under very specific constraints and reach enough interested and qualified people to produce meaningful content...

[–] abbiistabbii@lemmy.blahaj.zone 35 points 1 day ago (3 children)

Listen. AI is the biggest bubble since the South Sea one. Actually, it's not so much a bubble as a bomb. When it blows up, the best case scenario is that several AI tech companies go under. The likely scenario is that it causes a major recession or even a depression. The difference between the .com bubble and this one is that people wanted to use the internet and were not pressured, harassed, or forced to. When you have a bubble based on a technology people don't really find a use for, to the point where CEOs and tech companies have to force their workers and users to use it even when it makes their output and lives worse, that's when you know it is a massive bubble.

On top of that, I hope these tech bros do not create an AGI. This is not because I believe AGI is an existential threat to us. It could be, be it to our jobs or our lives, but I'm not worried about that. I'm worried about what these tech bros would do to a sentient, sapient, human-level intelligence with no personhood rights and no need for sleep, which they own and can kill and revive at will. We don't even treat humans we acknowledge to be people that well; god knows what we'd do to something like an AGI.

Meh, some people do want to use AI, and it does have decent use cases; it's just massively overextended. So it won't be any worse than the dot-com bubble. And I don't worry about the tech bros monopolizing it. If it is true AGI, they won't be able to contain it. In the '90s I wrote a script called MCP... after Tron. It wasn't complicated, but it was designed to handle servers disappearing... so it would find new ones. I changed jobs, and they couldn't figure out how to kill it. They had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.

[–] iAvicenna@lemmy.world 10 points 1 day ago (2 children)

Well if tech bros create and monopolize AGI, it will be worse than slavery by a large margin.

[–] vacuumflower@lemmy.sdf.org 3 points 1 day ago

It'll just make real humans more replaceable, thus make murder and slavery easier.
