this post was submitted on 23 Mar 2026
61 points (72.3% liked)

Technology

83027 readers
3486 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 50 comments
[–] rizzothesmall@sh.itjust.works 7 points 1 hour ago

Literally the story above this in my feed is OpenAI shutting down expensive services 😂

You goofy goobers

That man is a verbal slut. He will say anything.

[–] NotMyOldRedditName@lemmy.world 3 points 3 hours ago

How many R's are in strawberry?
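
(The famous failure mode: because LLMs operate on tokens rather than individual characters, early models routinely miscounted letters. A one-liner settles it:)

```python
# Counting characters is trivial in plain code; models that see only
# tokens have historically fumbled exactly this kind of question.
word = "strawberry"
print(word.lower().count("r"))  # 3
```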

[–] SoloCritical@lemmy.world 13 points 5 hours ago

No.. you haven’t.

[–] Formfiller@lemmy.world 2 points 4 hours ago
[–] duncan_bayne@lemmy.world 2 points 5 hours ago

I'll believe him when he tears off his skin suit.

[–] CeeBee_Eh@lemmy.world 11 points 8 hours ago

This guy has completely lost the plot. I don't think it's possible to be even more disconnected from reality.

[–] ThunderComplex@lemmy.today 12 points 10 hours ago

>You think you've achieved AGI
>I know you haven't

We are not the same

[–] kewjo@lemmy.world 12 points 11 hours ago (1 children)

if agi then why still jobs?

[–] VindictiveJudge@lemmy.world 10 points 6 hours ago

Fun fact: if true AGI were a thing, those AI programs would be people and not paying them for their work would be slavery.

[–] IchNichtenLichten@lemmy.wtf 21 points 13 hours ago (1 children)

If I were an NVDA investor, I'd be worried. This clown is doing nothing but gaslighting and lying these days.

[–] cheat700000007@lemmy.world 4 points 13 hours ago

But you're wrong, you're all wrong!

[–] andallthat@lemmy.world 11 points 12 hours ago

"my chatbot told me so!"

[–] Frenchgeek@lemmy.ml 15 points 14 hours ago

Started lying at the second word, then.

[–] entropiclyclaude@lemmy.wtf 16 points 16 hours ago (2 children)

These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.

[–] awake@lemmy.wtf 1 points 9 hours ago

Looking at their history, they've always been able to create markets for their GPUs, and AI has obviously been incredible for them. There will be the next hot thing after AI and they'll try to own that, too. The alternatives to CUDA are not there yet; ROCm is still lacking and fiddly. I see a lot of things happening, but NVIDIA collapsing for whatever reason is not part of that.

[–] fierysparrow89@lemmy.world 1 points 14 hours ago

I agree; they're starting to sound desperate to keep their current momentum going. I think the bubble will burst soon. Things look solid until they're not.

[–] MonkderVierte@lemmy.zip 18 points 22 hours ago* (last edited 22 hours ago) (1 children)

The Turing thing again: how good is a system at mimicking a human? Like, lots of dog owners would swear the dog is smarter than a cat. But dogs are only better at reading their human.

I'll believe him if he lets the LLM do his job.

[–] wewbull@feddit.uk 12 points 20 hours ago

Cats may be able to read their human just as well or better, but as they don't give a shit, there's no feedback to base anything on.

[–] PushButton@lemmy.world 9 points 20 hours ago

How can we take this idiot seriously? First slop DLSS, then telling us we're wrong about it (the guy telling me what I prefer), and now we've achieved AGI...

How low can he fall?

Oh yes, we have achieved AGI! But what we really need is Artificial General Super Intelligence! Just another trillion and it will be useful, bro!

[–] Zozano@aussie.zone 61 points 1 day ago (3 children)

LLMs aren't AI, let alone AGI.

They're fucking prediction engines with extra functions.

[–] Onihikage@piefed.social 28 points 1 day ago

The best description I've ever heard of LLMs is "a blurry jpeg of the internet". From the perspective of data compression and retrieval, they're impressive... but they're still a blurry jpeg. The image doesn't change, you can only zoom in on different parts of it and apply extra filters, and there's nothing you can truly do about the compression artifacts (what we call "hallucinations"). It can't think, it can't learn, it just is, and that's all it will ever be.

[–] unnamed1@feddit.org 2 points 23 hours ago (1 children)

So are we. Your definition of AI also seems off. It's a field of computer science dealing with seemingly cognitive algorithms, basically everything that is not rule-based programming. I've worked in AI production for over ten years. It is absolutely valid and necessary to hate AI, but not to deny its technical functionality. Also, regarding the other answer to your comment: of course training a neural network is a form of learning, whether by reinforcement or by training data. ML had many applications for years before LLMs; it makes no sense to deny that it exists.

[–] BigJohnnyHines@lemmy.ca 1 points 13 hours ago (1 children)

What’s your psychology background?

[–] unnamed1@feddit.org 1 points 9 hours ago

I get that you’re trolling but I don’t understand where you’re coming from. Why psychology?

[–] Kolanaki@pawb.social 24 points 1 day ago

Average Gaslighting Idiot.

AKA "a CEO."

[–] Peruvian_Skies@sh.itjust.works 105 points 1 day ago

Sure you do. It's not at all a transparent attempt to prolong the bubble.

[–] mrmaplebar@fedia.io 35 points 1 day ago (1 children)

I think you're a bullshitting con artist.

[–] inari@piefed.zip 9 points 23 hours ago

Grifter gonna grift

[–] Technus@lemmy.zip 71 points 1 day ago (10 children)

I only have a rather high level understanding of current AI models, but I don't see any way for the current generation of LLMs to actually be intelligent or conscious.

They're entirely stateless, once-through models: any activity in the model that could be remotely considered "thought" is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.

That's why it's so stupid to ask an LLM "what were you thinking", because even it doesn't know! All it's going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
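
(The once-through loop described above can be sketched in a few lines. `next_token_logits` here is a hypothetical stand-in for a real model's forward pass; the point is that nothing survives between iterations except the token list itself.)

```python
# Minimal sketch of stateless autoregressive generation.
# `next_token_logits(tokens)` is a hypothetical stand-in for a model:
# it takes the full context and returns a score per vocabulary token.

def generate(next_token_logits, prompt_tokens, max_new=16, eos=0):
    tokens = list(prompt_tokens)  # the context window is the ONLY state
    for _ in range(max_new):
        logits = next_token_logits(tokens)  # fresh forward pass every step
        tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
        if tok == eos:
            break
        tokens.append(tok)  # only the chosen token survives; all internal
                            # activations from this step are discarded
    return tokens
```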

[–] Modern_medicine_isnt@lemmy.world 0 points 2 hours ago (1 children)

I agree, but not because of lost state. As mentioned by others, state can be managed. You could also just do a feedback loop. These improve things, but don't solve them. The issue is that it doesn't understand. You mention that it is just a word predictor, and that is the heart of it. It predicts based on odds, more or less, not on understanding. That said, it has room to improve. I think having lots and lots of highly specialized agents is probably the key: the narrower the focus, the closer prediction comes to fact. Then throw in asking 5 versions of the agent the same question and tossing the outliers, and you should get something pretty useful. Not AGI, but useful. The issue is that with current technology, that is simply too expensive. So a breakthrough in the cost of current AI is needed first; then we can get more useful AI. AGI will be a significantly different technology.
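
(The "ask 5 copies and toss the outliers" idea is essentially a majority vote over repeated samples. A minimal sketch, where `ask` is a hypothetical callable standing in for one query to a specialized agent:)

```python
from collections import Counter

def majority_answer(ask, n=5):
    """Query the agent n times and return the most common answer
    along with the fraction of samples that agreed with it."""
    answers = [ask() for _ in range(n)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / n  # winning answer + agreement rate
```

A low agreement rate is a cheap signal that the answer is unreliable, though as the comment notes, the cost scales linearly with n.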

[–] Technus@lemmy.zip 1 points 51 minutes ago

The conversion of the output to tokens inherently loses a lot of the information extracted by the model and any intermediate state it has synthesized (what it "thinks" of the input).

Until the model is able to retain its own internal state and able to integrate new information into that state as it receives it, all it will ever be able to do is try to fill in the blanks.

[–] Almacca@aussie.zone 31 points 1 day ago* (last edited 1 day ago) (1 children)

Geez. You can almost smell the desperation on this guy.

[–] SaveTheTuaHawk@lemmy.ca 4 points 20 hours ago

Well, he wears the same leather jacket 24/7 so he can't smell good.

[–] RedFrank24@piefed.social 47 points 1 day ago (2 children)

So why do we need Jensen Huang?

[–] wewbull@feddit.uk 6 points 20 hours ago

Why do we need any of them? They've completed the job. All future plans cancelled.

[–] MrVilliam@sh.itjust.works 30 points 1 day ago (1 children)

Exactly. CEO is maybe the easiest job for an AI to take over, so an AGI is possibly the most perfect candidate for that role.

Put up or shut up, tech bro CEOs. Replace yourself if it's so fucking amazing.

[–] kkj@lemmy.dbzer0.com 4 points 1 day ago (1 children)
[–] MrVilliam@sh.itjust.works 4 points 23 hours ago

Just replacing one eco horror with another.

[–] GottaHaveFaith@fedia.io 16 points 1 day ago

I just dropped an AGI down the toilet AMA

[–] meme_historian@lemmy.dbzer0.com 39 points 1 day ago* (last edited 1 day ago)

Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”

So we've achieved AGI in the sense that it could replace a nonsensical fart-sniffing clown, hyping a horde of morons into valuating a company at orders of magnitude its actual worth?

[–] Dindonmasker@sh.itjust.works 16 points 1 day ago

Guys i think i just found AGI in my gramp's old stuff.

[–] acosmichippo@lemmy.world 15 points 1 day ago

fart sniffer
