this post was submitted on 26 Mar 2025
Technology

[–] glibg@lemmy.ca 5 points 3 months ago (2 children)
[–] theterrasque@infosec.pub 2 points 3 months ago (1 children)

The quote was originally on news and journalists.

[–] DeltaWingDragon@sh.itjust.works 2 points 2 months ago

The phenomenon is called Gell-Mann amnesia

[–] LovableSidekick@lemmy.world 1 points 3 months ago* (last edited 3 months ago) (2 children)

Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let's not think about that either. AI Bad!

[–] starman2112@sh.itjust.works 3 points 3 months ago

This is a salient point that's well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It's super easy to call out a bad research study and have it retracted. But you can't just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they're synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

[–] Shanmugha@lemmy.world 2 points 3 months ago* (last edited 3 months ago) (1 children)

I'll bite. Let's think:

  • there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it

  • now there is an LLM (fuck capitalization, I hate how much they are shoved everywhere) trained on their output

  • now the LLM is asked about the topic and computes an answer string

By definition that answer string can contain all the probably-wrong things without the proper indicators ("might", "under such and such circumstances", etc.)

If you want to say a 40%-wrong LLM means 40%-wrong sources, prove me wrong.
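A toy sketch of that argument, with entirely made-up numbers: the 98% accuracy rate comes from the comment above, and the assumption that sources hedge exactly their shaky claims is illustrative, not measured.

```python
import random

random.seed(0)

# Toy model of the argument: sources are 98% right and hedge
# ("might", "possibly") on exactly the claims they are unsure of.
# A model trained only on surface text can emit the same claims
# with the hedges stripped away.
claims = []
for _ in range(10_000):
    correct = random.random() < 0.98
    hedged = not correct  # sources flag exactly their shaky claims
    claims.append((correct, hedged))

# Reader of the original sources: hedged claims are taken as unsure,
# so no wrong claim is read as confident.
misled_by_sources = sum(1 for c, h in claims if not c and not h)

# Reader of model output with hedges dropped: every wrong claim
# now reads as a confident statement.
misled_by_model = sum(1 for c, h in claims if not c)

print(misled_by_sources, misled_by_model)
```

The error *rate* is unchanged, but the uncertainty signal is gone, which is the point being made above.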

[–] LovableSidekick@lemmy.world 1 points 3 months ago (1 children)

It's more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data this seems futile, but if that's how you want to spend your time, hey knock yourself out.

[–] KingThrillgore@lemmy.ml 3 points 3 months ago (1 children)
[–] jade52@lemmy.ca 2 points 3 months ago (2 children)

What the fuck is vibe coding... Whatever it is I hate it already.

[–] NostraDavid@programming.dev 2 points 3 months ago

Andrej Karpathy (a founding member of OpenAI who left, worked for Tesla from 2017 to 2022, returned to OpenAI for a stint, and is now working on his startup "Eureka Labs - we are building a new kind of school that is AI native") made a tweet defining the term:

There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

People ignore the "It's not too bad for throwaway weekend projects" part and try to use this style of coding to create "production-grade" code... Let's just say it's not going well.

source (xcancel link)

[–] Cgers@lemmy.dbzer0.com 2 points 3 months ago

Using AI to hack together code without truly understanding what you're doing.

[–] N0body@lemmy.dbzer0.com 1 points 3 months ago (3 children)

people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

Preying on the vulnerable is a feature, not a bug.

[–] NostraDavid@programming.dev 0 points 3 months ago (1 children)

That was clear from GPT-3, day 1.

I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed away not long before. She used it as a way to grieve, I suppose. She eventually noticed she was getting too attached to it, and had to leave him behind a second time...

[–] trotfox@lemmy.world 1 points 3 months ago

Ugh, that hit me hard. Poor lady. I hope it helped in some way.

[–] Tylerdurdon@lemmy.world 0 points 3 months ago (1 children)

I kind of see it more as a sign of utter desperation on the human's part. They lack connection with others to such a degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow's experiments with baby monkeys. The videos from that study are interesting, but they make me feel pretty bad about what we do to nature. Anywho, there you have it.

[–] graphene@lemm.ee 0 points 3 months ago (1 children)

And the number of connections and friends the average person has has been in free fall for decades...

[–] trotfox@lemmy.world 1 points 3 months ago (1 children)

I dunno. I connected with more people on reddit and Twitter than irl tbh.

Different connection but real and valid nonetheless.

I'm thinking of places like r/stopdrinking, petioles, bipolar; shit's been therapy for me tbh.

[–] in4apenny@lemmy.dbzer0.com 1 points 3 months ago

At least you're not using chatgpt to figure out the best way to talk to people, like my brother in finance tech does now.

[–] Deceptichum@quokk.au 0 points 3 months ago* (last edited 3 months ago) (1 children)

These same people would be dating a body pillow or trying to marry a video game character.

The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

[–] morrowind@lemmy.ml 0 points 3 months ago (1 children)

You labeling all lonely people losers is part of the problem

[–] BradleyUffner@lemmy.world 0 points 3 months ago (5 children)

If you are dating a body pillow, I think that's a pretty good sign that you have taken a wrong turn in life.

[–] LovableSidekick@lemmy.world 1 points 3 months ago* (last edited 3 months ago) (1 children)

TIL becoming dependent on a tool you frequently use is "something bizarre" - not the ordinary, unsurprising result you would expect with common sense.

[–] emeralddawn45@discuss.tchncs.de 0 points 3 months ago

If you actually read the article, I'm pretty sure the bizarre thing is these people using a 'tool', forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a 'friend'.

[–] gamer@lemm.ee 1 points 3 months ago

That is peak clickbait, bravo.

[–] flamingo_pinyata@sopuli.xyz 1 points 3 months ago (1 children)

But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

But then there's people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

[–] Kolanaki@pawb.social 1 points 3 months ago

If you're also dumb, chatgpt seems like a super genius.

[–] cupcakezealot@lemmy.blahaj.zone 0 points 3 months ago (1 children)

chatbots and ai are just dumber 1990s search engines.

[–] mycelium_underground@lemmy.world 1 points 3 months ago

I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed then, but I'm pretty sure I can get better answers from the LLMs tied to Kagi search.

AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse, if you use a "search" engine like Google now).

Don't be the product.

[–] PieMePlenty@lemmy.world 0 points 3 months ago (1 children)

It's too bad that some people seem not to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI; now we know better.
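The "word prediction" point can be made concrete with a toy bigram model, a deliberately tiny stand-in for a real LLM (which conditions on vastly more context, but trains on the same next-token objective); the training sentence here is made up:

```python
from collections import Counter, defaultdict

# Minimal bigram "language model": predict the word that most often
# followed the previous word in the training text. No notion of
# truth, only of which continuation was most frequent.
text = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Most common continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict("the"))
print(predict("on"))
```

Scaled up by many orders of magnitude and given much longer context, that is still the training objective: fit the statistics of the text, not the facts behind it.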

[–] Lifter@discuss.tchncs.de 1 points 3 months ago

LLM is a subset of ML, which is a subset of AI.

[–] El_Azulito@lemmy.world 0 points 3 months ago (1 children)

I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. ...I am OK though. Yeah.

[–] aceshigh@lemmy.world 1 points 3 months ago

At the store it calculated which peanuts were cheaper: 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.
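That comparison is just unit-price arithmetic; a sketch with hypothetical prices (the comment doesn't give the actual numbers, so both price figures below are assumptions):

```python
# Hypothetical prices: 3 lb of shelled peanuts on sale versus
# 1 lb of no-shell peanuts at full price.
sale_price, sale_lbs = 7.50, 3    # assumed sale deal
full_price, full_lbs = 3.00, 1    # assumed full-price bag

# Normalize both to price per pound and compare.
per_lb_sale = sale_price / sale_lbs
per_lb_full = full_price / full_lbs

cheaper = "sale" if per_lb_sale < per_lb_full else "full price"
print(cheaper)
```

With these assumed numbers the sale bag wins at $2.50/lb versus $3.00/lb; the general method (divide by quantity, compare per-unit prices) is the part a calculator, or a chatbot, is doing.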

[–] HappinessPill@lemmy.ml 0 points 3 months ago* (last edited 3 months ago) (2 children)

Do you guys remember when the internet was the new thing and everybody was like: "Look at those dumb fucks just putting everything online", and now it's: "Look at this weird motherfucker who doesn't post anything online"?

[–] TheBat@lemmy.world 1 points 3 months ago

I remember when internet was a place

[–] NikkiDimes@lemmy.world 1 points 3 months ago

Remember when people used to say and believe "Don't believe everything you read on the internet?"

I miss those days.
