this post was submitted on 27 Sep 2025
448 points (98.5% liked)

Technology

[–] justsomeguy@lemmy.world 65 points 18 hours ago (4 children)

The last 5% aren't a nice bonus. They are everything. A 95% self driving car won't do. Giving me random hallucinations when I try to look up important information won't do either even if it just happens 1 out of 20 times. That one time could really screw me so I can't trust it.
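That "1 out of 20 times" compounds quickly over repeated lookups. A quick sketch of the arithmetic (the 5% per-query rate is just the illustrative figure from the comment, not a measured number):

```python
# How a small per-query failure rate compounds over repeated queries.
# The 5% rate is purely illustrative ("1 out of 20 times" above).
def p_at_least_one_failure(n: int, per_query_rate: float = 0.05) -> float:
    """Probability of at least one failure in n independent queries."""
    return 1 - (1 - per_query_rate) ** n

# After 20 lookups, the odds that at least one answer was wrong
# are already roughly 64%:
print(round(p_at_least_one_failure(20), 3))  # -> 0.642
```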

Currently, AI companies have no idea how to get there, yet they sell the promise of it. Next year, bro. Just one more datacenter, bro.

[–] Almacca@aussie.zone 2 points 7 hours ago (1 children)

I get to ride in lots of different cars as part of my job, and some of the new ones display the current speed limit on the dash. It's incorrect quite regularly. My view is that if you can't trust it 100% of the time, you can't trust it at all and might as well turn it off. I feel the same about AI.

[–] prex@aussie.zone 2 points 5 hours ago (1 children)

The ADAS in new cars varies so much in implementation. None of it can be trusted (like you said, the sign recognition is often wrong), but as a backup reminder it can be great, e.g. lane centring. If it feels like it's seizing control from me it can be terrifying, e.g. automatic braking out of the blue.

[–] Almacca@aussie.zone 1 points 4 hours ago

Couple of examples from just this last week: I was on a multi-lane road with a posted 60km/h speed limit, and the car was trying to tell the driver it was 40, and beeped at them whenever they went over it. Another one complained about crossing the centreline marking because we were going around parked cars and there was no choice. Thankfully the car didn't seize control in those situations and just gave an audible warning, but if it had we'd have been in the pooh, especially that second one.

[–] Feyd@programming.dev 51 points 17 hours ago (2 children)

People tell me the hallucinations aren't a big deal because people should fact check everything.

  1. People aren't fact checking
  2. If you have to fact check every single thing, you're not saving any time over going straight to the real source of the information yourself
[–] lobut@lemmy.ca 16 points 16 hours ago (1 children)

My friend told me that one of her former colleagues, wicked smart dude, was talking to her about space. Then he went off about how there were pyramids on Mars. She was like, "oh ... I'm quite caught up on this stuff and I haven't heard of this. Where can I find this info?" The guy has apparently been having super long chats with whatever LLM and thinks they're now diving into the "truth".

[–] brsrklf@jlai.lu 11 points 15 hours ago (1 children)

Worse, since generating a whole bunch of potentially correct text is basically effortless now, you've got a new batch of idiots just "contributing" to discussions by leaving a regurgitated wall of text they possibly didn't even read themselves.

So not only are those people not fact checking, when you point out that you didn't ask for an LLM's opinion, they're like "what's the problem? Is any of this wrong?" Because it's entirely your job to check something they copy-pasted in 5 seconds.

[–] justsomeguy@lemmy.world 5 points 13 hours ago

So many posts on social media are obviously AI generated, and it immediately makes me disregard them, but I'm worried about later stages when people make an effort to mask it. Prompt it to generate text without giveaways like dashes, have intentional mistakes or a general lack of proper structure and punctuation in there, and it will be incredibly hard to tell.

[–] UnderpantsWeevil@lemmy.world 17 points 17 hours ago (1 children)

99% won't do when the consequences of that last 1% are sever.

There's more than one book on the subject, but all the cool kids were waving around their copies of The Black Swan at the end of 2008.

Seems like all the lessons we were supposed to learn about stacking risk behind financial abstractions and allowing business to self-regulate in the name of efficiency have been washed away, like tears in the rain.

[–] snooggums@piefed.world 10 points 17 hours ago (1 children)

99% won't do when the consequences of that last 1% are sever.

As an example, your whole post is great but I can't help but notice the one tiny typo that is like 1% of the letters. Heck, a lot of people probably didn't even notice just like they don't notice when AI returns the wrong results.

A multi-billion-dollar technical system should be far better than someone posting to the fediverse in their spare time, but it is far worse. Especially since those tiny errors will be fed back into future AI training, and LLM design is not and never will be self-correcting, because it works with the data it has, and it needs so much that it will always include scraped stuff.

[–] homesweethomeMrL@lemmy.world 5 points 13 hours ago

It should, but it can't. OpenAI just admitted this in a recent paper. The hallucinations are baked in. Chaos is baked into the binary technology.

[–] Zwuzelmaus@feddit.org 3 points 15 hours ago

won't do either even if it just happens 1 out of 20 times. That one time could really screw me so I can't trust it.

20 is also the number of times you go to work per month.

Now imagine crashing your car once every month...