this post was submitted on 01 May 2026
54 points (89.7% liked)

Technology

top 18 comments
[–] eager_eagle@lemmy.world 6 points 15 minutes ago (1 children)

Waste of energy. It's like asking a person to estimate a non-trivial angle. Either use a model trained for that task, or don't bother.

[–] Corkyskog@sh.itjust.works 4 points 10 minutes ago

The point is they are advertising that these models can do it.

[–] Buffalox@lemmy.world 19 points 2 hours ago (2 children)

It’s the same photo, the same model, the same question. But you won’t get the same answer. Not even close — and the differences are large enough to cause a hypoglycaemic emergency.

OK I wonder if there's something wrong with the photo.
The photo:

WTF!!??
That's like estimating the carbs in 2 slices of standard sandwich bread! Of course not all bread has the same amount of sugar, but a reasonable range based on an average should be a dead easy answer.
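For what it's worth, the back-of-the-envelope math really is that easy. A minimal sketch, using typical nutrition-label values for sandwich bread (my numbers, not from the article):

```python
# Typical nutrition-label range for one slice of standard sandwich
# bread: roughly 12-20 g of carbohydrate (assumed values).
low_per_slice, high_per_slice = 12, 20
slices = 2

low = slices * low_per_slice
high = slices * high_per_slice
print(f"roughly {low}-{high} g of carbs")  # roughly 24-40 g of carbs
```

Any answer far outside a range like that should be an obvious red flag for a tool marketed at diabetics.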

I thought the headline sounded crazy, but then I read the article, and it's actually worse. I've said it many times before: these AI chatbots should not be legal; they put lives at risk.

[–] inari@piefed.zip 9 points 1 hour ago (2 children)

To be fair, there's no way of knowing what the filling is, so the AI may be guessing at that too

[–] Carnelian@lemmy.world 11 points 1 hour ago

The apps are advertising that they can do this tho. Many of them are aggressively sponsoring YouTubers who advertise you can basically just wave your phone over the food and it takes away all the “work” from traditional calorie counting apps

[–] PatrickYaa@feddit.org 9 points 1 hour ago (1 children)

But the AI assumes itself infallible; at the very least it could ask...

[–] inari@piefed.zip 2 points 1 hour ago* (last edited 1 hour ago)

That's true, it should ask follow-up questions, or at least clarify its assumptions

[–] MagicShel@lemmy.zip -2 points 1 hour ago (2 children)

They put lives at risk the same way every single product at your local home improvement store does. When you misuse a tool for a purpose it wasn't intended for and isn't good at, you're going to get bad results.

This is an issue for the educational system, not the legal system.

[–] Steve@startrek.website 8 points 25 minutes ago (1 children)

What if the packaging on every tool at home depot grossly misrepresented its capabilities and/or purpose?

This chainsaw cures cancer? Hot damn somebody call RFK!

Concrete mix goes great with pancakes, etc.

[–] MagicShel@lemmy.zip 1 points 4 minutes ago (1 children)

Does OpenAI claim ChatGPT is fit for those purposes? No.

The concrete itself will happily mix into your pancakes.

[–] Steve@startrek.website 1 points 2 minutes ago

I think the whole point of this discussion is that the various peddlers of AI in fact do make wild claims about their capability.

[–] HuudaHarkiten@piefed.social 6 points 28 minutes ago* (last edited 28 minutes ago)

As others have pointed out, this is also a problem with how they are advertising it.

If duct tape were advertised as something you can use to hold your roof beams together, you'd have an issue with that.

[–] MightEnlightenYou@lemmy.world 2 points 1 hour ago* (last edited 1 hour ago) (1 children)

People should read the top comments on Hacker News instead of anyone here, they're more informed on the topic than Lemmy is

[–] Oisteink@lemmy.world 2 points 1 hour ago

Yeah - if you’re after AI fanbois you should head over there. They’re not that bright, but if you check Show and Tell you can see what Claude’s been up to the last two days

[–] psycho_driver@lemmy.world 0 points 1 hour ago* (last edited 1 hour ago) (1 children)

Bruh a couple of months ago I asked it (Gemini) to check the number of characters, including spaces, in a potential game character name, because I was working at the time and couldn't stop to check my in-head count. It told me 21 -- I had counted 20. I thought I must have gotten distracted and miscounted. Later, when I had time to actually focus on the issue, it turned out the AI had miscounted the 20-character string (maybe it counted the null terminator?).

[–] boonhet@sopuli.xyz 5 points 1 hour ago (1 children)

AI doesn't see individual characters, it sees tokens, with most tokens being a word or part of a word. That's why per-character questions have such a high failure rate.
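A toy sketch of what that looks like. The name and the subword split below are hypothetical (real BPE vocabularies differ), but the point stands: the model receives a handful of token IDs, not a stream of characters:

```python
# Hypothetical 20-character name and a made-up subword (BPE-style) split.
name = "Umbral Nightwhisper!"
tokens = ["Umbral", " Night", "whis", "per", "!"]  # assumed tokenization

# The tokens reconstruct the string exactly...
assert "".join(tokens) == name

# ...but the model only "sees" 5 token IDs, so answering "how many
# characters?" requires it to recall the spelling behind each token.
print(len(name))    # 20 characters
print(len(tokens))  # 5 tokens
```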

[–] PunnyName@lemmy.world -3 points 51 minutes ago (1 children)

If it doesn't understand the simple concept of the number of letters and spaces, it needs to be reprogrammed.

[–] boonhet@sopuli.xyz 6 points 42 minutes ago

It doesn't understand anything though? It never will. It's a probability machine. If you choose to believe its output, that's on you. I use it as a coding assistant to get boring things done faster. Fire a prompt at claude code, grab a coffee, check out the diff. But that last step is crucial. Can't trust AI output blindly.