[–] Armand1@lemmy.world 13 points 4 days ago* (last edited 4 days ago) (1 children)

Hmmm...

As the article correctly states, machine learning ("AI" is a misnomer that has stuck imo) has been used successfully for decades in medicine.

Machine learning is inherently about spotting patterns and inferring from them. The problem, I think, is two-fold:

  1. There are more "AI" products than ever; not all companies build them responsibly, and it's difficult for regulators to keep up with them.

The gutting of these regulatory agencies by the current US administration does not help ofc, but many of them were already severely undermanned.

  2. As AI is normalised, some doctors will put too much trust in these systems.

This isn't helped by the fact that the makers of these products are likely to exaggerate their capabilities. That may be reflected in the products themselves, which may not properly communicate the degree of certainty of a diagnosis or conclusion (e.g. "30% certainty this lesion is cancerous").
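For example (a minimal Python sketch, not taken from any real product; the data and model here are stand-ins), surfacing a calibrated probability instead of a bare yes/no label is not hard:

```python
# Minimal sketch: report a calibrated probability, not just a class label.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

# Stand-in data; a real system would use imaging features, lab values, etc.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrap the base model so its scores behave more like true probabilities.
model = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
model.fit(X_train, y_train)

# Show the probability itself rather than the thresholded decision.
p = model.predict_proba(X_test[:1])[0, 1]
print(f"Estimated probability this lesion is cancerous: {p:.0%}")
```

Whether doctors then act on that number sensibly is a separate problem, but at least the uncertainty is on the screen instead of hidden behind a binary verdict.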

[–] HubertManne@piefed.social 7 points 4 days ago

It seems like a lot of AI problems come down to how people treat it. It needs to be treated like a completely naive and inexperienced intern, student, or helper. Everyone should expect that all output has to be carefully looked over, like a teacher checking a student's work.
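To make that concrete (a toy Python sketch; the names and workflow here are made up, not any real system), the idea is that nothing the model produces gets acted on until a person has signed off:

```python
# Toy sketch of a human-in-the-loop gate: every model suggestion must be
# explicitly approved by a reviewer before it is released.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    approved: bool = False

def review(suggestion: Suggestion, reviewer_ok: bool) -> Suggestion:
    # The reviewer's judgement, not the model's confidence, flips the flag.
    suggestion.approved = reviewer_ok
    return suggestion

draft = Suggestion("Lesion likely benign; recommend 6-month follow-up.")
checked = review(draft, reviewer_ok=True)  # a clinician signs off here
if checked.approved:
    print("Released to chart:", checked.text)
```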