This post was submitted on 20 Feb 2025
68 points (88.6% liked)

[–] regrub@lemmy.world 73 points 5 months ago* (last edited 5 months ago) (1 children)

TL;DR: yes

It's unfortunate that LLMs are the only thing that comes to mind when AI is mentioned, though. Something that can do pattern recognition better than a human can is a good fit for this application.

[–] Wxnzxn@lemmy.ml 41 points 5 months ago (1 children)

Even if it were to do pattern recognition as well as or slightly worse than a human, it's still worthwhile. As the article points out: It's basically a non-tiring, always-ready second opinion. That alone helps a lot.

[–] vividspecter@lemm.ee 14 points 5 months ago* (last edited 5 months ago) (2 children)

One issue I could see is using it not as a second opinion, but as the only opinion. That doesn't mean this shouldn't be pursued, but the incentives toward laziness and cost-cutting are obvious.

EDIT: Another potential issue is the AI detection being more accurate for certain groups (e.g. white Europeans), which could result in underdiagnosis in minority groups if the training data set doesn't include sufficient data for those groups. I'm not sure how likely that is with breast cancer detection, however.
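
For what it's worth, this kind of underdiagnosis is straightforward to check for if the evaluation data is tagged by group. A minimal sketch of the idea; the field names (`group`, `label`, `prediction`) and the toy data are made up for illustration:

```python
# Hypothetical check for per-group detection disparity: compute
# sensitivity (true-positive rate) separately for each group.
from collections import defaultdict

def sensitivity_by_group(records):
    """Of the cases that really were cancer, what fraction did the
    model flag, broken out per demographic group?"""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for r in records:
        if r["label"] == 1:              # confirmed cancer case
            pos[r["group"]] += 1
            if r["prediction"] == 1:     # the model flagged it
                tp[r["group"]] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Toy data: the model misses half the real cases in group "B".
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
]
print(sensitivity_by_group(records))  # {'A': 1.0, 'B': 0.5}
```

A large gap between groups here would be exactly the underdiagnosis risk described above.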

[–] anarchrist@lemmy.dbzer0.com 8 points 5 months ago (1 children)

There's also the risk of it being integrated poorly. If the human only serves as a secondary check on the AI, which is mostly right, you condition the human to just click through and defer to the AI. The better way would be to have both the human and the AI judge things independently and carefully review the cases where they disagree, but that won't save anyone money.
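
To make the difference concrete, here's a minimal sketch of that independent double-read workflow; the function name and outcome labels are illustrative, not any real system's API:

```python
# Independent double-reading: human and AI each commit a verdict
# without seeing the other's, and only disagreements get escalated
# to a careful joint review. All names here are illustrative.

def triage(human_flagged: bool, ai_flagged: bool) -> str:
    """Combine two independent reads into a next action."""
    if human_flagged and ai_flagged:
        return "recall"        # both flagged it: recall the patient
    if not human_flagged and not ai_flagged:
        return "routine"       # both cleared it: routine screening
    return "arbitration"       # they disagree: escalate for review

print(triage(human_flagged=True, ai_flagged=False))   # arbitration
print(triage(human_flagged=False, ai_flagged=False))  # routine
```

The point is that the AI's verdict never reaches the human before they commit their own read, so there's nothing to rubber-stamp.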

[–] desktop_user@lemmy.blahaj.zone 5 points 5 months ago

If the court system allowed shifting partial fault for "preventable" deaths onto the hospital for employing practices that are not in the best interests of the patient, it might give them a financial incentive.

[–] Wxnzxn@lemmy.ml 2 points 5 months ago

Definitely. Here's hoping the accountability question will prevent that, but the incentive is there, especially in systems with for-profit healthcare.

[–] Mihies@programming.dev 15 points 5 months ago* (last edited 5 months ago) (1 children)

I remember from when we were learning Prolog that in the 70s, or thereabouts, they were already experimenting with AI and it was quite good at diagnostics. However, doctors were scared of losing their jobs instead of embracing it and using it as a tool, so it was dropped at the time. Hopefully they'll use it as an additional tool this time and everybody profits.

[–] Neuromancer49@midwest.social 13 points 5 months ago (1 children)

My favorite AI fact is from cancer research. The New Yorker has a great article about how an algorithm used to identify and price out pastries at a Japanese bakery found surprising success as a cancer detector. https://www.newyorker.com/tech/annals-of-technology/the-pastry-ai-that-learned-to-fight-cancer