this post was submitted on 09 Feb 2026
325 points (99.7% liked)

Technology


Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

top 50 comments
[–] alzjim@lemmy.world 10 points 2 hours ago

Calling chatbots “terrible doctors” misses what actually makes a good GP — accessibility, consistency, pattern recognition, and prevention — not just physical exams. AI shines here — it’s available 24/7 🕒, never rushed or dismissive, asks structured follow-up questions, and reliably applies up-to-date guidelines without fatigue. It’s excellent at triage — spotting red flags early 🚩, monitoring symptoms over time, and knowing when to escalate to a human clinician — which is exactly where many real-world failures happen. AI shouldn’t replace hands-on care — and no serious advocate claims it should — but as a first-line GP focused on education, reassurance, and early detection, it can already reduce errors, widen access, and ease overloaded systems — which is a win for patients 💙 and doctors alike.

/s

[–] SuspciousCarrot78@lemmy.world 3 points 1 hour ago* (last edited 1 hour ago)

So, I can speak to this a little bit, as it touches two domains I'm involved in. TL;DR - LLMs bullshit and are unreliable, but there's a way to use them in this domain as a force multiplier of sorts.

In one, I've created a Python router that takes my (deidentified) clinical notes, extracts and compacts the input (per user-defined rules) and creates a summary, then -

  1. benchmarks the summary against my (user-defined) gold standard and provides a management plan (again, based on a user-defined database).

  2. this is then dropped into my on-device LLM for light editing and polishing to condense, which I then eyeball, correct and escalate to my supervisor for review.

Additionally, the LLM-generated note can be approved or denied by the Python router, in the first instance based on certain policy criteria I've defined.

It can also suggest probable DDx (differential diagnoses) based on my database (which is CSV-based).

Finally, if the LLM output fails the policy check, the router tells me why it failed and just says "go look at the prior summary and edit it yourself".

This three-step process takes the tedium of paperwork from 15-20 minutes down to about 1 minute of generation plus 2 minutes of manual editing, which is approx a 5-7x speed-up.

The reason why this is interesting:

All of this runs within the LLM (it calls/invokes the Python tooling via a >> command) and is 100% deterministic; no LLM jazz until the final step, which the router can outright reject and which is user-auditable anyway.

I've found that using a fairly "dumb" LLM (Qwen2.5-1.5B), with settings dialed down, produces consistently solid final notes (5 out of 6 are graded as passing on the first run by the router invoking the policy document and checking the output). It's too dumb to jazz, which is useful in this instance.
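Roughly, the policy-gate step looks something like the sketch below. This is illustrative only: the required sections, word limit, and tripwire phrase are made-up stand-ins, not the actual policy document or router described above.

```python
# Minimal sketch of the "router approves / rejects the LLM note" step.
# Everything here (required sections, word limit, tripwire) is a made-up example.

REQUIRED_SECTIONS = ("History", "Examination", "Plan")
MAX_WORDS = 250

def policy_check(note: str) -> list[str]:
    """Return the reasons a note fails policy; an empty list means it passes."""
    failures = []
    lowered = note.lower()
    for section in REQUIRED_SECTIONS:
        if section.lower() not in lowered:
            failures.append(f"missing section: {section}")
    if len(note.split()) > MAX_WORDS:
        failures.append(f"longer than {MAX_WORDS} words")
    if "as an ai" in lowered:  # crude check for model boilerplate leaking in
        failures.append("contains model boilerplate")
    return failures

def route(deterministic_summary: str, llm_note: str) -> str:
    """Approve the LLM-polished note, or fall back to the deterministic summary."""
    failures = policy_check(llm_note)
    if failures:
        print("LLM note rejected:", "; ".join(failures))
        print("Go look at the prior summary and edit it yourself.")
        return deterministic_summary
    return llm_note
```

The point is that the approve/deny decision itself never touches the LLM; it's plain, auditable Python.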

Would I trust the LLM end to end? Well, I'd trust my system approx 80% of the time. I wouldn't trust ChatGPT ... even though it's been more right than wrong in similar tests.

[–] pleksi@sopuli.xyz 4 points 2 hours ago

As a physician I've used AI to check if I have missed anything in my train of thought. It has never really changed my decision, though. It has also been useful for gathering up relevant citations for my presentations. But that's about it. It's truly shite at interpreting scientific research data on its own, for example; most of the time it will just parrot the conclusions of the authors.

[–] Tollana1234567@lemmy.today 1 points 1 hour ago* (last edited 1 hour ago)

It's basically a convoluted version of WebMD. Even the MD mods in medical subs are more accurate.

[–] Etterra@discuss.online 3 points 2 hours ago

I didn't need a study to tell me not to listen to a hallucinating parrot-bot.

[–] Treczoks@lemmy.world 9 points 8 hours ago

One needs a study for that?

[–] spaghettiwestern@sh.itjust.works 13 points 9 hours ago* (last edited 9 hours ago) (2 children)

Most doctors make terrible doctors.

[–] Sektor@lemmy.world 2 points 44 minutes ago

But the good ones are worth a monument in the place they worked.

[–] sbbq@lemmy.zip 3 points 8 hours ago

My dad always said, you know what they call the guy who graduated last in his class at med school? Doctor.

[–] BeigeAgenda@lemmy.ca 45 points 12 hours ago (3 children)

Anyone who has knowledge about a specific subject says the same: LLMs are constantly incorrect and hallucinate.

Everyone else thinks it looks right.

[–] agentTeiko@piefed.social 4 points 6 hours ago

Yep, it's why C-levels think it's the Holy Grail. Everything that comes out of their own mouths is bullshit as well, so they can't see the difference.

[–] IratePirate@feddit.org 23 points 11 hours ago* (last edited 11 hours ago) (1 children)

A talk on LLMs I was listening to recently put it this way:

If we hear the words of a five-year-old, we assume the knowledge of a five-year-old behind those words, and treat the content with due suspicion.

We're not adapted to something with the "mind" of a five-year-old speaking to us in the words of a fifty-year-old, and thus are more likely to assume competence just based on language.

[–] leftzero@lemmy.dbzer0.com 7 points 5 hours ago (1 children)

LLMs don't have the mind of a five year old, though.

They don't have a mind at all.

They simply string words together according to statistical likelihood, without having any notion of what the words mean, or what words or meaning are; they don't have any mechanism with which to have a notion.

They aren't any more intelligent than old Markov chains (or than your average rock), they're simply better at producing random text that looks like it could have been written by a human.
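(For anyone who hasn't seen one, a word-level Markov chain really is just a table of which word tends to follow which, sampled at random. A toy sketch, purely illustrative and obviously nothing like an LLM's scale:)

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: record which word follows which in a corpus,
# then generate text by sampling from those counts. No meaning involved,
# just statistical likelihood over adjacent words.
def build_chain(text: str) -> dict[str, list[str]]:
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the patient reports a headache the patient reports nausea the doctor recommends rest"
print(generate(build_chain(corpus), "the"))
```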

[–] IratePirate@feddit.org 2 points 2 hours ago

I am aware of that, hence the scare quotes. But you're correct, that's where the analogy breaks down. Personally, I prefer to liken them to parrots, mindlessly reciting patterns they've found in somebody else's speech.

[–] zewm@lemmy.world 5 points 11 hours ago

It is insane to me how anyone can trust LLMs when their information is incorrect 90% of the time.

[–] irate944@piefed.social 69 points 13 hours ago (2 children)

I could've told you that for free, no need for a study

[–] rudyharrelson@lemmy.radio 93 points 13 hours ago* (last edited 13 hours ago) (5 children)

People always say this on stories about "obvious" findings, but it's important to have verifiable studies to cite in arguments for policy, law, etc. It's kinda sad that it's needed, but formal investigations are a big step up from just saying, "I'm pretty sure this technology is bullshit."

I don't need a formal study to tell me that drinking 12 cans of soda a day is bad for my health. But a study that's been replicated by multiple independent groups makes it way easier to argue to a committee.

[–] irate944@piefed.social 29 points 13 hours ago (1 children)

Yeah you're right, I was just making a joke.

But it does create some silly situations like you said

[–] rudyharrelson@lemmy.radio 15 points 13 hours ago (1 children)

I figured you were just being funny, but I'm feeling talkative today, lol

[–] IratePirate@feddit.org 5 points 11 hours ago

A critical, yet respectful and understanding exchange between two individuals on the interwebz? Boy, maybe not all is lost...

[–] Knot@lemmy.zip 17 points 12 hours ago

I get that this thread started from a joke, but I think it's also important to note that no matter how obvious some things may seem to some people, the exact opposite will seem obvious to many others. Without evidence, like the study, both groups are really just stating their opinions.

It's also why the formal investigations are required. And whenever policies and laws are made based on verifiable studies rather than people's hunches, it's not sad, it's a good thing!

[–] Telorand@reddthat.com 8 points 12 hours ago

The thing that frustrates me about these studies is that they all continue to come to the same conclusions. AI has already been studied in mental health settings, and it's always performed horribly (except for very specific uses with professional oversight and intervention).

I agree that the studies are necessary to inform policy, but at what point are lawmakers going to actually lay down the law and say, "AI clearly doesn't belong here until you can prove otherwise"? It feels like they're hemming and hawing in the vain hope that it will live up to the hype.

[–] BillyClark@piefed.social 6 points 12 hours ago (1 children)

it’s important to have verifiable studies to cite in arguments for policy, law, etc.

It's also important to have for its own merit. Sometimes, people have strong intuitions about "obvious" things, and they're completely wrong. Without science studying things, it's "obvious" that the sun goes around the Earth, for example.

I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health.

Without those studies, you cannot know whether it's bad for your health. You can assume it's bad for your health. You can believe it's bad for your health. But you cannot know. These aren't bad assumptions or harmful beliefs, by the way. But the thing is, you simply cannot know without testing.

[–] Slashme@lemmy.world 2 points 1 hour ago

Or how bad something is. "I don't need a scientific study to tell me that looking at my phone before bed will make me sleep badly", but the studies actually show that the effect is statistically robust but small.

In the same way, studies like this can make the distinction between different levels of advice and warning.

[–] eager_eagle@lemmy.world 3 points 13 hours ago* (last edited 13 hours ago)

Also, it's useful to know how, when, or why something happens. I can make a useless chatbot that is "right" most times if it only tells people to seek medical help.
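Something like the following would technically clear that bar (an intentionally silly, illustrative sketch):

```python
# A deliberately useless "always escalate" baseline: it scores as "correct"
# whenever the right answer is "see a doctor", while doing no triage at all.
def useless_triage_bot(message: str) -> str:
    return "Please seek medical attention."
```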

[–] pageflight@piefed.social 19 points 11 hours ago

Chatbots are terrible at anything but casual chatter, humanity finds.

[–] Sterile_Technique@lemmy.world 15 points 10 hours ago* (last edited 10 hours ago) (1 children)

Chipmunks, 5-year-olds, salt/pepper shakers, and paint thinner also all make terrible doctors.

Follow me for more studies on 'shit you already know because it's self-evident immediately upon observation'.

[–] kescusay@lemmy.world 5 points 10 hours ago

I would like to subscribe to your newsletter.

[–] theunknownmuncher@lemmy.world 14 points 12 hours ago (1 children)

A statistical model of language isn't the same as medical training??????????????????????????

[–] scarabic@lemmy.world 1 points 8 hours ago* (last edited 8 hours ago) (1 children)

It’s actually interesting. They found the LLMs gave the correct diagnosis high-90-something percent of the time if they had access to the notes doctors wrote about their symptoms. But when thrust into the room, cold, with patients, the LLMs couldn’t gather that symptom info themselves.

[–] Hacksaw@lemmy.ca 5 points 6 hours ago

LLM gives correct answer when doctor writes it down first.... Wowoweewow very nice!

[–] GnuLinuxDude@lemmy.ml 11 points 13 hours ago (1 children)

If you want to read an article that's optimistic about AI and healthcare, but that falls apart if you start asking too many questions, try this one:

https://text.npr.org/2026/01/30/nx-s1-5693219/

Because it's clear that people are starting to use it, and many times the successful outcome is that it just tells you to see a doctor. And doctors are beginning to use it, but they should have the professional expertise to understand and evaluate the output. And we already know that LLMs can spout bullshit.

For the purposes of using and relying on it, I don’t see how it is very different from gambling. You keep pulling the lever, oh excuse me I mean prompting, until you get the outcome you want.

[–] HeyThisIsntTheYMCA@lemmy.world 1 points 7 hours ago (1 children)

the one time my doctor used it and i didn't get mad at them (they did the google and said "the ai says" and I started making angry Nottingham noises even though all the ai did was tell us exactly what we had just been discussing was correct) uh, well that's pretty much it I'm not sure where my parens are supposed to open and close on that story.

[–] GnuLinuxDude@lemmy.ml 6 points 4 hours ago

Be glad it was merely that and not something like this https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/

In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses...

At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.

Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.

FDA device reports may be incomplete and aren’t intended to determine causes of medical mishaps, so it’s not clear what role AI may have played in these events. The two stroke victims each filed a lawsuit in Texas alleging that the TruDi system’s AI contributed to their injuries. “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented,” one of the suits alleges.

[–] JoMiran@lemmy.ml 9 points 12 hours ago
[–] homes@piefed.world 7 points 12 hours ago* (last edited 12 hours ago)

This is a major problem with studies like this: they approach from a position of assuming that AI doctors would be competent, rather than from a position of asking why AI should ever be involved with something so critical and demanding a mountain of evidence to prove it is worthwhile before investing a penny or a second in it.

“ChatGPT doesn’t require a wage,” and, before you know it, billions of people are out of work and everything costs 10000x your annual wage (when you were lucky enough to still have one).

How long until the workers revolt? How long have you gone without food?

[–] thesohoriots@lemmy.world 5 points 12 hours ago (1 children)

This says you’re full of owls. So we doing a radical owlectomy or what?

[–] sbv@sh.itjust.works 1 points 8 hours ago (1 children)

It looks like the LLMs weren't trained for medical tasks. The study would be more interesting if it had been run on something built for the task.

[–] green_red_black@slrpnk.net 3 points 2 hours ago

But that's just it. According to the snake-oil tech bros, the AI is supposedly capable of doing medical tasks, or really just about everything.

[–] supersquirrel@sopuli.xyz 4 points 12 hours ago

pikachufacegravestone.jpeg

[–] FelixCress@lemmy.world 3 points 11 hours ago

... You don't say.

[–] HubertManne@piefed.social 4 points 12 hours ago

It's not ready to take on any role. It should not be doing anything but assisting. So yeah, you can talk to a chatbot instead of filling out that checklist, and the output might be useful to the doc while he then talks with you.

[–] Rhoeri@piefed.world 4 points 12 hours ago* (last edited 12 hours ago)

So the same tech that lonely incels use to make themselves feel important doesn’t make good doctors? Ya don’t say?

[–] cecilkorik@piefed.ca 3 points 12 hours ago

It's great at software development though /s

Remember that when AI-written software soon replaces the software in all the devices doctors use daily.

[–] NuXCOM_90Percent@lemmy.zip 3 points 12 hours ago (4 children)

How much of that is the chatbot itself versus humans just being horrible at self-reporting symptoms?

That is why "bedside manner" is so important: connect the dots and ask follow-up questions for clarification, or just look at a person and assume they are wrong. Obviously there are some BIG problems with that (ask any black woman, for example), but... humans are horrible at reporting symptoms.

Which gets back to how "AI" is actually an incredible tool (especially in this case when it is mostly a human language interface to a search engine) but you still need domain experts in the loop to understand what questions to ask and whether the resulting answer makes any sense at all.

Yet, instead, people do the equivalent of just raw-dogging whatever the first response on Stack Overflow is.

[–] Lembot_0006@programming.dev 3 points 13 hours ago

You know what else is a bad doctor? My axe!
