this post was submitted on 04 Jan 2024
180 points (90.5% liked)


ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

[–] Darorad@lemmy.world 98 points 10 months ago (13 children)

Why do people keep expecting a language model to be able to do literally everything? AI works best when the model is trained to solve a specific problem. You can't just throw everything at a chatbot and expect it to have any sort of competence.

[–] Cheers@sh.itjust.works 1 points 10 months ago (1 children)

Because Google's Med-PaLM 2 is a medically trained chatbot that performs better than most med students, and even some medical professionals. Further training and refinement using newer techniques like mixture-of-experts and chain-of-thought prompting are likely to improve results.
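
As a rough illustration of what chain-of-thought prompting changes, here is a minimal Python sketch. The `query_model` function and the pediatric case text are placeholders invented for the example, not any real API.

```python
# Minimal sketch of chain-of-thought prompting, one of the techniques
# mentioned above. `query_model` is a hypothetical stand-in for whatever
# LLM API you actually call; the point is only how the prompt is framed.

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call (e.g. an HTTP request)."""
    return "<model response>"

case = "14-year-old with joint pain, fatigue, and a facial rash."

# Direct prompt: asks for the answer in one step.
direct = f"Patient: {case}\nGive the most likely diagnosis."

# Chain-of-thought prompt: asks the model to walk through the findings
# first, which is the kind of intermediate reasoning the comment refers to.
cot = (
    f"Patient: {case}\n"
    "List the key findings, discuss which conditions they point to, "
    "then state the most likely diagnosis."
)

print(query_model(direct))
print(query_model(cot))
```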

[–] Darorad@lemmy.world 5 points 10 months ago (1 children)

Exactly. Med-PaLM 2 was specifically trained to be a medical chatbot, not a general-purpose one like ChatGPT.

[–] Hotzilla@sopuli.xyz 1 points 10 months ago

Train it on the internet and you get results like the internet. Is the medical content on the internet good? No, it's shit, so it gives shit results.

These are great base models, and understanding larger context always helps an LLM, but specialization is needed for domains like this.
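
To make the specialization point concrete, here is a minimal sketch of preparing curated medical Q&A pairs in the JSONL instruction-tuning format that many fine-tuning pipelines accept. The records and the file name are illustrative assumptions, not a real dataset.

```python
# Minimal sketch of the "specialization" step: turning curated medical Q&A
# into the one-JSON-object-per-line (JSONL) format commonly used for
# instruction fine-tuning. The records below are illustrative only.

import json

curated_pairs = [
    {
        "prompt": "A 3-year-old presents with a barking cough and stridor. Likely diagnosis?",
        "response": "Croup (laryngotracheobronchitis) is the most likely diagnosis.",
    },
    {
        "prompt": "An infant has bilious vomiting and a distended abdomen. Next step?",
        "response": "Urgent evaluation for malrotation with volvulus, typically an upper GI series.",
    },
]

# Write one JSON object per line; this is what most fine-tuning tools expect.
with open("pediatrics_finetune.jsonl", "w") as f:
    for pair in curated_pairs:
        f.write(json.dumps(pair) + "\n")
```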
