this post was submitted on 28 Oct 2024

Technology

[–] NABDad@lemmy.world 46 points 3 weeks ago

Many, many years ago, the hospital where I work had a medical transcription company to transcribe dictated radiology results.

At the time, users would access the server via DEC terminals or a terminal application on their computer.

One radiologist set up a script in the terminal application to sign off all his reports with one click. Another radiologist liked it, so the first let the second copy it.

Later, the second radiologist opened a ticket with IT because all his reports were being signed by the first radiologist. Yeah, because he didn't update the script to change the username and password being used to sign the reports.

That's an amusing anecdote, but the terror comes from the fact that NEITHER RADIOLOGIST WAS READING THEIR REPORTS BEFORE SIGNING THEM.

The reason they are supposed to sign the report is to confirm that they reviewed the work of the transcriptionist and verified that the report was correct.

No matter what the tool is, doctors will assume the results are correct and sign off on them without checking.

[–] mipadaitu@lemmy.world 27 points 3 weeks ago (2 children)

This shows that AI isn’t an infallible machine that gets everything right — instead, we can think of it as a person who can think quickly, but its output needs to be double-checked every time. AI is certainly a useful tool in many situations, but we can’t let it do the thinking for us, at least for now.

No, it's not "like a person who can think." Unless you mean it's like an ADHD person who got distracted halfway through the transcript and started working on a different project in the same file.

[–] homesweethomeMrL@lemmy.world 24 points 3 weeks ago (1 children)

Agreed.

we can think of it as a person who can think quickly

No.

Do not do this. This way lies madness. It's a text-prediction system, one that has to be incredibly complex just to barf out three sentences that sound about right. It is not "thinking" shit.
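For anyone unfamiliar with what "text prediction" means here, a toy sketch may help. This bigram model is purely illustrative (real LLMs are neural networks over subword tokens, and the corpus below is made up), but the training objective is the same in spirit: predict a plausible next token, nothing more.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word, or None if the word is unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Made-up corpus for illustration only.
corpus = "the patient is stable the patient is improving the report is signed"
model = train_bigrams(corpus)
print(predict_next(model, "patient"))  # -> is ("is" always follows "patient" here)
print(predict_next(model, "is"))       # most frequent follower; ties broken arbitrarily
```

The model has no idea what a patient or a report is; it only knows which words tend to follow which, which is exactly why its output can sound right while being wrong.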

[–] Pips@lemmy.sdf.org 8 points 3 weeks ago

It's a more complicated version of that feature where Gmail offers suggested responses like "let me look into that" and "thank you."

[–] rottingleaf@lemmy.world 9 points 3 weeks ago (1 children)

As an ADHD person (among other things), I don't think I can be replaced with an LLM either.

[–] Flocklesscrow@lemm.ee 4 points 3 weeks ago

"Because, unlike some other LLMs, I can speak with an English accent."

[–] themurphy@lemmy.ml 10 points 3 weeks ago

It's great they're starting to use these tools, but I hope they keep in mind that they need a lot of fine-tuning before they're reliable enough.

You need to use a product in practice to make it better, but you don't need to rely on it from the start. They need to invest time in the implementation and in working out where it can and shouldn't be used.

Nice progress towards something good, but we're not there yet.

[–] RobotToaster@mander.xyz 5 points 3 weeks ago (3 children)

How can it be that bad?

I've used Zoom's AI transcriptions for far less mission-critical stuff, and it's generally fine (I still wouldn't trust it for medical purposes).

[–] huginn@feddit.it 27 points 3 weeks ago

Zoom's AI transcriptions also make things up.

That's the point: they're hallucination engines. They pattern-match and fill holes by design. If the match isn't perfect, it doesn't matter; the model just patches the hole with nonsense.

[–] Grimy@lemmy.world 16 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Whisper has been known to hallucinate during long stretches of silence, though most of the examples in the article are most likely due to bad audio quality.

I use Whisper quite a bit, and it will fumble a word here or there, but never to the extent shown in the article.
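A common mitigation for the silence problem is to keep long silent stretches from reaching the model at all. Below is a minimal energy-based silence gate; it's a hypothetical sketch for illustration (it assumes mono float samples in [-1, 1], and a real pipeline would use a proper voice-activity detector such as webrtcvad or Silero VAD instead):

```python
# Minimal energy-based silence gate: drop frames whose RMS energy falls
# below a threshold, so long silences never reach the transcriber.
# Illustrative sketch only -- production code should use a real VAD.

def drop_silence(samples, frame_len=512, threshold=0.01):
    """Return only the frames of `samples` whose RMS is above `threshold`."""
    voiced = []
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        rms = (sum(x * x for x in frame) / len(frame)) ** 0.5
        if rms >= threshold:
            voiced.extend(frame)
    return voiced

speech = [0.5, -0.4] * 256      # loud "speech" frame (512 samples)
silence = [0.0] * 512           # silent frame
audio = speech + silence + speech
print(len(drop_silence(audio)))  # -> 1024: the silent frame is removed
```

The reference Whisper implementation's `transcribe()` also exposes knobs like `no_speech_threshold` and `condition_on_previous_text` that are commonly tuned to reduce hallucinated output on quiet or repetitive audio.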

[–] QuadratureSurfer@lemmy.world 7 points 3 weeks ago

Same, I'd say it's way better than most other transcription tools I've used, but it does need to be monitored to catch when it starts going off the rails.

[–] brbposting@sh.itjust.works 2 points 3 weeks ago

Thanks for watching!

[–] ElPussyKangaroo@lemmy.world 9 points 3 weeks ago

It's not the transcripts themselves that are the issue here. It's that the model is interpreting the transcripts to produce information.