this post was submitted on 07 Mar 2026
966 points (98.9% liked)

[–] Zink@programming.dev 7 points 4 hours ago

I'm a human being and I'm pretty sure I'm already not allowed to give legal or medical advice to anybody in New York or any other state.

[–] NutWrench@lemmy.world 3 points 4 hours ago

Chatbots should never give medical advice. Chatbots dispense basic, standalone factoids, like "aspirin is a pain reliever." But they don't know or care about dosages, comorbid conditions, or whether you live or die, so they won't ask follow-up questions.

[–] melfie@lemy.lol 5 points 6 hours ago (1 children)

In the US especially, medical professionals are overworked and simply don’t have the time and energy to properly diagnose. If you have a more complex, chronic issue, there’s a good chance you’ll be waiting months at a time to see various specialists who are only going to spend about 10 distracted minutes thinking about your case and might not have any useful insights, or who might misdiagnose you and make your condition worse. You basically have to do your own research and show them studies. If you’re a person of color or a woman, etc., there’s a good chance you won’t even be taken seriously. In an ideal world, it would work like it does on TV, but in the real world, it’s all about maximizing profits and the patients be damned. Sure, LLMs are unreliable, but they do at least provide ideas to research.

[–] SaveTheTuaHawk@lemmy.ca 8 points 5 hours ago* (last edited 5 hours ago) (1 children)

That's not why people are using chatbots; they are using chatbots because they can't afford healthcare.

And before we get out the tiny violins for MDs: they gatekeep the system to keep their salaries high.

Bad news, folks: MDs are using ChatGPT on the sly.

[–] melfie@lemy.lol 5 points 4 hours ago* (last edited 4 hours ago)

they are using Chatbots because they can't afford healthcare

Even if they do spend their limited resources on healthcare, there’s a good chance it’s going to be a waste of money.

before we get out the tiny violins for MDs

A lot of MDs are pretty useless in the first place, and that’s a big part of the problem. Maximizing the patient load doesn’t help anything. Just because someone can memorize and regurgitate information well, that doesn’t mean they’re going to be effective at their job. It’s often necessary to shop around to find someone who doesn’t suck, which is especially difficult for anyone who can’t afford it.

[–] willington@lemmy.dbzer0.com 13 points 18 hours ago (1 children)
  1. Make laws against chatbots.
  2. Demand proof you are not a chatbot.
  3. Surveillance capitalism.

The real target here is population control.

The lawmakers, who take billionaire money by the ton and who HAVE NEVER given a shit, suddenly, NOW, want to protect the vulnerable. Abso-fucking-lutely laughable on its face.

[–] militaryintelligence@lemmy.world 4 points 17 hours ago

Agreed. It's never about protection, just covert exploitation

[–] deathbird@mander.xyz 23 points 1 day ago (6 children)

If implemented, that would just ban chatbots that use large language models. It's not a terrible idea.

What would actually happen is that so-called AI chatbot systems would try to detect whether someone is from New York, try to exclude them from receiving medical or legal advice, fail, get sued, and pay a small fine, over and over again forever.
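
As an illustration of the guardrail pattern being predicted here, below is a minimal, hypothetical sketch of a geofence-plus-keyword filter. The function names, keyword list, and region code are all invented for this example; the second test prompt shows how easily a rephrased question slips past it.

```python
# Hypothetical compliance guardrail: geo-IP check plus crude topic filter.
# Everything here (names, keywords, region codes) is made up for illustration.

MEDICAL_LEGAL_KEYWORDS = {
    "diagnosis", "dosage", "prescription", "symptoms",
    "lawsuit", "contract", "liability", "custody",
}

def looks_like_restricted_advice(prompt: str) -> bool:
    # Crude keyword check; real topic classifiers misfire in both directions.
    words = set(prompt.lower().split())
    return bool(words & MEDICAL_LEGAL_KEYWORDS)

def appears_to_be_in_new_york(ip_region: str | None) -> bool:
    # Geo-IP lookup result, trivially defeated by a VPN or carrier NAT.
    return ip_region == "US-NY"

def gate_response(prompt: str, ip_region: str | None) -> str:
    if appears_to_be_in_new_york(ip_region) and looks_like_restricted_advice(prompt):
        return "Sorry, I can't discuss medical or legal questions in your region."
    return "<model output>"

# The filter triggers on an obvious prompt but misses a rephrased one.
print(gate_response("What dosage of aspirin should I take?", "US-NY"))
print(gate_response("How many of these little white pills are safe?", "US-NY"))
```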

[–] moroninahurry@piefed.social 10 points 1 day ago (2 children)

Laws like this are great for these companies. This is how they will justify removing access to useful information and putting it behind paywalls. But oh, you need a prescription, so now the insurance companies are involved (spoiler: they already are), and you don't even have the option to pay through the nose for medical information.

Then when Google search has been completely replaced with AI, you won't even be able to search for medical information.

Healthcare companies aren't about to provide anything for free.

[–] Routhinator@startrek.website 8 points 23 hours ago (1 children)

Most of the medical information coming up in searches these days is garbage, and you should be going to a known, reputable site and searching their database. LLMs have been trained on absolute garbage. There is nothing of value being kept from anyone here.

[–] presoak@lazysoci.al 2 points 5 hours ago (1 children)

LLMs have been trained on absolute garbage

It depends on the LLM actually.

Specialized medical LLMs are actually very accurate.

[–] badgermurphy@lemmy.world 1 points 4 hours ago* (last edited 29 minutes ago)

I'm sure the quality of the LLM output does vary a lot based on the size of the scope it covers and the training data set.

However, I believe that if it were possible to get an LLM to be "quite accurate" in any context, that would make it easy to find a path to profitability for that tool, but I don't think we have seen that materialize anywhere.

I believe that the best they can get is "more accurate" than the mean, but still not accurate enough to reliably make anyone money*.

*Nvidia notwithstanding

[–] Soup@lemmy.world 6 points 23 hours ago (11 children)

LLMs and chatbots should not be giving medical advice. You are afraid of the private healthcare system, not the lack of access to the most janky bandaid fix for its failures.

[–] artyom@piefed.social 124 points 2 days ago* (last edited 2 days ago) (1 children)

Hell yeah, let's hold them accountable for disinformation. They'll be gone completely in a matter of months.

Want to get rid of that responsibility? Direct the user to the source. Oh wait, that's just a search engine.

[–] iopq@lemmy.world 12 points 1 day ago* (last edited 1 day ago) (1 children)

It's a bit different, because a search engine can give you 0 results. An AI is trained to get the most answers correct, so it always guesses; that's the best way to score on an evaluation.
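
To make that incentive concrete, here is a toy expected-score calculation under an accuracy-only benchmark; the 25% guess probability is an arbitrary assumption, but any nonzero value makes guessing beat abstaining.

```python
# Toy arithmetic behind the point above: if an evaluation only rewards correct
# answers and treats "I don't know" the same as a wrong answer, guessing is
# always the score-maximizing move. The probability below is a made-up example.

p_lucky_guess = 0.25      # assumed chance that a guess happens to be right
reward_correct = 1.0
reward_wrong = 0.0        # accuracy-only metric: wrong answer scores zero
reward_abstain = 0.0      # ...and so does refusing to answer

expected_score_guessing = p_lucky_guess * reward_correct + (1 - p_lucky_guess) * reward_wrong
expected_score_abstaining = reward_abstain

print(f"expected score when guessing:   {expected_score_guessing:.2f}")   # 0.25
print(f"expected score when abstaining: {expected_score_abstaining:.2f}")  # 0.00
```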

[–] XTL@sopuli.xyz 4 points 1 day ago

Except for refusals, but that's a kind of answer as well.

[–] supersquirrel@sopuli.xyz 100 points 2 days ago (4 children)

I think a better solution is to ban techbros from giving serious economic or cultural advice and take computers away from business majors.

[–] HeyThisIsntTheYMCA@lemmy.world 40 points 1 day ago (4 children)

Please don't take them entirely away. Maybe just internet access? 30ish years ago I had to do accounting by hand, in those green ledgers. It took approximately twelve times longer to do it by hand than to do it with a computer, and it made me shrimp like 5 times worse. I needed an architect's table with an angled top in order to work properly, but I could neither get one supplied by the employer nor afford to provide one myself.

Not all technology is bad


Would be nice if regular legal and health advice was in any way affordable though

[–] ieGod@lemmy.zip 23 points 1 day ago (1 children)

I don't see how you police/enforce this. The technology is out of the bag; people will find ways to access it. Do we need age/location verification for this now too? What if I'm running a local agent? I don't agree with this.

[–] cmnybo@discuss.tchncs.de 28 points 1 day ago (4 children)

The law would allow you to sue whoever is running the chatbot. If you run your own LLM locally and take bad advice from it, then it's your own fault.

[–] dhruv3006@lemmy.world 12 points 1 day ago

You can bring in a regulation, but can you really enforce it?

[–] mrmaplebar@fedia.io 34 points 2 days ago (6 children)

This reads as a way to protect white collar industries from the effects of AI without addressing the root problem: that AI does not actually think, and that it is little more than a meat grinder full of scraped data.

[–] tinkermeister@lemmy.world 29 points 1 day ago* (last edited 1 day ago) (7 children)

I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time-consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent 3 days in the hospital having his bowels unobstructed through a tube in his nose.

There is value in people having that kind of information at their fingertips.

Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc. rather than protecting citizens from ourselves.
