this post was submitted on 06 Dec 2024
669 points (98.8% liked)

Memes

top 24 comments
[–] davel@lemmy.ml 84 points 1 week ago (3 children)
[–] elbowgrease@lemm.ee 23 points 1 week ago

oh wow! cool armor uniforms! and I get a blaster! pew pew pew.

/s

[–] bamboo@lemmy.blahaj.zone 14 points 1 week ago (1 children)
[–] Cowbee@lemmy.ml 20 points 1 week ago

More people need to watch this. Too frequently I've seen people try to misframe the Empire in Star Wars as XYZ enemy of the US and the US as the Rebels, when it has always been an allegory aimed at the US.

[–] voracitude@lemmy.world 12 points 1 week ago

I regularly say that joining the military for me was like becoming a Stormtrooper. At least most people seem to understand that's not a good thing.

[–] Arfman@aussie.zone 83 points 1 week ago (2 children)

Every time I read stuff like this, I remember that slide that says "A computer can never be held accountable, therefore a computer must never make a management decision."

[–] davel@lemmy.ml 29 points 1 week ago
[–] AndrasKrigare@beehaw.org 12 points 1 week ago (1 children)

That implies management is held accountable

[–] orcrist@lemm.ee 9 points 1 week ago

It was last week.

[–] tkk13909@sopuli.xyz 21 points 1 week ago

Rare W post from .ml

[–] Prunebutt@slrpnk.net 19 points 1 week ago (1 children)

Damn, I need to rewatch that show.

[–] BigBenis@lemmy.world 1 points 1 week ago

Season two lands next year!

[–] Grandwolf319@sh.itjust.works 9 points 1 week ago (2 children)

Does anyone have a good article on them using AI?

The more I read about this story, the better it gets.

[–] Boddhisatva@lemmy.world 7 points 1 week ago

This ProPublica article is good reading. It discusses a company used by many insurers, including UHC, to deny claims using AI. The name of the company is EviCore. I suppose the "Evi" is supposed to be short for "evidence" but I think it is pretty clear that it's just short for "evil."

Found this via the Wikipedia page, in the Background section.

[–] psion1369@lemmy.world 4 points 1 week ago (1 children)

AI should be used as a recommendation, not an absolute answer.

[–] LifeInMultipleChoice@lemmy.dbzer0.com 3 points 1 week ago* (last edited 1 week ago)

I think we need laws governing businesses' use of the term. There is nothing intelligent about language models. Most of what "AI" is being used for in businesses is more "Automated Instructions" than anything intelligent.

Laws need to dictate that companies MUST provide a reasonable way to reach a human representative, and that they are legally responsible for the automated system's responses.

It's fine to set up automated systems to assist people within companies, as the majority of issues people have can be solved through automated processes.

User: "I need access to this network share"

LLM: Okay, submit this form: [link to the network share access request form].

LLM: Can I further assist?

User submits the form, specifying the network path, choosing read or read/write permissions via radio buttons, and giving a reason for needing access.

The form emails an approve/deny button to the owner of that specific network share.

The approver clicks approve, the user is added to the required Active Directory group, and they receive an email back saying they've been added and should log out and back in so their Active Directory groups and group policies update.

Time taken by the user: 5 minutes. Many companies have so many requests coming in that stuff like this often doesn't reach the approving parties and get completed for weeks.

But if you set up an internal, non-external-facing LLM that locates forms and processes but cannot access user data or permissions, it can cut the workload of managing 60,000 users by a significant amount.

(I'm sure there are a million other legitimate uses, but that's just a quick one off the top of my head; a rough sketch of that kind of approval flow is below.)
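For what it's worth, here's a minimal Python sketch of the approval step described above. Everything in it is hypothetical: `AccessRequest`, `add_user_to_ad_group`, and `notify_user` are made-up stand-ins, and a real deployment would wire the last two to the directory service and mail system instead of printing. The point is only the shape of the flow, not a real implementation.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    """One submitted network-share access form (hypothetical structure)."""
    user: str
    share_path: str
    permission: str   # "read" or "read/write"
    reason: str
    owner: str        # owner of the share, who approves or denies


def add_user_to_ad_group(user: str, group: str) -> None:
    # Hypothetical stand-in: a real deployment would call the directory service here.
    print(f"[AD] added {user} to {group}")


def notify_user(user: str, message: str) -> None:
    # Hypothetical stand-in for the confirmation email.
    print(f"[mail to {user}] {message}")


def handle_decision(req: AccessRequest, approved: bool) -> None:
    """Runs when the share owner clicks approve or deny in the email."""
    if not approved:
        notify_user(req.user, f"Access to {req.share_path} was denied by {req.owner}.")
        return
    # Derive a group name from the share path and the requested permission level.
    cleaned = req.share_path.strip("\\").replace("\\", "_")
    suffix = "RW" if req.permission == "read/write" else "R"
    group = f"SHARE_{cleaned}_{suffix}"
    add_user_to_ad_group(req.user, group)
    notify_user(
        req.user,
        f"You were added to {group}. Log out and back in so your group memberships refresh.",
    )


if __name__ == "__main__":
    request = AccessRequest(
        user="jdoe",
        share_path=r"\\fileserver\projects",
        permission="read",
        reason="Joining the projects team",
        owner="asmith",
    )
    handle_decision(request, approved=True)
```

Note the design choice from the comment: the LLM only points users at the form, and the actual permission change is gated on a human approver, so the model never touches user data or group memberships.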

[–] TunaCowboy@lemmy.world 3 points 1 week ago
[–] Turbonics@lemmy.sdf.org 2 points 1 week ago

Defense against the dark arts lesson one:

[–] nroth@lemmy.world -4 points 1 week ago (4 children)

Unpopular opinion: It's OK to use AI to fight fraud as long as your data is good, your precision threshold is very high, and appeals are easy. It seems like it is almost never used in this way when people try to save money, sadly.

[–] kerrigan778@lemmy.world 7 points 1 week ago* (last edited 1 week ago)

Current AI is incapable of providing that level of data quality and precision, and it's uncertain whether the types of AI being developed now can ever achieve it without fundamentally changing how they work.

And his AI is said to have a 90% error rate, i.e. it denies valid claims 90% of the time.

[–] orcrist@lemm.ee 1 points 1 week ago

Define AI. Then you'll see that it has been used to fight fraud for decades.

[–] DankDingleberry@lemmy.world 0 points 1 week ago

I work in management at an insurance firm, and that's exactly what we do (use AI for fraud prevention). We have no interest in denying rightful coverage because in the long run it can cost you more than just paying outright (lawyer costs, interventions, bad PR, etc.). If you don't work in the industry, you have NO idea how many people try to cheat. It's ridiculous.