this post was submitted on 15 Oct 2024
126 points (97.7% liked)

Technology

top 26 comments
[–] palordrolap@fedia.io 52 points 1 month ago (1 children)

This has already been tried in at least one court.

There was that story a while back about the guy who was told by an airline's AI help-desk bot that he would get a ticket refund if it turned out he was unable to fly, only for the airline to say they had no such policy when he came to claim.

He had screenshots and said he wouldn't have bought the tickets in the first place if he had been told the correct policy. The AI basically hallucinated a policy, and the airline was ultimately found liable. Guy got his refund.

And the airline took down the bot.

[–] massive_bereavement@fedia.io 11 points 1 month ago (1 children)

Wasn't there a car dealer bot that was promising cars for 5 dollars or something like that?

[–] palordrolap@fedia.io 14 points 1 month ago

Interesting. A quick search around finds someone confusing a bot into selling them a Chevy Tahoe for $1 at the end of last year.

Can't tell whether that one went to court. I can see an argument that a reasonable person ought to think that something was wrong with the bot or the deal, especially since they deliberately confused the bot, making a strong case in favour of the dealership.

Now, if they'd haggled it down to half price without being quite so obvious, that might have made an interesting court case.

[–] sundray@lemmus.org 33 points 1 month ago (2 children)
[–] Alexstarfire@lemmy.world 8 points 1 month ago

But muh profit.

[–] Sludgehammer@lemmy.world 5 points 1 month ago (1 children)

Well, most human management can't be held accountable (unless they step on the toes of someone above them), so honestly, what would be the difference?

[–] Bakkoda@sh.itjust.works 2 points 1 month ago

Don't confuse can't for won't. Unacceptable behavior doesn't exist if it's accepted.

[–] alexc@lemmy.world 20 points 1 month ago (2 children)

Given the high costs of AI, isn’t it reasonable to assume that whoever stands to make a profit is equally liable for its outcomes?

[–] muntedcrocodile@lemm.ee 15 points 1 month ago (1 children)

Sounds great in theory, until you realise this is exactly the sort of law the big tech companies can afford to pay out under, and that will also be used to completely kill FOSS AI.

[–] HobbitFoot@thelemmy.club 5 points 1 month ago

The liability wouldn't be on the development, but the deployment.

[–] cheese_greater@lemmy.world 4 points 1 month ago* (last edited 1 month ago) (1 children)

whoever stands to make a profit is equally liable for its outcomes?

Oh, my Sweet Summer child.

[–] alexc@lemmy.world 8 points 1 month ago

Not to worry. I’m not so naive as to think that is how it will actually play out.

I’m sure like most things under capitalism, smaller companies will be liable, but we’ll bail out the big guys!

[–] Maggoty@lemmy.world 15 points 1 month ago

The corporation running the AI and the corporation using the AI. They should both pay the same fines. To be clear, two fines of the same size, not a single fine that's split.

[–] schizo@forum.uncomfortable.business 15 points 1 month ago (1 children)

I suspect that it's going to go the same route as the 'acting on behalf of a company' bit.

If I call Walmart, and the guy on the phone tells me that to deal with my COVID infection I want to drink half a gallon of bleach, and I then drink half a gallon of bleach, they're going to absolutely be found liable.

If I chat with a bot on Walmart, and it tells me the same thing, I'd find it shockingly hard to believe that the decisions from a jury would in any way be different.

It's probably even more complicated in that, while a human has free will (such as it is), the bot is only going to craft its response from the data it's trained on. So if it goes off the rails and starts spouting dangerous nonsense, it's probably an even EASIER case, because that means someone trained the bot that drinking bleach is a cure for COVID.

I'm pretty sure our legal frameworks will survive stupid AI, because it's already designed to deal with stupid humans.

[–] Letstakealook@lemm.ee 2 points 1 month ago (1 children)

Would a court find Walmart liable for your decision to take medical advice from a random employee? I'm sure Walmart could demonstrate that the employee was not acting in the capacity of their role, and that no reasonable person would consider drinking bleach just because an unqualified Walmart employee told them to.

[–] schizo@forum.uncomfortable.business 6 points 1 month ago (1 children)

I changed company names before posting and broke the clarity, sorry.

Imagine I wasn't an idiot and had said Walmart pharmacy, which is somewhere you'd expect that kind of advice.

[–] Letstakealook@lemm.ee 2 points 1 month ago (1 children)

That would make it more plausible. I don't think you're an idiot; I was asking because I was curious whether there was precedent for a jackass, conspiracy-minded employee handing out medical advice creating liability for a business. I wouldn't think it's right, but I also don't agree with other legal standards, lol.

Thankfully there's not: you'd expect someone at a pharmacy to provide reasonable medical advice, or your mechanic to tell you the right thing to do with your car. Once you walk outside the field where a reasonable person would reasonably expect what they're being told to be uh, reasonable, then there's usually no real case for liabilities.

Buuuuuut, in the US at least, this is entirely civil law, and that means the law is mostly whatever you can convince a jury of, so you can end up with some wacky shit happening.

[–] superkret@feddit.org 8 points 1 month ago* (last edited 1 month ago)

I don't see any legal issue here.
When a person or a company publishes software that causes harm or damages, that person or company is fully liable and legally responsible.

Whether they themselves understand what the software does is completely irrelevant. If they don't have control over its output, they shouldn't have published it.

[–] Wild_Mastic@lemmy.world 7 points 1 month ago

The morons who decided to use AI in places it shouldn't have been used in the first place.

[–] BMTea@lemmy.world 7 points 1 month ago

When it comes to deadly "mistakes" in a military context there should be strong laws preventing "appeal to AI fuckery", so that militaries don't get comfortable making such "mistakes."

[–] HobbitFoot@thelemmy.club 7 points 1 month ago

I feel like self driving cars are going to end up being the vanguard of deciding this, and I basically see it as mirroring human liability with a high standard where gross negligence becomes criminal. If a self driving car can be proven to be safer than a sober human, it will serve the public interest to allow them to operate.

[–] MonkderVierte@lemmy.ml 4 points 1 month ago

The company. This would also curb some wasted energy.

[–] pandapoo@sh.itjust.works 4 points 1 month ago (1 children)

I imagine there will be limits set, through precedent.

For example, if a customer is chatting with an AI bot regarding a refund for a pair of $89 sneakers, and the bot tells the customer to report to the nearest office to collect 1 million dollars, I can see the courts ruling that the plaintiff is not owed $1 million.

Although, if the plaintiff ended up flying a few states over to try and collect, maybe travel costs and lost wages? Who knows.

If a company is marketing fee-for-service legal advice, there might be a higher standard. Say a client was given objectively bad legal advice, the kind that attorneys get sanctioned or reprimanded for, and subsequently acted upon that advice. I think it's likely the courts would take a different approach and determine that the company bears a good bit of liability for damages.

Those are both just hypothetical generic companies and scenarios I made up to highlight how I can see the question of liability being determined by the courts, unless some superseding laws and regulations are enacted.

Or fuck it, maybe all AI companies have to do is put an arbitration clause in their T&C's, and then contract out to an AI arbitration firm. And wouldn't you know it, the arbitration AI model was only trained on cases hand-picked by Federalist Society interns.

[–] femtech@midwest.social 1 points 1 month ago

The one I want to see in court at some point is AI being able to give refunds or credits and someone getting it to give them more than it cost. Or having it create a 100% off promo code.
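This is also why, in practice, deployers tend not to let the model authorize refunds or mint promo codes directly. A minimal sketch (entirely hypothetical names and policy limits, not any real company's system) of the usual mitigation: the bot only *proposes* an action, and deterministic server-side code clamps it to actual policy before anything is committed.

```python
# Hypothetical guardrail: an LLM agent proposes a refund amount,
# but deterministic policy code validates it before committing.

MAX_REFUND_PCT = 100  # policy: never refund more than the purchase price


def validate_refund(order_total: float, proposed_refund: float) -> float:
    """Clamp an AI-proposed refund to what policy actually allows."""
    if proposed_refund < 0:
        raise ValueError("refund cannot be negative")
    # A bot that "agrees" to a million-dollar payout, or a promo code
    # worth more than the order, gets capped here instead of honored.
    return min(proposed_refund, order_total * MAX_REFUND_PCT / 100)


print(validate_refund(89.00, 1_000_000.00))  # capped to 89.0
```

Whether a capped-but-promised payout still creates liability for what the bot *said* is, of course, exactly the open legal question in this thread.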

[–] DumbAceDragon@sh.itjust.works 3 points 1 month ago

Everyone involved