this post was submitted on 05 Aug 2024
91 points (98.9% liked)

Technology

top 23 comments
[–] NeoNachtwaechter@lemmy.world 34 points 3 months ago (3 children)

When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

The person who allowed the AI to make these decisions autonomously.

We should handle it the way Asimov showed us: create "robot laws" that are similar to slavery laws:

In principle, the AI is a non-person and therefore a person must take responsibility.

[–] Nommer@sh.itjust.works 6 points 3 months ago

No, you see, the corporations will just lobby until the courts get enough money to classify AI as its own individual entity, just like with Citizens United.

[–] Nomecks@lemmy.ca 4 points 3 months ago* (last edited 3 months ago)

The whole point of Asimov's three laws was to show how they could never work in reality, because it would be very easy to circumvent them.

[–] RandomVideos@programming.dev 1 points 3 months ago

(At least in Romania) if a child commits a crime, the parents are punished.

The person allowing the AI to make these decisions should be punished until the AI is at least 15 years old (and killing it and replacing it with a clone of the AI, or a "better" AI with the same name, resets the age to 0).

[–] JohnDClay@sh.itjust.works 31 points 3 months ago (2 children)

The person who decided to use the AI

[–] chakan2@lemmy.world 10 points 3 months ago (1 children)

There are going to be a lot of instances going forward where you don't know you were interacting with an AI.

If there's a quality check on the output, sure, they're liable.

If a Tesla runs you into an ambulance at 80mph...the very expensive Tesla lawyers will win.

It's a solid quandary.

[–] JohnDClay@sh.itjust.works 4 points 3 months ago (1 children)

Why would the lawyer defendant not know they're interacting with AI? Would the AI-generated content appear to be actual case law? How would that confusion happen?

[–] chakan2@lemmy.world 1 points 3 months ago

Immediate things that come to mind are bots on Reddit. Twitter is 70% bot traffic. People interact with them all day every day and don't know.

That quickly spirals into customer service. If you're not talking to a guy with a thick Indian accent, it could be a bot at this point.

A lot of professional business services are exploring AI hard...what happens when one tells the business to do something monumentally stupid and said business does it? Is it the people who are training the AI? Is the machine at fault for a hallucination? Is it the poor schmuck at the bottom that pushed the delete button?

It's not cut and dried when you're interacting with a machine anymore.

[–] tal@lemmy.today 8 points 3 months ago (1 children)

My guess is that it's gonna wind up being a split, and it's not going to be unique to "AI" relative to any other kind of device.

There's going to be some kind of reasonable expectation for how a device using AI should act, and then if the device acts within those expectations and causes harm, it's the person who decided to use it.

But if the device doesn't act within those expectations, then it's not them; it may be the device manufacturer.

[–] JohnDClay@sh.itjust.works 4 points 3 months ago

Yeah, if the company making the AI makes false claims about it, then it'd be on them, at least partially.

[–] nullPointer@programming.dev 8 points 3 months ago* (last edited 3 months ago) (2 children)

If the source code for said accusing AI cannot be examined and audited by the defense, the state is denying the defendant their right to face their accuser. Mistrial.

[–] NeoNachtwaechter@lemmy.world 1 points 3 months ago

What determines the decisions/actions of an AI?

Hint: It is not source code.

[–] conciselyverbose@sh.itjust.works 0 points 3 months ago

This makes no sense. The source code isn't "their accuser" (never mind that they're very obviously not the defendant either).

AI is nothing but a distraction. It's not an entity. The negligence is exactly the same as it would be for any other piece of software doing something that caused harm.

It's rarely going to be criminal (though it should be, more often, regardless of "AI" nonsense, when company executives take grossly negligent shortcuts that kill people), but AI doesn't require any extra laws.

[–] towerful@programming.dev 7 points 3 months ago* (last edited 3 months ago) (1 children)

Follow sensible health and safety (H&S) rules.
Split the responsibility between the person who decided the AI is able to do this task and the company that sold the AI saying it's capable of it.

For the case of the purchasing company, obviously start with the person who chose that AI, then spread that responsibility up the employment chain: the manager who approved it, the manager's manager, all the way to the executive office and the company as a whole.
If the investigation shows that the purchasing company ignored sales advice, then it's all on the purchasing company.

If the investigation shows that the purchasing company followed the sales advice, then the responsibility is split, unless the purchasing company can show that they did due diligence in the purchase.
For the supplier, start with the person who sold that tech. If the investigation shows that the engineers approved the sales pitch, then that engineer's employment chain; if the salesperson ignored the devs, then the sales employment chain. Up to the executive level.

No scapegoats.
Whatever happens, the C-suite, the companies, and probably a lot of managers get hauled into court.
Make it rough for everyone in the chain of purchase and supply.
If the issue is a genuine mistake, then appropriate insurance will cover any damages. If the issue is actually fraud, then EVERYONE (and the company) from the level of the handover upwards should be punished.
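
Roughly, the routing described above might look like this (a minimal sketch; the function, categories, and weights are made up for illustration, not any legal standard):

```python
# Illustrative sketch of the liability routing described above;
# the categories and 50/50 split are hypothetical, not a legal rule.
def assign_liability(followed_sales_advice: bool, did_due_diligence: bool) -> dict[str, float]:
    if not followed_sales_advice:
        # Purchaser ignored the supplier's advice: all on the purchasing company.
        return {"purchaser": 1.0, "supplier": 0.0}
    if did_due_diligence:
        # Advice followed and the purchase was properly vetted: supplier carries it.
        return {"purchaser": 0.0, "supplier": 1.0}
    # Advice followed, but no due diligence shown: split the responsibility.
    return {"purchaser": 0.5, "supplier": 0.5}

print(assign_liability(followed_sales_advice=True, did_due_diligence=False))
# {'purchaser': 0.5, 'supplier': 0.5}
```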

[–] fubarx@lemmy.ml 6 points 3 months ago (2 children)

This topic came up back when self-driving first emerged. If a car runs over someone, who is to blame?

  • Person in driver seat
  • Dealer
  • Car manufacturer
  • Supplier who provided the driving control system
  • The people who designed the algorithm and did the ML training
  • People who wrote and tested the code
  • Insurer

Most of these would likely be indemnified by all kinds of legal and contractual agreements, but the fact would still stand that someone died.

[–] Badeendje@lemmy.world 1 points 3 months ago (1 children)

Throughout the entire chain based on value/value add. Not to the consumer.

So say a car manufacturer adds a shitty third-party self-driving system to their car, the license etc. is 100 euro per car, the car is 10k from the manufacturer, and the dealer sells it for 20k:

  • 100/20k for the 3rd party
  • 10k/20k for the manufacturer
  • 10k/20k for the dealer

Hmm, how would this work for a private re-sale... Still the dealer, imho.
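
As a rough sketch, that value-based split could be computed like this (the function and the damages figure are made up for illustration; shares are normalized so they add up to 100%):

```python
# Hypothetical sketch: split a damage award across the chain in proportion
# to each party's share of the sale price (figures from the example above).
def liability_split(contributions: dict[str, float], damages: float) -> dict[str, float]:
    total = sum(contributions.values())  # normalize so the shares sum to 100%
    return {party: damages * value / total for party, value in contributions.items()}

# 100-euro self-driving licence, 10k of manufacturer value, 10k of dealer margin
shares = liability_split(
    {"third_party_ai": 100, "manufacturer": 10_000, "dealer": 10_000},
    damages=1_000_000,
)
print(shares)  # third party carries ~0.5%, manufacturer and dealer ~49.75% each
```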

[–] conciselyverbose@sh.itjust.works 1 points 3 months ago* (last edited 3 months ago) (1 children)

Dealers don't (and shouldn't have to) validate safety features. If they're approved by the NHTSA, that's their responsibility handled.

It's all the manufacturer.

[–] Badeendje@lemmy.world 1 points 3 months ago

Fair enough.

[–] HauntedCupcake@lemmy.world 1 points 3 months ago

An insurer is an interesting one for sure. They'd have the stats of how many times that AI model makes mistakes and be able to charge accordingly. They'd also have the funds and evidence to go after big corps if their AI was faulty.

They seem like a good starting point, until negligence elsewhere can be proven.
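
For what it's worth, "charging accordingly" could be as simple as pricing the expected loss from those mistake stats; a minimal sketch with made-up names and numbers:

```python
# Hypothetical premium sketch: expected annual loss from an AI system's
# observed mistake rate, plus a loading for overhead and profit.
# All names and numbers here are made up for illustration.
def annual_premium(mistakes_per_decision: float,
                   decisions_per_year: int,
                   avg_claim_cost: float,
                   loading: float = 0.3) -> float:
    expected_loss = mistakes_per_decision * decisions_per_year * avg_claim_cost
    return expected_loss * (1 + loading)

# e.g. 1 harmful mistake per 100,000 decisions, 2M decisions a year, 50k average claim
print(annual_premium(1e-5, 2_000_000, 50_000))  # -> 1300000.0
```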

[–] technocrit@lemmy.dbzer0.com 5 points 3 months ago* (last edited 3 months ago)

"AI" (aka a computer) doesn't make mistakes. People make the mistake of using AI.