[–] RunawayFixer@lemmy.world 3 points 6 days ago (1 children)

You can't create an automated machine, let it run loose without supervision, and then claim not to be responsible for what the machine does.

Maybe, just maybe, this was the very first instance of their AI malfunctioning (which I don't believe for a second). In that case, the correct response from Brandshield would have been to announce that they would temporarily suspend this particular program and promise to implement improvements so that it would not happen again. Brandshield has done neither, which tells me that it's not the first time, and also that Brandshield has no intention of preventing it from happening again in the future.

[–] JackbyDev@programming.dev 1 points 6 days ago (1 children)

I'm not trying to exonerate them of any blame; I'm just saying "knowingly" implies a human looking at something and making a decision, as opposed to a machine making a mistake.

[–] RunawayFixer@lemmy.world 0 points 6 days ago (1 children)

I made an automaton. I set its parameters in such a way that there is a large variability in the actions my automaton can take, and my parameters do not prevent my automaton from taking certain illegal actions. I set my automaton loose. After some time it turns out that my automaton has taken an illegal action against a specific person. Did I know that my automaton was going to commit an illegal action against that specific person? No, I did not. Did I know that my automaton was sooner or later going to commit certain illegal actions? Yes, I did, because those actions are within the parameters of the automaton. I know my automaton is capable of illegal actions, and given enough incidents there is an absolute certainty that it will commit them. I do not need to interact with my automaton in any way to know that some of its actions will be illegal.
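A minimal sketch of that argument in Python (purely illustrative: the action names, the uniform random choice, and the automaton itself are assumptions for the example, not a description of any real system). If the parameters allow a forbidden action at all, the probability that it is eventually taken approaches certainty as the number of runs grows.

```python
import random

# Hypothetical action space: the parameters do not exclude the forbidden action.
ACTIONS = ["flag_site", "send_notice", "ignore", "false_takedown"]
FORBIDDEN = "false_takedown"  # the "illegal" action in this toy example

def automaton_ever_misbehaves(steps: int, seed: int = 0) -> bool:
    """Run the toy automaton for `steps` independent actions and report
    whether it ever picked the forbidden action."""
    rng = random.Random(seed)
    return any(rng.choice(ACTIONS) == FORBIDDEN for _ in range(steps))

# With 4 equally likely actions, the chance of at least one forbidden
# action in n steps is 1 - (3/4)**n, which tends to 1 as n grows.
for n in (1, 10, 100):
    print(f"n={n:3d}  P(at least one illegal action) = {1 - (3 / 4) ** n:.4f}")
```

The exact probabilities don't matter; the point is that knowing the parameters permit the action is enough to know it will eventually happen, without ever watching a single run.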

[–] JackbyDev@programming.dev 0 points 6 days ago (1 children)

> I'm not trying to exonerate them of any blame

[–] RunawayFixer@lemmy.world 1 points 6 days ago* (last edited 6 days ago)

And I'm not saying that you are. I was trying to show with a parable that they do not need to see their machine's actions to know that some of its actions are illegal. That's what we were disagreeing on: whether they know.