this post was submitted on 25 Nov 2023
1 points (100.0% liked)

Technology

all 20 comments
[–] HiddenLayer5@lemmy.ml 0 points 2 years ago* (last edited 2 years ago) (1 children)

Remember: there is no such thing as an "evil" AI. There are only evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

[–] phoneymouse@lemmy.world 0 points 2 years ago* (last edited 2 years ago) (1 children)

Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

[–] cosmicrookie@lemmy.world -1 points 2 years ago* (last edited 2 years ago)

Especially one that is made to kill everybody except their own. Let it replace the police. I'm sure the quality control would be a tad stricter then.

[–] BombOmOm@lemmy.world 0 points 2 years ago* (last edited 2 years ago) (1 children)

As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

[–] Chuckf1366@sh.itjust.works 0 points 2 years ago (1 children)

Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

[–] FaceDeer@kbin.social 0 points 2 years ago (1 children)

Imagine a mine that could recognize "that's just a child/civilian/medic stepping on me, I'm going to save myself for an enemy soldier." Or a mine that could recognize "ah, CenCom just announced a ceasefire, I'm going to take a little nap." Or "the enemy soldier that just stepped on me is unarmed and frantically calling out that he's surrendered, I'll let this one go through. Not the barrier troops chasing him, though."

There are opportunities for good here.

[–] Nudding@lemmy.world -1 points 2 years ago

Lmao are you 12?

[–] Immersive_Matthew@sh.itjust.works -1 points 2 years ago (1 children)

We are all worried about AI, but it is humans I worry about: how we will use AI, not the AI itself. I am sure that when electricity was invented people feared it too, but it was how humans used it that was, and is, the risk.

[–] shrugal@lemm.ee 0 points 2 years ago (1 children)

Both, honestly. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.

It will go haywire in areas for sure.

[–] cosmicrookie@lemmy.world -1 points 2 years ago* (last edited 2 years ago) (2 children)

It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than if it were a human choice. You can't punish AI for doing something wrong. AI doesn't require a raise for doing something right, either.

[–] zalgotext@sh.itjust.works 0 points 2 years ago (1 children)

You can't punish AI for doing something wrong.

Maybe I'm being pedantic, but technically you do punish an AI when it does something "wrong" during training, just like you reward it for doing something right.
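Roughly, in something like tabular Q-learning (a toy sketch, not anything from the article; all the state and action names here are made up), "punishment" is just a negative reward that lowers the estimated value of an action, making the agent less likely to repeat it:

```python
# Toy illustration of reward/punishment in training.
# A negative reward "punishes" an action by lowering its learned value;
# a positive reward does the opposite.

ALPHA = 0.5  # learning rate

# Q-values: how good each action currently looks in a single toy state.
q_values = {"fire": 0.0, "hold": 0.0}

def update(action: str, reward: float) -> None:
    """One tabular Q-learning style update for a one-step episode."""
    q_values[action] += ALPHA * (reward - q_values[action])

update("fire", -1.0)   # "punish": value of "fire" drops to -0.5
update("hold", +1.0)   # "reward": value of "hold" rises to 0.5

print(q_values)
```

The point being that "punishment" here only shapes future behavior during training; it isn't accountability after the fact.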

[–] cosmicrookie@lemmy.world -1 points 2 years ago

But that is during training. My point was that you can't punish an AI for making a mistake when it's used in combat situations, which is very convenient for anyone who intentionally wants that mistake to happen.

[–] reksas@lemmings.world 0 points 2 years ago* (last edited 2 years ago) (1 children)

That is like saying you can't punish a gun for killing people.

edit: meaning that it's redundant to talk about not being able to punish AI, since it can't feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

[–] cosmicrookie@lemmy.world -1 points 2 years ago

Sorry, but this is not a valid comparison. What we're talking about here is a gun with AI built in that decides whether it should pull the trigger or not. With a regular gun you always have a human pressing the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether it should fire. Who is accountable for the death in that case?

[–] cosmicrookie@lemmy.world -1 points 2 years ago (1 children)

The only fair approach would be to start with the police instead of the army.

Why test this on everybody else except your own? On top of that, AI might even do a better job than the US police.

[–] ultra@feddit.ro 0 points 2 years ago (1 children)

But that AI would have to be trained on existing cops, so it would just shoot every black person it sees.

[–] cosmicrookie@lemmy.world -1 points 2 years ago

My point being that there would be more motivation to filter Derek Chauvin-type cops out of the AI's training data than a soldier with a trigger finger.

[–] Silverseren@kbin.social -1 points 2 years ago

The sad part is that the AI might be more trustworthy than the humans in control.