[–] TwilightVulpine@lemmy.world 13 points 10 months ago (1 children)

There are always small hardware quirks to account for, but when we are talking about machine learning, which is not directly programmed, it makes less sense to blame the developers.

The issue is that computer systems are now used to whitewash mistakes or biases with a veneer of objective impartiality. Even an accounting system's results are taken as fact.

Consider that an AI trained with data from the history of policing and criminal cases might make racist decisions, because the dataset includes plenty of racist bias, but it's very easy for the people using it to say "welp, the machine said it so it must be true". The responsibility for mistakes is also abstracted away, because the user and even the software provider might say they had nothing to do with it.

[–] Teluris@lemmy.world 2 points 10 months ago (1 children)

In the example you gave I would actually put the blame on the software provider. It wouldn't be ridiculously difficult to anonymize the data: get rid of names, race, and gender, and leave only the information about the crime committed, the evidence, any extenuating circumstances, and the judgment.

It's more difficult than simply throwing in all the data, but it can and should be done. It could still contain some bias, based on things like the location of the crime, but the bias would already be greatly reduced.
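
As a rough illustration of that anonymization step, here is a minimal sketch in Python/pandas; the case records and column names are hypothetical, not taken from any real system:

```python
# Minimal sketch of the anonymization step described above.
# The DataFrame and its column names are invented for illustration.
import pandas as pd

cases = pd.DataFrame({
    "name":            ["A. Smith", "B. Jones"],
    "race":            ["white", "black"],
    "gender":          ["M", "F"],
    "offense":         ["theft", "theft"],
    "evidence":        ["cctv", "witness"],
    "mitigating":      [True, False],
    "sentence_months": [6, 12],
})

# Drop the personally identifying / protected attributes before training,
# keeping only the crime, evidence, extenuating circumstances, and judgment.
anonymized = cases.drop(columns=["name", "race", "gender"])
print(anonymized)
```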

[–] TwilightVulpine@lemmy.world 6 points 10 months ago* (last edited 10 months ago) (1 children)

I don't think you can completely anonymize data and still end up with useful results, because the AI will be faced with human inconsistency and biases regardless. Take away personally identifiable information and it might mysteriously start behaving more harshly toward certain locations, like, you know, districts where mostly black and poor people live.
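
Here is a minimal sketch of that proxy effect, with entirely invented data: race never appears as a feature, but because a district variable is correlated with the biased historical outcomes, the model reproduces the disparity anyway:

```python
# Sketch of proxy bias: race is never a feature, yet a correlated feature
# (district) lets the model reproduce the historical disparity.
# All data and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical: district 1 is the poorer, mostly-black district.
district = rng.integers(0, 2, n)
severity = rng.normal(0, 1, n)

# Historical "harsh outcome" labels were biased against district 1,
# independent of how severe the offense actually was.
harsh = (severity + 1.5 * district + rng.normal(0, 1, n)) > 1

# Train on severity and district only -- no race column anywhere.
X = np.column_stack([severity, district])
model = LogisticRegression().fit(X, harsh)

for d in (0, 1):
    mask = district == d
    print(f"district {d}: predicted harsh rate = {model.predict(X[mask]).mean():.2f}")
```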

We'd need to have a reckoning with our societal injustices before we can determine what data can be used for many purposes. Unfortunately many people who are responsible for these injustices are still there, and they will be the people who will determine if the AI output is serving their purpose or not.

[–] HauntedCupcake@lemmy.world 5 points 10 months ago

The "AI" that I think is being referenced is one that instructs officers to more heavily patrol certain areas based on crime statistics. As racist officers often patrol black neighbourhoods more heavily, the crime statistics are higher (more crimes caught and reported as more eyes are there). This leads to a feedback loop where the AI looks at the crime stats for certain areas, picks out the black populated ones, then further increases patrols there.

In the above case, no details about the people are needed, only the location, time, and severity of the crime. The AI is still being racist despite race not being in the dataset.
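
That feedback loop is easy to reproduce in a toy simulation. Below is a sketch (all numbers invented) where both areas have the same true crime rate, but one starts with more recorded crime simply because it was patrolled more heavily; a naive "send patrols where the stats are highest" rule then locks in and widens the gap:

```python
# Toy simulation of the patrol feedback loop. Both areas have identical
# underlying crime; area_B merely starts with more *recorded* crime because
# it was historically over-patrolled. All numbers are made up.
import random

random.seed(0)
true_rate = {"area_A": 0.5, "area_B": 0.5}   # identical real crime rates
recorded  = {"area_A": 10,  "area_B": 30}    # biased historical statistics

for day in range(1, 11):
    # The "predictive" system sends today's patrol to the area with the
    # highest recorded crime so far.
    target = max(recorded, key=recorded.get)
    # Crime only gets recorded where officers are looking.
    if random.random() < true_rate[target]:
        recorded[target] += 1
    print(f"day {day:2d}: patrol -> {target}, recorded = {recorded}")

# area_B keeps "winning" the patrol, so only its statistics grow and the
# initial bias is never corrected, even though the true rates are equal.
```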