[–] barsoap@lemm.ee 31 points 4 months ago* (last edited 4 months ago) (2 children)

The way to use these kinds of systems is to have the judge come to an independent decision first; then, after that's keyed in, the AI spits out its assessment, and whichever of the two predicts more danger is acted on.
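
Something like this, as a quick TypeScript sketch (the names and the risk scale are made up just to illustrate the rule):

// The judge keys in their assessment before seeing the model's output;
// only then is the model's score revealed, and the higher of the two wins.
type Risk = 0 | 1 | 2 | 3; // 0 = negligible ... 3 = extreme

function finalRisk(judgeRisk: Risk, modelRisk: Risk): Risk {
    return Math.max(judgeRisk, modelRisk) as Risk;
}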

Relatedly, the way you have an AI select people and companies to be spot-checked by tax investigators is not to show investigators the AI scores, but to mix the AI-flagged cases into a stream of randomly selected ones.
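
Roughly like this (another TypeScript sketch; the names, the 30% share and the shuffle helper are my own invention, not from the article):

// Mix AI-flagged cases into a pool of random picks and shuffle, so the
// investigator can't tell which case came from the model and which from chance.
function buildAuditQueue(aiFlagged: string[], everyone: string[], nAudits: number): string[] {
    const nAi = Math.min(Math.floor(nAudits * 0.3), aiFlagged.length);
    const aiPicks = shuffle(aiFlagged).slice(0, nAi);
    const randomPicks = shuffle(everyone.filter(id => !aiPicks.includes(id))).slice(0, nAudits - nAi);
    return shuffle(aiPicks.concat(randomPicks));
}

// Fisher–Yates shuffle, returning a new array.
function shuffle<T>(arr: T[]): T[] {
    const a = arr.slice();
    for (let i = a.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [a[i], a[j]] = [a[j], a[i]];
    }
    return a;
}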

Relatedly, the way you have AI involved in medical diagnosis is not to tell the human doctor the result, but to suggest additional tests to run. The "have you ruled out lupus" approach.
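
In interface terms (again just a hypothetical TypeScript sketch): the model's own verdict never leaves the box, only follow-up prompts do.

interface ModelOutput {
    diagnosis: string;        // kept internal, never shown to the doctor
    differentials: string[];  // conditions the model hasn't ruled out yet
}

function promptsForDoctor(output: ModelOutput): string[] {
    return output.differentials.map(d => `Have you ruled out ${d}?`);
}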

And from what I've heard, the medical profession actually got that right from the very beginning. They know what priming and bias are. Law enforcement? I fear we'll have to ELI5 the basics to them for the next five hundred years.

[–] madsen@lemmy.world 14 points 4 months ago* (last edited 4 months ago) (1 children)

I don't think there's any AI involved. The article mentions nothing of the sort; the system is at least ~~8~~ 17 years old (according to the article), and the input is 35 yes/no questions, so it's probably just some points assigned for the answers and maybe some simple arithmetic.

Edit: Upon a closer read I discovered the algorithm was much older than I first thought.

[–] barsoap@lemm.ee 5 points 4 months ago (1 children)

Sounds like an expert system then (just judging by the age), which was what counted as AI before the whole machine-learning craze. In any case, you need to take the same kind of care when integrating them into whatever real-world structures exist.

Medicine used them with quite some success; the problem is that they take a long time to develop, because humans need to input the expert knowledge, and then they get outdated quite quickly.
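
For flavour, a toy rule in TypeScript (conditions and advice invented by me, purely to show what "humans input expert knowledge" looks like):

type Facts = Record<string, boolean>;

// Hand-written rules, no learning involved; a domain expert encodes each one.
const rules: Array<[(f: Facts) => boolean, string]> = [
    [f => f.fever && f.jointPain && f.rash, "consider a lupus workup"],
    [f => f.fever && f.stiffNeck,           "consider a meningitis workup"],
];

function advise(facts: Facts): string[] {
    return rules.filter(([cond]) => cond(facts)).map(([, advice]) => advice);
}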

Back to the system, though: 35 questions is not enough for this kind of assessment. And that's not an issue of the number of questions, but of things like body language and tone of voice not being included.

> so it’s probably just some points assigned for the answers and maybe some simple arithmetic.

Why yes, that's all that machine learning is, a bunch of statistics :)

[–] madsen@lemmy.world 2 points 4 months ago* (last edited 4 months ago)
> > so it’s probably just some points assigned for the answers and maybe some simple arithmetic.
>
> Why yes, that’s all that machine learning is, a bunch of statistics :)

I know, but that's not what I meant. I mean literally something as simple and mundane as assigning points per answer and evaluating the final score:

// Pseudo code
risk = 0
if (Q1 == true) {
    risk += 20
}
if (Q2 == true) {
    risk += 10
}
// etc...
// Maybe throw in a bit of
if (Q28 == true) {
    if (Q22 == true and Q23 == true) {
        risk *= 1.5
    } else {
        risk += 10
    }
}

// And finally, evaluate the risk:
if (risk < 10) {
    return "negligible"
} else if (risk >= 10 and risk < 40) {
    return "low risk"
}
// etc... You get the picture.

And yes, I know I can just write `if (Q1) {`, but I wanted to make it a bit more accessible for non-programmers.

The article gives absolutely no reason for us to assume it's anything more than that, and I apparently missed the part of the article that mentioned the system had been in use since 2007. I know we had machine learning back then too, but looking at the project description here: https://eucpn.org/sites/default/files/document/files/Buena%20practica%20VIOGEN_0.pdf it looks more like they looked at a set of cases (2,159) and came up with the 35 questions and a scoring system not unlike what I just described above.

Edit: I managed to find this, which has apparently since been taken down (but thanks to archive.org it's still available): https://web.archive.org/web/20240227072357/https://eticasfoundation.org/gender/the-external-audit-of-the-viogen-system/

> VioGén’s algorithm uses classical statistical models to perform a risk evaluation based on the weighted sum of all the responses according to pre-set weights for each variable. It is designed as a recommendation system but, even though the police officers are able to increase the automatically assigned risk score, they maintain it in 95% of the cases.

... which incidentally matches what the article says (that police maintain the VioGen risk score in 95% of the cases).

[–] match@pawb.social 4 points 4 months ago

But that doesn't save money, and the only reason the capitalists want AI is to save money.