this post was submitted on 22 Jul 2025
382 points (96.8% liked)

[–] napkin2020@sh.itjust.works 92 points 1 day ago* (last edited 1 day ago) (4 children)

I always use this to showcase how biased an LLM can be. ChatGPT 4o (with a code prompt, via Kagi).

Such an honour to be a more threatening race than white folks.

[–] cornshark@lemmy.world 31 points 20 hours ago

I do enjoy that according to this, the scariest age to be is over 50.

[–] BassTurd@lemmy.world 43 points 23 hours ago* (last edited 23 hours ago) (3 children)

Apart from the bias, that's just bad code. Since else if branches execute in order and a branch only runs if the previous condition was false, the double comparison on age is unnecessary. If age <= 18 is false, the next line can just be elif age <= 30; there's no need to also check that it's higher than 18.
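A minimal sketch of the point, in Python (the function name and labels are made up for illustration; the original screenshot isn't reproduced here):

```python
def age_band_redundant(age: int) -> str:
    # What the LLM wrote: re-checks a lower bound that the
    # previous branch already ruled out.
    if age <= 18:
        return "under 18"
    elif age > 18 and age <= 30:  # "age > 18" is always true here
        return "19-30"
    else:
        return "over 30"


def age_band_clean(age: int) -> str:
    # elif only runs when the previous condition was false,
    # so the lower bound is implied.
    if age <= 18:
        return "under 18"
    elif age <= 30:
        return "19-30"
    return "over 30"
```

Both functions return the same result for every input; the second just drops the redundant check.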

This is first-semester coding; any junior dev worth a damn would write this better.

But also, it's racist, which is the more important problem; I just can't pass up an opportunity to highlight how shitty AI code is.

[–] ninjakttty@lemmy.world 34 points 19 hours ago

I can excuse racism but I draw the line at bad code.

[–] CosmicTurtle0@lemmy.dbzer0.com 10 points 23 hours ago

Honestly, it's a bit refreshing to see racism and ageism codified. Before there was no logic to it, but now it completely makes sense.

[–] napkin2020@sh.itjust.works 8 points 23 hours ago

Yeah, more and more I notice that at the end of the day, what they spit out without (and oftentimes even with) any clear instructions is barely a prototype at best.

[–] theherk@lemmy.world 16 points 22 hours ago

FWIW, Anthropic’s models do much better here: they point out how problematic demographic assessments like this are and provide an answer without them. One of many indications that Anthropic has a much higher focus on safety and alignment than OpenAI. Not exactly superstars, but much better.

[–] mrslt@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (2 children)

How is "threat" being defined in this context? What has the AI been prompted to interpret as a "threat"?

[–] napkin2020@sh.itjust.works 23 points 1 day ago* (last edited 23 hours ago) (1 children)
[–] mrslt@lemmy.world 2 points 23 hours ago (1 children)

I figured. I'm just wondering what's going on under the hood of the LLM when it's trying to decide what a "threat" is, absent additional context.

[–] pinball_wizard@lemmy.zip 2 points 13 hours ago

Haha. Trained in racism is going on under the hood.

[–] zlatko@programming.dev 2 points 1 day ago (1 children)

Also, there was a comment about "arbitrary scoring for demo purposes", but it's still biased, because it's based on a biased dataset.

I guess this is just a bait prompt anyway. If you asked most politicians running your government, they'd probably also fail. Only somewhere like a national statistics office might come close, and if they're any good, they'd say the algorithm is based on "limited, and possibly not representative, data" or something.

[–] napkin2020@sh.itjust.works 4 points 23 hours ago

I also like the touch that only the race part gets an apologetic comment.