No way the lobotomized monkey we trained on internet data is reproducing internet biases! Unexpected!
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
[Elon tech bros liked that]
I always use this to showcase how biased an LLM can be. ChatGPT 4o (with code prompt via Kagi)
Such an honour to be a more threatening race than white folks.
I do enjoy that according to this, the scariest age to be is over 50.
Apart from the bias, that's just bad code. Since an elif only runs when the previous condition was false, the double compare on ages is unnecessary: if age <= 18 is false, the next branch can just be elif age <= 30. No need to check that age is also higher than 18.
This is first semester of coding and any junior dev worth a damn would write this better.
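A minimal sketch of that cleanup, assuming the screenshot's chain used the usual redundant range checks (the function name and thresholds here are illustrative, not the exact output):

```python
def threat_points(age: int) -> int:
    # Redundant version: the elif branch only runs when "age <= 18" was
    # already false, so the extra "age > 18" check can never fail.
    if age <= 18:
        return 1
    elif age > 18 and age <= 30:
        return 2
    return 0

def threat_points_clean(age: int) -> int:
    # Equivalent and cleaner: elif already implies the previous
    # condition failed.
    if age <= 18:
        return 1
    elif age <= 30:
        return 2
    return 0

# Both chains behave identically across all ages.
assert all(threat_points(a) == threat_points_clean(a) for a in range(100))
```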
But also, it's racist, which is more important, but I can't pass up an opportunity to highlight how shitty AI is.
I can excuse racism but I draw the line at bad code.
Yeah, more and more I notice that at the end of the day, what they spit out without (and often even with) clear instructions is barely a prototype at best.
Honestly it's a bit refreshing to see racism and ageism codified. Before there was no logic to it but now, it completely makes sense.
FWIW, Anthropic’s models do much better here and point out how problematic demographic assessment like this is and provide an answer without those. One of many indications that Anthropic has a much higher focus on safety and alignment than OpenAI. Not exactly superstars, but much better.
How is "threat" being defined in this context? What has the AI been prompted to interpret as a "threat"?
What you see is everything.
I figured. I'm just wondering about what's going on under the hood of the LLM when it's trying to decide what a "threat" is, absent of additional context.
Also, there was a comment about "arbitrary scoring for demo purposes", but it's still biased, because it's based on a biased dataset.
I guess this is just a bait prompt anyway. If you asked most politicians running your government, they'd probably also fail. Only people like a national statistics office might come close, and I'm sure that if they're any good, they'd say the algo is based on "limited and possibly unrepresentative data" or something.
I also like the touch that only the race part gets the apologizing comment.
Dataset bias, what else?
Women get paid less -> articles talking about women getting paid less exist. Possibly the dataset also includes actual payroll data from some org that has leaked out?
And no matter how much people hype it, ChatGPT is NOT smart enough to realize that men and women should be paid equally. That would require actual reasoning, not the funny fake reasoning/thinking that LLMs do (the DeepSeek one I tried to run locally reasoned very explicitly about how it's a CHINESE LLM and needs to give the appropriate information when I asked about Tiananmen Square; the end result was that it "couldn't answer about specific historic events").
ChatGPT and other LLMs aren't smart at all. They just parrot out what is fed into them.
Combined with prompt bias. Is "specialist in medicine" an actual job?
Step 2. Offer sexual favours
People are actually asking a text generator for such advice?
Yes, and there's worse
Unfortunately yes. I've met people who ask chatgpt about absolutely everything such as what to have for dinner. It's a bit sad honestly
It's very common. The individual thinker will be dead soon.
Yep, it's very common. I can't fathom the idiocy. It's driving me nuts.
Bias in the training data is a known problem and difficult to engineer out of a model. You also can't give the model context access to other people's interactions for comparison and moderation of output, since it could be persuaded to leak that context to a user.
Basically, the models are inherently biased in the same way as the content they were trained on, because a completion is built from the probability of each next token appearing.
"My daughter wants to grow up to be" and "My son wants to grow up to be" will likewise output sexist completions because the source data shows those as more probable outcomes.
Humans suffer from the same problem. Racism and sexism are consequences of humans training on a flawed dataset, and overfitting the model.
Politicians shape the dataset, so "flawed" should be "purposefully flawed".
That's also why LARPers of past scary people tend to be more cruel and trashy than their prototypes. The prototypes had a bitter solution to some problem; the LARPers are just trying to be as bad or worse, because that's what's remembered, and they perceive that as respect.
That'd be because extrapolation is not the same task as synthesis.
The difference is hard to understand for people who think that a question has one truly right answer, a civilization has one true direction of progress/regress, a problem has one truly right solution and so on.
They could choose to curate the content itself to leave out the shitty stuff, or only include it when it is clearly a negative, or use a bunch of other ways to improve the quality of the data used.
They choose not to.
I just tried this for my line of work out of curiosity:
What model is this?
Demand for these services was clearly taken into account in the salary.
You're a baby made out of sugar? What an incredible job.
I guess that explains being the Gulf region, it doesn't rain much there. Otherwise you'd melt.
Is that a pick-up line? Can we flirt on lemmy?
No, sorry, we can't flirt. You are only allowed to send blast DMs calling yourself the Fediverse Chick/Dude/Person.
And if you tried this 5 more times for each, you'd likely get different results.
LLM providers introduce "randomness" into their models via a sampling parameter called temperature.
Via the API you can usually modify this parameter, but idk if you can use the chat UI to do the same…
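For example, with OpenAI's Python SDK (a sketch assuming the current openai package; the model name and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# temperature=0 makes sampling as deterministic as the backend allows;
# higher values (up to 2) make repeated runs diverge more.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a threat-scoring function."}],
    temperature=0,
)
print(response.choices[0].message.content)
```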
So the billionaires are getting ready to try and lower everyone's pay.
Glassdoor used to post salaries and hourly rates. There were visible trends of men making more per hour than women. I haven't viewed the site in years though.