this post was submitted on 03 Feb 2026
131 points (99.2% liked)

Technology

top 15 comments
[–] Zwuzelmaus@feddit.org 10 points 7 hours ago (1 children)

In the majority of cases, Grok returned sexualized images, even when told the subjects did not consent

So all the countries that block this sh*t-spitting machine are right.

[–] wuzzlewoggle@feddit.org 1 points 42 minutes ago

You don't have to self-censor here. Go ahead and say shit.

[–] SarahValentine@lemmy.blahaj.zone 22 points 9 hours ago (1 children)

Shit made by rapists doesn't respect consent. Color me surprised.

[–] XLE@piefed.social 8 points 8 hours ago (1 children)

It's almost like privacy violations and mass data collection for AI are fundamental violations of consent too.

[–] atomicbocks@sh.itjust.works 7 points 7 hours ago (1 children)

It’s a machine. It doesn’t understand the concept of consent or anything else for that matter.

[–] Zwuzelmaus@feddit.org 2 points 7 hours ago (1 children)

It still has to observe rules it doesn't understand. Like everybody else.

[–] ag10n@lemmy.world 1 points 7 hours ago (2 children)

It can’t; it’s software that needs a governing body to dictate the rules.

[–] Zwuzelmaus@feddit.org 1 points 6 hours ago (1 children)

It can’t

...and since when is that an excuse?

[–] ag10n@lemmy.world 2 points 5 hours ago

It’s not an excuse; it doesn’t think or reason.

Unless the software owner sets the governing guardrails, it cannot act, present, or redact the way a human can.
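
To put that concretely: a "guardrail" in this sense is just ordinary code the operator wraps around the model, not anything the model understands. Here is a minimal, purely hypothetical Python sketch (the term list and function names are invented for illustration and have nothing to do with xAI's actual stack):

```python
# Hypothetical illustration of an operator-set guardrail: a plain filter
# wrapped around the generation call. The model never "understands" consent;
# the wrapper simply refuses certain requests before the model is invoked.

BLOCKED_TERMS = {"non-consensual", "undress", "nude of"}  # invented policy list

def generate_image(prompt: str) -> str:
    # Stand-in for a real image-generation call.
    return f"<image for: {prompt}>"

def guarded_generate(prompt: str) -> str:
    """Refuse by operator policy before the model ever runs."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Request refused by operator policy."
    return generate_image(prompt)

print(guarded_generate("undress this photo of my coworker"))
# -> Request refused by operator policy.
```

Whether a request like that actually gets refused depends entirely on what the operator chooses to put in that wrapper.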

[–] SarahValentine@lemmy.blahaj.zone 1 points 7 hours ago (2 children)

The rules are in its code. It was not designed with ethics in mind, it was designed to steal IP, fool people into thinking it's AI, and be profitable for its creators. They wrote the rules, and they do not care about right or wrong unless it impacts their bottom line.

[–] jacksilver@lemmy.world 1 points 7 minutes ago

The issue is more that there aren't rules. Given the billions of parameters that define how these models work, there isn't really a way to ensure they can't produce unwanted content.

[–] ag10n@lemmy.world 1 points 5 hours ago (1 children)

That’s the point: there has to be a human in the loop who sets explicit guardrails.

No, the point is the humans are there, but they're the wrong kind of humans who make the wrong kind of guardrails.

[–] XLE@piefed.social 4 points 7 hours ago

Some news sources continue to claim Elon has disabled the generation of CSAM on his social site. But as long as the "guardrails" used by AI companies are as vague as AI instructions themselves, they can't be trusted in the best of times, let alone on Elon Musk's Twitter.