this post was submitted on 11 Dec 2025
451 points (96.5% liked)

Technology

[–] Devial@discuss.online 129 points 2 days ago* (last edited 2 days ago) (4 children)

The article headline is wildly misleading, bordering on being just a straight-up lie.

Google didn't ban the developer for reporting the material; they didn't even know he had reported it, because he did so anonymously, and to a child protection org, not to Google.

Google's automated tools correctly flagged the CSAM when he unzipped the data, and his account was subsequently nuked.

Google's only failure here was not unbanning him on his first or second appeal. And whilst that is absolutely a big failure on Google's part, I find it very understandable that, generally speaking, the appeals team won't accept "I didn't know the folder I uploaded contained CSAM" as a valid reason for an appeal.

It's also kind of insane how this article somehow makes a bigger deal out of this developer being temporarily banned by Google than it does of the fact that hundreds of CSAM images were freely available online, and openly shareable by anyone and to anyone, for god knows how long.

[–] forkDestroyer@infosec.pub 6 points 1 day ago (1 children)

I'm being a bit extra but...

Your statement:

The article headline is wildly misleading, bordering on being just a straight-up lie.

The article headline:

A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It

The general story in reference to the headline:

  • He found CSAM in a known AI dataset, which he had stored in his account.
  • Google banned him for having this data in his account.
  • The article mentions that he tripped the automated monitoring tools.

The article headline is accurate if you interpret it as

"A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It" ("it" being "csam").

The article headline is inaccurate if you interpret it as

"A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It" ("it" being "reporting csam").

I read it as the former, because the act of reporting isn't mentioned in the headline at all.

^___^

[–] Blubber28@lemmy.world 1 points 1 day ago

This is correct. However, many websites/newspapers/magazines/etc. love to get more clicks with sensational headlines that are technically true but can easily be interpreted as something much more sinister/exciting. This headline is a great example. While you interpreted it correctly, or claim to at least, there will be many people who initially interpret it the second way you described. Me among them, admittedly. And the people deciding on the headlines are very much aware of that. Therefore, the headline can absolutely be deemed misleading, for while it is absolutely a correct statement, there are less ambiguous ways to phrase it.

[–] cupcakezealot@piefed.blahaj.zone 1 points 1 day ago (1 children)

so they got mad because he reported it to an agency that actually fights CSAM instead of to them, so they could sweep it under the rug?

[–] Devial@discuss.online 17 points 1 day ago* (last edited 1 day ago) (1 children)

They didn't get mad, they didn't even know that he reported it, and they have no reason or incentive to sweep it under the rug, because they have no connection to the data set. Did you even read my comment?

I hate Alphabet as much as the next person, but this feels like you're just trying to find any excuse to hate on them, even if it's basically a made-up reason.

[–] cupcakezealot@piefed.blahaj.zone -3 points 1 day ago (2 children)

they obviously did if they banned him for it; and if they're training on CSAM and refuse to do anything about it, then yeah, they have a connection to it.

[–] Devial@discuss.online 5 points 1 day ago* (last edited 1 day ago)

Also, the data set wasn't hosted, created, or explicitly used by Google in any way.

It was a common data set used in various academic papers on training nudity detectors.

Did you seriously just read the headline, guess what happened, and are now arguing, based on that guess, that I, who actually read the article, am wrong about its content? Because that's sure what it feels like reading your comments...

[–] Devial@discuss.online 2 points 1 day ago* (last edited 1 day ago)

So you didn't read my comment then, did you?

He got banned because Google's automated monitoring system, entirely correctly, detected that the content he unzipped contained CSAM. It wasn't even a manual decision to ban him.

His ban had literally nothing whatsoever to do with the fact that the CSAM was part of an AI training data set.
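
For anyone wondering how a ban can fire with no human involved: providers scan uploads by matching file hashes against databases of known CSAM (Google has publicly described using hash matching alongside ML classifiers). Here's a minimal sketch of the hash-matching idea — the blocklist contents and the helper names are hypothetical, and real scanners use perceptual hashes like PhotoDNA or Google's CSAI Match rather than plain SHA-256, so re-encoded copies still match:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of hashes of known material. Plain SHA-256
# just keeps this sketch short; real systems use perceptual hashing.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def matches_blocklist(path: Path) -> bool:
    """Hash one file and check it against the blocklist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

def scan_extracted_files(folder: Path, account_id: str) -> None:
    """Scan every file that appeared when an archive was unpacked."""
    for path in folder.rglob("*"):
        if path.is_file() and matches_blocklist(path):
            # The point for this thread: a match triggers an automatic
            # account action; nothing in the pipeline knows or cares
            # who the uploader is or why they had the file.
            print(f"match on {path.name}: auto-suspending account {account_id}")
```

Which is also why appeals are so hard: from the provider's side, "hash matched known material in this account" looks identical whether the uploader is a researcher or an offender.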