this post was submitted on 21 Nov 2024
71 points (98.6% liked)

Technology

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the earliest AI technology striving to expose unreported CSAM at scale.

top 48 comments
[–] Churbleyimyam@lemm.ee 5 points 2 hours ago (2 children)

I think all CSAM should be destroyed out of respect for the victims, not proliferated. I don't care who is hanging onto this material or for what purpose.

[–] Ghostie21@lemmy.world 3 points 34 minutes ago

How is this proliferating CSAM? Also, how do you expect them to find CSAM without having known images? It gives a really nice way to check based on hashes without having someone look at every picture on someone's hard drive. This AI should greatly help in identifying new or unknown images while minimizing the number of actual people who have to see that stuff and get scarred from looking at such images. The only reason to be against this is if you are looking at CP and want it to be harder to find, or if you don't understand how this technology is being used.

[–] sunzu2@thebrainbin.org 2 points 1 hour ago

This ain't about the victims... It never was, otherwise churches would NOT exist in their current form.

This is about police and corpo state gaining power.

[–] TheHobbyist@lemmy.zip 39 points 4 hours ago* (last edited 4 hours ago) (2 children)

Thorn, the company backed by Ashton Kutcher and which lobbied to have all messages in the EU monitored via Chat Control. No thanks.

https://fortune.com/europe/2023/09/26/thorn-ashton-kutcher-ylva-johansson-csam-csa-regulation-european-commission-encryption-privacy-surveillance/

[–] Erasmus@lemmy.world 26 points 4 hours ago (2 children)

Just remember folks. Kutcher is a slimeball too.

The guy went from being a D-list star, hanging out with the likes of Danny Masterson and going to Diddy’s infamous parties, to suddenly, overnight, courting the US government and being the face of ‘helping’ children everywhere.

Yeah right…..

[–] chonglibloodsport@lemmy.world 5 points 1 hour ago (1 children)

I’d be wary of calling him guilty by association. Maybe when he realized who he was really hanging out with, he was so horrified and disgusted that he just had to get involved and do something to fight back?

[–] Erasmus@lemmy.world 1 points 34 minutes ago (1 children)

It’s awfully coincidental that he seems to hang out with the ‘rapist’ crowd. He even went as far as writing a letter for Masterson, about what a nice guy he is, to try to get him a lenient sentence.

Even Hollywood has ostracized him and his wife - news sites recently reported they were looking to leave the country and let things cool off for a while.

I’m sure everyone who keeps posting here is right, though, that he is a swell guy who was just in the wrong place at the wrong time, multiple times. Several years’ worth of multiple times with the wrong people. Just a coincidence.

[–] BassTurd@lemmy.world 1 points 3 minutes ago

The difference between us giving him the benefit of the doubt and claiming innocence, and your take, is that you are labeling him a pedophile without proof. That's a significant claim if false, and IMO takes the assumption too far. Maybe he's bad and it should be looked into, but saying he did something because he was on a show with, and good friends with, a guy who happened to be a rapist is wrong.

[–] ninekeysdown@lemmy.world 16 points 3 hours ago (1 children)

People can grow and change. Not saying he did or didn’t. Just saying that people aren’t a monolith. It’s plausible he just grew and his views changed / evolved.

That being said, it’s highly convenient where he’s positioned himself these days…

[–] sunzu2@thebrainbin.org 3 points 2 hours ago

Regime whores are all about proximity

[–] sunzu2@thebrainbin.org 2 points 4 hours ago

Has he ever called out the Catholic Church, or does he only care about pictures of abuse but not actual abuse?

[–] Kyrgizion@lemmy.world 66 points 5 hours ago (1 children)

Not a single peep about false positives.

I'm sure it won't be abused though. And if anyone does complain, just get their electronics seized and checked, because they must be hiding something!

[–] oldfart@lemm.ee 33 points 4 hours ago (2 children)

Reminds me of the A-cup breast porn ban in Australia a few years ago, because only pedos would watch that.

[–] AmidFuror@fedia.io 1 points 53 minutes ago

Australia has a more general ban on selling or exhibiting hard porn, but it is legal to possess it. So it's not just small boobs.

[–] Clinicallydepressedpoochie@lemmy.world 21 points 3 hours ago (2 children)

Aw man, I love all titties. Variety is the spice of life.

[–] DScratch@sh.itjust.works 18 points 3 hours ago (1 children)

Not to mention the self image impact such things would have on women with smaller breasts, who (as I understand it) generally already struggle with poor self image due to breast size.

[–] sunzu2@thebrainbin.org 5 points 2 hours ago

Clearly the state gives zero fucks about these women, or anyone else or even "the children"

Catholic Church is still around for a reason

[–] user224@lemmy.sdf.org 9 points 3 hours ago (1 children)

Believe it or not, straight to jail.

[–] Clinicallydepressedpoochie@lemmy.world 11 points 3 hours ago (1 children)

If this is the price I must pay, I will pay it, sir! No man should be deprived of privately viewing a consenting adult's perfectly formed small tits. They can take my liberty, they can take my livelihood, but they will never take away my boner for puffy nipples on a small-chested half-Japanese woman!

[–] solomon42069@lemmy.world 7 points 2 hours ago

What is the charge? Biting a breast? A succulent Chinese breast?

[–] sunzu2@thebrainbin.org 26 points 4 hours ago (1 children)

I am a bit confused how it is legal for them to have the training data here?

Like is there anything a corpo can't do?

Like why can't Subway Jared and the Catholic Church "train the AI"?

Only halfway joking, what's the catch here?

[–] MentalEdge@sopuli.xyz 19 points 4 hours ago (1 children)

There are laws around it. Law enforcement doesn't just delete any digital CSAM they seize.

Known CSAM is archived and analyzed rather than destroyed, and used to recognize additional instances of the same files in the wild. Wherever file scanning is possible.

Institutions and corporations can request licenses to access the database, or just the metadata that allows software to tell if a given file might be a copy of known CSAM.

This is the first time an attempt is being made at using the database to create software able to recognize CSAM that isn't already known.

I'm personally quite sceptical of the merit. It may well be useful for scanning the public internet, but I'm guessing the plan is to push for it to be somehow implemented for private communication, no matter how badly that compromises the integrity of encryption.

[–] melroy@kbin.melroy.org 7 points 3 hours ago (1 children)

So doesn't that mean law enforcement has the biggest CP collection of anybody? This sounds kinda dangerous...

[–] MentalEdge@sopuli.xyz 10 points 2 hours ago* (last edited 2 hours ago)

It does. Kinda.

The police are seldom allowed to be in possession of CSAM, except when seizing the hardware that contains it during an arrest. The database used in modern detection tools is maintained by NCMEC, which has special permission to do so.

And of course there are risks, but it's just digital data. Unless you are creating more, you're not actively harming anyone. And law enforcement absolutely needs that data to take some of the most obvious steps to prevent it being spread further.

Obviously, someone has access, but getting to the actual media files wouldn't be simple. What typically happens is that anyone wanting to detect CSAM is given a hashed version of the database. They can then scan their systems for CSAM by hashing any media they are hosting and seeing whether there are any matches.

Whenever possible, people aren't handling the actual media. But for any detection to be possible to begin with, the database of the actual media does need to be maintained somewhere.
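To make the idea concrete, here's a minimal sketch of that kind of hash-list scan (not Thorn's or NCMEC's actual tooling; the hash value and function names are made up). Real deployments typically use perceptual hashes such as PhotoDNA so re-encoded or slightly altered copies still match; a plain cryptographic hash is used below only to keep the example short:

```python
import hashlib
from pathlib import Path

# Illustrative stand-in for the hashed database described above.
# The value is made up; real lists use perceptual hashes, not SHA-256.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_digest(path: Path) -> str:
    """Hash a file's bytes; only the digest is ever compared, never the media."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> list[Path]:
    """Return hosted files whose digest matches an entry in the known-hash list."""
    return [p for p in root.rglob("*")
            if p.is_file() and file_digest(p) in KNOWN_HASHES]
```

The point of the design is that the scanning party only ever holds digests, while the actual media stays in the maintained database.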

AI is a touchier subject, as you can't train a model to recognize CSAM not already in the database using hashes, so in those cases you have to work with actual real media. This is only recently becoming a thing.

It also leaves open the possibility of false positives. An oft-cited example is parents taking pictures of their own children for innocent reasons, or doctors and parents handling images for valid medical reasons. In a system that flagged such content, it would mean someone else would be seeing that "private" content because it was flagged.

[–] db0@lemmy.dbzer0.com 24 points 5 hours ago* (last edited 5 hours ago)

It's the earliest AI technology striving to expose unreported CSAM at scale.

horde-safety has been out for a year now. Just saying... It's not a trained AI model in the same way, but it still uses neural networks (i.e. "AI technology").

[–] hendrik@palaver.p3x.de 18 points 5 hours ago (2 children)

And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All the predecessors have been kept for the sole use of the big players, despite populists always claiming we need to introduce total surveillance to keep the children safe...

[–] BetaDoggo_@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

If everyone has access to the model it becomes much easier to find obfuscation methods and validate them. It becomes an uphill battle. It's unfortunate but it's an inherent limitation of most safeguards.

[–] riskable@programming.dev 14 points 5 hours ago (1 children)

I was going to say... Sure would be nice to have this feature in all the open source AI image generator tools but you're absolutely right 😩

[–] hendrik@palaver.p3x.de 7 points 4 hours ago

Yeah, unless someone publishes even a set of hashes of known bad content for the general public, I kind of doubt the true intention is preventing CSAM to the benefit of everyone.

[–] Nurse_Robot@lemmy.world 8 points 4 hours ago (1 children)

This is a great development, albeit with a lot of soul-crushing work behind it, I assume. People who have to look at CSAM or whatever the acronym is have a miserable job, so I'm very supportive of trying to automate that away from people.

[–] atomicorange@lemmy.world 1 points 28 minutes ago* (last edited 28 minutes ago)

Yeah, I’m happy for AI to take this particular horrifying job from us. Chances are it will be overtuned (too strict), but if there’s a reasonable appeals process I could see it saving a lot of people the trauma of having to regularly view the worst humanity has to offer without major drawbacks.

[–] floofloof@lemmy.ca 12 points 5 hours ago (4 children)

This seems like a potential actual good use of AI. Can't have been much fun to train it though.

And is there any risk of people turning these kinds of models around and using them to generate images?

[–] FaceDeer@fedia.io 7 points 3 hours ago

And is there any risk of people turning these kinds of models around and using them to generate images?

There isn't really much fundamental difference between an image detector and an image generator. The way image generators like stable diffusion work is essentially by generating a starting image that's nothing but random static and telling the generator "find the cat that's hidden in this noise."
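A very rough sketch of that loop, with a dummy network standing in for the trained denoiser and a heavily simplified update rule (real samplers like DDPM/DDIM are more involved; everything here is illustrative):

```python
import torch

class DummyDenoiser(torch.nn.Module):
    """Stand-in for a trained noise-prediction U-Net; a real one is conditioned
    on a prompt embedding and actually predicts the noise present in x."""
    def forward(self, x, t, prompt):
        return torch.zeros_like(x)

denoiser = DummyDenoiser()
prompt = torch.randn(1, 77, 768)       # stand-in for an encoded prompt like "a cat"

x = torch.randn(1, 3, 64, 64)          # start from pure static
steps = 50
for t in reversed(range(steps)):
    noise_estimate = denoiser(x, t, prompt)
    x = x - noise_estimate / steps     # peel a little of the "noise" away each step
# x is now the image the model claims was "hidden" in the static
```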

It'll probably take a bit of work to rig this child porn detector up to generate images, but I could definitely imagine it happening. It's going to make an already complicated philosophical debate even more complicated.

[–] Jimbabwe@lemmy.world 18 points 4 hours ago (3 children)

If AI was reliable, maybe. MAYBE. But guess what? It turns out that “advanced autocomplete” does a shitty job of most things, and I bet false positives will be numerous.

[–] Chozo@fedia.io 11 points 4 hours ago

This is not that kind of AI.

It's possible to have a good AI system, but it takes millions of dollars and several thousand manhours to do, and most companies won't put in the effort.

But, there should always be a human in the loop.

[–] AwesomeLowlander@sh.itjust.works 3 points 4 hours ago (2 children)

"detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster."

False positives don't matter if they stick to the stated intended purpose of making it easier to detect CSAM manually.
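Roughly what that quoted workflow implies: the model only emits a score, and anything above some cut-off goes into a queue for a human to review. A toy sketch, with a made-up threshold and field names, not Thorn's actual pipeline:

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.8   # made-up cut-off; a real system would tune this

@dataclass
class ReviewQueue:
    """Uploads the model flags as risky; a human reviewer makes the actual call."""
    items: list[tuple[float, str]] = field(default_factory=list)

    def triage(self, upload_id: str, risk_score: float) -> None:
        if risk_score >= REVIEW_THRESHOLD:
            self.items.append((risk_score, upload_id))
            self.items.sort(reverse=True)   # highest-risk items get reviewed first

queue = ReviewQueue()
queue.triage("upload-123", 0.93)   # flagged for human review
queue.triage("upload-456", 0.12)   # below threshold, no action
```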

[–] spankmonkey@lemmy.world 2 points 1 hour ago

if they stick to the stated intended purpose

They never do.

[–] Voroxpete@sh.itjust.works 6 points 3 hours ago* (last edited 3 hours ago) (1 children)

The problem is that they won't.

Yes, AI tools, in the hands of skilled people, can be very helpful.

But "AI" in capitalism doesn't mean "more effective workers", it means "fewer workers." The issue isn't technological so much as cultural. You fundamentally cannot convince an MBA not to try to automate away jobs.

(It's not even a money thing; it's about getting rid of all those pesky "workers' rights" that we workers like to bring with us.)

Here's the thing. This technology is unequivocally one of the things AI would be very useful for. It can potentially do a lot of good. Yes, MBAs could screw it up like they screw anything else up in society. That doesn't mean we shouldn't be happy that we've created this new tech.

[–] catloaf@lemm.ee 7 points 4 hours ago

Nobody would have been looking directly at the source data. The FBI or whoever provides the dataset to approved groups, but after that you just say "use all the images in this folder" and it goes. But I don't even know if they actually provide real full-resolution images, or just perceptual hashes, or downsampled images.

And while it's possible to use the dataset to generate new images assuming the training data had full-res images, like I said, I know they investigate the people making the request before allowing access. And access is probably supervised and audited.

[–] mspencer712@programming.dev 9 points 5 hours ago

I think image generators in general work by iteratively changing random noise and checking it with a classifier, until the resulting image has a stronger and stronger finding of “cat” or “best quality” or “realistic”.

If this classifier provides fine grained descriptive attributes, that’s a nightmare. If it just detects yes or no, that’s probably fine.
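A toy version of that "change the noise, ask the classifier" loop, with an untrained dummy classifier standing in for the real thing. With only a single yes/no-style score, each step gives the search very little direction, which is the difference the comment above is getting at:

```python
import torch

# Dummy binary classifier: flatten the image and map it to a single score.
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))

image = torch.randn(1, 3, 32, 32)            # start from random noise
best = classifier(image).item()
for _ in range(1000):
    candidate = image + 0.05 * torch.randn_like(image)   # perturb the noise
    score = classifier(candidate).item()
    if score > best:                          # keep changes the classifier prefers
        image, best = candidate, score
# "image" now scores higher, but it still looks like noise, not a picture
```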

[–] horse_tranquilizers@sh.itjust.works -1 points 3 hours ago (2 children)

At this point how does it differ from generating AI-powered CP? Morons.

[–] Railcar8095@lemm.ee 3 points 1 hour ago

It differs in basically being something completely different. This is a classification model; it doesn't have generative capabilities. Even if you were to get the model and its weights, and you tried to reverse engineer an "input" that it would classify as CP, it would most likely look like pure noise to you.

Moron
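For anyone curious what "reverse engineering an input" would even look like: with the weights in hand you could run gradient ascent on the input itself, as in this toy sketch (dummy untrained classifier, purely illustrative). A discriminative model has no learned prior over what natural images look like, so what comes out is adversarial noise that happens to score high, not a picture:

```python
import torch

# Dummy frozen classifier standing in for the real model's weights.
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
for p in classifier.parameters():
    p.requires_grad_(False)

x = torch.randn(1, 3, 64, 64, requires_grad=True)   # the "input" being optimized
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -classifier(x).squeeze()   # maximize the classifier's score
    loss.backward()
    opt.step()
# x now scores as high as you like, yet to a human it still looks like static
```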

[–] xionzui@sh.itjust.works 5 points 3 hours ago (1 children)

Uh, well this one tells you if an image looks like it or not. It doesn’t generate images

[–] horse_tranquilizers@sh.itjust.works 4 points 3 hours ago (1 children)

If it knows whether an image looks like it, it can generate something like it. That's one step further.

[–] melroy@kbin.melroy.org 2 points 3 hours ago (1 children)

Correct, this kind of software is trained on CP data. So such models can easily be used to generate CP instead of recognizing it, which makes them very dangerous indeed.

Same idea as the current models that are trained to recognize cars; these models can also be used to generate a car from noise as a starting point.

[–] xionzui@sh.itjust.works 2 points 1 hour ago

I'm pretty sure you can't just run it in reverse like that. There's a whole different training and operation methodology you have to use to support generating images rather than simple image classification.