Do you think this approach would be worth a try for the threaded Fediverse (aka Lemmy)? I mean, your use case is very different. We have some rudimentary image detection to flag other kinds of unwanted images in Piefed. I could experiment with something like https://github.com/monatis/clip.cpp: have it go through the media cache and see if it can do something useful for us. But I don't think it'd be worth all the effort unless the whole approach is reasonably accurate and runs in real time on an average VPS.
hendrik
This one? I loosely followed your work... Maybe I should try it someday. See how it does on a regular VPS. Thanks for the link to the IFTAS. Seems they have curated some useful links... I'll have a look at their articles. Hope they get somewhere with that. At this point, I don't think there is any blocklist accessible to the average Fediverse admin?!
Edit: Thx, saw your other comment with the link to horde-safety.
You're probably right. I'm not sure if it's a good idea to walk close to the edge with things like this, though. Every update to the detection model could change things and land people in jail... So I certainly wouldn't play a cat-and-mouse game with something that has several years of jail time attached... But then I don't really know the thought process of the average pedo. And AI image detection comes with problems anyway. In the article they say it has detected 6 million pictures already, while keeping quiet about the rate of false positives. We know people have gotten in serious trouble over (false) claims. And I also wouldn't want to be the Fediverse admin who has to go through thousands of flagged pictures, look at them, and decide which is which. With consequences attached... Maybe a database of hashes would be the only option. That doesn't detect new pictures, but at the same time it comes without false positives, and you can't draw conclusions from the hash values.
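For what it's worth, the hash-database idea is simple enough to sketch. Below is a minimal, hypothetical example using plain SHA-256 over a media cache directory; real systems (e.g. PhotoDNA) use perceptual hashes instead, and the `scan_cache` function and directory layout here are just assumptions for illustration:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file in chunks so large media files don't get loaded into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_cache(cache_dir: Path, blocklist: set[str]) -> list[Path]:
    """Return cached files whose SHA-256 appears in a blocklist of known-bad hashes."""
    return [p for p in cache_dir.rglob("*")
            if p.is_file() and sha256_file(p) in blocklist]
```

Note the trade-off mentioned above: an exact cryptographic hash has essentially no false positives, but even a re-encoded or resized copy of a known image produces a different hash and slips through, which is why it can't detect new (or modified) pictures.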
Yeah, unless someone publishes even a set of hashes of known bad content for the general public... I kind of doubt the true intentions are preventing CSAM to the benefit of everyone.
And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All the predecessors have been kept away for sole use of the big players, despite populism always claiming we need to introduce total surveillance to keep the children safe...
Also helps the general population take part in science. Plus, you can read things your (small) academic institution didn't subscribe to. And when I tried it, it was super convenient: just put in the DOI and get the paper, no other steps like university logins or VPNs needed.
every time I find myself trying to browse enclosures without having an account and they simply won't allow me to browse much before prompting me to sign up or subscribe to view more.
Yeah, like Pinterest and Facebook and a lot of services these days. I avoid those like the plague. That's enshittification, and for the users it means living within small, confined spaces. Though that'd get me started babbling about freedom and launching into the technobabble about how the internet is supposed to liberate information, not confine it...
as simple as them being open and ad-free
I generally recommend uBlock to my friends. With that, 90% of the internet is ad-free. And I don't even mind the advertisements themselves... It's (again) the other things that come with them: the tracking, the selling of data, being an object for the ad-selling algorithms...
I can't help but immediately proceed to the technobabble... Maybe with a few exceptions. I could explain why it's stupid to watch 2 ads before each YouTube video.
I think this is good advice, but it has some caveats. If you skip the technobabble and the politics about free (as in freedom), what's left? If it's just a platform that feels more complicated to sign up for, because you have to learn about instances and it's not clear which one you want, plus your friends aren't there, plus it's just 45k users total instead of a lot...?
I mean, we then need some positive thing. For all I care, we might call it detox. But what's the detox? We'd need something like a substantially better (healthier/more welcoming) culture, fewer posts but ones that make up for that with quality... And I'm not 100% sure we're there... Feel free to disagree or comment on my perspective... I mean, the atmosphere here is nicer than on Reddit. But not radically different, in my opinion.
We had the same discussion 3 weeks ago: https://lemmy.world/post/21202413
Tl;dr: mastodon.social is hardcoded in the program, so it supports only that one instance.
And I think OP is sneaking this post in from Reddit. The mentioned discussion on "selfhosted" isn't what happened here, so I guess they mean r/Selfhosted.
You could add Fediverse support: https://github.com/gitroomhq/postiz-app/issues/345
(Just thinking of that, since we're currently talking via the Fediverse. But this isn't a request, I don't use social media enough to need a scheduling tool.)
Ah, I didn't know. I've never run my own Lemmy instance. I just found out after installing Piefed that content doesn't appear on its own; I had to go ahead and add some communities manually. I don't know if that's changed since.
Though, I don't think that means they won't get any better. It just means they don't scale just by feeding in more training data. But that's why OpenAI changed their approach and added some reasoning abilities. And we're developing/researching things like multimodality etc... There's still quite some room for improvement.