There is a rumor that OpenAI downloaded the entirety of LibGen to train their AI models. No definite proof yet, but it seems very likely.
https://torrentfreak.com/authors-accuse-openai-of-using-pirate-sites-to-train-chatgpt-230630/
Could be postgres-related. Federation is only "jammed" if the source instance thinks your instance is having an issue because it takes too long to respond. Maybe enabling slow query log on postgres and then reviewing that log could point you in the right direction.
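To enable the slow query log, something like this should work (the 1s threshold is just an example value; pick one that fits your workload):

```sql
-- Log every statement that runs longer than 1 second
ALTER SYSTEM SET log_min_duration_statement = '1s';
SELECT pg_reload_conf();
-- Then review the Postgres log; where it lives depends on your setup,
-- e.g. `docker logs <postgres-container>` or /var/log/postgresql/
```

Long-running queries showing up there would support the "source instance times out waiting for us" theory.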
Yes, because it now supports Docker Compose by default: https://docs.podman.io/en/latest/markdown/podman-compose.1.html
It's easier to start with docker first simply because of the sheer amount of learning resources available on the internet. If you're having issues, you can usually find a solution quickly with a search engine.
That being said, there aren't many differences in how you use them these days. You can even run Docker Compose on Podman.
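A minimal sketch of running Compose against Podman (assumes podman 4.1+ with the user socket available; details vary by distro):

```shell
# Enable Podman's Docker-compatible API socket, then point
# docker-compose at it via DOCKER_HOST:
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d

# Or let podman dispatch to a compose provider directly:
podman compose up -d
```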
You can check lemmy.world's federation status at: https://phiresky.github.io/lemmy-federation-state/site?domain=lemmy.world . There, you can see that social.packetloss.gg is listed among the "407 failing instances".
You'll need to check whether your server is actually configured to receive federation traffic. If you're using Cloudflare or some other web application firewall, make sure it isn't applying any anti-bot measures to the /inbox endpoint. For example, in Cloudflare, create a new WAF rule (Security -> WAF) for /inbox and set it to skip all security.
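Those UI steps correspond roughly to a custom rule with an expression like the following (field name from Cloudflare's rules language; verify the exact path your instance uses for inboxes):

```
(http.request.uri.path eq "/inbox")
```

with the action set to "Skip" and all security products selected.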
If you don't use any web application firewall at all, did you upgrade your instance from v18.x to v19.x recently, right before experiencing the federation issue? v19.x has increased resource consumption and will have problems on a small server after running for a while. For a small VPS (~4GB of RAM), you might want to adjust the database pool_size to <30 in your lemmy.hjson file. Restarting Lemmy AND Postgres every once in a while also helps if you're on a small VPS.
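The pool_size tweak looks roughly like this in lemmy.hjson (the value 10 is just an illustration; anything below 30 is what's suggested above):

```hjson
{
  database: {
    # Fewer connections in the pool means less Postgres memory pressure
    pool_size: 10
  }
}
```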
What's the point of planning to integrate with activitypub if they do shit like this?
Creators must disclose content that:

- Makes a real person appear to say or do something they didn't do
- Alters footage of a real event or place
- Generates a realistic-looking scene that didn't actually occur
So, they want deepfakes to be clearly labeled, but if the entire video was scripted by ChatGPT, the AI label isn't required?
I think that's the icon for archived posts, not an ad marker.
Mostly for convenience and to standardize your security procedures. Most apps popular for self-hosting now support OIDC, so it's a no-brainer to set up. On the other hand, most apps don't support 2FA, or support it in a weird way (e.g. no recovery codes). By using an identity service, you can be sure all your apps follow the same login standard you set up.
For those apps that don't support OIDC, you can simply slap oauth2-proxy in front of them and you're done.
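A minimal oauth2-proxy invocation might look like this (hypothetical hostnames, realm, and client ID throughout; flags are from oauth2-proxy's standard options):

```shell
oauth2-proxy \
  --provider=oidc \
  --oidc-issuer-url=https://auth.example.com/realms/myrealm \
  --client-id=myapp \
  --client-secret=... \
  --cookie-secret=$(openssl rand -base64 32 | tr -- '+/' '-_') \
  --email-domain='*' \
  --http-address=0.0.0.0:4180 \
  --upstream=http://127.0.0.1:8080
```

Point your reverse proxy at port 4180 and oauth2-proxy handles the OIDC login flow before forwarding requests to the unprotected upstream app.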
If you have an error message, it would be easier to identify the issue. A typical problem: newer Keycloak versions (after the Quarkus migration) dropped the /auth prefix, so the discovery endpoint changed from [your host]/auth/realms/[your realm]/.well-known/openid-configuration to [your host]/realms/[your realm]/.well-known/openid-configuration
, and some apps still use the old one. You might be able to correct this by manually entering the Keycloak endpoint in your OIDC settings.

Currently it's using ~511MB of memory, which is comparable to typical web apps. CPU usage is almost zero because it's idle most of the time (you're practically only using it at login).
I'm still on Keycloak v19 and haven't had a chance to upgrade to the latest version yet, so I have no idea how much memory the latest version will use. But I remember testing Keycloak before they migrated to Quarkus: it was sitting at ~2GB of memory, and I was immediately turned off by it. I gave it another try after I heard the memory usage had gotten better, and I've stuck around since then.
That depends on how you run your other services. For example, if you use Docker for your other services, then just run n8n with Docker.
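A minimal compose sketch for n8n (image name, port, and data path taken from n8n's Docker docs, but verify against the current documentation before relying on them):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    ports:
      - "5678:5678"          # n8n's default web UI port
    volumes:
      - n8n_data:/home/node/.n8n   # persists workflows and credentials
volumes:
  n8n_data:
```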