gencha

joined 1 year ago
[–] gencha@lemm.ee -1 points 2 months ago

Ultimately, it doesn't matter what caused you to get rate-limited and blocked from Docker Hub. Once you're in that scenario, it's most cost-efficient to just buy your way out.

If you can't even imagine what would lead up to such a situation, congratulations, because being in it really sucks.

Yes, there should be a cache. But some people force-pull images on service start to ensure they get the latest "latest" tag. And every tag floats, not just "latest": lots of people don't pin digests in their OCI references, which practically implies refreshing cached tags regularly. Especially when you start critical services, you might pull their tag in case it drifted.
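
For illustration, a minimal sketch of what digest pinning looks like in a Compose file. pihole/pihole is a real Docker Hub image, but the digest is a placeholder you'd substitute yourself:

```yaml
services:
  pihole:
    # A floating tag can change underneath you between pulls:
    #   image: docker.io/pihole/pihole:latest
    # A digest-pinned reference is immutable across re-pulls
    # (placeholder digest; pin the manifest you actually tested):
    image: docker.io/pihole/pihole@sha256:<digest>
```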

Consider this: you have multiple hosts in your home lab, all running a good couple of services. You roll out a new container runtime upgrade to your network; it resets all caches and restarts all services. Some pulls fail, some of them for DNS and other critical services. Suddenly your entire network is down and you can't even get on the Internet, because your Pi-hole doesn't start. And you can't recover, because you're rate-limited.

I've been there a couple of times before I built up better resilience, but relying on docker.io is still a problem in general. I did pay them for quite some time.

This is only one scenario where their service bit me. As a developer, it gets even more unpleasant, and I'm not even talking about commercial use.

[–] gencha@lemm.ee 1 points 2 months ago

It's the way to go, but too difficult for most users in my experience. They'd rather just install Docker Desktop and use Git Bash. Sad reality.

[–] gencha@lemm.ee 42 points 2 months ago (5 children)

Their entire offering is such a joke. I'm forced to use Docker Desktop for work, as we're on Windows. Every time that piece of shit gets updated, it gains more useless garbage and endless security snake-oil features. Their installer even messes with your WSL home directory. They literally fuck with your AWS and Azure credentials to make their cloud integrations more "convenient" for you. When they implemented that, they just deleted the AWS profile from my home directory, because they felt it should instead be a symlink to my Windows home directory. These people are not to be trusted with elevated privileges on your system. They actively abuse the privilege.

The only reason they still exist is that they're holding the majority of images hostage on their registry. Their customers are similarly held hostage: they started using Docker on Windows desktops and are now locked in. Nobody gives a shit about any of their benefits. Free technology and hosting were the setup; now they bleed everyone who got caught. Prices will rise until they find their sweet spot. Thanks for the tech. Now die already.

[–] gencha@lemm.ee 4 points 2 months ago

They use Windows

[–] gencha@lemm.ee 3 points 2 months ago

Not having to install dependencies is a benefit of containers and their images. That's a pretty big thing to miss. Maybe give it a closer look.
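
As a hypothetical minimal sketch of what that buys you: the dependency is installed at build time and ships inside the image, so the host needs nothing but a container runtime.

```dockerfile
# Hypothetical example: the "requests" dependency is baked into the
# image at build time; the host never installs Python or pip.
FROM python:3.12-slim
RUN pip install --no-cache-dir requests
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```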

[–] gencha@lemm.ee 5 points 2 months ago

Your choice of container runtime has zero impact on the rate-limits of Docker Hub. They probably had a container image proxy already and just switched because Docker is a security nightmare and needlessly heavy.

[–] gencha@lemm.ee 3 points 2 months ago

I gave podman compose a fresh try just the other day and was happy to see that it "just worked".

I'm personally pissed about aardvark-dns, which provides DNS for podman networks. The version that is still in Debian Stable sets a TTL of 24h on A record responses. That disrupted my entire service network whenever a pod restarted, because everything kept resolving the stale, cached address. The default behavior of comparable resolvers is to set a TTL of 0. It's like the maintainers took it as an opportunity to rewrite an existing solution in Rust and reimplement every bug they could. It sometimes feels like someone thought it would be a fun summer-break project to implement DNS or network security.
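
You can see this for yourself with dig. Here "client" and "web" are hypothetical containers on the same podman network, and the client image is assumed to ship dig:

```shell
# Ask the network's embedded aardvark-dns resolver for a neighbour's
# A record and inspect the TTL column of the answer.
podman exec client dig +noall +answer web
# Affected versions answer with a TTL of 86400 (24h); a resolver
# handing out ephemeral container IPs should answer with a TTL of 0.
```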

[–] gencha@lemm.ee 9 points 2 months ago (2 children)

A single malfunctioning service that restarts in a loop can exhaust the limit almost instantly. And then you can't bring up any of your services, because you're blocked.
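
Whether a crash loop actually re-pulls depends on your pull policy; "always" is the dangerous one. A minimal Compose sketch of the safer setting, with a hypothetical service name:

```yaml
services:
  app:
    image: docker.io/library/nginx:latest
    restart: on-failure
    # "missing" only contacts the registry when the image isn't in the
    # local cache, so a crash-looping service can't burn through your
    # pull quota the way pull_policy: always would.
    pull_policy: missing
```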

I've been there plenty of times. If you have to rely on docker.io, you'd better pay up. Running your own NexusRM or Harbor as a pull-through proxy can drastically improve your situation, though.
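
For dockerd that's a one-line config; a minimal sketch of /etc/docker/daemon.json, with a hypothetical hostname for the proxy:

```json
{
  "registry-mirrors": ["https://harbor.lan"]
}
```

The daemon then tries the mirror first for Docker Hub pulls and falls back to docker.io if it's unreachable, so the proxy absorbs most of the rate-limited traffic.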

Docker is a pile of shit. Steer clear entirely of any of their offerings if possible.

[–] gencha@lemm.ee 1 points 2 months ago

Reddit is free. Other people paying for your free service is a very weak argument to bring up. If Lemmy dies today, nobody but hobbyists and amateurs will care. Just like with LE.

[–] gencha@lemm.ee 1 points 2 months ago

I've been there. Not every CA is equal. Those kinds of CAs were shit. LE is convenient. There are more options, though.

[–] gencha@lemm.ee 1 points 2 months ago (2 children)

I actually agree. For the majority of sites and/or use cases, it probably is sufficient.

Properly explaining why LE is generally problematic takes a depth of information that I'm just not able to relay easily right now. But consider this:

LE is mostly a convenience. It saves an operator about $1 per month per certificate. For anyone with hosting costs beyond $1000, that's laughable savings. People who take TLS seriously often have more demands than a padlock in the browser UI. And if a free service decides it no longer wants to do OCSP, that's an annoying disruption that was entirely not worth the $1: https://www.abetterinternet.org/post/replacing-ocsp-with-crls/

LE has no SLA. You have no guarantee that you'll ever be able to renew your certificate again. That's a risk not everyone should take.

Who is paying for LE? And if you're not paying, how can you rely on the service still existing tomorrow?

It wasn't too long ago that people said "only some sites need HTTPS, HTTP is fine for most". It never was fine, and people shouldn't build anything relevant on "free" security today either.
