[–] farcaller@fstab.sh 5 points 1 month ago

If tailscale inside a container lets you talk to it over a “direct” connection rather than a DERP relay, then it gives you better service isolation (you can set tailscale ACLs for that specific service) without sacrificing performance.

Tailscale pushes for it because it ties you in more. It lets you use the ACLs more granularly and shows each service in their mesh, and every service counts against the free node limit.

In practice, I often do both. E.g. I’ll have my http ingress exposed to tailscale and route a bunch of different services through it as a single tailscale node, with access control handled by the services individually. But I’ll also run pod-to-pod tailscale between two k8s clusters, because the tailscale ACL is just convenient.
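
For a sense of what that per-service ACL scoping looks like, here’s a minimal sketch of a tailnet policy; the tag names are made up for illustration:

```jsonc
{
  // only admins may apply these tags to nodes
  "tagOwners": {
    "tag:ingress":    ["autogroup:admin"],
    "tag:monitoring": ["autogroup:admin"]
  },
  // monitoring nodes may reach the ingress container on 443; nothing else is allowed
  "acls": [
    { "action": "accept", "src": ["tag:monitoring"], "dst": ["tag:ingress:443"] }
  ]
}
```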

[–] farcaller@fstab.sh 1 points 1 month ago

Updates to DNS, yes. Not necessarily to your primary zone. In other words, you don’t need access to the name servers for your highly privileged example.com zone, only to the nameservers for inconsequential.example.com. With challenge delegation you can easily narrow the scope by CNAMEing the relevant _acme-challenge entries in your primary domain once. This not only removes the need for the validator to modify your primary zone, it also scopes which subdomains it can validate. So the blast radius decreases.
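
A sketch of what that one-time delegation looks like in the primary zone (names are illustrative; the delegated zone is whatever your ACME client can actually update):

```
; in the example.com zone, set up once by hand:
_acme-challenge.www.example.com.   IN CNAME   www.acme.inconsequential.example.com.
_acme-challenge.mail.example.com.  IN CNAME   mail.acme.inconsequential.example.com.

; inconsequential.example.com lives on the nameserver the ACME client can
; write to, and only ever holds short-lived TXT validation records.
```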

I, too, maintain several devices that insist on having the certificates (and keys, yuck) fed to them by hand. I automated it all, because I don’t see why a human should be in the loop of copying secret material around. Automation is good.

[–] farcaller@fstab.sh -2 points 1 month ago (3 children)

How complicated is it to have a CNAME? /s

[–] farcaller@fstab.sh 21 points 1 month ago (5 children)

You can delegate to isolated nameservers with DNS-01, there's no need to have control over the primary zone: https://www.eff.org/deeplinks/2018/02/technical-deep-dive-securing-automation-acme-dns-challenge-validation

[–] farcaller@fstab.sh 4 points 1 month ago

ECC matters a bit more for ZFS because its ARC is generally more aggressive than the usual Linux caching subsystem. That said, it's not a hard requirement. My current NAS was converted from my old Windows box (which apparently worked for years with bad RAM). ZFS uncovered the problem within the first two days by reporting (recoverable) data corruption in the pool. When I fixed the RAM issue and hash-checked everything against the old backup, all the data was good. So, effectively, ZFS uncovered memory corruption and remained resilient against it.
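
If you want to check your own pool for that kind of silent damage, a scrub plus a status check is all it takes (the pool name here is just an example):

```sh
zpool scrub tank        # read back and verify every block in the pool
zpool status -v tank    # shows per-device checksum error counts and any affected files
```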

[–] farcaller@fstab.sh 8 points 1 month ago (1 children)

given time in lieu

after squadron 42 ships*

[–] farcaller@fstab.sh 5 points 1 month ago (2 children)

I had exactly the same use case and went with a 40G DAC for that link, which turned out cheaper than converting the whole LAN to 10G.

That said, it feels like used 10G equipment is easier to come by than 2.5G for now, and if you have a 2G fiber uplink but only 1G past the router then it’s a waste.

[–] farcaller@fstab.sh 18 points 2 months ago (3 children)

Garage is trivial to get up and running and it’s more lightweight than minio nowadays.
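
To give a sense of how small the setup is, a single-node garage.toml is roughly the following; this is from memory, so key names may differ between Garage versions, and the paths and secret are placeholders:

```toml
metadata_dir = "/var/lib/garage/meta"
data_dir     = "/var/lib/garage/data"

replication_mode = "none"       # single node, no replication
rpc_bind_addr    = "[::]:3901"
rpc_secret       = "<32-byte hex secret>"

[s3_api]
s3_region     = "garage"
api_bind_addr = "[::]:3900"
root_domain   = ".s3.example.com"
```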

[–] farcaller@fstab.sh 2 points 2 months ago (1 children)

Actual public services run there, yeah. If any of them is compromised, it can only reach limited internal resources, and an attacker would have to fully compromise the cluster to get the secrets needed to access even those.

I really like garage. I remember when minio was straightforward and easy to work with. Garage is that thing now. I use it because it's just so much easier to handle file serving when you have S3-compatible uploads, even if you don’t do any real clustering.

[–] farcaller@fstab.sh 2 points 3 months ago (3 children)

I’ve dealt with exactly the same dilemma in my homelab. I used to have 3 clusters, because you'd always want an "infra" cluster the others can talk to (for monitoring, logs, docker registry, and similar workloads). In the end, I decided it's not worth it.

I separated on the public/private boundary and moved everything publicly facing to a separate cluster. It can only talk to my primary cluster through specific endpoints (exposed via tailscale ingress), and I no longer run a multi-cluster mesh (I used to have istio for that, then cilium). This way the public cluster doesn’t have to be large capacity-wise: e.g. all the S3 API needs are served by garage from the private cluster, and the public cluster just reverse-proxies into it for those specific needs.
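
For reference, with the Tailscale Kubernetes operator one of those specific endpoints is little more than an Ingress using the tailscale class; the names and port below are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: garage-s3
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: garage      # in-cluster garage S3 service
      port:
        number: 3900
  tls:
    - hosts:
        - garage-s3     # becomes garage-s3.<tailnet>.ts.net, reachable only over tailscale
```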

[–] farcaller@fstab.sh 2 points 3 months ago

Any OAuth2/OIDC provider (I use kanidm) plus oauth2-proxy solves that, and then you can easily use passkeys to log into your intranet resources.
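
The oauth2-proxy side of that is mostly a matter of pointing it at the OIDC issuer; the issuer URL format below is roughly what kanidm uses, and all the hostnames, IDs and secrets are placeholders:

```sh
oauth2-proxy \
  --provider=oidc \
  --oidc-issuer-url=https://idm.example.com/oauth2/openid/my-app \
  --client-id=my-app \
  --client-secret=<client secret from the IdP> \
  --redirect-url=https://app.example.com/oauth2/callback \
  --upstream=http://127.0.0.1:8080 \
  --http-address=0.0.0.0:4180 \
  --email-domain='*' \
  --cookie-secret=<32-byte random secret>
```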
