andrew

joined 1 year ago
[–] andrew@lemmy.stuart.fun 1 points 1 year ago* (last edited 1 year ago)

I'm quite happy with Backblaze B2 for my backup storage. I think I pay like $3/mo for a few hundred gigabytes, though they did recently change their pricing; IIRC it wasn't going to affect me much. On top of B2's own security settings like encryption and deletion locks, I use client-side encrypted backup tools like restic, which make it dead simple to worry less.
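If anyone wants a starting point, this is roughly the shape of it. A minimal sketch, assuming restic is installed and you've already created a B2 bucket and application key; the bucket, path, and credential values below are all placeholders:

```python
# Rough sketch of a restic-to-B2 backup run, driven from Python.
# Assumes restic is installed and a B2 bucket + application key exist;
# the bucket name, path, and credential values are placeholders.
import os
import subprocess

env = {
    **os.environ,
    "B2_ACCOUNT_ID": "your-b2-key-id",
    "B2_ACCOUNT_KEY": "your-b2-application-key",
    "RESTIC_PASSWORD": "your-local-encryption-passphrase",
    "RESTIC_REPOSITORY": "b2:my-backup-bucket:machines/desktop",
}

# One-time: initialise the client-side encrypted repository
# (errors harmlessly if it already exists, hence check=False).
subprocess.run(["restic", "init"], env=env, check=False)

# Regular run: back up a directory, then expire old snapshots.
subprocess.run(["restic", "backup", os.path.expanduser("~/documents")],
               env=env, check=True)
subprocess.run(["restic", "forget", "--keep-daily", "7",
                "--keep-weekly", "4", "--prune"],
               env=env, check=True)
```

Restic encrypts everything locally before it ever touches B2, which is the part that lets me stop worrying.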

[–] andrew@lemmy.stuart.fun 1 points 1 year ago

Stars and Stripes out for Harambe.

[–] andrew@lemmy.stuart.fun 1 points 1 year ago

Uh hello, it's called jacktivism.

[–] andrew@lemmy.stuart.fun 1 points 1 year ago

Look, it's easy: you just wait until the 13th of the month to figure out which format it is. Is 12 days really so much to ask?

[–] andrew@lemmy.stuart.fun 0 points 1 year ago* (last edited 1 year ago)

If this were not a zero-day being actively exploited, you would be 100% correct. But since it is currently being exploited and a fix is available, visibility is significantly more important than anything else; otherwise the long tail of upgrades is going to be a lot longer.

Keep in mind that a list of federated instances and their versions is available at the bottom of every Lemmy instance (at /instances), so this is a really easy chain to follow and try to exploit.
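To illustrate how trivial that enumeration is, here's a rough sketch against Lemmy's public API; the endpoint and field names are from memory, so double-check them against the API docs, and the hostname is a placeholder:

```python
# Sketch: listing the instances a Lemmy server federates with, plus the
# software/version each one advertises. Endpoint and field names are from
# memory (Lemmy API v3), so verify them; the hostname is a placeholder.
import requests

resp = requests.get("https://lemmy.example.org/api/v3/federated_instances",
                    timeout=10)
resp.raise_for_status()

linked = resp.json()["federated_instances"]["linked"]
for instance in linked:
    print(instance.get("domain"), instance.get("software"), instance.get("version"))
```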

The discovery was largely discussed in the lemmy-dev Matrix channel, the fixes were published on GitHub, and it has also been discussed on a dozen other Lemmy servers. This is not an issue you can really keep quiet any longer, so ideally you now move along to the shout-it-from-the-mountaintop stage.

[–] andrew@lemmy.stuart.fun 1 points 1 year ago* (last edited 1 year ago)

And to a large extent, there is automated tooling that can audit things like dependencies. That tooling is also largely open source because, hey, nobody's perfect. But it only works when your source is available.
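As a concrete flavor of that kind of tooling, here's a rough sketch of running one such open source auditor (pip-audit, picked purely as an example) against a project's declared dependencies; the requirements file path is a placeholder:

```python
# Tiny sketch of automated dependency auditing: run an open source auditor
# (pip-audit here, itself open source) against a project's declared
# dependencies. The requirements file path is a placeholder.
import subprocess

result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```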

[–] andrew@lemmy.stuart.fun 0 points 1 year ago* (last edited 1 year ago) (1 children)

I'm also running Arch. Unfortunately I've been running mine long enough that it's all my own bespoke Ansible playbooks for configs, which have morphed only as required by breaking changes or features/security I wanted to add. I think the best way to start from scratch these days is kubeadm, and it should be fairly straightforward on Arch or whatever distro you like.
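For reference, the bootstrap itself is basically one command on the first control-plane node these days. A rough sketch of how I'd drive it from automation (not my actual playbooks); the flags are standard kubeadm options, but the pod CIDR and CRI socket are just what I'd pick for CRI-O, so adjust for your CNI and runtime:

```python
# Rough sketch: bootstrapping the first control-plane node with kubeadm.
# Flags are standard kubeadm options; the pod CIDR and CRI-O socket path
# are assumptions, adjust to taste.
import subprocess

def kubeadm_init(pod_cidr: str = "10.244.0.0/16",
                 cri_socket: str = "unix:///var/run/crio/crio.sock") -> None:
    """Initialise the first control-plane node."""
    subprocess.run(
        [
            "kubeadm", "init",
            "--pod-network-cidr", pod_cidr,
            "--cri-socket", cri_socket,
            "--upload-certs",  # lets additional control-plane nodes join for HA
        ],
        check=True,
    )

if __name__ == "__main__":
    kubeadm_init()
```

The extra control-plane nodes then `kubeadm join ... --control-plane`, which gets you a 3-node HA setup without hand-rolling the manifests like I did.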

Fundamentally my setup is just kubelet and kube-proxy on every node, the OCI runtime (CRI-O for me), etcd (set up manually, but the certs are automated now), and then some templated k8s manifests dropped into the static pod manifest folder for the control plane on 3 nodes for HA. The more I think about it, the more I remember how complicated it is unless you also want a private CA. Which I have, and I love the convenience and privacy it affords me (no CT logs exposing domain names unless I need public certs, and those are public anyway).

I have expanded to 6 nodes (5 of which remain, RIP laptop SSD) and just run Arch on all of them because it kinda just works and I like the consistency. I also got quite good at the Arch install in the process.

[–] andrew@lemmy.stuart.fun 0 points 1 year ago* (last edited 1 year ago) (3 children)

If you've got >=3 machines with >=3 storage devices between them, I'd suggest at least strongly considering Rook. It should allow for future growth and will also let you tolerate the loss of one node at the storage level, assuming you have replication configured. The replication params can be set per StorageClass, in case you want to squeeze every last byte out of workloads that don't need storage-level replication.
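To give a flavor of what I mean by per-StorageClass replication: each StorageClass points at a CephBlockPool, and the pool carries the replication setting. A rough sketch with the Kubernetes Python client and rook-ceph default-ish names; a real Rook StorageClass needs a few more CSI parameters (provisioner secrets, image features), so treat this as illustrative only:

```python
# Rough, trimmed sketch of Rook Ceph replication-per-StorageClass using the
# Kubernetes Python client. Names/namespaces are the rook-ceph defaults, and
# a real StorageClass needs extra CSI parameters (provisioner secrets, image
# features), so treat everything here as illustrative.
from kubernetes import client, config

config.load_kube_config()

# The replication factor lives on the CephBlockPool...
pool = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephBlockPool",
    "metadata": {"name": "replicapool", "namespace": "rook-ceph"},
    "spec": {"replicated": {"size": 3}},  # survive losing one node's disks
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="ceph.rook.io", version="v1", namespace="rook-ceph",
    plural="cephblockpools", body=pool,
)

# ...and each StorageClass just points at a pool, so you can keep a
# size-3 class for important data and a size-1 class for scratch space.
sc = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="rook-ceph-block"),
    provisioner="rook-ceph.rbd.csi.ceph.com",
    parameters={"clusterID": "rook-ceph", "pool": "replicapool"},
)
client.StorageV1Api().create_storage_class(sc)
```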

I've run my own k8s cluster for years now, and solid storage from Rook really made it take off in terms of how many applications I can build and/or run on it.

As for backup, there's Velero, though I haven't gotten it to work on bare metal. My ideal would be to just use it to store backups in Backblaze B2, given the ridiculously low cost. Presumably I could get there with restic, since that's already my outside-of-k8s backup solution, but I still haven't gotten that set up, since Velero is much more cloud-provider friendly.
