farcaller

joined 2 years ago
[–] farcaller@fstab.sh 2 points 1 year ago (1 children)

In the context of my comments here, any mention of "S3" means "S3-compatible" in the way that's implemented by Garage. I hope that clarifies it for you.

[–] farcaller@fstab.sh 2 points 1 year ago (3 children)

Clearly I mean Garage here when I write "S3." It is significantly easier and faster to run hugo deploy and let it talk to Garage than to figure out where on a remote node the nginx k8s pod has its data PV mounted and scp files into it. Yes, I could automate that. Yes, I could pin the blog's pod to a single node. Yes, I could use a stable host path for that and use rsync, and I could skip the whole kubernetes insanity for a static html blog.

But I somewhat enjoy poking at the tech, and yes, using Garage makes deploys faster and gives me a stable, well-known API endpoint for both data transfers and serving the content, with very little maintenance required to keep it working.
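For the curious, that deploy step boils down to plain S3 PUTs against Garage's endpoint. A minimal sketch in Python with boto3, where the endpoint URL, bucket name and credentials are placeholders (hugo deploy does the equivalent internally, this is just an illustration):

```python
import mimetypes
from pathlib import Path

import boto3  # pip install boto3

# Garage speaks the standard S3 API, so a stock client works.
# Endpoint, bucket and keys below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.garage.example.internal",
    aws_access_key_id="GK_PLACEHOLDER",
    aws_secret_access_key="SECRET_PLACEHOLDER",
    region_name="garage",
)

site = Path("public")  # hugo's build output
for f in sorted(site.rglob("*")):
    if f.is_file():
        key = f.relative_to(site).as_posix()
        ctype, _ = mimetypes.guess_type(key)
        s3.upload_file(
            str(f), "blog", key,
            ExtraArgs={"ContentType": ctype or "application/octet-stream"},
        )
```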

[–] farcaller@fstab.sh 2 points 1 year ago (6 children)

S3 storage is simpler than running scp -r to a remote node, because you can copy files to S3 in a massively parallel way while scp is generally sequential. It's very easy to protect the API too, as it's just HTTP (and it's also significantly faster than WebDAV).
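To make the parallelism point concrete: every object is an independent HTTP PUT, so uploads fan out across a thread pool with no coordination, whereas scp pushes files one at a time over a single channel. A rough sketch (boto3 again; the endpoint and bucket are placeholders, credentials come from the environment):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import boto3

# boto3 low-level clients are thread-safe, so one client can be shared.
s3 = boto3.client("s3", endpoint_url="https://s3.garage.example.internal")

root = Path("public")
files = [p for p in root.rglob("*") if p.is_file()]

def put(path: Path) -> None:
    # each file is its own independent PUT; nothing serializes them
    s3.upload_file(str(path), "blog", path.relative_to(root).as_posix())

# 32 uploads in flight at once vs. scp's one-file-at-a-time copy
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(put, files))
```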

[–] farcaller@fstab.sh 4 points 1 year ago

Of course it does AI now!

But seriously, the easiest MinIO setup guide meant using their operator. The Garage guide was: spin up this single deployment and it works from there.

[–] farcaller@fstab.sh 17 points 1 year ago (15 children)

I remember when MinIO had just started and was small and easy to run. Nowadays it's a full-blown enterprise product, though, full of features you'll never care about in a homelab, eating into your CPU and RAM.

Garage is small and easy to run. I've been toying with it for several months and I'm more than happy with its simple API and tiny footprint. I even run my (static HTML) blog off it because it's just easier to deploy it to an S3-compatible API.

[–] farcaller@fstab.sh 6 points 1 year ago

Specifically, use home.arpa if you must use a private domain.

[–] farcaller@fstab.sh 1 points 1 year ago (1 children)

FWIW that Java app isn't that memory-hungry, and it's not CPU-intensive at all. There are no issues with running Java apps if you spend 5 minutes figuring out the basic flags for setting memory limits, or run it in a memory-limited cgroup via some container runtime.
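For illustration, the cgroup route looks something like this with the Docker SDK for Python; the image name is hypothetical, mem_limit is the cgroup cap, and JAVA_TOOL_OPTIONS is the standard way to hand heap flags to the JVM:

```python
import docker  # pip install docker

client = docker.from_env()

# Hypothetical image; the point is the pair of limits:
#  - mem_limit caps the container's cgroup at 768 MiB
#  - MaxRAMPercentage keeps the JVM heap well inside that cap
client.containers.run(
    "example/some-java-app:latest",
    detach=True,
    mem_limit="768m",
    environment={"JAVA_TOOL_OPTIONS": "-XX:MaxRAMPercentage=75.0"},
)
```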

[–] farcaller@fstab.sh 9 points 1 year ago (1 children)

I run k3s in my homelab as a single-node cluster. I'm very familiar with Kubernetes in general, so it's just easier for me to reason about things when there's a control plane.

Some of the benefits I find useful:

  • ArgoCD, set to fire-and-forget, will automatically update software versions as new releases happen. I use nix to lower the burden of maintaining my chart forks. Sometimes they break, but
  • VictoriaMetrics easily collects all the metrics from everything in the cluster with very little manual tinkering, so I am notified when things break, and
  • zfs-localpv provides in-cluster management for data snapshots, so when things do break I can easily roll back to a known good state.

k3s is, of course, a memory hog; I'd estimate it and Cilium (my CNI of choice) eat up about 2 GB of RAM and a bit under one core. It's something you can tune to some extent, though. But then, I can easily do pod routing via VPN and create services that automatically get a public IP from my endless IPv6 pool and have that address assigned a DNS name, in like 10 lines of YAML.

[–] farcaller@fstab.sh 8 points 1 year ago

IIRC they demonstrated an interaction with Siri where it asks the user for consent before enriching the data through ChatGPT. So yeah, that seems to mean your data is sent out (if you consent).

[–] farcaller@fstab.sh 25 points 1 year ago (10 children)

If you drop the projector, then AirPods already do it better when paired with the Watch. There's no point in such a device at all, then.

[–] farcaller@fstab.sh 4 points 1 year ago (1 children)

Is there anything interesting at all reported in /proc/spl/kstat/zfs/dbgmsg?

[–] farcaller@fstab.sh 1 points 1 year ago

I did run out of PCIe, yeah :-( The network peaks at about 26 Gbit/s, which is the most you can squeeze out of PCIe 3.0 x4. I could move the NVMes off the PCIe 4.0 x16 (I have two M.2 slots on the motherboard itself), but I planned to expand the NVMe storage to 4x SSDs and I'm out of PCIe lanes on the other end of the fiber either way (that box has all x16 going to the GPU).
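For anyone checking the numbers, a back-of-the-envelope take on why ~26 Gbit/s is about the ceiling for PCIe 3.0 x4:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b line encoding
per_lane_gbit = 8 * 128 / 130      # ~7.88 Gbit/s of usable bits per lane
x4_gbit = 4 * per_lane_gbit        # ~31.5 Gbit/s before protocol overhead
# TLP/DLLP headers and flow control eat a chunk of that,
# so mid-20s Gbit/s of actual payload is about right.
print(f"{x4_gbit:.1f} Gbit/s raw link rate for PCIe 3.0 x4")
```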
