a_fancy_kiwi

joined 2 years ago
[–] a_fancy_kiwi@lemmy.world 3 points 4 days ago

The house I bought had one of these installed already. Works great with the homeassistant ZWA-2 antenna.

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago

No, you actually caught me at the perfect time; the transfer to my temporary pool is almost done. I was just curious how inheritance worked on a pool, but after giving it some thought, your recommendation makes more sense: turn it on when I know I need it vs turn it off when I know I don't. Thanks for the advice.

[–] a_fancy_kiwi@lemmy.world 1 points 1 week ago (2 children)

Let’s say I did turn on compression on root. I can’t then turn it off on a per file system basis where it isn’t needed?
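(For anyone finding this later: ZFS properties like compression are inheritable and can be overridden per dataset. A sketch, with a hypothetical `mypool/media` dataset name:)

```shell
# compression set at the pool root is inherited by every dataset below it
zfs set compression=lz4 mypool
# ...but any child dataset can override it locally (hypothetical dataset name)
zfs set compression=off mypool/media
# the SOURCE column shows "local" vs "inherited from mypool"
zfs get -r compression mypool
```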

[–] a_fancy_kiwi@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Yeah, I'm not excited about the slower write and rebuild times, but the read times should still be pretty good. Considering I don't have any more space for drives in my server, and I don't know how crazy HDD prices will get in the next 12 months, the guaranteed 2-drive failure resiliency is more important to me at the moment. My current 1-drive failure resiliency, 2 if I'm lucky, has me worried. My backups are on shucked drives and I don't want to be put in a situation where I have to rely on them to restore 😅

[–] a_fancy_kiwi@lemmy.world 2 points 2 weeks ago (1 children)

Thank you for this

31
Raid Z2 help (lemmy.world)
submitted 2 weeks ago* (last edited 2 weeks ago) by a_fancy_kiwi@lemmy.world to c/selfhosted@lemmy.world
 

tldr: I'm going to set up raid z2 with 4x8TB hard drives. I'll have photos, documents (text, pdf, etc.), movies/tv shows, and music on the pool. Are the below commands good enough? Anything extra you think I should add?

sudo zpool create mypool raidz2 -o ashift=12 /dev/disk/by-id/12345 ...

zfs set compression=lz4 mypool #maybe zstd?
zpool set autoexpand=on mypool
zpool set autoreplace=on mypool #I might keep this off. I can see myself forgetting in the future
zpool set listsnapshots=on mypool
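(For reference: `ashift`, `autoexpand`, `autoreplace`, and `listsnapshots` are pool-level properties, while `compression` is a dataset-level property, which is why the commands above mix `zpool set` and `zfs set`. After creation you can verify both sides took effect:)

```shell
# pool-level properties (set with `zpool set`, or -o at creation time)
zpool get ashift,autoexpand,autoreplace,listsnapshots mypool
# dataset-level properties (set with `zfs set`, or -O at creation time)
zfs get compression mypool
```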

With AI raising hard drive prices, I overspent on 3x10TB drives in order to reorganize my current pool and have 3 hard drives sitting on a shelf in the event of a failure. My current pool was built over time, but it currently consists of 4x8TB drives. They're a stripe of mirrors, so a usable 16TB. If I understand it correctly, I can lose 1 drive for sure without losing data, and maybe a second depending on which drive fails. Because of that, I want to move to raid z2 to ensure I can lose any 2 drives without data loss. I'm going to move data from my 4x8TB drives to the 3x10TB, reconfigure the 4x8TB, and move everything back. I run Immich, Plex/Jellyfin, and Navidrome off the pool. All other documents are basically there for long-term storage just in case. What options should I use for raid z2 when setting it up?

I know I can look this stuff up. I have been and continue to do so; I was just hoping for some advice from people that are more knowledgeable about this than me. The move from the 4x8TB drives to the 3x10TB is going to take ~3 days, so I really don't want to mess this up and have to start over 😅

Edit:

After looking up each property, this is the command I will probably end up using to create the raid z2 pool, thanks Avid Amoeba:

~~sudo zpool create
-o ashift=12 -o acltype=posixacl -o xattr=sa
-o compression=lz4 -o dnodesize=auto -o relatime=on
-o normalization=formD
raidz2
mypool
/dev/disk/by-id/12345 ...~~

Edit2:

The above command didn't work on my machine. The option order and the uppercase "O" matter. Had to do this:

sudo zpool create \
  mypool \
  raidz2 \
  -o ashift=12 -O compression=lz4 \
  -O normalization=formD -O acltype=posixacl \
  -O xattr=sa -O dnodesize=auto \
  -O relatime=on \
  /dev/disk/by-id/12345 ...

Edit3:

And finally, after all this, I set up my temporary pool of 3x10TB disks as raid z2 instead of raid z1. Spent a day and a half transferring before I finally saw my mistake after running out of space 🫠

[–] a_fancy_kiwi@lemmy.world 5 points 3 weeks ago (2 children)

I’ll look into it, thanks.

I’m still in the information gathering phase. Do you know if the element client works with the continuwuity server? Is it as easy as entering the domain, user, and password in the client?

[–] a_fancy_kiwi@lemmy.world 5 points 3 weeks ago

Fair criticism. I just don't have a lot of free time. I can invest time in Element, but I wanted to crowdsource information to see if it was worth it, or if there was an easier way. It doesn't get much easier than Docker.

[–] a_fancy_kiwi@lemmy.world 2 points 3 weeks ago (3 children)

Out of curiosity, what makes it better?

A quick search says it's a package manager for Kubernetes. Besides Plex, everything I selfhost is just for me. Would you say Helm/Kubernetes is worth looking into for a hobbyist who doesn't work in the tech field?

 

My friends are open to leaving Discord which has finally given me a reason to look into Element/Matrix. I found the install instructions and am immediately put off. Is this it? No official docker compose? 😞

[–] a_fancy_kiwi@lemmy.world 1 points 1 month ago

Linux has gotten really good over the last ~15 years. It used to be that if you didn't have the most up-to-date packages, you'd be missing game-changing features. Now, the distribution you use almost doesn't matter, because even the older packages are good enough for most things.

To answer your question, if it weren’t for gaming, no I wouldn’t mind using Debian as my daily driver. If I ever needed a new package for whatever reason, I would use flatpaks, snaps, docker, or Distrobox to get it.

[–] a_fancy_kiwi@lemmy.world 4 points 1 month ago (2 children)

Personally, yeah, it's the old packages. I want to play games on my desktop and have the newest DE features. An Arch-based distro seems like it'll keep up better than Debian.

For my servers though, I only use Debian.

[–] a_fancy_kiwi@lemmy.world 4 points 1 month ago

I'm assuming you mean LXC? It's doable, but without some sort of orchestration tool like Nix or Ansible, I imagine ongoing maintenance or migrations would be kind of a headache.

[–] a_fancy_kiwi@lemmy.world 6 points 1 month ago (1 children)

You might come across docker run commands in tutorials. Ignore those. Just focus on learning docker compose. With docker compose, the run command just goes into a yaml file so it’s easier to read and understand what’s going on. Don’t forget to add your user to the docker group so you aren’t having to type sudo for every command.

Commands you’ll use often:

docker compose up - runs the container in the foreground

docker compose up -d - runs the container detached (in the background)

docker compose down - stops and removes the container

docker compose pull - pulls new images

docker image list - lists all images

docker ps - lists running containers

docker image prune -a - deletes images not being used by containers to free up space
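As a minimal illustration of "the run command just goes into a yaml file" (using `traefik/whoami`, a tiny test image, as a hypothetical service; adjust names and ports to taste):

```shell
# one-time: let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker "$USER"

# a run command like:
#   docker run -d -p 8080:80 traefik/whoami
# becomes this compose.yaml:
cat > compose.yaml <<'EOF'
services:
  whoami:
    image: traefik/whoami
    ports:
      - "8080:80"
    restart: unless-stopped
EOF
docker compose up -d
```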

 

I recently noticed that htop displays a much lower 'memory in use' number than free -h, top, or fastfetch on my Ubuntu 25.04 server.

I am using ZFS on this server and I've read that ZFS will use a lot of RAM. I also read a forum post where someone commented that htop doesn't show caching used by the kernel, but I'm not sure how to confirm that ZFS is what's causing the discrepancy.

I'm also running a bunch of docker containers and am concerned about stability since I don't know what number I should be looking at. I either have a usable ~22GB of available memory left, ~4GB, or ~1GB depending on what tool I'm using. Is htop the better metric to use when my concern is available memory for new docker containers or are the other tools better?

Server Memory Usage:

  • htop = 8.35G / 30.6G
  • free -h =
               total        used        free      shared  buff/cache   available
Mem:            30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
  • top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
  • fastfetch = 26.54GiB / 30.6GiB
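One way to confirm the gap is the ZFS ARC (assuming ZFS on Linux, which exposes its cache stats under /proc):

```shell
# the ARC size in bytes is the "size" row of arcstats
awk '$1 == "size" {printf "ARC: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats
# or, if the zfs utilities are installed, the summary tool:
arc_summary | head -n 20
# free/top/fastfetch count the ARC as "used" memory; htop apparently does not
```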

EDIT:

Answer

My Results

tldr: all the tools are showing correct numbers; htop just ignores the ZFS ARC. For the purposes of ensuring there is enough RAM for more docker containers in the future, htop seems to be the tool that shows the most useful number with my setup.

 

This is a continuation of my other post

I now have homeassistant, immich, and authentik docker containers exposed to the open internet. Homeassistant has built-in 2FA, and authentik is being used as the authentication for immich, which supports 2FA. I went ahead and blocked connections from every country except for my own via Cloudflare (I'm aware this does almost nothing, but I feel better about it).

At the moment, if my machine became compromised, I wouldn't know. How do I monitor these docker containers? What's a good way to block IPs based on failed login attempts? Is there a tool that could alert me if my machine was compromised? Any recommendations?
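On the failed-login-attempt question, fail2ban is the usual answer. A minimal sketch, assuming the proxy in front of these containers is nginx logging to its default location (`nginx-http-auth` is a filter that ships with fail2ban; adjust paths and jails to your actual setup):

```shell
sudo apt install fail2ban
# ban IPs after repeated failed HTTP auth attempts against the proxy
sudo tee /etc/fail2ban/jail.d/nginx.local <<'EOF'
[nginx-http-auth]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status nginx-http-auth
```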

EDIT: Oh, and if you have any recommendations for settings I should change in the cloudflare dashboard, that would be great too; there's a ton of options in there and a lot of them are defaulted to "off"

 

tldr: I'd like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet but I'm not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I'm kind of unsure what the best approach is. Hosting services on the internet has risk and I'd like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share cpu resources with other users and get a dedicated box?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What's the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

EDIT:

If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.

  • domain from namecheap
  • cloudflare to handle DNS
  • Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn't get around to looking at Caddy)
  • Cloudflare-ddns docker container to update my A records in cloudflare
  • authentik for 2 factor authentication on my immich server
 

I've been interested in building a DIY NAS out of an SBC for a while now. Not as my main NAS but as a backup I can store offsite at a friend or relative's house. I know any old x86 box will probably do better, this project is just for the fun of it.

The Orange Pi 5 looks pretty decent with its RK3588 chip and M.2 PCIe 3.0 x4 connector. I've seen some adapters that can turn that M.2 slot into a few SATA ports or even a full x16 slot which might let me use an HBA.

Anyway, my question is, assuming the CPU isn't a bottleneck, how do I figure out what kind of throughput this setup could theoretically give me?

After a few google searches:

  • PCIe Gen 3 x4 should give me roughly 4 GB/s of throughput (~985 MB/s per lane after 128b/130b encoding overhead)
  • that M.2 to SATA adapter claims 6 ~~GB/s~~ Gb/s throughput
  • a single 7200rpm hard drive should give about 80-160 MB/s of throughput

My guess is that ultimately, I'm limited by that 4GB/s throughput on the PCIe Gen 3 x4 slot but since I'm using hard drives, I'd never get close to saturating that bandwidth. Even if I was using 4 hard drives in a RAID 0 config (which I wouldn't do), I still wouldn't come close. Am I understanding that correctly; is it really that simple?
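Yes, it really is that simple for sequential throughput. The arithmetic, using the numbers from the bullets above (~985 MB/s per PCIe 3.0 lane after encoding overhead, 160 MB/s best-case per drive):

```shell
pcie_mbs=$((4 * 985))   # 4 lanes -> PCIe 3.0 x4 ceiling, in MB/s
disk_mbs=$((4 * 160))   # 4 drives x 160 MB/s best-case sequential
echo "PCIe ceiling: ${pcie_mbs} MB/s, 4-drive aggregate: ${disk_mbs} MB/s"
```

Even four drives striped together reach only about a sixth of what the slot can carry, so the hard drives, not the PCIe link, are the bottleneck. (Random I/O is another story: seek-bound workloads won't come close to even those per-drive numbers.)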
