klangcola

joined 2 years ago
[–] klangcola@reddthat.com 2 points 6 days ago

Oh cool, didn't know you could do that

[–] klangcola@reddthat.com 2 points 6 days ago (2 children)

Ah, I didn't even consider ads in the UI would be a thing. How disgusting

[–] klangcola@reddthat.com 3 points 1 week ago (2 children)

Regarding DRM: Netflix (and probably others) requires the Widevine library to play back DRM content. This works perfectly fine on a normal Ubuntu PC, but not on the Pi, because the library is only distributed for x86, not ARM.

So I'd just get any normal PC. Used enterprise mini PCs can be had for quite cheap, and they are small, efficient, and high quality. Search for HP, Dell or Lenovo mini PCs, or "1-litre" PCs.

[–] klangcola@reddthat.com 2 points 1 week ago (5 children)

None at all? If so, how? My friends with Apple TVs get an obnoxious amount of ads in their YouTube app, for example.

[–] klangcola@reddthat.com 2 points 1 week ago

Nice. My HM90s have a really great cooling solution for the CPU (big silent fan, finely finned heatsink), but no cooling on the bottom side of the main board, which houses the RAM, an NVMe drive and two 2.5" SATA SSDs.

As usual, the Arch wiki is super helpful, even for non-Arch distros: https://wiki.archlinux.org/title/Lm_sensors#Adding_DIMM_temperature_sensors

[–] klangcola@reddthat.com 2 points 1 week ago (2 children)

Regarding mini PCs: beware of RAM overheating!

I bought three Minisforum HM90s for Proxmox self-hosting, installed 64 GB of RAM in each (2x 32 GB DDR4-3200 sticks) and ran memtest first to ensure the RAM was good. All 3 mini PCs failed to various degrees.

The "best" one would run for a couple of days and tens of passes before throwing a burst of errors (tens of them), then run for another few days without errors.

Turns out the RAM was overheating: 85-95 °C surface temperature. (There's almost no space or openings for air circulation on that side of the PC.) With the lid taken off, 2 of the 3 computers ran memtest for a week with no errors, but one still gave the occasional error burst. RAM surface temperature with the lid off was still 80-85 °C.

Adding a small fan to create a slight draft dropped the temperature to 55-60 °C. I then left the computer running memtest for a few weeks while I was away, then another few weeks while busy with other stuff. It has now been 6 weeks of continuous memtest, so I'm fairly confident in the integrity of the RAM, as long as it stays cool.

It also turns out that some, but not all, RAM sticks have onboard temperature sensors, and lm-sensors can read the RAM temperature if the sticks have one. So I'm building an Arduino solution to monitor the temperature with an IR sensor and also control an extra fan.
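For the sticks that do have a sensor, a hedged sketch of what the lm-sensors route can look like on Linux (module names are the usual ones but aren't guaranteed to match every board):

```shell
# Sketch: exposing DIMM temperature sensors to lm-sensors.
# The jc42 driver covers the JEDEC JC-42.4 thermal sensors present on
# some, but not all, DDR4 modules. The SMBus controller driver depends
# on the platform: i2c-piix4 on AMD chipsets, i2c-i801 on Intel.
sudo modprobe i2c-piix4   # SMBus driver (AMD; use i2c-i801 on Intel)
sudo modprobe jc42        # JEDEC JC-42.4 DIMM thermal sensor driver
sensors                   # jc42-* entries appear only if the sticks have a sensor
```

If nothing shows up, the sticks most likely just don't carry the sensor, which is where an external IR sensor comes in.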

[–] klangcola@reddthat.com 1 points 1 week ago

Game changing! I've never heard of Hoarder before, but I'll look into it now.

Linkding also has a REST API, but I don't see an option to send attachment files.

[–] klangcola@reddthat.com 2 points 1 week ago (3 children)

+1 for SingleFile

I recently tried Linkwarden, Linkding and ArchiveBox for making offline copies. They all had the same issue of running into a captcha or login wall for the sites I wanted to capture.
SingleFile to the rescue, as it uses your current browser session as a logged-in and verified human.

Linkding lets you upload the SingleFile HTML file as an attachment to its link, but I didn't see such an option for Linkwarden.

[–] klangcola@reddthat.com 2 points 2 weeks ago

I hadn't considered giant data sets, like a Jellyfin movie library or an Immich photo library. Though for Jellyfin I'd consider only the database and config as "Jellyfin data", while the movie library is its own entity, shared with Jellyfin.

[–] klangcola@reddthat.com 1 points 2 weeks ago (1 children)

How does this work? Where is the additional space for the cache used, on the server or the client?

Or are you saying everything is on one host at the moment, and you use NFS from the host to the docker container (on the same host)?

[–] klangcola@reddthat.com 2 points 2 weeks ago (1 children)

This has been my thinking too.

Though after reading mbirth's comment I realised it's possible to use named volumes and explicitly tell Docker where on disk to store the volume:

    services:
      myapp:
        volumes:
          - my-named-volume:/data/
    volumes:
      my-named-volume:
        driver: local
        driver_opts:
          type: none
          # must be an absolute path: driver_opts are passed straight to
          # mount, so relative paths like ./folder-next-to-compose-yml
          # are not resolved
          device: "/path/to/well/known/folder"
          o: bind

It's a bit verbose, but at least I know which folder and partition holds the data, while keeping the benefits of named volumes.

[–] klangcola@reddthat.com 1 points 2 weeks ago

Yeah that's fair, permission issues can be a pain to deal with. Guess I've been lucky I haven't had any significant issues with permissions and docker-containers specifically yet.

 

What are the pros and cons of using named volumes vs bind mounts in Docker for self-hosting?

I've always used "regular" bind mounts, and that's what is usually in official docker-compose.yml examples for various apps:

volumes:
  - ./myAppDataFolder:/data

where myAppDataFolder/ is in the same folder as the docker-compose.yml file.

As a self-hoster I find this neat and tidy; my docker folder has a subfolder for each app. Each app folder has a docker-compose.yml, .env and one or more data-folders. I version-control the compose files, and back up the data folders.

However some apps have docker-compose.yml examples using named volumes:

services:
  mealie:
    volumes:
      - mealie-data:/app/data/
volumes:
  mealie-data:

I had to google the documentation (https://docs.docker.com/engine/storage/volumes/) to find that the volume is actually called mealie_mealie-data:

$ docker volume ls
DRIVER    VOLUME NAME
...
local     mealie_mealie-data

and it is stored in /var/lib/docker/volumes/mealie_mealie-data/_data

$ docker volume inspect mealie_mealie-data
...
  "Mountpoint": "/var/lib/docker/volumes/mealie_mealie-data/_data",
...

I tried googling the why of named volumes, but most answers were about things that sounded very enterprise-y: Docker Swarm, and how all state should live in "the database" so you shouldn't ever need to touch the actual files backing the volume for any container.

So, to summarize: named volumes, why or why not? What are your preferences, given that we're self-hosting and not running huge enterprise clusters?
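One note on the backup angle: a named volume doesn't have to be reached through /var/lib/docker directly; it can be archived via a throwaway container. A sketch, using the mealie_mealie-data volume from the example above (the archive name is arbitrary):

```shell
# Back up a named volume without touching /var/lib/docker/volumes:
# mount the volume read-only into a throwaway Alpine container and
# tar its contents into the current directory on the host.
docker run --rm \
  -v mealie_mealie-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/mealie-data.tar.gz -C /data .
```

With a bind mount you'd skip this and just back up the folder next to the compose file, which is part of why they feel tidier for self-hosting.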
