Well, you gotta start it somehow. You could rely on Compose's built-in service management, which will restart containers upon system reboot if they were started with `-d` and have the right restart policy. But you still have to start those at least once. How would you do that? Unless you plan to start it manually, you have to use some service startup mechanism. That leads us to a systemd unit: I have to write a systemd unit that does `docker compose up -d`. But then I'm splitting the service lifecycle management across two systems. If I want to stop it, I can no longer do that via systemd; I have to go find where the compose file is and issue `docker compose down`. Not great. Instead I'd write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that's kinda what I'm doing isn't it? Except if I start it with `docker compose up` without `-d`, I don't need a separate stop line and systemd can directly monitor the process. As a result I get logs in journald too, and I can use systemd's restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. That's way more powerful than Compose's restart policy. Finally, I like to clean up any data I haven't explicitly intended to persist across service restarts, so I don't end up debugging an issue that only manifests because of some persisted piece of data I'm completely unaware of.
Let me know how the search performs once it's done. Speed of search, subjective quality, etc.
Why start anew instead of forking or contributing to Jellyfin?
I think I lost neurons reading this. Other commenters in this thread had the resilience to explain what the problems with it are.
I use a fixed tag. 😂 It's more of a simple update mechanism: change the tag in SaltStack, apply the config, the service is restarted, and the new tag is pulled. If the tag doesn't change, the pull is a no-op.
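For illustration, pinning looks like this in the compose file. The image path is Immich's published server image; the tag value here is just an example, not a recommendation:

```yaml
services:
  immich-server:
    # Fixed tag instead of :latest — the image only changes when this line does
    image: ghcr.io/immich-app/immich-server:v1.119.0
```

With the pull happening on every service restart, bumping this single line is the whole upgrade.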
Let me know how inference goes. I might recommend that to a friend with a similar CPU.
Yup. Everything is in one place and there are no hardcoded paths outside of the work dir, making it trivial to move across storage or even machines.
Because on restart I clean up everything that's not explicitly persisted on disk:
```
[Unit]
Description=Immich in Docker
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
WorkingDirectory=/opt/immich-docker
ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```
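To wire it up, something like this should work. The filename `immich-docker.service` is my assumption; match it to whatever you save the unit as:

```shell
# Install and enable the unit (filename is an assumption)
sudo cp immich-docker.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now immich-docker.service

# Because compose runs in the foreground (no -d), logs land in journald
journalctl -u immich-docker.service -f
```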
Did you run the Smart Search job?
That's a Celeron, right? I'd try a better AI model. Check this page for the list. You could try the heaviest one. It'll take a long time to process your library, but inference is faster. I don't know how much faster; maybe it would be fast enough to be usable. If not, choose a lighter model. There are execution times in the table that I assume tell us how heavy the models are. Once you change the model, you have to let it rescan the library.
Yes, it depends on how you're managing the service. If you're using one of the common patterns via systemd, you may be cleaning up everything, including old volumes, like I do.
E: Also if you have any sort of lazy prune op running on a timer, it could blow it up at some point.
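As a concrete (hypothetical) sketch of what I mean, a scheduled cleanup like this pair of units would also delete unused volumes because of `--volumes`:

```
# docker-prune.service (hypothetical name)
[Unit]
Description=Prune unused Docker data

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -f --volumes

# docker-prune.timer
[Unit]
Description=Weekly Docker prune

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

If your data lives in a named volume rather than a bind mount, a timer like that will eventually eat it.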
Okay that's gotta be radically different!