avidamoeba

joined 2 years ago
[–] avidamoeba@lemmy.ca 2 points 4 months ago

Yup. Everything is in one place and there are no hardcoded paths outside the work dir, making it trivial to move across storage or even machines.
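
In case it's not obvious, the trick is just a compose file that only uses relative bind mounts. A minimal sketch, with an illustrative service and paths rather than my exact file:

services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    env_file:
      - .env
    volumes:
      # relative path, so the data lives inside the work dir and moves with it
      - ./library:/usr/src/app/upload

Copy the whole directory (compose file, .env, ./library) to another disk or machine, docker compose up, and you're back where you were.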

[–] avidamoeba@lemmy.ca 3 points 4 months ago (8 children)

Because on every restart I clean up everything that isn't explicitly on disk:

[Unit]
Description=Immich in Docker
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0

WorkingDirectory=/opt/immich-docker

# The leading '-' tells systemd to ignore failures here, so a clean host
# doesn't block startup. Kill, tear down and remove containers, orphans and
# anonymous volumes, then pull fresh images before starting.
ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up

Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
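
If you want to replicate it, it's a plain unit file. Assuming you save it as immich-docker.service, something like:

sudo cp immich-docker.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now immich-docker.service

After that systemd owns the lifecycle and the cleanup runs on every (re)start.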
[–] avidamoeba@lemmy.ca 1 points 4 months ago (8 children)

Did you run the Smart Search job?

[–] avidamoeba@lemmy.ca 2 points 4 months ago* (last edited 4 months ago) (12 children)

That's a Celeron, right? I'd try a better AI model. Check this page for the list. You could try the heaviest one. It'll take a long time to process your library, but inference at search time is faster than that initial scan. I don't know by how much; maybe it'd be fast enough to be usable. If not, choose a lighter model. The table lists execution times, which I assume indicate how heavy each model is. Once you change the model, you have to let it rescan the library.
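
The switch itself is in the web UI, under the admin Machine Learning settings for Smart Search, and then you re-run the Smart Search job on all assets. If I remember right, you can also have the ML container preload the chosen model at startup via an environment variable. Name from memory, so check the env docs before relying on it, and the model name below is just a placeholder:

services:
  immich-machine-learning:
    environment:
      # assumed variable name; the model itself is still selected in the admin UI
      MACHINE_LEARNING_PRELOAD__CLIP: <model-name-from-the-list>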

[–] avidamoeba@lemmy.ca 7 points 4 months ago* (last edited 3 months ago)

Yes, it depends on how you're managing the service. If you're using one of the common patterns via systemd, you may be cleaning up everything, including old volumes, like I do.

E: Also, if you have any sort of lazy prune job running on a timer, it could blow those volumes away at some point.
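
E.g. a blunt weekly prune in cron, something like this illustrative /etc/cron.d entry, would happily delete any volume Docker considers unused at that moment:

0 3 * * 0  root  /usr/bin/docker system prune -af --volumes

The --volumes flag makes it remove unused volumes too, which is exactly what bites people who keep state in volumes instead of bind mounts.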

[–] avidamoeba@lemmy.ca 4 points 4 months ago (1 children)

Check your Syncthing settings. It's a very reliable piece of software. Other than that, Immich, but that's a slightly different use case.

[–] avidamoeba@lemmy.ca 9 points 4 months ago (1 children)

I switched to the same model. It's absolutely spectacular. The only extra things I did were to increase the concurrent job count for Smart Search and to give the model access to my GPU, which sped up the initial scan by at least an order of magnitude.
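
The GPU bit is the stock hardware acceleration setup from the Immich docs. Roughly this for an Nvidia card, and check hwaccel.ml.yml for the other backends:

services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    extends:
      file: hwaccel.ml.yml
      service: cuda

The concurrency bump is just the Smart Search concurrency in the admin job settings.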

[–] avidamoeba@lemmy.ca 16 points 4 months ago* (last edited 4 months ago) (17 children)

Oh, and if you haven't changed from the default ML model, please do. The results are phenomenal. The default is nice, but it's only really needed on very low-power hardware. If you have a notebook/desktop-class CPU and/or a GPU with 6GB+ of RAM, you should try a larger model. I used the best model they have and it consumes around 4GB of VRAM.
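
If you want to see what a given model actually costs, watch VRAM while the Smart Search job runs. Assuming an Nvidia GPU:

watch -n 2 nvidia-smi

Usage should settle once the model is loaded, so a minute or two into the job is enough to tell whether it fits.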

[–] avidamoeba@lemmy.ca 7 points 4 months ago (1 children)

Would this work with a public dynamic DNS?

[–] avidamoeba@lemmy.ca 8 points 4 months ago

Use low-power radio like Zigbee or Z-Wave exclusively, unless you have a good reason to trust the device.

[–] avidamoeba@lemmy.ca 3 points 4 months ago

No issues with Debian/Ubuntu on many laptops since the early 2010s, mostly with Intel graphics. I had a Vostro 1400 with Nvidia and it also resumed fine, but that was 2009-11, so the experience with the Nvidia driver from that era is likely irrelevant now.
