this post was submitted on 24 Feb 2026
98 points (81.8% liked)

Selfhosted

The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It's not. It's a structural problem with how we evaluate trust in self-hosted software.

Here's the actual issue:

Docker Hub tells you almost nothing useful about security.

The 'Verified Publisher' badge verifies that the namespace belongs to the organization. That's it. It says nothing about what's in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.

Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There's no notification when a tag gets repointed. If you're pulling by tag in production (or in your homelab), you're trusting a promise that can be silently broken.

The only actually trustworthy reference is a digest (image@sha256:...). Immutable, verifiable, auditable. Almost nobody uses them.
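To make the digest point concrete, here's a minimal shell sketch. The manifest string below is a stand-in, not a real OCI manifest: the point is that a digest is just the SHA-256 of the manifest bytes, so the same content always resolves to the same reference, and any change to the content changes the reference.

```shell
# Illustrative only: a fake manifest standing in for a real image manifest.
manifest='{"schemaVersion":2,"layers":[]}'

# The digest is content-addressed: SHA-256 over the manifest bytes.
digest="sha256:$(printf '%s' "$manifest" | sha256sum | cut -d' ' -f1)"

# Pin with name@digest instead of name:tag, e.g. in a compose file:
echo "image: huntarr@${digest}"
```

Unlike a tag, nobody can quietly repoint name@sha256:... at different content; a pull either matches the hash or fails.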

The Huntarr case specifically:

Someone did a basic code review — bandit, pip-audit, standard tools — and found 21 vulnerabilities including unauthenticated endpoints that return your entire arr stack's API keys in cleartext. The container runs as root. There's a Zip Slip. The maintainer's response was to ban the reporter.

None of this would have been caught by Docker Hub's trust signals, because Docker Hub's trust signals don't evaluate code. They evaluate namespace ownership.

What would actually help:

  • Pull by digest, not tag. Pin your compose files.
  • Check whether the image is built from a public, auditable Dockerfile. If the build process is opaque, that's a signal.
  • Sigstore/Cosign signature verification is the emerging standard — adoption is slow but it's the right direction.
  • Reproducible builds are the gold standard. Trust nothing, verify everything.
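Putting the first two bullets into practice, a compose entry might look like this sketch (image name and digest are placeholders; the non-root user and read-only filesystem are extra hardening relevant to the runs-as-root complaint, not something the list above mandates):

```yaml
services:
  app:
    # Placeholder image and digest -- substitute the digest you actually verified.
    image: ghcr.io/example/app@sha256:0000000000000000000000000000000000000000000000000000000000000000
    user: "1000:1000"   # don't run as root inside the container
    read_only: true     # optional extra hardening
```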

The uncomfortable truth: most of us are running images we've never audited, pulled from a registry whose trust signals we've never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.

CameronDev@programming.dev 38 points 14 hours ago

Pull by digest just ensures that people end up running an ancient version, vulnerabilities and all, long after any issues were patched, so that isn't a one-size-fits-all solution either.

Most projects are well behaved, so pulling latest makes sense; they likely have fixes that you need. In the case of an actually malicious project, the answer is to not run it at all. Huntarr showed their hand, you cannot trust any of their code.

Kushan@lemmy.world 4 points 6 hours ago

I generally agree with the sentiment but don't pull by latest, or at the very least don't expect every new version to work without issue.

Most projects are very well behaved as you say, but they still need to upgrade major versions now and again, and those contain breaking changes.

I spent an afternoon putting my compose files into git, setting up a simple CI pipeline, and using Renovate to automatically create PRs when things update. Now all my services are pinned to specific versions, and when there's an update, I get a PR to make the change along with a nice changelog telling me what's actually changed.
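For anyone wanting to replicate this, a minimal Renovate config along these lines might look like the sketch below. The preset and manager names come from Renovate's documentation; the commenter's exact setup isn't shown, so treat this as a starting point, not their config.

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"]
}
```

With this in the repo, Renovate scans the compose files for pinned image versions and opens a PR per available update.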

It's a little more effort but things don't suddenly break any more. Highly recommend this approach.

CameronDev@programming.dev 3 points 5 hours ago

That does sound like a good approach. Are you able to share that CI pipeline? I am mostly happy to risk the occasional breakage; nothing is really critical. But something more reliable would probably save me some drama every so often when it does break.

wilo108@lemmy.ml 6 points 13 hours ago

I use digests in my docker compose files, and I update them when new versions are released (after reading the release notes) 🤷

suicidaleggroll@lemmy.world 18 points 12 hours ago

Unfortunately that approach is simply not feasible unless you have very few containers or you make it your full time job.

wilo108@lemmy.ml 6 points 11 hours ago

I dunno, I've never found it all that onerous.

I have a couple of dozen (perhaps ~50) containers running across a bunch of servers, I read the release notes via RSS so I don't go hunting for news of updates or need to remember to check, and I update when I'm ready to. Security updates will probably be applied right away (unless I've read the notes and decided it's not critical for my deployment(s)), for feature updates I'll usually wait a few days (dodged a few bullets that way over the years) or longer if I'm busy, and for major releases I'll often wait until the first point release unless there's something new I really want.

Unless there are breaking changes it takes a few moments to update the docker-compose.yaml and then dcp (aliased to docker compose pull) and dcdup (aliased to docker compose down && docker compose up -d && docker compose logs -f).
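For reference, those aliases as they might be defined in a shell rc file (defining them doesn't invoke docker itself):

```shell
# Convenience aliases described above: pull new images, then recreate and tail.
alias dcp='docker compose pull'
alias dcdup='docker compose down && docker compose up -d && docker compose logs -f'
```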

I probably do spend upwards of maybe 15 or 20 minutes a week under normal circumstances, but it's really not a full time job for me 🤷.

RIotingPacifist@lemmy.world 1 point 10 hours ago

Yeah, this is why I use Debian instead of containers: you can read the release notes on a stable release.

BradleyUffner@lemmy.world 7 points 10 hours ago (last edited 3 hours ago)

Is manually updating based on trusting the accuracy of the release notes any more secure than just trusting "latest"?

CameronDev@programming.dev 7 points 13 hours ago

You might, but I bet the majority of people set and forget.

I rely on watchtower to keep things up to date.
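For context, a typical Watchtower deployment is itself a container with access to the Docker socket, roughly this compose sketch (the image name containrrr/watchtower is the real one; the rest is a common minimal setup, not a vetted config):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets it update other containers
    restart: unless-stopped
```

Worth noting that the socket mount is itself a significant trust grant, which loops back to the thread's point about auditing what you run.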