One proxy with two NICs downstream? Does that solve the "single point of failure" risk or am I being overly cautious?
Plus, the internal and external services are running on the same box. Is that where my real problem lies?
selfh.st
selfh.st is an independent publication created and curated by Ethan Sholly. [...] selfh.st draws inspiration from a number of sources including reddit's r/selfhosted subreddit, the Awesome-Selfhosted project on GitHub, and the #selfhosted/#homelab communities on Mastodon.
and also
This Week in Self-Hosted is sponsored by Tailscale, trusted by homelab hobbyists and 4,000+ companies. Check out how businesses use Tailscale to manage remote access to k8s and more.
awesome-selfhosted.net
This list is under the Creative Commons Attribution-ShareAlike 3.0 Unported License. Terms of the license are summarized here. The list of authors can be found in the AUTHORS file. Copyright © 2015-2024, the awesome-selfhosted community
Here's the docker stats of my Nextcloud containers (5 users, ~200GB data and a bunch of apps installed):

No DB wiz by a long shot, but my guess is that most of that 125MB is actual data. Other Postgres containers for smaller apps run 30-40MB. Plus the container separation makes it so much easier to stick to a good backup strategy. Wouldn't want to do it differently.
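If anyone wants to compare numbers on their own setup or copy the per-container backup approach, roughly like this (the nextcloud-db container and DB names are just examples, adjust to your compose project):
# one-shot memory snapshot per container
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
# per-app logical dump straight out of the DB container
docker exec nextcloud-db pg_dump -U nextcloud -d nextcloud | gzip > nextcloud-db.sql.gz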
This is the setup I have (Nextcloud, Keepass Desktop, Keepass2android+webdav) and k2a handles file discrepancies very well. I always pick "merge" when it is informing me of a conflict on save. Have been using it like that for years without a problem.
Edit: an added benefit is that I have the Keepass extension installed in my Nextcloud, so as long as I can get to it, I have access to my passwords, no devices needed.
Page loading times, general stability. Everything, really.
I set it up with sqlite initially to test if it was for me, and was surprised how flaky it felt given how highly people spoke about it. I'm really glad I tried with postgres instead of just tearing it down. But my experience is highly anecdotal, of course.
You can do batch operations in a document view. Select multiple documents and change the attributes in the top menu. Which commands are you missing?
Slow and unreliable with sqlite, but rock solid and amazing with postgres.
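For anyone wanting to make the same switch, it comes down to a few environment variables on the paperless-ngx webserver container; a docker-compose.yml fragment (host name and credentials are placeholders):
environment:
  PAPERLESS_DBENGINE: postgresql
  PAPERLESS_DBHOST: db   # pointing this at a postgres container is what moves it off sqlite
  PAPERLESS_DBNAME: paperless
  PAPERLESS_DBUSER: paperless
  PAPERLESS_DBPASS: change-me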
Today, every document I receive goes into my duplex ADF scanner, which scans it to a network share monitored by Paperless. Documents there are ingested and pre-tagged, waiting for me to review them in the inbox. Unlike other posters here, I find the tagging process extremely fast and easy. Granted, I didn't have to bring in thousands of documents to begin with but started from a clean slate.
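If anyone wants to copy the scan-to-share flow, it's just a matter of mounting the share the scanner writes to as the consume folder of the webserver container; a compose fragment (paths are examples):
volumes:
  - /mnt/scans:/usr/src/paperless/consume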
What's more, development is incredibly fast-moving and really useful features are added all the time.
You know your stuff, man! It's exactly as you say. 🙏
My config was more or less identical to yours, and that removed some doubt and let me focus on the right part: without a network config for br0, the host doesn't bring it up on boot. I thought it had something to do with the interface having an IP, but it turns out the following works as well:
user@edge:/etc/systemd/network$ cat wan0.network
[Match]
Name=br0
[Network]
DHCP=no
LinkLocalAddressing=ipv4
[Link]
RequiredForOnline=no
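For reference, the bridge itself comes from a matching .netdev, and the physical port gets its own .network that enslaves it; roughly like this (the NIC name enp1s0 is just an example):
# br0.netdev -- creates the bridge device
[NetDev]
Name=br0
Kind=bridge

# enp1s0.network -- attaches the physical port to the bridge
[Match]
Name=enp1s0
[Network]
Bridge=br0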
Thank you once again!
No worries. It has a Stripe integration, too, so it's easy to handle payments without having to hold customers' credit card info.
You can easily host the community edition in Docker or otherwise. Odoo has a steep learning curve but it's very versatile. It can definitely do what you describe.
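If it helps to get a feel for it, the official Docker image quickstart is basically two containers (image tags and credentials here are just examples):
# postgres backend for Odoo
docker run -d --name db -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres postgres:15
# community edition web container on port 8069, linked to the db
docker run -d -p 8069:8069 --name odoo --link db:db -t odoo:17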
The services run on a separate box; I haven't decided yet which VLAN to put it on. I wasn't planning to have it in the DMZ, but rather to create ingress firewall rules from the DMZ.
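The idea would be rules along these lines (an nftables sketch; the addresses are placeholders and it assumes an existing inet filter table with a forward chain):
# allow only the DMZ reverse proxy to reach the internal service on 443
nft add rule inet filter forward ip saddr 10.0.20.10 ip daddr 10.0.30.5 tcp dport 443 ct state new accept
# everything else from the DMZ toward the services VLAN gets dropped
nft add rule inet filter forward ip saddr 10.0.20.0/24 ip daddr 10.0.30.0/24 drop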