Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like overkill for most home labs, but I feel like there may be something I'm missing. Let's say I run my home lab on two or three different SBCs. The main server is an x86 i5 machine with 16 GB of memory and the others are ARM devices with 8 GB of memory, with ample storage on all of them. Wouldn't Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian, or another server distro on them all and running the services I need from binaries or Docker? It seems like the extra memory needed to run the Proxmox software plus the containers would just kill available memory and CPU. Am I wrong in thinking that Proxmox is better suited to a machine with 32 GB or more of memory and a reasonably powerful baseline CPU?

[–] machinin@lemmy.world 25 points 4 months ago (1 children)

For me, pros are:

  • Fun to learn something new
  • Easy to test different systems. For example, I can play with different router or NAS software without needing a separate physical machine.
  • I've been able to create different "computers" that serve different needs and require different levels of security.
  • A cluster is probably overkill for me right now, but it was a fun experiment.

Cons:

  • Updating all the different systems can be a pain. I could probably automate it, but I haven't made the time to learn it yet.
  • As a beginner, I'm throwing a bunch of parts together and hoping it will work. I should probably be more strategic in my implementation, but I don't know what to prioritize. I'm sure I'll have to start over in the future.
  • Related to the previous point: the storage setup doesn't seem very intuitive. I probably need to set that up better.
  • I haven't quite figured out backups yet. My VM backups all seem too big. I need to figure that out and automate it (rough sketch of one direction below).
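For context, here's roughly the kind of thing I still need to work out, using vzdump (the backup tool that ships with Proxmox) with compression and a retention limit. This is only a sketch; the VM ID 100 and the storage name "local" are placeholders:

```sh
# Sketch: snapshot-mode backup of VM 100 (placeholder ID) with zstd compression,
# keeping only the last 3 backups on the "local" storage (placeholder name).
vzdump 100 --mode snapshot --compress zstd --storage local --prune-backups keep-last=3
```

Proxmox can also schedule jobs like this from Datacenter → Backup in the web UI.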

Hope this is helpful.

[–] umbrella@lemmy.ml 3 points 4 months ago

a simple cron job pointing to an update.sh with apt update && apt upgrade -y does the trick (rough sketch below).

i wouldn't recommend completely automating it though

debian has unattended-upgrades by default and generally takes care of itself
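roughly what i mean, assuming a debian or ubuntu guest (the script path and schedule here are just examples):

```sh
#!/bin/sh
# /usr/local/bin/update.sh -- path is just an example
set -e
apt-get update
apt-get upgrade -y
```

and a root crontab entry (crontab -e) to run it weekly, e.g. sundays at 03:00:

```
0 3 * * 0 /usr/local/bin/update.sh >> /var/log/update.log 2>&1
```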