this post was submitted on 12 Feb 2024
31 points (94.3% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.




With the demise of ESXi, I'm looking for alternatives. Currently I have pfSense virtualized with four physical NICs plus a bunch of virtual ones, and it works great. Does Proxmox do this with anything like the ease of ESXi? Any other ideas?

[–] TCB13@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (16 children)

Yes, it does run, but BSD-based VMs running on Linux have their quirks as usual. This might be what you're looking for: https://discuss.linuxcontainers.org/t/run-freebsd-13-1-opnsense-22-7-pfsense-2-7-0-and-newer-under-lxd-vm/15799

Since you want to run a firewall/router, you can ignore LXD's networking configuration and use your OPNsense to assign addresses and whatnot to your other containers. You can create whatever bridges / VLAN-based interfaces you like on your base system and then assign them to profiles/containers/VMs. For example, create a cbr0 network bridge using systemd-networkd and then run lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0. This makes cbr0 the default bridge for all machines; LXD won't provide any addressing or touch the network, it will just create an eth0 interface on those machines attached to the bridge. Then your OPNsense can sit on the same bridge and do DHCP, routing, etc. Obviously you can also pass entire PCI devices through to VMs and containers if required.
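If it helps, here's a rough sketch of what the host side could look like with systemd-networkd (the file names and the NIC name enp1s0 are just placeholders, adapt them to your hardware; the lxc command is the one quoted above):

# /etc/systemd/network/10-cbr0.netdev - create the bridge itself
[NetDev]
Name=cbr0
Kind=bridge

# /etc/systemd/network/10-enp1s0.network - attach the physical NIC to the bridge, no IPs here
[Match]
Name=enp1s0

[Network]
Bridge=cbr0

# then make cbr0 the default NIC for all instances
lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0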

When you're searching around for help, instead of "Incus" you can also search for "LXD", as that tends to give better results. Not sure if you're aware, but LXD was the original project run by Canonical; it was recently forked into Incus (maintained by the same people who created LXD at Canonical) to keep the project open under the Linux Containers initiative.

[–] tofubl@discuss.tchncs.de 1 points 10 months ago* (last edited 10 months ago) (5 children)

I have another question, if you don't mind: I have a Debian/Incus + OPNsense setup now, created bridges for my NICs with systemd-networkd and attached the bridges to the VM as you described. I have the host configured with DHCP on the LAN bridge, and ideally (correct me if I'm wrong, please) I'd like the host not to touch the WAN bridge at all (other than creating it and hooking it up to the NIC).

Here's the problem: if I don't configure the bridge on the host with either DHCP or a static IP, the OPNsense VM also doesn't receive an IP on that interface. I have a br0.netdev to set up the bridge, a br0.network to connect the bridge to the NIC, and a wan.network to assign a static IP on br0; otherwise nothing works. (While I'm working on this, I have the WAN port connected to my old LAN, if that makes a difference.)

My question is: is my expectation wrong, or my setup? Am I mistaken that the host shouldn't be configured on the WAN interface? Can I solve this by passing the PCI device to the VM, and what's the best practice here?

Thank you for taking a look! 😊

[–] TCB13@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (4 children)

Am I mistaken that the host shouldn’t be configured on the WAN interface? Can I solve this by passing the pci device to the VM, and what’s the best practice here?

Passing the PCI network card / device through to the VM would make things more secure, as the host won't be configuring or touching the network card exposed to the WAN. Nevertheless, passing the card to the VM makes things less flexible, and it isn't required.
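If you do want to try it, Incus/LXD can hand a whole PCI device to a VM with something along these lines (the VM name, device name and PCI address here are placeholders; check lspci for the real address):

lxc config device add opnsense wan-nic pci address=0000:05:00.0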

I think there's something wrong with your setup. One of my machines has a br0 and a setup like yours; 10-enp5s0.network is the config for the physical "WAN" interface:

root@host10:/etc/systemd/network# cat 10-enp5s0.network
[Match]
Name=enp5s0

[Network]
# -> note that we're just saying that enp5s0 belongs to the bridge, no IPs are assigned here
Bridge=br0
root@host10:/etc/systemd/network# cat 11-br0.netdev
[NetDev]
Name=br0
Kind=bridge
root@host10:/etc/systemd/network# cat 11-br0.network
[Match]
Name=br0

[Network]
# -> in my case I'm also requesting an IP for my host, but this isn't required; if I set it to "no" it will also work
DHCP=ipv4

Now, I have a profile for "bridged" containers:

root@host10:/etc/systemd/network# lxc profile show bridged
config:
 (...)
description: Bridged Networking Profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
(...)
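As a side note, if you ever need to attach such a profile to an instance that already exists, something like this should do it (havm being the VM shown below):

lxc profile add havm bridged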

And one of my VMs with this profile:

root@host10:/etc/systemd/network# lxc config show havm
architecture: x86_64
config:
  image.description: HAVM
  image.os: Debian
(...)
profiles:
- bridged
(...)

Inside the VM the network is configured like this:

root@havm:~# cat /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Link]
RequiredForOnline=yes

[Network]
DHCP=ipv4

Can you check if your config is done like this? If so it should work.
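If it still doesn't pick up a lease, a quick way to check from inside the VM (just a diagnostic suggestion) is:

networkctl status eth0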

[–] tofubl@discuss.tchncs.de 1 points 10 months ago* (last edited 10 months ago) (1 children)

My config was more or less identical to yours, which removed some doubt and let me focus on the right part: without a network config on br0, the host wasn't bringing it up on boot. I thought it had something to do with the interface having an IP, but it turns out the following works as well:

user@edge:/etc/systemd/network$ cat wan0.network
[Match]
Name=br0

[Network]
DHCP=no
LinkLocalAddressing=ipv4

[Link]
RequiredForOnline=no

Thank you once again!

[–] TCB13@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (1 children)

Oh, now I remember that there's ActivationPolicy= in the [Link] section that can be used to control what happens to the interface. At some point I even reported a bug involving that feature and VLANs.

I thought it had something to do with the interface having an IP (...) LinkLocalAddressing=ipv4

I'm not so sure it's about the interface having an IP... I believe your current LinkLocalAddressing=ipv4 is forcing the interface up, since it has to assign a link-local IP. Maybe you can set LinkLocalAddressing=no and ActivationPolicy=always-up and see how it goes.
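Something like this, based on your wan0.network above (untested on my end, so treat it as a sketch):

[Match]
Name=br0

[Network]
DHCP=no
LinkLocalAddressing=no

[Link]
RequiredForOnline=no
ActivationPolicy=always-up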

[–] tofubl@discuss.tchncs.de 2 points 10 months ago (1 children)

You know your stuff, man! It's exactly as you say. 🙏

[–] TCB13@lemmy.world 1 points 10 months ago

You're welcome.
