moonpiedumplings

joined 1 year ago
[–] moonpiedumplings@programming.dev 5 points 7 months ago* (last edited 7 months ago)

I can't find the source code for this extension

[–] moonpiedumplings@programming.dev 3 points 7 months ago (1 children)

If you're not trying to create complex virtual networks or use hardware-accelerated graphics, VirtualBox can be a bit unintuitive, but it has all of the features VMware makes you pay for, available for free.

[–] moonpiedumplings@programming.dev 5 points 8 months ago* (last edited 8 months ago) (2 children)

I use this too, and it should be noted that it does not require WireGuard or any other VPN solution. Rathole can be served publicly, allowing a machine behind NAT or a firewall to connect out to it.
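For reference, a minimal rathole setup looks roughly like the following (service name, token, ports, and hostname here are placeholder assumptions, not from any real deployment): the server config lives on the publicly reachable machine, the client config on the machine behind NAT.

```toml
# server.toml, on the publicly reachable machine
[server]
bind_addr = "0.0.0.0:2333"          # port the client connects to

[server.services.my_web]
token = "use_a_long_random_token"
bind_addr = "0.0.0.0:8080"          # public port forwarded to the client

# client.toml, on the machine behind NAT
[client]
remote_addr = "example.com:2333"

[client.services.my_web]
token = "use_a_long_random_token"
local_addr = "127.0.0.1:8080"       # local service being exposed
```

Each side then runs rathole with its own config file; the client dials out, so no inbound ports need to be opened on the NATed machine.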

What made it better?

[–] moonpiedumplings@programming.dev 8 points 8 months ago (3 children)

Upstart was better? Even Ubuntu, made by the creators of upstart (Canonical), decided to switch to systemd after using upstart for a while.

[–] moonpiedumplings@programming.dev 0 points 8 months ago (1 children)

No, it is lock-in. If Apple allowed app stores other than their own, then users could pay for an app on one app store and not have to pay again on another, potentially even on non-Apple devices.

I encountered this when I first purchased Minecraft Bedrock Edition on an Amazon Kindle. Rather than repurchasing it on the Google Play store when I moved to a non-Amazon device, I simply tracked down the Amazon app store for non-Amazon devices and redownloaded it from there. No lock-in to Amazon or to other Android devices, either way.

Now, the Apple App Store would still probably not work on Android... but Apple would actually have to compete for users on the app store, by offering something potentially better than transferable purchases across ecosystems.

I suspect the upcoming Epic store for iOS and Android may be like that: pay for a game/app on one OS, and get it on every platform where you have the Epic store. But the only reason the Epic store is even coming to iOS is that Apple has been forced to open up their ecosystem.

[–] moonpiedumplings@programming.dev 2 points 8 months ago (1 children)

> LXD/Incus. It's truly free/open

Please stop saying this about LXD. You know it hasn't been true ever since Canonical started requiring a CLA.

LXD is literally less free than Proxmox under those terms, since Canonical isn't required to open source any custom LXD versions they host.

Also, I've literally brought this up to you before, and you acknowledged it. But you continue to spread this despite the fact that you should know better.

Anyway, Incus currently isn't packaged in Debian bookworm, only in trixie.

The version of LXD that Debian packages predates the license change, so that one is still free. But for people on other distros, it's better to clarify that Incus is the truly FOSS option.
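For what it's worth, on trixie (assuming the package name is simply incus) installation is just:

```shell
# Debian trixie and later ship Incus in the main archive:
sudo apt install incus
# On bookworm, the Incus project points users at the Zabbly
# package repository maintained by the Incus developers instead.
```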

Also switched here. OBS on Wayland has some new features that I'm excited to take advantage of, but I still cannot find a way to share individual windows rather than an entire monitor.

OBS has another feature: a "virtual monitor". It does what it sounds like and creates a virtual monitor, which you can then treat like a real monitor: extending to it, unifying outputs, etc.

It also has a feature to share the entire workspace, but it doesn't work the way I expect: instead, it uses all monitors (not workspaces) as a single input source. I suspect that's a bug, honestly, because this behavior is useless considering you can just add monitors as sources side by side.

[–] moonpiedumplings@programming.dev 3 points 8 months ago* (last edited 8 months ago)

I remember this being brought up with an acquaintance; basically, there's a bug where the newest Fedora kernel isn't compatible with VMware.

So yeah. Either wait for a kernel patch, or wait for VMware to fix their stuff. But they might not; other users have mentioned that VMware has gone downhill since being bought by Broadcom.

If you want 3D acceleration on virtualized Linux guests, other than VMware, you have two options:

  • GPU passthrough
  • Virtual GPU (VirGL / VirtualGL / egl-headless)

The latter is basically only going to work on a Linux host virtualizing Linux guests (although it is possible on Windows, with caveats).

The other downside is that, no matter which option you pick, it ends up being more tinkering than VMware's one-click, "just works" 3D acceleration setup: either a little (assigning a GPU to a VM) or a lot (installing unsigned Windows drivers).
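As a sketch of the virtual GPU option, a QEMU invocation with VirGL acceleration for a Linux guest might look like this (the disk image name and memory size are placeholders; recent QEMU versions provide the virtio-vga-gl device, while older ones use -vga virtio with -display gtk,gl=on):

```shell
# Paravirtualized GPU with host-side OpenGL (VirGL) rendering.
# gl=on is required for the guest to get accelerated graphics.
qemu-system-x86_64 \
  -enable-kvm -cpu host -m 4G \
  -device virtio-vga-gl \
  -display gtk,gl=on \
  -drive file=linux-guest.qcow2,format=qcow2
```

The guest needs Mesa with virgl support, which mainstream Linux distros ship by default; this is why the approach mostly works for Linux-on-Linux.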

[–] moonpiedumplings@programming.dev 6 points 8 months ago* (last edited 8 months ago) (1 children)

> Docker's manipulation of nftables is pretty well defined in their documentation

Documentation that people don't read. People expect that, like most other services, Docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.
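To make the difference concrete, a sketch (image name and port numbers are arbitrary):

```shell
# Publishing a port with Docker inserts Docker's own iptables/nftables rules,
# so this is reachable from other hosts even if ufw/firewalld would block 8080:
docker run -d -p 8080:80 nginx

# Restricting the publish address to loopback keeps it local-only:
docker run -d -p 127.0.0.1:8080:80 nginx

# Under rootless podman, the published port is opened by a userspace process,
# so the host's normal firewall rules still apply to the first form.
```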

> As to the usage of the docker socket that is widely advised against unless you really know what you're doing.

Too bad people don't read that advice. They just deploy the webtop docker compose file without understanding what any of it does. I like (hate?) linuxserver's webtop, because it's an example of two of the worst footguns in Docker in one place.

To include the rest of my comment that I linked to:

> Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?
>
> No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker "bypasses" the firewall.
>
> On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that's better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren't exposed to the internet, and docker throws that out the window.

You originally stated:

> I think from the dev's point of view (not that it is right or wrong), this is intended behavior simply because if docker didn't do this, they would get 1,000 issues opened per day of people saying containers don't work when they forgot to add a firewall rules for a new container.

And I'm trying to say that even if that were true, it would still be better than a footgun where people expose stuff that isn't supposed to be exposed.

But that isn't the case for podman. A quick look through podman's GitHub issues, and I don't see it inundated with newbies asking "how do I expose services?" on the assumption that a firewall port needs opening. Instead, there are bug reports in the opposite direction, like this one, where services were exposed despite the firewall being up.

(I don't have anything against you, I just really hate the way docker does things.)

[–] moonpiedumplings@programming.dev 7 points 8 months ago (1 children)

Probably not an issue, but you should check. If the forwarded port looks like 127.0.0.1:portnumber, then it's bound only to localhost, and only that local machine can access it. If no address is specified, Docker binds to all interfaces (0.0.0.0), and anyone who can reach the server can access that service.

An easy way to see running containers is docker ps, which also shows their forwarded ports.

Alternatively, you can use the nmap tool to scan your own server for exposed ports from the outside. nmap -A serverip does the slowest, but most in-depth, scan.
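A complementary check from the server itself, using ss from iproute2, shows which address each listening socket is bound to:

```shell
# List listening TCP sockets with numeric addresses.
# 0.0.0.0 or [::] means reachable from any interface (firewall permitting);
# 127.0.0.1 means localhost only.
ss -tln
```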
