moonpiedumplings

joined 1 year ago
[–] moonpiedumplings@programming.dev 2 points 9 months ago* (last edited 9 months ago)

Don’t do unattended upgrades. Neither host nor containers. Do blind or automated updates if you want but check up on them and be ready to roll back if something is wrong.

Those issues are only common on rolling releases. On stable distros, they put tape between breaking changes, test that tape, and then roll out updates.

Debian, and many other distros, support it officially: https://wiki.debian.org/UnattendedUpgrades. It's not just a cronjob running "apt install", but an actual process, including automated checks. You can configure it to not upgrade specific packages, or to stick to security updates.
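
To make that concrete, here's a minimal sketch of the kind of config involved (written to /tmp so it's non-destructive; on a real Debian box the file is /etc/apt/apt.conf.d/50unattended-upgrades, and the option names below are the real ones, though the exact values are illustrative):

```shell
# Sketch: restrict unattended upgrades to security updates and hold back
# specific packages. Quoted heredoc so ${distro_codename} is kept literal,
# since apt expands it itself.
cat > /tmp/50unattended-upgrades <<'EOF'
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
};
// Hold back specific packages from automatic upgrades:
Unattended-Upgrade::Package-Blacklist {
        "linux-image-*";
};
EOF
```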

As for containers, it is trivial to roll back versions, which is why unattended upgrades are OK. Although, if data or configuration is corrupted by a bug, then you would probably have to restore from backup (something I probably should have suggested in my initial reply).

It should be noted that unattended upgrades don't always mean "upgrade to the latest version". For docker/podman containers, you can pin them to a stable release, and then unattended upgrades happen within that release, preventing any major breaking changes.
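
A small sketch of what pinning looks like (the image and tags here are illustrative assumptions; check your image's registry for its actual tag scheme):

```shell
# ':latest' floats across major releases; a version tag like ':16' only
# moves within that release, so unattended re-pulls stay on 16.x.
pinned="docker.io/library/postgres:16"       # bounded: stays on the 16.x line
floating="docker.io/library/postgres:latest" # unbounded: can jump major versions
# An updater (e.g. watchtower) re-pulls the same tag, so the pin bounds it:
echo "${pinned##*:}"   # prints the pinned release line: 16
```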

Similarly, on many distros, you can configure them to only do the minimum security updates, while leaving other packages untouched.

People should use what distro they know best. A rolling distro they know how to handle is much better than a non-rolling one they don’t.

I don't really feel like reinstalling the bootloader over ssh, to a machine that doesn't have a monitor, but you do you. There are real, significant differences between stable and rolling release distros that make a stable release better suited for a server, especially one you don't want to baby remotely.

I use arch. But the only reason I can afford to baby a rolling release distro is because I have two laptops (both running arch). I can feel confident that if one breaks, I can use the other. All my data is replicated to each laptop, and backed up to a remote server running syncthing, so I can even reinstall and not lose anything. But I still panicked when I saw that message suggesting that I should reinstall grub.

That remote server? Ubuntu with unattended upgrades, by the way. Most VPS providers will give you a linux distro image with unattended security upgrades enabled because it removes a footgun from the customer. On Contabo with Rocky 9, it even seems to do automatic reboots. This ensures that their customers don't have insecure, outdated binaries or libraries.

Docker doesn’t “bypass” the firewall. It manages rules so the ports that you pass to host will work. Because there’s no point in mapping blocked ports. You want to add and remove firewall rules by hand every time a container starts or stops, and look up container interfaces yourself? Be my guest.

Docker is a way for me to run services on my server. Literally every other service application respects the firewall. Sometimes I want services exposed on my home network but not on a public wifi, something docker isn't capable of doing, but the firewall is. Sometimes I want to configure a service while keeping it running. Or maybe I want to test it locally. Or maybe I want to use it locally.

It's only docker where you have to deal with something like this:

```yaml
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped
```

Originally from here, edited for brevity.

Resulting in exposed services. Feel free to look at shodan or zoomeye, search engines for internet-connected devices, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.

Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker "bypasses" the firewall.

On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that's better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren't exposed to the internet, and docker throws that out the window.
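
For what it's worth, one mitigation (my suggestion, not something from the thread) is to bind published ports to loopback, so docker's generated rules only accept local connections:

```shell
# In compose, "127.0.0.1:3000:3000" instead of "3000:3000"; the docker-run
# equivalent would be:
#   docker run -d -p 127.0.0.1:3000:3000 lscr.io/linuxserver/webtop:latest
# The leading field is the bind address docker hands to its iptables rules:
PORTS="127.0.0.1:3000:3000"
echo "${PORTS%%:*}"   # prints the bind address: 127.0.0.1
```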

[–] moonpiedumplings@programming.dev 3 points 9 months ago (2 children)

A tip I have is to move away from manjaro.

When you use a rolling release, you lose one of the main features of stable release distros: automatic, unattended upgrades. AFAIK, every stable release distro has those, and none of the rolling releases do (except maybe openSUSE's new Slowroll and CentOS Stream, but I wouldn't recommend or use those).

Manjaro has other issues too, but that's the big one.

Although I use arch on my laptop, I run debian on my server because I don't want to have to baby it, especially since I primarily access it remotely. Automatic upgrades are one less complication, allowing me to focus on the server itself.

As for application deployment itself, I recommend using application containers, either via docker or podman. There are many premade containers for those platforms, for apps like jellyfin, or the various music streaming apps people use to replace spotify (I can't remember any off the top of my head, but I know you have lots of options).

However, there are two caveats to docker (not podman) people should know:

  • Docker containers don't auto-update, although you can use something like watchtower to update them automatically. Podman, on the other hand, has an auto-update command you can configure to run regularly.
  • Docker bypasses your firewall. If you forward port 80, docker will go around the firewall and publish it. The reason is that most linux firewalls work by using iptables or nftables under the hood, but docker also edits those directly. This has security implications: I've seen many container services on the public internet that people clearly didn't intend to put there.

Podman, however, respects your firewall rules. Podman isn't perfect though, there are some apps that won't run in podman containers, although my use case is a little more niche (greenbone service and vulnerability scanner).

As for where to start, projects like linuxserver provide podman/docker containers you can use to deploy many apps fairly easily, once you learn how to launch apps with a compose file. Check out the dockerized Nextcloud they provide. Nextcloud is a google drive alternative, although sometimes people complain about it being slow. I don't know about the quality of linuxserver's nextcloud, so you'd have to do some research for that, and find a good docker container.

[–] moonpiedumplings@programming.dev 1 points 10 months ago* (last edited 10 months ago)

your typical manga/light novel weebo

No chinese support :(

I read a ton of web novels translated from Chinese, and reading the untranslated versions would be a fun way to learn Chinese. Or Korean.

I don't really like the Japanese light novels as much.

Edit: hmmm, it seems like there are similar projects, and some have custom language support. I may need to look into those in the future.

[–] moonpiedumplings@programming.dev 3 points 10 months ago* (last edited 10 months ago) (2 children)

The tldr as I understand it is that Mac M1/M2 devices are unique in that the vram (gpu memory) is shared with the normal ram (unified memory). This sharing allows LLM models to run on the gpu of those chips, and in their "vram" as well, allowing you to run bigger models on smaller devices.

Llama.cpp was the software users originally did this with. I can't find the original guide/article I looked at, but here is a github gist, where the commenters have done benchmarks:

https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
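
For reference, a hypothetical llama.cpp invocation on Apple silicon looks something like this (the model path is a placeholder; `-ngl` offloads layers to the GPU via the Metal backend, and the binary is `./main` in older builds, `llama-cli` in newer ones):

```shell
# Offload all layers into unified memory ("vram") on an M1/M2:
./llama-cli -m ./models/model.gguf -ngl 99 -p "Hello"
```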

[–] moonpiedumplings@programming.dev 2 points 10 months ago (1 children)

Did you test with different kernels? Their use of a custom scheduler that prioritizes desktop applications might cause background tasks to run slower.

Plus, the use of ananicy (a daemon that automatically adjusts process priorities) affects things like that as well.

I use cachyos because they set up zram and uksmd by default. That's ram compression and deduplication, and it's pretty powerful in my experience. If you're using cachyos, then uksmdstats and zramctl can give you an idea of how much you're saving.

[–] moonpiedumplings@programming.dev 4 points 10 months ago* (last edited 10 months ago) (1 children)

If I run two mysql containers, it won't necessarily take twice the resources of a single mysql container

It's complicated, but essentially, no.

Docker images are built in layers. Each layer is a step in the build process. Layers that are identical are shared between containers, to the point of taking up the ram of only one copy.

Although, it should be noted that docker doesn't load the whole container into memory, same as a normal linux os. Unused stuff will just sit on your disk, like normal. Rather, binaries or libraries loaded by two docker containers will only use up the ram of one instance. This is similar to how shared libraries reduce ram usage.

Docker only gets this deduplication if you are using overlayfs or aufs, but overlayfs is the default.

https://moonpiedumplings.github.io/projects/setting-up-kasm/#turns-out-memory-deduplication-is-on-by-default-for-docker-containers
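
You can check which storage driver your daemon is using with a one-liner (a real docker CLI flag; "overlay2" is what you'd typically see on a modern install):

```shell
docker info --format '{{ .Driver }}'   # e.g. "overlay2"
```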

Should you run more than one database container? Well, I dunno how mysql scales. If there is a performance benefit from having only one mysqld instance, then it's probably worth it. Like, if mysql uses up that much ram regardless of which databases you have loaded, in a way that can't be deduplicated, then you'd definitely see a benefit from a single container.

What if your services need different database versions, or even software? Then different database containers is probably better.

[–] moonpiedumplings@programming.dev 10 points 10 months ago (1 children)

you'd really have to verify isolation.

What if they live streamed the entire process, like on twitch?

There exists stuff like this.

Virtualxposed. Sandvxposed.

The most popular one I've heard about is VMOS: https://www.vmos.com/

But that one was android 8 (I think?), closed source, and probably has spyware inside and outside the thing.

Also, new changes by google may break these emulator type apps:

https://www.reddit.com/r/SamsungDex/comments/16r1tg8/phantom_process_killer_solution_in_android_14/

[–] moonpiedumplings@programming.dev 1 points 10 months ago* (last edited 10 months ago)

https://obsproject.com/forum/threads/solved-record-multiple-windows-but-not-all.106931/

in addition to windowed projector (creates window of what obs would be streaming)

A bit hacky, and a pain to set up past 2 windows, but it works. I do this: create a windowed projector, then share only that window.

nvlc/ vlc -I ncurses for cli.

[–] moonpiedumplings@programming.dev 4 points 10 months ago (1 children)

Nginx and nginx proxy manager are two different things, although nginx proxy manager uses nginx under the hood.

Nginx is a lightweight reverse proxy and http(s) server configured via config files.

https://nginx.org/en/

Nginx proxy manager is a docker container that runs nginx, but also has a webui on top of it to make it much, much easier to configure.

Sometimes abbreviated as NPM.

https://nginxproxymanager.com/

That's why people keep asking you for your nginx config: when you just say nginx, people expect that you are using plain nginx and configuring it through text files.
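
For the curious, this is roughly the kind of config people expect when you say "nginx" (server name, paths, and the upstream port are placeholder assumptions; written to /tmp here, but the real location is usually /etc/nginx/conf.d/ or sites-available/):

```shell
# A minimal reverse-proxy config of the sort NPM generates for you:
cat > /tmp/example-proxy.conf <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # the backend service
        proxy_set_header Host $host;        # preserve the requested hostname
    }
}
EOF
```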

I really like zellij:

https://github.com/zellij-org/zellij

Terminal multiplexer like tmux, but more intuitive to use.
