this post was submitted on 03 Jan 2024
80 points (85.1% liked)

I have a few Linux servers at home that I regularly remote into to manage, usually logged into KDE Plasma as root. Usually they just have several command-line windows and a file manager open (I personally find it more convenient to use the command line from a remote desktop instead of SSH-ing directly into the system). But if I run into an issue, I've just been absentmindedly searching stuff up and trying to find solutions using the preinstalled Firefox instance from within the remote desktop itself, which is therefore also running as root.

I never even thought to install uBlock Origin on it or anything, but the servers are all configured to use a Pi-hole instance, which blocks the vast majority of ads. However, I do also remember using the browser on my main server to figure out how to set up the Pi-hole instance in the first place, and that server also happens to be the most important one: it's my main NAS.

I never went on any particularly shady websites, but I also don't remember exactly which websites I've been on as root. I do seem to remember seeing ads during the initial Pi-hole setup, though, because it didn't go very smoothly and I was searching error messages trying to get it to work.

This is definitely on me, but it never crossed my mind until recently that it might be a bad idea to use a browser as root. Searching online, everyone just repeats the general cybersecurity doctrine to never do it (which I'm now realizing I shouldn't have), but no one seems to discuss how risky it actually is. Shouldn't Firefox be sandboxing every website and not allowing anything to access the base system? Between "just stop doing it" and "you have to reinstall the OS right now, there's probably already a virus on there," how much danger do you suppose I'm in? I'm mainly worried about the security/privacy of the personal data I have stored on the servers. All my servers run Fedora KDE Spin and have Intel processors, if that makes a difference.

[–] amju_wolf@pawb.social 23 points 10 months ago* (last edited 10 months ago) (1 children)

I don't want to step on your workflow too much since it somehow seems to work for you, but your main issue stems from the fact that you clearly don't treat your server as if it actually were a server.

You shouldn't really have a desktop interface running there in the first place (let alone running it as root and then using it as if it were a regular user account). You should ask yourself what it actually solves for you and be open to trying different (and more standard) solutions to what you're trying to achieve.

It'd probably mean less clicking and a bit more CLI, but for stuff like file management you can still easily use mc (Midnight Commander).
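
On Fedora that's a quick install away; as a rough sketch, run over your normal SSH session:

# install and launch Midnight Commander, a two-pane terminal file manager
sudo dnf install mc
mc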

If you need terminal sessions that keep scrollback and don't stop when you disconnect, you should learn to use tmux or screen or something like that. But then again, if you're running actual software in there, you should probably run it as a service (daemon) instead.
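
As a minimal sketch of the tmux workflow (the session name and paths are just placeholders):

# start a named session on the server and kick off the long-running job
tmux new -s copyjob
cp -r /srv/media /mnt/backup/media

# detach with Ctrl+b d; the job keeps running after you log out
# later, from a new SSH login, reattach and check on it
tmux attach -t copyjob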

As for whether it's a security issue: yeah, it most definitely is, just like it's a security issue to run literally any networked application as root. Security isn't black and white and there are trade-offs to be made, but most people wouldn't consider what you're doing a reasonable trade-off.

[–] HiddenLayer5@lemmy.ml 1 points 10 months ago* (last edited 10 months ago) (3 children)

I had actually moved from a fully CLI server to one with a full desktop when I upgraded from a single-board computer to x86. The issue is that it's not just a NAS; I regularly use it to offload long operations (moving, copying, or compressing files, mostly) so I don't need to use my PC for those. To do that I just remote into it and type in the command, then I can turn my PC off or do whatever without affecting the operation. So in a way it's a second PC that also happens to be a server for my other machines.

I use screen occasionally, and I used to use it a lot more when the server was CLI-only, but I find it really unwieldy: to get back into one of several active terminals you have to type in its ID, and it refuses to scroll even when run in a terminal emulator that supports scrolling; moving the scroll wheel just cycles through recent commands instead.

Not trying to make excuses, just trying to explain my reasoning. I know it's bad practice and none of these are things I'd do if I was managing an actual production server, but since it's only accessible from my LAN I tend to be a lot more lax with it.

I'm wondering if I could benefit from some kind of virtualized setup that separates the server stuff while still letting me remote into a desktop on the same machine, or if I can get away with just remoting in as a non-root user. I've never used a hypervisor, though, and have no idea how to, so I'm not sure how well that would go; the well-known open-source ones like Xen seem really technical and feel like something not meant to be used outside an actual data centre.

[–] amju_wolf@pawb.social 9 points 10 months ago* (last edited 10 months ago)

I see. In that case you should really try tmux; I didn't vibe with screen either but I find tmux quite usable.

For the most part I just open several terminal windows/tabs on my local machine and remote with each one to the server, and I use tmux only when I explicitly need to keep something running. Since that's usually just one thing, I can get by with like two tmux commands and don't need anything else.

Oh, and for stuff like copying I'd use rsync instead of plain cp, so that if it gets interrupted I only re-copy what's needed.
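
A minimal sketch of that (the paths are just placeholders):

# -a preserves permissions/timestamps, --partial keeps half-transferred files
# so an interrupted copy can resume, --progress shows what it's doing
rsync -a --partial --progress /srv/media/ /mnt/backup/media/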

I wouldn't bother with virtualization and such; you'd only complicate things for yourself. Try to keep it simple but do it properly: learn some command line basics and you'll see that in a year it'll become second nature.

[–] Illecors@lemmy.cafe 7 points 10 months ago* (last edited 10 months ago)

Sorry, this is very much a PEBKAC issue. This is an excerpt from my tmux config:

# Start windows and panes at 1, not 0
set -g base-index 1
setw -g pane-base-index 1

# Use Alt-arrow keys without prefix key to switch panes
bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D

# Shift arrow to switch windows
bind -n S-Left  previous-window
bind -n S-Right next-window

# No delay for escape key press
set -sg escape-time 0

# Increase scrollback buffer size from 2000 to 50000 lines
set -g history-limit 50000

# Increase tmux messages display duration from 750ms to 4s
set -g display-time 4000

# Bind pane creation keys to reuse current directory
bind % split-window -h -c "#{pane_current_path}"
bind '"' split-window -v -c "#{pane_current_path}"

I hope the comments are self-explanatory.

Scrolling works with Ctrl+b then Page Up/Down. There are other shortcuts, but this is probably the most obvious one; press q to quit scrolling.

Ctrl+b d to detach from a session. tmux a to attach. As always, many options are available to have many named sessions running simultaneously, but that is for a later time.

[–] giloronfoo@beehaw.org 6 points 10 months ago (1 children)

I'd go for remoting in as a non-root user as the first (and maybe only) step for better security.
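
A rough sketch of what that could look like on Fedora (the username is just a placeholder):

# create a regular user that can sudo when needed (wheel group)
useradd -m -G wheel serveradmin
passwd serveradmin

# then remote in as that user instead of root, e.g. ssh serveradmin@server,
# and optionally set PermitRootLogin no in /etc/ssh/sshd_config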

From there, running the services in VMs would probably be the next step. Docker might be better, but I haven't gotten into that yet myself.
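
For reference, a containerized service looks roughly like this (the image and ports are just an example):

# run a service in a container that restarts with the host
docker run -d --name web --restart unless-stopped -p 8080:80 nginx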

As for a hypervisor, KVM has worked great for me.
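
If you go that route, a minimal sketch with virt-install on a KVM/libvirt host (the VM name, sizes, and ISO path are just placeholders):

# create a small VM to keep the server workload separate from the desktop
virt-install --name services-vm --memory 4096 --vcpus 2 \
  --disk size=40 --os-variant fedora39 \
  --cdrom /var/lib/libvirt/images/Fedora-Server.iso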

[–] pbjamm@beehaw.org 2 points 10 months ago

KVM is awesome. It is the core of Proxmox, which is my preferred way to manage VMs and LXC containers now. I used to run Debian + KVM + virt-manager (or Cockpit), but Proxmox does all the fiddly setup for me and then just works.