submitted 1 month ago* (last edited 1 month ago) by hallettj@leminal.space to c/linux@lemmy.ml
 

Some app launchers these days run each app in a new systemd scope, which puts the app process and any child processes into their own cgroup. For example, I use rofi, which does this, and I noticed that fuzzel does too. That is handy for tracking and cleaning up child processes!

You can see how processes are organized by running,

$ systemctl --user status

I think that's quite a useful way to see processes organized. Looking at it I noticed a couple of scopes that shouldn't still be running.

Just for fun, I wanted to use this to try to script a better killall. For example, if I run $ killscope slack I want the script to:

  1. find processes with the name "slack"
  2. find the names of the systemd scopes that own those processes (for example, app-niri-rofi-2594858.scope)
  3. kill processes in each scope with a command like, systemctl --user stop app-niri-rofi-2594858.scope

Step 2 turned out to be harder than I liked. Does anyone know of an easy way to do this? Ideally I'd like a list of all scopes with information for all child processes in JSON or another machine-readable format.

systemctl --user status gives me all of the information I want, listing each scope with the command for each process under it. But it is not structured in an easily machine-readable format. Adding --output json does nothing.

systemd-cgls shows the same cgroup information that is shown in systemctl --user status. But again, I don't see an option for machine-readable output.

systemd-cgtop is interesting, but not relevant.
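
One machine-friendly route I haven't fully explored is systemd's D-Bus API: the Manager object has a GetUnitByPID method, and newer busctl can emit JSON. A rough sketch, assuming jq is installed (the PID lookup and the scope name just reuse the example above):

$ pid=$(pgrep -x -n slack)
$ unit_path=$(busctl --user --json=short call \
    org.freedesktop.systemd1 /org/freedesktop/systemd1 \
    org.freedesktop.systemd1.Manager GetUnitByPID u "$pid" \
    | jq -r '.data[0]')
$ busctl --user --json=short get-property \
    org.freedesktop.systemd1 "$unit_path" \
    org.freedesktop.systemd1.Unit Id | jq -r '.data'
app-niri-rofi-2594858.scope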

Anyway, I got something working by falling back on the classic commands. ps can show the cgroup for each process:

$ ps x --format comm=,cgroup= | grep '^slack\b'
slack           0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-niri-rofi-2594858.scope
slack           0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-niri-rofi-2594858.scope
slack           0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-niri-rofi-2594858.scope
...

The last path element of the cgroup happens to be the scope name. That can be extracted with awk -F/ '{print $NF}'. Then unique scope names can be fed to xargs. Here is a shell function that puts everything together:

function killscope() {
    local name="$1"
    # ps prints "comm cgroup" pairs; grep keeps matching command names;
    # awk takes the last path element of the cgroup (the unit name);
    # the second grep skips processes that aren't in a .scope unit.
    ps x --format comm=,cgroup= \
        | grep "^$name\b" \
        | awk -F/ '{print $NF}' \
        | grep '\.scope$' \
        | sort -u \
        | xargs -r systemctl --user stop
}

It could be better, and it might be a little dangerous: ps truncates the comm field to 15 characters, and the word-boundary match means killscope slack would also stop a scope owning a process named, say, slack-helper. But it works!

[–] hallettj@leminal.space 3 points 1 month ago

When I researched this previously I concluded that there are two very good options for regular backups: Borg and Restic. Both are efficient because each backup stores only what has changed since the last one, so you get snapshots of your filesystem state at each backup point without using a huge amount of space. You can mount any snapshot as a virtual directory, and after the initial backup, incremental backups take a minute or two.
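
Mounting a snapshot with Borg looks roughly like this (the repo path and archive name are placeholders):

$ borg mount /path/to/repo::myhost-2025-01-01 ~/mnt
$ ls ~/mnt/home
$ borg umount ~/mnt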

I use Borg, and I back up to cloud storage on Borgbase. I use Vorta as a GUI for Borg. I have Vorta start automatically when I start my window manager, and I have it set up for daily backups. I set up the same thing on my kid's computer.

I back up my home directory, excluding some directories like ~/.cache and Steam's data directory. I use Baobab to find large directories that I don't want backed up.

I use the "exclude caches" option in the Borg "create archive" settings. That automatically excludes Rust target/ directories because they follow the Cache Directory Tagging Specification. Not all programming languages' tooling follows that spec so I also use directory name pattern excludes. For example I have an exclude pattern for .*/node_modules/.*

I use NixOS, and I keep my system config in a git repo so I don't need backups for anything outside my home directory.

[–] hallettj@leminal.space 2 points 4 months ago

It would make sense for the terminal to handle syntax highlighting, since that would match how editors work. But the convention is that the shell handles highlighting, not the terminal. You can check which shell you are running with the command,

$ echo $SHELL

It's done that way because the shell is a running program, capable of telling the terminal which colors to show by mixing color escape sequences into the text. Compare that to code in an editor, which is text rather than a running program, so the only option is for the editor to handle highlighting[1]. Editors need syntax files to configure highlighting for all the different programming languages; terminals don't, because the shell tells them what colors to show.
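
You can see the mechanism for yourself: any running program can change colors by printing ANSI escape sequences, which is what a highlighting shell does around the words of your command line as you type:

$ printf '\033[32mgreen\033[0m and back to normal\n'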

[1] setting aside the "semantic highlighting" LSP capability - that was invented long after syntax highlighting conventions were established

[–] hallettj@leminal.space 4 points 4 months ago

Seems like a matter of preference, and I see the logic in it. I'll mention that Nushell makes it easy to create custom shell functions that are invoked as sub-commands in this manner. https://www.nushell.sh/book/custom_commands.html#command-names

[–] hallettj@leminal.space 14 points 4 months ago (9 children)

Are there other relevant standards? The XDG base directory specification has been around for a long time, and is well established.

Maybe your comment wooshed over my head; if so I apologize.

[–] hallettj@leminal.space 25 points 4 months ago (1 children)

Are you saying that you don't want to write your software according to the XDG spec, or that you don't want to set the XDG env vars on your system? If it's the second, that's fine - apps using XDG work just fine if you ignore it. If it's the first, I'd suggest reconsidering: XDG can make things much easier for users of your software whose system setups or preferences differ from yours, and using XDG doesn't cause problems for users who ignore it.

OP's recommendation is aimed mostly at software authors.

[–] hallettj@leminal.space 24 points 4 months ago

So yes, "XDG" stands for "Cross-Desktop Group" - but I don't agree that using the spec assumes a windowing system. The base directory spec involves checking for certain environment variables for guidance on where to put files, and falling back to certain defaults if those variables are not set. It works fine on headless systems, and on systems that are not XDG-aware (I suppose that means systems that don't set the relevant env vars).

OTOH, as another commenter pointed out, the base directory spec can make software work where it otherwise wouldn't, such as on a system that doesn't have a typical home directory layout or permissions.

[–] hallettj@leminal.space 6 points 5 months ago

Probably not directly helpful, but Nix packages for Chromium and Electron apps are set up so that you can switch to native Wayland mode globally by setting an environment variable, NIXOS_OZONE_WL=1
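
That is, something like this in a session startup file:

export NIXOS_OZONE_WL=1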

I don't know of any global setting that isn't distro-specific.

[–] hallettj@leminal.space 1 points 5 months ago

This seems like the right answer to me. Whether or not you decide to dual boot, make one of these USB keys so you can recover if something goes wrong.

[–] hallettj@leminal.space 3 points 6 months ago

When I was using Debian I found I could generally get the latest version of software I wanted from Nix if it wasn't in the main Debian repos, or was outdated. Nix works quite well on any Linux distro - it doesn't interfere with the rest of the system.
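
For example, with Nix installed and the nixpkgs channel added, grabbing a newer package looks like this (the package name is just an example):

$ nix-env -iA nixpkgs.ripgrep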

[–] hallettj@leminal.space 3 points 6 months ago

All I can tell you is that this is done differently for each shell. So decide whether you want completions for bash, zsh, fish, all of the above, or whatever, and look at the docs for the relevant shells.
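
As a tiny taste of the bash flavor, a static completion for a hypothetical command mycmd can be registered like this (bash-completion picks the file up by name):

# ~/.local/share/bash-completion/completions/mycmd
complete -W "start stop status" mycmd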

[–] hallettj@leminal.space 3 points 6 months ago

This is why I switched to labelling USB sticks with two-character codes, and I keep a file that lists the current content of each stick.

submitted 7 months ago* (last edited 7 months ago) by hallettj@leminal.space to c/linux@lemmy.ml
 

Passkeys seem like a great idea, and we are at a point where, although things are still very much in flux, software passkeys managed by password managers are starting to be usable. I thought I'd share the workflow that's working for me on Linux with some sites, and ask the community for more tips & tricks.

A passkey is essentially a client certificate - an old idea, but now with some new standards in place*. When you log into a website, instead of sending a password you send a message signed with a private key that lives on a hardware security device or in your password manager. If you use a password manager, the flow is about the same as with passwords: your password manager pops up and asks if you want to log in to the given website. But instead of sending a password to the browser, message signing takes place in the password manager. Unlike passwords, those signed messages can't be replayed. Arguably you can skip sending MFA codes and get about the same (or maybe better) security with passkeys as you were getting with passwords + MFA.

Complications come up because support for passkey APIs is still patchy. On Linux I think there is system-level support for hardware keys, but not for passkey managers (password managers that can do passkey signing). But you can close that gap using browser extensions! I'm using Enpass with its Firefox extension. Signing into websites in Firefox using passkeys works quite well on some of the sites I've tried. (I've also tested with Bitwarden's browser extension, and it works just as well.**) Creating passkeys, though, doesn't work on all of those sites.

  • I was able to create a passkey on Github, and sign in with it.
  • I was able to create a passkey for the demo at https://www.passkeys.io/, and sign in with it.
  • I couldn't create passkeys for Google, but I could log in with passkeys created on another device, and synced by Enpass to my Linux machine.
  • I can use a passkey for MFA on Discord, but they don't seem to be using them for logins yet.
  • I'm not getting options to use my passkeys on Amazon or Paypal, but I was able to create passkeys for these sites on Android.

Without using a browser extension, Chrome on Linux does have a feature to sign in with passkeys stored on mobile devices. I don't think this works with third-party passkey managers. On some sites Chrome gave me the option to log in using the automatically-generated, Google-managed passkey on my phone. It didn't actually work for me - my phone showed a message saying "connecting to device" but never actually connected.

That brings me to the Android side. Since some sites will let me log in with passkeys but not create them, it's helpful to have another option for creating passkeys. Android is further along in implementing system-level passkey support (in Android 14 or later), but it's not perfect yet. Firefox for Android does not work with passkey managers yet, but there is a ticket to track this. Third-party passkey managers work in Chrome for Android, but only if you enable an experimental flag:

  • open chrome://flags/
  • find the setting "Android Credential Management for passkeys"
  • set the value to "Enabled for Google Password Manager and 3rd party passkeys"

* "Passkey" seems to be an umbrella term for WebAuthn or FIDO U2F. It looks like WebAuthn is a part of FIDO2.

** From a cursory look at the two, I feel more comfortable with Enpass' browser extension than with Bitwarden's. I'm not positive, but it looks like Bitwarden loads credentials in the extension itself, which puts all of your secrets in the browser process. OTOH the Enpass extension uses IPC to send requests to the Enpass desktop app. But as many will point out, Bitwarden's clients are open-source and audited, while Enpass' software is closed-source.
