linuxPIPEpower

joined 2 years ago

This is true, here is a brief list: https://www.libreoffice.org/discover/who-uses-libreoffice/

But I'm sure it is a massive project; you would need sufficient motivation at all levels. Not at all trivial.

I am curious how these changes feel on-the-ground to the affected workers who had no personal interest in linux or free software.

[–] linuxPIPEpower@discuss.tchncs.de 2 points 1 month ago* (last edited 1 month ago)

It is a question I've spent a lot of time trying to work out. Can't speak to docker.

Some of the specifics of Keeps and Dontkeeps depend on details of your system. You have to find out where the distro, DM and other apps keep the following:

Dontkeeps:

  • trashes
  • temp files
  • file indexes ... IMHO these don't back up properly if you leave them in, and they can prevent you from completing the task
  • device files

Keeps:

  • list of installed packages (explicit and deps separately, if possible; see the sketch after this list)

  • config files: /etc, ~/.config, ~/.* on a case by case basis... I say remove the obvious large temp dirs and keep the rest by default for simplicity
    • for the system configs I've had a tool called etckeeper running for a while because it was highly recommended but I've never actually restored from it...
  • personal documents and other files such as typically kept in the home directory
  • /root occasionally has something you need
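
For the package list, a minimal sketch of what I mean, assuming an Arch-based system with pacman (apt/dnf have their own equivalents; the file names are just examples):

    pacman -Qqe > pkglist-explicit.txt    # explicitly installed packages
    pacman -Qqd > pkglist-deps.txt        # packages pulled in as dependencies
    # restoring later: pacman -S --needed - < pkglist-explicit.txt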

Ways to investigate:

  • use a disk usage utility to find out where your storage is being used up... it'll help you find large Dontkeeps (example commands after this list)
  • watch for recently modified files
  • dirs and files that are modified all the time are usually temp dirs. But sometimes they have something useful like your firefox profile.
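
To give a concrete idea of the kind of investigation I mean (paths and thresholds are just examples):

    du -xh --max-depth=2 ~ | sort -rh | head -20    # biggest directories under home
    find ~ -type f -mtime -1 2>/dev/null            # files modified in the last day
    ncdu ~                                          # interactive disk usage browser, if installed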

Most backup solutions handle only ONE of the following:

  1. User files
  2. System files

Don't spend too much time crying about needing two solutions. Just make your backup today and reach perfection later.

Remember: sync isn't backup. Test your backup if you can (but it's not as easy as it sounds). Keep off-site copies of your most precious files.

Don't you have to power off the system and boot from a Clonezilla image to create the backup?

I have also been confounded by the situation.

It is even worse when you are on the secondary market. The company's product pages are broken. Trying to compare across different release years is way harder.

I assumed the reason for this had to do with the production systems and supply chains. They can get a certain number of x parts at y price from a factory located in a given location. You get enough parts in proximity to each other and you make it a model.

It's one thing for a small company to have few enough components to offer only a few models, but with the volume Dell or HP moves, they would need to really invest in suppliers or actually make the components themselves.

I don't imagine the marketing people have come up with all the options; they're just the ones who have to try to sell what they're given.

[–] linuxPIPEpower@discuss.tchncs.de 3 points 2 months ago* (last edited 2 months ago) (4 children)

I've not used Guix but I don't think any distro has anything close to the number of desirable packages available on arch, so be prepared for that. My ventures into debian, suse and fedora were made quite annoying by having to work around the many missing packages, including user-facing applications, dependencies and background programs. I never quite got down with distrobox; maybe that's the cure.

This chart on Wikipedia gives the impression that Debian has more packages, but that's not the way it feels when you are looking for something. Maybe they have a lot of dot matrix printer libraries from 1992 or something which bring the number up.

Arch includes a lot of not-at-all-free packages (which are impossible to distinguish in pacman or any other tool, as far as I can find), orphaned packages, new packages that haven't yet made it into other repos, and packages where no attempt has been made to submit them to other repos.

On arch I have virtually never had to go outside the repos for packages. It's very hard to give up once you are used to it. (Even though it's better to use properly libre/free stuff and other benefits of a more curated approach like security, stability and quality.)

The other benefit of being in high school is that many people have loads of time to spend.

I honestly don't know if there is any advice I can give to someone with a full-time job and caregiving responsibilities that would be convincing.

 

I have a lowend netbook with debian-type linux only (no dualboot). Power management should be via XFCE4's xfce4-power-manager-settings.

I'm having weird behavior with suspend and trying to identify/troubleshoot it. When the lid is closed for many hours, it seems to drain power and never charge.

I tried explicitly entering power off, hibernate and suspend, followed by unplugging and leaving it a few hours, but couldn't replicate the problem. It seems to be doing something on its own after being unplugged a long time.

What logs can I look at to see when my device changes its power modes, what the triggers were, what settings are governing it, etc.?
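
So far the only concrete thing I know to try is something like the below, assuming systemd and upower are involved (which I haven't confirmed on this install):

    journalctl -b | grep -iE 'suspend|hibernate|lid|power'    # power-related events this boot
    journalctl -b -1 -u systemd-logind -u upower              # those services, previous boot
    upower -i $(upower -e | grep -i bat)                      # current battery state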

I can't tell if it's a software issue or there is some sort of power saving thing going on in the hardware or what.

Just hoping for some investigation tips here; I know it's not enough info to solve.

Edit: to clarify, no dual boot.

 

I want to move a directory with a bunch of subdirectories and files. But I have the feeling there might be some symlinks to a few of them elsewhere on the file system. (As in the directory contains the targets of symlinks.)

How do I find all the symlinks on the filesystem that point to them?

Some combination of find, stat, ls, realpath, readlink and maybe xargs? I can't quite figure it out.
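
The rough shape of what I'm imagining, though I haven't got it working (the target path is just a placeholder):

    target=/home/user/dir-i-want-to-move
    find / -type l 2>/dev/null | while read -r link; do
        dest=$(readlink -f -- "$link")
        case "$dest" in
            "$target"|"$target"/*) echo "$link -> $dest" ;;
        esac
    done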

No, I'm one of those "think!" nutjobs.

[–] linuxPIPEpower@discuss.tchncs.de 1 points 5 months ago (2 children)

We're trying to have an intelligent discussion here.

You think every Lemmy admin should be forced to federate with CSAM instances, and therefore host CSAM on their own servers? Wow, great plan you have for expanding the fediverse.

[–] linuxPIPEpower@discuss.tchncs.de 0 points 5 months ago (5 children)

It would be solved for people who are primarily interested in tech and gaming. What about the challenges below?

Gaming is huge so presumably lots of gamers are interested in the wider world, which is not exactly well represented here compared to the major platforms.

And we can't ignore the inherent complexities of federation. If a user signs up on another instance but for some reason that instance (or the game's instance) is blocked by others or even goes offline, it will be confusing, if not ruinous to their experience.

[–] linuxPIPEpower@discuss.tchncs.de 2 points 7 months ago (1 children)

In another subthread I came up with the below, is this what you mean? I haven't tried it yet.

Ideally I'd like to avoid a script because my experience is they aren't very durable. I make mistakes and they are difficult to troubleshoot. So I am trying to just use the tools that are already available in the system.

But maybe there is something in the idea of using a second mount, like if

  • /home/user/folderApple is always empty
  • /home/user/folderApple-original mounts on top of /home/user/folderApple at boot
  • then /mnt/drive/folderBanana also mounts on top of /home/user/folderApple when/if it becomes available (later in the mount order)

[–] linuxPIPEpower@discuss.tchncs.de 2 points 7 months ago (1 children)

The results are the same no matter which order I do the mounts in.

 

I have 2 directories which both have stuff in them:

  • /home/user/folderApple

  • /mnt/drive/folderBanana

I want to mount folderBanana onto folderApple like this:

sudo mount --bind "/mnt/drive/folderBanana" "/home/user/folderApple"

But I still want to be able to access the contents of folderApple while this is activated. From what I am reading, binding the original directory to a new location should make it available, like this:

mkdir "/home/user/folderApple-original"
sudo mount --bind  "/home/user/folderApple" "/home/user/folderApple-original"

But this just binds /mnt/drive/folderBanana to /home/user/folderApple-original as well. I tried reversing the order and result is the same.
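
For what it's worth, this is roughly how I've been checking what actually ends up mounted where (findmnt just reads /proc/self/mountinfo):

    findmnt /home/user/folderApple
    findmnt /home/user/folderApple-original
    grep folderApple /proc/self/mountinfo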

How do I tell mount to look for the underlying directory?

I am happy to use symlinks or something else if it'll reliably get the job done, I am not wedded to this mechanism.

(The purpose of all this is that when an external drive is connected, I can have the storage conveniently available, but when it is not connected, the system will fall back to internal storage. But then I will want to move files between the fallback and external locations when both are available. So I need to see both locations at once.)

 

Is there any way to pass terminal colors through a pipe?

As a simple example, ls -l --color=always | grep ii.

When you just run the ls -l --color=always part alone, you get the filenames color coded. But adding grep ii removes the color coding and just has the grep match highlighting.

Screenshot of both examples:

In the above example I would want ii.mp3 and ii.png filenames to retain the cyan and magenta highlighting, respectively. With or without the grep match highlighting.

Question is not specific to ls or grep.

If this is possible, is there a correct term/name for it? I am unable to locate anything.

 

Once again I try to get a handle on my various dotfiles and configs. This time I take another stab at GNU Stow as it is often recommended. I do not understand it.

Here's how I understand it: I'm supposed to manually move all my files into a new directory, away from where the originals are. So for ~ I make something like this:

~
  - dotfiles
      - bash
         dot-bashrc
         dot-bash_profile
      - xdg
            - dot-config
                user-dirs.dirs
      - tealdeer
            - dot-config
                - tealdeer
                       config.toml

then cd ~/dotfiles && stow --dotfiles .

Then (if I very carefully created each directory tree) it will symlink those files back to where they came from like this:

~
  .bashrc
  .bash_profile
   - .config
        user-dirs.dirs
      - tealdeer
          config.toml
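
So my understanding of the basic invocations, per "package" directory, would be something like this (I could easily be wrong about the details):

    cd ~/dotfiles
    stow --dotfiles --target="$HOME" bash       # create the symlinks for the bash package
    stow --dotfiles --target="$HOME" tealdeer
    stow --delete --target="$HOME" bash         # remove them again
    stow --restow --target="$HOME" bash         # refresh after moving things around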

I don't really understand what this application is doing, because setting up the dotfiles directory is a lot more work than making the symlinks afterwards. Every set of instructions tells me to build this directory structure by hand, but that seems too tedious with so many configs; isn't there some kind of automation to it?

Once the symlinks are created then what?

  • Tutorials don't really mention it, but the actual manual gives me the impression this is a package manager in some way, and that's confusing. Lots of stuff about compiling.

  • I see how to combine it with git. I've tried git-oriented dotfile systems before and they just aren't practical for me. And again I don't see what stow is contributing; git would be doing all the work there.

  • Is there anything here about sharing configs between non-identical devices? Not everything can be copy/pasted exactly. Are you supposed to be making git branches or something?

The manual is not gentle enough to learn from scratch. OTOH there are very very short tutorials which offer little information.

I feel that I'm really missing the magic that's obvious to everyone else.

 

Title is TLDR. More info about what I'm trying to do below.

My daily driver computer is Laptop with an SSD. No possibility to expand.

So for storage of lots n lots of files, I have an old, low resource Desktop with a bunch of HDDs plugged in (mostly via USB).

I can access Desktop files via SSH/SFTP on the LAN. But it can be quite slow.

And sometimes (not too often; this isn't a main requirement) I take Laptop to use elsewhere. I do not plan to make Desktop available outside the network so I need to have a copy of required files on Laptop.

Therefore, sometimes I like to move the remote files from Desktop to Laptop to work on them. To make a sort of local cache. This could be individual files or directory trees.

But then I have a mess of duplication. Sometimes I forget to put the files back.

Seems like Laptop could be a lot more clever than I am and help with this. Like could it always fetch a remote file which is being edited and save it locally?

Is there any way to have Laptop fetch files, information about file trees, etc, located on Desktop when needed and smartly put them back after editing?

Or even keep some stuff around. Like lists of files, attributes, thumbnails etc. Even browsing the directory tree on Desktop can be slow sometimes.

I am not sure what this would be called.

Ideas and tools I am already comfortable with:

  • rsync is the most obvious foundation to work from, but I am not sure exactly what the best configuration would be or how to manage it (rough sketch after this list)

  • luckybackup is my favorite rsync GUI front end; it lets you save profiles, jobs etc which is sweet

  • FreeFileSync is another GUI front end I've used, but I am preferring lucky/rsync these days

  • I don't think git is a viable solution here because there are already git directories included, there are many non-text files, and some of the directory trees are so large that they would cause git to choke looking at all the files.

  • syncthing might work. I've been having issues with it lately but I may have gotten these ironed out.
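
The kind of rsync arrangement I have in mind, with hostnames and paths as placeholders:

    # pull a working copy from Desktop to Laptop over ssh
    rsync -av --progress desktop:/mnt/hdd1/project/ ~/cache/project/
    # ...edit locally, then push the changes back when done...
    rsync -av --progress ~/cache/project/ desktop:/mnt/hdd1/project/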

Something a little more transparent than the above would be cool but I am not sure if that exists?

Any help appreciated, even just ideas on what to search the web for, because I am stumped even on that.

 

For a given device, sometimes one linux distro perfectly supports a hardware component. Then if I switch distros, the same component no longer functions at all, or is very buggy.

How do I find out what the difference is?
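
So far the only approach I can think of is capturing something like this on each distro and diffing the output, but I suspect there is more to it:

    uname -r                      # kernel version
    lsmod | sort                  # loaded kernel modules
    lspci -k                      # which driver is bound to each PCI device
    dmesg | grep -i firmware      # missing/loaded firmware messages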

 

cross-posted from: https://discuss.tchncs.de/post/13814482

I just noticed that eza can now display total disk space used by directories!

I think this is pretty cool. I wanted it for a long time.

There are other ways to get the information of course. But having it integrated with all the other options for listing directories is fab. eza has features like --git-awareness, --tree display, clickable --hyperlink, filetype --icons, plus permissions, dates, ownerships, and other display options. Being able to mash everything together in whatever arbitrary way is useful is handy. And of course you can --sort=size.

docs:

  --total-size               show the size of a directory as the size of all
                             files and directories inside (unix only)

It also (optionally) color codes the information. Values measured in KB, MB, and GB are clearly distinguished. Here is a screenshot to show that:

eza --long -h --total-size --sort=oldest --no-permissions --no-user

Of course it takes a little while to load large directories, so you will not want to use it by default.

Looks like it was first implemented Oct 2023 with some fixes since then. (Changelog). PR #533 - feat: added recursive directory parser with `--total-size` flag by Xemptuous

 

Question: Is there any auto-correct that works globally in all (or at least, many) applications? Particularly non-terminal. So for example firefox (like this text box I'm typing into), chat, text editors, word processors etc?

Example: I often type "teh" when I meant "the". I would like to have that change automagically.

I'm sure somewhere in my life (not in linux... maybe on mac?) I had the ability to right click on a red-underlined misspelled word in any application and select "always change this to..." and then it would.

Autokey is the only close suggestion I can find. But I guess you have to tell it about every single replacement through the configuration? Are there any pre-made configurations of common misspellings?

How is the performance if you end up with dozens, hundreds, of phrases for it to look out for?

Not looking for: a code linter, command line corrections, or Grammarly, which are the suggestions I have found when searching.

 

I have a multi-user linux system. Well, actually a couple of them. They are running different distros which are arch-based, debian-based and fedora-based.

I want to globally use non-executable components not available via my system's package manager. Such as themes, icons, cursors, wallpapers and sounds.

Some of them are my own original work that I manage in git repos. Others are downloaded as packages/collections. If there is a git repo available I prefer to clone because it can theoretically be updated by pulling. And sometimes I make my own forks or branches of other people's work. So it's really a mix.

I want to keep these in a totally separate area where no package manager will go. So that it is portable and can be backed up / copied between systems without confusion. Which is why I don't want to use /usr/local.

I also want to be able to add/edit in this area without su-ing to root, so that I can easily modify or add items which can then be accessed by all users. Also a reason to avoid /usr/local.

I tried making a directory like /home/shared/themes then symlinking ~/.themes in different users to that. It sometimes worked OK but I ran into permissions issues. Git really didn't seem to like sharing repos between users. I can live with only using a single user to edit the repos but it didn't like having permissions recursively changed to even allow access.

Is there a way to tell linux to look in a custom location for these resources for every user on the system? I also still want it to look in the normal places so I can use the package managers when possible.
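
The nearest mechanism I've found is the XDG data-dir search path (icons, themes and sound themes are supposed to be looked up under the icons/, themes/ and sounds/ subdirectories of each entry), so maybe something like this in a system-wide profile script, though I haven't verified that every toolkit and app respects it:

    # e.g. /etc/profile.d/shared-resources.sh (the path and shared directory are my own invention)
    export XDG_DATA_DIRS="/home/shared/share:${XDG_DATA_DIRS:-/usr/local/share:/usr/share}"
    # resources would then live under:
    #   /home/shared/share/themes/   /home/shared/share/icons/   /home/shared/share/sounds/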

fonts - once solved

On one install, I found a way to add a system-wide custom font directory though I am not able to recall how that was done. I believe it had to do with xorg or x11 config files. I can't seem to find in my shell histories how it was done but I will look some more. I do recall the method was highly specific to fonts and didn't appear to be transferable to other resources.

49 points | submitted 2 years ago* (last edited 2 years ago) by linuxPIPEpower@discuss.tchncs.de to c/linux@lemmy.ml
 

I accidentally removed a xubuntu live USB from the computer while it was running, but it seems to be working just fine. I can even launch applications that weren't already open.

Is that expected? I have always thought you needed to be careful to avoid bumping the USB drive or otherwise disturbing it.

Where is everything being stored? In RAM? Is the whole contents of the USB copied into RAM, or just some parts?

Edit: tried it with manjaro and it fell apart. All kinds of never-before-seen errors. Replacing the USB didn't fix it. Couldn't even shut down the machine; had to hard power off.
