cyberwolfie

joined 2 years ago
[–] cyberwolfie@lemmy.ml 1 points 1 week ago

It is important. But I find all the ones I have tried good, and would get by fine if I had to use any of them. I use KDE Plasma on my main personal laptop, I have Cinnamon running on a living room computer connected to my TV (not an ideal solution, but I've so far not taken the time to optimize the setup) and GNOME on my work laptop. I much prefer KDE Plasma out of them, but I like the others too.

[–] cyberwolfie@lemmy.ml 9 points 1 week ago

I am not sure the shareholders will accept such a meager compensation. Did you include emotional damage in your estimate?

[–] cyberwolfie@lemmy.ml 14 points 1 week ago* (last edited 1 week ago) (3 children)

Correct me if I’m wrong, but Anna’s archive is not giving you song downloads, but rather metadata

Were they not going to release the songs as well? They just started with the metadata?

ETA: Yes, this is from their blog post about it:

The data will be released in different stages on our Torrents page:

[X] Metadata (Dec 2025)
[ ] Music files (releasing in order of popularity)
[ ] Additional file metadata (torrent paths and checksums)
[ ] Album art
[ ] .zstdpatch files (to reconstruct original files before we added embedded metadata)

[–] cyberwolfie@lemmy.ml 44 points 1 week ago (20 children)

Can we also take a moment to acknowledge how utterly unhinged this part is?

"This marks not just the next chapter, but the next book in SpaceX and xAI's mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!"

[–] cyberwolfie@lemmy.ml 2 points 3 weeks ago

Their web installer makes degoogling accessible to everyone.

With the right hardware

[–] cyberwolfie@lemmy.ml 5 points 1 month ago

We would need open source cars which will never happen.

:'(

[–] cyberwolfie@lemmy.ml 1 points 1 month ago* (last edited 1 month ago)

When I initially scanned through the headlines, I read this one as "ASUS plans to produce RAM problems" and thought "Yeah, of course they are". My expectations of companies seem to be very low in general these days...

[–] cyberwolfie@lemmy.ml 9 points 1 month ago

That's a weird thing to present as an absolute truth. As someone who has extensively used both Windows (3.1, 95, 98, ME, XP, Vista, 7, 8, 10 and 11) and macOS (from 2011-2022), and now uses KDE Plasma on my daily driver laptop, GNOME at work and Cinnamon on my living room machine: all three Linux DEs are superior experiences.

Surely there are people who would prefer Windows or macOS over them, but it is highly subjective.

[–] cyberwolfie@lemmy.ml 2 points 1 month ago

They got the camera working on the FP3 and FP4. Fairphone employs a guy who basically works on getting their hardware to run Linux.

[–] cyberwolfie@lemmy.ml 4 points 1 month ago

I would use Audiobookshelf as a source for Music Assistant, and then play them via Music Assistant. That way I can use my Sonos speakers (and eventually Snapcast speakers), synchronize across rooms etc. If I had to use Audiobookshelf directly, I would either play it from my TV with the TV on (only other way I can use my Sonos Beam) or on my phone with a Bluetooth speaker or headphones.

[–] cyberwolfie@lemmy.ml 4 points 1 month ago

How voluntary is it when these platforms have a monopolistic grasp on how consumers access music these days? And the more people believe that the artists are actually fairly compensated from this model, the firmer this grasp becomes. What choice do they have of being there if they want to have any kind of reach?

A Spotify Premium subscription costs someone 156€ a year. Suppose that person instead spent that entire music budget on purchasing albums from select musicians according to the enjoyment they derive from their works, or on concert tickets or merch, and pirated the rest of their music listening. What changes? The consumer is now left with actual, irrevocable access (legal and illegal) to the same music they previously only rented, for the same amount of money. The musicians they bought from keep much more of that dedicated music spend, while the rest get marginally less (their stream-based share of twelve monthly subscriptions). Spotify and Taylor Swift receive marginally less money (but still more than the artists that person actually listens to), money they arguably should not have received to begin with.

[–] cyberwolfie@lemmy.ml 5 points 1 month ago (2 children)

I'm not sure how you think Spotify compensation works, but it is not a "one stream and you get paid" deal, but rather a revenue-share model where artists are compensated from a large pool according to total streams. The main share of your Spotify monthly subscription that goes to compensating artists goes to Taylor Swift, Bad Bunny etc. Being a top listener of your favorite underground band contributes negligibly to what they actually get paid.
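The pool arithmetic is easy to sketch. All numbers below (subscription price, artist pool share, stream counts) are made-up illustrations, not Spotify's real figures:

```python
# Pro-rata payout: all subscription money goes into one shared pool and is
# split by share of *total* platform streams, not by what each individual
# subscriber listened to. All numbers are illustrative assumptions.

def pro_rata_payout(pool, total_streams, artist_streams):
    """Artist's cut of the shared pool, proportional to their stream share."""
    return pool * artist_streams / total_streams

# One subscriber pays 13 EUR/month; assume 70% of that reaches the artist pool.
pool = 13 * 0.70

# Platform-wide streams this month vs. streams of a small underground band.
total_streams = 1_000_000
underground_streams = 500       # you played them on repeat all month

payout = pro_rata_payout(pool, total_streams, underground_streams)
print(f"{payout:.4f} EUR")      # a tiny fraction of your subscription
```

Under this assumption, even being a band's top listener moves fractions of a cent from your own subscription to them; the bulk of the pool follows the platform-wide stream counts.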

If you care about their compensation, buy the album as directly from them as possible, or buy merch/go to concerts, and recommend their music to other people so they might end up as paying customers. Subscribing to Spotify and thinking they get a fair deal out of that is not the way, and increasingly so given their GenAI shenanigans.

 

I work in a corporation with an IT-department that is all in on whatever Microsoft is offering. My team has for some time gotten more and more autonomy in tooling as IT is overloaded and forced to relinquish some control, but we still rely on them for supplying compliant machines that have access to our resources.

I requested a Linux machine just over 5 months ago, and I finally got it this week. It is running Ubuntu with GNOME, not my first choice, but the only thing that is Microsoft Intune compliant as far as I know.

So far it is such a relief. A better-specced machine with less bloat running on it. OOM issues should be few and far between now... It is slightly annoying having to use Edge for any service requiring corporate SSO, but I'll swallow that pill...

 

cross-posted from: https://lemmy.ml/post/35272958

I am looking into getting a BOSS RC-5 looping pedal for my guitar, and I am curious if anyone has any experience with using it with Linux?

It makes use of this BOSS Tone Studio to allow adding additional backing tracks, but it is only officially supported for Windows and macOS. I could not find many examples of people using it on Linux, but for the most part any discussion I could find was in the context of their amplifiers.

I wonder if it would be straightforward to run it through Wine? As far as I can tell, you only need to set the pedal up as a storage medium and connect it to your machine, although you can't just drag the files directly onto it.

It is not a deal breaker for me if I can't get it working, but it would certainly be a benefit if I could.

 

Recently at work I've been thrown into running some Python scripts in a Docker container (all previous Docker-experience is limited to pulling images from container registries to host some stuff at home). It's a fairly simple script, but I want to do two things simultaneously that I have so far been unable to accomplish: redirecting some prints to a file while also allowing the script to run a cleanup process when it gets a SIGTERM. I'm posting this here because I think this is mainly signal handling thing in Linux, but maybe it's more Docker specific (or even Docker Swarm)?

I'm not on my work computer now, but the entrypoint in the Dockerfile is basically something like this:

ENTRYPOINT ["/bin/bash", "-c", "python my_script.py | tee some_file.txt"]

Once I started piping, the signal handling in my script stopped working when the containers were shut down. If I understood it correctly, it's because bash is the main process (PID 1) and receives the SIGTERM, while Python runs as a child in the pipeline and never gets the signal to terminate gracefully. But surely there must be some elegant way to make sure it also gets it?

And yes, I understand I can rewrite my script to handle this directly, and that is my plan for work tomorrow, but I want to understand this better to broaden my Linux-knowledge. But my head was spinning after reading up on this (I got lost at trap), and I was hoping someone here had a succinct explanation on what is going on under the hood here?
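For reference, one common workaround is a small wrapper script where bash stays PID 1 but explicitly forwards SIGTERM to the python child. Below is a minimal, self-contained sketch of the trap-and-forward pattern; a bash one-liner stands in for `python my_script.py` so it can run anywhere:

```shell
#!/bin/bash
# Sketch of the trap-and-forward pattern. In a real entrypoint you would
# start "python my_script.py > >(tee some_file.txt) &" here instead, so
# python stays a direct child of bash and tee never takes its place in
# the process tree.

log=$(mktemp)

# Child that does its own cleanup on SIGTERM (the role of the python script).
# "sleep 30 & wait" instead of plain "sleep 30", because bash delays traps
# until a foreground command finishes.
bash -c "trap 'echo cleanup >> $log; exit 0' TERM; sleep 30 & wait" &
child=$!

# This shell would be PID 1 in the container: forward SIGTERM to the child
# instead of silently swallowing it.
trap 'kill -TERM "$child" 2>/dev/null' TERM INT

sleep 1            # give the child time to install its own trap
kill -TERM $$ &    # simulate "docker stop" sending SIGTERM to PID 1

wait "$child" || true   # interrupted by our own trap...
wait "$child" || true   # ...then reaps the child after its cleanup ran
result=$(cat "$log")
echo "$result"
```

In the Dockerfile you would then point at the wrapper, e.g. ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]. Another common option is exec with process substitution, exec python my_script.py > >(tee some_file.txt), which replaces bash entirely so python becomes PID 1 and receives SIGTERM directly.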

 

I frequently use KRunner to do simple sums when doing my accounting. I keep a ledger with numbers formatted as e.g. 1,000.00. My number formatting in KDE's System Settings under Region & Language is set to British English, i.e. the way I want it. However, whenever I copy a sum from KRunner, e.g. "1000.25 + 1000.25", it is copied as "2000,5" (i.e. no thousands separator, wrong decimal separator and only one decimal). It gets a bit annoying to fix this manually.

I can't seem to find any specific settings for this in KRunner or the Calculator plugin, and I would expect it to respect KDE's own settings.

Does anyone know how to force KRunner to do my bidding here?

 

I have a server running Debian that has been connected to WiFi for a long time, but I have since moved it and given it a wired connection. It still seems to be using WiFi though, and in my router settings it shows up as connected through WiFi and not through ethernet.

Now I want to make sure that I can switch over from WiFi to ethernet directly from an ssh-connection so I won't have to connect a screen to get direct access.

What is my best bet here? A lot of the tools I find used for different network operations are not pre-installed, and I don't want to be installing just everything being suggested. Can I solve this by installing network-manager and using nmcli?

EDIT: I also want to disable the wireless card.

EDIT2: No eth interface shows up when running ip link show. EDIT3: But r8169 0000:02:00.0 enp2s0: renamed from eth0 shows up in dmesg, and enp2s0 does show up in ip link show, so I guess it is recognized then.

[SOLVED] EDIT4: I made the modifications manually in /etc/network/interfaces, and now it seems to work. I entered the following lines:

auto enp2s0
iface enp2s0 inet dhcp

And then it showed up in my router. I then commented out the lines enabling the wireless interface, and it still works fine after a reboot.
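For anyone finding this later, a full /etc/network/interfaces implementing this might look roughly like the sketch below. The wireless interface name wlan0 and the wpa-* values are illustrative assumptions, not copied from my actual file:

```
# /etc/network/interfaces (sketch)
auto lo
iface lo inet loopback

# Wired connection via DHCP
auto enp2s0
iface enp2s0 inet dhcp

# Wireless disabled by commenting out its stanza
#auto wlan0
#iface wlan0 inet dhcp
#    wpa-ssid MyNetwork
#    wpa-psk MyPassphrase
```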

 

SOLVED: BananaTrifleViolin's post contains the solution.

Flatseal won't start by itself anymore, which is a known issue. I got it running by running

GSK_RENDERER=gl com.github.tchx84.Flatseal

and inspired by a response in the above linked issue, I wanted to add GSK_RENDERER=gl as an environment variable in Flatseal so I could open it without having to run this manually in the terminal.

However, I seem to have screwed that up and written GSK_RENDERER=ng instead, because the application still won't run, and now I get the following output any time I try to open it by the method above:

(com.github.tchx84.Flatseal:2): Gsk-WARNING **: 22:09:54.997: Unrecognized renderer "ng". Try GSK_RENDERER=help
MESA-INTEL: warning: ../src/intel/vulkan/anv_formats.c:782: FINISHME: support YUV colorspace with DRM format modifiers
MESA-INTEL: warning: ../src/intel/vulkan/anv_formats.c:814: FINISHME: support more multi-planar formats with DRM modifiers
Gdk-Message: 22:09:55.406: Error 71 (Protocol error) dispatching to Wayland display.

However, I can't for the life of me seem to correct this. I've tried running the above command again, I've tried overriding it with flatpak:

flatpak override --env=GSK_RENDERER=gl com.github.tchx84.Flatseal

(which yielded "permission denied", and nothing happened after running it with sudo)

I've reinstalled the application several times, including removing the config files from ~/.var/app/com.github.tchx84.Flatseal, and checked that /var/app/ does not contain any config files.

I just want to reset the user input changes I made to this flatpak and start over, but I'm getting nowhere...

 

After a fairly hassle-free year or so with this Epson ET-2815 printer, the cyan now won't print at all (no lines, no nothing - printing a full cyan page just yields white). I believe the print head is fully clogged and I want to perform a print head cleaning. I need the epson-printer-utility to do so (available from here, manual here), which I did not set up when I initially set up the printer.

I have installed epson-printer-utility as instructed and run it through the terminal, but I am met with an error message saying "The printer was not found". The printer is otherwise found on the network and configured in CUPS, and I can print just fine with it (apart from the cyan channel now not working).

I ran across this old post suggesting that the udev rule be copied over to /etc/udev/rules.d, but the installation process seems to have taken care of that already.

This print head cleaning function is also available through this god-awful mobile app that I had to use to set the printer up, but now the app also cannot find it, even when I try to connect directly to its IP. I have ensured that my phone is on the same network as the printer, but alas.

This happened straight after I set up the integration in Home Assistant, but I imagine this is just a coincidence. I last used the printer just over a month ago.

Anyone have any experience dealing with this?

31
submitted 1 year ago* (last edited 1 year ago) by cyberwolfie@lemmy.ml to c/linux@lemmy.ml
 

I'm running Jellyfin on a Debian-server in my home, and I have the associated media folders set up as samba shares so that I can transfer any new media from my laptop to the server through Dolphin (KDE file manager).

This has for the most part worked very well (except for slow speeds), but I've had an issue recently where files were not copied over properly. This resulted in glitches in, for example, music files that would stop playback. I checked the checksums of some of these files, and they were different from the source. It seems the glitchy files are missing some data, but at no point was I notified about this. It worked fine after I removed the files and transferred them again, and now the checksums match.

Is this a common issue with samba, or could it be a sign that my HDD is acting up?
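For reference, the spot-check I did by hand can be scripted so that every transferred file is verified rather than just a sample. A small sketch, assuming the share is mounted locally (the function names are my own, not from any particular tool):

```python
import hashlib
from pathlib import Path

def sha256sum(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large media files don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def find_corrupt_copies(src_dir, dst_dir):
    """Return relative paths whose copy under dst_dir differs from src_dir."""
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    bad = []
    for src in src_dir.rglob("*"):
        if src.is_file():
            dst = dst_dir / src.relative_to(src_dir)
            if not dst.is_file() or sha256sum(src) != sha256sum(dst):
                bad.append(src.relative_to(src_dir))
    return bad
```

Alternatively, rsync's --checksum mode recomputes checksums on both ends and re-copies only mismatched files, which is handy for repairing a batch of glitchy transfers in one go.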

 

I am contemplating buying one of the Seagate OneTouch Hub external hard drives as a backup for my media, which is currently stored on some other external hard drives connected to my home server that are always spinning.

My local retailers don't give me many options as far as large storage solutions go, and the only other viable option right now is a WD My Book 14 TB.

However, the retailer I will be buying it from goes out of its way to state that Windows or macOS is required. Is there any reason I should believe that I will run into trouble under Linux? I've had no issues whatsoever with some other Seagate hard drives (Expansion 5 TB), which I just instantly reformat to ext4 and use as normal. My guess is that this requirement is just for the included software? I just want to make sure before I order.

(More long term I will set up a NAS, but for now time to learn and configure is more scarce than money, so I just want a solution that will prevent me from losing my data)

EDIT: For anyone coming to this later wondering the same thing, I can confirm that it works just fine. It is just the included backup software that is not compatible. I've formatted it to ext4 and am currently using rsync to back up my media.

 

I want to migrate my Nextcloud instance from a VPS to a server in my home. I run the Nextcloud AIO Docker container, which uses Borg backup. The backup repo is about 70 GB.

How would I best go about transferring it? Is using scp a good solution here (in combination with nohup so that I don't have to keep my ssh session active)? Or is there some other best practice way of doing this?

 

I switched to Linux about 1.5 years ago when replacing my old Macbook Pro with a Tuxedo Infinity Book. I am super happy with the transition, and for the most part my digital life has vastly improved as a result of it. There's one thing in particular, though, that I haven't fully grasped despite all the talk about it, and that has mostly caused confusion on my part: Xorg/X11 (I don't know the difference...) vs. Wayland.

I started out with Tuxedo OS 1 and 2 running KDE Plasma 5.x, and have thus been on X11 for most of my time on Linux. I never dared switch to Wayland myself. However, they somewhat recently started offering optional upgrades to Tuxedo OS 3 running KDE Plasma 6, where Wayland is the default, and I took the plunge. The only real difference I noticed was small annoyances that I had to fix: glitching windows running on XWayland and having to configure some .desktop files to force apps to launch natively in Wayland; apps not showing the correct desktop icons but the generic Wayland logo instead, making Alt+Tabbing a bit more difficult because it is harder to tell applications apart; annoying smooth scrolling (I don't want scrolling to have as much friction as polished ice) activated in all kinds of applications that I seem to have to turn off individually. Nothing broke (though I haven't dared booting with my Nvidia dGPU yet, for fear of breaking something irreversibly...), but I haven't noticed any improvements either, and I find it a bit frustrating not knowing where to make the necessary changes, always having to search for them seemingly on a case-by-case basis.

Now, for instance, I was updating FreeTube to a new version, and the flags I previously added to the '.desktop' file suddenly don't work anymore (--enable-features=UseOzonePlatform,WaylandWindowDecorations --ozone-platform-hint=auto). The application won't launch unless I remove them, but then it launches under XWayland instead. Not that I have any issues so far running it like that, but I would prefer to run everything natively in Wayland if I can.

 

I am currently in the process of finally getting rid of my Meta account. As part of this I have requested a data extraction. The media stuff was made available pretty quickly, but the data logs are still being processed. Does anyone know what data they actually contain, and whether there's any point in waiting for them?

The reason I ask is that I also recently got a notification saying that they will soon train their AI model on my data, which they will use the "legitimate interest" bullshit to justify. I want to have my account deleted by the time this is phased in (towards the end of June).

So now I am in a dilemma: wait for the data logs to complete (which could take who knows how long), or just delete my account in hopes that it will be purged before the AI stuff goes into effect. I am unable to find out exactly what these data logs consist of and whether there is any point in holding onto them for whatever reason.

Now, whether I can trust that they actually delete the data is another matter, but at least I would have done what I can, and they would break the law (under GDPR) if they retain the data after my deletion request.
