N0x0n

joined 10 months ago
[–] N0x0n@lemmy.ml 2 points 2 days ago

This is probably the best answer you will get, OP! I have done some encodes (BD -> SVT-AV1) and everything FBJimmy said matches what I gathered through my own research on how to get the best quality/speed encode without losing too much fine detail.

This won't make you happy if what you want is GPU encoding, because that's meant for on-the-fly encoding (streaming via Twitch, YouTube, whatever...). GPU encoding seems like a nice idea, but CPU software encoding is way more efficient in quality per bitrate.

It seems you aren't looking for quality video encoding, but rather for fast encoding? If that's the case, yeah, GPU encoding seems the better idea here. But I can't help there, sorry...

Most of the encodes I have done with ffmpeg on AV1 ran at around 20 fps. Yes, it's slow, however I get near "lossless" quality with an acceptable file size to serve over Jellyfin. Also, I have never heard anyone say that 80-90% CPU utilization is bad for your CPU as long as your temps are all right (over 80°C seems a bit alarming). Sure, if you're doing video encoding every day your CPU will suffer over time, but that's practically what they are built for... processing information! And like everything, the more you use it, the more it wears out (the same goes for your GPU...).
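
For reference, a minimal sketch of the kind of encode I mean (the preset/CRF values are illustrative assumptions, not my exact settings; -crf with libsvtav1 needs a reasonably recent ffmpeg, 5.1 or newer):

# Software AV1 encode: lower preset = slower but better compression,
# CRF around 20-25 trades size against fidelity; audio is passed through untouched
ffmpeg -i input.mkv -c:v libsvtav1 -preset 4 -crf 22 -c:a copy output.mkv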

But I can understand your determination and hope you will find your way around. I'm also stubborn when I want something to work the way I want.

[–] N0x0n@lemmy.ml 3 points 2 days ago* (last edited 2 days ago)

If it's a pirated Steam game and already extracted, you can just create a dummy Steam account, add the executable as a non-Steam game and run it with Proton from Steam (I had good success with Proton Experimental).

Everything else should be run via Lutris + wine prefixes (or whatever Windows compatibility layer you choose).

It's fairly easy when you know what you're doing, but still not as easy as on Windows itself. I would say most games run all right. I recently played The Last of Us Part I via Lutris + wine prefixes. Some fps drops and one crash in a 5-hour session seems pretty reasonable.

However, Lutris + wine prefixes are harder to get right depending on the wine version installed and what graphics options you want. It can get frustrating, especially if you don't know which game needs which Windows trick (directx9, vcrun2015...).
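
As an illustration (the prefix path here is a placeholder, not from a specific game), winetricks can install those per-prefix runtimes in one go:

# Unattended (-q) install of DXVK and the VC++ 2015 runtime into one game's prefix
WINEPREFIX=~/Games/prefixes/mygame winetricks -q dxvk vcrun2015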

I had a mostly good success rate with the staging version of wine (I think that's what Proton Experimental on Steam is based on). Depending on whether you get it right, the same game can go from a burning, messy, unplayable state to something as smooth as on Windows.

So yeah, it involves more personal involvement to get it right, and yes, it's still harder to play pirated games on Linux than on Windows, but easier than 5 years ago!

[–] N0x0n@lemmy.ml 2 points 3 days ago (1 children)

floppyfw : turn a floppy into a firewall

Wait what? :o

[–] N0x0n@lemmy.ml 2 points 3 days ago (1 children)

Yeaaah, I hate to admit it... But Samba is the only cross-platform sharing protocol that works with every OS... I wish I could switch to NFS.
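
For anyone wondering what that looks like in practice, a minimal sketch of such a share (share name, path and user are placeholders):

# /etc/samba/smb.conf excerpt: one authenticated, read-only share
[media]
   path = /srv/media
   read only = yes
   guest ok = no
   valid users = joe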

[–] N0x0n@lemmy.ml 2 points 5 days ago

when my girlfriend's build pipeline finishes

😂😂

[–] N0x0n@lemmy.ml 2 points 1 week ago

Tell me how it went :) curious if it worked out !

[–] N0x0n@lemmy.ml 2 points 1 week ago

Some people will probably disagree with me, but I consider Debian stable a server distribution, not a daily-driver system.

Debian testing is probably the better choice if you want to daily-drive Debian, or consider a more up-to-date distro. If you're relatively new to GNU/Linux, don't bother with bleeding-edge or exotic distros like Arch, EndeavourOS, Gentoo, NixOS...

If you find your way to distrowatch.com you will see EndeavourOS very high in the rankings, but it's a rolling-release distribution. While it's easier to install/maintain than Arch, it has a learning curve and needs regular attention and reading of the docs/forum.

I have seen a lot of people recommend the following:

  • Linux Mint
  • Pop!_OS
  • Fedora
  • openSUSE
[–] N0x0n@lemmy.ml 4 points 1 week ago (2 children)

If you only want to find duplicates give czkawka a try. It has a nice GUI too.

I don't know how good it is for music duplicates; I only used it to find duplicate pictures, and it worked very well!

[–] N0x0n@lemmy.ml 21 points 1 week ago (1 children)

One person but 10 fingers to write the code. That's a lot of guys at work here.

[–] N0x0n@lemmy.ml 1 points 1 week ago

Thanks for the tip!! I will certainly give it a look. It's kinda annoying for my family members to always have to connect via WireGuard.

For me it's fine, though; I even route my traffic through ProtonVPN. But my family is always nagging about how they need to "do something" to get access to the hosted services, or that it "doesn't work".

[–] N0x0n@lemmy.ml 4 points 1 week ago* (last edited 1 week ago) (6 children)

Except that everything is under your control and not managed by a third party, not much I think.

If this setup works for you and you're happy with it, just keep it going.

If you have time to spare and want to learn new things, tinker around with network security, certificates, DNS, reverse proxies and so on, you can give it a try in a virtual machine with docker containers. But keep in mind that it's not the easy way and involves a lot of personal time before you get a GOOD working set of self-hosted/exposed services.

I wouldn't recommend opening any port on your router except for a secured tunnel like WireGuard, and connecting to your services through that tunnel. Opening ports 443/80 on your router invites heavy automated scanning and brute force by bots. If you don't have the necessary knowledge/tools/hardware, this is just going to put you at risk of DDoS and remote attacks.
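
To make that concrete, a minimal wg-quick sketch of the server side (keys, addresses and port are placeholders): only this one UDP port ever gets forwarded on the router, and every service is reached through the tunnel.

# /etc/wireguard/wg0.conf on the server
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32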

That's why something like Cloudflare is popular: most of the time they take care of that nuisance. It's also why something like WireGuard is popular among the self-hosting community.

36
submitted 1 month ago* (last edited 1 month ago) by N0x0n@lemmy.ml to c/linux@lemmy.ml
 

TIL something new... My hate for macOS took over common logic. A 2.8GB, 3-second file transfer to USB was too beautiful to be true. After some further investigation and hints from @JonnyRobbie@lemmy.world and @nanook@friendica.eskimo.com, I learned that Linux writes to cache before writing to the device. To see what's happening in the background: sync & watch -n 1 grep -e Dirty: /proc/meminfo.
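
Spelled out, the same commands with what to expect:

# Dirty: shows how much data is still sitting in the page cache, waiting for the stick
watch -n 1 grep -e Dirty: /proc/meminfo
# sync blocks until all cached writes have actually reached the device
sync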

Still, the transfer speed on Linux was slightly faster than on macOS. My rant was unjustified; it was just my fault for being clueless about some more advanced Linux stuff. But I learned something new today, so this post was actually helpful!

However, I still hate macOS and will probably give Asahi Remix a try.

Thanks to everyone !


Hey guys! I'm getting tired/bored of macOS' shenanigans... Yesterday was the last straw that made me think about trying an alternative.

While trying to copy a 2.8 GB file over to a USB-C stick, it took like 8 minutes? Okay, that's "good" enough if you only do it from time to time... But 25 files take literally 1h30min... Are we in 2001?

I mean the exact same 2.8GB file, with the exact same USB-C stick, took FU***** 3 seconds on Linux!!

Ohh, and don't think I didn't try to "fix" the issue. After a long search on the web I came across a lot of people having similar issues that haven't been fixed for two major updates? With total radio silence from the shiny poisonous Apple...

Among other things I tried:

  • Disabling Spotlight indexing: sudo mdutil -a -i off
  • Reformat the USB stick from Mac
  • All available filesystems: FAT32, exFAT... (yes, even macOS' native APFS)
  • Another USB stick
  • ....

Enough is enough. I was willing to learn their way of thinking for my personal experience, and I somehow always found a way to reproduce on the Mac what I had learned on Linux. But now that there is an alternative OS, I think I'm ready to come back home.

So, has anyone here already given Asahi Remix a try? If so, what was your experience with it?

I read their FAQ and most of their documentation, and it seems good enough for daily driving (except for some quirks here and there), but I wanted to hear from people who already made the jump about how it felt for them personally.


PS: I got that Mac for my birthday from a family member with good intentions; it wasn't a personal choice. While I'm more than happy and thankful for the gift, I hate it more and more... especially because MOST of my self-hosted services, applications and scripts are open source.

 

Hi everyone :).

Just getting started with Manjaro as a daily driver, to get an easier Arch-based distro. Except for the LVM bug with Calamares, everything is pretty smooth :).

But at first boot I saw they have added their own Manjaro logo on boot, and I directly thought of the LogoFAIL exploit I heard about a few months ago. It made me curious whether this is something that could be exploited via Manjaro.

Probably not, as this would harm their image and the system they worked hard on, but I'm still curious... If someone smarter/more knowledgeable than me could chime in and give some valuable information on this topic regarding Manjaro, I would really appreciate it!

Thank you !

11
submitted 6 months ago* (last edited 6 months ago) by N0x0n@lemmy.ml to c/homelab@lemmy.ml
 

Hi everyone :)

It's time to give my home network a proper minimal hardware upgrade. Right now everything is managed by my ISP's AIO firewall/router combo, which works okay-ish, but I'm already doing some firewall/DNS/VPN stuff on a minimal spare-laptop server to bypass most of my ISP's restrictions. So it's time to get a little bit "crazy"!

While I do have some "power user" knowledge regarding Linux/servers/self-hosted services/networking, I'm a bit clueless hardware-wise, especially regarding my ISP's 2.5G ethernet port.

I have a 5 Gbit/s connection from my Internet provider (Obtic fiber), which is divided across 4 ethernet ports (Eth1 2.5G, Eth2 1G, Eth3 1G, Eth4 0.5G or something in that range). Right now the Eth1 port is connected through an old 1G switch.

  1. To take full advantage of my ISP's 2.5G ethernet port, do I need a router AND a switch capable of 2.5G throughput? Or only the router, with the switch dividing the bandwidth accordingly between all connected devices on a 1G switch?

I'm also looking for recommendations/personal experience for a router and a switch with a budget of 250€.

First I was interested in a BananaPi as a router, to tinker a bit, but it seems a bit of a hassle to flash it with OpenWRT. Then I found an interesting post on Lemmy about Intel N100 / Celeron N5105 boxes, which look more like what I'm looking for, but I'm not sure?

  2. I have no idea what's the best bet: an SBC (BananaPi, Orange Pi, Raspberry Pi...), a fully fledged router (like the TP-Link AX1800, flashed with OPNsense/OpenWrt) or an Intel N100 / Celeron N5105 soft router?

The capabilities I'm looking for:

  • VLAN capable
  • AP VLAN capable, to segment wifi
  • Taking advantage of my ISP's 2.5G ethernet port
  • Firewall customization capabilities

I have an eye on a managed switch I found on Amazon (SODOLA 6 Port 2.5G Web Managed), but I have no idea how reliable they are; I have never heard of SODOLA.

  3. Any good recommendations for a managed switch that would work well with the same capabilities above?

  4. Probably my last question is regarding wifi APs. Is it possible to make an access point from my router even though it doesn't have antennas? If I connect an access point directly to my router, will it be capable of providing wifi?

Thanks for reading through. I'm a bit unsure how I should spend my money to have a minimal but reliable/capable homelab setup. Every piece of advice is welcome, but keep in mind I want to keep it minimal: good enough routing capability with intermediate firewall customisation. I'm already hosting a few containers on a spare laptop and the traffic isn't going to be too crazy.

 

Hi everyone !

Right now I can't decide which one is the most versatile and fits my personal needs, so I'm looking into your personal experience with each of them, if you don't mind sharing.

It's mostly for secure shared volumes containing ebooks and media storage/files on my home network, adding some security into the mix even though I actually don't need it (mostly for the learning process).

More precisely, how difficult is NFS configuration with Kerberos? Is it actually useful? I have never used Kerberos and have no idea how it works, so it's very much a new tech on my side.
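
From what I've gathered so far (a hedged sketch: it assumes a working KDC and nfs/ service principals already sitting in both keytabs, which is the actual hard part), the export side of a Kerberized NFSv4 share is tiny:

# /etc/exports on the server; sec=krb5p = authentication + integrity + encryption
/srv/media  *(rw,sec=krb5p)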

I would really appreciate some in-depth personal experience and why you would consider one over the other!

Thank you !

16
submitted 7 months ago* (last edited 7 months ago) by N0x0n@lemmy.ml to c/linux@lemmy.ml
 

Hello !

I'm getting a bit annoyed with permission issues with samba and sshfs. If someone could give me some input on how to find another, more elegant and secure way to share a folder path owned by root, I would really appreciate it!

Context

  • The following folder path is owned by root (docker volume):

/var/lib/docker/volumes/syncthing_data/_data/folder

  • The child folders are owned by the user server

/var/lib/docker/volumes/syncthing_data/_data/folder

  • The user server is in the sudoers file
  • server is in the docker group
  • fuse.conf has user_allow_other uncommented

Mount point with sshfs

sudo sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder -o allow_other

Permission denied

Things I tried

  • Adding other options like gid 0,27,1000 uid 0,27,1000 default_permissions...
  • Finding my way through stackoverflow, unix.stackexchange...

Solution I found

  1. Making a bind mount from the root-owned path to a new path owned by server

sudo mount --bind /var/lib/docker/volumes/syncthing_data/_data/folder /home/server/folder

  2. Mount point with sshfs

sshfs server@10.0.0.100:/home/server/folder /home/user/folder
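
For completeness, persisted in the server's fstab that bind mount looks something like this (exactly the kind of extra entry I'd rather avoid):

# /etc/fstab: bind the root-owned docker volume onto a path owned by server
/var/lib/docker/volumes/syncthing_data/_data/folder  /home/server/folder  none  bind  0  0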

Question

While the above solution works, it overcomplicates my setup and adds an unnecessary mount point and fstab entry.

Isn't there a more elegant solution that works directly with the user server (which has root access), mounting the folder with sshfs directly even if the folder path is owned by root?

I mean, the user has root access, so something like:

sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder -o allow_other

should work even if the first part of the path is owned by root.

Changing the owner/permissions of the path recursively is out of the question!
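
One possibly more elegant route I've seen mentioned (an untested sketch, so take it with a grain of salt): sshfs talks to the remote sftp-server process, not to a login shell, so the user's sudo rights never come into play unless sftp-server itself runs under sudo. With a NOPASSWD sudoers entry for that binary on the server, something like this might work (the sftp-server path is Debian's and varies by distro):

# On the server, via visudo: allow server to run sftp-server as root, no password
#   server ALL=(root) NOPASSWD: /usr/lib/openssh/sftp-server
# On the laptop, make sshfs start the remote SFTP server through sudo:
sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder -o sftp_server="sudo /usr/lib/openssh/sftp-server" -o allow_other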

Thank you for your insights !

 

Hello again :)

I'm not talking about a broken wg connection, everything works as expect through the CLI and systemctl.

But the NetworkManager GUI in Gnome shows my WireGuard connection as if it were "not connected", and when I click on the switch it actually disconnects my wg interface.

Also when I try to edit my connection through

nmcli connection modify wg0 connection.autoconnect yes

and restart my wireguard connection with

systemctl restart wg-quick@wg0

It recreates a new wireguard interface.

While everything works as expected with the usual tools (wg-quick, systemctl...), the GUI seems "broken".
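
A hedged guess rather than a confirmed fix: the GUI only reflects connections NetworkManager itself manages, so an interface brought up by wg-quick looks foreign to it. Importing the config and letting NetworkManager own the tunnel instead of wg-quick might reconcile the two:

# Import the wg-quick config as a native NetworkManager connection (NM 1.16+)
nmcli connection import type wireguard file /etc/wireguard/wg0.conf
# Bring it up through NetworkManager so the Gnome switch tracks its real state
nmcli connection up wg0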

Has anyone else noticed this, or is it somehow related to my setup?

Debian 12 bookworm
Gnome 
nmcli tools 1.42.4
 

Solved

After interesting/insightful inputs from different users, here are the takeaways:

  • It doesn't have any critical or dangerous impact or implications when extracted
  • It contains the tarred parent folder (see below for some neat tricks)
  • It only overwrites the owner/permissions if ./ itself is included in the tar file as a directory.
  • Tarbombs are specially crafted tar archives with absolute paths / (by default, (GNU) tar strips absolute paths and will throw a warning, except if used with the special option --absolute-names or -P)
  • Interesting read: Path-traversal vulnerability (../)
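
A quick demo of that default behavior (assuming GNU tar; the file name is just an example):

tar -cf abs.tar /etc/hostname    # warns: "Removing leading `/' from member names"
tar -tf abs.tar                  # lists etc/hostname, i.e. stored relative
tar -cPf abs.tar /etc/hostname   # -P / --absolute-names keeps the leading / (tarbomb territory)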

Some neat trick I learned from the post

Temporary subshell with its own environment:

Let's say you're in your home directory, /home/joe. You could do something like:

> (cd bin && pwd) && pwd
/home/joe/bin
/home/joe

source

Exclude the parent folder and ./ ./file prefixes from the tar archive

There are probably a lot of different ways to achieve that expected goal:

(cd mydir/ && tar -czvf mydir.tgz *)

find mydir/ -printf "%P\n" | tar -czf mytar.tgz --no-recursion -C mydir/ -T -

source


~~The absolute path could overwrite my directory structure (tarbomb) source Will overwrite permission/owner to the current directory if extracted. source~~

I'm sorry if my question wasn't clear enough, I'm really doing my best to be as comprehensible as possible :/


Hi everyone !

I'm playing around a bit with tar to understand how it works under the hood. While poking around and searching the web, I couldn't find an actual answer on what the implications of the ./ and ./file structure in a tar archive are.

Output 1

sudo find ./testar -maxdepth 1 -type d,f -printf "%P\n" | sudo tar -czvf ./xtractar/tar1/testbackup1.tgz -C ./testar -T -
#output
> tar tf tar1/testbackup1.tgz 

text.tz
test
my
file.txt
.testzero
test01/
test01/never.xml
test01/file.exe
test01/file.tar
test01/files
test01/.testfiles
My test folder.txt

Output 2

sudo find ./testar -maxdepth 1 -type d,f  | sudo tar -czvf ./xtractar/tar2/testbackup2.tgz -C ./testar -T -
#output
>tar tf tar2/testbackup2.tgz

./testar/
./testar/text.tz
./testar/test
./testar/my
./testar/file.txt
./testar/.testzero
./testar/test01/
./testar/test01/never.xml
./testar/test01/file.exe
./testar/test01/file.tar
./testar/test01/files
./testar/test01/.testfiles
./testar/My test folder.txt
./testar/text.tz
./testar/test
./testar/my
./testar/file.txt
./testar/.testzero
./testar/test01/
./testar/test01/never.xml
./testar/test01/file.exe
./testar/test01/file.tar
./testar/test01/files
./testar/test01/.testfiles
./testar/My test folder.txt

The outputs are clearly different, and if I extract them both, the only difference I see is that the second one outputs the parent folder. But from reading here and here, this is not a good solution? But nobody actually says why?

Does anyone have a good explanation of why the second way is bad practice, or not recommended?

Thank you :)

 

Hello everyone !

I have no idea if I’m in the right community, because it’s a mix of hardware and some light code/command to extract the power consumption out of my old laptop. I need some assistance and if someone way more intelligent than me could check the code and give feedback :)

Important infos

  • 12-year-old ASUS N76 laptop
  • Bare-bones server running Debian 12
  • No battery (it died a long time ago)

Because I have no battery connected to my laptop, it's impossible to use tools like lm-sensors, powerstat or powertop to output the wattage. But from the following resource I can estimate the power based on the energy counters.

time=1
# Sample each powercap energy counter (µJ) twice, $time seconds apart
declare -a T0=($(sudo cat /sys/class/powercap/*/energy_uj)); sleep "$time"; declare -a T1=($(sudo cat /sys/class/powercap/*/energy_uj))
# Average power per zone: delta-energy (µJ) / delta-time (s) / 1e6 = watts
for i in "${!T0[@]}"; do awk -v d="$(( T1[i] - T0[i] ))" -v t="$time" 'BEGIN { printf "%.1f W\n", d / t / 1e6 }'; done

While it effectively outputs something, I'm not sure if I can rely on that to estimate the power consumption, or if the code is actually correct? :/

Thanks :).

Edit:

My goal is to calculate the power drawn by my laptop without any external measuring appliance (maybe I worded my question/title wrong?). While it could easily be done with the powertop package or lm-sensors, that only works by measuring the battery discharge, which in my case is impossible because my laptop is directly connected to the outlet with its power cord (battery died years ago).

I dug a bit further through the web and found someone who asked the same question on superuser.com. While this gives a different reference point, nobody could actually answer the question.

This seems a bit harder than I thought, and it's actually related to the /sys/class/powercap/*/energy_uj files. I thought someone could give me a bit more detail on how this works and what the output actually shows.

This is also related to the power capping framework in the Linux kernel? And as per the documentation, it represents the CPU package's current energy counter in microjoules.

So I came a bit closer to understanding how it works and what it does, even though I'm still not sure what I'm actually looking at :\ .

 

Hi everyone :)

I'm slowly getting used to navigating and editing things in the terminal without leaving the keyboard for the arrow keys. I'm getting faster and it has improved my workflow in the terminal (Yeahhii).

ctrl + a e f b u k ...
alt + f b d ...

But yesterday I had such a bad experience while editing a backup bash script with nano. It took me like an hour to make small edits like a caveman, and I kept confusing the editor whenever muscle memory fired off those terminal shortcuts. (Makes sense in hindsight: those are readline bindings from the shell's line editor, and nano has its own, different set.)

This really pissed me off... I know nano also has minimal/limited shortcuts, but having to memorize and switch between different ones for different purposes seems like a waste of time.

I think I tried emacs a few months ago but it didn't click. I didn't spend enough time on it though; I tried it for a few minutes and deleted it afterwards. Maybe I should give it a second try?

I also gave Vim a try, but that session is still open and I can't exit (😂)! Vim seems rather too complex for my workflow; I'm just a self-taught power user making his way through Linux. Am I wrong?

Isn't there something more "universal"? Something portable that works the same everywhere I go?

I'm very interested in everyone's thoughts, insights, personal experience and tips/tricks to avoid what happened yesterday!

Thanks !
