IsoKiero

joined 1 year ago
[–] IsoKiero@sopuli.xyz 1 point 2 months ago (1 children)

As far as I know that is the default way of handling multiple DNS servers. I'd guess that at least some of the firmware out there treats them as primary/secondary, but based on my (limited) understanding the majority of Linux/BSD based software uses one or the other more or less randomly, without any preference. So it's not always like that, but I'd say it's less common to treat DNS entries with any kind of preference than to just pick one of them.

But as there's a ton of various hardware/firmware around, this of course isn't conclusive; for your specific case you'd need to dig pretty deep to get the actual answer for your situation.
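
If you want to dig into it, a couple of quick checks usually reveal which resolver your system actually uses (the commands below assume a fairly standard Linux install; adjust to taste):

```
# Which resolver is actually answering? (assumes a typical modern Linux box)
cat /etc/resolv.conf              # a 127.0.0.53 entry usually means systemd-resolved sits in front
resolvectl status 2>/dev/null     # per-link DNS servers when systemd-resolved is in use
ps -e | grep -E 'dnsmasq|unbound|systemd-resolve'   # any local caching resolver running?
```

Each of those behaves differently when handed multiple upstream servers, which is exactly why there's no single answer.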

[–] IsoKiero@sopuli.xyz 6 points 2 months ago (4 children)

have an additional external DNS server

While I agree with you that an additional DNS server is without question a good thing, you need to understand that if you set up two nameservers on your laptop (or whatever), they don't have any preference. So if you have a Pi-hole as one nameserver and Google as the other, you will occasionally see ads and your Pi-hole gets overridden every now and then.

There are multiple ways of solving this, but people often seem to have the misinformed idea that the first item on your DNS server list is preferred, and that is very much not the case.
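
As a rough illustration (the addresses are just placeholders, and the exact behaviour still depends on whether glibc, systemd-resolved or something else is doing the resolving), this kind of setup is what bites people:

```
# /etc/resolv.conf -- illustrative only, the addresses are made up
nameserver 192.168.1.2   # Pi-hole
nameserver 8.8.8.8       # public resolver; there's no guaranteed "backup only" role here
# With the stock glibc resolver the first entry is tried first but queries fall through on
# timeouts, and 'options rotate' spreads them round-robin on purpose; other resolvers have
# their own logic, so ads leaking past the Pi-hole is expected with a mix like this.
```

One of the simpler fixes is to hand out only the Pi-hole address via DHCP and let the Pi-hole itself use the public servers as its upstreams.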

Personally I'm running a Pi-hole for my network on a VM, and if that's down for a longer time I'll just switch the DNS servers handed out by DHCP and reboot my access points (as the family hardware is 99% on wifi). That way the rest of the family has working internet while I'm bringing the rest of the infrastructure back online. But that's just my scenario; yours will most likely be more or less different.

[–] IsoKiero@sopuli.xyz 2 points 2 months ago

Back in the day with dial-up internet, man pages, readmes and other included documentation were pretty much the only way to learn anything, as the web was in its very early stages. And 'man' is still way faster than trying to search for the same information on the web. Today at work I needed the man page for setfacl (since I still don't remember every command's parameters) and found out that the WSL2 Debian on my office workstation doesn't ship the 'man' command out of the box, and I was more than mildly annoyed that I had to search for it.

Of course today it was just an alt+tab to the browser, a new tab and a few seconds for results, which most likely consumed enough bandwidth that on dial-up it would've taken several hours to download, but it was annoying enough that I'll spend some time on Monday fixing this on my laptop.
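
The fix itself should be trivial, assuming the image is just missing the packages rather than actively excluding documentation:

```
# Assumes a Debian/Ubuntu-based WSL distro that simply ships without man-db
sudo apt update
sudo apt install man-db manpages   # man-db provides 'man', manpages the core pages
man setfacl                        # should work after that
# If pages are still missing, check whether the image excludes docs via dpkg path-excludes:
grep -ri path-exclude /etc/dpkg/dpkg.cfg.d/ 2>/dev/null
```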

[–] IsoKiero@sopuli.xyz 5 points 2 months ago (1 children)

I mean that the product made here is not the website, and I can well understand that the developer has no interest in spending time on it as it's not beneficial to the actual project he's been working on. I can also understand that he doesn't want to receive donations from individuals, as that would bring in even more work to manage, which is time spent away from the project. A single sponsor with clearly agreed boundaries is far simpler to manage.

[–] IsoKiero@sopuli.xyz 20 points 2 months ago (7 children)

You do realize that man pages don't live on the internet? The kernel.org one is the official project website as far as I know, but the project itself is very much not about the web presence; it's about the vastly useful documentation included in your distribution.

[–] IsoKiero@sopuli.xyz 10 points 2 months ago (1 children)

The threat model seems a bit like fearmongering. Sure, if your container gets breached and the attacker can (on some occasions) break out of it, it's a big deal. But how likely is that really? And even if it happened, isn't the data in the containers far more valuable than the base infrastructure under it in almost all cases?

I'm not arguing against the SELinux/AppArmor comparison. SELinux can be more secure, assuming it's configured properly, but there are quite a few steps in hardening a system before that. And as others have mentioned, neither of those is really widely adopted, and I'd argue that when you design your setup properly from the ground up you don't really need either, at least unless the breach happens through some obscure 0-day or other bug.

For the majority of data leaks and other breaches that's almost never the reason. If your CRM or ecommerce software has a bug (or misconfiguration, or a ton of other options) which allows dumping everyone's data out of the database, SELinux won't save you.

Security is hard indeed, but that's a bit of an odd corner to look at it from, and it doesn't have anything to do with Debian or RHEL.

[–] IsoKiero@sopuli.xyz 3 points 2 months ago

If I had to guess, I'd say that e1000 cards are pretty well supported on every public distribution/kernel without any extra modules, but I don't have any around to verify that. At least on this Ubuntu I can't find any e1000-related firmware package or anything else, so I'd guess it's supported out of the box.

For ifconfig, if you omit '-a' it doesn't show interfaces that are down, so maybe that's the obvious thing you're missing? The card should show up in NetworkManager (or any other graphical tool, as well as nmcli and other CLI alternatives), but as you're going through the manual route I assume you're not running any of those. mii-tool should pick it up on the command line too.

And if it's not that simple, there seems to be at least something around the internet if you search for 'NVM checksum is not valid' and 'e1000e', specifically related to Dell, but I didn't dig too deep down that path.
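
For what it's worth, these are the generic checks I'd run first (nothing here is specific to your exact card or firmware):

```
# Does the kernel see the card at all, and is the e1000e driver complaining?
lspci | grep -i ethernet        # the card should be visible on the PCI bus
ip link show                    # lists every interface, including ones that are down
ifconfig -a                     # same with the legacy tool; -a includes down interfaces
sudo dmesg | grep -i e1000      # driver messages, e.g. the 'NVM checksum is not valid' error
sudo mii-tool -v                # link status as reported by the driver
```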

[–] IsoKiero@sopuli.xyz 0 points 2 months ago

Part of it is because the technology, especially a decade or so ago, had restrictions. Like with ADSL, which often (or always) couldn't support higher upload speeds due to the end-user hardware; and the same goes for 4G/5G today, where your cellphone just doesn't have the power to transmit as fast or as far as the tower access point.

But with wired connections, especially fibre/coax, that doesn't apply and money comes into play. ISPs pay for the bandwidth to the 'next step' in the network. Your 'last mile' ISP buys some amount of traffic from the 'state wide operator' (kind of; it depends heavily on where you live, but the analogy should work anyway), and that's where 'upload' and 'download' traffic starts to play a part. I'm not an expert by any stretch here, so take this with a spoonful of salt, but the traffic inside your ISP's network and going through their own hardware doesn't cost 'anything' (electricity for the switches/routers and their maintenance excluded, as a cost of doing business), while pushing an additional 10Gbps to the neighboring ISP requires resources to carry it.

And that's (at least here) where asymmetric connections come into play. Let's say that you have a 1Gbps connection to youtube/netflix/whatever. The original source needs to pay the network for the bandwidth your stream goes through in order to give a decent user experience. But the traffic from your ISP out to the network is far less; a blunt analogy would be that your computer sends a request of 'show me the latest Mr. Beast video' and the youtube server answers 'sure, here's a few gigabytes of video'.

Now, everyone pays for the 'next step' connection by the actual amount of data carried (as the hardware needs to have the capacity to take the load). For a generic home user profile, the amount downloaded (and going through your network) is vastly bigger than the traffic going out of your network. That way your last-mile ISP can negotiate with the 'upstream' operator for the capacity to take 10Gbps in (which is essentially free once the hardware is purchased) while only sending 1Gbps out, so the 'upstream' operator needs a lot less capacity going through their network 'the other way'.

So, as the link speed and the amount of traffic are billed separately, it's way more profitable to offer 1Gbps down and 100Mbps up to the home user. All of this is of course a gross simplification, and in the real world things are vastly more complex with caching servers, multiple connections to other networks and so on. But at the end of the day every bit you transfer has a price, and if you mostly sink in the data your users want, and what your users push out toward the upstream is significantly less than that, there's money to be made in the imbalance, and that's why your connection might be asymmetric.
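
To put some toy numbers on that imbalance (completely made up, purely to illustrate the ratio, not anything a real ISP would use):

```
# Completely made-up numbers, purely to illustrate the ratio argument above
SUBSCRIBERS=1000
AVG_DOWN_MBPS=8     # assumed average per-subscriber download at peak hour
AVG_UP_MBPS=1       # assumed average upload; home traffic is heavily download-skewed

echo "Downstream transit needed: $(( SUBSCRIBERS * AVG_DOWN_MBPS / 1000 )) Gbps"
echo "Upstream transit needed:   $(( SUBSCRIBERS * AVG_UP_MBPS / 1000 )) Gbps"
# The upstream link can be provisioned roughly an order of magnitude smaller than the
# downstream one, which in this simplified picture is where asymmetric plans come from.
```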

[–] IsoKiero@sopuli.xyz 5 points 2 months ago (3 children)

And pico is short for 'Pine composer'. Nano was originally called 'TIP' ('TIP Isn't Pico'), but that name was already used by another program. And 'elm', besides being a tree, is short for 'Electronic Mail'.

[–] IsoKiero@sopuli.xyz 18 points 2 months ago (6 children)

GNU

Which stands for 'GNU's Not Unix'. Also 'less' (which is more). Pine is (was) 'Program for Internet News and Email', and the FOSS fork is 'Alpine', or 'Alternatively Licensed Program for Internet News and Email'. And there's a ton more wordplay and other more or less fun stuff in how and why things are named the way they are.

[–] IsoKiero@sopuli.xyz 2 points 2 months ago (1 children)

I read Linus's book several years ago, and based on that flimsy knowledge in the back of my head, I don't think Linus was really competing with anyone at the time. Hurd was around, but it's still coming soon(tm) to widespread use, and things between AT&T and BSD were "a bit" complex at the time.

BSD obviously brought a ton of stuff to the table which Linux greatly benefited from, and their stance on FOSS shouldn't go without appreciation, but assuming my history knowledge isn't too badly flawed, BSD and Linux weren't direct competitors. They started to gain traction around the same time (regardless of BSD's much longer history) and they grew stronger together instead of competing with each other.

A ton of us owe our current corporate lives to the people who built the stepping stones before us, and Linus is no different. Obviously I personally owe Linus a ton for enabling my current status at the office, but the whole thing wouldn't have been possible without the people who came before him. RMS and the GNU movement play a big part in that, but an equally big part is played by a ton of other people.

I'm not an expert by any stretch on the history of Linux/Unix, but I'm glad that the people preceding my career did what they did. Covering all the bases on the topic would require a ton more than I can spit out on a platform like this; I'm just happy that we have the FOSS movement at all instead of everything being a walled garden today.
