IsoKiero

[–] IsoKiero@sopuli.xyz 7 points 2 months ago (4 children)

That kind of depends on how you define FOSS. The way we think of it today was in very early stages back in 1991, and the original source was distributed as free, both as in speech and as in beer, but commercial use was prohibited, so strictly speaking it doesn't qualify as FOSS (as we understand it today). About a year later Linux was released under the GPL and the rest is history.

Public domain code, source shared within the academic world and things like that predate both Linux and GNU by a few decades, and even the Free Software Foundation came 5-6 years before Linux, but Linux itself has been pretty much as free as it is today from the start. GPL, GNU, FSF and all the things Stallman created or was a part of (regardless of his conflicting personality) just created a set of rules on how to play this game, pretty much before any game or rules for it existed.

Minix was a commercial thing from the start, Linux wasn't, and things just got refined along the way. You are of course correct that the first release of Linux wasn't strictly speaking FOSS, but the whole 'FOSS' mentality and the rules for it weren't really a thing back then either.

There's of course an academic debate to be had for days on which came first, which rules anyone actually obeyed and which release counts as FOSS or not, but for all intents and purposes Linux was free software from the start and the competition was not.

[–] IsoKiero@sopuli.xyz 20 points 3 months ago

As a rule of thumb, if you pay more money you get a better product. With spinning drives that almost always means that more expensive drives (on average) run longer than cheaper ones. Performance is another metric, but balancing those is where the smoke and mirrors come into play. You can get a pretty darn fast drive for a premium price which will fail in 3-4 years, or for a similar price you can get a bit slower drive which will last you a decade. And that's on average. You might get a 'cheap' brand high-performance drive that runs without any issues for a long, long time, and you might also get a brand-name NAS drive which fails in 2 years. Those averages only start to play a role once you buy drives by the dozen.

Backblaze (among others) publishes very real-world statistics on which drives to choose (again, on average), but a home gamer doesn't usually run enough drives to get any benefit from the statistical point of view. Obviously something from HGST or WD will most likely outperform any no-name brand from AliExpress, and personally I'd only get something rated for 24/7 use, like WD Red, but that's no guarantee they will actually run any longer, as there are always deviations from the gold standard.
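Whichever drive you end up with, it's worth keeping an eye on its health yourself. A minimal check with smartmontools, assuming the drive shows up as /dev/sda:

$ sudo smartctl -H /dev/sda   # quick overall health verdict
$ sudo smartctl -A /dev/sda   # vendor attributes; watch Reallocated_Sector_Ct and pending sectors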

So, long story short, you will most likely get significantly different results depending on which brand/product line you choose, but it's not guaranteed, so you need to work around that with backups, different RAID scenarios (likely RAID 5 or 6 for a home gamer) and an acceptable window for downtime (how fast you can get a replacement, how long it'll take to pull data back from backups and so on). I'll soon migrate my setup from a somewhat professional setting to a more hobbyist one, and with my pretty decent internet connectivity I'll most likely go with a 2-1-1 setup instead of the 'industry standard' 3-2-1 (for a serious setup you should probably learn what those really mean, but in short: number of copies in existence - number of different storage media - number of offsite copies).
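To make that concrete, here's a rough sketch of the 3-2-1 pattern with plain rsync; the paths and the offsite host are made up, and with my 2-1-1 plan I'd simply skip the local media copy:

$ rsync -a /data/ /mnt/usb-disk/data/                          # copy #2, different local media
$ rsync -a /data/ backup@offsite.example.com:/backups/data/    # copy #3, offsite over ssh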

As for what you really should use, that depends heavily on your usage. For a media library a bigger 5400rpm drive might be better than a slightly smaller 7200rpm one, and then there are all kinds of edge cases plus potential options for SSD caching and a ton of other stuff, so, unfortunately, the actual answer has quite a few variables, starting from your wallet.
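If you want a quick feel for how a given drive handles sequential media reads, hdparm gives a rough number (needs root; /dev/sda is just a placeholder):

$ sudo hdparm -t /dev/sda   # buffered sequential read test; run it a few times and average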

[–] IsoKiero@sopuli.xyz 5 points 3 months ago

$ whatis date

date (1) - print or set the system date and time

[–] IsoKiero@sopuli.xyz 1 points 3 months ago

In theory you just send a link to click and that's it. But, as there always is a but, your Jitsi setup most likely doesn't have the massive load balancing, dozens of server locations and all the jazz that works around random network issues and everything else which keeps the internet running.

There's a ton of things well outside your control and they may or may not bite you in the process. The big players have tons of workforce and money to make sure that kind of thing doesn't happen, and it still does now and then. Personally, for a single-use scenario like yours, I wouldn't bother, but I'm not stopping you either; it's a pretty neat thing to do. My (now dead) Jitsi instance once saved a city council meeting when Teams had issues, and that got me pretty good bragging rights, so it can be rewarding too.

[–] IsoKiero@sopuli.xyz 4 points 3 months ago (2 children)

Jitsi works, and they have open relays to test with, but as the thing here is very much analog and I'd assume she'd just need to see your position, how your hands move and so on, the video rather than the audio quality is the important thing here. Sure, good audio helps, but personally I'd just use Zoom/Teams/Hangouts/something readily available and invest in a decent microphone (and audio in general) plus camera.

That way you don't need to provide a helpdesk on how to use your thing and waste time from the actual lessons, nor debug server issues while you're scheduled to train with your teacher.

[–] IsoKiero@sopuli.xyz 3 points 3 months ago (1 children)

> Linux, so even benchmarking software is near impossible unless you're writing software which is able to leverage the specific unique features of Linux which make it more optimized.

True. I have no doubt that you could set up a Linux system to calculate pi to 10 million digits (or something similar) more power-efficiently than a Windows-based system, but that would include compiling your own kernel leaving out everything unnecessary for that particular system, shutting down a ton of daemons which are commonly run on a typical desktop and so on, and you'd waste more power on the testing than you could ever save. And the result might not even be faster, just less power hungry. No matter what, it would be far, far away from any real-world scenario and instead be a competition to build hardware and software to do that very specific thing with as little power as possible.
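Just to illustrate how artificial such a benchmark would be, the whole 'workload' could be something as silly as timing bc while it grinds out pi, scaled down to 5000 digits here so it finishes at all:

$ time echo "scale=5000; 4*a(1)" | bc -l   # pi = 4*arctan(1); -l loads bc's math library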

[–] IsoKiero@sopuli.xyz 6 points 3 months ago (4 children)

Interesting thought indeed, but I highly doubt the difference is anything you could measure, and there's a ton of contributing factors, like what kind of services are running on a given host. So, in order to get a reasonable comparison you'd need to run multiple different workloads with pretty much identical usage patterns on both operating systems to get any kind of comparable results.
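For what it's worth, on the Linux side you could at least get hard numbers for a fixed workload on Intel hardware with RAPL support; something along these lines, assuming perf and stress-ng are installed (the exact event name varies by platform):

$ sudo perf stat -a -e power/energy-pkg/ -- stress-ng --cpu 4 --timeout 60s   # CPU package energy in joules over the run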

Also, hardware support plays a big part. A laptop with dual GPUs and "perfect" driver support on Windows would absolutely wipe the floor with a Linux that couldn't switch GPUs on the fly (I don't know how well that scenario is supported on Linux today). Same with multicore CPUs and their efficient usage, though I think there the operating system plays a much smaller role.

However, changes in hardware, like ARM CPUs, would make a huge difference globally, and at least traditionally that's the area where Linux shines on compatibility, and why Macs run on batteries for longer. But in reality, if we could squeeze more out of our CPU cycles globally to do stuff more efficiently, we'd just throw more stuff at them and still consume more power.

Back when cellphones (and other rechargeable things) became mainstream their chargers were so inefficient that unplugging them actually made sense, but today our USB bricks consume next to nothing when idle, so it doesn't really matter.

[–] IsoKiero@sopuli.xyz 11 points 3 months ago

At least over here, some of the cables that shipped with older modems, especially from the ADSL era, only had two pairs in them, so they're only good up to 100Base-T, which tops out at roughly 12MB/s. So maybe check if that's the case and throw those into the recycling bin.
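It's easy to check what your link actually negotiated, assuming the interface is called eth0:

$ sudo ethtool eth0 | grep -i speed   # a two-pair cable typically caps this at 100Mb/s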

[–] IsoKiero@sopuli.xyz 6 points 3 months ago

At work, where cable runs are usually made by maintenance people, the most common problem is poor termination. They often just crimp a connector on instead of using patch panels/sockets, and untwist too much of the cable before the connector, which causes all kinds of problems. With proper termination the problems usually go away.

But it can be a ton of other stuff too. A good cable tester is pretty much essential to figure out what's going on. I'm using the 1st gen version of Pockethernet and it's been pretty handy, but there's a ton of those available; just get something a bit better than a simple blinking-LED indicator which can only tell you whether the cable is completely broken or not.
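As a software-only first check, newer kernels (5.7+) can run a rough TDR-style diagnostic on some NICs, though driver support is hit and miss:

$ sudo ethtool --cable-test eth0   # reports per-pair status and fault distance if the NIC supports it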

[–] IsoKiero@sopuli.xyz 30 points 3 months ago (5 children)

Yep. I'm running a 1/1Gbps WAN connection over cat5e just fine. Even in a very noisy environment at work, with a longish run (70+ meters), we ran a pretty damn stable 1/1Gbps over good quality cat7.
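And if you want to verify what a run actually carries, iperf3 between two machines on the link is the usual sanity check (the address here is just an example):

$ iperf3 -s                      # on the far end
$ iperf3 -c 192.168.1.10 -t 30   # on the near end, 30 second throughput test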

[–] IsoKiero@sopuli.xyz 11 points 3 months ago

It depends heavily on what you do and what you're comparing yourself against. I've been making a living with IT for nearly 20 years and I still don't consider myself an expert on anything. It's a really wide field, and what I've learned is that the things I consider 'easy' or 'simple' (mostly with Linux servers) are surprisingly difficult for people who would (for example) wipe the floor with me if we competed on planning and setting up server infrastructure or building enterprise networks.

And of course I've also met the other end of the spectrum. People who claim to be 'experts' or 'senior techs' at something are so incompetent at their tasks, or their field of knowledge is so ridiculously narrow, that I wouldn't trust them with anything above first-tier helpdesk, if even that. And the sad part is that those 'experts' often make way more money than me because they happened to score a job at some big IT company and their hours are billed accordingly.

And then there's the whole other can of worms on forums like this, where 'technical people' range from someone who can install an operating system by following instructions to the guys who write assembly code for some obscure old hardware just for the fun of it.

[–] IsoKiero@sopuli.xyz 2 points 3 months ago (1 children)

GRUB supports software RAID just fine. The main issue is that you need to install the bootloader on both drives yourself, but even if you don't, it's pretty simple to recreate the needed files on the second drive when the primary one dies.
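As a sketch of what I mean, assuming a typical BIOS-boot mdadm mirror with members on /dev/sda and /dev/sdb (Debian-style commands; the device names are placeholders):

$ sudo grub-install /dev/sda   # bootloader on the primary drive
$ sudo grub-install /dev/sdb   # and on the second, so it still boots if sda dies
$ sudo update-grub             # regenerate grub.cfg (grub-mkconfig -o ... on other distros)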
