teawrecks

joined 1 year ago
[–] teawrecks@sopuli.xyz 0 points 8 months ago (3 children)

More specifically, the container is run on bare metal if the host is running on bare metal. You are correct in this thread, not sure why you're being downvoted. I guess people don't know what virtualization technology is or when it is used.

If the nextcloud container is slow, it's for reasons other than virtualization.

[–] teawrecks@sopuli.xyz 5 points 8 months ago

It's all about where the packages and services are installed

No. Your packages and services could be on a network share on the other side of the world, but where they are run is what matters here. Processes are always loaded into, and run from main memory.

"Running on bare metal" refers to whether the CPU the process is being run on is emulated/virtualized (ex. via Intel VT-x) or not.

A VM uses virtualization to run an OS, and the processes are running within that OS, thus neither is running on bare metal. But the purpose of containers is to run them wherever your host OS is running. So if your host is on bare metal, then the container is too. You are not emulating or virtualizing any hardware.
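One concrete way to see this: Linux exposes the CPUID hypervisor bit as a "hypervisor" entry in the flags line of /proc/cpuinfo. A container shares the host kernel, so it sees exactly what the host sees; inside a VM the flag shows up. A minimal sketch (the sample flag strings below are invented for illustration):

```python
# Toy check: the "hypervisor" CPU flag in /proc/cpuinfo is set when the
# CPUID hypervisor bit is present, i.e. the CPU is virtualized. A container
# shares the host kernel, so it reports the same flags as the host.
def running_under_hypervisor(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

# Made-up sample lines; on a real box you'd read open("/proc/cpuinfo").read()
bare_metal = "flags\t\t: fpu vme de pse tsc msr sse sse2"
vm = "flags\t\t: fpu vme de pse tsc msr sse sse2 hypervisor"
print(running_under_hypervisor(bare_metal))  # False
print(running_under_hypervisor(vm))          # True
```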

Here's an article explaining the difference in more detail if needed.

[–] teawrecks@sopuli.xyz 3 points 8 months ago

Yeah, this matches my experience. 10+ years ago, a Windows update might randomly wipe out GRUB and I'd have to live boot and repair it. These days, my dual boot config has worked without issue for several years.

[–] teawrecks@sopuli.xyz 1 points 8 months ago

As the other person said, I don't think the SSD knows about partitions or makes any assumptions based on partitioning; it just knows whether you've written data to a certain location, and it could be smart enough to know how often you're writing to that location. So if you keep writing to a single location, it could decide to remap that logical location to different physical memory so that you don't wear it out.

I say "could" because it really depends on the vendor. This is where one brand could be smart and spend the time writing smart software to extend the life of their drive, while another could cheap out and skip straight to selling you a drive that will die sooner.

It's also worth noting that drives have an unreported pool of "spare sectors" they can use if they detect one has gone bad. I don't know if you can see the total remaining spare sectors, but the pool typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T.
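For the S.M.A.R.T. part, `smartctl -A /dev/sdX` prints an attribute table where the last column of the Reallocated_Sector_Ct row is the raw reallocated-sector count. A rough sketch of pulling that number out (the sample line below is invented, and exact column layout can vary by drive):

```python
def reallocated_sectors(smart_table: str) -> int:
    """Extract the raw Reallocated_Sector_Ct value from `smartctl -A` output.

    smartctl's attribute table has 10 columns:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    """
    for line in smart_table.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    raise ValueError("Reallocated_Sector_Ct not found")

# Made-up sample row for illustration:
sample = "  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12"
print(reallocated_sectors(sample))  # 12
```

A healthy drive reports 0 here; a steadily climbing value is a classic sign the drive is eating through its spares.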

[–] teawrecks@sopuli.xyz 1 points 8 months ago

Yeah, it wouldn't be for no reason, I still have a desktop on Manjaro that I've been meaning to swap to EndeavourOS. But I pretty much just use arch flavors rather than arch because they're quicker to install lol.

[–] teawrecks@sopuli.xyz 1 points 8 months ago

Seriously? Why be like this? It feels like a Lemmy thing for people to have a chip on their shoulder all the time.

You shared your understanding, and then I shared mine (in fewer words). I also summarized it in one sentence at the bottom. Was just trying to have a conversation, sorry.

[–] teawrecks@sopuli.xyz 2 points 8 months ago (6 children)

Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

Concepts like files and FATs and copy-on-write are filesystem-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware would intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects that a block is producing errors (bad parity bits), it will mark it as bad and map in a new block. To the filesystem, there's still perfectly good storage at that address, albeit with a potential one-off read error.

A larger SSD just gives the firmware more spare blocks to pull from.
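The remapping described above can be sketched as a toy flash translation layer. Real FTLs are far more sophisticated; the wear threshold, block counts, and remap policy here are all made up for illustration:

```python
# Toy flash translation layer (FTL): the logical block address the
# filesystem sees stays fixed, while the firmware quietly moves hot
# data onto less-worn physical blocks.
class ToyFTL:
    def __init__(self, physical_blocks: int, logical_blocks: int):
        self.map = {l: l for l in range(logical_blocks)}          # LBA -> PBA
        self.wear = [0] * physical_blocks                          # write counts
        self.free = list(range(logical_blocks, physical_blocks))   # spare pool

    def write(self, lba: int):
        pba = self.map[lba]
        self.wear[pba] += 1
        # Wear-level: once this block is much more worn than the coolest
        # spare, swap the spare in and retire the hot block to the pool.
        coolest = min(self.free, key=lambda p: self.wear[p])
        if self.wear[pba] - self.wear[coolest] > 10:
            self.map[lba] = coolest
            self.free.remove(coolest)
            self.free.append(pba)

ftl = ToyFTL(physical_blocks=8, logical_blocks=4)
for _ in range(50):
    ftl.write(0)        # hammer one logical address over and over
print(ftl.map[0])       # no longer the original physical block 0
print(max(ftl.wear))    # no single block absorbed all 50 writes
```

The filesystem keeps writing to "address 0" the whole time and never notices the shuffling, which is exactly why wear doesn't concentrate the way the on-disk layout might suggest.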

[–] teawrecks@sopuli.xyz 9 points 8 months ago

Assume your hard drives will fail. Any time I get a new NAS drive, I do a burn-in test (a simple badblocks run; it can take a few days depending on the size of the drive, but you can run multiple drives in parallel) to get them past the first ledge of the bathtub curve. Then I put them in a RaidZ2 pool and assume it will fail one day.

Therefore, it's not about buying the best drives so they never fail, because they will fail. It's about buying the most cost effective drive for your purpose (price vs avg lifespan vs size). For this part, definitely refer to the Backblaze report someone else linked.
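The parallel burn-in could be scripted along these lines. `badblocks -wsv` is the destructive write-mode pass (it erases the drive), and the device paths are obviously placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

def burn_in(device: str, cmd=("badblocks", "-wsv")) -> int:
    """Run one burn-in pass on a single drive.
    WARNING: badblocks -w destroys all data on the device."""
    return subprocess.run([*cmd, device]).returncode

def burn_in_all(devices, cmd=("badblocks", "-wsv")):
    # One thread per drive: each badblocks pass is I/O bound on its own
    # device, so the passes genuinely overlap instead of queuing up.
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        return list(pool.map(lambda d: burn_in(d, cmd), devices))

# Example (placeholder paths): burn_in_all(["/dev/sdb", "/dev/sdc"])
# A return code of 0 per drive means badblocks found no bad blocks.
```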

[–] teawrecks@sopuli.xyz 3 points 8 months ago (2 children)

I used Fedora on my laptop for like 4 years. It came with gnome, and was very stable. I didn't know a lot about Linux at the time, but it treated me well.

Eventually, I was learning graphics, and the Mesa drivers in Fedora's repos were lacking specific OpenGL support I wanted to try out. I tried building Mesa from source, but it didn't go very smoothly.

This is when I learned about arch's rolling release model. I ran Antergos for a while, then Manjaro, and now EndeavourOS, and more recently I've heard arch has a fancy installer wizard, so I might just do that next.

I would still recommend Fedora (or Mint) as someone's first go at Linux. I don't think you need to try arch until you know why you're using it.

[–] teawrecks@sopuli.xyz 1 points 9 months ago (1 children)

Lol man, we're just so far off topic from the point I was trying to make, which is that a user friendly mobile experience built on Linux is totally possible: it doesn't have to be a "build-it-yourself" headache, it doesn't require interfacing with a CLI, and we don't even have to wonder whether that's true, because it's been done and is massively successful. That's all. If you'd like to nitpick whether it's "actually Linux" or "kinda Linux", I'm just gonna give you a swirly.

[–] teawrecks@sopuli.xyz 1 points 9 months ago (3 children)

Yeah, I feel like at this point you're not even disagreeing, you're just saying I'm wrong because you don't want to be wrong. You didn't even give me anything to refute this time. That's fine, you're right, cheers.

[–] teawrecks@sopuli.xyz 1 points 9 months ago (5 children)

It's common for Linux distros to make changes specific to their distro: adding and removing modules, carrying custom patches, and offering those changes back to mainline. This is how Linux works and what makes it so great.

It's not as though Google hard forked Linux 15 years ago and has just done their own thing ever since; they regularly merge in Linux LTS releases. Here's a diagram from Google of what that looks like.

Mac OS X is a hard fork of Mach, which fits your French analogy more accurately. Android is more like a Boston accent: it's a dialect, but never very far from its origin.
