I mean, I took a guess. I don't know what you want from me.
That's a question for a web developer, which I am not. I would expect it to be the max common resolution width. A quick Google shows that modern ultrawides are 5120x1440. So that's probably why.
Which are you suggesting?
- that the image could be losslessly compressed more efficiently?
- that lossy compression should be used more aggressively?
- that there is extra data hidden in the file?
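On that third one, FWIW, the low-effort thing to check first is plain appended data: a lot of "hidden data in an image" cases are just bytes tacked onto the end of the file. Rough sketch for a PNG (the path argument is whatever file you're looking at; this is a heuristic, not a real stego detector):

```python
# Count bytes after a PNG's final IEND chunk. Anything nonzero means
# extra data was appended to the file. Rough heuristic only: appended
# data that itself contains b"IEND" would throw off the rfind().
import sys

def trailing_bytes(path: str) -> int:
    with open(path, "rb") as f:
        data = f.read()
    end = data.rfind(b"IEND")
    if end == -1:
        raise ValueError("no IEND chunk found; not a (complete) PNG")
    # The IEND chunk is the 4-byte type plus a 4-byte CRC.
    return len(data) - (end + 8)

if __name__ == "__main__":
    print(trailing_bytes(sys.argv[1]), "bytes after IEND")
```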
I know it's beside your point, but I want to chime in...
My understanding of the history of fashion is that back in 1950s America... they were trying to nudge culture into accepting their worldview.
On the contrary, I don't think that's how the mentality came about, or was held at that time at all. If you go back to the 1850s or 1750s, suits and dresses (or some older variant of them) were a sign of wealth, intelligence, high-class living, etc. They had to be hand-tailored by experts using rare fabrics and dyes that had to be shipped all around the world. Then the industrial revolution came and clothing could be mass-produced (usually at the cost of quality). Suddenly the middle class had access to suits and dresses, but the perception that they were something for the wealthy was still there. For many businesses targeting the middle class, the suit and dress WERE the uniform, as a means of displaying how regal their brand was.
And it's not like we've gotten past this. If you go on any of the social media sites with ads, take a look at what you see: some knock-off piece of trendy clothing that's made to look like a high end fashion brand, but targeting the lower/middle class.
All that said, I'm all for the "punk rock" mentality. Don't do what your parents did just because society told them to tell you it was important. Stick it to the man, yadda yadda. But I think it's a trap to assume that the 1950s proletariat felt any differently than the same class of people do today.
As for windows v linux, of the people who are aware of both yet continue using windows, I think most would say that they use it specifically because they have a "preference for something that i can just set up and not have to tinker with" and because they also aren't making their choice based on "the trackers in win11 or because [they] care that Microsoft is an evil megacorp".
I assume you could open it up and reset the BIOS by shorting a couple of pins or pulling the CMOS battery. Google the ThinkPad model number along with "BIOS reset".
Note that if Secure Boot is enabled this could lock you out of the OS, but given that you were able to wipe the OS without touching the BIOS anyway, I suspect it's not.
Do with that information what you will, good luck.
Just trying to parse your comment, I assume your first "this" and second "this" are referring to different things, right?
He also keeps explaining to me why Fedora is better than my “nerd OS”
lol he's already a true linux user.
Probably best to have a talk about gatekeeping linux, though. There's no wrong way to run linux.
It's crazy how, when you think in terms of modern windows requirements, a dual-core, 1.6GHz, 4.5W CPU sounds like a rock. But if you showed that to someone in the early 2000s running XP on a single-core 500MHz CPU, they would expect it to be blazing fast. Linux gives you the ability to have that performance, along with modern security and functionality, even if windows won't 👍.
So let me give an example, and you tell me if I understand. If you change 1MB in the middle of a 1GB file, the filesystem is smart enough to only allocate a new 1MB chunk and update its tables to say "the first 0.5GB lives in the same old place, then 1MB is over here at this new location, and the remaining 0.5GB is still at the old location"?
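If I've got that right, here's a toy sketch of the bookkeeping I'm picturing, with a made-up extent list of (logical_offset, length, physical_offset) tuples. To be clear, this is just the idea, not how Btrfs/ZFS actually store their metadata:

```python
# Toy model of copy-on-write extent remapping. An "extent" is a
# (logical_offset, length, physical_offset) tuple; names are made up.

MB = 1024 * 1024
GB = 1024 * MB

def cow_write(extents, offset, length, new_phys):
    """Remap [offset, offset + length) to new_phys, copy-on-write style."""
    out = []
    for lo, ln, phys in extents:
        hi = lo + ln
        if lo < offset:                      # surviving head of this extent
            out.append((lo, min(hi, offset) - lo, phys))
        if hi > offset + length:             # surviving tail of this extent
            start = max(lo, offset + length)
            out.append((start, hi - start, phys + (start - lo)))
    out.append((offset, length, new_phys))   # only the rewritten chunk moves
    return sorted(out)

# A 1GB file stored as one contiguous extent at physical offset 0.
extents = [(0, GB, 0)]

# Overwrite 1MB in the middle: the table now reads "first 512MB where it
# always was, 1MB at the new location, the rest still in the old place".
extents = cow_write(extents, 512 * MB, MB, new_phys=2 * GB)
for e in extents:
    print(e)
```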
If that's how it works, would this over time result in a single file being spread out in different physical blocks all over the place? I assume sequential reads of a file stored contiguously would usually have better performance than random reads of a file stored all over the place, right? Maybe not for modern SSDs...but also fragmentation could become a problem, because now you have a bunch of random 1MB chunks that are free.
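My guess is yes, at least in the toy model. Here's a standalone little simulation (the 1MB chunk granularity and allocation policy are made up for simplicity): scatter a bunch of rewrites and count how many contiguous extents the file ends up as:

```python
# Toy fragmentation check: after N random 1MB CoW rewrites of a "1GB"
# file, how many contiguous extents is it split into? Chunk size and
# the allocate-at-the-end policy are both simplifying assumptions.
import random

CHUNKS = 1024                 # model a 1GB file as 1024 x 1MB chunks
phys = list(range(CHUNKS))    # logical chunk i starts at physical chunk i

random.seed(0)
next_free = CHUNKS            # pretend new blocks come from the end of disk
for _ in range(100):
    phys[random.randrange(CHUNKS)] = next_free   # CoW: this chunk moves
    next_free += 1

# Count maximal runs of logically-adjacent chunks that stayed
# physically adjacent; each run is one extent.
extents = 1 + sum(phys[i] + 1 != phys[i + 1] for i in range(CHUNKS - 1))
print(extents, "extents after 100 scattered 1MB rewrites")
```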
I know ZFS encourages regular "scrubs" that I thought just checked for data integrity, but maybe it also takes the opportunity to defrag and re-serialize? I also don't know if the other filesystems have a similar operation.