avidamoeba

joined 1 year ago
[–] avidamoeba@lemmy.ca 6 points 1 month ago* (last edited 1 month ago)

Productivity isn't affected much by individuals beyond some marginal differences. An accountant from 1920 can never be as productive as an accountant today, no matter how hard they try. When economists discuss productivity, they mean investment in equipment and training that lets people produce more in the same hours. When business leaders discuss productivity in relation to unions, you're being lied to, or they're incompetent.

[–] avidamoeba@lemmy.ca 71 points 1 month ago

This is likely the case with GM, given that their manufacturing is unionised. Engineers just got a demo of what that can do for them last year: they aren't getting the raise the assembly workers got.

[–] avidamoeba@lemmy.ca 35 points 1 month ago

Not bad, I thought our heads were still further up our asses.

[–] avidamoeba@lemmy.ca 6 points 1 month ago

This is what it was when they introduced it. I worked for an Android OEM at the time, and the product people really wanted to get their hands on curved screens for the same reason. Eventually they got Samsung to sell them some, but they weren't as curved as the ones Samsung used on its own devices, to keep differentiation. They still cost twice what flat screens did, which ate a significant chunk of the profit margin.

[–] avidamoeba@lemmy.ca 56 points 1 month ago (19 children)

There have been plenty of fads over the lifespan of the smartphone market, e.g. curved edge screens. I think foldables are another, and Apple is right to ignore them. A foldable requires too many compromises for too little benefit to be worth it.

[–] avidamoeba@lemmy.ca 4 points 1 month ago (2 children)

A Linux executable can't be named ending on .lnk? 🤔🤔
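It can. Linux doesn't attach any meaning to file extensions; all that matters for running a file is the execute bit. A quick sketch (the filename is made up):

```shell
# Create a script with a Windows-shortcut-style .lnk extension.
cat > hello.lnk <<'EOF'
#!/bin/sh
echo "I am an executable named hello.lnk"
EOF

# The execute permission bit, not the extension, makes it runnable.
chmod +x hello.lnk
./hello.lnk
```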

[–] avidamoeba@lemmy.ca 13 points 1 month ago

I use Immich. It does what you described as well.

[–] avidamoeba@lemmy.ca 4 points 1 month ago (7 children)

I'm not sure why every time I look at this project, it rubs me the wrong way. Anyone found anything wrong with it?

[–] avidamoeba@lemmy.ca 1 points 2 months ago* (last edited 2 months ago)

As far as I can tell it dates back to at least 2010 - https://docs.oracle.com/cd/E19253-01/819-5461/githb/index.html. See the Solaris version. You can try it with small test files in place of disks and see if it works. I haven't done the expansion yet, but that's my plan for growing beyond the 48T of my current pool. I use ZFS on Linux btw. It works perfectly fine.
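A sketch of the test-file approach, assuming OpenZFS 2.3+ for the RAIDz expansion part; the pool name and paths are made up, and it needs root:

```shell
# Create throwaway file-backed "disks" and a small RAIDz1 pool on them.
truncate -s 256M /tmp/disk0 /tmp/disk1 /tmp/disk2 /tmp/disk3
sudo zpool create testpool raidz1 /tmp/disk0 /tmp/disk1 /tmp/disk2

# RAIDz expansion: attach a fourth "disk" to the existing raidz1 vdev.
sudo zpool attach testpool raidz1-0 /tmp/disk3
sudo zpool status testpool   # shows the expansion in progress / completed

# Clean up the experiment.
sudo zpool destroy testpool
rm /tmp/disk0 /tmp/disk1 /tmp/disk2 /tmp/disk3
```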

[–] avidamoeba@lemmy.ca 1 points 2 months ago* (last edited 2 months ago)

I think data checksums allow ZFS to tell which disk has the correct data when there's a mismatch in a mirror, eliminating the need for a 3-way mirror to deal with bit flips and such. A traditional mirror like mdraid would need 3 disks to do this, because without checksums two copies can only tell you the sides disagree, not which one is right.
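You can demo the self-healing with a file-backed mirror, assuming ZFS is installed and you have root (pool name and paths are made up):

```shell
# Build a 2-way mirror on two throwaway files and put some data on it.
truncate -s 256M /tmp/m0 /tmp/m1
sudo zpool create mpool mirror /tmp/m0 /tmp/m1
echo "important data" | sudo tee /mpool/file > /dev/null

# Corrupt the middle of one side only (skipping the vdev labels at the start).
sudo dd if=/dev/urandom of=/tmp/m0 bs=1M seek=100 count=10 conv=notrunc

# A scrub detects the checksum mismatches and repairs them from the good side.
sudo zpool scrub mpool
sudo zpool status mpool

# Clean up.
sudo zpool destroy mpool
rm /tmp/m0 /tmp/m1
```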

[–] avidamoeba@lemmy.ca 2 points 2 months ago* (last edited 2 months ago) (2 children)

Not that I want to push ZFS or anything, mdraid/LVM/XFS is a fine setup, but for informational purposes - ZFS can absolutely expand onto larger disks. I wasn't aware of this until recently. If all the disks of an existing pool get replaced with larger disks, the pool can expand onto the newly available space. E.g. a RAIDz1 with 4x 4T disks will have usable space of 12T. Replace all disks with 8T disks (one after another, so that it can be done on the fly) and your pool will have 24T of space. Replace those with 16T and you get 48T, and so on. In addition you can expand a pool by adding another redundant topology, just like you can with LVM and mdraid. E.g. 4x 4T RAIDz1 + 3x 8T RAIDz2 + 2x 16T mirror for a total of 36T usable (12T + 8T + 16T). Finally, expanding an existing RAIDz with additional disks has recently landed too.
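The one-disk-at-a-time swap looks roughly like this, assuming ZFS is installed; the pool name "tank" and device names are made up:

```shell
# Let the pool grow automatically once all disks in a vdev are larger.
sudo zpool set autoexpand=on tank

# Swap one disk for a larger one; repeat for each disk in the vdev,
# waiting for the resilver to finish before the next swap.
sudo zpool replace tank sda sde
sudo zpool status tank   # confirm resilver completed

# With autoexpand=off, the extra space can be claimed manually per disk:
sudo zpool online -e tank sde
```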

And now for pushing ZFS - I was doing file-based replication on a large dataset for many years. Just walking the hundreds of thousands of dirs and files took over an hour on my setup, followed by a diff transfer. Think rsync or Syncthing. That's how I did it on my old mdraid/LVM/Ext4 setup, and that's how I continued doing it on my newer ZFS setup. Recently I tried ZFS send/receive, which operates within the filesystem. It completely eliminated the walk-and-stat phase, since the filesystem already knows all of the metadata. The replication was reduced to just the diff file transfer time. What used to take over an hour got reduced to seconds or minutes, depending on the size of the changed data. I can now do multiple replications per hour without significant load on the system. Previously it was only feasible overnight, because the system would be robbed of IOPS for over an hour.
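The send/receive flow is short; dataset, pool, and host names below are made up:

```shell
# Initial full replication: snapshot the dataset and send it to the target.
zfs snapshot tank/data@rep1
zfs send tank/data@rep1 | ssh backuphost zfs receive backup/data

# Later runs: take a new snapshot and send only the delta between the two.
# No file-tree walk happens; ZFS already knows which blocks changed.
zfs snapshot tank/data@rep2
zfs send -i tank/data@rep1 tank/data@rep2 | ssh backuphost zfs receive backup/data
```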

[–] avidamoeba@lemmy.ca 1 points 2 months ago* (last edited 2 months ago) (4 children)

If you can, move to a RAID-equivalent setup with ZFS (preferred in my opinion) in order to also detect and fix silent data corruption. RAIDz1 and RAIDz2 are the equivalents of RAID5 and RAID6. That should eliminate one more variable with cheap drives.
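A minimal sketch, assuming six disks and ZFS installed; the pool name and device names are made up:

```shell
# RAIDz2 pool (RAID6-equivalent): survives any two disk failures.
sudo zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Scrub periodically (e.g. from cron or a systemd timer) so silent
# corruption is found and repaired from parity before a disk dies.
sudo zpool scrub tank
sudo zpool status -v tank   # reports any checksum errors found and repaired
```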
