avidamoeba

joined 1 year ago
[–] avidamoeba@lemmy.ca 2 points 20 hours ago

Yes, I do, and LFP has been manufactured and integrated at scale for a very long time.

[–] avidamoeba@lemmy.ca 1 point 1 day ago (2 children)

How's the density compared to LFP?

[–] avidamoeba@lemmy.ca 12 points 1 day ago (2 children)

LFP is not new. It's been in cars since Fisker integrated A123's batteries. CATL and other manufacturers have been churning out LFP in volume for over a decade now.

[–] avidamoeba@lemmy.ca 1 points 1 day ago* (last edited 1 day ago)

What makes you think VC won't have another product in the pipeline to take over from most users exiting Bluesky? Just like they had Bluesky ready to scoop up most of the Xodus.

Not a critique really, the Fediverse will still be there and will get some people every time.

[–] avidamoeba@lemmy.ca 5 points 1 day ago* (last edited 1 day ago)

The .world instance is very well funded: it takes in more money than it spends. Not profit, since it's a nonprofit; it saves the surplus and gives some to others. Meanwhile the Lemmy developers still don't have proper full-time funding. This information is public. If I were you, I'd subscribe to fund the developers for now. Lemmy.ca is also well funded.

[–] avidamoeba@lemmy.ca 11 points 2 days ago (5 children)

How many does Threads have?

[–] avidamoeba@lemmy.ca 4 points 3 days ago

What I took from this post is that every living room / home theater setup needs a server rack instead of a HiFi rack. Doesn't matter what you throw in it, it looks badass.

[–] avidamoeba@lemmy.ca 8 points 4 days ago (2 children)

Imagine I'm an idiot, do you have a link or description for where to find those? I looked for them some time ago and found nothing. 🥹

[–] avidamoeba@lemmy.ca 4 points 4 days ago (7 children)

Where do you get installation media for Windows LTSC?

[–] avidamoeba@lemmy.ca 23 points 5 days ago (3 children)

This is beautiful! It's like a textbook example for everyone paying attention to draw crisp conclusions about how the system works.

 

Is that a thing at all? I doubt it but thought I'd check just in case.

76
submitted 2 months ago* (last edited 2 months ago) by avidamoeba@lemmy.ca to c/linux@lemmy.ml
 

Personal use numbers:

  • Ubuntu: 27.7%
  • Debian: 9.8%
  • Other Linux: 8.4%
  • Arch: 8%
  • Red Hat: 2.3%
  • Fedora: 4.8%
 

It's fairly obvious why stopping a service while backing it up makes sense. Imagine backing up Immich while it's running. You start the backup: the db is backed up first, then the image assets are copied, which could take an hour. While the assets are being copied, a new image is uploaded. The live database knows about it but the one you've backed up doesn't. Then your backup process reaches the new image asset and copies it. If you restore this backup, Immich will contain an asset that isn't known to the database. To avoid scenarios like this, you'd stop Immich while the backup is running.

Now consider a system that can do instant snapshots, like ZFS or LVM. Immich is running; you stop it, take a snapshot, then restart it. Then you back up Immich from the snapshot while Immich is running. This reduces the downtime to the time it takes to do the snapshot. The state of the Immich data in the snapshot should be equivalent to backing up a stopped Immich instance.

Now consider the same case without stopping Immich while taking the snapshot. In theory the data you're backing up should still represent the complete state of Immich at a single point in time, eliminating the possibility of divergent data between the database and the assets. It would however represent the state of a live Immich instance (lock files, etc.). Wouldn't restoring from such a backup be equivalent to kill -9 or pulling the cable and restarting the service? If a service can recover from a cable pull, is it reasonable to assume it should recover from a restore of a snapshot taken while live? If so, is there much point in stopping services during snapshots?
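The stop, snapshot, restart, backup flow above can be sketched as a script. This assumes Immich runs under docker compose and its data lives on a hypothetical ZFS dataset named tank/immich; the dataset, paths, and backup destination are all assumptions to adjust for your setup.

```shell
#!/bin/sh
# Sketch of the stop -> snapshot -> restart -> backup flow described above.
# DRY_RUN=1 only prints each command instead of running it, so the flow can
# be inspected without zfs or docker present.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

SNAPNAME="backup-$(date +%Y%m%d)"
SNAP="tank/immich@$SNAPNAME"            # hypothetical dataset name

run docker compose stop                 # downtime starts
run zfs snapshot "$SNAP"                # instant, atomic point-in-time copy
run docker compose start                # downtime ends here

# Back up from the read-only snapshot while Immich is running again.
run rsync -a "/tank/immich/.zfs/snapshot/$SNAPNAME/" /mnt/backup/immich/

run zfs destroy "$SNAP"                 # drop the snapshot once copied
```

With DRY_RUN=0 the same script would perform the real flow; note the downtime window is only the stop/snapshot/start block, not the rsync.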

 

She lost to Labour.

90
submitted 6 months ago* (last edited 6 months ago) by avidamoeba@lemmy.ca to c/selfhosted@lemmy.world
 

Have some new old stock SATA drives vomiting at you?

[  234.811385] ata1.00: status: { DRDY }
[  234.811392] ata1: hard resetting link
[  240.139340] ata1: link is slow to respond, please be patient (ready=0)
[  244.855349] ata1: COMRESET failed (errno=-16)
[  244.855375] ata1: hard resetting link
[  250.199443] ata1: link is slow to respond, please be patient (ready=0)
[  254.875508] ata1: COMRESET failed (errno=-16)
[  254.875533] ata1: hard resetting link
[  260.211562] ata1: link is slow to respond, please be patient (ready=0)
[  289.919779] ata1: COMRESET failed (errno=-16)
[  289.919810] ata1: limiting SATA link speed to 3.0 Gbps
[  289.919816] ata1: hard resetting link
[  294.963876] ata1: COMRESET failed (errno=-16)
[  294.963904] ata1: reset failed, giving up
[  294.963909] ata1.00: disable device

Grab your contact cleaner and clean their SATA connectors!

I just bought a new 1TB Crucial MX500, made in god knows what year, and installed it in a virgin SATA port of an M710q made in 2016, and I got the vomit you see above every time I loaded the drive. Reseated all the connectors. More vomit. Scratched my head a couple of times, reaching for the trash bin, when I had a brainwave: there might be oxidation from the connectors sitting exposed to the elements. Took out the DeoxIt Gold, dabbed all the connectors on the SATA path, cycled them a few times, powered on, and loaded the drive. No more vomit.

 

cross-posted from: https://lemmy.ca/post/19442327

It's a known bug from upstream mutter. A fix is being worked on and there's a PPA with the updated packages by the Ubuntu developer working on the fix. It resolved the problem on my end.

 

100
submitted 7 months ago* (last edited 7 months ago) by avidamoeba@lemmy.ca to c/framework@lemmy.ml
 

...in using my Framework 2.5GbE cards to speed up a large data transfer to 2.5Gbps. Got 0.28Gbps instead. 🤭

These aren't the USB-A to USB-C adapters I was looking for. 😂

25
submitted 8 months ago* (last edited 8 months ago) by avidamoeba@lemmy.ca to c/selfhosted@lemmy.world
 

The backup doc from Immich states that one should use Postgres' dump functionality to back up the database, as well as copy the upload location.

Is there any reason not to do this instead:

  • Create a dir immich with subdirs db and library
  • Mount the db dir as a volume for the database
  • Mount the library dir as a volume for the upload location
  • Back up the whole immich dir without dumping the Postgres db. (Stop Immich before doing this.)
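For illustration, that layout maps onto compose bind mounts roughly like this. The image tags and container paths follow Immich's example compose file from memory and may differ for your version, so treat them as assumptions:

```yaml
services:
  database:
    image: tensorchord/pgvecto-rs:pg14    # Immich's Postgres image (tag assumed)
    volumes:
      - ./immich/db:/var/lib/postgresql/data    # db subdir holds the raw cluster files
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      - ./immich/library:/usr/src/app/upload    # library subdir is the upload location
```

With everything under ./immich, stopping the stack and copying that one directory captures both the database files and the assets together.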
51
submitted 8 months ago* (last edited 8 months ago) by avidamoeba@lemmy.ca to c/selfhosted@lemmy.world
 

I'm trying to decide how to import my Google Photos Takeout backup. I see two general ways:

  • Import it by uploading it to Immich (immich-go, etc.)
  • Add it as an External library

Has anyone done it one way or the other? Any recommendations, pros/cons, or gotchas?
