this post was submitted on 08 Apr 2024
180 points (93.7% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

[–] emptiestplace@lemmy.ml 24 points 7 months ago (18 children)

Oh no.

Unfortunately I have a lot of experience with this: attaching permanent array members via USB is a bad idea. OP, if it's not too late, and assuming you haven't already and decided to double down on yolo, I'd recommend reading about the downsides of this approach. It is easy to find relevant discussions (and catastrophes) in r/zfs.

Thunderbolt enclosures are a bit more expensive, but they won't periodically fuck up your shit just because.
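For anyone weighing this up: the USB failure mode usually shows up in the kernel log as bridge resets and disconnects. A sketch of what to grep for — the log lines below are illustrative samples, not from any particular system:

```shell
# Sample kernel-log lines showing the classic symptom of a flaky
# USB-attached disk: the bridge resetting mid-I/O (illustrative data).
cat <<'EOF' > /tmp/kernlog.sample
usb 2-1: reset SuperSpeed USB device number 3 using xhci_hcd
sd 6:0:0:0: [sdb] tag#0 uas_eh_device_reset_handler start
usb 2-1: USB disconnect, device number 3
EOF
# On a live system you'd grep `dmesg` or `journalctl -k` the same way:
grep -icE 'usb.*(reset|disconnect)' /tmp/kernlog.sample
```

If that count keeps climbing while the array is under load, the enclosure or cable is suspect before the disks are.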

[–] Shimitar@feddit.it 4 points 7 months ago* (last edited 7 months ago) (6 children)

I've been running Linux software RAID on USB enclosures for 20 years and haven't lost a bit so far.

I didn't go cheap with the USB JBODs, and I have no idea whether ZFS is more sensitive to USB; I don't use ZFS, so I can't say.

But again, I have been running two JBODs over USB:

  • 4 SSDs split into two RAID1 arrays, on USB 3
  • 2 HDDs in one RAID1 array, on USB-C

All three RAID arrays are managed by the Linux software RAID (md) stack.
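For reference, a sketch of what such a setup looks like with md — the device names and array sizes below are assumptions for illustration, not the actual layout described above:

```shell
# Creating a two-disk RAID1 mirror (sketch; adjust devices to your system):
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Array health lives in /proc/mdstat; sample contents for two healthy mirrors:
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976630464 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdc1[0] sdd1[1]
      976630464 blocks super 1.2 [2/2] [UU]
EOF
# [UU] means both mirrors are up; count the healthy arrays:
grep -c '\[UU\]' /tmp/mdstat.sample   # prints 2
```

A degraded mirror shows `[U_]` or `[_U]` instead, which is the thing to alert on.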

I think I started the original array in the 2000s, then upgraded the disks many times; I'm slowly moving to SSDs to lower heat production and power usage.

Keep them COOL; that's important.
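Drive temperature can be checked over most USB bridges via SMART with smartmontools (many bridges need `-d sat` to pass SMART commands through). A sketch — the attribute line below is sample data, not from these drives:

```shell
# On a live system: smartctl -A /dev/sdX   (add `-d sat` for USB bridges).
# Sample SMART attribute output for illustration:
cat <<'EOF' > /tmp/smart.sample
194 Temperature_Celsius     0x0022   112   098   000    Old_age   Always       -       38
EOF
# Pull out the current temperature (last column):
awk '/Temperature_Celsius/ {print $NF}' /tmp/smart.sample
```

Anything that routinely reads above the drive's rated operating range is a sign the enclosure needs a fan.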

[–] lightrush@lemmy.ca 2 points 7 months ago* (last edited 7 months ago)

I've been on the USB train since 2019.

You're exactly right, you gotta get devices with good USB-to-SATA chipsets, and you gotta keep them cool.

I've been using a mix of WD Elements, WD MyBook, and StarTech/Vantec enclosures (ASM1351). I've had to cool all the chipsets on the WD units because they bolt the PCB straight to the drive, so the chipset heats up from the disk.

From all my testing I've discovered that:

  • ASM1351 and ASM235CM are generally problem-free, but the former needs passive cooling if close to a disk. A small heatsink adhered with standard double-sided heat conductive tape is good enough.
  • Host controllers matter too. Intel is generally problem-free. So is VIA. AMD has issues on the CPU-side controllers of some models that are still not fully resolved.
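If you're not sure which bridge chip an enclosure uses, `lsusb` shows the vendor: ASMedia's USB vendor ID is 174c and VIA Labs' is 2109. The sample output below is illustrative, not from this setup:

```shell
# Sample lsusb output (illustrative device names/product IDs):
cat <<'EOF' > /tmp/lsusb.sample
Bus 002 Device 003: ID 174c:55aa ASMedia Technology Inc. USB-to-SATA bridge
Bus 002 Device 002: ID 2109:0817 VIA Labs, Inc. USB3.0 Hub
EOF
# On a real system just run: lsusb   (and `lsusb -t` to see the topology)
grep -c '174c' /tmp/lsusb.sample
```

Checking the topology with `lsusb -t` also tells you which host controller and hub a given bridge hangs off, which matters per the bullet above.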

I like this box in particular because it uses a very straightforward design. It's got 4x ASM235CM with cooling connected to a VIA hub. It's got a built-in power supply, fan, it even comes with good cables. It fixes a lot of the system variables to known good values. You're left with connecting it to a good USB host controller.

[Image: WD PCB on disk]
