stuner

joined 1 year ago
[–] stuner@lemmy.world 1 points 1 week ago

Unfortunately, I can't help you with that. The machine is not running any VMs.

[–] stuner@lemmy.world 2 points 1 week ago (2 children)

It's possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.

[–] stuner@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (4 children)

I'm seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU, but the load from ZFS is minimal. Reading/writing a 50 GB file from /dev/urandom (larger than the cache) gives me:

  • 169 MB/s write
  • 254 MB/s read
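
In case you want to reproduce the test, this is roughly what it looks like with dd. The sizes and paths here are examples, not my exact invocation: the file has to be larger than your RAM (and the ZFS ARC) for the numbers to reflect the disks rather than the cache.

```shell
SIZE_MB=64                      # kept small for illustration; use e.g. 51200 (50 GB) for a real test
TESTFILE=/tmp/dd-bench.bin      # put this on the pool you want to measure

# Write test. Random data also keeps ZFS compression from inflating the numbers.
dd if=/dev/urandom of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync

# Read test. Ideally drop caches first (e.g. remount the pool or reboot).
dd if="$TESTFILE" of=/dev/null bs=1M

# Clean up when done:
# rm "$TESTFILE"
```

dd prints the achieved throughput at the end of each run.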

What's your setup?

[–] stuner@lemmy.world 2 points 1 week ago

With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS use case.
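
For reference, this is roughly what the new expansion looks like; the pool and vdev names here (`tank`, `raidz1-0`, `/dev/sdd`) are placeholders, check yours with `zpool status`:

```shell
# Add one new disk to an existing raidz vdev (requires OpenZFS >= 2.3).
sudo zpool attach tank raidz1-0 /dev/sdd

# The expansion runs in the background; watch its progress with:
zpool status tank
```

Note that existing data keeps its old parity ratio until it is rewritten, so the usable space gained is less than a full disk at first.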

[–] stuner@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

We use Alma Linux at work and it's fine, I suppose. I see two main reasons why you'd choose an EL (Enterprise Linux) distro:

  1. You have (professional) software that officially supports it. RHEL's release model makes it an attractive target for proprietary software and many vendors choose to support it.
  2. You need/want very long support cycles. You can run 10-year-old software even though you probably shouldn't.

Apart from those, it's a competent distro; Red Hat knows what it's doing. If you want the Fedora-world equivalent of an Ubuntu LTS or Debian Stable, it gets the job done. I quite like their approach of keeping the core OS stable while updating drivers, tools, and compilers (e.g., the kernel version number has very little meaning in RHEL).

Is the experience very different from Fedora?

Yes, the age of the core packages is very noticeable. The number of fully supported packages is also quite small, and you need to reach for EPEL very quickly (at which point you're no longer getting enterprise support...). On the plus side, it's much more stable than Fedora in my experience.

Edit: My main recommendation for a stable distro would probably be Debian unless one of the above points applies.

[–] stuner@lemmy.world 1 points 2 weeks ago (1 children)

That system also sounds a lot more capable than mine. How did you end up with 25 VMs?

[–] stuner@lemmy.world 1 points 2 weeks ago

I'm running it in a regular mATX case (Node 804) but I think you can also get AM5 motherboards in rack-mount cases.

[–] stuner@lemmy.world 1 points 2 weeks ago (5 children)

Perhaps my recent NAS/home server build can serve as a bit of an inspiration for you:

  • AMD Ryzen 8500G (8 cores, much more powerful than your two CPUs, with iGPU)
  • Standard B650 mainboard, 32 GB RAM
  • 2 x used 10 TB HDDs in a ZFS pool (mainboard has 4x SATA ports)
  • Debian Bookworm with Docker containers for applications (containers should be more efficient than VMs).
  • Average power consumption of 19W. Usually cooled passively.

I don't think it's more efficient to separate processing and storage so I'd only go for that if you want to play around with a cluster. I would also avoid SD cards as a root FS, as they tend to die early and catastrophically.

[–] stuner@lemmy.world 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

It sounds like Proton VPN (or its repo) is causing issues for you. Given that it's a paid service, you can probably contact their support.

Alternatively, you can look for Proton VPN's repo file in /etc/yum.repos.d, something like /etc/yum.repos.d/file_name.repo. You can then disable it by renaming it so it no longer ends in .repo (dnf only reads *.repo files) and trying again (sudo dnf upgrade in the terminal). Note: this is not really a permanent solution, as it also disables updates for Proton VPN.
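
Something along these lines; the actual file name will differ, so list the directory first:

```shell
# Find the Proton VPN repo file (the name below is a guess):
ls /etc/yum.repos.d/

# Rename it so dnf ignores it (only *.repo files are read):
sudo mv /etc/yum.repos.d/protonvpn.repo /etc/yum.repos.d/protonvpn.repo.disabled

# Then retry the update:
sudo dnf upgrade
```

Renaming it back re-enables the repo once the upstream issue is fixed.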

[–] stuner@lemmy.world 16 points 2 weeks ago

It sounds like the criterion is "is newer microcode available". So it doesn't look like a marketing strategy to sell new CPUs.

[–] stuner@lemmy.world 2 points 3 weeks ago

Nice, congrats on getting it to work! :) Native Debian packages are also nice. It can just get difficult if you want the latest stuff.

[–] stuner@lemmy.world 2 points 3 weeks ago

I used the docker compose template from https://hub.docker.com/_/drupal and mostly changed the image:

Compose file

# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres

version: '3.1'

services:

  drupal:
    # image: drupal:10-apache
    # image: drupal:10.3.7-apache-bookworm
    # image: drupal:10.3.6-apache-bookworm
    image: drupal:11.0.5-apache-bookworm
    # image: drupal:10-php8.3-fpm-alpine
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
    environment:
      PHP_MEMORY_LIMIT: "1024M"

  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    restart: always

The details for the v11 image are here: https://hub.docker.com/layers/library/drupal/11.0.5-apache-bookworm/images/sha256-0e41e0173b4b5d470d30e2486016e1355608ab40651549e3e146a7334f9c8f77?context=explore
