spaghetti_carbanana

joined 1 year ago
[–] spaghetti_carbanana@krabb.org 3 points 7 months ago

This is the method I use in your scenario, OP. You can use Folder2iso to get the files you need into the VM. If the OS has official VMware Tools, you can also mount the VMware Tools ISO straight from Workstation into the VM; this gives you the clipboard service so you can copy and paste files between the host and VM, if that's permitted within your isolation needs.

Otherwise, go the ISO route. You just can't bring stuff out of the VM back to the host is all.
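If you'd rather script the ISO step than use Folder2iso, the same thing can be done on a Linux host with genisoimage (mkisofs/xorrisofs are equivalents). A minimal sketch; the paths are made up:

```python
import subprocess
from pathlib import Path

def folder_to_iso(src_folder: str, iso_path: str) -> None:
    """Pack a host folder into an ISO the VM can mount read-only."""
    if not Path(src_folder).is_dir():
        raise FileNotFoundError(src_folder)
    # -J (Joliet) and -r (Rock Ridge) preserve long file names for
    # Windows and Linux guests respectively.
    subprocess.run(["genisoimage", "-o", iso_path, "-J", "-r", src_folder],
                   check=True)

folder_to_iso("/home/me/vm-drop", "/home/me/vm-drop.iso")
```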

[–] spaghetti_carbanana@krabb.org 5 points 7 months ago (2 children)

The two aren't even in the same league. I'm a big open source advocate, don't get me wrong, but VirtualBox is horrible to use and it's not what OP asked for.

[–] spaghetti_carbanana@krabb.org 3 points 8 months ago

Putting his whole Sisyphussy into it

[–] spaghetti_carbanana@krabb.org 2 points 8 months ago

Sorry, I meant TIL about it being considered stable, haha. I've known about Fedora since I used it back when it was meant to replace the free Red Hat Linux.

As for Steam, I don't recall how I installed it, sorry! I just recall significant grief getting it going (again, perhaps a skill issue), but I had no big roadblocks using OpenSUSE.

[–] spaghetti_carbanana@krabb.org 1 points 8 months ago (2 children)

TIL about Fedora; last I knew, it was a rolling, bleeding-edge OS. Clearly lots of movement in the Red Hat camp.

As for gaming, drivers were not the problem for me; getting games to run with ease was. On OpenSUSE, I just install Steam, enable Proton and basically go at that point. Doing this on Red Hat was non-trivial. Could be a skill issue, but I had a better time getting going with OpenSUSE TW.

[–] spaghetti_carbanana@krabb.org 7 points 8 months ago (11 children)

Sort of: OpenSUSE Tumbleweed. I started on OpenSUSE Leap but had issues getting things like the GPU and Steam working. Red Hat was also a non-starter because of the lack of gaming functionality.

TW works great for gaming, and the enterprise features I care about (like domain joining) work out of the box. It's certainly harder to set up than something more geared towards home use (typically one of the various downstreams of Debian or Arch), but that doesn't bother me.

[–] spaghetti_carbanana@krabb.org 2 points 8 months ago

Seconding this. For what it's worth (and I may be tarred and feathered for saying this here), I prefer commercial software for my backups.

I've used many, including:

  • Acronis
  • Arcserve UDP
  • Datto
  • Storagecraft ShadowProtect
  • Unitrends Enterprise Backup (pre-Kaseya, RIP)
  • Veeam B&R
  • Veritas Backup Exec

What was important to me was:

  • Global (not inline) deduplication to disk storage
  • Agent-less backup for VMware/Hyper-V
  • Tape support with direct granular restore
  • Ability to have multiple destinations on a backup job (e.g. disk to disk to tape)
  • Encryption
  • Easy to set up
  • Easy to make changes (GUI)
  • Easy to diagnose
  • Not having to faff about with it and have it be the one thing in my lab that just works

Believe it or not, I landed on Backup Exec. Veeam was the only other one to even get close. I've been using BE for years now and it has never skipped a beat.

This most likely isn't the solution for you, but I'm mentioning it just so you can get a feel for the sort of considerations I made when deciding how my setup would work.

[–] spaghetti_carbanana@krabb.org 3 points 8 months ago (1 children)

As others have mentioned, it's important to highlight the difference between a sync (basically a replica of the source) and a true backup, which includes historical data.

As far as tools go, if the device is running OMV, you might want to start by looking at the options within OMV itself to achieve this. A quick Google hinted at a backup plugin that some people seem to be using.

If you're going to be replicating to a remote NAS over the Internet, try to use a site-to-site VPN for this and do not expose file sharing services to the Internet (for example by port forwarding). It's not safe to do so these days.
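For example, WireGuard is a lightweight way to build that site-to-site tunnel if your router or firewall doesn't already offer one. A minimal sketch of one end; the keys, addresses and endpoint below are placeholders, and the remote side needs a mirrored config plus IP forwarding:

```
# /etc/wireguard/wg0.conf on the local side (bring up with: wg-quick up wg0)
[Interface]
Address = 10.9.0.1/24
PrivateKey = <local-private-key>
ListenPort = 51820

[Peer]
PublicKey = <remote-public-key>
AllowedIPs = 10.9.0.2/32, 192.168.20.0/24   # remote tunnel IP + remote LAN
Endpoint = remote-site.example.net:51820
PersistentKeepalive = 25
```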

The questions you need to ask first are:

  1. What exactly needs to be backed up? Some of it? All of it?
  2. How much space does the data I need backed up consume? Do I have enough to fit this plus some headroom for retention?
  3. How many backups do I want to retain, and for how long? (For example you might keep 2 weeks of daily backups, 3 months of weekly backups and 1 year of monthly backups; there's a small sketch of this after the next list.)
  4. How feasible is it to run a test restore? How often am I going to do so? (I can't emphasise test restores enough - your backups are useless if they aren't restorable)
  5. Do you need/want to encrypt the data at rest?
  6. Does the internet bandwidth between the two locations allow for you to send all the data for a full backup in a reasonable amount of time or are you best to manually seed the data across somehow?

Once you know that, you will be able to determine:

  1. What tool suits your needs
  2. How you will configure the tool
  3. How to set up the interconnects between sites
  4. How to set up the destination NAS
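To make the retention question (3) concrete, here's a rough Python sketch of a grandfather-father-son style pruning rule. The numbers match the example above and are illustrative only; real backup tools (and the OMV plugin, if it fits) implement this for you:

```python
from datetime import date

def dates_to_keep(backup_dates, today, dailies=14, weeklies=13, monthlies=12):
    """Return the backup dates worth keeping; anything else can be pruned."""
    dates = sorted(backup_dates, reverse=True)           # newest first
    keep = {d for d in dates if (today - d).days < dailies}
    weeks_seen, months_seen = set(), set()
    for d in dates:
        iso_year, iso_week, _ = d.isocalendar()
        if (iso_year, iso_week) not in weeks_seen and len(weeks_seen) < weeklies:
            weeks_seen.add((iso_year, iso_week))
            keep.add(d)                                  # newest backup that week
        if (d.year, d.month) not in months_seen and len(months_seen) < monthlies:
            months_seen.add((d.year, d.month))
            keep.add(d)                                  # newest backup that month
    return keep

# e.g. dates_to_keep(your_set_of_date_objects, today=date.today())
```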

I hope I haven't overwhelmed, discouraged or confused you more. Feel free to ask as many questions as you need. Protecting your data isn't fun, but it is important, and it's a good choice you're making to look into it.

[–] spaghetti_carbanana@krabb.org 2 points 8 months ago (1 children)

Back in the day, when the self-hosted $10 license existed, I was using JIRA Service Desk to do this. As far as ticketing systems go, it was very easy to work with and didn't slow me down too much.

I know you don't want a ticket system but I'm just curious what other people will suggest because I'm in the same boat as you.

Currently I haphazardly use Joplin to take very loose notes and sync them to Nextcloud.

If you want a very simple option with minimal setup and overhead, you could use Joplin to create separate notes for each "part" of your lab and just add a new line with a date, time and summary of each change.
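For instance, each per-device note could just grow a line per change, something like this (hostname and details made up):

```
2024-05-12 21:40 - nas01: replaced failed disk in bay 3, rebuild started
2024-05-14 19:05 - nas01: rebuild complete, scrub clean
```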

I do also use SnipeIT to track all my hardware and parts, which allows you to add notes and service history against the hardware asset.

Other than that, I'm keen to see what everyone else says.

[–] spaghetti_carbanana@krabb.org 15 points 9 months ago

Servers are a different story but for Desktop, OpenSUSE.

Because:

  • It's stable even on their rolling OS (Tumbleweed)
  • Gaming works exceptionally well
  • CUDA works with little effort
  • RPM-based (personal preference)
  • zypper is an excellent package manager and my experience has been better than that of yum/dnf
  • Extensive native packages and 3rd party repos
  • No covert advertising in the OS
  • Minimal (no?) telemetry
  • Easy to bind to active directory
  • It feels polished and well built
  • I do not have to mess with it to make it work

Part of my transition from Windows to Linux was deciding that basic tasks like installing software, or even the OS itself, shouldn't be a high-effort endeavour. I should be able to point to a package file or run a package manager and go about my day without running "make" and working my way through dependency hell.

I say this as a Linux user of all different flavours for well over 15 years who has a deep love for what it brings to the table. If we want it to be commonplace with non-IT folks, it needs to work and it needs to be simple to use.

[–] spaghetti_carbanana@krabb.org 1 points 9 months ago* (last edited 9 months ago)

Power

  • 2x feeds into the rack (same circuit but we'll work on that)
  • Eaton 2000VA double conversion UPS on Feed A
  • APC 1500VA line interactive UPS on Feed B (bypassed, replacing it with another double conversion 2kVA eventually)

Network

  • 2x Dell N2048P, stacked (potentially getting replaced with 2x stacked Cisco 9300)
  • FortiGate firewall
  • 1000/50 FTTP primary Internet link
  • 4G backup Internet link using a different Telco (the dream is to replace this with Starlink)

Storage

  • Synology 4-bay NAS with 4x4TB in RAID-10 (for overflow storage from Virtual SAN cluster)
  • HP MSL2024 8Gb Fibre Channel LTO5 tape autoloader for off-site backup

Compute

  • Dell R520 running VMware ESX for Production (2x Xeon E5-2450L, 80GB DDR3, 4x500GB SSD RAID-10 for Virtual SAN, 1x10TB SATA "scratch" disk, 2x10G fibre storage NICs, 2x1G copper NICs for VM traffic)
  • Dell R330 running VMware ESX for backups and DR (1x Xeon E3-1270v5, 32GB DDR4, 2x512GB SSD RAID-1, 2x4TB HDD RAID-1, 8G FC card for tape library)

A second prod host will join the R520 soon to add some redundancy and mirror the Virtual SAN.

All VMs are backed up and kept in an encrypted on-site data store for at least 4 weeks. They're duplicated to tape (encrypted) once a month and taken off site. Those are kept for 1 year minimum. Cloud backup storage will never replace tape in my setup.

Services

As far as "public facing" goes, the list is very short:

Though I do run around 30-40 services all up on this setup (not including actual non-prod lab things that are on other servers or various SBCs around the place).

If I had unlimited free electricity and no functioning ears I'd be using my Cisco UCS chassis and Nexus 5K switch/fabric extenders. But it just isn't meant to be (for now, haha).

[–] spaghetti_carbanana@krabb.org 16 points 9 months ago

Jumping on the OpenSUSE bandwagon. I use it daily and have been running the same install of Tumbleweed for years without issue. I'm using KDE Plasma, which you can choose as part of the installation, so that requirement is covered for you as well.

If you're familiar with Red Hat you'll feel at home on it. Zypper is the package manager instead of yum/dnf and works really well (particularly when coping with dependency issues).

I've worked with heaps of distros over the years (Ubuntu, Debian, Fedora, RHEL, old school Red Hat, CentOS, Rocky, Oracle, even a bit of Alpine and some BSD variants) and OpenSUSE is definitely my favourite for a workstation.
