TedZanzibar

joined 2 years ago
[–] TedZanzibar@feddit.uk 23 points 2 weeks ago

I used to work at a games studio that would get these delivered fairly regularly, usually paired with a particular motherboard and presumably a custom BIOS.

I think we were technically supposed to return them but the manufacturers never enforced it, so once the chip was actually released to the public - and assuming the sample was stable enough for general use - the PC would rotate into normal stock and eventually get sold for cheap to staff or end up in the spare parts bin.

While it was cool at first to get pre-production chips before anyone else, it became pretty mundane and I'm not at all surprised to see them out in the wild decades later. Interesting piece of history though!

[–] TedZanzibar@feddit.uk 4 points 3 weeks ago (1 children)

My workplace ran off DL360s (the 1U variant of this) of various generations for 20 or 30 years. I remember getting the first G5 in and being really impressed by the way the components all slotted in so easily and pretty much everything was hot-swappable. And the no-nut rail system was a revelation.

They were great systems for their time but that power consumption is crazy by today's standards!

As for feedback, you have a very confusing sentence about 2.5" and 3.5" drives being the same size. Took me far too long to realise you meant capacity and not physical dimensions!

[–] TedZanzibar@feddit.uk 14 points 2 months ago

Just a PSA for anybody reading the thread, though it doesn't really help with the question at hand... On the very slim chance that your workplace uses Bitwarden Enterprise, it's worth knowing that every licensed user gets a free family plan that can be tied to an existing personal account, provided it's hosted in the same region.

We do use it but very few of our own users are even aware of the perk so I like to spread it around when I get the chance!

[–] TedZanzibar@feddit.uk 1 points 2 months ago

It's an 8-bay unit with six drives that are a mix of WD Red and Seagate IronWolf, all NAS-grade drives, basically. The other two slots have SSDs for hosting the aforementioned containers and VMs.

The largest drives I have are 4TB though, so maybe the larger capacity ones are louder? I also ran the fan profile in whatever the quietest setting is.

[–] TedZanzibar@feddit.uk 14 points 2 months ago (4 children)

I am a tech oriented person, I work in IT, and a Syno ticks the boxes in many respects.

  • Low power draw. Power efficiency is very important to me, especially for something that runs 24/7. I don't know how efficient self-build options are these days, but 10 years ago I couldn't get close to the efficiency of a good NAS.

  • Set and forget. I maintain enough systems at work, so I don't really want to spend all of my free time maintaining my own. A Syno "just works": it can run for months or years without a reboot (and when it does need one, it does it by itself overnight), and I can easily upgrade or swap a dead drive in a couple of minutes. When the entire NAS dies I can stick the drives in a new one and be up and running almost instantly.

  • Size and noise. I don't have a massive house, so I need something that can sit on a shelf and be unobtrusive. In our last house it was literally sat in the living room, spinning drives constantly, and nobody was bothered by it.

The Syno I have is plenty good enough to run a bunch of Docker containers and a few VMs for all of my self hosted stuff, and it just does the job efficiently, quietly, and without complaining or needing constant maintenance.

I don't like this creep towards requiring branded drives and memory, though I'm pretty sure it's not legal in the EU. Regardless, there are ways around it.

[–] TedZanzibar@feddit.uk 14 points 2 months ago

If you own a domain, which you do, you can get wildcard certs from Let's Encrypt using a DNS challenge. Most (all?) popular reverse proxies can do this either natively or via an addon/module; you just need to use a supported DNS provider.
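For anyone who wants a concrete starting point, here's a rough Caddyfile sketch of what I mean. It assumes a Caddy build that includes the Cloudflare DNS plugin, and the domain, token variable, and backend address are all placeholders:

```
# Wildcard cert via DNS-01 (sketch only; needs the caddy-dns/cloudflare
# plugin compiled in and a CF_API_TOKEN env var with DNS edit rights)
*.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy internal-service:8080
}
```

Other proxies do the equivalent with their own DNS provider plugins; the key part is that the challenge is answered with a DNS TXT record, so nothing needs to be exposed on port 80.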

[–] TedZanzibar@feddit.uk 0 points 3 months ago (1 children)

Judging by the rest of the thread I'm going to get downvoted for this, but what the hell:

I'm sure I'll switch to Jellyfin eventually but I tried it out a few weeks ago to see what all the hype was about and it just... wasn't great. It was difficult to set up, with way too many overly complicated settings, and then it refused to play one of the two test files I tried. Like it or not there's a reason that Plex is the dominant player in the game, and a large part of that reason is that it verges on plug-and-play for simplicity of both setup and use.

Yes, it sucks that they're removing remote streaming for free users, but I imagine there's a significant chunk of users who don't know or care how to properly open their server up to the world and are relying on the Plex proxies for their streams (which happens entirely in the background), and those aren't going to be cheap to run. Maybe putting them behind a paywall will provide the resources to make them faster.

I did buy a lifetime pass last time they announced a price hike; it's honestly paid for itself many times over, and I've been encouraging other users I know to do the same before this next one, because yes, it is a significant hike this time around. That said, while I wouldn't pay monthly for it, I do still feel like the lifetime pass is tremendous value for such a polished product. It's a shame they've had to do it at all, but I don't begrudge them for it.

[–] TedZanzibar@feddit.uk 3 points 3 months ago (1 children)

Good shout. I've just recently moved from Pihole to Adguard Home myself, complete with Hagezi lists. I consider myself very tech savvy and I work in the field but AGH suits my needs much better.

One example is wildcard DNS to route all of my hosted services via reverse proxy. In Pihole I had to make weird blocking rules to make this work, but AGH has specific settings for it. It also supports DoH out of the box, whereas Pihole needs non-standard faffery to get it working.
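As a rough illustration of the difference (assuming a reverse proxy sitting at 192.168.1.10 and a made-up internal domain):

```
# AdGuard Home: Filters -> DNS rewrites -> add "*.home.example.com" pointing at 192.168.1.10
# Pi-hole: one common workaround is a dnsmasq drop-in, e.g. /etc/dnsmasq.d/99-wildcard.conf:
address=/home.example.com/192.168.1.10
```

The dnsmasq line catches the domain and every subdomain under it, but it lives outside Pi-hole's own UI, which is exactly the kind of faffery I was glad to drop.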

Very pleased with AGH in general.

[–] TedZanzibar@feddit.uk 2 points 3 months ago

HACS installs community integrations, whereas add-ons are like external programs that hook into HA. You can do the same thing with HA in Docker by installing the add-on containers separately and then hooking them in manually (rough sketch below), but HA OS makes it much simpler.

For example I'm running the Mosquitto broker, Z2M, a Visual Studio Code server, diyhue, and Music Assistant as addons.
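For comparison, the manual Docker route looks roughly like this. Image names are the commonly used ones and the volume/device paths are assumptions, so treat it as a sketch rather than a working config:

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host
    volumes:
      - ./ha-config:/config
  mosquitto:
    image: eclipse-mosquitto:2
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto:/mosquitto/config
  zigbee2mqtt:
    image: koenkk/zigbee2mqtt
    volumes:
      - ./z2m-data:/app/data
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0   # your Zigbee coordinator, wherever it actually lives
```

You then still have to point HA's MQTT integration at the broker and point Z2M at it too, which is the "hooking them in manually" part that HA OS add-ons handle for you.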

Docco page about it is here: https://www.home-assistant.io/addons/

[–] TedZanzibar@feddit.uk 2 points 3 months ago (2 children)

If you want to give Home Assistant a try like others are suggesting, save yourself some time and hassle and install Home Assistant OS in a virtual machine. While you absolutely can run it in Docker, you lose out on some neat quality-of-life improvements like add-ons (which, funnily enough, are Docker containers pre-configured to hook into HA).

[–] TedZanzibar@feddit.uk 7 points 4 months ago (1 children)

Exactly this. Also it annoys me that Namecheap tries to automatically "top up funds" over a month before renewals are due. I think they've always done it but it wound me up enough this year to move to Cloudflare.

[–] TedZanzibar@feddit.uk 2 points 4 months ago

Yeah, everything that's already been said, except that I specifically chose an off-the-shelf Synology NAS with Docker support to run my core setup for this exact reason. It needs a reboot maybe once or twice a year for critical updates but is otherwise rock solid.

I have since added a small N100 box for things that need a little extra grunt (Plex mainly) but I run Ubuntu Server LTS with Docker on that and do maintenance on it about as often as I reboot the NAS.

 

Quick overview of my setup: Synology NAS running a whole bunch of Docker containers and a couple of full-blown VMs, and an N100-based mini PC running Ubuntu Server for those containers that benefit from hardware acceleration.

On the NAS I have a Linux Mint VM that I use for various desktoppy things, but performance via RDP or NoMachine and so on is just bad. I think it's ultimately due to the lack of acceleration, so I'd like to try running it from the mini PC instead but I'm struggling to find hypervisor options.

VirtualBox can apparently be run headless, but the package installed via apt wants to pull in X/Wayland and the entire desktop experience. LXC looks like it might be a viable option with its web frontend, but it appears to conflict with Docker at the moment and the setup won't run.

Another option is to redo the machine with Unraid or TrueNAS Scale, but as they're designed to be full-fledged NAS OSes I don't love that idea.

So what would you do? Does anyone have a similar setup with advice?

Thanks all!

Edit: Thanks for everyone's comments. I still can't get LXC to work, which is a shame because it has a nice web frontend, so I'll give KVM a go as my next option. Failing that I might well backup my Docker volumes, blat the whole thing and see what Proxmox can do.
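For the KVM attempt, this is roughly the shape of a headless install I'm expecting to try (package names are Ubuntu's, and the ISO path and VNC port are placeholders, so very much a sketch):

```
sudo apt install qemu-kvm libvirt-daemon-system virtinst

# Create a headless Mint guest and expose its console over VNC instead of a local display
sudo virt-install \
  --name mint-desktop \
  --memory 4096 --vcpus 2 \
  --disk size=40 \
  --cdrom ~/isos/linuxmint.iso \
  --osinfo detect=on,require=off \
  --graphics vnc,listen=0.0.0.0,port=5901 \
  --noautoconsole
```

After that, virsh handles start/stop/autostart and the VNC (or SPICE) console stands in for a monitor, which should sidestep the X/Wayland baggage that put me off VirtualBox.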

Edit 2: Webtop looks to be exactly what I was looking for. Thanks again for everyone's help and suggestions.

 

Specifically from the standpoint of protecting against common and not-so-common exploits.

I understand the concept of a reverse proxy and how it works at a surface level, but do any of the common recommendations (npm, caddy, traefik) actually do anything worthwhile to protect against exploit probes and/or active attacks?

Npm has a "block common exploits" option but I can't find anything about what that actually does; caddy has a module to add crowdsec support, which looks like it could be promising but I haven't wrapped my head around it yet; and traefik looks like a massive pain to get going in the first place!

Meanwhile Bunkerweb actually looks like it's been built with robust protections out of the box, but it seems just as complicated as traefik to set up, and DNS-based Let's Encrypt requires a pro subscription, so that's a no-go for me anyway.

Would love to hear people's thoughts on the matter and what you're doing to adequately secure your setup.

Edit: Thanks for all of your informative replies, everyone. I read them all and replied to as many as I could! In the end I've managed to get npm working with crowdsec, and once I get cloudflare to include the source IP with the requests I think I'll be happy enough with that solution.
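For anyone else hitting the source IP issue: Cloudflare passes the original client address in the CF-Connecting-IP header, so the usual fix on the nginx/npm side is something along these lines (a sketch to paste into npm's custom/advanced config; the CIDR is just one of Cloudflare's published edge ranges, you'd list the full set from their IP page):

```
# Trust Cloudflare's edge and take the real client IP from its header,
# so crowdsec bans the visitor rather than the Cloudflare proxy
set_real_ip_from 173.245.48.0/20;
real_ip_header CF-Connecting-IP;
```

Without that, everything in the logs appears to come from Cloudflare's ranges and the bouncer has nothing useful to act on.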

 

I work in tech and am constantly finding solutions to problems, often on other people's tech blogs, that I think "I should write that down somewhere" and, well, I want to actually start doing that, but I don't want to pay someone else to host it.

I have a Synology NAS, a sweet domain name, and familiarity with both Docker and Cloudflare tunnels. Would I be opening myself up to a world of hurt if I hosted a publicly available website on my NAS using [insert simple blogging platform], in a Docker container and behind some sort of Cloudflare protection?

In theory that's enough levels of protection and isolation but I don't know enough about it to not be paranoid about everything getting popped and providing access to the wider NAS as a whole.
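For reference, the containerised version I had in mind would look something like this. The blog image is a stand-in for whatever platform I end up picking, and the tunnel token is a placeholder you'd generate in the Cloudflare dashboard, where the tunnel's public hostname would then point at http://blog:80:

```yaml
services:
  blog:
    image: nginx:alpine              # stand-in for [insert simple blogging platform]
    volumes:
      - ./site:/usr/share/nginx/html:ro
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${CF_TUNNEL_TOKEN}
    depends_on:
      - blog
```

The nice part is that nothing listens on the NAS's own ports at all; the only way in is the outbound tunnel, which limits the blast radius if the blog itself gets popped.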

Update: Thanks for the replies, everyone, they've been really helpful and somewhat reassuring. I think I'm going to have a look at GitHub Pages and Cloudflare Pages as my first port of call for my needs.
