dan

joined 1 year ago
[–] dan@upvote.au 1 points 1 day ago

Shouldn't be too difficult to swap it out for ZeroSSL. You'd need to remember to update CAA records though.
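
For reference, a quick way to check, and roughly what the records would look like (domain is a placeholder; ZeroSSL issues through Sectigo, so sectigo.com should be the CAA domain, but verify against their current docs):

```
# Check which CAs are currently authorized for your domain:
dig +short CAA example.com

# Zone-file records permitting both CAs:
# example.com. 3600 IN CAA 0 issue "letsencrypt.org"
# example.com. 3600 IN CAA 0 issue "sectigo.com"
```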

[–] dan@upvote.au 1 points 1 day ago

I think Cloudflare enshittifying is a bigger risk than Let's Encrypt doing so.

[–] dan@upvote.au 2 points 1 day ago* (last edited 1 day ago)

ZeroSSL, plus a few paid companies support ACME (I know Sectigo and GoDaddy do). Sure, the latter are paid services, but in theory you can switch to them and use the exact same setup you're currently using with Let's Encrypt, just with some config changes.
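
If you're on acme.sh, the switch is roughly this (ZeroSSL requires a one-time account registration with an email; address is a placeholder):

```
# Register with ZeroSSL (one-time) and make it the default CA:
acme.sh --register-account -m you@example.com --server zerossl
acme.sh --set-default-ca --server zerossl
# Existing issuance/renewal commands then work unchanged.
```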

[–] dan@upvote.au 7 points 1 day ago* (last edited 1 day ago)

They also made it an open protocol (the ACME protocol), so now there are a bunch of certificate providers that implement the same protocol and can thus work with the same client apps (Certbot, acme.sh, etc.). I know Sectigo and GoDaddy support ACME, at least. So even if you don't use Let's Encrypt, you can still benefit from their work.
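
That's the nice part of it being an open protocol: with Certbot you just point --server at a different CA's directory URL. A sketch using ZeroSSL's published endpoint (domain is a placeholder; the EAB credentials come from your account with the CA):

```
certbot certonly --standalone -d example.com \
  --server https://acme.zerossl.com/v2/DV90 \
  --eab-kid YOUR_EAB_KID \
  --eab-hmac-key YOUR_EAB_HMAC_KEY
```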

[–] dan@upvote.au 1 points 1 day ago

I remember the days when each site that wanted to use SSL had to have a dedicated IP.
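
SNI is what fixed that: the client sends the hostname in the TLS handshake, so one IP can serve a different certificate per site. You can see it in action with openssl (IP and hostnames are placeholders):

```
# Same IP, different certificate depending on the requested name:
openssl s_client -connect 203.0.113.10:443 -servername site-a.example </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
openssl s_client -connect 203.0.113.10:443 -servername site-b.example </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```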

[–] dan@upvote.au 3 points 1 day ago (1 children)

Why not script it so you don't have to do it manually?
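
Assuming the manual task here is certificate renewal (I'm guessing from context), even a dumb cron entry covers it, since certbot only renews certs that are close to expiry:

```
# /etc/crontab entry: attempt renewal daily at 3am, silently
0 3 * * * root certbot renew --quiet
```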

[–] dan@upvote.au 3 points 1 day ago

TLS certificates have huge margins, so web hosts love selling them.

[–] dan@upvote.au 7 points 1 day ago* (last edited 1 day ago)

I'd also argue that the fact that it's 100% automated and their software is open source makes it objectively more secure. On the issuing side, there's no room for human error, social engineering, etc.

[–] dan@upvote.au 1 points 3 days ago (1 children)

At least here in California, having solar panels on a non-south-facing roof usually only reduces production by 10-20%, as long as it's not entirely north facing. Solar systems are often slightly undersized - it's more cost-effective to size them to handle the average load rather than the summer peaks you only see for a few weeks per year - so the actual difference for a given system may be less.

With my system, I see the best output from south-east facing panels since they get the morning sun. West facing panels are also fairly popular here due to time-of-use electricity plans. Some electricity plans have peak pricing from 4 to 9 pm, so people want to try and collect as much sunlight as possible during that period before sunset.

[–] dan@upvote.au 3 points 3 days ago* (last edited 3 days ago)

They're installing ridiculously small systems so that they're barely compliant, but the systems aren't very useful to the people who buy the house.

[–] dan@upvote.au 1 points 4 days ago

I don't think I know enough to answer that question, sorry!

[–] dan@upvote.au 8 points 4 days ago (1 children)

They use a mixture of Windows and Linux. They do use Linux quite a bit, but they also have a lot of Hyper-V servers.

15 points | submitted 10 months ago* (last edited 10 months ago) by dan@upvote.au to c/selfhosted@lemmy.world

I love Sentry, but it's very heavy. It runs close to 50 Docker containers, some of which use more than 1GB RAM each. I'm running it on a VPS with 10GB RAM and it barely fits on there. They used to say 8GB RAM is required but bumped it to 16GB RAM after I started using it.
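
If anyone wants to check what their own install is eating, this is how I'd eyeball per-container memory (standard docker syntax):

```
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
```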

It's built for large-scale deployments and has a nice scalable, enterprise-ready design using things like Apache Kafka, but I just don't need that, since all I'm using it for is tracking bugs in some relatively small C# and JavaScript projects, which may amount to a few hundred events per week, if that. I don't use any of the fancier features in Sentry, like the live session recording/replay or the performance analytics.

I could move it to one of my 16GB or 24GB RAM systems, but instead I'm looking to evaluate some lighter-weight systems to replace it. What I need is:

  • Support for C# and JavaScript, including mapping stack traces to original source code using debug symbols for C# and source maps for JavaScript (see the sketch after this list).
    • Ideally supports React component stack traces in JS.
  • Automatically group the same bugs together if multiple people hit the same issue
    • See how many users are affected by a bug
  • Ignore particular errors
  • Mark a bug as "fixed in next release" and reopen it if it's logged again in a new release
  • Associate bugs with GitHub issues
  • Ideally supports login via OpenID Connect
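
For context on the first requirement, this is roughly the Sentry workflow a replacement would need an equivalent of (sentry-cli commands; release name and paths are placeholders, and the exact syntax varies by sentry-cli version):

```
# Upload JS source maps for a release:
sentry-cli releases files "$VERSION" upload-sourcemaps ./dist
# Upload .NET debug symbols (portable PDBs):
sentry-cli debug-files upload ./bin/Release
```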

Any suggestions?

Thanks!

 

Sorry for the long post. tl;dr: I've already got a small home server and need more storage. Do I replace an existing server with one that has more hard drive bays, or do I get a separate NAS device?


I've got some storage VPSes "in the cloud":

  • 10TB disk / 2GB RAM with HostHatch in LA
  • 100GB NVMe / 16GB RAM with HostHatch in LA
  • 3.5TB disk / 2GB RAM with Servarica in Canada

The 10TB VPS has various files on it - offsite storage of alert clips from my cameras, photos, music (which I use with Plex on the NVMe VPS via NFS), other miscellaneous files (using Seafile), backups from all my other VPSes, etc. The 3.5TB one is for a backup of the most important files from that.
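
For anyone curious, the Plex-over-NFS bit is just a read-only mount on the NVMe VPS, something like this (hostname and paths are placeholders):

```
# Entry for /etc/fstab on the NVMe VPS:
storage-vps:/srv/music  /mnt/music  nfs  ro,soft,vers=4.2,_netdev  0  0
```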

The issue I have with the VPSes is that since they're shared servers, there are limits on how much CPU I can use. For example, I want to run PhotoStructure for all my photos, but it needs to analyze all the files initially. I limit Plex to a maximum of 50% of one CPU, but limiting things like PhotoStructure the same way would make them way slower.
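
The Plex cap is nothing fancy; depending on how you run it, it's a one-liner either way (container/service names are whatever yours are called):

```
# If it runs in Docker:
docker update --cpus="0.5" plex
# If it runs as a systemd service:
systemctl set-property plexmediaserver.service CPUQuota=50%
```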

I've had these for a few years. I got them when I had an apartment with no space for a NAS, expensive power, and unreliable Comcast internet. Times change... Now I've got a house with space for home servers, solar panels so running a server is "free", and 10Gbps symmetric internet thanks to a local ISP, Sonic.

Currently, at home I've got one server: an HP ProDesk SFF PC with a Core i5-9500, 32GB RAM, 1TB NVMe, and a single 14TB WD Purple Pro drive. It records my security cameras (using Blue Iris) and runs home automation stuff (Home Assistant, etc). It pulls around 41 watts under its regular load: 3 VMs, ~12% CPU usage, and constant ~34Mbps traffic from the security cameras, all being written to disk.

So, I want to move a lot of these files from the 10TB VPS into my house. 10TB is a good amount of space for me, maybe in RAID5 or whatever is recommended instead these days. I'd keep the 10TB VPS for offsite backups and camera alerts, and cancel the other two.
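
If I go the classic software-RAID route, it'd be something like this (drive names are placeholders; RAID5 gives n-1 drives of usable space, and ZFS raidz1 is the rough equivalent if I end up on TrueNAS):

```
# 3-drive RAID5 array (~2 drives of usable capacity):
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
```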

Trying to work out the best approach:

  1. Buy a NAS. Something like a QNAP TS-464 or Synology DS923+. Ideally 10GbE since my network and internet connection are both 10Gbps.
  2. Replace my current server with a bigger one. I'm happy with my current one; all I really need is something with more hard drive bays. The SFF PC only has a single drive bay, its motherboard only has a single 6Gbps SATA port, and the only PCIe slots are taken by a 10Gbps network adapter and a Google Coral TPU.
  3. Build a NAS PC and use it alongside my current server. TrueNAS seems interesting now that they have a Linux version (TrueNAS Scale). Unraid looks nice too.

Any thoughts? I'm leaning towards option 2 since it'll use less space and power compared to having two separate systems, but maybe I should keep security camera stuff separate? Not sure.

1 points | submitted 1 year ago* (last edited 1 year ago) by dan@upvote.au to c/selfhosted@lemmy.world

I couldn't find a "Home Networking" community, so this seemed like the best place to post :)

My house has this small closet in the hallway, and I thought it'd make a perfect place for networking equipment. I got an electrician to install power outlets in it, ran some CAT6 myself (through the wall, down into the crawlspace, to several rooms), and now I finally have a proper networking setup that isn't just cables running across the floor.

The rack is a basic StarTech two-post rack (https://www.amazon.com/gp/product/B001U14MO8/) and the shelving unit is an AmazonBasics one that ended up perfectly fitting the space (https://www.amazon.com/gp/product/B09W2X5Y8F/).

In the rack, from top to bottom (prices in US dollars):

  • TP-Link ER8411 10Gbps router. My main complaint is that the eight 'RJ45' ports are all Gigabit, and there are only two 10Gbps ports (one SFP+ for WAN, and one SFP+ for LAN). It can definitely reach 10Gbps NAT throughput though. $350
  • Wiitek SFP+ to RJ45 module for connecting Sonic's ONT (which only has an RJ45 port), and a 10Gtek SFP+ DAC cable to connect the router to the switch.
  • MikroTik CRS312-4C+8XG-RM managed switch (runs RouterOS). 12 x 10Gbps ports. I bought it online from Europe, so it ended up being ~$520 all-in, including shipping.
  • Cable Matters 24-port keystone patch panel.
  • TP-Link TL-SG1218MPE 16-port Gigabit PoE switch. 250 W PoE power budget. Used for security cameras - three cameras installed so far.
  • Tripp Lite 14 outlet PDU.

Other stuff:

  • AdTran 622v ONT provided by my internet provider (Sonic), mounted to the wall.
  • HP ProDesk 600 G5 SFF PC with Core i5-9500. Using it for a home server running Home Assistant, Blue Iris, Node-RED, Zigbee2MQTT, and a few other things. Bought it off eBay for $200.
    • Sonoff Zigbee dongle plugged into the front USB port
  • (next to the PC) Raspberry Pi 4B with a SATA SSD plugged into it. Not doing anything at the moment, as I migrated everything to the PC.
  • (not pictured) The wireless access point is just a basic Netgear one I bought from Costco a few years ago. It's sitting on the top shelf. I'm going to replace it with a TP-Link Omada ceiling-mounted one once their Wi-Fi 7 access points are released.

Speed test: https://www.speedtest.net/my-result/d/3740ce8b-bba5-486f-9aad-beb187bd1cdc

Edit: Sorry, I don't know why the image is rotated :/ The file looks fine on my computer.
