this post was submitted on 04 Feb 2026

I was hoping you guys could help me with a somewhat out-of-the-ordinary situation. My elderly father, who has very little technical knowledge, owns a local news outlet and is in the process of modernizing the whole website and its infrastructure. He is in talks with a local developer (just one guy), who has been maintaining everything for the past 5 years, to move everything to a new dedicated server and make some much-needed software and design changes. Everything currently runs on an older Hetzner dedicated server, which we decided to upgrade very soon to the Hetzner AX102 (Ryzen 9 7950X3D, 128 GB DDR5 ECC, 2 × 1.92 TB NVMe SSD Datacenter Edition, and a 1 Gbit/s port with unlimited bandwidth). He has asked me to help him reach a favorable outcome, because he knows that his lack of technical knowledge means he could be taken advantage of, or at the very least that the developer will only do the bare minimum since no one will check his work, even though this process is not exactly cheap, at least by our country’s standards.

I only have a basic understanding of what hosting such a site optimally on a dedicated server entails, as this is not my area of expertise, but I am willing to learn in order to help my father, at least to the point where we don’t get scammed and we can take full advantage of the new hardware to make the site load instantly.

More context:

  • The site is based on WordPress, and we plan to keep it that way when we make the transfer. The developer told me he would strongly prefer running AlmaLinux 10 with NGINX for our particular context and will likely use Bricks as a page builder. I would prefer not to change these, since it would likely create unneeded friction with him.
  • There are about 150k–250k average monthly users according to Google Analytics, depending on the time of year and different events, most of them from our area.
  • About 80% of readers are using smartphones.
  • There are a few writers who publish multiple articles daily (20–25 in a 24-hour window). The articles always contain at least text and some images. There’s a strong dependency on Facebook, as most of the readers access those articles from our Facebook page. This might be relevant for caching strategies and other settings.

For now, as a caching strategy for optimal speed, Gemini analyzed my requirements and recommended a tiered “in-memory” caching strategy to handle high traffic without a CDN. Could you validate whether these specific recommendations are optimal, since I am highly skeptical of AIs?

  1. Page cache: map the Nginx FastCGI cache directly to RAM (tmpfs). Use ngx_cache_purge with the Nginx Helper plugin to instantly invalidate only the homepage and category pages on publish, and strip tracking parameters (e.g., fbclid) to prevent cache fragmentation (a rough sketch of this Nginx piece follows the list).
  2. Object cache: Valkey (server-side) paired with the Redis Object Cache plugin, connected via a Unix socket instead of TCP for the lowest possible latency.
  3. PHP layer: PHP 8.5 with OPcache and JIT (tracing mode) enabled, optimized to keep the runtime entirely in memory.
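For reference, here is roughly what the Nginx side of item 1 tends to look like. Treat it as a hedged sketch rather than a drop-in config: the cache path, zone name, PHP-FPM socket path, and the fbclid handling are assumptions that would need to be adapted to the actual server.

```nginx
# --- http {} context ----------------------------------------------------
# /var/cache/nginx-fastcgi would be a tmpfs mount (e.g. an fstab entry like
# "tmpfs /var/cache/nginx-fastcgi tmpfs size=512m 0 0") so cached pages live in RAM.
fastcgi_cache_path /var/cache/nginx-fastcgi levels=1:2 keys_zone=WORDPRESS:100m
                   max_size=400m inactive=60m use_temp_path=off;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# --- server {} context --------------------------------------------------
set $skip_cache 0;

# Never cache POSTs, logged-in users/commenters, or admin/login pages.
if ($request_method = POST) { set $skip_cache 1; }
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") { set $skip_cache 1; }
if ($request_uri ~* "/wp-admin/|/wp-login\.php") { set $skip_cache 1; }

# One common way to stop fbclid from fragmenting the cache: redirect to the
# clean URL so every Facebook click shares one cache entry. Note this drops
# the whole query string whenever fbclid is present, which is usually fine
# for article permalinks.
if ($args ~* "fbclid=") {
    rewrite ^ $uri? permanent;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm/www.sock;             # assumed PHP-FPM socket path
    fastcgi_cache WORDPRESS;
    fastcgi_cache_valid 200 301 302 10m;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;
    add_header X-FastCGI-Cache $upstream_cache_status;   # check for HIT/MISS while testing
}

# Purge endpoint the Nginx Helper plugin calls (requires the ngx_cache_purge module).
location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny all;
    fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}
```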

**I’d appreciate any thoughts or advice you might have on the overall situation, not just the caching side of things. Caching is just what I’ve managed to study so far, since the AI insisted it was particularly important for this setup.** 😊

top 6 comments
[–] alpha1beta@piefed.social 1 points 1 hour ago

Two things to consider. First, check out Pressable or another dedicated WordPress host. If you're already above shared-hosting prices, they're competitive with a dedicated server/VPS plus add-on backup solutions. They have a ton of caching built in, plus hourly backups. But it's not for everyone.

Second, on the CDN considerations: where's your audience? Local as in one city and its surroundings, or local as in one country? The wider the reach, the more a CDN helps. It doesn't sound like it would help a lot here, but it can also offload storage and the load of serving those requests.

To add to what others said: caching. You could do it on the server and add Cloudflare on top of it, but you'll probably want a few custom Cloudflare rules, like geo-restriction and no caching on /wp-admin/. Cloudflare also has anti-bot tech.

Beyond that, I've been waging a war on bots for a number of reasons. One of the easiest ways to block them is to block ASNs if you use Cloudflare. If AI or bot traffic is a problem, read on. If not, don't worry about any of this.

If you want to block IP ranges yourself in Apache/nginx, your firewall, or your VPS provider's firewall, start by looking up Amazon's and Microsoft's IP ranges (as listed here: https://ipinfo.io/AS16509) and begin with the largest ranges.

With one line you can block 4.1M Amazon IPs: 3.0.0.0/10. Start with ranges like that and work down to /16s, and in a few hours you'll cut off access for tens of millions of bots.

You can also block by user agents.
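If it helps, here is a rough nginx-flavored sketch of the same idea, since that's the stack being discussed; the ranges and user agents are only placeholders to show the mechanism, not a curated blocklist.

```nginx
# --- http {} context ---------------------------------------------------
# Flag known scraper/AI user agents (examples only; extend from your logs).
map $http_user_agent $blocked_agent {
    default                          0;
    "~*(GPTBot|CCBot|Bytespider)"    1;
}

# geo matches the client IP against CIDR ranges very cheaply.
geo $blocked_ip {
    default       0;
    3.0.0.0/10    1;   # example Amazon range from the comment above
}

# --- server {} context -------------------------------------------------
if ($blocked_ip)    { return 403; }
if ($blocked_agent) { return 403; }
```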

I'm happy to share some Apache Rules/files if it would be helpful.

My theory on blocking is simple: block as much as possible, as far from the application layer as possible. A firewall inside WordPress, like Wordfence, costs the most in computational resources you pay for, and it protects you the least. Blocking at Cloudflare and at the VPS provider's firewall is the most efficient, followed by the firewall on the VPS itself, then an Apache/Nginx firewall, and finally the application layer, WordPress. If your problems are mostly bot traffic, you want to block as much bad traffic as possible without false positives.

[–] clifmo@programming.dev 2 points 1 hour ago* (last edited 1 hour ago)

OpenLiteSpeed https://openlitespeed.org/

Host-specific guides (but no hetzner):

https://docs.litespeedtech.com/cloud/images/wordpress/

Very easy, robust, fast.

You can definitely roll your own server and solution, but WordPress needs a lot of help. As other commenters said, you need to bypass both the database and PHP as much as possible, via caching.

While a simple Redis or Valkey store solves that, you're relying on some integration through the PHP layer to make it happen, usually some plugin.

Serving files or otherwise caching directly through the web server is gonna make it faaaaast.

Then there's the question of database writes. Who is writing to your database, where, and how often?

Edit: I see you have editors updating content 1-2x per hour. They should rewrite caches hot on each update so they're the only ones paying the db latency cost.

[–] dan@upvote.au 5 points 3 hours ago* (last edited 3 hours ago) (2 children)

Use a page caching plugin that writes HTML files to disk. I don't do a lot with WordPress any more, but my preferred one was WP Super Cache. Then, you need to configure Nginx to serve pages directly from disk if they exist. By doing this, page loads don't need to hit PHP and you effectively get the same performance as if it were a static site.
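For what it's worth, the nginx side of that usually looks something like the sketch below, assuming WP Super Cache's default cache directory; the exact filename and exclusions depend on the plugin settings (it may write index-https.html instead, for example), so treat it as a starting point rather than a drop-in config.

```nginx
# Inside the server {} block for the site.
set $cache_uri $request_uri;

# Anything that can't safely come from the static cache falls through to PHP.
if ($request_method = POST)  { set $cache_uri 'null cache'; }
if ($query_string != "")     { set $cache_uri 'null cache'; }
if ($request_uri ~* "/wp-admin/|/wp-login\.php|/feed/|sitemap(_index)?\.xml") {
    set $cache_uri 'null cache';
}
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
    set $cache_uri 'null cache';
}

location / {
    # Serve the pre-generated HTML if it exists; otherwise fall back to WordPress.
    try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
              $uri $uri/ /index.php?$args;
}
```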

See how you go with just that, with no other changes. You shouldn't need FastCGI caching. If you can get most page loads hitting static HTML files, you likely won't need any other optimizations.

One issue you'll hit is if there's any highly dynamic content on the page that's generated on the server; you'll need to use JavaScript to load those dynamic bits instead. Normal article editing is fine, as WordPress will automatically clear the related caches on publish.

For the server, make sure it's located near the region where the majority of your users are located. For 200k monthly hits, I doubt you'd need a machine as powerful as the Hetzner one you mentioned. What are you using currently?

[–] rimu@piefed.social 3 points 2 hours ago* (last edited 2 hours ago)

This is good advice, listen to dan. WP Super Cache is amazing, although getting it working just right can take some tweaking.

The Redis Object Cache plugin is worth a try. It'll only take a minute to set up.

Is it 200k users or 200k page loads? Those are really different as each user will load multiple pages in a month. If it's 200k page loads then that server is way way too powerful (and expensive). Don't let a crappy developer hide their lack of optimization skills by throwing your money at the problem.

[–] Andres4NY@social.ridetrans.it 1 points 3 hours ago (1 children)

@dan @goldensw Yes, this. I did almost exactly what you're describing (taking over maintenance of an older WordPress site used by a local news org), and it was in rough shape. The config is a bit crotchety (like most things WordPress these days), but we're using WP Fastest Cache to create static HTML pages and a custom nginx configuration to read directly off those static pages (without hitting the PHP interpreter) for non-logged-in users. Basically try_files /.../cache/$uri, which falls back to PHP.

[–] Andres4NY@social.ridetrans.it 1 points 3 hours ago

@dan @goldensw The vast majority of traffic is going to come in the first day or week after a new article is published, with social media or whatever driving lots of traffic to that same article over and over. Loading the PHP interpreter each time, even if it's reading cached data, *will* make the site fall over. Static files will not.

Though nowadays there are stupid AI bots doing pathological stuff, so that may become an issue as well and require some further adjustments.