My site's a WP that I started up in like 2012 and it's been running continuously ever since, it's sitting behind Apache, it's got a cache plugin, nothing special at all.
I can recommend the WP Super Cache plugin[0] - it can generate pre-gzipped static pages, and you can set up your webserver[1] to prefer those (or the uncompressed static files), only passing the request down into PHP when there's no cached copy.
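For reference, a minimal sketch of the webserver side, assuming Apache with mod_rewrite and the standard WP Super Cache layout under `wp-content/cache/supercache/`. The conditions are illustrative; the rules WP Super Cache generates for you are more thorough:

```apache
# Serve a pre-gzipped static copy if one exists (GET request, no WP
# cookies, client accepts gzip); otherwise fall through to PHP.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{HTTP:Cookie} !(comment_author|wp-postpass|wordpress_logged_in) [NC]
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html.gz -f
RewriteRule ^(.*)$ /wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html.gz [L]
```

The generated rules also add `AddType`/`AddEncoding` directives for `.gz` files so the browser knows to decompress the response.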
Gives me a 97 and an 88 respectively[2] on that PageSpeed tool.
Never heard of PageSpeed before (yeah, lucky 10k etc, but also I'm just not a web dev), and this pointed up some huge easy wins on my image-heavy homelab site, so thanks for that (setting `Cache-Control` at all, and forcing `img loading="lazy"`). To shortcut some obvious follow-ups: (a) the PageSpeed engine is called "Lighthouse" and it's on GitHub (Apache licensed); (b) it's also built into Chrome DevTools (inspect), though for me it was all the way over in the » section of the toolbar. So you can run it locally on a private/staging site. Again, thanks!
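For anyone else behind Apache, a hedged sketch of the `Cache-Control` win, assuming mod_headers is enabled; the one-month max-age is an arbitrary choice for assets that rarely change:

```apache
# Let browsers keep static assets for a month instead of re-fetching them.
<IfModule mod_headers.c>
  <FilesMatch "\.(png|jpe?g|gif|webp|svg|css|js)$">
    Header set Cache-Control "public, max-age=2592000"
  </FilesMatch>
</IfModule>
```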
The shoemaker's kids have holes in their shoes. In my heyday of freelance work my website was always the last priority. The freelance work I was doing kept me so busy I had no time to spit-shine my own site.
While the site is using a CDN (Cloudflare), pages are not cached ("cf-cache-status: DYNAMIC"). Sometimes this actually makes the site slower to load as we're going <-> cloudflare <-> server instead of directly to the server.
I'm skeptical about this. PHP OPcache already loads and compiles all PHP files into RAM, such that it only happens once (or again when they're modified). It's the single most useful thing you can do with a WordPress site (other than running on decent hardware, which most hosts do not have).
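A sketch of typical OPcache settings for a WordPress box (php.ini); the numbers are common starting points, not gospel:

```ini
; Compile-once caching of PHP scripts in shared memory.
opcache.enable=1
opcache.memory_consumption=192       ; MB for compiled scripts
opcache.max_accelerated_files=10000  ; WP core + plugins easily exceed the default
opcache.validate_timestamps=1        ; recompile when a file changes on disk...
opcache.revalidate_freq=60           ; ...checking at most once a minute
```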
MySQL/MariaDB have a lot of RAM caching as well - they haven't just been sitting on their hands for decades... Still, let's say that this helps - you could probably just use DB replication to another machine (or even another container on the same machine) that persists to disk.
And the (likely AI-generated) colour commentating got tiring fairly quickly.
Most of all, without a before/after comparison, this is meaningless
Out of curiosity do you know why Cloudflare has blocked you? Browser integrity check, IP reputation, etc? I'm trying to have more lax settings for my static website, and would love to make sure it's as accessible as possible.
Likely country blocks in this case, can't have the mean Europeans read the precious content. Works fine with a US VPN (bad IP reputation), doesn't work with my german local provider (good IP reputation).
From my experience with Cloudflare, hard blocks are either caused by something that is similar to known exploits or by the site operator blocking countries/ip ranges. Everything else it's the annoying captcha or the less annoying "managed challenge" (usually a quick check before loading the page or a request to manually click a box).
Linux page cache + PHP-FPM OPcache already keep hot PHP in RAM (no per-request disk hits after warm-up), and if your dataset fits in memory you size innodb_buffer_pool accordingly; you don't move the whole MariaDB datadir to tmpfs and throw away durability.
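The supported way to keep the working set in RAM, as a my.cnf sketch; 2G is a placeholder, size it to your data (70-80% of RAM on a dedicated DB box is a common rule of thumb):

```ini
[mysqld]
# Hot pages stay in the buffer pool; reads never touch disk once warm.
innodb_buffer_pool_size = 2G
# Full durability; set to 2 if you'd trade up to ~1s of commits for speed.
innodb_flush_log_at_trx_commit = 1
```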
> Sorry, you have been blocked
> You are unable to access rickconlee.com
>
> Why have I been blocked?
> This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.
>
> What can I do to resolve this?
> You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.
>
> Cloudflare Ray ID: 98d88522ad55dbab · Your IP: 2a02:3035:671:c0fa:cfc0:827c:68bc:ba8e
Alternatively, copy everything to /dev/shm before launch and call it a day. Now everything is loaded from RAM. Updates involve copying from the source directory to /dev/shm, then restarting.
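A minimal sketch of that approach; the paths are illustrative (a real deploy would copy the actual docroot and then restart PHP-FPM/Apache):

```shell
# Stand-in for the on-disk site; a real setup would use e.g. /var/www/mysite.
SRC=$(mktemp -d)
echo "hello" > "$SRC/index.html"

# /dev/shm is tmpfs on Linux: everything copied here lives in RAM.
DEST=/dev/shm/mysite
rm -rf "$DEST"
cp -a "$SRC" "$DEST"

cat "$DEST/index.html"
```

Note the same caveat applies as with a tmpfs datadir: the copy evaporates on reboot, so anything writable needs to be synced back to disk.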
Reminds me of MongoDB (first few years at least). Screaming fast. Until a crash happens and the data is missing - then all that's left is screaming.
I also run MariaDB in RAM, but only for integration tests where the data is fictional anyway. Otherwise I doubt you can beat the caching the database developers built just by using some sysadmin tricks.
I assume that modern filesystem caching to RAM is incredibly sophisticated, but wouldn't a typical website already be fully served from RAM? If you only have a few megabytes of code + assets, won't the OS see this is hot data, not being updated, so it is no longer being read from disk sometime after the initial read?
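Mostly yes. A rough way to watch the page cache at work on Linux (no root needed, though without dropping caches the first read is often already warm from the write, so the difference can be small):

```shell
# Write a 64 MB file, then read it twice; the second read is served
# entirely from the kernel's page cache.
F=/tmp/pagecache-demo.bin
dd if=/dev/zero of="$F" bs=1M count=64 status=none
sync
time cat "$F" > /dev/null   # may already be cached from the write
time cat "$F" > /dev/null   # page-cache hit, near-instant
```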
A performance problem I've run into with small websites like this one is that many caching systems are tuned for bigger companies or hotter programs, and basically every load ends up an "exceptional" cold-start case. VMs wake up, Cloudflare actually only keeps your data in one place, there's no sane HTTP caching value, and, yeah, files are read from disk. Worse, it's easy to miss during testing because you load things more frequently than real visitors do. I'm sure there are filesystem or server parameters to tweak, but I do think small websites that want great performance should be, somewhere somehow, managing caching manually.
It's not that the data is only cached by Cloudflare in one place, it's that it is cached at the edge node nearest to the user that last made the request. Geographically distant users will likely hit a completely different edge node that needs to hit your origin to populate its cache.
Cloudflare has a free tiered-caching option that helped my site. Instead of a cache miss on a local edge node always having to hit the origin, the edge node can sometimes pull the data from another Cloudflare server. It reduced load on my origin.
Agree with needing to tune and validate caching; one of the biggest wins for my PHP site was tuning APCu/OPcache sizes.
Cloudflare will actually slow down TTFB for small, less popular sites since they don't maintain a keepalive connection to the origin. This means you pay an additional TCP/TLS setup cost from the Cloudflare POP to the origin which is worse than a direct connection. I also tried testing a smart-placed worker and cloudflared, neither of which seemed to help.
The code contains database calls that have to run, reads and writes. Processors are sophisticated, but if they (or rather, they and the os) were that smart, WordPress installations would be a lot faster by default.
And maybe that's not surprising, think about it: the typical server does a lot of other things additionally to serving one specific WordPress site.
Depends on the amount of data/assets. With all the AI bots it's easy for (default) caches to be undersized, since sites no longer have a small set of most-frequently-accessed URLs; every URL (and query-param combo) ends up being frequently accessed.
> most of the pages/load will be on the CDN and caching anyways
Not always. Most CDNs for websites (essentially reverse proxies) don't cache pages by default so private content isn't made publicly available. You have to enable/configure what is cached.
In this case, Cloudflare isn't caching the HTML: "cf-cache-status: DYNAMIC". If page cache was enabled, it would be something like "HIT" or "MISS".
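You can check this yourself; a sketch (substitute your own domain):

```shell
# Inspect Cloudflare's cache verdict for a page.
curl -sI https://your-site.example/ | grep -i '^cf-cache-status'
# DYNAMIC = HTML not cached; HIT/MISS/EXPIRED = page caching is enabled
```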
I think they're saying that the metadata checks on cached files still have to go to disk. I would be surprised if that was consistently true, but I can't make any other sense out of it.
The bigger tell is that half the article is unnecessary space-filling hyperbole...
The overwhelming majority of the speed-up here would come from the database, which is trivially easy to run on tmpfs. When using Docker, it's literally a one-liner! For example:
docker run -d --tmpfs /var/lib/mysql -p 127.0.0.1::3306/tcp -e MYSQL_ROOT_PASSWORD=[password] mariadb:11.8
Of course you need a really good backup story for this to be a reasonable choice.
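One hedged sketch of such a backup story: dump the RAM-backed database to real disk on a schedule. Paths are placeholders, and it assumes credentials in root's `~/.my.cnf`; losing up to 5 minutes of writes is the trade-off tmpfs already signed you up for:

```shell
# /etc/cron.d/db-backup: dump everything to persistent disk every 5 minutes.
*/5 * * * * root mysqldump --single-transaction --all-databases | gzip > /var/backups/db-$(date +\%H\%M).sql.gz
```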
"Let’s break down the tune. This is not a just WordPress install. This is a brass-knuckle brawl between performance and everything that dares to slow it down."
WP has corporate momentum/network effect though, in the same way that Jira, Jenkins, and Java (among other things) have.
For a long time now WP no longer just caters to the hacker "Code is Poetry" crowd - and, of course, even less so nowadays with the controversies WP has embroiled itself in. The people who are inclined to choose WP by default do so because of the wealth of plugins available to them, be that Shopify integration or fine-grained tracking of a marketing campaign. They would wonder why someone would ever prefer something "headless". They think static websites are the dinosaurs the Y2K comet wiped out.
Sure we can argue about whether WP is the "best solution" but WP is definitely the solution that works acceptably out-of-the-box. Your CMS of choice probably has a bunch of out-of-the-box solutions for common concerns as well but I doubt that it can handle the edge cases that Head of Marketing will inevitably introduce with their ol' reliable set of integrated services. Shopify + Google Analytics + Salesforce + Airtable[1] always worked for them with WP but suddenly this allegedly-better "headless" CMS is throwing all sorts of dumb errors.
And if a plugin is not available, there is no shortage of WP/PHP developers who can make one at a reasonable price. In contrast, I'm sorry, but honestly your comment is the first time I've heard about Strapi and Astro.
I'm not saying I like the status quo but if someone asks me for a WP site, I give them a hardened EC2 box with WP over Apache/NGINX. Then I return to frying bigger fish.
It's always "in most cases". I simply don't see space for WordPress on greenfield sites anymore. If something is so small it does not require a CMS, it's better to use pure HTML. Otherwise, "in most cases" it's better to use a headless CMS.
Do you work with non-technical users? There are few (none that I'm actually aware of) static site generators that are friendly enough for a comms team in a large enterprise for example. I note that Strapi also puts key features such as SSO behind an Enterprise pay wall... so that's already a massive negative.
WordPress has its place; a blanket no against one of the most popular CMSes on the Internet is a pretty hot take.
And my plain ol' wordpress an 83: https://pagespeed.web.dev/analysis/http-egypt-urnash-com/a1r...