How to Run WordPress completely from RAM (rickconlee.com)
33 points by indigodaddy 3 months ago | 51 comments


Pagespeed gives this man's site a 72: https://pagespeed.web.dev/analysis/https-rickconlee-com/foj4...

And my plain ol' wordpress an 83: https://pagespeed.web.dev/analysis/http-egypt-urnash-com/a1r...

My site's a WP that I started up in like 2012 and it's been running continuously ever since, it's sitting behind Apache, it's got a cache plugin, nothing special at all.


I can recommend the WP Super Cache plugin[0] - it can generate pre-gzipped static pages, and you can then set up your webserver[1] to prefer those (or the non-compressed static files) and only bother passing the request down into PHP when it's not in the cache (rough nginx sketch below).

Gives me a 97 and an 88, respectively[2], on that pagespeed tool.

[0] https://wordpress.org/plugins/wp-super-cache/

[1] https://github.com/simonrupf/docker-php-wp/blob/master/etc/n...

[2] https://pagespeed.web.dev/analysis/https-bobcat-dssr-ch/0ziq... / https://pagespeed.web.dev/analysis/https-simon-rupf-net/auk5...
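
Roughly, the nginx side looks like the sketch below (the snippet path, the index-https.html naming and the reload commands are only illustrative, and the snippet still has to be included from the server block; a complete config like [1] also bypasses the cache for logged-in users, POSTs and query strings):

  # serve WP Super Cache's pre-generated static files first and only fall back
  # to PHP on a cache miss; gzip_static picks up the pre-gzipped .gz siblings
  printf '%s\n' \
    'gzip_static on;' \
    'location / {' \
    '    try_files /wp-content/cache/supercache/$http_host/$request_uri/index-https.html' \
    '              $uri $uri/ /index.php?$args;' \
    '}' | sudo tee /etc/nginx/snippets/wp-super-cache.conf
  sudo nginx -t && sudo systemctl reload nginx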


Never heard of pagespeed before (yeah, lucky 10k etc., but also I'm just not a web dev) and this pointed out some huge easy wins on my image-heavy homelab site, so thanks for that (setting `Cache-control` at all, and forcing `img loading="lazy"`). To shortcut some obvious followups: (a) the pagespeed engine is called "Lighthouse" and it's on GitHub (Apache licensed); (b) it's also built into Chrome DevTools (inspect), but for me it was all the way over in the » section of the toolbar. So you can run it locally on a private/staging site. Again, thanks!


The shoemaker's kids have holes in their shoes. In my heyday of freelance work, my website was always the last priority. The freelance work I was doing kept me so busy I had no time to spit-shine my own site.

This is a Wordpress site, pagespeed is 96 https://pagespeed.web.dev/analysis/https-www-storwell-com/wn...


While the site is using a CDN (Cloudflare), pages are not cached ("cf-cache-status: DYNAMIC"). Sometimes this actually makes the site slower to load, as we're going user <-> Cloudflare <-> server instead of directly to the server.
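
For anyone who wants to check, the header shows up with a plain curl:

  # DYNAMIC means Cloudflare passed the request through to the origin;
  # a cached page would show HIT (or MISS/EXPIRED while the cache warms)
  curl -sI https://rickconlee.com/ | grep -i '^cf-cache-status'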


I'm skeptical about this. PHP OPcache already loads and compiles all PHP files into RAM, such that it only happens once (or again when they're modified). It's the single most useful thing you can do with a WordPress site (other than running on decent hardware, which most hosts do not have).
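
For reference, tuning it is just a few ini lines; this is only a sketch, and the conf.d path, PHP version and values here are assumptions:

  # illustrative OPcache settings: 256 MB of shared memory, enough file slots
  # for WP core + plugins, timestamp checks left on for safety
  printf '%s\n' \
    'opcache.enable=1' \
    'opcache.memory_consumption=256' \
    'opcache.max_accelerated_files=20000' \
    'opcache.validate_timestamps=1' \
    | sudo tee /etc/php/8.2/fpm/conf.d/99-opcache.ini
  sudo systemctl reload php8.2-fpm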

MySQL/MariaDB have a lot of RAM caching as well - they haven't just been sitting on their hands for decades... Still, let's say that this helps - you could probably just use DB replication to another machine (or even another container on the same machine) that persists to disk.

And the (likely AI-generated) colour commentary got tiring fairly quickly.

Most of all, without a before/after comparison, this is meaningless.


Ironically, Cloudflare is preventing me from reading the site:

  Sorry, you have been blocked
  You are unable to access rickconlee.com


Out of curiosity do you know why Cloudflare has blocked you? Browser integrity check, IP reputation, etc? I'm trying to have more lax settings for my static website, and would love to make sure it's as accessible as possible.


Likely country blocks in this case, can't have the mean Europeans read the precious content. Works fine with a US VPN (bad IP reputation), doesn't work with my German local provider (good IP reputation).


Works fine from Norway, so probably not a country block. Sites blocking the EU usually block Norway as well.


Working in the UK too. Also works with a VPN via Portugal, France, Netherlands, and Denmark. Only got blocked when accessing via Brazil.


Same here. My German IP is blocked.


From my experience with Cloudflare, hard blocks are either caused by something that is similar to known exploits or by the site operator blocking countries/IP ranges. Everything else gets the annoying captcha or the less annoying "managed challenge" (usually a quick check before loading the page or a request to manually click a box).


I'm blocked on Firefox on Ubuntu 25.04, on a German residential FTTH line, and I'm also blocked with Firefox and Chrome on Android 15 over 5G.


For me it's just visiting the site from my phone with a very typical German network provider (o2). Just followed the hacker news link ...


No idea. Vanilla desktop and mobile Safari, cable and 5G, all blocked.

Cloudflare Ray ID: 98d7e0b319f3f390


I have also been blocked. We are probably in a country where the author has decided to block us on CloudFlare.


Linux page cache + PHP-FPM OPcache already keep hot PHP in RAM (no per-request disk hits after warm-up), and if your dataset fits in memory you size innodb_buffer_pool accordingly; you don't move the whole MariaDB datadir to tmpfs and throw away durability.
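
A quick sanity check of that sizing (the schema name and the 1G value are just examples):

  # rough size of the data MariaDB would need to keep hot
  mysql -e "SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS dataset_mb
            FROM information_schema.tables WHERE table_schema = 'wordpress';"
  # if that fits comfortably in RAM, something like this in my.cnf covers it:
  #   innodb_buffer_pool_size = 1G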


It just says

  Sorry, you have been blocked
  You are unable to access rickconlee.com

  Why have I been blocked?

  This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.

  What can I do to resolve this?

  You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.

  Cloudflare Ray ID: 98d88522ad55dbab
  Your IP: 2a02:3035:671:c0fa:cfc0:827c:68bc:ba8e
  Performance & security by Cloudflare

Thanks cloudflare!




Alternatively, copy everything to /dev/shm before launch and call it a day. Now everything is loaded from RAM. Updates involve copying from the source directory to /dev/shm, then restarting.
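
Something like this, with illustrative paths and service names:

  # copy the site into RAM and point the vhost's document root at the copy
  rsync -a --delete /var/www/wordpress/ /dev/shm/wordpress/
  sudo systemctl restart php8.2-fpm nginx
  # updates: change the on-disk copy, re-run the rsync, restart again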


How is that different from what the article documents?


I guess it's the same, except I expressed it in ~30 words instead of ~2K words, and using 3.2MB less of data to communicate it :)


Reminds me of MongoDB (first few years at least). Screaming fast. Until a crash happens and the data is missing - then all that's left is screaming.

I also run MariaDB in RAM, but only for integration tests where the data is fictional anyway. Otherwise, I'm sure you can't come up with a better solution than the database developers did just by using some sysadmin tricks.


I assume that modern filesystem caching to RAM is incredibly sophisticated, but wouldn't a typical website already be fully served from RAM? If you only have a few megabytes of code + assets, won't the OS see this is hot data, not being updated, so it is no longer being read from disk sometime after the initial read?
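
For what it's worth, you can check how much of a directory tree is already resident with vmtouch (a separate tool; the path here is just an example):

  # reports how many of the files' pages are currently in the page cache
  vmtouch /var/www/wordpress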


A performance problem I’ve run into with small websites like this one is that many caching systems are tuned for bigger companies or hotter programs, and basically every load ends up an “exceptional” cold-start case. VMs wake up, Cloudflare actually only keeps your data in one place, there’s no sane HTTP caching value, and, yeah, files are read from disk. Worse, it’s easy to miss during testing by loading things more frequently. I’m sure there are filesystem or server parameters to tweak, but I do think small websites that want great performance should be, somewhere somehow, managing caching manually.


It's not that the data is only cached by Cloudflare in one place; it's that it is cached at the edge node nearest to the user who last made the request. Geographically distant users will likely hit a completely different edge node that needs to hit your origin to populate its cache.

Cloudflare has a free tiered-caching option that helped my site. Instead of a cache miss on a local edge node always having to hit the origin, the edge node can sometimes pull the data from another Cloudflare server. It reduced load on my origin.

Agree with needing to tune and validate caching; one of the biggest improvements for my PHP site was tuning APC/OPcache sizes.


Cloudflare will actually slow down TTFB for small, less popular sites since they don't maintain a keepalive connection to the origin. This means you pay an additional TCP/TLS setup cost from the Cloudflare POP to the origin which is worse than a direct connection. I also tried testing a smart-placed worker and cloudflared, neither of which seemed to help.


They can use keepalive but it's likely the small sites are not getting enough traffic on the edge nodes to maintain the connections.

You don't think taking a small hit on TTFB is a good trade off for the improved scaling that a CDN offers?

Gone are the days when you didn't have to worry about bot traffic amounting to a DDoS. An unresponsive site is a lot worse than an extra TCP/TLS setup.


> files are read from disk.

Disk as in spinning round circles, or disk as in NVMe drive, because there's a pretty massive difference in latency.


The code contains database calls that have to run, reads and writes. Processors are sophisticated, but if they (or rather, they and the OS) were that smart, WordPress installations would be a lot faster by default.

And maybe that's not surprising; think about it: the typical server does a lot of other things in addition to serving one specific WordPress site.


Depends on the amount of data/assets. With all the AI bots it's easy for (default) caches to be undersized, since sites no longer have a small set of most-frequently-accessed URLs; every URL (and query-param combo) ends up being frequently accessed.


This site is infected with cloudflare, can't see the article.


And the pagespeed score reflects that. It does not matter if it's in RAM; most of the pages/load will be served by the CDN and caching anyway.

Servers put the most-requested content into RAM anyway.


> most of the pages/load will be on the CDN and caching anyways

Not always. Most CDNs for websites (essentially reverse proxies) don't cache pages by default so private content isn't made publicly available. You have to enable/configure what is cached.

In this case, Cloudflare isn't caching the HTML: "cf-cache-status: DYNAMIC". If page cache was enabled, it would be something like "HIT" or "MISS".


> It’s mounted in a tmpfs-backed RAMDisk

> Because even well-cached files still require filesystem-level permission checks, read cycles, and sometimes fragmentation reassembly.

I'd assume tmpfs still requires filesystem-level permission checks, or am I mistaken?


I think they're saying that the metadata checks on cached files still have to go to disk. I would be surprised if that was consistently true, but I can't make any other sense out of it.


Why not just use a reverse proxy that caches pages into RAM, or use Cloudflare or such?


RAM is expensive. You can go a long way by using the NGINX fastcgi cache for non-authenticated traffic, avoiding hitting PHP altogether.
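
A minimal sketch of that setup; the zone name, sizes and the logged-in cookie check are assumptions, and real configs need more bypass rules (carts, previews, etc.):

  # cache zone at http level (nginx includes conf.d/*.conf there by default)
  printf '%s\n' \
    'fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:64m inactive=60m;' \
    'fastcgi_cache_key "$scheme$request_method$host$request_uri";' \
    | sudo tee /etc/nginx/conf.d/wp-fastcgi-cache.conf
  # then, inside the vhost's "location ~ \.php$" block:
  #   set $skip_cache 0;
  #   if ($http_cookie ~* "wordpress_logged_in") { set $skip_cache 1; }
  #   fastcgi_cache WORDPRESS;
  #   fastcgi_cache_valid 200 301 302 10m;
  #   fastcgi_cache_bypass $skip_cache;
  #   fastcgi_no_cache $skip_cache;
  sudo nginx -t && sudo systemctl reload nginx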


It would have been nice to provide a performance comparison between the stock installation and running from RAM.


AI slop - an AI-generated image at the top, and the text is full of em-dashes and ChatGPT-isms.


The bigger tell is that half the article is unnecessary space-filling hyperbole...

The overwhelming majority of the speed-up here would come from the database, which is trivially easy to run on tmpfs. When using Docker, it's literally a one-liner! For example:

  docker run -d --tmpfs /var/lib/mysql -p 127.0.0.1::3306/tcp -e MYSQL_ROOT_PASSWORD=[password] mariadb:11.8

Of course you need a really good backup story for this to be a reasonable choice.
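
For example, even a simple scheduled dump helps; the container name and backup path here are made up:

  # periodic dump so a container restart doesn't wipe the only copy of the data
  # (assumes the container above was started with --name wp-db)
  docker exec wp-db sh -c 'mariadb-dump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
    | gzip > "/backups/wp-$(date +%F-%H%M).sql.gz"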


Wow that's neat, had no idea about --tmpfs flag (I submitted this article because I found it interesting but am not the author)


Yeah I skimmed it and it all read like when I ask ChatGPT to go wild and be “fun”.


"Let’s break down the tune. This is not a just WordPress install. This is a brass-knuckle brawl between performance and everything that dares to slow it down."


Mandatory reminder - don't use WordPress in 2025.

Use headless CMS plus static site generator. e.g. Strapi plus Astro


WP has corporate momentum/network effect though, in the same way that Jira, Jenkins, and Java (among other things) have.

For a long time now WP no longer just caters to the hacker "Code is Poetry" crowd---and, of course, even less so nowadays with the controversies WP has embroiled itself with. The people who are inclined to choose WP by default do so because of the wealth of plugins available to them, be that Shopify integration or fine-grained tracking of a marketing campaign. They would wonder why someone would ever prefer something "headless". They think static websites are the dinosaurs the Y2K comet wiped out.

Sure we can argue about whether WP is the "best solution" but WP is definitely the solution that works acceptably out-of-the-box. Your CMS of choice probably has a bunch of out-of-the-box solutions for common concerns as well but I doubt that it can handle the edge cases that Head of Marketing will inevitably introduce with their ol' reliable set of integrated services. Shopify + Google Analytics + Salesforce + Airtable[1] always worked for them with WP but suddenly this allegedly-better "headless" CMS is throwing all sorts of dumb errors.

And if a plugin is not available, there is no shortage of WP/PHP developers who can make one at a reasonable price. In contrast, I'm sorry, but honestly your comment is the first time I've heard about Strapi and Astro.

I'm not saying I like the status quo but if someone asks me for a WP site, I give them a hardened EC2 box with WP over Apache/NGINX. Then I return to frying bigger fish.

[1] Just an example.


While WP is not my first choice, it's also not good to give binary answers as the only factual options.

There will be lots of things that Strapi and Astro, or whatever our personal preference is, can't do.


It's always "in most cases". I simply don't see space for WordPress on greenfield sites anymore. If something is so small that it does not require a CMS, it's better to use pure HTML. Otherwise, "in most cases" it's better to use a headless CMS.


Do you work with non-technical users? There are few static site generators (none that I'm actually aware of) that are friendly enough for, say, a comms team in a large enterprise. I note that Strapi also puts key features such as SSO behind an Enterprise paywall... so that's already a massive negative.

WordPress has its place; a blanket "no" against one of the most popular CMSes on the Internet is a pretty hot take.


That's understandable, but DIY is not the default for the majority.



