
WordPress is a Pig and Dreamhost’s VPSes aren’t configured for it.

Last updated on February 12, 2017

I’ve been fighting with this for quite some time. I moved to a VPS over a year ago in hopes of a more stable Dreamhost experience, and for a while it was. Then, about 9 months ago, my site started crashing out of the blue. I’d be chugging along just fine, then, “blam!”, site down. I started aggressively caching things with WP Super Cache, then W3 Total Cache. It helped a little, but ultimately things just got more and more unstable. About 2 months ago I gave up on using phpMyAdmin when I needed to do SQL work, simply because it was an instacrash for my VPS. About 3 weeks ago I’d had enough and decided it was time to seriously track down the problem.

To make a long story short, WordPress is a massive memory hog. I’m pushing 30MB on average before I even start loading plugins. That’s not a lot if you have an 8GB server dedicated to nothing but pushing WordPress, but 30MB is 5-10% of a small VPS. I’ve gone through all the WordPress tuning guides I can find. I’ve manually cleaned up the database. Nothing really helps. Of course, if the server were configured for the load it has, the problem would be considerably smaller.

Which brings us back to Dreamhost’s VPSes. They say they’re designed to scale with the RAM that’s allocated to them. Sure, maybe if you’re running static HTML pages. In that case the 69 concurrent clients configured on a 400MB server would get ~6MB apiece, which is just barely enough for Apache to serve static HTML. Even then it doesn’t really work out, since there’s non-Apache overhead. In fact, now that I think about it, under full load the default Apache configuration can easily exceed a VPS’s memory allotment just serving static content. 😮

Then comes mod_fcgi. By default it’s configured to allow 20 instances per process class (I’ll come back to this), and the Apache default is 1000.

What are process classes, and why are they important? Processes in a class are spawned by the same executable and share a common virtual host and identity. For example, if my virtual host for cult-of-tech spawns a CGI process, that process can’t be used by another virtual host on my server. Now here’s the kicker: when WordPress gets going and everything is loaded, that 30MB+ of WordPress, plus all the PHP overhead, plus whatever space you allot for caching (XCache, APC), is how big the FastCGI instance will be. In my case, that means each php.cgi instance is 60-70MB. On a 400MB server, once you spawn 5-6 PHP processes you’ve used up the entirety of your VPS’s memory and, again, blam!
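To put rough numbers on that, here’s the arithmetic as a quick sketch (the 65MB per-instance figure is just an assumed midpoint of the 60-70MB range above, not a measured value):

```python
# Back-of-the-envelope math for the scenario described above.
VPS_MEMORY_MB = 400   # base Dreamhost VPS allotment
PHP_CGI_MB = 65       # assumed midpoint of the 60-70MB php.cgi instance size

# How many FastCGI instances fit before the VPS runs out of memory,
# ignoring Apache, MySQL, and OS overhead entirely (so the real limit is lower).
instances_before_oom = VPS_MEMORY_MB // PHP_CGI_MB
print(instances_before_oom)  # 6
```

And that’s the optimistic case: any memory the web server, database, and OS actually use comes out of the same 400MB.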

The real kicker, though, is Dreamhost’s overly aggressive memory manager on their VPSes. Instead of killing off individual processes, or for that matter special-casing Apache and just restarting it if it’s running, the watchdog simply kills off the whole VPS. It may do more than that, because it can take 10 minutes for the VPS to come back up unless you manually reboot it.

Interestingly enough, the answer to all of this is not simply to throw money at it. In the process of troubleshooting I temporarily pushed my memory limits up, and even at 600MB or 800MB the config would still allow enough processes to crash the server. For that matter, my development server, which has a ton of RAM available, can comfortably do many of the things that were crashing my VPS without exceeding a 200MB memory footprint.

Simply put, there’s no reason a lightly trafficked WordPress site should require more than 300MB, maybe 400MB, and certainly not 600MB, simply to stay upright. At least not with a properly configured server behind it.

The moral of the story is:

  • WordPress is a memory pig, and the developers seriously need to consider a couple of releases focused entirely on performance and lowering the memory footprint.
  • Dreamhost’s configuration for Apache and mod_fcgi on their VPSes is overly generous for small servers and needs to be curtailed to more reasonable numbers.
  • Dreamhost’s VPS memory watchdog is aggressive and naive, and will take down a server in a hard-to-recover way to ensure it doesn’t use more resources than the client is paying for.

And what am I doing about this?

I’ve curtailed my Apache and mod_fcgi configs to more reasonable settings.

I’ve set mod_fcgi’s MaxProcesses directive to floor((400 - typical_process_size) / typical_process_size) and my Apache MaxClients to floor(((typical_process_size * 2) - 20) / 5). I won’t be any more specific than that, because what will actually work, while still being performant, varies based on site, software, traffic, caching, and number of virtual hosts.
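Plugging a hypothetical process size into those formulas looks like this (65MB is an illustrative assumption, the midpoint of the 60-70MB instances mentioned earlier, not a recommendation):

```python
import math

VPS_MEMORY_MB = 400        # base VPS allotment
TYPICAL_PROCESS_MB = 65    # assumed typical php.cgi size; measure your own

# mod_fcgi MaxProcesses: reserve one process's worth of headroom,
# then fit as many FastCGI processes as the remainder allows.
max_processes = math.floor((VPS_MEMORY_MB - TYPICAL_PROCESS_MB) / TYPICAL_PROCESS_MB)

# Apache MaxClients, per the formula above.
max_clients = math.floor(((TYPICAL_PROCESS_MB * 2) - 20) / 5)

print(max_processes, max_clients)  # 5 22
```

With those numbers, even if every FastCGI slot fills up, the total stays inside the VPS’s allotment instead of sailing past it.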

Published in Computers, WordPress

2 Comments

  1. I am having similar issues with DH right now on a very lightly traveled website. You can visit it at http://tjpowell.net

    Were you able to resolve the problem by moving to another hosting company? If so, who? I was looking at Media Temple. I have heard good things about them.

    Thanks

  2. Jason

    As a matter of fact, yes, I’m quite happily running well inside 300MB now, often with between 100 and 150 MB free. Getting there has been something of a discovery process though.

    On the back end I switched from Apache to Nginx. Nginx uses an asynchronous I/O model instead of the synchronous one Apache uses. As a result, Nginx doesn’t require a thread, and the associated memory usage, to be spun up for each client that connects (the Apache workers). That alone pushed me down to being at least stable inside the 300MB minimum VPS footprint. I’ve posted some of my notes on the process here and here.
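    For a sense of why that matters, the relevant Nginx knobs look something like this (a minimal sketch, not my actual config; the numbers are placeholders):

    ```nginx
    # One worker process multiplexes many connections via async I/O,
    # instead of Apache's one-thread-per-client model.
    worker_processes 1;

    events {
        worker_connections 512;  # hundreds of clients on one worker's memory
    }
    ```

    The memory cost is roughly fixed per worker rather than growing with each connected client.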

    On top of that, you’ll definitely want to make sure you have XCache running. If the PHP version you’re using doesn’t have a DH-provided version, it’s easy enough to build yourself and get running based on the instructions at xcache.lighttpd.net/.

    The big fix, and this took me a long time to track down, was within WordPress itself. Specifically, the problem stems from the autoload flag on options stored in the wp_options table, and a number of plugins that have been very poorly written. By default, adding an option to the wp_options table sets the autoload flag to true. When the flag is set, WordPress loads and caches the option when the page is initialized, instead of making a separate database call when the option is queried with get_option. What I found was that a great number of plugins had stored a lot of non-configuration data in wp_options, flagged as autoload, that didn’t need to be. In the end I modified a couple of plugins to not use the wp_options table at all, and manually went through in the mysql client and set a lot of plugin-specific options to not autoload.
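    The cleanup itself can be done straight from the mysql client. The queries look something like this (wp_ is the default table prefix, and ‘some_plugin_cache’ is a hypothetical option name, not one of the actual offenders):

    ```sql
    -- List the largest autoloaded options; these are loaded on every page view.
    SELECT option_name, LENGTH(option_value) AS bytes
    FROM wp_options
    WHERE autoload = 'yes'
    ORDER BY bytes DESC
    LIMIT 20;

    -- Flip autoload off for an option that doesn't need to load on every page.
    UPDATE wp_options
    SET autoload = 'no'
    WHERE option_name = 'some_plugin_cache';
    ```

    As always, back up the database before running UPDATEs against it, and test that the plugin in question still works afterward.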

Comments are closed.