40 Million hits a day on WordPress using a $10 VPS

I recently tested many of the biggest names in managed WordPress hosting in my article Managed WordPress Hosting Performance Benchmarks. I am preparing to do a second round of testing with double the number of companies on board. But some of us like to set up servers ourselves (or are just cheap).

Given a reasonable VPS, what sort of performance can we get out of it?

Ten million hits a day, as measured by Blitz.io, was the benchmark to beat, based on a previous iteration of this question (see the credits at the end).

I decided to test this from the ground up: let's start with the most basic configuration and gradually try to improve it.

All tests were performed on a $10/month, 1GB RAM DigitalOcean VPS running Ubuntu 14.04 x64. All code and documentation are also available on GitHub.

LAMP Stack

Based on my previous experience benchmarking WordPress, I didn't have high hopes for this test. Last time I crashed MySQL almost instantly, so this time I ramped Blitz up a lot slower, from 1 to 50 users. The performance wasn't impressive: it started slowing down almost immediately and kept getting worse. No surprises.

[Chart: Blitz 1-50 user test, default LAMP stack]

The LAMP stack setup script is available on GitHub. Download full Blitz results from LAMP Stack (PDF).

LAMP + PHP5-FPM

The next thing I tried was PHP-FPM (FastCGI Process Manager). It got slightly better performance, with response times just under 200ms faster at 50 users. But the graph looks pretty similar: response times climb quickly as the number of users goes up. Not a great improvement.

[Chart: Blitz 1-50 user test, LAMP + PHP5-FPM]

The LAMP + PHP5-FPM setup script is available on GitHub. Download full Blitz results from LAMP + PHP5-FPM (PDF).

Nginx + PHP-FPM (aka LEMP Stack)

Maybe the problem is Apache? I tried Nginx next. What happened? I got worse performance than the default LAMP stack (wtf?). Everyone said Nginx was faster. It turns out it isn't magically faster than Apache (and out of the box it actually looks worse).

[Chart: Blitz 1-50 user test, LEMP + PHP-FPM]

The LEMP + PHP-FPM setup script is available on GitHub. Download full Blitz results from LEMP+PHP-FPM (PDF).

Microcaching

I've written about creating a reverse proxy and cache in Nginx before. But Nginx is already my web server this time, so I don't need a reverse proxy. Nginx has fastcgi_cache, which lets us cache responses from FastCGI processes (PHP). So I applied the same microcaching technique here, and the results were staggering. The response time dropped to 20ms (+/- 2ms), and it scaled from 1 to 1,000 concurrent users.

“This rush generated 28,924 successful hits in 60 seconds and we transferred 218.86 MB of data in and out of your app. The average hit rate of 482/second translates to about 41,650,560 hits/day.”

All that with only 2 errors (connection timeouts).

[Chart: Blitz 1-1,000 user test, LEMP + PHP-FPM + microcaching]

The LEMP + PHP-FPM + microcaching setup script is available on GitHub. Download full Blitz results from LEMP+PHP-FPM + microcaching (PDF).

Microcaching Config Walkthrough

We start with the standard installs:

apt-get update
apt-get -y install nginx
apt-get -y install mysql-server mysql-client
apt-get -y install php5-mysql php5-fpm php5-gd php5-cli

This gets us Nginx, MySQL and PHP-FPM.

Next we need to tweak some PHP-FPM settings. I am using one-liners to edit /etc/php5/fpm/php.ini and /etc/php5/fpm/pool.d/www.conf, setting cgi.fix_pathinfo=0 and uncommenting the listen.owner, listen.group and listen.mode settings:

sed -i "s/^;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/" /etc/php5/fpm/php.ini
sed -i "s/^;listen.owner = www-data/listen.owner = www-data/" /etc/php5/fpm/pool.d/www.conf
sed -i "s/^;listen.group = www-data/listen.group = www-data/" /etc/php5/fpm/pool.d/www.conf
sed -i "s/^;listen.mode = 0660/listen.mode = 0660/" /etc/php5/fpm/pool.d/www.conf
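After editing these files, restart PHP-FPM so the changes take effect (worth calling out explicitly here; on Ubuntu 14.04 it's a standard service):

service php5-fpm restart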

Now we create a folder for our cache:

mkdir /usr/share/nginx/cache

We'll need that folder in our Nginx configs. In /etc/nginx/sites-available/default we add the settings below inside the server {} block. We also make sure to add index.php to the index directive and set server_name to a domain or IP.
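Those two tweaks look something like this (example.com is a placeholder; use your own domain or IP):

server_name example.com;
index index.php index.html index.htm;

The location block that does the actual caching: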

location ~ \.php$ {
		try_files $uri =404;
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		# use the "microcache" zone defined in nginx.conf (below)
		fastcgi_cache  microcache;
		fastcgi_cache_key $scheme$host$request_uri$request_method;
		# cache successful responses and redirects for 30 seconds
		fastcgi_cache_valid 200 301 302 30s;
		# serve a stale copy while the cache refreshes or PHP errors out
		fastcgi_cache_use_stale updating error timeout invalid_header http_500;
		fastcgi_pass_header Set-Cookie;
		fastcgi_pass_header Cookie;
		# stop upstream headers from disabling the cache
		fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
		fastcgi_pass unix:/var/run/php5-fpm.sock;
		fastcgi_index index.php;
		include fastcgi_params;
}
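One thing this config deliberately leaves out is skipping the cache for requests that have to stay dynamic: logged-in users, commenters, and form submissions. A sketch of how that could be added (the cookie names are the standard WordPress ones, but this wasn't part of the benchmarked config, so treat it as a starting point). First, above the location block:

set $skip_cache 0;
# never cache form submissions
if ($request_method = POST) {
		set $skip_cache 1;
}
# skip the cache when WordPress auth/commenter cookies are present
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
		set $skip_cache 1;
}

Then, inside the location block itself:

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;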

Then we move on to /etc/nginx/nginx.conf and make a few changes, like increasing worker_connections (an example follows below). We also add this line inside the http {} block, before our other configs are included:

fastcgi_cache_path /usr/share/nginx/cache/fcgi levels=1:2 keys_zone=microcache:10m max_size=1024m inactive=1h;

This creates our fastcgi_cache: keys_zone=microcache:10m names the zone referenced by fastcgi_cache above and allocates 10MB of shared memory for cache keys, levels=1:2 sets the cache directory layout, max_size=1024m caps the on-disk cache at 1GB, and inactive=1h evicts entries that haven't been requested in an hour.
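The worker_connections change mentioned above lives in the events {} block. Something like this (Ubuntu's default is 768; the exact value in my script may differ, so treat the number as illustrative):

events {
		worker_connections 4096;
}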

All of this is done in somewhat ugly one-liners in the script (if someone has a cleaner way of doing it, please share!). I've cleaned them up and provided the full files for comparison.
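One extra tip that isn't in the script: to verify the cache is actually being hit, you can add a debug header inside the PHP location block,

add_header X-Cache $upstream_cache_status;

and then request a page twice from another machine:

curl -I http://your-server-ip/
# the second response should include "X-Cache: HIT";
# MISS or BYPASS means the request went through to PHP-FPM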

Go Big or Go Home

Since Nginx didn't seem to blink when I hit it with 1,000 users, I wondered how high it could really go. So I tried from 1 to 3,000 users, and guess what?

“This rush generated 95,116 successful hits in 60 seconds and we transferred 808.68 MB of data in and out of your app. The average hit rate of 1,585/second translates to about 136,967,040 hits/day.”

The problem was that I started getting errors: "4.74% of the users during this rush experienced timeouts or errors!" Still, it peaked at an astonishing 2,642 users per second. I watched my processes during the test and saw all 4 Nginx workers maxing out the CPU (25% each). I think I hit the limit of what a 1GB, 1-core VPS can handle. This setup was a champ, though. I'm not sure what caused the big spike (perhaps a cache refresh), but if you want to roll your own WordPress VPS and serve a lot of static content, this template should be a pretty good starting point.
[Chart: Blitz 1-3,000 user test, LEMP + PHP-FPM + microcaching]

Download the full results of the 3,000-user Blitz test (PDF)

Conclusion

There are definitely a lot of improvements that can be made to this config. It doesn't optimize anything that misses the cache (which means any dynamic content, most often logged-in users). It doesn't address security at all. It doesn't do a lot of things. If you aren't comfortable editing PHP, Nginx and other Linux configs/settings and are running an important website, you should probably go with a managed WordPress company. If you really need performance and can't manage it yourself, take a look at our Managed WordPress Hosting Performance Benchmarks. If you just want a good web hosting company, take a look at our web hosting reviews and comparison table.

All code and documentation are available on GitHub.

Thanks and Credits:

The title was inspired by Ewan Leith’s post 10 Million hits a day on WordPress using a $15 server. Ewan built a server that handled 250 users/second without issue using Varnish, Nginx, PHP-APC, and W3 Total Cache.

A special thanks goes to A Small Orange, who let me test multiple iterations of their LEMP stack, and especially Ryan MacDonald at ASO, who spent a lot of time talking WordPress performance with me.

Kevin Ohashi is the geek-in-charge at Review Signal. He is passionate about making data meaningful for consumers. Kevin is based in Washington, DC.

34 thoughts on "40 Million hits a day on WordPress using a $10 VPS"

  1. Sam

    Nginx is freakin ridiculous when properly configured. This is a must-have part of *any* static website stack. I’ve seen a lot of home-rolled caching solutions that don’t work well (I’m looking at you, WPEngine). Thanks for the easy tips on configuring it.

    1. Kevin Ohashi (Post author)

      Honestly, I am not sure if it works out of the box. If you wanted to try it and share the results, I would be more than happy to share them as an update or separate post. I've been focused on single sites (not multi-sites). I assume you're going to minimally change the nginx configs to respond to multiple domains. Beyond that, I'm really not sure.

  2. @MarkGavalda

    This is absolutely great for static websites, we (at Kinsta) use a very similar setup! Nginx is awesome. :)
    The tricky part is getting dynamic websites to perform the same, where static html caching is not an option. And after that, load balancing so your site is truly running 24/7!
    Thanks for sharing these great articles Kevin!

  3. Pingback: What Level of WordPress Performance from $10 Hosting?

  4. Phil

    Thanks for this highly interesting article. I'm going to share this with a hoster we currently work with, where somehow the results with Nginx and caching never quite were what we expected – very fast, but try explaining to your client that they have to wait 10 or 60 minutes until some cache expires to see most changes…

    BTW, I'm also looking forward to the second part of your managed WordPress hosting review – I'm especially intrigued to see how Kinsta will hold up. I also hope that you will test GoDaddy; they're currently ridiculously cheap, and I think they offer the exact same thing as Media Temple: exact same servers, 100% same caching plugin/setup (Batcache and Varnish), minus the Pro stuff like staging and Git at the moment.

    1. Kevin Ohashi (Post author)

      With this setup I was caching for only 30 seconds. It's designed to handle massive traffic spikes, but the cache window is so short that content updates should show up without any issue.

      The second round is beginning tomorrow; Kinsta is on board, last I checked. GoDaddy was tested last time. I've got them on board again along with Media Temple (let's see if there is a difference!).

      1. Phil

        I'd be really astonished if there were really noticeable differences between GoDaddy and Media Temple – their caching plugin is one and the same (gd-system-plugin.php inside /wp-content/mu-plugins), and server addresses are identical.
        Of course, the backend _is_ different, with (mt)'s being very elegantly designed and already feature-enabled, where GD is only promising stuff for the future.
        I'm actually waiting for either of them to have servers in Europe, as even with CloudFlare Railgun enabled (tested with Media Temple) latency is killing speed.

  5. Daan Kortenbach

    Try this stack:
    Ubuntu 14.04
    Nginx 1.7.2
    HHVM 3.1.0
    MariaDB 10

    Nginx has default settings for max concurrents. Set it higher.
    Also set max open file limits.
    Put Nginx cache in /var/run.
    Use a fragment cache for WordPress (by Rarst)

    Try it without microcache and with it; you'll be amazed.

    You can compile nginx with the purge module. Using the Nginx-Helper plugin you can then purge the cache on demand.

    1. Kevin Ohashi (Post author)

      Do you actually run this and have some benchmarks? I don’t have much time this week to try it, but perhaps I can find some time next week.

      I'm curious: if you had to rank the performance improvements here, which ones do you find/think give the biggest boost?

      1. Daan Kortenbach

        Yes, I run this on my own site as a test (click on my name). I have to mention I’ve added ngx_pagespeed, which adds another layer of optimisation. View the source to see what ngx_pagespeed does to the code. Cache is set to 30 days.

        The stack I've benchmarked is what I wrote earlier. Performance is about 10 to 20% better than your scores. Everything is served from the cache, so you can ignore the stack behind it; it makes no difference. My guess is the gain comes from fine-tuning Nginx and Ubuntu to accept more concurrents. Moving to a 2GB server (needed to compile HHVM yourself) will perform even better, but during my tests I never saw server load go up much. I suspect there is much more to gain from further tuning Ubuntu and Nginx. We need to get the load up to 50% purely by serving concurrents to see what it can actually handle (at 50% I would spin up another droplet in load-balancing scenarios).

        I see you use "fastcgi_cache_use_stale updating"; that is an important setting. Without it your performance would be much lower in real-world scenarios.

        For real-world scenarios a 1-second cache is enough if the stack behind it is fast enough to serve pages quickly. If a single post/page gets hammered (reddit frontpage or whatever), your server will only generate the page once per second. This is where HHVM and fragment caching come in. HHVM has a just-in-time compiler; WordPress will perform much, much better with it. Fragment caching will speed WordPress up a lot too. Add object-cache.php and you're almost done.

        You can fine-tune the maximum number of seconds you need to microcache by counting how many (dynamic) URLs your website has, taking into account how many concurrents you can handle without a cache, and basing your valid time on that. "fastcgi_cache_use_stale updating" takes care of speedy page serving while updating the cache in the background. This method will ensure your website performs almost like a true dynamic site, without the risk of ever going down from load or visitors seeing old cache.

        Just my 2ct’s ;)

        1. Kevin Ohashi (Post author)

          I appreciate it! Real-world scenarios get a lot more interesting once you're not serving directly from cache. That's the next area I'd like to explore. It seems like you've done quite a bit in this area; if you're interested in contributing, I've got a public GitHub repo (https://github.com/kevinohashi/WordPressVPS). If you've got a setup script for the stack you described, I would love to include it.

          Anyways, when I get some time hopefully next week, I am definitely going to be trying some of this stuff. Thanks for all the details on your setup.

        1. Rowan Gillson

          Kevin and Daan,

          Just adding my 2 cents. You can also gain some additional benefit on the DO VPS by tuning your /etc/defaults, /etc/sysctl.conf and /etc/security/limits.conf files. I managed to get similar results to yours, Kevin, without doing any FastCGI caching, using a slightly tweaked LEMP stack.

          In short, what you should be looking at is increasing the number of open files your DO virtual machine can handle. I also added some swap space, as MySQL is always a little memory-hungry, and DO's SSD storage performs an order of magnitude better than old spinning drives.

          I'll probably do a write-up of the entire configuration soon when I get a chance. I've been super impressed with the uptime and performance of Digital Ocean.

          Look to increase this

    2. Thapelo Radebe

      Hi, I'm a tech noob and I just bought a DigitalOcean droplet. I would like to run that stack. I've had a problem installing HHVM 3.0; it says something about not running in a built-in web server. I think I can install Nginx and MariaDB, but I ended up messing up my droplet and having to start again.

      I would appreciate instructions like

      sudo XXXXXXXXX YYYYYY

      for the following steps:

      Nginx has default settings for max concurrents. Set it higher.
      Also set max open file limits.
      Put Nginx cache in /var/run.

  6. Pingback: The Weekly WordPress News, Tutorials & Resources Roundup No.73

  7. Anh Tran

    This is an amazing test. I didn't know about microcaching in nginx, but I've been using pagespeed and it's very fast. Can you share how to create a test with graphics like you did? I just want to check my configuration and test its performance, but I don't know how to do that.

    Thank you very much.

  8. Pingback: WordPress VIP learning, Javascript in core, and changing lives with WordPress

  9. Jean-Baptiste

    Hi,
    Great post!
    I highly recommend using a cache system.
    Server-side, something like Varnish or Redis, but you have to configure the purge policy. With Varnish you can manage ESI (Edge Side Includes) for partial caching.
    Front-side, a caching plugin such as WP Rocket (disclaimer: I'm the co-founder), mounting the cache folder directly in RAM.

  10. Chris McCoy

    Curious what type of site you tested this on; 40 million hits a day is quite a lot of traffic. A site must be in the top 200 of Alexa at that rate.

    1. Kevin Ohashi (Post author)

      The idea of 40M hits/day comes from the load it was able to sustain and the stats from Blitz based on load testing. I don’t actually have a site getting that much traffic (wouldn’t that be nice!).

  11. Pingback: Rebuilding all of my websites - Waking up in Geelong

  12. Mxx

    Kevin,
    Instead of ugly one-liners :), use Ansible to create extremely easy-to-read, yet graceful and idempotent playbooks.
    Ansible is great because, unlike Chef/Puppet, it doesn't require anything special on the managed servers (other than SSH), and the master computer doesn't need much beyond Python and a few support libs.
    You'll be up and running with basic code in a couple of hours, and pretty much following best practices in a couple of days.

  13. Pingback: TechNewsLetter: Vol2 | Vishnu-Tech-Notes

  14. MxyzptlkFishStiks

    You can make Apache just as fast with mod_cache. It's the kernel's sendfile syscall that is working its magic. When Apache and Nginx serve those files from the page cache, instead of copying everything twice from userspace to kernelspace and back (seek, read, write syscalls), it goes straight from the page cache* to the network socket for subsequent requests.

    * = http://www.linuxatemyram.com

    I'll never forget a co-worker of mine, back in the late '90s, freaking out that his first Linux install was hogging all the RAM. LOL!

  15. Addison

    I am using Magento. There is no doubt fastcgi_cache boosts requests from hundreds to thousands. The main problem I have is with things that shouldn't be cached. For example, when I try to add a product to the comparison list, I can't, and I see cached messages on other pages. Can you help? Using fastcgi_cache definitely needs some no_cache rules. One thing is clear: Varnish is not performing like fastcgi_cache, so such a configuration could replace Varnish as the full-page cache in Magento. Please help with the no-cache rules.

    1. Kevin Ohashi (Post author)

      I haven't really used Magento much, so it's hard for me to offer any specific rules to help with your issue. It might be worth asking on Stack Overflow and giving example URLs, at least to help people understand what data is being passed and how.

  16. Tony

    Kevin, thanks for this great article. It is nice to see people like you who understand the subject and are able to share it with the rest of us.

    I did a similar setup using Amazon's new t2.micro instance, which is free for the first year (if you are a new customer), using a LEMP configuration with some added configuration (call it my special recipe), and I made the Amazon AMIs available to everyone. I even included step-by-step instructions on how to create an Amazon AWS account and set up billing alerts to keep costs under control if you get crazy popular.

    It’s at I would appreciate any comments you might have.

    WordPress is a great tool for enabling everyday users to express their ideas and businesses on the Internet; making it available at minimum cost with maximum performance is my goal. I hope that I managed to contribute to this with Purdox.com.

    Best regards,
    Tony

