Author Archives: Kevin Ohashi


About Kevin Ohashi

Kevin Ohashi is the geek-in-charge at Review Signal. He is passionate about making data meaningful for consumers. Kevin is based in Washington, DC.

GoDaddy WordPress Hosting Review


This post is based on WordPress Hosting Performance Benchmarks, where you can read the full details of how GoDaddy performed against the competition.

Overview

GoDaddy [Reviews] is the company that sparked this series of WordPress performance testing. They claimed their WordPress hosting was as good as any other WordPress hosting on the market. I wanted to see if that was true in our first round of WordPress testing, and it certainly was. The second round of testing did not disappoint either. GoDaddy maintained their position in the top tier of managed WordPress providers. This article summarizes GoDaddy's performance under multiple testing scenarios.

The Plan

All testing was done on GoDaddy's shared WordPress Hosting service, which costs $6.99/month. The plan allows for 1 site, 100GB of space and 25,000 visitors per month. It also has the usual features of automated backups, 24/7 support via phone or ticket, and free migrations.

Performance

LoadStorm

The first performance test was done with LoadStorm. GoDaddy made it to the final round of testing where 2000 concurrent users were logging into WordPress and browsing the test site. The test was designed to test non-cached performance by logging users into WordPress. It caused many hosting setups to crumble. You can see GoDaddy's result in this graph (click on it to play with the interactive results):

[Chart: LoadStorm 500-2000 concurrent user test results for GoDaddy]

GoDaddy actually pushed through triple the amount of data compared to every other web hosting company. Amazingly, it didn't struggle with this at all. The error rate was infinitesimal: only 92 errors in over 285,000 requests. Other than a little spike, it handled the entire test without missing a beat.

Blitz

The second load test run on GoDaddy was Blitz.io. Blitz was used to test cached performance. It simply requested the home page of our test site, scaling from 1 to 2,000 concurrent users over 60 seconds.

[Chart: Blitz.io 1-2,000 user test results for GoDaddy]

GoDaddy's Blitz results look exactly like what you hope for: a small spike at the very beginning and then perfectly stable performance afterwards. There are no complaints about its cached performance. Full Blitz Results (PDF)

Uptime

Two third-party uptime monitoring services (UptimeRobot and StatusCake) tracked the test site for a month. The results for GoDaddy were 99.9% and 100% uptime respectively. That is the uptime level you would expect of any good service.

WebPageTest

“WebPagetest is an open source project that is primarily being developed and supported by Google as part of our efforts to make the web faster.” WebPageTest grades performance and allows you to run tests from multiple locations simulating real users. GoDaddy was tested from Dulles, VA, Miami, FL, Denver, CO, and Los Angeles, CA.

Company Dulles, VA Miami, FL Denver, CO Los Angeles, CA Average (load times in seconds)
GoDaddy 1.607 1.355 0.934 0.855 1.18775

There were absolutely no issues with their WebPageTest results; the site loaded completely in under 1.2 seconds on average.

Conclusion

GoDaddy [Reviews] is one of the top tier WordPress hosting providers when looking at performance. GoDaddy continues to surprise me. They flew through all the tests, including a weird issue where they transferred 3X the data during the LoadStorm test, and didn't show any signs of stress. The only comparison I have to last time is the Blitz testing, where they eked out another 3000+ hits and raised their hits/second from 829 to 888. GoDaddy also raised their max hit rate marginally from 1750 to 1763. What's more impressive is they reduced their errors+timeouts from 686 to 93. More hits with fewer errors. From a performance perspective, they did excellently in absolute terms and relative to their last benchmarks.

Get 25% off with Coupon Code: cjcwp1


Black Friday – Cyber Monday Web Hosting Deals

These are all the deals from companies we track here at Review Signal that are having Black Friday - Cyber Monday specials.

Company When? Deals
A Small Orange [Reviews] Midnight Friday to 11:59pm Monday All Shared & Business Hosting plans: 75% off any billing cycle. (Coupon code: GIVETHANKS)
All Cloud plans: More resources and free addons! 2x HD & RAM, FREE VIP Boost Addon, FREE Softaculous Addon (No code needed)
Semi-Dedicated Hosting Plans - 25% off any billing cycle. (Coupon code: SEMI14)
Use Code DEDITHX for 45% off Starter Servers (Dedicated Annual Plan)
Use code STANDARDTHX for 47% off Standard Servers (Dedicated Annual Plan)
Use code PROTHX for 49% off Professional Servers (Dedicated Annual Plan)
Use code ULTIMATETHX for 42% off Ultimate Servers (Dedicated Annual Plan)
BlueHost [Reviews] Friday 12:01am EST to Sunday 11:59pm Pricing as low as $3.49
Monday 12:01am EST to 11:59PM Pricing as low as $2.95
FlyWheel [Reviews] Midnight Friday to 11:59pm Monday 50% off annual hosting plans
Host Gator [Reviews] 12am CST to 1am CST 75% off
1am CST-11:59pm Monday 55% off
Randomly 75% off for 1 hour 9 times from Black Friday-Cyber Monday
Kinsta Friday 0:00 GMT to Monday 23:59 PST 50% off first month
3 months free with annual plan
SiteGround [Reviews] Friday through Monday Up to 70% off
WPEngine [Reviews] Ends 12/2/14 4 months free with annual plan with coupon code 'CyberHosting14'

 

Did I miss any? Leave a comment below.

WordPress Hosting Performance Benchmarks (November 2014)


Sponsored by LoadStorm. The easy and cost effective load testing tool for web and mobile applications.

This is the second round of managed WordPress web hosting performance testing. You can see the original here. The latest (2015 Edition) can be found here.

Companies Tested

A Small Orange* [Reviews]
BlueHost [Reviews]
CloudWays* [Reviews]
DreamHost [Reviews]
FlyWheel* [Reviews]
GoDaddy* [Reviews]
Kinsta*
LightningBase*
MediaTemple* [Reviews]
Nexcess*
Pagely* [Reviews]
Pantheon* [Reviews]
PressLabs*
SiteGround*† [Reviews]
WebSynthesis* [Reviews]
WPEngine* [Reviews]

Note: Digital Ocean and Pressable were removed from testing.

*Company donated an account to test on. I checked to make sure I was on what appeared to be a normal server.

†Tests were performed with SiteGround's proprietary SuperCacher module turned on fully.

The Products (Click for Interactive Table)

 

[Screenshot: WordPress hosting product comparison chart]

Methodology

The question I tried to answer is how well do these WordPress hosting services perform? I tested each company on two distinct measures of performance: peak performance and consistency.

All tests were performed on an identical WordPress dummy website with the same plugins except in cases where hosts added extra plugins. Each site was monitored for one month (July 2014) for consistency.

1. LoadStorm

LoadStorm was kind enough to give me unlimited resources to perform load testing on their platform, and multiple staff members were involved in designing and testing these WordPress hosts. I created identical scripts for each host to load a site, log into the site and browse the site. Then I increased the user load until a web host started to fail. I stopped at 2000 concurrent users for the web hosts that were left unscathed by load testing. Logging users in was designed to break some of the caching and better simulate real user load, which a lot of people (both readers and hosting companies) requested after the first round of testing.

2. Blitz.io

I used Blitz again to compare against previous results. The first test was 60 seconds, scaling from 1 to 1,000 users. The second test was 60 seconds, scaling from 1 to 2,000 users.

3. Uptime (UptimeRobot and StatusCake)

Consistency matters. I wanted to see how well these companies performed over a longer period of time. I used two separate uptime monitoring services over the course of a month to test consistency.

4. WebPageTest.org

"WebPagetest is an open source project that is primarily being developed and supported by Google as part of our efforts to make the web faster." WebPageTest grades performance and allows you to run tests from multiple locations simulating real users. I tested from Dulles, VA, Miami, FL, Denver, CO, and Los Angeles, CA.

Background Information

Before I go over the results I wanted to explain and discuss a few things. Every provider I tested had the latest version of WordPress installed. I had to ask a lot of companies to disable some security features to perform accurate load tests. Those companies were: GoDaddy, LightningBase, MediaTemple, SiteGround and WebSynthesis. I also asked DreamHost and WPEngine, but they refused my request.

Some companies were more cooperative than others. SiteGround spent hours with me customizing their security features to let the load testing tools bypass their security measures. With PressLabs, we ran into an issue we were never able to resolve that prevented Load Storm from working properly on their servers. We spent hours trying to fix it, but couldn't find a solution. That's why they are missing some test data.

CloudWays is an interesting platform that lets you deploy your WordPress stack to either Digital Ocean or Amazon's EC2 servers. I was given a server on each platform with near comparable specs (EC2 Small 1.7GB vs Digital Ocean 2GB). So CloudWays is listed as CloudWays AWS and CloudWays DO to indicate which provider the stack was running on.

Pantheon was tested on their free development environment which I was told is identical to their production environment.

Results

Load Storm

I ran multiple Load Storm tests to get a sense of where to start testing. The first was 1-100 users, which not a single company struggled with. The second was 50-500 users, which again nobody struggled with. So the first meaningful test was 100-1000 users. For the companies that didn't struggle there, I did a 500-2000 user test. I ran these tests with an immense amount of help from Scott Price at LoadStorm. He spent hours with me, teaching me how to use LoadStorm, build tests and offering guidance/feedback on the tests themselves.

 Test 1. 100-1000 Concurrent Users over 30 minutes

 

Company Request Count Average RPS Peak Response Time (ms) Average Response Time (ms) Average Throughput (kB/s) Errors
A Small Orange 116127 64.52 2752 356 1318.55 41
BlueHost 107427 59.68 16727 1306 1159.55 13351
Cloudways DO 103359 55.57 16983 1807 1169.28 2255
Cloudways AWS 87447 47.01 16286 5436 821.75 18530
DreamHost 115634 62.17 15514 441 1244.31 4327
FlyWheel 116027 62.38 775 368 1287.86 0
GoDaddy 133133 71.58 1905 434 3883.42 0
Kinsta 116661 62.72 552 309 1294.77 0
LightningBase 117062 62.94 1319 256 1324.89 12
MediaTemple 116120 62.43 793 403 1304.27 0
Nexcess 116634 62.71 15085 294 1299.85 8
Pagely 119768 64.39 1548 461 1227.06 0
Pantheon 117333 63.08 528 264 1316.41 0
SiteGround 117961 63.42 939 165 180.09 0
WebSynthesis 116327 62.54 1101 332 1285.83 0
WPEngine 123901 68.83 10111 416 1302.44 2956

Discussion of Load Storm Test 1 Results

There was a pretty clear division of good and bad performance in this test. Most companies didn't struggle at all. A few collapsed: BlueHost, CloudWays AWS, CloudWays DO, and DreamHost. BlueHost started spewing 500 errors almost as soon as we started. CloudWays AWS started timing out immediately. CloudWays DO started having issues around 800 users and then started timing out. DreamHost started giving 503 Service Unavailable almost right away. It looks like our script triggered a security mechanism but they refused to work with me to test any further.

SiteGround ran into a security measure we weren't able to get around in time for publication. The server just seemed to throttle the connection again.

PressLabs isn't listed because we couldn't get LoadStorm to work on their system. I am not sure what was different about their backend, but I tried to work with PressLabs and LoadStorm to get it working to no avail.

 

[Charts: interactive LoadStorm results for A Small Orange, FlyWheel, GoDaddy, Kinsta, LightningBase, Nexcess, Pagely, Pantheon, SiteGround and WebSynthesis]

Test 2. 500 - 2000 Concurrent Users over 30 Minutes

I removed the hosts that failed and doubled the concurrent users for the second test.

Company Request Count Average RPS Peak Response Time (ms) Average Response Time (ms) Average Throughput (kB/s) Errors
A Small Orange 248249 133.47 5905 436 2639.68 0
FlyWheel 236474 127.14 3811 983 2499.11 16841
GoDaddy 285071 153.26 8896 371 8255.24 92
Kinsta 248765 133.74 942 316 2714.82 0
LightningBase 248679 133.7 3887 343 2763.92 23
MediaTemple 249125 133.94 1499 313 2748.32 9
Nexcess 243115 130.71 15097 388 2644.72 80
Pagely 256163 137.72 15078 446 2621.04 1
Pantheon 250063 134.44 1111 297 2754.67 0
WebSynthesis 240305 129.2 4389 743 2598.83 1173

Discussion of Load Storm Test 2 Results 

FlyWheel started to fail around 1500 users, causing 502 errors, and remained constant at that level of failure. I'm not sure what the bottleneck was; it didn't overload the server, but I suspect I/O somewhere bottlenecked, causing a certain number of requests to fail. WebSynthesis had a few errors as well: five separate spikes somewhat evenly spaced out. The server didn't show signs of failure; it looks like it might have been an issue with caches being refreshed and some requests failing in the meantime. WebSynthesis' error rate was still under 0.5%, so I don't have any real issue with those errors. The slower average response time can also be attributed to the spikes in performance.

Remarkably, some companies didn't even struggle. Kinsta kept sub-one-second response times for 30 minutes and nearly a quarter million requests. Most companies had a spike or two causing a higher peak response time, but Kinsta and Pantheon didn't (and Media Temple had a tiny one at 1.5 seconds). Simply amazing performance.

Another interesting note, GoDaddy pushed triple the amount of data through because their admin screen had a lot more resources being loaded. That's why the average throughput is so high. Despite that fact, it didn't seem to impact their performance at all, which is astounding.

Full Interactive Test Results

A Small Orange
FlyWheel
GoDaddy
Kinsta
LightningBase
MediaTemple
Nexcess
Pagely
Pantheon
SiteGround
WebSynthesis

Blitz.io

 Test 1. 1-1000 Concurrent Users over 60 seconds

Blitz Test 1. Quick Results Table

Company Success Errors Timeouts Avg Hits/Second Avg Response (ms)
A Small Orange 27595 14 0 460 67 ms
BlueHost 23794 1134 189 397 160 ms
CloudWays AWS 24070 162 148 401 138 ms
CloudWays DO 27132 118 127 452 49 ms
DreamHost 13073 45 7885 218 21 ms
FlyWheel 28669 20 10 478 27 ms
GoDaddy 26623 8 5 444 104 ms
Kinsta 27544 0 0 459 69 ms
LightningBase 27893 0 1 465 56 ms
MediaTemple 26691 8 9 445 102 ms
Nexcess 18890 2288 641 337 517 ms
Pagely 25358 9 0 423 156 ms
Pantheon 27676 21 0 461 64 ms
PressLabs 25903 143 0 432 89 ms
SiteGround 24939 0 0 416 152 ms
WebSynthesis 28913 0 0 482 19 ms
WPEngine 23074 121 4 385 247 ms

Discussion of Blitz Test 1 Results

I learned from the last round of testing that any host that isn't optimized for WordPress at all (a default install) will get destroyed by these tests, so I didn't include any of them this time. There weren't any failures as catastrophic this time.

Who performed without any major issues?

A Small Orange, FlyWheel, GoDaddy, Kinsta, LightningBase, MediaTemple, Pagely, Pantheon, SiteGround and WebSynthesis all performed near perfectly. There's nothing more to say for these companies other than that they did excellently. All of their error/timeout rates were below 0.5%.

Who had some minor issues?

CloudWays AWS, CloudWays DO, PressLabs and WPEngine. All four of these providers had over 100 errors/timeouts and error/timeout rates between 0.5% and 2%. Not a huge deal, but definitely not perfect.

Who had some major issues?

BlueHost, DreamHost, and Nexcess. BlueHost started to show stress around 40 seconds in and started to buckle around 47 seconds. DreamHost had a couple of spikes in response time and errors. However, it looks like the load testing tool may have hit some type of security limit, because requests started timing out while the server gave very fast responses and maintained roughly 250 hits/second constantly. It doesn't look like the server was failing. I couldn't get them to disable the security to really test it, so it's hard to say much more. Nexcess started to show stress around 20 seconds and buckle around 30 seconds.

 Test 2. 1-2000 Concurrent Users over 60 seconds

[Charts: Blitz.io 1-2,000 user test results for A Small Orange, BlueHost, CloudWays AWS, CloudWays DO, DreamHost, FlyWheel, GoDaddy, Kinsta, LightningBase, MediaTemple, Nexcess, Pagely, Pantheon, PressLabs, SiteGround, WebSynthesis and WPEngine]

Blitz Test 2. Quick Results Table

Company Success Errors Timeouts Avg Hits/Second Avg Response (ms)
A Small Orange 54152 26 1 903 77 ms
BlueHost 29394 14368 3408 490 234 ms
CloudWays AWS 25498 4780 8865 425 338 ms
CloudWays DO 53034 1477 49 884 58 ms
DreamHost 10237 5201 20396 171 201 ms
FlyWheel 56940 121 68 949 29 ms
GoDaddy 53262 29 64 888 105 ms
Kinsta 55011 32 0 917 69 ms
LightningBase 55648 0 0 927 58 ms
MediaTemple 53363 16 28 889 100 ms
Nexcess 25556 15509 4666 426 279 ms
Pagely 51235 41 2 854 147 ms
Pantheon 55187 91 0 920 65 ms
PressLabs 35547 4105 1569 592 326 ms
SiteGround 42645 490 220 711 276 ms
WebSynthesis 57776 1 0 963 20 ms
WPEngine 39890 304 333 665 364 ms

Discussion of Blitz Test 2 Results

Who performed without any major issues?

A Small Orange, FlyWheel, GoDaddy, Kinsta, LightningBase, MediaTemple, Pagely, Pantheon and WebSynthesis all performed near perfectly. All of their error/timeout rates were around 0.5% or lower.

Who had some minor issues?

SiteGround and WPEngine. Both of these providers had over 100 errors/timeouts and error/timeout rates between 0.5% and 2%. SiteGround started to show some stress around 30 seconds and didn't start to have real issues (errors) until after 50 seconds. WPEngine started to show stress around 20 seconds and performed slightly erratically until the end of the test.

Who had some major issues?

BlueHost, CloudWays AWS, CloudWays DO, DreamHost, Nexcess, and PressLabs. The four that had major issues in the last round completely failed, with error/timeout rates exceeding 50%. DreamHost, which looked like it was fine behind its security measures, buckled around 35 seconds into this test and started returning errors; response times increased and hits/second dropped. CloudWays DO definitely started to stress and show signs of buckling around 50 seconds, but its error rate was still under 3%. I don't think it would have lasted much longer had the test gone further, but it was the least bad failure. PressLabs was a surprise: it started to show stress around 25 seconds and started to buckle around 35 seconds into the test.

 Full Blitz Results (PDFs)

A Small Orange, BlueHost, CloudWays AWS, CloudWays DO, DreamHost, FlyWheel, GoDaddy, Kinsta, LightningBase, MediaTemple, Nexcess, Pagely, Pantheon, PressLabs, SiteGround, WebSynthesis, WPEngine.

Uptime Monitoring

Both uptime monitoring solutions were third party providers that offer free services. All the companies were monitored over an entire month (July 2014).

Uptime Robot

Company Uptime (%)
A Small Orange 100
BlueHost 99.71
CloudWays AWS 100
CloudWays DO 99.93
DreamHost 99.92
FlyWheel 99.97
GoDaddy 99.9
Kinsta 100
LightningBase 100
MediaTemple 99.81
Nexcess 100
Pagely 99.95
Pantheon 100
PressLabs 100
SiteGround 100
WebSynthesis 100
WPEngine 100

According to UptimeRobot, not a single company was below 99.5% uptime. In fact, with the exception of Media Temple and BlueHost, they were all above 99.9% uptime. For reference, 99.5% uptime works out to roughly 3.6 hours of downtime in a 30-day month, and 99.9% is less than 45 minutes of downtime per month. Overall, nothing to really complain about according to Uptime Robot.

StatusCake

Company Availability (%) Response Time (s)
A Small Orange 100 0.23
BlueHost 99.69 2.45
CloudWays AWS 99.8 0.75
CloudWays DO 100 2.41
DreamHost 100 2.22
FlyWheel 99.9 1.99
GoDaddy 100 2.41
Kinsta 100 2.13
LightningBase 100 1.6
MediaTemple 100 1.18
Nexcess 100 2.33
Pagely 100 2.49
Pantheon 100 2.04
PressLabs 100 1.49
SiteGround 99.93 1.64
WebSynthesis 100 1.77
WPEngine 100 2.76

According to StatusCake, the results look even better. I used multiple services to monitor because there can be networking issues unrelated to a web host's performance. StatusCake only detected issues with four companies, which is fewer than UptimeRobot detected. It's hard to say which is better or right. But they both say that uptime didn't really seem to be an issue for any company.

StatusCake also provides an average response time metric. According to them, it uses a browser instance and fully renders the page. They also test from many different geographical locations. I don't have any further insight into these tools beyond what I can read in their documentation. If they are to be believed, A Small Orange has astonishingly fast performance. WPEngine has the slowest average load time at 2.76 seconds, which isn't that bad.

 

WebPageTest.org

Every test was run with the settings: Chrome Browser, 9 Runs, native connection (no traffic shaping), first view only. This was tested against the default install from every company.

Company Dulles, VA Miami, FL Denver, CO Los Angeles, CA Average (load times in seconds)
A Small Orange 1.443 0.801 0.836 0.64 0.93
BlueHost 1.925 1.321 1.012 0.785 1.26075
CloudWays AWS 0.655 0.867 0.967 0.746 0.80875
CloudWays DO 0.493 0.851 1.036 0.811 0.79775
DreamHost 1.177 0.863 1.067 1.147 1.0635
FlyWheel 0.497 0.864 1.066 1.109 0.884
GoDaddy 1.607 1.355 0.934 0.855 1.18775
Kinsta 0.759 0.752 0.947 0.592 0.7625
LightningBase 0.584 0.787 0.936 0.675 0.7455
MediaTemple 1.516 0.983 0.955 0.555 1.00225
Nexcess 1.433 1.139 1.196 0.859 1.15675
Pagely 6.831 0.86 0.913 0.709 2.32825
Pantheon 0.654 0.828 0.923 0.954 0.83975
PressLabs 0.715 1.018 1.213 0.723 0.91725
SiteGround 1.392 1.239 1.01 1.212 1.21325
WebSynthesis 0.407 0.835 0.982 1.024 0.812
WPEngine 0.821 1.086 0.839 0.685 0.85775

There isn't much surprising here. The pack is really tight, with less than a half-second difference in average load time between the top and bottom hosts, if we exclude Pagely. I'm not sure what happened with their Dulles, VA test, but it seems like there was something terribly wrong with the network when I tested it. The response times from every other location were incredibly fast (<1 second). I'm going to chalk it up to a bad node somewhere causing that particular test to perform so poorly; it's almost certainly not a reflection of their hosting.

What is interesting, compared to last time is that these companies are getting faster. There was only one company with a sub 1 second average last time. Now there are 10 companies (11 if you count Pagely). Three of them were above one second last time, so they are showing signs of improvement (Pagely, WebSynthesis, WPEngine). It also means there is a lot of new competition that is not behind the entrenched players in terms of performance.

Conclusion

Every service seems to have its issues somewhere if you look hard enough. I try to avoid injecting my personal opinion and bias as much as possible, so I won't be ranking or outright saying any single company is the best. Some providers did exceptionally well and tended to clump together performance-wise; I will call those the top tier providers. This top tier designation relates to performance only and is based solely on the results of these tests. What each of these companies is offering is different and may best suit different audiences depending on a variety of factors beyond performance, such as features, price, support, and scale (I tested mostly entry level plans). But I will provide a short summary and discussion of the results for each provider.

Top Tier WordPress Hosting Performance

A Small Orange, GoDaddy, Kinsta, LightningBase, MediaTemple, Pagely, Pantheon, WebSynthesis

Each of these companies stayed below the 0.5% error rate in all load testing, all the way up to 2000 concurrent users on both LoadStorm and Blitz.

Honorable Mention

FlyWheel gets an honorable mention. They performed really well on many of the tests, but fell apart on the final LoadStorm test with 2000 logged-in users. I'll explain more in their individual section as to why this is deserving of an honorable mention.

Amazon Web Services (AWS) vs Digital Ocean

One of the most interesting comparisons to me was CloudWays. They provide you with the ability to choose which VPS provider and type you want, and then set up their WordPress configuration (in an identical manner, from my understanding) on the VPS. I was granted access to one Amazon and one Digital Ocean VPS from them. The Amazon instance was a Small (1.7GB RAM) and the Digital Ocean instance had 2GB RAM.

[Chart: LoadStorm head-to-head results, CloudWays AWS vs CloudWays DO]

The head-to-head LoadStorm results (1000 user test) above pretty clearly show Digital Ocean performing better in every category (with the exception of Peak Response Time, which was a timeout). Digital Ocean sent more data, had fewer errors, and did it faster.

[Chart: Blitz.io head-to-head results, CloudWays AWS vs CloudWays DO]

The Blitz.io results also show pretty clearly that Digital Ocean outperformed AWS by a wide margin. It delivered twice as many hits with fewer errors and timeouts.

Based on these tests, it's pretty easy to conclude that on low-end VPSs, Digital Ocean's hardware outperforms Amazon's.

Individual Host Analysis

A Small Orange

They've improved their LEMP stack since the last time I tested. They never buckled in any test and were definitely one of the best. Their staff was incredibly friendly (special thank you to Ryan MacDonald) and they've stepped up their performance game. The one thing that isn't quite there yet is the documentation/user experience; there are a lot of improvements they could make to render their LEMP stack more accessible to the less tech-savvy. All in all, the experience was in line with what I would expect from a company that has one of the highest support ratings on our site.

BlueHost

Their WordPress offering is brand new. It struggled in every load test. Their price is on the middle-high end but the performance was not. Ultimately, they fell short of where I would expect based on pricing and the competition.

CloudWays

CloudWays was certainly an interesting company to test given that they had two entries, one running on Amazon Web Services (EC2) and another on Digital Ocean. The Digital Ocean VPS outperformed AWS in every category, which was interesting. The AWS instance was near the bottom of the pack performance-wise, while the Digital Ocean one was in the middle. It is a very interesting platform they have built, which allows deployment and management across providers. However, their performance isn't quite there yet. Other companies are running on the same hardware and getting better results. CloudWays doesn't do just WordPress, so it's easy to understand why their performance might not quite match some of their competitors who focus solely on WordPress.

DreamHost

DreamPress was another disappointment. The security features hid some of the performance weakness on the first Blitz test, but it completely failed on the second. DreamPress is advertised as having automatic RAM scaling, with each site run by two VPS instances. It's very unclear what resources you are really getting for your money. They are charging $50/month for a 1GB RAM VPS, so I get the feeling a lot of resources are shared and it may not be a true VPS.

FlyWheel

FlyWheel was excellent on every test except the final 2000 logged-in user test from LoadStorm. They are built on top of Digital Ocean, and I was using the smallest VPS. Yet their performance beat VPSs on Digital Ocean that had four times the resources (CloudWays DO). For cached content on the Blitz test, they had the second-highest hits/second and second-best response time. I suspect the testing hit a hardware maximum. FlyWheel had the best performance with the lowest dedicated resources (512MB RAM). The companies that outperformed it had more resources dedicated to them, or shared resources which presumably would allow access to far more than 512MB of RAM. It was an impressive performance given what they are selling, and combined with them having the best reviews of any company Review Signal has ever tracked, FlyWheel certainly merits serious consideration.

GoDaddy

GoDaddy continues to surprise me. They flew through all the tests, including a weird issue where they transferred 3X the data during the LoadStorm test, and didn't show any signs of stress. The only comparison I have to last time is the Blitz testing, where they eked out another 3000+ hits and raised their hits/second from 829 to 888. GoDaddy also raised their max hit rate marginally from 1750 to 1763. What's more impressive is they reduced their errors+timeouts from 686 to 93. More hits with fewer errors. From a performance perspective, they did excellently in absolute terms and relative to their last benchmarks.

Kinsta

A newcomer that jumped straight to the top of the performance tiers. Kinsta's performance was amazing in the Load Storm 2000 logged-in user test. They had the lowest peak response time and zero errors over a 30 minute test. They didn't struggle with any tests whatsoever and showed zero downtime. Kinsta's performance was top tier.

LightningBase

Another newcomer that jumped straight to the top, and one of the cheapest too, starting at under $10/month. LightningBase aced the Blitz testing and did excellently on Load Storm. There was no downtime monitored. LightningBase belongs in the top tier and is delivering amazing value.

Media Temple

Media Temple is interesting because I was told it was running the same technology as GoDaddy (GoDaddy bought Media Temple a year ago). They have a few more premium features like Git and a staging environment. Media Temple's performance was superb. It actually beat GoDaddy's performance in just about every measure by a marginal amount on both Load Storm and Blitz's load testing. If GoDaddy has top tier performance, Media Temple certainly does as well.

Nexcess

Nexcess's performance was excellent in the Load Storm testing. However, it completely collapsed during the Blitz load testing. I'm really not sure what to make of those results. Perhaps the underlying shared hardware is very good but the static caching setup isn't quite up to snuff? It's probably not worth speculating; suffice it to say, Nexcess ended up looking like a middle-of-the-pack web host instead of a top tier one because of the Blitz test.

Pagely

Pagely put on another spectacular performance. They handled the Load Storm test with 1 error. Blitz results stayed similar to the last run. They handled more hits, but had a few more errors+timeouts (1 last time, 43 this time). Really not much to add here other than they continue to be in the top tier.

Pantheon

Pantheon specialized in Drupal hosting, so I was wondering how well it would translate to WordPress. The short answer is, it converted over really well. They had a flawless run on the LoadStorm test: 0 errors and not even any spikes in response time over 30 minutes. They are one of the most expensive options on this list (second only to PressLabs), but they definitely make a case for it. Perfect uptime and near flawless load testing put them easily into the top tier.

PressLabs

It's hard to write much about PressLabs because we couldn't get LoadStorm to work properly to test out their hosting. However, their Blitz results were lackluster. For the most expensive plan we tested, it was a bit of a disappointment to see it not do stunningly well.

SiteGround

SiteGround sadly didn't do as well as they did last time. Their Blitz load testing score went down slightly. We couldn't bypass their security measures to properly test Load Storm. They obviously have some good protection measures to prevent malicious users from trying to access too many things, but it also meant I couldn't get a deeper look this time around. That was a change from the last round of testing. Slightly disappointing to see the performance dip, but I hope it was due to the extra security measures they put in place that made testing them difficult.

WebSynthesis

WebSynthesis was teetering on the edge of having too many errors (0.5%) in the Load Storm test, but they stayed under it and handled the test quite well. They also had no weird security issues this time around, and WebSynthesis led the pack on Blitz testing. They went from 871 hits/second to 963 hits/second, leading every provider on the Blitz tests with a whopping 1 error to boot. Sprinkle in some perfect uptime numbers and it's clear WebSynthesis is still a top tier provider and is continuing to get better.

WPEngine

I feel like I could copy+paste my last conclusion about WPEngine. "WPEngine had some issues. Uptime was not one of them, they were perfect or upwards of 99.9% in that department. However, their performance shortcomings became apparent during the load tests." They didn't even make it to the final round of Load Storm testing. They were also middle of the pack on the Blitz testing. Compared to the last round of Blitz testing, the results were nearly identical, with slightly fewer errors+timeouts. I'm not sure if I should be disappointed to not see improvement or relieved to see them maintain the exact same performance and consistency. Their vaunted rankings on Review Signal's reviews have slipped relative to a few of the other providers on here (FlyWheel and WebSynthesis). While they were once leading the pack in technology, the rest of the pack is starting to catch up.

 

Thank Yous

A special thanks goes out to the sponsor of this post and an individual employee, Scott Price of Load Storm, who worked countless hours with me in order to perform these tests.

I want to thank all the companies that participated in these tests. I tested the support staff a fair bit at some of them and I thank them for their time and patience.

A special thanks goes to Chris Piepho from LightningBase, who also provided a lot of feedback on the original article and helped improve the methodology for this round.

A huge thanks goes out to Mark Gavalda at Kinsta for his feedback and performance testing discussions. He's tested some things further out than I have, like HHVM and php-ng performance. Also to their designer, Peter Sziraki, who designed the header image for this article.

 

Introducing Pagely and FlyWheel

I am happy to announce two new hosts on Review Signal today.

One of them was a long time in coming, Pagely. The original managed WordPress hosting company. In my original managed WordPress hosting performance benchmarks, Pagely came out at the top - having no trouble with any of the tests I threw at their services.

[Chart: Pagely rating trend, October 2014]

 

It's a bit disappointing to see that their reviews don't quite match their performance. From what I can tell, it looks like they've struggled with some major outages in the past. However, there is an upward trend in opinions about them. Their performance is top notch, it would be great to see the rest of the service catch up.

Our second addition to Review Signal is FlyWheel, which has an astounding 95% Overall Rating. FlyWheel is another managed WordPress hosting service, built on top of Digital Ocean. I don't think I've ever seen a company have such positive reviews. It's a struggle to find anyone saying anything negative about them. It's wonderful to see such positive reviews for a new company.

[Chart: FlyWheel rating trend, October 2014]

I hope they can keep it up, but my past experience says the companies that start out so remarkably strong generally tend to come down to more 'normal' levels in the 70%ish range. Their competitors WPEngine (82% -> 73%) and WebSynthesis (83% -> 76%) both did. So did the company they are built on top of, Digital Ocean, which went from 81% to 76%. Great service seems like the hardest thing to scale for a web hosting company. I hope FlyWheel can break the rules and continue its streak of excellence.

40 Million hits a day on WordPress using a $10 VPS

I recently tested many of the biggest names in managed WordPress hosting in my article Managed WordPress Hosting Performance Benchmarks. (Update: 2016 WordPress Hosting Performance Benchmarks) I am preparing to do a second round of testing with double the number of companies on board. Some of us like to set up servers ourselves (or are cheap).

Given a reasonable VPS, what sort of performance can we get out of it?

10 million hits as measured by Blitz.io was the benchmark to beat based on a previous iteration of this question.

I decided to test this from the ground up: let's start with the most basic configuration and gradually try to improve it.

All tests were performed on a $10/Month 1GB Ram Digital Ocean VPS running Ubuntu 14.04x64. All code and documentation are also available on GitHub.

LAMP Stack

Based on my previous experience benchmarking WordPress, I didn't have high hopes for this test. Last time I crashed MySQL almost instantly. This time I ran Blitz a lot slower, from 1-50 users. The performance wasn't impressive: it started slowing down almost immediately and continued to get worse. No surprises.

[Chart: Blitz.io results, default LAMP stack]

The LAMP stack setup script is available on GitHub. Download full Blitz results from LAMP Stack (PDF).
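For anyone who doesn't want to dig through GitHub, here is a rough sketch of what a basic LAMP + WordPress install on Ubuntu 14.04 looks like. This is an illustration, not the exact script from the repository; the web root, database name and credentials are placeholders:

#!/bin/sh
# Sketch: basic LAMP stack + WordPress on Ubuntu 14.04 (placeholder credentials).
apt-get update
apt-get -y install apache2 mysql-server mysql-client
apt-get -y install php5 libapache2-mod-php5 php5-mysql php5-gd php5-cli

# Drop WordPress into the default Apache web root.
cd /var/www/html
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz --strip-components=1
chown -R www-data:www-data /var/www/html

# Create a database and user for WordPress (change the password!).
mysql -u root -e "CREATE DATABASE wordpress; GRANT ALL ON wordpress.* TO 'wpuser'@'localhost' IDENTIFIED BY 'changeme';"

service apache2 restart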

LAMP + PHP5-FPM

The next thing I tried was PHP-FPM (FastCGI Process Manager). It got slightly better performance, with response times just under 200ms faster at 50 users. But the graph looks pretty similar: response times increase quickly as the number of users goes up. Not a great improvement.

[Chart: Blitz.io results, LAMP + PHP5-FPM]

The LAMP+ PHP5-FPM setup script is available on GitHub. Download full Blitz results from LAMP+PHP5-FPM (PDF).
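For the curious, wiring Apache to PHP5-FPM on Ubuntu 14.04 usually went through mod_fastcgi. The sketch below is my reconstruction of that wiring rather than the contents of the GitHub script, so treat the module and socket details as assumptions to verify:

#!/bin/sh
# Sketch: hand PHP off to PHP5-FPM instead of mod_php (Ubuntu 14.04 / Apache 2.4).
apt-get -y install php5-fpm libapache2-mod-fastcgi
a2dismod php5
a2enmod actions fastcgi alias

# Point Apache at the php5-fpm socket.
cat > /etc/apache2/conf-available/php5-fpm.conf <<'EOF'
<IfModule mod_fastcgi.c>
	AddHandler php5-fcgi .php
	Action php5-fcgi /php5-fcgi
	Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
	FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization
</IfModule>
EOF
a2enconf php5-fpm
service php5-fpm restart
service apache2 restart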

Nginx + PHP-FPM (aka LEMP Stack)

Maybe the problem is Apache? I tried Nginx next. What happened? I got worse performance than the default LAMP stack (wtf?). Everyone said Nginx was faster. Turns out, it's not magically faster than Apache (and appears worse out of the box).

[Chart: Blitz.io results, LEMP (Nginx + PHP-FPM)]

The LEMP + PHP-FPM setup script is available on GitHub. Download full Blitz results from LEMP+PHP-FPM (PDF).
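At this stage the Nginx side is just a plain server block handing .php requests to PHP-FPM, with no caching yet. A minimal sketch (server_name and root are placeholders; the socket path is the Ubuntu 14.04 default):

server {
	listen 80;
	server_name example.com;
	root /usr/share/nginx/html;
	index index.php index.html;

	location / {
		# Pretty permalinks: fall back to index.php if the file doesn't exist.
		try_files $uri $uri/ /index.php?$args;
	}

	location ~ \.php$ {
		try_files $uri =404;
		fastcgi_pass unix:/var/run/php5-fpm.sock;
		fastcgi_index index.php;
		include fastcgi_params;
	}
}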

Microcaching

I've written about creating a reverse proxy and cache in Nginx before. But I've already set up Nginx as my web server, so I don't need a reverse proxy this time. Nginx has fastcgi_cache, which allows us to cache results from FastCGI processes (PHP). So I applied the same technique here and the results were staggering. The response time dropped to 20ms (+/- 2ms) and it scaled from 1 to 1000 concurrent users.

"This rush generated 28,924 successful hits in 60 seconds and we transferred 218.86 MB of data in and out of your app. The average hit rate of 482/second translates to about 41,650,560 hits/day."

All that with only 2 errors (connection timeouts).

[Chart: Blitz.io results, LEMP + PHP-FPM + microcaching]

The LEMP + PHP-FPM + microcaching setup script is available on GitHub. Download full Blitz results from LEMP+PHP-FPM + microcaching (PDF).

Microcaching Config Walkthrough

We do the standard

apt-get update
apt-get -y install nginx
sudo apt-get -y install mysql-server mysql-client
apt-get install -y php5-mysql php5-fpm php5-gd php5-cli

This gets us Nginx, MySQL and PHP-FPM.

Next we need to tweak some PHP-FPM settings. I am using some one-liners to edit /etc/php5/fpm/php.ini and /etc/php5/fpm/pool.d/www.conf to uncomment and change some settings [setting cgi.fix_pathinfo=0 and uncommenting the listen.(owner|group|mode) settings].

sed -i "s/^;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/" /etc/php5/fpm/php.ini
sed -i "s/^;listen.owner = www-data/listen.owner = www-data/" /etc/php5/fpm/pool.d/www.conf
sed -i "s/^;listen.group = www-data/listen.group = www-data/" /etc/php5/fpm/pool.d/www.conf
sed -i "s/^;listen.mode = 0660/listen.mode = 0660/" /etc/php5/fpm/pool.d/www.conf

Now make sure we create a folder for our cache

mkdir /usr/share/nginx/cache

We will need that folder in our Nginx configs. In our /etc/nginx/sites-available/default config we add the following to our server {} settings. We also make sure to add index.php to our index directive and set our server_name to a domain or IP (see the snippet after the location block below).

location ~ \.php$ {
		try_files $uri =404;
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_cache  microcache;
		fastcgi_cache_key $scheme$host$request_uri$request_method;
		fastcgi_cache_valid 200 301 302 30s;
		fastcgi_cache_use_stale updating error timeout invalid_header http_500;
		fastcgi_pass_header Set-Cookie;
		fastcgi_pass_header Cookie;
		fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
		fastcgi_pass unix:/var/run/php5-fpm.sock;
		fastcgi_index index.php;
		include fastcgi_params;
}
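The index and server_name changes mentioned above sit in the same server {} block and would look roughly like this (the domain is a placeholder):

server {
	listen 80;
	server_name example.com;
	root /usr/share/nginx/html;
	index index.php index.html index.htm;

	# ... the location ~ \.php$ block from above goes here ...
}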

Then we move on to our /etc/nginx/nginx.conf and make a few changes, like increasing our worker_connections. We also add this line inside the http{} block, before including our other configs:

fastcgi_cache_path /usr/share/nginx/cache/fcgi levels=1:2 keys_zone=microcache:10m max_size=1024m inactive=1h;

This creates our fastcgi_cache.
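The worker_connections bump mentioned above lives in the events{} block of the same file. The exact value is a tuning choice, not something from this walkthrough:

events {
	# Allow each worker process to handle more simultaneous connections.
	worker_connections 4096;
}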

All of these changes are done with somewhat ugly one-liners in the script (if someone has a cleaner way of doing this, please share!). I've cleaned them up and provided the full files for comparison.

Go Big or Go Home

Since Nginx didn't seem to blink when I hit it with 1000 users, I wondered how high it would really go. So I tried from 1-3000 users and guess what?

"This rush generated 95,116 successful hits in 60 seconds and we transferred 808.68 MB of data in and out of your app. The average hit rate of 1,585/second translates to about 136,967,040 hits/day."

The problem was I started getting errors: "4.74% of the users during this rush experienced timeouts or errors!" But it amazingly peaked at 2,642 hits per second. I watched my processes while the test was running and saw all 4 Nginx workers fully maxing out the CPU (25% each). I think I hit the limit that a 1GB, 1 Core VPS can handle. This setup was a champ though. I'm not sure what caused the big spike (perhaps a cache refresh), but if you want to roll your own WordPress VPS and serve a lot of static content, this template should be a pretty good starting point.
[Chart: Blitz.io results, 1-3,000 user test on the microcached LEMP stack]

Download full results of 3000 users blitz test (PDF)

Conclusion

There are definitely a lot of improvements that can be made to this config. It doesn't optimize anything that doesn't hit the cache (which will be any dynamic content, most often logged-in users). It doesn't cover security at all. It doesn't do a lot of things. If you aren't comfortable editing PHP, Nginx and other Linux configs/settings and are running an important website, you should probably go with a managed WordPress company. If you really need performance and can't manage it yourself, you should look at our Managed WordPress Hosting Performance Benchmarks. If you just want a good web hosting company, take a look at our web hosting reviews and comparison table.

All code and documentation is available on GitHub

Thanks and Credits:

The title was inspired by Ewan Leith's post 10 Million hits a day on WordPress using a $15 server. Ewan built a server that handled 250 users/second without issue using Varnish, Nginx, PHP-APC, and W3 Total Cache.

A special thanks goes to A Small Orange, who have let me test multiple iterations of their LEMP stack, and especially to Ryan MacDonald at ASO, who spent a lot of time talking WordPress performance with me.

Bias, Negativity, Sentiment and Review Signal

Photo Credit: _Abhi_

People are more likely to express negative sentiments or give negative reviews than they are positive ones.

I hear this in almost every discussion about Review Signal and how it works. There are certainly lots of studies to back it up. One major study concluded that bad is stronger than good. One company found people were 26% more likely to share bad experiences. There is plenty of research in the area of Negativity Bias for the curious reader.

Doesn't that create problems for review sites?

The general response I have to this question is no. It doesn't matter if there is a negativity bias when comparing between companies because it's a relative comparison. No company, at least not at the start, has an unfair advantage in terms of what their customers will say about them.

Negativity bias may kick in later when customers have had bad experiences and want to continually share that information with everyone and anyone despite changes in the company. Negative inertia or the stickiness of negative opinion is a real thing. Overcoming that is something that Review Signal doesn't have any mechanism to deal with beyond simply counting every person's opinion once. This controls it on an individual level, but not on a systemic level if a company has really strong negative brand associations.

What if a company experiences a disaster, e.g. a major outage, does that make it hard to recover in the ratings?

This was a nuanced question that I hadn't heard before and credit goes to Reddit user PlaviVal for asking.

Luckily, major outages are a rare event. They are fascinating to observe from a data perspective. The most recent and largest outage was the EIG (BlueHost, HostGator, JustHost, HostMonster) outage in August 2013. If we look at the actual impact of the event, I have a chart available here.

When I looked at the EIG hosts post-outage, there really hadn't been a marked improvement in their ratings. Review Signal's company profiles have a Trends tab for every company, which graphs ratings on a per-month basis so you can see how a company has done over the past 12 months.

[Charts: BlueHost and HostGator rating trends, May 2014]

There is definitely some variance, but poor ratings post-outage seem quite common. It's hard to make an argument that these companies have recovered to their previous status and are simply being held back by major outcries that occurred during the outage.

The only other company with a major outage I can track in the data is GoDaddy. GoDaddy has had numerous negative events in their timeline since we started tracking them: the elephant killing scandal, SOPA, DNS outages and multiple Super Bowl events.

[Chart: GoDaddy rating trend, August 2012 - July 2013]

[Chart: GoDaddy rating trend, June 2013 - May 2014]

There are clear dips for events such as the September 2012 DNS outage and the Super Bowl in February. Their overall rating is 46% right now and the trend is slightly up. But they seem to hang around 45-50% historically and maintain that despite the dips from bad events. There is arguably some room for them to be rated higher depending on the time frame you think is fair, but we're talking a couple of percent at most.

What about outages affecting multiple companies, e.g. resellers or infrastructure providers, like Amazon, that others are hosting on top of? Are all the companies affected equally?

No. Just because there is an outage with a big provider that services multiple providers doesn't mean that all the providers will be treated identically. The customer reaction may be heavily influenced by the behavior of the provider they are actually using.

Let's say there is an outage in Data Center X (DC X). It hosts Host A and Host B. DC X has an outage lasting 4 hours. Host A tells customers 'sorry, it's all DC X's fault' and Host B tells customers 'We're sorry, our DC X provider is having issues; to make up for the downtime your entire month's bill is free because we didn't meet our 99.99% uptime guarantee.' Even though Host A and Host B had identical technical issues, I imagine the responses from customers would be different. I've definitely experienced great customer service which changed my opinion of a company dramatically based on how they handled a shitty situation. I think the same applies here.

Customer opinions are definitely shaped by internal and external factors. The ranking system here at Review Signal definitely isn't perfect and has room for improvement. That said, right now, our rankings don't seem to be showing any huge signs of weakness in the algorithms despite the potential for issues like the ones talked about here to arise.

Going forward, the biggest challenge is going to be creating a decay function. How much is a review today worth versus a review in the past? At some point, a review of a certain age just isn't as good as a recent review. This is a problem I'm going to have to address and figure out. For now, it's on the radar, but it doesn't seem like a major issue yet.

Introducing Windows Azure

I am happy to announce we've added the Windows Azure hosting platform to Review Signal today. Azure is definitely a big player in the cloud server market. It was also a very notable absence in our listings. Now that we've added Azure and Amazon AWS in the past few months, our cloud listings for IaaS providers look a lot more complete.

Windows Azure comes in with a 70% overall rating, which is quite respectable and puts it right next to RackSpace in the rankings. The support scores, however, seem to be a lot lower at an underwhelming 56%.

Want to see the full cloud provider rankings? Visit our complete rankings and click on the Cloud tab.

The Best Web Hosting Company

Who is the best web host? Which web hosting company is the best?

This is such a common question and people keep asking it. Why? Because there is no 'best' company.

Best is pretty well defined, but human perspective and opinion are not. Every company has many properties that define its service. Some examples of those properties might be customer support, price, uptime, hardware, software, sales people, and engineers. There are a few very well-defined properties, like price. But price is meaningless without the greater context of 'What am I getting for that price?'

So really, we're left with a large set of hard to quantify and compare properties that make up service companies.

So let's throw our hands up and give up? Not quite. Some companies are definitely better than others. Some are definitely worse. Most occupy a middle ground of being ok. The larger the company, the more opportunity for variance in experience. With web hosting companies, most people's experiences touch either the technical side or the human side of the company. The technical side of web hosting (at least in the shared market) is quickly becoming commoditized. So that really leaves the human side to shape opinion of a company.

I have a personal opinion that you can attempt to quantify the somewhat intangible human experiences with big data. If you are able to collect enough opinions about multiple companies, you could compare people's average experience. That is the entire basis of Review Signal.

Check Out Review Signal's Web Hosting Review Data and Compare Web Hosting Companies

So we're back to the question of what is the best web hosting company? Based on the hundreds of thousands of opinions Review Signal has collected the answer is, it depends. No company is close to perfect. That seems like a reasonable outcome. People are going to have bad experiences and encounter problems with any service company. The highest rated company we are tracking right now is WebSynthesis at 84% (source: WebSynthesis Reviews - Updated May 2014). That means 16% of people expressed an unfavorable opinion of them. The lowest rated is MochaHost at 19% (source: MochaHost Reviews - Updated May 2014). So even the least liked company has 19% of people expressing favorable opinions.

So you could end up happy or angry with any company. All you can do is hedge your bets by picking a company that a greater percentage of people like, relatively speaking. The other issue is information and experiences change. Companies get bought/sold. They move. They make personnel changes. Opinions of a company can be fluid. However, there is rarely titanic shifting of opinions without a catastrophic event (for example: Post Mortem of the EIG Outage (August 2, 2013) That Affected BlueHost, HostGator, JustHost and HostMonster). You're more likely to see slow changes over long periods of time.

So what's the best web hosting company? It depends. But we've built a tool to help you make smarter hosting choices based on what everyone else is saying. Just check out the data we've collected.

How to send email from the command line with SendGrid

Sending email should be simple. It's something I really don't like thinking about.

I ran into an issue where I wanted to send email from the command line in a bash script that helps power Review Signal. A notification when something goes wrong.

The only problem: I don't have any mailing program installed. I also don't really want to send email from my own servers, because making sure messages get through isn't something I want to spend a lot of time thinking about.

The first solution that jumped to mind was SendGrid, whose API I had the pleasure of trying out at PayPal BattleHack DC. It was dead simple. I had only integrated it into PHP, though, so I didn't know if it worked from the command line.

I checked the docs and found they had a REST API.

curl -d 'to=destination@example.com&toname=Destination&subject=Example Subject&text=testingtextbody&from=info@domain.com&api_user=your_sendgrid_username&api_key=your_sendgrid_password' https://api.sendgrid.com/api/mail.send.json

If you want to clean it up with variables:

#!/bin/sh
SGTO=receiver@example.com
SGTONAME='Some Name'
SGSUBJECT='Email Subject'
SGFROM=from@example.com
SGTEXT='Email Text'
SGUSER=user
SGPASS=password
curl -d "to=${SGTO}&toname=${SGTONAME}&subject=${SGSUBJECT}&text=${SGTEXT}&from=${SGFROM}&api_user=${SGUSER}&api_key=${SGPASS}" https://api.sendgrid.com/api/mail.send.json

Voila! Sending emails from my bash script is now simple.
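One small caveat of my own (not from the SendGrid docs): if your subject or body contains spaces or other special characters, it's safer to let curl URL-encode each field instead of building one long -d string. Swapping the final curl line for something like this does that:

curl https://api.sendgrid.com/api/mail.send.json \
	--data-urlencode "to=${SGTO}" \
	--data-urlencode "toname=${SGTONAME}" \
	--data-urlencode "subject=${SGSUBJECT}" \
	--data-urlencode "text=${SGTEXT}" \
	--data-urlencode "from=${SGFROM}" \
	--data-urlencode "api_user=${SGUSER}" \
	--data-urlencode "api_key=${SGPASS}"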

Is your site patched against Heartbleed? (CVE-2014-0160)

I had the 'fun' experience of patching against this vulnerability today. Unfortunately, when I rebooted one of my primary servers, it failed to come back up and caused two hours of downtime. Sorry to anyone who couldn't access this site.

If you're wondering if you are vulnerable, check your site for Heartbleed vulnerability.

As far as actually patching, I only did it manually on some Ubuntu 12.04 systems. It was fairly simple. Just run

apt-get update && apt-get upgrade

That should take care of it. If you want to learn more, go to Heartbleed.com.
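If you want to double-check that the upgrade actually pulled in a fixed OpenSSL build, one quick sanity check (my suggestion, not from the Ubuntu advisory) is to look at the build date of the installed library; a build date of April 7, 2014 or later generally means the Heartbleed fix is included even though the version string still reads 1.0.1:

# Show the installed OpenSSL version and its build date.
openssl version -a | grep -i built
# Show the installed libssl package version.
dpkg -l | grep libssl1.0.0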

[Comic courtesy of XKCD]

Interested in seeing which web hosting companies people love (and hate!)? Click here and find out how your web host stacks up.