
I'm not going to respond to your personal attacks; just the factual content of your post(s).

I understand that you perceived my posting of the benchmark link as inappropriate, and I agree with you to an extent. Still, the link stands as my evidence in the context of the earlier conversation chain.

You seem to be a long-term contributor here, and I'm sure that's appreciated. I would be interested to hear you address the flaws or issues with the benchmark results. I make mistakes just like anybody else, but I own up to them in the end.

Let me know if you have any questions and I'll be happy to discuss.

Regards, Joe




Flaws with the benchmark:

1. Misrepresentation

The title says 'Apache vs Litespeed' when in reality it is 'Varnish vs Litespeed'; Apache has nothing to do with this particular benchmark.

2. Questionable results

Varnish serving 217.05 req/sec for a static page at a concurrency level of 100? Something is seriously messed up in the setup; Varnish can easily handle thousands of req/s for something as simple as looking up a cache entry and returning a static file to 100 clients. Oh wait, 11177.99 [Kbytes/sec]: it looks like you saturated the pipe, making this a test of how fast your network is, and nothing to do with Varnish.
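
A quick back-of-the-envelope check using ab's own numbers makes the point (the 100 Mbit link is my assumption; the page doesn't say what the box is plugged into):

  # throughput implied by the ab report: KB/s converted back to Mbit/s
  awk 'BEGIN {
      kb_per_sec  = 11177.99   # Transfer rate from the ab output
      req_per_sec = 217.05     # Requests per second from the ab output
      printf "%.1f KB per request\n", kb_per_sec / req_per_sec
      printf "%.1f Mbit/s on the wire\n", kb_per_sec * 1024 * 8 / 1e6
  }'
  # ~51.5 KB per request and ~91.6 Mbit/s: right at the ceiling of a 100 Mbit port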

3. Flawed benchmark testing procedure

You can't expect a simple `ab -c 100 -n 10000` to provide any insight whatsoever into how a server behaves in general. The only obvious thing is that Litespeed was saturated at 100 concurrent connections. If you're going to do a benchmark, you must at least post results for varying concurrency levels (along with the exact setup configurations). Likewise, CPU and memory usage are just as important for providing valuable insight.
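
At a bare minimum, something along these lines (URL and concurrency levels hypothetical) would show where each server's throughput curve bends, instead of a single saturated data point:

  # sweep concurrency and keep the raw reports for comparison
  for c in 1 10 50 100 200 500; do
      ab -q -c "$c" -n 10000 http://server.example/static.html > "ab_c${c}.txt"
  done
  # pull the headline number out of each run
  grep '^Requests per second' ab_c*.txt

Pair each run with vmstat or top output from the server and you can actually say something about CPU and memory behaviour, not just req/s.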

4. Comparing apples to oranges

I'm not entirely sure what Litespeed is doing (e.g. does it cache statically?) since you haven't specified it exactly, but considering that almost 70% of the requests failed and almost all requests after the first few had a different content length, I would assume that Litespeed is generating a dynamic response each time, whereas Varnish is returning a static file. I have no doubt in my mind that if you ran the same scenario (i.e. Litespeed serving a static file as well) the results would be much different.
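
That mismatch is cheap to rule out before publishing numbers; a minimal sketch, with hypothetical hostnames and URL:

  # compare what each server actually returns for the benchmarked URL
  curl -sI http://varnish.example/page.php | grep -i '^Content-Length'
  curl -sI http://litespeed.example/page.php | grep -i '^Content-Length'
  # a stable checksum across repeated fetches means a static (or cached) response
  for i in 1 2 3; do curl -s http://litespeed.example/page.php | md5sum; done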

5. Irrelevant to the article benchmark

Saying "But Nginx is a bit slower than Varnish", then linking to a terrible irrelevant benchmark and using circular logic (based on another completely crap Litespeed vs Nginx for serving php!? benchmark) to infer that Varnish is faster than nginx hurts my brain.

I ran a really simple benchmark of Varnish 2.1.3 against nginx 0.8.49's proxy_cache directive on a single URL; nginx was faster.
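
For reference, this is roughly the shape of the nginx side of that test; paths, ports, and sizes here are placeholders rather than my exact config:

  # minimal proxy_cache sketch for nginx 0.8.x
  http {
      # keys_zone holds cache metadata in shared memory; max_size caps the on-disk cache
      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=64m;
      server {
          listen 80;
          location / {
              proxy_pass        http://127.0.0.1:8080;  # backend being cached
              proxy_cache       one;
              proxy_cache_valid 200 2m;                 # cache 200 responses for 2 minutes
          }
      }
  }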

However, what if we test a heavy config with a bunch of complicated rules? Varnish's VCL compiles straight to C, which might give it the edge in those cases. What if we test a million different URLs and really exercise how each server handles its cache hit/miss procedures? How do they fare under real-world scenarios where thousands of different clients on various connection speeds are connected? I know nginx is damn efficient memory-wise when dealing with a load of keepalives.

6. Conclusion

Good benchmarks are difficult.


1. All is disclosed on the page. Varnish is indeed mentioned there as the crucial piece that made it all happen. But mea culpa, the title "Apache vs Litespeed" does need the Varnish bit added. I'll correct it shortly.

2. You're making the assumption that Varnish/VCL caches the first server-side result indefinitely (infinite TTL). That is not the case, so Apache is still doing work behind the scenes, albeit less work than it's used to. Whatever hosting conditions we imposed on Varnish/Apache were the same ones imposed on lsws. It's fair game (for better or worse).

3. Actually, Litespeed is able to handle more than 100 concurrent connections. What choked it, apparently, was its inability to manage the 100 PHP requests well; it actually ran out of memory during the experiments. Again, Litespeed was given the same amount of memory and ran under the same conditions as Apache/Varnish. While the environment/conditions were not optimal, both Varnish/httpd and Litespeed started equal. At first we ran with -c 10 and -n 10000, but both setups performed fairly well, so we upped it to -c 100. Check out the load average from the Litespeed node below.

4. One of the goals of the experiment was to install stock Litespeed and stock Apache (RPM). No configuration changes were made to either. Litespeed has static object caching (20MB) enabled by default; varnishd was given a 16MB cache (see the sketch after these points). Stock installs are common in the Web hosting business, which is the scope/interest here.

5. Based on experience with client portals, Varnish as a cache seems to perform better than Nginx as a pure cache, though that could be perception. We use Nginx and/or Varnish where each makes sense. They're both free and open, so why not!

6. Couldn't agree more. Just for the record, we didn't select the hardware and environment to favor either web server. We had a vacant server, network, and resources, and went from there. The fact that this environment did not produce good results for Litespeed is purely coincidental.
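
For what it's worth, the Varnish side amounted to something like the following; the exact flags aren't in the writeup, so treat this as a reconstruction illustrating points 2 and 4 above, not the literal command line:

  # 16 MB in-memory cache in front of stock Apache, objects expiring after 120 s
  varnishd -a :80 -b 127.0.0.1:8080 -s malloc,16M -t 120
  # -s malloc,16M : the 16 MB cache mentioned in point 4
  # -t 120        : default TTL in seconds; entries expire and are refetched,
  #                 so Apache keeps doing work behind the cache (point 2)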

FYI, Litespeed Tech noticed the benchmark and mentioned that they're working on a cache module that will ship with the Litespeed web server. We'll see.

  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  19:45:37 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:40 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:43 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:46 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:49 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:52 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:55 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:45:58 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:46:01 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
  19:46:05 up 0 min, 0 users, load average: 2.88, 0.60, 0.19
  19:46:09 up 0 min, 0 users, load average: 5.45, 1.17, 0.38
  19:46:12 up 0 min, 0 users, load average: 7.82, 1.73, 0.57
  19:46:15 up 0 min, 0 users, load average: 7.82, 1.73, 0.57
  19:46:18 up 1 min, 0 users, load average: 10.08, 2.30, 0.76
  19:46:21 up 1 min, 0 users, load average: 10.08, 2.30, 0.76
  Segmentation fault
  Segmentation fault
  Segmentation fault
  -bash: /usr/bin/uptime: Cannot allocate memory
  -bash: fork: Cannot allocate memory
  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  -bash: fork: Cannot allocate memory
  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  -bash: fork: Cannot allocate memory
  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  -bash: fork: Cannot allocate memory
  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  -bash: fork: Cannot allocate memory
  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  -bash: fork: Cannot allocate memory
  [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
  19:46:40 up 1 min, 0 users, load average: 18.36, 4.66, 1.56
  19:46:43 up 1 min, 0 users, load average: 17.53, 4.72, 1.60
  19:46:46 up 1 min, 0 users, load average: 17.53, 4.72, 1.60
  19:46:49 up 1 min, 0 users, load average: 16.13, 4.64, 1.59
  19:46:52 up 1 min, 0 users, load average: 14.84, 4.56, 1.58
  19:46:55 up 1 min, 0 users, load average: 14.84, 4.56, 1.58
  19:46:58 up 1 min, 0 users, load average: 13.65, 4.49, 1.57
  19:47:01 up 1 min, 0 users, load average: 13.65, 4.49, 1.57
  19:47:04 up 1 min, 0 users, load average: 12.56, 4.41, 1.56
  19:47:07 up 1 min, 0 users, load average: 11.55, 4.34, 1.55
  19:47:10 up 1 min, 0 users, load average: 11.55, 4.34, 1.55
  19:47:13 up 1 min, 0 users, load average: 10.62, 4.27, 1.55
  19:47:16 up 1 min, 0 users, load average: 10.62, 4.27, 1.55
  19:47:19 up 2 min, 0 users, load average: 9.77, 4.20, 1.54
  19:47:22 up 2 min, 0 users, load average: 8.99, 4.13, 1.53
  19:47:25 up 2 min, 0 users, load average: 8.99, 4.13, 1.53
  19:47:28 up 2 min, 0 users, load average: 8.27, 4.06, 1.52
  19:47:31 up 2 min, 0 users, load average: 8.27, 4.06, 1.52
  19:47:34 up 2 min, 0 users, load average: 7.61, 3.99, 1.51
  19:47:37 up 2 min, 0 users, load average: 7.00, 3.92, 1.50
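
In hindsight, pairing the uptime loop with a memory readout would have made the OOM visible before the segfaults; a small sketch (assuming the classic procps layout of free's output):

  # run alongside ab: load average plus free memory every 3 seconds
  while true; do
      uptime
      free -m | awk 'NR==2 { print "free MB:", $4 }'  # row 2 is "Mem:"; $4 is the free column
      sleep 3
  done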



