
1) Nginx is a fully-fledged Web server and carries some performance penalty compared to Varnish. The latter is lightning fast.

2) Nginx is no match for Varnish's VCL for header/cache manipulation

Regards




1 - yes, but only at the very highest loads. 2 - you don't need that much manipulation for simple load balancing. 3 - can you back up your claims with reference(s)?


1 - not necessarily 2 - it depends on app complexity. Most people that read Y need it. 3 - Benchmarks? sure :) http://www.unixy.net/apache-vs-litespeed


What does an Apache v Litespeed benchmark have to do with anything?


The post above proposed that Varnish is not needed because Nginx could handle the task well with the best performance. But Nginx is a bit slower than Varnish. Benchmarks have shown that Litespeed is a bit faster than Nginx, so by inductive reasoning, to show that Varnish is faster than Nginx I posted a link showing that Varnish is faster than Litespeed, which in turn is faster than Nginx. So Varnish is faster than Nginx.

Let me know if you have any questions/comments.

Regards


My question is how you manage to stay in business with such flawed 'reasoning'.

That benchmark in particular is so full of crap that people will become dumber after reading it.


I'm not going to respond to your personal attacks; just the factual content of your post(s).

I understand that you perceived my posting of the benchmark link to be inappropriate, and I agree with you to an extent. That said, the link stands as my supporting evidence within the context of the earlier conversation.

You seem to be a long-term contributor here and I'm sure that's appreciated. I'd be interested to hear you address the flaws or issues with the benchmark results. I make mistakes just like anybody, but I own up to them in the end.

Let me know if you have any questions and I'll be happy to discuss.

Regards Joe


Flaws with the benchmark:

1. Misrepresentation

The title says 'Apache vs Litespeed' when in reality it is 'Varnish vs Litespeed'; Apache has nothing to do with this particular benchmark.

2. Questionable results

Varnish serving 217.05 req/sec for a static page at a concurrency level of 100? Something is seriously messed up in the setup; Varnish can easily handle thousands of req/s for something as simple as looking up a cache entry and returning a static file to 100 clients. Oh wait, 11177.99 [Kbytes/sec] - that's roughly 90 Mbit/s, so it looks like you saturated a 100 Mbit pipe, making it a test of how fast your network is, and nothing to do with Varnish.

3. Flawed benchmark testing procedure

You can't expect a simple `ab -c 100 -n 10000` to provide any insight whatsoever into how a server behaves in general. The only thing that's obvious is that Litespeed was saturated at 100 concurrent connections. If you're going to do a benchmark, you should at least post results at varying concurrency levels (along with the exact setup configuration) - see the sketch at the end of this comment. Likewise, CPU and memory usage are just as important for providing valuable insight.

4. Comparing apples to oranges

I'm not entirely sure what Litespeed is doing (eg. does it cache static content?), since you haven't specified it exactly, but considering that almost 70% of the requests failed and almost all requests after the first few had a different content length, I would assume that Litespeed is generating a dynamic response each time, whereas Varnish is returning a static file. I have no doubt in my mind that if you ran the same scenario (ie. Litespeed serving a static file as well) the results would be much different.

5. Irrelevant to the article benchmark

Saying "But Nginx is a bit slower than Varnish", then linking to a terrible irrelevant benchmark and using circular logic (based on another completely crap Litespeed vs Nginx for serving php!? benchmark) to infer that Varnish is faster than nginx hurts my brain.

I ran a really simple benchmark on Varnish 2.1.3 and nginx 0.8.49's proxy_cache directive on a single url; nginx is faster.

However, what if we test a heavy config with a bunch of complicated rules? Varnish's VCL compiles straight to C and might give it the edge in those cases. What if we test a million different URLs and really exercise how each one handles its cache hit/miss procedures? How do they fare under real-world scenarios where thousands of different clients on various connection speeds are connected? I know nginx is damn efficient memory-wise when dealing with a load of keepalives.

6. Conclusion

Good benchmarks are difficult.
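To make the methodology point concrete, here's roughly the kind of sweep I'd want to see for point 3. This is only a sketch: the URL is a placeholder, and you'd also want to log CPU and memory on the server side while each run is going.

    # run ab at several concurrency levels instead of a single -c 100 shot
    for c in 10 25 50 100 250 500; do
        ab -c "$c" -n 10000 "http://test-host/static.html" > "ab_c${c}.log"
    done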


1. All is disclosed on the page. Varnish is indeed mentioned on the page as the crucial piece that made it all happen. But mea culpa, the title "Apache vs Litespeed" does need the Varnish bit added. I'll correct it shortly.

2. You're making the assumption that Varnish/VCL is caching the first server-side result indefinitely (infinite TTL). That is not the case. So Apache is still doing work behind the scenes, albeit less work than it's used to. Whatever hosting conditions we imposed on Varnish/Apache are the same ones imposed on lsws. It's fair game (for better or worse).

3. Actually, Litespeed is able to handle more than 100 concurrent connections. What apparently choked it is its inability to manage the 100 PHP requests well; it actually ran out of memory during the experiments. Again, Litespeed was given the same amount of memory and was under the same conditions as Apache/Varnish. While the environment/conditions are not optimal, both Varnish/httpd and Litespeed started equal. At first we started with -c 10 and -n 10000, but both setups performed fairly well, so we upped it to -c 100. Check out the load average from the Litespeed node below.

4. One of the goals of the experiment is to install stock Litespeed and stock Apache (RPM). No configuration changes were made to either. Litespeed has static object caching (20MB) enabled by default. Varnishd was given a 16MB cache. Stock installs are common in the Web hosting business, which is the scope/interest here.

5. Based on experience with client portals, Varnish as a cache seems to perform better than Nginx as a pure cache. It could be a matter of perception. We use Nginx and/or Varnish where it makes sense though. They're both free and open, so why not!

6. Can't agree more. Just for the record, we didn't select the hardware and environment to favor either web server. We had a vacant server, network, and resources and went from there. The fact that this environment did not produce good results for Litespeed is purely coincidental.

FYI, Litespeed Tech noticed the benchmark and mentioned that they're working on developing a cache module that will ship with Litespeed web server. We'll see.

    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    19:45:37 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:40 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:43 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:46 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:49 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:52 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:55 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:45:58 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:46:01 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
    19:46:05 up 0 min, 0 users, load average: 2.88, 0.60, 0.19
    19:46:09 up 0 min, 0 users, load average: 5.45, 1.17, 0.38
    19:46:12 up 0 min, 0 users, load average: 7.82, 1.73, 0.57
    19:46:15 up 0 min, 0 users, load average: 7.82, 1.73, 0.57
    19:46:18 up 1 min, 0 users, load average: 10.08, 2.30, 0.76
    19:46:21 up 1 min, 0 users, load average: 10.08, 2.30, 0.76
    Segmentation fault
    Segmentation fault
    Segmentation fault
    -bash: /usr/bin/uptime: Cannot allocate memory
    -bash: fork: Cannot allocate memory
    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    -bash: fork: Cannot allocate memory
    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    -bash: fork: Cannot allocate memory
    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    -bash: fork: Cannot allocate memory
    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    -bash: fork: Cannot allocate memory
    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    -bash: fork: Cannot allocate memory
    [root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
    19:46:40 up 1 min, 0 users, load average: 18.36, 4.66, 1.56
    19:46:43 up 1 min, 0 users, load average: 17.53, 4.72, 1.60
    19:46:46 up 1 min, 0 users, load average: 17.53, 4.72, 1.60
    19:46:49 up 1 min, 0 users, load average: 16.13, 4.64, 1.59
    19:46:52 up 1 min, 0 users, load average: 14.84, 4.56, 1.58
    19:46:55 up 1 min, 0 users, load average: 14.84, 4.56, 1.58
    19:46:58 up 1 min, 0 users, load average: 13.65, 4.49, 1.57
    19:47:01 up 1 min, 0 users, load average: 13.65, 4.49, 1.57
    19:47:04 up 1 min, 0 users, load average: 12.56, 4.41, 1.56
    19:47:07 up 1 min, 0 users, load average: 11.55, 4.34, 1.55
    19:47:10 up 1 min, 0 users, load average: 11.55, 4.34, 1.55
    19:47:13 up 1 min, 0 users, load average: 10.62, 4.27, 1.55
    19:47:16 up 1 min, 0 users, load average: 10.62, 4.27, 1.55
    19:47:19 up 2 min, 0 users, load average: 9.77, 4.20, 1.54
    19:47:22 up 2 min, 0 users, load average: 8.99, 4.13, 1.53
    19:47:25 up 2 min, 0 users, load average: 8.99, 4.13, 1.53
    19:47:28 up 2 min, 0 users, load average: 8.27, 4.06, 1.52
    19:47:31 up 2 min, 0 users, load average: 8.27, 4.06, 1.52
    19:47:34 up 2 min, 0 users, load average: 7.61, 3.99, 1.51
    19:47:37 up 2 min, 0 users, load average: 7.00, 3.92, 1.50


What do you need a fully-fledged web server on the edge nodes for? Wouldn't it be better for varnish to proxy everything to the back end dynamic server directly, except for requests that need to use the local edge node nginx?

The post describes a setup with two caching layers on each edge node, a Varnish cache in front of an nginx cache, which has got to be sub-optimal.


Nginx is used as a disk cache and Varnish is used as a memory cache. So you're asking, why not scrap nginx and let Varnish run the show on the edge nodes? That works just fine too.

Keep in mind that the goal of the article is to build an affordable CDN based on VM edge nodes. Memory is still expensive on VMs (especially if you're going to be renting VMs in costly locations). So if you have, say, a 20GB static store that needs to be served from a 2GB RAM VPS, Varnish will only be able to cache so many files and will be forced to make round trips to the dynamic node on a miss.

Varnish does have "persistent" storage capabilities but from what I've read they're still experimental.

Regards


Not sure why you're getting negged here; using nginx without Varnish, you're not going to get nearly the speed, and using Varnish on Apache just doesn't make a lot of sense.

Not sure what you mean by "has some performance penalty compared to Varnish" however.


Nginx "has some performance penalty compared to Varnish"

The performance penalty comes from the number of system calls needed to complete a request. According to the Varnish architecture notes, it's 18 syscalls (http://varnish-cache.org/wiki/ArchitectNotes). Also, Varnish doesn't malloc for each new request/header; it mallocs once per workspace.
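If you want to sanity-check the syscall story yourself, here's a rough, purely illustrative sketch. It assumes strace and ab are installed and that the server you care about is running locally (an nginx worker here; a varnishd child can be traced the same way):

    # attach to one worker process and tally its syscalls while ab drives some requests
    strace -c -p "$(pgrep -f 'nginx: worker' | head -n1)" -o syscall-summary.txt &
    ab -c 10 -n 1000 http://127.0.0.1/ > /dev/null
    kill %1    # detaching strace writes the per-syscall counts to syscall-summary.txt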

Unfortunately there's no such thing as an Nginx vs Varnish benchmark because Varnish is just a cache (an apples-to-oranges comparison).

Regards Joe


Nginx has had an apples to apples proxy_cache directive since 0.7 which does the exact same job as Varnish (albeit lacking a few features yet).
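For reference, a bare-bones sketch of what that looks like (paths, zone name, backend address, and TTLs are made up for illustration):

    # inside http {} - cache upstream responses on disk, keyed in a 16MB shared zone
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge_cache:16m max_size=10g inactive=60m;

    server {
        listen 80;
        location / {
            proxy_pass        http://127.0.0.1:8080;   # whatever backend you're fronting
            proxy_cache       edge_cache;
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
        }
    }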

Also read my other reply to your comment: http://news.ycombinator.com/item?id=1593611

I suggest you take down that benchmark article, because it is flawed on too many levels.


Header manipulation is very important if one needs to control caching. I'm not sure Nginx has this capability at such a granular level. Varnish's VCL excels at this.
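For example, a minimal VCL sketch along these lines (2.x-style syntax; the TTLs and patterns are arbitrary, not a recommendation):

    # vcl_fetch runs after the backend responds; tune caching per response here
    sub vcl_fetch {
        if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
            return (pass);                    # respect the backend's no-cache wishes
        }
        unset beresp.http.Set-Cookie;         # stop cookies from defeating the cache on static bits
        if (req.url ~ "\.(css|js|png|jpg|gif)$") {
            set beresp.ttl = 1d;              # long TTL for static assets
        } else {
            set beresp.ttl = 120s;            # short TTL for everything else
        }
    }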

As explained above, all info has been disclosed on the page in question. It's up to the reader to agree or disagree with the results. Let's face it, benchmarks are always going to be controversial and imperfect. One should always take the initiative to verify results by running their own tests.

Regards

Joe


Nginx does have that capability on a much more granular level in fact.
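Something along these lines, for example (a sketch only; directive values and the zone/backend names are illustrative):

    # per-location control: ignore/hide the app's headers and set our own caching policy
    location /assets/ {
        proxy_pass            http://127.0.0.1:8080;
        proxy_cache           edge_cache;
        proxy_cache_valid     200 30m;
        proxy_ignore_headers  Cache-Control Expires Set-Cookie;   # cache even if the app says not to
        proxy_hide_header     Set-Cookie;                         # and don't leak the cookie downstream
        add_header            Cache-Control "public, max-age=300";
    }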

That benchmark was entirely useless for all the points I mentioned in the other post, but predominantly because you saturated your network on one of those tests, which meant the bottleneck was the network and not the webserver.

All your counterpoints were ignorant/naive and just went to show me how much you don't understand what is going on.

This wouldn't be a bad thing in itself, everyone has to start somewhere, but to think you do this for a living...


I don't respond to ad hominem attacks.

Let me clear up a few things. Again, the point of the experiment is not to set up the perfect environment for either VM but to subject both Web servers to fair and equal conditions, whether that's 1GB RAM, 100Mbps switch ports, stock configurations, etc. It's fair game.

Now, I don't know if you have experience in the Web hosting business, but perfection is not and never will be the goal. Clients couldn't care less about the back-end or the details as long as the response they get in the browser is acceptable. That's what drives our efforts in technology.

You keep repeating the idea that no one needs both Varnish and Nginx in their design. Again, there's benefit in running such a configuration. It's practical, effective, and thankfully keeps the business flowing. Here's an example of a successful business, besides us ;), doing it: http://www.heroku.com. If you don't realize the benefits of running Varnish over Nginx or Nginx over Varnish in certain situations, that's one stubborn opinion. So please, stop wasting my time.

Regards

Joe


> You keep on repeating the idea that no one needs both Varnish and Nginx in their design.

I never said that at any point.

What I did say was your reasoning that Varnish is faster than Nginx is flawed:

    Apache+Varnish is 97% faster than Litespeed [1]
    Litespeed is faster than Nginx [2]
    *therefore I infer* Varnish is faster than Nginx.
Those cited benchmarks are both crap, and even if they are relevant to a particular setup (ie. yours), you cannot use them on a public forum as proof that one is faster than the other.

You see, the problem I have is that you're spreading FUD which many will read and believe (as you do).

[1] http://www.unixy.net/apache-vs-litespeed

[2] http://blog.litespeedtech.com/2010/01/06/benchmark-compariso...





