Bad performance of nginx with Tomcat vs. Apache with Tomcat

Chang Song changsong at
Sat Sep 5 17:45:48 MSD 2009

On Sep 4, 2009, at 6:52 PM, Maxim Dounin wrote:

> Hello!
> On Fri, Sep 04, 2009 at 08:48:57AM +0900, Chang Song wrote:
>> On Sep 4, 2009, at 7:44 AM, Maxim Dounin wrote:
>>> Hello!
>>> On Thu, Sep 03, 2009 at 01:50:44PM +0900, Chang Song wrote:
>>>> Some time ago, I posted a message about nginx performance when
>>>> paired with Tomcat.
>>>> We recently did extensive in-house testing of various workloads
>>>> against nginx with Tomcat vs. Apache with Tomcat.
>>>> Apache wins hands down.
>>>> Here's the basic setup
>>>> 1. Nginx (2 proc/8192 connections) -> http/1.0 -> Tomcat (HTTP
>>>> connector)
>>>> 2. Apache (512 prefork) -> AJP -> Tomcat (AJP)
>>>> Both with KeepAlive off (we don't use KeepAlive due to the L4 switch)
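For concreteness, the nginx side of setup 1 corresponds roughly to a config like the following sketch; only the worker and connection counts come from the numbers above, while the port and paths are illustrative assumptions:

```nginx
# Sketch of setup 1: 2 worker processes, 8192 connections each,
# plain HTTP/1.0 proxying (nginx's default to upstreams) to a
# local Tomcat HTTP connector.  Port 8080 is an assumed value.
worker_processes  2;

events {
    worker_connections  8192;
}

http {
    keepalive_timeout  0;    # client keepalive off (L4 switch in front)

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;   # Tomcat HTTP connector
        }
    }
}
```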
>>> As long as you don't care about concurrency > 512 and client
>>> keepalive, Apache may be a better (or at least easier) choice here.
>> Right.
>> In the real world, where Tomcat uses most of the CPU cycles and web
>> servers are tightly capacity-planned for horizontal scalability, we
>> should not allow more concurrency than that on one dual-core web
>> server.
>> We are simply doing a stress test to find the maximum capacity of
>> nginx vs. Apache (with CPU usage comparisons as well).
> Am I right in the assumption that nginx and Tomcat run on the
> same host?  This isn't recommended, since they will compete for
> resources in some not really predictable way, but at the least you
> have to increase the nginx workers' priority in this case.

All WAS (web application server) machines here are configured with
Apache and Tomcat on the same machine, so nginx and Tomcat have to be
configured on the same machine as well.

>>>> The physical server is a 2-core Intel Xeon, which is the typical
>>>> web server config here.
>>>> We have three Grinder 3.2 load generators.
>>>> We tested serving a simple 4K and a 20K HTML file from Tomcat, the
>>>> 20K simple HTML file with an intentional 200 ms sleep in 10% of the
>>>> Tomcat responses (to emulate a slow DB query), etc.
>>>> In every single case, Apache wins by at least 10-15%, in both
>>>> throughput and response time.
>>>> nginx uses somewhat fewer CPU cycles (10-20% less), but it is not
>>>> able to drive Tomcat to 100% CPU.
>>>> Here's my take on this performance problem.
>>>> 1. Lack of AJP support (AJP is an optimized binary protocol)
>>>>  First of all, this is a serious bottleneck.
>>>>  * AJP has much less communication overhead than HTTP
>>>> 2. Lack of HTTP KeepAlive support for the proxy
>>>>  * The lack of AJP might be compensated for by HTTP keepalive
>>>>    support, since there are at least twice as many TIME_WAIT
>>>>    sockets (connection establishment mean time is two to three
>>>>    times slower than Apache's)
>>> I believe Tomcat has noticeable overhead in its connection
>>> establishment code (though I've never tested it myself), so the lack
>>> of backend keepalive support may be an issue in your case.
>>> The system overhead shouldn't be noticeable (it's about 1 ms on
>>> usual backend networks).
>> Maxim,
>> 1 ms matters for us. The average response time for short HTML files
>> is around 1 ms, or even less.
> Typical dynamic HTML generation will take much more, while static
> content should be served directly by nginx.  So on a usual setup the
> benefit from keepalive connections to backends isn't really
> measurable.
> Note that I do not try to convince anybody that keepalive to
> backends isn't really needed, just explaining why it's not a high
> priority task in development.

Right, the app server typically consumes most of the user CPU cycles
in a WAS, and yes, the backend keepalive benefits are not measurable.
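For readers on newer nginx versions: backend keepalive support was added later via the upstream `keepalive` directive. A minimal sketch, assuming Tomcat listens on 127.0.0.1:8080 (the connection count is illustrative):

```nginx
# Backend keepalive in later nginx releases (not available at the
# time of this thread).
upstream tomcat {
    server 127.0.0.1:8080;
    keepalive 32;                   # idle connections kept per worker
}

server {
    location / {
        proxy_pass http://tomcat;
        proxy_http_version 1.1;     # keepalive needs HTTP/1.1 upstream
        proxy_set_header Connection "";
    }
}
```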

As I already mentioned in the other mail, the http11apr protocol
changes everything.
Now I get better-than-Apache performance.
Nginx rocks!!!! You guys are the best.
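The http11apr switch is a one-attribute change in Tomcat's server.xml; a sketch, where the port and timeout values are illustrative and the Tomcat APR/native library must be installed:

```xml
<!-- server.xml: the APR/native HTTP connector ("http11apr" above),
     instead of the default Java HTTP connector. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           connectionTimeout="20000" />
```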

>>>> 3. Lack of connection pooling
>>>>  * The ey-balancer makes things a bit easier (response times are
>>>>    stable), but average TPS and response time stay the same.
>>>> 4. There seems to be a huge bug in the connection management code
>>>>  With a mix of two transaction types (20K HTML serving, plus 8K
>>>>  HTML with an intentional 200 ms delay in the Tomcat logic):
>>>>  With Apache, 20K HTML serving took 36 ms on average, while the 8K
>>>>  HTML took 258 ms.
>>>>  With nginx, 20K HTML serving took 600 ms on average, while the 8K
>>>>  HTML took 817 ms.
>>>>  I really cannot explain these differences; not even TCP connection
>>>>  overhead or the lack of AJP would account for them.
>>> I believe these time differences have something to do with how
>>> Tomcat handles connection establishment / close.
>>> Currently nginx will consider a request complete only after the
>>> backend's connection close, and if Tomcat waits a bit for some
>>> reason before closing the connection, this may lead to the above
>>> numbers.
>>> You may try to capture a single request between nginx and Tomcat
>>> with tcpdump to prove this.
>> This is an excerpt from access log file
>> * Lightly loaded nginx
>> up_response_t:0.002 svc_t:0.002 "GET /static/20k.jsp HTTP/1.1"
>> up_response_t:0.204 svc_t:0.204 "GET /index_think.jsp HTTP/1.1"
>> The 20k file takes 2 ms, and the intentional 200 ms delay file took
>> 204 ms.
> These numbers look pretty good.  Basically nginx introduced no
> delay, so my assumption about connection close problems was wrong.

Yes, the above numbers are good.

>> * Heavily loaded nginx
>> up_response_t:0.427 svc_t:0.889 "GET /index_think.jsp HTTP/1.1"
>> up_response_t:0.155 svc_t:0.430 "GET /static/20k.jsp HTTP/1.1"
>> The 20k text file took 430 ms in total, and the 200 ms delay file
>> took 889 ms, so upstream processing took 155 ms (Tomcat) and nginx
>> queuing accounted for the remaining 275 ms of the 20k HTML file's
>> time.
>> I don't think tcpdump would reveal anything other than the packet
>> exchanges; delays of these magnitudes happen inside nginx.
>> Maybe a bug in ey-balancer?
>> Don't know.
> The EY balancer will queue requests that don't fit into the
> configured limit, and queue time will be counted in $request_time
> (not $upstream_response_time).  If you used it in your test, that
> explains the above numbers under load.
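The accounting Maxim describes can be checked directly against the log excerpt: the nginx-side queuing delay is just $request_time minus $upstream_response_time. A toy sketch, where the up_response_t/svc_t field names are taken from the excerpt above:

```python
import re

def queue_time(log_line):
    """Return the time a request spent queued inside nginx, computed as
    total service time (svc_t, i.e. $request_time) minus upstream
    response time (up_response_t, i.e. $upstream_response_time)."""
    up = float(re.search(r"up_response_t:([\d.]+)", log_line).group(1))
    svc = float(re.search(r"svc_t:([\d.]+)", log_line).group(1))
    return round(svc - up, 3)

# Heavily loaded line from the excerpt above:
line = 'up_response_t:0.155 svc_t:0.430 "GET /static/20k.jsp HTTP/1.1"'
print(queue_time(line))  # 0.275 s spent queued in nginx
```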

> No idea how correct the EY balancer is, but in any case it will not
> be able to fully load a number of backend workers identical to the
> configured max_connections, since the queue sits in nginx in this
> case and there is always some delay between requests from the
> backend's point of view.
> Unless you have a huge number of backends capable of processing a
> small number of concurrent connections, you may want to try the
> standard round-robin balancer instead.  Make sure you have a
> reasonable listen queue size on your backend.
> Maxim Dounin

Yes, the queuing time counts against $request_time.
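As an aside, log fields like those in the excerpts above can be produced with a log_format along these lines; the up_response_t/svc_t field names are assumptions matching the excerpt:

```nginx
# Custom access log exposing upstream time vs. total request time
# (the latter includes any time spent queued inside nginx).
log_format timing 'up_response_t:$upstream_response_time '
                  'svc_t:$request_time "$request"';

access_log /var/log/nginx/access.log timing;
```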
Maxim, this workload represents exactly the kind of workload in which
nginx should shine. When some requests take longer than usual in the
app server or DB server, the web server thread/process should not hold
up the other, short requests the way Apache does.
That was the original intention of the workload.
My expectation was that nginx would behave as in the lightly loaded
case all the time.
As you mentioned, since the queue sits in nginx while Tomcat is the
one actually holding up the requests, nginx cannot fully load the
Tomcat workers.

These are all very synthetic microbenchmarks, an approach that does
not reflect any real-world scenario; they were just meant to show the
nginx vs. Apache performance difference.

Since the http11apr protocol allows nginx to show its full strength,
I am a happy feet again ;-)

Thanks for everything, Maxim. (and Igor, of course)

>> I have been an nginx pusher in my company, but since we are moving
>> to a Tomcat setup, I am not able to convince people to use nginx at
>> this point.
>> Thanks, Maxim
>>> Maxim Dounin
>>>> My question is: should I abandon nginx at this point?
>>>> I know nginx is a great proxy and static file server, but I cannot
>>>> prove my point with Tomcat over and over again.
>>>> Thank you
>>>> Chang
