Bad performance of nginx with Tomcat vs. Apache with Tomcat

Chang Song changsong at me.com
Fri Sep 4 03:48:57 MSD 2009


On Sep 4, 2009, at 7:44 AM, Maxim Dounin wrote:

> Hello!
>
> On Thu, Sep 03, 2009 at 01:50:44PM +0900, Chang Song wrote:
>
>> Some time ago, I posted a message about Nginx performance when
>> paired with Tomcat.
>> We recently did extensive in-house testing of various workloads
>> against Nginx with Tomcat vs. Apache with Tomcat.
>>
>> Apache wins hands down.
>>
>> Here's the basic setup:
>>
>> 1. Nginx (2 workers / 8192 connections) -> HTTP/1.0 -> Tomcat (HTTP
>>    connector)
>> 2. Apache (512 prefork) -> AJP -> Tomcat (AJP connector)
>>
>> Both have KeepAlive off (we don't use KeepAlive because of our L4
>> switch)
>
> As long as you don't care about concurrency > 512 and client
> keepalive - Apache may be the better (or at least easier) choice here.

Right.
In the real world, where Tomcat uses most of the CPU cycles and web
servers are tightly capacity-planned for horizontal scalability, we
would not allow more concurrency than that on one dual-core web server.

We are simply doing stress tests to find the maximum capacity of nginx
vs. Apache (with CPU usage comparisons as well).
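
For context, the nginx side of setup 1 boils down to something like
this (the listen port, Tomcat address, and upstream name are
illustrative, not our exact config):

    worker_processes  2;

    events {
        worker_connections  8192;
    }

    http {
        keepalive_timeout  0;        # client KeepAlive off (L4 switch)

        upstream tomcat {
            server 127.0.0.1:8080;   # Tomcat HTTP connector (assumed port)
        }

        server {
            listen 80;

            location / {
                # nginx speaks HTTP/1.0 to the upstream and opens a new
                # TCP connection per request; there is no backend keepalive
                proxy_pass http://tomcat;
            }
        }
    }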


>> The physical server is a 2-core Intel Xeon, which is a typical web
>> server config here.
>> We have three Grinder 3.2 load generators.
>> We tested Tomcat serving simple 4K and 20K HTML files, a 20K simple
>> HTML file with an intentional 200ms sleep in 10% of requests (to
>> emulate a slow DB query), etc.
>>
>> In every single case, Apache wins by at least 10-15%, in both
>> throughput and response time.
>> Nginx uses somewhat fewer CPU cycles (10-20% less), but it is not
>> able to drive Tomcat to 100% CPU.
>>
>> Here's my take on this performance problem.
>>
>> 1. Lack of AJP support (an optimized binary protocol for fronting
>>    Tomcat)
>>    First of all, this is a serious bottleneck.
>>
>>    * AJP has much less communication overhead than HTTP
>>
>> 2. Lack of HTTP KeepAlive support for the proxy backend
>>
>>    * The lack of AJP might be compensated for by HTTP keepalive
>>      support: as it is, we see at least twice the number of TIME_WAIT
>>      sockets, and mean connection establishment time is two to three
>>      times slower than Apache's.
>
> I believe Tomcat has noticeable overhead in its connection
> establishment code (though I've never tested it myself), so the lack
> of backend keepalive support may be an issue in your case.
>
> System overhead shouldn't be noticeable (it's about 1 ms on usual
> backend networks).

Maxim,
1 ms matters for us. Average response time for short HTML files is
around 1 ms, or even less.


>> 3. Lack of connection pooling
>>
>>    * ey-balancer makes things a bit better - response times become
>>      stable - but average TPS and response time stay the same.
>>
>> 4. There seems to be a huge bug in the connection management code
>>
>>    A mix of two transaction types: 20K HTML serving, and 8K HTML with
>>    an intentional 200ms delay in the Tomcat logic.
>>
>>    With Apache, 20K HTML serving took  36 ms on average, while 8K
>>    HTML took 258 ms.
>>    With Nginx,  20K HTML serving took 600 ms on average, while 8K
>>    HTML took 817 ms.
>>
>>    I really cannot explain these differences. Not even TCP connection
>>    overhead or the lack of AJP accounts for them.
>
> I believe these time differences have something to do with how
> Tomcat handles connection establishment / close.
>
> Currently nginx will consider a request complete only after the
> backend's connection close, and if Tomcat waits a bit for some
> reason before closing the connection, this may lead to the above
> numbers.
>
> You may try to capture a single request between nginx and Tomcat
> with tcpdump to prove this.

This is an excerpt from the access log file.
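
(The up_response_t and svc_t fields come from a custom log_format,
roughly along these lines - the format name is illustrative, the
variables are standard nginx ones:)

    log_format timing 'up_response_t:$upstream_response_time '
                      'svc_t:$request_time "$request"';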

* Lightly loaded nginx

up_response_t:0.002 svc_t:0.002 "GET /static/20k.jsp HTTP/1.1"
up_response_t:0.204 svc_t:0.204 "GET /index_think.jsp HTTP/1.1"

The 20K file takes 2 ms; the intentional 200ms-delay file took 204 ms.

* Heavily loaded nginx

up_response_t:0.427 svc_t:0.889 "GET /index_think.jsp HTTP/1.1"
up_response_t:0.155 svc_t:0.430 "GET /static/20k.jsp HTTP/1.1"

The 20K file took 430 ms, and the 200ms-delay file took 889 ms.

So for the 20K HTML file, upstream processing (Tomcat) took 155 ms,
and the remaining ~275 ms of the 430 ms total service time was spent
queued inside nginx.

I don't think tcpdump would reveal anything other than packet
exchanges; delays of this magnitude happen inside nginx.

Maybe a bug in ey-balancer?
Don't know.
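
For reference, the ey-balancer part of our config is essentially just
the module's max_connections directive on the upstream block (the
backend address and cap below are illustrative):

    upstream tomcat {
        server 127.0.0.1:8080;
        # ey-balancer: cap concurrent backend connections and queue
        # the remaining requests inside nginx
        max_connections 64;
    }

If that internal queue drains slowly, it would show up exactly as the
svc_t - up_response_t gap in the log lines above.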

I have been an nginx advocate in my company, but since we are moving
to a Tomcat setup, I am not able to convince people to use nginx at
this point.

Thanks, Maxim


> Maxim Dounin
>
>> My question is: "should I abandon nginx at this point?"
>> I know nginx is a great proxy and static file server, but I cannot
>> prove my point with Tomcat over and over again.
>>
>> Thank you
>>
>> Chang





