Bad performance of nginx with Tomcat vs. Apache with Tomcat

István leccine at gmail.com
Sat Sep 5 18:10:34 MSD 2009


Alright, I was just trying to tell you that sometimes you have to step back
a bit and see the whole picture. That is all. So, if I understand your last
mail correctly, the http11apr connector was the issue after all.

nginx ftw :)


regards,
Istvan

On Sat, Sep 5, 2009 at 3:02 PM, Chang Song <changsong at me.com> wrote:

>
>
> I think I have already said everything about my config; nginx has a very
> simple config.
> Here's my config, minus all the parameter variations I tried
> (ey-balancer is off):
>
> worker_processes  2;
> events {
>     worker_connections  8192;
>     accept_mutex off;
> }
> http {
>     include       mime.types;
>     default_type  application/octet-stream;
>     access_log  off;
>     keepalive_timeout  0;
>     server {
>         listen       80;
>         server_name  localhost;
>
>         location / {
>             root   html;
>             index  index.html index.htm;
>         }
>
>         location ~ \.jsp$ {
>             root         /usr/local/tomcat5.5;
>             proxy_pass   http://127.0.0.1:8080;
>         }
>     }
> }
>
>
> I didn't want to include the config file since I don't think it reveals
> anything wrong, and, as I already said, there were many variations of it.
>
> I just wanted to share my nginx-with-Tomcat performance experience, to see
> if anyone else has the same problem or already has a solution.
>
> Thank you for your reply.
>
>
>
> On Sep 5, 2009, at 2:00 AM, István wrote:
>
> Well, we can argue about the tools you are using, but the point is that you
> have an obvious performance drop somewhere :)
>
>
> It's better to move in that direction so you can find and fix it.
>
> However, I have no problem if you keep using Apache; I just don't see the
> point of sharing your experience here if you don't want to share the
> config.
>
>
> regards,
> Istvan
>
> On Fri, Sep 4, 2009 at 12:36 AM, Chang Song <changsong at me.com> wrote:
>
>>
>>
>> Hi, Istvan
>> It is not about 50K single-node nginx throughput.
>> I have standard TCP/IP tuning settings.
>>
>> We can reach 70K throughput in some workloads.
>> It all depends on the workload.
>>
>> I have not given all the details, which is why you are saying that, but
>> we are measuring and capturing every possible system resource under /proc.
>>
>> dstat does not tell you everything.
>> We are currently using collectl and collectd, and capture everything
>> under /proc.
>>
>> This is a sample of the nginx access log (the upstream response time and
>> the nginx service time are both there):
>>
>> [03/Sep/2009:12:19:02 +0900] 10.25.131.46 200 8254 gzip:-% conns:199229
>> up_response_t:0.392 svc_t:0.831 "GET /index_think.jsp HTTP/1.1"
>> [03/Sep/2009:12:19:02 +0900] 10.25.131.48 200 20524 gzip:-% conns:199622
>> up_response_t:0.150 svc_t:0.668 "GET /static/20k.jsp HTTP/1.1"
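The nginx-side overhead can be read straight out of log lines like these: svc_t is the total service time and up_response_t is the portion spent waiting on the upstream (Tomcat), so their difference is time spent in nginx itself. A quick sketch (the field layout is assumed from the two sample lines above):

```python
import re

# svc_t is the total time nginx spent on the request; up_response_t is the
# time the upstream (Tomcat) took.  Their difference is the nginx-side
# overhead.  Field layout assumed from the two sample log lines above.
LOG_RE = re.compile(r"up_response_t:(?P<up>[\d.]+) svc_t:(?P<svc>[\d.]+)")

def nginx_overhead(line):
    """Return (upstream_t, total_t, nginx_side_t) in seconds, or None."""
    m = LOG_RE.search(line)
    if m is None:
        return None
    up = float(m.group("up"))
    svc = float(m.group("svc"))
    return up, svc, svc - up

sample = ('[03/Sep/2009:12:19:02 +0900] 10.25.131.46 200 8254 gzip:-% '
          'conns:199229 up_response_t:0.392 svc_t:0.831 '
          '"GET /index_think.jsp HTTP/1.1"')
up, svc, overhead = nginx_overhead(sample)
print(f"upstream {up:.3f}s  total {svc:.3f}s  nginx-side {overhead:.3f}s")
```

Running this over the whole log would show whether the extra latency lives in nginx or in the upstream leg.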
>>
>> I don't think we are in a position to debug nginx any more deeply, since
>> we need to move on.
>> Thanks
>>
>>
>>
>> On Sep 4, 2009, at 7:19 AM, István wrote:
>>
>> I see,
>> Well, I was able to reach about 50K req/s on a single node by tuning
>> Linux, the TCP stack, and nginx, and I learned one thing:
>>
>> measure instead of guess
>>
>> (and as a side effect: debug instead of guess.)
>>
>> So, if I were you, I would start dstat (dstat -cgilpymn) on that host to
>> watch the different elements of your system, enable debug logging, and
>> even strace nginx.
>>
>> This is the way I think.
>>
>> Regards,
>> Istvan
>>
>>
>>
>> On Thu, Sep 3, 2009 at 10:51 PM, Chang Song <changsong at me.com> wrote:
>>
>>>
>>> Istvan.
>>> It didn't really matter what config I used.
>>>
>>> I tried all combinations of
>>>
>>> worker_processes (2-512)
>>> worker_connections (512-16000)
>>> accept_mutex (on/off)
>>> tcp_nopush (on/off)
>>> tcp_nodelay (on/off)
>>> proxy_buffer* (various sizes)
>>> and other proxy related parameters you can imagine.
>>> The following showed the best performance across the board:
>>>
>>> worker_processes 2;         # since we have a 2-core machine
>>> worker_connections 16000;
>>> accept_mutex off;
>>> max_connections 256;        # ey-balancer (Tomcat had 512 threads)
>>> everything else default
>>>
>>> Thank you.
>>>
>>>
>>>
>>> On Sep 3, 2009, at 4:26 PM, István wrote:
>>>
>>> I think it would be beneficial to show us your nginx config :)
>>>
>>> Regards,
>>> Istvan
>>>
>>> On Thu, Sep 3, 2009 at 5:50 AM, Chang Song <changsong at me.com> wrote:
>>>
>>>> Some time ago, I posted a message about nginx performance when paired
>>>> with Tomcat.
>>>> We recently did extensive in-house testing of various workloads against
>>>> nginx with Tomcat vs. Apache with Tomcat.
>>>>
>>>> Apache wins hands down.
>>>>
>>>> Here's the basic setup
>>>>
>>>> 1. Nginx (2 proc/8192 connections) -> http/1.0 -> Tomcat (HTTP
>>>> connector)
>>>> 2. Apache (512 prefork) -> AJP -> Tomcat (AJP)
>>>>
>>>> Both with KeepAlive off (we don't use client KeepAlive due to the L4 switch)
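For comparison, setup 2 maps onto an httpd fragment along these lines (a sketch only: standard mod_proxy_ajp on Apache 2.2 prefork; 8009 is Tomcat's default AJP port, and the worker count is taken from the 512-prefork figure above):

```apache
# Apache 2.2 prefork, proxying to Tomcat over AJP (sketch)
<IfModule mpm_prefork_module>
    StartServers        64
    MaxClients         512       # matches the 512 prefork workers above
</IfModule>

KeepAlive Off                    # client keepalive disabled, as in the test

# mod_proxy and mod_proxy_ajp must be loaded; 8009 is Tomcat's default AJP port
ProxyPass        / ajp://127.0.0.1:8009/
ProxyPassReverse / ajp://127.0.0.1:8009/
```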
>>>>
>>>> The physical server is a 2-core Intel Xeon, which is a typical web
>>>> server configuration here.
>>>> We have three Grinder 3.2 load generators.
>>>> We tested simple 4K and 20K HTML files served by Tomcat, 20K simple
>>>> HTML with an intentional 200ms sleep in 10% of Tomcat requests (to
>>>> emulate a slow DB query), etc.
>>>>
>>>> In every single case, Apache wins by at least 10-15%, in both
>>>> throughput and response time.
>>>> nginx uses somewhat fewer CPU cycles (10-20%), but it is not able to
>>>> drive Tomcat to 100% CPU.
>>>>
>>>> Here's my take on this performance problem.
>>>>
>>>> 1. Lack of AJP support; AJP is a compact binary protocol designed as
>>>> an optimized alternative to HTTP for talking to Tomcat
>>>>   First of all, this is a serious bottleneck.
>>>>
>>>>   * AJP has much less communication overhead than HTTP
>>>>
>>>> 2. Lack of HTTP KeepAlive support for the proxy
>>>>
>>>>   * The lack of AJP could be compensated for by HTTP keepalive support
>>>>     toward the upstream; without it we see at least twice the number of
>>>>     TIME_WAIT sockets, and mean connection establishment time is two to
>>>>     three times slower than with Apache
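As an aside, upstream keepalive did eventually land in nginx itself (the upstream-block keepalive directive, added in releases well after this thread), which addresses exactly this point. A sketch of that configuration:

```nginx
# Requires a much newer nginx than discussed in this thread (upstream
# keepalive support); shown only to illustrate what point 2 asks for.
upstream tomcat {
    server 127.0.0.1:8080;
    keepalive 32;                       # idle connections cached per worker
}

server {
    location ~ \.jsp$ {
        proxy_pass http://tomcat;
        proxy_http_version 1.1;         # keepalive needs HTTP/1.1 upstream
        proxy_set_header Connection ""; # drop the default "close" header
    }
}
```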
>>>>
>>>> 3. Lack of connection pooling
>>>>
>>>>   * ey-balancer makes things a bit easier, and response times become
>>>>     stable, but average TPS and response time stay the same
>>>>
>>>> 4. There seems to be a huge bug in the connection management code
>>>>
>>>>   A mix of two transaction types: 20K HTML serving, and 8K HTML with an
>>>> intentional 200ms delay in the Tomcat logic
>>>>
>>>>   With Apache, 20K HTML serving took  36 ms on average while 8K HTML
>>>> took 258 ms
>>>>   With Nginx,  20K HTML serving took 600 ms on average while 8K HTML
>>>> took 817 ms
>>>>
>>>>   I really cannot explain this difference; not even TCP connection
>>>> overhead or the lack of AJP accounts for it.
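For scale, the slowdown factors implied by those averages (a quick computation, nothing more):

```python
# Average response times (ms) quoted above for the mixed workload
apache_ms = {"20K HTML": 36, "8K HTML + 200ms sleep": 258}
nginx_ms = {"20K HTML": 600, "8K HTML + 200ms sleep": 817}

for case, a in apache_ms.items():
    print(f"{case}: {nginx_ms[case] / a:.1f}x slower through nginx")
```

A 16x gap on the plain 20K case is far beyond what connection setup cost alone could explain.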
>>>>
>>>> My question is: should I abandon nginx at this point?
>>>> I know nginx is a great proxy and static file server, but paired with
>>>> Tomcat I cannot prove that point, no matter how many times I try.
>>>>
>>>> Thank you
>>>>
>>>> Chang
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>
>
>


-- 
the sun shines for all

