AWS HAProxy latency to Nginx?

barkloud nginx-forum at nginx.us
Tue Jul 12 01:24:26 MSD 2011


> A session doesn't really mean that everything is
> php related - there might be static content
> fetches (for example if the same
> nginx/backend also serves images) and/or keepalive
> connections.

Ah yes, our nginx only handles php requests, but we do get random
requests for non-existent js and image files that are rejected (they
show up in the nginx error.log), so those must contribute to the 200+
sessions listed on the HAProxy stats but never get assigned to a php
process.
So a more accurate count of valid php requests would be (# of
concurrent sessions - # of errors) at any given moment on the HAProxy
stats page, right?
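
If it helps, one way I'm thinking of keeping those junk asset requests
away from the php pool entirely is something like this in the nginx
config (a rough sketch - the root path is just a placeholder, not our
real layout):

    location ~* \.(js|css|png|jpe?g|gif|ico)$ {
        # serve the file if it exists, otherwise 404 right here,
        # so the request never reaches the php-fpm upstream
        root /var/www/static;
        try_files $uri =404;
        access_log off;
    }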

I'm still not quite certain how the worker_processes value of 8 comes
into play when optimizing nginx... I know you should set it to the # of
cores, but with AWS the advertised compute units don't map directly to
the # of physical cores on the box. Beyond that, is it just a matter of
experimenting with values to see which gives better results?
But maybe that's out of the scope of this thread for now.
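
For reference, what I'm starting from is roughly this (worker_connections
is a guess here, not my actual value):

    # see how many vCPUs the instance actually exposes
    grep -c ^processor /proc/cpuinfo

    # nginx.conf - one worker per vCPU as a starting point
    worker_processes  8;
    events {
        worker_connections  1024;
    }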

> Besides a single php child if the
> application code is optimal can 
> manage to complete way more than 1 request per a
> given time period (1 second).
> Personally I feel that 80 php process for 200
> req/sec is somewhat too much - I try to to keep
> like 200-300 req/sec per 10-20 childs 
> and bash the programmers for every page which
> doesn't load in 0.0x secs.

I don't think HAProxy is calculating 200 req/sec, but rather showing
200 concurrent requests at the moment the stats page refreshes.
I ran a quick benchmark on the cli using "ab -n 200 -c 100
http://localhost/........." and saw we're doing about 400+ req/sec
with my current conf.
I'm assuming that by having max_children = 200 in the conf file, that
sets a cap of 200 php requests at any given moment (the
child-to-request ratio being 1:1).
But anyway, which #s should I be tweaking in the conf files, if
possible, to get better numbers? Isn't the spawning of the 80 processes
done automatically, with the only thing I can control being the cap
(max_children)?
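
For context, the pm-related knobs I can see in the pool config are
roughly these (the values below are placeholders, not my exact
settings):

    pm = dynamic
    ; hard cap: at most this many children, i.e. concurrent php requests
    pm.max_children = 200
    ; children started at launch, and the spare pool kept around
    pm.start_servers = 20
    pm.min_spare_servers = 10
    pm.max_spare_servers = 40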



> You didn't post the FPM manager line (eg something
> like pm = dynamic) - that way the php-fpm master
> process spawns extra children when
> there is a need (it also notes in the log file if
> the max setting is too low) and kills the unneeded
> ones.

I have it set as dynamic. 

> The php part depends on the specific application -
> eg there can easily be code that completes in
> nano/micro-seconds while at the same
> time some people easily manage to write endless
> loops etc (btw php-fpm has a nice feature to
> monitor/backtrace such scripts and
> forcibly kill them if they take too long).
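
(I'm guessing that's the slowlog/terminate settings in the pool config -
something along these lines, with placeholder timeouts:)

    ; log a backtrace of any request running longer than this
    request_slowlog_timeout = 5s
    slowlog = /var/log/php-fpm/slow.log
    ; forcibly kill the worker if a request exceeds this
    request_terminate_timeout = 30s
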
> 
> But to make 100% sure, HAProxy offers a neat status
> page where you can see the connection/request
> error counts, that way indicating if
> there is a problem at the backends.
> 
> For extra debug you can always enable logging on both
> instances (frontend / backend) and compare the
> incoming / forwarded and served
> requests.
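
(Noting for myself: on the haproxy side I believe that means something
like the below, plus the usual access_log in nginx - rough sketch, the
syslog target is a placeholder:)

    # haproxy.cfg
    global
        log 127.0.0.1 local0
    defaults
        mode http
        log global
        option httplog

    # nginx.conf
    access_log /var/log/nginx/access.log;
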
> 
> 
> rr
> 
> 

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,212221,212227#msg-212227



