Ah thanks, in which case it probably is wise to use the dynamic spawning method with a guaranteed number of 32 processes (with a maximum of 2-3x that, or something along those lines), I guess.<br><br>By the way, I've tested this on HTTP backends a little without any obvious errors; I'm planning on loading it onto a live server later this week for some real testing. Will this be packaged as a module in the future, or do you think it will be included in the core? I suppose Igor would be best placed to answer that question.<br>
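For reference, a minimal sketch of the upstream block being discussed, assuming 32 php-fpm children and 4 nginx workers (the socket path and numbers are illustrative, not from a tested setup):

```nginx
upstream fastcgi_backend {
    server unix:/tmp/php-fpm.sock;
    # 4 workers x 6 cached connections = 24 at most, leaving
    # 8 of the 32 php-fpm children free to drain the listen queue.
    keepalive 6;
}
```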
<br><div class="gmail_quote">On Sat, Jul 30, 2011 at 2:13 AM, Maxim Dounin <span dir="ltr"><<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hello!<br>
<div class="im"><br>
On Sat, Jul 30, 2011 at 01:54:36AM +1000, SplitIce wrote:<br>
<br>
> Correct me if I'm wrong, but wouldn't the correct value to use for keepalive be<br>
> 31(/workers) in this case?<br>
<br>
</div>Even 31(/workers) may be too big: (1) there may be more workers<br>
during reload and (2) under load you are expected to always have<br>
some connections busy (and hence not counted against keepalive<br>
limit), and requests in the listen queue will still be waiting<br>
forever.<br>
<br>
Maxim Dounin<br>
<div><div></div><div class="h5"><br>
><br>
> On Sat, Jul 30, 2011 at 1:36 AM, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>> wrote:<br>
><br>
> > Hello!<br>
> ><br>
> > On Fri, Jul 29, 2011 at 03:43:56PM +0200, Thomas Love wrote:<br>
> ><br>
> > > ><br>
> > > > > On 26 July 2011 13:57, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>> wrote:<br>
> > > > ><br>
> > > > > > Hello!<br>
> > > > > ><br>
> > > > > > Attached patch (against 1.0.5) introduces upstream keepalive<br>
> > > > > > support for memcached, fastcgi and http. Note the patch is<br>
> > > > > > experimental and may have problems (though it passes basic smoke<br>
> > > > > > tests here). Testing is appreciated.<br>
> > > > > ><br>
> > > > > ><br>
> > > > > Sounds great. Is it expected to work in this case:<br>
> > > > ><br>
> > > > > upstream fastcgi_backend {<br>
> > > > > server unix:/tmp/php-fpm.sock;<br>
> > > > > keepalive 32;<br>
> ><br>
> > Try something like<br>
> ><br>
> > keepalive 10;<br>
> ><br>
> > here instead. See below.<br>
> ><br>
> > > > > }<br>
> > > ><br>
> > > > Yes (though I'm not sure if php is able to keep connections<br>
> > > > alive, but a quick look suggests it should).<br>
> > ><br>
> > ><br>
> > > Out of interest I have tested the above (on 1.0.5) under heavy load and I<br>
> > > have run into a problem. 40 - 90 seconds after startup all requests start<br>
> > > returning 502 and the log is flooded with:<br>
> > ><br>
> > > [error] 2120#0: *37802582 connect() to unix:/tmp/php-fpm.sock failed (11:<br>
> > > Resource temporarily unavailable) while connecting to upstream, [...]<br>
> > > upstream: "fastcgi://unix:/tmp/php-fpm.sock:"<br>
> > ><br>
> > > I am running php-fpm 0.5.14 with 32 * PHP 5.2.17 processes.<br>
> > ><br>
> > > How long it takes seems to depend on the request rate. It looks<br>
> > > like the php-fpm listen backlog is overflowing (it's at 8178, but<br>
> > > that's normally sufficient when keepalive is disabled).<br>
> ><br>
> > With "keepalive 32;" you keep all of your 32 php processes busy<br>
> > even when there are no active requests, and there are no processes<br>
> > left to service the listen queue.<br>
> ><br>
> > On the other hand, nginx will still try to establish a new<br>
> > connection if there are no idle ones sitting in the connections<br>
> > cache during request processing. Therefore some requests will<br>
> > enter php's listen queue and have no chance to leave it.<br>
> > Eventually the listen queue will overflow.<br>
> ><br>
> > Please also note that "keepalive 10;" means each nginx worker will<br>
> > keep up to 10 connections. If you are running multiple workers,<br>
> > you may need to use a lower number.<br>
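The per-worker multiplication can be written out explicitly; a hedged sketch assuming 4 worker processes (all values are illustrative, not a recommendation):

```nginx
worker_processes 4;

upstream fastcgi_backend {
    server unix:/tmp/php-fpm.sock;
    # Each worker caches up to 5 idle connections: 4 x 5 = 20 total.
    # During a reload, old and new workers briefly coexist, so the
    # transient worst case is roughly double that (8 x 5 = 40).
    keepalive 5;
}
```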
> ><br>
> > Maxim Dounin<br>
> ><br>
> > _______________________________________________<br>
> > nginx-devel mailing list<br>
> > <a href="mailto:nginx-devel@nginx.org">nginx-devel@nginx.org</a><br>
> > <a href="http://mailman.nginx.org/mailman/listinfo/nginx-devel" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx-devel</a><br>
> ><br>
><br>
><br>
><br>
> --<br>
</div></div>> Warez Scene <<a href="http://thewarezscene.org" target="_blank">http://thewarezscene.org</a>> Free Rapidshare<br>
> Downloads<<a href="http://www.nexusddl.com" target="_blank">http://www.nexusddl.com</a>><br>
<div><div></div><div class="h5"><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><a href="http://thewarezscene.org" target="_blank">Warez Scene</a> <a href="http://www.nexusddl.com" target="_blank">Free Rapidshare Downloads</a><br><br>