10% 500 Errors

James Golick jamesgolick at gmail.com
Mon Mar 10 16:47:01 MSK 2008


Nothing... all of the mongrels respond normally when hit without nginx.

This has got to be an nginx issue...

On Mon, Mar 10, 2008 at 9:40 AM, James Golick <jamesgolick at gmail.com> wrote:

> Yeah, I don't think it's mongrel, as they're all started in exactly the same
> way (through a process monitor).
>
> Also, every time I get the error, I see one of those messages in the nginx
> error.log. But I guess it could still be happening upstream...
>
> I will test them all and report back.
>
>
> On Mon, Mar 10, 2008 at 9:29 AM, Phillip B Oldham <
> phill at theactivitypeople.co.uk> wrote:
>
> >  I was seeing something similar with PHP5 FastCGI and lighttpd, though it
> > was a lot more than 10% (maybe 25-50%). I'm getting a little worried, as I
> > think I may be seeing the same thing at around 5-10% with nginx.
> >
> > During my investigations into why this was happening with lighttpd, I came
> > across the following paragraph in the mod_fcgid docs:
> >
> > Adds a MaxRequestsPerProcess parameter that allows mod_fcgid to exit
> > after handling a certain number of requests, similar to the existing
> > ProcessLifeTime option.
> >
> > This solves a problem with PHP in FastCGI mode. By default, PHP stops
> > accepting new FastCGI connections after handling 500 requests;
> > unfortunately, there is a potential race condition during the PHP cleanup
> > code in which PHP can be shutting down but still have the socket open, so
> > mod_fcgid under heavy load can send request number 501 to PHP and have it
> > "accepted", but then PHP appears to simply exit, causing errors.
> >
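> > The usual workaround (a rough sketch based on that paragraph, not something
> > I've tested) is to have the web server recycle each process before PHP hits
> > its own limit, e.g. in the mod_fcgid config:
> >
> > # retire each PHP process at 500 requests, matching PHP's default limit
> > MaxRequestsPerProcess  500
> > DefaultInitEnv         PHP_FCGI_MAX_REQUESTS 500
> >
> > so mod_fcgid retires the process itself rather than handing it request 501.
> >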
> > I'm not sure whether your rails app/mongrel is restarting processes after a
> > set number of requests and hitting the same race condition?
> >
> > If this problem could become apparent in nginx, it would be great if there
> > were a plugin to spawn and manage fcgi processes that could limit the number
> > of connections to each backend.
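> >
> > The closest thing I'm aware of in stock nginx isn't a per-backend connection
> > cap, but marking a backend as down for a while after repeated failures, e.g.
> > in the upstream block (just a sketch; names and ports are made up):
> >
> > upstream backends {
> >     server 127.0.0.1:9000 max_fails=3 fail_timeout=30s;
> >     server 127.0.0.1:9001 max_fails=3 fail_timeout=30s;
> > }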
> >
> > HTH.
> > Phill
> >
> > James Golick wrote:
> >
> > I have nginx running as a proxy to about twelve upstream app servers,
> > serving a rails app. Nothing else really in this configuration.
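> >
> > Roughly (a trimmed-down sketch rather than the literal config; names and
> > ports here are placeholders):
> >
> > upstream mongrels {
> >     server 127.0.0.1:8000;
> >     # ... the rest of the mongrel ports ...
> > }
> >
> > server {
> >     location / {
> >         proxy_pass http://mongrels;
> >     }
> > }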
> >
> > I am seeing about 10% of requests throwing 500 errors, and this in my
> > error log:
> >
> > 2008/03/10 08:41:05 [info] 6632#0: *12005 client closed prematurely
> > connection while sending response to client, client: xxx, server: xxx,
> > request: xxx, host: xxx, referrer: xxx
> >
> > I'm also seeing lots of:
> >
> > client xxx closed keepalive connection
> >
> > but that strikes me as normal, and I'm seeing:
> >
> > client closed prematurely connection while reading client request line,
> > client: xxx, server: xxx
> >
> > I have googled far and wide, and the best answers I came up with were to
> > add these lines to my conf:
> >
> > proxy_ignore_client_abort  on;
> > proxy_next_upstream error;
> >
> > but that doesn't seem to have solved the problem.
> >
> > Any ideas?
> >
> > Thanks in advance.
> >
> >
> > --
> >
> > *Phillip B Oldham*
> > The Activity People
> > phill at theactivitypeople.co.uk
> >
>
>

