proxy_next_upstream, only "connect" timeout?, try 2

Maxim Dounin mdounin at mdounin.ru
Fri Jun 15 10:29:35 UTC 2012


Hello!

On Fri, Jun 15, 2012 at 10:07:35AM +0200, Gábor Farkas wrote:

> On Fri, Jun 15, 2012 at 10:02 AM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> > Hello!
> >
> > On Fri, Jun 15, 2012 at 09:36:39AM +0200, Gábor Farkas wrote:
> >
> >> regarding my original email:
> >> http://article.gmane.org/gmane.comp.web.nginx.english/34175
> >>
> >> i assume the silence means there is no such way.
> >>
> >> would it be hard to implement it? i can try it, but i'd need to know
> >> if it's at least possible or not,
> >> if there are the necessary 'hooks' in place and such...
> >
> > We probably need something more generic, i.e. some distinction
> > between idempotent and non-idempotent cases in
> > proxy_next_upstream.  This should allow to retry GET/HEAD at any
> > point, while keeping POSTs safe.
> 
> i agree, but i would still prefer to be able to specify the
> on-connect-timeout-only too,
> there are some cases where i do not want to repeat even a GET request,
> and generally it is a safer bet for me to not-repeat anything that
> already 'reached'
> the upstream.

Sure, this needs some configuration for the things which should be 
considered idempotent, as HTTP methods are often misused.

> you see, my specific problem is that i have multiple upstreams, and i want nginx
> to go to the next upstream when an upstream's socket-backlog is full.
> and currently
> i am unable to do this...

A practical solution to the specific problem of a full listen queue 
is to instruct your OS to return RST in this case (this is usually 
the default behaviour, but Linux seems to be an exception), and to 
use "proxy_next_upstream error".  You may also try a small 
proxy_connect_timeout combined with long proxy_read_timeout and 
proxy_send_timeout values.
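
As a rough sketch of the above (the sysctl is the Linux knob for the 
RST behaviour; the upstream addresses are placeholders, not part of 
anyone's real setup):

```nginx
# Linux only: return RST instead of silently dropping connection
# attempts when the listen queue is full (most other OSes already
# behave this way):
#
#   sysctl -w net.ipv4.tcp_abort_on_overflow=1

upstream backend {
    server 10.0.0.1:8080;   # placeholder addresses
    server 10.0.0.2:8080;
}

server {
    location / {
        proxy_pass http://backend;

        # Retry the next server only on connection-level errors,
        # never after a response has started to arrive.
        proxy_next_upstream error;

        # Fail over quickly when a connection cannot be established,
        # but stay patient once a connection is up.
        proxy_connect_timeout 1s;
        proxy_read_timeout    60s;
        proxy_send_timeout    60s;
    }
}
```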

This still has a theoretical problem, though: an error might occur 
for some reason while the request is being sent to an upstream, and 
this will trigger use of the next upstream server.

Maxim Dounin
