Hold requests long enough for me to restart upstream?

Cliff Wells cliff at develix.com
Fri Mar 20 23:46:52 MSK 2009


On Thu, 2009-03-19 at 15:01 -0700, Rt Ibmer wrote:
> I use nginx 0.6.31 with proxy_pass to front end requests to a servlet
> running in Jetty (the upstream).
> 
> Sometimes I need to update a jar on the upstream, which requires
> restarting jetty to take effect.
> 
> I am looking for a way to tell nginx that if it gets a connection
> failure at the upstream (which is what happens when Jetty is in the
> process of restarting, since nothing is listening on that port during
> the restart), it should wait, say, 20 seconds before returning an
> error to the browser.
> 
> Certainly this will back up the processing a bit, but it should be
> very short as it only takes Jetty about 10 seconds to restart and
> start listening again on its port. During slow periods we are only
> getting 2-5 requests per second so there should be plenty of resources
> for nginx to queue up these requests while it waits for Jetty.
> 
> Can someone please tell me what settings I should use so that nginx
> will wait up to 20 seconds for the upstream to restart so that it
> doesn't return an error to the browser?
> 
> In the past I have tried setting all these to 20 seconds:
>     proxy_connect_timeout   20s;
>     proxy_send_timeout      20s;
>     proxy_read_timeout      20s;
> 
> but when I restarted Jetty, right away the nginx error logs started
> showing errors like:
> [error] 6445#0: *141102686 connect() failed (111: Connection refused)
> while connecting to upstream

That's because there is nothing to connect to, which is different from a
timeout.  The backend socket is closed rather than open but unresponsive,
so connect() fails immediately instead of waiting for any timer to expire.
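
To make the distinction concrete, here is roughly what those three
directives govern (the port is just a placeholder and the values are
only examples, not a recommendation):

    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed backend port

        # Caps how long nginx waits for a pending connect(), e.g. when
        # packets are silently dropped.  A "connection refused" comes
        # back from the kernel immediately, so this timer never starts.
        proxy_connect_timeout   20s;

        # These two only apply after a connection has been established.
        proxy_send_timeout      20s;
        proxy_read_timeout      20s;
    }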

> Are the above configuration parameters correct for what I am trying to
> do and maybe I just didn't set them right? Or is there some other way?
> 
> Basically what I'm trying to do is set those settings high, tell nginx
> to reload its config, then bounce jetty, then have nginx hold the
> requests long enough to get through once jetty is back up and then
> have the requests go through to jetty without losing any requests.
> Then after jetty is restarted I would put the timeouts back to normal
> levels like 3s, until the next time I have to do an update.

Have you considered running two instances of Jetty rather than just one?
Then you could use the upstream directive to manage this.
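
Something along these lines (the ports and the upstream name are just
placeholders, not taken from your setup):

    upstream jetty_backend {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://jetty_backend;

            # On a connection error (such as "connection refused" while
            # one Jetty is restarting) or a timeout, retry the request
            # on the next server in the upstream instead of returning
            # an error to the browser.
            proxy_next_upstream error timeout;
        }
    }

You could then restart the two Jetty instances one at a time, so that at
least one of them is always listening.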

Regards,
Cliff





