Nginx session-stickiness

Corey Donohoe atmos at
Mon Apr 7 09:26:58 MSD 2008

On Sun, Apr 6, 2008 at 10:42 PM, Rob Mueller <robm at> wrote:
> > +1, it is common for our app to have backends that can't be
> > connect()ed temporarily during a roll or restart.
> >
>  At the moment we do this by having a separate file included as:
>   include /etc/nginx-servers.conf;
>  A separate process is kept running which every 10 seconds queries our DB
> for "up" servers and rebuilds the nginx-servers.conf file. If a server is
> marked as down, it adds a "down" suffix to the appropriate server, and then
> HUPs nginx.
>  Our code to do a rolling restart of the backends basically updates the DB
> to let it know the backend is down, waits 15 seconds, restarts the backend,
> then marks it as up again in the DB, waits 15 seconds, then moves to the
> next server.
>  This pretty much ensures that no clients see any downtime at all, though I
> think "keep-alive" connections may still see a problem; I haven't tested
> that closely...

IMHO, this is overkill.  It's really neat, but I don't think you need
to do this at all.  We host lots of Rails apps and don't run into
problems that require that kind of approach.  You'll get error log
messages, but clients don't notice.  The only time we restart/HUP
nginx is when we rotate logs or upgrade nginx itself.  Nginx's
upstream handling has always been smart enough to pick an available
server to send requests to without hacks like this; it's one of the
reasons I loved it immediately. :)  We've used uneven weighting with
the fair queueing patch, but even then it sends requests to available
upstream servers.  You're basically faking load balancer heartbeats
inside nginx, and as far as I know you don't need to.  If you're
running J2EE, PHP, Python, or whatever, that might make sense; if
you're on Rails I wouldn't do this.

Corey Donohoe
