Is it possible to monitor the fair proxy balancer?

Grzegorz Nosek grzegorz.nosek at gmail.com
Sat Jun 28 23:54:06 MSD 2008


On Sat, Jun 28, 2008 at 09:28:45 +0200, Alexander Staubo wrote:
> On Sat, Jun 28, 2008 at 2:31 PM, Grzegorz Nosek
> <grzegorz.nosek at gmail.com> wrote:
> > However, there still remains the issue of communication between the load
> > balancer and the outside world, i.e. *how* would you like to be told
> > that a backend has been deemed up/down
> 
> HAProxy -- apologies for having to mention it again, but it's a useful
> template -- has a simple status page similar to Nginx's stub status.
> It comes in HTML and CSV formats, and lists all backends (and
> frontends) and their status (up, down, going down, going up) and a ton
> of metrics (current number of connections, number of bytes transferred,
> error count, retry count, and so on). It can also export the same
> information on a secure domain socket if you don't want to go through
> HTTP.

The question wasn't really prompted by any perceived impossibility of
the task :) A status page is certainly simple enough (i.e. it fits the
nginx model reasonably well), though it has the disadvantage that you
have to poll it periodically. I don't think a dedicated socket for
querying backends is a good design for nginx, so I'd like to gather
ideas about how to notify the outside world. A log message? Sending a
signal somewhere? An SNMP trap? Each approach has its advantages and
disadvantages, so I'd like to pick the one that sucks the least.
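
For comparison, the polling approach with the existing stub status
looks roughly like this (a sketch; the location name is arbitrary and
nginx needs to be built with --with-http_stub_status_module):

    server {
        listen 8080;
        location /status {
            stub_status on;
        }
    }

and then something on the monitoring box hits it periodically, e.g.
curl -s http://localhost:8080/status. A status page for the balancer
would presumably be set up much the same way, just listing each backend
and its up/down state, but it would still have to be polled.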

> 
> > and *how* would you like to tell
> > nginx that backend 1.2.3.4 is currently down?
> 
> Pardon me for asking a naive question, but to change the list of
> backends, would you not simply edit the config file and do a SIGHUP? I
> would reset whatever internal structures are kept by the workers,
> but I can't think of anything that's not okay to lose.

Yes, that's the obvious solution, but apparently it's not always
acceptable, especially when you want an external monitoring system to
do this automatically.
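
For the record, the edit-and-reload approach amounts to something like
this (a sketch; the addresses are made up):

    upstream backend {
        server 10.0.0.1:8000;
        # mark a dead backend by adding "down", then reload
        server 10.0.0.2:8000 down;
    }

followed by kill -HUP `cat /var/run/nginx.pid` (or wherever your pid
file lives) to reload gracefully. It works, but it needs something, or
someone, to edit the file and send the signal.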

> >  - a new option, e.g. max_requests 10 10 20 20 (specifying the number
> >   for each backend in the order of server directives)
> 
> That's a horrible syntax and one that is going to cause problems as
> you add or remove backends from the config. A max_requests setting
> belongs on each backend declaration.

As I wrote in the snipped part, I cannot easily add options to the
server directives (at least not without patching nginx or reinventing
the square wheel). I don't like the max_requests idea either, for
precisely the same reason. I presume that means overloading weight=X is
at least acceptable.
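
To make the two proposals concrete, they would look roughly like this
(neither is implemented yet, so treat both as sketches):

    # proposal 1: a new positional option, fragile as backends change
    upstream backend {
        fair;
        max_requests 10 10 20 20;
        server 10.0.0.1:8000;
        server 10.0.0.2:8000;
        server 10.0.0.3:8000;
        server 10.0.0.4:8000;
    }

    # proposal 2: reuse weight= as the per-backend concurrency cap
    upstream backend {
        fair;
        server 10.0.0.1:8000 weight=10;
        server 10.0.0.2:8000 weight=20;
    }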

> > So, what say you, is such a feature (amounting to returning 502 errors
> > after a certain amount of concurrent requests is reached) generally
> > desired? If so, how would you like to configure it?
> 
> You should only return an error if a request cannot be served within a
> given timeout, not when all backends are full.

I'll have to think about it. It has the potential to busy-loop when all
the backends really are full (if they're all down, one can just send a
hard error and be done with it). I don't think nginx has a way to be
told "everything is unavailable now, come back to me in a second or
two", or better yet, "I'll tell you when to ask me again".

Best regards,
 Grzegorz Nosek
