Feature request: Run a script when upstream detected down/up

François Battail fb at francois.battail.name
Wed Apr 30 14:54:48 MSD 2008


Mansoor Peerbhoy <mansoor at ...> writes:

> a) for instance, having an external program monitor nginx logs for a
> particular log message (or "event"), [...]

OK, that's monitoring, but it still requires a lot of system calls and even bandwidth.

> b) with this approach, the problem of counting the number of times an event
> was fired in a particular time [...]

True, but that's no longer monitoring; it's debugging.

> c) also, if this event approach is taken further, this can also be used as a
> non-intrusive way of profiling

True, but that's no longer monitoring; it's profiling.

> d) and it is also purely in the spirit of a webserver -- apache, if you
> recollect, can pipe its access log to a program [...]

That's the Unix philosophy: "everything is a file", but sometimes a more pragmatic
approach is more efficient. Redirecting logs to another server or program can be
useful, but mostly for heavy statistical computations, intrusion attempt detection
or legal archiving of access logs.

What's the purpose of an error log? To help you trace back an incident.
What's the purpose of monitoring? To tell you there is an incident and to give
you valuable data to understand what kind of incident it is.

It's not the same thing, so they are not the same tools.

At this time, the few things monitored by Nginx are provided by the stub_status
module. How it works: you make an HTTP GET request on a specific location and it
returns some variable values in text form.
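
For reference, the usual setup looks roughly like this (the location name is
whatever you choose, and the numbers are just an example):

location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny all;
}

A GET on /nginx_status then returns something like:

Active connections: 2
server accepts handled requests
 12 12 34
Reading: 0 Writing: 1 Waiting: 1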

What I propose is a more general mechanism: use shared memory instead of an HTTP
request. Since most scripting languages offer access to shared memory, you can do
whatever you want with the data afterwards.

How would it work?

An Nginx module that wants some of its variables monitored should register them
during init, with something like:

ngx_monitored_value_t *ngx_register_monitoring_value(ngx_str_t *name);
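
As a pure illustration (this type and function are part of the proposal, nothing
here exists in Nginx today), a module could register a counter in its init hook
like this:

#include <ngx_config.h>
#include <ngx_core.h>

static ngx_monitored_value_t  *accepted;              /* proposed, hypothetical type */

static ngx_int_t
my_module_init(ngx_cycle_t *cycle)
{
    static ngx_str_t  name = ngx_string("accept");    /* slot name in shared memory */

    accepted = ngx_register_monitoring_value(&name);  /* proposed API */
    if (accepted == NULL) {
        return NGX_ERROR;
    }

    return NGX_OK;
}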

Nginx will then reserve an area in shared memory to store:

"<name>:XXXXXXXX\n"

After startup, the module modifies the value using the atomic_t primitives. At the
end of the main worker cycle we simply loop over the monitored variables and
"snprintf" each value into its area in shared memory.

Of course, for security reasons, Nginx only writes to this memory.

Then you will be able to exploit the data, which would look something like this:

accept:00000002
read:00000001
write:00000001
wait:00000001
mybackendserver1status:00000000
mybackendserver2status:00000001
...
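
A Nagios or Collectd plugin, or any small program or script, can then attach to
the area and parse those lines. Here is a minimal reader sketch, assuming the
area is exposed as a POSIX shared memory object; the segment name and size below
are made up, since the proposal does not fix them:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME  "/nginx_monitoring"   /* hypothetical */
#define SHM_SIZE  4096                  /* hypothetical */

int
main(void)
{
    int fd = shm_open(SHM_NAME, O_RDONLY, 0);
    if (fd == -1) {
        perror("shm_open");
        return EXIT_FAILURE;
    }

    const char *area = mmap(NULL, SHM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (area == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return EXIT_FAILURE;
    }

    /* the area is plain text: one "<name>:XXXXXXXX\n" record per variable */
    fwrite(area, 1, strnlen(area, SHM_SIZE), stdout);

    munmap((void *) area, SHM_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}

(On older systems you may have to link with -lrt for shm_open.)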

What does it cost? The cost of modifying the variables under a mutex or spinlock
(already the case for the stub_status module) and the cost, for each variable, of
one sprintf to transform an integer into its hexadecimal representation. Say 100
assembly instructions each time the main worker does a cycle: it's nothing.
I'm sure it's way faster than the stub_status module ;-)

It's easy, simple, fast and useful (at least for one person). The stub_status
module is not affected; only the Nagios and Collectd plugins would need to be
updated to read the shared memory, not a big deal I think.

Best regards.





