logging into script

Igor Sysoev igor at sysoev.ru
Thu Jan 14 16:51:58 MSK 2010


On Thu, Jan 14, 2010 at 01:52:02PM +0100, Dennis J. wrote:

> Why is logging into a pipe considered a waste of CPU?
> The log parser throws away some data, aggregates the rest and then writes 
> it to a remote database. The "tail -f" approach would waste local disk I/O 
> by writing data unnecessarily to disk which I would then have to read again 
> with the script.
> Why is this considered more efficient than handing the data directly over 
> to a script?

It is not considered more efficient per se; it may be more efficient
because of bulk data processing. Note also that while the logged data are
written to disk, reading them back costs no extra disk I/O: they are still
in the OS page cache, so they are simply copied from memory.

Logging to a pipe is a CPU waste because it causes a lot of context
switches and memory copies for every log operation:

1) nginx writes to the pipe,
2) context switch to the script,
3) the script reads from the pipe,
4) the script processes the line,
5) the script writes to a database,
6) context switch back to nginx,

instead of a single memory-copy operation to a log file.
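[The `tail -F` approach recommended later in this thread can be sketched in Python. This is a minimal, poll-based sketch, not nginx's or coreutils' actual implementation: `follow()` mimics `tail -F` (including reopening the file after rotation, which is what `-F` adds over `-f`), and `aggregate()` is a hypothetical bulk-processing step that counts requests per client address, standing in for whatever the parser would write to the database in batches.]

```python
import os
import time

def follow(path, poll_interval=1.0):
    """Yield complete lines appended to path, roughly what `tail -F` does.

    Reopens the file when its inode changes, so log rotation is survived
    (the -F behaviour). Partial lines are buffered until the newline arrives.
    """
    f = open(path)
    f.seek(0, os.SEEK_END)          # start at the current end, like tail
    buf = ""
    while True:
        chunk = f.readline()
        if chunk:
            buf += chunk
            if buf.endswith("\n"):  # only yield complete lines
                yield buf.rstrip("\n")
                buf = ""
            continue
        try:  # no new data: check whether the log was rotated, then wait
            if os.stat(path).st_ino != os.fstat(f.fileno()).st_ino:
                f.close()
                f = open(path)      # rotated: continue on the new file
                continue
        except FileNotFoundError:
            pass
        time.sleep(poll_interval)

def aggregate(lines):
    """Hypothetical bulk step: count requests per remote address.

    Assumes the default combined log format, where the client address
    is the first space-separated field of each line.
    """
    counts = {}
    for line in lines:
        addr = line.split(" ", 1)[0]
        counts[addr] = counts.get(addr, 0) + 1
    return counts
```

[Batching many lines through `aggregate()` before each database write is what amortizes the per-line cost; the pipe approach above pays the context-switch price on every single line instead.]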

> Regards,
>    Dennis
> 
> On 01/14/2010 07:03 AM, Peter Leonov wrote:
> > This thread might be of help:
> > http://nginx.org/pipermail/nginx/2009-June/013042.html
> >
> > By the way, "tail -F" is the only recommended way to do near real-time log parsing with nginx.
> >
> > On 14.01.2010, at 8:12, Dennis J. wrote:
> >
> >> Hi,
> >> Is there an nginx equivalent to Apache's CustomLog directive with the "|" prefix, so it logs into the stdin of another program/script? I need to do real-time processing of the access log data and I'm wondering how I can accomplish this once I switch to nginx.


-- 
Igor Sysoev
http://sysoev.ru/en/
