logging into script

Valery Kholodkov valery+nginxen at grid.net.ru
Thu Jan 14 20:02:17 MSK 2010


----- Dennis J. <dennisml at conversis.de> wrote:
> > It is not considered more efficient; it may be more efficient because of
> > bulk data processing. Note also that logged data are written to disk but
> > are not read back from it, because they are already in the OS cache: they are just copied.
> >
> > Logging to a pipe is a waste of CPU because it causes a lot of context switches
> > and memory copies for every log operation:
> 
> Hm, interesting. I didn't know that writing to a pipe actually forces a 
> context switch. I was under the impression that the writing process could 
> use up its time slice to write an arbitrary amount of data into the pipe, 
> and that when the OS scheduler switches to the script, it would read all 
> the data from that pipe.
> 
> The "tail -f" approach looks racy to me though. The log would grow fairly 
> fast which means it would probably have to be rotated at least once per 
> hour or the disk will fill up. I'm not sure how to process this rotation 
> with "tail -f" without potentially missing some data.

Yes. This controversy is what motivated me to create the UDP logger.
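
For illustration only, and not the module's actual protocol or interface: the idea is that each access-log line is sent as a single UDP datagram to a collector process, which appends it to a file on whatever host runs it. A minimal collector sketch in Python (the address, port and file name here are made up) could look like this:

    import socket

    ADDR = ("127.0.0.1", 6514)   # hypothetical listen address and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)

    with open("access.log", "ab") as log:
        while True:
            # assumption: one access-log line per datagram
            datagram, _peer = sock.recvfrom(65535)
            log.write(datagram.rstrip(b"\n") + b"\n")
            log.flush()

The trade-off of UDP is that log lines can be dropped under load, but the web server never has to wait for the logger.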

Although there are clean ways to do this with "tail -f" (one such way is sketched below), there might be demand for an alternative solution.
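
A rotation-aware follower, sketched here in Python under the assumption that rotation renames the old file and recreates the log under the same name, follows the file by name, drains what is left of the old file, and reopens when the inode changes. The remaining window (lines written to the old file after the drain but before the reopen) is exactly the kind of gap discussed above:

    import os, time

    PATH = "access.log"   # hypothetical log path

    def follow(path):
        f = open(path, "r")
        inode = os.fstat(f.fileno()).st_ino
        while True:
            line = f.readline()
            if line:
                yield line
                continue
            # At EOF: check whether the path now names a new inode (rotation).
            try:
                if os.stat(path).st_ino != inode:
                    for line in f:            # drain what is left of the old file
                        yield line
                    f.close()
                    f = open(path, "r")       # switch to the new file
                    inode = os.fstat(f.fileno()).st_ino
                    continue
            except FileNotFoundError:
                pass                          # renamed away but not yet recreated
            time.sleep(0.5)

    for line in follow(PATH):
        pass   # process the log line here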

-- 
Regards,
Valery Kholodkov


