logging into script

Maxim Dounin mdounin at mdounin.ru
Thu Jan 14 18:03:41 MSK 2010


Hello!

On Thu, Jan 14, 2010 at 03:48:35PM +0100, Dennis J. wrote:

> On 01/14/2010 02:51 PM, Igor Sysoev wrote:
> >On Thu, Jan 14, 2010 at 01:52:02PM +0100, Dennis J. wrote:
> >
> >>Why is logging into a pipe considered a waste of CPU?
> >>The log parser throws away some data, aggregates the rest and then writes
> >>it to a remote database. The "tail -f" approach would waste local disk I/O
> >>by writing data unnecessarily to disk, which I would then have to read again
> >>with the script.
> >>Why is this considered more efficient than handing the data directly over
> >>to a script?
> >
> >It is not considered more efficient. It may be more efficient because of
> >bulk data processing. Note also that logged data are written to disk but
> >are not read back from it, because they are still in the OS cache: they
> >are just copied.
> >
> >Logging to a pipe is a CPU waste because it causes a lot of context
> >switches and memory copies for every log operation:
> 
> Hm, interesting. I didn't know that writing to a pipe actually
> forces a context switch. I was under the impression that the writing
> process could use up its time slice to write an arbitrary amount of
> data into the pipe, and when the OS scheduler switches to the script
> it would read all the data from that pipe.
> 
> The "tail -f" approach looks racy to me though. The log would grow
> fairly fast which means it would probably have to be rotated at
> least once per hour or the disk will fill up. I'm not sure how to
> process this rotation with "tail -f" without potentially missing
> some data.

tail -F will do the trick.
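
Roughly, tail -F follows the log by name: when the inode under that
name changes, it reopens the file and keeps going.  A minimal Python
sketch of the same idea, assuming rotation-by-rename (the path and
the processing step are placeholders, not anything from this thread):

    #!/usr/bin/env python3
    # Minimal sketch of tail -F semantics: follow a log by name and
    # reopen it after rotation-by-rename.  Path and processing step
    # are placeholders.
    import os
    import time

    LOG = "/var/log/nginx/access.log"   # example path

    def follow(path):
        f = open(path)
        f.seek(0, os.SEEK_END)          # start at the end, like tail -f
        inode = os.fstat(f.fileno()).st_ino
        while True:
            line = f.readline()
            if line:
                yield line
                continue
            # At EOF: has the file been rotated away under us?
            try:
                rotated = os.stat(path).st_ino != inode
            except FileNotFoundError:
                rotated = True          # rotation in progress
            if not rotated:
                time.sleep(0.5)         # no new data yet
                continue
            # Drain whatever is left in the old file, then reopen the
            # new one from the beginning so nothing is missed.
            for rest in f:
                yield rest
            f.close()
            while True:
                try:
                    f = open(path)
                    break
                except FileNotFoundError:
                    time.sleep(0.1)     # new file not created yet
            inode = os.fstat(f.fileno()).st_ino

    for line in follow(LOG):
        pass                            # aggregate here, batch-write to the DB

With GNU tail, -F is shorthand for --follow=name --retry, which is
exactly this reopen-by-name behaviour.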

It's still racy as long as the app reading the logs can't cope with 
the load and finish reading one file before the second rotation 
happens.  But in that case the only result you could expect from 
piping logs directly from nginx would be a brick instead of a server: 
once the pipe buffer fills up, every log write blocks the worker.
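
If the reader really can't keep up, the bulk-processing variant Igor
mentioned is safer: don't follow the live file at all, let logrotate
keep a few generations, and process each rotated file whole from cron
or a postrotate hook.  A minimal sketch, again with hypothetical paths
and a placeholder aggregation step:

    #!/usr/bin/env python3
    # Minimal sketch: process whole rotated files in bulk instead of
    # following the live log.  Paths, the ".done" marker scheme and
    # the aggregation are placeholders.
    import glob
    import os

    LOG_DIR = "/var/log/nginx"          # example directory

    for path in sorted(glob.glob(os.path.join(LOG_DIR, "access.log.[0-9]*"))):
        marker = path + ".done"
        if path.endswith(".gz") or os.path.exists(marker):
            continue                    # compressed or already processed
        per_client = {}
        with open(path) as f:
            for line in f:
                # throw away / aggregate instead of keeping raw lines
                client = line.split(" ", 1)[0]
                per_client[client] = per_client.get(client, 0) + 1
        # ... one bulk write of per_client to the remote database here ...
        open(marker, "w").close()       # remember what was finished

The point is the batching: one pass over a whole file and one bulk
write to the database, instead of a wakeup per log line.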

Maxim Dounin
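
P.S.  The context-switch cost Igor described is easy to see for
yourself: time many small writes into a pipe with a live reader on
the other end against the same appends to a regular file.  An
illustrative Python sketch (message, count and scratch path are
arbitrary; Unix only):

    #!/usr/bin/env python3
    # Illustrative only: compare many small writes into a pipe (live
    # reader on the other end) with the same appends to a plain file.
    import os
    import time

    N = 200_000
    MSG = b"127.0.0.1 - - GET / HTTP/1.0 200 612\n"

    # Writes into a pipe; a child process drains the other end.
    r, w = os.pipe()
    if os.fork() == 0:                  # child: read until EOF
        os.close(w)
        while os.read(r, 65536):
            pass
        os._exit(0)
    os.close(r)
    t0 = time.monotonic()
    for _ in range(N):
        os.write(w, MSG)                # each write may wake the reader
    os.close(w)
    os.wait()
    pipe_time = time.monotonic() - t0

    # The same writes appended to a regular file (they land in OS cache).
    path = "/tmp/pipe-vs-file.log"      # scratch file
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    t0 = time.monotonic()
    for _ in range(N):
        os.write(fd, MSG)
    os.close(fd)
    file_time = time.monotonic() - t0
    os.unlink(path)

    print(f"pipe: {pipe_time:.2f}s   file: {file_time:.2f}s")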


