Hi Maxim,

On Saturday, 1 February 2014, Maxim Dounin <mdounin@mdounin.ru> wrote:

> If you just care about precision of a time measured, on Linux you may try
> using timer_resolution with some reasonable value like 10ms. On
> Linux time is updated using signals in this case, and time logged
> will be more accurate regardless of how long event loop iteration
> takes.

Ah, so timer_resolution will "fix" it on Linux? (Our production boxes are
Linux; my testing so far has been on OS X.) On OS X it looked like the
timer_resolution update never ran while the gzipping was happening, so I
assumed the behaviour would be similar cross-platform (though I did check
that I could reproduce the original issue on both).
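
Just so I'm sure I understand: I assume that would just mean setting the
directive at the top level of nginx.conf, something along these lines (a
minimal sketch; the 10ms value and the rest of the config are only
placeholders):

    # main (top-level) context of nginx.conf
    # with timer_resolution, nginx refreshes its cached time on a fixed
    # 10ms interval (via signals on Linux) instead of once per event
    # loop iteration, so logged times stay accurate even when an
    # iteration runs long (e.g. while gzipping a large response)
    timer_resolution 10ms;

    events {
        worker_connections 1024;
    }
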
> The really bad thing is actually that this can easily introduce
> potentially huge latency spikes. If this is often happens in
> practice, it might be a good idea to introduce something similar
> to sendfile_max_chunk we already have to resolve similar problems
> with sendfile(), see http://nginx.org/r/sendfile_max_chunk.

So where would such a max-chunk limit sit? At the socket-write end of the
process?
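
For comparison, my (possibly wrong) understanding of the existing directive
is that it's simply a per-location cap on how much data a single sendfile()
call may send, e.g.:

    location /downloads/ {
        sendfile           on;
        # cap the data sent per sendfile() call so one fast client
        # can't hog a worker's event loop for too long
        sendfile_max_chunk 512k;
    }

So would a gzip equivalent presumably cap how much output gets compressed
and written before the worker returns to the event loop?
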
Thanks,

Rob :)