[PATCH] Use clock_gettime(REAL_FAST) on DragonFlyBSD

Maxim Dounin mdounin at mdounin.ru
Thu Jun 2 18:21:15 UTC 2016


Hello!

On Thu, Jun 02, 2016 at 11:19:26PM +0800, Sepherosa Ziehau wrote:

> On Thu, Jun 2, 2016 at 11:05 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> > Hello!
> >
> > On Thu, Jun 02, 2016 at 08:48:19PM +0800, Sepherosa Ziehau wrote:
> >
> >> On DragonFly BSD only clock_gettime(*_FAST) avoids the system call
> >> and time counter read cost (kinda like Linux VDSO).  This gives some
> >> performance improvement and reduces latency.
> >
> > At least FreeBSD also has clock_gettime(*_FAST), from where it was
> > taken to DragonFly BSD.  And Linux 2.6.32+ has
> > clock_gettime(*_COARSE).
> >
> > We do have plans to use monotonic versions, see
> > https://trac.nginx.org/nginx/ticket/189.
> 
> Yeah, if we can use clock_gettime(REAL_FAST/REAL_COARSE) that would be great :)
> 
> Should I submit another patch for it?  I think the clockid can be put
> into the config file.
> 
> I patched Dfly specifically, mainly because its gettimeofday() is
> still a syscall and reads the time counter, while both FreeBSD-current
> and Linux (which seems to have a sysctl to enable it) have a VDSO
> gettimeofday().

I don't think we should introduce 
clock_gettime(CLOCK_REALTIME*) instead of gettimeofday().  It's 
not really needed, as gettimeofday() isn't called often.

Though we can consider doing this as part of the 
clock_gettime(CLOCK_MONOTONIC) work, as we are going to add the 
appropriate configure tests anyway.
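
For illustration only, a minimal sketch of how such a compile-time 
clock selection could look (the NGX_CLOCK_MONOTONIC name and the 
helper function are made up for this example, not actual nginx code; 
the CLOCK_* ids are real: CLOCK_MONOTONIC_COARSE on Linux 2.6.32+, 
CLOCK_MONOTONIC_FAST on FreeBSD and DragonFly BSD):

    #include <time.h>

    /* Prefer the cheap coarse/fast monotonic clocks where available;
     * fall back to plain CLOCK_MONOTONIC otherwise.  The clock ids
     * are preprocessor macros on these platforms, so defined() works. */
    #if defined(CLOCK_MONOTONIC_COARSE)        /* Linux 2.6.32+ */
    #define NGX_CLOCK_MONOTONIC  CLOCK_MONOTONIC_COARSE
    #elif defined(CLOCK_MONOTONIC_FAST)        /* FreeBSD, DragonFly BSD */
    #define NGX_CLOCK_MONOTONIC  CLOCK_MONOTONIC_FAST
    #else
    #define NGX_CLOCK_MONOTONIC  CLOCK_MONOTONIC
    #endif

    static void
    monotonic_time(struct timespec *ts)
    {
        (void) clock_gettime(NGX_CLOCK_MONOTONIC, ts);
    }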

> >> My testing setup consists of two clients and one nginx (recent head
> >> code) server; the two clients are connected to the nginx server
> >> through two 10GbE (82599) links.  15K concurrent connections from
> >> each client; each client runs a modified version of wrk, and each
> >> connection carries only one request for a 1KB file.
> >>
> >> Performance goes from 183Kreqs/s to 184Kreqs/s (admittedly pretty
> >> minor).  Average latency drops by 2ms (98ms -> 96ms).  Latency stdev
> >> drops by 6ms (198ms -> 192ms).
> >
> > This doesn't look statistically significant.  Have you tried
> > something like ministat(1) to test the results?
> 
> I have wrk --latency data:
> 
> Before:
>   Latency Distribution
>      50%   26.03ms
>      75%   73.95ms
>      90%  285.33ms
>      99%    1.04s
> 
> After:
>   Latency Distribution
>      50%   22.63ms
>      75%   77.42ms
>      90%  278.79ms
>      99%    1.01s
> 
> Admittedly, it kinda looks like acceptable measurement error :P.

Doesn't look like a statistically significant difference to me, 
at least in the absence of a margin of error.

If you think there is a difference, please try ministat, as 
already suggested.  Not sure it's available on DragonFly BSD, but 
you can easily grab it here:

https://svnweb.freebsd.org/base/head/usr.bin/ministat/
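
For example, with one latency sample per run on each line of 
before.txt and after.txt, running

    ministat before.txt after.txt

will report whether the two data sets differ at 95% confidence.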

-- 
Maxim Dounin
http://nginx.org/


