Configurable sleep period for connections

u5h u5.horie at gmail.com
Mon Mar 27 00:25:56 UTC 2023


Hi, I ran into the same issue a while ago.
Did you try proxy_cache_lock_timeout?

https://forum.nginx.org/read.php?2,276344,276349#msg-276349
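
For illustration, a minimal sketch of lowering the lock timeout (the
directive names are real nginx directives; the cache path, the zone
name "my_cache", and the values are placeholders chosen for the
example, not taken from anyone's actual config):

    proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

    server {
        listen 8081;

        location / {
            proxy_pass       http://127.0.0.1:8080;
            proxy_cache      my_cache;
            proxy_cache_lock on;
            # Default is 5s; when the timeout expires, waiting
            # requests are passed to the upstream, but their
            # responses are not cached.
            proxy_cache_lock_timeout 200ms;
        }
    }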

However, the article below says that simply reducing the busy-loop
wait time may not resolve this problem, because of how nginx's event
notification mechanism handles many concurrent requests for the same
content.

https://blog.lumen.com/pulling-back-the-curtain-development-and-testing-for-low-latency-dash-support/

By the way, this kind of user-level discussion might have been better
suited to the nginx (users) mailing list.

--
Yugo Horie

On Fri, Mar 24, 2023 at 19:18 Maxim Dounin <mdounin at mdounin.ru> wrote:

> Hello!
>
> On Fri, Mar 24, 2023 at 09:24:25AM +0100, Roy Teeuwen wrote:
>
> > You are absolutely right, I totally forgot about the cache_lock.
> > I have listed our settings below.
> >
> > The reason we are using the cache_lock is to save the backend
> > application from getting hundreds of requests when a cached
> > item goes stale. Even if we have use_stale updating, we notice
> > that only the first request will use the stale item; the
> > following requests will make new requests even though there is
> > already a background request going on to refresh the stale
> > item. (This does not happen if we set keepalive to 0, where new
> > connections are being used, but that has the performance
> > degradation mentioned earlier.) This was the reasoning for the
> > cache_lock, but that gives the issue of the 500ms lock, while
> > the item might already be refreshed after 100ms.
>
> To re-iterate: proxy_cache_lock is not expected to affect requests
> if there is an existing cache item (and keepalive shouldn't affect
> proxy_cache_lock in any way; not to mention that the "keepalive"
> directive, which configures keepalive connections cache to
> upstream servers, does not accept the "0" value).
>
> You may want to dig further into what actually happens in your
> configuration.  I would recommend starting with a debug log that
> shows the described behaviour, and then following the code to
> find out why the cache lock kicks in when it shouldn't.
>
> --
> Maxim Dounin
> http://mdounin.ru/
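
For readers following the thread, here is a rough sketch of the kind
of configuration being discussed above (every name and value is a
placeholder assumed for illustration, not taken from Roy's actual
config):

    upstream backend {
        server 127.0.0.1:8080;
        # "keepalive" sets the number of idle connections cached per
        # worker process; it takes a positive number, so "keepalive 0"
        # is rejected when the configuration is loaded.
        keepalive 16;
    }

    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    server {
        listen 8081;

        location / {
            proxy_pass         http://backend;
            # Required for keepalive connections to the upstream.
            proxy_http_version 1.1;
            proxy_set_header   Connection "";

            proxy_cache           app_cache;
            # Serve the stale copy while the item is being refreshed.
            proxy_cache_use_stale updating;
            # Collapse concurrent misses for the same key into a
            # single upstream request.
            proxy_cache_lock      on;
            proxy_cache_lock_timeout 500ms;
        }
    }

As Maxim notes above, proxy_cache_lock should only matter while no
cache item exists at all; once an item exists, use_stale updating is
what governs concurrent requests for it.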
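
As for the debug log Maxim suggests, it is enabled by setting the
error_log level to "debug", which requires an nginx binary built with
--with-debug (the paths and the address below are placeholders):

    error_log /var/log/nginx/debug.log debug;

    events {
        # Optionally limit debug logging to a single test client to
        # keep the log readable on a busy server.
        debug_connection 192.0.2.1;
    }

The cache code logs its decisions with lines prefixed "http file
cache", so grepping the debug log for that prefix should show when
and why the cache lock kicks in.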