Configurable sleep period for connections
Roy Teeuwen
roy at teeuwen.be
Fri Mar 24 08:24:25 UTC 2023
Hey Maxim,
You are absolutely right; I had totally forgotten about the cache_lock. I have listed our settings below.
The reason we are using the cache lock is to protect the backend application from getting hundreds of requests when a cached item has gone stale. Even with "proxy_cache_use_stale updating" enabled, we notice that only the first request is served the stale item; subsequent requests go to the backend again, even though a background request to refresh the stale item is already in flight. (This does not happen when we set keepalive to 0, so that every request uses a new connection, but that comes with the performance degradation mentioned before.) That was our reasoning for enabling the cache lock, but the lock introduces the 500ms wait, even though the item may already be refreshed after 100ms.
So is there an option to make the 500ms in ngx_http_file_cache.c configurable? Are there any downsides to doing that? Or is there a better alternative?
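For what it's worth, the lock-related directives we found are proxy_cache_lock_timeout and proxy_cache_lock_age, but as far as we can tell neither of them changes the 500ms poll interval itself; they only bound how long a waiting request is held overall and when an additional request may be passed to the backend. A sketch of what we experimented with (the values here are illustrative, not our production settings):

    proxy_cache_lock on;
    # illustrative tuning: give up waiting on the lock sooner, and allow
    # another upstream request if the first one takes too long
    proxy_cache_lock_timeout 1s;
    proxy_cache_lock_age 1s;

Even with these lowered, a waiting request still appears to re-check the cache element only when the 500ms timer fires, which is the delay we would like to make configurable.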
Our config:
add_header X-Cache-Status $upstream_cache_status always;
# tag::proxy_cache[]
proxy_cache content-proxy-7b49d8c897-62gk9;
proxy_cache_background_update on;
proxy_cache_bypass $arg_nocache;
proxy_cache_lock on;
proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_cache_valid 404 5m;
# end::proxy_cache[]
# tag::upstream[]
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
# end::upstream[]
# This header will be used to indicate a request coming from nginx
# With this header set we can return 503 only to nginx when a server is in maintenance mode
proxy_set_header X-Content-Cache $hostname;
proxy_ssl_server_name on;
proxy_read_timeout 10;
proxy_connect_timeout 10;
proxy_send_timeout 10;
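For completeness, the upstream keepalive we refer to looks roughly like the sketch below; the upstream name, server, and connection count are illustrative rather than our exact values:

    upstream content_backend {
        server backend.example.com:443;
        # reuse idle connections to the backend; dropping this directive
        # (what we loosely called "keepalive 0" earlier, i.e. no connection
        # reuse) avoids the waiting behaviour but degrades performance,
        # since every request then opens a new connection
        keepalive 16;
    }

    # in the location block, the usual companions for upstream keepalive
    proxy_http_version 1.1;
    proxy_set_header Connection "";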
Greets,
Roy
> On 23 Mar 2023, at 18:44, Maxim Dounin <mdounin at mdounin.ru> wrote:
>
> Hello!
>
> On Thu, Mar 23, 2023 at 09:26:48AM +0100, Roy Teeuwen wrote:
>
>> We are using NGINX as a proxy / caching layer for a backend
>> application. Our backend has a relatively slow response time,
>> ranging between the 100 to 300ms. We want the NGINX proxy to be
>> as speedy as possible, to do this we have implemented the
>> following logic:
>>
>> - Cache all responses for 5 mins (based on cache control
>> headers)
>> - Use stale cache for error's on the backend
>> - Do a background update for stale cache
>>
>> The last part has an issue, namely if a first request reaches
>> nginx, it will trigger a background request, but other requests
>> for the same resource will be locked until this background
>> request is finished instead of still returning the stale cache
>> that is available. This is caused by the fact that there is a
>> keepalive on the connection, which locks all subsequent requests
>> until the background request is finished.
>
> Could you please clarify what you are trying to describe?
>
> Keepalive on the connection might delay handling of subsequent
> requests on the same connection (and not other requests to the
> same resource).
>
> Other requests to the same resource might be delayed by the
> proxy_cache_lock (https://nginx.org/r/proxy_cache_lock), but it is
> not something in your description, and it only works for new cache
> elements and has no effect when there is a stale cache item.
>
>> The issue that we are facing in this situation is that the
>> locking is very long, namely 500ms hardcoded. I think it is
>> caused by this:
>> https://github.com/nginx/nginx/blob/master/src/core/ngx_connection.c#L703
>
> This looks completely unrelated. A 500ms delay can be seen with
> proxy_cache_lock as previously mentioned, see here:
>
> http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_file_cache.c#l455
>
> But again, it is not expected to appear in the use case you
> describe.
>
>> This means that our relatively slow backend of 100-200ms
>> actually gets worse than better.
>>
>>
>> Is it an option to make this 500ms a configurable setting
>> instead of 500ms? Are there any downsides to making this 500ms
>> lower? I'd be willing to see if we can contribute this.
>>
>>
>> Another option that I'd tried is to set the keepalive to 0, so
>> that every request is a new connection. In small amounts of
>> requests this actually seemed to solve the issue, but the moment
>> that we went to a real life situation, this degraded the
>> performance massively, so we had to revert this
>
> First of all, it might be a good idea to better understand what
> is the issue you are seeing.
>
> Also make sure that you have "proxy_cache_use_stale updating"
> enabled (https://nginx.org/r/proxy_cache_use_stale). It is
> designed exactly for the use case you describe, and works quite
> well in most use cases.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx-devel