Configurable sleep period for connections

Maxim Dounin mdounin at mdounin.ru
Thu Mar 23 17:44:42 UTC 2023


Hello!

On Thu, Mar 23, 2023 at 09:26:48AM +0100, Roy Teeuwen wrote:

> We are using NGINX as a proxy / caching layer for a backend 
> application. Our backend has a relatively slow response time, 
> ranging between 100 and 300ms. We want the NGINX proxy to be 
> as speedy as possible; to do this, we have implemented the 
> following logic:
> 
> - Cache all responses for 5 mins (based on cache control 
> headers)
> - Use stale cache for errors on the backend
> - Do a background update for stale cache
> 
> The last part has an issue, namely if a first request reaches 
> nginx, it will trigger a background request, but other requests 
> for the same resource will be locked until this background 
> request is finished instead of still returning the stale cache 
> that is available. This is caused by the fact that there is a 
> keepalive on the connection, which locks all subsequent requests 
> until the background request is finished.

Could you please clarify what you are trying to describe?

Keepalive on the connection might delay handling of subsequent 
requests on the same connection (and not other requests to the 
same resource).

Other requests to the same resource might be delayed by 
proxy_cache_lock (https://nginx.org/r/proxy_cache_lock), but that 
is not what you describe, and it only applies to new cache 
elements: it has no effect when a stale cache item already exists. 

> The issue that we are facing in this situation is that the 
> locking is very long, namely 500ms hardcoded. I think it is 
> caused by this:
> https://github.com/nginx/nginx/blob/master/src/core/ngx_connection.c#L703

This looks completely unrelated.  A 500ms delay can be seen with 
proxy_cache_lock, as previously mentioned; see here:

http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_file_cache.c#l455

But again, it is not expected to appear in the use case you 
describe.

> This means that our relatively slow backend of 100-200ms 
> actually gets worse rather than better.
> 
> 
> Is it an option to make this 500ms a configurable setting 
> instead of a hardcoded value? Are there any downsides to making 
> it lower? I'd be willing to see if we can contribute this.
> 
> 
> Another option that I tried was to set the keepalive to 0, so 
> that every request is a new connection. With small numbers of 
> requests this actually seemed to solve the issue, but the moment 
> we went to a real-life situation it degraded performance 
> massively, so we had to revert it.

First of all, it might be a good idea to better understand what 
the issue you are seeing actually is.

Also make sure that you have "proxy_cache_use_stale updating" 
enabled (https://nginx.org/r/proxy_cache_use_stale).  It is 
designed exactly for the use case you describe, and works quite 
well in most use cases.
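For completeness, a minimal configuration sketch of the 
serve-stale-while-updating setup; the zone name and upstream are 
placeholders, and proxy_cache_background_update is included here 
only because your description mentions background updates:

```nginx
# Hypothetical cache zone; path and name are placeholders.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache app_cache;

        # Serve the stale response while an update is in progress,
        # and also on backend errors and timeouts.
        proxy_cache_use_stale updating error timeout;

        # Refresh stale items in a background subrequest; the client
        # that triggered the update gets the stale response at once.
        proxy_cache_background_update on;
    }
}
```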

-- 
Maxim Dounin
http://mdounin.ru/