[PATCH 00 of 15] Serve all requests from single tempfile
    Jiří Setnička
    jiri.setnicka at cdn77.com
    Mon Feb  7 11:16:00 UTC 2022

Hello!
> We developed the proxy_cache_tempfile mechanism, which acts similarly to
> the proxy_cache_lock, but instead of locking other requests waiting for
> the file completion, we open the tempfile used by the primary request
> and periodically serve parts of it to the waiting requests.
>
> [...]
>
> We have tested this feature thoroughly over the last few months, and we
> already use it in part of our infrastructure without noticing any
> negative impact. We observed only a very small increase in memory usage
> and a minimal increase in CPU and disk I/O usage (which corresponds to
> the increased throughput of the server).
>
> We also ran some synthetic benchmarks comparing vanilla nginx with our
> patched version, with and without cache lock and with cache tempfiles.
> The benchmark results, charts, and the scripts we used are available on
> my GitHub:
>
>    https://github.com/setnicka/nginx-tempfiles-benchmark
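For readers who want a concrete picture, enabling the mechanism would presumably look much like enabling proxy_cache_lock today. This is only a sketch: apart from proxy_cache_tempfile itself, which is named in the quoted description, the directive placement and any parameters are my assumptions by analogy and are not taken from the patch.

```nginx
proxy_cache_path /var/cache/nginx keys_zone=demo_cache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://backend;         # "backend" is a placeholder upstream
        proxy_cache demo_cache;

        # Today: secondary requests wait for the primary request to finish.
        # proxy_cache_lock on;

        # With the proposed patch (hypothetical syntax): secondary requests
        # are served incrementally from the primary request's tempfile
        # instead of waiting for the cache file to be completed.
        proxy_cache_tempfile on;
    }
}
```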
I am highlighting this again. Have you already had time to look at the
proposed changes? What do you think of them?
I am mostly interested in whether there is some obvious fundamental
misconception that I have overlooked. As I wrote before, we already use
nginx with this patch in a small part of our infrastructure without
noticing any negative impact. But our use case is quite specific, and
there could be some hidden flaw in general use.
Jiří Setnička
CDN77
More information about the nginx-devel mailing list