Nginx proxy cache purge process does not clean up items fast enough for new elements
nginx-forum at
Wed Oct 16 09:24:01 UTC 2019


We have nginx fronting our object storage to cache large objects; objects
can be as large as 100GB. The nginx cache max size is set to about

When there is a surge of large-object requests and the disk quickly fills
up, nginx runs into an out-of-disk-space error. I was expecting the cache
manager to purge items based on LRU and make room for the new elements, but
that does not happen.

I can reproduce the problem with a simple test case:


proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache_one:256m inactive=2d
max_size=16G use_temp_path=off;
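As an aside on cache-manager tuning: newer nginx versions add parameters to this same directive. manager_files, manager_sleep and manager_threshold (since 1.11.5) control how many items the cache manager deletes per pass, and min_free (since 1.19.1) starts eviction when free disk space on the cache partition drops below a threshold. A sketch of the repro directive with these knobs added; the specific values are illustrative assumptions, not recommendations:

```nginx
# Sketch: same repro directive with cache-manager knobs added.
# manager_files/manager_sleep/manager_threshold (nginx >= 1.11.5):
#   let the manager evict more items per pass, for longer.
# min_free (nginx >= 1.19.1): start evicting when free space on the
#   partition drops below 20g, even before max_size is reached.
proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache_one:256m
                 inactive=2d max_size=16G min_free=20g
                 manager_files=10000 manager_sleep=50ms manager_threshold=1s
                 use_temp_path=off;
```

Even with these settings, eviction is asynchronous, so a sudden burst of very large objects can still outrun the cache manager.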


1. Run a request to download a 15GB file; it is served correctly and
stored in the cache.
2. Run a second request to download a different 10GB file; it fails with
errors like this:

2019/10/04 11:49:08 [crit] 20206#20206: *21 pwritev()
"/tmp/cache/9/fa/a301d42ca6e5d4188c38ecf56aa3afa9.0000000001" has written
only 221184 of 229376 while reading upstream, client:, server:
eos_cache_filer, request: "GET...
2019/10/04 12:07:29 [crit] 21201#21201: *487 pwrite()
"/tmp/cache/9/fa/a301d42ca6e5d4188c38ecf56aa3afa9.0000000002" failed (28: No
space left on device) while reading upstream, client:, server:
eos_cache_filer, request: 

Can I tune some cache_manager parameters to make this work? Is there a way
to disable buffering in this case? Ideally the download should not fail; it
should just proceed without caching or buffering.
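One possible approach to the second question (a sketch I have not verified against this setup): skip caching for very large responses by mapping the upstream Content-Length header into proxy_no_cache, and cap temp-file buffering with proxy_max_temp_file_size. The 1GB cutoff and the upstream name below are assumptions for illustration:

```nginx
# Sketch: bypass the cache for very large upstream responses.
# $upstream_http_content_length is the upstream's Content-Length header;
# this only works when the upstream actually sends one.
map $upstream_http_content_length $dont_cache_big {
    default          0;
    "~^[0-9]{10,}$"  1;   # 10+ digits => roughly 1GB or more
}

server {
    location / {
        # "eos_cache_filer" taken from the error log above; assumes a
        # matching upstream block exists.
        proxy_pass http://eos_cache_filer;
        proxy_cache cache_one;

        # Evaluated when the response header arrives: large bodies are
        # streamed to the client but not written to the cache.
        proxy_no_cache $dont_cache_big;

        # Limit buffering of uncached responses to temporary files on
        # disk (0 disables temp files entirely).
        proxy_max_temp_file_size 0;
    }
}
```

The trade-off is that uncacheable large downloads then proceed at the slower of the client and upstream speeds, since nginx no longer buffers them to disk.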


Posted at Nginx Forum:,285896,285896#msg-285896
