Problem with big files

jakubp nginx-forum at
Mon Jun 16 21:05:03 UTC 2014


Recently I hit a pretty big problem with huge files. Nginx is a cache fronting
an origin that serves huge files (several GB). Clients mostly use range
requests (often for parts towards the end of the file), and I use a patch
Maxim provided some time ago that allows range requests to receive HTTP 206
when a resource is not in cache but is determined to be cacheable...

When a file is not in cache and a flurry of requests for it arrives, I see
that once proxy_cache_lock_timeout expires - and at that point the download
often hasn't yet reached the first byte requested by many of those clients -
nginx establishes a new upstream connection for each client and initiates
another download of the same file. I understand why this happens and that
it's by design, but it kills the server: the multiple parallel writes to the
temp directory destroy disk performance, which in turn blocks nginx worker
processes.
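For context, here is a minimal sketch of the kind of configuration involved
(the paths, zone name, and timeout values are illustrative, not my actual
config):

```nginx
# Hypothetical cache setup; "bigfiles" zone name and paths are examples only.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=bigfiles:50m
                 max_size=200g inactive=7d;

server {
    location / {
        proxy_pass  http://origin;
        proxy_cache bigfiles;

        # Only one request per cache key is allowed to populate the cache;
        # other requests for the same key wait on the lock.
        proxy_cache_lock         on;

        # ...but waiters give up after this timeout and fetch from the
        # origin themselves. With multi-GB files the first download is
        # nowhere near done after 5s, so every waiter starts its own
        # upstream download and its own temp file.
        proxy_cache_lock_timeout 5s;
    }
}
```

Raising proxy_cache_lock_timeout just trades one problem for the other:
fewer duplicate downloads, but clients wait longer for their first byte.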

Is there anything that can be done to help with that? Keep in mind that I
can't afford to serve HTTP 200 to a range request, and I'd also like to
avoid clients waiting forever for the first requested byte...

Thanks in advance!


