<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span style="font-family:arial,sans-serif;font-size:13px">I use a patch</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">Maxim provided some time ago allowing range requests to receive HTTP 206 if</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">a resource is not in cache but it's determined to be cacheable...</span></blockquote>
<div><br></div><div>Can you please link to this patch? </div><div><br></div><div class="gmail_extra"><div><div dir="ltr"><span style="border-collapse:collapse;font-family:arial,sans-serif;font-size:13px"><div>
<span style="border-collapse:collapse;font-family:arial,sans-serif;font-size:13px"><br></span></div><div><span style="border-collapse:collapse;font-family:arial,sans-serif;font-size:13px">Reg</span>ards,</div><div><br></div>
<a href="http://www.twitter.com/jdorfman" target="_blank">Justin Dorfman</a><br><br>Director of Developer Relations<br><a href="http://twitter.com/MaxCDNDeveloper" target="_blank">MaxCDN</a><br></span><div></div></div></div>
<br><br><div class="gmail_quote">On Mon, Jun 16, 2014 at 2:05 PM, jakubp <span dir="ltr"><<a href="mailto:nginx-forum@nginx.us" target="_blank">nginx-forum@nginx.us</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,

Recently I hit a pretty big problem with huge files. Nginx is a cache
fronting an origin which serves huge files (several GB each). Clients
mostly use range requests (often for parts towards the end of a file),
and I use a patch Maxim provided some time ago that allows range
requests to receive HTTP 206 if a resource is not in cache but is
determined to be cacheable...
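
For illustration, the relevant part of such a setup is sketched below.
The paths, zone name and values are placeholders rather than my actual
configuration, and the 206-for-uncached-ranges patch itself is not shown:

    # minimal sketch of a cache fronting an origin with huge files;
    # names and values are illustrative only
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=bigfiles:100m
                     max_size=500g inactive=7d;

    server {
        listen 80;

        location / {
            proxy_pass        http://origin;              # placeholder upstream
            proxy_cache       bigfiles;
            proxy_cache_key   $scheme$host$request_uri;
            proxy_cache_valid 200 7d;
        }
    }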
<br>
When a file is not in the cache and a flurry of requests for the same
file arrives, I see that after proxy_cache_lock_timeout expires (and at
that point the download often still hasn't reached the first byte
requested by many of the waiting clients) nginx establishes a new
connection to the upstream for each client and starts another download
of the same file. I understand why this happens and that it's by
design, but it kills the server: the multiple writes to the temp
directory basically destroy disk performance, which in turn blocks the
nginx worker processes.
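
For context, the cache-lock directives involved in this behaviour are
sketched below (the values are examples, not my exact settings):

    proxy_cache_lock         on;   # only one request per key populates the cache element
    proxy_cache_lock_timeout 5s;   # once this expires, each waiting request opens
                                   # its own upstream connection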
<br>
Is there anything that can be done to help with that? Keep in mind that
I can't afford to serve HTTP 200 to a range request, and I'd also like
to avoid clients waiting forever for the first requested byte...
<br>
Thanks in advance!

Regards,
Kuba

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250899,250899#msg-250899