<div dir="ltr">I had the same problem and I wrote a patch to reuse the file I already have in the temp directory for the second stream (and for all streams that arrive before the file is completely cached). Unfortunately I cannot share it, but I can give you an idea of how to do it.<br>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Aug 27, 2013 at 8:43 PM, Alex Garzão <span dir="ltr"><<a href="mailto:alex.garzao@azion.com" target="_blank">alex.garzao@azion.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello Wandenberg,<br>
<br>
Thanks for your reply.<br>
<br>
Using proxy_cache_lock, when the second request arrives, it will wait<br>
until the object is complete in the cache (or until<br>
proxy_cache_lock_timeout expires). But, in many cases, my upstream has<br>
a really slow link and NGINX needs more than 30 minutes to download<br>
the object. In practice, I will probably still see a lot of parallel<br>
downloads of the same object.<br>
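For reference, a minimal configuration of the kind being discussed might look like this (the zone name, cache path, and upstream are placeholders, not taken from any real setup):<br>

```nginx
proxy_cache_path /var/cache/nginx keys_zone=objcache:10m;

server {
    location / {
        proxy_pass http://upstream_backend;
        proxy_cache objcache;
        proxy_cache_lock on;
        # Requests that hit the lock wait at most this long before
        # being passed to the upstream themselves (default 5s):
        proxy_cache_lock_timeout 5s;
    }
}
```

Note that once proxy_cache_lock_timeout expires, each waiting request goes to the upstream on its own, which matches the parallel-download behavior described above for slow upstreams.<br>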
<br>
Does anyone have another idea? Or am I wrong about proxy_cache_lock?<br>
<br>
Regards.<br>
<div class="im HOEnZb">--<br>
Alex Garzão<br>
Projetista de Software<br>
Azion Technologies<br>
alex.garzao (at) <a href="http://azion.com" target="_blank">azion.com</a><br>
<br>
<br>
</div><div class="HOEnZb"><div class="h5">On Mon, Aug 26, 2013 at 6:58 PM, Wandenberg Peixoto<br>
<<a href="mailto:wandenberg@gmail.com">wandenberg@gmail.com</a>> wrote:<br>
> Try to use the proxy_cache_lock configuration; I think this is what you are<br>
> looking for.<br>
> Don't forget to adjust proxy_cache_lock_timeout for your use case.<br>
><br>
> On Aug 26, 2013 6:54 PM, "Alex Garzão" <<a href="mailto:alex.garzao@azion.com">alex.garzao@azion.com</a>> wrote:<br>
>><br>
>> Hello guys,<br>
>><br>
>> This is my first post to nginx-devel.<br>
>><br>
>> First of all, I would like to congratulate NGINX developers. NGINX is<br>
>> an amazing project :-)<br>
>><br>
>> Well, I'm using NGINX as a proxy server, with cache enabled. I noted<br>
>> that, when two (or more) users try to download the same object in<br>
>> parallel and the object isn't in the cache, NGINX downloads it from<br>
>> the upstream. In this case, NGINX creates one connection to the<br>
>> upstream (per request) and downloads the object to a temp file (per<br>
>> request). This works, but in some situations, on one server, we saw<br>
>> more than 70 parallel downloads of the same object (in this case, an<br>
>> object larger than 200 MB).<br>
>><br>
>> If possible, I would like some insights about how I can avoid this<br>
>> situation. I looked to see if it's just a matter of configuration,<br>
>> but I didn't find anything.<br>
>><br>
>> IMHO, the best approach is to share the temp file. If possible, I<br>
>> would like to know your opinions about this approach.<br>
>><br>
>> I looked at the code in ngx_http_upstream.c and ngx_http_proxy.c, and<br>
>> I'm trying to change the code to share the temp file. I think I need<br>
>> to do the following tasks:<br>
>><br>
>> 1) Register the current downloads from upstreams. I can probably<br>
>> address this with an rbtree, where each node has the unique object id<br>
>> and a list of downstreams (requests?) waiting for data from the<br>
>> temp file.<br>
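A sketch of that registry in plain C, to make the idea concrete. A real nginx patch would use ngx_rbtree_t keyed by the cache key and ngx_queue_t for the waiter list; here a simple linked list stands in so the example is self-contained, and all names are hypothetical:<br>

```c
/* Registry of in-flight upstream downloads. Each entry tracks one
 * object being fetched plus the downstream requests waiting on its
 * temp file. (Illustrative only; nginx would use ngx_rbtree_t.) */
#include <stdlib.h>
#include <string.h>

typedef struct waiter {
    void *request;             /* downstream request waiting for data */
    struct waiter *next;
} waiter_t;

typedef struct download {
    char key[64];              /* unique object id (cache key) */
    size_t bytes_in_temp;      /* valid bytes written to the temp file */
    waiter_t *waiters;         /* downstreams reading from this temp file */
    struct download *next;
} download_t;

static download_t *downloads;  /* would be an rbtree in nginx */

/* Find the in-flight download for a key, or start tracking a new one.
 * *created tells the caller whether it must open the upstream itself. */
download_t *download_lookup(const char *key, int *created) {
    download_t *d;
    for (d = downloads; d != NULL; d = d->next) {
        if (strcmp(d->key, key) == 0) {
            *created = 0;
            return d;          /* second request: reuse this download */
        }
    }
    d = calloc(1, sizeof(*d));
    strncpy(d->key, key, sizeof(d->key) - 1);
    d->next = downloads;
    downloads = d;
    *created = 1;              /* first request: fetch from upstream */
    return d;
}

/* Attach a downstream request to an existing download's waiter list. */
void download_add_waiter(download_t *d, void *request) {
    waiter_t *w = calloc(1, sizeof(*w));
    w->request = request;
    w->next = d->waiters;
    d->waiters = w;
}
```

The first request for a key creates the entry and opens the upstream connection; every later request for the same key only registers itself as a waiter.<br>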
>><br>
>> 2) Dissociate the read from upstream from the write to downstream.<br>
>> Today, in the ngx_event_pipe function, NGINX reads from the upstream,<br>
>> writes to the temp file, and writes to the downstream. But, as I can<br>
>> have N downstreams waiting for data from the same upstream, I<br>
>> probably need to move the write to downstream somewhere else. The<br>
>> only way I can think of is implementing a polling event, but I know<br>
>> that this is wrong because NGINX is event based, and polling wastes a<br>
>> lot of CPU.<br>
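Instead of polling, the read side can wake each waiter only when new bytes actually land in the temp file. In nginx this "wake" would be posting the waiting request's write event (e.g. via ngx_post_event); the self-contained sketch below uses a plain callback to stand in for that, and all names are hypothetical:<br>

```c
/* Event-driven alternative to polling: the upstream reader notifies
 * every waiting downstream once per chunk written to the temp file.
 * The wake_fn callback stands in for posting an nginx write event. */
#include <stddef.h>

#define MAX_WAITERS 8

typedef void (*wake_fn)(void *request, size_t bytes_available);

typedef struct {
    size_t bytes_in_temp;            /* valid bytes in the temp file */
    void *requests[MAX_WAITERS];     /* waiting downstream requests */
    wake_fn wakes[MAX_WAITERS];
    int nwaiters;
} shared_temp_t;

/* Register a downstream that wants to be woken when data arrives. */
void temp_subscribe(shared_temp_t *t, void *request, wake_fn wake) {
    if (t->nwaiters < MAX_WAITERS) {
        t->requests[t->nwaiters] = request;
        t->wakes[t->nwaiters] = wake;
        t->nwaiters++;
    }
}

/* Called by the pipe's read side after appending a chunk to temp.
 * No polling: waiters run only when there is genuinely new data. */
void temp_append(shared_temp_t *t, size_t nbytes) {
    t->bytes_in_temp += nbytes;
    for (int i = 0; i < t->nwaiters; i++) {
        t->wakes[i](t->requests[i], t->bytes_in_temp);
    }
}

/* Demo consumer: records the latest available offset it was told. */
static void record_wake(void *request, size_t bytes_available) {
    *(size_t *) request = bytes_available;
}
```

Each woken downstream then sends the temp-file bytes between its own send offset and bytes_in_temp, so slow clients never block fast ones.<br>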
>><br>
>> 3) When I know that there is more data in the temp file to be sent,<br>
>> which function should I use? ngx_http_output_filter?<br>
>><br>
>> Suggestions are welcome :-)<br>
>><br>
>> Thanks people!<br>
>><br>
>> --<br>
>> Alex Garzão<br>
>> Projetista de Software<br>
>> Azion Technologies<br>
>> alex.garzao (at) <a href="http://azion.com" target="_blank">azion.com</a><br>
>><br>
>> _______________________________________________<br>
>> nginx-devel mailing list<br>
>> <a href="mailto:nginx-devel@nginx.org">nginx-devel@nginx.org</a><br>
>> <a href="http://mailman.nginx.org/mailman/listinfo/nginx-devel" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx-devel</a><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>