Sharing data when downloading the same object from upstream
Wandenberg Peixoto
wandenberg at gmail.com
Mon Aug 26 21:58:35 UTC 2013
Try the proxy_cache_lock directive; I think this is what you are
looking for.
Don't forget to adjust proxy_cache_lock_timeout for your use case.
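Something like the sketch below (untested; the cache path, the zone name
"my_cache" and the upstream address are only placeholders, adjust them to
your setup):

    # inside the http {} block
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g;

    server {
        location / {
            proxy_pass http://your_upstream;   # placeholder upstream
            proxy_cache my_cache;

            # With the lock enabled, only one request per cache key is allowed
            # to populate a new cache element; the other requests for the same
            # key wait until that download finishes or the lock times out.
            proxy_cache_lock on;
            proxy_cache_lock_timeout 60s;
        }
    }

Keep in mind that requests which hit the lock timeout are passed to the
upstream and their responses are not cached, so for big objects you will
probably need a timeout larger than the 5s default.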
On Aug 26, 2013 6:54 PM, "Alex Garzão" <alex.garzao at azion.com> wrote:
> Hello guys,
>
> This is my first post to nginx-devel.
>
> First of all, I would like to congratulate NGINX developers. NGINX is
> an amazing project :-)
>
> Well, I'm using NGINX as a proxy server, with cache enabled. I noted
> that, when two (or more) users try to download the same object in
> parallel and the object isn't in the cache, NGINX downloads it from
> the upstream. In this case, NGINX creates one connection to the upstream
> (per request) and downloads the object to a temp file. Ok, this works, but in
> some situations, on one server, we saw more than 70 parallel downloads
> of the same object (in this case, an object larger than 200 MB).
>
> If possible, I would like some insights about how I can avoid this
> situation. I looked to see if it's just a matter of configuration, but I
> didn't find anything.
>
> IMHO, I think the best approach is to share the temp file. If possible, I
> would like to know your opinions about this approach.
>
> I looked at the code in ngx_http_upstream.c and ngx_http_proxy.c, and
> I'm trying to modify the code to share the temp file. I think I need to
> do the following tasks:
>
> 1) Register the current downloads from upstreams. Probably I can
> address this with an rbtree, where each node has the unique object id
> and a list of downstreams (requests?) waiting for data from the
> temp file.
>
> 2) Disassociate the read from the upstream from the write to the downstream.
> Today, in the ngx_event_pipe function, NGINX reads from the upstream,
> writes to the temp file, and writes to the downstream. But, as I can have N
> downstreams waiting for data from the same upstream, I probably need to
> move the write to the downstream somewhere else. The only way I can
> think of is implementing a polling event, but I know that this is incorrect
> because NGINX is event-based, and polling wastes a lot of CPU.
>
> 3) When I know that there is more data in the temp file to be sent, which
> function must I use? ngx_http_output_filter?
>
> Suggestions are welcome :-)
>
> Thanks, people!
>
> --
> Alex Garzão
> Software Designer
> Azion Technologies
> alex.garzao (at) azion.com
>