<div dir="ltr">Hello,<br><div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Aug 28, 2013 at 7:56 PM, Alex Garzão <span dir="ltr"><<a href="mailto:alex.garzao@azion.com" target="_blank">alex.garzao@azion.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello Anatoli,<br>
<br>
Thanks for your reply. I appreciate your help a lot :-)<br>
<br>
I'm trying to fix the code with the following requirements in mind:<br>
<br>
1) We have upstreams/downstreams with good (and bad) links; in<br>
general, the upstream speed is higher than the downstream speed but, in<br>
some situations, the downstream is much faster than the<br>
upstream;<br></blockquote><div>I think this is asynchronous: if the upstream is faster than the downstream, it saves the data to the cached file faster, and the downstream gets the data from the file instead of the memory buffers.<br>
<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
2) I'm trying to decouple the upstream speed from the downstream<br>
speed. The first request (the one that actually connects to the<br>
upstream) downloads data to the temp file, but no longer sends data to the<br>
downstream. I disabled this because, in my understanding, if the first<br>
request has a slow downstream, all other downstreams will wait for data<br>
to be sent to this slow downstream.<br></blockquote><div>I think this is not necessary.<br> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
My first question is: do I need to worry about downstream/upstream speed?<br>
<br></blockquote><div>No<br> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Well, I will try to explain what I did in the code:<br>
<br>
1) I created an rbtree (current_downloads) that keeps the current<br>
downloads (one rbtree per upstream). Each node keeps the first request<br>
(the one that actually connects to the upstream) and a list<br>
(download_info_list) with two fields per entry: (a) a request waiting for<br>
data from the temp file and (b) the file offset already sent from the temp<br>
file (last_offset);<br>
<br></blockquote><div><br>I have the same but in an ordered array (simple implementation). Anyway, the rbtree will do the same. But this structure should be in shared memory, because all workers should know which files are currently being downloaded from upstream. The files should exist in the tmp directory.<br>
<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
2) In ngx_http_upstream_init_request(), when the object isn't in the<br>
cache, before connecting to the upstream, I check if the object is in the<br>
rbtree (current_downloads);<br>
<br>
3) When the object isn't in current_downloads, I add a node that<br>
contains the first request (the current request) and I add the<br>
current request to the download_info_list. Besides that, I create a<br>
timer event (polling) that checks all requests in<br>
download_info_list and verifies whether there is data in the temp file<br>
not yet sent to the downstream. I create one timer event per<br>
object [1].<br>
<br>
4) When the object is in current_downloads, I add the request to the<br>
download_info_list and finish ngx_http_upstream_init_request() (I<br>
just return without executing ngx_http_upstream_finalize_request());<br>
<br>
5) I have disabled (in ngx_event_pipe) the code that sends data to the<br>
downstream (requirement 2);<br>
<br>
6) In the polling event, I get the current temp file offset<br>
(first_request->upstream->pipe->temp_file->offset) and I check in the<br>
download_info_list if this is greater than last_offset. If so, I send more<br>
data to the downstream with ngx_http_upstream_cache_send_partial (code<br>
below);<br>
<br>
7) In the polling event, when pipe->upstream_done ||<br>
pipe->upstream_eof || pipe->upstream_error, and all data has been sent to the<br>
downstream, I execute ngx_http_upstream_finalize_request for all<br>
requests;<br>
<br>
8) I added a bit flag (first_download_request) to the ngx_http_request_t<br>
struct to keep the first request from being finished before all requests are<br>
completed. In ngx_http_upstream_finalize_request() I check this flag.<br>
But, actually, I'm not sure whether it is necessary to avoid this<br>
situation...<br>
<br>
<br>
Below you can see the ngx_http_upstream_cache_send_partial code:<br>
<br>
<br>
/////////////<br>
static ngx_int_t<br>
ngx_http_upstream_cache_send_partial(ngx_http_request_t *r,<br>
ngx_temp_file_t *file, off_t offset, off_t bytes, unsigned last_buf)<br>
{<br>
ngx_buf_t *b;<br>
ngx_chain_t out;<br>
ngx_http_cache_t *c;<br>
<br>
c = r->cache;<br>
<br>
/* we need to allocate all before the header would be sent */<br>
<br>
b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));<br>
if (b == NULL) {<br>
return NGX_HTTP_INTERNAL_SERVER_ERROR;<br>
}<br>
<br>
b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t));<br>
if (b->file == NULL) {<br>
return NGX_HTTP_INTERNAL_SERVER_ERROR;<br>
}<br>
<br>
/* FIX: need to run ngx_http_send_header(r) once... */<br>
<br>
b->file_pos = offset;<br>
b->file_last = bytes;<br>
<br>
b->in_file = 1;<br>
b->last_buf = last_buf;<br>
b->last_in_chain = 1;<br>
<br>
b->file->fd = file->file.fd;<br>
b->file->name = file->file.name;<br>
b->file->log = r->connection->log;<br>
<br>
out.buf = b;<br>
out.next = NULL;<br>
<br>
return ngx_http_output_filter(r, &out);<br>
}<br>
////////////<br>
<br>
My second question is: could I just fix ngx_event_pipe to send to all<br>
requests (instead of sending to one request)? And, if so, can<br>
ngx_http_output_filter be used to send a big chunk the first time<br>
(300 MB or more) and little chunks after that?<br>
<br></blockquote><div><br>Use smaller chunks.<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Thanks in advance for your attention :-)<br>
<br>
[1] I know that "polling event" is a bad approach with NGINX, but I<br>
don't know how to fix this. For example, the upstream download can be<br>
very fast, and it is possible that I need to send data to the downstream in<br>
little chunks. The upstream (in NGINX) is socket-event based, but, when the<br>
download from the upstream has finished, which event can I expect?<br>
<div class="im"><br>
Regards.<br>
--<br>
Alex Garzão<br>
Projetista de Software<br>
Azion Technologies<br>
alex.garzao (at) <a href="http://azion.com" target="_blank">azion.com</a><br>
<br>
</div><div class=""><div class="h5">_______________________________________________<br>
nginx-devel mailing list<br>
<a href="mailto:nginx-devel@nginx.org">nginx-devel@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx-devel" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx-devel</a><br>
</div></div></blockquote></div><br></div><div class="gmail_extra">You are on the right track. Just keep digging. Do not forget to turn off these features when you have flv or mp4 seek, partial requests, or a Content-Encoding different from identity, because you will send broken files to the browsers.<br>
</div></div></div>