Dealing with buffered data with upstream generated response
Umesh Sirsiwal
usirsiwal at verivue.com
Fri Dec 2 16:13:15 UTC 2011
Hey Maxim,
I spent some time yesterday and today trying to reproduce this issue
with the standard gzip module, but unfortunately I could not. Maybe I am
doing something wrong in my module.
My configuration looked like the following. I was using nginx version
1.1.10.
-Umesh
------
worker_processes 1;
daemon off;

http {
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream http_backend {
        server 127.0.0.1:8080;
        keepalive 1;
    }

    server {
        listen 8080 sndbuf=10k;
        server_name localhost;

        proxy_buffering on;
        proxy_buffers 10000 4k;

        gzip on;
        gzip_proxied any;

        location / {
            proxy_pass http://http_backend/local/;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            #limit_rate 100K;
        }

        location /local/ {
            alias /tmp/nginx-test-dL3kfZazKJ/;
        }
    }
}
-----
On 11/29/2011 02:19 PM, Umesh Sirsiwal wrote:
> Thanks Maxim,
>
>>> I am developing a body filter which transforms the outgoing stream. As
>>> part of that transformation, the incoming stream is copied into new
>>> buffers and sent. In some cases this results in a condition where my
>>> filter has busy buffers but the upper layers don't have any busy
>>> buffers. In our case the data is generated by the upstream module.
>>> Since the upstream module does not pay attention to
>>> connection->buffered, the output filter is never called again to flush
>>> my busy buffers and the transfer just hangs.
>>>
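(For context, here is a stripped-down sketch of the kind of copying body
filter I mean. The function name ngx_http_example_body_filter is made up
for illustration and the busy-chain bookkeeping a real filter would do is
omitted; this is not my actual module code, but the structure is the same:)

static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_example_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_buf_t    *b;
    ngx_chain_t  *cl, *out, **ll;

    out = NULL;
    ll = &out;

    for (cl = in; cl; cl = cl->next) {

        /* copy each incoming buffer into a freshly allocated one
           (zero-size special buffers are not handled in this sketch) */

        b = ngx_create_temp_buf(r->pool, ngx_buf_size(cl->buf));
        if (b == NULL) {
            return NGX_ERROR;
        }

        b->last = ngx_cpymem(b->pos, cl->buf->pos, ngx_buf_size(cl->buf));
        b->last_buf = cl->buf->last_buf;

        /* mark the original (upstream) buffer as consumed */
        cl->buf->pos = cl->buf->last;

        *ll = ngx_alloc_chain_link(r->pool);
        if (*ll == NULL) {
            return NGX_ERROR;
        }

        (*ll)->buf = b;
        (*ll)->next = NULL;
        ll = &(*ll)->next;
    }

    /* a real filter would now track the buffers it handed downstream in
       its own busy chain (via ngx_chain_update_chains()); if the client
       is slow, those copies stay busy even though the upstream module
       itself has no busy buffers left -- the situation described above */

    return ngx_http_next_body_filter(r, out);
}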
>>> Adding r->connection->buffered to the OR condition below solves the hang:
>>>
>>>     if (u->out_bufs || u->busy_bufs) {
>>>         rc = ngx_http_output_filter(r, u->out_bufs);
>>>
>>>         if (rc == NGX_ERROR) {
>>>             ngx_http_upstream_finalize_request(r, u, 0);
>>>             return;
>>>         }
>>>
>>>         ngx_chain_update_chains(&u->free_bufs, &u->busy_bufs,
>>>                                 &u->out_bufs, u->output.tag);
>>>     }
>>>
>>> Is this nginx bug or am I missing something?
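Concretely, the change I am describing would look roughly like this
(paraphrased from the snippet above, not an exact patch):

    /* also run the output filter when a downstream filter still has
       buffered data, so it gets a chance to flush its busy buffers */

    if (u->out_bufs || u->busy_bufs || r->connection->buffered) {
        rc = ngx_http_output_filter(r, u->out_bufs);

        if (rc == NGX_ERROR) {
            ngx_http_upstream_finalize_request(r, u, 0);
            return;
        }

        ngx_chain_update_chains(&u->free_bufs, &u->busy_bufs,
                                &u->out_bufs, u->output.tag);
    }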
>> I tend to think this is a bug, and it should probably manifest
>> itself with e.g. the gzip filter as well.
>>
>> (Well, as far as I can see the transfer shouldn't hang completely, but
>> it will only be resumed once the upstream sends some more data or
>> closes the connection.)
> In our application we use the upstream keepalive module to keep the
> upstream connection open. So if this happens towards the end of the last
> subrequest's response, the connection hangs.
>> It would be superb if you are able to provide a test case which
>> catches this for our test suite (http://mdounin.ru/hg/nginx-tests).
> I will try to see if I can reproduce it with the gzip filter. I should
> be able to reproduce it with the limit_rate + gzip filter + proxy +
> upstream keepalive combination. But I am not sure whether the
> nginx-tests suite includes upstream keepalive.
>
>> I'll look into this more closely as time permits.
>>
>> Maxim Dounin
>>