questions about upstream buffering mode
wubingzheng at 163.com
Sun Dec 12 16:40:05 MSK 2010
Thanks for your reply, but I'm still not clear on a few points.
At 2010-12-12 14:20:05, Eugaia <ngx.eugaia at gmail.com> wrote:
>On 12/12/2010 06:45, Wu Bingzheng wrote:
>> hi all,
>> There are two modes of upstream handling, buffering and non-buffering, and
>> there are some differences between them:
>> 1. Non-buffering mode doesn't support limit-rate.
>> 2. A request in non-buffering mode detects the end of the upstream response
>> by a) the close of the upstream connection, or b) comparing the length of
>> the received data against headers_out.content_length, while a request in
>> buffering mode detects the end only by the close of the upstream connection.
>> As a result, in buffering mode the upstream (such as a memcached server)
>> can't be kept alive, which causes the request in nginx to finish only after
>> the keepalive timeout.
>> I want to know why these differences exist.
>> Can't the non-buffering mode support limit-rate?
>The problem with this is that if you're supporting limit rate, what
>happens if you receive more data from the upstream than the limit rate
>would allow you to send to the client? You'd have to buffer it (at
>least partially, either in memory or on disk). ;-)
I don't think your explanation is right. In non-buffering mode, even when there is no limit-rate, the downstream connection may still be slower than the upstream, for example because the downstream network conditions are bad. In that case, if the downstream is slower and the data buffer is full, nginx stops receiving data from the upstream.
So I think adding limit-rate to non-buffering mode would not cause the data-buffer problem you describe. If the downstream is blocked by limit-rate and the data buffer is full, nginx can just stop receiving data, exactly as it does now.
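To make the scenario concrete, the combination under discussion would look something like the sketch below. (This is hypothetical: at the time of writing nginx does not honour limit_rate in non-buffering mode, which is exactly the point of my question. `backend` is just an illustrative upstream name.)

```nginx
location /stream/ {
    proxy_pass http://backend;
    proxy_buffering off;   # non-buffering mode: relay data as it arrives
    limit_rate 64k;        # throttle the response to the client

    # With proxy_buffering off, nginx reads from the upstream only as fast
    # as the client accepts data; once the small in-memory buffer is full,
    # it stops reading from the upstream. That is the same back-pressure
    # that would apply if limit_rate slowed down the client side.
}
```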
>> can't the buffering mode decide the end of request by content-length?
>I personally can't immediately see a reason why not (but I'd be
>interested to know if there is one). It's probably not there just
>because in most cases the connections to the upstreams won't fail, and
>so it's more efficient to not check the size when each packet is received.
Just because of efficiency? But it causes some inconvenience: for example, the upstream server can't be configured in keepalive mode. (If the upstream is an HTTP proxy, it can be configured with keepalive anyway, because nginx doesn't support keepalive as an HTTP client and closes the connection itself.)
There is a good addon module, HttpUpstreamKeepaliveModule, which can re-use memcached upstream connections. But it needs the memcached server to be configured in keepalive mode. That means that if we run the memcached upstream module in buffering mode (maybe because we need limit-rate...), we can't configure the memcached server in keepalive mode, so we can't use HttpUpstreamKeepaliveModule.
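For reference, the setup I have in mind looks roughly like this. (The `keepalive` directive here is provided by the third-party keepalive module, not by core nginx; `memcached_backend` is just an illustrative name.)

```nginx
upstream memcached_backend {
    server 127.0.0.1:11211;
    keepalive 32;   # pool up to 32 idle connections (keepalive module)
}

server {
    location /cache/ {
        set $memcached_key $uri;
        memcached_pass memcached_backend;
    }
}
```

This only works if the memcached server keeps connections open, which is precisely what buffering mode's close-based end-of-response detection rules out.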
Thanks very much,