This patch should work with nginx versions from 1.2.6 to 1.3.8.<div><br></div><div>The documentation is below: </div><div><br></div><div><div>## client_body_postpone_sending ##</div><div><br></div><div>Syntax: **client_body_postpone_sending** `size`</div>
<div><br></div><div>Default: 64k</div><div><br></div><div>Context: `http, server, location`</div><div><br></div><div>If `proxy_request_buffering` or `fastcgi_request_buffering` is set to off, Nginx will send the request body to the backend once it has received more than `size` bytes of data, or once the whole request body has been received. This can save connections and reduce the number of I/O operations with the backend. </div>
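<div><br></div><div>For example, a minimal sketch (the 128k value is illustrative; the directive only takes effect when request buffering is disabled):</div><div><pre>http {
    # Only meaningful when request buffering is off.
    proxy_request_buffering off;

    # Forward the body to the backend once more than 128k
    # has been received (the default is 64k).
    client_body_postpone_sending 128k;
}</pre></div>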
<div> </div><div>## proxy_request_buffering ##</div><div><br></div><div>Syntax: **proxy_request_buffering** `on | off`</div><div><br></div><div>Default: `on`</div><div><br></div><div>Context: `http, server, location`</div>
<div><br></div><div>Specifies whether the request body is buffered to disk. If set to off, the request body is stored in memory and sent to the backend after Nginx receives more than `client_body_postpone_sending` bytes. This can save disk I/O with large request bodies.</div>
<div> Note that if it is set to off, the nginx retry mechanism for unsuccessful responses is broken once part of the request has been sent to the backend: nginx will simply return 500 when it encounters such a response. This directive also breaks the variables $request_body and $request_body_file. Do not use these variables, as their values are undefined.</div>
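<div><br></div><div>For example, to stream large uploads to the backend without buffering them to disk (the location and backend address are hypothetical):</div><div><pre>location /upload {
    # Do not buffer the request body to disk; stream it
    # to the backend as it arrives.
    proxy_request_buffering off;

    # Note: the retry mechanism, $request_body and
    # $request_body_file are unavailable here.
    proxy_pass http://backend.example.com;
}</pre></div>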
<div><br></div><div>## fastcgi_request_buffering ##</div><div><br></div><div>Syntax: **fastcgi_request_buffering** `on | off`</div><div><br></div><div>Default: `on`</div><div><br></div><div>Context: `http, server, location`</div>
<div><br></div><div>The same as `proxy_request_buffering`, but for FastCGI backends.</div></div><div><br></div><br><div class="gmail_quote">2013/1/13 li zJay <span dir="ltr"><<a href="mailto:zjay1987@gmail.com" target="_blank">zjay1987@gmail.com</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello!<div><br></div><div>@yaoweibin</div><div><div class="im"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<font color="#cccccc">If you are eager for this feature, you could try my patch: <a href="https://github.com/taobao/tengine/pull/91" target="_blank">https://github.com/taobao/tengine/pull/91</a>. This patch has been running on our production servers.</font></blockquote>
</div><div class="gmail_extra">what's the nginx version your patch based on?</div><div class="gmail_extra"><br></div><div class="gmail_extra">Thanks!</div><div><div class="h5"><div class="gmail_extra"><br></div><div class="gmail_extra">
<div class="gmail_quote">On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao <span dir="ltr"><<a href="mailto:yaoweibin@gmail.com" target="_blank">yaoweibin@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I know the nginx team is working on it. You can wait for it.<div><br></div><div>If you are eager for this feature, you could try my patch: <a href="https://github.com/taobao/tengine/pull/91" target="_blank">https://github.com/taobao/tengine/pull/91</a>. This patch has been running on our production servers.<br>
<br><div class="gmail_quote">2013/1/11 li zJay <span dir="ltr"><<a href="mailto:zjay1987@gmail.com" target="_blank">zjay1987@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div><div>
<div dir="ltr">Hello!<div><br></div><div>Is it possible for nginx to not buffer the client body before handing the request to the upstream?<br clear="all"><div><br></div><div>We want to use nginx as a reverse proxy to upload very large files to the upstream, but the default behavior of nginx is to save the whole request to the local disk before handing it to the upstream. This makes it impossible for the upstream to process the file on the fly while it is uploading, and results in much higher request latency and server-side resource consumption.</div>
<div><br></div><div>Thanks!</div>
</div></div>
<br></div></div>_______________________________________________<br>
nginx mailing list<br>
<a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><span><font color="#888888"><br></font></span></blockquote></div><span><font color="#888888"><br>
<br clear="all"><div><br></div>-- <br>Weibin Yao<br>Developer @ Server Platform Team of Taobao
</font></span></div>
</blockquote></div><br>
</div></div></div></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Weibin Yao<br>Developer @ Server Platform Team of Taobao