how to completely disable request body buffering

phani prasad mailforpps at gmail.com
Tue Aug 23 16:33:01 UTC 2016


Hi all,

For one of our products we have chosen nginx as our web server, using FastCGI to talk to the upstream (application) layer. We have a use case where the client sends a huge payload, typically several MB, and nginx is quick to read and buffer all of it, whereas our upstream server is much slower at consuming the data. This results in a timeout on the client side, since the upstream cannot respond with a status code until it has finished reading the complete payload. As additional information, the request body is sent chunked.

To address this we have tried several options, but nothing has worked so far.

1. We turned off request buffering by setting fastcgi_request_buffering to off.

This only stops nginx from buffering the request payload into a temporary file before forwarding it to the application/upstream; it still uses chains of in-memory buffers when writing to the upstream (see the config sketch after point 2).

2. We set client_body_buffer_size.

This only controls the threshold: if the request body is larger than client_body_buffer_size, the whole body or part of it is written to a file.
How does this work in the case of a chunked request body?
What is the maximum chunk size that nginx can allocate?
What happens if the upstream is slow to consume the data while nginx still tries to writev() a chain of buffers to the pipe?
What is the maximum number of chain buffers nginx maintains for the request body, and is it configurable?
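
For reference, this is roughly what the relevant part of our config looks like; the location, the fastcgi_pass address and the 16k value are placeholders rather than our exact settings:

    location /app {
        include fastcgi_params;
        # pass requests to the FastCGI application upstream (address is a placeholder)
        fastcgi_pass 127.0.0.1:9000;

        # option 1: do not spool the request body to a temporary file;
        # nginx still uses in-memory buffer chains when writing to the upstream
        fastcgi_request_buffering off;

        # option 2: in-memory buffer size for reading the client request body
        # (16k is just an example value)
        client_body_buffer_size 16k;
    }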


What other options can we try? We want to completely disable request body buffering and stream the data to the upstream as it arrives from the client. If the upstream is busy, nginx should throttle itself, in the sense that it should stop reading further data from the client until the upstream is ready to be written to again.

Any help is much appreciated, as this is blocking one of our product certifications.

Thanks
Prasad.