[PATCH] Proxy: support configuration of socket buffer sizes

Maxim Dounin mdounin at mdounin.ru
Tue May 23 20:24:33 UTC 2017


Hello!

On Mon, May 22, 2017 at 07:02:04PM +0000, Karstens, Nate wrote:

> Maxim,
> 
> I'd be happy to explain. Our application is actually relying 
> more on the change to support the SO_SNDBUF option, but I noticed 
> the SO_RCVBUF option there and thought it was worth exposing 
> that at the same time.
> 
> The application we're having issues with is essentially a file 
> transfer through a proxy to a relatively slow storage medium. 
> Because we're using a proxy, there are three buffers involved on 
> the receive end of the HTTP request: 1) receive buffer on 
> external nginx socket, 2) send buffer in nginx proxy module, and 
> 3) receive buffer on proxied server. So, in a system where each 
> buffer is a maximum of 5 MB, you can have 3 x 5 = 15 MB of data 
> in the TCP buffers at a given point in time.

Send buffer on the client contributes to overall buffering as 
well, and probably should be counted too.  But, frankly, 5 MB is 
a lot, much higher than typical defaults, and may cause problems 
by itself.  See 
https://blog.cloudflare.com/the-curious-case-of-slow-downloads/ 
for a description of a problem Cloudflare faced with similarly 
large socket buffer sizes.

> In most circumstances, I don't think this would be a problem. 
> However, writing the data is so slow that the HTTP client times 
> out waiting for a reply (which is only sent once all data has 
> been written out). Unfortunately, we cannot solve this by 
> increasing the client's timeout. We found that reducing the size 

Are you using some custom client, or a browser?

Even with buffers as large as 3x5 MB, a really slow backend is 
needed to trigger a timeout in a typical browser: browsers seem 
to happily wait for 60+ seconds (actually much more, but 60 
seconds is the default timeout in nginx).  This means the backend 
needs to be slower than 15 MB / 60 s = 250 KB/s for this to 
become a problem even without any socket tuning, and that sounds 
more like a 1x CD drive than even a Class 2 SD card.

> of each buffer -- using the "rcvbuf" parameter to the "listen" 
> directive lets us configure SO_RCVBUF for the first and third 
> sockets mentioned above, and this patch lets us configure 
> SO_SNDBUF of the second socket -- reduces the time between when 
> the client sends the last byte of its request and when it 
> receives a reply, ultimately preventing the timeout. We would 
> prefer not to adjust the system's default sizes for these 
> buffers because that negatively impacts performance on other 
> applications used by the system.
> 
> Although this seems like a fairly-specific use-case, I think it 
> can be generalized as: 1) the client cannot wait indefinitely 
> after sending the last byte of the request, 2) the server must 
> process all data before it can generate a reply, and 3) the 
> server processes data relatively slowly. This seemed general 
> enough that it was worth adding the functionality for our own 
> use, and we thought it might be applicable to other users as well.
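
For reference, the buffer layout described in the quoted setup 
can be sketched as nginx configuration.  The rcvbuf= parameter of 
the "listen" directive is standard nginx; the directive for the 
proxy send buffer added by the patch is not shown, since its name 
is specific to the patch.  Addresses and sizes below are 
placeholders:

```nginx
# Frontend proxy: buffer 1 is the receive buffer on the external
# listening socket, set with the standard rcvbuf= parameter.
server {
    listen 80 rcvbuf=262144;
    location / {
        # Buffer 2, the send buffer on the connection to the
        # proxied server, is what the proposed patch makes
        # configurable (its directive is not shown here).
        proxy_pass http://127.0.0.1:8080;
    }
}

# Proxied server: buffer 3 is the receive buffer on its listening
# socket, again set with rcvbuf=.
server {
    listen 127.0.0.1:8080 rcvbuf=262144;
    root /srv/upload;
}
```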

Normally, nginx reads the whole request from the client, and only 
after that starts sending it to a backend server.  This 
effectively means infinite buffering, and will certainly trigger 
a timeout if the backend server is not able to process the 
request in a reasonable time.  Socket buffer sizes may become 
important when using "proxy_request_buffering off" and/or 
non-HTTP connections (e.g., WebSocket ones), but these are 
specific cases in themselves.
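
A minimal sketch of the unbuffered case mentioned above, where 
socket buffer sizes do start to matter.  The directives are 
standard nginx; the backend address is a placeholder:

```nginx
server {
    listen 80;
    location /upload {
        # Stream the request body to the backend as it arrives,
        # instead of spooling it to memory/disk first.
        proxy_request_buffering off;
        # HTTP/1.1 is needed so chunked request bodies can be
        # forwarded without buffering.
        proxy_http_version 1.1;
        proxy_pass http://backend;
    }
}
```

With request buffering disabled, end-to-end flow control is 
governed by the socket buffers along the path, so their sizes 
directly affect how much data can be in flight.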

Overall, thank you for the patch, but it looks like something 
very specific to your particular use case.  We would like to 
avoid introducing this into nginx, at least until there are more 
requests for it.

I would also recommend taking a closer look at your setup.  The 
numbers you've provided suggest that something may be wrong 
elsewhere, and the fact that smaller buffers fix the problem may 
be unrelated.

-- 
Maxim Dounin
http://nginx.org/
