[PATCH] Proxy: support configuration of socket buffer sizes
Nate.Karstens at garmin.com
Mon May 22 19:02:04 UTC 2017
I'd be happy to explain. Our application is actually relying more on the change to support the SO_SNDBUF option, but I noticed the SO_RCVBUF option there and thought it was worth exposing both at the same time.
The application we're having issues with is essentially a file transfer through a proxy to a relatively slow storage medium. Because we're using a proxy, there are three buffers involved on the receive end of the HTTP request: 1) the receive buffer on the external nginx socket, 2) the send buffer in the nginx proxy module, and 3) the receive buffer on the proxied server. So, in a system where each buffer is at most 5 MB, you can have 3 x 5 = 15 MB of data sitting in TCP buffers at a given point in time.
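To make the worst case concrete (the buffer sizes match the 5 MB example above, but the storage write rate below is an illustrative assumption, not a measurement from our system):

```python
# Three TCP buffers sit between the client and the slow storage:
# client->nginx receive, nginx->backend send, backend receive.
BUFFERS = 3
BUF_SIZE_MB = 5          # per-socket maximum, as in the example above
WRITE_RATE_MB_S = 0.1    # assumed storage write throughput

buffered_mb = BUFFERS * BUF_SIZE_MB            # data in flight: 15 MB
drain_time_s = buffered_mb / WRITE_RATE_MB_S   # time before a reply: 150 s

print(buffered_mb, drain_time_s)
```

At those assumed rates the client waits on the order of minutes after its final byte is acknowledged, which is exactly the window in which it times out.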
In most circumstances, I don't think this would be a problem. Here, however, writing the data is so slow that the HTTP client times out waiting for a reply (which is only sent once all data has been written out). Unfortunately, we cannot solve this by increasing the client's timeout. We found that reducing the size of each buffer reduces the time between when the client sends the last byte of its request and when it receives a reply, ultimately preventing the timeout: the "rcvbuf" parameter to the "listen" directive lets us configure SO_RCVBUF for the first and third sockets mentioned above, and this patch lets us configure SO_SNDBUF for the second. We would prefer not to adjust the system's default sizes for these buffers because that negatively impacts performance of other applications on the system.
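For reference, the tuning described above would look roughly like this in configuration. The "listen ... rcvbuf=" parameter already exists; "proxy_socket_sndbuf" is the directive proposed by this patch, and the 256k values are placeholders, not recommendations:

```nginx
server {
    # 1) shrink the receive buffer on the external socket
    listen 80 rcvbuf=256k;

    location / {
        # 2) shrink the send buffer on the upstream connection (this patch)
        proxy_socket_sndbuf 256k;
        proxy_pass http://backend;
    }
}

# 3) on the proxied server, shrink its own listen rcvbuf the same way
```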
Although this seems like a fairly specific use case, I think it can be generalized as: 1) the client cannot wait indefinitely after sending the last byte of the request, 2) the server must process all data before it can generate a reply, and 3) the server processes data relatively slowly. This seemed general enough that it was worth adding the functionality for our own use, and we thought it might be applicable to other users as well.
From: nginx-devel [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin
Sent: Friday, May 19, 2017 1:21 PM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Proxy: support configuration of socket buffer sizes
On Thu, May 18, 2017 at 06:22:41PM +0000, Karstens, Nate wrote:
> I just wanted to follow up on this patch and make sure that the fraud
> detection notice or confidentiality notice added by my company wasn't
> precluding it from consideration.
No, it wasn't. And the fraud detection notice seems to be added on your incoming mail; the mailing list copy doesn't contain anything like this, see http://nginx.org/pipermail/nginx-devel/2017-April/009876.html.
Avoiding confidentiality notices on public mailing lists might be a good idea though.
> # HG changeset patch
> # User Nate Karstens <nate.karstens at garmin.com>
> # Date 1493467011 18000
> #      Sat Apr 29 06:56:51 2017 -0500
> # Node ID 1251a543804b17941b2c96b84bd1f4e58a37bc15
> # Parent 8801ff7d58e1650c9d1abb50e09f5979e4f9ffbf
> Proxy: support configuration of socket buffer sizes
> Allows the size of the buffers used by the TCP sockets for HTTP proxy
> connections to be configured. The new configuration directives are:
> * proxy_socket_rcvbuf
> * proxy_socket_sndbuf
> These correspond with the SO_RCVBUF and SO_SNDBUF socket options,
> respectively.
> This is useful in cases where the proxy processes received data
> slowly. Data was being buffered in three separate TCP buffers
> (nginx-from-client receive, nginx-to-proxy send, and proxy-from-nginx
> receive). The cumulative effect is that the client thinks it has sent
> all of the data, but times out waiting for a reply from the proxy,
> which cannot reply because it is still processing the data in its
> buffers.
In practice, we've never seen cases when default socket buffer sizes on backend connections are not appropriate, and/or tuning the system default is not sufficient. So even though, as you can see from the code, nginx is able to tune SO_RCVBUF in ngx_event_connect_peer(), this was never exposed to the configuration.
This may be related to the fact that HTTP in general doesn't really depend on particular parts of a request being buffered, and nginx does not use pipelining in requests.
Could you please elaborate more on the use case where you see the problem described, and why tuning system defaults is not sufficient in your case?