[PATCH] Proxy: support configuration of socket buffer sizes

Karstens, Nate Nate.Karstens at garmin.com
Tue Dec 12 15:44:42 UTC 2017


Maxim,

I wanted to follow up on this because our software has now been released, so I can provide a few more details about our setup.

Our use case involves the user transferring a large amount of data from a mobile device (running iOS or Android) through the HTTP proxy and storing it on an SD card supplied by the user. If the SD card's write speed is too slow, the proxy's send buffer eventually fills up, creating a long delay between when the HTTP client sends the last byte of the request and when it receives a reply, which ultimately causes the client to time out.

Because the request contains a large amount of data, "proxy_request_buffering" is set to "off".
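
For reference, here is a minimal sketch of the relevant part of our configuration (the location path and upstream name are illustrative, not our actual setup):

    location /upload {
        # Stream the request body to the backend instead of
        # buffering it in memory or on disk first.
        proxy_request_buffering off;

        # Needed so nginx can forward a chunked request body
        # to the backend as it arrives.
        proxy_http_version 1.1;

        # Allow arbitrarily large uploads.
        client_max_body_size 0;

        proxy_pass http://storage_backend;
    }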

We looked into adjusting the behavior of the client, but this was not possible. To explain, there are generally two ways a mobile app can use HTTP to transfer data. The most straightforward way is to actively transfer the data using either an HTTP client integrated into the app or the HTTP libraries provided by the platform. However, for large transfers this requires the user to keep the mobile app in the foreground -- if the mobile OS detects that the user is not actively using the app then it will suspend the app, which interrupts the transfer.

The alternative method to transfer data is to utilize the HTTP background transfer functionality provided by the mobile OS. With this method, the app configures the HTTP transfer and then asks the mobile OS to complete it on the app's behalf. The advantage here is that the user can use other apps while the mobile OS completes the transfer. Unfortunately, the mobile OS provides no ability to customize connection parameters, such as timeouts.

I read through the Cloudflare blog entry (thank you -- that was interesting). Their case is different, though: they were having problems with a download, while we are having problems with an upload. As such, perhaps "proxy_send_timeout" applies in our case rather than the "send_timeout" they were using. Still, it seems undesirable to make the timeout too large, because that increases the time before a genuine problem with the connection can be detected. The Cloudflare team also considered reducing the TCP buffer size, but ultimately decided on a different solution. I noticed that their solution has not been incorporated into the main distribution; do you know why that is?
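
For anyone following along, these are the two stock directives in question (the values shown are the nginx defaults; this is just to illustrate which side of the proxy each one covers):

    # Timeout for writing a response back to the client
    # (the Cloudflare download case).
    send_timeout 60s;

    # Timeout for writing the request to the proxied server
    # (our upload case).
    proxy_send_timeout 60s;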

For us, the option to configure the TCP buffer sizes seems to be the most straightforward.

In our system, the default TCP send buffer size is 16 KB. However, the kernel can grow it to a maximum of 4 MB (the Linux stack adjusts this in tcp_sndbuf_expand() -- see tcp_input.c). We do not think that reducing the system-wide maximum TCP buffer size is a good option here, because the server runs many other TCP applications that work well with the current maximum; changing it system-wide introduces additional risk and may lead to more widespread application-specific tuning.
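
Concretely, the stock Linux sysctl values (min/default/max send buffer per socket, in bytes) match the numbers above; lowering the third value would cap the send buffer of every TCP socket on the device, which is exactly what we want to avoid:

    net.ipv4.tcp_wmem = 4096 16384 4194304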

Thanks for your consideration,

Nate

-----Original Message-----
From: nginx-devel [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin
Sent: Tuesday, May 23, 2017 3:25 PM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Proxy: support configuration of socket buffer sizes

Hello!

On Mon, May 22, 2017 at 07:02:04PM +0000, Karstens, Nate wrote:

> Maxim,
>
> I'd be happy to explain. Our application is actually relying more on
> the change to support the SO_SNDBUF option, but I noticed the SO_RCVBUF
> option there and thought it was worth exposing that at the same time.
>
> The application we're having issues with is essentially a file
> transfer through a proxy to a relatively slow storage medium.
> Because we're using a proxy there are three buffers involved on the
> receive end of the HTTP request: 1) receive buffer on external nginx
> socket, 2) send buffer in nginx proxy module, and
> 3) receive buffer on proxied server. So, in a system where each buffer
> is a maximum of 5 MB, you can have 3 x 5 = 15 MB of data in the TCP
> buffers at a given point in time.

Send buffer on the client contributes to overall buffering as well, and probably should be counted too.  But, frankly, 5 MB is a lot, much higher than typical defaults, and may cause problems by itself.  See https://blog.cloudflare.com/the-curious-case-of-slow-downloads/
for a description of a problem Cloudflare faced with such socket buffer sizes.

> In most circumstances, I don't think this would be a problem.
> However, writing the data is so slow that the HTTP client times out
> waiting for a reply (which is only sent once all data has been written
> out). Unfortunately, we cannot solve this by increasing the client's
> timeout. We found that reducing the size

Are you using some custom client, or a browser?

Even with buffers as huge as 3x5 MB, a really slow backend is required to trigger a timeout in a typical browser, as browsers seem to happily wait for 60+ seconds (actually, much more, but 60 seconds is the default timeout in nginx).  This means the backend needs to be slower than 250 KB/second for this to become a problem even without any socket tuning, and that sounds more like a 1x CD than even a Class 2 SD card.
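
Spelling that estimate out (assuming all three 5 MB buffers are completely full when the client finishes sending, and nginx's default 60-second timeout):

    3 buffers x 5 MB = 15 MB queued in front of the reply
    15 MB / 60 s    ~= 250 KB/s minimum backend write speed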

> of each buffer -- using the "rcvbuf" parameter to the "listen"
> directive lets us configure SO_RCVBUF for the first and third sockets
> mentioned above, and this patch lets us configure SO_SNDBUF of the
> second socket -- reduces the time between when the client sends the
> last byte of its request and when it receives a reply, ultimately
> preventing the timeout. We would prefer not to adjust the system's
> default sizes for these buffers because that negatively impacts
> performance on other applications used by the system.
>
> Although this seems like a fairly-specific use-case, I think it can be
> generalized as: 1) the client cannot wait indefinitely after sending
> the last byte of the request, 2) the server must process all data
> before it can generate a reply, and 3) the server processes data
> relatively slowly. This seemed general enough that it was worth adding
> the functionality for our own use and thought it might be applicable
> to other users as well.

Normally, nginx reads the whole request from the client, and only after that starts sending it to a backend server.  This effectively means infinite buffering, and will certainly trigger a timeout if the backend server is not able to process the request in a reasonable time.  Socket buffer sizes may become important when using "proxy_request_buffering off" and/or non-http connections (e.g., WebSocket ones), but those are special cases in themselves.
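
For comparison, the standard configuration for proxying WebSocket connections, where the same lack of application-level buffering applies (as documented on nginx.org; the upstream name is illustrative):

    location /ws/ {
        proxy_pass http://websocket_backend;

        # WebSocket proxying requires HTTP/1.1 plus forwarding
        # the Upgrade/Connection handshake headers.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }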

Overall, thank you for the patch, but it looks like something very specific to your particular use case.  We would like to avoid introducing this into nginx, at least until there are more requests for it.

I would also recommend taking a closer look at your setup.
The numbers you've provided suggest that something may be wrong elsewhere, and the fact that smaller buffers fix the problem may be unrelated.

--
Maxim Dounin
http://nginx.org/
