Tuning client request buffering in ngx_http_proxy_module

bengalih nginx-forum at forum.nginx.org
Tue Feb 1 20:19:55 UTC 2022


Thank you for the response Maxim.

Unfortunately, while it reinforces my understanding of the documentation, it
doesn't help me reconcile what I am seeing or understand how to tune things
properly.

Let me explain what I am seeing now based upon your explanation.

(N.B.: In my initial post I was experiencing the issue with my backend IIS
server pointing its virtual directory to a UNC path, which was causing very
slow uploads and exacerbating all of the issues.  I have not solved that
problem yet, so in the meantime I have redirected to a local path and my
speeds are much improved.  However, I am still experiencing these same issues
to a lesser degree.)

In my tests I am uploading a ~800MB file via WebDAV.  I am terminating
client https at NGINX and then proxying to the backend using http.
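
For reference, the relevant part of my config looks roughly like this (the
server name, certificate paths, and backend address are placeholders rather
than my real values):

    server {
        listen 443 ssl;
        server_name dav.example.com;

        ssl_certificate     /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            # raised so large WebDAV uploads are accepted at all
            client_max_body_size 1g;
            proxy_pass http://192.168.1.10:80;
        }
    }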

My direct https speeds to the backend are about 100 Mbps (without NGINX).
As soon as I go through NGINX those speeds are almost cut in half (~55 Mbps)
*if* "proxy_request_buffering off" is set.
My NGINX (v1.18) is running on embedded hardware and only has about 100 MB of
RAM available (2 GB swap).
I do see very high CPU utilization when uploading and attribute the decrease
in bandwidth to this - something I need to live with for now.
However, memory utilization is almost nothing with this configuration.

If I keep proxy_request_buffering at the default of "on" then my speeds are
further reduced to about 30 Mbps.
In addition to high CPU usage I also see memory usage of about 75% of my
available memory (50-75 MB).

When I have "proxy_request_buffering off" I don't appear to have any issues
(apart from the speed mentioned above).

However, with "proxy_request_buffering on" (default) I have the following
strange behavior:

First, upload speeds are slower - around 25-30 Mbps (~55% slower than with
"proxy_request_buffering off").
Also, the upload pauses throughout for seconds at a time.  I am taking these
pauses into account when calculating the Mbps.
Upon completion of the upload, or rather at 99% of the upload (~806 MB of
811 MB), my client pauses for about 40 seconds before finishing the last 1%;
I am not taking this time into account when calculating the upload Mbps.
(Before I fixed my UNC pathing issue, when my upload speed was around
17 Mbps, it actually took 5-8 minutes to complete this last 1%.)

During this time I can see via packet capture that the NGINX server is still
sending HTTP/TCP data to the backend server.  When this transfer completes,
the client finally reports success.  Clearly the NGINX server is still
sending its buffered data, and the client is waiting for the response from
the backend server confirming that it has finally received all of the data
before it can report a successful upload.

Running lsof on the worker process shows the following file at the location I
defined for "client_body_temp_path":

nginx   25101 nobody    7u      REG        8,1 546357248  129282
/tmp/mnt/flash_drive/tmp/0000000009 (deleted)

lsof shows the file as (deleted), yet the size it reports continues to grow
up to the full size of the uploaded file.
Additionally, running "df" shows that my drive has filled up by an equivalent
amount.
However, the file doesn't exist when I look in that location.
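
For completeness, the temp path in question is set with something along
these lines (the surrounding context here is just illustrative; the path
matches what lsof reports above):

    http {
        client_body_temp_path /tmp/mnt/flash_drive/tmp;
    }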

So based on the above I have two questions:
1) While I understand that the entire file must be written to disk because
the upload size is greater than the buffers, where is this file actually
being written?  It is clearly being written to disk, but lsof shows it as
deleted even though it continues to grow (as reflected by both lsof and df)
as the upload continues.

2) It would seem that with such large uploads it makes the most sense to
keep "proxy_request_buffering off", but assuming you needed the advantages of
buffering (like you describe in your first reply), is there anything that can
be done to tune it so that the speeds are faster and, especially, so there
isn't such a long delay at 99% of the upload?  I played around with some
buffer settings (see the sketch below), but none of them seemed to make any
noticeable difference.
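
For reference, the kind of values I experimented with looked roughly like
this (the sizes are only examples of what I tried, not recommendations, and
the backend address is a placeholder):

    location / {
        proxy_request_buffering on;

        # request-body buffering: how much of the body is held in memory
        # before it spills to a temp file under client_body_temp_path
        client_body_buffer_size 1m;
        client_body_temp_path /tmp/mnt/flash_drive/tmp;

        proxy_pass http://192.168.1.10:80;
    }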

Any additional knowledge you can impart is appreciated.

Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293558,293566#msg-293566


