Tuning client request buffering in ngx_http_proxy_module

bengalih nginx-forum at forum.nginx.org
Wed Feb 2 17:42:02 UTC 2022


> Note that SSL is likely the most important contributor to CPU 
> utilization in this setup.  It might be a good idea to carefully 
> tune ciphers used.

I believe I have set this up fairly appropriately.  If you know of a resource
that explains this in more detail, I would appreciate it.
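
In case it helps to be concrete, what I mean by "set fairly appropriately" is
roughly the following (an illustrative sketch only, not my exact configuration;
the right values depend on which clients you need to support):

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    # illustrative cipher list; trim or extend for your clients
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;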

> To re-iterate what was already written in the previous message: 
> nginx opens a temporary file, immediately deletes it, and then 
> uses this file for disk buffering.  The file is expected to be 
> deleted from the very start, and it is expected to grow over time 
> as it is used for disk buffering.  

My apologies.  I must have missed that in the initial response, as I do not
fully understand how or why this mechanism works.
If a file is deleted, I do not see how it can still be used as a buffer.
This must be a Linux mechanism I am not familiar with.
I also don't understand the difference, then, between the default of
"client_body_in_file_only off" and "client_body_in_file_only on", at least
in the case where the file is bigger than the buffer.  When it is set to
on I can at least see the whole file on disk, but when it is off you state
that the file is deleted, and yet the space the file uses still remains.
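
If I am reading the explanation correctly, the mechanism seems to be that an
unlinked ("deleted") file continues to exist as long as a process holds an
open descriptor to it, so nginx can keep using it for buffering, and the disk
space is only released once nginx closes it.  With that in mind, my current
understanding of the two settings is something like this (values and paths
are illustrative only, not my actual configuration):

    client_body_buffer_size 16k;                          # larger bodies spill to a temp file
    client_body_temp_path /var/cache/nginx/client_temp;   # path is illustrative
    client_body_in_file_only off;    # default: the temp file is unlinked as soon as it is
                                     # created, so it never shows up in a directory listing,
                                     # but its space is held until nginx closes the descriptor
    # client_body_in_file_only on;   # keeps the file visible on disk (and left behind afterwards)

If that is right, then in the default "off" case the space should presumably
still be visible through tools that list deleted-but-open files (e.g. lsof),
even though the file no longer appears in a directory listing.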

> As far as I understand from your description, the "long delay at 
> the 99% upload" you see with request buffering switched on isn't 
> really at 99%, but instead at 100%, when the request is fully sent 
> from the client to nginx, and the delay corresponds to sending the 
> request body from nginx to the backend server.  To speed up this 
> process, you have to speed up your backend server and connection 
> between nginx and the backend, as well as the disk on nginx 
> server. 

You are probably right that the upload has completed 100%, but the client
cannot finish until a response is received from the backend server.
The NGINX server and the backend server are both connected to the same
gigabit switch.  This is all consumer-grade hardware, but it shows very
little utilization during these tests.  As I do not have these issues
between clients and the backend server inside the network, I can only
assume that the issue is the NGINX box itself and its inability to send
the data to the backend fast enough.  This is probably exacerbated by the
overtaxed CPU.

> Given limited resources on nginx box, as well as small number of 
> clients and only one backend server, "proxy_request_buffering 
> off;" might be actually a better choice in your setup.

I think you are right in this case, and luckily the needs of this
application make that a reasonable choice.
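
For completeness, the change I plan to make is roughly the following (a
sketch only, with a hypothetical backend address and example values):

    location / {
        proxy_pass http://192.168.1.50:8080;   # hypothetical backend address
        proxy_http_version 1.1;                # otherwise a chunked client body is buffered anyway
        proxy_request_buffering off;           # stream the request body to the backend as it arrives
        client_max_body_size 0;                # example only: 0 disables the body size check
    }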

Thank you for helping me understand the process a bit better.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293558,293578#msg-293578


