Dear Maxim Dounin, Team & Community,

Thank you for your suggestions. It would be helpful if you could
advise on the following points.

> In general, $request_time minus $upstream_response_time is the
> slowness introduced by the client.

1. That is true most of the time, but clients are not willing to
accept it unless they see a log from the server side (say the client's
server itself is running on another hosting service, such as an Amazon
EC2 instance).

> Further, $request_time can be saved at various request processing
> stages, such as after reading request headers via the "set"
> directive, or via a map when sending the response headers. This
> provides mostly arbitrary time measurements if you need it.

2. How do we get control in the nginx configuration at the moment the
last byte of the request body is received from the client? (A sketch
of my understanding of the set/map approach is included below, after
point 4.)

> For detailed investigation on what happens with the particular
> client, debugging log is the most efficient instrument, notably the
> "debug_connection" directive which makes it possible to activate
> debug logging only for a particular client.

This debug log would definitely help to check when the last byte of
the request body arrived!

3. But is it recommended to use an nginx binary built with
--with-debug in production environments?

4. We receive such slow requests only infrequently, and enabling the
debug log produces a huge amount of output per request (about 2 MB of
log for a 10 MB request body upload), which makes it hard to spot the
slow request in it. That is why I wrote that there is no
straightforward way to measure the time taken by the client to send
the request body completely.
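For reference, the following is how I currently plan to enable the
debug log for the affected client only (assuming a binary built with
--with-debug; the 192.0.2.0/24 network below is just a placeholder for
the client's address range). Please let me know if this is not what
you meant:

    # error log stays at the usual level for everyone else
    error_log  /var/log/nginx/error.log  info;

    events {
        # write debug-level messages to the error log, but only for
        # connections coming from this client's network
        debug_connection  192.0.2.0/24;
    }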
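Also, regarding the "set"/map approach quoted under point 2, is the
sketch below roughly what you had in mind? It is untested, and the
variable names ($t_headers_read, $t_response_started), the
X-Debug-Time header and the backend address are only placeholders of
mine:

    http {
        # evaluated lazily; referencing the variable while the
        # response headers are being built captures $request_time at
        # that moment (map results are cached per request)
        map $host $t_response_started {
            default  $request_time;
        }

        log_format timing  '$remote_addr "$request" '
                           'request_time=$request_time '
                           'upstream_time=$upstream_response_time '
                           'headers_read=$t_headers_read '
                           'response_started=$t_response_started';

        server {
            listen 80;
            access_log  /var/log/nginx/timing.log  timing;

            location /upload {
                # save $request_time right after the request headers
                # are read (the rewrite phase runs before the request
                # body is read)
                set $t_headers_read $request_time;

                # referencing the map variable here forces its
                # evaluation when the response headers are sent
                add_header  X-Debug-Time  $t_response_started  always;

                proxy_pass  http://127.0.0.1:8080;
            }
        }
    }

If that is correct, then with the default "proxy_request_buffering
on;" the difference between the two saved values should at least give
an upper bound on the time the client took to send the request body,
since proxying starts only after the whole body has been read. Please
correct me if I have misunderstood this.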
> Is there a timeout for the whole request?

5. How can we prevent attacks like Slowloris DDoS from exhausting the
client connections when using the open-source version? Timeouts such
as client_body_timeout are not of much help against such attacks,
since they only limit the time between two successive read operations
and an attacker can keep sending a few bytes within each timeout
period.

Thanks & Regards,
Devarajan D.

---- On Mon, 02 Oct 2023 03:46:40 +0530 Maxim Dounin <mdounin@mdounin.ru> wrote ---

> Hello!
>
> On Sun, Oct 01, 2023 at 08:20:23PM +0530, Devarajan D via nginx wrote:
>
> > Currently, there is no straightforward way to measure the time
> > taken by client to upload the request body.
> >
> > 1. A variable similar to request_time, upstream_response_time
> > can be helpful to easily log this time taken by client.
> > So it will be easy to prove to the client where the slowness
> > is.
>
> In general, $request_time minus $upstream_response_time is the
> slowness introduced by the client. (In some cases,
> $upstream_response_time might also depend on the client behaviour,
> such as with "proxy_request_buffering off;" or with
> "proxy_buffering off;" and/or due to proxy_max_temp_file_size
> reached.)
>
> Further, $request_time can be saved at various request processing
> stages, such as after reading request headers via the "set"
> directive, or via a map when sending the response headers. This
> provides mostly arbitrary time measurements if you need it.
>
> For detailed investigation on what happens with the particular
> client, debugging log is the most efficient instrument, notably
> the "debug_connection" directive which makes it possible to
> activate debug logging only for a particular client
> (http://nginx.org/r/debug_connection).
>
> > 2. Also, is there a timeout for the whole request?
> >
> > (say request should be timed out if it is more than 15
> > minutes)
>
> No.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx