From nginx-forum at forum.nginx.org Tue Feb 1 00:53:24 2022
From: nginx-forum at forum.nginx.org (bengalih)
Date: Mon, 31 Jan 2022 19:53:24 -0500
Subject: Tuning client request buffering in ngx_http_proxy_module
Message-ID: 

I had/have an issue where I am proxying from NGINX to a backend WebDAV
server.
My upload speeds were slow and had long pauses in them, including very long
pauses at the end (5 minutes or more on uploads around 1GB).

Via packet captures I found that the NGINX server was not transmitting data
synchronously to the backend WebDAV server, but was clearly doing buffering
despite the fact I had set "proxy_buffering off".
Looking at the documentation led me to "proxy_request_buffering off", which
seems to have solved my problem and seems to immediately send the entire
WebDAV client request directly to the backend server in a synchronous
manner.

I did however want to experiment with proxy_request_buffering left enabled
and see if setting smaller buffers to the backend would perhaps not result
in such long asynchronous delays.

With "proxy_request_buffering on" (default) I have also set the
"client_body_temp_path /tmp/cache" and set "client_body_in_file_only on".
In this configuration I can see the client request get placed into the
/tmp/cache directory, so I know my directive and paths are working.

However, if I set "client_body_in_file_only off" (default) no file gets
created at all, despite the fact that my client PUT request over WebDAV is
undoubtedly larger than any buffer settings.

By doing a "df" on my NGINX box I can see that the space on my drive is
being eaten up by an amount equivalent to my upload (i.e. if I upload a
500 MB file I can see my free space decrease by 500 MB). I cannot however
see anything in /tmp/cache and have no idea where these files are being
placed.

I also don't know how/why NGINX is caching/buffering the entire file and
what is controlling how it is sending this to the backend server. I have
tried playing with the "client_body_buffer_size" but it does not appear to
have any effect on how the data gets buffered or the size of the file.

Please help me understand how NGINX is working here in the background and
how I can tune these settings.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293558,293558#msg-293558

From francis at daoine.org Tue Feb 1 08:42:52 2022
From: francis at daoine.org (Francis Daly)
Date: Tue, 1 Feb 2022 08:42:52 +0000
Subject: How to make nginx fail if it can't write to cache dir.
In-Reply-To: <653c3b3f641547d0cd1ec032dad48a0c.NginxMailingListEnglish@forum.nginx.org>
References: <000a01d81387$6e036aa0$4a0a3fe0$@roze.lv> <653c3b3f641547d0cd1ec032dad48a0c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20220201084252.GE5824@daoine.org>

On Sun, Jan 30, 2022 at 04:01:31AM -0500, unikron wrote:

Hi there,

> The error on a base dir is critical to all cache requests, so no request
> will succeed.

Yes. In your case, *all* requests are cache requests, so you want your
process to fail entirely.

But in the general case, all of the non-cache requests could be handled
correctly while the administrator fixes their cache-request configuration,
so hard-failing is probably not appropriate there.

> I know I can write a script to monitor it, but it seems like the wrong
> approach.
> I would like to have the option for nginx to stop if it can't do the job
> as it should, but I guess that if no one answered with that kind of
> solution, it doesn't exist in nginx. 
I believe that you are correct: stock nginx does not provide this facility today. If it is important to your use case, it should be possible for you to modify your nginx source code to do the "stop the process entirely" that you want. For what it's worth: the log line that you showed seems to only be possible when one of the functions: ngx_create_path() or ngx_ext_rename_file() is called; and that does not happen in many places in the source. If you can (encourage someone to) modify the code to hard-exit when your circumstances happen, then you will have a "quick fix" for your use case. After that, maybe the change (controlled by some config option) would be interesting to the wider project; in that case, you might not have to maintain the change as an external patch that you apply-or-adapt to every future nginx release that you want to use. (I don't know if this particular facility could easily be provided by an external module.) In the main, at least one of the reasons why a facility does not exist for general use, is that no-one has yet considered it important enough to arrange that the code be written and shared. You've found such a case that would be convenient for you if it already existed; it doesn't; so you get to decide how important it is to you to make it exist. "Scripting around it" is a perfectly valid option too, of course. Others exist as well, no doubt. That probably isn't the answer that you want; but maybe it will help you search for alternate solutions, once you know that the "easy" option is not there. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Feb 1 09:24:47 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Feb 2022 09:24:47 +0000 Subject: Using single persistent socket to send subrequests In-Reply-To: References: <164306886240.74788.8663387132680835741@ec2-18-197-214-38.eu-central-1.compute.amazonaws.com> Message-ID: <20220201092447.GF5824@daoine.org> On Wed, Jan 26, 2022 at 07:18:02AM +0000, Devashi Tandon wrote: Hi there, You may or may not have seen a parallel mail with the Subject: auth_request sub requests not using upstream keepalive. > I tried with clearing the connections header but NGINX is still sending > the 5th response through a new source port. Let me give a more detailed > configuration we have. Just to inform you, we have our own auth module > instead of using the NGINX auth module. We call ngx_http_post_request to > post subrequests and the code is almost the same as that of auth module. A response there indicates that for the stock nginx auth_request, if there is any body content in the response, that leads to the tcp connection being closed. If your module does the same, then you probably want to either make sure that the auth module response has no body; or make sure that your auth module reads the response body. In very light testing with "keepalive 3;", I saw the same tcp connection being re-used for subsequent requests (from the same client) when the auth response had no body; and new connections being used when the auth response had a body. So I suspect that that is the piece that is missing in your setup. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Feb 1 14:41:22 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Feb 2022 17:41:22 +0300 Subject: Tuning client request buffering in ngx_http_proxy_module In-Reply-To: References: Message-ID: Hello! 
On Mon, Jan 31, 2022 at 07:53:24PM -0500, bengalih wrote:

> I had/have an issue where I am proxying from NGINX to a backend WebDAV
> server.
> My upload speeds were slow and had long pauses in them, including very long
> pauses at the end (5 minutes or more on uploads around 1GB).
>
> Via packet captures I found that the NGINX server was not transmitting data
> synchronously to the backend WebDAV server, but was clearly doing buffering
> despite the fact I had set "proxy_buffering off".
> Looking at the documentation led me to "proxy_request_buffering off", which
> seems to have solved my problem and seems to immediately send the entire
> WebDAV client request directly to the backend server in a synchronous
> manner.

Just in case it's not clear, here is some background.

By default, nginx tries to minimize the time needed for request
processing on the backend. It is a common situation that backend servers
use a process-per-connection model, and are thus very inefficient at
handling slow clients. In contrast, nginx is an event-based server, and
can handle slow clients and lots of connections efficiently. To do so,
nginx:

- buffers responses, see http://nginx.org/r/proxy_buffering, and
- buffers requests, see http://nginx.org/r/proxy_request_buffering

Additionally, buffering requests makes it possible to retry requests to
a different backend server if there is a failure (see
http://nginx.org/r/proxy_next_upstream). Note that using
"proxy_request_buffering off;" means retries won't work in most cases
for requests with bodies.

> I did however want to experiment with proxy_request_buffering left enabled
> and see if setting smaller buffers to the backend would perhaps not result
> in such long asynchronous delays.
>
> With "proxy_request_buffering on" (default) I have also set the
> "client_body_temp_path /tmp/cache" and set "client_body_in_file_only on".
> In this configuration I can see the client request get placed into the
> /tmp/cache directory, so I know my directive and paths are working.
>
> However, if I set "client_body_in_file_only off" (default) no file gets
> created at all, despite the fact that my client PUT request over WebDAV is
> undoubtedly larger than any buffer settings.
>
> By doing a "df" on my NGINX box I can see that the space on my drive is
> being eaten up by an amount equivalent to my upload (i.e. if I upload a
> 500 MB file I can see my free space decrease by 500 MB). I cannot however
> see anything in /tmp/cache and have no idea where these files are being
> placed.

When nginx creates temporary files which are expected to be removed, it
opens a file and immediately deletes it. This way temporary files are
automatically cleaned up by the system as soon as nginx closes them.
Further, if the nginx process dies or is killed, temporary files are also
correctly cleaned up.

You can use "lsof" to see such files while they are still open by nginx;
they are shown by lsof as "deleted".

> I also don't know how/why NGINX is caching/buffering the entire file and
> what is controlling how it is sending this to the backend server. I have
> tried playing with the "client_body_buffer_size" but it does not appear to
> have any effect on how the data gets buffered or the size of the file.

The "client_body_buffer_size" controls the size of a memory buffer used
for reading the request body. If request buffering is used, anything
larger than that will be written to disk, and using a larger
client_body_buffer_size might help to reduce disk load. 
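Put together, a minimal sketch of a proxied location using these
directives might look like this (the upstream name "backend", the paths,
and the sizes are illustrative only, not recommendations):

    location / {
        proxy_pass http://backend;

        # buffer the request body before contacting the backend (the
        # default); bodies larger than the memory buffer are spooled
        # to a temporary file under client_body_temp_path
        proxy_request_buffering on;
        client_body_buffer_size 128k;
        client_body_temp_path /var/lib/nginx/tmp/client_body;

        # buffer the response, so a slow client does not hold the
        # backend connection open
        proxy_buffering on;

        # retrying another server only works reliably while the
        # request body is still buffered
        proxy_next_upstream error timeout;
    }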
If request buffering is switched off, the buffer is used to read the
request body and send it to the backend.

> Please help me understand how NGINX is working here in the background and
> how I can tune these settings.

Hope the above explanation helps.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue Feb 1 20:19:55 2022
From: nginx-forum at forum.nginx.org (bengalih)
Date: Tue, 01 Feb 2022 15:19:55 -0500
Subject: Tuning client request buffering in ngx_http_proxy_module
In-Reply-To: 
References: 
Message-ID: <1736f4f67deb686134aa6d61ebfc3572.NginxMailingListEnglish@forum.nginx.org>

Thank you for the response, Maxim. Unfortunately, while it reinforces my
understanding from the documentation, it doesn't allow me to reconcile what
I am seeing or help me understand how to tune properly. Let me explain what
I am seeing now based upon your explanation.

(N.B.: In my initial post I was experiencing the issue with my backend IIS
server pointing its virtual directory to a UNC path, which was causing very
slow uploads and exacerbating all issues. I have not solved that problem
yet, so in the meantime I have redirected to a local path and my speeds are
much improved. However, I still am experiencing these same issues to a
lesser degree.)

In my tests I am uploading a ~800MB file via WebDAV. I am terminating
client https at NGINX and then proxying to the backend using http.

My direct https speeds to the backend are about 100mbps (sans NGINX).
As soon as I go through NGINX those speeds are almost cut in half (~55mbps)
*if* "proxy_request_buffering off".
My NGINX (v1.18) is running on embedded hardware and only has about 100MB
RAM available (2 GB swap).
I do see very high CPU utilization when uploading and attribute the
decrease in bandwidth to this - something I need to live with for now.
However memory utilization is almost nothing with this configuration.

If I keep proxy_request_buffering at the default of "on" then my speeds are
further reduced to about 30mbps.
In addition to high CPU usage I also see memory usage of about 75% of my
available memory (50-75MB).

When I have "proxy_request_buffering off" I don't appear to have any issues
(apart from the speed mentioned above). However, with
"proxy_request_buffering on" (default) I have the following strange
behavior:

First, upload speeds are slower - around 25-30mbps (~55% slower than with
"proxy_request_buffering off"). Also, the upload pauses throughout for
seconds at a time. I am taking these into account when calculating the
mbps.

Upon completion of the upload, or rather at 99% of the upload (~806MB of
811MB), my client pauses for about 40 seconds before finishing the last 1%;
I am not taking this time into account when factoring the upload mbps.
(Before I fixed my UNC pathing issues, when my upload speed was around
17mbps, it actually took 5-8 minutes to complete this last 1%.)

During this time I can see via packet capture that the NGINX server is
still sending HTTP/TCP data to the backend server. When this data
completes, the client finally triggers a success. Clearly the NGINX server
is still buffering the data, and the client is waiting for a response from
the backend server that it has finally received all data so that it can
report success. 
An LSOF on the worker process shows the following file at the location I
defined for "client_body_temp_path":

nginx 25101 nobody 7u REG 8,1 546357248 129282 /tmp/mnt/flash_drive/tmp/0000000009 (deleted)

LSOF says (deleted), although the size listed in LSOF continues to grow up
to the maximum size of the uploaded file. Additionally, doing a "df" shows
that my drive has been filled up an equivalent amount. However, this file
doesn't exist when looking in this location.

So based on the above I have two questions:

1) While I understand that the entire file must be written to disk because
the upload size is greater than the buffers, where is this file being
written? It is clearly being written to the disk, but LSOF shows it as
deleted even though it continues to grow (as reflected by LSOF and df) as
the upload continues.

2) It would seem that with such large uploads it makes the most sense to
keep "proxy_request_buffering off", but assuming you needed the advantages
of this (like you specify in your first reply), is there anything that can
be done to tune this so that the speeds are faster and, especially, so
there isn't such a long delay at the 99% upload? I played around with some
buffer settings, but none of them seems to make any noticeable difference.

Any additional knowledge you can impart is appreciated. Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293558,293566#msg-293566

From mdounin at mdounin.ru Tue Feb 1 23:29:09 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Feb 2022 02:29:09 +0300
Subject: Tuning client request buffering in ngx_http_proxy_module
In-Reply-To: <1736f4f67deb686134aa6d61ebfc3572.NginxMailingListEnglish@forum.nginx.org>
References: <1736f4f67deb686134aa6d61ebfc3572.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Tue, Feb 01, 2022 at 03:19:55PM -0500, bengalih wrote:

[...]

> My direct https speeds to the backend are about 100mbps (sans NGINX).
> As soon as I go through NGINX those speeds are almost cut in half (~55mbps)
> *if* "proxy_request_buffering off".
> My NGINX (v1.18) is running on embedded hardware and only has about 100MB
> RAM available (2 GB swap).
> I do see very high CPU utilization when uploading and attribute the
> decrease in bandwidth to this - something I need to live with for now.
> However memory utilization is almost nothing with this configuration.

Note that SSL is likely the most important contributor to CPU
utilization in this setup. It might be a good idea to carefully
tune the ciphers used.

> If I keep proxy_request_buffering at the default of "on" then my speeds are
> further reduced to about 30mbps.
> In addition to high CPU usage I also see memory usage of about 75% of my
> available memory (50-75MB).

Memory usage in this setup is likely related to the OS disk cache.
Not really something to care about: the OS simply uses available free
memory for caching.

[...]

> So based on the above I have two questions:

> 1) While I understand that the entire file must be written to disk because
> the upload size is greater than the buffers, where is this file being
> written? It is clearly being written to the disk, but LSOF shows it as
> deleted even though it continues to grow (as reflected by LSOF and df) as
> the upload continues.

To re-iterate what was already written in the previous message:
nginx opens a temporary file, immediately deletes it, and then
uses this file for disk buffering. The file is expected to be
deleted from the very start, and it is expected to grow over time
as it is used for disk buffering. 
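For example, while a large upload is in flight, something like the
following shows the deleted-but-still-growing temporary file and the
matching disk usage (the worker PID and temp path here are taken from
the post above and will differ on other systems):

    # files still held open by the worker; the buffer file is
    # listed with "(deleted)" and a size that keeps growing
    lsof -p 25101 | grep deleted

    # free space shrinks while the file is open, and is returned
    # as soon as nginx closes the descriptor
    df -h /tmp/mnt/flash_drive/tmp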
> 2) It would seem that with such large uploads it makes the most sense to
> keep "proxy_request_buffering off", but assuming you needed the advantages
> of this (like you specify in your first reply), is there anything that can
> be done to tune this so that the speeds are faster and, especially, so
> there isn't such a long delay at the 99% upload? I played around with some
> buffer settings, but none of them seems to make any noticeable difference.

As far as I understand from your description, the "long delay at
the 99% upload" you see with request buffering switched on isn't
really at 99%, but instead at 100%, when the request is fully sent
from the client to nginx, and the delay corresponds to sending the
request body from nginx to the backend server. To speed up this
process, you have to speed up your backend server and the connection
between nginx and the backend, as well as the disk on the nginx
server. You may also try to tune how nginx sends the request body
to the backend, such as with sendfile (http://nginx.org/r/sendfile),
though it is unlikely to have a noticeable effect with a single
client and on embedded hardware.

Given the limited resources on the nginx box, as well as the small
number of clients and only one backend server,
"proxy_request_buffering off;" might actually be a better choice in
your setup.

-- 
Maxim Dounin
http://mdounin.ru/

From francis at daoine.org Wed Feb 2 13:29:35 2022
From: francis at daoine.org (Francis Daly)
Date: Wed, 2 Feb 2022 13:29:35 +0000
Subject: SSL passtrough
In-Reply-To: <0b550d1839dce279310fe9350b1114d9@unau.edu.ar>
References: <0b550d1839dce279310fe9350b1114d9@unau.edu.ar>
Message-ID: <20220202132935.GA14624@daoine.org>

On Fri, Jan 28, 2022 at 10:22:42AM -0300, Daniel Armando Rodriguez via nginx wrote:

Hi there,

> I have a RP in front of several services and now need to add SSL passtrough
> for some of them. So, with this goal set up this config
>
> stream {
>     map $ssl_preread_server_name $name {
>         sub1.DOMAIN sub1;
>         sub2.DOMAIN sub2;
>         sub3.DOMAIN sub3;
>         sub4.DOMAIN sub4;
>     }

Side point -- you might want a "default" there too, in case the
incoming name is not one of the expected set.

>     upstream sub1 {
>         server x.y.z.1:443;
>     }
>
>     upstream sub2 {
>         server x.y.z.1:443;
>     }
>
>     upstream sub3 {
>         server x.y.z.1:443;
>     }
>
>     upstream sub4 {
>         server x.y.z.1:443;
>     }
>
>     server {
>         listen 443;
>         proxy_pass $name;
>         ssl_preread on;
>     }
> }
>
> And yes, four subdomains are hosted in the same VM. This has to do with the
> peculiarities of the software used.

I guess that this is not the entire config? Because this is "send
everything to the same upstream", which should not need any special
handling -- just proxy_pass there always.

> In order to catch HTTP traffic, and redirect, add this to each subdomain
> server.
>
> server {
>     listen 80;
>     return 301 https://$host$request_uri;
> }

That part would be in the http{} section, not the stream{} section.
And all of the usual caveats about "the rest of the config might
matter too" apply.

But...

> Is this the right way to go or am I missing something?

...that config more-or-less matches the example config at
https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html, so
it looks right to me.

Do you see a problem when you try using it?

> Also tried to upgrade nginx using Debian repo but wasn't possible.
> Currently installed 1.14.2 under Debian Buster

If you can show the commands that you ran, and the response that you
got, someone might be able to show why things failed. 
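To illustrate the earlier side point about "default": one way to write
that map, with a made-up catch-all entry, would be:

    map $ssl_preread_server_name $name {
        sub1.DOMAIN sub1;
        sub2.DOMAIN sub2;
        sub3.DOMAIN sub3;
        sub4.DOMAIN sub4;
        # anything unexpected goes to a known upstream instead of
        # making proxy_pass fail on an empty value
        default     sub1;
    }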
Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Wed Feb 2 17:42:02 2022
From: nginx-forum at forum.nginx.org (bengalih)
Date: Wed, 02 Feb 2022 12:42:02 -0500
Subject: Tuning client request buffering in ngx_http_proxy_module
In-Reply-To: 
References: 
Message-ID: 

> Note that SSL is likely the most important contributor to CPU
> utilization in this setup. It might be a good idea to carefully
> tune the ciphers used.

I believe I have set this fairly appropriately. If you know of a resource
that would explain this in more detail I would appreciate it.

> To re-iterate what was already written in the previous message:
> nginx opens a temporary file, immediately deletes it, and then
> uses this file for disk buffering. The file is expected to be
> deleted from the very start, and it is expected to grow over time
> as it is used for disk buffering.

My apologies. I must have missed that in the initial response, as I do not
totally comprehend how/why this mechanism works. If a file is deleted, I am
not aware of how it can still be used as a buffer. This must be a Linux
mechanism I am not familiar with.

I guess I don't understand the difference then between the default of
"client_body_in_file_only off" and "client_body_in_file_only on", at least
in the case when the file is bigger than the buffer. When I have it set to
on I can at least see the whole file on disk, but when it is off you state
the file is deleted and yet the space the file uses still remains.

> As far as I understand from your description, the "long delay at
> the 99% upload" you see with request buffering switched on isn't
> really at 99%, but instead at 100%, when the request is fully sent
> from the client to nginx, and the delay corresponds to sending the
> request body from nginx to the backend server. To speed up this
> process, you have to speed up your backend server and the connection
> between nginx and the backend, as well as the disk on the nginx
> server.

You are probably right that the upload has completed 100%, but the client
cannot complete until a response is received from the backend server.

The NGINX server and backend server are both connected into the same
gigabit switch. This is all consumer grade hardware, but during these tests
it has very little utilization. As I do not have these issues between
clients and the backend server inside the network, I can only assume that
the issue is the NGINX box itself and its inability to send the data off
fast enough to the backend. This is probably exacerbated by the overtaxed
CPU.

> Given the limited resources on the nginx box, as well as the small
> number of clients and only one backend server,
> "proxy_request_buffering off;" might actually be a better choice in
> your setup.

I think you are right in this case, and luckily my needs for this
application allow that to be a reasonable choice.

Thank you for helping me understand the process a bit better.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293558,293578#msg-293578

From francis at daoine.org Fri Feb 4 00:54:16 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 4 Feb 2022 00:54:16 +0000
Subject: Tuning client request buffering in ngx_http_proxy_module
In-Reply-To: 
References: 
Message-ID: <20220204005416.GC14624@daoine.org>

On Wed, Feb 02, 2022 at 12:42:02PM -0500, bengalih wrote:

Hi there,

> > nginx opens a temporary file, immediately deletes it, and then
> > uses this file for disk buffering. 
The file is expected to be > > deleted from the very start, and it is expected to grow over time > > as it is used for disk buffering. > If a file is deleted, I am not aware of how it can still be used as a > buffer. This must be a linux mechanism I am not familiar with. Yes. A web search for something like "write to deleted file" will give you some information on how it works. Basically, Windows filesystems and Unixy filesystems tend to do things differently. > I guess I don't understand the difference between the default then of > "client_body_in_file_only off" and "client_body_in_file_only on", at least > in the case when the file is bigger than the buffer. http://nginx.org/r/client_body_in_file_only When a request comes in, nginx collects the request body somewhere. That can be "in memory", or "in a file". If you want some (external?) part of your config to do something with the request body, it can be useful to just share the file name, rather than to send the whole content. That only works if there *is* a file name; which you arrange by setting this directive. The value of the directive also determines whether or not you are responsible to delete any created file, after the processing is done. > When I have it set to > on I can at least see the whole file on disk, but when it is off you state > the file is deleted and yet the space the file uses still remains. The space the file uses is taken up on the disk, until the request handling is complete and nginx closes the file handle and the filesystem makes the space available for something else to use. A file is not "really" deleted (as in: the space on-disk is unavailable for something else to use) until the last thing that has the file open, closes it. Cheers, f -- Francis Daly francis at daoine.org From noloader at gmail.com Fri Feb 4 04:00:28 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 3 Feb 2022 23:00:28 -0500 Subject: Tuning client request buffering in ngx_http_proxy_module In-Reply-To: References: Message-ID: On Wed, Feb 2, 2022 at 12:45 PM bengalih wrote: > > > Note that SSL is likely the most important contributor to CPU > > utilization in this setup. It might be a good idea to carefully > > tune ciphers used. > > I believe I have set this fairly appropriately. If you know of a resource > that would explain this in more detail I would appreciate it. Key exchange is the expensive part of a TLS connection. Once the key exchange is complete, the bulk encryption using AES or ChaCha is relatively fast. Key exchange will be measured in hundreds of connections per second. Once the connection is established thousands of clients can be serviced. But the key exchange is the hard part. If you use an integer field for key exchange, then DH-2048 and DH-3072 will bog the machine down. You can make key exchange easier by using elliptic curves. Also see articles like https://en.wikipedia.org/wiki/TLS_acceleration Jeff Jeff From nginx-forum at forum.nginx.org Fri Feb 4 23:33:08 2022 From: nginx-forum at forum.nginx.org (marioja2) Date: Fri, 04 Feb 2022 18:33:08 -0500 Subject: Problem running nginx in a container Message-ID: I am running nginx in a container created with a docker-compose yaml file. Here is a sample docker-compose.yml and environment file: https://github.com/marioja/Mailu/tree/test-mailu/test-mailu-docker-compose. 
The marioja/rainloop:1.9.TEST image is created using this Dockerfile in
GitHub: https://github.com/marioja/Mailu/blob/test-mailu/webmails/rainloop/Dockerfile

The really interesting problem (I have been working on it for 3 days, just
so you know I am not asking for help without having tried a lot of things
first) is that the configuration entries in the
/etc/nginx/http.d/rainloop.conf configuration file (created, when the
rainloop image starts, from webmails/rainloop/config/nginx-rainloop.conf in
the GitHub branch previously mentioned) are not recognized by nginx when
the webmail container (using the rainloop image) is started by
docker-compose up -d. However, if I restart the webmail container using
docker-compose restart webmail, then the instructions in that rainloop.conf
are recognized by nginx. I tried to use nginx-debug with error_log set to
debug, but the output I get is not understandable and does not refer to any
config file path or parsing.

The reason I am asking this in the nginx mailing list is because I have
exhausted all of the tests that can gather information from the container
side, and I am wondering if there is anything that you guys can think of
that could explain this. Here is the directory structure of the files in
the container where nginx is running before starting as a result of the
docker-compose up -d:

@@@ /var/lib/nginx
/var/lib/nginx:
total 16
drwxr-x--- 4 nginx nginx 4096 Feb 1 15:55 .
drwxr-xr-x 1 root root 4096 Feb 1 15:55 ..
drwxr-xr-x 2 root root 4096 Feb 1 15:55 html
lrwxrwxrwx 1 root root 14 Feb 1 15:55 logs -> /var/log/nginx
lrwxrwxrwx 1 root root 22 Feb 1 15:55 modules -> /usr/lib/nginx/modules
lrwxrwxrwx 1 root root 10 Feb 1 15:55 run -> /run/nginx
drwx------ 2 nginx nginx 4096 Feb 1 15:55 tmp

/var/lib/nginx/html:
total 16
drwxr-xr-x 2 root root 4096 Feb 1 15:55 .
drwxr-x--- 4 nginx nginx 4096 Feb 1 15:55 ..
-rw-r--r-- 1 root root 494 Nov 17 13:20 50x.html
-rw-r--r-- 1 root root 612 Nov 17 13:20 index.html

/var/lib/nginx/tmp:
total 8
drwx------ 2 nginx nginx 4096 Feb 1 15:55 .
drwxr-x--- 4 nginx nginx 4096 Feb 1 15:55 ..

@@@ /etc/nginx
/etc/nginx:
total 52
drwxr-xr-x 1 root root 4096 Feb 1 15:55 .
drwxr-xr-x 1 root root 4096 Feb 3 02:53 ..
-rw-r--r-- 1 root root 1077 Nov 17 13:20 fastcgi.conf
-rw-r--r-- 1 root root 1007 Nov 17 13:20 fastcgi_params
drwxr-xr-x 1 root root 4096 Feb 3 02:53 http.d
-rw-r--r-- 1 root root 5231 Nov 17 13:20 mime.types
drwxr-xr-x 2 root root 4096 Feb 1 15:55 modules
-rw-r--r-- 1 root root 3358 Feb 1 15:55 nginx.conf
-rw-r--r-- 1 root root 636 Nov 17 13:20 scgi_params
-rw-r--r-- 1 root root 664 Nov 17 13:20 uwsgi_params

/etc/nginx/http.d:
total 16
drwxr-xr-x 1 root root 4096 Feb 3 02:53 .
drwxr-xr-x 1 root root 4096 Feb 1 15:55 ..
-rw-r--r-- 1 root root 914 Feb 3 02:53 rainloop.conf

/etc/nginx/modules:
total 12
drwxr-xr-x 2 root root 4096 Feb 1 15:55 .
drwxr-xr-x 1 root root 4096 Feb 1 15:55 ..

@@@ /var/log/nginx
/var/log/nginx:
total 12
drwxr-xr-x 2 nginx nginx 4096 Feb 1 15:55 .
drwxr-xr-x 1 root root 4096 Feb 1 15:55 ..

@@@ /run/nginx
/run/nginx:
total 8
drwxr-xr-x 2 nginx nginx 4096 Feb 1 15:55 .
drwxr-xr-x 1 root root 4096 Feb 3 02:53 ..

@@@ /usr/lib/nginx
/usr/lib/nginx:
total 12
drwxr-xr-x 3 root root 4096 Feb 1 15:55 .
drwxr-xr-x 1 root root 4096 Feb 1 15:55 ..
drwxr-xr-x 2 root root 4096 Feb 1 15:55 modules

/usr/lib/nginx/modules:
total 8
drwxr-xr-x 2 root root 4096 Feb 1 15:55 .
drwxr-xr-x 3 root root 4096 Feb 1 15:55 .. 
This is the content of the rainloop.conf in the webmail container at
runtime:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/rainloop;

    # /dev/stdout (Default), , off
    access_log off;

    # /dev/stderr (Default), , debug, info, notice, warn, error, crit, alert, emerg
    error_log /dev/stderr warn;

    index index.php;

    # set maximum body size to configured limit
    client_max_body_size 58388608;

    location / {
        try_files $uri /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        fastcgi_keep_conn on;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php7-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~ /\.ht {
        deny all;
    }

    location ^~ /data {
        deny all;
    }
}

Is there a way to tell nginx or nginx-debug to tell us what is going on?
The error that occurs after docker-compose up -d is that a 5MB attachment
posted to the webmail container fails. That same 5MB attachment works after
a docker-compose restart. The statement that does not appear to take effect
until the docker-compose restart is the client_max_body_size 58388608.
Please note that there is a client_max_body_size 1m directive in the
/etc/nginx/nginx.conf during both startups (docker-compose up and
docker-compose restart).

I have searched low and high for any information about this http.d
directory, but I could not see anything. I am using nginx version 1.20.2,
installed with the alpinelinux apk command.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293592,293592#msg-293592

From francis at daoine.org Sun Feb 6 22:51:47 2022
From: francis at daoine.org (Francis Daly)
Date: Sun, 6 Feb 2022 22:51:47 +0000
Subject: Problem running nginx in a container
In-Reply-To: 
References: 
Message-ID: <20220206225147.GD14624@daoine.org>

On Fri, Feb 04, 2022 at 06:33:08PM -0500, marioja2 wrote:

Hi there,

There's lots of information here, and I'm not sure what specific parts
relate to an nginx issue.

I don't have a full answer, but there are some suggestions below that
might help point at where the problem and fix might be.

> I am running nginx in a container created with a docker-compose yaml file.
> Here is a sample docker-compose.yml and environment file:
>
> https://github.com/marioja/Mailu/tree/test-mailu/test-mailu-docker-compose.

I don't see "this is content of the nginx.conf file" on that page. I
also don't see "here is where nginx is started".

> the configuration entries in the
> /etc/nginx/http.d/rainloop.conf configuration file created when the rainloop
> image starts from the webmails/rainloop/config/nginx-rainloop.conf in the
> github branch previously mentioned are not recognized by nginx when the
> webmail container (using the rainloop image) is started by docker-compose up
> -d. However, if I restart the webmail container using docker-compose
> restart webmail then the instructions in that rainloop.conf are
> recognized by nginx.

If you can find the place that starts nginx, perhaps adding a call to
nginx with all of the same arguments plus "-T", and catching the stdout
and stderr of that somewhere, will help?

That should show what config files, with what content, this instance of
nginx is reading.

If that output differs between the "up -d" and the "restart webmail"
calls, that might point at the next place to investigate.

> I tried to use nginx-debug with error_log set to debug
> but the output I get is not understandable and does not refer to any config
> file path or parsing. 
The nginx debug log can be a bit overwhelming -- there's lots in it,
and it is not always immediately clear why what is in it, is in it.

But people who know what to look for, and what is "normal" in it, may
be able to interpret it for you.

Generally, you want something of the form:

I make *this* request
I get *this* response
I want *that* response instead

ideally with enough information to allow someone else to repeat what
you are doing.

And if the log file is especially big, it may be better to host it
somewhere and provide a link to it.

You can edit the file to hide "private" information -- but if you do
change things, please change them consistently. Best would be to build
a test system with no private information, and provide the full logs
for that system that shows the same problem.

> The reason I am asking this in the nginx mailing list is because I have
> exhausted all of the tests that can gather information from the container
> side and I am wondering if there is anything that you guys can think of
> that could explain this. Here is the directory structure of the files in
> the container where nginx is running before starting as a result of the
> docker-compose up -d:

When nginx starts, it reads one configuration file.

"nginx -T" should show the contents of that, plus anything "include"'d
from it, that nginx sees.

It looks like your starting file might be from

https://git.alpinelinux.org/aports/tree/main/nginx/nginx.conf

with the error_log line (line 12) set to debug; and then your include'd
file re-sets it to "warn" and changes client_max_body_size, within that
server{} block.

From what you describe, it sounds like the include'd file may not be
present when nginx first starts? Are there any requests that you make
that do succeed, before the one that fails?

Where are you reading the error log? From /var/log/nginx/error.log,
or from stderr from the docker command? It might make a difference,
if two error_log directives are present.

> This is the content of the rainloop.conf in the webmail container at
> runtime:

Is that "according to something that runs before nginx starts", or
something else?

If files are being changed, the timing might matter.

> # /dev/stdout (Default), , off
> access_log off;
>
> # /dev/stderr (Default), , debug, info, notice, warn, error, crit,
> alert, emerg
> error_log /dev/stderr warn;

When problems are happening, turning on logging can be useful to see
what the system thinks is happening.

> # set maximum body size to configured limit
> client_max_body_size 58388608;

That's about 55 MB.

> Is there a way to tell nginx or nginx-debug to tell us what is going on?

Don't turn off the logging.

> The error that occurs after docker-compose up -d is that a 5MB attachment
> posted to the webmail container fails.

I'm guessing that's a HTTP 413 coming direct from nginx; not any message
from the fastcgi service; and not a HTTP 502 coming direct from nginx?

The specific details might matter.

(413 suggests that the "new" client_max_body_size was not used. 502
suggests that php-fpm was not available. Another error might suggest
something else. nginx's error_log will usually indicate why it raised
an error response, at a suitable log level.)

> That same 5MB attachment works after
> a docker-compose restart. The statement that does not appear to take effect
> until the docker-compose restart is the client_max_body_size 58388608. 
> Please note that there is a client_max_body_size 1m directive in the
> /etc/nginx/nginx.conf during both startups (docker-compose up and
> docker-compose restart).
>
> I have searched low and high for any information about this http.d directory
> but I could not see anything. I am using nginx version 1.20.2, installed
> with the alpinelinux apk command.

Your nginx.conf line 103 might have "include /etc/nginx/http.d/*.conf;"
which reads the matching file contents. The directory is only special
to this nginx because it is named in the conf file.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From omer.t.h.7 at gmail.com Mon Feb 7 01:30:54 2022
From: omer.t.h.7 at gmail.com (OTH)
Date: Sun, 6 Feb 2022 17:30:54 -0800
Subject: Problem running nginx in a container
In-Reply-To: <20220206225147.GD14624@daoine.org>
References: <20220206225147.GD14624@daoine.org>
Message-ID: 

FYI - I too am having issues with client_max_body_size in a docker
container, and in fact I just signed up for this mailing list just because
of that. Restarting docker seems to have no effect for me. I will send a
detailed email about the issue if I'm not able to figure it out myself
soon.

Regards

On Sun, Feb 6, 2022 at 2:53 PM Francis Daly wrote:

> On Fri, Feb 04, 2022 at 06:33:08PM -0500, marioja2 wrote:
>
> Hi there,
>
> There's lots of information here, and I'm not sure what specific parts
> relate to an nginx issue.
>
> I don't have a full answer, but there are some suggestions below that
> might help point at where the problem and fix might be.
>
> > I am running nginx in a container created with a docker-compose yaml
> file.
> > Here is a sample docker-compose.yml and environment file:
> >
> >
> https://github.com/marioja/Mailu/tree/test-mailu/test-mailu-docker-compose
> .
>
> I don't see "this is content of the nginx.conf file" on that page. I
> also don't see "here is where nginx is started".
>
> > the configuration entries in the
> > /etc/nginx/http.d/rainloop.conf configuration file created when the
> rainloop
> > image starts from the webmails/rainloop/config/nginx-rainloop.conf in the
> > github branch previously mentioned are not recognized by nginx when the
> > webmail container (using the rainloop image) is started by
> docker-compose up
> > -d. However, if I restart the webmail container using docker-compose
> > restart webmail then the instructions in that rainloop.conf are
> > recognized by nginx.
>
> If you can find the place that starts nginx, perhaps adding a call to
> nginx with all of the same arguments plus "-T", and catching the stdout
> and stderr of that somewhere, will help?
>
> That should show what config files, with what content, this instance of
> nginx is reading.
>
> If that output differs between the "up -d" and the "restart webmail"
> calls, that might point at the next place to investigate.
>
> > I tried to use nginx-debug with error_log set to debug
> > but the output I get is not understandable and does not refer to any
> config
> > file path or parsing.
>
> The nginx debug log can be a bit overwhelming -- there's lots in it,
> and it is not always immediately clear why what is in it, is in it.
>
> But people who know what to look for, and what is "normal" in it, may
> be able to interpret it for you.
>
> Generally, you want something of the form:
>
> I make *this* request
> I get *this* response
> I want *that* response instead
>
> ideally with enough information to allow someone else to repeat what
> you are doing.
>
> 
> > And if the log file is especially big, it may be better to host it > somewhere and provide a link to it. > > You can edit the file to hide "private" information -- but if you do > change things, please change them consistently. Best would be to build > a test system with no private information, and provide the full logs > for that system that shows the same problem. > > > The reason I am asking this in the nginx mailing list is because I have > > exhausted all of the tests that can gather information from the container > > side and I am wondering if there is anything that you guys can thing that > > could explain this. Here is the directory structure of the files in the > > container where nginx is running before starting as a result of the > > docker-compose up -d: > > When nginx starts, it reads one configuration file. > > "nginx -T" should show the contents of that, plus anything "include"'d > from it, that nginx sees. > > It looks like your starting file might be from > > https://git.alpinelinux.org/aports/tree/main/nginx/nginx.conf > > with the error_log line (line 12) set to debug; and then your include'd > file re-sets it to "warn" and changes client_max_body_size, within that > server{} block. > > From what you describe, it sounds like the include'd file may not be > present when nginx first starts? Are there any requests that you make > that do succeed, before the one that fails? > > Where are you reading the error log? From /var/log/nginx/error.log, > or from stderr from the docker command? It might make a difference, > if two error_log directives are present. > > > This is the content of the rainloop.conf in the webmail container at > > runtime: > > Is that "according to something that runs before nginx starts", or > something else? > > If files are being changed, the timing might matter. > > > # /dev/stdout (Default), , off > > access_log off; > > > > # /dev/stderr (Default), , debug, info, notice, warn, error, > crit, > > alert, emerg > > error_log /dev/stderr warn; > > When problems are happening, turning on logging can be useful to see > what the system thinks is happening. > > > # set maximum body size to configured limit > > client_max_body_size 58388608; > > That's about 55 MB. > > > Is there a way to tell nginx or nginx-debug to tell us what is going on? > > Don't turn off the logging. > > > The error that occurs after docker-compse up -d is that a 5MB attachment > > posted to the webmail container fails. > > I'm guessing that's a HTTP 413 coming direct from nginx; not any message > from the fastcgi service; and not a HTTP 502 coming direct from nginx? > > The specific details might matter. > > (413 suggests that the "new" client_max_body_size was not used. 502 > suggests that php-fpm was not available. Another error might suggest > something else. nginx's error_log will usually indicate why it raised > an error response, at a suitable log level.) > > > That same 5MB attachment works after > > a docker-compose restart. The statement that does not appear to take > effect > > until the docker-compose restart is the client_max_body_size 58388608. > > Please note that there is a client_max_body_size 1m directive in the > > /etc/nginx/nginx.conf during both restart (docker-compose up and > > docker-compose restart). > > > > I have searched low and high for any information about this http.d > directory > > but I could not see anything. I am using nginx version 1.20.2 installed > > from alpinelinux apk command. 
> > Your nginx.conf line 103 might have "include /etc/nginx/http.d/*.conf;" > which reads the matching file contents. The directory is only special > to this nginx because it is named in the conf file. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Feb 7 03:08:49 2022 From: nginx-forum at forum.nginx.org (marioja2) Date: Sun, 06 Feb 2022 22:08:49 -0500 Subject: Problem running nginx in a container In-Reply-To: <20220206225147.GD14624@daoine.org> References: <20220206225147.GD14624@daoine.org> Message-ID: <4fb48fd64b00781b49f09ca26737dec9.NginxMailingListEnglish@forum.nginx.org> Francis, see my response inline: Hi there, There's lots of information here, and I'm not sure what specific parts relate to an nginx issue. I don't have a full answer, but there are some suggestions below that might help point at where the problem and fix might be. > I am running nginx in a container created with a docker-compose yaml file. > Here is a sample docker-compose.yml and environment file: > > https://github.com/marioja/Mailu/tree/test-mailu/test-mailu-docker-compose. I don't see "this is content of the nginx.conf file" on that page. I also don't see "here is where nginx is started". The nginx.conf can be seen here: https://1drv.ms/u/s!ApcymW6zCVpnuwSp3gbKFP9U8mak?e=DKziGh > the configuration entries in the > /etc/nginx/http.d/rainloop.conf configuration file created when the > rainloop image starts from the > webmails/rainloop/config/nginx-rainloop.conf in the github branch > previously mentioned is not recognized by nginx when the webmail > container (using the rainloop image) is started by docker-compose up > -d. However, if I restart the webmail container using docker-compose > restart webmail then the instructions in that rainloop.conf are then recognized by nginx. If you can find the place that starts nginx, perhaps adding a call to nginx with all of the same arguments plus "-T", and catching the stdout and stderr of that somewhere, will help? I checked and the output when starting from docker-compose up -d or docker-compose restart is identical. I include it here: https://1drv.ms/u/s!ApcymW6zCVpnuwKQOFFbSCLiZhn9?e=bvrpJU That should show what config files, with what content, this instance of nginx is reading. If that output differs between the "up -d" and the "restart webmail" calls, that might point at the next place to investigate. > I tried to use nginx-debug with error_log set to debug but the output > I get is not understandable and does not refer to any config file path > or parsing. The nginx debug log can be a bit overwhelming -- there's lots in it, and it is not always immediately clear why what is in it, is in it. But people who know what to look for, and what is "normal" in it, may be able to interpret it for you. Generally, you want something of the form: I make *this* request I get *this* response I want *that* response instead ideally with enough information to allow someone else to repeat what you are doing. And if the log file is especially big, it may be better to host it somewhere and provide a link to it. You can edit the file to hide "private" information -- but if you do change things, please change them consistently. 
Best would be to build a test system with no private information, and
provide the full logs for that system that shows the same problem.

I ran strace on nginx and I have two trace files. This is the trace file
when starting with docker-compose up -d:
https://1drv.ms/u/s!ApcymW6zCVpnun28ya8wIBQi9qEt?e=AqzHfI
This is the trace file when I start with docker-compose restart:
https://1drv.ms/u/s!ApcymW6zCVpnuwFnO5IT-2bERyHI?e=FzZxuD
I cannot see any difference, except that the response on line 80 contains
the header "X-Powered-By: PHP/7.4.26\r\n" in the docker-compose up -d case
but not in the docker-compose restart case. Does this shed some light?

Also the message I get from nginx (this is either stdout or stderr) when
the error occurs is:

webmail_1 | 2022/02/07 02:08:18 [warn] 25#25: *145 a client request body
is buffered to a temporary file /var/lib/nginx/tmp/client_body/0000000005,
client: 192.168.200.6, server: , request: "POST
/?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7yvyZwvKejLSOPeJBrY8J_XpMKC_ukp5jM8z9FT0uhT3hH51K4THDw7GWMitAjdQNx1SGw_qF04hNvcQ4_eWaUXxSPiR0vizLEVBScYBZIEXYDHWVTVKbVTYbETLW3kpq214FuLRHhwWLyxjeACQwoxxucaj_uLcm7oyzQjwwUOV9PQKA7l9BPkq0Oq63sFEBS_vVhIplQ6EmFq59lxuSrZiDlhSBOdLIW8h2xD1kz9Twb_-qzwelpHuF-84Lbdg8tUpE7BYbUyU76La5BcFSREyf7qvm8e_mszUfh_Zac-63pk8jrdwmgQEApa0OPJL4NAXTimMG-JmbS9CCPvoGzGmoj12afq0CcwrkmxJI2Z00N2HbfrIns69INt-G-x34tdDwo2k/
HTTP/1.1", host: "mailu.company.local:444", referrer:
"https://mailu.company.local:444/webmail/"

> The reason I am asking this in the nginx mailing list is because I
> have exhausted all of the tests that can gather information from the
> container side and I am wondering if there is anything that you guys
> can think of that could explain this. Here is the directory structure of
> the files in the container where nginx is running before starting as a
> result of the docker-compose up -d:

When nginx starts, it reads one configuration file.

"nginx -T" should show the contents of that, plus anything "include"'d
from it, that nginx sees.

It looks like your starting file might be from

https://git.alpinelinux.org/aports/tree/main/nginx/nginx.conf

with the error_log line (line 12) set to debug; and then your include'd
file re-sets it to "warn" and changes client_max_body_size, within that
server{} block.

From what you describe, it sounds like the include'd file may not be
present when nginx first starts? Are there any requests that you make
that do succeed, before the one that fails?

The included file is present as shown in the nginx -T output.

Where are you reading the error log? From /var/log/nginx/error.log,
or from stderr from the docker command? It might make a difference,
if two error_log directives are present.

I was reading from /var/log/nginx/error.log, which only contains the
following with "warn":

2022/02/07 03:01:28 [notice] 23#23: using the "epoll" event method
2022/02/07 03:01:28 [notice] 23#23: nginx/1.20.2
2022/02/07 03:01:28 [notice] 23#23: OS: Linux 5.4.0-96-generic
2022/02/07 03:01:28 [notice] 23#23: getrlimit(RLIMIT_NOFILE): 1024:524288
2022/02/07 03:01:28 [notice] 23#23: start worker processes
2022/02/07 03:01:28 [notice] 23#23: start worker process 24
2022/02/07 03:01:28 [notice] 23#23: start worker process 25
2022/02/07 03:01:28 [notice] 23#23: start worker process 26
2022/02/07 03:01:28 [notice] 23#23: start worker process 27

> This is the content of the rainloop.conf in the webmail container at
> runtime:

Is that "according to something that runs before nginx starts", or
something else? 
If files are being changed, the timing might matter. The rainloop.conf file is created when the container image is created. This means that it is identical when docker-compose up -d or docker-compose restart is used. I have verified that. > # /dev/stdout (Default), , off > access_log off; > > # /dev/stderr (Default), , debug, info, notice, warn, error, > crit, alert, emerg > error_log /dev/stderr warn; When problems are happening, turning on logging can be useful to see what the system thinks is happening. > # set maximum body size to configured limit > client_max_body_size 58388608; That's about 55 MB. > Is there a way to tell nginx or nginx-debug to tell us what is going on? Don't turn off the logging. > The error that occurs after docker-compse up -d is that a 5MB > attachment posted to the webmail container fails. I'm guessing that's a HTTP 413 coming direct from nginx; not any message from the fastcgi service; and not a HTTP 502 coming direct from nginx? The specific details might matter. (413 suggests that the "new" client_max_body_size was not used. 502 suggests that php-fpm was not available. Another error might suggest something else. nginx's error_log will usually indicate why it raised an error response, at a suitable log level.) > That same 5MB attachment works after > a docker-compose restart. The statement that does not appear to take > effect until the docker-compose restart is the client_max_body_size 58388608. > Please note that there is a client_max_body_size 1m directive in the > /etc/nginx/nginx.conf during both restart (docker-compose up and > docker-compose restart). > > I have searched low and high for any information about this http.d > directory but I could not see anything. I am using nginx version > 1.20.2 installed from alpinelinux apk command. Your nginx.conf line 103 might have "include /etc/nginx/http.d/*.conf;" You are right, I don't know how I missed that. DOH which reads the matching file contents. The directory is only special to this nginx because it is named in the conf file. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293592,293599#msg-293599 From francis at daoine.org Mon Feb 7 09:20:05 2022 From: francis at daoine.org (Francis Daly) Date: Mon, 7 Feb 2022 09:20:05 +0000 Subject: Problem running nginx in a container In-Reply-To: <4fb48fd64b00781b49f09ca26737dec9.NginxMailingListEnglish@forum.nginx.org> References: <20220206225147.GD14624@daoine.org> <4fb48fd64b00781b49f09ca26737dec9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220207092005.GE14624@daoine.org> On Sun, Feb 06, 2022 at 10:08:49PM -0500, marioja2 wrote: Hi there, > I checked and the output when starting from docker-compose up -d or > docker-compose restart is identical. I include it here: > https://1drv.ms/u/s!ApcymW6zCVpnuwKQOFFbSCLiZhn9?e=bvrpJU Thanks; that is useful to see what nginx thinks the config is. > Generally, you want something of the form: > > I make *this* request > I get *this* response > I want *that* response instead > > ideally with enough information to allow someone else to repeat what you are > doing. > I ran strace on nginx and I have to trace files. 
This is the trace file > when start with docker-compose up -d: > https://1drv.ms/u/s!ApcymW6zCVpnun28ya8wIBQi9qEt?e=AqzHfI > This is the trace file when I start with docker-compose restart: > https://1drv.ms/u/s!ApcymW6zCVpnuwFnO5IT-2bERyHI?e=FzZxuD In this case, I think that the strace output does show something useful, because... > webmail_1 | 2022/02/07 02:08:18 [warn] 25#25: *145 a client request body > is buffered to a temporary file /var/lib/nginx/tmp/client_body/0000000005, > client: 192.168.200.6, server: , request: "POST > /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7yvyZwvKejLSOPeJBrY8J_XpMKC_ukp5jM8z9FT0uhT3hH51K4THDw7GWMitAjdQNx1SGw_qF04hNvcQ4_eWaUXxSPiR0vizLEVBScYBZIEXYDHWVTVKbVTYbETLW3kpq214FuLRHhwWLyxjeACQwoxxucaj_uLcm7oyzQjwwUOV9PQKA7l9BPkq0Oq63sFEBS_vVhIplQ6EmFq59lxuSrZiDlhSBOdLIW8h2xD1kz9Twb_-qzwelpHuF-84Lbdg8tUpE7BYbUyU76La5BcFSREyf7qvm8e_mszUfh_Zac-63pk8jrdwmgQEApa0OPJL4NAXTimMG-JmbS9CCPvoGzGmoj12afq0CcwrkmxJI2Z00N2HbfrIns69INt-G-x34tdDwo2k/ > HTTP/1.1", host: "mailu.company.local:444", referrer: > "https://mailu.company.local:444/webmail/" The request that you care about is a POST to a url which includes the word Upload. $ grep -n Upload nginx_strace.26 636: 0.000041 recvfrom(15, "POST /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7yvyZwvKejLSOPeJBrY8J_XpMK"..., 1024, 0, NULL, NULL) = 1024 2502: 0.000096 writev(17, [{iov_base="\1\1\0\1\0\10\0\0\0\1\1\0\0\0\0\0\1\4\0\1\r\35\3\0\f\200\0\1\313QUERY_STRING/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdR"..., iov_len=3400}], 1) = 3400 2964: 0.000062 writev(15, [{iov_base="HTTP/1.1 200 OK\r\nDate: Mon, 07 Feb 2022 01:36:10 GMT\r\nContent-Type: application/json; charset=utf-8"..., iov_len=436}, {iov_base="46\r\n", iov_len=4}, {iov_base="{\"Action\":\"Upload\",\"Result\":{\"ErrorCode\":1,\"Error\":\"File is too big\"}}", iov_len=70}, {iov_base="\r\n", iov_len=2}, {iov_base="0\r\n\r\n", iov_len=5}], 5) = 517 $ grep -n Upload nginx_strace.25 488: 0.000134 recvfrom(15, "POST /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7yvyZwvKejLSOPeJBrY8J_XpMK"..., 1024, 0, NULL, NULL) = 1024 2393: 0.000183 writev(17, [{iov_base="\1\1\0\1\0\10\0\0\0\1\1\0\0\0\0\0\1\4\0\1\r\35\3\0\f\200\0\1\313QUERY_STRING/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdR"..., iov_len=3400}], 1) = 3400 2847: 0.000064 writev(15, [{iov_base="HTTP/1.1 200 OK\r\nDate: Mon, 07 Feb 2022 01:37:26 GMT\r\nContent-Type: application/json; charset=utf-8"..., iov_len=410}, {iov_base="bd\r\n", iov_len=4}, {iov_base="{\"Action\":\"Upload\",\"Result\":{\"Attachment\":{\"Name\":\"5kplayer-setup (2).exe\",\"TempName\":\"upload-post-"..., iov_len=189}, {iov_base="\r\n", iov_len=2}, {iov_base="0\r\n\r\n", iov_len=5}], 5) = 610 That looks like, in both cases, nginx sent a HTTP success (200 OK) message, with differing json body content. One has Result: Attachment; the other has Result: ErrorCode. In both cases, it appears that the response came after nginx wrote the request to the fastcgi server. So, unless you have something unusual going on, that "File is too big" came from the php-fpm service, not from nginx. That does not help resolve the inconsistency between an "up" and a "restart" on the docker-compose side; but it does suggest that the difference is on the php side, not the nginx side. So possibly, try the same thing, but see if you can find the php / fastcgi server state in both cases, and see if there is any obvious difference there. 
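One way to compare the php side in the two cases might be commands like
these, run inside the webmail container once after "up -d" and once after
"restart" (the php7 paths are a guess for this Alpine image and may
differ):

    # the limits the php runtime reports, if the php CLI is installed
    php -i | grep -E 'upload_max_filesize|post_max_size'

    # and what the config files on disk say
    grep -rE 'upload_max_filesize|post_max_size' /etc/php7/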
If you do want to get fuller nginx logging, then in the rainloop.conf file:

> > # /dev/stdout (Default), , off
> > access_log off;
> >
> > # /dev/stderr (Default), , debug, info, notice, warn, error,
> > crit, alert, emerg
> > error_log /dev/stderr warn;

Remove those lines, so that the "http"-level config will apply.

> > The error that occurs after docker-compose up -d is that a 5MB
> > attachment posted to the webmail container fails.
>
> I'm guessing that's an HTTP 413 coming direct from nginx; not any message
> from the fastcgi service; and not an HTTP 502 coming direct from nginx?

From the strace output, I think I guessed wrong. The error seems not to be coming from nginx.

Cheers, f
--
Francis Daly francis at daoine.org

From nginx-forum at forum.nginx.org Mon Feb 7 12:06:28 2022
From: nginx-forum at forum.nginx.org (openworx)
Date: Mon, 07 Feb 2022 07:06:28 -0500
Subject: Automatically detect hostname with map as ssl preread
Message-ID:

I have an IPv4-to-IPv6 reverse proxy server so that IPv4 clients can access IPv6-only websites. For each website, the domain and the corresponding IPv6 address must be added manually. Is there a way to automatically detect the AAAA record of the website and use it in the upstream?

stream {
    map $ssl_preread_server_name $selected_upstream {
        www.website1.com upstream_1;
        www.website2.com upstream_2;
        www.website3.com upstream_3;
    }

    upstream upstream_1 {
        server [2001:x:x:x::f:244]:443;
    }
    upstream upstream_2 {
        server [2001:x:x:x::f:245]:443;
    }
    upstream upstream_3 {
        server [2001:x:x:x::f:246]:443;
    }

    server {
        listen 443;
        proxy_pass $selected_upstream;
        ssl_preread on;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293604,293604#msg-293604

From nginx-forum at forum.nginx.org Tue Feb 8 05:31:55 2022
From: nginx-forum at forum.nginx.org (marioja2)
Date: Tue, 08 Feb 2022 00:31:55 -0500
Subject: Problem running nginx in a container
In-Reply-To: <20220207092005.GE14624@daoine.org>
References: <20220207092005.GE14624@daoine.org>
Message-ID: <25bf7a82043c596fda08122dfe62e8e6.NginxMailingListEnglish@forum.nginx.org>

Hi, I ran another case with error_log debug. Here are 4 files:
error.log.2: https://1drv.ms/u/s!ApcymW6zCVpnuxyYecXcaIlqrVWu?e=WmgaVO
error.log.1: https://1drv.ms/u/s!ApcymW6zCVpnuxt2oIE3eLklkND0?e=ZlJNGH
access.log.2: https://1drv.ms/u/s!ApcymW6zCVpnuxgl1jCmHH8O-Pmi?e=GJGQgI
access.log.1: https://1drv.ms/u/s!ApcymW6zCVpnuxpdqNOQOt3pOMbX?e=Hmtz9Z

The files ending in .2 are from when docker-compose up -d was used. The files ending in .1 are from when docker-compose restart was used. I set log_level to debug on the php-fpm side and I cannot see anything that can explain what is going on. See below: I posted a question inline in your reply.

Francis Daly Wrote:
-------------------------------------------------------
> On Sun, Feb 06, 2022 at 10:08:49PM -0500, marioja2 wrote:
>
> Hi there,
>
> > I checked and the output when starting from docker-compose up -d or
> > docker-compose restart is identical. I include it here:
> > https://1drv.ms/u/s!ApcymW6zCVpnuwKQOFFbSCLiZhn9?e=bvrpJU
>
> Thanks; that is useful to see what nginx thinks the config is.
>
> > Generally, you want something of the form:
> >
> > I make *this* request
> > I get *this* response
> > I want *that* response instead
> >
> > ideally with enough information to allow someone else to repeat what
> you are
> > doing.
>
> > I ran strace on nginx and I have two trace files.
This is the trace > file > > when start with docker-compose up -d: > > https://1drv.ms/u/s!ApcymW6zCVpnun28ya8wIBQi9qEt?e=AqzHfI > > This is the trace file when I start with docker-compose restart: > > https://1drv.ms/u/s!ApcymW6zCVpnuwFnO5IT-2bERyHI?e=FzZxuD > > In this case, I think that the strace output does show something > useful, because... > > > webmail_1 | 2022/02/07 02:08:18 [warn] 25#25: *145 a client > request body > > is buffered to a temporary file > /var/lib/nginx/tmp/client_body/0000000005, > > client: 192.168.200.6, server: , request: "POST > > > /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7y > vyZwvKejLSOPeJBrY8J_XpMKC_ukp5jM8z9FT0uhT3hH51K4THDw7GWMitAjdQNx1SGw_q > F04hNvcQ4_eWaUXxSPiR0vizLEVBScYBZIEXYDHWVTVKbVTYbETLW3kpq214FuLRHhwWLy > xjeACQwoxxucaj_uLcm7oyzQjwwUOV9PQKA7l9BPkq0Oq63sFEBS_vVhIplQ6EmFq59lxu > SrZiDlhSBOdLIW8h2xD1kz9Twb_-qzwelpHuF-84Lbdg8tUpE7BYbUyU76La5BcFSREyf7 > qvm8e_mszUfh_Zac-63pk8jrdwmgQEApa0OPJL4NAXTimMG-JmbS9CCPvoGzGmoj12afq0 > CcwrkmxJI2Z00N2HbfrIns69INt-G-x34tdDwo2k/ > > HTTP/1.1", host: "mailu.company.local:444", referrer: > > "https://mailu.company.local:444/webmail/" > > The request that you care about is a POST to a url which includes the > word Upload. > > $ grep -n Upload nginx_strace.26 > 636: 0.000041 recvfrom(15, "POST > /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7y > vyZwvKejLSOPeJBrY8J_XpMK"..., 1024, 0, NULL, NULL) = 1024 > 2502: 0.000096 writev(17, > [{iov_base="\1\1\0\1\0\10\0\0\0\1\1\0\0\0\0\0\1\4\0\1\r\35\3\0\f\200\0 > \1\313QUERY_STRING/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSB > HjARdR"..., iov_len=3400}], 1) = 3400 > 2964: 0.000062 writev(15, [{iov_base="HTTP/1.1 200 OK\r\nDate: > Mon, 07 Feb 2022 01:36:10 GMT\r\nContent-Type: application/json; > charset=utf-8"..., iov_len=436}, {iov_base="46\r\n", iov_len=4}, > {iov_base="{\"Action\":\"Upload\",\"Result\":{\"ErrorCode\":1,\"Error\ > ":\"File is too big\"}}", iov_len=70}, {iov_base="\r\n", iov_len=2}, > {iov_base="0\r\n\r\n", iov_len=5}], 5) = 517 > > $ grep -n Upload nginx_strace.25 > 488: 0.000134 recvfrom(15, "POST > /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7y > vyZwvKejLSOPeJBrY8J_XpMK"..., 1024, 0, NULL, NULL) = 1024 > 2393: 0.000183 writev(17, > [{iov_base="\1\1\0\1\0\10\0\0\0\1\1\0\0\0\0\0\1\4\0\1\r\35\3\0\f\200\0 > \1\313QUERY_STRING/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSB > HjARdR"..., iov_len=3400}], 1) = 3400 > 2847: 0.000064 writev(15, [{iov_base="HTTP/1.1 200 OK\r\nDate: > Mon, 07 Feb 2022 01:37:26 GMT\r\nContent-Type: application/json; > charset=utf-8"..., iov_len=410}, {iov_base="bd\r\n", iov_len=4}, > {iov_base="{\"Action\":\"Upload\",\"Result\":{\"Attachment\":{\"Name\" > :\"5kplayer-setup (2).exe\",\"TempName\":\"upload-post-"..., > iov_len=189}, {iov_base="\r\n", iov_len=2}, {iov_base="0\r\n\r\n", > iov_len=5}], 5) = 610 > > That looks like, in both cases, nginx sent a HTTP success (200 OK) > message, with differing json body content. One has Result: Attachment; > the other has Result: ErrorCode. > > In both cases, it appears that the response came after nginx wrote the > request to the fastcgi server. I am not sure what you mean by "the response came after nginx wrote the request to the fastcgi server". I thought the response is the HTTP response to the person that issued the POST (which in my case is another container running nginx also) I do not understand what the fastcgi server has to do here. 
I do not understand the configuration in rainloop.conf for the php-fpm stuff nor do I understand where it plays in the whole transaction. > > So, unless you have something unusual going on, that "File is too big" > came from the php-fpm service, not from nginx. > > That does not help resolve the inconsistency between an "up" and a > "restart" on the docker-compose side; but it does suggest that the > difference is on the php side, not the nginx side. > > > So possibly, try the same thing, but see if you can find the php / > fastcgi server state in both cases, and see if there is any obvious > difference there. > > If you do want to get fuller nginx logging, then in the rainloop.conf > file: > > > > # /dev/stdout (Default), , off > > > access_log off; > > > > > > # /dev/stderr (Default), , debug, info, notice, warn, > error, > > > crit, alert, emerg > > > error_log /dev/stderr warn; > > Remove those lines, so that the "http"-level config will apply. > > > > The error that occurs after docker-compse up -d is that a 5MB > > > attachment posted to the webmail container fails. > > > > I'm guessing that's a HTTP 413 coming direct from nginx; not any > message > > from the fastcgi service; and not a HTTP 502 coming direct from > nginx? > > From the strace output, I think I guessed wrong. The error seems not > to > be coming from nginx. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293592,293607#msg-293607 From nginx-forum at forum.nginx.org Tue Feb 8 05:48:13 2022 From: nginx-forum at forum.nginx.org (marioja2) Date: Tue, 08 Feb 2022 00:48:13 -0500 Subject: Problem running nginx in a container In-Reply-To: <20220207092005.GE14624@daoine.org> References: <20220207092005.GE14624@daoine.org> Message-ID: <34636f96c128827686dda321886a2f73.NginxMailingListEnglish@forum.nginx.org> See below for more questions (I snipped repetitive stuff that was not necessary for understanding): Francis Daly Wrote: ------------------------------------------------------- > On Sun, Feb 06, 2022 at 10:08:49PM -0500, marioja2 wrote: > (snip) > The request that you care about is a POST to a url which includes the > word Upload. 
> > $ grep -n Upload nginx_strace.26 > 636: 0.000041 recvfrom(15, "POST > /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7y > vyZwvKejLSOPeJBrY8J_XpMK"..., 1024, 0, NULL, NULL) = 1024 > 2502: 0.000096 writev(17, > [{iov_base="\1\1\0\1\0\10\0\0\0\1\1\0\0\0\0\0\1\4\0\1\r\35\3\0\f\200\0 > \1\313QUERY_STRING/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSB > HjARdR"..., iov_len=3400}], 1) = 3400 > 2964: 0.000062 writev(15, [{iov_base="HTTP/1.1 200 OK\r\nDate: > Mon, 07 Feb 2022 01:36:10 GMT\r\nContent-Type: application/json; > charset=utf-8"..., iov_len=436}, {iov_base="46\r\n", iov_len=4}, > {iov_base="{\"Action\":\"Upload\",\"Result\":{\"ErrorCode\":1,\"Error\ > ":\"File is too big\"}}", iov_len=70}, {iov_base="\r\n", iov_len=2}, > {iov_base="0\r\n\r\n", iov_len=5}], 5) = 517 > > $ grep -n Upload nginx_strace.25 > 488: 0.000134 recvfrom(15, "POST > /?/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSBHjARdRjXwgf35Z7y > vyZwvKejLSOPeJBrY8J_XpMK"..., 1024, 0, NULL, NULL) = 1024 > 2393: 0.000183 writev(17, > [{iov_base="\1\1\0\1\0\10\0\0\0\1\1\0\0\0\0\0\1\4\0\1\r\35\3\0\f\200\0 > \1\313QUERY_STRING/Upload/&q[]=/_eiVamsWTAQ12l6UF7vBn495f4U5LAGxhZ0nSB > HjARdR"..., iov_len=3400}], 1) = 3400 > 2847: 0.000064 writev(15, [{iov_base="HTTP/1.1 200 OK\r\nDate: > Mon, 07 Feb 2022 01:37:26 GMT\r\nContent-Type: application/json; > charset=utf-8"..., iov_len=410}, {iov_base="bd\r\n", iov_len=4}, > {iov_base="{\"Action\":\"Upload\",\"Result\":{\"Attachment\":{\"Name\" > :\"5kplayer-setup (2).exe\",\"TempName\":\"upload-post-"..., > iov_len=189}, {iov_base="\r\n", iov_len=2}, {iov_base="0\r\n\r\n", > iov_len=5}], 5) = 610 > If you look 3 lines above the line with the writev for HTTP/1.1 200 OK you will find that for the nginx_strace.26 log file (which failed with File is too big) the recvfrom says that "X-Powered-By: PHP/7.4.26\r\nServer: RainLoop\r\n" whereas in the nginx_strace.25 (which worked) There is no "X-Powered-By: PHP/7.4.26\r\n", just the "Server:RainLoop\r\n" > That looks like, in both cases, nginx sent a HTTP success (200 OK) > message, with differing json body content. One has Result: Attachment; > the other has Result: ErrorCode. > (snip) > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293592,293608#msg-293608 From roger at netskrt.io Tue Feb 8 07:39:08 2022 From: roger at netskrt.io (Roger Fischer) Date: Mon, 7 Feb 2022 23:39:08 -0800 Subject: corrupted cache file: proxy_cache_valid ignored Message-ID: <7D53E0B8-5E58-43D6-AD58-BAF99F9CE11E@netskrt.io> Hello, we have observed a case where it seems that the proxy_cache_valid directive is ignored. nginx version: 1.19.9 Config: proxy_cache_valid 200 206 30d; Scenario: * A cache file was corrupted (a file system issue). A part of the section that contains the headers had been overwritten with binary data. * The resource represented by the corrupted cache file is requested. * NGINX detects the corrupted cache file, and proxies the request upstream. * The request is rejected by the upstream, with the upstream returning a 403 status. * The 403 is returned to the client. This is all good, but * The request is repeated, and a cached 403 is returned, despite only caching 200 and 206. * Upon examination, the cache file contains the 403 response from upstream. Has anyone else seen something like this? 
Could this possibly be a bug?

Unfortunately I am not in a position to try to reproduce this at this time.

Thanks…

Roger

From nginx-forum at forum.nginx.org Tue Feb 8 11:15:18 2022
From: nginx-forum at forum.nginx.org (rjvbzeoibvpzie)
Date: Tue, 08 Feb 2022 06:15:18 -0500
Subject: ssl_reject_handshake disallow TLSv1.3
Message-ID: <9d2493ed212615771ef4e66d07c4bda8.NginxMailingListEnglish@forum.nginx.org>

ssl_protocols TLSv1.2 TLSv1.3;

server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}

This does not allow ANY other server to be reached with TLSv1.3.

server {
    listen 443 ssl default_server;
    ssl_certificate ssl/cert.pem;
    return 444;
}

This allows ANY server to be reached with TLSv1.2 or TLSv1.3 (as configured).

See https://stackoverflow.com/questions/71023951/ssl-alert-number-70-with-tlsv1-3/71032567#71032567

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293611,293611#msg-293611

From pluknet at nginx.com Tue Feb 8 12:59:09 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Tue, 8 Feb 2022 15:59:09 +0300
Subject: ssl_reject_handshake disallow TLSv1.3
In-Reply-To: <9d2493ed212615771ef4e66d07c4bda8.NginxMailingListEnglish@forum.nginx.org>
References: <9d2493ed212615771ef4e66d07c4bda8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <55D6E791-ADE5-4C60-A4B2-EAC238C8A7C7@nginx.com>

> On 8 Feb 2022, at 14:15, rjvbzeoibvpzie wrote:
>
> ssl_protocols TLSv1.2 TLSv1.3;
>
> server {
> listen 443 ssl default_server;
> ssl_reject_handshake on;
> }
>
> This does not allow ANY other server to be reached with TLSv1.3
> [..]

You didn't specify OpenSSL version, so I assume this belongs to
https://trac.nginx.org/nginx/ticket/2071#comment:1

--
Sergey Kandaurov

From mdounin at mdounin.ru Tue Feb 8 13:29:50 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Feb 2022 16:29:50 +0300
Subject: corrupted cache file: proxy_cache_valid ignored
In-Reply-To: <7D53E0B8-5E58-43D6-AD58-BAF99F9CE11E@netskrt.io>
References: <7D53E0B8-5E58-43D6-AD58-BAF99F9CE11E@netskrt.io>
Message-ID:

Hello!

On Mon, Feb 07, 2022 at 11:39:08PM -0800, Roger Fischer wrote:

> we have observed a case where it seems that the
> proxy_cache_valid directive is ignored.
>
> nginx version: 1.19.9
>
> Config: proxy_cache_valid 200 206 30d;
>
> Scenario:
> * A cache file was corrupted (a file system issue). A part of
> the section that contains the headers had been overwritten with
> binary data.
> * The resource represented by the corrupted cache file is
> requested.
> * NGINX detects the corrupted cache file, and proxies the
> request upstream.
> * The request is rejected by the upstream, with the upstream
> returning a 403 status.
> * The 403 is returned to the client.
> This is all good, but
> * The request is repeated, and a cached 403 is returned, despite
> only caching 200 and 206.
> * Upon examination, the cache file contains the 403 response
> from upstream.
>
> Has anyone else seen something like this? Could this possibly be
> a bug?

As long as you don't have corresponding proxy_ignore_headers in your configuration, cache validity can also be set by the response itself: via the Cache-Control, Expires, and X-Accel-Expires headers. Check the response headers in the cache file to see if there are any of these.
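For example, if you want the validity to be controlled by proxy_cache_valid alone, a minimal sketch would be:

    proxy_ignore_headers Cache-Control Expires X-Accel-Expires;
    proxy_cache_valid 200 206 30d;

With those headers ignored, the 403 would not be cached at all, as it is not listed in proxy_cache_valid.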
--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue Feb 8 14:05:49 2022
From: nginx-forum at forum.nginx.org (marioja2)
Date: Tue, 08 Feb 2022 09:05:49 -0500
Subject: Problem running nginx in a container
In-Reply-To: <20220207092005.GE14624@daoine.org>
References: <20220207092005.GE14624@daoine.org>
Message-ID:

It just dawned on me that the difference between the case where the result is an attachment and the case where the result is an error code is most likely related to the application reacting to a difference in the environment. Maybe a file or folder permission. I did check both directory trees and could not see a difference. I will look at the code.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293592,293620#msg-293620

From ACouch at cuahsi.org Tue Feb 8 14:23:47 2022
From: ACouch at cuahsi.org (Alva Couch)
Date: Tue, 8 Feb 2022 14:23:47 +0000
Subject: Strategies for large-file upload?
Message-ID:

I’m new to this list but have been running an NGINX community stack including NGINX community/Gunicorn/Django for several years in production.

My site is a science data repository. We have a need to accept very large files as uploads: 10 GB and above, with a ceiling of 100 GB or so.

What strategies have people found to be successful for uploading very large files? Please include both “community” and “plus” solutions.

Thanks
Alva L. Couch
Senior Architect of Data Services, CUAHSI

From moshe at ymkatz.net Tue Feb 8 16:08:54 2022
From: moshe at ymkatz.net (Moshe Katz)
Date: Tue, 8 Feb 2022 11:08:54 -0500
Subject: Strategies for large-file upload?
In-Reply-To:
References:
Message-ID:

Our "large" files are usually closer to 1 GB than 10-100 GB, but the general idea should be the same.

We tried using https://github.com/vkholodkov/nginx-upload-module (docs at https://www.nginx.com/resources/wiki/modules/upload/), but we had a hard time getting it to work properly, and it might not be compatible with newer nginx versions. (I don't remember why we didn't try the project it was forked from.) We considered a similar project, https://github.com/pgaertig/nginx-big-upload, but then we decided that we did not want to use an nginx extension because we were worried about future maintainability (as you can see, none of those projects have been updated for a while.)

We are currently using the tus resumable upload protocol - https://tus.io/ - they have client and server implementations available in most common programming languages.

You might also consider using something like MinIO, Ceph, or anything else that provides an Amazon-S3-compatible API (which includes multi-part upload), but those need an S3-compatible upload tool so it's a lot more work to allow uploads from the browser. (You could look at https://github.com/TTLabs/EvaporateJS or https://github.com/zdresearch/s3-multipart-upload-javascript-browser for more about that, but I haven't tried them.)

On Tue, Feb 8, 2022 at 9:24 AM Alva Couch wrote:

> I’m new to this list but have been running an NGINX community stack
> including NGINX community/Gunicorn/Django for several years in production.
>
> My site is a science data repository. We have a need to accept very large
> files as uploads: 10 GB and above, with a ceiling of 100 GB or so.
>
> What strategies have people found to be successful for uploading very
> large files? Please include both “community” and “plus” solutions.
>
> Thanks
>
> Alva L.
Couch > > Senior Architect of Data Services, CUAHSI > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noloader at gmail.com Tue Feb 8 18:48:53 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 8 Feb 2022 13:48:53 -0500 Subject: Strategies for large-file upload? In-Reply-To: References: Message-ID: On Tue, Feb 8, 2022 at 9:27 AM Alva Couch wrote: > > I’m new to this list but have been running an NGINX community stack including NGINX community/Gunicorn/Django for several years in production. > > My site is a science data repository. We have a need to accept very large files as uploads: 10 gb and above, with a ceiling of 100 gb or so. > > What strategies have people found to be successful for uploading very large files? Please include both “community” and “plus” solutions. I hope I don't sound like a heretic, but I would consider another solution like scp or sftp. The reason for the suggestion... web servers serve web pages. They are not file transfer agents. Use the right tool for the job. And I am happy to concede web servers do a fine job with small files, like 4KB or 4MB. But the limit is in place to protect the web server and keep it on track with its primary mission of serving content, not receiving it. Jeff From francis at daoine.org Wed Feb 9 00:25:23 2022 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Feb 2022 00:25:23 +0000 Subject: Problem running nginx in a container In-Reply-To: <25bf7a82043c596fda08122dfe62e8e6.NginxMailingListEnglish@forum.nginx.org> References: <20220207092005.GE14624@daoine.org> <25bf7a82043c596fda08122dfe62e8e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220209002523.GF14624@daoine.org> On Tue, Feb 08, 2022 at 12:31:55AM -0500, marioja2 wrote: Hi there, > error.log.2: https://1drv.ms/u/s!ApcymW6zCVpnuxyYecXcaIlqrVWu?e=WmgaVO > error.log.1: https://1drv.ms/u/s!ApcymW6zCVpnuxt2oIE3eLklkND0?e=ZlJNGH > access.log.2: https://1drv.ms/u/s!ApcymW6zCVpnuxgl1jCmHH8O-Pmi?e=GJGQgI > access.log.1: https://1drv.ms/u/s!ApcymW6zCVpnuxpdqNOQOt3pOMbX?e=Hmtz9Z These logs show pretty much the same thing, to me: nginx does the same thing in both cases, and gets different responses from php-fpm. You'll want to ask php-fpm, or the php code, why that is. > > That looks like, in both cases, nginx sent a HTTP success (200 OK) > > message, with differing json body content. One has Result: Attachment; > > the other has Result: ErrorCode. > > > > In both cases, it appears that the response came after nginx wrote the > > request to the fastcgi server. > > I am not sure what you mean by "the response came after nginx wrote the > request to the fastcgi server". I thought the response is the HTTP response > to the person that issued the POST (which in my case is another container > running nginx also) I do not understand what the fastcgi server has to do > here. I do not understand the configuration in rainloop.conf for the php-fpm > stuff nor do I understand where it plays in the whole transaction. The client/browser makes a HTTP POST request to nginx. Because of the nginx configuration, nginx knows to handle that request by acting as a fastcgi client, and making a fastcgi request to the fastcgi server that is listening on the unix-domain socket /var/run/php7-fpm.sock. That fastcgi server does whatever it wants, and sends a response to nginx. 
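In configuration terms, that handoff is just something like the following — a generic sketch rather than your exact rainloop.conf (the socket path is the one from your setup; the location pattern and params are illustrative):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # nginx acts as a fastcgi client to this socket
        fastcgi_pass unix:/var/run/php7-fpm.sock;
    }

Whatever comes back over that socket is the fastcgi server's response.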
nginx sends a modified version of that response back to the client. In one case, the response from the fastcgi server is some JSON that says things worked; in the other, it is some JSON that says the input file was too big. Also in one case, the response headers include "X-Powered-By: PHP/7.4.26", and in the other case, they do not. Under what circumstances will your fastcgi server not return the X-Powered-By header? Those circumstances happen in error_log.2 above, and not in error_log.1. Searching the web for "File is too big" and rainloop does offer some php.ini-config suggestions. (One of the earlier suggestions was that the same file content, but uploaded with a shorter file name, would work.) Is there any chance that your php-fpm service acts differently between an "up" and a "restart"? Good luck with it, f -- Francis Daly francis at daoine.org From noloader at gmail.com Wed Feb 9 02:40:13 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 8 Feb 2022 21:40:13 -0500 Subject: ssl_reject_handshake disallow TLSv1.3 In-Reply-To: <55D6E791-ADE5-4C60-A4B2-EAC238C8A7C7@nginx.com> References: <9d2493ed212615771ef4e66d07c4bda8.NginxMailingListEnglish@forum.nginx.org> <55D6E791-ADE5-4C60-A4B2-EAC238C8A7C7@nginx.com> Message-ID: On Tue, Feb 8, 2022 at 8:02 AM Sergey Kandaurov wrote: > > > > On 8 Feb 2022, at 14:15, rjvbzeoibvpzie wrote: > > > > ssl_protocols TLSv1.2 TLSv1.3; > > > > server { > > listen 443 ssl default_server; > > ssl_reject_handshake on; > > } > > > > This does not allow ANY other server to be reached with TLSv1.3 > > [..] > > You didn't specify OpenSSL version, so I assume this > belongs to https://trac.nginx.org/nginx/ticket/2071#comment:1 Also see https://github.com/openssl/openssl/issues/13291. Jeff From praveenssit at gmail.com Wed Feb 9 10:47:57 2022 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Wed, 9 Feb 2022 16:17:57 +0530 Subject: [Help][File download issue] Message-ID: Hello, I'm trying to download a file using curl and it's failing with the below error message. curl: (18) transfer closed with 723659786 bytes remaining to read In nginx error logs, I see the warning below. 2022/02/09 10:35:35 [warn] 15737#15737: *74233 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/9/84/0000000849 while reading upstream, client: 1.2.3.4, server: x.y.z, request: "POST /v1/events/shares/search HTTP/1.1", upstream: " http://172.31.0.61:80/v1/events/shares/search", host: "x.y.z" Is it something to do with nginx config? Thanks. -- *Regards,* *K S Praveen Kumar* -------------- next part -------------- An HTML attachment was scrubbed... URL: From praveenssit at gmail.com Thu Feb 10 05:22:18 2022 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Thu, 10 Feb 2022 10:52:18 +0530 Subject: [Help][File download issue] In-Reply-To: References: Message-ID: Hello, I tried passing keep-alive to curl and still it is failing at 60%. Any clues ? On Wed, Feb 9, 2022 at 4:17 PM Praveen Kumar K S wrote: > Hello, > > I'm trying to download a file using curl and it's failing with the below > error message. > > curl: (18) transfer closed with 723659786 bytes remaining to read > > In nginx error logs, I see the warning below. 
> > 2022/02/09 10:35:35 [warn] 15737#15737: *74233 an upstream response is > buffered to a temporary file /var/lib/nginx/proxy/9/84/0000000849 while > reading upstream, client: 1.2.3.4, server: x.y.z, request: "POST > /v1/events/shares/search HTTP/1.1", upstream: " > http://172.31.0.61:80/v1/events/shares/search", host: "x.y.z" > > Is it something to do with nginx config? Thanks. > > -- > > > *Regards,* > > *K S Praveen Kumar* > -- *Regards,* *K S Praveen Kumar* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lance at wordkeeper.com Fri Feb 11 01:35:27 2022 From: lance at wordkeeper.com (Lance Dockins) Date: Thu, 10 Feb 2022 19:35:27 -0600 Subject: Nginx + NJS 0.7.2 Refusing to Compile In-Reply-To: <2fd35cec-d9c3-428d-8d76-ef872b09c0ed@Spark> References: <2fd35cec-d9c3-428d-8d76-ef872b09c0ed@Spark> Message-ID: Hello all, I’m trying to build Nginx with NJS so we can start experimenting with it.  I do need to build our own Nginx for a variety of reasons so I can’t opt for pre-compiled packages.  I’ve tested all sorts of different build options from our standard customized build all the way down to almost the most basic build.  No matter what options I specify, if I provide OpenSSL 3.0.1 (haven’t tried plain old 3.0) at all, NJS falls to compile with Nginx ending with this generic error: build/src/njs_diyfp.o build/src/njs_dtoa.o build/src/njs_dtoa_fixed.o build/src/njs_str.o build/src/njs_strtod.o build/src/njs_murmur_hash.o build/src/njs_djb_hash.o build/src/njs_utf8.o build/src/njs_utf16.o build/src/njs_arr.o build/src/njs_rbtree.o build/src/njs_lvlhsh.o build/src/njs_trace.o build/src/njs_random.o build/src/njs_md5.o build/src/njs_sha1.o build/src/njs_sha2.o build/src/njs_time.o build/src/njs_file.o build/src/njs_malloc.o build/src/njs_mp.o build/src/njs_sprintf.o build/src/njs_utils.o build/src/njs_chb.o build/src/njs_value.o build/src/njs_vm.o build/src/njs_vmcode.o build/src/njs_boolean.o build/src/njs_number.o build/src/njs_symbol.o build/src/njs_string.o build/src/njs_object.o build/src/njs_object_prop.o build/src/njs_array.o build/src/njs_json.o build/src/njs_function.o build/src/njs_regexp.o build/src/njs_date.o build/src/njs_error.o build/src/njs_math.o build/src/njs_timer.o build/src/njs_module.o build/src/njs_event.o build/src/njs_extern.o build/src/njs_variable.o build/src/njs_builtin.o build/src/njs_lexer.o build/src/njs_lexer_keyword.o build/src/njs_parser.o build/src/njs_generator.o build/src/njs_disassembler.o build/src/njs_array_buffer.o build/src/njs_typed_array.o build/src/njs_promise.o build/src/njs_encoding.o build/src/njs_iterator.o build/src/njs_scope.o build/src/njs_async.o build/src/njs_buffer.o build/external/njs_crypto_module.o build/external/njs_fs_module.o build/external/njs_query_string_module.o build/build/njs_modules.o make[2]: Leaving directory `/root/njs-0.7.2' make[1]: Leaving directory `/root/nginx-1.21.6' make: *** [build] Error 2 As soon as I remove OpenSSL 3.0.1 from the build, it compiles - even if I compile in OpenSSL with a different static library than the system default.  All variations of the regular build that I do work fine.  I can compile regular Nginx with all sorts of other stuff (e.g. Lua, Brotli, etc) and it all works just fine 100% of the time until we add in NJS (even if we remove all of the 3rd party extensions from the build).  At the moment, I’m sort of stuck between either using OpenSSL 3.0.1 or using Nginx + NJS.  Is NJS 0.7.2 suffering from some sort of OpenSSL 3.0.1 incompatibility?  
Or are there special build directives that we need to pass in to make it compatible?  Or does the final error output from above mean something else? Just to avoid potential red herrings with this, here is one of the most basic configure/build commands that we’re using.  I’ve intentionally stripped most tweaks that we would usually use and it still fails.  If it matters, this is being compiled on CentOS 7 with Linux kernel 5.14 or greater (tried this on multiple systems).  Nginx version is 1.21.6, NJS is 0.7.2, and OpenSSL is 3.0.1.  GCC version is 10.2.1 (but I’ve tried with other versions as well). ./configure \ --prefix=/usr/share/nginx \ --user=nobody \ --group=nobody \ --with-pcre-jit \ --with-http_ssl_module \ --with-http_stub_status_module \ --with-openssl=$STATICLIBSSL \ --with-http_realip_module \ --with-http_auth_request_module \ --with-http_gzip_static_module \ --with-http_v2_module \ --with-http_sub_module \ --with-libatomic \ --with-file-aio \ --with-http_xslt_module \ --with-http_flv_module \ --with-http_mp4_module \ --with-http_gunzip_module \ --with-threads \ --add-dynamic-module=/root/njs-${NJS}/nginx Any insights that you can share would help immensely. -- Lance Dockins -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 11 11:56:49 2022 From: nginx-forum at forum.nginx.org (ckchauhan) Date: Fri, 11 Feb 2022 06:56:49 -0500 Subject: SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while proxying connection Message-ID: Hello All, We have our applications running on NGINX server with CentOS configured to use upstream servers. We have all applications working fine. Below are the details: - NGINX Version 1.20.1 - OpenSSL version 1.1.11 - NGINX is not configured to use SSL but upstreams are, below are the snapshot of the configuration. cisco.upstream upstream ciscoapi { server 127.0.0.1:6302; ## ${ADMIN_STREAM_PORT} keepalive 32; # server OTHERSERVER:6302 backup; ## ${ADMIN_STREAM_PORT} ${OTHER_SERVER} ${PRIVATE_ELB} } cisco.stream server { listen 6302 ssl; ## ${ADMIN_STREAM_PORT} ssl_certificate /opt/lynx/cert/public.pem; ## ${INSTALL_BASE_PATH} ssl_certificate_key /opt/lynx/cert/private.key; ## ${INSTALL_BASE_PATH} proxy_pass localhost:6301; ## ${ADMIN_SVC_PORT} } ciscomiddleware.stream server { listen 6307 ssl; ## ${MW_STREAM_PORT} ssl_certificate /opt/lynx/cert/public.pem; ## ${INSTALL_BASE_PATH} ssl_certificate_key /opt/lynx/cert/private.key; ## ${INSTALL_BASE_PATH} proxy_pass localhost:6306; ## ${MW_SVC_PORT} proxy_ssl_server_name on; } ciscomiddleware.upstream upstream ciscomiddlewareapi { server 127.0.0.1:6307; ## ${MW_STREAM_PORT} keepalive 32; # server OTHERSERVER:6307 backup; ## ${MW_STREAM_PORT} ${OTHER_SERVER} ${PRIVATE_ELB} } Nginx.conf # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes auto; worker_rlimit_nofile 16384; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. 
include /usr/share/nginx/modules/*.conf; events { worker_connections 16384; # multi_accept off; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main buffer=16k; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_requests 100000; keepalive_timeout 300; # client_body_timeout 600; # client_header_timeout 600; # server_tokens off; types_hash_max_size 4096; include /etc/nginx/mime.types; default_type application/octet-stream; proxy_buffering off; proxy_buffer_size 8k; proxy_read_timeout 300s; proxy_connect_timeout 75s; proxy_send_timeout 600s; send_timeout 600s; large_client_header_buffers 4 64k; client_max_body_size 128m; client_body_buffer_size 128m; client_header_buffer_size 128m; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; server { listen 127.0.0.1:80; # listen [::]:80 default_server; server_name _; root /usr/share/nginx/html; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; location / { # proxy_read_timeout 300; # proxy_connect_timeout 75; # proxy_send_timeout 600; proxy_http_version 1.1; proxy_set_header Connection ""; } error_page 404 /404.html; location = /404.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } # Settings for a TLS enabled server. # # server { # listen 443 ssl http2 default_server; # listen [::]:443 ssl http2 default_server; # server_name _; # root /usr/share/nginx/html; # # ssl_certificate "/etc/pki/nginx/server.crt"; # ssl_certificate_key "/etc/pki/nginx/private/server.key"; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 10m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # # # Load configuration files for the default server block. # include /etc/nginx/default.d/*.conf; # # location / { # } # # error_page 404 /404.html; # location = /404.html { # } # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # } # } } Recently, we have been performing Load Test on this using JMETER as load generation tool. Mostly it runs as expected however we get the below error on random basis. In logs we have observed that, - It is showing 502 Bad Gateway error [SSL Shutdown]. - “SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while proxying connection, client: 127.0.0.1, server: 0.0.0.0:6307, upstream: "127.0.0.1:6306", bytes from/to client:0/0, bytes from/to upstream:0/0”. - The error occurs when max response time breaches 120 sec. We have tried to identify the cause by, - Following the nginx, github and stackoveflow. - Made changes multiple times to proxy read timeout, upgrading OPENSSL version and other tweaks. But still we are not able to get to the root cause of the issue or fix. We have been struggling since more than four weeks now. Can you help us please? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293640,293640#msg-293640 From pluknet at nginx.com Fri Feb 11 13:46:48 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 11 Feb 2022 16:46:48 +0300 Subject: Nginx + NJS 0.7.2 Refusing to Compile In-Reply-To: References: <2fd35cec-d9c3-428d-8d76-ef872b09c0ed@Spark> Message-ID: <1E76CFE5-0EB2-47A0-9857-906A7156F49F@nginx.com> > On 11 Feb 2022, at 04:35, Lance Dockins wrote: > > Hello all, > > I’m trying to build Nginx with NJS so we can start experimenting with it. I do need to build our own Nginx for a variety of reasons so I can’t opt for pre-compiled packages. I’ve tested all sorts of different build options from our standard customized build all the way down to almost the most basic build. No matter what options I specify, if I provide OpenSSL 3.0.1 (haven’t tried plain old 3.0) at all, NJS falls to compile with Nginx ending with this generic error: > > build/src/njs_diyfp.o build/src/njs_dtoa.o build/src/njs_dtoa_fixed.o build/src/njs_str.o build/src/njs_strtod.o build/src/njs_murmur_hash.o build/src/njs_djb_hash.o build/src/njs_utf8.o build/src/njs_utf16.o build/src/njs_arr.o build/src/njs_rbtree.o build/src/njs_lvlhsh.o build/src/njs_trace.o build/src/njs_random.o build/src/njs_md5.o build/src/njs_sha1.o build/src/njs_sha2.o build/src/njs_time.o build/src/njs_file.o build/src/njs_malloc.o build/src/njs_mp.o build/src/njs_sprintf.o build/src/njs_utils.o build/src/njs_chb.o build/src/njs_value.o build/src/njs_vm.o build/src/njs_vmcode.o build/src/njs_boolean.o build/src/njs_number.o build/src/njs_symbol.o build/src/njs_string.o build/src/njs_object.o build/src/njs_object_prop.o build/src/njs_array.o build/src/njs_json.o build/src/njs_function.o build/src/njs_regexp.o build/src/njs_date.o build/src/njs_error.o build/src/njs_math.o build/src/njs_timer.o build/src/njs_module.o build/src/njs_event.o build/src/njs_extern.o build/src/njs_variable.o build/src/njs_builtin.o build/src/njs_lexer.o build/src/njs_lexer_keyword.o build/src/njs_parser.o build/src/njs_generator.o build/src/njs_disassembler.o build/src/njs_array_buffer.o build/src/njs_typed_array.o build/src/njs_promise.o build/src/njs_encoding.o build/src/njs_iterator.o build/src/njs_scope.o build/src/njs_async.o build/src/njs_buffer.o build/external/njs_crypto_module.o build/external/njs_fs_module.o build/external/njs_query_string_module.o build/build/njs_modules.o > make[2]: Leaving directory `/root/njs-0.7.2' > make[1]: Leaving directory `/root/nginx-1.21.6' > make: *** [build] Error 2 > > As soon as I remove OpenSSL 3.0.1 from the build, it compiles - even if I compile in OpenSSL with a different static library than the system default. All variations of the regular build that I do work fine. I can compile regular Nginx with all sorts of other stuff (e.g. Lua, Brotli, etc) and it all works just fine 100% of the time until we add in NJS (even if we remove all of the 3rd party extensions from the build). At the moment, I’m sort of stuck between either using OpenSSL 3.0.1 or using Nginx + NJS. Is NJS 0.7.2 suffering from some sort of OpenSSL 3.0.1 incompatibility? Or are there special build directives that we need to pass in to make it compatible? Or does the final error output from above mean something else? > > Just to avoid potential red herrings with this, here is one of the most basic configure/build commands that we’re using. I’ve intentionally stripped most tweaks that we would usually use and it still fails. 
If it matters, this is being compiled on CentOS 7 with Linux kernel 5.14 or greater (tried this on multiple systems). Nginx version is 1.21.6, NJS is 0.7.2, and OpenSSL is 3.0.1. GCC version is 10.2.1 (but I’ve tried with other versions as well). > > ./configure \ > --prefix=/usr/share/nginx \ > --user=nobody \ > --group=nobody \ > --with-pcre-jit \ > --with-http_ssl_module \ > --with-http_stub_status_module \ > --with-openssl=$STATICLIBSSL \ > --with-http_realip_module \ > --with-http_auth_request_module \ > --with-http_gzip_static_module \ > --with-http_v2_module \ > --with-http_sub_module \ > --with-libatomic \ > --with-file-aio \ > --with-http_xslt_module \ > --with-http_flv_module \ > --with-http_mp4_module \ > --with-http_gunzip_module \ > --with-threads \ > --add-dynamic-module=/root/njs-${NJS}/nginx > What is the exact build error? Builds fine here with the reported nginx/njs/OpenSSL versions. -- Sergey Kandaurov From lance at wordkeeper.com Fri Feb 11 14:00:47 2022 From: lance at wordkeeper.com (Lance Dockins) Date: Fri, 11 Feb 2022 08:00:47 -0600 Subject: Nginx + NJS 0.7.2 Refusing to Compile In-Reply-To: <1E76CFE5-0EB2-47A0-9857-906A7156F49F@nginx.com> References: <2fd35cec-d9c3-428d-8d76-ef872b09c0ed@Spark> <1E76CFE5-0EB2-47A0-9857-906A7156F49F@nginx.com> Message-ID: <03e02f83-6f1f-46b6-bd8f-1804ea239678@Spark> That isn’t exactly clear, actually.  The last few lines of the build process were in my email.  I am not seeing a particular error above that that would indicate where the problem is.  Other than just looking in the last lines of output from the build, are there better ways to track such an error? -- Lance Dockins Minister of Magic WordKeeper Office: 405.585.2500 Cell: 405.306.7401 https://wordkeeper.com On Feb 11, 2022, 7:49 AM -0600, Sergey Kandaurov , wrote: > > > On 11 Feb 2022, at 04:35, Lance Dockins wrote: > > > > Hello all, > > > > I’m trying to build Nginx with NJS so we can start experimenting with it. I do need to build our own Nginx for a variety of reasons so I can’t opt for pre-compiled packages. I’ve tested all sorts of different build options from our standard customized build all the way down to almost the most basic build. 
No matter what options I specify, if I provide OpenSSL 3.0.1 (haven’t tried plain old 3.0) at all, NJS falls to compile with Nginx ending with this generic error: > > > > build/src/njs_diyfp.o build/src/njs_dtoa.o build/src/njs_dtoa_fixed.o build/src/njs_str.o build/src/njs_strtod.o build/src/njs_murmur_hash.o build/src/njs_djb_hash.o build/src/njs_utf8.o build/src/njs_utf16.o build/src/njs_arr.o build/src/njs_rbtree.o build/src/njs_lvlhsh.o build/src/njs_trace.o build/src/njs_random.o build/src/njs_md5.o build/src/njs_sha1.o build/src/njs_sha2.o build/src/njs_time.o build/src/njs_file.o build/src/njs_malloc.o build/src/njs_mp.o build/src/njs_sprintf.o build/src/njs_utils.o build/src/njs_chb.o build/src/njs_value.o build/src/njs_vm.o build/src/njs_vmcode.o build/src/njs_boolean.o build/src/njs_number.o build/src/njs_symbol.o build/src/njs_string.o build/src/njs_object.o build/src/njs_object_prop.o build/src/njs_array.o build/src/njs_json.o build/src/njs_function.o build/src/njs_regexp.o build/src/njs_date.o build/src/njs_error.o build/src/njs_math.o build/src/njs_timer.o build/src/njs_module.o build/src/njs_event.o build/src/njs_extern.o build/src/njs_variable.o build/src/njs_builtin.o build/src/njs_lexer.o build/src/njs_lexer_keyword.o build/src/njs_parser.o build/src/njs_generator.o build/src/njs_disassembler.o build/src/njs_array_buffer.o build/src/njs_typed_array.o build/src/njs_promise.o build/src/njs_encoding.o build/src/njs_iterator.o build/src/njs_scope.o build/src/njs_async.o build/src/njs_buffer.o build/external/njs_crypto_module.o build/external/njs_fs_module.o build/external/njs_query_string_module.o build/build/njs_modules.o > > make[2]: Leaving directory `/root/njs-0.7.2' > > make[1]: Leaving directory `/root/nginx-1.21.6' > > make: *** [build] Error 2 > > > > As soon as I remove OpenSSL 3.0.1 from the build, it compiles - even if I compile in OpenSSL with a different static library than the system default. All variations of the regular build that I do work fine. I can compile regular Nginx with all sorts of other stuff (e.g. Lua, Brotli, etc) and it all works just fine 100% of the time until we add in NJS (even if we remove all of the 3rd party extensions from the build). At the moment, I’m sort of stuck between either using OpenSSL 3.0.1 or using Nginx + NJS. Is NJS 0.7.2 suffering from some sort of OpenSSL 3.0.1 incompatibility? Or are there special build directives that we need to pass in to make it compatible? Or does the final error output from above mean something else? > > > > Just to avoid potential red herrings with this, here is one of the most basic configure/build commands that we’re using. I’ve intentionally stripped most tweaks that we would usually use and it still fails. If it matters, this is being compiled on CentOS 7 with Linux kernel 5.14 or greater (tried this on multiple systems). Nginx version is 1.21.6, NJS is 0.7.2, and OpenSSL is 3.0.1. GCC version is 10.2.1 (but I’ve tried with other versions as well). 
> > > > ./configure \ > > --prefix=/usr/share/nginx \ > > --user=nobody \ > > --group=nobody \ > > --with-pcre-jit \ > > --with-http_ssl_module \ > > --with-http_stub_status_module \ > > --with-openssl=$STATICLIBSSL \ > > --with-http_realip_module \ > > --with-http_auth_request_module \ > > --with-http_gzip_static_module \ > > --with-http_v2_module \ > > --with-http_sub_module \ > > --with-libatomic \ > > --with-file-aio \ > > --with-http_xslt_module \ > > --with-http_flv_module \ > > --with-http_mp4_module \ > > --with-http_gunzip_module \ > > --with-threads \ > > --add-dynamic-module=/root/njs-${NJS}/nginx > > > > What is the exact build error? > Builds fine here with the reported nginx/njs/OpenSSL versions. > > -- > Sergey Kandaurov > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From lance at wordkeeper.com Fri Feb 11 14:02:05 2022 From: lance at wordkeeper.com (Lance Dockins) Date: Fri, 11 Feb 2022 08:02:05 -0600 Subject: Nginx + NJS 0.7.2 Refusing to Compile In-Reply-To: <1E76CFE5-0EB2-47A0-9857-906A7156F49F@nginx.com> References: <2fd35cec-d9c3-428d-8d76-ef872b09c0ed@Spark> <1E76CFE5-0EB2-47A0-9857-906A7156F49F@nginx.com> Message-ID: Oh.  I should also mention that I have been able to get this to compile on CentOS 8.  But it has failed on multiple different CentOS 7 machines. -- Lance Dockins Minister of Magic WordKeeper Office: 405.585.2500 Cell: 405.306.7401 https://wordkeeper.com On Feb 11, 2022, 7:49 AM -0600, Sergey Kandaurov , wrote: > > > On 11 Feb 2022, at 04:35, Lance Dockins wrote: > > > > Hello all, > > > > I’m trying to build Nginx with NJS so we can start experimenting with it. I do need to build our own Nginx for a variety of reasons so I can’t opt for pre-compiled packages. I’ve tested all sorts of different build options from our standard customized build all the way down to almost the most basic build. 
No matter what options I specify, if I provide OpenSSL 3.0.1 (haven’t tried plain old 3.0) at all, NJS falls to compile with Nginx ending with this generic error: > > > > build/src/njs_diyfp.o build/src/njs_dtoa.o build/src/njs_dtoa_fixed.o build/src/njs_str.o build/src/njs_strtod.o build/src/njs_murmur_hash.o build/src/njs_djb_hash.o build/src/njs_utf8.o build/src/njs_utf16.o build/src/njs_arr.o build/src/njs_rbtree.o build/src/njs_lvlhsh.o build/src/njs_trace.o build/src/njs_random.o build/src/njs_md5.o build/src/njs_sha1.o build/src/njs_sha2.o build/src/njs_time.o build/src/njs_file.o build/src/njs_malloc.o build/src/njs_mp.o build/src/njs_sprintf.o build/src/njs_utils.o build/src/njs_chb.o build/src/njs_value.o build/src/njs_vm.o build/src/njs_vmcode.o build/src/njs_boolean.o build/src/njs_number.o build/src/njs_symbol.o build/src/njs_string.o build/src/njs_object.o build/src/njs_object_prop.o build/src/njs_array.o build/src/njs_json.o build/src/njs_function.o build/src/njs_regexp.o build/src/njs_date.o build/src/njs_error.o build/src/njs_math.o build/src/njs_timer.o build/src/njs_module.o build/src/njs_event.o build/src/njs_extern.o build/src/njs_variable.o build/src/njs_builtin.o build/src/njs_lexer.o build/src/njs_lexer_keyword.o build/src/njs_parser.o build/src/njs_generator.o build/src/njs_disassembler.o build/src/njs_array_buffer.o build/src/njs_typed_array.o build/src/njs_promise.o build/src/njs_encoding.o build/src/njs_iterator.o build/src/njs_scope.o build/src/njs_async.o build/src/njs_buffer.o build/external/njs_crypto_module.o build/external/njs_fs_module.o build/external/njs_query_string_module.o build/build/njs_modules.o > > make[2]: Leaving directory `/root/njs-0.7.2' > > make[1]: Leaving directory `/root/nginx-1.21.6' > > make: *** [build] Error 2 > > > > As soon as I remove OpenSSL 3.0.1 from the build, it compiles - even if I compile in OpenSSL with a different static library than the system default. All variations of the regular build that I do work fine. I can compile regular Nginx with all sorts of other stuff (e.g. Lua, Brotli, etc) and it all works just fine 100% of the time until we add in NJS (even if we remove all of the 3rd party extensions from the build). At the moment, I’m sort of stuck between either using OpenSSL 3.0.1 or using Nginx + NJS. Is NJS 0.7.2 suffering from some sort of OpenSSL 3.0.1 incompatibility? Or are there special build directives that we need to pass in to make it compatible? Or does the final error output from above mean something else? > > > > Just to avoid potential red herrings with this, here is one of the most basic configure/build commands that we’re using. I’ve intentionally stripped most tweaks that we would usually use and it still fails. If it matters, this is being compiled on CentOS 7 with Linux kernel 5.14 or greater (tried this on multiple systems). Nginx version is 1.21.6, NJS is 0.7.2, and OpenSSL is 3.0.1. GCC version is 10.2.1 (but I’ve tried with other versions as well). 
> > > > ./configure \ > > --prefix=/usr/share/nginx \ > > --user=nobody \ > > --group=nobody \ > > --with-pcre-jit \ > > --with-http_ssl_module \ > > --with-http_stub_status_module \ > > --with-openssl=$STATICLIBSSL \ > > --with-http_realip_module \ > > --with-http_auth_request_module \ > > --with-http_gzip_static_module \ > > --with-http_v2_module \ > > --with-http_sub_module \ > > --with-libatomic \ > > --with-file-aio \ > > --with-http_xslt_module \ > > --with-http_flv_module \ > > --with-http_mp4_module \ > > --with-http_gunzip_module \ > > --with-threads \ > > --add-dynamic-module=/root/njs-${NJS}/nginx > > > > What is the exact build error? > Builds fine here with the reported nginx/njs/OpenSSL versions. > > -- > Sergey Kandaurov > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From peljasz at yahoo.co.uk Sat Feb 12 10:11:25 2022 From: peljasz at yahoo.co.uk (lejeczek) Date: Sat, 12 Feb 2022 10:11:25 +0000 Subject: listen on IPs but do not fail if one is absent - ? References: <9860ac91-895c-3936-88dd-db97697cd9a2.ref@yahoo.co.uk> Message-ID: <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> Hi guys a novice here so go easy on me with this question: having multiple 'listen' with IPs or, just one 'listen' with a hostname which resolves to more than one IP - is it possible to tell Nginx not fail when one of IPs is absent, does not exist? many thanks, L. From francis at daoine.org Sat Feb 12 13:26:17 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 12 Feb 2022 13:26:17 +0000 Subject: listen on IPs but do not fail if one is absent - ? In-Reply-To: <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> References: <9860ac91-895c-3936-88dd-db97697cd9a2.ref@yahoo.co.uk> <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> Message-ID: <20220212132617.GG14624@daoine.org> On Sat, Feb 12, 2022 at 10:11:25AM +0000, lejeczek via nginx wrote: Hi there, > having multiple 'listen' with IPs or, just one 'listen' with a hostname > which resolves to more than one IP - is it possible to tell Nginx not fail > when one of IPs is absent, does not exist? I think that stock nginx does not support that. There are possibly ways to avoid the failure; but they all fundamentally are different ways to do "only bind to locally-existing addresses". Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Sat Feb 12 13:28:01 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 Feb 2022 16:28:01 +0300 Subject: listen on IPs but do not fail if one is absent - ? In-Reply-To: <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> References: <9860ac91-895c-3936-88dd-db97697cd9a2.ref@yahoo.co.uk> <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> Message-ID: Hello! On Sat, Feb 12, 2022 at 10:11:25AM +0000, lejeczek via nginx wrote: > having multiple 'listen' with IPs or, just one 'listen' with a hostname > which resolves to more than one IP - is it possible to tell Nginx not > fail when one of IPs is absent, does not exist? nginx won't fail if it also listens on * with the same port (as it won't actually try to listen on the particular IP addresses in this case, see the description of the "bind" parameter at http://nginx.org/r/listen). 
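For example, with something like the following — a minimal sketch, where 192.0.2.10 stands in for the address that may be absent and the server_name values are illustrative — only the wildcard socket is actually bound, so startup does not depend on that address existing:

    server {
        listen 80;
        server_name wildcard.example.com;
    }

    server {
        listen 192.0.2.10:80;
        server_name specific.example.com;
    }

As long as neither listen carries a parameter that forces a separate bind() ("bind" itself, "reuseport", and so on), requests arriving for 192.0.2.10 are still routed to the second server once the address exists.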
--
Maxim Dounin
http://mdounin.ru/

From peljasz at yahoo.co.uk Sun Feb 13 07:56:16 2022
From: peljasz at yahoo.co.uk (lejeczek)
Date: Sun, 13 Feb 2022 07:56:16 +0000
Subject: listen on IPs but do not fail if one is absent - ?
In-Reply-To: <20220212132617.GG14624@daoine.org>
References: <9860ac91-895c-3936-88dd-db97697cd9a2.ref@yahoo.co.uk> <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> <20220212132617.GG14624@daoine.org>
Message-ID: <70c93eed-fe0a-d224-ff00-0c392bd2f716@yahoo.co.uk>

On 12/02/2022 13:26, Francis Daly wrote:
> On Sat, Feb 12, 2022 at 10:11:25AM +0000, lejeczek via nginx wrote:
>
> Hi there,
>
>> having multiple 'listen' with IPs or, just one 'listen' with a hostname
>> which resolves to more than one IP - is it possible to tell Nginx not fail
>> when one of IPs is absent, does not exist?
> I think that stock nginx does not support that.
>
> There are possibly ways to avoid the failure; but they all fundamentally
> are different ways to do "only bind to locally-existing addresses".
>
> Cheers,
>
> f

I confess I am returning to Nginx after many long years away, and this curious fact - if Nginx cannot do that - is a surprise to me. I thought such a "feature" would be in Nginx by now, if not devised by developers then included by popular demand - looking at the options/params to 'listen', something like 'remain' or 'insist' which would instruct Nginx to start & continue to work and hook onto the IP when/after it appeared (but also continue to work after the IP disappeared).

thanks, L.

From francis at daoine.org Sun Feb 13 11:24:50 2022
From: francis at daoine.org (Francis Daly)
Date: Sun, 13 Feb 2022 11:24:50 +0000
Subject: listen on IPs but do not fail if one is absent - ?
In-Reply-To: <70c93eed-fe0a-d224-ff00-0c392bd2f716@yahoo.co.uk>
References: <9860ac91-895c-3936-88dd-db97697cd9a2.ref@yahoo.co.uk> <9860ac91-895c-3936-88dd-db97697cd9a2@yahoo.co.uk> <20220212132617.GG14624@daoine.org> <70c93eed-fe0a-d224-ff00-0c392bd2f716@yahoo.co.uk>
Message-ID: <20220213112450.GH14624@daoine.org>

On Sun, Feb 13, 2022 at 07:56:16AM +0000, lejeczek via nginx wrote:
> On 12/02/2022 13:26, Francis Daly wrote:
> > On Sat, Feb 12, 2022 at 10:11:25AM +0000, lejeczek via nginx wrote:

Hi there,

> > > having multiple 'listen' with IPs or, just one 'listen' with a hostname
> > > which resolves to more than one IP - is it possible to tell Nginx not fail
> > > when one of IPs is absent, does not exist?
> > I think that stock nginx does not support that.
> >
> > There are possibly ways to avoid the failure; but they all fundamentally
> > are different ways to do "only bind to locally-existing addresses".

> I confess I am returning to Nginx after many long years away, and this
> curious fact - if Nginx cannot do that - is a surprise to me.

Use cases tend to be addressed when a developer has the incentive to write the code. If the feature that you are hoping for has not been implemented in a way that you are hoping for, then probably no-one cared enough to ensure that it was done in that way.

> I thought such a "feature" would be in Nginx by now, if not devised by
> developers then included by popular demand - looking at the options/params
> to 'listen', something like 'remain' or 'insist' which would instruct Nginx
> to start & continue to work and hook onto the IP when/after it appeared (but
> also continue to work after the IP disappeared).

As Maxim indicates in the parallel reply: nginx will not fail if it does not try to bind() to a non-existing address:port.
And you can arrange that, by making sure that your "listen ip:port" directive does not include any of the parameters that require a bind(); and by making sure that, for each port that you listen on, there is also a "listen *:port"-equivalent directive somewhere in the config. So possibly the feature that you want already exists with restrictions that you are happy to work within? Cheers, f -- Francis Daly francis at daoine.org From hritikxx8 at gmail.com Sun Feb 13 16:45:01 2022 From: hritikxx8 at gmail.com (Hritik Vijay) Date: Sun, 13 Feb 2022 22:15:01 +0530 Subject: Is nginx still vulnerable to CVE-2009-4487 ? Message-ID: Hello The advisories page (https://nginx.org/en/security_advisories.html) for nginx mentions the following: An error log data are not sanitized Severity: none CVE-2009-4487 Not vulnerable: none Vulnerable: all Was this vulnerability ever fixed ? If so, can we please get the advisory updated ? Hrtk From moshe at ymkatz.net Sun Feb 13 18:44:00 2022 From: moshe at ymkatz.net (Moshe Katz) Date: Sun, 13 Feb 2022 13:44:00 -0500 Subject: Is nginx still vulnerable to CVE-2009-4487 ? In-Reply-To: References: Message-ID: I can't speak for the nginx team, but as noted by "Severity: none", I assume they agree with many other vendors that this is not actually a vulnerability in nginx itself. For example, here is what the authors of Varnish said in response to this CVE: > This is not a security problem in Varnish or any other piece of software which writes a logfile. > > The real problem is the mistaken belief that you can cat(1) a random logfile to your terminal safely. > >This is not a new issue. I first remember the issue with xterm(1)'s inadvisably implemented escape-sequences in a root-context, brought up heatedly, in 1988, possibly late 1987, at Copenhagens University Computer Science dept. (Diku.dk). Since then, nothing much have changed. > > The wisdom of terminal-response-escapes in general have been questioned at regular intervals, but still none of the major terminal emulation programs have seen fit to discard these sequences, probably in a misguided attempt at compatibility with no longer used 1970'es technology. > > I admit that listing "found a security hole in all HTTP-related programs that write logfiles" will look more impressive on a resume, but I think it is misguided and a sign of trophy-hunting having overtaken common sense. > > Instead of blaming any and all programs which writes logfiles, it would be much more productive, from a security point of view, to get the terminal emulation programs to stop doing stupid things, and thus fix this and other security problems once and for all. Moshe On Sun, Feb 13, 2022 at 11:46 AM Hritik Vijay wrote: > Hello > > The advisories page (https://nginx.org/en/security_advisories.html) for > nginx mentions the following: > An error log data are not sanitized > Severity: none > CVE-2009-4487 > Not vulnerable: none > Vulnerable: all > > Was this vulnerability ever fixed ? If so, can we please get the > advisory updated ? > > Hrtk > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfs.world at gmail.com Sun Feb 13 21:17:27 2022 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Sun, 13 Feb 2022 13:17:27 -0800 Subject: Is nginx still vulnerable to CVE-2009-4487 ? 
In-Reply-To: References: Message-ID: 
On Sun, Feb 13, 2022 at 10:45 AM Moshe Katz wrote:
>
> I can't speak for the nginx team, but as noted by "Severity: none", I assume they agree with many other vendors that this is not actually a vulnerability in nginx itself.
>
> For example, here is what the authors of Varnish said in response to this CVE:
>
> > This is not a security problem in Varnish or any other piece of software which writes a logfile.
> >
> > The real problem is the mistaken belief that you can cat(1) a random logfile to your terminal safely.
> >
> >This is not a new issue. I first remember the issue with xterm(1)'s inadvisably implemented escape-sequences in a root-context, brought up heatedly, in 1988, possibly late 1987, at Copenhagens University Computer Science dept. (Diku.dk). Since then, nothing much have changed.
> >
> > The wisdom of terminal-response-escapes in general have been questioned at regular intervals, but still none of the major terminal emulation programs have seen fit to discard these sequences, probably in a misguided attempt at compatibility with no longer used 1970'es technology.
> >
> > I admit that listing "found a security hole in all HTTP-related programs that write logfiles" will look more impressive on a resume, but I think it is misguided and a sign of trophy-hunting having overtaken common sense.
> >
> > Instead of blaming any and all programs which writes logfiles, it would be much more productive, from a security point of view, to get the terminal emulation programs to stop doing stupid things, and thus fix this and other security problems once and for all.
>

this is all fair and good (and I don't disagree that terminal emulators need to get better) - but I'm just wondering, does anybody here do error logging at info or debug? If you send the logs off somewhere to a logging system, how do you parse these logs?

-jf

From nginx-forum at forum.nginx.org Mon Feb 14 02:15:30 2022
From: nginx-forum at forum.nginx.org (marioja2)
Date: Sun, 13 Feb 2022 21:15:30 -0500
Subject: Problem running nginx in a container
In-Reply-To: <20220209002523.GF14624@daoine.org>
References: <20220209002523.GF14624@daoine.org>
Message-ID: <3c0e41c8b283ba6eedac743417248f29.NginxMailingListEnglish@forum.nginx.org>

It took a while and it was staring me in the face, but I did figure it out. The script that starts the container would make changes to some php-fpm7 (FastCGI process manager) settings after it had started. Obviously, these changes did not take effect until the container was restarted. This explains the difference in behaviour.

Thanks, Francis Daly, for all your insightful comments. Not being familiar with nginx, I would never have figured it out without your input.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293592,293656#msg-293656

From praveenssit at gmail.com Mon Feb 14 03:48:03 2022
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Mon, 14 Feb 2022 09:18:03 +0530
Subject: [Help][File download issue]
In-Reply-To: References: Message-ID: 

Hello, Not sure if my messages are reaching the list. Can someone please confirm?

On Thu, Feb 10, 2022 at 10:52 AM Praveen Kumar K S wrote:

> Hello,
>
> I tried passing keep-alive to curl and still it is failing at 60%. Any
> clues?
>
> On Wed, Feb 9, 2022 at 4:17 PM Praveen Kumar K S
> wrote:
>
>> Hello,
>>
>> I'm trying to download a file using curl and it's failing with the below
>> error message.
>>
>> curl: (18) transfer closed with 723659786 bytes remaining to read
>>
>> In nginx error logs, I see the warning below.
>> >> 2022/02/09 10:35:35 [warn] 15737#15737: *74233 an upstream response is >> buffered to a temporary file /var/lib/nginx/proxy/9/84/0000000849 while >> reading upstream, client: 1.2.3.4, server: x.y.z, request: "POST >> /v1/events/shares/search HTTP/1.1", upstream: " >> http://172.31.0.61:80/v1/events/shares/search", host: "x.y.z" >> >> Is it something to do with nginx config? Thanks. >> >> -- >> >> >> *Regards,* >> >> *K S Praveen Kumar* >> > > > -- > > > *Regards,* > > > *K S Praveen Kumar* > -- *Regards,* *K S Praveen Kumar* -------------- next part -------------- An HTML attachment was scrubbed... URL: From skip.montanaro at gmail.com Mon Feb 14 13:39:46 2022 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Mon, 14 Feb 2022 07:39:46 -0600 Subject: Obvious malware rejection module? Message-ID: I have a simple website with NGINX fronting Gunicorn and Flask. Of course, within minutes of it going live, I started to get obvious crap, probing for vulnerabilities. Nothing's gotten through yet, at least as far as I can tell. Even so, it would be nice if such malware-type requests were rejected by NGINX before they reach the backend. Is there a module for NGINX which implements something like a blackhole list similar to what you find on email servers, that is, offloading the acceptance or rejection of certain paths to a community-managed database? I scrolled through the list here: https://www.nginx.com/resources/wiki/modules/ but didn't see anything obvious. I could establish my own rewrite rules (and probably will) for some of the most egregious requests (anything ".php" would get dropped, for example), but was hoping something already existed. Thanks, Skip Montanaro -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Mon Feb 14 15:58:27 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 14 Feb 2022 18:58:27 +0300 Subject: Obvious malware rejection module? In-Reply-To: References: Message-ID: Hi Skip, hope you're doing well. On Mon, Feb 14, 2022 at 07:39:46AM -0600, Skip Montanaro wrote: > I have a simple website with NGINX fronting Gunicorn and Flask. Of course, > within minutes of it going live, I started to get obvious crap, probing for > vulnerabilities. Nothing's gotten through yet, at least as far as I can > tell. Even so, it would be nice if such malware-type requests were rejected > by NGINX before they reach the backend. > > Is there a module for NGINX which implements something like a blackhole > list similar to what you find on email servers, that is, offloading the > acceptance or rejection of certain paths to a community-managed database? I > scrolled through the list here: > > https://www.nginx.com/resources/wiki/modules/ > > but didn't see anything obvious. I could establish my own rewrite rules > (and probably will) for some of the most egregious requests (anything > ".php" would get dropped, for example), but was hoping something already > existed. You'd probably need to install a WAF, Web Application Firewall. Some of those are avaialble for free. -- Sergey Osokin From skip.montanaro at gmail.com Mon Feb 14 16:28:41 2022 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Mon, 14 Feb 2022 10:28:41 -0600 Subject: Obvious malware rejection module? In-Reply-To: References: Message-ID: > > You'd probably need to install a WAF, Web Application Firewall. Some > of those are avaialble for free. > Thanks, Sergey, that's an interesting topic. It looks like I have some reading to do... 
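In the meantime, the "drop anything .php" rewrite idea mentioned earlier
can be a sketch as small as this, assuming the site serves no PHP at all:

location ~* \.php$ {
    # 444 is an nginx-specific non-standard code: close the
    # connection without sending any response to the prober
    return 444;
}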
Skip Montanaro -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Mon Feb 14 16:29:18 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 14 Feb 2022 19:29:18 +0300 Subject: [Help][File download issue] In-Reply-To: References: Message-ID: Hi Praveen, hope you're doing well these days. On Wed, Feb 09, 2022 at 04:17:57PM +0530, Praveen Kumar K S wrote: > Hello, > > I'm trying to download a file using curl and it's failing with the below > error message. > > curl: (18) transfer closed with 723659786 bytes remaining to read > > In nginx error logs, I see the warning below. > > 2022/02/09 10:35:35 [warn] 15737#15737: *74233 an upstream response is > buffered to a temporary file /var/lib/nginx/proxy/9/84/0000000849 while > reading upstream, client: 1.2.3.4, server: x.y.z, request: "POST > /v1/events/shares/search HTTP/1.1", upstream: " > http://172.31.0.61:80/v1/events/shares/search", host: "x.y.z" > > Is it something to do with nginx config? Thanks. I'd recommend to visit the following link, https://support.f5.com/csp/article/K48373902 Hope that helps. -- Sergey A. Osokin From lists at lazygranch.com Mon Feb 14 17:03:50 2022 From: lists at lazygranch.com (lists) Date: Mon, 14 Feb 2022 09:03:50 -0800 Subject: Obvious malware rejection module? In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From skip.montanaro at gmail.com Mon Feb 14 20:26:28 2022 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Mon, 14 Feb 2022 14:26:28 -0600 Subject: Obvious malware rejection module? In-Reply-To: References: Message-ID: Thanks for the reply. I actually am interested in bots, at least in the form of legitimate web crawlers. My itty bitty website was specifically intended to expose the archives of a defunct (but formerly public) mailing list to interested parties and said crawlers: https://smontanaro.net/CR/ Performance requirements are decidedly minimal. In addition, the last time I did anything with the web was about 20 years ago, so this whole enterprise is a fun (for some definition of "fun") reacquaintance with more current web application development. I spent most of my career (I'm now retired) using Python to implement various things for various uses, so Flask was a pretty straightforward way to get something up and running quickly. As a placeholder for something more sophisticated, a couple location directives squashed a large fraction of the problematic URIs. Skip Montanaro -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Mon Feb 14 23:14:03 2022 From: lists at lazygranch.com (lists) Date: Mon, 14 Feb 2022 15:14:03 -0800 Subject: Obvious malware rejection module? In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From noloader at gmail.com Tue Feb 15 00:48:36 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 14 Feb 2022 19:48:36 -0500 Subject: Obvious malware rejection module? In-Reply-To: References: Message-ID: On Mon, Feb 14, 2022 at 6:17 PM lists wrote: > ... > > I have plenty of transit capacity. I can serve 3TB a month and I do 30GB. > What I don't have is CPU power. I have a one CPU VPS. The CPU is shared > resource. I think the RAM used by the VPS is more "available," if that > makes any sense. That is they don't swap you out but let your VPS sit in > RAM. So something RAM intensive is fine but CPU intensive is not. > Off-topic, we used to use GoDaddy for VPS for our free/open source software project. 
It was a crummy service offering one virtual core, 1 GB of RAM and no swap file. The VPS ran on a circa-2005 4-core Athlon machine. The server had constant problems because the OOM killer would whack our MySQL process. We needed it for a Mediawiki installation . The machine could not handle the LAMP stack. And GoDaddy would send us nastygrams threatening to stop service because of bots. When the spam bots tried to create a new page the wiki server pegged at about 100% cpu for a moment while that new editor was spun-up. We now use Ionos (https://www.ionos.com/hosting/web-hosting). The cheapest plan is $1/month. We splurge a bit and pay $5 for the extra core and extra memory. No more OOM problems, and no more nastygrams. Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From skip.montanaro at gmail.com Tue Feb 15 01:27:15 2022 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Mon, 14 Feb 2022 19:27:15 -0600 Subject: Obvious malware rejection module? In-Reply-To: References: Message-ID: > Off-topic, we used to use GoDaddy for VPS for our free/open source > software project. > We now use Ionos (https://www.ionos.com/hosting/web-hosting). The cheapest > plan is $1/month. We splurge a bit and pay $5 for the extra core and extra > memory. No more OOM problems, and no more nastygrams. > Off-topic or not, I appreciate the details. My little website is currently using Oracle's "always free" service, single core VM with 2gb RAM. So far, so good. Of course, I'm just serving static email files with a small amount of filtering, so not much work to do. I also appreciate the bot feedback from the Lazy G Ranch. I will look into that. Skip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From crenatovb at gmail.com Tue Feb 15 02:08:53 2022 From: crenatovb at gmail.com (Carlos Renato) Date: Mon, 14 Feb 2022 23:08:53 -0300 Subject: NGINX load balancing - Proxy Message-ID: Hey guys, Can someone help me? I'm using NGINX to direct connections to two Proxy servers. I did a simple setup. upstream webgateway { server 192.168.239.151:9090; server 192.168.239.152:9090; } server { listen 81; server_name proxy.lab.local; location / { proxy_pass http://webgateway; } } NGINX is listening on port 81. If I configure the proxy IP in the browser, the client "goes out" to the Internet. Browser: 192.168.239.151:9090 or 192.168.239.152:9090 - Its Ok! If I configure the NGINX IP in the browser, the client "does not go out" to the internet. Browser: 192.168.239.151:81 - No! The packet even arrives at the proxy, but the browser tries to load " http://webgateway.com" Can someone help me? Thanks. -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Feb 15 02:49:39 2022 From: lists at lazygranch.com (lists) Date: Mon, 14 Feb 2022 18:49:39 -0800 Subject: Obvious malware rejection module? In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Tue Feb 15 03:52:12 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 15 Feb 2022 06:52:12 +0300 Subject: NGINX load balancing - Proxy In-Reply-To: References: Message-ID: Hi Carlos, hope you're doing well. On Mon, Feb 14, 2022 at 11:08:53PM -0300, Carlos Renato wrote: > Hey guys, > > Can someone help me? I'm using NGINX to direct connections to two Proxy > servers. > > I did a simple setup. 
> > upstream webgateway { > server 192.168.239.151:9090; > server 192.168.239.152:9090; > } > > server { > listen 81; > server_name proxy.lab.local; > > location / { > proxy_pass http://webgateway; > } > } > > NGINX is listening on port 81. > > If I configure the proxy IP in the browser, the client "goes out" to the > Internet. > > Browser: > 192.168.239.151:9090 or 192.168.239.152:9090 - Its Ok! > > If I configure the NGINX IP in the browser, the client "does not go out" to > the internet. > > Browser: > 192.168.239.151:81 - No! > > The packet even arrives at the proxy, but the browser tries to load " > http://webgateway.com" My guess is you want to configure NGINX as a forward proxy, and not as a reverse proxy. And if so, that's not the case for NGINX. -- Sergey A. Osokin From nginx-forum at forum.nginx.org Tue Feb 15 04:51:45 2022 From: nginx-forum at forum.nginx.org (ckchauhan) Date: Mon, 14 Feb 2022 23:51:45 -0500 Subject: SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while proxying connection In-Reply-To: References: Message-ID: <596bd7c29263e02917b3422d660eecb2.NginxMailingListEnglish@forum.nginx.org> Hi Team, Can anyone help us here? We really need your inputs here. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293640,293677#msg-293677 From crenatovb at gmail.com Tue Feb 15 11:38:07 2022 From: crenatovb at gmail.com (Carlos Renato) Date: Tue, 15 Feb 2022 08:38:07 -0300 Subject: NGINX load balancing - Proxy In-Reply-To: References: Message-ID: Hello, I would like to use NGINX to balance traffic between two McAfee (standalone) proxy. I've made some advances and I'm able to open an HTTP page. Now, I need the client to open an HTTPS request. Em ter., 15 de fev. de 2022 às 00:54, Sergey A. Osokin escreveu: > Hi Carlos, > > hope you're doing well. > > On Mon, Feb 14, 2022 at 11:08:53PM -0300, Carlos Renato wrote: > > Hey guys, > > > > Can someone help me? I'm using NGINX to direct connections to two Proxy > > servers. > > > > I did a simple setup. > > > > upstream webgateway { > > server 192.168.239.151:9090; > > server 192.168.239.152:9090; > > } > > > > server { > > listen 81; > > server_name proxy.lab.local; > > > > location / { > > proxy_pass http://webgateway; > > } > > } > > > > NGINX is listening on port 81. > > > > If I configure the proxy IP in the browser, the client "goes out" to the > > Internet. > > > > Browser: > > 192.168.239.151:9090 or 192.168.239.152:9090 - Its Ok! > > > > If I configure the NGINX IP in the browser, the client "does not go out" > to > > the internet. > > > > Browser: > > 192.168.239.151:81 - No! > > > > The packet even arrives at the proxy, but the browser tries to load " > > http://webgateway.com" > > My guess is you want to configure NGINX as a forward proxy, and not > as a reverse proxy. And if so, that's not the case for NGINX. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue Feb 15 13:02:52 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Feb 2022 13:02:52 +0000 Subject: NGINX load balancing - Proxy In-Reply-To: References: Message-ID: <20220215130252.GI14624@daoine.org> On Tue, Feb 15, 2022 at 08:38:07AM -0300, Carlos Renato wrote: Hi there, > Hello, I would like to use NGINX to balance traffic between two McAfee > (standalone) proxy. nginx as a server will listen for http or https requests; it does not "do" http-proxy requests. (As in: it is not a http (forward) proxy server.) nginx as a client will make http or https requests of another server; it does not make http-proxy requests. (As in: it will not talk to a http proxy server.) There are some circumstances under which you can kind-of sort-of make it work maybe well enough sometimes; but you would be fighting the application and things will probably not be smooth. So, for http-proxy traffic, you are probably better off using nginx's "stream" feature instead of "http" feature, and just let nginx be a tcp-pass-through. http://nginx.org/r/stream and things like http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html and http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html -- it's conceptually similar to the http things that you already know, except there is nothing http-specific about it. > I've made some advances and I'm able to open an HTTP page. > > Now, I need the client to open an HTTPS request. If the client is configured to use a http proxy for a https request, it will probably issue a CONNECT request to the proxy, expecting that the proxy will open a connection to the external https server. If nginx is a tcp-pass-through, all of that will be done on your upstream McAfee servers, the way that you expect. Good luck with it, f -- Francis Daly francis at daoine.org From crenatovb at gmail.com Tue Feb 15 13:29:50 2022 From: crenatovb at gmail.com (Carlos Renato) Date: Tue, 15 Feb 2022 10:29:50 -0300 Subject: NGINX load balancing - Proxy In-Reply-To: <20220215130252.GI14624@daoine.org> References: <20220215130252.GI14624@daoine.org> Message-ID: Hi Francis, Thanks for the reply. My file is like this. upstream webgateway { server 192.168.239.151:9090; server 192.168.239.152:9090; keepalive 10; } server { listen 9191; server_name proxy.lab.local; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Request-URI $request_uri; proxy_redirect off; proxy_pass http://webgateway; } } I'm able to open HTTP requests in the client's browser. The problem is being the HTTPS requests. Is there any way for NGINX to receive the traffic and forward it (balanced) to the proxy servers? A simpler way. That way I could include the Web Gateway certificate in the Windows client. Thank You! Em ter., 15 de fev. de 2022 às 10:05, Francis Daly escreveu: > On Tue, Feb 15, 2022 at 08:38:07AM -0300, Carlos Renato wrote: > > Hi there, > > > Hello, I would like to use NGINX to balance traffic between two McAfee > > (standalone) proxy. > > nginx as a server will listen for http or https requests; it does not "do" > http-proxy requests. (As in: it is not a http (forward) proxy server.) > > nginx as a client will make http or https requests of another server; > it does not make http-proxy requests. (As in: it will not talk to a http > proxy server.) 
> > There are some circumstances under which you can kind-of sort-of make > it work maybe well enough sometimes; but you would be fighting the > application and things will probably not be smooth. > > So, for http-proxy traffic, you are probably better off using nginx's > "stream" feature instead of "http" feature, and just let nginx be a > tcp-pass-through. > > http://nginx.org/r/stream and things like > http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html and > http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html -- > it's conceptually similar to the http things that you already know, > except there is nothing http-specific about it. > > > I've made some advances and I'm able to open an HTTP page. > > > > Now, I need the client to open an HTTPS request. > > If the client is configured to use a http proxy for a https request, > it will probably issue a CONNECT request to the proxy, expecting that > the proxy will open a connection to the external https server. > > If nginx is a tcp-pass-through, all of that will be done on your upstream > McAfee servers, the way that you expect. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwpowelllde at gmail.com Tue Feb 15 13:41:08 2022 From: mwpowelllde at gmail.com (Michael Powell) Date: Tue, 15 Feb 2022 08:41:08 -0500 Subject: OAuth/OpenID Message-ID: Hello, Setting up some web sites, etc, looking into alternatives to Amazon Cognito, for instance, for user and/or 'identity' management, integration with 3P OAuth providers, i.e. Google, Facebook, etc. As I understand it, nginx provides these features, and more? Thank you... Michael W. Powell -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 15 14:13:59 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Feb 2022 14:13:59 +0000 Subject: NGINX load balancing - Proxy In-Reply-To: References: <20220215130252.GI14624@daoine.org> Message-ID: <20220215141359.GJ14624@daoine.org> On Tue, Feb 15, 2022 at 10:29:50AM -0300, Carlos Renato wrote: Hi there, > My file is like this. Untested by me, but I have edited this to now resemble what I think you want... > upstream webgateway { > server 192.168.239.151:9090; > server 192.168.239.152:9090; > } > > server { > listen 9191; > proxy_pass webgateway; > } ...that is, keep your upstream{} but remove the keepalive; adjust your server{} to just have "listen" and a different "proxy_pass"; and put the whole thing inside "stream{}" not "http{}". > I'm able to open HTTP requests in the client's browser. > The problem is being the HTTPS requests. > Is there any way for NGINX to receive the traffic and forward it (balanced) > to the proxy servers? > A simpler way. That way I could include the Web Gateway certificate in the > Windows client. If I have understood correctly what you are trying to do, the notes at https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/ look relevant. That expands on what is at http://nginx.org/en/docs/stream/ngx_stream_core_module.html. Cheers, f -- Francis Daly francis at daoine.org From osa at freebsd.org.ru Tue Feb 15 15:08:21 2022 From: osa at freebsd.org.ru (Sergey A. 
Osokin) Date: Tue, 15 Feb 2022 18:08:21 +0300 Subject: OAuth/OpenID In-Reply-To: References: Message-ID: Hi Michael, hope you're doing well. On Tue, Feb 15, 2022 at 08:41:08AM -0500, Michael Powell wrote: > Hello, > > Setting up some web sites, etc, looking into alternatives to Amazon > Cognito, for instance, for user and/or 'identity' management, integration > with 3P OAuth providers, i.e. Google, Facebook, etc. As I understand it, > nginx provides these features, and more? Yes, it's possible to setup OIDC flow with NGINX products. Please note an Identity Provider (IdP) needs to be configured as well, and that one is a separate product. Here's the reference implementation of OpenID Connection integration for NGINX Plus, [1]. It uitilizes some NGINX Plus features, such as auth_jwt directive, [2] from the ngx_http_auth_jwt_module, [3], keyval [4] and keyval_zone [5] directives from ngx_http_keyval_module [6] module, and NGINX JavaScript module, [7]. References: [1] https://github.com/nginxinc/nginx-openid-connect [2] https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html#auth_jwt [3] https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html [4] https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval [5] https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval_zone [6] https://nginx.org/en/docs/http/ngx_http_keyval_module.html [7] http://nginx.org/en/docs/njs/ -- Sergey Osokin From crenatovb at gmail.com Tue Feb 15 15:13:28 2022 From: crenatovb at gmail.com (Carlos Renato) Date: Tue, 15 Feb 2022 12:13:28 -0300 Subject: NGINX load balancing - Proxy In-Reply-To: <20220215141359.GJ14624@daoine.org> References: <20220215130252.GI14624@daoine.org> <20220215141359.GJ14624@daoine.org> Message-ID: Hi Francis, Thanks for the reply and willingness to help me. [root at proxy conf.d]# cat teste.conf upstream webgateway { server 192.168.239.151:9090; server 192.168.239.152:9090; } server { listen 9191; proxy_pass webgateway; } [root at proxy conf.d]# I cannot start NGINX. Feb 15 01:56:24 proxy.lab.local systemd[1]: Starting The nginx HTTP and reverse proxy server... Feb 15 01:56:24 proxy.lab.local nginx[12274]: nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/teste.conf:8 Feb 15 01:56:24 proxy.lab.local nginx[12274]: nginx: configuration file /etc/nginx/nginx.conf test failed Feb 15 01:56:24 proxy.lab.local systemd[1]: nginx.service: control process exited, code=exited status=1 Feb 15 01:56:24 proxy.lab.local systemd[1]: Failed to start The nginx HTTP and reverse proxy server. Feb 15 01:56:24 proxy.lab.local systemd[1]: Unit nginx.service entered failed state. Feb 15 01:56:24 proxy.lab.local systemd[1]: nginx.service failed. I can only start NGINX if the file is as below. upstream webgateway { server 192.168.239.151:9090; server 192.168.239.152:9090; } server { listen 9191; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://webgateway; } } The behavior on the client is the same. I can load HTTP page and cannot HTTPS. On the Web Gateway, I view the access logs with the client's IP. In tests, I noticed that the "http_pass" below the "listen", I can't start NGINX. I had to also put it as "http" or http://webgateway; If you don't put the parameters below, the proxy tries to load the page http://webgateway. proxy_set_header Host $host; The parameter below registers the IP-Real of the client in the proxy. proxy_set_header X-Real-IP $remote_addr; Any tips? Thank You. Em ter., 15 de fev. 
de 2022 às 11:16, Francis Daly escreveu: > On Tue, Feb 15, 2022 at 10:29:50AM -0300, Carlos Renato wrote: > > Hi there, > > > My file is like this. > > Untested by me, but I have edited this to now resemble what I think > you want... > > > upstream webgateway { > > server 192.168.239.151:9090; > > server 192.168.239.152:9090; > > } > > > > server { > > listen 9191; > > proxy_pass webgateway; > > } > > ...that is, keep your upstream{} but remove the keepalive; adjust your > server{} to just have "listen" and a different "proxy_pass"; and put > the whole thing inside "stream{}" not "http{}". > > > I'm able to open HTTP requests in the client's browser. > > The problem is being the HTTPS requests. > > Is there any way for NGINX to receive the traffic and forward it > (balanced) > > to the proxy servers? > > A simpler way. That way I could include the Web Gateway certificate in > the > > Windows client. > > If I have understood correctly what you are trying to do, the notes at > > https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/ > look relevant. That expands on what is at > http://nginx.org/en/docs/stream/ngx_stream_core_module.html. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.vybihal at gmail.com Tue Feb 15 15:22:30 2022 From: josef.vybihal at gmail.com (=?UTF-8?Q?Josef_Vyb=C3=ADhal?=) Date: Tue, 15 Feb 2022 16:22:30 +0100 Subject: NGINX load balancing - Proxy In-Reply-To: References: <20220215130252.GI14624@daoine.org> <20220215141359.GJ14624@daoine.org> Message-ID: Seems to me, that you are not using stream{} module as noted by Francis, and you are still putting server block to http{}. If not, post nginx -T J. On Tue, Feb 15, 2022 at 4:17 PM Carlos Renato wrote: > > Hi Francis, > Thanks for the reply and willingness to help me. > > [root at proxy conf.d]# cat teste.conf > upstream webgateway { > server 192.168.239.151:9090; > server 192.168.239.152:9090; > } > > server { > listen 9191; > proxy_pass webgateway; > } > [root at proxy conf.d]# > > I cannot start NGINX. > > Feb 15 01:56:24 proxy.lab.local systemd[1]: Starting The nginx HTTP and reverse proxy server... > Feb 15 01:56:24 proxy.lab.local nginx[12274]: nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/teste.conf:8 > Feb 15 01:56:24 proxy.lab.local nginx[12274]: nginx: configuration file /etc/nginx/nginx.conf test failed > Feb 15 01:56:24 proxy.lab.local systemd[1]: nginx.service: control process exited, code=exited status=1 > Feb 15 01:56:24 proxy.lab.local systemd[1]: Failed to start The nginx HTTP and reverse proxy server. > Feb 15 01:56:24 proxy.lab.local systemd[1]: Unit nginx.service entered failed state. > Feb 15 01:56:24 proxy.lab.local systemd[1]: nginx.service failed. > > I can only start NGINX if the file is as below. > > upstream webgateway { > server 192.168.239.151:9090; > server 192.168.239.152:9090; > } > > server { > listen 9191; > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_pass http://webgateway; > } > } > > The behavior on the client is the same. > I can load HTTP page and cannot HTTPS. On the Web Gateway, I view the access logs with the client's IP. > > In tests, I noticed that the "http_pass" below the "listen", I can't start NGINX. 
> I had to also put it as "http" or http://webgateway; > > If you don't put the parameters below, the proxy tries to load the page http://webgateway. > proxy_set_header Host $host; > > The parameter below registers the IP-Real of the client in the proxy. > proxy_set_header X-Real-IP $remote_addr; > > Any tips? Thank You. > > > > Em ter., 15 de fev. de 2022 às 11:16, Francis Daly escreveu: >> >> On Tue, Feb 15, 2022 at 10:29:50AM -0300, Carlos Renato wrote: >> >> Hi there, >> >> > My file is like this. >> >> Untested by me, but I have edited this to now resemble what I think >> you want... >> >> > upstream webgateway { >> > server 192.168.239.151:9090; >> > server 192.168.239.152:9090; >> > } >> > >> > server { >> > listen 9191; >> > proxy_pass webgateway; >> > } >> >> ...that is, keep your upstream{} but remove the keepalive; adjust your >> server{} to just have "listen" and a different "proxy_pass"; and put >> the whole thing inside "stream{}" not "http{}". >> >> > I'm able to open HTTP requests in the client's browser. >> > The problem is being the HTTPS requests. >> > Is there any way for NGINX to receive the traffic and forward it (balanced) >> > to the proxy servers? >> > A simpler way. That way I could include the Web Gateway certificate in the >> > Windows client. >> >> If I have understood correctly what you are trying to do, the notes at >> https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/ >> look relevant. That expands on what is at >> http://nginx.org/en/docs/stream/ngx_stream_core_module.html. >> >> Cheers, >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list -- nginx at nginx.org >> To unsubscribe send an email to nginx-leave at nginx.org > > > > -- > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From crenatovb at gmail.com Tue Feb 15 15:31:06 2022 From: crenatovb at gmail.com (Carlos Renato) Date: Tue, 15 Feb 2022 12:31:06 -0300 Subject: NGINX load balancing - Proxy In-Reply-To: References: <20220215130252.GI14624@daoine.org> <20220215141359.GJ14624@daoine.org> Message-ID: Hi, Josef This what I get when trying to start NGINX with the simplified file. [root at proxy conf.d]# cat webgateway.conf upstream webgateway { server 192.168.239.151:9090; server 192.168.239.152:9090; } server { listen 9191; proxy_pass webgateway; } } [root at proxy conf.d]# nginx -t nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/webgateway.conf:8 nginx: configuration file /etc/nginx/nginx.conf test failed [root at proxy conf.d]# systemctl status nginx ● nginx.service - The nginx HTTP and reverse proxy server Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2022-02-15 02:38:56 -03; 39s ago Process: 14415 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS) Process: 14700 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE) Process: 14699 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS) Main PID: 14417 (code=exited, status=0/SUCCESS) Feb 15 02:38:56 proxy.lab.local systemd[1]: Starting The nginx HTTP and reverse proxy server... 
Feb 15 02:38:56 proxy.lab.local nginx[14700]: nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/webgateway.conf:8 Feb 15 02:38:56 proxy.lab.local nginx[14700]: nginx: configuration file /etc/nginx/nginx.conf test failed Feb 15 02:38:56 proxy.lab.local systemd[1]: nginx.service: control process exited, code=exited status=1 Feb 15 02:38:56 proxy.lab.local systemd[1]: Failed to start The nginx HTTP and reverse proxy server. Feb 15 02:38:56 proxy.lab.local systemd[1]: Unit nginx.service entered failed state. Feb 15 02:38:56 proxy.lab.local systemd[1]: nginx.service failed. [root at proxy conf.d]# Thank You! Em ter., 15 de fev. de 2022 às 12:25, Josef Vybíhal escreveu: > Seems to me, that you are not using stream{} module as noted by > Francis, and you are still putting server block to http{}. If not, > post nginx -T > J. > > On Tue, Feb 15, 2022 at 4:17 PM Carlos Renato wrote: > > > > Hi Francis, > > Thanks for the reply and willingness to help me. > > > > [root at proxy conf.d]# cat teste.conf > > upstream webgateway { > > server 192.168.239.151:9090; > > server 192.168.239.152:9090; > > } > > > > server { > > listen 9191; > > proxy_pass webgateway; > > } > > [root at proxy conf.d]# > > > > I cannot start NGINX. > > > > Feb 15 01:56:24 proxy.lab.local systemd[1]: Starting The nginx HTTP and > reverse proxy server... > > Feb 15 01:56:24 proxy.lab.local nginx[12274]: nginx: [emerg] > "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/teste.conf:8 > > Feb 15 01:56:24 proxy.lab.local nginx[12274]: nginx: configuration file > /etc/nginx/nginx.conf test failed > > Feb 15 01:56:24 proxy.lab.local systemd[1]: nginx.service: control > process exited, code=exited status=1 > > Feb 15 01:56:24 proxy.lab.local systemd[1]: Failed to start The nginx > HTTP and reverse proxy server. > > Feb 15 01:56:24 proxy.lab.local systemd[1]: Unit nginx.service entered > failed state. > > Feb 15 01:56:24 proxy.lab.local systemd[1]: nginx.service failed. > > > > I can only start NGINX if the file is as below. > > > > upstream webgateway { > > server 192.168.239.151:9090; > > server 192.168.239.152:9090; > > } > > > > server { > > listen 9191; > > > > location / { > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_pass http://webgateway; > > } > > } > > > > The behavior on the client is the same. > > I can load HTTP page and cannot HTTPS. On the Web Gateway, I view the > access logs with the client's IP. > > > > In tests, I noticed that the "http_pass" below the "listen", I can't > start NGINX. > > I had to also put it as "http" or http://webgateway; > > > > If you don't put the parameters below, the proxy tries to load the page > http://webgateway. > > proxy_set_header Host $host; > > > > The parameter below registers the IP-Real of the client in the proxy. > > proxy_set_header X-Real-IP $remote_addr; > > > > Any tips? Thank You. > > > > > > > > Em ter., 15 de fev. de 2022 às 11:16, Francis Daly > escreveu: > >> > >> On Tue, Feb 15, 2022 at 10:29:50AM -0300, Carlos Renato wrote: > >> > >> Hi there, > >> > >> > My file is like this. > >> > >> Untested by me, but I have edited this to now resemble what I think > >> you want... 
> >> > >> > upstream webgateway { > >> > server 192.168.239.151:9090; > >> > server 192.168.239.152:9090; > >> > } > >> > > >> > server { > >> > listen 9191; > >> > proxy_pass webgateway; > >> > } > >> > >> ...that is, keep your upstream{} but remove the keepalive; adjust your > >> server{} to just have "listen" and a different "proxy_pass"; and put > >> the whole thing inside "stream{}" not "http{}". > >> > >> > I'm able to open HTTP requests in the client's browser. > >> > The problem is being the HTTPS requests. > >> > Is there any way for NGINX to receive the traffic and forward it > (balanced) > >> > to the proxy servers? > >> > A simpler way. That way I could include the Web Gateway certificate > in the > >> > Windows client. > >> > >> If I have understood correctly what you are trying to do, the notes at > >> > https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/ > >> look relevant. That expands on what is at > >> http://nginx.org/en/docs/stream/ngx_stream_core_module.html. > >> > >> Cheers, > >> > >> f > >> -- > >> Francis Daly francis at daoine.org > >> _______________________________________________ > >> nginx mailing list -- nginx at nginx.org > >> To unsubscribe send an email to nginx-leave at nginx.org > > > > > > > > -- > > > > > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 15 16:47:01 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Feb 2022 16:47:01 +0000 Subject: NGINX load balancing - Proxy In-Reply-To: References: <20220215130252.GI14624@daoine.org> <20220215141359.GJ14624@daoine.org> Message-ID: <20220215164701.GK14624@daoine.org> On Tue, Feb 15, 2022 at 12:31:06PM -0300, Carlos Renato wrote: Hi there, > This what I get when trying to start NGINX with the simplified file. > > [root at proxy conf.d]# cat webgateway.conf > upstream webgateway { > server 192.168.239.151:9090; > server 192.168.239.152:9090; > } > > server { > listen 9191; > proxy_pass webgateway; > } > } I suggest: remove or rename that webgateway.conf file, so that it will not match whatever "include" line is already in your /etc/nginx/nginx.conf. Then add those 10 lines to your /etc/nginx/nginx.conf, inside the already-existing "stream{}" block; or create a new "stream{}" block between the "events{}" block and the "http{}" block that are already in your nginx.conf, and put those 10 lines inside that. The reason is: when nginx runs, it reads one config file. That config file might "include" some others. Typically, there is something like "include /etc/nginx/conf.d/*.conf;" within the http{} block; but we do not want to have this upstream-and-server config within http{}; it must be within stream{}. If your nginx binary does not include stream{}, then you probably want to see about getting a new binary, if you want to use nginx to do the thing that you want to do. Good luck with it, f -- Francis Daly francis at daoine.org From mwpowelllde at gmail.com Tue Feb 15 20:18:43 2022 From: mwpowelllde at gmail.com (Michael Powell) Date: Tue, 15 Feb 2022 15:18:43 -0500 Subject: OAuth/OpenID In-Reply-To: References: Message-ID: On Tue, Feb 15, 2022 at 10:08 AM Sergey A. 
Osokin wrote: > Hi Michael, > > hope you're doing well. > > On Tue, Feb 15, 2022 at 08:41:08AM -0500, Michael Powell wrote: > > Hello, > > > > Setting up some web sites, etc, looking into alternatives to Amazon > > Cognito, for instance, for user and/or 'identity' management, integration > > with 3P OAuth providers, i.e. Google, Facebook, etc. As I understand it, > > nginx provides these features, and more? > > Yes, it's possible to setup OIDC flow with NGINX products. Please note > an Identity Provider (IdP) needs to be configured as well, and that one > is a separate product. > So it is not 'free' or even 'open source'? What is the pricing/cost behind that? Trying to inquire about that through the NGINX site, but my email is not allowed there apparently. We are effectively early stage startup so it is what it is. Is there another way to obtain pricing for our cost purposes? Thank you... Here's the reference implementation of OpenID Connection integration > for NGINX Plus, [1]. It uitilizes some NGINX Plus features, such as > auth_jwt directive, [2] from the ngx_http_auth_jwt_module, [3], keyval [4] > and keyval_zone [5] directives from ngx_http_keyval_module [6] module, > and NGINX JavaScript module, [7]. > > References: > [1] https://github.com/nginxinc/nginx-openid-connect > [2] https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html#auth_jwt > [3] https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html > [4] https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval > [5] https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval_zone > [6] https://nginx.org/en/docs/http/ngx_http_keyval_module.html > [7] http://nginx.org/en/docs/njs/ > > -- > Sergey Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at stormy.ca Tue Feb 15 23:03:05 2022 From: paul at stormy.ca (Paul) Date: Tue, 15 Feb 2022 18:03:05 -0500 Subject: OAuth/OpenID In-Reply-To: References: Message-ID: <79315467-148f-6f97-aa41-b6e02f080d55@stormy.ca> On 2022-02-15 3:18 p.m., Michael Powell wrote: > > On Tue, Feb 15, 2022 at 10:08 AM Sergey A. Osokin > wrote: > > Hi Michael, > > hope you're doing well. > > On Tue, Feb 15, 2022 at 08:41:08AM -0500, Michael Powell wrote: > > Hello, > > > > Setting up some web sites, etc, looking into alternatives to Amazon > > Cognito, for instance, for user and/or 'identity' management, > integration > > with 3P OAuth providers, i.e. Google, Facebook, etc. As I > understand it, > > nginx provides these features, and more? > > Yes, it's possible to setup OIDC flow with NGINX products.  Please note > an Identity Provider (IdP) needs to be configured as well, and that one > is a separate product. > > So it is not 'free' or even 'open source'? What is the pricing/cost > behind that? This is probably going well beyond the normal scope of this list, but have you looked at open source openLDAP? It might do what you want, but might prove time-consuming at your end to set it up. Up to you to decide how your inhouse costs compare to "not free" outside expertise. > Trying to inquire about that through the NGINX site, but my > email is not allowed there apparently Well... gmail in not exactly a business address. Have you tried old-fashioned telephone at 1-800-915-9122? 
Paul --- Disclaimer: I have absolutely no monetary affiliation whatsoever with nginx We are effectively early stage > startup so it is what it is. Is there another way to obtain pricing for > our cost purposes? Thank you... > > Here's the reference implementation of OpenID Connection integration > for NGINX Plus, [1].  It uitilizes some NGINX Plus features, such as > auth_jwt directive, [2] from the ngx_http_auth_jwt_module, [3], > keyval [4] > and keyval_zone [5] directives from ngx_http_keyval_module [6] module, > and NGINX JavaScript module, [7]. > > References: > [1] https://github.com/nginxinc/nginx-openid-connect > > [2] > https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html#auth_jwt > [3] https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html > > [4] > https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval > > [5] > https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval_zone > > [6] https://nginx.org/en/docs/http/ngx_http_keyval_module.html > > [7] http://nginx.org/en/docs/njs/ > > -- > Sergey Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From nginx-forum at forum.nginx.org Wed Feb 16 01:56:18 2022 From: nginx-forum at forum.nginx.org (mevan) Date: Tue, 15 Feb 2022 20:56:18 -0500 Subject: help: proxy to https is working websockets is not! Message-ID: <5c285f15bbd3f959aac3c4cd87a27617.NginxMailingListEnglish@forum.nginx.org> I have the following configuration working. I am able to login to the application and most of it works. However there are certain elements which function as websockets. I see this message in the browser devtools: WebSocket connection to 'wss://erx.asdf.com/ws/stats' failed: edge.min.js:3210 How can I get this last part working? Here is my configuration so far: server { listen 80; server_name erx.asdf.com; location / { return 301 https://$host$request_uri; } } upstream backend { server 192.168.1.1:443; } server { listen 443; server_name erx.asdf.com; location / { proxy_pass https://backend; proxy_ssl_server_name on; proxy_ssl_name DOMAIN; proxy_set_header Host $host; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293691,293691#msg-293691 From francis at daoine.org Wed Feb 16 08:26:09 2022 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Feb 2022 08:26:09 +0000 Subject: help: proxy to https is working websockets is not! In-Reply-To: <5c285f15bbd3f959aac3c4cd87a27617.NginxMailingListEnglish@forum.nginx.org> References: <5c285f15bbd3f959aac3c4cd87a27617.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220216082609.GL14624@daoine.org> On Tue, Feb 15, 2022 at 08:56:18PM -0500, mevan wrote: Hi there, > WebSocket connection to 'wss://erx.asdf.com/ws/stats' failed: > edge.min.js:3210 > > How can I get this last part working? Does the description at https://nginx.org/en/docs/http/websocket.html or https://www.nginx.com/blog/websocket-nginx/ work for you? 
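For reference, the core of what those pages describe is a sketch along
these lines, added to the existing https server (the map block sits at
http{} level, and /ws/ matches the path from the error above):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    ...
    location /ws/ {
        proxy_pass https://backend;
        proxy_http_version 1.1;
        # the WebSocket handshake headers are hop-by-hop and are
        # not forwarded to the upstream unless set explicitly
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}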
Cheers, f -- Francis Daly francis at daoine.org From crenatovb at gmail.com Wed Feb 16 16:05:27 2022 From: crenatovb at gmail.com (Carlos Renato) Date: Wed, 16 Feb 2022 13:05:27 -0300 Subject: NGINX load balancing - Proxy In-Reply-To: <20220215164701.GK14624@daoine.org> References: <20220215130252.GI14624@daoine.org> <20220215141359.GJ14624@daoine.org> <20220215164701.GK14624@daoine.org> Message-ID: Hi Francis, thanks you! Em ter., 15 de fev. de 2022 às 13:49, Francis Daly escreveu: > On Tue, Feb 15, 2022 at 12:31:06PM -0300, Carlos Renato wrote: > > Hi there, > > > This what I get when trying to start NGINX with the simplified file. > > > > [root at proxy conf.d]# cat webgateway.conf > > upstream webgateway { > > server 192.168.239.151:9090; > > server 192.168.239.152:9090; > > } > > > > server { > > listen 9191; > > proxy_pass webgateway; > > } > > } > > I suggest: remove or rename that webgateway.conf file, so that it will not > match whatever "include" line is already in your /etc/nginx/nginx.conf. > > Then add those 10 lines to your /etc/nginx/nginx.conf, inside the > already-existing "stream{}" block; or create a new "stream{}" block > between the "events{}" block and the "http{}" block that are already in > your nginx.conf, and put those 10 lines inside that. > > > The reason is: when nginx runs, it reads one config file. That config > file might "include" some others. Typically, there is something like > "include /etc/nginx/conf.d/*.conf;" within the http{} block; but we do > not want to have this upstream-and-server config within http{}; it must > be within stream{}. > > If your nginx binary does not include stream{}, then you probably want > to see about getting a new binary, if you want to use nginx to do the > thing that you want to do. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lance at wordkeeper.com Thu Feb 17 05:02:55 2022 From: lance at wordkeeper.com (Lance Dockins) Date: Wed, 16 Feb 2022 23:02:55 -0600 Subject: NJS FastCGI Response Filtering In-Reply-To: <90fbf831-a009-40d4-bba0-d4c80bb2ce0b@Spark> References: <90fbf831-a009-40d4-bba0-d4c80bb2ce0b@Spark> Message-ID: <50840b6e-7f81-4bbb-8220-8d577f5983e0@Spark> Is there a good way to reliably filter the response body from a FastCGI script via NJS?  I’ve done this with Lua before but NJS seems to be a bit different in terms of how the response body process works.  The end goal is basically that we just have a way to grab the text back from a FastCGI/PHP script and can then edit that response.  There’s a very specific end goal that we are trying to achieve that has to go through Nginx and can’t be done with PHP. Initially I tried just using something like r.responseText since the call to FastCGI in Nginx should be its own subrequest but that doesn’t seem to work.  I’ve tried via js_body_filter but that keeps returning what looks like either encrypted or compressed data even though PHP compression is off and the FastCGI upstream is for a socket on the same server defined as another location in the same vhost (so the connection to the socket shouldn’t be suffering from encryption in that layer). 
From lance at wordkeeper.com  Thu Feb 17 05:02:55 2022
From: lance at wordkeeper.com (Lance Dockins)
Date: Wed, 16 Feb 2022 23:02:55 -0600
Subject: NJS FastCGI Response Filtering
In-Reply-To: <90fbf831-a009-40d4-bba0-d4c80bb2ce0b@Spark>
References: <90fbf831-a009-40d4-bba0-d4c80bb2ce0b@Spark>
Message-ID: <50840b6e-7f81-4bbb-8220-8d577f5983e0@Spark>

Is there a good way to reliably filter the response body from a FastCGI
script via NJS? I’ve done this with Lua before, but NJS seems to be a bit
different in terms of how the response body process works. The end goal is
basically that we just have a way to grab the text back from a FastCGI/PHP
script and can then edit that response. There’s a very specific end goal
that we are trying to achieve that has to go through Nginx and can’t be
done with PHP.

Initially I tried just using something like r.responseText, since the call
to FastCGI in Nginx should be its own subrequest, but that doesn’t seem to
work. I’ve tried via js_body_filter, but that keeps returning what looks
like either encrypted or compressed data, even though PHP compression is
off and the FastCGI upstream is a socket on the same server, defined as
another location in the same vhost (so the connection to the socket
shouldn’t be suffering from encryption in that layer).

I’ve tried a few different options:

location = /index.php {
    include includes/fastcgi.conf;

    js_body_filter db.failed;

    fastcgi_pass unix:/var/run/php-fpm/account.sock;
}

With this being the function in db.js:

function failed(r, data, flags){
    r.error(data);
    r.sendBuffer(data);
}

r.error is just there to log the response as a test, to see how it is
structured and to code further from there. That logs data like this:

2022/02/16 11:56:20 [error] 18589#18589: *61 js: �Z�r���) (ڕ��;�����LB��,��Y�;�rMa�3�b��!E�\��9/�C�KN�J�>%����()`fwv��%��A�,�u����n�4�{����?���^>;�ljI9�Dē&�&�LM�ܨ&�8�zҌ%S@Ű���� �S0E Q��f����+�Ia�LA)PERf��lU���2�F�`���fKȩ�\^V�1Y  ^��b��n�:h�4#����O�i ��Fߏ=��93��O0���(Q0��)��?�,���

The response from the PHP file is just basic HTML, so it shouldn’t look
like this.

I’ve also tried using a js_content filter to try to use a subrequest to
the PHP location block in the vhost, but that doesn’t seem to work at all.
It just errs out. Here’s a very basic attempt to try to use that location.
I’ve tried this both with the named location block and a custom named
internal block that I called /fastcgi, to try to use the URI style syntax
in the subrequest.

The js_content filter is in a location block and is called successfully,
but the code that it runs fails when trying to subrequest to FastCGI.
Here’s a rough example that I’ve tried:

location = /fastcgi {
    internal;
    include includes/fastcgi.conf;

    fastcgi_pass unix:/var/run/php-fpm/dbadmin.sock;
}

async function content(r){
    await r.subrequest('/fastcgi', {
        body: r.requestBuffer,
        method: r.method,
        detached: false
    }, function(r){
        return r.responseText;
    });
}

That just seems to err like this:

2022/02/16 11:53:37 [error] 16995#16995: *54 FastCGI sent in stderr:
"Access to the script '/home/account/www/fastcgi' has been denied (see
security.limit_extensions)" while reading response header from upstream,
client: 68.109.218.131, server:t
request: "GET /index.php HTTP/2.0", subrequest: "/fastcgi", upstream:
"fastcgi://unix:/var/run/php-fpm/account.sock:"

The specific solution strategy doesn’t particularly matter, as long as the
end result is that we can receive and interact with the response body from
a FastCGI request.

Does anyone have any ideas of how to just get the response body from a
FastCGI request in NJS?

Any insights would be a huge help.

--
Lance Dockins

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iippolitov at nginx.com  Thu Feb 17 12:10:26 2022
From: iippolitov at nginx.com (Igor Ippolitov)
Date: Thu, 17 Feb 2022 12:10:26 +0000
Subject: NJS FastCGI Response Filtering
In-Reply-To: <50840b6e-7f81-4bbb-8220-8d577f5983e0@Spark>
References: <90fbf831-a009-40d4-bba0-d4c80bb2ce0b@Spark> <50840b6e-7f81-4bbb-8220-8d577f5983e0@Spark>
Message-ID: <6c99e1a7-991a-4921-e397-5871461dbae3@nginx.com>

Hello Lance,

While I'm not 100% sure, is there a chance that the reply is gzipped?
Maybe resetting the Accept-Encoding header would help?

Regards,
Igor

On 17.02.2022 05:02, Lance Dockins wrote:
> Is there a good way to reliably filter the response body from a
> FastCGI script via NJS? I’ve done this with Lua before, but NJS seems
> to be a bit different in terms of how the response body process works.
> The end goal is basically that we just have a way to grab the text
> back from a FastCGI/PHP script and can then edit that response. There’s
> a very specific end goal that we are trying to achieve that has to go
> through Nginx and can’t be done with PHP.
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
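A side note on the gzip theory Igor raises: if the upstream compresses because it sees the client's Accept-Encoding header, one way to test is to hide that header from PHP-FPM so the body filter sees plain text. A sketch only, reusing the location and socket path from the thread; untested:

location = /index.php {
    include includes/fastcgi.conf;

    # override the automatically passed client header so the
    # backend never sees Accept-Encoding and never gzips
    fastcgi_param HTTP_ACCEPT_ENCODING "";

    js_body_filter db.failed;

    fastcgi_pass unix:/var/run/php-fpm/account.sock;
}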
From nginx-forum at forum.nginx.org  Thu Feb 17 18:00:32 2022
From: nginx-forum at forum.nginx.org (petecooper)
Date: Thu, 17 Feb 2022 13:00:32 -0500
Subject: "SSL: error:0A0000B9:SSL routines::no cipher match" with Mozilla modern ciphers v5.5
Message-ID: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org>

Hello. I am running Nginx 1.21.6 with OpenSSL 3.0.1 and the Mozilla [1]
'Modern' ciphers 4.0 without issue. When I change the ciphers to Mozilla
'Modern' 5.5, Nginx fails a config test with:

nginx: [emerg] SSL_CTX_set_cipher_list("TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256") failed (SSL: error:0A0000B9:SSL routines::no cipher match).

The line in nginx.conf with 'Modern' 5.5 ciphers (fails test) is:

ssl_ciphers 'TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256';

The line in nginx.conf with 'Modern' 4.0 ciphers (passes test) is:

ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

At compile time, I used the `--with-openssl` flag to point to the
Nginx-specific OpenSSL, which I confirm is 3.0.1:

$ sudo /etc/nginx/openssl/bin/openssl version
OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)

That instance of OpenSSL has the following ciphers:

$ sudo /etc/nginx/openssl/bin/openssl ciphers
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA

The three ciphers used in Mozilla 'Modern' 5.5 are
"TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256",
which happen to be the first three ciphers in the long list above. It
follows that OpenSSL 3.0.1 has support for these three ciphers, and the
naming convention matches. I am unsure why Nginx shows the error relating
to "no cipher match" when the ciphers are present in the TLS library.

The system-native OpenSSL includes those ciphers, too (again, the first
three in the list of ciphers):

$ openssl version
OpenSSL 1.1.1f 31 Mar 2020

$ openssl ciphers
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA

I would be grateful for any advice on what I am doing wrong here,
especially for further reading where I can better understand any missteps.
I appreciate this is a wall-of-text email; I've tried my best to split it
up sensibly.

Thank you for reading, and thank you for your time & expertise.

[1] https://wiki.mozilla.org/Security/Server_Side_TLS

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293703,293703#msg-293703

From nginx-forum at forum.nginx.org  Fri Feb 18 11:53:32 2022
From: nginx-forum at forum.nginx.org (petecooper)
Date: Fri, 18 Feb 2022 06:53:32 -0500
Subject: "SSL: error:0A0000B9:SSL routines::no cipher match" with Mozilla modern ciphers v5.5
In-Reply-To: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org>
References: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <79db6c1df8622ec461754e54729d3135.NginxMailingListEnglish@forum.nginx.org>

I am following up with fresh eyes. The three ciphers that cause problems
are:

TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256

I have just noticed each cipher name above has an underscore `_` character
as a separator. The working ciphers all use a dash `-` as a separator.

Might that be a factor in Nginx rejecting the cipher names?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293703,293704#msg-293704

From nginx-forum at forum.nginx.org  Fri Feb 18 12:09:19 2022
From: nginx-forum at forum.nginx.org (petecooper)
Date: Fri, 18 Feb 2022 07:09:19 -0500
Subject: "SSL: error:0A0000B9:SSL routines::no cipher match" with Mozilla modern ciphers v5.5
In-Reply-To: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org>
References: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Please ignore this thread, I found the answer:

https://trac.nginx.org/nginx/ticket/1529#comment:1

Thank you.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293703,293705#msg-293705

From francis at daoine.org  Fri Feb 18 14:14:03 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 18 Feb 2022 14:14:03 +0000
Subject: "SSL: error:0A0000B9:SSL routines::no cipher match" with Mozilla modern ciphers v5.5
In-Reply-To:
References: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20220218141403.GM14624@daoine.org>

On Fri, Feb 18, 2022 at 07:09:19AM -0500, petecooper wrote:

Hi there,

> Please ignore this thread, I found the answer:
>
> https://trac.nginx.org/nginx/ticket/1529#comment:1

Thanks for following up with the solution.

I expect that you have read the rest of that page, and now have a
working system, but just in case not (and: for anyone else following
along): comment 20 on that page shows an nginx config line that should
work with a new-enough nginx; and comments 12 and 19 show
external-to-nginx config that should work with any version of nginx.

(Both depend on a new-enough openssl.)

Thanks,

f
--
Francis Daly        francis at daoine.org
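For anyone skipping the ticket: the underlying issue is that OpenSSL configures TLSv1.3 suites separately from the pre-1.3 cipher list, so ssl_ciphers alone cannot select them. With nginx 1.19.4 or newer built against OpenSSL 1.1.1+, something along these lines should work; this is a sketch in the spirit of the ticket's comment 20, not a verbatim copy of it:

# TLSv1.3 suites go through OpenSSL's own setting, not ssl_ciphers
ssl_conf_command Ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256;

# ssl_ciphers still controls TLSv1.2 and below
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;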
From noloader at gmail.com  Fri Feb 18 15:18:46 2022
From: noloader at gmail.com (Jeffrey Walton)
Date: Fri, 18 Feb 2022 10:18:46 -0500
Subject: "SSL: error:0A0000B9:SSL routines::no cipher match" with Mozilla modern ciphers v5.5
In-Reply-To: <79db6c1df8622ec461754e54729d3135.NginxMailingListEnglish@forum.nginx.org>
References: <492f81a47b1f4eed861c849aa91e2746.NginxMailingListEnglish@forum.nginx.org> <79db6c1df8622ec461754e54729d3135.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On Fri, Feb 18, 2022 at 6:57 AM petecooper wrote:
>
> I am following up with fresh eyes. The three ciphers that cause problems are:
>
> TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
>
> I have just noticed each cipher name above has an underscore `_` character
> as a separator. The working ciphers all use a dash `-` as a separator.
>
> Might that be a factor in Nginx rejecting the cipher names?

The cipher suite names look OK, see
https://www.openssl.org/docs/man1.1.1/man1/ciphers.html. It's probably
a downlevel version of OpenSSL.

Jeff

From nginx-forum at forum.nginx.org  Sun Feb 20 13:17:21 2022
From: nginx-forum at forum.nginx.org (Dr_tux)
Date: Sun, 20 Feb 2022 08:17:21 -0500
Subject: Nginx rewrite issue
Message-ID: <5743ed67372cd38b6bf865fccda0b284.NginxMailingListEnglish@forum.nginx.org>

Hello,

I want to write a rewrite like http://url/index.php?target=server1 and
http://url/target=server1 in Nginx, and I want to use it in a reverse
proxy. This is possible in AWS, but how can I do it in Nginx?

I tried the following; it did not work.

location = /index.html?target=server1 {
    proxy_pass http://server1;
}

location = /index.html?target=server2 {
    proxy_pass http://server2;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293708,293708#msg-293708

From paul at stormy.ca  Sun Feb 20 17:50:08 2022
From: paul at stormy.ca (Paul)
Date: Sun, 20 Feb 2022 12:50:08 -0500
Subject: Nginx rewrite issue
In-Reply-To: <5743ed67372cd38b6bf865fccda0b284.NginxMailingListEnglish@forum.nginx.org>
References: <5743ed67372cd38b6bf865fccda0b284.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On 2022-02-20 8:17 a.m., Dr_tux wrote:
> Hello,
>
> I want to write a rewrite like http://url/index.php?target=server1 and
> http://url/target=server1 in Nginx, and I want to use it in a reverse
> proxy. This is possible in AWS, but how can I do it in Nginx?
>
> I tried the following; it did not work.
>
> location = /index.html?target=server1 {
>     proxy_pass http://server1;
> }
>
> location = /index.html?target=server2 {
>     proxy_pass http://server2;
> }

What do your error logs show? Maybe the '=' after 'location' is
superfluous? Please refer to the manual at
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/.
And for more info on using wildcards in 'location', please see the
nginx documentation; it is pretty good, so start there.

If you still have problems after "read the manual", full details of
your server setup and the logged errors would be helpful for others to
try and help you.

HTH -- Paul

From nginx-forum at forum.nginx.org  Sun Feb 20 19:40:05 2022
From: nginx-forum at forum.nginx.org (OS_Killer)
Date: Sun, 20 Feb 2022 14:40:05 -0500
Subject: Nginx rewrite issue
In-Reply-To: <5743ed67372cd38b6bf865fccda0b284.NginxMailingListEnglish@forum.nginx.org>
References: <5743ed67372cd38b6bf865fccda0b284.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <45fb848ab484f1c657f8b0b4651350d5.NginxMailingListEnglish@forum.nginx.org>

Arguments from the request are not taken into account when nginx is
choosing a location. Try the 'map' directive instead. For example:

=======
map $arg_target $backend {
    'server1'  'server1';
    'server2'  'server2';
    default    'server1';
}

server {
    listen 80;

    location = /index.html {
        proxy_pass http://$backend;
    }
}
=======

https://nginx.org/en/docs/http/ngx_http_map_module.html#map
https://nginx.org/en/docs/http/ngx_http_core_module.html#var_arg_

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293708,293710#msg-293710
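One wrinkle with a variable in proxy_pass, worth noting next to the example above: nginx no longer resolves the name at configuration time, so 'server1' and 'server2' must either match defined upstream{} blocks or be resolvable at run time via a resolver. A sketch with upstream blocks; the backend addresses here are placeholders, not from the thread:

upstream server1 {
    server 192.0.2.11:8080;   # placeholder backend address
}

upstream server2 {
    server 192.0.2.12:8080;   # placeholder backend address
}

map $arg_target $backend {
    'server2'  'server2';
    default    'server1';
}

server {
    listen 80;

    location = /index.html {
        proxy_pass http://$backend;
    }
}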
From nginx-forum at forum.nginx.org  Mon Feb 21 19:20:01 2022
From: nginx-forum at forum.nginx.org (george22)
Date: Mon, 21 Feb 2022 14:20:01 -0500
Subject: Nginx reverse proxy with Weblogic - Sticky learn
Message-ID:

Hi

I've implemented sticky learn for my weblogic application running on 3
servers, using Nginx as the reverse proxy, and I am trying to get an
understanding of how nginx sticky learn works.

The JSESSIONID cookies are of the form <session id>!<server id 1>.

I understand that nginx learns from the 'sticky create' to determine
which upstream server the request should be routed to using the lookup
cookie. But it is not clear from the documentation what happens when the
upstream primary server is down. Will nginx route to the secondary
server? Based on my initial tests it looks like it does not route to the
secondary server. I need confirmation if this is expected behavior. What
do I need to do to route requests to the secondary server if the primary
server is down using sticky learn?

upstream app1 {
    least_conn;
    zone app1 64k;
    server srv1.example.com:5111;
    server srv2.example.com:5111;
    server srv3.example.com:5111;
    sticky learn
        create=$upstream_cookie_JSESSIONID
        lookup=$cookie_JSESSIONID
        zone=client-sessions:1m
        timeout=2h;
}

Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293721,293721#msg-293721

From osa at freebsd.org.ru  Tue Feb 22 04:34:40 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Tue, 22 Feb 2022 07:34:40 +0300
Subject: Nginx reverse proxy with Weblogic - Sticky learn
In-Reply-To:
References:
Message-ID:

Hi, hope you're doing well.

On Mon, Feb 21, 2022 at 02:20:01PM -0500, george22 wrote:
> Hi
>
> I've implemented sticky learn for my weblogic application running on 3
> servers, using Nginx as the reverse proxy, and I am trying to get an
> understanding of how nginx sticky learn works.
>
> The JSESSIONID cookies are of the form <session id>!<server id 1>.
>
> [...]

The sticky [1] directive, from the http_upstream module [2], is part of
the commercial subscription [3], so I'd recommend sending your questions
and concerns to the NGINX Plus support team [4].

Thank you.

[1] https://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky
[2] https://nginx.org/en/docs/http/ngx_http_upstream_module.html
[3] https://www.nginx.com/products/
[4] https://www.nginx.com/support

--
Sergey A. Osokin
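Not from the thread, but for readers on open-source nginx: a rough approximation of cookie-based persistence is a hash on the session cookie. It does not replicate sticky learn's create/lookup semantics, and requests are simply re-hashed (and so may move) when a server fails. A sketch only, based on the upstream block from the thread:

upstream app1 {
    hash $cookie_JSESSIONID consistent;
    zone app1 64k;
    server srv1.example.com:5111;
    server srv2.example.com:5111;
    server srv3.example.com:5111;
}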
From fraczekp at gmail.com  Wed Feb 23 03:45:33 2022
From: fraczekp at gmail.com (Pawel Fraczek)
Date: Tue, 22 Feb 2022 20:45:33 -0700
Subject: Help with UDP load balancing passive health checks
Message-ID:

Hi, I'm trying to build a syslog load balancer and I'm running into
issues with the failover of UDP messages. TCP works just fine: when the
server goes down, all messages fail over to the active server. But with
UDP, that does not happen. Maybe someone can point me to what I'm doing
wrong. Below is the config.

upstream syssrv {
    server 192.168.167.108:5500 max_fails=2 fail_timeout=15s;
    server 192.168.167.109:5500 max_fails=2 fail_timeout=15s;
}
server {
    listen 5500;
    proxy_protocol on;
    proxy_pass syssrv;
    proxy_timeout 1s;
    proxy_connect_timeout 1s;
}
server {
    listen 5500 udp;
    proxy_pass syssrv;
    proxy_timeout 1s;
    proxy_connect_timeout 1s;
    proxy_bind $remote_addr transparent;
}
}

I have a script that enumerates each message (n) like this: "Testing
-proto: udp - n". I see both servers getting the messages when they are
online (even and odd numbers), but when one goes down, one server
continues to get only the even numbers, so I'm losing 50% of the
messages.

I tried to debug the setup and I see nginx marking that the udp packets
timed out. I see this:

2022/02/22 20:05:13 [info] 21362#21362: *777 udp client 192.168.167.101:51529 connected to 0.0.0.0:5500
2022/02/22 20:05:13 [info] 21362#21362: *777 udp proxy 192.168.167.101:34912 connected to 192.168.167.108:5500
2022/02/22 20:05:13 [info] 21362#21362: *779 udp client 192.168.167.101:53862 connected to 0.0.0.0:5500
2022/02/22 20:05:13 [info] 21362#21362: *779 udp proxy 192.168.167.101:35506 connected to 192.168.167.109:5500

Then this:

2022/02/22 20:05:14 [info] 21362#21362: *771 udp timed out, packets from/to client:1/0, bytes from/to client:145/0, bytes from/to upstream:0/145

But it's not redirecting the connection to the healthy server. This
seems pretty simple, but any ideas what I'm doing wrong? It would seem
that the non-commercial version should be able to do this, no?

Any help is appreciated. I also tried to add a backup, but it doesn't
work with UDP.

--
Pawel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pluknet at nginx.com  Thu Feb 24 08:30:03 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 24 Feb 2022 11:30:03 +0300
Subject: Help with UDP load balancing passive health checks
In-Reply-To:
References:
Message-ID:

> On 23 Feb 2022, at 06:45, Pawel Fraczek wrote:
>
> Hi, I'm trying to build a syslog load balancer and I'm running into
> issues with the failover of UDP messages. TCP works just fine: when the
> server goes down, all messages fail over to the active server. But with
> UDP, that does not happen.
>
> [...]

The stream module has no notion of the application protocol,
hence it only switches to the next upstream on connect() errors.

Due to the nature of the UDP protocol, which is essentially
connectionless, a peer that is down usually cannot be reported as a
connect() failure. In this case, it is only seen as a connection
timeout or a recv() error while reading back from the upstream. This
means no next-upstream logic for UDP.

The waiting time can be shortened if the peer reports back with an
ICMP error, such as "port unreachable". In this case, it is seen as a
recv() error immediately, without waiting for the connection timeout.

Either way, the peer is marked as failed, that is, it is switched off
temporarily for subsequent connections until "fail_timeout". This is
logged as "upstream server temporarily disabled" on the [warn] logging
level.

--
Sergey Kandaurov

From nginx-forum at forum.nginx.org  Thu Feb 24 11:19:57 2022
From: nginx-forum at forum.nginx.org (George)
Date: Thu, 24 Feb 2022 06:19:57 -0500
Subject: Nginx map assigned variable usage in upstream?
Message-ID:

I am trying to use an Nginx map assigned variable in an upstream, but it
doesn't seem to work.
The map concatenates $uri$args, assigning a PHP-FPM fastcgi PHP pool to
the variable $pool, and the $pool variable is then used in an upstream:

map $uri$args $pool {
    default 127.0.0.1:9000;
    "~/index.php/args" 127.0.0.1:9002;
}

upstream php {
    zone php_zone 64k;
    server $pool;
    keepalive 2;
}

But if I try this, the nginx config test gives me:

nginx -t
nginx: [emerg] host not found in upstream "$pool" in ...

What am I missing?

cheers

George

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293738,293738#msg-293738

From nginx-forum at forum.nginx.org  Thu Feb 24 11:28:09 2022
From: nginx-forum at forum.nginx.org (OS_Killer)
Date: Thu, 24 Feb 2022 06:28:09 -0500
Subject: Nginx map assigned variable usage in upstream?
In-Reply-To:
References:
Message-ID: <58f63e85576fac8e1ca16cbb7c86d1ad.NginxMailingListEnglish@forum.nginx.org>

The '$pool' variable is only evaluated after a request comes in. This is
how 'map' works. To make this work, you will have to use this variable
in "proxy_pass":

=======
server {
    .....
    proxy_pass http://$pool;
}
=======

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293738,293739#msg-293739

From nginx-forum at forum.nginx.org  Thu Feb 24 11:31:16 2022
From: nginx-forum at forum.nginx.org (OS_Killer)
Date: Thu, 24 Feb 2022 06:31:16 -0500
Subject: Nginx map assigned variable usage in upstream?
In-Reply-To: <58f63e85576fac8e1ca16cbb7c86d1ad.NginxMailingListEnglish@forum.nginx.org>
References: <58f63e85576fac8e1ca16cbb7c86d1ad.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5e3c7bade62ea3dbef437ad311bc14fe.NginxMailingListEnglish@forum.nginx.org>

Something like this:

=======
map $uri$args $pool {
    default php2;
    "~/index.php/args" php1;
}

upstream php1 {
    zone php_zone 64k;
    server 127.0.0.1:9002;
    keepalive 2;
}

upstream php2 {
    zone php_zone 64k;
    server 127.0.0.1:9000;
    keepalive 2;
}

server {
    .....
    proxy_pass http://$pool;
}
=======

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293738,293740#msg-293740

From nginx-forum at forum.nginx.org  Thu Feb 24 11:45:58 2022
From: nginx-forum at forum.nginx.org (George)
Date: Thu, 24 Feb 2022 06:45:58 -0500
Subject: Nginx map assigned variable usage in upstream?
In-Reply-To: <58f63e85576fac8e1ca16cbb7c86d1ad.NginxMailingListEnglish@forum.nginx.org>
References: <58f63e85576fac8e1ca16cbb7c86d1ad.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <9f66a80e1f63008eee79b0d9dc2f7e81.NginxMailingListEnglish@forum.nginx.org>

I see. I am currently trying to use the $pool assigned variable for
PHP-FPM, though, as in "fastcgi_pass $pool;" and not proxy_pass.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293738,293741#msg-293741

From nginx-forum at forum.nginx.org  Thu Feb 24 11:52:34 2022
From: nginx-forum at forum.nginx.org (OS_Killer)
Date: Thu, 24 Feb 2022 06:52:34 -0500
Subject: Nginx map assigned variable usage in upstream?
In-Reply-To: <9f66a80e1f63008eee79b0d9dc2f7e81.NginxMailingListEnglish@forum.nginx.org>
References: <58f63e85576fac8e1ca16cbb7c86d1ad.NginxMailingListEnglish@forum.nginx.org> <9f66a80e1f63008eee79b0d9dc2f7e81.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Yes. Of course "fastcgi_pass $pool;". This should work.

======
Parameter value can contain variables.
======
https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_pass

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293738,293742#msg-293742
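Putting the whole thread together, the working shape would be roughly the following: the map selects an upstream group name per request, and fastcgi_pass evaluates the variable at request time. A sketch only; the pool names, ports, and the location regex are illustrative, and keepalive to PHP-FPM additionally needs fastcgi_keep_conn:

map $uri$args $pool {
    default              php_a;
    "~/index.php/args"   php_b;
}

upstream php_a {
    server 127.0.0.1:9000;
    keepalive 2;
}

upstream php_b {
    server 127.0.0.1:9002;
    keepalive 2;
}

server {
    .....
    location ~ \.php$ {
        include fastcgi.conf;      # adjust to the local fastcgi include
        fastcgi_keep_conn on;      # required for keepalive with FastCGI
        fastcgi_pass $pool;        # evaluated per request
    }
}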
From nginx-forum at forum.nginx.org  Thu Feb 24 16:27:17 2022
From: nginx-forum at forum.nginx.org (swatluv)
Date: Thu, 24 Feb 2022 11:27:17 -0500
Subject: the event "ngx_master_14268" was not signaled for 5s
Message-ID: <3fd4d6dae908c692408d8b4eef69d0f4.NginxMailingListEnglish@forum.nginx.org>

Hello Members,

I started using nginx a week ago and I am completely new to it. My
client wants to access the CMS using domain-int.com/myapplication, for
which I need to set up nginx. But I am getting an error after editing my
conf file (I changed the server section) as shown below:

server {
    listen 443 ssl;
    server_name domain-int.com;
    ssl_certificate C:/Users/me/Documents/domain-certificates-https/domain-int-crt.crt;
    ssl_certificate_key C:/Users/me/Documents/domain-certificates-https/domain-int-private.key;

    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;

    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        # Or whatever port your server is running on
        proxy_pass http://127.0.0.1:4502;
    }
}

Issue: when I run start nginx on cmd, it prompts me for the pem password
(which is admin for me), and then I see the error "the event
"ngx_master_14268" was not signaled for 5s" in the log file.

Any help appreciated. Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293747,293747#msg-293747

From mdounin at mdounin.ru  Thu Feb 24 16:39:28 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 24 Feb 2022 19:39:28 +0300
Subject: the event "ngx_master_14268" was not signaled for 5s
In-Reply-To: <3fd4d6dae908c692408d8b4eef69d0f4.NginxMailingListEnglish@forum.nginx.org>
References: <3fd4d6dae908c692408d8b4eef69d0f4.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello!

On Thu, Feb 24, 2022 at 11:27:17AM -0500, swatluv wrote:

> I started using nginx a week ago and I am completely new to it. My
> client wants to access the CMS using domain-int.com/myapplication, for
> which I need to set up nginx.
>
> [...]
>
> Issue: when I run start nginx on cmd, it prompts me for the pem password
> (which is admin for me), and then I see the error "the event
> "ngx_master_14268" was not signaled for 5s" in the log file.

Try removing the password from the SSL key: this is something you
probably want to do anyway to ensure automatic startup. Alternatively,
use the ssl_password_file directive
(http://nginx.org/r/ssl_password_file) to provide the password for the
key.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Thu Feb 24 17:43:33 2022
From: nginx-forum at forum.nginx.org (swatluv)
Date: Thu, 24 Feb 2022 12:43:33 -0500
Subject: the event "ngx_master_14268" was not signaled for 5s
In-Reply-To:
References:
Message-ID:

Thanks Maxim for the response. Any direction on how to remove the
password from the SSL key (I am using Windows)? I also forgot to mention
that I am using nginx 1.20.2, if that makes any difference here.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293747,293749#msg-293749

From i at qingly.me  Thu Feb 24 18:06:06 2022
From: i at qingly.me (wordlesswind)
Date: Fri, 25 Feb 2022 02:06:06 +0800
Subject: About nginx and OCSP Must-Staple
Message-ID: <36296c88-1218-ceb3-b0da-ac3b81b4fbbe@qingly.me>

Hello guys,

I enabled OCSP Must-Staple, then I found that after restarting nginx, I
always get a "MOZILLA_PKIX_ERROR_REQUIRED_TLS_FEATURE_MISSING" error
when visiting my website for the first time. I think this error means
that the server is not caching OCSP information.

My nginx.conf is as follows:

    server {
        listen   443 ssl http2 reuseport;
        listen   [::]:443 ssl http2;
        server_name  example.org;

        ssl_certificate      /path/to/ecc/fullchain.cer;
        ssl_certificate_key  /path/to/ecc/example.org.key;
        ssl_certificate      /path/to/rsa/fullchain.cer;
        ssl_certificate_key  /path/to/rsa/example.org.key;

        ssl_stapling         on;
        resolver             valid=300s;
        ssl_stapling_verify  on;

        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  1d;
        ssl_protocols        TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_ecdh_curve       secp384r1;
        ssl_early_data       on;
        …
    }

Since I have dual ECC and RSA certificates configured and they are
intact, I did not configure "ssl_trusted_certificate". Do I need to
configure other parameters, like "ssl_ocsp", to solve the problem I'm
having now?

Also, I found a small issue: I noticed that the latest version of Google
Chrome/Microsoft Edge will choose the RSA certificate instead of the ECC
certificate.

    RSA: 4096, issuer R3
    ECC: 384, issuer E1
    (both issued by Let's Encrypt)

I wonder why Chromium made this choice.

Thank you!

Best Regards,
wordlesswind

From mdounin at mdounin.ru  Thu Feb 24 18:40:36 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 24 Feb 2022 21:40:36 +0300
Subject: the event "ngx_master_14268" was not signaled for 5s
In-Reply-To:
References:
Message-ID:

Hello!

On Thu, Feb 24, 2022 at 12:43:33PM -0500, swatluv wrote:

> Thanks Maxim for the response. Any direction on how to remove the
> password from the SSL key (I am using Windows)?

You have to obtain a working OpenSSL somewhere (something like
https://wiki.openssl.org/index.php/Binaries might be a good place to
start looking). Assuming you have OpenSSL and RSA keys, something like
this should work:

openssl rsa -in pwd.key -out nopwd.key

For ECDSA, try "openssl ec ..." instead.

> I also forgot to mention that I am using nginx 1.20.2, if that makes
> any difference here.

It doesn't matter.

--
Maxim Dounin
http://mdounin.ru/
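If keeping the passphrase is preferred over stripping it, the ssl_password_file route Maxim mentioned looks roughly like this; the passphrase file path is a placeholder, and the file holds one passphrase per line:

server {
    listen 443 ssl;
    ssl_certificate      C:/Users/me/Documents/domain-certificates-https/domain-int-crt.crt;
    ssl_certificate_key  C:/Users/me/Documents/domain-certificates-https/domain-int-private.key;
    ssl_password_file    C:/Users/me/Documents/domain-certificates-https/key-password.txt;  # placeholder path
}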
From sca at andreasschulze.de  Thu Feb 24 18:43:17 2022
From: sca at andreasschulze.de (A. Schulze)
Date: Thu, 24 Feb 2022 19:43:17 +0100
Subject: About nginx and OCSP Must-Staple
In-Reply-To: <36296c88-1218-ceb3-b0da-ac3b81b4fbbe@qingly.me>
References: <36296c88-1218-ceb3-b0da-ac3b81b4fbbe@qingly.me>
Message-ID: <674249f0-53cd-1fbf-bc3b-d8e9c26b4bc4@andreasschulze.de>

On 24.02.22 at 19:06, wordlesswind via nginx wrote:
> I enabled OCSP Must-Staple, then I found that after restarting nginx, I
> always get a "MOZILLA_PKIX_ERROR_REQUIRED_TLS_FEATURE_MISSING" error
> when visiting my website for the first time.

Hi,

this is known behavior (reference welcome). You may configure
ssl_stapling_file to serve the OCSP response also for the very first
response. Or write a script that fetches https://example.org
immediately after reload.

Andreas
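The ssl_stapling_file approach Andreas mentions would look roughly like this. The file must contain a DER-encoded OCSP response for the certificate, produced and refreshed outside nginx; the path here is a placeholder:

ssl_stapling       on;
ssl_stapling_file  /etc/nginx/ocsp/example.org.ocsp.der;  # pre-fetched DER OCSP response

Note that ssl_stapling_file replaces the response nginx would otherwise fetch itself, so the external refresh job has to run again before the stored response expires.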
From fraczekp at gmail.com  Fri Feb 25 05:30:21 2022
From: fraczekp at gmail.com (Pawel Fraczek)
Date: Thu, 24 Feb 2022 22:30:21 -0700
Subject: Help with UDP load balancing passive health checks
In-Reply-To:
References:
Message-ID:

Thanks for the explanation. Sorry if I'm being dense, but is there some
way to get UDP passive health checks to fail over to the next server?

Meaning, based on my configuration, am I doing something wrong, or is
this simply unavailable with UDP?

Thanks

On Thu, Feb 24, 2022, 1:31 AM Sergey Kandaurov wrote:
> The stream module has no notion of the application protocol,
> hence it only switches to the next upstream on connect() errors.
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pluknet at nginx.com  Fri Feb 25 07:58:59 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Fri, 25 Feb 2022 10:58:59 +0300
Subject: Help with UDP load balancing passive health checks
In-Reply-To:
References:
Message-ID: <64F6314B-8A21-4E8C-8640-B12112654C5D@nginx.com>

> On 25 Feb 2022, at 08:30, Pawel Fraczek wrote:
>
> Thanks for the explanation. Sorry if I'm being dense, but is there some
> way to get UDP passive health checks to fail over to the next server?
>
> Meaning, based on my configuration, am I doing something wrong, or is
> this simply unavailable with UDP?
>
> Thanks

I believe there's no such option for UDP with passive checks.

--
Sergey Kandaurov