From nginx-forum at forum.nginx.org Tue Sep 1 05:12:03 2020 From: nginx-forum at forum.nginx.org (moyamos) Date: Tue, 01 Sep 2020 01:12:03 -0400 Subject: How do I add text to a response from a remote URL in NGINX? In-Reply-To: <000001d67f99$c8c02dd0$5a408970$@roze.lv> References: <000001d67f99$c8c02dd0$5a408970$@roze.lv> Message-ID: <2eb86998b98415a778dd85871574bcfc.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply. :-) I have added a location as follows: location ~/src/(.*) { proxy_pass http://externalserver.com; } It works when the entire URL is loaded in a browser. However, in the "autoindex" page, "Object Moved This document may be found _here_" show up before and after the files list. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289227,289261#msg-289261 From francis at daoine.org Tue Sep 1 11:07:26 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2020 12:07:26 +0100 Subject: How do I add text to a response from a remote URL in NGINX? In-Reply-To: <2eb86998b98415a778dd85871574bcfc.NginxMailingListEnglish@forum.nginx.org> References: <000001d67f99$c8c02dd0$5a408970$@roze.lv> <2eb86998b98415a778dd85871574bcfc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200901110726.GD29287@daoine.org> On Tue, Sep 01, 2020 at 01:12:03AM -0400, moyamos wrote: Hi there, It looks like you had: location /src/ { alias /storage/path/content/; } and the url /src/before_body.txt would provide the content of the local file /storage/path/content/before_body.txt. Now instead you want the content of the url http://externalserver.com/before_body.txt? In that case, just change the "alias" line to "proxy_pass http://externalserver.com/;", and change nothing else. > I have added a location as follows: > > location ~/src/(.*) { > proxy_pass http://externalserver.com; > } This should be broadly similar, except it will fetch the url http://externalserver.com/src/before_body.txt -- maybe that is what you want? 
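A minimal sketch of the two variants being discussed, using the paths from this thread (externalserver.com stands in for the real upstream):

```nginx
# Variant 1: serve the /src/ URIs from local files.
location /src/ {
    alias /storage/path/content/;
}

# Variant 2: fetch the same URIs from the remote server instead.
# The trailing slash on proxy_pass makes nginx replace the matched
# /src/ prefix, so /src/before_body.txt maps to
# http://externalserver.com/before_body.txt. Without the trailing
# slash, the request is sent as /src/before_body.txt unchanged.
location /src/ {
    proxy_pass http://externalserver.com/;
}
```

Only one of the two location blocks would be present at a time; everything else in the server block stays the same.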
> It works when the entire URL is loaded in a browser. However, in the > "autoindex" page, "Object Moved > This document may be found _here_" show up before and after the files list. It works for me, providing the content of http://externalserver.com/src/before_body.txt, before the autoindex-generated content. Can you show one complete config that does not do what you want? f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 1 11:53:25 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2020 12:53:25 +0100 Subject: transforming static files In-Reply-To: References: Message-ID: <20200901115325.GE29287@daoine.org> On Mon, Aug 31, 2020 at 01:38:28PM -0400, Mark Lybarger wrote: Hi there, > i also have some .bin files that can be converted using a custom java api. > how can i easily hook the bin files to processed through a command on the > system? > > java -jar MyTranscoder.jar myInputFile.bin The easy way would seem to be for you to process those static input files into the desired static output files, and then let nginx serve the static output files as files. If you want something more than that, you are probably going to have to decide what desired behaviour is, and come up with a design that implements it. It's hard to be more specific than that, with the information available. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 1 12:12:26 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2020 13:12:26 +0100 Subject: Nginx TCP/UDP Load Balancer In-Reply-To: <0ca6a93f22f9adf57fa96edaffaf1a22.NginxMailingListEnglish@forum.nginx.org> References: <0ca6a93f22f9adf57fa96edaffaf1a22.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200901121226.GF29287@daoine.org> On Mon, Aug 31, 2020 at 06:15:00AM -0400, Dr_tux wrote: Hi there, > Hi, I have 2 turn server. I would like to use Nginx for load balancer them. > But I have a problem. When I use the AWS ELB it works perfectly. 
If I try > with Nginx, I got an error. > > Remote addr should be client_ip. Nginx, send itself IP address to coturn > server. I don't know the details of a turn server; but depending on the overall design of your solution, it is possible that proxy_bind (http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_bind) will be useful. You will probably want to make sure that you understand how each packet will flow from original-client to end-server and back. You apparently have a working system using AWS ELB, so perhaps watching the traffic there will show you how it needs to be in order to work. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 1 12:40:02 2020 From: nginx-forum at forum.nginx.org (moyamos) Date: Tue, 01 Sep 2020 08:40:02 -0400 Subject: How do I add text to a response from a remote URL in NGINX? In-Reply-To: <20200901110726.GD29287@daoine.org> References: <20200901110726.GD29287@daoine.org> Message-ID: <296303705a4a424ea93a7f3279fcc199.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Tue, Sep 01, 2020 at 01:12:03AM -0400, moyamos wrote: > > Hi there, Thanks Francis for your reply. :-) > It looks like you had: > > location /src/ { > alias /storage/path/content/; > } > and the url /src/before_body.txt would provide the content of the > local > file /storage/path/content/before_body.txt. > > Now instead you want the content of the url > http://externalserver.com/before_body.txt? Yes, that's right. [...] > Can you show one complete config that does not do what you want? server { listen 80; root /storage/path; index index.html; server_name test.domain.com; location / { try_files $uri $uri/ =404; add_before_body /src/before_body.txt; add_after_body /src/after_body.txt; autoindex on; } location /src/ { # alias /storage/path/content/; proxy_pass http://externalserver.com; } } The result is the same as my previous location. 
"Object Moved This document may be found _here_" is showing up before and after the files list. When I am clicking on the "_here_" link, the content is loaded correctly. But, on "https". There is a redirect from "http" to "https" on externalserver.com. As soon as I am changing to location /src/ { proxy_pass https://externalserver.com; } following errors are logged: 2020/09/01 16:48:41 [error] 8445#8445: *33789440 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream, client: YY.YYY.YYY.YY, server: test.domain.com, request: "GET / HTTP/1.1", subrequest: "/src/after_body.txt", upstream: "https://XXX.XXX.XXX.XXX:443/src/after_body.txt", host: "test.domain.com" 2020/09/01 16:48:41 [error] 8445#8445: *33789440 peer closed connection in SSL handshake (104: Connection reset by peer) while sending to client, client: YY.YYY.YYY.YY, server: test.domain.com, request: "GET / HTTP/1.1", subrequest: "/src/after_body.txt", upstream: "https://XXX.XXX.XXX.XXX:443/src/after_body.txt", host: "test.domain.com" > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289227,289266#msg-289266 From nginx-forum at forum.nginx.org Tue Sep 1 13:34:23 2020 From: nginx-forum at forum.nginx.org (nathanpgibson) Date: Tue, 01 Sep 2020 09:34:23 -0400 Subject: Connection timeout on SSL with shared hosting In-Reply-To: <20200826091043.GA29287@daoine.org> References: <20200826091043.GA29287@daoine.org> Message-ID: <6ca1cf7a38a08e6605e10aecd6ec06a7.NginxMailingListEnglish@forum.nginx.org> > (I guess you either removed the INPUT DROP rule; or added an explicit > "allow 443" beside the "allow 80" rule that was already there. > Whichever > it was, it was "make the local firewall allow the traffic get to > nginx".) 
Right, the allow 443 actually existed but there was a rule above it that was routing traffic such that it didn't even get to my allow rule. Using iptables -nvL I was able to see the packet count and see that 0 packets were getting to my allow rule. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289268#msg-289268 From r at roze.lv Tue Sep 1 13:54:06 2020 From: r at roze.lv (Reinis Rozitis) Date: Tue, 1 Sep 2020 16:54:06 +0300 Subject: How do I add text to a response from a remote URL in NGINX? In-Reply-To: <296303705a4a424ea93a7f3279fcc199.NginxMailingListEnglish@forum.nginx.org> References: <20200901110726.GD29287@daoine.org> <296303705a4a424ea93a7f3279fcc199.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000401d68067$5bc8ee50$135acaf0$@roze.lv> > > Now instead you want the content of the url > > http://externalserver.com/before_body.txt? > > Yes, that's right. Can you actually open the file on the external server - http://externalserver.com/src/before_body.txt and does it have the content you expect (without redirects)? Note that since you have proxy_pass without the trailing slash nginx will send the request as '/src/before_body.txt'. If there is no such /src/ directory on the remote server but only /before_body.txt then you have to add the slash at the end: location /src/ { proxy_pass http://externalserver.com/; } rr From francis at daoine.org Tue Sep 1 13:56:37 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2020 14:56:37 +0100 Subject: How do I add text to a response from a remote URL in NGINX? 
In-Reply-To: <296303705a4a424ea93a7f3279fcc199.NginxMailingListEnglish@forum.nginx.org> References: <20200901110726.GD29287@daoine.org> <296303705a4a424ea93a7f3279fcc199.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200901135637.GG29287@daoine.org> On Tue, Sep 01, 2020 at 08:40:02AM -0400, moyamos wrote: > Francis Daly Wrote: > > On Tue, Sep 01, 2020 at 01:12:03AM -0400, moyamos wrote: Hi there, > location / { > try_files $uri $uri/ =404; > add_before_body /src/before_body.txt; > add_after_body /src/after_body.txt; > autoindex on; > } > > location /src/ { > # alias /storage/path/content/; > proxy_pass http://externalserver.com; That will end up with nginx requesting http://externalserver.com/src/before_body.txt and http://externalserver.com/src/after_body.txt > The result is the same as my previous location. "Object Moved This document > may be found _here_" is showing up before and after the files list. When I > am clicking on the "_here_" link, the content is loaded correctly. But, on > "https". There is a redirect from "http" to "https" on externalserver.com. That would explain the message. Have you the option to change externalserver.com not to issue that redirect? (Alternatively, you could save the content into the local files and just serve those; but I guess you want to use the external content to stay current when it changes.) > location /src/ { > proxy_pass https://externalserver.com; > } > > following errors are logged: > > 2020/09/01 16:48:41 [error] 8445#8445: *33789440 peer closed connection in > SSL handshake (104: Connection reset by peer) while SSL handshaking to > upstream, client: YY.YYY.YYY.YY, server: test.domain.com, request: "GET / > HTTP/1.1", subrequest: "/src/after_body.txt", upstream: > "https://XXX.XXX.XXX.XXX:443/src/after_body.txt", host: "test.domain.com" That says that the nginx client and the upstream server are not able to agree a ssl session. Is "XXX.XXX.XXX.XXX" the same as "externalserver.com"? 
If not, can you use "curl" or something similar to fetch "https://XXX.XXX.XXX.XXX:443/src/after_body.txt" without errors? Maybe you need some extra ssl config in nginx, such as http://nginx.org/r/proxy_ssl_server_name and friends? Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 1 19:34:33 2020 From: nginx-forum at forum.nginx.org (Jorge Enrique Diaz) Date: Tue, 01 Sep 2020 15:34:33 -0400 Subject: Rewrite .htaccess on nginx Message-ID: I want to do this in nginx: Options All -Indexes RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?url=$1 [QSA,L] When I convert it as shown in several forums, it gives me this config: user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; autoindex off; autoindex off; location / { if (!-e $request_filename){ rewrite ^(.*)$ index.php?url=$1 break; } } #keepalive_timeout 0; keepalive_timeout 65; #gzip on; include "C:/laragon/etc/nginx/php_upstream.conf"; include "C:/laragon/etc/nginx/sites-enabled/*.conf"; client_max_body_size 2000M; server_names_hash_bucket_size 64; } nginx does not start, and I get the following error: the nginx service cannot start. Reason: nginx: [emerg] "location" directive is not allowed here in C:\laragon\bin\nginx\nginx1.14.0/conf/nginx.conf:37 Please help! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289276,289276#msg-289276 From nginx-forum at forum.nginx.org Tue Sep 1 20:47:55 2020 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Tue, 01 Sep 2020 16:47:55 -0400 Subject:
Nginx TCP/UDP Load Balancer In-Reply-To: <20200901121226.GF29287@daoine.org> References: <20200901121226.GF29287@daoine.org> Message-ID: <5dad60b48615f32d88eda43057010f72.NginxMailingListEnglish@forum.nginx.org> Thank you very much for your answer, but I tried it and it did not work. :) I would like to forward the client IP address directly to the TURN servers, but I always see the nginx IP on the TURN servers. Best. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289231,289277#msg-289277 From francis at daoine.org Tue Sep 1 22:20:58 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2020 23:20:58 +0100 Subject: Nginx TCP/UDP Load Balancer In-Reply-To: <5dad60b48615f32d88eda43057010f72.NginxMailingListEnglish@forum.nginx.org> References: <20200901121226.GF29287@daoine.org> <5dad60b48615f32d88eda43057010f72.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200901222058.GH29287@daoine.org> On Tue, Sep 01, 2020 at 04:47:55PM -0400, Dr_tux wrote: Hi there, > Thank you very much for your answer, but I tried it :) did not work. I would > like to forward client IP address directly to turn servers. But I always see > Nginx Ip on Turn Servers. Fair enough. If you can show the config that you used, perhaps someone will be able to reproduce the problem and find a solution.
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 1 22:36:05 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Sep 2020 23:36:05 +0100 Subject: Rewrite .htaccess on nginx In-Reply-To: References: Message-ID: <20200901223605.GI29287@daoine.org> On Tue, Sep 01, 2020 at 03:34:33PM -0400, Jorge Enrique Diaz wrote: Hi there, > i want to do this in nginx > > Options All -Indexes > RewriteEngine on > > RewriteCond %{REQUEST_FILENAME} !-d > RewriteCond %{REQUEST_FILENAME} !-f > RewriteRule ^(.*)$ index.php?url=$1 [QSA,L] That is approximately "try_files $uri $uri/ /index.php?url=$uri;", which would appear inside a location{}; and there would also be (probably) something involving fastcgi_pass for handling the "php" requests. http://nginx.org/r/try_files for the try_files documentation, and similar urls for any other directives used. > http { > location / { http://nginx.org/r/location "location" cannot be directly inside "http"; it must be inside "server". > NGINX EMERG LOCATION IS NOT ALLOWED HERE IN > C.\LARAGON\BIN\NGINX\NGINX1.14.0/CONF/NGINX.CONF:37 That is reporting that "location" cannot be directly inside "http". Probably the simplest is to start with whatever initial config file your system has, and see what "location" block inside a "server" block is used to handle a test request. Then adjust that "location", or add a new one beside it, for your testing. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Sep 2 05:08:43 2020 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Wed, 02 Sep 2020 01:08:43 -0400 Subject: Nginx TCP/UDP Load Balancer In-Reply-To: <20200901222058.GH29287@daoine.org> References: <20200901222058.GH29287@daoine.org> Message-ID: <560b1412c454c9c35aec0c1cb07e1225.NginxMailingListEnglish@forum.nginx.org> When I add the proxy_bind parameter, requests are never forwarded to the server behind. If I do not add it, the output on the turn server is as follows. 
Output: 96: handle_udp_packet: New UDP endpoint: local addr Turn_Server_IP:3478, remote addr NGINX_IP:59902 stream { upstream stream_backend { server Turn_Server_IP:3478; } server { listen 3478 udp; proxy_pass stream_backend; proxy_bind $remote_addr transparent; } } Thanks for help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289231,289281#msg-289281 From francis at daoine.org Wed Sep 2 07:56:52 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 2 Sep 2020 08:56:52 +0100 Subject: Nginx TCP/UDP Load Balancer In-Reply-To: <560b1412c454c9c35aec0c1cb07e1225.NginxMailingListEnglish@forum.nginx.org> References: <20200901222058.GH29287@daoine.org> <560b1412c454c9c35aec0c1cb07e1225.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200902075652.GJ29287@daoine.org> On Wed, Sep 02, 2020 at 01:08:43AM -0400, Dr_tux wrote: Hi there, > When I add the proxy_bind parameter, requests are never forwarded to the > server behind. Is there any hint in your nginx logs of what is happening? For example, on one old system here, when I test the config as root, I can see: # sbin/nginx -t nginx: [emerg] transparent proxying is not supported on this platform, ignored in /usr/local/nginx/conf/nginx.conf:240 nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful and when I try to connect to my nginx udp port from a remote machine, I see no hint of the request being forwarded; but I do see a [crit] message in the nginx error log, of the form "bind(client ip) failed (99: Cannot assign requested address) while connecting to upstream" When I try to connect from the local machine, I do see the request being forwarded, with the same source address as my original packet used -- the 192.168.x one, or the 127.0.x one. So proxy_bind is being attempted, and my operating system setup is preventing the "external" address being used. 
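For what it's worth, the operating-system side that "proxy_bind $remote_addr transparent" usually needs on Linux looks roughly like the following. This is a sketch along the lines of nginx's IP-transparency documentation; the routing table number, firewall mark, and port are assumptions to adapt, and the commands need root:

```shell
# Route packets carrying firewall mark 1 through a local routing table,
# so the upstream's replies (addressed to the real client IP, which is
# spoofed by nginx) are delivered back to the nginx host.
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Mark UDP packets arriving from the TURN upstream (source port 3478)
# so the rule above applies to them.
iptables -t mangle -A PREROUTING -p udp --sport 3478 -j MARK --set-xmark 0x1/0xffffffff
```

The upstream servers also need their return route to go via the nginx host (e.g. nginx as their default gateway), otherwise replies go straight to the client and the session breaks.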
> server { > listen 3478 udp; > proxy_pass stream_backend; > proxy_bind $remote_addr transparent; > } That looks correct; but the IP address that nginx is allowed to set as the source IP for the packets that it sends, is not only controlled by nginx. If you have similar logs, you may have a similar problem that may be fixable by re-configuring the supporting system. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Sep 2 13:11:39 2020 From: nginx-forum at forum.nginx.org (moyamos) Date: Wed, 02 Sep 2020 09:11:39 -0400 Subject: How do I add text to a response from a remote URL in NGINX? In-Reply-To: <20200901110726.GD29287@daoine.org> References: <20200901110726.GD29287@daoine.org> Message-ID: <3400af80048dd70aa4d969d8c3e7fc63.NginxMailingListEnglish@forum.nginx.org> Hi Reinis and Francis, Thanks for your help. I resolved the issue. I've moved before_body.txt and after_body.txt to another domain without a forced HTTPS redirect. It works fine with a simple proxy_pass, even over https. Thanks for all your help. Best regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289227,289286#msg-289286 From ss17 at fea.st Thu Sep 3 05:25:27 2020 From: ss17 at fea.st (RA) Date: Thu, 03 Sep 2020 10:55:27 +0530 Subject: Does NGINX read auth_basic_user_file on every connection? Message-ID: Hi. How does NGINX process auth_basic_user_file? 1) Does it read it in its entirety on every connection? 2) Does it read it line by line on every connection and stop when a match is found? 3) Does it read it fully on start and re-read it only if the file is changed? If it's either 1 or 2, then is it not very inefficient to read a file on every connection? If the file has a fairly large number of entries (5-10 MB), will it not affect the performance of the web server in general? There should be some "indexed" approach to this. Thanks.
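For context, an auth_basic_user_file is a flat, unindexed list of name:hashed-password lines, which is why any lookup involves scanning the file. A sketch of adding one entry from the shell (assumes OpenSSL is installed; "alice", the password, and the file name are made up for illustration):

```shell
# Append one user entry to an auth_basic_user_file-style password file.
# "openssl passwd -apr1" produces an Apache-MD5 hash, one of the
# formats nginx's auth_basic module accepts.
hash=$(openssl passwd -apr1 's3cret')
printf 'alice:%s\n' "$hash" >> .htpasswd
```

The same file format is also produced by Apache's htpasswd utility.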
From nginx-forum at forum.nginx.org Thu Sep 3 06:18:12 2020 From: nginx-forum at forum.nginx.org (jasonsx) Date: Thu, 03 Sep 2020 02:18:12 -0400 Subject: nginx last version on windows Message-ID: <9fc0beaffa3b49b155d0838c155c3416.NginxMailingListEnglish@forum.nginx.org> Hi, today I started using nginx 1.19.2 and tried to set up file uploads, and I got this: 2020/09/03 08:02:21 [emerg] 8252#1152: unknown directive "upload_pass" in C:\nginx-1.19.2/conf/vhost/ug04.cn.center.conf:43 My config: server { listen 82; server_name 192.168.1.2; index index.html index.htm index.php; root /nginx-1.19.2/html/center/manager/sites/cp; client_max_body_size 500m; location /nginx-1.19.2/html/center/manager/app/cp/views { upload_pass /nginx-1.19.2/html/center/upload; upload_store /dev/shm; upload_store_access user:r; upload_set_form_field $upload_field_name[name] "$upload_file_name"; upload_set_form_field $upload_field_name[content_type] "$upload_content_type"; upload_set_form_field $upload_field_name[path] "$upload_tmp_path"; upload_aggregate_form_field "$upload_field_name[md5]" "$upload_file_md5"; upload_aggregate_form_field "$upload_field_name[size]" "$upload_file_size"; upload_pass_form_field "^.*$"; upload_cleanup 400 404 499 500-505; } I've searched for about two days and failed with it. I found the upload module, but I can't install it on Windows. Any ideas?
thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289290#msg-289290 From francis at daoine.org Thu Sep 3 09:48:03 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Sep 2020 10:48:03 +0100 Subject: nginx last version on windows In-Reply-To: <9fc0beaffa3b49b155d0838c155c3416.NginxMailingListEnglish@forum.nginx.org> References: <9fc0beaffa3b49b155d0838c155c3416.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200903094803.GK29287@daoine.org> On Thu, Sep 03, 2020 at 02:18:12AM -0400, jasonsx wrote: Hi there, > and i got this > 2020/09/03 08:02:21 [emerg] 8252#1152: unknown directive "upload_pass" in > C:\nginx-1.19.2/conf/vhost/ug04.cn.center.conf:43 That means that the nginx binary that you are using does not include a module that uses that directive. > i search like a 2 day and fail with it i found upload module but i can't > install it in windows > any idea ? If you need the module, you must either find-or-build a nginx binary that includes it; or (if it can be a dynamic module) find-or-build the module and load it. Depending on what you want to do, you may not need the module. http://nginx.org/r/upload_pass suggests that it is not a stock nginx module that you are hoping to use -- for a third-party module, you may want to check which nginx version it is declared to work with. 
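If building that third-party module for Windows proves impractical, one stock-nginx pattern worth knowing is to spool the request body to disk and hand the backend only the file path. A sketch follows; the location, paths, and "backend" upstream name are placeholders, and unlike the upload module this does no multipart form parsing:

```nginx
location /upload {
    client_max_body_size     500m;

    # Spool the raw request body to a file; "clean" removes the
    # temporary file once the request has been processed.
    client_body_temp_path    /tmp/nginx_uploads;
    client_body_in_file_only clean;

    # Hand the backend the spooled file's path instead of
    # re-sending the body itself.
    proxy_set_header         X-Body-File $request_body_file;
    proxy_set_header         Content-Length "";
    proxy_pass_request_body  off;
    proxy_pass               http://backend;
}
```

The backend then opens the file named in X-Body-File and parses it itself, which trades the upload module's convenience for working with an unmodified nginx binary.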
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Sep 3 10:25:58 2020 From: nginx-forum at forum.nginx.org (jasonsx) Date: Thu, 03 Sep 2020 06:25:58 -0400 Subject: nginx last version on windows In-Reply-To: <20200903094803.GK29287@daoine.org> References: <20200903094803.GK29287@daoine.org> Message-ID: <34baa9f759137651d3a6635e7e308f0c.NginxMailingListEnglish@forum.nginx.org> Thank you for the fast reply. I searched a lot for a binary that includes the module, but I couldn't find any nginx binary with it; I've already downloaded a lot of binaries from websites and failed with them. I'm a Windows user, so Linux would be hard for me. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289293#msg-289293 From nginx-forum at forum.nginx.org Thu Sep 3 10:41:08 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 03 Sep 2020 06:41:08 -0400 Subject: nginx last version on windows In-Reply-To: <20200903094803.GK29287@daoine.org> References: <20200903094803.GK29287@daoine.org> Message-ID: <9731011096133ddcecbb3b46ace01368.NginxMailingListEnglish@forum.nginx.org> I will have a look to see if this can be included in our next version.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289295#msg-289295 From nginx-forum at forum.nginx.org Thu Sep 3 10:49:42 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Thu, 03 Sep 2020 06:49:42 -0400 Subject: Worker process core dumped Message-ID: <0ad03c292feecfe7cb78b8646d5f1251.NginxMailingListEnglish@forum.nginx.org> version: 1.17.8 debug log: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2020/09/03 14:09:21 [error] 320#320: *873195 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.68.23.2, server: , request: "POST /api/hmf-controller/v1/cometd/handshake HTTP/1.1", upstream: "http://172.30.8.71:12280/api/hmf-controller/v1/cometd/handshake", host: "10.226.208.117:28001", referrer: "https://10.226.208.117:28001/uportal/framework/default.html" 2020/09/03 14:09:21 [debug] 320#320: *873195 finalize http upstream request: 504 2020/09/03 14:09:21 [debug] 320#320: *873195 finalize http proxy request 2020/09/03 14:09:21 [debug] 320#320: *873195 close http upstream connection: 359 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000000E1A110, unused: 48 2020/09/03 14:09:21 [debug] 320#320: *873195 reusable connection: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 http finalize request: 504, "/api/hmf-controller/v1/cometd/handshake?" a:1, c:1 2020/09/03 14:09:21 [debug] 320#320: *873195 http special response: 504, "/api/hmf-controller/v1/cometd/handshake?" 
2020/09/03 14:09:21 [debug] 320#320: *873195 headers more header filter, uri "/api/hmf-controller/v1/cometd/handshake" 2020/09/03 14:09:21 [debug] 320#320: *873195 lua header filter for user lua code, uri "/api/hmf-controller/v1/cometd/handshake" 2020/09/03 14:09:21 [debug] 320#320: *873195 code cache lookup (key='header_filter_by_lua_nhli_08c8ad024deaf339a3f72ac205896eb4', ref=3) 2020/09/03 14:09:21 [debug] 320#320: *873195 code cache hit (key='header_filter_by_lua_nhli_08c8ad024deaf339a3f72ac205896eb4', ref=3) 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36: header_filter(): executing plugin "msb_admin_controller": header_filter 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36: header_filter(): executing plugin "hide-dexmesh-error-stack": header_filter 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36: header_filter(): executing plugin "addheaders": header_filter 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36: header_filter(): executing plugin "divide": header_filter 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36: header_filter(): executing plugin "redirect-transformer-plugin": header_filter 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36: header_filter(): executing plugin "auth-plugin": header_filter 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: *873195 lua capture header filter, uri "/api/hmf-controller/v1/cometd/handshake" 2020/09/03 14:09:21 [debug] 320#320: *873195 HTTP/1.1 504 Gateway Time-out 2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf 
t:1 f:0 0000000001C340D8, pos 0000000001C340D8, size: 348 file: 0, size: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter: l:0 f:0 s:348 2020/09/03 14:09:21 [debug] 320#320: *873195 http output filter "/api/hmf-controller/v1/cometd/handshake?" 2020/09/03 14:09:21 [debug] 320#320: *873195 http copy filter: "/api/hmf-controller/v1/cometd/handshake?" 2020/09/03 14:09:21 [debug] 320#320: *873195 lua capture body filter, uri "/api/hmf-controller/v1/cometd/handshake" 2020/09/03 14:09:21 [debug] 320#320: *873195 http postpone filter "/api/hmf-controller/v1/cometd/handshake?" 0000000001C343C0 2020/09/03 14:09:21 [debug] 320#320: *873195 write old buf t:1 f:0 0000000001C340D8, pos 0000000001C340D8, size: 348 file: 0, size: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:0 f:0 0000000000000000, pos 0000000000A64620, size: 114 file: 0, size: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:0 f:0 0000000000000000, pos 0000000000A65960, size: 41 file: 0, size: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:0 f:0 0000000000000000, pos 0000000000A657C0, size: 402 file: 0, size: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter: l:1 f:0 s:905 2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter limit 0 2020/09/03 14:09:21 [debug] 320#320: *873195 malloc: 0000000001AE0030:16384 2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 348 2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 114 2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 41 2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 402 2020/09/03 14:09:21 [debug] 320#320: *873195 SSL to write: 905 2020/09/03 14:09:21 [debug] 320#320: *873195 SSL_write: 905 2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter 0000000000000000 2020/09/03 14:09:21 [debug] 320#320: *873195 http copy filter: 0 "/api/hmf-controller/v1/cometd/handshake?" 
2020/09/03 14:09:21 [debug] 320#320: *873195 http finalize request: 0, "/api/hmf-controller/v1/cometd/handshake?" a:1, c:1 2020/09/03 14:09:21 [debug] 320#320: *873195 set http keepalive handler 2020/09/03 14:09:21 [debug] 320#320: *873195 http close request 2020/09/03 14:09:21 [debug] 320#320: *873195 lua request cleanup: forcible=0 2020/09/03 14:09:21 [debug] 320#320: *873195 lua log handler, uri:"/api/hmf-controller/v1/cometd/handshake" c:1 2020/09/03 14:09:21 [debug] 320#320: *873195 code cache lookup (key='nhlf_9c4416184f27253b6f5f86c35c6afc6b', ref=4) 2020/09/03 14:09:21 [debug] 320#320: *873195 code cache hit (key='nhlf_9c4416184f27253b6f5f86c35c6afc6b', ref=4) 2020/09/03 14:09:21 [info] 320#320: *873195 [lua] logger.lua:27: 5382af7ecb49a6a9ce6f006cf859799b {"matched":"hmf-controller","auth-plugin add Z-EXTENT":true,"svc_type":"api"} while logging request, client: 10.68.23.2, server: , request: "POST /api/hmf-controller/v1/cometd/handshake HTTP/1.1", upstream: "http://172.30.8.71:12280/api/hmf-controller/v1/cometd/handshake", host: "10.226.208.117:28001", referrer: "https://10.226.208.117:28001/uportal/framework/default.html" 2020/09/03 14:09:21 [debug] 320#320: fetching key "ranoss|hmf-controller|v1|172.30.8.71:12280-2-start_time" in shared dict "metrics" 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40: log(): executing plugin "msb_admin_controller": log 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40: log(): executing plugin "hide-dexmesh-error-stack": log 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40: 
log(): executing plugin "addheaders": log 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40: log(): executing plugin "divide": log 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40: log(): executing plugin "redirect-transformer-plugin": log 2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40: log(): executing plugin "auth-plugin": log 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: fetching key "circuitbreaker.ranoss.hmf-controller.v1.2020-09-03T01:06:36Z.172.30.8.71:12280.status" in shared dict "circuitbreaker" 2020/09/03 14:09:21 [debug] 320#320: shmtx lock 2020/09/03 14:09:21 [debug] 320#320: shmtx unlock 2020/09/03 14:09:21 [debug] 320#320: *873195 http log handler 2020/09/03 14:09:21 [debug] 320#320: *873195 http map started 2020/09/03 14:09:21 [debug] 320#320: *873195 http script var: "504" 2020/09/03 14:09:21 [debug] 320#320: *873195 http map: "504" "1" 2020/09/03 14:09:21 [debug] 320#320: *873195 http script var: "1" 2020/09/03 14:09:21 [debug] 320#320: *873195 posix_memalign: 0000000001343E90:4096 @16 2020/09/03 14:09:21 [debug] 320#320: *873195 run cleanup: 0000000001C32EB8 2020/09/03 14:09:21 [debug] 320#320: lua release ngx.ctx at ref 121 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001C315F0, unused: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001C32600, unused: 0 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001C33610, unused: 189 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001343E90, unused: 3568 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000000C404B0 2020/09/03 14:09:21 [debug] 320#320: *873195 hc free: 0000000000000000 2020/09/03 14:09:21 [debug] 320#320: *873195 hc busy: 0000000000000000 0 2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001AE0030 2020/09/03 14:09:28 [alert] 46#46: worker process 320 exited on signal 11 (core 
dumped) -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- coredump backtrace: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Program terminated with signal SIGSEGV, Segmentation fault. #0 ngx_http_set_keepalive (r=0x1c31640) at src/http/ngx_http_request.c:3178 3178 if (r->headers_out.persist_front_end_connection && !c->tcp_keepalive) { (gdb) bt #0 ngx_http_set_keepalive (r=0x1c31640) at src/http/ngx_http_request.c:3178 #1 ngx_http_finalize_connection (r=0x1c31640) at src/http/ngx_http_request.c:2720 #2 0x00000000004aca34 in ngx_http_upstream_handler (ev=0x15431d0) at src/http/ngx_http_upstream.c:1290 #3 0x0000000000478776 in ngx_event_expire_timers () at src/event/ngx_event_timer.c:94 #4 0x0000000000478405 in ngx_process_events_and_timers (cycle=cycle at entry=0xdf7540) at src/event/ngx_event.c:271 #5 0x000000000048120d in ngx_worker_process_cycle (cycle=0xdf7540, data=) at src/os/unix/ngx_process_cycle.c:821 #6 0x000000000047f7de in ngx_spawn_process (cycle=cycle at entry=0xdf7540, proc=0x4811c0 , data=0x0, name=0x74ad49 "worker process", respawn=respawn at entry=6) at src/os/unix/ngx_process.c:199 #7 0x0000000000482873 in ngx_reap_children (cycle=0xdf7540) at src/os/unix/ngx_process_cycle.c:688 #8 ngx_master_process_cycle (cycle=0xdf7540, cycle at entry=0xb041a0) at src/os/unix/ngx_process_cycle.c:181 #9 0x0000000000455e79 in main (argc=, argv=) at src/core/nginx.c:385 (gdb) p r $1 = (ngx_http_request_t *) 0x1c31640 (gdb) p c $2 = (ngx_connection_t *) 0x7fb482641fa8 (gdb) p *r Cannot access memory at address 0x1c31640 (gdb) p *c $3 = {data = 0x13f5f50, read = 0x14587a0, write = 0x1542db0, fd = 168, recv = 0x489bf0 , send = 0x4890e0 , recv_chain = 0x48a2f0 , send_chain = 0x489420 , listening = 0x11e21a0, sent = 
905, log = 0x13f5ef0, pool = 0x13f5e90, type = 1, sockaddr = 0x13f5ee0, socklen = 16, addr_text = {len = 10, data = 0x13f5f40 "10.68.23.2.2"}, proxy_protocol = 0x0, ssl = 0x13f5fa8, udp = 0x0, local_sockaddr = 0x13f6060, local_socklen = 16, buffer = 0x13f6000, queue = {prev = 0x0, next = 0x0}, number = 873195, requests = 15, buffered = 0, log_error = 2, timedout = 0, error = 0, destroyed = 1, idle = 0, reusable = 0, close = 0, shared = 0, sendfile = 1, sndlowat = 0, tcp_nodelay = 1, tcp_nopush = 0, need_last_buf = 0, tcp_keepalive = 0, logged = 0, sendfile_task = 0x0} (gdb) -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- It looks like the "r" is not accessible, but I remember the "request" objects are pre-allocated, NOT in the heap? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289296#msg-289296 From nginx-forum at forum.nginx.org Thu Sep 3 10:55:56 2020 From: nginx-forum at forum.nginx.org (jasonsx) Date: Thu, 03 Sep 2020 06:55:56 -0400 Subject: nginx last version on windows In-Reply-To: <9731011096133ddcecbb3b46ace01368.NginxMailingListEnglish@forum.nginx.org> References: <20200903094803.GK29287@daoine.org> <9731011096133ddcecbb3b46ace01368.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7ff8affbc50a8c916c66a51cb4209120.NginxMailingListEnglish@forum.nginx.org> Thank you, this will be great :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289297#msg-289297 From nginx-forum at forum.nginx.org Thu Sep 3 11:08:22 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Thu, 03 Sep 2020 07:08:22 -0400 Subject: Worker process core dumped In-Reply-To: <0ad03c292feecfe7cb78b8646d5f1251.NginxMailingListEnglish@forum.nginx.org> References: <0ad03c292feecfe7cb78b8646d5f1251.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8de784cfc71749487ecb1428ff50ec5d.NginxMailingListEnglish@forum.nginx.org> I
was wrong; the request object was created on the fly with the pool object. Here the pool was destroyed before the "r" was referenced, which caused the core dump. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289298#msg-289298 From nginx-forum at forum.nginx.org Thu Sep 3 11:19:08 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Thu, 03 Sep 2020 07:19:08 -0400 Subject: Worker process core dumped In-Reply-To: <8de784cfc71749487ecb1428ff50ec5d.NginxMailingListEnglish@forum.nginx.org> References: <0ad03c292feecfe7cb78b8646d5f1251.NginxMailingListEnglish@forum.nginx.org> <8de784cfc71749487ecb1428ff50ec5d.NginxMailingListEnglish@forum.nginx.org> Message-ID: To be more certain, can somebody confirm that the "r" is no longer accessible after this? ngx_http_free_request(r, 0); Thank you in advance! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289299#msg-289299 From mdounin at mdounin.ru Thu Sep 3 12:22:55 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Sep 2020 15:22:55 +0300 Subject: Does NGINX read auth_basic_user_file on every connection? In-Reply-To: References: Message-ID: <20200903122255.GA12747@mdounin.ru> Hello! On Thu, Sep 03, 2020 at 10:55:27AM +0530, RA wrote: > How does NGINX process auth_basic_user_file? > > 1) Does it read it in entirety on every connection? No. > 2) Does it read it line by line on every connection and stops > when a match is found? No, though what nginx does is somewhat close. It reads the user file by using a fixed-size buffer, and then scans the buffer contents to find lines. Once the user is found, it stops. See the code for further details. > 3) Does it read it full on start and re-reads it only if the > file is changed? No. > If its either 1 or 2, then is it not very inefficient to read a > file on just every connection? If the file has fairly large > number of entries (5-10mb), will it not affect the performance > of web server in general?
There should be some "indexed" approach to this. Reading the user file is not generally a problem, since it is cached by the OS. An unwise choice of password hashing algorithm usually has a much larger impact on basic authentication and the performance of the web server in general, since basic authentication implies checking the password on each request. On the other hand, using user files with a fairly large number of entries might not be a good idea either. If you want to deploy authentication in setups with many thousands of users, you may want to use a different authentication mechanism. In particular, you may plug in your own, written in your favorite language, by using the auth_request directive; see here for details: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Thu Sep 3 12:31:02 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Sep 2020 13:31:02 +0100 Subject: nginx last version on windows In-Reply-To: <7ff8affbc50a8c916c66a51cb4209120.NginxMailingListEnglish@forum.nginx.org> References: <20200903094803.GK29287@daoine.org> <9731011096133ddcecbb3b46ace01368.NginxMailingListEnglish@forum.nginx.org> <7ff8affbc50a8c916c66a51cb4209120.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200903123102.GL29287@daoine.org> On Thu, Sep 03, 2020 at 06:55:56AM -0400, jasonsx wrote: Hi there, For what it's worth: to handle file uploads in nginx, you need nginx + your own code; or nginx + this upload module + your own code. Using this module might make the "your own code" part a bit easier to write; but it will still need to be written. So if you need to have file uploads working before someone prepares the module for you, you can make the "your own code" part handle it all. (Some languages / libraries do make the "your own code" part simpler than some others.)
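The "nginx + your own code" split can be sketched with a minimal config fragment — the /upload path, the 127.0.0.1:9000 backend, and the size limit are illustrative values, not taken from this thread:

```nginx
# nginx's side of the split: accept the upload and hand the
# request to the application that implements "your own code".
location /upload {
    client_max_body_size 50m;          # allow request bodies up to 50 MB
    proxy_pass http://127.0.0.1:9000;  # your upload-handling application
}
```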
Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Sep 3 12:40:40 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Sep 2020 15:40:40 +0300 Subject: Worker process core dumped In-Reply-To: References: <0ad03c292feecfe7cb78b8646d5f1251.NginxMailingListEnglish@forum.nginx.org> <8de784cfc71749487ecb1428ff50ec5d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200903124040.GB12747@mdounin.ru> Hello! On Thu, Sep 03, 2020 at 07:19:08AM -0400, allenhe wrote: > to be more self assurance, > can somebody confirm that the "r" is no longer accessable after this? > ngx_http_free_request(r, 0); The ngx_http_free_request() function frees the request, and using "r" after the call is an error. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Sep 3 13:26:16 2020 From: nginx-forum at forum.nginx.org (jasonsx) Date: Thu, 03 Sep 2020 09:26:16 -0400 Subject: nginx last version on windows In-Reply-To: <20200903123102.GL29287@daoine.org> References: <20200903123102.GL29287@daoine.org> Message-ID: <6fbb188ca49a8a2e1806f09c11d5d88c.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply. Yep, I've already downloaded the nginx src and am trying to compile it with the module, but I'm stuck on perl 5.10.0 while mingw is using 5.8.8. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289303#msg-289303 From nginx-forum at forum.nginx.org Thu Sep 3 14:12:55 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 03 Sep 2020 10:12:55 -0400 Subject: nginx last version on windows In-Reply-To: <6fbb188ca49a8a2e1806f09c11d5d88c.NginxMailingListEnglish@forum.nginx.org> References: <20200903123102.GL29287@daoine.org> <6fbb188ca49a8a2e1806f09c11d5d88c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9fbad5c9995e97e7955ec0f3833906c5.NginxMailingListEnglish@forum.nginx.org> There's no point in trying, that module's code is full of errors and too many hacks.
You're better off with Lua, which is embedded in our version; see https://www.gakhov.com/articles/implementing-api-based-fileserver-with-nginx-and-lua.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289307#msg-289307 From nginx-forum at forum.nginx.org Thu Sep 3 15:48:05 2020 From: nginx-forum at forum.nginx.org (jasonsx) Date: Thu, 03 Sep 2020 11:48:05 -0400 Subject: nginx last version on windows In-Reply-To: <9fbad5c9995e97e7955ec0f3833906c5.NginxMailingListEnglish@forum.nginx.org> References: <20200903123102.GL29287@daoine.org> <6fbb188ca49a8a2e1806f09c11d5d88c.NginxMailingListEnglish@forum.nginx.org> <9fbad5c9995e97e7955ec0f3833906c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thank you, I will try it now. Thanks for your time. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289290,289308#msg-289308 From nginx-forum at forum.nginx.org Sat Sep 5 19:25:11 2020 From: nginx-forum at forum.nginx.org (vishal taneja) Date: Sat, 05 Sep 2020 15:25:11 -0400 Subject: Slow performance when sending a large file upload request via proxy_pass In-Reply-To: <0eb425c4e764553662e0da37880fa56f.NginxMailingListEnglish@forum.nginx.org> References: <0eb425c4e764553662e0da37880fa56f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6e2edae657bf72abff3b605e6cdfe1f3.NginxMailingListEnglish@forum.nginx.org> Dear Sir, I am uploading 5 images; each image is 7 MB in size. When I try to upload, it takes 2 min.
I want the upload to take 20 seconds. Below is the configuration; please help me to resolve this. client_max_body_size 100m; client_body_in_file_only on; client_body_buffer_size 100m; proxy_buffering on; proxy_buffer_size 1k; proxy_buffers 4 100m; proxy_busy_buffers_size 100m; proxy_max_temp_file_size 2048m; proxy_temp_file_write_size 100m; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268317,289327#msg-289327 From nginx-forum at forum.nginx.org Sun Sep 6 14:15:28 2020 From: nginx-forum at forum.nginx.org (ravansh) Date: Sun, 06 Sep 2020 10:15:28 -0400 Subject: Unable to proxy pass to https backend on nginx Message-ID: <9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> I am unable to reverse proxy to my https backend. What am I doing wrong? I am using the same set of certs for the backend and frontend, as I am running them both on the same machine. I got my certificates from zerossl. Here is the error I get: curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru 502 Bad Gateway

502 Bad Gateway


nginx/1.16.1
In my /var/log/nginx/error.log I get this: 2020/09/06 01:50:53 [error] 2603#0: *4 upstream SSL certificate verify error: (2:unable to get > issuer certificate) while SSL handshaking to upstream, client: 192.168.103.15, server: www.ravi.guru, request: "GET / HTTP/1.1", upstream: "https://192.168.103.15:8080/", host: "www.ravi.guru" When I connect to backend directly, all goes well: curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru:8080 hi my index.html is a file with an entry "hi" =============== Here is my config file =============== server { listen 443 http2 ssl; server_name www.ravi.guru; ssl_certificate /etc/ssl/certs/certificate.crt; ssl_certificate_key /etc/ssl/private/private.key; ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; location / { proxy_pass https://www.ravi.guru:8080; proxy_ssl_certificate /etc/ssl/certs/certificate.crt; proxy_ssl_certificate_key /etc/ssl/private/private.key; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; } } server { listen 8080 http2 ssl; #listen [::]:443 http2 ssl; server_name www.ravi.guru; ssl_certificate /etc/ssl/certs/certificate.crt; ssl_certificate_key /etc/ssl/private/private.key; ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; root /var/www/ravi.guru/html; index index.html index.htm index.nginx-debian.html; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289329,289329#msg-289329 From teward at thomas-ward.net Sun Sep 6 18:21:37 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Sun, 06 Sep 2020 14:21:37 -0400 Subject: Unable to proxy pass to https backend on nginx In-Reply-To: <9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> References: 
<9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8bcbbd85-56ce-4a30-a746-4313a15267e1@thomas-ward.net> Bad Gateway indicates the backend you are sending to is not valid in some way - check the nginx error.log output to see what happened when trying to send it to your proxypass'd backend Get BlueMail for Android -------- Original Message -------- From: ravansh Sent: Sun Sep 06 10:15:28 EDT 2020 To: nginx at nginx.org Subject: Unable to proxy pass to https backend on nginx I am unable to reverse proxy to my https backend. what am i doing wrong? I am using the same set of cert for the backend and frontend as I am running them both on the same machine. I got my certificates from zerossl. Here is the error I get : curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru 502 Bad Gateway

502 Bad Gateway


nginx/1.16.1
In my /var/log/nginx/error.log I get this: 2020/09/06 01:50:53 [error] 2603#0: *4 upstream SSL certificate verify error: (2:unable to get > issuer certificate) while SSL handshaking to upstream, client: 192.168.103.15, server: www.ravi.guru, request: "GET / HTTP/1.1", upstream: "https://192.168.103.15:8080/", host: "www.ravi.guru" When I connect to backend directly, all goes well: curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru:8080 hi my index.html is a file with an entry "hi" =============== Here is my config file =============== server { listen 443 http2 ssl; server_name www.ravi.guru; ssl_certificate /etc/ssl/certs/certificate.crt; ssl_certificate_key /etc/ssl/private/private.key; ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; location / { proxy_pass https://www.ravi.guru:8080; proxy_ssl_certificate /etc/ssl/certs/certificate.crt; proxy_ssl_certificate_key /etc/ssl/private/private.key; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; } } server { listen 8080 http2 ssl; #listen [::]:443 http2 ssl; server_name www.ravi.guru; ssl_certificate /etc/ssl/certs/certificate.crt; ssl_certificate_key /etc/ssl/private/private.key; ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; root /var/www/ravi.guru/html; index index.html index.htm index.nginx-debian.html; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289329,289329#msg-289329 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Sep 7 00:59:01 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Sep 2020 03:59:01 +0300 Subject: Unable to proxy pass to https backend on nginx In-Reply-To: <9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> References: <9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200907005901.GC18881@mdounin.ru> Hello! On Sun, Sep 06, 2020 at 10:15:28AM -0400, ravansh wrote: > I am unable to reverse proxy to my https backend. what am i doing wrong? I > am using the same set of cert for the backend and frontend as I am running > them both on the same machine. I got my certificates from zerossl. Here is > the error I get : > > curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru > > > 502 Bad Gateway > >

502 Bad Gateway

>
nginx/1.16.1
> > > In my /var/log/nginx/error.log I get this: > > 2020/09/06 01:50:53 [error] 2603#0: *4 upstream SSL certificate verify > error: (2:unable to get > issuer certificate) while SSL handshaking to > upstream, client: 192.168.103.15, server: www.ravi.guru, request: "GET / > HTTP/1.1", upstream: "https://192.168.103.15:8080/", host: "www.ravi.guru" > > When I connect to backend directly, all goes well: > > curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru:8080 Are there any other virtual servers on the port 8080? If yes, you may want to switch on SNI in connections to upstream servers using the proxy_ssl_server_name directive, see here for details: http://nginx.org/r/proxy_ssl_server_name -- Maxim Dounin http://mdounin.ru/ From h33x at softwarecats.dev Mon Sep 7 05:02:58 2020 From: h33x at softwarecats.dev (Anton Demenev) Date: Mon, 7 Sep 2020 12:02:58 +0700 Subject: configuration test ignores custom resolver Message-ID: <1dbf7845-18a5-d38c-fae6-ef4340a324d7@softwarecats.dev> Hi everyone! Unfortunately, I can't find information about how Nginx tests configuration files. In my case I have two internal DNS zones, .develop and .test. In the global http section I added my resolver: ... http { resolver 192.168.140.249 valid=300s; resolver_timeout 1s; ... And I use the proxy_pass directive with a DNS name, like proxy_pass http://front-dev.develop; I expected that Nginx would start to use the resolver for upstream name resolving at the test config stage. But everything goes wrong... In the strace output I see that on start Nginx uses the system resolver, ignoring the custom resolver from the config. Can anyone help with this? What am I doing wrong? Regards, Anton.
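The usual workaround for this behaviour is to make nginx resolve the upstream name at request time rather than at startup. A minimal sketch — only the resolver lines and the hostname come from the mail above; the server block and variable name are illustrative:

```nginx
http {
    resolver 192.168.140.249 valid=300s;
    resolver_timeout 1s;

    server {
        listen 80;
        location / {
            # A literal hostname in proxy_pass is resolved once, at startup,
            # by the system resolver. Putting it into a variable defers
            # resolution to request time, where the "resolver" directive
            # above applies.
            set $backend "front-dev.develop";
            proxy_pass http://$backend;
        }
    }
}
```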
From francis at daoine.org Mon Sep 7 11:10:24 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 7 Sep 2020 12:10:24 +0100 Subject: Unable to proxy pass to https backend on nginx In-Reply-To: <9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> References: <9091d2988ffa66d9c20cc8fcf300356c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200907111024.GM29287@daoine.org> On Sun, Sep 06, 2020 at 10:15:28AM -0400, ravansh wrote: Hi there, > I am unable to reverse proxy to my https backend. what am i doing wrong? I > am using the same set of cert for the backend and frontend as I am running > them both on the same machine. I got my certificates from zerossl. Here is > the error I get : > > curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru That response says that curl-client does accept the ssl-negotiation with your port-443 nginx server when it knows to trust the ca_bundle.crt contents. > 2020/09/06 01:50:53 [error] 2603#0: *4 upstream SSL certificate verify > error: (2:unable to get > issuer certificate) while SSL handshaking to > upstream, client: 192.168.103.15, server: www.ravi.guru, request: "GET / > HTTP/1.1", upstream: "https://192.168.103.15:8080/", host: "www.ravi.guru" That log says that nginx-client does not accept the ssl-negotiation with your port-8080 nginx server. > When I connect to backend directly, all goes well: > > curl --cacert /etc/ssl/certs/ca_bundle.crt https://www.ravi.guru:8080 And that response says that curl-client does accept the ssl-negotiation with your port-8080 nginx server when it knows to trust the ca_bundle.crt contents. > =============== > Here is my config file > =============== As an aside: a lot of these directives are only needed if you are using client certificates; you don't appear to be, so you can possibly remove some of these directives for person-clarity. 
> server { > listen 443 http2 ssl; > server_name www.ravi.guru; > location / { > proxy_pass https://www.ravi.guru:8080; > proxy_ssl_trusted_certificate /etc/ssl/certs/ca_bundle.crt; > proxy_ssl_verify on; > proxy_ssl_verify_depth 2; I guess that one possibility is that the "certificate chain" to be verified is longer than 2; after you've confirmed that the certificate file (below) is correct, it might be worth increasing that depth to whatever your system uses. > } > } > server { > listen 8080 http2 ssl; > #listen [::]:443 http2 ssl; > > server_name www.ravi.guru; > > ssl_certificate /etc/ssl/certs/certificate.crt; Does "grep CERT /etc/ssl/certs/certificate.crt" show one BEGIN/END pair, or more than one? As in -- does that file hold just the this-server certificate, or does it also hold the full chain back to the root? (If it does not hold the full chain, I guess it is possible that curl-client and nginx-client can have different behaviours.) Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Sep 7 11:34:47 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 7 Sep 2020 12:34:47 +0100 Subject: configuration test ignores custom resolver In-Reply-To: <1dbf7845-18a5-d38c-fae6-ef4340a324d7@softwarecats.dev> References: <1dbf7845-18a5-d38c-fae6-ef4340a324d7@softwarecats.dev> Message-ID: <20200907113447.GN29287@daoine.org> On Mon, Sep 07, 2020 at 12:02:58PM +0700, Anton Demenev wrote: Hi there, > Unfortunately, I can't find information about how Nginx tests configuration > files. As far as I know, it's pretty much "do everything apart from actually listen on ports or write to files". > In my case I have a two internal DNS zones, .develop and .test. > > On global http section I added my resolver: > > ... > > http { > ??? resolver 192.168.140.249 valid=300s; > ??? resolver_timeout 1s; > > ... 
> > And I use proxy_pass directive with DNS name likeproxy_pass > http://front-dev.develop; > > I expect, that Nginx start to use resolver for upstream name resolving on > test config stage. No. nginx uses the system resolver at startup, to resolve whatever "obviously static" hostnames are in the config. nginx uses the "resolver"-directive resolver at runtime, to resolve whatever other hostnames apply then. http://nginx.org/r/proxy_pass has some information about when a resolver is used. > But everything go wrong... > > On strace output I see, that on start Nginx uses system resolver, ignoring > custom resolver from config. > > Can anyone help with this? What I do wrong? https://www.nginx.com/blog/dns-service-discovery-nginx-plus/ shows some more information; the first half of that document is valid for nginx (non-plus). Probably the simplest nginx-way is to use a variable in your proxy_pass directive, so that the hostname is not "obviously static" at startup, and so that the system resolver will not be used then. Good luck with it, f -- Francis Daly francis at daoine.org From atwoodhuang94 at gmail.com Tue Sep 8 03:42:33 2020 From: atwoodhuang94 at gmail.com (=?UTF-8?B?6buE5ZaG?=) Date: Tue, 8 Sep 2020 11:42:33 +0800 Subject: =?UTF-8?Q?Could_backup_request_=28or_hedged_requests_=29_be_supported_?= =?UTF-8?Q?=EF=BC=9F?= Message-ID: Hi? In my work, I use nginx as a http proxy between diffreent services. As we know, Envoy is also a very famous proxy. I have noticed that envoy has a function called 'hedged requests' https://www.envoyproxy.io/docs/envoy/v1.12.2/intro/arch_overview/http/http_routing#request-hedging . This means that Envoy will race multiple simultaneous upstream requests and return the response associated with the first acceptable response headers to the downstream. so ,could nginx support this function ? I have tried to do some change in ngx_http_upstream.c. But I found that too much code needs to be changed to accomplish this function? 
it's too difficult for me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 9 02:49:44 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Tue, 08 Sep 2020 22:49:44 -0400 Subject: Worker process core dumped In-Reply-To: <20200903124040.GB12747@mdounin.ru> References: <20200903124040.GB12747@mdounin.ru> Message-ID: <0ee1e051ee78755492bd1d50755e3582.NginxMailingListEnglish@forum.nginx.org> Hi, I found that most times, using "r" after ngx_http_free_request() doesn't cause any problem; the core dump happens only once in a while, under high load. I am wondering if "ngx_pfree" does not return the memory back to the OS when it's called? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289346#msg-289346 From nginx-forum at forum.nginx.org Wed Sep 9 05:58:42 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Wed, 09 Sep 2020 01:58:42 -0400 Subject: proxy_pass Not Working on Port 80 Message-ID: <666e92adf93f0d0246105723c95e4d70.NginxMailingListEnglish@forum.nginx.org> I have two servers behind one IP address. Server1 is hosting several websites, all using TLS exclusively. Recently I set up Server2 and set up one website using a reverse proxy from Server1, and finally successfully deployed TLS on it as well. During that setup I had to use port 80 to use Certbot with Let's Encrypt. Now I'm trying to do it again the same way with another domain. The proxy_pass directive works on port 8080, but when I switch it to port 80 I get a 404 error.
Here is the setup in question: (again, port 8080 works, but port 80 does not) ---------------------------------- #Proxy server (Server1) # threedaystubble.com server server { listen 80; server_name www.threedaystubble.com threedaystubble.com; location / { proxy_pass http://192.168.3.5:80; } } #Proxied Server (Server2) server { listen 80; server_name threedaystubble.com www.threedaystubble.com; root /var/www/threedaystubble.com; location / { } } -------------------------------------- Also, I checked if anything was happening on port 80 on both machines with netstat -plant: Server1 Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN - Server2 Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN - Any help would be greatly appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289348,289348#msg-289348 From francis at daoine.org Wed Sep 9 08:57:12 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Sep 2020 09:57:12 +0100 Subject: proxy_pass Not Working on Port 80 In-Reply-To: <666e92adf93f0d0246105723c95e4d70.NginxMailingListEnglish@forum.nginx.org> References: <666e92adf93f0d0246105723c95e4d70.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200909085712.GA30691@daoine.org> On Wed, Sep 09, 2020 at 01:58:42AM -0400, figshta wrote: Hi there, > I have two servers behind on IP address. What does that mean, in terms of "traffic to the IP address gets sent to server#1 or to server#2"? > Server1 is hosting several websites > all using TLS exclusively. That suggests that all incoming traffic to your IP on port 443 gets sent to server#1. > Recently I set up Server2 and setup one website using reverse proxy from > Server1 and finally successfully deployed TLS on it as well. > During that setup I had to use port 80 to use Certbot with Let's Encrypt.
> Now I'm trying to do it again the same way with another domain. > The proxy_pass directive works on port 8080, but when I switch it to port 80 > I get a 404 error. What request do you make that returns the 404 error? What response do you want for that request? (Probably something like "http 200 and the contents of *this* file".) (And: what does the nginx log file say about the 404 error? Is it trying to read a different file from what you expect?) > Here is setup in question: (again, Port 8080 works, but port 80 does not) > > ---------------------------------- > #Proxy server (Server1) > > # threedaystubble.com server > server { > listen 80; > server_name www.threedaystubble.com threedaystubble.com; > location / { > proxy_pass http://192.168.3.5:80; With that config, your server#2 will not see a Host: header that includes threedaystubble.com. If your server#2 needs that Host header, things will probably break. > #Proxied Server (Server2) > > server { > listen 80; > > server_name threedaystubble.com www.threedaystubble.com; > > root /var/www/threedaystubble.com; > > location / { > } > > } If that is the entire config on server#2, it should probably work. But if you have more server{} blocks, such that the "default" port-80 server is something else, then that extra config might be causing this not to act in the way that you want. > Any help would be greatly appreciated. Depending on what else is wanted, I'd suggest one of the methods to make sure that the Host: header that you want, is sent to server#2 in the proxy_pass request from server#1. That can be proxy_set_header; or proxy_pass with the hostname and either an upstream of the hostname, or the system resolver to refer to the server#2 address. (Or: just use port 8080 on server#2 for this site; and port 8081 on server#2 for the next site.) 
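A minimal sketch of the proxy_set_header option, reusing the addresses from the posted config (it assumes server#2 relies on the Host header for name-based server{} matching):

```nginx
# Server#1: forward the client's Host header so that server#2's
# server_name-based server{} block for threedaystubble.com matches,
# instead of whatever default server listens on its port 80.
server {
    listen 80;
    server_name www.threedaystubble.com threedaystubble.com;
    location / {
        proxy_pass http://192.168.3.5:80;
        proxy_set_header Host $host;
    }
}
```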
Good luck with it, f -- Francis Daly francis at daoine.org From Hans.Liss at uadm.uu.se Wed Sep 9 09:10:20 2020 From: Hans.Liss at uadm.uu.se (Hans Liss) Date: Wed, 9 Sep 2020 11:10:20 +0200 Subject: Full logging Message-ID: <98b460b2-65e9-ae0e-8aea-02c572f5ea3d@uadm.uu.se> Hi! I need an HTTP proxy that can handle requests for a single upstream server, but also log request headers, raw request body, response headers and raw response body for each request. Preferably this should be logged to a separate daily logfile (with date-stamped filename), with timestamps, but the exact format isn't important. I understand that nginx doesn't provide full logging like this out of the box, so I was wondering if it would be difficult to add. Is there a single well-defined point in the code where an upstream request is ready to be sent, and a point where the response has been received? I might be able to take it from there, but it would save a lot of time if I knew where to do this. I'm well aware of the performance penalties involved, but for this particular case, it doesn't matter. Best regards, Hans När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy From nginx-forum at forum.nginx.org Wed Sep 9 09:56:39 2020 From: nginx-forum at forum.nginx.org (doudootiana) Date: Wed, 09 Sep 2020 05:56:39 -0400 Subject: problem ajax with nginx Message-ID: Hi, ladies and gentlemen :) I have migrated my project from Apache2 and php7 to nginx with php-fpm, version 7 too. With Apache2, everything worked. With Nginx, only ajax fails. With Ajax, I try to execute a php script.
Trying to determine where is the problem, i have : -tested php code(echo and phpinfo), directly in html: it works. -tested a javascript which change the appearances of my page : it works. -tested in command line my script php : it works. ( The php script launch a sh script to reboot my Raspberry.) In the javascript where is my ajax, i used success error and complete to return the result. and the function of success and complete are executed. but the card doesn't reboot. I have no error log from nginx and php-fpm :( I've been looking into the problem for several days, which must be a big mistake on my part. here the fonc of nginx : #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root /diagbox/web; index intro.html; } error_page 550 /550.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass unix:/run/php-fpm.sock; fastcgi_param SCRIPT_FILENAME /diagbox/web/$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias 
another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } here the js script : //Constantes fonction reboot const scripts_reboot = '../scripts/reset_reboot/reboot.php' const mess_reboot = "La DiagBox redémarrera après avoir cliquer sur 'ok'. Tu pourras te reconnecter sur l'interface web d'ici 30 secondes environ :)" const err_reboot = "Une erreur a empêché le redémarrage de la carte." //Constantes fonction popup test debit const php_download = "../scripts/test_debit/iperf_download_bouygues.php" const php_upload = "../scripts/test_debit/iperf_upload_bouygues.php" const php_ping = "../scripts/test_debit/iperf_ping.php" function reboot(param1, param2) // param1 = smartphone ou tablette . param2 = oui ou non { if (param1 === 'tablette') { switch (param2) { case 'oui': alert(mess_reboot); $.ajax({ url: scripts_reboot, }); break; default: choix_menu('accueil', 'tablette'); // Fonction accessible grace a la page html break; } } else if (param1 === 'smartphone') { switch (param2) { case 'oui': alert(mess_reboot); $.ajax({ url: scripts_reboot, }); break; default: choix_menu('accueil', 'smartphone'); // Fonction accessible grace a la page html break; } } else { alert(err_reboot); } } //fin script reboot Here the html code, the part which uses the js code :

confirmer le redémarrage de la Diagbox ?

and here, the php code : Thanks by advance :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289351,289351#msg-289351 From r at roze.lv Wed Sep 9 11:00:17 2020 From: r at roze.lv (Reinis Rozitis) Date: Wed, 9 Sep 2020 14:00:17 +0300 Subject: proxy_pass Not Working on Port 80 In-Reply-To: <666e92adf93f0d0246105723c95e4d70.NginxMailingListEnglish@forum.nginx.org> References: <666e92adf93f0d0246105723c95e4d70.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000001d68698$66db7e20$34927a60$@roze.lv> > ---------------------------------- > #Proxy server (Server1) > > # threedaystubble.com server > server { > listen 80; > server_name www.threedaystubble.com threedaystubble.com; > location / { > proxy_pass http://192.168.3.5:80; > } > } In this configuration nginx doesn't pass the Host header to backend. In case there are multiple name based virtualhosts on the 192.168.3.5, you'll always get the default or first one (the order in the backend config). Try to change to this and see if helps: location / { proxy_pass http://192.168.3.5:80; proxy_set_header Host $host; } rr From nginx-forum at forum.nginx.org Wed Sep 9 11:25:04 2020 From: nginx-forum at forum.nginx.org (Pekkonen) Date: Wed, 09 Sep 2020 07:25:04 -0400 Subject: nginx + Wordpress = problems with permalinks Message-ID: <875de0ea44179dede88e14bc28c9d94c.NginxMailingListEnglish@forum.nginx.org> Hi guys! I try to use nginx for Wordpress and have a problems when setup Permalinks to all other format when "Plain" I use nginx 1.14.2 PHP 7.3 This is my config /etc/nginx/sites-available/default server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.html index.htm index.nginx-debian.html index.php; server_name ip_address_of_my_server; location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; } } E.g. 
I see 404 error for this URL http://sitename.com/category/books-about-water/ For plain settings I see a page - http://sitename.com?cat=16 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289355,289355#msg-289355 From r at roze.lv Wed Sep 9 11:32:02 2020 From: r at roze.lv (Reinis Rozitis) Date: Wed, 9 Sep 2020 14:32:02 +0300 Subject: Full logging In-Reply-To: <98b460b2-65e9-ae0e-8aea-02c572f5ea3d@uadm.uu.se> References: <98b460b2-65e9-ae0e-8aea-02c572f5ea3d@uadm.uu.se> Message-ID: <000101d6869c$d6a81c50$83f854f0$@roze.lv> > I need a HTTP proxy that can handle requests for a single upstream server, > but also log request headers, raw request body, response headers and raw > response body for each request. Preferably this should be logged to a > separate daily logfile (with date-stamped filename), with timestamps, but the > exact format isn't important. By default nginx can log the request body ( http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_body ), for the response body an option is to use the lua module https://github.com/openresty/lua-nginx-module (either compile yourself or use Openresty) But it might be much simpler just to plug a proxy (developed for exactly this purpose) between nginx and upstream. Like for example https://mitmproxy.org/ I have an old test example (not sure if still working) which shows the idea, but maybe there is a more elegant way to do it: http { log_format bodylog '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$req_headers" "$req_body" "$resp_body"'; server { lua_need_request_body on; set $resp_body ""; set $req_body ""; set $req_headers ""; rewrite_by_lua_block { local req_headers = "Headers: "; ngx.var.req_body = ngx.req.get_body_data(); local h, err = ngx.req.get_headers() for k, v in pairs(h) do req_headers = req_headers .. k .. ": " .. v ..
"\n"; end ngx.var.req_headers = req_headers; } body_filter_by_lua ' local resp_body = string.sub(ngx.arg[1], 1, 1000) ngx.ctx.buffered = (ngx.ctx.buffered or "") .. resp_body if ngx.arg[2] then ngx.var.resp_body = ngx.ctx.buffered end '; } } rr From mdounin at mdounin.ru Wed Sep 9 13:01:55 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Sep 2020 16:01:55 +0300 Subject: Worker process core dumped In-Reply-To: <0ee1e051ee78755492bd1d50755e3582.NginxMailingListEnglish@forum.nginx.org> References: <20200903124040.GB12747@mdounin.ru> <0ee1e051ee78755492bd1d50755e3582.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200909130155.GM18881@mdounin.ru> Hello! On Tue, Sep 08, 2020 at 10:49:44PM -0400, allenhe wrote: > I found most times using "r" after ngx_http_free_request() won't have any > problem. the core dump would happen once for a while in the high load. That's because use-after-free errors not always result in segmentation faults as long as the memory isn't yet returned to the OS by the memory allocator. To get consistent errors, consider using AddressSanitizer (https://en.wikipedia.org/wiki/AddressSanitizer). -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Sep 9 14:40:35 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Wed, 09 Sep 2020 10:40:35 -0400 Subject: proxy_pass Not Working on Port 80 In-Reply-To: <000001d68698$66db7e20$34927a60$@roze.lv> References: <000001d68698$66db7e20$34927a60$@roze.lv> Message-ID: <403e912e1db25a014e26ceb9e170191b.NginxMailingListEnglish@forum.nginx.org> Thank you Francis and Rennis. I really appreciate your help, unfortunately it isn't working yet... Are there any another ways to trouble shoot this port problem? Rennis: Your suggestion looked so promising. > In this configuration nginx doesn't pass the Host header to backend. > In case there are multiple name based virtualhosts on the 192.168.3.5, > you'll always get the default or first one (the order in the backend > config). 
It really makes sense because I do have another configuration pointing to this backend server; it is, however, on port 443. Here is the complete proxy configuration for both sites on Server1: (I added the proxy_set_header directive as suggested) #Proxy server (Pi3) (Server1) # houseofavi.com server server { listen 443; server_name houseofavi.com; location / { proxy_pass https://192.168.3.5:443; proxy_set_header Host $host; } ssl_certificate /etc/letsencrypt/live/houseofavi.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/houseofavi.com/privkey.pem; # managed by Certbot } # threedaystubble.com server server { listen 80; server_name www.threedaystubble.com threedaystubble.com; location / { proxy_pass http://192.168.3.5:80; proxy_set_header Host $host; } } Are there other ideas to allow port 80? Francis: I don't want to use port 8080 or 8081 because Certbot requires port 80 and it should work. I plan to have everything on port 443 once it's all set up. I am, however, now concerned about running into trouble with multiple sites using 443. Is it difficult? I hope I can pass various requests to different "virtual" server blocks on the same port. Here are both server configurations on Server2 in case it helps.
#Proxied Server (Server2) #houseofavi.com server { if ($host = houseofavi.com) { return 301 https://$host$request_uri; } # managed by Certbot server_name houseofavi.com; return 404; # managed by Certbot root /var/www/houseofavi.com; } server { listen 443 ssl; # managed by Certbot root /var/www/houseofavi.com; location / { } ssl_certificate /etc/letsencrypt/live/houseofavi.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/houseofavi.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } #Proxied Server (Server2) #threedaystubble.com server { listen 80; server_name threedaystubble.com www.threedaystubble.com; root /var/www/threedaystubble.com; location / { } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289348,289362#msg-289362 From nginx-forum at forum.nginx.org Wed Sep 9 15:11:53 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Wed, 09 Sep 2020 11:11:53 -0400 Subject: Redirect Question for Directory Structure Change Message-ID: I'm looking for help with a permanent site-wide redirect. I have a website that I moved to a new server. For some crazy reason the old server had an odd structure that I am changing. The URL root for the old site is http://threedaystubble.com/e/ I have changed it to be http://threedaystubble.com/ Example: threedaystubble.com/e/gallery.html --- redirects to ----> threedaystubble.com/gallery.com There are many other folders and html files that need this site-wide redirect. Of course anyone can access the site when linking to the root, but there are many links out there referring to the old directory structure under /e/. I'm looking for assistance writing a redirect so all those inbound links out there on the internet work again. I tried a couple of ideas, but they didn't work, I thought this location directive inside a server block was best, but it didn't work. 
location = /e { return 310 $scheme://threedaystubble.com; } Any help would be greatly appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289363,289363#msg-289363 From r at roze.lv Wed Sep 9 16:03:19 2020 From: r at roze.lv (Reinis Rozitis) Date: Wed, 9 Sep 2020 19:03:19 +0300 Subject: Redirect Question for Directory Structure Change In-Reply-To: References: Message-ID: <000001d686c2$bc6e2200$354a6600$@roze.lv> It is a bit unclear if you want only a single rewrite or are there multiple different directory mappings/redirects. > I tried a couple of ideas, but they didn't work, I thought this location directive > inside a server block was best, but it didn't work. > > location = /e { > return 310 $scheme://threedaystubble.com; } This means that only '/e' will be redirected and not '/e/site.html' (and not even '/e/') and it won't preserve the request uri. This should work: location ~ ^/e/(.*) { return 301 $scheme://threedaystubble.com/$1; } Or you could do it via rewrite in the server {} block: rewrite ^/e/(.*) /$1 last; (you can replace 'last' with 'permanent' if you want the client to know about the structure change and get the 301 redirect). p.s. if there are multiple redirect locations (besides /e) an easy way is to group them together via map directive. rr From francis at daoine.org Wed Sep 9 20:46:46 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Sep 2020 21:46:46 +0100 Subject: proxy_pass Not Working on Port 80 In-Reply-To: <403e912e1db25a014e26ceb9e170191b.NginxMailingListEnglish@forum.nginx.org> References: <000001d68698$66db7e20$34927a60$@roze.lv> <403e912e1db25a014e26ceb9e170191b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200909204646.GB30691@daoine.org> On Wed, Sep 09, 2020 at 10:40:35AM -0400, figshta wrote: Hi there, > I really appreciate your help, unfortunately it isn't working yet... > > Are there any another ways to trouble shoot this port problem? What request do you make of nginx-frontend?
What request do you want nginx to make of the backend/upstream? What request does nginx actually make of the backend? The logs, or tcpdump, should show you exactly what is happening. > I don't want to use port 8080 or 8081 because Certbot requires port 80 and > it should work. Certbot requires port 80 on the frontend. You get to decide for yourself what happens on the backend - certbot should not know or care. > I plan to have everything on port 443 once it's all set up. Certbot will still require port 80, unless you have an alternative plan. > I am however, now concerned about running into trouble with multiple sites > using 443. > It it difficult? http://nginx.org/en/docs/http/configuring_https_servers.html has some useful notes. > Here both server configurations on Server2 in case it helps. > > #Proxied Server (Server2) > #houseofavi.com > server { > if ($host = houseofavi.com) { > return 301 https://$host$request_uri; > } # managed by Certbot > server_name houseofavi.com; > return 404; # managed by Certbot That is the 404 return that you get, because your frontend nginx did not send the Host: header that you want. (Instead, it sent the Host: header that you configured it to send.) There is, in this case, an implicit "listen 80 default;" in this server{}. So... > server { > listen 80; > server_name threedaystubble.com www.threedaystubble.com; ...this server{} will only be used if you include a Host: header of one of those two strings. Add some logging; or (temporarily) return 200 "this is the backend you want: $request_uri\n"; to see that it is (or is not) being used. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Sep 10 04:25:50 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Thu, 10 Sep 2020 00:25:50 -0400 Subject: Redirect Question for Directory Structure Change In-Reply-To: <000001d686c2$bc6e2200$354a6600$@roze.lv> References: <000001d686c2$bc6e2200$354a6600$@roze.lv> Message-ID: Thank you Reinis. 
I really appreciate your help and your patience. I am trying to learn this, so seeing how it works is very useful. To be clear, hopefully, I need all the (multiple) subdirectories of threedaystubble.com/e/ in existing inbound links to refer to the new structure threedaystubble.com/~ >p.s if there are multiple redirect locations (besides /e) an easy way is to group them together via map directive. All the redirect locations are in /e/. In fact the path to everything is the same except I removed the /e/. So, maybe I don't need to use the map directive, right? This didn't work: location ~ ^/e/(.*) { return 310 $scheme://threedaystubble.com/$1; } This seems to work: rewrite ^/e/(.*) /$1 permanent; References: <20200909204646.GB30691@daoine.org> Message-ID: Thank you Francis! I realize that some of these are probably rhetorical questions, but in the interest of learning, I will try to answer them anyway. There is, in this case, an implicit "listen 80 default;" in this server{}. So... >> server { >> listen 80; >> server_name threedaystubble.com www.threedaystubble.com; >....this server{} will only be used if you include a Host: header of one of those two strings. >Add some logging; or (temporarily) >return 200 "this is the backend you want: $request_uri\n"; >to see that it is (or is not) being used. It is clear that you have given me the guidance I need to try to figure it out. I will play with it and try to learn it. Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289348,289371#msg-289371 From nginx-forum at forum.nginx.org Thu Sep 10 05:45:52 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Thu, 10 Sep 2020 01:45:52 -0400 Subject: Redirect Question for Directory Structure Change In-Reply-To: References: <000001d686c2$bc6e2200$354a6600$@roze.lv> Message-ID: <8cd91494034df358503d132ddda70b8f.NginxMailingListEnglish@forum.nginx.org> I was wrong... >This seems to work: >>rewrite ^/e/(.*) /$1 permanent; It only works for the first level...
'threedaystubble.com/Gallery.html' works but other links from that page that got deeper into the file structure do not! So, maybe map directive is needed after all... I'm planning to try that next, but I want to make sure that a simpler 'return' redirect won't work in my case. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289363,289372#msg-289372 From r at roze.lv Thu Sep 10 08:17:04 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 10 Sep 2020 11:17:04 +0300 Subject: Redirect Question for Directory Structure Change In-Reply-To: <8cd91494034df358503d132ddda70b8f.NginxMailingListEnglish@forum.nginx.org> References: <000001d686c2$bc6e2200$354a6600$@roze.lv> <8cd91494034df358503d132ddda70b8f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000e01d6874a$c4495c00$4cdc1400$@roze.lv> > I was wrong... > > >This seems to work: > >>rewrite ^/e/(.*) /$1 permanent; > > It only works for the first level... > 'threedaystubble.com/Gallery.html' works but other links from that page that > got deeper into the file structure do not! What do you mean by "got deeper"? Can you give a sample url that doesn't work? Like what the uri is currently and what you need it to be? Also a more complete nginx configuration might help to pinpoint the problem (because the order of regular expression location matching matters etc). rr From francis at daoine.org Thu Sep 10 08:17:40 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 10 Sep 2020 09:17:40 +0100 Subject: proxy_pass Not Working on Port 80 In-Reply-To: References: <20200909204646.GB30691@daoine.org> Message-ID: <20200910081740.GC30691@daoine.org> On Thu, Sep 10, 2020 at 01:31:48AM -0400, figshta wrote: Hi there, > I realize that some of these are probably rhetorical questions, but in the > interest of learning, I will try to answer them anyway. No, not rhetorical. Also, not general. I mean: when you reply with "does not work", what was the one specific test case that you ran?
Something like: I run the command curl -v http://www.example.com/one/two.html and I expect the response "http 301" and a redirect to /one/three.html; or I expect the response "http 200" and the content of the file /usr/local/nginx/html/one/two.html from the machine server2; or whatever exact specific response that you want for this one specific request > > I am mostly working with http/https 'get' requests for now. So, let's say "http://www.example.com/one/two.html". > > I want all requests for specific domains to pass to the backend (Server2) > (The idea is that Server2 will eventually replace Server1 as domains are > eventually moved over to it.) And let's say "/one/two.html", talking to backend "www.example.com". > The backend (server2) is also an nginx server. > I have seen the access logs and error logs for the backend (Server2), but > since I'm new to this, I'm slow to understand it all. If you have more than one backend server{} block, and you write all the logs for all server{} blocks into one file, then you possibly will not easily know which one server{} block nginx used to process this one request. If you make it possible to see what is happening, it will be easier to see what is happening. > I will keep looking at the logs and study tcpdump, thank you. tcpdump will be useful for http content; it can be slightly useful for https content, to show that you are or are not using SNI (which is relevant when you have more than one "virtual host" on the same IP:port). > not know or care. > > Right, and perhaps my scheme is erroneous. > I am trying to keep certificates on both servers. > Originally, I was trying to keep the certificates for domains on the backend > (Server2) on that machine, but I couldn't proxy_pass encrypted traffic > easily. The certbot side seems... complicated. If that wants discussing here, it probably should be in a dedicated thread. It is not directly related to the subject of this thread. 
(In very short: you need to decide how exactly you want inbound traffic to your public IP address to be handled. After you have a clear design for how you want each request to be handled, you will be able to see how and whether it is possible.) > send the Host: header that you want. > <(Instead, it sent the Host:header that you configured it to send.) > > I commented out 'return 404; # managed by Certbot' and that did the trick. > Now I can use port 80. Thank you! > That said, I don't really understand where I configured the Host: header or > how to do it correctly. There is (potentially) lots in the configuration that you want. So it is worth taking one step at a time. There is also lots of documentation available on the nginx.org web site. For example, every directive (e.g "location") is documented at (e.g.) http://nginx.org/r/location. For nginx http -- a request comes in, and is handled in exactly one server{} block (based on the configuration, and the incoming IP:port and hostname within the request). Then (after rewrite-module directives) the request is handled in exactly one location{} block (based on the configuration and the request). Then either a subrequest is created, and a new rewrite-module/location-selection happens; or the request is just processed. On your frontend, this means that your /one/two.html request to threedaystubble.com is proxy_pass'ed to http://192.168.3.5:80/one/two.html. On your backend, the http://192.168.3.5:80/one/two.html is handled in the "houseofavi.com" server{} block, where it will return 404; or, now that you have removed that, return the content of /var/www/houseofavi.com/one/two.html If you want the request to be handled in the server2 "threedaystubble.com" server block, you'll probably want the proxy_set_header that was mentioned previously. 
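One way to confirm which backend server{} is chosen, along the lines suggested earlier, is a temporary debug block like this sketch on server#2 (the log path and the return body are illustrative assumptions, not the poster's config):

```nginx
# Backend (server#2): temporary debugging aid, not production config.
server {
    listen 80;
    server_name threedaystubble.com www.threedaystubble.com;
    # A dedicated log makes it obvious when this block handles a request.
    access_log /var/log/nginx/threedaystubble-debug.access.log;

    location / {
        # Echo the request back so a curl test shows which block answered.
        return 200 "this is the backend you want: $request_uri\n";
    }
}
```

Testing with curl -v against the frontend should then show either this text (the right block was selected) or the other site's content (the Host: header is still wrong).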
If you use "curl" for testing, rather than a more full-featured web browser, you will have a better chance of seeing what exactly is happening, and you will avoid things like browser caching that can interfere with useful testing. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Sep 10 10:34:07 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Thu, 10 Sep 2020 06:34:07 -0400 Subject: Redirect Question for Directory Structure Change In-Reply-To: <000e01d6874a$c4495c00$4cdc1400$@roze.lv> References: <000e01d6874a$c4495c00$4cdc1400$@roze.lv> Message-ID: Thank you Reinis! Please bear with me... It seems that I'm getting different results than I described earlier... In fact it is now working for the most part... The errors are limited to certain files in Chrome on the Mac, but not in Safari or Firefox. >What do you mean by "got deeper" can you give a sample url that doesn't work? >Like what the uri is currently and what you need it to be? Different browsers show different results... Yes, I'm check in browsers. I'm not good with logs yet. using your suggested rewrite: >rewrite ^/e/(.*) /$1 permanent; This example is for one slide in a slideshow of posters... In Safari on a Mac/iPad/iPhone: http://www.threedaystubble.com/e/Gallery_Posters.html#4 the expected result is: http://www.threedaystubble.com/Gallery_Posters.html#4 In Chrome on a Mac (only?) http://www.threedaystubble.com/e/Gallery_Posters.html#4 the erroneous result is: http://www.threedaystubble.com:8080/Gallery_Posters.html#4 This error is odd because I'm not using port 8080 at this time. The same error is also true for these files: http://www.threedaystubble.com/Gallery_Posters.html and all of the posters in the associated slideshow http://www.threedaystubble.com/e/Gallery_Album.html and all of the art in the slideshow I would think the slideshow widget is the cause, but these other two slide shows in the gallery work correctly. 
In Chrome on the Mac http://www.threedaystubble.com/e/Gallery_Photos.html the expected result is: http://www.threedaystubble.com/Gallery_Photos.html http://www.threedaystubble.com/e/Gallery_Other.html the expected result is: http://www.threedaystubble.com/Gallery_Other.html Do you think that this might be related to nginx or the redirect? If not, I don't want to bother you about it. The fact that it rewrites threedaystubble.com:8080/~ makes me think it might be related. Odd, however, that is is just happening in Chrome. Safari and Firefox are okay. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289363,289376#msg-289376 From r at roze.lv Thu Sep 10 11:17:23 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 10 Sep 2020 14:17:23 +0300 Subject: Redirect Question for Directory Structure Change In-Reply-To: References: <000e01d6874a$c4495c00$4cdc1400$@roze.lv> Message-ID: <000001d68763$f4cd84f0$de688ed0$@roze.lv> > Please bear with me... > It seems that I'm getting different results than I described earlier... > > In fact it is now working for the most part... > The errors are limited to certain files in Chrome on the Mac, but not in Safari > or Firefox. You should clean cache (or set to never cache) for each testing session/config change as browsers tend to cache the redirects very heavily (especially for static content). A better way as Francis suggested is to use a curl or other tools (like wget -S ..) which don't cache anything and you'll always get the actual behavior. 
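A related precaution while iterating on redirect rules (an editorial suggestion, not something stated above): browsers also cache 301 responses themselves, so issuing a temporary 302 avoids serving stale redirects during testing:

```nginx
# While testing, 'redirect' issues a 302, which browsers do not cache
# as persistently as a 301.
rewrite ^/e/(.*) /$1 redirect;

# Once the mapping is verified, switch back to the permanent 301:
# rewrite ^/e/(.*) /$1 permanent;
```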
rr From nginx-forum at forum.nginx.org Thu Sep 10 13:45:58 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Thu, 10 Sep 2020 09:45:58 -0400 Subject: Redirect Question for Directory Structure Change In-Reply-To: <000001d68763$f4cd84f0$de688ed0$@roze.lv> References: <000001d68763$f4cd84f0$de688ed0$@roze.lv> Message-ID: <74c429fb11284928fe570d0b4f16c8b2.NginxMailingListEnglish@forum.nginx.org> >You should clean cache (or set to never cache) for each testing session/config change as browsers tend to cache the redirects very heavily (especially for static content). >A better way as Francis suggested is to use a curl or other tools (like wget -S ..) which don't cache anything and you'll always get the actual behavior. Thanks, I will work on it. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289363,289384#msg-289384 From themadbeaker at gmail.com Thu Sep 10 14:51:53 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 10 Sep 2020 09:51:53 -0500 Subject: Redirect Question for Directory Structure Change Message-ID: You really should use a custom named capture group as the default "$1" (and $2, $3, $4...) can cause erroneous output if there is any other capturing going on in your configuration files... i.e. location ~ ^/e/(?<x1>.*) { return 301 /$x1$is_args$args; } As someone else mentioned, be sure to disable caching when doing testing... In Chrome / Firefox, you can press 'F12' for the developer tools, then choose the 'network' tab and check the box 'disable cache'. When you are done testing, just press F12 again... From nginx-forum at forum.nginx.org Thu Sep 10 15:11:48 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Thu, 10 Sep 2020 11:11:48 -0400 Subject: proxy_pass Not Working on Port 80 In-Reply-To: <20200910081740.GC30691@daoine.org> References: <20200910081740.GC30691@daoine.org> Message-ID: Francis, Thank you very much for the detailed reply and being patient with me. I have learned a lot from you and Reinis and I am truly grateful.
My sites are now working with TLS/SSL... I couldn't have done it without you guys!! >curl -v http://www.example.com/one/two.html I see the value in this tool. I will use it in the future. I have much to learn. Thanks again. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289348,289387#msg-289387 From nginx-forum at forum.nginx.org Thu Sep 10 15:18:41 2020 From: nginx-forum at forum.nginx.org (figshta) Date: Thu, 10 Sep 2020 11:18:41 -0400 Subject: Redirect Question for Directory Structure Change In-Reply-To: <74c429fb11284928fe570d0b4f16c8b2.NginxMailingListEnglish@forum.nginx.org> References: <000001d68763$f4cd84f0$de688ed0$@roze.lv> <74c429fb11284928fe570d0b4f16c8b2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <502ac8544243f97af445aaac4c36cdfa.NginxMailingListEnglish@forum.nginx.org> Reinis, Thank you so much for all your help! I have succeeded in getting things set up for now. I couldn't have done it without you guys. I have learned a lot from you and Francis and I am very grateful. Thank you for being patient with my ignorance. Sorry to both of you for wasting your time with my caching blunders. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289363,289388#msg-289388 From ahmed.sajid at securekey.com Thu Sep 10 22:45:31 2020 From: ahmed.sajid at securekey.com (Ahmed Sajid) Date: Thu, 10 Sep 2020 18:45:31 -0400 Subject: Nginx 1.18 - sub_filter + if statement Message-ID: <8b779b4d-1b0b-8586-9f2d-271ca73bdd18@securekey.com> Hi All, Not sure if this question has been answered before. The only evidence I could find was a StackExchange question https://serverfault.com/questions/774750/use-of-sub-filter-in-if-block-under-nginx-config with no answer.
Here's what I'm trying to accomplish: * If the request is coming from a specific user agent, set override var to 1 * under location block if override is 1 only then apply sub_filter Here's the snippet: if ($http_user_agent = 'HTTPie/1.0.3') { set $override_content 1; } location / { proxy_set_header Accept-Encoding ""; proxy_pass $upstream_endpoint; if ($override_content = 1) { sub_filter 'abcabcabcabc' 'xyzxyzxyzxyz'; sub_filter_types 'application/json'; sub_filter_once off; } } I get error nginx: [emerg] "sub_filter" directive is not allowed here When I remove the if block, there's no complaint or error and Nginx runs fine. Any idea why this doesn't work? Best Regards, Ahmed. This email and any attachments are for the sole use of the intended recipients and may be privileged, confidential or otherwise exempt from disclosure under law. Any distribution, printing or other use by anyone other than the intended recipient is prohibited. If you are not an intended recipient, please contact the sender immediately, and permanently delete this email and its attachments. -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger at netskrt.io Thu Sep 10 23:14:06 2020 From: roger at netskrt.io (Roger Fischer) Date: Thu, 10 Sep 2020 16:14:06 -0700 Subject: large number of cache files Message-ID: <0A9E8A07-A294-48DF-A716-5BA41C024F4C@netskrt.io> Hello, from a practical perspective, what would be considered an unreasonably large number of cache files (unique cache keys) in a single nginx server? 1M, 10M, 100M? With a large cache, would there be any significant benefit in using multiple caches (multiple key_zones) in a single nginx server? Or using two nginx servers on the same physical server (or VM)? I am aware of the ~ 8K keys (files) per 1 MB of key zone memory, and that available memory thus poses a limit. I am curious at what point the cache exceeds the comfort zone of the design. Thanks,
Roger From themadbeaker at gmail.com Thu Sep 10 23:38:38 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 10 Sep 2020 18:38:38 -0500 Subject: Nginx 1.18 - sub_filter + if statement Message-ID: Check the 'context' for the sub_filter directives you are trying to use. They do not say they can be used with 'if'. http://nginx.org/en/docs/http/ngx_http_sub_module.html Also worth reading about using 'if': https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ From francis at daoine.org Fri Sep 11 08:21:54 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 11 Sep 2020 09:21:54 +0100 Subject: proxy_pass Not Working on Port 80 In-Reply-To: References: <20200910081740.GC30691@daoine.org> Message-ID: <20200911082154.GD30691@daoine.org> On Thu, Sep 10, 2020 at 11:11:48AM -0400, figshta wrote: Hi there, > My sites are now working with TLS/SSL... Good to hear that you have it working; and best of luck with nginx! Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Sep 11 09:20:38 2020 From: nginx-forum at forum.nginx.org (Basanta) Date: Fri, 11 Sep 2020 05:20:38 -0400 Subject: NGINX SSL Pass through Message-ID: Hi, I need some help with "ssl-passthrough" on NGINX. I am using this annotation for an end-to-end SSL connection: nginx.ingress.kubernetes.io/ssl-passthrough: "true". The connection is failing for one application but passing for another application. The error for the failing application is shown below (tried on Firefox and IE). Using Internet Explorer: Can't connect securely to this page This might be because the site uses outdated or unsafe TLS security settings. If this keeps happening, try contacting the website's owner. Using Firefox: Secure Connection Failed An error occurred during a connection to 10.224.250.44:5500. PR_END_OF_FILE_ERROR The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the web site owners to inform them of this problem. 
Through CURL * Initializing NSS with certpath: sql:/etc/pki/nssdb * NSS error -5938 (PR_END_OF_FILE_ERROR) * Encountered end of file * Closing connection 0 curl: (35) Encountered end of file Please help me find the cause of this problem. Regards, Basanta Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289395,289395#msg-289395 From r at roze.lv Fri Sep 11 15:52:25 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 11 Sep 2020 18:52:25 +0300 Subject: large number of cache files In-Reply-To: <0A9E8A07-A294-48DF-A716-5BA41C024F4C@netskrt.io> References: <0A9E8A07-A294-48DF-A716-5BA41C024F4C@netskrt.io> Message-ID: <000001d68853$8b87e350$a297a9f0$@roze.lv> > I am curious at what point the cache exceeds the comfort zone of the design. In my opinion it depends more on how important your cache is / how quickly you can replace and repopulate it (how fast or loaded are your backends) / whether your service can work without it - as in, if you have a single massive cache server, what happens when it goes down? I'm not sure what the upper or "reasonable" limits are, but from personal/practical experience I'm currently running instances with 32Gb ram and 4 x 1Tb ssds for cache files/zones (distributed via the split_clients directive). Each zone has keys_zone=2000m / each drive is filled up to 800G with roughly ~10M objects, so in total 40-50M objects per server instance. Traffic is around 1-2 Gbit/s at peak and there are no performance issues. rr From kanika.singh at india.nec.com Fri Sep 11 17:37:00 2020 From: kanika.singh at india.nec.com (Kanika Singh) Date: Fri, 11 Sep 2020 17:37:00 +0000 Subject: Help regarding setting IP_tos value in IP header Message-ID: Hi, I have a setup where I am trying to set the IP_tos value in the IP header when a request is forwarded from the nginx server to another machine. I have configured the ngx_http_ip_tos_filter_module module in the nginx.conf file, but it is setting the IP_tos value in the acknowledgement and not in the outgoing packets. 
Can you please suggest which module or setting I can use to achieve this? Regards, Kanika Singh ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Sep 11 21:54:45 2020 From: nginx-forum at forum.nginx.org (zakirenish) Date: Fri, 11 Sep 2020 17:54:45 -0400 Subject: Keepalived Connections Reset after reloading the configuration (HUP Signal) In-Reply-To: <56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org> References: <56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org> Message-ID: <62159cd3128af693dfbe0a4b2f912438.NginxMailingListEnglish@forum.nginx.org> Having a similar issue. Client connections are getting closed on reload. Maxim, would you point me to a code location where I can make the server wait for `worker_shutdown_timeout` before terminating the connection? Either through `ngx_set_shutdown_timer(cycle);` or a sleep on `ccf->shutdown_timeout`? I am looking through `ngx_process_cycle.c` where the QUIT signal is sent on the channel, but I'm having a hard time figuring out what I should change. I would really appreciate it! 
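For reference, the behaviour asked about here is exposed as a stock configuration directive in newer releases, so a source patch may not be necessary. Since nginx 1.11.11, `worker_shutdown_timeout` (main context) bounds how long old worker processes keep serving in-flight connections during a graceful shutdown or reload. A minimal sketch (the timeout value is illustrative):

```nginx
# nginx.conf (main context) -- a sketch, not a drop-in fix

worker_processes auto;

# During "nginx -s reload" / graceful shutdown, old worker processes
# normally wait for active connections to finish. This directive puts
# an upper bound on that wait: after 30s the remaining connections
# are closed. Available since nginx 1.11.11.
worker_shutdown_timeout 30s;

events {
    worker_connections 1024;
}
```

Note that idle keepalive connections are still closed promptly on reload by design, so clients reusing pooled connections should be prepared to retry.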
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,197927,289405#msg-289405 From bhuvangu at gmail.com Sat Sep 12 07:04:15 2020 From: bhuvangu at gmail.com (Bhuvan Gupta) Date: Sat, 12 Sep 2020 12:34:15 +0530 Subject: slow nginx behavior when using as reverse proxy for https Message-ID: *Community,* Same as asked here also: https://stackoverflow.com/questions/63857630/slow-nginx-behavior-when-using-as-reverse-proxy-for-https *Setup:* 1. Dummy SSL endpoint https://hookb.in/VGQ3wdGGzKSE22bwzRdP 2. Install Nginx on localhost *Steps:* 1. Hit the hookb.in endpoint using the browser for the very first time and we get network activity like below. It took 865 ms (Fig 1) 2. Subsequent hits to the hookb.in endpoint using the browser take much less time, as they reuse the same TCP connection; see Fig 2 for reference. (All Good!!) 3. Set up the http -> https reverse proxy using the below nginx config worker_processes 1; events { worker_connections 1024; } http { keepalive_timeout 65; server { listen 80; server_name localhost; location /session { proxy_pass https://hookb.in/VGQ3wdGGzKSE22bwzRdP; proxy_http_version 1.1; proxy_set_header Connection "keep-alive"; proxy_ssl_session_reuse on; proxy_socket_keepalive on; } } } 4. Now from the browser hit http://127.0.0.1/session and nginx will work fine and proxy the content from the https site. But the nginx response time is always 200 ms more compared to accessing the https site directly; screenshot below for reference. *Why is nginx taking extra time? Is it opening a new SSL connection every time, or is there something else?* I understand that with a reverse proxy we are adding an extra hop, but 200 ms is a big difference. *How can I fix it?* Any help will be appreciated!! Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igal at lucee.org Sun Sep 13 22:42:28 2020 From: igal at lucee.org (Igal Sapir) Date: Sun, 13 Sep 2020 15:42:28 -0700 Subject: Conditional Proxy Caching in Location Message-ID: Hello, I have a variable that shows if a certain cookie exists in the Request, e.g. $req_has_somecookie, and I want to be able to use proxy_cache only for specific URIs, e.g. /slow-page/ if the variable is 0. I know that "if" is evil as it creates a new location scope. What's the best way to handle this? Thanks, Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Sep 13 22:44:13 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Sep 2020 01:44:13 +0300 Subject: slow nginx behavior when using as reverse proxy for https In-Reply-To: References: Message-ID: <20200913224413.GU18881@mdounin.ru> Hello! On Sat, Sep 12, 2020 at 12:34:15PM +0530, Bhuvan Gupta wrote: [...] > http { > keepalive_timeout 65; > server { > listen 80; > server_name localhost; > location /session { > proxy_pass https://hookb.in/VGQ3wdGGzKSE22bwzRdP; > proxy_http_version 1.1; > proxy_set_header Connection "keep-alive"; > proxy_ssl_session_reuse on; > proxy_socket_keepalive on; > } > } > } > > > > 1. Now from browser hit http://127.0.0.1/session and nginx will work > fine and proxy the content from https site. > But nginx response time is always 200ms more than compared to accessing > https site directly. Screen shot below for ref > *Why nignx is taking extra time , is it opening new ssl connection every > time or is there something else?* The configuration you are using implies that nginx will open a new connection to upstream server for each proxied request. To configure nginx to keep upstream connections alive, please see the description of the "keepalive" directive here: http://nginx.org/r/keepalive Notably, make sure to configure an upstream block with the "keepalive" directive. 
Something like this at the http level should work, assuming no other changes in the configuration: upstream hookb.in { server hookb.in:443; keepalive 2; } In the example above, nginx will keep up to two connections. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Mon Sep 14 12:23:39 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 14 Sep 2020 13:23:39 +0100 Subject: Conditional Proxy Caching in Location In-Reply-To: References: Message-ID: <20200914122339.GE30691@daoine.org> On Sun, Sep 13, 2020 at 03:42:28PM -0700, Igal Sapir wrote: Hi there, > I have a variable that shows if a certain cookie exists in the Request, > e.g. $req_has_somecookie, and I want to be able to use proxy_cache only for > specific URIs, e.g. /slow-page/ if the variable is 0. > > I know that "if" is evil as it creates a new location scope. > > What's the best way to handle this? Probably some combination of http://nginx.org/r/proxy_cache_bypass and http://nginx.org/r/proxy_no_cache You may want to change your variable-setting logic; or use a map (http://nginx.org/r/map) to make a new $skip_the_cache variable that is 0-or-empty when your $req_has_somecookie is not 0, and has a different value when $req_has_somecookie is 0, so that you can (e.g.) proxy_no_cache $skip_the_cache; in the appropriate location{}s. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Sep 14 17:11:53 2020 From: nginx-forum at forum.nginx.org (Navlesh) Date: Mon, 14 Sep 2020 13:11:53 -0400 Subject: Upload large files via Nginx reverse proxy In-Reply-To: References: Message-ID: Can you please share the details? how did you make it work? 
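A common approach for large uploads through an nginx reverse proxy (not necessarily what the earlier poster in that thread did; directive values and the upstream name below are illustrative) is to raise the body-size limit and stream the request body to the backend rather than buffering it first:

```nginx
upstream backend_app {
    server 192.0.2.10:8080;   # illustrative backend address
}

server {
    listen 80;

    location /upload {
        # Raise the default 1m request-body limit; larger bodies
        # otherwise get a 413 Request Entity Too Large.
        client_max_body_size 2g;

        # Stream the body to the upstream instead of spooling it to a
        # temp file first (available since nginx 1.7.11).
        proxy_request_buffering off;

        # Give slow clients/backends time to move a large body.
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_http_version 1.1;
        proxy_pass http://backend_app;
    }
}
```

With `proxy_request_buffering off`, the backend must be able to accept the body at the client's pace; leaving it on trades disk I/O for backend isolation.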
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280578,289418#msg-289418 From netanel0043 at zurim.edum.org.il Tue Sep 15 09:31:10 2020 From: netanel0043 at zurim.edum.org.il (Netanel Stern) Date: Tue, 15 Sep 2020 12:31:10 +0300 Subject: How I can to call from php script to python via shell_exec()? Message-ID: I tried this but I received HTTP error 405. What should I do? -- thanks NETANEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From Revanth.Suryadevara at arcserve.com Wed Sep 16 09:30:23 2020 From: Revanth.Suryadevara at arcserve.com (Suryadevara, Revanth) Date: Wed, 16 Sep 2020 09:30:23 +0000 Subject: Queries on Nginx 1.14.2 Message-ID: Hi, We noticed that on Debian 10 the default Nginx version is 1.14.2-2+deb10u3. However, on your website (http://nginx.org/en/download.html) Nginx 1.14.2 is deemed a "Legacy Version". So does that mean 1.14.2 is End of Support? Will there be any patches provided to this version if any major issues are found? In general, do you suggest installing any version of Nginx on Debian 10 which is not part of the Debian 10 repository? If yes, do we need to be aware of any issues? Thanks, Revanth. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 16 09:54:04 2020 From: nginx-forum at forum.nginx.org (taoyantu) Date: Wed, 16 Sep 2020 05:54:04 -0400 Subject: nginx 1.10.3 reload, failed to respond Message-ID: <3a96de6f8d5bc7a1622c0710c5262424.NginxMailingListEnglish@forum.nginx.org> hello everyone I use nginx 1.10.3, and when I use nginx -s reload, my application requests fail with "NoHttpResponseException: helloword.com:80 failed to respond". In my understanding, using reload should not affect requests. What is the problem? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289424,289424#msg-289424 From francis at daoine.org Thu Sep 17 11:37:23 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 17 Sep 2020 12:37:23 +0100 Subject: Help regarding setting IP_tos value in IP header In-Reply-To: References: Message-ID: <20200917113723.GG30691@daoine.org> On Fri, Sep 11, 2020 at 05:37:00PM +0000, Kanika Singh wrote: Hi there, I don't have an answer for you; but possibly if you can provide more specific details of the problem, perhaps someone else will be able to help? > I have a setup where I am trying to set IP_tos value in IP header when a request is forwarded from nginx server to another machine. > > I have configured ngx_http_ip_tos_filter_module module in the nginx.conf file, but it is setting IP_tos value in the acknowledgement and not in the outgoing packets. > There is the request from the client to nginx. There is the request from nginx to the upstream. There is the response from upstream to nginx. There is the response from nginx to the client. Which ones of these do show the IP_tos value that you want; and which ones do not? And: how are you looking, to see what the IP_tos value is, in each case? That information might help identify where the problem is. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 17 11:40:50 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 17 Sep 2020 12:40:50 +0100 Subject: How I can to call from php script to python via shell_exec()? In-Reply-To: References: Message-ID: <20200917114050.GH30691@daoine.org> On Tue, Sep 15, 2020 at 12:31:10PM +0300, ????? ???? wrote: Hi there, > I tried this but I recieved HTTP error 405 > What should I do? As written, this sounds like a PHP question; you might have more luck getting a useful response on a PHP list. 
HTTP 405 can happen in nginx if you issue a POST request to a url that nginx would normally handle by returning content from the filesystem -- if you show the test request that you make and the matching nginx config, someone here might be able to point out a problem or resolution on that part. Good luck with it, f -- Francis Daly francis at daoine.org From igal at lucee.org Thu Sep 17 17:33:31 2020 From: igal at lucee.org (Igal Sapir) Date: Thu, 17 Sep 2020 10:33:31 -0700 Subject: Conditional Proxy Caching in Location Message-ID: Thank you, Francis. That sounds like a good plan. Pardon the new thread but I was subscribed in Digest Mode and couldn't reply directly. Igal On Sun, Sep 13, 2020 at 03:42:28PM -0700, Igal Sapir wrote: Hi there, > I have a variable that shows if a certain cookie exists in the Request, > e.g. $req_has_somecookie, and I want to be able to use proxy_cache only for > specific URIs, e.g. /slow-page/ if the variable is 0. > > I know that "if" is evil as it creates a new location scope. > > What's the best way to handle this? Probably some combination of http://nginx.org/r/proxy_cache_bypass and http://nginx.org/r/proxy_no_cache You may want to change your variable-setting logic; or use a map (http://nginx.org/r/map) to make a new $skip_the_cache variable that is 0-or-empty when your $req_has_somecookie is not 0, and has a different value when $req_has_somecookie is 0, so that you can (e.g.) proxy_no_cache $skip_the_cache; in the appropriate location{}s. f -- Francis Daly francis at daoine.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at benjamindsmith.com Sat Sep 19 16:26:57 2020 From: lists at benjamindsmith.com (Lists) Date: Sat, 19 Sep 2020 09:26:57 -0700 Subject: Unable to use subrequest authentication for proxied site Message-ID: <1862178.usQuhbGJ8B@tesla.effortlessis.com> How do I configure nginx to use subrequest authentication for a reverse proxied application with websocket upgrades? The documentation doesn't seem to contain the information I need to do this. https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/ When I do, I either get A) a 404 for every request, or B) all requests pass regardless of PHP session ID. It seems that I can either proxy the authentication request (and nginx attempts to serve local content) OR proxy the request itself (and then I get the etherpad content without it being authenticated); I can't seem to get both working together. SYSTEM CONFIGURATION I have an instance of Etherpad running on a VM on a host called Alpha. Alpha is running CentOS 7, the VM is running CentOS 8. The Etherpad instance is used in a PHP application running on another server, and it's important to prevent the etherpad pages from being public, so it seems the easiest way to secure things is to use subrequest authentication. Application: https://mydomain.com Etherpad URL: https://etherpad.mydomain.com 1) Within the application, the URL for the etherpad iframe has &sess_id=$php_session_id added. EG: https://etherpad.mydomain.com/p/MyPad?sess_id=MyPhpSessId This is so that nginx can make a subrequest to the application, passing the PHP Sessid to validate that the end user with the PHP Sessid has rights to the document in the etherpad instance. 2) The application has an external authentication page at /external/etherpad.php. It returns http code 200 when the values in the php session match the URL of the etherpad address as passed in the header X-Original-URI. 
(EG: "MyPad" is referenced in MyPhpSessid) and 401 when this test fails. 3) Nginx is set up on Alpha to proxy to the VM called Virtual. The proxy is set up with Let's Encrypt SSL certificates; the etherpad instance is not encrypted but has no direct ports to the public cloud. WHAT I'M EXPECTING FOR A SUCCESSFUL HIT 1) nginx receives a request like: https://etherpad.mydomain.com/p/MyPadId?sess_id=PhpSessionId 2) nginx makes a request to https://mydomain.com/external/etherpad.php with header: X-Original-URI: /p/MyPadId?sess_id=PhpSessionId 3) /external/etherpad.php receives the authentication request, gets X-Original-URI and validates the request, returning an http_code 200 when all is well. 4) nginx receives the result of #3, and passes the original request to the etherpad instance. 5) Etherpad responds with the result 6) Successful Etherpad instance runs in the application. ********************************************************* WHAT IS HAPPENING (Scenario 1) 1) nginx receives a request like: https://etherpad.mydomain.com/p/MyPadId?sess_id=PhpSessionId 2) nginx makes a request to https://mydomain.com/external/etherpad.php with header: X-Original-URI: /p/MyPadId?sess_id=PhpSessionId 3) /external/etherpad.php receives the authentication request, gets X-Original-URI and validates the request, returning an http_code 200 when all is well. 4) Nginx attempts to resolve the request as a local file, can't find it, and returns 404 to the end user. 
SCENARIO 1 NGINX CONFIG FILE ---------------------- map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { server_name etherpad.mydomain.com; listen 8008; listen 9000 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/etherpad.mydomain.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/etherpad.mydomain.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot location / { proxy_pass http://192.168.122.61:9001/; proxy_set_header Host $host; proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; # recommended with keepalive connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } location /p/ { auth_request /auth; auth_request_set $auth_status $upstream_status; } location /auth { internal; proxy_pass https://mydomain.com/external/etherpad.php; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } } ---------------------- ********************************************************* WHAT IS HAPPENING (Scenario 2: invalid session ID) 1) nginx receives a request like: https://etherpad.mydomain.com/p/MyPadId?sess_id=INVALID 2) nginx makes a request to https://mydomain.com/external/etherpad.php with header: X-Original-URI: /p/MyPadId?sess_id=INVALID 3) /external/etherpad.php receives the authentication request, gets X-Original-URI and validates the request, returning an http_code 401. 4) nginx IGNORES the result of #3, and passes the original request to the etherpad instance, even when validation fails. 5) Etherpad responds with the result 6) Successful Etherpad instance runs in the application despite lacking credentials. 
SCENARIO 2 NGINX CONFIG FILE ---------------------- map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { server_name etherpad.mydomain.com; listen 8008; listen 9000 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/etherpad.mydomain.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/etherpad.mydomain.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot location / { proxy_pass http://192.168.122.61:9001/; proxy_set_header Host $host; proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; # recommended with keepalive connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } location /p/ { auth_request /auth; auth_request_set $auth_status $upstream_status; # NOTE THAT THIS IS A COPY OF LOCATION / UNTIL THE END OF THIS BLOCK. proxy_pass http://192.168.122.61:9001/; proxy_set_header Host $host; proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; # recommended with keepalive connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } location /auth { internal; proxy_pass https://mydomain.com/external/etherpad.php; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } } -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From francis at daoine.org Sun Sep 20 15:29:32 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 20 Sep 2020 16:29:32 +0100 Subject: Unable to use subrequest authentication for proxied site In-Reply-To: <1862178.usQuhbGJ8B@tesla.effortlessis.com> References: <1862178.usQuhbGJ8B@tesla.effortlessis.com> Message-ID: <20200920152932.GI30691@daoine.org> On Sat, Sep 19, 2020 at 09:26:57AM -0700, Lists wrote: Hi there, > How do I configure nginx to use subrequest authentication for a reverse proxied > application with websocket upgrades? The documentation doesn't seem to contain > the information I need to do this. I have not tested the websocket part, but the rest of it "Works For Me" (see below); and the response that you report does not seem to be restricted to websockets in your case. > ********************************************************* > WHAT IS HAPPENING (Schenario 1) > 3) /external/etherpad.php receives authentication request, gets X-Original-URI > and validates the request, returning an http_code 200 when all is well. > > 4) Nginx attempts to resolve the request as a local file, can't find it, and > returns 404 to the end user. Does that return a 404, or something else, when /external/etherpad.php does not return http_code 200? > SCENARIO 1 NGINX CONFIG FILE > location /p/ { > auth_request /auth; > auth_request_set $auth_status $upstream_status; > } That does not cause the request /p/MyPad to be proxy_pass'ed anywhere, because there is no proxy_pass in this location. > ********************************************************* > WHAT IS HAPPENING (Schenario 2: invalid session ID) > > 3) /external/etherpad.php receives authentication request, gets X-Original-URI > and validates the request, returning an http_code 401. Out of interest - if you make that request yourself, manually, do you see the http 401; or do you see something like a http 200 with a message of "this is a 401"? 
(It's unlikely, but is the only thing I can think of right now that might show why your test case differs from mine, below.) curl -i -H 'X-Original-URI: /p/MyPadId?sess_id=INVALID' https://mydomain.dom/external/etherpad.php > SCENARIO 2 NGINX CONFIG FILE > ---------------------- This looks like it should be working -- with the caveat that the original request of /p/MyPad will become just /MyPad when it gets to the upstream server. If that's not what you are seeing on the upstream, then this is not the config that is being used. > location /p/ { > auth_request /auth; > auth_request_set $auth_status $upstream_status; > > # NOTE THAT THIS IS A COPY OF LOCATION / UNTIL THE END OF THIS BLOCK. > proxy_pass http://192.168.122.61:9001/; > } > location /auth { > internal; > proxy_pass https://mydomain.com/external/etherpad.php; > proxy_pass_request_body off; > proxy_set_header Content-Length ""; > proxy_set_header X-Original-URI $request_uri; > } An alternate test could be to (temporarily) remove that "internal;", and look at the response from curl -i https://etherpad.mydomain.com:9000/auth and see if that shows anything odd. My test setup was: == server { listen 9001; # the upstream return 200 "9001; I just got a request for $request_uri\n"; } server { listen 9002; # the auth service if ($http_please) { return 200 "ok\n"; } return 403 "nope\n"; } server { listen 9003; # the public face location / { return 200 "9003; I just got a request for $request_uri\n"; } location /p/ { auth_request /auth; proxy_pass http://127.0.0.1:9001/; } location /auth { proxy_pass http://127.0.0.1:9002; proxy_set_header Please $http_please; } } == and then curl -i http://127.0.0.1:9003/p/aaa gives me a 403 response, while curl -i -H Please:yes http://127.0.0.1:9003/p/aaa gives me a 200 response of "9001; I just got a request for /aaa". So my auth_request returns 200 and the proxy_pass happens; or my auth_request returns (in this case) 403 and the proxy_pass does not happen. 
Does your system respond differently, using this simplified test case? Does "nginx -V" show anything unusual? Cheers, f -- Francis Daly francis at daoine.org From lists at benjamindsmith.com Sun Sep 20 17:33:43 2020 From: lists at benjamindsmith.com (Lists) Date: Sun, 20 Sep 2020 10:33:43 -0700 Subject: Unable to use subrequest authentication for proxied site In-Reply-To: <20200920152932.GI30691@daoine.org> References: <1862178.usQuhbGJ8B@tesla.effortlessis.com> <20200920152932.GI30691@daoine.org> Message-ID: <4576310.GXAFRqVoOG@tesla.effortlessis.com> See reply below On Sunday, September 20, 2020 8:29:32 AM PDT Francis Daly wrote: > On Sat, Sep 19, 2020 at 09:26:57AM -0700, Lists wrote: > > Hi there, > > > How do I configure nginx to use subrequest authentication for a reverse > > proxied application with websocket upgrades? The documentation doesn't > > seem to contain the information I need to do this. > > I have not tested the websocket part, but the rest of it "Works For > Me" (see below); and the response that you report does not seem to be > restricted to websockets in your case. > > > ********************************************************* > > WHAT IS HAPPENING (Schenario 1) > > > > 3) /external/etherpad.php receives authentication request, gets > > X-Original-URI and validates the request, returning an http_code 200 when > > all is well. > > > > 4) Nginx attempts to resolve the request as a local file, can't find it, > > and returns 404 to the end user. > > Does that return a 404, or something else, when /external/etherpad.php > does not return http_code 200? Hmm, > > > SCENARIO 1 NGINX CONFIG FILE > > > > location /p/ { > > > > auth_request /auth; > > auth_request_set $auth_status $upstream_status; > > } > > That does not cause the request /p/MyPad to be proxy_pass'ed anywhere, > because there is no proxy_pass in this location. 
> > > ********************************************************* > > WHAT IS HAPPENING (Schenario 2: invalid session ID) > > > > > > 3) /external/etherpad.php receives authentication request, gets > > X-Original-URI and validates the request, returning an http_code 401. > > Out of interest - if you make that request yourself, manually, do you see > the http 401; or do you see something like a http 200 with a message of > "this is a 401"? > > (It's unlikely, but is the only thing I can think of right now that > might show why your test case differs from mine, below.) Turns out this is *exactly* what was happening - Ugh. I hadn't published an "open" security profile for the /external/etherpad.php URI and it was responding with a login page and http code 200. So publishing a security profile for that page fixed the problem and now (with the correct proxy adding "/p/" at the end) and it seems to be passing all the tests I expect. But if the result was a login screen instead of a successful result, I'm thinking it *still* should have (not) worked. How is getting a login screen instead of the expected result a "successful" request? So I'm doing an audit to update my authentication code to return proper http response codes instead of just friendly end-user http response "200" pages. I thank you for your time and consideration. > curl -i -H 'X-Original-URI: /p/MyPadId?sess_id=INVALID' https:// mydomain.dom/external/etherpad.php -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Revanth.Suryadevara at arcserve.com Mon Sep 21 04:50:23 2020 From: Revanth.Suryadevara at arcserve.com (Suryadevara, Revanth) Date: Mon, 21 Sep 2020 04:50:23 +0000 Subject: Queries on Nginx 1.14.2 support Message-ID: Hi, We noticed that on Debian 10 the default Nginx version is 1.14.2-2+deb10u3. 
However, on your website (http://nginx.org/en/download.html) Nginx 1.14.2 is deemed a "Legacy Version". So does that mean 1.14.2 is End of Support? Will there be any patches provided to this version if any major issues are found? In general, do you suggest installing any version of Nginx on Debian 10 which is not part of the Debian 10 repository? If yes, do we need to be aware of any issues? Thanks, Revanth. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Sep 21 06:57:24 2020 From: nginx-forum at forum.nginx.org (Svietlana) Date: Mon, 21 Sep 2020 02:57:24 -0400 Subject: Reverse proxy to Openshift Message-ID: <7b1ce9d4337312bb49dd0f87281a3a24.NginxMailingListEnglish@forum.nginx.org> I created a reverse proxy to the OpenShift cloud (with TLS: Edge) without any issue using the below configuration: "server { listen 28097; server_name localhost; location / { proxy_pass https://openshift.private.cloud.com; proxy_ssl_verify off; proxy_ssl_session_reuse on; proxy_temp_file_write_size 64k; proxy_connect_timeout 10080s; proxy_send_timeout 10080; proxy_read_timeout 10080; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_request_buffering off; proxy_buffering off; proxy_http_version 1.1; server_tokens off; } } " but recently the OpenShift developers created a new app with TLS passthrough, and this configuration doesn't work for it. When I try to curl I get "HTTP/1.1 503 Service Unavailable". I don't know what could be the issue. Does anyone have experience with that? 
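One thing worth noting for questions like this: an http-level proxy_pass terminates TLS at nginx and opens a fresh TLS connection to the upstream, so a backend route that expects the client's TLS handshake end-to-end (a passthrough route) never sees it. Genuine passthrough at the nginx edge is usually done with the stream module instead. A hedged sketch, reusing the host and port from the question (requires nginx built with ngx_stream_ssl_preread_module; this is a guess at the setup, not a confirmed fix):

```nginx
# stream{} sits alongside http{} in nginx.conf, not inside it.
stream {
    server {
        listen 28097;

        # Peek at the TLS ClientHello to read the SNI name without
        # terminating TLS; the handshake itself is not touched.
        ssl_preread on;

        # Forward the raw TLS bytes to the OpenShift router, which
        # terminates TLS itself (passthrough route).
        proxy_pass openshift.private.cloud.com:443;
        proxy_connect_timeout 10s;
    }
}
```

If several backends must share the port, a stream-level map on $ssl_preread_server_name can pick the upstream per SNI name (with a resolver configured when the upstream is a hostname in a variable).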
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289443,289443#msg-289443 From francis at daoine.org Mon Sep 21 08:52:32 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 21 Sep 2020 09:52:32 +0100 Subject: Unable to use subrequest authentication for proxied site In-Reply-To: <4576310.GXAFRqVoOG@tesla.effortlessis.com> References: <1862178.usQuhbGJ8B@tesla.effortlessis.com> <20200920152932.GI30691@daoine.org> <4576310.GXAFRqVoOG@tesla.effortlessis.com> Message-ID: <20200921085232.GJ30691@daoine.org> On Sun, Sep 20, 2020 at 10:33:43AM -0700, Lists wrote: > On Sunday, September 20, 2020 8:29:32 AM PDT Francis Daly wrote: > > On Sat, Sep 19, 2020 at 09:26:57AM -0700, Lists wrote: Hi there, > > > 3) /external/etherpad.php receives authentication request, gets > > > X-Original-URI and validates the request, returning an http_code 401. > > > > Out of interest - if you make that request yourself, manually, do you see > > the http 401; or do you see something like a http 200 with a message of > > "this is a 401"? > Turns out this is *exactly* what was happening - Ugh. I hadn't published an > "open" security profile for the /external/etherpad.php URI and it was > responding with a login page and http code 200. > > So publishing a security profile for that page fixed the problem and now (with > the correct proxy adding "/p/" at the end) and it seems to be passing all the > tests I expect. Great that you identified the problem and the fix; thanks for sharing with the list that that was the problem here. That will help the next person with a similar unexpected behaviour. > But if the result was a login screen instead of a successful result, I'm > thinking it *still* should have (not) worked. How is getting a login screen > instead of the expected result a "successful" request? The auth_request part of nginx does not care whether the response body is a login page, or an animated gif of a smiling face. It only cares about the http response code. 
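For reference, that status-code-only contract can be seen in a minimal auth_request setup (a sketch with hypothetical paths and upstream, not the poster's actual config): any 2xx from the subrequest allows the main request through, 401 or 403 denies it, anything else is treated as an error, and the subrequest's response body is discarded either way.

```nginx
# Minimal auth_request sketch (hypothetical paths and upstream address).
# nginx inspects only the subrequest's status code:
#   2xx -> allow, 401/403 -> deny, anything else -> internal error.
location /protected/ {
    auth_request /auth;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8080/external/etherpad.php;
    # The auth subrequest must not forward the original request body.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```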
200 == successful result, as far as it is concerned. > So I'm doing an audit to update my authentication code to return proper http > response codes instead of just friendly end-user http response "200" pages. That's probably the right thing to do overall; except that you probably will not control what the typical browser shows for (e.g.) a 401 response. If the rest of your application already works with the 200 "please login" screen, then potentially you could send the 401 to nginx in response to the auth_request request; and add an "error_page 401 = /login_screen;" in the nginx location{}, and make the nginx subrequest for /login_screen return that "please login" with a 200 status. See http://nginx.org/r/error_page for more information on that option. That could maintain the control that you currently have over what the end-user sees, while still having nginx allow the expected requests based on what the upstream says. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Sep 21 13:17:23 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Sep 2020 16:17:23 +0300 Subject: Queries on Nginx 1.14.2 support In-Reply-To: References: Message-ID: <20200921131723.GB1136@mdounin.ru> Hello! On Mon, Sep 21, 2020 at 04:50:23AM +0000, Suryadevara, Revanth wrote: > We noticed that on Debian 10 the default Nginx version is > 1.14.2-2+deb10u3. However on your website > (http://nginx.org/en/download.html) Nginx 1.14.2 is deemed as a > "Legacy Version". So does that mean 1.14.2 is End of Support ? > Will there be any patches provided to this version if any major > issues are found ? > In general do you suggest installing any version of Nginx on > Debian 10 which is not part of Debian 10 repository ? If yes, do > we need to be aware of any issues ? These questions do not look appropriate for the nginx-devel@ mailing list, not to mention nginx-announce at .
Also, please avoid cross-posting, and keep your questions in the most relevant mailing list instead. Thank you. -- Maxim Dounin http://mdounin.ru/ From hannu.shemeikka at kyynel.net Thu Sep 24 06:01:38 2020 From: hannu.shemeikka at kyynel.net (Hannu Shemeikka) Date: Thu, 24 Sep 2020 09:01:38 +0300 Subject: Auth_request and multiple cookies from the authentication server Message-ID: Hi, I'm using auth_request to authenticate requests to my locations. I have a working configuration, but I noticed that the client is not receiving all cookies set by the authentication server. I'm using the following syntax for setting the cookie: auth_request_set $auth_cookie $upstream_http_set_cookie; It seems that the variable $upstream_http_set_cookie only contains the first cookie, not all cookies set by the upstream server. Is this variable's behavior a feature or a bug? Is there a workaround for this? I have tried different solutions, like using the variable $upstream_cookie_<name> to set each cookie, but that variable seems to contain only the raw cookie value and doesn't include the flags, e.g. expires, httponly. I thought about using Lua, but I'm giving up on the Lua route since it seems it would not be a good solution all things considered. Relevant part of the nginx configuration:

##################
location / {
    auth_request        /auth;
    auth_request_set    $auth_cookie $upstream_http_set_cookie;
    add_header          Set-Cookie $auth_cookie;
    try_files           $uri @frontend;
}

location /auth {
    internal;
    proxy_set_header    X-Original-Method $request_method;
    proxy_set_header    X-Real-IP $remote_addr;
    proxy_set_header    X-Original-URI $request_uri;
    proxy_set_header    Host $host;
    proxy_pass
http://$server/api/authz; } ################## - Hannu From kaushalshriyan at gmail.com Thu Sep 24 14:16:42 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 24 Sep 2020 19:46:42 +0530 Subject: Difference between Mainline and Stable Nginx version Message-ID: Hi, I am running CentOS Linux release 7.8.2003 (Core) and referring to https://nginx.org/en/linux_packages.html#RHEL-CentOS. Are there any differences between the stable and mainline versions? Should we use stable or mainline for a production environment? [nginx-stable] > name=nginx stable repo > baseurl=http://nginx.org/packages/centos/$releasever/$basearch/ > gpgcheck=1 > enabled=1 > gpgkey=https://nginx.org/keys/nginx_signing.key > module_hotfixes=true > > [nginx-mainline] > name=nginx mainline repo > baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/ > gpgcheck=1 > enabled=0 > gpgkey=https://nginx.org/keys/nginx_signing.key > module_hotfixes=true I would appreciate it if someone could pitch in on my question to this mailing list. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Thu Sep 24 14:20:51 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 24 Sep 2020 10:20:51 -0400 Subject: Difference between Mainline and Stable Nginx version In-Reply-To: References: Message-ID: Depending on your needs, I'd favor Stable over Mainline. Stable is just that - the current release of NGINX that is considered 'stable' and doesn't have many new feature changes to it or new things that Mainline will have. Mainline is closer to 'cutting edge' than 'stable' NGINX. While you can use both for production, unless the features in Stable do not meet your needs or you need a newly introduced feature only available in Mainline, I'd suggest you use Stable. (This applies to all distros, in my opinion, not just RHEL/CentOS.)
Thomas On 9/24/20 10:16 AM, Kaushal Shriyan wrote: > Hi, > > I am running CentOS Linux release 7.8.2003 (Core) and referring > to https://nginx.org/en/linux_packages.html#RHEL-CentOS. Are there any > differences between the stable and mainline versions? Should we use > stable or mainline for a production environment? > > [nginx-stable] > name=nginx stable repo > baseurl=http://nginx.org/packages/centos/$releasever/$basearch/ > gpgcheck=1 > enabled=1 > gpgkey=https://nginx.org/keys/nginx_signing.key > module_hotfixes=true > > [nginx-mainline] > name=nginx mainline repo > baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/ > gpgcheck=1 > enabled=0 > gpgkey=https://nginx.org/keys/nginx_signing.key > module_hotfixes=true > > I would appreciate it if someone could pitch in on my question to this > mailing list. > > Thanks in advance. > > Best Regards, > > Kaushal > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu Sep 24 14:35:36 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 24 Sep 2020 20:05:36 +0530 Subject: Difference between Mainline and Stable Nginx version In-Reply-To: References: Message-ID: On Thu, Sep 24, 2020 at 7:51 PM Thomas Ward wrote: > Depending on your needs, I'd favor Stable over Mainline. > > Stable is just that - the current release of NGINX that is considered > 'stable' and doesn't have many new feature changes to it or new things that > Mainline will have. > > Mainline is closer to 'cutting edge' than 'stable' NGINX. While you can > use both for production, unless the features in Stable do not meet your > needs or you need a newly introduced feature only available in Mainline, > I'd suggest you use Stable. > > (This applies to all distros, in my opinion, not just RHEL/CentOS.)
> > > Thomas > Thanks, Thomas, for the quick reply; much appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Thu Sep 24 14:37:13 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 24 Sep 2020 10:37:13 -0400 Subject: Difference between Mainline and Stable Nginx version In-Reply-To: References: Message-ID: <65aa0862-9faa-0143-3a5e-7222bc6fa11c@thomas-ward.net> This said, when Mainline is actually cut into Stable come ~April, then Stable gets all the new stuff Mainline had between the two stable releases - Stable for production/stability in featureset, but Mainline if you don't mind having new features available in case you need them. For the most part I've seen both used interchangeably for basic setups; it's more if you need the advanced stuff or brand-new things available in Mainline but not Stable, in my opinion, that drives which you use. Thomas On 9/24/20 10:35 AM, Kaushal Shriyan wrote: > > On Thu, Sep 24, 2020 at 7:51 PM Thomas Ward > wrote: > > Depending on your needs, I'd favor Stable over Mainline. > > Stable is just that - the current release of NGINX that is > considered 'stable' and doesn't have many new feature changes to > it or new things that Mainline will have. > > Mainline is closer to 'cutting edge' than 'stable' NGINX. While > you can use both for production, unless the features in Stable do > not meet your needs or you need a newly introduced feature only > available in Mainline, I'd suggest you use Stable. > > (This applies to all distros, in my opinion, not just RHEL/CentOS.) > > > Thomas > > > Thanks, Thomas, for the quick reply; much appreciated. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maxim at nginx.com Thu Sep 24 14:47:02 2020 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 24 Sep 2020 17:47:02 +0300 Subject: Difference between Mainline and Stable Nginx version In-Reply-To: References: Message-ID: <03362816-eacb-6c7e-d105-06d1fe7fc5d8@nginx.com> Hello, On 24.09.2020 17:16, Kaushal Shriyan wrote: > Hi, > > I am running CentOS Linux release 7.8.2003 (Core) and referring > to https://nginx.org/en/linux_packages.html#RHEL-CentOS. Are there any > differences between the stable and mainline versions? Should we use > stable or mainline for a production environment? > [...] We published a blog post on this topic a while ago: https://www.nginx.com/blog/nginx-1-6-1-7-released/ -- Maxim Konovalov From kaushalshriyan at gmail.com Thu Sep 24 16:46:23 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 24 Sep 2020 22:16:23 +0530 Subject: upstream timed out (110: Connection timed out) while reading response header from upstream. Message-ID: Hi, I am running nginx version: nginx/1.14.1 on Red Hat Enterprise Linux release 8.1 (Ootpa) with php-fpm PHP 7.2.24 (fpm-fcgi) (built: Oct 22 2019 08:28:36) FastCGI Process Manager for PHP. When I clear the cache in Drupal CMS (https://www.drupal.org/) it returns http status code 504. I have attached both the nginx (/etc/nginx/nginx.conf) and www.conf (/etc/php-fpm.d/www.conf) config files for your reference. I have enabled the directives below in /etc/nginx/nginx.conf, but it did not work.
proxy_read_timeout 180s; > fastcgi_read_timeout 180s; ==> /var/log/php-fpm/error.log <== [23-Sep-2020 08:36:25] WARNING: [pool www] child 12018, script '/var/www/html/motest/devportal/web/index.php' (request: "GET /index.php") executing too slow (3.083996 sec), logging [23-Sep-2020 08:36:25] NOTICE: child 12018 stopped for tracing [23-Sep-2020 08:36:25] NOTICE: about to trace 12018 [23-Sep-2020 08:36:25] NOTICE: finished trace of 12018 [23-Sep-2020 08:37:29] WARNING: [pool www] child 12022, script '/var/www/html/motest/devportal/web/index.php' (request: "POST /index.php") executing too slow (3.039999 sec), logging [23-Sep-2020 08:37:29] NOTICE: child 12022 stopped for tracing [23-Sep-2020 08:37:29] NOTICE: about to trace 12022 [23-Sep-2020 08:37:29] NOTICE: finished trace of 12022 [23-Sep-2020 08:37:30] WARNING: [pool www] child 12015, script '/var/www/html/motest/devportal/web/index.php' (request: "POST /index.php?_wrapper_format=drupal_ajax") executing too slow (3.733207 sec), logging [23-Sep-2020 08:37:30] NOTICE: child 12015 stopped for tracing [23-Sep-2020 08:37:30] NOTICE: about to trace 12015 [23-Sep-2020 08:37:30] NOTICE: finished trace of 12015 ==> /var/log/nginx/error.log <== 2020/09/23 08:38:26 [error] 12035#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 157.45.28.7, server: dev-portal.motest.net, request: "POST /admin/config/development/performance HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock", host: "dev-portal.motest.net", referrer: " https://dev-portal.motest.net/admin/config/development/performance" I will appreciate if someone can pitch in for my question to this mailing list. Please let me know if you need any additional details to debug this issue. Thanks in Advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # log_format upstream_time '$remote_addr - $remote_user [$time_local] ' # '"$request" $status $body_bytes_sent ' # 'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_names_hash_bucket_size 128; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; # set up host name for JPMC to redirect to start SP-initiated SAML login server { listen 443 ssl; server_name jpmcdev-portal.motest.net; return 302 https://dev-portal.motest.net/userlogin?company=http://ida.jpmorganchase.com/adfs/services/trust; return 404; } server { listen 443 ssl default_server; ssl_certificate /etc/pki/tls/certs/motest.net.chain.crt; ssl_certificate_key /etc/pki/tls/private/motest.net.key; #ssl_protocols TLSv1.3 TLSv1.2; #ssl_ciphers EECDH+AESGCM:EDH+AESGCM; server_name dev-portal.motest.net; root /var/www/html/motest/devportal/web; ## <-- Your only path reference. access_log off; index index.php index.html index.htm; # Load configuration files for the default server block. 
include /etc/nginx/default.d/*.conf; # jon debug access_log /var/log/nginx/upstream_time.log upstream_time; location = /favicon.ico { log_not_found off; access_log off; } location ^~ /simplesaml { index index.php index.html index.htm; alias /var/www/html/motest/devportal/vendor/simplesamlphp/simplesamlphp/www; location ~ ^(?<prefix>/simplesaml)(?<phpfile>.+?\.php)(?<pathinfo>/.*)?$ { include fastcgi_params; fastcgi_pass unix:/run/php-fpm/www.sock; fastcgi_param SCRIPT_FILENAME $document_root$phpfile; fastcgi_param PATH_INFO $pathinfo if_not_empty; } } location = /robots.txt { allow all; log_not_found off; access_log off; } # Very rarely should these ever be accessed outside of your lan location ~* \.(txt|log)$ { allow 192.168.0.0/16; deny all; } location ~ \..*/.*\.php$ { return 403; } location ~ ^/sites/.*/private/ { return 403; } # Block access to scripts in site files directory location ~ ^/sites/[^/]+/files/.*\.php$ { deny all; } # Allow "Well-Known URIs" as per RFC 5785 location ~* ^/.well-known/ { allow all; } # Block access to "hidden" files and directories whose names begin with a # period. This includes directories used by version control systems such # as Subversion or Git to store control files. location ~ (^|/)\. { return 403; } location / { # try_files $uri @rewrite; # For Drupal <= 6 try_files $uri /index.php?$query_string; # For Drupal >= 7 } location @rewrite { rewrite ^/(.*)$ /index.php?q=$1; } # Don't allow direct access to PHP files in the vendor directory. location ~ /vendor/.*\.php$ { deny all; return 404; } # Protect files and directories from prying eyes. location ~* \.(engine|inc|install|make|module|profile|po|sh|.*sql|theme|twig|tpl(\.php)?|xtmpl|yml)(~|\.sw[op]|\.bak|\.orig|\.save)?$|composer\.(lock|json)$|web\.config$|^(\.(?!well-known).*|Entries.*|Repository|Root|Tag|Template)$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig|\.save)$ { deny all; return 404; } # In Drupal 8, we must also match new paths where the '.php' appears in # the middle, such as update.php/selection.
The rule we use is strict, # and only allows this pattern with the update.php front controller. # This allows legacy path aliases in the form of # blog/index.php/legacy-path to continue to route to Drupal nodes. If # you do not have any paths like that, then you might prefer to use a # laxer rule, such as: # location ~ \.php(/|$) # The laxer rule will continue to work if Drupal uses this new URL # pattern with front controllers other than update.php in a future # release. location ~ '\.php$|^/update.php' { fastcgi_split_path_info ^(.+?\.php)(|/.*)$; # Ensure the php file exists. Mitigates CVE-2019-11043 try_files $fastcgi_script_name =404; # Security note: If you're running a version of PHP older than the # latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini. # See http://serverfault.com/q/627903/94922 for details. include fastcgi_params; # Block httpoxy attacks. See https://httpoxy.org/. fastcgi_param HTTP_PROXY ""; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_index index.php; fastcgi_intercept_errors on; proxy_read_timeout 180s; fastcgi_read_timeout 180s; # PHP 5 socket location. #fastcgi_pass unix:/var/run/php5-fpm.sock; # PHP 7 socket location. # fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; # fastcgi_pass unix:/run/php-fpm/www.sock; fastcgi_pass unix:/run/php-fpm/www.sock; #fastcgi_read_timeout 120; } location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ { try_files $uri @rewrite; expires max; log_not_found off; } # Fighting with Styles? This little gem is amazing. # location ~ ^/sites/.*/files/imagecache/ { # For Drupal <= 6 location ~ ^/sites/.*/files/styles/ { # For Drupal >= 7 try_files $uri @rewrite; } # Handle private files through Drupal. Private file's path can come # with a language prefix. 
location ~ ^(/[a-z\-]+)?/system/files/ { # For Drupal >= 7 try_files $uri /index.php?$query_string; } # Enforce clean URLs # Removes index.php from urls like www.example.com/index.php/my-page --> www.example.com/my-page # Could be done with 301 for permanent or other redirect codes. if ($request_uri ~* "^(.*/)index\.php/(.*)") { return 307 $1$2; } } } -------------- next part -------------- cat /etc/php-fpm.d/www.conf ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'access.log' ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or @php_fpm_prefix@) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. ; RPM: apache user chosen to provide access to the same directories as httpd user = nginx ; RPM: Keep a group allowed to write in log dir. group = nginx ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on ; a specific port; ; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses ; (IPv6 and IPv4-mapped) on a specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. listen = /run/php-fpm/www.sock ; Set listen(2) backlog. ; Default Value: 511 ;listen.backlog = 511 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. 
; Default Values: user and group are set as the running user ; mode is set to 0660 listen.owner = nginx listen.group = nginx listen.mode = 0660 ; When POSIX Access Control Lists are supported you can set them using ; these options, value is a comma separated list of user/group names. ; When set, listen.owner and listen.group are ignored listen.acl_users = apache,nginx ;listen.acl_groups = ; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any listen.allowed_clients = 127.0.0.1 ; Specify the nice(2) priority to apply to the pool processes (only if set) ; The value can vary from -19 (highest priority) to 20 (lower priority) ; Note: - It will only work if the FPM master process is launched as root ; - The pool processes will inherit the master process priority ; unless it specified otherwise ; Default Value: no set ; process.priority = -19 ; Set the process dumpable flag (PR_SET_DUMPABLE prctl) even if the process user ; or group is differrent than the master process user. It allows to create process ; core dump and ptrace the process for the pool user. ; Default Value: no ; process.dumpable = yes ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives. With this process management, there will be ; always at least 1 children. ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. 
; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; ondemand - no children are created at startup. Children will be forked when ; new requests will connect. The following parameter are used: ; pm.max_children - the maximum number of children that ; can be alive at the same time. ; pm.process_idle_timeout - The number of seconds after which ; an idle process will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. The below defaults are based on a server without much resources. Don't ; forget to tweak pm.* to fit your needs. ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' ; Note: This value is mandatory. pm.max_children = 50 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 5 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 5 ; The desired maximum number of idle server processes. 
; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 35 ; The number of seconds after which an idle process will be killed. ; Note: Used only when pm is set to 'ondemand' ; Default Value: 10s ;pm.process_idle_timeout = 10s; ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 ;pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. It shows the following informations: ; pool - the name of the pool; ; process manager - static, dynamic or ondemand; ; start time - the date and time FPM has started; ; start since - number of seconds since FPM has started; ; accepted conn - the number of request accepted by the pool; ; listen queue - the number of request in the queue of pending ; connections (see backlog in listen(2)); ; max listen queue - the maximum number of requests in the queue ; of pending connections since FPM has started; ; listen queue len - the size of the socket queue of pending connections; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes; ; max active processes - the maximum number of active processes since FPM ; has started; ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic' and 'ondemand'); ; Value are updated in real time. 
; Example output: ; pool: www ; process manager: static ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 62636 ; accepted conn: 190460 ; listen queue: 0 ; max listen queue: 1 ; listen queue len: 42 ; idle processes: 4 ; active processes: 11 ; total processes: 15 ; max active processes: 12 ; max children reached: 0 ; ; By default the status page output is formatted as text/plain. Passing either ; 'html', 'xml' or 'json' in the query string will return the corresponding ; output syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; http://www.foo.bar/status?xml ; ; By default the status page only outputs short status. Passing 'full' in the ; query string will also return status for each pool process. ; Example: ; http://www.foo.bar/status?full ; http://www.foo.bar/status?json&full ; http://www.foo.bar/status?html&full ; http://www.foo.bar/status?xml&full ; The Full status returns for each process: ; pid - the PID of the process; ; state - the state of the process (Idle, Running, ...); ; start time - the date and time the process has started; ; start since - the number of seconds since the process has started; ; requests - the number of requests the process has served; ; request duration - the duration in µs of the requests; ; request method - the request method (GET, POST, ...); ; request URI - the request URI with the query string; ; content length - the content length of the request (only with POST); ; user - the user (PHP_AUTH_USER) (or '-' if not set); ; script - the main script called (or '-' if not set); ; last request cpu - the %cpu the last request consumed ; it's always 0 if the process is not in Idle state ; because CPU calculation is done when the request ; processing has terminated; ; last request memory - the max amount of memory the last request consumed ; it's always 0 if the process is not in Idle state ; because memory calculation is done when the request ; processing has terminated; ;
If the process is in Idle state, then informations are related to the ; last request the process has served. Otherwise informations are related to ; the current request being served. ; Example output: ; ************************ ; pid: 31330 ; state: Running ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 63087 ; requests: 12808 ; request duration: 1250261 ; request method: GET ; request URI: /test_mem.php?N=10000 ; content length: 0 ; user: - ; script: /home/fat/web/docs/php/test_mem.php ; last request cpu: 0.00 ; last request memory: 0 ; ; Note: There is a real-time FPM status monitoring sample web page available ; It's available in: @EXPANDED_DATADIR@/fpm/status.html ; ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ;ping.response = pong ; The access log file ; Default: not set ;access.log = log/$pool.access.log ; The access log format. 
; The following syntax is allowed ; %%: the '%' character ; %C: %CPU used by the request ; it can accept the following format: ; - %{user}C for user CPU only ; - %{system}C for system CPU only ; - %{total}C for user + system CPU (default) ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) ; - %{miliseconds}d ; - %{mili}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env ; variable. Some exemples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename ; %l: content-length of the request (for POST request only) ; %m: request method ; %M: peak of memory allocated by PHP ; it can accept the following format: ; - %{bytes}M (default) ; - %{kilobytes}M ; - %{kilo}M ; - %{megabytes}M ; - %{mega}M ; %n: pool name ; %o: output header ; it must be associated with embraces to specify the name of the header: ; - %{Content-Type}o ; - %{X-Powered-By}o ; - %{Transfert-Encoding}o ; - .... ; %p: PID of the child that serviced the request ; %P: PID of the parent of the child that serviced the request ; %q: the query string ; %Q: the '?' character if query string exists ; %r: the request URI (without the query string, see %q and %Q) ; %R: remote IP address ; %s: status (response code) ; %t: server time the request was received ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; The strftime(3) format must be encapsuled in a %{}t tag ; e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t ; %T: time the log has been written (the request has finished) ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; The strftime(3) format must be encapsuled in a %{}t tag ; e.g. 
for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t ; %u: remote user ; ; Default: "%R - %u %t \"%m %r\" %s" ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%" ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set slowlog = /var/log/php-fpm/www-slow.log ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. 
If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Clear environment in FPM workers ; Prevents arbitrary environment variables from reaching FPM worker processes ; by clearing the environment in workers before env vars specified in this ; pool configuration are added. ; Setting to "no" will make all environment variables available to PHP code ; via getenv(), $_ENV and $_SERVER. ; Default Value: yes ;clear_env = no ; Limits the extensions of the main script FPM will allow to parse. This can ; prevent configuration mistakes on the web server side. You should only limit ; FPM to .php extensions to prevent malicious users to use other extensions to ; exectute php code. ; Note: set an empty value to allow all extensions. ; Default Value: .php ;security.limit_extensions = .php .php3 .php4 .php5 .php7 ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. 
; Note: path INI options can be relative and will be expanded with the prefix
; (pool, global or @prefix@)
; Default Value: nothing is defined by default except the values in php.ini
; and specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www at my.domain.com
;php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 128M

; Set the following data paths to directories owned by the FPM process user.
;
; Do not change the ownership of existing system directories; if the process
; user does not have write permission, create dedicated directories for this
; purpose.
;
; See the warning about choosing the location of these directories on your
; system at http://php.net/session.save-path
php_value[session.save_handler] = files
php_value[session.save_path]    = /var/lib/php/session
php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache
;php_value[opcache.file_cache]  = /var/lib/php/opcache

From teward at thomas-ward.net Thu Sep 24 17:05:31 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 24 Sep 2020 13:05:31 -0400 Subject: upstream timed out (110: Connection timed out) while reading response header from upstream. In-Reply-To: References: Message-ID: <243b9707-2cb7-cef8-8a6d-984381843698@thomas-ward.net> Your PHP backend is the problem. The PHP side of things is triggering a warning that it's executing too slowly, and that requires you to alter your PHP settings (which is not an NGINX thing) to accept a longer execution time on your scripts. Thomas On 9/24/20 12:46 PM, Kaushal Shriyan wrote:
> ==> /var/log/php-fpm/error.log <==
> ...
> [23-Sep-2020 08:37:30] WARNING: [pool www] child 12015, script
> '/var/www/html/motest/devportal/web/index.php' (request: "POST
> /index.php?_wrapper_format=drupal_ajax") executing too slow (3.733207
> sec), logging
> ...
> ==> /var/log/nginx/error.log <==
> 2020/09/23 08:38:26 [error] 12035#0: *1 upstream timed out (110:
> Connection timed out) while reading response header from upstream,
> client: 157.45.28.7, server: dev-portal.motest.net, request: "POST
> /admin/config/development/performance HTTP/1.1", upstream:
> "fastcgi://unix:/run/php-fpm/www.sock", host: "dev-portal.motest.net",
> referrer: "https://dev-portal.motest.net/admin/config/development/performance"
-------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at benjamindsmith.com Fri Sep 25 02:56:44 2020 From: lists at benjamindsmith.com (Lists) Date: Thu, 24 Sep 2020 19:56:44 -0700 Subject: Unable to use subrequest authentication for proxied site In-Reply-To: <20200921085232.GJ30691@daoine.org> References: <1862178.usQuhbGJ8B@tesla.effortlessis.com> <4576310.GXAFRqVoOG@tesla.effortlessis.com> <20200921085232.GJ30691@daoine.org> Message-ID: <5755469.lOV4Wx5bFT@tesla.effortlessis.com> Following up, after implementation and rollout. On Monday, September 21, 2020 1:52:32 AM PDT Francis Daly wrote: > That's probably the right thing to do overall; except that you probably > will not control what the typical browser shows for (e.g.) a 401 response. I've not seen that a 401 or whatever error code causes browsers to do "funny stuff". I've long had a mildly amusing 404 page, for example. > If the rest of your application already works with the 200 "please login" > screen, then potentially you could send the 401 to nginx in response to > the auth_request request; and add an "error_page 401 = /login_screen;" > in the nginx location{}, and make the nginx subrequest for /login_screen > return that "please login" with a 200 status. In my case, if the nginx proxy auth request fails, there are other issues elsewhere in the app and a simple denied screen is almost certainly sufficient.
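For reference, a minimal sketch of the auth_request arrangement Francis describes might look like the following; the upstream address, location names, and script paths here are hypothetical, not taken from the original setup:

```nginx
# Sketch only: 127.0.0.1:8080, /check_session.php and /login.php are made up.
location /protected/ {
    auth_request /auth;              # subrequest decides allow (2xx) or deny (401/403)
    error_page 401 = /login_screen;  # turn the upstream's 401 into a friendly page
    proxy_pass http://127.0.0.1:8080;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8080/check_session.php;
    proxy_pass_request_body off;     # the auth subrequest needs no request body
    proxy_set_header Content-Length "";
}

location = /login_screen {
    proxy_pass http://127.0.0.1:8080/login.php;  # served with a 200 status
}
```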
As a side note, within the app, if you make a request and it gives you a login screen instead, it has an http_code of 401. But if you specifically ask for the login screen, e.g. /login.php, then that's 200. It's only the case where the thing requested is different than the thing returned that it gives you a 401, 403, or 404. > http://nginx.org/r/error_page for more information on that option. > > That could maintain the control that you currently have over what the > end-user sees, while still having nginx allow the expected requests > based on what the upstream says. Thanks. I might do that at some point in the future, but right now nginx is serving a subdomain within an iframe of the main app and so is not the primary page the user sees. In a *legitimate* use, anyway. The proxy auth is to thwart malicious use. Thanks again, Ben S -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 484 bytes Desc: This is a digitally signed message part. URL: From manuel.baesler at gmail.com Fri Sep 25 09:22:26 2020 From: manuel.baesler at gmail.com (Manuel) Date: Fri, 25 Sep 2020 11:22:26 +0200 Subject: Difference between Mainline and Stable Nginx version In-Reply-To: <03362816-eacb-6c7e-d105-06d1fe7fc5d8@nginx.com> References: <03362816-eacb-6c7e-d105-06d1fe7fc5d8@nginx.com> Message-ID: Kaushal, If you look at the image https://www.nginx.com/wp-content/uploads/2014/04/branch.png I personally would only use the mainline version. If a fix was a hidden security vulnerability and it is not a major bug fix, it won't get into stable. Best, Manuel On Thu, 24 Sep 2020 at 16:47, Maxim Konovalov wrote: > Hello, > > On 24.09.2020 17:16, Kaushal Shriyan wrote: > > Hi, > > > > I am running CentOS Linux release 7.8.2003 (Core) and referring > > to https://nginx.org/en/linux_packages.html#RHEL-CentOS.
Are there any > > differences between the stable and mainline versions? Should we use > > stable or mainline for a production environment? > > [...] > > We published a blog post on this topic a while ago: > > https://www.nginx.com/blog/nginx-1-6-1-7-released/ > > -- > Maxim Konovalov > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Sep 25 18:47:51 2020 From: nginx-forum at forum.nginx.org (Amateur Synologist) Date: Fri, 25 Sep 2020 14:47:51 -0400 Subject: =?UTF-8?Q?Nginx_configuration_to_secure_Ba=C3=AFkal_installation?= Message-ID: <1f0ffdfc0f99e6d0f6ea828f69d6287f.NginxMailingListEnglish@forum.nginx.org> Hi to all. I'm a newbie in Linux and nginx, so I need your help. I have a Synology NAS with a Baïkal CardDAV server installed. Baïkal's installation instructions say: "Only the html directory is needed to be accessible by your web browser. You may choose to lock out access to any other directory using your webserver configuration. In particular you should really make sure that the Specific directory is not accessible directly, as this could contain your sql database.
The following configuration may be used for nginx:

server {
    listen 80;
    server_name dav.example.org;

    root /var/www/baikal/html;
    index index.php;

    rewrite ^/.well-known/caldav /dav.php redirect;
    rewrite ^/.well-known/carddav /dav.php redirect;

    charset utf-8;

    location ~ /(\.ht|Core|Specific) {
        deny all;
        return 404;
    }

    location ~ ^(.+\.php)(.*)$ {
        try_files $fastcgi_script_name =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
" Source: https://sabre.io/baikal/install/ Can you tell me which nginx file(s) I should edit? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289540,289540#msg-289540 From teward at thomas-ward.net Fri Sep 25 19:38:46 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Fri, 25 Sep 2020 15:38:46 -0400 Subject: =?UTF-8?Q?Re=3A_Nginx_configuration_to_secure_Ba=C3=AFkal_installation?= In-Reply-To: <1f0ffdfc0f99e6d0f6ea828f69d6287f.NginxMailingListEnglish@forum.nginx.org> References: <1f0ffdfc0f99e6d0f6ea828f69d6287f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> From what I can tell, the config as-is is fine and shouldn't need to have anything else exposed, since that's basically their nginx snippet in a nutshell. Their warning applies more if you attempt to use something that doesn't have a predefined example set - like lighttpd - where you'd then have to configure it to have the proper docroot. Otherwise the configuration looks fine per their nginx example on the same linked instructions page. Thomas On 9/25/20 2:47 PM, Amateur Synologist wrote: > Hi to all. I'm a newbie in Linux and nginx, so I need your help > I have a Synology NAS with a Baïkal CardDAV server installed. > Baïkal's installation instructions say: > > "Only the html directory is needed to be accessible by your web browser.
You
> may choose to lock out access to any other directory using your webserver
> configuration.
> In particular you should really make sure that the Specific directory is not
> accessible directly, as this could contain your sql database.
>
> The following configuration may be used for nginx:
>
> server {
>     listen 80;
>     server_name dav.example.org;
>
>     root /var/www/baikal/html;
>     index index.php;
>
>     rewrite ^/.well-known/caldav /dav.php redirect;
>     rewrite ^/.well-known/carddav /dav.php redirect;
>
>     charset utf-8;
>
>     location ~ /(\.ht|Core|Specific) {
>         deny all;
>         return 404;
>     }
>
>     location ~ ^(.+\.php)(.*)$ {
>         try_files $fastcgi_script_name =404;
>         include /etc/nginx/fastcgi_params;
>         fastcgi_split_path_info ^(.+\.php)(.*)$;
>         fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
>         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>         fastcgi_param PATH_INFO $fastcgi_path_info;
>     }
> }
> "
> Source: https://sabre.io/baikal/install/
>
> Can you tell me which nginx file(s) should I edit?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289540,289540#msg-289540
>
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alicesafr at mail.ru Fri Sep 25 23:34:00 2020 From: alicesafr at mail.ru (alicesafr at mail.ru) Date: Sat, 26 Sep 2020 02:34:00 +0300 Subject: =?UTF-8?Q?Re=3A_Nginx_configuration_to_secure_Ba=C3=AFkal_installation?= In-Reply-To: <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> References: <1f0ffdfc0f99e6d0f6ea828f69d6287f.NginxMailingListEnglish@forum.nginx.org> <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> Message-ID: <9I4E15UORBU4.ESADUZYB21191@DESKTOP-ECS2EM0> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Sep 26 09:23:30 2020 From: nginx-forum at forum.nginx.org (Amateur Synologist) Date: Sat, 26 Sep 2020 05:23:30 -0400 Subject: =?UTF-8?Q?Re=3A_Nginx_configuration_to_secure_Ba=C3=AFkal_installation?= In-Reply-To: <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> References: <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> Message-ID: <5dcb06e8282a55b2bc2532deacff5de1.NginxMailingListEnglish@forum.nginx.org> Thanx for answer. But their instructions says: "In particular you should really make sure that the Specific directory is not accessible directly, as this could contain your sql database" I've tried to enter path to Specific directory (baikal\Specific\db\) and I can access to sql database. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289540,289546#msg-289546 From alicesafr at mail.ru Sat Sep 26 09:33:38 2020 From: alicesafr at mail.ru (alicesafr at mail.ru) Date: Sat, 26 Sep 2020 12:33:38 +0300 Subject: =?UTF-8?Q?Re=3A_Nginx_configuration_to_secure_Ba=C3=AFkal_installation?= In-Reply-To: <5dcb06e8282a55b2bc2532deacff5de1.NginxMailingListEnglish@forum.nginx.org> References: <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> <5dcb06e8282a55b2bc2532deacff5de1.NginxMailingListEnglish@forum.nginx.org> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: DSC_1413.jpg Type: image/jpeg Size: 180516 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: DSC_1417.jpg Type: image/jpeg Size: 140278 bytes Desc: not available URL: From sergio at outerface.net Sun Sep 27 19:54:17 2020 From: sergio at outerface.net (sergio) Date: Sun, 27 Sep 2020 22:54:17 +0300 Subject: packages.nginx.org IPv6 SSL is broken Message-ID: <4dea36e8-e29a-8aa2-1f83-70b9d87fd73b@outerface.net> https://packages.nginx.org is not accessible via IPv6. It's pingable and http also works fine.

% openssl s_client -connect packages.nginx.org:443
CONNECTED(00000003)

Please fix it or remove the AAAA records. BTW, packages.nginx.org is not pingable by IPv4. -- sergio. From rainer at ultra-secure.de Sun Sep 27 20:20:53 2020 From: rainer at ultra-secure.de (Rainer Duffner) Date: Sun, 27 Sep 2020 22:20:53 +0200 Subject: packages.nginx.org IPv6 SSL is broken In-Reply-To: <4dea36e8-e29a-8aa2-1f83-70b9d87fd73b@outerface.net> References: <4dea36e8-e29a-8aa2-1f83-70b9d87fd73b@outerface.net> Message-ID: <033D86BD-19B5-41A7-B0CB-33BEA20E8311@ultra-secure.de> > Am 27.09.2020 um 21:54 schrieb sergio : > > https://packages.nginx.org is not accessible via IPv6 > > It's pingable and http also works fine. > > % openssl s_client -connect packages.nginx.org:443 > CONNECTED(00000003) > > > Please fix it of remove AAAA records. > > BTW, packages.nginx.org is not pingable by IPv4. Looks like everything is on Amazon. Probably moved behind an F5 LB :-) From sb at nginx.com Mon Sep 28 10:28:13 2020 From: sb at nginx.com (Sergey Budnevitch) Date: Mon, 28 Sep 2020 13:28:13 +0300 Subject: packages.nginx.org IPv6 SSL is broken In-Reply-To: <4dea36e8-e29a-8aa2-1f83-70b9d87fd73b@outerface.net> References: <4dea36e8-e29a-8aa2-1f83-70b9d87fd73b@outerface.net> Message-ID: It works, actually. Please capture packets using tcpdump/wireshark while running openssl s_client and provide the pcap file.
In the nginx-ru mailing list there was a similar issue, and the root cause was a tunnel and broken PMTUD, so also try reducing the MTU on the interface. > On 27 Sep 2020, at 22:54, sergio wrote: > > https://packages.nginx.org is not accessible via IPv6 > > It's pingable and http also works fine. > > % openssl s_client -connect packages.nginx.org:443 > CONNECTED(00000003) > > > Please fix it of remove AAAA records. > > BTW, packages.nginx.org is not pingable by IPv4. > > -- > sergio. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ege.gorgun at tokeninc.com Mon Sep 28 18:59:19 2020 From: ege.gorgun at tokeninc.com (=?iso-8859-1?Q?Ege_G=F6rg=FCn?=) Date: Mon, 28 Sep 2020 18:59:19 +0000 Subject: NGINX Forked Operations do not get handled in the Same Session as the Master Process? Message-ID: Hi everyone, I hope you and your loved ones are in good health during these uncertain times. I've been trying to integrate our physical HSM (hardware security module) devices with NGINX to offload the SSL connections through the keys that we store in our HSM devices. I have 2 scenarios with 2 different results that need your attention:

1. When I configure the NGINX configuration file (i.e. /etc/nginx/nginx.conf) with the following and start NGINX as a foreground process, SSL connections get handled correctly and I'm able to see the logs written to the HSM driver's log file:

master_process off;
daemon off;

2. When I remove the above-mentioned parameters and run NGINX as a background process, however, I believe forked operations do not get handled in the same session as the master process of NGINX and therefore they don't see our preloaded softcard or the key objects inside it.
The following is reported when an SSL connection is attempted:

*1 SSL_do_handshake() failed (SSL: error:8207A060:PKCS#11 module:pkcs11_private_encrypt:Key handle invalid error:141EC044:SSL routines:tls_construct_server_key_exchange:internal error) while SSL handshaking, client: 172.31.88.4, server: 0.0.0.0:443

Since nothing gets written to the HSM driver's log file, I believe the driver doesn't even receive any requests originating from NGINX. Here's what we are using:

* Thales nShield Connect 6000+ HSM devices with the latest firmware
* Ubuntu v18.04 server distribution
* NGINX v1.16.1
* OpenSSL v1.1.1d
* OpenSC v0.20.0
* Libp11 v0.4.10
* p11-kit v0.23.21
* libengine-pkcs11-openssl v0.4.10-1 (OpenSSL engine for PKCS#11 modules)

Any suggestions/help would be greatly appreciated. Regards, Ege

This message is intended solely for the use of the individual or entity to whom it is addressed, and may contain confidential information. If you are not the intended recipient of this message or you receive this mail in error, you should refrain from making any use of the contents and from opening any attachment. In that case, please notify the sender immediately and return the message to the sender, then delete and destroy all copies.
This e-mail message cannot be copied, published or sold for any reason. This e-mail message has been swept by anti-virus systems for the presence of computer viruses. In doing so, however, the sender cannot warrant that viruses or other forms of data corruption are not present, and does not accept any responsibility for any occurrence. -------------- next part -------------- An HTML attachment was scrubbed... URL: From omar-fares at outlook.sa Mon Sep 28 22:07:23 2020 From: omar-fares at outlook.sa (Omar Fares) Date: Mon, 28 Sep 2020 22:07:23 +0000 Subject: nginx Ticket Message-ID: <9D0D13C6-7606-48F4-A064-7C9F5B96046E@outlook.sa> Dears, Thanks for the support. When a specific request comes in (a server-to-server callback to an endpoint), I get an error:

159.122.151.254 - - [28/Sep/2020:22:53:44 +0200] "POST /api/V1/CallBack/merchantCallbackPage HTTP/1.1" 500 101 "-" "Java/1.8.0_151"

What does that mean, and how do I solve it? BR, -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Sep 28 22:12:24 2020 From: nginx-forum at forum.nginx.org (jriker1) Date: Mon, 28 Sep 2020 18:12:24 -0400 Subject: SSL routines:tls_process_client_hello:version too low Message-ID: <43f338c4ebd4f9366b2141932b0c4f71.NginxMailingListEnglish@forum.nginx.org> Hope I can post this, as Chrome keeps complaining this site has a data breach. I have been using NGINX to route my 443 traffic for two servers for a while now. Now I can't get my RDP side of things working. Not sure why, as it used to work. RDP is through Essentials Server 2016 and its Remote Web Access. I can get to the website, but when I try to pull up one of my servers it fails. So the website works, and the second server I'm using on 443 works, but not the RDP side of things. It does work if I remove NGINX and set 443 to route directly to the Essentials Server through my router, so I know it's still working.
What I get in the error logs when this happens is:

2020/09/28 05:09:50 [crit] 7556#7556: *1366 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 107.6.171.130, server: 0.0.0.0:443
2020/09/28 06:01:06 [crit] 7556#7556: *1385 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 107.6.171.130, server: 0.0.0.0:443

Thoughts? Thanks. JR Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289572,289572#msg-289572 From mdounin at mdounin.ru Tue Sep 29 14:46:07 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Sep 2020 17:46:07 +0300 Subject: nginx-1.19.3 Message-ID: <20200929144607.GD1136@mdounin.ru>

Changes with nginx 1.19.3                                        29 Sep 2020

    *) Feature: the ngx_stream_set_module.
    *) Feature: the "proxy_cookie_flags" directive.
    *) Feature: the "userid_flags" directive.
    *) Bugfix: the "stale-if-error" cache control extension was erroneously
       applied if backend returned a response with status code 500, 502,
       503, 504, 403, 404, or 429.
    *) Bugfix: "[crit] cache file ... has too long header" messages might
       appear in logs if caching was used and the backend returned responses
       with the "Vary" header line.
    *) Workaround: "[crit] SSL_write() failed" messages might appear in
       logs when using OpenSSL 1.1.1.
    *) Bugfix: "SSL_shutdown() failed (SSL: ... bad write retry)" messages
       might appear in logs; the bug had appeared in 1.19.2.
    *) Bugfix: a segmentation fault might occur in a worker process when
       using HTTP/2 if errors with code 400 were redirected to a proxied
       location using the "error_page" directive.
    *) Bugfix: socket leak when using HTTP/2 and subrequests in the njs
       module.
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Sep 29 15:24:14 2020 From: nginx-forum at forum.nginx.org (kay) Date: Tue, 29 Sep 2020 11:24:14 -0400 Subject: Simple SMTP proxy without an auth (pass AUTH command to backend) Message-ID: <0cdf22c5fb978c65b0f2e13a0e4737eb.NginxMailingListEnglish@forum.nginx.org> I'd like to use nginx to serve TLS and/or StartTLS connections only; the rest must be "proxy passed" to the backend without modification. Unfortunately I noticed the https://www.ruby-forum.com/t/nginx-does-not-pass-smtp-auth-command-to-server/184290 topic, where Maxim Dounin mentioned that it is impossible. That was 10 years ago; perhaps the situation has changed by now? Is there an option which I can use to pass the AUTH command? P.S. Side question: I'd like to use a hostname in the Auth-Server header:

location = /mail/auth {
    add_header Auth-Status OK;
    add_header Auth-Server hostname;
    add_header Auth-Port 8025;
    return 204;
}

but nginx doesn't allow this. Is there an option or a workaround for this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289589,289589#msg-289589 From siddjain at live.com Tue Sep 29 16:08:50 2020 From: siddjain at live.com (Siddharth Jain) Date: Tue, 29 Sep 2020 16:08:50 +0000 Subject: Nginx returns 404 for a wordpress multisite installation In-Reply-To: References: , Message-ID: I have a wordpress 5.4 multisite install running on php-fpm and nginx. I have this config: https://www.nginx.com/resources/wiki/start/topics/recipes/wordpress/#rewrite-rules-for-multisite-using-subdirectories which I have taken directly from nginx.com. I am able to access the base URL fine, but any requests for the subsites return 404 from nginx. The request is not even forwarded to wordpress (php-fpm).
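One way to trace which location block nginx selects and how try_files rewrites the URI is the rewrite log; a minimal sketch (the listen port and log path here are only illustrative):

```nginx
# Log rewrite-module decisions (internal redirects, try_files fallbacks)
# at "notice" level into the error log.
server {
    listen 8080;
    error_log /var/log/nginx/error.log notice;
    rewrite_log on;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}
```

With this in place, the error log records each internal rewrite, which helps confirm whether a request for /foobar really falls through to /index.php.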
Given a request for my child site, which looks like /foobar, when I look at the config it seems it will match the following location block:

location / {
    try_files $uri $uri/ /index.php?$args;
}

From there it will attempt to do an internal redirect to /index.php. So the "foobar" is lost and I would expect the base site to load - which is also wrong, btw. I tested this over here: https://nginx.viraptor.info/ But a 404 is observed instead. Can anybody help please? Is there any tool that can be used to test what nginx is doing when it receives a URL? I would like to get a dump of all the variables such as request_filename, uri etc. and the location block it's selecting etc. I am running everything inside docker and using the nginx:1.17 image.

# nginx -V
nginx version: nginx/1.17.10
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream
--with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.17.10/debian/debuild-base/nginx-1.17.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 257107 bytes Desc: image.png URL: From xeioex at nginx.com Tue Sep 29 17:32:36 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 29 Sep 2020 20:32:36 +0300 Subject: njs-0.4.4 Message-ID: <77d03d63-dcbf-3d5b-0a25-8444d20afb1f@nginx.com> Hello, I'm glad to announce a new release of the NGINX JavaScript module (njs). This release continues to extend the coverage of the ECMAScript specifications. Notable new features:

- Buffer object.

  : >> var buf = Buffer.from([0x80,206,177,206,178])
  : undefined
  : >> buf.slice(1).toString()
  : 'αβ'
  : >> buf.toString('base64')
  : 'gM6xzrI='

- DataView object.

  : >> (new DataView(buf.buffer)).getUint16()
  : 32974

You can learn more about njs:
- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs
- Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files: http://nginx.org/en/docs/njs/typescript.html

Feel free to try it and give us feedback on:
- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.4.4                                           29 Sep 2020

    nginx modules:

    *) Bugfix: fixed location merge.
    *) Bugfix: fixed r.httpVersion for HTTP/2.

    Core:

    *) Feature: added support for numeric separators (ES12).
    *) Feature: added remaining methods for %TypedArray%.prototype.
The following methods were added: every(), filter(), find(), ?????? findIndex(), forEach(), includes(), indexOf(), lastIndexOf(), ?????? map(), reduce(), reduceRight(), reverse(), some(). ??? *) Feature: added %TypedArray% remaining methods. ?????? The following methods were added: from(), of(). ??? *) Feature: added DataView object. ??? *) Feature: added Buffer object implementation. ??? *) Feature: added support for ArrayBuffer in ?????? TextDecoder.prototype.decode(). ??? *) Feature: added support for Buffer object in "crypto" methods. ??? *) Feature: added support for Buffer object in "fs" methods. ??? *) Change: Hash.prototype.digest() and Hmac.prototype.digest() ?????? now return a Buffer instance instead of a byte string when ?????? encoding is not provided. ??? *) Change: fs.readFile() and friends now return a Buffer instance ?????? instead of a byte string when encoding is not provided. ??? *) Bugfix: fixed function "prototype" property handler while ?????? setting. ??? *) Bugfix: fixed function "constructor" property handler while ?????? setting. ??? *) Bugfix: fixed String.prototype.indexOf() for byte strings. ??? *) Bugfix: fixed RegExpBuiltinExec() with a global flag and ?????? byte strings. ??? *) Bugfix: fixed RegExp.prototype[Symbol.replace] when the ?????? replacement value is a function. ??? *) Bugfix: fixed TextDecoder.prototype.decode() with non-zero ?????? TypedArray offset. 
From francis at daoine.org  Tue Sep 29 20:11:34 2020
From: francis at daoine.org (Francis Daly)
Date: Tue, 29 Sep 2020 21:11:34 +0100
Subject: Re: Nginx configuration to secure Baïkal installation
In-Reply-To: <5dcb06e8282a55b2bc2532deacff5de1.NginxMailingListEnglish@forum.nginx.org>
References: <92afc962-b04f-a7a1-51e4-ed76c4e7755b@thomas-ward.net> <5dcb06e8282a55b2bc2532deacff5de1.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200929201134.GK30691@daoine.org>

On Sat, Sep 26, 2020 at 05:23:30AM -0400, Amateur Synologist wrote:

Hi there,

I think your first question was "which file should this go in"?

You may have the answer already -- basically, it is "whichever file your
nginx reads". If you have a running system, that is "the -c argument
to nginx"; falling back to its compile-time default -- "nginx -V" can
help indicate what that is.

> But their instructions says: "In particular you should really make sure that
> the Specific directory is not accessible directly, as this could contain
> your sql database"

> I've tried to enter path to Specific directory (baikal\Specific\db\) and I
> can access to sql database.

The configuration you showed includes

> location ~ /(\.ht|Core|Specific) {
>     deny all;
>     return 404;
> }

and nothing else that would obviously match that request. So if you are
getting a http 200 response, then the config that is being used is not
the config that you showed.

Just to confirm: you are actually accessing something like
http://dav.example.org/baikal/Specific/db, yes? Can you show the
request/response using something like "curl -v"?
Thanks, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 29 20:28:23 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2020 21:28:23 +0100 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <43f338c4ebd4f9366b2141932b0c4f71.NginxMailingListEnglish@forum.nginx.org> References: <43f338c4ebd4f9366b2141932b0c4f71.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200929202823.GL30691@daoine.org> On Mon, Sep 28, 2020 at 06:12:24PM -0400, jriker1 wrote: Hi there, > What I get in the error logs when this happens is: > > 2020/09/28 05:09:50 [crit] 7556#7556: *1366 SSL_do_handshake() failed (SSL: > error:1417D18C:SSL routines:tls_process_client_hello:version too low) while > SSL handshaking, client: 107.6.171.130, server: 0.0.0.0:443 > 2020/09/28 06:01:06 [crit] 7556#7556: *1385 SSL_do_handshake() failed (SSL: > error:1417D18C:SSL routines:tls_process_client_hello:version too low) while > SSL handshaking, client: 107.6.171.130, server: 0.0.0.0:443 That means that the client is asking to use a version of TLS/SSL that the server is configured not to accept. Usually, the fix is to get the client to use a newer version of TLS. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 29 21:06:32 2020 From: nginx-forum at forum.nginx.org (jriker1) Date: Tue, 29 Sep 2020 17:06:32 -0400 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <20200929202823.GL30691@daoine.org> References: <20200929202823.GL30691@daoine.org> Message-ID: <022bcf157c25f7a382943f1d2eb865e3.NginxMailingListEnglish@forum.nginx.org> Thanks. Only thing I can see in a Wireshark trace is TLS 1.2 so shouldn't be an issue from what I can see but who knows. So it works without NGINX but that said couple things. 1. Is there a way to just make NGINX accept things and work? Way to prove it's a TLS issue then? 2. What would have changed that would have worked before and now it doesn't? 
I know the cert was renewed right before the pandemic so may be related but it is a GoDaddy cert so wouldn't be self signed and shows valid. Thanks. JR Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289572,289602#msg-289602 From pluknet at nginx.com Tue Sep 29 21:38:30 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 29 Sep 2020 22:38:30 +0100 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <43f338c4ebd4f9366b2141932b0c4f71.NginxMailingListEnglish@forum.nginx.org> References: <43f338c4ebd4f9366b2141932b0c4f71.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com> > On 28 Sep 2020, at 23:12, jriker1 wrote: > > Hope I can post this as Chrome keeps complaining this site has a data > breach. The primary interface is using mailing lists: http://nginx.org/en/support.html > > I have been using NGINX to route my 443 traffic for two servers for a while > now. Now I can't get my RDP side of things working. Not sure why as it > used to work. RDP is thru Essentials Server 2016 and it's Remote Web > Access. I can get to the website but when I try to pull up one of my > servers it fails. So the website works, the second server I'm using on 443 > works, but not the RDP side of things. it does work if I remove NGINX and > set 443 to route directly to the Essentials Server thru my router so know > it's working still. 
What I get in the error logs when this happens is: > > 2020/09/28 05:09:50 [crit] 7556#7556: *1366 SSL_do_handshake() failed (SSL: > error:1417D18C:SSL routines:tls_process_client_hello:version too low) while > SSL handshaking, client: 107.6.171.130, server: 0.0.0.0:443 > 2020/09/28 06:01:06 [crit] 7556#7556: *1385 SSL_do_handshake() failed (SSL: > error:1417D18C:SSL routines:tls_process_client_hello:version too low) while > SSL handshaking, client: 107.6.171.130, server: 0.0.0.0:443 > You didn't provide nginx version, but the error message suggests you are using OpenSSL 1.1.0. I don't know what is Essentials Server 2016, but if it is something from Microsoft, then I heard that it still prefers to use TLSv1. Then you might want to look at this thread as somewhat related: http://mailman.nginx.org/pipermail/nginx/2018-November/057154.html -- Sergey Kandaurov From francis at daoine.org Tue Sep 29 21:39:31 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2020 22:39:31 +0100 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <022bcf157c25f7a382943f1d2eb865e3.NginxMailingListEnglish@forum.nginx.org> References: <20200929202823.GL30691@daoine.org> <022bcf157c25f7a382943f1d2eb865e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200929213931.GM30691@daoine.org> On Tue, Sep 29, 2020 at 05:06:32PM -0400, jriker1 wrote: Hi there, > Thanks. Only thing I can see in a Wireshark trace is TLS 1.2 so shouldn't > be an issue from what I can see but who knows. > > So it works without NGINX but that said couple things. > > 1. Is there a way to just make NGINX accept things and work? Way to prove > it's a TLS issue then? If the nginx (debug?) logs show that nginx is rejecting this client for specific tls-version reasons, then you could choose to configure nginx to accept other tls versions, and see if that changes anything. http://nginx.org/r/ssl_protocols > 2. What would have changed that would have worked before and now it doesn't? 
> I know the cert was renewed right before the pandemic so may be related but > it is a GoDaddy cert so wouldn't be self signed and shows valid. If you can show a nginx config that does not respond as you wish it to for a specific request, then someone might be able to point out why that happens. As to what changed? Only you have the old setup and the new setup, to compare. Maybe something changed on the client side that is not in your control? Maybe non-configured default settings changed between the two setups? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 29 22:32:31 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Sep 2020 23:32:31 +0100 Subject: Simple SMTP proxy without an auth (pass AUTH command to backend) In-Reply-To: <0cdf22c5fb978c65b0f2e13a0e4737eb.NginxMailingListEnglish@forum.nginx.org> References: <0cdf22c5fb978c65b0f2e13a0e4737eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200929223231.GN30691@daoine.org> On Tue, Sep 29, 2020 at 11:24:14AM -0400, kay wrote: Hi there, > I'd like to use nginx to serve TLS and/or StartTLS connections only, the > rest must be "proxy passed" without a modification to the backend. "TLS-only" might work if you use "stream" rather than "mail", so that nginx is the TLS-termination of an otherwise-opaque stream of traffic. The rest of what you describe does not appear to match the nginx "smtp proxy" model (which is, briefly, a tcp connection is authenticated and then blindly forwarded to a back-end ip:port). > Unfortunately I noticed > https://www.ruby-forum.com/t/nginx-does-not-pass-smtp-auth-command-to-server/184290 > topic, where Maxim Dounin mentioned that it is impossible. That was 10 years > ago, probably now the situation is changed? Is there an option, which I can > use to pass the AUTH command? I don't think so, no. 
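[Editorial note: the "stream"-based TLS termination suggested above might look something like the following sketch. The listen port, backend address, and certificate paths are placeholders, not from the thread.]

```nginx
# Sketch: terminate TLS for SMTPS in the stream module and pass the
# decrypted byte stream (including the client's AUTH command) untouched
# to the backend. nginx does not parse SMTP here, so no nginx-side
# authentication is involved. Addresses and paths are hypothetical.
stream {
    server {
        listen 465 ssl;

        ssl_certificate     /etc/nginx/certs/mail.example.com.crt;
        ssl_certificate_key /etc/nginx/certs/mail.example.com.key;

        # Plain TCP to the real SMTP server.
        proxy_pass 192.0.2.25:25;
    }
}
```

Note this only covers implicit TLS on a dedicated port; STARTTLS upgrade mid-connection is not something the stream module can negotiate.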
Probably no-one cared enough about this feature to design and implement
something in nginx; instead they either changed their own design to fit
the nginx model, or they used something other than nginx.

> P.S. Side question, I'd like to use a hostname in Auth-Server header:
>
> location = /mail/auth {
>     add_header Auth-Status OK;
>     add_header Auth-Server hostname;
>     add_header Auth-Port 8025;
>     return 204;
> }
>
> but nginx doesn't allow to do this. Is there an option or a workaround for
> this?

Option - no, not today.

Workaround - in that location{}, do something dynamic to learn the IP
address that you want this smtp connection to be passed to, and send
that IP address in the header.

*Someone* has to turn the hostname into an IP address. The nginx mail
proxy protocol is that "the server" does that, not "the client".

Possibly a patch to change that would be accepted, if it is shown to be
reliable and an improvement on what is there now.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Tue Sep 29 23:45:31 2020
From: nginx-forum at forum.nginx.org (jriker1)
Date: Tue, 29 Sep 2020 19:45:31 -0400
Subject: SSL routines:tls_process_client_hello:version too low
In-Reply-To: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com>
References: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com>
Message-ID: <08f877e2e877403aeaea769bc4084ef0.NginxMailingListEnglish@forum.nginx.org>

Thanks for the replies. I can't debug right now, as I'm at a hotel and
can't turn on NGINX; if/when it fails I won't be able to access my
servers again, so I will do that later this week. Right now I am on
NGINX 1.14.1. Essentials Server 2016 is basically RD Gateway.
My configuration right now in my own conf file is below (the forum will
kill my configuration unless there is a way to do this):

server {
    listen 80;
    server_name remote.hidden.net;

    # redirect http to https
    return 301 https://$server_name$request_uri;

    client_max_body_size 0;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass http://192.168.0.1;
    }
}

server {
    listen 80;
    server_name sysmarthome.hidden.net;

    # redirect http to https
    return 301 https://$server_name$request_uri;

    client_max_body_size 0;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass http://192.168.0.50;
    }
}

upstream essentials {
    server 192.168.0.1:443;
    keepalive 32;
}

upstream homeassistant {
    server 192.168.0.50:8123;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name remote.*;

    ssl_certificate /config/user-data/ssl_chain_essentials.pem;
    ssl_certificate_key /config/user-data/ssl_chain_key_essentials.pem;

    client_max_body_size 0;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass https://essentials;
    }
}

server {
    listen 443 ssl http2;
    server_name sysmarthome.*;

    ssl_certificate /config/user-data/ssl_chain_homeassistant.pem;
    ssl_certificate_key /config/user-data/ssl_chain_key_homeassistant.pem;

    client_max_body_size 0;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass https://homeassistant;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289572,289606#msg-289606

From sergio at outerface.net  Wed Sep 30 01:36:36 2020
From: sergio at outerface.net (sergio)
Date: Wed, 30 Sep 2020 04:36:36 +0300
Subject: packages.nginx.org IPv6 SSL is broken
In-Reply-To: 
References: <4dea36e8-e29a-8aa2-1f83-70b9d87fd73b@outerface.net>
Message-ID: <03d337be-04ac-0fc4-3503-d0ae1b60bb06@outerface.net>

On 28/09/2020 13:28, Sergey Budnevitch wrote:

> It works actually.

Indeed. This has worked before, so I was wrong. It works on the router,
but not on the clients.

> was in tunnel and broken PMTUD

Yep, I'm using an HE tunnel with 1480 MTU auto-configured.

> so also try to reduce MTU on the interface.

But setting it manually to the lowest possible, 1280, changes nothing.
Setting the MTU to 1480 on the client's eth0 makes it work, but this is
not a solution.

The router is Debian stable with 4.19.0 or 5.7.0

-- 
sergio.

From nginx-forum at forum.nginx.org  Wed Sep 30 15:05:07 2020
From: nginx-forum at forum.nginx.org (kay)
Date: Wed, 30 Sep 2020 11:05:07 -0400
Subject: Simple SMTP proxy without an auth (pass AUTH command to backend)
In-Reply-To: <20200929223231.GN30691@daoine.org>
References: <20200929223231.GN30691@daoine.org>
Message-ID: 

> "TLS-only" might work if you use "stream" rather than "mail", so that
> nginx is the TLS-termination of an otherwise-opaque stream of traffic.

Thanks for the hint.
I think I can omit starttls support and use only TLS.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289589,289617#msg-289617

From nginx-forum at forum.nginx.org  Wed Sep 30 15:20:18 2020
From: nginx-forum at forum.nginx.org (jriker1)
Date: Wed, 30 Sep 2020 11:20:18 -0400
Subject: SSL routines:tls_process_client_hello:version too low
In-Reply-To: <08f877e2e877403aeaea769bc4084ef0.NginxMailingListEnglish@forum.nginx.org>
References: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com> <08f877e2e877403aeaea769bc4084ef0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0ea415173df1f2171c9b65a1faf68a5e.NginxMailingListEnglish@forum.nginx.org>

I thought I could fix it by adding the below into the server block for
remote.*, but it didn't help:

ssl_dhparam /config/user-data/dhparam.pem;
ssl_protocols TLSv1 TLSV1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SH
ssl_ecdh_curve secp384r1;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;

Note the ssl_ciphers line was longer, but that is all I can pull through
telnet. Also, I scanned the logs and personally didn't see anything.

Below is the log, since I don't see an attach option. I turned it on,
then logged into the Remote Desktop Gateway website, which is basically
remote.HIDDEN.net/remote, and then launched a remote desktop session
through it, which should all route through 443. It failed again and I
stopped NGINX. IP 192.168.0.187 is my workstation trying to launch the
remote session.

OK, no logs. The site says it's too long and pastebin won't take it.

Any thoughts?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289572,289618#msg-289618

From nginx-forum at forum.nginx.org  Wed Sep 30 20:39:03 2020
From: nginx-forum at forum.nginx.org (jriker1)
Date: Wed, 30 Sep 2020 16:39:03 -0400
Subject: SSL routines:tls_process_client_hello:version too low
In-Reply-To: <0ea415173df1f2171c9b65a1faf68a5e.NginxMailingListEnglish@forum.nginx.org>
References: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com> <08f877e2e877403aeaea769bc4084ef0.NginxMailingListEnglish@forum.nginx.org> <0ea415173df1f2171c9b65a1faf68a5e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2a241f300e93b9a8825ec15dff2eea69.NginxMailingListEnglish@forum.nginx.org>

Not sure if they are relevant, but I went through the entire log and
found these references. Guessing they are related, but not sure they
tell me personally anything:

2020/09/30 09:56:48 [debug] 17117#17117: *7 http run request: "/remoteDesktopGateway/?"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream check client, write event:1, "/remoteDesktopGateway/"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream request: "/remoteDesktopGateway/?"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream process header
2020/09/30 09:56:48 [debug] 17117#17117: *7 malloc: 56DF0A20:4096
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_read: 812
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_read: -1
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_get_error: 2
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy status 401 "401 Unauthorized"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy header: "Content-Type: text/html; charset=us-ascii"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy header: "Server: Microsoft-HTTPAPI/2.0"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy header: "WWW-Authenticate: Negotiate TlRMTVNTUAACAAAACAAIADgAAAAVgonizPkeQUJL8P0AAAAAAAAAAJIAkgBAAAAACgA5OAAAAA9JAEMARQBPAAIACABJAEMARQBPAAEAFABIAE8ATQBFAFMARQBSAFYARQBSAAQAFABJAEMARQBPAC4AbABvAGMAYQBsAAMAKgBIAE8ATQBFAFMARQBSAFYARQBSAC4AaQBjAGUAbwAuAGwAbwBjAGEAbAAFABQASQBDAEUATwAuAGwAbwBjAGEAbAAHAAgAp/sS7DmX1gEAAAAA"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy header: "Date: Wed, 30 Sep 2020 14:56:47 GMT"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy header: "Content-Length: 341"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http proxy header done
2020/09/30 09:56:48 [debug] 17117#17117: *7 HTTP/1.1 401 Unauthorized
Server: nginx/1.14.1
Date: Wed, 30 Sep 2020 14:56:48 GMT
Content-Type: text/html; charset=us-ascii
Content-Length: 341
Connection: keep-alive
WWW-Authenticate: Negotiate TlRMTVNTUAACAAAACAAIADgAAAAVgonizPkeQUJL8P0AAAAAAAAAAJIAkgBAAAAACgA5OAAAAA9JAEMARQBPAAIACABJAEMARQBPAAEAFABIAE8ATQBFAFMARQBSAFYARQBSAAQAFABJAEMARQBPAC4AbABvAGMAYQBsAAMAKgBIAE8ATQBFAFMARQBSAFYARQBSAC4AaQBjAGUAbwAuAGwAbwBjAGEAbAAFABQASQBDAEUATwAuAGwAbwBjAGEAbAAHAAgAp/sS7DmX1gEAAAAA

-------------------

2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_read: -1
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_get_error: 5
2020/09/30 09:56:48 [debug] 17117#17117: *7 peer shutdown SSL cleanly
2020/09/30 09:56:48 [error] 17117#17117:
*7 upstream prematurely closed connection while reading response header from upstream, client: 192.168.0.187, server: remote.*, request: "RDG_OUT_DATA /remoteDesktopGateway/ HTTP/1.1", upstream: "https://192.168.0.1:443/remoteDesktopGateway/", host: "remote.HIDDEN.net"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http next upstream, 2
2020/09/30 09:56:48 [debug] 17117#17117: *7 free keepalive peer
2020/09/30 09:56:48 [debug] 17117#17117: *7 free rr peer 1 4
2020/09/30 09:56:48 [debug] 17117#17117: *7 close http upstream connection: 16
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_shutdown: 1

---------------------

2020/09/30 09:56:48 [debug] 17117#17117: *7 stream socket 13
2020/09/30 09:56:48 [debug] 17117#17117: *7 epoll add connection: fd:13 ev:80002005
2020/09/30 09:56:48 [debug] 17117#17117: *7 connect to 192.168.0.1:443, fd:13 #9
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream connect: -2
2020/09/30 09:56:48 [debug] 17117#17117: *7 posix_memalign: 56DE47D0:128 @16
2020/09/30 09:56:48 [debug] 17117#17117: *7 event timer add: 13: 60000:535061225
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream request: "/remoteDesktopGateway/?"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream send request handler
2020/09/30 09:56:48 [debug] 17117#17117: *7 set session: 56D911A0
2020/09/30 09:56:48 [debug] 17117#17117: *7 tcp_nodelay
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_do_handshake: -1
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_get_error: 2
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL handshake handler: 0
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_do_handshake: 1
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL: TLSv1.2, cipher: "AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256"
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL reused session
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream ssl handshake: "/remoteDesktopGateway/?"
2020/09/30 09:56:48 [debug] 17117#17117: *7 save session: 56D911A0
2020/09/30 09:56:48 [debug] 17117#17117: *7 old session: 56D911A0
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream send request
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream send request body
2020/09/30 09:56:48 [debug] 17117#17117: *7 chain writer buf fl:1 s:1251
2020/09/30 09:56:48 [debug] 17117#17117: *7 chain writer in: 56DD8130
2020/09/30 09:56:48 [debug] 17117#17117: *7 posix_memalign: 56DE1D50:128 @16
2020/09/30 09:56:48 [debug] 17117#17117: *7 malloc: 56DDCC88:16384
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL buf copy: 1251
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL to write: 1251
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_write: 1251
2020/09/30 09:56:48 [debug] 17117#17117: *7 chain writer out: 00000000
2020/09/30 09:56:48 [debug] 17117#17117: *7 event timer del: 13: 535061225
2020/09/30 09:56:48 [debug] 17117#17117: *7 event timer add: 13: 60000:535061235
2020/09/30 09:56:48 [debug] 17117#17117: *7 http upstream process header
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_read: -1
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_get_error: 5
2020/09/30 09:56:48 [debug] 17117#17117: *7 peer shutdown SSL cleanly
2020/09/30 09:56:48 [error] 17117#17117: *7 upstream prematurely closed connection while reading response header from upstream, client: 192.168.0.187, server: remote.*, request: "RDG_OUT_DATA /remoteDesktopGateway/ HTTP/1.1", upstream: "https://192.168.0.1:443/remoteDesktopGateway/", host: "remote.HIDDEN.net"
2020/09/30 09:56:48 [debug] 17117#17117: *7 http next upstream, 2
2020/09/30 09:56:48 [debug] 17117#17117: *7 free keepalive peer
2020/09/30 09:56:48 [debug] 17117#17117: *7 free rr peer 1 4
2020/09/30 09:56:48 [debug] 17117#17117: *7 finalize http upstream request: 502
2020/09/30 09:56:48 [debug] 17117#17117: *7 finalize http proxy request
2020/09/30 09:56:48 [debug] 17117#17117: *7 SSL_shutdown: 1
2020/09/30 09:56:48 [debug] 17117#17117: *7
close http upstream connection: 13
2020/09/30 09:56:48 [debug] 17117#17117: *7 free: 56DDCC88
2020/09/30 09:56:48 [debug] 17117#17117: *7 free: 56DE47D0, unused: 24
2020/09/30 09:56:48 [debug] 17117#17117: *7 free: 56DE1D50, unused: 56
2020/09/30 09:56:48 [debug] 17117#17117: *7 event timer del: 13: 535061235
2020/09/30 09:56:48 [debug] 17117#17117: *7 reusable connection: 0
2020/09/30 09:56:48 [debug] 17117#17117: *7 http finalize request: 502, "/remoteDesktopGateway/?" a:1, c:1
2020/09/30 09:56:48 [debug] 17117#17117: *7 http special response: 502, "/remoteDesktopGateway/?"
2020/09/30 09:56:48 [debug] 17117#17117: *7 HTTP/1.1 502 Bad Gateway
Server: nginx/1.14.1
Date: Wed, 30 Sep 2020 14:56:48 GMT
Content-Type: text/html
Content-Length: 173
Connection: keep-alive

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289572,289621#msg-289621