From nginx-forum at forum.nginx.org Sun Jul 1 07:52:59 2018
From: nginx-forum at forum.nginx.org (shiz)
Date: Sun, 01 Jul 2018 03:52:59 -0400
Subject: variable for domain name in redirect
Message-ID:

Hello,

I have something in one of my server blocks:

```
# switch to TLS for page.php, contact.php, faq.php, known https sources and /
if ($scheme = http) {
    set $rule_9 1$rule_9;
}
if ($outdated = 0){
    set $rule_9 2$rule_9;
}
if ($request_uri ~ "^/(auction|biglemon|carview(-bike)?|carsensor(-c)?|daihatsu|gazoo|goo(bike(-catalog)?|net)|honda|koubai|kuriyama-truck|kurumaerabi(-search)?|truck-reparts(-b)?|truskey|list1|pdns|rakuten|page|faq|contact(_japan)?|jpa(-b|L1)?|_jpa[23])\.php|^/(\?.*)?$") {
    set $rule_9 3$rule_9;
}
if ($rule_9 = "3210"){
    return 301 https://www.server.com$request_uri;
}
```

Then something similar in another:

```
# switch to TLS for page.php, contact.php, faq.php, known https sources and /
if ($scheme = http) {
    set $rule_9 1$rule_9;
}
if ($outdated = 0){
    set $rule_9 2$rule_9;
}
if ($request_uri ~ "^/(auction|biglemon|carview(-bike)?|carsensor(-c)?|daihatsu|gazoo|goo(bike(-catalog)?|net)|honda|koubai|kuriyama-truck|kurumaerabi(-search)?|truck-reparts(-b)?|truskey|list1|pdns|rakuten|page|faq|contact(_japan)?|jpa(-b|L1)?|_jpa[23])\.php|^/(\?.*)?$") {
    set $rule_9 3$rule_9;
}
if ($rule_9 = "3210"){
    return 301 https://dev.server.com$request_uri;
}
```

The problem is that I have several servers.

1 - Is there a way to use a variable for the server name in the 'return 301' line, so that I could use shared code and not have to edit this snippet everywhere another site turns https?
2 - I have a lot of those return 301 lines where code could be shared if something like this were available, e.g. $scheme://$actual_real_server_name$request_uri

The difficulty is that server_name can, and often does, contain multiple entries.

I have found nothing useful about this on serverfault or anywhere else. I have been dealing with this for several years.

Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280353,280353#msg-280353

From mdounin at mdounin.ru Sun Jul 1 15:17:55 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 1 Jul 2018 18:17:55 +0300
Subject: variable for domain name in redirect
In-Reply-To:
References:
Message-ID: <20180701151755.GP35731@mdounin.ru>

Hello!

On Sun, Jul 01, 2018 at 03:52:59AM -0400, shiz wrote:

> I have something in one of my server blocks:

[...]

> if ($rule_9 = "3210"){
>     return 301 https://dev.server.com$request_uri;
> }
> ```
>
> The problem is that I have several servers.
>
> 1 - Is there a way to use a variable for the server name in the 'return
> 301' line, so that I could use shared code and not have to edit this
> snippet everywhere another site turns https?
> 2 - I have a lot of those return 301 lines where code could be shared if
> something like this were available, e.g.
> $scheme://$actual_real_server_name$request_uri
>
> The difficulty is that server_name can, and often does, contain multiple
> entries.
>
> I have found nothing useful about this on serverfault or anywhere else.
> I have been dealing with this for several years.

In most cases, $server_name is the variable you want. This variable contains the name of the server as defined by the server_name directive, see http://nginx.org/r/$server_name. If there are multiple names defined, it will contain the first one.
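For illustration, a minimal sketch of the shared snippet this enables (not part of the original message; it assumes the $rule_9 logic shown above and lives in a file include'd from every server{} block):

```
# tls-redirect.conf -- include'd into each server{} block
if ($rule_9 = "3210") {
    return 301 https://$server_name$request_uri;
}
```

Because $server_name is resolved per server block, the same included file produces the right host in every redirect.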
In some cases, you may also want to use the $host variable. It will contain the name as obtained from the client, and might be better than $server_name if a single server{} block is used to handle requests to multiple different names and you don't want nginx to return redirects to a canonical name, but prefer to keep the names separate.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Sun Jul 1 19:56:59 2018
From: nginx-forum at forum.nginx.org (shiz)
Date: Sun, 01 Jul 2018 15:56:59 -0400
Subject: variable for domain name in redirect
In-Reply-To: <20180701151755.GP35731@mdounin.ru>
References: <20180701151755.GP35731@mdounin.ru>
Message-ID:

> In most cases, $server_name is the variable you want.

Thanks so much. Works like a charm. This simplifies my configuration/maintenance a lot.

Best!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280353,280360#msg-280360

From r1ch+nginx at teamliquid.net Mon Jul 2 16:18:15 2018
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Mon, 2 Jul 2018 18:18:15 +0200
Subject: Wait for backend
In-Reply-To:
References:
Message-ID:

One way to do this may be to block the port with a firewall rule during reload. This way nginx will have to wait for the connect timeout (and hopefully retry) rather than failing immediately after receiving a RST from the closed port.
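On Linux, that could look roughly like this (an illustrative sketch; the backend port 8080 and the restart command are hypothetical):

```
# silently drop new connections to the backend while it restarts;
# nginx then waits on its connect timeout instead of getting a RST
iptables -I INPUT -p tcp --dport 8080 --syn -j DROP
service backend restart          # hypothetical backend reload
iptables -D INPUT -p tcp --dport 8080 --syn -j DROP
```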
On Thu, Jun 28, 2018 at 2:15 PM duda wrote:

> *That is
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,280293,280320#msg-280320

From nginx-forum at forum.nginx.org Mon Jul 2 16:53:16 2018
From: nginx-forum at forum.nginx.org (duda)
Date: Mon, 02 Jul 2018 12:53:16 -0400
Subject: Wait for backend
In-Reply-To:
References:
Message-ID: <4cc13551673798f56b1d335b3a6b5e1c.NginxMailingListEnglish@forum.nginx.org>

That is very sad. IMHO waiting for the backend is a very simple feature; it can be implemented in Node.js or in Go in a couple of lines. I am surprised that nginx does not support it.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280293,280377#msg-280377

From nginx-forum at forum.nginx.org Tue Jul 3 13:24:18 2018
From: nginx-forum at forum.nginx.org (mevans336)
Date: Tue, 03 Jul 2018 09:24:18 -0400
Subject: Reverse Proxy Prompt for Client Certificate?
Message-ID:

I am trying to set up a reverse proxy to the Windows Admin Center (WAC). The WAC requires the use of a client certificate for authentication. When I log into the WAC via https://localhost:6516 or https://192.168.0.100:6516 I am prompted for the certificate and everything works fine. If I attempt to log in from outside my network across the WAN, I simply receive a 403 without being prompted for the certificate.

Microsoft says if you don't get the certificate prompt or choose the wrong one, you will get the 403, so I think something in my nginx reverse proxy config needs to be set to pass the certificate request through?

Here is the relevant config ... I started with nothing but a bare proxy_pass and added the rest of the directives as I was trying to get it working.

location /winac {
    proxy_pass https://192.168.0.100:6516;
    proxy_ssl_verify off;
    proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
    proxy_set_header X-SSL-CERT $ssl_client_cert;
    proxy_pass_request_headers on;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280385,280385#msg-280385

From r1ch+nginx at teamliquid.net Tue Jul 3 15:21:00 2018
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Tue, 3 Jul 2018 17:21:00 +0200
Subject: Reverse Proxy Prompt for Client Certificate?
In-Reply-To:
References:
Message-ID:

I don't think this is possible. By the time you know the client wishes to request the /winac location, the SSL session has already been established, at which point the server can no longer send a ClientCertificateRequest. Using the stream module to proxy the whole connection may work, but obviously this prevents changing functionality at the HTTP level.

http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html

On Tue, Jul 3, 2018 at 3:24 PM mevans336 wrote: [...]
From mdounin at mdounin.ru Tue Jul 3 15:37:23 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 3 Jul 2018 18:37:23 +0300
Subject: nginx-1.15.1
Message-ID: <20180703153723.GG56558@mdounin.ru>

Changes with nginx 1.15.1                                        03 Jul 2018

    *) Feature: the "random" directive inside the "upstream" block.

    *) Feature: improved performance when using the "hash" and "ip_hash"
       directives with the "zone" directive.

    *) Feature: the "reuseport" parameter of the "listen" directive now
       uses SO_REUSEPORT_LB on FreeBSD 12.

    *) Bugfix: HTTP/2 server push did not work if SSL was terminated by a
       proxy server in front of nginx.

    *) Bugfix: the "tcp_nopush" directive was always used on backend
       connections.

    *) Bugfix: sending a disk-buffered request body to a gRPC backend might
       fail.

--
Maxim Dounin
http://nginx.org/

From kworthington at gmail.com Tue Jul 3 16:08:01 2018
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 3 Jul 2018 16:08:01 +0000
Subject: [nginx-announce] nginx-1.15.1
In-Reply-To: <20180703153729.GH56558@mdounin.ru>
References: <20180703153729.GH56558@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.15.1 for Windows https://kevinworthington.com/nginxwin1151 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org.

Announcements are also available here:

Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Jul 3, 2018 at 3:37 PM, Maxim Dounin wrote: [...]

From nginx-forum at forum.nginx.org Tue Jul 3 16:10:01 2018
From: nginx-forum at forum.nginx.org (mevans336)
Date: Tue, 03 Jul 2018 12:10:01 -0400
Subject: Reverse Proxy Prompt for Client Certificate?
In-Reply-To:
References:
Message-ID:

Kemp can do it: https://www.tech-coffee.net/deploy-windows-admin-center-in-ha-through-kemp-load-balancer/

I can give the stream module a shot also. Would this be a basic config to get me started?

stream {
    listen 443
    proxy_pass https://192.168.1.0:6516/
    proxy_ssl_verify off;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280385,280393#msg-280393
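For reference, a corrected sketch of that idea: a stream proxy needs a server{} block and semicolons, proxy_pass takes a plain address (no scheme or URI), and proxy_ssl_verify only applies when proxy_ssl is on. The WAC address here is taken from the earlier message:

```
stream {
    server {
        listen 443;
        # raw TCP pass-through: the TLS handshake (including the
        # client certificate request) happens end-to-end with the WAC
        proxy_pass 192.168.0.100:6516;
    }
}
```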
From nginx-forum at forum.nginx.org Wed Jul 4 07:31:59 2018
From: nginx-forum at forum.nginx.org (shivramg94)
Date: Wed, 04 Jul 2018 03:31:59 -0400
Subject: SSL Handshake Failure with error:1407609B:SSL in error logs
Message-ID: <60b28fadf9405d94871033dcecf0a5ef.NginxMailingListEnglish@forum.nginx.org>

Hi,

We are trying to configure TCP load balancing with TLS termination. But when we try to access the URL, we see the below errors in the nginx error and access logs.

Nginx error log:

2018/07/04 07:16:45 [crit] 7944#0: *61 SSL_do_handshake() failed (SSL: error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request) while SSL handshaking, client: XX.XXX.XX.XX, server: 0.0.0.0:443

Nginx access log:

10.90.241.125 - - [04/Jul/2018:07:24:55 +0000] TCP 500 0 0 0.000 "-"

The nginx.conf file looks like this:

stream {
    log_format sample '$remote_addr - - [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
    upstream backends {
        server sample-domain-name.com:443;
    }
    server {
        listen 443 ssl;
        access_log /etc/access_logs/tcp_access_log sample;
        ssl_certificate Certificate_PATH;
        ssl_certificate_key Private_Key_Path;
        proxy_ssl off;
        proxy_pass backends;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280396,280396#msg-280396

From hamidul.islam at veriday.com Wed Jul 4 14:08:30 2018
From: hamidul.islam at veriday.com (Hamidul Islam)
Date: Wed, 4 Jul 2018 10:08:30 -0400
Subject: Question on having multiple SSL cert for multiple domains
Message-ID:

Hi NGINX Support,

It would be very helpful if you could advise how we can add 2 different SSL certs for 2 URLs in the nginx configuration. At present we have 2 configuration files for managing the URLs. In one of the configuration files (Config File 01) we have one site with an SSL cert. In the other configuration file (Config File 02) we have many domains with no cert. What we want is to add an SSL cert for one of the domains in the 2nd configuration file, the one that has many domains.

*In Config File 01 the configuration example is as below, with the SSL cert:*

# SSL cert is declared globally as below:
ssl_certificate /etc/nginx/sslSHA2_2015/server.bundle.crt;
ssl_certificate_key /etc/nginx/sslSHA2_2015/server.key;

server {
    listen 443 ssl;
    server_name site1.com:443;
    ........}

*In Config File 02 the configuration example is as below; it does not have any SSL cert:*

server {
    listen 80;
    server_name www.site2.ca site2.ca;
    client_max_body_size 50M;
    proxy_read_timeout 180s;
    ......}

server {
    listen 80;
    server_name www.site3.ca site3.ca;
    client_max_body_size 50M;
    proxy_read_timeout 180s;
    ......}

Can you advise how we can add an SSL cert for one domain (for example site2.ca) in the config 02 file, and what changes we need to make in the config 01 file?

Thanks

From giacomo at beta.srl Wed Jul 4 14:17:18 2018
From: giacomo at beta.srl (Giacomo Arru - BETA Technologies)
Date: Wed, 4 Jul 2018 16:17:18 +0200 (CEST)
Subject: Problems with Tomcat + NGINX
Message-ID: <2033867596.629881.1530713838115.JavaMail.zimbra@beta.srl>

Tomcat: 9.0.8
nginx: 1.12.2

I have this configuration: a Vaadin 8 application, served via Tomcat 9. The application has manual push with websocket transport.

*If I use the app directly from Tomcat:*

- the websocket connection works correctly.
- the upload of 10 MB files within the app works.

*If I use the application through the nginx proxy:*

the upload works for very small files only (max 61440 bytes), and the websocket initially works, but after 30 seconds the application hangs (I think the websocket gets closed).

This is the nginx configuration:

*nginx.conf*

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$host" sn="$server_name" '
                        'rt=$request_time '
                        'ua="$upstream_addr" us="$upstream_status" '
                        'ut="$upstream_response_time" ul="$upstream_response_length" '
                        'cs=$upstream_cache_status';

    access_log /var/log/nginx/access.log main_ext;

    # Mitigate httpoxy attack (see README for details)
    proxy_set_header Proxy "";

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name demo.myserver.com;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
    }

    client_body_buffer_size 10M;
    client_max_body_size 10M;
    gzip on;
    send_timeout 600;
    proxy_connect_timeout 81640;
    proxy_send_timeout 81640;
    proxy_read_timeout 81640;
    proxy_set_header Connection "";
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_redirect off;
    proxy_request_buffering off;
    types_hash_max_size 2048;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}

*myvhost.conf*

proxy_cache_path /tmp/NGINX_cache-demo/ levels=1:2 keys_zone=demo:10m max_size=100m inactive=1h;

upstream demo {
    ip_hash;
    server 172.16.1.1:8080 max_fails=0 fail_timeout=3s;
    keepalive 100;
}

server {
    listen 80;
    server_name demo.myserver.com;

    # Redirect all HTTP to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    server_name demo.impresacloud.com;

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/demo.impresacloud.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/demo.impresacloud.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    access_log /var/log/nginx/access_demo.log main_ext;
    error_log /var/log/nginx/error_demo.log info;

    client_max_body_size 128m;

    # disable unsupported ciphers
    #ssl_ciphers AESGCM:HIGH:!aNULL:!MD5;

    # ssl optimizations
    ssl_session_cache shared:SSL:60m;
    #ssl_session_timeout 60m;
    add_header Strict-Transport-Security "max-age=31536000";

    client_header_timeout 3m;
    client_body_timeout 3m;

    # Resolve redirect loop
    location = /app/ {
        return 302 /;
    }
    location = /app {
        return 302 /;
    }

    # A location block is needed per URI group
    location / {
        #proxy_read_timeout 300;
        #proxy_connect_timeout 300;
        proxy_cache demo;
        proxy_cookie_path /app /;
        error_page 500 502 503 504 /server_down.html;

        ### force timeouts if one of the backends dies ##
        #proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        #proxy_set_header Accept-Encoding "";
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Server $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_buffering off;
        proxy_ignore_client_abort off;
        proxy_redirect off;

        ### Most PHP, Python, Rails, Java apps can use this header ###
        #proxy_set_header X-Forwarded-Proto $scheme;
        #add_header Front-End-Https on;
        #proxy_pass_request_headers On;

        #proxy_buffer_size 64k;
        #proxy_buffers 16 32k;
        #proxy_busy_buffers_size 64k;

        #proxy_connect_timeout 3600;
        #proxy_read_timeout 84600s;
        #proxy_send_timeout 84600s;

        #reset_timedout_connection off;

        proxy_pass http://demo/app/;
    }

    location = /server_down.html {
        root /opt/ImpresaCloud/proxy_html/;
    }
}

From danny at trisect.uk Wed Jul 4 15:03:14 2018
From: danny at trisect.uk (Danny Horne)
Date: Wed, 4 Jul 2018 16:03:14 +0100
Subject: Question on having multiple SSL cert for multiple domains
In-Reply-To:
References:
Message-ID: <01b612cc-3850-d512-bf98-f2ad13e53b71@trisect.uk>

On 04/07/18 15:08, Hamidul Islam wrote:
> Hi NGINX Support,
>
> It would be very helpful if you could advise how we can add 2 different
> SSL certs for 2 URLs in the nginx configuration.
>
> Can you advise how we can add an SSL cert for one domain (for example
> site2.ca) in the config 02 file, and what changes we need to make in the
> config 01 file?
>
> Thanks

The easiest way (in my opinion) would be to place the SSL configuration within the appropriate server block.

From m16+nginx at monksofcool.net Wed Jul 4 15:29:29 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Wed, 4 Jul 2018 17:29:29 +0200
Subject: Question on having multiple SSL cert for multiple domains
In-Reply-To: <01b612cc-3850-d512-bf98-f2ad13e53b71@trisect.uk>
References: <01b612cc-3850-d512-bf98-f2ad13e53b71@trisect.uk>
Message-ID:

On 04.07.2018 17:03, Danny Horne via nginx wrote:

> The easiest way (in my opinion) would be to place the SSL configuration
> within the appropriate server block.

Agreed. The SSL configuration parameters can either be added directly or via 'include' statements. Personally, I prefer using generator scripts which produce nginx config files over include statements, because it makes the resulting files easier to read.

-Ralph
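For example, a sketch of what that would look like in Config File 02 (the certificate paths are hypothetical):

```
server {
    listen 443 ssl;
    server_name www.site2.ca site2.ca;
    ssl_certificate     /etc/nginx/ssl/site2.ca.bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/site2.ca.key;
    client_max_body_size 50M;
    proxy_read_timeout 180s;
    # ... remaining directives as in the existing port-80 block
}
```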
From jakub at 31337.pl Wed Jul 4 15:56:02 2018
From: jakub at 31337.pl (Jakub Mroziński)
Date: Wed, 4 Jul 2018 17:56:02 +0200
Subject: limit_rate for POST operations
Message-ID:

Hi,

I would like to throttle requests POSTed by clients. Currently limit_rate only works on traffic from server to client (download); is there any specific reason why it can't limit "upload"? I have tested this using simple calls ("curl -X POST -d @..."); the POST requests were handled by a backend over proxy_pass.

BR
jm

From iippolitov at nginx.com Wed Jul 4 16:30:29 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Wed, 4 Jul 2018 19:30:29 +0300
Subject: Problems with Tomcat + NGINX
In-Reply-To: <2033867596.629881.1530713838115.JavaMail.zimbra@beta.srl>
References: <2033867596.629881.1530713838115.JavaMail.zimbra@beta.srl>
Message-ID: <3833dc5d-52f0-e579-4ef0-e751df8bd062@nginx.com>

Giacomo,

Have a look at the nginx error and access logs. Most likely, that's Tomcat's default timeout firing.

Regards,
Igor.

On 04.07.2018 17:17, Giacomo Arru - BETA Technologies wrote: [...]
From mdounin at mdounin.ru Wed Jul 4 16:30:37 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 4 Jul 2018 19:30:37 +0300
Subject: limit_rate for POST operations
In-Reply-To:
References:
Message-ID: <20180704163036.GL56558@mdounin.ru>

Hello!

On Wed, Jul 04, 2018 at 05:56:02PM +0200, Jakub Mroziński wrote:

> I would like to throttle requests POSTed by clients. Currently
> limit_rate only works on traffic from server to client
> (download); is there any specific reason why it can't limit
> "upload"?

The most specific reason is that it's not something implemented, mostly because use cases which require upload limiting are quite rare in HTTP. And if implemented, it should be called differently, as upload and download limiting are quite different things, and using the same directive to control both looks like a bad idea.

Upload limiting is currently only available in the stream proxy, see http://nginx.org/r/proxy_upload_rate.
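A minimal sketch of what that looks like (not part of the original message; addresses and rates are hypothetical):

```
stream {
    server {
        listen 2222;
        proxy_upload_rate   102400;   # client -> upstream, bytes per second
        proxy_download_rate 204800;   # upstream -> client
        proxy_pass 10.0.0.1:2222;
    }
}
```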
--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Wed Jul 4 17:04:59 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 4 Jul 2018 20:04:59 +0300
Subject: SSL Handshake Failure with error:1407609B:SSL in error logs
In-Reply-To: <60b28fadf9405d94871033dcecf0a5ef.NginxMailingListEnglish@forum.nginx.org>
References: <60b28fadf9405d94871033dcecf0a5ef.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180704170459.GM56558@mdounin.ru>

Hello!

On Wed, Jul 04, 2018 at 03:31:59AM -0400, shivramg94 wrote:

> We are trying to configure TCP load balancing with TLS termination. But
> when we try to access the URL, we see the below errors in the nginx error
> and access logs
>
> 2018/07/04 07:16:45 [crit] 7944#0: *61 SSL_do_handshake() failed (SSL:
> error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request)
> while SSL handshaking, client: XX.XXX.XX.XX, server: 0.0.0.0:443

[...]

The error in question means that OpenSSL encountered a "CONNE..." string instead of an SSL ClientHello message. That is, it looks like you are trying to talk to nginx without SSL, while you've configured it to expect SSL on the socket in question.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Jul 6 12:32:07 2018
From: nginx-forum at forum.nginx.org (stephan13360)
Date: Fri, 06 Jul 2018 08:32:07 -0400
Subject: proxy_cache_background_update leads to 200 ms delay
Message-ID:

I recently set proxy_cache_valid 200 to 1 second, down from 15 minutes, to refresh the content more often (with proxy_cache_background_update on; already activated long before that). Our web monitoring, checking our site every minute, showed an increase in response time following this change.

After some investigating I pinned it down to a ~200 ms delay coming from using proxy_cache_background_update.

With an EXPIRED cache and proxy_cache_background_update disabled, the site takes around 500 ms to load. With a cache HIT it takes around ~5 ms. Then I stopped php-fpm, disabled proxy_cache_background_update, and got a STALE response (proxy_cache_use_stale error) taking ~6 ms. Then I started php-fpm again and enabled proxy_cache_background_update; when the cache expired I got a STALE response taking around ~200 ms.

I got the times I measured using curl's time_total on localhost, so network delay is out of the picture. I ran the tests dozens of times and the milliseconds are very consistent. I have a few sites/servers showing this behavior and some that don't. The server in this example is an AWS EC2 t2.small with SSD storage. Only NGINX and the cache are running on this instance; PHP is running on a different server.

Currently I don't know why this is happening and would appreciate any hints.
Example output from cURL:

================EXPIRED================

HTTP/1.1 200 OK
Server: nginx
Date: Fri, 06 Jul 2018 12:00:46 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 112834
Connection: keep-alive
Vary: Accept-Encoding
Vary: Accept-Encoding
Content-Language: de
X-Cache: EXPIRED

Status Code: 200
Lookup time: 0.004181 s
Connect time (TCP): 0.004236 s
Connect time (SSL): 0.000000 s
Pretransfer time: 0.004259 s
Starttransfer time: 0.503897 s
Size download: 112834 bytes
Speed download: 223248.000 bytes/s
Total time: 0.505418 s

================HIT================

HTTP/1.1 200 OK
Server: nginx
Date: Fri, 06 Jul 2018 12:01:23 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 112834
Connection: keep-alive
Vary: Accept-Encoding
Vary: Accept-Encoding
Content-Language: de
X-Cache: HIT

Status Code: 200
Lookup time: 0.004200 s
Connect time (TCP): 0.004257 s
Connect time (SSL): 0.000000 s
Pretransfer time: 0.004281 s
Starttransfer time: 0.005348 s
Size download: 112834 bytes
Speed download: 20642883.000 bytes/s
Total time: 0.005466 s

================STALE (proxy_cache_background_update on)================

HTTP/1.1 200 OK
Server: nginx
Date: Fri, 06 Jul 2018 12:02:28 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 112834
Connection: keep-alive
Vary: Accept-Encoding
Vary: Accept-Encoding
Content-Language: de
X-Cache: STALE

Status Code: 200
Lookup time: 0.004203 s
Connect time (TCP): 0.004267 s
Connect time (SSL): 0.000000 s
Pretransfer time: 0.004294 s
Starttransfer time: 0.004870 s
Size download: 112834 bytes
Speed download: 537563.000 bytes/s
Total time: 0.209899 s

================STALE (proxy_cache_background_update off)================

HTTP/1.1 200 OK
Server: nginx
Date: Fri, 06 Jul 2018 12:03:03 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 112834
Connection: keep-alive
Vary: Accept-Encoding
Vary: Accept-Encoding
Content-Language: de
X-Cache: STALE

Status Code: 200
Lookup time: 0.004204 s
Connect time (TCP): 0.004262 s
Connect time (SSL): 0.000000 s
Pretransfer time: 0.004289 s
Starttransfer time: 0.005906 s
Size download: 112834 bytes
Speed download: 18752534.000 bytes/s
Total time: 0.006017 s

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280434,280434#msg-280434

From nginx-forum at forum.nginx.org Fri Jul 6 13:55:00 2018
From: nginx-forum at forum.nginx.org (stephan13360)
Date: Fri, 06 Jul 2018 09:55:00 -0400
Subject: proxy_cache_background_update leads to 200 ms delay
In-Reply-To:
References:
Message-ID:

To clarify a few things: the delay has nothing to do with the proxy_cache_valid 200 time; this change only made me realize that there seems to be a problem. Before this change our web monitoring would always get a cache HIT, and after it mostly got a STALE response because it checks every minute.

We use NGINX 1.15.0 mainline.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280434,280437#msg-280437

From gfrankliu at gmail.com Fri Jul 6 17:06:10 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Fri, 6 Jul 2018 10:06:10 -0700
Subject: SSL Handshake Failure with error:1407609B:SSL in error logs
In-Reply-To: <60b28fadf9405d94871033dcecf0a5ef.NginxMailingListEnglish@forum.nginx.org>
References: <60b28fadf9405d94871033dcecf0a5ef.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Since your backend is already doing SSL, you should remove ssl from the listen directive, so that nginx will just do a simple TCP pass-through. Change

listen 443 ssl;

to

listen 443;

On Wed, Jul 4, 2018 at 12:31 AM, shivramg94 wrote: [...]

From vbart at nginx.com Sat Jul 7 11:14:27 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Sat, 07 Jul 2018 14:14:27 +0300
Subject: proxy_cache_background_update leads to 200 ms delay
In-Reply-To:
References:
Message-ID: <2277498.0kgbyRdg0M@vbart-laptop>

On Friday, 6 July 2018 15:32:07 MSK stephan13360 wrote:
> I recently set proxy_cache_valid 200 to 1 second, down from 15 minutes,
> to refresh the content more often (with proxy_cache_background_update on;
> already activated long before that).
> Our web monitoring, checking our site every minute, showed an increase in
> response time following this change.
>
> After some investigating I pinned it down to a ~200 ms delay coming from
> using proxy_cache_background_update.
> [..]

I assume you have the tcp_nopush directive enabled; please try switching it off.

wbr, Valentin V. Bartenev

From nginx-forum at forum.nginx.org Sat Jul 7 12:15:43 2018
From: nginx-forum at forum.nginx.org (stephan13360)
Date: Sat, 07 Jul 2018 08:15:43 -0400
Subject: proxy_cache_background_update leads to 200 ms delay
In-Reply-To: <2277498.0kgbyRdg0M@vbart-laptop>
References: <2277498.0kgbyRdg0M@vbart-laptop>
Message-ID: <2083c6b50f0e8ff83882cf501e7fc38d.NginxMailingListEnglish@forum.nginx.org>

Wow, that's it! The delay is gone.

For now I am satisfied that the delay is gone and will read up some more on tcp_nopush.
For the future: is there any information on why the combination of tcp_nopush and proxy_cache_background_update creates the delay, and not the STALE response you get when the backend is down and proxy_cache_background_update is off?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280434,280444#msg-280444

From lucas at lucasrolff.com Sat Jul 7 14:42:13 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sat, 7 Jul 2018 14:42:13 +0000
Subject: proxy_cache_background_update leads to 200 ms delay
In-Reply-To: <2083c6b50f0e8ff83882cf501e7fc38d.NginxMailingListEnglish@forum.nginx.org>
References: <2277498.0kgbyRdg0M@vbart-laptop> <2083c6b50f0e8ff83882cf501e7fc38d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <25594803-F5F7-4A7E-8967-A704FEC5E382@lucasrolff.com>

It's not the combination of tcp_nopush and proxy_cache_background_update that creates this delay.

tcp_nopush (TCP_CORK in Linux) delays sending packets for up to 200 ms, or until the packet reaches the defined MTU. proxy_cache_background_update (if I remember correctly) performs the usual checks at the origin to see whether a file changed; since the request performed is (often) smaller than the MTU, you end up having to wait out the 200 ms delay.

So disabling tcp_nopush also disables the 200 ms delay.
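Which suggests a narrower workaround than switching it off globally — a sketch (the layout is hypothetical; note that tcp_nopush only takes effect together with sendfile):

```
http {
    sendfile   on;
    tcp_nopush on;        # keep corking for everything else
    server {
        listen 80;
        tcp_nopush off;   # the vhost doing background cache updates
    }
}
```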
On 07/07/2018, 14.15, "nginx on behalf of stephan13360" wrote: [...]

From nginx-forum at forum.nginx.org Sat Jul 7 15:38:40 2018
From: nginx-forum at forum.nginx.org (shiz)
Date: Sat, 07 Jul 2018 11:38:40 -0400
Subject: SSL errors, verbosity level
Message-ID:

Hi,

I see these messages in my error logs daily.

```
2018/07/07 08:01:32 [crit] 31935#31935: *342781 SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: 173.208.91.177, server: 0.0.0.0:443
2018/07/07 08:06:24 [crit] 31939#31939: *343099 SSL_do_handshake() failed (SSL: error:1420918C:SSL routines:tls_early_post_process_client_hello:version too low) while SSL handshaking, client: 141.212.122.16, server: 0.0.0.0:443
```

Is there a way to increase verbosity, i.e. which protocol is unsupported? Which version is too low?

Nginx 1.15.1, supporting TLSv1.2, TLSv1.3 draft 23, OpenSSL-1.1.1-pre2.

Not sure if it could be done within nginx; maybe the OpenSSL source has to be edited?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280446#msg-280446

From vbart at nginx.com Mon Jul 9 12:16:16 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 09 Jul 2018 15:16:16 +0300
Subject: proxy_cache_background_update leads to 200 ms delay
In-Reply-To: <2083c6b50f0e8ff83882cf501e7fc38d.NginxMailingListEnglish@forum.nginx.org>
References: <2277498.0kgbyRdg0M@vbart-laptop> <2083c6b50f0e8ff83882cf501e7fc38d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1653406.ohUx97vUSs@vbart-workstation>

On Saturday 07 July 2018 08:15:43 stephan13360 wrote:
> Wow, that's it! The delay is gone.
>
> For now I am satisfied that the delay is gone and will read up some more
> on tcp_nopush.
>
> For the future: is there any information on why the combination of
> tcp_nopush and proxy_cache_background_update creates the delay, and not
> the STALE response you get when the backend is down and
> proxy_cache_background_update is off?

When a client connection is closing or switching to the keepalive state, the response body left in the socket is explicitly pushed. In the current implementation the client connection is kept "busy" during a background update, and the last chunk may rest in the socket until the kernel sends it.

wbr, Valentin V. Bartenev

From gfrankliu at gmail.com Tue Jul 10 00:16:36 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 9 Jul 2018 17:16:36 -0700
Subject: keepalive and 5xx
Message-ID:

Does nginx automatically disconnect a keepalive connection if a 5xx response code is generated?

From kmark at palantir.com Tue Jul 10 10:46:22 2018
From: kmark at palantir.com (Kevin Mark)
Date: Tue, 10 Jul 2018 10:46:22 +0000
Subject: nginx on Windows
Message-ID:

Hello all,

I was wondering if there was any up-to-date documentation about running nginx on Windows in a production environment. The official documentation here (https://nginx.org/en/docs/windows.html) notes some pretty serious limitations, but its last update was 18 months ago and its last major revision was in 2012. For instance, is the 1024-connection limit still around, and would nginx on Windows still be characterized as beta software?

Thanks,
Kevin Mark

From maxim at nginx.com Tue Jul 10 11:00:08 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 10 Jul 2018 14:00:08 +0300
Subject: nginx on Windows
In-Reply-To:
References:
Message-ID:

Hi Kevin,

On 10/07/2018 13:46, Kevin Mark wrote:
> I was wondering if there was any up-to-date documentation about
> running nginx on Windows in a production environment. [...]

Yes, this limit is still present.

I would say we have experimental support, and this experiment almost stalled years ago. At the same time, we have heard about many cases where people use nginx for Windows in production[*].

* Definition of "production" may vary significantly.

--
Maxim Konovalov

From nginx-forum at forum.nginx.org Tue Jul 10 11:06:19 2018
From: nginx-forum at forum.nginx.org (rudyxie)
Date: Tue, 10 Jul 2018 07:06:19 -0400
Subject: TLS 1.3
In-Reply-To: <20180411174234.GL77253@mdounin.ru>
References: <20180411174234.GL77253@mdounin.ru>
Message-ID: <9790f6caddf8adef91f41e57023b9f08.NginxMailingListEnglish@forum.nginx.org>

Does nginx 1.15.x support the 0-RTT early data of TLS 1.3? I read the change log of nginx 1.15.x and did not find it.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279404,280472#msg-280472

From nginx-forum at forum.nginx.org Tue Jul 10 11:09:46 2018
From: nginx-forum at forum.nginx.org (rudyxie)
Date: Tue, 10 Jul 2018 07:09:46 -0400
Subject: Does nginx 1.15.x support the 0-RTT feature of TLSv1.3?
Message-ID:

Does nginx 1.15.x support the 0-RTT early data of TLS 1.3? I read the change logs of nginx 1.15.x and did not find it.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280473,280473#msg-280473

From nginx-forum at forum.nginx.org Tue Jul 10 11:42:19 2018
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Tue, 10 Jul 2018 07:42:19 -0400
Subject: nginx on Windows
In-Reply-To:
References:
Message-ID: <11383c3ac3131c346480f3bcce00e393.NginxMailingListEnglish@forum.nginx.org>

Have a look here: http://nginx-win.ecsds.eu/

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280470,280474#msg-280474

From nginx-forum at forum.nginx.org Tue Jul 10 12:07:03 2018
From: nginx-forum at forum.nginx.org (jstephens)
Date: Tue, 10 Jul 2018 08:07:03 -0400
Subject: security scores and TLS config
Message-ID:

Hello,

With some experience in the F5 and NetScaler world but still new to Nginx, I have been tasked with migrating 50+ public URLs to NGINX Plus configured as a keepalived HA pair. What would be the best SSL configuration to achieve the highest security scores from Qualys SSL Labs or BitSight? Can someone recommend or share a current best SSL config?

Also, as for the overall design, what is optimal in such a case?
1. A single keepalived IP with server_name directives, or a separate IP for each URL? If separate IPs, do I have to list them in the keepalived config?
2. Is a single SSL config file possible, to share the same encryption settings across all URLs?

Obviously my goal here is to achieve high availability with A+ security scores. Any help will be highly appreciated.

Jay

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280475,280475#msg-280475

From rgacote at appropriatesolutions.com Tue Jul 10 13:48:47 2018
From: rgacote at appropriatesolutions.com (Ray Cote)
Date: Tue, 10 Jul 2018 09:48:47 -0400
Subject: security scores and TLS config
In-Reply-To:
References:
Message-ID:

On Tue, Jul 10, 2018 at 8:07 AM, jstephens wrote:

> What would be the best SSL configuration to achieve the highest
> security scores from Qualys SSL Labs or BitSight? Can someone recommend
> or share a current best SSL config?

Recommend you start with the Mozilla TLS configuration page. Mozilla Modern is the way to go (assuming all your clients use new enough browsers).

https://wiki.mozilla.org/Security/Server_Side_TLS

--Ray
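For concreteness, a server block following the "Modern" profile looked roughly like this at the time (a sketch — the paths are placeholders, and the generator should be consulted for current protocol and cipher values):

```
server {
    listen 443 ssl http2;
    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    add_header Strict-Transport-Security "max-age=63072000" always;
}
```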
From maxim at nginx.com Tue Jul 10 15:56:21 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 10 Jul 2018 18:56:21 +0300
Subject: security scores and TLS config
In-Reply-To:
References:
Message-ID: <3d37dcbf-8192-6295-8f1f-f084f9ecd237@nginx.com>

Hi Jay,

On 10/07/2018 15:07, jstephens wrote: [...]

I'd suggest reaching out to nginx-plus support with your inquiry.

Thanks,
Maxim

--
Maxim Konovalov

From nginx-forum at forum.nginx.org Tue Jul 10 16:40:45 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 10 Jul 2018 12:40:45 -0400
Subject: nginx on Windows
In-Reply-To: <11383c3ac3131c346480f3bcce00e393.NginxMailingListEnglish@forum.nginx.org>
References: <11383c3ac3131c346480f3bcce00e393.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <453bde0befee51960614afe7f2a006b1.NginxMailingListEnglish@forum.nginx.org>

itpp2012 Wrote:
-------------------------------------------------------
> Have a look here http://nginx-win.ecsds.eu/

The best Nginx for Windows builds around :) I love itpp2012's work. He also fixed the concurrent connection limitations and continuously adds modules like Lua for Nginx into his builds, which are stable and production-ready. Nginx.org should make him the maintainer and dev for Windows builds, in my opinion.

OpenResty, who built Lua for Nginx, has Windows builds of Nginx, but I don't know if they have fixed the concurrent connection limitations. https://openresty.org/en/download.html

These are the only two I am aware of; I would recommend itpp2012's builds, mostly because I have used, run, and tested them.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280470,280479#msg-280479

From mdounin at mdounin.ru Tue Jul 10 16:51:57 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 10 Jul 2018 19:51:57 +0300
Subject: TLS 1.3
In-Reply-To: <9790f6caddf8adef91f41e57023b9f08.NginxMailingListEnglish@forum.nginx.org>
References: <20180411174234.GL77253@mdounin.ru> <9790f6caddf8adef91f41e57023b9f08.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180710165157.GD56558@mdounin.ru>

Hello!

On Tue, Jul 10, 2018 at 07:06:19AM -0400, rudyxie wrote:

> Does nginx 1.15.x support the 0-RTT early data of TLS 1.3? I read the
> change log of nginx 1.15.x and did not find it.

Development of the 1.15.x branch is in progress. Support for 0-RTT mode, aka early data, is still planned.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Tue Jul 10 16:52:39 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 10 Jul 2018 19:52:39 +0300
Subject: Does nginx 1.15.x support the 0-RTT feature of TLSv1.3?
In-Reply-To:
References:
Message-ID: <20180710165239.GE56558@mdounin.ru>

Hello!

On Tue, Jul 10, 2018 at 07:09:46AM -0400, rudyxie wrote:

> Does nginx 1.15.x support the 0-RTT early data of TLS 1.3? I read the
> change logs of nginx 1.15.x and did not find it.

Development of the 1.15.x branch is in progress. Support for 0-RTT mode, aka early data, is still planned.

(There is no real need to post the same question twice. Thank you for understanding.)

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Tue Jul 10 17:04:41 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 10 Jul 2018 20:04:41 +0300
Subject: keepalive and 5xx
In-Reply-To:
References:
Message-ID: <20180710170441.GF56558@mdounin.ru>

Hello!

On Mon, Jul 09, 2018 at 05:16:36PM -0700, Frank Liu wrote:

> Does nginx automatically disconnect a keepalive connection if a 5xx
> response code is generated?

Not really.
Keepalive is automatically switched off when a response with one of the following error codes is generated by nginx itself:

- 400 Bad Request
- 413 Request Entity Too Large
- 414 Request URI Too Large
- 500 Internal Server Error
- 501 Not Implemented

This is because such errors indicate that we might not be able to maintain protocol state properly, and hence we need to close the connection.

Details can be found in the ngx_http_special_response_handler() function, see here:

http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_special_response.c#l428

--
Maxim Dounin
http://mdounin.ru/

From hamidul.islam at veriday.com Tue Jul 10 17:23:49 2018
From: hamidul.islam at veriday.com (Hamidul Islam)
Date: Tue, 10 Jul 2018 13:23:49 -0400
Subject: Block a css file thru NGINX rule
Message-ID:

Hi,

I would like a CSS file (named theme.css) to be blocked when loading a page through nginx. This CSS file is dynamically created every time a new page is loaded. I am wondering if it is possible to block it through an nginx rule? If so, please give an example.

Thanks
Hamidul
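For illustration, one way to do that with a plain nginx rule (a sketch; it assumes theme.css can be matched by name alone):

```
# refuse any request whose URI ends in /theme.css
location ~ /theme\.css$ {
    return 404;
}
```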
From pluknet at nginx.com Tue Jul 10 17:42:24 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Tue, 10 Jul 2018 20:42:24 +0300
Subject: SSL errors, verbosity level
In-Reply-To:
References:
Message-ID: <3F227703-0188-48F3-B731-C82F731C0B0C@nginx.com>

> On 7 Jul 2018, at 18:38, shiz wrote:
>
> I see these messages in my error logs daily.
>
> [...]
>
> Is there a way to increase verbosity, i.e. which protocol is unsupported?
> Which version is too low?

This may be caused by a TLSv1.3 version draft mismatch, as found in the ClientHello supported_versions. You may want to update OpenSSL.

--
Sergey Kandaurov

From nginx-forum at forum.nginx.org Tue Jul 10 18:09:55 2018
From: nginx-forum at forum.nginx.org (shiz)
Date: Tue, 10 Jul 2018 14:09:55 -0400
Subject: SSL errors, verbosity level
In-Reply-To: <3F227703-0188-48F3-B731-C82F731C0B0C@nginx.com>
References: <3F227703-0188-48F3-B731-C82F731C0B0C@nginx.com>
Message-ID: <2b089fe24a7009325fa4aaae5ba84921.NginxMailingListEnglish@forum.nginx.org>

> You may want to update OpenSSL.

Thanks, but I did, and almost zero browsers were able to use draft 26 or 28. Therefore I downgraded OpenSSL from 1.1.1-pre8 to 1.1.1-pre2 (draft 23). Although TLS 1.3 has been finalized, OpenSSL 1.1.1 is still a work in progress.

Tested with the latest Opera, Palemoon, Blackhawk, Vivaldi and Slimjet. I don't use Chrome or Firefox.

Had to disable CT too; it was generating way too many errors from older browsers. It seems that project has been unmaintained for a year.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280486#msg-280486

From nginx-forum at forum.nginx.org Tue Jul 10 23:52:27 2018
From: nginx-forum at forum.nginx.org (jstephens)
Date: Tue, 10 Jul 2018 19:52:27 -0400
Subject: security scores and TLS config
In-Reply-To:
References:
Message-ID:

Thanks Ray, the SSL Configuration Generator looks really good, and a modern config is what I was looking for, I guess.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280475,280487#msg-280487

From gfrankliu at gmail.com Tue Jul 10 23:59:43 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Tue, 10 Jul 2018 16:59:43 -0700
Subject: keepalive and 5xx
In-Reply-To: <20180710170441.GF56558@mdounin.ru>
References: <20180710170441.GF56558@mdounin.ru>
Message-ID:

Hi Maxim,

When you say "Keepalive is automatically switched off...", do you mean nginx will send "Connection: close" as part of the response? What happens if the client doesn't honor that, and keeps sending another request on the existing connection?

You also mentioned "error codes generated by nginx itself", so what happens if nginx is used as a reverse proxy, and the error code is coming from upstream? Will nginx switch off keepalive with the client too?

Thanks!
Frank

On Tue, Jul 10, 2018 at 10:04 AM, Maxim Dounin wrote: [...]

From mdounin at mdounin.ru Wed Jul 11 03:36:11 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Jul 2018 06:36:11 +0300
Subject: keepalive and 5xx
In-Reply-To:
References:
Message-ID: <20180711033611.GI56558@mdounin.ru>

Hello!

On Tue, Jul 10, 2018 at 04:59:43PM -0700, Frank Liu wrote:

> When you say "Keepalive is automatically switched off...", do you mean
> nginx will send "Connection: close" as part of the response? What happens
> if the client doesn't honor that, and keeps sending another request on
> the existing connection?
>
> Nginx 1.15.1, supporting TLSv1.2, TLSv1.3 draft 23, OpenSSL-1.1.1-pre2
>
> Not sure if it could be done within nginx, maybe OpenSSL source has to be
> edited?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280446#msg-280446
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mdounin at mdounin.ru Wed Jul 11 03:36:11 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Jul 2018 06:36:11 +0300 Subject: keepalive and 5xx In-Reply-To: References: <20180710170441.GF56558@mdounin.ru> Message-ID: <20180711033611.GI56558@mdounin.ru> Hello!

On Tue, Jul 10, 2018 at 04:59:43PM -0700, Frank Liu wrote:

> When you say "Keepalive is automatically switched off...", do you mean
> nginx will send "Connection: close" as part of the response? What happens
> if the client doesn't honor that, and keeps sending another request on the
> existing connection?

This means that nginx will not use keepalive for a particular connection. That is, it will send "Connection: close" and will close the connection. If there are any additional requests in the connection (either pipelined, or due to the client ignoring the "Connection: close" signal), the client will have to retransmit them. Much like in any other case when keepalive is switched off.

> You also mentioned "error codes is generated by nginx itself", so what
> happens if nginx is used as a reverse proxy, and the error code is coming
> from upstream? Will nginx switch off keepalive with the client too?

No (and that's why I wrote "generated by nginx itself"). As long as the code is in an upstream response, it is simply passed to the client. -- Maxim Dounin http://mdounin.ru/

From mailinglist at unix-solution.de Wed Jul 11 08:56:50 2018 From: mailinglist at unix-solution.de (basti) Date: Wed, 11 Jul 2018 10:56:50 +0200 Subject: Server (Proxy) Access by IP and Forward real IP to Proxy Logs Message-ID: <9b44b0a8-d6cc-655d-fe84-11166e49af3e@unix-solution.de> Hello, I have the following config:

Frontend (with IP x.y.1.1) -> Proxy

In the proxy settings I have "allow x.y.1.1" and this works very well. Now I want to see, in the proxy logs, the IP of the client that accessed the frontend, so I added something like:

# to get the real IP of the accessing client
set_real_ip_from x.y.1.1/32;
real_ip_header X-Real-IP;

Now I see the real IP of the client that accessed the frontend, but I no longer get access to the proxy. I have also tried to place the "allow x.y.1.1" before the set_real_ip_from, but this does not help. Is there a way to get this working? In other words, to allow connections from the frontend and print client IPs to the log (without iptables)? Best Regards,

From nginx-forum at forum.nginx.org Wed Jul 11 13:18:54 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 11 Jul 2018 09:18:54 -0400 Subject: SSL errors, verbosity level In-Reply-To: References: Message-ID: <896f584943acada1bfd90cc3f1183a5f.NginxMailingListEnglish@forum.nginx.org>

> Those unsupported ssl version messages should be in "info" level

That is a very useful patch, many thanks Frank. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280496#msg-280496

From scott.callum at gmail.com Wed Jul 11 13:53:15 2018 From: scott.callum at gmail.com (Callum Scott) Date: Wed, 11 Jul 2018 14:53:15 +0100 Subject: Logfile formatting Message-ID: I'm currently looking at swapping out some of our Apache web servers for Nginx to act as a reverse proxy.
One of my issues is that I need, at least in the short term, for the log format to remain the same. I have two issues that are cropping up.

The first is that with my current configuration I am getting the following error if I try to start nginx:

nginx: [emerg] unknown "bytes_received" variable

I am using the latest version available in the nginx repo:

# nginx -V
nginx version: nginx/1.14.0
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'

Secondly, I am unable to find an equivalent for the:

%f, %R or %l Apache logs

This is the log file I am trying to replicate:

LogFormat "%v,%V,%h,%l,%u,%t,\"%m\",\"%U\",\"%q\",\"%H\",\"%{UNIQUE_ID}e\",%>s,\"%{Referer}i\",\"%{User-Agent}i\",\"%{SSL_PROTOCOL}x\",\"%{SSL_CIPHER}x\",%p,%D,%I,%O,%B,\"%R\",\"%f\"" vhostcombined

and what I have so far:

log_format proxylog '$server_name,$hostname,$remote_addr,-,$remote_user,[$time_local],'
    '"$request_method","$request_uri","$query_string",'
    '"$server_protocol","$request_id","$status","$http_referer,"'
    '"$http_user_agent","$ssl_protocol","$ssl_cipher",$server_port,'
    '$request_time,$bytes_received,$bytes_sent,"proxy-server"';

Any pointers for the above issues would be gratefully received. -- Callum -------------- next part -------------- An HTML attachment was scrubbed... URL:

From iippolitov at nginx.com Wed Jul 11 13:56:47 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Wed, 11 Jul 2018 16:56:47 +0300 Subject: Logfile formatting In-Reply-To: References: Message-ID: <98b35510-946f-6f24-c6b7-0fc6251b31d5@nginx.com> Hello, Scott. I think you can try $request_length. Here is a convenient link to help you with your task: http://nginx.org/en/docs/varindex.html Most of the time you can find a proper variable there.

On 11.07.2018 16:53, Callum Scott wrote:
> I'm currently looking at swapping out some of our Apache web servers
> for Nginx to act as a reverse proxy.
>
> One of my issues is that I need, at least in the short term, for the
> log format to remain the same.
>
> I have two issues that are cropping up.
>
> The first is that with my current configuration I am getting the
> following error if I try to start nginx:
>
> nginx: [emerg] unknown "bytes_received" variable
>
> I am using the latest version available in the nginx repo:
>
> # nginx -V
> nginx version: nginx/1.14.0
> built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
> built with OpenSSL 1.0.1e-fips 11 Feb 2013
> TLS SNI support enabled
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --modules-path=/usr/lib64/nginx/modules
> --conf-path=/etc/nginx/nginx.conf
> --error-log-path=/var/log/nginx/error.log
> --http-log-path=/var/log/nginx/access.log
> --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
> --http-client-body-temp-path=/var/cache/nginx/client_temp
> --http-proxy-temp-path=/var/cache/nginx/proxy_temp
> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
> --group=nginx --with-compat --with-file-aio --with-threads
> --with-http_addition_module --with-http_auth_request_module
> --with-http_dav_module --with-http_flv_module
> --with-http_gunzip_module --with-http_gzip_static_module
> --with-http_mp4_module --with-http_random_index_module
> --with-http_realip_module --with-http_secure_link_module
> --with-http_slice_module --with-http_ssl_module
> --with-http_stub_status_module --with-http_sub_module
> --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream
> --with-stream_realip_module --with-stream_ssl_module
> --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
> --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
> --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'
>
> Secondly, I am unable to find an equivalent for the:
>
> %f, %R or %l Apache logs
>
> This is the log file I am trying to replicate:
> LogFormat
> "%v,%V,%h,%l,%u,%t,\"%m\",\"%U\",\"%q\",\"%H\",\"%{UNIQUE_ID}e\",%>s,\"%{Referer}i\",\"%{User-Agent}i\",\"%{SSL_PROTOCOL}x\",\"%{SSL_CIPHER}x\",%p,%D,%I,%O,%B,\"%R\",\"%f\""
> vhostcombined
>
> and what I have so far:
>
> log_format proxylog
> '$server_name,$hostname,$remote_addr,-,$remote_user,[$time_local],'
> '"$request_method","$request_uri","$query_string",'
> '"$server_protocol","$request_id","$status","$http_referer,"'
> '"$http_user_agent","$ssl_protocol","$ssl_cipher",$server_port,'
> '$request_time,$bytes_received,$bytes_sent,"proxy-server"';
>
> Any pointers for the above issues would be gratefully received.
> --
> Callum
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Wed Jul 11 14:48:01 2018 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 11 Jul 2018 10:48:01 -0400 Subject: Nginx Lua Caching and removing unwanted Arguments for higher HIT ratio issue Message-ID: <1e8c1695cd843780162868ade985d5e3.NginxMailingListEnglish@forum.nginx.org> So my issue is mostly directed towards Yichun Zhang (agentzh), if he is still active here. I hope so. My problem is that I am trying to increase my cache HIT ratio by removing arguments from the URL that are fake / unwanted, and to order the arguments alphabetically (the same order every time) for a higher cache HIT ratio. Here is my code.
location ~ \.php$ { ##within the PHP location block

## Create fastcgi_param vars
# remove duplicate values like index.php?variable=value&&&&&etc from URL's for higher cache hit ratio
set_by_lua_block $cache_request_uri {
    local function fix_url(s,C)
        for c in C:gmatch(".") do s=s:gsub(c.."+",c) end
        return s
    end
    return string.lower(fix_url(ngx.var.request_uri, "+/=&?;~*@$,:"))
}

#TODO : Order request body variables so they are the same order for higher cache hit ratio
set_by_lua_block $cache_request_body {
    return ngx.var.request_body
}

# Order argument variables for higher cache hit ratio and remove any custom defined arguments that users may be using to bypass cache in an attempt at DoS.
set_by_lua_block $cache_request_uri {
    ngx.log(ngx.ERR, "before error: ", ngx.var.request_uri)
    ngx.log(ngx.ERR, "test: ", ngx.var.uri)

    local function has_value (tab, val)
        for index, value in ipairs(tab) do
            -- We grab the first index of our sub-table instead
            if string.lower(value) == string.lower(val) then
                return true
            end
        end
        return false
    end

    --Anti-DDoS and remove arguments from URLs
    local args = ngx.req.get_uri_args()
    local remove_args_table = { --table of blacklisted arguments to remove from the url to stop DoS and increase cache HIT ratio
        "rnd",
        "rand",
        "random",
        "ddos",
        "dddddooooossss",
        "randomz",
    }
    for key,value in pairs(args) do
        if has_value(remove_args_table, value) then
            --print 'Yep'
            --print(value .. " ")
            ngx.log(ngx.ERR, "error: ", key .. " | " .. value)
            args[key] = nil --remove the argument from the args table
        else
            --print 'Nope'
        end
    end
    --ngx.req.set_uri_args(args)
    --for k,v in pairs(args) do --[[print(k,v)]] ngx.log(ngx.ERR, "error: ", k .. " | " .. v) end
    ngx.log(ngx.ERR, "after error: ", ngx.var.request_uri)
    --return ngx.req.set_uri_args(args)
    -- note: concatenating the args table directly (ngx.var.uri .. args) raises a Lua error;
    -- serialize it with ngx.encode_args() instead (a stable ordering is still the TODO above)
    return ngx.var.uri .. "?" .. ngx.encode_args(args) --Anti-DDoS and remove arguments from URLs
}

fastcgi_cache microcache;
fastcgi_cache_key "$scheme$host$cache_request_uri$request_method$cache_request_body";
fastcgi_param REQUEST_URI $cache_request_uri; #need to make sure that web application URI has been modified by Lua

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280500,280500#msg-280500

From gfrankliu at gmail.com Wed Jul 11 16:21:19 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 11 Jul 2018 09:21:19 -0700 Subject: SSL errors, verbosity level In-Reply-To: <896f584943acada1bfd90cc3f1183a5f.NginxMailingListEnglish@forum.nginx.org> References: <896f584943acada1bfd90cc3f1183a5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Glad it works, and thanks Piotr Sikora for the patch! Since you are using newer openssl, you may want to apply this patch: https://nginx.googlesource.com/nginx/+/ec0b8aad6ca3cb37e03d1c06e42f110e4737af1f%5E%21/

On Wed, Jul 11, 2018 at 6:18 AM, shiz wrote:
> > Those unsupported ssl version messages should be in "info" level
>
> That is a very useful patch, many thanks Frank
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280496#msg-280496
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Wed Jul 11 19:03:31 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 11 Jul 2018 15:03:31 -0400 Subject: SSL errors, verbosity level In-Reply-To: References: Message-ID: <87c887d08f2dd869f3d11545ed710bcd.NginxMailingListEnglish@forum.nginx.org>

> Since you are using newer openssl, you may want to apply this patch

I agree, many thanks to Piotr Sikora and to you, Frank! 2nd patch applied as well. My error log is a lot more readable now. I can see those real critical messages without being cluttered by meaningless/unfixable SSL issues. Any chance those are merged into nginx 1.15.2? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280504#msg-280504

From nginx-forum at forum.nginx.org Thu Jul 12 09:48:55 2018 From: nginx-forum at forum.nginx.org (prajos) Date: Thu, 12 Jul 2018 05:48:55 -0400 Subject: Nginx runs out of memory with large value for 'keepalive_requests' Message-ID: <50bea0686b45577cbfa8de3fd5ed813f.NginxMailingListEnglish@forum.nginx.org> Hi all, I'm using nginx as a reverse proxy to a service (A). nginx receives a large number of persistent connections from a single client service (B). Service B sends a lot of requests (2K rps) over these persistent connections.

The amount of memory nginx uses seems to increase as a function of 'keepalive_requests 2147483647'. The memory used keeps rising until the machine runs out of memory (4GB, aws instance), while a smaller 'keepalive_requests 8192' doesn't create the exact problem.

Some additional observations:
When I reload nginx the memory usage comes down and then slowly starts building up.
When I test nginx with a gatling test tool as a client, this behaviour is not observed.
When I use the actual service (B), this behaviour seems to reappear.

I'm curious to know what exactly is happening and how I can fix this issue of high memory usage. My nginx server side configuration looks like:

server {
    listen 443 ssl default_server;
    ...
    ...
    location / {
        # keepalive_timeout 14400s;
        # keepalive_requests 2147483647; ----> over 10 hrs, memory usage goes to 4 GB
        keepalive_timeout 600s;
        keepalive_requests 8192;
        proxy_pass http://ingress;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
    ..
}

Thanks for all the help, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280505,280505#msg-280505

From nginx-forum at forum.nginx.org Thu Jul 12 10:02:30 2018 From: nginx-forum at forum.nginx.org (ayman) Date: Thu, 12 Jul 2018 06:02:30 -0400 Subject: Nginx crashing with image filter and cache enabled In-Reply-To: <20180613122626.GY32137@mdounin.ru> References: <20180613122626.GY32137@mdounin.ru> Message-ID: Hi, I upgraded the GD library on the server, recompiled nginx, and all is good now. Thanks a lot. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280115,280507#msg-280507

From mdounin at mdounin.ru Thu Jul 12 12:43:24 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Jul 2018 15:43:24 +0300 Subject: Nginx runs out of memory with large value for 'keepalive_requests' In-Reply-To: <50bea0686b45577cbfa8de3fd5ed813f.NginxMailingListEnglish@forum.nginx.org> References: <50bea0686b45577cbfa8de3fd5ed813f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180712124323.GN56558@mdounin.ru> Hello!

On Thu, Jul 12, 2018 at 05:48:55AM -0400, prajos wrote:

> Hi all, I'm using nginx as a reverse proxy to a service (A). nginx receives a large
> number of persistent connections from a single client service (B).
> Service B sends a lot of requests (2K rps) over these persistent
> connections.
>
> The amount of memory nginx uses seems to increase as a function of
> 'keepalive_requests 2147483647'. The memory used keeps rising until the
> machine runs out of memory (4GB, aws instance), while a smaller
> 'keepalive_requests 8192' doesn't create the exact problem.
>
> Some additional observations:
> When I reload nginx the memory usage comes down and then slowly starts
> building up.
> When I test nginx with a gatling test tool as a client, this behaviour is
> not observed.
> When I use the actual service (B), this behaviour seems to reappear.
>
> I'm curious to know what exactly is happening and how I can fix this issue
> of high memory usage.

There may be allocations from a connection memory pool, and these allocations are freed only on connection close. Trying to use "keepalive_requests 2147483647" is expected to result in memory usage growth as long as connections are never closed. Tuning various settings might help to eliminate connection-related allocations. In particular, if you've already tuned some settings from their default values, switching back to defaults might be a good starting point. Though in general it is a bad idea to never close keepalive connections; the number of requests is limited for a reason. -- Maxim Dounin http://mdounin.ru/

From r1ch+nginx at teamliquid.net Fri Jul 13 11:13:49 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 13 Jul 2018 13:13:49 +0200 Subject: SSL errors, verbosity level In-Reply-To: <87c887d08f2dd869f3d11545ed710bcd.NginxMailingListEnglish@forum.nginx.org> References: <87c887d08f2dd869f3d11545ed710bcd.NginxMailingListEnglish@forum.nginx.org> Message-ID: I'd also like to voice support for having this patch upstream. I've been using a similar patch ever since requiring TLS 1.2, as the error log is filled with "critical" version errors otherwise.

On Wed, Jul 11, 2018 at 9:03 PM shiz wrote:
> > Since you are using newer openssl, you may want to apply this patch
>
> I agree, many thanks to Piotr Sikora and to you, Frank!
>
> 2nd patch applied as well.
>
> My error log is a lot more readable now. I can see those real critical
> messages without being cluttered by meaningless/unfixable SSL issues.
>
> Any chance those are merged into nginx 1.15.2?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280504#msg-280504
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From vbart at nginx.com Fri Jul 13 13:28:45 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 13 Jul 2018 16:28:45 +0300 Subject: Unit 1.3 release Message-ID: <2876808.9FVBWU8a2a@vbart-workstation> Hello, I'm glad to announce a new release of NGINX Unit.

Changes with Unit 1.3                                        13 Jul 2018

*) Change: UTF-8 characters are now allowed in request header field values.
*) Feature: configuration of the request body size limit.
*) Feature: configuration of various HTTP connection timeouts.
*) Feature: Ruby module now automatically uses Bundler where possible.
*) Feature: http.Flusher interface in Go module.
*) Bugfix: various issues in HTTP connection errors handling.
*) Bugfix: requests with body data might be handled incorrectly in PHP module.
*) Bugfix: individual PHP configuration options specified via control API were reset to previous values after the first request in the application process.

Here's an example configuration with new parameters:

{
    "settings": {
        "http": {
            "header_read_timeout": 30,
            "body_read_timeout": 30,
            "send_timeout": 30,
            "idle_timeout": 180,
            "max_body_size": 8388608
        }
    },

    "listeners": {
        "127.0.0.1:8034": {
            "application": "mercurial"
        }
    },

    "applications": {
        "mercurial": {
            "type": "python 2",
            "module": "hgweb",
            "path": "/data/hg"
        }
    }
}

All timeout values are specified in seconds. The "max_body_size" value is specified in bytes. Please note that the parameters of the "http" object in this example are set to their default values. So, there's no need to set them explicitly if you are happy with the values above.

Binary Linux packages and Docker images are available here:

- Packages: https://unit.nginx.org/installation/#precompiled-packages
- Docker: https://hub.docker.com/r/nginx/unit/tags/

Also, please follow our blog posts to learn more about new features in the recent versions of Unit:

- https://www.nginx.com/blog/tag/nginx-unit/

wbr, Valentin V. Bartenev

From nginx-forum at forum.nginx.org Mon Jul 16 08:13:55 2018 From: nginx-forum at forum.nginx.org (kirkboyd) Date: Mon, 16 Jul 2018 04:13:55 -0400 Subject: env TZ : timezone setting Message-ID: Has this been addressed with a new release? https://forum.nginx.org/read.php?2,214494,214536#msg-214536 "env TZ=Asia/Shanghai". It still does not work for me with nginx 1.13.x on CentOS 7. thanks, Kamalkishor. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280527,280527#msg-280527

From tcurdt at vafer.org Mon Jul 16 14:38:38 2018 From: tcurdt at vafer.org (Torsten Curdt) Date: Mon, 16 Jul 2018 16:38:38 +0200 Subject: redirect based on file content Message-ID: I want to have files in the filesystem that specify the response code and redirect location instead of relying on the nginx configuration for it.

Imagine a file foo.ext looking like:

301 https://some.host.com/foo.bla

On a GET of foo.ext it should result in a 301 to https://some.host.com/foo.bla

So far I haven't found a module for this. I presume it should not be too terribly hard to write a module for it, but maybe I missed something? So I thought I'd rather double check if there is an easier route. Any thoughts? cheers, Torsten -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Mon Jul 16 16:36:09 2018 From: nginx-forum at forum.nginx.org (itpp2012) Date: Mon, 16 Jul 2018 12:36:09 -0400 Subject: redirect based on file content In-Reply-To: References: Message-ID: This can be done with Lua, but each disk access is a blocking call to nginx; your design should include caching of such calls (access the disk once every 100 calls or after 60 seconds). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280529,280533#msg-280533

From xeioex at nginx.com Mon Jul 16 18:09:32 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 16 Jul 2018 21:09:32 +0300 Subject: redirect based on file content In-Reply-To: References: Message-ID: <23d8b3fd-edab-2fa6-85ab-6250eb4a996c@nginx.com> You can try to use njs here: http://nginx.org/en/docs/http/ngx_http_js_module.html http://nginx.org/en/docs/njs/njs_api.html#http - more about the r object.

nginx.conf:

http {
    js_include http.njs;

    ...
    server {
        listen 9000;

        location / {
            js_content redirect;
        }
    }
}

http.njs:

function redirect(r) {
    var fs = require('fs');
    var body = fs.readFileSync('/redirects/' + r.uri);
    // trim() added here: it drops a trailing newline from the file,
    // which would otherwise end up in the redirect location
    var parts = body.trim().split(' ');
    var code = Number(parts[0]);
    var uri = parts[1];

    r.return(code, uri);
}

On 16.07.2018 17:38, Torsten Curdt wrote:
> I want to have files in the filesystem that specify the response code
> and redirect location instead of relying on the nginx configuration for it.
>
> Imagine a file foo.ext looking like:
>
> 301 https://some.host.com/foo.bla
>
> On a GET of foo.ext it should result in a 301 to
> https://some.host.com/foo.bla
>
> So far I haven't found a module for this. I presume it should not be too
> terribly hard to write a module for it but maybe I missed something? So
> I thought I'd rather double check if there is an easier route.
>
> Any thoughts?
>
> cheers,
> Torsten
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From gfrankliu at gmail.com Mon Jul 16 22:59:30 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 16 Jul 2018 15:59:30 -0700 Subject: SSL errors, verbosity level In-Reply-To: References: <87c887d08f2dd869f3d11545ed710bcd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks Maxim, and those two patches are now merged upstream:

http://mailman.nginx.org/pipermail/nginx-devel/2018-July/011287.html
http://mailman.nginx.org/pipermail/nginx-devel/2018-July/011288.html

On Fri, Jul 13, 2018 at 4:13 AM, Richard Stanway wrote:
> I'd also like to voice support for having this patch upstream. I've been
> using a similar patch ever since requiring TLS 1.2 as the error log is
> filled with "critical" version errors otherwise.
>
> On Wed, Jul 11, 2018 at 9:03 PM shiz wrote:
>> > Since you are using newer openssl, you may want to apply this patch
>>
>> I agree, many thanks to Piotr Sikora and to you, Frank!
>>
>> 2nd patch applied as well.
>>
>> My error log is a lot more readable now. I can see those real critical
>> messages without being cluttered by meaningless/unfixable SSL issues.
>>
>> Any chance those are merged into nginx 1.15.2?
>>
>> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280446,280504#msg-280504
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Tue Jul 17 00:27:07 2018 From: nginx-forum at forum.nginx.org (bwmetcalf@gmail.com) Date: Mon, 16 Jul 2018 20:27:07 -0400 Subject: UDP load balancing and ephemeral ports Message-ID: <9b70668980a7c900fa773f31a0528195.NginxMailingListEnglish@forum.nginx.org> Hello, A couple of questions regarding UDP load balancing. If a UDP listener is configured to expect a response from its upstream nodes, is it possible to have another IP outside of the pool of upstream nodes send a response to the ephemeral port where nginx is expecting a response? I'm pretty sure the answer is no and the response has to come from the IP where the request was forwarded, but I want to verify. We have a use case where another part of our backend system could possibly send that response if coded to do so, but I'm pretty sure this simply will not work.
Secondly, how long will nginx keep the ephemeral port open waiting for a response from the upstream node where the request is sent, and is this configurable? It looks like proxy_responses might be helpful in quickly terminating a session after the desired number of responses are received? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280540,280540#msg-280540

From nginx-forum at forum.nginx.org Tue Jul 17 01:24:58 2018 From: nginx-forum at forum.nginx.org (ashuai) Date: Mon, 16 Jul 2018 21:24:58 -0400 Subject: Dynamic module is not binary compatible Message-ID: Hello! I want to add a dynamic module to nginx, but I got "module is not binary compatible".

Environment:
Ubuntu 16.04
Nginx 1.12.1 (apt-get)

Nginx's own build parameters:

nginx -V
nginx version: nginx/1.12.1
built with OpenSSL 1.0.2g 1 Mar 2016
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/ngx_http_substitutions_filter_module

I downloaded nginx-1.12.1.tar.gz and https://github.com/leev/ngx_http_geoip2_module.git and then unzipped them.

$ ./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module
--with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=../ngx_http_geoip2_module-master

$ make modules && sudo cp objs/ngx_http_geoip2_module.so /usr/share/nginx/modules

Then I configured the nginx load_module directive and tested:

nginx: [emerg] module "/usr/share/nginx/modules/ngx_http_geoip2_module.so" is not binary compatible in /etc/nginx/nginx.conf

Can someone help me? Thank you in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280541,280541#msg-280541

From nginx-forum at forum.nginx.org Tue Jul 17 05:36:33 2018 From: nginx-forum at forum.nginx.org (ashuai) Date: Tue, 17 Jul 2018 01:36:33 -0400 Subject: Dynamic module is not binary compatible In-Reply-To: References: Message-ID: <3fb6dd27ca9fb9cb2b6b9cf01ee0fee1.NginxMailingListEnglish@forum.nginx.org> I found a solution: https://github.com/apache/incubator-pagespeed-ngx/issues/1440#issuecomment-315520779 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280541,280542#msg-280542

From mdounin at mdounin.ru Tue Jul 17 12:40:49 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jul 2018 15:40:49 +0300 Subject: UDP load balancing and ephemeral ports In-Reply-To: <9b70668980a7c900fa773f31a0528195.NginxMailingListEnglish@forum.nginx.org> References: <9b70668980a7c900fa773f31a0528195.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180717124049.GA56558@mdounin.ru> Hello!

On Mon, Jul 16, 2018 at 08:27:07PM -0400, bwmetcalf at gmail.com wrote:

> A couple of questions regarding UDP load balancing. If a UDP listener is
> configured to expect a response from its upstream nodes, is it possible to
> have another IP outside of the pool of upstream nodes send a response to the
> ephemeral port where nginx is expecting a response? I'm pretty sure the
> answer is no and the response has to come from the IP where the request was
> forwarded, but I want to verify. We have a use case where another part of
> our backend system could possibly send that response if coded to do so, but
> I'm pretty sure this simply will not work.

You are right, this won't work.

> Secondly, how long will nginx keep the ephemeral port open waiting for a
> response from the upstream node where the request is sent, and is this
> configurable? It looks like proxy_responses might be helpful in quickly
> terminating a session after the desired number of responses are received?

The port is closed either on proxy_timeout, or per proxy_responses. Details can be found here:

http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout
http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue Jul 17 14:13:35 2018 From: nginx-forum at forum.nginx.org (bwmetcalf@gmail.com) Date: Tue, 17 Jul 2018 10:13:35 -0400 Subject: UDP load balancing and ephemeral ports In-Reply-To: <20180717124049.GA56558@mdounin.ru> References: <20180717124049.GA56558@mdounin.ru> Message-ID: Cool. Thanks. Slightly related... given that the proxy_timeout is 10m by default, can ephemeral ports on the backend be shared by different clients making requests to nginx?
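For reference, the two directives Maxim points to can be sketched in a minimal stream block like the one below; the upstream address, listen port, and timeout values are made-up placeholders, not anything from this thread:

```
stream {
    upstream dns_backend {
        server 192.0.2.10:53;
    }

    server {
        listen 53 udp;
        proxy_pass dns_backend;
        proxy_responses 1;   # close the UDP session after one reply datagram
        proxy_timeout 5s;    # or when no data is transmitted for 5 seconds
    }
}
```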
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280540,280552#msg-280552

From mdounin at mdounin.ru Tue Jul 17 14:22:12 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jul 2018 17:22:12 +0300 Subject: UDP load balancing and ephemeral ports In-Reply-To: References: <20180717124049.GA56558@mdounin.ru> Message-ID: <20180717142212.GD56558@mdounin.ru> Hello!

On Tue, Jul 17, 2018 at 10:13:35AM -0400, bwmetcalf at gmail.com wrote:

> Cool. Thanks. Slightly related... given that the proxy_timeout is 10m by
> default, can ephemeral ports on the backend be shared by different clients
> making requests to nginx?

No. The port is bound to a session with a specific client, much like with TCP proxying. -- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue Jul 17 17:08:13 2018 From: nginx-forum at forum.nginx.org (jarstewa) Date: Tue, 17 Jul 2018 13:08:13 -0400 Subject: limit_req applied to upstream auth_request requests? Message-ID: <396c9f96d62fb272acfd023c64072cf1.NginxMailingListEnglish@forum.nginx.org> Hi, I currently have an nginx configuration that uses the limit_req directive to throttle upstream content requests. Now I'm trying to add similar rate limiting for auth requests, but I haven't been able to get the auth throttle to kick in during testing (whereas the content throttle works as expected). Is there some known limitation of using limit_req against auth_request requests, or do I simply have a problem in my configuration? Thank you.

http {
    map $request_uri $guid {
        default "unknown";
        ~^/out/(?P<id>.+?)/.+$ $id;
    }

    map $http_x_forwarded_for $last_client_ip {
        default $http_x_forwarded_for;
        ~,\s*(?P<last_ip>[\.\d]+?)\s*$ $last_ip;
    }

    limit_req_zone $guid zone=content:20m rate=500r/s;
    limit_req_zone $guid zone=auth:20m rate=100r/s;

    server {
        location /out/ {
            auth_request /auth;

            proxy_pass $upstream_server;
            proxy_cache content_cache;
            set $cache_key "${request_path}";
            proxy_cache_key $cache_key;
            proxy_cache_valid 200 301 302 10s;

            #Throttling works here <---
            limit_req zone=content burst=50 nodelay;
            limit_req_status 429;
        }

        location /auth {
            internal;
            proxy_pass_request_body off;
            proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip;
            proxy_cache auth_cache;
            set $auth_cache_key "${guid}|${last_client_ip}";
            proxy_cache_key $auth_cache_key;
            proxy_cache_valid 200 301 302 5m;
            proxy_cache_valid 401 403 404 5m;

            #Throttling seems not to work here <---
            limit_req zone=auth burst=50 nodelay;
            limit_req_status 429;
        }
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280554,280554#msg-280554

From Jason.Whittington at equifax.com Tue Jul 17 17:12:28 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Tue, 17 Jul 2018 17:12:28 +0000 Subject: How are you managing CI/CD for your nginx configs? Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A433FAA43A@STLEISEXCMBX3.eis.equifax.com> Last year I gave a talk at nginx.conf describing some success we have had using Octopus Deploy as a CD tool for nginx configs. The particular Octopus features that make this good are:

* Octopus gives us a good variable replacement / template system so that I can define a template along with variables for different environments (which really helps me ensure consistency between environments)
* Octopus has good abstractions for grouping servers into roles and environments (So say, DMZ and APP servers living in DEV, TEST, and PROD environments)
* Octopus has a good release model and great visibility of "which release is deployed to which environment".
As in "1.2.2 is in dev, 1.2.1 is in test, 1.1.9 is in production" * Octopus has good security controls so I can control who is allowed to "push the button" to deploy dev->test->prod * Octopus can be driven via APIs and supports scripting (particularly powershell) that can be used to interact with other APIs. When I demoed this at nginx conf I was using mono on the nginx VM to invoke bash scripts. The only problem is that Octopus is a very Windows-centric product. I'm interested in doing this same sort of management using a "linux-centric" toolchain and would be interested to hear what tool chains others might be using. Ansible? Jenkins? Puppet/Chef? The process I describe above is what we do with servers that are relatively long-lived. I would also be curious what toolchains you've found to be effective when servers are more transient. E.g. do you build server images that have the nginx config "baked in"? Or do you stand up the VM and push configs / certs in a secondary deployment step. Thanks! Jason This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. From michael.friscia at yale.edu Tue Jul 17 17:38:23 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Tue, 17 Jul 2018 17:38:23 +0000 Subject: How are you managing CI/CD for your nginx configs? In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A433FAA43A@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A433FAA43A@STLEISEXCMBX3.eis.equifax.com> Message-ID: <053453B4-BA65-43BC-B395-5C13226AB925@yale.edu> I prefer simple setups that start with the question, "What is the least I can do to manage this new thing?" I've worked with all the various things mentioned below like Chef, Puppet, Ansible and many more but scaled everything back to keep it really simple this time. I have 6 Nginx servers (pairs of servers running in USA, Europe and Asia) and we run about 800 DNS names through them as either redirects or full vanity URLs using about 70 configuration files. These are all Linux and all running Nginx-Plus. I put all my configs in a single directory, everything that is .conf is included then I use .inc to include only where needed in specific server or location blocks for the sake of code re-use. This is all in a git repository. Each server has a cron setup to run once a minute with an offset so that the crons run 20 seconds apart and in 2 minutes time, all servers will patch up to the latest version. The cron basically just does a git pull and if an update is needed, it deletes the conf directory in the Nginx install location, copies all the files from Git and then restarts Nginx. Inside the Git repo is a file I call autodeploy.act and that file is a list of the servers by name with a 0 or 1 at the end. When the cron runs, it compares the current server name to the list in this file and if the number is a 1, it runs the update, change the number back to 0, logs that it updated and then commits the changes back to GIT. My actual workflow goes like this. I make changes and commit them to Git using Visual Studio Code with the Nginx color and tag plugins. 
I go onto my dev server and run a deployment bash script that replaces all the conf files, restarts Nginx and then runs a bunch of cURL commands to make sure I get back HTTP 200 responses. Then I know that I didn't break Nginx, and I can then set my local HOSTS file to point to this dev box and perform some tests. If all goes well, I open up the autodeploy.act file and change all the 0's to 1's, and after a couple minutes run a git pull and see that my changes are live, edit my local HOSTS file back to normal and test my settings.

My goal was to install as few things on these servers as possible. Basically they all run Git and Nginx and that's it. I did not want to get involved with Chef, Puppet, Docker or anything too fancy, since the goal is pretty basic: I want to replace some files and restart a service. Bash and cron had all I needed, so I decided to make this as simple as possible. I'm all for using these other tools, but I don't like to force tools on a problem that can be solved with everything built into the operating system. I mostly did not want to then have to apply patches, updates and all that to yet another product.

In any event, I hope someone might find this approach useful.

___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu

On 7/17/18, 1:12 PM, "nginx on behalf of Jason Whittington" wrote:

    Last year I gave a talk at nginx.conf describing some success we have had using Octopus Deploy as a CD tool for nginx configs. The particular Octopus features that make this good are:

    * Octopus gives us a good variable replacement / template system so that I can define a template along with variables for different environments (which really helps me ensure consistency between environments)
    * Octopus has good abstractions for grouping servers into roles and environments (So say, DMZ and APP servers living in DEV, TEST, and PROD environments)
    * Octopus has a good release model and great visibility of "which release is deployed to which environment". As in "1.2.2 is in dev, 1.2.1 is in test, 1.1.9 is in production"
    * Octopus has good security controls so I can control who is allowed to "push the button" to deploy dev->test->prod
    * Octopus can be driven via APIs and supports scripting (particularly powershell) that can be used to interact with other APIs. When I demoed this at nginx conf I was using mono on the nginx VM to invoke bash scripts.

    The only problem is that Octopus is a very Windows-centric product. I'm interested in doing this same sort of management using a "linux-centric" toolchain and would be interested to hear what tool chains others might be using. Ansible? Jenkins? Puppet/Chef?

    The process I describe above is what we do with servers that are relatively long-lived. I would also be curious what toolchains you've found to be effective when servers are more transient. E.g. do you build server images that have the nginx config "baked in"? Or do you stand up the VM and push configs / certs in a secondary deployment step.

    Thanks!
    Jason
    This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved.
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Tue Jul 17 19:31:32 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jul 2018 22:31:32 +0300 Subject: limit_req applied to upstream auth_request requests? In-Reply-To: <396c9f96d62fb272acfd023c64072cf1.NginxMailingListEnglish@forum.nginx.org> References: <396c9f96d62fb272acfd023c64072cf1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180717193132.GE56558@mdounin.ru> Hello!

On Tue, Jul 17, 2018 at 01:08:13PM -0400, jarstewa wrote:

> Hi, I currently have an nginx configuration that uses the limit_req
> directive to throttle upstream content requests. Now I'm trying to add
> similar rate limiting for auth requests, but I haven't been able to get the
> auth throttle to kick in during testing (whereas the content throttle works
> as expected). Is there some known limitation of using limit_req against
> auth_request requests, or do I simply have a problem in my configuration?

The limit_req directive doesn't try to limit requests already limited, as well as subrequests within these requests. You should configure all limits you want to apply to a request in one place.

> location /out/ {
>     auth_request /auth;
[...]
>     #Throttling works here <---
>     limit_req zone=content burst=50 nodelay;
>     limit_req_status 429;
> }
>
> location /auth {
>     internal;
[...]
>     limit_req zone=auth burst=50 nodelay;
>     limit_req_status 429;
> }

Note well that this configuration implies that every request to "/out/..." will generate a subrequest to "/auth". As such, you can safely move the "limit_req zone=auth ..." limit to "location /out/", as results will be (mostly) identical.

Note well that the auth subrequest is expected to return either 2xx, or 401, or 403. Anything else, including the 429 you are trying to configure in the provided snippet, will be considered an error, and nginx will return 500 to the client. -- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue Jul 17 21:23:44 2018 From: nginx-forum at forum.nginx.org (jarstewa) Date: Tue, 17 Jul 2018 17:23:44 -0400 Subject: limit_req applied to upstream auth_request requests? In-Reply-To: <20180717193132.GE56558@mdounin.ru> References: <20180717193132.GE56558@mdounin.ru> Message-ID: Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
> ...
> Note well that this configuration implies that every request to
> "/out/..." will generate a subrequest to "/auth". As such, you
> can safely move the "limit_req zone=auth ..." limit to "location
> /out/", as results will be (mostly) identical.

The example I've posted is a simplified version of my actual configuration. In reality I have several locations similar to /out/, each with a separate throttle rate.

> Note well that the auth subrequest is expected to return either 2xx,
> or 401, or 403. Anything else, including the 429 you are trying to
> configure in the provided snippet, will be considered an error, and
> nginx will return 500 to the client.

Ok, good point. So I should expect the client to receive a 500 for a request whose authentication subrequest was throttled (and thus returned 429). But I don't actually see this happening.
The nginx log contains only throttle events from the content throttle, and the client receives 200s until the throttling 429s kick in. So, I think you seem to be suggesting that throttling /auth should not be necessary, and may in fact be a bad idea. But I would still like to understand why it isn't working as I would expect. Thanks again, Jared Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280554,280559#msg-280559

From peter_booth at me.com Wed Jul 18 03:51:19 2018 From: peter_booth at me.com (Peter Booth) Date: Tue, 17 Jul 2018 23:51:19 -0400 Subject: How are you managing CI/CD for your nginx configs? In-Reply-To: <053453B4-BA65-43BC-B395-5C13226AB925@yale.edu> References: <995C5C9AD54A3C419AF1C20A8B6AB9A433FAA43A@STLEISEXCMBX3.eis.equifax.com> <053453B4-BA65-43BC-B395-5C13226AB925@yale.edu> Message-ID: <8A0CBF4B-9707-4DB1-9013-B7906B449FF6@me.com> I've tried Chef, Puppet and Ansible at three different shops. I wanted to like Chef and Puppet because they are Ruby based (which I like), but they seemed clunky, ugly, and heavyweight. Ansible seemed to solve the easy problems.

When I had a startup I just used Capistrano for deployments, with erb as a templating tool to generate config files and to pregenerate static web pages. What could easily have been a Rails, PHP or servlet based dynamic app instead became a static website that was rebuilt whenever the Rails based admin tool modified metadata. This enabled a number of things. Multiple nonproduction versions of the site coexisted:

Qa1.mysite.com
Qa2.mysite.com
..
Qa5.mysite.com

with a latest.mysite.com that would cycle thru qa1 thru qa5 and round again.

Sent from my iPhone

> On Jul 17, 2018, at 1:38 PM, Friscia, Michael wrote:
>
> I prefer simple setups that start with the question, "What is the least I can do to manage this new thing?"
>
> I've worked with all the various things mentioned below like Chef, Puppet, Ansible and many more, but scaled everything back to keep it really simple this time. I have 6 Nginx servers (pairs of servers running in USA, Europe and Asia) and we run about 800 DNS names through them as either redirects or full vanity URLs using about 70 configuration files. These are all Linux and all running Nginx-Plus.
>
> I put all my configs in a single directory; everything that is .conf is included, then I use .inc to include only where needed in specific server or location blocks for the sake of code re-use.
>
> This is all in a git repository.
>
> Each server has a cron setup to run once a minute with an offset so that the crons run 20 seconds apart, and in 2 minutes' time all servers will patch up to the latest version. The cron basically just does a git pull and, if an update is needed, it deletes the conf directory in the Nginx install location, copies all the files from Git and then restarts Nginx.
>
> Inside the Git repo is a file I call autodeploy.act, and that file is a list of the servers by name with a 0 or 1 at the end. When the cron runs, it compares the current server name to the list in this file and, if the number is a 1, it runs the update, changes the number back to 0, logs that it updated and then commits the changes back to Git.
>
> My actual workflow goes like this. I make changes and commit them to Git using Visual Studio Code with the Nginx color and tag plugins. I go onto my dev server and run a deployment bash script that replaces all the conf files, restarts Nginx and then runs a bunch of cURL commands to make sure I get back HTTP 200 responses.
> Then I know that I didn't break Nginx, and I can then set my local HOSTS file to point to this dev box and perform some tests. If all goes well, I open up the autodeploy.act file and change all the 0's to 1's, and after a couple minutes run a git pull and see that my changes are live, edit my local HOSTS file back to normal and test my settings.
>
> My goal was to install as few things on these servers as possible. Basically they all run Git and Nginx and that's it. I did not want to get involved with Chef, Puppet, Docker or anything too fancy, since the goal is pretty basic: I want to replace some files and restart a service. Bash and cron had all I needed, so I decided to make this as simple as possible. I'm all for using these other tools, but I don't like to force tools on a problem that can be solved with everything built into the operating system. I mostly did not want to then have to apply patches, updates and all that to yet another product.
>
> In any event, I hope someone might find this approach useful.
>
> ___________________________________________
> Michael Friscia
> Office of Communications
> Yale School of Medicine
> (203) 737-7932 - office
> (203) 931-5381 - mobile
> http://web.yale.edu
>
> On 7/17/18, 1:12 PM, "nginx on behalf of Jason Whittington" wrote:
>
> Last year I gave a talk at nginx.conf describing some success we have had using Octopus Deploy as a CD tool for nginx configs. The particular Octopus features that make this good are:
>
> * Octopus gives us a good variable replacement / template system so that I can define a template along with variables for different environments (which really helps me ensure consistency between environments)
> * Octopus has good abstractions for grouping servers into roles and environments (So say, DMZ and APP servers living in DEV, TEST, and PROD environments)
> * Octopus has a good release model and great visibility of "which release is deployed to which environment". As in "1.2.2 is in dev, 1.2.1 is in test, 1.1.9 is in production"
> * Octopus has good security controls so I can control who is allowed to "push the button" to deploy dev->test->prod
> * Octopus can be driven via APIs and supports scripting (particularly powershell) that can be used to interact with other APIs. When I demoed this at nginx conf I was using mono on the nginx VM to invoke bash scripts.
>
> The only problem is that Octopus is a very Windows-centric product. I'm interested in doing this same sort of management using a "linux-centric" toolchain and would be interested to hear what tool chains others might be using. Ansible? Jenkins? Puppet/Chef?
>
> The process I describe above is what we do with servers that are relatively long-lived. I would also be curious what toolchains you've found to be effective when servers are more transient. E.g. do you build server images that have the nginx config "baked in"? Or do you stand up the VM and push configs / certs in a secondary deployment step.
>
> Thanks!
> Jason
> This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved.
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Wed Jul 18 12:15:42 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Jul 2018 15:15:42 +0300 Subject: limit_req applied to upstream auth_request requests? In-Reply-To: References: <20180717193132.GE56558@mdounin.ru> Message-ID: <20180718121542.GG56558@mdounin.ru> Hello!

On Tue, Jul 17, 2018 at 05:23:44PM -0400, jarstewa wrote:

[...]

> So, I think you seem to be suggesting that throttling /auth should not be
> necessary, and may in fact be a bad idea. But I would still like to
> understand why it isn't working as I would expect.

The explanation on why it isn't working was in the first paragraph I wrote:

: The limit_req directive doesn't try to limit requests already
: limited, as well as subrequests within these requests. You should
: configure all limits you want to apply to a request in one place.

-- Maxim Dounin http://mdounin.ru/

From michael.friscia at yale.edu Wed Jul 18 15:10:54 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 18 Jul 2018 15:10:54 +0000 Subject: Redirect without an SSL certificate Message-ID: <138D4C1D-6189-4386-9683-8FAB13B42504@yale.edu> We have a problem where we have a large number of vanity domain names that are redirected. For example, we have surgery.yale.edu which redirects to medicine.yale.edu/surgery. This works fine until someone tries to request https://surgery.yale.edu.
> administrative reasons, I cannot get a wildcard certificate to > handle *.yale.edu and make this simple to solve. > > My question is if there is any way to redirect a request > listening on port 80 and 443 but bypass the SSL certificate > warning so it will redirect? I would assume the order of > operation with HTTPS is to first validate the certificate but I > really want the 301 redirect to take place before the SSL cert > is verified. > > I'm open to ideas but we are limited in what we can actually do > so as it stands the only solution we have is to request a > certificate for each of the 600+ domains. A certificate warning appears when the client establishes a connection and cannot verify the certificate. The connection is not established at this point, and a request is not sent. You cannot return a redirect unless the client agrees to continue despite the certificate warning. That is, if you want redirects to be returned, the only option is to obtain valid certificates. Another option might be to reject https connections to domains that are not configured to use https. When using SNI, you can configure nginx to selectively reject connections to some names by using unsatisfiable ssl_ciphers (see https://trac.nginx.org/nginx/ticket/195#comment:6). -- Maxim Dounin http://mdounin.ru/

From jeff at p27.eu Wed Jul 18 15:33:00 2018 From: jeff at p27.eu (Jeff Abrahamson) Date: Wed, 18 Jul 2018 17:33:00 +0200 Subject: Redirect without an SSL certificate In-Reply-To: <138D4C1D-6189-4386-9683-8FAB13B42504@yale.edu> References: <138D4C1D-6189-4386-9683-8FAB13B42504@yale.edu> Message-ID: <8c4465e0-64a1-ed94-1e34-7e4411bb9860@p27.eu> Could you use letsencrypt to manage all those certs? What you want can't work: the client makes an SSL request, you respond (with a 301), the client detects that the interaction was not properly authenticated, and so complains to the user. It's out of your hands, which is the whole point of SSL identity validation. Jeff Abrahamson +33 6 24 40 01 57 +44 7920 594 255 http://p27.eu/jeff/ On 18/07/18 17:10, Friscia, Michael wrote: > > We have a problem where we have a large number of vanity domain names > that are redirected. For example we have surgery.yale.edu which > redirects to medicine.yale.edu/surgery. This works fine until someone > tries to request https://surgery.yale.edu. For administrative reasons, > I cannot get a wildcard certificate to handle *.yale.edu and make this > simple to solve. > > My question is if there is any way to redirect a request listening on > port 80 and 443 but bypass the SSL certificate warning so it will > redirect? I would assume the order of operation with HTTPS is to first > validate the certificate but I really want the 301 redirect to take > place before the SSL cert is verified. > > I'm open to ideas but we are limited in what we can actually do so as > it stands the only solution we have is to request a certificate for > each of the 600+ domains. > > ___________________________________________ > > Michael Friscia > > Office of Communications > > Yale School of Medicine > > (203) 737-7932 - office > > (203) 931-5381 - mobile > > http://web.yale.edu > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Jeff Abrahamson +33 6 24 40 01 57 +44 7920 594 255 http://p27.eu/jeff/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From michael.friscia at yale.edu Wed Jul 18 15:49:01 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 18 Jul 2018 15:49:01 +0000 Subject: Redirect without an SSL certificate In-Reply-To: <8c4465e0-64a1-ed94-1e34-7e4411bb9860@p27.eu> References: <138D4C1D-6189-4386-9683-8FAB13B42504@yale.edu> <8c4465e0-64a1-ed94-1e34-7e4411bb9860@p27.eu> Message-ID: <987A7756-E2F2-4969-B911-01F12DC4D0D9@yale.edu> Thanks, I had not heard of that solution so I will chase it down to see if we can make it work. As for the response, I assumed that was the case, and what's the point of SSL if there was a way to bypass it? Just wishful thinking. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu From: Jeff Abrahamson Date: Wednesday, July 18, 2018 at 11:33 AM To: "nginx at nginx.org" , Michael Friscia Subject: Re: Redirect without an SSL certificate Could you use letsencrypt to manage all those certs? What you want can't work: the client makes an SSL request, you respond (with a 301), the client detects that the interaction was not properly authenticated, and so complains to the user. It's out of your hands, which is the whole point of SSL identity validation. Jeff Abrahamson +33 6 24 40 01 57 +44 7920 594 255 http://p27.eu/jeff/ On 18/07/18 17:10, Friscia, Michael wrote: We have a problem where we have a large number of vanity domain names that are redirected. For example we have surgery.yale.edu which redirects to medicine.yale.edu/surgery. This works fine until someone tries to request https://surgery.yale.edu. For administrative reasons, I cannot get a wildcard certificate to handle *.yale.edu and make this simple to solve. My question is if there is any way to redirect a request listening on port 80 and 443 but bypass the SSL certificate warning so it will redirect? I would assume the order of operation with HTTPS is to first validate the certificate but I really want the 301 redirect to take place before the SSL cert is verified. I'm open to ideas but we are limited in what we can actually do so as it stands the only solution we have is to request a certificate for each of the 600+ domains. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Jeff Abrahamson +33 6 24 40 01 57 +44 7920 594 255 http://p27.eu/jeff/ -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From nginx-forum at forum.nginx.org Wed Jul 18 18:13:31 2018 From: nginx-forum at forum.nginx.org (jarstewa) Date: Wed, 18 Jul 2018 14:13:31 -0400 Subject: limit_req applied to upstream auth_request requests? In-Reply-To: <20180718121542.GG56558@mdounin.ru> References: <20180718121542.GG56558@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > The explanation on why it isn't working was in the first paragraph > I wrote: > > : The limit_req directive doesn't try to limit requests already > : limited, as well as subrequests within these requests. You should > : configure all limits you want to apply to a request in one place. My apologies for missing that section of your reply. Thank you for the explanation!
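As a concrete illustration of the advice quoted above, here is a minimal sketch of configuring the limit in one place, on the protected location itself rather than on the auth subrequest; the zone name, rate, and upstream names are placeholders, not from the thread:

```
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location /protected/ {
        auth_request /auth;
        # Per the explanation above, the limit belongs here on the main
        # request; a limit_req inside /auth would not throttle the
        # subrequest.
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://backend;
    }

    location = /auth {
        internal;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_pass http://auth_backend;
    }
}
```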
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280554,280571#msg-280571 From nginx-forum at forum.nginx.org Thu Jul 19 15:45:01 2018 From: nginx-forum at forum.nginx.org (cyberfarer) Date: Thu, 19 Jul 2018 11:45:01 -0400 Subject: Upload large files via Nginx reverse proxy Message-ID: <4399005718a90a7ab2e65c713b3f73d3.NginxMailingListEnglish@forum.nginx.org> We have Nginx as a reverse proxy server to a Pydio server backend running with Apache2. We are attempting to upload a 50G file. The Nginx server, 1.14.0, is attempting to write the file locally to /var/lib/nginx/body/ rather than sending it directly to the backend. Our proxy server has a very small footprint of just 13G. Is there an option to send the file directly to the backend without writing locally? Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280578,280578#msg-280578 From mdounin at mdounin.ru Thu Jul 19 16:12:19 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jul 2018 19:12:19 +0300 Subject: Upload large files via Nginx reverse proxy In-Reply-To: <4399005718a90a7ab2e65c713b3f73d3.NginxMailingListEnglish@forum.nginx.org> References: <4399005718a90a7ab2e65c713b3f73d3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180719161219.GL56558@mdounin.ru> Hello! On Thu, Jul 19, 2018 at 11:45:01AM -0400, cyberfarer wrote: > We have Nginx as a reverse proxy server to a Pydio server backend running > with Apache2. We are attempting to upload a 50G file. The Nginx server, > 1.14.0, is attempting to write the file locally to /var/lib/nginx/body/ > rather than sending it directly to the backend. Our proxy server has a very > small footprint of just 13G. > > Is there an option to send the file directly to the backend without writing > locally? http://nginx.org/r/proxy_request_buffering -- Maxim Dounin http://mdounin.ru/ From gfrankliu at gmail.com Thu Jul 19 16:12:34 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 19 Jul 2018 09:12:34 -0700 Subject: Upload large files via Nginx reverse proxy In-Reply-To: <4399005718a90a7ab2e65c713b3f73d3.NginxMailingListEnglish@forum.nginx.org> References: <4399005718a90a7ab2e65c713b3f73d3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Does this work for you? https://serverfault.com/questions/768693/nginx-how-to-completely-disable-request-body-buffering On Thu, Jul 19, 2018 at 8:45 AM, cyberfarer wrote: > We have Nginx as a reverse proxy server to a Pydio server backend running > with Apache2. We are attempting to upload a 50G file. The Nginx server, > 1.14.0, is attempting to write the file locally to /var/lib/nginx/body/ > rather than sending it directly to the backend. Our proxy server has a very > small footprint of just 13G. > > Is there an option to send the file directly to the backend without writing > locally? > > Thank you. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,280578,280578#msg-280578 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Fri Jul 20 02:57:48 2018 From: nginx-forum at forum.nginx.org (dyadaval) Date: Thu, 19 Jul 2018 22:57:48 -0400 Subject: Nginx and libatomic In-Reply-To: <20091206082319.GA85084@rambler-co.ru> References: <20091206082319.GA85084@rambler-co.ru> Message-ID: <836834a780c83504c9d387c90a1b60d8.NginxMailingListEnglish@forum.nginx.org> Hello, Without stream modules, nginx cross compiles well. With stream modules, I see the below error:

buildmachine:nginx-1.12.0_ppc[sim-qnx-r18.x]$vim src/core/ngx_rwlock.c
buildmachine:nginx-1.12.0_ppc[sim-qnx-r18.x]$make
make -f objs/Makefile
make[1]: Entering directory '/home/dyadavalli/nginx/nginx-1.12.0_ppc'
qcc -c -V 4.4.2,gcc_ntoppcbe -DNGX_SYS_NERR=135 -DNGX_HAVE_MAP_ANON -DNGX_PTR_SIZE=4 -DNGX_SIZE_T_LEN=10 -DNGX_MAX_SIZE_T_VALUE=2147483647 -DNGX_MAX_OFF_T_VALUE=2147483647 -DNGX_MAX_TIME_T_VALUE=2147483647 -DNGX_OFF_T_LEN=10 -DNGX_TIME_T_LEN=10 -DNGX_HAVE_MSGHDR_MSG_CONTROL -I~/myopenssl_ppc/include/ -I src/core -I src/event -I src/event/modules -I src/os/unix -I /home/dyadavalli/nginx-1.12.0_ppc/nginx_conf -I /home/dyadavalli/myopenssl_ppc/include -I objs \
    -o objs/src/core/ngx_rwlock.o \
    src/core/ngx_rwlock.c
src/core/ngx_rwlock.c:116:2: error: #error ngx_atomic_cmp_set() is not defined!
cc: /opt/QNX651/host/linux/x86/usr/lib/gcc/powerpc-unknown-nto-qnx6.5.0/4.4.2/cc1 error 1
make[1]: *** [objs/Makefile:619: objs/src/core/ngx_rwlock.o] Error 1
make[1]: Leaving directory '/home/dyadavalli/nginx-1.12.0_ppc'
make: *** [Makefile:8: build] Error 2
buildmachine:nginx-1.12.0_ppc[sim-qnx-r18.x]$

I read that nginx has built-in libatomic sources. Could you please tell me how to use it via the configure script. Thanks for the help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,28343,280581#msg-280581

From michael.friscia at yale.edu Fri Jul 20 11:10:25 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 20 Jul 2018 11:10:25 +0000 Subject: Need help with regex Message-ID: I have a regex that works in an online tool but when I put this into my configuration file it is not working. The problem is that I want all URLs that start with /info to be redirected with the exception of one unfortunately named PDF file. This regex tests perfectly in an online tool

^/info(\/)?(?!\.pdf)

which shows that anything /information /info and /info/ all redirect and this will not

/informationforFamiliesAlliesPacket_298781_284_5_v1.pdf

But when I put this into action, the PDF requests are still being redirected like any other /info call made. I use a config file with a number of redirects so the full location block is simply:

location ~* ^/info(\/)?(?!\.pdf) {
    return 301 https://www.yalemedicine.org/conditions/;
}

My thought process was to still redirect unless ".pdf" existed in the URL just in case we upload more "info...pdf" documents into the system, I didn't want to make this exception too specific. Any thoughts on this would be great, my regex skills are good enough most of the time but failing me right now. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Fri Jul 20 11:20:01 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jul 2018 14:20:01 +0300 Subject: Nginx and libatomic In-Reply-To: <836834a780c83504c9d387c90a1b60d8.NginxMailingListEnglish@forum.nginx.org> References: <20091206082319.GA85084@rambler-co.ru> <836834a780c83504c9d387c90a1b60d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180720112000.GM56558@mdounin.ru> Hello! On Thu, Jul 19, 2018 at 10:57:48PM -0400, dyadaval wrote: > Hello, > Without stream modules, nginx cross compiles well. nginx does not support cross-compilation. Consider using native compilation instead. -- Maxim Dounin http://mdounin.ru/

From michael.friscia at yale.edu Fri Jul 20 11:30:32 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 20 Jul 2018 11:30:32 +0000 Subject: Need help with regex In-Reply-To: References: Message-ID: <82D07D43-C8CA-43E2-8E79-644CEF862990@yale.edu> Ok, never mind. It was working all along. My HOSTS file was screwing me up and pointing to a local instance that did not have this config. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu From: nginx on behalf of Michael Friscia Reply-To: "nginx at nginx.org" Date: Friday, July 20, 2018 at 7:10 AM To: "nginx at nginx.org" Subject: Need help with regex I have a regex that works in an online tool but when I put this into my configuration file it is not working. The problem is that I want all URLs that start with /info to be redirected with the exception of one unfortunately named PDF file. This regex tests perfectly in an online tool

^/info(\/)?(?!\.pdf)

which shows that anything /information /info and /info/ all redirect and this will not

/informationforFamiliesAlliesPacket_298781_284_5_v1.pdf

But when I put this into action, the PDF requests are still being redirected like any other /info call made. I use a config file with a number of redirects so the full location block is simply:

location ~* ^/info(\/)?(?!\.pdf) {
    return 301 https://www.yalemedicine.org/conditions/;
}

My thought process was to still redirect unless ".pdf" existed in the URL just in case we upload more "info...pdf" documents into the system, I didn't want to make this exception too specific. Any thoughts on this would be great, my regex skills are good enough most of the time but failing me right now. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From nginx-forum at forum.nginx.org Fri Jul 20 18:17:50 2018 From: nginx-forum at forum.nginx.org (cyberfarer) Date: Fri, 20 Jul 2018 14:17:50 -0400 Subject: Upload large files via Nginx reverse proxy In-Reply-To: References: Message-ID: It did. Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280578,280589#msg-280589

From nginx-forum at forum.nginx.org Fri Jul 20 19:42:40 2018 From: nginx-forum at forum.nginx.org (jarstewa) Date: Fri, 20 Jul 2018 15:42:40 -0400 Subject: Caching result of auth_request_set?
Message-ID: <9951115b99cd55b04c7eed58508badae.NginxMailingListEnglish@forum.nginx.org> I'm currently using the auth_request directive and caching the result based on a guid + IP address:

>location /auth {
> internal;
>
> proxy_pass_request_body off;
> proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip;
>
> proxy_cache auth_cache;
> set $auth_cache_key "${guid}|${last_client_ip}";
> proxy_cache_key $auth_cache_key;
> proxy_cache_valid 200 301 302 5m;
> proxy_cache_valid 401 403 404 5m;
>
>}

It would be very convenient for me to return back a bit of metadata associated with the guid from the upstream auth request, and send that bit of metadata along with the actual request that will follow if the auth subrequest succeeds. It looks like this is possible via the auth_request_set directive, but I am not sure how auth_request_set would interact with proxy_cache. For auth requests, is proxy cache only caching the HTTP response code? Or is it caching the full response including variables? In other words, will auth_request_set still work correctly to set a variable when the auth response is cached? Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280590,280590#msg-280590

From mdounin at mdounin.ru Sun Jul 22 02:52:14 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Jul 2018 05:52:14 +0300 Subject: Caching result of auth_request_set? In-Reply-To: <9951115b99cd55b04c7eed58508badae.NginxMailingListEnglish@forum.nginx.org> References: <9951115b99cd55b04c7eed58508badae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180722025213.GS56558@mdounin.ru> Hello! On Fri, Jul 20, 2018 at 03:42:40PM -0400, jarstewa wrote: > I'm currently using the auth_request directive and caching the result based > on a guid + IP address: > > >location /auth { > > internal; > > > > proxy_pass_request_body off; > > proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip; > > > > proxy_cache auth_cache; > > set $auth_cache_key "${guid}|${last_client_ip}"; > > proxy_cache_key $auth_cache_key; > > proxy_cache_valid 200 301 302 5m; > > proxy_cache_valid 401 403 404 5m; > > > >} > > It would be very convenient for me to return back a bit of metadata > associated with the guid from the upstream auth request, and send that bit > of metadata along with the actual request that will follow if the auth > subrequest succeeds. It looks like this is possible via the > auth_request_set directive, but I am not sure how auth_request_set would > interact with proxy_cache. > > For auth requests, is proxy cache only caching the HTTP response code? Or > is it caching the full response including variables? In other words, will > auth_request_set still work correctly to set a variable when the auth > response is cached? Thank you! Unless it is specifically instructed to intercept errors using the proxy_intercept_errors directive, the proxy cache caches the full response as received from the upstream server. That is, auth_request_set is expected to work just fine.
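For illustration, a minimal sketch of the pattern under discussion; the X-Meta header name and the upstream names are placeholders, not from the thread:

```
location /protected/ {
    auth_request /auth;
    # Copy a header from the (possibly cached) auth response into a
    # variable, then forward it upstream with the main request.
    auth_request_set $auth_meta $upstream_http_x_meta;
    proxy_set_header X-Meta $auth_meta;
    proxy_pass http://backend;
}
```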
-- Maxim Dounin http://mdounin.ru/

From roland at spinnaker.de Mon Jul 23 16:01:04 2018 From: roland at spinnaker.de (Roland Rosenfeld) Date: Mon, 23 Jul 2018 18:01:04 +0200 Subject: mail proxy (IMAP/POP3): balancing between workers Message-ID: <20180723160104.zm2y5ibvz7p4tiac@sys-241.netcologne.de> Hi, I run NginX as mail proxy (IMAP/POP3) and have a setup with

worker_processes 8;
worker_rlimit_nofile 32768;
events {
    worker_connections 4096;
    multi_accept on;
}

I upgraded this setup from Linux 3.16 and NginX 1.10 to Linux 4.9 and NginX 1.14. After this upgrade I ran into trouble, since after reaching a maximum of approximately 2600 proxied connections I run into the following error messages:

4096 worker_connections are not enough

or

4096 worker_connections are not enough while in http auth state

I found out that nearly all connections were proxied by the first worker process, while nearly all other worker processes seem to be mostly inactive. But I want to balance the connections between the workers, otherwise the worker_rlimit_nofile and the worker_connections are too low. As a first workaround I defined "accept_mutex on;", whose default changed in 1.11.3 from "on" to "off". This seems to mitigate the issue for me (at least all worker processes now use the CPU again according to ps output). I'm not sure whether the balancing is as good as with the old setup, but it looks much better than before the workaround. But what's the correct way to tell NginX that it should balance the connections between all worker processes? According to the manual, "accept_mutex on" isn't needed with EPOLLEXCLUSIVE, which should be active on my system with Linux 4.9 and glibc 2.24. Greetings Roland

From turritopsis.dohrnii at teo-en-ming.com Tue Jul 24 04:01:30 2018 From: turritopsis.dohrnii at teo-en-ming.com (Turritopsis Dohrnii Teo En Ming) Date: Tue, 24 Jul 2018 04:01:30 +0000 Subject: Is Nginx the ideal reverse proxy for deploying cPanel and Exchange 2016 behind one single public IP address? Message-ID: <4096e1149b9e416085825f3ddb152e64@teo-en-ming.com> Good morning from Singapore, Is Nginx the ideal reverse proxy for deploying cPanel and Exchange 2016 behind one single public IP address? I am planning to use nginx to be the reverse proxy for DNS, IMAP, IMAP/S, POP3, POP3/S, SMTP, and SMTP/S protocols for cPanel and Exchange 2016 behind one single public IP address. cPanel will use one domain name and Exchange 2016 groupware will use another domain name. Can nginx do that? Please point me to the best installation and configuration guides for all of my requirements above. Thank you very much. ===BEGIN SIGNATURE=== Turritopsis Dohrnii Teo En Ming's Academic Qualifications as at 30 Oct 2017 [1] https://tdtemcerts.wordpress.com/ [2] http://tdtemcerts.blogspot.sg/ [3] https://www.scribd.com/user/270125049/Teo-En-Ming ===END SIGNATURE=== -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Jul 24 08:29:37 2018 From: nginx-forum at forum.nginx.org (hcnhcn012) Date: Tue, 24 Jul 2018 04:29:37 -0400 Subject: Have problems adding header 'Set-Cookie' to headers_out in my nginx sub request module Message-ID: **nginx version: `1.10.3`** Here's my code to add 'Set-Cookie' to headers:

void add_headers_out(ngx_http_request_t *r, char* cookies)
{
    ngx_table_elt_t *h;
    ngx_str_t k = ngx_string("Set-Cookie");
    ngx_str_t v = ngx_string(cookies);

    h = ngx_list_push(&r->headers_out.headers);
    if (h == NULL) {
        return;
    }

    h->hash = ngx_hash_key_lc(k.data, k.len);
    h->key.len = k.len;
    h->key.data = k.data;
    h->value.len = v.len;
    h->value.data = v.data;
}

When I call `add_headers_out` in my parent request handler:

static void multipost_post_handler(ngx_http_request_t *r)
{
    ...
    ///////// fill up headers and body

    //// body
    int bodylen = body.len;
    ngx_buf_t *b = ngx_create_temp_buf(r->pool, bodylen);
    b->pos = body.data;
    b->last = b->pos + bodylen;
    b->last_buf = 1;

    ngx_chain_t out;
    out.buf = b;
    out.next = NULL;

    //// headers
    r->headers_out.content_type = myctx->content_type;
    r->headers_out.content_length_n = bodylen;
    r->headers_out.status = myctx->status_code;

    // myctx->cookie1: "PHPSESSID=1f74a78647e192496597c240de765d45;"
    add_headers_out(r, myctx->cookie1);

    // Test: checking additional headers by iterating headers_out.headers
    get_headers_out(r); // returns: "Set-Cookie : PHPSESSID=1f74a78647e192496597c240de765d45;"

    // Send response to client
    r->connection->buffered |= NGX_HTTP_WRITE_BUFFERED;
    ngx_int_t ret = ngx_http_send_header(r);
    ret = ngx_http_output_filter(r, &out);
    ngx_http_finalize_request(r, ret);
    return;
}

There seems to be no problem in my code, but when I use my nginx module as a reverse proxy module to some sites, I find `Set-Cookie` is different. For example, I can only see a small part of the original `Set-Cookie` value, `PHPSES` (then nothing), through Chrome. I do not know what causes that problem. Thanks for helping! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280625,280625#msg-280625

From mdounin at mdounin.ru Tue Jul 24 13:28:30 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Jul 2018 16:28:30 +0300 Subject: nginx-1.15.2 Message-ID: <20180724132830.GE56558@mdounin.ru> Changes with nginx 1.15.2                                        24 Jul 2018

*) Feature: the $ssl_preread_protocol variable in the ngx_stream_ssl_preread_module.
*) Feature: now when using the "reset_timedout_connection" directive nginx will reset connections being closed with the 444 code.
*) Change: a logging level of the "http request", "https proxy request", "unsupported protocol", and "version too low" SSL errors has been lowered from "crit" to "info".
*) Bugfix: DNS requests were not resent if initial sending of a request failed.
*) Bugfix: the "reuseport" parameter of the "listen" directive was ignored if the number of worker processes was specified after the "listen" directive.
*) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to switch off "ssl_prefer_server_ciphers" in a virtual server if it was switched on in the default server.
*) Bugfix: SSL session reuse with upstream servers did not work with the TLS 1.3 protocol.
-- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Jul 24 14:02:33 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 24 Jul 2018 14:02:33 +0000 Subject: [nginx-announce] nginx-1.15.2 In-Reply-To: <20180724132835.GF56558@mdounin.ru> References: <20180724132835.GF56558@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.2 for Windows https://kevinworthington.com/nginxwin1152 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jul 24, 2018 at 1:28 PM, Maxim Dounin wrote: > Changes with nginx 1.15.2 24 Jul > 2018 > > *) Feature: the $ssl_preread_protocol variable in the > ngx_stream_ssl_preread_module. > > *) Feature: now when using the "reset_timedout_connection" directive > nginx will reset connections being closed with the 444 code. > > *) Change: a logging level of the "http request", "https proxy > request", > "unsupported protocol", and "version too low" SSL errors has been > lowered from "crit" to "info". > > *) Bugfix: DNS requests were not resent if initial sending of a request > failed. > > *) Bugfix: the "reuseport" parameter of the "listen" directive was > ignored if the number of worker processes was specified after the > "listen" directive. > > *) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to > switch off "ssl_prefer_server_ciphers" in a virtual server if it was > switched on in the default server. > > *) Bugfix: SSL session reuse with upstream servers did not work with > the > TLS 1.3 protocol. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corestudiosinc at gmail.com Wed Jul 25 13:01:31 2018 From: corestudiosinc at gmail.com (powiii) Date: Wed, 25 Jul 2018 16:01:31 +0300 Subject: question: http2_push_preload request with cookie Message-ID: Hello. I've recently experimented with the `http2_push_preload` directive to preemptively submit a response to an XHR request. I've noticed that in the request that nginx performs to fetch the hinted resource, no cookies are submitted. However, Chrome does not consider the cached response a candidate for serving the actual XHR that is later sent by the client, which contains `withCredentials=true` and does contain cookies. This is problematic in scenarios where cookies are required to be present. 
For example, assume the following case:

- a logged in user visits page A that we know will trigger an XHR to B.json
- information about the session of the user is persisted in a cookie
- B.json can only be served to logged in users
- we want to push B.json to the client using an early hint, since we know it'll be needed

what happens now is the following:

1) Chrome requests page A, nginx responds with page A and an early hint for B.json
2) nginx requests B.json *without* sending any cookies
3) Chrome fetches response for A and B.json
4) Chrome performs an XHR(withCredentials=true) to fetch B.json and does not use B.json from the push cache, since it considers it a different request altogether

My question is: how are we supposed to treat such a case? Are there any plans to support this? Thanks in advance, P.S. The ruby script I've used is the following and can be run with `bundle exec rackup test.rb` (requires ruby and bundler):

```
require 'rack'
require 'webrick'

XHR = "/foo.json"

body = %{ I'm the homepage and I'm performing an XHR }

require 'pp'

app = Proc.new do |env|
  puts
  if env["PATH_INFO"].include?(".json")
    ['200', {'Content-Type' => 'application/json'}, ['{"foo":"bar"}']]
  else
    ['200', {'Content-Type' => 'text/html', "Link" => "<#{XHR}>; rel=preload; as=fetch; crossorigin"}, [body]]
  end
end

Rack::Handler::WEBrick.run(app, Port: 8123)
```

OS: Darwin 17.4.0 Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64

nginx version: nginx/1.15.1
built by clang 9.1.0 (clang-902.0.39.2)
built with OpenSSL 1.0.2o 27 Mar 2018
TLS SNI support enabled
configure arguments: --prefix=/usr/local/Cellar/nginx/1.15.1 --sbin-path=/usr/local/Cellar/nginx/1.15.1/bin/nginx --with-cc-opt='-I/usr/local/opt/pcre/include -I/usr/local/opt/openssl/include' --with-ld-opt='-L/usr/local/opt/pcre/lib -L/usr/local/opt/openssl/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --pid-path=/usr/local/var/run/nginx.pid --lock-path=/usr/local/var/run/nginx.lock --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp --http-proxy-temp-path=/usr/local/var/run/nginx/proxy_temp --http-fastcgi-temp-path=/usr/local/var/run/nginx/fastcgi_temp --http-uwsgi-temp-path=/usr/local/var/run/nginx/uwsgi_temp --http-scgi-temp-path=/usr/local/var/run/nginx/scgi_temp --http-log-path=/usr/local/var/log/nginx/access.log --error-log-path=/usr/local/var/log/nginx/error.log --with-debug --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-ipv6 --with-mail --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module

From michael.friscia at yale.edu Wed Jul 25 13:14:29 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 25 Jul 2018 13:14:29 +0000 Subject: Handling upstream response 401 Message-ID: I have a problem that I thought I knew how to solve but must be just having a mind blank moment. If the upstream server returns a 401 response I want to make sure Nginx serves the response. Right now it is serving the stale version.
What happened is that the upstream page was public but then made secure, so it sends back the 401 redirect for browser login. Nginx is behaving properly in serving stale but I want to change how it works just for 401. We do serve stale for 404 because we don't see a need to serve a fresh response every time for content that doesn't exist. An alternative is to force the upstream app to return 501 instead of 401 but my understanding is that there are technical issues at stake that force me to try to resolve in Nginx. Any help would be appreciated, I just feel like it's an obvious fix and I'm forgetting how. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From mdounin at mdounin.ru Wed Jul 25 13:44:23 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jul 2018 16:44:23 +0300 Subject: Handling upstream response 401 In-Reply-To: References: Message-ID: <20180725134423.GK56558@mdounin.ru> Hello! On Wed, Jul 25, 2018 at 01:14:29PM +0000, Friscia, Michael wrote: > If the upstream server returns a 401 response I want to make > sure Nginx serves the response. Right now it is serving the > stale version. What happened is that the upstream page was > public but then made secure, so it sends back the 401 redirect > for browser login. Nginx is behaving properly in serving stale > but I want to change how it works just for 401. We do serve > stale for 404 because we don't see a need to serve a fresh > response every time for content that doesn't exist. Are you sure you are seeing nginx returning a stale response on 401 from the upstream server? With proxy_cache_use_stale you can configure nginx to return stale responses on 500, 502, 503, 504, 403, 404, and 429 (see http://nginx.org/r/proxy_cache_use_stale). It does not, however, return stale responses on 401. Either you did something very strange in your configuration, or you are trying to solve a problem which does not exist. -- Maxim Dounin http://mdounin.ru/

From michael.friscia at yale.edu Wed Jul 25 13:55:27 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 25 Jul 2018 13:55:27 +0000 Subject: Handling upstream response 401 In-Reply-To: <20180725134423.GK56558@mdounin.ru> References: <20180725134423.GK56558@mdounin.ru> Message-ID: I'm about 98% sure it is returning a 401 but I'm going to do some more research. I don't think we did anything too dumb:

proxy_cache_valid 200 301 302 404 3m;
proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504;

This is kind of what is confusing me but also makes me agree that I'm chasing a problem that is different than what I think. Our fix was to purge the pages and everything was fine. So I do know that the upstream response after the security change took place is causing Nginx to serve the previously public/cached version and it always says it served stale. I know that because I have a bunch of custom headers to help debug this type of situation. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 7/25/18, 9:44 AM, "nginx on behalf of Maxim Dounin" wrote: Hello!
On Wed, Jul 25, 2018 at 01:14:29PM +0000, Friscia, Michael wrote: > If the upstream server returns a 401 response I want to make > sure Nginx serves the response. Right now it is serving the > stale version. What happened is that the upstream page was > public but then made secure, so it sends back the 401 redirect > for browser login. Nginx is behaving properly in serving stale > but I want to change how it works just for 401. We do serve > stale for 404 because we don't see a need to serve a fresh > response every time for content that doesn't exist. Are you sure you are seeing nginx returning a stale response on 401 from the upstream server? With proxy_cache_use_stale you can configure nginx to return stale responses on 500, 502, 503, 504, 403, 404, and 429 (see http://nginx.org/r/proxy_cache_use_stale). It does not, however, return stale responses on 401. Either you did something very strange in your configuration, or you are trying to solve a problem which does not exist. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx

From gfrankliu at gmail.com Wed Jul 25 14:46:49 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 25 Jul 2018 07:46:49 -0700 Subject: support http and https on the same port Message-ID: Stream servers can now do ssl and non-ssl on the same port: https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ Can this be added to http virtual hosts as well? If ssl is on a listening port and client doesn't send ClientHello, can nginx fallback to use normal http? Maybe introduce a new directive "fallback_http on;"? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From mdounin at mdounin.ru Wed Jul 25 15:14:57 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jul 2018 18:14:57 +0300 Subject: support http and https on the same port In-Reply-To: References: Message-ID: <20180725151457.GM56558@mdounin.ru> Hello! On Wed, Jul 25, 2018 at 07:46:49AM -0700, Frank Liu wrote: > Stream servers can now do ssl and non-ssl on the same port: > https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ > > Can this be added to http virtual hosts as well? > If ssl is on a listening port and client doesn't send ClientHello, can > nginx fallback to use normal http? Maybe introduce a new directive > "fallback_http on;"? It is available since nginx 0.1.0, see the 497 error code here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors It might not be a good idea to actually configure things that way though.
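For reference, the stream-module technique from the blog post linked above looks roughly like this; the ports and upstream names are illustrative, and this is a sketch rather than a tested configuration. $ssl_preread_protocol evaluates to an empty string when the first bytes of a connection do not look like a TLS ClientHello:

```
stream {
    upstream web_http  { server 127.0.0.1:8080; }
    upstream web_https { server 127.0.0.1:8443; }

    map $ssl_preread_protocol $upstream {
        default web_https;  # TLS of any version
        ""      web_http;   # not a ClientHello: treat as plain traffic
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```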
-- Maxim Dounin http://mdounin.ru/

From gfrankliu at gmail.com Wed Jul 25 16:16:30 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 25 Jul 2018 09:16:30 -0700 Subject: support http and https on the same port In-Reply-To: <20180725151457.GM56558@mdounin.ru> References: <20180725151457.GM56558@mdounin.ru> Message-ID: Thanks Maxim! Is there a way to tell nginx to treat 497 as no error, and continue normal processing? On Wed, Jul 25, 2018 at 8:14 AM, Maxim Dounin wrote: > Hello! > > On Wed, Jul 25, 2018 at 07:46:49AM -0700, Frank Liu wrote: > > > Stream servers can now do ssl and non-ssl on the same port: > > https://www.nginx.com/blog/running-non-ssl-protocols- > over-ssl-port-nginx-1-15-2/ > > > > Can this be added to http virtual hosts as well? > > If ssl is on a listening port and client doesn't send ClientHello, can > > nginx fallback to use normal http? Maybe introduce a new directive > > "fallback_http on;"? > > It is available since nginx 0.1.0, see the 497 error code here: > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors > > It might not be a good idea to actually configure things that way > though. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From gfrankliu at gmail.com Wed Jul 25 17:26:20 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 25 Jul 2018 10:26:20 -0700 Subject: support http and https on the same port In-Reply-To: References: <20180725151457.GM56558@mdounin.ru> Message-ID: I just tried it quickly. nginx gives 400 instead of 497 when I connect over http to an ssl virtual host.

server {
    listen 8443 ssl;
    server_name localhost;

    ssl_certificate /opt/nginx/ssl/localhost.crt;
    ssl_certificate_key /opt/nginx/ssl/localhost.key;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}

curl -v http://localhost:8443
* About to connect() to localhost port 8443 (#0)
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8443 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8443
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: nginx/1.15.2
< Date: Wed, 25 Jul 2018 17:23:24 GMT
< Content-Type: text/html
< Content-Length: 271
< Connection: close
<
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.15.2</center>
</body>
</html>
* Closing connection 0

Am I missing something?

On Wed, Jul 25, 2018 at 9:16 AM, Frank Liu wrote: > Thanks Maxim! > Is there a way to tell nginx to treat 497 as no error, and continue normal > processing? > > On Wed, Jul 25, 2018 at 8:14 AM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Jul 25, 2018 at 07:46:49AM -0700, Frank Liu wrote: >> >> > Stream servers can now do ssl and non-ssl on the same port: >> > https://www.nginx.com/blog/running-non-ssl-protocols-over- >> ssl-port-nginx-1-15-2/ >> > >> > Can this be added to http virtual hosts as well? >> > If ssl is on a listening port and client doesn't send ClientHello, can >> > nginx fallback to use normal http? Maybe introduce a new directive >> > "fallback_http on;"? >> >> It is available since nginx 0.1.0, see the 497 error code here: >> >> http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors >> >> It might not be a good idea to actually configure things that way >> though. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From gfrankliu at gmail.com Wed Jul 25 18:37:20 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 25 Jul 2018 11:37:20 -0700 Subject: support http and https on the same port In-Reply-To: References: <20180725151457.GM56558@mdounin.ru> Message-ID: I tried:

error_page 497 $request_uri;

It is kind of working, and I get the correct content/code back, but the response status is still 400:

HTTP/1.1 400 Bad Request

My use case has nginx as a reverse proxy, and the real response code from upstream is dropped though the response body and other headers are retained. I also tried:

error_page 497 =200 $request_uri;

and now I get:

HTTP/1.1 200 OK

instead of the real response code from upstream.

On Wed, Jul 25, 2018 at 10:26 AM, Frank Liu wrote: > I just tried it quickly. nginx gives 400 instead of 497 when I connect over > http to an ssl virtual host. > > server { > listen 8443 ssl; > server_name localhost; > > ssl_certificate /opt/nginx/ssl/localhost.crt; > ssl_certificate_key /opt/nginx/ssl/localhost.key; > > ssl_session_cache shared:SSL:10m; > ssl_session_timeout 10m; > > } > > curl -v http://localhost:8443 > * About to connect() to localhost port 8443 (#0) > * Trying 127.0.0.1... > * Connected to localhost (127.0.0.1) port 8443 (#0) > > GET / HTTP/1.1 > > User-Agent: curl/7.29.0 > > Host: localhost:8443 > > Accept: */* > > > < HTTP/1.1 400 Bad Request > < Server: nginx/1.15.2 > < Date: Wed, 25 Jul 2018 17:23:24 GMT > < Content-Type: text/html > < Content-Length: 271 > < Connection: close > <
> <html>
> <head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
> <body bgcolor="white">
> <center><h1>400 Bad Request</h1></center>
> <center>The plain HTTP request was sent to HTTPS port</center>
> <hr><center>nginx/1.15.2</center>
> </body>
> </html>
> * Closing connection 0
>
> Am I missing something?
>
> On Wed, Jul 25, 2018 at 9:16 AM, Frank Liu wrote: >> Thanks Maxim! >> Is there a way to tell nginx to treat 497 as no error, and continue >> normal processing? >> >> On Wed, Jul 25, 2018 at 8:14 AM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Wed, Jul 25, 2018 at 07:46:49AM -0700, Frank Liu wrote: >>> >>> > Stream servers can now do ssl and non-ssl on the same port: >>> > https://www.nginx.com/blog/running-non-ssl-protocols-over-ss >>> l-port-nginx-1-15-2/ >>> > >>> > Can this be added to http virtual hosts as well? >>> > If ssl is on a listening port and client doesn't send ClientHello, can >>> > nginx fallback to use normal http? Maybe introduce a new directive >>> > "fallback_http on;"? >>> >>> It is available since nginx 0.1.0, see the 497 error code here: >>> >>> http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors >>> >>> It might not be a good idea to actually configure things that way >>> though. >>> >>> -- >>> Maxim Dounin >>> http://mdounin.ru/ >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From mdounin at mdounin.ru Wed Jul 25 20:23:03 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jul 2018 23:23:03 +0300 Subject: support http and https on the same port In-Reply-To: References: <20180725151457.GM56558@mdounin.ru> Message-ID: <20180725202303.GO56558@mdounin.ru> Hello! On Wed, Jul 25, 2018 at 11:37:20AM -0700, Frank Liu wrote: > I tried: > > error_page 497 $request_uri; > > It is kind of working, and I get the correct content/code back, but the > response status is still 400: > > HTTP/1.1 400 Bad Request > > My use case has nginx as a reverse proxy, and the real response code from > upstream is dropped though the response body and other headers are retained. > > I also tried: > error_page 497 =200 $request_uri; > and now I get: > HTTP/1.1 200 OK > instead of the real response code from upstream. Try this instead:

error_page 497 = @http;

location @http {
    proxy_pass http://upstream.server;
}

But, as previously said, it might not be a good idea to actually configure things that way. Rather, 497 can and should be used to return a proper error and/or redirection to the correct address with the protocol properly specified. -- Maxim Dounin http://mdounin.ru/

From gfrankliu at gmail.com Wed Jul 25 20:33:09 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 25 Jul 2018 13:33:09 -0700 Subject: support http and https on the same port In-Reply-To: <20180725202303.GO56558@mdounin.ru> References: <20180725151457.GM56558@mdounin.ru> <20180725202303.GO56558@mdounin.ru> Message-ID: In the current setup, I have

location / {
    ... bunch of stuff ...
    proxy_pass http://upstream.server;
}

Instead of duplicating the whole location block, can we do something like

location @http | / {
    ... bunch of stuff ...
    proxy_pass http://upstream.server;
}

On Wed, Jul 25, 2018 at 1:23 PM, Maxim Dounin wrote: > Hello!
> > On Wed, Jul 25, 2018 at 11:37:20AM -0700, Frank Liu wrote: > > > I tried: > > > > error_page 497 $request_uri; > > > > It is kind of working, and I get the correct content/code back, but the > > response status is still 400: > > > > HTTP/1.1 400 Bad Request > > > > My use case has nginx as a reverse proxy, and the real response code from > > upstream is dropped though the response body and other headers are > retained. > > > > I also tried: > > error_page 497 =200 $request_uri; > > and now I get: > > HTTP/1.1 200 OK > > instead of the real response code from upstream. > > Try this instead: > > error_page 497 = @http; > > location @http { > proxy_pass http://upstream.server; > } > > But, as previously said, it might not be a good idea to actually > configure things that way. Rather, 497 can and should be used to > return a proper error and/or redirection to the correct address > with the protocol properly specified. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From alex at samad.com.au Wed Jul 25 21:53:55 2018 From: alex at samad.com.au (Alex Samad) Date: Thu, 26 Jul 2018 07:53:55 +1000 Subject: Feature request Message-ID: Hi Not sure where to put this. But I would like to have the ability to add "client cert required" anywhere on the URI tree, so www.abc.com.au/ you can access without a cert, but www.abc.com.au/private/ you need a cert and www.abc.com.au/public/ no cert needed A -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From giacomo at beta.srl Thu Jul 26 08:06:22 2018 From: giacomo at beta.srl (Giacomo Arru - BETA Technologies) Date: Thu, 26 Jul 2018 10:06:22 +0200 (CEST) Subject: nginx -> httpd -> mod_jk -> tomcat Message-ID: <1611295917.716835.1532592382462.JavaMail.zimbra@beta.srl> Hi everybody, I recently began using proxy with nginx (same tests were made with haproxy). My needs are to proxy for failover and balancing tomcat: I need to serve lots of users with a production app. While I understood that 100% tomcat AJP1.3 compatibility is achievable with apache httpd only and mod_jk, I successfully serve my app with apache on a simple http port 80 (cookie path already patched). So I decided to have a localhost apache httpd to proxy tomcat with AJP. It works perfectly. Now, I need to proxy httpd with nginx, adding SSL with letsencrypt. I successfully configured the proxy and everything works but uploads: if I send a file to my app, only small uploads work. I'd like to investigate the headers, maybe I need to transform some string, but I'm a complete newbie from this point of view. Do you have some tips on how to investigate the problem? Thanks, Giacomo Arru -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From kovacs at gmail.com Thu Jul 26 17:18:07 2018 From: kovacs at gmail.com (Michael Kovacs) Date: Thu, 26 Jul 2018 10:18:07 -0700 Subject: Nginx url decoding URI problem with proxy_pass Message-ID: Greetings Nginx mailing list! I'm using nginx as an image proxy and am using proxy_pass to fetch the image. Unfortunately if that image URL has a space in it the proxy_pass fails. It works fine with any other image.
example successful URL:

/image_preview/https://somedomain.com/image.jpg

example failed URL:

/image_preview/https://somedomain.com/My%20Images/image.jpg

^^ Nginx is URL decoding the url in the path and putting the space back in. Here's my nginx.conf

# redirect thumbnail url to real world
location ~ ^/image_preview/(.*?)$ {
    resolver ${HOST};

    set $fallback_image_url ${FALLBACK_IMAGE_URL};
    set $image_url $1;

    if ($args) {
        set $image_url $1?$args;
    }

    proxy_intercept_errors on;
    error_page 301 302 303 305 306 307 308 $fallback_image_url;
    error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 $fallback_image_url;
    error_page 500 501 502 503 504 505 506 507 508 509 510 511 520 522 598 599 $fallback_image_url;

    proxy_connect_timeout 2s;
    proxy_read_timeout 4s;
    proxy_pass_request_headers off;
    proxy_buffering off;
    proxy_redirect off;

    proxy_pass $image_url;
    proxy_set_header If-None-Match $http_if_none_match;
    proxy_set_header If-Modified-Since $http_if_modified_since;
}

I've scoured the docs, stackoverflow, and the internet in general but don't see how to address this problem. As I see it I have two options: 1) Find a way to make nginx not URL decode the param URL (doesn't seem possible) 2) The original $request_uri contains the URL encoded URL. Find a way to create a rewrite rule to strip off the prefix and proxy_pass to the resulting URL. I haven't found a way to do something like that as it appears rewrite rules will only operate on the URI in context and that URI appears to be decoded. I've found an entire chapter in "Nginx Troubleshooting" on creating a proxy for external links. But that example also appears to fall victim to this same problem. Any help/pointers would be appreciated as I am pretty well stuck at this point on an approach that might work. Thanks, -Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From dantullis at yahoo.com Thu Jul 26 18:43:21 2018 From: dantullis at yahoo.com (Dan Tullis) Date: Thu, 26 Jul 2018 18:43:21 +0000 (UTC) Subject: secure/hide "api.anothersite.com" from public and only allow "mysite.com" to access it via 127.0.0.1:50010 internally In-Reply-To: <2054972144.2937109.1532630361869@mail.yahoo.com> References: <2054972144.2937109.1532630361869.ref@mail.yahoo.com> <2054972144.2937109.1532630361869@mail.yahoo.com> Message-ID: <741107144.2935649.1532630601792@mail.yahoo.com> I would like to hide a backend API REST server from public view and have it accessed from the frontend web server locally/internally. Is this possible? Below are my setup and configs: angular/nodejs frontend app, say it is "mysite.com" running on server at 127.0.0.1:51910 nodejs backend app, say it is "api.anothersite.com" running on server at 127.0.0.1:50010 nginx (open source) listens for the server_name/domain and does a proxy_pass to the host/port listed above I currently can communicate back and forth with GET and POST requests and JSON responses. So far everything is great. However, besides just using CORS, I would now like to secure/hide "api.anothersite.com" from the public and just allow "mysite.com" to access 127.0.0.1:50010 internally instead of "api.anothersite.com" Can this be done via nginx?

server {
    server_name api.anothersite.com;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/anothersite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/anothersite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        #allow xx.xx.xx.xx;
        #allow 127.0.0.1;
        #deny all;
        proxy_pass http://127.0.0.1:50010;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    server_name mysite.com www.mysite.com;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://localhost:51910;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        # proxy_set_header Host $host;
        proxy_set_header Host mysite.com;
        proxy_cache_bypass $http_upgrade;
        proxy_pass_request_headers on;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = www.mysite.com) {
        return 301 https://$host$request_uri;
    }

    if ($host = mysite.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name mysite.com www.mysite.com;
    return 404;
}

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From mailinglist at unix-solution.de Thu Jul 26 18:47:38 2018 From: mailinglist at unix-solution.de (basti) Date: Thu, 26 Jul 2018 20:47:38 +0200 Subject: proxy_pass to dyndns address Message-ID: Hello, inside a location I have a proxy_pass to a hostname with a dynamic IP, for example

location ^~ /example/ {
    proxy_pass https://host1.dyndns.example.com;
}

getent hosts resolves the right IP, but via nginx it returns a 504. When I reload nginx it works until the IP changes. The DNS server for this is on the same host. TTL is only 300s. I have found the resolver directive, but I'm not sure if this is the right one because of the small TTL. Is there a way to get this working? Best Regards

From iippolitov at nginx.com Thu Jul 26 19:00:37 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Thu, 26 Jul 2018 22:00:37 +0300 Subject: Nginx url decoding URI problem with proxy_pass In-Reply-To: References: Message-ID: Michael, You can use rewrite. Just catch the host part:

>rewrite /image_preview/https://(?<my_host>[^:/]+)/(.*) /$2;
>proxy_pass https://$my_host;

rewrite will encode the URL back again. On 26.07.2018 20:18, Michael Kovacs wrote: > Greetings Nginx mailing list! > > I'm using nginx as an image proxy and am using proxy_pass to fetch the > image. Unfortunately if that image URL has a space in it the > proxy_pass fails. It works fine with any other image. > > example successful URL: > > /image_preview/https://somedomain.com/image.jpg > > > example failed URL: > > /image_preview/https://somedomain.com/My%20Images/image.jpg > > > ^^ > Nginx is URL decoding the url in the path and putting the space back in. > > Here's my nginx.conf > >         # redirect thumbnail url to real world
>         location ~ ^/image_preview/(.*?)$ {
? resolver ${HOST}; > > ? ? ? ? ? ? set $fallback_image_url ${FALLBACK_IMAGE_URL}; > ? ? ? ? ? ? set $image_url $1; > > ? ? ? ? ? ? if ($args) { > ? ? ? ? ? ? ? ? set $image_url $1?$args; > ? ? ? ? ? ? } > > ? ? ? ? ? ? proxy_intercept_errors on; > ? ? ? ? ? ? error_page 301 302 303 305 306 307 308 $fallback_image_url; > ? ? ? ? ? ? error_page 400 401 402 403 404 405 406 409 408 409 410 411 > 412 413 414 415 416 417 $fallback_image_url; > ? ? ? ? ? ? error_page 500 501 502 503 504 505 506 507 508 509 510 511 > 520 522 598 599 $fallback_image_url; > > ? ? ? ? ? ? proxy_connect_timeout 2s; > ? ? ? ? ? ? proxy_read_timeout 4s; > ? ? ? ? ? ? proxy_pass_request_headers off; > ? ? ? ? ? ? proxy_buffering off; > ? ? ? ? ? ? proxy_redirect off; > > ? ? ? ? ? ? proxy_pass $image_url; > ? ? ? ? ? ? proxy_set_header If-None-Match $http_if_none_match; > ? ? ? ? ? ? proxy_set_header If-Modified-Since $http_if_modified_since; > ? ? ? ? } > > I've scoured the docs, stackoverflow, and the internet in general but > don't see how to address this problem. As I see it I have two options: > > 1) Find a way to make nginx not URL decode the param URL (doesn't seem > possible) > 2) The original $request_uri contains the URL encoded URL. Find a way > to create a rewrite rule to strip off the prefix and proxy_pass to the > resulting URL. I haven't found a way to do something liek that as it > appears rewrite rules will only operate on the URI in context and that > URI appears to be decoded. > > I've found an entire chapter in "Nginx Troubleshooting" on creating a > proxy for external links. But that example also appears to fall vicitm > to this same problem. > > Any help/pointers would be appreciated as I am pretty well stuck at > this point on an? approach that might work. > > Thanks, > -Michael > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From shaun.tarves at jackpinetech.com Thu Jul 26 20:16:11 2018 From: shaun.tarves at jackpinetech.com (Shaun Tarves) Date: Thu, 26 Jul 2018 16:16:11 -0400 Subject: Large CRL file crashing nginx on reload Message-ID: Hi, We are trying to use nginx to support the DoD PKI infrastructure, which includes many DoD and contractor CRLs. The combined CRL file is over 350MB in size, which seems to crash nginx during a reload (at least on Red Hat 6). Our cert/key/crl set up is valid and working, and when only including a subset of the CRL files we have, reloads work fine. When we concatenate all the CRLs we need to support, the config reload request causes worker threads to become defunct and messages in the error log indicate the following: 2018/07/26 16:05:25 [alert] 30624#30624: fork() failed while spawning "worker process" (12: Cannot allocate memory) 2018/07/26 16:05:25 [alert] 30624#30624: sendmsg() failed (9: Bad file descriptor) 2018/07/26 16:08:42 [alert] 30624#30624: worker process 1611 exited on signal 9 Is there any way we can get nginx to support such a large volume of CRLs? -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Thu Jul 26 21:45:52 2018 From: iippolitov at nginx.com (Igor A. 
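A minimal sketch of Igor's suggestion with the named capture spelled out (the variable name and the resolver address are illustrative, not from the thread):

    location ~ ^/image_preview/ {
        resolver 8.8.8.8;   # needed: proxy_pass with a variable resolves at run time

        # capture the host, keep the rest of the path; per Igor's note,
        # rewrite re-encodes the resulting URI (the space becomes %20 again)
        rewrite ^/image_preview/https?://(?<image_host>[^ :/]+)/(.*)$ /$2 break;

        proxy_pass https://$image_host;
    }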
From shaun.tarves at jackpinetech.com Thu Jul 26 20:16:11 2018
From: shaun.tarves at jackpinetech.com (Shaun Tarves)
Date: Thu, 26 Jul 2018 16:16:11 -0400
Subject: Large CRL file crashing nginx on reload
Message-ID:

Hi,

We are trying to use nginx to support the DoD PKI infrastructure, which includes many DoD and contractor CRLs. The combined CRL file is over 350MB in size, which seems to crash nginx during a reload (at least on Red Hat 6). Our cert/key/CRL setup is valid and working, and when only including a subset of the CRL files we have, reloads work fine.

When we concatenate all the CRLs we need to support, the config reload request causes worker threads to become defunct, and messages in the error log indicate the following:

2018/07/26 16:05:25 [alert] 30624#30624: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2018/07/26 16:05:25 [alert] 30624#30624: sendmsg() failed (9: Bad file descriptor)
2018/07/26 16:08:42 [alert] 30624#30624: worker process 1611 exited on signal 9

Is there any way we can get nginx to support such a large volume of CRLs?

From iippolitov at nginx.com Thu Jul 26 21:45:52 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Fri, 27 Jul 2018 00:45:52 +0300
Subject: Large CRL file crashing nginx on reload
In-Reply-To:
References:
Message-ID: <4cd2ab6a-2c1f-d1b8-31ea-61b106c402df@nginx.com>

Shaun,

Can you post a snippet of how you include the CRL in your configuration, and the output of 'ps aux | grep nginx', please?

The wild guess is that you include the CRL several times, so that on reload you get twice as many workers as usual. You can try moving the ssl_crl statement into the http{} context.

From mdounin at mdounin.ru Fri Jul 27 00:13:00 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 27 Jul 2018 03:13:00 +0300
Subject: Large CRL file crashing nginx on reload
In-Reply-To:
References:
Message-ID: <20180727001300.GV56558@mdounin.ru>

Hello!

On Thu, Jul 26, 2018 at 04:16:11PM -0400, Shaun Tarves wrote:

> We are trying to use nginx to support the DoD PKI infrastructure, which
> includes many DoD and contractor CRLs. The combined CRL file is over 350MB
> in size, which seems to crash nginx during a reload (at least on Red Hat
> 6). Our cert/key/CRL setup is valid and working, and when only including a
> subset of the CRL files we have, reloads work fine.
>
> When we concatenate all the CRLs we need to support, the config reload
> request causes worker threads to become defunct and messages in the error
> log indicate the following:
>
> 2018/07/26 16:05:25 [alert] 30624#30624: fork() failed while spawning
> "worker process" (12: Cannot allocate memory)

The error suggests you've run out of memory.

> 2018/07/26 16:05:25 [alert] 30624#30624: sendmsg() failed (9: Bad file
> descriptor)
>
> 2018/07/26 16:08:42 [alert] 30624#30624: worker process 1611 exited on
> signal 9

And this one suggests an nginx worker was killed with signal 9, likely by the OOM killer. That is, again, you've run out of memory.

> Is there any way we can get nginx to support such a large volume
> of CRLs?

It looks like your problem is that you don't have enough memory for your configuration. The most trivial solution would be to add more memory.

Another possible solution would be to carefully inspect the configuration and, if possible, reduce the amount of memory required. In particular, when using such big CRLs it is important to only specify them in the configuration contexts where they are needed, as each SSL context with a CRL configured will load its own copy of the CRL.

-- 
Maxim Dounin
http://mdounin.ru/
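A minimal sketch of Maxim's last point, assuming only one virtual server actually verifies client certificates (server names are hypothetical; the file paths reuse the ones from this thread): keep ssl_crl and the client-verification directives only in that server, so only that SSL context loads the 350MB file:

    server {
        listen 443 ssl;
        server_name secure.example.com;           # hypothetical name
        ssl_certificate        /opt/xxx.pem;
        ssl_certificate_key    /opt/xxx.key;
        ssl_verify_client      optional;
        ssl_client_certificate /opt/ca.crt.pem;
        ssl_crl                /opt/ca.crl.pem;   # only this context loads the CRL
    }

    server {
        listen 443 ssl;
        server_name www.example.com;               # no client verification, no CRL copy
        ssl_certificate     /opt/xxx.pem;
        ssl_certificate_key /opt/xxx.key;
    }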
From nginx-forum at forum.nginx.org Fri Jul 27 06:13:41 2018
From: nginx-forum at forum.nginx.org (walwong)
Date: Fri, 27 Jul 2018 02:13:41 -0400
Subject: 10054: An existing connection was forcibly closed by the remote host
In-Reply-To: <42b94aba28ed83dd4a92a08ab347ec43.NginxMailingListEnglish@forum.nginx.org>
References: <42b94aba28ed83dd4a92a08ab347ec43.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Apply these settings to the proxy:

    keepalive_requests 500;
    proxy_http_version 1.1;

Context: http, server, location. Version 1.1 is recommended for use with keepalive connections.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267564,280683#msg-280683
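A fuller sketch of those settings in a proxy setup (the upstream name and port are examples, not from the thread); keeping connections to the upstream alive additionally requires clearing the Connection header:

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;                        # idle keepalive connections per worker
    }

    server {
        listen 80;
        location / {
            keepalive_requests 500;
            proxy_http_version 1.1;
            proxy_set_header Connection "";  # don't forward "Connection: close"
            proxy_pass http://backend;
        }
    }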
From dantullis at yahoo.com Fri Jul 27 14:22:45 2018
From: dantullis at yahoo.com (Dan Tullis)
Date: Fri, 27 Jul 2018 14:22:45 +0000 (UTC)
Subject: secure/hide "api.anothersite.com" from public and only allow "mysite.com" to access it via 127.0.0.1:50010 internally
In-Reply-To: <741107144.2935649.1532630601792@mail.yahoo.com>
References: <2054972144.2937109.1532630361869.ref@mail.yahoo.com> <2054972144.2937109.1532630361869@mail.yahoo.com> <741107144.2935649.1532630601792@mail.yahoo.com>
Message-ID: <1375806310.215604.1532701365459@mail.yahoo.com>

FYI - I believe I figured it out. Suggestions welcomed. Here is what I did:

On the frontend: instead of doing GETs and POSTs to "api.anothersite.com/api/messages", I now do the call to "mysite.com/api/messages".

On the backend: added an additional "location" similar to:

location /api/messages {
    # the backend server
    proxy_pass http://localhost:50010/api/messages/;
}

----- Forwarded Message -----

I would like to hide a backend API REST server from public view and have it accessed from the frontend web server locally/internally. Is this possible? Below are my setup and configs:

- angular/nodejs frontend app, say it is "mysite.com", running on the server at 127.0.0.1:51910
- nodejs backend app, say it is "api.anothersite.com", running on the server at 127.0.0.1:50010
- nginx (open source) listens for the server_name/domain and does a proxy_pass to the host/port listed above

I currently can communicate back and forth with GET and POST requests and JSON responses. So far everything is great. However, besides just using CORS, I would now like to secure/hide "api.anothersite.com" from the public and just allow "mysite.com" to access 127.0.0.1:50010 internally instead of "api.anothersite.com". Can this be done via nginx?

server {
    server_name api.anothersite.com;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/anothersite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/anothersite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        #allow xx.xx.xx.xx;
        #allow 127.0.0.1;
        #deny all;
        proxy_pass http://127.0.0.1:50010;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    server_name mysite.com www.mysite.com;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://localhost:51910;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        # proxy_set_header Host $host;
        proxy_set_header Host mysite.com;
        proxy_cache_bypass $http_upgrade;
        proxy_pass_request_headers on;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = www.mysite.com) {
        return 301 https://$host$request_uri;
    }

    if ($host = mysite.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name mysite.com www.mysite.com;
    return 404;
}

From shaun.tarves at jackpinetech.com Fri Jul 27 14:56:38 2018
From: shaun.tarves at jackpinetech.com (Shaun Tarves)
Date: Fri, 27 Jul 2018 10:56:38 -0400
Subject: Large CRL file crashing nginx on reload
In-Reply-To:
References:
Message-ID:

Here are the relevant parts of our configuration:

worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 512;
}
http {
    server {
        listen xx.xx.xx.xx:443 default_server ssl;
        ssl on;
        ssl_certificate /opt/xxx.pem;
        ssl_certificate_key /opt/xxx.key;
        ssl_ciphers 'AES128+EECDH:AES128+EDH:!aNULL';
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache shared:SSL:10m;
        ssl_prefer_server_ciphers on;
        ssl_verify_client optional;
        ssl_client_certificate /opt/ca.crt.pem;
        ssl_crl /opt/ca.crl.pem;
    }
}

During a "reload" command, here is how our ps looks:

[root at www nginx]# service nginx reload
Reloading nginx: [ OK ]
[root at www nginx]# ps -ef | grep nginx
root    9605     1  9 15:06 ?      00:00:17 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
cons3rt 9606  9605  0 15:06 ?      00:00:00 nginx: worker process
root   11009 27847  0 15:09 pts/2  00:00:00 grep nginx
[root at www nginx]# ps -ef | grep nginx
root    9605     1 10 15:06 ?      00:00:24 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
cons3rt 9606  9605  0 15:06 ?      00:00:00 nginx: worker process is shutting down
root   11091 27847  0 15:10 pts/2  00:00:00 grep nginx
[root at www nginx]# ps -ef | grep nginx
root    9605     1 10 15:06 ?      00:00:24 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
cons3rt 9606  9605  0 15:06 ?      00:00:00 nginx: worker process is shutting down
root   11362 27847  0 15:10 pts/2  00:00:00 grep nginx
[root at www nginx]# ps -ef | grep nginx
root    9605     1  9 15:06 ?      00:00:24 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
cons3rt 9606  9605  1 15:06 ?      00:00:02 nginx: worker process is shutting down
root   11395 27847  0 15:10 pts/2  00:00:00 grep nginx
[root at www nginx]# vi /var/log/nginx/error.log
[root at www nginx]# ps -ef | grep nginx
root    9605     1  7 15:06 ?      00:00:24 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
cons3rt 9606  9605  5 15:06 ?      00:00:19 nginx: worker process is shutting down
root   11771 27847  0 15:12 pts/2  00:00:00 grep nginx
[root at www nginx]# service nginx stop
Stopping nginx: [FAILED]
From mdounin at mdounin.ru Fri Jul 27 15:19:18 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 27 Jul 2018 18:19:18 +0300
Subject: Large CRL file crashing nginx on reload
In-Reply-To:
References:
Message-ID: <20180727151918.GX56558@mdounin.ru>

Hello!

On Fri, Jul 27, 2018 at 10:56:38AM -0400, Shaun Tarves wrote:

> Here are the relevant parts of our configuration:
>
> worker_processes 1;
> pid /var/run/nginx.pid;
> events {
>     worker_connections 512;
> }
> http {
>     server {
>         listen xx.xx.xx.xx:443 default_server ssl;
>         ssl on;
>         ssl_certificate /opt/xxx.pem;
>         ssl_certificate_key /opt/xxx.key;
>         ssl_ciphers 'AES128+EECDH:AES128+EDH:!aNULL';
>         ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>         ssl_session_cache shared:SSL:10m;
>         ssl_prefer_server_ciphers on;
>         ssl_verify_client optional;
>         ssl_client_certificate /opt/ca.crt.pem;
>         ssl_crl /opt/ca.crl.pem;
>     }
> }

Configuration looks fine - there is only one server{} block where the "ssl_crl" directive is used, so there should be only one copy of the CRL loaded per configuration.

Accordingly, it looks like you've simply run out of memory. Check the amount of memory available on your server (and/or memory limits, if any) and the amount of memory used by nginx with the CRL loaded.

Note that for the configuration reload to work you will need extra memory to load an additional copy of the configuration and to start new worker processes. See http://nginx.org/en/docs/control.html#reconfiguration for details on how configuration reload works.

-- 
Maxim Dounin
http://mdounin.ru/

From shaun.tarves at jackpinetech.com Fri Jul 27 18:18:26 2018
From: shaun.tarves at jackpinetech.com (Shaun Tarves)
Date: Fri, 27 Jul 2018 14:18:26 -0400
Subject: Large CRL file crashing nginx on reload
In-Reply-To:
References:
Message-ID:

That is exactly the issue. Seeing what the "reload" did to the memory (starting a new worker process) was the culprit. I was thinking the configuration reload would just refresh what's in memory, but it clearly doubles the memory requirement until the previous worker can stop gracefully.

Thank you for the help!
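A quick way to watch that doubled footprint for yourself - a sketch, assuming a Linux host with procps:

    # RSS (in KB) of all nginx processes; during a reload, old and new
    # workers briefly coexist, roughly doubling the memory in use
    ps -C nginx -o pid,rss,cmd
    nginx -s reload
    ps -C nginx -o pid,rss,cmd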
From al-nginx at none.at Sat Jul 28 16:41:59 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Sat, 28 Jul 2018 18:41:59 +0200
Subject: nginx -> httpd -> mod_jk -> tomcat
In-Reply-To: <1611295917.716835.1532592382462.JavaMail.zimbra@beta.srl>
References: <1611295917.716835.1532592382462.JavaMail.zimbra@beta.srl>
Message-ID: <20180728164159.GB23996@aleks-PC>

Hi.

On 26/07/2018 10:06, Giacomo Arru - BETA Technologies wrote:

> Hi everybody,
>
> I recently begun using proxy with nginx (same tests were made with
> haproxy).

Which one do you prefer, as both are very good and have similar features?

> My needs are to proxy for failover and balancing tomcat: I need to
> serve lots of users with a production app.
>
> While I understood that 100% tomcat AJP1.3 compatibility is
> achievable with apache httpd only and mod_jk, I successfully serve my
> app with apache on a simple http port 80 (cookie path already
> patched). So I decided to have a localhost apache httpd to proxy tomcat
> with AJP. It works perfectly.

For which particular feature do you need AJP? I have used several tomcats with httpd over AJP, but after some horrible errors we switched to the HTTP connector.

https://tomcat.apache.org/tomcat-9.0-doc/config/http.html

If you don't need a special feature of httpd, this also makes it possible to reduce the complexity of your chain ;-)

nginx (http/https/serve static files) -> tomcat

> Now, I need to proxy httpd with nginx, adding SSL with letsencrypt. I
> successfully configured the proxy and everything works but uploads: if I
> send a file to my app, only small uploads work.
>
> I'd like to investigate the headers; maybe I need to transform some
> string, but I'm a complete newbie from this point of view.
>
> Do you have some tips on how to investigate the problem?

By small, do you mean ~1M? That's the default for client_max_body_size:

https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

> Thanks,
>
> Giacomo Arru

Best regards
Aleks
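A one-line sketch of raising that default (the 128m value is just an example; the directive is also valid in server and location context):

    http {
        client_max_body_size 128m;
    }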
From nginx-forum at forum.nginx.org Mon Jul 30 06:58:56 2018
From: nginx-forum at forum.nginx.org (linsonj)
Date: Mon, 30 Jul 2018 02:58:56 -0400
Subject: Modify url at nginx
Message-ID:

Hello All,

We have a use case. Our web application is deployed in tomcat7. In front, nginx is configured as a reverse proxy: all requests pass through nginx and are forwarded to tomcat7. Nginx serves static files directly, and dynamic requests (JSON) are forwarded to tomcat7. At the backend, we have a MySQL DB to save the application settings.

What we want: when a client types https://test1.apphost.com, nginx sees the URL as test1.apphost.com. Before proxying the request to tomcat7, it should modify the URL to https://test.apphost.com so tomcat7 sees the client URL as test.apphost.com. Once the request is processed, the response is given back to nginx, and nginx gives it back to the original URL, https://test1.apphost.com.

This is needed because in our application database we map domain names to DB names, and currently only one domain name mapping entry is allowed. We want to allow multiple URLs to log in to our application from the client side. That means we use the modified URL (domain name) test.apphost.com in the database settings. When a client types https://test1.apphost.com, nginx should modify it to test.apphost.com, which matches the database mapping settings and thus allows a successful login.

We have the following nginx config settings in place:

server {
    listen 80;
    rewrite ^(.*) https://$host$1 permanent;
    error_page 500 502 503 504 /50x.html;
}

server {
    listen 443 ssl default_server;

    location /server {
        proxy_pass http://127.0.0.1:8080/server;
        proxy_connect_timeout 6000;
        proxy_send_timeout 6000;
        proxy_read_timeout 6000;
        proxy_request_buffering off;
        send_timeout 6000;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_temp_path /var/nginx/proxy_temp;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_cache sd6;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache_bypass $http_cache_control;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header Referrer-Policy "no-referrer";
    }

    ssl on;
    ssl_certificate /etc/nginx/ssl/example.com.bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 24h;
    keepalive_timeout 300;
    access_log /var/log/nginx/ssl-access.log;
    error_log /var/log/nginx/ssl-error.log;
}

It would be of great help if someone could advise us on how to modify the URL based on the use case explained above. Thank you.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280695,280695#msg-280695

From pratyush at hostindya.com Mon Jul 30 07:17:01 2018
From: pratyush at hostindya.com (Pratyush Kumar)
Date: Mon, 30 Jul 2018 12:47:01 +0530
Subject: Modify url at nginx
In-Reply-To:
Message-ID: <7cba93be-9d8b-4287-8f36-83118400be7f@email.android.com>

An HTML attachment was scrubbed...

From giacomo at beta.srl Mon Jul 30 10:44:48 2018
From: giacomo at beta.srl (Giacomo Arru - BETA Technologies)
Date: Mon, 30 Jul 2018 12:44:48 +0200 (CEST)
Subject: nginx -> httpd -> mod_jk -> tomcat
In-Reply-To: <20180728164159.GB23996@aleks-PC>
References: <1611295917.716835.1532592382462.JavaMail.zimbra@beta.srl> <20180728164159.GB23996@aleks-PC>
Message-ID: <399152655.9717192.1532947488692.JavaMail.zimbra@beta.srl>

Hi Aleksandar, thank you for your reply.

> For which particular feature do you need AJP? I have used several
> tomcats with httpd over AJP, but after some horrible errors we
> switched to the HTTP connector.

We wanted to use the HTTP connector, and set it up, but we couldn't make the app work correctly: we couldn't upload files larger than 60k, and tweaking the nginx and tomcat configuration didn't help. We were unable to debug the problem. With the AJP connector, all features work fine. The app is developed with Vaadin 8.

We have no static files in our app because they are already served from a CDN.

We would like to test nginx with the AJP connector (if it is somehow supported!).
client_max_body_size is set to 128m, so I think the problem occurs at the HTTP level while uploading files, but I haven't managed to debug my system.

Giacomo

From r1ch+nginx at teamliquid.net Mon Jul 30 16:02:41 2018
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Mon, 30 Jul 2018 18:02:41 +0200
Subject: proxy_pass to dyndns address
In-Reply-To:
References:
Message-ID:

nginx only resolves hostnames once, on startup. See this workaround:

https://github.com/DmitryFillo/nginx-proxy-pitfalls

On Thu, Jul 26, 2018 at 8:47 PM basti wrote:

> Hello,
>
> inside a location I have a proxy_pass to a hostname with a dynamic IP,
> for example:
>
> location ^~ /example/ {
>     proxy_pass https://host1.dyndns.example.com;
> }
>
> getent hosts resolves the right IP, but via nginx I get a 504.
>
> When I reload nginx, it works until the IP changes.
> The DNS server for this is on the same host; the TTL is only 300s.
>
> I have found the resolver directive, but I'm not sure whether it is the
> right one given the small TTL.
> Is there a way to get this working?
>
> Best Regards
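A minimal sketch of the usual workaround described there: referencing the upstream through a variable makes nginx resolve the name at request time via the resolver directive, instead of once at startup (the variable name is illustrative; the resolver address matches the "DNS server on the same host" from the question):

    resolver 127.0.0.1;   # honours the record's TTL by default;
                          # add valid=300s to override it explicitly

    location ^~ /example/ {
        set $upstream_host host1.dyndns.example.com;
        proxy_pass https://$upstream_host;
    }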
From mailinglist at unix-solution.de Mon Jul 30 16:56:02 2018
From: mailinglist at unix-solution.de (basti)
Date: Mon, 30 Jul 2018 18:56:02 +0200
Subject: proxy_pass to dyndns address
In-Reply-To:
References:
Message-ID:

Thanks a lot.

On 30.07.2018 18:02, Richard Stanway wrote:
> nginx only resolves hostnames once, on startup. See this workaround:
>
> https://github.com/DmitryFillo/nginx-proxy-pitfalls

From nginx-forum at forum.nginx.org Mon Jul 30 17:25:19 2018
From: nginx-forum at forum.nginx.org (jrmarsha)
Date: Mon, 30 Jul 2018 13:25:19 -0400
Subject: How to create a module to process a data stream during transfer
Message-ID: <6db98e5be12332fe47e10172347bfb4b.NginxMailingListEnglish@forum.nginx.org>

Hello all,

I am looking for a way to do two things in particular. The first is to be able to direct HTTP POSTs to a program's stdin (with arguments) and then take its stdout and put that back in the stream being uploaded; the second is to apply this to a flex/bison program or module. This is to handle 300GB files without saving them to disk, while still getting the important information out of them. I want the general stdin/stdout part because I think this should be generalized for uploads and downloads, so that things like streaming video become less of a problem.

Apache does not have its modules set up in such a way as to feasibly do this without a whole new major revision. Looking at nginx, it looks closer, but I'm not expert enough to know without help.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280708,280708#msg-280708

From al-nginx at none.at Mon Jul 30 18:11:57 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 30 Jul 2018 20:11:57 +0200
Subject: nginx -> httpd -> mod_jk -> tomcat
In-Reply-To: <399152655.9717192.1532947488692.JavaMail.zimbra@beta.srl>
References: <1611295917.716835.1532592382462.JavaMail.zimbra@beta.srl> <20180728164159.GB23996@aleks-PC> <399152655.9717192.1532947488692.JavaMail.zimbra@beta.srl>
Message-ID: <20180730181157.GD25232@aleks-PC>

On 30/07/2018 12:44, Giacomo Arru - BETA Technologies wrote:

> Hi Aleksandar, thank you for your reply.
>
>> For which particular feature do you need AJP? I have used several
>> tomcats with httpd over AJP, but after some horrible errors we
>> switched to the HTTP connector.
>
> We wanted to use the HTTP connector, and set it up, but we couldn't make
> the app work correctly: we couldn't upload files larger than 60k, and
> tweaking the nginx and tomcat configuration didn't help. We were unable
> to debug the problem. With the AJP connector, all features work fine.
> The app is developed with Vaadin 8.
Well, this sounds weird, as AJP and HTTP have a lot of common parts in httpd and tomcat. I can help you debug this behaviour if you are willing.

> We have no static files in our app because they are already served from
> a CDN.
>
> We would like to test nginx with the AJP connector (if it is somehow
> supported!).

Well, there is one AJP module, but I'm not sure how production-ready it is:

https://github.com/yaoweibin/nginx_ajp_module

> client_max_body_size is set to 128m

What's the message in the error log when the upload stops/breaks? Could you get a debug log?

https://nginx.org/en/docs/debugging_log.html

> so I think the problem occurs at the HTTP level while uploading files,
> but I haven't managed to debug my system.

As we don't know anything about your system, it would help a lot to get some info about your setup - versions and configs:

OS:
tomcat:
httpd:
nginx:
Environment: azure, google, aws, bare metal, ...?

> Giacomo

Best regards
Aleks

PS: If it's OT for the list, please let us know. Thanks.
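For reference, a minimal sketch of enabling that debug log (it requires an nginx binary built with --with-debug; the log path is an example):

    error_log /var/log/nginx/debug.log debug;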
From nginx-forum at forum.nginx.org Mon Jul 30 21:14:18 2018
From: nginx-forum at forum.nginx.org (George)
Date: Mon, 30 Jul 2018 17:14:18 -0400
Subject: nginx reuseport duplicate listen options ?
Message-ID:

I know that nginx reuseport is only usable once per ip:port pair, so I am confused about this error.

I have 3 nginx vhosts.

vhost #1:

server {
    listen 443 ssl http2 default_server backlog=2048 reuseport;
}

vhost #2:

server {
    listen 80 default_server backlog=2048 reuseport fastopen=256;
}

vhost #3:

server {
    listen 443 ssl http2;
}

This configuration works, and I see socket sharding in use on an 8-CPU-thread CentOS 7.5 64-bit server:

ss -lnt | egrep -e ':80 |:443 '
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*

But if I have the 3 nginx vhosts with reuseport used on vhost #3 instead of vhost #2, I get the error 'nginx: [emerg] duplicate listen options for 0.0.0.0:443 in':

vhost #1:

server {
    listen 443 ssl http2 default_server backlog=2048;
}

vhost #2:

server {
    listen 80 default_server backlog=2048 reuseport fastopen=256;
}

vhost #3:

server {
    listen 443 ssl http2 reuseport;
}

nginx 1.15.3 and 1.15.2 with GCC 7.3.1/8.2 or OpenSSL 1.1.0h/1.1.1-pre8 all result in the same error 'nginx: [emerg] duplicate listen options for 0.0.0.0:443 in' ???

nginx -V
nginx version: nginx/1.15.3 (260718-233400)
built by gcc 8.2.0 (GCC)
built with OpenSSL 1.1.1-pre8 (beta) 20 Jun 2018
TLS SNI support enabled
configure arguments: --with-ld-opt='-L/usr/local/lib -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib' --with-cc-opt='-I/usr/local/include -m64 -march=native -DTCP_FASTOPEN=23 -g -O3 -Wno-error=strict-aliasing -fstack-protector-strong -flto -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wimplicit-fallthrough=0 -fcode-hoisting -Wno-cast-function-type -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --build=260718-233400 --with-compat --with-http_stub_status_module --with-http_secure_link_module --add-dynamic-module=../nginx-module-vts --with-libatomic --with-http_gzip_static_module --add-dynamic-module=../ngx_brotli --with-http_sub_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_geoip_module --with-stream_geoip_module --with-stream_realip_module --with-stream_ssl_preread_module --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_realip_module --add-dynamic-module=../ngx-fancyindex-0.4.2 --add-module=../ngx_cache_purge-2.4.2 --add-module=../ngx_devel_kit-0.3.0 --add-dynamic-module=../set-misc-nginx-module-0.32 --add-dynamic-module=../echo-nginx-module-0.61 --add-module=../redis2-nginx-module-0.15 --add-module=../ngx_http_redis-0.3.7 --add-module=../memc-nginx-module-0.18 --add-module=../srcache-nginx-module-0.31 --add-dynamic-module=../headers-more-nginx-module-0.33 --with-pcre=../pcre-8.42 --with-pcre-jit --with-zlib=../zlib-cloudflare-1.3.0 --with-http_ssl_module --with-http_v2_module --with-openssl=../openssl-1.1.1-pre8 --with-openssl-opt='enable-ec_nistp_64_gcc_128 enable-tls1_3'

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280710,280710#msg-280710

From nginx-forum at forum.nginx.org Mon Jul 30 21:15:41 2018
From: nginx-forum at forum.nginx.org (George)
Date: Mon, 30 Jul 2018 17:15:41 -0400
Subject: nginx reuseport duplicate listen options ?
In-Reply-To:
References:
Message-ID: <419e2f8917bb217a9226b6f98797436c.NginxMailingListEnglish@forum.nginx.org>

Correction - meant vhost #1: 'but if I have the 3 nginx vhosts with reuseport used on vhost #3 instead of vhost #1, I get the error'.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280710,280711#msg-280711
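For readers hitting the same message: per the nginx listen documentation, socket-level parameters (backlog, reuseport, fastopen, and so on) may be specified in any listen directive, but only once for a given address:port pair. A sketch that avoids the error keeps them together on a single listen directive for that pair:

    # all socket-level options for *:443 on one listen directive
    server {
        listen 443 ssl http2 default_server backlog=2048 reuseport;
    }

    # further vhosts on the same port carry no extra socket options
    server {
        listen 443 ssl http2;
    }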
From francis at daoine.org Mon Jul 30 21:51:53 2018
From: francis at daoine.org (Francis Daly)
Date: Mon, 30 Jul 2018 22:51:53 +0100
Subject: Modify url at nginx
In-Reply-To:
References:
Message-ID: <20180730215153.GB18403@daoine.org>

On Mon, Jul 30, 2018 at 02:58:56AM -0400, linsonj wrote:

Hi there,

> What we want: when a client types https://test1.apphost.com, nginx sees
> the URL as test1.apphost.com. Before proxying the request to tomcat7, it
> should modify the URL to https://test.apphost.com so tomcat7 sees the
> client URL as test.apphost.com. Once the request is processed, the
> response is given back to nginx, and nginx gives it back to the original
> URL, https://test1.apphost.com.

Untested, but I suggest you add an

    upstream test.apphost.com {
        server 127.0.0.1:8080;
    }

and then make the following changes:

> location /server {
>
>     proxy_pass http://127.0.0.1:8080/server;

change to

    proxy_pass http://test.apphost.com;

> proxy_set_header Host $host;

Remove that.

> proxy_redirect off;

Remove that.

> It would be of great help if someone could advise us on how to modify
> the URL based on the use case explained above.

If you don't add the "upstream", then you should change

    proxy_set_header Host $host;

to

    proxy_set_header Host test.apphost.com;

because that is the Host: header that you say you want tomcat to get.

proxy_redirect (http://nginx.org/r/proxy_redirect) will rewrite a http Location: response header, if it is given the chance.

The only place I think that this should fail is if the tomcat service returns http body content which refers to test.apphost.com. Ideally, it shouldn't, or can be configured not to.

Good luck with it,

f
-- 
Francis Daly
francis at daoine.org
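Putting Francis's (explicitly untested) suggestions together, the resulting sketch looks like this:

    upstream test.apphost.com {
        server 127.0.0.1:8080;
    }

    server {
        listen 443 ssl default_server;

        location /server {
            # with no proxy_set_header Host, the Host: header sent upstream
            # defaults to the proxied server name, i.e. test.apphost.com
            proxy_pass http://test.apphost.com;
        }
    }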
From anoopalias01 at gmail.com Tue Jul 31 04:22:29 2018
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Tue, 31 Jul 2018 09:52:29 +0530
Subject: posix_memalign error
Message-ID:

I am repeatedly seeing errors like

######################
2018/07/31 03:46:33 [emerg] 2854560#2854560: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 03:54:09 [emerg] 2890190#2890190: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 04:08:36 [emerg] 2939230#2939230: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 04:24:48 [emerg] 2992650#2992650: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 04:42:09 [emerg] 3053092#3053092: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 04:42:17 [emerg] 3053335#3053335: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 04:42:28 [emerg] 3053937#3053937: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
2018/07/31 04:47:54 [emerg] 3070638#3070638: posix_memalign(16, 16384) failed (12: Cannot allocate memory)
####################

on a few servers.

The servers have enough memory free and swap usage is 0, yet somehow the kernel denies the posix_memalign with ENOMEM (this is what I think is happening!).

The numbers requested are always 16, 16k. This makes me suspicious, as I have no setting in nginx.conf that references 16k.

Is there any chance of finding out what requests this and why it is not fulfilled?

-- 
Anoop P Alias

From gschultz at bayoutech.co Tue Jul 31 07:49:31 2018
From: gschultz at bayoutech.co (Gregory Schultz)
Date: Tue, 31 Jul 2018 00:49:31 -0700
Subject: Getting NGINX to view an alias
Message-ID:

Hi,

I'm new to NGINX and I'm having difficulty setting it up to read an alias. I'm setting up adminer on NGINX to use an alias to see a file outside of its main directory. The file is called latest.php in /usr/share/adminer. I created a symlink to link adminer.php to latest.php. I'm trying to access adminer through /admin/adminer.php, but it returns a 404 when I try to access the file.

My config file:

server {
    listen 80;
    listen [::]:80;

    include /etc/nginx-rc/conf.d/[site]/main.conf;

    location /admin/ {
        alias /usr/share/adminer/;
    }
}

Thanks

From mdounin at mdounin.ru Tue Jul 31 13:38:13 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 31 Jul 2018 16:38:13 +0300
Subject: posix_memalign error
In-Reply-To:
References:
Message-ID: <20180731133813.GG56558@mdounin.ru>

Hello!

On Tue, Jul 31, 2018 at 09:52:29AM +0530, Anoop Alias wrote:

> The numbers requested are always 16, 16k. This makes me suspicious, as I
> have no setting in nginx.conf that references 16k.
>
> Is there any chance of finding out what requests this and why it is not
> fulfilled?

There are at least some buffers which default to 16k - for example, ssl_buffer_size (http://nginx.org/r/ssl_buffer_size).

You may try the debugging log to further find out where the particular allocation happens; see here for details:

http://nginx.org/en/docs/debugging_log.html

But I don't really think it's worth the effort. The error is pretty clear, and it's better to focus on why these allocations are denied. Likely you are hitting some limit.

-- 
Maxim Dounin
http://mdounin.ru/
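A quick way to start checking for such limits - a sketch, assuming a Linux host with procps:

    # overall memory state
    free -m

    # per-process limits of one running nginx worker; look at
    # "Max address space" and "Max data size"
    PID=$(pgrep -f 'nginx: worker' | head -n1)
    cat /proc/$PID/limits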
From kelsey.dannels at nginx.com Tue Jul 31 15:01:00 2018
From: kelsey.dannels at nginx.com (Kelsey Dannels)
Date: Tue, 31 Jul 2018 11:01:00 -0400
Subject: 2018 NGINX User Survey: Help Us Shape the Future
Message-ID:

Hello-

My name is Kelsey and I recently joined the NGINX team. I'm reaching out because it's that time of year for the annual NGINX User Survey. We're always eager to hear about your experiences to help us evolve, improve and shape our product roadmap. Please take ten minutes to share your thoughts:

https://nkadmin.typeform.com/to/e1A4mJ?source=email

Best,
Kelsey

--
Join us at NGINX Conf 2018, Oct 8-11, Atlanta, GA

Kelsey Dannels
Marketing Communication Specialist
Mobile: 650 773 1046
San Francisco