From jombik at platon.org Tue May 1 06:47:15 2018 From: jombik at platon.org (Ondrej Jombik) Date: Tue, 1 May 2018 08:47:15 +0200 (CEST) Subject: Knowing the server port inside Perl code Message-ID: When using the mail module for SMTP and doing auth in Perl code, it can be handy to know the entry port number. For example 25/TCP, 465/TCP or 587/TCP; those are the most used ones. I thought this would be somewhere among the provided headers: $request->header_in('Auth-Method'); $request->header_in('Auth-Protocol'); $request->header_in('Auth-User'); $request->header_in('Auth-Pass'); $request->header_in('Auth-Salt'); $request->header_in('Client-IP'); $request->header_in('Client-Host'); [... ...] However, there is nothing like 'Auth-Port', 'Client-Port', 'Server-Port' or any other port header. 'Auth-Protocol' is no help, because we have the same protocol running on multiple ports; typically 25/TCP is the same as 587/TCP when sending e-mails with auth. So I tried to help myself: proxy on; auth_http_header Auth-Port $server_port; auth_http 127.0.0.1:80/auth; proxy_pass_error_message on; - or - auth_http_header Auth-Port $proxy_port; But none of those worked. How can I know the entry port number inside Perl code? -- Ondrej JOMBIK Platon Technologies s.r.o., Hlavna 3, Sala SK-92701 +421222111321 - info at platon.net - http://platon.net Read our latest blog: https://blog.platon.sk/icann-sknic-tld-problemy/ My current location: Phoenix, Arizona My current timezone: -0700 GMT (MST) (updated automatically) From nginx-forum at forum.nginx.org Tue May 1 09:28:44 2018 From: nginx-forum at forum.nginx.org (Winfried) Date: Tue, 01 May 2018 05:28:44 -0400 Subject: Simple steps to harden Nginx for home use? Message-ID: <70532bb69e43078c576f3540d0b03c20.NginxMailingListEnglish@forum.nginx.org> Hello, I use Nginx on a home Debian appliance to run a couple of personal web sites. It's the only port reachable from the Net through the ADSL modem with NAT firewall enabled.
Recently, the server was no longer responding and I couldn't log on: [code] (initramfs) root /bin/sh: root: not found [/code] Since I was in a rush, I simply wiped the USB keydrive clean, reinstalled Debian and the htdocs. Provided it was a hack and not some internal issue (keydrive?), are there simple steps I can take to harden Nginx? Thank you. PS: I use apt to install applications. FWIW, here's what "nginx -V" says after installing it from the repository: nginx version: nginx/1.10.3 built with OpenSSL 1.1.0f 25 May 2017 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-re6b6X/nginx-1.10.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_flv_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_secure_link_module --with-http_sub_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-stream=dynamic --with-stream_ssl_module
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/headers-more-nginx-module --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-cache-purge --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-development-kit --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/ngx-fancyindex --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nchan --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-lua --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-upload-progress --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/ngx_http_substitutions_filter_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279655,279655#msg-279655 From cult at free.fr Tue May 1 11:52:39 2018 From: cult at free.fr (Vincent) Date: Tue, 1 May 2018 13:52:39 +0200 Subject: Configure Nginx Fast CGI cache ON error_page 404 Message-ID: <9b6ddf52-7105-b669-e76a-299dcb25ea2d@free.fr> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 2 09:34:04 2018 From: nginx-forum at forum.nginx.org (bmrf) Date: Wed, 02 May 2018 05:34:04 -0400 Subject: Regex in proxy_hide_header Message-ID: <4acc1d7f0297d0cf9d30ac0b9716eee0.NginxMailingListEnglish@forum.nginx.org> Hi list, I was trying to unset/delete a header using proxy_hide_header. 
The problem is that the header name is always unknown, but it always has the same pattern: it starts with several whitespaces followed by random characters, something like \s+\w+. If regex is not supported in proxy_hide_header, as seems to be the case, is there any other way to accomplish this? Thanks a lot! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279657,279657#msg-279657 From oleg at mamontov.net Wed May 2 10:30:08 2018 From: oleg at mamontov.net (Oleg A. Mamontov) Date: Wed, 2 May 2018 13:30:08 +0300 Subject: Regex in proxy_hide_header In-Reply-To: <4acc1d7f0297d0cf9d30ac0b9716eee0.NginxMailingListEnglish@forum.nginx.org> References: <4acc1d7f0297d0cf9d30ac0b9716eee0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net> On Wed, May 02, 2018 at 09:34:04AM +0000, bmrf wrote: >Hi list, > >I was trying to unset/delete a header using proxy_hide_header. The problem >is that the header name is always unknown, but it has always the same >pattern, it starts with several whitespaces followed by random characters, >something like \s+\w+ > >If regex is not supported at proxy_hide_header, as it seems it is, is there >any other way to accomplish this? Probably it makes sense to take a look: https://github.com/openresty/headers-more-nginx-module#more_clear_headers "The wildcard character, *, can also be used at the end of the header name to specify a pattern." > >Thanks a lot! -- Cheers, Oleg A. Mamontov mailto: oleg at mamontov.net skype: lonerr11 cell: +7 (903) 798-1352 From mdounin at mdounin.ru Wed May 2 11:08:43 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 May 2018 14:08:43 +0300 Subject: Knowing the server port inside Perl code In-Reply-To: References: Message-ID: <20180502110843.GE32137@mdounin.ru> Hello! On Tue, May 01, 2018 at 08:47:15AM +0200, Ondrej Jombik wrote: > When using mail module for SMTP and doing auth using Perl code, it might > be handy to know entry port number.
For example 25/TCP, 465/TCP or > 587/TCP; those are the most used ones. > > I thought this would be somewhere among provided headers: > > $request->header_in('Auth-Method'); > $request->header_in('Auth-Protocol'); > $request->header_in('Auth-User'); > $request->header_in('Auth-Pass'); > $request->header_in('Auth-Salt'); > $request->header_in('Client-IP'); > $request->header_in('Client-Host'); > [... ...] > > However there is nothing like 'Auth-Port', or 'Client-Port' or > 'Server-Port' or any port. > > 'Auth-Protocol' is no help, because we have same protocol running on > multiple ports; typically 25/TCP is the same as 587/TCP when sending > e-mails with auth. > > So I tried to help myself: > > proxy on; > auth_http_header Auth-Port $server_port; > auth_http 127.0.0.1:80/auth; > proxy_pass_error_message on; > > - or - > > auth_http_header Auth-Port $proxy_port; > > But none of those worked. > > How I can know entry port number inside Perl code? If you really want to know server port, you can get one by configuring different auth_http_header in server{} blocks listening on different ports. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed May 2 13:14:58 2018 From: nginx-forum at forum.nginx.org (bmrf) Date: Wed, 02 May 2018 09:14:58 -0400 Subject: Regex in proxy_hide_header In-Reply-To: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net> References: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net> Message-ID: Oleg A. Mamontov Wrote: ------------------------------------------------------- > On Wed, May 02, 2018 at 09:34:04AM +0000, bmrf wrote: > >Hi list, > > > >I was trying to unset/delete a header using proxy_hide_header. The > problem > >is that the header name is always unknown, but it has always the same > >pattern, it starts with several whitespaces followed by random > characters, > >something like \s+\w+ > > > >If regex is not supported at proxy_hide_header, as it seems it is, > is there > >any other way to accomplish this? 
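Maxim's suggestion can be sketched as a mail block with one server{} per port, each sending its own header to the auth script. This is an untested sketch; the "Auth-Port" header name and the auth endpoint address are assumptions, not tested configuration:

```nginx
mail {
    # shared auth endpoint for all listeners (address assumed)
    auth_http 127.0.0.1:80/auth;

    server {
        listen 25;
        protocol smtp;
        # announce the entry port of this listener to the auth script
        auth_http_header Auth-Port 25;
    }

    server {
        listen 587;
        protocol smtp;
        auth_http_header Auth-Port 587;
    }
}
```

The Perl auth handler could then read the value the same way as the other headers, with $request->header_in('Auth-Port').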
> > Probably it makes sense to take a look: > https://github.com/openresty/headers-more-nginx-module#more_clear_head > ers > > "The wildcard character, *, can also be used at the end of the header > name to specify a pattern." The header I need to delete is always different: each time a request is made it is different, and always with this weird pattern \s+\w+ (4 whitespaces followed by 8 random characters). Some real examples (cut to 1 whitespace character here, but there are 4): " XkIOPalY" " peYhKOlx" " KpyTKolq" So using the headers-more-nginx-module wildcard character, *, at the end of the header name does not help here. Anyway, thank you, and any other suggestion is more than welcome. > > > >Thanks a lot! > > -- > Cheers, > Oleg A. Mamontov > > mailto: oleg at mamontov.net > > skype: lonerr11 > cell: +7 (903) 798-1352 > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279657,279660#msg-279660 From mephystoonhell at gmail.com Thu May 3 08:30:20 2018 From: mephystoonhell at gmail.com (Mephysto On Hell) Date: Thu, 3 May 2018 10:30:20 +0200 Subject: Proxy pass and SSL certificates Message-ID: Hello everyone, I have been using Nginx in a production environment for some years, but I am almost a newbie with SSL certificates and connections. At the moment I have a configuration with two levels: 1. A first-level Nginx that operates as a load balancer 2. Two second-level Nginx servers: the first hosts a web site and does not need an SSL connection; the second hosts an Owncloud instance and does need an SSL connection. I am using Certbot and Let's Encrypt to generate signed certificates. At the moment I have certificates installed at both levels, and until last month this configuration was working. After certificate renewal (every three months) I am getting an ERR_CERT_DATE_INVALID error and cannot access Owncloud.
Only the second-level certificate has been renewed. But if I try to connect directly to the second-level Nginx, I do not get any error and I can access Owncloud. This is the first-level Nginx config: upstream cloud { server 10.39.0.52; } upstream cloud_ssl { server 10.39.0.52:443; } server { listen 80 default_server; listen [::]:80 default_server; server_name cloud.diakont.it cloud.diakont.srl; return 301 https://$server_name$request_uri; } server { listen 443 ssl default_server; listen [::]:443 ssl default_server; ssl on; server_name cloud.diakont.it cloud.diakont.srl; include snippets/cloud.diakont.it.conf; include snippets/ssl-params.conf; error_log /var/log/nginx/cloudssl.diakont.it.error.log info; access_log /var/log/nginx/cloudssl.diakont.it.access.log; location / { proxy_pass https://cloud_ssl/; proxy_redirect default; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; } } I would like the first-level Nginx to establish an SSL connection with Owncloud without having to renew the certificates at both levels. Is it possible? How do I have to change my config? Thanks in advance. Meph -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu May 3 11:42:01 2018 From: nginx-forum at forum.nginx.org (Joncheski) Date: Thu, 03 May 2018 07:42:01 -0400 Subject: Reverse proxy from NGINX to Keycloak with 2FA In-Reply-To: <20180430223548.GC19311@daoine.org> References: <20180430223548.GC19311@daoine.org> Message-ID: <3b35f35a31e995482a6c710f8d87ae94.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thanks for your reply. I have tried with a tcp port forwarder ("stream"), but the host is changed to the client's url, which sends clients directly to Keycloak; I do not want direct access to Keycloak, so I use proxy.
Keycloak has been configured to verify a client certificate that needs its CN to be identically with the username you enter, normally have keystore and truststore installed to check from whom it was issued and signed (which is associated with Key Management System for whether it is invalid or revoke). I have done it and can NGINX check the client certificate (I add these things: ssl_client_certificate path-of-root-ca, and ssl_verify_client on), whether it has been issued and signed by my PKI Key Management System, but the problem is that the user can submit a certificate from one user, and in Keycloak to announce with another. I want to stop this thing, so I have a full 2FA. Keycloak is the only one to check it. I want to ask you, can the client certificate that is attached to NGINX through the ssl_verify_client option be forwarded to Keycloak? Best regards, Goce Joncheski Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279549,279663#msg-279663 From oleg at mamontov.net Thu May 3 12:02:06 2018 From: oleg at mamontov.net (Oleg A. Mamontov) Date: Thu, 3 May 2018 15:02:06 +0300 Subject: Regex in proxy_hide_header In-Reply-To: References: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net> Message-ID: <20180503120206.fz5hzbx7ytwu6mfl@xenon.mamontov.net> On Wed, May 02, 2018 at 01:14:58PM +0000, bmrf wrote: >Oleg A. Mamontov Wrote: >------------------------------------------------------- >> On Wed, May 02, 2018 at 09:34:04AM +0000, bmrf wrote: >> >Hi list, >> > >> >I was trying to unset/delete a header using proxy_hide_header. The >> problem >> >is that the header name is always unknown, but it has always the same >> >pattern, it starts with several whitespaces followed by random >> characters, >> >something like \s+\w+ >> > >> >If regex is not supported at proxy_hide_header, as it seems it is, >> is there >> >any other way to accomplish this? 
>> >> Probably it makes sense to take a look: >> https://github.com/openresty/headers-more-nginx-module#more_clear_head >> ers >> >> "The wildcard character, *, can also be used at the end of the header >> name to specify a pattern." > >The header I need to delete is always different, each time a request is done >it is different and alway with this weird patter \s+\w+. (4 whitespaces >followed by 8 random characters) > >Some real examples, it's cut to 1 whitespace character, but there're 4: > >" XkIOPalY" >" peYhKOlx" >" KpyTKolq" > >So using headers-more-nginx-module wildcard character, *, at the end of the >header name does not help here. Anyway, thank you and if you have any other >suggestion it's more than welcome. Okay, so it seems that https://github.com/openresty/lua-nginx-module#header_filter_by_lua_block using iteration over https://github.com/openresty/lua-nginx-module#ngxrespget_headers is what you're looking for. >> > >> >Thanks a lot! -- Cheers, Oleg A. Mamontov mailto: oleg at mamontov.net skype: lonerr11 cell: +7 (903) 798-1352 From nginx-forum at forum.nginx.org Fri May 4 11:34:40 2018 From: nginx-forum at forum.nginx.org (Joncheski) Date: Fri, 04 May 2018 07:34:40 -0400 Subject: Proxy pass and SSL certificates In-Reply-To: References: Message-ID: <8fea04109fd128f4d2d21fe7cefd1575.NginxMailingListEnglish@forum.nginx.org> Hello Meph, Can you send the other configuration files (ssl-params.conf and cloud.diakont.it.conf) which you include in this configuration? Also, in "location /" you do not need the "proxy_redirect default;" line, because that is the default.
Best regards, Goce Joncheski Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279665,279674#msg-279674 From mephystoonhell at gmail.com Fri May 4 12:32:20 2018 From: mephystoonhell at gmail.com (Mephysto On Hell) Date: Fri, 4 May 2018 14:32:20 +0200 Subject: Proxy pass and SSL certificates In-Reply-To: <8fea04109fd128f4d2d21fe7cefd1575.NginxMailingListEnglish@forum.nginx.org> References: <8fea04109fd128f4d2d21fe7cefd1575.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello Goce, thank you very much for your answer. I have attached the files you requested to this email. On 4 May 2018 at 13:34, Joncheski wrote: > Hello Meph, > > Can you send the other configuration file ( ssl-params.conf and > cloud.diakont.it.conf ) which you call in this configuration. > And in "location /" , you need to enter this "proxy_redirect default;" > because this is default argument. > > Best regards, > Goce Joncheski > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,279665,279674#msg-279674 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ssl-params.conf Type: application/octet-stream Size: 747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: cloud.diakont.it.conf Type: application/octet-stream Size: 143 bytes Desc: not available URL: From francis at daoine.org Fri May 4 13:22:08 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 4 May 2018 14:22:08 +0100 Subject: Reverse proxy from NGINX to Keycloak with 2FA In-Reply-To: <3b35f35a31e995482a6c710f8d87ae94.NginxMailingListEnglish@forum.nginx.org> References: <20180430223548.GC19311@daoine.org> <3b35f35a31e995482a6c710f8d87ae94.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180504132208.GD19311@daoine.org> On Thu, May 03, 2018 at 07:42:01AM -0400, Joncheski wrote: Hi there, > I have tried with tcp port forwarder ("stream") but my host is changed to > the client's url, which directly sends me to Keycloak, which I do not want > to have direct access to Keycloak, so I use proxy. The end-client must not talk to Keycloak. Ok. Keycloak wants to get the client certificate, and some indication that the connecting client has the private key that is associated with the certificate. (Effectively, the certificate is "the username", and the private key is "the password".) Normally, Keycloak would be able to verify that the client has the matching private key, because the ssl connection between Keycloak and the client would demonstrate that. You do not want that to happen. So you must configure Keycloak (if it is possible) to believe nginx when it says that this client has the private key that matches the included certificate (because nginx used the ssl connection between nginx and the client to demonstrate that). > Keycloak has been configured to verify a client certificate that needs its > CN to be identically with the username you enter, normally have keystore and > truststore installed to check from whom it was issued and signed (which is > associated with Key Management System for whether it is invalid or revoke). 
Nginx can give the client certificate to Keycloak, and Keycloak can confirm that the certificate was issued by the correct Certificate Authority, and can check whatever it wants about the CN. But Keycloak cannot directly confirm that the client has the matching private key -- it must be told to believe nginx when nginx says that the client has the matching private key. > I have done it and can NGINX check the client certificate (I add these > things: ssl_client_certificate path-of-root-ca, and ssl_verify_client on), Yes, nginx could check that (but it probably does not need to, if Keycloak will be checking it anyway). > whether it has been issued and signed by my PKI Key Management System, but > the problem is that the user can submit a certificate from one user, and in > Keycloak to announce with another. I want to stop this thing, so I have a > full 2FA. Keycloak is the only one to check it. I don't understand what you mean there. That's ok; I don't have to understand. So long as you are happy that it makes sense to you, that's good enough. > I want to ask you, can the client certificate that is attached to NGINX > through the ssl_verify_client option be forwarded to Keycloak? Yes. http://nginx.org/r/ssl_verify_client The contents of the certificate is accessible through the $ssl_client_cert variable. You can tell nginx to include that variable in a http header, for example, that you tell Keycloak to read and believe that the client has the matching private key. The whole thing cannot be done without configuration within Keycloak. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri May 4 14:04:02 2018 From: nginx-forum at forum.nginx.org (rsckp) Date: Fri, 04 May 2018 10:04:02 -0400 Subject: using return (http_rewrite) with etag Message-ID: <6ebb0b4fb938361f5b68189bb39d7d9b.NginxMailingListEnglish@forum.nginx.org> Hi guys, In my configuration I'm using return directive from http_rewrite module. 
I'd also like to enable etag to speed things up. Sadly, so far didn't manage to get it to work. Is such configuration even possible? If I hash out "return...", etag works like a charm. server { listen 80 default_server; root /var/www/html; index index.nginx-debian.html; default_type application/json; etag on; return 200 'xxx'; } Debian 9.4, nginx-light 1.10.3-1+deb9u1. Thanks in advance for any thoughts. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279680,279680#msg-279680 From mdounin at mdounin.ru Fri May 4 14:18:59 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 May 2018 17:18:59 +0300 Subject: using return (http_rewrite) with etag In-Reply-To: <6ebb0b4fb938361f5b68189bb39d7d9b.NginxMailingListEnglish@forum.nginx.org> References: <6ebb0b4fb938361f5b68189bb39d7d9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180504141859.GI32137@mdounin.ru> Hello! On Fri, May 04, 2018 at 10:04:02AM -0400, rsckp wrote: > Hi guys, > > In my configuration I'm using return directive from http_rewrite module. I'd > also like to enable etag to speed things up. Sadly, so far didn't manage to > get it to work. Is such configuration even possible? > > If I hash out "return...", etag works like a charm. > > server { > listen 80 default_server; > root /var/www/html; > index index.nginx-debian.html; > > default_type application/json; > etag on; > return 200 'xxx'; > } > > Debian 9.4, nginx-light 1.10.3-1+deb9u1. > > Thanks in advance for any thoughts. The "etag" directive controls whether entity tags will be generated for static files. Entity tags (as well as Last-Modified headers) are never generated for responses produced with the "return" directive. 
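A workaround sketch, under the assumption that the fixed body can be kept in a file: serve it as a static file instead of using "return", so the ETag and Last-Modified headers are generated from the file's metadata (the /payload.json name is an assumption):

```nginx
server {
    listen 80 default_server;
    root /var/www/html;
    default_type application/json;
    etag on;   # the default for static files anyway

    location / {
        # serve the fixed JSON body from disk instead of "return 200 'xxx'",
        # so nginx can generate ETag/Last-Modified from the file
        try_files /payload.json =404;
    }
}
```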
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri May 4 16:52:01 2018 From: nginx-forum at forum.nginx.org (bmrf) Date: Fri, 04 May 2018 12:52:01 -0400 Subject: Regex in proxy_hide_header In-Reply-To: <20180503120206.fz5hzbx7ytwu6mfl@xenon.mamontov.net> References: <20180503120206.fz5hzbx7ytwu6mfl@xenon.mamontov.net> Message-ID: <45b95039ca2a419a489a1a94a6b3ce98.NginxMailingListEnglish@forum.nginx.org> Thanks Oleg! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279657,279683#msg-279683 From b631093f-779b-4d67-9ffe-5f6d5b1d3f8a at protonmail.ch Sat May 5 11:21:21 2018 From: b631093f-779b-4d67-9ffe-5f6d5b1d3f8a at protonmail.ch (Bob Smith) Date: Sat, 05 May 2018 07:21:21 -0400 Subject: NGINX mangling rewrites when encoded URLs present Message-ID: nginx version: nginx/1.13.12 This is my rewrite: location / { rewrite ^/(.*)$ https://example.net/$1 permanent; } I am getting some really odd behavior. For example: mysubdomain.example.com/CL0/https:%2F%2Fapple.com Gets re-written to example.net/CLO/https:/apple.com Only one forward-slash, not two before apple.com. The original declaration was %2F%2F ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Sat May 5 22:17:17 2018 From: r at roze.lv (Reinis Rozitis) Date: Sun, 6 May 2018 01:17:17 +0300 Subject: NGINX mangling rewrites when encoded URLs present In-Reply-To: References: Message-ID: <000001d3e4be$d43d1b50$7cb751f0$@roze.lv> > rewrite ^/(.*)$ https://example.net/$1 permanent; > ... > > Gets re-written to > > example.net/CLO/https:/apple.com > > Only one forward-slash, not two before apple.com. The original declaration was %2F%2F ? It's probably because that way the $1 is/gets url-decoded and merge_slashes kicks in ( http://nginx.org/en/docs/http/ngx_http_core_module.html#merge_slashes ). 
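The effect Reinis describes can be modelled outside nginx. The following Python sketch is only a rough model (nginx's actual processing order differs): the captured $1 corresponds to the url-decoded path, and slash merging then collapses the "//" that used to be %2F%2F:

```python
import re
from urllib.parse import unquote

# the path from the report above
path = "/CL0/https:%2F%2Fapple.com"

# a rewrite capture such as $1 sees the url-decoded URI
decoded = unquote(path)

# merge_slashes (on by default) collapses runs of slashes
merged = re.sub(r"/{2,}", "/", decoded)

print(merged)   # -> "/CL0/https:/apple.com"
```

Using $request_uri instead of a capture avoids this, since $request_uri carries the original, still-encoded request line.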
Try something like: location / { return 301 https://example.net$request_uri; } rr From nginx-forum at forum.nginx.org Sun May 6 07:44:11 2018 From: nginx-forum at forum.nginx.org (Ortal) Date: Sun, 06 May 2018 03:44:11 -0400 Subject: ngx http upstream request body Message-ID: Hello, I am building an nginx module using ngx_http_upstream. I am using the ngx_http_request_t struct, and I would like to know whether my assumption is correct that request_body->bufs will not be reused (freed) until the connection is finalized? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279690,279690#msg-279690 From Ajay_Sonawane at symantec.com Mon May 7 05:15:34 2018 From: Ajay_Sonawane at symantec.com (Ajay Sonawane) Date: Mon, 7 May 2018 05:15:34 +0000 Subject: Connect to NGINX reverse proxy through proxy Message-ID: I am using NGINX as an HTTPS reverse proxy and load balancer. My clients are able to connect to the reverse proxy using SSL, and the reverse proxy is able to terminate the SSL connection and establish a new connection with the backend server; data exchange is also happening. Now I am trying to set up a proxy between a client and NGINX. I am using a SQUID proxy in between. I have enabled proxy protocol on nginx using listen 443 ssl proxy_protocol; proxy_protocol on; Still my client is not able to connect to NGINX through the proxy. Is there anything else I need to do? Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 7 06:36:26 2018 From: nginx-forum at forum.nginx.org (rsckp) Date: Mon, 07 May 2018 02:36:26 -0400 Subject: using return (http_rewrite) with etag In-Reply-To: <20180504141859.GI32137@mdounin.ru> References: <20180504141859.GI32137@mdounin.ru> Message-ID: That would explain it. Thank you for the information!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279680,279692#msg-279692 From arut at nginx.com Mon May 7 10:25:59 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 7 May 2018 13:25:59 +0300 Subject: Connect to NGINX reverse proxy through proxy In-Reply-To: References: Message-ID: <20180507102559.GA1824@Romans-MacBook-Air.local> Hello, On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote: > I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening. > > > Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using > > > listen 443 ssl proxy_protocol; This line instructs nginx to expect PROXY protocol header from SQUID. Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY protocol header up until recently. > proxy_protocol on; > > > > > Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do. For details it's better to look into error.log. -- Roman Arutyunyan From Ajay_Sonawane at symantec.com Mon May 7 10:37:08 2018 From: Ajay_Sonawane at symantec.com (Ajay Sonawane) Date: Mon, 7 May 2018 10:37:08 +0000 Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy In-Reply-To: <20180507102559.GA1824@Romans-MacBook-Air.local> References: , <20180507102559.GA1824@Romans-MacBook-Air.local> Message-ID: >>For details it's better to look into error.log. 
Error log says "Broken header [some garbage chars] while reading PROXY protocol, client: IPADDRESS, server: 0.0.0.0:8443 ________________________________ From: nginx on behalf of Roman Arutyunyan Sent: Monday, May 7, 2018 3:55:59 PM To: nginx at nginx.org Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy Hello, On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote: > I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening. > > > Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using > > > listen 443 ssl proxy_protocol; This line instructs nginx to expect PROXY protocol header from SQUID. Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY protocol header up until recently. > proxy_protocol on; > > > > > Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do. For details it's better to look into error.log. -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arut at nginx.com Mon May 7 10:54:51 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 7 May 2018 13:54:51 +0300 Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy In-Reply-To: References: <20180507102559.GA1824@Romans-MacBook-Air.local> Message-ID: <20180507105451.GB1824@Romans-MacBook-Air.local> On Mon, May 07, 2018 at 10:37:08AM +0000, Ajay Sonawane wrote: > >>For details it's better to look into error.log. > > Error log says "Broker header [some garbage chars] while reading PROXY protocol, client: IPADDRESS, server:0.0.0.8443 This means the client (SQUID in your case) does not send the PROXY protocol header. Remove the "proxy_protocol" parameter from "listen" to fix this. > ________________________________ > From: nginx on behalf of Roman Arutyunyan > Sent: Monday, May 7, 2018 3:55:59 PM > To: nginx at nginx.org > Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy > > Hello, > > On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote: > > I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening. > > > > > > Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using > > > > > > listen 443 ssl proxy_protocol; > > This line instructs nginx to expect PROXY protocol header from SQUID. > Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY > protocol header up until recently. > > > proxy_protocol on; > > > > > > > > > > Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do. > > For details it's better to look into error.log. 
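In configuration terms, Roman's fix amounts to dropping the parameter from the listen directive. A sketch (server and upstream names are assumptions):

```nginx
server {
    # was: listen 443 ssl proxy_protocol;
    # Squid does not send the PROXY protocol preamble, so nginx
    # must not expect one on this listener:
    listen 443 ssl;

    location / {
        proxy_pass https://backend;   # assumed upstream
    }
}
```

Note that the separate "proxy_protocol on;" directive (from the stream proxy module) does something different: it makes nginx send a PROXY protocol header to the upstream, not accept one from the client.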
> > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://clicktime.symantec.com/a/1/-T9P8fTQru19QtJ92SY81cK1kgruSCyqw2a3i7ct9uA=?d=6I_E5mOuE_JiHm4QhzDePIEnOq_IvGHWcHWAQhy-J4UZqqAmz64BtlAUxaKeJ_QUeJlstY5j28Te7x5BUPJmBb7m6We9GzVL-5L0HAk8nw5PEVbXWoK8dlsjU1x4BITL4J3OeGFrdRvQR2wkGd5zLcFgsskyU4BCbuzKn8V5bKCmxB1DpG8cQVok5PkZ6Qg7YthetOt87ogtudPBDs_PJbaFVREIFlzqZKx96xuvYbT5uWM1w_ZYymY83doc7FsBvMyEFL2ozigFAfQT3usyvOndD3N6RIZxARXwdst7NOabaJMq1_Wofqujl-IAJ3M5MqakCUcNqdCC1EjAlA_YICSnnQ6daqQgPbBISB2mdbmdwAjRzNyu8eLvEue2CCe1_oSfgf7r3F4edwaTYA%3D%3D&u=http%3A%2F%2Fmailman.nginx.org%2Fmailman%2Flistinfo%2Fnginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From cult at free.fr Mon May 7 12:24:05 2018 From: cult at free.fr (Vincent) Date: Mon, 7 May 2018 14:24:05 +0200 Subject: Nginx OR for 2 differents location Message-ID: An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Mon May 7 13:03:28 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Mon, 7 May 2018 16:03:28 +0300 Subject: Nginx OR for 2 differents location In-Reply-To: References: Message-ID: Hello, You can try location ~ (render_img.php|^/url_rewriting.php$) {} Which should effectively do the same On 07.05.2018 15:24, Vincent wrote: > > Hello, > > I have 2 location blocks like that: > > > |location =/url_rewriting.php {| > > and > > > ||||location ~render_img.php {|| > > > with exactly the same content. > > > Is it possible to use an OR to have only one location block? > > Thanks in advance, > > Vincent. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
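Igor's combined regex above can be sketched in context. One caveat: folding `location = /url_rewriting.php` into a regex location changes matching priority, since exact-match locations are checked before regex locations, which may matter if other regex locations exist.

```nginx
# One location serving both URIs (sketch; the shared body is whatever the
# two original blocks contained, e.g. a fastcgi_pass):
location ~ (^/url_rewriting\.php$|render_img\.php) {
    # shared PHP handling here
}
```

An alternative that keeps the original matching behaviour is to leave both location blocks in place and move the shared body into a file pulled in with `include` from each of them.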
URL: From nginx-forum at forum.nginx.org Mon May 7 14:04:09 2018 From: nginx-forum at forum.nginx.org (joovunir) Date: Mon, 07 May 2018 10:04:09 -0400 Subject: Does NGINX support URI (http.ldap) based CRL (revokation lists) checks? or how to handle CRL valid for 7 days Message-ID: <3aacb455b2b0cb49362fa78a6d6309e1.NginxMailingListEnglish@forum.nginx.org> Hi, I know NGINX supports CRL in file format (PEM), but as the CRLs for my certificate provider is only valid for 7 days, and downloading the files, converting to PEM and so on is time consuming, I wonder if NGINX supports URI based CRLs. I haven't found any thing in the documentation... so in case it doesn't support it, how do you handle that? scripts to download/convert/move the files from your certificates' provider? thanks in advance! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279702,279702#msg-279702 From kohenkatz at gmail.com Mon May 7 16:12:50 2018 From: kohenkatz at gmail.com (Moshe Katz) Date: Mon, 07 May 2018 16:12:50 +0000 Subject: Packages for Ubuntu 18.04 "Bionic"? Message-ID: Hello, I see that the new Ubuntu 18.04 release has Nginx 1.14.0 as its install version. However, as new development progresses, I will want to be on the `mainline` version on my servers. Right now, there is no official Nginx package support for 18.04, as the newest version in http://nginx.org/packages/mainline/ubuntu/ is `artful`. When can we expect packages for `bionic` to be officially available? Thanks, Moshe -------------- next part -------------- An HTML attachment was scrubbed... URL: From defan at nginx.com Mon May 7 16:15:51 2018 From: defan at nginx.com (Andrei Belov) Date: Mon, 7 May 2018 19:15:51 +0300 Subject: Packages for Ubuntu 18.04 "Bionic"? In-Reply-To: References: Message-ID: <9C308314-9B6C-4D10-B695-25B7EB342749@nginx.com> Hi Moshe, > On 07 May 2018, at 19:12, Moshe Katz wrote: > > Hello, > > I see that the new Ubuntu 18.04 release has Nginx 1.14.0 as its install version. 
> However, as new development progresses, I will want to be on the `mainline` version on my servers. > Right now, there is no official Nginx package support for 18.04, as the newest version in http://nginx.org/packages/mainline/ubuntu/ is `artful`. > > When can we expect packages for `bionic` to be officially available? Those should be available later this week. Thanks for your interest. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cult at free.fr Mon May 7 19:55:01 2018 From: cult at free.fr (Vincent) Date: Mon, 7 May 2018 21:55:01 +0200 Subject: Nginx OR for 2 differents location In-Reply-To: References: Message-ID: <4d56bfc7-81bb-390a-6016-07f72a31344a@free.fr> An HTML attachment was scrubbed... URL: From jfjm2002 at gmail.com Tue May 8 02:59:10 2018 From: jfjm2002 at gmail.com (Joe Doe) Date: Mon, 7 May 2018 19:59:10 -0700 Subject: Logging of mirror requests Message-ID: Hi, I have used ngx_http_mirror_module to create mirrors. I would like to log these requests as well? So in the /mirror location, I added access_log directive, but the log file was created, but no logs were produced. Is logging currently limited to only the original request? Best, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ajay_Sonawane at symantec.com Tue May 8 05:17:48 2018 From: Ajay_Sonawane at symantec.com (Ajay Sonawane) Date: Tue, 8 May 2018 05:17:48 +0000 Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy In-Reply-To: <20180507105451.GB1824@Romans-MacBook-Air.local> References: <20180507102559.GA1824@Romans-MacBook-Air.local> , <20180507105451.GB1824@Romans-MacBook-Air.local> Message-ID: Removing 'proxy_protocol' parameter fixed the problem. Thanks a lot. 
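On the mirror-logging question above: requests generated by ngx_http_mirror_module are subrequests, and by default the log module does not log subrequests, which would explain the empty log file. A hedged sketch using `log_subrequest` (the upstream names are placeholders):

```nginx
location / {
    mirror /mirror;
    proxy_pass http://backend;                     # placeholder upstream
}

location = /mirror {
    internal;
    log_subrequest on;                             # subrequests are not logged by default
    access_log /var/log/nginx/mirror.log;
    proxy_pass http://test_backend$request_uri;    # placeholder mirror upstream
}
```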
________________________________ From: nginx on behalf of Roman Arutyunyan Sent: Monday, May 7, 2018 4:24:51 PM To: nginx at nginx.org Subject: Re: [EXT] Re: Connect to NGINX reverse proxy through proxy On Mon, May 07, 2018 at 10:37:08AM +0000, Ajay Sonawane wrote: > >>For details it's better to look into error.log. > > Error log says "Broker header [some garbage chars] while reading PROXY protocol, client: IPADDRESS, server:0.0.0.8443 This means the client (SQUID in your case) does not send the PROXY protocol header. Remove the "proxy_protocol" parameter from "listen" to fix this. > ________________________________ > From: nginx on behalf of Roman Arutyunyan > Sent: Monday, May 7, 2018 3:55:59 PM > To: nginx at nginx.org > Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy > > Hello, > > On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote: > > I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening. > > > > > > Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using > > > > > > listen 443 ssl proxy_protocol; > > This line instructs nginx to expect PROXY protocol header from SQUID. > Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY > protocol header up until recently. > > > proxy_protocol on; > > > > > > > > > > Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do. > > For details it's better to look into error.log. 
> > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 8 05:46:07 2018 From: nginx-forum at forum.nginx.org (auto) Date: Tue, 08 May 2018 01:46:07 -0400 Subject: Problem with multiple virtual hosts Message-ID: <6febf3fdfc5a52e635cff33c75b4c92b.NginxMailingListEnglish@forum.nginx.org> We use nginx for hosting multiple virtual hosts. We have a mix: some sites are only available over http:// and others over https://. We create a new config file for every virtual host (domain) whenever there is a new customer with a new homepage. That has always worked correctly. Today we created 2 new config files, copied them to sites-enabled and did an nginx reload. Now no site works any more, although there was no error after the nginx reload. In the browser we get an error that the site is not available, and we get this error on all sites. In the nginx error.log we get the message *2948... no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 178...., server 0.0.0.0:443 There are many of these messages in the log files, I think ~20 lines. The virtual-host config file we create looks like: server { listen 80; server_name example.de; return 301 http://www.$http_host$request_uri; } server { listen 80; server_name *.example.de; location / { access_log off; proxy_pass http://example.test.de; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } We get the error only when we create a new virtual-host file in sites-enabled. If we copy the code into an existing virtual-host file, it works correctly and all other sites work again. Any ideas why it doesn't work when we create a new file? We deleted the new file and created it again, but always get the same effect with the error message in the error-log file. 
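One hedged reading of the "no ssl_certificate" error above: some server block, or the implicit default server, is accepting connections on an SSL port without an `ssl_certificate`, so every TLS handshake that lands there fails. A catch-all server with a fallback certificate is one way to keep a single broken vhost file from taking down the rest (paths are placeholders):

```nginx
# Sketch, assuming a fallback certificate exists at the placeholder paths:
# give the catch-all 443 server its own certificate so handshakes for
# unknown names don't fail with "no ssl_certificate is defined".
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/ssl/fallback.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/fallback.key;   # placeholder path
    return 444;                                        # drop requests for unknown names
}
```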
I don't know if it's important, but we have 196 files in the sites-enabled directory. If we create a new one, the error comes again; if we delete the file and write (copy&paste) the same code into an existing file, it works correctly?! We don't think this is an SSL error; we think the number of files is the problem?! We want to always create a new virtual-host config file for each customer rather than adding the config to an existing file. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279708,279708#msg-279708 From nginx-forum at forum.nginx.org Tue May 8 07:34:04 2018 From: nginx-forum at forum.nginx.org (Joncheski) Date: Tue, 08 May 2018 03:34:04 -0400 Subject: Proxy pass and SSL certificates In-Reply-To: References: Message-ID: <8cda7fa6d5fff1e1d28f9a91d746fc81.NginxMailingListEnglish@forum.nginx.org> Hello Meph, In the configuration file "cloud.diakont.it.conf": - For "ssl_certificate", please set the path of only the public certificate of the server (cloud.diakont.it), and for "ssl_certificate_key", set the path of only the private key of the server (cloud.diakont.it). In the configuration file "ssl-params.conff": - The certificates that you use for the server and for the client: by whom are they issued and signed? If they are issued and signed by your own CA, these parameters should be removed: ssl_ecdh_curve, ssl_stapling, add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; Change the parameter: resolver_timeout 10s. 
In nginx config: - Add this argument: proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_trusted_certificate ; - And in location / like this: location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_pass https://cloud_ssl/; } And check the configuration file (nginx -t). After this, please send me more access and error log for this. Best regards, Goce Joncheski Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279665,279710#msg-279710 From ruz at sports.ru Tue May 8 11:43:54 2018 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Tue, 8 May 2018 14:43:54 +0300 Subject: big difference between request time and upstreams time Message-ID: Hello, Some selected log records: 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002] 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000] 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000] 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000] 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000] 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000] 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000] 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000] 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000] 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002] 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002] 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000] 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002] 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000] 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000] 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000] 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000] 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000] 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002] columns: wallclock time, 
request time, upstream_request_time, upstream_connect_time, upstream. Please, help me diagnose this problem further as I stuck. This is subset where request_time 50x bigger than upstream_request_time (just to make subset less noisy). I see request times up to 60 seconds. Can not tie it to some periodicity. It happens so often that don't see anything helpful in strace... I stuck... Any ideas? This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7. -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Tue May 8 11:50:58 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Tue, 8 May 2018 14:50:58 +0300 Subject: big difference between request time and upstreams time In-Reply-To: References: Message-ID: ????? ? ?????? ???????? http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html ????????, ?????? ?????, ? ????????. On 08.05.2018 14:43, ?????? ??????? 
wrote: > Hello, > > Some selected log records: > 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002 > ] > 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000 > ] > 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000 > ] > 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000 > ] > 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000 > ] > 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000 > ] > 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000 > ] > 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000 > ] > 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000 > ] > 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002 > ] > 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002 > ] > 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000 > ] > 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002 > ] > 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000 > ] > 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000 > ] > 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000 > ] > 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000 > ] > 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000 > ] > 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002 > ] > > columns: wallclock time, request time, upstream_request_time, > upstream_connect_time, upstream. > > Please, help me diagnose this problem further as I stuck. This is > subset where request_time 50x bigger than upstream_request_time (just > to make subset less noisy). I see request times up to 60 seconds. Can > not tie it to some periodicity. It happens so often that don't see > anything helpful in strace... I stuck... Any ideas? > > This is?nginx/1.10.2 on?FreeBSD 10.3-RELEASE-p7. > > -- > ?????? ??????? > ???????????? ?????? ?????????? ???-???????? > +7(916) 597-92-69, ruz?@ > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
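For reference, a `log_format` that would produce columns like the ones quoted in this thread. This is only a guess at the poster's format; note that nginx's variable is `$upstream_response_time` (there is no `$upstream_request_time`), and `$upstream_connect_time` only exists since 1.15.8, so on the poster's 1.10.2 the third bracketed column is more likely `$upstream_header_time`:

```nginx
# Hypothetical reconstruction of the quoted log columns:
# wallclock, request time, [upstream response time], [upstream header time], [upstream]
log_format timing '$time_local $request_time [$upstream_response_time] '
                  '[$upstream_header_time] [$upstream_addr]';
access_log /var/log/nginx/timing.log timing;
```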
URL: From iippolitov at nginx.com Tue May 8 12:11:39 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Tue, 8 May 2018 15:11:39 +0300 Subject: big difference between request time and upstreams time In-Reply-To: References: Message-ID: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com> Sorry, didn't realize this is an English mailing list. To sum it up: the problem is most likely about clients and not the server. Discrepancy between request time and upstream time usually means that a client is slow or uses a bad connection. Basically, this is OK unless you have the only server out of many with this problem. This in turn may mean that the problem is with that server's network connection. Regards. On 08.05.2018 14:50, Igor A. Ippolitov wrote: > ????? ? ?????? ???????? > http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html > ????????, ?????? ?????, ? ????????. > > On 08.05.2018 14:43, ?????? ??????? wrote: >> Hello, >> >> Some selected log records: >> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002 >> ] >> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000 >> ] >> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000 >> ] >> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000 >> ] >> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000 >> ] >> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000 >> ] >> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000 >> ] >> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000 >> ] >> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000 >> ] >> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002 >> ] >> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002 >> ] >> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000 >> ] >> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002 >> ] >> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000 >> ] >> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000 >> ] >> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000 >> ] >> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000 >> ] >> 14:28:36 1.015 [0.015] 
[0.000] [192.168.1.43:9000 >> ] >> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002 >> ] >> >> columns: wallclock time, request time, upstream_request_time, >> upstream_connect_time, upstream. >> >> Please, help me diagnose this problem further as I stuck. This is >> subset where request_time 50x bigger than upstream_request_time (just >> to make subset less noisy). I see request times up to 60 seconds. Can >> not tie it to some periodicity. It happens so often that don't see >> anything helpful in strace... I stuck... Any ideas? >> >> This is?nginx/1.10.2 on?FreeBSD 10.3-RELEASE-p7. >> >> -- >> ?????? ??????? >> ???????????? ?????? ?????????? ???-???????? >> +7(916) 597-92-69, ruz?@ >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From thresh at nginx.com Tue May 8 14:28:37 2018 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 8 May 2018 17:28:37 +0300 Subject: Packages for Ubuntu 18.04 "Bionic"? In-Reply-To: References: Message-ID: <73aa4c3a-b037-bf0e-c419-e896e722d7e4@nginx.com> Hello, 07.05.2018 19:12, Moshe Katz wrote: > Hello, > > I see that the new Ubuntu 18.04 release has Nginx 1.14.0 > ?as its install version. > However, as new development progresses, I will want to be on the > `mainline` version on my servers. > Right now, there is no official Nginx package support for 18.04, as the > newest version in?http://nginx.org/packages/mainline/ubuntu/ is `artful`. > > When can we expect packages for `bionic` to be officially available? > > Thanks, > Moshe The packages for both stable and mainline branches are now available to download. 
Have a good one, -- Konstantin Pavlov https://www.nginx.com/ From ruz at sports.ru Tue May 8 15:51:20 2018 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Tue, 8 May 2018 18:51:20 +0300 Subject: big difference between request time and upstreams time In-Reply-To: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com> References: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com> Message-ID: On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov wrote: > Sorry, didn't realize this is an English mailing list. > > To sum it up: the problem is most likely about clients and not the server. > Discrepancy between request time and upstream time usually means that a > client is slow or uses a bad connection. > Basically, this is OK unless you have the only server out of many with > this problem. > This in turn may mean that the problem is with that server's network > connection. > The issue affects all of our primary nginx servers. However, they receive requests from 4 "routing" nginx servers and all backends via haproxy. The problem affects only requests from the routing nginxs, not backends. I would expect routing servers pull data from upstream ASAP. So slow clients in my mind should only affect those routing servers standing in front. Am I wrong? > Regards. > > > On 08.05.2018 14:50, Igor A. Ippolitov wrote: > > ????? ? ?????? ???????? http://mailman.nginx.org/ > pipermail/nginx/2008-October/008025.html > ????????, ?????? ?????, ? ????????. > > On 08.05.2018 14:43, ?????? ??????? 
wrote: > > Hello, > > Some selected log records: > 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002] > 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000] > 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000] > 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000] > 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000] > 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000] > 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000] > 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000] > 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000] > 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002] > 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002] > 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000] > 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002] > 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000] > 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000] > 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000] > 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000] > 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000] > 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002] > > columns: wallclock time, request time, upstream_request_time, > upstream_connect_time, upstream. > > Please, help me diagnose this problem further as I stuck. This is subset > where request_time 50x bigger than upstream_request_time (just to make > subset less noisy). I see request times up to 60 seconds. Can not tie it to > some periodicity. It happens so often that don't see anything helpful in > strace... I stuck... Any ideas? > > This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7. > > -- > ?????? ??????? > ???????????? ?????? ?????????? ???-???????? 
> +7(916) 597-92-69, ruz @ > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Tue May 8 16:22:26 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Tue, 8 May 2018 19:22:26 +0300 Subject: big difference between request time and upstreams time In-Reply-To: References: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com> Message-ID: <235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com> Ruslan, This depends on your routing nginx configuration. If doesn't have enough buffers to contain a response completely and temporary files are turned off, then you will run into a situation, when the delay is propagated from client facing nginx to a middle layer nginx. The fact that only client facing requests are affected proves this idea. On 08.05.2018 18:51, ?????? ??????? wrote: > > > On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov > > wrote: > > Sorry, didn't realize this is an English mailing list. > > To sum it up: the problem is most likely about clients and not the > server. > Discrepancy between request time and upstream time usually means > that a client is slow or uses a bad connection. > Basically, this is OK unless you have the only server out of many > with this problem. > This in turn may mean that the problem is with that server's > network connection. > > > > The issue affects all of our primary nginx servers. 
> > However, they receive requests from 4 "routing" nginx servers and all > backends via haproxy. The problem affects only > requests from the routing nginxs, not backends. I would expect routing > servers pull data from upstream ASAP. So slow > clients in my mind should only affect those routing servers standing > in front. > > Am I wrong? > > > Regards. > > > On 08.05.2018 14:50, Igor A. Ippolitov wrote: >> ????? ? ?????? ???????? >> http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html >> >> ????????, ?????? ?????, ? ????????. >> >> On 08.05.2018 14:43, ?????? ??????? wrote: >>> Hello, >>> >>> Some selected log records: >>> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002 >>> ] >>> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000 >>> ] >>> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000 >>> ] >>> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000 >>> ] >>> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000 >>> ] >>> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000 >>> ] >>> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000 >>> ] >>> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000 >>> ] >>> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000 >>> ] >>> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002 >>> ] >>> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002 >>> ] >>> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000 >>> ] >>> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002 >>> ] >>> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000 >>> ] >>> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000 >>> ] >>> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000 >>> ] >>> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000 >>> ] >>> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000 >>> ] >>> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002 >>> ] >>> >>> columns: wallclock time, request time, upstream_request_time, >>> upstream_connect_time, upstream. >>> >>> Please, help me diagnose this problem further as I stuck. 
This >>> is subset where request_time 50x bigger than >>> upstream_request_time (just to make subset less noisy). I see >>> request times up to 60 seconds. Can not tie it to some >>> periodicity. It happens so often that don't see anything helpful >>> in strace... I stuck... Any ideas? >>> >>> This is?nginx/1.10.2 on?FreeBSD 10.3-RELEASE-p7. >>> >>> -- >>> ?????? ??????? >>> ???????????? ?????? ?????????? ???-???????? >>> +7(916) 597-92-69, ruz?@ >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > ?????? ??????? > ???????????? ?????? ?????????? ???-???????? > +7(916) 597-92-69, ruz?@ > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kohenkatz at gmail.com Tue May 8 17:21:50 2018 From: kohenkatz at gmail.com (Moshe Katz) Date: Tue, 08 May 2018 17:21:50 +0000 Subject: Packages for Ubuntu 18.04 "Bionic"? In-Reply-To: <73aa4c3a-b037-bf0e-c419-e896e722d7e4@nginx.com> References: <73aa4c3a-b037-bf0e-c419-e896e722d7e4@nginx.com> Message-ID: Great. thanks! On Tue, May 8, 2018 at 10:28 AM Konstantin Pavlov wrote: > Hello, > > 07.05.2018 19:12, Moshe Katz wrote: > > Hello, > > > > I see that the new Ubuntu 18.04 release has Nginx 1.14.0 > > as its install version. > > However, as new development progresses, I will want to be on the > > `mainline` version on my servers. 
> > Right now, there is no official Nginx package support for 18.04, as the > > newest version in http://nginx.org/packages/mainline/ubuntu/ is > `artful`. > > > > When can we expect packages for `bionic` to be officially available? > > > > Thanks, > > Moshe > > The packages for both stable and mainline branches are now available to > download. > > Have a good one, > > -- > Konstantin Pavlov > https://www.nginx.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruz at sports.ru Tue May 8 18:04:46 2018 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Tue, 8 May 2018 21:04:46 +0300 Subject: big difference between request time and upstreams time In-Reply-To: <235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com> References: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com> <235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com> Message-ID: On Tue, May 8, 2018 at 7:22 PM, Igor A. Ippolitov wrote: > Ruslan, > > This depends on your routing nginx configuration. > If doesn't have enough buffers to contain a response completely and > temporary files are turned off, then you will run into a situation, when > the delay is propagated from client facing nginx to a middle layer nginx. > > The fact that only client facing requests are affected proves this idea. > Sure it sounds very much like my case. Any pointers on good article on this subject? Probably my goal is to free "primary" nginx servers as soon as possible and leave last mile delivery job to "routing" nginx in front. If there is no articles you know about on this matter then just point me at nginx options I should start from. > On 08.05.2018 18:51, ?????? ??????? wrote: > > > > On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov > wrote: > >> Sorry, didn't realize this is an English mailing list. >> >> To sum it up: the problem is most likely about clients and not the server. 
>> Discrepancy between request time and upstream time usually means that a >> client is slow or uses a bad connection. >> Basically, this is OK unless you have the only server out of many with >> this problem. >> This in turn may mean that the problem is with that server's network >> connection. >> > > > The issue affects all of our primary nginx servers. > > However, they receive requests from 4 "routing" nginx servers and all > backends via haproxy. The problem affects only > requests from the routing nginxs, not backends. I would expect routing > servers pull data from upstream ASAP. So slow > clients in my mind should only affect those routing servers standing in > front. > > Am I wrong? > > >> Regards. >> >> >> On 08.05.2018 14:50, Igor A. Ippolitov wrote: >> >> ????? ? ?????? ???????? http://mailman.nginx.org/piper >> mail/nginx/2008-October/008025.html >> ????????, ?????? ?????, ? ????????. >> >> On 08.05.2018 14:43, ?????? ??????? wrote: >> >> Hello, >> >> Some selected log records: >> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002] >> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000] >> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000] >> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000] >> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000] >> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000] >> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000] >> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000] >> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000] >> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002] >> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002] >> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000] >> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002] >> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000] >> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000] >> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000] >> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000] >> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000] >> 14:28:39 0.358 
[0.007] [0.001] [192.168.1.92:5002] >> >> columns: wallclock time, request time, upstream_request_time, >> upstream_connect_time, upstream. >> >> Please, help me diagnose this problem further as I stuck. This is subset >> where request_time 50x bigger than upstream_request_time (just to make >> subset less noisy). I see request times up to 60 seconds. Can not tie it to >> some periodicity. It happens so often that don't see anything helpful in >> strace... I stuck... Any ideas? >> >> This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7. >> >> -- >> ?????? ??????? >> ???????????? ?????? ?????????? ???-???????? >> +7(916) 597-92-69, ruz @ >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > ?????? ??????? > ???????????? ?????? ?????????? ???-???????? > +7(916) 597-92-69, ruz @ > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Tue May 8 18:17:07 2018 From: iippolitov at nginx.com (Igor A. 
Ippolitov) Date: Tue, 8 May 2018 21:17:07 +0300 Subject: big difference between request time and upstreams time In-Reply-To: References: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com> <235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com> Message-ID: <6ff14652-ab46-b5a1-c038-9791aab15de5@nginx.com> Ruslan, Not sure if I know a good article on the topic. Just ensure proxy_buffering is 'on', proxy_buffer size covers maximum possible reply headers? size and proxy_buffers matches 90% margin of your replies (or whatever you think is appropriate). Most of time these recommendations ensures optimal performance for nginx as a proxy. But more interesting question is if you really need to tune anything. If your front edge servers are well loaded, do you really need to load them even more? May be someone else will help with a proper text to read. On 08.05.2018 21:04, ?????? ??????? wrote: > > > On Tue, May 8, 2018 at 7:22 PM, Igor A. Ippolitov > > wrote: > > Ruslan, > > This depends on your routing nginx configuration. > If doesn't have enough buffers to contain a response completely > and temporary files are turned off, then you will run into a > situation, when the delay is propagated from client facing nginx > to a middle layer nginx. > > The fact that only client facing requests are affected proves this > idea. > > > Sure it sounds very much like my case. Any pointers on good article on > this subject? Probably my goal is to free "primary" nginx servers as > soon as possible and leave last mile delivery job to "routing" nginx > in front. If there is no articles you know about on this matter then > just point me at nginx options I should start from. > > > On 08.05.2018 18:51, ?????? ??????? wrote: >> >> >> On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov >> > wrote: >> >> Sorry, didn't realize this is an English mailing list. >> >> To sum it up: the problem is most likely about clients and >> not the server. 
>> Discrepancy between request time and upstream time usually >> means that a client is slow or uses a bad connection. >> Basically, this is OK unless you have the only server out of >> many with this problem. >> This in turn may mean that the problem is with that server's >> network connection. >> >> >> >> The issue affects all of our primary nginx servers. >> >> However, they receive requests from 4 "routing" nginx servers and >> all backends via haproxy. The problem affects only >> requests from the routing nginxs, not backends. I would expect >> routing servers pull data from upstream ASAP. So slow >> clients in my mind should only affect those routing servers >> standing in front. >> >> Am I wrong? >> >> >> Regards. >> >> >> On 08.05.2018 14:50, Igor A. Ippolitov wrote: >>> ????? ? ?????? ???????? >>> http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html >>> >>> ????????, ?????? ?????, ? ????????. >>> >>> On 08.05.2018 14:43, ?????? ??????? wrote: >>>> Hello, >>>> >>>> Some selected log records: >>>> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002 >>>> ] >>>> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000 >>>> ] >>>> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000 >>>> ] >>>> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000 >>>> ] >>>> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000 >>>> ] >>>> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000 >>>> ] >>>> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000 >>>> ] >>>> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000 >>>> ] >>>> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000 >>>> ] >>>> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002 >>>> ] >>>> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002 >>>> ] >>>> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000 >>>> ] >>>> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002 >>>> ] >>>> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000 >>>> ] >>>> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000 >>>> ] >>>> 14:28:31 1.021 [0.019] [0.000] 
[192.168.1.44:9000 >>>> ] >>>> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000 >>>> ] >>>> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000 >>>> ] >>>> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002 >>>> ] >>>> >>>> columns: wallclock time, request time, >>>> upstream_request_time, upstream_connect_time, upstream. >>>> >>>> Please, help me diagnose this problem further as I stuck. >>>> This is subset where request_time 50x bigger than >>>> upstream_request_time (just to make subset less noisy). I >>>> see request times up to 60 seconds. Can not tie it to some >>>> periodicity. It happens so often that don't see anything >>>> helpful in strace... I stuck... Any ideas? >>>> >>>> This is?nginx/1.10.2 on?FreeBSD 10.3-RELEASE-p7. >>>> >>>> -- >>>> ?????? ??????? >>>> ???????????? ?????? ?????????? ???-???????? >>>> +7(916) 597-92-69, ruz?@ >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> >> -- >> ?????? ??????? >> ???????????? ?????? ?????????? ???-???????? >> +7(916) 597-92-69, ruz?@ >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > ?????? ??????? > ???????????? ?????? ?????????? ???-???????? 
> +7(916) 597-92-69, ruz?@ > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue May 8 19:15:22 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 May 2018 22:15:22 +0300 Subject: Logging of mirror requests In-Reply-To: References: Message-ID: <20180508191522.GS32137@mdounin.ru> Hello! On Mon, May 07, 2018 at 07:59:10PM -0700, Joe Doe wrote: > I have used ngx_http_mirror_module to create mirrors. I would like to log > these requests as well. So in the /mirror location I added the access_log > directive; the log file was created, but no logs were produced. > > Is logging currently limited to only the original request? By default, subrequests are not logged. If you want them to be logged, consider the "log_subrequest" directive (http://nginx.org/r/log_subrequest). -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue May 8 20:28:54 2018 From: nginx-forum at forum.nginx.org (pkris) Date: Tue, 08 May 2018 16:28:54 -0400 Subject: Restricting access by public IP blocking remote content Message-ID: <71eae06a168b0a2f829bcd05f5976158.NginxMailingListEnglish@forum.nginx.org> As the subject states, when I restrict access to a subdirectory by IP, remote content like Google fonts and favicons is blocked. This of course makes sense, but without adding those hostnames to the admin-ips file I use to allow IPs (shown below), can remote content like this still be allowed while the actual web traffic stays restricted to my VPN IP?
/etc/nginx/sites-enabled/default: location /billingadmin { include includes/admin-ips; deny all; } /etc/nginx/includes/admin-ips: #LAN allow XXX.XXX.XXX.XXX; #VPN allow XXX.XXX.XXX.XXX; allow XXX.XXX.XXX.XXX; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279725,279725#msg-279725 From jfjm2002 at gmail.com Wed May 9 05:06:58 2018 From: jfjm2002 at gmail.com (Joe Doe) Date: Tue, 8 May 2018 22:06:58 -0700 Subject: Logging of mirror requests In-Reply-To: <20180508191522.GS32137@mdounin.ru> References: <20180508191522.GS32137@mdounin.ru> Message-ID: Thank you very much! That did the trick. On Tue, May 8, 2018 at 12:15 PM, Maxim Dounin wrote: > Hello! > > On Mon, May 07, 2018 at 07:59:10PM -0700, Joe Doe wrote: > > > I have used ngx_http_mirror_module to create mirrors. I would like to log > > these requests as well? So in the /mirror location, I added access_log > > directive, but the log file was created, but no logs were produced. > > > > Is logging currently limited to only the original request? > > By default, subrequests are not logged. If you want them to be > logged, consider the "log_subrequest" directive > (http://nginx.org/r/log_subrequest). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 9 06:10:04 2018 From: nginx-forum at forum.nginx.org (_gg_) Date: Wed, 09 May 2018 02:10:04 -0400 Subject: No shared cipher Message-ID: <92a86c1b805c7a584f20056a7ee8fef2.NginxMailingListEnglish@forum.nginx.org> Not sure if it's not more of an openssl/TLS 'issue'/question... 
For some time I've been observing SSL_do_handshake() failed (SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking in error.log while having ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ALL:!aNULL; in the configuration. Examining the Client Hello packet reveals the client-supported ciphers: Cipher Suites (9 suites) Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13) Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c) Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f) Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a) I'm running nginx version: nginx/1.12.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled According to 'openssl ciphers' the third cipher on the list is supported, and yet the server responds with: TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Handshake Failure) Content Type: Alert (21) Version: TLS 1.2 (0x0303) Length: 2 Alert Message Level: Fatal (2) Description: Handshake Failure (40) Either I've messed up my investigation or I'm completely misunderstanding something here. Why, despite having a cipher in common with the client, does the server refuse to complete the handshake?
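For reference: when the cause is the one Maxim Dounin points to later in this thread (the server holding only an ECDSA certificate while this client offers only RSA suites), one possible fix is to serve both certificate types side by side, which nginx supports since 1.11.0. A sketch only; the paths and server name are placeholders, not taken from the poster's setup:

```nginx
server {
    listen 443 ssl;
    server_name example.com;   # placeholder

    # RSA certificate, so clients offering only *_RSA_* suites can connect
    ssl_certificate     /etc/nginx/certs/example-rsa.crt;
    ssl_certificate_key /etc/nginx/certs/example-rsa.key;

    # ECDSA certificate, used for clients that support ECDSA suites
    ssl_certificate     /etc/nginx/certs/example-ecdsa.crt;
    ssl_certificate_key /etc/nginx/certs/example-ecdsa.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers   ALL:!aNULL;
}
```

With both key types available, an RSA-only client such as the one above can still find a shared cipher.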
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279727,279727#msg-279727 From mephystoonhell at gmail.com Wed May 9 09:50:21 2018 From: mephystoonhell at gmail.com (Mephysto On Hell) Date: Wed, 9 May 2018 11:50:21 +0200 Subject: Proxy pass and SSL certificates In-Reply-To: <8cda7fa6d5fff1e1d28f9a91d746fc81.NginxMailingListEnglish@forum.nginx.org> References: <8cda7fa6d5fff1e1d28f9a91d746fc81.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello Goce, but with this configuration, can I disable SSL in target Nginx? Thanks in advance. Meph On 8 May 2018 at 09:34, Joncheski wrote: > Hello Meph, > > In configuration file "cloud.diakont.it.conf": > - "ssl_certificate" please set path of only public certificate of server > (cloud.diakont.it), and in "ssl_certificate_key" please set path of only > private key of server (cloud.diakont.it). > > In configuration file "ssl-params.conff": > - The certificates that you use for the server and for the client, from > whom > are they issued and signed? If you are from your publisher and signer, > these > parameters will be removed: ssl_ecdh_curve, ssl_stapling, add_header > X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; > > Change parameter: resolver_timeout 10s. > > In nginx config: > - Add this argument: > proxy_ssl_verify on; > proxy_ssl_verify_depth 2; > proxy_ssl_session_reuse on; > proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > proxy_ssl_trusted_certificate ; > - And in location / like this: > location / { > proxy_set_header X-Real-IP > $remote_addr; > proxy_set_header X-Forwarded-Proto > $scheme; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header Upgrade > $http_upgrade; > proxy_set_header Connection > 'upgrade'; > proxy_set_header Host $host; > proxy_pass https://cloud_ssl/; > } > > And check the configuration file (nginx -t). > After this, please send me more access and error log for this. 
> > Best regards, > Goce Joncheski > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,279665,279710#msg-279710 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfjm2002 at gmail.com Wed May 9 11:32:51 2018 From: jfjm2002 at gmail.com (Joe Doe) Date: Wed, 9 May 2018 04:32:51 -0700 Subject: inheritance of proxy_http_version and proxy_set_header Message-ID: I have multiple mirrors for incoming requests. To keep the config clean, I set: proxy_http_version 1.1; proxy_set_header Connection ""; in the http context. This worked for us (verified keep-alive is working), and it will inherit to all the mirror proxy_pass locations. However, I recently added a mirror that uses https, and I noticed these settings no longer inherit to this mirror; at least keep-alive was not working. To address this, I had to add these 2 settings into the location specific to the mirror. (Adding them to the server context didn't work either.) According to the documentation, these 2 settings can be in http, server and location context. And I assume that if they are in the http context, they would inherit to all the sub-blocks (and it did work for all the other http mirrors). Is this assumption incorrect, and should I add these 2 settings to all the locations where I want to use keep-alive? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 9 18:19:23 2018 From: nginx-forum at forum.nginx.org (snir) Date: Wed, 09 May 2018 14:19:23 -0400 Subject: Set real ip not working Message-ID: Hello, I want to get the real ip of the client but I'm always getting the ip of the nginx server.
I tried using set_real_ip: http { upstream myapp1 { server 177.17.777.13:8080; } server { listen 80; real_ip_recursive on; set_real_ip_from 177.17.777.13; real_ip_header X-Forwarded-For; location / { proxy_pass http://myapp1; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279736,279736#msg-279736 From francis at daoine.org Wed May 9 20:10:58 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 9 May 2018 21:10:58 +0100 Subject: Problem with to multiple virtual hosts In-Reply-To: <6febf3fdfc5a52e635cff33c75b4c92b.NginxMailingListEnglish@forum.nginx.org> References: <6febf3fdfc5a52e635cff33c75b4c92b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180509201058.GE19311@daoine.org> On Tue, May 08, 2018 at 01:46:07AM -0400, auto wrote: Hi there, > Today we create 2 new config-files on the nginx, copy the file to > sites-enabled and make a nginx reload. > > Now, no sites works again. But there was no error after the nginx reload. > > In the Browser we get the error that the Site is not available. And we get > this error at all Sites. > > In the nginx error.log we get the message *2948... no "ssl_certificate" is > defined in server listening on SSL port while SSL handshaking, client: > 178...., server 0.0.0.0:443 The error message refers to something to do with ssl. The example config files you show do not mention ssl. Does the actual config that you are writing to the new file, the one that leads to the failure, refer to ssl at all? Is the new file name alphabetically first in the list of files? Do you have the word "default_server" on any "listen" line in any file?
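For reference, when that error comes from an ssl server block that lacks a certificate of its own, one common shape of the fix is an explicit ssl default_server that carries one. This is a sketch under that assumption; the paths and the 444 policy are illustrative, not from the poster's config:

```nginx
# Catch-all for https requests whose Host matches no configured vhost.
server {
    listen 443 ssl default_server;
    server_name _;

    # Without a certificate here, handshakes for unmatched names fail with
    # 'no "ssl_certificate" is defined in server listening on SSL port'
    ssl_certificate     /etc/nginx/certs/fallback.crt;
    ssl_certificate_key /etc/nginx/certs/fallback.key;

    return 444;   # close the connection without sending a response
}
```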
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 9 20:17:50 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 9 May 2018 21:17:50 +0100 Subject: Restricting access by public IP blocking remote content In-Reply-To: <71eae06a168b0a2f829bcd05f5976158.NginxMailingListEnglish@forum.nginx.org> References: <71eae06a168b0a2f829bcd05f5976158.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180509201750.GF19311@daoine.org> On Tue, May 08, 2018 at 04:28:54PM -0400, pkris wrote: Hi there, > As the subject states when I restrict access to a subdirectory via IP, > remote content like Google fonts, and Favicons are blocked. I don't understand what you are reporting there. Can you give one specific example? It looks like you are saying that when you intentionally block access to /billingadmin, you also accidentally block access to /favicon.ico and to totally unrelated urls like https://fonts.google.com/. That seems very strange to me, so I suspect that I am missing something. > This of course makes sense, but without adding those hostnames to my > admin-ip's file I use to allow IP's (explained below), can remote content > like this be allowed by the actual web traffic I'm attempting to restrict to > my VPN IP be filtered? Maybe it is clear to someone else, what you mean by this. If so, perhaps they will respond. But it might be helpful if you can rephrase your question, perhaps including an example request that does not get the response that you expect (and including the relevant nginx config). Good luck, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 9 20:25:03 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 9 May 2018 21:25:03 +0100 Subject: inheritance of proxy_http_version and proxy_set_header In-Reply-To: References: Message-ID: <20180509202503.GG19311@daoine.org> On Wed, May 09, 2018 at 04:32:51AM -0700, Joe Doe wrote: Hi there, > I have many multiple mirrors for incoming request. 
To keep the config > clean, I set: > proxy_http_version 1.1; > proxy_set_header ""; > > in the http context. This worked for us (verified keep-alive is working), > and it will inherit to all the mirror proxy_pass. Those config directives (corrected) will inherit to any "location" which does not have a "proxy_http_version" directive or a "proxy_set_header" directive, respectively. (Assuming that neither are set at "server" level either.) > However, I recently added a mirror that used https, and I notice these > settings no longer inherit to this mirror. At least keep-alive was not > working. To address this, I had to add these 2 settings into the location > specific to the mirror. (adding to the server context didn't work either) Can you show the config that does not react the way that you want it to? If you get the upstream (proxy_pass) server to "echo" the incoming request, can you see what http version and http headers are sent by nginx? > According to the documentation, these 2 settings can be in http, server and > location context. And I assume if it's in http context, it would inherit to > all the sub-blocks (and it did work for all the other http mirrors). Is > this assumption incorrect and I should add these 2 settings to all the > locations where I want to use keep-alive? Directive inheritance follows the rules, or there is a bug. If these two settings mean that keep-alive works for you, then you must make sure that these two settings are in, or inherited into, each location that you care about. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 9 20:36:24 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 9 May 2018 21:36:24 +0100 Subject: Set real ip not working In-Reply-To: References: Message-ID: <20180509203624.GH19311@daoine.org> On Wed, May 09, 2018 at 02:19:23PM -0400, snir wrote: Hi there, > I want to get the real ip of the client but I'm all ways getting the ip of > the ngnix server. 
What, specifically, do you mean by "getting the ip"? > I trayed using set_real_ip: The tcp connection from nginx to upstream will (almost) always come from an IP address of the nginx machine. It is possible that nginx can be configured to write a client IP address into a http header, that the upstream server can then be invited to read. For that, you will want to make sure to write the client IP address into a http header (proxy_set_header, perhaps $proxy_add_x_forwarded_for) and you will want to make sure to configure your upstream server to read it. For one test request, what is the client IP address that you care about? Do you see that IP address anywhere in the request from nginx to upstream? If not, fix that. If so: do you see upstream doing anything with that part of the request? If not, fix that. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu May 10 08:11:41 2018 From: nginx-forum at forum.nginx.org (Joncheski) Date: Thu, 10 May 2018 04:11:41 -0400 Subject: Proxy pass and SSL certificates In-Reply-To: References: Message-ID: Hello Meph, Not, exactly this has SSL. 
Here's a suggestion configuration: nginx.conf: ------------------------------------------------------------------------------------------------------ user nginx; worker_processes auto; error_log /var/log/nginx/cloudssl.diakont.it.error.log; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/cloudssl.diakont.it.access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; upstream cloud { server 10.39.0.52; } upstream cloud_ssl { server 10.39.0.52:443; } server { listen 80 default_server; listen [::]:80 default_server; server_name cloud.diakont.it cloud.diakont.srl; return 301 https://$server_name$request_uri; } server { listen 443 ssl default_server; listen [::]:443 ssl default_server; server_name cloud.diakont.it; #HTTPS-and-SSL proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_trusted_certificate ; include snippets/cloud.diakont.it.conf; include snippets/ssl-params.conf; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_pass https://cloud_ssl/; } } } ------------------------------------------------------------------------------------------------------ cloud.diakont.it.conf: ------------------------------------------------------------------------------------------------------ ssl_certificate #PATH OF PUBLIC CERTIFICATE FROM SDP GATEWAY#; ssl_certificate_key #PATH OF PRIVATE KEY FROM SDP GATEWAY#; ssl_trusted_certificate #PATH OF PUBLIC CA CERTIFICATE#; 
------------------------------------------------------------------------------------------------------ ssl-params.conf: ------------------------------------------------------------------------------------------------------ ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS'; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; #this resolver and resolver_timeout maybe be comment resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 10s; add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; ------------------------------------------------------------------------------------------------------ Test this configuration and tell me :) Best regards, Goce Joncheski Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279665,279741#msg-279741 From danny at trisect.uk Thu May 10 09:34:27 2018 From: danny at trisect.uk (Danny Horne) Date: Thu, 10 May 2018 10:34:27 +0100 Subject: Possible to use RHEL / CentOS repo on Fedora 28? 
Message-ID: Hi all, I'm running Fedora 28 Server, and in the default repos Nginx is lagging behind at 1.12.1. I found the following on the Nginx website - To set up the yum repository for RHEL/CentOS, create the file named /etc/yum.repos.d/nginx.repo with the following contents: [nginx] name=nginx repo baseurl=http://nginx.org/packages/mainline/OS/OSRELEASE/$basearch/ gpgcheck=0 enabled=1 Replace "OS" with "rhel" or "centos", depending on the distribution used, and "OSRELEASE" with "6" or "7", for 6.x or 7.x versions, respectively. Could I set up this repo to upgrade Nginx? And if so, what would I use for OS and OSRELEASE? Thanks for looking From nginx-forum at forum.nginx.org Thu May 10 12:04:59 2018 From: nginx-forum at forum.nginx.org (snir) Date: Thu, 10 May 2018 08:04:59 -0400 Subject: Set real ip not working In-Reply-To: <20180509203624.GH19311@daoine.org> References: <20180509203624.GH19311@daoine.org> Message-ID: <959dadecae8c7cd346176de22e7123ae.NginxMailingListEnglish@forum.nginx.org> Thanks! That's what I needed: location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://myapp1; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279736,279745#msg-279745 From hemelaar at desikkel.nl Thu May 10 12:54:28 2018 From: hemelaar at desikkel.nl (Jean-Paul Hemelaar) Date: Thu, 10 May 2018 14:54:28 +0200 Subject: No live upstreams Message-ID: Hi! I'm using Nginx as a proxy to Apache.
I noticed some messages in my error.log that I cannot explain: 27463#0: *125209 no live upstreams while connecting to upstream, client: x.x.x.x, server: www.xxx.com, request: "GET /xxx/ HTTP/1.1", upstream: " http://backend/xxx/", host: "www.xxx.com" The errors appear after Apache returned some 502-errors; however in the configuration I have set the following: upstream backend { server 10.0.0.2:8080 max_fails=3 fail_timeout=10; server 127.0.0.1:8000 backup; keepalive 6; } server { location / { proxy_pass http://backend; proxy_next_upstream error timeout invalid_header; etc. } I expected that, if Apache returns a few 502's: - Nginx will not try to proceed to the next upstream as proxy_next_upstream doesn't mention the http_502 but just forward the 502 to the client - if the upstream is marked as failed (what I didn't expect to happen) the server will try the backup server instead What can be happening: - If the primary server sends a 502 it tries the backup that will send a 502 as well. Because the max_fails is not defined it will be marked as failed after the first failure. Not sure if the above assumption is true. If it is, why are they marked as failed even when the http_502 is not mentioned? Thanks! JP -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 10 13:09:16 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 May 2018 16:09:16 +0300 Subject: No shared cipher In-Reply-To: <92a86c1b805c7a584f20056a7ee8fef2.NginxMailingListEnglish@forum.nginx.org> References: <92a86c1b805c7a584f20056a7ee8fef2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180510130916.GV32137@mdounin.ru> Hello! On Wed, May 09, 2018 at 02:10:04AM -0400, _gg_ wrote: > Not sure if it's not more of an openssl/TLS 'issue'/question... 
> For some time I've been observing > > SSL_do_handshake() failed (SSL: error:1408A0C1:SSL > routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking > > in error.log while having > > ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers ALL:!aNULL; > > in configuration. > > Examining Client Hello packet reveals client supported ciphers: > Cipher Suites (9 suites) > Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) > Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13) > Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) > Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) > Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) > Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c) > Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) > Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f) > Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a) > > I'm running > nginx version: nginx/1.12.1 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) > built with OpenSSL 1.0.2k-fips 26 Jan 2017 > TLS SNI support enabled > > According to 'openssl ciphers' the third cipher on the list is supported and > yet server responds with: > TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Handshake Failure) > Content Type: Alert (21) > Version: TLS 1.2 (0x0303) > Length: 2 > Alert Message > Level: Fatal (2) > Description: Handshake Failure (40) > > Either I've messed up my investigation or I'm completely misunderstanding > something here. > Why despite having a common cipher with a client server denies to handshake > a connection? Whether a cipher suite can be used or not depends on various factors. 
In particular:
- the list of ciphers the client supports;
- the list of ciphers the server supports;
- the certificate used by the server (e.g., you won't be able to use RSA cipher suites with an ECDSA certificate);
- when using ECDHE ciphers or ECDSA certificates, the supported EC curves on both client and server.
In this particular case the client supports only RSA ciphers, so, for example, there will be no shared cipher if you are using an ECDSA certificate.
-- Maxim Dounin http://mdounin.ru/
From michael.friscia at yale.edu Thu May 10 13:17:42 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 10 May 2018 13:17:42 +0000
Subject: Load balancing
Message-ID: <86F468E0-8465-470A-8FF2-27E976614147@yale.edu>
I'm working on a project to perform A/B testing with the web hosting platform. The simple version is that we host everything on Azure and want to compare using their Web Apps versus running a VM with IIS. My question is about load balancing, since there seem to be two ways to go about this. The first is to use a simple config where I set up the three hosts I'm testing like this:
upstream ym-host {
least_conn;
server ysm-iis-prod1.northcentralus.cloudapp.azure.com;
server ysm-iis-prod2.northcentralus.cloudapp.azure.com;
server ysm-ym-live-prod.trafficmanager.net;
}
This works, but I am not sure how to set a header to indicate which host is being used. The alternative is to use split_clients, and the same configuration looks like this:
upstream ym_host1 { server ysm-iis-prod1.northcentralus.cloudapp.azure.com; }
upstream ym_host2 { server ysm-iis-prod2.northcentralus.cloudapp.azure.com; }
upstream ym_host3 { server ysm-ym-live-prod.trafficmanager.net; }
split_clients "$arg_token" $ymhost {
25% ym_host1;
25% ym_host2;
50% ym_host3;
}
Granted, the $arg_token will change to something else, but for now I use that since I can manipulate it more easily. The benefit of the second is that I can add a header like X-UpstreamHost $ymhost and then see which host I am hitting.
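For what it is worth, nginx can also report which upstream actually served a request even with the single-upstream setup, via the $upstream_addr variable. A minimal sketch (the X-Upstream-Addr header name is made up for illustration, and note it exposes backend addresses to clients, so it may only be appropriate while testing):

```nginx
upstream ym-host {
    least_conn;
    server ysm-iis-prod1.northcentralus.cloudapp.azure.com;
    server ysm-iis-prod2.northcentralus.cloudapp.azure.com;
    server ysm-ym-live-prod.trafficmanager.net;
}

server {
    listen 80;
    location / {
        proxy_pass http://ym-host;
        # $upstream_addr holds the address (or addresses, if several
        # servers were tried) that nginx connected to for this request.
        add_header X-Upstream-Addr $upstream_addr;
    }
}
```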
The benefit of the first is using the least-connected round-robin approach, but I can't add a header to indicate which host is being hit. For good reasons I won't get into, adding the header at the web app is not an option. My question has three parts:
1. Which is considered the best approach to load balancing for this sort of testing?
2. Is there a way to get the name of the host being used if I stick with the simpler approach that uses just the single upstream configuration?
3. What would be the best variable to use for the split_clients approach to get closest to round robin?
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at forum.nginx.org Fri May 11 01:30:54 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Thu, 10 May 2018 21:30:54 -0400
Subject: Nginx Proxy/FastCGI Caching X-Accel-Expires 0 or Off ?
Message-ID: <4d81a6d8a6676539ddb24520ae9e58e9.NginxMailingListEnglish@forum.nginx.org>
So in order for my web application to tell nginx not to cache a page, which response header should I be sending?
X-Accel-Expires: 0
X-Accel-Expires: Off
I read here that it should be "OFF": https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/#x-accel-expires
But it does not mention whether the numeric value "0" has the same effect, nor does it mention whether the "off" value is case-sensitive. I am hoping case sensitivity does not matter.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279752,279752#msg-279752
From nginx-forum at forum.nginx.org Fri May 11 06:04:35 2018
From: nginx-forum at forum.nginx.org (_gg_)
Date: Fri, 11 May 2018 02:04:35 -0400
Subject: No shared cipher
In-Reply-To: <20180510130916.GV32137@mdounin.ru>
References: <20180510130916.GV32137@mdounin.ru>
Message-ID: <28c21dfbd923fb1dab0312e9985568ef.NginxMailingListEnglish@forum.nginx.org>
Indeed, I have an EC certificate. Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279727,279754#msg-279754
From nginx-forum at forum.nginx.org Fri May 11 06:42:29 2018
From: nginx-forum at forum.nginx.org (Dhinesh Kumar T)
Date: Fri, 11 May 2018 02:42:29 -0400
Subject: How to enable 3des in TLS 1.0 and Disable 3des TLS 1.1 and above in Nginx
Message-ID: <26905b3b1deada448aebb9267385f695.NginxMailingListEnglish@forum.nginx.org>
How can nginx enable 3DES in TLS 1.0 and disable 3DES in TLS 1.1 and above?
Nginx: 1.12.2-1
OpenSSL: 1.0.2k-8
I have tried creating multiple servers, but that didn't help. Is there a way to do this?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279755,279755#msg-279755
From nginx-forum at forum.nginx.org Fri May 11 08:17:21 2018
From: nginx-forum at forum.nginx.org (auto)
Date: Fri, 11 May 2018 04:17:21 -0400
Subject: Problem with to multiple virtual hosts
In-Reply-To: <20180509201058.GE19311@daoine.org>
References: <20180509201058.GE19311@daoine.org>
Message-ID: <956def47bd86121839b3ed3573431044.NginxMailingListEnglish@forum.nginx.org>
@Francis: So this is the big question: we only want to include 2 new sites that are available only without SSL, so we included the files without the SSL part. But if we include it, we get an SSL error?! No, the new files are somewhere among the other files; these files are not the first files in the alphabetical list. At the moment I don't know if we have the word "default_server" in any of the virtual-host files.
There are 196 files in the sites-enabled directory; maybe I will have a look in the next few days to see whether the word "default_server" appears anywhere. A few days ago we created an additional directory for the virtual-host files and put the new virtual-host files there. We included the new directory in nginx.conf, and now it works! We don't know why; we think the number of files in the "normal" sites-enabled directory is the problem or something?! With this solution it works correctly without any errors. These are the same files we had included the first time in the "normal" sites-enabled directory. We think that having 195 virtual-host files in one directory is the problem?! But we don't know it, we only believe it.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279708,279756#msg-279756
From pluknet at nginx.com Fri May 11 10:17:55 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Fri, 11 May 2018 13:17:55 +0300
Subject: Nginx Proxy/FastCGI Caching X-Accel-Expires 0 or Off ?
In-Reply-To: <4d81a6d8a6676539ddb24520ae9e58e9.NginxMailingListEnglish@forum.nginx.org>
References: <4d81a6d8a6676539ddb24520ae9e58e9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <72A2DFDC-8205-414A-9E98-8FB498A82BF5@nginx.com>
> On 11 May 2018, at 04:30, c0nw0nk wrote:
> [...]
Wiki materials are updated by their users and thus may not always contain up-to-date and correct information.
See the reference documentation: http://nginx.org/r/proxy_cache_valid
-- Sergey Kandaurov
From nginx-forum at forum.nginx.org Fri May 11 15:54:17 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 11 May 2018 11:54:17 -0400
Subject: Nginx Proxy/FastCGI Caching X-Accel-Expires 0 or Off ?
In-Reply-To: <72A2DFDC-8205-414A-9E98-8FB498A82BF5@nginx.com>
References: <72A2DFDC-8205-414A-9E98-8FB498A82BF5@nginx.com>
Message-ID:
Sergey Kandaurov Wrote:
-------------------------------------------------------
> [...]
Thank you for the information and help :) I am now using the "0" value and my header responses say "STALE", so it appears to be working well.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279752,279759#msg-279759
From mdounin at mdounin.ru Fri May 11 18:36:02 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 11 May 2018 21:36:02 +0300
Subject: How to enable 3des in TLS 1.0 and Disable 3des TLS 1.1 and above in Nginx
In-Reply-To: <26905b3b1deada448aebb9267385f695.NginxMailingListEnglish@forum.nginx.org>
References: <26905b3b1deada448aebb9267385f695.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180511183602.GZ32137@mdounin.ru>
Hello!
On Fri, May 11, 2018 at 02:42:29AM -0400, Dhinesh Kumar T wrote:
> How nginx enable 3des in TLS 1.0 and Disable 3des TLS 1.1 and above?
>
> Nginx: 1.12.2-1
> OpenSSL: 1.0.2k-8
>
> I have tried with creating multiple server, but that dint help. is there a
> way to do this?
No. Currently OpenSSL provides no mechanisms to selectively enable or disable ciphers depending on the protocol negotiated.
-- Maxim Dounin http://mdounin.ru/
From nginx-forum at forum.nginx.org Sat May 12 04:05:51 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Sat, 12 May 2018 00:05:51 -0400
Subject: Nginx Cache | @ prefix example
Message-ID:
So the docs (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid) say this: The "X-Accel-Expires" header field sets caching time of a response in seconds. The zero value disables caching for a response. If the value starts with the @ prefix, it sets an absolute time in seconds since Epoch, up to which the response may be cached. Can someone give an example of how this should look, and if I set it as zero, what is the outcome then...?
//unknown outcome / result...?
X-Accel-Expires: @0
//Expire cache straight away.
X-Accel-Expires: 0
//Expire cache in 5 seconds
X-Accel-Expires: 5
//Expire cache in 5 seconds and allow "STALE" cache responses to be stored for 5 seconds ?????
X-Accel-expires: @5 5
Hopefully I am right in thinking that the above would work like this; I need some clarification.
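For reference, the proxy_cache_valid documentation describes the header as carrying a single value; a sketch of the documented forms (the timestamp below is invented for illustration):

```nginx
# Documented forms of the X-Accel-Expires response header:
#
#   X-Accel-Expires: 0              the response is not cached at all
#   X-Accel-Expires: 5              cacheable for 5 seconds from now
#   X-Accel-Expires: @1526083200    cacheable until that absolute time,
#                                   given in seconds since the Epoch
#
# A combined form such as "@5 5" is not documented. On the nginx side,
# nothing extra is needed beyond an active cache:
proxy_cache staticfilecache;
proxy_cache_valid 200 10m;    # fallback used when the backend sends no header
```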
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279762,279762#msg-279762
From quintinpar at gmail.com Sat May 12 16:26:07 2018
From: quintinpar at gmail.com (Quintin Par)
Date: Sat, 12 May 2018 10:26:07 -0600
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
Message-ID:
My proxy_cache_path is set to a very high size:
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=staticfilecache:180m max_size=700m;
and the size used is only
sudo du -sh *
14M cache
4.0K proxy
proxy_cache_valid is set to:
proxy_cache_valid 200 120d;
I track HIT and MISS via:
add_header X-Cache-Status $upstream_cache_status;
Despite these settings I am seeing a lot of MISSes, and this is for pages on which I intentionally ran a cache warmer an hour ago. How do I debug why these MISSes are happening? How do I find out if a miss was due to eviction, expiration, some rogue header, etc.? Does nginx provide commands for this?
- Quintin
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From lucas at lucasrolff.com Sat May 12 16:29:43 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sat, 12 May 2018 16:29:43 +0000
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To: References: Message-ID:
It can be as simple as doing a curl to your "origin" URL (the one you proxy_pass to) for the files that get a lot of MISSes; if there are odd headers such as cookies, then you'll most likely experience a bad cache if your nginx is configured to not ignore those headers.
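To go further than spot-checking with curl, one option (a sketch; the log format name and file path are invented) is to log the cache status next to everything that feeds the cache key, then grep the MISS lines to see what varied:

```nginx
http {
    # Log the cache status together with the request line and cookies,
    # so each MISS line shows what might have changed the cache key.
    log_format cache_debug '$time_local "$request" '
                           'cache=$upstream_cache_status '
                           'key="$scheme://$host$request_uri" '
                           'cookies="$http_cookie"';

    server {
        access_log /var/log/nginx/cache_debug.log cache_debug;
    }
}
```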
From: nginx on behalf of Quintin Par
Reply-To: "nginx at nginx.org"
Date: Saturday, 12 May 2018 at 18.26
To: "nginx at nginx.org"
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
[...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From quintinpar at gmail.com Sat May 12 17:32:13 2018
From: quintinpar at gmail.com (Quintin Par)
Date: Sat, 12 May 2018 11:32:13 -0600
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To: References: Message-ID:
That's the tricky part. These MISSes are intermittent. Whenever I run curl I get HITs, but I end up seeing a lot of MISSes in the logs. How do I log these MISSes with the reason? I want to know which headers ended up bypassing the cache.
Here's my caching config:
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Port 443;
# If logged in, don't cache.
if ($http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) { set $do_not_cache 1; }
proxy_cache_key "$scheme://$host$request_uri$do_not_cache";
proxy_cache staticfilecache;
add_header Cache-Control public;
proxy_cache_valid 200 120d;
proxy_hide_header "Set-Cookie";
proxy_ignore_headers "Set-Cookie";
proxy_ignore_headers "Cache-Control";
proxy_hide_header "Cache-Control";
proxy_pass_header X-Accel-Expires;
proxy_set_header Accept-Encoding "";
proxy_ignore_headers Expires;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_use_stale timeout;
proxy_cache_bypass $arg_nocache $do_not_cache;
- Quintin
On Sat, May 12, 2018 at 10:29 AM Lucas Rolff wrote: [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From michael.friscia at yale.edu Sat May 12 18:01:03 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Sat, 12 May 2018 18:01:03 +0000
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To: References: , Message-ID:
I'm not sure if this will help, but I ignore/hide a lot; this is in my config:
proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
proxy_hide_header X-Accel-Expires;
proxy_hide_header Pragma;
proxy_hide_header Server;
proxy_hide_header Request-Context;
proxy_hide_header X-Powered-By;
proxy_hide_header X-AspNet-Version;
proxy_hide_header X-AspNetMvc-Version;
I have not experienced the problem you mention, I just thought I would offer my config.
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu
________________________________
From: nginx on behalf of Quintin Par
Sent: Saturday, May 12, 2018 1:32 PM
To: nginx at nginx.org
Subject: Re: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
[...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From jfjm2002 at gmail.com Sat May 12 18:18:23 2018
From: jfjm2002 at gmail.com (Joe Doe)
Date: Sat, 12 May 2018 11:18:23 -0700
Subject: inheritance of proxy_http_version and proxy_set_header
In-Reply-To: <20180509202503.GG19311@daoine.org>
References: <20180509202503.GG19311@daoine.org>
Message-ID:
Here is the config with some info redacted. The only difference between the mirrors that inherited the settings and the ones that did not is http vs. https. For the time being, to get around the issue, the settings that enable keep-alive for upstream servers were added to those mirrors.
nginx.conf: user nginx; worker_processes auto; worker_rlimit_nofile 65535; error_log /app/logs/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 65535; } http { include /etc/nginx/conf.d/backend*.conf; client_body_buffer_size 8k; resolver x.x.x.x; # Use connenction pool proxy_http_version 1.1; proxy_set_header Connection ""; keepalive_requests 2000; keepalive_timeout 65; include /etc/nginx/conf.d/reports.conf; } reports.conf: server { listen 80; server_name servername.com; location / { mirror /a; mirror /b; mirror /c; mirror /d; mirror /e; mirror /f; proxy_pass http://primary; } location /a { internal; proxy_pass http://backend-a; } location /b { internal; proxy_pass http://backend-b; } location /c { internal; proxy_pass http://c; } location /d { internal; proxy_pass http://backend-d; } location /e { internal; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_pass https://backend-e; } location /f { internal; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_pass https://backend-f; } } On Wed, May 9, 2018 at 1:25 PM, Francis Daly wrote: > On Wed, May 09, 2018 at 04:32:51AM -0700, Joe Doe wrote: > > Hi there, > > > I have many multiple mirrors for incoming request. To keep the config > > clean, I set: > > proxy_http_version 1.1; > > proxy_set_header ""; > > > > in the http context. This worked for us (verified keep-alive is working), > > and it will inherit to all the mirror proxy_pass. > > Those config directives (corrected) will inherit to any "location" which > does not have a "proxy_http_version" directive or a "proxy_set_header" > directive, respectively. (Assuming that neither are set at "server" > level either.) > > > However, I recently added a mirror that used https, and I notice these > > settings no longer inherit to this mirror. At least keep-alive was not > > working. To address this, I had to add these 2 settings into the location > > specific to the mirror. 
(adding to the server context didn't work either)
> Can you show the config that does not react the way that you want it to?
> If you get the upstream (proxy_pass) server to "echo" the incoming
> request, can you see what http version and http headers are sent by nginx?
> > According to the documentation, these 2 settings can be in http, server and
> > location context. And I assume if it's in http context, it would inherit to
> > all the sub-blocks (and it did work for all the other http mirrors). Is
> > this assumption incorrect and I should add these 2 settings to all the
> > locations where I want to use keep-alive?
> Directive inheritance follows the rules, or there is a bug. If these two
> settings mean that keep-alive works for you, then you must make sure
> that these two settings are in, or inherited into, each location that
> you care about.
> f
> -- Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> -------------- next part -------------- An HTML attachment was scrubbed... URL:
From cherian.in at gmail.com Sun May 13 05:30:00 2018
From: cherian.in at gmail.com (Cherian Thomas)
Date: Sat, 12 May 2018 23:30:00 -0600
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To: References: Message-ID:
Thanks for this, Michael. This is so surprising. If someone decides to DoS and crawls the website with a rogue header, this will essentially bypass the cache and put a strain on the website. In fact, I was hit by a DoS attack; that's when I started looking at logs and realized the large number of MISSes. Can someone please help?
- Cherian
On Sat, May 12, 2018 at 12:01 PM, Friscia, Michael wrote: [...]
[...]
From nginx-forum at forum.nginx.org Sun May 13 22:12:49 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Sun, 13 May 2018 18:12:49 -0400
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To: References: Message-ID: <95053daace8238a9e917c313d7426872.NginxMailingListEnglish@forum.nginx.org>
You know you can DoS sites into cache MISSes by switching up URL params and arguments. Examples:
HIT : index.php?var1=one&var2=two
MISS : index.php?var2=two&var1=one
MISS : index.php?random=1 index.php?random=2 index.php?random=3
etc. Inserting random arguments into URLs will cause cache misses, and changing the order of existing valid URL arguments will also cause misses.
Cherian Thomas Wrote:
-------------------------------------------------------
> Thanks for this Michael. [...]
> > > - Cherian > > On Sat, May 12, 2018 at 12:01 PM, Friscia, Michael > > wrote: > > > I'm not sure if this will help, but I ignore/hide a lot, this is in > my > > config > > > > > > proxy_ignore_headers X-Accel-Expires Expires Cache-Control > Set-Cookie; > > proxy_hide_header X-Accel-Expires; > > proxy_hide_header Pragma; > > proxy_hide_header Server; > > proxy_hide_header Request-Context; > > proxy_hide_header X-Powered-By; > > proxy_hide_header X-AspNet-Version; > > proxy_hide_header X-AspNetMvc-Version; > > > > > > I have not experienced the problem you mention, I just thought I > would > > offer my config. > > > > > > ___________________________________________ > > > > Michael Friscia > > > > Office of Communications > > > > Yale School of Medicine > > > > (203) 737-7932 ? office > > > > (203) 931-5381 ? mobile > > > > http://web.yale.edu > > > ffb?url=http%3A%2F%2Fweb.yale.edu%2F&userId=74734&signature=d652edf1f4 > f21323> > > > > > > ------------------------------ > > *From:* nginx on behalf of Quintin Par < > > quintinpar at gmail.com> > > *Sent:* Saturday, May 12, 2018 1:32 PM > > *To:* nginx at nginx.org > > *Subject:* Re: Debugging Nginx Cache Misses: Hitting high number of > MISS > > despite high proxy valid > > > > > > That?s the tricky part. These MISSes are intermittent. Whenever I > run curl > > I get HITs but I end up seeing a lot of MISS in the logs. > > > > > > > > How do I log these MiSSes with the reason? I want to know what > headers > > ended up bypassing the cache. 
> > > > > > > > Here?s my caching config > > > > > > > > proxy_pass http://127.0.0.1:8000 > > > d=DwMFaQ&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_ > lS023SJrs&m=F-qGMOyS74uE8JM-dOLmNH92bQ1xQ-7Rj1d6k-_WST4&s=NHvlb1WColNw > TWBF36P1whJdu5iWHK9_50IDHugaEdQ&e=> > > ; > > > > proxy_set_header X-Real-IP $remote_addr; > > > > proxy_set_header X-Forwarded-For > > $proxy_add_x_forwarded_for; > > > > proxy_set_header X-Forwarded-Proto https; > > > > proxy_set_header X-Forwarded-Port 443; > > > > > > > > # If logged in, don't cache. > > > > if ($http_cookie ~* > "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" > > ) { > > > > set $do_not_cache 1; > > > > } > > > > proxy_cache_key "$scheme://$host$request_uri$ > > do_not_cache"; > > > > proxy_cache staticfilecache; > > > > add_header Cache-Control public; > > > > proxy_cache_valid 200 120d; > > > > proxy_hide_header "Set-Cookie"; > > > > proxy_ignore_headers "Set-Cookie"; > > > > proxy_ignore_headers "Cache-Control"; > > > > proxy_hide_header "Cache-Control"; > > > > proxy_pass_header X-Accel-Expires; > > > > > > > > proxy_set_header Accept-Encoding ""; > > > > proxy_ignore_headers Expires; > > > > add_header X-Cache-Status $upstream_cache_status; > > > > proxy_cache_use_stale timeout; > > > > proxy_cache_bypass $arg_nocache $do_not_cache; > > - Quintin > > > > > > On Sat, May 12, 2018 at 10:29 AM Lucas Rolff > wrote: > > > > It can be as simple as doing a curl to your ?origin? url (the one > you > > proxy_pass to) for the files you see that gets a lot of MISS?s ? if > there?s > > odd headers such as cookies etc, then you?ll most likely experience > a bad > > cache if your nginx is configured to not ignore those headers. 
> > *From: *nginx on behalf of Quintin Par <quintinpar at gmail.com>
> > *Reply-To: *"nginx at nginx.org"
> > *Date: *Saturday, 12 May 2018 at 18.26
> > *To: *"nginx at nginx.org"
> > *Subject: *Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
> >
> > My proxy cache path is set to a very high size
> >
> > proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=staticfilecache:180m max_size=700m;
> >
> > and the size used is only
> >
> > sudo du -sh *
> > 14M  cache
> > 4.0K proxy
> >
> > Proxy cache valid is set to
> >
> > proxy_cache_valid 200 120d;
> >
> > I track HIT and MISS via
> >
> > add_header X-Cache-Status $upstream_cache_status;
> >
> > Despite these settings I am seeing a lot of MISSes. And this is for pages I intentionally ran a cache warmer on an hour ago.
> >
> > How do I debug why these MISSes are happening? How do I find out if the miss was due to eviction, expiration, some rogue header etc? Does Nginx provide commands for this?
> >
> > - Quintin
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279764,279771#msg-279771

From francis at daoine.org Sun May 13 23:11:12 2018
From: francis at daoine.org (Francis Daly)
Date: Mon, 14 May 2018 00:11:12 +0100
Subject: Problem with to multiple virtual hosts
In-Reply-To: <956def47bd86121839b3ed3573431044.NginxMailingListEnglish@forum.nginx.org>
References: <20180509201058.GE19311@daoine.org> <956def47bd86121839b3ed3573431044.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180513231112.GI19311@daoine.org>

Hi there,

it sounds like you have found a workaround for your production system, so that is a good thing.

The probably-not-satisfactory but maybe-adequate thing is for you to create a new directory whenever the current directory has (say) 100 files, and put new files into the new directory.

If you are willing to do some more testing, perhaps the problem can be identified and the "proper" solution designed.
Since this will involve restarting nginx with known-broken config, you may be happier doing it on a non-production system.

So: one test can be about the number of files -- if "195 is good; 197 is bad" is true in general.

What I think you have reported is that with 195 files, things work; if you add the new config to the end of a current file, things work; but if you add the new config to new files instead of current files, things fail.

On a system that shows that behaviour, can you have the config in the two new files so that it fails; and then delete five other files so that you have fewer than 195 files again. Does that work or fail?

If it works, it suggests that there may be a file-count limit; if it fails, it suggests that the problem is not with the number of files.

Another test can be about the config. With the 195-files case where it works, with the new config at the end of a current file, can you write the output of "nginx -T" to a file -- for example:

  nginx -T > 195-works

And then with the 197-files case where it fails, can you do something similar:

  nginx -T > 197-fails

Then you can play "spot-the-difference" between the two files, such as by doing

  diff -u 195-works 197-fails

What you should see there is probably exactly the same content with + and - marks at the start, as the new config is probably in different places in the full config. If there is anything other than that, it may be interesting.

(Also: the diff output should be mostly just the new config, so should be much smaller than the full config, and should have much less private information, so may be easier to edit before sharing, if that is a useful thing to do.)

Maybe one or other of those tests will show something that will help identify the source of the problem. If you can afford the effort to try that, it may help the next person who has the same issue.
Cheers,

f
--
Francis Daly        francis at daoine.org

From quintinpar at gmail.com Mon May 14 04:06:05 2018
From: quintinpar at gmail.com (Quintin Par)
Date: Mon, 14 May 2018 00:06:05 -0400
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To: <95053daace8238a9e917c313d7426872.NginxMailingListEnglish@forum.nginx.org>
References: <95053daace8238a9e917c313d7426872.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Thanks all for the response. Michael, I am going to add those header ignores.

Still puzzled by the large number of MISSes and I've no clue why they are happening. Leads appreciated.

- Quintin

On Sun, May 13, 2018 at 6:12 PM, c0nw0nk wrote:

> You know you can DoS sites with Cache MISS via switching up URL params and
> arguments.
>
> Examples :
>
> HIT :
> index.php?var1=one&var2=two
> MISS :
> index.php?var2=two&var1=one
>
> MISS :
> index.php?random=1
> index.php?random=2
> index.php?random=3
> etc etc
>
> Inserting random arguments into URLs will cause cache misses, and changing
> the order of existing valid URL arguments will also cause misses.
>
> Cherian Thomas Wrote:
> -------------------------------------------------------
> > Thanks for this Michael.
> >
> > This is so surprising. If someone decides to DoS and crawls the website
> > with a rogue header, this will essentially bypass the cache and put a
> > strain on the website. In fact, I was hit by a DoS attack; that's when I
> > started looking at logs and realized the large number of MISSes.
> >
> > Can someone please help?
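[Editor's note: one common mitigation for the parameter-order/junk-parameter problem described above is to stop keying the cache on the raw $request_uri and instead build the key from $uri plus an explicitly named set of arguments. This is a hypothetical sketch, not from the thread; "page" and "q" are made-up example parameters, and "staticfilecache" matches the zone used earlier in the thread.]

```nginx
location / {
    # Key on the path plus a fixed whitelist of arguments. Reordered or
    # unknown query parameters all collapse onto the same key, so they can
    # no longer create new cache entries.
    proxy_cache_key "$scheme://$host$uri?page=$arg_page&q=$arg_q$do_not_cache";
    proxy_cache staticfilecache;
    proxy_pass http://127.0.0.1:8000;
}
```

The trade-off is that any argument the backend honours but the key omits will serve one cached variant to everyone, so the whitelist has to be maintained alongside the application.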
From michael.friscia at yale.edu Mon May 14 11:33:30 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Mon, 14 May 2018 11:33:30 +0000
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To:
References: <95053daace8238a9e917c313d7426872.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

I wish I had a lead for you.
I've never seen that behavior.

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

From: nginx on behalf of Quintin Par
Reply-To: "nginx at nginx.org"
Date: Monday, May 14, 2018 at 12:07 AM
To: "nginx at nginx.org"
Subject: Re: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid

Thanks all for the response. Michael, I am going to add those header ignores.

Still puzzled by the large number of MISSes and I've no clue why they are happening. Leads appreciated.

- Quintin
From peter_booth at me.com Mon May 14 15:07:47 2018
From: peter_booth at me.com (Peter Booth)
Date: Mon, 14 May 2018 11:07:47 -0400
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
In-Reply-To:
References: <95053daace8238a9e917c313d7426872.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Quintin,

I don't know anything about your context, but your setup looks over-simplistic. Here are some things that I learned painfully over a few years of supporting a high-traffic retail website.

1. Is this a website that's on the internet, and thus exposed to random queries from bots and scrapers that you can't control?

2.
For your cache misses, how long (best case, typical, and worst case) does your back-end take to build the pages?

3. You need to log everything that could feasibly affect the status of the site. For example, here's a log configuration from one gnarly site that I worked on:

log_format main '$http_x_forwarded_for $http_true_client_ip $remote_addr - $remote_user [$time_local] $host "$request" '
                '$status $body_bytes_sent $upstream_cache_status $cookie_jsessionid $http_akamai_country $cookie_e4x_country $cookie_e4x_currency "$http_referer" '
                '"$http_user_agent" "$request_time"';

4. The first problem is your cache key, and that it includes $request_uri, which is the original URI including all arguments. So you are already exposed to DoS requests that could be unintentional, as anyone can bust your cache by adding an extra parameter.

> proxy_cache_key "$scheme://$host$request_uri$do_not_cache";

5. Not caching requests from logged-in users is a very blunt tool. Is this a site where only administrative users are logged in? Imagine a retail site that sells clothing. It's possible that a dynamic page that lists all the red dresses is something a logged-in user sees. Perhaps the page can be cached? But if there is a version of the page that shows 30 entries and another that shows 60, then they need to be disambiguated by the cache key. Perhaps users can choose to see prices in Euro instead of USD? Then this also belongs in the key. If I am an American vacationing in Paris, then perhaps the default behavior should be to show me Euro prices, based on the value of a cookie that the CDN sets. In that situation the customer may want to override this default behavior and insist he sees USD prices. You can see how complex this can get.

7. The default behavior is to not cache responses that contain a Set-Cookie header - imagine how cache pollution (sending someone another person's personal data stored in a cookie) could be much worse than a cache miss.
But there are also settings where your backend is some legacy software that you don't control, and the correct behavior isn't to not cache, but instead to remove the Set-Cookie from the response and cache the response without it.

8. How you prime the cache, monitor the cache, and clear the cache are crucial. Perhaps you have a script that uses curl or wget to retrieve a series of pages from your site. If the script is written naively, then each step might cause a new servlet session to be created on the backend, producing a memory issue.

9. This script is very useful to track the health of your cache: https://github.com/perusio/nginx-cache-inspector

10. The if directive in nginx has some issues (see https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/). When I need to use complex configuration logic I use OpenResty. OpenResty is a bundle that combines the standard nginx with some additional Lua modules. It's still standard nginx - not forked or customized in any way.

11. A very cut-down version of a cache config for one page follows:

# Product arrays get cached
location ~ /shop/ {
    rewrite "/(.*)/2];ord.*$" $1 ;
    proxy_no_cache $arg_mid $arg_siteID;
    proxy_cache_bypass $arg_mid $arg_siteID;
    proxy_cache_use_stale updating;
    default_type text/html;
    proxy_cache_valid 200 302 301 15m;
    proxy_ignore_headers Set-Cookie Cache-Control;
    proxy_pass_header off;
    proxy_hide_header Set-Cookie;
    expires 900s;
    add_header Last-Modified "";
    add_header ETag "";

    # Build cache key
    set $e4x_currency $cookie_e4x_currency;
    set_if_empty $e4x_currency 'USD';
    set $num_items $cookie_EndecaNumberOfItems;
    set_if_empty $num_items 'LOW';
    proxy_cache_key "$uri|$e4x_currency|$num_items";
    proxy_cache product_arrays;

    # Add Canonical URL string
    set $folder_id $arg_FOLDER%3C%3Efolder_id;
    set $canonical_url "http://$http_host$uri";
    add_header Link "<$canonical_url>; rel=\"canonical\"";

    proxy_pass http://apache$request_uri;
}

This snippet shows a key made of three parts. The real version has seven parts.
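[Editor's note: the thread's open question — "how do I log these MISSes with the reason?" — can be approached with a dedicated access log that records the cache status next to the key components for every request. This is a hypothetical sketch, not from the thread; the log name "cachelog" and the logged fields are illustrative and should be adapted to the cache key actually in use.]

```nginx
# Record the cache verdict and everything that feeds the key/bypass logic,
# so each MISS line shows which argument, cookie, or bypass flag caused it.
log_format cachelog '$time_local "$request" $status '
                    'cache=$upstream_cache_status '
                    'key="$scheme://$host$request_uri" '
                    'bypass="$arg_nocache" cookies="$http_cookie"';

access_log /var/log/nginx/cache.log cachelog;
```

Grepping this log for `cache=MISS` and comparing the key and cookie fields of MISS lines against HIT lines for the same path usually points straight at the offending parameter or header.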
Good luck!

Peter
From nginx-forum at forum.nginx.org Mon May 14 17:22:46 2018
From: nginx-forum at forum.nginx.org (vedranf)
Date: Mon, 14 May 2018 13:22:46 -0400
Subject: Regression in 1.14 when following upstream redirects
Message-ID: <6c5f5cc5851f1408e45656cc394f515a.NginxMailingListEnglish@forum.nginx.org>

Hello,

There is a problem when nginx is configured to follow redirects (301) from the upstream server, in order to cache the response being redirected to rather than the short redirect itself.
This worked in 1.12 and earlier releases. Here is the simplified configuration I use, which used to work:

server {
    proxy_cache something;

    location / {
        proxy_pass http://upstream;
    }

    location @handle3XX {
        proxy_cache_key ...;
        set $target $upstream_http_location;
        proxy_pass $target;
        proxy_redirect off;
        internal;
    }
}

With 1.12 this would cause nginx to follow the redirect and return the response after the (absolute) redirect. With 1.14 something weird is going on: it returns the 301 back to the client, and the $upstream_cache_status variable is set to HIT (even though 3XX responses aren't configured to be cached at all). If I repeat the request, I get a 500 with "invalid URL prefix in", because $target is now empty as nginx didn't connect to the upstream at all. Debug logs for the critical part are shown below (trimmed). Common to both nginx versions:

2018/05/14 16:06:20 [debug] 6280#6280: *1728 http upstream request: "/path"
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http upstream process header
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http proxy status 301 "301 Moved Permanently"
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http proxy header: "Date: Mon, 14 May 2018 16:06:20 GMT"
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http proxy header done
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http file cache free, fd: -1
2018/05/14 16:06:20 [debug] 6280#6280: *1728 finalize http upstream request: 301
2018/05/14 16:06:20 [debug] 6280#6280: *1728 finalize http proxy request
2018/05/14 16:06:20 [debug] 6280#6280: *1728 free keepalive peer
2018/05/14 16:06:20 [debug] 6280#6280: *1728 free rr peer 2 0
2018/05/14 16:06:20 [debug] 6280#6280: *1728 close http upstream connection: 393
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http finalize request: 301, "/path" a:1, c:1
2018/05/14 16:06:20 [debug] 6280#6280: *1728 http special response: 301, "/path"
2018/05/14 16:06:20 [debug] 6280#6280: *1728 test location: "@handle3XX"
2018/05/14 16:06:20 [debug] 6280#6280: *1728 using location: @handle3XX
"/path" 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http script complex value 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http script var: "https://site.com/path 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http script set $target 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http script var: "https://site.com/path" 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http init upstream, client timer: 0 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http script var: "/path" 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http cache key: "..." Now it starts to differ, 1.12: 2018/05/14 16:06:20 [debug] 6280#6280: *1728 add cleanup: 000000000B922848 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http file cache exists: -5 e:0 2018/05/14 16:06:20 [debug] 6280#6280: *1728 cache file: "/home/cache/63/67/e/1fb991ad8a2289a3c617c43166ae6763" 2018/05/14 16:06:20 [debug] 6280#6280: *1728 add cleanup: 000000000B922860 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http file cache lock u:1 wt:0 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http upstream cache: -5 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http proxy header: "GET /path HTTP/1.1 Host: site.com Connection: close Accept: */* " 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http cleanup add: 0000000003F06B58 2018/05/14 16:06:20 [debug] 6280#6280: *1728 http upstream resolve: "/path" 2018/05/14 16:06:20 [debug] 6280#6280: *1728 name was resolved to x.x.x.x and now goes on to proxy request to new upstream from the location response header ... 
while on 1.14:

2018/05/14 16:19:08 [debug] 8112#8112: *45398 add cleanup: 000000000D97C620
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http file cache exists: 0 e:0
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http upstream cache: 301
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http finalize request: 301, "/path" a:1, c:3
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http special response: 301, "/path"
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http script var: "HIT"
2018/05/14 16:19:08 [debug] 8112#8112: *45398 HTTP/1.1 301 Moved Permanently
2018/05/14 16:19:08 [debug] 8112#8112: *45398 write new buf t:1 f:0 000000000E02B938, pos 000000000E02B938, size: 348 file: 0, size: 0
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http write filter: l:1 f:0 s:348
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http write filter limit 16777216
2018/05/14 16:19:08 [debug] 8112#8112: *45398 writev: 348 of 348
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http write filter 0000000000000000
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http finalize request: 0, "/path" a:1, c:3
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http request count:3 blk:0
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http finalize request: -4, "/path" a:1, c:2
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http request count:2 blk:0
2018/05/14 16:19:08 [debug] 8112#8112: *45398 http finalize request: -4, "/path" a:1, c:1

... and it basically never proxies to the new upstream server. 1.12 eventually responds with a 200 or 404 from wherever the first upstream redirected nginx to, while 1.14 just passes on the 301 from the first upstream. The configurations used when testing were the same; only the nginx binary was different.
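For context, the simplified configuration above never shows how @handle3XX is entered. The usual wiring for this follow-the-redirect pattern, which is assumed here and not shown in the report, intercepts the upstream 3xx with error_page:

```nginx
# Assumed wiring (not shown in the report): send upstream redirects to @handle3XX.
location / {
    proxy_pass http://upstream;
    proxy_intercept_errors on;       # hand responses >= 300 to error_page
    error_page 301 302 = @handle3XX; # re-dispatch the redirect internally
}
```

If the actual wiring differs, the caching behaviour of the intercepted 301 may differ too.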
Regards, Vedran

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279787,279787#msg-279787

From michael.friscia at yale.edu Mon May 14 17:26:19 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Mon, 14 May 2018 17:26:19 +0000 Subject: Custom error_page Message-ID: I'm not sure if I'm using error_page correctly. I'm trying to set this up so that if the upstream server returns a 500, I show a custom error page. Is this possible? I have a custom error setup that works just fine using the instructions from this site: https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-to-use-custom-error-pages-on-ubuntu-14-04 That all works just fine, but it seems to apply only to cases where Nginx itself generates the error, not when the upstream server is serving one. Simply put, how can I have Nginx serve a custom error page when the upstream server returns a 500, 502, 503, 504 or 404? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.friscia at yale.edu Mon May 14 17:32:51 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Mon, 14 May 2018 17:32:51 +0000 Subject: Custom error_page In-Reply-To: References: Message-ID: <5F9C3036-17D5-4865-A38E-FD09B284A4BB@yale.edu> OK, I sort of found the answer here: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors Then I realized my problem is that I only want to do this for 500, 502, 503 and 504; I want the upstream server to handle 404 errors on its own. So is there a way to intercept only the 5xx errors instead of all of them?
___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu

[original "Custom error_page" message quoted in full; trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
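For reference, a sketch of intercepting only the 5xx codes: with proxy_intercept_errors on, nginx redirects to error_page only those status codes that have an error_page entry, so leaving 404 out lets the upstream's own 404 pages through. The paths below are placeholders:

```nginx
# Sketch: intercept 5xx only; upstream 404 pages pass through unchanged.
location / {
    proxy_pass http://backend;
    proxy_intercept_errors on;
    error_page 500 502 503 504 /custom_50x.html;
}

location = /custom_50x.html {
    root /usr/share/nginx/html;  # placeholder path
    internal;
}
```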
URL: 

From xserverlinux at gmail.com Tue May 15 02:02:45 2018 From: xserverlinux at gmail.com (Ricky Gutierrez) Date: Mon, 14 May 2018 20:02:45 -0600 Subject: Connection refused Message-ID: Hello list, I have a reverse proxy with nginx on the front end, and on the backend nginx with some PHP 7 applications and MariaDB. Reviewing the log I see a lot of errors like this:

2018/05/09 17:44:58 [error] 14633#14633: *1761 connect() failed (111: Connection refused) while connecting to upstream, client: 186.77.203.203, server: web.mydomain.com, request: "GET /imagenes/slide7.jpg HTTP/2.0", upstream: "http://192.168.11.7:80/imagenes/slide7.jpg", host: "www.mydomain.com", referrer: "https://www.mydomain.com/"
2018/05/09 17:45:09 [error] 14633#14633: *1761 connect() failed (111: Connection refused) while connecting to upstream, client: 186.77.203.203, server: web.mydomain.com, request: "GET /imagenes/slide8.jpg HTTP/2.0", upstream: "http://192.168.11.7:80/imagenes/slide8.jpg", host: "www.mydomain.com", referrer: "https://www.mydomain.com/"
2018/05/09 17:45:12 [error] 14633#14633: *1761 upstream prematurely closed connection while reading response header from upstream, client: 186.77.203.203, server: web.mydomain.com, request: "GET /imagenes/slide6.jpg HTTP/2.0", upstream: "http://192.168.11.7:80/imagenes/slide6.jpg", host: "www.mydomain.com", referrer: "https://www.mydomain.com/"

I made a change according to this link on GitHub, but I cannot get rid of the error: https://github.com/owncloud/client/issues/5706

My config:

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 900s;
    proxy_send_timeout 900s;
    proxy_read_timeout 900s;
    proxy_buffer_size 64k;
    proxy_buffers 16 32k;
    proxy_busy_buffers_size 64k;
    proxy_redirect off;
    proxy_request_buffering off;
    proxy_buffering off;
    proxy_pass http://backend1;

Regards -- rickygm
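A "connect() failed (111: Connection refused)" means the backend at 192.168.11.7:80 was not accepting connections at that moment; none of the proxy timeouts or buffers above affect that. A sketch, assuming an upstream block named backend1 as in the config, of letting the front end mark a refusing peer as failed and retry within bounds:

```nginx
# Sketch: tolerate brief backend outages. Directive names are from the
# ngx_http_upstream / ngx_http_proxy modules; values are examples only.
upstream backend1 {
    server 192.168.11.7:80 max_fails=3 fail_timeout=30s;
    # a second "server" line here would give the proxy somewhere to fail over to
}

server {
    location / {
        proxy_pass http://backend1;
        proxy_next_upstream error timeout;  # retry on connect errors/timeouts
        proxy_next_upstream_tries 2;        # available since nginx 1.9.13
    }
}
```

This only papers over the symptom; the refusals themselves point at the backend nginx or PHP-FPM restarting or hitting a connection limit, which is worth checking on the backend host.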
http://gnuforever.homelinux.com

From mdounin at mdounin.ru Tue May 15 03:38:05 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 May 2018 06:38:05 +0300 Subject: Regression in 1.14 when following upstream redirects In-Reply-To: <6c5f5cc5851f1408e45656cc394f515a.NginxMailingListEnglish@forum.nginx.org> References: <6c5f5cc5851f1408e45656cc394f515a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180515033805.GF32137@mdounin.ru> Hello! On Mon, May 14, 2018 at 01:22:46PM -0400, vedranf wrote:

> There is a problem when nginx is configured to try to follow redirects (301)
> from upstream server in order to cache responses being directed to, rather
> than the short redirect itself. This worked in 1.12 and earlier releases.
> Here is the simplified configuration I use and which used to work:
>
> server {
>     proxy_cache something;
>     location / { proxy_pass http://upstream; }
>     location @handle3XX {
>         proxy_cache_key ...;
>         set $target $upstream_http_location;
>         proxy_pass $target;
>         proxy_redirect off;
>         internal;
>     }
> }
>
> With 1.12 this would cause nginx to follow the redirect and return the
> response after the (absolute) redirect. With 1.14 something weird is going
> on, it returns 301 back to the client and upstream_cache_status variable is
> set to HIT (even though 3XX aren't configured to be cached at all). If I
> repeat the request, I get 500 with "invalid URL prefix in" because $target
> is now empty as it didn't connect to the upstream at all.
>
> Debug logs for the critical part show this below (trimmed). Common for both
> nginx versions:

[...]

From the incomplete configuration and debug log snippets you've provided, it looks like your problem is that requests which previously were not cached are now successfully extracted from the cache.

From the snippets you've provided it is not possible to conclude whether the previous behaviour was buggy and is now fixed (and your previous configuration worked due to a bug), or the new behaviour is incorrect.
There are at least some fixes in 1.13.x which might affect your configuration. In particular, this fix in 1.13.6 might be related:

    *) Bugfix: cache control headers were ignored when caching errors
       intercepted by error_page.

To further investigate things you may want to provide the full configuration which demonstrates the problem, and full debug logs for requests in both versions. Please avoid any modifications to the configuration and debug logs. If you want to keep some information private, consider reproducing the problem in a sandbox without any private information instead.

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue May 15 08:35:07 2018 From: nginx-forum at forum.nginx.org (Nginx-Chris) Date: Tue, 15 May 2018 04:35:07 -0400 Subject: Nginx only serves 1 App Message-ID: <43c68b260c620fe9930c1634ec8807df.NginxMailingListEnglish@forum.nginx.org> Root server with Ubuntu 16.04. Nginx version: 1.10.3.

I have an Nginx server that serves one application: an open source cloud server from Seafile that listens on cloud.mydomain.com. I now tried to add another application to my server: a Mattermost server that should listen on chat.mydomain.com.

When I add the Nginx config for Mattermost, it is only reachable when I deactivate the Seafile nginx config. So the server only serves one application at a time, and that is always the Seafile server. No nginx error.logs or access.logs get any data from the Mattermost login attempts.

I am pasting the configs below and hope that someone can give me a tip on what I have done wrong or what I need to change. I don't understand why Nginx does not listen for chat.mydomain.com. Any help would be very much appreciated!

SEAFILE NGINX CONFIG:

server {
    listen 80 http2;
    listen [::]:80 http2;
    server_name cloud.mydomain.com;

    rewrite ^ https://$http_host$request_uri? permanent; # force redirect http to https

    # Enables or disables emitting nginx version on error pages and in the "Server" response header field.
    server_tokens off;
}

server {
    listen 443 ssl http2; # managed by Certbot
    listen [::]:443 http2;
    ssl on;
    server_name cloud.mydomain.com;

    ssl_session_cache shared:SSL:5m;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/cloud.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    proxy_set_header X-Forwarded-For $remote_addr;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 1200s;

        # used for view/edit office file via Office Online Server
        client_max_body_size 0;

        access_log /var/log/nginx/seahub.access.log;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_send_timeout 36000s;
        send_timeout 36000s;
        proxy_request_buffering off;
    }

    location /media {
        root /home/user/seafile.cloud/seafile-server-latest/seahub;
    }

    location /webdav {
        fastcgi_pass 127.0.0.1:8080;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS on;
        fastcgi_param HTTP_SCHEME https;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_send_timeout 36000s;
        send_timeout 36000s;

        # This option is only available for Nginx >= 1.8.0. See more details below.
        proxy_request_buffering off;

        access_log /var/log/nginx/seafdav.access.log;
        error_log /var/log/nginx/seafdav.error.log;
    }
}

MATTERMOST NGINX CONFIG:

upstream backend {
    server 127.0.0.1:8065;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
    listen 80;
    listen [::]:80;
    server_name chat.mydomain.com;

    location ~/api/v[0-9]+/(users/)?websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 50M;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_pass http://backend;
    }

    location / {
        client_max_body_size 50M;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache mattermost_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_pass http://backend;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279794,279794#msg-279794

From michael.friscia at yale.edu Tue May 15 11:27:57 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Tue, 15 May 2018 11:27:57 +0000 Subject: Nginx only serves 1 App In-Reply-To: <43c68b260c620fe9930c1634ec8807df.NginxMailingListEnglish@forum.nginx.org> References:
<43c68b260c620fe9930c1634ec8807df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <231E04BC-6600-46A1-B2D4-B64BB002AF5A@yale.edu> What happens if you use only one config file and put all of that in it? Nothing really stands out to me in your config. I run about 600 domain names through one Nginx server, with many sub-domains in separate server blocks. I've had issues before where a subdomain was not served correctly; I ended up dumbing the config down to just server blocks with only access logs and a bunch of custom headers, to make sure each request was being handled in the block I thought it would be in.

___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu

On 5/15/18, 4:35 AM, "nginx on behalf of Nginx-Chris" wrote:

> [original "Nginx only serves 1 App" message and configs quoted in full; urldefense-mangled duplicate trimmed]
From michael.friscia at yale.edu Tue May 15 12:12:10 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Tue, 15 May 2018 12:12:10 +0000 Subject: Blank Pages Message-ID: <8324938E-F72D-446E-B23C-3AEF59824D11@yale.edu> I'm wondering if there's a simple way to solve this problem. The upstream application sometimes returns a blank 500 error, which Nginx then serves as a blank page. This is working as intended. But what I'd like Nginx to do is display a custom error page if the upstream 500 response is blank, and serve the upstream's own 500 page if it is not blank. Has anyone ever come up with a way to handle a case like that? I was thinking of having a custom header in the upstream app: if that header doesn't exist, serve a page from Nginx. But before I run down that path I thought I'd ask. I cannot use "proxy_intercept_errors on;" because the upstream app serves customized 404 errors that I would lose.

___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From kohenkatz at gmail.com Tue May 15 13:15:26 2018 From: kohenkatz at gmail.com (Moshe Katz) Date: Tue, 15 May 2018 09:15:26 -0400 Subject: Nginx only serves 1 App In-Reply-To: <43c68b260c620fe9930c1634ec8807df.NginxMailingListEnglish@forum.nginx.org> References: <43c68b260c620fe9930c1634ec8807df.NginxMailingListEnglish@forum.nginx.org> Message-ID: It looks to me like your problem is that Seafile is using HTTPS but Mattermost is not. That said, I don't understand how you are able to get to Mattermost at all, since you are setting HSTS headers that should prevent your browser from going to a non-secure page on your domain. Add an HTTPS configuration for Mattermost and see if that helps.

-- Moshe Katz -- kohenkatz at gmail.com -- +1(301)867-3732

On Tue, May 15, 2018 at 4:35 AM Nginx-Chris wrote:

> [original "Nginx only serves 1 App" message and configs quoted in full; trimmed]
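A minimal sketch of what that HTTPS server block for Mattermost could look like. The certificate paths mirror the Certbot layout used in the Seafile block and are assumptions, as is the reduced set of proxy headers:

```nginx
# Sketch only: HTTPS front end for chat.mydomain.com, mirroring the Seafile block.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name chat.mydomain.com;

    ssl_certificate     /etc/letsencrypt/live/chat.mydomain.com/fullchain.pem;  # assumed path
    ssl_certificate_key /etc/letsencrypt/live/chat.mydomain.com/privkey.pem;    # assumed path

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://backend;
    }
}
```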
_______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue May 15 14:27:51 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 May 2018 17:27:51 +0300 Subject: Blank Pages In-Reply-To: <8324938E-F72D-446E-B23C-3AEF59824D11@yale.edu> References: <8324938E-F72D-446E-B23C-3AEF59824D11@yale.edu> Message-ID: <20180515142750.GG32137@mdounin.ru> Hello! On Tue, May 15, 2018 at 12:12:10PM +0000, Friscia, Michael wrote: > I'm wondering if there's a simple way to solve this problem. > > The upstream application sometimes returns a blank 500 error > which Nginx then serves as the blank page. This is working as > intended. But what I'd like Nginx to do is display a custom > error page if the upstream 500 error is blank, but if the > upstream 500 page is not blank, then I want to serve the > upstream 500 error page. > > Has anyone ever come up with a way to handle a case like that? > > I was thinking of having a custom header in the upstream app and > if that header doesn't exist, then serve a page from Nginx but > before I run down that path I thought I'd ask. I cannot use > proxy_intercept_errors on; because the upstream app serves > customized 404 errors that I would lose. Note that you can configure interception of only the 500 error page, as nginx will only intercept errors you have an explicit error_page for. That is, a configuration like this will only intercept 500, but not 404: location / { proxy_pass http://backend; proxy_intercept_errors on; error_page 500 /error500.html; } location = /error500.html { ... } This won't allow you to test if the returned upstream error response is blank or not, but may be enough for your use case based on the above description.
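Maxim's interception rule can be sketched as a tiny decision function: with proxy_intercept_errors on, nginx substitutes its own page only for statuses that have an explicit error_page, so the upstream's custom 404 still passes through. The following is a minimal illustration of that configured behavior, not nginx internals; the page contents are made up:

```python
# Sketch of the error-interception decision described above: with
# proxy_intercept_errors on, only statuses listed in an error_page
# directive are replaced by nginx's own page; everything else
# (including the upstream's customized 404) passes through unchanged.
# Illustrative only -- not nginx source code.

def served_body(upstream_status, upstream_body,
                intercept_errors=True, error_pages=None):
    """Return the body the client would receive."""
    if error_pages is None:
        # mirrors "error_page 500 /error500.html;"
        error_pages = {500: "<h1>Custom 500 page</h1>"}
    if intercept_errors and upstream_status in error_pages:
        return error_pages[upstream_status]
    return upstream_body

# The upstream's customized 404 survives:
assert served_body(404, "custom upstream 404") == "custom upstream 404"
# A blank upstream 500 is replaced by the configured page:
assert served_body(500, "") == "<h1>Custom 500 page</h1>"
```

Note that, exactly as Maxim says, the sketch (like the config) replaces every upstream 500, blank or not; there is no way to condition on whether the upstream body was empty.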
-- Maxim Dounin http://mdounin.ru/ From michael.friscia at yale.edu Tue May 15 14:35:09 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Tue, 15 May 2018 14:35:09 +0000 Subject: Blank Pages In-Reply-To: <20180515142750.GG32137@mdounin.ru> References: <8324938E-F72D-446E-B23C-3AEF59824D11@yale.edu> <20180515142750.GG32137@mdounin.ru> Message-ID: Actually I think that solves my problem and I had not realized that. I just need to remove my error_page declaration from the global file and specify it within each server block instead, which is probably better anyway. Thank you! ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 5/15/18, 10:28 AM, "nginx on behalf of Maxim Dounin" wrote: Hello! On Tue, May 15, 2018 at 12:12:10PM +0000, Friscia, Michael wrote: > I'm wondering if there's a simple way to solve this problem. > > The upstream application sometimes returns a blank 500 error > which Nginx then serves as the blank page. This is working as > intended. But what I'd like Nginx to do is display a custom > error page if the upstream 500 error is blank, but if the > upstream 500 page is not blank, then I want to serve the > upstream 500 error page. > > Has anyone ever come up with a way to handle a case like that? > > I was thinking of having a custom header in the upstream app and > if that header doesn't exist, then serve a page from Nginx but > before I run down that path I thought I'd ask. I cannot use > proxy_intercept_errors on; because the upstream app serves > customized 404 errors that I would lose. Note that you can configure interception of only the 500 error page, as nginx will only intercept errors you have an explicit error_page for.
That is, a configuration like this will only intercept 500, but not 404: location / { proxy_pass http://backend; proxy_intercept_errors on; error_page 500 /error500.html; } location = /error500.html { ... } This won't allow you to test if the returned upstream error response is blank or not, but may be enough for your use case based on the above description. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From quintinpar at gmail.com Tue May 15 15:34:33 2018 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 15 May 2018 11:34:33 -0400 Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid In-Reply-To: References: <95053daace8238a9e917c313d7426872.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thank you so much for this Peter. Very helpful. For what it's worth, I run a static wordpress website. So the configuration should not be very complicated. The link that you provided also led me to https://github.com/perusio/wordpress-nginx To answer your queries: >1. Is this a website that's on the internet, and thus exposed to random queries from bots and scrapers that you can't control? Yes and a lot of scammy attacks typical to all wordpress websites.
I've enabled connection limiting and request limiting of wordpress along with fail2ban on the request limiting rule. > 2. For your cache misses, how long best case, typical and worst case does your back-end take to build the pages? I run a warmer script and I expect all the pages to stay there for 120 days. This is run every week and takes 1 hour. 4. Instead of $request_uri what's the right variable that excludes all parameters? Is it $uri? > 9. script is very useful to track the health of your cache: Thank you for this. Based on your response my suspicion is that url params might be the culprit here. But I wish there was a way to diagnostically get to the root cause. Do you know of any param/variable I can log to the access log for this? - Quintin On Mon, May 14, 2018 at 11:08 AM Peter Booth wrote: > > Quintin, > > I don't know anything about your context, but your setup looks over > simplistic. Here are some things that I learned > painfully over a few years of supporting a high traffic retail website > > 1. Is this a website that's on the internet, and thus exposed to random > queries from bots and scrapers that you can't control? > > 2. For your cache misses, how long best case, typical and worst case does > your back-end take to build the pages? > > 3. You need to log everything that could feasibly affect the status of the > site. For example, here's a log configuration from one gnarly site that I > worked on: > > log_format main '$http_x_forwarded_for $http_true_client_ip > $remote_addr - $remote_user [$time_local] $host "$request" ' > '$status $body_bytes_sent $upstream_cache_status > $cookie_jsessionid $http_akamai_country $cookie_e4x_country > $cookie_e4x_currency "$http_referer" ' > '"$http_user_agent" "$request_time"'; > > 4. the first problem is your cache key, and that it includes $request_uri > which is the original uri > including all arguments.
So you are already exposed to DOS requests > that could be unintentional, > as anyone can bust your cache by adding an extra parameter. > > proxy_cache_key "$scheme://$host$request_uri$do_not_cache"; >> > > 5. Not caching requests from logged in users is a very blunt tool. Is this > a site where only administrative users are logged in? > > Imagine a retail site that sells clothing. It's possible that a dynamic > page that lists all the red dresses is something > a logged in user sees. Perhaps the page can be cached? But if there is a > version of the page that shows 30 entries and another > that shows 60, then they need to be disambiguated by the cache key. Perhaps > users can choose to see prices in Euro instead of USD? > Then this also belongs in the key. If I am an American vacationing in Paris > then perhaps the default behavior should be to show me > Euro prices, based on the value of a cookie that the CDN sets. In the > situation the customer may want to override this default behavior > and insist he sees USD prices. You can see how complex this can get. > > 7. The default behavior is to not cache responses that contain a > set-cookie - imagine how cache pollution - sending someone another person's > personal data stored in a cookie could be much worse than a cache miss. But > there are also settings where your backend is some legacy software that you > don't control > and the correct behavior isn't to not cache but instead to remove the > set-cookie from the response and cache the response without it. > > 8 How you prime the cache, monitor the cache, and clear the cache are > crucial. Perhaps you have a script that uses curl or wget to retrieve a > series of pages from your site. If the script is written naively then each > step might cause a new servlet session to be created on the backend > producing a memory issue. > > 9. This script is very useful to track the health of your cache: >
The if directive in nginx has some issues (see > https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ ) > When I need to use complex configuration logic I use OpenResty. OpenResty > is a bundle that > combines the standard nginx with some additional lua modules. It?s still > standard nginx - > not forked or customized in any way. > > 11. > > A very cut down version of a cache config for one page follows: > > # Product arrays get cached > location ~ /shop/ { > rewrite "/(.*)/2];ord.*$" $1 ; > proxy_no_cache $arg_mid $arg_siteID; > proxy_cache_bypass $arg_mid $arg_siteID; > proxy_cache_use_stale updating; > default_type text/html; > proxy_cache_valid 200 302 301 15m; > proxy_ignore_headers Set-Cookie Cache-Control; > proxy_pass_header off; > proxy_hide_header Set-Cookie; > expires 900s; > add_header Last-Modified ""; > add_header ETag ""; > # Build cache key > set $e4x_currency $cookie_e4x_currency; > set_if_empty $e4x_currency 'USD'; > set $num_items $cookie_EndecaNumberOfItems; > set_if_empty $num_items 'LOW'; > proxy_cache_key "$uri|$e4x_currency|$num_items"; > proxy_cache product_arrays; > # Add Canonical URL string > set $folder_id $arg_FOLDER%3C%3Efolder_id; > set $canonical_url "http://$http_host$uri"; > add_header Link "<$canonical_url>; rel=\"canonical\""; > proxy_pass http://apache$request_uri; > } > > > Tis snippet shows a key made of three parts. The real version has seven > parts. > > Good luck! > > Peter > > > On 14 May 2018, at 12:06 AM, Quintin Par wrote: > > Thanks all for the response. Michael, I am going to add those header > ignores. > > > Still puzzled by the large number of MISSEs and I?ve no clue why they are > happening. Leads appreciated. > > > > > > > - Quintin > > On Sun, May 13, 2018 at 6:12 PM, c0nw0nk > wrote: > >> You know you can DoS sites with Cache MISS via switching up URL params and >> arguements. 
>> >> Examples : >> >> HIT : >> index.php?var1=one&var2=two >> MISS : >> index.php?var2=two&var1=one >> >> MISS : >> index.php?random=1 >> index.php?random=2 >> index.php?random=3 >> etc etc >> >> Inserting random arguments to URL's will cause cache misses and changing >> the order of existing valid URL arguments will also cause misses. >> >> Cherian Thomas Wrote: >> ------------------------------------------------------- >> > Thanks for this Michael. >> > >> > >> > >> > This is so surprising. If someone decides to DoS and crawl the >> > website >> > with a rogue header, this will essentially bypass the cache and put a >> > strain on the website. In fact, I was hit by a DoS attack; that's when >> > I >> > started looking at logs and realized the large number of MISSes. >> > >> > >> > >> > Can someone please help? >> > >> > >> > >> - Quintin >> >> > >> > On Sat, May 12, 2018 at 12:01 PM, Friscia, Michael >> > > > > wrote: >> > >> > > I'm not sure if this will help, but I ignore/hide a lot, this is in >> > my >> > > config >> > > >> > > >> > > proxy_ignore_headers X-Accel-Expires Expires Cache-Control >> > Set-Cookie; >> > > proxy_hide_header X-Accel-Expires; >> > > proxy_hide_header Pragma; >> > > proxy_hide_header Server; >> > > proxy_hide_header Request-Context; >> > > proxy_hide_header X-Powered-By; >> > > proxy_hide_header X-AspNet-Version; >> > > proxy_hide_header X-AspNetMvc-Version; >> > > >> > > >> > > I have not experienced the problem you mention, I just thought I >> > would >> > > offer my config. >> > > >> > > >> > > ___________________________________________ >> > > >> > > Michael Friscia >> > > >> > > Office of Communications >> > > >> > > Yale School of Medicine >> > > >> > > (203) 737-7932 - office >> > > >> > > (203) 931-5381 -
mobile >> > > >> > > http://web.yale.edu >> > > >> > > ------------------------------ >> > > *From:* nginx on behalf of Quintin Par < >> > > quintinpar at gmail.com> >> > > *Sent:* Saturday, May 12, 2018 1:32 PM >> > > *To:* nginx at nginx.org >> > > *Subject:* Re: Debugging Nginx Cache Misses: Hitting high number of >> > MISS >> > > despite high proxy valid >> > > >> > > >> > > That's the tricky part. These MISSes are intermittent. Whenever I >> > run curl >> > > I get HITs but I end up seeing a lot of MISS in the logs. >> > > >> > > >> > > >> > > How do I log these MISSes with the reason? I want to know what >> > headers >> > > ended up bypassing the cache. >> > > >> > > >> > > >> > > Here's my caching config >> > > >> > > >> > > >> > > proxy_pass http://127.0.0.1:8000; >> > > >> > > proxy_set_header X-Real-IP $remote_addr; >> > > >> > > proxy_set_header X-Forwarded-For >> > > $proxy_add_x_forwarded_for; >> > > >> > > proxy_set_header X-Forwarded-Proto https; >> > > >> > > proxy_set_header X-Forwarded-Port 443; >> > > >> > > >> > > >> > > # If logged in, don't cache.
>> > > >> > > if ($http_cookie ~* >> > "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" >> > > ) { >> > > >> > > set $do_not_cache 1; >> > > >> > > } >> > > >> > > proxy_cache_key "$scheme://$host$request_uri$do_not_cache"; >> > > >> > > proxy_cache staticfilecache; >> > > >> > > add_header Cache-Control public; >> > > >> > > proxy_cache_valid 200 120d; >> > > >> > > proxy_hide_header "Set-Cookie"; >> > > >> > > proxy_ignore_headers "Set-Cookie"; >> > > >> > > proxy_ignore_headers "Cache-Control"; >> > > >> > > proxy_hide_header "Cache-Control"; >> > > >> > > proxy_pass_header X-Accel-Expires; >> > > >> > > >> > > >> > > proxy_set_header Accept-Encoding ""; >> > > >> > > proxy_ignore_headers Expires; >> > > >> > > add_header X-Cache-Status $upstream_cache_status; >> > > >> > > proxy_cache_use_stale timeout; >> > > >> > > proxy_cache_bypass $arg_nocache $do_not_cache; >> > > - Quintin >> > > >> > > >> > > On Sat, May 12, 2018 at 10:29 AM Lucas Rolff >> > wrote: >> > > >> > > It can be as simple as doing a curl to your 'origin' URL (the one >> > you >> > > proxy_pass to) for the files you see that get a lot of MISSes - if >> > there are >> > > odd headers such as cookies etc, then you'll most likely experience >> > a bad >> > > cache if your nginx is configured to not ignore those headers.
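The cache-key problem raised earlier in this thread, where reordered or junk query parameters produce distinct $request_uri values and therefore distinct cache entries, can be made concrete with a small sketch. Normalizing the query string (sorting it, and optionally whitelisting parameters) collapses equivalent URLs to one key. In nginx itself this would typically be done with map blocks or embedded scripting such as OpenResty; the Python below, using only stdlib URL helpers, is just an illustration:

```python
# Why a cache key built from the raw $request_uri is easy to bust:
# reordered or junk query parameters create distinct keys for the same
# underlying page. Sorting (and optionally whitelisting) the parameters
# collapses equivalent URLs to a single key. Illustrative sketch only.

from urllib.parse import urlsplit, parse_qsl, urlencode

def raw_key(url):
    """Cache key as nginx sees it via $request_uri: verbatim."""
    return url

def normalized_key(url, allowed=None):
    """Sort query params, optionally dropping ones not in `allowed`."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    if allowed is not None:
        params = [(k, v) for k, v in params if k in allowed]
    return parts.path + "?" + urlencode(sorted(params))

a = "/index.php?var1=one&var2=two"
b = "/index.php?var2=two&var1=one"
assert raw_key(a) != raw_key(b)                # two entries -> MISSes
assert normalized_key(a) == normalized_key(b)  # one entry

# A junk parameter no longer creates a fresh key once known parameters
# are whitelisted:
assert (normalized_key("/index.php?var1=one&random=3", allowed={"var1"})
        == normalized_key("/index.php?var1=one", allowed={"var1"}))
```

The whitelist is the stronger defense against the random-argument attack described above, since sorting alone still lets every unknown parameter mint a new cache entry.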
>> > > >> > > *From: *nginx on behalf of Quintin Par < >> > > quintinpar at gmail.com> >> > > *Reply-To: *"nginx at nginx.org" >> > > *Date: *Saturday, 12 May 2018 at 18.26 >> > > *To: *"nginx at nginx.org" >> > > *Subject: *Debugging Nginx Cache Misses: Hitting high number of MISS >> > > despite high proxy valid >> > > >> > > >> > > My proxy cache path is set to a very high size >> > > >> > > >> > > proxy_cache_path /var/lib/nginx/cache levels=1:2 >> > > keys_zone=staticfilecache:180m max_size=700m; >> > > >> > > and the size used is only >> > > >> > > >> > > sudo du -sh * >> > > >> > > 14M cache >> > > >> > > 4.0K proxy >> > > >> > > Proxy cache valid is set to >> > > >> > > >> > > proxy_cache_valid 200 120d; >> > > >> > > I track HIT and MISS via >> > > >> > > >> > > add_header X-Cache-Status $upstream_cache_status; >> > > >> > > Despite these settings I am seeing a lot of MISSes. And this is for >> > pages >> > > I intentionally ran a cache warmer an hour ago. >> > > >> > > >> > > How do I debug why these MISSes are happening? How do I find out if >> > the >> > > miss was due to eviction, expiration, some rogue header etc? Does >> > Nginx >> > > provide commands for this?
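One practical way to attack the "why so many MISSes" question is to log $upstream_cache_status in the access log itself (the add_header above only exposes it to clients) and tally the values over time. A minimal sketch with made-up log lines, assuming the status has been appended as the last field of a custom log_format:

```python
# Tally $upstream_cache_status values from access-log lines. Assumes a
# custom log_format that appends the cache status as the final
# space-separated field; the sample lines below are invented for
# illustration.

from collections import Counter

sample_log = """\
1.2.3.4 - - [15/May/2018:10:00:01 +0000] "GET / HTTP/1.1" 200 HIT
1.2.3.4 - - [15/May/2018:10:00:02 +0000] "GET /?x=1 HTTP/1.1" 200 MISS
5.6.7.8 - - [15/May/2018:10:00:03 +0000] "GET / HTTP/1.1" 200 HIT
9.9.9.9 - - [15/May/2018:10:00:04 +0000] "GET /?x=2 HTTP/1.1" 200 MISS
"""

def cache_status_counts(lines):
    """Count the trailing cache-status field of each log line."""
    return Counter(line.rsplit(" ", 1)[-1] for line in lines)

counts = cache_status_counts(sample_log.splitlines())
assert counts["HIT"] == 2 and counts["MISS"] == 2
```

nginx itself has no command that explains an individual MISS (eviction vs. expiry vs. key mismatch), so correlating the request line, query string and cookies of the MISS entries is usually the quickest diagnostic, which is why the log_format earlier in this thread logs those fields.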
>> > > >> > > - Quintin >> > > _______________________________________________ >> > > nginx mailing list >> > > nginx at nginx.org >> > > http://mailman.nginx.org/mailman/listinfo/nginx >> > > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> Posted at Nginx Forum: >> https://forum.nginx.org/read.php?2,279764,279771#msg-279771 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue May 15 16:20:31 2018 From: nginx-forum at forum.nginx.org (rickGsp) Date: Tue, 15 May 2018 12:20:31 -0400 Subject: Nginx Rate limiting for HTTPS requests Message-ID: Hi, I have been experimenting with Nginx rate limiting and I need some input on how it works and what can be expected from this feature. I see some difference between what I expected from this feature going by the documentation and what I observed in my experiments. Here are the details of my testing: I have a test server running Nginx and a backend server. Nginx is configured as an HTTPS server listening on 443. I have configured Nginx as a reverse proxy to my backend. We have a proprietary tool which feeds a configured number of HTTPS requests (one request/connection) to the test server and generates reports at the end of the test. The report details how many requests returned status 200 and 503. Observation 1: As per my observations, more requests are getting processed with return status 200 than expected if the input request rate to Nginx is much higher than the rate limit configured.
For example, with the following configuration in Nginx for rate limiting, here are my tests: limit_req_zone $host zone=perhost:1m rate=100r/s; limit_req zone=perhost burst=100 nodelay; Test1: With input as 250 req/sec and rate limit configured at 100r/s, rate limiting works as expected since on average ~100 requests return with 200 status every second Test2: With input as 500 req/sec and rate limit configured at 100r/s, rate limiting does not work as expected since on average ~150 requests return with 200 status every second Test3: With input as 600 req/sec and rate limit configured at 100r/s, rate limiting does not work as expected since on average ~200 requests return with 200 status every second Test4: With input as 800 req/sec and rate limit configured at 100r/s, rate limiting does not work as expected since on average ~350 requests return with 200 status every second Observation 2: On the other hand, if Nginx is configured as an HTTP server listening on 80, the rate limiting feature seems to be working fine for the same tests. I am not very sure what is happening here for HTTPS based testing. One observation I have made is that in the HTTP case, requests get processed very quickly whereas in the HTTPS case, the complete transaction takes relatively longer. Also, at a low input rate of HTTPS requests, transaction completion does not take very long, whereas when the input rate goes up, this delay increases further and then rate limiting starts behaving unexpectedly. Can this be the cause of this difference in any way? Please share your inputs on this. Thanks in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279802#msg-279802 From xserverlinux at gmail.com Tue May 15 16:22:14 2018 From: xserverlinux at gmail.com (Ricky Gutierrez) Date: Tue, 15 May 2018 10:22:14 -0600 Subject: Connection refused In-Reply-To: References: Message-ID: Any help? El lun., 14 may.
2018 20:02, Ricky Gutierrez escribi?: > hello list, I have a reverse proxy with nginx front end and I have the > backend with nginx some applications in php7 with mariadb, reviewing > the log I see a lot of errors like this: > > 2018/05/09 17:44:58 [error] 14633#14633: *1761 connect() failed (111: > Connection refused) while connecting to upstream, client: > 186.77.203.203, server: web.mydomain.com, request: "GET > /imagenes/slide7.jpg HTTP/2.0", upstream: > "http://192.168.11.7:80/imagenes/slide7.jpg", host: > "www.mydomain.com", referrer: "https://www.mydomain.com/" > > 2018/05/09 17:45:09 [error] 14633#14633: *1761 connect() failed (111: > Connection refused) while connecting to upstream, client: > 186.77.203.203, server: web.mydomain.com, request: "GET > /imagenes/slide8.jpg HTTP/2.0", upstream: > "http://192.168.11.7:80/imagenes/slide8.jpg", host: > "www.mydomain.com", referrer: "https://www.mydomain.com/" > > 2018/05/09 17:45:12 [error] 14633#14633: *1761 upstream prematurely > closed connection while reading response header from upstream, client: > 186.77.203.203, server: web.mydomain.com, request: "GET > /imagenes/slide6.jpg HTTP/2.0", upstream: > "http://192.168.11.7:80/imagenes/slide6.jpg", host: > "www.mydomain.com", referrer: "https://www.mydomain.com/" > > I made a change according to this link on github, but I can not remove the > error > > https://github.com/owncloud/client/issues/5706 > > my config : > > proxy_http_version 1.1; > proxy_set_header Connection ""; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header Host $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_connect_timeout 900s; > proxy_send_timeout 900s; > proxy_read_timeout 900s; > proxy_buffer_size 64k; > proxy_buffers 16 32k; > proxy_busy_buffers_size 64k; > proxy_redirect off; > proxy_request_buffering off; > proxy_buffering off; > proxy_pass http://backend1; > > regardss > > -- > rickygm > > http://gnuforever.homelinux.com > -------------- next 
part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue May 15 17:56:06 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 May 2018 20:56:06 +0300 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: References: Message-ID: <20180515175606.GH32137@mdounin.ru> Hello! On Tue, May 15, 2018 at 12:20:31PM -0400, rickGsp wrote: > I have been experimenting with Nginx rate limiting and I need some inputs on > it?s working and what can be expected from this feature. I see some > difference in what I expected from this feature going by the documentation > and what I observed in my experiments. > > Here is the detail on my testing: > I have a test server running Nginx and a backend server. Nginx is configured > as HTTPS server listening on 443. I have configured Nginx as reverse proxy > to my backend. We have a proprietary tool which feeds configured number of > HTTPS requests (one request/connection) to test server and generates reports > at the end of test. Report will have details how many requests return status > as 200 and 503. > > Observation 1: > As per my observations, more requests are getting processed with return > status as 200 than expected if input request rate to Nginx is much higher > than the rate limit configured. > For example, with the following configuration in Nginx for rate limiting, > Here are my tests: > limit_req_zone $host zone=perhost:1m rate=100r/s; > limit_req zone=perhost burst=100 nodelay; > > Test1: With input as 250 req/sec and rate limit configured at 100r/s, rate > limiting works as expected since on average ~100 requests return with 200 > status every second > > Test2: With input as 500 req/sec and rate limit configured at 100r/s, rate > limiting does not work as expected since on average ~150 requests return > with 200 status every second The question is: how did you get the ~150 r/s number? 
As per your description, the tool you are using reports the number of requests that returned each status, but not the rate. Make sure that calculation is not based on initial numbers, but counts real responses received and uses a wall clock to calculate the rate. That is, if your tool is expected to generate 500 r/s load for 10 seconds (5000 requests in total) and you've got 1500 requests with status 200, the success rate is _not_ 150 r/s. To calculate the success rate properly we need to know how long request processing took. E.g., if it took 15 seconds from the load start to the last request finished, the real rate is 100 r/s. [...] > I am not very sure what is happening here for HTTPS based testing. One > observation I have made is that in HTTP case, requests gets processed very > quickly whereas for HTTPS case, complete transaction takes relatively > longer. Also, for low input rate of HTTPS requests transaction completion is > not taking very long where as when input rate goes up, this delay further > increase and then rate limiting start behaving unexpectedly. Can this be the > cause of this difference in any way? Please share your inputs on this. Sure, see above. As long as request processing takes significant time, it becomes more important to measure time properly. Failing to do so will result in wrong numbers. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue May 15 20:32:30 2018 From: nginx-forum at forum.nginx.org (Nginx-Chris) Date: Tue, 15 May 2018 16:32:30 -0400 Subject: Nginx only serves 1 App In-Reply-To: References: Message-ID: Dear Moshe I did switch off the seafile configuration and that means that the normal chat.mydomain.com works again with nginx. I did then do > sudo certbot --nginx and the site chat.mydomain.com now runs with SSL. So then I switched the seafile conf on again --> Seafile works as always. AND mattermost on chat.mydomain.com works, but ONLY if I add https:// in front of the web address.
So: chat.mydomain.com <-- only works when seafile off (then redirects) http://chat.mydomain.com <-- only works when seafile off (then redirects) https://chat.mydomain.com <-- works when seafile is on and/or off. Why does nginx not redirect the chat.mydomain.com to https? The new config for chat.mydomain.com is this. It got changed by certbot automatically. MATTERMOST: upstream backend { server 127.0.0.1:8065; } proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off; server { server_name chat.mydomain.com; location ~/api/v[0-9]+/(users/)?websocket$ { proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; client_max_body_size 50M; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Frame-Options SAMEORIGIN; proxy_buffers 256 16k; proxy_buffer_size 16k; proxy_read_timeout 600s; proxy_pass http://backend; } location / { client_max_body_size 50M; proxy_set_header Connection ""; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Frame-Options SAMEORIGIN; proxy_buffers 256 16k; proxy_buffer_size 16k; proxy_read_timeout 600s; proxy_cache mattermost_cache; proxy_cache_revalidate on; proxy_cache_min_uses 2; proxy_cache_use_stale timeout; proxy_cache_lock on; proxy_pass http://backend; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/chat.mydomain.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/chat.mydomain.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = chat.mydomain.com) { return 301
https://$host$request_uri; } # managed by Certbot listen 80; server_name chat.mydomain.com; return 404; # managed by Certbot } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279794,279806#msg-279806 From francis at daoine.org Tue May 15 21:11:17 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 15 May 2018 22:11:17 +0100 Subject: inheritance of proxy_http_version and proxy_set_header In-Reply-To: References: <20180509202503.GG19311@daoine.org> Message-ID: <20180515211117.GJ19311@daoine.org> On Sat, May 12, 2018 at 11:18:23AM -0700, Joe Doe wrote: Hi there, > Here is the config with some info redacted. The only difference between the > mirror that inherited the setting and the ones that did not is http vs https. For > the time being, to get around the issue, the settings to use keep-alive for > upstream servers are added to those mirrors. It's good that you have a workaround that lets your production system do what you want it to. As I understand it, you want the mirror'ed upstreams to take advantage of keep-alive. Your config uses two directives to set two specific things. With those directives "inherited" into the https-mirror'ed location, things do not work. With them explicit in that mirror'ed location, things do work. I am unable to reproduce that problem report. When I use the following config (port 8000 is the "front-end" web server; the other ports and ssl are the "back-end" servers), I see the same http version in $request and the same value of $http_connection for each of the back-ends (in upstream.log), without needing to explicitly override any config in the https-mirror'ed location. How does this differ from what you see, can you see?
==
http {
    log_format connection '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" '
                          '"$http_connection" "$request"';

    proxy_http_version 1.1;
    proxy_set_header Connection "";

    server {
        listen 8000;
        server_name localhost;

        location / {
            mirror /a;
            mirror /b;
            mirror /c;
            proxy_pass http://127.0.0.1:8081;
        }
        location /a { internal; proxy_pass http://127.0.0.1:8082; }
        location /b { internal; proxy_pass http://127.0.0.1:8083; }
        location /c { internal; proxy_pass https://127.0.0.1:8443; }
    }

    server {
        listen 8443 ssl;
        listen 127.0.0.1:8081;
        listen 127.0.0.1:8082;
        listen 127.0.0.1:8083;
        server_name localhost;
        ssl_certificate cert.pem;
        ssl_certificate_key cert.key;
        access_log logs/upstream.log connection;

        location / {
            return 200 "request $request\nconnection $http_connection\n";
        }
    }
}
===
If I understand your report correctly, you would see something different in the last two fields of the "GET /c" log line from what is in the "GET /a" or "GET /b" log lines. I don't see any difference there. f -- Francis Daly francis at daoine.org From kohenkatz at gmail.com Wed May 16 03:44:10 2018 From: kohenkatz at gmail.com (Moshe Katz) Date: Tue, 15 May 2018 23:44:10 -0400 Subject: Nginx only serves 1 App In-Reply-To: References: Message-ID: That last "# managed by Certbot" section looks wrong - it shouldn't be using "if ($host = ...", since that's inefficient and there are much better ways to do it. I have a very similar server, so here are the config files I use for it. I don't like pasting them into emails, so I made a GitHub Gist: https://gist.github.com/kohenkatz/08a74d757e0695f4ec3dc34c44ea4369 (that also means I can edit it later if it doesn't work for you). Note that with this configuration you have to run Certbot in "certonly" mode instead of nginx mode. However, that is very easy.
I have eight servers configured in this exact way (most of them run applications other than Seafile and Mattermost, but that doesn't matter).

Here is the certbot command I use:

sudo certbot certonly --webroot -w /usr/share/nginx/html -d domain-name-here.example.com

(If you changed the path for `.well-known` in the config files in my Gist, you will also need to change it here.)

Let me know how this works for you.

Moshe
--
Moshe Katz -- kohenkatz at gmail.com -- +1(301)867-3732

On Tue, May 15, 2018 at 4:32 PM Nginx-Chris wrote:
> Dear Moshe
>
> I did switch off the seafile configuration and that means that the normal
> chat.mydomain.com works again with nginx.
>
> I did then do
>
> sudo certbot --nginx
>
> and the site chat.mydomain.com now runs with SSL.
>
> So then I switched the seafile conf on again --> Seafile works as always.
>
> AND mattermost on chat.mydomain.com works, but ONLY if I add https:// in
> front of the web address.
>
> So:
>
> chat.mydomain.com <-- only works when seafile is off (then redirects)
> http://chat.mydomain.com <-- only works when seafile is off (then redirects)
> https://chat.mydomain.com <-- works when seafile is on and/or off.
>
> Why does nginx not redirect chat.mydomain.com to https?
>
> The new config for chat.mydomain.com is this. It got changed by certbot
> automatically.
> MATTERMOST:
>
> upstream backend {
>     server 127.0.0.1:8065;
> }
>
> proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m
> max_size=3g inactive=120m use_temp_path=off;
>
> server {
>     server_name chat.mydomain.com;
>
>     location ~ /api/v[0-9]+/(users/)?websocket$ {
>         proxy_set_header Upgrade $http_upgrade;
>         proxy_set_header Connection "upgrade";
>         client_max_body_size 50M;
>         proxy_set_header Host $http_host;
>         proxy_set_header X-Real-IP $remote_addr;
>         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>         proxy_set_header X-Forwarded-Proto $scheme;
>         proxy_set_header X-Frame-Options SAMEORIGIN;
>         proxy_buffers 256 16k;
>         proxy_buffer_size 16k;
>         proxy_read_timeout 600s;
>         proxy_pass http://backend;
>     }
>
>     location / {
>         client_max_body_size 50M;
>         proxy_set_header Connection "";
>         proxy_set_header Host $http_host;
>         proxy_set_header X-Real-IP $remote_addr;
>         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>         proxy_set_header X-Forwarded-Proto $scheme;
>         proxy_set_header X-Frame-Options SAMEORIGIN;
>         proxy_buffers 256 16k;
>         proxy_buffer_size 16k;
>         proxy_read_timeout 600s;
>         proxy_cache mattermost_cache;
>         proxy_cache_revalidate on;
>         proxy_cache_min_uses 2;
>         proxy_cache_use_stale timeout;
>         proxy_cache_lock on;
>         proxy_pass http://backend;
>     }
>
>     listen 443 ssl; # managed by Certbot
>     ssl_certificate /etc/letsencrypt/live/chat.mydomain.com/fullchain.pem; # managed by Certbot
>     ssl_certificate_key /etc/letsencrypt/live/chat.mydomain.com/privkey.pem; # managed by Certbot
>     include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
>     ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
> }
>
> server {
>     if ($host = chat.mydomain.com) {
>         return 301 https://$host$request_uri;
>     } # managed by Certbot
>
>     listen 80;
>     server_name chat.mydomain.com;
>     return 404; # managed by Certbot
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,279794,279806#msg-279806
>
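As a point of comparison, the "if ($host = ...)" block that Moshe called inefficient earlier in the thread is usually written as a plain redirect-only server, since server_name already guarantees which host matched. A hedged sketch using the domain from the thread:

```nginx
# Sketch: no "if" needed - this server block only matches the one host,
# so it can redirect everything unconditionally.
server {
    listen 80;
    listen [::]:80;
    server_name chat.mydomain.com;
    return 301 https://$host$request_uri;
}
```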
_______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Wed May 16 05:08:55 2018 From: nginx-forum at forum.nginx.org (Nginx-Chris) Date: Wed, 16 May 2018 01:08:55 -0400 Subject: Nginx only serves 1 App In-Reply-To: References: Message-ID: <6d1ae679f93b9bb699b0d2d72fdba0de.NginxMailingListEnglish@forum.nginx.org>

Thanks a lot Moshe for all the efforts. The gist is pretty cool. I will check it out and have a go with it.

I will also look closer at the config:

> include /etc/letsencrypt/options-ssl-nginx.conf;

Maybe there is something in there that's strange. I will get back to you here in this thread.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279794,279809#msg-279809

From nginx-forum at forum.nginx.org Wed May 16 05:12:08 2018 From: nginx-forum at forum.nginx.org (Nginx-Chris) Date: Wed, 16 May 2018 01:12:08 -0400 Subject: Nginx only serves 1 App In-Reply-To: References: Message-ID: <155fcb4d5e65183da1bfdab61d08861b.NginxMailingListEnglish@forum.nginx.org>

The config that you propose does not require switching nginx off for letsencrypt refreshes, correct?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279794,279810#msg-279810

From nginx-forum at forum.nginx.org Wed May 16 05:41:59 2018 From: nginx-forum at forum.nginx.org (Nginx-Chris) Date: Wed, 16 May 2018 01:41:59 -0400 Subject: Nginx only serves 1 App In-Reply-To: References: Message-ID: <518e4cf729afda77e5cfa3f5ccaf543f.NginxMailingListEnglish@forum.nginx.org>

Here is what makes everything work ok:

In the cloud.conf (Seafile) I deleted the "http2" in the server part that listens on port 80 and redirects.

It looks like this now:

server {
    listen 80;
    listen [::]:80;
    server_name cloud.mydomain.com;

    rewrite ^ https://$http_host$request_uri?
permanent; # force redirect http to https

    # Enables or disables emitting nginx version on error pages and in the "Server" response header field.
    server_tokens off;
}

Now everything works fine.

I am not sure what advantage / disadvantage http2 had, to be honest.

Maybe the http2 part should only be inside the config part that configures the 443 access?

Well, this did the trick at least.

I am still interested in the config that you posted on gist though. It looks really tidy and well organised. So I would still like to know if I can leave Nginx running for the letsencrypt bot to work ;-))

Greetings, Chris

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279794,279811#msg-279811

From nginx-forum at forum.nginx.org Wed May 16 06:22:20 2018 From: nginx-forum at forum.nginx.org (Enrico) Date: Wed, 16 May 2018 02:22:20 -0400 Subject: Nginx redirection Message-ID: <42ccb1412974560296cd30e588ebe2ce.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have a nginx server (called mynginxserver) and need to redirect some URLs:

I want all URLs containing the string tso, http://www.mynginxserver.com/XXXXXXtsoXXXXX, redirected to https://myserver.com/XXXXXXtsoXXXXX if the string tso is in the url, and redirected to http://www.mynottsoserver.com/XXXXXXXXXXX if the string tso is not present. XXXXXXtsoXXXXX and XXXXXXXXXXX must be kept. The string "tso" can be anywhere in the last parameter (tsoXXXXX, XtsoXXXX, XXXXtso, etc.)

I think I need to use the location and rewrite directives but don't know how to do that.
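One way to sketch what Enrico asks for, assuming the host names from his message; this is untested, and the regex deliberately only checks the last path segment, as he describes:

```nginx
server {
    listen 80;
    server_name www.mynginxserver.com;

    # "tso" followed only by non-slash characters up to the end of the
    # path means it occurs somewhere in the last segment
    location ~ "tso[^/]*$" {
        return 301 https://myserver.com$request_uri;
    }

    # everything else goes to the other host, URI kept intact
    location / {
        return 301 http://www.mynottsoserver.com$request_uri;
    }
}
```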
Thanks for your help

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279813,279813#msg-279813

From nginx-forum at forum.nginx.org Wed May 16 09:00:20 2018 From: nginx-forum at forum.nginx.org (rickGsp) Date: Wed, 16 May 2018 05:00:20 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <20180515175606.GH32137@mdounin.ru> References: <20180515175606.GH32137@mdounin.ru> Message-ID: <9b16a49cf8c48ac17c673e083de546a1.NginxMailingListEnglish@forum.nginx.org>

Thanks for responding Maxim. I understood what you are pointing at. Yes, I have taken care of the time measurement. Actually, my test runs for 60 seconds and in total I expect 6000 requests returning 200 status with the rate limit configured at 100r/s. However, I see 9000 requests returning 200 status, which means 150 req/sec.

Should I expect rate limiting to work as reliably for HTTPS as for plain HTTP? If yes, I am just wondering if there is something I am missing while configuring Nginx rate limiting for HTTPS.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279814#msg-279814

From kohenkatz at gmail.com Wed May 16 13:13:30 2018 From: kohenkatz at gmail.com (Moshe Katz) Date: Wed, 16 May 2018 09:13:30 -0400 Subject: Nginx only serves 1 App In-Reply-To: <518e4cf729afda77e5cfa3f5ccaf543f.NginxMailingListEnglish@forum.nginx.org> References: <518e4cf729afda77e5cfa3f5ccaf543f.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Somehow we all missed that - of course you can't run `http2` on port 80 and have it work, since `http2` requires SSL.

With that configuration, you would have been able to get to the chat subdomain only by going to `https:// chat.mydomain .com:80/` - notice that it is https but is forced back to port 80. (I purposely added spaces to prevent that from being a link in many mail clients.)
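Moshe's point about where `http2` belongs can be summed up in config form. A sketch using the hostname from the thread, keeping `http2` on the TLS listener only; the 443 block is a placeholder outline, not Chris's actual config:

```nginx
# the plain port-80 server only redirects; no http2 here
server {
    listen 80;
    listen [::]:80;
    server_name cloud.mydomain.com;
    return 301 https://$host$request_uri;
}

# the TLS listener is where "ssl http2" belongs
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name cloud.mydomain.com;
    # ssl_certificate, ssl_certificate_key and the rest of the
    # Seafile proxy configuration go here
}
```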
To answer the question about LetsEncrypt renewal, you need to leave nginx running in order for it to work, since it still relies on nginx to serve the `.well-known` files that make the domain verification work. If you stopped nginx, the validation would fail.

The one thing that you do need to do is make sure that LetsEncrypt knows to reload nginx when a certificate changes so that nginx can see the new certificate file. If you are on a system that uses SystemD, this is what you need to do:

Create a shell script in `/etc/letsencrypt/renewal-hooks/deploy` with the following contents:

#!/bin/bash
/bin/systemctl reload nginx.service

Make sure to set it as executable, and then Certbot will run it automatically for every renewal.

Alternatively, you can go into each file in `/etc/letsencrypt/renewal/*` and add the following line in the `[renewalparams]` section:

deploy_hook = /bin/systemctl reload nginx

Of course, that means you need to modify the renewal file for each domain separately.

Moshe
--
Moshe Katz -- kohenkatz at gmail.com -- +1(301)867-3732

On Wed, May 16, 2018 at 1:42 AM Nginx-Chris wrote:
> Here is what makes everything work ok:
>
> In the cloud.conf (Seafile) I deleted the "http2" in the server part that
> listens on port 80 and redirects.
>
> It looks like this now:
>
> server {
>     listen 80;
>     listen [::]:80;
>     server_name cloud.mydomain.com;
>
>     rewrite ^ https://$http_host$request_uri? permanent; # force redirect http to https
>
>     # Enables or disables emitting nginx version on error pages and in the
>     "Server" response header field.
>     server_tokens off;
> }
>
> Now everything works fine.
>
> I am not sure what advantage / disadvantage http2 had, to be honest.
>
> Maybe the http2 part should only be inside the config part that configures
> the 443 access?
>
> Well, this did the trick at least.
>
> I am still interested in the config that you posted on gist though.
> It looks really tidy and well organised.
> > So I would still like to know if I can leave Nginx running for letsencrypt > bot to work ;-)) > > Greetings, Chris > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,279794,279811#msg-279811 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 16 13:27:14 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 May 2018 16:27:14 +0300 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <9b16a49cf8c48ac17c673e083de546a1.NginxMailingListEnglish@forum.nginx.org> References: <20180515175606.GH32137@mdounin.ru> <9b16a49cf8c48ac17c673e083de546a1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180516132714.GJ32137@mdounin.ru> Hello! On Wed, May 16, 2018 at 05:00:20AM -0400, rickGsp wrote: > Thanks for responding Maxim. I understood what you are pointing at. Yes I > have taken care of time measurement. Actually my test runs for 60 seconds > and in total I expect 6000 requests returning 200 status with rate limit > configured at 100r/s. However I see 9000 requests returning 200 status which > means 150 req/sec. As I tried to explain in my previous message, "test runs for 60 seconds" can have two different meanings: 1) the load is generated for 60 seconds and 2) from first request started to the last request finished it takes 60 seconds. Make sure you are using the correct meaning. Also, it might be a good idea to look into nginx access logs to verify both time and numbers reported by your tool. > Shall I expect that even for HTTPS, rate limiting should work as perfectly > as plain HTTP case. If yes, I am just wondering if there is something I am > missing while configuring Nginx rate limiting for HTTPS. Yes, request rate limiting is expected to work identically for both HTTP and HTTPS. 
The difference of HTTPS is that it, in contrast to HTTP, requires a lot of resources for SSL handshakes, and it is perfectly normal if your server cannot handle 500 handshakes per second at all. As such, total test time might be significantly different from load generation time. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed May 16 14:08:23 2018 From: nginx-forum at forum.nginx.org (vedranf) Date: Wed, 16 May 2018 10:08:23 -0400 Subject: Regression in 1.14 when following upstream redirects In-Reply-To: <20180515033805.GF32137@mdounin.ru> References: <20180515033805.GF32137@mdounin.ru> Message-ID: <7ae434e186b56df8ed9122cd1160db23.NginxMailingListEnglish@forum.nginx.org> Hey, Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Mon, May 14, 2018 at 01:22:46PM -0400, vedranf wrote: > > > There is a problem when nginx is configured to try to follow > redirects (301) > > from upstream server in order to cache responses being directed to, > rather > > than the short redirect itself. This worked in 1.12 and earlier > releases. > > Here is the simplified configuration I use and which used to work: > > > > From the incomplete configuration and debug log snippets you've > provided it looks like your problem if that requests previously > not cached now successfully extracted from cache. > > From the snippets you've provided it is not possible to conclude > if the previous behaviour was buggy and now fixed (and your > previous configuration worked due to a bug), or the new behaviour > is incorrect. > > There are at least some fixes in 1.13.x which might affect your > configuration. In particular, this fix in 1.13.6 might be > related: > > *) Bugfix: cache control headers were ignored when caching errors > intercepted by error_page. Right, this seems to be causing it. I was able to replicate it only when 3XX redirect had Cache-Control set. 
Please look at the minimal configuration at: https://pastebin.com/tSqH4YJt with 1.12 you always get 204 response from 127.0.0.1:8181, with 1.14 first response is 204, but the subsequent responses are 500 with invalid URL prefix in "" error because in the second attempt request never goes to upstream (perhaps it assumes file is supposed to be in cache) and the variable is empty. Regards, Vedran Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279787,279819#msg-279819 From mdounin at mdounin.ru Wed May 16 14:40:09 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 May 2018 17:40:09 +0300 Subject: Regression in 1.14 when following upstream redirects In-Reply-To: <7ae434e186b56df8ed9122cd1160db23.NginxMailingListEnglish@forum.nginx.org> References: <20180515033805.GF32137@mdounin.ru> <7ae434e186b56df8ed9122cd1160db23.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180516144009.GL32137@mdounin.ru> Hello! On Wed, May 16, 2018 at 10:08:23AM -0400, vedranf wrote: > Hey, > > > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Mon, May 14, 2018 at 01:22:46PM -0400, vedranf wrote: > > > > > There is a problem when nginx is configured to try to follow > > redirects (301) > > > from upstream server in order to cache responses being directed to, > > rather > > > than the short redirect itself. This worked in 1.12 and earlier > > releases. > > > Here is the simplified configuration I use and which used to work: > > > > > > > From the incomplete configuration and debug log snippets you've > > provided it looks like your problem if that requests previously > > not cached now successfully extracted from cache. > > > > From the snippets you've provided it is not possible to conclude > > if the previous behaviour was buggy and now fixed (and your > > previous configuration worked due to a bug), or the new behaviour > > is incorrect. 
> >
> > There are at least some fixes in 1.13.x which might affect your
> > configuration. In particular, this fix in 1.13.6 might be
> > related:
> >
> > *) Bugfix: cache control headers were ignored when caching errors
> > intercepted by error_page.
>
> Right, this seems to be causing it. I was able to replicate it only when 3XX
> redirect had Cache-Control set. Please look at the minimal configuration at:
> https://pastebin.com/tSqH4YJt
> with 1.12 you always get 204 response from 127.0.0.1:8181, with 1.14 first
> response is 204, but the subsequent responses are 500 with invalid URL
> prefix in "" error because in the second attempt request never goes to
> upstream (perhaps it assumes file is supposed to be in cache) and the
> variable is empty.

Ok, thank you for confirming. So your configuration relied on a bug which is now fixed.

An obvious workaround would be to disable looking into Cache-Control / Expires headers using the proxy_ignore_headers directive (http://nginx.org/r/proxy_ignore_headers), so nginx will cache things only based on proxy_cache_valid.

--
Maxim Dounin
https://xkcd.com/1172/

From francis at daoine.org Wed May 16 22:42:46 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 16 May 2018 23:42:46 +0100 Subject: Nginx Cache | @ prefix example In-Reply-To: References: Message-ID: <20180516224246.GK19311@daoine.org>

On Sat, May 12, 2018 at 12:05:51AM -0400, c0nw0nk wrote:

Hi there,

> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid
>
> The "X-Accel-Expires" header field sets caching time of a response in
> seconds. The zero value disables caching for a response. If the value starts
> with the @ prefix, it sets an absolute time in seconds since Epoch, up to
> which the response may be cached.
>
> Can someone give an example of how this should look and what if i set it as
> zero what is the outcome then...?
The upstream sometimes wants to say "this is valid for an hour", and sometimes wants to say "this is valid until midnight".

"For an hour" is "3600".

"Until midnight" could be "work out the time difference between now and midnight, and set that number". Or it could be "when is midnight? Set @-that number".

The @-prefix is for when you want a thing to be cached until a specific time, rather than for a specific duration.

You can find the number to use by, for example, using

$ date -d 'tomorrow 0:0' +%s

and it will probably be 10 digits long.

> //unknown outcome / result...?
> X-Accel-Expires: @0

$ date -d @0

will say something corresponding to "Thu Jan 1 00:00:00 UTC 1970". So this asks to "cache until 1970". Which is in the past, so possibly is "expire cache now"; but if you really want to expire the cache now you should do just that.

> //Expire cache straight away.
> X-Accel-Expires: 0

"disables caching" is what the documentation says.

> //Expire cache in 5 seconds
> X-Accel-Expires: 5

"Cache for 5 seconds". That's probably the same thing.

> //Expire cache in 5 seconds and allow "STALE" cache responses to be stored
> for 5 seconds ?????
> X-Accel-expires: @5 5

The documentation you quoted doesn't seem to mention anything about STALE, or spaces in the header value. It looks like invalid input to nginx to me, so nginx could do anything (or nothing) with it.

> Hopefully I am right thinking that the above would work like this need some
> clarification.

Request to cache for a duration -> use the number of seconds.

Request to cache until a time -> use @ and the time stamp in a particular format.

f
--
Francis Daly francis at daoine.org

From agentzh at gmail.com Wed May 16 22:47:52 2018 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 16 May 2018 15:47:52 -0700 Subject: [ANN] OpenResty 1.13.6.2 released Message-ID:

Hi folks!
I am happy to announce the new formal release, 1.13.6.2, of the OpenResty web platform based on NGINX and LuaJIT: https://openresty.org/en/download.html

The (portable) source code distribution, the Win32/Win64 binary distributions, and the pre-built binary Linux packages for Ubuntu, Debian, Fedora, CentOS, RHEL, Amazon Linux are provided on this Download page. Starting from this release, we provide an official 64-bit Windows native binary package for OpenResty. We also provide new apt package repositories for Ubuntu 18.04 Bionic.

The next OpenResty release will be based on the nginx core 1.13.12 or a version in the upcoming nginx 1.15.x series.

Special thanks go to all our developers and contributors! And thanks to OpenResty Inc. (https://openresty.com/) for sponsoring a lot of the OpenResty core development work.

We have the following highlights in this release:

1. We now have full official support for the OpenSSL 1.1.0 series (the last tested version is OpenSSL 1.1.0h).

2. We now provide official 64-bit Windows native binary packages for OpenResty.

3. We now provide a new table.clone() builtin Lua API function in our bundled version of LuaJIT, which can also be JIT compiled.

4. We now provide a UDP downstream cosocket API in our ngx_stream_lua module. Now the community can build high-performance UDP server applications with Lua atop OpenResty.

5. New flush_all() method added to our lua-resty-lrucache Lua library.

6. Our resty command-line utility's startup/exit time is significantly reduced on *NIX systems. Now it takes only ~10ms to run a hello world program on a mid-2015 Macbook Pro.

7. We now avoid running Lua VM instances in NGINX's helper processes like "cache loader" and "cache manager" to reduce the memory footprint in those processes.

8. New raw_client_addr() function added to the ngx.ssl Lua module.

9. New ngx.base64 module added to lua-resty-core with new Lua API functions encode_base64url() and decode_base64url().

10.
Various time-related Lua APIs provided by ngx_lua are now re-implemented via LuaJIT FFI in lua-resty-core so that they can be JIT compiled and run much faster.

11. New lua_add_variable config directive provided by the ngx_stream_lua module so that we can define new NGINX variables for the stream subsystem.

12. New add_header() Lua API has been added to the new ngx.resp Lua module to mimic NGINX's standard add_header directive on the Lua land.

13. Support for the optional "init_ttl" argument in the shdict:incr() method so that when the key is missing we can add a default TTL value of our own.

14. Added the "local=on" and "local=/path/to/resolv.conf" options to the standard "resolver" config directive.

The complete change log since the last (formal) release, 1.13.6.1:

* win64: distributing official 64-bit Windows binary packages for OpenResty using the MSYS2/MinGW toolchain.
* win32: now we build our official 32-bit Windows binary packages for OpenResty using the MSYS2/MinGW toolchain.
* win32: upgraded pcre to 8.42 and openssl to 1.1.0h.
* optimize: now the openresty build system ("./configure") automatically patches the resty command-line utility to use its own nginx binary so that it does not have to compute it at runtime (which is a bit expensive). This saves about 10ms (from a total of 20ms to 10ms) for resty's startup time, as measured on a mid-2015 MBP. That's a 50% reduction in total startup time! Yay!
* win32/win64: enabled ngx_stream_ssl_preread_module in our binary builds.
* bugfix: ./configure: relative paths in the --add-dynamic-module=PATH option did not work. thanks catatsuy for the patch.
* feature: added a patch for the nginx core to add the "local=on" and "local=/path/to/resolv.conf" options to the standard "resolver" config directive.
This can enable the use of system-level nameserver configurations of /etc/resolv.conf, for example, in nginx's own nonblocking DNS resolver. thanks Datong Sun for the patch.
* feature: added the "socket_cloexec" patch to ensure most of the nginx connections could be closed before the child process terminates. thanks spacewander for the patch.
* feature: added patches to the nginx core to make sure ngx_stream_ssl_preread_module will not skip the rest of the preread phase when SNI server name parsing was successful. thanks Datong Sun for the patch.
* feature: ./configure: updated the stream subsystem related options from nginx 1.13.6. thanks hy05190134 for the patch.
* feature: added the SSL "sess_set_get_cb" yielding support patch for OpenSSL 1.1.0d and beyond. thanks spacewander for the patch.
* feature: applied the "init_cycle_pool_release" patch to nginx 1.13.6+ cores to make it valgrind or asan clean.
* bugfix: we incorrectly removed the existing Makefile even for "./configure --help". thanks spacewander for the patch.
* feature: added information about OpenResty's commercial support in the default index.html page.
* opm: doc index: updated the LuaJIT 2.1 official docs to the latest version.
* upgraded resty-cli to 0.21.
* resty: got rid of prerequisite perl modules to improve startup time. Startup time has been significantly reduced on *NIX systems. No improvement on Win32 though. On my mid-2015 MBP, the "resty -e "print(1)"" command's total time can drop from ~36ms to ~10ms.
* bugfix: when the signal is received but the child process is already gone, resty incorrectly returned a non-zero return code and output a "No such process" error. thanks Datong Sun for the patch.
* upgraded opm to 0.0.5.
* bugfix: opm get: curl via HTTP proxies would complain about "bad response status line received". The first "Connection established" response might not come with any response header entries at all.
* upgraded ngx_lua to 0.10.13.
* feature: ngx.req.get_post_args(), ngx.req.get_uri_args(), ngx.req.get_headers(), ngx.resp.get_headers(), and ngx.decode_args() now would return an error string, "truncated", when the input exceeds the "max_args"/"max_headers" limits.
* feature: added support for the OpenSSL 1.1.0 series. thanks Alessandro Ghedini for the original patch and the subsequent polishing work from Dejiang Zhu and spacewander.
* feature: added the "init_ttl" argument to the pure C function for the shdict:incr() API. thanks Thibault Charbonnier for the patch.
* feature: added support for the 308 status code in ngx.redirect(). thanks Mikhail Senin for the patch.
* feature: ssl: support enabling TLSv1.3 via the lua_ssl_protocols config directive. thanks Alessandro Ghedini for the patch.
* feature: "ngx_http_lua_ffi_set_resp_header()": now add an override flag argument to control whether to override existing resp headers. This feature is required by the new ngx.resp module's "add_header()" Lua API (in lua-resty-core). thanks spacewander for the patch.
* feature: allowed sending boolean and nil values in cosockets. thanks spacewander for the patch.
* feature: api.h: exposed the "ngx_http_lua_ffi_str_t" C data type for other Nginx C modules.
* feature: logged the tcp cosocket's remote end address when tcpsock:connect() times out and "lua_socket_log_errors" is on. This feature makes debugging connect timeout errors easier, since a domain name may map to different IP addresses at different times. thanks spacewander for the patch.
* bugfix: ngx.resp.get_headers(): the "max_headers" limit did not cover builtin headers.
* bugfix: "ngx_http_lua_ffi_ssl_set_serialized_session()": avoided a memory leak when calling it repeatedly.
* bugfix: we now throw a Lua exception when the ngx.location.capture* Lua API is used inside an HTTP2 request since it is known to lead to hanging.
* bugfix: nginx rewrite directive may initiate internal redirects without clearing any module ctx and rewrite_by_lua* handlers might think it was re-entered and thus it might lead to request hang. thanks twistedfall for the report.
* bugfix: avoided sharing the same code object for identical Lua inlined code chunks in different phases due to chunk name conflicts. thanks yandongxiao for the report and spacewander for the patch.
* bugfix: ngx.req.raw_header(): the first part of the header would be discarded when using single LF as delimiter and the number of headers is large enough. thanks tokers for the patch.
* bugfix: pure C API for ngx.var assignment: we failed to output the error message length. This might lead to error buffer overreads. thanks Ka-Hing Cheung for the patch.
* bugfix: the upper bound of port ranges should be 65535 instead of 65536. thanks spacewander for the patch.
* bugfix: we did not always free up all connections when cleaning up socket pools. thanks spacewander for the patch.
* bugfix: use of lua-resty-core's ngx.re API in init_by_lua* might lead to memory issues during nginx HUP reload when no lua_shared_dict directives are used and the regex cache is enabled.
* change: switched to "SSL_version()" calls from "TLS1_get_version()". "TLS1_get_version" is a simple wrapper for "SSL_version" that returns 0 when used with DTLS. However, it was removed from BoringSSL in 2015 so instead use "SSL_version" directly. Note: BoringSSL is never an officially supported target for this module. "ngx_http_lua_ffi_ssl_get_tls1_version" can never be reached with DTLS so the behaviour is the same. thanks Tom Thorogood for the patch.
* optimize: switched exptime argument type to 'long' in the shdict FFI API to mitigate potential overflows. thanks Thibault Charbonnier for the patch.
* optimize: avoided the string copy in "ngx_http_lua_ffi_req_get_method_name()".
* optimize: corrected the initial table size of req socket objects. thanks spacewander for the patch.
* optimize: destroy the Lua VM and avoid running any init_worker_by_lua* code inside cache helper processes. thanks spacewander for the patch.
* doc: fixed an error message typo in "set_der_priv_key()". thanks Tom Thorogood for the patch.
* doc: mentioned that OpenResty includes its own version of LuaJIT which is specifically optimized and enhanced for OpenResty.
* doc: some typo fixes from hongliang.
* doc: setting ngx.header.HEADER no longer throws out an exception when the header is already sent out; it now just logs an error message. thanks yandongxiao for the patch.
* doc: typo fixes from yandongxiao.
* doc: typo fixes from tan jinhua.
* doc: fixed a typo in a code comment. thanks Alex Zhang for the patch.
* upgraded lua-resty-core to 0.1.15.
* feature: implemented the ngx.resp module and its function add_header(). The ngx.resp module's "add_header" works like the "add_header" Nginx directive. Unlike the "ngx.header.HEADER=" API, this method appends a new header to the old one instead of overriding any existing ones. Unlike the "add_header" directive, this method overrides the builtin header instead of appending to it. thanks spacewander for the patch.
* feature: the FFI version of the ngx.req.get_uri_args() and ngx.req.get_headers() API functions now would return an error string, "truncated", when the input exceeds the "max_args"/"max_headers" limits.
* bugfix: ngx.re: fixed a "split()" corner case when successive separator characters are at the end of the subject string.
* bugfix: shdict: switched exptime argument type to 'long' to mitigate potential overflows.
* bugfix: ngx.ssl.session: avoided memory leaks when calling set_serialized_session repeatedly. thanks spacewander for the patch.
* optimize: avoided an extra string copy in ngx.req.get_method(). thanks spacewander for the patch.
* change: replaced "return error()" with "error()" to avoid stack unwinding upon Lua exceptions. This should give much better Lua backtraces for the errors.
thanks spacewander for the patch.
* bugfix: ngx.re: fixed a split() edge-case when using control characters in the regex. thanks Thibault Charbonnier for the patch.
* feature: shdict:incr(): added the "init_ttl" argument to set the ttl of values when they are first created via the "init" argument. thanks Thibault Charbonnier for the patch.
* feature: re-implemented the remaining time related Lua APIs with FFI (like ngx.update_time, ngx.http_time, ngx.parse_http_time, etc.). thanks spacewander for the patch.
* feature: ngx.errlog: added the raw_log() API function to allow the building of custom logging facilities. thanks Thibault Charbonnier for the patch.
* feature: added new API function "get_master_pid()" to the ngx.process module. thanks chronolaw for the patch.
* doc: typo fixes from chronolaw.
* feature: added new resty.core.phase module to include the pure FFI version of the ngx.get_phase() API. thanks Robert Paprocki for the patch.
* feature: added new ngx.base64 Lua module with the functions encode_base64url() and decode_base64url(). thanks Datong Sun for the patch.
* bugfix: resty.core.var: ngx.var.VAR assignment might over-read the error msg buffer. thanks Ka-Hing Cheung for the patch.
* optimize: use plain text string.find calls when we mean it.
* feature: ngx.ssl: added new raw_client_addr() Lua API function. thanks ??? for the patch.
* upgraded lua-cjson to 2.1.0.6.
* optimize: improved forward-compatibility with older versions of Lua/LuaJIT. thanks Thibault Charbonnier for the patch.
* bugfix: fixed the C compiler warning "ISO C90 forbids mixed declarations and code" on older operating systems.
* feature: set "cjson.array_mt" on decoded JSON arrays. This can be turned on via "cjson.decode_array_with_array_mt(true)". Off by default for backward compatibility. thanks Thibault Charbonnier for the patch.
* feature: added new cjson.array_mt metatable to allow enforcing JSON array encoding. thanks Thibault Charbonnier for the patch.
* bugfix: fixed a -Wsign-compare compiler warning. thanks gnought for the patch. * upgraded lua-resty-lrucache to 0.08. * feature: added new method flush_all() to flush all the data in an existing cache object. thanks yang.yang for the patch. * upgraded lua-resty-dns to 0.21. * refactor: cleaned up some variable names and locals. thanks Thijs Schreijer for the patch. * bugfix: fixed issues with retrans not being honoured upon connection failures. thanks Thijs Schreijer for the patch. * feature: improved error reporting, making it more precise, and returning errors of previous tries. thanks Thijs Schreijer for the patch. * bugfix: fix parsing state after SOA record. Correct parsing of Additional Records failed due to a bad parsing state after processing a SOA record in the Authoritative nameservers section. DNS response based on "dig @ns1.google.com SOA google.com". thanks Peter Wu for the patch. * bugfix: fix typo in SOA record field "minimum". Rename "mininum" to "minimum"; fixes an issue in the original feature added with lua-resty-dns v0.19rc1. * upgraded lua-resty-string to 0.11. * feature: resty.aes: added compatibility with OpenSSL 1.1.0+. thanks spacewander for the patch. * upgraded ngx_stream_lua to 0.0.5. * feature: we now have raw request downstream cosocket support for scripting UDP servers. thanks Datong Sun for the patch. * feature: added the preread handler postponing feature. thanks Datong Sun for the patch. * feature: added new config directive lua_add_variable to allow adding changeable variables. thanks Datong Sun for the patch. * upgraded ngx_set_misc to 0.32. * bugfix: set_quote_pgsql_str: we did not escape the "$" character. thanks Yuansheng Wang for the patch. * refactor: made "ngx_http_pg_utf_islegal()" much better. * bugfix: fixed the "-Wimplicit-fallthrough" warnings from GCC 7. thanks Andrei Belov for the patch. * upgraded ngx_redis2 to 0.15. 
* bugfix: "ragel -G2" generates C code which results in "-Werror=implicit-fallthrough" compilation errors at least with gcc 7.2. switched to "ragel -T1" instead. * upgraded ngx_memc to 0.19. * bugfix: "ragel -G2" generates C code which results in "-Werror=implicit-fallthrough" compilation errors at least with gcc 7.2. switched to "ragel -T1" instead. * upgraded ngx_encrypted_session to 0.08. * feature: added support for OpenSSL 1.1.0. thanks spacewander for the patch. * upgraded ngx_rds_csv to 0.09. * bugfix: fixed the "-Werror=implicit-fallthrough" compilation errors at least with gcc 7.2. * upgraded ngx_drizzle to 0.1.11. * bugfix: fixed the "-Werror=implicit-fallthrough" compilation errors at least with gcc 7.2. * upgraded ngx_xss to 0.06. * bugfix: "ragel -G2" generates C code which results in "-Werror=implicit-fallthrough" compilation errors at least with gcc 7.2. switched to "ragel -T1" instead. * bugfix: fixed errors and warnings with C compilers without variadic macro support. * upgraded LuaJIT to 2.1-20180419: https://github.com/openresty/luajit2/tags * feature: implemented new API function "jit.prngstate()" for reading or setting the current PRNG state number used in the JIT compiler. * feature: implemented the table.clone() builtin Lua API. This change only supports shallow cloning, e.g. local tab_clone = require "table.clone" local x = {x=12, y={5, 6, 7}} local y = tab_clone(x) -- ... use y here ... We observed a 7% overall speedup in the compiling speed of the edgelang-fan compiler, whose Lua is generated by the fanlang compiler. thanks Shuxin Yang for the patch and OpenResty Inc. for sponsoring this work. * imported Mike Pall's latest changes: * DynASM/x86: Add BMI1 and BMI2 instructions. * Fix rechaining of pseudo-resurrected string keys. * Clear stack after "print_jit_status()" in CLI. * Fix GCC 7 "-Wimplicit-fallthrough" warnings. * FFI: Don't assert on "#1LL" (Lua 5.2 compatibility mode only). * MIPS64: Fix soft-float +-0.0 vs. +-0.0 comparison. 
* Fix LuaJIT API docs for "LUAJIT_MODE_*". * Fix ARMv8 (32 bit subset) detection. * Fix "string.format("%c", 0)". * Fix "IR_BUFPUT" assembly. * MIPS64: Fix "xpcall()" error case. * ARM64: Fix "xpcall()" error case. * Fix saved bytecode encapsulated in ELF objects. * MIPS64: Fix register allocation in assembly of HREF. * ARM64: Fix assembly of HREFK. * Fix FOLD rule for strength reduction of widening. The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/en/changelog-1013006.html OpenResty is a full-fledged web platform by bundling the standard Nginx core, LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: https://qa.openresty.org/ We also always run our OpenResty Edge commercial software based on the latest open source version of OpenResty in our own global CDN network (dubbed "mini CDN") powering our openresty.org and openresty.com websites. See https://openresty.com/ for more details. Enjoy! Best regards, Yichun From lists at lazygranch.com Thu May 17 01:47:08 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 16 May 2018 18:47:08 -0700 Subject: Dynamic modules versus build from scratch Message-ID: <20180516184708.15b60c58.lists@lazygranch.com> The centos nginx from the repo lacks ngx_http_hls_module. This is a technique to add the module without compilation. https://dzhorov.com/2017/04/compiling-dynamic-modules-into-nginx-centos-7 Does anyone have experience with this? I'd like to avoid building nginx from scratch to make the updates go faster. When I ran freeBSD, I built nginx, so that isn't the problem. Rather I want to stay as "native" to centos as possible. 
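For anyone following the linked guide, the end result of the dynamic-module approach is a shared object built from the nginx source tree that matches the packaged version, then wired in with a single directive. A hedged sketch of that final step (the module name and paths below are illustrative, not the stock CentOS layout; note also that ngx_http_hls_module itself is, as far as I know, distributed only with the commercial NGINX Plus, so on stock nginx you would build a third-party module this way instead):

```nginx
# Illustrative sketch: loading a separately compiled dynamic module.
# The .so must be built with ./configure --add-dynamic-module=/path/to/module
# from sources matching the installed nginx version (ideally a --with-compat
# build), otherwise nginx refuses to load it at startup.
# The module filename here is hypothetical.
load_module modules/ngx_http_example_module.so;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        # ... directives provided by the loaded module go here ...
    }
}
```

load_module must appear in the main (top-level) context, before the events and http blocks, so keeping it near the top of nginx.conf is the usual convention.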
From nginx-forum at forum.nginx.org Thu May 17 04:50:10 2018 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 17 May 2018 00:50:10 -0400 Subject: Nginx Cache | @ prefix example In-Reply-To: <20180516224246.GK19311@daoine.org> References: <20180516224246.GK19311@daoine.org> Message-ID: <26dbef73ca335faa28bd40d2f3727341.NginxMailingListEnglish@forum.nginx.org> Thank you for the response and the useful information, Francis; incredibly helpful. I am using the following directive with this: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_background_update proxy_cache_background_update on; My webapp outputs the X-Accel-Expires header from PHP like so. CODE: <?php echo(gmdate('D, d M Y H:i:s', 0) . ' GMT'); ?> OUTPUT: Thu, 01 Jan 1970 00:00:00 GMT The 0 would be replaced by the time() function, which returns a UNIX time stamp. CODE: <?php echo(gmdate('D, d M Y H:i:s', time()) . ' GMT'); ?> OUTPUT: Thu, 17 May 2018 04:47:38 GMT I have noticed the format of this is different from what you provided here : >>$ date -d @0 >>will say something corresponding to "Thu Jan 1 00:00:00 UTC 1970". Should it look like yours, or will Nginx read and understand it in the format PHP is outputting? Francis Daly Wrote: ------------------------------------------------------- > On Sat, May 12, 2018 at 12:05:51AM -0400, c0nw0nk wrote: > > Hi there, > > > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_v > alid > > > > The "X-Accel-Expires" header field sets caching time of a response > in > > seconds. The zero value disables caching for a response. If the > value starts > > with the @ prefix, it sets an absolute time in seconds since Epoch, > up to > > which the response may be cached. > > > > Can someone give an example of how this should look and what if i > set it as > > zero what is the outcome then...? > > The upstream sometimes wants to say "this is valid for an hour", and > sometimes wants to say "this is valid until midnight". "For an hour" > is "3600". 
"Until midnight" could be "work out the time difference > between now and midnight, and set that number". Or it could be "when > is > midnight? Set @-that number". > > The @-prefix is for when you want a thing to be cached until a > specific > time, rather than for a specific duration. > > You can find the number to use by, for example, using > > $ date -d 'tomorrow 0:0' +%s > > and it will probably be 10 digits long. > > > //unknown outcome / result...? > > X-Accel-Expires: @0 > > $ date -d @0 > > will say something corresponding to "Thu Jan 1 00:00:00 UTC 1970". > > So this asks to "cache until 1970". Which is in the past, so possibly > is > "expire cache now"; but if you really want to expire the cache now you > should do just that. > > > //Expire cache straight away. > > X-Accel-Expires: 0 > > "disables caching" is what the documentation says. > > > //Expire cache in 5 seconds > > X-Accel-Expires: 5 > > "Cache for 5 seconds". That's probably the same thing. > > > //Expire cache in 5 seconds and allow "STALE" cache responses to be > stored > > for 5 seconds ????? > > X-Accel-expires: @5 5 > > The documentation you quoted doesn't seem to mention anything about > STALE, > or spaces in the header value. It looks like invalid input to nginx to > me, so nginx could do anything (or nothing) with it. > > > Hopefully I am right thinking that the above would work like this > need some > > clarification. > > Request to cache for a duration -> use the number of seconds. > > Request to cache until a time -> use @ and the time stamp in a > particular > format. 
> > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279762,279837#msg-279837 From nginx-forum at forum.nginx.org Thu May 17 06:28:13 2018 From: nginx-forum at forum.nginx.org (rambabuy) Date: Thu, 17 May 2018 02:28:13 -0400 Subject: NGX_AGAIN Handling in upload module Message-ID: HI I am facing an issue with NGX_AGAIN. ngx upload module is returning NGX_AGAIN while uploading a file. when nginx calls again upload module but upload module return NGX_OK id requestbody exist if( r->requestbody) return NGX_OK and its continue in a loop. any solution to this? Thanks Ram Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279838,279838#msg-279838 From nginx-forum at forum.nginx.org Thu May 17 06:35:08 2018 From: nginx-forum at forum.nginx.org (rambabuy) Date: Thu, 17 May 2018 02:35:08 -0400 Subject: NGX_AGAIN Handling in upload module In-Reply-To: References: Message-ID: <0bb0bdf32acb8ab183a996a59be6ef4a.NginxMailingListEnglish@forum.nginx.org> HI I am facing an issue with NGX_AGAIN. ngx upload module is returning NGX_AGAIN while uploading a file. when nginx calls again upload module but upload module return NGX_OK if requestbody exist. if( r->requestbody) return NGX_OK and its continue in a loop. any solution to this? 
if (!c->read->ready) { clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); ngx_add_timer(c->read, clcf->client_body_timeout); if (ngx_handle_read_event(c->read, 0) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } return NGX_AGAIN; } Thanks Ram Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279838,279839#msg-279839 From francis at daoine.org Thu May 17 07:14:09 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 17 May 2018 08:14:09 +0100 Subject: Nginx Cache | @ prefix example In-Reply-To: <26dbef73ca335faa28bd40d2f3727341.NginxMailingListEnglish@forum.nginx.org> References: <20180516224246.GK19311@daoine.org> <26dbef73ca335faa28bd40d2f3727341.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180517071409.GL19311@daoine.org> On Thu, May 17, 2018 at 12:50:10AM -0400, c0nw0nk wrote: Hi there, > Thank you for the response and useful information Francis incredibly > helpful. You're welcome. > My webapp outputting the X-Accel-Expires header is PHP like so. > > CODE: > echo(gmdate('D, d M Y H:i:s', 0) . 
' GMT'); > ?> > OUTPUT: > Thu, 01 Jan 1970 00:00:00 GMT > > The 0 would be replaced by the time function what is a UNIX time stamp. A unix time stamp is "absolute time in seconds since Epoch". That is: it is a single number, probably 10 digits long for anything currently useful. Right now, it is: $ date +%s 1526540978 So if you want to set the expiry time of "in about an hour", you could send a header of X-Accel-Expires: @1526544000 where that timestamp corresponds to $ date -u -d @1526544000 Thu May 17 08:00:00 UTC 2018 The X-Accel-Expires header should have a single value: either digits, or an @ followed by digits. f -- Francis Daly francis at daoine.org From r1ch+nginx at teamliquid.net Thu May 17 12:57:38 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 17 May 2018 14:57:38 +0200 Subject: Connection refused In-Reply-To: References: Message-ID: You should check your upstream logs to see why it is closing connections or crashing. On Tue, May 15, 2018 at 6:22 PM Ricky Gutierrez wrote: > Any help? > > El lun., 14 may. 
2018 20:02, Ricky Gutierrez > escribió: > >> hello list, I have a reverse proxy with nginx front end and I have the >> backend with nginx some applications in php7 with mariadb, reviewing >> the log I see a lot of errors like this: >> >> 2018/05/09 17:44:58 [error] 14633#14633: *1761 connect() failed (111: >> Connection refused) while connecting to upstream, client: >> 186.77.203.203, server: web.mydomain.com, request: "GET >> /imagenes/slide7.jpg HTTP/2.0", upstream: >> "http://192.168.11.7:80/imagenes/slide7.jpg", host: >> "www.mydomain.com", referrer: "https://www.mydomain.com/" >> >> 2018/05/09 17:45:09 [error] 14633#14633: *1761 connect() failed (111: >> Connection refused) while connecting to upstream, client: >> 186.77.203.203, server: web.mydomain.com, request: "GET >> /imagenes/slide8.jpg HTTP/2.0", upstream: >> "http://192.168.11.7:80/imagenes/slide8.jpg", host: >> "www.mydomain.com", referrer: "https://www.mydomain.com/" >> >> 2018/05/09 17:45:12 [error] 14633#14633: *1761 upstream prematurely >> closed connection while reading response header from upstream, client: >> 186.77.203.203, server: web.mydomain.com, request: "GET >> /imagenes/slide6.jpg HTTP/2.0", upstream: >> "http://192.168.11.7:80/imagenes/slide6.jpg", host: >> "www.mydomain.com", referrer: "https://www.mydomain.com/" >> >> I made a change according to this link on github, but I can not remove >> the error >> >> https://github.com/owncloud/client/issues/5706 >> >> my config : >> >> proxy_http_version 1.1; >> proxy_set_header Connection ""; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header Host $host; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_connect_timeout 900s; >> proxy_send_timeout 900s; >> proxy_read_timeout 900s; >> proxy_buffer_size 64k; >> proxy_buffers 16 32k; >> proxy_busy_buffers_size 64k; >> proxy_redirect off; >> proxy_request_buffering off; >> proxy_buffering off; >> proxy_pass http://backend1; >> >> regardss >> >> -- >> rickygm >> 
>> http://gnuforever.homelinux.com >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at zlabx.com Thu May 17 18:52:15 2018 From: nginx at zlabx.com (nginx at zlabx.com) Date: Thu, 17 May 2018 18:52:15 +0000 Subject: Compile static nginx Message-ID: <4d3f1d77512f90ae77f4731fe593f491@zlabx.com> Hello all I am trying to compile a static version of nginx on Arch Linux. This is my first attempt at compiling a static program. I have tried a bunch of different options from examples that I have found googling around, but I have not had any success. I am hoping that someone can help point me in the correct direction. I am using a PKGBUILD file to build the package in a clean CHROOT environment. LINUX: 4.13.7-1-ARCH GCC version: gcc-7.3.0-1 openssl_version: openssl-1.0.2m pcre_version: pcre-8.41 zlib_version: zlib-1.2.11 NGINX version: 1.13.7 here is a link to my build log http://www.zlabx.com/nginx-build-log.html (http://www.zlabx.com/nginx-build-log.html) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu May 17 21:46:25 2018 From: nginx-forum at forum.nginx.org (foxman) Date: Thu, 17 May 2018 17:46:25 -0400 Subject: if( variable exists ) In-Reply-To: <201204032056.30563.ne@vbart.ru> References: <201204032056.30563.ne@vbart.ru> Message-ID: <3390de7e852f23ffb74976745e46ae91.NginxMailingListEnglish@forum.nginx.org> if ($arg_user) { do as if $arg_user exists bla bla bla } if ($arg_user !~ $arg_user) { do as if $arg_user does not exist } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,224860,279847#msg-279847 From pete at pragmatika.net Thu May 17 22:03:08 2018 From: pete at pragmatika.net (Pete Cooper) Date: Thu, 17 May 2018 23:03:08 +0100 Subject: custom log_format not inherited by server block Message-ID: <76956AAD-2845-485C-9494-BE9CEF236D27@pragmatika.net> Hello. I am compiling Nginx 1.14.0 from source on Ubuntu 18.04 LTS with a view to compiling ipscrub as a dynamic module. My compile completes without error, my nginx.conf validates, Nginx runs as expected, yet my server block throws an error about an unknown log format. If my `log_format` directive appears after the `access_log` directive in nginx.conf, it will not validate, stating: nginx: [emerg] unknown log format "ipscrubbed" in /etc/nginx/nginx.conf:15 If my `log_format` directive appears before the `access_log` directive in nginx.conf, it validates. If my `log_format` directive appears before the `access_log` directive in nginx.conf, the default server block will not validate, stating: nginx: [emerg] unknown log format "ipscrubbed" in /etc/nginx/sites-enabled/default:2 ...implying that although my custom `log_format` is valid, the default server block is not inheriting it. Which has completely thrown me. Do I need to reposition the `access_log` directive to a later point in the server block? Or is there something else fundamental that I'm overlooking? I would very much appreciate an additional pair of eyes on this, if your interest, time and attention permits. 
Thank you in advance. My compile script: https://gist.github.com/petecooper/95b532b343372f707876161ee338b870 My nginx.conf: https://gist.github.com/petecooper/29fcf66f1fad0279b157201c8f233c59 My server block: https://gist.github.com/petecooper/b3fa68a165afd03fdaca3ba32545f49e -- Pete Cooper pete at pragmatika.net https://pragmatika.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 18 00:05:34 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 18 May 2018 08:05:34 +0800 Subject: Nginx Directory Listing - Restrict by IP Address Message-ID: Hi Team, We have a requirement to allow directory listing from a few servers and disallow it from other IP addresses, while all IP addresses should be able to download all files inside the directory. Can somebody provide the correct nginx config for the same. location / { root /downloads; autoindex on; allow 1.1.1.1; deny all; } If I use the above config, only the 1.1.1.1 IP address can list directories and download files from this server, but from other IP addresses downloads show forbidden, due to the IP address restriction. Is there a way to overcome this issue, thanks. Thanks & Regards Sathish.V -------------- next part -------------- An HTML attachment was scrubbed... URL: From prajithpalakkuda at gmail.com Fri May 18 06:15:42 2018 From: prajithpalakkuda at gmail.com (PRAJITH) Date: Fri, 18 May 2018 11:45:42 +0530 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: Hi Satish, There are "if" constructs in nginx, please check http://nginx.org/r/if. If you want to allow multiple IP addresses, it might be a better idea to use map. 
eg: map $remote_addr $allowed { default 0; 1.1.1.1 1; 2.2.2.2 1; } and then in the download location block if ($allowed = 1) { autoindex on; } Thanks, Prajith On 18 May 2018 at 05:35, Sathish Kumar wrote: > Hi Team, > > We have a requirement to allow directory listing from few servers and > disallow from other ip addresses and all IP addresses should be able to > download all files inside the directory. > > Can somebody provide the correct nginx config for the same. > > location / { > root /downloads; > autoindex on; > allow 1.1.1.1; > deny all; > } > > If I use the above config, only on 1.1.1.1 IP address can directory list > from this server and can file download but from other IP addresses download > shows forbidden, due to IP address restriction > > Is there a way to overcome this issue, thanks. > > Thanks & Regards > Sathish.V > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 18 06:36:08 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 18 May 2018 14:36:08 +0800 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: Hi Prajith, I had tried this option, but autoindex is not allowed under an if statement. location / { root /downloads; if ($allowed = 1) { autoindex on; } } Error: "autoindex" directive is not allowed here in domain.conf Thanks & Regards Sathish.V On Fri, May 18, 2018 at 2:16 PM PRAJITH wrote: > Hi Satish, > > There are "if" constructs in nginx, please check http://nginx.org/r/if. > if you want to allow multiple IP addresses, it might be better idea to use > map. 
eg: > > map $remote_addr $allowed { > default 0; > 1.1.1.1 1; > 2.2.2.2 1; > } > > and then in in the download location block > > if ($allowed = 1) { > autoindex on; > } > > Thanks, > Prajith > > On 18 May 2018 at 05:35, Sathish Kumar wrote: > >> Hi Team, >> >> We have a requirement to allow directory listing from few servers and >> disallow from other ip addresses and all IP addresses should be able to >> download all files inside the directory. >> >> Can somebody provide the correct nginx config for the same. >> >> location / { >> root /downloads; >> autoindex on; >> allow 1.1.1.1; >> deny all; >> } >> >> If I use the above config, only on 1.1.1.1 IP address can directory list >> from this server and can file download but from other IP addresses download >> shows forbidden, due to IP address restriction >> >> Is there a way to overcome this issue, thanks. >> >> Thanks & Regards >> Sathish.V >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Fri May 18 06:46:14 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 17 May 2018 23:46:14 -0700 Subject: blank lines in config Message-ID: Should nginx ignore those blank lines (lines with spaces only) in config? I tried below in the config of nginx 1.14.0: server { ... set $testvar1 "testval1"; ... 300 blank lines, each with 20 spaces ... set $testvar2 "testval2"; ... } nginx configtest says: nginx: [emerg] too long parameter " ..." started in Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cello86 at gmail.com Fri May 18 10:59:32 2018 From: cello86 at gmail.com (Marcello Lorenzi) Date: Fri, 18 May 2018 12:59:32 +0200 Subject: Nginx filter client authentication Message-ID: Hi All, we're trying to configure client certificate authentication on an Nginx 1.12.2 instance in our development environment, and all works fine. We would like to restrict access to a specific site to some particular client certificates, to avoid other certificates trusted by the same CA being able to access this endpoint. Is it possible to configure it? Thanks, Marcello -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.friscia at yale.edu Fri May 18 11:17:52 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 18 May 2018 11:17:52 +0000 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: I think you need to change this a little map $remote_addr $allowed { default "off"; 1.1.1.1 "on"; 2.2.2.2 "on"; } and then in the download location block autoindex $allowed; I use similar logic on different variables and try at all costs to avoid IF statements anywhere in the configs. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu From: nginx on behalf of PRAJITH Reply-To: "nginx at nginx.org" Date: Friday, May 18, 2018 at 2:16 AM To: "nginx at nginx.org" Subject: Re: Nginx Directory Listing - Restrict by IP Address Hi Satish, There are "if" constructs in nginx, please check http://nginx.org/r/if. if you want to allow multiple IP addresses, it might be better idea to use map. 
eg: map $remote_addr $allowed { default 0; 1.1.1.1 1; 2.2.2.2 1; } and then in the download location block if ($allowed = 1) { autoindex on; } Thanks, Prajith On 18 May 2018 at 05:35, Sathish Kumar > wrote: Hi Team, We have a requirement to allow directory listing from few servers and disallow from other ip addresses and all IP addresses should be able to download all files inside the directory. Can somebody provide the correct nginx config for the same. location / { root /downloads; autoindex on; allow 1.1.1.1; deny all; } If I use the above config, only on 1.1.1.1 IP address can directory list from this server and can file download but from other IP addresses download shows forbidden, due to IP address restriction Is there a way to overcome this issue, thanks. Thanks & Regards Sathish.V _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Fri May 18 12:17:12 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Fri, 18 May 2018 15:17:12 +0300 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> Hello, guys. I think you can try something like this: location = /downloads/ { root /downloads/; allow 1.1.1.1; autoindex on; } location /downloads/ { root /downloads/; } This will work nicely if you don't need subdirectories. If you need those, you can use a rewrite like: map $remote_addr $forbidlisting { default 1; 1.1.1.1 0; } location /downloads/ { root /downloads/; autoindex on; if ($forbidlisting) { rewrite /downloads(.*) /noindex_downloads$1 last; } } location /noindex_downloads/ { internal; 
root /downloads/; } On 18.05.2018 14:17, Friscia, Michael wrote: > > I think you need to change this a little > > map $remote_addr $allowed { > default "off"; > 1.1.1.1 "on"; > 2.2.2.2 "on"; > } > > and then in the download location block > > autoindex $allowed; > > I use similar logic on different variables and try at all costs to > avoid IF statements anywhere in the configs. > > ___________________________________________ > > Michael Friscia > > Office of Communications > > Yale School of Medicine > > (203) 737-7932 - office > > (203) 931-5381 - mobile > > http://web.yale.edu > > *From: *nginx on behalf of PRAJITH > > *Reply-To: *"nginx at nginx.org" > *Date: *Friday, May 18, 2018 at 2:16 AM > *To: *"nginx at nginx.org" > *Subject: *Re: Nginx Directory Listing - Restrict by IP Address > > Hi Satish, > > There are "if" constructs in nginx, please check > http://nginx.org/r/if. > if you want to allow multiple IP addresses, it might be better idea to > use map. eg: > > map $remote_addr $allowed { > default 0; > 1.1.1.1 1; > 2.2.2.2 1; > } > > and then in the download location block > > if ($allowed = 1) { > autoindex on; > } > > Thanks, > > Prajith > > On 18 May 2018 at 05:35, Sathish Kumar > > wrote: > > Hi Team, > > We have a requirement to allow directory listing from few servers > and disallow from other ip addresses and all IP addresses should > be able to download all files inside the directory. > > Can somebody provide the correct nginx config for the same. > > location / { > > root /downloads; > > autoindex on; > > allow 1.1.1.1; > > deny all; > > } > > If I use the above config, only on 1.1.1.1 IP address can > directory list from this server and can file download but from > other IP addresses download shows forbidden, due to IP address > restriction > > Is there a way to overcome this issue, thanks. 
> > > Thanks & Regards > Sathish.V > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 18 12:55:37 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 18 May 2018 20:55:37 +0800 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: Hi, I tried this option, but it says autoindex needs to be on or off and it's not accepting a variable. [emerg] invalid value "$allowed" in "autoindex" directive, it must be "on" or "off" in domain.conf On Fri, May 18, 2018, 7:18 PM Friscia, Michael wrote: > I think you need to change this a little > > > > map $remote_addr $allowed { > default "off"; > 1.1.1.1 "on"; > 2.2.2.2 "on"; > } > > and then in the download location block > > autoindex $allowed; > > I use similar logic on different variables and try at all costs to avoid > IF statements anywhere in the configs. > > > > ___________________________________________ > > Michael Friscia > > Office of Communications > > Yale School of Medicine > > (203) 737-7932 - office > > (203) 931-5381 - mobile > > http://web.yale.edu > > > > *From: *nginx on behalf of PRAJITH < > prajithpalakkuda at gmail.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Friday, May 18, 2018 at 2:16 AM > *To: *"nginx at nginx.org" > *Subject: *Re: Nginx Directory Listing - Restrict by IP Address > > > > Hi Satish, > > There are "if" constructs in nginx, please check http://nginx.org/r/if > . > if you want to allow multiple IP addresses, it might be better idea to use > map. 
eg: > > map $remote_addr $allowed { > default 0; > 1.1.1.1 1; > 2.2.2.2 1; > } > > and then in in the download location block > > if ($allowed = 1) { > autoindex on; > } > > Thanks, > > Prajith > > > > On 18 May 2018 at 05:35, Sathish Kumar wrote: > > Hi Team, > > We have a requirement to allow directory listing from few servers and > disallow from other ip addresses and all IP addresses should be able to > download all files inside the directory. > > Can somebody provide the correct nginx config for the same. > > location / { > > root /downloads; > > autoindex on; > > allow 1.1.1.1; > > deny all; > > } > > If I use the above config, only on 1.1.1.1 IP address can directory list > from this server and can file download but from other IP addresses download > shows forbidden, due to IP address restriction > > Is there a way to overcome this issue, thanks. > > > Thanks & Regards > Sathish.V > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 18 13:01:27 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 18 May 2018 21:01:27 +0800 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> References: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> Message-ID: Hi, Tried this option it throws rewrite error and am not able to download file from non whitelisted ip addresses. 
ERROR: rewrite or internal redirection cycle while processing "/noindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsDownloads/abcd/file.zip", client: 3.3.3.3, server: abc.com, request: "GET /Downloads/abcd/file.zip On Fri, May 18, 2018, 8:17 PM Igor A. Ippolitov wrote: > Hello, guys. > > I think you can try something like this: > > location = /downloads/ { > root /downloads/; > allow 1.1.1.1; > autoindex on; > } > location /downloads/ { > root /downloads/; > } > > This will work nicely if you don't need subdirectories. > If you need those, you can use a rewrite like: > > map $remote_addr $forbidlisting { > default 1; > 1.1.1.1 0; > } > location /downloads/ { > root /downloads/; > autoindex on; > if ($forbidlisting) { > rewrite /downloads(.*) /noindex_downloads$1 last; > } > } > location /noindex_downloads/ { > internal; > root /downloads/; > } > > On 18.05.2018 14:17, Friscia, Michael wrote: > [... ...] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Fri May 18 13:02:21 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 18 May 2018 18:32:21 +0530 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: Since this requires more logic, I think you can implement it in an application server / server-side scripting like PHP, Python, etc.: your application must verify the IP address and list the files, rather than the web server. On Fri, May 18, 2018 at 6:25 PM, Sathish Kumar wrote: > [... ...] -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri May 18 14:27:22 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 May 2018 17:27:22 +0300 Subject: custom log_format not inherited by server block In-Reply-To: <76956AAD-2845-485C-9494-BE9CEF236D27@pragmatika.net> References: <76956AAD-2845-485C-9494-BE9CEF236D27@pragmatika.net> Message-ID: <20180518142722.GY32137@mdounin.ru> Hello!
On Thu, May 17, 2018 at 11:03:08PM +0100, Pete Cooper wrote: > I am compiling Nginx 1.14.0 from source on Ubuntu 18.04 LTS with > a view to compiling ipscrub as a dynamic module. > > My compile completes without error, my nginx.conf validates, > Nginx runs as expected, yet my server block throws an error > about an unknown log format. > > If my `log_format` directive appears after the `access_log` > directive in nginx.conf, it will not validate, stating: > > nginx: [emerg] unknown log format "ipscrubbed" in /etc/nginx/nginx.conf:15 > > If my `log_format` directive appears before the `access_log` > directive in nginx.conf, it validates. That's expected. The list of formats is global, and nginx will look up the appropriate format when processing the "access_log" directive. As such, you have to define log_format before the access_log that uses it. > If my `log_format` directive appears before the `access_log` > directive in nginx.conf, the default server block will not > validate, stating: > > nginx: [emerg] unknown log format "ipscrubbed" in /etc/nginx/sites-enabled/default:2 > > implying that although my custom `log_format` is valid, the > default server block is not inheriting it. Which has completely > thrown me. Do I need to reposition the `access_log` directive to > a later point in the server block? Or is there something else > fundamental that I'm overlooking? Quoting the nginx.conf file: include /etc/nginx/sites-enabled/*; keepalive_timeout 65; log_format ipscrubbed '$remote_addr_ipscrub'; That is, the "ipscrubbed" format is defined _after_ specific server{} blocks are included from sites-enabled. With such a configuration you won't be able to use the "ipscrubbed" format in these included configuration files. To fix things, consider defining log_format _before_ including server-specific configuration files. The best solution would be to move the include /etc/nginx/sites-enabled/*; line to the end of the http{} block.
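[Editor's note: the ordering constraint described above can be summarised as a minimal nginx.conf sketch. The "ipscrubbed" format, the $remote_addr_ipscrub variable, and the sites-enabled path come from the thread; everything else in the http{} block is illustrative.]

```nginx
http {
    # Define the format first: the list of formats is global, but
    # "access_log" resolves the format name at configuration parse time,
    # so the definition must already have been read.
    log_format ipscrubbed '$remote_addr_ipscrub';

    access_log /var/log/nginx/access.log ipscrubbed;

    keepalive_timeout 65;

    # Include server{} blocks last, so any access_log directive inside
    # them can also reference "ipscrubbed".
    include /etc/nginx/sites-enabled/*;
}
```

With the include moved to the end of http{}, every included server block is parsed after the format definition, so `access_log ... ipscrubbed;` works both here and inside the sites-enabled files.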
-- Maxim Dounin http://mdounin.ru/ From iippolitov at nginx.com Fri May 18 15:10:06 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Fri, 18 May 2018 18:10:06 +0300 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> Message-ID: <60158dab-c9df-e4b4-0ec7-a91d805b5bdf@nginx.com> Sathish, I made a couple of minor mistakes. Please, try the following configuration: > > map $remote_addr $forbidlisting { > default 1; > 1.1.1.1 0; > } > location /downloads { > alias /downloads/; > autoindex on; > if ($forbidlisting) { > rewrite /downloads(.*) /noindex_downloads/$1 last; > } > } > location /noindex_downloads/ { > internal; > alias /downloads/; > } I tried it and it works for me. On 18.05.2018 16:01, Sathish Kumar wrote: > [... ...] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 18 16:32:16 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Sat, 19 May 2018 00:32:16 +0800 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: <60158dab-c9df-e4b4-0ec7-a91d805b5bdf@nginx.com> References: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> <60158dab-c9df-e4b4-0ec7-a91d805b5bdf@nginx.com> Message-ID: Hi, I am doing this for location /; in that case, how will I have to change the portion below? location /downloads { alias /downloads/; autoindex on; if ($forbidlisting) { rewrite /downloads(.*) /noindex_downloads/$1 last; } } location /noindex_downloads/ { internal; alias /downloads/; } On Fri, May 18, 2018, 11:10 PM Igor A. Ippolitov wrote: > Sathish, > > I made a couple of minor mistakes.
> > Please, try the following configuration: > > map $remote_addr $forbidlisting { > default 1; > 1.1.1.1 0; > } > location /downloads { > alias /downloads/; > autoindex on; > if ($forbidlisting) { > rewrite /downloads(.*) /noindex_downloads/$1 last; > } > } > location /noindex_downloads/ { > internal; > alias /downloads/; > } > > I tried it and it works for me. > > On 18.05.2018 16:01, Sathish Kumar wrote: >> [... ...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri May 18 16:35:51 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 18 May 2018 17:35:51 +0100 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: Message-ID: <20180518163551.GM19311@daoine.org> On Fri, May 18, 2018 at 08:05:34AM +0800, Sathish Kumar wrote: Hi there, > We have a requirement to allow directory listing from a few servers and > disallow it from other IP addresses, and all IP addresses should be able to > download all files inside the directory. "Directory listings" are presumably only relevant when the request URL ends in /. So if you have "autoindex on", then all you need to do is disallow some IP addresses from accessing those URLs. > location / { > root /downloads; > autoindex on; > allow 1.1.1.1; > deny all; > } Replace the allow/deny part with location ~ /$ { allow 1.1.1.1; deny all; } and it should do what you want. The end result is: request ends in / --> check the allow list; otherwise, allow as normal.
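[Editor's note: assembled into a full server block, the nested-location approach above might look like the sketch below. The listen port and root path are illustrative, not from the thread; the sketch relies on `autoindex on` being inherited by the nested location, which matches only URIs ending in "/", i.e. exactly the requests that can produce a listing.]

```nginx
server {
    listen 80;
    root /downloads;

    location / {
        autoindex on;

        # Only listing requests (URI ends in "/") are access-checked;
        # plain file downloads do not match this nested location and
        # therefore remain open to every client.
        location ~ /$ {
            allow 1.1.1.1;
            deny all;
        }
    }
}
```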
f -- Francis Daly francis at daoine.org From francis at daoine.org Fri May 18 16:41:32 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 18 May 2018 17:41:32 +0100 Subject: Nginx filter client authentication In-Reply-To: References: Message-ID: <20180518164132.GN19311@daoine.org> On Fri, May 18, 2018 at 12:59:32PM +0200, Marcello Lorenzi wrote: Hi there, > we're trying to configure client certificate authentication on an Nginx > 1.12.2 instance in our development environment, and all works fine. We > would like to restrict access to a specific site to some particular client > certificates, so that other certificates trusted by the same CA cannot > access this endpoint. Is it possible to configure this? http://nginx.org/r/ssl_verify_client The variable $ssl_client_verify can tell you that the certificate is valid (signed by a trusted CA, and in date). The variable $ssl_client_cert and friends like $ssl_client_s_dn tell you the certificate contents. If you have the list of which certificates to allow, or which to reject, then you can use a "map" to set a reject-variable, and if that variable is true, deny access. f -- Francis Daly francis at daoine.org From iippolitov at nginx.com Fri May 18 17:03:00 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Fri, 18 May 2018 20:03:00 +0300 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> <60158dab-c9df-e4b4-0ec7-a91d805b5bdf@nginx.com> Message-ID: <29d605dc-421d-c48f-b357-8305e827a0d8@nginx.com> This works for me: > > location / { > alias /downloads/; > autoindex on; > if ($forbidlisting) { > rewrite ^/(.*) /noindex_root/$1 last; > } > } > location /noindex_root/ { > internal; > alias /downloads/; > } On 18.05.2018 19:32, Sathish Kumar wrote: > Hi, > > I am doing this for location /; in that case, how will I have to change the > portion below?
> [... ...] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri May 18 20:02:55 2018 From: nginx-forum at forum.nginx.org (pedrobrigatto) Date: Fri, 18 May 2018 16:02:55 -0400 Subject: POST redirection with NGINX Message-ID: <0bb29ef9cae54a4d22a6997796513f73.NginxMailingListEnglish@forum.nginx.org> Hi guys, The base name of a web application has changed, and now I need to implement a redirection of POST requests so that clients already using the old base path are not affected by this modification.
So, let's say the old path to a web service is https://ip-address/old-name/rest/mymethod and now it is going to be https://ip-address/new-name/rest/mymethod. I tried both return 307 and rewrite rules, but nothing has worked so far. Can you please give me a hand with this? Thank you very much in advance! Best regards, Pedro Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279868,279868#msg-279868 From satcse88 at gmail.com Sat May 19 01:39:03 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Sat, 19 May 2018 09:39:03 +0800 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: <29d605dc-421d-c48f-b357-8305e827a0d8@nginx.com> References: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> <60158dab-c9df-e4b4-0ec7-a91d805b5bdf@nginx.com> <29d605dc-421d-c48f-b357-8305e827a0d8@nginx.com> Message-ID: Hi Igor, I tried your config and am getting an error; can you help me? location / { alias /downloads/; root /data/files; autoindex on; if ($forbidlisting) { rewrite ^/(.*) /noindex_root/$1 last; } } location /noindex_root/ { internal; alias /downloads/; } nginx: [emerg] "root" directive is duplicate, "alias" directive was specified earlier in domain.conf Thanks & Regards Sathish.V On Sat, May 19, 2018 at 1:03 AM Igor A. Ippolitov wrote: > This works for me: > [... ...] > On Fri, May 18, 2018, 11:10 PM Igor A. Ippolitov wrote: >> Sathish, >> >> I made a couple of minor mistakes.
>> >> Please, try following configuration: >> >> >> map $remote_addr $forbidlisting { >> default 1; >> 1.1.1.1 0; >> } >> location /downloads { >> alias /downloads/; >> autoindex on; >> if ($forbidlisting) { >> rewrite /downloads(.*) /noindex_downloads/$1 last; >> } >> } >> location /noindex_downloads/ { >> internal; >> alias /downloads/; >> } >> >> >> I tried it and it works for me. >> >> >> On 18.05.2018 16:01, Sathish Kumar wrote: >> >> Hi, >> >> Tried this option it throws rewrite error and am not able to download >> file from non whitelisted ip addresses. >> >> >> ERROR: >> rewrite or internal redirection cycle while processing >> "/noindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsDownloads/abcd/file.zip", >> client: 3.3.3.3, server: abc.com, request: "GET /Downloads/abcd/file.zip >> >> >> On Fri, May 18, 2018, 8:17 PM Igor A. Ippolitov >> wrote: >> >>> Hello, guys. >>> >>> I think, you can try something like this: >>> >>> location = /downloads/ { >>> root /downloads/; >>> allow 1.1.1.1; >>> autoindex on; >>> } >>> location /downloads/ { >>> root /downloads/; >>> } >>> >>> This will work nicely if you don't need subdirectories. 
>>> If you need those, you can use a rewrite like: >>> >>> map $remote_addr $forbidlisting { >>> default 1; >>> 1.1.1.1 0; >>> } >>> location /downloads/ { >>> root /downloads/; >>> autoindex on; >>> if ($forbidlisting) { >>> rewrite /downloads(.*) /noindex_downloads$1 last; >>> } >>> } >>> location /noindex_downloads/ { >>> internal; >>> root /downloads/; >>> } >>> >>> >>> On 18.05.2018 14:17, Friscia, Michael wrote: >>> >>> I think you need to change this a little >>> >>> >>> >>> map $remote_addr $allowed { >>> default ?off?; >>> 1.1.1.1 ?on?; >>> 2.2.2.2 ?on:; >>> } >>> >>> and then in in the download location block >>> >>> autoindex $allowed; >>> >>> I use similar logic on different variables and try at all costs to avoid >>> IF statements anywhere in the configs. >>> >>> >>> >>> ___________________________________________ >>> >>> Michael Friscia >>> >>> Office of Communications >>> >>> Yale School of Medicine >>> >>> (203) 737-7932 - office >>> >>> (203) 931-5381 - mobile >>> >>> http://web.yale.edu >>> >>> >>> >>> *From: *nginx on >>> behalf of PRAJITH >>> >>> *Reply-To: *"nginx at nginx.org" >>> >>> *Date: *Friday, May 18, 2018 at 2:16 AM >>> *To: *"nginx at nginx.org" >>> >>> *Subject: *Re: Nginx Directory Listing - Restrict by IP Address >>> >>> >>> >>> Hi Satish, >>> >>> There are "if" constructs in nginx, please check http://nginx.org/r/if >>> . >>> if you want to allow multiple IP addresses, it might be better idea to use >>> map. eg: >>> >>> map $remote_addr $allowed { >>> default 0; >>> 1.1.1.1 1; >>> 2.2.2.2 1; >>> } >>> >>> and then in in the download location block >>> >>> if ($allowed = 1) { >>> autoindex on; >>> } >>> >>> Thanks, >>> >>> Prajith >>> >>> >>> >>> On 18 May 2018 at 05:35, Sathish Kumar wrote: >>> >>> Hi Team, >>> >>> We have a requirement to allow directory listing from few servers and >>> disallow from other ip addresses and all IP addresses should be able to >>> download all files inside the directory. 
>>> >>> Can somebody provide the correct nginx config for this? >>> >>> location / { >>> >>> root /downloads; >>> >>> autoindex on; >>> >>> allow 1.1.1.1; >>> >>> deny all; >>> >>> } >>> >>> If I use the above config, only the 1.1.1.1 IP address can list the directory >>> and download files from this server; from other IP addresses the download >>> shows forbidden, due to the IP address restriction. >>> >>> Is there a way to overcome this issue, thanks. >>> >>> >>> Thanks & Regards >>> Sathish.V >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From satcse88 at gmail.com Sat May 19 03:59:19 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Sat, 19 May 2018 11:59:19 +0800 Subject: Nginx Directory Listing - Restrict by IP Address In-Reply-To: References: <6b92e22a-4522-3d19-8a55-3f98981d0b37@nginx.com> <60158dab-c9df-e4b4-0ec7-a91d805b5bdf@nginx.com> <29d605dc-421d-c48f-b357-8305e827a0d8@nginx.com> Message-ID: Hi All, I got it working now by adding the code below. Hope it will be useful for whoever may need it or is looking for a solution. Only whitelisted IP addresses can do directory listing; other IP addresses can only download the files. nginx.conf http{ .... geo $geoAutoIndexWhitelist { default 0; 1.1.1.1 1; } } site domain config domain.conf server { .... root /data/downloads; autoindex off; location / { if ($geoAutoIndexWhitelist) { rewrite ^/(.*)$ /allowed_downloads/$1/ last; } try_files $uri $uri.html $uri/ =404; } location /allowed_downloads/ { internal; alias /data/downloads/; autoindex on; } } Then reload the nginx service. credits: shawn-c (stackoverflow) Thanks & Regards Sathish.V On Sat, May 19, 2018 at 9:39 AM Sathish Kumar wrote: > Hi Igor, > > I tried your config and am getting an error; can you help me? > > location / { > > alias /downloads/; > root /data/files; > autoindex on; > > if ($forbidlisting) { > rewrite ^/(.*) /noindex_root/$1 last; > > } > } > location /noindex_root/ { > internal; > alias /downloads/; > } > > > nginx: [emerg] "root" directive is duplicate, "alias" directive was > specified earlier in domain.conf > > > > Thanks & Regards > Sathish.V > > > On Sat, May 19, 2018 at 1:03 AM Igor A.
Ippolitov > wrote: > >> This works for me: >> >> >> location / { >> alias /downloads/; >> autoindex on; >> if ($forbidlisting) { >> rewrite ^/(.*) /noindex_root/$1 last; >> } >> } >> location /noindex_root/ { >> internal; >> alias /downloads/; >> } >> >> >> >> On 18.05.2018 19:32, Sathish Kumar wrote: >> >> Hi, >> >> I am doing for location /, in that case how will have to change the below >> portion. >> >> location /downloads { >> alias /downloads/; >> autoindex on; >> if ($forbidlisting) { >> rewrite /downloads(.*) /noindex_downloads/$1 last; >> } >> } >> location /noindex_downloads/ { >> internal; >> alias /downloads/; >> } >> >> >> >> On Fri, May 18, 2018, 11:10 PM Igor A. Ippolitov >> wrote: >> >>> Sathish, >>> >>> I made a couple of minor mistakes. >>> >>> Please, try following configuration: >>> >>> >>> map $remote_addr $forbidlisting { >>> default 1; >>> 1.1.1.1 0; >>> } >>> location /downloads { >>> alias /downloads/; >>> autoindex on; >>> if ($forbidlisting) { >>> rewrite /downloads(.*) /noindex_downloads/$1 last; >>> } >>> } >>> location /noindex_downloads/ { >>> internal; >>> alias /downloads/; >>> } >>> >>> >>> I tried it and it works for me. >>> >>> >>> On 18.05.2018 16:01, Sathish Kumar wrote: >>> >>> Hi, >>> >>> Tried this option it throws rewrite error and am not able to download >>> file from non whitelisted ip addresses. >>> >>> >>> ERROR: >>> rewrite or internal redirection cycle while processing >>> "/noindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsnoindex_downloadsDownloads/abcd/file.zip", >>> client: 3.3.3.3, server: abc.com, request: "GET >>> /Downloads/abcd/file.zip >>> >>> >>> On Fri, May 18, 2018, 8:17 PM Igor A. Ippolitov >>> wrote: >>> >>>> Hello, guys. 
>>>> >>>> I think you can try something like this: >>>> >>>> location = /downloads/ { >>>> root /downloads/; >>>> allow 1.1.1.1; >>>> autoindex on; >>>> } >>>> location /downloads/ { >>>> root /downloads/; >>>> } >>>> >>>> This will work nicely if you don't need subdirectories. >>>> If you need those, you can use a rewrite like: >>>> >>>> map $remote_addr $forbidlisting { >>>> default 1; >>>> 1.1.1.1 0; >>>> } >>>> location /downloads/ { >>>> root /downloads/; >>>> autoindex on; >>>> if ($forbidlisting) { >>>> rewrite /downloads(.*) /noindex_downloads$1 last; >>>> } >>>> } >>>> location /noindex_downloads/ { >>>> internal; >>>> root /downloads/; >>>> } >>>> >>>> >>>> On 18.05.2018 14:17, Friscia, Michael wrote: >>>> >>>> I think you need to change this a little >>>> >>>> >>>> >>>> map $remote_addr $allowed { >>>> default "off"; >>>> 1.1.1.1 "on"; >>>> 2.2.2.2 "on"; >>>> } >>>> >>>> and then in the download location block >>>> >>>> autoindex $allowed; >>>> >>>> I use similar logic on different variables and try at all costs to >>>> avoid IF statements anywhere in the configs. >>>> >>>> >>>> >>>> ___________________________________________ >>>> >>>> Michael Friscia >>>> >>>> Office of Communications >>>> >>>> Yale School of Medicine >>>> >>>> (203) 737-7932 - office >>>> >>>> (203) 931-5381 - mobile >>>> >>>> http://web.yale.edu >>>> >>>> >>>> >>>> *From: *nginx on >>>> behalf of PRAJITH >>>> >>>> *Reply-To: *"nginx at nginx.org" >>>> >>>> *Date: *Friday, May 18, 2018 at 2:16 AM >>>> *To: *"nginx at nginx.org" >>>> >>>> *Subject: *Re: Nginx Directory Listing - Restrict by IP Address >>>> >>>> >>>> >>>> Hi Satish, >>>> >>>> There are "if" constructs in nginx, please check http://nginx.org/r/if >>>> . >>>> if you want to allow multiple IP addresses, it might be a better idea to use >>>> map.
eg: >>>> >>>> map $remote_addr $allowed { >>>> default 0; >>>> 1.1.1.1 1; >>>> 2.2.2.2 1; >>>> } >>>> >>>> and then in the download location block >>>> >>>> if ($allowed = 1) { >>>> autoindex on; >>>> } >>>> >>>> Thanks, >>>> >>>> Prajith >>>> >>>> >>>> >>>> On 18 May 2018 at 05:35, Sathish Kumar wrote: >>>> >>>> Hi Team, >>>> >>>> We have a requirement to allow directory listing from a few servers and >>>> disallow it from other IP addresses, and all IP addresses should be able to >>>> download all files inside the directory. >>>> >>>> Can somebody provide the correct nginx config for this? >>>> >>>> location / { >>>> >>>> root /downloads; >>>> >>>> autoindex on; >>>> >>>> allow 1.1.1.1; >>>> >>>> deny all; >>>> >>>> } >>>> >>>> If I use the above config, only the 1.1.1.1 IP address can list the >>>> directory and download files from this server; from other IP addresses >>>> the download shows forbidden, due to the IP address restriction. >>>> >>>> Is there a way to overcome this issue, thanks.
>>>> >>>> >>>> Thanks & Regards >>>> Sathish.V >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From janejojo at gmail.com Sun May 20 02:03:34 2018 From: janejojo at gmail.com (Jane Jojo) Date: Sat, 19 May 2018 19:03:34 -0700 Subject: Website hit returning CSS code instead of html, is this due to some caching mess up? Message-ID: Hi all, This problem is intermittent and only some of my viewers experience this. For reference here's the screencast of hitting the url via curl: https://d.pr/v/lRE2w2 and another one from a user: https://d.pr/i/uTWsst .
My website is https://www.alittlebitofspice.com/ Recently I set up reverse proxy caching and here's the relevant code: location / { proxy_pass http://127.0.0.1:8000; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Forwarded-Port 443; # If logged in, don't cache. if ($http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) { set $do_not_cache 1; } proxy_cache_key "$scheme://$host$uri"; proxy_cache staticfilecache; proxy_cache_valid 200 302 100d; add_header Cache-Control public; #proxy_hide_header "Set-Cookie"; #proxy_ignore_headers "Set-Cookie"; proxy_ignore_headers Expires; proxy_ignore_headers "Cache-Control"; proxy_ignore_headers X-Accel-Expires; proxy_hide_header "Cache-Control"; proxy_hide_header Pragma; proxy_hide_header Server; proxy_hide_header Request-Context; proxy_hide_header X-Powered-By; proxy_set_header Accept-Encoding ""; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache_bypass $arg_nocache $do_not_cache; } Am I doing something wrong here? Can someone please help me understand why this is happening? Help much appreciated. - Jane -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun May 20 08:24:21 2018 From: nginx-forum at forum.nginx.org (rickGsp) Date: Sun, 20 May 2018 04:24:21 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <20180516132714.GJ32137@mdounin.ru> References: <20180516132714.GJ32137@mdounin.ru> Message-ID: <396278e7464a5450f096eb26cc4e9736.NginxMailingListEnglish@forum.nginx.org> >>As I tried to explain in my previous message, "test runs for 60 >>seconds" can have two different meanings: 1) the load is generated >>for 60 seconds and 2) from first request started to the last >>request finished it takes 60 seconds. >>Make sure you are using the correct meaning.
Also, it might >>be a good idea to look into nginx access logs to verify both time >>and numbers reported by your tool. Yes Maxim, I had understood your point. My test actually ran for 60 to 65 seconds, which means it took 5 additional seconds to process the requests. Even the access logs say the same. Also, on a more powerful machine, I get the expected result for the same test, i.e. 500 req/sec load, but start seeing a difference at relatively higher load. It seems to me that the results also depend on the resources available on the machine running Nginx. Surprisingly, CPU was not hitting the peak on either machine. I am using CentOS systems for this testing. Actually, in another test with plain HTTP requests, I observed the same issue of more requests than expected getting processed. However, in the HTTP case, this behaviour appeared at 700 req/sec input load instead of 500 req/sec as in HTTPS. In this test requests got processed within 60 secs. With all the test results, I am being forced to think that Nginx rate limiting may not be able to stop a DDoS attack with very high input load but is decent enough to handle sudden spikes and load which is slightly higher than the configured rate limit, and the available computing power also plays some role here. Do you think I am right? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279874#msg-279874 From arcade at b1t.name Sun May 20 10:20:07 2018 From: arcade at b1t.name (Volodymyr Kostyrko) Date: Sun, 20 May 2018 13:20:07 +0300 Subject: unix sockets are not reused when restarting nginx Message-ID: <60a4bf5d-9261-e464-df24-165361019a51@b1t.name> Hello. I'm using nginx 1.14.0 on FreeBSD 11-STABLE. I'm trying to get caching for internally generated content so I'm proxying nginx to nginx: server { listen unix:/home/someuser/.media.nginx.sock; ... } This perfectly works when starting nginx initially.
However, when restarting, I sometimes get errors reopening sockets to serve them: nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) This can happen even on boot. Removing the sockets allows nginx to start. I also got this error: error.log:2018/05/07 16:07:49 [notice] 89443#0: getsockopt(TCP_FASTOPEN) unix:/home/someuser/.site.nginx.sock failed, ignored (22: Invalid argument) Thanks in advance. -- Sphinx of black quartz judge my vow. From peter_booth at me.com Sun May 20 18:45:58 2018 From: peter_booth at me.com (Peter Booth) Date: Sun, 20 May 2018 14:45:58 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <396278e7464a5450f096eb26cc4e9736.NginxMailingListEnglish@forum.nginx.org> References: <20180516132714.GJ32137@mdounin.ru> <396278e7464a5450f096eb26cc4e9736.NginxMailingListEnglish@forum.nginx.org> Message-ID: Rate limiting is a useful but crude tool that should only be one of four or five different things you do to protect your backend: 1 browser caching 2 CDN 3 rate limiting 4 nginx caching reverse proxy What are your requests? Are they static content or proxied to a back end? Do users log in? Is it valid for dynamic content built for one user to be returned to another?
Sent from my iPhone On May 20, 2018, at 4:24 AM, rickGsp wrote: >>> As I tried to explain in my previous message, "test runs for 60 >>> seconds" can have two different meanings: 1) the load is generated >>> for 60 seconds and 2) from first request started to the last >>> request finished it takes 60 seconds. > >>> Make sure you are using the correct meaning. Also, it might >>> be a good idea to look into nginx access logs to verify both time >>> and numbers reported by your tool. > > Yes Maxim, I had understood your point. My test actually ran for 60 to 65 > seconds which means it took 5 additional seconds to process the requests. > Even access logs says the same. Also, on more powerful machine, I get > expected result for the same test i.e 500 req/sec load but start seeing > difference at relatively higher load.It seems to me that a results also > depends on the resources available on the machine running Nginx. > Surprisingly, CPU was not hitting the peak on both the machines.I am using > CentOS systems for this testings. > > Actually in another test with plain HTTP requests, I observed the same issue > of more requests than expected getting processed. However, for HTTP case, > this behaviour appeared at 700 req/sec input load instead of 500 req/sec as > in HTTPS. In this test requests got processed within 60 secs. > > With all the test results, I am being forced to think that Nginx rate > limiting may not be able to stop DDoS attack with very high input load but > is decent enough to handle sudden spikes and load which is slightly higher > than configured rate limit, and computing power available also plays some > role here. Do you think I am right? 
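For what it's worth, counts above `rate * seconds` are expected whenever `limit_req` carries a `burst` allowance, as in the `limit_req zone=perhost burst=100 nodelay;` line posted later in this thread: with `nodelay`, up to 100 requests over the steady rate are admitted immediately. One way to see what actually got through is to count accepted requests per second straight from the access log. A sketch against a synthetic log (the file path, addresses, and requests below are invented for illustration):

```shell
# Build a tiny synthetic access log in the default "combined" layout.
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 - - [17/May/2018:03:46:03 +0000] "GET /service/a HTTP/1.1" 200 512 "-" "-"
10.0.0.2 - - [17/May/2018:03:46:03 +0000] "GET /service/b HTTP/1.1" 200 512 "-" "-"
10.0.0.3 - - [17/May/2018:03:46:04 +0000] "GET /service/c HTTP/1.1" 200 512 "-" "-"
10.0.0.4 - - [17/May/2018:03:46:04 +0000] "GET /service/d HTTP/1.1" 503 198 "-" "-"
EOF

# Field 4 of the combined format is "[day/month/year:hh:mm:ss", i.e. the
# timestamp truncated at whole seconds, so uniq -c yields requests per second.
grep ' 200 ' /tmp/sample_access.log | awk '{print $4}' | uniq -c
```

This prints one count per second of accepted (200) requests: 2 in the :03 second and 1 in the :04 second, with the rate-limited 503 filtered out. Note that `uniq` only collapses adjacent lines, which is fine here because access logs are already time-ordered.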
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279874#msg-279874 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From peter_booth at me.com Sun May 20 18:47:03 2018 From: peter_booth at me.com (Peter Booth) Date: Sun, 20 May 2018 14:47:03 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: References: <20180516132714.GJ32137@mdounin.ru> <396278e7464a5450f096eb26cc4e9736.NginxMailingListEnglish@forum.nginx.org> Message-ID: 5. Do you use keepalive? Sent from my iPhone > On May 20, 2018, at 2:45 PM, Peter Booth wrote: > > Rate limiting is a useful but crude tool that should only be one if four or five different things you do to protect your backend: > > 1 browser caching > 2 cDN > 3 rate limiting > 4 nginx caching reverse proxy > > What are your requests? Are they static content or proxied to a back end? > Do users login? > Is it valid for dynamic content built for one user to be returned to another? > > Sent from my iPhone > > On May 20, 2018, at 4:24 AM, rickGsp wrote: > >>>> As I tried to explain in my previous message, "test runs for 60 >>>> seconds" can have two different meanings: 1) the load is generated >>>> for 60 seconds and 2) from first request started to the last >>>> request finished it takes 60 seconds. >> >>>> Make sure you are using the correct meaning. Also, it might >>>> be a good idea to look into nginx access logs to verify both time >>>> and numbers reported by your tool. >> >> Yes Maxim, I had understood your point. My test actually ran for 60 to 65 >> seconds which means it took 5 additional seconds to process the requests. >> Even access logs says the same. Also, on more powerful machine, I get >> expected result for the same test i.e 500 req/sec load but start seeing >> difference at relatively higher load.It seems to me that a results also >> depends on the resources available on the machine running Nginx.
>> Surprisingly, CPU was not hitting the peak on both the machines.I am using >> CentOS systems for this testings. >> >> Actually in another test with plain HTTP requests, I observed the same issue >> of more requests than expected getting processed. However, for HTTP case, >> this behaviour appeared at 700 req/sec input load instead of 500 req/sec as >> in HTTPS. In this test requests got processed within 60 secs. >> >> With all the test results, I am being forced to think that Nginx rate >> limiting may not be able to stop DDoS attack with very high input load but >> is decent enough to handle sudden spikes and load which is slightly higher >> than configured rate limit, and computing power available also plays some >> role here. Do you think I am right? >> >> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279874#msg-279874 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From satcse88 at gmail.com Mon May 21 03:49:53 2018 From: satcse88 at gmail.com (Sathish Kumar) Date: Mon, 21 May 2018 11:49:53 +0800 Subject: Block countries - Nginx Message-ID: Hi All, I have a requirement to block certain countries coming to our website. I managed to achieve it using the ngx_http_geoip_module. I have a problem now: if the request comes through Amazon API Gateway, how can I read the X-Forwarded-For header and block these requests too? nginx.conf map $geoip_country_code $allow_country { default yes; SG no; } geoip_country /etc/nginx/GeoIP.dat; # the country IP database geoip_city /etc/nginx/GeoLiteCity.dat; # the city IP database domain.conf if ($allow_country = no) { return 444; } Thanks & Regards Sathish.V -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Mon May 21 05:09:04 2018 From: nginx-forum at forum.nginx.org (rickGsp) Date: Mon, 21 May 2018 01:09:04 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: References: Message-ID: <80a1fa380d8e082a5676e08d0c657b7f.NginxMailingListEnglish@forum.nginx.org> > Rate limiting is a useful but crude tool that should only be one if four or five different things you do to protect your backend: > > 1 browser caching > 2 cDN > 3 rate limiting > 4 nginx caching reverse proxy > > What are your requests? Are they static content or proxied to a back end? > Do users login? > Is it valid for dynamic content built for one user to be returned to another? I am mainly using it to do reverse proxy to the backend. >Do you use keepalive? Here is the cleaned up version of the configuration in use: # configuration file /etc/nginx/nginx.conf: user nginx; worker_processes auto; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 4096 ; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; client_header_buffer_size 64k; #tcp_nopush on; keepalive_timeout 65s; #gzip on; include /etc/nginx/conf.d/*.conf; limit_req_zone $host zone=perhost:10m rate=100r/s; limit_req zone=perhost burst=100 nodelay; upstream service_lb { server 127.0.0.1:8020; server 127.0.0.1:8021; } } worker_rlimit_nofile 10000; # configuration file /etc/nginx/conf.d/nginx_ssl.conf: server { listen 192.168.0.50:443 ssl backlog=1024; listen 127.0.0.1:443 ssl; ssl_certificate /etc/nginx/conf.d/nginx.crt; ssl_certificate_key /etc/nginx/conf.d/nginx.key; ssl_protocols TLSv1.1 TLSv1.2; ssl_ciphers 
EECDH+AESGCM:EECDH+AES256:EECDH+AES128:EECDH+AES:kRSA+AESGCM:kRSA+AES:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:!aNULL:!ADH:!eNULL:!EXP:!LOW:!DES:!3DES:!RC4:!MD5:!SEED; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:1024000; ssl_session_timeout 300; ssl_verify_client off; #charset koi8-r; access_log /var/log/nginx/access.log main; location /service/ { proxy_pass http://service_lb; break; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279879#msg-279879 From mailinglist at unix-solution.de Mon May 21 06:02:20 2018 From: mailinglist at unix-solution.de (basti) Date: Mon, 21 May 2018 08:02:20 +0200 Subject: Block countries - Nginx In-Reply-To: References: Message-ID: <21849da1-5c62-13af-e672-5b2304426304@unix-solution.de> Hello, the way used to block IPs can also be used for PTR records, I think. Also as a wildcard. On 21.05.2018 05:49, Sathish Kumar wrote: > Hi All, > > I have a requirement to block certain countries coming to our website. > I managed to achieved it using the ngx_http_geoip_module. I have a > problem now, if the request comes through Amazon API Gateway, how can I > read the X-forwarded-for header or block these request too. > > nginx.conf > map $geoip_country_code $allow_country { > default yes; > SG no; > } > > > geoip_country /etc/nginx/GeoIP.dat; # the country IP database > geoip_city /etc/nginx/GeoLiteCity.dat; # the city IP database > > > domain.conf > if ($allow_country = no) { > return 444;
> } > > Thanks & Regards > Sathish.V > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Mon May 21 12:12:44 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 May 2018 15:12:44 +0300 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <396278e7464a5450f096eb26cc4e9736.NginxMailingListEnglish@forum.nginx.org> References: <20180516132714.GJ32137@mdounin.ru> <396278e7464a5450f096eb26cc4e9736.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180521121244.GZ32137@mdounin.ru> Hello! On Sun, May 20, 2018 at 04:24:21AM -0400, rickGsp wrote: > >>As I tried to explain in my previous message, "test runs for 60 > >>seconds" can have two different meanings: 1) the load is generated > >>for 60 seconds and 2) from first request started to the last > >>request finished it takes 60 seconds. > > >>Make sure you are using the correct meaning. Also, it might > >>be a good idea to look into nginx access logs to verify both time > >>and numbers reported by your tool. > > Yes Maxim, I had understood your point. My test actually ran for 60 to 65 > seconds which means it took 5 additional seconds to process the requests. > Even access logs says the same. Also, on more powerful machine, I get > expected result for the same test i.e 500 req/sec load but start seeing > difference at relatively higher load.It seems to me that a results also > depends on the resources available on the machine running Nginx. > Surprisingly, CPU was not hitting the peak on both the machines.I am using > CentOS systems for this testings. > > Actually in another test with plain HTTP requests, I observed the same issue > of more requests than expected getting processed. However, for HTTP case, > this behaviour appeared at 700 req/sec input load instead of 500 req/sec as > in HTTPS. In this test requests got processed within 60 secs. 
> > With all the test results, I am being forced to think that Nginx rate > limiting may not be able to stop DDoS attack with very high input load but > is decent enough to handle sudden spikes and load which is slightly higher > than configured rate limit, and computing power available also plays some > role here. Do you think I am right? I'm pretty sure the problem is with your tests, not with nginx request rate limiting. Unfortunately, it is not possible to reproduce your tests and check what's going wrong as you are using proprietary software for tests. As suggested previously, it might be a good idea to verify numbers using nginx access logs. Seeing numbers of requests per second should be as trivial as grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c assuming default log format and only test requests in the log. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon May 21 13:27:35 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 May 2018 16:27:35 +0300 Subject: unix sockets are not reused when restarting nginx In-Reply-To: <60a4bf5d-9261-e464-df24-165361019a51@b1t.name> References: <60a4bf5d-9261-e464-df24-165361019a51@b1t.name> Message-ID: <20180521132735.GA32137@mdounin.ru> Hello! On Sun, May 20, 2018 at 01:20:07PM +0300, Volodymyr Kostyrko wrote: > Hello. > > I'm using nginx 1.14.0 on FreeBSD 11-STABLE. I'm trying to get caching > for internally generated content so I'm proxying nginx to nginx: > > server { > listen unix:/home/someuser/.media.nginx.sock; > > ... > } > > This perfectly works when starting nginx initially.
However when > restarting I sometimes get error reopening sockets to serve them: > > nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to > unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) > nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to > unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) > nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to > unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) > nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to > unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) > nginx.error.log:2018/05/14 02:05:30 [emerg] 3583#0: bind() to > unix:/home/someuser/.site.nginx.sock failed (48: Address already in use) > > This can happen even on boot. Removing sockets allows nginx to start. Check how you stop nginx. nginx removes unix sockets when it is stopped using the TERM and INT signals (fast shutdown), but not when it is stopped gracefully using the QUIT signal (graceful shutdown, see http://nginx.org/en/docs/control.html). This is because graceful shutdown is normally used during binary upgrade, and open listening sockets are passed to the new master process, so removing them will break things. If you are using graceful shutdown for purposes other than binary upgrade for some reason, you have to remove the listening unix sockets yourself. > I also got this error: > > error.log:2018/05/07 16:07:49 [notice] 89443#0: getsockopt(TCP_FASTOPEN) > unix:/home/someuser/.site.nginx.sock failed, ignored (22: Invalid argument) This is safe to ignore.
The following patch will hide this notice: diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -305,7 +305,9 @@ ngx_set_inherited_sockets(ngx_cycle_t *c { err = ngx_socket_errno; - if (err != NGX_EOPNOTSUPP && err != NGX_ENOPROTOOPT) { + if (err != NGX_EOPNOTSUPP && err != NGX_ENOPROTOOPT + && err != EINVAL) + { ngx_log_error(NGX_LOG_NOTICE, cycle->log, err, "getsockopt(TCP_FASTOPEN) %V failed, ignored", &ls[i].addr_text); -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon May 21 13:55:20 2018 From: nginx-forum at forum.nginx.org (rickGsp) Date: Mon, 21 May 2018 09:55:20 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <20180521121244.GZ32137@mdounin.ru> References: <20180521121244.GZ32137@mdounin.ru> Message-ID: <4afa1e1c38025068c21a456e15c3fb00.NginxMailingListEnglish@forum.nginx.org> >>I'm pretty sure the problem is with your tests, not with nginx >>request rate limiting. Unfortunately, it is not possible to >>reproduce your tests and check what's going wrong as you are using >>proprietary software for tests. >>As suggested previously, it might be a good idea to verify numbers >>using nginx access logs. Seeing numbers of requests per seconds >>should be as trivial as >>grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c >>assuming default log format and only test requests in the log. Hi Maxim, Here is a piece of the output of the following command; our success return value is 202.
grep ' 202 ' /path/to/log | awk '{print $4}' | uniq -c 232 [17/May/2018:03:46:03 171 [17/May/2018:03:46:04 101 [17/May/2018:03:46:05 124 [17/May/2018:03:46:06 169 [17/May/2018:03:46:07 105 [17/May/2018:03:46:08 5 [17/May/2018:03:46:09 1 [17/May/2018:03:46:08 218 [17/May/2018:03:46:09 104 [17/May/2018:03:46:10 269 [17/May/2018:03:46:11 130 [17/May/2018:03:46:12 97 [17/May/2018:03:46:13 96 [17/May/2018:03:46:14 124 [17/May/2018:03:46:15 248 [17/May/2018:03:46:16 237 [17/May/2018:03:46:17 126 [17/May/2018:03:46:18 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279887#msg-279887 From mikydevel at yahoo.fr Mon May 21 13:57:07 2018 From: mikydevel at yahoo.fr (Mik J) Date: Mon, 21 May 2018 13:57:07 +0000 (UTC) Subject: Reverse proxy for multiple domains In-Reply-To: <20170830175719.GB20907@daoine.org> References: <263631856.3895413.1503833225646.ref@mail.yahoo.com> <263631856.3895413.1503833225646@mail.yahoo.com> <20170830175719.GB20907@daoine.org> Message-ID: <1503514002.6187669.1526911027040@mail.yahoo.com> Hello, Sorry if I'm asking a question on the same topic again. I would like to know the best practice for setting up a web proxy. I do it like this: - 1 virtual host per application on the reverse proxy, where proxy_pass points to one IP+path - 1 virtual host (default) for all applications on the backend server, but one location stanza per application The problem is that I run into many problems when installing applications: Magento, GLPI, etc. Is this the correct way to do it? On this reverse proxy I have a virtual host which looks like this: server { listen 80; server_name application1.org; access_log /var/log/nginx/application1.org.access.log; error_log /var/log/nginx/application1.org.error.log; ... location ^~ / { proxy_pass http://10.1.1.10:80/app/application1/; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For
$proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
    }

On the web server behind the proxy I just have one virtual host, which is the default one:

server {
    listen 80 default_server;
    server_name _;
    index index.html index.htm index.php;
    root /var/www/htdocs;
    location ^~ /app/application1 {
        root /var/www;
        index index.php;
        location ~ \.php$ {
            root          /var/www;
            try_files $uri =404;
            fastcgi_pass  unix:/run/php-fpm.application1.sock;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }

On Wednesday, 30 August 2017 at 19:57:40 UTC+2, Francis Daly wrote:

On Sun, Aug 27, 2017 at 11:27:05AM +0000, Mik J via nginx wrote:

Hi there,

> > That's because the pages are called by the reverse proxy server
> > like http://10.1.1.10:80/app/application1/; and it can't use a FQDN
> > because it's in private addressing

> Francis: I don't follow that last part. => I mean that the reverse proxy uses an IP to connect to the backend web server. If it used an FQDN, it would have to resolve it through a DNS request.

The backend web server can care about the IP:port you connect to, and the Host: header you send. You can connect to 10.1.1.10:80 and send a Host: header of "app1" if you want to. No DNS resolution involved.

Anyway, it sounds like you have this part working now; so that's good.
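In config terms, that might look like this (an untested sketch; the IP and the "app1" host name are just the examples from the paragraph above):

```nginx
# Connect to the backend by IP (no DNS lookup needed)
# and send the Host header the backend application expects.
location / {
    proxy_pass       http://10.1.1.10:80;
    proxy_set_header Host app1;
}
```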
> I still have problems; the site doesn't display properly because it can't load a javascript file.
> The request for the javascript looks like that: http://application1.org/?wooslider-javascript=load&t=1503832510&ver=1.0.0 HTTP/1.1
> It arrives on the backend server; I see it in the logs (the file specified in the location stanza):
> 10.1.1.10 forwarded for IP_CLIENT - - [27/Aug/2017:13:15:12 +0200] "GET /app1/?wooslider-javascript=load&t=1503832510&ver=1.0.0 HTTP/1.1" 404 5 "http://application1.org/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0"

A request for /?some-thing came to nginx; nginx reverse-proxied the request as /app1/?same-thing. That is all you want nginx to do, so it is working.

If your back-end wordpress handles that request incorrectly, that is a question for your back-end wordpress configuration. People on this list who know about wordpress configuration are more likely to see the question if it is in a new thread with words like "wordpress" in the Subject: line.

(If the actual question is "why does my browser request /?some-thing instead of /thing.js?", that might also be related to the back-end config.)

> Another question: if I want to set an expires header, would it be better to do it on the reverse proxy or on the backend server?

Again, I'd suggest that people who know about "wordpress" and "expires" are much more likely to see that question if it is in a thread with an obvious Subject: line.

Good luck with it!

    f
--
Francis Daly        francis at daoine.org
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Mon May 21 16:44:26 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 May 2018 19:44:26 +0300 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <4afa1e1c38025068c21a456e15c3fb00.NginxMailingListEnglish@forum.nginx.org> References: <20180521121244.GZ32137@mdounin.ru> <4afa1e1c38025068c21a456e15c3fb00.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180521164426.GD32137@mdounin.ru> Hello! On Mon, May 21, 2018 at 09:55:20AM -0400, rickGsp wrote: > >>I'm pretty sure the problem is with your tests, not with nginx > >>request rate limiting. Unfortunately, it is not possible to > >>reproduce your tests and check what's going wrong as you are using > >>proprietary software for tests. > > >>As suggested previously, it might be a good idea to verify numbers > >>using nginx access logs. Seeing numbers of requests per seconds > >>should be as trivial as > > >>grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c > > >>assuming default log format and only test requests in the log. > > Hi Maxim, > > Here is a piece of output for the following command as per our success > return value as 202. > grep ' 202 ' /path/to/log | awk '{print $4}' | uniq -c > > 232 [17/May/2018:03:46:03 > 171 [17/May/2018:03:46:04 > 101 [17/May/2018:03:46:05 > 124 [17/May/2018:03:46:06 > 169 [17/May/2018:03:46:07 > 105 [17/May/2018:03:46:08 > 5 [17/May/2018:03:46:09 > 1 [17/May/2018:03:46:08 > 218 [17/May/2018:03:46:09 > 104 [17/May/2018:03:46:10 > 269 [17/May/2018:03:46:11 > 130 [17/May/2018:03:46:12 > 97 [17/May/2018:03:46:13 > 96 [17/May/2018:03:46:14 > 124 [17/May/2018:03:46:15 > 248 [17/May/2018:03:46:16 > 237 [17/May/2018:03:46:17 > 126 [17/May/2018:03:46:18 This certainly does not look right. Either there are some unrelated requests in the log, or requests are not limited as it can be expected from your configuration. 
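One caveat when reading such output (an aside, not specific to this thread): nginx worker processes append to the access log independently, so lines for the same second can be interleaved, and `uniq -c` without a preceding `sort` then reports the same timestamp several times. A minimal illustration (the timestamps are copied from the output above):

```shell
# Simulate log timestamps as two workers might interleave them
# (same shape as the '$4' field extracted by the awk command).
printf '%s\n' \
  '[17/May/2018:03:46:08' \
  '[17/May/2018:03:46:09' \
  '[17/May/2018:03:46:08' \
  '[17/May/2018:03:46:09' > /tmp/ts.txt

# uniq -c only collapses *adjacent* duplicates: four separate groups.
uniq -c /tmp/ts.txt

# Sorting first yields one aggregated count per second.
sort /tmp/ts.txt | uniq -c
```

Summing the split counts for each second gives the actual per-second totals.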
Some additional things to check:

- Make sure the $host variable you use for the limiting is not empty and does not change between requests created by your testing tool. Try logging the variable to see if it changes or not. Alternatively, replace it with a static string to see if that helps.

- Make sure there are no unrelated requests in the log. In particular, you may want to use different logs in the server{} block you are limiting and in the http{} block.

- Try another tool to see if you are able to reproduce the same effect. Something simple like "ab" or "http_load" might be a good choice.

--
Maxim Dounin
http://mdounin.ru/

From satcse88 at gmail.com Tue May 22 01:37:05 2018
From: satcse88 at gmail.com (Sathish Kumar)
Date: Tue, 22 May 2018 09:37:05 +0800
Subject: Block countries - Nginx
In-Reply-To: <21849da1-5c62-13af-e672-5b2304426304@unix-solution.de>
References: <21849da1-5c62-13af-e672-5b2304426304@unix-solution.de>
Message-ID:

Hi All,

Is there a way I can block clients that come in through the load balancer, using the nginx HTTP GeoIP module?

Currently I can only block clients that do not come through the load balancer or API gateway with the GeoIP module.

On Mon, May 21, 2018, 2:02 PM basti wrote:
> hello,
> the way to block ip's can also be used for PTR records, I think.
> Also as wildcard.
>
> On 21.05.2018 05:49, Sathish Kumar wrote:
> > Hi All,
> >
> > I have a requirement to block certain countries coming to our website.
> > I managed to achieve it using the ngx_http_geoip_module. I have a
> > problem now: if the request comes through Amazon API Gateway, how can I
> > read the X-Forwarded-For header or block these requests too?
> >
> > nginx.conf
> > map $geoip_country_code $allow_country {
> >     default yes;
> >     SG no;
> > }
> >
> > geoip_country /etc/nginx/GeoIP.dat;  # the country IP database
> > geoip_city /etc/nginx/GeoLiteCity.dat;  # the city IP database
> >
> > domain.conf
> > if ($allow_country = no) {
> >     return 444;
> > }
> >
> > Thanks & Regards
> > Sathish.V
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mailinglist at unix-solution.de Tue May 22 07:08:00 2018
From: mailinglist at unix-solution.de (basti)
Date: Tue, 22 May 2018 09:08:00 +0200
Subject: Block countries - Nginx
In-Reply-To:
References: <21849da1-5c62-13af-e672-5b2304426304@unix-solution.de>
Message-ID: <3a48e26e-7da1-cbec-b9a8-0fd323b60651@unix-solution.de>

Hello,

If you have access to the load balancer, the best way would be to block them there. That also reduces the system load on your load balancer.

On 22.05.2018 at 03:37, Sathish Kumar wrote:
> Hi All,
>
> Is there a way I can block clients that come in through the load
> balancer, using the nginx HTTP GeoIP module?
>
> Currently I can only block clients that do not come through the load
> balancer or API gateway with the GeoIP module.
>
> On Mon, May 21, 2018, 2:02 PM basti wrote:
>
> hello,
> the way to block ip's can also be used for PTR records, I think.
> Also as wildcard.
>
> On 21.05.2018 05:49, Sathish Kumar wrote:
> > Hi All,
> >
> > I have a requirement to block certain countries coming to our
> website.
> > I managed to achieve it using the ngx_http_geoip_module.
I have a
> > problem now: if the request comes through Amazon API Gateway, how
> can I
> > read the X-Forwarded-For header or block these requests too?
> >
> > nginx.conf
> > map $geoip_country_code $allow_country {
> >     default yes;
> >     SG no;
> > }
> >
> > geoip_country /etc/nginx/GeoIP.dat;  # the country IP database
> > geoip_city /etc/nginx/GeoLiteCity.dat;  # the city IP database
> >
> > domain.conf
> > if ($allow_country = no) {
> >     return 444;
> > }
> >
> > Thanks & Regards
> > Sathish.V
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From al-nginx at none.at Tue May 22 08:25:23 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Tue, 22 May 2018 10:25:23 +0200
Subject: Block countries - Nginx
In-Reply-To:
References:
Message-ID: <20180522082523.GA17080@aleks-PC>

On 21/05/2018 11:49, Sathish Kumar wrote:
> Hi All,
>
> I have a requirement to block certain countries coming to our website. I
> managed to achieve it using the ngx_http_geoip_module. I have a problem
> now: if the request comes through Amazon API Gateway, how can I read the
> X-Forwarded-For header or block these requests too?
>
> nginx.conf
> map $geoip_country_code $allow_country {
>     default yes;
>     SG no;
> }
>
> geoip_country /etc/nginx/GeoIP.dat;  # the country IP database
> geoip_city /etc/nginx/GeoLiteCity.dat;  # the city IP database
>
> domain.conf
> if ($allow_country = no) {
>     return 444;
> }

You can try to use $http_x_forwarded_for in the map. I think this blog post could point you in the right direction.
https://serversforhackers.com/c/nginx-mapping-headers

> Thanks & Regards
> Sathish.V

Best Regards
aleks

From gfrankliu at gmail.com Tue May 22 08:45:09 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Tue, 22 May 2018 01:45:09 -0700
Subject: Block countries - Nginx
In-Reply-To:
References: <21849da1-5c62-13af-e672-5b2304426304@unix-solution.de>
Message-ID:

Instead of the default nginx geoip module, I suggest you switch to the third-party geoip2 module, for two reasons: 1) MaxMind deprecated the GeoIP legacy databases; 2) the geoip2 module can do what you want, since its geo lookup can be based on any variable, such as $http_x_forwarded_for.

Frank

On Mon, May 21, 2018 at 6:37 PM Sathish Kumar wrote:

> Hi All,
>
> Is there a way I can block clients that come in through the load
> balancer, using the nginx HTTP GeoIP module?
>
> Currently I can only block clients that do not come through the load
> balancer or API gateway with the GeoIP module.
>
> On Mon, May 21, 2018, 2:02 PM basti wrote:
>
>> hello,
>> the way to block ip's can also be used for PTR records, I think.
>> Also as wildcard.
>>
>> On 21.05.2018 05:49, Sathish Kumar wrote:
>> > Hi All,
>> >
>> > I have a requirement to block certain countries coming to our website.
>> > I managed to achieve it using the ngx_http_geoip_module. I have a
>> > problem now: if the request comes through Amazon API Gateway, how can I
>> > read the X-Forwarded-For header or block these requests too?
>> >
>> > nginx.conf
>> > map $geoip_country_code $allow_country {
>> >     default yes;
>> >     SG no;
>> > }
>> >
>> > geoip_country /etc/nginx/GeoIP.dat;  # the country IP database
>> > geoip_city /etc/nginx/GeoLiteCity.dat;  # the city IP database
>> >
>> > domain.conf
>> > if ($allow_country = no) {
>> >     return 444;
>> > }
>> >
>> > Thanks & Regards
>> > Sathish.V
>> >
>> > _______________________________________________
>> > nginx mailing list
>> > nginx at nginx.org
>> > http://mailman.nginx.org/mailman/listinfo/nginx
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From al-nginx at none.at Tue May 22 12:03:55 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Tue, 22 May 2018 14:03:55 +0200
Subject: POST redirection with NGINX
In-Reply-To: <0bb29ef9cae54a4d22a6997796513f73.NginxMailingListEnglish@forum.nginx.org>
References: <0bb29ef9cae54a4d22a6997796513f73.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180522120354.GA11492@aleks-PC>

Hi Pedro

On 18/05/2018 16:02, pedrobrigatto wrote:
> Hi guys,
>
> The base name of a web application has changed, and now I need to implement a
> redirection of POST requests so that clients still using the old
> base path are not affected by this modification. So, let's say the old path
> to a web service is https://ip-address/old-name/rest/mymethod and now it is
> going to be https://ip-address/new-name/rest/mymethod
>
> I tried both return 307 and rewrite rules, but nothing has worked so far.
> Can you please give me a hand with this?

This will not work without proper POST handling.

https://duckduckgo.com/?q=post+redirect+data

Why not use proxy_pass with the new name?
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

Untested:

location /old-name/rest/mymethod {
    proxy_pass https://ip-address/new-name/rest/mymethod;
}

> Thank you very much in advance!
>
> Best regards,
> Pedro

Best regards
Aleks

> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279868,279868#msg-279868
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Tue May 22 13:22:44 2018
From: nginx-forum at forum.nginx.org (satishkori)
Date: Tue, 22 May 2018 09:22:44 -0400
Subject: Nginx chunked response
Message-ID: <004cae4ff39e33e3408aeb9fbf6e1ae9.NginxMailingListEnglish@forum.nginx.org>

Nginx sometimes does not serve the whole response, only the first chunk. We don't see this behaviour all the time. Below is our configuration:

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    location "/" {
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
    }

When I directly invoke the target endpoint, it works fine.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279901,279901#msg-279901

From mdounin at mdounin.ru Tue May 22 13:51:59 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 May 2018 16:51:59 +0300
Subject: Nginx chunked response
In-Reply-To: <004cae4ff39e33e3408aeb9fbf6e1ae9.NginxMailingListEnglish@forum.nginx.org>
References: <004cae4ff39e33e3408aeb9fbf6e1ae9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180522135159.GG32137@mdounin.ru>

Hello!

On Tue, May 22, 2018 at 09:22:44AM -0400, satishkori wrote:

> Nginx sometimes does not serve the whole response, only the first chunk. We
> don't see this behaviour all the time. Below is our configuration.
>
> http {
>     include mime.types;
>     default_type application/octet-stream;
>     sendfile on;
>     keepalive_timeout 65;
>
>     location "/" {
>         proxy_connect_timeout 300;
>         proxy_send_timeout 300;
>         proxy_read_timeout 300;
>         send_timeout 300;
>     }
>
> When I directly invoke the target endpoint, it works fine.

The configuration as shown is clearly invalid. Anyway, there are some things you may want to try:

- Try looking into the error log. In most cases, it contains details on what goes wrong.

- Try to enable debug logging, see http://nginx.org/en/docs/debugging_log.html. It contains details on all operations done by nginx, and can be used to debug various problems.

- Make sure your backend properly works via HTTP/1.0, and/or try using "proxy_http_version 1.1" (if your backend is expected to return chunked responses, it might be confused by HTTP/1.0).

--
Maxim Dounin
http://mdounin.ru/

From ente.trompete at protonmail.com Tue May 22 15:59:01 2018
From: ente.trompete at protonmail.com (SW@EU)
Date: Tue, 22 May 2018 11:59:01 -0400
Subject: how are port number in $host handled if I specify $host:
Message-ID: <_KnQAD-xykoYtPv3xHh2sj_w3vXoNKK7b7xBEzegptPVQwMQfc1URB6kvLQ0m8sZBkc4fxKP5ojbMKgOx-XCRqSelWXa3sZDxdBgIo2-0kw=@protonmail.com>

Hi,

If I read the ngx_http_proxy_module documentation, I find, e.g., a possible header rewrite like this:

proxy_set_header Host $host:$proxy_port;

But what happens here if $host already contains a port number, because the server does not listen on a default port? Say the server is listening on port 8080 but $proxy_port is 8081. Is the header then "Host: $hostname:8080:8081", or does nginx automatically remove the port part from $host when a port is specified separately? The same question for $proxy_host, when $proxy_host already contains a port number.

Unfortunately, I can't find any information about variable handling in nginx :-(.

TIA, SW

Sent with [ProtonMail](https://protonmail.com) Secure Email.
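A quick way to check what each variable actually contains is to log them side by side (a debugging sketch; the port and log path are arbitrary examples). For what it's worth, $host is derived from the request line or the Host header with any port stripped, while $http_host is the raw header as the client sent it:

```nginx
# Compare the candidate variables on a non-default port;
# $host should appear without the port, $http_host with it.
log_format hostcheck '$host | $http_host | $server_port';

server {
    listen 8080;
    access_log /var/log/nginx/hostcheck.log hostcheck;

    location / {
        return 200 "ok\n";
    }
}
```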
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue May 22 16:37:18 2018
From: nginx-forum at forum.nginx.org (rickGsp)
Date: Tue, 22 May 2018 12:37:18 -0400
Subject: Nginx Rate limiting for HTTPS requests
In-Reply-To: <20180521164426.GD32137@mdounin.ru>
References: <20180521164426.GD32137@mdounin.ru>
Message-ID: <2437715171cdd038e16963590befef7e.NginxMailingListEnglish@forum.nginx.org>

>> - Make sure the $host variable you use for the limiting is not
>> empty and not changed between requests created by your testing
>> tool. Try logging the variable to see if it changes or not.
>> Alternatively, replace it with a static string to see if it helps.

Checked. The $host variable is set for all the requests.

>> - Make sure there are no unrelated requests in the log. In
>> particular, you may want to use different logs in the server{}
>> block you are limiting and in the http{} block.

Checked. There are no unrelated requests in the log.

>> - Try another tool to see if you are able to reproduce the same
>> effect. Something simple like "ab" or "http_load" might be a
>> good choice.

Checked with "ab" as follows (concurrency 700):

ab -n 20000 -c 700 https://9.0.0.10:443/test.html

Here is a piece of the output. As per the report, the test ran for approx. 50 seconds, and 20000 - 14722 = 5278 requests returned with success. This is as expected for rate limiting at 100r/s over a 50-second test. Notice that the mean requests processed per second is 396.
Concurrency Level:      700
Time taken for tests:   50.437 seconds
Complete requests:      20000
Failed requests:        14722
Requests per second:    396.53 [#/sec] (mean)

The access log report for this test, as per the following command, seems to be fine: grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c

111 [22/May/2018:15:35:04
101 [22/May/2018:15:35:05
 95 [22/May/2018:15:35:06
 98 [22/May/2018:15:35:07
 97 [22/May/2018:15:35:08
106 [22/May/2018:15:35:09
 95 [22/May/2018:15:35:10
 99 [22/May/2018:15:35:11
104 [22/May/2018:15:35:12
106 [22/May/2018:15:35:13

In another test, I ran two instances of "ab" in parallel with the same configuration; the output follows. This is again an approx. 50-second test. Combining both reports, (20000+20000) - (9344+10239) = 20417 requests returned with success. This is four times the expected 5000 requests for the test duration. I would like to understand this behaviour. I guess this is happening in my tests as well; in my case I just keep pushing requests without waiting for responses.
First instance:
Concurrency Level:      700
Time taken for tests:   46.944 seconds
Complete requests:      20000
Failed requests:        9344
Requests per second:    426.04 [#/sec] (mean)

Second instance:
Concurrency Level:      700
Time taken for tests:   53.344 seconds
Complete requests:      20000
Failed requests:        10239
Requests per second:    374.92 [#/sec] (mean)

The access log report for this test, as per the following command, does not seem to be fine: grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c

180 [22/May/2018:15:52:59
276 [22/May/2018:15:53:00
 33 [22/May/2018:15:53:01
 20 [22/May/2018:15:53:00
 70 [22/May/2018:15:53:01
  1 [22/May/2018:15:53:00
181 [22/May/2018:15:53:01
 16 [22/May/2018:15:53:02
  2 [22/May/2018:15:53:01
 99 [22/May/2018:15:53:02
  1 [22/May/2018:15:53:01
177 [22/May/2018:15:53:02
329 [22/May/2018:15:53:03
  8 [22/May/2018:15:53:02

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279908#msg-279908

From mdounin at mdounin.ru Tue May 22 18:01:01 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 May 2018 21:01:01 +0300
Subject: Nginx Rate limiting for HTTPS requests
In-Reply-To: <2437715171cdd038e16963590befef7e.NginxMailingListEnglish@forum.nginx.org>
References: <20180521164426.GD32137@mdounin.ru> <2437715171cdd038e16963590befef7e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180522180101.GK32137@mdounin.ru>

Hello!

On Tue, May 22, 2018 at 12:37:18PM -0400, rickGsp wrote:

> >> - Make sure the $host variable you use for the limiting is not
> >> empty and not changed between requests created by your testing
> >> tool. Try logging the variable to see if it changes or not.
> >> Alternatively, replace it with a static string to see if it helps.
>
> Checked. The $host variable is set for all the requests.
>
> >> - Make sure there are no unrelated requests in the log. In
> >> particular, you may want to use different logs in the server{}
> >> block you are limiting and in the http{} block.
>
> Checked. There are no unrelated requests in the log.
> > >>- Try another tool to see if you are able to reproduce the same > >>effect. Something simple like "ab" or "http_load" might be a > >>good choice. > > Checked with "ab" with as following (concurrency 700 requests); > ab -n 20000 -c 700 https://9.0.0.10:443/test.html > > Here is the piece of output. As per the report test ran for approx. 50 > seconds and 20000-14622 = 5278 requests returned with success. This is as > expected as per rate limiting at 100r/s for 50 seconds test. Notice that > Mean requests processed per second is 396. > > Concurrency Level: 700 > Time taken for tests: 50.437 seconds > Complete requests: 20000 > Failed requests: 14722 > Requests per second: 396.53 [#/sec] (mean) > > Access log report for this test as per the following command seems to be > fine: grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c > 111 [22/May/2018:15:35:04 > 101 [22/May/2018:15:35:05 > 95 [22/May/2018:15:35:06 > 98 [22/May/2018:15:35:07 > 97 [22/May/2018:15:35:08 > 106 [22/May/2018:15:35:09 > 95 [22/May/2018:15:35:10 > 99 [22/May/2018:15:35:11 > 104 [22/May/2018:15:35:12 > 106 [22/May/2018:15:35:13 > > > In another test, I ran two instances of "ab" in parallel with same > configuration and following is the output.This is again approx. 50 seconds > test. By combining both the reports (20000+20000) - (9344+10239) = 20417 > requests returned with success. This is four times of expected 5000 > requests/sec rate. I would like to understand this behaviour. I guess this > is happening in my tests as well. In my case I just keep pushing requests > without waiting for response. 
> > First instance: > Concurrency Level: 700 > Time taken for tests: 46.944 seconds > Complete requests: 20000 > Failed requests: 9344 > Requests per second: 426.04 [#/sec] (mean) > > Second Instance: > Concurrency Level: 700 > Time taken for tests: 53.344 seconds > Complete requests: 20000 > Failed requests: 10239 > Requests per second: 374.92 [#/sec] (mean) > > > Access log report for this test as per the following command does not seem > to be fine: grep ' 200 ' /path/to/log | awk '{print $4}' | uniq -c > 180 [22/May/2018:15:52:59 > 276 [22/May/2018:15:53:00 > 33 [22/May/2018:15:53:01 > 20 [22/May/2018:15:53:00 > 70 [22/May/2018:15:53:01 > 1 [22/May/2018:15:53:00 > 181 [22/May/2018:15:53:01 > 16 [22/May/2018:15:53:02 > 2 [22/May/2018:15:53:01 > 99 [22/May/2018:15:53:02 > 1 [22/May/2018:15:53:01 > 177 [22/May/2018:15:53:02 > 329 [22/May/2018:15:53:03 > 8 [22/May/2018:15:53:02 Please show "uname -a", "nginx -V", and "ps -alxww | grep nginx" output. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue May 22 20:00:42 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 22 May 2018 21:00:42 +0100 Subject: Reverse proxy for multiple domains In-Reply-To: <1503514002.6187669.1526911027040@mail.yahoo.com> References: <263631856.3895413.1503833225646.ref@mail.yahoo.com> <263631856.3895413.1503833225646@mail.yahoo.com> <20170830175719.GB20907@daoine.org> <1503514002.6187669.1526911027040@mail.yahoo.com> Message-ID: <20180522200042.GO19311@daoine.org> On Mon, May 21, 2018 at 01:57:07PM +0000, Mik J via nginx wrote: Hi there, > I would like to know what is the best practice to setup a web proxy. 
>
> I do it like this
> - 1 virtual host per application on the reverse proxy, whose proxy_pass points to one IP + path
> - 1 virtual host (the default one) for all applications on the backend server, with one location stanza per application
>
> The problem is that I run into many problems when installing applications: magento, glpi, etc

If the problem is *installing* the applications, that might be a question for the application list. If the problem is *reverse-proxying* the applications, that might be a question for the nginx list. It is good to be clear about what the specific problem you are seeing is.

> Is it the correct way to do it?

It is usually easiest if the front-end /prefix and the back-end /prefix are identical.

So if the back-end application is happy being installed at /application1/, then the front-end should reverse-proxy from frontend/application1/ to upstream1/application1/. In that case, multiple applications could all be on the same frontend server{}, or on different ones. If different ones, then it can redirect from / to /application1/ if that is simplest.

If the back-end application insists on being installed at /, then the front-end should reverse-proxy from frontend/ to upstream2/. In that case, you will probably need multiple frontend server{}s; one for each similar application.

> location ^~ / {
>     proxy_pass        http://10.1.1.10:80/app/application1/;

"/" to "/app/application1/" is possible, but it is easy for things to go wrong. For example: if the application returns a link to /app/application1/file, the next request to the upstream might be to /app/application1/app/application1/file, which may not work as desired.

> server {
>     listen 80 default_server;

This config looks generally right, if it is the correct way to install this application...

>     server_name _;
>     index index.html index.htm index.php;
>     root /var/www/htdocs;
>     location ^~ /app/application1 {
>         root /var/www;
>         index index.php;
>         location ~ \.php$ {

Note, though, that:

>             root
? /var/www; > try_files $uri =404; those two lines... > fastcgi_pass? unix:/run/php-fpm.application1.sock; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_index? index.php; and those two lines, probably do not do anything useful here. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed May 23 04:58:53 2018 From: nginx-forum at forum.nginx.org (rickGsp) Date: Wed, 23 May 2018 00:58:53 -0400 Subject: Nginx Rate limiting for HTTPS requests In-Reply-To: <20180522180101.GK32137@mdounin.ru> References: <20180522180101.GK32137@mdounin.ru> Message-ID: <89f7ae32db2a23457baf34d8ce2b6e3f.NginxMailingListEnglish@forum.nginx.org> >>Please show "uname -a", "nginx -V", and "ps -alxww | grep nginx" >>output. #uname -a Linux localhost.localdomain 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux #nginx -V nginx version: nginx/1.14.0 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module 
--with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' #ps -alxww | grep nginx 5 0 9613 1 20 0 48516 1352 sigsus Ss ? 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf 5 996 9614 9613 20 0 53204 7892 ep_pol S ? 1:35 nginx: worker process 5 996 9615 9613 20 0 53800 8476 ep_pol S ? 1:37 nginx: worker process 5 996 9616 9613 20 0 54888 9648 ep_pol S ? 1:56 nginx: worker process 5 996 9617 9613 20 0 53008 7696 ep_pol S ? 2:22 nginx: worker process 5 996 9618 9613 20 0 53452 8140 ep_pol S ? 2:12 nginx: worker process 5 996 9619 9613 20 0 55036 9712 ep_pol S ? 2:14 nginx: worker process 5 996 9620 9613 20 0 58700 13484 ep_pol S ? 2:18 nginx: worker process 5 996 9621 9613 20 0 55532 10316 ep_pol S ? 2:20 nginx: worker process 5 996 9622 9613 20 0 53504 8300 ep_pol S ? 2:18 nginx: worker process 5 996 9623 9613 20 0 53204 7892 ep_pol S ? 2:12 nginx: worker process 5 996 9624 9613 20 0 52196 6992 ep_pol S ? 2:32 nginx: worker process 5 996 9625 9613 20 0 57164 11944 ep_pol S ? 
2:24 nginx: worker process
0     0 26753 26580  20   0 112648   964 pipe_w S+   pts/0      0:00 grep --color=auto nginx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279802,279913#msg-279913

From nginx-forum at forum.nginx.org Wed May 23 07:42:26 2018
From: nginx-forum at forum.nginx.org (kunaldas)
Date: Wed, 23 May 2018 03:42:26 -0400
Subject: Nginx is not logging QUIT signal handling logs
Message-ID: <61b8d6619f525248f3e41cc240e9bf58.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am sending the QUIT signal to the master process, and it does terminate nginx, but I am not getting any logs with which to verify a graceful exit. I am using the debug log level in the http context, as below:

error_log /etc/nginx/logs/ferror.log debug;

I send the signal to the nginx master process with:

kill -QUIT $( cat /usr/local/nginx/logs/nginx.pid )

Can anybody please let me know how to get the signal handling logs?

Thanks,
Kunal

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279914,279914#msg-279914

From nginx-forum at forum.nginx.org Wed May 23 09:33:51 2018
From: nginx-forum at forum.nginx.org (isolomka)
Date: Wed, 23 May 2018 05:33:51 -0400
Subject: Nginx thread pool is not working in 1.14 in custom module
Message-ID: <7e9e6c006538cd060c8a9b5d83215f5f.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have a custom nginx module which uses a thread pool to serve blocking synchronous calls to our library. It worked fine with nginx version 1.12.1.

Now we've tried to upgrade nginx to the latest 1.14 version, and it seems the thread pool is not working with that version. After some debugging we've found that the issue is in this commit:

https://github.com/nginx/nginx/commit/d1d48ed8448e24ef5297bb37387544ad241591fe

For some reason, it removed the check whether the request is blocked (line 2452). As a result, the request is closed before the task in the thread pool is done.
Nginx crashes with a segmentation fault when it tries to execute the task handler (the request is closed and the pool is destroyed):

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00005610e5df62b7 in ngx_palloc (pool=0x0, size=80) at src/core/ngx_palloc.c:126
126         if (size <= pool->max) {
[Current thread is 1 (Thread 0x7f95ce2f3700 (LWP 23159))]
(gdb) bt
#0  0x00005610e5df62b7 in ngx_palloc (pool=0x0, size=80) at src/core/ngx_palloc.c:126

Does that mean that using a thread pool in custom modules is no longer supported? Does any workaround exist to fix it?

Thank you in advance,
Ihor

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279917,279917#msg-279917

From mdounin at mdounin.ru Wed May 23 12:33:24 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 May 2018 15:33:24 +0300
Subject: Nginx thread pool is not working in 1.14 in custom module
In-Reply-To: <7e9e6c006538cd060c8a9b5d83215f5f.NginxMailingListEnglish@forum.nginx.org>
References: <7e9e6c006538cd060c8a9b5d83215f5f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180523123323.GN32137@mdounin.ru>

Hello!

On Wed, May 23, 2018 at 05:33:51AM -0400, isolomka wrote:

> Hi,
> I have a custom nginx module which uses a thread pool to serve blocking
> synchronous calls to our library.
> It worked fine with nginx version 1.12.1.
> 
> Now we've tried to upgrade nginx to the latest 1.14 version and it seems
> the thread pool is not working with that version.
> 
> After some debugging we've found that the issue is in this commit:
> https://github.com/nginx/nginx/commit/d1d48ed8448e24ef5297bb37387544ad241591fe
> 
> For some reason, it removed the validation of whether the request is
> blocked (line 2452). As a result, the request is closed before the task in
> the thread pool is done. Nginx crashes with a segmentation fault when it
> tries to execute the task handler (the request is closed and the pool is
> destroyed):
> 
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x00005610e5df62b7 in ngx_palloc (pool=0x0, size=80) at
> src/core/ngx_palloc.c:126
> 126         if (size <= pool->max) {
> [Current thread is 1 (Thread 0x7f95ce2f3700 (LWP 23159))]
> (gdb) bt
> #0  0x00005610e5df62b7 in ngx_palloc (pool=0x0, size=80) at
> src/core/ngx_palloc.c:126
> 
> Does that mean that using a thread pool in custom modules is no longer
> supported?
> Does any workaround exist to fix it?

The r->blocked check now resides in ngx_http_terminate_request(), the only way a request can be closed regardless of its reference counting. If you see problems in your code introduced by the commit in question, it might indicate there is something wrong with request reference counting in your code.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Wed May 23 13:08:08 2018
From: nginx-forum at forum.nginx.org (isolomka)
Date: Wed, 23 May 2018 09:08:08 -0400
Subject: Nginx thread pool is not working in 1.14 in custom module
In-Reply-To: <20180523123323.GN32137@mdounin.ru>
References: <20180523123323.GN32137@mdounin.ru>
Message-ID: <69464b2abb3132bc7ab33b7985c25c9e.NginxMailingListEnglish@forum.nginx.org>

Thanks for the response.

The main issue is that the request is now closed before the actual task is done in the thread pool. How can I avoid that? It worked fine before the upgrade. What is the correct thread pool usage in a custom module in 1.14?

Here is my request handler for reference:

static ngx_int_t ngx_http_thread_handler(ngx_http_request_t* r)
{
    //...
    // Add handler (blocking handler)
    task->handler = ngx_http_cgpi_task_handler;
    // Init event
    task->event.data = taskCtx;
    task->event.handler = ngx_http_cgpi_task_done_cb;

    // Try to get the pool to put task
    ngx_thread_pool_t* tp = clcf->thread_pool;

    if (tp == NULL)
    {
        // Create pool if not exists
        if (ngx_http_complex_value(r, clcf->thread_pool_value, &name) != NGX_OK)
        {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                "ngx_http_complex_value \"%V\" failed", &name);
            return NGX_ERROR;
        }
        tp = ngx_thread_pool_get((ngx_cycle_t* ) ngx_cycle, &name);
        if (tp == NULL)
        {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                "thread pool \"%V\" not found", &name);
            return NGX_ERROR;
        }
    }

    // Put the task into thread pool
    if (ngx_thread_task_post(tp, task) != NGX_OK)
    {
        return NGX_ERROR;
    }
    // Make the request blocked
    r->main->blocked++;
    r->aio = 1;

    return NGX_AGAIN;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279917,279920#msg-279920

From michael.friscia at yale.edu Wed May 23 13:37:17 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Wed, 23 May 2018 13:37:17 +0000
Subject: "This page isn't working" error
Message-ID: <4BE6ED6A-ABB0-4957-8C00-D2CF8F3D1976@yale.edu>

I wonder if anyone knows how to debug this.

I have two URLs:
Working
https://www.yalemedicine.org/doctors/marcus_bosenberg/
not working
https://www.yalemedicine.org/doctors/antonio_subtil/

From the Nginx configuration side, these go through the same configuration. If I go to the upstream server, both URLs work for the "non-Nginx" version of the pages. The problem I have is that I can't seem to get an error in the logs from Nginx since I am just getting the error "This page isn't working" and as a result there are no useful headers or information being passed.

Any thoughts/help would be appreciated on what I need to do to get some sort of logged error from Nginx to give me a clue what is wrong with this page.
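When a browser only reports "This page isn't working", the status line and headers are usually still recoverable on the wire. A minimal sketch with curl (using the failing URL from the post above; the flags are standard curl options, nothing site-specific):

```shell
# Print just the HTTP status code for the failing page
# (URL from the post above; substitute your own).
URL="https://www.yalemedicine.org/doctors/antonio_subtil/"
curl -s -o /dev/null -w '%{http_code}\n' "$URL"

# Or dump the full response headers for inspection:
curl -sv -o /dev/null "$URL" 2>&1 | grep '^<'
```

If nginx closes the connection without sending any response (e.g. return 444), curl prints status 000 and reports "Empty reply from server", which is itself a useful clue that no response was ever produced.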
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michael.friscia at yale.edu Wed May 23 14:20:48 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Wed, 23 May 2018 14:20:48 +0000
Subject: "This page isn't working" error
In-Reply-To: <4BE6ED6A-ABB0-4957-8C00-D2CF8F3D1976@yale.edu>
References: <4BE6ED6A-ABB0-4957-8C00-D2CF8F3D1976@yale.edu>
Message-ID: <74F3DDFA-ABFA-4FDA-B02F-FF606BDCFDE8@yale.edu>

Never mind, I had an error in a config file that was forcing a 444 response based on a regex that accidentally matched the second URL...

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

From: nginx on behalf of Michael Friscia
Reply-To: "nginx at nginx.org"
Date: Wednesday, May 23, 2018 at 9:37 AM
To: "nginx at nginx.org"
Subject: "This page isn't working" error

I wonder if anyone knows how to debug this.

I have two URLs:
Working
https://www.yalemedicine.org/doctors/marcus_bosenberg/
not working
https://www.yalemedicine.org/doctors/antonio_subtil/

From the Nginx configuration side, these go through the same configuration. If I go to the upstream server, both URLs work for the "non-Nginx" version of the pages. The problem I have is that I can't seem to get an error in the logs from Nginx since I am just getting the error "This page isn't working" and as a result there are no useful headers or information being passed. Any thoughts/help would be appreciated on what I need to do to get some sort of logged error from Nginx to give me a clue what is wrong with this page.
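The failure mode described above is easy to reproduce. A hypothetical sketch (illustrative only, not the actual config from this thread) of an over-broad location regex that silently drops one of the two URLs while the other keeps working:

```nginx
# Hypothetical illustration -- not the actual config from this thread.
# The pattern matches /doctors/antonio_subtil/ but not
# /doctors/marcus_bosenberg/, and "return 444" closes the connection
# without sending a response, which browsers surface as
# "This page isn't working".
location ~* subtil {
    return 444;
}
```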
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Wed May 23 14:28:14 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 May 2018 17:28:14 +0300
Subject: Nginx thread pool is not working in 1.14 in custom module
In-Reply-To: <69464b2abb3132bc7ab33b7985c25c9e.NginxMailingListEnglish@forum.nginx.org>
References: <20180523123323.GN32137@mdounin.ru> <69464b2abb3132bc7ab33b7985c25c9e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180523142814.GP32137@mdounin.ru>

Hello!

On Wed, May 23, 2018 at 09:08:08AM -0400, isolomka wrote:

> Thanks for the response.
> The main issue is that the request is now closed before the actual task is
> done in the thread pool.
> How can I avoid that?
> It worked fine before the upgrade.
> What is the correct thread pool usage in a custom module in 1.14?
> 
> Here is my request handler for reference:
> static ngx_int_t ngx_http_thread_handler(ngx_http_request_t* r)
> {
> //...
> 
> // Add handler (blocking handler)
> task->handler = ngx_http_cgpi_task_handler;
> // Init event
> task->event.data = taskCtx;
> task->event.handler = ngx_http_cgpi_task_done_cb;
> 
> // Try to get the pool to put task
> ngx_thread_pool_t* tp = clcf->thread_pool;
> 
> if (tp == NULL)
> {
> // Create pool if not exists
> if (ngx_http_complex_value(r, clcf->thread_pool_value, &name) != NGX_OK)
> {
> ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
> "ngx_http_complex_value \"%V\" failed", &name);
> return NGX_ERROR;
> }
> tp = ngx_thread_pool_get((ngx_cycle_t* ) ngx_cycle, &name);
> if (tp == NULL)
> {
> ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "thread pool \"%V\"
> not found", &name);
> return NGX_ERROR;
> }
> }
> 
> // Put the task into thread pool
> if (ngx_thread_task_post(tp, task) != NGX_OK)
> {
> return NGX_ERROR;
> }
> // Make the request blocked
> r->main->blocked++;
> r->aio = 1;
> 
> return NGX_AGAIN;
> }

The code returns control to the caller without incrementing r->main->count. As such, the request is expected to be complete and will be closed. This is incorrect, and will cause various problems including in previous versions - e.g., expect a similar segmentation fault if a write event happens on the connection.

To fix things you should increment r->main->count and return NGX_DONE, and then call ngx_http_finalize_request() when your external processing is complete, much like with normal non-threaded external processing.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Wed May 23 14:31:07 2018
From: nginx-forum at forum.nginx.org (mdm100)
Date: Wed, 23 May 2018 10:31:07 -0400
Subject: 403 Forbidden
Message-ID: <8f623296855bb666c27aa75e4cc86b3a.NginxMailingListEnglish@forum.nginx.org>

403 Forbidden
nginx/1.12.1 (Ubuntu)

Distributor ID: Ubuntu
Description:    Ubuntu 17.10
Release:        17.10
Codename:       artful

I have many virtual sites running on my server identical to this one, but for some reason I have run into the 403 wall.
Permissions for my virtual website dir:

sudo chown -R www-data:www-data /var/www/html/C1/
sudo chmod -R 755 /var/www/html/C1/

results

drwxrwxrwx 3 www-data www-data 4096 May 23 12:37 C1

The virtual directory is as follows, and is linked to:

sudo ln -s /etc/nginx/sites-available/c1 /etc/nginx/sites-enabled/

server {
    listen 80;
    listen [::]:80;

    root /var/www/html/C1;
    index index.html index.php index.htm;

    server_name c1inventory.xxxxxxx.com;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Dump from the nginx error log:

2018/05/23 14:15:49 [error] 1530#1530: *1 directory index of "/var/www/html/C1/" is forbidden, client: 67.127.276.257, server: c1inventory.xxxxxx.com, request: "GET / HTTP/1.1", host: "c1inventory.xxxxxx.com"

Any help is greatly appreciated. Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279925,279925#msg-279925

From al-nginx at none.at Wed May 23 14:43:57 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Wed, 23 May 2018 16:43:57 +0200
Subject: "This page isn't working" error
In-Reply-To: <4BE6ED6A-ABB0-4957-8C00-D2CF8F3D1976@yale.edu>
References: <4BE6ED6A-ABB0-4957-8C00-D2CF8F3D1976@yale.edu>
Message-ID: <20180523144357.GA8604@aleks-PC>

Hi.

On 23/05/2018 13:37, Friscia, Michael wrote:
>I wonder if anyone knows how to debug this.
>
>I have two URLs:
>Working
>https://www.yalemedicine.org/doctors/marcus_bosenberg/
>not working
>https://www.yalemedicine.org/doctors/antonio_subtil/

Looks like your SDL Component does not exist for this URL?

```
# curl -v https://www.yalemedicine.org/doctors/antonio_subtil/ -o lala.html
...
{ [5 bytes data]
< HTTP/1.1 404 Not Found
< Server: nginx
< Date: Wed, 23 May 2018 14:34:55 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Cache-Control: public, max-age=3600
< Expires: Wed, 23 May 2018 15:06:17 GMT
< Last-Modified: Wed, 23 May 2018 14:06:17 GMT
< X-Secured-Page: false
< access-control-allow-origin: *
< X-ID: 5623a9b5930d9c6cc5c62fbd7d35758b
< X-Proxy: ysm-nginx-prod14
< X-ProxyKey: httpswww.yalemedicine.org/doctors/antonio_subtil/
< X-ProxyKeyAccept: httpswww.yalemedicine.org/doctors/antonio_subtil/*/*
< X-NoCache: 0 (1=bypass/0=cache delivery)
< X-UpstreamCacheStatus: STALE
< X-RemoteAddr: 195.90.20.201
< X-Origin-Forwarded-For: 195.90.20.201
< { [15694 bytes data]
100 33286    0 33286    0     0   8051      0 --:--:--  0:00:04 --:--:--  8051
* Connection #0 to host www.yalemedicine.org left intact
```

```
....
```

This is in the working one.

```
...